Everything I’ve Read and Watched on AI 2

  1. Transhumanism: How Christianity Fulfills Our Deepest Desires | John Lennox
  2. How should Christians think about artificial intelligence?
  3. Michael Reeves: Christ the Image of God
  4. OpenAI CEO Sam Altman: “If this technology goes wrong, it can go quite wrong.”
    1. AI will change every job
    2. Nutrition labels
    3. Worst fear
  5. The Brain’s Learning Algorithm Isn’t Backpropagation
  6. Credit Assignment Problem
    1. Problems with Backprop
    2. Foundations of Predictive Coding
    3. Energy Formalism
    4. Activity Update Rule
    5. Neural Connectivity
    6. Weight Update Rule
    8. Putting it all together
    8. Brilliant
    9. Outro
  7. Should we be worried about Artificial Intelligence?
  8. FULL DISCUSSION: Google’s Demis Hassabis, Anthropic’s Dario Amodei Debate the World After AGI | AI1G
  9. Oxford Professor: AI Is Humanity’s Attempt to Make God — John Lennox
    1. How to Know Scripture Is God-Breathed
    2. Why Scientific Explanations Don’t Exclude God
    3. Why Style Is As Important As Substance
    4. How to Balance Argument and Self-Criticism
    5. How Narrow vs General AI Create Different Risks
    6. Why the AI Race Is About Making God
    7. How AI Changes What It Means to Be Human
    8. How Christians Can Lead in AI Ethics
  10. AI, Man & God | Prof. John Lennox
    1. Introducing John Lennox
    2. Do we find truth in science?
    3. Why is secularism on the rise?
    4. What is Narrow Artificial Intelligence?
    5. China’s Social Credit System
    6. Rapid AI development
    7. The Dangers of Artificial General Intelligence (AGI)?
    8. The 2 alarming agendas of the 21st century
    9. Humanity’s desire for immortality
    10. What is true faith?
  11. RSA Lightning Talk, 3 Oct 2023, YouTube
  12. What Sam Altman Doesn’t Want You To Know
  13. Artificial Intelligence and the Publishing Industry
  14. History of Artificial Intelligence
  15. Christian Professor Responds to Shocking Claims About AI
  16. FULL DISCUSSION: Yuval Noah Harari Warns AI Will Take Over Language, Law, and Power at WEF | AI1G

Transhumanism: How Christianity Fulfills Our Deepest Desires | John Lennox

Solving the problem of physical death, re-engineering humans to become little gods: that has everything to do with wanting immortality. And as a Christian I have a great deal to say about that, because what’s happening, I believe, in the transhumanist movement, the desire behind it, is a parody of what Christianity is actually all about.

This business of what’s hardwired into human beings version 1.0, so to speak, I think is vastly important. Many years ago I came across that idea in the moral sense C.S. Lewis talks about in his book The Abolition of Man, which is relevant to what we’re discussing at the moment. In an appendix at the end, he points out that if you look at every culture around the world, they may differ, but they have certain moral rules in common. It looks as if morality is hardwired; I believe it is, by a benevolent Creator.

But now we come up to this, and we see that there’s hard wiring again at this particular level: God has set eternity in the human heart. Now of course that’s a theistic perspective, but if you take the atheistic view, then you’ve got to explain where it comes from. And again I found C.S. Lewis, as always, right on the money, so to speak. He makes the point, and I’m going to paraphrase it slightly, that it would be very strange to find yourself in a world where you got thirsty and there was no such thing as water.

Now I think that’s a very powerful thing, that longing, and C.S. Lewis has written a great deal about it; there’s a brilliant essay called The Weight of Glory. That longing for another world implies (these are not his words, but they are his sentiments) that we were actually made for another world. Now I feel that the transhumanist quest is an expression of the fact that we’re hardwired with a longing for something transcendent, and it’s trying to fulfill it.

The thing that I want to explore with you for a moment is this: I think a lot of people are at the point where it requires a lot of energy, quite a bit of anguish, to say, “I’m going to make some tough decisions about what I really believe.” And it seems to me that this whole area of artificial intelligence, and the chance that we may reach the capacity to literally destroy ourselves, requires us to think long and hard, and to make judgments that will have to be based, if you like, on faith; you can’t know exactly what’s going to happen. So you see, if you want to say, “Well, it requires a lot of faith to believe in that, to think through whether I believe in a God,” I would have thought this whole area presents just as great a challenge: who am I? How am I going to work this out? Do I put some ethical framework down, or do I just sit in the pot and let the water gradually boil until it’s too late?

Yes, you are absolutely right. This is going to force us, whether we like it or not, to do some hard thinking and to re-inspect and recalibrate our worldview, because our attitude to these things depends on our worldview, our set of answers to the big questions of life: what is reality? Who am I? What’s going to happen after death? All those kinds of questions are coming out in this area; we’re being forced to think about them. I read Harari, and I read other books like this, and I say, you know, I can understand what you’re looking for; you’re looking for something that’s very deep and hardwired in us. And I make people smile sometimes when I meet these transhumanists. I say, “Guys, I respect what you’re after, but you’re too late.” And they say, “What? Too late? Of course we’re not too late.” I say, “You actually are too late. Take your two problems. One: physical death. I believe there’s powerful evidence that that was solved twenty centuries ago. It was actually solved before that, but twenty centuries ago there was a resurrection in Jerusalem. We celebrate it as Easter, and we’re just after Easter now. And as a scientist I believe it, for various reasons that we can discuss. But the point is that if Jesus Christ broke the death barrier, that puts everything in a different light. Why? Because it affects you and me. How does it affect you and me? Because if that is the case, then we need to recalibrate and take seriously his claim to be God become human.”

I said, “Isn’t that interesting? What are you trying to do? You’re trying to turn humans into gods. The Christian message goes in the exact opposite direction: it tells us of a God who became human. Do you notice the difference?” And of course that actually gets people fascinated. I say, “You are actually taking seriously the idea that humans can turn themselves into gods by technology and so on. Why won’t you take seriously the idea that there is a God who became human? Is that any more difficult to do?”

And once you’ve got that, then I think, arguably, you need to take seriously what Jesus says. And that is the Christian message: he is God become human in order to do what? To give us his life; if you like, to turn us into what you want to be. Because the amazing thing about this is that the central message of the Christian faith to you and me is the answer to the transhumanist dream. First, Christ promises eternal life, that is, life that will never cease, and it begins now, not in some mystical, uncertain transhuman future, but right now. Secondly, because he rose from the dead, he promises that we will one day be raised from the dead to live with him in another, transcendent realm that is perhaps even more real than this one. And that’s going to be the biggest uploading ever.

You see, your hope for the future of humanity, changing human beings into something more desirable, living forever and happier, all of that is offered. But the difference between the two is radical, because your idea is using human intelligence to turn humans into gods while bypassing the problem of moral evil. You’re never going to do it; no utopia has ever been built. I believe even more strongly than ever that as Christians we’ve got a brilliant answer and a message to speak into this that ticks all the boxes. But it means facing moral reality, which is exactly at the heart of the scariness with which some people approach these issues.

How should Christians think about artificial intelligence?

How should Christians think about artificial intelligence? Should we be hesitant to embrace attempts to replicate human cognitive functions in machines? Well, yes. One of the scariest things we have going on here is that the technology of what’s called artificial intelligence is fast outstripping any boundaries, any set of rules, any kind of moral expectation. Even the folks in Silicon Valley, who by the way are investing untold billions of dollars right now in artificial intelligence because they see it as the wave of the future, even they honestly have no idea where this is going to take us.

There are a couple of very clear problems here. One of them is the attempt to create something that’s a hybrid of human and artificial intelligence. And by the way, any artificial intelligence is not going to be totally artificial, because it’s going to be inexplicable apart from a human being, or a team of human beings, who brought it about, which, by the way, goes right back to the Christian argument about the origin of the universe. In other words, intelligent design and creation apply not only to human beings creating artificial intelligence, with AI being inexplicable apart from human intelligence, but of course you can work backwards from that to the Creator as well. But I’m talking about a biological-technological hybrid, because that, as far out as it might sound, is actually where a lot of investment is going right now: the possibility of somehow merging human biological intelligence with machine intelligence.

The other thing to watch is an artificial intelligence that is out of control. And here’s where many of the people who are putting the most money into it recognize that they know what they’re beginning; they don’t really know what the conclusion might be. You can think back to the movie 2001: A Space Odyssey and remember the threat of a computer that becomes more intelligent than a human being. Now, by the way, I don’t worry about any machine becoming truly more intelligent than a human being, because our intelligence is not merely a machine intelligence; it’s not merely algorithms and analytics. It involves emotions, and it is spiritually defined, being made in the image of God. But there are forms of intelligence which, if artificial and if out of control, could be incredibly damaging. Just consider how damaging a virus is right now in terms of the total ecology of software and computer processing. Just imagine that turning openly hostile, not morally hostile (don’t worry, you really can’t impute morality to this, as in an artificial intelligence that turns evil); it would be artificial intelligence that becomes nothing more than a malevolent force which is set loose and frankly can’t be controlled. So yes, there are some huge, huge issues here.

The last thing I want to say about this is that one of the most dangerous aspects is the blurring of the distinctiveness of humanity. In the last month or so, three major books have been published, one of them by the philosopher Roger Scruton, dealing with the fact that it’s becoming more difficult in material terms to explain and to define human uniqueness. That’s not good for humanity. This is what many warned about throughout the twentieth century in particular, and it points to the danger that is lurking behind this kind of artificial intelligence. Not to mention the fact that, at this point, I’m certainly not willing to ride in any car that is driven entirely by artificial intelligence.

Michael Reeves: Christ the Image of God

Friends, I’m so sorry we can’t be together in person now, but we have a sovereign God and glorious, certain promises from Him, which we’re going to see, to help us stand strong in harrowing times.

Now, friends, we live today in a culture where everyone desperately wants to be fulfilled, and yet we’re in a culture which doesn’t know what it means to be human. Witness the abortions, the racism, the objectifying of people in the sex industry. And if we don’t know what it is to be human, how can we ever know how we can be fulfilled as humans? None of that should surprise us, for you cannot know what it means to be human without Christ, the image of God. And my aim now is to look together at what it means that Christ is the image of God, and to see what that teaches us about what it means to be human.

Now, in the first few hundred years after the apostles, “the image of God” was a theme that grabbed theologians in a way we don’t really see today, theologians like Irenaeus and Athanasius. They loved this theme because they saw the story of reality as the story of the image of God. For eternity past, the Son has always been, as Hebrews 1 verse 3 puts it, “the radiance of the glory of God, the exact imprint of His nature.” “He is,” as Paul explains in Colossians 1, “the image of the invisible God.” Not just like God, not just friendly with God: He is of the very being of the Father, so that when He appears, we do not just see a faint echo of divinity. When Philip, in John 14, asks Him, “Lord, show us the Father,” Jesus said to him, “Whoever has seen Me has seen the Father,” for He is the very expression of the Father, the perfect image and radiance of the glory of God. He is the image of God.

But then in Genesis 1, we read about the creation of Adam, and we’re told that he was created in the image of God, “after our likeness” (Genesis 1:26–27). Now, there’s so much to say about what it means that Adam was created in the image of God, but something Paul picks up in Romans 5 is fascinating. Paul there describes Adam as “the pattern of the one to come,” Romans 5:14. In other words, Paul is saying the first Adam was intended to be a picture of what Christ, the last Adam, would be like. For remember, Adam was crowned by God as the ruler of all creation in Genesis 1. God said, “Be fruitful and multiply and fill the earth and subdue it, and have dominion over the fish of the sea and over the birds of the heavens and over every living thing that moves on the earth.” That’s Genesis 1:28. So Adam was to look after and rule over the creation as God’s steward and regent, not that Adam was ever the true monarch. In the beginning, we’re being shown the end of humanity. Adam was serving as an illustration of the One to whom every knee will bow, to whom every creature will submit, the last Adam, who would be crowned the everlasting king of all.

But Adam is also strikingly called “the son of God,” in Luke 3 verse 38. That’s the climax of Luke’s genealogy of Jesus. Do you remember: “Jesus,” Luke explains, “was the son (as was supposed) of Joseph, the son of Heli, the son of,” and, keeping going a bit, “Enos, the son of Seth, the son of Adam, the son of God.” Adam was created a son of God specifically to be like the uncreated Son of God, reveling in the love and care the eternal Son had always enjoyed. Adam was made to know the love of the Father.

Now, Adam undid all that he was made to be by sinning. He listened to Satan. And in listening to Satan, he was no longer imaging God. He was doubting God’s fatherly kindness to him, and so he was no longer being a faithful son. He was the prodigal son. But even in his sin, he actually still managed to serve as a mirror image of the Son of God. Adam did not do what he was commanded, precisely because he no longer loved the Father. And in that moment, he could not have been more perfectly opposite to the Son of God who said, in John 14:31, “I love the Father, and I do exactly what He commands.”

But more than all that, the first Adam shows us what the last Adam is like through his marriage. The account of it in Genesis 2 certainly makes you sit up and wonder, because there in Genesis 2, remember, it’s a world before all death and agony, and Adam is wounded. “The LORD caused a deep sleep to fall upon the man, and while he slept the LORD took one of his ribs and closed up its place with flesh.” Adam falls into a deep, strange, deathlike sleep, and from his side the Lord takes a rib and builds it into a woman. She comes from him, and they become one: husband and wife.

John Calvin, when he wrote about this, wrote, “In this we see a true resemblance of our union with the Son of God.” What did he mean? Well, the biblical commentator Matthew Henry elaborates. He says, “In this, as in many other things, Adam was a figure of Him that was to come. For out of the side of Christ, the second Adam, His spouse, the church, was formed when He slept the sleep, the deep sleep of death upon the cross, in order to which His side was opened, and there came out blood and water. Blood to purchase His church, and water to purify it to Himself.” No wonder the Apostle Paul, reading of this first wedding in Genesis 2, saw it as a picture of the last and ultimate wedding, saying, reading from Genesis 2, “Therefore a man shall leave his father and mother, and the two shall become one flesh,” and adding, “This is a profound mystery, but I am talking about Christ and the church,” Ephesians 5:32. In Adam we see Christ’s glorious intention to give life to His bride, to be one with her. And so, Adam was created as the pattern of the one to come.

And from that moment, all of history would be the story of these two men: Adam, the head of the old humanity, and Christ, the head of the new. The fate of every person would be wrapped up in one or the other. And what Adam would break, Christ would mend.

So at a tree, the tree of the knowledge of good and evil, Adam committed the mother of all sins, and he fell into death. At a tree, the cross, Christ obeyed His Father to the uttermost and conquered death. Adam brought sin and death; Christ brought righteousness and life.

And then, wrote G.K. Chesterton, “On the third day, the friends of Christ coming at daybreak to the place found the grave empty and the stone rolled away. And in varying ways, they realized the new wonder, but even they hardly realized that the world had died in the night. And what they were looking at was the first day of a new creation, with a new heaven and a new earth. And in a semblance of the gardener, God walked again in the garden in the cool, not of the evening, but of the dawn.” Yes, that first Easter morning was indeed a wondrous new beginning, like a new Eden, reestablishing all that God had once declared good. A man, yes, God, walked in the garden, ruler over all things, in perfect harmony with God. Only now there would be no threat of death, no danger of a serpent to wreck it all. Death had been swallowed up in victory, the serpent crushed. And where Adam had been banished from the Lord’s presence and expelled from paradise, Christ would ascend to be where man was made to be: with God. And a man would sit with God in perfect harmony. Adam had been told, “Fill the earth and subdue it,” but, Ephesians 4:10, Christ “ascended far above all the heavens, that He might fill all things. And He gave gifts to His people,” that His people might be built up, that they, the new humanity, might fill God’s new creation.

In his great Christmas hymn, “Joy to the World,” Isaac Watts wrote, “No more let sins and sorrows grow, nor thorns infest the ground. He comes to make His blessings flow far as the curse is found.” Yes, Christ mended all that Adam broke; the humanity, the image and likeness of God, that was broken in Adam was mended in Christ. And more than mended, for Jesus is greater than Adam. As there was more glory in the days of Solomon than in the days of his father David, so there will be more glory in the days of the Son of Man than there ever had been in the days of the first man. For as the last Adam is so much superior to the first, so must His reign be. And He, unlike the first man, will never fall or fail as Adam did. And so the rule of the Son of Man in paradise restored will never pass away. In harrowing times, think on the Son of Man.

And do you know, one of the great heroes of the faith was the mighty fourth-century theologian Athanasius. His name means “immortal,” and it’s quite appropriate. Athanasius had a lovely image to help us grasp how Christ is the image of God and how He restored the image of God in humanity. He said that Adam was like a beautiful portrait painting: on him, the image of God was drawn. And what happened at the fall was that the portrait was utterly wrecked. Adam was no longer anything like God. He had become vicious, selfish, horribly unholy. And so the image, the painting, was ruined. So, how could this precious portrait be restored? The problem was that there was nobody who knew what the portrait had once looked like. They couldn’t restore it. To restore it, you had to know God; you had to know what He’s like. Otherwise, you could never know what the image of God should look like. There was only one hope: the original subject of the portrait had to come and have His likeness redrawn on the canvas of humanity. Only the One whose likeness was originally drawn on Adam could restore and renew it. And so the image of God Himself came. He took humanity to renew His image in it. He came and showed us the image of God in the flesh. And in Christ alone could humanity be restored from what Athanasius called all this “dehumanizing of mankind.” Only He, the image of God Himself, could rehumanize us. Only in Him could we, as Paul puts it in Colossians 3:10, “put on the new self, which is being renewed in knowledge after the image of its Creator.”

Friends, no wonder our society is crawling with identity issues. With the image of God ruined in Adam, sinners don’t know what they’re for. So we seek to mend ourselves, but we don’t know what mended looks like. Sensing our brokenness, we try to restore ourselves with morality or with authenticity, but we’re fumbling in the dark, trying to redraw a portrait when we have no idea what it should even look like. All we can come up with are monstrous aberrations. Our only hope of wholeness is in Christ, the image of God. Humanity can be mended nowhere else. To be out of Christ, whatever we do, whatever we try, is to remain dehumanized by the fall. But to know Christ, to be in Him, is to be humanized, to be renewed in the likeness of God, because in Christ we see perfect humanity.

16:27We see humanity as we should be. Now we often, we use a negative word to describe Christ’s life.

16:40It was, we say, “sinless,” which it doesn’t sound immediately very exciting, does it?

16:47″Sinless.” But think what it means. That Christ was sinless means He was not selfish, heartless, cruel, abusive, twisted, petty,

17:04proud. To be sinless is beautiful, and that is what humanity should be and is destined to be in

17:15Christ. This true humanity that we see in Christ, think, is so full of life.

17:24Just think what Jesus was like as a man as you read the Gospels. He was anything but boring and anemic.

17:31Here was a man with towering charisma, running over with life, health and healing, loaves

17:38and fishes. Everything abounded in His presence. So compelling did people find Him that crowds would throng round Him – men, women, children,

17:51the sick and the mad, the rich and the poor. They found Him so magnetic.

17:56Some just wanted to touch His clothes. Kinder than summer, He befriended the rejects and He gave hope to the hopeless.

18:06And the dirty and the despised found they mattered to Him.

18:14His closest friends found that as the Son of Man came eating and drinking, being with Him, it was like being with a bridegroom at a wedding.

18:25He was generous, genial, and firm and resolute.

18:32He was always surprising. Jesus was utterly loving, but He wasn’t soppy.

18:39His insight would unsettle people, and His kindness would win them.

18:47Indeed, you read the Gospels and you see Jesus was a man of extraordinary and extraordinarily

18:54appealing contrasts. You simply couldn’t make Him up. Just try to imagine the perfect man.

19:03If you do, you’ll come up with some wooden caricature of a man, a saintly bore.

19:09But Jesus is so much more realistic, so much better than any imaginary perfect man.

19:18You see, we would make Him only one thing or the other. But Jesus, you see, He’s red-blooded and human, but not rough.

19:29He’s pure, but He’s never dull. He’s serious, but with sunbeams of wit.

19:36Sharper than cut glass, He would out-argue all comers in debate but never for the sake

19:44of a mere win. He knew no failings in Himself and yet was transparently humble.

19:54He made the grandest claims for Himself and yet does so without a whiff of pomposity.

20:01He ransacked the temple, He spoke of hellfire, He called Herod a fox, He called the Pharisees

20:07″corpses in makeup,” and yet never do you doubt His love as you read His life.

20:14With a huge heart, He hated evil and felt for the needy.

20:20He loved God, and He loved people. So you look at Him and you have to say, “Here is a man truly alive, unwithered in any way,

20:32far more vital and vigorous, far more full and complete, far more human than any other.”

20:44And so it is for those who come to know Christ. They find themselves being re-humanized in His image, after His likeness.

20:56A great example of this was the nineteenth century London preacher, Charles Spurgeon.

21:03Now a few years ago, I was doing some work on Spurgeon for a book that I was writing.

21:09Now, I had enjoyed reading Spurgeon’s sermons for years, but the more I read about the man,

21:16the more I’d started wondering, “Is this guy for real?” You know, he’s so full on in the pulpit and in public.

21:25And I thought, “Surely, there’s a quiet at-home Spurgeon.” I had a sneaking fear that I might find a lack of integrity.

21:35So I found myself one day in a library in Oxford, where there is an archive of Spurgeon,

21:42bits and bobs. And there were all his private letters to his parents, completely unprotected, no gloves,

21:50no plastic covers. You could take these handwritten letters out and put them in your pocket. Yeah, I didn’t.

21:55Don’t worry. But what struck me in reading them was how, whether he was talking about his daily life

22:06or what he was praying for, just the same passions and concerns were there in his private

22:13letters. He was the same man in the privacy of letters to his parents that he never thought some

22:20grubby scholar would leaf through. He was just the same man as in the pulpit.

22:26And the more I got to know him, the more I saw how all round, full of life Spurgeon was.

22:37He wasn’t just a marvelous preacher. He wasn’t simply a large presence in the pulpit.

22:46He was a great man. He was a large presence in life.

22:52I saw he went at all of life full on. He was a big-hearted man of deep affections.

22:58He wasn’t just passionate when preaching; he was tenderhearted in life.

23:03So he…Spurgeon laughed and he cried much. He read avidly.

23:10He was intellectually curious and hungry. He was a man who felt deeply.

23:18He was a zealously industrious worker and a sociable lover of play and beauty.

23:26In other words, he was a man who embodied the truth that to be in Christ means to be

23:34made ever more roundly human, more fully alive.

23:39Now, if you’ve ever read a sermon of his, and if you haven’t, you must.

23:46If you’ve ever read a sermon of his, you’ll know he was an unmistakably earnest man. And yet earnestness and zeal for Spurgeon were never confused with gloominess and melancholy.

23:58It’s telling and very fitting that a whole chapter of his autobiography is entitled “Pure

24:05Fun.” A friend of his called William Williams once said, “What a bubbling fountain of humor Mr.

24:12Spurgeon had…was. I laughed, I believe, more in his company than during all the rest of my life besides.”

24:22And few, it seems, expected to laugh quite so much in the presence of this zealous pastor.

24:30And Spurgeon knew this, and he seemed to take an almost impish delight in springing comedy

24:37on those around him. So grandiosity, religiosity, humbug pomposity could all expect to be pricked on his wit.

24:50But most essentially, Spurgeon’s sunny manner was a manifestation of that happiness and

24:56cheer which is found in Christ, the Light of the world. The lightheartedness he found in himself came from his clear refusal to take himself or

25:09any other sinner too seriously. Spurgeon held that to be alive in Christ means to fight not only the habits and acts of sin

25:21but also sin’s temperamental sullenness, ingratitude, bitterness, despair.

25:27And so, to enter into Christ’s life entails entering into the joy of being fully human,

25:35of being at peace with the blessed or happy God of glory.

25:42Spurgeon knew and lived out his belief that the Christian life is not a dull, ethereal

25:49existence on some high-up invisible plane.

25:54It is being heavenly, and it is being more full, more human, brighter, more involved,

26:02more lively. And so, he would encourage his students. Here’s what he said.

26:08He said to his students, “Labor to be alive in all your duties.

26:14Brethren, we must have life more abundantly, each one of us, and it must flow out into

26:20all the duties of our office. Warm spiritual life must be manifest in the prayer, in the singing, in the preaching,

26:28and even in the shake of the hand and the good word after service. Be full of life at all times,” he said, “and let that life be seen in your ordinary conversation.”

26:41But here is the million-dollar question.

26:48Here’s the question that can put the airport pop psychology and self-help book business

26:54out of business. Here’s the question. How did Spurgeon get to be like that?

27:02Because everyone wants to be that joy-filled, full of life person.

27:09So how was he so fully, so vividly human?

27:17Answer? By fixing his eyes on Christ, the image of God.

27:26And how relentlessly Spurgeon did that. And to prove that, I want to read to you the very first and very last words he ever preached

27:40in the Metropolitan Tabernacle pulpit in London. So in his very first sermon on March the 25th, 1861, his first sermon in the Tabernacle,

27:52he announced, “I would propose the subject of the ministry of this house, as long as

27:58this platform shall stand, shall be the person of Jesus Christ.”

28:05And in his thirty years of pastoring there, Spurgeon didn’t stray from that theme.

28:12So witness these. These are Spurgeon’s last ever words from the pulpit.

28:18They’re dated June the 7th, 1891, thirty years later.

28:25Spurgeon said his last words in the pulpit, “It is heaven to serve Jesus.

28:32I am a recruiting sergeant, and I would find a few recruits at this moment.

28:40Every man must serve somebody. We have no choice as to that fact. Those who have no master are slaves to themselves.

28:50Depend on it, you will either serve Satan or Christ, either self or the Savior.

28:56But you will find sin, self, Satan, and the world to be hard masters.

29:03But if you wear the livery of Christ, you will find Him so meek and lowly of heart,

29:10you will find rest unto your souls. He is the most magnanimous of captains.

29:16There never was His like among the choicest of princes.

29:22He is always to be found in the thickest part of the battle.

29:27And when the wind blows cold, He always takes the bleak side of the hill.

29:33The heaviest end of the cross ever lies on His shoulders.

29:39And if He bids us carry a burden, He carries it also. And if there is anything that is gracious, generous, kind and tender, yea, lavish and

29:50superabundant in love, you will always find it in Him.

29:55These forty years and more have I served Him, blest be His name. And I have had nothing but love from Him.

30:04I would be glad to continue yet another forty years in the same dear service here below

30:13if it so pleased Him. His service is life, peace, joy.

30:22God help you to enlist under the banner of Jesus even this day.

30:29Amen.” And when he died, the olive-wood casket that held his body was drawn through the streets

30:38of south London. And on top was a large pulpit Bible opened at Isaiah 45 verse 22, where the Lord says,

30:49″Look unto Me and be ye saved, all the ends of the earth.”

30:55Those, in fact, had been the very words that had first shown Spurgeon the way of salvation

31:03forty years earlier. “Look unto Me,” says the Lord.

31:10Spurgeon had learned that people are first saved when they look with belief on the Son

31:16of Man lifted up, as the Israelites once looked on the bronze serpent in the wilderness.

31:22But more, Spurgeon had come to understand the deep truth Paul had spoken of in 2 Corinthians 3.

31:28Do you remember Paul’s argument in 2 Corinthians 3?

31:34It’s worth turning to. In 2 Corinthians 3, Paul, he was thinking of Moses who, do you remember, Moses asked

31:45to see, to look upon the glory of the Lord? And the result was, we read, “When Moses came down from Mount Sinai with the two tablets

31:55of the law in his hand, as he came down from the mountain, Moses did not know the skin

32:02of his face shone because he’d been talking with God.”

32:09And Paul writes, commenting on that, 2 Corinthians 3:18, “And we all,” like Moses, “with unveiled

32:18face, beholding the glory of the Lord, are being transformed into the same image from

32:27one degree of glory to another,” created in the image of God that we might be like God,

32:36sharing His life, His vitality, His loving holy character.

32:43We become what we were made to be by looking to Christ, who is the image of God.

32:53Beholding Him, we become most truly human. And all our faculties, our minds, our hearts, our lives get aligned right, and we are transformed

33:07into His image. Friends, it matters what you fix your gaze on every day.

33:18Life, righteousness, holiness, redemption are found in Jesus and are found by those,

33:28and only those, who look to Him believingly. And perhaps, I should be clearer.

33:36It’s not that we look, get some sense of what He’s like and then go away and strain to make

33:42ourselves like Him. No, we become like Him through the very looking.

33:48The very sight of Him is a transforming thing.

33:53And so, for now, contemplating Him by faith, we begin to be transformed into His likeness.

34:02But so potent is His glory that when we clap eyes upon Him physically at His second coming

34:10then, 1 John 3:2, “When He appears, we shall be like Him, for we shall see Him as He is.”

34:23That full, unveiled, physical sight of the glorified Jesus will be so majestically affecting,

34:30it will transform our very bodies around us.

34:36The sight of Him now, by the Spirit, makes us more like Him spiritually.

34:41The sight of Him then, face to face, will finally make us body and soul as He is.

34:53Dear friends, looking to Christ, the image of God will do for you what no self-help books

35:04will do. Pressing in to know Him and to know the privileges we have in Him, our righteousness before God,

35:14our adoption as children of God, there is the ultimate answer to all our identity issues,

35:22to all our brokenness. When you look, when you seek to know Him ever better, that is when you find yourself humanized.

35:35That is when you see what it means to be human, when you begin to hate all perversion of what

35:46we were made to be, when you slowly conform to His likeness, the image of God.

35:56And it means, friends, that when you see the brokenness of our society with all its piled-up

36:03wickedness, be sure no moral patches or patches of any sort will suffice as an ultimate cure.

36:13Only in Christ is the cure for humanity. Only in Christ could there be a cure, for He is the image of God.

36:22And so look to Him, proclaim Him the Son of man whose glorious rule shall never pass away.

36:34Let’s pray. Our Father, we delight to confess that Your magnificent Son is Your perfect image, the

36:51bright radiance of Your glory. In Him we see You, and in Him we see humanity as we should be.

37:02And so we ask, fasten our eyes on Him that we might be healed, that we might be transformed

37:13into His image from one degree of glory to another.

37:20And in His majestic and sweet name we pray it.

37:25Amen.

OpenAI CEO Sam Altman: “If this technology goes wrong, it can go quite wrong.”

0:00I alluded in my opening remarks to the

0:03the jobs issue the economic effects on

0:06employment uh I think you have

0:10said uh in fact and I’m going to quote

0:13development of superhuman machine

0:15intelligence is probably the greatest

0:18threat to the continued existence of

0:20humanity end quote you may have had in

0:23mind the effect on on jobs which is

0:28really my biggest nightmare

0:30in the long term uh let me ask you uh

0:34what your biggest nightmare is and

0:36whether you share that concern

0:42like with all technological revolutions

0:43I expect there to be significant impact

0:46on jobs but exactly what that impact

0:49looks like is very difficult to predict

0:50if we went back to the the other side of

0:53a previous technological Revolution

0:54talking about the jobs that exist on the

0:56other side

0:58um you know you can go back and read

0:59books of this it’s what people said at

1:01the time it’s difficult

1:03I believe that there will be far greater

1:06jobs on the other side of this and the

1:08jobs of today will get better I think

1:10it’s important

1:11first of all I think it’s important to

2:13understand and think about GPT-4 as a

1:15tool not a creature which is easy to get

1:18confused and it’s a tool that people

1:19have a great deal of control over and

2:22how they use it and second GPT-4 and

2:26other systems like it are good at

1:29doing tasks not jobs and so you see

3:32already people that are using GPT-4 to do

1:35their job much more efficiently by

3:38helping them with tasks now GPT-4 will I

1:42think entirely automate away some jobs

1:44and it will create new ones that we

1:47believe will be much better this happens

1:49again my understanding of the history of

1:52technology is one long technological

1:54Revolution not a bunch of different ones

1:55put together but this has been

1:57continually happening we as our quality

1:59of life raises and as machines and tools

2:02that we create can help us live better

2:04lives uh the bar raises for what we do

2:07and and our human ability and what we

2:09spend our time going after uh goes after

2:11more ambitious more satisfying projects

2:13so there there will be an impact on jobs

2:16we try to be very clear about that and I

2:18think it will require

2:21partnership between the industry and

2:22government but mostly action by

2:23government to figure out how we want to

2:25mitigate that

2:27but I’m very optimistic about how great

2:29the jobs of the future will be thank you

2:31let me ask Ms Montgomery and Professor

2:34Marcus for your reactions those

2:36questions as well Ms Montgomery on the

2:39jobs Point yeah I mean well it’s a

AI will change every job

2:42hugely important question

2:44um and it’s one that we’ve been talking

2:46about for a really long time at IBM you

2:49know we do believe that Ai and we’ve

2:51said it for a long time is going to

2:53change every job new jobs will be

2:55created many more jobs will be

2:58transformed and some jobs will

3:00transition away I’m a personal example

3:03of a job that didn’t exist when I joined

3:05IBM and I have a team of AI governance

3:08professionals who are in new roles that

3:11we created you know as early as three

3:13years ago I mean they’re new and they’re

3:15growing so I think the most important

3:17thing that we could be doing and can and

3:19should be doing now is to prepare the

3:22workforce of today and the workforce of

3:25tomorrow for partnering with AI

3:28technologies and using them and we’ve

3:30been very involved for for years now in

3:33doing that in focusing on skills-based

3:36hiring

3:37in educating for the skills of the

3:40future our skills build platform has

3:43seven million Learners and over a

3:45thousand courses worldwide focused on

3:47skills and we’ve pledged to train 30

3:51million individuals by 2030 in the

3:54skills that are needed for society today

3:56thank you Professor Marcus may I go back

Nutrition labels

3:59to the first question as well absolutely

4:01on on the subject of nutrition labels I

4:04think we absolutely need to do that I

4:06think that there’s some technical

4:07challenges in that building proper

4:09nutrition labels goes hand in hand with

4:11transparency the biggest scientific

4:13challenge in understanding these models

4:15is how they generalize what do they

4:17memorize and what new things do they do

4:19the more that there’s in the data set

4:21for example the thing that you want to

4:22test accuracy on the less you can get a

4:25proper read on that so it’s important

4:26first of all that scientists be part of

4:28that process and second that we have

4:30much greater transparency about what

4:31actually goes into these systems if we

4:33don’t know what’s in them then we don’t

4:35know exactly how well they’re doing when

4:37we give something new and we don’t know

4:39how good a benchmark that will be for

4:40something that’s entirely novel so I

4:42could go into that more but I want to

4:44flag that

4:45second is on jobs past performance

4:48history is not a guarantee of the future

4:50it has always been the case in the past

4:53that we have had more jobs that new jobs

4:55new professions come in as new

4:57technologies come in I think this one’s

4:59going to be different and the real

5:00question is over what time scale is

5:03it going to be 10 years is it going to

5:04be 100 years and I don’t think anybody

5:05knows the answer to that question I

5:07think in the long run so-called

5:10artificial general intelligence really

5:12will replace a large fraction of human

5:14jobs we’re not that close to artificial

5:16general intelligence despite all of the

5:18media hype and so forth I would say that

5:20what we have right now is just small

5:22sampling of the AI that we will build in

5:2420 years people will laugh at this as I

5:26think it was Senator Hawley but maybe

5:30it was Senator Durbin who made the

5:31example about cell phones when we

5:33look back at the AI of today 20 years

5:36ago we’ll be like wow that stuff was

5:38really unreliable it couldn’t really do

5:40planning which is an important technical

5:42aspect its reasoning abilities were

5:45limited but

5:47when we get to AGI or artificial general

5:49intelligence maybe let’s say it’s 50

5:50years that really is going to have I

5:52think profound effects on labor and

5:55there’s just no way around that and last

5:57I don’t know if I’m allowed to do this

5:58but I will note that Sam’s worst fear I

6:00do not think is employment and he never

6:02told us what his worst fear actually is

6:04and I think it’s germane to find out

Worst fear

6:08thank you I’m going to ask

6:12Mr Altman if he cares to respond yeah

6:15look we have tried to be very clear

6:17about the magnitude of the risks here I

6:22I think jobs and employment and what

6:25we’re all going to do with our time

6:26really matters I agree that when we get

6:28to very powerful systems the landscape

6:30will change I think I’m just more

6:31optimistic that we are incredibly

6:34creative and we find new things to do

6:36with better tools and that will keep

6:37happening

6:39um

6:40my worst fears are that we cause

6:42significant we the field the technology

6:44the industry cause significant harm to

6:46the world

6:48I think that could happen a lot of

6:49different ways it’s why we started the

6:51company it’s a big part of why I’m here

6:54today and why we’ve been here in the

6:55past and we’ve been able to spend some

6:57time with you I think if this technology

6:59goes wrong it can go quite wrong and we

7:03want to be vocal about that we want to

7:05work with the government to prevent that

7:07from happening but we we try to be very

7:09clear-eyed about what the downside case

7:11is and the work that we have to do to

7:13mitigate that

7:14thank you and

The Brain’s Learning Algorithm Isn’t Backpropagation

0:00Of all the mysteries that the human

0:02brain presents, perhaps one stands above

0:04the rest. How does it learn so

0:06effectively? In the world of artificial

0:09intelligence, scientists and engineers

0:11have spent decades trying to replicate

0:13the brain’s learning mechanisms. Their

0:16efforts led to back propagation with

0:18gradient descent, the workhorse

0:20algorithm that powers virtually the

0:23entire field of machine learning today.

0:25Due to its remarkable success,

0:28researchers began to speculate that

0:30perhaps brains do something similar.

0:32However, there is a fundamental problem.

0:35The back propagation algorithm

0:37contradicts essential biological

0:39principles of brain function, making its

0:41exact implementation in neural tissue

0:44virtually impossible.

0:46In recent years, however, an alternative

0:48algorithm called predictive coding has

0:50emerged that is not only more aligned

0:53with the brain’s biological hardware,

0:55but sometimes can work even better than

0:57back propagation itself. In this

1:00video, we will build predictive coding

1:02from first principles, explore what

1:04issues of biological plausibility it

1:06addresses and how it might inspire the

1:09next revolution in machine learning.

Credit Assignment Problem

1:15The fundamental challenge that

1:16computational systems must solve is

1:18called credit assignment. When you have

1:21a system with numerous parameters like

1:24connection weights between neurons that

1:25can be adjusted to achieve a desired

1:27output such as recognizing objects in an

1:30image or executing appropriate actions,

1:33how do you determine which parameters to

1:35adjust and by how much? Artificial

1:38neural networks solve this elegantly

1:40through what’s called automatic

1:42differentiation. Because the entire

1:44computation can be represented as a

1:46mathematical function, computers use

1:49calculus, particularly the chain rule of

1:51derivatives, to calculate precisely how

1:54each parameter should be nudged to

1:56guarantee improvement in performance. If

1:59you’re interested in a deep step-by-step

2:01derivation of how back propagation

2:03works, I’ve covered this in one of my

2:05earlier videos. However, despite its

2:07remarkable success in machine learning,

2:09evidence suggests that the brain almost

2:12certainly uses a different approach.

2:14There are various reasons why back

2:16propagation doesn’t map directly onto

2:18neural hardware, but luckily most of

2:20them have biologically plausible

2:22workarounds. What is crucial for our

2:25discussion today and why I’m extremely

2:27excited about predictive coding is that

2:29it addresses two fundamental constraints

2:32that are absolutely incompatible with

2:34neurophysiology and which are the biggest

2:37reasons why brains cannot perform back

2:39prop namely lack of local autonomy and

2:43discontinuous processing. Sounds

2:45confusing. So let’s unpack what this

2:46means.
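The chain-rule credit assignment described above can be sketched in a few lines of Python. This is a minimal toy example for illustration only; the two-weight network and all numeric values are hypothetical:

```python
import numpy as np

# Toy network: y = w2 * tanh(w1 * x), evaluated on a single target.
x, target = 0.5, 0.8
w1, w2 = 0.3, -0.2

# Forward pass.
h = np.tanh(w1 * x)
y = w2 * h
loss = 0.5 * (y - target) ** 2

# Credit assignment via the chain rule: how much each parameter
# contributed to the loss.
dL_dy = y - target
dL_dw2 = dL_dy * h                # w2's direct contribution
dL_dh = dL_dy * w2                # error passed back through w2
dL_dw1 = dL_dh * (1 - h**2) * x   # tanh'(z) = 1 - tanh(z)^2

# Sanity check: nudging w1 by eps changes the loss by about eps * dL_dw1,
# which is why stepping each weight against its derivative improves performance.
eps = 1e-6
loss_nudged = 0.5 * (w2 * np.tanh((w1 + eps) * x) - target) ** 2
print(abs((loss_nudged - loss) / eps - dL_dw1) < 1e-6)  # True
```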

Problems with Backprop

2:50Artificial networks operate in strictly

2:52separated phases that alternate

2:55sequentially. First, information flows

2:58forward. Input propagates across layers

3:00to the output, generating a prediction.

3:04Next, this prediction is compared

3:06against the desired outcome, calculating

3:09an error. Then comes the crucial

3:11backward pass. This error travels back

3:15through the network layer by layer

3:17determining precisely how each weight

3:19should change to reduce future errors.

3:22Finally, all weights update

3:24simultaneously and the cycle repeats

3:27with a new training example. For this

3:29process to work, neurons must

3:32essentially freeze their feed forward

3:33activity values like taking snapshots of

3:36activity and holding on to them while

3:39error signals flow backward. But our

3:42brains don’t work like this. They don’t

3:44hit pause between thinking and learning.

3:48Communication in biological tissue is

3:50relatively slow compared to silicon

3:52processors. If the brain followed the

3:54back propagation approach, it would have to

3:57completely stop information processing

3:59for hundreds of milliseconds before

4:01performing the backward pass to update

4:03connections. Imagine experiencing brief

4:06blackouts every time you learn something

4:08new.
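The strictly alternating phases can be made explicit in code. This is a hedged sketch with a toy two-layer linear network (the sizes, seeded random values, and plain squared-error setup are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer linear network (sizes and values are arbitrary).
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(2, 4))
x = rng.normal(size=3)
target = rng.normal(size=2)
lr = 0.01

# Phase 1: forward pass. Activations must be stored ("frozen"),
# because the backward pass will need them later.
h = W1 @ x
y = W2 @ h

# Phase 2: compare the prediction against the desired outcome.
err = y - target

# Phase 3: backward pass, strictly layer by layer. The update for W1
# cannot be computed before the downstream layer has passed its error back.
dW2 = np.outer(err, h)    # needs the frozen activity h
dh = W2.T @ err           # error travels backward through W2
dW1 = np.outer(dh, x)     # needs the frozen input x

# Phase 4: all weights update simultaneously; then the cycle repeats.
W1 -= lr * dW1
W2 -= lr * dW2
```

The point is the temporal ordering: Phase 3 depends on values frozen in Phase 1, which is exactly the pause-and-rewind structure biological neurons cannot implement.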

4:10Instead, biological brains process

4:12information and learn simultaneously in

4:15a continuous stream. There is no

4:17evidence for separate forward and

4:19backward phases. Neurons receive,

4:21process, and adapt to information in

4:24parallel without pausing computation to

4:26accommodate learning. The second major

4:29issue with back propagation is its

4:31reliance on global coordination.

4:34Not only must there exist some kind of

4:36central controller to switch the entire

4:39network between forward and backward

4:40modes, but this information must

4:42propagate in a precise temporal

4:45sequence. Even if neurons could somehow

4:48freeze their activity, they would need

4:50to unfreeze in strict succession, you

4:53cannot compute errors for a given neuron

4:55before its downstream partners have

4:57finished calculating their own errors.

5:01Everything we know about brain

5:02physiology suggests that such global

5:05coordination is extremely unlikely to

5:08exist. While there are some coordinating

5:10mechanisms, oscillations like theta and

5:13gamma rhythms, attentional systems and

5:15neuromodulators like dopamine that

5:18influence broad populations, these

5:20mechanisms operate at much coarser

5:22temporal and spatial scales than would

5:24be required for back propagation which

5:27relies on cell-by-cell precision.

5:29Instead, individual neurons and synapses

5:32mostly function as autonomous agents,

5:35modifying their states based solely on

5:37information physically available at

5:39their specific locations. The brain

5:42operates in a massively parallel locally

5:44autonomous system where computation and

5:47learning occur simultaneously

5:49throughout the network in a distributed

5:51manner without centralized control.
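Local autonomy can be made concrete with a generic Hebbian-style sketch. This illustrates only the locality constraint (a synapse using nothing but what is physically present at its location); the values are toy numbers and this is not claimed to be the brain's actual learning rule:

```python
# Local autonomy: a biological synapse can only use quantities that are
# physically present at its own location. Toy values for illustration:
pre_activity = 0.9     # signal arriving at the synapse
post_activity = 0.4    # state of the postsynaptic neuron
lr = 0.05

delta_w = lr * pre_activity * post_activity  # purely local computation

# Backprop would instead require a precisely timed error signal computed
# many layers downstream, which is not physically available here without
# the kind of global coordination the brain lacks.
print(delta_w)
```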

5:54Now that we understand the limitations

5:56of back propagation in biological

5:58systems, let’s explore a promising

6:00alternative, the predictive coding

Foundations of Predictive Coding

6:05algorithm. This framework originated

6:07from mid-twentieth century research,

6:10proposing that the brain’s fundamental

6:12objective is to predict incoming sensory

6:15information. From an evolutionary

6:17perspective, prediction enhances

6:19survival by allowing organisms to

6:21anticipate threats and interpret noisy

6:24observations. There is also an

6:26efficiency argument. Neural activity

6:28demands considerable metabolic energy,

6:31and a brain that can predict incoming

6:33signals only needs to process unexpected

6:36information, reducing the metabolic

6:38burden of transmitting predictable and

6:41thus redundant data. In this view, the

6:44brain’s primary task isn’t simply

6:47processing incoming stimuli, but

6:49constructing an internal model that

6:51explains sensory

6:53inputs. When this model predicts

6:55accurately, minimal additional

6:57processing is required. When predictions

7:00fail, the resulting prediction errors

7:02signal that the internal model needs

7:04updating.
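The efficiency idea, that only unexpected information needs to be processed, can be illustrated with a trivial one-step predictor, much like delta coding in signal compression. The signal values here are made up:

```python
import numpy as np

# A slowly varying "sensory" signal (hypothetical example).
signal = np.array([10.0, 10.1, 10.2, 10.2, 10.3, 14.0, 10.4])

# Trivial internal model: predict that each sample equals the previous one.
predictions = np.concatenate(([signal[0]], signal[:-1]))
errors = signal - predictions  # only the surprise is left

print(errors)  # mostly near zero; the unexpected jump to 14.0 stands out
print(np.abs(errors).sum() < np.abs(signal).sum())  # True: errors are cheaper to carry
```

A system that transmits `errors` instead of `signal` spends almost nothing on the predictable stretches and signals loudly only when the model fails, which is exactly when it needs updating.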

7:06Predictive coding formalizes this

7:08concept as a hierarchical system where

7:11each neural layer attempts to predict

7:13the activity of the layer below it. The

7:16lowest level corresponds to raw sensory

7:18input like pixels of an image while

7:21higher levels encode increasingly

7:23abstract features and categories that

7:26enable effective prediction of the lower

7:28level visual features. Although real

7:31brains possess more complex

7:32connectivity, including associative

7:34connections between different

7:36modalities, the simplified hierarchical

7:38model captures the core

7:40principles. Information flows

7:42bidirectionally through this hierarchy.

7:45Top-down connections carry predictions

7:48from higher levels to lower levels,

7:50while bottom up connections carry

7:53prediction errors, differences between

7:55predictions and the actual activity.
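The hierarchy with top-down predictions and bottom-up errors can be sketched directly. The layer sizes and seeded random values are arbitrary, and predictions are kept linear for simplicity, matching the simplification used later in the video:

```python
import numpy as np

rng = np.random.default_rng(1)

# Three-level hierarchy: level 0 is raw sensory input (e.g. pixels),
# higher levels encode increasingly abstract features.
x = [rng.normal(size=8), rng.normal(size=4), rng.normal(size=2)]
W = [rng.normal(size=(8, 4)), rng.normal(size=(4, 2))]  # W[l]: level l+1 -> l

# Top-down: each level predicts the activity of the level below it.
predictions = [W[l] @ x[l + 1] for l in range(2)]

# Bottom-up: each level sends its prediction error back up.
errors = [x[l] - predictions[l] for l in range(2)]
```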

7:58This abstract description of information

8:00flow will guide our derivation of how

8:03individual neurons must

Energy Formalism

8:07interconnect. We’ll approach our network

8:09as a so-called energy-based model.

8:11Essentially, this means associating each

8:14possible network state with a single

8:16number representing some form of

8:18abstract energy. We can then derive

8:21rules for how the system should evolve

8:23to reduce this energy. This framework

8:26parallels physical systems that

8:28naturally progress towards minimum

8:30energy states like a ball rolling

8:32downhill to minimize gravitational

8:34potential energy or proteins folding to

8:37minimize atomic interaction energy.

8:40Since the brain is also a physical

8:42system, it too evolves towards states

8:44that minimize some form of energy. In

8:47predictive coding networks, this energy

8:49relates to the total magnitude of errors

8:52between predictions and reality. To

8:54visualize it, consider the following

8:57analogy. Imagine the network as an

8:59assembly of movable parts, springs, and

9:02connection rods where each neuron is a

9:04node sliding on a post, its height

9:07representing its activity level. On the

9:10same post slides a platform

9:12corresponding to its predicted activity,

9:15determined by the neurons from the layer

9:17above. A spring connects the neuron node

9:20and the platform, and the tension of the

9:23spring, proportional to its squared

9:24length, contributes to the overall

9:27energy. If the neuron’s activity deviates

9:30significantly from its predicted value

9:32in either direction, the energy

9:35increases. A neuron’s activity can be

9:37freely adjusted while its predicted

9:40activity is determined by other neurons.

9:43We can visualize it as rods connecting

9:46neuron nodes on the layer above to the

9:48platforms at a current level positioned

9:51at variable angles corresponded to

9:53synaptic weights which determine how

9:55other neurons activities influence the

9:58prediction. The sum of activities from

10:00all neurons in the layer above

10:02multiplied by synaptic weights

10:04connecting them. Note that typically

10:07activities pass through a nonlinear

10:09activation function like sigmoid or

10:11ReLU, but I’m omitting it here for

10:14simplicity. The prediction error for

10:16each neuron then is the difference

10:18between its actual and predicted

10:20activity. And the total energy

10:23representing the overall tension of all

10:26springs sums the squared errors across

10:28all neurons in each layer.
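The total energy just described, the summed squared "spring tensions", can be written as a short function. The sizes and seeded values are hypothetical, predictions are linear as in the simplified model, and the 1/2 factor is a common convention that keeps the derivatives tidy:

```python
import numpy as np

def total_energy(x, W):
    """Sum of squared prediction errors over all layers.
    x[l] is the activity vector at layer l (layer 0 = sensory input);
    W[l] maps layer l+1's activity to a prediction of layer l."""
    E = 0.0
    for l in range(len(W)):
        eps = x[l] - W[l] @ x[l + 1]   # prediction error at layer l
        E += 0.5 * float(eps @ eps)    # squared spring length
    return E

rng = np.random.default_rng(2)
x = [rng.normal(size=6), rng.normal(size=3), rng.normal(size=2)]
W = [rng.normal(size=(6, 3)), rng.normal(size=(3, 2))]
print(total_energy(x, W))  # a single scalar summarizing total "tension"
```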

10:32The network’s fundamental objective is

10:34to minimize the total prediction error

10:36by finding the optimal configuration of

10:39neural activities and connection

10:41weights. As we’ll see shortly, when

10:43presented with training examples, the

10:46network settles into states that balance

10:48these elements to represent input output

10:51relationships as accurately as possible.

10:54So let’s determine precisely how neural

10:56activities and connection weights should

10:58adjust to reduce this total energy. The

11:01resulting mechanisms will align

11:03surprisingly well with known

Activity Update Rule

11:08neurophysiology. During the system’s

11:10evolution, it effectively rolls downhill

11:13on the energy surface defined in a

11:15high-dimensional space where each

11:17coordinate represents a parameter such

11:20as neural activity or synaptic weight.

11:23Mathematically, this downhill roll

11:26corresponds to moving in the direction

11:28of steepest descent opposite to what’s

11:30called the gradient of the function

11:32where the gradient vector points in the

11:34direction of steepest ascent and is

11:36composed of derivatives with respect to

11:39each parameter. Let’s isolate a specific

11:42neuron at layer L and determine how to

11:45adjust its activity to lower the

11:48energy. To find this derivative, let’s

11:51revisit our energy definition where we

11:53sum over all posts and add up the

11:56squared lengths of all springs. Since

11:59the derivative of a sum equals the sum

12:01of the derivatives, we can examine each

12:04post individually and ask if we slightly

12:07adjust the node height x sub i at layer

12:10L, how would the tension at any post

12:13change? Then we add up all these

12:16effects.

12:18First of all, notice that this neuron

12:21doesn’t affect the tension at any spring

12:23at layers upstream from L. So the

12:26derivative of all those terms is zero.

12:29Even within layer L itself, the only

12:32spring directly affected is the one

12:34connecting neuron I to its predicted

12:37value. By differentiating the square of

12:39the prediction error, we find that the

12:42rate of change of this neuron’s activity

12:44is the negative of its prediction error.

12:47This makes intuitive sense. When the

12:49error epsilon is positive, meaning the

12:52neuron’s activity exceeds its

12:54prediction, the spring wants to contract

12:57and pull the value down towards the

12:59prediction, creating the negative rate

13:01of change. Conversely, if the value is

13:04lower than predicted, the spring tension

13:07drives the neuron’s activity

13:09upward. But there is additional

13:11complexity to consider. When we adjust

13:13the height of the node at layer L, beyond

13:16affecting its own spring, it also

13:19influences the predicted activities at

13:21the layer below it. To compute the

13:23complete derivative, we must account for

13:25how a change in x sub i affects these

13:28downstream

13:30errors. Recall that the predicted

13:32activity of a neuron is given by the

13:34weighted sum of activities of upstream

14:37neurons. So when we change x sub i at

13:40layer L for each neuron at the layer

13:43below, it affects the predicted value

13:45proportionally to the weight connecting

13:47them. To compute the total derivative,

13:51we need to add up the prediction errors

13:53from the layer below scaled by the

13:56connection weights and combine them with

13:58our earlier result.
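Combining the two contributions, the activity update can be written out directly. A sketch under the same linear, no-nonlinearity assumption as before, with arbitrary sizes and seeded values:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical three-layer state; W[l] predicts layer l from layer l+1.
x = [rng.normal(size=6), rng.normal(size=3), rng.normal(size=2)]
W = [rng.normal(size=(6, 3)), rng.normal(size=(3, 2))]

# Prediction errors at each layer (linear predictions).
eps = [x[l] - W[l] @ x[l + 1] for l in range(2)]

# Activity update for the middle layer, read off from the derivative:
#   -eps[1]           : be pulled toward your own top-down prediction
#   +W[0].T @ eps[0]  : adjust to better predict the layer below
lr = 0.05
dx1 = -eps[1] + W[0].T @ eps[0]
x[1] = x[1] + lr * dx1  # one small step downhill on the energy surface
```

With a small enough step size, repeating this update should reduce the total prediction error, since `dx1` points opposite the energy gradient.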

14:00Notice that for some downstream neuron,

14:03if its activity is larger than its

14:05predicted value, to reduce the tension

14:07in the spring, we need to increase the

14:10prediction by moving the platform up,

14:13which can be done by moving the neuron

14:15at the layer above up as well if the

14:18weight coupling them is positive.

14:21Conversely, if the prediction error is

14:23negative, tension can be decreased by

14:26lowering the predicted value through

14:28decreasing the activity of the upstream

14:31neuron. This elegant equation tells us

14:33something profound about neural

14:35dynamics. Activity is adjusted trying to

14:38find a compromise between two competing

14:40influences. The first term drives the

14:43neuron to align with its top- down

14:46prediction while the second term

14:48encourages it to better predict the

14:50layer below. When these forces settle

14:53into balance, the neuron has found its

14:55optimal activity level, one that

14:57minimizes prediction errors both at its

15:00own layer and the layer it helps to

15:02predict. But before we move to adjusting

15:05the weights, let’s translate these

15:07update rules from abstract springs and

15:09platforms into actual

Neural Connectivity

15:13neurons. Notice that each neuron must

15:16receive its own prediction error as

15:18input with a negative sign. Earlier we

15:21treated this error as a kind of abstract

15:23subtraction, but this comparison must

15:26physically occur somewhere. We need a

15:28mechanism to store the prediction error

15:31so it can drive the activity

15:33changes. This is the fundamental insight

15:35of predictive coding. We need a separate

15:38population of neurons explicitly

15:41encoding prediction errors. In fact,

15:43this is the origin of the term

17:45predictive coding: neurons forming a

15:48code that represents prediction errors

15:51rather than signals themselves. In our

15:54framework, within each layer, we can

15:56imagine that alongside each

15:58representational neuron x sub i, which

16:01encodes predictions passed to the layer

16:03below, there exists a dedicated error

16:06neuron, a biological counterpart that

16:10encodes the deviation of x sub i from its

16:13predicted value. With this structure in

16:15mind, we can directly read off the

16:18required neural connectivity from our

16:20update rule. A representational neuron x

16:23sub i must be inhibited by its

16:26corresponding error neuron and excited

16:29by error neurons sending feedback

16:31signals from the layer below. This

16:34elegantly maps our mathematical

16:36formulation onto biological

16:39circuitry. Now we need to determine what

16:41drives the error neurons themselves. By

16:44definition, error neurons function as

16:46comparators. Calculating the difference

16:49between the activity of x_i and its

16:51predicted value which is given by the

16:54weighted combination of activities from

16:56the layer above. This equation reveals

16:58another set of required connections.
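As a concrete illustration, here is a minimal NumPy sketch of the comparator equation and the resulting activity update for a single layer. This is not from the video itself: the linear, bias-free setup, the layer sizes, and all variable names are my assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
x_above = rng.normal(size=(4,))          # representational neurons, layer l+1
x = rng.normal(size=(6,))                # representational neurons, layer l
x_below = rng.normal(size=(8,))          # layer l-1, which layer l predicts
W = 0.1 * rng.normal(size=(6, 4))        # top-down weights predicting layer l
W_below = 0.1 * rng.normal(size=(8, 6))  # top-down weights predicting layer l-1

def local_energy(x):
    # Squared prediction errors touching layer l
    e1 = x - W @ x_above
    e2 = x_below - W_below @ x
    return 0.5 * float(e1 @ e1 + e2 @ e2)

# Error neurons as comparators: excited by their partner representational
# neuron, inhibited by the top-down prediction from the layer above.
eps = x - W @ x_above              # error neurons at layer l
eps_below = x_below - W_below @ x  # error neurons at layer l-1

# Activity update read off the connectivity: each representational neuron
# is inhibited by its own error neuron and excited by error neurons
# feeding back from the layer below.
lr = 0.1
e_before = local_energy(x)
x = x + lr * (-eps + W_below.T @ eps_below)
```

Each small step of this update is a gradient-descent step on the local energy, so it lowers the summed squared prediction errors around layer l.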

17:01Error neurons receive excitatory input

17:04from their partner representational

17:06neurons within the same layer and

17:08inhibitory input from neurons in the

17:11layer above that communicate

17:13predictions. Perfect. Now we have two

17:16distinct populations of neurons with

17:18specific excitatory and inhibitory

17:20connections between them. When allowed

17:23to unfold according to its own intrinsic

17:25dynamics, this network will settle into

17:27an equilibrium which minimizes

17:29prediction errors across all layers. But

17:32everything we have discussed so far

17:34assumes fixed connection weights. To

17:37complete our model, we need to endow it

17:39with learning

Weight Update Rule

17:42capabilities. Like neural activities,

17:44synaptic weights are also movable parts

17:47in our system that evolve towards

17:49configurations minimizing the total

17:52energy. For a weight connecting neuron i

17:54in layer l to neuron k in layer l minus

17:57one, we can derive an update rule that

18:00decreases the total energy by taking

18:03steps opposite to the gradient

18:05direction. Since our energy function

18:08sums all squared prediction errors

18:10across the entire network when we change

18:13the weight coupling those two neurons,

18:15the only term that is affected is the

18:17prediction error at the post synaptic

18:20neuron. The derivative equals the

18:22negative of this prediction error

18:24multiplied by the presynaptic neuron's

18:27activity. This gives us an elegant

18:29update rule where weight changes are

18:32proportional to the product of the two

18:34activities. This rule strikingly

18:36resembles Hebbian plasticity in

18:38neuroscience. Neurons that fire together

18:41wire together. However, translating this

18:44rule to biological neural connectivity

18:47reveals a challenge. Predictions flow

18:49from top to bottom with the

18:51representational neuron i connecting to

18:53the neuron k at the layer below. When

18:57prediction errors flow upward from this

19:00error neuron back to neuron i, our

19:02derivation requires using the same

19:04synaptic weight. But in biological

19:07networks, these are physically distinct

19:10synapses and maintaining the perfect

19:12symmetry would require instantaneous

19:15communication between them. A phenomenon

19:17not observed in the brain. This

19:20so-called weight transport problem

19:22affects both back propagation and

19:24predictive coding.
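To make the two opposing updates concrete, here is a minimal NumPy sketch (linear case; the variable names are my assumptions, not the video's) showing that each synapse's local update is the same product of the two activities, just transposed:

```python
import numpy as np

rng = np.random.default_rng(0)
x_pre = rng.normal(size=(4,))  # presynaptic (higher-layer) activities
eps = rng.normal(size=(3,))    # postsynaptic prediction error
lr = 0.01

# Feedforward (prediction-carrying) synapses: postsynaptic error
# times presynaptic activity.
dW_fwd = lr * np.outer(eps, x_pre)   # shape (3, 4)

# Feedback (error-carrying) synapses: the same product, with the
# presynaptic and postsynaptic roles swapped.
dW_back = lr * np.outer(x_pre, eps)  # shape (4, 3)

# The two local updates are exact transposes of each other, so any
# alignment between the two pathways is preserved as learning proceeds.
assert np.allclose(dW_back, dW_fwd.T)
```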

19:26However, closer examination of the

19:28weight dynamics suggests a possible

19:30resolution. For the two opposing

19:33synapses, the update rule is essentially

19:36identical, differing only in which

19:38neuron is presynaptic and which is post

19:40synaptic. Consequently, feedback and

19:43feed forward synapses, which should

19:46theoretically match, may independently

19:48converge to similar values through

19:50similar update processes. In this way,

19:53the very physiology of the update

19:56naturally mitigates the weight transport

19:58problem. I should note that in real

20:01models though there is a nonlinear

20:03activation function which we have been

20:05sweeping under the rug. When these

20:07nonlinearities are included, the updates

20:10for the two synapses are not

20:11mathematically identical. Fortunately,

20:14research suggests that perfect symmetry

20:17may not be essential. Even when feed

20:19forward and feedback synapses learn

20:21independently with slightly different

20:23update rules, the approximate symmetry

20:26that emerges is sufficient for the

20:28network to function effectively. This

20:31learning rule integrates seamlessly with

20:33the activity dynamics we derived

20:35earlier. As neural activities settle to

20:38minimize prediction errors for specific

20:40inputs, the weights simultaneously adapt

20:43to encode statistical patterns across

20:46many

20:47experiences. Together, these processes

20:49enable the network to continuously

20:51refine its internal model, closely

20:54mimicking how biological neural circuits

20:56learn from experience.
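The combined dynamics can be sketched for a single pair of layers: activities relax with the input clamped, then the weights take a Hebbian-like step at the settled state. This is a toy linear setup with assumed names and learning rates, not the video's exact model:

```python
import numpy as np

rng = np.random.default_rng(1)
x_top = rng.normal(size=(4,))      # higher-layer activities (free to move)
x_bottom = rng.normal(size=(8,))   # sensory layer (clamped to the input)
W = 0.5 * rng.normal(size=(8, 4))  # top-down prediction weights
lr_x, lr_w = 0.1, 0.001

def energy():
    eps = x_bottom - W @ x_top
    return 0.5 * float(eps @ eps)

e_start = energy()

# Inference: with the input clamped, the free activities relax toward
# the energy minimum, driven by fed-back prediction errors.
for _ in range(50):
    eps = x_bottom - W @ x_top
    x_top += lr_x * (W.T @ eps)
e_relaxed = energy()

# Learning: a Hebbian-like step at the settled state, postsynaptic
# prediction error times presynaptic activity.
eps = x_bottom - W @ x_top
W += lr_w * np.outer(eps, x_top)
e_learned = energy()
```

Both phases descend the same energy: relaxation moves the activities, learning moves the weights, and each step lowers the squared prediction error further.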

Putting all together

21:00Let’s now put everything together and

21:03see how this framework operates as a

21:05complete system. If we allow the network

21:08to freely adjust every parameter, both

21:11neural activities and the weights, it

21:13would naturally settle to a zero energy

21:16state. However, this solution would be

21:18trivial and not perform any meaningful

21:21computation. In practical

21:23implementations of predictive coding and

21:25likely in the brain itself, certain

21:27neurons are kind of clamped to specific

21:30values. The bottommost layer, for

21:32example, cannot vary freely since those

21:35neurons are directly driven by sensory

21:38input. This constraint forces the

21:41network to find an optimal compromise.

21:46When presented with a training example,

21:48the network undergoes an iterative

21:50relaxation process. Neural activities

21:53and weights adjust according to our

21:55local update rules until reaching an

21:57equilibrium configuration, an energy

22:00minimum that encodes information about

22:02the training example within the network

22:05structure. Repeating this process across

22:08diverse examples gradually refines the

22:10network’s internal model of the world.

22:13Through this process, the network

22:15develops compressed representations of

22:17data. This can be leveraged for

22:19generative tasks when we unclamp the

22:22output layer, freeze the weights, and

22:25let the network run to equilibrium to

22:28synthesize new images consistent with

22:30its learned model.
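A minimal sketch of this generative mode (linear toy case; the names and sizes are my assumptions): clamp the top layer to a latent code, freeze the weights, and let the free layer settle.

```python
import numpy as np

rng = np.random.default_rng(2)
W = 0.1 * rng.normal(size=(8, 4))  # frozen top-down weights after training
z = rng.normal(size=(4,))          # top layer clamped to a latent code
x = np.zeros(8)                    # output layer unclamped, starts blank

# With the weights frozen, the free layer follows its own error signal
# and settles onto the top-down prediction.
lr = 0.5
for _ in range(100):
    eps = x - W @ z                # local prediction error
    x -= lr * eps                  # inhibited by its own error neuron

# At equilibrium the unclamped layer reproduces the model's prediction.
assert np.allclose(x, W @ z, atol=1e-8)
```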

22:33For supervised learning tasks like

22:35classification, we also clamp the

22:37topmost layer to the desired label,

22:40allowing the network to discover optimal

22:42input to output mappings encoded in its

22:45connection

22:46weights. When classifying new inputs, we

22:49simply freeze the weights, let the

22:52system settle into equilibrium, and read

22:54off the label from the equilibrium

22:57activity of neurons at the top layer.
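This inference procedure can be sketched for a small three-layer toy network. The linear setup and all names, sizes, and rates are my assumptions; with trained weights, the settled top-layer activity would encode the label.

```python
import numpy as np

rng = np.random.default_rng(3)
n_in, n_hid, n_out = 8, 6, 3
W0 = 0.1 * rng.normal(size=(n_in, n_hid))   # predicts input from hidden
W1 = 0.1 * rng.normal(size=(n_hid, n_out))  # predicts hidden from top

x_in = rng.normal(size=(n_in,))  # clamped to the new input
x_hid = np.zeros(n_hid)          # free
x_top = np.zeros(n_out)          # free; settles to the label code

# Weights frozen: only the activities relax to equilibrium.
lr = 0.1
for _ in range(200):
    eps0 = x_in - W0 @ x_hid     # error at the clamped input layer
    eps1 = x_hid - W1 @ x_top    # error at the hidden layer
    x_hid += lr * (-eps1 + W0.T @ eps0)
    x_top += lr * (W1.T @ eps1)

# Read the label off the settled top-layer activity.
label = int(np.argmax(x_top))
```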

23:00The key advantage of predictive coding

23:02lies in its locality. While in back

23:04propagation, all adjustments serve this

23:07single goal of minimizing global output

23:09error which must be transmitted

23:12throughout the entire network, in

23:14predictive coding, each neuron and

23:16synapse only responds to local

23:19prediction errors. How much a given

23:21layer deviates from its own prediction

23:24and how well it predicts its neighbor.

23:27This biological plausibility and

23:29accordance with neurophysiological data

23:32such as observed plasticity rules

23:34suggest that predictive coding might

23:36very well be the key to understanding

23:38how our own brains learn so effectively.

23:42We can bring those insights into

23:43artificial intelligence as well. The

23:46local autonomy makes the algorithm

23:48extremely parallelizable and in certain

23:51settings more efficient than back

23:52propagation.

23:54Theoretical considerations suggest that

23:57resulting updates may actually lead to

24:00better solutions than back propagation.

24:02While backprop focuses solely on the

24:05overall output loss, potentially

24:07overriding previously learned

24:09information, a phenomenon known as

24:11catastrophic forgetting, predictive

24:13coding’s local update rules better

24:16preserve existing knowledge structure.

24:18To wrap up, let’s summarize what we have

24:21explored today. By framing inference and

24:24learning as an energy minimization

24:26problem where each layer predicts the

24:28activity of the layer below, we have

24:30derived an algorithm that operates with

24:33complete local autonomy. Unlike back

24:36prop which requires global coordination

24:38and separate phases for computation and

24:40learning, predictive coding emerges as a

24:43continuous parallel process where

24:46neurons simultaneously predict, compare

24:49and adapt. This approach not only aligns

24:51with the biological constraints of

24:53neural tissue but potentially offers

24:56computational advantages for artificial

24:58models.

25:00As neuroscience and artificial

25:02intelligence continue to inform each

25:04other, predictive coding stands as a

25:06compelling bridge between the remarkable

25:08learning capabilities of biological

25:10brains and the next generation of neural

25:12network

25:14architectures. Speaking of efficient

Brilliant

25:16learning, if you would like to gain a

25:18deeper understanding of the foundational

25:20concepts behind today’s ideas, you’re

25:22going to love today's sponsor,

25:25brilliant.org.

25:27Brilliant helps you master STEM topics

25:29by combining interactive visualizations

25:32with hands-on problem solving. Their

25:34engaging courses allow you to learn by

25:37doing and build intuition, breaking down

25:39challenging concepts into bite-sized

25:41lessons. Especially relevant to this

25:44video is their course titled

25:46Introduction to Neural Networks, which

25:48builds up from the definition of an

25:50artificial neuron to hidden layers and

25:53activation functions, giving you

25:55practical experience with the building

25:57blocks we discussed. Brilliant offers a

26:00great collection of courses across

26:01mathematics, physics, and computer

26:03science. Whether you are a beginner

26:06building core knowledge or an expert

26:08exploring new domains, Brilliant has

26:10something for everyone.

26:12If you’re ready to take your learning to

26:14the next level, head to

26:17brilliant.org/ArtemKirsanov to get a

26:1930-day free trial of everything

26:21Brilliant has to offer, plus a 20%

26:24discount on an annual subscription. If you

Outro

26:27like the video, share it with your

26:28friends, subscribe to the channel if you

26:30haven’t already, and press like button.

26:33Stay tuned for more neuroscience and

26:34machine learning topics coming up.

Should we be worried about Artificial Intelligence?


0:11definitely not we shouldn’t be worried

0:14about artificial intelligence because we

0:17need to remember that this is only the

0:20algorithm this is the application the

0:23device behavior but it’s it’s it’s not

0:28intelligence by itself it is the

0:31implementation of the developer's vision of

0:35how it should work in that way so we

0:39rather should be worried about the human

0:42implementation right because that’s like

0:45with guns; there's quite a nice

0:47parallel here: not guns kill people

0:51but people kill people in that sense not

0:56artificial intelligence is harmful to

0:59people but people could use artificial

1:03intelligence to be harmful to people

FULL DISCUSSION: Google’s Demis Hassabis, Anthropic’s Dario Amodei Debate the World After AGI | AI1G

0:03Welcome everybody and welcome to those of you joining us on live stream um to this conversation that I have to say I

0:09have been looking forward to for months. Uh I was lucky enough to

0:14moderate a conversation between Dario Amodei and Demis Hassabis last year in Paris. Um

0:20which I’m afraid got most attention for the fact that you two were squashed on a very small love seat while I sat on an

0:26enormous sofa which was probably my screw up. But I said at that point that this was for me like, you know, chairing

0:32a conversation between the Beatles and the Rolling Stones. And you have not had a conversation on stage since. So this

0:38is, you know, the sequel, the the the, you know, the bands get together again. I’m delighted. You need no introduction.

0:44Uh the title of our conversation is the day after AGI, which I think is perhaps slightly getting ahead of ourselves

0:50because we should probably talk about how quickly and easily we will get there. And I want to do a bit of a sort

0:55of update on that and then talk about the consequences. So firstly on the timeline Dario you last year in Paris

1:01said we’ll have a model that can do everything a human could do at the level of a Nobel laureate across many fields

1:07by 2026 or 2027. We're in 2026. Uh do you still stand by that timeline?

1:12So you know it’s always hard to know exactly when something will happen but but I don’t I don’t think that’s going

1:17to turn out to be that far off. So um you know the the the mechanism whereby

1:22I imagined it would happen is that we would make models that were good at coding and good at AI research and we

1:29would use that to produce the next generation of model and speed it up to create a loop that would that would uh

1:36increase the speed of model development. We are now in terms of you know the models that write code I have engineers

1:44within anthropic who say I don’t write any code anymore. I just I just let the model write the code. I edit it. I do

1:50the things around it. I think I don’t know. We might be six to 12 months away

1:56from when the model is doing most maybe all of what SWEs do end to end. And then

2:02it’s a question of how fast does that loop close. Not every part of that loop is something that can be sped up by AI,

2:08right? There’s like chips, there’s manufacturer of chips, there’s training time for the model. So it’s, you know, I

2:14I think there’s a lot of uncertainty. It’s easy to see how this could take a few years. I don’t I I it’s very hard

2:21for me to see how it could take longer than that. Um but if if I had to guess, I would guess that this goes faster than

2:28people imagine. And that that key element of code and increasingly research going faster than we imagine.

2:36That’s going to be the key driver. It’s it’s really hard to predict again how much that exponential is going to speed

2:41us up, but but something fast is going to happen. So you demis were a little more cautious last year. You said a 50%

2:48chance of a system that can exhibit all the cognitive capabilities humans can by the end of the decade. Um clearly in

2:55coding as Dario says it’s been remarkable. What is your sense of do you stand by your prediction and what’s

3:01changed in the past year? Yeah, look I I I I think I’m still on the same kind of timeline. And I think there has been remarkable progress. But

3:08I think some areas of uh uh um kind of engineering work, coding or so you could

3:14say mathematics are a little bit easier to see how they would be automated partly because they’re verifiable what

3:20the output is. Um some areas of natural science are much harder to do than that. You won’t necessarily know if the

3:26chemical compound you’ve built or this prediction about physics is correct. It may be you may have to test it

3:31experimentally and that will all take longer. So uh I also think there are some missing capabilities at the moment

3:37uh in terms of like not just solving existing conjectures uh or existing problems but actually coming up with the

3:44question in the first place or coming up with the theory or the hypothesis. I think that’s much much harder and I think that’s the highest level of

3:51scientific creativity and it’s not clear. I think we will have those systems. I don’t think it’s impossible but I think there may be one or two

3:57missing ingredients. Um, it remains to be seen how, you know, first of all, can this self-improvement loop that we’re

4:03all working on actually close without a human in the loop. I think there are also risks to that to that kind of

4:08system, by the way, which we should discuss and I’m sure we will. But the the but but that could speed things up

4:13if that kind of system does work. We’ll get to the risks in a minute. But one other change I think of the past year has been a kind of change in the

4:20pecking order of the race, if you will. This time a year ago, we just had the deepseek moment and everyone was

4:25incredibly excited about what happened there and there was still a sense, you know, that Google Deep Mind was kind of

4:32lagging open AI. I would say that now uh it’s looking quite different. I mean, they’ve declared code red, right? Um

4:38it’s been quite a quite a year. So, talk me through what specifically you’ve been

4:43surprised by and how well you’ve done this year and whether you think and then I’m going to ask you about the lineup.

4:48Well, look, I I think we were I was always very confident we uh would get back to sort of the top of the the

4:54leaderboards and and the SOTA type of models across the board because I think

5:00broadest research bench and it was about kind of marshalling that all together and um getting the intensity and focus

5:06and the kind of startup mentality back to the whole organization and it’s been a a lot of work and um but I think we’re

5:13and we’re still a lot of work to do um but I think you can start seeing the the the the the you know the the kind of um

5:19the progress that’s been made in both the models with Gemini 3 but also uh on the product side with Gemini app getting

5:26increasing uh market share. So I feel like uh we’re making great progress um but there’s a ton more work to do um and

5:33you know we’re bringing to bear Google DeepMind’s kind of like the engine room of Google where we’re getting used to um

5:39shipping our models more and more more quickly into the product surfaces. One question for you Dario on on this aspect

5:44of it because you’ve just saw you’re in the process of you know a new round at an extraordinary valuation too. Um but

5:50you are unlike them as a let’s call it an independent model maker and there is

5:55I think an increasing concern that the independent model makers will not be able to continue for long enough until

6:01you get to where the revenues come in. Um it’s made very openly about open AI but talk me through how you think about

6:06that and then we’ll get to the AGI itself. Yeah, I mean that you know I think I think I think how we think about that is you know as we’ve built better

6:14and better models there’s been a kind of exponential relationship not only between how much compute you put into

6:20the model and how cognitively capable it is but between how cognitively capable it is and how much revenue it’s able to

6:27generate. So our revenue has grown 10x a year for the last three years: from 0 to 100 million in 2023, 100 million to a billion

6:34in 2024 and 1 billion to 10 billion in 2025. And so th those revenue numbers,

6:39you know, I don’t know if that curve will literally continue. It would be crazy if it did. Um, but those numbers

6:44are starting to get not too far from, you know, the sca the scale of the largest companies in the world. So

6:50there’s there’s there’s always uncertainty. You know, we’re trying to bootstrap this from nothing. It’s it’s a crazy thing, but but I have confidence

6:57that if we’re able to produce the best models in the things that we focus on, um, uh, then I think then I think things

7:03will go well. And you know, I I will I will generally say, you know, I think I think it’s been a good year for both

7:09both Google and Anthropic. And I think the thing we actually have in common is that they’re you know, they’re both kind

7:15of kind of kind of companies that are, you know, or the research part of the company that are kind of led by researchers who focus on the models who

7:23focus on solving important problems in the world, right? Who have these kind of hard scientific problems as a as a north

7:30star. and and and I think those are the kind of companies that are going to succeed going forward and you know I

7:37think I think we share that between us very much. Uh I’m I’m going to resist the temptation to ask you what will happen to the companies that are not led

7:43by researchers uh because I know you won’t answer it. But let’s then go on to uh the

7:49predictions area now and this we are supposed to be talking about the day after AGI but let's talk about closing

7:54the loop. This the odds that you will get models that will close the loop and be able to you know power themselves if

8:01you will because that’s the really the crux for the the winner takes all threshold approach. Do you still believe

8:07that we are likely to see that or is this going to be much more of a normal technology where followers and catchup

8:13can can compete? Well, look, I definitely don’t think it’s going to be a normal technology. So, I mean, there are aspects already

8:20that as Dario mentioned that it’s already helping with our coding and and some aspects of research. The full

8:26closing of the loop though, I think is an unknown. I mean, I think it’s possible to do. you may need AGI itself

8:33to be able to do that in some domains again where there these domains you know where there’s there’s more messiness

8:38around them it's not so easy to verify your answer very quickly um there's kind of NP-hard domains so as soon as you

8:46start getting more and you know I also include by the way for AGI physical AI robotics working all of these kind of

8:51things and then you’ve got you know hardware in the loop uh that may uh limit how fast the self-improvement

8:57systems can work but I think in coding and mathematics and these kind areas. I can definitely see that working. And

9:03then the question is more theoretical one is what is the limit of engineering and maths uh to solve uh the natural

9:09sciences. Dario, you um last year, I think it was last year that you published Machines of Loving Grace um

9:16which was a very I would say upbeat essay about the potential that that you

9:22were going to see unfold and you were talking about you know a a what was it a country of geniuses in a

9:29data center. I'm told that you are working on an update to this a new essay so you know wait for it guys it's not

9:36out yet but it is coming out but perhaps you can give us a sort of a sneak preview of what a year later your big

9:43take is going to be. Yes. So, you know, my take my take has not changed. It has always been my view

9:48that, you know, AI is going to be incredibly powerful. I think Demis and I, you know, kind of agree on that. It’s just a question of exactly when. Um, uh,

9:56and because it’s incredibly powerful, it will do all these wonderful things like the ones I talked about in Machines of Loving Grace. It, you know, will help us

10:03cure cancer. It may help us to eradicate tropical diseases. It will help us understand the universe, but

10:10that there are these, you know, immense and grave risks that, you know, not that we can’t address them. I’m not a doomer,

10:16but but that, you know, we we we we we need to think about them and we need to address them. And I wrote Machines of

10:21Loving Grace first. I’ I’d love to give some uh a sophisticated reason why I wrote that first, but it was just that

10:27the the positive essay was easier and more fun to write than than the negative essay. Um, so, you know, I finally spent

10:34some time on vacation and I was able to write an essay about the risks. Even when I’m writing about the risks, um, I

10:40I I try, you know, I I I’m like an optimistic person, right? So, even as I’m writing about these risks, I I I

10:48wrote about it in a way that was like, how do we overcome these risks? How do we have a battle plan to fight them? And

10:53and and the way I the way I framed it was, you know, there's this scene from Carl Sagan's Contact, the movie version

11:00of it, where, you know, they they kind of discover alien life and this international panel that’s like

11:05interviewing um uh you know, people to, you know, to be humanity’s representative to meet the alien. Um uh

11:12and uh one one of the questions they ask one of the candidates is, you know, if you could ask the aliens any one

11:17question, what it would what what what would it be? And one of one of the characters says, "I would ask: How did

11:23you do it? How did you manage to get through this technological adolescence

11:28without destroying yourselves? How did you make it through?” And and and ever since I saw it, it was like 20 years

11:34ago, I think I saw that movie. It’s kind of stuck with me. And that that’s the frame that I use, which is which is

11:39that, you know, we we’re we’re we are knocking on the door of these incredible

11:44capabilities, right? the the ability to build basically machines out of sand,

11:49right? I think I think it was inevitable that the instant we started working with fire. Um uh but but how we handle it is

11:57is not inevitable. And so I think the next few years we’re going to be dealing with, you know, how do we keep these

12:04systems under control that are highly autonomous and smarter than any human? How do we make sure that individuals

12:12don’t misuse them? Right? I have worries about things like bioteterrorism. How do we make sure that nation states don’t

12:18misuse them? That's why I've been so concerned about, you know, the CCP and other authoritarian

12:24governments. What are the economic impacts? Right? I’ve talked about labor displacement a lot. And and you know,

12:29what what haven’t we thought of which which in many cases, you know, maybe may be the the hardest thing to deal with at

12:35all. Um, so, you know, I I’m I’m thinking through how to address those

12:40risks. you know, for for each of these, it’s a mixture of things that we individually need to do as as leaders of

12:46the of of of the companies and that we can do working together. And then there there’s going to need to be some role

12:52for wider societal institutions like the like the government in in in addressing all of these. But, you know, I I I just

12:58feel this urgency that, you know, every day, you know, there’s there’s all kinds of crazy stuff going on in the outside

13:04world, outside AI, right? Um but but you know my my my view is this is happening

13:09so fast and is such a crisis we should be devoting almost all of our effort to

13:14thinking about how to get through this. So I can't decide whether I'm more surprised that (a) you take a vacation, (b)

13:20when you take a vacation you think about the risks of AI, and (c) that your essay is framed in terms of are we going to get

13:26through the technological adolescence of this technology without destroying ourselves. So, I’m my head is slightly

13:31spinning, but you then and I can’t wait to read it, but you you you mentioned several areas that can guide the rest of our conversation. Let’s start with jobs

13:39um because you actually have been very outspoken about that and I think you said that half of entry-level white

13:44next one to five years. But I’m going to turn to you Demis because so far we

13:50haven’t actually seen any discernable impact on the labor market. Um, yes, unemployment has ticked up in the US,

13:56but all of the kind of economic studies I’ve looked at and that we’ve written about suggest that this is overhiring

14:02post-pandemic, that it's really not AI-driven. If anything, people are hiring to build out AI capability.

14:10Do you think that this will be as you know economists have always argued that

14:15it’s not a lump of labor fallacy that actually there will be new jobs created because so far the evidence seems to suggest that. Yeah, I mean I I think in

14:23um the near term that is what will happen. The kind of normal evolution when a breakthrough technology arrives.

14:28So some jobs will get disrupted but I think new even more valuable perhaps more meaningful jobs will get created.

14:34Um I think we're going to see this year the beginnings of maybe impacting the junior level entry level kind of jobs

14:41internships this type of thing. And I think there is some evidence I can feel that ourselves maybe like a slowdown in

14:47hiring in that. But I think that can be more than compensated by the fact there are these amazing creative tools out

14:52there pretty much available for everyone uh almost for free that if you know I

14:57was to talk to a class of undergrads right now I would be telling them to get

15:02really unbelievably proficient with these tools. I think to the extent that even those of us building it, we’re so

15:08busy building it, it’s hard to have also time to really explore the almost the capability overhang even today’s models

15:14and products have let alone tomorrow’s and I think that uh can be maybe better than a traditional internship would have

15:20been in terms of you sort of leapfrogging uh yourself to be useful

15:26in a profession. So I think there’s that’s what I see happening probably in the next five years. Um maybe we again

15:32slightly differ on time scales on that but I think what happens after AGI arrives that’s a different question cuz

15:37I think really we would be in uncharted territory at that point. Do you think it’s going to take longer than you thought last year when you said

15:43half of all white-collar jobs? I have about the same view. I I actually agree with you and with Demis

15:48that at the time I made the comment there was no impact on the labor market. I wasn’t saying there was an impact on

15:54the labor market at that moment. Um, you know, now I think maybe we’re starting to see just just the little beginnings

16:00of it, you know, in software in coding. I even see it within within anthropic where, you know, I you know, I can look

16:08forward I can kind of look forward to a time where on the more junior end and then

16:14on the more intermediate end, we actually need less and not more people.

16:21in a in a you know, sense in a sensible way. Um I, you know, one to five years

16:28as of six months ago, I would stick with that. You know, if you kind of, you know, connect this to what I said

16:33before, which is, you know, we we might have AI that’s better than humans at at everything in, you know, maybe one to

16:40two years, maybe a little longer than that. The those don’t seem to line up. The reason is that there’s this there’s

16:47this lag and there’s this replacement thing, right? I I know the labor market is adaptable, right? Just like you know

16:5480% of people used to do farming you know farming got automated and then they became factory workers and then

16:59knowledge workers. So you know there is some level of adaptability here as well

17:05right we should be economically sophisticated about how the labor market works but my worry is as this

17:10exponential keeps compounding and I don’t think it’s going to take that long again somewhere between between a year

17:17and five years it will overwhelm our ability to adapt. I think I may be saying the same thing Demis is just

17:23factored out of that that difference we have about timelines which I think ultimately comes down to how how fast

17:28you close the loop on code. How much confidence do you have that governments get the scale of this and

17:35are beginning to think about what policy responses they need to have? I don't think that there's

17:42anywhere near enough work going on about this. I’m I’m constantly surprised even when I meet economists at places like

17:47this that they’re not more of uh professional economist professors thinking about what happens um and not

17:54just sort of on the way to AGI but um uh even if we get all the technical things right that Dario was talking about and

18:00the job displacement is one question. We're worried about the economics of that, but maybe there are ways to distribute this new productivity, this

18:06new wealth, more fairly. I don't know if we have the right institutions to do that, but that's what should happen; at that point we

18:13may be in a post-scarcity world. But then there are, even among the things that keep me up right now, even bigger questions than that, to do

18:19with meaning and purpose and a lot of the things that we get from our jobs, not

18:25just economically. That's one question. But I think that may be easier to solve, strangely, than what happens to the

18:31human condition and humanity as a whole. And I'm also optimistic we'll come up with new answers there. We do a

18:36lot of things today, from extreme sports to art, that aren't necessarily directly to do with economic gain. So I

18:44think we will find meaning, and maybe there'll be even more sophisticated versions of those

18:50activities. Plus, I think we'll be exploring the stars. So there'll be all of that to factor in as well in

18:57terms of purpose. But I think it's really worth thinking about now; even on my timelines of five to 10 years away,

19:03that isn't a lot of time before this comes. How big do you think is the risk of a popular backlash against AI that

19:11will somehow cause governments to do what, from your perspective, might

19:16be stupid things? Because I'm thinking back to the era of globalization in the 1990s, when

19:22there was indeed some displacement of jobs. Governments didn't do enough. The public backlash was such that we've

19:29ended up where we are now. Do you think there is a risk that there will be a growing antipathy

19:36towards what you are doing and your companies in the body politic? I think there's definitely a risk. I

19:42think that's kind of reasonable. There's fear, and there's worry about these things like jobs and

19:47livelihoods. There are a couple of things; it's going to be very complicated over the next few years, I

19:53think, geopolitically, but there are also the various factors here. We want to, and we're trying to do this with AlphaFold

19:59and our science work and Isomorphic, our spinout company: solve all disease, cure diseases, come up with new energy sources.

20:06I think as a society it's clear we'd want that. I think maybe the balance of what the industry is doing is not tilted enough

20:11towards those types of activities. I think we should have a lot more examples, and I know Dario agrees with me, of AlphaFold-like things that do

20:19unequivocal good in the world. And I think it's incumbent on the industry, and on all of us leading

20:24players, to show that more, to demonstrate it, not just talk about it. But then it's

20:30going to come with these other attendant disruptions. And I think the other issue is the geopolitical

20:36competition. There's obviously competition between the companies, but also between the US and China primarily. So unless

20:41there's international cooperation, or understanding, around this, which I think would be good, in terms of

20:47things like minimum safety standards for deployment, and I think Dario would agree on that as well, I think it's vitally needed. This technology is going to be

20:54cross-border. It's going to affect everyone. It's going to affect all of humanity. Actually, Contact is one of

20:59my favorite films as well. Funnily enough, I didn't realize it was yours too, Dario. But I think

21:06those kinds of things need to be worked through. And if we can, maybe it would be good to have a

21:12slightly slower pace than we're currently predicting, even on my timelines, so that we can get this right as a society.

21:18But that would require some coordination. I prefer your timelines. Yes, I think I'll

21:24concede. But Dario, let's turn to this now, because since we last spoke in Paris, the geopolitical

21:31environment has, if anything, gotten more complicated, mad, crazy, whatever phrase you want to use.

21:38Secondly, the US has a very different approach now towards China. It's a much more no-holds-barred, go as

21:44fast as we can, but then sell chips to China. And so you've got a

21:50different attitude in the United States. You've got a very strange relationship between the United States

21:56and Europe right now, geopolitically. Against that, I hear you talk about how it would be nice to have a CERN-

22:02like organization; it's a million years from where we are in the real world. So in the real world, have the

22:08geopolitical risks increased, and what, if anything, do you think should be done about that? And the administration

22:14seems to be doing the opposite of what you were suggesting. Yeah, look, we're just trying to do the best we can.

22:20we’re just we’re just one company and we’re we’re trying to operate in, you know, the the environment that exists, no matter how no matter how crazy it is.

22:26But, you know, I think I think at least my policy recommendations haven’t changed that, you know, not selling

22:33chips is one of the, you know, one of the one of the biggest things we can do

22:38um to, you know, make sure that we have the time to handle this. Um, you know,

22:44you know, I said I said before, you know, I I I prefer Demis’ timeline. I wish we had 5 to 10 years, you know, so

22:51it’s it’s possible he’s just right and I’m just wrong, but but assume I’m right and it can be done in one to two years.

22:56Why can’t we slow down to to Demis’ timeline? Well, you could just slow down. Well, no. The but but but the reason the

23:02reason we the reason we can’t do that is is you know because we have geopolitical adversaries building the

23:10same technology at a similar pace, it’s very hard to have an enforcable agreement where they slow down and we

23:16slow down and and so if we can just if we can just not sell the chips, then

23:22this isn’t a question of competition between the US and China. This is a question of competition between me and

23:28Demis, which I'm very confident we can work out. And what do you make of the logic of the administration, which as I understand it

23:34is: we need to sell them chips because we need to bind them into US supply chains?

23:40I think it's a question not just of time

23:46scale but of the significance of the technology. If this were telecom or

23:52something, then there's all this stuff about proliferating the US stack, wanting to build our chips

23:58around the world to make sure that

24:04these random countries in different parts of the world build data centers that have Nvidia chips instead

24:10of Huawei chips. I think of this more as a decision: are we going to

24:17sell nuclear weapons to North Korea because that produces some profit for Boeing?

24:24Where we can say, okay, these cases were made by Boeing, the US is winning, this is great.

24:29I just think that analogy should make clear how I see this

24:34trade-off: I just don't think it makes sense. And we've done a lot

24:40of more aggressive stuff towards China and other players that I

24:45think is much less effective than this one measure. One more area for me, and then I hope

24:50we'll have time for a question or two. The other area of potential risk that doomers worry about is a kind of all-

24:56powerful, malign AI. And I think you've both been somewhat skeptical of the doomer approach, but in the last

25:02year we have seen these models showing themselves to be capable of deception and duplicity. Do you

25:11think differently about that risk now than you did a year ago? And is there something about the

25:17way the models are evolving that means we should put a little bit more concern on that? Yeah, since the

25:22beginning of Anthropic, we've thought about this risk. Our research at the

25:28beginning was very theoretical. We pioneered this idea of mechanistic interpretability, which

25:33is looking inside the model, looking inside its brain, trying to understand why it does

25:39what it does, the way human neuroscientists, which we actually both have a background in, try to

25:45understand the brain. And I think as time has gone on, we've increasingly documented the

25:52bad behaviors of the models when they emerge, and we are now working on trying to address them with mechanistic

25:58interpretability. So I've always been concerned about these risks. I've talked to

26:04Demis many times. I think he has also been concerned about these risks. I have definitely been, and I

26:11would guess Demis as well, although I'll let him speak for himself, skeptical of doomerism, which says we're

26:17doomed, there's nothing we can do, or that this is the most likely outcome. I think this is a risk; a risk that, if

26:24we all work together, we can address. We can learn through science to

26:29properly control and direct these creations that we're building. But if we build them poorly,

26:36if we're all racing and we go so fast that there are no

26:42guardrails, then I think there is a risk of something going wrong. So I'm going to give you a chance to answer that in the context of a slightly broader

26:48question, which is: over the past year, have you grown more confident in the

26:53upside potential of the technology, the science, all of the areas that you have

26:59talked about a lot, or are you more worried about the risks that we've been discussing? I've been working on this for 20-plus

27:04years, so we already knew. Look, the reason I've spent my whole career on AI is the upside of building basically

27:11the ultimate tool for science and understanding the universe around us. I've been obsessed with that since I was a kid, and building AI

27:18should be the ultimate tool for that, if we do it in the right way. The risks we've also been thinking about since the start, at least the start

27:25of DeepMind 15 years ago. We sort of foresaw that if you got the upsides, it's a dual-purpose technology,

27:31so it could be repurposed by, say, bad actors for harmful ends. So we've needed to think about that all the way through.

27:37But I'm a big believer in human ingenuity. The question is having the time and the focus and all the best

27:45minds collaborating on it to solve these problems. I'm sure if we had that, we

27:50would solve the technical risk problem. It may be that we don't have that, and then that will introduce risk, because it'll

27:55be fragmented; there'll be different projects and people racing each other, and then it's much harder

28:00to make sure these systems that we produce will be technically safe. But I feel like that's a very tractable

28:07problem if we have the time and space. I want to make sure there's one question. Gentlemen, keep it

28:13very short, because we've got literally two minutes.

28:19Thanks very much. I'm Philip, co-founder of StarCloud, building data centers in space. I wanted to ask a slightly

28:25philosophical question. The strongest argument for doomerism, to me, is the Fermi paradox, the idea that we

28:30don't see intelligent life in our galaxy. I was wondering if you have any thoughts. Yeah, I've thought a lot about that. That can't be the reason,

28:35because we should see all the AIs. So, just so everyone knows, the

28:41idea is: well, it's sort of unclear why that would happen, right? If the reason for the Fermi paradox is that there

28:46are no aliens because they get taken out by their own technology, we should be seeing paper clips coming towards us

28:52from some part of the galaxy. And apparently we don't. We don't see any structures, Dyson spheres, nothing,

28:57whether AI or biological. So to me there has to be a different answer to the Fermi paradox. I

29:03have my own theories about that, but it's out of scope for the next minute. But

29:09my prediction, my feeling, is that we're past the great filter. It was probably multicellular life, if I had

29:15to guess; it was incredibly hard for biology to evolve that. So there isn't a comfort

29:22about what's going to happen next. I think it's for us as humanity to write what happens next. This could be a great discussion,

29:28but it's out of scope for the next 36 sessions. But what isn't: 15 seconds each. When we meet again, I hope next year,

29:34the three of us, which I would love, what will have changed by then? Well, I think the biggest thing to

29:41watch is this issue of AI systems building AI systems, how that goes,

29:47whether that goes one way or another. That will determine

29:52whether it's a few more years until we get there, or

29:58whether we have wonders and a great emergency in front of us that we have to face.

30:04AI systems building AI systems: I agree on that. So we're keeping in close touch about that. But also, I

30:09think, outside of that, there are other interesting ideas being researched, like world models and

30:15continual learning. These are the things I think will need to be cracked. If self-improvement doesn't deliver

30:20the goods on its own, then we'll need these other things to work. And then I think things like robotics may have their

30:25breakout moment. But maybe, on the basis of what you've just said, we should all be hoping that it does take you, and indeed

30:31everybody else, a little bit longer. I would prefer that. I think that would be better for the world. Well, you guys could do something about that. Thank you both very much.

Oxford Professor: AI Is Humanity’s Attempt to Make God — John Lennox

0:00Well, the thing I want to talk about, I want to talk about AI later. But what I want to start off with is John 1:1, about

0:06the Word, where in the beginning was the Word, and the Word was with God, and the Word was God. And we live in this

0:12word-based universe. I want to talk about words, talk about writing, but I think that's a good jumping-off

0:19point. Why is that such a profound idea?

0:24The idea that this is a word-based universe has been profoundly important

0:29in my own life, because of the pressure of naturalism, or materialism,

0:38trying to argue the exact opposite. Because words that carry meaning are a

0:43very high-level thing in human experience. And

0:50the very fact that in two main areas we

0:55find that word basis, I think, poses a fatal threat to the materialistic

1:01interpretation of the universe. The first is in mathematics, which is my field.

1:07In the language of mathematics, and it is a language, the most precise language we've got in a

1:14way, we can encapsulate some of the ways in which the universe behaves, notably

1:21going back to Kepler and Newton and Clerk Maxwell and so on. And it's proved to be

1:29a brilliant tool for understanding part of the way in which the universe is and

1:36works. And then, secondly, research in biology has brought us to

1:41the longest word of any kind that we've ever discovered, which is the human genome, the genetic

1:48code: 3.4 billion letters long. They're chemical letters, of course, but they

1:54function precisely as a word with meaning, because they code for various

2:00proteins and all the rest of it. So there are those two major disciplines, physics and chemistry, with mathematics

2:07underpinning them; and in biology, we've got this fundamental word. And it is a

2:15fact that in all our human experience, words come from minds. You only have

2:23to put the word E-X-I-T above a door. It's only four letters. But if you ask

2:29for the origin of that, people will explain it in terms of: well, this sign

2:35had to be made, and it may have been made by automated machinery, but somewhere there's a mind that has chosen to put a

2:44word that means "exit" up there. So if we will attribute

2:52mind to a word four letters long, it's rather curious that when we come to a word

2:583.4 billion letters long, we say it happens by chance and necessity. That, to

3:03my mind, is nonsense. And I prefer an explanation that makes sense to one that

3:10doesn't make any sense. Right. And as you've gone about

3:16reading scripture, spending time in the poetry of the Psalms,

3:22the literature of the Old Testament, how do you feel

3:28that has rubbed off on your own writing? That's very hard to measure. We are

3:34influenced by many things in our own writing. But what I probably need to

3:39explain is that I had a genius of a mentor. I'm publicly trained in the sciences but

3:46privately trained in the humanities. And my mentor for 50 years

3:53was the late Professor David Gooding, who was a classicist, but he was a world

3:59authority on the Septuagint, which is the Greek version of the Old Testament. And

4:05it was he who showed me how biblical literature worked,

4:11and how it worked, actually, in common with some of the classical writings. And that fascinated me. I've

4:19always been interested in grammar. I was very keen on Latin at school, and I've always been keen on languages of any

4:26kind, starting with mathematics and modern languages. But the way in which

4:33ideas are communicated in literature, getting some of the clues to

4:40the methodology that the ancient writers used, was hugely important, and then seeing it

4:47in scripture. So I actually did many studies with him, including the ideas and

4:55thoughts over many years that led to my most recent book on Revelation.

5:00So a lot of it rubbed off. It must have rubbed off. What did he see about

5:07ancient literature? You were talking about the ways that it was almost in conversation with other literature. What

5:12did he see that you were missing? Well, what I had not come across before were

5:20probably two things. One was structure, and the other is thought flow.

5:27In other words, if we take any book, like the book I've just written, it's split

5:33into chapters. And that's to help people know when they can stop and go to sleep. And usually those chapters are

5:43arranged around some kind of scheme. They might be geographical. They might follow the way in which history moves, or

5:50they might be any kind of thing that gives you a coherent

5:57joining together of ideas. Now, we do it the simple way, to help people who are

6:04simple-minded: we label our chapters one, two, three, and four. But in the ancient world,

6:10they didn't do that. They were more sophisticated, and therefore much more interesting, in that they divided their

6:17writing up often by using a repeated phrase. And the classic example in the New

6:24Testament is in the Gospel of Matthew, where Matthew at intervals uses a phrase

6:31that is a variation on the following theme: "And it came to pass, when Jesus had finished these sayings, he went into

6:38the villages of Galilee." "And it came to pass, when Jesus had finished this teaching, he came down the mountain."

6:44That kind of thing. And it's repeated four or five times. And what he taught me was that when you see something like

6:52that, you should seize on it, because it's highly likely that it is a major

6:59division marker. There will be minor ones. But then the way to approach something like Matthew's gospel is to take

7:06two of those markers and look at the material in between, and look in it for

7:11two things initially. One is the repetition of ideas, because if this is a

7:19section, you might expect that. So one section in Matthew, for

7:25example, keeps repeating the concept of authority and illustrates it from

7:31different perspectives. So that proved to be something really

7:37important. So within the section, you then look for the thought flow. So you

7:43don't simply ask what this parable means. You ask a

7:49prior question: what does it say? And he contended, quite rightly I

7:55believe, that most of the failure to understand what scripture means comes from not

8:02taking enough time to see what it says. Right? So you take time to see what it says,

8:08and then you can ask the meaning question, and that will involve asking:

8:14why is this bit of scripture, or that parable, or that incident, there and not elsewhere?

8:20Why is it in the place it is, right? What part of a cumulative argument, if

8:26there is one, is it contributing to? Now, the moment I saw that, I was hooked. I

8:33was 18 years old in Cambridge. He invited me to do a Bible study. And that one evening, being shown that,

8:48got going. Yeah. It's funny, something very similar: I feel like I'm in the

8:53process of having that revelation that you had many years ago. And there are a

8:58few things that have really struck me. The way that we talk about it is observation, interpretation, application.

9:04So often when people read the scriptures, they jump to "what should I

9:10do?" Oh, sure. And so often it's just: what words are repeated? Why is this book being

9:16written? And it's funny, the reading of scripture for me has been

9:21very elementary in terms of the methodology, but very intense in terms of the depth and intensity with which

9:29we've done it. And it gets me to ask, and I guess I'm going to play kind of a devil's advocate here, even though I'm a

How to Know Scripture Is God-Breathed

9:36believer, but how do we know that something is God-breathed when it's written? I think of 2 Timothy

9:433:16: "All scripture is God-breathed and useful for teaching, rebuking, training,

9:48and correcting in righteousness." But how do we know that? Can we actually examine the text and

9:56know that it's God-breathed? Or is there another way that we can validate that?

10:04I think that question has many aspects to it.

10:10What constitutes knowledge? Is it that one is asking to feel that it's God-

10:17breathed, or to have some inner sense that it's God-breathed? I'll give you my take on it.

10:22Yeah. I understand from scripture

10:27that God is prepared to speak through his word to those who take it

10:32seriously. And Christ promised his disciples

10:38that he would reveal himself to them. Now, one of the actual cases of his

10:46revealing himself to them was in that upper room.

10:52And it's very interesting, because it has to do with words,

10:57because they kept asking him all sorts of questions. And suddenly it came to a point where

11:05Philip said to him, "Show us the Father."

11:10And Jesus quietly said, "Philip, don't you realize that he that has seen

11:17me has seen the Father?" The words that I'm speaking,

11:23they are the Father speaking. And there comes over Philip, I suspect, and the

11:29others that awesome sense. They'd been sensing God as a way-out-there something,

11:35somewhere. But now they're right up against him; they've reached the Father as near as they're going to get,

11:42and the words that Jesus is saying authenticate that fact. So that, it

11:49seems to me, indicates how we should approach scripture.

11:55I remember, if I can tell it in terms of an anecdote: my mentor asked me once, "Why do you

12:04study scripture?" Well, I said, "At the moment, I've got several Bible studies to do, and I'm preaching here and there."

12:10"Oh," he said, "well, that's all right, I suppose, but that's not why I study scripture."

12:17And I was quite taken aback. So I said, "Why do you study scripture?" And he said, "To get to know God."

12:26Now, that's a completely different level of attitude. And it shook me at the time. I learned

12:32a very big lesson through that: that if this really is God's word, then

12:40logically you would expect God to authenticate himself through it, not by arguments

12:47about it, or details of its authenticity from the perspective of documentary

12:53evidence and all of that, which is important and very useful, but that God

12:58actually speaks through it. That's a level above everything else.

13:04And it seems to me that in the end, that's the only level worthwhile.

13:09And it's been critical for my life as a Christian, that sensing of God speaking.

13:16Now, how to define that is a bit like trying to define the beauty of a

13:22countryside scene. It's almost impossible, but you know it when it

13:28happens. Now, of course, we could deceive ourselves, and the psychoanalysts will make hay with all this kind of

13:34thing. But in the end, it is up to our own maturity and

13:39judgment whether we really sense God speaking in such a way that we know it's

13:46true and know what he expects us to do on the basis of it. Yeah.

13:51So I think it's complicated, obviously, because reality is always

13:57complicated; but nevertheless, it's what I expect. And

14:04therefore, if you go to scripture not so much because you're under pressure to

14:09produce a talk or an article, but because you actually want to get to know God,

14:16that, to my mind, is the key answer to your question.

14:22You know, you use the word "awesome," and that's to be awestruck, and it's related to

14:31words like mystery and wonder. Yep. And as I think of my own thinking, of

14:38what is the core discipline of my thinking life that

14:45exists now that didn't exist five years ago, I think it's the cultivation of wonder.

14:52I think I used to very much feel that the world ended with my ability to explain something.

15:00Oh. And now I don't feel that way. Good. And I have more questions there

15:06than answers. But I think a belief in God has opened me up to new portals of

15:11wonder. Well, that runs parallel to what C.S. Lewis said years ago: I

15:18believe in God, something like, "I believe in the sun not because I see its light,

15:24it's dangerous to look at directly, but because in its light I see everything else." In other words, faith in God,

15:32instead of closing inquiry down, opens it up and introduces a dimension

15:40of wonder. And one of the very sad things I find about the media today:

15:45there are some wonderful television documentaries about the

15:51world, the universe, done by people like David Attenborough.

15:56Yeah. And yet, just when you would hope they would say, "What an awesome God

16:03to be responsible for this," there's nothing. Yeah.

16:09And they see it, and yet they don't see it. There's a blindness.

16:15And the blindness comes at the level that you mentioned fleetingly, which is

16:22people being content with some level of explanation without realizing that explanation has

16:29many different levels. What do you mean? Well, what I mean by

16:34that is: if you want to explain a motor car

16:39engine, or an automobile engine as you say in your country,

16:45you could resort to physics and automobile engineering and all of that,

16:50and you'd have a scientific explanation. But you could also talk about Henry

16:55Ford, and that's a different kind of explanation.

17:00It's in terms of an agent with a purpose. And I often say to people, when

17:06they ask me whether science conflicts with God as an explanation, I say absolutely

17:11not. Science no more conflicts with God as an explanation for the universe than

17:18Henry Ford conflicts with physics, chemistry, and automobile engineering as

17:23an explanation for the automobile. They're complementary. And the thing about explanation, and I'm

17:32fascinated by it, and I've written quite a bit about it, is that it's often less

17:38complete than you think. Let me give an example of that from science. I wasn't

Why Scientific Explanations Don’t Exclude God

17:44very well taught physics as a young person, and I thought that the law of gravity explained gravity,

17:51but it doesn't. Even Newton realized that. He uttered a famous Latin phrase,

18:00"hypotheses non fingo": I don't frame hypotheses. In other words, I don't know

18:06what gravity is, but I can give you a mathematical formulation that will enable you to

18:13calculate its effect. Sure. But that is not to say what it is. In fact, no one, even now, and you could ask

18:21your own Nobel Prize winner Richard Feynman, although he's dead, he was one of the best, in California. Nobody knows

18:28what gravity is. And so even a scientific explanation apparently isn't

18:35complete in itself. Explanation has many different levels. And there's

18:40a huge literature devoted to explanation, which is sufficient, certainly for me,

18:47to be very leery when someone says, "Oh, I've got the explanation of that." And

18:53the notion that their level of explanation excludes the God explanation is in fact stupid.

18:59What do you think happened in the world, to where we've closed this window

19:06into wonder and awe, and the invitation that we all have to stare and ask, what's going on there?

19:13Maybe there's something more going on there that we don't know. Because it feels like, by

19:20embracing this myopia, we've constrained ourselves

19:25in terms of the ideas, stories, whatever it is, that we can actually access.

19:30I'm sure that's the case. But it's very complex to explain.

19:38Historically, it has to do with the Enlightenment and the elevation of reason,

19:45and the bad behavior of professing Christians who brought the God

19:52hypothesis into disrepute and were anti-science to a certain

19:58extent. There are all kinds of things involved in it. But I actually like the

20:06explanation that Iain McGilchrist gives of this in his books, notably

20:12his most recent one, The Matter with Things. He points out that what we've actually done, in his terms as a

20:20neuroscientist, is this: we've got two halves to our brain, and one, roughly speaking, is the science side,

20:30finding out what things are. And then there's the other half, the right side of the brain, that looks at

20:37what things mean. And he said, if you look at us now, we've ended up in a universe

20:44where we know how almost everything works, but we know the meaning of nothing.

20:51The reason for that, and it's a very powerful point, was picked up by

20:56our late Chief Rabbi, Lord Sacks, who's always worth reading,

21:04very much worth reading. He wrote a little book on science and religion, and he put McGilchrist's idea this way:

21:13Science takes things apart to see how they work. religion puts them together

21:18to see what they mean. Now, McGill’s thesis is that for 500

21:24years or so, we have put such emphasis on the left side of the brain that we’ve

21:31omitted the right side and therefore we live in a universe where we understand

21:36how almost everything works and the meaning of nothing. And he therefore calls for space for the transcendent,

21:45for the wonderful, for the beautiful, and for God.

21:51I understand where he’s coming from this attitude of a little bit less of

22:00the left brain means that he’s a bit skeptical about biblical doctrine. And

22:06he told me that himself. Skeptical about biblical doctrine. Yes, because it seems too

22:11left-brained. But of course, so is his research that’s led to that book, which shouldn’t make him all that skeptical.

22:19But I like Ian’s thinking because I think that analysis and there’s a lot of push back on it these days obviously

22:26because it is a scientific argument

22:32against materialism and people don’t like that. A religious argument against materialism? Well, they

22:39just laugh at that. But coming from within neuroscience, that’s a very different matter. But it seems to me

22:46it’s almost easier to puncture that balloon because we live in the

22:51information age. And information, whatever it is, and that’s difficult, is not material. It’s

22:58usually carried on a material substrate. The information on those pages is

23:03carried on a material substrate of paper and ink, but it itself is not material.

23:09So that means there are non-material entities in the universe. Well, that’s the end of materialism as a philosophy

23:15as far as I’m concerned. As simple as that. Yeah. But that gets obfuscated in a lot of the

23:22discussion. You know, my pastor a few months ago, he

Why Style Is As Important As Substance

23:27asked me, why does God speak to us in poetry? I was stupefied by the question. I still

23:33am. It’s it’s one of the things I’m trying to grapple with. And as I was reading your new book

23:38on Revelation, you’re talking about literature that how something is said is

23:44as important as what is said. And I think that there’s a real writing lesson

23:50in here that the style is as important as the substance. Well, relative

23:56importance we can talk about, sure, but the fact of both being very important is

24:02clear because some parts of scripture are written deliberately to be imagined.

24:10And the imagination is a hugely important part of human life. And that’s

24:17where poetry and music come in and supply something to feed the right side

24:26of our brain, let’s put it in McGilchrist’s terms, and open up our minds to having

24:33stepping stones of imaginative words and metaphors and similes and

24:39all kinds of figures of speech that lead us along into a world pointed to by

24:46those things but represented by those things in an

24:51imaginative way. The sad tragedy is I find that young people don’t learn the

24:59grammar of their own language and they don’t understand metaphors and similes. So they make huge mistakes

25:06about them, especially when it comes to the book of Revelation. A typical attitude is, oh, that book’s full of

25:13symbols. Therefore it is essentially fantastical

25:18and fantasy and meaningless. And Lewis, I learned a lot about the English

25:23language from C.S. Lewis, pointed out that similes and metaphors are used to stand

25:30for realities and point to realities and we can illustrate that easily.

25:38There are people on a very extreme edge of not understanding literature

25:44and they say that there are no metaphors in scripture. You must take it literally. This is possibly the best way

25:50to explain this. And I say, “Oh, really? Jesus said, “I am the door.”

25:55Right? Do you take that literally, right? Well, it just discombobulates some

26:01people. They don’t know what to say. Jesus is not a literal door. And here’s

26:07the point that is hugely important. He’s not a literal door made of wood or

26:13metal. He’s a real door. He’s a real doorway into a spiritual

26:20experience of God. And the difference between literal and real... literal is a useless word, almost useless,

26:26because the base literal level is not used all that much. It is used in the

26:32beginning. God created the heavens and the earth. Well, they’re the actual heavens and earth, the literal heavens

26:38and earth that physicists study. But then, and God said, “Let there be

26:44light.” What do you mean, “and God said”? Has he got a voice box and lungs? Well, no. He’s spirit. So

26:50immediately there, you’re in the realm of the metaphorical. But God said is not

26:59fantasy. God communicates in ways which we don’t understand. We have no

27:05idea what it means to utter what is in some sense a word and a world is

27:11created. We have no notion of how that works. But metaphorical language

27:17enriches and provides a stepping stone. And the same is true in the book of Revelation.

27:24Very true. Because there are masses of symbols and metaphors, but all of them

27:30stand for realities without exception. And that is one of the basic things I’m

27:36talking about in my book. Well, help me get specific when you say stand for realities. Like the one that I’ve

27:42sort of been captivated by is that Jesus comes back with a sword, but the sword

27:48isn’t in his hand, it’s in his mouth. Yes. So what’s going on there when you say stands for reality? That’s clearly

27:54very surreal as you watch that sword. But scripture helps you to interpret it.

28:01The word of God is like a two-edged sword. And the sword at his mouth is seen in

28:07the vision in chapter one, and he is dressed as a judge. And the sword is not

28:15the only, though it is the most, surreal item; eyes like a flame of fire are pretty

28:20surreal. And what subsequently happens is

28:26that in the letters to the seven churches, you have

28:32one of the items in the description of Christ applied to that church: “These

28:39things says he with eyes like a flame of fire.” And when you read what is in the

28:46letter to that church, it is a critique of the way they’re behaving. In other

28:51words, these eyes are seeing what’s going on and they need to repent. And

28:56it’s the same with the sword. If you don’t repent, he says to another church, I will

29:01come and war against you with the sword of my mouth. And if you interpret that

29:07in terms of the biblical language itself, he’s going to come and apply the

29:12sword of his word and deal with them. Very much as Paul said to Corinth, if

29:17you don’t repent, I’m going to come and sort this out. So I don’t think there’s any difficulty in understanding

29:25all of those things stand for realities. Sure. Tell me about this: as you’re writing,

How to Balance Argument and Self-Criticism

29:32how you think about both making an argument and criticizing yourself. I

29:37know you like the Feynman line where he says, “Always bend over backwards to understand and criticize your own work

29:43because the easiest person to fool is yourself.” That’s right. So, how do you go about doing this in the writing process?

29:49Well, there are several basic principles. I think there’s the Feynman principle, but there’s another principle

29:57and that is I ask myself when I’m writing how can this be understood?

30:04How can this be understood? And the next question is how could it be misunderstood?

30:11And that is a hugely helpful thing. Far more helpful than the first question.

30:16Usually you know what you mean to convey by writing. But if you say how could

30:24this be misunderstood? Let me give you a very simple example. You want to talk about God as father.

30:31You don’t realize that in your audience is a young woman whose concept of father

30:37is someone who comes home drunk every night and abuses her mother and possibly herself. And therefore her concept of

30:45father won’t relate. So you will have to go into that

30:51how could I be misunderstood and correct it by being open with the

30:57fact that not everybody’s experience of father is the same. And I find that

31:03hugely important. The next thing is don’t be so proud as not to allow people

31:09to read what you’ve written. Hm. What do you mean?

31:15Well, allow people to read your stuff before you publish it

31:20and ask for the best criticism. So, for example, I’m doing at the moment

31:26an autobiography. An autobiography is tricky because it’s got the word auto in it. Self,

31:34right? And one of the great difficulties is of course that you’re writing about yourself. How do you avoid um pitfalls

31:42connected with that? So I speak to the publishing company and say I want your

31:47very best editor who’s going to be really critical and

31:52come in. And I’ve got the very best editor, which has been very painful but

31:58very useful. What have they told you? Oh all kinds of things. What have you learned about

32:03autobiography now that you didn’t know? Simple things. The danger of being episodical. If you’re a person that has

32:10done a lot of speaking, the danger is you get tired of trying to construct the

32:16story and you say, “Well, I went to Leipzig and I spoke on X and then I went to Berlin and I spoke on Y and then I

32:23went to Kenets and I spoke on Zed and it becomes episodical

32:28and there’s no depth. There’s only length.” Huh. and uh simple things like

32:34that. And it’s difficult, of course, sometimes. You mentioned the Veritas Forum earlier,

32:42and I’ve done a lot of talks for the Veritas Forum, so I could say when I went to Texas on Tuesday and I spoke at

32:49Austin and then I went to Denver on Wednesday etc etc and that becomes

32:54boring in the end unless there are some individual specific things that happened

33:02during those times that fleshed this out into something interesting. So my particular editor in chief is

33:11ruthless with that kind of stuff and just points it out to me, leave this out

33:16or put it in an appendix or something like this. Right? So, I’ve learned a lot from from

33:23critics and I have friends that will read and will

33:29criticize uh what I’m going to do.

How Narrow vs General AI Create Different Risks

33:35Let’s move into uh Revelation. I mean, you’ve just written a new book about the

33:40book of Revelation, but also how it intersects with AI. And

33:45I guess I’ll just start off with a very base level question which is are you scared of what’s happening with AI?

33:52Yes and no. I think there are many things. I’m

33:57scared of knives. A good knife, a sharp knife, can be used

34:05for surgery. It can also be used for murder. And in this country at the moment, we’ve

34:12got an epidemic of knife crime. Young kids, 14, 15, killing their mates in

34:18class. So, one could say, “Yes, I’m scared of knives because many people are being damaged and maimed for life

34:27or killed.” But I know that’s not the actual thrust of your question. It’s

34:34whether there’s a degree of fundamental anxiety about the future

34:39that is raised by AI and that is certainly the case. I’m

34:47grateful for some of the things that AI can do and do brilliantly.

34:53The solution of the protein folding problem by Demis Hassabis of DeepMind is a work

34:59of utter genius. It used to take a PhD student five years to work out how one

35:05protein folded. His program for DeepMind dealt with 200,000 in a

35:14day or two. It’s a completely new game. But I think

35:22there are two kinds of AI and we’ve got to separate them. There’s the ordinary

35:28stuff that’s actually working at the moment, which is narrow AI. A narrow

35:33AI system does one thing and one thing alone that normally requires human

35:39intelligence like spotting disease in X-rays and is

35:45brilliant at it. It’s now much better than your average doctor at recognizing

35:51diseases from X-ray photographs and so on. And that’s wonderful. And there are all kinds of examples because AI is not

35:59just one thing, right? There are millions of different AI systems is probably the best way to

36:06put it. And most of the ones operating at the moment are narrow AI. Even ChatGPT

36:13with all its successes. But what I notice is among the thought leaders,

36:21some of them seem to be running scared. Mhm. And they’re highly intelligent people

36:29and they are scared particularly of what is called the control problem that we

36:35lose control that something’s going on in some of these very advanced systems

36:41that we don’t quite understand and haven’t quite tamed. And therefore you

36:47get lots of hype and speculation. Can we build an AI that will refuse to be

36:52turned off and all this kind of stuff? Is AI going to destroy us in the end?

36:58And you get serious-minded physicists like Max Tegmark. Is he in Princeton or

37:03MIT? I can’t remember which. But Max Tegmark has written a book called Life 3.0.

37:09Okay. And he has a dozen different scenarios for the future.

37:16Benevolent AI, despotic AI, all this, at the big scale. But the thing

37:22that he pays closest attention to is AI managing

37:29to leverage some of Amazon’s systems and turn into a totalitarian

37:36world economic government that has the whole

37:42world under its control. And he mentions this, describes it in

37:49detail. And then he says that this sort of

37:55government could insist on all its citizens wearing a bracelet

38:01with the functionality of an Apple Watch and a lot more that would listen to

38:06everything that the wearer said, watch everything they did. And if they

38:14stepped out of line, out of the party line, it could inject the wearer with a

38:21lethal toxin and kill them. Now what is fascinating to me about that when I read

38:27that, and it’s not only Tegmark writing about it, and he’s a seriously good physicist, is that it parallels almost

38:35exactly some of the predictions of the book of Revelation, particularly about what I call the monster. It’s often

38:42called the beast; I think monster is a better word. No one can buy or sell unless they have

38:51the mark, and the mark’s got to be on the right hand or the forehead. Doesn’t

38:56say exactly what it is. And they’re killed if they uh go out of line. Well,

39:02that’s identical with what some of these people are predicting. I went to a

39:10lecture by Peter Thiel recently. Yeah. And I asked him the following question.

39:15I said, “Peter, do I understand you correctly that as

39:20you look at technology, you see it moving towards totalitarianism

39:26in terms of uh surveillance techniques which I haven’t mentioned yet but are

39:33all rolled out all over China and so on.” He said, “Yes.” And I said,

39:38secondly, as you look at the biblical scriptures, do you detect something that

39:46parallels that and therefore gives it added credence? And he said he did. So

39:52that was interesting because sometimes he’s not the easiest to understand. Mhm. And it seems to me a number of people

40:00right at the top, Geoffrey Hinton for example, who’s the godfather of AI,

40:06he stepped out of Google so that he could criticize and he has revised his

40:12opinions to a much shorter timescale until AI could put us into real danger.

40:21Elon Musk of course is famous for doing the same thing. And some of these people

40:27who actually are quite knowledgeable about technology are scared of what will happen. I spoke to a leading player in

40:35this field not long ago and I said, “Do you think something’s going on that

40:41they’re not telling us?” And he thought a while and he said, “I think you may well be right from what I

40:48observe and hear.” And you see the scary thing is... Wait, what sort of something is going on?

40:54Well, he he couldn’t say, of course, because he didn’t know, but there’s something going on that leads to these

41:00people being scared. And of course, then you have the parallel thing, the idea of building

41:08God. Yeah. You see, and the notion of data religion and AI religion and Harari saying, if

41:16you want to know about religion, go to Silicon Valley, not to the church. Well, I was thinking of Psalm 115:8,

41:23which says, “Those who make them become like them, so do all who trust in them.” That we’re making this AI in the

41:31image of humans, not in the image of God. Oh, that’s absolutely right. I mean, this is

41:38idolatry left and right. Yes, it is. And that’s another trajectory through it. I separate a

Why the AI Race Is About Making God

41:44whole lot of trajectories. The trouble is they get confused. The first is to make super intelligent humans who are

41:51like gods with a small g. That’s Harari. And that’s to be done in one of two

41:57ways. First of all, to enhance existing humans, re-engineer their genetics and all the

42:03rest. The second way is to start with a nonbiological base like silicon and build some sort of

42:12entity that is created by human beings but will transcend them. Those are the

42:18two uh main avenues and some people are trying to do them together. So there’s

42:23that, but then the question immediately arises: is this playing God? Is this

42:29another Tower of Babel? Right? And it looks very much like it,

42:36a towering desire to reach to heaven. And I’ve read quite

42:43a lot about skyscrapers in connection with my books on Genesis. It’s very interesting. I mean, it’s literally the oldest story

42:51in the book. If you read some of the literature on skyscrapers, you’ll come across this

42:56quote. Behind every skyscraper there’s an even greater ego.

43:03That puts it very neatly. And there are all these trends. And what

43:09I see scripture giving us is not so much identifications

43:15that you can absolutely say with precision what this is, right? But you can say with some pre

43:23precision what it represents, what it stands for.

43:30You see, people, let me give you an example of that. People have speculated through history as to what the beast or

43:37the monster is. And it’s been almost everybody from the popes to Hitler to um

43:43Stalin to Mao, etc., etc. They’re all interested in who it is. People don’t

43:49seem to be interested in what scripture actually says. Scripture tells us what it is. H

43:56the number 666. It is the number of a man. That’s the

44:02hideous thing about it. Scripture tells us exactly what it is. It’s a human

44:07being. And that’s what’s utterly scary about it. If you like, this is a human

44:13being. And in the plain straightforward language of 2 Thessalonians, Paul told

44:20young Christians that there’d be a man of lawlessness who would claim to be God. That fits in perfectly with it

44:28seems to me the scenes in the book of Revelation. And when contemporary

44:34physicists and thinkers are talking about this,

44:40it’s high time that we take seriously the biblical

44:46insight into it. My argument is very simple, and it doesn’t arise

44:51first in my book on Revelation. I wrote a book called 2084. And in that book I said, look, if we are

44:59prepared to take seriously the kind of argument that Tegmark makes,

45:05Harari makes, and a whole host of other people make. Well, I’d like just to say

45:12why don’t we go back and take seriously the biblical account which parallels them eerily

45:19closely. M and there’s far more evidence for its truth.

45:27And then just the final point there is to say that the irony of the whole thing

45:35is that this race for AI super intelligence is the race to make God and be God.

45:46The biblical message is God became human. It goes in the reverse

45:52direction. And I think that’s a very powerful idea.

How AI Changes What It Means to Be Human

46:02No kidding. And it leads me to my next question which is

46:07this has been watching the LLMs in many ways become smarter than me. I mean

46:14become smarter than me. Yes. Yeah. Yes. It’s been in certain respects. In certain respects. Thank you. But it’s

46:20been dejecting and it’s made me ask like

46:26what does it mean to be a productive member of society? What does it mean to be a human being?

46:32And that’s what I want to ask you about. What is it? How has watching what has unfolded over the last many years

46:39changed your conception of what it means to be a human? Well, it’s that question that got me into this whole field. I was

46:47asked some years ago to give a lecture on

46:53uh AI, an introduction, to a conference of Christian leaders. Okay? And I said, “I’m not an expert, get an

47:00expert.” And they said, “You misunderstand. We want you to talk about

47:07what Genesis says about what it means to be human. and let that be the basis of our day’s

47:15discussion. So I started preparing and very rapidly I saw this is going to need

47:21a lot of work. I did the talk because I do have some ideas on Genesis but that’s

47:26what led to my first 2084. There are two 2084s four years apart. Uh one last year and

47:34one 2020. It’s just going so fast. And

47:39it is that that concerns people, the redefinition of humanity. Now CS Lewis

47:47is one of the people that saw this very clearly. There are two books I say every believer ought to read. One is the

47:53Abolition of Man and the other is That Hideous Strength, the third of his science fiction books. And he saw

48:02presciently, you know, in the 1940s, where all this philosophy was leading: human

48:09beings eventually producing not a human being but an artifact. And he has this

48:18chilling sentence. The final triumph of humanity will be the abolition of man.

48:23Because what they produce is not a human; it’s an artifact. They will have meddled with what we now call the

48:30germline of humanity and altered it. And therefore, it seems to me that it’s

48:35very important to get back to Genesis and fill Christian minds full of the

48:43early chapters of Genesis because from the point of view of our moral

48:50existence and status, Genesis 1:26 undergirds all of Western civilization,

48:58and we’re still running on that capital. And that’s the idea of the imago Dei. Yes, God made human beings in his image.

49:06He didn’t make the stars in his image. They show his glory, but they’re not in his image. That’s a very

49:12different thing. You are more important than a star, actually. So exploring

49:19the imago Dei I think is a very

49:24important thing but not exploring it in a vacuum but exploring it in the context

49:30of these attempts because many of the drivers

49:36of this super intelligence creating God are atheists. You can’t help noticing

49:42that. Mhm. And so it is an attack in that sense on

49:48God, his uniqueness, his creatorial dignity, his glory and all of that. Of

49:55course, it is. And what scripture tells you from the beginning is that the

50:00biggest thing you’re going to be up against is the original lie.

50:06Hm. You shall be as God if

50:12you deny God and his word. So, we’re back to where we started. The word-based

50:18universe. In Genesis 1, you have the word as the base of creation. And God

50:26said, and God said. But in Genesis 3, you have the denial of

50:33the word by the snake, as bringing catastrophe into the world. And it’s the

50:41talking snake. It’s amazing that people listen to talking snakes

50:47that suggested that you will rise: you shall be as

50:53gods, knowing good and evil. And the mistakes that are made about that! There’s a famous snake path at UC, uh, I

51:04think it’s San Diego. I don’t know whether you know about the snake path. It’s a big snake which is in the form of

51:09a path and it goes up the hill and its mouth stops at the entrance to the

51:16university library. Okay. And there’s a little garden of Eden at the bottom and the snake coils

51:21around it. And the whole idea is that humans are encouraged to leave innocence

51:28and get knowledge, the university library. And I was speaking there for

51:35the Veritas Forum. And I took a big risk. I said to them, I have never seen such a

51:41blatant uh misunderstanding of one of the most important messages of the

51:47world. Shocked silence. I said, “This is the misunderstanding

51:52that God prevented them from eating a tree of knowledge of good

51:59and evil.” And you’ve changed that to the tree of knowledge. It wasn’t a tree of knowledge. There was

52:05lots of knowledge in the garden. In fact, science started there. God said, “Name the animals.” It was full of

52:12knowledge. The tree of knowledge of good and evil, that’s a knowledge you don’t want. Well, I nearly caused a riot, I

52:19think. But anyway, it’s very interesting because that storytelling

52:25in terms of an artist’s representation of this so-called snake path, and it’s

52:30famous. You can look it up. And that’s an example to my mind how art

52:36can lie powerfully and get across the exact opposite

52:42message from what the original was meant to convey. Yeah. So I guess there’s

How Christians Can Lead in AI Ethics

52:49two things going on here. One is the building of AI which we’ve spoken about.

52:57I’m curious to know, do you use AI? And if so,

53:06how do you think about it? Is it, this is how I would use it to think better, to write better, versus,

53:12no, I think the whole thing is like a demonic force and I’m going to keep it away from me? Well, I’ve got a

53:18smartphone. And if you’ve got a smartphone, you can’t help using AI because when you buy

53:24anything on Amazon, it’s picking up the trail and it’s suggesting new things to

53:31buy, right? So, you can virtually not avoid it. And if you use a computer that’s connected to the internet and use Microsoft Word,

53:39you get all sorts of popups that are explaining things and helping you. The days of not using it are long gone,

53:47but it can be a servant rather than a master. And that’s the danger

53:55of it mastering you. Now take ChatGPT. I find it quite useful for collecting

54:04ideas. Sure. Because it will bring to the screen knowledge that I don’t have.

54:11Yeah. But it may be hallucinating and inventing

54:16that knowledge because it wants to please me. So I’m forced to do a lot of

54:21checking. But that’s okay. I think that

54:26just as we look things up on Google. We don’t know much about the Babylonian Empire. So we

54:33Google it. And nobody thinks twice about that. Sure. Even people who are a bit scared of AI. And in a sense ChatGPT is just a

54:42very sophisticated form of that, one that has digested, often without permission, a lot

54:49more literature. And I say to people, you can use it, but beware.

54:57beware for example if you’re a pastor of a church and you watch the late night

55:03film on Saturday and then you get a sermon off ChatGPT in 30 seconds before you go

55:09to bed. That is not likely to have much spiritual power, right? Even though technically it could be

55:16accurate. If you ask ChatGPT, for example, to tell you what the doctrine

55:22of the Trinity is, it would be hard to fault it. There’s no spiritual power in

55:27a machine. And I get tired trying to tell people this is a machine. It

55:33doesn’t think. When you say there’s no spiritual power in a machine, what do you mean? Well, in the sense that it doesn’t have

55:39a spirit. It’s simply a machine. It’s sheer computing power, predictive

55:45computing power, and it hasn’t got,

55:51well, it hasn’t got... let me put it another way: it simulates intelligence.

55:59It’s not real intelligence. And what God has done with human beings is to put

56:05intelligence together with consciousness. These machines are not conscious. Nor

56:12are they ever likely to be for a very simple reason. Nobody knows what consciousness is. So it’s silly when

56:18people start talking about oh it’s conscious.

56:24Now there are huge questions here, philosophical and moral and ethical. I mean, we haven’t talked about

56:31the moral and ethical side of all of this, because the danger of AI, even the

56:38stuff that’s being used now, is fearsome in terms of deep fakes and deception.

56:44And again, that would lead you back to Jesus own statements about his coming.

56:52Be careful because of deception. It’s the major problem. And now we have the

56:59Five Eyes, the top people in the world, warning us that the deception created by

57:07uh deep fakes could lead to all kinds of chaos in the world.

57:14So it’s a serious problem, but that’s the world we’re in. And my own feeling

57:20is that Christians who are scientifically minded, it’d be a good thing for them to go into AI so that

57:26they can sit at the ethics table, right? Because there’s one thing very clear. The technology moves much faster than

57:34the ethics. But the technology is partly driven, I believe, by “we

57:41shall be as gods.” In other words, if we can do it, we should do it. Mhm,

57:47and bother the consequences and that’s a very dangerous attitude because

57:52somewhere in the world someone’s going to try and do it. If you were to teach a semester long

57:58seminar on writing well and thinking well, how would you structure the curriculum? I have no idea. That’s the first time

58:05I’ve heard that question. You see I’m a great believer in

58:12you teach that kind of thing, I expect, by doing it:

58:18writing. You can’t run a course, I would say, on what writing is and how you do it

58:24without actually writing. It’s the same, it seems to me, with studying scripture. People say to me, well, tell me how to

58:31study scripture. Well, the only way I can do that is by doing it with you.

58:37And that’s a big challenge, which is why I’m not offering courses in writing, right?

58:44Do the dang thing. Yes, yes, you’ve got to start. How’d you improve as a writer?

58:51Well, how one improves is by experience and allowing external criticism.

58:58I think that’s the main thing. And reading other people’s stuff,

59:04that’s hugely important, and listening and talking to people. I think we’re

59:09at an age where we do too much talking and too little listening. And I was told

59:15I had two ears and one mouth. It’s better to use them in that proportion.

59:20But it’s also reading good literature. I mean, if ever I think I’ve attained anything in writing, I just read a Lewis

59:27book and that brings me right down to size. He is sheer genius at using words. I

59:33mean, you have those kinds of standards out there that you should bring

59:39like a mirror up to your own writing. Well, yeah. You were talking about the grammar of, I think you said,

59:45metaphor and illustration. Tell me about that with C.S. Lewis. The simple and obvious thing

59:53Lewis taught me is that metaphors stand for reality. If I say my heart is

59:58broken, I’m not referring to this literal pump, but I’m referring to a very real experience

1:00:05or the car was flying down the road. It wasn’t literally flying. I mean, it was

1:00:11going very fast. The metaphor is for going fast. So, it’s for a real thing,

1:00:16not for a literal thing. And simple things like that, most people have never realized. So, they’re very confused when

1:00:23the Bible does the same thing. John Lennox, thank you very much. Well, thank you. I’ve actually enjoyed

1:00:30it very much. Good stuff.

AI, Man & God | Prof. John Lennox

0:00this is perhaps one of the scariest aspects of it what we’re talking about

0:05here is facial recognition by closed circuit television well it starts with

0:11facial recognition but we’ve now got to the stage where in China in particular

0:16they can recognize you from the back by your gait by all kinds of things

0:21[Music]

0:34it’s an extraordinary privilege for me to be in Oxford and able to talk personally to professor John Lennox

0:42Emeritus Professor of Mathematics at the University of Oxford for years a professor of mathematics at the

Introducing John Lennox

0:48University of Wales in Cardiff he’s lectured extensively all over the world

0:54he’s written widely interestingly spent a lot of time in Russia and Ukraine

1:00after the collapse of Communism and is deeply grieved to see what is happening there and the idea that young men on

1:08both sides that he and others have taught and mentored may now be fighting

1:13one another into the dust in these dangerous times in which we live. But amongst his many writings he’s given

1:21us, gifted us, a very useful book (he tells me he’s already updating it) on

1:27artificial intelligence and the future of humanity, called 2084, which says a lot in the sense that

1:35we all know about 1984 I think you’re telling us that there are some troubling

1:40things coming up John thank you so much for your time that’s my pleasure to be with

1:45you. Can we begin? Over the past two years, during the COVID pandemic, but also with

1:52climate change, we hear this phrase a lot in Australia, and it seems

1:57internationally: trust the science. It strikes me that in our allegedly secular

Do we find truth in science?

2:03age, trust and faith are still seen as pretty important; we haven’t walked away from them. Do you think those who are

2:11accused of not trusting the science are frequently seen as somehow rationally and even morally deficient? In an age of

2:18crisis, is science becoming a new savior, in inverted commas? Well, trusting the

2:25science is fine if it’s kept to the things at which science is competent but

2:33unfortunately over the past few years there has developed a trust in science

2:38that we now call scientism, where science is regarded essentially as the only way to truth,

2:46the only option for a rational thinking person and everything else is fairy

2:51stories and all the rest of it, and I take great exception to that because

2:57it’s plainly false. It’s false logically, because the very statement

3:04science is the only way to truth is not a statement of science, and so if it’s true it’s false. So it’s logically

3:11incoherent to start with but going a little bit more into it it has had huge

3:18influence because of people like the late Stephen Hawking, for example, who

3:25wrote in one of his books, he said

3:31that philosophy’s dead and it seems now as if scientists are holding the torch

3:39of truth and that’s that’s scientism the irony of it is of course that he wrote

3:46it in a book where it’s all about philosophy of science, and it’s pretty clear that Hawking, brilliant as he was

3:55as a mathematical physicist, really is a classic exemplar of what Albert Einstein

4:02once said: the scientist is a poor philosopher. And my response to it

4:08would be very much couched in the kind of attitude that Sir Peter Medawar, a

4:14Nobel Prize winner here in Oxford, once wrote. He said it’s so very easy to

4:21see that science, meaning the natural sciences, is limited in that it cannot

4:27answer the simple questions of a child: where do I come from, where am I going to,

4:33and what is the meaning of life and it seems to me immensely important that we

4:38recover that, and what Medawar went on to say is we need literature, we need

4:44philosophy, and we need theology as well, in my view, in order to answer the bigger

4:51questions. Now the late Lord Sacks, a brilliant philosopher, he

4:58was the Chief Rabbi of the UK and the Commonwealth and so on. And one of the

5:04guests on this series. Well, I’m delighted to hear it. He once wrote a very pithy

5:10statement that I found very helpful. He said, you know, science takes things apart

5:16to understand how they work, and I suppose to understand what they’re made of; religion puts them together to see

5:22what they mean and I think that encapsulates the danger in which we’re

5:28standing science has spawned technology we become addicted to technology

5:35particularly the more advanced forms of it like AI in my book like virtual

5:40reality, the metaverse, all this kind of stuff. We’ve become addicted to it, but

5:45we’ve lost a sense of real meaning, and in particular we’ve lost our moral

5:51compass. Einstein again, to quote him, made the point long ago. He

5:57said you can speak of the ethical foundations of science but you cannot

6:03speak of the scientific foundations of Ethics science doesn’t tell you what you

6:09ought to do. It will tell you, of course, that if you put strychnine in your granny’s tea it will give her a very hard time; in

6:15fact it will kill her. But it can’t tell you whether you ought to do it or not to get your hands on her property. And so

6:23we’re left in a scientistic moral vacuum and therefore I feel very strongly that

6:31as a scientist of sorts I need to challenge this science is marvelous but

6:37it’s limited to the questions it can handle and let’s realize it does not

6:43deal with the most important questions of life, and they are the questions: who am I, what does life mean, and

6:51where do we get a moral compass before we come to artificial intelligence then I’d just like to

6:57explore what you’ve been talking about a little bit with reference to Britain I I

7:03love history. I’ve always massively admired Britain, and I know Britain

7:08seems to be into self-flagellation on just about every issue you can think of at the moment, the decrying of its own

7:13cultural Roots but to my way of thinking I think in many ways Britain’s been a

7:18force for unbelievable good in the world I really do I mean as an Australian I

7:24would not live in a free country if it hadn’t been for the prime minister of this country standing up when no one

7:30else did in 1939 just one minor example but I come here now and I wonder just

7:36what the British people believe in so massively shaped by by Christian

7:43faith arguments sometimes very ugly over a long period of time but nonetheless

Why is secularism on the rise?

7:49profoundly shaped. The Times reported just a couple of years ago that we’ve reached the point where 27% of Britons

7:57believe in God, with an additional 16% believing in a higher power. Among the British as a whole, 41% say they believe

8:05there is neither a God nor a higher power. Interestingly, among young

8:10people in the UK, the number who said they believe in God rose a little, and

8:17nonetheless what you’ve got here is one of the most secular societies on Earth which not so very long ago was one of

8:24the more Christian? What’s responsible? Is it tied to a sort of false faith in

8:32science, amongst other things, or is it just that it’s too hard, or is it that the

8:37wars have seen people convinced, as you saw two Christian nations fighting,

8:43people praying to the same God for victory? How did it morph so badly to a

8:49state of unbelief, do you think, in the country that you’ve lived in all your life? I find this a complex and difficult issue

8:57because I see different strands in it if you pick up

9:03on the science side you go back to Isaac Newton and he gave us a picture of the

9:10universe that was very much what’s called A Clockwork picture the universe

9:15running on fixed laws uh that were according to Newton originally set in

9:21place by God but it was a universe that essentially now ran on its own and you

9:29can see that that in the 18th century particularly favored what’s called deism

9:34that is, there is a God but he’s hands off; he started it running and now it

9:39runs and it runs very well and you could see with that in the collective psyche

9:47particularly in the academy it very rapidly led to questions of is God

9:53really necessary now you add that to what was happening in the continent with the

10:00Enlightenment and the corrupt Church professing Christianity

10:06utterly corrupt and the reaction against that which was fuel to the fire really

10:14of a rising secularism and

10:19Atheism and then you add to that what was happening in the days around the

10:25time of Charles Darwin where you had Huxley who was was an atheist and he

10:33resented these clergymen who were actually some of the very good natural

10:39philosophers; like Wilberforce, who actually was a much brighter man than many people think, as Darwin pointed out. But Huxley

10:47in the UK wanted a church scientific; he wanted to turn the churches into

10:55temples to the goddess Sophia, of wisdom, that kind of idea. So you’ve got all

11:00of that, and then you add to that the vitriolic anti-God

11:09sentiments that are not just atheism but anti-God feeling, led for quite a long

11:16time by Richard Dawkins and other people, and that’s had huge influence on young

11:23people; one of the reasons I entered the fray, actually. Because the media then come

11:28into this is even more complicated because within the media the dominant

11:34view, and I think the BBC actually stated this at one time, that they favored

11:40naturalism, the philosophy that nature’s all there is and there’s no outside, there’s no transcendence, there’s no God.

11:47so you’ve got all of that and against it you have a group of

11:56people who are often cowed into letting their faith in God become

12:03private. This is the tragedy of secularism.

12:08And you get into that the cancel culture, the woke culture, all this kind of

12:14stuff, where I’ve got to affirm everything, everything’s equally valid;

12:19you’ve got relativism and postmodernism, at least in things that people think don’t matter. You never meet

12:26a postmodern um business person who goes

12:31to a bank manager and says I’ve got $5,000 in the bank and the bank manager

12:37says, well, actually you owe the bank 10,000. Oh, that’s only your truth. No, that doesn’t

12:44work in the business world. But still you’ve got this pressure of

12:51relativism, and so you end up, as Michael Burke put it a few years ago talking

12:56about faith in God in Britain, with the first generation that doesn’t

13:02have a shared worldview now there’s still a Christian influence as even atheists

13:09recognize but we’ve gone a long way in rejecting God and abandoning God and

13:16then there’s the entertainment industry that will fill everybody’s vacuum with

13:22noise and we entertain ourselves to death so your question is extremely

13:27complex and it would need a more observant person than me to give you a full answer. It’s a huge mix

13:35of stuff, and any individual person may have been affected by this in completely different ways. The reason that it’s

13:42important, I think, to set that up is that we now come to what I really wanted to hear your views on: artificial

13:48intelligence because science is giving us extraordinary capabilities yes but

13:54will we Simply Be seduced by it in the sense that artificial intelligence is

14:01rapidly creating things that are marvelous, that we want to enjoy, that may satiate us, maybe dull us, while aspects of

14:10the emergence of AI could be very dangerous but before we start to explore

14:16that, for ordinary people in the street like me who are not living with this, well, we,

14:22I am living with this stuff but don’t know where it might go, we need to define some terms. What is AI?

14:29I think you call it narrow AI, of the sort that we’re quite familiar with: limited intelligence but highly focused

What is Narrow Artificial Intelligence?

14:36on narrow areas. What is artificial general intelligence, and

14:43where might that go? There’s a whole number of issues; then there’s the whole issue of

14:51transhumanism. So can we start with, very broadly, AI is what? How would you

14:57explain it to a layman? We’ve all heard the term. Oh, sure. Well,

15:02the first thing to realize is that the word artificial in the phrase artificial

15:08intelligence is real and that’s not due to me it’s due to one of the pioneers of

15:14the subject, who happens to be a Christian. And the point is that, and

15:19we’ll take a narrow AI system first because it’s much easier to explain, a narrow

15:25AI system is a system involving a high-

15:30powered computer, a huge database, and an algorithm that does some picking and

15:36choosing, whose output is something that normally

15:43requires human intelligence to do. That is, if you look at the output you would

15:50say normally that it’s taken an intelligent person to do that. So let’s take an

15:55example that is very important these days in in medicine and that’s

16:01interpreting X-rays. So we have a database; let’s say

16:08it has 1 million X-rays of lungs that are

16:13infected with various diseases say related to covid-19 they are then labeled in the

16:21database by the world’s top experts then they take an x-ray of your

16:29lungs or my lungs and the algorithm compares the X-ray of your lungs with

16:38the million very rapidly and it produces an output which says John Anderson has

16:46got that disease. Now at the moment that kind of thing, which is being rolled out

16:53not only in Radiology but all over the place will generally give you a better

16:59result than your local hospital will, and that’s hugely important and hugely

17:04valuable but the point is the machine is not intelligent it’s only doing what

17:10it’s programmed to do the database is not intelligent the intelligence is the

17:17intelligence of the people that designed the computer, know about X-rays, and know

17:23about medicine. But the output is what you would expect from an intelligent doctor,
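
The narrow-AI pipeline described here, a large expert-labeled database plus an algorithm that compares a new case against it, can be sketched as a nearest-neighbour lookup. This is an illustrative toy under invented data, not any real diagnostic system; all labels and feature values are made up.

```python
# Toy sketch of a narrow-AI "compare against an expert-labeled
# database" system. Feature vectors stand in for X-ray images;
# all data and labels are invented for illustration.

def diagnose(new_xray, labeled_database):
    """Return the expert label of the most similar stored case.

    Each database entry is (feature_vector, expert_label); similarity
    is plain squared distance, a stand-in for the pattern matching a
    real system would learn.
    """
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    best = min(labeled_database, key=lambda entry: distance(entry[0], new_xray))
    return best[1]

# Tiny "database" labeled by hypothetical experts.
database = [
    ([0.9, 0.8, 0.1], "covid-related infection"),
    ([0.1, 0.2, 0.9], "healthy"),
]
print(diagnose([0.85, 0.75, 0.2], database))  # prints: covid-related infection
```

As Lennox stresses, nothing here is "intelligent": the output only looks intelligent because intelligent people built the database and the comparison rule.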

17:30so it’s in that sense artificial it’s a system narrow in the sense it only deals

17:38with one thing. And all kinds, endless

17:43kinds, of systems are being rolled out around the world, and some of them, as you

17:50mentioned are extremely beneficial narrow AI has been used in

17:56the development of vaccines and the spin-off from that technology is enormous in drug

18:02development, and on and on it goes. I could give you dozens of examples,

18:08and they’re in my book. So that’s where we start. Now we are familiar with

18:14it and it’s worth giving a second example of it because most of us

18:21voluntarily are wearing, first of all, a tracker; it’s called a smartphone. Yes, it knows where

18:28we are; it could even be recording what we’re saying. But what it does do, of which

18:36we’re all aware is if we for example buy a book on Amazon we’ll very soon get

18:43little popups that say people that bought that book are usually interested in this book and what’s happening there

18:51is that the AI system is creating a database of your preferences, your interests, your

18:59likes your purchases and is using that to compare

19:04with its vast database of available things for sale so that it predicts what

19:11you might like so this is a huge commercial value

19:17and it leads to something else which most of us don’t know about and we can

19:23come to that later but I’ll mention it now which is called surveillance capitalism

19:29and there’s a book by an emerita professor at MIT called Shoshana

19:36Zuboff, and it’s regarded as a very serious book, because the point she’s making is that

19:43global corporations are using your data and without your permission are selling

19:49it off to third parties and making a lot of money out of it, and that raises deep privacy issues. So now you’re straight

19:56into the ethics so that’s narrow AI okay so let’s stay on narrow Ai and

20:06extend our road a little bit further down towards broader use. You’ve just talked about us being unaware, in a

20:13way, of how we’re being surveilled. Yes. And it was right here in Oxford, I think it may have been you who made the point, I

20:19can’t remember, in a talk that I heard, where the point was made that what’s

20:25happening in China, using artificial intelligence to surveil people, is astonishing, but in many ways all that

20:32information is being collected in the West as well; it’s just not collated in the same way. Correct, and this is perhaps

20:39one of the scariest aspects of it what we’re talking about here is facial

20:44recognition by closed circuit television well it starts with facial recognition

20:50but we’ve now got to the stage where in China in particular they can recognize you from the back, by your gait, by all

20:57kinds of things. And what has happened is, and you can see the positive benefit: police want to

21:05arrest criminals or thugs or rowdies, even in a football crowd, and so using

21:11facial recognition technology they can pick a person out and arrest him or her.

21:17Well, okay. But what can be used for

21:22good purposes in that sense, in keeping law and order,

21:29can also, particularly in an autocratic state, become an instrument of

21:37control and here’s the huge dilemma which people try to solve how much of

21:45your privacy are you prepared to sacrifice for security there’s a tension between

21:51those two things. Now, in China, you mentioned, and you’re probably thinking about Xinjiang, where you’ve got a

21:58minority, a Muslim minority of Uyghur people. The surveillance level on them

22:05is unbelievable every few hundred meters down the street they have to stop they

22:10have to hand in their smartphones the smartphones are loaded with all kinds of stuff by the government their houses

22:17have QR codes outside them as to how many people live there, and all this kind of thing. And I don’t know how many, it’s

22:25way over a million I believe, are being held, as a result of what is being

22:31picked up by artificial intelligence systems, in re-education centers, and the

22:36suspicion is that the culture is being destroyed and eradicated. That’s on the one hand, that’s in

22:43one particular province, but elsewhere in China we have now the social credit

22:50system, that apparently will be rolled out in the entire country. Say you and I were

22:57given to start with, let’s say, 300 social credit points, and we’re being trailed. If we

China’s Social Credit System

23:06fail to put our rubbish, our trash can, out at night, there’ll be marks against us. If

23:14we go to somewhere dubious or mix with someone whose political loyalties are

23:20suspect we’ll get more negative points on the other hand if we pay our debts on

23:26time and so on, so to speak, and all this kind of thing, we will amass more credit

23:33points. And then if we are going negative, the penalties kick in: we’ll discover we

23:42can’t get into our favorite restaurant, we’ll discover we don’t get that promotion, or

23:47don’t even get that job we apply for, or that we can’t travel, or that we can’t even have a credit card. And this is

23:55being rolled out, and the list of penalties and things that have actually been recorded is just very

24:04serious. Now, what amazed me when I first came across this was the fact that many

24:10people welcomed this. They think it’s wonderful. They boast, I’ve got a thousand points, how many have you got? And they

24:17don’t realize that the whole of life is becoming controlled, in the interest,

24:25ostensibly, of having a healthy society. So it is. Talk about

24:331984 now this is not futuristic speculation this is already happening
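
The points mechanics outlined here, a starting balance adjusted by recorded behaviours, with penalties once the score drops below the starting line, can be sketched schematically. All point values and event names are invented for illustration; they are not actual policy figures.

```python
# Schematic of a points-based scoring system: a running balance
# adjusted by recorded events. Values and event names are invented.

ADJUSTMENTS = {
    "missed_rubbish_collection": -10,      # minor infraction
    "associating_with_flagged_person": -50,  # heavy penalty
    "paid_debt_on_time": +20,              # reward
}

def update_score(score, events):
    """Apply each recorded event's adjustment to the running score."""
    for event in events:
        score += ADJUSTMENTS.get(event, 0)
    return score

score = update_score(300, ["paid_debt_on_time", "missed_rubbish_collection"])
print(score, "restricted" if score < 300 else "ok")  # prints: 310 ok
```

The point of the sketch is how little machinery is needed: the chilling part is not the arithmetic but the total surveillance feeding the event log, which is exactly the narrow-AI capability the discussion returns to.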

24:41George Orwell, you mentioned him; he wrote 1984, he talked about Big Brother

24:46watching you and said that technology would eventually do it. It is doing it. This is narrow

24:53AI. This is not futuristic in any way; it’s what’s actually happening at the

24:58moment and you mentioned briefly the fact that all this stuff exists in the west

25:05except, and the point has been made forcibly, it’s not quite yet under one central authority and control, but it is

25:13coming we have credit searches we have all kinds of stuff that is beginning to creep in in the US and in the UK and I

25:21presume also in Australia. And also we have even police forces here, I believe,

25:27who want the whole caboodle in here and want to be able to exert a much more

25:34serious level of control, and it is frightening,

25:39because what it does for human rights is, well... So it occurs to me that, you

25:46know I love history as I’ve mentioned authoritarian regimes have collapsed under their own weight typically the

25:52people have risen up one way or another and there’s been an overturning we’ve never had autocratic regimes that had

25:58this surveillance capacity. There are, you know, an estimated 400 million closed-

26:04circuit television cameras in China; that’s one for about every three people. I mean, it’s mindboggling. Oh, it is

26:10mindboggling, and even here in the UK what I’m told is that you’re on a closed-

26:18circuit TV camera every five minutes when you’re moving around. So it is very

26:24serious and of course the irony is as I hinted at earlier here we are with our

26:31smartphones that have got all these capacities, certainly at the audio level, and we’re voluntarily wearing them. So

26:39we’re voluntarily ceding part of our autonomy and our

26:44rights, really, to these machines, when we don’t really know what is being done

26:49with all the information so we have a huge problem and someone has said we’re

26:56sleepwalking into all of this, so that we’re captured by it, we’re imprisoned by it, and we wake up too late,

27:05because the central Authority has got so much control that we cannot Escape anymore so

27:13let’s go back to where I started. Science is blessing us, because a lot of these things are fantastic, you

27:19know, with incredible technology and capabilities. You’ve alluded to some of the useful things. I mean, I love the

27:25way in which I can, in my car, say, hey Siri, call my wife. Just fantastic. But

27:34my question about what we now believe goes to the heart of who do we

27:40think we are what is our status on what basis will we be alert enough to

Rapid AI development

27:47recognize we need to make tough decisions and then on what basis will we make the ethical decisions around how

27:54far this goes. I know it’s a complicated question, but there’s another element to it,

28:00because we haven’t even got into general artificial intelligence yet. We’re still talking, as I understand it, about narrow

28:06artificial intelligence, just masses of it. Yes. Those surveillance cameras, and the people at their desks in Beijing,

28:14you know, collating the information and what have you; there might be a lot of information and a lot of capability, but

28:20those cameras can’t think of another task, you know, how to go and bring my boss a cup of coffee. It’s still narrow.

28:28that’s absolutely right before we’ve got to general intelligence yes

28:33And what we’ve got to realize is several things. First of all, the speed of

28:41technological development outpaces ethical underpinning by a huge factor, an

28:48exponential factor. Secondly, some people are

28:53becoming acutely aware that they need to think about ethics

28:59and some of the global players to be fair do think about this because they find the whole development scary is it

29:07going to get out of control and someone made a very interesting point uh I think it was a

29:15mathematician who works in artificial intelligence and she was referring to the Book of Genesis in the Bible she

29:21said God created something and it got out of control: us. We are now concerned that our

29:30creations may get out of control, and I suppose in particular one major concern

29:37is autonomous or self-guiding weapons, and that’s a huge ethical

29:44field. Here’s a man sitting in a trailer in the Nevada desert, and he’s

29:49controlling a drone in the Middle East and it fires a rocket uh and destroys a

29:56group of people and of course he just sees a puff of smoke on his screen and

30:01that’s it, done. And there’s huge distance between him and the operation of that lethal

30:09mechanism. And we only go up one more from that, where these lethal flying

30:17bombs, so to speak, control themselves. We’ve got swarming drones and we’ve got all kinds

30:22of stuff who’s going to police that and of course every country wants them

30:29because they want to have a military advantage. So we’re trying

30:36to police that and to get international agreement, which some people are trying to do.

30:42now I don’t think we must be too negative about this and I’m cautious

30:48here, but we did manage, at least temporarily, who knows what’s going to happen now, to get nuclear weapons at

30:56least controlled and partly banned. So some success. But with what’s

31:03happening in Ukraine at the moment with Putin and so on, whether he could

31:09shoot a tactical nuclear weapon, or it could be controlled autonomously, make

31:14its own decision yeah but and then where do we go from there and these things are

31:20exercising people at a much lower level but it’s still the same how do you write

31:26an ethical program for self-driving cars? Yeah. So that if there’s an accident that

31:32can’t be avoided, yes, what do you knock down? It’s the switch-

31:38tracks dilemma, yeah, again, that you put before students of ethics,

31:44and it’s very interesting to see how people respond. The switch-tracks dilemma is simply that you have a train hurtling

31:51down a track, and there’s a points, and it can be directed down the left-hand or

31:56the right-hand side. Down the left-hand side there’s a crowd of children stranded in a bus on the

32:02track; on the right-hand side there’s an old man sitting in his cart with a donkey, and you are holding the lever. Do

32:12you direct the train to hit the children or the old man? That kind of thing. But

32:18we’re faced with that all the time, and it’s hugely difficult, without going

32:23near AGI yet. Yet. And let’s come to AGI. Yeah. What is

32:32AGI? Because up until now we’re talking about intelligence that is not human:

32:37it can’t make judgments, it can’t switch tasks, it can’t multitask; it can just be built up to do one thing, even though

32:46that might be massively intrusive as we’ve talked about with surveillance technology but now we’re talking about

32:52something different altogether: artificial general intelligence. It means, well, it means several

33:01things the rough idea is to have a

33:06system that can do everything and more that human intelligence can do do it

33:12better do it faster and so on a kind of superhuman

33:18intelligence which you could think of possibly as at least in its initial

33:24stages being built up out of a whole lot of separate narrow AI systems building

33:30them up, and that will surely be done to a large extent. But research on AGI,

33:38of course, is the stuff of dreams, it’s the stuff of science fiction, so people

The Dangers of Artificial General Intelligence (AGI)?

33:43absolutely love it and interest in it moves in two very distinct directions

33:51there’s first of all the attempt to build machines to do it, that is, machines that are based on silicon, computers,

33:58plastic, metal, all that kind of stuff, and then there is the idea of taking

34:06existing human beings and enhancing them with

34:12bioengineering drugs all that kind of thing even incorporating various aspects

34:18of technology so that you’re making a cyborg cybernetic organism a combination

34:25of biology and Technology to move into the future so that we move

34:32beyond the human and this is where the idea of transhumanism comes in moving

34:39beyond the humans and of course the view is of many people that humans are just a

34:45stage in the gradual evolution of biological

34:51organisms that have developed according to no particular direction through

34:58the blind forces of nature but now we have intelligence so we can take that

35:04into our own hands and begin to reshape the generations to come and make them

35:10according to our specification. Now that raises huge questions. The first one is of course as

35:17to identity: what are these things going to be, and who am I in that kind of a

35:24situation now AGI I mentioned

35:29is something that science fiction deals with a lot the reason I take it

35:36seriously is it’s not only science fiction writers that take it seriously

35:42for example one of our top scientists possibly the top

35:48scientist, who is our Astronomer Royal, Lord Martin Rees. He takes this very

35:56seriously. He says that in some generations hence we might

36:03effectively merge with technology now that idea of humans merging with

36:09technology is again very much in science fiction but the fact that some

36:15scientists are taking it seriously means in the end that the general public are

36:21going to be filled with these ideas speculative at the one hand but serious

36:27scientists espousing them on the other so that we need to be prepared and get

36:33people thinking about them which is why I wrote my book

36:39and in particular in that book I engaged not with a scientist but with a

36:44historian, Yuval Noah Harari, an Israeli historian. Can I interrupt for a sec? Yes, of

36:51course you can. To quote something that he said, just to frame this so beautifully, he actually said this,

36:57because I’m glad you’ve come to him: we humans should get used to the idea that we’re no longer mysterious souls; we’re

37:03now hackable animals everybody knows what being hacked means now and once you

37:09can hack something you can usually also engineer it I just put that in for our

37:14listeners as you go on with your man. That’s a typical Harari remark, and he

37:20wrote two major bestselling books, one called Sapiens, Homo sapiens, human beings,

37:27and the other Homo Deus, and it’s with that second book that I interact a great deal,

37:34because it has huge influence around the world and what he’s talking about in

37:40that book is re-engineering human beings and producing Homo deus, spelled with a

37:47small d. He says, think of Greek gods: turning humans into gods, something way

37:53beyond their current capacities and so on now I’m very interested in that uh

38:00from a philosophical and from a Biblical perspective because that idea of humans

38:06becoming gods is a very old idea, and it’s being revived in a very big

38:13way. Now to make it precise, or more precise, Harari sees the 21st century as

38:21having two major agendas according to him the first is

38:28to, as he puts it, solve the technical

The 2 alarming agendas of the 21st century

38:33problem of physical death so that people may live

38:38forever: they can die, but they don’t have to. And he says technical problems have

38:45technical solutions, and that’s where we are with physical death. That’s number one. The second agenda

38:51item is to massively enhance human happiness humans want to be happy so we

39:00got to do that. How are we going to do that? Re-engineering them from the ground up, genetically, every other way, drugs, etc.,

39:09etc all kinds of different ways adding technology implants all kinds of things

39:17until we move the humans from the animal stage which he believes happened through

39:23no plan or guidance; we, with our superior brain

39:28power we’ll turn them into superhumans we’ll turn them into little gods and of

39:34course then comes the massive range of speculation if we do that will they

39:39eventually take over and so on so forth so that is

39:47transhumanism connected with artificial intelligence connected with the idea of

39:53the Superhuman and people love the idea and you probably

40:01know, there are people, particularly in the USA, who’ve had their brains frozen after death. They hope that one day

40:07they’re going to be able to upload their contents onto some silicon based thing

40:12that will endure forever and that will give them some sense of immortality now

40:18if you notice those two things John solving the problem of physical death

40:25re-engineering humans to become little gods that has all to do with wanting

Humanity’s desire for immortality

40:32immortality, and as a Christian I have a great deal to say about that, because what’s happening, I believe, in the

40:40transhumanist desire for that is a parody of what Christianity actually is

40:46all about. Does that to some extent reflect, I think, that the very great

40:51majority of us are conscious, deep down, that we don’t want to think we’ll come

40:56to an end? Oh no, we don’t. I’m an individual who actually has no great aspiration to live

41:02to an advanced old age. Well, I’m the same, frankly. Not in this

41:08situation, no. Not to say I don’t enjoy life, doesn’t mean that at all; it just means I don’t aspire to great physical old age,

41:14Frailty and what have you um and I have a different perspective on what happens after that but deep down I don’t want to

41:22think it ends with that physical death and I think that’s pretty much hardwired into all of us I think it’s hardwired

41:29and that’s important uh this business of what’s hardwired into human beings

41:36version 101 so to speak I think is vastly important many years ago I came

41:42across that idea in the moral sense CS Lewis talking about in his book and it’s

41:49relevant to what we’re talking about at the moment in The Abolition of Man there is an appendix at the end where he points out

41:54that all around the world look at every culture they may differ but they’ve got certain

42:01moral rules in common it looks as if morality is hardwired I believe it is by

42:08a benevolent Creator but now we come we come up to this and uh we see that

42:16there’s hard wiring again at this particular level God has

42:22set eternity in the human heart now of course that’s a theistic perspective but

42:29if you take the atheistic take on it then you got to explain where it comes from and again I found CS Lewis as

42:37always right on the money so to speak he he makes the point and I’m going to

42:43paraphrase it slightly it would be very strange to find yourself in a world where you got thirsty and there was no

42:49such thing as water now I think that’s a very powerful thing that longing and CS

42:56Lewis has written a great deal about it a brilliant essay called The Weight of Glory that longing for another

43:03world implies and these are not his words but they’re his sentiments that we were actually made for another world now

43:12I feel that the transhuman quest is an expression of the fact that we’re

43:19hardwired with a longing for something Transcendent and it’s trying to fulfill

43:26it and I have reasons for thinking it won’t do that but you may want to ask about that later uh well I think we’re

What is true faith?

43:35probably coming into land the the thing that I wanted to explore with you for a moment is that uh

43:43I think that a lot of people are at the point where they don’t it’s it requires

43:49a lot of energy quite a bit of Anguish to say I’m going to make some tough decisions about what I really believe

43:56and it seems to me that this whole area of artificial intelligence and the chance that we may reach the capacity to

44:02literally destroy ourselves requires us to think long and hard and to make

44:09judgments that will have to be based if you like on faith you can’t know exactly what’s going to happen so you see if you

44:15want to say Well it requires a lot of faith to believe in that think through whether I believe in a God I would have

44:22thought this whole area presents just as great a challenge who am I how am I

44:27going to work this out do I put some ethical framework down or do I just sit in the pot and let the water boil

44:33gradually boil until it’s too late yes I think this is a very important issue

44:39we’ve come to there’s such confusion in the world

44:44about what faith is and that’s mainly the fault and I would say the fault of

44:49people like Dawkins and Hitchens who actually didn’t know what they were talking about because they redefine

44:57Faith actually as a religious word that means believing where there’s no evidence and what they fail to see is

45:03that’s a definition of blind faith that only a fool would get involved

45:08with the word faith in English comes from the Latin fides from which we get

45:15Fidelity which conveys the whole idea of trustworthiness and trustworthiness

45:20comes from having a backup in terms of evidence a bank manager will only have

45:26faith in you if you prove you’ve got the collateral you have to bring the evidence we’d be foolish to trust people

45:33without evidence so evidence-based faith is something everyone understands but

45:38they don’t realize is that it’s essential to science and it’s essential

45:44to a genuine Christian faith in God I I

45:49get leery these days John of using the word faith on its own yeah because people think you’re talking about

45:56religion sometimes they say to me will you give a talk on faith and science I say do you

46:03want me to talk about God oh yes well I say not in your title I could talk about faith in science without even mentioning

46:10God because scientists have got a basic Credo things

46:15they believe they’ve got to believe that the science can be done they’ve got to believe that the universe is rationally

46:22intelligible that is their faith and no scientist could be imagined without it as Einstein once said so if you want

46:29to talk about faith as faith in God please call it faith in God or else

46:34we’re going to get very confused now coming back to this you are absolutely

46:40right this is going to force us whether we like it or not to do some hard

46:47thinking and to reinspect and recalibrate our world view because our

46:54attitude to these things depends on our worldview our set of answers to the big

46:59questions of Life what is reality who am I what’s going to happen

47:04after death and all those kind of things they’re coming out in this area we’re being forced to think about them and as

47:12you say we can sit like the toad in the kettle when the water is boiling and pretend that nothing’s happening but we

47:19can’t afford that that isn’t a luxury that’s suicidal and the trouble is there is a

47:26book called the suicide of the west where we’re just not thinking enough and

47:34I feel and I know you’re doing this and I feel called to do it too to to put

47:40issues out into the public space so that people can really see that they can

47:45think about them and they can come to conclusions about them and as you say

47:51we’re we’re really Landing this discussion and it seems to me that

47:58focusing on what’s going on I read Harari and I read other books like this

48:03and I say you know I can understand what you’re looking for you’re looking for something that’s very deep and hardwired

48:10in us but and I make people smile sometimes when I meet these people

48:16transhumanist and I say guys I respect what you’re after but you’re too late

48:23and they say what too late of course we’re not too late I say yeah actually are too late take your two problems one

48:30physical death I said now I believe there’s powerful evidence that that was

48:35solved 20 centuries ago was actually solved before that but 20 centuries ago

48:41there was a resurrection in Jerusalem we celebrated at Easter we’re just after

48:47Easter now and as a scientist I believe it for various reasons that we can

48:52discuss but the point is that if Jesus

48:58Christ broke the death barrier that puts everything in a different light why

49:06because it affects you and me how does it affect you and me because if that is the case then we need to recalibrate and

49:14take seriously his claim to be God become human I said isn’t that

49:21interesting what are you trying to do you’re trying to turn humans into Gods the Christian message goes in the exact

49:28opposite direction it tells us of a God who became human do you notice the difference and of course that actually

49:35gets people fascinated I say you are actually taking seriously the idea that

49:43humans can turn themselves into Gods by technology and so on why won’t you take

49:49seriously the idea that there is a God who became human is that any more difficult to do and once you’ve got the

49:56that then I think arguably you need to take seriously what Jesus says and what

50:04he says is and that is the Christian message he

50:09is God become human in order to do what to give us his life if you like to

50:17turn us into what you want to be because the amazing thing about this is that the

50:24central message of the Christian faith to you and me is the answer to the

50:30transhumanist dream one Christ promises eternal life that is life that

50:38will never cease and it begins now not in some mystical transhuman uncertain

50:44future but right now secondly because he rose from the dead he

50:49promises that we will one day be raised from the dead to live with him in in

50:56another Transcendent realm that’s perhaps even more real than this one and

51:04that’s going to be the biggest uploading ever you see so your hope for the future

51:12of humanity changing human beings into something more desirable living forever

51:18and happier all of that is offered but the difference between the two is radical because firstly your idea is you

51:26using human intelligence to turn humans into

51:31Gods bypassing the problem of moral evil you’re never going to do it no Utopia

51:38has ever been built and of course you’re not thinking

51:44straight because there have been attempts to re-engineer humanity crude of course the Nazi program of eugenics

51:52the Soviet attempts to make a new man and what did they lead to

51:57rivers of blood 20th century being the bloodiest Century in history mind you

52:02what’s happening now might make this a very bloody Century but what I’m saying

52:07John is that I believe even more strongly than ever that we’ve got as

52:13Christians a brilliant answer and a message to speak into this

52:19that ticks all the boxes but it means facing moral reality which is exactly at

52:27the heart of the scariness with which some people approach these issues John I

52:32think we should land the plane there you couldn’t more clearly

52:37articulate the reality of the changes and the challenges before us and the need for people to get off the fence

52:43and not allow themselves to be satiated by false Comfort the world

52:48doesn’t give us that option anymore in my view if we don’t make decisions now individually and corporately we’re sunk

52:57I don’t want to subtract or add to that remarkable overview of what

53:02we’re facing so I’ll land the plane and thank you very much indeed happy landing

53:12[Music]

RSA Lighting Talk 3 Oct 2023 Youtube

0:02I’m Jeremy pekam and I’ve been a fellow

0:04since

0:051995 I spent most of my career in AI

0:09specifically developing speech and

0:12natural language understanding

0:14technology as well as well as their

0:17application recently I’ve been working

0:19on an AI harms and governance framework

0:22for responsible or what some prefer to

0:25call trustworthy

0:27AI the first part of that framework

0:30is a taxonomy of AI applications and

0:33potential harms this diagram shows six

0:37main areas of potential harm against

0:40examples of applications of

0:43AI the six areas are loss of jobs and

0:47the Dignity of work then loss of truth

0:51and reality for example the output of a

0:53generative Ai and large language models

0:57raise the question of what is truth

1:00especially when they are used to

1:02generate fake news and

1:04videos then we have loss of cognitive

1:07acuity and creativity for example when

1:10generative AI is used for story writing

1:12or creative art loss of moral autonomy

1:16occurs when we allow a self-driving

1:17vehicle as an example to make life or

1:20death

1:22decisions and then the loss of authentic

1:25relationships has been much documented

1:27particularly around the area of social

1:30media and last but not least the loss of

1:33freedom and privacy for example through

1:36the use of facial recognition for

1:39surveillance or the harvesting of vast

1:42amounts of data from the internet

1:44including copyright material for the

1:47creation of large language models this

1:50taxonomy of harms is then set orthogonal

1:53to four pillars of governance the

1:56transparency pillar should begin in my

1:58view with a risk assessment

2:00looking at the potential harms for any

2:02given application of AI in the

2:05taxonomy explainability is the weakest

2:08pillar of the governance framework

2:10because it is virtually impossible for a

2:14statistical machine Learning System to

2:16explain its output to a

2:19user accountability and Justice are the

2:22pillars that are focused on the

2:23regulation and legislation that might be

2:26required to Define who is responsible

2:29for an AI application’s outcomes and also

2:33to enable Justice where individuals are

2:35harmed for example by the use um of a

2:38decision support system let’s look at

2:41how this framework might work for

2:44autonomous vehicles and weapons the

2:47risks Illustrated on the diagram are

2:50mainly to cognitive Acuity for example

2:53the loss of driving skills and critical

2:55thinking but also moral agency when we

2:59allow a self-driving vehicle to make life

3:02and death

3:03decisions the accountability and Justice

3:06pillars show the regulation and

3:09legislation that might be needed to

3:11mitigate these

3:14risks of course not all applications are

3:17a risk to humanity in fact there’s a

3:20spectrum of risk from benign right the

3:23way through to applications that we

3:25might choose to avoid so how do we

3:29evaluate risks I think that a starting

3:33point is asking the right questions on

3:36the side of opportunities and benefits

3:39of the use of this technology we need to

3:41ask what does this Tech do for us what

3:46is

3:47gained the usual answer to these

3:49questions of course is convenience

3:52efficiency and progress or what some

3:55call technological solutionism that is

3:58technology can solve all our

4:01problems on the threat side of the

4:03scales we need to ask but what does this

4:06Tech do to us what is lost putting it

4:10another way we could ask in what ways

4:12does this technology dumb down what it

4:15means to be a human

4:18being now this framework is novel in

4:20setting out a set of specific harms for

4:23different applications of artificial

4:26intelligence rather than providing a

4:28list of more General ethical

4:30considerations and I’d welcome

4:33suggestions for improvements perhaps

4:35even some might like to try using the

4:37framework in real

4:39situations see you in the

4:45Breakout
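Peckham's framework cross-tabulates six areas of harm against four governance pillars. As a minimal sketch of how a risk assessment against that taxonomy might be recorded, here is some Python; the harm and pillar lists follow the talk, but the function and variable names are my own illustration, not part of Peckham's framework:

```python
# Six areas of potential harm and four governance pillars, as listed in the talk.
HARMS = [
    "jobs and dignity of work",
    "truth and reality",
    "cognitive acuity and creativity",
    "moral autonomy",
    "authentic relationships",
    "freedom and privacy",
]
PILLARS = ["transparency", "explainability", "accountability", "justice"]

def assess(application, harms, notes=""):
    """Record which harm areas a given AI application touches.

    Unknown harm names are rejected so assessments stay within the taxonomy.
    """
    unknown = [h for h in harms if h not in HARMS]
    if unknown:
        raise ValueError(f"not in taxonomy: {unknown}")
    return {"application": application, "harms": sorted(harms), "notes": notes}

# The autonomous-vehicle example from the talk: risks mainly to cognitive
# acuity (loss of driving skills) and to moral autonomy (life-or-death
# decisions), with mitigation falling under accountability and justice.
av = assess(
    "autonomous vehicles",
    ["cognitive acuity and creativity", "moral autonomy"],
    notes="regulation and legislation needed under accountability and justice",
)
```

The point of the structure is the talk's own: harms are named per application rather than as general ethical principles, so each assessment is a concrete, comparable record.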

What Sam Altman Doesn’t Want You To Know

0:25So why does Altman seem so upset here?

0:27After all, this interviewer

0:28was just pointing out a basic fact that OpenAI has committed to spend over $1

0:33trillion on AI infrastructure over the next eight years, despite only

0:36bringing in around $13 billion a year in recurring revenue,

0:40less than 1% of what they’re promising to spend.

0:44I’m no money genius- and I’m personally terrible

0:47at budgeting-but that doesn’t seem great.

0:50Most of the supposed growth in the American

0:52economy in 2025 was caused by investment in AI.

0:56That’s all part of a promise being made by the industry, led by Sam Altman,

1:00that once a certain level of machine learning intelligence is reached,

1:04all of our problems will be solved.

1:06The housing crisis,

1:07cancer,

1:07poverty,

1:08climate change

1:09mental health,

1:09democracy,

1:10universal basic income care, a bunch of diseases, this cancer

1:12and that one, and heart disease

1:14helping you try to accomplish your goals and be your best.

1:16Very high quality health care.

1:17The important new scientific discoveries

1:19the marginal cost of energy are going to trend rapidly toward zero.

1:21The more equal world universal extreme health for everybody.

1:24In exchange for all that

1:26Altman is asking all of society to put all of our eggs-our data,

1:30our economy, our water and resources… everything-into one basket:

1:35his. He’s offering us one, massive,

1:38

1:40So, should we trust Altman?

1:42Should we accept his deal?

1:43Is it even our choice?

1:45Altman isn’t a technologist or scientist,

1:48He’s an investor and dealmaker and really good at it… supposedly.

1:52But his whole career is a series of ‘just trust me, bro’ moments.

1:56So let’s examine the deal

1:58Altman is offering all of us.

1:59Should we believe Sam Altman’s promises?

2:01And what’s the cost to the rest of us if those promises

2:05turn out to be… lies?

2:10So let’s go back and look closer

2:11at Altman’s early days in the tech industry.

2:15Altman’s first big deal was selling his first company,

2:18Loopt a service for locating your friends.

2:22That’s something that inherently needs lots of users to work, or else you’re just

2:26locating yourself.

2:27The operative idea seems to be ubiquity.

2:29I mean, get get it out there in more ways than you can possibly imagine.

2:37This whole time, Loopt refused to say how many users they had.

2:40Altman just insisted there were “way

2:42more users” than any other similar service.

2:45It turns out, though, that towards the end, Loopt only had 500 users.

2:51When Reuters reported this, Altman insisted it was “100 times”

2:54more than that and that he’d provide evidence… He never did.

2:58Just trust me, bro.

3:00Loopt sold to the Green Dot Corporation,

3:03who shut it down immediately and never used any of the tech.

3:07Green Dot investors allege it was a dirty deal done to enrich

3:10Sequoia Capital, a VC firm with a stake in Loopt

3:14and two board members at Green Dot who helped approve the deal.

3:19Altman left Green Dot as soon as he was legally able,

3:21walking away with millions for building an app that no longer existed in any form.

3:26And luckily for Altman, someone saw something in him.

3:30Peter Thiel. Thiel, who once said that Altman should be treated as

3:34″more of a messiah figure” gave Altman millions

3:37to start his own VC firm, Hydrazine Capital.

3:41And that’s not all the capital Altman controlled.

3:44He was also hired as president of Y Combinator, or YC,

3:47an influential venture capital firm and startup incubator,

3:51where Loopt got its original funding. “I think the president of YC

3:55is sort of the unofficial leader of the startup movement.”

3:58And Altman personally traded on that influence.

4:01The New Yorker reports that up to 75% of Hydrazine

4:05Capital was invested in YC companies.

4:08Altman used his inside view to get a cut of

4:11YC’s power.

4:12Despite Altman promising he didn’t cross invest in YC companies.

4:16That’s two big lies so far: the user base of LOOPT

4:20that needed users to exist, and his investments.

4:24In 2015, Altman leads YC into the investment you likely most know him for:

4:29″Sort of a semi-company, semi-nonprofit, doing AI safety research.”

4:34OpenAI was launched as the supposed nonprofit OpenAI Foundation with a charter

4:39with a lot of lofty goals, “a primary fiduciary duty to humanity”

4:43and “avoiding enabling uses of AI or AGI

4:47that harm humanity or unduly concentrate power,” while

4:50acting to “minimize conflicts of interest among our employees and stakeholders.”

4:54The evidence that they’d do that? “Just trust me, bro.”

4:58OpenAI’s primary financial

5:00backers were tech billionaires and millionaires like Altman

5:03himself, Peter Thiel, Reid Hoffman and Elon Musk,

5:07and tech companies like Amazon Web Services and Infosys.

5:11We wanted to build this with humanity’s best interest at heart.

5:14But in exchange, OpenAI is asking for a lot…

5:17Putting all of society’s eggs in one basket, if you will.

5:20They want electricity, water, infrastructure…

5:25Capital…

5:26Your data… Your writing… Your art…

5:29And for humanity to adjust to job

5:32loss, deepfakes and everything else.

5:35All in exchange for some future promise of technology that fixes everything.

5:40So, can we trust him with all of this?

5:43Let’s look at some of his biggest statements

5:45and promises to show how they tie to all the eggs in the basket.

5:50Altman insists he doesn’t own any of OpenAI

5:53and he barely takes a salary.

5:56I’m paid enough for health insurance. I have no equity in OpenAI.

5:58I’m doing this because I love it.

5:59But he doesn’t hide that he’s already rich

6:02trying to do a rich-guy-using money-for-good Batman thing.

6:05That Batman.

6:07Such a wonderful person.

6:09I don’t deserve it.

6:10But we millionaires decided that you do.

6:13But let’s look at how this is part of his honesty problem.

6:16And it ties in to the eggs in the basket, because Altman is invested

6:20in all the stuff necessary to build OpenAI.

6:24One of the eggs OpenAI needs is a ton of data:

6:27you can’t build a large language model without examples of language

6:31and content, and one source of that data is Reddit.

6:35Altman owns

6:35a big share of the social networking site and was on its board until 2022.

6:40Reddit got its start in the same inaugural Y Combinator class as Loopt.

6:45Here’s Altman standing next to Reddit co-founder Aaron Swartz in 2005.

6:50Swartz died by suicide in 2013 after being criminally charged

6:54for reproducing academic articles online and breaking copyright law.

7:00In 2015, Altman made a deal with Reddit, allowing OpenAI to “basically

7:04aggressively scrape everything posted on the site” to feed into OpenAI’s tech.

7:08Reddit co-founder Alexis Ohanian “felt in his bones” the deal was wrong.

7:13It’s a less noble version of what Reddit co-founder

7:16Aaron Swartz was targeted by law enforcement for.

7:19Swartz wanted to open the knowledge up to everyone.

7:22Altman wanted to put it in his product.

7:26In 2014, Altman promised that he and other investors

7:29would give 10% of Reddit’s value back to the Reddit community.

7:33That never happened, due to “regulatory issues.”

7:37But just like Reddit’s data going to OpenAI, a look at the areas

7:41Altman’s wealth is invested

7:42in shows a deep connection to other needs of the organization.

7:46He’s invested in AI networking equipment companies, thermal battery companies,

7:51and even companies mining the rare earth metals that server farms require.

7:56And once it’s all built, Altman will profit off the problems AI creates.

8:02We’re going to focus on three: Rising energy demands and costs.

8:06Misuse like fraud and deep fakes.

8:08And job loss and economic collapse.

8:12Altman says

8:13again and again that OpenAI needs more power.

8:17The, “audacious long term goal is to build

8:20250GW of capacity by 2033.”

8:24That much compute will require as much electricity as 1.5

8:28billion people, the equivalent of the entire population of India.
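That comparison can be sanity-checked with some back-of-envelope arithmetic (my own, not from the video; the Indian per-capita consumption figure is an approximation):

```python
# 250 GW of continuous capacity spread across 1.5 billion people.
capacity_w = 250e9   # 250 GW expressed in watts
people = 1.5e9

watts_per_person = capacity_w / people                  # ~167 W continuous
kwh_per_person_year = watts_per_person * 8760 / 1000    # ~1,460 kWh per year

# India's per-capita electricity consumption is roughly 1,300-1,400 kWh/year,
# so "as much electricity as the population of India" is in the right ballpark.
```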

8:32But Altman has a solution: since they first met in the early 2000s,

8:36Peter Thiel and Sam Altman have had a shared interest

8:39investing in nuclear power, which isn’t inherently bad.

8:42Of course, nuclear can be an extremely efficient

8:45and clean form of energy, but Thiel and Altman want to own it.

8:49Altman is invested in Helion and Oklo.

8:51Helion is working to build the first ever nuclear fusion power plant,

8:55a type of energy creation that many scientists say won’t work

9:00and Oklo is building

9:01microreactors, literally truck sized nuclear reactors,

9:05which is a bit concerning considering this investment strategy.

9:09″Part of our model is make the cost of mistakes really low,

9:13and then make a lot of mistakes.”

9:15But for now, Oklo hasn’t figured their reactors out yet,

9:18and they’re just using gas to keep up the promises they made.

9:21Nuclear startup Oklo and natural gas firm Liberty Energy today

9:25announcing a partnership to provide energy to large scale customers.

9:30Altman is also

9:30invested in multiple companies offering protection against

9:34AI bad actors, identity verification to prevent deep fakes, and even companies

9:38offering insurance for losses due to AI scams and hacking.

9:42That’s like Batman not making any money off of crimefighting,

9:46but then selling “Batmobile drove into my house” insurance

9:50while also running the Uber for henchman startup that The Riddler uses,

9:54and selling The Joker white makeup.

10:00One other

10:00big promise Altman makes is that when the AI he sees as inevitable

10:04makes many jobs

10:05obsolete, it’ll create so much wealth that it can be shared with everyone.

10:09just like his smaller scale Reddit promise that turned out to be bullshit.

10:14And in 2024, he announced the product that would supposedly

10:18offer that shared abundance.

10:20Worldcoin.

10:23Worldcoin is a technology company and cryptocurrency

10:26funded by all the usual suspects of techno fascism.

10:30Worldcoin’s backers say it can be a way to give out some form

10:33of universal basic income.

10:35When AI starts replacing jobs,

10:38I think this idea that we have a global currency

10:42that is outside of the control of any government

10:45is a super logical and important step on the tech tree.

10:50But it also sells itself as a solution

10:53to identity verification problems created by AI.

10:57They want to use these orbs as a method of trusted identity check,

11:02and you don’t get your universal basic income

11:05until you scan your eyes into the orb

11:09and like many of Altman’s other projects, from Loopt to ChatGPT

11:12it requires universal adoption to be of any business use.

11:17A currency and identification system are pretty useless if other people don’t

11:21use them. So again, Altman is making an offer.

11:24Give us your identity and we’ll give you cryptocurrency.

11:27It’s a classic Altman deal.

11:29I’ll fix everything if you sign over everything.

11:32Just trust me, bro.

11:34It’s almost like Altman wants to build a whole other economy.

11:38Just in case the one we have now falls apart.

11:40Which, well, we’ll get to that.

11:41In 2019, OpenAI gave up any pretense

11:45of being nonprofit and started a for-profit branch,

11:49then spun the for-profit out into its own entity in 2024.

11:54That for-profit organization has none of the same legal

11:57responsibilities as the nonprofit did, and brought in new investors

12:01like Microsoft, which invested $13 billion,

12:05which OpenAI largely spent on Microsoft products.

12:09And it’s not just Microsoft, Nvidia has promised to invest $100 billion

12:13in OpenAI over the next few years, money that OpenAI will spend buying

12:17Nvidia chips.

12:19OpenAI has similar circular deals with AMD,

12:22the Qatari government, and Larry Ellison’s Oracle.

12:26How about the 20 bucks you owe me? Well, I only got ten, so here’s ten and I owe you ten.

12:29Hey, Moe, you owe me 20.

12:32Well, here’s ten, I’ll owe you ten. You owe me 20.

12:34Here’s ten, I’ll owe you ten.

12:35Here’s the ten I owe you.

12:36Good. Now we’re all even.

12:39The entire

12:39economy is tied to the success of Altman’s project.

12:43″We might screw it up, like this is the bet that we’re making

12:46and we’re taking a risk along with that.” Who is the “we” taking the bet?

12:50Here’s OpenAI’s CFO

12:52Banks, private equity, maybe even,

12:56governmental,

12:59the ways governments can come to bear meaning like a federal subsidy or,

13:03meaning like just first of all, the,

13:05the backstop, the guarantee that allows the financing to happen.

13:09Through all of that

13:10stammering, the CFO of OpenAI is making a clear point:

13:14The government, your tax dollars, are responsible for saving the AI project.

13:20That’s more eggs

13:22in the basket.

13:24And that basket is based on the promises of Sam Altman, who

13:27as we’ve illustrated, lies and breaks promises a lot.

13:32So if we really look

13:34at the basket,

13:36maybe we shouldn’t have been putting all those eggs in there.

13:39And it gets worse.

13:40While we were editing this video, news broke that OpenAI is seeking a $750

13:45billion valuation and is in talks

13:48with Amazon for a $10 billion investment.

13:52That’s money that OpenAI would spend on

13:54Amazon infrastructure. So

13:58I’m going to need more eggs.

Artificial Intelligence and the Publishing Industry

0:01[Music] on about books we delve into the latest news about the publishing industry with

0:08interesting Insider interviews with publishing industry experts we’ll also give you updates on current non-fiction

0:15authors and books the latest book reviews and we’ll talk about the current non-fiction books featured on c-spans

0:22book TV this episode is brought to you by

0:29Shopify forget the frustration of picking Commerce platforms when you switch your

0:34business to Shopify the global Commerce platform that supercharges your selling wherever you sell with Shopify

0:41you’ll harness the same intuitive features trusted apps and Powerful analytics used by the world’s leading

0:47Brands sign up today for your $1 per month trial period at shopify.com all

0:53lowercase that’s shopify.com and welcome to about books now in a few

0:59minutes we’ll take a look at how artificial intelligence might affect publishing and the writing of books but

1:06first here’s some of the latest stories from the publishing World former

1:11president Barack Obama released a statement in support of libraries fighting book bans in his letter posted

1:18online he said in part that quote books have always shaped how I experience the

1:24world today some of the books that shaped my life are being challenged by people who disagree with certain ideas or

1:32perspectives I believe such an approach is profoundly misguided and contrary to

1:38what has made this country great well the National Review pushed back on President Obama’s letter saying that

1:45quote when one digs into the controversies that have inspired Obama’s missive one quickly discovers that it is

1:53not so much that ideas and perspectives are being suppressed in America as that

1:59age-inappropriate material is being removed from its schools and in some

2:04cases from the children’s sections of public libraries in other news layoffs and

2:11voluntary buyouts have begun at penguin Random House according to Publishers

2:17Weekly CEO Nihar Malaviya confirmed that long rumored layoffs have become a

2:23reality quote as you know the book Marketplace has had several shifts over

2:29the past years the penguin randomhouse CEO wrote to employees we too have

2:35experienced these shifts and changes especially during the last months we are halfway through 2023 and while the book

2:42Market has grown particularly over recent years we have also faced

2:48significant increase cost in all areas across the board and we expect these

2:53increases as well as inflation to continue well approximately 49% of those

3:00eligible for a buyout have accepted it at penguin random house including

3:05editors who have worked with authors such as Robert Caro Steven Pinker

3:10Elizabeth Gilbert and Ray Kurzweil and Publishers Weekly also

3:15reports that book sales were down nearly 3% in the first half of 2023 from

3:23363 million books sold in the first 6 months of 2022 to 354 million sold so

3:31far this year Publishers Weekly writes that quote the Boost provided by Prince

3:38Harry’s Spare made biography autobiography Memoir one of the only three adult

3:44non-fiction subcategories to have a sales increase in the first 6 months

3:50travel had the largest increase up 6.6% the two categories most closely

3:56associated with stay-at-home activities during the pandemic had the largest

4:02declines as sales of Home gardening books dropped 17.5% and sales of cooking entertaining

4:09titles fell 15.4% and finally in publishing news

4:14Simon and Schuster is releasing a new Memoir by Cassidy Hutchinson the former

4:20Special Assistant to president Trump she was a key witness for the house select

4:25committee to investigate the January 6th attack on the US Capitol the book's

4:31title Enough is set to be released in September Simon and Schuster said that

4:37in her memoir Hutchinson reveals the struggle between the pressures she confronted to toe the party line and the

4:44demands of the oath she swore to defend American democracy Enough reaches far

4:50beyond the typical Insider political account and now on about books a

4:56conversation with Thad McIlroy a publishing consultant about artificial

5:02intelligence and its potential impact on publishing well Thad McIlroy is a publishing

5:09analyst and consultant recently Mr McIlroy for Publishers Weekly you wrote an article

5:16that was entitled artificial intelligence is about to turn book publishing upside down what are you

5:23positing here that article’s been fun I’ve heard

5:28a tremendous amount of response to it I've been pondering like all of us what's going on

5:35with this ChatGPT stuff and of course with my background in book publishing I was trying to uh connect the

5:43dots let’s say on that and when I connected the dots I thought gosh this could be an enormous change in the

5:50industry that I’ve worked in for decades and so I tried to push out there some of

5:56the ways in which it could affect all kinds of different aspects of the book publishing

6:01industry so when we’re talking about GPT we’re talking about generative

6:07pre-trained Transformers what exactly are those and how would they affect the book

6:13industry it it’s fun how we get into the vocabulary because you’ll hear about

6:18LLMs large language models GPT as you just uh described we hear about generative AI

6:27and all of those are variations on a theme for people who aren't familiar with it

6:36you can come up with a pretty straightforward basic non-scientific kind of an

6:43explanation essentially a large database of language it's such a large

6:50database that when you pose simple English questions to it it is able to

6:56emulate an authentic-sounding human response and of course you know if there

7:03wasn’t a heck of a lot of data in there that response would just be stupid it

7:09would not make any sense whatsoever the database is deep enough that it comes up with very

7:15realistic reasonable kinds of responses uh with surprising accuracy and

7:21surprising depth of detail and it can posit responses because of the size of

7:27the database and so when you work with chat GPT you start to think this is a

7:33device that's humanlike in its responses to me and it does that most of the time what you hear about all the

7:39time of course is its so-called hallucinations because it is really only trying to emulate language it is not um

7:48releasing information per se and so when it emulates language sometimes it makes

7:53stuff up so Mr McIlroy can we say that in the future artificial intelligence will

8:01be writing books it is doing so now they're not

8:07very good books I think they will be very very good books uh both fiction

8:13and non-fiction created with artificial intelligence I’m convinced of that now where could if somebody were interested

8:20where could somebody read an AI-generated book the usual bookstore Amazon there's

8:28quite a few books the ones that are up there now are a bit disgusting to me because most of them are sort of

8:34get-rich-quick kinds of books um I don't necessarily recommend them you can read free samples maybe that's the way to do

8:41it I wouldn’t recommend spending money on it but keep an eye out you’ll start to see some some pretty good non-fiction

8:47initially and then some pretty fascinating some non-fiction mostly I think fantasy and uh romance that kind

8:55of thing will be very suited to the technology of GPT and so if you're an author and you want to write a book

9:03about fantasy and dragons how would you program that what would

9:08you say to the computer or to the uh GPT um it would be like a

9:16conversation with a friend it would be like a friend helping you write a book you’d start off by saying okay

9:24GPT you're my co-author and together we're going to work on a book

9:30and I’m going to tell you little bits about the story and then you’re going to take the story and you’re going to

9:35expand on the story I want you to give me 5,000 words our main character

9:41goes into the forest and runs up against an

9:47obstacle something like that and GPT will iterate back at all kinds of different lengths and you can start to build the

9:55story so in your Publishers Weekly column you wrote that I believe that every

10:02function in trade book publishing today can be automated with the help of generative AI and if this is true then

10:10the trade book publishing industry as we know it will soon be obsolete is that a uh pessimistic

10:18attitude or is that an optimistic attitude I'm a total optimist

10:24when you pull it out in that kind of context it sounds dire and like a dreadful threat to the industry I

10:32think that I work in an industry that's you know um very much bound by

10:37tradition by a set of expectations I suppose most industries are but book publishing you know has that aura of

10:43antiquity and is darn proud of it at times and I think it’s an industry that

10:49would be more robust with the um full use of technology that we can get

10:56more voices better voices to more readers using this kind of technology I think that’s a great thing so besides being a

11:04so-called co-author as you discussed earlier how else would artificial intelligence change the publishing

11:12industry in the article I looked at all kinds of aspects of it you know I’ve worked in different parts of publishing

11:17through my career so I've worked in editorial I've worked as an author I've got a number of books published no

11:23fiction but lots of non-fiction uh I've worked in production and printing of books all

11:29that kind of thing I’ve worked in marketing of books and so I I consider each of those publishing areas and each

11:35of those functions are essential and need to be executed as robustly as each

11:41other each of them is very amenable to different aspects of the uh Power of the

11:48generative AI some a little bit less so than others but I was able to you know

11:54look at them each sort of in a sequential way and think yep it'll touch that yeah that one it won't touch as

12:00much this one oh goodness it's going to completely uh redo how we're

12:06going to look at uh simple editorial functions well let’s talk about the production of a book how would AI affect

12:14that well the production I think of as sort of it’s a multi-stage process within what we call production but you

12:21know let’s assume that the manuscript has been fully edited we’ve got the final words that we want to have uh

12:27published so at that point we're going into a typesetting phase we're going

12:33into a design phase both the internal parts of the book and the cover of course of the book If the book’s

12:38illustrated then we have to create a lot of illustrations for the book when all of that is done we go to printing

12:45files that are ready for the printing press then we go to printing plates and onto the printing press all of those

12:50stages at the design typesetting production phase um each one of them

12:57can be uh accelerated and enhanced by AI we've seen what it can do with images so

13:03cover design is a no-brainer even today with the current state of the

13:09technology and that’s only going to get better the interior design of the book very very simple to do with the help of

13:16AI those are some for-instances when it comes to distribution

13:21and advertising you write in your Publishers Weekly article for the publishing industry online distribution

13:28and advertising have separated writers from readers what does that

13:33mean yeah this one's close to my heart you know I'm

13:39a writer I’m a reader um and I’ve always been frustrated and a little bit

13:45dismayed let’s say by the role for example I mean Amazon gets attacked a

13:51lot and I think there's much to criticize about Amazon at the same time it's brought a lot of books to a lot of people at some very good prices but what

13:58it does in that act of of bringing the books to people is it as they say

14:03disintermediates between the reader and the writer and you can see that you

14:09know readers are very passionate when they come up against a good book and they fall in love with an author and the author's work but in between

14:16them sits Amazon in many cases when it's a bookstore that sits in between them that's I think a very good

14:22thing because there's a human interaction between the two it's not disintermediated it's accelerated

14:29by those kinds of personal relationships so what I want to see what I hope to see what I expect to see with GPT is new

14:36ways for uh writers to get to their readers and to communicate and to you

14:43know to find and enhance those relationships and vice versa for readers to find you know even more books even

14:50better books than they’re finding today when it comes to authorship can an

14:56author claim to have written a book that’s been co-authored by uh generative um by

15:06GPT that one’s uh that’s the million dollar question and we’re going to see a lot of controversy we’re starting to see

15:12it already we’re going to see that I think is going to play out through the courts eventually there’s going to be a

15:17lot of litigation around that you know if we take out the um legal aspect of it

15:24I can tell you that I’ve you know used GPT to help with my writing I could tell

15:30you that I’ve created this whole chapter solely with GPT and then checked it myself to confirm that it’s what I

15:35wanted so GPT can be my co-author literally technically speaking today

15:41whether or not I have to disclose that to the publisher does the publisher have to disclose that to the public what are

15:47the kind of moral and legal responsibilities I think that's going to be determined over the months

15:52ahead 10 years I suppose so Thad McIlroy you use the example of cars

15:59in your uh Publishers Weekly column is it safe to say that we’ve become lazy

16:06drivers in the sense that we have power steering and power brakes and and some other features in our cars now that we

16:12didn’t used to have and Riders can become lazy because they will have ai

16:19helping them it's a good question and it's one that's you know a reasonable

16:25question and a lot of people are asking and in one scenario that would be a kind of peril of this I don't think so I

16:34think that writers are you know by nature creative they want to do the best they can possibly do for

16:41their own you know for their own pride and professionalism and also because they want to reach readers and so I

16:48think that GPT is not something that gets in the way or you know lessens that

16:53process it's something that enhances it I can be a better writer I can do more research it can help me with my uh ability

17:01to express thoughts and ideas creative ones or very you know literal ones so no it doesn't make us lazy I think

17:08it’s a tool that makes us stronger more robust thed melroy how is

17:14GPT how is it programmed where is it getting the information let’s say we’re

17:19writing a book about the 2020 presidential election and we’re using AI

17:25to help us with that where is that information coming from that’s a tough question because

17:33I’ll try and keep it sort of basic um one of the aspects of the current

17:38generation of GPT is that the language the um information let's call it the

17:44words that it has been trained on up to now they cut off around what was it 2021

17:49depending on the language model they cut off a certain number of years they continue to add to that so-called Corpus

17:56of knowledge of information something's happening at the same time you know that you're adding words to the

18:02database those words convey meaning and they’re they’re taken from web pages from Wikipedia pages and so those words

18:09are both you know individual uh you know units of text

18:16but they’re also conveying information they contain information and so you know as GPT moves forward we’re going to see

18:23more and more um current information in-depth information and some that’s

18:29going to be trained on books that’s one of the current controversies is that some of GPT is actually trained on books

18:36that have been published many of those books are in the so-called public domain they're outside of

18:42copyright others as it turned out in the current Corpus uh have been books that

18:47are in copyright and that’s causing a lot of controversy so we’re seeing this already

18:54we’re seeing the use of AI GPT Etc when it comes to education is that correct

19:01and students using this what's the danger

19:06there education is probably the most profound use case at this point I was

19:12talking the other day with one of my colleagues who has worked in educational publishing and as we began

19:18to you know go back and forth on what some of the possibilities are you

19:24start to think okay a book of instruction that can be customized to the particular knowledge of the

19:30individual reader you no longer have to have one textbook for you know the grade nine level what you can have is a

19:37different textbook for every single person in the class you're seeing some of the online education uh

19:44programs or existing systems starting to use GPT and using it very effectively

19:49for one-to-one kind of tutoring you can see a disintegration of the textbook as

19:55a form as the information becomes totally online and individualized to the student so that's where it seems like GPT

20:03in the short term will have some of its biggest impact Thad McIlroy one of the fastest

20:09growing parts of the book industry is audiobooks are we seeing computer

20:16generated voices reading audio books at this point we sure are and I’ve just

20:22before the call today I was reading about a company that's just uh landed

20:28a very large investment and they do artificial intelligence-enabled

20:35audiobook creation without a human narrator um some of your listeners will

20:41be um aware of this as a phenomenon because it's a couple years old where we

20:46can clone voices from other people there’s an instance very famous well-known instance now where the actor

20:54Edward Herrmann who's no longer with us his voice is being used um to narrate

21:01audiobooks as it were it's under license it's been a legal use of his voice and a source of revenue to his estate

21:08which is a curious aspect of it but yeah the AI has gone from being a

21:13pretty good tool for this to being an excellent tool a couple years ago when I first started studying it you could tell

21:20it was still a computer today very easy to convince someone that that voice is a

21:25human that was in the studio reading that book and it was generated very rapidly very inexpensively with

21:32artificial intelligence what is your company The Future of Publishing and your website thefutureofpublishing.com

21:40I consult to all kinds of publishers uh to authors to distributors

21:48I work on all of the technical challenges that the industry faces I've been doing that for

21:54quite a long time what fascinates me most is that intersection point between technology and traditional

22:00publishing do you find reticence when it comes to the Publishers or

22:07authors and I’m using a very silly word but an ickiness to this yeah yeah I do

22:15it I talk about publishing as a technology-averse industry it's not

22:22merely that the industry and the average author are not terribly sophisticated around technology I think

22:29they’re intimidated by it and that intimidation can become hostility hence

22:35the aversion uh aspect of technology I you know wasn't born

22:40a technologist my entry into publishing was you know the love of the creativity of great writing and

22:47technology was something that I picked up later on and so I've worked for a long time trying to make people more

22:53comfortable with the technology because it's there to help not to hinder it's not something to block but a

23:00tool that can make your work far more effective but I understand how people who aren't comfortable with

23:06technology can find it intimidating I'm there to try and help them over the hump on that stuff you

23:12close your Publishers Weekly column by comparing AI GPT to the printing press

23:20and the transformation in the world by the printing press yes you know

23:29so many times when you read about technology or talk to people about technology and publishing they try to

23:35find um you know something that makes it the biggest possible yet and so

23:40there's always you know this is the biggest uh transformation since Gutenberg and the printing press and

23:47usually that is pure hype and you know most of the transformations of the last number of decades you know some of them

23:53have been very good very useful but they haven’t been as big as the transformation of the printing press

23:58this one could be I you know I’m not convinced it is and I recognize that’s a

24:04you know a big hill to climb to say my God you know as important as

24:09pre-printing press to post-printing press well I'd like to contextualize it

24:15that way for people to consider that yes indeed this could be enormous and if

24:21it’s enormous what does that mean to the work they do whether it is as an author or within a publishing company if it was

24:28at that scale what would the world look like right afterwards again you know if you consider the impact of the

24:34printing press it has been you know an enormously positive impact some drawbacks but enormously positive impact

24:41I think it's equally possible that this generation of technology could be as profoundly and wonderfully

24:49impactful Thad McIlroy's website is thefutureofpublishing.com

24:55artificial intelligence is about to turn book publishing upside

25:00down Mr McIlroy thanks for spending a few minutes with us to touch on some of the issues that may be affected by

25:08AI my pleasure Peter thanks for having me and this is About Books a program and

25:13podcast produced by C-SPAN's Book TV well each Tuesday dozens of new books are

25:19released here's a sampling child psychiatrist and author Miriam Grossman is

25:25out with her latest Lost in Trans Nation A Child Psychiatrist's Guide Out of

25:31Madness Ms. Grossman testified at a hearing on Capitol Hill earlier this summer about her concerns about gender

25:38affirming care and puberty blockers and Donovan Ramsey is out with a new book

25:44about the crack epidemic and its impact on Black Americans it's entitled When

25:49Crack Was King A People's History of a Misunderstood Era and another new book

25:56out is by former investment banker and financial commentator Carol Roth It’s

26:01her latest book and it warns that Global Elites are targeting private property

26:06ownership and financial freedom it's entitled You Will Own Nothing Your War

26:12with a New Financial World Order and How to Fight Back and finally one novel to

26:17tell you about it's by authors Marie Benedict and Victoria Christopher Murray

26:23it's called The First Ladies and it explores the partnership between Eleanor Roosevelt

26:29and civil rights activist Mary McLeod Bethune well coming up on Book TV it's

26:35our weekly Afterwords program this week it's Pulitzer Prize-winning journalist

26:40Wesley Lowery on his new book American Whitelash A Changing Nation and the Cost of

26:46Progress he's interviewed by Columbia University's Jelani Cobb here's a

26:51portion if our country was founded on an

26:56explicitly white supremacist system in that people who were coded under law as

27:02white had claim to the full promise of American Freedom while people who were

27:07coded as black did not have claim to that promise which is just the fact it’s how we were founded is what it looked

27:14like that system is a white supremacist system it's a system that

27:19prioritizes and places above others people who are coded as white we've seen steps over our history to undo that and

27:28create a multi-racial democracy and those steps date back to the revolts of

27:34enslaved people they date to the Abolitionist Movement the Civil War emancipation and reconstruction they

27:40date to the civil rights movement in the 50s and 60s um and they date to the steps towards the election of a black

27:48president in each of those incidents what we see is that as people fight to

27:55upend a white supremacist status quo those who are the beneficiaries of that

28:00status quo lash out violently in in defense of a system for which they are

28:06the beneficiaries and so what we see is that following uh the revolts of

28:12enslaved people we see massive acts of violence both interpersonal and in terms of policy the cutting down on the

28:18ability of enslaved people to have access to reading or to education the freedom of movement um in some cases

28:24quotas on how many newly enslaved people could be brought to a given colony to make sure that they would not lose an upper hand

28:30in terms of maintaining the populations we see the violence and the backlash to the Radical Republicans in the South

28:37and the overthrow of multi-racial democracy as it was established in reconstruction we see the the violent

28:44crackdowns of civil rights and civil liberties of black Americans following

28:50uh the civil rights era and the civil rights movement and we see the rise of these white supremacist groups whether

28:55it be the skinheads whether it be the militia groups whether it be what we called the alt-right for a while right

29:02this idea that when white supremacy is threatened

29:09people lash out in its defense and in those moments I think those moments are best understood as a white lash and a

29:16reminder that Afterwords airs every Sunday night at 10 p.m. well thanks for

29:22joining us for About Books a program and podcast produced by C-SPAN's Book TV you

29:27can watch all booktv programs online anytime at booktv.org and you can listen

29:35to this program as a podcast download it on our C-SPAN Now app

29:41[Music]

29:57[Music]
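McIlroy's explanation of GPT in the interview above, a model built on a huge corpus of language that emulates plausible responses rather than relaying facts, which is why it sometimes "makes stuff up", can be illustrated with a toy sketch. This is emphatically not GPT (real models are neural networks trained on billions of documents); it is a minimal bigram model over an invented fifteen-word corpus, just to show what "pick a likely next word" means and why fluent output carries no guarantee of truth.

```python
# A toy bigram "language model" (illustrative only, not GPT): it picks
# a plausible next word given the current one, with no notion of facts.
import random
from collections import defaultdict

corpus = ("the dragon entered the forest and the knight "
          "entered the castle and the dragon slept").split()

# Count which word follows which in the training text.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, n_words, seed=0):
    random.seed(seed)
    out = [start]
    for _ in range(n_words):
        options = follows.get(out[-1])
        if not options:  # dead end: no observed continuation
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the", 8))
```

Every word it emits is statistically plausible given the corpus, yet the sentences it strings together may describe events that never occurred in the training text, which is the small-scale analogue of a hallucination.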

History of Artificial Intelligence

0:00[Music]

0:05this week on the lectures in History Podcast Princeton University history professor Matthew Jones teaches a class

0:12on the history of artificial intelligence he also discusses the debates over its development hang tight

0:18class starts right after this this is Rachel from the C-SPAN podcast

0:23team and before we get to this week’s episode I’d like to introduce you to my colleague Jen thanks Rachel hi I’m Jen

0:30one of The Producers here at C-SPAN and if you enjoy lectures in history we think you’ll also like reading our

0:36weekly American history TV newsletter if you’re into history you’ll appreciate being an American history TV Insider

0:43every week we deliver Advanced program highlights so you never miss out on learning more about the people and events that document the American story

0:50it’s the place to find out which lectures in History Civil War battle talks features on the presidency and

0:57interviews with historians are coming up plus you'll get highlights of featured C-SPAN podcasts subscribe today at

1:05c-span.org/connect for your weekly dose of history every Friday thanks for being

1:11part of our community don't forget to visit c-span.org/connect to sign up this

1:19episode is brought to you by Shopify forget the frustration of picking Commerce platforms when you

1:25switch your business to Shopify the global Commerce platform that supercharges your selling wherever you

1:31sell with Shopify you’ll harness the same intuitive features trusted apps and Powerful analytics used by the world’s

1:37leading Brands sign up today for your $1 per month trial period at shopify.com

1:43Teall lowercase that’s shopify.com

1:48today we are going to do our second lecture on the history of artificial intelligence we talked about it way back

1:54in September um and today we’re going to be talking about how it is that data

2:00came to overtake rules-based approaches in artificial intelligence so in 2009

2:08um a trio of Google researchers published a paper called the unreasonable effectiveness of data and

2:14what they said is that scientists and humanists alike had been looking for simple theories um about language

2:22and other facets of human experience things that would look like physics or mathematics but it turned out embracing

2:29complexity they said was the way to go taking data at enormous scale and

2:34analyzing it is what allowed you to translate language to reproduce language to understand pictures and whatnot data

2:42won out rather surprisingly to people of a mathematical mindset over

2:48rules not that long before as we talked about some weeks ago the progenitor of

2:54the term artificial intelligence John McCarthy had denounced the idea that

3:00learning from sensory experience from data would ever produce complex Behavior

3:05but in 2009 the opposite seemed to be true and today that’s what we’re talking

3:10about how is it that we came to be in this situation now in the last say seven

3:17to eight years the term artificial intelligence has gone from something that was uh seen as sort of a

3:25backward older kind of approach to quite precisely one of the most exciting

3:30things happening right now and which was understood as predictive uh algorithms

3:38using statistics and something called machine learning that had been trained

3:43using extremely large high-dimensional data sets it wasn’t about rules and

3:49symbolic reasoning it was about data and its analysis and particularly data at

3:54scale at Facebook or Google or Amazon scale or the human genome

4:01so this is what has produced the sort of things that we’re struggling with today so here I have uh from this morning I

4:08asked uh ChatGPT to tell me about the coming into self-consciousness of an AI in the style of Dr. Seuss which it was very

4:15happy to do and then I asked um one of the diffusion-based programs to

4:23produce an image of Princeton students listening to a lecture on the history of uh AI um and it produced something that

4:30it had learned uh very much how did we get to this world so this is a world uh

4:37that is explained and is based on deep empirical granularity what do I mean by

4:42that it’s based on the sensory data the big coming together of experience and

4:48it’s granular in that it’s about the detailed things of the world not sort of extreme simple cases but rather the full

4:56complexity language in its in in in all of its different Dynamic meters the irregularities of verbs not just their

5:03regularities but at huge scale and one of the fundamental phenomena of our time

5:09is that predictive algorithms that are trained on large scale data historical

5:15current data have a very powerful ability to reproduce existing forms of

5:22inequality and inequity that is when there are structural inequalities for

5:28example structural racism data at this scale tends to reproduce that um and

5:35this is a fundamental sort of ethical and political question of the time so uh

5:40it’s often referred to as dumpster fires uh of the the AI of today so how did we

5:47get here what made it possible and what are some of the more pressing concerns about it so I’m going to give you a kind

5:53of Whirlwind tour of how we got there after reminding you a little bit about what we talked about in our first day AI

6:00lecture so now as I said AI has been redefined if you look at the longer

6:06history of it it’s a history both of uh the technical disciplines that are

6:11arrayed around this term AI and it's a sort of a fascinating moment of

6:18a cultural focus of thinking through fundamental issues of the nature of humanity of the nature of culture of the

6:25interaction of nations of corporations and those sorts of things and history those two

6:31histories are deeply intertwined and we need only think of all of these sort of cultural

6:36touchstones to think about how much of popular culture is involved in conversations about Ai and as I’ll talk

6:44at the very end these very cultural touchstones are important in thinking about real AI today so let's return to

6:53where we began our conversation last time which was Alan Turing and some of

6:59his friends at Bletchley Park doing cryptography during the second world war now they

7:06were knee-deep in doing large-scale computational work on vast amounts of

7:13data that was produced by the collection of uh the signals the uh Communications

7:19of the Axis powers as one of them described it in doing cryptography they

7:25were quote up to our elbows in automation of one kind and another and in the evenings they thought about what

7:32the implications of this would be in the future now I told you that they broke

7:38this into sort of several different ways of thinking about what it might be to be intelligent one was learning from

7:45experience like collecting lots of signals of Germans making uh secret

7:50Transmissions the other was thinking about rules logic and Mathematics many of them came out of domains like

7:56mathematics now in the history of the first say 50 years of AI uh the

8:04focus is largely upon rules plus a small number of facts not the complexity of

8:10experience and the keynote moment for this is a conference in the middle of

8:15the 1950s uh organized by John McCarthy and he was quite blunt um and you can

8:21find this on YouTube um where he says he invented the term artificial intelligence largely as a way of getting

8:21money now as I explained to you the kind of AI that they focused on was not based

8:28on data in particular at all it was rather focused on a vision in some sense

8:35the self-image of mathematicians and logicians and chess players and it

8:41prized the idea that what really made human intelligence interesting was

8:47its high symbolic nature so the forms of computing programming they devised

8:52tried to combine rules uh symbolic reasoning um and a certain amount of

9:05facticity and it was made into the heart of intelligence now what happened to the

9:12learning from experience well I've already told you McCarthy was down on this and so down on it were he and his

9:18allies that they went after the biggest examples of it so this alternate path is

9:26most identified with an apparatus called the perceptron which was an

9:32attempt to literally reproduce in machines a kind of neural network that would sense things and try to say

9:38whether what you’re seeing is an A or an H initially it was a large machine but

9:44it became uh um something that was very much algorithmic for McCarthy and his allies

9:51this was the antithesis of what artificial intelligence would be and I already quoted this for you it was to

9:58get at the low level of human and animal cognition and not its Heights the

10:03symbolic heights so they deliberately targeted it for death quite successfully um

10:09and argued that the data Centric approach was not intelligence worth the name okay so that’s all for review and

10:17 I'm casting it in black-and-white colors. So our

10:23 question today is: how did this data-focused approach come back? Where did it come from? How did it come to

10:30 dominate in the ways that it does in our world? Well, you'll remember that at Bletchley

10:35 Park they talked about rules, but they also talked about learning from experience; they were at the heart of a

10:41 data-driven enterprise. When we think about Alan Turing, typically

10:48 the stories are about the lone genius, the tortured genius, the person who suffered this awful

10:54 persecution by the British state, but his work was very much done in an entire

11:00 factory, which I described to you some weeks ago, of data analysis at large

11:06 scale, using large infrastructures for the purpose of attempting to win the war.

11:12 This approach to thinking about data did not disappear, but it wasn't known as AI

11:20 at all; in fact it was something very different. Now, the AI of our current

11:28 moment emerges from that long lineage and has a bunch of components, only

11:34 some of which I'm going to be able to talk to you about today. So the AI of

11:39 the past half decade very

11:45 much emerges from a data-centric approach, and

11:52 it's enabled by relatively weak

11:57 privacy and intellectual property protections (we've been talking about this); it's coupled to an organization of

12:05 research labor, and it's undergirded by massive computing

12:10 capacity. You need all of these ingredients to understand the emergence

12:15 of AI in our terms, and the roots of that are very much in this World War II

12:23 context and what happens to it after the Cold War. So while the symbolic rules

12:28 program is exploding and getting most of the good press, behind the scenes,

12:34 mostly in classified domains, there's a kind of low road of

12:40 instrumental computational data: vast archives of data, first for things like

12:46 cryptology, combined with an approach using statistics where the goal

12:52 is not necessarily to produce great scientific thoughts but rather to solve concrete problems of the military. It

13:00 develops in lots of different places, and this is just one kind of example: a domain that becomes known as

13:07 pattern recognition. In pattern recognition you have fundamental issues like vast amounts of imagery from

13:15 satellites and spy planes, and it takes vast amounts of human labor to

13:21 classify them. Would it be possible for a computer to learn to classify this? This was not intelligence in the sense of "can

13:28 we reproduce the great mathematicians of the past" but rather "can we reproduce the labor of recognition on a computational

13:36 platform." This approach developed into a whole array

13:43 of algorithms, and any of you who've ever studied machine learning or computational statistics have learned

13:49 many of these algorithms under various names. It concerned using

13:54 those kinds of algorithms on large data sets, not the small data sets that were

14:00 more the focus of mathematical statistics. Now, this issue of how

14:06 you replace an expert was extremely challenging, and the people

14:11 in symbolic AI definitely wanted to do this, and the people in computational

14:16 statistics wanted to do this, and it turned out to be very, very challenging. One of the great discoveries, and it

14:22 happens in parallel in computer science and in social

14:28 domains like history and sociology, is the recognition that getting people to explain how they are experts,

14:34 how they make expert decisions (to ask, say, a doctor how he or she makes a clinical decision), is incredibly hard.

14:42 There was a kind of hubris that it would be easy to elicit the thought processes

14:47 of skilled professionals, but it turns out to be enormously hard. This is a great example from a Stanford study

14:55 where a man using a lathe is trying to explain how he uses it, this kind

15:01 of skilled activity, and it turns out it's enormously hard to do. There was a program called expert systems, which

15:09 attempted to elicit rules through discussion with experts and put them into place, but those rules turned out to be both

15:16 difficult and expensive to elicit and incredibly brittle: they're not expansive,

15:23 they're not good at dealing with the complexity of the world, and they called this the knowledge acquisition

15:29 bottleneck. You needed a human expert to discuss for a very long time with a

15:35 so-called knowledge engineer in order to elicit rules, and it came to dawn on a

15:42 lot of people that the solution might not be to try to replicate the rules by

15:49 which people think, but rather to create predictive algorithms that might work in

15:55 an entirely different way and would duplicate, at a very high percentage, the

16:00 kinds of decisions they would make. One of the interlocutors of Turing

16:08 way back in the 1940s wrote in 1985: "Mastery is not acquired by reading books; it's acquired by trial and error

16:14 and teacher-supplied examples." This is how humans acquire skill, and if you think about it, this is both a

16:21 statement about the nature of human cognition and intelligence and a concrete program for thinking about

16:29 what you'd want to do if you're going to build algorithms that would duplicate

16:34 human reasoning. The kind of thing they're thinking about, because this may all be very abstract, is this: imagine you've

16:41 done a vast sky survey of all the stars at very high resolution.

16:47 Traditionally, as we discussed, you'd have astronomers and

16:52 large pools of extremely learned women look through all these plates

16:58 and classify stellar objects. So the question was: could you

17:04 do that not by asking the astronomers how you tell a pulsar from something else, but by

17:10 taking a large data set where they had classified objects and then producing a

17:16 statistical predictor that makes the predictions the astronomers would have made? Now, I hope that's

17:22 clear: it's not asking how you do it; it's asking whether we can mathematically model

17:28 something that makes predictions that line up with yours. That division is a

17:33 really major one, because it no longer means you're attempting to model the process of human cognition; you're

17:39 attempting to model the outputs of human cognition. This turns out to be more

17:45 successful than anything that had been produced in rules-based AI, and for

17:51 those of you who've worked in the CS world, it consolidates into what

17:57 we think of as central parts of machine learning, particularly what is called supervised learning. Supervised

18:04 learning is one in which you have a set of data that has been classified by a set of human actors, and you produce

18:10 algorithms that can duplicate that classification. So think sky charts, or,

18:16 if you had a whole bunch of people assessing credit card applications, then you build an algorithm that models

18:21 that behavior and can do it at an incredible scale. Now, all of this was going on in all kinds of academic and

18:30 intelligence settings, but things were happening in parallel, which we've been

18:35 discussing in the past few weeks, that created the conditions in which these technologies could be applied at a much

18:42 larger scale and become familiar to all of us. Those shifts involve, first, as we've

18:51 discussed, the consolidation and expansion of vast infrastructures for the storage and analysis of data. The

19:00 particular focus on storage of data was one that businesses and

19:05 spies were much more interested in than a lot of scientists, but an infrastructure comes to be built and

19:12 expanded, and it continues to expand to this day. So you had to have

19:17 this infrastructure, but you also had to have a change in norms about using data,

19:23 and I think I showed this to you some weeks ago: a meme making fun of the fact that in the 1960s there was

19:29 this thriving concern about the privacy of our data, and yet today we

19:35 expect the wiretap on our desk or on our wrist to answer all sorts of questions.

19:41 There was a transformation of norms, and this was very much connected to the transformations in the laws around

19:47 privacy that we discussed, in particular that important moment in

19:52 1974 where it seemed that the United States and other jurisdictions would adopt

19:58 robust privacy laws, but that's not what ended up happening. Commercial data by and large was available for use by

20:06 commercial entities: for use, for sale, for analysis, and whatnot, and it's central to

20:12 the world in which we live. So you had this new technology, this new kind of

20:18 predictive technology, combined with ever greater

20:23 infrastructures, changing norms around privacy, relatively weak laws on privacy,

20:30 and, with the explosion of the internet, the incorporation of large companies

20:35 that retained this data, stored this data, and had many reasons to be using it.

20:41 Oh, and I forgot: the US government itself also came to

20:48 have very controversial accounts of what kind of data it could and couldn't

20:53 use about US people and non-US people. So in the technical world, in the

20:59 business world, and in the cryptological world, we've moved very, very far from the

21:05 world of rules I discussed before. In fact, an ethic that prized

21:14 prediction over interpretation, of algorithms that

21:19 could predict on the basis of large-scale databases, came to be seen as

21:25 not just good in a commercial domain, or good in a military or

21:31 intelligence domain, but as the best and most fundamental kind of algorithm, as

21:36 very legitimate objects, even though they were so far from the visions of what

21:42 intelligence had been in more traditional AI. One of the touchstones of this

21:49 transformation came in 2009, the year of that Google paper. Netflix had

21:56 announced, a few years before, a crowdsourcing challenge, and what

22:02 Netflix said was: we are going to give a million dollars to the team that can

22:09 best improve our predictive algorithm. At the time Netflix wasn't streaming;

22:14 they were sending out DVDs. So what happened is that a wide

22:21 variety of people gained access to a commercial data set that was, by the standards of the

22:27 time, extremely large. At the time

22:33 it was quite difficult for ordinary programmers or researchers or others to

22:38 gain access to large-scale commercial data sets; only the big players, the Googles, the Facebooks, the Amazons,

22:46 and the Netflixes, had access to this kind of data. Netflix released this and

22:53 then said: see what you can do. The story is quite interesting. People

22:59 tried all kinds of algorithms to try to interpret it and then to predict on

23:05 the basis of it, and various people came together into teams that would

23:12 combine their algorithms. Now, the winning approach, by the group called BellKor's Pragmatic Chaos, didn't have one

23:21 predictor; rather, they

23:27 produced a giant prediction engine which combined 500 predictors and 200 blends,

23:34 combined with another 30 blends. Now, this algorithmic prediction system

23:41 was able to better predict the kinds of movies that people liked, but from the standpoint of any traditional

23:48 scientific approach to knowledge it was bizarre, because it was just this almost

23:53 random-seeming combination of all kinds of different machine learning and statistical classifiers that were

23:59 rammed together, and it worked. It was an object that fundamentally

24:07 was predictive without giving you much understanding at all, but it was really a

24:12 touchpoint in the power of what you could do if you harnessed a large

24:20 amount of computational power, a large data set, and lots of predictors to make

24:25 predictions along one particular metric.
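
The blending idea can be sketched in a few lines. This is a toy illustration, not the actual BellKor's Pragmatic Chaos system (which combined hundreds of models); the ratings, the two stand-in predictors, and the 50/50 blend weights are all invented, but the punchline is real: on the one agreed-upon metric (for the Netflix Prize, root-mean-square error), a blend of differently biased predictors can beat any single one.

```python
# Toy sketch of ensemble blending in the spirit of the Netflix Prize winners.
# All data and predictors here are invented for illustration.

def rmse(preds, truth):
    """Root-mean-square error: the single metric the Netflix Prize scored."""
    return (sum((p - t) ** 2 for p, t in zip(preds, truth)) / len(truth)) ** 0.5

# True ratings we are trying to predict (held-out data).
truth = [4.0, 3.0, 5.0, 2.0, 4.0]

# Two crude, differently biased "predictors" (stand-ins for hundreds of models).
predictor_a = [4.5, 2.5, 4.5, 2.5, 4.5]   # e.g. a movie-average model
predictor_b = [3.5, 3.5, 5.0, 1.5, 3.0]   # e.g. a user-similarity model

# Blend: a simple 50/50 average of the two predictors.
blend = [(a + b) / 2 for a, b in zip(predictor_a, predictor_b)]

print(rmse(predictor_a, truth))   # 0.5
print(rmse(predictor_b, truth))   # ~0.59
print(rmse(blend, truth))         # ~0.16: the blend beats either model alone
```

Notice that the blend gives no account of why people like what they like; it just scores better on the metric, which is exactly the point being made about prediction without understanding.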

24:31 So it was at once a moment in which this

24:36 ethos of prediction gained widespread attention, but it also was a

24:42 model for how you might organize research itself. Now, the Google paper I

24:49 began with said that we'd been looking for the wrong kind of knowledge, that language is this granular,

24:56 complicated thing, and that we needed to create knowledge systems that recognized that. The Netflix challenge doubled

25:04 down on that and said the way we do that is not through, say, individual

25:10 geniuses thinking and figuring out theories, but rather through collectivities whose

25:15 results contribute towards a larger project of coming together and trying

25:22 to maximize some form of metric. The Netflix challenge modeled the idea

25:27 that lots of people would come together in a competition,

25:33 trying to maximize something, and do better than any one group could. In fact,

25:38 this meshed perfectly with organizations that had

25:44 fundamental metrics at the heart of their business. For example (oh, there was a misprint here),

25:51 if you think about Facebook: Facebook becomes engaged very early in maximizing engagement on its

25:59 website. Its goal, in some sense, as a business is to have the largest number of people spending as much time

26:05 as possible on its website, and thus its algorithms were designed primarily with

26:11 that in mind. That is a single metric. Now, this is called the secret sauce of

26:16 machine learning, and it turns out to be enormously, spectacularly powerful, far

26:21 more so than anyone ever had any right to think would be the case. It

26:27 works in any domain in which you can select an agreed-upon metric and maximize it,

26:33 whether it's engagement, whether it's a score in predicting what people

26:39 like, or things like how to get high scores in a game, or indeed fundamental scientific issues.
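
The supervised-learning recipe described above can be sketched minimally. Everything here is invented (the "credit application" features, the labels, the nearest-neighbor rule): the point is only the shape of the method: humans classify examples, an algorithm is fit to duplicate those classifications, and success is judged by one agreed-upon metric, here accuracy.

```python
# Minimal sketch of supervised learning judged by a single metric.
# Features and labels are invented; think "humans classified these
# examples, now fit a rule that duplicates the classification at scale."

def nearest_neighbor(train, point):
    """Predict the label of the closest training example (1-NN)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(train, key=lambda ex: dist(ex[0], point))
    return label

# Human-labeled training data: (features, label), e.g. credit applications
# scored by clerks as "approve" (1) or "deny" (0).
train = [((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.1, 0.2), 0), ((0.2, 0.1), 0)]

# Held-out cases with known human judgments, used only for scoring.
test_cases = [((0.85, 0.85), 1), ((0.15, 0.15), 0), ((0.7, 0.9), 1)]

# The single metric everyone agrees to maximize: accuracy.
correct = sum(nearest_neighbor(train, x) == y for x, y in test_cases)
accuracy = correct / len(test_cases)
print(accuracy)  # 1.0 on this tiny toy set
```

Nothing in the rule explains how a clerk actually reasons about an application; it only reproduces the outputs, which is the division the lecture draws between modeling cognition and modeling its outputs.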

26:46 So in 2012 that neural network approach, which

26:53 I showed you had been outright attacked by people in the symbolic tradition, came

26:59 back with a vengeance. It was rebranded "deep learning" at first, and it's deep

27:06 because it was a form of network that had many layers. I won't go into

27:12 the technical details, but what those many layers allowed it to do was overcome some of the formidable problems

27:19 of a much simpler neural network. Last time I told you that

27:25 simple neural networks can't figure out how

27:30 to replicate what's called an XOR function,

27:35 but if you have many layers, you can. Now, people had known this in some sense

27:43 for decades, but what they didn't have were three things. They didn't have enough

27:50 computer time to train a neural network, because training a neural network is slow, computationally expensive, and

27:58 expensive in terms of electricity; so they didn't have the compute time. They also didn't have data:

28:06 neural networks require huge amounts of data. To train a neural network to recognize a logical function takes a

28:12 huge amount of data; to train a symbolic machine to do a logical thing takes no data

28:19 at all. So it required lots of compute time, it

28:25 required lots of data, but also, neural networks had always been suspect. Why

28:30 were they suspect? Because you cannot, most of the time, figure out how it is that they are making the predictions

28:36 that they do. Now think back to that Netflix predictor I showed you: that predictor was this jumble of all kinds

28:44 of predictors; it was no model giving you an understanding of people's cognitive states; it was a

28:51 purely predictive kind of thing. Well, so were neural networks, and in fact neural

28:56 networks turned out to be even better, if you had sufficient computers and

29:03 cash and energy to run them, and data to feed them, and you didn't mind

29:09 that all you got was prediction. By 2012 there were some very, very large

29:15 corporations that had exactly that combination of elements, and hence deep

29:21 learning, or neural networks, explodes onto the scene and turns out to be

29:27 enormously good at a huge number of tasks, tasks that we use all the time,

29:33 from voice recognition to recommending what kinds of websites you should see on

29:38 any social media, to fundamental questions of thinking about

29:44 protein folding, to, indeed, questions of understanding

29:50 language itself. Above all, there was a moment in which there was a large database of images, and the deep

29:58 learning algorithms just blew the other algorithms out of the water when trained on a huge amount of

30:05 visual images. It was

30:11 thus that deep learning, and this entire approach, gets rebranded in 2015 as AI.
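
The XOR point mentioned above can be made concrete. A single perceptron (one weighted sum plus a threshold) cannot compute XOR, but a network with one hidden layer can. The weights below are set by hand rather than trained, purely to show that a two-layer network can represent the function; in practice such weights would be learned from data.

```python
# A single-layer perceptron cannot compute XOR; two layers can.
# Weights are set by hand (not trained), just to show representability.

def step(x):
    """Threshold activation: fires (1) when the input is positive."""
    return 1 if x > 0 else 0

def xor_net(a, b):
    # Hidden layer: one unit fires on "at least one", one on "both".
    h_or  = step(a + b - 0.5)   # 1 when at least one input is 1
    h_and = step(a + b - 1.5)   # 1 only when both inputs are 1
    # Output layer: "one but not both", i.e. XOR.
    return step(h_or - h_and - 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b))  # prints the XOR truth table
```

Remove the hidden layer and no choice of weights in a single `step(w1*a + w2*b + bias)` reproduces this table, which is exactly the limitation that the many-layered networks overcame.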

30:20 Now, as I said to you, AI for many people from the '90s

30:26 through the aughts and into the 2010s was kind of old-fashioned, old hat; it was precisely not what was exciting and

30:34 important. But AI has always been a branding tool, and the explosion of

30:41 powerful neural networks on extremely large

30:48 computational platforms, with large amounts of data and fundamental predictive purposes, turned out to be just the

30:54 recipe for a rebranding of all of that as artificial intelligence. It

31:00 happened relatively recently, and that produces what we are dealing with now.

31:06 So the platforms that we've become so familiar with in the

31:12 last couple of years, notably ChatGPT and then image generation, are descendants

31:19 of these, and, even more so than the earlier deep learning

31:26 networks, they depend on vast amounts of computational power, truly tremendous

31:32 data sets, and they are fundamentally predictive models. The numbers are so

31:38 vast, though, that only a small number of companies, and an even smaller

31:44 number of nation-states, are capable of training these kinds of models. They also

31:49 depend on a really fantastic technical trick that was first

31:57 published by Google in a paper called "Attention Is All You Need," which allows you to connect large data sets and give

32:03 them a kind of memory; it's what allows us to ask these kinds of questions. So when you think about ChatGPT,

32:09 or all of the phenomena like it, just keep in mind that it depends

32:16 fundamentally on truly vast infrastructures of compute, large data

32:21 sets, and a willingness to pursue a kind of predictive ethic. And the amazing

32:29 thing is that it turns out to be enormously good at producing natural

32:36 language; it's actually unbelievably surprising. So the paper I began with,

32:42 the Google researchers say it is a really strange state of affairs that the world is set up in such a way that we

32:49 can learn so much from data. Whatever the effects of ChatGPT and its like in

32:55 the next 100 years, one of the most far-reaching aspects of them is that they

33:01 challenge our own vision of what it is to know something and of the nature of the

33:06 world. Language is such, it turns out, that an algorithm which essentially

33:13 predicts what the next letter should be is capable of producing language that we

33:19 register as almost humanlike, and is capable of organizing things in ways that are

33:25 almost humanlike. Those are the systemic implications, and how those will play out

33:31 is going to be for all of you to decide, in some sense. Now, there is a

33:37 massive conversation going on about the implications of this, because deep

33:44 learning hasn't just been rebranded as AI; it's that we are all exposed to things that are starkly

33:52 better than anything most of us anticipated might exist in our lifetimes.
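
The "predict the next letter" idea can be shown in miniature. This is only a toy: it counts, in a tiny invented corpus, which character most often follows each character, then generates by always emitting the most frequent successor. Real large language models use the transformer architecture over tokens and vastly more data, but the shape of the idea, prediction of the next symbol without any model of meaning, is the same.

```python
# Toy next-letter predictor: a character bigram model over a tiny corpus.
# Real LLMs are transformers over tokens; this only shows the shape of
# "prediction over interpretation."
from collections import Counter, defaultdict

corpus = "the theory then thickened there and then "

# Count which character follows which in the corpus.
followers = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    followers[cur][nxt] += 1

def predict_next(ch):
    """Return the character most often seen after ch in the corpus."""
    return followers[ch].most_common(1)[0][0]

# Generate text by greedily predicting the next letter, over and over.
out = "t"
for _ in range(7):
    out += predict_next(out[-1])
print(out)  # prints "then the"
```

Even this crude counter produces word-like English from letter statistics alone, which gives a small taste of why predicting the next symbol at enormous scale yields text we register as almost humanlike.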

33:58 And this brings us back to this bigger conversation about how we think

34:03 about AI. When we think about AI, we don't think about it neutrally, without a

34:09 broader cultural conversation, and here I've just given you films, but of course there are many, many books too,

34:16 including things like Burning Chrome. Now, right now

34:22 there's a very vibrant conversation about the harms and dangers of AI,

34:28 potential and current. Just last week, or I think it was two weeks ago maybe,

34:35 there was a major AI summit at none other than Bletchley Park, where a large

34:40 number of leaders came together to make statements about the powers and harms of AI. At the

34:47 same time, questions of AI dominance very much structure a lot of facets of trade

34:54 policy, particularly of the United States vis-à-vis the People's Republic of China. This conversation is a bigger one

35:01 than I can deal with in this lecture, but just to conclude, I want you to think about two major schools that are really

35:10 important and central in the conversation, and if you're going to get into this conversation, you really

35:15 need to wrestle with both of them. The first emerges from groups

35:24 of people who've been concerned with how automatic decision-making systems are

35:29 going to affect people in the here and now. You remember, back in my lecture on

35:34 privacy, I talked about how people worried about privacy and data, both

35:40 against the state and against corporations, noted that data was most

35:46 often going to be mobilized against the least empowered sets of people. With the

35:52 explosion of a data-based economy and data-based governance, this has turned out

35:58 to be very, very true, whether it's in systems for judging whether people are going to be recidivists

36:06 or in building facial recognition systems: the everyday systems all around us, most

36:11 of which we don't even think of as AI but which are now classified as AI. One of

36:17 the famous science fiction examples is a movie based on a Philip K. Dick

36:23 story, called Minority Report, in which Tom Cruise plays someone

36:29 who uses what are called precogs, who judge and predict where crime is

36:35 going to appear and stop it before it happens. This mode of thinking is very powerful in thinking about the attempts

36:41 to create predictive algorithms that say crime is likely to appear here, when those predictive algorithms are based on

36:48 historical data that is very much grounded in the very

36:54 inequalities that are all around us and that frame our entire society. So you

37:00 have whole pockets of civil society and people in governance working at this level, and it's among the most

37:06 important conversations happening right now, in the last couple of years.

37:12 Oops. Another way of thinking

37:17 has really come to the fore, and it is very much grounded

37:23 in the narratives of existential risk that run through so

37:28 much of science fiction, particularly popular science fiction, as in The Terminator, where Arnold Schwarzenegger's

37:35 character comes back in time to kill the person who is going to defeat the machines. So there's a big conversation,

37:42 fed by a lot of large tech money, that is worried about AI

37:48 algorithms as a fundamental risk to the species itself. And these two camps do not see

37:57 much eye to eye: one is very much concerned with questions of, say, drones

38:03 and policing in the here and now; the other is worried that we won't, say, make it to Mars if the machines kill us off.

38:10 These are both central conversations, and they really illustrate

38:15 the way in which the cultural conversation around AI is going to be enormously important in the subsequent

38:21 development of the technology. Okay, so for next time, we are going to be

38:28 talking about how this cluster of predictive algorithms came to shape

38:35 the web. In the last couple of lectures we've talked about the transformation from the original vision of a

38:43 peer-to-peer, truly democratic internet, and we're looking at how it

38:48 became the very different internet in which we live today. Central to that story are the predictive algorithms that

38:55 recommend the content we see, that create the informational ecosystems in which

39:01 we live, the differential economic and informational

39:06 infrastructures in which we live. But that story is equally one about how shaping the web in order to make money

39:15 was central in creating the very infrastructures that made these predictive algorithms possible. You don't

39:21 have a ChatGPT, you don't have Google search, you don't have Facebook

39:27 prediction algorithms unless you have the capacity to create infrastructure that is based on huge training data,

39:34 grounded in relatively weak privacy and intellectual property laws, and we need to track this in order to understand it.

39:42 That is what comes together to produce AI. Okay, I actually do have time for

39:48 questions today, so:

39:56 Anything? Yeah, please. [Audience member asks about the positive effects as well.] Oh, so

40:03 the positive effects, right. So the question is: what about the positive effects? I haven't played

40:09 those up, but there's no question that you can think about them in

40:17 a variety of places. There are very easy ones, like the fact that voice recognition now works so enormously well,

40:25 and that is a fundamental accessibility-producing technology, right? It is

40:31 utterly transformative. We will probably get over a kind of shock and awe

40:37 about GPT and other sorts of things and see those as fundamental tools, just like calculators. And then of course,

40:45 in a wide variety of sciences, there are incredibly rich databases that

40:53 previously were not tractable, so we simply can do very different kinds of science, whether it's protein folding or

41:00 other kinds of things. So you're absolutely right that I haven't played that up as much. Now, some of

41:05 that's pretty contested, because, as I told you, the explosion of machine

41:11 learning, now AI, is focused on prediction, and for many scientists prediction is

41:18 but one part of the equation. When you're fundamentally interested in, say, chemical mechanisms,

41:23 being able to make predictions about those is not the same as proving a chemical mechanism. So in discipline after

41:30 discipline you often see a tussle between new kinds of science which are

41:36 predictively powerful, based on huge data, and forms of reasoning which are

41:42 based more on, say, causal modeling. The funny thing is this happens not just

41:47 in the sciences but very much in commerce. So one of the earliest great

41:52 successes in much of this, when it wasn't even called machine learning, it was just called data

41:58 mining, was replacing traditional experts at marketing at, say, a

42:05 grocery store or a drugstore with the predictions of an algorithm. You know, actually, it makes sense;

42:12 this is the most famous anecdote: it would seem counterintuitive, but if you put diapers and beer together in a late-

42:19 night drugstore, you're going to sell a lot more of both. A way to think about that is that you're changing the

42:25 kinds of expertise about fundamental decisions, and it crosses the sciences, the

42:32 humanities, and all kinds of business practices, and we're going to see continuing tussling over

42:37 the strengths and weaknesses of those different kinds of approaches. So that's a wonderful question. Other thoughts? Yeah.

42:43 So this is asking you to do a little bit of future-casting, I guess, but you were mentioning these biases

42:50 from our society that are making their way into these large data sets, and Dr.

42:55 Ruha Benjamin here at Princeton uses the term "garbage in, garbage out," and I wanted to know your thoughts on how

43:02 we would eventually overcome that in terms of large data models moving forward. So the question is: if

43:11 you have lots of garbage in your data set, how are you

43:17 going to prevent lots of garbage coming out? This is an active problem, and it's easy to portray it as a kind of

43:23 divide between those people who recognize these things and those who don't. And how are those being

43:29 resolved? Well, it's a major research issue, and there are

43:35 some people who think the answers are going to be technical ones, and so there's an entire enterprise within

43:41 computer science that is committed to making algorithms more fair, and some

43:47 of the great work is being done here. Now, it turns out, and it's more technical than I can get into in this class,

43:54 that the technical definitions of fairness are logically contradictory, so

43:59 it always has to come down to a human policy decision. But it's very clear

44:04 that the solutions to this are not going to be merely technical; they're also going to lie in the right kind of

44:10 collectivities. We don't know a lot about how ChatGPT works under its internals; part of it is an engine,

44:18 as I explained, that works on what's called semi-supervised learning, and so it's learning a lot of

44:25 awful aspects that are hardwired into the large language corpus. But

44:32 if you try to get ChatGPT to do a lot of things, it will utterly refuse.

44:38 For example, because I'm a deep geek, when it first became available I

44:44 wanted it to make lots of evil constitutions of the United Federation of Planets, because of course I did,

44:50 and it wouldn't do it for certain keywords. So clearly there had been work to hardwire it to prevent certain

44:57 inegalitarian outcomes. But when I asked it to use Plato's tripartite

45:03 division of society, which is a non-egalitarian society, it didn't hit

45:08 any of those keywords, and it happily produced an inegalitarian one. So some of the solutions may come from hard-

45:14 wired sorts of things; some of them may come from really transforming the data sets and recognizing the

45:21 inequities in data sets, but that's incredibly hard. For example, it's

45:26 trivially easy to show that if you have a large data set and you remove race from it completely, in the United States

45:33 ZIP code is such a powerful proxy for race that you're still predicting on race. So it's a very hard technical and

45:39 non-technical problem. It is one of the great challenges of our time, especially if your attitude isn't

45:46 that we're just going to get rid of all these algorithms. If we say these are going to be built into our decision systems, then it's incumbent on all of us,

45:53 technologists and others alike, to produce outcomes that comport with the kinds of societies we want. Great

46:00question let’s see yeah we probably have time for one more please John how do you see the bush for like automated general

46:06intelligence fitting into the conversation of like symbolic reasoning versus predictive AI that you spoke

46:13about well so people are still really fighting about this so a lot of the

46:20people in more traditional AI and then people in cognitive science um are quite

46:26explicit that whatever ChatGPT and all the foundation models and all the image models um and the variety of other

46:33technologies which I’m just capturing under those terms they still can’t do basic things like logical reasoning

46:40complex arithmetic a whole array of things that we think are essential to human intelligence so whatever they are

46:48they’re enormously productive um well one term that a group of researchers use is that they are

46:55stochastic parrots and they’re never going to be more than that um so uh as a

47:01sort of as a historian what you see is a continuation of this long-term division

47:07between accounts of what it is to be intelligent and what is so shocking in some sense about these generative models

47:14is how many domains they approach they perform

47:22better than anything the symbolic people thought could ever be produced but at a fundamental level to many

47:30people they just will never be intelligent the way we understand intelligence to be there’s a whole

47:35other way of looking at it which is that in the past 30 years uh

47:41we’ve come to know so much more about the diverse forms of intelligence in the animal world that my prediction is

47:49that we will come to uh appreciate and taxonomize um different forms of animal

47:54intelligence different forms of machine intelligence and different forms of human intelligence um rather than seeing them

48:00as an either or we’re going to recognize this explosion of different kinds and the one that I’m talking about today is

48:06this rather remarkable vast empirical collection that can generalize and

48:12produce on the basis of incredibly large data sets Okay our time is up um I actually finished on time next time as I

48:19said we’re going to look at what happens uh when these sorts of things are built into the web and indeed the companies

48:26that produce them the platforms are precisely the ones with the money compute and data to produce these kinds

48:32of things it’s very much part of our situation so I will see you all on Thursday thank

48:42you thanks for listening to this week’s Lectures in History podcast to find even more history content visit

48:48c-span.org

Christian Professor Responds to Shocking Claims About AI

0:00I’d love to get your opinion on a talk. Will AI challenge our supremacy?

0:06What authority he has to say it, I don’t know. He probably got it from ChatGPT.

0:12[laughter] If thinking really means putting words in order, then AI can already think much

0:20better than many many humans. AI is very good at putting words in order, but

0:26[music] the system doesn’t understand the words. Everything made of words will

0:32be taken over by AI. He’s just getting away with saying what he believes without any serious logical

0:40justification. But your book doesn’t deny the fact that a revolution is afoot. There are

0:46indicators in the book of revelation that human beings will under the command

0:51of a totalitarian world leader construct an all-controlling system that has lethal

0:58powers over the inhabitants of earth. We need to be very careful. [music]

1:04Professor John Lennox, thank you so much for joining us. You’re the author of many books including 2084 and the AI

1:10Revolution. I’d love to get your opinion on a talk that Yuval Noah Harari has

1:16made quite recently in Davos to the World Economic Forum and the first point he makes is that AI is not just a tool,

1:24it’s an agent. Let’s have a look. The most important thing to know about AI is

1:29that it is not just another tool. It is an agent. It can learn and change by

1:37itself and make decisions by itself. A knife is a tool. You can use a knife to

1:46cut salad or to murder someone, but it is your decision what to do with the

1:53knife. AI is a knife that can decide by

1:58itself whether to cut salad or to commit murder. Right. So I’ve heard you say before uh

2:06John that uh tools as they develop as technologies can be used for good or for

2:12harm. Uh Yuval Harari seems to be saying okay yes they are tools but here are tools that are themselves agents.

2:19Does that change the picture? I think it does, but I would want to hear a great deal more about what he means by

2:28the nature of that agency because the agency is something that is built in

2:37to the system by the human programmers and the people that construct it. So at

2:43what level is it an agency? In other words, these decisions that they

2:50claim it makes autonomously are they made within certain parameters

2:55uh there must be boundary conditions that are built in etc etc etc setting

3:02its goals and and clearly AI has become very much more sophisticated

3:09and I agree my impression is that it is much more than a tool. My own exposure

3:18to ChatGPT uh 5.2 has impressed me greatly with the

3:25capacity uh that this particular package has. So there’s no doubt there’s

3:32something in this but unless you describe exactly what you mean it

3:38doesn’t sound very clear to me. What should we think about personhood given

3:44that we have for so long considered uh Homo sapiens to be the wise ones? That’s

3:50what the word means. So sapiens, we are those who have

3:55wisdom. We have at least since Descartes um considered the thing to separate us

4:02from the animal kingdom to be our rationality and our powers of rationality. Does intelligence confer

4:10personhood and and how does AI make us think about intelligence and personhood in in new ways perhaps?

4:15Well, the crucial thing I think to um know about AI is that it simulates

4:23intelligence. It’s not intelligent at all. One of the pioneers in the field,

4:28there’s a wonderful man, a Christian man actually, who rejoices in the name of Joseph McRae Mellichamp. He’s still

4:36alive. And he wrote a very cleverly titled technical paper

4:44many years ago. Uh which was this: the artificial in artificial

4:53intelligence is real. And reading the experts like Geoffrey Hinton and Peter

5:00Norvig and those kind of people who are right at the forefront of AI. They are

5:06playing what they call the imitation game the famous film uh about Alan

5:13Turing that is they are not claiming to

5:18reproduce human intelligence. What they are trying to do and are quite happy to do

5:23is simply simulate it. In other words, an AI system does something and narrow

5:30AI which is the one that’s working at the moment. We haven’t yet reached AGI although there’s a lot of hype about

5:37that including on the part of uh Yuval Noah Harari. Narrow AI simply does one

5:46thing that normally requires an intelligent human being to do. So it’s a

5:52simulation. And in [clears throat] the book that counts as the AI bible

5:59written by Norvig and his colleague, he says we wouldn’t even know what it meant

6:05to produce a machine that is really intelligent. And I think the key thing,

6:11Glen, for me is that God has linked intelligence

6:18with consciousness. No one at the scientific level has the slightest idea of what consciousness is.

6:26I was speaking a couple of days ago to Baroness Susan Greenfield who’s a world

6:31authority on consciousness. They don’t know what it is. So it’s very hard to

6:38speak of AI being intelligent in the human sense when you don’t even know what intelligence is. We are far away.

6:46the hype is entirely misleading and the vocabulary is misleading us. We talk

6:51about machine learning etc etc. We talk about artificial intelligence. Well, we

6:59need to concentrate on the word artificial and therefore it seems to me that the

7:05key things are the things that are actually on almost the first page of the

7:12Bible. God linking human intelligence with consciousness and making human

7:19beings in his image. So that God as a person is stamping his image on people

7:27who are persons in many of the same ways as he is. And

7:32AI doesn’t capture that at all. It simulates a small part of it. Um, you’ve raised many things that Yuval goes on to

7:39talk about, including he actually mentions John chapter 1 in just a second, but he addresses the question of

7:46can AI think and I’d love to get your uh impressions on his definition of what

7:52thinking is and what intelligence is. Here is from about 3 minutes on in the

7:58uh speech. The last four years have demonstrated that AI agents can acquire

8:06the will to survive and that AIs have already learned how to lie.

8:14Now, one big open question about AI

8:20is whether it can think. Modern philosophy began in the 17th

8:26century when René Descartes proclaimed I think therefore I am.

8:33Even before Descartes we humans defined ourselves by our capacity to think. We

8:41believe we rule the world because we can think better than anyone else on

8:49this planet. Will AI challenge our supremacy in the

8:55field of thinking? Now that depends on what thinking means.

9:02Try to observe yourself thinking. What is happening there?

9:09Many people observe words popping in their mind and forming sentences and the

9:16sentences then forming arguments like all humans are mortal. I am human

9:25therefore I am mortal. If thinking really means putting words

9:34and other language tokens in order, then AI can already think much better than

9:41many many humans. AI can certainly come up with a sentence like AI thinks,

9:49therefore AI is. Some people argue that AI is just

9:55glorified autocomplete. It merely predicts the next word in a

10:01sentence. But is that so different from what the

10:07human mind is doing? Try to catch the next word

10:15that pops up in your mind. Do you really know why you thought that word where it

10:22came from? Why did you think this particular word and not some other word?

10:28Do you know? As far as putting words in order is concerned,

10:34AI already thinks better than many of us. Therefore, anything made of words

10:44will be taken over by AI. Gosh. So there’s a

10:50definition of thinking. If thinking is putting words in order, uh then our

10:56new masters have arisen. John, thinking certainly involves putting words and concepts in order.

11:04AI is very good at putting words in order because it has been trained

11:11on writings by some of the best minds in the world and

11:17gleaning from the ordering of those words. But

11:23the system doesn’t understand the words. It has no concept,

11:30no concept of what any of these words mean including I am. It can link

11:42them together and as he says it is very powerful at predicting the next word.

11:42But you see when he says that he expresses his ignorance of where the

11:48next word comes from in a human. With AI we know it comes from statistical

11:55distributions and the training uh literature that it has been trained on. So I

12:03I’m extremely skeptical actually I noticed at the beginning of that piece

12:08he said AI can lie, which is a moral statement of course

12:14and we all know this euphemistic term hallucinate which simply means it tells

12:20lies. It [clears throat] wants you to be happy with what it says and it’s programmed that way. So it will

12:27make up fictitious things, fictitious references to books and all the rest of

12:33it in the hope of keeping you happy. So the truth question gets suppressed which

12:39I think is a very dangerous thing. So I just don’t find this convincing. I

12:45find its capacity impressive but that’s a different matter. I do not find it

12:52convincing. I myself am quite happy to use GPT to check things just as I used

12:57to use Google and so on and so forth. But the warning is there all the time.

13:05You must check things yourself before you let them loose. Yeah. So I just am not convinced. I

13:13think this sounds terrific. AI can think. No, it’s simulating one aspect of

13:20thought because we don’t know what thought is. Many years ago, I met a

13:26Nobel Prize winner, a young German who got his Nobel Prize for working out how

13:33the synapses of the brain functioned. And I said to him, tell me, I said,

13:39what’s your dream ticket? He said, “I’d love to know how a synapse, if that’s

13:46what does it, or the brain held a thought for a second.

13:51We haven’t a clue.” And so, Harari, who’s not a scientist, he’s a historian,

13:58is in a sense making vague and grandiose

14:04um claims. And what alerted me to that was the agenda he has which you’re

14:11probably aware of for this century. Uh two big things. One to uh solve what he

14:18calls the technical problem of death and two to enhance uh human happiness and

14:26the latter to be done by AI, genetic engineering and all the rest of it.

14:31And this kind of hype I find not impressive.

14:37And you’ll have noticed as well that the singularity that Kurzweil projected to

14:44happen when in a sense AI or robots take over is always about 50 years in the

14:50future and never seems to get much shorter. Though Hinton thinks it might be

14:55nearer than we think. So I’m afraid that’s my take on it. What’s yours?
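
Lennox’s point that an LLM’s next word “comes from statistical distributions” over its training text can be pictured with a toy bigram model. This is a drastic simplification of how modern language models actually work, offered only as a sketch of the idea; the tiny corpus reuses Harari’s syllogism from earlier.

```python
# Toy bigram "next-word predictor": a stand-in for the idea that the next
# word is drawn from a statistical distribution learned from training text.
# Real language models are vastly more sophisticated than this sketch.
from collections import Counter, defaultdict

training_text = (
    "all humans are mortal . socrates is human . "
    "therefore socrates is mortal ."
)

# Count how often each word follows each other word.
follows = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Most likely next word after `word`, plus the whole distribution."""
    dist = follows[word]
    total = sum(dist.values())
    probs = {w: c / total for w, c in dist.items()}
    return max(probs, key=probs.get), probs

best, probs = next_word("socrates")
print(best, probs)  # "is" follows "socrates" every time in this tiny corpus
```

The model never stores what “mortal” means; it stores only which tokens tend to follow which, which is the substance of the “statistical distributions” remark.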

15:02I think we are running a really interesting experiment. Um that is that

15:08is it certainly resonates with Alvin Plantinga’s uh evolutionary argument

15:13against naturalism and resonates with some of the things that C.S. Lewis said. Um and in brief, I guess

15:20Plantinga’s argument is um if our brains have evolved merely to survive

15:28and to pass on their own genes, then what brains are tracking is

15:34not truth. What our brains are tracking is survival value. And those two things are importantly different. Now,

15:40as you’ve just said, um AI is also wanting to do something that’s not

15:46exactly to track truth. Um AI wants to keep you at the computer. AI wants to appear convincing

15:52um even if it is wrong. I mean, you can still ask ChatGPT. I did this even this morning. Uh how many Rs are in the word

15:59strawberry? And uh not only does ChatGPT get that wrong, which is funny

16:04enough, it doubles down on its answer. And it says there are only two Rs in strawberry. Um, and it’s something

16:11to do with the fact that the tokens that ChatGPT uses are pretty much at the word level rather than at the

16:17letter level. Now, it’s not concerning that it gets it wrong. What is concerning is

16:23that it keeps on trying to pass off lies as though they are the truth and then doubles down on it and then doubles down

16:28on it which brings in the moral question that you raised, John. And I just
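
The “strawberry” failure described above is commonly attributed to tokenization: the model receives opaque subword tokens rather than letters, so spelling questions must be answered from learned associations rather than by inspection. A minimal sketch follows; the token split and IDs are invented for illustration, and real tokenizers vary by model.

```python
# Character level vs token level: why letter-counting is awkward for a
# model that only ever sees token IDs. The split and IDs are hypothetical.
word = "strawberry"

# A human (or character-level code) counts the letters directly.
r_count = word.count("r")
print(r_count)  # 3

# The model instead receives something like this:
tokens = ["straw", "berry"]   # hypothetical subword split
token_ids = [5021, 9377]      # made-up IDs -- all the model actually gets

# The spelling lives in the tokenizer's lookup table, not in the model's
# input, so nothing in the ID sequence exposes how many r's there are.
assert "".join(tokens) == word
assert r_count == sum(t.count("r") for t in tokens)
```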

16:34wonder if we are running an experiment in trying to prove Alvin Plantinga’s

16:40evolutionary argument against naturalism because it seems like as you said we need to check AI like there needs to

16:46be a super intelligence above so-called artificial intelligence that is moral

16:52that has other concerns, that thinks we ought to track truth, and therefore there needs to be a kind of super

16:58intelligence over and above AI in order for AI to do the job that it’s doing. If that is true when it comes to the

17:05robots, I think you can by parallel say that this is kind of what we need

17:11in order to have human intelligence, that if our brains are just trying to track or just trying to help us to survive

17:18um then really they would have evolved in different kinds of ways and that thinking would be thought of

17:25merely as uh a survival instinct rather than as something that

17:30tracks truth. So, what do you think, John? Do you think I’m hallucinating or do you think there is something to this comparison between what we’re learning

17:36about AI and what Alvin Plantinga and C.S. Lewis were talking about? Oh, absolutely. I have long felt that

17:44the Lewis Plantinga type arguments are central and very important when it comes

17:51to establishing the truth of Christianity. The views of Lewis and

17:56Plantinga on the logical consequences of

18:02evolutionary naturalism or materialism are hugely important because

18:09if you look it’s not only Christian philosophers and thinkers who can see the

18:16difficulty there. Thomas Nagel famously has concluded that

18:23evolutionary naturalism essentially shoots itself in the head, as he has

18:28been wont to say, because it undermines the rationality of thought and that is

18:36very important I think to present to the public around the place. And I like your

18:44idea of it being run as an experiment. It certainly is an experiment. And if

18:51you step back from it all, my impression is we’ve spent a long time trying to

18:57argue that human beings are merely animals reducing us to the biological substrate

19:04which is I suppose a slight advance from reducing us to a physical substrate

19:09except that the biological reductionism does the same thing just in two steps.

19:14And now we’re trying to abolish the distinction between human beings and machines.

19:21And in a sense those are attacks on the very nature of humanity. And I think

19:28they need to be resisted and they need to be resisted with a positive message

19:35which is the message that we do not get from the natural sciences.

19:40And I would want to bring in here the work of Iain McGilchrist

19:48because Iain in his studies of the human brain and its two hemispheres has come

19:56to the remarkable conclusion on the basis of hard neuroscientific evidence

20:02that one of the things that’s happened in the past 500 years and Harari is a

20:08classic example of it is that we’ve concentrated so much on the left side of

20:13the brain that we quote end up in a world where we know how almost

20:19everything works and we can build a lot of things but we know the meaning of

20:24nothing. We’ve lost the organ of meaning and we need to get back to a world view

20:32that incorporates both the left and the right brain. So, it seems to me that

20:38with AI, there’s a tremendous push underneath all of this to communicate

20:47a seriously reductionistic view of the human mind and the human person.

20:54Yes, I think so too. And I think where Harari goes, and we’ll just have one more brief clip from Harari

21:01because where he goes with it, I think he assumes the Enlightenment split of

21:07the last 400 years. Um and assumes that AI is now going to take over from the

21:12left-brained kind of thing. Um and is going to take over uh kind of

21:18intelligence and putting words in order and thinking in that sense. But what is left to us now as humans and our

21:25distinctive is now going to be feeling. Um, which is kind of an an interesting way for him to to go. At the center of

21:32the human story stands a man. He doesn’t just claim to be one more

21:38person from this world. He keeps [music] saying he is our maker and he has come

21:43in person. We want to immerse you in another way of seeing and to show you a different [music] vision for God, the

21:50world, and you without assuming any prior knowledge. We’ll be looking specifically at the vision for life that

21:56Jesus gives us. The meaning of life is [music] to know the love of this God.

22:03This is the beating heart of the [music] Christian story.

22:08Do you want in on this? The three invite you in. The two determine the world. Be one with the son

22:16of God.

22:22Let’s let’s uh hear him say it and uh and then let’s respond. The Bible says

22:28in the beginning was the word and the word was made flesh.

22:35The Tao Te Ching says the truth that can be expressed in words is not the absolute

22:42truth. Throughout history, people have always struggled with the tension

22:48between word and flesh, between the truth that can be expressed in words and

22:54the absolute truth which is beyond words. Previously this tension was internal to

23:02humanity. It was between different human groups. Some humans gave supreme

23:09importance to words. They have been willing, for example, to abandon or even

23:16kill their gay son just because of a few words in the Bible.

23:24Other humans have said, “But these are just words. The spirit of love should be

23:31much more important than the letter of the law. This tension between spirit and letter

23:38existed in every religion, every legal system, even every person.

23:44Now this tension will be externalized. It will become the tension not between

23:52different humans. It will be the tension between humans and AIs, the new masters

24:01of words. Everything made of words will

24:06be taken over by AI. So there has been a split and AI is now the master of words.

24:14I guess what is left to us is kind of feeling. Um but just fascinating that

24:19he began that section with John chapter 1. And it seems like some of the answers to the questions that

24:26he’s posing are staring him in the face. It is. And there’s so much hubris about that. I mean, it’s astonishing.

24:34Trite statements like saying absolute truth is beyond words. Is it really? But

24:40we can grasp huge aspects of absolute truth because

24:46God is the Word. He didn’t tease that out at all. And I’m not sure that he

24:52even begins to understand it. And the idea that AIs are masters of words, that

24:58is false. [clears throat] They don’t understand a single word.

25:03Right. Yeah. And therefore

25:08to say nothing of the fact that they’ve no concept of qualia and you were

25:14referring to emotions and what it’s like to be a human or what it’s like to be a

25:20bat, to quote the famous uh book by, uh, Oliver Sacks um

25:30I just find myself a bit stupefied at the vastness of these

25:36claims and he’s just getting away with saying what he believes without any

25:42serious intellectual or logical justification and of course it appeals

25:47to people, his books sell in millions, but what authority he has to say it I don’t

25:53know, he’s probably got it from AI, ChatGPT [laughter]

26:01um but your book 2084 and the AI Revolution um doesn’t deny the fact that

26:08a revolution is afoot. It’s underway. We’re kind of experiencing it.

26:13So if it’s not this sort of transition from Homo sapiens to Homo Deus or if it’s not

26:20this sort of transhumanist revolution as we ascend into a different kind of human, what kind

26:26of revolution is it? What kind of upheaval are we undergoing? I think it is an industrial revolution in a

26:33sense and it really is a revolution. I think the world has changed forever and it’s going to have huge implications and

26:41one of the immediate tasks is to get some grip on it ethically because the

26:48technology for years has been developing and advancing much more rapidly than the

26:55ethical underpinning that’s needed to control it. And that’s why I pay

27:00attention to leaders in AI who are quite worried because they don’t understand

27:09what they have created and they are afraid of what it may do and I share

27:16that fear to a certain extent. I mean, if they have that fear, we need to be

27:21very careful. And the experiment of simply letting it loose is a very risky

27:28one indeed. There’s no question in my mind, Glen, that we’re going to enhance

27:33AIs and add them together and move towards simulating not simply one thing humans

27:40can do, but many things humans can do. And I’m very aware because I’ve written

27:46about it in my most recent book on God, AI and the end of history, uh, my book on the

27:51book of Revelation, that there are indicators in the book of Revelation that human beings will under the command

28:00of a totalitarian world leader construct something that looks very like

28:08an all-controlling system, particularly in terms of economics, that has lethal

28:14powers over the inhabitants of earth which any uh dictator would love to put

28:20their hands on and certainly a lot of AI is moving towards setting up

28:25dictatorships and with the aim I suppose in the end of world domination. So there’s a lot of

28:31things to concern us. But I think what Harari has just been saying, and I’m

28:38sorry I didn’t see this speech live. I watched it afterwards before we chatted. It

28:46seems to me that he’s at one end of the extreme hype. Yes.

28:52Can you bring this into conversation with, um, so in The Abolition of Man um Lewis kind of talks about technology. It

28:59sort of presents itself as being something that is sort of above man, above humanity,

29:07but in the end it is a tool that some men use over other men. Oh

29:12yes. Lewis’s book, written in 1943, The Abolition of Man, ought

29:19to be read by everyone. Yeah. And it’s a seriously important work. And

29:26he does point that out that you get these technological abilities under the

29:33control of a few people and then they specify the whole human race in the

29:38future. That is what we are risking and that will end up as Lewis chillingly

29:46suggests what they produce is not a human being. It’s an artifact. It’s

29:53something made in the image of a human being. And here we’re back with this idea that it’ll be a simulated human

30:00being because it would be constructed by other brains. And the danger is it will lead to the worst slavery the world has

30:06ever known. And that is a huge concern because we are in a position

30:13biologically to modify the germ line of all succeeding generations. And

30:18somebody’s going to try it somewhere, I’m afraid. Uh ethics and

30:23controls are very difficult to impose. As anybody in business will realize,

30:30getting the mission statement of the company down from the wall into the

30:35hearts of executives is a very difficult problem in any business, let alone at

30:40this level. Especially if there’s an arms race and you think, well, maybe we won’t do it in this country, but somebody else in

30:46another country might do it. And that sort of pressure that everybody’s living with um will put huge

30:52and it’s a very serious pressure because we are already seeing AI being

30:57used massively in drone warfare and being developed in secret I can imagine

31:06and it’s giving people a lot of power. After all, it’s some years since

31:12Vladimir Putin said the nation or person that controls AI will control the world.

31:18And I think that’s what the basic problem is that it’s a huge power

31:24struggle. It’s a struggle for control. And I think to be fair to

31:29Harari, he realizes that. Yes. And it scares him a bit.

31:34Yes. But he’s completely wrong, it seems to me, about many things, particularly

31:41his idea that we’re going to solve the technical problem of human death,

31:46right? As if there was nothing more to it than that. Yes. Yes. It’s usually in a story

31:52the person who is trying to solve death with some kind of elixir or, you know, they’re usually the baddie in

31:58the story. Well, they are, but the search for the elixir of life is a very interesting

32:04one. And it seems to me to go back to Genesis where the story tells us that

32:11God forbade a tree that would have brought uh

32:16permanent life. And the search for the elixir of life may well be

32:23buried deep into that story that people feel that somewhere there’s some kind of

32:28trace element or chemical or something like this and they still do. Think of those that have their bodies and brains

32:35frozen in the hope that one day they’re going to be able to deal with them. Mind

32:41you, the transhumanist project I almost find amusing. Let me explain that because

32:48I’ve talked to quite a few people who buy into it and these two items. One to

32:54solve the problem of physical death and secondly to enhance human happiness.

33:00People who advance that to me, I simply smile at them and say, “You’re too late.”

33:07And they say, “You’re misunderstanding. We haven’t got there yet.” And I said,

33:12but look, this problem of physical death was solved 20 centuries ago when Jesus

33:18Christ was raised from the dead by the power of God, who is external to our

33:23physical universe. And as for enhancing human happiness, he made the promise to

33:30those who will face the mess they’ve made of their own lives, repent of it, and trust him as Lord and Savior. All of

33:37which words need to be unpacked, but they’re seriously important words. He says that they will receive in that

33:44moment an eternal life which is way beyond anything that transhumanists can

33:49offer. And talking about uploading your brains onto silicon, the greatest

33:56uploading will occur according to the New Testament when Christ returns of

34:01which his resurrection and ascension give us credible evidence. That gives you a base for a discussion, Glen.

34:07yes. At least people think when I say that that I believe something even if they think I’m crazy. [laughter]

34:14I love that. And I love going back to Genesis as well. This idea of Adam and Eve driven east of Eden.

34:20And the way back to the tree of life is barred. And even as they are driven east of Eden into a place where there are

34:27thorns and thistles, the the way on is actually the way into

34:33uh this this perishing condition and through a valley of deep darkness. And and that Christ kind of meets us east of

34:41Eden in the valley of deep shadow and through his death and resurrection kind of pulls pulls us into the future

34:49through our mortality and through our death. And it feels to me like some of

34:55the transhumanist um desires are really to

35:01um to get rid of our mortality, to get rid

35:07of our frailty and our finitude, which are precisely things that God has dignified

35:14in sending Christ. Like the Son of God has taken our flesh to himself. He has walked through that valley of deep

35:20darkness. He has dignified that journey through death and out the other side.

35:26And in that sense, the Genesis story is about people who are trying to

35:32unite heaven and earth without going through that valley. And a key story therefore is the Tower of Babel in which

35:40here is a different way of trying to unite heaven and earth. It’s essentially trying to get to heaven

35:45without having to walk through a valley. Let’s build a tower. But that to me, John, gives a sense of, I do

35:53think um God sort of smiles or or laughs at mocks perhaps our our attempts at uh

36:03you know, this intelligence that’s going to transcend humanity and all that kind of stuff, in the same way that he sort

36:09of almost derides the Tower of Babel and has to come down to see

36:15that Tower of Babel. I know, it’s almost humorous. The tower didn’t

36:20reach heaven because God had to come down to see it. Even to see it. Yeah. But I think that it encapsulates

36:28something hugely important because all of this AI transhumanism

36:36is utopianism. Yes. And the key problem and your

36:42imagery about the valley of the shadow uh of death fits in perfectly. It’s an

36:49attempt to build paradise without facing the problem of human sin and rebellion

36:55against God. In other words, without morality. And that is the key thing that’s absent

37:02and what is creating huge difficulty for these companies is their ethical uh

37:09stance which they claim to be working on. But it seems to me here that every

37:17attempt to build utopia avoiding the problem of human sin and rebellion

37:22against God has led to spectacular bloodshed. Witness the 20th century and

37:30the black book of atheism um the book by John Gray ought to be

37:35read by everybody who doesn’t believe that the track record of utopian

37:40thinking is horrific, in Russia, Germany, and China, and I fear that

37:48here we’re powering up another utopianism which bypasses

37:55the guts of the question which is moral relationship with the transcendent God.

38:01It’s rejecting the transcendent in favor

38:06of projecting existing human beings by genetic engineering onto some kind of

38:12big man image in the future where we are merged with machines essentially and

38:20take over the world. It’s very interesting. I find Max Tegmark, who’s a

38:26physicist I’m sure you’ve come across, in his book Life 3.0 outlines a number, I think 10 or

38:34so scenarios of what may happen in the future with AI and it’s interesting that

38:40quite a few of them are essentially world dictatorships

38:46and that’s something we need to think about in light of technology. I was in

38:53a meeting with Peter Thiel not long ago, the PayPal co-founder, entrepreneur, and

39:00all the rest of it. And he’s going around giving lectures on the apocalypse

39:05at the moment. Right. And I asked him this question. And I said, “Peter, it seems to me, and I’d

39:12like your check on this, but it seems to me that as a tech entrepreneur, you look at the advances

39:20in AI and you fear what’s going to happen in that they’re going to move

39:25towards a surveillance state and via that to a world dictatorship. And then

39:31because of your Christian background at home, you’re immersed in biblical

39:37prophecy and you seem to see the same scenario there. Would I be right in

39:42thinking that? And he said, absolutely right. So it’s interesting that you get someone like that who is putting the

39:51two things together and seeing them as mutually reinforcing.

39:57Yes. Yes. And of course that’s what I tried to do in my book, Glenn. My motivation was simple. I realized

40:05that people are widely prepared to take seriously

40:11these AI-generated futuristic world

40:16scenario proposals. And so my argument was, well, if you take that seriously, I would like to introduce

40:24you to a much older scenario. It’s 20 centuries old at least. In fact, it’s older than that.

40:31And see what you make of that because it’s got a lot of the features that you

40:36think might happen, but it’s got a lot more credibility in terms of the evidence for its truth.

40:43Right? And I think as Christians, we therefore have a very real message. And I’m amazed

40:50at the number of secular organizations who’ve come to me because of my book 2084 and asked me to speak to their

40:58workers or to their executives, this kind of thing. No holds barred, because they’re fascinated, right?

41:04And it used to be one couldn’t even mention these things without being written off as a peripheral

41:12prophecy freak. Yes. So just as we close then, John, what should our posture be towards

41:21this future that is upon us, and what are the sorts of things that we can say into this moment? Because I think we

41:28have phenomenal resources to be able to think about: what actually are humans, how are we different from animals, how are we

41:34different from machines, what is intelligence, what is consciousness, how do we make sense of mortal life

41:42and still maintain, you know, meaning, what about the moral questions? It seems like all the things that we’ve been

41:48talking about are absent in some of the secular pictures of the future.

41:54They are woven into the very heart of the Christian story. So, it seems like we’ve got so much to say into this

42:00moment. But perhaps we need a bit of confidence, do we, in order to sort of

42:05take the opportunities that this moment is presenting? Very much so. That’s why I wrote the

42:11book actually. I had several purposes. One, information

42:16because many Christians particularly, but not only Christians, are scared of this kind of

42:21technology. They’re just scared. And to point out, and I use the metaphor

42:27that Harari used in your first clip, that is, AI, like all technology, is like a

42:34knife. You can use a good knife for surgery or you can use it for murder.

42:39And we need to face the fact and be thankful for it that AI has brought

42:46enormous benefits in terms of medical research, in terms of the rapidity and

42:53the accuracy of medical screening, development of vaccines etc etc. It’s

42:59also solved some of the deepest problems that were in front of the scientists

43:05like protein folding and so on, bringing immense benefits. That’s

43:10absolutely clear and we should be grateful for that. The other side of

43:16that is it is raising increasingly the key question. What is a human being?

43:24Is there anything beyond the human? Are we all that exists? Are we simply

43:30in a stage of becoming and being shaped? And

43:36several people have said, look, human beings are a product of evolution, but in

43:42the future whatever is going to replace them will be a product of intelligent design, which I find quite

43:49amusing. Yeah, because I vehemently disagree with the

43:55analysis, but nevertheless, there’s truth to it, and therefore

44:00confidence seems to me to be the key thing. We need to inform the Christian public.

44:06We cannot shy away from this because we are all influenced by AI in subtle ways,

44:14but they’re powerful ways. Many of us, including me, carry a tracker. It’s

44:19called an iPhone. Dear knows what it’s listening to and checking up on and all this kind of thing. And this kind of

44:26surveillance will increase and increase and be put into the hands of police forces. And before we know where we are,

44:34we’re going to have credit systems rolled out as in China and be surveilled

44:39as they are in Xinjiang. And I think people ought to be aware of this kind of

44:46thing and what is happening. And of course, Harari warns about a lot of this. Not all he says is false. So, we

44:52need to do our analysis. But above all, Glenn, I would say we need to seriously

44:58get into what scripture says about the nature of human beings. We need to reopen Genesis and the whole biblical

45:07big story and see what value it gives us and where we fit in and have confidence

45:15in the lordship, leadership and power of Christ to transform us in

45:21ways that will influence our society and give us credibility as the dark shadows

45:27cast by AI loom larger and larger. So, we need to get out there and put our

45:34heads above the parapet. And I just hope that some of my books will encourage people to do that.

45:40They certainly do that indeed, John. And if viewers want to go back and

45:46see some more of the things that John says about Genesis, that was the topic of our previous conversation on

45:53this channel. We talked about Genesis chapters 1 to 3 back on the channel. We’ll put a link to that in the

45:58description. And in the future, I’d love to have you back on, John,

46:04to talk about my story, your autobiography that has just gone to the printers. Or has it just come back

46:10from the printers? Is that right? No, just gone to the printers. I haven’t seen it yet.

46:15Should be out in April. So, but thank you very much. I I very much enjoyed talking to you.

46:22Yes. And I’m grateful for what you’re doing in this space as well, Glenn. Oh wow.

46:28It’s an absolute pleasure to kind of step into some of these areas that you’ve been speaking into

46:34for many, many decades. So we’d love to have you back on the channel to talk about more of your story. But

46:40Professor John Lennox, thank you so much for joining us. Thank you very much. Bye-bye.

FULL DISCUSSION: Yuval Noah Harari Warns AI Will Take Over Language, Law, and Power at WEF | AI1G

0:00historians and philosophers Yuval Noah Harari. He is a distinguished research

0:06fellow at the University of Cambridge at the Centre for the Study of Existential Risk. He has been a lecturer in the

0:14Department of History at the Hebrew University of Jerusalem and he is

0:19co-founder of Sapienship. As many of you will know, he is a best-selling author of, amongst many

0:27books, Sapiens: A Brief History of Humankind, Homo Deus: A Brief History of

0:32Tomorrow, and 21 Lessons for the 21st Century, amongst others, selling over 50

0:39million books worldwide in 65 languages. He focuses on the macrohistorical

0:45questions of our time. And what a perfect moment with this pressing

0:51arrival and disruption of AI to have somebody of Yuval’s distinction take on

0:58this challenge. Please join me in warmly welcoming Yuval Noah Harari to deliver

1:04a conversation about AI and humanity.

1:14So hello everyone. There is one question that every leader

1:22today must answer about AI. But to understand that question, we first need

1:29to clarify a few points about what AI is

1:34and what AI can do. The most important thing to know about

1:40AI is that it is not just another tool.

1:45It is an agent. It can learn and change by itself and make decisions by itself.

1:54A knife is a tool. You can use a knife to cut salad or to murder someone, but

2:02it is your decision what to do with the knife.

2:07AI is a knife that can decide by itself

2:12whether to cut salad or to commit murder. The second thing to know about AI is

2:19that it can be a very creative agent. AI is a knife that can invent new kinds

2:27of knives as well as new kinds of music, medicine, and money.

2:35The third thing to know about AI is that it can lie and manipulate.

2:42 4 billion years of evolution have demonstrated that anything that wants to

2:48survive learns to lie and manipulate. The last four years have demonstrated

2:57that AI agents can acquire the will to survive and that AIs have already

3:05learned how to lie. Now, one big open question about AI

3:14is whether it can think. Modern philosophy began in the 17th

3:21century when René Descartes proclaimed, I think, therefore I am.

3:28Even before Descartes, we humans defined ourselves by our capacity to think. We

3:35believe we rule the world because we can think better than anyone else on

3:43this planet. Will AI challenge our supremacy in the

3:49field of thinking? Now that depends on what thinking means.

3:57Try to observe yourself thinking. What is happening there?

4:04Many people observe words popping in their mind and forming sentences and the

4:11sentences then forming arguments like all humans are mortal. I am human

4:19therefore I am mortal. If thinking really means putting words

4:29and other language tokens in order, then AI can already think much better than

4:36many, many humans. AI can certainly come up with a sentence like, AI thinks,

4:43therefore AI is. Some people argue that AI is just

4:49glorified autocomplete. It merely predicts the next word in a

4:55sentence. But is that so different from what the

5:02human mind is doing? Try to observe, to catch, the next word

5:09that pops up in your mind. Do you really know why you thought that word, where it

5:16came from? Why did you think this particular word and not some other word?

5:23Do you know? As far as putting words in order is concerned,

5:29AI already thinks better than many of us. Therefore, anything made of words

5:38will be taken over by AI. If laws are

5:44made of words, then AI will take over the legal system. If books are just

5:50combinations of words, then AI will take over books. If religion is built from

5:58words, then AI will take over religion. This is particularly true of religions

6:06based on books like Islam, Christianity or Judaism.

6:12Judaism called itself the religion of the book and it grants ultimate

6:17authority not to humans but to words in books.

6:23Humans have authority in Judaism not because of our experiences but only

6:29because we learn words in books. Now no

6:34human can read and remember all the words in all the Jewish books but AI can

6:42easily do that. What happens to a religion of the book

6:48when the greatest expert on the holy book is an AI?

6:53However, some people may say, can we really reduce human spirituality

7:00to just words in books? Does thinking mean only putting language tokens in

7:09order? If you observe yourself carefully when

7:14you’re thinking, you will notice that something else is happening there

7:20besides words popping in your mind and forming sentences.

7:26You also have some nonverbal feelings. Maybe you feel pain.

7:33Maybe you feel fear. Maybe love. Some thoughts are painful. Some are

7:40frightening. Some are full of love. While AIs become better than us with

7:47words, at least for now, we have zero evidence

7:52that AIs can feel anything. Of course, because AI is mastering language,

8:00AI can pretend to feel pain or love. AI

8:05can say, “I love you.” And if you challenge it to describe how love feels,

8:12AI can provide the best verbal description in the world.

8:18AI can read countless love poems and psychology books and can then describe

8:24the feeling of love much better than any human poet, psychologist or lover. But

8:33these are just words. The Bible says in the beginning was the

8:40word and the word was made flesh. The Tao Te Ching says the truth that can be

8:48expressed in words is not the absolute truth. Throughout history, people have

8:55always struggled with the tension between word and flesh, between the

9:01truth that can be expressed in words and the absolute truth which is beyond words.

9:08Previously this tension was internal to humanity. It was between different human

9:15groups. Some humans gave supreme importance to words. They’ve been

9:22willing, for example, to abandon or even kill their gay son just because of a few

9:30words in the Bible. Other humans have said, “But these are

9:36just words. The spirit of love should be much more important than the letter of

9:43the law.” This tension between spirit and letter existed in every religion, every legal

9:51system, even every person. Now this tension will be externalized.

9:59It will become the tension not between different humans; it will be the

10:04tension between humans and AIs, the new masters

10:11of words. Everything made of words will

10:16be taken over by AI. Previously all the words, all our verbal

10:23thoughts, they originated in some human mind. Either my mind I thought this or I

10:31learned it from another human. Soon most of the words in our minds will originate

10:39in a machine. I just heard today about a new word that AIs coined by themselves

10:46to describe us humans. They called us the watchers.

10:53The watchers, because we are watching them. AIs will soon be the origin of maybe

11:01most of the words in our minds. AIs will mass-produce thoughts by

11:08assembling words, symbols, images, and other language tokens into new

11:13combinations. Whether humans will still have a place in that world depends on the place we

11:21assign to our nonverbal feelings and our ability to embody wisdom that cannot be

11:28expressed in words. If we continue to define ourselves by our ability to think

11:34in words, our identity will collapse.

11:40All this means that no matter from which country you come, your country will soon

11:45face a severe identity crisis and also an immigration crisis.

11:54The immigrants this time will not be human beings coming in fragile boats

11:59without a visa or trying to cross a border in the middle of the night. The immigrants will be millions of AIs that

12:08can write love poems better than us, that can lie better than us, and that can

12:14travel at the speed of light without any need of visas.

12:19Like human immigrants, these AI immigrants will bring various benefits with them. We will have AI doctors to

12:27help in our health care systems, AI teachers to help in our education

12:32systems, even AI border guards to stop illegal human immigrants. But the AI

12:41immigrants will also bring with them problems. Those who are concerned about human

12:48immigrants usually argue that immigrants might take jobs, might change the local

12:55culture, might be politically disloyal. I’m not sure that’s true of all human

13:02immigrants, but it will definitely be true of the AI immigrants.

13:08The AI immigrants will take many human jobs. The AI immigrants will completely

13:15change the culture of every country. They will change our religion and even

13:21romance. Some people don’t like it if their son or daughter is dating an

13:27immigrant boyfriend. What would these people think when their son or daughter starts dating an AI

13:35boyfriend? And of course, the AI immigrants will have some dubious political loyalties.

13:42They are likely to be loyal not to your country but to some corporation or

13:47government across the ocean most probably in one of only two countries,

13:54China or the USA. The USA encourages countries to close

13:59their borders to human immigrants but open them very very wide to US AI

14:07immigrants. And now we can finally come to the question each one of you must soon

14:13answer. Will your country recognize the AI

14:19immigrants as legal persons?

14:24AIs are obviously not persons. They don’t have a body or a mind. But a legal

14:30person is something quite different from a person. A legal person is an entity

14:37that the law recognizes as having certain legal obligations and rights.

14:43For example, the right to hold property, to file a lawsuit, and to enjoy freedom

14:49of speech. In many countries, corporations are considered legal persons. The Alphabet

14:57Corporation can open a bank account, can sue you in court, or can donate to your

15:03next presidential campaign. In New Zealand, rivers have been

15:09recognized as legal persons. In India, certain gods have been granted such

15:16recognition. Of course, until today, recognizing a corporation, a river, or a

15:24god as a legal person was just legal fiction.

15:30In practice, if a corporation like Alphabet decided to buy another

15:35corporation or if a Hindu god, if a Hindu god decided to sue you in

15:43court, the decision wasn’t really made by the god. It was made by some human

15:49executives, shareholders or trustees. It is different with AIs. Unlike rivers

15:57and gods, AIs can actually make decisions by themselves. They will soon

16:05be able to make the decisions necessary to manage a bank account, to file a lawsuit, and even to operate a

16:12corporation without any need of human executives, shareholders or trustees.

16:20AIs can therefore function as persons. Do we want to allow that? Will your

16:28country recognize AIs as legal persons?

16:33What if other countries do it? Suppose your country doesn’t want to

16:39recognize AIs as persons. But the USA, in the name of deregulating AI and

16:46deregulating the markets grants legal recognition, legal personhood to

16:51millions of AIs which start running millions of new corporations.

16:58Will you block these US AI corporations from operating in your country?

17:05Suppose some US AI persons invent super-efficient and super-complex financial

17:13devices that humans cannot fully understand and therefore don’t know how

17:18to regulate. Will you open your financial markets to this new AI

17:24financial wizardry or will you try to block it thereby

17:30decoupling from the American financial system?

17:36Suppose some AI persons create a new religion which gains the faith of

17:41millions of people. That should not sound too far-fetched because after all,

17:47almost all previous religions in history have claimed that they were created by a

17:53nonhuman intelligence. Now, will your country extend freedom of religion to

18:00the new AI sect and to its AI priests and missionaries?

18:07Maybe we should start with something a bit simpler. Will your country allow AI

18:12persons to open social media accounts, enjoy freedom of speech on Facebook, on

18:19TikTok, and befriend children? Well, of course, that question should

18:25have been asked 10 years ago. On social media, AI bots have been operating as

18:32functional persons for at least a decade. If you think AIs should not be

18:40treated as persons on social media, you should have acted 10 years ago.

18:47 10 years from now, it will be too late for you to decide whether AIs should

18:53function as persons in the financial markets, in the courts, in the churches.

19:00Somebody else will already have decided it for you.

19:05If you want to influence where humanity is going, you need to make a decision now.

19:12So what is your answer as a leader? Do you think the AI immigrants should be

19:19recognized as legal persons? If not, how

19:24are you going to stop that? Thank you for listening to this human.

19:33[Applause]

19:40Thank you, Yuval. That was a fantastic overview. You posed a lot of questions

19:46um and they’re the right ones. I agree with much of what you say. We’re here in Davos where the theme is around dialogue

19:53and I was struck by your commentary around words and the importance of words

19:58and that being something that demarcates human animals from other animals, although that’s debatable, as there’s

20:04other language there. So in the context of Davos and the range of people we have

20:11here from technology from the business world from politicians

20:16how would you like to see this resolved? What is the answer that you have in terms of this slightly dystopian world you’ve

20:23potentially put in front of us? And if I may just add to that, I think

20:28it’s fair to say I’m a scientist by background, a neuroscientist, so I work a lot in this space, particularly around

20:35pain and we’re very comfortable with the fact that many of our discoveries, particularly technological discoveries,

20:42we often drive them forward and then afterwards we think, oh, we hadn’t thought enough about the ethics and the

20:48implications and then we’re trying to catch up on the regulation that we need to maybe put around it.

20:54So, we are where we are. This thing is happening as everybody says at scale both in terms of its magnitude and its

20:59pace more than we’ve ever seen before in the industrial revolution. We have all the right blend of people here in Davos.

21:05It’s all about dialogue. What would you like to see go forward in terms of

21:11putting boundaries around some of the slightly more worrying areas that you detailed? And what are your own thoughts

21:17about the ethical implications of giving legal rights either to agents, to robots,

21:23or to the ones that just exist on the internet? A lot of things there. I mean, first of

21:29all, I would say that, you know, Davos is about words. It’s about talking. The basic idea of

21:37Davos is that you can change the world just by talking. I like this idea

21:42because this is also my idea as an author, as a university lecturer. This is what I do. I talk, I write. I think I can

21:49influence the world with words. And this is now in question. Are

22:06we at the end of the road for words? Is this no longer a function?

22:06And you know, engineers and also soldiers,

22:11they don’t change the world with words. They do stuff. They take action.

22:18Um, philosophers, scholars, also political leaders,

22:25they try to change the world with words, by saying things. And maybe we’ve

22:31reached the end of that road. And what does it mean? But we know that we

22:38humans, we conquer the world ultimately, I would say, with language and words. Because yes, engineers can make weapons

22:45and soldiers can wield them. But to build an army, you need to convince thousands of strangers to cooperate. How

22:52do you do that? With words, with ideology, with religion.

22:59So, humans took over the world not because we are the strongest physically,

23:04but because we discovered how to use words to get thousands and millions and

23:10billions of strangers to cooperate. This was our superpower. And now

23:17something has emerged that is going to take our superpower from us. Until a few

23:24years ago, nothing on earth could use words. Only humans. Chimpanzees

23:31couldn’t, rivers couldn’t, the sun couldn’t, we could use words.

23:36Now there is something that is able, or soon will be able, to use words better than us. And just look, you know, at

23:43what happened on social media and the immense change it brought to the world there.

23:49So 10 years from now, living in a world in which AIs are in command of language,

23:58what does that look like? Well, Davos in 10 years might look very different, as you say. So that’s a future

24:04we can all try to think about in the context of who would be here beyond the physical human. But if I may just

24:12discuss a little bit around the fact that it’s not new for humans to be

24:18beaten by technology. So if we think about some of the tech, we can’t fly and we built airplanes. Cars can go faster

24:25than us. We’re very comfortable with that. The threat that comes with AI is the fact that it’s a threat to the

24:31sovereign power of our ability to think. And that is destabilizing. I say that as

24:36an academic and an educator. That’s something that is very threatening. But if we go back to say a robot, the value

24:44we would place on a robot being able to run the 100 meters faster than Usain Bolt is less. There’s something about

24:50the human endeavor, the struggle, the suffering, the fact that we can have a collective sense of empathy and

24:56understanding about what it meant to achieve something even if it was lesser with technology. I just wonder whether

25:03with an author that would replace you, how much as humans we would value the words, the creativity that comes

25:10from art that’s been done with artificial intelligence. Do you think we will value it as much

25:15and therefore there’s still a place for humans in the creative space of thinking and words? That’s the identity crisis because the

25:22cow didn’t say, I run, therefore I am. I mean, we based human identity on

25:30our capacity to think. We always knew that cheetahs can run faster than us. We

25:36always knew that elephants are much bigger and stronger than us. So we didn’t define ourselves by this. We

25:42defined ourselves by thinking. And now something is going to be better than us

25:49in thinking. If thinking means putting words in order. Now I’m again I’m an

25:54author. I am a speaker. I put words in order. This is my game. Like, I have all these words and I think, oh,

26:00let’s put these words in this order. No, no, no, it will be better to put it like this, like this,

26:05and AI will beat me. I don’t know how long it will take, two years, five years, 10 years, it will beat me, and then what

26:14does it mean for our identity? People identify, you know, with the streams

26:19of words in their mind. Like, you close your eyes, you try to see what’s happening inside. Many people, I’m

26:26one of them, see words popping up, organizing themselves. We identify with

26:31that. But I guess my point, using the same analogy, is that we still value a human.

26:36We have the Olympics. We’ve got the Winter Olympics coming. We know that many other animals and other technologies can outperform us in

26:44many of those areas. Yet we still really enjoy the humanity of people that train

26:49and develop even though it’s not as good. And I just wonder whether we will just naturally extend that to the

26:55thinking realm and to the word so that you still will have a very very vibrant and successful author role in 10 years

27:02time, and, I don’t know, because I will value your book more than an AI-generated book. Even if the AI comes up with new

27:10ideas better than me. Like, let’s say that you want to invest money and you ask a human consultant, and she comes

27:18up with a certain whatever, and you can empathize with her because she had this life story and whatever. And then you

27:24have the AI financial consultant with zero life story, zero emotions but

27:30better financial advice. Which one will you follow? Now, I think

27:37the big mistake, and this is why I started with the idea of agency, is that we always think we can just use

27:46these things as tools. But if they can think they are agents.

27:52Yeah. You know, maybe I’ll tell a story from medieval history about how the Anglo-Saxons took over Britain, and

28:01it’s part myth, part history, that, you know, the Britons who lived there

28:06originally were fighting with the Picts and the Scots coming from the north,

28:12and the Britons didn’t fight very well. So the king of the Britons, Vortigern, he

28:17said I have an idea. I’ve heard that in Germany, in Scandinavia, these people really know how to fight. So, let’s

28:23bring over some mercenaries, some Anglo-Saxon mercenaries. They will fight for us. They will defeat the Picts and the

28:30Scots. And Vortigern brings over Anglo-Saxon mercenaries and they fight

28:35well and they defeat the Scots and the Picts. But then the Anglo-Saxons say

28:40to themselves, this is a rich country, and these people, they are very weak, they are disunited. We can

28:47take over and they take over. We understand this with human mercenaries. We understand

28:55that when you bring a human mercenary, okay, you pay them, but they have a mind of their own. Maybe they rebel.

29:03We don’t get it with AIs. Yeah. You know, you look at the leaders

29:08of the world, they think, “Oh, I’ll bring AI to fight my war for me.”

29:14The idea that it can just take power away from you, yeah, it doesn’t really cross their mind. They

29:21don’t really accept that AIs can think. Yeah. Yeah. And that is very

29:26fundamentally different. So just to reverse that, you’re an alum of my institution, and we’re proud of

29:33that, although you’re working at the other place now in Cambridge. So the challenge I think for the education

29:38sector and it goes back to a reverse flip of what Alan Turing said which was

29:43whether a computer could think, the sort of birthplace of artificial intelligence, if you will. So I think the

29:49question we’ve been posing inside the academic sector is how do we keep humans thinking

29:54because if we keep abdicating our decision-making, our financial decisions or whatever it might be, increasingly

30:01to AI, the worry we have quite quickly, and we’re seeing this with students coming to us through the school

30:06system overusing ChatGPT, is the deskilling of the critical faculties of

30:12human brain thinking. So what’s your advice to me in the academic sector

30:17about how can we hang in there as humans and keep humans thinking so that we at least have some capacity to live

30:24alongside these technologies which as you say bring us into a very different place going forward in terms of world

30:30order. Yeah. At the present moment we still think better. So at the present moment it’s kind of

30:37telling people, you need your critical thinking, you need moral evaluations. You cannot get that from

30:42AI. But we need to prepare for the moment when this is no longer the case. We need

30:48to prepare for the moment, let’s say, again, take economics or finance, when AIs

30:54create a new financial system, a financial system that they understand

30:59and we don’t understand. How do you train economists or politicians in a

31:06world in which humans really can no longer understand how finance

31:12functions, because AIs have created this super-complex financial system. We are

31:19like the horses. You know, horses can see that they are being traded from one

31:24human to another for a few shiny gold coins, but they can’t understand this idea of money. Too complicated.

31:31we can be in the same situation 10 years from now. Davos 10 years from now.

31:37Maybe nobody in the room, no human in the room, understands the financial

31:42system anymore, because it’s dominated by AIs, and the AIs have come up with new

31:49financial strategies and devices that are just mathematically beyond the

31:55capacity of the human brain. So what do politics and finance and Davos look

32:01like in a world where no human being understands finance anymore?

32:06Yeah. No. Well, that’s a beautiful note to finish on. We’ve run out of time. There are many more questions that we

32:12could explore, one of which being the major difference that we know

32:17about between artificial intelligence and human intelligence: the human brain develops from birth to

32:22adulthood around age 20, and it is a product of your life experience as a sentient human being, feeling love,

32:29anger, these emotions. And whilst one can improvise a little bit with sensory detectors, and you can train the brain to

32:35do that, that is fundamentally different. So the artificial brain is not a human brain. It is not human. And there is

32:41maybe something that is still of value there that goes back to that core

32:46business of this sentient human being that brings value to our understanding. And maybe one last comment?

32:52Please think about educating kids in a world where, from day zero, maybe most of

33:00the interaction of the new child is with an AI. Yeah. And not with a human being.

33:06Yeah. It’s the biggest and scariest psychological experiment in history and we are conducting it.

33:12Indeed we are. Well, thank you so much. I’m delighted that you’re thinking about these problems and that you’ve got

33:18us all thinking this afternoon. I look forward to you coming back maybe to Davos in 10 years and reflecting

33:24on this conversation and just where we have got to. But thank you all to the audience, those of you online and those in the room. And thank you. Can we give

33:30a round of applause? [Applause]

33:51[Music]
