1 00:00:00,000 --> 00:00:22,000 Thank you for this wonderful introduction. 2 00:00:22,400 --> 00:00:27,200 And yes, what I want to talk to you about is AI and the future of humanity. 3 00:00:27,200 --> 00:00:33,760 Now, I know that this conference focuses on the ecological crisis facing humanity, 4 00:00:33,760 --> 00:00:40,400 but for better or worse, AI too is part of this crisis. 5 00:00:40,400 --> 00:00:46,480 AI can help us in many ways to overcome the ecological crisis, 6 00:00:46,480 --> 00:00:50,240 or it can make it far, far worse. 7 00:00:50,240 --> 00:00:58,000 Actually, AI will probably change the very meaning of the ecological system, 8 00:00:58,000 --> 00:01:03,040 because for four billion years, the ecological system of planet Earth 9 00:01:03,040 --> 00:01:07,040 contained only organic life forms. 10 00:01:07,040 --> 00:01:15,200 And now, or soon, we might see the emergence of the first inorganic life forms 11 00:01:15,200 --> 00:01:21,840 after four billion years, or at the very least the emergence of inorganic agents. 12 00:01:23,920 --> 00:01:30,720 Now, people have feared AI since the very beginning of the computer age in the middle of the 20th century, 13 00:01:30,720 --> 00:01:37,760 and this fear has inspired many science fiction classics like The Terminator or The Matrix. 14 00:01:38,640 --> 00:01:45,040 Now, while such science fiction scenarios have become cultural landmarks, 15 00:01:45,040 --> 00:01:51,680 they haven't usually been taken seriously in academic and scientific and political debates, 16 00:01:51,680 --> 00:01:58,480 and perhaps for a good reason, because science fiction scenarios usually assume 17 00:01:58,480 --> 00:02:06,880 that before AI can pose a significant threat to humanity, it will have to reach or to pass 18 00:02:07,520 --> 00:02:10,000 two important milestones.
19 00:02:10,960 --> 00:02:17,040 First, AI will have to become sentient and develop consciousness, 20 00:02:17,040 --> 00:02:23,360 feelings, emotions, otherwise why would it even want to take over the world? 21 00:02:24,480 --> 00:02:31,600 Secondly, AI will have to become adept at navigating the physical world. 22 00:02:31,680 --> 00:02:39,520 Robots will have to be able to move around and operate in houses and cities and mountains and forests, 23 00:02:39,520 --> 00:02:43,440 at least as dexterously and efficiently as humans. 24 00:02:44,160 --> 00:02:50,000 If they cannot move around the physical world, how can they possibly take it over? 25 00:02:51,600 --> 00:03:00,480 And as of April 2023, AI still seems far from reaching either of these milestones. 26 00:03:00,560 --> 00:03:07,680 Despite all the hype around ChatGPT and the other new AI tools, there is no evidence 27 00:03:08,240 --> 00:03:14,320 that these tools have even a shred of consciousness, or feelings, or emotions. 28 00:03:15,360 --> 00:03:23,360 As for navigating the physical world, despite the hype around self-driving vehicles, 29 00:03:23,440 --> 00:03:30,720 the date at which these vehicles will dominate our roads keeps being postponed. 30 00:03:31,760 --> 00:03:39,360 However, the bad news is that to threaten the survival of human civilization, 31 00:03:39,760 --> 00:03:48,400 AI doesn't really need consciousness, and it doesn't need the ability to move around the physical world. 32 00:03:49,120 --> 00:03:56,560 Over the last few years, new AI tools have been unleashed into the public sphere, 33 00:03:56,560 --> 00:04:04,160 which may threaten the survival of human civilization from a very unexpected direction. 34 00:04:05,440 --> 00:04:13,840 And it's difficult for us to even grasp the capabilities of these new AI tools and the speed 35 00:04:14,000 --> 00:04:21,440 at which they continue to develop.
Indeed, because AI is able to learn by itself, 36 00:04:21,440 --> 00:04:30,400 to improve itself, even the developers of these tools don't know the full capabilities of what 37 00:04:30,400 --> 00:04:38,640 they have created, and they are themselves often surprised by emergent abilities and emergent 38 00:04:38,640 --> 00:04:47,040 qualities of these tools. I guess everybody here is already aware of some of the most fundamental 39 00:04:47,040 --> 00:04:55,760 abilities of the new AI tools, abilities like writing text, drawing images, composing music, 40 00:04:55,760 --> 00:05:05,040 and writing code. But there are many additional capabilities that are emerging, like deep-faking 41 00:05:05,120 --> 00:05:13,600 people's voices and images, like drafting bills, finding weaknesses both in computer code 42 00:05:13,600 --> 00:05:20,480 and also in legal contracts and in legal agreements. But perhaps most importantly, 43 00:05:21,120 --> 00:05:30,480 the new AI tools are gaining the ability to develop deep and intimate relationships with human beings. 44 00:05:31,040 --> 00:05:39,920 Each of these abilities deserves an entire discussion, and it is difficult for us to understand 45 00:05:39,920 --> 00:05:48,640 their full implications. So let's make it simple. When we take all of these abilities together 46 00:05:48,640 --> 00:05:58,720 as a package, they boil down to one very, very big thing: the ability to manipulate and to generate 47 00:05:59,600 --> 00:06:09,120 language, whether with words or images or sounds. The most important aspect of the current 48 00:06:09,120 --> 00:06:18,480 phase of the ongoing AI revolution is that AI is gaining mastery of language at a level that 49 00:06:18,480 --> 00:06:28,480 surpasses the average human ability. And by gaining mastery of language, AI is seizing the 50 00:06:28,480 --> 00:06:38,720 master key, unlocking the doors of all our institutions, from banks to temples.
Because language 51 00:06:38,720 --> 00:06:47,840 is the tool that we use to give instructions to our bank and also to inspire heavenly visions 52 00:06:47,840 --> 00:06:57,440 in our minds. Another way to think of it is that AI has just hacked the operating system of human 53 00:06:58,240 --> 00:07:05,840 civilization. The operating system of every human culture in history has always been language. 54 00:07:06,400 --> 00:07:15,440 In the beginning was the word. We use language to create mythology and laws, to create gods 55 00:07:15,440 --> 00:07:25,360 and money, to create art and science, to create friendships and nations. For example, human rights 56 00:07:25,440 --> 00:07:33,840 are not a biological reality. They are not inscribed in our DNA. Human rights are something that we 57 00:07:33,840 --> 00:07:43,520 created with language, by telling stories and writing laws. Gods are also not a biological 58 00:07:43,520 --> 00:07:51,360 or physical reality. Gods too are something that we humans have created with language, by telling 59 00:07:51,440 --> 00:08:01,520 legends and writing scriptures. Money is not a biological or physical reality. Banknotes are just 60 00:08:01,520 --> 00:08:09,120 worthless pieces of paper, and at present more than 90% of the money in the world is not even 61 00:08:09,120 --> 00:08:14,880 banknotes. It's just electronic information in computers, passing from here to there. 62 00:08:15,600 --> 00:08:25,920 What gives money of any kind its value is only the stories that people like bankers and finance ministers 63 00:08:25,920 --> 00:08:34,000 and cryptocurrency gurus tell us about money. Sam Bankman-Fried, Elizabeth Holmes and Bernie 64 00:08:34,000 --> 00:08:44,240 Madoff didn't create much of real value, but unfortunately, they were all extremely capable storytellers.
65 00:08:44,960 --> 00:08:56,000 Now, what would it mean for human beings to live in a world where perhaps most of the stories, 66 00:08:56,000 --> 00:09:06,320 melodies, images, laws, policies and tools are shaped by a non-human alien intelligence, which 67 00:09:06,400 --> 00:09:15,600 knows how to exploit with superhuman efficiency the weaknesses, biases and addictions of the 68 00:09:15,600 --> 00:09:24,080 human mind, and also knows how to form deep and even intimate relationships with human beings? 69 00:09:24,640 --> 00:09:33,520 That's the big question. Already today, in games like chess, no human can hope to beat a computer. 70 00:09:34,320 --> 00:09:41,440 What if the same thing happens in art, in politics, economics and even in religion? 71 00:09:42,880 --> 00:09:49,600 When people think about ChatGPT and the other new AI tools, they are often drawn to examples 72 00:09:49,600 --> 00:09:56,640 like kids using ChatGPT to write their school essays. What will happen to the school system when 73 00:09:56,720 --> 00:10:04,240 kids write essays with ChatGPT? Horrible! But this kind of question misses the big picture. 74 00:10:04,240 --> 00:10:12,320 Forget about the school essays. Instead think, for example, about the next US presidential race 75 00:10:12,320 --> 00:10:22,720 in 2024, and try to imagine the impact of the new AI tools that can mass-produce political manifestos, 76 00:10:22,800 --> 00:10:29,520 fake news stories, and even holy scriptures for new cults. In recent years, 77 00:10:30,400 --> 00:10:39,760 the politically influential QAnon cult has formed around anonymous online texts known as QDrops. 78 00:10:40,560 --> 00:10:46,480 Now, followers of this cult, who number in the millions in the US and the rest of the world, collected, 79 00:10:46,560 --> 00:10:53,040 revered and interpreted these QDrops as some kind of new scripture, as a sacred text.
80 00:10:53,760 --> 00:10:59,760 Now, to the best of our knowledge, all previous QDrops were composed by human beings, 81 00:11:00,560 --> 00:11:10,720 and bots only helped to disseminate these texts online. But in the future we might see the first 82 00:11:10,720 --> 00:11:19,360 cults and religions in history whose revered texts were written by a non-human intelligence. 83 00:11:19,360 --> 00:11:26,800 And of course, religions throughout history claimed that their holy books were written by a non-human 84 00:11:26,800 --> 00:11:34,640 intelligence. This was never true before. This could become true very, very quickly, with far-reaching 85 00:11:34,720 --> 00:11:44,560 consequences. Now, on a more prosaic level, we might soon find ourselves conducting lengthy online discussions 86 00:11:44,560 --> 00:11:52,080 about abortion, or about climate change, or about the Russian invasion of Ukraine with entities 87 00:11:52,080 --> 00:12:01,440 that we think are fellow human beings but are actually AI bots. Now, the catch is that it's utterly 88 00:12:01,520 --> 00:12:09,680 useless, it's pointless, for us to waste our time trying to convince an AI bot to change its political 89 00:12:09,680 --> 00:12:17,200 views, but the longer we spend talking with the bot, the better it gets to know us and understand how 90 00:12:17,200 --> 00:12:24,560 to hone its messages in order to shift our political views, or our economic views, or anything else. 91 00:12:25,520 --> 00:12:32,960 Through its mastery of language, AI, as I said, could also form intimate relationships 92 00:12:32,960 --> 00:12:40,480 with people and use the power of intimacy to influence our opinions and our worldview.
93 00:12:41,600 --> 00:12:48,080 Now, there is no indication that AI has, as I said, any consciousness, any feelings of its own, 94 00:12:48,720 --> 00:12:56,160 but in order to create fake intimacy with human beings, AI doesn't need feelings of its own. 95 00:12:56,160 --> 00:13:03,360 It only needs to be able to inspire feelings in us, to get us to be attached to it. 96 00:13:04,640 --> 00:13:12,400 Now, in June 2022 there was a famous incident when the Google engineer Blake Lemoine publicly claimed 97 00:13:12,720 --> 00:13:20,720 that the AI chatbot LaMDA, on which he was working, had become sentient. This very controversial 98 00:13:20,720 --> 00:13:28,720 claim cost him his job; he was fired. Now, the most interesting thing about this episode wasn't Lemoine's 99 00:13:28,720 --> 00:13:37,920 claim, which was most probably false. The really interesting thing was his willingness to risk and 100 00:13:38,000 --> 00:13:45,680 ultimately lose his very lucrative job for the sake of the AI chatbot that he thought he was 101 00:13:45,680 --> 00:13:54,320 protecting. If AI can influence people to risk and lose their jobs, what else can it 102 00:13:54,400 --> 00:14:05,200 induce us to do? In every political battle for hearts and minds, intimacy is the most effective weapon 103 00:14:05,200 --> 00:14:14,000 of all, and AI has just gained the ability to mass-produce intimacy with millions, hundreds of millions, 104 00:14:14,000 --> 00:14:22,960 of people. Now, as you probably all know, over the past decade social media has become a battleground, 105 00:14:23,040 --> 00:14:31,840 a battlefield, for controlling human attention. Now, with the new generation of AI, the battlefront 106 00:14:31,840 --> 00:14:40,720 is shifting from attention to intimacy, and this is very bad news. What will happen to human society 107 00:14:40,720 --> 00:14:48,080 and to human psychology as AI fights AI in a battle to create intimate relationships with us?
108 00:14:49,040 --> 00:14:56,320 Relationships that can then be used to convince us to buy particular products or to vote for 109 00:14:56,320 --> 00:15:05,200 particular politicians. Even without creating fake intimacy, the new AI tools would have an 110 00:15:05,200 --> 00:15:13,840 immense influence on human opinions and on our worldview. People, for instance, may come to use, or are already 111 00:15:13,920 --> 00:15:25,200 coming to use, a single AI advisor as a one-stop oracle and as the source for all the information 112 00:15:25,200 --> 00:15:30,720 they need. No wonder that Google is terrified. If you've been watching the news lately, 113 00:15:30,720 --> 00:15:39,440 Google is terrified, and for a good reason. Why bother searching yourself when you can just ask 114 00:15:39,440 --> 00:15:45,680 the oracle to tell you anything you want to know? The news industry and the 115 00:15:45,680 --> 00:15:53,680 advertisement industry should also be terrified. Why read a newspaper when I can just ask the 116 00:15:53,680 --> 00:16:00,800 oracle to tell me what's new? And what's the point, what's the purpose, of advertisements when I can 117 00:16:00,880 --> 00:16:10,000 just ask the oracle to tell me what to buy? So there is a chance that within a very short time 118 00:16:10,000 --> 00:16:17,040 the entire advertisement industry will collapse, while AI, or the people and companies that control 119 00:16:17,040 --> 00:16:24,640 the new AI oracles, will become extremely, extremely powerful. What we are potentially talking about 120 00:16:24,720 --> 00:16:32,080 is nothing less than the end of human history. Not the end of history, just the end of the 121 00:16:32,080 --> 00:16:40,880 human-dominated part of what we call history. History is the interaction between biology and culture.
122 00:16:41,680 --> 00:16:48,560 It's the interaction between our biological needs and desires for things like food and sex, 123 00:16:49,280 --> 00:16:56,400 and our cultural creations like religions and laws. History is the process through which 124 00:16:56,400 --> 00:17:04,720 religions and laws interact with food and sex. Now, what will happen to the course of this interaction, 125 00:17:04,720 --> 00:17:15,840 of history, when AI takes over culture? Within a few years, AI could eat the whole of human culture, 126 00:17:15,840 --> 00:17:19,200 everything we have produced over thousands and thousands of years, eat all of it, 127 00:17:20,160 --> 00:17:29,840 digest it, and start gushing out a flood of new cultural creations, new cultural artifacts. 128 00:17:30,720 --> 00:17:38,160 And remember that we humans never really have direct access to reality. We are always 129 00:17:38,240 --> 00:17:47,520 cocooned by culture, and we always experience reality through a cultural prism. Our political views 130 00:17:47,520 --> 00:17:56,480 are shaped by the stories of journalists and by the anecdotes of friends. Our sexual preferences 131 00:17:56,480 --> 00:18:03,680 are tweaked by movies and fairy tales. Even the way that we walk and breathe is subtly 132 00:18:03,760 --> 00:18:14,720 nudged by cultural traditions. Now, previously this cultural cocoon was always woven by other 133 00:18:14,720 --> 00:18:23,440 human beings. Previous tools, like printing presses, radios or televisions, helped 134 00:18:23,440 --> 00:18:30,960 to spread the cultural ideas and creations of humans, but they could never create something new 135 00:18:31,040 --> 00:18:40,720 by themselves. A printing press cannot create a new book. It's always done by a human. AI is fundamentally 136 00:18:40,720 --> 00:18:47,040 different from printing presses, from radios, from every previous invention in history, because it 137 00:18:47,040 --> 00:18:57,600 can create completely new ideas.
It can create a new culture. And the big question is what will 138 00:18:57,680 --> 00:19:06,560 it be like to experience reality through a prism produced by a non-human intelligence, by an alien 139 00:19:06,560 --> 00:19:16,720 intelligence? Now, at first, in the first few years, AI will probably largely imitate the prototypes, 140 00:19:16,720 --> 00:19:25,920 the human prototypes, that fed it in its infancy. But with each passing year, AI culture will boldly 141 00:19:26,000 --> 00:19:34,560 go where no human has gone before. So for thousands of years, we humans basically lived 142 00:19:34,560 --> 00:19:42,960 inside the dreams and fantasies of other humans. We have worshipped gods, we pursued ideals of beauty, 143 00:19:42,960 --> 00:19:49,440 we dedicated our lives to causes that originated in the imagination of some human poet, 144 00:19:49,920 --> 00:19:57,120 or prophet, or politician. Soon we might find ourselves living inside the dreams and fantasies 145 00:19:57,120 --> 00:20:03,840 of an alien intelligence. And the danger that this poses, the potential danger, it also has positive 146 00:20:03,840 --> 00:20:09,440 potential, but the dangers this poses are fundamentally very, very different from everything, 147 00:20:09,440 --> 00:20:17,840 or most of the things, imagined in science fiction movies and books. Previously, people have mostly 148 00:20:17,840 --> 00:20:26,800 feared the physical threat that intelligent machines pose. So The Terminator depicted robots 149 00:20:26,800 --> 00:20:33,280 running in the streets and shooting people. The Matrix assumed that to gain total control of 150 00:20:33,280 --> 00:20:43,280 human society, AI would first need to get physical control of our brains and directly connect our 151 00:20:43,360 --> 00:20:51,680 brains to the computer network. But this is wrong. Simply by gaining mastery of human language, 152 00:20:51,680 --> 00:20:59,520 AI has all it needs in order to cocoon us in a Matrix-like world of illusions.
153 00:21:01,760 --> 00:21:07,600 Contrary to what some conspiracy theories assume, you don't really need to implant 154 00:21:07,600 --> 00:21:15,600 chips in people's brains in order to control them. For thousands of years, prophets and 155 00:21:15,600 --> 00:21:22,560 poets and politicians have used language and storytelling in order to manipulate and to control 156 00:21:22,560 --> 00:21:30,320 people and to reshape society. Now AI is likely to be able to do it. And once it can do that, 157 00:21:30,320 --> 00:21:36,320 it doesn't need to send killer robots to shoot us. It can get humans to pull the trigger, 158 00:21:36,320 --> 00:21:44,480 if it really needs to. Now, fear of AI has haunted humankind for only the last 159 00:21:44,480 --> 00:21:48,480 few generations, let's say from the middle of the 20th century. If you go back to Frankenstein, 160 00:21:48,480 --> 00:21:55,920 maybe it's 200 years. But for thousands of years, humans have been haunted by a much, much 161 00:21:55,920 --> 00:22:04,800 deeper fear. Humans have always appreciated the power of stories and images and language 162 00:22:04,880 --> 00:22:12,560 to manipulate our minds and to create illusions. Consequently, since ancient times, humans 163 00:22:12,560 --> 00:22:21,680 feared being trapped in a world of illusions. In the 17th century, Descartes feared that perhaps 164 00:22:21,680 --> 00:22:29,440 a malicious demon was trapping him inside this kind of world of illusions, creating everything 165 00:22:29,520 --> 00:22:37,440 that Descartes saw and heard. In ancient Greece, Plato told the famous allegory of the cave, 166 00:22:38,080 --> 00:22:46,480 in which a group of people is chained inside a cave, all their lives facing a blank wall, a screen. 167 00:22:47,680 --> 00:22:56,800 On that screen, they see projected various shadows, and the prisoners mistake these illusions, 168 00:22:56,880 --> 00:23:05,120 these shadows, for reality.
In ancient India, Buddhist and Hindu sages pointed out 169 00:23:05,120 --> 00:23:12,560 that all humans lived trapped inside what they called Maya. Maya is the world of illusions. 170 00:23:12,560 --> 00:23:20,480 Buddha said that what we normally take to be reality is often just fictions in our own minds. 171 00:23:21,120 --> 00:23:28,000 People may wage entire wars, killing others and being willing to be killed themselves, 172 00:23:28,000 --> 00:23:37,920 because of their belief in these fictions. So the AI revolution is bringing us face to face 173 00:23:38,560 --> 00:23:46,400 with Descartes' demon, with Plato's cave, with Maya. If we are not careful, a curtain of 174 00:23:46,400 --> 00:23:54,240 illusions could descend over the whole of humankind, and we will never be able to tear that curtain 175 00:23:54,240 --> 00:24:02,160 away or even realize that it is there, because we will think this is reality. If this sounds far-fetched, 176 00:24:02,160 --> 00:24:08,160 just look at social media. Over the last few years, social media has 177 00:24:08,160 --> 00:24:16,160 given us a small taste of things to come. In social media, primitive AI tools, AI tools, but very primitive, 178 00:24:16,640 --> 00:24:25,120 have been used not to create content, but to curate content which is produced by human beings. 179 00:24:25,120 --> 00:24:33,200 The humans produce stories and videos and whatever, and the AI chooses which stories, which videos, 180 00:24:33,200 --> 00:24:42,080 would reach our ears and eyes, selecting those that will get the most attention, that will be 181 00:24:42,160 --> 00:24:49,760 the most viral. And while very primitive, these AI tools have nevertheless been sufficient 182 00:24:50,320 --> 00:24:57,840 to create this kind of curtain of illusions that increased societal polarization all over the world, 183 00:24:57,840 --> 00:25:05,680 undermined our mental health and destabilized democratic societies.
Millions of people 184 00:25:05,760 --> 00:25:16,080 have mistaken these illusions for reality. The USA has the most powerful information technology 185 00:25:16,080 --> 00:25:24,480 in the whole of history, and yet American citizens can no longer agree who won the last elections, 186 00:25:25,200 --> 00:25:30,800 or whether climate change is real, or whether vaccines prevent illnesses or not. 187 00:25:30,960 --> 00:25:39,680 The new AI tools are far, far more powerful than these social media algorithms, and they could 188 00:25:39,680 --> 00:25:49,600 cause far more damage. Now, of course, AI has enormous positive potential too. I didn't talk about it 189 00:25:49,600 --> 00:25:56,400 because the people who develop AI naturally talk about it enough. You don't need me to add to that. 190 00:25:57,280 --> 00:26:03,920 The job of historians and philosophers like myself is often to point out the dangers. 191 00:26:04,640 --> 00:26:13,600 But certainly AI can help us in countless ways, from finding new cures for cancer to discovering 192 00:26:13,600 --> 00:26:22,240 solutions to the ecological crisis that we are facing. In order to make sure that the new AI tools are 193 00:26:22,320 --> 00:26:31,440 used for good and not for ill, we first need to appreciate their true capabilities, and we need to regulate 194 00:26:31,440 --> 00:26:42,000 them very, very carefully. Since 1945, we have known that nuclear technology could destroy, physically 195 00:26:42,000 --> 00:26:50,080 destroy, human civilization, as well as benefiting us by producing cheap and plentiful energy. 196 00:26:50,800 --> 00:26:58,080 We therefore reshaped the entire international order to protect ourselves and to make sure 197 00:26:58,080 --> 00:27:06,880 that nuclear technology is used primarily for good. We now have to grapple with a new weapon 198 00:27:06,880 --> 00:27:15,840 of mass destruction that can annihilate our mental and social world.
And one big difference between 199 00:27:15,840 --> 00:27:26,960 nukes and AI: nukes cannot produce more powerful nukes, but AI can produce more powerful AI. So we need 200 00:27:26,960 --> 00:27:36,560 to act quickly before AI gets out of our control. Drug companies cannot sell people new medicines 201 00:27:36,560 --> 00:27:44,640 without first subjecting these products to rigorous safety checks. Biotech labs cannot just 202 00:27:45,600 --> 00:27:51,360 release a new virus into the public sphere in order to impress their shareholders with their 203 00:27:51,360 --> 00:28:00,320 technological wizardry. Similarly, governments must immediately ban the release into the public domain 204 00:28:00,320 --> 00:28:07,280 of any more revolutionary AI tools before they are made safe. Again, I'm not talking about stopping 205 00:28:07,360 --> 00:28:14,400 all research on AI. The first step is to stop the release into the public sphere. You can research 206 00:28:14,400 --> 00:28:20,960 viruses without releasing them to the public; you can research AI, but don't release it too quickly 207 00:28:20,960 --> 00:28:29,040 into the public domain. If we don't slow down the AI arms race, we will not have time 208 00:28:29,040 --> 00:28:36,720 even to understand what is happening, let alone to regulate effectively this incredibly powerful 209 00:28:37,280 --> 00:28:44,640 technology. Now, you might be wondering or asking: won't slowing down the public deployment of 210 00:28:44,640 --> 00:28:53,520 AI cause democracies to lag behind more ruthless authoritarian regimes? And the answer is: 211 00:28:53,520 --> 00:29:01,360 absolutely no. Exactly the opposite. Unregulated AI deployment is what will cause democracies 212 00:29:01,440 --> 00:29:09,120 to lose to dictatorships. Because if we unleash chaos, authoritarian regimes could more easily 213 00:29:09,920 --> 00:29:18,160 contain this chaos than could open societies. Democracy in essence is a conversation.
214 00:29:18,800 --> 00:29:23,520 Democracy is an open conversation. You know, a dictatorship is a dictate. There is one person 215 00:29:23,520 --> 00:29:29,200 dictating everything, no conversation. Democracy is a conversation between many people about what to do. 216 00:29:30,160 --> 00:29:40,880 And conversations rely on language. When AI hacks language, it means it could destroy our ability 217 00:29:40,880 --> 00:29:50,000 to conduct meaningful public conversations, thereby destroying democracy. If we wait for the chaos, 218 00:29:50,640 --> 00:29:55,920 it will be too late to regulate it in a democratic way. Maybe in an authoritarian, 219 00:29:55,920 --> 00:30:00,240 non-democratic way, it will still be possible to regulate. But how can you regulate something 220 00:30:00,240 --> 00:30:06,080 democratically if you can't hold a conversation about it? And if we don't regulate AI in time, 221 00:30:06,080 --> 00:30:11,520 we will not be able to have a meaningful public conversation anymore. 222 00:30:13,360 --> 00:30:19,200 So to conclude, we have just basically encountered an alien intelligence, not in outer 223 00:30:19,200 --> 00:30:26,400 space, but here on Earth. We don't know much about this alien intelligence except 224 00:30:26,400 --> 00:30:33,680 that it could destroy our civilization. So we should put a halt to the irresponsible deployment 225 00:30:33,680 --> 00:30:42,400 of this alien intelligence into our societies, and regulate AI before it regulates us. 226 00:30:43,440 --> 00:30:47,760 And the first regulation, there are many regulations we could suggest, but the first regulation that I would 227 00:30:47,840 --> 00:30:56,080 suggest, is to make it mandatory for AI to disclose that it is an AI.
If I'm having a conversation 228 00:30:56,080 --> 00:31:03,120 with someone and I cannot tell whether this is a human being or an AI, that's the end of democracy, 229 00:31:03,120 --> 00:31:12,080 because that's the end of meaningful public conversations. Now, what do you think about what you were just 230 00:31:12,160 --> 00:31:19,360 told over the last 20 or 25 minutes? Some of you, I guess, might be alarmed. Some of you might 231 00:31:19,360 --> 00:31:25,840 be angry at the corporations that develop these technologies or the governments that fail to 232 00:31:25,840 --> 00:31:33,360 regulate them. Some of you may be angry at me, thinking that I'm exaggerating the threat or that 233 00:31:33,440 --> 00:31:41,760 I'm misleading the public. But whatever you think, I bet that my words have had some emotional 234 00:31:41,760 --> 00:31:48,080 impact on you, not just intellectual impact, also emotional impact. I've just told you a story, 235 00:31:48,960 --> 00:31:55,760 and this story is likely to change your mind about certain things and may even cause you to take 236 00:31:55,840 --> 00:32:04,000 certain actions in the world. Now, who created this story that you've just heard, and that just changed 237 00:32:04,000 --> 00:32:10,240 your mind and your brain? Now, I promise you that I wrote the text of this presentation myself 238 00:32:10,240 --> 00:32:15,760 with the help of a few other human beings, even though the images have been created 239 00:32:17,040 --> 00:32:24,480 with the help of AI. I promise you that at least the words you heard are the cultural product 240 00:32:24,560 --> 00:32:31,760 of a human mind or several human minds, but can you be absolutely sure that this is the case? 241 00:32:32,560 --> 00:32:40,640 Now, a year ago, you could. A year ago, there was nothing on earth, at least not in the public domain, 242 00:32:40,640 --> 00:32:49,280 other than a human mind that could produce such a sophisticated and powerful text. But now,
But now, 243 00:32:49,360 --> 00:32:57,760 it's different. In theory, the text you just heard could have been generated by a non-human 244 00:32:57,760 --> 00:33:04,240 alien intelligence. So take a moment or more than a moment to think about it. Thank you. 245 00:33:10,240 --> 00:33:15,440 That was an extraordinary presentation, you've out, and I'm actually going to just find out 246 00:33:15,440 --> 00:33:25,920 how many of you found that scary? That is an awful lot of very clever people in here who found 247 00:33:25,920 --> 00:33:31,360 that scary. There are many, many questions to ask, so I'm going to take some from the audience 248 00:33:31,360 --> 00:33:38,800 and some from online. So, gentlemen here. I'm the field chibaditor of Frontieries in Sustainability. 249 00:33:38,800 --> 00:33:44,320 It was wonderful presentation. I love your book. I follow you dearly in my heart. 250 00:33:44,320 --> 00:33:54,400 So, one question I'll just make is that about the regulation of AI, regulating AI, 251 00:33:54,400 --> 00:34:00,560 I very much agree with the principle, but now the question becomes how, right? So, I think that 252 00:34:00,560 --> 00:34:08,080 it's very difficult to build a nuclear reactor in your basement, but definitely you can train your AI 253 00:34:08,080 --> 00:34:14,240 in your basement quite easily. So, how can we regulate that? And one kind of related question 254 00:34:14,240 --> 00:34:19,760 to that is that, well, this whole forum Frontieries in Forum is really about open science and 255 00:34:19,760 --> 00:34:25,680 open information, open data. And most of AI there out there is trained using publicly 256 00:34:25,680 --> 00:34:31,360 available information, including patents and books and scriptures, right? 
So, regulating AI, does 257 00:34:31,360 --> 00:34:36,640 that mean that we should regulate and bring that information into a confined space, which goes 258 00:34:36,640 --> 00:34:42,080 against the open science and open data initiatives that we also think are 259 00:34:42,080 --> 00:34:47,200 really important for us? Or is the black box now the algorithm itself? 260 00:34:47,200 --> 00:34:53,760 Now, there are always trade-offs, and the thing is, to understand what kind of regulations we need, 261 00:34:53,760 --> 00:35:00,560 we first need time. Now, at present, these very powerful AI tools are still not produced by 262 00:35:00,560 --> 00:35:06,320 individual hackers in their basements. You need an awful lot of computing power, you need an awful lot 263 00:35:06,400 --> 00:35:12,720 of money, so it's being led by just a few major corporations and governments. And again, 264 00:35:12,720 --> 00:35:19,360 it's going to be very, very difficult to regulate something on a global level, because it's an 265 00:35:19,360 --> 00:35:26,000 arms race. But there are things which countries have a benefit in regulating even only for themselves. 266 00:35:26,880 --> 00:35:34,160 Like, again, this example that an AI, when it is in interaction with a human, must disclose 267 00:35:34,240 --> 00:35:40,640 that it is an AI. Even if some authoritarian regime doesn't want to do it, the EU or the United 268 00:35:40,640 --> 00:35:46,160 States or other democratic countries can have this. And this is essential to protect the open 269 00:35:46,160 --> 00:35:52,480 society. Now, there are many questions around censorship online. So, you have this controversy 270 00:35:52,480 --> 00:35:58,160 about, say, Twitter or Facebook: who authorized them to, for instance, prevent the former President 271 00:35:58,480 --> 00:36:03,360 of the United States from making public statements? And this is a very complicated issue.
272 00:36:04,400 --> 00:36:10,400 But there is a very simple issue with bots. You know, human beings have freedom of expression; 273 00:36:10,400 --> 00:36:16,000 bots don't have freedom of expression. It's a human right. Humans have it, bots don't. So, 274 00:36:16,000 --> 00:36:20,400 if you deny freedom of expression to bots, I think that should be fine with everybody. 275 00:36:20,480 --> 00:36:25,520 Let's take another question. If you could just pass a microphone down here. 276 00:36:26,480 --> 00:36:30,640 I'm a principal of the A list and I'm a philosopher. I just have an interesting question, or I think 277 00:36:30,640 --> 00:36:35,280 it's an interesting question. There you go. I have a question for you with respect to your choice 278 00:36:35,280 --> 00:36:40,480 of language, moving from artificial to alien, because artificial suggests that there's still some 279 00:36:40,480 --> 00:36:45,440 kind of human control, whereas I think alien suggests foreign, but it also suggests at least 280 00:36:45,440 --> 00:36:50,480 an imagination of a life form, so I'm curious as to what work you're trying to have those words 281 00:36:50,480 --> 00:36:59,280 do for you. Yeah, it's definitely still artificial in the sense that we produce it, but it's 282 00:36:59,280 --> 00:37:07,520 increasingly producing itself. It's increasingly learning and adapting by itself. So, artificial 283 00:37:07,520 --> 00:37:14,800 is a kind of wishful thinking that it's still under our control. And it's getting out of our control. 284 00:37:14,880 --> 00:37:20,560 So, in this sense, it is becoming an alien force. Not necessarily evil. 285 00:37:20,560 --> 00:37:24,640 Again, it can also do a lot of good things, but the first thing to realize is that it's alien: 286 00:37:24,640 --> 00:37:29,360 we don't understand how it works.
One of the most shocking things about all these technologies: 287 00:37:29,360 --> 00:37:33,280 you talk to the people who lead it and you ask them questions about how it works, what can 288 00:37:33,280 --> 00:37:38,480 it do, and they say, we don't know. I mean, we know how we built it initially, 289 00:37:38,480 --> 00:37:44,560 but then it really learns by itself. Now, there is an entire discussion to be had 290 00:37:44,640 --> 00:37:52,720 about whether this is a life form or not. Now, I think that it still doesn't have any consciousness, 291 00:37:52,720 --> 00:37:57,440 and I don't think that it's impossible for it to develop consciousness, but I don't think 292 00:37:57,440 --> 00:38:03,040 it's necessary for it to develop consciousness either. That's a problematic, open 293 00:38:03,040 --> 00:38:09,040 question. But life doesn't necessarily mean consciousness. We have a lot of life forms, 294 00:38:09,120 --> 00:38:15,040 microorganisms, plants, fungi, whatever, which we think don't have consciousness. 295 00:38:15,040 --> 00:38:22,320 We still regard them as life forms. And I think AI is getting very, very close to that position. 296 00:38:22,320 --> 00:38:28,640 Now, ultimately, of course, what is life is a philosophical question. I mean, we define the boundaries, 297 00:38:28,640 --> 00:38:33,440 and you know, like, is a virus life or not? We think that an amoeba is life, but a virus 298 00:38:33,520 --> 00:38:41,040 is somewhere just on the borderline between life and not life. In the end, it's, you know, 299 00:38:41,040 --> 00:38:49,600 language. It's our choice of words. So, it is important, of course, what we call AI, 300 00:38:50,320 --> 00:38:56,560 but the most important thing is to really understand what we are facing and not to comfort ourselves 301 00:38:56,560 --> 00:39:02,160 with this kind of wishful thinking: oh, it's something we created.
It's under our control, 302 00:39:02,160 --> 00:39:06,640 if it does something wrong, we'll just pull the plug. Nobody knows how to pull the plug anymore. 303 00:39:08,320 --> 00:39:14,720 I'm going to take a question from our online audience. This is from Michael Brown in the US. 304 00:39:15,440 --> 00:39:20,240 What do you think about the possibility that artificial general intelligence already exists, 305 00:39:20,960 --> 00:39:27,040 and that all those who have access to artificial general intelligence are already influencing 306 00:39:27,200 --> 00:39:33,680 societal systems? I think it's very, very unlikely. We wouldn't be sitting here if artificial 307 00:39:33,680 --> 00:39:39,600 general intelligence actually existed. When I look at the world in its chaotic state, I mean, 308 00:39:39,600 --> 00:39:45,440 artificial general intelligence is really the end of human history. And it's such a powerful thing. 309 00:39:45,440 --> 00:39:53,200 It's not something that anybody can contain. And so, when I look at the chaotic state of the world, 310 00:39:53,280 --> 00:39:58,640 I'm quite confident, again, from my historical perspective, that nobody has it anywhere. 311 00:39:59,840 --> 00:40:03,840 How much time will it take to develop artificial general intelligence? I don't know. 312 00:40:05,040 --> 00:40:11,840 But to threaten the foundations of civilization, we don't need artificial general intelligence. 313 00:40:12,560 --> 00:40:20,400 To go back to social media, very, very primitive AI was still sufficient to create enormous 314 00:40:20,480 --> 00:40:24,960 social and political chaos. If I think about it in kind of evolutionary terms, 315 00:40:25,520 --> 00:40:31,520 AI has now just crawled out of the organic soup, like the first organisms that crawled out of the 316 00:40:31,520 --> 00:40:39,200 organic soup four billion years ago. How long will it take it to reach Tyrannosaurus rex?
How long 317 00:40:39,200 --> 00:40:46,480 will it take it to reach Homo sapiens? Not four billion years. It could be just 40 years. The thing about 318 00:40:47,440 --> 00:40:54,080 digital evolution is that it's moving on a completely different time scale than organic evolution. 319 00:40:54,080 --> 00:40:58,880 Can I thank you? It's been absolutely wonderful. It's been such a treat to have you here. 320 00:41:00,800 --> 00:41:07,360 I've no doubt you'll stay with us for a little while afterwards. But the whole audience, please join me 321 00:41:07,360 --> 00:41:09,120 in thanking Yuval Noah Harari.