This podcast is part two of two on weaponizing brain science, and continues the conversation with Dr. Giordano from where I left off in part one.

I want to talk a little bit about an article that you wrote for HDIAC back in 2016, entitled "Battlescape Brain." In that article you talk about the need for deep surveillance of neuroweapon technologies, and you also mention the challenges in identifying and tracing those developments. Could you elaborate on some of those challenges, and on potential solutions for effectively understanding global developments in neuroweapon technologies?

The idea of deep surveillance is as the name would imply: we're looking at surveillance across a number of vertical levels. Clearly, it becomes important to horizontalize surveillance as well, to have a wide field of view, to see who's doing what and what it is they're doing. But what deep surveillance entails, at least somewhat superficially by definition, is an examination both of those explicit factors that could be contributory to the weaponization of any form of bioscience and technology, in this case neuro, as well as those implicit and tacit factors that might be contributory, or that in some way might be illustrative of ongoing programs and their directions and trajectories for the possible weaponization of these tools and technologies.

Now, that level of deep surveillance, as we alluded to not only in that paper, the Battlescape Brain paper, but in a subsequent paper that looked at radical-leveling and emerging technologies, also in HDIAC Currents, raises the question of what that type of surveillance, its informational transfer, and ultimately its actionability would obtain and entail. And one of the things that we argued for very strongly is that, at the very least, what is necessary is a whole-of-government approach: a cooperative, collaborative, whole-of-government approach. But although that is necessary, we argued that it is not sufficient. Working with my colleagues, a former Navy captain, Bremseth, Lieutenant General Snow, Dr. Diane DiEuliis, and Joe DeFranco, what we argued is that the type of deep surveillance that carries through from the acquisition of information all the way to actionable entities, which could then target certain things so as to mitigate our strategic competitors' and adversaries' efforts in this area, would require more of a whole-of-nation approach,
where now we have a seamless triangulation among government entities, research entities, and commercial entities that are then able to mobilize the resources needed for the deep surveillance that's necessary for acknowledging, addressing, and assessing relative risks that may become threats: quantifying the risk-to-threat index, identifying what aspects of those quantifiable risks are mitigable and preventable, and then engaging the resources that are necessary to do that.

And we're not talking about things that are bellicose here. Very often what we're talking about is a level of ongoing discourse that would then prompt, for example, a dialectical approach with our strategic competitors: all right, we've gained some intel that you're doing A, B, and C, and A, B, and C can very easily lead to X, Y, and Z. So the nature of the international discourse across the proverbial bargaining table needs to change. And very often that then requires a new level of both integration and cooperation, so as to be able to create what we've called cooperative competition: we're cooperating in such a way that we understand the competition is there, but that competition then creates shared dependencies, interdependencies, whereby that level of cooperation is fundamental if there is going to be some level of distinct hegemony that is exercisable, such that the rising tide would then raise all boats.

Really easy to say, not really easy to do. And of course, it's going to take a whole-of-nation approach on our side to be able to effect that level of both deep surveillance and actionable programs that are then able to engage our strategic competitors, tactical competitors, and potential adversaries at the levels required for both preparedness and readiness in response.

So in that overview, you mentioned competition. In the article for HDIAC, you hinted at the possibility of a neuroweapons armory of sorts that could follow international surveillance; specifically, you noted the possibility of a spiraling reaction of testing and countering. Is that still a concern today?

Well, I think it is. I think whenever you're dealing with international science and technology that are leverageable across a series of fronts, and that would then impart some level of hegemony or power rebalancing, there's always going to be a tit for tat. It is as old as the human condition, if you will.
And I think it's probably, arguably, as old as evolution itself: I develop characteristic A, and characteristic A imparts upon me some advantage. And so, to be able to retain some type of ecological balance and/or niche presence, if not domination, you then must develop characteristic B, which mitigates or counters characteristic A. And so you get this sort of escalation effect. It's what my colleague Bob McCreight refers to as brinksmanship; he has referred to it as brain science brinksmanship. And I certainly agree; I mean, we're singing off the same sheet of music. I think there is the potential for that level of escalation. And what's important to understand is that in some cases that level of escalation is going to be explicit, in other cases it's implicit, and in many cases, in both the implicit and explicit range, it's almost unavoidable.

And let me give you an example. A stance of preparedness would imply that I need to be aware of what's going on out there, and, in my preparedness, at least have some level of readiness for response. That would mean I would then have to do something to further study those bad things out there, or those things that could turn bad out there, so as to be able to create things that might be an antidote, things that might be preventive, or certainly things that would allow me to recuperate and rebound. So I would then have to engage in that type of program. This is what we see with an arms race.

Again, what I think becomes important is the nature of the discourse, and that discourse has to appreciate certain dialectical components. In other words, your point of view is certainly going to argue that you're doing these things for reasons that sustain what you define to be the relative goods of your way of life, your values, and your ideals. And we're doing the same thing. So in appreciating those things, one of the constructs that's important to consider is: are the escalations of these types of techniques and these types of tools bad unto themselves? Are the tools, the methods, the actual technologies bad unto themselves?
Or are there thresholds whereby particular uses, in some cases what we call dual uses or direct uses, represent a threshold of potential malevolence and maleficence? The legal term here is malum in se: is the thing bad unto itself?

So let me give you an example. If I develop some microbe, whether a bacterium or a virus, and it is altered to such an extent that there is no known treatment and no known vaccine, and it's highly reproducible and highly infectious in a human population, and I develop that synthetically, well, then clearly that represents a definable harm. It is bad unto itself. If I develop a nuclear weapon, there's no way I can couch developing a nuclear weapon by saying, well, I was going to use it for fireworks at one of our national celebrations; you should see the pyrotechnics when this thing goes off. It's a bad thing unto itself.

But what becomes problematic with the brain sciences, and many of the biosciences and technologies, is that the actual tools and techniques are, in fact, dually usable. And what we mean by that is that they can be used in applications in bioscience and biomedicine; they can be used in applications that provide lifestyle and occupational enhancement and optimization. And certainly, just by flipping those things around, if we can make something better, we can also identify those substrates and targets that could then make it worse. As a consequence, the relative dual usability of the biosciences is quite high.

So to prevent that level of escalation, or, as Bob McCreight refers to it, brain science brinksmanship, I think a relative level of clarity is necessary, understanding that in some cases there will be opacity, and frank transparency might be limited, because of issues of what is classified and what is not. But I think at least there should be definable parameters that remain apace of the science. And that's one of the problems with very, very rapidly progressing science and technology, such as what you have in the brain sciences: characteristically, the policies and the guidelines tend to lag a bit behind.
And those lags can create opportunistic gaps for furthering programs of research, development, testing, and evaluation that could be diverted to dual use, or in fact could be used explicitly for national security, intelligence, and defense operations. So one of the things that we've been calling for, working with my colleagues Diane DiEuliis and Dr. Dan Gerstein of the RAND Corporation, has been a revisitation of the Biological and Toxin Weapons Convention and the Chemical Weapons Convention, to more appropriately assess, define, and categorize what things are weapons, what things are weaponizable, what tools and techniques are available to create such weaponizable trajectories, and how the nature of the discourse, and perhaps of surveillance, needs to change so as to allow for more effective oversight and governance.

You've hinted at this a little bit so far with the mention of dual use and of opacity, and you note this in your article as well: strict regulation of neuroweapons might not quite be feasible, but developing a set of ethical frameworks could be useful for guiding future policies in biotechnology and its applications. In your opinion, what should those ethical frameworks look like?

Well, you know, again, I have to give a deep nod of homage to my colleagues with whom I worked on the European Union Human Brain Project, the SP12 subprogram that looked at philosophy and ethics, and particularly the group that I was involved with and had the pleasure to be engaged with for a couple of years, which was their task force on dual-use brain science. And again, a deep, deep nod to my colleague Nikolas Rose of King's College and to my colleague Kathinka Evers of Uppsala University, who were leading figures in that task force with whom I engaged.

And I think what becomes important to understand is that when you're establishing ethical criteria, trying to attain some level of homogeneity and uniformity of adherence to those criteria is indeed a balancing act. I also had the opportunity to work with some very esteemed colleagues at the Organisation for Economic Co-operation and Development, and we were able to literally sit at the discourse table with a number of our international colleagues who are, for all intents and purposes, at least commercial and economic competitors in the brain science and neurotechnology field.
And what became very, very clear is that different cultures have different histories; different cultures have literally different geographies and different ecologies; different cultures have different needs and values. And those needs and values very often are predicated upon the physical conditions that were part of their history or are part of their current ecology, as well as upon their philosophies. And those philosophies are longstanding and can be filtered through relatively new lenses. And different philosophies will very often give rise to distinct ethics.

Ethics is always about the effort. Ethics is always about the enterprise. Ethics is always about the endeavor. And if that endeavor is maintaining values, actions, ideals, and mores that are intrinsic to a culture historically and that are being upheld politically, then you're going to get varying ethics, in ways that sometimes are grossly variant and in other ways that are more subtle and implicit.

And the more global we become in these enterprises, with regard to our international and multinational exchange, and the more advanced the brain sciences as well as other sciences become, the more apparent their utility becomes: not only in biomedicine and/or in public life and occupational use, where they might be leveraged economically in terms of soft weaponology and non-kinetic rebalancing, but also in literally being taken up into national security, intelligence, and defense operations, or, perhaps more specifically defined, warfare, intelligence, and national security operations: WINS. Well, that's what they're looking to do. They're looking to evoke some win with regard to some form of hegemony, whether it's economic, whether it's geopolitical, and/or whether economics and geopolitics also play into how these things are used in strategic, tactical, intelligence, and military operations, whether by enhancing and enabling our own or by in some way influencing or debilitating others'.

So the issue there becomes one of the ethical discourse. How do you go to the table and begin these discourses when, a priori, you understand that cultures are going to differ, needs are going to differ, and there are also certain nationalistic interests that need to be appreciated? That makes it difficult. That makes difficult the types of ethics that I think need to be leveraged, and perhaps need to be developed.
This is not to say that the ethical systems that are extant are ineffective; they certainly are effective. But I think the nuances of how those systems come together on a common palette might need some level of revisitation and perhaps revision, so as to create a more cosmopolitan viewpoint that also retains more communitarian and more local senses of application.

Where it does get tricky is where you begin to deal with the ethics of utilizing science and technology in national security, intelligence, and defense, in warfare, intelligence, and national security initiatives, operations, and agendas. Because the nature of those operations, the nature of those agendas, is toward some level of hegemony and/or superiority and advantage. And many times that information exists on the classified side, where transparency is less than full. So engaging those discourses, I think, will take some level of sophistication: to be able to understand not only where the research is at present, but what contemporary research obtains and entails, and perhaps implies, for developments within the next five to ten years, ten to twenty years, and then beyond.

We've described those, as we mentioned last time, as particular vistas of scientific and technological, as well as social, development. The most proximate is the vista or zone of probability: those things that have a high probability of coming to fruition or that are already in the tech-readiness pathway. Those things from, let's say, six to fifteen years out represent the zone of possibility, the vista of possibility: by realizing certain probabilities now, in the next five years, I then open up a new vista of what might be possible as those things come to fruition or realization. And then, looking further into the future, the sixteen-to-thirty-year vista is really the zone of potentiality.

And it becomes important to engage those discourses and ask: do we have the ethical toolkit at present to be able to navigate across those vistas? And/or might we foresee new ethical situations, ethico-legal situations, international and political situations, predicated upon particular trajectories of advancement of these technologies across those vistas, that might necessitate revisitation, potential revision, and perhaps even the development of certain ethical precepts, principles, and engagements anew?

So to wrap things up, thinking to the future: what keeps you up at night in this space?

Well, you know, I think there are a few things.
I think when you begin to open the door to technologies that are easily acquired and methods that are relatively easy to perform, that require nominal or minimal training, and to which access is very viable, clearly what you then do is make something more accessible and more available to a broader range of individuals: not only nation-states, but also non-state actors, rogues, proxies, et cetera.

There has been some concern, for example, about the do-it-yourself community, the biohacker community. And I want to say explicitly that no one is viewing that community as inherently problematic, and I certainly am not. I think the idea of public science, moving forward so that more people can become literate in STEM, is certainly a viable option to pursue. The problem with the do-it-yourself community is that there are certain gaps in those communities that render the community and its participation in activities vulnerable: vulnerable to infiltration, vulnerable to influence, vulnerable to corruption. And that corruption can occur at a variety of levels. It can come from a nation-state that infiltrates either its own population of do-it-yourselfers or do-it-yourselfers elsewhere around the globe who are already operating in some other sovereign state. And/or what it could do is enable proxies to fund certain things, whereby either the information or the products then go back to the proxy source that was the ultimate underlying funder.

So the do-it-yourself community represents a potential point of vulnerability for infiltration and corruption, if only in that the nature of the community itself can sometimes drift from its centerline. And what I mean by that is that although 99 percent of that community practices good, rigorous science and utilizes a variety of open institutional review boards and self-policing, the nature of the community itself, being in the public space and external to certain institutions and their oversight, leaves those chinks in its armor open.
And with those openings there is vulnerability, which is why ongoing programs, for example from our Federal Bureau of Investigation and from Interpol, have looked very strongly to work cooperatively with the do-it-yourself or biohacker community, so as to enable it to be self-protective and to provide a level of protection that would, in some cases, shore up some of those gaps that would otherwise represent vulnerabilities that could be penetrated by various nefarious nation-states and darker proxy actors. That's point one.

The other thing that does keep me up at night is the ubiquity of data and information, on three levels. Number one, what I'll call real information, real data: the growing capability, the developments and advancements, of not only big-data resources and tools, but also the yoking of big data in the sciences and technology to additional sciences and technologies, such as decision technologies and artificial intelligence, on a variety of scales. That makes data and information very viable and valuable for a host of different uses, and where there are uses there is the potential for inadvertent misuse, and there's also the potential for intentional abuse and misuse. That's number one.

Number two, that viability of information, I think, is also becoming ubiquitous in the public space, and the more information we're receiving, the more potential misinformation we're receiving. The issue here is that we are becoming ever more capable in various areas of science and technology, brain science and technology of course notwithstanding, and in some cases, I would argue, even more so there, because there are still unknowns about brain science. There are still unknowns, unanswered questions, and problems about what the brain is and how the brain does what it does. Misinformation can be a problematic thing. It can lead to public attitudes, values, anticipations, and anxieties that may be in some cases exaggerated and in other cases underappreciative. And I think that level of knowledge is going to be important as the brain sciences and other sciences advance,
so as to make the public aware, not in a way that is anxiously apprehensive, but aware in a way that is salient and sentinel of what is going on in the sciences, and of the potential benefits, as well as the burdens and risks, that these may pose. And in so doing, understanding that the public is a constituency for politics, and it's that interaction between the polis and our elected officials that will help to create the type of engine in biosecurity that will be far better equipped, informed, and, I think, capable in terms of our biosecurity preparedness, readiness, and response, engaging that whole-of-government, whole-of-nation approach that we advocate.

Dr. Giordano, that's it from my end. Thanks again for your time today. As always, it's a pleasure having you. Thank you. Thank you so much for your ongoing interest in my work.

Thank you for joining the HDIAC podcast. To learn more about our other services, please reach out directly or visit us online at www.hdiac.org.