Center for AI Policy Podcast

#15: Bill Drexel on AI, China, and National Security

China’s surveillance implementation and AI capabilities, open source AI, AI’s potential role in enhancing bioweapons, U.S.-China relations, and more

Bill Drexel, Fellow at the Center for a New American Security’s Technology and National Security Program, joined the podcast to discuss China’s surveillance implementation and AI capabilities, open source AI, AI’s potential role in enhancing bioweapons, U.S.-China relations on AI, U.S. AI policy actions, global AI competition, and more.

Available on YouTube, Apple Podcasts, Spotify, or any other podcast platform.

Our music is by Micah Rubin (Producer) and John Lisi (Composer).

Relevant Links

Timestamps

0:01:29 - China’s surveillance evolution in Kashgar

0:13:01 - CCP’s motivations for surveillance

0:18:57 - Assessment of China’s AI capabilities

0:24:47 - AI’s military and censorship applications

0:33:59 - Open source AI and national security

0:40:08 - AI and bioweapons development

0:47:46 - U.S. policy actions and administration changes

0:51:33 - U.S.-China dialogues on AI

0:56:24 - Future AI risks and possible cooperation

1:00:43 - Global AI competition

Transcript

This transcript was generated safely by AI with human oversight. It may contain errors.

(Cold Open) Bill (00:00:00):

China does want to surpass the United States in AI by 2030 and biotech by 2035.

Jakub (00:00:15):

Welcome to the Center for AI Policy Podcast where we zoom into the strategic landscape of AI and unpack its implications for US policy. I'm your host, Jakub Kraus, and today's guest is Bill Drexel. Bill is a fellow for the Technology and National Security Program at the Center for a New American Security, or CNAS. His work focuses on US-China competition, artificial intelligence, and technology as an element of American grand strategy. He previously worked on humanitarian innovation at the UN and on Indo-Pacific Affairs at the American Enterprise Institute. Our conversation covers topics like China's surveillance implementation and AI capabilities, open source AI, AI's potential role in enhancing bio weapons, US-China relations on AI, US AI policy actions, and global AI competition. Note that this conversation was recorded before the release of DeepSeek V3, which I think makes it all the more relevant today. I hope you enjoy.

0:01:29 - China’s surveillance evolution in Kashgar

Jakub (00:01:29):

You've researched how China is using state surveillance, and in 2020 you published this op-ed in the Washington Post. You wrote there that "Kashgar has gone from being the cradle of Uyghur culture and the ancient pearl of the Silk Road to a neo-totalitarian ethnic theme park, urban gulag and crucible of state conformity." How has China's use of surveillance evolved in the four years since then? And then I also want to know about where you see it going in the four years to come.

Bill (00:02:13):

Sure. That's a great question, and I love this kind of throwback to that op-ed, because I actually only got into artificial intelligence as a policy area after going to China and especially after going to Kashgar. I was there in the fall of 2018, which was around the time when a lot of the reportage about the atrocities being committed against the Uyghurs was starting to really hit the media in a big way, but it was before Xinjiang and places like Kashgar were more deliberately sanitized for foreign observers. So I went there and I had a really chilling experience of this sort of totalitarian smart city that was very effective in terms of how the local people interpreted it, how they responded to it. The degree to which the city was kind of technically effective - which is to say, the degree to which the systems worked well and in an integrated, efficient fashion - may be another question, but it was a really chilling experience that kind of changed the trajectory of my life.

(00:03:36):

I think one thing to start with is that Kashgar, and Xinjiang more broadly, was a very specific case for China, where you had strong inter-ethnic issues at play - similar to Tibet - which really meant that the technologies and the artificial intelligence being employed in Kashgar were not, at that time, very indicative of what was happening in the rest of mainland China. It was and is still a unique case and an extreme case.

(00:04:23):

But the fear, and I think a really merited fear, is that what was being experimented on in Kashgar could be a model for the rest of China and for the broader world potentially. So the CCP was much more aggressive in trying to link together these technologies into a kind of all-encompassing surveillance system that was multimodal, taking in a lot of information about individuals' lives in a lot of different ways, including things like electricity usage and analysis of who they were associating with and their biometrics and all these sorts of things.

Jakub (00:05:04):

And how did they gather that kind of info, just so people get a sense of what's happening concretely?

Bill (00:05:11):

Sure. So in the early 2010s, late 2000s, Kashgar was basically razed entirely by the Communist Party and rebuilt from the ground up as a surveillance city. So it's actually an interesting urban planning example - interesting and harrowing, to be clear - of what it looks like when you really arrange a whole city for the purpose of surveillance. It used to be the case that these homes radiated outwards from mosques in the city that kind of organized community life, and they are now systematically radiating out from surveillance centers. So the whole city has been very hardwired to collect data across a whole bunch of different sorts of inputs. The most obvious is cameras, facial recognition cameras. They had some audio recording equipment also widespread, and widespread surveillance of phones. At that time, they were actually requiring that Uyghurs put surveillance software on their phones as well. But in addition to that, they had sensors installed on different houses in different public areas, and they also had a kind of forced DNA drive where they took samples from the whole kind of Uyghur adult population for a genetic database.

Jakub (00:06:59):

How did they do that? They had people line up at it and they took some saliva or what's the...

Bill (00:07:06):

I'm not sure if it was saliva or blood or exactly how it happened, but there are a lot of highly coercive rituals, and... I guess the thing to know more broadly is that at this time, there were these large reeducation camps speckled throughout Xinjiang - the largest mass internment of a religious minority since World War II. And by all accounts, those reeducation camps were really grim places of physical abuse, sexual abuse, a lot of coercive measures. The upshot was they were a kind of prison, and anyone could be sent there for any reason. So with that in the background, if the state is asking you to do anything, you're likely to do it.

(00:08:11):

So there is this kind of strong enforcement mechanism that was a forcing function for a lot of this. But what the use of AI was able to do in Kashgar itself was to make it sort of an open air prison, and the surveillance was so granular on what the state could track for people, for communities and so on, that it was able to kind of induce a civic culture that the Chinese Communist party wanted to induce. Right, historically, Kashgar was this center of Uyghur culture. So for the state to capture Uyghur culture and bend it towards its own ends, it had to capture the city and induce a new set of practices, a new set of beliefs, a new set of networks and everything from the ground up, which is quite literally what it did.

(00:09:12):

But it was an outlier, as I say. That said, China obviously outside of its sort of more restive regions like Tibet and Xinjiang has a lot of surveillance, a tremendous amount of surveillance, a growing amount of surveillance, and you'll have seen these sorts of reportage on the social credit system and things of this nature that suggest its ambitions are pretty all encompassing for what the state would like to achieve through AI-enhanced surveillance on its population.

(00:09:51):

There is an open debate about how effective those systems are, and there's definitely a case to be made that a lot of the early reportage on the social credit system as it was being prototyped in different places was exaggerated. However, at the same time, it seems pretty clear that the Communist Party's ultimate ambition is an AI-turbocharged surveillance state, and they're heading in that direction.

(00:10:27):

And Covid was a huge accelerator towards that end. Under the auspices of public health, the CCP was able to roll out much more invasive forms of technological surveillance on top of what was already quite a high baseline, and it has not rolled back much of that. So we're seeing this incremental creep further towards things like what I saw in Kashgar, but for the ethnic Chinese, for the kind of highly densely populated areas of China, that progress is more incremental.

(00:11:09):

But to your question of what we'll see in the next four years: I mean, Xi Jinping continues to take the society down a more authoritarian path, and I think certainly a part of that will be continuing to strengthen surveillance, censorship, and control through technological means.

Jakub (00:11:34):

With the Covid measures, have any of those been scaled back or did that ratchet and then persist even after they resolved some of the pandemic problems?

Bill (00:11:48):

It varies a little bit place to place for sure, but I think in aggregate, broadly speaking, we can say that it ratcheted up and they partially ratcheted it down, but not totally. So the net effect is still a significant acceleration from what it was before. We'll see, again, how this progresses, but all indications are this sort of incremental kind of boiling the frog approach to rolling out these technologies. And part of it too is just practical - working out the kinks in these big massive systems that are trying to be more and more all encompassing and interoperable is just difficult. So there's a pace that they can go at just kind of technically that's a limiting factor, but obviously Covid gave them the excuse in terms of political will to go faster than they otherwise might have.

0:13:01 - CCP’s motivations for surveillance

Jakub (00:13:01):

And this might be a naive question, but why does China want to do this? Or perhaps it's just Xi Jinping?

Bill (00:13:12):

No, it's a good question. There are a few ways to answer it. I think that a lot of Americans have difficulty really internalizing the degree to which the Chinese Communist Party thinks differently about governance, statecraft, and human dignity in the most general sense. So a way that I try to explain this is for a lot of the elites of the Communist Party of China, they almost see themselves as something like a different species - their interests are entirely distinct from those of their population. The guiding principle of the Communist Party is the retention of its power and its rule over the people. And so they see that as their overarching goal. And they also see that the reason why China fell behind other powers was technology. So in their mind, if your ultimate goal is to solidify your rule over this massive population, and you know that technology has often been the key to state power, it's kind of natural to look for authoritarian solutions to solidify your power further.

(00:14:46):

But there are all sorts of other things that the CCP... The extremity of the CCP's worldview on ruling its people is just hard to stomach. And other examples that are useful to think about are there's a religious movement in China called Falun Gong, it's roughly based on Tai Chi, a relatively innocuous movement, but it became very popular and the CCP became nervous about its popularity and as a result, they came after them in a big way. And there's a lot of evidence to suggest that they are kind of at an industrial scale harvesting the organs of this religious minority in their country for profit. That seems like a kind of crazy proposition to a lot of people who haven't spent time in China. But when you take these sorts of things as your baseline, whipping out AI to rule your population with greater control just makes a lot more sense.

Jakub (00:16:02):

And I want to get to AI, but as the last question on this, what would someone who's more skeptical of these claims about China's ambitions be potentially saying, and how persuasive do you find some counter-arguments?

Bill (00:16:18):

Well, I think that they would... Prior to Xi Jinping, there was a really strong argument that China is not so much authoritarian as technocratic, and there are advantages to that system, and the CCP selects the best people through this sort of very intense meritocracy, and there's something to learn from that, and so on. And their ability to deploy technology commercially and for their state is just so dynamic and fast. You could maybe make the argument that what appears techno-authoritarian is actually just the way that technology and society is going. And maybe there's some truth to that. Like, AI creates the ability to have facial recognition cameras, which seem predisposed towards more surveillance of people. So you can make that argument. I think it became a lot harder to make that argument after Xi Jinping came to power, threw away term limits, and started restricting his society politically and socially so severely, and certainly after these technologies were directly put into the service of crimes against humanity in Xinjiang.

(00:17:57):

In light of those things, I think it does become very hard to make the case. The case that you'll hear now a little bit more is that, well, these systems are hard and it's a lot more bark than bite. So the state claims to be able to do this or wanting to be able to do this, but it's an open question whether they'll be able to, and most of the authoritarian dimensions of Chinese rule are still accomplished through human surveillance, human intelligence reporting on one another and so on. And there's a lot of truth to that too, but the systems do improve and the ambitions are there pretty explicitly in a lot of the documents and speeches of the Party. So I think while it's true that yes, the systems may not be where they want them to be, they still have a desire of where they want them to go.

0:18:57 - Assessment of China’s AI capabilities

Jakub (00:18:57):

And on systems improving. I've heard that China is pretty strong in AI on facial recognition. Then there's this new category of AI centered around OpenAI and ChatGPT: more broadly capable chatbots, multimodal chatbots that can take images and video and audio and talk with you. Now, as far as I've seen, there's not a ton of analysis of how good China's versions of these are. So the ChinaTalk Substack did one where they tested out the different language models. There's a benchmark called SuperCLUE that tracks how good the models are on a lot of different domains. But I haven't seen great English-centered analysis of the Chinese language models, partly because many people perhaps don't speak the language to talk with the Chinese chatbots in Mandarin. So it does seem to me that we don't have a super clear picture of where China's AI progress is. Even things like how much computing power they have. We know maybe that there's a large dam that has a lot of gigawatts of electricity just available if they wanted to build a large computing center there. But how accurate do you think that assessment is? Do we have a clear picture of where China is on their development of AI, frontier AI, with these general purpose models?

Bill (00:20:38):

Yeah, I mean this is the million-dollar question for a lot of people. I think there's a handful of individuals who are really trying to track this very closely, and it's hard to do. A few things I'd say about this. One is that we have a lot more transparency about our own frontier models when they are released. But I mean, you hear a lot of speculation about our own frontier models in production. And similarly with China, in terms of what's in production right now, it's very hard to tell - even harder than here, for a number of reasons. So I think if we're talking about that, at least, it's pretty hard to know.

(00:21:33):

And adding to that uncertainty is the fact that China has historically been excellent at stealing American intellectual property. So do they have access to our most advanced models already? Maybe. Are they leveraging it to build their next generation? Also maybe. It'd be hard to really get details about that.

(00:22:07):

In terms of their existing models, I agree - obviously we know less about them than we know about ours. We know broad... I mean, I think that the benchmarks, though, are pretty indicative. It also speaks volumes that the PLA is apparently using Llama for one of its tools. I think there are pretty reasonable indicators that they are behind us on large language models. It's just a question of how long that will persist and in what ways. And you talk about the models, but you can equally talk about the huge amount of uncertainty in terms of the chips that are powering the models. We don't know how many are making it through export controls. And you get these snippets from reportage about how the cost of renting these GPUs in China is actually lower than it is here, and things of this nature that don't inspire confidence. So I'd say there's a lot of uncertainty.

Jakub (00:23:22):

There was the Huawei smartphone thing last summer too where they have different generations of how advanced the chips are, and Huawei turned out to be making a chip that was further ahead than we thought.

Bill (00:23:34):

Yes, but seemingly difficult for them to scale or produce at a reasonable cost. Which is a classic pattern - there's a strong tradition in Chinese technology of reaching benchmarks, but in such a way that is a little bit misleading in terms of how robust the technology really is. But it's something to watch for sure, and there is just a lot of uncertainty. I would say too, something that worries me a lot when we move beyond kind of simple frontier progress is: how are they progressing on the application of frontier models? And also, how are they progressing on other highly consequential areas of AI that have gotten a lot less media attention lately? On both of those questions, I actually think there's even less tracking and appreciation of what's happening.

0:24:47 - AI’s military and censorship applications

Jakub (00:24:47):

What highly consequential areas do you have in mind?

Bill (00:24:51):

So I'm thinking if we talk more about narrow AI systems, I talk to a lot of policymakers who are kind of under the impression that our export controls on advanced chips used to build frontier models also affect these other narrow models for national security that don't require that level of horsepower. So for example, I'll hear someone say, yeah, so we're primed to surge ahead in the AI-powered hypersonic modeling. And I'm like, no, the export controls don't help with that. They don't need that computational power. Or models for material science that's really critical for weapons production - they're not so compute intensive. Or even something as simple as drone swarming simulations, which, drone swarms, a lot of people think, may be the future of warfare. Again, you don't need anything near the computational power that you need for building frontier models. And so I worry, I barely see anything in the press about how their drone swarming modeling is going, but in Taiwan scenarios that could actually come into play.

(00:26:27):

And similarly, to return to frontier models, if I'm the CCP, already with what we have now, I could really make censorship a lot more efficient and cost effective. Historically, the CCP has employed a pretty large number of censors to go through their internet and try to identify and censor all these posts and tilt conversations in particular ways. If they have the data on what that army of people has been doing, you could fine-tune an LLM to do that at scale and at a fraction of the cost. So there are applications too that I think we're not really thinking about, but if they learn how to do that and they're able to push that out to Iran and Russia and other autocratic or autocratic-leaning nations, that's a potentially really powerful, cost-effective tool that could change the game in terms of internet censorship globally. I can't imagine that there aren't Chinese firms working on this, but where have we read anything about it?

Jakub (00:27:52):

Yeah, can you flesh it out a little bit on what this vision is? This said you haven't heard any Chinese firms working on it, but what would it mean to work on this project?

Bill (00:28:06):

So I'm imagining that you could, for example, this is taking it in a very narrow form, but you can extrapolate it to a broader approach. But the Tiananmen Square massacre is a super sensitive topic in China that they in general just try to scrub from any discussion. And China's citizens have found a variety of ways of talking about the Tiananmen Square massacre without explicitly talking about the Tiananmen Square massacre. You know, speaking of it indirectly, or memes or pictures that are kind of evocative but maybe won't be picked up by the censors. And then the censors get wise to these things and it's kind of a cat and mouse game. But you need a lot of censors to watch that and to try to catch up with that and to monitor all of these chat rooms for topics that are not allowed.

(00:29:23):

Imagine if you could take the data logs of these censors and just kind of take a Llama or whatever, some sort of LLM and say, okay, scan text for these indicators or these indirect indicators - and maybe it's possible to generalize when they're talking about it indirectly, not for any specific code word or anything, but in a more general sense - and flag them or delete them or suppress them. And with multimodal models, you could do that with these sort of suggestive images and pictures as well. And instead of having hundreds of censors, trawling the internet for these sorts of things, you could have one or two or a handful of LLMs automatically doing it, reading through things faster than humans can, deleting things faster than humans can, doing it at scale without having to pay so many people to do it. And potentially, eventually maybe being better than humans at picking up on the signals that this is kind of a discreet indirect reference to something politically sensitive before the censors are able to do that as well.

Jakub (00:30:54):

Yeah, so it seems like you could have much more powerful forms of online censorship where... Right now you can feed a book into one of the large language models and ask it to tell you what's happening on a certain page and what it means. And you could certainly scan a paragraph on an internet forum and have it tell you, does this have anything to do with Tiananmen Square? Is that sort of what you're imagining - that you could use this to make censorship super effective? Perhaps not super accurate, maybe there are going to be some false positives, but certainly you can make it so that it's just very hard to have any discussion on any topic.

Bill (00:31:40):

I mean they've already made it hard to have it. I think you can make it harder, and importantly, you can make it more cost effective. So the state has to spend less money, resources, attention on surveilling internet stuff in this way and can focus it more on other ways to make sure that the population is falling in line. I think it's concerning. I think there's a lot of potential for the propaganda department in LLMs that I'm sure they are exploring.

Jakub (00:32:20):

And earlier you were talking about how some of the US advancements can then get over to China without authorized access. So there was this famous case with Linwei Ding, I believe was his name, at Google, who worked on AI there and traveled over to China multiple times. I think he even launched an AI startup or AI company in China. And he brought some of Google's algorithmic secrets. What he did was copy them into the Apple Notes app on a tablet or a computer, and that was enough to bypass Google's security measures, which are quite strong - Google has pretty good security for their software. They've been working on proprietary tech for a while.

(00:33:12):

And so this was a publicly reported case, and I think he was prosecuted by the Justice Department, but there could certainly be cases that we don't know about. And I've heard people talking about how in the Bay Area where AI is, there's an AI startup every block of the street, you can just go to a party that one of these startups is hosting and people will be talking about, oh yeah, we're going to use new flow-based techniques, or we're looking at a mixture of experts model. Maybe they're not going to tell you all the tiny intricate details, but you can get a sense, okay, so OpenAI is going in this direction or they're investing resources in this, and that could be quite useful if you are trying to catch up.

0:33:59 - Open source AI and national security

Jakub (00:33:59):

So there's the algorithmic secrets part, but I want to talk about the model weights - the parameters of a model, which you can download, run on your own computer, and fine-tune to be good for a particular application. Right now, some of these are being released openly, which can benefit US national security if you, say, crowdsource the discovery of security vulnerabilities, and you can spur more innovation on AI in America. But then also, as you mentioned, China can use it even for military applications - there was some recent reporting on that. So how do you think national security leaders should think about openness in AI development?
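
As a rough illustration of what open model weights mean in practice - this is a minimal sketch that assumes the Hugging Face transformers library, and the model ID is only an example of an openly released checkpoint - downloading and running such a model locally looks roughly like this:

```python
# Minimal sketch of running an openly released model locally.
# Assumes the Hugging Face `transformers` library; the model ID below is
# illustrative - any open-weights checkpoint works the same way.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # example open-weights model

# Download the tokenizer and weights to the local machine.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Inference runs entirely locally: no API keys, no usage monitoring by the
# model's developer, and the same weights can later be fine-tuned offline.
prompt = "Explain what open model weights are in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```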

Bill (00:34:43):

Yeah, it's a hard question and I think a lot of people are thinking about this right now and there's not really consensus at all. My own sense is that Llama's not that good yet. So if the PLA is integrating it into important systems, that might be a good thing for us actually. I think it's really a case where at least so far, I think there are advantages to open source that are not outweighed by the risks of China using the models that are out there. I could see worlds in which that changes. But I basically think the payoff for what China is getting for open source so far is not worth eroding the payoff we get from open source to accelerate our tech ecosystem. But you can't really quantify this.

Jakub (00:36:02):

Okay. And do you think there's anything we can do regarding algorithmic secrets? Even this is part of the openness debate too, because there are many dimensions of openness. One thing you could be open about is exactly the steps you followed to build your model. You could even publish a logbook, as Meta did for a model that was sort of replicating GPT-3 a while back, where you talk about the blow-by-blow of the training, which includes some of the tacit knowledge that is normally hard to find on the internet. Is that just a lost cause? Because it's quite hard to actually prevent someone from talking about an algorithmic secret even in broad strokes. Maybe there would be fewer papers being published in the future with all the details, but it seems like this is going to be kind of open diffusion. Do you think there's any way that might change in the future?

Bill (00:37:10):

Well, two things. I think one thing's already changing and one I think could change in the future. The thing that's already changing is to be at the real frontier of frontier models. It is so capital intensive that I think more and more companies will start to see the necessity of being a little more closed lipped about how they're getting ahead if they're really at the forefront. I think if you zoomed out to the last 15 years, you'd already see a trajectory towards less openness about how these things are built, even if there are true blue open sourcers who are still very open.

(00:38:06):

A second thing is if it is the case that a frontier model really begins to have a killer app for national security, which is hard to predict, but if it does become the case that, to take one example, if the future is such that it ends up being the case that frontier models can very easily and seamlessly coach a non-expert into creating a horrific bio weapon and we don't find, for whatever reason, we don't find ways to constrain that capability that are reliable, that can't be fine tuned out relatively easily, then I think things will begin to change. I think the system will be responsive to the real demonstrated risks that begin to materialize. Does that answer the question?

Jakub (00:39:15):

I think so. Your perspective is roughly that we need to be watching carefully the capabilities that emerge in the coming years.

Bill (00:39:27):

Yeah, and I think even companies, if a real killer app comes out that's dangerous or a threat to national security, I think companies, certainly the companies at the forefront that are not open source will report that to the government and the government will likely take action. But even if a company stumbled upon this, they're likely to be responsive to it. So I think we're still in a monitor paradigm.

0:40:08 - AI and bioweapons development

Jakub (00:40:08):

So on this point of the emerging capabilities, you were looking into this pretty carefully with a report called "AI and the Evolution of Biological National Security Risks," and there you found four key areas of concern with AI and bio capabilities. One is that general-purpose models can provide biology-related knowledge and potentially instructions. Two is that there can be automation that reduces how much hands-on work you need to do in the lab - I think one example of this is cloud labs, but I'm not too familiar with the bio area. Another is progress on understanding how genetics affect your susceptibility to a disease. And relatedly, there can be advancements in how precisely you can engineer a virus - and specifically, there's this concern that future AI tools could let you target specific groups of people. So how many steps away are we from China being able to target a specific group? I mean, let's hope that never happens. But are there limits on the precision of a bioweapon? How targeted can it get?

Bill (00:41:23):

Yeah, so the kind of ethnically targeted bioweapon has understandably garnered a lot of attention, a lot of fear, as kind of a worst case scenario. In principle, some people think that AI really excels at solving ultra-complex, multi-variable problems, and genomics and pathogen research are exactly that kind of super complex, multi-variable problem. So AI will be able to help us do a lot more specifically. From the experts I've talked to, so far, ethnically targeted weapons would be extremely difficult to do, if possible at all. But that's not to say... I mean, I don't know. I guess the only thing we could say from that is that it's speculative enough that no one sees a very clear path to it at the moment. Which is great.

(00:42:40):

However, that doesn't mean that next-generation bioweapons are a solved issue. It is true that in most cases the major reason why bioweapons aren't used is that states haven't figured out how to use them in a way that doesn't blow back on their own populations or their own militaries. Even if ethnically targeted bioweapons may be very, very difficult, if not impossible, there are other ways to target bioweapons that AI could potentially help with.

(00:43:32):

How's that?

(00:43:34):

Well, different pathogens and different biological agents generally are sensitive to different environmental conditions. Some don't like UV light, some like certain levels of humidity, some like certain levels of pollution, and so on. So with these more geographic features that you could target, it's more conceivable that you could use AI to help adapt biological agents to specific geographies. You could also imagine, for example, that there are particular diseases that are native to particular places, not so much because of the environmental conditions of those places, but because the carrier species are specific to those places. So you could try to work on that.

(00:44:36):

You could imagine greater customizability in a variety of ways in terms of the effects. Everyone's mind immediately goes to lethality - making things more or less lethal - but that may actually be misguided, because there are things we already have that are very lethal. But you could change incubation periods or transmissibility. There are things you can do that potentially change the risk profile. It is, however, quite an open question how specific we can get and in what ways.

(00:45:27):

It's true that Chinese military literature has had an interest in this issue. And it's also true that the State Department has publicly expressed misgivings about whether or not China actually shut down its biological weapons program. So there's a real fear here, and I think that the top level line is we don't use biological weapons. We've committed to not using them. And states broadly speaking largely haven't used them in a very long time to strategic effect. But if AI allows us to have new targetable capabilities that fix or mitigate the blowback issue, that could really be a dramatic game changer.

(00:46:30):

And it's also true that AI can be used to enhance early detection of pathogens and outbreaks, but AI could also be used to avoid detection and make attribution more difficult. It can be used to achieve certain tactical effects that might make the use of a biological weapon more attractive. And it's a question of whether the attribution technology grows faster than the anti-attribution technology, so to speak. These are the sorts of questions that could be on the horizon, but we'll see.

Jakub (00:47:19):

Wow, that sounds pretty scary.

Bill (00:47:22):

Yeah. China does want to surpass the United States in AI by 2030 and biotech by 2035. So if they're successful, this is a serious risk vector we should be considering. Or even if they're close to it, even if they're close to success.

0:47:46 - U.S. policy actions and administration changes

Jakub (00:47:46):

And then let's get into what policy actions a Trump administration might take. So the Biden administration put out a national security memorandum on this. This was on October 24th, and they also did the executive order about a year earlier in 2023. But the 2024 GOP platform stated that the plan is to repeal the executive order. And I haven't seen much on this national security memorandum because it's new, but I assume that would also be changed under a new administration. So what do you see in this existing national security memorandum or the executive order that might be useful and also realistic for the Trump administration to use potentially in a revised form to keep America's lead over China on AI?

Bill (00:48:44):

Great question. I think it's important to note that Trump administration #1 did also have executive orders on AI. So they were also trying to be forward leaning on the issue. They're interested in it, very interested in it. They're likely to continue to be interested in it. They're not scrapping these things in order to ignore them. It seems like they just want to do it differently.

(00:49:14):

I think that what we're likely to see is maybe less of a focus on AI safety as such and more on AI security, but there's a lot of overlap there. So it may be a redirection, but likely not a negation. We're also likely to see a focus on really trying to turbocharge AI for defense - a major part of the national security memorandum is trying to make ways for the federal government, the defense industry, the defense apparatus, the national security community, to adopt AI more quickly and more effectively. And I'm sure that the Trump administration will have a similar goal in trying to speed these things up.

(00:50:18):

Beyond that, I think it's hard to say. Certainly beating China is a really central concern across the board for the incoming Trump administration. And AI is the pinnacle of tech competition with China at the moment. So I can't imagine a world in which they won't be focusing on accelerating AI progress, whether that's frontier models or other forms of AI. But we'll see. I think that the national security memorandum had quite a lot about trying to build out data center infrastructure for large language models, frontier models. I'd be really surprised if the Trump administration doesn't also try to accelerate the infrastructure build out for that technology. So I think we're likely to see a solid number of overlaps, even if the framing and some of the ultimate goals, the emphases of the ultimate goals shift a little bit.

0:51:33 - U.S.-China dialogues on AI

Jakub (00:51:33):

And one other aspect of this is what kind of talks the US is having with China. So Biden had a meeting with Xi Jinping where they talked about AI, and I think there have been some subsequent dialogues since then. And when you were doing this CNAS analysis of the national security memorandum, one thing you noted was that there's been this rare dialogue between the US and China on AI, but tangible actions have not really materialized. Assuming the dialogues continue in some form, what tangible actions do you think would be realistic and useful in the next few years?

Bill (00:52:27):

I think the lowest hanging fruit that some people think may actually happen is that it's possible that Beijing will launch some sort of AI safety institute. And if that's the case, it's also possible that there may be some exchange between our AI Safety Institute and Beijing's, whether directly or through some sort of consortium of AI safety institutes. People are very nervous about that for good reason. If you look for example, at the engineers and the business leaders who led China's build out of AI surveillance technologies, a lot of them came from Microsoft's AI lab in Beijing, which people didn't think would lead to these authoritarian technologies. So even on the safety angle, people are understandably very wary of cooperating much with China.

(00:53:40):

Another area that in principle could have some collaboration is on some of the safety and norms around lethal autonomous weapons, where in principle, in some areas the two countries could have overlapping incentives.

(00:53:59):

However, I think it's easy to talk about these areas of potential overlap. It all needs to be taken with a huge grain of salt because the fact of the matter is even in areas where there's clear mutual benefit diplomatically to do with technological risks and technological cooperation, Beijing has been very difficult to deal with. So you can think of the space debris conversation or biosafety since Covid or any number... It's very difficult. And American diplomats complain all the time that the Chinese really instrumentalize mutual benefit issues in favor of their strategic influence. And that being the case, it's just really hard to make any sort of really substantive progress on anything. I think these dialogues so far have been kind of remarkable in that they're even happening considering what's not happening with nuclear, for example, where China's building out its arsenal and risks are potentially rising.

(00:55:26):

But I think that part of the reason why it's happening is that a lot of the discussion is prospective. And once real advantages start to accrue to the leader in different AI subdomains, it'll be a lot harder to make any sort of progress with China on this stuff unfortunately. The thing that could change this is if there were some sort of Cuban Missile Crisis situation where the world and the Chinese saw really palpably a mutual interest in cooperating on big issues. But I think we're a ways away from that, and it's hard for me to imagine even what that would look like.

0:56:24 - Future AI risks and possible cooperation

Jakub (00:56:24):

I think one way it could look like is something with these AI agents that are coming out. We don't know when really effective AI agents will be available that can act really autonomously and go execute more and more complex tasks. But if you have anything that's automating science R&D and also AI R&D, then the whole world starts to speed up very much. And if you're the leader of any nation and suddenly, for example, one company has an extremely profitable product and might be gaining power in a way that could even be detrimental to a government's interest, there's that concentration of power aspect. There's also the aspect where, yeah, if everyone who has access to this model could be having a 50,000 person cyber army at their fingertips, this might not be a very safe situation to be in as a country. Or if everyone can build extremely quickly and we find new weapons of mass destruction, like we were mentioning these large scale coordinated drone swarms, we could be really quickly introducing lots of destabilizing events that China might not really want to be happening at such a speed, if at all. Also, I think there's a lot of unknowns when it comes to the AI that can truly do anything a human can do. How do you think about that?

Bill (00:58:05):

I mean, I think the picture you paint - yes, that would be a catalyst for more cooperation. I myself am of a camp that suspects more incremental progress of AI in general, so I guess I don't plan for that necessarily. But I will say, one thing that I think is true on the agentic point is, if you think about the NotPetya cyber attack, for example - you wouldn't call it agentic, but it spun out of control and even rebounded on Russia. And you could imagine more dynamic AI systems, agentic systems in cyber, that could really run haywire. To me, that's the nearest, the most palpable kind of agentic near-term problem we might have. And if something like that happened - if that really had a catastrophic impact on national security systems, or even just on corporate operations, and cost incredible amounts of money - then that could bring some people to the table.

Jakub (00:59:34):

What was that attack that you mentioned?

Bill (00:59:37):

NotPetya. So it was basically a cyber attack that Russia launched. I think it was on Ukraine, but maybe don't quote me on that. It was kind of one of these infect-and-spread systems, and it spread all over the world. It even rebounded on Russia - I believe it was a petrol company whose systems it took out.

Jakub (01:00:10):

So Russia lost control of it basically, this cyber malware.

Bill (01:00:15):

I mean, in a way they didn't expect to retain control of it. It was kind of a push-and-play, see-what-it-does sort of virus. But I think it'd be fair to say that most likely its impact went further, got more out of hand, than they anticipated. So you could imagine something like that.

1:00:43 - Global AI competition

Jakub (01:00:43):

Okay. Before we close, is there any last things you had wanted to say wish I had asked about?

Bill (01:00:52):

I guess the last thing I would say is that another issue I concern myself with a lot, and that I think Americans should concern themselves with as well, is that China is not great at going from zero to one, at that innovation stage, but it's great at going from one to a hundred, and it's great at going from one to a hundred around the world. So the last thing I'd emphasize is that as much as we have to worry about our competition at the leading edge of these systems, we also need to be worried about how we are competing in terms of pushing AI systems, frontier and otherwise, around the world and developing ecosystems that are shaped by our values as opposed to theirs. And I think that's one area where we are not doing well, and we really would do well to focus a lot more on it, because China is pushing this stuff abroad. It has global ambitions, and, returning to the beginning, you wouldn't want to see things move towards a Kashgar-like scenario in other places around the world while we're focused on other things.

Jakub (01:02:21):

Yeah, and you had a Twitter thread about that where you talked about how within, I think, yeah, within one day of each other, the US and China announced these parallel initiatives to boost AI prospects in the global south, so people can check that out if they want. Is there anywhere else people should go if they want to learn about your work?

Bill (01:02:46):

You can see all of my articles on the CNAS website and all my reports as well, so that'd be the first port of call. But yes, thank you so much for having me on and it's been great to talk.

Jakub (01:03:01):

Yeah, really glad to have you. This has been a really enjoyable conversation. Thank you so much.

Bill (01:03:05):

Thank you.
