Nick Whitaker, a fellow at the Manhattan Institute working on emerging tech and AI policy, joined the podcast to discuss his AI Policy Playbook as well as progress studies, global competition in AI, artificial general intelligence, cybersecurity, export controls, AI talent recruitment, AI companions, and more.
Available on YouTube, Apple Podcasts, Spotify, or any other podcast platform.
Our music is by Micah Rubin (Producer) and John Lisi (Composer).
Relevant Links
Progress studies (Wikipedia)
A Playbook for AI Policy (Nick Whitaker, Manhattan Institute)
How AlphaChip transformed computer chip design (Anna Goldie and Azalia Mirhoseini, Google DeepMind)
Stuxnet: The world’s first cyber weapon (Joshua Alvarez, Stanford CISAC)
Nuclear close calls (Wikipedia)
NIST Special Publication 800-171 Rev. 3 (Ron Ross and Victoria Pillitteri, NIST)
A Playbook for Securing AI Model Weights (Sella Nevo et al., RAND)
BIS proposed rule establishing reporting requirements (Federal Register)
For Export Controls on AI, Don’t Forget the “Catch-All” Basics (Emily S. Weinstein and Kevin Wolf, CSET)
Keeping on the Cutting Edge (Nick Whitaker, City Journal)
Presidential Innovation Fellows, TechCongress, AAAS Fellowship
Meet My A.I. Friends (Kevin Roose, The New York Times)
The First AI-Powered Storytelling Teddy Bear Is Here (Bridget Carey, CNET)
Transcript
This transcript was generated by AI with human oversight. It may contain errors.
(Cold Open) Nick (00:00):
AI could become the single biggest driver of economic growth and progress.
Jakub (00:12):
Welcome to the Center for AI Policy podcast, where we zoom into the strategic landscape of AI and unpack its implications for US policy. I'm your host, Jakub Kraus, and today's guest is Nick Whitaker. Nick is a fellow at the Manhattan Institute working on emerging tech and AI policy. He's also the chief operating officer of an AI investment fund based in the Bay Area. Previously Nick worked on tech policy at RAND and also founded Works in Progress, a magazine of new and underrated ideas in science, technology, and economics. We discuss topics like progress studies, global competition in AI, artificial general intelligence, cybersecurity, export controls, AI talent recruitment, AI companions, and more. I hope you enjoy. Nick, thanks for joining the podcast.
Nick (01:21):
Thank you for having me.
Jakub (01:22):
So the first question is, you have been a founding editor at Works in Progress, which later got involved with Stripe and focuses on new and underrated ideas to improve the world. And this I see as coming out of progress studies, this intellectual movement which tries to rigorously study all the different forces that can raise standards of living around the world. So what lessons have you taken from progress studies that you see as relevant for AI policy?
Nick (01:55):
Sure. I think there's two main things. I mean, the first is that our interest in starting Works in Progress all sort of centered around economic growth, and in particular economic growth on the technological frontier. So typically economists think of growth on the frontier as being driven by new ideas. New ideas come from people. Often people can leverage technology to find more ideas. I sort of came to the conclusion through my work at Works in Progress and other places that AI could become the single biggest driver of economic growth and progress, because in the sort of complete notion of AI, AI systems would be able to themselves come up with new ideas and deploy those new ideas to make the world better. So I think the first thing is that we should use AI in a way that's continuous with our history of economic growth, introducing new ideas into our economy and society that raise standards of living and make our lives better.
(02:54):
I think the second lesson from progress studies is that new technology often doesn't have solely positive effects, and that we use institutions like our government and civil society to leverage technology towards pro-social ends. But sort of a simple understanding of technology shows that technology is often employed to make our lives better, but is also employed in warfare, and that we need to be cognizant of the full range of effects that new technology will have. We can't just look at the upsides of it, and taking an approach that balances both these things when we think about how to deploy new technology to the world is important.
Jakub (03:34):
You've written this AI policy playbook for the Manhattan Institute. It has two sections, and the starting section looks at trends in AI and distinguishes between broadly capable AI and more narrowly focused AI. It goes over a lot of different background concepts like evaluations and testing, safety and control, AI agents, autonomy, future AI systems. And then there's one on global competition for AI. So you write "AI will likely become the single key military technology." And then obviously from that it's important for the US to maintain advantage in this, stay ahead in this. But then you're also talking about AI in the future automating almost all thinking based jobs, AI that can perform as well as a human professional at any remote coworking task, anything that could be done at a laptop. And this is called, for audience members, AGI or artificial general intelligence - AI that's roughly human level or beyond. So how do you square these two? What happens if the US is staying ahead, staying ahead, staying ahead, and then it builds AGI level systems or systems close to that? What comes next after that? Is that the end of the race?
Nick (04:51):
I'm not sure if it's the end of the race, and I think it's really hard to know what a world with widely deployed AGI systems will look like. What I do think we know is that being on the frontier of a technology is often useful to dictate the terms on which the technology is used. Now, from the US's perspective, I think the US has broadly brought positive liberal values to the world. And in terms of thinking through what sort of world and global arrangement we use for how we deploy and govern AI systems, I'd like the US to lead in the development of that arrangement. So I broadly believe that if the US is able to stay ahead in the development of AGI, it's going to be able to dictate those rules and make sure that the technology is widely used for peaceful purposes and in purposes that sort of enrich the human condition rather than ones that don't.
(05:44):
I also think that in terms of developing AGI safely - and we can talk more about the ways that that could go wrong - being ahead will allow us more time to do that and to be thoughtful about how we develop AGI such that we aren't caught in what's often described as a race scenario between us and another power such as China to see who can develop AI first and in the midst of that race forgetting or sort of ignoring important caveats to the development of that technology, such as our ability to control that technology.
Jakub (06:16):
How do you handle the timing issue here? So one thing I've thought about is maybe this is looking like a year ahead. If it's specifically China, it could be months ahead or years. If it's a country like North Korea, it's definitely not going to be six months, but it could also be several years behind. And then you have an acceleration of AI development in a country that builds AGI or tools similar to AGI. So Amazon said recently that they have automated 4,500 developer years of work. They saved that much work by using their Q software coding agent. So you're getting big software speed ups. And then Google recently had this AlphaChip system that's working on AI improving AI chip design, so on the hardware side. So you can really use AI to accelerate, first of all, just AI progress, and then this feeds forward into better systems. But then also in broader technological progress, you can gain a quick speed up of all the trends we've seen: just from 1800 to today, the world is a lot different. So if you compress that down into a few years while the other countries are catching up, to me it seems like it introduces a lot of potential destabilizing effects. I don't have an answer to what that actually looks like, but I'm curious if you have some sort of thoughts on how this will go.
Nick (07:41):
Sure. I mean, it's going to be a very complex scenario. So I think the first thing is whether another country will be able to get ahead of the US. I think two components go into that. I think the US has a natural advantage because the three companies that are leading the development of AI and AGI, and even more than that, are based here. We have the locus of talent, we have the kind of right institutions to accompany those. So I don't think that by default, just because another country wants to race to create AI, they'll be able to, just because they don't have the same infrastructure and talent that we do here. Now that only works if we have security, which is something we should talk about later. With AI, unlike other technologies like a nuclear bomb, where perhaps blueprints could be stolen if the program was infiltrated, you could literally steal an entire AI system by getting access to its model weights.
(08:34):
So that's why I think the top priority for the next year is to ensure that our labs developing AGI are secure, so that the technology can't be stolen, not just by a country like China, but even a country like North Korea or even rogue hacker groups. So when Google DeepMind did an analysis of their current security, they rated themselves, using the RAND criteria, I believe below SL2, suggesting that their systems were vulnerable to sort of everyday hacker groups and rogue states, not to mention state level espionage from somewhere like China. So I think that insofar as we have security and our current locus of talent, we'll have a large advantage in terms of maintaining our lead, which will allow us to sort of set up the AI world order, as we did with nuclear, before other countries are able to get to it.
(09:22):
Such that if other countries are pursuing AI in a reckless way, just like if a country is pursuing nuclear weapons in a reckless way, we'll be able to take actions to prohibit that development as we did with Stuxnet when Iran was developing a nuclear program. Now I'm not sure all those efforts will work perfectly, but one thing that's an advantage of AI is that if AIs become advanced coding computer science agents, they will be able to do cyber offensive attacks that could hamstring projects that sought to develop AI for military purposes in other countries, which is part of why I think having a lead in AI will be so important.
Jakub (10:01):
And this idea that - so the sense I'm getting is that we're going to build AGI, and this affords a lot of advantages in cyber, like you mentioned, and obviously chips the US might be able to restrict. And then the part that's missing for me is how you were mentioning that in nuclear we were able to control which states build nuclear weapons. For me, with AI, the amount of computation you can do per dollar keeps dropping, and techniques like model distillation to make smaller models, and ways to more cheaply train a powerful AI system, keep improving. And a lot of these ideas seem a lot harder to protect, because there's simply an algorithm, for example, that just needs to be passed along by word of mouth. So do you think there's ways to actually match that world we have with nuclear, where about nine countries today have nuclear weapons out of all the countries in the world?
Nick (11:09):
Yeah, I think the situation will be better in some circumstances and worse in others. So for example, after the US developed nuclear weapons first, it could have become the only power to ever develop nuclear weapons if it were to unleash nuclear attacks on any country that sought to develop them. Now, I think that would've been really bad to do because it would've been incredibly violent; I think that's a price that no one that is sane would ever stomach. In AI, the situation is quite different because we could simply use AI to find backdoors in foreign data centers and disable those data centers from developing AI systems, from running frontier training runs. So in some sense I think you could prevent the development of AI systems in other countries without any deaths, which makes it sort of a more palatable response than the nuclear response. At the same time, I take your point that the secrets underlying AI require less infrastructure in some sense than nuclear and will become more readily available over time as sort of algorithmic secrets disperse across the globe and the cost of computing lowers. At the same time, I think that the initial advantage we have in being the first to develop AGI systems will allow us to dictate norms around how further AI systems are developed and the conditions under which countries can pursue AGI. And that advantage will let us dictate some of the rules of the road for civilian AGI development while prohibiting sort of military projects that seek to build AGI to gain power on the geopolitical stage.
Jakub (12:46):
Another thing I'm wondering about is how stable this world will be. So after the development of nuclear weapons, we have limited it to only a handful of countries, but nonetheless, there have been a lot of nuclear close calls where even just one person seems to be standing in the way of a potential escalation. Everyone talks about how total nuclear war would be incredibly, incredibly bad, but it's never happened yet. So we don't know exactly what that would look like. But with AI there is some potential for a state actor to use it to build a bio weapon. And then as biological design tools develop, this could look more like targeting it more finely than bio weapons have been targeted in the past, which could be really destabilizing. There's also the development of large scale autonomous weapons, large drone swarms that seem like a new weapon of mass destruction.
(13:51):
It seems like the development of AGI could come along with the development of many new weapons of mass destruction, not just one like we saw in nuclear. And that's all to go along with the existing nuclear situation. Then there are potential uses of AI for deception or manipulation or tricking world leaders into making the wrong calls. So to me, yes, we can stop the development of a really frontier system, but over the decades it seems like eventually people are going to figure out how to write the right code. So it seems like this situation could quickly have a large scale catastrophe unfold. Do you have a way to deal with that or reduce the perpetual risk?
Nick (14:39):
Sure. So I certainly agree that we'll introduce a lot of new perils with the advent of AGI. I think sort of in the first instance, one of the first things that I think the developers of AGI, the allied countries that hopefully develop it, should do is invest in defensive systems, such as vaccines against a broad swath of potential bio weapons, such as missile defense systems and drone defense systems similar to the Iron Dome, to preempt some of these sub-frontier AGI attack vectors that could occur if other countries are able to develop powerful but not frontier AI systems. I think in terms of frontier systems, we'll need to integrate those frontier AI systems into the same sort of international institutions that we currently have, such that the US military has AGI power behind it, such that international courts have the power of AGI behind them to reconcile disagreements caused by those kind of new threats. And I think that world is a world that's precarious, but it's similar to the world we have now, where many countries have terribly powerful militaries and weapons systems that can destroy the world. At least two or three do. And we need to find ways to make sure those countries don't deploy those systems.
(16:06):
It takes a lot of different mechanisms to do that. We have insurance, we have international arbitration, we have diplomacy. We'll sort of be using the same stock of tools, but with more powerful weapon systems and hopefully better defense as well. So I certainly agree the situation will be precarious. I think what worries me the most is sort of as you were sort of alluding to earlier, rapid developments that have a destabilizing effect that don't reach a new equilibrium. And I think what we should be trying to do as a coalition with AGI is finding new equilibriums in different categories of potential avenues for mass destruction.
Jakub (16:43):
Yeah, and as you were saying earlier, a lot of this is in the future, so it's hard to say exactly what the world will look like, and it's a bit hard to plan around this. So let's zoom back to the present. You've written this policy playbook. It's got four principles that are pretty focused on what to do now. First one: retain and further invest in the US's strategic lead in AI development. So if an AI model has substantial defense applications - this is one thing you're recommending - then there should be cybersecurity requirements consistent with or more stringent than NIST Special Publication 800-171. So my question on this is how stringent is that special publication that you were referencing, and when would we need the conditions to be more stringent than that?
Nick (17:39):
Sure. So I believe our cybersecurity apparatus should basically be proportional to the destructive capabilities of AGI systems. Now, for example, some models today like the Llama family models are open sourced. I think that's perfectly appropriate because those models don't pose risk to society or have substantial defense applications. I think near future AGI, or excuse me, AI systems will have key cyber offensive capabilities. So I think we need to up the security for those systems. And as systems approach AGI, even further security measures will be necessary. As I understand it, this Special Publication 800-171 is sort of the industry standard for non-classified government materials. I think this is an appropriate standard to be implementing on the next generation of AI systems. I think once AI systems are sufficiently powerful that they become key targets for nation state level espionage, much more stringent security will be needed.
(18:41):
But I don't think - as I understand it from the cybersecurity community - that sort of trying to impose as stringent requirements as possible at the onset is either feasible or desirable. And instead what we need is to slowly ramp up our security apparatus proportional with the development of AI systems. So beginning with this Special Publication 800-171, then eventually something like the way that military contractors develop state-of-the-art weapon systems or nuclear submarines for the US, and then from there to something like the Manhattan Project, to hopefully something even more advanced than that in the AGI end game, such that it's completely impermeable to nation state level espionage. Obviously the Manhattan Project itself was successfully spied on by the Soviets. But yeah, I think that this will need to ramp up in time and as capabilities progress. Also, if capabilities don't progress as I suspect they will, I think that more lax security would be reasonable. So again, the security apparatus and the requirements that we implement should be proportional to the capabilities that we see emerge in successive generations of AI systems.
Jakub (19:54):
Now onto principle two in the playbook: the US must protect against AI powered threats from state and non-state actors. So in this principle, one proposal you have is related to the Bureau of Industry and Security, or BIS. So BIS recently put out these reporting requirements for advanced AI models and advanced computing clusters, and they kick in if you have an AI software system that relied on over 10^26, so 10 with 26 zeros behind it, mathematical operations to be trained. And then also if you have a hardware computing system that has the capability to do over 10^20 operations per second. So that would not take very long to hit 10^26, I think maybe a matter of weeks if my math is right. And then if you're doing that in the coming six months, you would notify BIS, then BIS can do some questions and answers about security testing, safety, and reliability affecting the advanced models being developed or planned. But you put in a new requirement for the US AI Safety Institute and private companies to be doing evaluations after they pass the 10^26 threshold. The evaluations you are imagining are focusing on autonomy, persuasion, weapons of mass destruction, military assistance. If the AI is found to be a significant military asset, then you propose instituting export controls. So what do these sort of software export controls actually look like? What are the concrete details of them?
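As a rough sketch of the arithmetic behind that estimate (assuming, purely for illustration, a single cluster running continuously at exactly the 10^20 operations-per-second hardware threshold with perfect utilization), such a cluster would cross the 10^26 training threshold in about 10^6 seconds, a bit under two weeks:

```python
# Back-of-the-envelope check of the BIS thresholds discussed above.
# Assumes (for illustration only) one cluster running nonstop at exactly
# the 10^20 operations-per-second hardware threshold with full utilization.

TRAINING_THRESHOLD_OPS = 1e26      # total training operations that trigger reporting
CLUSTER_OPS_PER_SECOND = 1e20      # computing-cluster capability threshold

seconds = TRAINING_THRESHOLD_OPS / CLUSTER_OPS_PER_SECOND  # = 1e6 seconds
days = seconds / 86_400                                    # seconds per day

print(f"{seconds:.0e} seconds ~= {days:.1f} days")         # ~11.6 days
```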
Nick (21:37):
Sure. So in all sorts of military technologies, we use export controls such that we can sell these technologies to our allies and we don't sell them to hostile foreign powers. So I think that insofar as AI systems are military systems, they should face broadly similar restrictions to any other military system. And this applies to software as well. So there's software systems that the US military develops with its partners that are subject to ITAR, such that these systems can't be exported to hostile foreign powers. Some of this I think is pretty straightforward. In the case of AI, you wouldn't be able to sell the weights of the model and you wouldn't be able to license the use of a model by an API to hostile foreign governments. There are some sort of AI specific factors here, which is that we should also be protecting the algorithmic secrets underlying the models.
(22:35):
It's not simply that you shouldn't be able to sell the model itself or license the model itself, but you shouldn't be able to sell the sort of underlying technology that allows you to create a frontier AI model, if that model has military applications. In particular, that consists of what's thought of as algorithmic secrets, the sort of optimizations that allow you to have more effective compute as you train a model. I think these should be treated like other sensitive military secrets, such that if you were an employee of a lab, you couldn't be hired by a state affiliated enterprise of a hostile foreign power, and you shouldn't be allowed to publish or give away these secrets as well. And again, these are things that will make sense to impose as we see more military and defense capabilities from frontier AI systems. These are some things that I think need to be imposed across the board. I think specifically the technologies that both empower AIs with military applications and that move us closer to AGI should be subject to these kinds of export controls.
Jakub (23:39):
And on to principle three: building state capacity for AI. You have some proposals like funding NIST, funding the US AI Safety Institute, and investing in neglected safety relevant research, so interpretability, standards for these planning frameworks, preparedness frameworks for future risks. Expanding AI usage in government, which could be anything from back office functions to border security. And then I want to focus on recruiting AI talent. One simple thing is if you put in higher salaries for AI talent, but you're also talking about the Office of Personnel Management doing fellowships, temporary appointments - or just looking into these - expanded contracting, public private partnerships, partnerships with AI talent centers and allied governments. Now we do have some existing tech talent programs like the Presidential Innovation Fellows or TechCongress - and some of these are shifting to help more with AI - but how many AI focused talent programs would be enough? What kind of scale do you think we need here?
Nick (24:50):
Yeah, I think it's hard to know, and I think in some sense it's proportional to the importance of AI in our economy. But you could see, if AI progresses quickly and becomes a key military and economic tool for states, that it would be analogous to whether or not you are able to have scientists in government, or engineers, or people that are able to work with any kind of technology. And I think that we've done an okay job in terms of generally getting technologists into government, but insofar as the technology is important for the economic and military wellbeing of a country, I think basically more is better on most margins. And I think in the programs that I lay out, it's hard to know exactly which one is going to effectively channel the knowledge that the private sector has into government. But I think we need to experiment with a large variety of these programs and see which is able to effectively disperse that knowledge and expertise into government. So again, I think it's a bit hard to know from the onset which will be the perfect program that will fix all the problems. But I'd like to see many different approaches tried, as we've done with different kinds of scientists brought into government in the past, and see which ones are able to effectively bring a wide breadth of AI expertise into the government such that that expertise can be employed in the public interest.
Jakub (26:11):
Yeah, and do you have a rough sense of what will make these programs successful? Or if you had to prioritize between them, do you have any personal favorites? Anything that you think is especially promising?
Nick (26:22):
I think the general problem is that public sector hiring processes are quite different than private sector ones. So while you might be used to doing coding interviews at a top tech company and hearing back in a few weeks - most people have applied to multiple of those jobs in their lifetime - people see government hiring processes as quite opaque, hard to understand, and hard to know where to go. So I think in general, fellowship programs that are able to work with scientists and AI experts to figure out a place in government where they can apply those skills effectively, and sort of give them some of the tools they need to be an effective operator within government, are especially helpful over and above simply raising salaries, which I think is important and necessary, but demonstrably insufficient to really bring expertise into government and use that expertise as effectively as possible. So generally, I like the fellowship models that we've seen at places like TechCongress and AAAS.
Jakub (27:25):
Now the fourth principle, the final principle, is to protect human dignity and human integrity in the age of AI. You have as proposals banning non-consensual deepfake pornography, requiring the disclosure of AI usage in political ads, and analyzing current and future impacts of AI on job markets. Now, one thing that I was wondering about how you might think about in this light of protecting human dignity is AI companions, and specifically I'm thinking of these AI therapist apps. I personally think these have some good promise for bringing therapy related tools or bringing mental health resources to people at a cheaper cost and at scale, but they could certainly go wrong. Then there's more ones that people have an intuitive concern about. So there's these romantic AI chatbots, AI boyfriends, AI girlfriends. There's really expressive AI voices that sound just like a human, that can talk in any language and can talk 24/7 with anyone around the world, like the advanced voice mode that OpenAI put out. I think in the future you could see these really realistic real-time deepfake video calls with AI avatars that are hard to distinguish from humans. And there's lots of sci-fi movies about this. There's Her. And it doesn't really take a lot of brilliant leaps to see how this could affect human dignity at the very least. So how do you think of the effects of this? And then I next want to ask you if there's anything that should be done. But first, how do you see the actual impacts playing out?
Nick (29:01):
Yeah, look, I think it's really hard to know. And I think if it was 1990 and you said there's a thing called the internet, there's going to be chat rooms, there's going to be forums where you can talk about any topic, and if you're obsessed with pinball, you can find thousands of other people that are obsessed with pinball and you can talk about pinball with them all day - I think some people might have said, oh, that's really dystopian, and some people might have said that sounds amazing. And I basically think that we should be relying on free market mechanisms to allow consumers, allow people, to figure out which of these uses they find fruitful and which of these they find more troubling. And I think there could be reasonable variation state to state in terms of what uses of AI in these contexts are allowed. And I think that we have sort of federal regulatory mechanisms, such as licensing for legal and medical advice, that can provide some guidance here too in terms of the areas where we choose to regulate more and regulate AI within our existing frameworks.
(30:03):
And then there are places like AI companions where we don't have existing frameworks, and we're going to need some time to see how people choose to use these systems, employ these systems, and whether they're used at all, before we start making decisions about the right way and the wrong way to use these systems. So in general, I'm quite pessimistic about us figuring all of this out at the onset, but I would like to take a careful look, as we begin integrating AI systems into more parts of our lives, at which of these use cases are serving the public interest and promoting human flourishing, and which of these lead us to poor equilibriums. So my short answer is I don't think anything should be done at the moment, but I'd like both institutions in civil society and folks within government to sort of understand what is happening and be able to make decisions as they need to be made about how these things are interacting with our daily lives.
Jakub (31:08):
For me personally, I think of some of it as a bit of an abrupt departure from what we've seen before, partly for the same reasons we've been discussing throughout with AGI: building something that's almost like a second species. I mean, we've had pets for a while, cats and dogs, but they're not talking with us, and they're not under the control of an external actor, and they're not replacing basic human functions like the role of a parent reading bedtime stories or the role of a significant other, being a boyfriend or girlfriend. So on that part, I take your point that it's hard to actually know what the effects will be and it might be rash to try to jump in and set the course of it. I think since that's your position, let's not talk about specific things the government should do. Let's talk a bit more about the impacts. So what do you see down the line in five years, 10 years, if AGI is developed in that timeframe? What does leisure time actually look like for people?
Nick (32:16):
Yeah, look, I mean I think there's a lot of really optimistic scenarios here, and I'm inclined, or at least more inclined than you, to think that it would be more continuous with sort of existing trends than a departure from them. So I could see a world in which AIs create custom made podcasts for us to learn about new subjects. They sort of make your perfect history podcast, your perfect science podcast. And maybe there's something more similar to audiobooks, where we like to read stories to our kids, but occasionally in our commutes we like to listen to audiobooks or listen to music. And I generally trust people to come up with new norms to figure out how to govern their personal behavior and the behaviors of their friend groups, to sort of implement reasonable measures on, you know, don't be on your phone at dinner, don't be on Twitter at work.
(33:00):
And we can come up with these things and develop them communally to help us, for the most part. I do take your point that if you told me that in five years everyone would be completely addicted to AI companions and nobody would talk to each other anymore - if that's the kind of world we were looking at, I think it would make sense to take preventative measures, or at least to try to stop that as that world is beginning to set in. Again, I don't think that's the world we see right now with our current suite of media technologies, and I hope it's not one that we see with AI. And I basically think that we need to trust people to come up with the new institutions that they need and that society needs to regulate that behavior on a voluntary basis. Though I don't think that government action in the space should be completely verboten if you ended up with a sort of clearly suboptimal outcome, such that we had become sort of completely disconnected from our communities and from our friends and families.
Jakub (34:06):
Okay. And the last thing, last general question, is on international policy. So this report is focused a lot on the US, but what do you think US allies such as NATO countries should be doing here, and how specifically is their role shaping up with the US in navigating rapid AI advancement?
Nick (34:30):
Sure. So I think that for better or worse, the key AI talent and companies are mostly in the US and the UK, but I'd like to see AI infrastructure distributed around Western allied countries. And I think it could be very important, if AI agents are widely available, that countries are able to use and deploy those agents on servers within their own country. So I think it's quite worrying right now that, because of climate commitments among other things and other regulations of the European Union, we're not seeing the rapid growth of data center infrastructure. And I think that it would be a very suboptimal outcome if only the US, or only the US and a smattering of other allied countries, had data center infrastructure. I'd like to see that infrastructure widely distributed around the Western world. I think the key barrier to that comes in permitting and energy production. So I'd love to see much more energy production by all means necessary, whether that's solar and wind or nuclear or even coal and gas, across Western allied countries. And then I'd like to see this sort of AI infrastructure being built in those countries as well, such that we can sort of share in the gains.
Jakub (35:54):
Is there anything else you had wanted to say? Anything you wish I had asked you about?
Nick (36:01):
No, I don't think so.
Jakub (36:02):
And where can the audience go if they want to learn more about you or learn more about your work?
Nick (36:08):
Yep. My name is Nick Whitaker and you can find me on Twitter and you can also find me on my Manhattan Institute page where I publish my work in City Journal and with the Manhattan Institute.
Jakub (36:18):
Great. Thank you so much, Nick, for joining the show. I really enjoyed talking with you.
Nick (36:24):
I really enjoyed talking to you too. Thank you for having me.