#3: Daniel Colson on the American Public's Perception of AI
US public opinion on AI, based on polling data from the AI Policy Institute
Daniel Colson, Executive Director of the AI Policy Institute, joined the podcast to discuss US public opinion on AI, based on polling data from the AI Policy Institute.
Available on YouTube, Apple Podcasts, Spotify, or any other podcast platform.
Our music is by Micah Rubin (Producer) and John Lisi (Composer).
Highlights
Public Opinion on Accelerating vs Decelerating
Daniel: From our poll with YouGov in August, we asked, “which comes closest to your ideal preference of developing and deploying artificial intelligence?” A, “we should slow down the development and deployment of artificial intelligence.” B, “we should more quickly develop and deploy artificial intelligence.” Or C, “not sure.” The results for that were: 72% say slow down the development and deployment of artificial intelligence, 12% say more quickly develop, and 15% say not sure. So I basically take this to mean that, in general, the American public is uneasy about the development. When you present them with the idea of it being more intense than it is, 72% say no. That's an overwhelming six-to-one ratio of Americans preferring slowing down over speeding up. I think that's a useful temperature check for how the public is generally feeling, with a really lopsided result.
Skepticism of Big Tech
Daniel: When they're presented with the idea of these tech companies, Mark Zuckerberg being the one to deploy open source AGI and do another round of truly society-transforming technology deployment without any consideration of asking the public whether they might want that, the public generally feels like this seems risky and not, in expectation, good. And I don't think that's necessarily an opposition to the technologies themselves or to AI itself, though there are elements of that. It's also that we don't want these companies to deploy the technology, because even if you could get a good outcome, you won't with these people being the ones leading. So I think those are some of the big dynamics making the American public not excited about high-risk, Manhattan-style projects that set out to revolutionize and transform our world.
Big Tech and Democracy
Daniel: I think a lot of people think that technology is really the main thing that's driving and shaping our world in a lot of ways. If technology is the main way by which we're being governed, […] and if the tech companies that are creating and deploying and managing those technologies aren't democratic institutions, then to what extent is America democratic? You know, at the very least, the deployment of these technologies and the consequences of that deployment are happening in a totally undemocratic manner.
Relevant Links
AI Policy Institute website, Daniel’s X account, AIPI’s X account
What normal Americans — not AI companies — want for AI (Vox)
Big Tech and the Online Child Sexual Exploitation Crisis (Senate Judiciary Committee)
Volkswagen emissions scandal (Wikipedia)
Transcript
This transcript was generated safely by AI with human oversight. It may contain errors.
(Cold Open) Daniel Colson | 00:00.582
People don't think that you should be able to unilaterally change the nature of being human.
Jakub Kraus | 00:26.859
I'm your host, Jakub Kraus, and today's guest is Daniel Colson. Daniel is co-founder and executive director of the AI Policy Institute. They've conducted a lot of public opinion polls on AI. So Daniel and I dig into some of the results of those surveys and why the public might be coming to these conclusions. I hope you enjoy. Daniel, thanks for coming on the show.
Daniel Colson | 01:01.957
Absolutely. Thanks for having me, Jakub.
Jakub Kraus | 01:05.271
I'm eager to hear more about the polling your group has done and what the results reveal about public opinion on AI policy. To start, what motivated you to begin working in the AI space?
Daniel Colson | 01:24.122
I've kind of expected to work on AI politics since around 2014. I read Nick Bostrom's book Superintelligence when he released it that year, and since then I've anticipated that AI technology would be similar to what nuclear technology was for the 40s and 50s: really the defining technology of our generation, and the primary driver reshaping a lot of the fundamental dynamics and power relationships in the world. I've spent my career split between tech entrepreneurship and historical research, and I've really been driven by an interest in the question: what, at a high level, is driving history and the trajectory of human society? I think a lot of people believe something along the lines of technological determinism: basically, that technology is a lot of the thing that's driving and shaping our world, and that there isn't very much human agency or choice in the mixture. It's these high-level forces that, at the historical level, are driving things. I found that when I combined that idea with an awareness of very powerful technologies constantly being invented, you know, novel weapons technologies that constantly increase the destructive capacity that humans have, that creates a very pessimistic expectation for what the future might look like, where we just get more and more destructive capability and don't really have any choice in the mix. And so I was really interested in trying to understand: is there any human agency in the direction that technology takes, in the direction that history takes, and how can you use that to try to steer things towards better outcomes? In some sense, you don't want to accidentally set yourself up trying to hold back a tidal wave, but you can channel and redirect things in profound ways that completely change, I think, the outcomes of technologies and societies. And so now I'm doing that a little bit more practically, I guess.
Jakub Kraus | 04:03.425
Fascinating. I agree that, especially over the last 100 years, the amount of new technology in our lives has been increasing. If you look at graphs of when the internet was adopted in US households, it just shoots up. The same goes for refrigerators, microwaves, all kinds of appliances, and then for when people started using smartphones and social media. These are quite new phenomena in the grand scope of human history. And with AI in particular, this notion of human agency becomes quite relevant, because the goal of a company like OpenAI is to try to automate more or less everything humans can do. Once you have that, the economic pressure is to replace or substitute for a lot of the things humans are currently doing. And whether humans still have jobs left in that scenario or not, there's going to be a lot more handoff of agency to the machines, I think. But to keep us on track here, what is the AI Policy Institute?
Daniel Colson | 05:17.462
Yeah, so I can also briefly introduce myself. My name is Daniel Colson. I'm the founder and executive director of the AI Policy Institute. The AI Policy Institute is a nonprofit polling and research think tank. Over the last six months, we've conducted more than 20 state, national, and international polls, essentially looking to understand what the public thinks about AI technology, the potential of AI regulation, and tech policy issues more broadly. My goal with this is really to give the American public a voice in matters of tech policy and AI regulation. You know, as we've been seeking to understand what American public opinion is and share that with the media and with political officials, I think people have been really surprised by how lopsided American public opinion is, how concerned Americans are about the technology.
So, this is from our poll with YouGov in August: we asked, which comes closest to your ideal preference of developing and deploying artificial intelligence? A, we should slow down the development and deployment of artificial intelligence. B, we should more quickly develop and deploy artificial intelligence. Or C, not sure. The results for that were: 72% say slow down the development and deployment of artificial intelligence, 12% say more quickly develop, and 15% say not sure. So I basically take this to mean that, in general, the American public is uneasy about the development. When you present them with the idea of it being more intense than it is, 72% say no. That's an overwhelming six-to-one ratio of Americans preferring slowing down over speeding up. I think that's a useful temperature check for how the public is generally feeling, with a really lopsided result.
Okay, so the next question. We asked, should AI companies be held liable for the harms from technologies they create? And we found 73% say yes, AI companies should be liable for harms from technologies they create; 11% say no, they should not be liable; and 16% say don't know. So once again, you're finding that the public supports regulation, even more strongly in this case: over six-to-one favorability for holding companies responsible for the technologies that they're deploying. So I think these stats all point to what we've been finding in our polling in general, which is just that the American public is decided on the matter of AI risks and AI regulation. They think that the risks are high, they think that regulation should happen, and they don't trust the tech companies to develop this technology in a responsible way. So that's a lot of what we've found.
Jakub Kraus | 08:57.478
Wow. Some of these are startlingly lopsided. Finding a question where 72% of Americans agree, when it's not as obvious as “should you feed your child every day,” is a rarity in today's political and information ecosystem, I think.
Daniel Colson | 09:28.172
This is part of the reason why we've done so many polls. When we put out our first few polls, most of the response that we got was that people thought we were push-polling, because you really rarely see poll numbers like this unless you work hard to massage the numbers with careful language. But the thing that we find is that no matter how we ask the questions, these are the results that we see. And people are starting to believe that.
Jakub Kraus | 09:59.410
Right. It's not entirely surprising, in that when I talk to most of my friends and relatives, they're worried, almost in a common-sense manner, about creating such a new and potentially transformative technology that could change everything about society. So I can understand where these results are coming from. But some people, like you said, are skeptical. So can you shed some more light on how the polls are conducted? How much weight should we put on these results?
Daniel Colson | 10:44.029
Yeah, definitely. So we conduct these polls using paid online web panels, so basically people taking surveys on a website. Each survey covers roughly a thousand people, and then we weight the results by education, gender, race, respondent quality, and 2020 election results. The weighting basically turns the sample into a more representative sample of the entire US population. So in our respondent pool, we might have 10% Hispanic people, and then we would adjust that portion to match the Hispanic percentage of the overall population rather than of the respondent pool. Generally, what you see with large weighted polls like this is that when you reproduce them using dramatically larger and more expensive methodologies, you get essentially the same results, with different margins of error. Ours have about a 4% margin of error. And on the back end, we've done a bunch of replication of more mainstream polling sources, especially YouGov, just to make sure that our polling setup matches what other polling operations are seeing on different questions.
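To make the weighting and margin-of-error arithmetic concrete, here is a minimal sketch in Python of post-stratification weighting on a single variable. It is an illustration, not AIPI's actual pipeline: the 19% population share, the 10% sample share, and the simulated response rates are assumed figures for the example, and real polls weight on several variables jointly.

```python
# Minimal sketch of post-stratification weighting and margin of error
# for a poll of ~1,000 respondents. Illustrative assumptions only:
# the population shares and response rates below are made up, and
# this is not AIPI's actual methodology.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1000

# Toy sample: one demographic variable and one yes/no poll answer.
sample = pd.DataFrame({
    "hispanic": rng.random(n) < 0.10,    # ~10% Hispanic in the raw sample
    "slow_down": rng.random(n) < 0.72,   # ~72% answer "slow down"
})

# Assumed population share to weight toward (hypothetical figure).
target_share = {True: 0.19, False: 0.81}
sample_share = sample["hispanic"].value_counts(normalize=True)

# Each respondent's weight = population share / sample share of their
# group, so underrepresented groups count for more in the estimate.
sample["weight"] = sample["hispanic"].map(
    lambda g: target_share[g] / sample_share[g]
)

# Weighted estimate of the "slow down" proportion.
p_hat = np.average(sample["slow_down"], weights=sample["weight"])

# Worst-case 95% margin of error for an unweighted proportion at
# n = 1,000 is 1.96 * sqrt(0.25 / n), about 3.1%. Weighting inflates
# the effective margin (the "design effect").
moe = 1.96 * np.sqrt(0.25 / n)
print(f"weighted estimate: {p_hat:.1%}, unweighted 95% MOE: {moe:.1%}")
```

The design-effect point at the end is presumably why a 1,000-person weighted poll is quoted at roughly 4% rather than the textbook 3.1%.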
Jakub Kraus | 12:24.444
Okay. And one recent poll I saw that stood out to me was from January, where you asked people how much they agree with the following statement: tech company executives cannot be trusted to self-regulate the AI industry. In response to that, 13% were unsure, 19% disagreed, and 68% agreed. And building on that trend, there was another poll you did in December where you asked people which approach to regulating AI they prefer. The first option was that companies should self-certify their compliance with government standards, and that received only 29% support. The other option was that the government should independently ensure that companies are complying with government standards, which received 71% support. So why do you think the public is so skeptical of company executives having the ability to proactively take these precautions, the same precautions that more stringent regulations would enforce?
Daniel Colson | 13:36.208
You know, I think that really starts with a question of incentives. Tech executives are the ones positioned to gain the most power and wealth from the aggressive and rapid deployment of AI, and so I think in some sense they have the greatest incentive to deploy it in a dangerous manner, or a manner that entails significant harms. And so I think, for that reason, tech executives are in some sense the people we should be most skeptical of, because they have the greatest incentive to deceive us. That doesn't necessarily mean that they're automatically bad. And certain players in the space, I think, are actually trying to push the industry in the right direction and doing so for the right reasons. But not most, I would say, certainly not most of the space. And I think Americans have seen that in other industries' involvement in politics, like Big Oil or Big Tobacco. There's an expectation that industries get involved in politics in basically purely self-interested ways, and that they're not trying to guard the public interest.
Beyond incentives, though, we've had an experience with the tech companies over the last couple of decades via social media and via the deployment of the internet more broadly. We saw this during the Senate Judiciary hearing last week with social media executives, including Mark Zuckerberg: very significant harms, and questions of whether the tech executives deploying social media are being responsible with its deployment, with issues of child sexual abuse massively proliferating on platforms like Facebook and Instagram being the significant focus of the hearing. I think Americans generally are uncertain about social media, think that it's been the source of significant harms, and even more think that it hasn't been managed in a responsible manner.
Then there are the scandals we've seen, particularly around Meta in the last five years: Cambridge Analytica, election interference, and certain tech platforms even becoming extremely politically partisan themselves. For example, at Twitter before Elon Musk's acquisition, 99% of employee political donations went exclusively to the Democratic Party. So you see these platforms becoming outlets for very partisan media. That's convinced, I think, a huge amount of the right that you don't want platforms like Facebook to be the arbiter of truth, because that will turn into a political question. And so we've seen all of these issues with the tech companies, with them being at the center of COVID information and misinformation drama, and that's made the American public feel uncertain.
And when they're presented with the idea of these tech companies, Mark Zuckerberg being the one to deploy open source AGI and do another round of truly society-transforming technology deployment without any consideration of asking the public whether they might want that, the public generally feels like this seems risky and not, in expectation, good. And I don't think that's necessarily an opposition to the technologies themselves or to AI itself, though there are elements of that. It's also that we don't want these companies to deploy the technology, because even if you could get a good outcome, you won't with these people being the ones leading. So I think those are some of the big dynamics making the American public not excited about high-risk, Manhattan-style projects that set out to revolutionize and transform our world. You know, I don't think we want a revolution. I think we want things to keep going and be pretty good. And for the most part, revolutions tend to go really poorly and be really messy. So, you know, at least that's how it seems like Americans feel.
Jakub Kraus | 19:13.424
All right. That is a historical cause for caution; we've seen all these bloody regimes under communism. In the early days of communism, people thought, well, it's going to be great. We're going to set things up, and once it's in place, there are going to be so many great benefits. And they never exactly fleshed out the details of getting from here to there, or why it couldn't go off the rails. Somewhat similarly with AI, there are people putting out manifestos now saying we must either accelerate or die or stagnate irreversibly, as if you have to choose between innovating as fast as possible and any kind of regulation whatsoever. It's a bit of a false dichotomy. The other thing worth highlighting more is that these are the same companies. We've got Facebook, obviously; they might have the largest amount of AI compute of anyone, and they've always had one of the best AI teams. Then Twitter/X is owned by Elon Musk, and if you pay for their premium tier, you get access to their chatbot Grok, which could put them in the top 10 of AI companies. Another big social media platform is TikTok, whose parent company ByteDance is over in China; they're one of the leaders in China's AI development ecosystem. And then, yeah, we've also seen companies just blatantly pursue their own interests. There was that famous Volkswagen emissions scandal, where they lied about how much pollution their cars were putting out. TurboTax is my favorite example: they'll go to the US government and lobby in ways that, they'll argue, have benefits, but that overall make the tax code more complicated. And then their product is a way to simplify tax filing for consumers. So the incentives are quite tricky, and I can understand some of this skepticism around saying, let's just go with it, let's have these same companies go forward. Now, I don't know if you had a comment on that.
Daniel Colson | 21:48.921
Yeah, you know, on the skepticism toward tech executives, there's an interesting dynamic to it, which is that I think a lot of people think that technology is really the main thing that's driving and shaping our world in a lot of ways. If technology is the main way by which we're being governed, if these technologies are the primary institutions managing our society, and if the tech companies that are creating and deploying and managing those technologies aren't democratic institutions, then to what extent is America democratic? You know, at the very least, the deployment of these technologies and the consequences of that deployment are happening in a totally undemocratic manner.
And, you know, something that I like to compare this to is human cloning, where it's been possible to clone humans for more than 30 years. But so far as we understand, perhaps with the one exception in China, there has never been a human that's been cloned. Even more, you could do what I think many would consider to be horrific genetic engineering experiments with humans, but for the most part, that simply hasn't happened, because of a widespread stigma against doing something like that. Much of the reason for that is that people don't think you should be able to unilaterally change the nature of being human. Or, at the very least, it would be much better if that process could happen through some sort of governance process. And so if you're going to create a new human species, create new human entities, do wild things like this that change what it means to be human, those decisions are the most important decisions we're trying to make. Those are the decisions that determine what the future of being human is, what all of our children's lives are like, the world that they're subject to. And those are the decisions that need to be made in the most legitimate and appropriate way possible. That doesn't mean we should need universal societal consent for a bank to deploy an anti-fraud algorithm; that's not what I'm talking about. I'm talking about: do we create new entities that replace us, that suddenly make it such that humans don't work anymore, or that are suddenly governing us instead of humans governing us? Which I think is something that many technologists very seriously propose.
Jakub Kraus | 24:51.032
And fleshing out this poll we just talked about, you also found that when Americans were asked whether developments in AI have made them more concerned about AI, less concerned about AI, or haven't changed their view, only 4% said they have become less concerned in response to these new developments, which I found quite surprising, compared to 38% who have become more concerned. So that's, what, almost 10 times as many. And then 57% are about the same; their views haven't changed. My hypothesis is that people are seeing the more powerful and capable AI systems, perhaps ChatGPT last year being the biggest one, and reacting to that. But what do you think is going on? Why are people becoming, if anything, more concerned as AI developments continue?
Daniel Colson | 25:56.744
I think your hypothesis is basically on the money, and I think it's a great point. There's some other polling that I think demonstrates this nicely. This was Pew polling where they asked the same question: are you more concerned or more excited about AI? In 2022, 15% said more excited than concerned, whereas 38% said more concerned than excited. In 2023, 10% said more excited than concerned, and 52% said more concerned than excited. So basically, between 2022 and 2023, we saw a 14-point increase in more concerned than excited, a 10-point decrease in equally concerned and excited, and a 5-point decrease in more excited than concerned.
Jakub Kraus | 26:56.784
Right. So in your December poll, you asked people this question: do you support or oppose requiring that any political ads disclose and watermark content created by AI? And you found 49% of Democrats and 53% of Republicans strongly supported this, plus another 16% of Democrats and 21% of Republicans who somewhat supported these disclosures and watermarks on AI-generated content. And this is an election year around the world, maybe one of the largest in recent history in terms of the number of people going to the polls. How does the public feel about this usage of AI to influence elections?
Daniel Colson | 27:46.642
The current frontier of AI technology is mostly seen as a threat to the integrity of elections. You can imagine ways in which AI could help preserve the integrity of elections, but I think for the most part, people are seeing things like the robocall with a deepfake of President Biden's voice that called New Hampshire voters during the primary and told them not to vote in the primary but to save their vote for the general election. People are seeing these kinds of cases of election interference and usage of AI technology to try to swing elections. And like you were citing in those poll numbers, there's just overwhelming bipartisan support for clarifying laws to keep AI out of elections and to try to preserve the stability of our electoral processes.
I was talking with my uncle yesterday, and he showed me a video. There's a music video from the late seventies of Crosby, Stills, and Nash performing live on a variety show with Tom Waits. And, you know, Crosby, Stills and Nash and Tom Waits are stylistically pretty different; they have different cultural audiences, in a way. So it's a little surprising to see those two acts perform together. And he told me that when he saw this video, his first thought was: this is AI. The video looks pristine, you know, and it was a nice music video that looked like it was from the 70s. But it seemed like maybe it was this weird generative video of Crosby, Stills and Nash performing with Tom Waits. I think that points to a deeper and more profound issue that the election interference stuff is pointing to, which is that, in the past, things like secret recordings have been very important for us understanding what's going on. Now, if we obtain a scratchy recording, we can't tell the difference between an AI-generated one and a real one. And that's causing leaders around the world to have already started claiming that real content of them is AI-generated when they look bad in it. And that suggests there could be a sort of terrifying erosion of our ability to even tell what's happening, due to everything being able to be effortlessly and perfectly faked.
Jakub Kraus | 30:54.198
Right. That specific problem, where people can no longer call someone out with a recording, is sometimes called the liar's dividend, and election disinformation and misinformation researchers are writing lots of papers on it. But if we continue on this trajectory of increasingly realistic synthetic content, I'm having trouble seeing how that won't happen. Where should listeners go if they want to learn more about you and the AI Policy Institute's work?
Daniel Colson | 31:32.315
So you can follow me on X at DanielColson6 and the AIPI at TheAIPI. You can also find all of our research on our website at www.theaipi.org. We're a donor-funded nonprofit, and if you think the work we're doing is important, we would love your support and we would love to hear from you.
Jakub Kraus | 31:58.815
Great. My guest today has been Daniel Colson. Daniel, thanks for coming on.
Daniel Colson | 32:03.413
Thank you very much, Jakub.
Jakub Kraus | 32:07.755
Thanks for listening to the show. You can check out the Center for AI Policy Substack for a transcript, links, and more. And if you have any feedback, I'd love to hear from you. You can reach me at jakub at AI policy dot us. Looking ahead, next episode will feature Sam Hammond discussing the need for government modernization in the age of AI. I hope to see you there.