#11: Ellen P. Goodman on AI Accountability Policy
Federal AI policy efforts, the NTIA’s AI accountability report, watermarking and data provenance, AI-generated content, risk-based regulation, and more
Ellen P. Goodman, a distinguished professor of law at Rutgers Law School, joined the podcast to discuss federal AI policy efforts, the NTIA’s AI accountability report, watermarking and data provenance, AI-generated content, risk-based regulation, and more.
Available on YouTube, Apple Podcasts, Spotify, or any other podcast platform.
Our music is by Micah Rubin (Producer) and John Lisi (Composer).
Relevant Links
Ellen’s Rutgers homepage and profiles on X (@EllGood), Mastodon (@Ellgood@federate.social), and Bluesky (@ellgood.bsky.social)
Executive Order 14110 (Wikipedia)
Dual-Use Foundation Models with Widely Available Model Weights Report (NTIA)
Digital journalism (Wikipedia)
Broadening AI Regulation Beyond Use Case (Center for AI Policy)
AI is Like… A Literature Review of AI Metaphors and Why They Matter for Policy (Matthijs Maas)
About Bank Supervision (Federal Reserve)
Lessons from the FDA for AI (AI Now Institute)
Basics of Inspections (Public Company Accounting Oversight Board)
Transcript
This transcript was generated safely by AI with human oversight. It may contain errors.
(Cold Open) Ellen P. Goodman | 00:00.080
In connection with provenance, let's not just think about the technical tools, but about sort of the socio-technical muscle that would be required to really use those tools effectively.
Jakub Kraus | 00:19.405
Welcome to the Center for AI Policy podcast, where we zoom into the strategic landscape of AI and unpack its implications for U.S. policy. I'm your host, Jakub Kraus, and today's guest is Professor Ellen P. Goodman. Ellen is a distinguished professor of law at Rutgers Law School and recently served as senior advisor for algorithmic justice at the National Telecommunications and Information Administration, or NTIA, within the U.S. Department of Commerce. There, she led a report on policies to promote accountability in AI. So we discuss topics like the AI accountability report, federal AI policy efforts, watermarking and data provenance, AI generated content, risk based regulation, and more. I hope you enjoy.
Ellen, thank you for coming on the podcast.
Ellen P. Goodman | 01:18.002
Happy to be here, Jakub.
Jakub Kraus | 01:20.944
So last April, the National Telecommunications and Information Administration or NTIA issued a request for comments on AI accountability policy. And in the end, this got over 1400 comments. And there was a policy report on it that came out in March, which you were the lead author on. So how would you define what AI accountability policy is? And why did this project choose to focus on it?
Ellen P. Goodman | 01:56.536
Right. So it's a big question. And I think just to sort of begin by setting the stage is that we viewed AI accountability policy as sort of an ecosystem of policies and incentives and capabilities.
For accountability itself, we provided a definition in the report because it's the kind of word that people toss around and sometimes use synonymously with responsibility or trustworthy or ethical AI. And we wanted to sort of drill down on accountability in the meaning of having consequences for posing risks and causing harms. And those consequences usually come in the form of market incentives. So succeeding in the market, regulation, being held accountable by ex-ante regulation, and then liability. So after the fact, being held accountable through liability.
And our focus really, in terms of the ecosystem, was what policies are necessary or desirable in order to support those kinds of accountability mechanisms. And so, and we can get into it, but we really focused on that. Things like evaluations, audits, disclosures. In some cases, we talked about mandatory components. We also talked a lot about self-regulatory components to support that ecosystem so that downstream, those accountability mechanisms that I mentioned can function more effectively.
Jakub Kraus | 03:47.421
Got it. So... setting some of the infrastructure so that these three components you talked about, the market incentives, regulation, and liability can work effectively. Is that a correct understanding?
Ellen P. Goodman | 04:02.934
Yeah, to make it a little less abstract, you know, if we think about discrimination in the employment context, there are laws that apply to AI. But if you are someone who feels aggrieved by how an AI has treated you, it can be very difficult to sort of effectively vindicate your rights. Because, you know, first of all, you may not be aware that an AI system was used or algorithmic predictions were used. You may not have any ability to prove that they were used in a discriminatory way.
And so while there's theoretically a way to have accountability in that context, you need these accountability inputs—so for example, knowledge that the system was used, or your lawyer's access, your advocate's access to some of that data in order to build the case. Without those, you're not going to be able to effectively seek redress. So that's just one example of how we were sort of looking upstream.
You put it very well, what sort of infrastructure or mechanisms would help people and entities. Sometimes we're talking about just businesses that want to make decisions about AI, risk-based decisions about AI, but don't have the information they need.
Jakub Kraus | 05:37.532
Yeah, one thing that stands out to me is that in Congress, people talk about how we should make sure we're getting what we can out of existing laws that protect against AI. And there are some existing regulatory powers too, like the FTC. And there's some market incentives already for safety, for example, privacy as well. So what is your assessment of the current state of these three—or, let's stay away from the three parts further downstream, but the state of the infrastructure that can support these parts currently?
Ellen P. Goodman | 06:21.575
Yeah, so, you know, the largely sector-based regulatory legal infrastructure that we have is pretty robust and it's a very good tool. And each of those verticals sort of, you know, jealously protects their jurisdiction over, you know, whatever it is, whether it's airplanes or drugs or employment. And so there's a lot of potential there in terms of downstream deployments.
I think some of the challenges are that most of those regulatory bodies—and certainly this is true at the state level, but it's also true at the federal level—are under-resourced, both just in terms of the resources that they have to now… they were already stretched, but now to take on kind of algorithmic X in whatever they're doing, but also especially the technical infrastructure and the compute.
And so one of the things we dealt with… There were some great comments in this area about federal horizontal capability so that, yes, for the most part, you know, enforcement authority and regulatory authority is going to be vested in the particular vertical.
But how can we think about horizontal capability, you know, especially technical capabilities, so that if you are dealing with algorithmic systems at the FDA or at the FTC, you have access to both compute and personnel and other technical instrumentalities, so that you can more effectively use what regulatory and legislative authority you have. And then, of course, there's the whole other basket of things that there is no federal law for… and privacy, you know, sort of data protection is the most glaring example.
Jakub Kraus | 08:31.697
Yeah. For this horizontal side of governance, what is currently going on there? Were the comments addressing whether, for example, an agency can use the DOE's supercomputers if they want to run experiments, or whether an agency in need of technical AI expertise could draw on some experts at the Department of Commerce, even if that's not their agency? Is that currently happening? I know there are AI governance bodies being set up within the government and chief AI officers being appointed.
Ellen P. Goodman | 09:12.718
Yeah. So this was a big thrust of the executive order, the executive order that came out on Halloween in 2023. As you said, to set up that kind of infrastructure within the federal government that is this horizontal capability. So that's just all being set up now.
And, you know, who knows what happens to the durability of those things, you know, will depend on what happens in November and what direction they take. But yeah, I think that was a main thrust of the executive order.
And I think there is, you know, there are MOUs between agencies to sort of share resources. And there's certainly a lot of working groups, interagency working groups, and sort of a collaboration within the federal government to kind of address some of those knowledge gaps.
I think there are also, you know, DOD has taken a lead on kind of having prizes and trying to do field development and then get a pipeline into the government. I think there are some other, I can't remember the names of them, but sort of tech scholars and, you know, tech superstars, you know, trying to bring them into government. So I think that work is happening.
It's nothing like the scale of what's happening in Europe with the EU AI Act in terms of the infrastructure that's being built out. But I think that's sort of intentional, right? So the American style of doing this is much more to push down responsibility through those vertical domains that already exist in the federal government. As you know, there are bills out there that would propose a new agency.
For the most part, I think the approach that you see in the proposed legislation is to give more authority and resources to the FTC, I think a little bit DOE, and obviously Commerce, you know, there are a lot of bills to further resource Commerce.
And NIST is kind of, NIST is not a regulator, but I think it's very much playing the role of kind of technical expert to the federal government in some of the projects that it was tasked to do in the executive order and the work that it's doing on the risk management framework. And I know that it's sort of in touch horizontally throughout the government on those matters.
Jakub Kraus | 12:12.125
Yeah. And what about the NTIA? It just put out this report on foundation models with open model weights. It put out the accountability report. What do you see as the NTIA's role in AI policy?
Ellen P. Goodman | 12:27.154
The NTIA sort of plays this think tank role, sort of as a policy think tank. It also has no regulatory power. And that's kind of, I think, you know, the issue here, for better and for worse, is that these kind of horizontal bodies in the federal government tend not to have regulatory power. So they are playing, you know, in NTIA's case, a kind of thought leadership role.
I read it last night, the new NTIA report, but I need to reread it. But you can see, you know, I think that's a really good example of.. What it does in that report is it says, you know, open model weights—so sort of in the vernacular, open AI foundation models, have benefits and risks. And they define this concept of assessing the marginal risk that they pose over and above closed models like ChatGPT and Claude and Gemini and all the others other than Llama.
And in terms of assessing that marginal risk, to determine whether or not there needs to be regulation, they lay out what kind of infrastructure or sort of capabilities the government would need to have in order to make good decisions.
And I think if you look at some of the things that they talk about, which overlap with some of what we talked about in our report, things like red teaming and evaluations… all of those are capabilities that the government is a little bit light on right now. And that civil society in general is light on. And so I think it's underscoring the need for investment. And I think the AI Act was kind of a step in that direction. But ultimately, Congress is going to have to appropriate funds in order to get the government better resourced to do those things.
Jakub Kraus | 14:44.255
And with the AI Act, you're talking about Europe's AI Act?
Ellen P. Goodman | 14:47.978
Oh, did I say AI Act? I meant the EO. I meant the executive order.
Jakub Kraus | 14:51.381
EO, yeah.
Ellen P. Goodman | 14:52.242
Yeah, sorry.
Jakub Kraus | 14:53.943
Are there particular policies for strengthening evaluations and red teaming in particular that you think are promising? Anything that can bolster that ecosystem? Help it grow.
Ellen P. Goodman | 15:11.752
Yeah. So first of all, in the private sector and in academia, in terms of the stakeholder conversations that we had and from what I know generally about CS departments, they're doing red teaming. And doing research into red teaming has not been the most rewarded career path.
So one thing is that there needs to be, you know, I think a kind of shift—this is true in many, many areas of academia—towards rewarding, promoting the kind of work that society needs. And so that's one thing.
And government can be helpful there even just by giving prizes and honorifics and, you know, sort of accelerating… In academia, you know, pats on the back and non-monetary prestige rewards go a long way. So that would be sort of a low cost thing to do.
Beyond that, I think funding more red teaming and kind of evaluation development and standard setting. One thing we talked about was providing guidance for what independent evaluation should look like substantively, but also what “independent” should mean. You know, in some areas, of course, that's regulated. In the securities area, you know, the independence of auditors is set forth in regulation, although the development of that regulation was very collaborative with the private sector.
But I think standard setting, facilitating standard setting, not just technical standards, but also kind of socio-technical standards about independence and, you know, what a good red… Red teaming is just, can mean so many different things, right?
And the NIST risk management framework and other forms of risk assessment, you know, you can check the boxes in a kind of pro forma way or in a very deep way. And it really makes a difference both internally within the entity in terms of assessing and managing their risks and also in their external communications and in what, you know, downstream buyers or deployers or people on whom the systems are deployed, you know, in terms of what they know about the systems and what risks they can expect and choose or not choose to bear.
So all of that kind of standardization and guidance on what quality looks like. And some analogies would be what the federal government has done in the food markets about organic and setting kind of seals of approval and standards, nutrition labels. I mean, those kinds of things that are market making. So, you know, I think there are a lot of creative things the government can do other than providing funding.
Jakub Kraus | 18:35.510
Got it. I wanted to talk a little bit about provenance and tracking the origin of data generally or AI generated outputs.
So one proposal is we watermark outputs. So embed some signal in them. It could be easy to detect, hard to detect, easy to remove, hard to remove. And you could use that on a social media platform, for example, downstream to check whether the content is AI generated. Maybe you could learn which AI model generated it, maybe even which user generated it. And that could help with content moderation or perhaps even liability with seeing who's responsible for criminal use like non-consensual deepfake use.
So in the context of generative AI, how do you envision watermarking, authentication, these kinds of provenance practices taking shape and contributing to accountability?
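To make the embedding-and-detection idea concrete, here is a deliberately simplistic Python sketch. It uses an easily removed invisible marker; production watermarks typically bias token sampling statistically, and every name below is hypothetical.

```python
# Toy sketch only: a trivially removable "watermark" appended as invisible
# zero-width characters. Real generative-AI watermarks typically bias token
# sampling statistically; all names here are hypothetical.

ZW_TAG = "\u200b\u200c\u200b"  # invisible marker standing in for "AI-generated"

def embed_watermark(text: str) -> str:
    """Model-side: tag the output before it leaves the system."""
    return text + ZW_TAG

def detect_watermark(text: str) -> bool:
    """Platform-side: check whether the marker survived distribution."""
    return text.endswith(ZW_TAG)

draft = embed_watermark("Here is a speech draft...")
print(detect_watermark(draft))                          # True: signal present
print(detect_watermark(draft.rstrip("\u200b\u200c")))   # False: trivially stripped
```

How easily the tag disappears is the point of the sketch; it previews the robustness concerns raised in the answer that follows.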
Ellen P. Goodman | 19:45.090
So our report did spend quite a bit of time talking about provenance. And what was interesting to me was that those terms are used very differently depending on what the interest is.
So for example, the copyright community is very interested in the provenance of training material. And so what they're interested in is that from the outputs, you're able to trace what the training data was and whether or not there was consent, compensation, and credit to the copyright holder if it's under copyright.
Which is a very different kind of feature and purpose of provenance than the ones you talked about, which is really about sort of synthetic content versus authentic content. And with respect to distinguishing synthetic content, provenance has a lot of value.
You also alluded to its lack of technical robustness. And I think it's not just that it can be stripped, interfered with, spoofed, right? So you're going to have a lot of false negatives and false positives. But also, you know, the struggle is that because the distribution chain for content can be so long and convoluted, you can take a screenshot of whatever it is and then, you know, share it on a different platform, and, you know, the provenance or the watermark is gone. Obviously, these are all technical challenges that are being wrestled with.
So there is all of that. The difficulty, I guess I would say there's a lot of value in it, but we shouldn't assume that provenance will sort of solve the epistemic problems and the dignitary problems you alluded to of, you know, deep fakes and or cheap fakes or, you know, mis- and disinformation that has nothing at all to do with any kind of fake, right? Any kind of synthetic fake. So it's, you know, it's an arrow in the quiver, but, you know, I don't think it solves every problem around the epistemic kind of sludge that we live in.
The other thing I would say is that I would distinguish authentication from, you know, sort of watermarking in the sense that authentication—which, you know, Adobe and Microsoft, the C2PA coalition is really, I think they are the market leader with their tool. You can kind of assume that the default is synthetic, right? So what it's trying to do is… If you want to use this tool, it's for you. If you are a credible source and you want to prove that you are the source of this content, you can authenticate it back to yourself. And I think that's very different from flagging synthetic content. And it probably is more robust, but it's a much more limited set of signals.
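To illustrate the contrast Ellen draws, here is a minimal sketch of authenticating content back to its source. It uses a shared-secret HMAC purely as a stand-in for the certificate-based signatures that C2PA-style content credentials actually rely on; the key and content below are hypothetical.

```python
# Conceptual sketch, not the C2PA spec: a publisher signs its content so that
# anyone can later verify the claimed source and detect tampering. A shared-
# secret HMAC stands in for real certificate-based, asymmetric signatures.
import hashlib
import hmac

PUBLISHER_KEY = b"hypothetical-newsroom-signing-key"

def sign_content(content: bytes) -> str:
    """Publisher-side: attach this signature when releasing the content."""
    return hmac.new(PUBLISHER_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, signature: str) -> bool:
    """Verifier-side: confirm the content still matches the claimed source."""
    expected = hmac.new(PUBLISHER_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

photo = b"...image bytes..."
sig = sign_content(photo)
print(verify_content(photo, sig))              # True: provenance intact
print(verify_content(photo + b"edited", sig))  # False: content changed after signing
```

Note the narrower promise here: unsigned content is simply unverified, rather than being flagged as synthetic.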
Jakub Kraus | 23:13.548
So to recap the ways provenance gets used… There's the copyright side, with tracking what training data went into an output, for example. There's the watermarking and tracking whether the content was AI generated or what went into making it. And there's proving that you are the source of the content as well, this more authentication side. Are those the three main ones? Are there any other ways the term is getting used?
Ellen P. Goodman | 23:49.887
Yeah, I would say there's one more bucket and that is provenance information or provenance manifest that is not about synthetic content detection—sort of binary, “yes, this was generated” or “no, this was not generated,” “yes, this was manipulated” or “no, it wasn't”—but rather, this is where you can sort of click on the content credentials, and then you can see a manifest about where it originated.
That could be a pseudonym or that could be a number. But so that you can tell, you know, was this two images that were merged together? Did entity B come in and change what entity A initially captured? So that is a more fine grained sort of manifest of the journey that the content took.
And obviously that is not going to be of interest to most end users, right? That is more fine-grained information that is of interest, I think, to kind of sensemakers like journalists and researchers and others.
When you look at the provenance proposals, and there are many of them now in proposed bills, they sort of deal with each of these aspects. I think it's important to understand what the intervention is trying to promote, which kind of provenance information it's trying to promote.
And then to think about, again, in the spirit of what infrastructure would you need to make effective use of that kind of fine-grained provenance information… In my view, to make the best use of it, you're going to want to invest in kind of sense-making intermediaries, right? Where the end user is not… it's not really retail disclosure, right? It's a kind of disclosure to experts.
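For a sense of what such a manifest might contain, here is an illustrative mock-up rendered as plain data in Python. The field names are invented for this sketch and do not follow the actual C2PA schema.

```python
# Hypothetical provenance manifest: the kind of "journey" record a journalist
# or researcher might inspect via content credentials. Field names are
# illustrative only, not the C2PA schema.
import json

manifest = {
    "content_id": "example-photo-001",
    "history": [
        {"actor": "Entity A", "action": "captured image",
         "time": "2024-05-01T14:02:00Z"},
        {"actor": "Entity B", "action": "merged with a second image",
         "time": "2024-05-02T09:15:00Z"},
        {"actor": "Entity B", "action": "AI-assisted edit",
         "tool": "generative fill", "time": "2024-05-02T09:20:00Z"},
    ],
}

# Rendered from the embedded metadata when someone clicks the content credentials.
print(json.dumps(manifest, indent=2))
```

Most end users would never open this; the value, as Ellen notes, is to sense-making intermediaries who can interpret it.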
Jakub Kraus | 26:07.118
Who are sense-making intermediaries today? Or is this a new proposed role you see people filling?
Ellen P. Goodman | 26:15.806
You know, I think, I mean, the usual sense-making intermediaries are institutions, right? They're journalistic institutions or research institutions or government or, you know, think tanks or other kinds of meaning makers.
And I think one of the things that's happened with the Internet is there has been… I sort of view provenance against the whole background of what's happened over the last, whatever, 30 years with sensemaking. And there is on the internet, you know, what scholars call context collapse, right? So everything kind of looks the same and has the same heft in terms of its validity. And this happens way before you get to counterfeit or synthetic content, right?
So the fact that there is now AI generated content complicates things, but it's really an acceleration and intensification of kind of sense-making problems that the Internet has created.
And as you know, one of the things that's happened is that the Internet has hollowed out journalism. And sort of we don't have the robust kind of fact-checking capabilities. And there are lots of fact checkers out there, but they're not well resourced. And, you know, there have been kind of fads with the big tech companies of supporting them and then backing away from them.
So there's a gap in the ecosystem around sensemaking. And, you know, maybe there are new kinds of sensemakers and there are distributed sensemakers. And blockchain, for that matter, can be a sensemaker in the sense that it can be used to create reliable ledgers of authentic—you know, that's what it's meant to do, to authenticate. We can include that in the ecosystem.
If you think about provenance information, I'm just suggesting you also want to think about who is the consumer of this provenance information other than the algorithmic system that's ingesting it and that's going to do something with it, right? Like, so maybe it's going to downrank synthetic content. Maybe it's going to do that. Maybe it's not. Maybe we need to have a conversation about whether it should be downranked, because just because it's synthetic doesn't mean it's less valuable, right? And so that's another conversation is that it's not like authentic good, synthetic bad. We might want to know about when content is synthetic.
So there's all of these complicated questions about how we understand communications, how we use communications. I mean, maybe there's an argument that—you know, there certainly has been in the earlier days of the internet, there was an argument that we didn't really need journalism anymore, right? Because we had cut out the middle person. And now I think there's a recognition that that wasn't true and that we do need journalism. And because that's my background in media and media policy, I tend to look at that part of the equation.
And so in connection with provenance, let's not just think about the technical tools, but about sort of the socio-technical muscle that would be required to really use those tools effectively.
Jakub Kraus | 30:04.662
And for some of these provenance practices in particular, to what extent are they already in place? And if not, why aren't they in place? What are the incentives here, the technical feasibility, that makes this possible or hard to do?
Ellen P. Goodman | 30:28.335
Yeah, it's such a great question. And I think it's moving very quickly. So all of the social media companies say that they're using some kind—I think all of them—you know, they all have community guidelines about synthetic content, and you're supposed to flag it if it's AI generated.
And as any teacher knows, you know, as we all prepare our syllabi for the fall and we deal with AI assisted learning, it's very hard to draw the lines right between… I mean, we're not just talking about freestanding ChatGPT, but we're talking about Copilot and Gemini and in law, Lexis and Westlaw all have AI assistants. And so at what point does something become synthetic content that it needs to be flagged? And I'm sure each platform has a different way of doing that and a different way of detecting synthetic content and expressing that something is AI generated.
And for example, this recent flap about Elon Musk retweeting a Kamala Harris deepfake that he did not identify as being synthetic content, even though X's terms of service require that it be identified or not posted if it has the potential to deceive. And what he said about that is that it was obvious that it was a parody, right? So these things about... what is synthetic content that needs to have provenance information or a label, you know, are somewhat subjective because a lot of them turn on deception.
And so to answer your question, I think the platforms are mostly using some kind of provenance information. And then there are all these open standards. You know, the Partnership on AI is working on responsible use or deployment of synthetic content. As I mentioned, there's the C2PA—I think I always garble the name, but the Adobe and Microsoft open standard for authentication. So there are a lot of products out there.
The incentives, I mean, first of all, I think there's an incentive that the developers have to know for training purposes when content is synthetic and when it's authentic because of this fear of model collapse, that if you train too much on synthetic content, you just drift away from kind of ground truth. So they have their own incentives.
And then, you know, there's a lot of pressure. There's a lot of, I think, just information integrity value in the marketplace. And so, you know, there are some incentives there.
I think. The problem is, and where the government, I think, can be useful, is that when you have a proliferation of a lot of different standards and a lot of different metrics… First of all, interoperability is challenging. And this is a complicated value chain because it's, you know, you take a video and then you run it through an AI generator or an AI tool to modify it. And then you post it on a platform and then someone else takes it and reposts it on a different platform and makes further additions to it.
And so there are a lot of different pieces in this chain that all have to recognize—if you want that to be durable—that originating provenance manifest or information or metadata or whatever it is, digital signature. And so that's one place government can be useful is kind of helping with interoperability.
And then another place it can be useful is kind of setting guidelines or standards for, you know, where there needs to be provenance, what it ought to look like. Or, you know, this is in a proposed bill, that you would prohibit the stripping out… Whatever provenance information an author or a system chooses voluntarily to include, it would be illegal to strip it out. That might be another place for regulation.
Jakub Kraus | 35:06.843
Interesting. I want to transition to talking about some other aspects of AI policy.
So there's this common term of risk-based regulation, which seeks to scale the stringency of regulations or the level of strictness in them according to how much risk is being posed. And this commonly gets brought up with how much risk is the use case posing or how risky is this sector? Like if we're using it in healthcare, maybe we want different requirements than using it in agriculture.
But one problem that I think about with this is with general purpose AI that can be used in many different use cases in many different sectors. So whether that's Claude planning a recipe or writing a speech, there's a lot of different things one system can do. And then these are also called foundation models because, not just in the direct use, but indirectly, they can then be specialized for a particular application. And they have many different downstream applications they could be specialized for.
So how do you think about the approach of risk-based regulation or risk-based policy more generally in the context of these general purpose foundation models?
Ellen P. Goodman | 36:42.840
Yeah, I mean, it's such a hard question that no one seems to have answered satisfactorily. You know, a lot depends on what analogy you choose.
So the analogy that developers use is, we're making electricity, right? Like, we're just making electricity, and electricity can be used to light your house or to activate explosives. And we can't know how it's going to be used. And so, you know, you can't regulate us with respect to downstream harms.
Another analogy though might be something like chemicals, right? These are dual use. It's a general purpose technology, but it can also be used in harmful ways. And there are certain kinds of very limited requirements that you can put on a chemical manufacturer.
And I must say, this all changes when we're talking about open systems, right? And the big difference in this respect is that closed proprietary systems have the possibility of clawing back their model from bad actors or limiting access through an API. So they have some control over how deployers are actually using it.
And so then if they have some control, sort of in terms of risk-based regulation, you might be focusing on, okay, we have to improve information flows so that developers understand how deployers are using their system. And if they have information, then maybe they have some responsibility to make modifications so that it's not used in that way.
When you're talking about open models, there's really no control right over how the foundation model gets used. And so that becomes more difficult. But I guess, you know, in general, this is why I personally think that most liability has to be downstream of the developer and has to be with the deployer who has a much better grip on the risks.
But I don't think that means the developer has no responsibilities, but probably the responsibilities sound more in the domains of disclosure, access, providing standardized disclosure so that there is information to deployers about what this system is capable of, what it's been tested for, what its training data is. So those kinds of things, you know, I think are in accordance with risk-based regulation.
Jakub Kraus | 39:32.180
And in the context of a system like Claude or ChatGPT, where the deployer and developer might be the same one in terms of setting up this chatbot interface, does that change the analysis at all? Or it's still with liability, for example, we're targeting them, but only because they're a deployer?
Ellen P. Goodman | 39:56.612
I think that's right. And, you know, there have been some really interesting conversations around whether Section 230 immunity would apply to Claude when, you know, Claude defames someone. And that question is really, I think, going to turn a lot on this interaction between the user and the system, right, that it's a collaboration often between prompts and the system. And so, you know, it's still not that they're operating autonomously… They're operating in partnership with the user. But yes, I think that to the extent we think about imposing liability on Claude for harms in that context, it's because they are the developer and the deployer.
Jakub Kraus | 40:41.145
And one of my last main questions is that the report was talking a lot about different approaches in other areas. So other domains: financial, FDA regulations… So out of all of these approaches, or others that you've come across since, what is one policy model that might be especially effective for AI?
Ellen P. Goodman | 41:07.727
Alright, I’ll give you three. Okay, really quickly. It really depends on where in the AI ecosystem we're focusing. So if we're talking about big, huge, powerful foundation models that are proprietary, they may pose systemic risk. They may pose unpredictable systemic risk. They sort of look a bit like banks. And so you could have something that looks a little bit like bank supervision, and I think the AI executive order did this a little bit. It defines powerful by compute level—which is controversial, which may change, right? But at that point, there's much more information exchange with the government.
Second example would be FDA. So FDA has ex-ante gates that you have to pass through before you can release a drug. That's probably not a good model for most of AI. But what it does do is it's information forcing. So we used to not know much about how drugs worked, what their mechanisms were, whether they were effective. Sometimes they worked, sometimes they didn't. And what the FDA partnership with industry, with the pharmaceutical industry did was it forced the pharmaceutical industry first to know more about their own mechanisms and then to disclose those. And so that sort of mechanism of information forcing, I think, that comes from that domain is useful.
And the third, I would just say, is the audit ecosystem in the financial markets. You know, there it was very much audits developed by the private sector that then get adopted by government regulators. And that sort of sets the standard so that everyone in the public markets is using the same kind of audit structure, and it becomes comparable and interoperable and sort of really performs the information function that makes the market work.
Jakub Kraus | 43:20.537
Great. Thank you for those three examples. I think those are really interesting to think about. And before we close, were there any last things you wish I had asked you about or that you wanted to bring up?
Ellen P. Goodman | 43:36.306
Just really quickly. You know, I thought you were going to ask me what one piece of new legislation would I recommend at the federal level?
Jakub Kraus | 43:45.601
Yeah, go for it!
Ellen P. Goodman | 43:46.062
Yeah. So that is just, you know, it's so obvious it's almost not worth saying, which is privacy, data protection. You know, in every domain it's needed. And it's a big part of this too, especially around training data and the inclusion of private information in generative AI outputs.
And then the other thing is about the role of states. And I think it's super interesting, you know, where there's a void in federal action, the states, especially California, rush in. You know, there are pros and cons to that. Certainly when California starts regulating foundation models, there's, you know, a huge extraterritorial impact of that. So that's maybe problematic. On the other hand, there is a vacuum.
But I think where states clearly are within, you know, their rights is in the area of setting liability. So defining harms and creating new liability rules for them. And I think that can be a really effective way at the most downstream point where people are impacted by AI to give them—or their surrogates if it's just a public right of action, their attorneys general—rights to vindicate their interests.
Jakub Kraus | 45:10.634
That makes sense. So if the audience wants to learn more about your work, read more of your writing, is there a good place for them to go and find that on the internet or books or anything like that?
Ellen P. Goodman | 45:27.862
Sure. I mean, they can go to my Rutgers website, or I usually can be found at EllGood, E-L-L-G-O-O-D, on Mastodon, BlueSky, and X.
Jakub Kraus | 45:44.157
Awesome. Ellen, thank you so much for joining the show.
Ellen P. Goodman | 45:48.121
Thanks a lot, Jakub.
Jakub Kraus | 45:52.164
Thanks for listening to the show. You can check out the Center for AI Policy Podcast Substack for a transcript and relevant links. If you have any feedback for the show, please feel free to email me at jakub at AI policy dot us. Looking ahead, next episode will feature Dr. Michael K. Cohen, a postdoc at UC Berkeley, discussing the risks of advanced artificial agents and proposals for regulating them. I hope to see you there.