#9: Kelsey Piper on the OpenAI Exit Documents Incident
OpenAI’s recent incident involving exit documents, the extent to which OpenAI’s actions were unreasonable, and the broader significance of this story
Kelsey Piper, Senior Writer at Vox, joined the podcast to discuss OpenAI’s recent incident involving exit documents, the extent to which OpenAI’s actions were unreasonable, and the broader significance of this story.
Available on YouTube, Apple Podcasts, Spotify, or any other podcast platform.
Our music is by Micah Rubin (Producer) and John Lisi (Composer).
Relevant Links
ChatGPT can talk, but OpenAI employees sure can’t (Kelsey Piper)
Leaked OpenAI documents reveal aggressive tactics toward former employees (Kelsey Piper)
Removal of Sam Altman from OpenAI (Wikipedia)
A Field Guide to AI Safety (Kelsey Piper)
Transcript
This transcript was generated by AI with human oversight. It may contain errors.
(Cold Open) Kelsey Piper | 00:00.169
What did senior leadership know? When did this happen? How did this happen? If nobody in senior leadership knew anything about this, then who did this? Because somebody did this.
Jakub Kraus | 00:20.608
Welcome to the Center for AI Policy podcast, where we zoom into the strategic landscape of AI and unpack its implications for U.S. policy. I'm your host, Jakub Kraus, and today's guest is Kelsey Piper. Kelsey is a senior writer at Vox, and we discuss the recent OpenAI incident involving exit documents, as well as the broader significance of that story, which Kelsey reported quite extensively on. I hope you enjoy. Kelsey, thank you for coming on the show.
Kelsey Piper | 00:58.404
Yeah, absolutely. It's great to be here.
Jakub Kraus | 01:01.063
So in mid-May, you reported that OpenAI had a restrictive off-boarding agreement with non-disclosure and non-disparagement provisions, and these forbade former employees from criticizing OpenAI for the rest of their lives. There was also no ability to even acknowledge that the agreements existed, because doing so would violate the NDA, the non-disclosure agreement. Can you paint a detailed picture of the kinds of behaviors that these exit documents were controlling and prohibiting?
Kelsey Piper | 01:46.676
Yeah, so the exit agreements at OpenAI, which Vox has actually gone ahead and published, so you can read redacted versions of them if you want to check out some of the details of the language yourself.
But they had some standard provisions, such as, of course, protecting confidentiality and trade secrets. That's normal and not at all problematic. And then provisions prohibiting disparagement of OpenAI or any of its employees. Some of the former employees who I talked to just kind of assumed, well, OpenAI is going to exercise common sense. Surely they wouldn't intend to enforce this against criticisms that were based on public information and that were true and transparent.
But the actual language of the agreement would prohibit criticisms that were based on fully public information and that were simply of the form, I disagree with the direction that OpenAI is taking as of their latest announcement, or, I think this paper by OpenAI isn't very good. Because there's this very broad, sweeping language about non-disparagement.
And then there was also a non-interference provision that prohibited interference with OpenAI's relationships with any of its vendors and contractors and partners and funders and things like that, which of course includes Microsoft, now includes Apple, and as OpenAI gets bigger, includes almost everybody. And nobody I spoke to was quite sure what the non-interference provision entailed exactly. Is it interference in OpenAI's relationship with Apple if you, for example, strongly criticize Apple for making the arrangement? Maybe.
And so you have these provisions that are very broad on paper and a lot of lack of clarity about exactly what they bar people from doing. And these were signed by every single former employee, barring a few cases, in some cases going back to 2017.
Jakub Kraus | 03:56.037
Could you elaborate on perhaps the motivations, if you have a sense for why these were in place?
Kelsey Piper | 04:06.710
So, it's very hard. You know, as a reporter, you can say these documents are very sweeping. OpenAI applied tons of pressure tactics to get former employees to sign them. When former employees pushed back, OpenAI really doubled down and found new avenues to say, we're going to take away your vested equity if you don't sign. They were clearly willing to go to a lot of lengths to secure ex-employees' agreement to these documents.
And you can say that senior leadership signed a bunch of these documents and was present for a bunch of these meetings and clearly had some knowledge, maybe not of every detail of this, but that it was happening in broad terms.
It's much, much harder as a journalist to say what anybody's motivations are. I think many of the ex-employees I talked to felt that the company wanted to suppress criticism because they felt very threatened by, you know, even fairly mild criticism of the company. They sort of wanted to maintain… You know, OpenAI has a lot of hype and enthusiasm surrounding it. Maybe they were very worried about former employees criticizing the company bursting that bubble. But, you know, it's much easier to say that something's happening than why it's happening.
I think we do know that OpenAI's leadership is very sensitive to criticism. You'll remember that what kicked off the board drama in November is that, well, there were obviously a lot of ongoing conflicts between the board and Sam Altman. But it's been reported that Helen Toner, who was on the board, wrote a report on AI deployment strategies in which she said something favorable about OpenAI competitor Anthropic compared to OpenAI in terms of how they had approached deployment strategies.
This was in a really minor white paper that, frankly, no one read. I read a ton of stuff in the industry, and I hadn't read it when everything blew up. And yeah, I would honestly guess that fewer than 20 people had read this. But Sam Altman took Toner to task for it, met with her one-on-one, demanded that she not do things like this, said it was very, very damaging to the company for a board member to be a co-author on a paper that criticized the company in comparison to a competitor. So it seems like Altman feels very strongly about no one criticizing OpenAI.
Jakub Kraus | 06:35.451
And one of the pressure tactics or leverage mechanisms that OpenAI had was that employees who didn't sign or violated the document could lose this thing called vested equity. Can you define that term and unpack it for people who might be unfamiliar?
Kelsey Piper | 07:01.148
Yeah, absolutely. So outside Silicon Valley, the way you're compensated for your work is generally you get a salary, you get a paycheck.
In Silicon Valley, a very common arrangement, I would say the most common arrangement for tech employees, is that your compensation package that you agree to up front when you join the company is a mix of a salary, the paycheck that you get every two weeks, and equity in the company. At a fully public company, this is just stock. Someone who works at Google will get much of their compensation in the form of Google stock. Outside Silicon Valley, you see this in like C-suite employees, they'll get stock compensation, but it's not a common, normal thing. But in tech, almost everybody who works is getting equity in the company that they work for, or at least it's a very, very common arrangement. So in a public company, that's stock.
In a startup like OpenAI, that is going to take the form, instead, of a partial share in this privately held company, which is eventually worth something if that company ever goes public. And of course, owning equity in an early stage startup often gets analogized to holding lottery tickets. You have something that has a small chance of being worth a ton of money, and early employees end up very wealthy if the company takes off. But in most cases, the company doesn't take off and that equity is not really worth anything.
But OpenAI has its own form of equity that is a little bit different from what's standard in tech, as a product of it being technically a non-profit private partnership kind of deal. They call them profit participation units, and the important thing is that they are represented to employees as equity when they sign on, as part of a compensation package. Employees earn them over the course of their working at the company. Those shares are deposited in the employee's share account. And then in principle, those employees would be entitled to a huge payout if OpenAI ever becomes a publicly traded company. That may never happen and is certainly not going to happen anytime soon.
So instead, what OpenAI does is organize what are called private tender offers, where you can sell your equity to other buyers. And someone who has worked at OpenAI for a couple of years has millions of dollars in this vested equity, money that was part of their compensation agreement, that they've earned, and that is now theirs to sell the next time OpenAI does a tender offer.
Jakub Kraus | 09:44.012
Right. And what do we know about these tender offers? I think you wrote that they're pretty secretive and not much is known about them.
Kelsey Piper | 09:53.345
Yeah, so again, for a little context for people outside Silicon Valley, tender offers are a relatively new thing. It used to be that you were kind of holding onto equity until the company went public. But over time, companies have spent longer being private before they go public. And certainly for OpenAI, it's deeply unclear if they'll ever become a publicly traded company. And of course, if you're a former employee who was there early, it's one thing to say, I'm holding on to this equity for five years until the company goes public. But if I'm holding on to this equity for 10 years, for 15 years, at some point I want to turn that equity into actual money that I can use to pay my bills or buy a house or whatever.
So over, I would say, the last five to ten years, maybe even just the last five years, it's become more common for late-stage private companies to handle these tender offers. And the deal there is just lots of people want to own a piece of OpenAI. It is a company that has a very high valuation and that is earning a lot of revenue and that is doing ambitious stuff. People want to own shares of it.
If it's a publicly traded company, the way this would work is that the former employees are just like, hey, anyone want to buy my shares? And then somebody buys their shares. This is literally what the stock market is, right? But at a privately held company, you can't do that. And in fact, OpenAI exercises a lot of control over whether you can sell your OpenAI equity. You broadly cannot just make a deal with someone on the side to sell your OpenAI equity. It all has to go through OpenAI.
That itself is not OpenAI being shady. That is very normal. You would encounter the same thing in other late-stage companies. And so they organize these tender offers where they basically bring together a buyer who is willing to purchase a bunch of employees' equity, and then the employees sell the equity. The employees cash out and are no longer in this position of holding something that's worth a ton of money in expectation, but where they would really like to have some cash that they can use more immediately. And you satisfy that employee desire for liquidity while the company sort of maintains control over who has shares of it, which is important to a private company.
Jakub Kraus | 12:06.135
And one important question here that you've hinted at is how standard these practices are. So when I did a little research, I saw some news articles saying Amazon backed away from non-disparagement agreements about two decades ago. Tesla might have used some in 2019. Sometimes companies offer a really generous severance pay when someone leaves or is laid off in exchange for the employee signing a non-disparagement agreement. And then you also mentioned there's some standard exit documents like no sharing trade secrets or confidential business strategies.
When you were doing the reporting, you spoke with some experts in employment and labor law. And one attorney told you that the vested equity clawback threat was egregious and unusual. So how far did OpenAI diverge from standard industry, Silicon Valley, and big tech practices for departing employees?
Kelsey Piper | 13:17.072
So I would say that this was an enormous departure. And yeah, the employment and labor lawyer I quoted in the piece called it egregious and unusual. And I think that's about right. It is very normal for a company to offer a departing employee severance in exchange for, yeah, potentially a non-disparagement agreement or similar. And in that case, you're leaving and they say, hey, we'll throw in an extra, you know, $20,000 or whatever if you agree not to disparage the company. And I don't really think this is in the public interest. I'm not enthusiastic about it, but it is fairly common. And I wouldn't say it's particularly problematic because it is, you know, if you don't want to make that agreement, you just decline the severance pay. And these people are in demand. They are able to get other jobs. That doesn't seem to me like a huge restriction on them.
What made what OpenAI was doing so extraordinary is how it treated vested equity, equity that you earned as part of your compensation package and that has already vested. Typically you don't get any of your equity until you've been there for a year, and then it shows up in monthly or quarterly chunks. It's called vested once you've passed that cliff of having been there for a year; the chunks that you have already received are vested. So that is generally thought of as your compensation. You know, it is part of your paycheck. It is yours. And legally, of course, the company, if it's a private company, still has a bunch of mechanisms to claw back vested equity. But this is pretty much unheard of, because the reason people work at startups like OpenAI is substantially for that equity. It is a big deal to them.
And it is really, really unprecedented for a tech company that's, you know, in this successful situation to threaten already vested equity. It's like only one step shy of saying, we're going to take back your last six paychecks if you don't sign this deal. Like, this is money that was already part of people's compensation package and that they had already earned.
So what makes this unusual is not that there was a non-disparagement agreement. It's that it was in exchange for absolutely nothing except we won't cancel your vested equity. And that is deeply unusual. When I looked for other examples of companies doing something similar, there were a couple of individual cases where a company did this to a small number of employees, and then TikTok does something similar to this. But TikTok is broadly agreed not to be an ethical employer and to be a pretty bad actor in many ways, not a good example to look up to. So yeah, I think OpenAI crossed a very serious line from the perspective of the tech industry.
And when this became public, the company immediately apologized, said this was not consistent with their values as a company, and said they would immediately stop doing it. So I think OpenAI also crossed the line from the perspective of its own employees and from, you know, at least according to its leadership, from their own perspective. Leadership says they didn't know about this and they don't think this is acceptable. So OpenAI also crossed the line from their own perspective here.
Jakub Kraus | 16:27.186
Yeah, there were some sources inside OpenAI telling you that there was this turmoil happening in the company. And then a day after your article, the CEO, Sam Altman, posted an apology on X. He said, we have never clawed back anyone's vested equity, nor will we do that if people do not sign a separation agreement or don't agree to a non-disparagement agreement. But he did acknowledge that the previous exit docs included the equity cancellation provision, and he said they'd been in the process of fixing the exit paperwork over the past month or so.
Now, what's interesting to me about that is that this employee, Daniel Kokotajlo, resigned in April. So that's about a month before this all broke out. He refused to sign the non-disparagement agreement. He might have been one of the few. He publicly confirmed in May that he had refused to sign. He said he gave up approximately 85% of his family's net worth, or that was what was at stake if they clawed it back. And so my naive reading is one story might be that Daniel was the first person to refuse signing and start telling people about it. This prompted OpenAI to start changing the paperwork. And then when the story broke and the firestorm happened inside the company, they stepped up, kicked it into gear, and completed the changes. Maybe they made even more generous-to-employee changes than they would have otherwise. But what is your best guess about why they made these changes? Why did it happen a month before the story broke out?
Kelsey Piper | 18:15.238
I am going to try and focus here very narrowly on what appears in documents I've seen. And then, you know, your listeners can extrapolate. So Daniel Kokotajlo received these documents, and he responded and said, it looks like I'm being asked to sign away my right to criticize the company in exchange for retaining my vested equity. I don't feel comfortable doing that. I don't think this is consistent with OpenAI's values as a company. I have reviewed his email to the company, and it is now public. And it says very clearly, you're asking me to give up my right to speak out in exchange for this vested equity. I'm not willing to do that.
And OpenAI responds to that email and says, thank you, Daniel, we're here if you change your mind. And that is the end of that conversation. And this is after they had sent Daniel the document that says very clearly, your vested equity will be canceled if you do not sign this agreement within 60 days. So I think Daniel quite reasonably got a document that said your vested equity will be canceled if you don't sign within 60 days. He didn't sign. He said to them, I'm not signing, because it seems like you're asking me to waive various rights for my vested equity, and I'm not comfortable. And the reply was, understood, thank you, have a great day. That was the last correspondence from OpenAI until a month later, when Vox published the article.
So on May 16th, I reached out to OpenAI and I sent them an email and it said, we are reporting that ex-employees are asked to sign a non-disparagement agreement and told their vested equity will be canceled if they don't sign that agreement. Is that correct? Can we get on a call? Can I hear OpenAI's perspective on this? That email was ignored. 24 hours later, not having heard anything from them, we went ahead and published the piece. After we published the piece, I tweeted about the piece. The tweet started getting a lot of traction and a lot of interest.
Three hours after the article went up, OpenAI emailed Daniel Kokotajlo again. And they said to him, we are so sorry for the ambiguity in our last response. We're actually not going to cancel your equity. And we would never have done that.
Now, I read the response prior to that one. It didn't really have any ambiguity. It was a very unambiguous interaction in which OpenAI sent over documents that unambiguously said they would cancel Kokotajlo's equity. And then, you know, three hours after the article went up, they reached out to clarify the ambiguity in their last response and say they were not going to cancel his equity. I'm glad they're not going to cancel his equity.
Then there's the question of OpenAI and Sam Altman's apology, which claimed that this issue had come to their attention a month earlier and that they had been in the process of fixing it. And I was puzzled by that, because I had not only reviewed Kokotajlo's documents in the course of reporting this story. I had spoken to many former OpenAI employees. I had reviewed documents stretching back to, you know, 2019 that people signed on departing the company. And I had reviewed the documents from a number of recently departing employees. And I had documents that were dated April 29th, so after Daniel Kokotajlo had left and had this interaction with the company. I had documents from other departing employees, and they still used that language.
Well, so I have one from April 22nd that still uses the language of threatening to cancel vested equity if you don't sign. And then I have one from April 29th, and it does have different language. Instead of saying OpenAI will cancel vested equity, it says we will exclude you from tender offers unless you sign. So they did make a change to their language between April 22nd and April 29th, going from, we will cancel your vested equity, to, we will exclude you from tender offers unless you sign.
I don't know if that change in the documents between April 22nd and April 29th is the change that OpenAI was referring to when they claimed they were already in the process of updating the documents. I asked OpenAI for clarification about this, and they told me that the situation was discovered in February and that updating began in April. But they were not willing to show me examples of the pre-update and post-update documents or anything like that. So, I know that that change happened in late April, but I don't know if that was maybe just a difference between different departing employees or if that was a change that was downstream of this change that OpenAI was making. Or frankly, if OpenAI was being dishonest about having intended to change this at all.
But I do know that they say they discovered this in February. As of April, they were still sending out threats to cancel vested equity. And then in late April, we have documents where instead of threatening to cancel vested equity, they threaten to exclude ex-employees from tender offers. But that's not really...
Jakub Kraus | 23:20.768
Yeah, is that meaningfully different?
Kelsey Piper | 23:22.674
Well, they're on stronger legal footing threatening to exclude ex-employees from tender offers than threatening to cancel their equity. When I reviewed this with legal experts, the legal experts felt that the threats to cancel their equity were on very shaky legal ground. The thing OpenAI was doing there was very tenuous, and they seemed to have probably been doing it, you know, benefiting from the fact that nobody had challenged it because a judge would not have been impressed. That was the general impression of the legal experts that I spoke to.
Excluding people from tender offers is written into their incorporation documents. They can do that for any reason, at any time. So they're certainly more legally able to do that. I think morally, it's, of course, approximately the same thing. It is trying to use the threat of making an employee's already-earned compensation worthless to them in order to get them to sign this agreement. So morally, I don't think there's an important difference, but it's on more solid legal footing.
So there is a story you could tell here that goes something like some former employees started challenging the equity cancellation thing. This gets escalated internally. Some people look at it and they're like, wait a second. The thing we're doing here is not legal. If anybody does challenge us in court on this, we're going to lose. So then they change to the threat of exclusion from tender offers, which is legally on much safer ground. And maybe that's what OpenAI meant when they said, we noticed this problem internally and started making changes, that they noticed that what they were doing was illegal and they changed to do the same thing in a way that was legal. But if so, you know, then they were absolutely not changing away from the policy of silencing all ex-employees by threatening their already earned compensation. They were just changing to a new way of doing it that put them on firmer legal ground. But I don't actually know for sure that that's what happened, because when I reached out to OpenAI asking a bunch of clarifying questions about this, they did not answer the questions that would have let us determine what was going on.
Jakub Kraus | 25:23.218
And then the other angle here is how Sam said this is one of the few times I've been genuinely embarrassed running OpenAI, and, importantly, he said he was totally unaware that this was going on. But then a few days later, you broke this story that several company leaders had signed some relevant paperwork for the exit documents, and you found that Sam himself signed incorporation documents in 2023 for the holding company that handles equity in OpenAI. And these incorporation documents contain passages that give the company pretty strong authority to claw back equity from former employees or block them from selling it. So how much do you think Sam was actually aware of, compared to his statement that he was totally unaware?
Kelsey Piper | 26:27.853
Yeah, so Sam made this statement, and I immediately was confused by that statement, because among the documents that I'd reviewed were ones that were signed by OpenAI senior leadership. It seemed very unlikely, given that these were not standard provisions, that they had happened with no knowledge of the CEO of the company. And in fact, I'd asked some legal experts about that. And I'd asked some CEOs about that.
I spent a while calling every tech CEO I knew and saying, like, hey, is there any chance there could be language like this in your exit documents without you knowing about it? They all laughed at me. They were like, absolutely not. There is no way there could be non-standard language about something like this in my exit documents without my having any idea. That's not how it works. That was the consistent reaction from a number of tech CEOs who I asked this question. And Altman specifically said that language about taking back vested equity from employees was something that should never have been in any documents, which is a pretty strong claim. And while it's always hard to prove what someone knew, we certainly know that Altman's signature is on documents that give the company great power over vested equity. He signed the incorporation documents for Aestas, which give the company the right to buy back vested equity at fair market value, which might be zero, because for tax reasons, all OpenAI employees declare that the fair market value of their equity is zero. That's a separate wormhole we probably won't get into, but it puts employees in an awkward position with the buyback provision. Those documents also give the company the right to exclude anyone from tender offers for any reason, and they explicitly give the company the right to cancel vested equity if an employee is fired. So all of that language is directly in the Aestas documents.
And I guess Altman could say that he signed them without reading them. And I guess he could say that, you know, unlike the other tech CEOs I talked to, there could have been a lot of language in the exit documents that he had no knowledge of. But the other possibility here is that, as many people have said about Sam Altman over many years, he seems pretty dishonest. He often says things that just aren't true, and people find it very frustrating. Some of the incidents that precipitated the board crisis included Altman lying to some members of the board, saying, like, oh, this other member of the board said they were in favor of voting to remove Toner. And that was just untrue. He had just made that up, and that was, you know, sort of one of the breaking points that led to the board crisis. So another possibility that would explain this is that Altman did not realize how many documents Vox had access to and thought that he could say he hadn't known about this, but in fact he had known about this. That is another possibility.
Jakub Kraus | 29:37.473
And to clarify that $0 point, does that mean that OpenAI could buy them back for $0 or the employees... Could the employees still sell them and make money if it's $0?
Kelsey Piper | 29:50.250
So the fair market value thing, it's a little bit complicated to say what is the fair market value of a profit participation unit in OpenAI when you mostly can't sell them without the permission of OpenAI. And they're this unusual equity structure that doesn't exist outside OpenAI. So it's very hard to say what the fair market value of that is. But I talked to OpenAI employees who confirmed that as a matter of course, when they get their profit participation unit grants, they have to send a letter to the IRS. And in that letter to the IRS, they formally state that the fair market value of their profit participation units is $0. When they sell the equity, they are not selling it for $0, obviously. They make millions of dollars. But they have all sent this letter to the IRS that says that the fair market value of their units is $0.
Now, this was done in consultation with legal advice. The situation is very complicated because OpenAI equity is very complicated. I am not saying anyone involved is committing tax fraud. I simply don't understand the exact details of what the IRS is looking for there. But I do know that a number of former employees were nervous that since they had all signed these documents saying that the fair market value was zero, and since OpenAI said, we can at any time buy back all your equity from you at the fair market value, that OpenAI had set itself up to be able to buy equity back at the fair market value of $0. I don't believe OpenAI has in fact done this, but they certainly gave themselves sort of the right to.
Jakub Kraus | 31:21.142
And one thing I think the employees were also concerned about was some of the pressure tactics. If I'm remembering specifically, there was some surprise when they got the exit documents, because they didn't know in advance that these kinds of provisions could be in there. And then they might not have had very much time to decide if they were going to sign or not. Is that accurate?
Kelsey Piper | 31:51.213
Yes. So another big element of this story is how much pressure was put on former OpenAI employees to sign these documents. And that pressure took a bunch of forms. But one of them is that this document was sent to them as an exploding seven-day offer. They had to sign it within seven days or it would no longer be valid. And when former employees said, I'm sorry, I need to consult an employment lawyer about this document, which puts a ton of new ongoing obligations on me for the rest of my life, and there's millions of dollars at stake, can I have an extension on this one-week contract in order to consult an employment lawyer? Because of course it takes some time, if you've never consulted an employment lawyer before, to find one, get an appointment, and let them review everything.
OpenAI would push back really hard. They would say, oh, what confusions do you have? How about I connect you to our in-house counsel who can talk you through your confusions, things like that. Or I don't have the authority to extend the contract anyway. I'll ask higher up.
And in practice, if you refused to sign and waited until the seven-day contract expired and then said, all right, I'm ready to sign now, they would just send you another seven-day contract. So it was sort of a fake deadline, but the experience was very much one of being, you know, on this really tight deadline, asking for more time, asking for time to retain outside counsel, and being told, oh, we can clear that up for you, talk to our in-house counsel. And being sort of deflected or refused when asking for an extension on the seven-day deadline on the document.
And employment lawyers that I spoke to thought there were serious ethical concerns about the role that OpenAI's in-house counsel played in some of the emails that Vox released, because former employees would reach out and say, I need more time so that I can access my own counsel before signing this document. And OpenAI's in-house counsel would respond and say, hey, I'm happy to talk you through the implications of this contract. Let's set up a time to talk. As in, they would say this instead of granting the person their requested time to find their own outside counsel. And the employment lawyers I talked to said that, as a matter of professional ethics, if you are a lawyer representing the company, OpenAI, and you are talking to a person without representation who has their own interests, you need to say to them, you know, I represent OpenAI. I am not representing your interests. I can tell you things about how OpenAI understands and interprets this contract. And you should absolutely not be positioning yourself as advising someone when their interests are not the same as your client's interests. And so people who I showed these emails to thought there were potentially serious professional ethics violations in how OpenAI's in-house counsel was sort of getting in the way of people getting external counsel and trying to straddle the role of representing the company and advising the former employee. And then the seven-day time pressure made it really, really hard for former employees to figure out what was going on and push back.
Jakub Kraus | 35:14.164
So there's a lot to be concerned about here. And maybe OpenAI was dragged kicking and screaming into making changes. But it does seem positive that they've made some revisions.
So the question that I have is, what is... left for the employees. So you wrote that OpenAI still has some methods it could use to retaliate, like the incorporation documents give the company sole and absolute discretion to reduce the vested equity holdings of any terminated employee to zero. And also there's absolute discretion over which employees can participate in the tender offers, where they can sell their equity. So to what extent are former employees free to criticize OpenAI after these revisions to the exit documents?
Kelsey Piper | 36:16.130
So I will say, I know one former employee who reached out to OpenAI after they canceled his non-disparagement agreement and said, thank you for taking this step. In order to feel comfortable, you know, speaking out and mentioning any concerns I have about the company, here's what I would also want to hear. And it was, yeah, some of the stuff that you've just discussed. Assurance that access to tender offers is not going to be denied to people on the basis of things they've said about the company. Assurance that the company is changing the language that lets it cancel former employees' shares. And just a statement from the company that, given the stakes of what they're doing, they welcome people disagreeing with them and expressing disagreement, assuming it's true, accurate disagreement that's based on real situations, and that that's appropriate and one of the company's values. He sent that email and requested they respond within seven days to sort of continue the conversation about whether anything else was going to change and what the company was going to do. He never heard back.
So I wouldn't say OpenAI is doing nothing internally, but I know that ex-employees who have tried to ask for some of these assurances, and tried to, like, get more clarification and a better understanding of what OpenAI means by this and where OpenAI stands with respect to people saying true critical things about it, have sort of not gotten any answers. And I know that internally, people at OpenAI are also pushing for answers to the question of what did senior leadership know? When did this happen? How did this happen? If nobody in senior leadership knew anything about this, then who did this? Because somebody did this. And my understanding is that they also haven't really received any answers.
So I think that... The current state of affairs is that OpenAI has taken some significant steps. They released people from the non-disparagement agreements that they were, you know, sort of coerced into signing with this threat of canceling their vested equity. They've made it clear they're not going to do that in the future. That's great. But when people have asked for additional clarifications about the many other routes that, you know, as little as a month ago, OpenAI was actively using, the reaction has been sort of either non-existent or very, we'll get back to you.
Jakub Kraus | 38:53.686
Okay. And are there any last comments you wanted to make about this NDA exit documents incident? I have a few final questions that sort of zoom out to the bigger picture.
Kelsey Piper | 39:06.842
One thing that we haven't gotten into very much is that OpenAI has tried to position themselves as a company that should be held to a higher standard, not just another big tech company, but one that is trying to build transformative technology that will change the way the whole world works, one that is using everybody's data in order to do it, and one that is, you know, opposing a lot of regulation on the grounds that it's better for the industry to self-regulate because things are happening so fast that the government just can't keep up. So they very much have this positioning of like, we need to be uniquely trustworthy. This was a common line for Sam Altman in interviews. He would say, of course, everybody in the world should have input into what we do. That's why we have this special hybrid structure. It's why I don't own a share in the company. It's so that we can be more accountable and more, you know, responsible than normal tech companies. This would be egregious even from a normal tech company. This would be like outrageous conduct if it was like some random ad tech company that just makes a lot of annoying mobile games. But OpenAI has said that they are much, much more than that. And I think that makes this much, much worse. And sort of seriously calls into question a lot about how they position themselves.
Jakub Kraus | 40:19.571
Are there particular other OpenAI controversies that stand out as concerning? They've frequently been involved in controversies over the past year: there was the board drama you mentioned, there was the Scarlett Johansson incident, the New York Times sued them for training on copyrighted data. There was a superalignment team that tried to ensure the safety of future AI systems that are so-called superintelligent, or smarter than humans, and they dissolved that team. They didn't give it the 20% of their computational resources they promised. So you mentioned that this particular drama stood out as quite bad. Is there another example of a specific practice or activity at OpenAI that you find particularly concerning?
Kelsey Piper | 41:15.947
Yeah, so of the ones that you mentioned, I am not particularly bothered by the Scarlett Johansson thing. My understanding is that whether OpenAI broke the law there turns on a bunch of complicated rules about impersonation and celebrity likenesses. And OpenAI certainly dug themselves a bit of a hole by referencing her and saying the killer app for OpenAI is Scarlett Johansson and things like that, when they didn't have Scarlett Johansson's permission to use her likeness and stuff.
But it's completely fine to have, you know, a cheery female assistant voice. And they weren't, in fact, training that cheery female assistant voice to specifically sound like ScarJo. And a bunch of people think it did sound like her and a bunch of people think it didn't sound like her. And I don't think that speaks particularly poorly to, like... anything about how the company is run necessarily. It would have been a big deal if they had trained it to deliberately impersonate ScarJo after getting a refusal from her, but it sounds like that's not what happened, so I'm not too worked up about that.
The board crisis is fascinating, and I think it's something that maybe a lot of people are revisiting now that there's more reason to. The board said, Sam Altman lied to us about a bunch of things. We can't trust him, and we're removing him as CEO. And I think at the time that came, like, completely out of the blue. People were like, Sam Altman, he's the coolest CEO in tech. He's awesome. And since then, there have been a lot more instances of him lying about things. So I think if the board had waited six months and then fired him and said, he keeps lying to us, there would have been less incredulity. I think people would have been more like, oh, that is consistent with, you know, the many documented cases of similar issues and things like that. The board handled it incredibly poorly. They didn't reach out in advance to OpenAI's major investors and explain their reasoning and sort of get their buy-in. They did it right before a tender offer. And a lot of people said, well, if Sam Altman was such a bad CEO, why did the employees, you know, so uniformly line up behind him to back his return? And I think there's two elements to that.
One is that I'm told there was a lot of social pressure. A lot of this was in person, like at an event at an employee's house. Most of the employees were there. People would walk up to them and say, will you sign this? You have to sign this. The company is going to collapse unless we all sign this. Will you please sign it right here? So, you know, it's very awkward to refuse your co-workers when they're urgently asking you to sign it on the spot. I think that doesn't necessarily say that much about what the results of an anonymous vote or something would have been. That's part of the story.
But the other part of the story is that there was a tender offer coming up. And my understanding is that Sam Altman doesn't actually do very much of, like, the day-to-day running of the company, but he does a lot of the investor relations and securing these tender offers. And the impression of everybody at OpenAI was that this tender offer that they were very much looking forward to, their chance to cash out their equity, was going to fall through if Altman was fired.
So all of these people had millions of dollars that they personally stood to lose, money they were expecting to have in hand as cash by the end of the year, and they weren't going to have it in hand by the end of the year as cash if Altman was fired. So they freaked out. And I, too, would probably be pretty nervous if I thought I was going to get millions of dollars and it looked like it was going to fall through. But, you know, that's maybe not as relevant to people outside the company who are trying to assess whether Altman is a good person to be at its helm.
So I think the board handled this very poorly on the communications front, obviously. And I think the board handled this atrociously just in terms of not anticipating the effects of the tender offer coming up, not getting things smoothed over with Microsoft. And then the question is, why did the board do such a bad job?
And a lot of people have observed that the board did not have very many people who were senior, experienced board members of serious large companies, which OpenAI at this point is. And that's partly because its board was set before it got larger. But the bigger reason is that Sam Altman wouldn't allow the board to appoint more board members, and he forced off some of the board's most senior members, like Reid Hoffman, who is a very qualified, very capable person who was originally on OpenAI's board, on conflict of interest grounds that to me seem pretty tenuous. Hoffman was forced off the board because he was an investor in another AI company, and we're not talking about Anthropic or Google or Facebook here, an AI company that was not really a direct competitor, and he wasn't really all that involved in it. And okay, every VC in Silicon Valley is invested in AI companies right now, all of them. If you're going to say that it's a conflict of interest to be involved in other AI companies, you have disqualified every person in the Silicon Valley VC world. And so you have a board that's mostly made up of people who are not in the Silicon Valley VC world. And so they miss stuff like the upcoming tender offer or how to handle investor relations and things like that.
So the board did a bad job, but I think the board did a bad job because Sam Altman sort of set them up for failure by removing from the board the people who would have done a good job. And when the board fired him, it was for what I suspect were sufficient grounds: the CEO got caught lying to the board. When I've talked with a lot of people about corporate relations between boards and CEOs, a number of them were like, yeah, you fire the CEO if he's caught lying to the board. That's a big deal. Like, there's no point in being a board if you can't say, you can't just lie to us. You know, obviously it depends somewhat on the details, but it's certainly not an outrageous thing to fire someone for.
And now Sam Altman has a board that he approves of and that is much less likely to fire him or, I worry, to offer any needful oversight of him.
Jakub Kraus | 47:24.363
So, you've been covering AI for years, and back in February 2019, about five years ago, you reported on OpenAI's then-new GPT-2 model. You prompted it to write a small part of your article. You've also written a lot about the risks the technology poses, particularly as AI approaches and exceeds human abilities on more and more tasks. And you're quite familiar with common perspectives in the AI industry, where the technology has been, where it's heading. It's come through in this interview that you're very in the know about a lot of the Silicon Valley culture. And some policy folks in DC are also very familiar with AI, but a lot are just starting to pay attention to it. So based on all your expertise on this subject, what is the message that you would most want to convey to a DC policy professional who's just starting to familiarize themselves with AI?
Kelsey Piper | 48:35.640
One thing that I write about a lot these days is that the AI community is very deeply divided over where to expect this technology to go. Like, this is a point of deep disagreement, even among very senior and very respected experts in the industry. There are a large number of senior and respected people who take extreme catastrophic risk scenarios from AI very seriously. Yoshua Bengio is one of the most senior and respected AI researchers, and he is someone who has worked some on risks both from rogue, independent AI and from misuse of powerful AI systems. And he is incredibly worth reading and worth taking seriously. There are a lot of other people, in surveys of machine learning researchers who publish at top conferences, who say they find the possibility of catastrophic outcomes from AI very credible, you know, a 5% or 10% chance of catastrophic outcomes. We're not talking about one in a thousand or one in a million. We're talking about five or ten percent. And then you also get some people who, to my mind, are, like, very overconfident about specific doom scenarios and specific ways that everything is going to go terribly.
And I think... The big thing that I would keep in mind as a policymaker is that these are open questions. Don't trust anyone who tells you that these are settled questions that we already know all the answers to. Some people will say, all the credible people agree with me, and it's only a tiny fringe that disagree. And there is very strong division over what this is going to look like with very serious, credible, reasonable people on all sides of that conversation. And it's much easier if you're a policymaker to look for consensus on AI to sort of see where everybody is at. And if you don't have that consensus, then it can be really hard to figure out how to approach something from a policy perspective when the experts you're talking to profoundly disagree. That's a real challenge.
I understand why some policymakers kind of want to sit there and wait for consensus to form. Unfortunately, I don't really think that's going to happen. I think by the time we have unambiguous enough results from AI systems that there's no longer disagreement about this, the form that unambiguous result takes might be a mass casualty event or might be massive destruction caused by AI systems. And so I think I would be excited about policymakers investing in the U.S. state having its own capacity to evaluate AI systems, to look at them, to understand what risks they pose, to do detailed evaluation work and to have the relationships with companies that let them exercise oversight. And then over time, that will obviously evolve.
But I think there isn't consensus. Anyone who says, oh, there's a consensus that everything is fine is not being straightforward. Anyone who says, oh, there's a consensus that we're all going to die is also not being straightforward. And you have to figure out what kinds of policy solutions make sense in the face of that uncertainty. But you can't just go, oh, we'll wait and see.
Jakub Kraus | 51:57.699
That makes a lot of sense. And where can the audience go if they want to read more of your writing or your work?
Kelsey Piper | 52:06.438
I'm on Vox. About once a week, I write the Future Perfect newsletter, which is also where the OpenAI story first broke. And I am on Twitter as Kelsey T-U-O-C.
Jakub Kraus | 52:21.139
Great. Kelsey, thank you so much for joining the podcast.
Kelsey Piper | 52:25.461
Yeah, thanks so much for having me.
Jakub Kraus | 52:30.208
Thanks for listening to the show. You can check out the Center for AI Policy Podcast Substack for a transcript and links. If you have any feedback, feel free to email me at jakub at AI policy dot US. Looking ahead, next episode will feature Stephen Casper of MIT discussing technical and sociotechnical AI safety research. I hope to see you there.