OpenAI has now introduced a personal data deletion request form that allows people – mainly in Europe, but also in Japan – to request that information about them be removed from OpenAI’s systems. It’s described in an OpenAI blog post about how the company develops its language models.
The form appears to be primarily designed to request that information be removed from responses ChatGPT provides to users, rather than from the training data. It asks you to provide your name; e-mail; the country where you are located; whether you are making the request for yourself or on behalf of someone else (for example, a lawyer making a request on behalf of a client); and whether you are a public person, such as a celebrity.
OpenAI then asks for proof that its systems have mentioned you. It asks you to provide the “relevant prompts” that led to you being mentioned, as well as screenshots where you appear. “In order to properly address your requests, we need clear evidence that the model has knowledge of the data subject, conditioned on the prompts,” the form reads. It asks you to swear that the information is accurate and to acknowledge that OpenAI may not be able to delete the data in every case. The company says it will balance “privacy and free speech” when making decisions about people’s takedown requests.
Daniel Leufer, a senior policy analyst at the digital rights nonprofit Access Now, says the changes OpenAI has made over the past few weeks are acceptable but address only “the low-hanging fruit” when it comes to data protection. “They still haven’t done anything to address the more complex, systemic problem of how people’s data was used to train these models, and I expect this isn’t an issue that will just go away, especially with the establishment of the EDPB task force on ChatGPT,” says Leufer, referring to the European data protection regulators coming together to examine OpenAI.
“Individuals may also have the right to access, correct, restrict, delete, or transfer their personal information that may be included in our training information,” OpenAI’s help center page says. To exercise these rights, it recommends emailing the company’s data protection staff at [email protected]. People who have already requested their data from OpenAI have not been impressed by the responses. And Italy’s data regulator says OpenAI has claimed it is “technically impossible” to correct inaccuracies at this point.
How to delete your ChatGPT chat history
You have to be careful what you tell ChatGPT, especially given OpenAI’s limited data deletion options. By default, the conversations you have with ChatGPT can be used by OpenAI in its future large language models as training data. This means that, at least theoretically, the information could be reproduced in response to future questions from people. On April 25, the company introduced a new setting that allows anyone to stop this process, no matter where they are in the world.
Once logged in to ChatGPT, click on your user profile in the bottom left corner of the screen, click Settings, and then Data controls. Here you can disable Chat history and training. OpenAI says disabling your chat history means data you enter in conversations “is not used to train and improve our models.”
As a result, anything you enter into ChatGPT, such as information about yourself, your life, and your work, should not reappear in future iterations of OpenAI’s large language models. OpenAI says that when chat history is turned off, all conversations are retained for 30 days “to check for abuse” and then permanently deleted.
When your chat history is turned off, ChatGPT urges you to turn it back on by placing a prominent button in the sidebar – a stark contrast to the “off” setting, which is buried in the settings menu.