Regular ChatGPT users (including the author of this article) may or may not have noticed that the hit chatbot from OpenAI lets users open a "temporary chat" designed to wipe all information exchanged between the user and the underlying AI model as soon as the chat session is closed.
In addition, OpenAI allows users to manually delete prior ChatGPT sessions from the left sidebar on the web and in the desktop/mobile apps, by left-clicking or control-clicking them, or by long-pressing them in the selector.

However, this week OpenAI found itself facing criticism from some ChatGPT users after they discovered that the company has not actually been deleting these chat logs as previously indicated.
"You're telling me my deleted chatgpt chats are actually not deleted and [are] being saved to be investigated by a judge?" posted X user @ns123abc. The comment drew more than a million views.
Another user, @kepano, added: "you can 'delete' a ChatGPT chat, however all chats must be retained due to legal obligations?"
As AI influencer and software engineer Simon Willison wrote on his personal blog: "Paying customers of [OpenAI's] APIs may well make the decision to switch to other providers who can offer retention policies that aren't subverted by this court order!"
In fact, OpenAI confirmed it has been preserving deleted and temporary user chat logs since mid-May 2025 in response to a federal court order, though it didn't disclose this to users until June 5th.
The order, embedded below and issued on May 13, 2025, by U.S. Magistrate Judge Ona T. Wang, requires OpenAI to "preserve and segregate all output log data that would otherwise be deleted on a going forward basis," including chats deleted by user request or due to privacy obligations.
The court's directive stems from The New York Times (NYT) v. OpenAI and Microsoft, a copyright case now a year and a half old and still being litigated. The NYT's attorneys allege that OpenAI's language models regurgitate copyrighted news content verbatim. The plaintiffs argue that logs, including those that users may have deleted, could contain infringing outputs relevant to the lawsuit.
While OpenAI complied with the order immediately, it didn't publicly notify affected users for more than three weeks, eventually issuing a blog post and FAQ describing the legal mandate and outlining who is impacted.
However, OpenAI is placing the blame squarely on the NYT and the judge's order, saying it believes the preservation demand to be "baseless."
OpenAI clarifies what's happening with the court order to preserve ChatGPT user logs, including which chats are impacted
In a blog post published yesterday, OpenAI COO Brad Lightcap defended the company's position and stated that it was advocating for user privacy and security against an overly broad judicial order, writing:
"The New York Times and other plaintiffs have made a sweeping and unnecessary demand in their baseless lawsuit against us: retain consumer ChatGPT and API customer data indefinitely. This fundamentally conflicts with the privacy commitments we have made to our users."
The post clarified that ChatGPT Free, Plus, Pro and Team users, along with API customers without a zero data retention (ZDR) agreement, are affected by the preservation order, meaning that even if users on these plans delete their chats or use temporary chat mode, their chats will be stored for the foreseeable future.
However, ChatGPT Enterprise and Edu subscribers, as well as API clients using ZDR endpoints, are not impacted by the order, and their chats will be deleted as directed.
The retained data is held under legal hold, meaning it is stored in a secure, segregated system and accessible only to a small number of legal and security personnel.
"This data is not automatically shared with The New York Times or anyone else," Lightcap emphasized in OpenAI's blog post.
Sam Altman floats new idea of 'AI privilege' allowing for confidential conversations between models and users, similar to speaking with a human doctor or lawyer
OpenAI CEO and co-founder Sam Altman also addressed the issue publicly in a post from his account on the social network X last night, writing:
"recently the NYT asked a court to force us to not delete any user chats. we think this was an inappropriate request that sets a bad precedent. we are appealing the decision. we will fight any demand that compromises our users' privacy; this is a core principle."
He also suggested that a broader legal and ethical framework may be needed for AI privacy:
"we have been thinking recently about the need for something like 'AI privilege'; this really accelerates the need to have the conversation."
"imo talking to an AI should be like talking to a lawyer or a doctor."
"i hope society will figure this out soon."
The notion of AI privilege, as a potential legal standard, echoes attorney-client and doctor-patient confidentiality.
Whether such a framework would gain traction in courtrooms or policy circles remains to be seen, but Altman's remarks indicate OpenAI may increasingly advocate for such a shift.
What comes next for OpenAI and your temporary/deleted chats?
OpenAI has filed a formal objection to the court's order, requesting that it be vacated.
In court filings, the company argues that the demand lacks a factual basis and that preserving billions of additional data points is neither necessary nor proportionate.
Judge Wang, in a May 27 hearing, indicated that the order is temporary. She instructed the parties to develop a sampling plan to test whether deleted user data materially differs from retained logs. OpenAI was ordered to submit that proposal by today (June 6), but I have yet to see the filing.
What it means for enterprises and decision-makers responsible for ChatGPT usage in corporate environments
While the order exempts ChatGPT Enterprise and API customers using ZDR endpoints, the broader legal and reputational implications matter deeply for professionals responsible for deploying and scaling AI solutions inside organizations.
Those who oversee the full lifecycle of large language models (LLMs), from data ingestion to fine-tuning and integration, will need to reassess assumptions about data governance. If user-facing components of an LLM are subject to legal preservation orders, it raises urgent questions about where data goes after it leaves a secure endpoint, and how to isolate, log or anonymize high-risk interactions.
Any platform touching OpenAI APIs should validate which endpoints (ZDR vs. non-ZDR) are used and ensure data-handling policies are reflected in user agreements, audit logs and internal documentation.
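As a purely illustrative sketch of that validation step (every name below is hypothetical; ZDR status is an account-level agreement with OpenAI, not something the API exposes as a flag), a platform team might route outbound calls through a thin wrapper that records each call's retention class in an internal audit log:

```python
import time

# Hypothetical internal registry of which configured endpoints fall under
# the organization's zero data retention (ZDR) agreement. Illustrative only;
# this is not part of OpenAI's SDK.
ZDR_ENDPOINTS = {"https://api.example-zdr-gateway.internal/v1/chat"}


def audit_api_call(endpoint: str, covered_by_agreement: bool, audit_log: list) -> dict:
    """Classify an outbound API call as ZDR or non-ZDR and append an
    audit record so data-handling policy stays traceable per request."""
    retention_class = (
        "ZDR" if endpoint in ZDR_ENDPOINTS else "non-ZDR (subject to preservation)"
    )
    record = {
        "ts": time.time(),
        "endpoint": endpoint,
        "retention_class": retention_class,
        "covered_by_user_agreement": covered_by_agreement,
    }
    audit_log.append(record)
    return record


audit_log: list = []
rec = audit_api_call("https://api.openai.com/v1/chat/completions", True, audit_log)
print(rec["retention_class"])  # → non-ZDR (subject to preservation)
```

The point of the wrapper is simply that every request leaves a record of which retention regime it fell under, which is what auditors and user agreements can then be checked against.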
Even when ZDR endpoints are used, data lifecycle policies may require review to confirm that downstream systems (analytics, logging, backups) don't inadvertently retain transient interactions that were presumed short-lived.
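A minimal sketch of such a review, assuming a hypothetical in-house log schema in which each downstream record carries a `transient` flag and a timestamp, is a periodic sweep that purges transient-chat records once they exceed a short TTL:

```python
import time


def purge_transient_records(records: list, ttl_seconds: int, now: float) -> list:
    """Drop records flagged as transient once they exceed the TTL,
    leaving durable records untouched. Hypothetical schema, for
    illustration of the policy check only."""
    return [
        r for r in records
        if not (r["transient"] and now - r["ts"] > ttl_seconds)
    ]


now = time.time()
store = [
    {"id": 1, "transient": True, "ts": now - 7200},   # stale temporary chat: purged
    {"id": 2, "transient": True, "ts": now - 60},     # recent, still within TTL
    {"id": 3, "transient": False, "ts": now - 7200},  # durable record: kept
]
remaining = purge_transient_records(store, ttl_seconds=3600, now=now)
print([r["id"] for r in remaining])  # → [2, 3]
```

Running an equivalent sweep over analytics and backup stores is one way to confirm that "temporary" interactions actually stay temporary downstream, whatever the upstream provider retains.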
Security officers responsible for managing risk must now expand threat modeling to include legal discovery as a potential vector. Teams must verify whether OpenAI's backend retention practices align with internal controls and third-party risk assessments, and whether users are relying on features like "temporary chat" that no longer function as expected under legal preservation.
A new flashpoint for user privacy and security
This moment is not just a legal skirmish; it's a flashpoint in the evolving conversation around AI privacy and data rights. By framing the issue as a matter of "AI privilege," OpenAI is effectively proposing a new social contract for how intelligent systems handle confidential inputs.
Whether courts or lawmakers accept that framing remains uncertain. But for now, OpenAI is caught in a balancing act between legal compliance, enterprise assurances and user trust, and it faces louder questions about who controls your data when you talk to a machine.