OpenAI CEO warns ChatGPT conversations could be disclosed in court.

Sam Altman has cautioned that as users increasingly confide personal matters in AI, existing legal frameworks provide no privacy safeguards.

OpenAI CEO Sam Altman has acknowledged that the industry is still grappling with how to safeguard user privacy during sensitive exchanges with artificial intelligence.

He cautioned that present systems do not adequately protect confidential discussions, even as millions of users, including minors, increasingly turn to AI chatbots for therapeutic and emotional guidance.

During an appearance on the This Past Weekend podcast, released last week, Altman said that individuals should not expect legal privacy when engaging with ChatGPT, attributing this to the absence of a legal or policy framework governing AI.

“Individuals share the most personal aspects of their lives with ChatGPT,” he noted.

Altman disclosed that numerous AI users, especially younger demographics, interact with the chatbot as if it were a therapist or life coach, seeking guidance on relational and emotional challenges.

Nevertheless, unlike discussions with legal professionals or psychotherapists, which are safeguarded by legal privilege or confidentiality, no equivalent protections currently apply to AI interactions. “We have yet to resolve this issue concerning conversations with ChatGPT,” he further stated.

Altman emphasized that the matter of confidentiality and privacy in AI exchanges requires immediate consideration. “Therefore, if you discuss your most sensitive information with ChatGPT and a lawsuit or similar situation arises, we might be compelled to disclose that, which I believe is deeply problematic,” he commented.

OpenAI says it deletes free-tier ChatGPT conversations after 30 days; however, certain discussions may be retained for legal or security purposes.

The company is currently defending against a lawsuit filed by The New York Times, which alleges copyright infringement stemming from the use of Times articles to train its AI models.

This lawsuit has required OpenAI to retain conversations from millions of ChatGPT users, excluding those of enterprise clients, a directive the company has appealed, calling it an “overreach.”

Recent studies suggest a link between ChatGPT use and psychosis in certain users. Researchers are voicing growing concern that AI chatbots, as they become more deeply woven into people's personal and emotional lives, could worsen existing psychiatric conditions.