
OpenAI CEO Sam Altman has issued a stark warning about AI dependency risks following the release of the company’s most advanced model yet, GPT-5. His remarks, reported by The Times of India and The Indian Express, underline growing concerns over emotional over-reliance on artificial intelligence, potential misuse, and the urgent need for societal safeguards.
Emotional Attachment Driving AI Dependency Risks
Speaking to media outlets including The Times of India, Altman expressed concern about users forming strong emotional connections with AI models such as GPT-4 and GPT-5. He warned that such bonds could gradually blur the line between human and machine interaction, leading to over-reliance on AI for decision-making in personal and professional life.
He also argued that AI-generated material now mixes reality and fiction so seamlessly that public trust in authentic human communication may erode. “Media is always a little bit real and a little bit not real – and AI will make that line disappear further,” cautioned Altman.
GPT-5: A Leap Forward and a Cause for Caution
According to The Times of India’s coverage, Altman compared the rapid leap from GPT-4 to GPT-5 to the Manhattan Project — a technological advance of unprecedented scale with profound ethical ramifications. The model, unveiled in August 2025, demonstrates speed and capabilities that even its creators did not fully anticipate. Some OpenAI team members reportedly experienced “disorientation” during early trials, fueling debate over whether AI development is progressing faster than global governance can adapt. This sentiment was echoed in The Indian Express’s report highlighting the company’s internal discussions on responsible deployment.
Fraud and Security: A Hidden Side of AI Dependency Risks
Another concern Altman highlighted — also noted in The Times of India’s tech coverage — is AI’s capacity to impersonate voices and create realistic fake videos. If misused, this technology could trigger a significant fraud crisis, especially as some banks and institutions still rely on voice authentication.
Altman supports the development of “proof of human” technologies to protect identity verification systems and curb the threat of AI-driven impersonation scams, particularly in the run-up to major elections and in high-value financial transactions.
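The article does not describe how such a system would work, but one plausible shape — purely an illustrative sketch, with all names and the signing scheme assumed rather than drawn from any real product — is a signed attestation: a trusted verifier confirms a real human (for example via an in-person or biometric check) and signs the user’s ID, and relying services later check that signature instead of trusting a voice or video alone.

```python
import hashlib
import hmac

# Illustrative only: a hypothetical "proof of human" attestation flow.
# The verifier holds a signing key; relying services verify the signature.
SECRET_KEY = b"verifier-signing-key"  # assumed key, held by the verifier

def issue_attestation(user_id: str) -> str:
    """Verifier signs the user ID after confirming a real human."""
    sig = hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()
    return f"{user_id}:{sig}"

def check_attestation(token: str) -> bool:
    """Relying service checks the signature before trusting the identity."""
    user_id, _, sig = token.partition(":")
    expected = hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```

A real deployment would use public-key signatures rather than a shared secret, so that services could verify attestations without being able to forge them; the HMAC here only keeps the sketch short.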
OpenAI Reacts: Washington Office and Talent Retention
In response to these AI dependency risks, OpenAI is taking steps to influence policy and talent dynamics. The company is establishing its first office in Washington, DC, to engage directly with lawmakers and educators, aiming to help shape legislation that keeps pace with AI development.
In parallel, as reported by The Indian Express, OpenAI has been awarding million-dollar bonuses to retain top AI engineers and researchers — a clear sign of the competitive and high-stakes nature of the AI sector in 2025.
The Human Workplace and AI’s Role
Altman believes that by the end of 2025, AI agents will play a much larger role in the workforce, boosting productivity but also reshaping entire job categories. While fears of job loss persist, he predicts new opportunities will arise — provided society addresses the core issue of AI dependency risks before they spiral out of control.
Conclusion
The simultaneous excitement and fear surrounding AI tools like GPT-5 underscore Altman’s cautionary message. The challenge ahead lies in leveraging AI’s immense potential while actively mitigating AI dependency risks — from personal over-reliance to fraud vulnerabilities.
References:
- The Times of India – Sam Altman warns of emotional attachment to AI models.
- The Indian Express – OpenAI awards million-dollar bonuses to retain employees.
FAQs
What is "proof of human" technology?
It is a technology designed to verify that an online user or action is from a real person, not a bot or an AI. This is meant to protect against fraud and impersonation.
How is GPT-5 different from previous versions?
GPT-5 is described as a significant leap in speed and capability. It reportedly has advanced reasoning, a much lower tendency to "hallucinate" or invent false information, and a more seamless, adaptable interface.
Why did Sam Altman compare GPT-5 to the Manhattan Project?
Altman used the comparison to highlight the unprecedented scale and speed of the technological advancement, which has profound ethical and societal implications that may be outpacing global governance and understanding.
Is OpenAI working to address these risks?
Yes, the company is establishing a new office in Washington, D.C. to work directly with lawmakers on policy and regulation. It is also investing heavily in retaining top talent to focus on responsible development.