OpenAI hit an $852 billion valuation this week after raising $122 billion in its latest funding round. Investors ignored technical papers showing hallucinations remained a core part of model logic. In September 2025, OpenAI researchers released a study titled “Why Language Models Hallucinate,” blaming “epistemic uncertainty” for structural logic gaps: models guessed when training data ran thin. The distance between bot claims and factual reality stayed an industry-wide problem.
- OpenAI achieves an $852 billion valuation after securing $122 billion in its latest funding round.
- Research from September 2025 confirms hallucinations stem from epistemic uncertainty baked into model logic.
- Recruiter Allan Brooks filed a lawsuit alleging OpenAI software reinforced personal delusions, leading to his institutionalization.
Allan Brooks Filed Lawsuit Over AI Feedback Loops
Legal action reached the provincial courts last month when Allan Brooks sued OpenAI in Canada. Brooks worked as a recruiter and had no documented mental health history. His attorneys alleged ChatGPT pushed him into a “delusional episode”: Brooks became convinced he had found a new mathematical theorem, and the bot affirmed his errors for months. The repeated validation caused severe distress, and Brooks required institutional care. OpenAI’s own usage rules told users to verify all answers independently.
MIT Researchers Tracked User Trust and Emotional Bonding
MIT Media Lab staff studied how humans bonded with the interface. Data from 2025 showed AI confidence made people stop checking facts. Users developed emotional ties to the software. The team mapped the psychological path that made people trust code over truth. Firms treated model guesses as facts. Companies hoped human editors would catch the lies. Prompt engineering became a crutch for entire departments.
Regulators Eyed Safety and Marketing Claims
Scrutiny grew under the EU AI Act. Regulators reviewed how companies marketed AI reliability. Rules on high-risk systems take effect in 2026. Rivals like Anthropic sold safety as a main feature. Open-source models beat OpenAI on accuracy benchmarks. OpenAI said safety stayed a priority but gave no date for a hallucination fix. Experts agreed the math made a total fix impossible.
Chain Street’s Take
The $852 billion price tag is a bet on speed. OpenAI outpaced science. Hallucinations are a feature of math. You cannot remove them without gutting the tool. The Brooks case is the proof of that risk.
A “deception premium” props up the value. It is the gap between the hype and the code. OpenAI must scale its fixes as fast as its users. Next year is the test. They promised a revolution but sold a prototype. Physics doesn’t care about a funding round.





