REAL AI: I’ve been ChatGPT’d, AI facts, headlines, and quote of the week
By Kevin Hawkins with Korey Hawkins | Vol. 3 Issue 34
REAL AI is a human-created weekly roundup of all things related to artificial intelligence in real estate and emerging AI innovations in other sectors likely to impact our industry. Please share this link with a friend so they can subscribe to our weekly newsletter for free – realai.blog – and they will thank you!
I’ve been ChatGPT’d
Have you ever trusted AI just a little too much? I did. And it backfired – big time.
After more than a year of near-flawless performance using ChatGPT 4o for data analysis and PDF report crunching, something shifted.
Enter ChatGPT 5. We use the ultra-safe Teams version.
Yet suddenly, everything that used to “just work” became unpredictable. Numbers got scrambled, reports cross-contaminated, and trust? It tanked.
What we’re seeing is more than a glitch. It’s a pattern. And it could have serious implications for anyone relying on AI to assist with reasoning – or even to create summaries.
When AI overthinks and underperforms
We noticed an alarming uptick in errors the moment we switched from 4o to 5. One of the most consistent problems? ChatGPT 5 mashes together data points from inside the same report. Or worse, merges similar stats from different months or documents as if they’re the same. It sounds plausible, it looks legit, but it’s wrong. Dangerously wrong.
Here’s a real-world example: I uploaded a monthly Market Report, along with a social post we had drafted for a long-time client to summarize it. I asked ChatGPT 5 to add social icons and to double-check the post’s accuracy against the PDF of the report I included.
It did, but then quietly altered the copy, merging two unrelated stats. It said the number of new listings that month was a record since 2019. But that wasn’t true: the monthly figure wasn’t the record – it was the year-to-date total that set the record. Big difference.
I didn’t catch it. The author of the report did.
I’d been ChatGPT’d.
Why is this happening now?
This isn’t just a one-off. We’re starting to see what researchers call an “illusion of thinking” in frontier models like GPT-5. Apple just published a paper under that exact title, and it’s incredibly revealing. (Thanks to Carlee Miller, a CS senior at ASU, for sending this to me.)
In controlled experiments, Apple’s researchers found that reasoning models like Claude and DeepSeek hold up as problem complexity increases – until things get hard enough. Then everything collapses, and performance drops to zero.
And get this: the models actually reduce their reasoning effort as the problem gets harder, even though they haven’t hit a token limit. In other words, they give up before they run out of room to think.
So, when ChatGPT 5 fails to distinguish between a monthly number and a year-to-date stat, it might not be laziness – it might be systemic.
Is Gen AI getting dumber before it gets smarter?
There’s growing concern that the next generation of AI is suffering from what some are calling a “model drift” problem: newer models, in trying to sound smarter, become more prone to reasoning errors and hallucinations.
One possible explanation? Models are being over-optimized for speed and fluency at the expense of truthfulness and step-by-step logic.
Gary Marcus has called this out repeatedly, and it tracks with what we’re experiencing: the models are too confident, even when they’re flat-out wrong. And that’s a terrible combination if you’re relying on them for Agentic AI or autonomous workflows.
This isn’t a “just wait and the models will catch up” situation. It’s potentially a trust crisis in the making.
Right now: be the agent-in-the-loop
One of the most important concepts we’ve advocated in this newsletter is being the agent-in-the-loop. It’s not just a best practice – it’s truly your last line of defense.
Here’s why.
When I tried to verify that same flawed post with another tool – Perplexity Pro – it cited the same incorrect interpretation and even provided quotes from the report to support it. It made the same mistake as ChatGPT 5, with the same false confidence.
I then sent the draft to my client, thinking AI had my back. It didn’t.
Trust is earned, not downloaded
This was a hard lesson, but a necessary one. I believed the AI had verified my work. But it hadn’t. Both AI tools “hallucinated” a data point. And I paid the price.
Remember: AI isn’t magic. It doesn’t really understand. And it still lies.
Right now, every model – including the best-of-the-best – needs oversight.
So, if you’re using Gen AI in your business, remain the agent-in-the-loop, always.
Because if you’re not? You might get ChatGPT’d too. (-Kevin)
AI Facts and Stats

1. 89% of U.S. enterprises are actively advancing AI initiatives – The Hackett Group
2. 92% of businesses are planning to increase AI investments by 2027 – Emulent
3. One-third (33%) of enterprise software applications will have built-in Agentic AI by 2028 – Gartner
4. Workers with AI skills earn 56% more than those without – PwC
5. 51% of IT workers cite governance and compliance as the foremost barrier to AI adoption across industries – Atomic Work
Source: Aloa.co (-Korey)
AI Headlines

Why ChatGPT-5 Fell Short of the Hype | 8/21/25 Built In
ChatGPT 5 represents a step both forward and backward in AI development.
Zero Trust + AI: Privacy in the Age of Agentic AI | 8/15/25 The Hacker News
With privacy eroding as Agentic AI evolves, ethical boundaries need to be set.
Acrobat Studio helps you work smarter with AI, from office to home to school | 8/19/25 Adobe Blog
Adobe’s new AI platform is mixing PDF tools and AI Assistants together.
These agents are ‘drowning.’ AI was supposed to save them | 8/20/25 Inman
AI adoption should not come at the price of personal customer service.
Companies have invested billions into AI, 95 percent getting zero return | 8/20/25 The Hill
A recent MIT report shows that the vast majority of firms investing in AI are seeing zero ROI.
Scammers have infiltrated Google’s AI responses – how to spot them | 8/21/25 ZDNET
Bad actors are exploiting AI-powered searching to target those looking for personal info. (-Korey)
AI Quote of the Week

Please subscribe to our free Real AI newsletter here – or share it with a friend!
Content suggestions welcomed: email korey@wavgroup.com