OpenAI's " deep research " is the latest artificial intelligence (AI) tool making waves and promising to do in minutes what would take hours for a human expert to complete.
Bundled as a feature in ChatGPT Pro and marketed as a research assistant that can match a trained analyst, it autonomously searches the web, compiles sources and delivers structured reports. It even scored 26.6% on Humanity's Last Exam (HLE), a tough AI benchmark, outperforming many models.
But deep research doesn't quite live up to the hype. While it produces polished reports, it also has serious flaws. According to journalists who've tried it, deep research can miss key details, struggle with recent information and sometimes invent facts.
OpenAI flags this when listing the limitations of its tool. The company also says it "can sometimes hallucinate facts in responses or make incorrect inferences, though at a notably lower rate than existing ChatGPT models, according to internal evaluations".
It's no surprise that unreliable data can slip in, since AI models don't "know" things in the same way humans do.
The idea of an AI "research analyst" also raises a slew of questions. Can a machine - no matter how powerful - truly replace a trained expert? What would be the implications for knowledge work? And is AI really helping us think better, or just making it easier to stop thinking altogether?
What is 'deep research' and who is it for?
Marketed towards professionals in finance, science, policy, law and engineering, as well as academics, journalists and business strategists, deep research is the latest "agentic experience" OpenAI has rolled out in ChatGPT. It promises to do the heavy lifting of research in minutes.
Currently, deep research is only available to ChatGPT Pro users in the United States, at a cost of US$200 per month. OpenAI says it will roll out to Plus, Team and Enterprise users in the coming months, with a more cost-effective version planned for the future.
Unlike a standard chatbot that provides quick responses, deep research follows a multi-step process to produce a structured report (a code sketch of this loop follows the list):
- The user submits a request. This could be anything from a market analysis to a legal case summary.
- The AI clarifies the task. It may ask follow-up questions to refine the research scope.
- The agent searches the web. It autonomously browses hundreds of sources, including news articles, research papers and online databases.
- It synthesises its findings. The AI extracts key points, organises them into a structured report and cites its sources.
- The final report is delivered. Within five to 30 minutes, the user receives a multi-page document - potentially even a PhD-level thesis - summarising the findings.
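To make this loop concrete, here is a minimal sketch of what such an agentic research pipeline can look like in code. It is not OpenAI's implementation: `search_web`, `fetch_page` and `summarise` are hypothetical stand-ins for the search, retrieval and language-model calls a real agent would make.

```python
# Minimal sketch of an agentic research loop: search, read, extract, compile.
# All three helper functions are hypothetical placeholders, not real APIs.
from dataclasses import dataclass, field


@dataclass
class Source:
    url: str
    excerpt: str


@dataclass
class Report:
    question: str
    findings: list[str] = field(default_factory=list)
    sources: list[Source] = field(default_factory=list)


def search_web(query: str, limit: int = 2) -> list[str]:
    """Hypothetical search step; a real agent would call a search API."""
    return [f"https://example.com/{query.replace(' ', '-')}/{i}" for i in range(limit)]


def fetch_page(url: str) -> str:
    """Hypothetical retrieval step; a real agent would download and clean the page."""
    return f"Placeholder text retrieved from {url}"


def summarise(question: str, text: str) -> str:
    """Hypothetical synthesis step; a real agent would prompt a language model."""
    return f"Key point for '{question}': {text[:60]}..."


def deep_research(question: str, follow_up_queries: list[str]) -> Report:
    """Run the multi-step loop the list above describes."""
    report = Report(question=question)
    for query in [question, *follow_up_queries]:  # step 2: refined scope
        for url in search_web(query):  # step 3: browse sources
            text = fetch_page(url)
            report.findings.append(summarise(question, text))  # step 4: synthesise
            report.sources.append(Source(url=url, excerpt=text[:80]))
    return report  # step 5: deliver the structured report


if __name__ == "__main__":
    result = deep_research(
        "e-bike market analysis",
        follow_up_queries=["e-bike sales statistics", "e-bike regulation"],
    )
    print(f"Report on: {result.question}")
    for point in result.findings:
        print("-", point)
    print(f"{len(result.sources)} sources cited")
```

A real agent swaps the stubs for live search and model calls and loops until it judges the question answered, but the shape of the pipeline is the same - which is also why every weakness of the underlying model flows straight into the final report.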
At first glance, it sounds like a dream tool for knowledge workers. But a closer look reveals significant limitations.
Many early tests have exposed shortcomings:
- It lacks context. AI can summarise, but it doesn't fully understand what's important.
- It ignores new developments. It has missed major legal rulings and scientific updates.
- It makes things up. Like other AI models, it can confidently generate false information.
- It can't tell fact from fiction. It doesn't distinguish authoritative sources from unreliable ones.
While OpenAI claims its tool rivals human analysts, AI inevitably lacks the judgement, scrutiny and expertise that make good research valuable.
What AI can't replace
ChatGPT isn't the only AI tool that can scour the web and produce reports with just a few prompts. Notably, a mere 24 hours after OpenAI's release, Hugging Face released a free, open-source version that nearly matches its performance.
The biggest risk of deep research and other AI tools marketed for "human-level" research is the illusion that AI can replace human thinking. AI can summarise information, but it can't question its own assumptions, highlight knowledge gaps, think creatively or understand different perspectives.
And AI-generated summaries don't match the depth of a skilled human researcher.
Any AI agent, no matter how fast, is still just a tool, not a replacement for human intelligence. For knowledge workers, it's more important than ever to invest in skills that AI can't replicate: critical thinking, fact-checking, deep expertise and creativity.
If you do want to use AI research tools, there are ways to do so responsibly. Thoughtful use of AI can enhance research without sacrificing accuracy or depth. You might use AI for efficiency, like summarising documents, but retain human judgement for making decisions.
Always verify sources, as AI-generated citations can be misleading. Don't trust conclusions blindly, but apply critical thinking and cross-check information with reputable sources. For high-stakes topics - such as health, justice and democracy - supplement AI findings with expert input.
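To make "verify sources" concrete, here is a small sketch, using only Python's standard library, that checks whether each cited URL actually resolves. The URLs below are placeholders; a link that resolves can still be unreliable, so this only catches fabricated or dead citations, and judging quality remains a human task.

```python
# Check that citations in an AI-generated report at least point to live pages.
# This catches fabricated or dead links only; it says nothing about quality.
import urllib.error
import urllib.request


def url_resolves(url: str, timeout: float = 10.0) -> bool:
    """Return True if the URL answers an HTTP HEAD request with a status below 400."""
    request = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.status < 400
    except (urllib.error.URLError, ValueError):
        return False


# Placeholder citations for illustration; substitute the report's own links.
citations = [
    "https://example.com/",
    "https://example.com/plausible-looking-study-that-may-not-exist",
]

for url in citations:
    verdict = "resolves" if url_resolves(url) else "could not be verified - check manually"
    print(f"{url}: {verdict}")
```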
Despite prolific marketing that tries to tell us otherwise, generative AI still has plenty of limitations. Humans who can creatively synthesise information, challenge assumptions and think critically will remain in demand - AI can't replace them just yet.