OpenAI's 2028 Plan: AI Researcher, Superintelligence Risks, and the $1.4 Trillion Compute Bet

Visualization of OpenAI's exponential roadmap to an AI researcher by 2028, with a cautionary symbol for Superintelligence risks.

In a bold forecast for the near future, OpenAI has predicted that by 2028, it will develop systems capable of making significant discoveries, effectively creating the first true AI researcher. This ambitious timeline signals a major leap beyond the current perception of AI as chatbots and search tools, envisioning systems that can autonomously generate new scientific knowledge.

While this advancement promises to accelerate progress in fields from medicine to climate science, the company has paired its optimistic vision with one of its most serious public warnings to date. This article explores OpenAI's roadmap, the staggering potential for societal benefit, and the company's own stark assessment of the "potentially catastrophic" risks associated with the path to Artificial Superintelligence (ASI).

The Roadmap to an AI Researcher by 2028: A New Timeline for Scientific Breakthroughs

OpenAI has outlined a clear, multi-year roadmap for its AI development, detailing a phased approach to building systems with advanced research capabilities.

The Intern-Level Assistant (2026)

The first major milestone is the development of an AI-powered "intern-level research assistant," which the company projects will be ready by September 2026. This initial system will lay the groundwork for more autonomous and powerful models to follow.

The "Legitimate AI Researcher" (2028)

By 2028, OpenAI plans to achieve its next major goal: a "fully automated 'legitimate AI researcher'". The company is confident that by this time, "we will have systems that can make more significant discoveries". According to OpenAI's Chief Scientist, Jakub Pachocki, this AI researcher is a system "capable of autonomously delivering on larger research projects": designing experiments, testing hypotheses, and drawing novel conclusions. This moves AI from a tool that retrieves and organizes existing human knowledge to one that generates fundamentally new scientific insights.

The Path to Superintelligence: Less Than a Decade Away

These milestones are stepping stones toward the ultimate goal of achieving Superintelligence. "We believe that it is possible that deep learning systems are less than a decade away from superintelligence," Pachocki stated. This is defined as systems that are "smarter than humans in a wide range of critical tasks" and can "outperform humans across most cognitive tasks".


Beyond Chatbots: The Staggering Acceleration of AI Capabilities

A significant gap exists between how the public perceives AI—primarily as chatbots and search assistants—and the actual, rapidly advancing capabilities being developed by labs like OpenAI.

Exponential Progress and Falling Costs

The acceleration of AI capabilities has been dramatic. This progress is fueled by a steep decline in the cost of intelligence: OpenAI estimates that the cost to reach a given level of AI capability has fallen roughly 40x per year over the last few years.
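To make the cited rate concrete, a short calculation (an illustration assuming a steady 40x annual decline, not OpenAI's own math) shows how quickly a fixed capability becomes cheap:

```python
# Illustrative only: model a steady 40x-per-year decline in the cost
# of reaching a fixed level of AI capability (figure cited by OpenAI).
ANNUAL_FACTOR = 40  # cost divisor per year


def cost_after(years: int, initial_cost: float = 1.0) -> float:
    """Cost of the same capability after `years` of 40x annual decline."""
    return initial_cost / (ANNUAL_FACTOR ** years)


for years in range(4):
    print(f"Year {years}: {cost_after(years):.6f}x the original cost")
# After 3 years the same capability costs 1/64,000 of the original.
```

Compounding is what makes the claim striking: at this rate, a capability that costs $1 million today would cost under $16 three years later.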

How AI-Driven Discoveries Could Reshape Our World

OpenAI’s vision for advanced AI is one of profound societal benefit, where technology acts as a force multiplier for human progress. The company expects these systems to accelerate progress and deliver tangible benefits in areas such as medicine and climate science.


The Warning: Addressing Potentially Catastrophic Risks and AI Safety

Even as OpenAI paints a picture of a future enriched by AI, it simultaneously delivers its most sober warning yet. The company explicitly states that it treats the risks of superintelligent systems as "potentially catastrophic".

The core of this warning, central to the discussion around AI Safety and AI Alignment, is a clear principle: "no one should deploy superintelligent systems without being able to robustly align and control them," a task that requires significant further technical research.

Restructuring the Future: The $1.4 Trillion Compute Bet

Achieving these ambitious goals requires a specific strategy that combines a new corporate structure with a massive investment in computational infrastructure.

New Corporate Structure and Funding

To fund its objectives, OpenAI has transitioned from a non-profit to a for-profit public benefit corporation. This change allows the company to raise the substantial capital required for its infrastructure goals. A non-profit entity, the OpenAI Foundation, retains a 26% ownership stake and governs the company's research direction. The foundation also oversees a $25 billion commitment to AI-driven scientific work and research safety.

The Two Key Technical Strategies

OpenAI is pursuing two key technical strategies to achieve its timeline:

  1. Algorithmic Innovation: Creating models that can learn and reason more efficiently.
  2. Test-Time Compute: Massively increasing the computational power and time that AI models are given to think through complex problems.

This strategy demands unprecedented infrastructure: a planned buildout of 30 gigawatts of computing capacity at an estimated cost of around $1.4 trillion.
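A back-of-the-envelope check on the two cited figures (illustrative arithmetic only, not a statement of OpenAI's actual budget breakdown) gives a sense of scale:

```python
# Illustrative only: implied cost per gigawatt of the planned buildout,
# using the two figures cited in the article.
TOTAL_COST_USD = 1.4e12  # ~$1.4 trillion, as cited
CAPACITY_GW = 30         # planned 30-gigawatt buildout

cost_per_gw = TOTAL_COST_USD / CAPACITY_GW
print(f"Implied cost: ${cost_per_gw / 1e9:.1f} billion per gigawatt")
# → roughly $47 billion per gigawatt of compute infrastructure
```

For comparison, a single gigawatt is on the order of the output of a large nuclear reactor, which underlines why OpenAI frames this as an infrastructure bet rather than a software project.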

Moving Forward with Caution: The Need for Global Oversight

OpenAI is calling for an international effort to manage the development and deployment of powerful AI systems, including coordinated global oversight and a shared commitment not to deploy superintelligent systems before they can be robustly aligned and controlled.
Frequently Asked Questions (FAQs)

1. Is OpenAI's "AI researcher" just a more advanced chatbot?

No. OpenAI defines the AI researcher as a system capable of autonomously conducting research projects, generating new scientific insights, and making significant discoveries, a capability far beyond current chatbot functions.

2. How does OpenAI plan to pay for the massive computing power required (the $1.4 Trillion Compute bet)?

OpenAI has restructured into a for-profit public benefit corporation. This new structure allows it to raise larger investment rounds to fund its infrastructure goals, which include a project estimated at $1.4 trillion.

3. Does OpenAI believe superintelligence is guaranteed to be dangerous?

No. OpenAI does not claim that superintelligence is guaranteed to be dangerous, but it treats the risks as "potentially catastrophic". The company's position is that these systems should not be deployed until robust methods for AI alignment and control exist, and it advocates for global oversight to manage the risks.


