In a bold forecast for the near future, OpenAI has predicted that by 2028, it will develop systems capable of making significant discoveries, effectively creating the first true AI researcher. This ambitious timeline signals a major leap beyond the current perception of AI as chatbots and search tools, envisioning systems that can autonomously generate new scientific knowledge.
While this advancement promises to accelerate progress in fields from medicine to climate science, the company has paired its optimistic vision with one of its most serious public warnings to date. This article explores OpenAI's roadmap, the staggering potential for societal benefit, and the company's own stark assessment of the "potentially catastrophic" risks associated with the path to Artificial Superintelligence (ASI).
OpenAI has outlined a clear, multi-year roadmap for its AI development, detailing a phased approach to building systems with advanced research capabilities.
The first major milestone is the development of an AI-powered "intern-level research assistant," which the company projects will be ready by September 2026. This initial system will lay the groundwork for more autonomous and powerful models to follow.
By 2028, OpenAI plans to achieve its next major goal: a fully automated "legitimate AI researcher." The company is confident that by then, "we will have systems that can make more significant discoveries." According to OpenAI's Chief Scientist, Jakub Pachocki, this 2028 AI researcher is defined as a system "capable of autonomously delivering on larger research projects," including designing experiments and testing hypotheses. This moves AI from a tool that retrieves and organizes existing human knowledge to one that generates fundamentally new scientific insights.
These milestones are stepping stones toward the ultimate goal of superintelligence. "We believe that it is possible that deep learning systems are less than a decade away from superintelligence," Pachocki stated. OpenAI defines this as systems that are "smarter than humans in a wide range of critical tasks" and can "outperform humans across most cognitive tasks."
A significant gap exists between how the public perceives AI, primarily as chatbots and search assistants, and the actual, rapidly advancing capabilities being developed by labs like OpenAI.
The acceleration of AI capabilities has been dramatic. This progress is fueled by a steep decline in the cost of intelligence: OpenAI estimates that the cost to achieve a given level of AI capability has fallen roughly 40x per year over the last few years.
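To make that rate concrete, here is a minimal sketch of how a 40x-per-year decline compounds. The starting cost of 100 units and the three-year horizon are illustrative assumptions, not OpenAI figures:

```python
# Compounding a 40x-per-year decline in the cost of a fixed level
# of AI capability. The starting cost of 100 (arbitrary units) is
# a hypothetical placeholder, not an OpenAI figure.
initial_cost = 100.0
annual_factor = 40  # OpenAI's estimated yearly cost reduction

for year in range(4):
    cost = initial_cost / (annual_factor ** year)
    print(f"Year {year}: {cost:g}")

# Output:
# Year 0: 100
# Year 1: 2.5
# Year 2: 0.0625
# Year 3: 0.0015625
```

At that pace, the same capability costs 1,600x less after two years and 64,000x less after three.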
OpenAI’s vision for advanced AI is one of profound societal benefit, where technology acts as a force multiplier for human progress. The company expects these systems to accelerate progress and deliver tangible benefits in areas such as medicine and climate science.
Even as OpenAI paints a picture of a future enriched by AI, it simultaneously delivers its most sober warning yet. The company explicitly states that it treats the risks of superintelligent systems as "potentially catastrophic."
The core of this warning, central to the discussion around AI Safety and AI Alignment, is a clear principle: "no one should deploy superintelligent systems without being able to robustly align and control them," a task that requires significant further technical research.
Achieving these ambitious goals requires a specific strategy that combines a new corporate structure with a massive investment in computational infrastructure.
To fund its objectives, OpenAI has transitioned from a non-profit to a for-profit public benefit corporation. This change allows the company to raise the substantial capital required for its infrastructure goals. A non-profit entity, the OpenAI Foundation, retains a 26% ownership stake and governs the company's research direction. The foundation also oversees a $25 billion commitment to AI-driven scientific work and research safety.
OpenAI is pursuing two key technical strategies to achieve its timeline, and together they demand an unprecedented infrastructure investment: a planned buildout of 30 gigawatts of computing infrastructure at an estimated cost of around $1.4 trillion.
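As a rough back-of-the-envelope check (simple division over the announced figures, not a sourced per-gigawatt estimate), those numbers imply:

```python
# Implied cost per gigawatt from the announced buildout figures.
total_cost_usd = 1.4e12  # ~$1.4 trillion planned investment
capacity_gw = 30         # planned buildout in gigawatts

cost_per_gw = total_cost_usd / capacity_gw
print(f"${cost_per_gw / 1e9:.1f}B per gigawatt")  # -> $46.7B per gigawatt
```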
OpenAI is calling for an international effort to manage the development and deployment of powerful AI systems, and it has outlined several steps it believes are necessary for responsible progress, including global oversight of these powerful systems.
Is the planned "AI researcher" just a more capable chatbot? No. OpenAI defines the AI researcher as a system capable of autonomously conducting research projects, generating new scientific insights, and making significant discoveries, a capability far beyond current chatbot functions.
How will OpenAI pay for all of this? OpenAI has restructured into a for-profit public benefit corporation. This new structure allows it to raise larger investment rounds to fund its infrastructure goals, which include a buildout estimated at $1.4 trillion.
Is superintelligence guaranteed to be dangerous? OpenAI does not see danger as guaranteed, but it treats the risks as "potentially catastrophic." The company's official position is that these systems should not be deployed until robust methods for AI alignment and control are developed, and it advocates for global oversight to manage the risks.