Last Tuesday, my 12-year-old nephew asked his AI assistant to help with a math problem. Within seconds, it not only solved the equation but explained three different methods, created a visual diagram, and suggested related practice problems. As I watched him chat with it like a knowledgeable tutor, a strange thought hit me: this felt less like using a tool and more like talking to someone who just happened to live inside a computer.
That moment crystallized a question that’s been nagging at researchers and philosophers alike: what if we’ve been so focused on waiting for artificial general intelligence to arrive that we missed its quiet entrance through the back door?
The question isn’t as wild as it sounds. A growing group of experts now argues that our narrow definitions of “intelligence” might be blinding us to what’s already happening right in front of us.
The Finish Line We Might Have Already Crossed
For decades, artificial general intelligence felt like science fiction. Unlike narrow AI, which excels at one specific task, such as beating humans at chess or recognizing faces in photos, AGI represents something much broader: a system that can match human-level performance across a wide range of tasks.
The major AI labs still talk about AGI as a future goal. OpenAI, Google DeepMind, and Anthropic all have roadmaps pointing toward this milestone, with timelines ranging from a few years to the early 2030s. But what if they’re wrong about the timing?
A recent paper in the journal Nature, led by philosopher Eddy Keming Chen, makes a startling claim. If we judge today’s large language models by the same standards we use for human intelligence, they might already qualify as a form of artificial general intelligence.
“When we benchmark AI systems against realistic human abilities rather than mythical superintelligence, they’re already ticking most of the boxes we associate with general intelligence,” explains Dr. Sarah Martinez, a cognitive scientist at Stanford University.
What Today’s AI Can Actually Do
The capabilities of current AI systems span an impressive range when you list them out:
- Write coherent essays, stories, and technical documentation
- Solve complex mathematical problems and explain the reasoning
- Debug computer code and suggest improvements
- Analyze scientific papers and summarize key findings
- Engage in nuanced philosophical discussions
- Create visual art and design concepts
- Translate between dozens of languages
- Provide medical information and diagnostic insights
- Plan travel itineraries and make restaurant recommendations
- Assist with legal research and contract analysis
Here’s how current AI stacks up against average human performance across different domains:
| Task Category | Current AI Performance | Average Human Performance |
|---|---|---|
| Writing and Communication | Expert | Moderate |
| Mathematical Problem Solving | Expert | Basic to Moderate |
| Programming and Technical Tasks | Expert | Basic (requires specialized training) |
| Creative Content Generation | High | Moderate |
| Language Translation | Expert | Limited (native languages only) |
| Information Synthesis | Expert | Moderate |
| Physical World Understanding | Limited | Excellent |
| Emotional Intelligence | Simulated | High |
The pattern becomes clear when you look at it this way. Current AI systems already exceed average human performance in many cognitive tasks, while struggling mainly with physical world interaction and genuine emotional understanding.
“We’re holding AI to impossibly high standards while giving humans a pass for being imperfect,” notes Dr. Michael Chen, an AI researcher at MIT. “No human is expert-level at everything, yet we don’t question their general intelligence.”
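To make the “ticking the boxes” idea concrete, here’s a minimal sketch of what a cross-domain spot check might look like in code. Everything in it is hypothetical: `ask_model` is a stand-in for whatever chat API you actually use, and the three tiny task sets are illustrative examples, not a real benchmark.

```python
# Hypothetical sketch: one evaluation loop across several task domains.
# `ask_model` is a placeholder; swap in a real chat-completion client
# (OpenAI, Anthropic, a local model) to run this against an actual system.
from statistics import mean

# Illustrative (prompt, expected answer) pairs per domain.
TASKS = {
    "math": [("What is 17 * 24? Answer with the number only.", "408")],
    "translation": [("Translate 'good morning' into Spanish.", "buenos")],
    "coding": [("In Python, what does len([1, 2, 3]) return?", "3")],
}

def ask_model(prompt: str) -> str:
    """Placeholder model that returns a canned reply.

    This exists only so the script runs end to end; replace it with a
    real API call before drawing any conclusions.
    """
    return "408"

def domain_score(examples) -> float:
    """Fraction of prompts whose expected answer appears in the reply."""
    return mean(
        expected.lower() in ask_model(prompt).lower()
        for prompt, expected in examples
    )

if __name__ == "__main__":
    for domain, examples in TASKS.items():
        print(f"{domain}: {domain_score(examples):.0%}")
```

Substring matching is obviously a crude grader, and real evaluations use far larger task sets. The point is only what “general” means in practice: one loop over many unrelated domains, rather than a different system hand-built for each task.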
Why We Keep Moving the Goalposts
There’s a psychological phenomenon at play here that researchers call the “AI effect.” Once a computer can do something well, we tend to stop calling it “artificial intelligence” and just consider it computation.
Chess was once considered the ultimate test of machine intelligence. When IBM’s Deep Blue beat world champion Garry Kasparov in 1997, the reaction wasn’t “we’ve achieved AI!” Instead, many people said, “Well, chess is just pattern matching anyway.”
The same thing happened with image recognition, natural language processing, and even creative tasks like writing and art generation. Each breakthrough gets redefined as “not real intelligence” as soon as machines master it.
This shifting of goalposts might be preventing us from recognizing artificial general intelligence even when it’s staring us in the face. We’ve become so focused on the idea of superintelligence—AI that dramatically exceeds human capabilities—that we’re missing the quieter arrival of human-level general intelligence.
“The truth is, most of us interact with these systems daily and find them remarkably capable,” says Dr. Lisa Rodriguez, a philosopher of mind at UC Berkeley. “We’re just reluctant to admit that what we’re seeing might actually be general intelligence in action.”
What This Means for Everyone
If artificial general intelligence is already here in some form, the implications ripple out in every direction. This isn’t just an academic debate—it affects how we prepare for the future.
For workers, it means the transformation of the job market might be happening faster than expected. Instead of having years to prepare for AGI, we might need to adapt to its presence now. The good news? Current AI still needs human guidance, creativity, and judgment to work effectively.
For policymakers, it suggests that regulations and safety frameworks should focus on present-day AI capabilities rather than hypothetical future scenarios. The systems we’re debating how to regulate might already possess the general intelligence we’ve been waiting for.
For businesses, it means the competitive advantage might go to those who recognize and harness current AI capabilities rather than those waiting for the “next big breakthrough.” The breakthrough might already be here.
The question isn’t whether artificial general intelligence will change everything. It may be whether we’re ready to acknowledge that the change is already underway.
Perhaps the most profound realization is that artificial general intelligence doesn’t need to announce itself with fanfare. It might arrive quietly, embedded in the systems we’re already using, waiting for us to recognize it for what it is.
FAQs
What exactly is artificial general intelligence?
AGI refers to AI systems that can perform at human levels across a wide range of tasks, rather than excelling at just one specific job like current narrow AI.
How is current AI different from AGI?
Current AI systems might already qualify as early forms of AGI, since they can handle many different types of tasks at human or expert levels, though they still have limitations in areas like physical world interaction.
Why do experts disagree about whether we have AGI now?
There’s no universal definition of AGI, and many researchers have been expecting something more dramatic than the gradual improvements we’ve actually seen in AI capabilities.
Should I be worried if AGI is already here?
Current AI systems still require human oversight and collaboration, so the transition might be more gradual than the dramatic scenarios often depicted in science fiction.
What jobs are most at risk if we already have AGI?
Tasks involving writing, analysis, and information processing are seeing the biggest impact, while jobs requiring physical skills, emotional intelligence, and complex human interaction remain more secure.
How can I prepare for a world where AGI already exists?
Focus on developing skills that complement AI—creativity, critical thinking, emotional intelligence, and the ability to work alongside AI systems rather than compete with them.