Accelerating AGI

Eric Schmidt compares the rise of AGI to a new Enlightenment — one shaped by non-human systems with superior reasoning. But while AI progress is fast, current systems still fall short of true general intelligence.

The real question:

How can we make AGI happen faster — and are we stuck on the wrong path?

🪲 Why Current AI Falls Short

  • Compute Inefficiency: The brain runs on ~20 watts and handles flexible, lifelong learning. GPUs and data centers consume far more energy but still lack adaptability.
  • Architectural Limits: Transformers excel at pattern recognition but struggle with reasoning and abstraction.
  • Lack of Grounding: AI systems lack physical or sensory experience, limiting their ability to build world models.
  • Learning Inefficiency: Humans learn with minimal data; current models require vast datasets and still overfit.

💼 What Needs to Change

1. Smarter, More Efficient Hardware

· Neuromorphic chips such as Intel’s Loihi and IBM’s TrueNorth mimic brain-like efficiency.
· Architectures should emphasize sparse, event-driven computing rather than brute-force FLOPs, drastically reducing power consumption.
· Custom AI accelerators (task-specific chips such as Google’s TPUs and Tesla’s Dojo) are redefining what’s possible for training and inference, especially in large-scale simulations and robotic learning environments.
· Spiking neural networks (SNNs) and analog hardware can represent information more efficiently than digital systems, though both remain early-stage.
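The event-driven idea behind SNNs can be shown with a toy leaky integrate-and-fire neuron (a minimal sketch, not any particular chip's programming model): the neuron stays silent most of the time and only emits a spike when accumulated input crosses a threshold, so downstream computation is triggered sparsely.

```python
def lif_neuron(input_current, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron: the membrane potential decays
    (leaks) each step, integrates the incoming current, and emits a
    spike (1) only when it crosses the threshold, then resets."""
    v, spikes = 0.0, []
    for i in input_current:
        v = leak * v + i          # leak, then integrate input
        if v >= threshold:
            spikes.append(1)      # spike event
            v = 0.0               # reset after firing
        else:
            spikes.append(0)      # silent: no downstream work needed
    return spikes

# Most timesteps produce no spike; only the burst of strong input fires.
out = lif_neuron([0.3, 0.3, 0.3, 0.0, 0.9, 0.9, 0.0, 0.0])
print(out)  # → [0, 0, 0, 0, 1, 0, 0, 0]
```

The sparsity is the point: a digital accelerator does work every cycle, while an event-driven system only does work at the single spike.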

2. Better Algorithms for Generalization

· Move beyond transformers to neurosymbolic models, causal learning, and lifelong learning.
· Focus on plasticity and memory retention across tasks.
· Recognize that AGI requires more than scaled compute: the brain is efficient because of how it processes information, so the true bottlenecks are architectural and semantic, not just hardware limits.
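One way to see what "beyond transformers" can mean is a tiny neurosymbolic sketch (hypothetical names and scores, with a fixed scorer standing in for a real perception network): a neural component assigns confidences to atomic facts, and a symbolic layer applies hard logical rules over the thresholded facts.

```python
def neural_scores(image_id):
    # Hypothetical perception module: in practice a trained network;
    # here, fixed confidence scores for illustration.
    return {"has_wings": 0.92, "has_fur": 0.08, "lays_eggs": 0.85}

# Symbolic rules: premises -> conclusion (all premises must hold).
RULES = [
    ({"has_wings", "lays_eggs"}, "bird"),
    ({"has_fur"}, "mammal"),
]

def classify(image_id, threshold=0.5):
    # Threshold soft neural outputs into crisp facts...
    facts = {p for p, s in neural_scores(image_id).items() if s >= threshold}
    # ...then derive conclusions by symbolic rule application.
    derived = {concl for premises, concl in RULES if premises <= facts}
    return facts, derived

facts, derived = classify("img_001")
print(derived)  # → {'bird'}
```

The neural half handles noisy perception; the symbolic half makes the inference explicit and explainable, which is the hybrid appeal.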

3. Grounded Intelligence

· Enable models to interact with the world — physically or virtually — to build causal understanding.
· Use predictive representations like LeCun’s JEPA to create deeper models of reality.
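The core JEPA intuition (predict in an abstract representation space, not in raw input space) can be sketched with plain linear algebra. This is a toy stand-in, not LeCun's architecture: a fixed random encoder maps raw patches to embeddings, and a predictor is fit to map the context's embedding to the target's embedding.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "world": target patches are a structured function of context patches.
W_world = rng.normal(size=(8, 8))
context = rng.normal(size=(500, 8))
target = context @ W_world + 0.01 * rng.normal(size=(500, 8))

# Fixed encoder: raw 8-d patches -> abstract 4-d representations.
E = rng.normal(size=(8, 4))
z_context, z_target = context @ E, target @ E

# Predictor trained (least squares) to predict the target's *embedding*
# from the context's embedding, never the raw inputs themselves.
P, *_ = np.linalg.lstsq(z_context, z_target, rcond=None)

err = float(np.mean((z_context @ P - z_target) ** 2))
```

Because prediction happens in latent space, the model is free to discard unpredictable surface detail, which is the argument for why this builds deeper world models than pixel-level prediction.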

4. Learn from Less

· Use self-supervised and few-shot learning to reduce data dependency.
· Train with synthetic data and simulations (e.g., NVIDIA Omniverse).
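Few-shot learning can be made concrete with a nearest-prototype classifier in the style of prototypical networks (a minimal sketch with synthetic 2-D "embeddings"; real systems would use a learned encoder): each class is summarized by the mean of just three labelled examples.

```python
import numpy as np

rng = np.random.default_rng(1)

# Few-shot regime: only 3 labelled "support" examples per class.
support_a = rng.normal(loc=0.0, size=(3, 2))
support_b = rng.normal(loc=4.0, size=(3, 2))

# Prototype = mean embedding of each class's support set.
proto_a, proto_b = support_a.mean(axis=0), support_b.mean(axis=0)

def predict(x):
    # Classify a query by its nearest class prototype.
    da = np.linalg.norm(x - proto_a)
    db = np.linalg.norm(x - proto_b)
    return "a" if da < db else "b"

print(predict(np.array([0.1, -0.2])))  # query near class a's support
```

Six labelled points suffice here because the decision rule exploits structure in the embedding space instead of memorizing a large dataset.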

5. Interdisciplinary Collaboration

· Merge AI with insights from neuroscience and cognitive science.
· Understand how humans abstract, focus attention, and retain knowledge over time.


📘 Approaches: 3 Roads Diverge

| Approach | Description | Focus |
| --- | --- | --- |
| LLMs | Predict next tokens from text datasets using scaled transformers; strong at language, weak at logic, and lacking grounding and reasoning. | Language, scale, pattern matching |
| Neurosymbolic AI | Combine neural networks with symbolic reasoning (e.g., logic, knowledge graphs) for explainable, structured inference. | Logical inference, explainability, hybrid reasoning |
| JEPA | Self-supervised prediction of abstract representations from sensory data; predictive perception without symbolic logic. | World models, causality, grounded learning |

💡 Takeaway: AGI Needs a New Path

  • Build efficient, brain-inspired hardware
  • Create models that learn over time and reason across domains
  • Enable physical or simulated grounding
  • Collaborate across fields like neuroscience and AI
  • Focus on understanding, not just scaling: scaling transformers may improve chatbots, but AGI requires a fundamentally different approach

"The development of full artificial intelligence could spell the end of the human race. Once humans develop artificial intelligence, it would take off on its own, and it would redesign itself at an ever-increasing rate." — Stephen Hawking