Eric Schmidt compares the rise of AGI to a new Enlightenment — one shaped by non-human systems with superior reasoning. But while AI progress is fast, current systems still fall short of true general intelligence.
The real question:
How can we make AGI happen faster — and are we stuck on the wrong path?
· Neuromorphic chips such as Intel’s Loihi and IBM’s TrueNorth aim to mimic the brain’s energy efficiency.
· Architectures should emphasize sparse, event-driven computation rather than brute-force FLOPs, which can drastically reduce power consumption.
· Custom AI accelerators, i.e. task-specific chips like Google’s TPUs or Tesla’s Dojo, are redefining what’s possible for training and inference, especially for large-scale simulations and robotic learning environments.
· Spiking neural networks (SNNs) and analog hardware promise to represent information more efficiently than conventional digital systems, though both are still at an early stage (see the sketch after this list).
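To make the event-driven idea concrete, here is a minimal leaky integrate-and-fire (LIF) neuron in plain Python/NumPy. It is an illustrative toy, not a neuromorphic-hardware API: the constants (time constant, threshold, input drive) are made up, and the point is simply that output spikes are sparse events, so downstream computation only runs when something happens.

```python
import numpy as np

def lif_neuron(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: integrates input and emits sparse spikes."""
    v = 0.0
    spikes = np.zeros(len(input_current), dtype=bool)
    for t, i_t in enumerate(input_current):
        # Leaky integration: the potential decays toward rest and is driven by input.
        v += dt / tau * (-v + i_t)
        if v >= v_thresh:      # threshold crossing -> emit an event
            spikes[t] = True
            v = v_reset        # reset after spiking
    return spikes

# Most timesteps produce no spike, which is exactly what event-driven hardware exploits.
rng = np.random.default_rng(0)
drive = rng.uniform(0.0, 2.5, size=1000)
events = lif_neuron(drive)
print(f"{events.sum()} spikes over {len(drive)} timesteps")
```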
· Move beyond transformers to neurosymbolic models, causal learning, and lifelong learning.
· Focus on plasticity and memory retention across tasks (a continual-learning sketch follows below).
· AGI requires more than just scaling compute: the brain’s efficiency comes from how it processes information, not from raw throughput.
The true bottlenecks are architectural and semantic — not just hardware limits.
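One well-known way to operationalise “plasticity and memory retention across tasks” is a regulariser in the spirit of elastic weight consolidation: parameters that mattered for an earlier task are anchored, while the rest stay plastic for the new task. The sketch below is illustrative; the tiny model, random data, and uniform importance weights are placeholders, and a real setup would estimate per-parameter importance (e.g. a diagonal Fisher approximation) from the old task.

```python
import torch
import torch.nn as nn

def ewc_penalty(model, old_params, importance, lam=100.0):
    """Quadratic retention penalty in the spirit of elastic weight consolidation.

    Discourages moving parameters that were important for a previous task.
    `old_params` is a snapshot taken after the old task; `importance` is a
    per-parameter weight (e.g. a diagonal Fisher estimate). Both are dicts
    keyed by parameter name.
    """
    penalty = torch.zeros(())
    for name, p in model.named_parameters():
        penalty = penalty + (importance[name] * (p - old_params[name]) ** 2).sum()
    return 0.5 * lam * penalty

# Illustrative setup: a tiny model, a frozen snapshot, and uniform importances.
model = nn.Linear(4, 2)
old_params = {n: p.detach().clone() for n, p in model.named_parameters()}
importance = {n: torch.ones_like(p) for n, p in model.named_parameters()}

x, y = torch.randn(8, 4), torch.randn(8, 2)
task_loss = nn.functional.mse_loss(model(x), y)
total_loss = task_loss + ewc_penalty(model, old_params, importance)
total_loss.backward()  # gradients now trade off new-task fit against retention
```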
· Enable models to interact with the world — physically or virtually — to build causal understanding.
· Use predictive representations like LeCun’s JEPA to build deeper models of reality (a toy sketch follows this list).
· Use self-supervised and few-shot learning to reduce data dependency.
· Train with synthetic data and simulations (e.g., NVIDIA Omniverse).
· Merge AI with insights from neuroscience and cognitive science.
· Understand how humans abstract, focus attention, and retain knowledge over time.
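The joint-embedding predictive idea behind JEPA can be sketched in a few lines: encode a context view, predict the embedding of a target view, and compute the loss in representation space rather than pixel space. This toy is not LeCun’s actual JEPA implementation; real variants use richer encoders, masking strategies, and an EMA-updated target encoder, and every dimension and name below is invented for illustration.

```python
import torch
import torch.nn as nn

class ToyJEPA(nn.Module):
    """Toy joint-embedding predictive sketch (not the actual JEPA code)."""
    def __init__(self, dim_in=64, dim_emb=32):
        super().__init__()
        self.context_encoder = nn.Sequential(
            nn.Linear(dim_in, dim_emb), nn.ReLU(), nn.Linear(dim_emb, dim_emb))
        self.target_encoder = nn.Sequential(
            nn.Linear(dim_in, dim_emb), nn.ReLU(), nn.Linear(dim_emb, dim_emb))
        self.predictor = nn.Linear(dim_emb, dim_emb)

    def forward(self, context_view, target_view):
        # Predict the target view's embedding from the context view's embedding.
        pred = self.predictor(self.context_encoder(context_view))
        with torch.no_grad():  # target embeddings serve as a fixed regression target
            target = self.target_encoder(target_view)
        return ((pred - target) ** 2).mean()  # loss in representation space

# Two random "views" stand in for e.g. two crops of the same scene.
model = ToyJEPA()
x_context, x_target = torch.randn(8, 64), torch.randn(8, 64)
loss = model(x_context, x_target)
loss.backward()
```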
| Approach | Description | Focus |
|---|---|---|
| LLMs | Scaled transformers that predict the next token from large text datasets; strong at language and pattern matching, but lack grounding and struggle with logical reasoning. | Language, scale, pattern matching |
| Neurosymbolic AI | Combines neural networks with symbolic reasoning (e.g., logic, knowledge graphs); well suited to logical inference, explainability, and structured reasoning (toy sketch below). | Logical inference, explainability, hybrid reasoning |
| JEPA | Self-supervised models that predict abstract representations of sensory data: predictive perception without symbolic logic. | World models, causality, grounded learning |
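To make the neurosymbolic row concrete, here is a toy hybrid: a small network scores low-level concepts from raw features, and an explicit hand-written rule combines them into a conclusion. The concept names and the rule are invented for illustration; the point is that the reasoning step stays inspectable while the perception step stays learned.

```python
import torch
import torch.nn as nn

# Hypothetical concept detector: scores two soft "facts" from raw features.
concept_net = nn.Sequential(
    nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 2), nn.Sigmoid())

def symbolic_layer(concept_probs):
    """Explicit rule applied on top of the neural outputs:
    is_bird <- has_wings AND lays_eggs (product as a soft, differentiable AND)."""
    has_wings, lays_eggs = concept_probs[:, 0], concept_probs[:, 1]
    return has_wings * lays_eggs

features = torch.randn(4, 16)            # placeholder inputs
is_bird = symbolic_layer(concept_net(features))
print(is_bird)  # the rule is explicit and auditable, unlike a learned head
```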
"The development of full artificial intelligence could spell the end of the human race. Once humans develop artificial intelligence, it would take off on its own, and it would redesign itself at an ever-increasing rate." — Stephen Hawking