Futurism has become a hammer whose whole world is an AI nail.
Do they EVER present pros to their cons? AI's got them on THAT issue.
ChatGPT
Here’s a quick map of expert opinions on AI risks, placing them along the same spectrum I outlined earlier:
🔹 Closer to Hinton’s View (Existential / Catastrophic Risk Focus)
Geoffrey Hinton (U. of Toronto / ex-Google)
Warns AI could pose an existential threat if it surpasses human control.
Believes we may be closer than people think to systems we don’t fully understand.
Yoshua Bengio (U. of Montreal, Turing Award winner)
Shares Hinton’s concern about long-term catastrophic risk.
Advocates international regulation similar to nuclear treaties.
Sam Altman (CEO, OpenAI)
Publicly emphasizes AI’s potential for existential risk, while also pushing rapid development.
Supports global governance frameworks (e.g., “AI equivalent of the IAEA”).
🔹 Balanced / Dual-Focus (Both Near-Term and Long-Term Risks)
Demis Hassabis (CEO, DeepMind)
Acknowledges real near-term risks (bias, misuse, jobs), but also sees the possibility of transformative—and dangerous—future AI.
Advocates for cautious progress with global collaboration.
Dario Amodei (CEO, Anthropic)
Warns about alignment failures leading to catastrophic scenarios.
At the same time, stresses risks from misinformation and misuse.
🔹 Skeptical of Extinction-Level Framing (Focus on Present Harms)
Yann LeCun (Meta Chief AI Scientist, also a Turing Award winner)
Thinks existential risk is vastly overstated.
Sees current AI as “idiot savants” with no real autonomy or self-motivation.
Argues overhyping existential risk distracts from solvable present issues.
Melanie Mitchell (Santa Fe Institute, AI researcher)
Emphasizes that AI still struggles with basic reasoning and common sense.
Believes focus should be on misinformation, bias, and misuse, not “sci-fi doom.”
Gary Marcus (NYU / outspoken critic)
Not a doomer—he thinks catastrophic AI risk is unlikely in the near future.
But stresses current systems are unreliable, opaque, and dangerous if blindly trusted.
📌 So, the split looks like this:
Hinton, Bengio, Altman → Strong warnings about extinction-level scenarios.
LeCun, Mitchell, Marcus → Push back hard, arguing current AI is nowhere near that level.
Hassabis, Amodei → In the middle, acknowledging both short-term harms and long-term risks.