Geoffrey Hinton Issues Stark Warning on Existential Risks After Nobel Win
Geoffrey Hinton
Interview, December 2024
Stockholm, Sweden — In a late-night phone call from a hotel room in California, the 2024 Nobel Prize Laureate in Physics, Geoffrey Hinton, shared his astonishment and his warnings regarding the technology he helped pioneer. Often called the "Godfather of AI," Hinton’s recent recognition by the Nobel Committee comes at a pivotal moment as he transitions from a creator of neural networks to one of their most vocal critics.
A Surprise at 2:00 AM
Adam Smith (NobelPrize.org): First of all, many congratulations. Where did the news reach you?
Geoffrey Hinton: I’m in a cheap hotel in California, without an internet connection and with a not-very-good phone line. I was planning to get an MRI scan today, but I guess I’ll have to cancel that. I had no idea I’d even been nominated. I was extremely surprised.
Adam Smith: What were your first thoughts?
Geoffrey Hinton: My very first thought was: how could I be sure it wasn’t a spoof call? But it was coming from Sweden, the person had a strong Swedish accent, and there were several of them. It would have to be a posse of impersonators, which is unlikely.
Defining a Field: Biology, Physics, or AI?
Adam Smith: How would you describe yourself? A computer scientist, or a physicist trying to understand biology?
Geoffrey Hinton: I would say I am someone who doesn’t really know what field he’s in but would like to understand how the brain works. In my attempts to understand the brain, I’ve helped to create a technology that works surprisingly well.
The Existential Threat: AI vs. Climate Change
Adam Smith: You’ve publicly expressed fears about what this technology can bring. What needs to be done to allay those fears?
Geoffrey Hinton: I think it’s rather different from climate change. With climate change, everybody knows what needs to be done: we need to stop burning carbon. It’s just a question of political will. Here, we have much less idea of what’s going to happen and what to do about it.
I don’t have a simple recipe. We are at a bifurcation point in history. In the next few years, we need to figure out if there’s a way to deal with the threat of these things getting out of control and taking over.
Adam Smith: What can governments do right now?
Geoffrey Hinton: Governments can force big companies to spend a lot more of their resources on safety research. Companies like OpenAI shouldn’t be allowed to just put safety research on the back burner. We need to put a lot of research effort into how we will keep control.
On Language and Understanding
Adam Smith: Do you think the Nobel Prize will make a difference in how your warnings are perceived?
Geoffrey Hinton: Yes, I think it will make a difference. Hopefully, it’ll make me more credible when I say these things really do understand what they’re saying.
There is a whole school of linguistics—coming from Chomsky—that thinks it’s complete nonsense to say these things understand language. I think that school is wrong. It’s clear now that neural nets are much better at processing language than anything that school ever produced.
GPN Context: Professor Hinton’s call for transparency and safety research aligns with GPN’s commitment to monitoring digital freedom and the ethical implications of emerging technologies. As AI begins to reshape governance and information, the "right to safety" becomes a primary concern for international policy.