If you’d asked me ten years ago what career advice one of the world’s greatest AI pioneers might give, I’d never have guessed, "Train to be a plumber." Yet here we are. When Geoffrey Hinton, the so-called Godfather of AI and a freshly minted Nobel Prize winner, suggests that a manual trade may outlast the job market convulsions of the AI age, I pause and wonder: Are we really about to become the chickens in a coop of our own making? You don’t have to be a philosopher (or a plumber) to appreciate the weirdness of this moment. Before diving into superintelligence risks, rogue neural networks, or my own brief flirtation with soldering pipes, let’s remember how far we’ve come—from the dusty blackboards of cognitive science to the whirring servers behind today’s chatbots.
Goodbye Academia, Hello Existential Anxiety: The Godfather’s Career Detour
When you think of Geoffrey Hinton, you might picture a Nobel Prize-winning scientist, a pioneer in artificial neural networks, and the so-called “Godfather of AI.” But the story of Hinton’s career is as much about existential anxiety as it is about academic achievement. After decades spent in university corridors and tech labs, Hinton’s journey has shifted from shaping the future of AI to warning the world about its dangers—and even offering some unexpected career advice along the way.
From Cognitive Science Skepticism to AI Pioneer
Geoffrey Hinton’s path in artificial intelligence began at a time when most experts doubted the brain could inspire machines to learn. In the early days, the field was split between two camps. One group believed that intelligence was all about logic and symbolic reasoning—if you could just encode enough rules, you’d get smart machines. The other, much smaller group, thought the brain itself held the secret. Hinton was firmly in the latter camp.
As Hinton himself explains, “There weren’t that many people who believed that we could make neural networks work, artificial neural networks.” For 50 years, he pushed the idea that simulating networks of brain cells—neurons—on computers could lead to machines that learn to recognize objects, understand speech, and even reason. This approach, now known as deep learning, forms the backbone of today’s most powerful AI systems.
The Nobel Prize and Deep Learning Breakthroughs
Hinton’s persistence paid off. His breakthroughs in artificial neural networks and deep learning not only transformed the field but also earned him the 2024 Nobel Prize in Physics. This recognition highlighted the practical impact of his work, which underpins everything from voice assistants to advanced medical diagnostics.
After decades in academia, Hinton’s work caught the attention of Google, which acquired his startup and brought him on board. For ten years, he worked at Google, helping to integrate deep learning into products that billions now use daily. But even as his ideas gained mainstream acceptance, Hinton’s concerns about the future of AI began to grow.
Leaving Google: Freedom to Sound the Alarm
In 2023, Hinton made headlines by leaving Google. His reason was simple: he wanted to speak freely about the risks of AI. As he put it, “So that I could talk freely at a conference.” Hinton’s departure marked a turning point—not just for his career, but for the entire conversation around AI safety.
Now, instead of quietly advancing the technology, Hinton is sounding the alarm. He worries about a world where humans are no longer the most intelligent beings on the planet. In his words:
“If you want to know what life’s like when you’re not the apex intelligence, ask a chicken.”
This stark analogy captures the existential anxiety that now shapes Hinton’s public message. The very systems he helped create could, in his view, surpass human intelligence and reshape society in unpredictable ways.
Career Advice from the Godfather of AI: “Train to Be a Plumber”
Perhaps the most surprising twist in Hinton’s journey is his advice for the next generation. When asked what people should do in a world of superintelligent AI, his response is blunt:
“Train to be a plumber.”
It’s not a joke. Hinton’s point is that as AI becomes capable of automating more cognitive tasks, practical skills—like plumbing—may become more valuable. In a future where machines can outthink us, hands-on trades could offer job security that knowledge work no longer guarantees.
- Geoffrey Hinton AI: From neural networks to Nobel laureate, Hinton’s work defines the field.
- AI Pioneer: His 50-year struggle against skepticism changed how machines learn.
- Deep Learning Techniques: The foundation of modern AI, now used in everything from search engines to self-driving cars.
- Nobel Prize in Physics 2024: Recognition of Hinton’s revolutionary impact on technology and society.
Hinton’s career detour—from academic pioneer to public alarmist—reflects the complex, often uneasy relationship between innovation and its consequences. His story is a reminder that sometimes, the most improbable wisdom comes from those who know the technology best.
Worrying About Smarter Machines: Real Dangers and Distant Thunder
When you think about AI Risks and Safety, it’s easy to imagine only the wildest sci-fi scenarios—machines plotting world domination or rogue neural networks running amok. But as Geoffrey Hinton, the “Godfather of AI,” points out, the real dangers are already here, and they’re growing fast. The threats range from social manipulation and job displacement to the chilling possibility of AI systems outgrowing human control. As Hinton puts it,
“We’ve never had to deal with things smarter than us...if you want to know what that’s like, ask a chicken.”
AI and Cybersecurity: The New Frontline
One of the most immediate impacts of advanced AI is in AI and Cybersecurity. Between 2023 and 2024, cyberattacks surged by a staggering 12,200%. This explosion is largely driven by the rise of large language models, which make it much easier to craft convincing phishing attacks. These aren’t just the old “Nigerian prince” emails. Now, AI can clone your voice, mimic your image, and create videos that look and sound just like you. Hinton himself has been targeted by scams using his own voice and mannerisms to promote fake crypto schemes on platforms like Meta and X. Even after reporting these scams, new ones pop up like a relentless game of whack-a-mole.
Phishing is just the start. AI-powered systems can patiently sift through millions of lines of code, searching for vulnerabilities. Experts believe that by 2030, AI could be inventing entirely new forms of cyberattacks—methods that no human has ever considered. This creativity is what makes AI such a double-edged sword: it’s not just automating old threats, it’s inventing new ones.
AI Bioterrorism Potential: From Fiction to Feasibility
The potential for AI Bioterrorism is another area where the risks are no longer theoretical. As Hinton warns,
“It just requires one crazy guy with a grudge.”

With today’s AI tools, someone with basic knowledge of molecular biology and a bit of funding could design new viruses. The cost is dropping, and the technical barriers are falling. Even small cults or lone actors could, in theory, create dangerous pathogens. For governments or well-funded groups, the possibilities are even more alarming. The line between science fiction and reality is blurring, and the consequences could be catastrophic.
AI Regulation Challenges: A Race Against Time
Despite these dangers, AI Regulation Challenges remain a huge problem. Regulations are not keeping up with the pace of innovation. For example, European AI regulations specifically exclude military uses, leaving a massive loophole. This is especially worrying when you consider the rapid development of autonomous weapons and other military AI technologies. As Hinton notes, “they’re not going to stop it cuz it’s too good for too many things.” The incentives to push forward are simply too strong, even as the risks mount.
- Military AI: Current regulations often exempt military applications, allowing unchecked growth in autonomous weapons and surveillance.
- Societal Impact: AI is already reshaping jobs, economies, and social trust. The potential for manipulation—deepfakes, fake news, and targeted scams—is only increasing.
- Existential Risk: Hinton estimates the risk of AI wiping out humanity at 10-20%. He admits this is a gut feeling, but it’s a number that should make anyone pause.
Protecting Yourself in an AI World
Even Hinton has changed his habits due to these risks. He spreads his savings across multiple banks to hedge against the possibility of a cyberattack taking down a single institution. He backs up his data on offline drives, just in case the internet goes down or his devices are compromised. These are practical steps you can take, but they only go so far in a world where AI-powered threats are evolving faster than most defenses.
AI and Society Impact: The Uncharted Territory
The truth is, we’re entering uncharted territory. The combination of AI creativity, unchecked military development, and slow regulatory response means that the thunder on the horizon is getting louder. The dangers are real, not distant, and they’re no longer just the stuff of fiction. As Hinton reminds us, the existential threat is not zero—and it’s time to take that seriously.
Why Plumbers Might Win the AI Revolution: Career Paths, Philosophy, and Coping Mechanisms
When Geoffrey Hinton, one of the founding fathers of deep learning techniques and machine learning breakthroughs, suggests you might want to train as a plumber, it’s not a joke. It’s a serious reflection on the future of work in a world shaped by AI. In a time when many are anxious about AI job displacement and the broader AI and society impact, Hinton’s advice stands out for its odd logic and practical wisdom. He’s not dismissing the AI career journey, but he’s highlighting an uncomfortable truth: some hands-on jobs, like plumbing, may be more resilient to automation and digital disruption than many white-collar roles.
The heart of Hinton’s suggestion is simple. AI is advancing rapidly, and with it comes the risk that many traditional jobs—especially those involving routine cognitive tasks—could be replaced or radically transformed. But practical, physical work, like fixing pipes or wiring homes, remains stubbornly difficult for machines to replicate. Plumbers, electricians, and other skilled tradespeople do work that is deeply embedded in the physical world, requiring dexterity, improvisation, and local knowledge. These are skills that, for now, remain out of reach for even the most advanced AI systems.
Hinton’s own AI career journey is a story of intellectual rebellion. He backed neural nets when most of the field dismissed them. He resisted the easy money and hype of the tech industry, focusing instead on the science and its long-term implications. Now, as he warns about the dangers of superintelligence, he’s also thinking practically about how to cope with the risks AI brings—not just to jobs, but to financial security and everyday life.
This practical mindset extends to how Hinton manages his own risks. In a recent conversation, he described how concerns about cyber attacks have changed his behavior. Despite the strong regulation and safety of Canadian banks—none of which came close to failing in 2008—he worries that a sophisticated cyber attack could still bring one down. His solution? He spreads his savings, and his children’s savings, across three different banks. “If a cyber attack takes down one Canadian bank,” he reasons, “the other Canadian banks will very quickly get very careful.” It’s a simple, human strategy in the face of complex digital threats—a kind of financial plumbing, patching leaks before they become floods.
Hinton also practices DIY data safety. He keeps a hard drive backup of his laptop, disconnected from the internet, so that if the worst happens—if the whole internet goes down—he still has his data. This is not high-tech wizardry; it’s common sense, the same kind of logic that leads someone to keep a wrench in the kitchen drawer. These coping mechanisms are deeply human responses to the uncertainty and fragility that come with living in a world increasingly run by algorithms.
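For readers who want to copy this habit, the backup itself takes only a few lines of standard shell. The sketch below is a generic illustration, not Hinton’s actual setup: the paths are placeholders, and the second directory stands in for an external drive you would unplug once the copy finishes.

```shell
#!/bin/sh
# Minimal offline-backup sketch: mirror a directory to a second location,
# which in real use would be an external drive kept disconnected.
# All paths below are illustrative placeholders.
mirror_offline() {
    src="$1"; dest="$2"
    rm -rf "$dest"          # drop the old mirror so deleted files don't linger
    cp -R "$src" "$dest"    # recursive copy of the current state
}

# Demonstration on temporary directories standing in for a real drive:
mkdir -p /tmp/demo_src
echo "important-notes" > /tmp/demo_src/notes.txt
mirror_offline /tmp/demo_src /tmp/demo_mirror
cat /tmp/demo_mirror/notes.txt
```

On a real drive you would finish with `umount` (or “eject”) so the copy is genuinely offline; tools like `rsync --archive --delete` do the same mirroring more efficiently for large folders.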
The broader lesson here is not that everyone should abandon their AI career journey and pick up a pipe wrench. Rather, it’s about adaptability and humility. The AI revolution will bring both opportunity and disruption. Some jobs will disappear, but others—especially those rooted in practical skills and human relationships—will endure. Hinton’s advice is a reminder that, in the face of AI job displacement, practical work is defensible. It’s also a call to think creatively about how to protect yourself, your savings, and your data in a world where digital risks are real and growing.
As Hinton puts it, “My main mission now is to warn people how dangerous AI could be.” But he’s not hopeless. He draws a comparison to the invention of the atomic bomb: “Sometimes I think about nuclear bombs and the invention of the atomic bomb and how it compares...but we’re still here.” The future of AI and society impact is uncertain, but not predetermined. By blending intellectual rebellion with practical coping mechanisms, and by valuing the wisdom found in unlikely places—like plumbing—we may find ways to thrive, not just survive, in the age of AI.
TL;DR: Geoffrey Hinton’s wild ride through cognitive science, neural networks, and Nobel fame has led him to ring alarm bells about AI risks—while still leaving room for hope, dark humor, and practical advice (plumbing, anyone?). The future may be unpredictable, but Hinton’s story is a crash course in thinking for yourself.