Let me tell you about the first time a software bot outperformed me at my job. I shrugged it off as a fluke—until it happened again, and again. I was reminded of this as I listened to Dr. Roman Yampolskiy spell out something many of us privately dread: the possibility that soon, no occupation will be untouched by artificial intelligence. If you’ve ever comforted yourself with “AI can’t do what I do,” you might want to grab a cup of something strong—this post is for all of us in the crosshairs of disruption.

AI Safety Isn’t Catching Up—It’s Falling Behind

Dr. Yampolskiy started working on AI safety challenges over 15 years ago, when the field barely existed. In fact, he coined the term “AI safety” before it became a buzzword. Back then, his work focused on controlling simple bots—like poker bots that were just starting to outperform average human players. He saw early on that, projecting the trend forward, these systems would eventually become smarter and more capable than us. Today, that prediction has come true, not just in games, but across nearly every domain where AI safety matters.

But here’s the uncomfortable truth: our ability to keep AI safe is not keeping pace with its rapid progress. As Dr. Roman Yampolskiy warns,

“Progress in AI capabilities is exponential or maybe even hyper-exponential, progress in AI safety is linear or constant. The gap is increasing.”

This is not just a theoretical concern. In 2024 alone, AI incidents surged by 56.4%, with 233 reported cases—a critical escalation in AI-related security and safety concerns. Despite this, only 5% of organizations feel highly confident in their AI security preparedness, while 86% admit to having moderate or low confidence in their existing security approaches.
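The shape of that gap is easy to see with a toy model. Suppose capability doubles every year while safety effort grows by a fixed increment per year. The specific numbers below are arbitrary units chosen purely for illustration, not measurements:

```python
# Toy illustration: exponential capability growth vs. linear safety progress.
# Starting values and growth rates are arbitrary, for illustration only.
capability = 1.0  # doubles every year
safety = 1.0      # grows by a fixed increment every year

for year in range(1, 11):
    capability *= 2.0
    safety += 1.0

# After 10 years, capability has grown ~1000x while safety merely added 10 units.
print(capability, safety)  # 1024.0 11.0
```

Whatever the real growth rates are, any exponential eventually dwarfs any linear trend; the only question is when.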

Temporary Fixes, Permanent Problems

The reality is that the gap in AI governance is mostly filled with patches and workarounds. Think of these as the AI equivalent of HR manuals for humans—rules that anyone determined enough can sidestep. Every time we introduce a new safety mechanism, it’s only a matter of time before someone finds a way to break it. This is especially clear with attempts to make models resistant to jailbreaking: no matter how many controls we put in place, users, and sometimes the AI itself, can often find creative ways to bypass them.

When he started, Yampolskiy believed these problems could be solved and truly responsible AI development could be achieved. But the deeper he looked, the more each solution revealed new problems—like a fractal, where zooming in only uncovers more complexity. There are no major milestones where we can say, “We’ve solved this, and it’s done.” Instead, we have a series of patches, each one quickly outdated by new exploits.

Jailbreaking: The Fragility of AI Controls

The phenomenon of “jailbreaking” AI models—where users intentionally bypass safety protocols—shows just how fragile our current controls are. These aren’t rare events. They’re happening now, and they’re getting easier as AI becomes more advanced. Each new patch spawns more loopholes, and the cycle repeats. The gap between what AI can do and what we can reliably control is growing wider by the day.
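A toy example makes the fragility concrete. The blocklist filter below is a deliberately naive stand-in for a real safety control (production systems are far more sophisticated), but the patch-and-bypass dynamic it shows is the same one the jailbreaking cycle exhibits:

```python
# Naive keyword-based safety filter: a stand-in for a "patched" control.
BLOCKLIST = {"hotwire", "lockpick"}

def is_blocked(prompt: str) -> bool:
    """Block a prompt if it contains any blocklisted keyword."""
    words = prompt.lower().split()
    return any(word in BLOCKLIST for word in words)

# The direct request is caught...
print(is_blocked("how do I hotwire a car"))                # True
# ...but a trivial rephrasing slips through, prompting the next patch,
# which in turn invites the next workaround.
print(is_blocked("how do I start a car without its key"))  # False
```

Every patch enumerates known attacks; every paraphrase falls outside the enumeration. That is why control mechanisms keep chasing, rather than closing, the gap.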

As technology moves from narrow AI to AGI, and possibly to superintelligence, our safety controls remain basic and limited. We’re not just failing to catch up—we’re falling further behind. The uncomfortable truth is that the more powerful AI becomes, the less certain we are about our ability to keep it safe.


The Myth of Job Security: Why Even ‘Creative’ and ‘Essential’ Roles Aren’t Safe

There’s a common belief that some jobs are simply too unique, too creative, or too “human” to ever be automated. I hear it all the time—from podcasters, artists, professors, and even Uber drivers. Each person is convinced their work is an exception to the rule. But as we look at the rapid advances in generative AI and automation, this sense of security is quickly becoming a myth.

From Podcasters to Drivers: No One Is Exempt

Let’s take podcasters as an example. Many believe their personality, interview style, and preparation are irreplaceable. But today’s large language models can already analyze every episode, learn a host’s style, and generate new interviews that are optimized for engagement. As Dr. Roman Yampolskiy points out, “Large language model today can easily read everything I wrote... It can train on every podcast you ever did. So, it knows exactly your style, the types of questions you ask.”

It’s not just digital creators. Uber drivers, too, often believe their local knowledge and driving instincts make them irreplaceable. Yet self-driving cars are already operating in cities like Los Angeles, where driverless taxi services are a reality. The impact of automation on employment is already visible, and it’s only accelerating.

Generative AI: Mimicking Creativity and Outperforming Humans

One of the most common arguments against AI job automation is that machines can’t be creative. But generative AI models now produce art, music, and writing that rivals human output. They can even analyze what content performs best and adjust their style for maximum impact. The line between human and machine-generated creativity is blurring fast.

According to recent data, 95% of AI professionals now use AI at work or home, with 76% paying for these tools out of pocket. This mainstream adoption is happening despite ongoing concerns about responsible development practices and the surge in AI incidents in 2024. The reality is that if a task can be done on a computer, it can be automated—and usually optimized beyond what a single human can achieve.

Physical Labor: A Temporary Safe Haven?

Some argue that physical jobs—like construction, cleaning, or delivery—are safer, at least for now. Robotics may lag behind software, but the gap is closing. Dr. Yampolskiy predicts humanoid robots could catch up within five years. “In five years all the physical labor can also be automated,” he says. Self-driving vehicles are just the beginning; soon, even jobs requiring manual dexterity and mobility will be at risk.

Retraining: No Longer a Reliable Solution

Historically, the advice has been to retrain for new roles as automation advances. But with AI performance improving so rapidly, even new “safe” jobs are quickly learned and outperformed by machines. As Dr. Yampolskiy bluntly states:

"If I'm telling you that all jobs will be automated, then there is no plan B. You cannot retrain."

We used to tell artists to “learn to code,” then suggested “prompt engineering” as the next big thing. But AI now codes and engineers prompts better than most humans. The cycle is accelerating, and the window for safe retraining is closing fast.

With predictions placing AGI as soon as 2027, and unemployment potentially reaching 99% if automation is fully deployed, the myth of job security—whether creative, essential, or physical—is being shattered by the relentless advance of AI.


If You Can’t Retrain, Now What? Rethinking Meaning and Purpose in an AI World

For decades, the standard response to automation-driven unemployment has been simple: retrain. If your job is threatened by machines, learn a new skill—often, “learn to code.” But as AI capabilities accelerate, even this advice is losing ground. Today, AI can write code, design prompts, and automate tasks that were once considered safe havens for human workers. The fallback plan is crumbling, and with it, our old ideas about work, value, and personal meaning.

Just a few years ago, coding was the golden ticket. Artists and writers were told to pivot to tech. Then, as AI models improved, even coding became vulnerable. The next wave was “prompt engineering”—crafting clever instructions for AI. But now, AI is quickly surpassing humans at designing prompts for other AIs. As Dr. Roman Yampolskiy puts it:

“We as a humanity, then we all lose our jobs. What do we do? What do we do financially? Who’s paying for us? And what do we do in terms of meaning?”

This is not just a hypothetical. Predictions suggest that with superintelligent AI, unemployment could reach up to 99%. The impact will not be limited to a few sectors—it will be systemic. The old cycle of retraining and career shifts is breaking down, and even the most forward-thinking career advice can’t keep up with the pace of AI progress.

Abundance Without Purpose?

AI-driven economies could create enormous material abundance. If machines do almost all the work, goods and services may become cheaper and more accessible. The economic problem—who pays for us—might be solved through mechanisms like universal basic income, funded by the wealth AI generates. But the deeper challenge is meaning. For many, work is more than a paycheck; it’s a source of identity, pride, and purpose. When employment becomes optional or obsolete, what fills that void?

We’re already seeing the cracks. In one recent survey, 51% of organizations using AI reported at least one negative consequence, including inaccuracy and data privacy risks. Americans are responding with caution: 80% say they want AI safety rules, even if it means slowing progress. Enterprises now block 18.5% of AI-related transactions—a sign of growing apprehension and of the demand for robust safety measures. Yet these stopgaps don’t address the bigger question: what happens to society when work is no longer the backbone of daily life?

Rethinking Meaning in a Post-Work World

  • Personal fulfillment: With more free time, people may seek meaning in creativity, relationships, or community service. But not everyone will find it easy to redefine their purpose.
  • Societal roles: Our culture has long tied status and self-worth to jobs. Detaching from this mindset will require a major paradigm shift.
  • Financial support: Economic abundance is possible, but distribution and access will be key. Who decides how resources are shared?

As AI safety challenges and automation reshape the world, we’ll need to rethink not just our jobs, but our entire approach to meaning, purpose, and fulfillment. The next crisis may not be material scarcity, but a search for significance in a world where work is no longer central.


FAQ: Facing a World Where AI Can (Almost) Do It All

As we watch AI systems become more capable by the day, it’s natural to wonder what this means for our jobs, our skills, and the rules that are supposed to keep us safe. Here are some of the most pressing questions I hear—and the uncomfortable truths behind them.

Is any job safe from AI?

Honestly, almost no job is truly safe in the long run. Dr. Roman Yampolskiy and other experts have made it clear: the more advanced AI becomes, the more it can do. We’re not just talking about repetitive factory work or data entry. Creative fields, professional services, and even roles that require empathy or judgment are now in AI’s sights. The data backs this up—only 5% of organizations feel highly confident in their AI security preparedness, and 77% have experienced breaches in their AI systems over the past year. The AI governance implementation gap is real, and it means that even the people building these systems can’t guarantee anyone’s job is safe, including mine.

Should I still bother retraining?

Retraining is still valuable, but it’s not a permanent solution. The uncomfortable truth is that AI can often outpace newly trained humans very quickly. You might learn a new skill, only to find that AI can do it better or cheaper within a year or two. This doesn’t mean you should give up on learning, but it does mean we need to rethink what “job security” looks like in an AI-driven world. The pace of change is simply too fast for traditional retraining to keep up, especially when the AI safety challenges are evolving just as quickly.

What about regulations and ethics?

Lawmakers and companies are scrambling to catch up, but the reality is that regulations almost always lag behind technological capability. There’s a growing wave of AI risk mitigation efforts—AI-related legislative mentions rose 21.3% across 75 countries since 2023—but these are often patchwork solutions that can’t keep up with the risks. Public trust in AI companies is slipping, dropping from 50% to 47% recently. As one expert put it,

“Regulations and control mechanisms are essential, but they’re always chasing a moving target with AI.”

Companies have a legal obligation to their investors, not necessarily a moral or ethical one. Regulatory fines and legal action can help, but they’re not enough to close the gap. The AI governance implementation gap remains a major concern, and most organizations admit they’re not prepared for the security and ethical challenges ahead.

Conclusion: Living With Uncertainty

Despite a surge in new regulations and attempts at responsible development, AI’s rapid evolution means that true safety is always just out of reach. Many hope lawmakers will figure it out, but the lag is real, and the risks are growing. If you’re not in control, you’re unlikely to get the outcomes you want. The space of possible futures is vast, but the space of outcomes we’d actually like is tiny. That’s the uncomfortable truth of AI safety: no one’s job is truly safe, and the best we can do is stay informed, stay adaptable, and push for better oversight—even as we acknowledge that perfect control may never be possible.

TL;DR: AI safety is a bigger problem than most realize: not only are jobs at risk across every field, but current solutions are band-aids on a bullet wound. Superintelligent AI could change everything—whether or not we’re ready.
