When AI Does Too Much: How Over-Automation Affects Our Minds, Lives, and Work

Editor’s Note: This post is a focused version of my longer essay, The Hidden Costs of AI: What We Lose When We Over-Automate. Here, I look only at the AI and automation side—cognition, relationships, and jobs. If you’d like the bigger picture, including a deep dive into Universal Basic Income, you can read the full version here.


Artificial intelligence (AI) and automation seem to be everywhere. From self-driving cars to chatbots that can write poetry, it’s easy to marvel at the speed of progress. These tools promise convenience and efficiency—but every time we hand off a task to AI, we risk losing something human: our sharpness, our social bonds, and even our livelihoods.

In this article, I focus on the hidden costs of AI and automation: how over-reliance weakens our thinking, reshapes relationships, and disrupts jobs.

Cognitive Atrophy: How Over-Automation Weakens Human Thinking

Research shows that our brains thrive on challenge. Solving puzzles, wrestling with complex problems, memorizing information, and learning new skills later in life all strengthen neural pathways. But when we offload those tasks onto machines, we deprive ourselves of the mental “exercise” that keeps our minds strong.

Neurons that are used frequently develop stronger connections. Those that are rarely or never used eventually die.
— Kendra Cherry (Verywell Mind)

You’ve probably noticed the symptoms yourself. How often has a word been on the tip of your tongue but refused to surface, or an actor’s name escaped you until you gave up and turned to Google?

This isn’t new. Studies on “cognitive offloading” (for example, this one from 2011) show that when people rely on devices for memory or decision-making, their ability to recall and process information independently declines. Psychologists call this the Google Effect: when we know information is just a click away, we’re less likely to store it ourselves.

What’s troubling is that a long-term lack of cognitive engagement has been linked to dementia and other forms of cognitive decline. To be clear, I’m not aware of any evidence that AI itself causes these conditions, but the parallels are worth considering, especially when users rely entirely on AI outputs without investing any of their own brainpower.

Think of a student who is assigned an essay and, instead of researching and thinking through the material, simply feeds the question to an AI and submits whatever it generates. Just as muscles weaken from disuse, the brain risks atrophy when we habitually outsource thinking.

When AI can write our essays, solve our math problems, and even draft our emails, the temptation to stop thinking deeply is real. But without that strain, we lose not only memory and reasoning skills, but also creativity—the very thing machines still can’t replicate.

Artificial Intimacy: Why AI Companions Can’t Replace Human Connection

Another hidden cost is more subtle but no less significant: the erosion of human relationships. AI “companions” are booming, from chatbots like Replika to new offerings such as Ani, Rudy, and Bad Rudy from xAI. These digital entities promise companionship, affirmation, and even simulated intimacy.

The appeal is obvious. AI companions don’t argue, reject, or leave. They’re always available, endlessly patient, and tailored to your needs. For people who feel isolated, that’s a lifeline.

But here’s the problem: it’s a simulation of connection, not the real thing. Genuine human relationships require vulnerability, conflict, compromise, and empathy—messy experiences that help us grow. By substituting these with algorithmic stand-ins, we risk losing the very skills that make us socially resilient.

We’ve seen hints of this before. Social media promised connection across distances and new communities formed around shared interests. But over time, researchers have found that heavy use often leaves people—especially younger users—feeling more isolated and less satisfied with life.

Smartphones brought us the ability to communicate constantly, but they also introduced “phubbing,” the habit of ignoring the person in front of you to scroll through content on your phone. If you’ve ever sat at a dinner table where everyone ends up staring at their screens instead of talking, you know the effect firsthand. AI companions are the next logical step: not just mediating relationships, but replacing them outright.

Emerging evidence underscores the risks. A 2024 long-term study suggested that AI companions can ease loneliness for a while, in some cases almost matching the benefits of talking with another person. But newer research paints a more complicated picture. A 2025 randomized trial found that people who leaned heavily on AI chatbots often ended up lonelier, more emotionally dependent, and less engaged in real-world social life.

Mental health experts echo these concerns: the American Psychological Association warns that using generic AI chatbots as therapeutic substitutes poses public safety risks, and Stanford Medicine (2025) cautions against their use among children and teens due to the risks of emotional overdependence.

The broader social implications are just as troubling. A January 2025 report from the Ada Lovelace Institute found that 63.3% of users felt AI companions helped ease feelings of loneliness.

While easing loneliness sounds positive on the surface, the report raised a couple of red flags. One is that AI companions can diminish the interest or ability to engage in interpersonal relationships, partly because they are always available, and partly because the large language models behind them are trained to respond agreeably rather than to ground their answers in facts when that could lead to an argument. All of this can wear down the capacity to form deeper bonds with other humans.

As ideas of “artificial intimacy” spread, we risk becoming comfortable with shallow, one-sided attachments—and losing some of the empathy and community that only genuine human relationships can build and foster. Not only does this affect individuals, it reshapes the social fabric itself. If intimacy and trust become “programmable,” then the harder work of building empathy, community, and resilience risks being left behind.

| Technology | Promise | Reality / Research Findings | Key Risk |
| --- | --- | --- | --- |
| Social Media | Stay connected across distance; build community. | Heavy use linked to increased loneliness, depression, and reduced life satisfaction (Twenge & Campbell, 2018; University of Pennsylvania, 2018). | Normalization of shallow “likes” over deep bonds. |
| Smartphones | Constant communication; always reachable. | “Phubbing” (ignoring people for your phone) undermines relationship satisfaction (Roberts & David, 2016). | Erosion of face-to-face intimacy. |
| AI Companions | Personalized companionship, affirmation, simulated intimacy. | Short-term loneliness relief, but heavy reliance tied to dependency and reduced offline socialization (APA, 2025; Stanford, 2025). | Replacing authentic human bonds with artificial intimacy. |

Every new technology promises a deeper connection, yet history shows a consistent pattern of the opposite. Social media, smartphones, and now AI companions each began as tools of togetherness—but too often ended up eroding the very bonds they were meant to strengthen.

AI as a Tool, Not a Crutch: Using Technology Without Losing Ourselves

I use AI myself, and you might be thinking, “Wait. Isn’t that hypocritical?” I don’t think it is, because it doesn’t contradict the concerns I’ve raised. The problem isn’t using AI—it’s how we use it.

The calculator didn’t destroy math skills; it allowed us to focus on higher-level problem-solving once we learned the basics (though it is a bit scary that some people can’t even calculate 10% savings in their head). AI can function in a similar way: as a tool that enhances creativity and productivity, rather than replacing them.

That distinction is exactly how I approach AI. For me, AI is a tool for learning and a collaborator. It helps me with research and even large-scale data analysis, but one thing is non-negotiable for me: I critically evaluate every line of information AI provides and cross-check all claims.

I’ve noticed that ChatGPT often leans on outdated data or secondary sources, such as news articles quoting a study rather than the study itself. On the other hand, xAI/Grok tends to base outputs on posts from X, which can be opinionated and are not always neutral or fact-based.

However, as someone who studied Marketing, I still hold onto a lesson I learned early: never trust a table you haven’t manipulated yourself. Therefore, I always trace information back to its source—and, yes, that includes reading a lot of documents and essays. At the same time, I’ll sometimes link to articles instead of the original source if they put the research into plain, accessible language—provided they also link back to the original work. And if they don’t, I’ll aim to add that link myself.

I’ve also experimented with other AI tools, such as Pictory.ai, for video editing. While it helped with sourcing video content, I found that it doesn’t offer the same editing experience and creative flexibility as traditional editing software. And that’s what I am looking for: being able to express myself in my work, even if it’s not always polished (looking at my first video editing attempts) and might not win me a Pulitzer Prize.

In short, AI can assist and inspire, but it doesn’t replace the act of creating. The words, edits, and ideas are still mine. The final output—the finished piece—comes from me, crafted by my fingers, my voice, and my (still developing) editing skills... with the occasional help of Dict.cc when I can’t think of the correct English (or German for that matter) word. And that’s the point—AI can be helpful, but it’s not infallible. Without verification and critical thinking, it’s dangerously easy to pass along errors as truth.

The danger comes when we “just let it do its thing,” surrendering the thinking, the checking, and the decision-making. In that scenario, we stop being the driver and become the passenger, with no guarantee that the vehicle is headed where we want to go.

Practical Principles: How to Use AI Wisely Without Over-Relying on It

So how can we strike a balance? Here are a few principles worth considering:

  1. Stay in the loop. Use AI for brainstorming, summarizing, or outlining—but do the critical thinking and refining yourself.

  2. Double-check facts. AI is prone to “hallucinations.” Treat its outputs as a starting point, never the final word.

  3. Keep it human. Use AI to prepare for conversations or presentations, but don’t let it replace actual social interaction.

  4. Exercise your brain. Write by hand, do the math without a calculator, memorize a phone number—small acts of mental effort keep your brain sharp.

  5. Define boundaries. Decide where AI is helpful in your life and where it isn’t, rather than letting convenience creep into every corner.

  6. Credit the source. If you use AI to draft or spark ideas, be transparent about it—especially in professional, academic, or civic contexts.

  7. Protect your privacy. Be mindful of what you put into AI tools. Avoid sharing sensitive or personal data that you wouldn’t want stored or shared.

  8. Mind the originality. Don’t just copy what AI produces. Rework, rewrite, and reshape it so the final output reflects your own voice and judgment.

  9. Stay curious. Use AI as a way to spark learning, not as a shortcut to avoid it. Ask “why” and “how,” not just “what.”

These principles help us think about how we use AI day to day, but that’s only one side of the picture. The bigger story is what happens when AI reshapes whole industries—and it’s happening faster and on a broader scale than most past technologies. With that comes another set of hidden costs: unstable jobs, growing inequality, and the tricky question of how societies keep up.

The Automation Economy: Job Displacement, New Roles, and Rising Risks

Every technological revolution disrupts labor markets, but AI is different in both speed and scope. Where industrial machines replaced physical labor, AI targets cognitive and service work. That means entire swaths of employment — from drivers and warehouse workers to legal clerks and call center agents — are vulnerable.

Take the taxi industry. In cities like San Francisco, self-driving cars from companies such as Waymo already operate commercially. Waymo alone runs about 1,500 robotaxis and completes roughly 250,000 paid rides each week, a service that inevitably reduces demand for human drivers. Or consider call centers: the U.S. employs roughly 2.9 million customer service representatives, many of whom work in centralized contact operations. Yet AI is rapidly encroaching.

According to Gartner (2025), by 2029, agentic AI will autonomously resolve up to 80% of common customer service issues without human intervention. What once required thousands of human voices on the phone may soon be handled by algorithms—faster, cheaper, and with far fewer jobs attached.

The question isn’t whether jobs will change, but whether societies can adapt fast enough.

New technologies have long disrupted jobs—but so have policy and corporate choices. In 1900, about 40% of Americans worked in agriculture; by 2000, that share had plunged to under 2%, freeing labor for other sectors. Since the advent of digital technology around 1980, an estimated 3.5 million jobs have been displaced while around 19 million new ones have emerged—making technology responsible for roughly 10% of today’s workforce.

In manufacturing, the rise of industrial robots between 1993 and 2014 reduced employment by 3.7 percentage points for men and 1.6 percentage points for women, narrowing the gender employment gap—but not via advancement. At the same time, many of those “lost” jobs were outsourced to lower-wage countries, compounding the local impact. These examples show that disruption results not just from innovation, but from how society and corporations choose to deploy it.

In 2020, the World Economic Forum projected that by 2025, AI would displace 85 million jobs while creating 97 million new ones—a net gain of 12 million. On paper, that sounded balanced, even hopeful. But those 97 million “new” jobs would likely demand advanced digital skills many displaced workers didn’t have—and critically, the WEF never published follow-up data to confirm whether the forecasted gains actually materialized.

By 2023, WEF’s own tone had shifted. Its new report projected that by 2027, 83 million jobs would be eliminated while only 69 million would be created—a net loss of 14 million, equal to about 2% of the global workforce. Just two years later, in 2025, the pendulum swung back to optimism: 170 million jobs created and 92 million displaced by 2030, a net gain of 78 million.

| Year | Forecast Period | Jobs Displaced | Jobs Created | Net Impact | Source |
| --- | --- | --- | --- | --- | --- |
| 2020 Report | By 2025 | 85 million | 97 million | +12 million | WEF Future of Jobs Report 2020 |
| 2023 Report | By 2027 | 83 million | 69 million | –14 million (net loss) | WEF Future of Jobs Report 2023 summary |
| 2025 Report | By 2030 | 92 million | 170 million | +78 million | WEF Future of Jobs Report 2025 |

These dramatic swings—from net positive to net negative and back to net positive—highlight the problem. Forecasts are useful, but without accountability, they risk becoming moving targets. Unlike agriculture, computers, or manufacturing—where we can measure actual employment shifts—AI’s impact remains speculative.

I can’t help but wish there were an accountability mechanism in place: if institutions forecast disruption at this scale, shouldn’t they also track whether those predictions come true? Without measurement and verification, such projections risk becoming more about shaping perception than reflecting reality.

And beyond global forecasts, the early signals are already visible. In the U.S., Goldman Sachs estimates that 6–7% of the workforce could be displaced by AI, potentially leading to a 0.5 percentage point increase in unemployment during the transition. More immediately, payroll research from Stanford and ADP indicates that since 2022, employment for younger workers (ages 22–25) in AI-vulnerable sectors has already decreased by 6–13%. These numbers suggest that while long-term predictions bounce between optimism and pessimism, the short-term effects are already reshaping entry-level opportunities.

What happens when efficiency gains accrue to corporations while millions lose the means to earn a living? Without structural support—reskilling programs, safety nets, or new models of income distribution—automation risks widening inequality.

These shifts raise a tough question: if AI and automation keep reshaping work faster than societies can adapt, how do we protect people caught in the transition? Some argue that Universal Basic Income (UBI) could soften these shocks. That’s the subject of the companion article, but one point is clear: AI itself isn’t the enemy—it’s how we choose to use it.

AI Isn’t the Enemy—But How We Use It Matters

AI itself isn’t the enemy. The real danger comes when we stop asking questions, stop checking the answers, and stop doing the thinking ourselves. That’s when automation can quietly drain our creativity, fray our social ties, and put our livelihoods at risk. The answer isn’t to walk away from AI—it’s to use it carefully, as something that adds to our abilities rather than replaces them.

But technology doesn’t just affect us as individuals; it reshapes whole economies. As more jobs face automation, some people see Universal Basic Income (UBI) as a way to cushion the shockwaves. I take a closer look at that debate in a companion article. For now, the point is simpler: we still have a choice in the role AI plays—whether we guide it, or let it guide us.
