The Hidden Cost of AI: What We Lose When We Over-Automate

Editor’s Note: This article brings together two big conversations—AI & automation, and Universal Basic Income (UBI). If you’d rather focus on just one, don’t worry—shorter, stand-alone versions will be rolling out here later this week.


Artificial intelligence (AI) and automation seem to be everywhere. From self-driving cars to chatbots that can write poetry, it’s easy to marvel at the speed of progress. AI and automation promise convenience and efficiency, and, if the influencers are to be believed, we need even more of it.

But there’s a shadow to this story. Every time we hand off a task to AI, we risk losing something human: our cognitive sharpness, our livelihoods, and even our ability to connect with one another. However, like any powerful tool, AI can be used well—or misused. It’s a fine line between using AI to extend our skills and letting it replace them.

Cognitive Atrophy: How Over-Automation Weakens Human Thinking

Research shows that our brains thrive on challenges. Solving puzzles, wrestling with complex problems, memorizing information, or learning new skills later in life all strengthen neural pathways. But when we offload those tasks onto machines, we deprive ourselves of the mental “exercise” that keeps our minds strong.

Neurons that are used frequently develop stronger connections. Those that are rarely or never used eventually die.
— Kendra Cherry (Verywell Mind)

You’ve probably noticed the symptoms yourself. How often has a word sat on the tip of your tongue and refused to surface, or an actor’s name escaped you entirely, sending you straight to Google?

This isn’t new. Studies on “cognitive offloading,” such as a widely cited one from 2011, show that when people rely on devices for memory or decision-making, their ability to recall and process information independently declines. Psychologists call this the Google Effect: when we know information is just a click away, we’re less likely to store it ourselves.

What’s troubling is that the long-term lack of cognitive engagement has been linked to dementia and other forms of cognitive decline. To be clear, I’m not aware of any proof that AI itself causes these conditions, but the parallels are worth considering, especially when users rely entirely on AI outputs, without investing any of their brainpower.

Imagine a student who, tasked with writing an essay, skips the research and thinking and simply feeds the question to an AI, then submits whatever it generates. Just as muscles weaken from disuse, the brain risks atrophy when we habitually outsource thinking.

When AI can write our essays, solve our math problems, and even draft our emails, the temptation to stop thinking deeply is real. But without that strain, we lose not only memory and reasoning skills, but also creativity—the very thing machines still can’t replicate.

Artificial Intimacy: Why AI Companions Can’t Replace Human Connection

Another hidden cost is more subtle but no less significant: the erosion of human relationships. AI “companions” are booming, from chatbots like Replika to new offerings such as Ani, Rudy, and Bad Rudy from xAI. These digital entities promise companionship, affirmation, and even simulated intimacy.

The appeal is obvious. AI companions don’t argue, reject, or leave. They’re always available, endlessly patient, and tailored to your needs. For people who feel isolated, that’s a lifeline.

But here’s the problem: it’s a simulation of connection, not the real thing. Genuine human relationships require vulnerability, conflict, compromise, and empathy—messy experiences that help us grow. By substituting these with algorithmic stand-ins, we risk losing the very skills that make us socially resilient.

We’ve seen hints of this before. Social media promised connection across distances and new communities built around shared interests. But over time, researchers have found that heavy use often leaves people—especially younger users—feeling more isolated and less satisfied with life.

Smartphones brought us the ability to communicate constantly, but they also introduced “phubbing,” the habit of ignoring the person in front of you to scroll through content on your phone. If you’ve ever sat at a dinner table where everyone ends up staring at their screens instead of talking, you know the effect firsthand. AI companions are the next logical step: not just mediating relationships, but replacing them outright.

Emerging evidence underscores the risks. A 2024 long-term study suggested that AI companions can ease loneliness for a while, in some cases, almost matching the benefits of talking with another person. But newer research paints a more complicated picture. A 2025 randomized trial found that people who leaned heavily on AI chatbots often ended up lonelier, more emotionally dependent, and less engaged in real-world social life.

Mental health experts echo these concerns: the American Psychological Association warns that using generic AI chatbots as therapeutic substitutes poses public safety risks, and Stanford Medicine (2025) cautions against their use among children and teens due to the risks of emotional overdependence.

The broader social implications are just as troubling. A January 2025 report from the Ada Lovelace Institute found that 63.3% of users felt AI companions helped ease feelings of loneliness. While that sounds positive on the surface, the report raised several red flags. One is that AI companions can diminish users’ interest in, and ability to engage in, interpersonal relationships. Two reasons stand out: AI companions are always available, and the large language models behind them are trained to respond agreeably rather than stick to the facts when doing so might provoke an argument. Both tendencies can wear down the capacity to form deeper bonds with other humans.

As ideas of “artificial intimacy” spread, we risk becoming comfortable with shallow, one-sided attachments—and losing some of the empathy and community that only genuine human relationships can build and foster. Not only does this affect individuals, it reshapes the social fabric itself. If intimacy and trust become “programmable,” then the harder work of building empathy, community, and resilience risks being left behind.

| Technology | Promise | Reality / Research Findings | Key Risk |
| --- | --- | --- | --- |
| Social Media | Stay connected across distance; build community. | Heavy use linked to increased loneliness, depression, and reduced life satisfaction (Twenge & Campbell, 2018; University of Pennsylvania, 2018). | Normalization of shallow “likes” over deep bonds. |
| Smartphones | Constant communication; always reachable. | “Phubbing”—ignoring people for your phone—undermines relationship satisfaction (Roberts & David, 2016). | Erosion of face-to-face intimacy. |
| AI Companions | Personalized companionship, affirmation, simulated intimacy. | Short-term loneliness relief, but heavy reliance tied to dependency and reduced offline socialization (APA, 2025; Stanford, 2025). | Replacing authentic human bonds with artificial intimacy. |

Every new technology promises a deeper connection, yet history shows a consistent pattern of the opposite. Social media, smartphones, and now AI companions each began as tools of togetherness—but too often ended up eroding the very bonds they were meant to strengthen.

AI as a Tool, not a Crutch: Using Technology Without Losing Ourselves

I do use AI myself, and you might be thinking, “Wait. Isn’t that hypocritical?” I don’t think so, because using AI doesn’t contradict the concerns I’ve raised. The problem isn’t using AI—it’s how we use it.

The calculator didn’t destroy math skills; it allowed us to focus on higher-level problem-solving once we learned the basics (though it is a bit scary that some people can’t even calculate 10% savings in their head). AI can function in a similar way: as a tool that enhances creativity and productivity, rather than replacing them.

That distinction is exactly how I approach AI. For me, AI is a tool for learning and a collaborator. It helps me with research and even large-scale data analysis, but one thing is non-negotiable for me: I critically evaluate every line of information AI provides and cross-check all claims.

I’ve noticed that ChatGPT often leans on outdated data or secondary sources, such as news articles quoting a study rather than the study itself. On the other hand, xAI/Grok tends to base outputs on posts from X, which can be opinionated and are not always neutral or fact-based.

However, as someone who studied Marketing, I still hold onto a lesson I learned early: never trust a table you haven’t manipulated yourself. Therefore, I always trace information back to its source—and, yes, that includes reading a lot of documents and essays. At the same time, I’ll sometimes link to articles instead of the original source if they put the research into plain, accessible language—provided they also link back to the original work. And if they don’t, I’ll aim to add that link myself.

I’ve also experimented with other AI tools, such as Pictory.ai, for video editing. While it helped with sourcing video content, I found that it doesn’t allow for the same level of editing control and creative flexibility as traditional editing software. And that’s what I am looking for—being able to express myself in my work, even if it’s not always polished (looking at my first video editing attempts) and might not win me a Pulitzer Prize.

In short, AI can assist and inspire, but it doesn’t replace the act of creating. The words, edits, and ideas are still mine. The final output—the finished piece—comes from me, crafted by my fingers, my voice, and my (still developing) editing skills... with the occasional help of Dict.cc when I can’t think of the correct English (or German for that matter) word. And that’s the point—AI can be helpful, but it’s not infallible. Without verification and critical thinking, it’s dangerously easy to pass along errors as truth.

The danger comes when we “just let it do its thing,” surrendering the thinking, the checking, and the decision-making. In that scenario, we stop being the driver and become the passenger, with no guarantee that the vehicle is headed where we want to go.

Practical Principles: How to Use AI Wisely Without Over-Relying on It

So how can we strike a balance? Here are a few principles worth considering:

  1. Stay in the loop. Use AI for brainstorming, summarizing, or outlining—but do the critical thinking and refining yourself.

  2. Double-check facts. AI is prone to “hallucinations.” Treat its outputs as a starting point, never the final word.

  3. Keep it human. Use AI to prepare for conversations or presentations, but don’t let it replace actual social interaction.

  4. Exercise your brain. Write by hand, do the math without a calculator, memorize a phone number—small acts of mental effort keep your brain sharp.

  5. Define boundaries. Decide where AI is helpful in your life and where it isn’t, rather than letting convenience creep into every corner.

  6. Credit the source. If you use AI to draft or spark ideas, be transparent about it—especially in professional, academic, or civic contexts.

  7. Protect your privacy. Be mindful of what you put into AI tools. Avoid sharing sensitive or personal data that you wouldn’t want stored or shared.

  8. Mind the originality. Don’t just copy what AI produces. Rework, rewrite, and reshape it so the final output reflects your own voice and judgment.

  9. Stay curious. Use AI as a way to spark learning, not as a shortcut to avoid it. Ask “why” and “how,” not just “what.”

These principles are helpful for thinking about how we use AI in our day-to-day, but that’s only one side of the picture. The bigger story is what happens when AI reshapes whole industries—and it’s happening faster and on a broader scale than most past technologies. With that comes another set of hidden costs: unstable jobs, growing inequality, and the tricky question of how societies keep up.

The Automation Economy: Job Displacement, New Roles, and Rising Risks

Every technological revolution disrupts labor markets, but AI is different in both speed and scope. Where industrial machines replaced physical labor, AI targets cognitive and service work. That means entire swaths of employment — from drivers and warehouse workers to legal clerks and call center agents — are vulnerable.

Take the taxi industry. In cities like San Francisco, self-driving cars from companies such as Waymo and Cruise already operate commercially. Waymo alone runs about 1,500 robotaxis, completing 250,000 paid rides each week, a service that inevitably reduces demand for human drivers. Or consider call centers: the U.S. employs roughly 2.9 million customer service representatives, many of whom work in centralized contact operations. Yet AI is rapidly encroaching.

According to Gartner (2025), by 2029, agentic AI will autonomously resolve up to 80% of common customer service issues without human intervention. What once required thousands of human voices on the phone may soon be handled by algorithms—faster, cheaper, and with far fewer jobs attached.

The question isn’t whether jobs will change, but whether societies can adapt fast enough.

New technologies have long disrupted jobs—but so have policy and corporate choices. In 1900, about 40% of Americans worked in agriculture; by 2000, that share had plunged to under 2%, freeing labor for other sectors. Since the advent of digital technology around 1980, an estimated 3.5 million jobs have been displaced while around 19 million new ones have emerged—meaning technology-driven jobs now account for roughly 10% of today’s workforce.

In manufacturing, the rise of industrial robots between 1993 and 2014 reduced employment by 3.7 percentage points for men and 1.6 percentage points for women, narrowing the gender employment gap—but not via advancement. At the same time, many of those “lost” jobs were outsourced to lower-wage countries, compounding the local impact. These examples show that disruption results not just from innovation, but from how society and corporations choose to deploy it.

In 2020, the World Economic Forum projected that by 2025, AI would displace 85 million jobs while creating 97 million new ones—a net gain of 12 million. On paper, that sounded balanced, even hopeful. But those 97 million “new” jobs would likely demand advanced digital skills many displaced workers didn’t have—and critically, the WEF never published follow-up data to confirm whether the forecasted gains actually materialized.

By 2023, WEF’s own tone had shifted. Its new report projected that by 2027, 83 million jobs would be eliminated while only 69 million would be created—a net loss of 14 million, equal to about 2% of the global workforce. Just two years later, in 2025, the pendulum swung back to optimism: 170 million jobs created and 92 million displaced by 2030, a net gain of 78 million.

| Report | Forecast Period | Jobs Displaced | Jobs Created | Net Impact | Source |
| --- | --- | --- | --- | --- | --- |
| 2020 | By 2025 | 85 million | 97 million | +12 million | WEF Future of Jobs Report 2020 |
| 2023 | By 2027 | 83 million | 69 million | −14 million (net loss) | WEF Future of Jobs Report 2023 summary |
| 2025 | By 2030 | 92 million | 170 million | +78 million | WEF Future of Jobs Report 2025 |

The dramatic swings in these forecasts—from net positive to net negative, back to net positive—highlight the problem. Forecasts are useful, but without accountability, they risk becoming moving targets. Unlike agriculture, computers, or manufacturing—where we can measure actual employment shifts—AI’s impact remains speculative.

I can’t help but wish there were an accountability mechanism in place: if institutions forecast disruption at this scale, shouldn’t they also track whether those predictions come true? Without measurement and verification, such projections risk becoming more about shaping perception than reflecting reality.

And beyond global forecasts, the early signals are already visible. In the U.S., Goldman Sachs estimates that 6–7% of the workforce could be displaced by AI, potentially leading to a 0.5 percentage point increase in unemployment during the transition. More immediately, payroll research from Stanford and ADP indicates that since 2022, employment for younger workers (ages 22–25) in AI-vulnerable sectors has already decreased by 6–13%. These numbers suggest that while long-term predictions bounce between optimism and pessimism, the short-term effects are already reshaping entry-level opportunities.

What happens when efficiency gains accrue to corporations while millions lose the means to earn a living? Without structural support—reskilling programs, safety nets, or new models of income distribution—automation risks widening inequality.

These shifts raise a tough question: if AI and automation keep reshaping work faster than societies can adapt, how do we protect people caught in the transition? One increasingly popular answer is Universal Basic Income (UBI). Supporters see it as a safety net for a volatile future, while critics warn it could become a risky crutch with hidden costs.

Universal Basic Income: Solution to AI Job Loss or Risky Crutch?

As jobs become increasingly unstable, some have proposed Universal Basic Income (UBI) as a means to provide people with financial security in an AI-driven economy. Figures like Elon Musk and Geoffrey Hinton argue that the sheer productivity gains from AI could make some form of UBI unavoidable. Others take the idea even further. In philosophical frameworks such as cognitarism, the claim is that once cognition itself—through AI—becomes the core engine of production, income can no longer be tied to traditional work.

Even a modest UBI carries an enormous price tag at the national scale. A commonly cited figure is that a $1,000/month UBI would cost about $3.8 trillion per year. That estimate assumes payments to every U.S. resident, including children. If the program were restricted to adults only—about 260 million people—the cost would be closer to $3.1 trillion annually. Either way, the scale is staggering: between 10% and 13% of GDP, depending on design.
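If you want to sanity-check those figures yourself, the back-of-envelope math is short. The sketch below uses rounded population and GDP assumptions (260 million adults, roughly $29 trillion in GDP), not official estimates:

```python
# Back-of-envelope check on the UBI cost figures above.
# Population and GDP values are rounded assumptions for illustration.
MONTHLY_UBI = 1_000           # dollars per person per month
ADULTS = 260_000_000          # approximate U.S. adult population
GDP = 29_000_000_000_000      # approximate U.S. GDP, in dollars

annual_cost_adults = MONTHLY_UBI * 12 * ADULTS
print(f"Adults-only UBI: ${annual_cost_adults / 1e12:.1f} trillion/year")

# Compare both cited estimates (adults-only vs. all residents) to GDP.
for cost in (3.1e12, 3.8e12):
    print(f"${cost / 1e12:.1f}T is about {cost / GDP:.0%} of GDP")
```

Run it and the adults-only total lands at roughly $3.1 trillion, and both estimates fall in the 10–13%-of-GDP range cited above.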

And because I live in Maine, I have to look at it from this angle as well. The often-touted UBI figure of $1,000 per month (about $12,000/year) may sound generous—but it’s far below the real cost of living in Maine. According to MIT’s Living Wage Calculator, in 2025, a single adult in Maine actually needs around $48,292/year to cover basic expenses. GOBankingRates estimates that a minimally comfortable lifestyle requires $108,287/year, while SmartAsset places the ‘comfort’ benchmark closer to $97,000/year.

With Maine’s average annual salary landing just north of $70,000, many people still fall short of even the modest benchmarks. Now, envision being someone replaced by AI and unable to find a new job. UBI would put $12,000/year in your pocket—roughly 75% short of the annual minimum needed to cover basic expenses.
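The shortfall arithmetic is easy to verify. This sketch uses the $12,000/year UBI figure and the MIT Living Wage Calculator estimate quoted above:

```python
# How far a $12,000/year UBI falls short of Maine's basic cost of living.
# Living-wage figure as quoted from MIT's Living Wage Calculator (2025).
UBI_ANNUAL = 12_000
MAINE_LIVING_WAGE = 48_292    # single adult, basic expenses, per year

coverage = UBI_ANNUAL / MAINE_LIVING_WAGE
shortfall = 1 - coverage
print(f"UBI covers about {coverage:.0%} of basic expenses")
print(f"Shortfall: about {shortfall:.0%}")
```

The result: UBI covers only about a quarter of basic expenses, leaving a gap of roughly 75%.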

And this isn’t just a Maine problem. Across all 50 states, the median household income is below the cost-of-living threshold, according to GOBankingRates, as you can see in the table below.

However, even if UBI sounds appealing in theory, the central question remains: who will pay for it? Below, I’ll outline the main funding options at a high level. If you want to dig deeper, you can expand each section to view the details, examples, and trade-offs.

  • Because broad-based taxes like VAT or sales taxes apply to what people spend, they fall hardest on those with the least to spare. Lower-income households often spend nearly all of what they earn just to cover their basic needs, so a tax on spending takes up a much larger share of their income.

    People with higher incomes have more room to save or invest, so less of their paycheck gets eaten up by sales taxes. That’s why the OECD points out these kinds of taxes hit lower-income households harder when you look at them against income, even if they look more even when measured against spending.

    Now, here’s the kicker: even a 10% VAT wouldn’t get close to paying for a national UBI. In 2023, Americans spent about $19 trillion on goods and services. Taxing that at 10% would bring in around $1.9 trillion. Huge, yes — but still only about half of what UBI would cost in a single year.

    And remember, this wouldn’t replace existing sales taxes. It would sit on top of them. In Maine, the state sales tax is 5.5%. Add a 10% VAT, and you’re suddenly looking at 15.5% tax on most purchases. And Maine isn’t even the worst case. In states like Tennessee or Louisiana, where state and local sales taxes already push 9.5% and 10% respectively, adding a VAT would bring the total to around 20% every time you check out.

    So while a VAT could help raise money, it’s nowhere near enough on its own, and it will be felt mainly by those who already struggle to make ends meet. That raises a bigger question: how much room does the U.S. even have to raise taxes overall? In 2023, the U.S. tax-to-GDP ratio was approximately 25%, compared to an OECD average of around 34%. On paper, that gap shows there’s fiscal space to collect more—other advanced economies routinely collect a larger share of their GDP in taxes without collapsing under the weight of it. The U.S. could, in theory, do the same. In practice, moving even a few percentage points of GDP into new revenue is politically very difficult—and the way it’s designed matters enormously for who ends up paying the bill.

    Of course, debates over cost only tell part of the story. To see how UBI plays out in practice, it helps to look at real-world experiments.

    Finland actually tested the idea of UBI. From 2017 to 2018, it provided 2,000 unemployed people with €560/month, funded through regular tax revenues. The results were mixed: participants reported less stress and higher well-being, but employment levels remained largely unchanged, and policymakers ended the trial, citing its limited design and the high costs associated with scaling it.

    In the U.S., Andrew Yang’s 2020 “Freedom Dividend” didn’t get off the ground, but it was the first time UBI entered the mainstream presidential debate. His plan proposed $1,000/month for every adult, funded mainly by a new 10% VAT alongside program consolidation. Independent analyses found that the approach would fall far short and risk hitting lower-income households hardest unless carefully offset.

    The risk is clear: unless carefully designed, general taxation could leave lower- and middle-class taxpayers funding their own displacement — paying higher taxes to finance UBI while already struggling with job loss or wage stagnation.

  • Bill Gates popularized the idea of a “robot tax” in 2017: if a human worker pays income tax, a robot that replaces them should be taxed similarly, with the proceeds used to fund training or caregiving jobs. It never moved past the discussion stage, but it framed automation as something that should share its gains.

    Recent precedent shows how fragile such taxes are under corporate pressure. Seattle’s 2018 “head tax” on large employers—designed to fund homelessness services through a $275/employee tax on large firms—was repealed within a month after Amazon and others lobbied fiercely and threatened to halt expansion plans. In Europe, France and others adopted Digital Services Taxes (DSTs) aimed at Big Tech; the U.S. threatened tariffs and negotiated rollbacks/delays. Meanwhile, platforms visibly passed costs on: Amazon added a 2% surcharge to UK sellers, Google raised ad rates in the UK, Austria, and Turkey in line with local DSTs.

    Here’s where the hypocrisy shows. While the Trump administration fought DSTs abroad, calling them unfair and anti-consumer, Maine just adopted its own DST-lite: a tax on streaming services like Netflix and Hulu. As with European DSTs, the cost doesn’t fall on the corporations but on residents, who see their monthly bills rise. Consumers have no veto power here either. It’s a reminder that while national politicians rail against one form of digital taxation, local governments are happy to adopt similar measures at home—and the impact on ordinary people is the same.

    And that’s the larger problem with “robot” or digital service taxes: who really pays? While these policies are pitched as making corporations and the wealthy “pay their fair share,” history shows the costs are often shifted down to the very people such programs are meant to help. Whether through higher subscription fees, increased prices, or reduced investment, the practical burden rarely stays at the corporate level.

This isn’t just about taxes. History shows that even major regulatory frameworks can be rolled back after sustained lobbying. After the 2008 financial crisis, banks, mutual funds, hedge funds, and credit card companies, among others, poured billions into lobbying against the Dodd-Frank reforms. A decade later, Congress gave in to some of the pressure: in 2018, it raised the threshold for “enhanced supervision” from $50 billion to $250 billion in assets, loosening oversight for mid-sized banks. It’s a reminder that even landmark laws can be reshaped once powerful industries exert enough influence.

    The pattern is clear: with sufficient lobbying funds, corporations often prevail. This raises a dangerous question: if UBI is funded through corporate or wealth taxes, what happens when those corporations succeed in gutting or dodging the system? Without robust safeguards — and ideally international coordination — citizens could end up dependent on a revenue stream that’s politically reversible at the stroke of a pen.

  • On the surface, the idea of consolidating existing welfare programs into a single universal payment may sound promising. Less paperwork and fewer agencies could mean that more money would go directly to the people. And a look at the numbers is promising: In fiscal year 2024, Social Security alone cost approximately $1.5 trillion, Medicare and Medicaid together roughly $2 trillion, and SNAP another $99.8 billion. Combined, that’s roughly $3.6 trillion—nearly the same as the estimated $3.8 trillion annual cost of a $1,000/month UBI for every U.S. resident.

    But the numbers are misleading. These programs serve very different purposes.

    • Social Security is retirement and disability insurance. Eliminating it to fund UBI would strip away the guaranteed pensions and disability payments millions rely on after decades of payroll tax contributions.

    • Medicare and Medicaid provide healthcare coverage. Replacing them with cash would leave seniors and low-income families paying out of pocket. With U.S. healthcare spending averaging about $13,500 per person annually, a $12,000 UBI wouldn’t even cover medical costs.

    • SNAP ensures targeted food security, which cash may not replicate as effectively, as the recipients could spend cash on anything they deem more important than food.

    The per-capita math also doesn’t balance. Redistributing $3.6 trillion across all ~340 million U.S. residents would yield about $10,600 per person per year, still less than the $12,000 UBI benchmark. In other words, even if you wiped out Social Security, Medicare, Medicaid, and SNAP, you’d still fall short—and you’d create new crises in healthcare, retirement, and food security.

    Just take a look at SNAP. Its growth over the decades is largely attributed to structural shifts. In 1970, the program’s benefits totaled approximately $577 million (roughly $4.5 billion in today’s dollars, adjusted for inflation). By 2024, that figure had climbed to above $100 billion. The increase stems from several factors: more people receiving benefits, broader eligibility, repeated boosts during recessions, and, most recently, the 2021 update to the Thrifty Food Plan, which raised benefits by approximately 21% on a permanent basis. What hasn’t changed is efficiency. SNAP still spends the vast majority of its budget directly on benefits—over 90 cents of every dollar goes straight to households, with the small remainder covering things such as eligibility checks and fraud prevention.

    And real-world attempts to simplify benefits are rocky. The UK’s Universal Credit, which consolidated six different welfare programs, was plagued by payment delays, IT glitches, and hardship for claimants. The National Audit Office concluded the system wasn’t delivering value for money. The lesson: streamlining often creates new complexity in practice.

    There’s also a deeper equity issue. Vulnerable groups, such as people with disabilities, older people in long-term care, or families with housing insecurity, often need tailored support. A one-size-fits-all payment risks leaving those with the most specific needs worse off.

    So, while redirecting welfare spending is politically appealing, it rarely generates enough funding to sustain a UBI—and here too, we have to ask: who really pays? All of these programs are already taxpayer-funded. Redirecting them into a universal check doesn’t make the money free—it simply repackages existing burdens. For lower- and middle-income households, that can mean losing targeted support while still paying the taxes that keep the system afloat. In other words, “redirecting welfare” often risks recycling costs back onto the very families who are most at risk of being left behind.

  • The Alaska Permanent Fund Dividend (PFD) is the U.S.’s closest brush with UBI. Since 1982, it has paid all state residents an annual check from oil revenues. However, the yearly payouts are volatile: $3,284 in 2022 (including a special energy relief payment), $1,312 in 2023, and $1,702 in 2024—all nowhere near the $12,000/year UBI benchmark. That volatility shows both the appeal and the fragility of tying income to resource markets.

    Norway’s Government Pension Fund Global (GPFG)—now worth nearly $2 trillion—shows what success looks like: it invests oil revenues abroad, follows a strict fiscal rule (spends only ~3% of the fund’s expected real return annually), and is guided by an independent Council on Ethics. These structures insulate it from political meddling and ensure long-term stability. It funds public services and stabilizes the economy without being raided for short-term goals.

    But many other resource-based funds have failed. Venezuela’s oil wealth funds effectively collapsed under corruption and mismanagement, leaving citizens worse off than before. The Carnegie Endowment and the IMF both note that transparency and the rule of law are the main determinants of whether sovereign wealth funds become sustainable public assets or political piggy banks.

    Dividends can give citizens a direct stake in national prosperity, but the governance risk is real: without safeguards, the “dividend” can vanish as quickly as the revenue it relies on.

    And here again, the fairness question matters: who really pays when these funds fail or revenues dry up? In boom years, citizens enjoy generous dividends. In bust years, payouts shrink or vanish — but the cost of mismanagement doesn’t disappear. Residents may face service cuts, higher taxes, or inflation when governments raid funds to cover shortfalls. In other words, without careful governance, the “shared wealth” model can leave ordinary citizens carrying the burden of political failures while elites benefit.

  • VATs are a workhorse of global tax systems, already accounting for about 21% of total tax revenues across OECD countries. They are relatively easy to administer, hard to evade, and raise large sums. That’s why Andrew Yang leaned heavily on a 10% VAT to fund his proposed “Freedom Dividend.”

    But VATs are also politically toxic in the U.S. and regressive without design tweaks. Lower-income households spend a greater share of their income on consumption, so a VAT hike hits them harder. Many European countries mitigate this issue with rebates, zero-rating of essentials (such as food), or income-based credits; however, these measures can complicate the system.

    One practical advantage over the U.S. sales tax system is transparency: in Europe, VAT must be included in the sticker price by law. The price you see on the shelf is what you pay at the checkout. By contrast, U.S. consumers see sales tax added at the register—a reminder that while VAT is often criticized, it can at least be more straightforward for shoppers.

    But the key question remains the same as with digital taxes: who really pays? While VAT is formally levied on businesses, it is almost always passed through to consumers. That means the very people UBI is intended to protect—especially lower- and middle-income households—would shoulder a disproportionate share of the cost unless careful offsets are built in. Politicians often speak loudly about tackling inequality, yet when it comes to revenue, they rarely shy away from regressive instruments that shift the burden downward.

    France and Germany have both debated adjusting VAT to fund social programs, but VAT increases are deeply unpopular because they are highly visible at the checkout counter. In a UBI context, unless carefully designed with credits or rebates, a VAT could end up undermining the very equity goals it was supposed to advance.

  • Some argue that if corporations are driving automation and reaping its profits, they should share directly with the citizens displaced. Proposals range from mandatory profit-sharing to public ownership stakes in large tech firms, or even “data dividends”—compensating people for the value created by their data. California’s governor floated such an idea in 2019, though it never moved forward.

    The math, however, shows the limits. In 2024, the five most profitable U.S. tech firms—Apple ($93.7B), Microsoft ($88.1B), Alphabet ($100.1B), Meta ($62.4B), and Amazon ($59.2B)—reported combined net income of about $403 billion. Even if the government skimmed 5% of those profits (≈$20 billion), spread across ~260 million U.S. adults, the payout would be around $75–$80 per person per year—barely a drop in the bucket compared to a $12,000/year UBI.

    And even raising that kind of money isn’t straightforward. When the UK introduced a windfall profits tax on North Sea oil and gas companies in 2022, several firms cut or redirected investment abroad—a reminder of how quickly corporations can restructure to blunt national taxes.

    That’s why some point to international coordination as the only way forward. The OECD’s 15% global minimum corporate tax, now rolling out, is one example of what anti-avoidance looks like in practice. It’s expected to raise about $150 billion annually worldwide. However, even if the U.S. captured a generous share, that would still cover only a small fraction of the $3.8 trillion needed annually for a national UBI.

    To me, it seems clear that the big risk is avoidance: if profit-sharing or public stakes aren’t international in scope, firms can simply shift profits offshore, restructure to dodge obligations, or lobby governments into watering down rules.

    So, while corporate participation can align incentives—tying citizen welfare directly to automation-driven profits—the scale is nowhere near enough to fund a full UBI. And here again the question emerges: who really pays? History shows that even when governments aim taxes at corporations, the costs rarely stay there. When the UK introduced a 2% digital services tax, Amazon and Google both passed the cost straight through—Amazon by adding a surcharge on sellers’ fees, and Google by raising ad rates in line with the tax. Similar pushback followed the UK’s 2022 energy windfall tax, where companies warned they would scale back investment or, as researchers suggest, shift costs down the supply chain, ultimately to consumers. Such costs tend to resurface as higher prices, reduced wages, or cuts to investment. In other words, the shortfall would almost certainly be passed on to taxpayers and hit those least able to bear it: lower- and middle-income households.
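As a sanity check on the profit-sharing arithmetic above, here is a minimal back-of-the-envelope sketch. The company figures are the 2024 net-income numbers cited earlier; the 5% skim rate and the ~260 million adult population are the same illustrative assumptions, not a policy proposal.

```python
# Back-of-the-envelope check of the corporate profit-sharing math.
# Net income figures (billions of USD, 2024) as cited in the text.
profits_bn = {
    "Apple": 93.7,
    "Microsoft": 88.1,
    "Alphabet": 100.1,
    "Meta": 62.4,
    "Amazon": 59.2,
}

total_bn = sum(profits_bn.values())   # combined net income, ~$403.5B
skim_rate = 0.05                      # illustrative 5% levy on profits
adults = 260_000_000                  # approximate U.S. adult population

pool = total_bn * 1e9 * skim_rate     # total raised, ~$20B
per_person = pool / adults            # annual payout per adult
coverage = per_person / 12_000        # fraction of the $12,000/year benchmark

print(f"Combined profits: ${total_bn:.1f}B")
print(f"5% skim: ${pool / 1e9:.1f}B -> ${per_person:.0f} per adult per year")
print(f"That covers {coverage:.2%} of a $12,000/year UBI")
```

Running the numbers confirms the article’s range of roughly $75–$80 per adult per year, i.e., well under 1% of the benchmark UBI.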

Each model has trade-offs. Critics warn that UBI could become a risky crutch—politically unstable, prone to underfunding, or even used as a substitute for real structural reforms. Mo Gawdat, former Google X executive, cautions that UBI on its own may entrench inequality and dependency by concentrating wealth and power at the top, while overlooking the need for reskilling and meaningful human opportunities.

UBI may one day play a role as a buffer against AI’s disruptions, but it cannot be a silver bullet. Unless tied to broader reforms—such as education, skill development, and fairer wealth distribution—it risks becoming a band-aid over a widening wound.
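The regressivity point from the VAT discussion above also comes down to simple arithmetic: because lower-income households consume most of what they earn, a flat consumption tax claims a larger share of their income. A minimal sketch, using made-up round numbers for incomes and consumption shares (only the 10% rate comes from Yang’s proposal):

```python
# Illustrative only: why a flat VAT is regressive as a share of income.
# Incomes and consumption shares are hypothetical round numbers, not data.
VAT_RATE = 0.10  # the 10% rate from Yang's "Freedom Dividend" proposal

households = [
    # (label, annual income, share of income spent on VAT-taxable consumption)
    ("low income", 30_000, 0.95),
    ("middle income", 80_000, 0.75),
    ("high income", 300_000, 0.40),
]

burdens = {}
for label, income, spend_share in households:
    vat_paid = income * spend_share * VAT_RATE
    burdens[label] = vat_paid / income  # effective rate on income, not spending
    print(f"{label:>13}: ${vat_paid:>6,.0f} VAT = {burdens[label]:.1%} of income")
```

Under these assumptions the low-income household pays an effective 9.5% of income versus 4.0% for the high-income one, which is exactly the gap that rebates, zero-rating, or income-based credits are meant to close.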

The Price of Over-Automation: What We Lose if AI Replaces Too Much

Artificial intelligence isn’t inherently good or bad. It’s a tool—powerful, flexible, and transformative. However, if we lean too heavily on automation, we risk not only dulling our own minds but also stripping meaning from work and weakening our social bonds. We also create financial strains that ripple through the entire system.

The debate over UBI shows what’s at stake. Replacing lost wages with public payments could cost trillions of dollars each year, forcing difficult choices about who pays and how. Without careful design, those costs fall back on the same households already under pressure—through higher taxes, higher prices, or weaker safety nets.

So the real challenge isn’t just whether AI will replace us in the workplace. It’s whether we let it hollow out the qualities that make life worth living while shifting the bill to those least able to pay.

If we treat AI as a lever—something that extends our reach rather than substitutes for us—we preserve not only our creativity, dignity, and connections, but also a more stable economic foundation. The choice is ours, and the costs of getting it wrong will be paid in more than just dollars.
