AI Will Not Replace You. People Using AI Might.

May 15, 2026 · by Kenney Myers · 16 min read

A product manager I know opened an AI app builder on Tuesday afternoon, typed a paragraph describing the internal tool he wanted, and twenty minutes later had something running on a real URL. He dropped the link into Teams. Fire emojis followed. That night over dinner he told a friend he’d built an app this week, and the friend was impressed, and at no point did anyone ask him how it was made.

Same week, a marketing manager at the same company used an AI tool to spin up a podcast episode summarizing industry news. Two AI hosts, conversational, surprisingly listenable. He posted it. Within the hour someone in the comments called it “AI slop”. A podcaster friend slid into his DMs to express professional disappointment. He spent his evening explaining himself to people who didn’t ask for an explanation.

Same week, same company, same category of tool, completely different reception.

That’s the conversation happening in every industry right now, and most of the people having it haven’t noticed they’re arguing with themselves.

I’ve spent more than thirty years building software and have taken several technology companies to successful exits. Along the way I’ve also written young adult fiction and performed as an actor, and I now run an AI company whose software is in production at some of the largest enterprises in the world. So I live on both sides of this argument. I’ve shipped code, and I’ve shipped creative work where authorship is the whole point. The contradiction below is one I watch up close every day.

The numbers are not up for debate

This isn’t a fringe phenomenon anymore. The volume of AI-generated output across every category of human work has gotten too large to dismiss, and the social rules around accepting it have gotten too inconsistent to defend.

On the software side, 84% of developers globally now use or plan to use AI coding tools.[1] How much of the actual code shipped to production was written by AI is harder to pin down: developer surveys put the figure around 41% to 42%, while the most rigorous empirical study tracking 4.2 million developers measured AI-authored production code at closer to 27%.[2] The truth is probably somewhere in that range, and either way, it’s a large and growing share. Industry research firms forecast the AI code tools market growing from roughly $7.7 billion in 2025 to about $22 billion by 2030.[3] The Swedish startup Lovable, one of the loudest examples, raised $330 million at a $6.6 billion valuation in late 2025 after crossing $200 million in annual recurring revenue in just over a year, and reports that more than 100,000 new projects are created on its platform every day.[4] Among the people building those projects, available surveys suggest a strong majority are not professional developers.[5] They’re founders, product managers, marketers, and operators who finally got tired of waiting in line for engineering bandwidth.

On the podcast side, the scale is roughly similar but the cultural reception is inverted. Data from the Podcast Index, reported by Bloomberg, found that 39% of newly listed podcasts over a nine-day window in 2026 showed signs of being AI-generated.[6] Inception Point AI, which runs a network called Quiet Please, ships more than 3,000 episodes a week across roughly 5,000 shows with a staff of eight, at a cost of about a dollar an episode.[7] That isn’t a typo. A dollar buys the entire episode.

So, at roughly the same time and roughly the same scale, society has collectively decided that one of these is the future and the other one is podslop. Nobody can quite explain why, but the verdict is in.

The disclosure rules don’t add up

Go industry by industry and the contradiction gets hard to ignore.

In publishing, Amazon Kindle Direct Publishing requires authors to declare AI-generated text, images, and translations at the point of upload.[8] Forget to check the box and you’ve committed a policy violation against the largest book retailer on earth.

In visual art, the U.S. Copyright Office has held a clear position: a work generated by an AI system without sufficient human creative contribution isn’t eligible for copyright protection, because a machine isn’t an author.[9] Several people have tried to challenge that, and none have won.

In academic publishing, Science treats undisclosed AI-generated text as misconduct. JAMA built an entire submission framework around AI disclosure.[10]

In games, Valve rewrote the Steam developer agreement to require explicit disclosure of AI use.[11]

Now look at software. Where’s the equivalent? There’s no App Store badge certifying that a piece of software was written by humans. There’s no public registry showing what percentage of an enterprise SaaS codebase came out of an AI model. There’s no law saying a vibe-coded application can’t claim authorship.

It doesn’t exist. The bank app you used this morning doesn’t disclose how much of its codebase was AI-generated. The startup pitching your company tomorrow won’t mention which features were prompted into existence. And here’s the part that should make everyone uncomfortable: most users wouldn’t care if they did.

That last fact is the entire problem, and it’s also where the real opportunity lives.

Why software gets a pass

I’ve got four theories on this. None of them flatter the people doing the judging, including me.

Software was always invisible. When you use an app, you don’t see the code. You don’t see the developer. You experience the result. A novel is the act of creation made visible. A podcast is a human voice in your ears doing the thing in real time. The intimacy of creative output makes substitution feel like a violation in a way that swapping out a backend service never will. You can replace the plumbing in a house without anyone crying about the plumber.

Software is treated as a means rather than an end. Nobody listens to a database for pleasure. Nobody hangs a JSON schema on the wall (other than a developer like me). Code has historically been treated as infrastructure: important, but not personal. A novel, a song, a painting, a podcast: those are understood as a person expressing something. The fact that this is a profound category error, and that good software is absolutely a form of expression with taste and judgment running through every line of it, doesn’t change the public perception. It only means the perception is wrong, which is its own separate problem and one I’ve watched developers lose arguments about for thirty years.

Software used difficulty as a status moat. For decades, knowing how to code was the membership card. It separated those who got to build from those who didn’t. Removing the moat feels democratic if you were on the wrong side of it and offensive if you were on the right side. Most of the loud celebration around vibe coding comes from people who were locked out of building for their entire careers. Most of the loud panic around AI podcasts comes from people who spent years building an audio production craft. That’s not a coincidence.

People are comfortable replacing labor they don’t personally identify with. This is the one nobody wants to say out loud. The acceptability of AI replacement maps almost perfectly to whether the person making the judgment is the one being replaced. Software developers are mostly fine with AI writing code because they’re the ones holding the tool. Podcasters are mostly upset about AI podcasts because someone else is holding the tool and pointing it at their job. The day the tool gets pointed somewhere new, the moral calculus updates fast.

The “am I equally skilled” question

If you use AI to generate a podcast and it pulls a real audience, are you as skilled as the person who built a show the long way?

Honest answer: probably not, in the traditional sense. But that may not be the right question.

Traditional podcast craft is a hundred small competencies stacked over years. Interview technique. Story structure. Audio engineering. Reading a guest’s hesitation. Knowing when silence carries more weight than another question. Building a parasocial relationship that survives a hundred episodes of you having a bad week. People who’ve put in the time deserve real respect for the work. I’m not interested in pretending otherwise.

But the AI operator has acquired a different craft, and it’s worth naming it: choosing a topic with a market, curating sources, shaping a format, picking voices, packaging episodes, building distribution, and using AI as a production engine without letting it embarrass them. That’s also a real craft. The output may not always match what the lifetime practitioner can produce, but in plenty of cases it gets close enough that the audience doesn’t draw the distinction. Which, depending on where you sit, is either great news or terrible news.

What’s happened is that the relationship between skill and outcome got rewritten on us. For most of human history, the only path to a strong outcome was acquiring the skill. The skill was the gate. AI didn’t eliminate skill. It opened a second path to outcomes that doesn’t require the same kind of skill. Those are very different statements, and confusing them is the source of about 90% of the current panic.

The people who’ll thrive across this transition are the ones who can figure out which part of their skill produced value, versus which part was just hard to acquire. Hard and valuable used to be roughly the same thing because hard things were rare. They aren’t anymore. The rarity has shifted to taste, judgment, accountability, and the ability to ship something somebody else can trust.

Where the line is forming

Three categories are emerging across industries, without anybody officially announcing them.

Replacement gets celebrated when the work is perceived as instrumental: code, spreadsheets, marketing copy, legal templates, customer service, internal documentation. Anything that exists to produce a result rather than to be the result. Nobody mourns a Jira ticket, and nobody’s checking whether the SQL query was lovingly handcrafted.

Replacement gets contested when the work is perceived as expressive: music, fiction, journalism, podcasting, performance, fine art. The audience showed up for the person, not just the output, and the substitution feels like a small betrayal even when the output is good.

Replacement gets regulated, often heavily, when the work carries authority over other humans or runs critical systems: medical diagnosis, legal judgment, hiring, sentencing, child welfare, energy infrastructure, financial services. Anything where a wrong answer costs more than embarrassment. The reason these domains require human accountability isn’t that humans necessarily do the work better. It’s that someone needs to be answerable when things go sideways, and an AI model can’t be answerable for anything.

Disclosure rules across every industry track those three buckets almost exactly. Light or absent in the first. Growing and contentious in the second. Mandatory and legally enforced in the third.

The better question isn’t “Was AI used?” It’s “Was the human contribution material?” Did somebody bring expertise, taste, judgment, accountability, and lived experience to the work? Or did they push a button and present the result as if it reflected a depth of skill that wasn’t there?

The line we’re really drawing isn’t between human-made and AI-made. It’s between honest and dishonest, accountable and unaccountable, augmenting human judgment versus concealing the absence of it. That framing matters far more than the binary of human-or-machine, and it’s the framing serious enterprises are quietly adopting, whether or not they have a clean phrase for it yet.

Who gets to draw the line

Nobody, and everybody, at the same time.

There’s no central authority on the acceptability of AI. There’s a slow, messy, market-driven, lawsuit-driven, embarrassment-driven negotiation happening in public, and the rules are getting written in real time by whoever keeps showing up to the conversation. Spotify drew a line on AI music because it got tired of pulling tracks one at a time.[12] Apple drew a line on vibe-coded apps because the submission volume broke its review process. Steam rewrote its disclosure rules because the alternative was drowning in unmoderated synthetic content. The Copyright Office drew a line on AI-generated images because somebody walked in and tried to register one.

None of these were principled stands. They were operational reactions. And the line will keep moving. It’ll move toward AI in places where the economic pressure is strongest and the displaced workers are less organized. It’ll move against AI in places where the displaced workers are well-represented and able to argue their case in public. Musicians have labels and performance rights organizations doing the arguing for them, which is why music has the most developed disclosure regime in any creative industry. Software developers, for the most part, have chosen to adopt the tools rather than resist them, which is why software has no disclosure regime at all.

The line is getting drawn by leverage. Pretending otherwise is wishful thinking.

That sounds bleak until you remember the alternative is to wait for someone else to decide for you. Companies that are intentional about how they adopt AI, how they assign accountability for AI-driven decisions, and how transparent they are about where AI ends and human judgment begins are doing measurably better than companies pretending the question isn’t happening. The leverage works both ways. The organizations that move first and move thoughtfully are the ones writing the rules everyone else will eventually live by.

What we have to adapt to

A few things are worth accepting before you try to operate in this environment.

The skill you spent twenty years building isn’t worthless, but it’s no longer sufficient on its own. The market for adequate output in any field has gotten very crowded, because adequate output is now available to anyone with a credit card and an afternoon. The market for excellent, distinctive, accountable, deeply human output has grown more valuable. That’s the part you double down on.

The disclosure double standard won’t be resolved by argument. It’ll be resolved by familiarity. Within five years, asking whether a piece of software was AI-generated will sound as quaint as asking whether a building was designed in AutoCAD. The questioning phase only lasts while the technology feels new, and new wears off fast.

The people most upset about AI replacing them are usually one category removed from the people doing the replacing. The instant the tool lands in your own hands, it stops feeling like an attack and starts feeling like a multiplier. The fastest way to stop being afraid of AI is to use it well yourself, preferably before someone else uses it badly in a way that affects you.

For enterprises, the question has shifted from whether to adopt AI to how to adopt it without losing the things that made the enterprise trustworthy in the first place. That’s the actual conversation happening inside large organizations right now, and it’s less philosophical and more practical than the public debate suggests. Somebody has to own each decision. Outputs have to be reviewable. Sensitive systems need controls around them, and customer-facing work has to be honest about how it was made. None of that is incompatible with moving fast. It’s what moving fast responsibly looks like in 2026.

No one gets a permanent exemption. Today the replacement conversation is happening around translators, copywriters, illustrators, junior developers, paralegals, customer service representatives, and a growing list of podcasters and musicians. Tomorrow it’ll be about some of the people currently doing the replacing. The logic that justified replacing the people below them will get applied upward, and the people who feel safest right now are usually the ones who should be paying the most attention.

The real adaptation isn’t learning to use the tools. Everybody’s going to do that. The real adaptation is being honest about which part of your skill, your role, or your business is genuinely irreplaceable, and which part you were charging people for because nobody else could do it yet.

Double down on the first. Let the second go without putting up much of a fight.

The people and companies that came out of every prior technology shift in good shape weren’t the ones who proved the new tool was inferior. They were the ones who got honest about what they were really being paid for, faster than anyone else.

That’s the work right now. It’s also, for what it’s worth, the most interesting work I’ve done in thirty years of doing this. The companies I get to spend my time with these days aren’t asking whether AI is real. They’re asking how to use it without losing themselves. That’s a much better question, and the answers are arriving faster than most people think.

References

[1] Stack Overflow Developer Survey 2025; reported across multiple industry summaries, including Hostinger’s Vibe Coding Statistics 2026 (https://www.hostinger.com/blog/vibe-coding-statistics).

[2] The 41% to 42% figure is self-reported AI-assisted code from the Sonar State of Code survey of more than 1,100 developers (https://shiftmag.dev/state-of-code-2025-7978/). The lower 27% figure measures AI-authored production code across 4.2 million developers and is widely considered the most rigorous empirical measurement to date. The two metrics measure different things and shouldn’t be used interchangeably.

[3] The Business Research Company, AI Code Tools Market Report 2026 (https://www.researchandmarkets.com/reports/6225896/ai-code-tools-market-report). Other research firms forecast a range from roughly $17B to $30B over a similar window, so this figure should be read as directional rather than precise.

[4] TechCrunch, “Vibe-coding startup Lovable raises $330M at a $6.6B valuation,” December 18, 2025 (https://techcrunch.com/2025/12/18/vibe-coding-startup-lovable-raises-330m-at-a-6-6b-valuation/). $200M ARR confirmed by Bloomberg (https://www.bloomberg.com/news/articles/2025-11-18/lovable-hits-200-million-arr-and-raising-funds-above-6-billion-valuation).

[5] The most-cited “63% non-developers” statistic traces back to a Solveo analysis of community comments on Reddit’s r/vibecoding and to a Product Hunt State of Vibe Coding 2025 report. It is a community-demographics sample rather than a representative global survey, so the precise number should be treated as directional, even if the broader pattern (a large share of non-developer builders) is well-supported across multiple sources.

[6] Bloomberg, “Podslop Proliferation Is Challenging the Audio Industry,” April 30, 2026 (https://www.bloomberg.com/news/newsletters/2026-04-30/-podslop-proliferation-is-challenging-the-audio-industry). The Podcast Index measurement is a probabilistic detection across a nine-day window, not a definitive audit.

[7] The Hollywood Reporter, “5,000 Podcasts. 3,000 Episodes a Week. $1 Cost Per Episode,” September 2025 (https://www.hollywoodreporter.com/business/digital/ai-podcast-start-up-plan-shows-1236361367/).

[8] Amazon Kindle Direct Publishing, Generative AI Content Guidelines (https://kdp.amazon.com/en_US/help/topic/G200672390).

[9] U.S. Copyright Office, Copyright and Artificial Intelligence, Part 2: Copyrightability (January 2025), https://www.copyright.gov/ai/Copyright-and-Artificial-Intelligence-Part-2-Copyrightability-Report.pdf.

[10] Science (AAAS) editorial policies on generative AI; JAMA Network policy on use of AI in scholarly publishing. Both are summarized in research-administration overviews such as Kennesaw State University’s AI Disclosure Requirements page (https://campus.kennesaw.edu/offices-services/research/ai-disclosure.php).

[11] Valve, Steam Distribution Agreement and AI disclosure requirements, updated 2024 to 2025; summarized at https://store.steampowered.com/news/group/4145017/view/3862463747997849618.

[12] Spotify AI music labeling and disclosure framework, rolled out 2025; summarized by PYMNTS (https://www.pymnts.com/news/artificial-intelligence/2025/spotify-rolls-out-new-filters-disclosure-rules-ai-content/).