AI Failures in Podcasting – Working Out the Weak Points

I recently sat in on an interview with the former head of legal at a very large finance organisation. He was talking about the use of AI in the legal space, where accuracy is absolutely sacred and mistakes can cost billions. And he made a compelling argument about AI: it has its uses, but it won’t get more useful to us unless we use it now, work out its weaknesses, and work out how to improve it.

After all, he (and most of us) would never send out an intern’s work without checking it over, so why would we treat AI any differently?

The same (without the billion-dollar risk) can be said of AI in podcasting. 

In a field where your reputation is staked on the things you say or write – ‘on air’ or in outreach to potential guests – getting it right matters, and human oversight is still an enormous part of the equation.

I’ve used AI successfully in dozens of projects, but in this post, I’m going to break down a few anecdotal failures of AI when I’ve tried to layer it into podcasting work. Maybe it’ll help you avoid some of the same pitfalls, and ensure your AI chatbot or agent is actually there to help make you a better podcaster, and not simply a clanker-shaped millstone around your neck.

Research.

AI chatbots, when asked to research a person or topic, will outright lie. Most chatbots are programmed to be positive and can-do, and – like any good improviser – to say ‘yes, and…’. So, if you ask one to do something it can’t, or it can’t find an answer, it’ll make one up to please you.

The problem is, it looks convincing. Ask your AI of choice for a biography of an individual and it’ll spit out a whole list of information, from their university qualifications to their favourite colour. Ask for links and it’ll provide them. But click on those links and you’ll find made-up URLs (that appear to come from reputable sources), working links to unreliable sources, or real pages that simply don’t contain the information the chatbot is citing.

Google’s AI search summaries are particularly bad for this. Take a look through the links provided to back up a summary, and they’ll rarely have anything to do with what the bot is claiming.

The output isn’t necessarily any better even when you take pains to put the right information into the AI. I recently fed ChatGPT a list of 400 names from attendees of a major international political event and asked it to filter the list down to people with certain job titles who had provided an email address.

I then fed it a template outreach email and asked GPT to fill in the blanks from the list it had been given. About 40 entries into the database, the AI started making up names, companies, and emails. It would put the wrong company name into an email addressed to the right person. It would change email addresses, and more. If I’d sent that list out unchecked, I’d have looked deeply unprofessional to over 200 major global business leaders.
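For what it’s worth, this kind of mail merge is a job a chatbot never needed to do. Here’s a minimal sketch of the deterministic alternative in Python, assuming a CSV export with hypothetical name, company, email, and job_title columns:

```python
import csv
from string import Template

# Hypothetical template; $name and $company stand in for your own fields.
OUTREACH = Template(
    "Hi $name,\n\n"
    "I loved what $company has been doing lately, and I'd like to invite "
    "you onto the show...\n"
)

WANTED_TITLES = {"ceo", "founder", "managing director"}  # your target roles

with open("attendees.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        # Keep only rows with a target job title AND a supplied email address.
        if row["job_title"].strip().lower() in WANTED_TITLES and row["email"].strip():
            # substitute() raises KeyError on a missing field rather than
            # quietly inventing one - the opposite of the chatbot's failure mode.
            print(row["email"], "->", OUTREACH.substitute(
                name=row["name"], company=row["company"]))
```

Unlike the chatbot, this processes entry 400 exactly as it processed entry 1, and it fails loudly on a missing field rather than making something up.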

The worst-case scenarios here are that poor research ends up in the show and needs correcting, or – arguably even more embarrassing – gets put in front of a guest in an interview. I have seen this happen before: an entire line of questioning built around a career period that simply didn’t happen. It was excruciating for the producer involved.

So, what do you do? Well, the simplest option is to go through and fact-check your AI. Whatever it claims, find it somewhere independent – compare it against the guest’s LinkedIn profile or company bio.
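You can even automate the first pass. Below is a rough sketch, assuming the requests library and a hypothetical claim-to-URL mapping pulled from a chatbot’s answer; it only confirms that a page loads and mentions the claim, so a human still has to read the source properly:

```python
import requests

citations = {
    # Hypothetical chatbot output: claim -> URL it cited for that claim.
    "She holds a PhD in economics": "https://example.com/bio",
}

for claim, url in citations.items():
    try:
        page = requests.get(url, timeout=10)
        # Naive substring match - a starting point for checking, not a verdict.
        found = claim.lower() in page.text.lower()
        print(f"{url}: HTTP {page.status_code}, claim found: {found}")
    except requests.RequestException as exc:
        print(f"{url}: dead or unreachable ({exc})")  # made-up URLs land here
```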

The slightly more nuanced answer is to not ask the AI for answers at all. Instead, ask it for more intelligent questions, and for the nuanced niches within a topic, which you can then go and research yourself. This is augmentation rather than replacement – the AI isn’t doing the work for you, per se, it’s helping you to do your work better.
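In practice, that’s mostly a prompting discipline. A minimal sketch, assuming the official OpenAI Python client and a hypothetical episode topic (any chat-capable model or provider would work the same way):

```python
from openai import OpenAI  # pip install openai; API key via environment

client = OpenAI()

topic = "central bank digital currencies"  # hypothetical episode topic

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption; swap in whatever model you use
    messages=[{
        "role": "user",
        "content": (
            f"I'm researching a podcast episode on {topic}. "
            "Do NOT give me facts or biography. Instead, list ten "
            "intelligent interview questions and five nuanced sub-niches "
            "within the topic that I should go and research myself."
        ),
    }],
)

print(response.choices[0].message.content)
```

The model never gets the chance to assert a checkable ‘fact’; everything it returns is a lead for you to chase down.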

Scripting.

At some point, many podcasters – short on time or just curious – will ask AI to generate a script for them. It might not be a script that the client will ever see; it might just be there to provide a framework or overcome a little writer’s block. But sooner or later, we all ask AI to do a little creative writing.

And what comes out?

A LinkedIn post. 

Bland, slightly confrontational in a very safe ‘challenging expectations’ kind of way, and awful. Even with some rewriting, the structure will be generic and immediately disengaging.

It’s also just not how most people actually speak in the real world. Again, if you put that in front of the host and ask them to read it, it will be immediately obvious that it’s AI slop, and you’ll look lazy.

That said, there are ways that AI can help write good scripts. You can feed in previous human-written scripts alongside your current one and ask the AI to suggest structural improvements and ways to bring the tone into line.

I’ve personally experimented with feeding multiple transcripts of the host speaking conversationally into an AI and asking it to write in their voice. It takes a lot of coaxing, but it can be done. Once I’ve got the tone right, I’ll ask the system to rewrite my (human-written) script in that tone. I can then compare how the two differ to make myself a better writer.
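A rough sketch of that workflow, again assuming the OpenAI Python client; the file names are hypothetical stand-ins for your own transcripts and draft:

```python
import difflib
from pathlib import Path

from openai import OpenAI

client = OpenAI()

transcripts = Path("host_transcripts.txt").read_text()  # conversational samples
my_script = Path("episode_script.txt").read_text()      # human-written draft

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption; any capable chat model
    messages=[
        {"role": "system",
         "content": "Here are transcripts of the host speaking naturally:\n"
                    + transcripts
                    + "\nRewrite any script you are given in this exact voice."},
        {"role": "user", "content": my_script},
    ],
)
rewrite = response.choices[0].message.content

# Diff the two versions line by line; studying what changed is the point.
for line in difflib.unified_diff(
        my_script.splitlines(), rewrite.splitlines(),
        fromfile="mine", tofile="ai_in_host_voice", lineterm=""):
    print(line)
```

The diff, not the rewrite, is the deliverable here: it shows you where your phrasing drifts away from how the host actually talks.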

Post-production.

This is an interesting one. There are dozens of different AI tools for audio and video editing and post-production out there, covering everything from altering video to make sure people are making (or avoiding) eye contact, to keying and green screen, to removing extraneous noise. I’ve seen tools that can change people’s intonation, so you can edit the middle of a sentence to sound like the end. Some of them are very, very good. 

Some of them are not. Or, more fairly, we overestimate the ability of tools to fix human error after the fact. AI is essentially maths, and you can’t fix human error with number crunching; you can only cover it up. The failure here isn’t so much AI making mistakes, because those are easy to see or hear when you check your work (which, of course, you will). It’s that people bank on being able to ‘clear it up in post’, and then get lazy on the day.

I’ve recently seen cases of people not locking the focus on their camera, so the image drifts between them and the advertising billboard behind them; it took weeks to make the footage look passable. And I’ve lost count of the number of times people have logged in to interviews on their laptop mic, or from an echoey, glass-walled room, because they assume there’s an audio plugin that can clean it up and make them sound studio quality. Those tools exist, but they aren’t miracle workers.
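A cheap habit that helps: measure the room before you record, rather than hoping a plugin can rescue it afterwards. A back-of-the-envelope sketch, assuming a short ‘silent’ test recording saved as test_silence.wav and the soundfile and numpy libraries; the -50 dBFS threshold is a rough rule of thumb, not a standard:

```python
import numpy as np
import soundfile as sf  # pip install soundfile numpy

data, rate = sf.read("test_silence.wav")
if data.ndim > 1:        # fold stereo to mono
    data = data.mean(axis=1)

rms = np.sqrt(np.mean(data ** 2))            # overall noise level
floor_dbfs = 20 * np.log10(max(rms, 1e-10))  # convert to dBFS

print(f"Noise floor: {floor_dbfs:.1f} dBFS")
if floor_dbfs > -50:  # rough threshold; tune to taste
    print("Room/mic is noisy - fix it now, not 'in post'.")
```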

The fix, once again, is to not treat AI like a member of the team, but to use it as a boost to your own skill level, or as a mirror for reflection. AI can make an audio or video editor’s job easier, for sure, but it can’t replace them.

And that’s the real takeaway here.

Right now, as of mid-August 2025, AI is not a staff member. AI isn’t necessarily a way for you to do more work with less time and fewer people. AI is a great tool for making you more expert in your own skill set. It can help get you out of a bind. It’s a window of mental clarity when you’re fogged and can’t see the wood for the trees.

But all of that said, the field moves so quickly… Ask me again in six months.