Sonic Content: Part Two

Building an immersive podcast soundscape is all in the details




This is Part Two of Sonic Content. In Part One, Sound Designer Mark Angly set the stage for building an immersive soundscape in a podcast episode — determining when to use sound effects, how to find the right ambience, and what you need to build an effective atmospheric layer. In this article, Mark gets into how you can elevate a scene in your podcast episode with sound mixing and polishing.

In the previous article, we built a sturdy base layer for our “sound cake.” Now it’s time to add the other layers, some buttercream to stick it all together, and maybe some frosting. Detailed sounds can elevate a scene from background to part of the story, but as always they have to be supported by the script and music or the whole thing falls apart.

Mixing Sound Effects

All the good sound libraries in the world won’t save a bad mix. What is mixing? In simplest terms, mixing is any treatment you add to a recording after it’s been laid into the timeline. Audio files are the ingredients, and mixing is cooking them together.

For our purposes here, I’ll assume some knowledge of signal processing basics like equalization and compression. There are many fantastic resources available on YouTube if you’d like to dive into these tools in more detail. Also, a quick note on plugins: whether you’re using the ones that come with your Digital Audio Workstation or third-party plugins, read the manual! There’s often no better person to teach you how to use a tool than the one who made it.

So! Once I’ve cut in roughly the right sounds in roughly the right places, I then have to make them part of the narrative, make them work with whatever VO or music might be around them. Typically, the tools I reach for (in approximate order of importance/utility) are levels, panning, EQ, reverb/delay, and pitch. Let’s take a quick look at each of these.

  • Levels: sound effects being too loud is a quick way to pull someone out of a story. Again, everything is relative; try to make sure that your SFX are loud enough to do the job, but not overpowering (unless they need to be). I find it’s helpful to work at a consistent monitor level so that over time my ears become conditioned to hearing the “right” volume.
  • Panning: moving things in the stereo field can be done literally (telling the audience, “this thing is over on the left”), or figuratively (this remembered sound isn’t part of the current environment, so it’s panned away to the right). It can also be static or dynamic, through automation: cars zip past the listener, footsteps walk away gradually to one side, a storm rolls in from distant mono to up-close-and-personal wide stereo.
  • EQ: Equalization can be a utilitarian tool, correcting for bad mic placement or unpleasant frequencies, but it can also be a creative tool. Filtering out some high frequencies (and sometimes low end), combined with a short echo, can give the impression of distance, especially outdoors. Another common use is as “futz,” a treatment that makes things sound like they’re coming through a telephone or television.
Using equalization to filter a voice, in order to imitate the frequency response of a telephone (slightly exaggerated)
  • Reverb and Delay: Reverberation effects were originally designed to simulate an acoustic space, but interestingly, in podcasting, reverb and delay are often used to move sounds in and out of reality. We typically associate heavy, long reverberation with “subjective” or non-literal sounds, meant to convey emotion rather than information. However, reverb can also put a sound in a real acoustic space and can make pre-recorded SFX sound much more natural and believable than when played bare.
  • Pitch shifting: Pitch can be extremely useful for adapting SFX to fit your needs. If I’m trying to build a big door slamming, and I’m using different recordings, I can use pitch to make them sound like they go together, or to artificially make the door sound lower or “bigger” than it actually was. Also, if I can’t find a certain sound, pitch shifting is useful for creating it out of something else.
  • Example: The “Kraken” (a sample from an episode of Inside the Breakthrough that references a mythical sea creature) was built from several layers, one of which uses a piece of metal pitched down quite far, as well as an angry horse and whistle, pitched up and chopped. Together they make a scary sea monster.

Sound of Metal
Sounds of Angry Horse and Whistle
Mixed Low and High Layers Make for a Scary Sea Monster—The Kraken
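For readers who like to see the math behind these treatments, here is a minimal, illustrative sketch (not Mark’s actual workflow) of two of the tools above in Python, assuming numpy and scipy are available: a band-pass filter approximating a telephone’s roughly 300–3400 Hz “futz” response, and a crude pitch shift done by resampling — which, unlike the time-preserving pitch shifter in a DAW, also changes the clip’s duration.

```python
import numpy as np
from scipy.signal import butter, sosfilt, resample

SR = 44100  # sample rate in Hz

def telephone_futz(audio, sr=SR):
    """Band-pass roughly 300-3400 Hz to imitate a telephone's frequency response."""
    sos = butter(4, [300, 3400], btype="bandpass", fs=sr, output="sos")
    return sosfilt(sos, audio)

def pitch_shift_semitones(audio, semitones):
    """Crude pitch shift by resampling; note it also stretches/shrinks duration."""
    factor = 2 ** (semitones / 12)          # frequency ratio for the shift
    new_len = int(round(len(audio) / factor))
    return resample(audio, new_len)

# Demo: a 1-second 1 kHz test tone standing in for a voice recording
t = np.arange(SR) / SR
tone = np.sin(2 * np.pi * 1000 * t)

futzed = telephone_futz(tone)                    # 1 kHz is inside the passband, so it survives
down_octave = pitch_shift_semitones(tone, -12)   # an octave down, and twice as long
```

A real pitch plugin would use a phase vocoder or granular approach to keep the duration constant; the resampling trick is closer to slowing down a tape, which is often exactly the “bigger” sound you want for something like the Kraken metal layer.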

“Hero” sounds / Cinematic mixing

Sounds can support a narrative but they can also be the focus. When sound effects take center stage, everything I’ve talked about previously becomes amplified. Let’s say we need to establish an opening scene: mid-battle in World War II. Now if I was short on time, I could search for a pre-made battle ambience, and it would absolutely do the job:

Buuuuut to make it extra, I added some more elements to tell a bit of a story:

All of the previous techniques are working here. Two layers of complementary ambience open the scene (a sparse, distant, low explosion bed and some closer, high-pitched soldiers shouting). The tank drives in from the left, returning fire at the more distant soldiers (whose rifles are treated with EQ and reverb to make them sound farther away). Then the tank is struck with a missile, loud and frightening, debris rains down, and the SFX lower to make room for the incoming narration. The “events” of the intro, short as it is, have a rhythm and establish the pacing for the first part of the story.
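The “tank drives in from the left” move is a panning automation of the kind described earlier. As an illustrative aside (under the assumption of a mono numpy array, not any particular DAW), a constant-power pan sweep can be sketched like this: sine/cosine gain laws keep the total power steady as the sound travels across the stereo field.

```python
import numpy as np

def pan_sweep(mono, start=-1.0, end=1.0):
    """Constant-power pan of a mono signal from position `start` to `end`.

    Pan position runs from -1 (hard left) to +1 (hard right). Cosine drives the
    left gain and sine the right gain, so left^2 + right^2 is always 1.
    """
    pos = np.linspace(start, end, len(mono))   # per-sample pan automation curve
    theta = (pos + 1) * np.pi / 4              # map [-1, 1] -> [0, pi/2]
    left = mono * np.cos(theta)
    right = mono * np.sin(theta)
    return np.stack([left, right], axis=1)     # stereo: shape (samples, 2)

# Demo: one second of white noise standing in for the tank engine,
# sweeping from hard left to hard right
rng = np.random.default_rng(0)
engine = rng.standard_normal(44100)
stereo = pan_sweep(engine)
```

The same curve shape works for the rolling-storm example: automate `pos` (and a low-pass cutoff) over a longer span and the sound appears to approach and widen.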

It’s worth mentioning that different sound designers have different views about levels and how much is “too much.” I come from a Film and TV background, and much discussion’s been had in that space about dialogue in blockbuster films being “buried” by music and SFX that audiences find too loud and overpowering (This scene from Interstellar comes to mind; audiences complained that the dialogue was hard to hear, but director Christopher Nolan is adamant that was the intended effect because rockets are freaking loud). Granted, most podcasts aren’t reaching for the same goals as the latest Avengers film; by and large, their aim is to inform or entertain, but not to overwhelm the senses with a spectacular sonic experience. But honestly, when those opportunities do come up in podcasts, I tend to take it over the top and then back off 10% — a podcast listener has already chosen to put my work in their ears, so my goal is to keep that attention rather than fight for it.

The Final Polish/Transitions

A wise mixer once said, ‘A mix is never finished. Eventually, for one reason or another, you just stop.’ It’s difficult to stop tweaking, especially when building intricate scenes and playing with pacing, but I find a final review pass very helpful — start to finish, AFTER taking some time away from the work. I try to listen with the fresh ears of a listener interested in the content, not a sound guy ripping apart an unbalanced mix. This process can also reveal larger pacing issues that are difficult to hear when working on small sections at a time, as well as how your sound effects are working with music:

Here, the SFX are edited in time with the music as it pulses and builds to catastrophe, then settles.

How People Listen

I’ve talked a lot about what the “right” sound is, the right volume, the right way to mix and treat your effects. Ultimately, the right thing to do is to support the greater narrative. One of the most difficult things I do, as a sound designer, is taking a step back and saying, “I’ve done enough.” Or sometimes too much. At the end of the day, I’d wager that most podcast listeners don’t listen to an episode the same way sound designers do. The effect that well-placed sounds have is largely a subconscious one. If my work is done well, listeners come away with a better opinion of the episode as a whole, not a memory that the underwater treatment on the voiceover was really cool. A colleague once said, “If something I do ever makes the audience sit up and go ‘what was that?’ I’ve failed at my job.”

So what to take away from all this? In all honesty, I struggled with writing this piece because every piece of advice is situational. Our work as sound designers changes with each project, with each collaborator we work with, and with time and experience. But for me, the overall goal doesn’t change, and that is: to make it sound good!
