
Adobe Podcast’s text-based editing turns limitation into liberation

Editing top-down is perfectly logical, but you give up some of the finer control.

Ever had a brilliant idea for a podcast, only to let it go once you realized how many hoops you'd have to jump through to get it published? It's never been simpler to upload your audio masterpiece to the internet, but today's listeners are savvy and won't put up with poor sound quality or sloppy editing for long. And learning an audio editing program is a skill in and of itself. These are the problems Adobe's brand-new browser-based Podcast tool intends to fix.

Previously known as Project Shasta, Adobe Podcast is a cloud-based audio production platform. As the name implies, it's largely focused on podcast production, but it should interest anyone who works with narrative audio. The first thing you'll notice is that it doesn't look like an audio editor at all: there's no audio timeline and no mixer view with channels. In truth, it was never really meant to be one.

The objective, according to Mark Webster, Director of Product at Adobe, was to develop a more comprehensive strategy around voice. "That could have been speaking to Photoshop or a voice assistant in the Creative Cloud. It was really just about establishing services and a platform to make it extremely simple to create spoken audio, so we kind of took a step back."

Adobe Podcast, still in development, is the outcome. Anyone can apply for access, but for now you need to be based in the US.

Unlike conventional audio editors such as Adobe's own Audition, you won't work from left to right, and you won't really work with audio files at all. Instead, you'll edit your podcast the way you'd edit a text document. Not just because you work top-down, but because you mostly are editing a text document: anything you record with Adobe Podcast is automatically transcribed, and to make changes you simply edit the text (which is then magically reflected in the audio). There are even tools for creating artwork (as can be seen above).
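To make that a little more concrete, here's a purely illustrative sketch of what text-based editing generally implies; Adobe hasn't published how Podcast works internally, so the word-level transcript, file names and pydub usage below are all hypothetical.

```python
# Illustrative sketch only -- not Adobe's implementation. It shows the general
# idea behind text-based editing: a transcript with word-level timestamps lets
# a deletion in the text be translated into a cut in the audio.
from pydub import AudioSegment

# Hypothetical word-level transcript (times in milliseconds).
transcript = [
    {"word": "Welcome", "start": 0,   "end": 420},
    {"word": "um",      "start": 420, "end": 700},
    {"word": "to",      "start": 700, "end": 860},
    {"word": "the",     "start": 860, "end": 990},
    {"word": "show",    "start": 990, "end": 1450},
]

def apply_text_edit(audio: AudioSegment, words, deleted_indices) -> AudioSegment:
    """Rebuild the audio, skipping every word the editor deleted in the text."""
    edited = AudioSegment.empty()
    for i, w in enumerate(words):
        if i not in deleted_indices:
            edited += audio[w["start"]:w["end"]]
    return edited

recording = AudioSegment.from_file("interview.wav")
# Deleting "um" in the text view becomes dropping its time span from the audio.
clean = apply_text_edit(recording, transcript, deleted_indices={1})
clean.export("interview_edited.wav", format="wav")
```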

"In our minds, Adobe Podcast is not just another audio tool. It truly is a tool for telling stories. When you think of it as a tool for storytelling, all the features found in conventional audio tools, like looking at audio waveforms and decibel levels, suddenly become irrelevant," Adobe Podcast lead designer Sam Anderson told Engadget.

Apps like Descript have worked this way for a while, and there's a logic to it. Podcasts are about what's being said, so it makes more sense to edit the words than the underlying audio.

Not to mention that it's far easier on the ears, eyes and soul to see what's being said than to replay a passage over and over to find the right spot. But the approach comes with certain costs.

One is that you have to learn to let go of some control. In an audio editor, you can specify the precise point at which you want to cut a segment. In Adobe Podcast, you can only highlight text, and the backend handles the finer points of the edit. For the most part that's fine, but you'll have to work around it if you want to add or remove some silence, for instance.

For example, deleting a sentence is as simple as selecting it in the transcription and pressing the delete key. You can cut and paste to move things around, too. However, you might not get as smooth an edit as you would doing it manually in an audio editing program, so once you export from Podcast you may still need to make a few minor tweaks, at least for now. In the future, the system might use AI to make those fixes for you.

"I believe we could find a way to do it automatically by looking at the space between words and when deletions are made, using some extremely intriguing technology," Anderson said.
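Adobe hasn't said how such automatic smoothing would work, but one common approach (shown here only as a hypothetical sketch using pydub, not Adobe's actual method) is to keep a little of the natural pause around the deleted words and crossfade across the join so the cut doesn't click or feel abrupt.

```python
# Hypothetical sketch of a generic smoothing technique, not Adobe's method:
# keep a bit of the pause surrounding a deleted span, then crossfade the join.
from pydub import AudioSegment

def smooth_cut(audio: AudioSegment, cut_start_ms: int, cut_end_ms: int,
               pad_ms: int = 60, crossfade_ms: int = 30) -> AudioSegment:
    # Assume the cut span includes the silence around the deleted words;
    # keep pad milliseconds of that silence on each side so the pacing stays natural.
    pad = min(pad_ms, (cut_end_ms - cut_start_ms) // 2)
    before = audio[:cut_start_ms + pad]
    after = audio[cut_end_ms - pad:]
    # Crossfade the two halves so the edit point doesn't click or jump.
    return before.append(after, crossfade=crossfade_ms)

recording = AudioSegment.from_file("interview.wav")
# e.g. remove a flubbed phrase between 12.0s and 14.5s of the recording.
edited = smooth_cut(recording, 12_000, 14_500)
edited.export("interview_smoothed.wav", format="wav")
```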

The ease of inviting guests is one of the main advantages of online tools like Podcast and comparable services such as Riverside.fm and Zencastr. In the past, you might have needed to pre-brief a guest about their audio setup, help them record locally in something like Audacity, and then deal with shuttling sizable audio files around afterward.

With Podcast, your guests simply accept an invitation, much like joining a Zoom meeting. You then talk with them in real time while the local audio is uploaded in the background. The result is a remarkably seamless way to quickly get local audio that's already transcribed and ready for editing.

Adobe may have two advantages here. First, unlike the rival services mentioned above, Podcast focuses solely on audio, so you won't find livestreaming, video editing or presentation features getting in the way. The second is a handful of standout tools, most notably "Enhance Speech." This miraculous button essentially turns subpar audio recorded in the worst of venues into something far more listenable with a single click.

To test this, I recorded a conversation between myself and my colleague Mat Smith. I was using a dedicated XLR podcasting microphone (Focusrite's DM14v) plugged into an audio interface. Mat, on the other hand, was just speaking into his MacBook's built-in microphone. Once our recording was done, I tapped the "Enhance" toggle and suddenly it sounded as though we were in the same room using the same equipment. You can hear the audio, both untreated and treated, below.

Audio purists might find the processed audio a touch too isolated or dry (lacking in spatial character), especially as there are currently no controls: the effect is either fully on or off. But according to Webster, you'll eventually be able to adjust the effect's strength if the default doesn't suit your taste.

The results were good enough that I tried uploading audio from a phone interview I'd recorded a few weeks earlier for an article. The outcome was so impressive that I'm considering turning it into an audio version of the piece it was recorded for.

The removal of filler words (the "uhms," "ahhs" and so on) is another feature in the works. Again, you'll find this in competing products, but right now there isn't even a way to remove them in Podcast because the transcription doesn't show them; you'd have to do it in post-production.

Conveniently, Adobe Podcast includes a ton of free music you can use for intros, outros and transitions. Editing these tracks to fit around your speech isn't as intuitive as it could be, which is one illustration of why the service is still in beta. But you can get creative: you can splice in some music, for instance, and have it gradually rise in volume while you talk.
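Adobe doesn't expose that kind of mixing in much detail yet, but the effect described (music creeping up in volume underneath speech) is easy enough to reproduce on an exported file. Here's a rough, hypothetical sketch using pydub; the file names and gain values are just placeholders.

```python
# Rough, hypothetical sketch (not an Adobe Podcast feature): lay a music bed
# under an exported speech track and have it rise gradually while the talking
# continues, keeping it below the voice the whole time.
from pydub import AudioSegment

speech = AudioSegment.from_file("podcast_export.wav")
music = AudioSegment.from_file("bed.mp3")[:len(speech)]  # trim bed to the speech

# Ramp the bed from very quiet up toward full over its whole length, then duck
# the result by 8 dB so it stays underneath the voice.
bed = music.fade(from_gain=-24.0, to_gain=0.0, start=0, duration=len(music))
bed = bed.apply_gain(-8)

mix = speech.overlay(bed)  # the voice sits on top of the rising music
mix.export("podcast_with_music.wav", format="wav")
```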

If you're hoping Adobe will add an AI voice tool so you can type new words into both the text and the existing audio (as you can with Descript), don't hold your breath. As Webster explained, an effective voice model needs to be trained on enough content, so it really only makes sense for your own voice. And given how awkward AI voices can sound, Adobe has instead made it extremely simple to just re-record the line you want. After all, this isn't video, where patching over a misspeak is far more complicated.

Arguably the best quality of all is how easily ideas make it from your head onto the page. If you can use Google Docs, you can make something with Adobe Podcast. And thanks to the included audio and microphone-enhancing features, there's a fair chance it will sound decent, too.

For the time being, Podcast will remain in beta, and Webster assured us that there will always be a free tier. And if you like the sound of the speech-enhancing feature but have no interest in recording a podcast, you don't need to sign up for the beta at all; it's available right here, right now.