The Investors Podcast was recently transformed into an Alexa skill called We Study Billionaires. The show is the top-ranked stock investing podcast on iTunes, Google, CNBC and assorted other lists. Voicebot connected with podcast co-host Preston Pysh and XAPPmedia’s John Kelvie to discuss how the transformation took place and what new innovations were brought to life with the skill. You can also click the video below to see a demo of the We Study Billionaires skill in action.
Tell me about the audience for The Investors Podcast and what made you think your listeners might enjoy having access through Amazon Alexa.
Preston Pysh (The Investors Podcast): So our audience started out as hardcore Warren Buffett and Charlie Munger fans that implemented a value investing approach. As our show matured we expanded our universe of investing experts and began to study anyone with a net worth in excess of a billion dollars. Our audience really likes to listen to audiobooks and interact with the audio from the show on their smartphones, so integrating with Amazon Alexa was a natural part of the market for us to experiment with.
As a podcaster, how do you view Amazon Alexa and other voice platforms, their role in reaching your audience and how they can augment your community of listeners?
Preston: I think there’s a major shift about to take place with AI hardware and software, unlike anything we’ve seen in the last decade. Previously, it was all about textual applications that were outsourced to any developer with the time and skill to create something of value. As companies like Amazon and others have made voice-activated interfaces more user-friendly and capable, applications are becoming more voice oriented. Our show’s content is entirely in audio format, so this capability coincides completely with our interests. If a person is working in the kitchen or garage, they can simply say, “Alexa, play We Study Billionaires,” and the show will automatically start playing. That’s really exciting. It gives people immediate access to something that can be used for personal growth while also accomplishing a less cognitive task.
John, what type of content is included in your new Alexa skill and how is it different from downloading the podcast from iTunes or another service?
John Kelvie (XAPPmedia): Well, it’s mostly the same content as the show. The only difference is that the skill includes some short audio snippets that summarize the show content. Users can listen to these before they decide which episode they would like to listen to. There is also some information on the history of the podcast and the professional backgrounds of the co-hosts spoken by Preston.
I might as well get this out of the way up front. The podcast is called The Investors Podcast. The Alexa skill is called We Study Billionaires. Why two different names?
John: Alexa’s speech recognition is excellent and represents a real step forward for the technology. I think it is one of the first voice user experiences people really love — sorry, Siri — but it is not perfect. One of the challenges with Alexa is ensuring that skills have a unique invocation name, one that Alexa does not mistake for other skills and capabilities.
“The Investors Podcast” tended to confuse Alexa, so we went with the slogan for the podcast, “We Study Billionaires”, which performs much better. This is part of the art of design for voice.
What made The Investors Podcast an attractive media property to work with in developing an Alexa skill?
John: I studied Economics in school before getting into programming, so I find the content very appealing. And, we wanted to work with a top podcaster to show off what we think are some really great capabilities that Alexa and XAPPmedia provide. The API for long-form audio, such as a podcast like The Investors Podcast, was only made generally available in late August. So, the platform is moving quickly, and we are trying to move just as quickly to take advantage of these new capabilities. We are ecstatic to have partners like Preston and Stig [Brodersen, the podcast co-host] that share our enthusiasm for innovation. We hope the listeners love it as well though – love it or hate it, let us know what you think. We want your feedback!
I understand your engineering team at XAPPmedia introduced some new features that are firsts for Alexa skills. First explain to me the Scan feature.
John: We think the Scan feature is a perfect audio-only experience. It plays summaries of the podcasts, so the listener can hear what they are about; if they want to hear more, they simply say, “Play Next” to jump into the podcast. This is an innovative and fun way to consume podcast content – basically a sort of try-before-you-buy approach to listening, and it works great with voice.
What other features are new with the launch of the We Study Billionaires skill?
John: Most of the other features are what one would expect – play, pause, resume, skip, rewind, etc. There’s nothing earth-shaking about these features, but what’s great about it is how intuitive all this is on Alexa. It’s really a great way to interact via voice, and I think listeners take to it very naturally.
Last month Voicebot interviewed you as CTO of Bespoken.tools. This week you are representing XAPPmedia. What is the difference between the two companies?
John: It’s pretty simple. Bespoken builds tools for developers of Alexa skills and other voice assistants. The tools are designed to make development faster and improve skill quality. Voice is a relatively new space for most developers and the development model, based largely on webhooks, demands a new approach and new tools. Bespoken fills that gap.
By contrast, XAPPmedia works directly with consumer brands and media properties to build and host skills with great user experiences that work across multiple platforms including Alexa. The XAPPmedia team employs Bespoken.tools in its development efforts.
Preston, what did you expect when you first started collaborating with XAPPmedia on the project?
Preston: I really had no idea what to expect. I was thinking that the project was going to be more developmental in nature, and it was actually quite different from that. By the first status update, John already had a working prototype of the system and was streaming the syndication feed from our show. To put it lightly, I was blown away at the speed and capacity of XAPPmedia to go from concept to working software demonstration. I’m very grateful they were kind enough to work with our show on their first endeavor for podcasting on Alexa.
What did you need to do differently to support the Alexa skill beyond what you are doing already to distribute your podcast?
Preston: In general, it wasn’t much. I simply needed to record a few audio clips to make the skill more customized for the user experience. This didn’t take much of my time from an audio standpoint, but I’m sure there were a lot more things being done behind the scenes from a programming standpoint.
What do you think of the results and how do you expect the skill to be received by your listeners?
Preston: I think the listeners will love it. They are usually people that love new technology, even though their value investing roots might not lead them to own equity in the businesses behind such cool innovations – that’s a long and boring conversation though.
John, many podcasters have added their content to Alexa using the Flash Briefing instead of developing a skill. How does the We Study Billionaires skill offer a different experience for listeners than a Flash Briefing?
John: The Flash Briefing brings together a variety of user-curated content. We think it’s great. The listener can get a combination of weather, news updates, music, and other dynamic content, ideally all of a shorter nature. It’s not meant to be an immersive listening experience as much as a series of quick updates.
A long-form audio skill though is meant to be a “lean-back” listening experience – put it on while doing things around the house, working or just relaxing. Besides providing content tuned for the particular provider, we can provide features like scan and deep browsing of content catalogs, features that are easily used hands-free and eyes-free.
It’s also about creating a persona for the podcast. When the We Study Billionaires skill launches, there’s a great introduction from Preston. He provides the directions for using the skill in his own voice, not Alexa’s. It makes a real difference in the quality of the experience, and it provides a chance for Preston and Stig to deepen their connection with their listeners.
What best practices are incorporated into the We Study Billionaires skill that others can learn from?
John: First off, we built this to “just work” with an RSS feed. So off the shelf, if someone has a podcast, they can publish a skill similar to We Study Billionaires very quickly and easily. We think this is very cool, and we are happy to help other podcasters do some amazing stuff on this new platform.
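Making a podcast skill “just work” from an RSS feed comes down to reading the standard podcast feed structure: each `<item>` carries the episode title and an `<enclosure>` pointing at the audio file. Here is a minimal sketch of that extraction in Python using only the standard library; the feed content and URLs below are invented placeholders, not the show’s real feed, and this is not XAPPmedia’s actual code.

```python
import xml.etree.ElementTree as ET

# A minimal RSS 2.0 podcast feed (hypothetical episode data for illustration).
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>We Study Billionaires</title>
    <item>
      <title>Episode 2</title>
      <enclosure url="https://example.com/ep2.mp3" type="audio/mpeg"/>
    </item>
    <item>
      <title>Episode 1</title>
      <enclosure url="https://example.com/ep1.mp3" type="audio/mpeg"/>
    </item>
  </channel>
</rss>"""

def episodes_from_feed(feed_xml):
    """Return (title, audio_url) pairs for each episode in an RSS feed."""
    root = ET.fromstring(feed_xml)
    episodes = []
    for item in root.iter("item"):
        title = item.findtext("title")
        enclosure = item.find("enclosure")
        if enclosure is not None:
            episodes.append((title, enclosure.get("url")))
    return episodes
```

Because podcast feeds share this structure, the same parsing step works for any show, which is what lets a feed-driven skill be published quickly without per-podcast development.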
From a product/user experience perspective, we think the way we have re-purposed the “Play Next” intent within the Scan feature, to “expand” or hear more of a piece of content, is very useful. We are hopeful Amazon will add more top-level intents that can be used to interact with skills that use long-form audio.
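To make the re-purposing concrete, here is a hedged sketch of how a skill might handle “Play Next” (Alexa’s built-in AMAZON.NextIntent) differently depending on whether the listener is scanning summaries or already inside an episode. The episode list, token scheme, and helper names are assumptions for illustration, not XAPPmedia’s implementation; only the AudioPlayer.Play response shape follows Alexa’s documented format.

```python
# Hypothetical episode catalog: each entry has a summary clip and full audio.
EPISODES = [
    {"summary_url": "https://example.com/ep1-summary.mp3",
     "full_url": "https://example.com/ep1.mp3"},
    {"summary_url": "https://example.com/ep2-summary.mp3",
     "full_url": "https://example.com/ep2.mp3"},
]

def play_directive(url, token):
    """Build an Alexa AudioPlayer.Play response for a given stream URL."""
    return {
        "version": "1.0",
        "response": {
            "directives": [{
                "type": "AudioPlayer.Play",
                "playBehavior": "REPLACE_ALL",
                "audioItem": {"stream": {
                    "url": url,
                    "token": token,  # e.g. "summary:0" or "full:0"
                    "offsetInMilliseconds": 0,
                }},
            }],
            "shouldEndSession": True,
        },
    }

def handle_next_intent(current_token):
    """Scan logic: 'Play Next' during a summary expands into the episode;
    otherwise it advances the scan to the next episode's summary."""
    mode, index = current_token.split(":")
    i = int(index)
    if mode == "summary":
        # Listener wants more of what they just heard: play the full show.
        return play_directive(EPISODES[i]["full_url"], "full:%d" % i)
    nxt = (i + 1) % len(EPISODES)
    return play_directive(EPISODES[nxt]["summary_url"], "summary:%d" % nxt)
```

The key design choice is overloading a single well-recognized utterance (“Play Next”) with context-dependent meaning via the stream token, which keeps the voice interaction simple while still allowing deep navigation of the catalog.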
And finally, from a technical perspective, we built this using what we consider all-around development best practices. We use our own home-grown Alexa emulator to build automated unit tests. We use continuous integration, and we have a great continuous deployment pipeline set up for it. For non-technical people, these terms may be unfamiliar, but they will appreciate the outcome: highly reliable and maintainable software. We hope this showcases what we are doing on both sides of our house, for brands and media as well as developers, in a way that shows our leadership in the space. By the way, anyone can use these same tools for free and download them at http://Bespoken.tools.
Preston or John, is there anything you would like to add?
Preston: I would simply like to highlight that this is such an interesting space for people to be developing applications for. When we think about the implications of voice-based intelligent software, its utility is limitless. Additionally, when we look at the use of this technology in today’s world, I would argue we are at the infancy stage. As the overall architecture of the operating system is outsourced to smart software engineers like John and his team, the utility of future applications will only improve and expand our universe of machine-human interaction. I couldn’t be more honored to be part of this project because I feel like we are part of a new, exciting, and revolutionary way to produce content of value.
To access the skill for The Investors Podcast, just say, “Alexa, enable We Study Billionaires,” or search for it in your Alexa app.