OpenAI’s Sora, ElevenLabs, and the End of Video Media as We Know It

The evolution of video and audio entertainment took us from bards and roving entertainers to fixed plays supplemented by radio theater.

That was our primary evening entertainment until TV arrived, first in black and white; then, largely thanks to “Disney’s Wonderful World of Color,” black-and-white sets were displaced by color TVs.

Television then moved from CRT-based technology and standard definition with sizes up to 25 inches to flat-screen TVs today that are mostly 4K and look to a potential 8K wave later this decade.

Content evolved on TV, from mostly live to taped shows and from sets to on-location shooting to computer-generated imagery (CGI). However, some productions, like the coming “Beetlejuice” movie sequel, still prefer physical sets to create a grittier image.

Each stage of evolution resulted in changes in skills as the cameras and technology evolved to various levels of automation. But the most significant change we anticipate is the move to AI-generated content. This month, two technologies emerged in beta: OpenAI’s Sora, which creates beautiful and realistic videos that currently lack sound, and ElevenLabs’ AI voice generator, which could supply realistic sound.

OpenAI’s Sora, coupled with ElevenLabs’ audio, puts us within a few years of production-quality video content. Check out these blended AI-created video clips produced without actors, writers, camera people, graphic artists, and most of the existing production crew typically tied to a TV show or movie.
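To make that pairing concrete, here is a minimal sketch of how a silent, AI-generated clip could be given a generated voice track. It assumes a Sora clip already saved locally as sora_clip.mp4, an ElevenLabs API key, and ffmpeg installed on the system; the endpoint path and JSON fields mirror ElevenLabs’ published text-to-speech API but should be verified against current documentation, and the voice ID, narration text, and file names are placeholders.

```python
# Sketch: attach an AI-generated voice track to a silent AI-generated clip.
# Assumptions: sora_clip.mp4 exists locally, ffmpeg is installed, and the
# ElevenLabs endpoint/fields below match the current public API.
import subprocess
import requests

ELEVENLABS_API_KEY = "your-api-key"   # placeholder
VOICE_ID = "your-voice-id"            # placeholder voice
NARRATION = "A lone figure walks through a neon-lit street in the rain."

# 1. Generate narration audio from text.
resp = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": ELEVENLABS_API_KEY, "Content-Type": "application/json"},
    json={"text": NARRATION, "model_id": "eleven_monolingual_v1"},
    timeout=120,
)
resp.raise_for_status()
with open("narration.mp3", "wb") as f:
    f.write(resp.content)

# 2. Mux the narration onto the silent clip without re-encoding the video.
subprocess.run(
    [
        "ffmpeg", "-y",
        "-i", "sora_clip.mp4",   # silent AI-generated video
        "-i", "narration.mp3",   # generated voice track
        "-c:v", "copy",          # keep the video stream as-is
        "-c:a", "aac",           # encode the audio for MP4
        "-shortest",             # stop at the shorter of the two streams
        "clip_with_sound.mp4",
    ],
    check=True,
)
```

Because the video stream is copied rather than re-encoded, the only new work is generating and attaching the audio, which is exactly the gap ElevenLabs-style tools could fill for Sora’s silent output.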

While I expect this technology will initially be used mainly by individuals and upstart studios and will be focused mostly on pilots, eventually, this will be how most content is produced.

Let’s talk about the entertainment world post-AI as it will exist in the second half of the decade, particularly after the recent actor and writer contracts expire. We’ll close with my Product of the Week, the Acer Swift Edge 16 laptop, which has a near-perfect balance of technology and price.

The World of User-Driven Content

If you look at YouTube, much of the content isn’t created by companies but by individuals, some with decent production budgets.

AI will allow us to create even stronger content at a lower cost and enable users to create content uniquely interesting to them. Until the regulatory bodies catch up and enforcement becomes adequate, we will undoubtedly get more fake content that looks real. Still, the real money will be in creating content that lots of people enjoy and that is designed to be altered by those who view it.

The result would be close to #Owlkitty, where a cat is added to existing movie content but where you could replace any of the characters with anyone else you wanted — your kids, for instance. However, this is just the initial wave. After that, I see efforts separating into those who like to modify content created by others and those who want to produce the content that will be altered.

While I have no doubt that professionals who are already upset with these advancements won’t be happy with this change, it really is no different than when we moved to any other form of automation. Those who were doing the work being automated were upset because their jobs were changing dramatically or going away.

The result should be a move away from static content to content that can be infinitely altered. If you don’t like the ending of a film, you can change it, or, in the future, the streaming service will know what you prefer in a movie and create or alter movies to automatically optimize them for your interests.


However, while this will work with head-mounted displays and truly benefit products like the Apple Vision Pro, it won’t play well for groups of people with different interests. In that case, the service will look for commonalities in the group and then craft content most likely to appeal to the largest number of people in a group or those who have a say in the matter — like parents over kids.
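As a toy illustration of that group weighting, the sketch below scores a handful of themes against each viewer’s stated preferences and gives the deciders (here, the parents) extra weight. Every name, theme, and weight is hypothetical; a real service would work from far richer viewing data.

```python
# Toy version of the group-weighting idea: weighted votes per preferred theme,
# with "deciders" (e.g., parents) counting for more than other viewers.
from collections import defaultdict

viewers = [
    {"name": "Mom",   "weight": 2.0, "likes": {"mystery", "drama"}},
    {"name": "Dad",   "weight": 2.0, "likes": {"sci-fi", "mystery"}},
    {"name": "Kid 1", "weight": 1.0, "likes": {"animation", "sci-fi"}},
    {"name": "Kid 2", "weight": 1.0, "likes": {"animation", "comedy"}},
]

scores = defaultdict(float)
for viewer in viewers:
    for theme in viewer["likes"]:
        scores[theme] += viewer["weight"]   # weighted vote for each liked theme

best_theme = max(scores, key=scores.get)
print(best_theme, dict(scores))  # "mystery" wins because both parents like it
```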

This approach could make for some interesting family dynamics, or it could deepen the isolation between family members as, much like with tablets and smartphones today, each person dives into their own screen and content, and group watching anything but sports becomes a thing of the past.

Undoubtedly, we will get a lot more crap from people who are trying and failing to learn how to direct AIs to create the content they, or anyone else, want.

Much like Apple figuring out how to license digital music, the winner will likely be the company (and related content creators) that figures out how to license video content that can be modified and properly charge for it.

I think YouTube has the best chance of doing this, but Facebook and even Microsoft are in the running. Steve Jobs could have figured this out, but I think Tim Cook is too rigid in his views, and getting this right would require a lot of creativity. So, while Apple could do this, I doubt they’ll be the first and are more likely to follow someone else’s lead here.

Summary:

  • AI is rapidly advancing, with tools like OpenAI's Sora and ElevenLabs' audio generators paving the way for AI-generated video content.
  • This shift towards AI-driven content creation is expected to revolutionize the entertainment industry, impacting traditional production methods and potentially leading to job displacement.
  • User-generated content is poised to become even more prominent as AI empowers individuals to create high-quality videos with ease. This rise in user-generated content brings both opportunities for creativity and concerns about the proliferation of fake content.
  • The future of content licensing and monetization is uncertain, but major platforms like YouTube, Facebook, and Microsoft are vying to dominate the market for AI-powered video content.
