Problem: Social media platforms have long been aware of the danger posed by misinformation hosted on their services. Although some of their actions have been effective to a certain extent, live video still serves as a potent vector for spreading misinformation.
Solution: Use exponential advancements in the computational power of AI to analyze live video and provide real-time pushback against fake news. While this may still seem a ways off, the last few years alone have shown how fast the AI industry can move. This graph from Our World In Data captures that perfectly.
There are two main reasons why a social media company would want to adopt such a system. Firstly, people are highly disturbed by the rise of malicious lies that erode the public’s trust in institutions. This was particularly prevalent during COVID-19, when over 80 percent of respondents to a poll reported seeing some form of misinformation. Secondly, that dislike may grow strong enough that solving the issue is no longer left to the social media giants of Silicon Valley; 55 percent of US adults now favor federal legislation to restrict misinformation, up from 39 percent five years ago.
This shows how systems like Instagram’s now-infamous misinformation warnings and YouTube’s outright suppression of any mention of COVID-19 largely failed: viewers could bypass warnings with a single click or scroll past Wikipedia links, and creators ultimately just found ways to circumvent the filters in the first place.
An AI fact-checker could be used not only to manage the currently rampant livestream market, which constituted 17 percent of Instagram’s total traffic as of 2022, but also to provide more in-depth analysis of videos in general. For example, rather than simply placing misinformation warnings based on keywords in titles or AI-generated transcripts, platforms could take the fight against misinformation one step further by showing pop-ups exactly when a misleading claim is mentioned and precisely where the viewer is likely to direct their attention. A rough sketch of that idea follows below.
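As a rough illustration of that timed pop-up idea, the sketch below feeds live caption chunks through a claim checker and emits overlay events at the moment a flagged claim is spoken. The `check_claim` function, the `TranscriptChunk`/`OverlayEvent` shapes, and the sample correction are hypothetical placeholders for illustration only, not any platform's actual API.

```python
# Minimal sketch: turn a live transcript stream into timed fact-check overlays.
# check_claim() stands in for a claim-checking model or fact-check service.

from dataclasses import dataclass
from typing import Iterable, Iterator, Optional


@dataclass
class TranscriptChunk:
    text: str       # caption text produced by live speech-to-text
    start_s: float  # when the phrase is spoken, in seconds from stream start


@dataclass
class OverlayEvent:
    at_s: float     # when to show the pop-up
    message: str    # short correction shown to the viewer


def check_claim(text: str) -> Optional[str]:
    """Placeholder claim checker: return a correction if the text contains a
    known misleading claim, otherwise None. A real system would call an ML
    model or fact-check API here."""
    flagged = {
        "the vaccine alters your dna": "Fact check: approved vaccines do not alter DNA.",
    }
    return flagged.get(text.lower().strip())


def overlay_events(chunks: Iterable[TranscriptChunk]) -> Iterator[OverlayEvent]:
    """Emit a pop-up event at the exact moment a flagged claim is spoken."""
    for chunk in chunks:
        correction = check_claim(chunk.text)
        if correction:
            yield OverlayEvent(at_s=chunk.start_s, message=correction)


if __name__ == "__main__":
    live_captions = [
        TranscriptChunk("welcome back to the stream", 2.0),
        TranscriptChunk("the vaccine alters your DNA", 14.5),
    ]
    for event in overlay_events(live_captions):
        print(f"show pop-up at {event.at_s:.1f}s: {event.message}")
```

The key design choice is that each overlay carries its own timestamp, so the player can surface the correction in sync with the offending claim rather than as a blanket banner over the whole video.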
Doing this is also key to fighting the rising threat posed by deepfakes, which have been used for everything from faking a video of Zelenskyy surrendering to coordinating espionage on LinkedIn. To put it into perspective, in 2023 alone there were up to 500,000 video and voice deepfakes shared on social media sites. Current solutions like Intel’s FakeCatcher are significant competitors, but this business’s combination of more comprehensive fact checking and the ability to operate on live video would give it a competitive advantage unmatched by anything on the market today.
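To show how the deepfake signal might be folded into the same live pipeline, here is a minimal sketch under assumed interfaces: `deepfake_score` is a stand-in for a frame-level detector (FakeCatcher does not expose this Python API), and `warning_for_segment` is a hypothetical helper that picks the strongest applicable warning for a short segment.

```python
# Hypothetical sketch: combine a per-frame deepfake score with the transcript
# fact-check above before deciding which warning to overlay on a segment.

from statistics import mean
from typing import Optional, Sequence


def deepfake_score(frame: bytes) -> float:
    """Placeholder: probability (0-1) that a decoded video frame is synthetic.
    A real detector would inspect pixel- or physiology-level signals here."""
    return 0.0


def warning_for_segment(frames: Sequence[bytes],
                        claim_correction: Optional[str],
                        threshold: float = 0.8) -> Optional[str]:
    """Pick the strongest applicable warning for a short live-video segment."""
    synthetic = bool(frames) and mean(deepfake_score(f) for f in frames) >= threshold
    if synthetic and claim_correction:
        return f"Likely manipulated video. {claim_correction}"
    if synthetic:
        return "This video segment may be synthetically generated."
    return claim_correction
```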
Monetization: Sell the software to any organization looking to distribute it to the public (social media companies, broadcasters, and NGOs).
Contributed by: David Salinas (Billion Dollar Startup Ideas)