WE POST ONE NEW BILLION-DOLLAR STARTUP IDEA every day.

Problem: When playing video games in AR, the visual components are not rendered in real time, often leading to lag and breaking the illusion that what you’re looking at is real life.

Solution: A company that designs software to render images in real time. It would push beyond the standard of rendering 60 times per second (60 FPS) to higher rates like 120 FPS, and eventually render images at the same speed at which humans process the visual world. But what is the highest frame rate the human brain can perceive? According to Quora,

…we know from experimenting (as well as simple anecdotal experience) that there is a diminishing return in what frames per second people are able to identify. Although the human eye and brain can interpret up to 1000 frames per second, someone sitting in a chair and actively guessing at how high a frame-rate is can, on average, interpret up to about 150 frames per second.

Paul Read and Mark-Paul Meyer build on this in their 2000 book, Restoration of Motion Picture Film, published in the Conservation and Museology series:

The temporal sensitivity and resolution of human vision varies depending on the type and characteristics of visual stimulus, and it differs between individuals. The human visual system can process 10 to 12 images per second and perceive them individually, while higher rates are perceived as motion.

Interestingly enough, it seems that while the hardware capabilities are there, it is the software that holds back real-time rendering. For instance, the NVIDIA Jetson Xavier NX, released in May 2020, is advertised as “The World’s Smallest AI Supercomputer for Embedded and Edge Systems” and in tests has rendered at nearly 160 FPS (faster than most people’s brains can comprehend). The hardware is sufficient to render a convincingly realistic digital world in real time; software has yet to catch up and piggyback on this accomplishment.
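To make those frame-rate numbers concrete, here is a minimal sketch (plain arithmetic, not a benchmark) of the time budget a renderer has to produce each frame at the rates cited in this post; the labels are my own shorthand:

```python
# Per-frame time budgets at the frame rates discussed above.
# Pure arithmetic for illustration -- the numbers come from the
# figures cited in this post, not from any benchmark I ran.

frame_rates = {
    "Typical game baseline": 60,            # ~16.67 ms per frame
    "High-refresh target": 120,             # ~8.33 ms per frame
    "Chair-test average (Quora)": 150,      # ~6.67 ms per frame
    "Jetson Xavier NX (as tested)": 160,    # ~6.25 ms per frame
    "Upper bound of perception": 1000,      # ~1 ms per frame
}

for label, fps in frame_rates.items():
    budget_ms = 1000.0 / fps  # milliseconds available to render one frame
    print(f"{label:>28}: {fps:>4} FPS -> {budget_ms:5.2f} ms/frame")
```

In other words, moving from 60 FPS toward the ~1,000 FPS perceptual ceiling shrinks the rendering budget from roughly 16.7 ms to 1 ms per frame, which is why software efficiency, not just hardware throughput, becomes the bottleneck.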

I anticipate that in order to arrive at real-time rendering, founders, developers, and consumers will have to cross the “uncanny valley,” a concept first introduced in 1970 by Masahiro Mori, then a professor at the Tokyo Institute of Technology. As Mori described in his influential work,

I have noticed that, in climbing toward the goal of making robots appear human, our affinity for them increases until we come to a valley, which I call the uncanny valley… One might say that the prosthetic hand has achieved a degree of resemblance to the human form, perhaps on a par with false teeth. However, when we realize the hand, which at first sight looked real, is in fact artificial, we experience an eerie sensation. For example, we could be startled during a handshake by its limp boneless grip together with its texture and coldness. When this happens, we lose our sense of affinity, and the hand becomes uncanny.

There is a visual representation of this concept below.

All in all, the global video game market was valued at USD 151.06 billion in 2019 and is expected to grow at a compound annual growth rate (CAGR) of 12.9% from 2020 to 2027. While this business could apply to almost any industry (e.g., VR healthcare, AR manufacturing), the idea of rendering in real time for a more delightful experience would probably hold the most sway in video games. Thus, the business’s goal would be to capture a 1% share of this huge market with its unique algorithms for real-time rendering; a quick back-of-the-envelope calculation of what that implies is sketched below.
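Here is that sketch, assuming the stated 12.9% CAGR compounds annually from the 2019 base through 2027 (the year count and annual compounding are my assumptions; the post only gives the 2019 valuation and the growth rate):

```python
# Back-of-the-envelope projection using the market figures cited above.
# Assumes the 12.9% CAGR compounds annually from the 2019 valuation;
# the 1% share is the target proposed in this post, not a forecast.

base_2019_usd = 151.06e9  # global video game market, 2019
cagr = 0.129              # compound annual growth rate, 2020-2027
years = 8                 # 2019 through 2027

market_2027_usd = base_2019_usd * (1 + cagr) ** years
target_share_usd = 0.01 * market_2027_usd  # the proposed 1% share

print(f"Projected 2027 market: ${market_2027_usd / 1e9:,.0f}B")   # ~$399B
print(f"1% share of that:      ${target_share_usd / 1e9:,.1f}B")  # ~$4.0B
```

By that rough math, the 2027 market would be around $400 billion, so even a 1% share would be a multi-billion-dollar business.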

Monetization: Selling or licensing the real-time rendering software itself, or building products on top of it.

Contributed by: Michael Bervell (Billion Dollar Startup Ideas)
