It’s difficult to describe your initial reaction when you see Genie 3 spit out a forest you can actually walk through. Mostly it’s wonder, with a hint of discomfort underneath. At the edges, the trees aren’t quite right, and the light bends in ways no rasterizer would produce. But you can keep moving, and the world keeps going. That simple fact, generated rather than loaded, is what is quietly rattling parts of the gaming industry.
After a year of trusted-tester demos and screenshots leaked to Reddit, Google DeepMind finally released Project Genie to its AI Ultra subscribers in late January. The pitch is simple: type a prompt, add an image, select first-person or third-person, and step into a scene the model generates frame by frame. Each world lasts roughly 60 seconds at 720p and 24 frames per second, modest specs by any conventional standard. The trick is that nothing was built in advance.
| Detail | Information |
|---|---|
| Flagship project | Project Genie, an experimental prototype from Google DeepMind |
| Underlying model | Genie 3, with assists from Nano Banana Pro and Gemini |
| Public access | Google AI Ultra subscribers in the U.S., 18+ |
| Output specs | Roughly 60 seconds of interactive world, 720p at 24fps |
| Predecessor | Genie 2, released December 2024 |
| Latency benchmark | ~40ms per frame (Genie 2); 30–60fps in browser-based competitors |
| Notable open-source rival | LingBot-World, runs at ~16fps, trains on Unreal Engine footage |
| Browser-based competitor | SEELE, using WebGPU acceleration |
| Market reaction | Take-Two, Roblox, and Unity shares dropped on the day of the Genie 3 reveal |
| Prototype generation time | 3–5 minutes vs. 40+ traditional development hours |
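One way to read the latency row above: a ~40ms per-frame budget caps output at 25fps, barely above Genie's 24fps, while the 30–60fps browser-based competitors need to produce a frame in well under 34ms. The arithmetic, as a quick sketch:

```python
def max_fps(frame_ms: float) -> float:
    """Frames per second achievable at a given per-frame latency."""
    return 1000.0 / frame_ms

def frame_budget_ms(target_fps: float) -> float:
    """Per-frame time budget (in ms) needed to sustain a target frame rate."""
    return 1000.0 / target_fps

print(max_fps(40))          # 25.0 fps: the ~40ms Genie 2 benchmark
print(frame_budget_ms(60))  # ~16.67ms per frame to hit 60fps
```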
The resolution numbers don’t convey how much that distinction matters. Traditional game development still works the way it did when Half-Life 2 shipped: artists model assets, designers place them, and engineers render polygons. World models do something stranger. They learn physics and object permanence from training data by predicting the pixels of the next frame, much as a language model learns grammar by predicting the next token. No level exists; there is only whatever the model believes should happen next.
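That next-frame loop can be made concrete. The following is a toy sketch, not Genie's actual architecture: the predictor here is a stand-in that returns noise of the right shape, where the real system runs a large neural network conditioned on the frame history and the player's input.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_next_frame(history: list, action: str) -> np.ndarray:
    """Stand-in for the learned model. A real world model would be a
    neural network conditioned on past frames and the player's action;
    here we return random pixels just to show the data flow."""
    h, w, c = 90, 160, 3  # frames downscaled from 720p for the sketch
    return rng.random((h, w, c), dtype=np.float32)

frames = [np.zeros((90, 160, 3), dtype=np.float32)]  # seed frame
for step in range(24):                  # one second of play at 24fps
    action = "move_forward"             # player input for this frame
    frames.append(predict_next_frame(frames, action))

print(len(frames))  # 25: the seed plus one second of generated frames
```

The point of the loop is that there is no level geometry anywhere: each frame is sampled from what came before, which is also why consistency degrades the longer you play.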
The market’s response was instructive. Take-Two, Roblox, and Unity posted double-digit stock declines on the day Genie 3 was teased. Investors appear to have decided this is gaming’s ChatGPT moment, possibly too quickly. The industry’s rebuttal was predictable: games require state, multiplayer consistency, authored experience, and deterministic rules. All true. None of it really addresses the deeper issue, which is that models don’t have to be better games to win; they only have to compete for the same attention. TikTok killed nothing outright either, and yet here we are.

Look at developer responses on Reddit and LinkedIn and you’ll find something more measured than the stock charts suggest. One commenter likened LingBot-World, an open-source rival running at 16 frames per second, to “a mobile game cosplaying as Myst.” Another pointed out that rendering triangles is far cheaper than hallucinating every pixel in real time. They’re not wrong. But similar arguments were leveled at image-generation networks in 2021, and that debate aged in interesting ways.
The consistency problem is harder to wave away. Genie 3 can hold the world together for roughly a minute. Walk too far or turn around too often, and the model loses track of where the castle was. SEELE, a browser-based system, promises 10-minute sessions with stable spatial memory using a layered architecture: short-term recall for visual continuity, longer-term hashing for locations you’ve already visited. Multiplayer remains genuinely unsolved. If two players walk in opposite directions, the model produces two diverging versions of reality. No one has a clean answer yet.
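SEELE's internals aren't public, so the following is a hypothetical sketch of the two-tier idea described above: a rolling window of recent frames for short-term continuity, plus a store keyed by quantized coordinates so that revisited locations can be recalled. All class names, parameters, and cell sizes here are illustrative, not taken from any real system.

```python
from collections import deque

class LayeredWorldMemory:
    """Hypothetical two-tier memory: short-term frames, long-term locations."""

    def __init__(self, window: int = 48):
        self.recent = deque(maxlen=window)  # short-term visual continuity
        self.visited = {}                   # long-term: location key -> snapshot

    @staticmethod
    def location_key(x: float, y: float, cell: float = 10.0) -> tuple:
        # Quantize coordinates so nearby positions hash to the same cell.
        return (round(x / cell), round(y / cell))

    def observe(self, frame, x: float, y: float) -> None:
        self.recent.append(frame)
        # Keep the first snapshot seen for each cell as its canonical view.
        self.visited.setdefault(self.location_key(x, y), frame)

    def recall(self, x: float, y: float):
        # Returning to a known cell retrieves the stored snapshot, which a
        # generator could condition on to keep the castle where it was.
        return self.visited.get(self.location_key(x, y))

mem = LayeredWorldMemory()
mem.observe("castle_frame", x=3.0, y=4.0)
mem.observe("forest_frame", x=120.0, y=5.0)
print(mem.recall(2.0, 4.5))  # 'castle_frame': same cell as the first visit
```

Note what this sketch leaves out: it stores one snapshot per cell and says nothing about reconciling two players' memories, which is exactly the multiplayer gap the article describes.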
Even so, watching this happen, it’s hard to ignore how fast the floor keeps rising. A year ago, Genie 2 was a research curiosity. Now Project Genie is available to paying subscribers. Industry observer Aleksandr Antipin described it as “more for visualizing how a game might look”: a prototyping tool rather than a finished product. For now, he’s probably right. What no one seems willing to bet on is whether that stays true for another eighteen months.
