Luma AI launches Dream Machine, a free text-to-video model designed to compete with Sora and Kling.


US-based startup Luma AI, which specializes in visual AI, has unveiled Dream Machine, a video generator reminiscent of OpenAI's Sora. Unlike Sora and Kling, Dream Machine is publicly accessible. Luma AI describes it as an advanced video model that quickly generates high-quality, realistic videos from natural-language text prompts and images.


According to Luma AI, Dream Machine is a highly scalable and efficient transformer model trained directly on videos, which enables it to produce physically accurate shots. Calling it a first step toward a universal imagination engine, Luma AI has made the tool available to all users.

Dream Machine is said to generate 120 frames in 120 seconds, allowing for fast iteration. It produces five-second shots with smooth motion, cinematic quality, and a dramatic flair. The model is designed to understand how people, animals, and objects interact in the physical world, so videos retain consistent characters and plausible physics. It also supports experimenting with a range of cinematic, fluid, and naturalistic camera movements.



While Dream Machine offers impressive capabilities, it also has limitations: the official website lists issues with morphing, movement, text rendering, and the "Janus" problem (duplicated front-facing features). Sora and Kling each have their own video-duration limits, but Dream Machine stands out from both by being free to use. In a test with the prompt "Peter Pan flying on a carpet between galaxies," generation took about an hour, and the result was far from the prompt: Peter Pan appeared in a dress, floating with distorted fingers, and the carpet was missing entirely.
