OpenAI’s Sora text-to-video tool’s impact will be ‘profound’

Estimated read time: 2 min

OpenAI last week unveiled a new capability for its generative AI (genAI) platform that can use a text prompt to generate video — complete with lifelike actors and other moving elements.

The new genAI model, called Sora, has a text-to-video function that can create complex, realistic moving scenes with multiple characters, specific types of motion, and accurate details of the subject and background “while maintaining visual quality and adherence to the user’s prompt.”

Sora understands not only what a user asks for in the prompt, but also how those objects exist in the physical world.


