Every new OpenAI announcement sparks awe and terror. Late last week, the maker of ChatGPT dropped its latest gizmo, a text-to-video model called Sora that can create up to a minute of high-quality video. Cue a flood of remarkable AI clips going viral on social media, while stock video producers, filmmakers, actors and some startup founders likely fretted about their livelihoods.
AI video generation has been around for more than a year, but Sora’s examples look more realistic than previous efforts. The glitches are harder to spot and the humans look more human. As usual, OpenAI won’t talk about the all-important ingredients that went into this new tool, even as it releases the tool to an array of testers before making it public.
OpenAI needs to be more forthcoming about the data used to train Sora, and less secretive about the tool itself, given its potential to disrupt industries and elections. OpenAI Chief Executive Officer Sam Altman said that red-teaming of Sora would start last Thursday, the day the tool was announced and shared with beta testers. Red-teaming is when specialists test an AI model’s security by pretending to be bad actors who want to hack or misuse it.
The goal is to make sure such misuse can’t happen in the real world. When I asked OpenAI how long it would take to run these tests on Sora, a spokeswoman said there was no set length. “We will take our time to assess critical areas for harms or risks,” she added.
The company spent about six months testing GPT-4, its most recent language model, before releasing it last year. If checking Sora takes the same amount of time, the tool could become available to the public in August, a good three months before the US election.