AI is one of the biggest topics in the tech sphere right now. From Google's Gemini (and its open-model counterpart, Gemma) to OpenAI's ChatGPT, it seems like everyone is having a go at making AI, but we've reached a turning point: OpenAI just released its new text-to-video generator, Sora, and it's getting too good.
For the past year or so, text-to-video AIs have been the butt of a lot of jokes because of their quality (just look up "Will Smith eating spaghetti AI"), but recently AI has had a glow-up in the video department, sparking debate over how far we want it to go. The first examples of video AI came from image generators being fed their own previous frames, which meant the original prompt wasn't carried through to the next frame; instead, the AI had to guess where the story was supposed to go. Over time, though, models have gotten to the point where prompts and ideas carry over from frame to frame, and companies like Google, Meta, and OpenAI have taken advantage of it.
Recently, OpenAI released sample videos from its Sora text-to-video AI, and while it's a curated selection, it marks a huge step forward in video generation, assuming the quality holds up outside the showcase. But, as with all AI, there's the problem of how this will affect filmmakers and actors, along with whether the videos that were fed into the model were approved by their creators.
"If the videos get better, we could see animators and other artists losing their jobs," said junior Allison Rigby in regard to the situation. While OpenAI says it wants to "advance the model to be most helpful for creative professionals," there's no promise it won't be misused.
But there are ways to push back on AI's use of human-made photos and videos. For example, programs such as Nightshade and WebGlaze will "poison" photos so that when an AI looks at the image, it sees something the image really isn't. You could have an image of a dog tagged as a cat, and after enough poisoned photos the AI will produce a dog when asked for a cat. These programs and the teams behind them are still small, though, and they haven't had a huge effect online yet.
It's always important to look closely at the media you're viewing: see if the background and foreground are in sync, look at the background people and buildings, and check whether the subject ever blends into itself. All of these are good little hints that what you're seeing might not be real. But as we get closer and closer to perfect AI, we need to choose what we want to see with it and what we want to use it for. And maybe it's time to start asking questions like sophomore Lorelei Wise's: "Is it necessary, or is it science continuing for only the sake of continuation?"