Runway introduces an API for its video-generating models
Runway, one of several AI startups developing video generation technology, today announced an API that enables developers and organizations to integrate the company's generative AI models into platforms, applications, and third-party services.
Currently in limited access (there's a waitlist), the Runway API offers just one model to start, Gen-3 Alpha Turbo, a faster but less capable version of Runway's current flagship model, Gen-3 Alpha, and two plans: Build (for individuals and teams) and Enterprise. Base pricing is one cent per credit, with a second of video costing five credits, and Runway says that "trusted strategic partners," including marketing group Omnicom, are already using the API.
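To make that pricing concrete, here is a back-of-the-envelope sketch based only on the figures above; the function name and constants are illustrative, not part of any official Runway SDK:

```python
# Illustrative cost calculation from the stated pricing:
# 1 credit = $0.01, and 1 second of generated video = 5 credits.
# (Hypothetical helper for illustration only; not a Runway SDK.)

CREDIT_PRICE_USD = 0.01   # base price: one cent per credit
CREDITS_PER_SECOND = 5    # one second of video costs five credits

def generation_cost_usd(seconds: float) -> float:
    """Estimated cost in USD to generate `seconds` of video."""
    return seconds * CREDITS_PER_SECOND * CREDIT_PRICE_USD

# A 10-second clip works out to 50 credits, or about $0.50.
print(f"${generation_cost_usd(10):.2f}")  # -> $0.50
```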
The Runway API also comes with an unusual disclosure requirement. Any interface that uses the API must "prominently display" a "Powered by Runway" banner linking to the Runway website, the company said in a blog post. "This allows users to understand the technology behind the app while respecting our terms of service," it read.
Runway, which is backed by investors including Salesforce, Google, and Nvidia and was last valued at $1.5 billion, faces stiff competition in the video generation space, including from OpenAI, Google, and Adobe. OpenAI is expected to release its video generation model, Sora, in some form early this fall, while startups like Luma Labs continue to refine their technologies. With this preview launch, Runway becomes one of the first AI vendors to offer a video generation model through an API.
But even if APIs can help companies on their path to monetization (or at least offset the high costs of training and running models), they don't solve the deep legal problems surrounding these models, and generative AI technology in general. Runway's video generation model, like all video generation models, is trained on a large number of example videos and "learns" patterns from those videos in order to generate new frames. Where does the training data come from? Runway, like many vendors these days, declines to say, in part for fear of losing its competitive advantage.
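As a rough illustration of that training idea, and emphatically not Runway's actual code or architecture, a toy next-frame predictor might look like the sketch below: a model sees pairs of consecutive frames from example videos and learns to predict what comes next.

```python
# Toy sketch of the general idea behind video-generation training:
# a model observes frames from example videos and learns to predict
# the next frame. Generic illustration only; NOT Runway's system.
import torch
import torch.nn as nn

class TinyFramePredictor(nn.Module):
    """Predicts the next frame from the current one (vastly simplified)."""
    def __init__(self, frame_dim: int = 64 * 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(frame_dim, 256),
            nn.ReLU(),
            nn.Linear(256, frame_dim),
        )

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        return self.net(frame)

model = TinyFramePredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in for real training data: batches of (current_frame, next_frame)
# pairs that would be extracted from example videos.
current = torch.rand(8, 64 * 64)
target = torch.rand(8, 64 * 64)

for step in range(100):
    predicted = model(current)
    loss = loss_fn(predicted, target)  # how far off is the predicted frame?
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```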
Those training details would also be a potential source of intellectual property lawsuits if Runway trained its models on copyrighted data without permission. In fact, there's evidence that it did: a 404 Media report in July revealed an internal spreadsheet of training data that included links to Netflix, Rockstar Games, Disney, and creator-owned YouTube channels like Linus Tech Tips and MKBHD.
It's unclear whether Runway ultimately used the videos from the spreadsheet to train its models; in an interview in June, Runway co-founder Anastasis Germanidis would say only that the company uses "curated internal datasets." But if Runway did train on copyrighted content without permission, it's hardly the only AI vendor flouting copyright rules. Earlier this year, OpenAI CTO Mira Murati did not directly deny that Sora was trained on YouTube content, and Nvidia has reportedly used YouTube videos to train an internal video generation model called Cosmos.
Many generative AI providers believe that a doctrine known as fair use offers them legal protection, and some are willing to take that risk. Others are hedging their bets: Adobe, for instance, is said to be offering artists payments for their clips to train its video generation models. With any luck, the cases now winding their way through the courts will bring clarity soon.
Be that as it may, one thing is becoming clear: generative AI video threatens to upend film and TV as we know them. In a 2024 survey commissioned by the Animation Guild, a union representing Hollywood animators and cartoonists, 75% of film production companies that adopted AI reported reducing, consolidating, or eliminating jobs after incorporating the technology. The survey also estimates that by 2026, more than 100,000 U.S. entertainment jobs will be disrupted by generative AI.