Twelve Labs, the video understanding company, today announced a major step forward in the world of video editing and storytelling. It has partnered with Blackbird plc (AIM:BIRD), the technology licensor, developer and seller of the market-leading cloud-native video editing platform, Blackbird. This partnership makes Twelve Labs’ state-of-the-art video understanding AI available within Blackbird’s fast, cloud-native editing suite, empowering editors to deliver their most meaningful narratives with lightning-quick turnaround.
The integration solves one of the essential challenges of video production: locating specific storytelling moments within vast volumes of video content. Users across the world struggle to identify precise elements, spending huge amounts of time searching. Current taxonomy-based approaches are limited in their ability to support natural language search and are costly to enable. Twelve Labs’ AI, however, allows editorial teams to find exact moments using natural language, enabling editors to identify and fully leverage the perfect scenes for their stories with unparalleled speed.
As part of its commitment to a plugin architecture, Blackbird has added a new plugin embedded in its editing platform. This enables seamless integration of Twelve Labs and, in turn, access to its indexing technology and AI auto-sequencing productivity tool without ever having to leave the editor.
Content analysis is performed with no user intervention, creating a highly efficient workflow: video editors can query their footage directly with natural language search and instantly construct edit sequences for onward crafting or publishing.
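To illustrate the kind of natural language lookup this workflow builds on, here is a minimal sketch using the Twelve Labs Python SDK. It assumes footage has already been uploaded and indexed; the API key, index ID, and query are placeholders, and the method and field names follow the publicly documented SDK search interface, which may differ from the exact integration running inside the Blackbird plugin.

```python
# Minimal sketch: natural-language moment search with the Twelve Labs SDK.
# Assumes an existing index of uploaded footage; values below are placeholders
# and this is illustrative, not the Blackbird plugin's actual integration code.
from twelvelabs import TwelveLabs

client = TwelveLabs(api_key="YOUR_API_KEY")

# Ask for a moment in plain English, searching across visual content.
results = client.search.query(
    index_id="YOUR_INDEX_ID",
    query_text="the goalkeeper saves a penalty in the rain",
    options=["visual"],
)

# Each result carries the source video and the start/end of the matching moment,
# which an editor could drop straight onto a timeline as a rough-cut clip.
for clip in results.data:
    print(f"video={clip.video_id} start={clip.start:.1f}s end={clip.end:.1f}s score={clip.score}")
```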
The partnership between Twelve Labs and Blackbird marries forward-thinking AI capabilities with the pressing needs of video editors and storytellers. It introduces a new paradigm in video editing that reduces clutter, streamlines the creative process for both novices and experts, and, most importantly, amplifies the human touch in storytelling.
“We envision a future where editors supercharge their workflows. Instead of a blank canvas, they can start with a rough cut and refine it from there, saving countless hours of scrubbing and stitching,” said Jae Lee, co-founder and CEO of Twelve Labs. “Such a meaningful improvement over current approaches is possible through our work with Blackbird, in conjunction with the future possibilities for integrating our new Pegasus model. Together, we can remove some of the most tedious and time-consuming tasks for editors and push the boundaries of what is possible; we can craft the future of narrative excellence.”
“Blackbird has always prided itself on its speed and efficiency, and the advent of AI technologies is key to improving that further. The landscape is changing fast when it comes to storytelling, and where AI can really make a difference is in making the process faster and more enjoyable. Twelve Labs’ technology addresses this well through its fast-developing natural language recognition model, and we are excited to be getting closer to offering this to our Blackbird customers,” commented Ian McDonough, Blackbird CEO.
Twelve Labs and Blackbird will be showcasing the new AI integration at the 2024 NAB Show in Las Vegas, 13-17 April. To learn more about Twelve Labs’ technology and how it works with Blackbird, please visit twelvelabs.io.