Twelve Labs, the video search and understanding company, today announced the launch of a first-of-its-kind cloud-native suite of APIs that enable comprehensive video search in less than one second. The company’s proprietary video understanding AI system can locate exact moments almost instantly across massive video archives, making video search as fast and easy as CTRL+F.
Index Ventures led Twelve Labs’ $5 million seed funding round and Index partner Kelly Toole joined the board of directors. Radical Ventures, Expa and Techstars Seattle also participated, along with angel investors Dr. Fei-Fei Li of Stanford University, Alexandr Wang, CEO of Scale AI, and Jack Conte, CEO of Patreon.
“Videos are becoming the fundamental method by which we share, consume, and store information online,” said Kelly Toole, Partner at Index Ventures. “And yet, despite their ubiquity, video search today relies heavily on simple things like keywords, tags, and titles. This leaves the richness of the actual content largely untapped. Twelve Labs is changing this with truly transformational technology that will power the next generation of video-centric products.”
Eighty percent of the world’s data now resides in video form. Almost every part of our lives is deeply rooted in video, from organizational knowledge, meetings, and communications, to online learning, to entertainment. Video consumption has reached the point where Gen-Zers spend one-third of their waking hours watching or creating content.
Twelve Labs’ technology makes it possible for any company to unlock the power of video for the first time. From locating noteworthy discussion points within an organization’s extensive Zoom recordings, to finding urgently needed scenes within a media company’s footage archive, to pinpointing sensitive content on a streaming platform, the Twelve Labs video search API enables text-based semantic search that is as easy as CTRL+F. The cloud-native video understanding infrastructure is powered by an AI system that understands visual and conversational context to make sense of any scene or video moment, without manual input.
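As a rough sketch of what text-based semantic search over indexed video might look like from a developer’s perspective, the Python example below sends a free-text query over HTTP. The endpoint URL, header, and field names here are illustrative assumptions, not the documented API; the authoritative reference is at https://docs.twelvelabs.io/.

```python
import requests

# NOTE: endpoint, header, and field names below are hypothetical
# placeholders for illustration only. Consult https://docs.twelvelabs.io/
# for the actual API reference.
API_URL = "https://api.twelvelabs.io/v1/search"  # assumed endpoint
API_KEY = "YOUR_API_KEY"

response = requests.post(
    API_URL,
    headers={"x-api-key": API_KEY},
    json={
        "index_id": "YOUR_INDEX_ID",  # an index of previously uploaded videos
        "query": "CEO explains the product roadmap",  # free-text semantic query
    },
)
response.raise_for_status()

# Each hit would identify a video and the matching moment within it.
for hit in response.json().get("data", []):
    print(hit)
```

The point of the sketch is the interaction model: videos are indexed once, and any natural-language query then resolves to specific moments, with no tags or titles required.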
“There is nothing in the world like Twelve Labs,” said Pedro Almeida, CEO and co-founder of Mindprober. “We’ve tried so-called solutions from tech giants, and Twelve Labs is so far ahead of what they can do. Twelve Labs was not only easy to integrate, but it finds what’s valuable, and the accuracy of results is astounding. Knowing that I can reliably access any information we need in our video data opens new doors to business areas we’ve only imagined.”
Twelve Labs’ groundbreaking AI technology recently won first place in the world’s largest competition for video understanding, the ICCV VALUE (Video and Language Understanding Evaluation) Challenge – Video Retrieval Track. The company’s video understanding algorithm, ViSeRet, beat out some of the world’s largest, most advanced tech companies, including Baidu and Tencent, and outperformed Microsoft’s state-of-the-art baseline model in the video retrieval (search) track. Twelve Labs has also secured support from some of the brightest luminaries in AI, including Fei-Fei Li (Stanford), Silvio Savarese (Stanford), Oren Etzioni (AI2), and Aidan Gomez (co-creator of the Transformer), as well as founders of innovative companies such as Alexandr Wang (Scale), Aaron Katz (Clickhouse), John Kim (Sendbird), Dug Song (Duo Security), Jean Paoli (Docugami), and more.
“Twelve Labs has a true understanding of video through billions of AI parameters – and search is just the beginning,” said Jae Lee, CEO and co-founder of Twelve Labs. “Our mission is to help developers build programs that can see, listen, and understand the world as we do by giving them the most powerful video understanding infrastructure.”
To read about Twelve Labs’ founding story and dive further into its mission and technology, go here.
To get started with the Twelve Labs API, available today, go to https://docs.twelvelabs.io/.
The Twelve Labs team believes that to understand video is to understand the world. To this end, the company was founded to make video instantly, intelligently, and easily searchable. Twelve Labs’ state-of-the-art video understanding technology enables the accurate and timely discovery of valuable moments within an organization’s vast sea of videos so that users can do and learn more. The company is backed by leading venture capitalists, AI luminaries, and founders of cutting-edge technology companies. It is headquartered in San Francisco, with an APAC office in Seoul. Learn more at twelvelabs.io.