We are thrilled to announce that Twelve Labs will host the inaugural "Workshop on Video-Language Models" at NeurIPS 2024! This event will unite the brightest minds in AI to explore the latest advancements in video-language models, marking the first time NeurIPS has featured a workshop dedicated solely to this critical area of research.
Save the Date: December 14, 2024!
Led by our CTO, Aiden Lee, this workshop promises to be an exciting convergence of top minds and cutting-edge research. We're partnering with renowned institutions including Allen Institute for AI, Amazon AGI, Microsoft, Apple, NAVER AI Lab, KAIST, and the University of North Carolina at Chapel Hill to foster discussions at the forefront of video-language models.
We invite researchers to submit their original work on topics related to video-language models. Topics of interest include, but are not limited to:
Submission instructions can be found here: https://openreview.net/group?id=NeurIPS.cc/2024/Workshop/Video-Langauge_Models#tab-recent-activity
Submission Tracks:
Outstanding papers will be recognized with awards, including a Best Paper Award and two Runner-Up Awards.
We are honored to host an outstanding lineup of speakers, each of whom has made significant contributions to the field of video-language models and AI:
Kristen Grauman, Professor at UT Austin and Research Scientist at Facebook AI Research, is renowned for her work in computer vision. She led the development of the Ego4D dataset, a crucial resource in video-language research.
Jianwei Yang, Senior Researcher at Microsoft, is recognized for advancing visual recognition and multimodal understanding, notably through his work on Phi-3-Vision and Set-of-Mark prompting.
Gedas Bertasius, Assistant Professor at UNC Chapel Hill, has made significant strides in video AI. His work on TimeSformer and VindLU has set new benchmarks in video understanding.
Dima Damen, Professor at the University of Bristol and Researcher at Google DeepMind, is a leading expert in egocentric vision. She's best known for creating the EPIC-KITCHENS dataset, which has been pivotal for video-language research.
Doyup Lee, Senior Researcher at RunwayML, has pioneered video generation technology. His work on Gen-3 is transforming video content creation and editing through advanced AI tools.
Ishan Misra, Research Scientist at Meta GenAI Research, is known for his work on Emu Video, a cutting-edge text-to-video generation model, and for his influential research on self-supervised learning in computer vision.
This workshop offers a unique opportunity to engage with cutting-edge research, exchange ideas with leading experts, and shape the future of video-language models. Whether you're a researcher, developer, or enthusiast, you'll find this event to be a fertile ground for inspiration and collaboration.
We're excited to welcome you to NeurIPS 2024! Stay tuned for more updates, and don't miss your chance to be part of this great event.