TL;DR
We are on a mission to replace Adobe Premiere for cinematic AI video creation.
Ask
We’d love to onboard your creative team onto Velvet!
Launch Video
We made this launch video on Velvet in ~5 hours for ~$50:
Problem
Professional video editors were not built for cinematic AI content. Today’s workflows stitch together twenty separate media-creation tools (Veo, ElevenLabs, Midjourney, Sora, and so on), each producing its own temporary files, which are then dragged and dropped into a professional editor such as Adobe Premiere or DaVinci Resolve. Velvet consolidates these tools into a single video editing suite, so media creation and editing happen in the same place. That makes video creation faster and cheaper: one tool, one subscription.
How Velvet Works
Team
We are two Lucases who met at the University of Chicago in 2021 and decided to build the future of cinematic AI video together.
Lucas Mantovani (CEO) worked on AI avatar video generation at Meta FAIR for one year after graduating from UChicago, and has published a paper on full-body avatar video generation.
Lucas Tucker (CTO) worked on Adobe’s infra team after graduating from UChicago; there, his team built a realtime, petabyte-scale content delivery platform.
The founders met as University of Chicago undergrads in 2021 through Paragon Global Investments, a computer science organization on campus that Mantovani helped create and Tucker led. In 2025, after a year working on avatar video generation at Meta FAIR, Mantovani moved to the Bay Area and met repeatedly with Tucker, who had spent two months on Adobe’s infra team. Through those conversations they realized that, despite significant progress, video generation models like Veo and Sora lack the context to create viral content for brands. Shortly after, they founded Velvet to address this problem.
Our long-term vision is to let enterprise marketing teams generate and edit video assets that can be varied in real time within constraints specified by their creators. We believe this approach keeps AI content true to each brand’s unique style while taking advantage of the adaptive capabilities that real-time AI video rendering enables.