Hume AI raises $50m to support the debut of its flagship product, Empathic Voice Interface

Published: 02-04-2024 10:09:00 | By: Pie Kamau

Hume AI (Hume), a startup and research lab building artificial intelligence optimized for human well-being, has raised a $50 million Series B. The round, led by EQT Ventures, will support the debut and continued development of Hume’s new flagship product: an emotionally intelligent voice interface that can be built into any application. Union Square Ventures, Nat Friedman & Daniel Gross, Metaplanet, Northwell Holdings, Comcast Ventures, and LG Technology Ventures also participated in the round.

Hume AI was founded by Alan Cowen, a former Google researcher best known for pioneering semantic space theory – a computational approach to understanding emotional experience and expression that has revealed nuances of the voice, face, and gesture now understood to be central to human communication globally. The company, which operates at the intersection of artificial intelligence, human behavior, and health and well-being, has created an advanced API toolkit for measuring human emotional expression that is already used in industries ranging from robotics and customer service to healthcare, wellness, and user research.

''The main limitation of current AI systems is that they’re guided by superficial human ratings and instructions, which are error-prone and fail to tap into AI’s vast potential to come up with new ways to make people happy. By building AI that learns directly from proxies of human happiness, we’re effectively teaching it to reconstruct human preferences from first principles and then update that knowledge with every new person it talks to and every new application it’s embedded in,'' says Alan.

In connection with the fundraise, Hume AI has released a beta version of its flagship product, an Empathic Voice Interface (EVI). The emotionally intelligent conversational AI is the first to be trained on data from millions of human interactions to understand when users are finished speaking, predict their preferences, and generate vocal responses optimized for user satisfaction over time. These capabilities will be available to developers with just a few lines of code and can be built into any application.

AI voice products have the potential to revolutionize our interaction with technology; however, the stilted, mechanical nature of their responses remains a barrier to truly immersive conversational experiences. The goal with Hume's EVI is to provide the basis for engaging voice-first experiences that emulate the natural speech patterns of human conversation.

Hume’s EVI is built on a new form of multimodal generative AI that integrates large language models (LLMs) with expression measures, which Hume refers to as an empathic large language model (eLLM). The company’s eLLM enables EVI to adjust the words it uses and its tone of voice based on the context and the user’s emotional expressions. EVI also accurately detects when a user is ending their conversational turn so it can start speaking, stops speaking when the user interrupts, and generates responses in real time with latency under 700 ms – allowing for fluid, near-human-level conversation. With a single API call, developers can integrate EVI into any application to create state-of-the-art voice AI experiences.
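To picture what a "single API call" integration might look like in practice, here is a minimal sketch of the kind of session-configuration message a voice client could send when it connects. Everything below – the WebSocket URL, the message type, and the field names – is a hypothetical assumption for illustration only, not Hume's documented API.

```python
import json

# Assumed endpoint for illustration; consult Hume's docs for the real URL.
EVI_ENDPOINT = "wss://api.hume.ai/v0/evi/chat"

def build_session_settings(system_prompt: str, sample_rate_hz: int = 16000) -> str:
    """Serialize a session-settings message a client might send on connect.

    The message type and field names here are illustrative assumptions,
    not Hume's documented schema.
    """
    message = {
        "type": "session_settings",       # assumed message type
        "system_prompt": system_prompt,   # steer the assistant's persona
        "audio": {
            "encoding": "linear16",       # raw 16-bit PCM, an assumed option
            "sample_rate": sample_rate_hz,
        },
    }
    return json.dumps(message)

# A real client would open a WebSocket to EVI_ENDPOINT, send this message,
# then stream microphone audio up and play the generated voice responses back.
settings = build_session_settings("You are a friendly support agent.")
```

The sketch deliberately stops before any network I/O: the point is only that the developer-facing surface is a single connection plus a small configuration payload, with turn detection, interruption handling, and response generation handled server-side.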

Ted Persson, Partner, EQT Ventures: ''Hume’s empathic models are the crucial missing ingredient we’ve been looking for in the AI space. We believe that Hume is building the foundational technology needed to create AI that truly understands our wants and needs, and are particularly excited by Hume’s plan to deploy it as a universal interface.''

Andy Weissman, Managing Partner, Union Square Ventures: ''What sets Hume AI apart is the scientific rigor and unprecedented data quality underpinning their technologies. Hume AI’s toolkit supports an exceptionally wide range of applications, from customer service to improving the accuracy of medical diagnoses and patient care, as Hume AI’s collaborations with Softbank, Lawyer.com, and researchers at Harvard and Mt. Sinai have demonstrated.''

The growing Hume AI team currently comprises 35 leading researchers, engineers, and scientists advancing Dr. Cowen’s work on semantic space theory. His research, which has been published in numerous leading journals including Nature, Nature Human Behavior, and Trends in Cognitive Sciences, involves the widest range and most diverse samples of emotions ever studied and informs Hume’s data-driven approach to creating more empathic AI tools. Hume’s technology leverages these research advances to learn from the tune, rhythm, and timbre of human speech, as well as “umms” and “ahhs,” laughs, sighs, and other nonverbal signals, to improve human-computer interactions.

Dacher Keltner, Chief Scientific Advisor, Hume AI: ''Alan Cowen’s research has transformed our understanding of the rich languages of emotional expression in the voice, face, body, and gesture. His work has opened up entire fields of inquiry into understanding the emotional richness of the voice and the subtleties of facial expression.''

www.hume.ai