LivingLens enhances video intelligence platform with facial emotional, tonal and object recognition

LivingLens is excited to reveal its latest capabilities for analysing consumer video content: facial emotional recognition, tonal recognition and object recognition.

The new capabilities supplement the existing video intelligence suite, which allows users of the platform to decipher the full range of human behaviour demonstrated in video content.

Facial emotional recognition incorporates the latest artificial intelligence, which works by identifying key landmarks and expressions of the human face. A collection of deep learning algorithms then analyses this information to classify each facial expression within the video content and map it to a specific emotion.
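As a rough illustration of that pipeline, the sketch below detects landmarks in each frame, classifies the expression and maps it to an emotion label. Every name in it (detect_landmarks, classify_expression, the stub implementations and the label set) is a hypothetical stand-in for illustration, not the LivingLens API.

```python
# Minimal sketch of a landmark-based emotion pipeline; all names are
# hypothetical stand-ins, not the LivingLens implementation.
from dataclasses import dataclass
from typing import List, Tuple
import random

EMOTIONS = ["happy", "sad", "angry", "surprised", "neutral"]

@dataclass
class EmotionResult:
    timestamp_s: float  # where the frame sits within the video
    emotion: str        # predicted emotion label
    score: float        # classifier confidence, 0..1

def detect_landmarks(frame: bytes) -> List[Tuple[float, float]]:
    # Stand-in for a face-landmark detector (e.g. a 68-point model).
    return [(random.random(), random.random()) for _ in range(68)]

def classify_expression(landmarks: List[Tuple[float, float]]) -> Tuple[str, float]:
    # Stand-in for the deep-learning classifier that maps landmark
    # geometry to an emotion label with a confidence score.
    return random.choice(EMOTIONS), random.random()

def analyse_video(frames: List[bytes], fps: float = 1.0) -> List[EmotionResult]:
    results = []
    for i, frame in enumerate(frames):
        emotion, score = classify_expression(detect_landmarks(frame))
        results.append(EmotionResult(timestamp_s=i / fps, emotion=emotion, score=score))
    return results

# Three dummy "frames" sampled at one frame per second.
for r in analyse_video([b"", b"", b""]):
    print(f"{r.timestamp_s:5.1f}s  {r.emotion:<9} ({r.score:.2f})")
```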

Tonal recognition provides insight into the way in which people are communicating, advancing analysis beyond ‘what’ people are saying to ‘how’ they are saying it. The tone in which words are spoken can reveal more about how consumers are feeling and adds further context to the spoken word. Tonal analysis delivers an added layer of understanding when used alongside the existing sentiment analysis.
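To make that layering concrete, here is a purely hypothetical sketch of how a tone label might qualify the sentiment of a transcript. The labels and the interpret function are illustrative assumptions, not how LivingLens actually combines the two signals.

```python
# Hypothetical illustration of tone qualifying text sentiment.
from dataclasses import dataclass

@dataclass
class Utterance:
    text: str
    sentiment: str  # from the existing text sentiment analysis
    tone: str       # from tonal recognition, e.g. "enthusiastic", "sarcastic"

def interpret(u: Utterance) -> str:
    # A sarcastic tone inverts what the words alone would suggest;
    # otherwise the tone simply confirms the text sentiment.
    if u.tone == "sarcastic" and u.sentiment == "positive":
        return "negative"
    return u.sentiment

print(interpret(Utterance("Great, just great.", "positive", "sarcastic")))  # -> negative
```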

Object recognition identifies the objects within a video, providing additional context to the content. Objects help to determine where consumers are, be that in a shop, at the airport or in a kitchen, for example, and therefore what they are doing. The LivingLens platform not only highlights the objects that are most prevalent within the content, it also allows users to select from all the objects seen and navigate to where they appear.
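One simple way to picture that navigation feature is an index from each detected object to the timestamps where it appears. The sketch below is an assumed data shape for illustration only; the function and labels are hypothetical, not the platform's output format.

```python
# Hypothetical object index: group detections by label so a user can
# rank objects by prevalence and jump to the moments they appear.
from collections import defaultdict
from typing import Dict, List, Tuple

def build_object_index(detections: List[Tuple[float, str]]) -> Dict[str, List[float]]:
    """Group (timestamp_seconds, object_label) detections by label."""
    index: Dict[str, List[float]] = defaultdict(list)
    for ts, label in detections:
        index[label].append(ts)
    return index

detections = [(1.0, "trolley"), (2.5, "shelf"), (4.0, "trolley"), (9.5, "checkout")]
# Most prevalent objects first; each entry lists the moments to jump to.
for label, times in sorted(build_object_index(detections).items(),
                           key=lambda kv: -len(kv[1])):
    print(label, times)
```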

Carl Wong, CEO of LivingLens, said, “Our mission is to unlock the insight in people’s stories to inspire decisions, and technology is allowing us to accelerate our ability to interpret consumer video content at scale. We are delighted with the latest additions to our existing suite of capabilities, which provide a lens into the all-important emotions of consumers and give additional context to consumers’ content through their surroundings.”

Carl continued, “Historically, video has been challenging to work with, but we are seeing the use of video expand as technology continues to develop and improve, providing high levels of accuracy which previously would have required human intervention. Where once video was limited to small-scale studies, it’s exciting to see projects with large volumes of content which simply weren’t practical before.”

Results from the three new recognition capabilities are time-stamped against the corresponding video content, allowing researchers to quickly and easily pinpoint the exact moments of interest. Results are returned within seconds, making near real-time analysis of video content possible.
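Those time-stamped results can be pictured as three sorted event streams merged into a single timeline a researcher can scan in order. The sketch below is an assumption about the data shape, made purely for illustration; it is not the platform's actual output format.

```python
# Hypothetical merge of three time-stamped result streams into one timeline.
import heapq
from typing import Iterator, List, Tuple

Event = Tuple[float, str, str]  # (timestamp_s, capability, label)

def merged_timeline(*streams: List[Event]) -> Iterator[Event]:
    # Each stream is already sorted by timestamp, so a k-way merge keeps
    # the combined timeline in order without re-sorting everything.
    return heapq.merge(*streams, key=lambda e: e[0])

faces   = [(3.2, "emotion", "surprised"), (8.0, "emotion", "happy")]
tones   = [(3.5, "tone", "excited")]
objects = [(2.0, "object", "kitchen counter"), (8.1, "object", "kettle")]

for ts, capability, label in merged_timeline(faces, tones, objects):
    print(f"{ts:5.1f}s  {capability:<8} {label}")
```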

If you'd like to see the new capabilities in action, please arrange a demo or contact us for further information.
