Technology

AnyClip’s core technology is Luminous™, the world’s first real-time content analysis engine that understands both content and context.

The new LuminousX™ solution is driven by Luminous™ and includes four components:

  • A premium content library with millions of video clips (and thousands analyzed and added daily)

  • A video player driven by a Contextual Matching Engine that matches video to web pages

  • A Contextual Engagement Engine that enables content discovery within the player (learn more below)

  • A Contextual Targeting Engine that conveys content metadata to advertisers for more targeted ad placements

Introducing Luminous™

Luminous™ is the world’s first real-time content analysis engine to automatically cut premium content into clips and then tag, analyze, filter out non-brand-safe clips, and categorize each clip according to official Interactive Advertising Bureau (IAB) categories, sentiments, celebrities, brands, and more.

LUMINOUS™ CLIPS

Luminous™ ingests short or long-form video content. A proprietary clip detection algorithm analyzes the video and leverages patented technology to identify the exact beginning and end timestamps of each clip, effectively cutting any video content into shorter, thematic clips.
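
The exact boundary-detection logic is not described publicly; the sketch below only illustrates the general idea of threshold-based clip cutting, with a hypothetical frame_similarity function and cut threshold standing in for the patented technology.

    # Minimal sketch of threshold-based clip boundary detection.
    # `frame_similarity` and the 0.4 cut threshold are illustrative
    # assumptions, not AnyClip's actual algorithm.
    from typing import Callable, List, Tuple

    def detect_clip_boundaries(
        frames: List,                      # decoded video frames
        timestamps: List[float],           # timestamp (in seconds) of each frame
        frame_similarity: Callable,        # returns 0..1 similarity of two frames
        cut_threshold: float = 0.4,
    ) -> List[Tuple[float, float]]:
        """Return (start, end) timestamps of thematically contiguous clips."""
        clips = []
        clip_start = timestamps[0]
        for prev, curr, ts in zip(frames, frames[1:], timestamps[1:]):
            # A sharp drop in visual similarity is treated as a scene cut.
            if frame_similarity(prev, curr) < cut_threshold:
                clips.append((clip_start, ts))
                clip_start = ts
        clips.append((clip_start, timestamps[-1]))
        return clips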

LUMINOUS™ VIDEO ANALYSIS

Each clip is then processed by Luminous™’s proprietary technology. By applying the most advanced image recognition, deep learning, and speech-to-text technology available, Luminous™ identifies essentially everything in a given scene. It is uniquely capable of going beyond still images to detect video events and actions, and it is flexible enough to learn new models on the fly, enabling the identification of additional video events and actions.

It also identifies anything on screen, including millions of still objects, body parts, food, animals, people (along with gender and age), locations, on-screen text, brand names, and more.
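
Conceptually, each clip’s analysis merges visual detections with transcribed speech into a single set of tags. The sketch below shows only that shape; vision_model and speech_to_text are placeholders, not AnyClip’s internal interfaces.

    # Illustrative per-clip analysis: `vision_model` and `speech_to_text`
    # stand in for the image-recognition and speech-to-text systems.
    def analyze_clip(clip_frames, clip_audio, vision_model, speech_to_text):
        tags = set()

        # Visual tags: objects, people, brands, on-screen text, and so on.
        for frame in clip_frames:
            for detection in vision_model.detect(frame):
                tags.add(detection.label)

        # Spoken tags: transcribe the audio, then keep the individual words.
        transcript = speech_to_text(clip_audio)
        tags.update(transcript.lower().split())

        return {"tags": sorted(tags), "transcript": transcript}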

LUMINOUS™ INSIGHTS

An unlimited number of tags are analyzed by an advanced Natural Language Processing (NLP) engine, then statistically weighted and matched against Luminous™’s proprietary taxonomies (a simplified sketch of this matching follows the list below). Three taxonomies are currently operational with very high accuracy:

  1. Brand Safety
  2. Advertising Category
  3. Sentiment
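
How the weighting and matching actually work is proprietary. As a rough illustration, tag-to-taxonomy matching could be sketched as follows, with invented weights and a keyword-based toy taxonomy:

    # Simplified illustration of weighted tag-to-taxonomy matching.
    # The weights, keywords, and threshold are invented for this example.
    def match_taxonomy(weighted_tags: dict, taxonomy: dict, min_score: float = 1.0):
        """weighted_tags: tag -> statistical weight
        taxonomy: category -> keywords associated with that category
        Returns categories whose accumulated tag weight clears the threshold."""
        scores = {}
        for category, keywords in taxonomy.items():
            scores[category] = sum(
                weight for tag, weight in weighted_tags.items() if tag in keywords
            )
        return {c: s for c, s in scores.items() if s >= min_score}

    # Toy example:
    tags = {"stadium": 0.9, "goal": 0.8, "crowd": 0.4}
    taxonomy = {"Sports": {"stadium", "goal", "referee"}, "Travel": {"airport", "hotel"}}
    print(match_taxonomy(tags, taxonomy))   # {'Sports': 1.7}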

1. Brand Safety

The brand safety taxonomy checks each clip’s tags against 14 brand safety violation categories, such as nudity, profanity, or drugs. Each clip is marked as either safe or unsafe, along with the reasons for concern (a minimal sketch of this check follows the list below). The 14 categories are:

  • Accidents
  • Alcohol
  • Crime
  • Death
  • Disaster
  • Drugs
  • Firearms
  • Gambling
  • Mature - Explicit
  • Mature - Suggestive
  • Negative News
  • Profanity & Hate Speech
  • Tobacco
  • War & Terror

2. Advertising Category

For each clip, the engine identifies the several advertising categories that most accurately characterize the content. AnyClip’s taxonomy follows the latest official IAB Tech Lab Content Taxonomy and includes all three category tiers: primary, secondary, and tertiary.
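
As an illustration, a clip’s category assignment might be represented as a list of entries carrying all three tiers; the field names and category labels below are examples, not AnyClip’s actual output format.

    # Hypothetical shape of a clip's advertising-category result; each entry
    # carries the primary, secondary, and tertiary IAB tiers (tertiary may be
    # empty when no deeper category applies).
    clip_categories = [
        {"primary": "Sports", "secondary": "Soccer", "tertiary": None, "confidence": 0.92},
        {"primary": "Events & Attractions", "secondary": "Sporting Events", "tertiary": None, "confidence": 0.58},
    ]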

3. Sentiment Analysis

The Luminous™ sentiment analysis is based on the prototype approach proposed by Professor Phillip Shaver and his colleagues in the Journal of Personality and Social Psychology. At the basic level of their emotion hierarchy are six concepts (love, joy, anger, sadness, fear, and surprise) that are most useful for making everyday distinctions among emotions. The taxonomy pairs each of these six primary emotions with one of 24 secondary emotions (secondary emotions for Joy, for example, include Cheerfulness and Zest).
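
A minimal way to picture that hierarchy is a mapping from each primary emotion to its secondary emotions; only the Joy branch is filled in here, using the two secondary emotions named above.

    # Partial, illustrative emotion hierarchy: six primary emotions, each
    # mapped to its secondary emotions (only Joy's branch is populated here).
    EMOTION_HIERARCHY = {
        "Love": [],
        "Joy": ["Cheerfulness", "Zest"],
        "Anger": [],
        "Sadness": [],
        "Fear": [],
        "Surprise": [],
    }

    def sentiment_label(primary: str, secondary: str) -> str:
        assert secondary in EMOTION_HIERARCHY.get(primary, []), "unknown pairing"
        return f"{primary} / {secondary}"

    print(sentiment_label("Joy", "Zest"))   # Joy / Zest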

LuminousX™ Contextual Matching

LuminousX compares video metadata with web page metadata, finds optimal matches, and automatically enriches any page with relevant premium video content from AnyClip’s library.

Page Analysis

LuminousX applies advanced machine learning and Natural Language Processing (NLP) to parse any article and identify key people, categories, brands, dates, and topics, thereby “understanding” the article.
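
As a rough illustration of this kind of page analysis, the sketch below uses the open-source spaCy library as a stand-in; it is not AnyClip’s actual NLP stack, and the entity labels shown are spaCy’s, not AnyClip’s taxonomy.

    # Illustrative article parsing with spaCy (a stand-in, not AnyClip's engine).
    # Requires: pip install spacy && python -m spacy download en_core_web_sm
    import spacy

    nlp = spacy.load("en_core_web_sm")

    def analyze_article(text: str) -> dict:
        doc = nlp(text)
        return {
            "people": [e.text for e in doc.ents if e.label_ == "PERSON"],
            "brands": [e.text for e in doc.ents if e.label_ == "ORG"],
            "dates": [e.text for e in doc.ents if e.label_ == "DATE"],
            "topics": sorted({chunk.text.lower() for chunk in doc.noun_chunks}),
        }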

Video Search

LuminousX then searches millions of clips that have already been analyzed by Luminous and finds clips about similar key people, categories, brands, and topics within a pre-defined timeframe. Each clip’s relevance is statistically weighted and ranked by a proprietary formula.
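
The actual weighting formula is proprietary. A simplified sketch of entity-overlap scoring, with made-up field weights and a 30-day recency window, might look like this:

    from datetime import datetime, timedelta

    # Illustrative relevance scoring: the field weights and the 30-day window
    # are assumptions, not AnyClip's proprietary formula.
    FIELD_WEIGHTS = {"people": 3.0, "brands": 2.0, "categories": 1.5, "topics": 1.0}

    def score_clip(page_meta: dict, clip_meta: dict, max_age_days: int = 30) -> float:
        if datetime.utcnow() - clip_meta["published"] > timedelta(days=max_age_days):
            return 0.0  # outside the pre-defined timeframe
        score = 0.0
        for field, weight in FIELD_WEIGHTS.items():
            overlap = set(page_meta.get(field, [])) & set(clip_meta.get(field, []))
            score += weight * len(overlap)
        return score

    def rank_clips(page_meta: dict, clips: list) -> list:
        return sorted(clips, key=lambda c: score_clip(page_meta, c), reverse=True)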

Content Match

The highest-scoring clips play automatically. When a given article changes or when AnyClip’s content library is updated with new clips, both are analyzed again, and if better matches are found, the dynamic playlist is updated accordingly (a minimal sketch follows below).
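
Reusing the hypothetical scoring helpers above, the refresh step can be pictured as re-ranking the library and swapping the playlist only when a better set of matches appears; this is a sketch of the behavior described here, not the production logic.

    # Illustrative refresh: re-score whenever the article or the clip library
    # changes, and update the playlist only if the top matches have changed.
    def refresh_playlist(page_meta, clip_library, current_playlist, top_n=5):
        candidates = rank_clips(page_meta, clip_library)[:top_n]
        if candidates != current_playlist:
            return candidates        # better matches found: update the playlist
        return current_playlist      # otherwise keep the existing clips playing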
