Technology

At the core of AnyClip’s Content Platform are three proprietary technologies developed over several years by our team of world-class deep learning experts: Luminous™, the world’s first real-time content analysis engine, which understands content and context; the Luminous Recommendation Engine (LRE™), which compares video metadata with web page metadata and automatically enriches any page with highly relevant premium video content; and a proprietary Content Monetization Engine, which conveys content metadata to advertisers for more targeted ad placements.

Introducing Luminous™

Luminous™ is the world’s first real-time content analysis engine that automatically cuts premium content into clips and then tags, analyzes, filters out non-brand-safe clips, and categorizes each clip according to official Interactive Advertising Bureau (IAB) categories, sentiments, celebrities, brands, and more.

LUMINOUS™ CLIPS

Luminous™ ingests short or long-form video content. A proprietary clip detection algorithm analyzes the video and leverages patented technology to identify the exact beginning and end timestamps of each clip, effectively cutting any video content into shorter, thematic clips.
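The actual clip-detection algorithm is proprietary and patented, so as a rough illustration only, boundary detection of this kind can be sketched by thresholding the change between consecutive per-frame signatures (here a single brightness value per frame stands in for a real signature):

```python
# Illustrative sketch only, not AnyClip's algorithm: find clip boundaries by
# thresholding the difference between consecutive per-frame signatures.

def detect_clips(frame_signatures, fps=30, threshold=0.5):
    """Return (start_sec, end_sec) pairs, one per detected clip."""
    cuts = [0]
    for i in range(1, len(frame_signatures)):
        # A large inter-frame change suggests a new clip starts here.
        if abs(frame_signatures[i] - frame_signatures[i - 1]) > threshold:
            cuts.append(i)
    cuts.append(len(frame_signatures))
    return [(start / fps, end / fps) for start, end in zip(cuts, cuts[1:])]

# Two abrupt brightness changes => three clips.
frames = [0.1] * 60 + [0.9] * 90 + [0.2] * 30
print(detect_clips(frames))  # [(0.0, 2.0), (2.0, 5.0), (5.0, 6.0)]
```

A production system would of course use richer visual, audio, and semantic signals rather than a single scalar per frame.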

LUMINOUS™ VIDEO TAGGING

Each clip is then processed by Luminous™’s proprietary technology. By applying advanced image recognition, deep learning, and speech-to-text, Luminous™ identifies virtually everything in a given scene. It goes beyond still images to detect video events and actions, and is flexible enough to learn new models quickly, enabling the identification of additional video events and actions. It also identifies anything on screen, including millions of still objects, body parts, food, animals, people, gender, age, locations, text, brand names, and more.
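Conceptually, tags from the different recognizers (vision, speech-to-text, and so on) must be merged into one tag set per clip. A minimal sketch, assuming each model simply emits (tag, confidence) pairs and the merge keeps the highest confidence per tag:

```python
# Hedged sketch: each "model" below is a stub returning (tag, confidence)
# pairs; the merge keeps the highest confidence per tag across all sources.

def merge_tags(*tag_sources):
    merged = {}
    for source in tag_sources:
        for tag, conf in source:
            merged[tag] = max(conf, merged.get(tag, 0.0))
    # Sort by descending confidence for downstream weighting.
    return sorted(merged.items(), key=lambda kv: -kv[1])

vision_tags = [("dog", 0.97), ("park", 0.81)]   # invented example output
speech_tags = [("dog", 0.60), ("fetch", 0.88)]  # invented example output
print(merge_tags(vision_tags, speech_tags))
# [('dog', 0.97), ('fetch', 0.88), ('park', 0.81)]
```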

LUMINOUS™ INSIGHTS

An unlimited number of tags is analyzed by an advanced Natural Language Processing (NLP) engine, then statistically weighted and matched against Luminous™’s proprietary taxonomies. Three taxonomies are currently operational with very high accuracy:

  1. Brand Safety
  2. Advertising Category
  3. Sentiment
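The weighting-and-matching step can be sketched as scoring each taxonomy node by the summed weights of the tags it covers. This is an illustration only; the tag weights and the taxonomy keyword lists below are invented, and the real statistical weighting is proprietary:

```python
# Toy sketch of matching weighted tags against a taxonomy. The taxonomy
# fragment and tag weights are invented for illustration.

def taxonomy_score(weighted_tags, taxonomy):
    """Sum the weights of the tags that appear in each taxonomy node's keyword set."""
    return {
        node: sum(w for tag, w in weighted_tags.items() if tag in keywords)
        for node, keywords in taxonomy.items()
    }

tags = {"beach": 0.5, "hotel": 0.25, "engine": 0.25}
taxonomy = {
    "Travel": {"beach", "hotel", "flight"},
    "Automotive": {"engine", "car", "wheel"},
}
print(taxonomy_score(tags, taxonomy))  # {'Travel': 0.75, 'Automotive': 0.25}
```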

1. Brand Safety

The brand safety taxonomy analyzes tags against 14 brand safety violations, such as profanity, firearms, or drugs. Each clip is marked as either safe or unsafe, along with the reasons for concern:

  • Accidents
  • Alcohol
  • Crime
  • Death
  • Disaster
  • Drugs
  • Firearms
  • Gambling
  • Mature - Explicit
  • Mature - Suggestive
  • Negative News
  • Profanity & Hate Speech
  • Tobacco
  • War & Terror
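The safe/unsafe decision described above can be sketched as a lookup of clip tags against per-category keyword sets. The mapping from tags to violation categories below is an invented fragment, not AnyClip’s:

```python
# Sketch of the safe/unsafe decision: a clip is unsafe if any of its tags
# falls into a violation category. The tag sets here are invented examples.

VIOLATION_TAGS = {
    "Alcohol": {"beer", "wine", "vodka"},
    "Firearms": {"gun", "rifle"},
    "Gambling": {"casino", "poker"},
}

def brand_safety(clip_tags):
    reasons = sorted(
        category for category, tags in VIOLATION_TAGS.items()
        if tags & set(clip_tags)  # any overlap triggers the category
    )
    return ("unsafe", reasons) if reasons else ("safe", [])

print(brand_safety(["beach", "wine", "poker"]))  # ('unsafe', ['Alcohol', 'Gambling'])
print(brand_safety(["beach", "sunset"]))         # ('safe', [])
```

Returning the triggering categories, not just a boolean, matches the requirement that each clip carries its reasons for concern.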

2. Advertising Category

The engine identifies the advertising categories that most accurately characterize the content in each clip. AnyClip’s taxonomy follows the latest official IAB Tech Lab Content Taxonomy and includes three levels of categories: primary, secondary, and tertiary. Each clip is ultimately matched with categories such as “Travel” or “Automotive” from the IAB’s primary shortlist of 29 categories.
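The three-level matching can be sketched as a walk down a category tree. The tiny tree below is an invented fragment for illustration, not the actual IAB Tech Lab taxonomy:

```python
# Hedged sketch: match a clip to a (primary, secondary, tertiary) category
# path. The category tree is an invented fragment, not the real taxonomy.

IAB_TREE = {
    "Travel": {"Travel Type": {"Beach Travel", "Adventure Travel"}},
    "Automotive": {"Auto Type": {"Luxury Cars", "Motorcycles"}},
}

def match_category(clip_keywords):
    """Return the first (primary, secondary, tertiary) path hit by a keyword."""
    for primary, secondaries in IAB_TREE.items():
        for secondary, tertiaries in secondaries.items():
            for tertiary in tertiaries:
                if tertiary.lower() in clip_keywords:
                    return (primary, secondary, tertiary)
    return None

print(match_category({"beach travel", "sunset"}))
# ('Travel', 'Travel Type', 'Beach Travel')
```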

3. Sentiment Analysis

The Luminous™ sentiment analysis is based on the prototype approach proposed by Professor Phillip Shaver and colleagues in the Journal of Personality and Social Psychology. At the basic level of the emotion hierarchy are six concepts (love, joy, anger, sadness, fear, and surprise) that are most useful for making everyday distinctions among emotions. The taxonomy pairs each of these six primary emotions with one of 24 secondary emotions (secondary emotions for Joy, for example, include Cheerfulness and Zest).
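The two-level hierarchy lends itself to a simple lookup from a detected secondary emotion back to its primary emotion. Only a few of the 24 secondary emotions are shown below, and the groupings are illustrative:

```python
# Sketch of the two-level emotion hierarchy: each secondary emotion rolls up
# to one of the six primary emotions. Only a small invented subset is shown.

EMOTION_HIERARCHY = {
    "Joy": {"Cheerfulness", "Zest", "Contentment"},
    "Sadness": {"Disappointment", "Shame", "Suffering"},
}

def classify_sentiment(secondary_emotion):
    """Map a detected secondary emotion to its (primary, secondary) pair."""
    for primary, secondaries in EMOTION_HIERARCHY.items():
        if secondary_emotion in secondaries:
            return (primary, secondary_emotion)
    return None

print(classify_sentiment("Zest"))  # ('Joy', 'Zest')
```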

Introducing the Luminous Recommendation Engine (LRE™)

Powered by Luminous™, LRE™ compares video metadata with web page metadata, finds optimal matches, and automatically enriches any page with relevant premium video content from AnyClip’s library.

Page Analysis

LRE™ applies advanced machine learning and Natural Language Processing (NLP) to parse any article and identify key people, categories, brands, dates, and topics, thereby “understanding” the article.
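Real page analysis relies on ML/NLP models, but the extraction step can be illustrated with a toy gazetteer lookup. The entity lists below are invented for the sketch:

```python
# Illustrative sketch only: "understand" an article by extracting known
# entities from a small invented gazetteer of people, brands, and topics.

GAZETTEER = {
    "people": {"serena williams", "roger federer"},
    "brands": {"nike", "wilson"},
    "topics": {"tennis", "wimbledon"},
}

def analyze_page(article_text):
    text = article_text.lower()
    return {
        kind: sorted(e for e in entities if e in text)
        for kind, entities in GAZETTEER.items()
    }

article = "Serena Williams wore Nike at Wimbledon, a highlight for tennis fans."
print(analyze_page(article))
# {'people': ['serena williams'], 'brands': ['nike'], 'topics': ['tennis', 'wimbledon']}
```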

Video Search

LRE™ then searches the hundreds of thousands of clips already analyzed by Luminous™ and finds dozens of clips about similar key people, categories, brands, and topics within a pre-defined timeframe (e.g., fresh content from the last 5 hours or the last 30 days). Each clip’s relevance is statistically weighted and ranked by a proprietary formula.
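The actual ranking formula is proprietary; as a rough sketch under invented assumptions, relevance can be modeled as an entity-overlap score with per-type weights, filtered to the freshness window:

```python
import datetime

# Hedged sketch of search-and-rank: score clips by overlap between page
# entities and clip tags, with invented per-type weights, filtered to a
# pre-defined freshness window. Not AnyClip's actual formula.

WEIGHTS = {"people": 3.0, "brands": 2.0, "topics": 1.0}

def rank_clips(page_entities, clips, max_age_days=30, now=None):
    now = now or datetime.datetime.now()
    scored = []
    for clip in clips:
        if (now - clip["published"]).days > max_age_days:
            continue  # outside the pre-defined timeframe
        score = sum(
            WEIGHTS[kind] * len(set(values) & set(clip["tags"]))
            for kind, values in page_entities.items()
        )
        scored.append((score, clip["id"]))
    return sorted(scored, reverse=True)

page = {"people": ["serena williams"], "topics": ["tennis"]}
clips = [
    {"id": "a", "tags": ["tennis"], "published": datetime.datetime(2024, 1, 20)},
    {"id": "b", "tags": ["serena williams", "tennis"],
     "published": datetime.datetime(2024, 1, 25)},
    {"id": "c", "tags": ["serena williams"],
     "published": datetime.datetime(2023, 11, 1)},  # too old, filtered out
]
print(rank_clips(page, clips, now=datetime.datetime(2024, 1, 31)))
# [(4.0, 'b'), (1.0, 'a')]
```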

Content Match

The highest scoring clips are then streamed in real-time, enriching the page. When a given article is updated or AnyClip’s content library receives new clips, LRE™ analyzes both again, finds better matches, and updates the video content accordingly.
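The continuous re-matching described above amounts to recomputing the best match whenever the article or the clip library changes. A minimal sketch, with a stand-in overlap score in place of the LRE™ ranking:

```python
# Sketch of the re-matching loop: when a fresher, more relevant clip arrives,
# recomputing the match swaps the embedded video. The overlap score is a
# stand-in for the proprietary LRE ranking.

def best_match(article_tags, clips):
    score = lambda clip: len(set(article_tags) & set(clip["tags"]))
    return max(clips, key=score)["id"]

article = ["tennis", "wimbledon"]
library = [{"id": "old", "tags": ["tennis"]}]
print(best_match(article, library))  # 'old'

# A more relevant clip arrives; re-matching updates the page.
library.append({"id": "new", "tags": ["tennis", "wimbledon"]})
print(best_match(article, library))  # 'new'
```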
