AI Research

MIT & Adobe Introduce Real-Time AR Tool for Storytelling

From prehistoric times, when our ancestors traded tales around the fire, storytelling has been an essential element of human communication. Although storytelling today may be complemented by audiovisual aids, pitch decks and PowerPoint presentations, it still comes down to the message and how the speaker delivers it, and non-verbal communication such as posture and gestures can play a huge role in bringing a presentation to life. Researchers from MIT Media Lab and Adobe Research recently introduced a real-time interactive augmented video system that enables presenters to use their bodies as storytelling tools by linking gestures to illustrative virtual graphic elements.

The research paper introduces a fairly simple and customizable user interface for pre-assigning gestures that trigger specific graphical elements, which are then incorporated into the video output in real time. The speaker, positioned in front of an augmented reality mirror monitor, uses gestures to produce and manipulate the pre-programmed graphical elements. For example, a presenter can sweep their arm from left to right to produce and then plot a graph, or a weatherperson could similarly trigger an animated thunderstorm overlay effect.
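The paper describes an authoring workflow rather than a code library, but the core idea, binding a named gesture to a graphical effect ahead of time and then firing that effect when the gesture is recognized during the live performance, can be sketched in a few lines. The sketch below is a minimal illustration and not the authors' implementation; the GestureMapper class, the sweep-detection heuristic, and the wrist-keypoint input format are all assumptions made here for clarity.

```python
# Minimal sketch of gesture-to-graphics authoring and triggering.
# Not the authors' implementation: GestureMapper, the sweep heuristic,
# and the wrist-keypoint format are illustrative assumptions.
from typing import Callable, Dict, List, Tuple

Keypoint = Tuple[float, float]  # normalized (x, y) screen coordinates


class GestureMapper:
    """Authoring interface: pre-assign named gestures to graphic callbacks."""

    def __init__(self) -> None:
        self._bindings: Dict[str, Callable[[], None]] = {}

    def bind(self, gesture: str, effect: Callable[[], None]) -> None:
        self._bindings[gesture] = effect

    def trigger(self, gesture: str) -> None:
        """Performance interface: fire the effect mapped to a recognized gesture."""
        if gesture in self._bindings:
            self._bindings[gesture]()


def detect_sweep(wrist_track: List[Keypoint], min_travel: float = 0.5) -> bool:
    """Toy recognizer: a left-to-right sweep is a monotonic rightward wrist
    trajectory covering at least min_travel of the screen width."""
    xs = [x for x, _ in wrist_track]
    monotonic = all(a <= b for a, b in zip(xs, xs[1:]))
    return monotonic and (xs[-1] - xs[0]) >= min_travel


if __name__ == "__main__":
    mapper = GestureMapper()
    mapper.bind("sweep_right", lambda: print("draw and plot the graph overlay"))

    # Simulated per-frame wrist positions from a pose tracker.
    track = [(0.1, 0.5), (0.3, 0.5), (0.5, 0.5), (0.7, 0.5)]
    if detect_sweep(track):
        mapper.trigger("sweep_right")
```

In the real system the recognizer would run on live pose-estimation output and the callbacks would drive the AR renderer, but the separation between an authoring step (bind) and a performance step (trigger) mirrors the two interfaces the authors describe.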

The researchers identify three main contributions of the study:

• A direct manipulation interface for authoring how input body movements map to output graphical effects.

• An interactive performance interface that applies these mappings in real-time.

• A categorization of gestures and postures based on their capabilities and suitability for various mapping scenarios.

Currently, the system supports graphical elements such as sketches and other images, animated GIFs, and 2D scatter plots. To enable smooth, real-time interaction between the presenter and these elements, the researchers identified a number of key trigger gestures and postures (a rough code sketch follows the list):

  • Pantomimic gestures mimic interactions with a virtual object, for example using both hands to manipulate an object's transformation parameters and deform its shape.
  • Iconic gestures identify information about an object, such as its size, shape and movement.
  • Semaphoric gestures are hand movements and postures that convey specific meanings.
  • Static and dynamic body postures can also be used to specify a command and/or its parameters.
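
To make the taxonomy concrete, the sketch below shows how two of these categories might translate into code: a static semaphoric posture (both hands raised above the head) acts as a discrete command, while a pantomimic two-hand gesture supplies a continuous scale parameter for a virtual object. The keypoint names and thresholds are assumptions made for illustration, not details taken from the paper.

```python
# Illustrative sketch of two gesture categories from the paper's taxonomy.
# Keypoint names and thresholds are assumptions, not details from the paper.
from typing import Dict, Tuple

Keypoint = Tuple[float, float]  # normalized (x, y); y grows downward
Pose = Dict[str, Keypoint]      # e.g. {"left_wrist": (0.2, 0.3), ...}


def hands_raised(pose: Pose) -> bool:
    """Semaphoric static posture as a discrete command: both wrists
    above the head could trigger a pre-assigned effect."""
    head_y = pose["head"][1]
    return pose["left_wrist"][1] < head_y and pose["right_wrist"][1] < head_y


def hand_span_scale(pose: Pose, rest_span: float = 0.2) -> float:
    """Pantomimic gesture as a continuous parameter: the distance between
    the hands scales a virtual object relative to a resting span."""
    (lx, ly), (rx, ry) = pose["left_wrist"], pose["right_wrist"]
    span = ((rx - lx) ** 2 + (ry - ly) ** 2) ** 0.5
    return span / rest_span


if __name__ == "__main__":
    pose = {"head": (0.5, 0.2), "left_wrist": (0.3, 0.1), "right_wrist": (0.7, 0.1)}
    if hands_raised(pose):
        print(f"trigger effect; object scale = {hand_span_scale(pose):.1f}x")
```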

Using basic human non-verbal communication such as posture and gestures to trigger and manipulate virtual graphic elements in real time integrates natural storytelling styles with cutting-edge technology to produce a novel and rich augmented storytelling medium. The research opens up exciting possibilities for enhancing storytelling of all sorts with multiple virtual graphic elements in real time.

The paper Interactive Body-Driven Graphics for Augmented Video Performance is available on HAL archives.


Journalist: Fangyu Cai | Editor: Michael Sarazen
