AI Machine Learning & Data Science Research

NVIDIA’s Wolf: World Summarization Framework Beats GPT-4V on Video Captioning by 55.6%

In the new paper Wolf: Captioning Everything with a World Summarization Framework, a research team introduces the WOrLd summarization Framework (Wolf), an automated captioning framework that significantly advances video captioning over GPT-4V, improving caption quality by 55.6% and similarity by 77.4%.

No Images Are Needed! Allen AI’s CLOSE Learns to Complete Visual Tasks From Text Inputs Alone

In the new paper I Can’t Believe There’s No Images! Learning Visual Tasks Using only Language Data, an Allen Institute for Artificial Intelligence team proposes Cross Modal Transfer On Semantic Embeddings (CLOSE), an approach that learns high-level skills from textual data alone and then transfers those skills to complete vision tasks without any additional visual training data.