Microsoft’s LLaVA-Med Trains a Large Language-and-Vision Assistant for Biomedicine Within 15 Hours

In the new paper LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day, a Microsoft research team proposes the Large Language and Vision Assistant for BioMedicine (LLaVA-Med), which can be trained in less than 15 hours and demonstrates strong multimodal conversational capability, answering open-ended questions about biomedical images.
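To make the architecture concrete, below is a minimal PyTorch sketch of the LLaVA-style design that LLaVA-Med builds on: a pretrained vision encoder produces patch features, which a trainable linear projection maps into the language model's embedding space so the LLM can attend to the image while answering a question. The DummyViT and DummyCausalLM stand-ins and all names here are hypothetical illustrations, not LLaVA-Med's actual code.

```python
import torch
import torch.nn as nn

class DummyViT(nn.Module):
    """Stand-in for a CLIP-style vision tower: patchifies an image
    into a sequence of feature vectors (hypothetical, for illustration)."""
    def __init__(self, dim=768, patch=32):
        super().__init__()
        self.patchify = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)

    def forward(self, pixel_values):
        feats = self.patchify(pixel_values)        # (B, dim, H/32, W/32)
        return feats.flatten(2).transpose(1, 2)    # (B, num_patches, dim)

class DummyCausalLM(nn.Module):
    """Stand-in for the language model; consumes embeddings directly."""
    def __init__(self, dim=512, vocab_size=32000):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)
        self.lm_head = nn.Linear(dim, vocab_size)

    def forward(self, inputs_embeds):
        return self.lm_head(self.blocks(inputs_embeds))

class LlavaStyleAssistant(nn.Module):
    """Bridges the two modalities with a single trainable projection,
    the layer a LLaVA-style alignment stage would update."""
    def __init__(self, vision_encoder, language_model, vision_dim=768, text_dim=512):
        super().__init__()
        self.vision_encoder = vision_encoder
        self.language_model = language_model
        self.projector = nn.Linear(vision_dim, text_dim)

    def forward(self, pixel_values, text_embeds):
        with torch.no_grad():                             # vision tower stays frozen
            patches = self.vision_encoder(pixel_values)   # (B, P, vision_dim)
        image_tokens = self.projector(patches)            # (B, P, text_dim)
        # Prepend the visual "tokens" so the causal LM can attend to the
        # image while generating an answer to the text question.
        fused = torch.cat([image_tokens, text_embeds], dim=1)
        return self.language_model(fused)

model = LlavaStyleAssistant(DummyViT(), DummyCausalLM())
image = torch.randn(1, 3, 224, 224)    # a biomedical image (dummy pixels)
question = torch.randn(1, 16, 512)     # embeddings of a tokenized question
logits = model(image, question)        # (1, 49 + 16, 32000)
```

A large part of why this recipe is cheap to train is that the heavy pretrained components can stay frozen: in LLaVA-style training, an initial alignment stage typically updates only the small projection layer before any fine-tuning of the language model itself.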