Electronics and Computer Science
Multimodal Large Language Model for Human-Centered Robots
This project explores the development of a multimodal large language model that enables robots to understand and respond to humans through vision, language, and other sensory data. By supporting natural, adaptive, and context-aware communication, the research advances the next generation of intelligent, human-centered robotic systems.