AI
Where AI Gets Real
Artificial intelligence is moving beyond digital outputs—it’s entering the physical world. The next frontier is physical intelligence: mastering how systems perceive, interpret, and act in real time. It demands more than just algorithms. It demands precision engineering and a high-fidelity bridge between the physical and digital worlds, built on application-specific models to optimize performance. From sensing and interpreting to decision-making and action, ADI delivers physical intelligence solutions that perform in the most demanding environments, enabling autonomous factories, intelligent robotics, next-generation vehicles, and predictive healthcare systems.

Fine-Tuning Vision-Language Models (VLMs) for Agile Robotics
The idea of agile robotics has long been stalled by a fundamental bottleneck: the cost and time required to collect task-specific training data. ADI is systematically dismantling that bottleneck by leveraging vision-language models (VLMs), whose vast pre-trained knowledge gives robots strong zero-shot performance and human-like contextual reasoning. ADI’s research team adapted these models to new tasks while drastically reducing data dependency and computational overhead... and we’re not stopping there.
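One widely used family of techniques for adapting a large pre-trained model with little data and compute is parameter-efficient fine-tuning. The sketch below illustrates the general idea with a LoRA-style low-rank update in plain NumPy — a minimal, hypothetical example of the technique, not ADI’s implementation: the frozen pre-trained weight stays untouched, and only a small low-rank correction is trained.

```python
import numpy as np

# LoRA-style adaptation sketch: instead of updating a large frozen
# pretrained weight W (d_out x d_in), train only a low-rank correction
# B @ A with rank r << min(d_out, d_in).
rng = np.random.default_rng(0)

d_out, d_in, r = 512, 512, 8
W = rng.standard_normal((d_out, d_in))       # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01    # trainable down-projection
B = np.zeros((d_out, r))                     # trainable up-projection (zero init,
                                             # so the adapted model starts identical
                                             # to the pretrained one)

def adapted_forward(x):
    """Forward pass: frozen base output plus low-rank task-specific update."""
    return W @ x + B @ (A @ x)

full_params = W.size
lora_params = A.size + B.size
print(f"trainable fraction: {lora_params / full_params:.3%}")  # 3.125%
```

Training then updates only `A` and `B` — here roughly 3% of the parameters of the full weight matrix — which is what makes adaptation feasible with small task-specific datasets and modest hardware.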
Featured Highlights
Discover the latest releases, standout content, and insights making an impact across the industry.

As robots gradually become more integrated into daily life—from autonomous vehicles to assistive healthcare devices—one question persists: can we trust them?

How do we ensure a massive robot swarm leverages the latest AI model even where there’s no network infrastructure? ADI made that happen as part of the EU’s OpenSwarm project.

Tactile sensing promises breakthroughs in how robots interact with us and their environment. ADI research is realizing that vision.
Employee Spotlight
Giulia Vilone
Giulia is a research-driven AI specialist with over 15 years of cross-disciplinary experience spanning artificial intelligence, data science, statistics, and actuarial sciences. She holds a PhD in Artificial Intelligence from Technological University Dublin, where she focused on Explainable AI (XAI) and Argumentation. At ADI, she’s developing Vision-Language-Action models for robotics with a focus on visual and depth perception integration, natural language understanding, and action planning. The goal is to bridge the gap between language and action in real-world robotic applications, especially in manufacturing environments.

More to Explore
From in-depth articles to real-world success stories, here’s a curated selection of content you might not have seen yet, but definitely shouldn’t miss.
A new voice recognition approach offers accuracy, adaptability, and efficiency for low-power hardware.
See which scientific papers members of ADI’s team are recommending this July on all things physical AI.
By leveraging NVIDIA Jetson Thor, ADI is further accelerating the development of humanoids and autonomous mobile robots.