The ELSA Physical AI Lab at the University of Electro-Communications (UEC), Japan, focuses on integrating generative AI with robotics (Physical AI), with research centered on quadruped robots and perception models for robotic hands and arms. High-performance simulation and frequent switching between experimental environments are essential to their work.

Photo: Daisuke Ishizaka
Challenges of On-Prem AI Deployment
Like many organizations building on-prem AI infrastructure, ELSA faced three major challenges:
- Cost efficiency of hardware: Large models such as Llama 3.1 70B require significant VRAM, making it difficult to balance performance and budget with traditional hardware setups.
- Software complexity: While AMD ROCm™ provides powerful capabilities, its backend environment configuration creates a steep setup barrier for researchers.
- Underutilized compute resources: Without effective partitioning and management, a high-end GPU is often dedicated to a single task, leaving much of its capacity idle.
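The VRAM pressure behind the first challenge can be seen with simple arithmetic. The sketch below (illustrative only; it counts model weights alone and ignores activations and KV-cache overhead) shows why a 70B-parameter model exceeds a single 32GB GPU at FP16, while 4-bit quantization brings it close to fitting:

```python
def weight_vram_gb(params_billion: float, bytes_per_param: float) -> float:
    """Estimate the VRAM needed just to hold model weights, in GiB."""
    return params_billion * 1e9 * bytes_per_param / (1024 ** 3)

# Llama 3.1 70B at FP16 (2 bytes/param): roughly 130 GiB of weights alone.
fp16 = weight_vram_gb(70, 2.0)
# The same model quantized to 4-bit (0.5 bytes/param): roughly 33 GiB.
int4 = weight_vram_gb(70, 0.5)
print(f"FP16: {fp16:.0f} GiB, 4-bit: {int4:.0f} GiB")
```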

Solution: AI-Stack & AMD Deliver a High-Efficiency Research Environment
ELSA deployed high-performance ELSA VELUGA G5-ND workstations equipped with AMD Radeon™ AI PRO R9700 GPUs (32GB GDDR6), managed through the AI-Stack AI infrastructure and compute orchestration platform:
- Unified compute management: AI-Stack enables direct access to AMD compute resources on the ELSA VELUGA G5-ND, eliminating the need for complex driver and software stack configuration.
- Precise VRAM partitioning: Built-in resource isolation allows the 32GB VRAM to be segmented into multiple independent partitions, enabling concurrent model experiments on a single workstation.
- Instant environment deployment: With containerized orchestration, AI development environments can be spun up within minutes, ensuring uninterrupted robotics research and enabling a “ready-to-develop” workflow.
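AI-Stack's partitioning mechanism is proprietary, but the idea behind the second point can be modeled conceptually. The following toy sketch (a hypothetical illustration, not the AI-Stack API) treats the R9700's 32GB of VRAM as a fixed budget carved into independent slices, so concurrent experiments can be admitted only while the card is not overcommitted:

```python
from dataclasses import dataclass, field

@dataclass
class GpuPartitioner:
    """Toy model of static VRAM partitioning on one GPU (illustrative only)."""
    total_gb: int
    allocations: dict = field(default_factory=dict)

    def allocate(self, job: str, gb: int) -> bool:
        """Admit a job only if its slice fits in the remaining VRAM budget."""
        used = sum(self.allocations.values())
        if used + gb > self.total_gb:
            return False  # would overcommit the card
        self.allocations[job] = gb
        return True

    def release(self, job: str) -> None:
        """Return a job's slice to the free pool."""
        self.allocations.pop(job, None)

# A 32GB card split across concurrent experiments (job names are made up):
gpu = GpuPartitioner(total_gb=32)
assert gpu.allocate("llm-finetune", 16)
assert gpu.allocate("vision-model", 8)
assert gpu.allocate("quadruped-sim", 8)
assert not gpu.allocate("extra-job", 4)  # card is fully partitioned
```

In a real orchestrator the slices would be enforced at the driver or container level rather than by cooperative bookkeeping, but the admission logic is the same: a partition is granted only if it fits the remaining budget.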

Impact: Seamless Transition from LLMs to Robotics Implementation
As noted by Mr. Okada, Head of the Physical AI Division at ELSA:
“With AI-Stack, we can run multiple model experiments simultaneously on a single workstation, significantly improving research efficiency.”
Through AI-Stack’s orchestration capabilities, ELSA successfully transformed high-performance hardware into tangible research output—shortening the path from theory to real-world robotics validation while maintaining cost control and data security.
Conclusion
This collaboration demonstrates that when organizations are freed from the complexity of underlying infrastructure, AI-Stack’s resource isolation and orchestration capabilities unlock the full potential of on-prem GPUs—turning hardware investment into measurable research productivity.