{"id":13135,"date":"2026-05-12T17:17:34","date_gmt":"2026-05-12T09:17:34","guid":{"rendered":"https:\/\/ai-stack.ai\/?p=13135"},"modified":"2026-05-13T14:25:23","modified_gmt":"2026-05-13T06:25:23","slug":"https-ai-stack-ai-zh-hant-elsa-ai-lab-with-infinitix","status":"publish","type":"post","link":"https:\/\/ai-stack.ai\/en\/https-ai-stack-ai-zh-hant-elsa-ai-lab-with-infinitix","title":{"rendered":"Maximizing On-Prem Compute Value: AI-Stack and AMD Accelerate Physical AI Research at ELSA Lab"},"content":{"rendered":"\n<p class=\"wp-block-paragraph\">The ELSA Physical AI Lab at the University of Electro-Communications (UEC), Japan, focuses on integrating generative AI with robotics (Physical AI), with research centered on quadruped robots and perception models for robotic hands and arms. High-performance simulation and frequent switching between experimental environments are essential to their work.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img data-recalc-dims=\"1\" fetchpriority=\"high\" decoding=\"async\" width=\"640\" height=\"427\" data-attachment-id=\"13136\" data-permalink=\"https:\/\/ai-stack.ai\/en\/https-ai-stack-ai-zh-hant-elsa-ai-lab-with-infinitix\/image-22\" data-orig-file=\"https:\/\/i0.wp.com\/ai-stack.ai\/wp-content\/uploads\/2026\/05\/image-3868eb26-1.png?fit=640%2C427&amp;quality=100&amp;ct=202603031250&amp;ssl=1\" data-orig-size=\"640,427\" data-comments-opened=\"0\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}\" data-image-title=\"image\" data-image-description=\"\" data-image-caption=\"\" 
data-medium-file=\"https:\/\/i0.wp.com\/ai-stack.ai\/wp-content\/uploads\/2026\/05\/image-3868eb26-1.png?fit=300%2C200&amp;quality=100&amp;ct=202603031250&amp;ssl=1\" data-large-file=\"https:\/\/i0.wp.com\/ai-stack.ai\/wp-content\/uploads\/2026\/05\/image-3868eb26-1.png?fit=640%2C427&amp;quality=100&amp;ct=202603031250&amp;ssl=1\" src=\"https:\/\/i0.wp.com\/ai-stack.ai\/wp-content\/uploads\/2026\/05\/image-3868eb26-1.png?resize=640%2C427&#038;quality=100&#038;ct=202603031250&#038;ssl=1\" alt=\"\" class=\"wp-image-13136\"\/><\/figure>\n\n\n\n<p class=\"wp-block-paragraph\">Photo: Daisuke Ishizaka<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Challenges of On-Prem AI Deployment<\/strong><\/h3>\n\n\n\n<p class=\"wp-block-paragraph\">Like many organizations building on-prem AI infrastructure, ELSA faced three major challenges:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Cost efficiency of hardware<\/strong>: Large models such as Llama 3.1 70B require significant VRAM, making it difficult to balance performance and budget with traditional hardware setups.<\/li>\n\n\n\n<li><strong>Software complexity<\/strong>: While AMD ROCm\u2122 provides powerful capabilities, its backend environment configuration creates a steep setup barrier for researchers.<\/li>\n\n\n\n<li><strong>Underutilized compute resources<\/strong>: Without effective partitioning and management, high-end GPUs are often dedicated to a single task, resulting in low utilization efficiency.<\/li>\n<\/ul>\n\n\n\n<figure class=\"wp-block-image size-full\"><img data-recalc-dims=\"1\" decoding=\"async\" width=\"640\" height=\"427\" src=\"https:\/\/i0.wp.com\/ai-stack.ai\/wp-content\/uploads\/2026\/05\/image-6d0ad5d8.png?resize=640%2C427&#038;quality=100&#038;ct=202603031250&#038;ssl=1\" alt=\"\" class=\"wp-image-13130\"\/><\/figure>\n\n\n\n<p class=\"wp-block-paragraph\">Photo: Daisuke Ishizaka<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Solution: AI-Stack and AMD Deliver a High-Efficiency 
Research Environment<\/strong><\/h3>\n\n\n\n<p class=\"wp-block-paragraph\">ELSA deployed high-performance ELSA VELUGA G5-ND workstations equipped with AMD Radeon\u2122 AI PRO R9700 GPUs (32GB GDDR6), managed through the AI-Stack AI infrastructure and compute orchestration platform:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Unified compute management<\/strong>: AI-Stack enables direct access to AMD compute resources on the ELSA VELUGA G5-ND, eliminating the need for complex driver and software stack configuration.<\/li>\n\n\n\n<li><strong>Precise VRAM partitioning<\/strong>: Built-in resource isolation allows the 32GB VRAM to be segmented into multiple independent partitions, enabling concurrent model experiments on a single workstation.<\/li>\n\n\n\n<li><strong>Instant environment deployment<\/strong>: With containerized orchestration, AI development environments can be spun up within minutes, ensuring uninterrupted robotics research and enabling a \u201cready-to-develop\u201d workflow.<\/li>\n<\/ul>\n\n\n\n<figure class=\"wp-block-image size-full\"><img data-recalc-dims=\"1\" decoding=\"async\" width=\"640\" height=\"427\" src=\"https:\/\/i0.wp.com\/ai-stack.ai\/wp-content\/uploads\/2026\/05\/image-7beb122e.png?resize=640%2C427&#038;quality=100&#038;ct=202603031250&#038;ssl=1\" alt=\"\" class=\"wp-image-13142\"\/><\/figure>\n\n\n\n<p class=\"wp-block-paragraph\">Photo: Daisuke Ishizaka<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Impact: Seamless Transition from LLMs to Robotics Implementation<\/strong><\/h3>\n\n\n\n<p class=\"wp-block-paragraph\">As noted by Mr. 
Okada, Head of the Physical AI Division at ELSA:<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">\u201cWith AI-Stack, we can run multiple model experiments simultaneously on a single workstation, significantly improving research efficiency.\u201d<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Through AI-Stack\u2019s orchestration capabilities, ELSA successfully transformed high-performance hardware into tangible research output\u2014shortening the path from theory to real-world robotics validation while maintaining cost control and data security.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Conclusion<\/strong><\/h3>\n\n\n\n<p class=\"wp-block-paragraph\">This collaboration demonstrates that when organizations are freed from the complexity of underlying infrastructure, AI-Stack\u2019s resource isolation and orchestration capabilities unlock the full potential of on-prem GPUs\u2014turning hardware investment into measurable research productivity.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Related Articles<\/strong>:<\/h3>\n\n\n\n<p class=\"wp-block-paragraph\"><a href=\"https:\/\/www.gizmodo.jp\/2026\/03\/amd-radeon-ai-pro-r9700.html\" data-type=\"link\" data-id=\"https:\/\/www.gizmodo.jp\/2026\/03\/amd-radeon-ai-pro-r9700.html\" target=\"_blank\" rel=\"noopener\">Asking an AI expert: &#8220;What specifications are necessary to run AI locally?&#8221;<\/a><\/p>\n\n\n\n<p class=\"wp-block-paragraph\"><a href=\"https:\/\/cgworld.jp\/regular\/202603-physicalai-003.html\" target=\"_blank\" rel=\"noopener\">Vol. 003: The Current State of Physical AI Research and Development as Seen at &#8220;ELSA Physical AI Lab&#8221;<\/a><\/p>\n\n\n\n<p class=\"wp-block-paragraph\"><\/p>\n","protected":false},"excerpt":{"rendered":"<p>The ELSA Physical AI Lab at the University of Electro-Communications (UEC), Japan, focuses on integrating generative AI with robotics (Physical AI), with research centered on quadruped robots and perception models for robotic hands and arms. 
High-performance simulation and frequent switching between experimental environments are essential to their work. Photo: Daisuke Ishizaka Challenges of On-Prem AI Deployment Like many organizations building on-prem AI infrastructure, ELSA faced three major challenges: Photo: Daisuke Ishizaka Solution: AI-Stack and AMD Deliver a High-Efficiency Research Environment ELSA deployed high-performance ELSA VELUGA G5-ND workstations equipped with AMD Radeon\u2122 AI PRO R9700 GPUs (32GB GDDR6), managed through the AI-Stack AI infrastructure&#8230;<\/p>\n","protected":false},"author":253372388,"featured_media":13125,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"_crdt_document":"","jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[96987590,96987597],"tags":[96987735],"class_list":["post-13135","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-customer-success-stories","category-academic-research-and-education","tag-amd-en"],"blocksy_meta":[],"acf":[],"jetpack_featured_media_url":"https:\/\/i0.wp.com\/ai-stack.ai\/wp-content\/uploads\/2026\/05\/260512_ELSA%E6%88%90%E5%8A%9F%E6%A1%88%E4%BE%8B%E6%A8%A1%E6%9D%BF-062fe203-scaled.jpg?fit=2560%2C1440&quality=100&ct=202603031250&ssl=1","jetpack_shortlink":"https:\/\/wp.me\/ph344V-3pR","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/ai-stack.ai\/en\/wp-json\/wp\/v2\/posts\/13135","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ai-stack.ai\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/ai-stack.ai\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/ai-stack.ai\/en\/wp-json\/wp\/v2\/users\/253372388"}],"re
plies":[{"embeddable":true,"href":"https:\/\/ai-stack.ai\/en\/wp-json\/wp\/v2\/comments?post=13135"}],"version-history":[{"count":3,"href":"https:\/\/ai-stack.ai\/en\/wp-json\/wp\/v2\/posts\/13135\/revisions"}],"predecessor-version":[{"id":13178,"href":"https:\/\/ai-stack.ai\/en\/wp-json\/wp\/v2\/posts\/13135\/revisions\/13178"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/ai-stack.ai\/en\/wp-json\/wp\/v2\/media\/13125"}],"wp:attachment":[{"href":"https:\/\/ai-stack.ai\/en\/wp-json\/wp\/v2\/media?parent=13135"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/ai-stack.ai\/en\/wp-json\/wp\/v2\/categories?post=13135"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/ai-stack.ai\/en\/wp-json\/wp\/v2\/tags?post=13135"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}