{"id":9902,"date":"2025-05-02T15:11:30","date_gmt":"2025-05-02T07:11:30","guid":{"rendered":"https:\/\/ai-stack.ai\/?p=9902"},"modified":"2025-05-02T15:19:22","modified_gmt":"2025-05-02T07:19:22","slug":"blackwell-vs-mi300x","status":"publish","type":"post","link":"https:\/\/ai-stack.ai\/en\/blackwell-vs-mi300x","title":{"rendered":"MI300X\u202fvs Blackwell\u202f: Who Will Wear the 2025\u201126 \u201cLLM\u202fGPU Crown\u201d?"},"content":{"rendered":"\n<p class=\"wp-block-paragraph\"><strong>TL;DR\u202f\u2014<\/strong> If your key metric is raw tokens\u2011per\u2011second at the lowest latency, NVIDIA\u202fBlackwell is in a league of its own. If total cost of ownership, power draw, and \u201cone\u2011card\u2011per\u2011model\u201d convenience top the list, AMD\u2019s Instinct\u202fMI300X delivers unbeatable bang for the buck. In most real deployments you\u2019ll end up blending both\u2014unless the KPIs and the budget clearly point one way.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>1. 
Why 2025 Became a Two\u2011Horse Race<\/strong><\/h2>\n\n\n\n<p class=\"wp-block-paragraph\">Ever since Hopper H100 swept the market in 2023, two forces have kept GPU vendors on an arms race trajectory:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Context windows exploded<\/strong>\u2014OpenAI\u2019s GPT\u20114.1 now accepts <strong>one\u2011million\u2011token<\/strong> prompts, soaking up terabytes\u2011per\u2011second of memory bandwidth.<a href=\"https:\/\/blogs.nvidia.com\/blog\/blackwell-mlperf-inference\/?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noopener\"> NVIDIA Blog<br><\/a><\/li>\n\n\n\n<li><strong>Open\u2011weight adoption soared<\/strong>\u2014Meta\u2019s Llama family passed <strong>1.2\u202fbillion<\/strong> downloads, pushing companies to run LLMs in\u2011house for privacy and to dodge rising API bills.<br><\/li>\n<\/ul>\n\n\n\n<p class=\"wp-block-paragraph\">To serve these diverging appetites, hardware vendors forked into two distinct philosophies:<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><strong>Direction<\/strong><\/td><td><strong>Motto<\/strong><\/td><td><strong>Champion<\/strong><\/td><\/tr><tr><td><strong>Bigger &amp; faster<\/strong><\/td><td><em>\u201cShrink a supercomputer into a single card.\u201d<\/em><\/td><td><strong>NVIDIA\u202fBlackwell\u202fB200<\/strong><\/td><\/tr><tr><td><strong>Denser &amp; thriftier<\/strong><\/td><td><em>\u201cFit an entire GPT\u20113\u2011class model on one GPU.\u201d<\/em><\/td><td><strong>AMD\u202fInstinct\u202fMI300X<\/strong><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p class=\"wp-block-paragraph\">Understanding their contrasting DNA is the key to an informed purchase\u2014or a click\u2011worthy blog post.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>2. 
Architecture Deep\u2011Dive<\/strong><\/h2>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/lh7-rt.googleusercontent.com\/docsz\/AD_4nXdoBqKFSsEmjzgh09YfaQa_BgCOUInw0Eh3kOOEZNWx2AWi7L7NCC30XQOZ3iLHrsAshpMZ-k3SwuVf2o9MOFP2dfzPDhLZewmlfz6GXf0k8vcMapezfbYxx8VXhXZriQr9ZDaz-w?key=n7rC6w9f6h9HnS6ed0XuXcsh\" alt=\"\"\/><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>2.1 NVIDIA\u202fBlackwell\u202fB200<\/strong><\/h3>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><strong>Spec<\/strong><\/td><td><strong>Value<\/strong><\/td><\/tr><tr><td>Process<\/td><td>TSMC\u202f4N, dual\u2011die CoWoS<\/td><\/tr><tr><td>Transistors<\/td><td><strong>208\u202fbillion<\/strong><\/td><\/tr><tr><td>Memory<\/td><td><strong>192\u202fGB HBM3E<\/strong>, 8\u202fTB\/s<\/td><\/tr><tr><td>Peak AI<\/td><td><strong>40\u202fPFLOPS<\/strong> (FP4), 20\u202fPFLOPS (FP8)<\/td><\/tr><tr><td>Interconnect<\/td><td>NVLink\u20115 @\u202f1.8\u202fTB\/s\u202fper card<\/td><\/tr><tr><td>Board Power<\/td><td>\u2248\u202f1\u202fkW<\/td><\/tr><tr><td>Street Price*<\/td><td>US$30\u202fk\u202f\u2013\u202f40\u202fk<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p class=\"wp-block-paragraph\">*Prices are typical hyperscaler or OEM quotes, not official MSRP.<a href=\"https:\/\/datacrunch.io\/blog\/nvidia-blackwell-b100-b200-gpu?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noopener\">&nbsp;<\/a><\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Blackwell\u2019s headline act is <strong>FP4<\/strong>\u2014a 4\u2011bit floating\u2011point format that keeps accuracy within 1% of FP8 yet doubles throughput. 
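<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">To get a feel for what a 4\u2011bit floating\u2011point grid does to model weights, the sketch below simulates an E2M1\u2011style FP4 quantizer in NumPy and measures the round\u2011trip error on random weights. This illustrates the numeric format only; the per\u2011tensor scaling rule is a simplifying assumption, not NVIDIA\u2019s actual Transformer Engine implementation.<\/p>\n\n\n\n

```python
import numpy as np

# Representable magnitudes of an E2M1-style FP4 format
# (1 sign bit, 2 exponent bits, 1 mantissa bit).
FP4_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def fp4_quantize(w):
    # Per-tensor scale so the largest |weight| maps to the grid's top value.
    scale = np.abs(w).max() / FP4_GRID.max()
    scaled = np.abs(w) / scale
    # Snap each magnitude to the nearest representable FP4 value.
    idx = np.abs(scaled[..., None] - FP4_GRID).argmin(axis=-1)
    return np.sign(w) * FP4_GRID[idx] * scale

rng = np.random.default_rng(0)
w = rng.normal(size=4096).astype(np.float32)
wq = fp4_quantize(w)
rel_err = np.linalg.norm(w - wq) / np.linalg.norm(w)
print(f'relative error after FP4 round-trip: {rel_err:.3f}')
```

\n\n\n\n<p class=\"wp-block-paragraph\">On Gaussian weights this naive per\u2011tensor scheme leaves roughly 10\u202f% relative error on the raw values; production FP4 pipelines recover accuracy with much finer\u2011grained (per\u2011block) scale factors, which is how Blackwell keeps end\u2011task quality close to FP8.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">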
NVLink\u20115 stitches up to <strong>72\u202fGPUs into a \u201csingle logical GPU\u201d (GB200 NVL72)<\/strong>, giving model trainers up to 1.4\u202fEFLOPS of FP4 compute over a single unified memory domain.<a href=\"https:\/\/developer.nvidia.com\/blog\/nvidia-blackwell-delivers-massive-performance-leaps-in-mlperf-inference-v5-0\/?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noopener\"> NVIDIA Developer<\/a><\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/lh7-rt.googleusercontent.com\/docsz\/AD_4nXfwazsYLjbmnksLPlaM6cQ227lO-_zpIeJ27d30tmdFi0sVGDSQ9Uht6KmwpzQZFGSuZtedxGulOS2pRF4uh9XtnJIvmziFVM_BIjoiGxmqB8878k3SOMvBIt1ncwU185T2-nHLlQ?key=n7rC6w9f6h9HnS6ed0XuXcsh\" alt=\"\"\/><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>2.2 AMD\u202fInstinct\u202fMI300X<\/strong><\/h3>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><strong>Spec<\/strong><\/td><td><strong>Value<\/strong><\/td><\/tr><tr><td>Process<\/td><td>5\u202fnm + 6\u202fnm CDNA\u202f3 chiplets<\/td><\/tr><tr><td>Memory<\/td><td><strong>192\u202fGB HBM3<\/strong>, 5.3\u202fTB\/s<\/td><\/tr><tr><td>Peak AI<\/td><td><strong>2.6\u202fPFLOPS<\/strong> (FP8)<\/td><\/tr><tr><td>Board Power<\/td><td>750\u202fW (OAM module)<\/td><\/tr><tr><td>Street Price*<\/td><td>US$10\u202fk\u202f\u2013\u202f15\u202fk<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p class=\"wp-block-paragraph\">MI300X pairs eight 3D\u2011stacked GPU chiplets with eight 12\u2011high HBM3 stacks on a single package. The result: the <strong>same 192\u202fGB footprint<\/strong> at just three\u2011quarters the power\u2014and roughly one\u2011third the price\u2014of a Blackwell card.<a href=\"https:\/\/www.amd.com\/en\/products\/accelerators\/instinct\/mi300\/mi300x.html?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noopener\"> AMD<\/a><\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>3. 
Benchmarks: What MLPerf v5.0 Reveals<\/strong><\/h2>\n\n\n\n<p class=\"wp-block-paragraph\">MLCommons\u2019 latest Inference v5.0 run is the first to feature both Blackwell and MI300\u2011family silicon.<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><strong>Test (Datacenter scenario)<\/strong><\/td><td><strong>8\u202f\u00d7\u202fBlackwell B200<\/strong><\/td><td><strong>8\u202f\u00d7\u202fH200 (baseline)<\/strong><\/td><td><strong>8\u202f\u00d7\u202fMI325X\u202f\u2020<\/strong><\/td><\/tr><tr><td><strong>Llama\u202f2\u202f70B \u2013 Interactive<\/strong><\/td><td><strong>3.1\u202f\u00d7<\/strong> baseline<\/td><td>1.0<\/td><td>0.93\u202f\u00d7<\/td><\/tr><tr><td><strong>Llama\u202f3.1\u202f405B \u2013 Server<\/strong><\/td><td><strong>3.4\u202f\u00d7<\/strong> baseline<\/td><td>1.0<\/td><td>n\/a<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p class=\"wp-block-paragraph\">MI325X shares architecture and memory with MI300X but runs a slightly higher clock; treat it as MI300X\u2019s upper bound. 
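<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">The \u201ccapacity\u201d side of this comparison is easy to sanity\u2011check yourself: a model fits on one card when its weights plus KV cache squeeze under the 192\u202fGB ceiling. The rough estimate below uses Llama\u201170B\u2011like layer counts, head geometry, and context\/batch sizes as placeholders; swap in your own model\u2019s shapes.<\/p>\n\n\n\n

```python
def fits_on_one_gpu(params_b, bytes_per_param=2, hbm_gb=192,
                    layers=80, kv_heads=8, head_dim=128,
                    context=8192, batch=8, kv_bytes=2):
    # FP16 weights: params (in billions) x 2 bytes ~= GB.
    weights_gb = params_b * bytes_per_param
    # KV cache: 2 (K and V) x layers x heads x head_dim x tokens x bytes.
    kv_gb = 2 * layers * kv_heads * head_dim * context * batch * kv_bytes / 1e9
    # Keep ~10% headroom for activations and allocator fragmentation.
    return weights_gb + kv_gb <= hbm_gb * 0.9

print(fits_on_one_gpu(70))    # 70B-class in FP16 -> True
print(fits_on_one_gpu(405))   # 405B-class -> False, needs several cards
```

\n\n\n\n<p class=\"wp-block-paragraph\">With these placeholder shapes, a 70\u202fB model in FP16 lands around 160\u202fGB and fits; a 405\u202fB model does not, which is why it only appears in multi\u2011GPU server results.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">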
<a href=\"https:\/\/developer.nvidia.com\/blog\/nvidia-blackwell-delivers-massive-performance-leaps-in-mlperf-inference-v5-0\/?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noopener\">NVIDIA Developer<\/a><a href=\"https:\/\/rocm.blogs.amd.com\/artificial-intelligence\/mi325x-accelerates-mlperf-inference\/README.html?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noopener\">ROCm Blogs<\/a><\/p>\n\n\n\n<p class=\"wp-block-paragraph\"><strong>Key takeaway:<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Latency tyranny<\/strong>\u2014If your SLO is sub\u2011100\u202fms p99, Blackwell\u2019s FP4 + NVLink combo is 2\u20114\u00d7 faster than anything else on the chart.<br><\/li>\n\n\n\n<li><strong>Capacity counts<\/strong>\u2014MI300X\u2019s identical 192\u202fGB envelope lets you keep <strong>70\u2011110\u202fB\u2011parameter<\/strong> models on a single card, avoiding tensor\u2011parallel splits that inflate latency and power.<br><\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>4. Software Ecosystem: CUDA\u2019s Moat vs. 
ROCm\u2019s Blitz<\/strong><\/h2>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><strong>Layer<\/strong><\/td><td><strong>NVIDIA Stack<\/strong><\/td><td><strong>AMD Stack<\/strong><\/td><\/tr><tr><td>Core SDK<\/td><td><strong>CUDA\u202f12<\/strong><\/td><td><strong>ROCm\u202f6.4<\/strong><\/td><\/tr><tr><td>LLM Toolkit<\/td><td>TensorRT\u2011LLM (built\u2011in FP4 quant)<\/td><td>vLLM \/ SGLang Docker images optimized for MI300X<a href=\"https:\/\/www.amd.com\/en\/developer\/resources\/technical-articles\/how-to-use-prebuilt-amd-rocm-vllm-docker-image-with-amd-instinct-mi300x-accelerators.html?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noopener\"> AMD<\/a><\/td><\/tr><tr><td>Attention Kernels<\/td><td>Flash\u2011Attention\u202f3<\/td><td>HIP\u2011flavored Flash\u2011Attention\u202f3<\/td><\/tr><tr><td>Cloud Availability<\/td><td>AWS, Azure, GCP preview Blackwell nodes<\/td><td>Azure, Meta\/FAIR, Lambda roll out MI300X<\/td><\/tr><tr><td>Open\u2011source vibe<\/td><td>Mostly closed kernels<\/td><td>Rapid upstreaming; llama.cpp, vLLM, MII already merged<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p class=\"wp-block-paragraph\">CUDA still offers the richest, lowest\u2011tuning path to peak numbers\u2014particularly if you lean on NVIDIA\u2011only pieces such as TensorRT\u2011LLM\u2019s fused kernels or NV\u2019s brand\u2011new <strong>Transformer Engine 2<\/strong> (paged attention itself originated in the open\u2011source vLLM project and runs on both stacks). Yet AMD\u2019s \u201cupstream first\u201d sprint has slashed the gap; a one\u2011line Docker pull now lands you a vLLM runtime fully tuned for MI300X.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>5. 
Economics: The Silent KPI<\/strong><\/h2>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>5.1 Hardware CAPEX &amp; Power OPEX<\/strong><\/h3>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><strong>Item<\/strong><\/td><td><strong>Blackwell<\/strong><\/td><td><strong>MI300X<\/strong><\/td><\/tr><tr><td>Card cost (street)<\/td><td>$35\u202fk<\/td><td>$12\u202fk<\/td><\/tr><tr><td>Board power<\/td><td>1\u202fkW<\/td><td>0.75\u202fkW<\/td><\/tr><tr><td>Annual energy per card (US\u202f$0.12\u202f\/\u202fkWh)<\/td><td>$1.05\u202fk<\/td><td>$0.79\u202fk<\/td><\/tr><tr><td>Rack density (8\u2011GPU box)<\/td><td>14\u202fkW<\/td><td>6\u202fkW<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p class=\"wp-block-paragraph\">A 256\u2011GPU training pod:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Blackwell DGX pods<\/strong> \u2192 Capex \u2248\u202f$9\u202fM, power \u2248\u202f360\u202fkW.<br><\/li>\n\n\n\n<li><strong>MI300X pods<\/strong> \u2192 Capex \u2248\u202f$3\u202fM, power \u2248\u202f192\u202fkW.<br><\/li>\n<\/ul>\n\n\n\n<p class=\"wp-block-paragraph\">Multiply by a five\u2011year depreciation and the difference becomes a C\u2011suite discussion, not just an engineer\u2019s wishlist.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>5.2 <em>Effective<\/em> Token Cost<\/strong><\/h3>\n\n\n\n<p class=\"wp-block-paragraph\">Blackwell\u2019s FP4 reduces per\u2011token energy by ~25\u202f% versus H100, but the card\u2019s higher TDP means watt\u2011for\u2011watt efficiency gains hover around 15\u202f%. 
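<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Folding card price, power, and throughput into one number makes the trade\u2011off concrete. The sketch below computes an amortized cost per million tokens; the throughput figures are deliberately made\u2011up placeholders, so swap in your own benchmark results before drawing conclusions.<\/p>\n\n\n\n

```python
def cost_per_million_tokens(card_price_usd, card_power_kw, tokens_per_sec,
                            lifetime_years=5.0, usd_per_kwh=0.12,
                            utilization=0.7):
    # Seconds of useful serving over the card's depreciation window.
    seconds = lifetime_years * 365 * 24 * 3600 * utilization
    tokens = tokens_per_sec * seconds
    # Energy billed for the same window (kW x hours x $/kWh).
    energy_usd = card_power_kw * (seconds / 3600) * usd_per_kwh
    return (card_price_usd + energy_usd) / tokens * 1e6

# Throughputs below are placeholders, not benchmark results.
blackwell = cost_per_million_tokens(35_000, 1.0, tokens_per_sec=12_000)
mi300x = cost_per_million_tokens(12_000, 0.75, tokens_per_sec=4_000)
print(f'Blackwell ~${blackwell:.3f} per million tokens')
print(f'MI300X    ~${mi300x:.3f} per million tokens')
```

\n\n\n\n<p class=\"wp-block-paragraph\">Under these placeholder throughputs the two land within a cent of each other per million tokens, which is exactly why the real decision hinges on measured tokens\u2011per\u2011second for your model and traffic mix rather than sticker price alone.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">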
ROCm\u2019s latest \u201cDeepGEMM\u201d kernels deliver 30\u201150\u202f% higher throughput on MI300X; if AMD lands FP4\u2011class quantization in 2026, the math could flip.<a href=\"https:\/\/rocm.docs.amd.com\/en\/latest\/about\/release-notes.html?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noopener\"> ROCm Documentation<\/a><\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>6. Decision Matrix: Mapping Needs to Silicon<\/strong><\/h2>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><strong>Primary KPI<\/strong><\/td><td><strong>Typical Workload<\/strong><\/td><td><strong>Best\u2011fit GPU<\/strong><\/td><td><strong>Why<\/strong><\/td><\/tr><tr><td><strong>99th\u2011percentile latency<\/strong><\/td><td>Global chat assistant \/ live copilots<\/td><td><strong>Blackwell\u202fB200<\/strong><\/td><td>FP4 &amp; NVLink annihilate queueing delay.<\/td><\/tr><tr><td><strong>Cost per token<\/strong><\/td><td>Internal RAG search, batch inference<\/td><td><strong>MI300X<\/strong><\/td><td>3\u00d7 cheaper card, 25\u202f% less power.<\/td><\/tr><tr><td><strong>Single\u2011card fine\u2011tuning<\/strong><\/td><td>Enterprises retraining 70\u2011110\u202fB models<\/td><td><strong>MI300X<\/strong><\/td><td>Entire model in RAM, no tensor\u2011parallel.<\/td><\/tr><tr><td><strong>Massive pre\u2011training (400\u202fB+)<\/strong><\/td><td>Frontier labs, foundation vendors<\/td><td><strong>Blackwell NVL72<\/strong><\/td><td>1.4\u202fEFLOPS of compute on a unified memory pool.<\/td><\/tr><tr><td><strong>AI SaaS start\u2011up<\/strong><\/td><td>Burst traffic, limited capex<\/td><td>Mixed<\/td><td>Spin up MI300X for long\u2011tail, Blackwell cache for hot paths.<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>7. 
Looking Forward: Three Variables That Could Upend Today\u2019s Verdict<\/strong><\/h2>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Software cadence<\/strong> \u2013 CUDA\u2019s head start is narrowing; if ROCm brings FP4 or dynamic sparsity into its mainline by mid\u20112026, MI300X\u2019s tokens\u2011per\u2011watt advantage could double.<br><\/li>\n\n\n\n<li><strong>HBM supply<\/strong> \u2013 Both chips lean on the same HBM3\/3E pipeline. Any yield hiccup will favor the architecture that squeezes more from fewer stacks\u2014i.e., AMD.<br><\/li>\n\n\n\n<li><strong>Regulation &amp; carbon math<\/strong> \u2013 The EU AI Act and nascent carbon taxes make \u201cgrams\u202fCO\u2082 per prompt\u201d a board\u2011level KPI. Saving 250\u202fW per GPU might not sound huge until you scale to a thousand\u2011card cluster\u2014it\u2019s a <strong>250\u2011kW<\/strong> delta.<a href=\"https:\/\/datacrunch.io\/blog\/nvidia-blackwell-b100-b200-gpu?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noopener\"> GPU<\/a><\/li>\n<\/ol>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>8. Final Words: There Is No Perfect GPU, Only a Perfect\u2011for\u2011You GPU<\/strong><\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Speed King<\/strong> \u2013 Blackwell turns a datacenter into a single\u2011digit\u2011millisecond inference engine.<br><\/li>\n\n\n\n<li><strong>Value King<\/strong> \u2013 MI300X lets you deploy GPT\u20113.5\u2011class models on\u2011prem at one\u2011third the capex and noticeably lower TCO.<br><\/li>\n\n\n\n<li><strong>Who really wins?<\/strong> \u2013 The answer hides in an Excel row labeled <strong>\u201c$$ \/ delivered token\u201d<\/strong>\u2014after you factor in engineering time, compliance overhead, and carbon offsets.<br><\/li>\n<\/ul>\n\n\n\n<p class=\"wp-block-paragraph\">Before signing any PO, plug your own traffic forecast into that spreadsheet. 
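<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">That spreadsheet can be as simple as a ten\u2011line script: size each fleet from your peak traffic forecast, then compare total cost over the depreciation window. Every price and throughput below is a placeholder for your own quotes and measurements.<\/p>\n\n\n\n

```python
import math

def fleet_tco(peak_tokens_per_sec, tokens_per_sec_per_gpu, card_price_usd,
              card_power_kw, years=5, usd_per_kwh=0.12, headroom=1.3):
    # GPUs needed to serve peak traffic with 30% headroom.
    gpus = math.ceil(peak_tokens_per_sec * headroom / tokens_per_sec_per_gpu)
    capex = gpus * card_price_usd
    # Energy at full board power, 24/7, over the depreciation window.
    opex = gpus * card_power_kw * years * 8760 * usd_per_kwh
    return gpus, capex + opex

FORECAST = 200_000  # peak tokens/sec for the whole service (placeholder)
results = {}
for name, tps, price, power in [('Blackwell B200', 12_000, 35_000, 1.0),
                                ('MI300X', 4_000, 12_000, 0.75)]:
    results[name] = fleet_tco(FORECAST, tps, price, power)
    n, tco = results[name]
    print(f'{name}: {n} GPUs, 5-year TCO ${tco:,.0f}')
```

\n\n\n\n<p class=\"wp-block-paragraph\">Change one assumption (per\u2011GPU throughput, utilization, energy price) and the winner can flip, which is the whole point of running the numbers yourself.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">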
Let the numbers\u2014not vendor hype\u2014decide whom you crown the <em>LLM GPU King<\/em> of 2025\u201126.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\"><\/p>\n","protected":false},"excerpt":{"rendered":"<p>TL;DR\u202f\u2014 If your key metric is raw tokens\u2011per\u2011second at the lowest latency, NVIDIA\u202fBlackwell is in a league of its own. If total cost of ownership, power draw, and \u201cone\u2011card\u2011per\u2011model\u201d convenience top the list, AMD\u2019s Instinct\u202fMI300X delivers unbeatable bang for the buck. In most real deployments you\u2019ll end up blending both\u2014unless the KPIs and the budget clearly point one way.(~1\u202f750 words; feel free to trim or localize.) 1. Why 2025 Became a Two\u2011Horse Race Ever since Hopper H100 swept the market in 2023, two forces have kept GPU vendors on an arms race trajectory: To serve these diverging appetites, hardware vendors forked&#8230;<\/p>\n","protected":false},"author":253372376,"featured_media":9903,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"_crdt_document":"","jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[96987604,96987592],"tags":[96988055,96988056],"class_list":["post-9902","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-news","category-featured-articles","tag-blackwell-2","tag-mi300x-2"],"blocksy_meta":[],"acf":[],"jetpack_featured_media_url":"https:\/\/i0.wp.com\/ai-stack.ai\/wp-content\/uploads\/2025\/05\/%E6%A8%A1%E5%9E%8BA-6.jpg?fit=1920%2C1080&quality=100&ct=202603031250&ssl=1","jetpack_shortlink":"https:\/\/wp.me\/ph344V-2zI","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/ai-stack.ai\/en\/wp-json\/w
p\/v2\/posts\/9902","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ai-stack.ai\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/ai-stack.ai\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/ai-stack.ai\/en\/wp-json\/wp\/v2\/users\/253372376"}],"replies":[{"embeddable":true,"href":"https:\/\/ai-stack.ai\/en\/wp-json\/wp\/v2\/comments?post=9902"}],"version-history":[{"count":0,"href":"https:\/\/ai-stack.ai\/en\/wp-json\/wp\/v2\/posts\/9902\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/ai-stack.ai\/en\/wp-json\/wp\/v2\/media\/9903"}],"wp:attachment":[{"href":"https:\/\/ai-stack.ai\/en\/wp-json\/wp\/v2\/media?parent=9902"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/ai-stack.ai\/en\/wp-json\/wp\/v2\/categories?post=9902"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/ai-stack.ai\/en\/wp-json\/wp\/v2\/tags?post=9902"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}