{"id":9867,"date":"2025-04-25T19:58:14","date_gmt":"2025-04-25T11:58:14","guid":{"rendered":"https:\/\/ai-stack.ai\/?p=9867"},"modified":"2025-04-25T20:05:55","modified_gmt":"2025-04-25T12:05:55","slug":"chatgpt-model","status":"publish","type":"post","link":"https:\/\/ai-stack.ai\/en\/chatgpt-model","title":{"rendered":"The 2025 Guide to OpenAI\u2019s GPT &amp; o-Series Models"},"content":{"rendered":"\n<h2 class=\"wp-block-heading\"><strong>1\u2002|\u2002Why So Many Models?<\/strong><\/h2>\n\n\n\n<p class=\"wp-block-paragraph\">OpenAI now ships two parallel families:<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><strong>Family<\/strong><\/td><td><strong>Core Goal<\/strong><\/td><td><strong>Optimised For<\/strong><\/td><\/tr><tr><td><strong>GPT-series<\/strong><\/td><td><strong><em>Breadth<\/em><\/strong> \u2013 huge unsupervised pre-training for general knowledge + fluent text<\/td><td>Creative writing, multilingual chat, knowledge retrieval, vision &amp; audio (in 4o)<\/td><\/tr><tr><td><strong>o-series<\/strong><\/td><td><strong><em>Depth<\/em><\/strong> \u2013 explicit planning &amp; tool-use reasoning<\/td><td>Multi-step maths, coding, data analysis, autonomous workflows<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p class=\"wp-block-paragraph\">Rather than \u201cnew replaces old,\u201d models are tuned for different budgets, latencies, and reasoning needs, giving builders a menu rather than a single \u201clatest.\u201d<a href=\"https:\/\/openai.com\/index\/introducing-o3-and-o4-mini\/?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noopener\"> OpenAI<\/a><a href=\"https:\/\/community.openai.com\/t\/announcement-release-of-o3-and-o4-mini-april-16-2025\/1230164?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noopener\">OpenAI Community<\/a><\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>2\u2002|\u2002Timeline &amp; 
Genealogy<\/strong><\/h2>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/lh7-rt.googleusercontent.com\/docsz\/AD_4nXdqHOmyBWeb-Eywt-jFTgn0bj1vfSROk1MU7PU0-jFDJqWsn5k-j3IKhTMNIRHMDlyKydD72skD1HsqqtzavX53Rlz4Y1wzm5GFmTzshvEl3NZxwIIm4AkAcETqbj1GWBryRsdhFg?key=r7ZgTVzHEzwa7oPdyJ1yVp63\" alt=\"\"\/><\/figure>\n\n\n\n<p class=\"wp-block-paragraph\"><em>Solid arrows<\/em> mark official releases; dotted lines (not shown) represent internal iterations.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>3\u2002|\u2002Deep Dive: GPT-3.5 \u2192 GPT-4 Turbo<\/strong><\/h2>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>GPT-3.5-Turbo \u2013 the workhorse<\/strong><\/h3>\n\n\n\n<p class=\"wp-block-paragraph\"><em>Launched<\/em>: Nov 2022<br><em>Context<\/em>: 16K<br><em>API cost<\/em>: <strong>$0.002<\/strong> in \/ <strong>$0.006<\/strong> out per 1 k tokens<a href=\"https:\/\/openai.com\/index\/new-models-and-developer-products-announced-at-devday\/?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noopener\"> OpenAI<\/a><\/p>\n\n\n\n<p class=\"wp-block-paragraph\"><strong>Best for:<\/strong> prototypes, high-volume chatbots, first-draft content when budget trumps accuracy.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>GPT-4 (legacy 8K)<\/strong><\/h3>\n\n\n\n<p class=\"wp-block-paragraph\"><em>Launched<\/em>: Mar 2023 \u2013 the first widely available <em>multimodal<\/em> GPT (image inputs).<br><em>Context<\/em>: 8K (32K retired).<br><em>API cost<\/em>: <strong>$0.03<\/strong> in \/ <strong>$0.06<\/strong> out.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\"><strong>Best for:<\/strong> regulated workflows already audited on v4; still valued for determinism.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>GPT-4 
Turbo<\/strong><\/h3>\n\n\n\n<p class=\"wp-block-paragraph\"><em>Launched<\/em>: DevDay Nov 2023<br><em>Key upgrade<\/em>: <strong>128K<\/strong> context + 3\u00d7 cheaper input and 2\u00d7 cheaper output than GPT-4 ($0.01 in \/ $0.03 out)<a href=\"https:\/\/openai.com\/index\/new-models-and-developer-products-announced-at-devday\/?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noopener\"> OpenAI<\/a><\/p>\n\n\n\n<p class=\"wp-block-paragraph\"><strong>Best for:<\/strong> long-form document QA, contract analysis, codebase chat.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>4\u2002|\u2002The Multimodal Leap \u2014 GPT-4o (\u201comni\u201d)<\/strong><\/h2>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><strong>Feature<\/strong><\/td><td><strong>GPT-4 Turbo<\/strong><\/td><td><strong>GPT-4o<\/strong><\/td><\/tr><tr><td>Text &amp; images<\/td><td>\u2714\ufe0e<\/td><td>\u2714\ufe0e<\/td><\/tr><tr><td><strong>Real-time audio I\/O<\/strong><\/td><td>\u2013<\/td><td>\u2714\ufe0e<\/td><\/tr><tr><td>Speed vs 4 Turbo<\/td><td>baseline<\/td><td><strong>\u22482 \u00d7 faster<\/strong><\/td><\/tr><tr><td>Cost<\/td><td>$0.01 \/ $0.03<\/td><td><strong>$0.005 \/ $0.015<\/strong><a href=\"https:\/\/openai.com\/index\/hello-gpt-4o\/?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noopener\"> OpenAI<\/a><\/td><\/tr><tr><td>Context window<\/td><td>128K<\/td><td>128K<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p class=\"wp-block-paragraph\">A single 4o model handles voice, vision, and text <strong>end-to-end in one network<\/strong>, enabling live, near-human voice-and-video demos<a href=\"https:\/\/openai.com\/index\/gpt-4o-system-card\/?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noopener\"> OpenAI<\/a>.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\"><strong>Use it when:<\/strong> you want the richest UX (voice-chat, screenshot Q&amp;A) at mid-range cost.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>5\u2002|\u2002Beyond Knowledge \u2014 The o-Series<\/strong><\/h2>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>What makes an \u201co\u201d model different?<\/strong><\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Longer internal \u201cthought\u201d budget<\/strong> \u2013 the model learns to deliberate.<br><\/li>\n\n\n\n<li><strong>Native tool use<\/strong> \u2013 in ChatGPT it can autonomously open Python or the web.<br><\/li>\n\n\n\n<li><strong>Vision-reasoning baked in.<\/strong><br><\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>o3<\/strong><\/h3>\n\n\n\n<p class=\"wp-block-paragraph\"><em>Launched<\/em>: 16 Apr 2025<br><em>Profile<\/em>: highest reasoning\/coding scores in OpenAI\u2019s public suite, tuned for autonomy.<br><em>Price<\/em>: $10 \/ $40 per million tokens on the official card \u2013 i.e. $0.01 in \/ $0.04 out per 1 k, matching Turbo\u2019s input tier<a href=\"https:\/\/community.openai.com\/t\/announcement-release-of-o3-and-o4-mini-april-16-2025\/1230164?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noopener\"> OpenAI Community<\/a><\/p>\n\n\n\n<p class=\"wp-block-paragraph\"><strong>Sweet spot:<\/strong> data-science notebooks, multi-step coding help, advanced tutoring.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Why o3 stands out<\/strong><\/h3>\n\n\n\n<p class=\"wp-block-paragraph\">Unlike the GPT family, o3 can <em>act<\/em> while it thinks. During its chain-of-thought it decides when extra evidence is needed, then autonomously invokes any ChatGPT tool: live <strong>web search<\/strong>, Python execution, file analysis, or image generation. It can pull public data, run a script, plot a chart, and explain the result\u2014typically in under a minute\u2014while performing light self-fact-checking to curb hallucinations. 
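<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">The tool loop itself runs server-side, but the developer-facing half \u2013 declaring a function schema and dispatching the model\u2019s tool calls \u2013 can be sketched locally. A minimal sketch: the <code>lookup_population<\/code> function and its toy data are invented for illustration, and the model\u2019s side is simulated; only the schema shape follows the public <code>tools<\/code> parameter of the Chat Completions API.<\/p>

```python
import json

# Tool definition in the JSON-schema shape the Chat Completions API
# accepts under `tools`. The function name `lookup_population` and its
# parameters are hypothetical, invented for this sketch.
LOOKUP_TOOL = {
    "type": "function",
    "function": {
        "name": "lookup_population",
        "description": "Return the population of a city from a local table.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

# Toy data so the dispatcher runs without any network access.
POPULATIONS = {"Taipei": 2_600_000, "Kaohsiung": 2_700_000}

def dispatch(tool_call: dict) -> str:
    """Route one model-issued tool call to the matching Python function."""
    if tool_call["name"] == LOOKUP_TOOL["function"]["name"]:
        args = json.loads(tool_call["arguments"])  # arguments arrive as JSON text
        return json.dumps({"population": POPULATIONS.get(args["city"])})
    raise ValueError("unknown tool: " + tool_call["name"])

# Simulate the call a reasoning model might emit mid-chain-of-thought.
result = dispatch({"name": "lookup_population",
                   "arguments": '{"city": "Taipei"}'})
print(result)  # {"population": 2600000}
```

<p class=\"wp-block-paragraph\">In a real loop the same dispatch step runs on each entry of the API\u2019s <code>tool_calls<\/code> array, and the JSON result is appended to the conversation before the model produces its final answer.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">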
If your workflow demands data-backed answers that blend text, numbers, and visuals, o3 is currently the most capable publicly available model.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>o4-mini<\/strong><\/h3>\n\n\n\n<p class=\"wp-block-paragraph\">A cost-efficient sibling (\u2248 30 % of o3\u2019s price) for \u201cgood-enough\u201d multi-step tasks<a href=\"https:\/\/openai.com\/index\/introducing-o3-and-o4-mini\/?utm_source=chatgpt.com\" target=\"_blank\" rel=\"noopener\"> OpenAI<\/a>.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\"><strong>Sweet spot:<\/strong> batch code review, lightweight autonomous agents.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>o4-mini vs GPT-3.5<\/strong><\/h3>\n\n\n\n<p class=\"wp-block-paragraph\">Although some teams substitute o4-mini for GPT-3.5, remember that <em>o4-mini is optimised for reasoning,<\/em> not chit-chat; its prose is plainer and its responses are sometimes slower.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>6\u2002|\u2002What About GPT-4.5 Preview?<\/strong><\/h2>\n\n\n\n<p class=\"wp-block-paragraph\">4.5 is a public <em>canary<\/em> build:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>More <em>emotional nuance<\/em> \u2013 marketing copy reads less \u201cAI-ish.\u201d<br><\/li>\n\n\n\n<li>Small factual &amp; code-gen gains over 4 Turbo, <strong>but still costlier<\/strong> ($0.008 in).<br><\/li>\n\n\n\n<li>Available only in ChatGPT Plus \/ Team for feedback; not yet a stable API model.<br><\/li>\n<\/ul>\n\n\n\n<p class=\"wp-block-paragraph\">Use it to test edge cases in creative or brand-tone-sensitive content, but don\u2019t lock production flows until GA.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>7\u2002|\u2002Decision Matrix<\/strong><\/h2>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><strong>\u201cI care most about\u2026\u201d<\/strong><\/td><td><strong>Pick this first<\/strong><\/td><td><strong>Why<\/strong><\/td><\/tr><tr><td><strong>Lowest cost \/ scale<\/strong><\/td><td>GPT-3.5-Turbo<\/td><td>5 \u00d7 cheaper than GPT-4 Turbo.<\/td><\/tr><tr><td><strong>Real-time voice or vision UX<\/strong><\/td><td>GPT-4o<\/td><td>Native audio + lower latency.<\/td><\/tr><tr><td><strong>&gt;100 K-token workspace<\/strong><\/td><td>GPT-4 Turbo or 4o<\/td><td>Same 128K; pick 4o if audio\/vision needed.<\/td><\/tr><tr><td><strong>Deep reasoning \/ tool calls<\/strong><\/td><td>o3<\/td><td>Premier chain-of-thought.<\/td><\/tr><tr><td><strong>Creative polish \/ subtle tone<\/strong><\/td><td>GPT-4.5 preview<\/td><td>Richer stylistic control.<\/td><\/tr><tr><td><strong>Regulated, validated compliance<\/strong><\/td><td>GPT-4 (legacy)<\/td><td>Deterministic sampling.<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>8\u2002|\u2002Cost &amp; Performance Benchmarks<\/strong><\/h2>\n\n\n\n<p class=\"wp-block-paragraph\"><em>All prices USD \/ 1 k tokens, Apr 2025.<\/em><\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><strong>Model<\/strong><\/td><td><strong>In<\/strong><\/td><td><strong>Out<\/strong><\/td><td><strong>Relative speed*<\/strong><\/td><\/tr><tr><td>GPT-3.5-Turbo<\/td><td><strong>$0.002<\/strong><\/td><td>$0.006<\/td><td>\u2605\u2605\u2605\u2605\u2606<\/td><\/tr><tr><td>GPT-4<\/td><td>$0.03<\/td><td>$0.06<\/td><td>\u2605\u2605\u2606\u2606\u2606<\/td><\/tr><tr><td>GPT-4 
Turbo<\/td><td>$0.01<\/td><td>$0.03<\/td><td>\u2605\u2605\u2605\u2606\u2606<\/td><\/tr><tr><td><strong>GPT-4o<\/strong><\/td><td><strong>$0.005<\/strong><\/td><td>$0.015<\/td><td>\u2605\u2605\u2605\u2605\u2605<\/td><\/tr><tr><td>GPT-4.5 prev.<\/td><td>$0.008<\/td><td>$0.024<\/td><td>\u2605\u2605\u2605\u2605\u2606<\/td><\/tr><tr><td><strong>o3<\/strong><\/td><td>$0.01<\/td><td>$0.04<\/td><td>\u2605\u2605\u2605\u2606\u2606<\/td><\/tr><tr><td>o4-mini<\/td><td>$0.003<\/td><td>$0.012<\/td><td>\u2605\u2605\u2605\u2605\u2606<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p class=\"wp-block-paragraph\">*Speed ranking combines latency &amp; tokens-per-second averages from OpenAI\u2019s April 2025 dashboard.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>9\u2002|\u2002Prompting Tips by Model<\/strong><\/h2>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><strong>Model<\/strong><\/td><td><strong>Tip #1<\/strong><\/td><td><strong>Tip #2<\/strong><\/td><\/tr><tr><td>GPT-3.5<\/td><td>Be explicit\u2014fewer hidden assumptions.<\/td><td>Break long tasks into numbered steps.<\/td><\/tr><tr><td>GPT-4 Turbo<\/td><td>Use <em>system<\/em> messages to lock tone for long docs.<\/td><td>Exploit 128K to paste entire manuals.<\/td><\/tr><tr><td>GPT-4o<\/td><td>Include small image snippets to ground context; for voice, punctuate clearly.<\/td><td>Use \u201cspeak as\u201d in system role for voice persona.<\/td><\/tr><tr><td>GPT-4.5<\/td><td>Leverage style-transfer: e.g. 
\u201crewrite with empathetic tone for Taiwanese tech readers.\u201d<\/td><td>Provide brand lexicon to push creative boundaries safely.<\/td><\/tr><tr><td><strong>o3 \/ o4-mini<\/strong><\/td><td>State the goal plainly; o-models plan internally, so \u201cthink step-by-step\u201d scaffolding is rarely needed.<\/td><td>Give structured JSON schema for function-calling\u2014reduces hallucination.<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>10\u2002|\u2002Looking Ahead<\/strong><\/h2>\n\n\n\n<p class=\"wp-block-paragraph\">OpenAI\u2019s public roadmap signals:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Reasoning-effort knobs<\/strong> \u2013 o-series already offers low\/med\/high reasoning passes; expect similar controls in GPT-series for latency-sensitive apps.<br><\/li>\n\n\n\n<li><strong>More multimodal fusion<\/strong> \u2013 4o\u2019s single-pass audio-vision will likely cascade into 4.5+ and o5.<br><\/li>\n\n\n\n<li><strong>Native agents<\/strong> \u2013 ChatGPT\u2019s tool-calling hints at sandboxed \u201cmicro-agents\u201d executing on-device for privacy.<br><\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>11\u2002|\u2002Conclusion<\/strong><\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Not a ladder, but a toolbox.<\/strong> Each model trades cost, speed, modality, and reasoning depth differently.<br><\/li>\n\n\n\n<li><strong>Prototype, benchmark, iterate.<\/strong> No doc (even this one!) 
beats a 100-message pilot with <em>your<\/em> dataset.<br><\/li>\n\n\n\n<li><strong>Stay agile.<\/strong> Prices have dropped <strong>>80 %<\/strong> since GPT-4\u2019s debut; workflows locked to one tier risk overpaying.<br><\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>TL;DR<\/strong><\/h3>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><strong>Budget<\/strong><\/td><td><strong>Go-to<\/strong><\/td><td><strong>Upgrade path<\/strong><\/td><\/tr><tr><td>\ud83d\udcb8 Shoestring<\/td><td>GPT-3.5<\/td><td>Add o4-mini for tricky code<\/td><\/tr><tr><td>\ud83d\udcbc SMB app<\/td><td>GPT-4o<\/td><td>Slot in o3 for analytics<\/td><\/tr><tr><td>\ud83c\udfe2 Enterprise<\/td><td>GPT-4 Turbo + o3<\/td><td>Test 4.5 for polished CX<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p class=\"wp-block-paragraph\">May this guide help you navigate the expanding model landscape and pinpoint the tool that best elevates your AI projects.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\"><\/p>\n","protected":false},"excerpt":{"rendered":"<p>1\u2002|\u2002Why So Many Models? OpenAI now ships two parallel families: Family Core Goal Optimised For GPT-series Breadth \u2013 huge unsupervised pre-training for general knowledge + fluent text Creative writing, multilingual chat, knowledge retrieval, vision &amp; audio (in 4o) o-series Depth \u2013 explicit planning &amp; tool-use reasoning Multi-step maths, coding, data analysis, autonomous workflows Rather than \u201cnew replaces old,\u201d models are tuned for different budgets, latencies, and reasoning needs, giving builders a menu rather than a single \u201clatest.\u201d OpenAIOpenAI Community 2\u2002|\u2002Timeline &amp; Genealogy Solid arrows mark official releases; dotted lines (not shown) represent internal iterations. 
3\u2002|\u2002Deep Dive: GPT-3.5 \u2192 GPT-4 Turbo GPT-3.5-Turbo&#8230;<\/p>\n","protected":false},"author":253372376,"featured_media":9868,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"_crdt_document":"","jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[96987604,96987592],"tags":[96987713,96987715],"class_list":["post-9867","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-news","category-featured-articles","tag-chatgpt-en","tag-openai-en"],"blocksy_meta":[],"acf":[],"jetpack_featured_media_url":"https:\/\/i0.wp.com\/ai-stack.ai\/wp-content\/uploads\/2025\/04\/%E6%A8%A1%E5%9E%8BA-1.jpg?fit=1920%2C1080&quality=100&ct=202603031250&ssl=1","jetpack_shortlink":"https:\/\/wp.me\/ph344V-2z9","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/ai-stack.ai\/en\/wp-json\/wp\/v2\/posts\/9867","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ai-stack.ai\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/ai-stack.ai\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/ai-stack.ai\/en\/wp-json\/wp\/v2\/users\/253372376"}],"replies":[{"embeddable":true,"href":"https:\/\/ai-stack.ai\/en\/wp-json\/wp\/v2\/comments?post=9867"}],"version-history":[{"count":0,"href":"https:\/\/ai-stack.ai\/en\/wp-json\/wp\/v2\/posts\/9867\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/ai-stack.ai\/en\/wp-json\/wp\/v2\/media\/9868"}],"wp:attachment":[{"href":"https:\/\/ai-stack.ai\/en\/wp-json\/wp\/v2\/media?parent=9867"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/ai-stack.ai\/en\/wp-json\/wp\/v2\/categories?post=9867"},{"taxo
nomy":"post_tag","embeddable":true,"href":"https:\/\/ai-stack.ai\/en\/wp-json\/wp\/v2\/tags?post=9867"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}