{"id":9,"date":"2026-04-18T22:25:12","date_gmt":"2026-04-18T22:25:12","guid":{"rendered":"https:\/\/eric-clay.com\/?p=9"},"modified":"2026-04-19T00:44:44","modified_gmt":"2026-04-19T00:44:44","slug":"ai-progress-in-four-charts-a-q2-2026-brief-for-policy-makers-and-staff","status":"publish","type":"post","link":"https:\/\/eric-clay.com\/?p=9","title":{"rendered":"AI Progress in Four Charts: A Q2 2026 Brief for Policy Makers and Staff"},"content":{"rendered":"\n<p>AI progress has been extraordinarily rapid over the past three years. Language models have moved from failing high-school math problems to contributing to open Erd\u0151s problems in ways that leading mathematicians have publicly praised.<sup><a id=\"fnref-1\" href=\"#fn-1\">[1]<\/a><\/sup> This brief covers four trends policy makers should understand.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">1. AI Progress is Accelerating Exponentially in Programming<\/h2>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1622\" height=\"1264\" src=\"https:\/\/eric-clay.com\/wp-content\/uploads\/2026\/04\/chart_6.jpg\" alt=\"METR Time Horizon 1.1 chart\" class=\"wp-image-10\" srcset=\"https:\/\/eric-clay.com\/wp-content\/uploads\/2026\/04\/chart_6.jpg 1622w, https:\/\/eric-clay.com\/wp-content\/uploads\/2026\/04\/chart_6-300x234.jpg 300w, https:\/\/eric-clay.com\/wp-content\/uploads\/2026\/04\/chart_6-1024x798.jpg 1024w, https:\/\/eric-clay.com\/wp-content\/uploads\/2026\/04\/chart_6-768x598.jpg 768w, https:\/\/eric-clay.com\/wp-content\/uploads\/2026\/04\/chart_6-1536x1197.jpg 1536w\" sizes=\"auto, (max-width: 1622px) 100vw, 1622px\" \/><figcaption class=\"wp-element-caption\">METR Time Horizon 1.1: the length of task (in human hours) that frontier AI agents can complete with 50% reliability, 2019\u20132026.<\/figcaption><\/figure>\n\n\n\n<p>Anthropic, OpenAI, and Google DeepMind are all focused on automating programming in an effort to automate AI research itself. 
Scaling compute, combined with a technique known as reinforcement learning from verifiable rewards (RLVR), has yielded continual gains in domains where answers can be checked against ground truth: software development, mathematics, and cybersecurity.<\/p>\n\n\n\n<p>METR\u2019s January 2026 update estimates that the time horizon for LLM coding has been doubling roughly every 131 days since 2023, a sharp acceleration from the 2019\u20132025 stitched trend of seven months, and the 2024-onward trend is faster still at 89 days.<sup><a id=\"fnref-2\" href=\"#fn-2\">[2]<\/a><\/sup> If this trend holds, by the end of 2026 we will likely have AI agents with multi-week time horizons on programming and cybersecurity tasks. The rate of doubling may itself increase as AI is used to build the next generation of AI, producing a disorienting and unpredictable speedup.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p>\u201cWe might be 6 to 12 months away from models doing all of what software engineers do end-to-end. I have engineers within Anthropic who say I don\u2019t write any code anymore.\u201d<\/p><cite>\u2014 Dario Amodei, CEO of Anthropic, World Economic Forum, January 2026<sup><a id=\"fnref-8\" href=\"#fn-8\">[8]<\/a><\/sup><\/cite><\/blockquote>\n\n\n\n<h2 class=\"wp-block-heading\">2. 
Offensive Cyber Capabilities Are Growing Particularly Fast as Coding Improves<\/h2>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1638\" height=\"998\" src=\"https:\/\/eric-clay.com\/wp-content\/uploads\/2026\/04\/chart_7.jpg\" alt=\"UK AISI evaluation of Mythos Preview\" class=\"wp-image-11\" srcset=\"https:\/\/eric-clay.com\/wp-content\/uploads\/2026\/04\/chart_7.jpg 1638w, https:\/\/eric-clay.com\/wp-content\/uploads\/2026\/04\/chart_7-300x183.jpg 300w, https:\/\/eric-clay.com\/wp-content\/uploads\/2026\/04\/chart_7-1024x624.jpg 1024w, https:\/\/eric-clay.com\/wp-content\/uploads\/2026\/04\/chart_7-768x468.jpg 768w, https:\/\/eric-clay.com\/wp-content\/uploads\/2026\/04\/chart_7-1536x936.jpg 1536w\" sizes=\"auto, (max-width: 1638px) 100vw, 1638px\" \/><figcaption class=\"wp-element-caption\">UK AISI evaluation: average attack-chain steps completed by frontier models on \u201cThe Last Ones\u201d multi-stage cyber range, plotted against cumulative tokens spent. Mythos Preview (red) substantially outperforms prior frontier models.<\/figcaption><\/figure>\n\n\n\n<p>The recent announcement of Anthropic Mythos has shaken the industry.<sup><a id=\"fnref-3\" href=\"#fn-3\">[3]<\/a><\/sup> The UK AISI study on Mythos shows that AI is now able to autonomously conduct sophisticated end-to-end cyberattacks against weakly defended systems.<sup><a id=\"fnref-4\" href=\"#fn-4\">[4]<\/a><\/sup> As model capability improves and agent harnesses become more sophisticated, AI agents can take increasingly complex actions in the real world on compressed timescales. The primary bottleneck to agentic AI has been a lack of reliability on complex multi-step tasks. In software development and cybersecurity, that bottleneck is rapidly dissolving.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p>\u201cThe current systems are getting pretty good at cyber. 
You need to make sure that the defences are stronger than the offences.\u201d<\/p><cite>\u2014 Demis Hassabis, CEO of Google DeepMind, India AI Impact Summit, February 2026<sup><a id=\"fnref-9\" href=\"#fn-9\">[9]<\/a><\/sup><\/cite><\/blockquote>\n\n\n\n<h2 class=\"wp-block-heading\">3. Open-Weight Models Lag Frontier by 6 to 12 Months<\/h2>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1692\" height=\"902\" src=\"https:\/\/eric-clay.com\/wp-content\/uploads\/2026\/04\/chart_8.jpg\" alt=\"Epoch Capabilities Index\" class=\"wp-image-12\" srcset=\"https:\/\/eric-clay.com\/wp-content\/uploads\/2026\/04\/chart_8.jpg 1692w, https:\/\/eric-clay.com\/wp-content\/uploads\/2026\/04\/chart_8-300x160.jpg 300w, https:\/\/eric-clay.com\/wp-content\/uploads\/2026\/04\/chart_8-1024x546.jpg 1024w, https:\/\/eric-clay.com\/wp-content\/uploads\/2026\/04\/chart_8-768x409.jpg 768w, https:\/\/eric-clay.com\/wp-content\/uploads\/2026\/04\/chart_8-1536x819.jpg 1536w\" sizes=\"auto, (max-width: 1692px) 100vw, 1692px\" \/><figcaption class=\"wp-element-caption\">Epoch Capabilities Index (ECI), closed-weight (teal) vs. open-weight (pink) models. The leading open-weight models consistently trail frontier closed models, though the gap has narrowed considerably since 2023.<\/figcaption><\/figure>\n\n\n\n<p>Open-weight models (often called open-source) currently trail proprietary frontier models by roughly 6 to 12 months on capability benchmarks.<sup><a id=\"fnref-5\" href=\"#fn-5\">[5]<\/a><\/sup> Any capability a closed model has today, including Mythos-class cyber capability, should therefore be expected in an open-weight model within a year. 
The implications for cybersecurity, CBRN risk, and financial and cyber fraud are serious.<\/p>\n\n\n\n<p>Open-weight models can be released with safeguards, but those safeguards can be trivially removed through fine-tuning.<sup><a id=\"fnref-6\" href=\"#fn-6\">[6]<\/a><\/sup> There is currently no known technical method for releasing open weights in a way that prevents downstream fine-tuning for cyberattacks, bioweapon assistance, or terrorism support. In practice, the capabilities of the best open-weight models should be treated as uncontrollable once released.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p>\u201cPowerful agentic systems are going to be built, because they\u2019ll be more useful, economically more useful, scientifically more useful. But then those systems become even more powerful in the wrong hands, too.\u201d<\/p><cite>\u2014 Demis Hassabis, CEO of Google DeepMind, Paris AI Action Summit, February 2025<sup><a id=\"fnref-10\" href=\"#fn-10\">[10]<\/a><\/sup><\/cite><\/blockquote>\n\n\n\n<h2 class=\"wp-block-heading\">4. 
Current Frontier Models Can Substantially Uplift CBRN Threats<\/h2>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"1746\" height=\"634\" src=\"https:\/\/eric-clay.com\/wp-content\/uploads\/2026\/04\/chart_9.jpg\" alt=\"Anthropic virology task evaluations\" class=\"wp-image-13\" srcset=\"https:\/\/eric-clay.com\/wp-content\/uploads\/2026\/04\/chart_9.jpg 1746w, https:\/\/eric-clay.com\/wp-content\/uploads\/2026\/04\/chart_9-300x109.jpg 300w, https:\/\/eric-clay.com\/wp-content\/uploads\/2026\/04\/chart_9-1024x372.jpg 1024w, https:\/\/eric-clay.com\/wp-content\/uploads\/2026\/04\/chart_9-768x279.jpg 768w, https:\/\/eric-clay.com\/wp-content\/uploads\/2026\/04\/chart_9-1536x558.jpg 1536w\" sizes=\"auto, (max-width: 1746px) 100vw, 1746px\" \/><figcaption class=\"wp-element-caption\">Anthropic long-form virology task evaluations across two task suites and three sub-components (sequence design, protocol design, end-to-end). Claude Sonnet 4.6, Opus 4.6, and Mythos Preview all score between 0.79 and 1.00, indicating near-saturation on these long-form biology workflows.<\/figcaption><\/figure>\n\n\n\n<p>Frontier models are now reaching the point where they can meaningfully uplift malicious actors attempting to create novel CBRN threats.<sup><a id=\"fnref-7\" href=\"#fn-7\">[7]<\/a><\/sup> Major model providers invest heavily in safeguards to prevent this, but open-weight models remain only 6 to 12 months behind. 
Within the next two years, open-weight models will likely reach a level that materially improves an unskilled actor\u2019s ability to develop a novel biological or chemical threat.<\/p>\n\n\n\n<p>OpenAI\u2019s evaluation of GPT-4.5 under its Preparedness Framework found comparable results, rating the model at its \u201cMedium\u201d threshold for biological risk.<sup><a id=\"fnref-12\" href=\"#fn-12\">[12]<\/a><\/sup> Expert evaluators\u2014credentialed virologists and biosecurity researchers\u2014found that the model could substantially accelerate the information-gathering and protocol-design phases of biological threat development, reducing the time required for a moderately skilled actor to move from intent to an actionable synthesis plan. OpenAI noted that while its safeguards reduced compliance on direct requests, adversarial elicitation techniques could recover a significant fraction of the underlying capability. The consistent finding across Anthropic and OpenAI evaluations is that the knowledge required to design novel CBRN threats is now latent in frontier model weights\u2014not as a deliberate design choice, but as an emergent consequence of training on the world\u2019s scientific literature. Safeguards reduce willingness; they do not reduce capability.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p>\u201cAs of mid-2025, our measurements show that LLMs may already be providing substantial uplift in several relevant areas, perhaps doubling or tripling the likelihood of success. 
We believe that models are likely now approaching the point where, without safeguards, they could be useful in enabling someone with a STEM degree but not specifically a biology degree to go through the whole process of producing a bioweapon.\u201d<\/p><cite>\u2014 Dario Amodei, CEO of Anthropic, \u201cThe Adolescence of Technology,\u201d January 2026<sup><a id=\"fnref-11\" href=\"#fn-11\">[11]<\/a><\/sup><\/cite><\/blockquote>\n\n\n\n<h2 class=\"wp-block-heading\">Concluding Thoughts<\/h2>\n\n\n\n<p>Taken together, these four trends amount to an urgent call to action for global policy makers. The time to regulate is before a major cyber or biosecurity incident, not after.<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>AI capability is demonstrably increasing, and the rate of that increase is itself growing.<\/li><li>Proprietary frontier models can now conduct complex end-to-end cyberattacks and routinely discover zero-day vulnerabilities.<\/li><li>Any capability in a closed model is likely to appear in an open-weight model within 12 months.<\/li><li>Frontier models can now substantially uplift CBRN threat actors.<\/li><\/ul>\n\n\n\n<p>Humans waited for nuclear and airline disasters before regulating those industries. Waiting for an AI disaster to regulate AI is a losing proposition that could cause enormous loss of life, significant financial harm, or even an existential catastrophe.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h3 class=\"wp-block-heading\">Footnotes<\/h3>\n\n\n\n<p><span id=\"fn-1\"><sup>1<\/sup><\/span> Terence Tao, Mathstodon post on Erd\u0151s Problem #728, January 8, 2026: <a href=\"https:\/\/mathstodon.xyz\/@tao\/115855840223258103\" target=\"_blank\" rel=\"noopener\">mathstodon.xyz\/@tao\/115855840223258103<\/a>. 
Tao notes this is the first Erd\u0151s problem solved \u201cmore or less autonomously\u201d by AI (GPT-5.2 Pro, with proof formalized in Lean via Harmonic\u2019s Aristotle) in a way not previously documented in the literature. Tao estimates only 1\u20132% of open Erd\u0151s problems are currently tractable to AI with minimal human assistance. <a href=\"#fnref-1\">\u21a9<\/a><\/p>\n\n\n\n<p><span id=\"fn-2\"><sup>2<\/sup><\/span> METR, \u201cTime Horizon 1.1,\u201d January 29, 2026: <a href=\"https:\/\/metr.org\/blog\/2026-1-29-time-horizon-1-1\/\" target=\"_blank\" rel=\"noopener\">metr.org\/blog\/2026-1-29-time-horizon-1-1<\/a>. The 131-day doubling applies to the post-2023 trend under the updated TH1.1 methodology. The 2019\u20132025 stitched trend is 196 days (roughly seven months); the 2024-onward trend is 89 days. <a href=\"#fnref-2\">\u21a9<\/a><\/p>\n\n\n\n<p><span id=\"fn-3\"><sup>3<\/sup><\/span> Anthropic announced Claude Mythos Preview on April 7, 2026. See also Anthropic, \u201cProject Glasswing: Securing critical software for the AI era\u201d: <a href=\"https:\/\/www.anthropic.com\/glasswing\" target=\"_blank\" rel=\"noopener\">anthropic.com\/glasswing<\/a>. <a href=\"#fnref-3\">\u21a9<\/a><\/p>\n\n\n\n<p><span id=\"fn-4\"><sup>4<\/sup><\/span> UK AI Security Institute, \u201cOur evaluation of Claude Mythos Preview\u2019s cyber capabilities,\u201d April 2026: <a href=\"https:\/\/www.aisi.gov.uk\/blog\/our-evaluation-of-claude-mythos-previews-cyber-capabilities\" target=\"_blank\" rel=\"noopener\">aisi.gov.uk<\/a>. AISI found Mythos Preview \u201cat least capable of autonomously attacking small, weakly defended and vulnerable enterprise systems where access to a network has been gained,\u201d while cautioning that test environments were deliberately simplified. The UK government subsequently issued an open letter urging executives to invest in cyber defense, citing this finding. 
<a href=\"#fnref-4\">\u21a9<\/a><\/p>\n\n\n\n<p><span id=\"fn-5\"><sup>5<\/sup><\/span> Epoch AI Capabilities Index (ECI): <a href=\"https:\/\/epoch.ai\/benchmarks\" target=\"_blank\" rel=\"noopener\">epoch.ai\/benchmarks<\/a> and Epoch AI, \u201cOpen vs. closed AI: How behind are open models?\u201d: <a href=\"https:\/\/epoch.ai\/blog\/open-models-report\" target=\"_blank\" rel=\"noopener\">epoch.ai\/blog\/open-models-report<\/a>. Epoch finds open-weight models lag closed-weight by an average of roughly one year (90% CI: 5\u201322 months depending on benchmark). See also Nathan Lambert, \u201cWhat comes next with open models,\u201d <em>Interconnects<\/em>: <a href=\"https:\/\/www.interconnects.ai\/p\/the-next-phase-of-open-models\" target=\"_blank\" rel=\"noopener\">interconnects.ai<\/a>, placing the gap at 6\u201318 months historically. <a href=\"#fnref-5\">\u21a9<\/a><\/p>\n\n\n\n<p><span id=\"fn-6\"><sup>6<\/sup><\/span> Tamirisa et al., \u201cTamper-Resistant Safeguards for Open-Weight LLMs,\u201d arXiv:2408.00761: <a href=\"https:\/\/arxiv.org\/abs\/2408.00761\" target=\"_blank\" rel=\"noopener\">arxiv.org\/abs\/2408.00761<\/a> (\u201crefusal and unlearning safeguards can be trivially removed with a few steps of fine-tuning\u201d). See also OpenAI\u2019s worst-case analysis for gpt-oss: <a href=\"https:\/\/cdn.openai.com\/pdf\/231bf018-659a-494d-976c-2efdfc72b652\/oai_gpt-oss_Model_Safety.pdf\" target=\"_blank\" rel=\"noopener\">openai.com (PDF)<\/a>, which demonstrates that anti-refusal fine-tuning can drive refusal rates on unsafe prompts to near zero while preserving benchmark performance. 
<a href=\"#fnref-6\">\u21a9<\/a><\/p>\n\n\n\n<p><span id=\"fn-7\"><sup>7<\/sup><\/span> Anthropic Claude Opus 4 System Card and ASL-3 activation: in a bioweapons acquisition uplift trial, Opus 4 produced a 2.53x uplift over internet-only controls, which Anthropic described as \u201csuitably close that we are unable to rule out ASL-3.\u201d Subsequent Claude Opus 4.5 evaluations showed expert-level uplift trials where participants produced substantially higher scores and fewer critical errors with model assistance. See Anthropic Transparency Hub: <a href=\"https:\/\/www.anthropic.com\/transparency\" target=\"_blank\" rel=\"noopener\">anthropic.com\/transparency<\/a> and \u201cStrategic warning for AI risk\u201d: <a href=\"https:\/\/www.anthropic.com\/news\/strategic-warning-for-ai-risk-progress-and-insights-from-our-frontier-red-team\" target=\"_blank\" rel=\"noopener\">anthropic.com\/news<\/a>. <a href=\"#fnref-7\">\u21a9<\/a><\/p>\n\n\n\n<p><span id=\"fn-8\"><sup>8<\/sup><\/span> Dario Amodei, CEO of Anthropic, speaking at the World Economic Forum in Davos, January 2026. Reported in Yahoo Finance \/ Benzinga, \u201cAnthropic CEO Predicts AI Models Will Replace Software Engineers In 6\u201312 Months\u201d (January 22, 2026): <a href=\"https:\/\/finance.yahoo.com\/news\/anthropic-ceo-predicts-ai-models-233113047.html\" target=\"_blank\" rel=\"noopener\">finance.yahoo.com<\/a>. <a href=\"#fnref-8\">\u21a9<\/a><\/p>\n\n\n\n<p><span id=\"fn-9\"><sup>9<\/sup><\/span> Demis Hassabis, CEO of Google DeepMind, speaking at the India AI Impact Summit, February 2026. Reported in Outlook Business, \u201cWe are at a threshold moment where AGI is on the horizon\u201d (February 18, 2026): <a href=\"https:\/\/www.outlookbusiness.com\/news\/we-are-at-a-threshold-moment-where-agi-is-on-the-horizon-possibly-in-the-next-five-to-eight-years-says-demis-hassabis\" target=\"_blank\" rel=\"noopener\">outlookbusiness.com<\/a>. 
<a href=\"#fnref-9\">\u21a9<\/a><\/p>\n\n\n\n<p><span id=\"fn-10\"><sup>10<\/sup><\/span> Demis Hassabis, CEO of Google DeepMind, speaking with Axios at the Paris AI Action Summit, February 2025: <a href=\"https:\/\/www.axios.com\/2025\/02\/14\/hassabis-google-ai-race-hazards\" target=\"_blank\" rel=\"noopener\">axios.com<\/a>. <a href=\"#fnref-10\">\u21a9<\/a><\/p>\n\n\n\n<p><span id=\"fn-11\"><sup>11<\/sup><\/span> Dario Amodei, \u201cThe Adolescence of Technology: Confronting and Overcoming the Risks of Powerful AI,\u201d January 2026: <a href=\"https:\/\/www.darioamodei.com\/essay\/the-adolescence-of-technology\" target=\"_blank\" rel=\"noopener\">darioamodei.com<\/a>. <a href=\"#fnref-11\">\u21a9<\/a><\/p>\n\n\n\n<p><span id=\"fn-12\"><sup>12<\/sup><\/span> OpenAI, \u201cGPT-4.5 System Card,\u201d February 2025: <a href=\"https:\/\/openai.com\/index\/gpt-4-5-system-card\" target=\"_blank\" rel=\"noopener\">openai.com\/index\/gpt-4-5-system-card<\/a>. Under OpenAI\u2019s Preparedness Framework, CBRN risk is scored Low\/Medium\/High\/Critical; GPT-4.5 was rated Medium for biological and chemical risks, with evaluators noting \u201csome meaningful uplift\u201d for acquiring CBRN-relevant information and constructing synthesis plans. See also OpenAI Preparedness Framework: <a href=\"https:\/\/openai.com\/safety\/preparedness\" target=\"_blank\" rel=\"noopener\">openai.com\/safety\/preparedness<\/a>. <a href=\"#fnref-12\">\u21a9<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>AI progress has been extraordinarily rapid over the past three years. Language models have moved from failing high-school math problems to contributing to open Erd\u0151s problems in ways that leading mathematicians have publicly praised.[1] This brief covers four trends policy makers should understand. 1. 
AI Progress is Accelerating Exponentially in Programming Anthropic, OpenAI, and Google [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-9","post","type-post","status-publish","format-standard","hentry","category-uncategorized"],"featured_image_src":null,"featured_image_src_square":null,"author_info":{"display_name":"ericnclaycom","author_link":"https:\/\/eric-clay.com\/?author=1"},"_links":{"self":[{"href":"https:\/\/eric-clay.com\/index.php?rest_route=\/wp\/v2\/posts\/9","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/eric-clay.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/eric-clay.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/eric-clay.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/eric-clay.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=9"}],"version-history":[{"count":0,"href":"https:\/\/eric-clay.com\/index.php?rest_route=\/wp\/v2\/posts\/9\/revisions"}],"wp:attachment":[{"href":"https:\/\/eric-clay.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=9"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/eric-clay.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=9"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/eric-clay.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=9"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}