{"id":42103,"date":"2026-03-23T08:50:03","date_gmt":"2026-03-23T15:50:03","guid":{"rendered":"https:\/\/maccelerator.la\/blog\/news-2\/growth-experiment-documentation-checklist\/"},"modified":"2026-03-23T08:50:03","modified_gmt":"2026-03-23T15:50:03","slug":"growth-experiment-documentation-checklist","status":"publish","type":"post","link":"https:\/\/maccelerator.la\/en\/blog\/entrepreneurship\/growth-experiment-documentation-checklist\/","title":{"rendered":"Checklist For Growth Experiment Documentation"},"content":{"rendered":"\n<p>Growth experiments are only as effective as the insights they generate. Without proper documentation, teams risk wasting time, repeating mistakes, and misinterpreting results. Research shows that companies with strong documentation practices achieve <strong>15% greater cumulative ARR impact<\/strong> and <strong>2x ROI<\/strong> in B2B SaaS compared to those that don&#8217;t document their tests.<\/p>\n<p>This article provides a step-by-step checklist to help you systematically document growth experiments, from hypothesis creation to post-analysis reporting. By following these steps, you can reduce errors, speed up decision-making, and turn every test into actionable knowledge.<\/p>\n<h3 id=\"key-takeaways\" tabindex=\"-1\">Key Takeaways:<\/h3>\n<ul>\n<li><strong>Pre-Experiment<\/strong>: Define goals, assign roles, and back up hypotheses with data.<\/li>\n<li><strong>Hypothesis Design<\/strong>: Write clear, testable hypotheses and pre-register success criteria.<\/li>\n<li><strong>Execution<\/strong>: Ensure flawless implementation and quality assurance to avoid skewed results.<\/li>\n<li><strong>Analysis<\/strong>: Summarize results, extract learnings, and plan next steps for scaling or iteration.<\/li>\n<\/ul>\n<p><strong>Pro Tip<\/strong>: Use AI tools to automate documentation, saving time and ensuring accuracy. Teams using AI report <strong>35% faster experiment cycles<\/strong> and <strong>60% less administrative work<\/strong>.<\/p>\n<h3 id=\"why-this-matters\" tabindex=\"-1\">Why This Matters:<\/h3>\n<p>Proper documentation builds a repository of insights that competitors can&#8217;t replicate, creating a long-term advantage. It transforms isolated wins into repeatable strategies and keeps teams aligned on what drives growth.<\/p>\n<p>Read on for detailed checklists to optimize every stage of your experimentation process.<\/p>\n<figure>         <img decoding=\"async\" src=\"https:\/\/assets.seobotai.com\/undefined\/69c0f5451b352ff267cbb4da-1774255287427.jpg\" alt=\"4-Stage Growth Experiment Documentation Checklist\" style=\"width:100%;\" title=\"\"><figcaption style=\"font-size: 0.85em; text-align: center; margin: 8px; padding: 0;\">\n<p style=\"margin: 0; padding: 4px;\">4-Stage Growth Experiment Documentation Checklist<\/p>\n<\/figcaption><\/figure>\n<h2 id=\"pre-experiment-documentation-checklist\" tabindex=\"-1\" class=\"sb h2-sbb-cls\">Pre-Experiment Documentation Checklist<\/h2>\n<p>Before kicking off any experiment, having clear documentation is key to keeping the team aligned. Want to streamline your process? Consider joining our <a href=\"#eluid160000aa\" style=\"display: inline;\">AI Acceleration Newsletter<\/a> for tips on AI-driven documentation strategies. 
Skipping this step can lead to what we call &quot;test orphans&quot; &#8211; experiments that launch but are forgotten because no one knows who owns them or what they were supposed to achieve.<\/p>\n<h3 id=\"define-experiment-details\" tabindex=\"-1\">Define Experiment Details<\/h3>\n<p>Start by giving your experiment a <strong>unique Experiment ID<\/strong> and a descriptive title that explains the &quot;What&quot; in plain terms. For example, titles like &quot;Improve CTR from Search&quot; or &quot;Reduce Checkout Abandonment on Mobile&quot; make the goal clear. Your description should provide enough detail so that even six months later, someone can understand exactly what was tested. Be sure to specify the <strong>growth area<\/strong> you\u2019re targeting &#8211; Acquisition, Activation, Retention, Revenue, or Referral &#8211; and how the experiment ties into your company\u2019s broader strategy.<\/p>\n<p>Nail down your <strong>target audience and scope<\/strong>. This includes defining the markets (e.g., &quot;US only&quot;), customer types (New, Existing, Paid), and eligibility criteria (specific user actions or characteristics). Add any technical details, such as the platform (Web, iOS, Android), the test location (Checkout, Landing Page), and the components being modified (Form, Image, Value Proposition). Use a <strong>test duration estimator<\/strong> to calculate how long the experiment should run to achieve statistical significance based on audience size and expected lift. If you\u2019re using prioritization frameworks like ICE (Impact, Confidence, Ease) or PIE, document your scores to justify the resources being allocated.<\/p>\n<h3 id=\"assign-roles-and-stakeholders\" tabindex=\"-1\">Assign Roles and Stakeholders<\/h3>\n<p>Every experiment needs an <strong>Experiment Owner<\/strong> &#8211; someone who oversees the process from start to finish, including sign-offs, monitoring progress, and addressing alerts. Clearly identify key contributors from teams like Design, Data Science, Analytics, Engineering, and Legal, and specify their roles. For instance, the Data Analyst might validate statistical confidence and estimate the financial impact, while the UX\/CRM Lead takes charge of implementing successful variants into the final product.<\/p>\n<blockquote>\n<p>&quot;An experimentation design document is crucial in helping an experiment owner think through all cases &#8211; to frame, provide clarity, and democratize knowledge cross functionally.&quot; &#8211; Satheesh Kumar, Growth Engineering<\/p>\n<\/blockquote>\n<p>Assign AARRR roles (Acquisition, Activation, Retention, Revenue, Referral) to ensure everyone understands their responsibilities. Also, identify dependencies early on. If you\u2019ll need engineering or design resources, document those needs during the planning stage to avoid delays once the test is underway.<\/p>\n<h3 id=\"include-supporting-research\" tabindex=\"-1\">Include Supporting Research<\/h3>\n<p>Back up your hypothesis with solid <strong>evidence<\/strong>. Link to results from past experiments or &quot;priors&quot; to avoid repeating mistakes and to build on existing knowledge. 
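<\/p>\n<p>If past experiments live in a shared growth log, surfacing those priors can be automated. A minimal sketch, assuming a hypothetical growth_log.jsonl file that stores one archived experiment per line with tags and learnings fields:<\/p>\n<pre><code class=\"language-python\"># Sketch: surface priors from a growth log kept as one JSON object per line.\n# The file name and the 'tags' \/ 'learnings' fields are assumptions for illustration.\nimport json\n\ndef find_priors(path, tag):\n    priors = []\n    with open(path, encoding='utf-8') as f:\n        for line in f:\n            entry = json.loads(line)\n            if tag in entry.get('tags', []):\n                priors.append((entry['experiment_id'], entry.get('learnings', '')))\n    return priors\n\nfor exp_id, learning in find_priors('growth_log.jsonl', 'checkout'):\n    print(exp_id, '-', learning)\n<\/code><\/pre>\n<p>Even a lightweight log like this keeps hard-won learnings searchable instead of buried in slide decks. 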
Combine quantitative data, like drop-off rates, with qualitative insights from user interviews or support tickets to create a well-rounded understanding.<\/p>\n<p>Use a structured <strong>problem statement<\/strong> to frame your hypothesis: &quot;This is a [problem\/opportunity] because [assumptions about value based on data].&quot; For example: &quot;Checkout abandonment on mobile is 42% higher than desktop because user interviews reveal friction with the payment form layout.&quot; Incorporate competitor insights, market benchmarks, or third-party research when applicable. This ensures your experiment is grounded in data, not just ideas, and helps determine if a formal test is even necessary &#8211; sometimes the evidence is compelling enough to implement changes directly without testing.<\/p>\n<p>Once this foundation is in place, you\u2019re ready to move on to crafting your hypothesis and experiment design.<\/p>\n<h2 id=\"hypothesis-and-design-checklist\" tabindex=\"-1\" class=\"sb h2-sbb-cls\">Hypothesis and Design Checklist<\/h2>\n<p>Once your pre-experiment documentation is ready, the next step is crafting a hypothesis that&#8217;s clear and testable. A poorly defined hypothesis can lead to unclear results and wasted effort. The aim is to connect your independent variable (what you&#8217;re changing) to the dependent variable (what you expect to happen) with reasoning grounded in research. Expert teams, like those at <a href=\"https:\/\/maccelerator.com\" style=\"display: inline;\" target=\"_blank\" rel=\"noopener nofollow external noreferrer\" data-wpel-link=\"external\">M Accelerator<\/a>, emphasize the importance of precise documentation to guide experiments effectively.<\/p>\n<h3 id=\"write-a-testable-hypothesis\" tabindex=\"-1\">Write a Testable Hypothesis<\/h3>\n<p>A strong hypothesis builds on your detailed planning and must be both clear and falsifiable. Skip vague &quot;If\/Then&quot; statements and use this format instead: <strong>&quot;We believe [change] will [outcome] because [reasoning based on user knowledge].&quot;<\/strong><\/p>\n<p>For example: <em>&quot;We believe adding personalized product suggestions at checkout will increase average revenue per user by 15% because user interviews indicate that customers often struggle to find complementary products quickly.&quot;<\/em> This approach ties your prediction to user insights, ensuring it\u2019s grounded in evidence.<\/p>\n<p>Your hypothesis should also be <strong>falsifiable<\/strong> &#8211; meaning you should be able to observe outcomes that could disprove it. Clearly document the independent variable (what you\u2019re changing), the dependent variable (the metric you\u2019re measuring), the expected direction (increase or decrease), and the research insights that shaped your thinking.
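<\/p>\n<p>Captured as data, such a hypothesis might look like the following minimal sketch; the class and field names are illustrative, not a required format.<\/p>\n<pre><code class=\"language-python\"># Sketch: a structured, falsifiable hypothesis mirroring the format above.\nfrom dataclasses import dataclass\n\n@dataclass\nclass Hypothesis:\n    change: str       # independent variable: what you are changing\n    outcome: str      # expected effect on the dependent variable\n    reasoning: str    # the research insight behind the prediction\n    metric: str       # the measurement that could disprove it\n    direction: str    # 'increase' or 'decrease'\n\n    def statement(self):\n        return f'We believe {self.change} will {self.outcome} because {self.reasoning}.'\n\nh = Hypothesis(\n    change='adding personalized product suggestions at checkout',\n    outcome='increase average revenue per user by 15%',\n    reasoning='interviews show customers struggle to find complementary products',\n    metric='average revenue per user',\n    direction='increase',\n)\nprint(h.statement())\n<\/code><\/pre>\n<p>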
As Satheesh Kumar from Growth Engineering explains: <em>&quot;Clear learnings come only from clear hypotheses.&quot;<\/em><\/p>\n<h3 id=\"define-metrics-and-decision-criteria\" tabindex=\"-1\">Define Metrics and Decision Criteria<\/h3>\n<p>Organize your metrics into four key categories:<\/p>\n<ul>\n<li><strong>Primary<\/strong>: Your main success indicator, such as the trial-to-paid conversion rate.<\/li>\n<li><strong>Secondary<\/strong>: Supporting behaviors, like clicks on an &quot;Upgrade&quot; button.<\/li>\n<li><strong>Guardrail<\/strong>: Metrics that must not decline, such as churn or unsubscribe rates.<\/li>\n<li><strong>Data Quality<\/strong>: Indicators like Sample Ratio Mismatch to ensure reliable tracking.<\/li>\n<\/ul>\n<p>Before launching the experiment, pre-register your decision criteria to avoid bias after seeing the results. Use a straightforward format, such as: <em>&quot;If Variant A improves the primary metric by \u226510% with neutral or positive effects on guardrail metrics, we will roll out to 100%.&quot;<\/em><\/p>\n<p>Also, define auto-fail conditions. For instance, if the results fall within a neutral zone of \u00b12% after two weeks, terminate the test to avoid wasting time on inconclusive data. Companies with well-developed experimentation programs often see a 15\u201325% annual boost in ARR by setting clear success criteria and acting promptly on results. Now, outline how you\u2019ll compare the control and treatment groups.<\/p>\n<h3 id=\"document-experiment-variants\" tabindex=\"-1\">Document Experiment Variants<\/h3>\n<p>Detail the differences between your Control (baseline) and Treatment (variation) groups. Include visual aids, such as Figma mockups or screenshots, to make these distinctions clear. Specify key details like target market, customer type, device, and platform. Define the <strong>unit of randomization<\/strong>, whether it\u2019s based on User ID, Device ID, or Email recipient, and clarify when users are assigned to a group.<\/p>\n<p>Outline your traffic split, required sample size, and anticipated test duration. For example, if your current conversion rate is 5% and you\u2019re aiming for a 15% relative lift, you might need 8,000 visitors per variant over 14 days to reach 95% confidence. Additionally, document your start and stop rules, whether you\u2019re using fixed durations, fixed sample sizes, or sequential testing boundaries. This ensures that everyone involved knows exactly when the experiment will conclude and under what conditions.<\/p>\n<h2 id=\"execution-and-quality-assurance-checklist\" tabindex=\"-1\" class=\"sb h2-sbb-cls\">Execution and Quality Assurance Checklist<\/h2>\n<p>With your design ready to go, the next step is ensuring flawless execution and thorough quality checks to validate your hypothesis. Once the experiment is live, the hard work truly begins. Proper execution and quality assurance are essential to avoid costly mistakes and ensure the data you collect is trustworthy. Skipping these steps risks wasting time and resources on flawed tests that lead to misleading conclusions.<\/p>\n<h3 id=\"implementation-details\" tabindex=\"-1\">Implementation Details<\/h3>\n<p>Keep a detailed record of every technical aspect of your experiment setup.
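<\/p>\n<p>One technical detail worth making precise is how units end up in each group. A minimal sketch, assuming deterministic hash-based assignment; the experiment ID, unit ID, and 50\/50 split are illustrative:<\/p>\n<pre><code class=\"language-python\"># Sketch: deterministic variant assignment keyed on the unit of randomization.\n# Hashing experiment ID + unit ID yields a stable, reproducible bucket.\nimport hashlib\n\ndef assign_variant(experiment_id, unit_id, treatment_share=0.5):\n    key = f'{experiment_id}:{unit_id}'.encode('utf-8')\n    bucket = int(hashlib.sha256(key).hexdigest(), 16) % 10000\n    return 'treatment' if bucket &lt; treatment_share * 10000 else 'control'\n\nprint(assign_variant('EXP-2026-014', 'user-84213'))  # same input, same variant\n<\/code><\/pre>\n<p>Because the hash is stable, the same unit always lands in the same bucket, with no assignment state to store. 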
Start with the <strong>unit of randomization<\/strong> &#8211; whether you&#8217;re grouping users by User ID, Device ID, or session &#8211; and note the exact moment users are assigned to a group. Include platform-specific details like geo-targeting, device types (e.g., iOS vs. Android), and browser requirements. Document the tools you\u2019re using, any custom code, and the time it took to build the experiment. This serves as a handy reference in case something breaks or if you need to replicate the test later.<\/p>\n<p>Also, plan your <strong>progressive ramp-up schedule<\/strong> carefully. For example, begin with 10% of traffic, then increase to 50%, and finally to 100%. This gradual rollout helps you catch potential issues early, minimizing the risk of widespread problems.<\/p>\n<h3 id=\"perform-quality-assurance\" tabindex=\"-1\">Perform Quality Assurance<\/h3>\n<p>Before launching, create a detailed QA checklist. Make sure your traffic split matches your intended allocation (e.g., 50\/50) and test all variants across different browsers, devices, and user states (such as new users versus logged-in users). Verify that <strong>event tracking works correctly<\/strong> for both control and treatment groups, and ensure data flows seamlessly into your analytics tools. Check page load speeds to spot any delays and monitor API calls to prevent anything from skewing your results.<\/p>\n<p>For more complex setups, consider running an <strong>A\/A test<\/strong> first. This means showing the same experience to both groups to confirm your data collection is functioning properly before introducing any changes. Set up automated alerts to flag anomalies and establish clear rollback procedures. If guardrail metrics &#8211; like subscriber cancellations or page load times &#8211; start to worsen, you\u2019ll need a plan to halt the test immediately.<\/p>\n<h3 id=\"set-success-criteria\" tabindex=\"-1\">Set Success Criteria<\/h3>\n<p>Define your success criteria upfront to avoid bias. Use a simple format, like: <em>&quot;If the primary metric improves by \u226510% and guardrail metrics remain stable or positive, we\u2019ll scale to 100%.&quot;<\/em> Clearly outline <strong>auto-fail conditions<\/strong>, such as ending the test if results fall within a neutral range (e.g., \u00b12%) after reaching the target sample size.<\/p>\n<p>Document your stop rules &#8211; whether that\u2019s a fixed duration (e.g., 14 days), a set sample size (e.g., 10,000 users per variant), or sequential testing limits. Also, keep an eye out for <strong>Sample Ratio Mismatch (SRM)<\/strong>, which can indicate problems with your randomization process. Since about 70% of A\/B tests fail to show meaningful differences, having clear criteria for inconclusive results prevents endless debates over marginal data and keeps your team moving forward.<\/p>\n<h2 id=\"analysis-and-reporting-checklist\" tabindex=\"-1\" class=\"sb h2-sbb-cls\">Analysis and Reporting Checklist<\/h2>\n<p>Transforming raw data into actionable insights is the cornerstone of effective analysis. To streamline this process, organize your results into three categories: <strong>Primary metrics<\/strong> (your main goal), <strong>Secondary metrics<\/strong> (supporting indicators), and <strong>Guardrail metrics<\/strong> (to catch any unintended side effects). These metrics help uncover the nuances of test performance, guiding better decision-making. 
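<\/p>\n<p>The Sample Ratio Mismatch check called out above is straightforward to automate with a chi-square goodness-of-fit test. A minimal sketch using scipy; the counts and the 0.001 alert threshold are illustrative:<\/p>\n<pre><code class=\"language-python\"># Sketch: sample ratio mismatch (SRM) check via chi-square goodness of fit.\nfrom scipy.stats import chisquare\n\nobserved = [50800, 49200]              # units actually assigned to control \/ treatment\ntotal = sum(observed)\nexpected = [total * 0.5, total * 0.5]  # the intended 50\/50 split\n\nstat, p_value = chisquare(f_obs=observed, f_exp=expected)\nif p_value &lt; 0.001:  # a common, conservative SRM alert threshold\n    print(f'Possible SRM (p = {p_value:.2g}): investigate before trusting results')\nelse:\n    print(f'No SRM detected (p = {p_value:.2g})')\n<\/code><\/pre>\n<p>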
Be sure to check for data quality issues like Sample Ratio Mismatch (SRM), which can signal traffic allocation problems. Additionally, record key statistical parameters &#8211; p-values, confidence intervals, and power &#8211; to ensure your results are reliable. Once your test is complete, follow these steps to turn the data into meaningful insights.<\/p>\n<h3 id=\"summarize-results\" tabindex=\"-1\">Summarize Results<\/h3>\n<p>Start by comparing the actual outcomes to your original hypothesis. Highlight differences between expectations and reality, including impacts on revenue and segmented performance. For instance, if you anticipated a 15% lift in sign-ups but achieved 18%, document that success. On the flip side, if conversion rates dropped by 3%, note it down. Connect these behavioral changes to financial metrics like <strong>Customer Acquisition Cost (CAC)<\/strong> payback or shifts in <strong>Average Revenue Per User (ARPU)<\/strong>. Break down results by segments &#8211; new versus returning users, iOS versus Android, or even geographic regions. This segmentation can reveal cases where a variant performs well overall but struggles in specific groups. To enhance communication with stakeholders, include screenshots of your Control and Treatment variants along with graphs that illustrate data trends.<\/p>\n<h3 id=\"extract-key-learnings\" tabindex=\"-1\">Extract Key Learnings<\/h3>\n<p>Focus on understanding not just what happened, but <em>why<\/em>. If your hypothesis was supported, identify which assumptions held true. If it was disproven, pinpoint where your reasoning fell short. Look for patterns across user segments and consider alternative explanations for the observed results.<\/p>\n<blockquote>\n<p>&quot;Understanding the why of the results means you&#8217;ve achieved a learning outcome that can be reinvested back into the business, shared with other teams, and generally improve the likelihood of success of others inside the company.&quot; &#8211; Adam Fishman<\/p>\n<\/blockquote>\n<p>Categorize your insights into two groups: &quot;high-value&quot; (those that could change strategy) and &quot;informational&quot; (those that refine tactics). This helps teams prioritize what matters most. Keep in mind that around 70% of A\/B tests fail to show meaningful differences, so even neutral results can provide valuable lessons. These insights will guide your next steps.<\/p>\n<h3 id=\"plan-next-steps\" tabindex=\"-1\">Plan Next Steps<\/h3>\n<p>Decide whether to scale, iterate, or stop. If your hypothesis was validated with a strong positive result, outline how to implement the winning variant permanently &#8211; update your CRM, adjust product settings, and document the new approach in your playbook. For inconclusive results, refine your hypothesis and plan a follow-up test with adjusted variables or a larger sample size. If the experiment had a negative impact or revealed significant risks, archive the findings to avoid repeating mistakes. Define your next steps <em>before<\/em> diving into the analysis to avoid bias, and store all findings in a centralized Growth Log.<\/p>\n<blockquote>\n<p>&quot;Wins become playbooks. Losses become insights. Either way, you&#8217;ve reduced uncertainty.&quot; &#8211; James Praise<\/p>\n<\/blockquote>\n<h2 id=\"conclusion\" tabindex=\"-1\" class=\"sb h2-sbb-cls\">Conclusion<\/h2>\n<h3 id=\"key-takeaways-1\" tabindex=\"-1\">Key Takeaways<\/h3>\n<p>A well-structured documentation process is the backbone of effective experimentation. 
From pre-launch planning to post-analysis reporting, documenting every step ensures that each test contributes to a growing knowledge base. This approach not only sharpens your understanding of what drives revenue but also helps teams work more efficiently. With automated workflows and organized documentation, teams report <strong>faster experiment cycles<\/strong> and <strong>60% less administrative overhead<\/strong>, allowing them to focus on running more tests and gathering actionable insights.<\/p>\n<p>The real advantage lies in treating documentation as an <strong>evidence moat<\/strong>. While competitors can mimic your outward strategies, they can&#8217;t replicate the depth of insights you&#8217;ve gained from consistent testing. Leading growth organizations achieve a learning velocity of <strong>0.7 to 1.0 validated insights per working day<\/strong>, constantly feeding new knowledge into their systems. This compounding effect is what differentiates scalable teams from those stuck in reactive marketing efforts.<\/p>\n<h3 id=\"enable-scalable-experimentation\" tabindex=\"-1\">Enable Scalable Experimentation<\/h3>\n<p>Documentation turns isolated successes into repeatable strategies and helps avoid repeating past mistakes. By centralizing findings in a Growth Log and pre-registering decision criteria, you create a scalable infrastructure that grows with your team.<\/p>\n<p>At <a href=\"https:\/\/maccelerator.com\" style=\"display: inline;\" target=\"_blank\" rel=\"noopener nofollow external noreferrer\" data-wpel-link=\"external\">M Studio<\/a>, we specialize in helping founders automate this process. Using AI-powered GTM systems, we log experiments, analyze results, and uncover patterns in growth data. Through our Elite Founders program, we work closely with teams in weekly sessions to build these automated systems &#8211; moving from manual spreadsheets to scalable, data-driven solutions.<\/p>\n<blockquote>\n<p>&quot;Sustainable growth doesn&#8217;t come from a single win. It comes from a machine that keeps producing them.&quot; &#8211; James Praise, Product &amp; Growth Marketer<\/p>\n<\/blockquote>\n<h2 id=\"faqs\" tabindex=\"-1\" class=\"sb h2-sbb-cls\">FAQs<\/h2>\n<h3 id=\"what-should-every-experiment-doc-include\" tabindex=\"-1\" data-faq-q>What should every experiment doc include?<\/h3>\n<p>Every experiment document should cover a few essential elements to ensure it\u2019s clear and leads to meaningful insights:<\/p>\n<ul>\n<li><strong>Problem Statement<\/strong>: Clearly explain the business goal, the customer issue being addressed, and the reasoning behind pursuing this opportunity.<\/li>\n<li><strong>Hypothesis<\/strong>: Outline the expected outcomes and the logic behind them.<\/li>\n<li><strong>Metrics<\/strong>: Define what success looks like and the key measurements that will track progress.<\/li>\n<li><strong>Design<\/strong>: Detail the methodology, tools, and step-by-step process for conducting the experiment.<\/li>\n<li><strong>Results<\/strong>: Provide a concise summary of the findings and outline the next steps based on those results.<\/li>\n<\/ul>\n<p>Including these components helps teams work together effectively and ensures resources are used wisely.<\/p>\n<h3 id=\"how-do-i-pick-primary-vs-guardrail-metrics\" tabindex=\"-1\" data-faq-q>How do I pick primary vs. 
guardrail metrics?<\/h3>\n<p>When setting up an experiment, start by picking a <strong>primary metric<\/strong> that reflects your main business objective. For instance, this could be revenue, conversion rate, or user retention &#8211; whatever best represents success for your goals. Alongside this, use <strong>guardrail metrics<\/strong> to keep an eye on any unintended consequences, like a decrease in customer satisfaction or a rise in churn. By tracking both, you can push for growth while protecting other essential parts of your business.<\/p>\n<h3 id=\"when-should-i-stop-scale-or-rerun-a-test\" tabindex=\"-1\" data-faq-q>When should I stop, scale, or rerun a test?<\/h3>\n<p>When running a test, it\u2019s crucial to stop once you\u2019ve collected enough reliable data to either confirm or disprove your hypothesis. If the results are unclear or show irregularities, consider rerunning the test to ensure accuracy. If the test delivers a clear and positive outcome, backed by proper controls and statistical analysis, it\u2019s time to scale it. Throughout the process, keep a close eye on progress to ensure the experiment yields actionable insights.<\/p>\n<h2>Related Blog Posts<\/h2>\n<ul>\n<li><a href=\"\/en\/blog\/entrepreneurship\/ultimate-guide-to-ab-testing-onboarding-flows\/\" style=\"display: inline;\" data-wpel-link=\"internal\">Ultimate Guide to A\/B Testing Onboarding Flows<\/a><\/li>\n<li><a href=\"\/en\/blog\/entrepreneurship\/its-not-10000-hours-its-10000-iterations\/\" style=\"display: inline;\" data-wpel-link=\"internal\">It\u2019s not 10,000 hours, it\u2019s 10,000 iterations<\/a><\/li>\n<li><a href=\"\/en\/blog\/entrepreneurship\/13-ab-testing-mistakes-to-avoid\/\" style=\"display: inline;\" data-wpel-link=\"internal\">13 A\/B Testing Mistakes To Avoid<\/a><\/li>\n<li><a href=\"\/en\/blog\/entrepreneurship\/checklist-for-partnership-compliance-audits\/\" style=\"display: inline;\" data-wpel-link=\"internal\">Checklist for Partnership Compliance Audits<\/a><\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>A practical checklist to document growth experiments end-to-end so teams capture learnings, avoid repeated mistakes, and scale proven 
tests.<\/p>\n","protected":false},"author":14,"featured_media":42101,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1271],"tags":[],"class_list":["post-42103","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-entrepreneurship"],"_links":{"self":[{"href":"https:\/\/maccelerator.la\/en\/wp-json\/wp\/v2\/posts\/42103","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/maccelerator.la\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/maccelerator.la\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/maccelerator.la\/en\/wp-json\/wp\/v2\/users\/14"}],"replies":[{"embeddable":true,"href":"https:\/\/maccelerator.la\/en\/wp-json\/wp\/v2\/comments?post=42103"}],"version-history":[{"count":0,"href":"https:\/\/maccelerator.la\/en\/wp-json\/wp\/v2\/posts\/42103\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/maccelerator.la\/en\/wp-json\/wp\/v2\/media\/42101"}],"wp:attachment":[{"href":"https:\/\/maccelerator.la\/en\/wp-json\/wp\/v2\/media?parent=42103"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/maccelerator.la\/en\/wp-json\/wp\/v2\/categories?post=42103"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/maccelerator.la\/en\/wp-json\/wp\/v2\/tags?post=42103"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}