Why We Built an Experiment Journal Instead of Another A/B Testing Tool
Last updated: April 12, 2026
The A/B testing market has more tools than it needs. Optimizely, VWO, LaunchDarkly, PostHog, GrowthBook, Convert, Kameleoon. All of them are good at running tests. Some of them are great at it. The technical problem of splitting traffic and measuring outcomes is solved.
What none of them solve is the question that actually matters: what did you learn?
What the Market Gets Right
The A/B testing industry has made experimentation accessible. Ten years ago, running a proper split test required engineering resources, statistical knowledge, and custom infrastructure. Today, you can install a snippet, create variants in a visual editor, and get results in a dashboard.
Feature flags, multi-armed bandits, server-side testing, statistical engines. The infrastructure is impressive. For large organizations with dedicated experimentation programs, these tools work well.
The market optimizes for two things: statistical rigor and analytics depth. More data, faster results, more confidence in which variant won. This is the right optimization for companies running hundreds of experiments per quarter with dedicated analysts.
What the Market Ignores
Here is the gap nobody talks about: running the test is 20% of the value. Understanding and documenting what the test means is 80%. And the entire market is built around the 20%.
Every A/B testing tool has a results page. Conversion rates, statistical significance, confidence intervals. The experiment ends. You see the numbers. You ship the winner. Then you start the next test from zero.
Where is the insight? Where is the connection to the experiment you ran last month? Where is the documentation of why you designed this test the way you did? Where is the reflection on what went wrong when the hypothesis failed?
It is in your head. Or in a Notion page you stopped updating in week two. Or nowhere.
The testing tools are optimized for throughput. More tests, faster results, bigger statistical confidence. For a solo founder running maybe two experiments per month, throughput is irrelevant. Understanding is everything.
The Moment That Started Blazeway
After six months and fifteen experiments across three tools, I had dashboards showing which variants won and what the conversion rates were. But simple questions remained unanswerable: Why did I test this headline? What did February's failed experiment reveal? How does this week's result connect to what I learned three months ago?
All data, zero knowledge.
I started keeping a manual experiment journal. A simple document where I wrote down the hypothesis before each test, the reasoning behind it, the result, and what I learned. Within a month, the journal was more valuable than any dashboard.
The data was identical. The journal forced me to think about what the data meant. It forced me to write connections between experiments. It forced me to articulate insights that would otherwise have evaporated the moment I shipped the winning variant.
Why "Journal" and Not "Platform"
The word matters.
A platform is infrastructure. You set it up, you run things on it, you check dashboards. The mental model is operational: how many tests am I running? What is my velocity? How confident is the result?
A journal is a thinking tool. You write in it. You reflect. You connect entries. You re-read past entries and see patterns you missed. The mental model is learning: what do I understand now that I did not understand before?
I built Blazeway as a journal because the problem I am solving is how you learn from experiments. And learning requires structure, reflection, and connection.
What an Experiment Journal Does Differently
Hypotheses as prose sentences. In most testing tools, the hypothesis is a label on a dashboard card. "Homepage CTA test." That tells you nothing about why you are running the test. In Blazeway, you write a full sentence: "I observe that visitors leave the pricing page without scrolling. I believe that adding a value summary above the fold will reduce bounce rate because users need to see the outcome before they evaluate the price." You do not stare at a blank text field. Blazeway guides you through each part of the sentence: what you observe, what you believe will change, and why. The structure does the thinking for you. Six months later, you can re-read that sentence and understand what you were thinking. This is the core of what makes an experiment journal work.
Experiments as chapters in a story. Instead of a flat list of tests, every experiment becomes a chapter in your product's story. Chapter 1 establishes a hypothesis. Chapter 2 tests a variation based on Chapter 1's insight. Chapter 3 tests the boundaries. The story structure makes the learning arc visible. Experiment chains reveal patterns that isolated tests never can.
Structured learning. Every experiment ends with a reflection step. Write what this result tells you about your users. This reflection is part of the workflow, not an optional add-on. When you close an experiment, you write the insight. The tool does not let you skip it.
Connections between experiments. When you create a new experiment, you can mark it as "builds on" a previous one. This creates an explicit knowledge chain. When you review your experiment history, you see a network of connected learning, not isolated data points.
AI pattern recognition. After enough documented experiments, AI analysis can surface patterns across your journal: recurring themes, contradictions between experiments, and strategic implications you have not drawn yourself. This only works because the entries are structured. Raw dashboard data cannot produce these insights. Documented, reflected-upon journal entries can.
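To make the ideas above concrete, here is a minimal sketch of what a structured journal entry and its "builds on" chain might look like as data. This is an illustration only, not Blazeway's actual schema; the field names (`hypothesis`, `setup`, `result`, `insight`, `builds_on`) are assumptions drawn from the structure described in this article.

```python
from dataclasses import dataclass

@dataclass
class Entry:
    """One journal entry: a structured record plus an optional link backward."""
    title: str
    hypothesis: str  # "I observe X. I believe Y will change because Z."
    setup: str       # what was changed and how it was measured
    result: str      # the measured outcome
    insight: str     # the required reflection: what this says about your users
    builds_on: "Entry | None" = None  # explicit knowledge chain to a prior entry

def learning_chain(entry):
    """Walk the builds_on links and return the chain oldest-first."""
    chain = []
    while entry is not None:
        chain.append(entry)
        entry = entry.builds_on
    return list(reversed(chain))

# Chapter 1 establishes a hypothesis; Chapter 2 builds on its insight.
ch1 = Entry(
    "Pricing value summary",
    "I observe that visitors leave the pricing page without scrolling. "
    "I believe a value summary above the fold will reduce bounce rate "
    "because users need to see the outcome before they evaluate the price.",
    "Added a summary block; measured bounce rate over two weeks.",
    "Bounce rate dropped.",
    "Users evaluate the outcome before the price.",
)
ch2 = Entry(
    "Outcome-first headline",
    "I believe an outcome-first headline will lift signups "
    "because Chapter 1 showed users evaluate the outcome first.",
    "Rewrote the headline; ran a split test.",
    "Signups lifted.",
    "Outcome framing generalizes beyond the pricing page.",
    builds_on=ch1,
)

for chapter, e in enumerate(learning_chain(ch2), start=1):
    print(f"Chapter {chapter}: {e.title} -> {e.insight}")
```

The point of the sketch is the `builds_on` field: because each entry records why it exists and what came before it, reviewing your history yields a connected learning arc rather than a flat list of results.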
The Bet I Am Making
The A/B testing market is commoditized. The tools compete on statistical engines, visual editors, and integration depth. They are all converging on the same feature set.
I am making a different bet. I believe that the value of experimentation is in the documented learning that accumulates over time. I believe that a solo founder who has documented twenty experiments with structured hypotheses, reflections, and connections knows more about their product than a company with a thousand unexamined test results.
Traffic splitting is a solved problem. Blazeway is a better way to think about what the traffic is telling you. The tracking is cookieless and GDPR-compliant by design, so your documented insights reflect your entire audience, not just the segment that accepted cookies.
The incumbents optimize for statistical rigor and analytics depth. I optimize for the documented learning path.
If that resonates, you are probably the person I built this for.
Key Takeaways
The technical problem of A/B testing is solved. The learning problem is not addressed by any major tool.
Most experimentation tools optimize for throughput: more tests, faster results. For solo founders, the bottleneck is understanding, not throughput.
An experiment journal captures the 80% of value that test results alone miss: the reasoning, the insight, and the connections.
Structured documentation compounds. Raw dashboard data does not.
The market competes on statistical engines. Blazeway competes on the documented learning path.
Frequently Asked Questions
Is Blazeway replacing A/B testing tools?
Blazeway includes built-in experiment tracking with cookieless, GDPR-compliant measurement. But the core value is the journal layer: structured hypotheses, learning reflections, experiment connections, and AI pattern analysis. If you already use a testing tool and want to keep it, you can use Blazeway purely for the documentation and learning system. If you want a simpler all-in-one setup, Blazeway handles the tracking too. Either way, the journal turns your experiments into building-in-public content naturally.
Do I need a lot of traffic for an experiment journal to be useful?
No. The journal is valuable even for manual experiments with no automated tracking at all. If you change your pricing page and manually track results over two weeks, the journal captures the hypothesis, the outcome, and the insight. The value is in the documentation, not in the statistical engine.
How is Blazeway different from keeping experiment notes in Notion?
Structure and connections. A Notion page is freeform. You can write anything, which means most people write nothing useful. Blazeway enforces the five-part structure (hypothesis, setup, result, insight, next action) and lets you connect experiments into one continuous story. The structure is what makes the documentation compound rather than accumulate.
Who is Blazeway built for?
Solo founders and indie hackers who run one to four experiments per month and want to learn from each one. If you run hundreds of experiments and need enterprise-grade statistical engines, Optimizely or VWO are built for that. If you want every experiment to make the next one smarter, Blazeway is built for you.
Blazeway is the experiment journal for solo founders. Every test starts with a guided hypothesis and ends with an insight that compounds.
Start free → Free plan available. No credit card.
Daniel Janisch
Founder of Blazeway. Indie builder focused on privacy-first product tooling for solo founders.