Rabbit Bricks is a research-style media property covering AI tools. We do not publish first-person testing claims unless we have run the tools in production for the duration we cite. Most of our coverage is built from publicly verifiable signals — vendor documentation, third-party rating platforms, market research, and community sentiment.
How we research
Every article begins with a verified factsheet. Before any draft is written, our research process pulls 10 to 20 specific facts, each of which must carry four fields: a primary source, a paraphrased quote, a confidence rating, and a category. Claims that cannot be backed by a factsheet entry are either hedged with phrases like “reportedly” or omitted from the final piece.
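The factsheet gate described above can be sketched as code. The field names below are illustrative, not our actual schema:

```python
from dataclasses import dataclass

@dataclass
class FactsheetEntry:
    claim: str       # the statement the article will make
    source: str      # primary source backing it (vendor page, G2, Statista, ...)
    quote: str       # paraphrased supporting quote
    confidence: str  # e.g. "high", "medium", "low"
    category: str    # e.g. "pricing", "features", "adoption"

def is_backed(entry: FactsheetEntry) -> bool:
    """An entry backs a claim only when all four fields are filled in."""
    return all([entry.source, entry.quote, entry.confidence, entry.category])
```

A claim that fails this check never runs unhedged; it is either softened or cut.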
Primary sources we rely on
Source priority order
- Vendor pages: Pricing tiers, feature lists, release dates, integration documentation
- G2 & Capterra: Aggregate user ratings, review counts, common complaint patterns
- Statista: Market size, adoption rates, segment growth figures
- HubSpot research: Industry surveys, marketer adoption studies
- Reddit communities: Real-world workflow discussions, complaint signals, recommended use cases
- Major news & academic: When relevant for context or independent verification
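One way to picture the priority order above is as a ranked list, where a claim is attributed to the highest-priority source that can support it. The identifiers here are illustrative, not an internal system:

```python
# Hypothetical encoding of the source priority order; labels are illustrative.
SOURCE_PRIORITY = [
    "vendor_pages",        # pricing, features, release dates
    "g2_capterra",         # aggregate ratings, complaint patterns
    "statista",            # market size, adoption figures
    "hubspot_research",    # industry surveys
    "reddit_communities",  # workflow discussions, complaint signals
    "news_academic",       # context, independent verification
]

def rank(source: str) -> int:
    """Lower rank means higher priority; unknown sources sort last."""
    return SOURCE_PRIORITY.index(source) if source in SOURCE_PRIORITY else len(SOURCE_PRIORITY)

def strongest(sources: list[str]) -> str:
    """Pick the highest-priority source available for a given claim."""
    return min(sources, key=rank)
```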
How we evaluate tools
Each tool is assessed across six dimensions: pricing transparency, feature breadth, market adoption, user reviews, vendor track record, and community sentiment. We do not run paid time-trials, simulated workflows, or A/B tests on production data — that level of access requires a paid relationship with the vendor that would compromise our editorial independence.
What we do instead is read the documentation, compare it against G2 and Capterra reviews, search Reddit for unsolicited operator feedback, and triangulate. When sources disagree, we say so.
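The triangulation step is simple in spirit: compare per-source scores on a common scale and surface disagreement rather than averaging it away. A minimal sketch, with an assumed tolerance threshold:

```python
def triangulate(ratings: dict[str, float], tolerance: float = 0.5) -> str:
    """Flag disagreement across sources instead of hiding it in an average.
    Ratings are assumed to share a scale (e.g. 5-point); tolerance is illustrative."""
    spread = max(ratings.values()) - min(ratings.values())
    return "sources disagree" if spread > tolerance else "sources agree"
```

When this check comes back “sources disagree,” the article says so explicitly rather than picking a side.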
Hedging and honesty
If a claim cannot be backed by the factsheet, we hedge. “Reportedly,” “according to,” and “in roughly the same range” appear deliberately throughout our pieces. The catch: hedged claims feel less authoritative, but they are honest. We would rather sound less certain than fabricate confidence we do not have.
Update cadence
AI tools evolve weekly. Articles in our active coverage are reviewed monthly. The “Last updated” field on each piece reflects the most recent factsheet refresh. Pricing and version data are the most volatile signals — we re-verify those before any major republication.
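The monthly review cadence can be expressed as a staleness check. The category labels and 30-day interval below are assumptions for illustration:

```python
from datetime import date, timedelta

# Categories we re-verify before any major republication (assumed labels).
VOLATILE = {"pricing", "version"}

def due_for_review(last_updated: date, today: date,
                   interval: timedelta = timedelta(days=30)) -> bool:
    """Active-coverage articles get a factsheet refresh roughly monthly."""
    return today - last_updated >= interval
```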
Affiliate disclosure
Some of our links are affiliate links. We may earn a commission when readers sign up through them, at no additional cost to the reader. Affiliate relationships do not influence which tools we cover or how we rank them. Editorial decisions are made before commercial relationships are explored. If a tool we recommend stops being available through an affiliate program, we do not remove the recommendation — and conversely, a generous payout does not earn a tool a higher tier.
Corrections
If you spot a factual error, send it to [email protected]. We log every correction in a public note at the bottom of the affected article and update the “Last updated” field accordingly.
What we do not do
We do not publish AI-generated content without an editorial review pass. We do not pad articles with fabricated personal anecdotes (“I spent three months testing…”) to chase E-E-A-T scores. We do not accept paid placement disguised as editorial coverage. We do not promise rankings to vendors.