Optimization Nov 2025

A/B Testing Subject Lines Is Not Enough

Every email marketer runs subject line A/B tests. "Should we use an emoji?" "Question vs statement?" "Short vs long?" The tests are fine, but the lifts are small: typically a 2-5% improvement in open rate, which barely moves revenue.

The bigger opportunity is testing what is inside the email.

Content block testing

Instead of testing the entire email as variant A vs variant B, test individual blocks within the email. The hero image. The CTA button color and text. The product grid layout. The social proof section.

This is how we found that customer review blocks placed above the product grid outperform the reverse layout by 22% in click-through. That single change affected every campaign and flow email. The compound effect over a year was a 6-figure revenue increase for one account.

How to set it up

Most ESPs support basic A/B testing but not block-level testing. We build this using dynamic content blocks with conditional rendering.

Create two versions of a content block (say, the hero section). Assign subscribers randomly to group A or group B using a custom property. Render the appropriate block based on that property.
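One way to implement the random assignment is a deterministic hash of the subscriber ID plus a test name, so the split stays stable across sends and independent between concurrent tests. A minimal sketch (the function and test names are ours, not any specific ESP's API):

```python
import hashlib

def assign_variant(subscriber_id: str, test_name: str, variants=("A", "B")) -> str:
    """Deterministically bucket a subscriber into a variant.

    Hashing the subscriber ID together with the test name keeps the
    split stable across sends and uncorrelated between tests.
    """
    digest = hashlib.sha256(f"{test_name}:{subscriber_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# The same subscriber always lands in the same group for a given test.
group = assign_variant("sub_12345", "hero_block_test")
```

Store the result as the custom property on the subscriber profile, then key the conditional rendering off that property.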

Track clicks on each block variant separately. After reaching statistical significance (we require a minimum of 1,000 sends per variant and 95% confidence), promote the winner.
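The significance check itself can be a standard two-proportion z-test on click rates. A sketch under the thresholds above (1,000 sends minimum, 95% confidence, i.e. a critical z of 1.96):

```python
from math import sqrt

def is_significant(clicks_a: int, sends_a: int, clicks_b: int, sends_b: int,
                   min_sends: int = 1000, z_crit: float = 1.96) -> bool:
    """Two-proportion z-test on click rates at 95% confidence.

    Returns True only when both variants cleared the minimum sample
    size AND the difference in click rate is statistically significant.
    """
    if sends_a < min_sends or sends_b < min_sends:
        return False
    p_a, p_b = clicks_a / sends_a, clicks_b / sends_b
    p_pool = (clicks_a + clicks_b) / (sends_a + sends_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / sends_a + 1 / sends_b))
    if se == 0:
        return False
    return abs(p_a - p_b) / se >= z_crit
```

For example, 3.0% vs 4.2% click rate at 2,000 sends per variant clears the bar; the same rates at 500 sends do not even get evaluated.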

What to test first

CTA placement and copy. "Shop Now" vs "See the Collection" vs "Get Yours" may sound trivial, but we have seen 18% differences in click rate between CTA variants.

Product grid layout. Single hero product vs 3-product grid vs 6-product grid. The winner varies by brand, but the difference is usually 10-20% in revenue per email.

Social proof format. Star ratings vs written reviews vs UGC photos. UGC photos outperform star ratings by 31% in our data across DTC brands.

Send time testing

Beyond content, test send times at the individual level. Most ESPs have send-time optimization built in, but the algorithms vary in quality. We test manually: split your list into 4 cohorts, send at 8am, 11am, 2pm, and 6pm. Measure by click rate over 4 weeks. The optimal time is usually not what you expect.
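The manual send-time test follows the same pattern as the block test: a stable four-way cohort split, then compare cumulative click rates after the measurement window. A sketch (cohort times and helper names are illustrative):

```python
import hashlib

SEND_TIMES = ("08:00", "11:00", "14:00", "18:00")

def send_time_cohort(subscriber_id: str) -> str:
    """Split the list into four stable cohorts, one per candidate send time."""
    digest = hashlib.sha256(f"send_time:{subscriber_id}".encode()).hexdigest()
    return SEND_TIMES[int(digest, 16) % len(SEND_TIMES)]

def best_send_time(results: dict) -> str:
    """results maps send time -> (clicks, sends); pick the highest click rate."""
    return max(results, key=lambda t: results[t][0] / results[t][1])

# After 4 weeks, compare cumulative click rates per cohort.
stats = {"08:00": (120, 4000), "11:00": (150, 4000),
         "14:00": (135, 4000), "18:00": (110, 4000)}
winner = best_send_time(stats)
```

The same significance check used for content blocks applies here before you commit the whole list to the winning time.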

Build a testing calendar

Run one test per flow per month. Document everything: hypothesis, variants, sample size, result, revenue impact. After 6 months, you will have a library of proven optimizations that compound across your entire program.
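The documentation step is easy to automate. One lightweight option, assuming nothing beyond a shared CSV (the field names mirror the list above; the file path is our choice):

```python
import csv
from dataclasses import dataclass, asdict, fields

@dataclass
class TestRecord:
    flow: str            # e.g. "welcome", "abandoned_cart"
    hypothesis: str
    variants: str        # short description of A vs B
    sample_size: int     # sends per variant
    result: str          # winner and observed lift
    revenue_impact: str  # estimated revenue change

def log_test(record: TestRecord, path: str = "test_log.csv") -> None:
    """Append one completed test to a CSV log, writing a header on first use."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(TestRecord)])
        if f.tell() == 0:
            writer.writeheader()
        writer.writerow(asdict(record))
```

Six months of one-test-per-flow-per-month discipline turns this file into the optimization library described above.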

Want us to build this for you?

We implement the strategies we write about. If you want these systems running on your account, get in touch.

Start a Project