That’s a great question! We’re never “sure” that any individual person we say is engaged will click, nor that any person we say is unengaged would never have clicked. Orders are an extremely rare event relative to the number of emails you send.
We talk a lot about Machine Learning, but the way to think about what we do is as a constant risk-reward analysis over a long time horizon. Some of the questions our model tries to answer:
- What is the likelihood of this profile taking a positive action right now (e.g. click, purchase), versus the likelihood of them taking a negative action (e.g. unsubscribe, click spam, or simply not clicking, which depresses your global click rate)? There’s a toy sketch of this trade-off right after this list.
- What is the unique “engagement window” for a brand, and how close is a subscriber to re-entering that window? Say it’s September, a subscriber hasn’t engaged in two months, and November is by far the best time to email someone … maybe you should consider not risking the unsubscribe and just waiting two months to email!
- How is the general email program performing, and do we need to hyper-focus on an engaged segment to get back in the good graces of Gmail, Yahoo, and other email inbox providers?
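To make that risk-reward framing concrete, here’s a minimal sketch in Python. Every probability and payoff weight below is a hypothetical illustration; the real model learns these values per brand rather than hard-coding them.

```python
# Minimal sketch of the send/suppress risk-reward calculation.
# All weights and probabilities are hypothetical, for illustration only.

def send_score(p_click: float, p_unsub: float, p_spam: float) -> float:
    """Expected value of emailing this profile right now.

    Hypothetical payoffs: a click is worth +1, an unsubscribe -5 (lost
    future revenue), a spam complaint -50 (deliverability damage), and a
    delivered-but-ignored email -0.01 (it quietly drags down the click
    rate that inbox providers watch).
    """
    p_ignore = 1.0 - p_click - p_unsub - p_spam
    return 1.0 * p_click - 5.0 * p_unsub - 50.0 * p_spam - 0.01 * p_ignore

# An engaged-looking profile just clears the bar ...
print(send_score(p_click=0.02, p_unsub=0.001, p_spam=0.0001))   # ~ +0.0002
# ... while a rarely-clicking profile is a clear "suppress".
print(send_score(p_click=0.002, p_unsub=0.002, p_spam=0.0005))  # ~ -0.043
```

The point of the toy version: even a profile that might click can be net-negative to email, once you price in the downside.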
At the start of every single brand engagement we run an A/B test: we take the list of profiles we would suppress and split it into two groups. Group 1 is suppressed! They follow Orita’s recommendations. Group 2 is left to follow the existing brand strategy. What are some of the most interesting things we find?
- Some people from the suppressed group actually purchase. Our model doesn’t say “this person is interested in the brand or will buy”. Our model measures “is email the right channel through which a brand should contact this person right now?” You, as a consumer, don’t only buy when a brand emails you, right?
- Folks within the Orita holdout group do click. But, importantly, they tend to click at 10% the rate of the folks we deem “engaged”. Meaning: if the core list click rate is 2%, the click rate of the folks we wanted to suppress but who were left alone might be around 0.2%. Yes, those are clicks, but … yikes, keeping those folks really drags down the average click rate, which impacts how Gmail / Yahoo see you as a sender, which impacts inbox placement, which impacts revenue. The quick arithmetic after this list makes that concrete.
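Here’s that click-rate drag spelled out. The list sizes are made up; the 2% and 0.2% rates are the ones from the example above:

```python
# How low-engagement profiles drag down the blended click rate.
# List sizes are made up; the rates mirror the example in the text.

engaged = 50_000           # profiles we'd keep on the list
suppressible = 50_000      # profiles we'd suppress
engaged_rate = 0.02        # 2% click rate on the engaged core
suppressible_rate = 0.002  # ~10% of that, per our holdout observations

clicks = engaged * engaged_rate + suppressible * suppressible_rate
blended = clicks / (engaged + suppressible)
print(f"blended click rate: {blended:.2%}")  # 1.10% vs. 2.00% for the core
```

Nearly half the click rate, gone, and that blended number is what inbox providers see.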
When you do segmentation (a “90-day click window”, for example) you’re making a risk-reward calculation. Our job is to take that same concept and supercharge it by looking at 1,000 or 10,000 times the amount of data you’re leveraging.
As a cherry on top, we’ve built our model to think about lifetime value (LTV), not just a single send. Sure, you could make more money from one campaign by sending to a broader list. But if you create a ton of unsubscribes from folks who might have purchased in the future, you might be losing a lot of revenue. And, because we like getting into the weeds of deliverability: low engagement on a send today will impact how inbox providers see you, which could mean you make less on a future send. Or (yay!) more on a future send, if you’re seen as a good sender.
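A toy version of that single-send-versus-LTV comparison, with every number hypothetical:

```python
# Toy single-send vs. LTV comparison. All numbers are hypothetical.

extra_recipients = 10_000  # low-engagement profiles added to one send
click_rate = 0.002         # ~0.2%, per the holdout numbers above
rev_per_click = 2.00       # revenue attributed per click
unsub_rate = 0.003         # unsubscribe rate among those profiles
future_ltv = 15.00         # expected future value lost per unsubscribe

today = extra_recipients * click_rate * rev_per_click
lost = extra_recipients * unsub_rate * future_ltv
print(f"revenue today: ${today:.2f}")   # $40.00
print(f"future LTV lost: ${lost:.2f}")  # $450.00
```

A net loss over time, and that’s before counting the deliverability hit from the lower click rate.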
Of course, you never want to save $0.01 on an email and risk losing a sale, so we’ve built our model accordingly. We wait for a LOT of signal that someone isn’t engaged on email before we consider putting them on do-not-disturb.
Finally, we do two things to improve your model over time:
- We maintain a 10% holdout group to make sure our recommendations keep outperforming the brand’s existing strategy.
- We always randomly sample a subset of people and show them as engaged (“reactivated” in our parlance), just to make sure we don’t miss folks. There’s a sketch of both safeguards below.
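Here’s a minimal sketch of how those two safeguards could fit together. The 10% holdout comes from the point above; the 2% reactivation rate, the seed, and the function name are all illustrative:

```python
import random

# Sketch of the two ongoing safeguards: a 10% holdout (from the text)
# and a small random "reactivated" sample (rate here is illustrative).

def split_recommendations(suppress_candidates: list[str], seed: int = 42):
    rng = random.Random(seed)
    shuffled = suppress_candidates[:]
    rng.shuffle(shuffled)

    n = len(shuffled)
    holdout = shuffled[: n // 10]   # 10% stay mailed, to measure lift
    rest = shuffled[n // 10 :]

    k = max(1, len(rest) // 50)     # ~2% randomly shown as engaged
    reactivated = rest[:k]          # "reactivated" despite their score
    suppressed = rest[k:]
    return suppressed, holdout, reactivated

profiles = [f"profile_{i}" for i in range(1_000)]
suppressed, holdout, reactivated = split_recommendations(profiles)
print(len(suppressed), len(holdout), len(reactivated))  # 882 100 18
```

The random reactivations are cheap insurance: if someone we scored as unengaged clicks, that’s a signal the model gets to learn from.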