I’ve said this before and I’ll say it again: email deliverability is hard. There are so many factors that ISPs (also known as Mailbox Providers) consider when determining whether an email will even be accepted into their system, as well as if it will be delivered to the inbox of recipients.
Before we jump into a discussion about metrics, it’s important for all email senders to remember: there is no silver bullet to the inbox. So, if you’re relying on a single metric to determine the health of your email program, this discussion is going to be very eye-opening, to say the least.
This month, we asked deliverability experts from ConvertKit, EmailKarma, 360 Inbox, SocketLabs, dotdigital, Netcore Solutions, Iterable, ActiveCampaign, Zeta Global as well as yours truly, to share why certain metrics are overrated or misunderstood and what metrics you should be monitoring.
The “Deliverability Rate” Is Highly Misunderstood—For Good Reason
The metric I think is the most misunderstood by marketers is the “deliverability rate”.
Over the years, ESPs and reputation monitoring services have created their own internal deliverability ratings or sender scores that are intended to make deliverability simpler for their customers to understand.
These scores can be incredibly helpful to a marketer who knows very little about deliverability because it gives them a quick and easy 👍 or 👎 on their inbox placement.
At the same time, the way these scores are created tends to leave a lot of things open to interpretation, which can lead to decisions that are detrimental to deliverability in the long run.
Let’s walk through a few key reasons why the deliverability rate is such a commonly misunderstood metric:
Delivery rate and deliverability rate are often confused or used interchangeably. This is a problem because these are two very different metrics! By definition, the delivery rate measures whether a message has been accepted by a recipient’s server, while deliverability is the ability to land emails in subscribers’ inboxes.
Senders can know if their messages were accepted into the recipient servers because ISPs deliver feedback confirming this, but what happens after that has always been a bit of a mystery.
Did it go to the inbox? The spam folder? Was it dropped into a black hole somewhere? (Yeah, I’m lookin’ at you, Outlook… 👀) ISPs provide no feedback indicating whether an email was delivered to the inbox or the spam folder.
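To make the distinction concrete, here is a minimal sketch (with illustrative numbers) of why the delivery rate is computable while inbox placement is not:

```python
# Illustrative only: delivery rate can be computed from bounce feedback,
# but inbox placement cannot, because ISPs report acceptance, not folders.
sent = 10_000
bounced = 150                      # hard + soft bounces reported back by ISPs

delivered = sent - bounced
delivery_rate = delivered / sent   # 98.5% "delivered"

# Of those 9,850 accepted messages, how many reached the inbox vs. spam?
# There is no equivalent formula -- ISPs provide no placement feedback.
print(f"Delivery rate: {delivery_rate:.1%}")
```

A sender can report a near-perfect delivery rate while a large share of that mail quietly lands in spam.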
Direct tracking of inbox placement rate doesn’t really exist. In fact, up until Verizon Media Group announced their Email Deliverability and Performance Feeds in February 2020, there was never a true way for senders to know whether their messages went to the inbox or the spam folder.
And even with VMG’s feeds, the data is anonymized to an extent. You’ll only know how many emails from a sender domain are delivered to the inbox, the spam folder, and other folders. Which is a lot! But you still won’t be able to tell if email@example.com received her email, and you won’t have insights into deliverability with any domain outside the VMG universe, either.
Traditional data points only paint a partial picture. To answer the age-old question of “did it go to the inbox?”, marketers have historically focused on many data points, including positive and negative engagement metrics (such as opens, clicks, unsubscribes and complaints).
The main idea here being that recipients won’t (or can’t) engage with emails if they’re in the spam folder. Many marketers attempt to overcome this lack of data through various forms of spam and seed testing, but in the end they are chasing the ever-elusive “inbox placement rate” and failing to see the full picture.
Very little is known about how third-party deliverability scores are calculated. How are marketers supposed to know how a given score was calculated, or what it means? Are they even aware that their deliverability rate isn’t tied to any one data point that is being tracked or reported on?
Just like deliverability itself, these ratings tend to live inside of a black box. Marketers have little time or motivation to understand these scores, which means they are placing far too much trust in them.
Also consider that if you do not know how a deliverability score was calculated, you cannot be sure that the score actually relates to the destinations you send to the most. For example, Gmail’s anti-spam filters care mostly about engagement, so a high deliverability rate would not be meaningful if the scoring algorithm is highly focused on issues driven by email content.
While we can’t know the true deliverability rate of our emails, we can take measures to monitor our deliverability beyond just opens and clicks.
Marketers must monitor all of their delivery and engagement metrics over time in order to understand their likelihood of hitting the inbox.
Focus on positive signals such as open, click and conversion rates, and negative metrics including spam complaint and unsubscribe rates. This is also key to understanding what content your audience finds valuable.
Lastly, include domain-level statistics in your monitoring as well, since you may identify that your performance with one destination (such as *cough, cough* Outlook) is a bit worse than others.
Beyond email metrics, apply reputation monitoring that checks for spam trap hits, blocklistings and fluctuations in reputation ratings (for example, with Google Postmaster Tools and SNDS), and engage in spam or seed testing. All of this can be done through paid email deliverability monitoring tools like Kickbox, allowing marketers to maintain trust with recipients and inbox providers alike.
Open Rates Should Not Be Your Go-To Metric: Here’s Why
Open rates are both overrated and misunderstood by most marketers. Opens have long been the go-to metric for marketers to measure the success of their email marketing.
Oftentimes, even a slight drop in open rates can send marketers into a state of panic. But as the email landscape continues to change, open rates become less and less reliable.
Email opens are calculated by embedding a hidden pixel in the email. When a subscriber’s mailbox provider loads the images in an email, the open-tracking pixel is also loaded, which logs an open for the subscriber.
However, there are many different scenarios where an email can be opened but the tracking pixel isn’t loaded, or the email isn’t opened but the tracking pixel is loaded. Here are just a few of the many reasons why opens can be inaccurate:
- The message gets clipped
If an email is too large, it might be clipped. If an email is clipped, the open tracking pixel will likely be clipped as well. In this case, there are often many subscribers opening and reading the email, but their open isn’t calculated.
- The mailbox provider automatically preloads images for a better mobile viewing experience
I’ve seen examples of Gmail automatically loading images for a subscriber if they heavily use the Gmail app. In this case, an open would be logged before the subscriber even has a chance to view the message.
- The subscriber is using HEY.com or another provider that blocks tracking pixels
If you haven’t heard, there’s a new mailbox provider on the scene and they take a strong stance against open-tracking pixels. If your subscriber uses HEY.com, you won’t see any open data from them.
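The pixel mechanics behind all of these failure modes can be sketched in a few lines. The tracking domain and token scheme below are hypothetical, but the idea is standard: a 1×1 image whose URL identifies the campaign and recipient.

```python
# Sketch of how open tracking works. "track.example.com" and the query
# parameters are hypothetical, not any particular ESP's scheme.
from urllib.parse import urlencode

def add_open_pixel(html_body: str, campaign_id: str, subscriber_id: str) -> str:
    params = urlencode({"c": campaign_id, "s": subscriber_id})
    pixel = (f'<img src="https://track.example.com/o.gif?{params}" '
             'width="1" height="1" alt="">')
    # The pixel is appended to the end of the body -- which is exactly why
    # message clipping in large emails can cut it off and lose the open.
    return html_body + pixel

html = add_open_pixel("<p>Hello!</p>", "spring-sale", "sub-42")
```

When the mailbox provider fetches that image (or prefetches it, or a security scanner fetches it, or an image blocker never fetches it), an "open" is or isn't logged, regardless of what the human actually did.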
Watching your open rate trends can still be helpful for diagnosing deliverability issues, especially if you can break them down by mailbox provider, but it’s important to understand the potential inaccuracies of your open rates as well. There are a lot of other great metrics to focus on, but my favorite metric to watch is your complaint rate.
A complaint happens when a subscriber marks your message as spam. Each complaint is a really helpful piece of feedback, and digging in more can help you understand how to improve your strategy. Looking at the patterns surrounding your spam complaints can help uncover things like:
- Listbombing issues
- Sending too frequently
- Not sending frequently enough
- Content that isn’t valuable or relevant
- Opt-in methods that aren’t working well
Open Rates Are Easily Manipulated
If I had to choose one metric as overrated, it would be open rate. This is the ultimate vanity metric for marketers. I’m not saying to discount it entirely, but don’t use it as the sole barometer of your program.
Look at open rates as a trend over time, broken down by domain if you have that data (and if you don’t, be sure to get it). This can show you early warning signs that your email program’s health is declining and hint at whether your messages are landing in the inbox or the spam folder.
This domain level view will allow you to focus on specific challenges your messages may be facing with delivery and make some necessary adjustments.
Why should you change your views on open rates? Let’s take a look at a couple of examples:
- Need your open rates to go up overnight so you look like a rockstar at your next quarterly marketing meeting with the CMO? Suppress a number of non-responders from your next few campaigns and BOOM! Your metrics shoot up overnight.
- Open rates are also inflated by anti-phishing and security tools that interact with messages to ensure the content and links are safe for their intended recipients. These opens and clicks are hard to filter out of reporting and may be influencing your future targeting decisions or your current A/B test results.
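The first scenario is simple arithmetic. A hypothetical sketch of how suppression inflates the metric without a single extra person reading the email:

```python
# Hypothetical numbers: the same 1,500 readers, two very different "open rates".
delivered = 10_000
unique_opens = 1_500

open_rate = unique_opens / delivered           # 15.0%

# Suppress 5,000 chronic non-openers and send the next campaign to the rest.
# Assuming the same people open, the denominator shrinks but nothing else changes:
suppressed = 5_000
new_open_rate = unique_opens / (delivered - suppressed)  # 30.0%
```

The open rate doubled, yet reach and engagement are unchanged, which is exactly why it is so easy to game.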
Stick to metrics that are harder to influence with a simple change of targeting to show the real truth of your marketing success: successful downloads, registrations, or the total shopping cart value of your subscribers.
Open Rates Should Not Be Your Sole Measure of Success
Open-rate is likely the most well known statistic in email marketing and deliverability, and is often used to measure the ‘success’ of an email campaign; however, it remains one of the most commonly misunderstood metrics. Perhaps the underlying challenge is how we talk about and measure email metrics.
Of course, if no one opens the email, the email campaign is hardly successful. And yes, open rates play a very important role in engagement and reputation, which we know contribute to mailbox placement. And yet the age-old question continues to be asked: “what is an optimal open-rate?”
Open-rate is a deliverability metric, and like other quantitative measurements, the indicator should be tied to a specific objective and be measured in relation to other metrics.
If the statistical measure is flawed or misunderstood, it can lead to misguided strategies and unintended results. The relative value of an open-rate as a stand-alone metric is subjective, if not inconclusive; therefore, it should answer the following questions: (i) What is the objective? (ii) What are you trying to measure? (iii) Is this the right metric to optimize to achieve the objective?
For example, if absolute reach is the sole objective, perhaps a low open-rate in relation to a huge target-audience could be deemed ‘OK’, in contrast to a high open-rate for a small target audience. For this comparative relationship to exist, the open-rate must be relative to something else; in this case, the resulting ‘number of opens’ and the ‘delivered email volume.’
If reducing spam-placement is the objective, we often hear: “how high should the open-rate be to resolve our spam issue?” Well, we know that some mailbox algorithms are dynamic, and the impact of those weighted metrics can also be relative to other metrics.
In other cases, ISPs have static, absolute thresholds for metric ‘health’. So, if the mix of other deliverability indicators (like bounce-rates, complaint-rates, and spam-traps) are relatively high, does the open-rate need to be very high – or could it be acceptably lower, if all other metrics were also lower?
We must also understand that open-rates can be measured differently, both in (i) Calculation (eg. total or unique opens; against total sends or successful delivered) and in (ii) Tracking Mechanics (eg. tracking pixel; image-enablement; plug-ins that block ‘hidden’ tracking mechanism, etc.). Inbox filtering and tracking will continue to evolve, and so we must be cognisant of the variables that impact each tracking metric.
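The calculation point is easy to demonstrate: the same campaign produces four different “open rates” depending on which numerator and denominator you choose. The numbers below are illustrative.

```python
# One campaign, four "open rates" -- illustrative numbers.
sent = 10_000
delivered = 9_600       # sent minus bounces
total_opens = 4_800     # every pixel load, including repeat opens
unique_opens = 2_400    # distinct subscribers who triggered the pixel

rates = {
    "unique / delivered": unique_opens / delivered,  # 25.0% (common default)
    "unique / sent":      unique_opens / sent,       # 24.0%
    "total / delivered":  total_opens / delivered,   # 50.0%
    "total / sent":       total_opens / sent,        # 48.0%
}
for name, rate in rates.items():
    print(f"{name}: {rate:.1%}")
```

Comparing an open rate from one tool against a benchmark from another is meaningless unless both use the same calculation.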
By adopting a more scientific approach to tracking, measurement, and analysis, we can better understand the relationship of metrics, and strengthen our ability to explain ‘what makes a certain metric successful or important.’ This also helps to ensure we are applying the ‘right’ strategy to achieve the objective, leading to a more effective and successful email program.
Don’t Be Fooled By Positive Delivery Rates, It’s Likely Not What You Think
The most overrated metric is Delivery Rate, which also happens to be one of the most misunderstood metrics. I think part of the problem is it’s the easiest metric to gather from a technical perspective, so it is ubiquitous across all ESPs.
Also, it is easy for novice marketers to mistake “delivered” as a definitive statement that the message reached an inbox.
Finally, just about any sender with functional failure and bounce processing will almost always produce a good Delivery Rate.
So, the fact that this metric is almost always available, easy to misunderstand, and tends to be an “overly positive” indicator of performance, makes it essentially worthless.
With that said, if your Delivery Rate is bad, there are clearly some huge issues with the marketing program, so it isn’t totally wasteful to monitor. However, in this situation there are likely many other indicators that would appear before the Delivery Rate metric dropped to a point of obvious concern.
I wish I could offer Deliverability Rate as an alternate metric for marketers to monitor, but unlike Delivery Rate, this is a phantom metric.
Deliverability can’t really be measured in a concise or accurate manner by ESPs. While I’m personally proud of the work we’ve done at SocketLabs on our StreamScore analysis system to help simplify and aggregate analysis of important metrics, I don’t think it would be fair to label this score as a Deliverability Rate.
If marketers really had to choose just one metric to focus on, I think Open Rate is the metric that most closely tracks with reaching the inbox.
When It Comes to Deliverability, Quality Wins The Day
You might be surprised how often I see list size being used as a key metric, whether by marketers or by C-level executives using it as a relevant KPI. Here the emphasis is on quantity. Yet often it is list quality that really matters, and the best way to assess and build quality is with engagement metrics.
Click rates, receive rates and reply rates all show the current levels of the email conversation. Focusing on these engagement metrics over time can help to generate a stronger contact list and a really positive interaction with customers.
Marketers who build their list with confirmation emails and consent preferences enable subscribers to have their say in the conversation. This empowers the customer to help the business succeed and also enables the business to develop much higher levels of engagement. In short, the goal is a high-quality list that generates strong interaction with customers and generally the stronger the consent willingness the stronger the engagement.
The focus on quantity can bring with it real problems, even detracting from the ultimate goal of revenue or other business KPIs. The key problems with a large list come when it is acquired or used indiscriminately.
If the list is low-quality (gathered from weak sources and then not monitored) it can result in emails being sent to people who did not want to receive them, and to potential customers being targeted at a counter-productive rate.
When mailbox providers see bounces, emails marked as junk or spam, and emails going to defunct addresses and spam traps, they step in and apply their own filters. So with a low-quality list, the more that’s sent, the less may be received in the first place. When the focus is on engagement metrics, everyone wants the conversation to happen.
So, in summary, quality is key. A focus on engagement metrics empowers recipients to develop the business and helps marketers get the quantity side right because the recipient can control what they receive in a way that encourages them to interact.
Engagement metrics also help put the focus where the mailbox provider needs it. For the marketer, the customer and for the mailbox provider this can be a much better place to be.
Dig A Little Deeper, Don’t Over-Rely On Open Rates
In my opinion, the open rate is a metric that is misunderstood by most marketers. There is an over-reliance on it to justify the success of a campaign based on the engagement received.
Having users open your campaigns is a good sign for maintaining sending domain hygiene and increasing a brand’s equity inside the inbox. But it does very little for a marketer who is chasing different objectives, like growth in revenue and numbers.
Open rates can show engagement, but open rate alone does not account for the success of your email program. It can actually be misleading.
For brands with an online store, metrics like click rate and click-to-open ratio should take precedence over the number of opens. A marketer should focus on getting more click-worthy campaigns out the door.
Open rates are a good metric to measure for your educational/informative content but not for those campaigns with a call-to-action button in it.
Which brings us to the skewed campaign results.
More opens are not all that great:
A high open rate does not necessarily mean a high amount of leads or orders generated.
It could mean a lot of users regularly view your campaign content hoping to find a suitable offer, but end up disappointed. We call this category of users ‘concealed churn’.
These users are repeat openers but not clickers. If you ignore this trend for a period of time, they stop responding altogether and turn ‘silent’. That is a lost opportunity to engage potential customers.
So just viewing open rates will not give you the full picture here, instead, it will cloud the real story of your customer life-cycle churn.
That does not mean you have to aim for a 100% click rate; that’s unrealistic. But it won’t hurt to improve the number of clicks you receive from each of your campaigns. That would mean more users interacting with and exploring your products on your website or app.
Recently, there has been talk of spam bots that open and click on a large percentage of emails from some senders. A high number of opens could mislead the marketer into believing that their campaign was opened by every user on the mailing list. This could skew your campaign performance results.
Analyze your campaign metrics to find the root cause of such issues and mitigate them with smart solutions. Being complacent about over-inflated results could be detrimental to your email program.
High opens may equal a high spam complaint rate:
While solving day-to-day deliverability issues at Netcore, we have seen cases where a high open rate corresponds to a high rate of spam complaints. High spam complaint rates can damage your domain reputation to the point of being a show-stopper.
Every email you send should be valuable to your customer. Otherwise, a high spam complaint rate pulls down deliverability as well as any scope for improvement in your campaign performance. This shows the folly of judging the success of your campaigns based purely on the engagement you receive.
Brands should set up their key performance indicators on the metrics that matter most to their bottom line. This will vary according to the business model of each brand. If they need more conversions or leads, click-to-open rate is the metric to measure. If they want more users to read their emails, then open rate could be the metric to consider.
Going deeper into your deliverability and engagement metrics, like unsubscribe reasons, authentication results and delivery metrics, will give you greater insight into the impact your content is creating and the behaviour patterns of your subscribers.
Oversimplified “Health” Scores Add Confusion to an Already Complex Issue
I actually love this question, because I get asked about it all the time. Relative to one metric, it depends on one’s definition of “metric.” For the purpose of this topic, I’ll consider the metric a “single data point.” To this point, I am frequently asked by senders “what does (this metric) mean? Should we be concerned/excited/panicked?”
And that metric is some sort of aggregated, over-simplified, qualitative score. In other words, “Your campaign scored an 83% deliverability score!” More (and very common) examples are green/yellow/red scales that are intended to indicate deliverability health as well as the generic A/B/C/D/F letter grades.
It makes sense that these are so widely available for senders to consume, because they’re in such high demand. These days marketers are thankfully eager to recognize deliverability health, and are looking for easy ways to do that, which is understandable enough. Work smarter, not harder, right?
There are two main issues with this. One, deliverability “health” is a complex combination of sending practices over time, technical message infrastructure, and the intrinsic recipient value the sender provides. What’s more, deliverability is dynamic.
So qualities that deem a message worthy of an inbox today may not do so tomorrow, regardless of changes a marketer may or may not execute. Because of this, it is not really logical to expect a message or “sender status” within the deliverability ecosystem to be accurately defined as a “B+.”
Secondly, the methods used to derive these scores are inconsistent and oftentimes a black box, sometimes built on an equation as simple as “number of bounced messages divided by delivered messages.” If I were a marketer, I would want to understand every single variable of a “score” before I would allow it any influence over my strategy.
That brings me to what metrics a marketer should be watching instead. The answer isn’t sexy to some, but the truth is that they are metrics marketers already have access to: organic, direct message performance data.
What are the unique open rates over time? This will be telling on not only deliverability “health,” but also serves as a feedback mechanism for understanding the value recipients are finding in the mail.
What are the complaint rate trends? Are they oddly elevated in a particular message type? Are they higher at a particular stage of the recipient lifecycle? Are they just high in general? (That would be bad).
From there a marketer can get a little more sophisticated and do things like examine recipient engagement profiles for different cohorts of their different mailing segments, understand the ratio between message openers that click through to site versus unsubscribe or complain as well as track click to open rates over time.
The point is, really understanding the deliverability health of a program requires legwork and attention to detail. It would be incredibly rare to be able to summarize so many different, dynamic variables into a singular figure, letter grade, or color. While it’s understandable that it would be convenient to have a quick resource that would always be reliably accurate and simple, no external source could (or should) know a program’s deliverability health more than those that operate it.
Spam Complaints Are Not As Simple As You May Think
In my experience, one of the most commonly misunderstood deliverability metrics is the spam complaint rate. Recipients use the “Report Spam” option in their inbox for a variety of reasons:
- The email IS spam
- They forgot that they signed up for the email list
- They are no longer interested
- “Ooh what does this button do?”
- They don’t agree with the message
- The list goes on…
Here are a few key insights into the spam complaint metric that hopefully helps readers better understand how to get more mail to the inbox, opened, and enjoyed.
CALCULATIONS: Email service providers (ESPs) tend to agree on the need to keep your spam complaint rate below mailbox providers’ recommended threshold of 0.1%.
ESPs do their best to show you your spam complaint rate but may not make it clear how your spam complaint rate is calculated. Some ESPs only use spam complaints shared by mailbox providers through Feedback Loops (FBLs); others may combine FBLs with the number of people who say they unsubscribed from your email list because of spam reasons. Both are valid calculation methods and give you key insight into how your recipients view your emails.
It’s important to note that, if you are moving from one ESP to another, you need to understand how your old ESP calculated your spam complaint rates compared to how your new ESP is calculating them. This knowledge can help you investigate any sudden dips or spikes in your spam complaint rate after the switch. It will also let you know if you need to start monitoring unsubscribes for spam reasons separately. This is valuable data to have since not all mailbox providers offer FBLs.
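To see why the calculation method matters, here is a hypothetical sketch comparing the two approaches described above against the commonly cited 0.1% threshold. All numbers are illustrative.

```python
# Two common ways an ESP might compute your spam complaint rate.
# Illustrative numbers only.
delivered = 50_000
fbl_complaints = 30          # complaints reported via Feedback Loops (FBLs)
spam_reason_unsubs = 25      # unsubscribes citing "this is spam"

rate_fbl_only = fbl_complaints / delivered                         # 0.06%
rate_combined = (fbl_complaints + spam_reason_unsubs) / delivered  # 0.11%

THRESHOLD = 0.001  # the widely cited 0.1% guideline

# The same list looks healthy under one method and over threshold under
# the other -- worth knowing before (or after) switching ESPs.
print(f"FBL only:  {rate_fbl_only:.2%}  {'OK' if rate_fbl_only < THRESHOLD else 'HIGH'}")
print(f"Combined:  {rate_combined:.2%}  {'OK' if rate_combined < THRESHOLD else 'HIGH'}")
```

A sudden “spike” after an ESP migration may be nothing more than a change in which of these formulas is used.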
ACCESS: Most senders do not know that mailbox providers are not required to share spam complaint information, and if they do provide FBLs, they may only send a selection of spam complaints received. This is especially common for B2B recipients, and even mail going to big mailbox providers like Gmail.
Whether you receive FBLs or not, it is important to note that the mailbox providers have all of the spam complaint information for mail that’s sent to their customers. They use this information when determining the trustworthiness of your email and whether it should be delivered to the inbox, spam folders, or rejected altogether.
With that said, if you see your spam complaint rate nearing 0.1%, you should think of it as an indicator of a possibly bigger problem that could impact your sender reputation. This is a good signal to start reviewing your sending practices and implementing improvements to protect your deliverability.
EASE-OF-USE: It’s true that some recipients use the Report Spam option even though they did ask to receive your email. This still indicates an issue with the email!
A recipient may use the Report Spam option if they don’t remember signing up, they are no longer interested, or they are just trying to clear out their inbox. There are a few methods you can use to help combat ease-of-use spam complaints:
- Add a detailed permission reminder to the top of your email to remind the recipient how, when, and why they opted into your email.
- Include an unsubscribe link at the top of your email to encourage recipients to use that option rather than the Report Spam option. This unsubscribe link can protect your sender reputation with mailbox providers while deterring spam complaints.
- Personalize your subject lines and email content with dynamic content and/or custom fields. A recipient who feels a one-on-one connection with the email is more likely to engage and less likely to report an email as spam.
- Set up a sunset policy to let go of contacts who are no longer engaging so they don’t default to the Report Spam option.
ENTER THE OPEN RATE: If you have a low open rate and minimal-to-no spam complaints, then it’s highly likely that your emails are being filtered to the spam folder. Just because your spam complaints are low doesn’t mean that all of your emails are being delivered to the inbox. If you have low open rates and you are not receiving spam complaints, it is because the email is already in the spam folder — your recipients did not even get the chance to Report Spam. This is a strong indication that you need to make improvements to increase your deliverability.
It is worth noting that when recipients go into their spam folders, they can use the “This is not spam” option to move an email out of the spam folder and signal to the mailbox provider this email is wanted. This action is weighed heavily by mailbox providers and can help boost sender reputation and deliverability. If you have recipients contacting you to complain of not receiving your mail, be sure to let them know about this option.
THE HOLY GRAIL: The most important piece of deliverability advice I could provide to a sender is to focus on sending messages to recent and explicitly opted-in contacts with content relevant to what they signed up for and at a frequency they are expecting.
While it is normal for all senders to experience some spam complaints, there are clear indicators of when it’s time to take a deep dive into your processes to identify room for improvement. Whether you’ve been sending email for 20 years or 20 days, I hope that the above knowledge helps you better understand the spam complaint metric.
All of the Above – All Metrics Have Some Degree of Limitation
Have you ever opened a puzzle box, taken out one piece, looked at it, and said, “This was fun, it’s a beautiful picture, I’m done!”?
Not likely, unless you really hate puzzles. I view deliverability analysis in a similar fashion. As the industry continues to evolve, what we once thought was THE best metric for measuring deliverability is really just ONE piece of a larger puzzle.
Use these pieces the wrong way and your view of campaign success or reputation health is either riddled with inaccuracies or just completely confusing and obfuscated.
Even with a mosaic, a story lies in each image within it, but a more intricately woven one that holds a lot more truth sits across the entire canvas. Once you step back, you can see a lot more context and how all the pictures relate to, support, and influence one another. And then suddenly, you have a whole different story before you.
Ok back to the metrics…
At one point, open rate was THE metric to measure success. But was it really? When I first started in email, there were a lot more text-only and multi-part messages rendering as plain text, which would prevent an open pixel from even being downloaded.
Despite technology advancing to make HTML the standard, this metric still carries a degree of inaccuracy, leaving campaign performance reports more skewed than may once have been thought.
Why? Users block images, platforms block images, some platforms trigger multiple opens, some opens are non-human interactions due to spam appliances, some cache, and so on and so forth. The same concerns arise with click data.
In addition, just because images aren’t loaded, doesn’t mean there isn’t interest in the message. I know I want my account statement notifications from my bank, but I never open them.
However, knowing all of this, it’s still one of my must have metrics that I use as an indicator; never as a singular measure in time, though, but, instead, how it lives over time. And unfortunately, for some, it is often one of the few metrics they have to make decisions. There is still a lot of value there, so long as you understand the limitations.
At one point, complaints were THE metric to measure issues. However, as ISPs remove feedback loops (FBLs), or as ISPs that don’t support FBLs, like Gmail, make up more and more of a sender’s list, complaint metrics are no longer clear indicators of issues.
Often, analysis of complaint rates is viewed across the entire campaign. This is fine if you are focused on trending. But, really, the enlightening aspect comes when you drill down to review the results for only the domains that do provide FBLs.
Here’s another twist: low complaints could mean a strong campaign, or one that bulked so heavily no one even saw it to complain about. Consider as well that filtering has become so sophisticated that while some ISPs still rely heavily on complaints, others are incorporating behavioral indicators of what is considered unwanted mail.
Unfortunately, this is the data senders don’t see. Complaints are still a huge negative factor, but so is not moving an email to your spam folder, never interacting, deleting as soon as you get the email, etc.
Data quality and list makeup (active versus inactive versus uninterested versus opt-in) alone can be the true culprits behind a campaign not succeeding.
At one point, inboxing was THE metric for deliverability. However, that too comes with many caveats. The once prized panel data is now going or has gone by the wayside.
This was inboxing data based on real users who overlapped with a sender’s list, so you could get a feel for what was happening in the mailbox. There are basic seeds that indicate where an email to a new user will go, or basic reputation status.
Now there are AI seeds that are supposed to mimic user behavior and the associated inboxing. These are all great data points, but none are truly your audience, and therefore they will never be 100% accurate to what is occurring on the other end of your send.
The data is often skewed by how panel data overlaps, how the AI interacts, or basic filtering not factoring in engagement.
I once had a client whose seeds were inboxing 100% at Microsoft, only to dig further and see engagement rates of 10% while all other ISPs were showing 20%+. I had another client showing poor inboxing at Gmail, but the open rate was, incredibly, 30%+. It’s all relative to the greater picture.
So what metric is overrated or even misunderstood by marketers? Nearly all metrics from delivery through performance. They each have some degree of limitation. No single metric in time is truly complete on its own.
Moving forward, look at as many data points as you can over time, including campaign delivery metrics, reputation, etc. Once you put them together, you can start to see your mosaic develop.
If the metrics are all pointing in the same direction, you’ll be able to better identify and treat the problem or celebrate the success. However, if there are a number of mismatched signals, then you know you need to start digging deeper, finding more data to rely on, seeing the impact beyond email metrics, and looking at the end result.
Pervasive issues or resounding successes are best identified when they are lasting. Sometimes we trip, sometimes we jump, but most days we walk. So when you see a blip, it could very well be just that, but if it goes down and stays down, you likely have fallen and need a little pick-me-up.
For me, the big picture is THE ‘metric’ for which you are looking.
Keep your list healthy and boost campaign performance by regularly cleaning your email list. We’ll let you know which email addresses are good, bad and risky, before you hit send.
Your first 100 verifications are free.