If you are running ads for your app without some form of advertising testing, the opportunity cost could be huge.
As smartphone users, our exposure to in-app mobile advertising is at an all-time high. In a survey conducted in 2021, 57% of respondents said they use their smartphone for over 5 hours a day, while only 5% said they use their mobile devices for less than 1 hour. Much of this time is spent on mobile apps - plenty of which contain mobile ads, especially now that their popularity with advertisers has snowballed. But not all these advertisements are what we would classify as "good". Many are poorly constructed and fail to capture the attention of the user, resulting in weak performance metrics and hindered user experience.
This is not a desirable situation to be in, regardless of whether you are the advertiser, publisher, or user. Especially as an app developer looking to run ads for user acquisition, it's vital you take control and run the best ad possible - and there are many ways to do this. You could focus on improving your creatives, copy, ad format, targeting, and more. But knowing where to start is difficult, and knowing what to do to improve them is even harder. This is where ad testing comes into play.
Advertising testing refers to the process of directly comparing different variants of the same ad campaign against each other, collating data you can use to directly compare different elements. After doing so, you can put together an advertisement that better contributes to your specific marketing goals. As the advertiser, you will get better results; as the publisher, you will have higher quality ads that complement your app; and as the user, you see ads that potentially pique your interest and complement the user experience.
Ad tests are essentially market research, and there's no single best way to do your ad testing as it depends on your situation. Broadly speaking, you can divide the different types of ad testing into two categories: in-market testing, and survey testing.
In-market ad testing is where you trial run your ad with a series of different variables which you can control. Based on the results from running these variations, you can come up with a conclusion as to what the most successful combination of ad elements is, and then place the remaining majority of your budget where it will be most effective. Two of the most common ways of doing in-market ad tests are A/B tests, and multivariate tests.
An A/B ad test is a way to compare two versions of your ad by changing one variable. The goal of an A/B test is to determine which ad variant performs better. This is done by dividing those who view your ad into two groups, with one receiving version A, and the other receiving version B. Then, the performance of each group is measured. If one version performs better, then that version is rolled out with all your ad spend.
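Deciding whether one variant truly "performs better" usually comes down to a simple statistical comparison. As an illustration, here is a minimal sketch of a two-proportion z-test on click-through rates, using only the Python standard library; the impression and click numbers are hypothetical:

```python
import math

def ab_test_z(clicks_a, views_a, clicks_b, views_b):
    """Two-proportion z-test: is variant B's CTR meaningfully different from A's?"""
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    # Pooled click rate under the null hypothesis that A and B are equal.
    p_pool = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, p_value

# Hypothetical results: 10,000 impressions served to each group.
p_a, p_b, p_value = ab_test_z(clicks_a=320, views_a=10_000,
                              clicks_b=380, views_b=10_000)
print(f"CTR A: {p_a:.2%}, CTR B: {p_b:.2%}, p-value: {p_value:.3f}")
```

A small p-value (conventionally below 0.05) suggests the difference is unlikely to be random noise, which is the point at which rolling the winning variant out to your full ad spend is defensible.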
Multivariate ad testing is like A/B testing but takes the number of variants to a new level. In any mobile ad, you will have a series of variables, whether it be the on-image text, CTA placement, color choices, graphics, and so on. Multivariate ad testing can change several of these elements at the same time, creating lots of different versions of your ad to find the best possible combination of different elements. Multivariate testing can be a much faster way of reaching a more ideal advertisement but, because there are more variations, it requires more traffic for the sample size of each variant to be big enough to be meaningful.
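The traffic requirement grows quickly because variant counts multiply. As a sketch with hypothetical ad elements and a hypothetical per-variant impression target, the combinations can be enumerated like this:

```python
from itertools import product

# Hypothetical element options for a multivariate test.
elements = {
    "headline":  ["Play now", "Free to start", "Join 1M players"],
    "cta_color": ["green", "blue"],
    "image":     ["gameplay", "character"],
}

# Every combination of one option per element: 3 * 2 * 2 = 12 variants.
variants = list(product(*elements.values()))
print(f"{len(variants)} variants to test")

# If each variant needs roughly 10,000 impressions for a reliable read,
# the test as a whole needs 12x that traffic.
needed = len(variants) * 10_000
print(f"~{needed:,} impressions needed in total")
```

Adding even one more option to any element multiplies the variant count again, which is why multivariate tests are best suited to campaigns with substantial traffic.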
Unlike in-market testing, survey testing does not involve actually running ads. Instead, you show respondents your ads, collecting their feedback into actionable data and insights. There are two types of survey advertising testing: monadic and sequential monadic.
In a monadic test, you may have several different variants of an advertisement. Each participant in the survey is shown one single ad in isolation and asked to provide feedback on it. There are a couple of benefits to doing this. Firstly, with only one advertisement to look at, their opinions will not be influenced by any other potential variations. Secondly, each participant only needs to review a single ad, keeping the survey short - and the shorter the survey, the better. The downside of monadic ad tests is similar to that of multivariate tests: to get meaningful results on a wide variety of ad variants, you need a large number of respondents.
The alternative to this is to run a sequential monadic ad test. The principle is the same, and the questions you ask will be similar. However, instead of asking each survey participant for feedback on a single ad variant, you can ask for their feedback on several, one after the other. The benefit of running a sequential monadic ad test is that you can get valuable feedback on a multitude of ad variations and gain an understanding of which is best with far fewer responses. However, the feedback on different ad variations will to an extent be influenced by those that came prior.
Depending on what the ultimate goal of your ads is, there are different ways you can evaluate how good they are. Here are some of the most useful metrics, as well as what they reveal.
CTR: Click-through rate - tracking the CTR is great if you want to maximize the number of possible clicks and, therefore, traffic from your ad.
CVR: Conversion rate - understanding how many of those who saw your mobile ad went on to convert is a valuable metric to use if you want to better understand and compare ads based on the quality of traffic or clicks you are receiving.
CPI: Conversions per impression - Conversion rate and click-through rate are both valuable metrics, but neither tells you on its own how many conversions you get for the impressions you serve. In your ad test, the CPI statistic (not to be confused with cost per install) combines CTR and CVR - CPI = CTR × CVR - to show how much value a specific ad variant offers per impression.
CPA: Cost per acquisition - The number of conversions an ad offers you is not the full picture - the price per conversion is something vital to those on a tighter budget.
ROAS: Return on ad spend - This metric shows you how much money you are getting back from your ad spend.
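The five metrics above are all simple ratios over the same handful of campaign numbers. As a minimal sketch (the input figures are hypothetical), they can be computed together for one variant:

```python
def ad_metrics(impressions, clicks, conversions, spend, revenue):
    """Core ad-test metrics for a single ad variant."""
    ctr  = clicks / impressions        # click-through rate
    cvr  = conversions / clicks        # conversion rate (per click)
    cpi  = conversions / impressions   # conversions per impression = CTR * CVR
    cpa  = spend / conversions         # cost per acquisition
    roas = revenue / spend             # return on ad spend
    return {"CTR": ctr, "CVR": cvr, "CPI": cpi, "CPA": cpa, "ROAS": roas}

# Hypothetical variant: 50,000 impressions, 1,500 clicks, 120 conversions,
# $600 ad spend, $900 in tracked revenue.
m = ad_metrics(50_000, 1_500, 120, 600.0, 900.0)
print(m)
```

Computing all five per variant makes the trade-offs visible: a variant might win on CTR yet lose on CPA or ROAS, which is exactly the kind of conflict an ad test is meant to surface.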
It should be fairly clear by now that the foremost goal of ad testing is to make the most out of your ad budget. As an app developer, if you are running ads without testing them first, you are most likely not reaping the biggest reward for your money. And as we've mentioned in previous blogs, gaining more users for your app is a crucial part of growth and will greatly help your monetization journey.
Marketability testing is a process where you test out different versions of your product or service to determine which one is best for your target audience. This helps you find out which features work best, which don't, and to gauge market interest in your app.
Marketability testing is often used by mobile game developers, for example, who want to test the waters and see if their new game or update will be successful with their target market. One of the most effective ways of undertaking marketability testing is to run in-app mobile ads through an ad network like Pangle. By combining it with your efforts in advertising testing, you can gain a more thorough and detailed overview of the market situation and how well your app will be received. After all, you'll need to be running the best possible ads to gain the most accurate representation of market interest in your app.
Mobile ad viewability is the percentage of time an advertisement in an app is actually viewed. The higher the percentage, the better. Mobile ad viewability is a great metric to understand as it is closely related to, and has huge influence over, the click-through rate of your in-app ad and subsequent results. If you run advertising testing and create the best possible ads, your mobile ad viewability will improve, and so will the results of your marketing ad campaign. Not to mention, this is an incredibly competitive space: in 2020, global mobile ad viewability rates rose by 13.6% to 70.8%. Particularly with your ad viewability figures, keeping ahead of your competition ensures you the best and most consistent results possible.
Interested in learning more about A/B ad testing, multivariate testing, and more? Don't hesitate to browse our resources center or get in touch with us at email@example.com.