Verve adheres to the highest standards of integrity and quality for our advertising platform, which delivers targeted mobile advertising campaigns. At Verve, brand safety is an integral part of our platform and business culture. We take brand safety seriously and work to protect advertisers using our platform. Details of those processes are set out in this policy.
Apps within the Verve platform are audited before being allowed to join our network. We do not work with any party on a blind basis. Our inventory is tagged by site and ad placement individually, regardless of the publisher.
Developers and publishers are required to agree to and adhere to our legal terms, which prohibit, among other things:
- Racial, ethnic, political, hate-mongering, and other objectionable content;
- Investment, money-making opportunities, or advice not permitted under law;
- Violence and profanity;
- Pornographic, obscene, and sexually explicit content;
- Defamatory material and material that threatens physical harm to others;
- Promotion of illegal substances or activities;
- Material that discriminates on the basis of race, ethnicity, gender, age, disability, religion, or sexual orientation;
- Content which is inappropriate for, or harmful to, children;
- Content which is clearly geared towards children;
- Promotion of terrorism or terrorist-related activities, sedition, or similar activities;
- Any content that infringes upon the intellectual property rights of any third party;
- Any content that is unfair, deceptive, or otherwise in violation of any applicable laws, rules, and/or regulations.
When a new app or site applies to become part of the Verve platform it is reviewed before being integrated into the network. This includes authentication checks; a review of all placement URLs; checks for inappropriate, harmful, and prohibited content; and confirmation that clicks and impressions are accurately recorded and reported. New apps are not made available for Verve advertising campaigns until this process is complete.
We regularly review the inventory provided by our publisher and developer partners to ensure it continues to meet our content quality standards. Our platform analyzes the traffic within our network. If any irregularities are identified that could suggest questionable activity, the relevant app owner is contacted and may be blocked pending investigation and resolution of the issue.
We do not control the content our publisher and developer partners choose to create. However, if we become aware that a partner has breached its contractual obligations, we work quickly to resolve the breach or remove the partner's content from our platform.
Where we source inventory via an exchange, our relationship with the developers and publishers of the apps and sites on which our clients’ ads appear differs from the relationship we have with our network developers and publishers. As such, we are selective about the exchanges with whom we work and conduct due diligence on each of our exchange partners prior to serving any ads through them. In addition, we advise that a whitelist is always implemented when using apps and sites accessed via an exchange (please see below under Whitelists).
Verve operates a blacklist, also known as an inappropriate schedule. The Verve blacklist applies to exchange placements only; a direct publisher that would otherwise qualify for the blacklist is instead removed from rotation permanently. The blacklist is applied to all brand and agency campaigns.
The Verve blacklist also extends beyond apps to IP subnets, device IDs, and advertising IDs that generate fraudulent or non-human traffic. For example, if a bot installs a perfectly legitimate, brand-safe app, we will still block that impression. This applies outside of the exchange context as well. This is less relevant to a "real" user of any app, but could nominally affect a publisher, and it is a brand safety measure we take on behalf of the advertiser.
In addition to the blacklist Verve operates, which applies to all of our sites and apps, advertisers can create their own blacklist of what they deem to be inappropriate over and above our blacklist. We take manual and automated steps to implement a client blacklist on behalf of an advertiser that requests that we do so.
We work with advertisers to create whitelists, also known as appropriate schedules: lists of sites and apps that are approved by an advertiser. We have technology and processes in place to prevent campaigns from being run on apps not included on a whitelist. A whitelist is implemented at the request of an advertiser, although we suggest that a whitelist always be used when serving ads on apps accessed via an exchange. Client blacklists and whitelists are available to advertisers on a case-by-case basis, depending on the scale of the campaign.
Brand Safety Policy last updated: January 25, 2018