Is server-to-server (S2S) the universal “header” bidding solution for publishers?

When header bidding was introduced to the web publishing industry, it genuinely added value, from reduced ad latency to greater user and bid transparency. It opened up part of the black box in the digital advertising ecosystem, which publishers greatly appreciated and welcomed. But what about publishers that don’t just have a web presence but also an app in the Apple App Store and/or the Google Play store? Is there a way for publishers to monetize their content across multiple platforms without needing to onboard dozens of platform-specific ad-tech solutions?

When Server-To-Server (S2S) bidding was introduced to publishers, I got excited about the opportunity to finally bypass the layers of browser restrictions and limitations that come with involving many bidders in a client-side auction. Publishers would be able to reach multiple SSPs (supply-side platforms) through a single API call from their web servers, without impacting user experience or page load times, and without adding a hefty amount of JavaScript to their pages. In return, the response from that single call includes the highest bid for each ad unit involved. How amazing is that? Unfortunately, that setup comes with several caveats.
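To make that concrete, here’s a rough sketch of the idea in TypeScript. The endpoint, request shape, and response shape are all hypothetical placeholders, not any particular vendor’s API:

```typescript
// Hypothetical client-side call to the publisher's own S2S endpoint.
// One request replaces a dozen SSP-specific JavaScript libraries.
interface BidRequest {
  adUnits: { code: string; sizes: [number, number][] }[];
}

interface BidResponse {
  // Highest bid per ad unit, already reduced server-side.
  bids: { adUnitCode: string; cpm: number; adMarkup: string }[];
}

async function requestBids(req: BidRequest): Promise<BidResponse> {
  const res = await fetch("https://s2s.example-publisher.com/auction", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req),
  });
  return res.json();
}
```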

Low user match rate

By moving SSPs from the client side to the server side, they lose access to their cookies and the other data that identifies the user and their interests in their systems. Without that user-matching cookie, SSPs and advertisers have less data to go on when bidding, which leads to inefficient bids. Whether the ad call is made from the web or a mobile app, user match rates will always be lower than with client-side bidding, at least until the industry comes up with a universal user-identification platform that doesn’t actually identify the user.

SSP JavaScript Libraries

To address the low user match rate mentioned above, SSPs are asking publishers to add their JS libraries back to the page to augment their S2S offerings and improve user matching. This hybrid approach undermines the goals of moving SSPs from client side to server side in the first place: publishers still bear the responsibility of loading a JavaScript library for each SSP, on top of the S2S connection needed to trigger the auction.
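In practice, much of what those libraries do boils down to user syncing. Here’s a hypothetical sketch of the kind of sync pixel each SSP effectively asks the page to fire (the URL and parameter are illustrative only):

```typescript
// Hypothetical user-sync snippet: each SSP drops a pixel so it can
// read/write its own cookie and map it to the publisher's user ID.
// The endpoint and query parameter are illustrative, not a real SSP API.
function loadUserSync(sspSyncUrl: string, publisherUserId: string): void {
  const pixel = new Image(1, 1);
  pixel.src = `${sspSyncUrl}?puid=${encodeURIComponent(publisherUserId)}`;
  // Multiply this by every SSP in the S2S pool and the client-side
  // footprint creeps right back in.
}
```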

What about the ad server? Is it S2S too?

For publishers using DFP, the S2S solution doesn’t include the DFP call itself. The DFP call is still made on the client side, after the S2S call responds with the highest bids for each ad unit. This might not be the case for mobile apps, but it’s definitely the case for web publishers using an ad server that doesn’t offer server-to-server ad serving capabilities.
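Conceptually, the page still ends up doing something like this once the S2S response arrives. This uses the standard GPT (googletag) API; the hb_* key-values mirror the common Prebid convention, and applyBids is just a hypothetical helper:

```typescript
// After the S2S auction responds, the winning bids are attached to the
// client-side DFP (Google Publisher Tag) request as key-values.
declare const googletag: any; // provided by the GPT library on the page

function applyBids(bids: { adUnitCode: string; cpm: number; bidder: string }[]) {
  googletag.cmd.push(() => {
    for (const bid of bids) {
      const slot = googletag
        .pubads()
        .getSlots()
        .find((s: any) => s.getSlotElementId() === bid.adUnitCode);
      // Price-bucket key-values let DFP line items compete with the bid.
      slot?.setTargeting("hb_pb", bid.cpm.toFixed(2));
      slot?.setTargeting("hb_bidder", bid.bidder);
    }
    googletag.pubads().refresh(); // the ad call itself is still client-side
  });
}
```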

Tech ownership and scalability

Publishers willing to migrate from client-side to server-side bidding have a few options today. One is the free Prebid Server, but the free offering runs through AppNexus’ servers, which removes a lot of control and ownership from the publisher. Other SSPs offer S2S services similar to their header-bidding wrapper services, but you’re still giving up a lot of control and ownership over the tech (which, for many publishers, isn’t a deal breaker). For publishers to truly own their S2S solution, they would essentially need to build and host their own ad server capable of making and receiving server-side API calls to all the desired SSPs. That is a challenging task: you’re essentially building your own global real-time bidding server, with all the scalability, availability, and performance challenges that took SSPs years (if not decades) to work through.
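To give a feel for why that’s hard, here’s a minimal sketch of just the fan-out step such a server would perform (the SSP endpoints and response shapes are hypothetical). Everything around it — cookie syncing, geo-distributed deployment, sub-100ms latency budgets — is where the real work lives:

```typescript
// Minimal fan-out sketch: call every SSP in parallel, enforce a hard
// timeout, and keep the highest bid per ad unit. Real RTB servers add
// geo-routing, connection pooling, retries, and much more.
interface SspBid { adUnitCode: string; cpm: number; adMarkup: string }

async function runAuction(
  sspEndpoints: string[],
  request: unknown,
  timeoutMs = 300
): Promise<Map<string, SspBid>> {
  const calls = sspEndpoints.map(async (url) => {
    const res = await fetch(url, {
      method: "POST",
      body: JSON.stringify(request),
      signal: AbortSignal.timeout(timeoutMs), // slow SSPs are dropped
    });
    return (await res.json()) as SspBid[];
  });

  const best = new Map<string, SspBid>();
  for (const result of await Promise.allSettled(calls)) {
    if (result.status !== "fulfilled") continue; // timeout or error
    for (const bid of result.value) {
      const current = best.get(bid.adUnitCode);
      if (!current || bid.cpm > current.cpm) best.set(bid.adUnitCode, bid);
    }
  }
  return best;
}
```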

Knowing the restrictions and limitations mentioned above, many publishers today have decided to run S2S in a hybrid environment: client-side header bidding for their highest-performing bidders, plus server-side bidding for the remaining participants. This is probably an interim arrangement until the issues above are widely resolved, especially the user-cookie mismatch and tech ownership.
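With Prebid.js, for example, the hybrid split is mostly configuration: bidders listed under s2sConfig route through Prebid Server, while the rest keep bidding from the browser. The account ID, bidder names, and endpoint below are placeholders:

```typescript
// Hybrid setup in Prebid.js: bidderA/bidderB run server-side via Prebid
// Server, while bidders configured only in the ad units' client-side
// params continue to bid from the browser. Values are placeholders.
declare const pbjs: any; // provided by the Prebid.js library on the page

pbjs.que.push(() => {
  pbjs.setConfig({
    s2sConfig: {
      accountId: "your-prebid-server-account", // placeholder
      enabled: true,
      bidders: ["bidderA", "bidderB"], // moved server-side
      timeout: 500,
      endpoint: "https://prebid-server.example.com/openrtb2/auction",
    },
  });
});
```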

Did the digital advertising industry ever think about asking the user for data in return for a reward?

With all the current news and concerns around user data privacy, there is no doubt that ad-tech companies around the world are adding to the problem rather than effectively tackling it.

The methods and strategies used today for user targeting were likely built mainly to appease advertisers’ plethora of specifications and requirements, which leaves users in a dust storm of ad-tech junk that floods their browsers and consumes their precious mobile bandwidth.

Many are asking, “What are users gaining from all of that tech if their data is important enough to be labelled a ‘gold mine’?” I say they get nothing more than ad delivery with a user experience bad enough, and campaign targeting poor enough, to further promote the use of ad blockers.

So much money wasted.

The way I see a lot of these issues being resolved (in my opinion at least) is by getting consent from the user (across his or her browsers, devices, etc.) to provide machine-readable data (not human readable) that is encrypted and locked down in a “database” of some sort (blockchain arguably makes sense here). This data would live alongside the user’s devices and browsers and never leave them. This is different from browser cookies and web trails because it is data the user explicitly entered.

Advertisers, for their part, would provide targeting parameters for their campaigns, and some sort of platform would match those parameters against the user’s locally stored data. This is a very high-level explanation, and the reality would most likely be more complex, but the main idea is to keep the consented user data with the user.
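As a toy sketch of that matching idea (every type and field here is hypothetical; the point is simply that the profile never leaves the device):

```typescript
// Toy sketch: the user's self-declared profile stays on the device, and
// campaign targeting parameters are evaluated locally against it.
// All fields and names are illustrative, not a real platform's schema.
interface LocalUserProfile {
  interests: string[];      // e.g. ["cycling", "photography"]
  favoriteBrands: string[]; // explicitly entered by the user
}

interface CampaignTargeting {
  campaignId: string;
  requiredInterests: string[];
}

// Runs on the user's device; only the matched campaign IDs need to
// leave it, never the profile itself.
function matchCampaigns(
  profile: LocalUserProfile,
  campaigns: CampaignTargeting[]
): string[] {
  return campaigns
    .filter((c) =>
      c.requiredInterests.every((i) => profile.interests.includes(i))
    )
    .map((c) => c.campaignId);
}
```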

Publishers can start promoting this platform to ad-block users first, telling them that by whitelisting the site they aren’t just supporting it; they’ll also get rewarded.

Why would the user supply data to feed this process? Mainly because they get rewarded and, as a result, their ad experience greatly improves. I personally think users with ad blockers are the main audience to entice first, as they are more likely to understand the pros and cons of such a technical problem; those users went above and beyond to block ads in the first place. If they know they will be rewarded and have a better ad experience, why wouldn’t they consent?

That platform would most likely require a whole new pipeline between buyer, seller, and user, but it would dramatically improve the cost and efficiency of ad delivery and greatly reduce ad fatigue, fraud, ad blocking, and the hundreds of third-party ad-tech vendors that keep taking a cut of the advertiser’s budget, resulting in much higher eCPMs for publishers.

The key here is to get users to supply their data, and for them to know it is stored and secured locally, used only to match against existing advertiser campaigns, and that they get rewarded for it. There are some initiatives and discussions today around rewarding users for viewing ads, but that still doesn’t solve the user-data targeting mess; it still places advertisers before users in order to improve campaign viewability.

Consumers all over the world are already used to such methods, especially with credit cards that offer rewards in the form of points or cash. They already understand the value returned to them, even though they are essentially handing their purchase trails and insights to some vendor.

Internet users have changed, and they are more adaptable than ever. I can see a future where users select their favorite brands, hobbies, sports, and more in a platform that is vendor, browser, and device agnostic, improving their browsing experience and getting rewarded outside the walled gardens of Facebook and Google. Could such a platform become an open-source initiative with native Windows, macOS, iOS, and Android compatibility?

One device to power them all

I’m sure I’m not the only one thinking this could be the real solution to the multiple-device nightmare many of us face nowadays. You have a smartphone, a tablet, a personal laptop, and possibly a work PC. We are basically juggling devices, each of which needs to be maintained and kept up to date, and each of which comes with its own list of headaches.

But what is the only device that is always with you at all times?

Your smartphone.

It is the answer to all of that. Today’s smartphones are powered by powerful quad-core CPUs, have oodles of RAM, and pack enough storage to accommodate your cloud-synced personal and work files. So why can’t your smartphone be the powerhouse? The device that powers them all?

Simply put, I believe everyone will eventually have a single “host” device (their smartphone) that does the job of their tablet, laptop, office computer, and car’s infotainment system. Those will be nothing more than cheap dumb terminals or hubs that your smartphone either wirelessly communicates with or docks with to transfer power and data. Of course, by then, some of those devices may not even exist.

One device.

My Compaq Presario Got a New Battery

Finally, my old Compaq Presario got a new lease on life with a 12-cell battery I recently bought off eBay. The laptop had been suffering for a good year with its old 6-cell battery, which gave me barely 15 minutes of power. The new battery seems to give me a good day of work, which is great considering the laptop’s age and the overall shelf age of any new battery I can get for it.

My Compaq Presario (V2650CA) still runs Windows XP, which I’m considering upgrading to Windows 7 once I give it more RAM. It currently has only 1GB, which would make the system crawl under Windows 7. The RAM type is a bit old (DDR), so it won’t be cheap to upgrade.

“Why even bother?” you ask? Good point. It’s an old laptop that has probably exceeded its end-of-life date but, nevertheless, it’s still running and can act as a good backup if I end up investing in a new system.

Add CAPTCHA to BlogEngine.NET

Recently I added a blog module to my company’s website using BlogEngine.NET as the blogging platform, mainly because it’s free. But a week after the module went live, I started getting loads of spam every day. At first I thought they were real comments on the posts, but it turned out they all had websites associated with their names, advertising products and services from different parts of the world.

I was under the impression that BlogEngine already had a spam filter built in (as extensions), and it did, but unfortunately it wasn’t catching most of the spam. I was surprised to see that BlogEngine lacked a CAPTCHA extension; it has reCAPTCHA instead, which still didn’t stop all the spam from passing through.

After some Googling, I realized BlogEngine users had to add CAPTCHA to the platform manually, as there was no extension available. A couple of websites give step-by-step instructions on implementing a CAPTCHA function in your blog, but the one that was easiest to follow and worked on the first try was the post titled “How to Block Spam Comments in BlogEngine.NET” from Code Capers.

Adding CAPTCHA seems to have solved the spam issue for now, although it is quite annoying to type those barely visible letters every time you want to comment. It’s a temporary fix until I research other ways to kill spam.