To Kill Personalization: Search Engines and Social Networks Stop Helping Advertisers

Do you remember how it was all supposed to work originally? Search engines would provide people with a free service, and in exchange people would allow the platforms to remember what they searched for and which websites they visited. Websites would provide users with free information, and in return users would allow those sites to write cookies into their browsers, collect their IP addresses, show them interest-based ads and monitor their actions on the page with analytics services.

Social networks give users free and fairly broad functionality. In return, users allow the networks to collect and use whatever personal data they mention about themselves. Social networks and search engines give businesses powerful tools for targeted advertising and ad personalization, and businesses pay real money for it.

That's how it should work in a dreamy, perfect world. It was never an ideal system from which everyone (platforms, consumers, advertisers) would benefit equally, but as a planned scheme it came close to an acceptable balance of interests.

On the one hand, social media platforms accumulate unimaginably detailed data about users and quietly sell it, or simply hand it over, to third parties. On the other hand, advertisers who come to the platforms are willing to pay for advertising with good targeting and personalization, yet their opportunities for quality ads grow ever more limited. And still they pay, because there are no other platforms.

What Do Search Engines and Social Networks Know About Consumers?

Well, almost everything you can imagine: who got married and when, and who was widowed; who belongs to the LGBT community and who doesn't; who has psoriasis; who is depressed, and who uses drugs; your level of income, political preferences, and so on. We can indirectly infer what Google knows about its users from its bans on advertisers: for example, a business is prohibited from using so-called “sensitive interest categories” for targeting and promoting goods and services.

In an Ideal, Bright and Tolerant World, the Situation Would Look Like This:

People trust social media platforms and publicly talk about themselves, not worrying that their data could be used for evil purposes (the world is not wicked).
Social media platforms are open and transparent: they inform users that outside businesses are allowed to use their data to personalize ads, and users do not object (because no one is wicked).
Businesses genuinely want to be useful to people and use their personal data with great tact to advertise goods and services (business is not wicked).

In fact, the situation is as follows:

Social media platforms are shady: they collect all the personal data they can and do not clearly warn users about it. What the platforms do with the gathered information, and with whom they exchange it, mostly comes to light after media investigations. The result is scandals and lawsuits, and the convoluted "policies" that appear afterwards are a direct consequence of shady advertising-technology practices (platforms are wicked).

Social media users, on the whole, are absent-minded, impulsive and irrational. They publish the most intimate information about themselves, directly and indirectly. From time to time they suffer attacks of paranoia and delete some personal information, but too late. They use other users' personal data to harass one another, and become targets of harassment themselves (users are wicked).

Regulators can't keep up with the technology and belatedly pass legislation to protect the privacy of Internet users, such as the General Data Protection Regulation (GDPR). In the legislative mess, officials often ignore the problems that new and poorly drafted laws create for private businesses (regulators are evil and slow).

Third-party businesses are ready to use any system for targeting and remarketing; their cynicism is flourishing, because taking care of users' privacy means losing to competitors. Only the ad-moderation and privacy policies of the social media platforms can shield users from inappropriate company ads, and even that doesn't really work, judging by the ever-growing number of scandals in the media (business is evil).

The regulators are not succeeding at protecting user data either. By now it is obvious that a multi-level market for trading personal data exists on the Internet, ranging from completely illegal, shady operations to the relatively decent (yet occasionally scandalous) methods of the major social media and search platforms.

Third-party businesses also don't mind getting their piece of the social media pie, and as a rule the ethical use of “sensitive data” ranks second to last among their priorities. This issue deserves a broad public debate at a worldwide level.

So, is Personalized Advertising Super Effective? Or Not?

  • A 2016 study (Kalyanaraman, Sundar) found that tailoring the content of company websites to user characteristics increases the level of customer engagement in communication with those companies.

  • US researcher Catherine Tucker found that personalized ads yield a higher rate of clicks and conversions, provided that users have control over the privacy settings on the web page (Catherine Tucker, 2014).

  • Back in 2004, it was shown that when an ad contains the user's name, the user demonstrates a more explicit intention to make a purchase (Howard, Kerin 2004).

  • Some experiments have shown that people are more likely to notice ads with a high degree of personalization (for example, when they see their own photo or name in them). However, many people feel uncomfortable interacting with such ads (Malheiros, Jennett 2012).

  • American researcher Cong Li (C. Li, 2016) put forward the thesis that a personalized message can be perceived as non-personalized, and vice versa. The problem is that scientific studies fail to differentiate between actual personalization and perceived personalization; the definitions are not clear. As a result, different studies of the same claims produce opposite results.

  • A group of researchers consisting of Seth Noar, Nancy Harrington and Rosalie Aldrich (2009) argued that the "personalization effect" is a spontaneous process: it is impossible to determine which of its components lead to favorable results, since one cannot account for the many psychological, geographical and socio-economic variables involved.

  • Researchers publishing with the American Academy of Advertising (Grzyb T., Dolinski D., Kozlowska A., 2018) found that excessively personalized attention focused on the user reduces the user's ability to remember the brand and the advertising message itself. It also reduces the user's willingness to perform conversion actions on the website.

How Can a Business Solve a Targeting Issue if There's No Targeting?

Alright, so the studies cited above found that personalized advertising is more effective than generic advertising. But the trend of shrinking targeting options keeps getting worse. Most likely, in the coming years the platforms will further cut the targeting options and user categories available, under this tremendous pressure from the public, the media and regulators.

In contextual advertising, a possible scenario is that search engines either close their keyword research services completely or severely limit their capabilities, since discrimination by keyword could reach Google ads as well.

A reasonable solution could be alternative ways of selecting keywords, for example, algorithms that parse search suggestions. A hypothetical, advanced piece of software could help select negative keywords and propose target keywords and topics; perhaps specialized linguistic neural networks would fill this need.
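As a minimal sketch of what such suggestion parsing might look like: the snippet below assumes a suggest endpoint that returns JSON in the shape `["seed query", ["suggestion", ...]]` (a format several public autocomplete APIs use) and splits the harvested phrases into target and negative keyword candidates. The sample payload, the `negative_markers` set and the function name are all illustrative assumptions, not a real tool.

```python
import json

def extract_keywords(raw_json, negative_markers):
    """Parse a suggestion-API-style payload of the form
    ["seed query", ["suggestion 1", ...]] and split the suggestions
    into target-keyword candidates and negative-keyword candidates."""
    seed, suggestions = json.loads(raw_json)[:2]
    targets, negatives = [], []
    for phrase in suggestions:
        # A phrase containing any unwanted marker word goes to the
        # negative-keyword list; everything else is a bidding candidate.
        if any(marker in phrase.lower() for marker in negative_markers):
            negatives.append(phrase)
        else:
            targets.append(phrase)
    return targets, negatives

# Hypothetical payload in the shape many suggest endpoints return.
payload = ('["running shoes", ["running shoes for women", '
           '"running shoes free", "running shoes repair", '
           '"running shoes sale"]]')
targets, negatives = extract_keywords(payload, negative_markers={"free", "repair"})
print(targets)    # candidates to bid on
print(negatives)  # candidates for the negative-keyword list
```

In a "war of algorithms" scenario, the fragile part would not be this parsing step but reliably fetching the payloads at scale.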

The search engines are unlikely to be able to completely shield their suggestion services from parsing. It is quite possible that in the near future we will see a "war of algorithms".

As for targeting in social networks, the solution looks similar: external smart parsing of target audiences, capable of tracking the behavioral characteristics of their members. The social networks would, of course, try to catch parsing bots and prevent them from collecting users' personal data, and the creators of the algorithms would find new ways around the platforms' rules.

Social networks might also prohibit importing audiences into their ad manager systems, to forbid parsing once and for all.

A more honest and effective way to target potential customers in this "brave new world" would be for businesses to collect personal data directly, using email campaigns and push notification subscriptions on their own websites and in their own applications (with scrupulous compliance with "personal data laws").
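A minimal sketch of the compliance side of such first-party collection: an in-memory subscriber store that records an explicit consent timestamp and the source of consent, which is one common way to document a lawful basis under GDPR-style rules. The class and field names here are illustrative assumptions, not a reference to any real library.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Subscriber:
    """A first-party mailing-list record. Storing an explicit consent
    timestamp and the signup source documents where and when consent
    was given, as GDPR-style 'personal data laws' expect."""
    email: str
    consent_source: str  # e.g. the URL of the signup form
    consented_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    unsubscribed: bool = False

class MailingList:
    def __init__(self):
        self._subscribers = {}

    def subscribe(self, email, source):
        self._subscribers[email] = Subscriber(email, source)

    def unsubscribe(self, email):
        # Honoring withdrawal of consent immediately is a core requirement.
        if email in self._subscribers:
            self._subscribers[email].unsubscribed = True

    def active(self):
        """Emails that may still be contacted: standing consent only."""
        return [s.email for s in self._subscribers.values()
                if not s.unsubscribed]

ml = MailingList()
ml.subscribe("user@example.com", "https://example.com/blog-footer-form")
ml.subscribe("other@example.com", "https://example.com/checkout")
ml.unsubscribe("other@example.com")
print(ml.active())  # only subscribers with standing consent
```

A real system would persist these records and also log the consent wording shown at signup, but the principle is the same: the business itself holds the data and the proof of consent, with no platform intermediary.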

Good old content marketing will gain even greater importance amid the fuss of personal data regulators, the media and social media users, and the ongoing restriction of personal data. Third-party businesses would collect personal data and resell it, without the social media platforms involved.

No one doubts that search engines and social networks really do want to sell data about their users. However, as we see, they are increasingly constrained by public opinion, the media and regulators; it no longer works the way it used to.

Having examined the issue from this rather sad angle, we can conclude: saving the advertiser is the advertiser's own problem. Saving the social media platforms from the drop in advertising revenue is their problem. And saving (or rather, not handing over) your personal data to the platforms is your own business, as a social media user.