Facebook and Google accused of deceiving users into sharing personal data


Following Facebook’s massive privacy scandal, the European Union took a strong stance on the issue and passed the General Data Protection Regulation, or GDPR, to protect the data everyday users share with companies online.

Despite that, large tech companies seem to have found a way around it, using duplicitous tactics to trick users into sharing their data.

According to a report entitled Deceived By Design published by the Norwegian Consumer Council (Forbrukerrådet or NCC), Facebook, Google and Microsoft have been employing “dark patterns”, or design tricks, that manipulate the user into accepting privacy options they don’t necessarily want to accept.

The underhandedness of it all

The 44-page report cites examples of how Facebook, Google and, to a lesser degree, Microsoft have used dark patterns to their benefit. These include “privacy intrusive default settings, misleading wording, giving users an illusion of control, hiding away privacy-friendly choices, take-it-or-leave-it choices, and choice architectures where choosing the privacy friendly option requires more effort for the users”.

According to the report, if users choose to not accept some privacy policies, they are threatened “with loss of functionality or deletion of the user account”.

For instance, if Google users want to turn off ad personalization, they are presented with reasons why they ought to keep the feature turned on, pushing them to reconsider their choice. The negative effects of leaving ad personalization on are, however, never presented.

Face-recognition-Book

Another example from the report talks about Facebook’s use of semantics and site design to force people into accepting the use of its facial recognition feature. 

The wording Facebook uses states that “if you keep face recognition turned off, we won’t be able to use this technology if a stranger uses your photo to impersonate you. If someone uses a screen reader, they won’t be told when you’re in a photo unless you’re tagged”. This forces the user to accept facial recognition by suggesting impersonation is a risk they’ll need to take if they refuse.

In terms of design, the user is shown a blue box to accept facial scanning, but the option to decline is hidden within a page only accessible by clicking on “manage data settings”, making acceptance the easier option.

Getting governments involved

The legality of these double-dealing methods is debatable, but consumer groups across Europe, including in France, Norway and the UK, have sent letters to their respective national privacy regulators asking for investigations into the use of dark patterns.

Consumer groups in the US have also asked the Federal Trade Commission to look into the matter.

In fact, Bleeping Computer reports that an Austrian privacy advocate filed a complaint against Facebook and Google for exactly the same reasons within hours of GDPR going into effect, long before the NCC report was published.

And what do you have to say for yourself?

When asked to comment, Facebook chose not to directly address any of the claims made in the report. Instead, a spokesperson simply said that the company’s “approach complies with the law, follows recommendations from privacy and design experts, and are designed to help people understand how the technology works and their choices”.

Google also decided to tread the same path, not challenging the claims made, but stating that it is “constantly evolving these controls based on user experience tests”.

If investigations are conducted and these companies are found guilty of underhandedness, they could be slapped with fines of up to €20 million ($24 million) or 4% of their global annual turnover, whichever is higher.