Putting trust back into location services

In recent years, we have seen a dramatic rise in the number of location-based services delivered through Android and iPhone apps. This trend is largely driven by the popularity of smartphones, which share their users’ location information. Geolocation is at the heart of these services, whether verifying the authenticity of transactions, identifying exactly where photos were taken, powering location-aware ride-hailing apps, or simply marking destinations on a map. The upsurge in location-based offerings also means that consumers regularly hand over their own data so these apps can operate as effectively as possible.

Despite this, the truth is that consumers remain sceptical about giving their private location data to an unknown entity, even when they understand that location data improves their experience of apps and similar services. This issue is at the crux of a 2019 study, The Privacy Paradox Reloaded: Changes in Consumer Behaviour and Attitudes since 2018, conducted by HERE Technologies and ESOMAR. The study, a survey of more than 10,000 people across 10 countries, explores data privacy from the consumer perspective.

About the author

Henri Kujala is Chief Privacy Officer at HERE Technologies.

According to the study, data-sharing frequency has increased since 2018, and UK consumers are more willing to share location data than the global average. The study also found that three in five respondents had experienced some form of data misuse, ranging from phishing to hacked bank accounts, in the past two to three years. This explains, in part, consumers’ lack of trust and mixed feelings about offering up their data, and why so many are turning to VPN services to protect their privacy. In fact, trust and transparency were the main factors in a consumer’s decision to share their data with a company. More than a quarter of respondents did not trust companies or regulations, such as GDPR, to prevent data misuse.

Open data platforms must take a considered approach

It is important to note that most companies sincerely want to protect consumer privacy and are using tracking data responsibly in order to create safer, smarter data-driven services. Companies that deal with large amounts of data need to take especially thoughtful precautions to ensure data privacy. 

Many companies that have access to large amounts of data choose to host this information on an “open data platform”, which can be an intelligent way to purchase or licence data and combine datasets to create useful outcomes. For example, the HERE Open Location Platform (OLP), a platform for big data analytics and creation of new data services centred on location intelligence, allows users to bring their own data and combine it with the wealth of proprietary HERE geospatial information: road networks, administrative boundaries, traffic data, EV charging stations and points of interest (POIs). The platform also acts, in part, as a marketplace, where users can purchase and sell anonymised location-related datasets. 

Safeguarding privacy lies at the core of maintaining customer trust, which directly affects a platform company’s ability to add value and monetise data. Transparency and controls on data usage are what earn that trust. For data subjects (the end-users producing data), transparency and control mean clear communications combined with simple ways to manage their permissions. For data providers (companies collecting and making use of end-user data), they mean offering more options and implementing controls in balance with the “risk versus utility” trade-off.

Building trust

The research that builds better businesses and services requires trust from the people providing the data behind it. Privacy issues arise because companies that openly provide information cannot control how data consumers use it. With positive intent, developers and researchers can use rich open data to build smarter data-driven solutions. With negative intent, they can combine it with outside information to reveal details that were never meant to be shared. Companies supplying data cannot tell the two apart in advance: both are the same data consumers, distinguished only by their intentions.

Any company that provides consumer data to outside parties can inadvertently disclose information that makes the people behind the data identifiable. Some companies have the advantage that their data stays confined internally to improve their own services. But open data platform companies, which are in the business of providing open data to power solutions we have yet to imagine, must think carefully before disclosing any information.

Maintaining that trust is a deeply important job. Although companies must protect user privacy as a priority, it is also important to preserve enough value in the data to create better services. How do we strike that balance?

Components of privacy by design: anonymisation and privacy-aware product design

If data is heavily anonymised before being published or disclosed, the limited data that remains is less likely to yield insights for smart data-driven services. If extensive, rich data is published or disclosed with its value for new services as the sole consideration, there is a much greater chance it will reveal more than intended, even if unintentionally.

There is no single, perfect solution. One approach to answering this challenge is to first carefully consider how the data is expected to be used, and whether the data is potentially too revealing for those intended uses. Use-case specification also helps us understand the opportunities for data anonymisation while maintaining high quality of data-driven services. The challenge is to strike a balance by ingesting the least amount of private information necessary to create a product. 

Take the example of estimating traffic. If there is no congestion, we don’t need redundant speed updates from vehicles. Similarly, if there is a slow-down, we don’t need every vehicle stuck in traffic to report the same situation. In fact, we don’t need to publish any information about individual cars at all. Instead, we can simply report that a traffic-jam threshold has been reached, along with the number of cars above it. By tailoring data to the intended use case, the processed information is limited to exactly what informs the service, without compromising consumer privacy.
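The thresholding idea can be sketched in a few lines. This is a hypothetical illustration, not HERE’s actual pipeline; the segment IDs, speed threshold and vehicle minimum are all assumed values chosen for the sketch.

```python
from collections import defaultdict

# Assumed values for illustration: speed (km/h) below which a vehicle counts
# as "in a jam", and minimum slow vehicles before a segment is reported at all.
JAM_SPEED_KMH = 20
MIN_VEHICLES = 5

def congested_segments(reports):
    """Aggregate per-vehicle speed reports into per-segment congestion counts.

    `reports` is an iterable of (segment_id, speed_kmh) tuples. No vehicle
    identifiers are ingested, and nothing about individual cars is published:
    the output only says which road segments crossed the jam threshold and
    how many slow vehicles were observed there.
    """
    slow_counts = defaultdict(int)
    for segment_id, speed_kmh in reports:
        if speed_kmh < JAM_SPEED_KMH:
            slow_counts[segment_id] += 1
    # Publish only segments that reached the threshold, never raw reports.
    return {seg: n for seg, n in slow_counts.items() if n >= MIN_VEHICLES}
```

The point of the design is that data minimisation happens before publication: individual speed reports never leave the aggregation step, and a segment with only a handful of slow vehicles is not reported at all.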

Businesses are waking up to the potential of emerging technologies to build trust between companies and consumers. For consent management, for example, a blockchain-based solution offers many benefits. Blockchain can increase individuals’ control over their personal data because there is no central party, there is a high level of transparency, and encryption and pseudonymisation techniques are embedded.
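A minimal sketch of the pseudonymisation side of consent management, assuming keyed hashing rather than a full blockchain: the function names, secret key and ledger structure below are hypothetical, not any real system’s API. A stable pseudonym lets consent records be stored and updated without ever holding the raw identifier alongside them.

```python
import hashlib
import hmac

# Hypothetical secret held by the data controller; rotating or destroying it
# severs the link between pseudonyms and real identifiers.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymise(user_id: str) -> str:
    """Derive a stable pseudonym via keyed hashing (HMAC-SHA256).

    The same user always maps to the same pseudonym, so consent records can
    be looked up and updated, but without the key the pseudonym cannot be
    reversed or linked back to the original identifier.
    """
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()

def record_consent(ledger: dict, user_id: str, purpose: str, granted: bool) -> None:
    """Store a consent decision against the pseudonym, never the raw ID."""
    ledger.setdefault(pseudonymise(user_id), {})[purpose] = granted
```

In a distributed setting the same pseudonyms could key entries on a shared ledger, giving the transparency the paragraph above describes while keeping raw identifiers out of the records.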

If companies embrace a considered approach to their shared data, they will take a giant step forward on the path to consumer trust.



Henri Kujala

Henri Kujala is Chief Privacy Officer at HERE Technologies.

He is passionate about privacy and understands how to make it a business enabler rather than a blocker, seeing an opportunity in every challenge. His uncommon skill set combines in-depth knowledge of global privacy regulation with an understanding of technology and solutions. This enables him not only to turn privacy requirements into implementable stories and features for R&D, but also to ensure that privacy is actually implemented in running code and solutions.