Play Protect

Google
2017-2023

Google Play Protect is a service that detects potential privacy and security threats to an Android device, mainly from malicious apps. It was launched in a limited form shortly before I joined the Play UX team. I took over as lead content designer for the team designing the full version.

The problem

The initial version of Play Protect didn't have a "home," and it couldn't provide much specific information about the device's status. The service needed a dedicated landing page accessible from the Google Play main menu, and it needed to cover a wide variety of scenarios and potential threats.

Several legal experts were also worried that the existing language could be misleading under certain conditions.

The Google Play Protect logo
Three security statuses that can be displayed in Play Protect

The solution

I worked closely with a UX designer to create a completely new destination for privacy and security warnings. We started by expanding the existing warning system into three levels, in a "traffic light" pattern.

An example of Play Protect when there are no detected security threats
An example of Play Protect when there is a potential threat detected
An example of Play Protect when a known threat is detected

We then created a system of warnings and categorized them according to those three levels. Play Protect would notify the user with a system notification, and when the user tapped through, they would see a recommendation for how to resolve the problem.

Dialog asking the user if they want to allow apps to be installed from another source
Dialog asking if the user wants Play Protect to perform a security check on an app
Notice that Play Protect blocked a harmful app

We also built flows for scenarios where the user wasn't currently in Google Play, such as when they install an app from another source, or when they want an app checked for malware.

Example of Play Protect flagging an app as harmful
System notification from Play Protect about a harmful app

Working with a renowned security engineer and a member of the legal team, I put together a list of approximately 25 types of privacy and security threats. I then refined the strings explaining those threats with the help of the legal team. There was a fine line to walk: we needed to inform users without causing undue alarm or exaggerating the threat. It took many rounds of revision and wordsmithing to reach a compromise everyone could accept.

The entire project involved many negotiations like this. For example, the word "safe" isn't useful because we can't fully guarantee anything is safe. Similarly, many security-related words were too sensitive or scary to be useful at a consumer level. Words like "threat" or "dangerous" mean vastly different things when an app is trying to commit ad fraud versus when it's actively trying to steal the user's banking information or sell the user's location. Even the word "warning" was too alarming to cover all scenarios.

We eventually settled on the word "unsafe" to describe apps and "risk" as a general descriptor because those words can apply to many types of threats.

This was the beginning of several security improvements, including dedicated controls for enterprise IT administrators and advanced threat detection. All of these improvements are built on the framework I helped implement.