On June 11, Facebook released a new application for Android users with the stated purpose of studying individual phone habits, specific app usage, and time spent using those apps. In exchange for downloading the app, users will be paid for their participation in the study, and Facebook has promised not to sell their data to third parties.
This is Facebook’s second attempt at studying users’ phone habits. The first was much maligned for excessively collecting data from teenaged participants via tools that granted the company “root access” to the teens’ phones. Root access allows for control of nearly every aspect of a user’s phone, which, as revealed by TechCrunch, meant Facebook had access to private messages sent to and from participants’ devices.
Facebook wants to reassure the public that the new study will be different — more transparent, safer. Naturally, we’d be wise to remain skeptical; Facebook’s early motto was “move fast and break things.” We need to recognize that, in their rush to innovate, Facebook and other large tech companies have a tendency to expose their users to unanticipated dangers. Given the intense scrutiny the industry has faced of late, such a potentially invasive study from a company with a history of gross mishandling of user data is sure to fuel more calls for the government to step in and protect users.
Indeed, there are already a handful of legislative efforts to regulate the tech sector underway in Congress. Senator Marsha Blackburn introduced a bill in April that would require that users be given the power to opt out of the collection of specific information, including medical, financial, and geolocation data. Her bill followed Senator Marco Rubio’s unveiling in January of a plan to more closely align online-privacy requirements with the Privacy Act of 1974, restricting disclosure of records, mandating that an individual have access to records held by a company, and enabling an individual to correct errors in such records.
As it turns out, Rubio’s proposal makes for a sensible middle ground. But to understand why, it’s worth looking closely at Europe’s efforts to regulate the tech industry.
The General Data Protection Regulation, or GDPR, was implemented by the European Union in late May of last year. It institutes new rules governing what companies may collect from a consumer, permits users to opt out of such collection, and allows regulators to fine companies as much as 4 percent of their global revenue for violating the new statutes. Its proponents pitched it as a straightforward way to enhance consumers’ privacy rights with minimal impact on Internet browsing. Unfortunately, as soon as it went into effect, many European consumers of American media found themselves unable to access dozens of major U.S. websites — including the Los Angeles Times — for months. Given that even large websites couldn’t ensure compliance with the cumbersome regulations, one can imagine how hard these regulations must have hit smaller companies.
Despite these problems, the American media’s response to the GDPR’s rollout has been somewhat positive. On June 8, barely a year after it went into effect, the New York Times published an editorial asking “Why Is America So Far Behind Europe on Digital Privacy?” The piece mainly serves to criticize the inability of U.S. regulators to keep pace with their European counterparts, presuming that heavy government regulation is the answer to the problem.
Of course, consumers should be extremely suspicious of any company collecting the amount of sensitive data that the biggest tech firms do today, and the Times is right to give voice to those suspicions. In the same way you would not approve of your doctor sharing private conversations involving medical conditions, symptoms, or prescriptions without your consent, neither should you be entirely comfortable with your personal information being stored or shared irresponsibly by the nation’s tech giants.
It is also true that current regulations don’t go far enough in protecting consumers. The Electronic Communications Privacy Act, which governs online data collection, only prohibits companies from intercepting communications without permission; it does nothing to address how companies handle the data they freely collect with consumers’ tacit, often unintentional, consent. For example, most consumers would be surprised to learn how many apps freely and legally access a phone’s microphone to collect keywords for targeted advertising.
So it is not hard to understand the desire for additional guardrails to be put in place in order to prevent the kinds of abuses and overreach to which endeavors such as Facebook’s new study could lead. But it should also be noted that the market can and does self-correct, and it largely did after the abuses of Facebook’s first study were revealed by TechCrunch. After a public outcry, Apple revoked the enterprise certificate Facebook had used to distribute the app outside the App Store, citing violations of its developer terms of service. This is as it should be: Consumers were educated by TechCrunch, they demanded action, and the market responded. And there is a good argument to be made that any comprehensive regulatory plan passed by Congress would be easily circumvented by the largest firms and thus serve only as a barrier to entry for smaller companies.
Yes, the vast amounts of data collected by companies around the world should shock even the least privacy-conscious individual. How collected data is handled by these companies also largely remains shrouded in mystery, and that lack of transparency is a problem for anyone who believes in better educating consumers and letting the market correct itself. The best methods for protection are to remain aware of the permissions apps are requesting from your devices, download apps only from trusted developers, and consider using a virtual private network (VPN) to encrypt your Internet traffic on devices that handle information you wish to keep secure.
There still exists a legitimate need for companies to collect information if they are to maintain and improve the online services consumers use every day. And there will never be a way to guarantee consumers complete online privacy or data security; anyone who uses the Internet should recognize that doing so inevitably carries some risk. But something like Rubio’s proposal, which aims to extend pre-Internet privacy protections to the web, would be sensible. Holding tech companies accountable for the same sorts of data spillage that would violate the law if they came from a “traditional” or “analog” company seems to be a much simpler solution than a complete digital firewall designed by the oldest members of Congress and put in place to protect consumers from themselves.