It is not a secret that US companies assemble towering mountains of data on their customers. Over the years that we’ve covered the intersection of privacy and the internet, it’s been openly acknowledged that there are vast data brokers operating behind the scenes, selling information about us to other companies that want the data. But the very nature of these firms is that they operate in silence and secrecy. Companies historically haven’t wanted to admit that they collect data, and if they do collect data, they don’t want to acknowledge just how much.
The New York Times, it seems, has had some luck prying open the data vaults and discovering exactly what’s going on behind the curtain. Each of us has “secret scores” being used to calculate everything from how likely we are to return a product to whether or not we should be allowed to borrow money. The NYT author requested a copy of her own data file from one such data broker, a company named Sift, and received the following:
More than 400 pages long, it contained all the messages I’d ever sent to hosts on Airbnb; years of Yelp delivery orders; a log of every time I’d opened the Coinbase app on my iPhone. Many entries included detailed information about the device I used to do these things, including my IP address at the time.
Sift knew, for example, that I’d used my iPhone to order chicken tikka masala, vegetable samosas and garlic naan on a Saturday night in April three years ago. It knew I used my Apple laptop to sign into Coinbase in January 2017 to change my password. Sift knew about a nightmare Thanksgiving I had in California’s wine country, as captured in my messages to the Airbnb host of a rental called “Cloud 9.”
Companies have begun to offer customers a look at their own data records after the EU passed the GDPR and California passed its own Consumer Privacy Act. In June, the Consumer Education Foundation also asked the FTC to investigate the use of these shadow scores, which shape how people are allowed to shop and what offers they receive in ways most people are entirely unaware of. One point the article makes is that simply having these voluminous reports is insufficient: we know nothing about the algorithms and data analytics being used to evaluate our behavior. It’s not clear whether these companies have even found useful ways to measure the traits they claim to measure, yet the damage caused by being misclassified by an algorithm can affect everything from the interest rates you’re offered to the customer service you receive. The problem of bias in algorithms is no longer theoretical; corporations like Amazon and Google have acknowledged the current limits of these technologies when forced to do so. That hasn’t stopped companies from rushing to implement them, however.
We’re at the point where companies are starting to show what data they use to make decisions, but not how they weight or apply it. Both of those things matter. If we’re going to invent HumanMark as a society, we humans who will be rated by it ought to have some say in how those ratings are generated and how companies are allowed to use the information.
The NYT article steps through the specific process for requesting your own personal data file from Sift (consumer trustworthiness), Zeta Global (identifies high rollers with money to burn), Retail Equation (helps companies decide whether or not to accept a return), Riskified (develops fraud scores), and Kustomer, which promises “unprecedented insights into a customer’s past experiences and current sentiment.” It also notes that just because companies promise to provide data doesn’t mean they actually will, using the example of Kustomer, which has apparently been rather hard to reach.