Data snatchers! The booming market for your online identity

A huge, mostly hidden industry is raking in billions collecting, analyzing, and sharing the information you put on the Web. Should you be worried?

Bad, Good Cookies: Some browsers, such as Google’s Chrome, allow a user to block third-party cookies, which track the user’s movements around the Web, while permitting (generally useful) “first-party” cookies that remember user settings and content choices for just one site. This “local data” helps to tailor-fit content to the user in future site visits.

Using cookies to recognize people online and sync up data about them isn't ideal, however. A cookie lives in the browser of a particular PC, so in a shared household it may mix together the browsing histories of everyone who uses that machine. And cookies may not last very long in the browser: Security software is often set to delete cookies once a week. People in the online advertising industry call such deletions "cookie erosion."
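
For the technically curious, the exchange behind that kind of recognition is simple. The Python sketch below shows, in rough form, how a hypothetical third-party tracker assigns a browser an ID on the first visit and recognizes it afterward; the cookie name, lifetime, and function are invented for illustration, not any real ad network's code.

    import uuid

    def handle_tracker_request(cookie_header):
        """Handle one request to a hypothetical ad-tracking pixel."""
        if cookie_header and cookie_header.startswith("uid="):
            # Returning browser: the ID it sends back links this visit to earlier ones.
            return cookie_header[len("uid="):], {}
        # First visit: mint a random ID and ask the browser to keep it for a year.
        visitor_id = uuid.uuid4().hex
        return visitor_id, {"Set-Cookie": "uid=%s; Max-Age=31536000" % visitor_id}

    vid, headers = handle_tracker_request(None)         # first visit: ID assigned
    same_vid, _ = handle_tracker_request("uid=" + vid)  # later visit: same ID returns
    assert same_vid == vid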

Naturally, companies are springing up with technologies that resolve these issues. New "fingerprinting" technologies use sophisticated techniques to verify that the personal data collected at different sites, at different times, and for different reasons all comes from the same consumer.

BlueCava, based in Irvine, California, has developed a "device ID" technology that identifies site visitors based on the unique combination of settings in their Web browser. The company then buys demographic, preference, and Web-tracking data from site publishers all over the Web, and matches and adds that data to the identified users' profiles in its database. It can then sell all that profile data to advertisers and marketers. BlueCava CEO David Norris says that his company's technology can identify devices with 99.7 percent accuracy, and that it has already identified roughly 10 percent of the 10 billion Internet-connected devices in the world.
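
In rough outline, device fingerprinting amounts to hashing a stable combination of browser and device settings into a single identifier. The Python sketch below illustrates the general idea only; the attributes chosen and the hashing scheme are assumptions, not BlueCava's actual method.

    import hashlib
    import json

    def device_fingerprint(attributes):
        """Reduce a set of browser/device attributes to one short ID."""
        canonical = json.dumps(attributes, sort_keys=True)   # stable ordering
        return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

    fingerprint = device_fingerprint({
        "user_agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) ...",
        "screen": "1920x1080x24",
        "timezone_offset": -480,
        "language": "en-US",
        "fonts": ["Arial", "Calibri", "Comic Sans MS"],
        "plugins": ["Flash 11.1", "Java 1.6"],
    })
    print(fingerprint)  # clearing cookies doesn't change these settings, so the ID persists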

Fingerprinting Challenges Anonymity Online

Fingerprinting technologies like BlueCava's give some in the privacy community serious pause. "I think device ID is really unethical," says Kaliya Hamlin of the Personal Data Ecosystem Consortium. "It's one thing to put cookies in your browser, because you can throw them out; but a device ID is permanent, and takes away your means of defining context in your digital life."

Hamlin believes that device ID degrades privacy by taking away our ability to use alternate identities online to keep assorted aspects of our digital lives separate.

In the physical world, Hamlin points out, we can use physical distance and time to separate the various contexts in which we operate. We can get in the car and drive to our kids' school for a teacher conference, then drive across town to an AA meeting, and maybe participate in a hobby on the weekends. The info we give out in each of these contexts stays separate because we give it to different people at different places at different times.

But online, Hamlin notes, those firewalls just don't exist. Instead, to stay anonymous, people rely on various nicknames and avatars at the sites they frequent. Device ID defeats this practice: it concerns itself with the device and the browser people use to access websites, not the identities they set up there, and it ties all of those identities together into one big profile.

"Device ID is almost like the police putting GPS trackers on cars, which the Supreme Court just ruled illegal [in United States v. Jones]," Hamlin says. The one difference is that a driver can remove a GPS tracker, but a device ID is established far away, so a computer user can't easily remove it.

BlueCava's Norris counters that his company will remove a device ID from its system if a consumer requests it at the company's website. Norris says that this accommodation protects privacy better than Do Not Track cookies do, because, he says, an opt-out cookie can easily be deleted in the browser (by the user or by antivirus software), whereas the removal of a device ID from BlueCava's system is permanent.

The problem, however, is that most people will never even know that a device ID exists for them.

'Big Data' Analysis Infers a Lot From a Little

So-called Big Data is one of the few big concepts that will define technology and culture in the first part of the 21st century. The term refers to the capture, storage, and analysis of large amounts of data. This can mean any kind of data, but the term often refers to the collection and analysis of personal data.

Google arguably pioneered the deep analysis of terabytes of data, but Big Data practices are now in place at all kinds of organizations, from law enforcement to dating sites to UPS to Major League Baseball. IDC (owned by the same parent company as PCWorld) estimates that the $3.2 billion companies spent on Big Data in 2010 will grow to $16.9 billion by 2015.

Among people involved in the personal data economy in one way or another, one anecdote comes up over and over again, and beautifully demonstrates both the possibilities and the dangers of Big Data.

A story by Charles Duhigg in the New York Times Magazine in February 2012 described how analysts in Target's predictive data department developed a way to use the company's customer data to predict the pregnancies (and future baby-product needs) of its female customers, sometimes even before a woman's family knew she was pregnant.

This was an extremely important discovery for Target because it allowed the company to show the women ads for various baby products timed to each phase of the pregnancy. There was an even bigger bonus. During the stressful months of pregnancy, future moms' and dads' normal buying habits frequently go out the window, and they look for the most convenient place to buy everything. If Target could get the women into its stores to buy baby products, it might become their go-to source for all sorts of products.

The Target analysts got their breakthrough by looking at the buying histories of women who had signed up for new baby registries at Target. The analysts noticed that pregnant women often bought large amounts of unscented lotion around the start of their second trimester, and that sometime during the first 20 weeks of their pregnancies they bought lots of supplements like calcium, magnesium, and zinc.

The analysts then searched for these same "markers" in the purchase histories of all female customers of childbearing age, flagged the likely moms-to-be, and sent them offers and coupons for baby products carefully timed to the various stages of pregnancy. Ka-ching.

This is a relatively simple example, and one that happened to be reported in the media. But, as the Duhigg article points out, most large companies in America now have "predictive analysis" departments and are learning to look for the kind of markers that Target discovered hidden in its data.
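
Stripped to its bones, that kind of marker scoring takes only a few lines of code. The markers, weights, and threshold in this Python sketch are invented for illustration; Target's real model is proprietary and far larger.

    # Hypothetical marker weights inspired by the Duhigg account.
    PREGNANCY_MARKERS = {
        "unscented lotion": 2.0,
        "calcium supplement": 1.5,
        "magnesium supplement": 1.5,
        "zinc supplement": 1.5,
        "cotton balls": 0.5,
    }

    def pregnancy_score(purchases):
        """Sum the weights of any pregnancy markers in a shopper's purchase history."""
        return sum(PREGNANCY_MARKERS.get(item, 0.0) for item in purchases)

    shopper = ["unscented lotion", "zinc supplement", "cotton balls", "dog food"]
    if pregnancy_score(shopper) >= 3.0:          # invented threshold
        print("add shopper to the baby-coupon mailing list")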

Big Data Puts Privacy in a New Light

In the Target case, future parents were served with highly relevant ads and offers, and the retailer found a new way to reach its customers and pump up sales. No problem, right?

Wrong, say privacy advocates. The warehousing and analysis of so much data, and so many types of data, might lead the curators of the databases to infer things about us that we never intended to share with anybody. The data might even predict future behaviors that we ourselves don't yet know we're going to engage in.

The "predictive analysis" of Big Data is often called "inductive analysis" in academic and re­­search circles because it induces large meanings from small sets of facts or markers.

"Inductive analysis concerns itself with singular things that can seem to be innocuous, but that when combined with other innocuous data points--like your favorite soda--can create meaningful predictors of behaviors," says Solon Barocas, a New York University graduate student who is working on a dissertation about inductive analysis.

Target, for instance, didn't even need to know the names of the women it ended up sending pregnancy ads to. It simply delivered targeted ads to a group of addresses with the right demographics and a common pattern of past purchases. Using a process so cold and machinelike to predict something as human and personal as pregnancy is creepy.

In the next ten years, marketers and advertisers will spend more and more on Big Data science, focusing on finding analysts who can discern patterns in large pools of data. Big Data analysis positions are the new hot jobs, and the people who will fill them are a new breed, with new skills. "These people need traditional statistics and computer-science backgrounds, but also some coding and basic hacking skills," Barocas says.

Big Data analysts don't just help target ads for products. A political campaign might do a survey of 10,000 people to learn about their demographics and political choices. It might buy more data about those people from one of the large data sellers, like Acxiom or Experian, then search for unique markers in the data that would predict future political leanings.

But those predictors may bear no obvious relation to what they predict, Barocas says. "For instance, the analysts might find that something odd--like what fashion-magazine subscription people hold--is a strong predictor of the kind of candidate they're likely to vote for."
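
In practice, that marker hunt comes down to joining survey answers with purchased consumer data and checking which attributes line up with stated preferences. The toy Python example below shows the shape of the calculation; every voter, field name, and value in it is made up.

    survey = {"v001": "candidate_a", "v002": "candidate_b", "v003": "candidate_a"}
    purchased = {
        "v001": {"fashion_mag": 1},
        "v002": {"fashion_mag": 0},
        "v003": {"fashion_mag": 1},
    }

    def support_rates(attribute):
        """Candidate A's support among voters with and without a given attribute."""
        def rate(group):
            return group.count("candidate_a") / len(group) if group else 0.0
        with_attr = [survey[v] for v in survey if purchased[v][attribute] == 1]
        without = [survey[v] for v in survey if purchased[v][attribute] == 0]
        return rate(with_attr), rate(without)

    print(support_rates("fashion_mag"))  # a wide gap makes the magazine a useful predictor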

In future elections and ballot initiatives, billions will be spent on making inferences about voters, and about the issues, candidates, and political ad content that they might be sympathetic to. The campaign with the best personal data and the best analysts may win. That seems like a very undemocratic way to choose our policies and leaders.

Experts say that in the future, predictive analysis will advance to the point where it can tease out information about people's lives and preferences using far more, and far more subtle, data points than were used in the Target case. The inductive models that some companies already use are huge, containing up to 10,000 different variables--each with an assigned weight based on its ability to predict.
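
At heart, such a model is a long weighted sum turned into a probability. The three-variable Python sketch below stands in for the thousands of variables a production model would carry; the variable names and weights are invented.

    import math

    def predict(variables, weights, bias=0.0):
        """Weighted sum of variables squashed into a 0-to-1 probability."""
        score = bias + sum(weights.get(name, 0.0) * value
                           for name, value in variables.items())
        return 1.0 / (1.0 + math.exp(-score))    # logistic function

    weights = {"buys_unscented_lotion": 1.2,
               "subscribes_fashion_mag": 0.4,
               "median_income_thousands": 0.003}  # real models carry thousands of these
    person = {"buys_unscented_lotion": 1.0,
              "subscribes_fashion_mag": 1.0,
              "median_income_thousands": 52.0}
    print("predicted probability: %.2f" % predict(person, weights))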

But Big Data analysis may have a built-in public relations problem, because its way of predicting human behavior seems to have little to do with human behavior. Unlike traditional analysis, which seeks to predict future preferences or behaviors based on past ones, the field's inductive analysis concerns itself only with patterns in the numbers.

Will They Stop?: Browsers such as Firefox 11 and Internet Explorer 9 and 10 allow you to tell websites (via an HTTP header sent to the Web server) that you don't want to be tracked. Unfortunately, not all websites and online advertisers have committed to honoring the signal. And many sites merely stop serving targeted ads to the browsers of users who send a "do not track" request, but continue to collect those users' personal data.
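
The signal itself is nothing exotic: it is a single header, DNT: 1, added to each HTTP request the browser makes. This short Python sketch sends one to a placeholder address; whether a site acts on it is entirely up to the site.

    import urllib.request

    request = urllib.request.Request("https://www.example.com/",
                                     headers={"DNT": "1"})   # "do not track me"
    with urllib.request.urlopen(request) as response:
        print(response.status)   # the page loads either way; honoring DNT is voluntary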

After Target "targeted" baby ads at women it thought were pregnant, the women and their families criticized the company's tactics. They were creeped out by the ads be­­cause Target's inference about them could not be mapped to any piece of data that they had already provided. Even though Target was correct in its inferences, it was simply not intuitive that the purchase of cotton balls and lotion would predict that the buyer was pregnant and would soon be buying diapers.

More than anything else, this new, mathematical method of analysis may force us to look at our privacy and the way we manage our personal data in a whole new light. After all, it's unsettling to know that hundreds of unrelated bits of our data can be pulled together from a hundred different sources (perhaps verified by fingerprinting technology like BlueCava's) and analyzed to reveal numeric patterns in our behavior and preferences.

"Even the smallest, most trivial piece of information might be strung together with other pieces of information in a pattern that is sufficient enough to infer something about you, and that's a challenging world to live in because it upsets our basic intuitions about discretion," Barocas says.

Transparency, Inclusion Might Help Everyone

When Target realized its baby-products ads were getting a negative response, it didn't pull the ads; instead, it elected to hide them among unrelated and less-targeted ads when showing them to pregnant women. Rather than asking female customers if they were interested in special offers for baby products, the company chose to infer the answer in secret.

And that lack of transparency may be the single biggest objection to consumer tracking and targeting today. Advertisers are spending millions to combine, transmit, and analyze personal data to help them infer things about consumers that they would not ask directly. Their practices with regard to personal data remain hidden, and they're tolerated only because people don't know about them.

Such tracking and targeting also feels arrogant. Consumers may not mind being marketed to, but they don't want to be treated as if they were faceless numbers to be manipulated by uncaring marketers. Even the term "targeting" betrays a not-so-friendly attitude toward consumers.

Ironically, advertisers might be far more successful if they pulled back the curtain and included consumers in the process. It's well known that the personal data in the databases of marketers and advertisers is far from completely accurate.

Maybe, as several people I talked to for this story pointed out, the best way to collect accurate data about consumers is to just ask them. And if an advertiser is hesitant to ask for a certain piece of personal data, the advertiser shouldn't infer it.

"What our organization is trying to work out is whether or not there's a way to [collect personal data] where the user knows what's happening and companies [get] their data not by stalking [users] but by asking them," says the Personal Data Ecosystem Consortium's Hamlin.
