
Sunday, February 24, 2019

Lloyd v. Google

If it were simply a play, Shakespeare might have called it "Privacy, or What You Will."

On Friday, the Wall Street Journal broke just the latest story, its lens aimed at Facebook, concerning the all-too-fluid movement of smartphone users' information from (other) apps to Facebook.
"It is already known that many smartphone apps send information to Facebook about when users open them, and sometimes what they do inside. Previously unreported is how at least 11 popular apps, totaling tens of millions of downloads, have also been sharing sensitive data entered by users. The findings alarmed some privacy experts who reviewed the Journal’s testing."
According to the WSJ's tests, heart-rate monitoring apps were sharing users' heart rates with Facebook.  And period-and-ovulation tracking apps "told Facebook when a user was having her period..."  (As you might have guessed, much or all of this intra-app sharing reportedly occurred without the user's consent - the apps share the information with Facebook, but do not tell their users that their information will be shared with Facebook.  And so, consumers come to realize that they are the product, not the customer.)

Why would Facebook care to know?  One answer is that if Facebook understands its users better, it can send them more targeted advertising.  If you're known to be pregnant, you're perhaps more likely to click on diaper or baby-crib adverts.  (This is, ostensibly, a benefit to Facebook's users, who enjoy receiving advertisements more likely to be of interest to them.  Ostensibly!)

Lloyd v. Google

The "key" to your online data - unsecured
Over the last few months, we have been researching an interesting lawsuit (and ruling) out of London.  The lens in that case was focused on Google, but many of the issues were similar.

In the Google matter, Google was alleged to have found a way around Apple's safeguards, imposing its own third-party cookies on Apple users' iPhone devices - so that Google could track iPhone users' web activities.

When looking into online privacy-related actions in the UK, at least three similar examples came to light, in each case with the U.K. Information Commissioner’s Office (the ICO) leading the charge in fining entities:
  1. ICO fined the Leave.EU campaign for “serious breaches of electronic marketing laws” during the 2016 Brexit referendum. The ICO found a significant relationship (e.g., overlapping directors) to exist between Leave.EU and an insurance company, Eldon Insurance Services Ltd (“Eldon”). Commissioner Denham noted that it “is deeply concerning that sensitive personal data gathered for political purposes was later used for insurance purposes and vice versa. It should never have happened.” Eldon would, for example, pitch Leave.EU campaign supporters by way of email newsletters offering “10% off” for Leave.EU supporters. Leave.EU did little, if anything, to protect the acquired data when sharing it with Eldon (which trades as GoSkippy Insurance). “It was confirmed that there is no formal contract in place between Leave.EU and GoSkippy to provide direct marketing, and that the inclusion was an informal arrangement.”

  2. ICO found that Emma’s Diary (a website that provides pregnancy and related advice to mothers and mothers-to-be) illegally collected and sold personal information on over one million people to Experian Marketing Services, a branch of the consumer credit rating agency, “specifically for use by the Labour Party.” 

  3. ICO fined Facebook £500,000 for serious violations of data protection law – the maximum fine allowable under the applicable laws at the time the incidents occurred. The ICO determined that “between 2007 and 2014, Facebook processed the personal information of users unfairly by allowing application developers access to their information without sufficiently clear and informed consent ....” According to Commissioner Denham, “Facebook failed to sufficiently protect the privacy of its users before, during and after the unlawful processing of this data.” The personal information of over one million users was harvested and consequently “put at risk of further misuse.” 
Putting the ICO fines and the Lloyd v. Google case itself together, we see at least one common theme and outcome: people’s social information (often personal/private) is clearly being mixed with their financial and political interests, whether they are aware of it or not.

We have also gone back to the late 1800s and early 1900s to quote the revered jurist Louis Brandeis.  In his famous dissent in Olmstead, he described the “right to be let alone” as “the most comprehensive of rights, and the right most valued by civilized men.” Olmstead v. United States, 277 U.S. 438 (1928).

------------
Our analysis of the London High Court's ruling in Lloyd v. Google is now available to anyone interested.  We have sought to add a data-market analysis to the commentary, so that readers can easily come to terms with how one might value personal data (and in an effort to make it an engaging read!).

For our prior coverage of consumer data markets, click the "Consumer Data Markets" label on the right-hand panel of this blog.

Monday, January 8, 2018

Blog 1 of 2018 / Markets for Consumer Data & User-Beware

Hello readers, and thanks for joining us for our first blog of the new year.

This year, in addition to our typical musings on financial markets, we'll be writing a little more on consumer data.

Hacks have been all the rage for a while already (Equifax and Yahoo! more recently; Target in 2013, Sony PlayStation in 2011 and TJ Maxx in 2007 are not-too-distant memories).

But our interest lies somewhere else: in what is happening behind the scenes with our data.

hiQ v LinkedIn

hiQ is a San Francisco-based startup that was doing something pretty interesting: it was scraping data from LinkedIn and then selling that data, or analyses done using that data.

LinkedIn didn't much like this, with hiQ's automated robots ("bots") bypassing LinkedIn's security measures to scrape the data, which LinkedIn felt undermined its privacy commitments to its members.

They battled it out in court, with the court finding in August, probably reasonably, that LinkedIn could not stop hiQ from scraping the publicly viewable information from LinkedIn's website.  In fairness, LinkedIn doesn't own its users' data -- it's our data! -- and therefore couldn't limit hiQ's ability to access or study it.

The ruling is interesting for several reasons, including some of the First Amendment-type arguments made by hiQ to support its right to scrape.  
“To choke off speech and the precursor of speech, the gathering of facts and the analysis of information, is a dangerous path down which we should not go,” 
         -- Harvard law professor Laurence Tribe, representing hiQ, reportedly told the judge.
“hiQ believes that public data must remain public, and innovation on the internet should not be stifled by legal bullying or the anti-competitive hoarding of public data by a small group of powerful companies, ... It is important to understand that hiQ doesn’t analyze private sections of LinkedIn – we only review public profile information. We don’t republish or sell the data we collect. We only use it as the basis for the valuable analysis we provide to employers. ”
            -- hiQ said in a statement 

Okay, meh. But how about what happens next?  Like Microsoft (which owns LinkedIn), firms like Facebook, Amazon, eBay and Google control and study copious amounts of customer data (and they sometimes get hacked and lose control of it).  But, importantly, they also sell it (as does hiQ).  We might like to think that they only sell aggregate data, but how would we know?

Personal, Personnel Information

What interests us is that hiQ sells information about LinkedIn users to those users' bosses, including information, generated from scraping LinkedIn, about the likelihood of an employee leaving. hiQ's clients reportedly include companies like CapitalOne and GoDaddy, and hiQ's products include Keeper, which identifies, for employers, when their employees are at risk of leaving for another job. (For example, when employees are "looking around," they tend to make connections on LinkedIn.)

So that's a sale not of the more obvious, vanilla personal information (name, address, date of birth), but of users'/employees' tendencies and movements. And it's almost certainly not aggregated: if it were aggregated, it would be worthless to an employer.  Sure, hiQ isn't selling the individual's vanilla data itself, but it has done a basic analysis of an individual's behavior and is selling that analysis.  Aggregated, it is not.
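A toy example makes the aggregation point concrete. Everything below is invented for illustration (the names, the "new connections" signal and the threshold are all hypothetical, not hiQ's actual methodology): a per-person analysis identifies a specific employee, while the aggregate of the very same data identifies no one.

```python
# Hypothetical illustration: why "aggregated" data is worthless to an
# employer while per-person analysis is not. All names, numbers and
# the flagging rule are invented for the example.
records = [
    {"employee": "Alice", "new_connections_30d": 14},  # actively networking
    {"employee": "Bob",   "new_connections_30d": 1},
    {"employee": "Cara",  "new_connections_30d": 2},
]

# Per-person analysis: flag individuals whose recent activity might
# suggest they are "looking around."
flagged = [r["employee"] for r in records if r["new_connections_30d"] >= 10]

# Aggregated view: a single average, with no individual attached.
average = sum(r["new_connections_30d"] for r in records) / len(records)

print(flagged)            # ['Alice'] -- points at a specific person
print(round(average, 2))  # 5.67 -- tells the employer nothing about whom
```

The value to the buyer sits entirely in the first output; the second, genuinely aggregated, figure is useless for retention decisions.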

hiQ would have no greater ownership interest in our data than LinkedIn has.  Through hiQ's bots, we simply have a work-around (imagine, for example, that LinkedIn were simply to buy a stake in hiQ).  If we had known that LinkedIn could "tell on us" to our employers -- and make money doing so -- would we have signed up?

We have the Latin expressions caveat emptor and caveat venditor to connote the shorthand principles of buyer-beware and seller-beware when entering into transactions.  In 2018, the awkward expression caveat utilitor -- user-beware -- might just become part of our lexicon.

The value of Johnny's house, or Sandy's choice of handbags, is information that would help advertisers target Johnny or Sandy more appropriately.  If Johnny's house price is on the low end, all else equal one wouldn't push Maserati ads at him.  If Sandy is buying Louis Vuitton bags, well, maybe she would like this newly released Prada bag or another Louis Vuitton bag.  But whose data is that, and do companies have a right to sell and profit from that information?  And how do we separate data, the sale of which may be limited on an individual basis, from analysis of data, which seems to be fair game?  Are these two both analyses?

  • Johnny's house was purchased for $200K this year
  • Last year, Sandy bought two of Brand X's bags and three of Brand Y's bags.

All the best for 2018.  Keep watching the Watchmen.

~ PF2

Wednesday, June 21, 2017

The Electronification of Consumer Pricing

Would it be interesting if we told you that owners of Apple products often pay more for the same product when purchasing it online?  What about if we told you that airlines might jack up the price of their products based on your level of enthusiasm for buying a ticket?  

A version of this is occurring in the United States.  We're going to explain some of the beauty of this process, from an efficient markets perspective, but we'll also try to home in on the ways in which it can be jarring, if you're the user/customer/client/consumer, that is.

Briefly, companies like Expedia.com, Hotels.com, Home Depot or Walmart might "collect" data from you or your computer when you search for a product or a flight or a hotel -- when you "make a request."  A user's request often brings with it information such as the user's browser, operating system and IP address, along with other information maintained in what are called tracking cookies.  Of course, if you're logged in, the company may already have some of this information about you (your purchase preferences, say) or other information it cannot get via the cookies or IP address.
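To make the mechanics less abstract, here is a minimal sketch of what a server-side script might read off a single incoming request. The header values and cookie names are invented for the example; real trackers are far more sophisticated, but the raw ingredients (User-Agent string, cookies, client IP) look roughly like this:

```python
# Hypothetical request headers, as a server might see them. The values
# (and the cookie names) are invented for illustration.
request_headers = {
    "User-Agent": ("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) "
                   "AppleWebKit/605.1.15 (KHTML, like Gecko) Safari/605.1.15"),
    "Cookie": "visit_count=3; last_search=JFK-LHR",
    "X-Forwarded-For": "203.0.113.42",  # client IP as seen by the server
}

def profile_request(headers):
    """Extract the coarse signals discussed above from request headers."""
    ua = headers.get("User-Agent", "")
    # Parse the Cookie header into a name -> value mapping.
    cookies = dict(
        pair.strip().split("=", 1)
        for pair in headers.get("Cookie", "").split(";")
        if "=" in pair
    )
    return {
        "apple_device": ("Mac OS X" in ua) or ("iPhone" in ua),
        "ip": headers.get("X-Forwarded-For"),
        "prior_visits": int(cookies.get("visit_count", 0)),
    }

print(profile_request(request_headers))
# {'apple_device': True, 'ip': '203.0.113.42', 'prior_visits': 3}
```

Nothing here requires a login: the device inference, the IP-derived location and the repeat-visit count all arrive with the request itself.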

Based on your information, companies often personalize their results for you.  

There are at least two forms.  They might steer you, say, to relatively more expensive hotels if you're using an Apple product, as they might infer that you're relatively wealthy.  Or they might customize the pricing of the product based on your perceived level of interest.  A person who regularly flies home for Christmas may be a prime candidate for an increased price; or, if a prospective buyer has already checked the price of a flight twice, it might indicate that he's relatively more desperate to buy that specific ticket.



Image licensed by PF2 Securities from Condé Nast (New Yorker)
Different from typical supply-and-demand economics, the new age presents a form of dynamic pricing that measures a person's capacity and motivation to purchase.

Here, it's not just demand for one unit or one ticket: it's a relatively enthusiastic, or relatively needy and able, buyer, as opposed to a marginal buyer, who may go elsewhere if the price is too high.

From a company's perspective, it is using your information to price its product accordingly for you, and if it optimizes this process, it can make a lot more money than store sales allow, with store sales typically requiring it to fix a single price per good.

One way to think of dynamic pricing is that it adjusts to you.  If you're regularly checking the price, it may just go up because you're checking it.
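A toy pricing function captures the idea. To be clear, this is purely hypothetical: the base price, the two signals and the multipliers are all invented, and real airline or retail systems are proprietary and vastly more elaborate. But the shape of the logic (a base price adjusted upward for inferred ability and eagerness to pay) is as described above:

```python
# Hypothetical sketch of a dynamic-pricing rule. Base price, signals
# and multipliers are all invented for illustration.
def quote_price(base_price, apple_device, prior_searches):
    """Quote a price adjusted for inferred wealth and eagerness."""
    price = base_price
    if apple_device:
        price *= 1.05          # inferred ability to pay
    if prior_searches >= 2:
        price *= 1.10          # inferred eagerness: repeat lookups
    return round(price, 2)

# A first-time visitor vs. a repeat visitor searching from a Mac:
print(quote_price(400.00, apple_device=False, prior_searches=0))  # 400.0
print(quote_price(400.00, apple_device=True,  prior_searches=3))  # 462.0
```

The second shopper pays more for the identical ticket simply because of who the model thinks they are and how often they have looked.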

This seems in many ways to perfectly capture the idea of capitalism.  It may also be agreeable to consumers in perfectly competitive markets.  But absent perfect competition (e.g., in the presence of monopolies), or when market players dynamically calibrate their models in ways that seem to work in concert (even if it's not collusion), consumer-customers might feel they are being squeezed.

This graphic shows the results of ticket searches on Kayak's website and Google Flights on different days.
Several airlines are perfectly matching one another, to the dollar, on certain tickets (for better or worse).

Some jurisdictions take issue with price customization, which is often called price discrimination, although that term brings with it negative connotations.  While many jurisdictions have (or should have) legal concerns if price discrimination is gender- or race-based, some already have difficulties if the pricing is not objectively justifiable from the seller's perspective, for example based on the specifics of the order (e.g., the company's cost of producing it for a specific user, the delivery location or the quantity).  But many jurisdictions feel it is in the seller's power to adjust the price as he or she sees fit, outside of, say, race- and gender-based discriminatory issues.

The problem is magnified in the following example:

Suppose you want to buy a rare and expensive item -- maybe a sought-after watch or antique furniture or a Stradivarius -- and you call to ascertain its price and availability.  You drive 200 miles to buy it and, on arriving at the destination, the seller tells you that, since you asked after it, he realized that demand was high and he lifted the price in the intervening hours.  As a prospective purchaser, you might rightly feel that your eagerness to purchase the product was used against you (and you are now committed to purchasing at the newly higher price, given you have just driven 200 miles).

You were, in concept, taken advantage of.  But legally, is there anything wrong with that?  This is a question for the new age of online, intra-day price re-calibration.  Companies can publicly access or otherwise purchase information about your spending habits, even the price you paid for your home, and this data may help them fit their pricing algorithms, uniquely tailored to you.  This is happening.  This is the new world of e-commerce, and we should try to understand and discuss the issues involved.  We welcome your feedback.

---------------

For further reading on dynamic pricing on e-commerce websites, see here.

For issues on pricing/valuation concerns in the financial markets, click here.