Ryan Calo writes:
UPDATE: As told to Jules Polonetsky over at The Future of Privacy Forum, Capital One was engaging in “totally random” rate changes that were not related to browser type. On the other hand, according to the Wall Street Journal, Capital One was at one point using [x+1] data to calibrate what credit card offers to show.
The other day, I suggested that the facts of the Clementi suicide may perfectly illustrate why no actual transfer of information is necessary for someone to suffer a severe subjective privacy harm. (Thanks to TechDirt and PogoWasRight for the write ups.)
Just now I learned about an allegation against Capital One that the company offered someone a different lending rate on the basis of what browser he used (Chrome vs. Firefox). A similar allegation was made against Amazon, which apparently used cookies for a time to calibrate the price of DVDs.
Here you have a clear objective privacy harm: your information (browser type) is being used adversely in a tangible and unexpected way. It matters not at all whether a human being sees the information or whether a company knows “who you are.” Neither personally identifying information, nor the revelation of information to a person, is necessary for there to be a privacy harm.
Okay, I haven’t had enough coffee yet today and I’m exhausted from a trip to Atlanta, but I’m having a tough time grokking how this situation has anything to do with privacy at all.
I have no doubt that there’s a negative impact when browser or cookie information is used as described above, but are we now equating “information” with “privacy”? People who connect to a web site generally understand that the site they visit can detect what browser they’re using, along with a whole slew of other information. I don’t consider most of that information “private” information. Where you were before you visited the site (the referral URL) should be private, as should your IP address (in my opinion, anyway), but the other stuff? I don’t see it.
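To make concrete what a site “detects,” here is a minimal sketch (my own illustration, not from Ryan’s post) of the headers a browser volunteers on every ordinary request — the browser type and the referring page arrive unprompted, no login or “identity” required. The request contents are hypothetical:

```python
# Parse a sample raw HTTP request the way a server would, to show which
# headers (User-Agent, Referer, Cookie) a site sees automatically.
from email.parser import Parser

RAW_REQUEST = """\
GET /card-offers HTTP/1.1
Host: example.com
User-Agent: Mozilla/5.0 (X11; Linux x86_64) Firefox/3.6
Referer: http://competitor.example/rates
Cookie: session=abc123
"""

def headers_seen_by_server(raw: str) -> dict:
    """Split off the request line, then parse the header block."""
    _, _, header_block = raw.partition("\n")
    msg = Parser().parsestr(header_block)
    return dict(msg.items())

hdrs = headers_seen_by_server(RAW_REQUEST)
print(hdrs["User-Agent"])  # the "browser type" a site could key pricing on
print(hdrs["Referer"])     # where you were before -- the part I'd call private
```

The point of the sketch: the “browser type” used for differential pricing travels in the same envelope as the referral URL; the question is which of those fields counts as “private,” not whether the site can see them.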
If someone discriminates against you because they know your race, creed, gender, religion, etc., is that necessarily an “objective privacy harm” just because it is based on a personal factor or characteristic? Don’t we need to distinguish between “personal information,” “private information,” and “privacy?” And if we don’t, then we run the risk of having to conclude that any unequal treatment of people based on any information about them or their belongings is a “privacy harm,” which could make the whole notion of “privacy harm” so broad as to be totally useless.
Ryan, if I’m missing something in your argument, please clarify, but I don’t see how differential rates based on browser type are *any* type of “privacy harm,” even though they are economically disadvantageous or unfair in some sense.