A Slice of Pizza For Your Most Intimate Secrets? How Much (Or Little) We Value Our Digital Privacy


MANDEL NGAN/AFP/Getty Images; graphic by Daniel Kleinman, Forbes Staff

On this edition of the #BytesChat, we explore the “digital privacy paradox”—the notion that people say they care about their online privacy, but they are willing to relinquish private data online quite easily when incentivized to do so. What is it about the digital environment that may be driving individuals away from their ideal level of privacy? Our special guest was Christian Catalini, assistant professor at MIT’s Sloan School of Management, and co-author of the new Sloan research paper, The Digital Privacy Paradox: Small Money, Small Costs, Small Talk. We were joined by Robert Seamans, associate professor at NYU’s Stern School of Business; and Jodi Beggs, a consulting economist for Akamai Technologies and founder of Economists Do It With Models. The chat was moderated by Washington Bytes editor Hal Singer. The transcript has been edited lightly for readability.

Hal Singer: Let’s get going. I am so excited to have Christian with us. I’ve been hearing about this MIT pizza study for a while. I didn’t even know pizza was a thing in Boston. Was this Chicago-style pizza or New York pizza? Inquiring readers want to know!!

Christian Catalini: Ha! Boston style.

Rob Seamans: Hal, you should let your readers know that Pizzeria Regina is some of the best pizza in the US.

Catalini: That’s a lot of detail for a privacy-sensitive crowd.

Jodi Beggs: I was about to say there’s no such thing as Boston pizza (though there is Cape Cod pizza, aka bar pizza), but the Regina comment is correct.

Singer: Christian, your new paper with Susan Athey and Catherine Tucker lends credence to the digital privacy paradox. Can’t we think of other things that we genuinely care about, yet don’t place too dear a value on? We care about feeding our kids, but the pricing for kids at restaurants reflects a lower willingness to pay. Is this also a paradox? Or just humans being cheap?

Catalini: Economists have highlighted for a long time that what we say and what we do can be radically different. One way to think of our result is that there is no paradox at all, and that people just don’t care about privacy as much as they say when asked in a survey. The other interpretation is that although we do care about privacy, we make rushed or impulsive decisions when placed in the wrong context.

Beggs: I think the kid thing is an incentive to get parents to go to restaurants. It’s not like the alternative is just not feeding your kid, so it’s a little different.

Catalini: And the portions are smaller!

Beggs: I feel like people say they value privacy in the abstract but then don’t think about it in more concrete contexts. This is interesting because it seems to usually work the other way with other goods or principles.

Catalini: My intuition is that we do care about privacy, and in particular the parts of our lives that rely on privacy, but we discount its consequences because they are still somewhat vague, and likely to materialize only in the future. Behavioral economics has shown that individuals tend to discount future, uncertain events. Online we’re drawn to instant gratification and have not only very short attention spans, but also low willingness to wait. Any complexity becomes an obstacle between us and that imaginary slice of pizza.
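Catalini’s discounting point can be made concrete with the quasi-hyperbolic (beta-delta) model from behavioral economics. The sketch below is purely illustrative: the parameter values, the pizza price, and the future privacy cost are all assumptions, not figures from the study.

```python
# Quasi-hyperbolic (beta-delta) discounting sketch: present bias shrinks
# a future privacy cost until it looks smaller than a slice of pizza.
# All numbers below are assumed for illustration.
beta, delta = 0.7, 0.9   # assumed present-bias and per-period discount factors

def perceived_today(cost, periods):
    """Perceived value today of a cost incurred `periods` from now."""
    if periods == 0:
        return cost
    return beta * (delta ** periods) * cost

pizza_now = 8.0       # immediate benefit of disclosing (a pizza, in dollars)
privacy_cost = 50.0   # assumed eventual cost of the disclosure
perceived = perceived_today(privacy_cost, periods=24)

# With enough delay and present bias, the discounted cost falls below
# the immediate reward, so giving up the data looks like a good trade.
print(perceived < pizza_now)  # prints True
```

Under these assumed parameters a $50 future cost is perceived as worth under $3 today, which is exactly the “small incentives” mechanism the paper documents.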

Seamans: Hal, I agree that there is not a paradox per se here. We can value privacy highly, but if we don’t have a way to monetize it, then what are our options?

Beggs: We monetize privacy when web-hosting companies offer private DNS registry services, for example. The prices are pretty low, and I wonder how many people do it. (I don’t, and don’t stalk me!)

Seamans: I wonder how many people are aware of the option. Arguably MIT students should be aware.

Singer: Christian, the big takeaway from your study is that consumers, as reflected by a survey of MIT undergraduates, are readily willing to surrender certain elements of privacy in return for small emoluments: a pizza or the convenience of a simple banking app or not having to learn to use PGP. Do I have that right?

Catalini: Yes! Small incentives, small talk and small costs seem all to change behavior when it comes to digital privacy.

Singer: By the way, if MIT students can’t use PGP, I fear for the ordinary civilian! And what the heck is PGP?

Beggs: Can’t use and don’t want to use are two different things.

Catalini: PGP, which stands for “pretty good privacy,” is encryption software that provides cryptographic privacy and authentication for data communication. PGP is a wonderful technology that has achieved little mainstream adoption because of its complexity. Encryption has a long way to go to really become user-friendly. Humans are just not wired for its mathematical certainty: forget your private key, and your data is lost forever. But here the pizza really represents the daily temptation we all face online when offered free services. Free email, a free messaging platform, etc.
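Catalini’s point about that unforgiving mathematical certainty can be sketched with a toy RSA example; RSA-style public-key cryptography is the kind of scheme underlying PGP. This is a deliberately insecure illustration with tiny primes, not real PGP:

```python
# Toy RSA sketch (illustrative only -- tiny primes, NOT secure, not real PGP).
# It shows why encryption is unforgiving: without the private exponent d,
# the ciphertext cannot be recovered.
p, q = 61, 53            # tiny primes; real keys use primes of 1024+ bits
n = p * q                # public modulus (3233)
phi = (p - 1) * (q - 1)  # Euler's totient of n
e = 17                   # public exponent
d = pow(e, -1, phi)      # private exponent, the "private key" (Python 3.8+)

msg = 42                   # a message encoded as an integer smaller than n
cipher = pow(msg, e, n)    # anyone can encrypt with the public pair (e, n)
plain = pow(cipher, d, n)  # only the holder of d can decrypt
assert plain == msg        # lose d, and the ciphertext stays noise forever
```

Real PGP layers key management, signatures, and hybrid encryption on top of this idea, which is where the usability complexity Catalini describes comes from.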

Seamans: What’s the downside to not valuing privacy? Is there any real harm to putting a low value on privacy? What are consumers missing?

Catalini: My sense is that online we’re often discounting the future. Data is easy to replicate and difficult to protect, so once it’s out, it’s extremely difficult if not impossible to get it back.

Seamans: And by that logic, if someone is willing to offer something (anything) for my info—info that they could probably get from some other source relatively easily—I may as well take it.

Beggs: There’s no real downside if people are rationally not valuing privacy, but problems come in if they are caving to a desire for immediate gratification, or not given full information, when forming their valuation.

Catalini: Exactly!

Beggs: The behavioral literature talks about a whole bunch of things that sway our valuations of stuff, so it’s not clear that these low valuations are long-term happiness maximizing.

Singer: Christian, to understand your paper, the reader needs to know the difference between three key concepts: (1) stated preference, (2) revealed preference, and (3) normative preference. Can you explain the difference among these?

Catalini: Stated preferences are what we say we care about. Revealed preferences are what we care about based on our actual behavior and the choices we make. Normative preferences are aspirational, and focus on what we think we should be aiming towards. Of course on the last one, by design, we can disagree for a long time on what our preferences should be!

Singer: And what did your study reveal about the relationship between those three variables?

Catalini: We do not take a stand on the normative preferences side. That is for policymakers to decide. The evidence we present shows a stark contrast between how much people say they care about privacy, and what they actually do when they have to make privacy choices. If you believe that individuals are honest about their preferences when asked, then you have to wonder what it is about the digital environment that may be driving individuals away from their ideal level of privacy. The experiment highlights a few possible mechanisms: incentives, navigation costs on a web page, and cheap talk.

Beggs: Given Thaler’s Nobel prize, it might be a good time to discuss how the endowment effect plays here. If people need only small payments to give up privacy, would they be willing to pay even less to keep it? Or are we thinking of things wrong when we see privacy as the good?

Catalini: That is an interesting question! How much would people be willing to pay to take their data out of a recent breach like the Equifax one? Assuming that was even possible! I don’t think we care about privacy per se, but we do care about the things we can do every day because we enjoy some degree of privacy. As more of our lives become digital, that balance is quickly shifting. Moreover, the cost of collecting vast amounts of data is lower every day… and the cost of protecting it is increasing over time.

Beggs: See, that’s the thing that fascinates me. The endowment effect shows that we irrationally overvalue things that we already have, which would imply that even this seemingly absurdly low “willingness to sell” for privacy is an overvaluation! I’m curious how much this same group would pay to get their privacy back.

Seamans: Taking a step back so I can understand: Behavioral economics is one lens to take to understand Christian’s results—namely, people don’t value the future enough. It sounds like you are also talking about a different, more industrial-organization lens that could explain the results. Assume everyone is rational (for the sake of argument).

Beggs: Ha!

Seamans: Stick with me. I value my privacy, but I know it’s already compromised. So if you’re foolish enough to give me a free pizza for something I don’t have control over, good for me.

Catalini: We could already be in an equilibrium where individuals assume their privacy is gone.

Beggs: One thing that I wonder is whether people feel like their privacy is going to get violated anyway via hacking. So giving it up voluntarily is kind of inconsequential.

Catalini: Correct. Then the question becomes, “How do we get out of this equilibrium, if we think it may cause problems down the road?” It’s also important to realize that this has implications for competition in these markets. If we’re already locked in from a privacy perspective with a specific player, we may be more willing to share even more with it as new services arise.

Beggs: From a game theory perspective, I don’t feel like I’m divulging incremental information if I think that other people are going to divulge my friends’ email addresses anyway.

Seamans: Sounds very rational to me!

Catalini: A privacy prisoner’s dilemma!

Singer: Let’s turn to the study’s methodology. Christian, you conducted an experiment with 3,108 MIT undergraduates involving “digital wallets,” each of which came with a grant of $100 in Bitcoin. Before they got the grant, however, students had to rate their digital wallets “in terms of their traceability by peers, the wallet service provider, and the government.” We know from economics that we can’t just ask the respondent how much she values a certain attribute like the privacy of her friends; instead, we have to get her to reveal her willingness to pay for this feature by offering a series of choices, with the “price” associated with each choice varying slightly. In this case, the price (or incentive) was a pizza. Can you explain how this worked?

Catalini: For the study, we needed to accurately reconstruct the relevant social network of participants. In a different paper with Catherine Tucker, we use this information to study the process of diffusion and adoption of Bitcoin on campus. We knew that simply asking students about the emails of their friends would give us low-quality data, so we added a probing question where the incentive was introduced. That’s how we were then able to study the role of the incentive on the students’ likelihood to share information.

Singer: Your primary finding is that a small financial reward incentivizes students to disclose private information. In particular, you find that “In the raw data, within the subsample randomly exposed to the incentive, 5% of students gave all invalid emails under the ‘Ask’ condition, and 2.5% under the ‘Ask + Incentive’ condition (see Figure 1a).” Let me see if I can translate this into English: Does this mean that 95% (1 minus 5%) of students who were not incentivized with a pizza gave away their friends’ email addresses, whereas 97.5% (1 minus 2.5%) who were incentivized with a pizza gave away their friends’ email addresses?

Catalini: Yes. Many of the students are probably used to sharing this information with apps, online services or even retailers. In this case they knew they were sharing it with us, researchers from their own university, so you can think of it as an upper bound on their willingness to share that kind of information. At the same time, when the incentive is introduced, pretty much everyone, even the ones that were holding back, give away the emails. The result holds no matter your preferences, background, degree, gender, Mac vs PC.

Beggs: I think it’s important to note however that it’s the friends’ email addresses, which could be a particular quirk.

Singer: Setting aside whether your friends’ emails are sensitive, there is no doubt that these differences in propensities to disclose across such a big sample are statistically significant—indeed, your regression shows that they are different even after controlling for other factors that could explain differences in privacy views. But is the difference in your opinion economically significant? In other words, it seems like incentives mattered, but the baseline without the incentive was so high (95%) that these students arguably didn’t need to be bought off given their low willingness to pay for privacy.

Catalini: When you think about the context of the experiment, I’d say yes. As part of the registration process, students were about to receive $100 in Bitcoin, so that incentive probably also played a role in the high level of disclosure to begin with. They probably also trusted us with that data.
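The statistical significance Singer refers to can be sanity-checked with a back-of-the-envelope two-proportion z-test on the reported 5% vs. 2.5% invalid-email rates. The subsample sizes below are assumptions for illustration, not the paper’s actual counts:

```python
# Back-of-the-envelope two-proportion z-test on the reported rates.
# Group sizes are assumed for illustration; the paper's subsamples differ.
from math import sqrt

p1, n1 = 0.050, 1500   # "Ask" condition (assumed n)
p2, n2 = 0.025, 1500   # "Ask + Incentive" condition (assumed n)

p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)              # pooled proportion
se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))  # pooled standard error
z = (p1 - p2) / se

# |z| > 1.96 means the gap is significant at the 5% level (two-sided)
print(f"z = {z:.2f}, significant = {abs(z) > 1.96}")
```

Even with these hypothetical group sizes the gap clears the conventional significance threshold comfortably, which is consistent with the regression evidence Singer cites; his question is whether the gap is *economically* meaningful, which no z-test can answer.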

Singer: You also find that the ordering of the choices of the type of digital wallet—a “bank-like” wallet that involves an intermediary compared to a wallet that is more difficult for a government to track—had a big impact on type of wallet chosen. To what do you attribute this finding? Inattention to detail or undue haste? Or is something else at work?

Catalini: Our attention online is scarce. Any additional click, scroll or reading is costly. What was novel in this setting was that we could test individual preferences across a range of dimensions, and that the different digital wallets presented different trade-offs on those dimensions. Moreover, even when we provided the students with more information about these trade-offs, the ordering still dominated individual choices.

Beggs: Another way of saying that, again in the spirit of Thaler, is that nudges work better when attention is scarce.

Singer: Finally, you find that “whenever privacy requires additional effort or comes at the cost of a less smooth user experience, participants are quick to abandon technology that would offer them greater protection.” This is a bit more intuitive, but I’m having a hard time thinking of examples of such technologies. Can you give me a few?

Catalini: Just think of every single privacy setting on your mobile phone. Do you allow this app to track your location or not? Do you allow it to access the microphone? Apps leverage the fact that when we use them we often need something done quickly, and they use that moment to gain a lot of permissions. You can use a messaging app that protects your privacy, but you’re most likely to default to the one with the most users. When is the last time that someone checked the terms on a free VPN service or free Wi-Fi hotspot? Often, these free services come with a substantial amount of data disclosure.

Beggs: What you seem to be saying is that we’re not so much agreeing as not paying attention.

Catalini: Attention is so expensive these days! And convenience matters a lot.

Seamans: But this is the same in the offline world too. Who reads all the terms and conditions of their credit card statement?

Catalini: A key difference here is the scale at which you can collect. Offline the frictions are still somewhat higher.

Singer: Let’s now turn to some potential criticisms of the study. Is it possible that the attitudes of undergraduates towards privacy don’t reflect those of the working-adult population? Who cares if someone intercepts my MIT school mail and finds out that I’m meeting my pals for hamburgers at 6:30? Perhaps undergrads just don’t have that much information that absolutely needs to be kept secret.

Catalini: Overall, there are some privacy-savvy users on the MIT campus, so if the problem is here, you can bet it’s possibly more severe elsewhere. Some of the MIT dorms (East Campus) have an extensive culture around digital privacy. They understand encryption, use open source software, and are developing some of the software—in the cryptocurrency and blockchain space—that may help us regain some control over our digital data in the coming years. At the same time, like everyone else, when faced with a choice between convenience and privacy, or between an incentive and privacy… well…

Seamans: [everyone waits with bated breath]

Beggs: I wonder how much the choice of food affected the result. We joke all the time that students will do anything for food, but it really is true that they seem to irrationally overvalue it. Like, I can get more students to show up by offering burritos than I can by offering the amount of money that a burrito costs. There are also some studies (e.g., by Heyman and Ariely) that show that non-monetary payments are more powerful than small amounts of money in extracting favors, and giving up information here could be seen as a favor.

Catalini: We actually asked a few of them before deciding on the right incentive. They said pizza!

Seamans: So the study is not representative, but that’s a feature not a bug.

Catalini: Yes, every experiment is somewhat idiosyncratic, but here the bias should at least run against us. And the whole premise of Bitcoin is to give users some privacy back!

Singer: And now for a criticism that Jodi hinted at earlier . . . The study assumed friends’ email addresses and the nature of the bitcoin transactions were something the students cared about keeping private. But what if this issue is not a real privacy concern for students? If you had asked for access to the students’ own browsing history across all devices, or a dump of their contact lists, or what they watch on Netflix, do you think you would have received a much different reaction? I’d pay a lot for folks not to learn that I secretly listen to Kenny G. (Oh crap, did I just type that?)

Beggs: My Taylor Swift playlist and I apologize for nothing.

Seamans: Nor me and my Grateful Dead playlist. But Hal should apologize for Kenny G.

Beggs: Clive Davis’s autobiography has a section on Kenny G that might change your mind!

Catalini: I’ll take a pizza (Neapolitan style) for my playlists! Joking aside, the emails of your closest contacts are one of the most sensitive pieces of data one can ask for. In surveys, individuals rank that right after their social security numbers. This was not just about the initial $100 in Bitcoin, but about the overall use of those digital wallets in the future. Imagine I offered you a digital wallet that is more convenient to use than your current checking account, but came with some privacy strings. Maybe at the beginning you only use it for a few transactions, but then the service picks up and more and more of your friends and family are using it. At that point, would you switch to an alternative because of the better privacy? Path dependency is a concern here.

Beggs: I think I would be more careful with my friends’ info than my own, since I don’t want to be a jerk to them. For example, I don’t accept most random LinkedIn requests because I know that affects who can see my friends and such. Whereas my life is basically on Twitter.

Singer: Hold on guys. It seems like there are more sensitive items in your digital life. Like where Uber has taken you. Or your calendar. Or your phone logs. Or your recent Google searches. You’d have to pry this personal info from my cold, dead fingers. (Except for my Google searches, which are just “Hal Singer” in the last 24 hours.)

Seamans: Or your friends’ personally identifiable information. Which email is not, right? How sensitive are my friends’ email addresses, really?

Catalini: Nowadays you can infer a lot from an individual’s moves and location. Yet many apps query that data and sell it without the user even knowing. Even if you don’t use an email service that mines your data, if most of your contacts do… well… To Rob’s point, in this case it’s more about the relationship than the email. And from our analysis, the network data provided under the incentive looked quite accurate.

Beggs: So you’re saying it’s mostly that I’m divulging who I’m friends with?

Catalini: Yes.

Seamans: So there are different types of data, some more sensitive than others, some “worth” more than others. How much would it take for an MIT student to give up a social security number? Two pizzas?

Catalini: Are they part of the Equifax breach?

Beggs: Now I’m curious whether asking for social security numbers would get Institutional Review Board clearance.

Singer: To the extent folks perceive your friends’ email as not being very sensitive, I can see someone characterizing your findings as demonstrating that MIT students are willing to relinquish a trivial amount of private data in exchange for a trivial reward. And thus the paradox disappears. Is that unfair?

Catalini: But then why the change in behavior when the incentive is added?

Singer: Do you have any plans of going back into the lab and tweaking the study, perhaps to address or preempt some of these criticisms?

Catalini: Not in this setting. But we’re exploring these questions more in general in the digital environment.

Beggs: And do you want help?

Seamans: With eating pizza?

Beggs: Well that too, but I wouldn’t mind testing the upper limits of what students will do for pizza!

Singer: Hate to end this banter, but let’s turn to the policy implications. You say that one interpretation of your findings is that “consumers need to be protected from themselves, above and beyond the protection given by a notice and choice regime, to ensure that small incentives, search costs or misdirection are not able to slant their choices away from their actual normative preferences.” What are the solutions? If consumers are being harmed, what can we do to help? Is more/different regulation needed? Is there something that consumers can do to "fight back"?

Catalini: If you believe our findings, then it is clear that the current regime of “Notice and Choice” is extremely ineffective. There are ways you can ask for access to private data that makes it extremely likely that a consumer will grant it. Since consumers are time-constrained online, we need an approach that delivers them the right information in a simple and digestible way. What are the trade-offs involved? How will my data be used in the future? What rights do I retain? Something like a creative-commons license for privacy regimes. By increasing transparency, I believe we can support consumers in making more informed decisions.

Seamans: My turn. Are “privacy” and “data portability” at odds with each other? The latter is sometimes discussed as a way to empower consumers and promote competition. What are the implications for privacy?

Catalini: Absolutely not. The whole premise of blockchain technology, when applied to privacy, is to enable data portability and access, while giving consumers control back over their data. For example, Bitcoin gives the user full control over a scarce, digital store of value that they can use to transact with a higher degree of privacy. They can also take their financial assets and move them between different service providers that are compatible with the Bitcoin standard at a low cost, increasing competition. This is likely to become increasingly important as artificial intelligence (“AI”) and machine learning diffuse through the economy, as to deliver valuable services, companies will need to rely on massive amounts of data. If we want a competitive market for AI, we need to start thinking about how to allow for digital privacy and new forms of data ownership, licensing etc.

Seamans: You make blockchain seem like a panacea.

Catalini: It’s not.

Singer: Before its rules were repealed by Congress, Obama’s Federal Communications Commission subjected Internet service providers to an “opt in” standard, which—by requiring consumers to affirmatively give permission before granting access to their data—was more stringent than the “opt out” standard used by the Federal Trade Commission to regulate tech platforms such as Google and Facebook. Where does the study come down on the “opt in” versus “opt out” debate?

Catalini: Our study shows that even under the more stringent “opt in,” consumers can be tempted to share their data in many different ways. But at least in that case there is a moment in time where they are asked to make a choice, and you have an opportunity to inform the public about those choices. My guess is that under the “opt out” regime, only a tiny fraction of consumers will actually incur the cost of removing themselves from the data collection list. Moreover, you can use the same incentives, navigation features and irrelevant information we study to stop individuals when they’re trying to opt out! Depending on how much providers care about this segment, the incentives can be even higher as they’re not applied to everyone.

Singer: Another policy question: The “risk” of non-privacy is public disclosure of personal facts, or, worse, identity theft. We can quantify the latter risk; there’s a mature insurance market for this. I can buy a million dollars’ worth of identity theft protection for $4 a month. If I can have the insurance, I’m willing to give up a lot of my privacy for convenience, even for a pizza. Will the advent of insurance markets for privacy breaches affect attitudes towards privacy (and behaviors) going forward?

Catalini: It’s not clear the highest costs are financial. You can’t compensate someone for how a specific disclosure may affect their personal or professional life. As data covers more of our social, economic and personal interactions, some of the consequences will be far beyond identity theft. Identity theft we can solve with a more robust identity and verification infrastructure. But to regain control over our digital privacy, we have to figure out what those new business models look like.

Beggs: Whether there is a policy implication depends crucially on whether we can confirm that people are making “mistakes,” or if there are externalities involved with consumers not caring about their privacy.

Singer: So where is your latest research taking you?

Catalini: From an economics perspective, you can run a digital platform without giving the same market power to the central intermediary. This has implications for prices, privacy, but also cybersecurity. We are launching a new research lab at MIT that will explicitly focus on how the emerging field of Cryptoeconomics can help us not only identify the right technologies, but also design the right incentive systems and business model to increase digital privacy and competition. In a new paper on blockchain with Joshua Gans, we rely on economic theory to discuss how blockchain technology will shape the rate and direction of innovation.

Seamans: Does the current state of digital privacy benefit incumbents or entrants?

Catalini: Incumbents.

Seamans: Seems to me it benefits entrants—it is easy to get customer data. A digital start-up needs data. They can get good data on MIT students, at least, by giving away pizza.

Beggs: Or by not giving pizza, as your results show.

Singer: Rob, the entry barrier is data on millions of users—not just a sample of MIT students. That’s the bottleneck. So I agree with Christian. The current state of the world benefits incumbents.

Catalini: Some data, in particular historical data, are harder to get. That limits competition when you’ve accumulated massive datasets and can train your machine learning to make better predictions than everyone else.

Seamans: Agreed. But I want to contrast this state of the world with a state of the world where it is harder to get data. If we change the world and make this data hard to get, incumbents (with the data) will benefit. Changing the current “Notice and Choice” regime that Christian referenced earlier, without making data more portable, will benefit incumbents not entrants.

Catalini: It doesn’t need to become harder to get. It’s about creating better property rights on it and having a more competitive marketplace. If consumers have more control over their digital assets and data, you can build better marketplaces for licensing it and using it.

Seamans: Agreed, but there are two issues here: (1) protecting privacy by making it harder to collect personal data; and (2) enabling consumer sharing of data with anyone they choose. I worry about a "fix" that focuses on one but not the other.

Catalini: I still think that data ownership should come first. We don’t own much of what we create every day online, from messages to content etc. We “rent” resources over the internet, many offered to us for free in exchange for our privacy. In some contexts this may be optimal, but maybe we’re discounting the future.

Beggs: There’s nothing wrong with discounting the future, but if we’re discounting the future “too much” in a way we will regret later…

Singer: Final thoughts?

Seamans: Christian, what pizza place did you use?

Beggs: I feel like the fact that this is the first and last question suggests that people might irrationally overvalue pizza.

Singer: Christian? You still there? Damn, he’s not gonna share it with us. Perhaps you need to raise your price, Rob. Thanks for joining, folks. I’ll send you a draft of the chat tonight.

Seamans: Guess it will remain a secret. I’ll be on an international flight so I won’t be able to review the chat until tomorrow sometime. Don’t sell that info to anyone.

Catalini: Safe travels! (Just tweeted it.)


Graphic by Daniel Kleinman, Forbes StaffMANDEL NGAN/AFP/Getty Images

Graphic by Daniel Kleinman, Forbes Staff

On this edition of the #BytesChat, we explore the “digital privacy paradox”—the notion that people say they care about their online privacy, but they are willing to relinquish private data online quite easily when incentivized to do so. What is it about the digital environment that may be driving individuals away from their ideal level of privacy? Our special guest was Christian Catalini, assistant professor at MIT’s Sloan School of Management, and co-author of the new Sloan research paperThe Digital Privacy Paradox: Small Money, Small Costs, Small Talk. We were joined by Robert Seamans, associate professor at NYU’s Stern School of Business; and Jodi Beggs, a consulting economist for Akamai Technologies and founder of Economists Do It With Models. The chat was moderated by Washington Bytes editor Hal Singer. The transcript has been edited lightly for readability.

Hal Singer: Let’s get going. I am so excited to have Christian with us. I’ve been hearing about this MIT pizza study for a while. I didn’t even know pizza was a thing in Boston. Was this Chicago-style pizza or New York pizza? Inquiring readers want to know!!

Christian Catalini: Ha! Boston style.

Rob Seamans: Hal, you should let your readers know that Pizzeria Regina is some of the best pizza in the US.

Catalini: That’s a lot of detail for a privacy sensitive crowd.

Jodi Beggs: I was about to say there’s no such thing as Boston pizza (though there is Cape Cod pizza, aka bar pizza), but the Regina comment is correct.

Singer: Christian, your new paper with Susan Athey and Catherine Tucker lends credence to the digital privacy paradox. Can’t we think of other things that we genuinely care about, yet don’t place too dear a value on? We care about feeding our kids, but the pricing for kids at restaurants reflects a lower willingness to pay. Is this also a paradox? Or just humans being cheap?

Catalini: Economists have highlighted for a long time that what we say and what we do can be radically different. One way to think of our result is that there is no paradox at all, and that people just don’t care about privacy as much as they say when asked in a survey. The other interpretation is that although we do care about privacy, we make rushed or impulsive decisions when placed in the wrong context.

Beggs: I think the kid thing is an incentive to get parents to go to restaurants. It’s not like the alternative is just not feeding your kid, so it’s a little different.

Catalini: And the portions are smaller!

Beggs: I feel like people say they value privacy in the abstract but then don’t think about it in more concrete contexts. This is interesting because it seems to usually work the other way with other goods or principles.

Catalini: My intuition is that we do care about privacy, and in particular the parts of our lives that rely on privacy, but we discount its consequences because they are still somewhat vague, and likely to materialize only in the future. Behavioral economics has shown that individuals tend to discount future, uncertain events. Online we’re drawn to instant gratification and have not only very short attention spans, but also low willingness to wait. Any complexity becomes an obstacle between us and that imaginary slice of pizza.

Seamans: Hal, I agree that there is not a paradox per se here. We can value privacy highly, but if we don’t have a way to monetize it, then what are our options?

Beggs: We monetize privacy when web-hosting companies offer private DNS registry services, for example. The prices are pretty low, and I wonder how many people do it. (I don’t, and don’t stalk me!)

Seamans: I wonder how many people are aware of the option. Arguably MIT students should be aware.

Singer: Christian, the big takeaway from your study is that consumers, as reflected by a survey of MIT undergraduates, are readily willing to surrender certain elements of privacy in return for small emoluments: a pizza or the convenience of a simple banking app or not having to learn to use PGP. Do I have that right?

Catalini: Yes! Small incentives, small talk and small costs seem all to change behavior when it comes to digital privacy.

Singer: By the way, if MIT students can’t use PGP, I fear for the ordinary civilian! And what the heck is PGP?

Beggs: Can’t use and don’t want to use are two different things.

Catalini: PGP, which stands for “pretty good privacy,” is encryption software that provides cryptographic privacy and authentication for data communication. PGP is a wonderful technology that has achieved little mainstream adoption because of its complexity. Encryption has a long way to go to really become user-friendly. Humans are just not wired for its mathematical certainty: if you forget your private key, your data is lost forever. But here the pizza really represents the daily temptation we all face online when offered free services. Free email, a free messaging platform, etc.

Seamans: What’s the downside to not valuing privacy? Is there any real harm to putting a low value on privacy? What are consumers missing?

Catalini: My sense is that online we’re often discounting the future. Data is easy to replicate and difficult to protect, so once it’s out, it’s extremely difficult if not impossible to get it back.

Seamans: And by that logic, if someone is willing to offer something (anything) for my info—info that they could probably get from some other source relatively easily—I may as well take it.

Beggs: There’s no real downside if people are rationally not valuing privacy, but problems come in if they are caving to a desire for immediate gratification, or not given full information, when forming their valuation.

Catalini: Exactly!

Beggs: The behavioral literature talks about a whole bunch of things that sway our valuations of stuff, so it’s not clear that these low valuations are long-term happiness maximizing.

Singer: Christian, to understand your paper, the reader needs to know the difference between three key concepts: (1) stated preference, (2) revealed preference, and (3) normative preference. Can you explain the difference among these?

Catalini: Stated preferences are what we say we care about. Revealed preferences are what we care about based on our actual behavior and the choices we make. Normative preferences are aspirational, and focus on what we think we should be aiming towards. Of course on the last one, by design, we can disagree for a long time on what our preferences should be!

Singer: And what did your study reveal about the relationship between those three variables?

Catalini: We do not take a stand on the normative preferences side. That is for policymakers to decide. The evidence we present shows a stark contrast between how much people say they care about privacy, and what they actually do when they have to make privacy choices. If you believe that individuals are honest about their preferences when asked, then you have to wonder what it is about the digital environment that may be driving individuals away from their ideal level of privacy. The experiment highlights a few possible mechanisms: incentives, navigation costs on a web page, and cheap talk.

Beggs: Given Thaler’s Nobel prize, it might be a good time to discuss how the endowment effect plays here. If people need only small payments to give up privacy, would they be willing to pay even less to keep it? Or are we thinking of things wrong when we see privacy as the good?

Catalini: That is an interesting question! How much would people be willing to pay to take their data out of a recent breach like the Equifax one? Assuming that was even possible! I don’t think we care about privacy per se, but we do care about the things we can do every day because we enjoy some degree of privacy. As more of our lives become digital, that balance is quickly shifting. Moreover, the cost of collecting vast amounts of data is lower every day… and the cost of protecting it is increasing over time.

Beggs: See, that’s the thing that fascinates me. The endowment effect shows that we irrationally overvalue things that we already have, which would imply that even this seemingly absurdly low “willingness to sell” for privacy is an overvaluation! I’m curious how much this same group would pay to get their privacy back.

Seamans: Taking a step back so I can understand: Behavioral economics is one lens to take to understand Christian’s results—namely, people don’t value the future enough. It sounds like you are also talking about a different, more industrial-organization lens that could explain the results. Assume everyone is rational (for the sake of argument).

Beggs: Ha!

Seamans: Stick with me. I value my privacy, but I know it’s already compromised. So if you’re foolish enough to give me a free pizza for something I don’t have control over, good for me.

Catalini: We could already be in an equilibrium where individuals assume their privacy is gone.

Beggs: One thing that I wonder is whether people feel like their privacy is going to get violated anyway via hacking. So giving it up voluntarily is kind of inconsequential.

Catalini: Correct. Then the question becomes, “How do we get out of this equilibrium, if we think it may cause problems down the road?” It’s also important to realize that this has implications for competition in these markets. If we’re already locked in from a privacy perspective with a specific player, we may be more willing to share even more with it as new services arise.

Beggs: From a game theory perspective, I don’t feel like I’m divulging incremental information if I think that other people are going to divulge my friends’ email addresses anyway.

Seamans: Sounds very rational to me!

Catalini: A privacy prisoner’s dilemma!

Singer: Let’s turn to the study’s methodology. Christian, you conducted an experiment of 3,108 MIT undergraduates involving “digital wallets” and a grant of $100 in Bitcoin. Before they got the grant, however, students had to rate their digital wallets “in terms of their traceability by peers, the wallet service provider, and the government.” We know from economics that we can’t just ask the respondent how much she values a certain attribute like privacy of their friends; instead, we have to get them to reveal their willingness to pay for this feature by offering them a series of choices, with the “price” associated with each choice varying slightly. In this case, the price (or incentive) was a pizza. Can you explain how this worked?

Catalini: For the study, we needed to accurately reconstruct the relevant social network of participants. In a different paper with Catherine Tucker, we use this information to study the process of diffusion and adoption of Bitcoin on campus. We knew that simply asking students about the emails of their friends would give us low-quality data, so we added a probing question where the incentive was introduced. That’s how we were then able to study the role of the incentive on the students’ likelihood to share information.

Singer: Your primary finding is that a small financial reward incentivizes students to disclose private information. In particular, you find that “In the raw data, within the subsample randomly exposed to the incentive, 5% of students gave all invalid emails under the ‘Ask’ condition, and 2.5% under the ‘Ask + Incentive’ condition (see Figure 1a).” Let me see if I can translate this into English: Does this mean that 95% (1 minus 5%) of students who were not incentivized with a pizza gave away their friends’ email addresses, whereas 97.5% (1 minus 2.5%) who were incentivized with a pizza gave away their friends’ email addresses?

Catalini: Yes. Many of the students are probably used to sharing this information with apps, online services or even retailers. In this case they knew they were sharing it with us, researchers from their own university, so you can think of it as an upper bound on their willingness to share that kind of information. At the same time, when the incentive is introduced, pretty much everyone, even the ones who were holding back, gives away the emails. The result holds no matter your preferences, background, degree, gender, Mac vs PC.

Beggs: I think it’s important to note however that it’s the friends’ email addresses, which could be a particular quirk.

Singer: Setting aside whether your friends’ emails are sensitive, there is no doubt that these differences in propensities to disclose across such a big sample are statistically significant—indeed, your regression shows that they are different even after controlling for other factors that could explain differences in privacy views. But is the difference in your opinion economically significant? In other words, it seems like incentives mattered, but the baseline without the incentive was so high (95%) that these students arguably didn’t need to be bought off given their low willingness to pay for privacy.

Catalini: When you think about the context of the experiment, I’d say yes. As part of the registration process, students were about to receive $100 in Bitcoin, so that incentive probably also played a role in the high level of disclosure to begin with. They probably also trusted us with that data.

Singer: You also find that the ordering of the choices of the type of digital wallet—a “bank-like” wallet that involves an intermediary compared to a wallet that is more difficult for a government to track—had a big impact on the type of wallet chosen. To what do you attribute this finding? Inattention to detail or undue haste? Or is something else at work?

Catalini: Our attention online is scarce. Any additional click, scroll or reading is costly. What was novel in this setting was that we could test individual preferences across a range of dimensions, and that the different digital wallets presented different trade-offs on those dimensions. Moreover, even when we provided the students with more information about these trade-offs, the ordering still dominated individual choices.

Beggs: Another way of saying that, again in the spirit of Thaler, is that nudges work better when attention is scarce.

Singer: Finally, you find that “whenever privacy requires additional effort or comes at the cost of a less smooth user experience, participants are quick to abandon technology that would offer them greater protection.” This is a bit more intuitive, but I’m having a hard time thinking of examples of such technologies. Can you give me a few?

Catalini: Just think of every single privacy setting on your mobile phone. Do you allow this app to track your location or not? Do you allow it to access the microphone? Apps leverage the fact that when we use them we often need something done quickly, and use that urgency to gain a lot of permissions. You can use a messaging app that protects your privacy, but you’re most likely to default to the one with the most users. When is the last time that someone checked the terms on a free VPN service or free Wi-Fi hotspot? Often, these free services come with a substantial amount of data disclosure.

Beggs: What you seem to be saying is that we’re not so much agreeing as not paying attention.

Catalini: Attention is so expensive these days! And convenience matters a lot.

Seamans: But this is the same in the offline world too. Who reads all the terms and conditions of their credit card statement?

Catalini: A key difference here is the scale at which you can collect. Offline the frictions are still somewhat higher.

Singer: Let’s now turn to some potential criticisms of the study. Is it possible that the attitudes of undergraduates towards privacy don’t reflect those of the working-adult population? Who cares if someone intercepts my MIT school mail and finds out that I’m meeting my pals for hamburgers at 6:30? Perhaps undergrads just don’t have that much information that absolutely needs to be kept secret.

Catalini: Overall, there are some privacy-savvy users on the MIT campus, so if the problem exists here, you can bet it’s possibly more severe elsewhere. Some of the MIT dorms (East Campus) have an extensive culture around digital privacy. They understand encryption, use open source software, and are developing some of the software—in the cryptocurrency and blockchain space—that may help us regain some control over our digital data in the next years. At the same time, like everyone else, when faced with a choice between convenience and privacy, or between an incentive and privacy… well…

Seamans: [everyone waits with bated breath]

Beggs: I wonder how much the choice of food affected the result. We joke all the time that students will do anything for food, but it really is true that they seem to irrationally overvalue it. Like, I can get more students to show up by offering burritos than I can by offering the amount of money that a burrito costs. There are also some studies (e.g., by Heyman and Ariely) that show that non-monetary payments are more powerful than small amounts of money in extracting favors, and giving up information here could be seen as a favor.

Catalini: We actually asked a few of them before deciding on the right incentive. They said pizza!

Seamans: So the study is not representative, but that’s a feature not a bug.

Catalini: Yes, every experiment is somewhat idiosyncratic, but here the bias should at least run against us. And the whole premise of Bitcoin is to give users some privacy back!

Singer: And now for a criticism that Jodi hinted at earlier . . . The study assumed friends’ email addresses and the nature of the bitcoin transactions were something the students cared about keeping private. But what if this issue is not a real privacy concern for students? If you had asked for access to the students’ own browsing history across all devices, or a dump of their contact lists, or what they watch on Netflix, do you think you would have received a much different reaction? I’d pay a lot for folks not to learn that I secretly listen to Kenny G. (Oh crap, did I just type that?)

Beggs: My Taylor Swift playlist and I apologize for nothing.

Seamans: Nor me and my Grateful Dead playlist. But Hal should apologize for Kenny G.

Beggs: Clive Davis’s autobiography has a section on Kenny G that might change your mind!

Catalini: I’ll take a pizza (Neapolitan style) for my playlists! Joking aside, the emails of your closest contacts are one of the most sensitive pieces of data one can ask for. In surveys, individuals rank that right after their social security numbers. This was not just about the initial $100 in Bitcoin, but about the overall use of those digital wallets in the future. Imagine I offered you a digital wallet that is more convenient to use than your current checking account, but came with some privacy strings. Maybe at the beginning you only use it for a few transactions, but then the service picks up and more and more of your friends and family are using it. At that point, would you switch to an alternative because of the better privacy? Path dependency is a concern here.

Beggs: I think I would be more careful with my friends’ info than my own, since I don’t want to be a jerk to them. For example, I don’t accept most random LinkedIn requests because I know that affects who can see my friends and such. Whereas my life is basically on Twitter.

Singer: Hold on guys. It seems like there are more sensitive items in your digital life. Like where Uber has taken you. Or your calendar. Or your phone logs. Or your recent Google searches. You’d have to pry this personal info from my cold dead fingers. (Except for my Google searches, which are just “Hal Singer” in the last 24 hours.)

Seamans: Or your friends’ personally identifiable information. Which email is not, right? How sensitive are my friends’ email addresses, really?

Catalini: Nowadays you can infer a lot from an individual’s movements and location. And many apps query that data and sell it without the user even knowing. Even if you don’t use an email service that mines your data, if most of your contacts do… well… To Rob’s point, in this case it’s more about the relationship than the email. And from our analysis of the network data, the portion provided under the incentive looked quite accurate.

Beggs: So you’re saying it’s mostly that I’m divulging who I’m friends with?

Catalini: Yes.

Seamans: So there are different types of data, some more sensitive than others, some “worth” more than others. How much would it take for an MIT student to give up a social security number? Two pizzas?

Catalini: Are they part of the Equifax breach?

Beggs: Now I’m curious whether asking for social security numbers would get Institutional Review Board clearance.

Singer: To the extent folks perceive your friends’ email as not being very sensitive, I can see someone characterizing your findings as demonstrating that MIT students are willing to relinquish a trivial amount of private data in exchange for a trivial reward. And thus the paradox disappears. Is that unfair?

Catalini: But then why the change in behavior when the incentive is added?

Singer: Do you have any plans of going back into the lab and tweaking the study, perhaps to address or preempt some of these criticisms?

Catalini: Not in this setting. But we’re exploring these questions more in general in the digital environment.

Beggs: And do you want help?

Seamans: With eating pizza?

Beggs: Well that too, but I wouldn’t mind testing the upper limits of what students will do for pizza!

Singer: Hate to end this banter, but let’s turn to the policy implications. You say that one interpretation of your findings is that “consumers need to be protected from themselves, above and beyond the protection given by a notice and choice regime, to ensure that small incentives, search costs or misdirection are not able to slant their choices away from their actual normative preferences.” What are the solutions? If consumers are being harmed, what can we do to help? Is more/different regulation needed? Is there something that consumers can do to “fight back”?

Catalini: If you believe our findings, then it is clear that the current regime of “Notice and Choice” is extremely ineffective. There are ways you can ask for access to private data that makes it extremely likely that a consumer will grant it. Since consumers are time-constrained online, we need an approach that delivers them the right information in a simple and digestible way. What are the trade-offs involved? How will my data be used in the future? What rights do I retain? Something like a creative-commons license for privacy regimes. By increasing transparency, I believe we can support consumers in making more informed decisions.

Seamans: My turn. Are “privacy” and “data portability” at odds with each other? The latter is sometimes discussed as a way to empower consumers and promote competition. What are the implications for privacy?

Catalini: Absolutely not. The whole premise of blockchain technology, when applied to privacy, is to enable data portability and access, while giving consumers control back over their data. For example, Bitcoin gives the user full control over a scarce, digital store of value that they can use to transact with a higher degree of privacy. They can also take their financial assets and move them between different service providers that are compatible with the Bitcoin standard at a low cost, increasing competition. This is likely to become increasingly important as artificial intelligence (“AI”) and machine learning diffuse through the economy, because to deliver valuable services, companies will need to rely on massive amounts of data. If we want a competitive market for AI, we need to start thinking about how to allow for digital privacy and new forms of data ownership, licensing, etc.

Seamans: You make blockchain seem like a panacea.

Catalini: It’s not.

Singer: Before its rules were repealed by Congress, Obama’s Federal Communications Commission subjected Internet service providers to an “opt in” standard, which—by requiring consumers to affirmatively give permission before granting access to their data—was more stringent than the “opt out” standard used by the Federal Trade Commission to regulate tech platforms such as Google and Facebook. Where does the study come down on the “opt in” versus “opt out” debate?

Catalini: Our study shows that even under the more stringent “opt in,” consumers can be tempted to share their data in many different ways. But at least in that case there is a moment in time where they are asked to make a choice, and you have an opportunity to inform the public about those choices. My guess is that under the “opt out” regime, only a tiny fraction of consumers will actually incur the cost of removing themselves from the data collection list. Moreover, you can use the same incentives, navigation features and irrelevant information we study to stop individuals when they’re trying to opt out! Depending on how much providers care about this segment, the incentives can be even higher as they’re not applied to everyone.

Singer: Another policy question: The “risk” of non-privacy is public disclosure of personal facts, or, worse, identity theft. We can quantify the latter risk; there’s a mature insurance market for this. I can buy a million dollars’ worth of identity theft protection for $4 a month. If I can have the insurance, I’m willing to give up a lot of my privacy for convenience, even for a pizza. Will the advent of insurance markets for privacy breaches affect attitudes towards privacy (and behaviors) going forward?

Catalini: It’s not clear the highest costs are financial. You can’t compensate someone for how a specific disclosure may affect their personal or professional life. As data covers more of our social, economic and personal interactions, some of the consequences will be far beyond identity theft. Identity theft we can solve with a more robust identity and verification infrastructure. But to regain control over our digital privacy, we have to figure out what those new business models look like.

Beggs: Whether there is a policy implication depends crucially on whether we can confirm that people are making “mistakes” or whether there are externalities involved with consumers not caring about their privacy.

Singer: So where is your latest research taking you?

Catalini: From an economics perspective, you can run a digital platform without giving the same market power to the central intermediary. This has implications for prices, privacy, but also cybersecurity. We are launching a new research lab at MIT that will explicitly focus on how the emerging field of cryptoeconomics can help us not only identify the right technologies, but also design the right incentive systems and business models to increase digital privacy and competition. In a new paper on blockchain with Joshua Gans, we rely on economic theory to discuss how blockchain technology will shape the rate and direction of innovation.

Seamans: Does the current state of digital privacy benefit incumbents or entrants?

Catalini: Incumbents.

Seamans: Seems to me it benefits entrants—it is easy to get customer data. A digital startup needs data. They can get good data on MIT students, at least, by giving away pizza.

Beggs: Or by not giving pizza, as your results show.

Singer: Rob, the entry barrier is data on millions of users—not just a sample of MIT students. That’s the bottleneck. So I agree with Christian. The current state of the world benefits incumbents.

Catalini: Some data, in particular historical data, are harder to get. That limits competition when you’ve accumulated massive datasets and can train your machine learning to make better predictions than everyone else.

Seamans: Agreed. But I want to contrast this state of the world with a state of the world where it is harder to get data. If we change the world and make this data hard to get, incumbents (with the data) will benefit. Changing the current “Notice and Choice” regime that Christian referenced earlier, without making data more portable, will benefit incumbents not entrants.

Catalini: It doesn’t need to become harder to get. It’s about creating better property rights on it and having a more competitive marketplace. If consumers have more control over their digital assets and data, you can build better marketplaces for licensing it and using it.

Seamans: Agreed, but there are two issues here: (1) protecting privacy by making it harder to collect personal data; and (2) enabling consumer sharing of data with anyone they choose. I worry about a “fix” that focuses on one but not the other.

Catalini: I still think that data ownership should come first. We don’t own much of what we create every day online, from messages to content etc. We “rent” resources over the internet, many offered to us for free in exchange for our privacy. In some contexts this may be optimal, but maybe we’re discounting the future.

Beggs: There’s nothing wrong with discounting the future, but if we’re discounting the future “too much” in a way we will regret later…

Singer: Final thoughts?

Seamans: Christian, what pizza place did you use?

Beggs: I feel like the fact that this is the first and last question suggests that people might irrationally overvalue pizza.

Singer: Christian? You still there? Damn, he’s not gonna share it with us. Perhaps you need to raise your price, Rob. Thanks for joining, folks. I’ll send you a draft of the chat tonight.

Seamans: Guess it will remain a secret. I’ll be on an international flight so I won’t be able to review the chat until tomorrow sometime. Don’t sell that info to anyone.

Catalini: Safe travels! (Just tweeted it.)
