PRIVACY + SECURITY BLOG

News, Developments, and Insights


Murky Consent: An Approach to the Fictions of Consent in Privacy Law – FINAL VERSION


I’m delighted to share the newly published final version of my article:

Murky Consent: An Approach to the Fictions of Consent in Privacy Law
104 B.U. L. Rev. 593 (2024)

I’ve been pondering privacy consent for more than a decade, and I think I finally made a breakthrough with this article.  I welcome feedback and hope you enjoy the piece.

Mini Abstract:

In this Article I argue that most of the time, privacy consent is fictitious. Instead of futile efforts to turn privacy consent from fiction to fact, the better approach is to lean into the fictions. The law can’t stop privacy consent from being a fairy tale, but the law can ensure that the story ends well. I argue that privacy consent should confer less legitimacy and power and that it should be backstopped by a set of duties on organizations that process personal data based on consent.

Also check out Professor Stacy-Ann Elvy’s insightful response piece, Privacy Law’s Consent Conundrum. Additionally, here’s a video of me presenting the article.

Full Abstract:

Consent plays a profound role in nearly all privacy laws. As Professor Heidi Hurd aptly said, consent works “moral magic” – it transforms things that would be illegal and immoral into lawful and legitimate activities. As to privacy, consent authorizes and legitimizes a wide range of data collection and processing.

There are generally two approaches to consent in privacy law. In the United States, the notice-and-choice approach predominates; organizations post a notice of their privacy practices and people are deemed to consent if they continue to do business with the organization or fail to opt out. In the European Union, the General Data Protection Regulation (GDPR) uses the express consent approach, where people must voluntarily and affirmatively consent.

Both approaches fail. The evidence of actual consent is non-existent under the notice-and-choice approach. Individuals are often pressured or manipulated, undermining the validity of their consent. The express consent approach also suffers from these problems – people are ill-equipped to decide about their privacy, and even experts cannot fully understand what algorithms will do with personal data. Express consent also is highly impractical; it inundates individuals with consent requests from thousands of organizations. Express consent cannot scale.

In this Article, I contend that most of the time, privacy consent is fictitious. Privacy law should take a new approach to consent that I call “murky consent.” Traditionally, consent has been binary – an on/off switch – but murky consent exists in the shadowy middle ground between full consent and no consent. Murky consent embraces the fact that consent in privacy is largely a set of fictions and is at best highly dubious.

Because it conceptualizes consent as mostly fictional, murky consent recognizes its lack of legitimacy. To return to Hurd’s analogy, murky consent is consent without magic. Rather than provide extensive legitimacy and power, murky consent should authorize only a very restricted and weak license to use data. Murky consent should be subject to extensive regulatory oversight with an ever-present risk that it could be deemed invalid. Murky consent should rest on shaky ground. Because the law pretends people are consenting, the law’s goal should be to ensure that what people are consenting to is good. Doing so promotes the integrity of the fictions of consent. I propose four duties to achieve this end: (1) duty to obtain consent appropriately; (2) duty to avoid thwarting reasonable expectations; (3) duty of loyalty; and (4) duty to avoid unreasonable risk. The law can’t make the tale of privacy consent less fictional, but with these duties, the law can ensure the story ends well.


A Regulatory Roadmap to AI and Privacy

Infographic: AI and Privacy

Over at IAPP News, I wrote a short essay called A Regulatory Roadmap to AI and Privacy. It summarizes my longer article, Artificial Intelligence and Privacy.

I created an infographic to capture the issues, but I couldn’t include it in the IAPP piece, so I’ll include it here (see above).

For those of you who want the short 2,000-word version of my thoughts on AI and privacy, please check out my essay at IAPP. The long article is here.

From the short essay:

Although new AI laws can help, AI is making it glaringly clear that a privacy law rethink is long overdue. . . .

Understanding the privacy challenges posed by AI is essential. A comprehensive overview is necessary to evaluate the effectiveness of current laws, identify their limitations and decide what modifications or new measures are required for adequate regulation.



Webinar – Another Privacy Bill on Capitol Hill: The American Privacy Rights Act

In case you missed my recent webinar with Laura Riposo VanDruff and Jules Polonetsky, you can watch the replay here.   We discussed the strengths and weaknesses of the American Privacy Rights Act (APRA) and its likelihood of passing.



AI, Algorithms, and Awful Humans – Final Published Version


I am pleased to share the final published version of my short essay with Yuki Matsumi, written for a Fordham Law Review symposium.

AI, Algorithms, and Awful Humans
92 Fordham L. Rev. 1923 (2024)

Mini Abstract:

This Essay critiques arguments that algorithmic decision-making is better than human decision-making. Two arguments are often advanced to justify the increasing use of algorithms in decisions. The “Awful Human Argument” asserts that human decision-making is often awful and that machines can decide better than humans. Another argument, the “Better Together Argument,” posits that machines can augment and improve human decision-making. We argue that such contentions are far too optimistic and fail to appreciate the shortcomings of machine decisions and the difficulties in combining human and machine decision-making. Automated decisions often rely too much on quantifiable data to the exclusion of qualitative data, resulting in a change to the nature of the decision itself. Whereas certain matters might be readily reducible to quantifiable data, such as the weather, human lives are far more complex. Human and machine decision-making often do not mix well. Humans often perform badly when reviewing algorithmic output.

Download the piece for free here:


* * * *

This post was authored by Professor Daniel J. Solove, who through TeachPrivacy develops computer-based privacy and data security training.

NEWSLETTER: Subscribe to Professor Solove’s free newsletter

Prof. Solove’s Privacy Training: 150+ Courses


New Edition of PRIVACY LAW FUNDAMENTALS


HOT OFF THE PRESS! Privacy Law Fundamentals, Seventh Edition (2024). This is my short guide to privacy law, co-authored with Professor Paul Schwartz (Berkeley Law).

Believe it or not, there have been some new developments in privacy law . . .

“This book is an indispensable guide for privacy and data protection practitioners, students, and scholars. You will find yourself consulting it regularly, as I do. It is a must for your bookshelf.” – Danielle Citron, University of Virginia Law School

“Two giants of privacy scholarship succeed in distilling their legal expertise into an essential guide for a broad range of the legal community. Whether used to learn the basics or for quick reference, Privacy Law Fundamentals proves to be concise and authoritative.” – Jules Polonetsky, Future of Privacy Forum


If you’re interested in the digital edition, click here.




Webinar – The FTC, Privacy, and AI

In case you missed my recent webinar with Maneesha Mithal, you can watch the replay here. We discussed recent FTC enforcement actions, algorithmic deletion, the FTC’s current rulemaking, enforcement of the health breach notification rule, the FTC’s role in regulating AI, and other issues.


The Failure of Data Security Law


Professor Woodrow Hartzog and I are posting The Failure of Data Security Law as a free download on SSRN. This chapter is from our book, BREACHED! WHY DATA SECURITY LAW FAILS AND HOW TO IMPROVE IT.

In this book chapter, we survey the law and policy of data security and analyze its strengths and weaknesses. Broadly speaking, there are three types of data security laws: (1) breach notification laws; (2) security safeguards laws that require substantive measures to protect security; and (3) private litigation under various causes of action. We argue that despite some small successes, the law is generally failing to combat the data security threats we face.

Breach notification laws merely require organizations to provide transparency about data breaches, but the laws don’t provide prevention or a cure. Security safeguards laws are often enforced too late, if at all. Enforcement authorities wait until a data breach occurs, and penalizing organizations after a breach increases the pain of the breach marginally, but not enough to be a game changer. Private litigation has increased the costs of data breaches but has accomplished little else. Courts have often struggled to understand the harm from data breaches, so data breach cases have frequently been dismissed.

Overall, we contend that data security law is too reactionary. The law fails to do enough to prevent data breaches, focuses too much on organizations that suffer data breaches and ignores other contributing actors, and doesn’t take sufficient steps to mitigate the harm from data breaches.


This chapter can stand alone, but of course, we encourage you to read our whole book, BREACHED! WHY DATA SECURITY LAW FAILS AND HOW TO IMPROVE IT.



European Data Protection Supervisor Interview

In this video, the European Data Protection Supervisor (EDPS) interviewed me as part of its 20 Talks Series to celebrate its 20th anniversary. From the EDPS description of this talk: “20 Talks is a series of insightful discussions with experts and influential personalities across diverse domains, looking into the profound implications of privacy and data protection within their specific spheres. In this episode, our guest is Daniel J. Solove, Professor of Intellectual Property and Technology Law, George Washington University Law School and President & CEO of TeachPrivacy.”

You can also watch the video on the EDPS 20 Talks site.


Webinar – Trust: What CEOs and Boards Must Know About Privacy and AI


In case you missed my recent webinar with Dominique Shelton-Leipzig (Mayer Brown), you can watch the replay here.  We had a great discussion about why privacy is an issue that the C-Suite and Board must address. Dominique is the author of a new book on this topic, Trust.: Responsible AI, Innovation, Privacy and Data Leadership.

