Automated Decision Making Regulations

An Explanation of Privacy Rules

Over the last few years, many privacy regulations have come into effect both at the state level and globally. The General Data Protection Regulation (GDPR) in the EU, Brazil's Lei Geral de Proteção de Dados (LGPD), the California Privacy Rights Act (CPRA), the Virginia Consumer Data Protection Act (VCDPA), and the Colorado Privacy Act (CPA) are just a few of the new privacy regulations. Although all these laws are impactful in their own right, the GDPR was the first and had a far-reaching impact, reshaping the legal privacy landscape globally. It thrust many changes onto businesses operating in the EU. Further, regulators in other jurisdictions, such as California, consider how the EU has drafted and enforced the GDPR when deciding how to implement their own privacy regulations, particularly in areas of new technology like machine learning and automated decision making.

Machine learning, a branch of artificial intelligence ("AI"), refers to the use of computer systems that can learn and adapt without following explicit instructions, using algorithms and statistical models to analyze and draw inferences from patterns in data.

Regulating Automated Decision Making

The CPRA and the GDPR, among other privacy regulations, have sought to regulate companies that use machine learning and AI to make automated decisions about consumers. Automated decision making is not explicitly defined in the GDPR or the CPRA, but it is generally understood to involve using machine learning to make decisions without human involvement. For the purposes of these regulations, "automated decision making" can encompass anything from banks using customer data to accept or deny loan applications, to companies using algorithms to serve targeted ads online, to insurance companies using patient data to approve or deny coverage.
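To make this concrete, here is a minimal sketch in Python of what automated decision making can look like in practice: a hypothetical loan-approval model trained on past applications and used to accept or deny a new applicant with no human in the loop. The features, data, and model choice are invented for illustration and are not drawn from any real lender.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical past applications: [annual income ($k), debt-to-income (%)]
X = np.array([[85, 10], [30, 45], [60, 20], [25, 55], [95, 15], [40, 40]])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = loan repaid, 0 = defaulted

# The model "learns" an approval rule from historical outcomes.
model = LogisticRegression().fit(X, y)

# A new application is accepted or denied with no human review,
# the kind of decision these regulations are concerned with.
applicant = np.array([[50, 35]])
print("approved" if model.predict(applicant)[0] == 1 else "denied")
```

Decisions made solely by a model in this way are what trigger the notice, objection, and human-intervention rights discussed below.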

While both the CPRA and the GDPR introduced automated decision-making rules that require companies to be transparent about the logic and information involved, the two regimes differ significantly. Article 22 of the GDPR requires companies to provide notice of significant automated decision making. It also grants data subjects, meaning individuals in the EU whose personal data is processed, the right to object to and express their point of view regarding any automated decision making, and the right to demand human intervention.

The regulation thus aims to give control to data subjects and ensure due process before companies make decisions about them that may have lasting impacts on their lives. It also obligates companies to provide notices where appropriate, implement processes to handle customer questions and concerns, and provide human reviewers who can intervene if customers object to automated decision making. There are some limited exceptions, including where the automated decision making is necessary for the performance of a contract between the parties or where the data subject has explicitly consented to it.

Interestingly, in addition to the GDPR, the European Commission has proposed the Artificial Intelligence Act ("the AI Act") to establish rules for the development, placing on the market, and use of AI systems. The AI Act takes the GDPR's data protection impact assessment requirement a step further. It establishes a risk-based approach to automated decision making, carving out categories of prohibited practices, high-risk practices that are tightly regulated, limited-risk practices that carry at least transparency obligations, and lower-risk practices that are left unregulated. Moreover, providers of "high-risk AI systems" must complete a conformity assessment of their systems and, once completed, execute a declaration of conformity.

Unlike the GDPR, the CPRA does not grant individuals the right to object to automated decision making. Rather, the CPRA, in Section 21(16), seeks to provide access and opt-out rights to consumers. Additionally, the CPRA established the California Privacy Protection Agency ("CPPA"), tasking it with deciding how to enforce this and other provisions of the CPRA. How the CPPA will choose to enforce the CPRA as it relates to automated decision making remains unclear.

Dark Patterns

Both California and EU regulators attempt to give the public more autonomy and control when it comes to AI and automated decision making. The AI Act explicitly prohibits AI practices that deploy "harmful manipulative subliminal techniques" or "exploit specific vulnerable groups." Similarly, Section 14 of the CPRA states that agreement obtained through the use of dark patterns does not constitute consent.

Dark patterns are, in essence, user interfaces designed or manipulated with the substantial effect of subverting or impairing user choice, often built with novel technologies such as AI-driven data processing. A dark pattern can exist when one option is more aesthetically prominent or attractive than another, or when the alternative is hidden or difficult to select. While this can be done through static design alone, machine learning can also be used to repeatedly change the user interface to obtain a specific response.

For example, a website may offer a popup with only a "yes" button, omitting the "no" button or requiring extra clicks to reach the "no" option; or a button may have an undesired consequence, such as when closing a banner functions as acceptance rather than rejection. Machine learning models may be used to serve the user interface in different ways to different users to elicit a desired choice.
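As an illustration of that last point, here is a minimal sketch of how a simple epsilon-greedy bandit algorithm could steer a consent banner toward whichever design extracts the most "accept" clicks. The variant names, click simulation, and parameters are all hypothetical, and production systems are far more elaborate, but the incentive structure is the same.

```python
import random

# Hypothetical consent-banner designs; the algorithm's only objective
# is to maximize the rate at which users click "accept".
variants = ["prominent_accept", "hidden_decline", "close_means_accept"]
shows = {v: 0 for v in variants}
accepts = {v: 0 for v in variants}

def choose_variant(epsilon=0.1):
    """Mostly serve the best-performing design; occasionally explore."""
    if random.random() < epsilon:
        return random.choice(variants)
    return max(variants, key=lambda v: accepts[v] / max(shows[v], 1))

def record(variant, user_accepted):
    shows[variant] += 1
    accepts[variant] += user_accepted

# Each page load serves whichever UI has elicited the most consent so far.
for _ in range(1000):
    v = choose_variant()
    record(v, user_accepted=random.random() < 0.5)  # stand-in for real clicks
```

Nothing in this loop asks whether users understood what they agreed to, which is precisely why regulators treat consent obtained this way as invalid.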

The Federal Trade Commission ("FTC") has expressed concern over dark patterns as well, suggesting at an FTC workshop that companies can expect aggressive enforcement in this area. The agency intends to use Section 5 of the FTC Act and the Restoring Online Shoppers' Confidence Act to exercise its authority through new rules, policy statements, or enforcement guidance.

The FTC has also weighed in on appropriate practices for the use of automated decision making. In its blog post Aiming for Truth, Fairness, and Equity in Your Company's Use of AI, the agency expressed concern over possible discrimination based on race and other protected classes, framing any such outcome as unfair. The FTC cited a study suggesting that if models use data reflecting existing racial bias in healthcare, they could worsen current biases and inequalities. To address these concerns, the FTC pointed to three existing laws pertinent to developers and users of automated decision making and profiling:

  • Section 5 of the FTC Act prohibits "unfair or deceptive acts or practices in or affecting commerce." 15 U.S.C. § 45.
  • The Fair Credit Reporting Act ("FCRA") promotes the accuracy, fairness, and privacy of consumer information contained in the files of consumer reporting agencies. The FCRA becomes relevant to automated decision making where an algorithm is used to deny people credit, employment, insurance, housing, or other benefits.
  • The Equal Credit Opportunity Act ("ECOA") makes it unlawful for any company to discriminate against individuals, with respect to any aspect of a credit transaction, on the basis of race, religion, national origin, sex, marital status, or age, which makes it illegal for a company to use a biased algorithm that results in credit discrimination.

Status of Future Reforms

Furthermore, the FTC provides key guidance to companies on how to use automated decision making and more complex artificial intelligence successfully: foundational data sets should be appropriate and accurate; algorithms should be tested for bias and improved where appropriate; and companies should embrace transparency, communicate truthfully with consumers, and hold themselves accountable to ensure they are doing more good than harm. One conventional bias check is sketched below.
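As a concrete illustration of "testing algorithms for bias," here is a minimal sketch of one common check: comparing a model's approval rates across demographic groups against the "four-fifths" disparate-impact threshold familiar from US employment-discrimination analysis. The decisions and numbers are invented for illustration, and real fairness audits use many complementary metrics.

```python
# Hypothetical model outputs (1 = approved), split by a protected attribute.
decisions_group_a = [1, 1, 0, 1, 1, 0, 1, 1]
decisions_group_b = [0, 1, 0, 0, 1, 0, 0, 1]

rate_a = sum(decisions_group_a) / len(decisions_group_a)
rate_b = sum(decisions_group_b) / len(decisions_group_b)

# Disparate-impact ratio: a value below 0.8 (the "four-fifths rule")
# is a conventional red flag that the model warrants closer review.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"approval rates: {rate_a:.2f} vs {rate_b:.2f} (ratio {ratio:.2f})")
if ratio < 0.8:
    print("potential disparate impact: investigate data and model")
```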

In March of this year, the CPPA conducted pre-rulemaking sessions in which it requested public comment on this and other topics. Speakers suggested conducting data protection impact assessments of the kind required by the GDPR, providing consumers with transparency, requiring companies to maintain adequate cybersecurity standards, and limiting racial profiling. These recommendations were quite predictable, given that European regulators and the FTC have raised the same issues.

The CPPA will hold additional review sessions in the near future, ahead of publishing its rules. The open-ended power wielded by the CPPA in a state like California, the home of Silicon Valley, means there is both unpredictability and high stakes in how it will ultimately choose to enforce the CPRA.

***

This piece is adapted from Tai's article published by Secure Justice under the title "Privacy Regulations in the Area of Automated Decision Making."

Shabnam Tai is a global privacy, product, and cybersecurity attorney. She is an Advisory Board Member at Secure Justice and is a Certified Information Systems Security Professional (CISSP) and a Certified Information Privacy Professional (CIPP/T). She graduated from the University of California, Berkeley, and Santa Clara University School of Law.