Artificial Intelligence and the Regulatory Landscape

The rise of artificial intelligence (“A.I.”) and automated decision-making tools in consumer-facing decisions has led federal regulators, such as those at the Federal Trade Commission (“FTC”), to identify algorithmic bias, along with deceptive and manipulative conduct on the internet, among their top regulatory priorities moving forward. (Ali Arain et al., Bloomberg Law). In particular, regulators seek to identify whether A.I. and algorithms exclude specific consumer groups in an unfair and discriminatory manner, whether data collection efforts accurately reflect real-world facts, and whether automated decision-making tools are used in a transparent manner. Id. The extent to which A.I. replicates human bias, and what, if anything, can be done about it, are questions regulators will need to grapple with in the coming years.

Companies implement A.I. tools to enhance efficiency in decision-making and to reduce the time spent on labor-intensive tasks. (Proskauer Rose). However, the increased use of A.I. is not without risks: namely, that automated algorithms, like their human predecessors, are not immune from unfair and discriminatory bias, which can exclude particular consumer groups, whether intentionally or not. (Id.; Ali Arain et al., Bloomberg Law). Algorithmic bias can stem from “unrepresentative or incomplete training data or the reliance on flawed information that reflects historical inequalities” or social conventions. (Id.; Nicol Lee et al., Brookings).

Algorithms that formulate the criteria for consumer-facing decisions, such as loan approval, are often the product of machine learning. “Machine-learning algorithms operate by learning patterns in historical data and generalizing them to unseen data.” (Harini Suresh and John Guttag, Association for the Advancement of Artificial Intelligence; Michael Totty, The Wall Street Journal). The problems with machine-learning systems do not stem from their code, which is neutral, but from the quality of the training data fed into the program and the bias within that data. (Proskauer Rose). Additionally, the risk of unfair and discriminatory bias exists in the data the algorithm reviews on its own. Id. For example, “if in reviewing resumes from a training data set, algorithms are able to determine a person’s age, gender, race, or other protected characteristic(s), it may begin to impermissibly consider these traits.” Id. In layman’s terms, the end result may be bias in, bias out. Id.
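
To make the “bias in, bias out” point concrete, the following is a minimal, purely illustrative Python sketch using synthetic data invented for this example (it is not drawn from any cited source). The model never sees the protected group label, yet a correlated proxy, here a zip code, lets it reproduce the disparity baked into the historical approval labels it was trained to mimic.

    # A synthetic "bias in, bias out" illustration. All data and numbers are invented.
    import random
    random.seed(0)

    def applicant(group):
        # Zip code acts as a proxy: most of group A lives in 11111, most of group B in 22222.
        if group == "A":
            zip_code = "11111" if random.random() < 0.9 else "22222"
        else:
            zip_code = "22222" if random.random() < 0.9 else "11111"
        credit = random.gauss(650, 40)  # credit scores are distributed identically for both groups
        # Historical human decisions applied a stricter bar to group B; that is the bias in the labels.
        approved_historically = credit > (630 if group == "A" else 670)
        return group, zip_code, credit, approved_historically

    data = [applicant(random.choice("AB")) for _ in range(20000)]

    # "Training": for each zip code, pick the credit cutoff that best reproduces the
    # historical approvals. The group label itself is never used as a feature.
    cutoffs = {}
    for z in ("11111", "22222"):
        rows = [(c, y) for g, zz, c, y in data if zz == z]
        cutoffs[z] = min(range(550, 750, 5),
                         key=lambda t: sum((c > t) != y for c, y in rows))
    print("learned cutoffs by zip code:", cutoffs)

    # The learned rule reproduces the historical disparity between groups via the proxy.
    for grp in "AB":
        rows = [(zz, c) for g, zz, c, y in data if g == grp]
        rate = sum(c > cutoffs[zz] for zz, c in rows) / len(rows)
        print(f"group {grp} approval rate under the learned rule: {rate:.2f}")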

The FTC has emphasized through its recent guidance documents that the use of A.I. tools should be “transparent, explainable, fair, and empirically sound,” further cautioning companies that “regardless of how well-intentioned their algorithm is, they must still guard against discriminatory outcomes and disparate impact on protected classes of consumers,” for instance, denying loans to creditworthy women far more often than to creditworthy men. (Andrew Smith, FTC; Ali Arain et al., Bloomberg Law; Michael Totty, The Wall Street Journal). Despite this guidance, the FTC has had to use its authority under Section 5 of the FTC Act to bring enforcement actions over unfair and deceptive practices, arising from the use of A.I., that result in consumer injury. (Andrew Smith, FTC). In contrast, the Consumer Financial Protection Bureau (“CFPB”) has taken a more hands-off approach to discriminatory issues arising from A.I. Testifying before Congress, CFPB Director Rohit Chopra “expressed a desire to reinvigorate ‘relationship banking,’ explaining that it would counteract the ‘automation and algorithms [that] increasingly define the consumer financial services market.’” (Ali Arain et al., Bloomberg Law). In subsequent congressional testimony, Director Chopra announced that the CFPB will launch initiatives to grow the pool of firms competing for customers, with an eye toward promoting ways in which smaller financial institutions, which utilize a relationship banking model, may compete with their larger counterparts. The planned FedNow program was cited as a way to allow smaller financial institutions to capture market share while preserving their relationship banking model. (Rohit Chopra, CFPB).

The CFPB has taken a more hands-on approach regarding the accuracy of the data used to make consumer-facing decisions. To this end, in a recent advisory opinion, the CFPB “has affirmed that matching on name alone [for background checks] is a practice that falls well below the statutory mandate to follow reasonable procedures to assure maximum possible accuracy of consumer information before placing it into a consumer report, as required by the Fair Credit Reporting Act (FCRA).” (Rohit Chopra, CFPB). In a related effort to increase the accuracy of data inputs, the FTC has released guidance documents cautioning companies against relying on “data set[s] missing information from particular populations” and advising companies to give “consumers access and an opportunity to correct information used to make decisions about them.” (Ali Arain et al., Bloomberg Law).
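
A toy illustration of the matching problem the advisory opinion targets (the records below are entirely invented): matching on name alone attaches one person’s derogatory record to a different person who happens to share the name, while matching on multiple identifiers does not.

    # Invented records illustrating why name-only matching falls short of
    # "maximum possible accuracy" under the FCRA.
    court_record = {"name": "Maria Garcia", "dob": "1988-03-02",
                    "ssn_last4": "1234", "item": "eviction judgment"}
    applicant = {"name": "Maria Garcia", "dob": "1995-11-17", "ssn_last4": "9876"}

    def match_name_only(a, b):
        return a["name"] == b["name"]

    def match_multiple_identifiers(a, b):
        return (a["name"] == b["name"]
                and a["dob"] == b["dob"]
                and a["ssn_last4"] == b["ssn_last4"])

    print(match_name_only(applicant, court_record))             # True: wrong person's record attached
    print(match_multiple_identifiers(applicant, court_record))  # False: record correctly excluded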

The FTC’s efforts to ramp up enforcement against bias in algorithms have included a directive to staff to “use compulsory processes to demand documents and testimony to investigate potential abuses.” Id. Proposed legislation introduced in the House in 2022 would streamline this process by requiring certain entities to bring impact assessments of their use of A.I. straight to the FTC. (Squire Patton Boggs). This proposed legislation, the Algorithmic Accountability Act of 2022, would require covered entities to perform “impact assessments” for their algorithmic decision-making systems and would require the FTC to create regulations governing those impact assessments. Id. The bill features extensive requirements for impact assessments, including “information regarding the outcome of ongoing testing and evaluation.” Id. The bill would further require information on “historical performance of the automated decision system or augmented critical decision process pertaining to ‘any differential performance associated with consumers’ race, color, sex, gender, age, [or] disability.’” Id. The bill would not merely require an assessment by covered entities; it would “require each covered entity to attempt to eliminate or mitigate, in a timely manner, any impact made by an augmented critical decision process that demonstrates a likely material negative impact that has legal or significant effects on a consumer’s life.” (H.R. 2231).
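
For a sense of what a “differential performance” disclosure might involve in practice, the sketch below (the author’s own construction, not language or a format drawn from the bill) summarizes a system’s outcome and error rates broken out by protected characteristics.

    # Hypothetical summary of differential performance by protected characteristic.
    # The field names and sample decisions are invented for illustration.
    from collections import defaultdict

    def differential_performance(decisions, protected_attrs):
        """decisions: dicts with 'predicted' (0/1), 'actual' (0/1), and attribute fields."""
        report = {}
        for attr in protected_attrs:
            groups = defaultdict(list)
            for d in decisions:
                groups[d[attr]].append(d)
            report[attr] = {
                group: {
                    "n": len(rows),
                    "approval_rate": sum(r["predicted"] for r in rows) / len(rows),
                    "error_rate": sum(r["predicted"] != r["actual"] for r in rows) / len(rows),
                }
                for group, rows in groups.items()
            }
        return report

    sample = [
        {"sex": "F", "age_band": "under_40", "predicted": 0, "actual": 1},
        {"sex": "F", "age_band": "40_plus", "predicted": 1, "actual": 1},
        {"sex": "M", "age_band": "under_40", "predicted": 1, "actual": 1},
        {"sex": "M", "age_band": "40_plus", "predicted": 1, "actual": 0},
    ]
    print(differential_performance(sample, ["sex", "age_band"]))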

Existing enforcement and proposed regulation of consumer-facing A.I. share an ultimate goal of making the technology more “fair,” and that is a worthy pursuit. (Id.; Rohit Chopra, CFPB; Andrew Smith, FTC). However, how one person defines “fair” can differ drastically from how another does. "Do we want an algorithm that makes loans without regard to race or gender? Or one that approves loans equally for men and women, or whites and blacks? Or one that takes some different approach to fairness?" (Michael Totty, The Wall Street Journal). In certain instances, making A.I. fairer can make it less accurate; reducing unfair bias may mean accepting a decline in statistical accuracy. Id. The argument ultimately becomes a question of balance. Id.
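
The tension can be made concrete with a small synthetic sketch (all numbers invented, not drawn from the cited sources) comparing two of the notions of “fair” above: a group-blind rule that applies one score cutoff to everyone, and an approval-rate-parity rule that picks per-group cutoffs so both groups are approved at the same rate. In this synthetic setup, equalizing approval rates typically costs some predictive accuracy.

    # Synthetic comparison of two fairness notions; all data is invented.
    import random
    random.seed(1)

    def person(group):
        # Group B scores lower on average (e.g., thinner credit files), but the
        # relationship between score and actual repayment is the same for both groups.
        score = random.gauss(680 if group == "A" else 640, 40)
        repays = (score + random.gauss(0, 25)) > 650
        return group, score, repays

    people = [person(random.choice("AB")) for _ in range(10000)]

    def accuracy(cutoffs):
        return sum((s > cutoffs[g]) == r for g, s, r in people) / len(people)

    def approval_rate(group, cutoff):
        scores = [s for g, s, _ in people if g == group]
        return sum(s > cutoff for s in scores) / len(scores)

    # (1) Group-blind: one cutoff for everyone, set near where repayment flips in this setup.
    blind = {"A": 650, "B": 650}

    # (2) Approval-rate parity: per-group cutoffs chosen to hit the same overall approval rate.
    target = sum(s > 650 for _, s, _ in people) / len(people)
    parity = {g: min(range(550, 750, 2),
                     key=lambda t: abs(approval_rate(g, t) - target))
              for g in "AB"}

    print("group-blind accuracy:     ", round(accuracy(blind), 3))
    print("parity cutoffs:           ", parity)
    print("approval-parity accuracy: ", round(accuracy(parity), 3))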

While regulation is warranted, the nascent nature of the A.I. technology being used in consumer-facing decisions must be taken into account. Generally, the private sector innovates faster and more effectively than the federal government ever has or will. By and large, companies are seeking to use A.I. to make their business decisions as objective as possible, not, for example, to usher in a new age of A.I.-based redlining that penalizes consumers for their immutable traits. Id. Google, Amazon, and Pinterest have all scrapped A.I. programs that perpetuated bias and rebuilt them without any agency enforcement action being brought against them. (Id.; Nish Parikh, Forbes; Jeffrey Dastin, Reuters). Regulations preventing gross misconduct and willful ignorance are warranted, but the creation of a regulatory "nanny state" that stifles innovation must be avoided.