From AI to Z: Navigating the Nuances of the Ongoing AI Debate and the FTC’s Investigation into OpenAI

From generating basic JavaScript code to creating a 7-day European trip itinerary, ChatGPT and other artificial intelligence (“AI”) programs are a one-stop shop for many internet users. Businesses from a wide array of industries have tapped into the countless uses of AI, revolutionizing the workforce as we know it. (Serenity Gibbons et al., Forbes). Within two months of its November 2022 launch, ChatGPT became the “fastest-growing consumer application in history.” (Krystal Hu, Reuters). The impressive growth of AI has fueled concerns over this ever-changing technology – including data privacy and the spread of misinformation. (David Grier, IEEE Computer Society). In response to these concerns, the Federal Trade Commission (“FTC”) is investigating OpenAI, the maker of ChatGPT, to determine whether the company has violated consumer protection laws. (Cat Zakrzewski, Washington Post). The FTC investigation highlights the need for effective AI regulation and has sparked a nationwide discussion among lawmakers, employers, employees, AI companies, and other stakeholders in this AI arms race. (Andrew Chow et al., TIME).

In July 2023, the FTC began its investigation into OpenAI with a demand letter requesting records addressing the company’s current data security practices. (Cat Zakrzewski, Washington Post). Federal regulators were motivated to launch the investigation by reports of ChatGPT spreading misinformation as well as a data breach this past March. Id. The data breach involved a bug in ChatGPT’s open-source library that allowed some users to view other users’ chat histories. (OpenAI). The FTC is pursuing this investigation under Section 5 of the FTC Act, which addresses unfair or deceptive practices. Id. With the potential for misuse of biometric data that could harm consumers, the growth of AI presents new challenges to federal agencies – with some calling it the “new civil rights frontier.” (Id.; Federal Trade Commission). Under FTC Chair Lina Khan, the agency has played a more active role in policing the latest developments in the tech industry – as demonstrated by its scrutiny of tech giants like Microsoft and Meta. (Dave Michaels, Wall Street Journal). The FTC’s unprecedented investigation is considered “the most potent regulatory threat” to OpenAI since ChatGPT’s explosive launch late last year. (Karen Hao, Wall Street Journal).

Given the investigation’s unprecedented nature, many voices have joined the ongoing AI debate. Companies have articulated their support for AI automation in the workplace, citing its sweeping effects on productivity and the quality of work products. (Lauren Weber et al., Wall Street Journal). In contrast, many workers and labor unions are experiencing anxiety over the growing presence of AI. A recent study showed that AI could replace “300 million full-time jobs.” (Jose Cox, BBC). Lawmakers must respond to the rapid adoption of ChatGPT and other AI systems in a manner that balances the interests of a variety of stakeholders, from labor groups to large tech corporations. To demonstrate the Biden Administration’s commitment to regulating AI, the President met with tech leaders and secured their “voluntary agreement” to AI regulation moving forward. (Deepa Shivaram, NPR). Despite the current AI arms race, companies like Microsoft, Google, Meta, and OpenAI have demonstrated their willingness to abide by and support the creation of AI safeguards. (Michael Shear et al., New York Times).

In the past several months, tech leaders and lawmakers have been convening to better understand what AI is and how to regulate it effectively without impeding innovation. (Ryan Tracy, Wall Street Journal). The CEO of OpenAI suggested that one possible accountability measure could be the creation of a federal agency solely tasked with AI regulation. Id. The proposed agency would oversee AI platforms and ensure compliance with data security measures. Creating this regulatory framework will require the active involvement of many AI and technology experts to develop effective standards. (Brian Fung, CNN).

An additional proposed step toward AI regulation is the creation of a third-party certification process that vets AI platforms before they are released for public use. (Ryan Tracy, Wall Street Journal). Third-party certification has already drawn harsh criticism from those who claim it would be “resource-intensive, difficult to obtain, and cost prohibitive.” Id. Nevertheless, New York City has already enacted a law that requires bias audits for artificial intelligence programs. (Richard Vanderford, Wall Street Journal). Prompted by the technology’s rise in recruitment and hiring, these audits aim to expose any biases embedded within automated systems powered by AI. Id.

Importantly, the worldwide fascination with AI software has completely overshadowed the data labelers in Kenya who played an integral role in ChatGPT’s overnight success. (Billy Perrigo, TIME). In the years leading up to ChatGPT’s launch, OpenAI outsourced workers, called “labelers,” to review and categorize content. This labor was necessary to ensure that the software would not generate grotesque and inappropriate content for users. (Deepa Seetharaman, Wall Street Journal). These labelers, paid hourly wages as low as $1.32, were tasked with parsing incredibly disturbing content from the “darkest parts of the internet” – leaving many of them with mental illness and trauma. (Karen Hao, Wall Street Journal). Although OpenAI and its outsourcing partner have since severed business ties, ChatGPT would not be where it is today without the contributions of the Kenyan labelers. (Billy Perrigo, TIME).

AI is a multi-faceted topic. The distrust that many Americans share regarding this groundbreaking technology is natural given all the unknowns associated with AI – effects on job security, the spread of misinformation, issues with data privacy, and so much more. This fear and lack of knowledge catalyze the general hostility toward AI. To combat this, conversations must continue between lawmakers and leaders in the AI and tech space to dispel misconceptions surrounding AI. As a tangible next step in AI regulation, lawmakers and regulators should follow in New York City’s footsteps and require bias audits of automated processes powered by AI systems. Requiring these audits will open the door to larger-scale efforts, such as establishing a federal agency that oversees AI licensing and development. If lawmakers and tech leaders can strike a proper balance between safety and technological innovation, AI can be used as a tool to enhance human productivity rather than replace it.