Author Tom Kemp stresses the urgency of retraining workers for the high-tech jobs of tomorrow.

The rapidly growing AI industry has moved beyond its early stages of development and is now grappling with the consequences of its own rapid progress. ChatGPT, a generative AI system released in November 2022, has been widely adopted across fields including software development, industrial applications, game design, and virtual entertainment. However, it has also been misused for illicit activities such as spam email operations and the creation of deepfakes.

Recognizing the need for regulation, Silicon Valley-based author, entrepreneur, investor, and policy advisor Tom Kemp argues in his new book, “Containing Big Tech: How to Protect Our Civil Rights, Economy, and Democracy,” that the potential harms of AI must be addressed. Kemp proposes establishing an equivalent of the Food and Drug Administration (FDA) to regulate AI, with the Federal Trade Commission (FTC) evaluating AI impact assessments in high-impact areas such as housing, employment, and credit to combat issues like digital redlining. These assessments would promote accountability and transparency for consumers.

The Biden Administration’s Office of Science and Technology Policy (OSTP) has proposed a “Blueprint for an AI Bill of Rights,” which includes the right to know when automated systems are being used and to understand how they affect individuals. This concept could be incorporated into the FTC’s rulemaking responsibilities if the Algorithmic Accountability Act (AAA) or the American Data Privacy and Protection Act (ADPPA) is enacted. Consumers should have the right to object to AI-based systems and to seek recourse if harmed. Websites with significant AI-generated content should clearly label which content is AI-generated and which is human-generated.

To ensure responsible AI development, certification programs, codes of conduct, and industry standards should be established. Just as the finance industry relies on accredited certified public accountants (CPAs), AI professionals should obtain certifications. Organizations could adhere to quality management standards for AI, similar to the International Organization for Standardization’s (ISO) standards for cybersecurity and food safety. The ISO has already begun developing a new standard for AI risk management, and the National Institute of Standards and Technology (NIST) has released an initial framework for AI risk management.

Promoting diversity and inclusivity in AI design teams is crucial in mitigating biases. Olga Russakovsky, an assistant professor at Princeton University, emphasizes that diversifying the pool of individuals building AI systems will lead to less biased AI systems.

As regulators and lawmakers focus on antitrust issues related to Big Tech firms, the impact of AI should not be overlooked. Acquisitions of AI companies by Big Tech should be closely scrutinized, and the government should consider mandating open intellectual property for AI to prevent technological advancements from concentrating in the hands of a few firms. Additionally, society and the economy should prepare for the displacement of workers by automation, providing education and training for new jobs in an AI-driven world.

Given that Big Tech is at the forefront of AI development, it is crucial to ensure that the effects of AI are positive. The collection and processing of sensitive data by Big Tech using AI pose threats to individuals and society. Similar to the need for containing digital surveillance, it is essential to prevent Big Tech from opening Pandora’s box with AI.
