U.S. Senator Shelley Moore Capito of West Virginia, joined by a bipartisan group of lawmakers, has proposed a bill focused on improving transparency and accountability in high-risk artificial intelligence (AI) applications. The bill, the Artificial Intelligence Research, Innovation, and Accountability Act of 2023, aims to differentiate between human- and AI-generated content so that citizens know when they are interacting with AI rather than a human.

Capito introduced the legislation last week together with Senators John Thune, Amy Klobuchar, Roger Wicker, John Hickenlooper, and Ben Ray Luján, all members of the Senate Committee on Commerce, Science, and Transportation. Capito said the bill would promote "transparent and commonsense accountability" in AI development without slowing the advancement of machine learning.

The bill directs the National Institute of Standards and Technology (NIST) to conduct research toward standards for AI authenticity. NIST would also be responsible for developing methods to detect and understand unexpected behavior in AI systems.

The bill includes new definitions for "generative," "high-impact," and "critical-impact" AI systems. It would require large internet platforms to notify users when the content they view is created by generative AI, with enforcement by the U.S. Department of Commerce.

The proposed legislation would also establish an advisory committee of industry stakeholders to provide input and recommendations on proposed critical-impact AI certification standards. The Department of Commerce would be required to submit a five-year plan for testing and certifying critical-impact AI to Congress and the advisory committee. The bill would additionally create a working group to develop voluntary, industry-led consumer education initiatives for AI systems.