Your Company
Arshad Ahmed

Co-Founder / CEO

States are turning their attention to regulating deepfakes and AI usage

State lawmakers around the U.S. are actively addressing the challenges posed by artificial intelligence (AI) and deepfakes in elections.

We're seeing a variety of approaches to minimize the impact of AI-driven misinformation and disinformation. Disclosure requirements for AI-generated content are a common strategy, and some states are also considering outright bans and criminal penalties. In 2023, Michigan enacted a law that sets out the conditions AI-generated content must meet to avoid prohibition, emphasizing the need for transparency.

Legislation in states such as Hawaii and South Dakota focuses on regulating AI-generated media and giving people misrepresented in deepfakes avenues for legal action. Similar proposals are moving through numerous other states, and Virginia has criminalized the creation of deceptive media for criminal purposes.

These legislative moves have significant implications for technology companies. The new laws could introduce liability risks and require compliance with a patchwork of state-specific regulations. Companies may need to invest in advanced detection systems, implement accurate content-labeling practices, and navigate complex legal requirements. The risk of legal consequences for non-compliance, or for accidental distribution of deceptive content, compounds these challenges.

The path from proposed bill to enacted law remains uncertain, underscoring the complexity of legislating around this evolving technology and the need for affected people and businesses to monitor developments closely.

Civiqa delivers bespoke insights on legislation and government processes that are important to you.