AI in health care
In the first half of 2025, 34 states introduced more than 250 AI-related health bills. The bills generally fall into four categories: disclosure requirements, consumer protection, insurers' use of AI, and clinicians' use of AI.
Transparency bills set requirements for what information AI system developers and the organizations deploying those systems must disclose.
Consumer protection bills aim to prevent AI systems from discriminating unfairly and to ensure that people subject to these systems have a way to contest decisions the technology has made.
Insurer bills provide oversight of payers' use of AI in making decisions about approvals and payments for care. Bills on clinical use of AI regulate the technology's role in diagnosing and treating patients.
Facial recognition and surveillance
In the US, a long-standing legal doctrine applies to privacy issues, including facial surveillance, to protect individual autonomy against government interference. In this context, facial recognition technologies pose significant privacy challenges and risks of bias.
Facial recognition software, often used in predictive policing and national security, has shown bias against people of color and is therefore frequently considered a threat to civil liberties. A pathbreaking study by computer scientists Joy Buolamwini and Timnit Gebru found that facial recognition software poses significant problems for Black people and other historically disadvantaged minorities: the software was less likely to correctly identify darker-skinned faces.
Bias also enters through the data used to train these algorithms, for example when the teams guiding the development of facial recognition software lack diversity.
By the end of 2024, 15 US states had enacted laws to limit the potential harms of facial recognition. Elements of these state-level regulations include requirements that vendors publish bias test reports and data management practices, as well as mandates for human review when the technologies are used.