Artificial Intelligence (AI) is rapidly reshaping all sectors, including government and the judiciary. With 2024 shaping up to be the year of AI transformation and regulation, here’s a review of ethical, legal, and regulatory concerns and requirements driving AI’s safe and responsible integration.
Federal Government Initiatives
Executive Orders and Policies
In October 2023, President Biden signed the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which directs federal agencies to ensure AI safety and security while emphasizing privacy and civil liberties protections:
“Responsible AI use has the potential to help solve urgent challenges while making our world more prosperous, productive, innovative, and secure. At the same time, irresponsible use could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security.”
The Order is organized around eight guiding principles and priorities, some of which derive from the Blueprint for an AI Bill of Rights released by the Biden administration in October 2022. It requires agencies to publish an inventory of their AI systems and to designate chief AI officers to oversee compliance and ethical use.
On March 28, 2024, the White House issued memorandum M-24-10, “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence,” which provides comprehensive guidelines for federal agencies on integrating AI technologies responsibly, emphasizing governance frameworks, innovation facilitation, and robust risk management to ensure ethical, secure, and effective use of AI.
Chief AI Officers, on behalf of the heads of their agencies, are to convene AI governance bodies and develop compliance plans that include, among other things, the following safeguards (a simplified sketch follows the list):
- Identifying and assessing AI’s impact on equity and fairness and mitigating algorithmic discrimination when it is present (Section 5(c)(v)(A)).
- Consulting and incorporating feedback from affected communities and the public in the design, development, and use of the AI system (Section 5(c)(v)(B)).
- Conducting ongoing monitoring and mitigation for AI-enabled discrimination (Section 5(c)(v)(C)).
- Notifying individuals when the use of the AI results in an adverse decision or action that specifically concerns them (Section 5(c)(v)(D)).
- Maintaining human consideration and remedy processes, providing a timely opportunity for affected individuals to appeal or contest the AI’s negative impacts on them, where practicable and consistent with applicable law (Section 5(c)(v)(E)).
- Maintaining options for individuals to conveniently opt-out from the AI functionality in favor of a human alternative, where practicable and consistent with applicable law (Section 5(c)(v)(F)).
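To make these requirements concrete, the simplified Python sketch below shows one way an agency system might wire together the notification, appeal, and opt-out safeguards of Sections 5(c)(v)(D) through (F). It is an illustration only: the class, function names, and workflow are assumptions of this sketch, not anything prescribed by the memorandum, which defines outcomes rather than implementations.

```python
# Hypothetical sketch of the M-24-10 notification, appeal, and opt-out
# safeguards (Sections 5(c)(v)(D)-(F)). All names are illustrative
# assumptions, not part of the memorandum or any agency system.
from dataclasses import dataclass


@dataclass
class AIDecision:
    subject_id: str   # the person the decision concerns
    outcome: str      # e.g. "benefit_denied"
    is_adverse: bool  # did the AI produce a negative impact?
    rationale: str    # plain-language explanation of the result


def notify_individual(subject_id: str, rationale: str) -> None:
    # Placeholder: an agency would send a formal notice through its own channels.
    print(f"Notice to {subject_id}: an automated system affected your case. {rationale}")


def route_to_human_reviewer(decision: AIDecision) -> str:
    # Placeholder: queue the case for human consideration and remedy.
    print(f"Case {decision.subject_id} escalated for human review.")
    return "pending_human_review"


def process_decision(decision: AIDecision,
                     opted_out: bool,
                     appeal_requested: bool = False) -> str:
    """Route an AI-assisted decision through the required human safeguards."""
    # 5(c)(v)(F): honor an individual's choice of a human alternative.
    if opted_out:
        return route_to_human_reviewer(decision)

    # 5(c)(v)(D): notify the individual when an adverse decision concerns them.
    if decision.is_adverse:
        notify_individual(decision.subject_id, decision.rationale)

        # 5(c)(v)(E): provide a timely opportunity to appeal or contest.
        if appeal_requested:
            return route_to_human_reviewer(decision)

    return decision.outcome
```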
Legislative and Regulatory Developments
Federal Legislation
Building on the initial work of the current administration, the Senate, and the House of Representatives, the Future of AI Innovation Act, introduced on April 18, 2024, aims to promote US leadership in AI and establish the AI Safety Institute. Additionally, various bills addressing AI regulation, including antitrust, transparency, and training data, are under consideration.
International Cooperation
The US and EU are collaborating to develop compatible AI regulatory environments, fostering international standards and cooperation. In May 2024, the European Council gave its final green light to the EU Artificial Intelligence Act, which aligns with the key principles of the Biden Executive Order and takes a risk-based approach:
“The new law categorizes different types of artificial intelligence according to risk. AI systems presenting only limited risk would be subject to very light transparency obligations, while high-risk AI systems would be authorized, but subject to a set of requirements and obligations to gain access to the EU market.”
Ethical and Legal Concerns
Bias and Discrimination
AI systems can embed patterns of gender, racial, and income discrimination, leading to concerns about fairness and equality. Ensuring accuracy and cultural sensitivity in AI translation tools is a crucial area of concern.
AI-Powered Language Services
AI can provide real-time translation and interpretation services, assisting individuals with limited English proficiency (LEP) in low-risk legal and governmental settings. While this technology has the potential to significantly improve access to information and services for non-English speakers, in higher-stakes legal and governmental settings mistranslation can cause serious harm.
Transparency and Accountability
The “black box” nature of AI poses significant risks in the legal system, prompting calls for standards on AI use and disclosure in criminal justice to ensure transparency and accountability. AI chatbots can offer 24/7 customer service and information access in multiple languages, facilitating communication between clients with LEP and government agencies. However, these chatbots do not provide equal quality and safety across languages, and marginalized and indigenous languages suffer most.
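One way agencies can operationalize human oversight here is to escalate unsupported languages and low-confidence output to a human linguist before anything reaches the client. The Python sketch below is a hypothetical illustration only; the language list, confidence threshold, and model interface are assumptions of the sketch, not part of any cited policy or guidance.

```python
# Hypothetical "human-in-the-loop" guardrail for a multilingual chatbot:
# languages the model supports poorly, or low-confidence output, are routed
# to a human linguist instead of being sent directly to the user.
WELL_SUPPORTED_LANGUAGES = {"en", "es", "fr", "zh"}   # illustrative assumption
CONFIDENCE_THRESHOLD = 0.85                           # illustrative assumption


class StubModel:
    """Stand-in for a real translation/chat model (an assumption of this sketch)."""
    def generate(self, message: str, language: str) -> tuple[str, float]:
        # Returns a reply and a made-up confidence score.
        return (f"[{language}] auto-reply to: {message}", 0.90)


def respond(user_message: str, language: str, model) -> dict:
    """Return a machine reply, or hand the exchange off to a human linguist."""
    # Marginalized and low-resource languages go straight to a human.
    if language not in WELL_SUPPORTED_LANGUAGES:
        return {"action": "human_handoff", "reason": "unsupported language"}

    reply, confidence = model.generate(user_message, language)

    # Low-confidence machine output is reviewed before it reaches the client.
    if confidence < CONFIDENCE_THRESHOLD:
        return {"action": "human_review", "draft": reply, "confidence": confidence}

    return {"action": "send", "reply": reply, "confidence": confidence}


# Example: a Spanish question is answered by the model; an unsupported
# language is routed to a human interpreter instead.
print(respond("¿Dónde solicito beneficios?", "es", StubModel()))
print(respond("Sample message", "chr", StubModel()))  # "chr" = Cherokee (ISO 639)
```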
Current and newly adopted policies continue to reaffirm that AI tools must comply with legal standards for language access and that Language Access Plans should not exacerbate existing disparities.
Language access initiatives like the Interpreting SAFE-AI Task Force, in collaboration with standards organizations such as ISO and ASTM, are pioneering efforts to develop guidelines that emphasize the human role in AI-assisted interpreting and translation.
Key areas of focus for these standards:
- End-User Autonomy
- Enhancing Safety and Wellbeing
- Transparent Operations
- Human Oversight
- Privacy and Security
Following a recent public comment period, the official guidance for the safe and ethical use of AI in language interpreting was released on June 27, 2024: https://safeaitf.org/guidance/
2024 is the Year of AI Transformation
AI is transforming US government agencies, bringing both opportunities and challenges. Addressing ethical, legal, and regulatory concerns is critical to ensure AI’s responsible use.
Talk with an expert at MasterWord about bringing your Language Access Plan to full compliance, including “human review” and “human-in-the-loop” mechanisms to maximize your plan for AI integration: [email protected].