Artificial Intelligence Briefing: Senate Committee Hears Testimony on Algorithmic Transparency, Accountability | Faegre Drinker Biddle & Reath LLP


Our latest briefing takes an in-depth look at the Senate's exploration of transparency and accountability to prevent bias in algorithmic decision-making systems, the U.S. Copyright Office's new guidance on works containing content generated by AI technologies, and the FTC's warning to companies that may overpromise what their artificial intelligence products or services can deliver.

Regulatory and Legislative Developments

  • Senate Homeland Security and Governmental Affairs Committee Holds Hearing on AI Risks and Opportunities: On March 9, the Senate Homeland Security and Governmental Affairs Committee held a hearing entitled “Artificial Intelligence: Risks and Opportunities.” Committee Chair Gary Peters highlighted that one of the biggest challenges posed by AI is the lack of transparency and accountability in how algorithms arrive at their results, and that AI models can produce biased results that lead to unintended, harmful consequences. Chairman Peters noted that building more transparency and accountability into these systems would help prevent bias that could undermine the utility of AI. The committee heard witness testimony from the Center for Democracy and Technology, Brown University and the RAND Corporation. Chairman Peters concluded the hearing by noting that there will be more hearings to explore the topic in greater depth.
  • New York Assembly Bill Would Regulate Algorithmic Decision-Making Systems: On March 7, the New York Assembly passed Assembly Bill 5309. The bill would require that if a state unit purchases a product or service that is or incorporates an algorithmic decision-making system, the system adhere to responsible artificial intelligence standards. The bill also clarifies that unlawful discriminatory conduct carried out through algorithmic decision-making systems relating to interns, non-employees and creditors is prohibited.
  • Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence: On March 16, the U.S. Copyright Office released a policy statement aiming to clarify its application of the human authorship requirement when examining and registering works containing material generated by artificial intelligence technology. Specifically, the Copyright Office states that “a work lacks human authorship and will not be registered by the Copyright Office if the traditional elements of authorship were produced by a machine.” Thus, “for example, when AI technology receives solely a prompt from a human and produces a complex written, visual, or musical work in response, the ‘traditional elements of authorship' are determined and executed by the technology rather than the human user,” and the work will not be registered.
  • Congressional Research Service Publishes Key Copyright Issues on Generative AI: On February 24, the Congressional Research Service published a Legal Sidebar on generative artificial intelligence and copyright issues. The Sidebar explores questions that courts and the U.S. Copyright Office are beginning to confront regarding the output of generative AI programs (e.g., ChatGPT, Stable Diffusion), such as whether AI output is entitled to copyright protection, who owns the copyright in AI output, and whether the training and use of generative AI programs infringes the copyrights of other works. The Sidebar invites Congress to consider whether any of the questions raised by generative AI programs require changes to copyright law or other legislation.
  • FDA Discussion Paper: Artificial Intelligence in Drug Manufacturing: On March 1, the Food and Drug Administration released a discussion paper on artificial intelligence in drug manufacturing. The public comment deadline for the document is May 1, 2023. The document reaffirms FDA's commitment to supporting the development of advanced drug manufacturing methods and recognizes that existing regulatory frameworks may need to evolve to allow timely adoption of these technologies. The paper solicits comments on several areas that could affect development and manufacturing, including whether and how the use of AI in specific areas of pharmaceutical manufacturing should be regulated; the potential content of standards for developing and validating AI models used for process control and to support release testing; and change management and lifecycle frameworks and expectations for continuously learning AI systems.
  • FTC to Businesses: Keep Your AI Claims in Check: On February 27, the FTC published a blog post urging companies to think twice about “new tools and devices that supposedly reflect the abilities and benefits of artificial intelligence (AI).” The FTC is concerned that some companies may overpromise, misleading consumers about what their purported AI products or services can deliver. The FTC warns businesses that claims about the capabilities of their AI-enabled products or services must be supported by evidence, and that if they are not, companies should refrain from claiming that their products or services are AI-enabled, noting that “false or unsubstantiated claims about a product's efficacy are [the FTC's] bread and butter.”
  • MHRA: Large Language Models and Software as a Medical Device: The UK Medicines and Healthcare products Regulatory Agency (MHRA) has published a blog post noting that large language models (LLMs) like ChatGPT and Bard may be regulated as medical devices when used for a medical purpose. The MHRA acknowledged that while LLM-based medical devices may have difficulty complying with medical device requirements, they are not exempt from those requirements. The MHRA also clarified that LLMs not marketed for a medical purpose are unlikely to be regulated as medical devices.
  • Chamber of Commerce Releases AI Commission Report: The U.S. Chamber of Commerce's Commission on Artificial Intelligence Competitiveness, Inclusion, and Innovation released a report calling for a risk-based regulatory framework that allows for the responsible and ethical use of AI. Policies supporting the responsible use of AI must be a top priority, the report states, warning that “failure to regulate AI will harm the economy, potentially diminish individual rights, and constrain the development and introduction of beneficial technologies.” The call for new regulation is a departure from the Chamber's usual stance.
  • California AI Bill Borrows From the White House Blueprint and the NIST Framework: California AB 331 would impose new obligations on developers and deployers of automated decision tools used in a long list of contexts. Among other things, the bill would require impact assessments, governance programs and notification of natural persons when automated tools are used to make consequential decisions. It would also prohibit deployers from using automated decision tools in ways that contribute to algorithmic discrimination, authorize fines and create a private right of action.
  • NAIC Continues Focus on Artificial Intelligence: The National Association of Insurance Commissioners continued its focus on AI-related matters during its spring national meeting in Louisville. The Big Data and Artificial Intelligence Working Group provided an update on the life insurer AI/ML survey, indicating that a formal notice of review letter will be issued by the end of April, with responses due May 31. The Innovation, Cybersecurity and Technology Committee announced that regulators are in discussions with subject matter experts to create an independent dataset that insurers and regulators could use to test algorithms for unfair discrimination. The Committee also provided an update on the development of a Model Bulletin, which will provide regulatory guidance on insurers' use of big data and artificial intelligence.
