AI News: Biden’s Executive Order Aims to Limit Artificial Intelligence Risks
Update: On Nov. 1 the U.S. joined 28 other countries plus the European Union in the Bletchley Declaration — an international cooperation agreement on AI safety and research. The declaration was signed at the first-ever global “AI Safety Summit,” hosted by Britain. It encourages transparency and accountability in the development of AI technology within national borders.
The agreement seeks to get ahead of possible dangers that could emerge in the early days of AI advancement, including “potential intentional misuse or unintended issues of control relating to alignment with human intent.” It calls for increased transparency from governments and the “private actors” developing AI capabilities, along with safety metrics, tools, testing and research.
The White House released its first-ever standards aimed at reducing a bevy of safety and security concerns raised by artificial intelligence.
The order is sweeping and addresses potential risks AI technology poses to consumers, workers, national security, privacy, innovation and immigration.
“One thing is clear: To realize the promise of AI and avoid the risk, we need to govern this technology,” said President Joe Biden during a press conference on Oct. 30. “There's no other way around it; it must be governed.”
The executive order includes an emphasis on the development of new standards, tools and tests across the board. There are eight parts to the order:
Safety and security. Create new safety and security standards for AI, with an emphasis on national security.
Consumer privacy. Protect consumer privacy in AI systems.
Advance equity. Avoid algorithmic discrimination in the workplace, by federal contractors and by landlords. The order also calls for best practices for using AI in the judicial system.
Health care and education. Advance the use of AI in the development of affordable and life-saving drugs. Provide educators with the resources to deploy AI-enabled educational tools.
Mitigate risks to workers. Collect information on how AI could impact the labor market while developing principles and best practices to maximize the benefits of AI while addressing potential risks like job displacement.
Promote innovation and competition. Expand grants for AI research in key areas like health care and climate change. Expand the ability of highly skilled immigrant workers with expertise in critical areas to remain in the U.S.
Collaborate with other nations. Establish international frameworks for the use of AI worldwide.
Ensure effective government use of AI. Develop guidance for agencies’ use of AI systems, help agencies acquire AI products and services, and accelerate the hiring of AI professionals in the federal government.
Biden said during the press conference that the order is the “most significant action any government anywhere in the world has ever taken on AI safety, security and trust.” At the conclusion of the press conference, Biden also called on Congress to pass legislation on AI.
What’s in the Blueprint for an AI Bill of Rights?
On Oct. 5 the White House released its “Blueprint for an AI Bill of Rights” outlining five principles to guide the creation and distribution of automated systems. Recommendations under each of the principles include continuous risk identification and mitigation, as well as testing and evaluation during all phases of the creation of AI systems.
Safe and effective systems. Automated systems should be designed to proactively protect users from harm.
Algorithmic discrimination protection. Automated systems should be used and designed in a way that avoids discriminatory treatment of people based on protected classifications such as race, sex, religion and disability.
Data privacy. Automated systems must include built-in protections to protect users from abusive data practices and users should have agency over how their personal data is used.
Notice and explanation. People should be notified when an automated system is being used and should be provided with plain language explanations of outcomes from that system.
Human alternatives, consideration and fallback. People should be able to opt out of an automated system and have access to a human alternative, when it’s appropriate — based on “reasonable expectations in a given context.”
More AI news
Aug. 9: The White House announced a two-year competition called the “AI Cyber Challenge” that challenges competitors to identify and fix software vulnerabilities using AI. It includes collaboration with AI companies like Anthropic, Google, Microsoft and OpenAI. The competition includes $20 million in prizes and is intended to drive the development of new technology to improve the security of computer code.
May 4: The White House announced it had secured voluntary commitments from 15 of the top leaders in AI to have their systems publicly evaluated to assess how they align with the AI Bill of Rights. The companies include Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI.
April 25: A joint statement by the Federal Trade Commission, Department of Justice, Consumer Financial Protection Bureau and the Equal Employment Opportunity Commission reaffirmed the agencies’ commitment to enforcing existing discrimination and bias laws against those who use AI to conduct business, including social media platforms, banks, landlords, employers and other businesses.
(Photo by Chip Somodevilla/Getty News via Getty Images)