
By the end of the year, travelers should be able to refuse facial recognition scans at airport security screenings without fear that doing so could delay or jeopardize their travel plans.

That’s just one of the concrete safeguards governing artificial intelligence that the Biden administration says it’s rolling out across the US government, in a key first step toward preventing government abuse of AI. The move could also indirectly regulate the AI industry through the government’s substantial purchasing power.

On Thursday, Vice President Kamala Harris announced a set of new, binding requirements for US agencies intended to prevent AI from being used in discriminatory ways. The mandates aim to cover situations ranging from screenings by the Transportation Security Administration to decisions by other agencies affecting Americans’ health care, employment and housing.

Under the requirements taking effect on Dec. 1, agencies using AI tools will have to verify that those tools do not endanger the rights and safety of the American people. In addition, each agency will have to publish online a complete list of the AI systems it uses, its reasons for using them, and a risk assessment of those systems.

The new policy from the Office of Management and Budget (OMB) also directs federal agencies to designate a chief AI officer to oversee how each agency uses the technology.

“Leaders from governments, civil society and the private sector have a moral, ethical and societal duty to make sure that artificial intelligence is adopted and advanced in a way that protects the public from potential harm, while ensuring everyone is able to enjoy its full benefit,” Harris told reporters on a press call Wednesday. She said the Biden administration intends for the policies to serve as a global model.

Thursday’s announcements come amid the rapid adoption of AI tools by the federal government. US agencies are already using machine learning to monitor global volcano activity, track wildfires and count wildlife pictured in drone photography. Hundreds of other use cases are in the works. Last week, the Department of Homeland Security announced it’s expanding its use of AI to train immigration officers, protect critical infrastructure and pursue drug and child exploitation investigations.

Guardrails on how the US government uses AI can help make public services more effective, said OMB Director Shalanda Young, adding that the government is beginning a national talent surge to hire “at least” 100 AI professionals by this summer.

“These new requirements will be supported by greater transparency,” Young said, highlighting the agency reporting requirements. “AI presents not only risks, but also tremendous opportunity to improve public services and make progress on societal challenges like addressing climate change, improving public health and advancing equitable economic opportunity.”

The Biden administration has moved swiftly to grapple with a technology experts say could help unlock new cures for disease or improve railroad safety, yet could just as easily be abused to target minorities or develop biological weapons.

Last fall, Biden signed a major executive order on AI. Among other things, the order directed the Commerce Department to help fight computer-generated deepfakes by drawing up guidance on how to watermark AI-created content. Earlier, the White House announced voluntary commitments by leading AI companies to subject their models to outside safety testing.

Thursday’s new policies for the federal government have been years in the making. Congress first passed legislation in 2020 directing OMB to publish its guidelines for agencies by the following year. According to a recent report by the Government Accountability Office, however, OMB missed the 2021 deadline. It only issued a draft of its policies two years later, in November 2023, in response to the Biden executive order.

Still, the new OMB policy marks the latest step by the Biden administration to shape the AI industry. And because the government is such a large purchaser of commercial technology, its policies around procurement and use of AI are expected to have a powerful influence on the private sector. US officials pledged Thursday that OMB will be taking additional action to regulate federal contracts involving AI, and is soliciting public feedback on how it should do so.

There are limits to what the US government can accomplish by executive action, however. Policy experts have urged Congress to pass new legislation that could set basic ground rules for the AI industry, but leaders in both chambers have taken a slower, more deliberate approach, and few expect results this year.

Meanwhile, the European Union this month gave final approval to a first-of-its-kind artificial intelligence law, once again leapfrogging the United States on regulating a critical and disruptive technology.
