NIST Fails to Provide Information on Award Process for AI Research, Lawmakers Say
Lawmakers are demanding further transparency from the U.S. National Institute of Standards and Technology as it prepares to fund external research efforts to promote the development and responsible use of artificial intelligence technologies.
Rep. Frank Lucas, R-Okla., chair of the House Science, Space, and Technology Committee, led a bipartisan group of lawmakers in a Dec. 14 letter to NIST Director Laurie Locascio. “The current state of the AI safety research field creates challenges for NIST,” the letter said, as the agency leads efforts to develop a robust, scientific framework for AI trust and safety research.
The agency has assumed a primary role in federal government efforts to place safeguards on AI, even in the absence of comprehensive national regulation. An executive order signed in October by President Joe Biden directed NIST to stand up the Artificial Intelligence Safety Institute and to develop guidelines for developers of AI, “especially of dual-use foundation models,” to conduct red-teaming tests (see: NIST Seeks Public Comment on Guidance for Trustworthy AI).
The research agency, for decades a sleepy scientific backwater, took on a high-profile role in cybersecurity during the Obama administration and now, thanks to the Biden administration, is at the center of federal efforts to tame AI. Scrutiny has already followed.
Lawmakers expressed specific concerns about NIST’s plans to fund AI research through its new AI institute. NIST has so far failed to provide adequate information about how it plans to award the funding opportunities to research institutions and private organizations, the letter said, and did not discuss the awards process during a Dec. 11 congressional staff briefing.
“There does not appear to be any publicly available information about the process for these awards – no notice of funding opportunity, announced competition, or public posting,” the letter said. The process for the AISI-funded awards also “differs significantly” from the information NIST provided to organizations interested in entering into cooperative research and development agreements in the consortium, the lawmakers wrote.
“As NIST prepares to fund extramural research on AI safety, scientific merit and transparency must remain a paramount consideration,” the letter said.
The letter adds to a chorus of AI and cybersecurity experts warning of challenges ahead for NIST and federal agencies in achieving key components of the AI executive order (see: Why Biden’s Robust AI Executive Order May Fall Short in 2024).
NIST issued a request for information on Dec. 19, seeking input on the development of red-teaming and security evaluation guidelines for AI developers. The forthcoming guidance will apply to developers of advanced AI systems that may pose national security or safety risks to the U.S. public, as required under the executive order.
The agency is also continuing to accept letters of interest until Jan. 15, 2024, from organizations interested in participating in a new consortium associated with the AI institute. NIST said the goal of the consortium is to help advance collaborative efforts in establishing “proven, scalable and interoperable techniques and metrics to promote development and responsible use of safe and trustworthy AI.”
The lawmakers applauded the establishment of the new AI safety institute and said NIST is “rightly viewed as a leader in developing a robust, scientifically grounded framework for the field of AI trust and safety research.”
“We expect NIST to hold the recipients of federal research funding for AI safety research to the same rigorous guidelines of scientific and methodological quality that characterize the broader federal research enterprise,” the lawmakers said.
NIST did not immediately respond to Information Security Media Group’s request for comment.