Tuesday, November 14, 2023
Context: In my role as division director of IIS, I’m sending out a short message to the IIS mailing list on the Second Tuesday Every Month (STEM). Here’s the installment for November 2023.
Hi IIS Community,
One of the biggest pieces of IIS-related news since my update last month is the AI Executive Order issued at the end of October. It directs US government agencies and AI companies to carry out a number of activities over the short and medium term. Among its goals is promoting the use of AI to support the functioning of government, while making sure that happens in a way that puts people first.
There are various ways that the IIS Community can engage with the Executive Order. Here’s a summary of the elements that you might find most relevant.
A lot of the Executive Order is directed toward people building societally facing AI systems and toward making sure those systems:
are evaluated/tested;
aren’t used to deny people opportunities they are entitled to;
include content markings with their outputs so that people know where things came from;
are secure and don’t increase risks around biotechnology, cybersecurity, fraud, and the like.
The IIS Community can help by studying the foundational topics that inform these items. In fact, many of you already are! But there's a sense that more work is needed and would be valued.
The Executive Order directs federal agencies to make investments in:
AI education/training;
AI research and development.
That may sound like exactly what we do. And that’s no coincidence… the folks who contributed to writing the Executive Order know about and value your contributions. Keep up the great work!
NSF is directed to carry out a number of activities, including:
Launch a pilot implementation of the National AI Research Resource (NAIRR);
Fund and launch an AI-related Regional Innovation Engine;
Establish at least four new National AI Research Institutes;
Prioritize resources for AI-related education and workforce development;
Invest in scaling up and promoting the use of Privacy Enhancing Technologies (PETs) in the context of AI/ML;
Serve on a government council on the use of AI in government agency operations;
Develop AI testbeds (with the Departments of Commerce and Energy);
Coordinate on AI topics with other countries (with the Department of State and USAID);
Ensure protections are in place around research on synthetic biology.
Although these items are primarily NSF's responsibility, we represent the academic research community and will be reaching out to people for input. Keep an eye out for opportunities to engage. Thanks!
Other areas where NSF has input:
Establishing explicit guidelines for developing and deploying safe, secure, and trustworthy AI systems (with NIST);
Managing AI-specific cybersecurity risks, particularly in relation to critical infrastructure (with the Department of Homeland Security, Department of the Treasury, and NIST);
Cataloging approaches to digital content authentication (with the Department of Commerce);
Mitigating climate change risks with AI support (with the Department of Energy);
Developing guidance around the use of AI in education (with the Department of Education);
Creating standards around the use of synthetic nucleic acid sequences (with the Office of Science & Technology Policy).
There is way more in there that I didn’t mention; after all, it’s 100+ pages. But my hope was to share a few high priority items. Big thanks to Tess DeBlanc-Knowles, who works in NSF’s TIP Directorate and has been taking a leadership role in AI policy topics. She drafted an internal summary for NSF that I used as the basis for this message.
If there’s more you want to know, we are planning to devote our IIS Office Hour in December to this topic. We’ll hear from Wade Shen, the head of the National AI Initiative Office. Stay tuned for more information.
Until next time!
-Michael