On Oct. 30, Pres. Biden signed an Executive Order on AI - “safe, secure, and trustworthy AI.” It was a long time coming, and there have already been a lot of great summaries and takes on it, which won’t be stopping anytime soon. In particular, here are some good resources:
Nicol Turner Lee on “The Current” podcast — “Unpacking President Biden’s executive order on artificial intelligence”
“Biden administration executive order tackles AI risks, but lack of privacy laws limits reach” on The Conversation
The EO isn’t a perfect, progressive, and protective dream, but it’s really quite something to behold, and worth taking a step back to appreciate. Especially in light of massive tech industry influence and the limits of solely executive action, it’s pretty bold.
Functionally, it doesn’t do that much yet, but it hopefully starts a domino effect through the agency actions it instructs. It requires generative AI companies building particularly large models to disclose a lot, using the Korean War-era Defense Production Act, and mostly just instructs agencies to do other things. It also calls for guidance on things like how to disclose/watermark when content is AI-generated.
Rhetorically/politically, it’s fantastic overall, with three of the most exciting highlights:
It is something! And it uses traditional EO actions: instructing agencies to do a few things, calling for privacy legislation, and invoking the Defense Production Act.
It talks a lot about civil rights, and although it could do more, this paragraph is excellent and nice to see: “Artificial Intelligence policies must be consistent with my Administration’s dedication to advancing equity and civil rights. My Administration cannot — and will not — tolerate the use of AI to disadvantage those who are already too often denied equal opportunity and justice. From hiring to housing to healthcare, we have seen what happens when AI use deepens discrimination and bias, rather than improving quality of life. Artificial Intelligence systems deployed irresponsibly have reproduced and intensified existing inequities, caused new types of harmful discrimination, and exacerbated online and physical harms. My Administration will build on the important steps that have already been taken — such as issuing the Blueprint for an AI Bill of Rights, the AI Risk Management Framework, and Executive Order 14091 of February 16, 2023 (Further Advancing Racial Equity and Support for Underserved Communities Through the Federal Government) — in seeking to ensure that AI complies with all Federal laws and to promote robust technical evaluations, careful oversight, engagement with affected communities, and rigorous regulation. It is necessary to hold those developing and deploying AI accountable to standards that protect against unlawful discrimination and abuse, including in the justice system and the Federal Government.”
Privacy!! It touches on and prioritizes privacy, recognizing the inextricable link between privacy and AI.
aaaaand three of the most worrying parts:
It legitimizes the use and development of AI. This is a tough needle to thread, but between the White House’s engagements with companies to secure voluntary commitments to do/not do certain things and the many parts of the EO that support the use, research, and development of AI, the order needlessly promotes this technology in a way the government arguably does not need to. The “AI” industry that has exploded and made many people rich over the last few years is particularly ripe for overbroad claims, and I fear that as the dust settles on the limits of AI technology, the government will have funded and legitimized a lot of hype.
It doesn’t do enough on law enforcement use of AI, the area where AI is used widely, unaccountably, and to the detriment of people’s lives and liberty. This is fully within the government’s purview, and it’s really upsetting that swift action wasn’t taken to review federal funding of law enforcement tools that have put innocent Black people in jail and exacerbated cycles of racist policing.
It doesn’t recognize the negative effects that the use of large AI models has on climate change. Climate change is mentioned, but only in a way that endorses AI’s potential to mitigate its effects, ignoring the reality that AI is already worsening climate change and will only continue to do so.
while that was happening…
-Another great report from Duke on how easy it is to buy very sensitive data on the open web from data brokers — burn them all down now (Sanford Center @ Duke)
-Excellent and disturbing reporting from Pranshu Verma of WaPo on how widely available image-generation AIs are being used to make fake nudes of people (mostly women, many young), which can ruin their lives. It’s yet another reflection that it’s not the AI, it’s the action, but giving people a better or easier tool is a really tough look for the companies who put it out. (WaPo - Nov. 5)
-Research is demonstrating how easy it is for people to manipulate a chatbot into producing harmful content (WaPo — Nov. 2)
-Pres. Obama, while boosting the EO, gave some examples of resources he learned from while getting up to speed on AI policy, including the Zero Trust AI Governance framework, which I had the honor of helping with alongside AI Now and Accountable Tech, who led the framework’s development (Medium - Nov. 1)
-Excellent WaPo piece, reminiscent of the Rest of World piece on AI and stereotypes, that goes even further in supporting the well-known point that AI exacerbates stereotypes and other discriminatory outcomes. (Washington Post - Nov. 1)
-The G7 countries (Canada, France, Germany, Italy, Japan, Britain, and the United States) signed an agreement on AI as part of the Hiroshima AI Process on the same day the EO was released (Oct. 30) (see VentureBeat for details)
-A fun and interesting read in WIRED inspired by a tidbit in Politico that Biden watched the newest Mission: Impossible prior to signing the Executive Order. (Nov. 3)
-Great reporting from 404 Media on Fusus, a government surveillance contractor that is rapidly expanding its footprint. A good example of how these vampire companies suck money from governments and establish surveillance presences that blanket whole cities and counties in cameras, with AI layered on top that tracks people and god knows what else. (404 Media - Nov. 6)