Biden signs AI executive order: what you need to know
Monday, October 30, 2023
Richard Harris
President Joe Biden has signed an executive order on AI that outlines new safety and security standards for the technology, along with how the US will promote innovation, protect Americans' privacy, advance American leadership abroad, ensure responsible government use of AI, and more.
President Joe Biden recently signed an executive order specifically focused on artificial intelligence (AI). The order is a testament to the ever-growing importance of AI across sectors, and it outlines a comprehensive strategy aimed at harnessing the transformative power of AI while addressing the ethical, national security, and economic questions raised by its rapid advancement. Below, we highlight the key points of President Biden's AI executive order, along with insights from AI experts on the potential impact of the new order.
What you need to know about the AI executive order
According to the White House fact sheet, the order directs federal agencies to take action in the following areas:
- New Standards for AI Safety and Security
- Protecting Americans’ Privacy
- Advancing Equity and Civil Rights
- Standing Up for Consumers, Patients, and Students
- Supporting Workers
- Promoting Innovation and Competition
- Advancing American Leadership Abroad
- Ensuring Responsible and Effective Government Use of AI
Michael Leach from Forcepoint weighs in on Biden's AI executive order
The Executive Order on AI that was announced today provides some of the necessary first steps toward a national legislative foundation and structure for managing the responsible development and use of AI by both commercial and government entities, with the understanding that it is just the beginning. The new Executive Order provides valuable insight into the areas the U.S. government views as critical in the development and use of AI, and into what the cybersecurity industry should focus on moving forward when developing, releasing, and using AI: standardized safety and security testing, the detection and repair of network and software security vulnerabilities, the identification and labeling of AI-generated content, and, last but not least, the protection of individual privacy by safeguarding personal data when using AI.
The emphasis the Executive Order places on safeguarding personal data when using AI is another example of the importance the government has placed on protecting Americans' privacy with the advent of new technologies like AI. Since the introduction of global privacy laws like the EU GDPR, numerous U.S. state-level privacy laws have come into effect to protect Americans' privacy, and many of these existing laws have recently adopted additional requirements for the use of AI in relation to personal data. The various U.S. state privacy laws that combine requirements for AI and personal data (e.g., training, customizing, data collection, processing, etc.) generally require the following: the right for individual consumers to opt out of profiling and automated decision-making, data protection assessments for certain targeted advertising and profiling use cases, and limits on the retention, sharing, and use of sensitive personal information when using AI. The new Executive Order will hopefully lead to more cohesive privacy and AI laws that help overcome the fractured framework of the numerous current state privacy laws with their newly added AI requirements. Consistent national AI and privacy laws would allow U.S. companies and the government to rapidly develop, test, release, and adopt new AI technologies and become more competitive globally, while putting in place the necessary guardrails for the safe and reliable use of AI.
Shameek Kundu's insights on Biden's AI executive order
As someone living outside the US, I found the US President's long-awaited executive order on AI remarkable, for three reasons:
1. The fact that it found a way to ride on existing laws and frameworks, rather than creating a new AI law
2. The fact that it plans to use the US government's (non-trivial!) procurement heft to drive traction in a messy space
3. The fact that it is overwhelmingly focused on "here and now" dangers - e.g., misinformation, security and privacy threats, fraud, physical safety, and impact on the workforce - vs. the potential longer-term dangers of AGI
Point 1 is presumably at least partially out of necessity, given US legislative challenges. But it is an approach that others considering new AI laws may want to explore.
Point 2 is not an option open to every other country, but some, like the EU, China, and possibly India, will probably try.
Point 3 is simply a pragmatic recognition of how much work remains on the more "mundane" challenges of today before we get to tomorrow's existential ones.
On a side note, as a Singaporean member of the Global Partnership on AI, I loved the reference to collaboration with Singapore and GPAI towards the end.
What Fight for the Future has to say: The executive order on AI says a lot of the right things, but requires follow-through to ensure real change
Caitlin Seeley George, campaigns and managing director at Fight for the Future, said, "It’s far from breaking news that artificial intelligence is exacerbating discrimination and bias, but it’s a positive step for the Biden Administration to acknowledge these harms and direct agencies to address them in this Executive Order.
However, it’s hard to say that this document, on its own, represents much progress. Biden has given his agencies the power to actually do something on AI. In the best-case scenario, agencies take all the potential actions that could stem from the Executive Order and use all their resources to implement positive change for the benefit of everyday people. For example, agencies like the FTC have already taken some action to rein in abuses of AI, and this Executive Order could supercharge such efforts, unlocking the federal government’s ability to put critical guardrails in place to address the harmful impacts of AI.
But there’s also the possibility that agencies do the bare minimum, a choice that would render this Executive Order toothless and waste another year of our lives while vulnerable people continue to lose housing and job opportunities, experience increased surveillance at school and in public, and be unjustly targeted by law enforcement, all due to biased and discriminatory AI.
It’s impossible to ignore the gaping hole in this Order when it comes to law enforcement agencies’ use of AI. Some of the most harmful uses of AI are currently being perpetrated by law enforcement, from predictive policing algorithms and pre-trial assessments to biometric surveillance systems like facial recognition. Many AI tools marketed to law enforcement require massive amounts of data that are often unjustly procured via data brokers. These systems deliver discriminatory outcomes, particularly for Black people and other people of color. As written, the primary action required by the Executive Order regarding law enforcement's use of racially biased and actively harmful AI is for agencies to produce reports. Reports are miles away from the specific, strong regulatory directives that would bring accountability to this shadow market of harmful tech that law enforcement increasingly relies upon.
We cannot stress enough that if the Biden Administration fails to put real limits on how law enforcement uses AI, its effort will ultimately fail in its goal of addressing the biggest threats that AI poses to our civil rights.
A good portion of the Executive Order focuses on ways to maximize the opportunities that AI presents. People often say that if the AI cat is already out of the bag, we might as well ensure that it benefits the U.S. as much as possible. But it’s critical that the federal government not only focus on expanding its use of AI but also on cases where it must be restricted. Agencies directed to set “standards” must consider cases where AI should not be used.
Specifically, we believe that there are high-impact uses where AI decision-making should not be allowed at all, including for hiring and/or firing in the workplace; law enforcement suspect identification, parole, probation, sentencing, and pretrial release and detention; and military actions. While the Executive Order may call for the development of “best practices” in these areas, we argue this is a misnomer, as there is no “best” way to use automated decision-making in these cases where the consequences are so significant. People’s lives and livelihoods depend on the Administration aggressively drawing lines that should not be crossed, and that will now require follow-through from the agencies."