Executive Order on AI Education: What It Means for Schools and the Law
May 2025
Abstract
The Executive Order (EO) on Advancing Artificial Intelligence Education for American Youth, issued on April 23, 2025, seeks to enhance AI literacy and skills among American students. This article analyzes the EO’s legal framework, compares it with AI policies in the European Union (EU), United Kingdom (UK), U.S. states and cities, and international standards from ISO and IEEE to identify conflicts, risks, and opportunities. While the EO focuses on education, its limited legal weight contrasts with the EU’s regulatory approach and raises questions about ethical AI training.
1. Introduction
Artificial Intelligence (AI) is transforming industries and societies, necessitating robust policies to harness its potential while mitigating risks. The Executive Order (EO) titled “Advancing Artificial Intelligence Education for American Youth” was issued on April 23, 2025, by U.S. President Donald J. Trump. It represents a strategic effort to prepare American students for an AI-driven future. This article examines the EO’s legal framework, compares it with global and domestic AI policies, and evaluates its implications.
Compared to other international and local initiatives, the EO highlights a global patchwork of regulation. Users and providers of AI applications must be attentive to complementary and sometimes conflicting legal requirements to avoid compliance issues.
2. Summary of the Executive Order
By express intention, the EO focuses on AI education for K–12 school systems, students, and teachers. Whether intentional or not, it also impacts post-secondary education, federal agencies, and skills-based workforce training. Specifically, the EO assigns responsibilities to a White House Task Force on Artificial Intelligence Education, chaired by the Director of the Office of Science and Technology Policy. The Task Force membership shall also include:
(i) the Secretary of Agriculture
(ii) the Secretary of Labor
(iii) the Secretary of Energy
(iv) the Secretary of Education
(v) the Director of the National Science Foundation (NSF)
(vi) the Assistant to the President for Domestic Policy
(vii) the Special Advisor for AI & Crypto
(viii) the Assistant to the President for Policy
(ix) the heads of other executive departments, agencies, and offices as designated by the Chair.
In addition to the Task Force, the EO establishes several initiatives to promote AI education:
Presidential AI Challenge: To be planned within 90 days and held within 12 months, highlighting student and educator achievements.
Education Improvements: Public-private partnerships will develop K–12 AI literacy resources within 180 days.
Educator Training: The Secretary of Education must prioritize AI in teacher training within 120 days.
Apprenticeships: The Secretary of Labor is called upon to promote AI-related, federally Registered Apprenticeships.
Although the EO does not create new laws, its directives to federal agencies are detailed and time-sensitive.
3. Legal Framework of the EO
As an executive order, the EO is a presidential directive to federal agencies, grounded in constitutional authority but lacking the force of legislation. It references existing statutes, such as 15 U.S.C. § 9401(3) for AI definitions, and operates within legal and budgetary constraints. While it includes policy directives, implementation timelines, and federal coordination, it does not establish enforceable rights.
4. Comparison with EU AI Act
The EU AI Act (Regulation (EU) 2024/1689) is the world’s first comprehensive AI law, categorizing systems by risk:
Unacceptable Risk: Bans practices like social scoring
High Risk: Requires assessments for systems in healthcare or law enforcement
Limited (Transparency) Risk: Mandates disclosures for systems such as chatbots
Minimal Risk: No specific rules
Unlike the EO’s educational focus, the EU Act regulates AI use broadly, with emphasis on safety and ethics.
5. Comparison with UK AI Policy
The UK’s “pro-innovation” approach, outlined in a 2023 white paper, relies on five principles: safety, transparency, fairness, accountability, and contestability. Existing regulators enforce these principles without new AI-specific laws. Like the EO, the UK prioritizes innovation, but it addresses AI use holistically, not just in education.
6. U.S. States and Cities AI Policies
AI laws vary across U.S. states and cities:
New York City: Local Law 144 requires bias audits for AI hiring tools and prohibits discriminatory practices.
California: Laws regulate AI in consumer protection, prohibiting misrepresentation, fraud, and false factual statements in AI-generated content and interactions.
Task Forces: States like Utah have established groups to study AI risks.
Federal education initiatives under the EO may conflict with these local regulations.
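To illustrate what a Local Law 144-style bias audit actually measures, the sketch below computes the "impact ratio" such audits report: each category's selection rate relative to the most-selected category's rate. This is a minimal illustration; the category names and candidate counts are hypothetical, and real audits involve additional requirements (independent auditors, intersectional categories, public posting).

```python
def selection_rates(assessed, selected):
    """Per-category selection rates: candidates selected / candidates assessed."""
    return {cat: selected[cat] / assessed[cat] for cat in assessed}

def impact_ratios(rates):
    """Each category's selection rate divided by the highest category's rate."""
    top = max(rates.values())
    return {cat: rate / top for cat, rate in rates.items()}

# Hypothetical counts for two demographic categories.
assessed = {"group_a": 200, "group_b": 150}
selected = {"group_a": 60, "group_b": 30}

rates = selection_rates(assessed, selected)   # a: 0.30, b: 0.20
ratios = impact_ratios(rates)                 # a: 1.00, b: ~0.67
```

A low impact ratio for a category signals potential adverse impact, which is what the required audit is meant to surface before the tool is used in hiring.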
7. International Standards: ISO and IEEE
International standards organizations offer globally informed guidelines for AI governance.
ISO (International Organization for Standardization):
Founded in 1947, ISO develops voluntary international standards that support trade and cooperation.
Key AI standards:
ISO/IEC 42001:2023 – AI management systems
ISO/IEC 5338:2023 – AI lifecycle processes
IEEE (Institute of Electrical and Electronics Engineers):
With roots dating to 1884, IEEE is the world’s largest technical professional organization.
Key AI standards:
IEEE 7000-2021 – Ethical system design
IEEE 7003-2024 – Algorithmic bias
These standards could support the EO’s goals. One of Guardrail Technologies’ principals contributed to the development of the IEEE standards.
8. Conflicts, Risks, and Opportunities
Table 1: Conflicts, Risks, and Opportunities of the EO
Conflicts: Potential misalignment with state AI laws; differences with the EU’s regulatory approach. AI platform providers should expect to comply with foreign jurisdictions. The EU’s enforcement mechanisms expand responsibility and liability for non-compliance affecting EU citizens, businesses, and governments.
Risks: Lagging in AI regulation and insufficient ethical training. Without refining U.S. policy, uncertainty will persist for AI users and providers. Merely claiming leadership in AI is not enough; global perception may shift.
Opportunities: The EO offers leadership potential in AI education, public-private partnerships, and innovation through the Presidential AI Challenge. These initiatives offer AI companies high-profile engagement and visibility.
The greatest risk is the missed opportunity this unique moment presents. With global attention on AI, proactive developers should model the protections these initiatives advocate.
Guardrail Technologies is answering the call to provide Responsible AI by creating solutions addressing concerns such as data misuse, misinformation, and ethical functionality.
Guardrail Fact Checker™ – Verifies factual accuracy of AI output
Guardrail Data Masker™ – Prevents LLM access to confidential data
Guardrail Sunscreen™ – Protects sensitive data in virtual meetings
Guardrail Code Analyzer™ – Detects digital security risks
Guardrail Summarization Analyzer™ – Evaluates AI content against metrics
Guardrail Prompt Protect™ – Prevents prompt-based data leaks
Guardrail AI Suite™ – An integrated solution suite for Responsible AI
AI literacy is vital across all education levels. Guardrail’s partnership with Digital Citizen Academy highlights the role of structured guidance in reducing AI risks in schools. Dr. Lisa Strohman’s work protects K–12 students from AI misuse, cyberbullying, exposure to pornography, and exploitation.
Responsible AI advocates can shape the future, set standards, and align with legal and ethical norms. A proactive, anticipatory approach to Responsible AI is essential.
Guardrail Technologies is committed to building proactive solutions within this ecosystem.