Legislation, Regulation, and Policy in the Age of Artificial Intelligence
Discover how AI is already drafting legislation globally, from Brazil to Massachusetts to Arizona. Explore the transformative capabilities, democratic risks, regulatory gaps, and the urgent need for guardrails before machines reshape how laws are made.
Artificial intelligence is already drafting legislation. In 2023, a Brazilian municipality unanimously approved a bill written by ChatGPT. Massachusetts lawmakers used the same tool to draft bills regulating AI itself. Arizona passed deepfake election legislation partially authored by AI.
The US House's Office of the Clerk now uses AI to analyze how bills relate to existing law. Ohio has been using AI tools for wholesale revision of state administrative law since 2020. This is not a hypothetical scenario. It is happening now, often without public disclosure, and the pace is accelerating.
HOW AI ENTERED THE LEGISLATIVE PROCESS
The First Undisclosed Use: Porto Alegre, Brazil, 2023
In late 2023, Councilor Ramiro Rosário of Porto Alegre, Brazil, gave ChatGPT a 289-character directive: create legislation exempting residents from paying water meter fees when meters are stolen.
The system generated a complete bill with eight articles and supporting rationale. The city council voted unanimously to approve it. The mayor signed it into law. Only after the bill became law did Rosário publicly reveal that ChatGPT had written it, without informing his colleagues beforehand.
The significance of this moment cannot be overstated: legislators voted on AI-generated law without knowing it was AI-generated. When they learned the truth, responses ranged from enthusiasm to concern.
City Council President Hamilton Sossmeier called it a "dangerous precedent." Rosário maintained that the AI's work was competent, noting that human legislators routinely copy language from existing laws and that AI was simply doing the same thing more efficiently.
The Transparent Approach: Massachusetts, 2023
Months later, Massachusetts legislators took a different approach. State Senator Barry R. Finegold and Representative Josh S. Cutler openly used ChatGPT to draft bills regulating generative AI itself, one titled, with ironic honesty, "An Act drafted with the help of ChatGPT to regulate generative artificial intelligence models like ChatGPT."
The drafting process revealed both AI's capabilities and limitations. Finegold and Cutler repeatedly prompted ChatGPT to generate legislative text in the style of Massachusetts General Laws. The system initially refused, apologizing and saying it was "not able to draft bills." Through persistent, carefully worded prompts, they eventually got ChatGPT to produce substantive legislative language.
Notably, ChatGPT contributed original ideas that the legislators kept in the final version: it defined "generative artificial intelligence," expanded on operating standards for AI models, and clarified the attorney general registration process.
However, the system also exhibited limitations: it struggled with amendments and line-item strikethrough. Humans had to substantially refine ChatGPT's work before submitting it.
Rapid Proliferation Across Jurisdictions
Following these early experiments, AI adoption in legislative drafting expanded rapidly:
Arizona (2023): Representative Alexander Kolodin used ChatGPT to draft the definition of "deepfake" for a bill regulating deepfakes in elections. The bill passed unanimously in both houses. Kolodin received no public blowback.
South Korea (2024): Korean legislators deployed an AI-powered legislative editor fully integrated with Korea's legal information system, covering 100 percent of current statutes, amendment history, and case law. Its primary purpose was standardizing and modernizing legislative language across institutions.
U.S. House of Representatives (2024): The Office of the Clerk began using AI to accelerate production of cost estimates for bills and to analyze how proposed legislation relates to existing legal code.
California (2024): Governor Newsom signed 17 AI-related bills covering education, labor, privacy, healthcare, and elections, many drafted with AI assistance or reviewed by AI systems.
HOW AI CHANGES LEGISLATION ITSELF
Two Profound Shifts in What Laws Can Become
Scholars and practitioners identify two ways AI fundamentally alters legislative capacity and output. Understanding these shifts is crucial to anticipating how AI-written laws will differ from human-written ones.
Capability #1: Breadth of Expertise
Traditional legislators and their staff possess specialized knowledge in a handful of domains. A healthcare policy expert might struggle with complex energy regulation. An education specialist rarely masters transportation infrastructure law. Human cognitive limitations force specialization.
Large language models operate without this constraint. ChatGPT generates legislative text on specialty crop harvesting mechanization with the same facility as text on energy efficiency standards for street lighting. It maintains coherence across thousands of pages of existing statutes, amendments, and case law simultaneously.
This enables legislators to address dramatically more policy domains at once. Rather than limiting attention to one or two specializations, an AI-assisted legislator can address ten domains in parallel. When combined with the second capability, the implications become profound.
Capability #2: Complexity Tolerance
Humans can hold only so much complexity in working memory. Legislators typically constrain laws to a comprehensible level of detail. Exceptions, caveats, technical specifications, and cross-references are limited, not because they're unnecessary, but because human minds cannot comfortably track extremely baroque statutory language.
LLMs have no such constraint. They perform multistep reasoning across thousands of pages of documents simultaneously. An AI system can construct legislation with extraordinary internal complexity: intricate regulatory frameworks, granular exceptions, precise technical definitions, dense cross-references. The system doesn't experience cognitive load.
The U.S. Supreme Court's 2024 decision overturning Chevron deference creates a powerful incentive for this shift. Historically, statutes left implementation details to executive agencies.
Courts deferred to agencies' interpretations. With Chevron overturned, Congress must now specify detailed implementation in the statute itself or risk courts invalidating its delegations to agencies. The result is that statutes become dramatically more complex, and AI is the tool that enables both writing and understanding this new generation of legislation.
The Political Consequence: Faster Policy Shifts
When unified government occurs (single party controlling executive and legislature), political pressure intensifies to rapidly implement the party platform. Historically, the small size of legislative drafting teams constrained speed. You cannot write complex bills faster than qualified humans can think and write them.
AI removes this constraint. A legislative assistant can prompt ChatGPT to revise large bodies of state administrative law wholesale in a single night. Entire regulatory frameworks can be restructured faster than traditional processes permit. When government changes hands, the new administration can implement wholesale policy shifts on unprecedented timescales.
This could be viewed as more responsive governance, where voters demanded change, and government delivers rapidly. Or it could enable destabilizing political whiplash, where fundamental policy frameworks oscillate with each electoral cycle. The same technology enables both interpretations.
Implications for Legal Coherence and Quality
AI's strength in grammar, syntax, and rule-enforcement means AI-written laws will likely exhibit superior consistency and clarity compared to human drafting. The system will catch typographical errors, enforce parallel structure across related provisions, ensure cross-references are accurate, and maintain consistent terminology throughout thousands of pages of statute.
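The consistency checks described above can be sketched mechanically. The snippet below is a minimal illustration, not any system actually used in a legislature: it assumes a toy statute stored as a dict keyed by section number and a simplified "Section N" citation format (real statutory citation formats vary widely), and flags cross-references to sections that don't exist.

```python
import re

def check_cross_references(statute: dict) -> list:
    """Flag cross-references to sections that don't exist in the statute.

    `statute` maps section numbers (e.g. "3") to their text. References are
    assumed to look like "Section 4"; this is a deliberately simplified format.
    """
    defined = set(statute)
    dangling = []
    for section, text in statute.items():
        # Collect every "Section N" citation and check it against defined sections.
        for ref in re.findall(r"Section (\d+)", text):
            if ref not in defined:
                dangling.append((section, ref))
    return dangling

statute = {
    "1": "Definitions used in Section 2 and Section 3.",
    "2": "Obligations described here, subject to Section 5.",  # Section 5 doesn't exist
    "3": "Enforcement provisions.",
}
print(check_cross_references(statute))  # → [('2', '5')]
```

A production drafting tool would of course need to handle subsections, cross-title citations, and renumbering, but the principle, exhaustive mechanical verification that no human drafter performs reliably at scale, is the same.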
However, AI will also propagate biases encoded in its training data. AI systems learn from existing law, including laws that embed historical inequities and discrimination.
THE RISKS TO DEMOCRATIC GOVERNANCE
Transparency and Accountability
The Porto Alegre example exposes a critical risk: legislators may use AI without disclosing it. When voters and oversight bodies don't know that laws are AI-drafted, they cannot assess whether the process was appropriate. Democratic legitimacy depends on understanding how laws are made.
Currently, there is no legal requirement in most jurisdictions to disclose AI involvement in legislative drafting. A bill could be entirely AI-written, approved by legislators who haven't read it, and signed into law with the public never knowing.
Accountability for Errors
Who is responsible when AI-drafted law contains errors, ambiguities, or unintended consequences? Is it the legislator who submitted the bill? The legislative staff member who prompted the AI? The AI company that created the system? Courts have yet to address this. The absence of clear accountability creates moral hazard: legislators can blame AI for poor drafting, while AI companies can claim their systems are just tools.
Dominance of Wealthy, AI-Capable Legislatures
AI legislative drafting capacity is not equally distributed. Well-resourced federal legislatures, large state legislatures, and legislatures in wealthy nations will adopt AI tools first. Less-resourced legislatures, such as those in rural counties, small municipalities, and developing nations, will lag behind. This creates asymmetry in legislative capacity.
Wealthy jurisdictions will generate more legislation, faster, addressing more policy domains than less-resourced jurisdictions. Over time, this could entrench policy divergence and constrain the ability of less-well-off jurisdictions to respond to constituent needs.
Bias Amplification
AI systems trained on existing law will replicate and potentially amplify historical biases in legal systems. If training data reflects patterns of discriminatory enforcement, AI will learn those patterns. The system has no moral judgment to resist bias.
For example, if historical criminal statutes exhibit racial disparities, AI trained on that history may generate new criminal legislation exhibiting similar patterns. This isn't because AI is intentionally racist; it's because AI learns statistical patterns from data, and data embeds historical discrimination.
The "Laundering" Problem
There's a subtle but serious risk that AI could be used to launder corporate or partisan influence through law. Consider: a lobbyist prompts an AI system to draft legislation favoring their client.
The legislator introduces it without fully reading it, trusting the AI. The AI's authority and apparent neutrality lend credibility to fundamentally partisan language. By the time stakeholders realize what the bill does, it's been passed.
Reduced Human Deliberation
Deliberation is central to democracy. Legislators debate bills, amendments are negotiated, compromises emerge through discussion. AI-drafted bills risk bypassing this process. If bills appear fully formed from a machine, legislative attention may diminish. Why debate what an AI has optimized?
This could accelerate legislative gridlock or, conversely, eliminate the deliberative friction that sometimes produces better policy. The consequences remain unclear.
REAL-WORLD IMPLICATIONS AND CASE STUDIES
Case Study 1: Arizona's Deepfake Election Bill (2023)
Representative Alexander Kolodin used ChatGPT to draft the definition of "deepfake" in legislation regulating deepfakes in elections. The bill passed unanimously in both houses. On one level, this is a success: technical language was precise, process was transparent (eventually), and the legislation addressed a genuine policy need.
But examine the deeper dynamic: Kolodin used AI to write law regulating a technology (deepfakes) by using another technology (generative AI) to do it. The bill stands as an early example of how AI creates policy about AI itself, often without deep human deliberation about what that policy should be.
Kolodin reported receiving no public blowback, suggesting legislators don't object to AI-drafted law if the final product is functional.
Case Study 2: Massachusetts Regulating AI with AI (2023)
Senator Barry Finegold and Representative Josh Cutler attempted to draft legislation regulating generative AI using ChatGPT itself. The process required extensive prompting, iteration, and human refinement. ChatGPT initially refused the task. When pushed, it generated workable language but with gaps and technical inaccuracies the humans had to fix.
Finegold explicitly kept a line where ChatGPT wrote: "Any errors or inaccuracies in the bill should not be attributed to the language model, but rather to its human authors." ChatGPT, with apparent awareness of the irony, was disclaiming responsibility.
The outcome: humans and AI partnered on the final bill. Neither alone would have produced adequate legislation. This partnership model, with AI as assistant rather than author, may represent the sustainable future of legislative AI use.
Case Study 3: Ohio's Administrative Law Overhaul (2020)
Ohio deployed AI tools for wholesale revision of state administrative law starting in 2020. This represents the most extensive use of AI in legislative drafting to date. Rather than drafting individual bills, Ohio used AI to systematically review and modernize entire bodies of administrative regulation.
Outcomes: standardization improved, inconsistencies were identified and resolved, and regulatory language became more coherent. However, because this occurred quietly without sustained public discussion, few outside government know about it. The success or failure of Ohio's approach remains understudied.
CRITICAL QUESTIONS WITHOUT ANSWERS
Unanswered Legal and Governance Questions
As AI drafts more legislation, fundamental questions remain unresolved:
Who is Legally Responsible for AI-Drafted Law? If an AI system generates legislative language with latent defects or unintended consequences, who bears legal responsibility? The legislator who introduced it? The staff member who used the tool? The software company? Current law provides no clear answer.
Can Legislation Drafted Without Full Legislative Understanding Be Democratic? If legislators pass bills they haven't fully read or understood, delegating comprehension to AI, is the result legitimate democracy or something else?
How Do We Prevent Algorithmic Manipulation of the Law? If AI is biased or poisoned with bad data, those biases become embedded in law affecting millions of people. Detection and correction are difficult. How do we prevent this?
What Happens When AI-Drafted Law Conflicts With Its Regulation? Already, legislators are using AI to draft bills regulating AI. What happens when AI-generated regulation of AI contains errors or bias?
How Do We Preserve Deliberation? Deliberation, meaning debate, negotiation, and compromise, is central to democratic legitimacy. Does AI undermine deliberation by making legislative output faster and more technical, reducing the space for democratic argument?
RECOMMENDATIONS: ESTABLISHING GUARDRAILS BEFORE IT'S TOO LATE
Based on current practice and risks identified, several recommendations emerge:
For Legislators
- Disclose AI involvement transparently. If AI contributed to drafting, say so. Include this in the legislative record and communicate it to colleagues.
- Personally review substantive portions. Don't rely on AI abstracts or summaries. Read enough of the bill to understand its intent and implications.
- Solicit broad deliberation before passage. Don't allow AI drafting to compress deliberation timelines. Allow stakeholders time to analyze and respond.
- Commission independent bias assessment for legislation likely to affect specific populations disproportionately.
For Governments
- Establish clear legal accountability for AI-drafted law. Create statutory language clarifying who bears responsibility when AI-drafted legislation produces unintended harms.
- Mandate AI disclosure in legislative processes. Require that bills contain metadata indicating whether and how AI was used in their drafting.
- Create audit frameworks for legislative AI systems. Government IT offices should review and test AI tools used for legislative work, assessing capabilities and limitations.
- Invest in legislative research capacity. Legislatures must hire staff with AI expertise capable of understanding and overseeing AI tools, not just using them.
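The disclosure mandate above could take the form of a machine-readable metadata record attached to each bill. The sketch below is purely hypothetical: the `AIDisclosure` fields and the `HB-1234` bill number are invented for illustration, and no jurisdiction currently mandates any such schema.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AIDisclosure:
    """Hypothetical disclosure record attached to a bill's legislative metadata."""
    ai_used: bool
    tools: list = field(default_factory=list)       # e.g. ["ChatGPT"]
    role: str = "none"                              # "drafting", "summarization", "analysis", or "none"
    sections_affected: list = field(default_factory=list)
    human_review: bool = True                       # did a human review the AI-drafted text?

def to_record(bill_id: str, disclosure: AIDisclosure) -> str:
    """Serialize the disclosure as JSON for inclusion in the legislative record."""
    return json.dumps({"bill_id": bill_id, "ai_disclosure": asdict(disclosure)}, indent=2)

# Example: a bill whose definitions section was AI-drafted, then human-reviewed.
record = to_record(
    "HB-1234",
    AIDisclosure(ai_used=True, tools=["ChatGPT"], role="drafting",
                 sections_affected=["definitions"], human_review=True),
)
print(record)
```

The design point is less the particular fields than the requirement itself: a structured, queryable record lets oversight bodies and the public audit AI involvement across an entire legislative session rather than relying on voluntary, after-the-fact admissions like Porto Alegre's.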
For the Public and Civil Society
- Demand transparency. Voters should ask legislators whether bills they introduced were AI-drafted. Build social pressure for disclosure.
- Scrutinize bills that pass with minimal deliberation. Complex legislation passing through legislatures rapidly suggests possible AI involvement without proper review.
- Support regulatory frameworks. Advocate for laws establishing guardrails around legislative AI use.
- Monitor outcomes. Track whether AI-drafted legislation produces better or worse policy outcomes compared to traditionally drafted law.
Fast Facts
Is AI Actually Writing Laws Right Now, or Is This Hypothetical?
AI is writing laws right now. This is not a future scenario. In 2023, a Brazilian city council unanimously approved a bill that ChatGPT had written; the councilor didn't disclose this until after the mayor signed it into law. Massachusetts legislators openly used ChatGPT to draft bills regulating generative AI itself. Arizona passed deepfake election legislation partially authored by AI. The U.S. House Office of the Clerk now uses AI to analyze how bills relate to existing law. Ohio has been using AI tools for wholesale revision of state administrative law since 2020.
How Does AI Change What Laws Look Like and How They Function?
AI fundamentally alters legislation in two ways:
First, breadth of expertise. Human legislators specialize—a healthcare expert knows little about energy regulation. Cognitive limits force this. AI has no such constraints. ChatGPT generates legislative text on specialty crop mechanization with the same facility as text on street lighting efficiency standards. An AI-assisted legislator can address ten policy domains simultaneously. This represents an unprecedented expansion of what legislators can attempt.
Second, complexity tolerance. Humans can only hold so much complexity in working memory. Laws are constrained to comprehensible detail levels. AI experiences no cognitive load. It instantly performs multistep reasoning across thousands of pages of statutory text, cross-references, and case law. AI can construct legislation with extraordinary internal complexity—intricate regulatory frameworks, granular exceptions, precise technical definitions, dense cross-references.
What Are the Biggest Democratic Risks If AI Continues Writing Laws Without Guardrails?
Currently, there is no legal requirement to disclose AI involvement in legislative drafting. Legislators can pass bills written by machines without voters knowing. Democratic legitimacy depends on understanding how laws are made. If citizens don't know machines are drafting legislation, they cannot assess whether the process is appropriate. This creates accountability gaps where legislators blame AI for poor drafting, voters cannot assign responsibility, and no one is accountable.