My 4 predictions for AI regulatory culture in 2026
A deep dive into the emerging ideas, behaviours and culture flowing from AI regulation in 2026.
It’s that time of the year again when I share my predictions for AI gleaned from patterns tracked in my Global AI Regulation Tracker (see last year’s wrap below).
But with a slight twist this time. Rather than predicting what new AI regulation will come out in 2026, I’m pivoting into a more nuanced (+ understudied) angle: AI regulatory culture. Basically, looking into the ‘zeitgeist’ (i.e. the changing behaviours, sentiments and norms) driving AI regulatory thinking and compliance approaches in specific markets (rather than simply chasing the ‘next new law’). It’s a fascinating crossover of law, geopolitics, economics, demographics and culture, which deserves its own article one day.
Recapping AI in 2025, the zeitgeist of the year was quite clearly about sovereignty. We had the Trump 2.0 tariffs, which shook up the global order and forced countries to seriously rethink supply chain resilience (with AI being no exception). Added to the mix was the launch of DeepSeek-R1, which kicked off a new chapter in the US-China AI rivalry, spiralling into a high-stakes standoff over Nvidia export controls and a constant game of one-upmanship between American and Chinese labs in the name of tech sovereignty. Sovereignty also played out in the buildout of “AI corridors” across the world, from the US-led Pax Silica to China’s “AI+” International Cooperation. These events, among others, reflect a shifting perspective of AI as no longer just an emerging science, but as critical infrastructure that needs to be secured within domestic borders and aligned with sovereign cultural values. Common buzzwords along those lines are “AI factories”, “growth zones”, “full stack autonomy”, “techno-nationalism”, etc.
I expect this sovereignty theme to continue into 2026 (and even until 2030), along with four specific cultural trends:
Executive Order 14365 ramps up regulatory arbitrage practices across the US.
EU AI compliance culture pivots towards the Spanish interpretation.
Amended cybersecurity law recalibrates China’s tech culture.
Cultural alignment as the soft new frontier of AI sovereignty.
A quick note on the drafting process. This article was part of a fun ‘vibe research’ experiment where I ran Gemini 3 Deep Thinking on my Global AI Regulation Tracker to produce some original and interesting predictions. I’ve posted my journey on LinkedIn: part 1 and part 2 (including my own mark-up and comments on the AI-generated article). For this article, I decided to only use the AI-generated article as a reference, and wrote this Substack article manually from scratch (with the assistance of various AI tools for re-styling drafts and verifying statements).
1) Executive Order 14365 ramps up regulatory arbitrage practices across the US
Let’s start with the elephant in the room. In December 2025, President Trump signed Executive Order 14365, “Ensuring a National Policy Framework for Artificial Intelligence”, which calls for replacing the fragmented patchwork of state laws with a “minimally burdensome” national standard in order to “remove barriers to United States AI leadership”. Specifically, it orders the establishment of an AI Litigation Task Force within the Department of Justice (DOJ) to challenge statutes such as the Colorado AI Act (SB 24-205) and California’s SB 53 on the grounds that these state AI laws unduly burden interstate commerce.
The idea of AI regulatory simplification (or ‘de-regulation’, depending on how you see it) has been a well-known goal of the Trump 2.0 administration. They initially tried to achieve this through the legislative process by sneaking a 10-year moratorium on state-based AI laws into the Budget Reconciliation Bill (a.k.a. the "One Big Beautiful Bill Act"), but that was struck out at the last minute.
So this time President Trump is taking a bolder and more direct approach through his executive order powers. Executive orders are binding on federal agencies, and exert major influence on regulatory enforcement culture. Earlier in the year, President Trump issued Executive Order 14179, "Removing Barriers to American Leadership in Artificial Intelligence", which mandates a policy “to sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security”. The cultural impact of that order recently played out when the Federal Trade Commission (FTC), although it was not specifically called out in the order, set aside its prior ruling in a lawsuit against the AI company Rytr LLC after coming to the view that its original order would have unduly burdened AI development, contravening Executive Order 14179.
Executive Order 14365 is a particularly big deal because it’s effectively setting the stage for a ‘federal v state’ standoff, with the federal government boldly testing the constitutional limits of state-led tech oversight. “Contentious regulatory simplification” is how I would describe this incoming period.
While this makes for interesting constitutional law questions, it will unfortunately trap the market in a “Schrödinger’s cat” type of purgatory, where businesses have to navigate state-level statutes and calibrate their compliance efforts based on how much they're betting that the relevant state law will ‘live or die’ under DOJ challenge.
To be fair, ‘regulatory arbitrage’ is nothing new in the US. Many businesses take advantage of discrepancies between federal/state or different agency rules to manage regulatory risk. That being said, I can see Executive Order 14365 being a major inflection point that ramps up that regulatory arbitrage culture as follows.
Risk-averse companies (especially in regulated industries like healthcare or finance) might gravitate toward states with minimal or no AI-specific laws, while aggressive startups will deliberately plant themselves in states with strict laws they believe will be struck down—essentially using regulatory instability as a competitive moat against slower-moving incumbents who can’t stomach the compliance uncertainty.
Legal teams will start making calculated decisions about which state laws to fully comply with versus which to treat as “provisional” requirements. This isn’t about breaking the law, but rather about resource allocation under uncertainty. For example, a company might invest heavily in compliance infrastructure for state laws they believe will survive (or can’t afford to ignore due to their customer base), while maintaining only minimal, easily reversible compliance for laws they think have a high probability of invalidation. The legal services market will also adapt, with law firms offering “challenge probability ratings” for different state provisions, similar to how credit rating agencies assess bond risk: e.g. “Colorado’s developer liability provisions have a X% chance of surviving challenge, California’s consumer notice requirements maybe Y%.” (I sketch this expected-cost logic in a quick example at the end of this list.)
A rise of new insurance products specifically designed for “regulatory transition risk”, i.e. policies that cover companies if they face enforcement under state laws that are later invalidated, or conversely, if they relied on a law being struck down and it survives.
B2B AI contract negotiations will increasingly home in on “change in law” provisions, and allocate risk based on which party is better positioned to absorb regulatory changes.
Paradoxically, this uncertainty might actually encourage more aggressive state-level AI regulation in progressive states, not less. The logic is that states like California, Massachusetts, or New York may calculate that even if some provisions get struck down, the litigation process itself will take years—meaning they get 3-5 years of de facto enforcement before any federal resolution. They’re willing to bet that the policy benefits of early regulation outweigh the risk of partial invalidation. Meanwhile, they’ll draft laws with severability clauses that ensure if one provision falls, the rest survive.
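To make the ‘resource allocation under uncertainty’ point concrete, here is a minimal, hypothetical sketch in Python. Every figure and the survival probability are invented for illustration (they don’t come from any real ratings service); it simply shows the expected-cost arithmetic a legal team might run when weighing full versus ‘provisional’ compliance:

```python
# Hypothetical back-of-the-envelope sketch; all figures are invented.
# Compares full compliance against minimal "provisional" compliance with a
# state AI law that may be invalidated following a DOJ challenge.

def expected_costs(p_survives: float, full_cost: float,
                   minimal_cost: float, penalty_if_enforced: float) -> dict:
    """Expected spend under each strategy, given a law-survival probability."""
    # Full compliance costs the same whether or not the law survives.
    full = full_cost
    # Minimal compliance risks enforcement only if the law survives.
    minimal = minimal_cost + p_survives * penalty_if_enforced
    return {"full_compliance": full, "provisional_compliance": minimal}

# Example: a provision rated a 40% chance of surviving challenge.
print(expected_costs(p_survives=0.4, full_cost=2_000_000,
                     minimal_cost=300_000, penalty_if_enforced=3_000_000))
# {'full_compliance': 2000000, 'provisional_compliance': 1500000.0}
# Under these invented numbers, "provisional" compliance wins on expected cost.
```

The takeaway isn’t the numbers; it’s that the assumed survival probability, rather than the text of the law itself, ends up driving the compliance spend.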
2) EU AI compliance culture pivots towards the Spanish interpretation
This one might be a stretch but hear me out. While the EU AI Act has already commenced, its provisions on high-risk AI (which are the crux of the law) don’t come into force until 2 August 2026 (for Annex III use cases like critical infrastructure, HR, employment, etc) or until 2 August 2027 (for Annex I use cases that are already regulated under other EU regulation). The original plan was that official technical standards (the "harmonised standards") from the CEN and CENELEC Joint Technical Committee 21 (JTC 21) would be ready to provide a "presumption of conformity" for organisations, meaning that if you follow these standards, you're legally assumed to meet the Act's requirements.
However, development of these standards has lagged to the point that the EU Commission has proposed, in its Digital Omnibus amendments, to push back the high-risk AI obligations to, at the latest, 2 December 2027 for Annex III and 2 August 2028 for Annex I. These amendments remain subject to political trilogue negotiations.
In this ‘standards vacuum’, ISO standards (e.g. ISO/IEC 42001, the AI Management System standard) will continue to normalise as the de facto standard for doing AI in the EU, with potential implications for market dynamics including:
VC term sheets will increasingly mandate ISO certification at funding milestones, akin to SOC 2 for SaaS, while M&A valuations factor it into pricing multiples.
A “certify or die” environment emerges: lapsed certification triggers material contract breaches, barring vendors from high-stakes EU tenders.
Insurers treat ISO as a prerequisite for coverage, imposing 3–5x premium hikes or claim denials on non-certified firms; CFOs prioritise it economically, with brokers bundling certification services.
While the above is not necessarily a new trend, what could be refreshing in 2026 is the Spanish interpretation of ISO spearheading EU AI compliance culture. Spain was the first EU member state to launch a fully operational AI supervisor (AESIA). And in December 2025, AESIA released practical guides and compliance templates which operationalise high-risk obligations against ISO-style controls. For example, AESIA Guide 04 expressly adopts ISO 9001 (Quality Management Systems) as its structural skeleton.
Is there anything unique or exclusive to the Spanish interpretation that other EU member states couldn’t arrive at themselves? Not necessarily. Nor is that my point. In my view, the cultural significance here is that:
The market will look towards Spain first for case studies, business testimonials, regulatory comments, etc. for intel on how to practically work with ISO in the context of the EU AI Act. The nuance is that there are already plenty of guides recommending that businesses map the EU AI Act to ISO controls, but they don’t necessarily go into depth on the practical procedures behind the ISO controls. The AESIA guides fill that latter gap.
Optically, the fact that AESIA is a dedicated, sector-agnostic AI supervisory body also makes its guides more authoritative relative to comparable AI Act guides from privacy and data protection regulators in other member states (e.g. Germany, the Netherlands, France).
Spain-based practitioners who have experience working with and critiquing the AESIA guides will also naturally carry more street credibility and get the first and last say in policy discussions.
Spain could also export its thought leadership to Spanish-speaking markets in South America, particularly countries like Argentina, Chile and Peru, which have local AI bills that loosely follow the EU AI Act’s structure.
I know from my own experience sitting on policy roundtables that it’s these little ‘on-the-ground’ things, talked about in boardrooms and committee meetings, that ultimately feed into high-level policy decisions.
Taking a step back, it’s looking like the EU AI landscape will head towards different ‘centres of power’, with France and Germany as the main innovation hubs (due to their heavy investment in "national champion" startups like Mistral and Aleph Alpha, combined with robust industrial research bases), Ireland and Italy as the enforcement hubs (based on their historic GDPR enforcement actions), and Spain as the AI assurance hub. There’s so much nuance to unpack here - probably for another article!
3) Amended cybersecurity law recalibrates China’s tech culture
There’s so much to talk about on China. But if I had to choose one specific angle, it would be the cultural impact of China’s recent amendment to the Cybersecurity Law (CSL), set to take effect from 1 January 2026.
According to this brilliant summary, the CSL amendment “introduces the first revisions to the law since 2017, expanding state support for AI development while tightening compliance obligations for network operators and critical information infrastructure. The amendment also aligns cybersecurity provisions with the PIPL and increases penalties for violations involving data handling, emergency response, prohibited content, and cross-border data transfers”. Together, these changes signal “a more rigorous, integrated regulatory environment for companies operating in China’s digital ecosystem”.
Traditionally, Chinese tech companies have operated in a regulatory environment that was often reactive, negotiable, and relationship-driven. Enforcement was selective, and the implicit understanding was that innovation came first, with compliance smoothed over through guanxi (relationships) or post-hoc adjustments.
In China, AI governance tends to be valued and practised through the lens of cybersecurity. There’s partly a linguistic reason for this, as ‘safety’ and ‘security’ are both the same word in Chinese (安全 - ān quán). I’ve written a post about the practical significance of this, being that the Chinese see "AI 安全" risks as broadly covering "cyberspace risks" (including misinformation, which is treated as a cyberspace risk rather than a model risk), "cognitive risks" (e.g. information cocoons, cognitive warfare), and "social order risks" (i.e. challenging the traditional social order).
Absent an AI-specific law in China, the CSL has been the primary legislative instrument for setting the country’s AI governance and compliance culture. Yes, I know, China already has administrative regulations and measures on facial recognition, recommendation algorithms, deep synthesis and generative AI. But there’s a big difference between laws (i.e. passed by the National People’s Congress) and administrative regulations (issued by the State Council) in China, with the latter being subordinate to the former.
So when the CSL changes, the market pays attention and recalibrates to it. And the latest CSL amendments introduce some hefty fines for violations of cybersecurity obligations:
Serious violations: a fine of between RMB 500,000 and RMB 2 million (US$70,000 to US$280,000) for the company, plus a fine of between RMB 50,000 and RMB 200,000 (US$7,000 to US$28,000) for the directly responsible supervisors or other personnel.
Particularly serious violations: a fine of between RMB 2 million and RMB 10 million (US$280,000 to US$1.4 million) for the company, plus a fine of between RMB 200,000 and RMB 1 million (US$28,000 to US$140,000) for the directly responsible supervisors or other personnel.
No longer is it just about corporate fines; there’s now personal exposure for executives and technical leads. The cultural impact cannot be overstated. I actually visited China twice this year to meet Chinese tech companies and explore the culture. The amended CSL was top of mind, and I could sense a more “liability-aware” culture taking hold.
For 2026, I’m picturing a new reality where:
Compliance and legal officers gain more ‘status’ within the organisation, becoming the gatekeepers with veto power over product launches. Expect a rise of “defensive” innovation strategies requiring extensive internal reviews, pre-deployment audits, and detailed documentation trails before any feature ships. This marks a shift from the engineer-led, product-first hierarchy that has dominated Chinese tech.
A boom in China’s compliance industry (think law firms, certification bodies, audit services, and “AI controllability” platforms) will turn compliance credentials into competitive advantages for winning government contracts and enterprise clients.
The US-China AI race moves from a battle of raw scaling to a competition between two fundamentally different operating philosophies. While the US ecosystem prioritises rapid, decentralised experimentation, China’s new regulatory floor will likely trade some "frontier" spontaneity for systemic stability and algorithmic efficiency. This creates a 2026 reality where Chinese firms may lag in unconstrained consumer AI but lead in "controllable", security-first industrial applications and cost-effective modelling.
4) Cultural alignment as the soft new frontier of AI sovereignty
This prediction is dedicated to the rest of the world outside of the US-China-EU trinity. In short, the AI sovereignty trend will shapeshift into a worldwide race around “culturally aligned AI” (i.e. designing and developing AI systems to operate in accordance with the diverse human values, ethical principles, societal expectations, and norms of the specific cultures they interact with).
It’s been a hot topic for a while, with my favourite infographic being the one below which shows the cultural alignment of various LLMs on a map.
There’s an emerging spectrum (across non-English speaking markets) ranging from soft ethical guidelines to hard-coded legal requirements to align AI systems to local culture and heritage:
In Africa, the African Union Continental Strategy (2024) advocates “decolonial” principles like inclusive datasets for Ubuntu and indigenous languages, proposing transparency mechanisms but without formalised “algorithmic registers” yet.
In the Middle East, the UAE AI Charter (2024) and SDAIA AI Ethics Principles (2023) promote cultural fairness through guidelines encouraging Arabic linguistic support and value-aligned bias checks in models like Jais, influencing voluntary procurement standards but without mandatory enforcement for sovereign deployment.
In South America, Brazil’s Artificial Intelligence Act (PL 2338/2023), approved by the Senate in December 2024 and pending Chamber of Deputies review as of late 2025, embeds cultural and diversity protections in its risk-based framework. It requires high-risk AI systems to uphold principles of equality, non-discrimination, plurality, diversity, and cultural rights protection (Art. 2), alongside equity, inclusion, and accessibility for vulnerable groups (Art. 3), reflecting Brazil’s multiethnic and multilingual context.
In South East Asia, Vietnam’s AI Law (passed 10 December 2025, effective 1 March 2026) adopts a risk-based framework with safeguards for high-risk systems in critical sectors. Under Decision No. 1131/QD-TTg (June 2025), Vietnamese cultural, historical, and linguistic data—prioritising large language models—falls within strategic technologies. The Digital Technology Industry Law (Article 6) bans AI systems harming national interests, human rights, or Vietnam's cultural traditions.
Honestly, this was the trickiest prediction to write because I can’t yet pinpoint a specific practical impact. But so far, I think this points to a regulatory culture shift from “compliant” to “compliant and localised”.
“Cultural alignment” might even be the phrase of the year at 2026 AI summits, and could be weaponised as a ‘soft protectionist’ tool (even post-tariff wars) to favour local champions who have built AI models from the ground up within their own culture.
Overseas AI companies (whether from the US, China or the EU) that are unable to reflect local linguistic, historical, or value contexts may face growing restrictions on deployment, certification, or public sector procurement, unless they partner with a local venture to provide “locally white-labelled” AI services.
It could mean a new market for ‘cultural auditing’ and ‘culture-as-a-service’ (e.g. linguists and culture experts pivoting into the AI audit and governance space).
In an interesting way, ‘cultural alignment’ becomes the new nuanced frontier for AI sovereignty efforts.
Views my own.
Want more?
Global AI Regulation Tracker (an interactive world map that tracks AI regulations around the world).
Note2Map (a platform to build and launch your own interactive world map tracker).






