Ctrl+AI+Reg - 21 January 2026
Your shortcut to AI regulation, law and policy updates around the world.
AI Regulation Updates
In this issue:
Updates from Italy x South Korea, EU, UK (x2), Singapore, South Korea, Kazakhstan.
See more on my Global AI Regulation Tracker (English version | Chinese version)
Global / Joint
See new updates on Brazil and EU in the Grok AI regulatory response tracker.
🇰🇷🇮🇹 [19 January 2026] South Korea and Italy agree to strengthen cooperation in AI, chips: At the Cheong Wa Dae summit in Seoul, South Korea and Italy issued a joint statement agreeing to deepen cooperation in high-tech sectors, including AI, semiconductors and aerospace, while expanding ties in the defense industry and critical-mineral supply chains. On the sidelines of the summit, the two countries signed a memorandum of understanding on semiconductor cooperation between the Korea Semiconductor Industry Association and Italy's Association of Electrical and Electronic Industries. The memorandum aims to promote business cooperation and information sharing in the semiconductor sector, including advanced areas such as AI, and to strengthen semiconductor supply chains.
Europe
🇪🇺 [19 January 2026] EU looks to ban nudification apps following Grok controversy: The European Union is reportedly weighing a ban on "nudification" apps (AI tools used to create non-consensual explicit images) as part of a broader effort to regulate technologies that threaten individual rights and societal safety. The restriction could be implemented either by adding such apps to the list of prohibited practices under the EU AI Act during the Commission's annual review, or by relying on the existing "systemic risk" provisions that govern advanced general-purpose AI models. Because "non-consensual intimate images" were specifically identified as a systemic risk in recent compliance guidelines, the European Commission's AI Office may now have a formal regulatory basis to challenge providers of models such as X's Grok, holding developers of general-purpose models accountable for the harmful outputs their technology may facilitate.
