Chaotic AI regulations will collide with rising governance demands

Posted on: January 28, 2026


In 2026, regulation will be one of the biggest forces shaping the future of AI, yet it will also be one of the messiest. On one side, we see regulators pushing harder than ever for accountability inside organizations. In the U.S., the Securities and Exchange Commission (SEC) and state agencies are already forcing faster, clearer disclosures around AI usage.

Boards and executives are under growing pressure to show oversight, document their decision-making and prove that AI risks are being managed responsibly. Vendor risk assessments are also expanding to include AI governance, meaning companies can no longer treat AI as a “black box” tool. Compliance in this area will only become more resource-intensive, adding strain to already stretched governance teams. But outside of corporate walls, the regulatory picture is far more chaotic.

No single authority “owns” AI oversight, and the technology’s rapid spread is outpacing legislators’ ability to respond effectively. Consumer-facing tools highlight the problem: from unregulated content generation to platforms like Grok AI producing inappropriate responses, the lack of guardrails is creating societal risks, especially for younger generations. Global inconsistencies only make things worse; what one country restricts, another allows.

Add to that the privacy nightmare of users pasting sensitive data into public AI tools, with no clear framework for controlling where that information goes, and it’s easy to see why regulators are scrambling.

The reality is that 2026 will bring both increased pressure on organizations to demonstrate responsible AI governance and mounting complexity in how AI is used by the public. While corporate compliance will grow more structured and demanding, public regulation will remain fragmented and reactive. Organizations need to prepare for both worlds, navigating the formal obligations regulators impose while also understanding the reputational and operational risks that come from AI’s largely unregulated public use.

The information provided here is of a general nature and is not intended to address the specific circumstances of any individual or entity. In specific circumstances, the services of a professional should be sought. © 2025 Baker Tilly Advisory Group, LP.
