If you've been ignoring AI regulation because it sounds like a problem for big tech companies, you're partly right and partly wrong. Most of the rules being written now are aimed at the model developers — the OpenAIs and Anthropics — and the largest enterprise users. But pieces of the regulatory wave are starting to land directly on small and mid-sized businesses, and the businesses that aren't paying attention will be the ones caught off guard. Here's what's actually relevant.

The EU AI Act is the template — and parts already apply

The EU AI Act is the most comprehensive AI law in force, and even if you're not based in Europe, it matters. If you have European customers, employees, or vendors, parts of it reach you. The law classifies AI uses by risk level: most business uses are low or limited risk, carrying mostly transparency obligations. The high-risk categories — hiring, credit decisions, education, healthcare, certain government uses — carry a real compliance burden. If your AI use touches any of those areas and any European stakeholder, get specific advice now, not later. Penalties scale with global revenue: the most serious violations can draw fines of up to 7% of worldwide annual turnover.

U.S. rules are coming piecemeal

There's no federal AI law yet, but the patchwork is filling in. Colorado has passed its own AI Act focused on high-risk consumer-facing uses. California, New York, Texas, and others have AI-related bills moving through their legislatures. The FTC has signaled aggressive enforcement of existing consumer protection law against deceptive AI claims and discriminatory outputs. Sector regulators in finance, healthcare, and employment are issuing AI-specific guidance under their existing authority. The result: even without one big federal law, the rules are tightening in pieces. If you operate in multiple states, the compliance picture is already messy.

What to actually do this year

Three practical moves. First, write a one-page internal AI use policy: what's allowed, what isn't, and what data can and can't go into AI tools. This is the single most useful document you can produce, and most SMBs don't have one. Second, inventory the AI tools your business uses and what for — vendor names, use cases, what data they touch. When regulators ask, having that list is the difference between a one-page response and a six-month scramble. Third, identify whether any of your AI uses fall into a high-risk category — hiring, credit, healthcare, anything affecting individuals' rights or access. If they do, get advice from someone who knows the rules in your state and your customers' jurisdictions.

The honest caveat

Most SMBs aren't going to get hit by AI regulation in 2026. The risk you should actually plan for is reputational, not legal — using AI in ways customers find creepy, deceptive, or careless. Regulators move slowly. Customers and employees move fast, and they're watching. The businesses that handle AI thoughtfully now will avoid both kinds of trouble.

Don't panic. Don't ignore it either. A one-page policy and a list of your AI tools puts you ahead of 90% of small businesses. That's a low bar — and exactly why clearing it matters.