Preparing for the Next Wave of AI Regulation: Practical Steps for Small and Mid-Sized Businesses
By Holon Law Partners
Artificial intelligence is now embedded in daily business operations—whether through automated marketing tools, hiring platforms, predictive analytics, or fully integrated proprietary models. As adoption accelerates, 2025 is shaping up to be a defining year for regulatory clarity. Governments at the state, federal, and international levels are rapidly introducing AI risk, transparency, and accountability requirements.
For small and mid-sized companies that rely on AI but lack in-house compliance teams, preparing for this evolving landscape is essential. Fortunately, proactive steps can reduce risk and position businesses to adopt AI safely, ethically, and competitively.
1. Map Where AI Is Used in the Organization
Many organizations underestimate how much AI they already use. AI “inventory mapping” is now considered a foundational practice in governance frameworks such as the NIST AI Risk Management Framework.
Key components of an AI inventory:
- Tools used internally (e.g., CRM automation, email scoring, HR screening)
- Vendor-provided systems that incorporate AI
- Custom or semi-custom models built in-house
- Data being processed, including personal or sensitive categories
- High-risk use cases such as hiring, creditworthiness, or safety-related automation
Even a simple spreadsheet can serve as an effective starting point. The goal is visibility—knowing what’s in use, how it functions, and which teams rely on it.
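For teams that want something slightly more structured than a shared spreadsheet, the sketch below shows one way an inventory might be captured in a short script. The field names and the `ai_inventory.csv` file are illustrative assumptions, not drawn from any regulatory standard.

```python
import csv

# Illustrative inventory fields; these are examples, not a required schema.
FIELDS = [
    "tool_name",          # e.g., CRM automation, resume screener
    "owner_team",         # which team relies on it
    "vendor_or_inhouse",  # vendor-provided vs. custom-built
    "data_categories",    # personal, sensitive, none
    "use_case",           # hiring, marketing, analytics, etc.
    "risk_tier",          # to be filled in during risk review
]

inventory = [
    {
        "tool_name": "Email scoring add-on",
        "owner_team": "Marketing",
        "vendor_or_inhouse": "vendor",
        "data_categories": "personal",
        "use_case": "lead prioritization",
        "risk_tier": "",  # assessed later (see risk classification below)
    },
]

# Write the inventory to a CSV that non-technical stakeholders can open.
with open("ai_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(inventory)
```

The format matters far less than keeping the record current and readable for non-technical stakeholders.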
2. Evaluate AI Risk Based on Use Case, Not Technology
Most regulatory frameworks (such as the EU AI Act and state-level U.S. proposals) follow a risk-based approach. Rather than banning AI broadly, they impose heightened requirements on high-risk or sensitive uses.
Factors that elevate AI risk include:
- Impacts on individuals’ rights (e.g., hiring, housing, credit, healthcare access)
- Use in safety-critical environments
- Reliance on sensitive personal data
- Automated decision-making without human oversight
Businesses can begin classifying their tools now. A three-tier framework—low, medium, high—offers a practical, intuitive structure.
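As a rough illustration of how those factors might translate into a first-pass triage, consider the sketch below. The use-case list, tier names, and thresholds are assumptions for demonstration; actual classification must track the specific statute or framework that applies.

```python
def classify_risk(use_case: str, sensitive_data: bool, human_oversight: bool) -> str:
    """Rough three-tier triage based on the factors listed above.
    The rules here are illustrative, not a legal determination."""
    # Use cases that commonly trigger heightened requirements
    HIGH_RISK_USES = {"hiring", "housing", "credit", "healthcare", "safety"}
    if use_case in HIGH_RISK_USES:
        return "high"
    # Sensitive data or fully automated decisions elevate otherwise routine uses
    if sensitive_data or not human_oversight:
        return "medium"
    return "low"

# Example: an email-scoring tool using personal data, with human review
print(classify_risk("marketing", sensitive_data=True, human_oversight=True))  # medium
```

A helper like this is a screening aid, not a legal conclusion; borderline results should go to counsel for review.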
3. Strengthen Vendor Management and Contract Terms
Most companies rely heavily on third-party AI vendors. As regulation increases, contractual protections become essential.
Legal teams should consider:
- Transparency provisions: clarity on what the model actually does
- Data use restrictions: how the vendor may access, store, and train on your data
- Security obligations: breach notification, encryption, and access controls
- Audit and assessment rights: essential for high-risk use cases
- Indemnification: coverage for claims arising from misuse or model failures
Well-drafted commercial agreements can significantly mitigate exposure.
4. Implement Human Oversight and Document Key Decisions
One of the clearest themes emerging across regulatory bodies is the requirement for meaningful human oversight. AI cannot fully replace human judgment, especially for decisions that affect individuals’ rights or carry operational risks.
Best practices include:
- Establishing a human-in-the-loop review for automated decisions
- Documenting review processes
- Creating escalation paths when model outputs appear inconsistent or harmful
- Logging model performance issues
Documentation doesn’t need to be burdensome; consistency matters more than complexity.
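One lightweight way to keep that documentation consistent is a structured review log. The sketch below assumes a simple JSON-lines file and illustrative field names; any format that captures who reviewed what, when, and with what outcome serves the same purpose.

```python
import json
from datetime import datetime, timezone

def log_review(tool: str, decision: str, reviewer: str,
               outcome: str, notes: str = "") -> dict:
    """Append a human-review record to a JSON-lines audit log.
    The schema is an illustrative sketch, not a regulatory requirement."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,          # which AI system produced the output
        "decision": decision,  # what the model recommended
        "reviewer": reviewer,  # who reviewed it
        "outcome": outcome,    # approved / overridden / escalated
        "notes": notes,        # e.g., why an output looked inconsistent
    }
    with open("ai_review_log.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

log_review("resume_screener", "reject", "j.doe", "overridden",
           "Model flagged a qualified candidate; escalated to HR lead.")
```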
5. Address Data Privacy and Retention Standards
AI tools thrive on data, but modern privacy regulations require organizations to justify and limit data use.
Companies should ensure:
- A clear lawful basis for processing personal data
- Reasonable retention periods
- Policies for model training and retraining
- Mechanisms to respond to consumer access, deletion, and correction rights
Where multiple jurisdictions are involved, harmonizing policies across regions can streamline compliance and reduce operational friction.
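As a small illustration of how a retention period can be enforced in practice, the sketch below flags records held past a cutoff. The 365-day period is an assumption for demonstration; actual retention periods depend on the lawful basis, jurisdiction, and business need.

```python
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 365  # illustrative; set per legal basis and jurisdiction

def overdue_records(records: list[dict]) -> list[dict]:
    """Return records held longer than the retention period.
    Each record is expected to carry a 'collected_at' datetime."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
    return [r for r in records if r["collected_at"] < cutoff]

# Example: one stale record and one fresh record
sample = [
    {"id": 1, "collected_at": datetime(2023, 1, 15, tzinfo=timezone.utc)},
    {"id": 2, "collected_at": datetime.now(timezone.utc)},
]
for r in overdue_records(sample):
    print(f"Record {r['id']} exceeds retention period; review for deletion.")
```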
6. Train Employees on Responsible AI Use
Employee training may be the most impactful step a business can take. Many incidents of AI-related harm result not from model failure but from human misuse.
Training should cover:
- Appropriate vs. restricted use cases
- How to input and handle sensitive or confidential data
- Identifying bias or irregular outputs
- Understanding transparency obligations
- Reporting pathways for misuse or concerns
An annual training cycle is typically sufficient for small and mid-sized businesses, supplemented by targeted updates when new laws take effect.
7. Develop an AI Governance Policy
A written policy provides clarity and structure. It does not need to be overly complex—start with a short, accessible internal document.
Elements to include:
- The company’s principles for responsible AI
- Definitions and scoping
- Inventory and oversight procedures
- Risk classifications
- Data requirements
- Vendor management guidelines
- Escalation and documentation processes
A well-crafted policy signals organizational maturity and readiness for regulatory evolution.
Final Thoughts
AI innovation offers tremendous benefits, but those benefits come with responsibility. As the regulatory landscape sharpens in 2025 and beyond, small and mid-sized businesses can take practical, manageable steps today toward safe, compliant, and ethical AI adoption.
With clear governance, thoughtful oversight, and strong contractual foundations, organizations can harness AI confidently while reducing risk and preparing for the future.
