This article is part of a 3-part series being developed by Harrier. Learn more about Brian Birch here.
Part 1: Audit your organization and customers, then create simple AI usage policies
Healthy cultures experiment and try new things, and this is the time to do so. But you also need to govern how these AI tools are being used, and that means creating a simple set of policies and guidelines first. It starts with assessing the 'current state' of AI use in your small organization or business.
Audit your team first, then create a simple policy on AI usage
A little policy goes a long way toward helping you and your team know where the limits are in using AI. Conduct a quick survey of your team, and make it anonymous so no one feels threatened. The goal is to set the tone: make sure they understand that you are exploring AI and its current use so you can help shape strategic and safe adoption. It should be clear that there are no wrong answers, just an assessment of the current state.
Here are some quick sample questions to ask your small-staff team members about AI
To your knowledge, which of our software tools (e.g., our current AMS, CMS, or LMS) already use AI?
How often are you using AI in your daily routine?
- Never
- Several times per day
- At least once per hour
- As much as possible!
How are you currently using AI in your work?
What are you most excited about with the potential of AI as it relates to your work?
What are you most worried about?
What are you hearing from other contacts/colleagues in similar roles or organizations about AI use?
After your initial review, compile the results and create a simple report that outlines:
- Your current tech stack and how AI is being deployed and used (CRM, CMS, AMS, LMS, etc.)
- Tools being used internally by individuals or small teams that employ AI
- How AI is being used (for example, image generation, writing emails, research, etc.)
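If your survey tool can export responses as a CSV file, a short script can tally them into the report outlined above. This is a minimal sketch, assuming hypothetical column names (`frequency`, `tools`, `uses`); adapt it to whatever your survey tool actually exports.

```python
import csv
from collections import Counter

def summarize_survey(path):
    """Tally anonymous AI-usage survey responses into simple counts.

    Assumes a CSV export with hypothetical columns:
      frequency - how often the respondent uses AI
      tools     - comma-separated AI tools they use
      uses      - comma-separated use cases (writing, research, ...)
    """
    freq, tools, uses = Counter(), Counter(), Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            freq[row["frequency"].strip()] += 1
            tools.update(t.strip() for t in row["tools"].split(",") if t.strip())
            uses.update(u.strip() for u in row["uses"].split(",") if u.strip())
    return {"frequency": freq, "tools": tools, "uses": uses}

def print_report(summary):
    """Print a quick text report, most common items first."""
    for section, counter in summary.items():
        print(section.upper())
        for item, count in counter.most_common():
            print(f"  {item}: {count}")
```

Even a five-person team's answers, tallied this way, make it obvious which tools need a policy line first.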
Audit your industry and customers about AI perceptions and use
Every business and organization is different. A lot may depend on your industry, as perceptions on AI differ across cultures and industries. Your members or customers, depending on who they are and what they do, may have very strong opinions about their personal and organizational privacy, as well as the potential impact of AI. Here are some examples:
Healthcare
Privacy and legality are of the utmost importance in these spaces. Determining liability when an AI system makes an error in diagnosis or treatment adds a complex legal layer not present in traditional medicine. Mental health advocates are worried about the impact of AI on vulnerable populations like teens and young adults.
Retail
This industry is moving at light speed to deploy a wide variety of AI tools to grow sales, and may have a much higher tolerance for what counts as acceptable use versus too aggressive. Companies in this space are moving fast to gain a competitive edge, which can sometimes outpace the guardrails, potentially leading to missteps that damage their brand reputation. Some may have experienced this already, so opinions may vary widely.
Environmental
The rapid growth of AI concerns many environmentally focused industries and people, and they may have strong feelings and ideas about how AI may impact the natural world. Environmentally focused individuals and organizations may demand that companies prioritize sustainability principles in their AI development.
Customer culture and community
If you are a small local business or nonprofit, pay close attention to how your various customers and donors feel about AI and its use. When a business thrives on personal relationships (like a local cafe, a community bank, or a small nonprofit), the use of AI, such as a chatbot replacing human service or software replacing personal check-ins, may be seen as cold or dismissive. For a small local business, losing customer trust due to a data breach associated with a new AI tool can be catastrophic. Customers want to know if their personal data is being used for hyper-targeting outside of their local relationship.
Create a simple AI policy for your small staff team
After you understand the current state of use and your customers' attitudes about AI in general, it's time to craft a policy for AI use on your team. Start small and keep it as simple as you can; focus on making it something that provides clarity.
Things to include in your policy review:
Acceptable use of AI
Share examples and simple statements of what is acceptable use of AI. Keep these brief, as you want people to explore and try new things. The learning curve of AI isn't overcome by reading about it; it has to be experimented with to see the potential. Define things like:
- Recording and transcribing: When meetings should be recorded/transcribed with AI tools. For example, internal meetings vs. committee and board meetings etc.
- Syncing internal data: How and when internal emails should be shared or synced automatically with systems. For example, one-to-one staff emails syncing to profiles in the AMS so all staff can access and read, etc.
- Image generation: Some simple guidelines that help employees understand that AI-generated images, while fast and convenient, often contain mistakes, misspellings, and other potentially embarrassing errors. Even more important, these tools reflect the biases in their training data and may produce images skewed against or toward particular races, genders, or other personal characteristics.
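Even a small policy benefits from being written down in a form staff can check quickly. As a hypothetical sketch, the categories above could live in a simple lookup table that an internal wiki or tool reads from; the category names and rules below are placeholders to show the shape, not recommendations.

```python
# Hypothetical acceptable-use lookup. Categories and rules are placeholders;
# fill in whatever your own policy actually says.
AI_POLICY = {
    "recording": {
        "internal meetings": "allowed with attendee consent",
        "board meetings": "ask leadership before recording",
    },
    "email syncing": {
        "one-to-one staff emails": "do not auto-sync to shared AMS profiles",
    },
    "image generation": {
        "public-facing images": "human review required for errors and bias",
    },
}

def policy_for(category, situation):
    """Look up the policy line for a category/situation, with a safe default."""
    return AI_POLICY.get(category, {}).get(
        situation, "not covered -- ask before proceeding"
    )
```

The safe default matters as much as the rules: anything the policy doesn't cover should route to a conversation, not a guess.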
Unacceptable use of AI
Critically, it is often easier to provide guidelines on what not to do. Things to review and include in your policies and training:
Data security and privacy
Define what data is NOT authorized to be provided to AI-based tools. Your biggest potential risk may be sensitive information being entered into AI tools and later exposed publicly or through a breach. This is especially true for personally identifiable information, or PII (names, email addresses, mobile phone numbers, etc.). Make sure you are explicit about what information is NOT acceptable to put into AI-based tools, and make sure any 1099 contractors are also aware of your policies and can affirm they will be followed as part of your agreement with them.
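One lightweight safeguard is a pre-submission check that flags obvious PII patterns before text is pasted into an external AI tool. The sketch below is illustrative only: regex matching catches common formats like email addresses and US-style phone numbers, but the patterns shown are assumptions rather than an exhaustive PII definition, and no script replaces policy and training.

```python
import re

# Illustrative patterns only. Real PII covers far more than regex can catch
# (names, addresses, account numbers, health details, ...).
PII_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "US phone number": re.compile(
        r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"
    ),
    "SSN-like number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def flag_pii(text):
    """Return (label, match) pairs for anything that looks like PII."""
    hits = []
    for label, pattern in PII_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((label, match))
    return hits
```

A non-empty hit list is a prompt to stop and redact before submitting, not a guarantee that an empty one is safe.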
Accuracy and quality
Using AI tools to create content from flawed data or information is also a concern. Your policy should outline consistent double-checks and emphasize that, at a minimum, a human reviews all content and data at the end of the process.
Proprietary information and Intellectual Property (IP)
Be extremely cautious if your customers or members provide proprietary information, such as upcoming product launches, patents, new features, or new hires. Ensure staff are aware of the damage that could be done if this information is shared in the wrong way at the wrong time.
When in doubt, for small organizations using AI and implementing policy, share examples, have conversations, and ultimately remind your team:
“Humans check and review everything.”
Useful resources to explore
- Ethics of Artificial Intelligence
- Legal and Ethical Consideration in Artificial Intelligence in Healthcare: Who Takes Responsibility?
- Health advisory: Artificial intelligence and adolescent well-being
- The state of AI in [Current Year]: Agents, innovation, and transformation
- Beyond Efficiency: Small Businesses Look to AI for Competitive Edge (PayPal)
Frequently Asked Questions About AI Policy for Small Organizations
Why should small organizations create an AI usage policy?
A clear AI policy sets expectations, reduces risk, and ensures your team uses AI responsibly and creatively. It also builds trust with customers and partners, and you need to make sure you are aligned with the communities you serve and their perceptions of AI.
How do I audit my small staff team about their AI use?
Start with a quick survey. Ask your team what software or platforms they use that may include AI features, like CRMs, CMSs, or automation tools. Communicate that the goal is to help them use the tools safely, and consider making it anonymous. Make sure they are comfortable sharing how they are using AI, what excites them, and what concerns them.
What should be included in an AI usage policy?
Focus on acceptable and unacceptable uses, data privacy rules, accuracy checks, and how AI-generated content should be reviewed by humans before publishing.
How do I address employee concerns about AI?
Encourage open, anonymous feedback during your audit. Make clear that the goal is exploration and safety, not punishment or control.
What are common AI risks for small businesses?
Data privacy breaches, biased or mistaken information, and overreliance on AI-generated content can all damage your reputation. The best defense is human oversight paired with good policy and training.
How often should we review our AI policy?
Create a simple policy first, and keep it front and center with your team. Use it as a training tool and revisit it regularly; AI tools evolve quickly, so don't put this on the shelf to get dusty.
Further resources used in the research of this material:
- The Great AI Divide: Are Small Businesses Being Left Behind? (G2)
- How to create an AI policy (Charity Digital)
- Grassroots and non-profit perspectives on generative AI (JRF)
- Right-sizing AI governance: Starting the conversation for SMEs (IAPP)
- AI Acceptable Use Policy Template / Worksheet
- Template: Acceptable Use of AI Tools in the Nonprofit Workplace
- Governance Tools for SMEs
- Acceptable Use of Generative AI Tools [Sample Policy] (Fisher Phillips)
- How Nonprofits Can Develop an AI Policy (Hedgeman Law)
- Navigating the NIST AI Risk Management Framework (OneTrust)
- AI Adoption in SMBs vs Enterprises (Big Sur AI)
- Why large companies can't match small business AI innovation (Enterprise Nation)
- Top 6 AI Guidelines For Associations To Follow (Sidecar AI)
- Ethical concerns a barrier to use of AI tools in fundraising (University of York)
- Integrating AI in Small Businesses (Innovate Carolina)
- Ethical Issues Resulting from Using AI in Fundraising (Hilborn Charity eNEWS)
- Artificial Intelligence for Nonprofits (Dataro)
- NIST AI Risk Management Framework (Palo Alto Networks)
- OECD AI Principles overview
- AI principles - OECD
- What is AI transparency? (Zendesk)
- AI Policy Template - NTEN
- Artificial Intelligence 2025 Legislation (NCSL)
- Responsible AI for Professional Associations (Cimatri)
- AI Policy Transparency & Readability (Promevo)
- AI and governance (ACEVO)
- Case Study: Nonprofits Leveraging Microsoft 365 Copilot (TechSoup)
- AI for small business (SBA)
- How to Successfully Adopt AI (OSIbeyond)