This article is part of a 3-part series being developed by Harrier. Learn more about Brian Birch here.
Healthy cultures experiment, and this is the time to do so. But you also need to govern and oversee how these AI tools are being used, and that means creating a simple set of policies and guidelines first. That starts with assessing the ‘current state’ of AI use in your small organization or business.
A little policy goes a long way toward helping you and your team know where the limits are in using AI. Conduct a quick survey of your team, and make it anonymous so no one feels threatened. The goal is to set the tone: you are exploring AI and its current use so you can shape strategic, safe use going forward. Make it clear that there are no wrong answers, just an assessment of the current state.
To your knowledge, what software tools (your current AMS, CMS, LMS, etc.) are already using AI?
How often are you using AI in your daily routine?
How are you currently using AI in your work?
What are you most excited about with the potential of AI as it relates to your work?
What are you most worried about?
What are you hearing from other contacts/colleagues in similar roles or organizations about AI use?
After your initial review, compile the results and create a simple report of what you found. If your survey tool exports responses to a spreadsheet, a short scripted tally (sketched below) can make this quick.
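For example, here is a minimal sketch of that tally in Python, assuming a CSV export. The file name and column names ("frequency", "tools") are hypothetical; match them to whatever your form tool actually produces.

```python
# Hypothetical tally of anonymous AI-use survey responses exported as a CSV.
# Column names ("frequency", "tools") are illustrative, not a standard.
import csv
from collections import Counter

def summarize_survey(path: str) -> None:
    frequency = Counter()  # answers to "How often are you using AI?"
    tools = Counter()      # tools mentioned, semicolon-separated in the export
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            frequency[row["frequency"].strip().lower()] += 1
            for tool in row["tools"].split(";"):
                if tool.strip():
                    tools[tool.strip().lower()] += 1

    print("How often is AI used?")
    for answer, count in frequency.most_common():
        print(f"  {answer}: {count}")
    print("\nTools already in use:")
    for tool, count in tools.most_common():
        print(f"  {tool}: {count}")

summarize_survey("ai_survey_responses.csv")
```

Even a rough count like this gives you a snapshot of who is using what, which is all the report needs at this stage.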
Every business and organization is different. A lot may depend on your industry, as perceptions of AI differ across cultures and industries. Your members or customers, depending on who they are and what they do, may have very strong opinions about their personal and organizational privacy, as well as the potential impact of AI. Here are some examples:
Healthcare and mental health: Privacy and legality are of the utmost importance in these spaces. Determining liability when an AI system makes an error in diagnosis or treatment adds a complex legal layer not present in traditional medicine, and mental health advocates worry about the impact of AI on vulnerable populations like teens and young adults.
Sales and marketing: This industry is moving at light speed to adopt a wide variety of AI tools to grow sales, and may have a much higher tolerance for what counts as acceptable use versus too aggressive. Companies in this space are moving fast to gain a competitive edge, which can outpace the guardrails and lead to missteps that damage brand reputation. Some may have experienced this already, so opinions may vary widely.
Environmental and sustainability: The rapid growth of AI concerns many environmentally focused industries and people, who may have strong feelings and ideas about how AI could impact the natural world. They may demand that companies prioritize sustainability principles in their AI development.
If you are a small local business or nonprofit, pay close attention to how your various customers and donors feel about AI and its use. When a business thrives on personal relationships (like a local cafe, a community bank, or a small nonprofit), uses of AI such as a chatbot replacing human service or software replacing personal check-ins may come across as cold or dismissive. For a small local business, losing customer trust due to a data breach associated with a new AI tool can be catastrophic, and customers want to know whether their personal data is being used for hyper-targeting outside of the local relationship.
After you understand the current state of use and your customers' attitudes toward AI in general, it's time to craft an AI use policy for your team. Start small and keep it as simple as you can; focus on making it something that provides clarity.
Share examples and simple statements of what acceptable use of AI looks like. Keep these brief, as you want people to explore and try new things. The learning curve of AI isn't overcome by reading about it; it has to be experimented with to see the potential. Define what acceptable use looks like for the tools and tasks your team touches every day.
Critically, it's often easier to provide guidelines for what not to do. Things to review and include in your policies and training:
What data is NOT authorized to be provided to AI-based tools. Your biggest potential risk may be sensitive information being put into AI tools and then exposed publicly or through a breach. This is especially true for what is called PII, or Personally Identifiable Information (names, email addresses, mobile phone numbers, etc.). Be explicit about what information is NOT acceptable to put into AI-based tools, and make sure any 1099 contractors are also aware of your policies and can affirm, as part of your agreement with them, that the policies will be followed. (A simple pre-flight check, sketched after this list, can help catch obvious PII before it leaves your hands.)
Using AI tools to create content from flawed data or information is also a concern. Your policy should build in consistent double-checks and emphasize that, at a minimum, all content and data get a human review at the end of the process.
Be extremely cautious if your customers or members provide proprietary information, such as upcoming product launches, patents, new features, or new hires, and ensure staff understand the damage that could be done if this information is shared in the wrong way at the wrong time.
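As a practical backstop to the data rules above, a small script can scan a draft for obvious PII before anyone pastes it into an external AI tool. This is a minimal sketch with assumed patterns (emails, US-style phone numbers, SSN-like strings); regex checks only catch the obvious cases and are no substitute for the policy itself.

```python
# Hypothetical pre-flight check: flag obvious PII before text goes to an
# external AI tool. These patterns are illustrative and not exhaustive.
import re

PII_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone number": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "SSN-like number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def flag_pii(text: str) -> list[str]:
    """Return a warning for each PII-like string found in the text."""
    warnings = []
    for label, pattern in PII_PATTERNS.items():
        for match in pattern.findall(text):
            warnings.append(f"Possible {label}: {match}")
    return warnings

draft = "Follow up with jane.doe@example.com or call 555-867-5309 about renewal."
for warning in flag_pii(draft):
    print(warning)
```

Treat a check like this as a safety net for honest mistakes; the real protection is a team that knows what should never be entered in the first place.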
When in doubt, share examples, have conversations, and ultimately remind your team:
“Humans check and review everything.”
A clear AI policy sets expectations, reduces risk, and ensures your team uses AI responsibly and creatively. It also builds trust with customers and partners, so make sure you stay aligned with the communities you serve and their perceptions of AI.
Start with a quick survey. Ask your team what software or platforms they use that may include AI features, like CRMs, CMSs, or automation tools. Communicate that the goal is to help them use the tools safely, and consider making it anonymous; people should feel comfortable sharing how they are using AI, what excites them, and what concerns them.
Focus on acceptable and unacceptable uses, data privacy rules, accuracy checks, and how AI-generated content should be reviewed by humans before publishing.
Encourage open, anonymous feedback during your audit. Make clear that the goal is exploration and safety, not punishment or control.
Data privacy breaches, biased information or mistakes, and overreliance on AI-generated content can kill your reputation. The best defense is human oversight backed by good policy and training.
Create a simple policy first, and keep it front and center with your team. Use it as a training tool; AI tools evolve quickly, so don't put this on a shelf to get dusty.