Template: “AI Policy”
AI gives us a great opportunity to be more productive and increase our social impact, but it also has downsides and risks.
This document reflects our main guidelines and restrictions for using AI.
We also have an AI Library, where we share good prompts, useful recommendations and other AI resources.
This AI Policy is focused on giving specific and actionable guidelines. It’s designed to be easy to understand, apply and update.
⚠️ Warning
A document like this is only useful if everybody in your organization can understand and apply it easily. Otherwise it becomes just another document full of big words and generic concepts: it takes 10 meetings and 2 months to produce, only to be ignored by most of your people and collect dust in a folder somewhere.
Goals and scope
The goal of this document is to make sure our AI use is aligned with our mission, goals and ethical principles. It will also allow us to reduce risks and misunderstandings.
This Policy applies to:
- Staff members
- Board members
- Volunteers
- Partners
- Contractors
- Any other individuals or entities acting on behalf of the organization
It doesn’t apply to:
- Individuals or entities acting on their own behalf (without any link to the organization)
Violating this policy may result in disciplinary action. Depending on severity, consequences may include loss of access to AI tools, mandatory retraining, or legal and disciplinary measures (including contract termination).
Usage guidelines
When using AI tools, you should:
- Apply our values and ethical principles (LINK) to all your AI use. AI systems are just another tool to help you. You are responsible for all the content that you publish.
- Always manually review and improve AI output before publishing anything (internally or externally). Never use fully automated AI processes or systems without human supervision. They are not reliable enough (at least for now), and they could harm people and damage our reputation.
- Disclose when you are publishing AI-generated content (we consider content AI-generated if more than X% of it was generated or modified by AI and was not edited, or only lightly edited, by a person). You should include the following phrase at the end of the content: “This content was created with AI assistance.”
- Use systems and tools for risk mitigation. Include measures to avoid bias, plagiarism and misinformation, such as using prompts (or custom instructions) that include this phrase: “Ensure your response is unbiased and does not rely on or promote stereotypes regarding gender, race, ethnicity, religion, age, sexual orientation, disability, socioeconomic status, or any other personal attribute. Consider diverse perspectives and avoid making assumptions about individuals or groups. If relevant, provide a range of viewpoints and acknowledge any potential limitations in your response.”
- Share your learnings with the team. You can use our AI Library to share your experiences (good and bad), best prompts, etc.
- Report AI-related risks and incidents by sending an email to ai@example.com: things like data risks or breaches, AI tools generating problematic outputs, and major technical issues. The AI Committee will review all reports promptly and update this Policy if necessary.
- Use only work accounts (provided by our organization) for work purposes. Do not use work accounts for personal purposes or personal accounts for work purposes. And don’t share your work accounts with anybody else, internal or external. Otherwise you could create serious security and privacy risks.
We encourage responsible experimentation with AI within these guidelines. Share your learnings so the entire organization can benefit.
If you have questions or need an exception to any of these rules, contact ai@example.com. We encourage questions and suggestions; nobody will be penalized for asking.
⚠️ Warning
These are generic guidelines. You should probably adapt them to your organization’s priorities and ethical principles. For example, if your organization is focused on protecting the environment, you might include guidelines to make your AI use more eco-friendly (e.g. using local AI systems or smaller AI models whenever possible, since they consume less energy and resources than the biggest cloud AI systems and the newer AI reasoning models).
Forbidden use cases
We will never use AI in ways that violate our core values or applicable laws, compromise the privacy and dignity of individuals, or create undue risk to our organization or the people we serve.
Here are a few specific examples of AI uses that we don’t allow:
- Creating images or videos that don’t respect copyright or can be deceptive (e.g. creating fake realistic images with the faces of famous people, or creating designs or art that closely imitate the style of a certain artist…)
- Making decisions related to Human Resources (recruitment, performance reviews, promotions, terminations, etc.)
- Using facial recognition or biometric analysis systems without the explicit consent of every individual involved.
- Analyzing sensitive information about our beneficiaries or clients without their explicit consent.
Never provide Personally Identifiable Information (PII) or proprietary or confidential information to an AI tool.
- Anonymize data (remove identifiable details like organization names, staff names or emails, phone numbers, etc.) before sharing it with AI tools. You might do this faster using “search” or “search & replace” in your document or database.
- If you might be sharing any kind of sensitive information, make sure the AI tool won’t store this information or use it to train its models (some AI tools, like ChatGPT, have options to deactivate training and memory). Still, there might be hacks or leaks, so assume any information you share with an AI tool might be published later.
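The anonymization step above can be sketched as a small script. The `anonymize` helper below is a hypothetical illustration, not an official tool: its regex patterns catch only common email and phone formats, and the name list must be supplied by you.

```python
import re

def anonymize(text, known_names=()):
    """Redact common PII patterns before pasting text into an AI tool.

    The regexes are illustrative, not exhaustive -- always review
    the output manually before sharing anything.
    """
    # Email addresses
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[EMAIL]", text)
    # Phone numbers (rough pattern: 9+ digits with optional separators)
    text = re.sub(r"\+?\d[\d\s().-]{7,}\d", "[PHONE]", text)
    # Known names of staff, beneficiaries or organizations
    for name in known_names:
        text = text.replace(name, "[NAME]")
    return text

sample = "Contact Jane Roe at jane@example.org or +1 555-123-4567."
print(anonymize(sample, known_names=["Jane Roe"]))
# -> Contact [NAME] at [EMAIL] or [PHONE].
```

Automated redaction like this is only a first pass; a human should still check the result, since PII can appear in forms no simple pattern will catch.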
Tools
Some tools are riskier than others, so we have to be careful with them, and sometimes not use them at all. This could be due to ethical issues, data privacy limitations, security risks, high costs, frequent hallucinations or other problems.
We will give priority to tools that comply with our principles and requirements (ethics, data privacy, security, cost).
These AI tools have been approved for use:
- XXX
- XXX
- XXX
These AI tools have been forbidden:
- XXX
- XXX
- XXX
If you want to use AI tools that are not listed here, contact ai@example.com. We will review it and give you a response ASAP. *
* Another option would be that only risky tools have to be approved. You could use a text like this:
“If it’s clearly a ‘safe’ use (you won’t provide personal or confidential data, the tool doesn’t publish anything automatically, the provider complies with all our key regulations like GDPR…), you can use it without previous approval. If it’s a risky use or you are not sure, please contact ai@example.com.”
ℹ️ Note
You might include in the list specific tools (e.g. ChatGPT), categories (e.g. AI image generators) or conditions (e.g. tools with their servers outside the EU or that don’t comply with a certain law).
You could also use a “Traffic Light” system: Categorize tools or use cases as green (always OK), yellow (proceed with caution, consult if unsure), and red (forbidden).
Governance
There will be an “AI Committee” composed of a diverse team of experts (IT, legal and ethics).
The AI Committee will:
- Designate an “AI Officer”.
- Meet at least once a year to review the latest AI issues, news and tools, and whenever there is an urgent and important AI-related issue.
- Approve new AI tools and other changes to this Policy.
The AI Officer will:
- Manage the email ai@example.com.
- Provide AI guidance to employees.
- Experiment with new AI tools and tactics (and share the learnings).
- Solve simple issues related to AI.
- Escalate important or complex issues to the AI Committee.
- Recommend changes to this Policy to the AI Committee.
Training
All staff should review this Policy and our AI Library before using any AI tools for their work.
We recommend checking the AI Library at least once a month to stay up to date with the latest best practices, tools, etc.
We encourage everybody to join AI training programs and to share their best learnings (summary + link to the source) in the AI Library and on our internal communication platforms.
Updates
Since AI is a rapidly changing field, we need to review and improve this Policy frequently to ensure it’s adapted to new needs, technologies and legal requirements.
- Everybody should collaborate to make sure this document stays updated and relevant. If you have ideas or suggestions to improve it, please contact ai@example.com.
- Once a month, the AI Officer (ai@example.com) will send an email to all staff informing them of any changes made to this Policy.
- When a new tool or use case has been approved or forbidden, it should be immediately added to this document by the AI Officer.
- The AI Committee will review possible changes to this Policy at least once a year and whenever there are urgent and important issues.
FAQ
What do we mean by “AI” in this policy?
We are referring to any software, system, or tool that uses algorithms or machine learning to generate content, make recommendations, or assist with decision-making. This includes text generators (like ChatGPT), image or video generators, predictive analytics tools, and more.
What kind of information should I never put into an AI tool?
Never enter Personally Identifiable Information (PII) like names, addresses, phone numbers, or email addresses into an AI tool. Also, do not enter confidential information about our organization, our finances, our beneficiaries, our donors, or our partners. Always err on the side of caution.
Can I use my personal AI accounts or free online tools for work?
Generally, no. We ask you to use only organization-approved AI tools and your designated work account. If you want to test a new tool that isn’t on the approved list, please contact the AI Officer (at ai@example.com). We need to ensure the tool meets our privacy and ethical standards.
What if I find a great new AI tool or a new way to use AI in our work?
That’s great, we encourage innovation! Please send the details to ai@example.com. We will review your suggestion and update our Policy if necessary. We’re always looking for ways to use AI to enhance our impact.
I’m unsure whether my AI use violates policy. What should I do?
When in doubt, ask. Pause and send an email to ai@example.com. Better to confirm first than risk a violation that could have legal, ethical, or reputational consequences.
How can I learn more about using AI?
Our AI Library is a great resource. It contains helpful guides, articles, recommended training programs, and examples of good AI prompts. You can also reach out to the AI Officer at ai@example.com with any questions. We encourage everyone to stay informed about AI and to share their learnings with the team.
Appendix
You might want to use this section to include additional resources or links, such as:
- Contact Information: Provide contact details for the AI Officer and the AI Committee.
- Glossary: Define key AI-related concepts.
- Useful Resources: Links to guides and tools for responsible AI use.
- Relevant regulations: Links to relevant laws and regulations.
- Templates or guides for specific tasks or processes (e.g. requesting or analyzing a new AI tool, performing data anonymization…)