Checklist: AI ethics & risk assessment
🏛️ Governance & Oversight Structure
⬜️ 1. Create & share an AI ethics policy
Publish a brief and clear policy covering acceptable uses, prohibited applications (e.g., making final funding decisions without human review), and approval workflows for new AI tools. Your board should review and approve this document. Require all staff and key volunteers to read and sign this policy.
To get started, you can search for templates & guides from trusted sources like NTEN. We also offer an AI Policy template as part of our membership.
⬜️ 2. Create an AI inventory
Document every AI system in a spreadsheet that can be shared with the team. For each tool, record its purpose, the type of data it processes, the vendor, the monthly cost, and an assigned risk level (Low, Medium, High). Consider adding a “comments” field to share recommendations, lessons learned, etc.
Update this inventory as soon as new AI systems are approved (by the board, committee, or person in charge of IT/AI approvals), and share it with all staff and key volunteers to increase transparency and encourage responsible use.
📊 Data Protection & Privacy Controls
⬜️ 3. Anonymize data before AI processing
Before uploading data to an AI tool, remove or replace personally identifiable information (PII) and other sensitive data, unless that data is genuinely necessary for the task, you have consent for this kind of processing, and you are confident the AI tool meets all legal and security requirements.
You may be able to perform the anonymization in a few seconds using the “find and replace” or regex features in your spreadsheets and documents. Another option is to run local AI models (installed with LM Studio or Ollama, for example), either just for the anonymization step or for the whole AI process, so sensitive data never leaves your computer (and you can erase the conversation after completing the task to make it even safer).
To make this process more efficient and consistent, consider sharing anonymized versions of key documents and databases with your team, so they can work quickly and safely instead of inventing their own anonymization processes. At a minimum, share a guide that explains how to do it themselves.
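If someone on your team is comfortable with a bit of scripting, the same replacements can be applied consistently every time. Below is a minimal Python sketch, assuming plain-text input; the patterns are illustrative examples that will not catch names, addresses, or case numbers, so adapt them (or pair them with a replacement list or a local model) before relying on them.

```python
import re

# Illustrative PII patterns; extend or adjust them for the data you actually handle.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def anonymize(text: str) -> str:
    """Replace each matched pattern with a [LABEL] placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

sample = "You can reach the applicant at maria@example.org or +1 (555) 123-4567."
print(anonymize(sample))
# -> "You can reach the applicant at [EMAIL] or [PHONE]."
```

Using consistent placeholders such as [EMAIL] and [PHONE] also makes it easy to spot anything that slipped through during a quick manual review before upload.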
⬜️ 4. Map AI data flows
Create data flow diagrams for each AI tool (or at least for high-risk ones). These diagrams should visualize:
- where data comes from
- what kind of data it is (e.g., PII or sensitive health info)
- where and how it’s processed (use secure tools; consider data minimization or anonymization)
- where it’s stored and for how long (use secure storage; consider erasing sensitive data/conversations after use)
For example, a flow might be: Salesforce (Donor PII) -> CSV Export -> CSV anonymization (Excel + regex) -> Upload CSV to ChatGPT -> Drafts stored in Google Drive (no PII)
You can use a dedicated tool like Diagrams.net to make polished diagrams, but a text document or spreadsheet that you can easily share with your team (and let colleagues edit) is often more practical. Make sure the format lets you add comments to capture relevant risks, recommendations, and changes.
⬜️ 5. Update consent forms
Update your privacy policies and consent forms to specifically mention that data may be processed using “automated systems” or “artificial intelligence”, and consider naming the specific tools you use or may use. Provide a clear checkbox or manual option for opting out, and monitor opt-out rates to identify potential usability or trust issues among certain groups.
Use simple and clear language. For example: “We may use AI systems to analyze and optimize program effectiveness. You can opt out by contacting [ai@example.org] without affecting your services.”
⬜️ 6. Confirm data processing locations and cross-border transfers
Verify with each AI vendor where their servers are located. If you handle data from EU citizens, ensure the vendor complies with GDPR transfer rules. Look for a “Trust Center” or “Data Processing Addendum (DPA)” on your vendor’s website, or email them directly.
⬜️ 7. Ensure you can delete all traces of someone’s data within 30 days of request
You probably have to honor a “right to be forgotten” request within the legally required timeframe (e.g., 30 days under GDPR). Create a step-by-step internal guide for handling these requests, detailing how to request deletion from each specific vendor and assigning a person responsible for execution.
🎯 Bias Prevention & Fairness Measures
⬜️ 8. Test AI outputs using data from different demographic groups
Before deploying an AI system that impacts people, test it with sample data from different groups you serve (e.g., race, gender, age, location). Create a small “fairness test deck” in a spreadsheet with fictional profiles of diverse backgrounds. Run them all through the system and check if the outputs show any unintended patterns, such as consistently ranking applicants from a certain neighborhood lower.
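If your team prefers a scripted check, the same idea can be expressed in a few lines of Python. This is only a sketch: `score_with_ai` is a hypothetical stand-in for whatever tool or API you are testing, and the profiles are fictional.

```python
import pandas as pd

def score_with_ai(profile: dict) -> float:
    # Hypothetical stand-in for the real call to the AI tool being tested.
    # It returns a constant here so the sketch runs end to end.
    return 0.5

# Small "fairness test deck" of fictional profiles; vary one attribute at a time.
test_deck = pd.DataFrame([
    {"profile": "A", "neighborhood": "North", "gender": "F", "age": 34},
    {"profile": "B", "neighborhood": "South", "gender": "F", "age": 34},
    {"profile": "C", "neighborhood": "North", "gender": "M", "age": 62},
    {"profile": "D", "neighborhood": "South", "gender": "M", "age": 62},
])

test_deck["score"] = [score_with_ai(p) for p in test_deck.to_dict("records")]

# Large gaps between group averages are a signal to investigate further.
print(test_deck.groupby("neighborhood")["score"].mean())
print(test_deck.groupby("gender")["score"].mean())
```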
⬜️ 9. Require human review for important decisions
Any AI-drafted public communication, or any AI recommendation that affects a person’s eligibility for services or involves a major financial decision (e.g., over $500), must be reviewed and approved by a trained human with override authority.
You may want to log every case where the human reviewer’s decision differs from the AI recommendation (a minimal sketch follows below) and perform a deeper review of any AI process where these mismatches happen frequently.
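A shared spreadsheet is usually enough for this log. If you want to automate it, here is a minimal Python sketch that appends each review to a CSV file; the file name, columns, and the example call at the end are hypothetical placeholders.

```python
import csv
from datetime import date
from pathlib import Path

LOG_FILE = Path("ai_override_log.csv")  # hypothetical shared log location

def log_review(case_id: str, ai_recommendation: str, human_decision: str,
               reviewer: str, notes: str = "") -> None:
    """Append one review; rows where the human overrode the AI are flagged."""
    is_new = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["date", "case_id", "ai_recommendation",
                             "human_decision", "override", "reviewer", "notes"])
        writer.writerow([date.today().isoformat(), case_id, ai_recommendation,
                         human_decision, ai_recommendation != human_decision,
                         reviewer, notes])

# Example (fictional) entry:
log_review("case-017", "decline", "approve", "j.smith", "Income data was outdated.")
```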
🔒 Security & Access Controls
⬜️ 10. Implement role-based access
Ensure that only trained staff who need access to an AI tool have it, and enforce multi-factor authentication (MFA). Review the list of users with access every 90 days and revoke access immediately upon staff termination. For example, a grant writer might need access to an AI writing assistant, but not to the AI tool used by the finance team.
⬜️ 11. Use paid/enterprise accounts with security controls
Never use personal accounts for organizational work. If possible, upgrade to paid plans that offer administrative controls, audit logs, and data privacy features.
⬜️ 12. Opt out of model training
Many AI services use customer data to train their models by default. Find the setting to opt out and re-check it at least every 6 months (settings, features, and defaults may change). You can find data control policies for major providers like OpenAI and Google AI on their websites.
⬜️ 13. Rotate API keys every 90 days (if you are using AI APIs)
Treat API keys like passwords. Never hardcode them in scripts or share them in emails. Store them in a secure password manager and set a recurring calendar reminder to rotate them every 90 days. Team-oriented tools like Bitwarden or 1Password are excellent for securely storing and sharing secrets like API keys.
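If you write your own scripts, keep the key out of the code entirely and load it from an environment variable (or from your password manager’s CLI). A minimal Python sketch; EXAMPLE_AI_API_KEY is a placeholder name, so use whatever your team agrees on.

```python
import os

# Read the key from the environment instead of hardcoding it in the script.
api_key = os.environ.get("EXAMPLE_AI_API_KEY")
if not api_key:
    raise RuntimeError("Set the EXAMPLE_AI_API_KEY environment variable first.")

# ...pass api_key to the vendor's client library here...
```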
📢 Transparency & Accountability Practices
⬜️ 14. Ensure users know when they’re interacting with AI
Clearly disclose when a user is communicating with an AI chatbot or reading AI-generated content. A chatbot on your website should be labeled “AI Assistant” and include a button or option to “Speak with a human”. Any blog post or report substantially written by AI should include a disclaimer, such as “This article was drafted with the assistance of AI and verified by our staff”.
⬜️ 15. Create a dedicated channel for AI concerns
Provide a simple way for anyone to ask questions or raise concerns about your use of AI, such as a dedicated email address (ai@example.org) or a web form. Reply promptly to these messages, track recurring themes, and report monthly to leadership.
🚨 Incident Response & Recovery Plans
⬜️ 16. Develop a one-page AI incident response plan
Prepare a simple plan for what to do if an AI system fails or is breached. It should include key contact information (including external experts who can help, if relevant), steps to disable the system, and pre-drafted communication templates. You can adapt comprehensive guidance from resources like the SANS Institute Incident Handler’s Handbook to create a simplified one-page plan.
⬜️ 17. Ensure ability to revert to manual processes within 4 hours
If a critical AI system fails, you must be able to continue essential operations manually. Document the pre-AI workflows and ensure at least two staff members are trained on how to execute them. Store these documents in a shared folder labeled “MANUAL PROCESS FALLBACKS.”
👥 Vulnerable Populations & Beneficiary Protection
⬜️ 18. Identify which processes or beneficiary groups need extra AI safeguards
Explicitly list the vulnerable populations you serve (e.g., children, refugees, survivors of abuse) and apply stricter controls where AI might impact them. For example, a policy might state: “AI will not be used to handle initial intake communications or to store any personally identifiable stories from survivors”.
⬜️ 19. Involve beneficiaries in AI design through participatory methods
Before deploying an AI tool that will directly affect your beneficiaries, include them in the design and testing process through focus groups or advisory panels. Document their feedback and how you addressed their concerns. This practice, known as co-design, builds trust and leads to more equitable and effective tools.
🌍 Environmental Sustainability
⬜️ 20. Prioritize small and local AI models
When possible, use smaller and/or local models: they are “intelligent” enough for most tasks, return results faster, and require much less energy to run. Check whether the latest large models actually give significantly better results for your most frequent tasks. Share what you learn with your team (e.g., a list of tasks where a small model performed well).
For example, for correcting the format or grammar in a document, a small AI model or a dedicated tool like Grammarly is far more energy-efficient than running the text through a large reasoning model.
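If you want to experiment with a local model, tools like Ollama make this straightforward. Here is a minimal sketch using the ollama Python package, assuming Ollama is installed and running locally; the model name is just an example of a small model you would first download with `ollama pull`.

```python
import ollama  # pip install ollama; requires the Ollama app running locally

# "llama3.2" is only an example of a small local model; use whichever
# model your team has pulled and tested.
response = ollama.chat(
    model="llama3.2",
    messages=[{
        "role": "user",
        "content": "Fix the grammar: 'Our program have helped 120 families this year.'",
    }],
)
print(response["message"]["content"])
```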
⬜️ 21. Implement data minimization
Only collect and process the data that is absolutely necessary for a given task. Before starting an AI project, ask: “What is the absolute minimum amount of data we need to achieve this goal?”
Feeding in large amounts of data and documents can confuse some AI tools and always takes more compute (and therefore more energy) to process. Data minimization improves privacy and reduces environmental impact at the same time.
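In practice, minimization can be as simple as trimming an export down to the columns a task actually needs before uploading anything. A minimal Python sketch; the file and column names are placeholders.

```python
import pandas as pd

# Columns this particular AI task actually needs (placeholder names).
NEEDED_FOR_TASK = ["program", "enrollment_date", "attendance_rate"]

full_export = pd.read_csv("program_data.csv")   # full export, may contain PII
minimal = full_export[NEEDED_FOR_TASK]          # keep only what the task needs
minimal.to_csv("program_data_minimal.csv", index=False)

print(f"Kept {len(NEEDED_FOR_TASK)} of {full_export.shape[1]} columns.")
```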
⬜️ 22. Assess the carbon footprint of your AI usage
If possible, choose vendors who are transparent about their energy use and are committed to renewables. Major cloud providers like Google, Microsoft, and AWS publish detailed sustainability reports that can inform your choices.
📚 Staff Education & Continuous Learning
⬜️ 23. Provide AI training to every staff member
Provide basic training to all staff and new hires, and consider extending it to key volunteers, partners, and board members. Where possible, add more advanced or specialized training for certain roles (communications, grants, programs, HR, finance, etc.). Track completion rates and test understanding with practical scenarios.
Better training translates into better AI use (increased efficiency, reduced environmental impact, reduced risks, etc.). The AI landscape changes very quickly, with new tools and features launching almost every week, so ongoing training is highly recommended.
⬜️ 24. Maintain an internal knowledge base of AI best practices
Create a searchable repository for your AI policy, tool inventory, lessons learned, and approved use cases. This helps institutionalize knowledge and ensures consistency. Free tiers of tools like Notion work well for this.
🚀 Advanced Optimizations (Optional)
⬜️ 30. Conduct red team exercises with adversarial testing quarterly
For high-risk systems, hire experts or assign an internal team to actively try to “break” the AI by testing for vulnerabilities like prompt injection or bias. You can find common vulnerabilities to test for in the OWASP Top 10 for Large Language Model Applications.
⬜️ 31. Establish an independent AI ethics review board with external experts
For organizations heavily reliant on AI, create a formal board that includes external ethicists, technical experts, and community representatives. This board should be empowered to review and veto high-risk AI deployments.
⬜️ 32. Implement explainable AI (XAI) techniques for all automated decisions
For high-stakes decisions, use technical tools that can provide insight into why an AI model made a specific recommendation. This is technically complex, but open-source libraries like SHAP and LIME (for Python) can help.
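Note that these techniques apply to models you train or run yourselves, not to closed third-party APIs. Here is a minimal sketch using SHAP with scikit-learn, with synthetic data standing in for your own decision records.

```python
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for your historical decision data (features -> score).
X, y = make_regression(n_samples=200, n_features=5, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

explainer = shap.Explainer(model)   # picks an explainer suited to the model
shap_values = explainer(X)          # per-prediction feature contributions
shap.plots.bar(shap_values)         # shows which features drive predictions overall
```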
⬜️ 33. Conduct risk assessments using the NIST AI Risk Management Framework
Adopt a structured, formal approach to risk management by following the guidance from the U.S. National Institute of Standards and Technology. The official NIST AI Risk Management Framework (AI RMF) is the gold standard for a comprehensive Govern, Map, Measure, and Manage approach to AI risks.
ℹ️ Note
You should probably adapt this checklist to the specific needs and priorities of your organization. You can copy the contents of this page into a Google Doc or similar tool, edit the list and maybe export it as PDF to share it with your colleagues.