🎯 Foundation Assessment
⬜️ 1. Identify 3-5 specific problems where AI could create measurable impact
Document concrete challenges like “grant applications take 20 hours each” or “we lose 30% of first-time donors.” Include current baseline metrics so you can measure improvement after AI implementation.
⬜️ 2. Map existing data sources and their current quality
List all databases, spreadsheets, and systems containing donor info, program metrics, and operational data. Note which are digital, which need cleaning, and which connect to each other.
⬜️ 3. Calculate the true cost of your most time-consuming manual processes
Identify your top 5 repetitive tasks and calculate their real cost (staff hours × hourly rate); a minimal calculation sketch appears at the end of this section. This helps justify AI investment and prioritize which processes to automate or improve first.
⬜️ 4. Document baseline metrics for processes you want to improve
Record current performance: emails per week, donor response rates, grant success rates, hours per report. Without baselines, you can’t prove AI’s value to funders or board members.
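As a rough illustration of the cost calculation in item 3 (staff hours × hourly rate), here is a minimal Python sketch. The task names, hours, and hourly rates are hypothetical placeholders; substitute your own figures.

```python
# Rough annual-cost estimate for repetitive manual tasks.
# Task names, hours, and hourly rates are hypothetical placeholders.
tasks = [
    {"name": "Grant application drafting", "hours_per_month": 40, "hourly_rate": 35},
    {"name": "Donor thank-you letters", "hours_per_month": 20, "hourly_rate": 28},
    {"name": "Monthly board report", "hours_per_month": 12, "hourly_rate": 45},
]

for task in tasks:
    annual_cost = task["hours_per_month"] * 12 * task["hourly_rate"]
    print(f'{task["name"]}: ${annual_cost:,} per year')

total = sum(t["hours_per_month"] * 12 * t["hourly_rate"] for t in tasks)
print(f"Total annual cost of these manual processes: ${total:,}")
```

Even a rough total like this is usually enough to show which processes are worth improving first.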
👥 Team Preparation
⬜️ 5. Identify your AI champion and give them explicit authority
Choose someone curious about technology who has at least 10% of their time freed up for this role. Give them decision-making power for tools under $100/month and direct access to leadership.
⬜️ 6. Survey staff to identify eager adopters versus skeptics
Send an anonymous survey asking about AI interest, concerns, and current tech comfort level. Use results to pair enthusiasts with skeptics and address specific fears directly.
⬜️ 7. Ensure everybody understands AI
Train all staff on AI basics, using real nonprofit examples and dispelling common myths. Consider also offering brief AI training or written guides to other stakeholders (board members, volunteers, contractors, etc.).
⬜️ 8. Create a change management plan with clear communication
Develop messaging that addresses job security concerns, emphasizes human oversight, and highlights how AI will make work more meaningful by reducing tedious tasks.
📋 Policy Framework
⬜️ 9. Create a brief AI use policy with specific dos and don’ts
Include concrete examples: “DO use AI for first drafts of thank-you letters. DON’T input donor credit card numbers.” Make it scannable with bullet points, not dense paragraphs.
⬜️ 10. Define which data categories can never enter AI systems
Create a “red list”: bank accounts, medical records, immigration status, credit cards, passwords… A simple screening sketch appears at the end of this section.
⬜️ 11. Establish clear attribution rules for AI-generated content
Specify when to disclose AI use. For example: Always for published content, optional for internal documents. Create standard disclosure language like “Assisted by AI, reviewed and edited by our team.”
⬜️ 12. Update consent forms to cover AI data processing
Add a simple clause: “We may use AI tools to better serve you, always with human oversight.” Review with legal counsel, especially if serving vulnerable populations.
⬜️ 13. Establish an incident response plan for AI failures
Document who to notify, who writes corrections, and template language for apologies. Include procedures for both technical failures and ethical concerns.
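If staff paste text into AI tools by hand, a lightweight pre-check can catch obvious “red list” data (item 10) before it leaves your systems. The sketch below is a minimal illustration, not a complete screener: the regex patterns and keyword list are assumptions to adapt to your own policy.

```python
import re

# Illustrative "red list" patterns and keywords; adapt them to your own policy.
RED_LIST_PATTERNS = {
    "possible credit card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "possible US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}
RED_LIST_KEYWORDS = ["medical record", "immigration status", "password"]

def screen_for_red_list(text: str) -> list[str]:
    """Return reasons why this text should NOT be sent to an AI tool."""
    findings = [label for label, rx in RED_LIST_PATTERNS.items() if rx.search(text)]
    findings += [f"red-list keyword: {kw}" for kw in RED_LIST_KEYWORDS if kw in text.lower()]
    return findings

draft = "Please summarize: the donor paid with card 4111 1111 1111 1111."
for reason in screen_for_red_list(draft):
    print("Do not paste into AI tool -", reason)
```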
🚀 Implementation Planning
⬜️ 14. Select 1 or 2 low-risk pilot projects
Identify your “quick wins” to build momentum. Choose something like social media drafts or meeting summaries. Avoid starting with donor-facing or legally sensitive tasks.
⬜️ 15. Engage beneficiaries in the AI planning process
If AI will affect service delivery, gather input from those served through focus groups or surveys. Their perspective is crucial for ethical implementation and acceptance.
⬜️ 16. Develop a rollback plan if the pilot fails
Document how to revert to previous processes, who makes that decision, and how to communicate it. This reduces anxiety about trying new approaches.
🔧 Technical Readiness
⬜️ 17. Create a vendor evaluation framework
Develop criteria including: privacy policy quality, data ownership terms, pricing transparency, nonprofit discounts, and ability to export data. Require scoring before any purchase; a sample weighted scorecard appears at the end of this section.
⬜️ 18. Audit existing software for hidden AI features
Check if your CRM, email platform, or Microsoft/Google tools already include AI. You might already be paying for capabilities you’re not using.
⬜️ 19. Ensure vendor agreements protect your data
Ensure contracts specify you own your data, can delete it upon request, and that vendors won’t use it for model training without permission. Get these terms in writing.
⬜️ 20. Designate who can approve and purchase AI tools
Create a simple approval process. For example: tools under $100/month need champion approval; tools over $100/month need director approval. This prevents both paralysis and uncontrolled spending.
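The vendor evaluation framework in item 17 can be as simple as a weighted scorecard. Here is a minimal sketch; the criteria weights, vendor names, and 1-to-5 scores are purely hypothetical.

```python
# Hypothetical weighted scorecard for comparing AI vendors (scores are 1-5).
criteria_weights = {
    "privacy policy quality": 0.25,
    "data ownership terms": 0.25,
    "pricing transparency": 0.15,
    "nonprofit discount": 0.15,
    "data export ability": 0.20,
}

vendor_scores = {
    "Vendor A": {"privacy policy quality": 4, "data ownership terms": 5,
                 "pricing transparency": 3, "nonprofit discount": 2, "data export ability": 4},
    "Vendor B": {"privacy policy quality": 3, "data ownership terms": 3,
                 "pricing transparency": 5, "nonprofit discount": 5, "data export ability": 3},
}

for vendor, scores in vendor_scores.items():
    weighted = sum(weight * scores[criterion] for criterion, weight in criteria_weights.items())
    print(f"{vendor}: weighted score {weighted:.2f} out of 5")
```

Even this simple comparison makes trade-offs visible before any money is committed.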
🔐 Risk Management
⬜️ 21. Identify the biggest risks specific to your mission and population
List unique concerns: Could AI responses traumatize vulnerable clients? Might automated messages seem culturally insensitive? Address these before they become problems.
⬜️ 22. Establish clear accountability for AI outcomes
Document who is responsible if AI causes harm: who makes corrections, who communicates with affected parties, and who decides on system changes.
⬜️ 23. Monitor the evolving AI regulatory landscape
Assign someone to track monthly updates on AI regulations, especially those affecting nonprofits. Subscribe to relevant newsletters from AI and legal experts.
📊 Measurement & Learning
⬜️ 24. Define specific KPIs for your AI initiatives
Set measurable goals: “Reduce grant writing time by 30%” or “Increase donor retention by 15%.” Track both efficiency metrics and quality indicators monthly; a minimal tracking sketch appears at the end of this section.
⬜️ 25. Collect feedback from all stakeholder groups
Survey staff monthly, donors quarterly, and beneficiaries after any AI-assisted interaction. Include both quantitative ratings and open-ended feedback.
⬜️ 26. Create a process for sharing AI successes across departments
Schedule monthly or quarterly “AI show-and-tell” sessions where teams demo their wins. Record these for absent staff. Peer examples are more convincing than top-down mandates.
⬜️ 27. Plan regular board updates on AI progress
Schedule quarterly reports including metrics, lessons learned, and strategic recommendations. Include both successes and challenges for credibility.
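For the KPIs above, the simplest progress report is percentage change against the baselines you recorded in item 4. A minimal sketch follows; the metric names, figures, and goals are invented for illustration.

```python
# Compare current performance against pre-AI baselines (all figures hypothetical).
kpis = {
    "hours per grant application": {"baseline": 20.0, "current": 13.5, "goal_change_pct": -30},
    "first-time donor retention %": {"baseline": 70.0, "current": 76.0, "goal_change_pct": 15},
}

for name, k in kpis.items():
    change_pct = (k["current"] - k["baseline"]) / k["baseline"] * 100
    goal = k["goal_change_pct"]
    on_track = change_pct * goal > 0 and abs(change_pct) >= abs(goal)
    status = "on track" if on_track else "not yet at goal"
    print(f"{name}: {change_pct:+.1f}% vs baseline (goal {goal:+d}%) - {status}")
```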
⚡ Advanced Optimizations (Optional)
⬜️ 28. Conduct independent AI audits
Once AI is integrated into critical processes, hire external experts to audit for bias, security, and effectiveness.
⬜️ 29. Assess the environmental impact of your AI use
Calculate the carbon footprint of cloud computing and AI processing; a rough estimation sketch appears at the end of this section. Consider purchasing carbon offsets or choosing green hosting providers to align with sustainability values.
⬜️ 30. Develop custom AI models for unique organizational needs
After mastering standard tools, consider fine-tuning models on your content.
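For the environmental-impact item above, exact figures depend on your provider, but a back-of-the-envelope estimate is usually enough to decide whether offsets are worthwhile. The per-query energy and grid-intensity constants below are rough assumptions, not published measurements; replace them with vendor-supplied figures where available.

```python
# Back-of-the-envelope carbon estimate for AI usage.
# Both constants are rough assumptions for illustration only.
ENERGY_PER_QUERY_KWH = 0.003   # assumed energy per AI chat query
GRID_CO2_KG_PER_KWH = 0.4      # assumed grid carbon intensity

queries_per_month = 5000       # hypothetical organizational usage
monthly_kwh = queries_per_month * ENERGY_PER_QUERY_KWH
monthly_co2_kg = monthly_kwh * GRID_CO2_KG_PER_KWH

print(f"Estimated energy use: {monthly_kwh:.1f} kWh per month")
print(f"Estimated emissions: {monthly_co2_kg:.1f} kg CO2 per month "
      f"(about {monthly_co2_kg * 12:.0f} kg per year)")
```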
ℹ️ Note
Adapt this checklist to the specific needs and priorities of your organization. You can copy the contents of this page into a Google Doc or similar tool, edit the list, and export it as a PDF to share with your colleagues.
Next steps
Get new “AI Superpowers” for your nonprofit. Improve results, save time, and avoid risks.
Receive expert help. AI questions? Request a free consultation!