March 9, 2026
You're sitting in a board meeting. Someone mentions AI. Maybe it's a vendor pitch, or maybe it's a staff member asking if you should use ChatGPT for grant writing. Maybe it's a funder asking what your "AI strategy" is.
And suddenly, everyone's looking at you.
Here's what I've noticed: most nonprofit leaders feel like they're supposed to have an answer. Like there's a "right" way to think about AI, and if they don't know it, they're behind. So they nod along, maybe ask a question that sounds smart, and then they go back to their actual job—which is running a mission, not becoming an AI expert.
I get it. You didn't sign up to be a technologist. You signed up to serve your community.
But here's the thing: You don't need to become an AI expert. You need to become a good decision-maker about AI. And that's simpler than you think.
The Pace Problem: Why AI Pressure Is Rising
Let’s start with what's actually changed.
AI isn't new. Machine learning has been quietly running things for years—your email spam filter, your credit score, your Netflix recommendations. What shifted recently is accessibility. The tools got faster, they got better, and most importantly, they became usable by anyone.
You can now describe a task in plain English, and AI will do it. Not perfectly, but well enough that it saves time. A lot of time.
This is raising the floor on what "good enough" work looks like. A grant proposal that used to take 20 hours to draft can now be sketched in two (with human refinement). A customer service email that used to require a person to think through can now be generated and reviewed. A dataset that once took days to analyze can now be summarized in minutes.
The pace is accelerating. And that creates pressure.
Vendors are pitching AI features. Staff are asking to experiment. Board members are asking what your strategy is. And the implicit question underneath all of it is: "Are we falling behind if we don't jump on this?"
Here's my honest answer: Not necessarily. But you need a framework for deciding.
The Three Questions That Actually Matter
I've worked with dozens of nonprofits navigating AI adoption. And I've noticed that the ones who make good decisions—the ones who get real value without creating unintended problems—all ask the same three questions.
Not "Should we use AI?" That's too vague.
Instead: Who? Why? What?
Question 1: Who?
Who is this for—and who could be impacted even if they never touch the tool?
This is the question most organizations skip. They see a tool that could save time, and they think about the person using it. But AI decisions ripple.
Imagine you're a benefits navigation nonprofit considering a chatbot to help clients answer eligibility questions. That sounds great, right? Faster response, less staff time.
But who's actually impacted?
The staff member using it (they need to understand its limits)
The client interacting with it (they're getting information that shapes their life decisions)
The community you serve (if the chatbot gives wrong information, it erodes trust in your organization)
Your funders (they care about outcomes and accountability)
Your board (they're liable if something goes wrong)
The question "Who?" forces you to think about everyone in the ecosystem, not just the person at the keyboard.
This matters because it changes what you need to do next.
Question 2: Why?
Why should they trust it—what are you doing to earn trust with data, transparency, and accountability?
This is where most organizations get uncomfortable. Because the honest answer is often: "We haven't thought about that yet."
But here's what I know: Trust is the currency nonprofits trade in. Your donors trust you. Your community trusts you. Your staff trusts you to make good decisions on their behalf.
When you introduce AI, you're asking people to trust a system they don't understand, making decisions they can't see, using data they didn't know you had.
That's a significant ask.
So the question becomes: What are you doing to earn that trust?
For the chatbot example:
Are you transparent about the fact that it's AI? (Most people assume it's a person.)
Do you have a human review process if something seems wrong?
Are you measuring whether the information is actually accurate?
What happens if the chatbot gives bad information? Who's accountable?
Are you protecting client privacy?
These aren't rhetorical questions. They're operational safeguards. And they determine whether you're using AI responsibly or just hoping nothing goes wrong.
Question 3: What?
What outcome are you responsible for? What will improve, how will you measure it, and what would make you stop or roll it back?
This is the one that separates organizations that learn from ones that just implement.
Too many organizations adopt a tool and then... nothing. They don't measure whether it actually worked. They don't ask staff if it's helping. They don't check whether the promised efficiency gains materialized.
Six months later, they're paying for software nobody's using, or they're using it in ways that create new problems they never anticipated.
The "What?" question forces clarity:
What specific outcome are we trying to improve? (Not "be more efficient"—that's too vague. "Reduce the time staff spends on intake calls from 30 minutes to 15 minutes" is specific.)
How will we measure it? (What data tells us it's working?)
How often will we check? (Monthly? Quarterly?)
What would make us stop? (If accuracy drops below 95%? If staff feedback is negative? If we discover bias?)
Who's accountable for monitoring this? (Not "the tech person." A real person, with real responsibility.)
This isn't bureaucracy. It's the difference between a tool that serves your mission and a tool that becomes a liability.
Why This Matters Right Now
Here's what's changed: The pace of AI improvement is fast. Like, really fast. A tool that's mediocre today might be excellent in six months. A tool that works well today might expose risks you didn't anticipate. This means you can't just make a decision and move on. You have to be willing to learn, adjust, and sometimes reverse course.
The organizations navigating this well have one thing in common: They treat AI adoption as governance, not as a tech project.
Governance means:
Clear ownership (who is accountable)
Defined safeguards (privacy checks, bias checks, human review)
Transparency about use (what you're doing and why)
Ongoing monitoring (measure, monitor, improve)
An off-ramp (pause or rollback if harm shows up)
It sounds formal. In practice, it's intentional leadership.
The Real Opportunity with AI
Here's the thing nobody talks about: The opportunity with AI isn't just efficiency. It's capacity.
When you automate drafting, summarizing, and organizing—you buy back time. And in a nonprofit, that time is precious.
That's time your program director can spend on strategy instead of paperwork.
That's time your ED can spend on fundraising instead of email.
That's time your case managers can spend on relationships instead of documentation.
But capacity is only valuable if you use it for something that matters.
The organizations seeing real value from AI aren't the ones that use it to do more of the same thing faster. They're the ones that use it to do different things. To go deeper. To serve better. To build stronger relationships with the people they serve.
That's the opportunity. And it only happens if you're intentional about it.
How to Start: One Experiment, Clear Boundaries
You don't need a complex AI strategy. You don't need to hire a consultant (though I'm obviously biased toward that). You need to start asking better questions.
Pick one thing. Something that's currently taking time or creating friction. Ask yourself:
Who would this impact?
Why should they trust it?
What outcome are we trying to improve, and how will we measure it?
If you can answer those clearly, you're ready to experiment responsibly. If you can't, you're not ready yet—and that's okay. It means you need stronger governance before you implement.
This is how you stay trustworthy while moving fast. This is how you capture the upside of AI without gambling with your mission.
And honestly? This is what your community expects from you. Not perfection. Just thoughtfulness. Just accountability. Just the willingness to say: "We thought about the impact, we set boundaries, we measured outcomes, and we're willing to change course if we need to."
That's leadership in the age of AI.
Want to go deeper? Grab our Nonprofit Ethical AI Toolkit—it includes practical templates, decision frameworks, and governance checklists you can use immediately. No fluff, just tools that help you lead with clarity.
Join our flagship Nonprofit Tech Decision System membership program. You'll receive a living library of templates, short walkthrough videos, and a predictable rhythm of support to keep your nonprofit’s technology decisions clear, current, and board-ready.
Ready for the next installment? Check out the next blog post in this series, Why Nonprofits Are on the Front Line of AI Governance (coming soon).