March 17, 2026
Here's something most people don't realize: Nonprofits aren't just affected by AI. You're on the front line of determining how AI actually functions in the real world.
I know that sounds dramatic. But stick with me.
Right now, AI is being deployed everywhere—in hiring systems, in benefits eligibility decisions, in credit scoring, in healthcare triage, in housing allocation. The systems that determine whether someone gets a job, gets benefits, gets a loan, gets housing. The systems that shape people's lives.
And most of the time, nobody is asking hard questions about whether these systems are fair, accurate, or accountable.
Except nonprofits. Because you're the ones who see the fallout.
The Systems You're Already Living With
Let me give you some concrete examples.
Access and eligibility decisions:
You're a workforce development nonprofit. A potential client comes in. They want to apply for a program. But the eligibility system—run by the government, powered by AI—flags them as ineligible. They ask why. Nobody can tell them. The system said so.
This is happening right now, across the country. In benefits programs, housing programs, workforce programs. AI is quietly becoming a gatekeeper. And when it makes a mistake, or when it's biased, or when it's just incomprehensible—your staff has to deal with it.
Hiring and housing filters:
You're a housing nonprofit. You're trying to place someone in permanent supportive housing. But the landlord uses an AI screening system. The system flags the person as "high risk." The landlord declines. Your client doesn't get housed.
Or you're running a job training program. You place a graduate with an employer. But the employer's hiring system filtered out half your candidates before a human ever looked at them. The AI decided they weren't a good fit.
These systems are supposed to be objective. But they're trained on historical data, and historical data reflects historical bias. So the AI just automates the bias faster.
Misinformation and trust breakdown:
You're a community health nonprofit. You're trying to distribute accurate health information to the people you serve. But generative AI is making it easier than ever to create convincing misinformation. Scams. Fake resources. Deepfakes.
Your community is getting confused. They don't know what to trust. And they're starting to distrust you, because they can't tell the difference between real information and AI-generated fakes.
Benefits navigation and paperwork:
You're a legal aid nonprofit helping clients navigate complex benefits systems. The systems are increasingly automated. Forms are processed by AI. Decisions are made by AI. Your clients can't understand why they were denied. They can't appeal effectively. Because the system is a black box.
The Pattern
Here's what I'm noticing: In every one of these scenarios, AI is being used to make decisions that affect people's lives. And in most cases, nobody's asking:
- Is this system accurate?
- Is it fair?
- What happens when it makes a mistake?
- Who is accountable?
- Can someone understand why a decision was made?
- Can someone appeal the outcome?
These are governance questions. And they're not being asked by the people building the systems. They're being discovered by the people living with the consequences.
That's you.
Why This Makes You Powerful
Here's the thing: The people closest to the work discover failure modes first.
Your staff sees when an eligibility system makes a mistake. Your clients experience when a hiring filter is biased. Your community members get scammed by AI-generated misinformation. You see the real-world impact.
And right now, that insight is incredibly valuable. The broader conversation about AI governance is still being shaped. Funders are asking about it. Regulators are starting to care about it. Vendors are being pressured to explain their systems.
But most of the conversation is happening in rooms with technologists and policy people. Not with the people who actually see the impact.
That's changing. And nonprofits are in a unique position to lead it.
From Practice to Policy
Here's how this works in reality:
A nonprofit identifies that an AI system is making biased decisions. They document it. They escalate it. They work with the vendor to fix it. They share what they learned with other nonprofits. Those nonprofits do the same thing. Pretty soon, there's a pattern. A shared understanding emerges.
That shared understanding becomes a best practice. Best practices become standards. Standards become policy. Policy becomes regulation.
This has happened before. With accessibility. With privacy. With data protection. The people closest to the work discovered the problems. They documented them. They shared them. And eventually, that became the norm.
You have the opportunity to do that with AI governance.
But only if you're intentional about it.
What Intentional Leadership Looks Like
This doesn't mean you need to become an AI expert. Or a policy advocate (though that's cool if you want to). It means:
- Document what you see. When a client gets denied benefits by an AI system and you can't figure out why, write it down. What was the decision? What information did the system have? What happened when you tried to appeal? What would have helped?
- Share what you learn. Talk to other nonprofits. Are they seeing the same thing? Are there patterns? If there are, that's data. That's evidence. That's what funders and regulators actually care about.
- Ask hard questions of vendors. When a vendor pitches you an AI tool, ask: How is this trained? What data are you using? How do you check for bias? What's your accuracy rate? What happens if it makes a mistake? Who's accountable? Most vendors won't have good answers. That's useful information.
- Set your own standards. If you adopt an AI tool, measure it. Document whether it works. Share the results. If it doesn't work, say so. If it creates unintended problems, document those too. Your experience becomes evidence.
- Build practice into policy. Define what your staff can and can’t do with AI. The things you require before adopting a tool. The things you measure after. That's not bureaucracy. That's governance. And it's the foundation for standards and policy.
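None of this documentation requires special tooling—a shared spreadsheet works fine. But if your team does track incidents in software, the fields from the "document what you see" habit can be sketched as a simple, consistent record. Everything below (the class name, the field names, the sample entry) is illustrative, not a formal standard:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AIIncidentRecord:
    """One documented AI decision affecting a client.
    Field names are illustrative, not a formal standard."""
    date_observed: str
    system: str               # e.g. "benefits eligibility screener"
    decision: str             # what the system decided
    inputs_known: str         # what information the system had, as far as you know
    appeal_attempted: bool
    appeal_outcome: str
    what_would_have_helped: str

# Hypothetical example entry, for illustration only
record = AIIncidentRecord(
    date_observed="2026-03-01",
    system="benefits eligibility screener",
    decision="application flagged ineligible",
    inputs_known="income and address only, per the caseworker",
    appeal_attempted=True,
    appeal_outcome="no explanation provided",
    what_would_have_helped="a written reason for the denial",
)

# Serializing to JSON makes records easy to pool and compare
# across organizations when you look for patterns.
print(json.dumps(asdict(record), indent=2))
```

The point of a fixed structure is the "share what you learn" step: if several organizations record the same fields, their individual incidents become comparable evidence.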
The Opportunity
Here's what I think is actually happening: We're at an inflection point. AI is moving fast. Governance is lagging.
That gap is dangerous. Because it means systems are being deployed without adequate safeguards. It means people are being harmed. It means trust is eroding.
But the gap is also an opportunity. Because the organizations that step into that gap—that start asking hard questions, documenting what they learn, and sharing that knowledge—those organizations become the leaders.
They become trusted voices with funders. The ones other nonprofits look to. The ones regulators listen to. The ones that shape what becomes standard.
And they do it not by being technologists. But by being thoughtful. By being accountable. By being willing to say: "We tried this. Here's what worked. Here's what didn't. Here's what we'd do differently."
That's leadership in the age of AI.
Two Paths
You have a choice.
Path #1:
You can be intentional. You can ask hard questions about the AI systems you adopt (or that are imposed on you). You can measure outcomes. You can document what works and what doesn't. You can share what you learn with other nonprofits. You can influence vendors, funders, and eventually regulators.
Result: You stay trustworthy. You learn faster. You shape what becomes standard. You position your organization as a leader.
Path #2:
You can drift. You can adopt tools because vendors are pitching them, or because funders are asking about them, or because you're worried about falling behind. You can hope they work out. You can assume someone else is handling the governance questions.
Result: You might get some efficiency gains. You might also create unintended problems. You might erode trust. You might waste money on tools that don't work. And you'll be reacting to problems instead of shaping solutions.
Where to Start
You don't need a five-year AI strategy. You don't need to hire a consultant (though, full disclosure, I'm biased). You need to start thinking like a governance leader.
Pick one thing. Something that's already happening in your organization or your ecosystem. An AI system you're considering. A vendor tool that's being pushed at you. A problem you're seeing in the community you serve.
Ask yourself:
- What's the real-world impact if this works well?
- What's the real-world impact if it fails?
- Who's accountable if something goes wrong?
- How will we measure success?
- What would make us stop using it?
If you can answer those questions, you're thinking like a governance leader. You're the person who shapes what becomes standard.
That's exactly what your community needs. Not more tools. Not more efficiency. But more thoughtfulness. More accountability. More leadership.
The good news: You don't have to figure this out alone. There's a growing community of nonprofits asking these questions. There are resources. There are frameworks. There are people who've already done the hard thinking.
You just have to be willing to step into it.
The Bigger Picture
Here's what I believe: The organizations that will thrive in the next decade aren't the ones that adopt the most tools. They're the ones that make the smartest decisions about which tools to adopt, how to use them responsibly, and how to measure whether they actually work.
And nonprofits are uniquely positioned to lead that. Because you care about impact. Because you're accountable to communities, not just shareholders. Because you're willing to ask hard questions.
That's your advantage. That's what makes you powerful.
Use it.
Ready to think through your AI strategy? Our Nonprofit Ethical AI Toolkit walks you through the governance questions, the measurement frameworks, and the decision-making process. It's built for nonprofits, by someone who's worked with dozens of organizations doing this work.
Was this blog post helpful? Check out the first one in this series, The Nonprofit Leadership Questions—How to Make AI Decisions Without Losing Your Integrity.