How to Spot Responsible AI: Three Questions to Start With

Tip Sheet

Over the last year, your organization has likely been hearing more about AI than ever before. If you haven’t started using AI tools in your work yet, chances are good that you’re weighing the pros and cons with your team.

ChatGPT, the generative AI most in the spotlight right now, gained 1 million users in just five days after launching in November of last year, reaching 100 million users in two months. For reference, TikTok took nine months to reach the same benchmark. Despite this meteoric rise in the accessibility of GPT (short for “generative pre-trained transformer”) technologies, the underlying AI techniques that make them possible have been steadily gaining traction over several decades. So, while there is no central body regulating AI development or usage in the United States today, this long track record of AI development has produced agreed-upon principles that define “trustworthy,” or responsible, AI across the globe.

As a charitable organization, you’re held to a high standard of responsibility to your cause, constituents, and team. The tools you adopt in your day-to-day operations should be held to the same standard. Ensuring that you are adopting responsibly developed and managed AI is crucial, but how can you know what to look for? Below are tips to quickly spot responsible AI, keeping your promise to your community and your cause!


1. Responsibility to Your Team: Is the AI Program Human-Centered?

AI was developed by humans, is informed by humans, and ultimately should be a tool that makes our lives easier, making way for more of the kinds of work that only we can do. It is imperative that there be human input and oversight across all uses of AI in your organization. Because AI is a product of our culture, it can also carry harmful biases, making thoughtful oversight even more important when using these tools.

What to look for: Look for AI tools that build on the talents of users rather than displace them. Many developers now publish their own guiding principles on AI. Look for terms like human-centered, human-centric, or user-centric. Of course, the proof is in the product itself. Think critically about the interface of the AI you’re considering:

  • Is it accessible and easy to use for various members of my team (taking into consideration the spectrum of ability and disability)?

  • Are there various opportunities to check for bias in generated content?

  • Is the AI tool solving a known problem for users, freeing them up for more complex tasks?

 

2. Responsibility to Your Constituents: Are Privacy Principles Clear?

Any form of machine learning is reliant upon the information it is given by users and developers to become a “better” and “smarter” tool. This raises the question of exactly how the data that you feed into AI systems is used, stored, and shared. When handling your donors’ personal and financial information, privacy is your top priority, closely followed by transparency.

What to look for: When adopting a new AI tool, you should closely read all published privacy resources and guidelines from the developers on how user-supplied information will be used by the AI. This information should be readily available and easily shared amongst your team and donor base. Look for principles that follow these common themes:

  • Transparency: There should be clarity around how developers use and disclose data, what rights people have, and how to exercise them. Be transparent yourself, as a user, letting donors know how you’re implementing AI tools in your fundraising or prospecting strategies. Like in any relationship, communication is key!

  • Choice: Individuals should be given a choice as to how they want, or don’t want, their data to be used.

  • Correction / Deletion: Individuals should be allowed to have their data deleted or corrected.

  • Accessibility: Personal data should be easily and quickly accessible when requested.

  • Security: Reasonable and appropriate steps should be taken to secure personal data.

 

3. Responsibility to Your Cause: Does It Align With Your Values?

Whether you are an environmental or human rights organization, your considerations around AI go beyond your day-to-day use. You’re likely considering the wider implications that the rise of machine learning and AI technologies has for our society, our world, and our power to make an impact.

What to look for: As with any product, look to the source to determine whether the creation and maintenance of that product aligns with your core values as an organization:

  • Does the company that produced the AI tool report on its environmental impact? One place to look is its carbon footprint and the efforts it makes to neutralize it.

  • Does the company value the rights of its workers? Is it transparent in reporting on its structure, governance, and labor practices?

Above all, ask yourself if there is an application for the AI tool that will drive your impact forward. This could mean engaging more meaningfully with donors, allowing you to raise more and expand your reach. The tool may help you to limit administrative hours for in-the-field team members or automate parts of your workstream to allow staffers to focus on new and innovative programs and approaches.

 

What’s Next?

Adoption of any new technology involves thoughtful change management. Bring these tips and considerations to your team to start a conversation around pain points that AI could address, areas of concern or discomfort, and when and how to reassess your uses of AI as the technology continues to evolve.

 

Curious how Blackbaud is addressing AI and other innovative technologies? Learn more about our Intelligence for Good approach, with a focus on accessible, powerful, and responsible AI to fuel social impact. Our Privacy Resource Center also offers guidance and best practices for data privacy and compliance. Looking for more resources on how AI can become a part of your work? Check out the Chart Your Course: Navigating the New AI Landscape webinar series, with expert insights into everything from strategic planning around innovation, privacy and ethics, and creating your own low-code apps, to the fundamentals of how AI can power your fundraising.

 

Learn how Blackbaud can level up your team.

 

Schedule a Demo