Community AI Dos and Don'ts: A field guide to AI in communities

Cody Crumrine

Imagine for a moment that you’re on your way to a conference. Or maybe it’s a workshop. Hey, maybe it’s just a happy hour. Whatever it is, you’re going for the people. You carved time out of your busy life for this, and you’re on your way to network. Or to sell. Or to learn. Or to have fun. Or to encourage someone. Or because you need some encouragement yourself. Whatever it is - you can’t do it alone. You need to be where the people are.

Upon arrival, you walk into the room.

And you’re greeted by… a robot?

That’s right. You’ve found yourself, somewhat jarringly, in some kind of science fiction movie. Instead of a good old-fashioned human, it’s a shiny metal facsimile welcoming you in.

How do you feel about that?

Hm…

Probably depends on what the robot does.

Why are we talking about robots?

This may not be playing out in clubs or at conferences (yet), but it’s all over their virtual counterparts. Online communities encounter AI daily: in the platforms they run on, in the tools that support them, and even from the members themselves.

We can see the prevalence of AI in the community space on the CMX Blog. An article titled “How to Elevate your Community on Slack” lists 11 tools that add community-friendly features to Slack workspaces. Ten of those eleven (including my own company, Knobi) mention AI in their descriptions.

Now, that’s not much of a surprise. AI features have found their way into just about every application over the last two years. But communities are uniquely human-centric. The value prop of a community is essentially that you’ll learn a skill, make a connection, find information, or get support better or faster by interacting with other people.

If a bunch of us are out here bringing AI into that space, we had better be thinking hard about how to do it well. Are we going to enhance and encourage human interaction? Or poison the well with AI-generated slop?

Informed by my own experience and conversations with other founders, as well as community owners, managers, and members, this article is an attempt to lay out some ground rules for using AI in online communities that should help keep us on the right track.

Rule #1 - Let Bots Be Bots

Story time.

At Knobi, we had a customer with over 150 channels in their community Slack workspace. One of the community managers there explained that “channel discovery” was a big issue for new members. In a large community, breaking conversation out into many context-specific channels helps with organization, but it can be hard to figure out where to start.

We built a Slack bot that watched their #intros channel and used an LLM to review intro posts from new users, compare them to the list of channel topics/purposes, and recommend a few places to get started.

Now, if you’ve worked with LLMs, you know that if you paste in an intro + list of channels and ask for recommendations, you’re not going to get just a list of recommendations. The model is trained on written human interactions, so our first version of this also wrote a warm little welcome message for each user. “Hey Alice! So glad to have you here. Your background in XYZ sounds really interesting! Here are a few channels….”

“Cool,” we thought. “Seems nice.” 

But we got terrible feedback.

Turns out (and this should have been obvious) that a user who just joined a new community to, you know, interact with people wants a warm greeting from a real human - not a bot pretending to be one. The welcome message came across as an inauthentic replacement for human interaction.

Did we scrap it?

No… the recommendations were helpful. So we modified the bot to cut out the warm greeting. Just “Hi Bob. Here are some channels you might want to check out:…”

Feedback turned positive immediately. Users appreciated simple help from a bot when it was obviously a bot and when it only did what they expect a bot to do: in this case, give some helpful info. They did NOT appreciate a bot attempting to provide something they expect from a human (a warm greeting).
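To make that concrete, here’s a minimal sketch of what the fixed version’s prompt might look like, using the OpenAI Python SDK. The model choice, function names, and prompt wording are my illustrative assumptions - not Knobi’s actual implementation.

```python
# Illustrative sketch only - not production code. Assumes the OpenAI
# Python SDK; the model and prompt wording are placeholders.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You recommend Slack channels to new community members. Given a "
    "member's intro post and a list of channels with their purposes, "
    "reply with ONLY 2-4 channel names, each with one line on why it "
    "fits. Do not greet the user, do not add pleasantries, and do not "
    "pretend to be a person."
)

def recommend_channels(intro_post: str, channels: dict[str, str]) -> str:
    """Suggest starter channels for a new member based on their intro."""
    channel_list = "\n".join(
        f"#{name}: {purpose}" for name, purpose in channels.items()
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable chat model works here
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Intro post:\n{intro_post}\n\nChannels:\n{channel_list}"},
        ],
    )
    return response.choices[0].message.content
```

The load-bearing part is the “do not greet” instruction: left to its own devices, the model plays host, and that’s exactly what users rejected.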

Rule #2 - Keep it Transactional

Let’s dig into (or shall we delve? iykyk…) that idea of “things we expect bots to do” vs. “things we expect humans to do”.

Are there clear-cut lines there? And if there are, are they getting blurred as models get more capable?

In a recent chat, Zach Hawtoff, who’s building tools for Slack communities at Tightknit, shared a helpful approach. You can label most needs a community member has as either “transactional” or “non-transactional”. Transactional needs are those where you need to get something, and it doesn’t much matter how you get it. 

Example: My son just got a 3D printer for Christmas. As I help him figure it out (and have a lot of fun with it myself), I’ll probably join a community or two of 3D printing enthusiasts. We’ll have lots of transactional needs, like finding STLs to print, instructions for prepping them for our specific device, examples and tutorials… Other members can help us find those things, but if an AI tool comes along that helps us find them faster? Great!

But we also might need help troubleshooting weird results that don’t seem to match the tutorials. We might want to share ideas and geek out with others about things we could build. We’ll probably need some encouragement when things go poorly. These needs fall in the “non-transactional” category. There’s not a clear, specific “result” to get. Instead it’s the medium - human interaction - that’s important. 

You can probably fake a lot of those non-transactional interactions with AI as well - but our experience so far is that users will likely reject them.

And really, that is the best-case scenario. If members did accept and embrace AI replacements for those more human-oriented needs, what would that mean for the future of your community?

And that brings us nicely to…

Rule #3 - Connect, Don’t Replace

So are we saying AI has no place in the non-transactional human-to-human side of community?

Not so fast.

There’s a powerful role AI can still play in encouraging human interaction rather than replacing it.

Margaux Miller has pointed out that one of the most exciting things AI can do in a community setting is help users find each other - connect one member to another, then get out of the way!

Story time again: We worked on a project that specifically targeted unanswered questions. In an active community, there are always members willing and eager to help you out, but sheer volume can prevent them from even seeing a post from someone in need of their experience. Generally, if you don’t get an answer to a question within about 24 hours of posting it, your chances of ever finding help start to drop dramatically.

This is a great opportunity to leverage AI to find help for those questions that got missed. A bot that can search through conversation archives and find answers to open questions is an ideal LLM+RAG (retrieval-augmented generation) use case.
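A rough sketch of the naive version: embed the archive, embed the open question, and hand the nearest matches to an LLM to draft an answer. (The embedding model, helper names, and in-memory search below are illustrative assumptions.)

```python
# Naive RAG sketch: retrieve similar past messages, answer directly.
# Helper names and model choices are illustrative assumptions.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    """Embed a batch of texts into vectors."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

def answer_from_archive(question: str, archive: list[str], k: int = 5) -> str:
    """Draft an answer to an open question from past community messages."""
    archive_vecs = embed(archive)   # in production you'd precompute and index these
    q_vec = embed([question])[0]
    # Cosine similarity between the question and every archived message
    sims = archive_vecs @ q_vec / (
        np.linalg.norm(archive_vecs, axis=1) * np.linalg.norm(q_vec)
    )
    top = [archive[i] for i in np.argsort(sims)[-k:][::-1]]
    context = "\n---\n".join(top)
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer the question using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content
```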

But it’s a risk too: if a bot chimes in and answers those questions, are we cutting off opportunities for members to help each other out? That’s the last thing we want. 

We ended up prioritizing the “connect, don’t replace” side of that trade-off. First, the bot waited at least 48 hours to give a human user the chance to help out first. If a question was still unanswered, the bot went to work. But instead of looking for a direct answer to the question, it looked for members who had posted relevant information in the past. If it found helpful results, it replied to the original post and shared the info it found, but it also tagged the members who had posted that information and invited them into the conversation.
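Here’s a sketch of that flow. The question object, message store, and Slack client are illustrative stand-ins (the retrieval could be backed by the embedding search sketched above), not a real API:

```python
# Sketch of the "connect, don't replace" flow. The question object,
# message store, and Slack client are illustrative stand-ins.
from datetime import datetime, timedelta, timezone

WAIT_PERIOD = timedelta(hours=48)

def handle_open_question(question, message_store, slack):
    """If a question is still unanswered after 48h, invite likely helpers."""
    age = datetime.now(timezone.utc) - question.posted_at
    if age < WAIT_PERIOD or question.reply_count > 0:
        return  # give humans first crack at it

    # Find past messages relevant to the question, keeping track of authors
    relevant = message_store.search_similar(question.text, limit=3)
    if not relevant:
        return

    links = "\n".join(f"- {msg.permalink}" for msg in relevant)
    helpers = " ".join(f"<@{msg.author_id}>" for msg in relevant)
    slack.post_reply(
        channel=question.channel,
        thread_ts=question.ts,
        text=(
            "This one slipped through! Here are some past posts that "
            f"look related:\n{links}\n\n{helpers} - you've covered "
            "similar ground before; any chance you can help here?"
        ),
    )
```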

We were thrilled with the results. Often the bot successfully reanimated dead threads by inviting members who were happy to help but had simply missed the question when it was originally posted. They’d chime in saying “Hey, yes I can help! DM me! Let’s have a call!”

This is probably my favorite community-oriented use case so far. I love seeing AI “keep up” with the posts at a scale most humans can’t, but then use that ability to bring humans together and let them work their magic.

What about the Members?

Hopefully the rules above give you a framework to use when you’re thinking about how to use AI (or not) in your community. But what about members and the AI tools they’re already using that are outside of your control? 

If a member uses ChatGPT to draft messages, or uses Perplexity to research another user's question, is that good? Is it spam? Should you try to control it? Can you even?

The answers are going to vary based on your community, its membership and your goals for it. A great starting point is to acknowledge it and start a conversation. 

For that, I will point you toward an excellent article by Jenny Weigle-Bonds about five communities that have already added language about AI to their community guidelines. I highly recommend you start there - see what others are doing - and use that as a foundation for your own policies.

Closing Thoughts

AI is here to stay - and it will continue to play a role in all kinds of community interactions.

As community owners, managers, leaders and members we have an opportunity to be intentional about the role AI plays. Let’s commit to leveraging AI in ways that enhance human connection, not replace it.
