I'd just like to hear: do you use AI-based moderation in your community? How well does it work, and what kinds of operations does the AI handle?
There are several "AI moderators" out there; here is one example: https://utopiaanalytics.com/services/utopia-ai-moderator/
@Enidin - In general, Lithium's native spam moderation is also AI-based (though it doesn't truly keep learning as you'd expect); in fact, most systems nowadays are AI-based, be it search, personalization, recommendation engines, etc. Coming to your question - using this tool, or any AI-based tool, means feeding it a set of rules at first, after which it learns along the way. One challenge with moderation will always be that some human intervention is still required (a little less with AI), because any system fails when it runs into conundrums (even Teslas get confused sometimes). This is something that needs to be observed over time to see how well it works. In the end, the system keeps learning, but the spammers will still be a step ahead in any case.
(why can't I see the quote button anymore in the text editor...?)
Thanks @VarunGrazitti. I can imagine there might be "funny" situations where the AI tries to work out the context and makes the wrong decision. Negativity, hate speech and so on, just like @MrB77 mentioned, are very important aspects, but I would also like to know: is it possible to use AI in a support community? Normally help comes from peers, but sometimes there are situations where the help/statement should come from the company. Does AI recognise these situations - is that possible at all?
EDIT: Thanks @MrB77 for the tip!
I think you can build something simple very fast. Let's think about a simple algorithm that alerts (or flags) if someone mentions the company in combination with an @ mention or address words like "to", "dear", etc. You can also have a list of words and word combinations (similar to the spam list) that does the same (e.g. words related to "billing" or "outage", where the community normally can't help).
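That rule-of-thumb can be sketched in a few lines. A minimal sketch, assuming a hypothetical company name ("acme") and a hypothetical escalation word list - the real lists would come from your own spam/keyword configuration:

```python
import re

# Hypothetical lists -- replace with your community's own configuration
COMPANY_NAMES = {"acme"}                      # assumed company name(s)
ESCALATION_WORDS = {"billing", "outage", "refund"}  # topics peers can't help with
ADDRESS_WORDS = {"dear", "to"}                # words that address someone directly

def should_flag(post: str) -> bool:
    """Flag a post for moderator attention if it mentions the company
    in combination with an @ mention / address word, or if it contains
    an escalation word from the list."""
    text = post.lower()
    words = set(re.findall(r"[a-z@]+", text))
    mentions_company = any(name in text for name in COMPANY_NAMES)
    addresses_someone = "@" in post or bool(ADDRESS_WORDS & words)
    has_escalation_word = bool(ESCALATION_WORDS & words)
    return (mentions_company and addresses_someone) or has_escalation_word
```

For example, `should_flag("Dear @Acme, my billing is wrong")` would flag the post, while an ordinary peer-to-peer reply would pass through untouched. The word "to" is obviously noisy on its own, which is why it only triggers in combination with a company mention.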
By the way: A very "manual" solution for the problem is to give Superusers the ability to escalate posts/threads so the mods are notified and can answer 😉
And there is another discussion going on within the community here
Currently, moderation is done really well by our people in terms of removing sensitive information, issuing conduct warnings and so on, BUT moving all of the posts into the correct area is an endless task that never gets completed.
I have put together a business proposal for an AI setup to ensure content lands on the right forum board, but at this stage it has progressed no further than the idea stage.
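The board-routing idea boils down to text classification. A minimal sketch of the concept, assuming hypothetical board names and training posts (a real setup would train on your community's existing, already-sorted posts):

```python
import math
from collections import Counter, defaultdict

# Hypothetical training data: (post text, board it belongs on)
TRAINING = [
    ("my router keeps dropping wifi", "Networking"),
    ("cannot connect to the wireless network", "Networking"),
    ("I was charged twice on my invoice", "Billing"),
    ("how do I get a refund on my bill", "Billing"),
]

def train(examples):
    """Count word frequencies per board (a tiny naive-Bayes-style model)."""
    counts = defaultdict(Counter)
    for text, board in examples:
        counts[board].update(text.lower().split())
    return counts

def predict_board(model, post):
    """Pick the board whose word distribution best fits the post,
    using log-probabilities with add-one smoothing."""
    words = post.lower().split()
    best, best_score = None, float("-inf")
    for board, counter in model.items():
        total = sum(counter.values())
        score = sum(math.log((counter[w] + 1) / (total + 1)) for w in words)
        if score > best_score:
            best, best_score = board, score
    return best
```

With the toy data above, `predict_board(train(TRAINING), "wifi keeps dropping")` routes to "Networking". In practice you would keep the human moderators in the loop - have the model suggest a board and let a mod confirm, rather than moving posts automatically.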
Another area where AI has yet to really learn is handling sarcasm. There have been plenty of times when brands were publicly embarrassed because of this. One funny example I was reminded of is this 🙂
So unless it really starts to learn human nature (which will take more than a few if's and else's), human intervention will still be required.