Hello Mark,
Thanks for the comment. I think you meant 65% and just missed the percent sign, right? One question for you: did you look at the data cumulatively over the lifetime of the community? As a scientist, I feel responsible for pointing out two issues you might want to watch out for, because I've been through these pitfalls myself and don't want others to repeat my mistakes.
1. Lurkers actually include people who passively consume community content without ever registering. So the fact that 65% of registered members contributed at least once (I will call them participants) does not mean that there are 35% lurkers. If you really count everyone who visited the community and normalize the participants to the total unique visitors, then I think the participant fraction could be much smaller (see the first sketch after this list). Maybe I will do an analysis on that.
2. This is a subject of debate, but I will point it out anyway. Some people believe that the 90-9-1 rule is NOT a cumulative measure. It is more of a point-in-time measure, so the measurement of participants and lurkers should be restricted to a relatively small window of time. To some extent I agree, but the problem is that there is no objective way of choosing the proper window length. The reason I believe this has some validity is that lurkers will eventually delurk (when they find a post that spurs their passion or have a question they need answered). This depletes the 90% of lurkers and increases the fraction of participants. But at the same time, new visitors keep coming to the community (some of them will lurk, some will participate). For the 90-9-1 rule to remain true cumulatively, there would need to be a very precise balance between the member acquisition rate, the delurking rate, the probability of a new member lurking vs. participating, and the acceleration of participation among existing participants. Yet there is nothing to constrain these quantities to maintain that delicate balance; in fact, they are quite independent. So even if the 90-9-1 rule holds at a given point in time, the community may quickly fall out of that precise balance. That is why I believe participation metrics should be windowed. I actually did both the cumulative and the windowed calculation, using 15-day, 30-day, and 60-day windows (see the second sketch below). The windowed numbers do come out closer to 90:9:1, even though the variability is still very large.
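To make point 1 concrete, here is a minimal sketch with hypothetical counts (the 10,000 registered members and 100,000 unique visitors are made-up numbers, not data from any real community) showing how the lurker fraction grows once unregistered visitors are counted:

```python
# Hypothetical counts -- substitute your community's actual numbers.
registered_members = 10_000
participants = int(0.65 * registered_members)  # 65% of registered members posted at least once
unique_visitors = 100_000                      # everyone who ever visited, registered or not

# Lurker fraction among registered members only (what the 65% figure implies)
lurkers_registered = 1 - participants / registered_members   # 0.35

# Lurker fraction normalized to all unique visitors
lurkers_all = 1 - participants / unique_visitors             # 0.935

print(f"lurkers among registered members: {lurkers_registered:.1%}")
print(f"lurkers among all unique visitors: {lurkers_all:.1%}")
```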
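And for point 2, here is a rough sketch of what I mean by a windowed calculation. It assumes you have a list of (user_id, date) posting records plus the set of users known to have visited during the window; the hyper-contributor cutoff of 10 posts is purely illustrative, not the exact threshold I used:

```python
from collections import Counter
from datetime import date, timedelta

def participation_split(posts, visitors, start, window_days, hyper_threshold=10):
    """Classify visitors in [start, start + window_days) as lurkers,
    contributors, or hyper-contributors based on post counts.

    posts    -- iterable of (user_id, post_date) pairs
    visitors -- set of user_ids known to have visited during the window
    """
    end = start + timedelta(days=window_days)
    counts = Counter(uid for uid, d in posts if start <= d < end)

    hyper = {u for u, c in counts.items() if c >= hyper_threshold}
    contributors = set(counts) - hyper
    lurkers = visitors - set(counts)

    n = len(visitors) or 1
    return (len(lurkers) / n, len(contributors) / n, len(hyper) / n)

# Hypothetical usage: compare 15-, 30-, and 60-day windows
# posts = [("alice", date(2011, 3, 2)), ...]; visitors = {"alice", "bob", ...}
# for w in (15, 30, 60):
#     print(w, participation_split(posts, visitors, date(2011, 3, 1), w))
```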
Although we like to think of superusers as more than just those who post a lot, I am speaking with respect to the 90-9-1 rule here, and that is purely based on posting activity. In fact, that is why I used the term hyper-contributor instead of superuser in my post. But we can definitely incorporate kudos and accepted solutions when identifying superusers.
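For what it's worth, one simple way to fold those signals in is a weighted score like the one sketched below; the weights are made up and would need tuning against your own community:

```python
def superuser_score(posts, kudos_received, accepted_solutions,
                    w_posts=1.0, w_kudos=2.0, w_solutions=5.0):
    """Toy scoring function combining posting activity with kudos and
    accepted solutions. Weights are illustrative, not calibrated."""
    return (w_posts * posts
            + w_kudos * kudos_received
            + w_solutions * accepted_solutions)

# e.g. someone with 120 posts, 45 kudos, and 8 accepted solutions
print(superuser_score(posts=120, kudos_received=45, accepted_solutions=8))
```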
However, the social graph is meant for identifying influencers, which is different from superusers. The two are highly correlated most of the time, but superusers are not always influencers. Influencers are identified by social network analysis of the social graph, which is built from who talked to whom in the community. Because every communication carries a potential for influence, we identify influencers by finding the important nodes in this social graph, which is really a communication network within the community. And these influencers may or may not be superusers.
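As a sketch of that idea (the reply records here are hypothetical, and PageRank is just one common centrality measure, not necessarily the one we use): build a directed graph from who replied to whom and rank the nodes:

```python
import networkx as nx

# Hypothetical reply records: (speaker, person_replied_to)
replies = [("alice", "bob"), ("carol", "bob"), ("bob", "alice"),
           ("dave", "bob"), ("alice", "carol")]

# Build the communication network: an edge u -> v means u talked to v.
G = nx.DiGraph()
G.add_edges_from(replies)

# Rank members by PageRank as one proxy for influence in the graph.
influence = nx.pagerank(G)
for user, score in sorted(influence.items(), key=lambda kv: -kv[1]):
    print(f"{user}: {score:.3f}")
```

The top-ranked members of this communication network would be influencer candidates, and you could then check how much that list overlaps with your hyper-contributors.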
I hope this addresses your question.