Previous work on moderation has focused on punishing misbehaving users. As a consequence, tools exist to make it easier for moderators to time out, ban, or silence users who violate community norms. While that work on automation is important, our research situates moderation within the context of community growth. Our results suggest that moderation is complex and social, and that algorithmic punishment alone cannot meet moderators' needs. We hope our findings inform the future design of social platforms.
For this study, I worked closely with a PhD student on designing interview questions, conducting interviews, analyzing data, and writing the paper. We decided to make qualitative methods our bread and butter for gathering data. Sure, interviews take time to conduct, achieving a high inter-rater reliability score can be incredibly difficult, and the data can be messy to interpret. Yet despite these challenges, we wanted to take an in-depth look at moderators in a way that hadn't been done before.
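To make the inter-rater reliability challenge concrete, here is a minimal sketch of how two coders' labels on the same interview excerpts might be compared with Cohen's kappa. The labels and the use of scikit-learn here are illustrative assumptions, not our actual codebook or tooling.

```python
# Illustrative sketch only: hypothetical labels from two coders on the same excerpts.
from sklearn.metrics import cohen_kappa_score

coder_a = ["ban", "warn", "ban", "ignore", "warn", "ban"]
coder_b = ["ban", "warn", "warn", "ignore", "warn", "ban"]

# Kappa corrects raw percent agreement for agreement expected by chance;
# values above roughly 0.8 are commonly treated as strong agreement.
kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.2f}")
```

Reaching high agreement usually means several rounds of coding, comparing, and refining the codebook before numbers like this look acceptable.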
It took close to two months of grounded theory coding to work through 1,800+ data points from our 56 interviews. While UX research in industry rarely allows several months for data analysis alone, the focus on academic rigor and argumentation proved to be valuable research experience. In industry, I hope to strike a middle ground between rigor and speed.