For those who would rather just read the code: https://github.com/calebshortt/shillbot
Introduction
We’ve all been there. You’re browsing Reddit and see a post that you’re passionate about. You click the comment box and reach for the keyboard — but hesitate. Reddit’s reputation precedes it. You type anyway and punch out your thoughts. Submit.
*Bliip*
A comment already? You click the icon and read the most disproportionately vicious response to a comment about cats you have ever seen. What a jerk! But you’re not going to play that game. Instead, you view the author’s previous posts and comments. Through your review, a trend of tactless comments and inflammatory responses bubbles to the surface. They’re a troll. You promptly ignore the comment.
Method
Inspecting a user’s previous posts and determining how to respond based on that information is a repeatable process, and if it is repeatable, it is surely automatable. This was my thought as I wrapped up my own analysis, and it spawned a project to figure out the ‘how’. ShillBot is the fruit of that effort.
I broke the problem down into two parts: the first was extracting a target user’s comment history from Reddit; the second was training an appropriate algorithm on a corpus of data representative of the group I wanted to identify.
Extracting the post history from Reddit was more complicated than I initially expected. When you view another user’s page, one of three separate page versions may be returned: the ‘new’ style, the ‘old-new’ style, or the ‘old’ style of Reddit. I had to create a separate parser for each version. Once this was done, I was able to extract the post information and construct a corpus for that specific target user.
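For illustration only — this is not how ShillBot itself does it, since the repo ships a dedicated HTML parser per layout — a minimal sketch of the extraction step can sidestep the three page versions entirely by reading Reddit’s public JSON listing for a user. The field names pulled from each listing item are best-effort assumptions and are guarded with .get():

```python
import requests


def fetch_user_history(username, limit=100):
    """Pull a user's recent posts and comments from Reddit's public JSON listing.

    This avoids HTML parsing entirely; ShillBot itself parses each of the
    three page layouts with its own parser.
    """
    url = f"https://www.reddit.com/user/{username}.json"
    headers = {"User-Agent": "troll-research-sketch/0.1"}  # Reddit throttles default user agents
    resp = requests.get(url, headers=headers, params={"limit": limit}, timeout=10)
    resp.raise_for_status()

    items = []
    for child in resp.json()["data"]["children"]:
        data = child["data"]
        items.append({
            "subreddit": data.get("subreddit", ""),
            # comments carry the parent post's title as 'link_title'; submissions use 'title'
            "title": data.get("link_title", data.get("title", "")),
            "author": data.get("author", ""),
            # comments have 'body'; self-posts have 'selftext'
            "text": data.get("body", data.get("selftext", "")),
        })
    return items


if __name__ == "__main__":
    for item in fetch_user_history("example_user", limit=5):  # placeholder username
        print(item["subreddit"], "|", item["text"][:60])
```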
Training an algorithm on a representative set of Reddit trolls required manual identification. This exercise was both entertaining and depressing, as the posts showcased some of the vilest aspects of Reddit. I was able to find more than enough examples by pulling up ‘hot-topic’ and controversial posts, sorting by controversial (or simply finding the most down-voted post), and then inspecting each suspect’s history to determine whether they were truly a troll. I also created a list of ‘normal’ Reddit users to counterbalance the troll set. In essence, I needed to give the algorithm an accurate representation of both ‘troll’ and ‘not troll’ to classify each set accurately.
If all the algorithm knows how to classify is a hammer, everything starts to look like a hammer.
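To make that discovery step concrete, here is a rough sketch of surfacing candidate accounts from a subreddit’s ‘controversial’ listing; the subreddit name is a placeholder, and the manual review of each candidate’s history described above still applies:

```python
import requests

HEADERS = {"User-Agent": "troll-research-sketch/0.1"}


def controversial_authors(subreddit, period="week", limit=50):
    """Return (author, score) pairs from a subreddit's controversial listing.

    These are only candidates for the troll set; each account's history still
    has to be reviewed by hand before it is labelled.
    """
    url = f"https://www.reddit.com/r/{subreddit}/controversial.json"
    resp = requests.get(url, headers=HEADERS,
                        params={"t": period, "limit": limit}, timeout=10)
    resp.raise_for_status()

    candidates = []
    for child in resp.json()["data"]["children"]:
        data = child["data"]
        candidates.append((data.get("author", "[deleted]"), data.get("score", 0)))
    # Lowest-scoring posts first, so the most down-voted authors surface at the top.
    return sorted(candidates, key=lambda pair: pair[1])


if __name__ == "__main__":
    for author, score in controversial_authors("news")[:10]:  # subreddit is a placeholder
        print(score, author)
```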
The algorithm was trained by combining the post text, post title, post author, and subreddit for all posts in a target user’s history. This provided more context than simply recording the post’s text. For example, including the subreddit and post author (OP) allows the algorithm to identify common trends such as cross-posting from one subreddit to another and trolls commenting on other trolls’ posts to boost controversy.
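As a sketch of how those four fields might be folded into a single training document per user — the exact formatting ShillBot uses may differ — something like the following keeps the subreddit and author visible to a bag-of-words model:

```python
def to_document(item):
    """Flatten one post or comment into a single text snippet.

    Prefixing the subreddit and author keeps them visible to a bag-of-words
    model alongside the title and body. The "SUB:"/"OP:" tokens are purely
    illustrative, not ShillBot's actual format.
    """
    return " ".join([
        f"SUB:{item['subreddit']}",
        f"OP:{item['author']}",
        item["title"],
        item["text"],
    ])


def user_corpus(history):
    """Combine every item in a target user's history into one training document."""
    return " ".join(to_document(item) for item in history)
```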
For this application I used a basic stochastic gradient descent (SGD) classifier as it has traditionally had some success in the text classification space. In the future I may play around with other classifiers to see what results they produce.
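A minimal version of that setup with scikit-learn might look like the sketch below; the two training documents are placeholders, and ShillBot’s actual features and hyper-parameters may differ:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import make_pipeline

# Placeholder corpus: one combined document per labelled user (1 = troll, 0 = not troll).
train_docs = [
    "SUB:news OP:some_user you people are all idiots ...",
    "SUB:aww OP:cat_person what a lovely cat ...",
]
train_labels = [1, 0]

# TF-IDF features feeding a linear model fitted with stochastic gradient descent
# (the default hinge loss makes this a linear SVM).
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    SGDClassifier(max_iter=1000, random_state=42),
)
model.fit(train_docs, train_labels)

print(model.predict(["SUB:news OP:some_user everyone in this thread is an idiot"]))
```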
Results
Promising. I was able to successfully differentiate Reddit trolls from ‘not trolls’ to a reasonable extent. The main limitation is my manual verification process for data points — I still have to check the post history of each suspected troll manually before I can add them to my dataset.
Bias. My search method may be prone to bias, as ‘searching for controversial topics’ depends on the topic du jour. For example, political topics have been heavily represented in Reddit posts as of late, which leads to their over-representation in the classifier’s training set. Even when I am conscious of this effect, the dataset inevitably reflects it.
Scalable. Partially. The system is obviously limited to a reasonable number of requests to Reddit. With that said, it is capable of handling a relatively large number of requests through a standard producer/consumer multi-threaded model: workers complete the scraping and parsing actions, then send the data to the server for analysis.
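A stripped-down sketch of that producer/consumer model, with placeholder scrape functions and target names standing in for ShillBot’s worker and server components, might look like this:

```python
import queue
import threading

NUM_WORKERS = 4
SENTINEL = None

targets = queue.Queue()   # producer side: usernames to inspect
results = queue.Queue()   # consumer side: parsed histories headed for analysis


def scrape_user(username):
    """Placeholder for the scrape-and-parse work a worker performs."""
    return {"user": username, "documents": [f"placeholder history for {username}"]}


def worker():
    while True:
        username = targets.get()
        if username is SENTINEL:      # shutdown signal
            targets.task_done()
            break
        results.put(scrape_user(username))
        targets.task_done()


threads = [threading.Thread(target=worker, daemon=True) for _ in range(NUM_WORKERS)]
for t in threads:
    t.start()

for name in ["user_a", "user_b", "user_c"]:  # placeholder targets
    targets.put(name)
for _ in threads:
    targets.put(SENTINEL)

targets.join()
while not results.empty():
    print(results.get())              # the analysis server would classify these
```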
Future
Always with the Neural Networks! I just like throwing an ANN at the problem to see what it finds — they are great for teasing out relationships that you may not have found otherwise.
Better extraction of data. My data points right now combine post text, post author, post title, and subreddit. I suspect I can tease out more relationships between these aspects if I represent them in a better format; I will consider mapping relationships between subreddits and posters, for example.
Trying to address the bias problem — although I am not entirely sure how.
Conclusion
All in all, this was a fun project with some interesting challenges. It confirms, if there was any doubt, that it is possible to take a corpus of text posts from a selected group and apply a few basic algorithms to answer the question ‘does this text belong in that group’, at least to a certain extent.