Here at HumanGeo we do all sorts of interesting things with sentiment analysis and entity resolution. Before you get to have fun with that, though, you need to bring data into the system. One data source we've recently started working with is reddit.
Compared to the walled gardens of Facebook and LinkedIn, reddit's API is as open as open can be: everything is nice and RESTful, rate limits are sane, the developers are open to enhancement requests, and one can do quite a bit without needing to authenticate.
The most common objects we collect from reddit are submissions (posts) and comments. A submission can be either a link or a self post with a text body, and can have an arbitrary number of comments. Comments contain text, as well as references to their parent nodes (if they're not root nodes in the comment tree). Pulling this data is as simple as
GET http://www.reddit.com/r/washingtondc/new.json. (Protip: pretty much any view on reddit has a corresponding API endpoint that can be generated by appending '.json' to the URL.)
With little effort a developer could hack together a quick 'n dirty reddit scraper. However, as additional features appear and collection breadth grows, the quick 'n dirty scraper becomes more dirty than quick, and you discover bugs (ahem, "features") that others utilizing the API have already encountered and possibly addressed. API wrappers help consolidate communal knowledge and best practices for the good of all. We considered several and, being a Python shop, settled on PRAW (the Python Reddit API Wrapper).
With PRAW, getting a list of posts is pretty easy:
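The original snippet isn't preserved in this copy of the post, but it would have looked something like the sketch below, which assumes the PRAW 2.x-era method names (get_subreddit, get_new); newer PRAW versions spell this reddit.subreddit('...').new() instead.

```python
def format_post(score, title):
    """Render one line of output, e.g. '209 :: some title'."""
    return '%d :: %s' % (score, title)

def main():
    import praw  # third-party: pip install praw

    r = praw.Reddit(user_agent='parse_bot_2000 (demo)')
    for post in r.get_subreddit('washingtondc').get_new(limit=5):
        print(format_post(post.score, post.title))

# main()  # hits the live reddit API; requires network access
```

Running it prints one "score :: title" line per post, as shown below.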
```
$ python parse_bot_2000.py
209 :: /r/WashingtonDC's Official Guide to the City!
29 :: What are some good/active meetups in DC that are easy to join?
17 :: So no more tailgating at the Nationals park anymore...
3 :: Anyone know of a juggling club in DC
2 :: The American Beer Classic: Yay or Nay?
```
Now, let's try something a little more complicated. Our mission, if we choose to accept it, is to capture all incoming comments to a subreddit. For each comment we should collect the author's username, the URL for the submission, a permalink to the comment, as well as its body.
Here's what this looks like:
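A sketch of what that collector might look like, again assuming PRAW 2.x-era names (get_subreddit, get_comments); save_comment here just bundles the fields together.

```python
def save_comment(author, submission_url, permalink, body):
    """Stub: bundle the fields we collect. Production code would persist them."""
    return {'author': author, 'submission_url': submission_url,
            'permalink': permalink, 'body': body}

def stream_comments(subreddit_name='washingtondc'):
    import praw  # third-party: pip install praw

    r = praw.Reddit(user_agent='comment_collector (demo)')
    for comment in r.get_subreddit(subreddit_name).get_comments(limit=None):
        save_comment(str(comment.author),
                     comment.submission.url,
                     comment.permalink,
                     comment.body)

# stream_comments()  # hits the live reddit API; requires network access
```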
That was pretty easy. For the sake of this demo the
save_comment method has been stubbed out, but anything can go there.
If you run the snippet, you'll observe the following pattern:
```
... comment ...
... comment ...
[WAIT FOR A FEW SECONDS]
... comment ...
... comment ...
[WAIT FOR A FEW SECONDS]
... comment ...
... comment ...
[WAIT FOR A FEW SECONDS]
(repeating...)
```
This process also seems to be taking far longer than a few plain HTTP requests should. As anyone working with large amounts of data should do, let's quantify this.
Using the wonderful, indispensable IPython:
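The notebook output itself didn't survive in this copy; a reconstruction with IPython's %time magic (function name ours, output format approximate, wall-clock figure as quoted at the end of the post) went something like:

```
In [1]: %time stream_comments()
Wall time: 403 s
```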
Ouch. While this difference in run-times is fine for a one-off, contrived example, such inefficiency is disastrous when dealing with large volumes of data. What could be causing this behavior?
Each API request to reddit must be separated by a two-second delay, per the API rules, so for the highest performance the number of API calls must be kept as low as possible. To that end, PRAW uses lazy objects, making API calls only when (and if) a piece of information is needed.
Perhaps we're doing something that is triggering additional HTTP requests. Such behavior would explain the intermittent printing of comments to the output stream. Let's verify this hypothesis.
To see the underlying requests, we can override PRAW's default log level:
And what does the output look like?
```
DEBUG:requests.packages.urllib3.connectionpool:"PUT /check HTTP/1.1" 200 106
DEBUG:requests.packages.urllib3.connectionpool:"GET /comments/2ak14j.json HTTP/1.1" 200 888
.. comment ..
DEBUG:requests.packages.urllib3.connectionpool:"GET /comments/2aies0.json HTTP/1.1" 200 2889
.. comment ..
DEBUG:requests.packages.urllib3.connectionpool:"GET /comments/2aiier.json HTTP/1.1" 200 14809
.. comment ..
DEBUG:requests.packages.urllib3.connectionpool:"GET /comments/2ajam1.json HTTP/1.1" 200 1091
.. comment ..
.. comment ..
.. comment ..
```
Those intermittent requests for individual comments back up our claim. Now, let's see what's causing this.
Prettifying the response JSON yields the following schema (edited for brevity):
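The response to GET /comments/2ak14j.json comes back as two listings - the submission itself, then its comment tree. Sketched here with field names from reddit's public API and values elided:

```json
[
  {"kind": "Listing", "data": {"children": [
    {"kind": "t3", "data": {
      "id": "2ak14j",
      "title": "...",
      "url": "...",
      "permalink": "/r/washingtondc/comments/2ak14j/..."
    }}
  ]}},
  {"kind": "Listing", "data": {"children": [
    {"kind": "t1", "data": {
      "id": "...",
      "author": "...",
      "body": "...",
      "link_id": "t3_2ak14j"
    }}
  ]}}
]
```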
Let's compare that to what we get when listing comments from the subreddit:
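A single comment object from the subreddit's comments feed (GET /r/washingtondc/comments.json), again sketched with values elided - note there's no title, no submission URL, and no permalink, only a link_id pointing back at the parent submission:

```json
{"kind": "t1", "data": {
  "id": "...",
  "author": "...",
  "body": "...",
  "link_id": "t3_2ak14j",
  "subreddit": "washingtondc"
}}
```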
Now we're getting somewhere - there are fields in the per-comment response that aren't in the subreddit listing's. Of the four fields we're collecting, the submission URL and the permalink are not returned by the subreddit comments endpoint, so accessing them causes lazy evaluation to fire off additional requests. If we can infer these values from the data we already have, we can avoid wasting time querying for each comment.
Submission URLs are a combination of the subreddit name, the post ID, and title. We can easily get the post ID fragment:
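A comment's link_id is the submission's "fullname" - a type prefix plus the base-36 post ID - so a small helper (name ours) can peel the ID off:

```python
def post_id_from_link_id(link_id):
    # link_id looks like 't3_2ak14j'; the part after the
    # underscore is the submission's base-36 ID.
    return link_id.split('_', 1)[1]

# post_id_from_link_id('t3_2ak14j') -> '2ak14j'
```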
However, there is no title returned! Luckily, it turns out that it's not needed.
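As a sketch (helper name ours): reddit will happily resolve a comments URL with no title slug at all.

```python
def submission_url(subreddit, post_id):
    # reddit redirects a title-less URL to the canonical one,
    # so the subreddit name and post ID are all we need.
    return 'http://www.reddit.com/r/%s/comments/%s/' % (subreddit, post_id)

# submission_url('washingtondc', '2ak14j')
# -> 'http://www.reddit.com/r/washingtondc/comments/2ak14j/'
```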
Great! This also gets us most of the way to constructing the second URL we need - a permalink to the comment.
Maybe we can append the comment's ID to the end of the submission URL?
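The naive concatenation would look like this (the comment ID 'cf0d5mj' is hypothetical, for illustration only):

```python
# Tack the comment ID straight onto the submission URL.
submission_url = 'http://www.reddit.com/r/washingtondc/comments/2ak14j/'
naive_permalink = submission_url + 'cf0d5mj'
# -> 'http://www.reddit.com/r/washingtondc/comments/2ak14j/cf0d5mj'
# reddit reads that trailing segment as the title slug, not a comment ID.
```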
Sadly, that URL doesn't work, because reddit expects the submission's title to precede the ID. Referring to the subreddit comment's JSON object, we can see that the title is not returned. This is curious: why is the title important? reddit already has a globally unique ID for the post, and can display the post just fine without it (as demonstrated by the code sample immediately preceding this). Perhaps they wanted to make it easier for users to identify a link, and are simply parsing a forward-slash-delimited series of parameters. If we put the comment ID in the appropriate position, the URL should be valid. Let's give it a shot:
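A sketch (helper name ours; the comment ID 'cf0d5mj' is hypothetical) that drops a throwaway placeholder into the title slot:

```python
def comment_permalink(subreddit, post_id, comment_id):
    # '_' occupies the title slot; reddit only cares that the
    # comment ID lands in the right slash-delimited position.
    return ('http://www.reddit.com/r/%s/comments/%s/_/%s'
            % (subreddit, post_id, comment_id))

# comment_permalink('washingtondc', '2ak14j', 'cf0d5mj')
# -> 'http://www.reddit.com/r/washingtondc/comments/2ak14j/_/cf0d5mj'
```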
Following that URL takes us to the comment!
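Putting the pieces together - a sketch assuming PRAW 2.x comment attributes (link_id, subreddit, id, author, body) - both URLs fall out of fields the listing response already contains, so no lazy fetches fire:

```python
from collections import namedtuple

def fast_comment_fields(comment):
    # Everything below comes from the subreddit comments listing itself,
    # so touching these attributes triggers no extra HTTP requests.
    post_id = comment.link_id.split('_', 1)[1]
    subreddit = str(comment.subreddit)
    submission_url = ('http://www.reddit.com/r/%s/comments/%s/'
                      % (subreddit, post_id))
    permalink = '%s_/%s' % (submission_url, comment.id)
    return str(comment.author), submission_url, permalink, comment.body

# Quick smoke test with a stand-in comment (IDs hypothetical):
FakeComment = namedtuple('FakeComment', 'author subreddit link_id id body')
demo = fast_comment_fields(
    FakeComment('someone', 'washingtondc', 't3_2ak14j', 'cf0d5mj', 'hello'))
```

The namedtuple at the bottom is just a stand-in; in the collector you'd pass PRAW's comment objects straight in.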
Let's see how much we've improved our execution time:
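A reconstructed session (IPython %time; function name ours, output format approximate, wall-clock figure as quoted just below):

```
In [2]: %time stream_comments_fast()
Wall time: 3.6 s
```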
Wow! From 403 seconds down to 3.6 - a roughly 112x speedup. Deploying this improvement to production not only increased the volume of data we were able to process, but also had the side benefit of reducing the number of 504 errors we encountered during reddit's peak hours. Remember: always be on the lookout for ways to improve your stack. A bunch of small wins can add up to something significant.
[Does this sort of stuff interest you? Love hacking and learning new things? Good news - we're hiring!]