Using Python and the NLTK to Find Haikus in the Public Twitter Stream

So, after sitting around mining the public Twitter stream and detecting natural language with Python, I decided to have a little fun with all that data by detecting haikus.

The Natural Language Toolkit (NLTK) in Python, basically the Swiss army knife of natural language processing, allows for more than just natural language detection. The NLTK ships with quite a few corpora, including the Carnegie Mellon University (CMU) Pronouncing Dictionary. This corpus has a lot going on, but the feature that piqued my interest was the syllable count for over 125,000 (English) words. With the ability to get the number of syllables for almost every English word, why not see if we can pluck some haikus from the public Twitter stream?
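Here’s roughly the idea, as a minimal sketch; the `syllable_count` helper name is just for illustration, but the stress-digit trick is how the CMU entries encode vowels:

```python
from nltk.corpus import cmudict

# Run nltk.download('cmudict') once if you don't already have the corpus.
pronunciations = cmudict.dict()

def syllable_count(word):
    """Return the syllable count for a word, or None if the CMU dict doesn't know it."""
    word = word.lower()
    if word not in pronunciations:
        return None
    # Every vowel phoneme in the CMU dict ends with a stress digit (0, 1 or 2),
    # so counting those gives us the number of syllables.
    first_pronunciation = pronunciations[word][0]
    return sum(1 for phoneme in first_pronunciation if phoneme[-1].isdigit())

print(syllable_count("twitter"))  # -> 2
```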

We’re going to feed Python the text of a tweet and try to figure out whether it’s a haiku, doing our best to split it up into haiku form.

Building upon natural language detection with the NLTK, we should first filter out all the tweets that are probably not English (to speed things up a little bit).
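The earlier post covers the details; as a stand-in here, a crude stopword-ratio heuristic does the job. The `probably_english` helper and its 0.25 threshold below are just illustrative, not the exact filter from that post:

```python
from nltk.corpus import stopwords
from nltk.tokenize import wordpunct_tokenize

# Run nltk.download('stopwords') once if you don't have the corpus yet.
ENGLISH_STOPWORDS = set(stopwords.words('english'))

def probably_english(text, threshold=0.25):
    """Cheap language filter: what fraction of the tokens are English stopwords?"""
    tokens = [token.lower() for token in wordpunct_tokenize(text)]
    if not tokens:
        return False
    hits = sum(1 for token in tokens if token in ENGLISH_STOPWORDS)
    return float(hits) / len(tokens) >= threshold
```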

Once we have that out of the way, we can dig into the haiku detection.

So what we have now is a function, is_haiku, that returns a list of the three haiku lines if the given string is a haiku, or False if it’s (probably) not a haiku. I keep saying probably because this script isn’t perfect, but it works most of the time.
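Here’s a rough sketch of how that can work, building on the `syllable_count` helper and tokenizer above. The greedy 5/7/5 split below is a simplification of the real, hackier script:

```python
from nltk.tokenize import wordpunct_tokenize  # same tokenizer as the language filter

def is_haiku(text):
    """Return the tweet split into 5/7/5-syllable lines, or False if it doesn't scan."""
    words = [word for word in wordpunct_tokenize(text) if word.isalpha()]

    counts = []
    for word in words:
        count = syllable_count(word)
        if count is None:        # a word the CMU dictionary doesn't know: give up
            return False
        counts.append(count)

    if sum(counts) != 17:        # a haiku needs exactly 5 + 7 + 5 syllables
        return False

    lines, line = [], []
    running, boundaries = 0, (5, 12, 17)
    for word, count in zip(words, counts):
        if len(lines) == 3:      # leftover words after the third line
            return False
        running += count
        line.append(word)
        if running == boundaries[len(lines)]:
            lines.append(' '.join(line))
            line = []
        elif running > boundaries[len(lines)]:
            # A word straddles a line break, so there's no clean 5/7/5 split.
            return False
    return lines if len(lines) == 3 else False
```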

After all that hacky code, it’s just a matter of hooking it up to the public Twitter stream. Borrowing from the public Twitter stream mining code, we can pipe every Tweet into the is_haiku function and if it returns a list, add it to our database.
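Roughly, that hookup can look like the sketch below. The table layout and the pre-4.0 Tweepy StreamListener usage are assumptions to illustrate the flow, not the exact code from the Gist; the authentication and stream wiring are the same as in the mining post further down:

```python
import sqlite3
import tweepy

connection = sqlite3.connect('haikus.db')
connection.execute(
    'CREATE TABLE IF NOT EXISTS haikus (line1 TEXT, line2 TEXT, line3 TEXT)')

class HaikuListener(tweepy.StreamListener):
    def on_status(self, status):
        # Skip tweets that don't look English, then keep anything that scans as 5/7/5.
        if not probably_english(status.text):
            return
        haiku = is_haiku(status.text)
        if haiku:
            connection.execute('INSERT INTO haikus VALUES (?, ?, ?)', haiku)
            connection.commit()
```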

So running this for a while, we actually pick up some pretty entertaining Tweets. I have been running this script for a little while on a micro EC2 instance and created a basic site that shows them in haiku form, as well as a Twitter account that retweets every haiku that it finds.

Some samples of the haikus it finds live on that site and on the Twitter account. What this exercise really underlines is how public your tweets are: there might be some robot out there mining all that stuff. In fact, every tweet is archived by the Library of Congress, so be mindful of what you post.

I have posted the full script as a Gist that puts it all together. If you have any improvements or comments, feel free to contribute!

Mining the Public Tweet Stream for Fun and Profit

The Twitter streaming API is still useful and interesting for those without firehose access; you just need to get your feet wet.

Because you should love Python, today we’ll focus on how to mine tweets from the Twitter streaming API using Python and Tweepy. Before we begin, you need to create a new application on Twitter and get your API keys and secrets ready to roll.

Got them? Good, let’s continue.

Oh also, I’d like to assume you’re using virtualenv. If not, no worries… but please, you’re going to ruin your life.

So let’s set up our dumb little development environment: spin up a virtualenv and pip install tweepy.

Alrighty, so now we have Tweepy set up and we’re just about ready to get down to brass tacks. We’re going to be sucking down 1% of the Twitter feed via the sample streaming API… while that may not sound like a lot, it adds up quickly, so let’s use SQLite to handle the storage.

Really though, the complication pretty much ends there. Tweepy is an amazing library that makes the next part pretty easy.

Before we get into the Python code below, let’s quickly create a table to hold all of the information we want to keep. Fire up the Python interpreter:
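Something along these lines works; the columns here (id, screen name, text, created_at) are just the ones I find handy, not a fixed schema:

```python
import sqlite3

connection = sqlite3.connect('tweets.db')
connection.execute('''
    CREATE TABLE IF NOT EXISTS tweets (
        id          TEXT PRIMARY KEY,  -- the tweet's id_str
        screen_name TEXT,
        text        TEXT,
        created_at  TEXT
    )
''')
connection.commit()
connection.close()
```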

Okay, cool. Now we have a healthy place to store our tweets. Feel free to tweak that schema to capture whatever data you want; just be sure to modify the code below to match.

So, let’s pretend we put the code below into a file called tweet_gobbler.py and say okay, let’s go!
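Here’s a sketch of what tweet_gobbler.py can look like, using the pre-4.0 Tweepy StreamListener API that was current when this was written. The class name and the four-column insert match the table above; fill in your own keys:

```python
# tweet_gobbler.py -- pull the ~1% sample stream into SQLite.
import sqlite3
import tweepy

# Your keys and secrets from the Twitter application you created earlier.
CONSUMER_KEY = '...'
CONSUMER_SECRET = '...'
ACCESS_TOKEN = '...'
ACCESS_TOKEN_SECRET = '...'

class TweetGobbler(tweepy.StreamListener):
    def __init__(self, db_path='tweets.db'):
        super(TweetGobbler, self).__init__()
        self.connection = sqlite3.connect(db_path)

    def on_status(self, status):
        # Store each tweet as it arrives; the id_str primary key avoids duplicates.
        self.connection.execute(
            'INSERT OR IGNORE INTO tweets VALUES (?, ?, ?, ?)',
            (status.id_str, status.user.screen_name, status.text,
             str(status.created_at)))
        self.connection.commit()

    def on_error(self, status_code):
        # Returning True tells Tweepy to keep the connection alive after an error.
        return True

if __name__ == '__main__':
    auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
    auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
    stream = tweepy.Stream(auth=auth, listener=TweetGobbler())
    stream.sample()  # the ~1% public sample stream; blocks and runs until killed
```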

That’s it! Whenever this script is running, it will keep updating the SQLite database tweets.db, and after a few days you will have tons and tons of tweets from all around the world. Lucky you!

I set this up on an Amazon EC2 micro instance, let it run for a few days, and pulled down about 400 MiB of tweets, so it’s not a bad way to build that awesome dataset you’ve been craving.

Now, go save the world.

Oh, also, you can home in on specific users or search terms if you like, using the streaming API’s filter functionality. Maybe more on that later, so stick around; for now, here’s a quick taste:
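This sketch reuses the TweetGobbler listener and auth object from above; the track terms and the user id are placeholders, not anything special:

```python
stream = tweepy.Stream(auth=auth, listener=TweetGobbler())
stream.filter(track=['python', 'nltk'])   # only tweets matching these keywords
# or: stream.filter(follow=['12345'])     # only tweets from these user ids (not screen names)
```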