Using Python and the NLTK to Find Haikus in the Public Twitter Stream

So after sitting around mining the public Twitter stream and detecting natural language with Python, I decided to have a little fun with all that data by detecting haikus.

The Natural Language Toolkit (NLTK) in Python, basically the Swiss army knife of natural language processing, allows for more than just natural language detection. The NLTK offers quite a few corpora, including the Carnegie Mellon University (CMU) Pronouncing Dictionary. This corpus has a number of features, but the one that piqued my interest was the syllable count for over 125,000 (English) words. With the ability to get the number of syllables for almost every English word, why not see if we can pluck some haikus from the public Twitter stream!
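
For a quick taste of what the corpus gives you, here is a small sketch (the word choices are arbitrary): each entry maps a word to one or more phoneme sequences, and the vowel phonemes carry a stress digit, so counting the digits gives a syllable count.

```python
from nltk.corpus import cmudict  # the corpus itself comes via nltk.download('cmudict')

pronunciations = cmudict.dict()  # word -> list of possible phoneme sequences

for word in ('python', 'refrigerator'):
    phonemes = pronunciations[word][0]  # take the first pronunciation
    syllables = sum(1 for phoneme in phonemes if phoneme[-1].isdigit())
    print(word, syllables)  # should print 2 and 5 syllables respectively
```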

We’re going to feed Python a Tweet as a string and try to figure out whether it’s a haiku, doing our best to split it up into haiku form.

Building upon natural language detection with the NLTK, we should first filter out all the Tweets that are probably not English (to speed things up a little bit).
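
In sketch form, leaning on the detect_language() helper worked out in the language-detection write-up further down (the name is from that sketch, not a library call):

```python
def probably_english(text):
    # detect_language() is the stopword-overlap detector sketched later on.
    return detect_language(text) == 'english'
```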

Once we have that out of the way, we can dig into the haiku detection.
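
Here is a minimal sketch of one way such a function can look: count syllables per word with the CMU Pronouncing Dictionary and greedily break the Tweet into 5/7/5 lines. The names are my own, and a real version needs more care around punctuation, numerals, and words missing from the dictionary.

```python
import re

from nltk.corpus import cmudict  # requires nltk.download('cmudict')

PRONUNCIATIONS = cmudict.dict()


def count_syllables(word):
    """Return the syllable count for a word, or None if it isn't in the CMU dictionary."""
    pronunciations = PRONUNCIATIONS.get(word.lower())
    if not pronunciations:
        return None
    # Vowel phonemes carry a stress digit (0, 1 or 2), so counting digits counts syllables.
    return sum(1 for phoneme in pronunciations[0] if phoneme[-1].isdigit())


def is_haiku(text):
    """Return the three haiku lines if the text splits cleanly into 5/7/5 syllables, else False."""
    words = re.findall(r"[a-zA-Z']+", text)
    targets = [5, 7, 5]
    lines, current, syllables_so_far = [], [], 0
    for word in words:
        if len(lines) == 3:
            return False  # words left over after the third line
        syllables = count_syllables(word)
        if syllables is None:
            return False  # unknown word -- bail out
        current.append(word)
        syllables_so_far += syllables
        if syllables_so_far == targets[len(lines)]:
            lines.append(' '.join(current))
            current, syllables_so_far = [], 0
        elif syllables_so_far > targets[len(lines)]:
            return False  # a word straddles a line break
    return lines if len(lines) == 3 else False
```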

So what we have now is a function, is_haiku, that returns a list of the three haiku lines if the given string is a haiku, or False if it’s (probably) not a haiku. I keep saying probably because this script isn’t perfect, but it works most of the time.

After all that hacky code, it’s just a matter of hooking it up to the public Twitter stream. Borrowing from the public Twitter stream mining code, we can pipe every Tweet into the is_haiku function and if it returns a list, add it to our database.
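
As a rough sketch of that glue (stream_tweets() here is a stand-in for whatever the mining code yields, and the SQLite schema is made up):

```python
import sqlite3

connection = sqlite3.connect('haikus.db')
connection.execute(
    'CREATE TABLE IF NOT EXISTS haikus (line1 TEXT, line2 TEXT, line3 TEXT)'
)

# stream_tweets() is a placeholder for the generator from the stream-mining code;
# it is assumed to yield the text of each (probably English) Tweet.
for tweet_text in stream_tweets():
    haiku = is_haiku(tweet_text)
    if haiku:
        connection.execute('INSERT INTO haikus VALUES (?, ?, ?)', haiku)
        connection.commit()
```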

Running this for a while actually picks up some pretty entertaining Tweets. I have been running the script for a little while on a micro EC2 instance and created a basic site that shows the results in haiku form, as well as a Twitter account that retweets every haiku it finds.

Some samples of found haikus:

So it can be pretty interesting. What this exercise really underlines is just how public your Tweets are. There might be some robot out there mining all that stuff. In fact, every Tweet is archived by the Library of Congress, so be mindful of what you post.

I have posted the full script as a Gist that puts it all together. If you have any improvements or comments, feel free to contribute!

Detecting Language with Python and the Natural Language Toolkit (NLTK)

Whether you want to catalog your mined public tweets or offer suggestions based on a user’s language preferences, Python can help detect a given language with a little bit of hackery around the Natural Language Toolkit (NLTK).

Let’s get going by first installing the NLTK and downloading some language data to make it useful. Just a note here: the NLTK is an incredible suite of tools that can act as your Swiss army knife for almost any natural language processing job you might have; we are just scratching the surface here.

Install the NLTK within a Python virtualenv (a quick pip install nltk inside the virtualenv will do it).

Now we’re going to need some language data, hmm.
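
One way in is straight from the Python prompt; calling nltk.download() with no arguments opens the downloader (a GUI if Tkinter is available, otherwise a text menu):

```python
import nltk

nltk.download()  # browse and fetch corpora/models interactively
```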

Play around in the NLTK downloader interface for a while, particularly the list of available packages (by entering l), but basically all we need to download are the punkt and stopwords packages.
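
Or skip the browsing and grab just those two directly:

```python
import nltk

nltk.download('punkt')
nltk.download('stopwords')
```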

Now we can finally start having some fun with a new script, detect_lang.py.
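
The script can be something along these lines; the function and variable names here are my own sketch, but the idea is exactly what’s described below.

```python
# detect_lang.py -- a sketch of stopword-overlap language detection
import sys

from nltk import word_tokenize
from nltk.corpus import stopwords


def detect_language(text):
    """Return the stopwords language whose word list overlaps most with the text."""
    tokens = {token.lower() for token in word_tokenize(text)}
    # Score every language NLTK has a stopword list for by counting shared words.
    scores = {
        language: len(tokens & set(stopwords.words(language)))
        for language in stopwords.fileids()
    }
    return max(scores, key=scores.get)


if __name__ == '__main__':
    print(detect_language(sys.stdin.read()))
```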

Basically, what we’re doing above is seeing which language’s stopword list shares the most words with our input text, and returning that language.

Let’s test it out!
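
For instance, something like the following (the URL and the crude tag-stripping regex are just for illustration):

```python
import re
from urllib.request import urlopen

from detect_lang import detect_language

# Grab a page, knock out the HTML tags with a blunt regex, and run the detector.
html = urlopen('https://en.wikipedia.org/wiki/Haiku').read().decode('utf-8')
text = re.sub(r'<[^>]*>', ' ', html)
print(detect_language(text))  # hopefully prints 'english'
```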

Not too bad! We tried to strip out most of the HTML from a Wikipedia page for that, so some of the JavaScript calls are still in there and may throw off our detector, but this technique should work for most data. I found it works pretty well for detecting English tweets versus non-English tweets… more on that later.