Using Python and the NLTK to Find Haikus in the Public Twitter Stream

So after sitting around mining the public Twitter stream and detecting natural language with Python, I decided to have a little fun with all that data by detecting haikus.

The Natural Language Toolkit (NLTK) in Python, basically the Swiss army knife of natural language processing, allows for more than just natural language detection. The NLTK ships with quite a few corpora, including the Carnegie Mellon University (CMU) Pronouncing Dictionary. This corpus has a number of features, but the one that piqued my interest was the syllable count for over 125,000 (English) words. With the ability to get the number of syllables for almost every English word, why not see if we can pluck some haikus from the public Twitter stream?

We’re going to feed Python a Tweet as a string and try to figure out whether it’s a haiku, doing our best to split it up into haiku form.

Building upon natural language detection with the NLTK, we should first filter out all the Tweets that are probably not English (to speed things up a little bit).

Once we have that out of the way, we can dig into the haiku detection.

So what we have now is a function, is_haiku, that will return a list of the three haiku lines if the given string is a haiku, or False if it’s (probably) not. I keep saying probably because this script isn’t perfect, but it works most of the time.

After all that hacky code, it’s just a matter of hooking it up to the public Twitter stream. Borrowing from the public Twitter stream mining code, we can pipe every Tweet into the is_haiku function and if it returns a list, add it to our database.
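The glue itself is tiny. Assuming an is_haiku like the one described above and some save-to-database callable (both of which are stand-ins here, not the post’s actual streaming code), the pipeline is just:

```python
def harvest_haikus(tweets, is_haiku, save):
    """Pipe an iterable of Tweet texts through `is_haiku` and persist
    every match. `tweets` and `save` are placeholders for whatever the
    streaming client and the database layer actually provide."""
    found = 0
    for text in tweets:
        lines = is_haiku(text)
        if lines:
            save(lines)
            found += 1
    return found
```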

Running this for a while, we actually pick up some pretty entertaining Tweets. I have had the script going on a micro EC2 instance and created a basic site that shows them in haiku form, as well as a Twitter account that retweets every haiku it finds.

Some samples of found haikus:
So it can be pretty interesting. What this exercise underlines is just how public your Tweets are. There might be some robot out there mining all that stuff. In fact, every Tweet is archived by the Library of Congress, so be mindful of what you post.

I have posted the full script as a Gist that puts it all together. If you have any improvements or comments, feel free to contribute!

  • Zerglinator

    Is it possible to narrow it down to a single Twitter profile?

  • billbock

    *piqued my interest

    • h6o6

      Not sure why it took me nearly a year to realize that this was a correction for the post. Thank you so much! (Gosh, I feel so foolish!)

      • varun kumar

        Can you please tell me the correction. I am also facing the error at the place where it’s mentioned “from haiku”. Where is the module for haiku?

  • Dan F-M

    This is so awesome! I was inspired (by this and Pentametron [1]) to make an automatic limerick generator with NLTK [2]. The code is on github [3]. Thanks for the great post!


    • h6o6

      Awesome, the poems are actually pretty great, haha. Thanks for posting the code!

  • Diego Pignattini

    What’s the haiku module you import on the third script?

  • Benjamin McFetridge-Smith

    line 19 in haiku detection code should be syl_count += [len(list(y for y in x if y[-1].isdigit())) for x in d[word.lower()]][0]

    otherwise syl_count is always 0