Detecting Language with Python and the Natural Language Toolkit (NLTK)

Whether you want to catalog your mined public tweets or offer suggestions based on a user’s language preferences, Python can help you detect the language of a given text with a little bit of hackery around the Natural Language Toolkit (NLTK).

Let’s get going by first installing NLTK and downloading some language data to make it useful. Just a note here: NLTK is an incredible suite of tools that can act as your Swiss Army knife for almost any natural language processing job you might have; we are just scratching the surface here.

Install the NLTK within a Python virtualenv.
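One way to do that (a sketch; I’m using the stdlib `venv` module here, and `nltk-env` is just an illustrative name — plain `virtualenv` works the same way):

```shell
# Create and activate an isolated environment, then install NLTK from PyPI.
python -m venv nltk-env
source nltk-env/bin/activate
pip install nltk
```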

Now we’re going to need some language data, hmm.

Play around in the NLTK downloader interface for a while, particularly the list of available packages (by entering a lowercase l), but basically all we need to download are the punkt and stopwords packages.
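If you’d rather skip the interactive interface, `nltk.download` also accepts a package name directly, so the same two packages can be fetched from a script:

```python
import nltk

# nltk.download() with no arguments opens the interactive downloader;
# passing a package name fetches just that package.
nltk.download("punkt")
nltk.download("stopwords")
```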

Now we can finally start having some fun with a new script.

Basically, what we’re doing above is checking which language’s stopwords dictionary has the most words in common with our input text and returning that language.

Let’s test it out!

Not too bad! We tried to strip out most of the HTML from that Wikipedia page, but some of the JavaScript calls remain and may throw off our detector; still, this technique should work for most data. I found it works pretty well for detecting English tweets versus non-English tweets… more on that later.