Android Virtual Assistant Part 3 – Natural Language Processing

Natural language processing (NLP) employs algorithms and mathematical models to analyse representations of language data, whether voice or text. NLP is often employed when the volume and velocity of the data make human analysis impractical. Possible uses of NLP include document classification (web pages), sentiment analysis (surveys and consumer choices) and automatic summarization. Sentiment analysis of social media is a particular focus for researchers, especially in media and marketing companies.

NLP is, by necessity, a technically complicated field of computing. However, its value can be important to even the smallest of businesses. By exploiting the published APIs of companies that provide NLP as a service, that value can be leveraged through both free quotas and inexpensive access to common low-level NLP functions. A good example of just such an inexpensive service with free quotas is IBM's AlchemyAPI. To get started we will explore what Alchemy offers; to begin reading about its extensive NLP API, you can find all its endpoints here.

The construction of virtual assistants, and an Android virtual assistant in particular, when enabled by natural language processing, draws heavily on machine learning, particularly for language models. A basic development flow involves setting up a language model and then training the model (the machine learning phase).

In this article we look at the conceptual basis for constructing a language model for NLP and how that works with text. For speech the process is more technical: speech audio waveforms are digitized and passed to a speech recognition system, where the language model and natural language processing make inferences about the user's intent. Apple, Google and Microsoft all have their own natural language processing models and virtual assistants: Apple's Siri, Google Now and Microsoft's Cortana.

Language Models

One simple way to think of a language model is to consider the set of strings formatted below. This is a very small subset of what would be 50,000+ lines for what would, in machine learning terms, be considered a small training dataset, or corpus.

In a weather context, the probability of the first two statements is high while the probability of the final statement is zero. We can evaluate the exact probabilities for particular statements by parsing a body of training text (the corpus) and calculating the probabilities; this evaluation of the actual probabilities is the 'training'.
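The counting described above can be sketched in a few lines of Java. The weather sentences here are invented stand-ins for the corpus, which would in practice run to tens of thousands of lines; the probability is a simple maximum-likelihood estimate, count of the sentence divided by the total number of sentences seen.

```java
import java.util.*;

// A minimal sketch of 'training': count how often each sentence appears
// in a toy corpus, then turn the counts into probabilities.
public class SentenceProbability {
    private final Map<String, Integer> counts = new HashMap<>();
    private int total = 0;

    public void train(List<String> corpus) {
        for (String sentence : corpus) {
            counts.merge(sentence.toLowerCase(), 1, Integer::sum);
            total++;
        }
    }

    // Maximum-likelihood estimate: count(sentence) / total sentences.
    // A sentence never seen in the corpus gets probability zero.
    public double probability(String sentence) {
        if (total == 0) return 0.0;
        return counts.getOrDefault(sentence.toLowerCase(), 0) / (double) total;
    }
}
```

A sentence that never occurs in the corpus, like the third statement above, comes back with probability zero, which is exactly the weakness the smoothing and back-off techniques discussed later are designed to address.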

N-gram Language Models

The next step in engaging with language models is to understand the n-gram model. In this model the sentences in Corpus 1 can be considered as a tree, with each sentence a path through it.

The training process is designed to estimate the probability of the next leaf (word) given the input so far.
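A bigram model (an n-gram model with n = 2) makes this concrete: the probability of the next word is estimated from how often it followed the previous word in the corpus. The sketch below is a toy illustration, not production code; the training text passed in is assumed to be plain whitespace-separated words.

```java
import java.util.*;

// A sketch of a bigram model: estimate P(next word | previous word)
// from counts, then pick the most likely next leaf in the tree.
public class BigramModel {
    // For each word, the words that followed it and how often.
    private final Map<String, Map<String, Integer>> follows = new HashMap<>();

    public void train(String corpus) {
        String[] words = corpus.toLowerCase().split("\\s+");
        for (int i = 0; i + 1 < words.length; i++) {
            follows.computeIfAbsent(words[i], k -> new HashMap<>())
                   .merge(words[i + 1], 1, Integer::sum);
        }
    }

    // P(next | prev) = count(prev next) / count(prev followed by anything)
    public double probability(String prev, String next) {
        Map<String, Integer> m = follows.get(prev.toLowerCase());
        if (m == null) return 0.0;
        int total = m.values().stream().mapToInt(Integer::intValue).sum();
        return total == 0 ? 0.0
                          : m.getOrDefault(next.toLowerCase(), 0) / (double) total;
    }

    // The most probable next word after prev, or null if prev was never seen.
    public String predict(String prev) {
        Map<String, Integer> m = follows.get(prev.toLowerCase());
        if (m == null) return null;
        return Collections.max(m.entrySet(), Map.Entry.comparingByValue()).getKey();
    }
}
```

Training on "it is raining it is sunny it is raining" and asking `predict("it")` returns "is", since "is" follows "it" every time in that corpus.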

A preliminary stage in constructing a language model is to define the model's vocabulary.
This is not a trivial task, and tools from an NLP framework will be used to create the vocabulary. Generating a language model is a time- and resource-intensive task, but techniques exist to improve the accuracy of the model. For example, you could take a more general language model, based on a bigger corpus, and interpolate your smaller language model with it (or use it as a back-off language model).


AlchemyAPI offers NLP as a service via its API, with a very generous free quota and excellent, up-to-date documentation. AlchemyAPI has software development kits (SDKs) for most major programming languages; download an SDK here and register for an API key here. We will start by running the process online, then move on to working with Android.

Run a Language Model Online

Once you have your API key you can try some initial processes online here. After entering your API key you can test the API and relate the results to the documentation.
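Under the hood, both the online demo and the SDK end up issuing HTTP requests of the same shape. The sketch below assembles a request URL for AlchemyAPI's ranked named-entity call; the endpoint path and parameter names follow the AlchemyAPI documentation of the time, so verify them against the current docs before relying on them.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// A sketch of assembling a GET request URL for AlchemyAPI entity extraction.
// Endpoint path and parameter names taken from the AlchemyAPI docs; check
// them against the current documentation before use.
public class AlchemyRequest {
    private static final String BASE =
        "http://access.alchemyapi.com/calls/text/TextGetRankedNamedEntities";

    public static String buildUrl(String apiKey, String text) {
        return BASE
            + "?apikey=" + URLEncoder.encode(apiKey, StandardCharsets.UTF_8)
            + "&text=" + URLEncoder.encode(text, StandardCharsets.UTF_8)
            + "&outputMode=json";
    }
}
```

Fetching the resulting URL returns a JSON document listing the entities found in the text, which is the same payload the Android SDK parses for you.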

Language Model Concepts In Code

The article's demo Android source code, available here, is an upgraded version of the AlchemyAPI example source code configured to run in Android Studio.

To build and run the source you need the latest version of Android Studio with Gradle installed, as shown in the screenshot below.

Register the API Key in the Source Code

Paste your API key into the class on line 43.

With this Android application you can now try out implementations of the NLP processes defined below.

Entity Extraction

Entity extraction algorithms extract semantic units from a corpus of data. Typical commercial implementations identify the names of people, organizations, locations, time zones, quantities, addresses and so on, classifying them into predefined categories or content types.
Consider the following fragment of data (the Merrill Lynch entity corpus), which could represent the results of a query on a financial services database. We wish to process this result set with an NLP entity extraction algorithm. Here one entity is the Merrill Lynch corporation; another is a Merrill Lynch corporation address. The goal of the NLP algorithm could be to extract a unique address for each unique Merrill Lynch branch from the data. We could set up an 'address' model and train it in just the same way we construct and train language models.

Merrill Lynch Entity Corpus

This is an application of NLP where we can add value to a database. Many companies produce mailouts, and ensuring that promotional material is mailed to only a single address per entity makes the process more efficient and saves resources.
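The deduplication step that follows entity extraction can be sketched simply. Assume the extraction stage has already produced (branch, address) pairs from the result set; the rows below are invented examples, and a real pipeline would get these pairs from a trained entity model rather than hand-built strings.

```java
import java.util.*;

// A sketch of the post-extraction step: keep exactly one address per
// distinct branch entity, so a mailout goes to each branch only once.
public class AddressDeduper {
    // rows are (branch name, address) pairs produced by entity extraction.
    public static Map<String, String> uniqueAddresses(List<String[]> rows) {
        Map<String, String> byBranch = new LinkedHashMap<>();
        for (String[] row : rows) {
            // putIfAbsent keeps the first address seen for each branch.
            byBranch.putIfAbsent(row[0], row[1]);
        }
        return byBranch;
    }
}
```

Three result rows covering two branches collapse to two mailing targets, which is precisely the resource saving described above.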

Concept Tagging

We can extend the entities and their matching to ontological concepts. Here a concept could be a 'Financial Services Corporation'; we could construct a model that matches the extracted Merrill Lynch entities as attributes of a 'Financial Services Corporation' concept.
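At its simplest, concept tagging is a mapping from extracted entities to broader ontological concepts. The sketch below hard-codes that mapping for illustration; a real service like AlchemyAPI derives it from a large knowledge base rather than a lookup table, and the entity names here are invented examples.

```java
import java.util.*;

// A sketch of concept tagging: map extracted entities onto broader
// ontological concepts via a (here, hand-built) knowledge mapping.
public class ConceptTagger {
    private static final Map<String, String> ENTITY_TO_CONCEPT = Map.of(
        "Merrill Lynch", "Financial Services Corporation",
        "IBM", "Technology Corporation");

    public static String concept(String entity) {
        return ENTITY_TO_CONCEPT.getOrDefault(entity, "Unknown Concept");
    }
}
```

Tagging the Merrill Lynch entities this way groups every branch record under the single 'Financial Services Corporation' concept, which is what lets downstream queries reason at the ontology level instead of the raw-string level.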


This article winds up the Android Virtual Assistant series. The focus of the series has been to showcase and introduce technologies you could use to construct a virtual assistant application. We outlined how to configure and bootstrap VoIP applications via the SIP protocol for piping speech over TCP, then looked at the code needed to configure and bootstrap an Android SIP client. We followed this up in part two with an outline of Android voice actions, which have become important with the advent of wearable computing devices and are about to be extended in the next Android release, Android M.
In this article we have outlined the terminology and high-level concepts of NLP, provided pointers to its applications and value, and provided Android source code you can load into Android Studio to bootstrap an Android Alchemy client.

