Apache OpenNLP Developer Documentation

Written and maintained by the Apache OpenNLP Development Community

Version 1.5.1-incubating

License and Disclaimer.  The ASF licenses this documentation to you under the Apache License, Version 2.0 (the "License"); you may not use this documentation except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, this documentation and its contents are distributed under the License on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.



Table of Contents

1. Introduction
2. Sentence Detector
Sentence Detection
Sentence Detection Tool
Sentence Detection API
Sentence Detector Training
Training Tool
Training API
Evaluation
Evaluation Tool
3. Tokenizer
Tokenization
Tokenizer Tools
Tokenizer API
Training Tool
Detokenizing
4. Name Finder
Named Entity Recognition
Name Finder Tool
Name Finder API
Name Finder Training
Training Tool
Training API
Custom Feature Generation
Evaluation
Evaluation Tool
Evaluation API
Named Entity Annotation Guidelines
5. Document Categorizer
6. Part-of-Speech Tagger
Tagging
POS Tagger Tool
POS Tagger API
Training
Training Tool
Training API
Tag Dictionary
Evaluation
Evaluation Tool
7. Chunker
Chunking
Chunker Tool
Chunking API
Chunker Training
Training Tool
Chunker Evaluation
Chunker Evaluation Tool
8. Parser
Parsing
Parser Tool
Parsing API
Parser Training
Training Tool
9. Coreference Resolution
10. Corpora
CONLL
CONLL 2000
CONLL 2002
CONLL 2003
Arvores Deitadas
Getting the data
Converting the data
Evaluation
Leipzig Corpora
11. Machine Learning
Maximum Entropy
Implementation
12. UIMA Integration
Running the pear sample in CVD
Further Help

Chapter 1. Introduction

OpenNLP is a machine learning based toolkit for the processing of natural language text. It supports the most common NLP tasks, such as tokenization, sentence segmentation, part-of-speech tagging, named entity extraction, chunking, parsing, and coreference resolution. These tasks are usually required to build more advanced text processing services. OpenNLP also includes maximum entropy and perceptron based machine learning.

The goal of the OpenNLP project is to create a mature toolkit for the above-mentioned tasks. An additional goal is to provide a large number of pre-built models for a variety of languages, as well as the annotated text resources that those models are derived from.

Chapter 2. Sentence Detector

Sentence Detection

The OpenNLP Sentence Detector can detect whether a punctuation character marks the end of a sentence or not. In this sense a sentence is defined as the longest white space trimmed character sequence between two punctuation marks. The first and last sentences are an exception to this rule. The first non whitespace character is assumed to be the beginning of a sentence, and the last non whitespace character is assumed to be a sentence end. The sample text below should be segmented into its sentences.

				
Pierre Vinken, 61 years old, will join the board as a nonexecutive director Nov. 29. Mr. Vinken is chairman of Elsevier N.V., 
the Dutch publishing group. Rudolph Agnew, 55 years old and former chairman of Consolidated Gold Fields PLC, 
was named a director of this British industrial conglomerate.
		

After detecting the sentence boundaries each sentence is written on its own line.

				
Pierre Vinken, 61 years old, will join the board as a nonexecutive director Nov. 29.
Mr. Vinken is chairman of Elsevier N.V., the Dutch publishing group.
Rudolph Agnew, 55 years old and former chairman of Consolidated Gold Fields PLC, was named a director of this British industrial conglomerate.
		

Usually sentence detection is done before the text is tokenized, and that is how the pre-trained models on the website were trained, but it is also possible to perform tokenization first and let the Sentence Detector process the already tokenized text. The OpenNLP Sentence Detector cannot identify sentence boundaries based on the contents of the sentence. A prominent example is the first sentence in an article, where the title is mistakenly identified as the first part of the first sentence. Most components in OpenNLP expect input which is segmented into sentences.

Sentence Detection Tool

The easiest way to try out the Sentence Detector is the command line tool. The tool is only intended for demonstration and testing. Download the English sentence detector model and start the Sentence Detector Tool with this command:

				
$bin/opennlp SentenceDetector en-sent.bin
		

Just copy the sample text from above to the console. The Sentence Detector will read it and echo one sentence per line to the console. Usually the input is read from a file and the output is redirected to another file. This can be achieved with the following command.

				
$bin/opennlp SentenceDetector en-sent.bin < input.txt > output.txt
		

For the English sentence model from the website the input text should not be tokenized.

Sentence Detection API

The Sentence Detector can be easily integrated into an application via its API. To instantiate the Sentence Detector the sentence model must be loaded first.

				
InputStream modelIn = new FileInputStream("en-sent.bin");
SentenceModel model = null;

try {
  model = new SentenceModel(modelIn);
}
catch (IOException e) {
  e.printStackTrace();
}
finally {
  if (modelIn != null) {
    try {
      modelIn.close();
    }
    catch (IOException e) {
    }
  }
}
		

After the model is loaded the SentenceDetectorME can be instantiated.

				
SentenceDetectorME sentenceDetector = new SentenceDetectorME(model);
		

The Sentence Detector can output an array of Strings, where each String is one sentence.

				
String sentences[] = sentenceDetector.sentDetect("  First sentence. Second sentence. ");
		

The result array now contains two entries. The first String is "First sentence." and the second String is "Second sentence." The whitespace before, between and after the input String is removed. The API also offers a method which simply returns the span of the sentence in the input string.

				
Span sentences[] = sentenceDetector.sentPosDetect("  First sentence. Second sentence. ");
		

The result array again contains two entries. The first span begins at index 2 and ends at 17. The second span begins at 18 and ends at 34. The utility method Span.getCoveredText can be used to create a substring which only covers the chars in the span.
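The following lines sketch how the spans can be mapped back to sentence Strings. Only the Span.getCoveredText method mentioned above is used; the call to toString() is an assumption about its return type and should be checked against the Javadoc of your OpenNLP version.

				
String text = "  First sentence. Second sentence. ";
Span sentenceSpans[] = sentenceDetector.sentPosDetect(text);

for (Span span : sentenceSpans) {
  // getCoveredText returns the characters between the begin and end offset of the span
  String sentence = span.getCoveredText(text).toString();
  System.out.println(sentence);
}
		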

Sentence Detector Training

Training Tool

OpenNLP has a command line tool which is used to train the models available from the model download page on various corpora. The data must be converted to the OpenNLP Sentence Detector training format, which is one sentence per line. An empty line indicates a document boundary. In case the document boundary is unknown, it is recommended to insert an empty line every few tens of sentences, exactly like the output in the sample above. Usage of the tool:

				
$bin/opennlp SentenceDetectorTrainer
Usage: opennlp SentenceDetectorTrainer -lang language -encoding charset [-iterations num] [-cutoff num] -data trainingData -model model
-lang language     specifies the language which is being processed.
-encoding charset  specifies the encoding which should be used for reading and writing text.
-iterations num    specified the number of training iterations
-cutoff num        specifies the min number of times a feature must be seen
		

To train an English sentence detector use the following command:

				
$bin/opennlp SentenceDetectorTrainer -encoding UTF-8 -lang en -data en-sent.train -model en-sent.bin

Indexing events using cutoff of 5

	Computing event counts...  done. 4883 events
	Indexing...  done.
Sorting and merging events... done. Reduced 4883 events to 2945.
Done indexing.
Incorporating indexed data for training...  
done.
	Number of Event Tokens: 2945
	    Number of Outcomes: 2
	  Number of Predicates: 467
...done.
Computing model parameters...
Performing 100 iterations.
  1:  .. loglikelihood=-3384.6376826743144	0.38951464263772273
  2:  .. loglikelihood=-2191.9266688597672	0.9397911120212984
  3:  .. loglikelihood=-1645.8640771555981	0.9643661683391358
  4:  .. loglikelihood=-1340.386303774519	0.9739913987302887
  5:  .. loglikelihood=-1148.4141548519624	0.9748105672742167

 ...<skipping a bunch of iterations>...

 95:  .. loglikelihood=-288.25556805874436	0.9834118369854598
 96:  .. loglikelihood=-287.2283680343481	0.9834118369854598
 97:  .. loglikelihood=-286.2174830344526	0.9834118369854598
 98:  .. loglikelihood=-285.222486981048	0.9834118369854598
 99:  .. loglikelihood=-284.24296917223916	0.9834118369854598
100:  .. loglikelihood=-283.2785335773966	0.9834118369854598
Wrote sentence detector model.
Path: en-sent.bin

		

Training API

The Sentence Detector also offers an API to train a new sentence detection model. Basically three steps are necessary to train it:

  • The application must open a sample data stream

  • Call the SentenceDetectorME.train method

  • Save the SentenceModel to a file or directly use it

The following sample code illustrates these steps:

				
ObjectStream<String> lineStream = new PlainTextByLineStream(new FileInputStream("en-sent.train"), "UTF-8");
ObjectStream<SentenceSample> sampleStream = new SentenceSampleStream(lineStream);

SentenceModel model = SentenceDetectorME.train("en",sampleStream, true, null, 5, 100);

OutputStream modelOut = null;
try {
  modelOut = new BufferedOutputStream(new FileOutputStream("en-sent.bin"));
  model.serialize(modelOut);
} finally {
  if (modelOut != null)
     modelOut.close();
}
		

Evaluation

Evaluation Tool

The following command shows how the evaluator tool can be run:

				
$bin/opennlp SentenceDetectorEvaluator -encoding UTF-8 -model en-sent.bin -data en-sent.eval  

Loading model ... done
Evaluating ... done

Precision: 0.9465737514518002
Recall: 0.9095982142857143
F-Measure: 0.9277177006260672
				

The en-sent.eval file has the same format as the training data.
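The evaluation can also be performed from Java code. The following sketch assumes the SentenceDetectorEvaluator class used by the command line tool and the SentenceSampleStream shown in the Training API section; please verify the constructor arguments against the Javadoc of your OpenNLP version.

				
ObjectStream<String> evalLineStream =
    new PlainTextByLineStream(new FileInputStream("en-sent.eval"), "UTF-8");
ObjectStream<SentenceSample> evalSampleStream = new SentenceSampleStream(evalLineStream);

// the model was loaded as shown in the Sentence Detection API section
SentenceDetectorEvaluator evaluator =
    new SentenceDetectorEvaluator(new SentenceDetectorME(model));
evaluator.evaluate(evalSampleStream);

// prints precision, recall and F-measure, like the command line tool
System.out.println(evaluator.getFMeasure());
		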

Chapter 3. Tokenizer

Tokenization

The OpenNLP Tokenizers segment an input character sequence into tokens. Tokens are usually words, punctuation, numbers, etc.

			
Pierre Vinken, 61 years old, will join the board as a nonexecutive director Nov. 29.
Mr. Vinken is chairman of Elsevier N.V., the Dutch publishing group.
Rudolph Agnew, 55 years old and former chairman of Consolidated Gold Fields PLC, was named a director of this British industrial conglomerate.
			
		 

The following result shows the individual tokens in a whitespace separated representation.

			
Pierre Vinken , 61 years old , will join the board as a nonexecutive director Nov. 29 .
Mr. Vinken is chairman of Elsevier N.V. , the Dutch publishing group .
Rudolph Agnew , 55 years old and former chairman of Consolidated Gold Fields PLC , was named a nonexecutive director of this British industrial conglomerate . 
A form of asbestos once used to make Kent cigarette filters has caused a high percentage of cancer deaths among a group of workers exposed to it more than 30 years ago , researchers reported . 
			
		 	

OpenNLP offers multiple tokenizer implementations:

  • Whitespace Tokenizer - A whitespace tokenizer; non-whitespace sequences are identified as tokens

  • Simple Tokenizer - A character class tokenizer; sequences of the same character class are tokens

  • Learnable Tokenizer - A maximum entropy tokenizer; detects token boundaries based on a probability model

Most part-of-speech taggers, parsers and so on, work with text tokenized in this manner. It is important to ensure that your tokenizer produces tokens of the type expected by your later text processing components.

With OpenNLP (as with many systems), tokenization is a two-stage process: first, sentence boundaries are identified, then tokens within each sentence are identified.

Tokenizer Tools

The easiest way to try out the tokenizers is the command line tools. The tools are only intended for demonstration and testing.

There are two tools, one for the Simple Tokenizer and one for the learnable tokenizer. A command line tool for the Whitespace Tokenizer does not exist, because the whitespace separated output would be identical to the input.

The following command shows how to use the Simple Tokenizer Tool.

			
$ bin/opennlp SimpleTokenizer
			
		 

To use the learnable tokenizer download the English token model from our website.

			
$ bin/opennlp TokenizerME en-token.bin
			
		 

To test the tokenizer copy the sample from above to the console. The whitespace separated tokens will be written back to the console.

Usually the input is read from a file and written to a file.

			
$ bin/opennlp TokenizerME en-token.bin < article.txt > article-tokenized.txt
			
		 

It can be done in the same way for the Simple Tokenizer.

Since most text comes truly raw, without sentence boundaries and such, it is possible to create a pipe which first performs sentence boundary detection and then tokenization. The following sample illustrates that.

			
$ opennlp SentenceDetector sentdetect.model < article.txt | opennlp TokenizerME tokenize.model | more
Loading model ... Loading model ... done
done
Showa Shell gained 20 to 1,570 and Mitsubishi Oil rose 50 to 1,500.
Sumitomo Metal Mining fell five yen to 692 and Nippon Mining added 15 to 960 .
Among other winners Wednesday was Nippon Shokubai , which was up 80 at 2,410 .
Marubeni advanced 11 to 890 .
London share prices were bolstered largely by continued gains on Wall Street and technical factors affecting demand for London 's blue-chip stocks .
...etc...
		 

Of course this is all on the command line. Many people use the models directly in their Java code by creating SentenceDetector and Tokenizer objects and calling their methods as appropriate. The following section will explain how the Tokenizers can be used directly from Java.

Tokenizer API

The Tokenizers can be integrated into an application via the defined API. The shared instance of the WhitespaceTokenizer can be retrieved from the static field WhitespaceTokenizer.INSTANCE. The shared instance of the SimpleTokenizer can be retrieved in the same way from SimpleTokenizer.INSTANCE. To instantiate the TokenizerME (the learnable tokenizer) a TokenizerModel must be loaded first. The following code sample shows how a model can be loaded.

			
InputStream modelIn = new FileInputStream("en-token.bin");

try {
  TokenizerModel model = new TokenizerModel(modelIn);
}
catch (IOException e) {
  e.printStackTrace();
}
finally {
  if (modelIn != null) {
    try {
      modelIn.close();
    }
    catch (IOException e) {
    }
  }
}
			
		 

After the model is loaded the TokenizerME can be instantiated.

			
Tokenizer tokenizer = new TokenizerME(model);
		 

The tokenizer offers two tokenize methods, both expect an input String object which contains the untokenized text. If possible it should be a sentence, but depending on the training of the learnable tokenizer this is not required. The first returns an array of Strings, where each String is one token.

			
String tokens[] = tokenizer.tokenize("An input sample sentence.");
		 

The output will be an array with these tokens.

			
"An", "input", "sample", "sentence", "."
		 

The second method, tokenizePos, returns an array of Spans; each Span contains the begin and end character offsets of the token in the input String.

			
Span tokenSpans[] = tokenizer.tokenizePos("An input sample sentence.");		
			

The tokenSpans array now contains 5 elements. To get the text for one span call Span.getCoveredText which takes the input text. The TokenizerME is able to output the probabilities for the detected tokens. The getTokenProbabilities method must be called directly after one of the tokenize methods was called.

			
TokenizerME tokenizer = ...

String tokens[] = tokenizer.tokenize(...);
double tokenProbs[] = tokenizer.getTokenProbabilities();
					
			

The tokenProbs array now contains one double value per token, the value is between 0 and 1, where 1 is the highest possible probability and 0 the lowest possible probability.
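As a minimal usage sketch, the tokens and their probabilities can be printed side by side; the two arrays are parallel, one probability per token.

			
for (int i = 0; i < tokens.length; i++) {
  // the probability at index i belongs to the token at index i
  System.out.println(tokens[i] + " " + tokenProbs[i]);
}
		 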

Training Tool

OpenNLP has a command line tool which is used to train the models available from the model download page on various corpora. The data must be converted to the OpenNLP Tokenizer training format, which is one sentence per line. Tokens are either separated by whitespace or by a special <SPLIT> tag. The following sample shows the sentences from above in the correct format.

			
Pierre Vinken<SPLIT>, 61 years old<SPLIT>, will join the board as a nonexecutive director Nov. 29<SPLIT>.
Mr. Vinken is chairman of Elsevier N.V.<SPLIT>, the Dutch publishing group<SPLIT>.
Rudolph Agnew<SPLIT>, 55 years old and former chairman of Consolidated Gold Fields PLC<SPLIT>, was named a nonexecutive director of this British industrial conglomerate<SPLIT>. 
					
			

Usage of the tool:

			
$ bin/opennlp TokenizerTrainer
Usage: opennlp TokenizerTrainer-lang language -encoding charset [-iterations num] [-cutoff num] [-alphaNumOpt] -data trainingData -model model
-lang language     specifies the language which is being processed.
-encoding charset  specifies the encoding which should be used for reading and writing text.
-iterations num    specified the number of training iterations
-cutoff num        specifies the min number of times a feature must be seen
-alphaNumOpt Optimization flag to skip alpha numeric tokens for further tokenization
					
			

To train the English tokenizer use the following command:

			
$ bin/opennlp TokenizerTrainer -encoding UTF-8 -lang en -alphaNumOpt -data en-token.train -model en-token.bin
Indexing events using cutoff of 5

	Computing event counts...  done. 262271 events
	Indexing...  done.
Sorting and merging events... done. Reduced 262271 events to 59060.
Done indexing.
Incorporating indexed data for training...  
done.
	Number of Event Tokens: 59060
	    Number of Outcomes: 2
	  Number of Predicates: 15695
...done.
Computing model parameters...
Performing 100 iterations.
  1:  .. loglikelihood=-181792.40419263614	0.9614292087192255
  2:  .. loglikelihood=-34208.094253153664	0.9629238459456059
  3:  .. loglikelihood=-18784.123872910015	0.9729211388220581
  4:  .. loglikelihood=-13246.88162585859	0.9856103038460219
  5:  .. loglikelihood=-10209.262670265718	0.9894422181636552

 ...<skipping a bunch of iterations>...

 95:  .. loglikelihood=-769.2107474529454	0.999511955191386
 96:  .. loglikelihood=-763.8891914534009	0.999511955191386
 97:  .. loglikelihood=-758.6685383254891	0.9995157680414533
 98:  .. loglikelihood=-753.5458314695236	0.9995157680414533
 99:  .. loglikelihood=-748.5182305519613	0.9995157680414533
100:  .. loglikelihood=-743.5830058068038	0.9995157680414533
Wrote tokenizer model.
Path: en-token.bin
					
			
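Analogous to the Sentence Detector, the tokenizer can also be trained via the API. The following sketch assumes a TokenSampleStream which parses the <SPLIT> format shown above and a TokenizerME.train method with the shown argument order; please verify both against the Javadoc of your OpenNLP version.

			
ObjectStream<String> lineStream =
    new PlainTextByLineStream(new FileInputStream("en-token.train"), "UTF-8");
ObjectStream<TokenSample> sampleStream = new TokenSampleStream(lineStream);

// assumed signature: language, samples, useAlphaNumericOptimization, cutoff, iterations
TokenizerModel model = TokenizerME.train("en", sampleStream, true, 5, 100);

OutputStream modelOut = null;
try {
  modelOut = new BufferedOutputStream(new FileOutputStream("en-token.bin"));
  model.serialize(modelOut);
} finally {
  if (modelOut != null)
    modelOut.close();
}
		 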

Detokenizing

Detokenizing is simply the opposite of tokenization: the original non-tokenized string should be reconstructed out of a token sequence. The OpenNLP implementation was created to undo the tokenization of training data for the tokenizer. It can also be used to undo the tokenization produced by such a trained tokenizer. The implementation is strictly rule based and defines how tokens should be attached to a sentence-wise character sequence.

The rule dictionary assigns to every token an operation which describes how it should be attached to one continuous character sequence.

The following rules can be assigned to a token:

  • MERGE_TO_LEFT - Merges the token to the left side.

  • MERGE_TO_RIGHT - Merges the token to the right side.

  • RIGHT_LEFT_MATCHING - Merges the token to the right side on first occurrence and to the left side on second occurrence.

The following sample illustrates how the detokenizer works with a small rule dictionary (illustration format, not the XML data format):

			
. MERGE_TO_LEFT
" RIGHT_LEFT_MATCHING		
		

The dictionary should be used to de-tokenize the following whitespace tokenized sentence:

			
He said " This is a test " .		
		

The tokens would get these tags based on the dictionary:

			
He -> NO_OPERATION
said -> NO_OPERATION
" -> MERGE_TO_RIGHT
This -> NO_OPERATION
is -> NO_OPERATION
a -> NO_OPERATION
test -> NO_OPERATION
" -> MERGE_TO_LEFT
. -> MERGE_TO_LEFT		
			

That will result in the following character sequence:

			
He said "This is a test".		
		

TODO: Add documentation about the dictionary format and how to use the API. Contributions are welcome.

Chapter 4. Name Finder

Named Entity Recognition

The Name Finder can detect named entities and numbers in text. To be able to detect entities the Name Finder needs a model. The model is dependent on the language and entity type it was trained for. The OpenNLP project offers a number of pre-trained name finder models which are trained on various freely available corpora. They can be downloaded at our model download page. To find names in raw text the text must be segmented into tokens and sentences. A detailed description is given in the sentence detector and tokenizer tutorial. It is important that the tokenization for the training data and the input text is identical.

Name Finder Tool

The easiest way to try out the Name Finder is the command line tool. The tool is only intended for demonstration and testing. Download the English person model and start the Name Finder Tool with this command:

				
$bin/opennlp TokenNameFinder en-ner-person.bin
			 

The name finder now reads a tokenized sentence per line from stdin, an empty line indicates a document boundary and resets the adaptive feature generators. Just copy this text to the terminal:

				
Pierre Vinken , 61 years old , will join the board as a nonexecutive director Nov. 29 .
Mr . Vinken is chairman of Elsevier N.V. , the Dutch publishing group .
Rudolph Agnew , 55 years old and former chairman of Consolidated Gold Fields PLC , was named a director of this British industrial conglomerate .
			 

the name finder will now output the text with markup for person names:

				
<START:person> Pierre Vinken <END> , 61 years old , will join the board as a nonexecutive director Nov. 29 .
Mr . <START:person> Vinken <END> is chairman of Elsevier N.V. , the Dutch publishing group .
<START:person> Rudolph Agnew <END> , 55 years old and former chairman of Consolidated Gold Fields PLC , was named a director of this British industrial conglomerate .
				
			 

Name Finder API

To use the Name Finder in a production system it is strongly recommended to embed it directly into the application instead of using the command line interface. First the name finder model must be loaded into memory from disk or another source. In the sample below it is loaded from disk.

				
InputStream modelIn = new FileInputStream("en-ner-person.bin");

try {
  TokenNameFinderModel model = new TokenNameFinderModel(modelIn);
}
catch (IOException e) {
  e.printStackTrace();
}
finally {
  if (modelIn != null) {
    try {
      modelIn.close();
    }
    catch (IOException e) {
    }
  }
}
			 

There are a number of reasons why model loading can fail:

  • Issues with the underlying I/O

  • The version of the model is not compatible with the OpenNLP version

  • The model is loaded into the wrong component, for example a tokenizer model is loaded with the TokenNameFinderModel class.

  • The model content is not valid for some other reason

After the model is loaded the NameFinderME can be instantiated.

				
NameFinderME nameFinder = new NameFinderME(model);
			

The initialization is now finished and the Name Finder can be used. The NameFinderME class is not thread safe; it must only be called from one thread. To use multiple threads, multiple NameFinderME instances sharing the same model instance can be created. The input text should be segmented into documents, sentences and tokens. To perform entity detection an application calls the find method for every sentence in the document. After every document clearAdaptiveData must be called to clear the adaptive data in the feature generators. Not calling clearAdaptiveData can lead to a sharp drop in the detection rate after a few documents. The following code illustrates that:

				
for (String document[][] : documents) {

  for (String[] sentence : document) {
    Span nameSpans[] = nameFinder.find(sentence);
    // do something with the names
  }

  nameFinder.clearAdaptiveData();
}
			 

the following snippet shows a call to find

				
String[] sentence = new String[]{
    "Pierre",
    "Vinken",
    "is",
    "61",
    "years",
    "old",
    "."
    };

Span nameSpans[] = nameFinder.find(sentence);
			

The nameSpans array now contains exactly one Span which marks the name Pierre Vinken. The elements between the begin and end offsets are the name tokens. In this case the begin offset is 0 and the end offset is 2. The Span object also knows the type of the entity, in this case person (defined by the model). It can be retrieved with a call to Span.getType(). In addition to the statistical Name Finder, OpenNLP also offers a dictionary and a regular expression name finder implementation. TODO: Explain how to retrieve probs from the name finder for names and for non recognized names.
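As a partial answer to the TODO above, the following sketch shows one way the confidence of the detected names might be retrieved. It assumes a probs method on NameFinderME which returns one probability per span of the last find call; please check the Javadoc of your OpenNLP version.

				
Span nameSpans[] = nameFinder.find(sentence);

// assumed: one probability per span detected by the last find call
double[] spanProbs = nameFinder.probs(nameSpans);

for (int i = 0; i < nameSpans.length; i++) {
  System.out.println(nameSpans[i].getType() + " " + spanProbs[i]);
}
			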

Name Finder Training

The pre-trained models might not be available for a desired language, cannot detect important entities, or their performance is not good enough outside the news domain. These are the typical reasons to do custom training of the name finder on a new corpus, or on a corpus which is extended with private training data taken from the data which should be analyzed.

Training Tool

OpenNLP has a command line tool which is used to train the models available from the model download page on various corpora.

The data must be converted to the OpenNLP name finder training format, which is one sentence per line. The sentences must be tokenized and contain spans which mark the entities. Documents are separated by empty lines which trigger the reset of the adaptive feature generators. A training file can contain multiple types; if it does, the created model will also be able to detect these multiple types. For now it is recommended to only train single-type models, since multi-type support is still experimental.

Sample sentence of the data:

				
<START:person> Pierre Vinken <END> , 61 years old , will join the board as a nonexecutive director Nov. 29 .
Mr . <START:person> Vinken <END> is chairman of Elsevier N.V. , the Dutch publishing group .
				
			 

The training data should contain at least 15000 sentences to create a model which performs well. Usage of the tool:

				
$ bin/opennlp TokenNameFinderTrainer
Usage: opennlp TokenNameFinderTrainer -lang language -encoding charset [-iterations num] [-cutoff num] [-type type] -data trainingData -model model
-lang language     specifies the language which is being processed.
-encoding charset  specifies the encoding which should be used for reading and writing text.
-iterations num    specified the number of training iterations
-cutoff num        specifies the min number of times a feature must be seen
-type The type of the token name finder model
			 

It is now assumed that the English person name finder model should be trained from a file called en-ner-person.train which is encoded as UTF-8. The following command will train the name finder and write the model to en-ner-person.bin:

				
$bin/opennlp TokenNameFinderTrainer -encoding UTF-8 -lang en -data en-ner-person.train -model en-ner-person.bin
			 

Additionally it is possible to specify the number of iterations, the cutoff, and to overwrite all types in the training data with a single type.

Training API

To train the name finder from within an application it is recommended to use the training API instead of the command line tool. Basically three steps are necessary to train it:

  • The application must open a sample data stream

  • Call the NameFinderME.train method

  • Save the TokenNameFinderModel to a file or database

The three steps are illustrated by the following sample code:

				
ObjectStream<String> lineStream =
		new PlainTextByLineStream(new FileInputStream("en-ner-person.train"), "UTF-8");
ObjectStream<NameSample> sampleStream = new NameSampleDataStream(lineStream);

TokenNameFinderModel model = NameFinderME.train("en", "person", sampleStream,
		Collections.<String, Object>emptyMap(), 100, 5);

OutputStream modelOut = null;
try {
  modelOut = new BufferedOutputStream(new FileOutputStream("en-ner-person.bin"));
  model.serialize(modelOut);
} finally {
  if (modelOut != null)
     modelOut.close();
}
			 

Custom Feature Generation

OpenNLP defines a default feature generation which is used when no custom feature generation is specified. Users who want to experiment with the feature generation can provide a custom feature generator. The custom generator must be used for training and for detecting the names. If the feature generation during training time and detection time is different the name finder might not be able to detect names. The following lines show how to construct a custom feature generator:

				
AdaptiveFeatureGenerator featureGenerator = new CachedFeatureGenerator(
         new AdaptiveFeatureGenerator[]{
           new WindowFeatureGenerator(new TokenFeatureGenerator(), 2, 2),
           new WindowFeatureGenerator(new TokenClassFeatureGenerator(true), 2, 2),
           new OutcomePriorFeatureGenerator(),
           new PreviousMapFeatureGenerator(),
           new BigramNameFeatureGenerator(),
           new SentenceFeatureGenerator(true, false)
           });
			

which is similar to the default feature generator. The Javadoc of the feature generator classes explains what the individual feature generators do. To write a custom feature generator please implement the AdaptiveFeatureGenerator interface, or, if it does not need to be adaptive, extend the FeatureGeneratorAdapter. The train method which should be used is defined as

				
public static TokenNameFinderModel train(String languageCode, String type, ObjectStream<NameSample> samples, 
       AdaptiveFeatureGenerator generator, final Map<String, Object> resources, 
       int iterations, int cutoff) throws IOException
			

and can take a feature generator as an argument. To detect names, the model which was returned from the train method and the feature generator must be passed to the NameFinderME constructor.

				
new NameFinderME(model, featureGenerator, NameFinderME.DEFAULT_BEAM_SIZE);
			 

Evaluation

The built in evaluation can measure the named entity recognition performance of the name finder. The performance is either measured on a test dataset or via cross validation.

Evaluation Tool

The following command shows how the tool can be run:

				
$bin/opennlp TokenNameFinderEvaluator -encoding UTF-8 -model en-ner-person.bin -data en-ner-person.test
			 

				
Precision: 0.8005071889818507
Recall: 0.7450581122145297
F-Measure: 0.7717879983140168
			 

Note: The command line interface does not support cross evaluation in the current version.

Evaluation API

The evaluation can be performed on a pre-trained model and a test dataset or via cross validation. In the first case the model must be loaded and a NameSample ObjectStream must be created (see the code samples above). Assuming these two objects exist, the following code shows how to perform the evaluation:

				
TokenNameFinderEvaluator evaluator = new TokenNameFinderEvaluator(new NameFinderME(model));
evaluator.evaluate(sampleStream);

FMeasure result = evaluator.getFMeasure();

System.out.println(result.toString());
			

In the cross validation case all the training arguments must be provided (see the Training API section above). To perform cross validation the ObjectStream must be resettable.

				
FileInputStream sampleDataIn = new FileInputStream("en-ner-person.train");
ObjectStream<String> lineStream = new PlainTextByLineStream(sampleDataIn.getChannel(), "UTF-8");
ObjectStream<NameSample> sampleStream = new NameSampleDataStream(lineStream);
TokenNameFinderCrossValidator evaluator = new TokenNameFinderCrossValidator("en", 100, 5);
evaluator.evaluate(sampleStream, 10);

FMeasure result = evaluator.getFMeasure();

System.out.println(result.toString());
			

Named Entity Annotation Guidelines

Annotation guidelines define what should be labeled as an entity. To build a private corpus it is important to know these guidelines and maybe write a custom one. Here is a list of publicly available annotation guidelines:

Chapter 5. Document Categorizer

TODO: Write documentation about the doccat component. Any contributions are very welcome. If you want to contribute please contact us on the mailing list or comment on the jira issue OPENNLP-33.

Chapter 6. Part-of-Speech Tagger

Tagging

The Part of Speech Tagger marks tokens with their corresponding word type based on the token itself and the context of the token. A token might have multiple pos tags depending on the token and the context. The OpenNLP POS Tagger uses a probability model to predict the correct pos tag out of the tag set. To limit the possible tags for a token a tag dictionary can be used which increases the tagging and runtime performance of the tagger.

POS Tagger Tool

The easiest way to try out the POS Tagger is the command line tool. The tool is only intended for demonstration and testing. Download the English maxent pos model and start the POS Tagger Tool with this command:

			
$ bin/opennlp POSTagger en-pos-maxent.bin
		 

The POS Tagger now reads a tokenized sentence per line from stdin. Copy these two sentences to the console:

			
Pierre Vinken , 61 years old , will join the board as a nonexecutive director Nov. 29 .
Mr. Vinken is chairman of Elsevier N.V. , the Dutch publishing group .
		 

the POS Tagger will now echo the sentences with pos tags to the console:

			
Pierre_NNP Vinken_NNP ,_, 61_CD years_NNS old_JJ ,_, will_MD join_VB the_DT board_NN as_IN a_DT nonexecutive_JJ director_NN Nov._NNP 29_CD ._.
Mr._NNP Vinken_NNP is_VBZ chairman_NN of_IN Elsevier_NNP N.V._NNP ,_, the_DT Dutch_NNP publishing_VBG group_NN
		 

The tag set used by the English pos model is the Penn Treebank tag set. See the link below for a description of the tags.

POS Tagger API

The POS Tagger can be embedded into an application via its API. First the pos model must be loaded into memory from disk or another source. In the sample below it is loaded from disk.

				
InputStream modelIn = null;
POSModel model = null;

try {
  modelIn = new FileInputStream("en-pos-maxent.bin");
  model = new POSModel(modelIn);
}
catch (IOException e) {
  // Model loading failed, handle the error
  e.printStackTrace();
}
finally {
  if (modelIn != null) {
    try {
      modelIn.close();
    }
    catch (IOException e) {
    }
  }
}
			

After the model is loaded the POSTaggerME can be instantiated.

				
POSTaggerME tagger = new POSTaggerME(model);
			

The POS Tagger instance is now ready to tag data. It expects a tokenized sentence as input, which is represented as a String array; each String object in the array is one token.

The following code shows how to determine the most likely pos tag sequence for a sentence.

		  
String sent[] = new String[]{"Most", "large", "cities", "in", "the", "US", "had",
                             "morning", "and", "afternoon", "newspapers", "."};		  
String tags[] = tagger.tag(sent);
			

The tags array contains one part-of-speech tag for each token in the input array. The corresponding tag can be found at the same index as the token has in the input array. The confidence scores for the returned tags can be easily retrieved from a POSTaggerME with the following method call:

		  
double probs[] = tagger.probs();
			

The call to probs is stateful and will always return the probabilities of the last tagged sentence. The probs method should only be called after the tag method has been called; otherwise the behavior is undefined.

Some applications need to retrieve the n-best pos tag sequences and not only the best sequence. The topKSequences method is capable of returning the top sequences. It can be called in a similar way as tag.

		  
Sequence topSequences[] = tagger.topKSequences(sent);
			

Each Sequence object contains one sequence. The sequence can be retrieved via Sequence.getOutcomes() which returns a tags array and Sequence.getProbs() returns the probability array for this sequence.

Training

The POS Tagger can be trained on annotated training material. The training material is a collection of tokenized sentences where each token has the assigned part-of-speech tag. The native POS Tagger training material looks like this:

		  
About_IN 10_CD Euro_NNP ,_, I_PRP reckon_VBP ._.
That_DT sounds_VBZ good_JJ ._.
			

Each sentence must be on one line. The token/tag pairs are combined with "_". The token/tag pairs are whitespace separated. The data format does not define a document boundary. If a document boundary should be included in the training material it is suggested to use an empty line.

The Part-of-Speech Tagger can either be trained with a command line tool, or via a training API.

Training Tool

OpenNLP has a command line tool which is used to train the models available from the model download page on various corpora.

Usage of the tool:

				
$ bin/opennlp POSTaggerTrainer
Usage: opennlp POSTaggerTrainer -lang language -encoding charset [-iterations num] [-cutoff num] \ 
    [-dict tagdict] [-model maxent|perceptron|perceptron_sequence] -data trainingData -model model
-lang language     specifies the language which is being processed.
-encoding charset  specifies the encoding which should be used for reading and writing text.
-iterations num    specified the number of training iterations
-cutoff num        specifies the min number of times a feature must be seen
			 

The following command illustrates how an English part-of-speech model can be trained:

		  
$bin/opennlp POSTaggerTrainer -encoding UTF-8 -lang en -model-type maxent -data en-pos.train -model en-pos-maxent.bin
		 

Training API

The Part-of-Speech Tagger training API supports the programmatic training of a new pos model. Basically three steps are necessary to train it:

  • The application must open a sample data stream

  • Call the POSTagger.train method

  • Save the POSModel to a file or database

The following code illustrates that:

				
POSModel model = null;

InputStream dataIn = null;
try {
  dataIn = new FileInputStream("en-pos.train");
  ObjectStream<String> lineStream =
		new PlainTextByLineStream(dataIn, "UTF-8");
  ObjectStream<POSSample> sampleStream = new WordTagSampleStream(lineStream);

  model = POSTaggerME.train("en", sampleStream, ModelType.MAXENT,
      null, null, 100, 5);
}
catch (IOException e) {
  // Failed to read or parse training data, training failed
  e.printStackTrace();
}
finally {
  if (dataIn != null) {
    try {
      dataIn.close();
    }
    catch (IOException e) {
      // Not an issue, training already finished.
      // The exception should be logged and investigated
      // if part of a production system.
      e.printStackTrace();
    }
  }
}
	

The above code performs the first two steps, opening the data and training the model. The trained model must still be saved into an OutputStream, in the sample below it is written into a file.

				
OutputStream modelOut = null;
try {
  modelOut = new BufferedOutputStream(new FileOutputStream(modelFile));
  model.serialize(modelOut);
}
catch (IOException e) {
  // Failed to save model
  e.printStackTrace();
}
finally {
  if (modelOut != null) {
    try {
      modelOut.close();
    }
    catch (IOException e) {
      // Failed to correctly save model.
      // Written model might be invalid.
      e.printStackTrace();
    }
  }
}

Tag Dictionary

The tag dictionary is a word dictionary which specifies which tags a specific token can have. Using a tag dictionary has two advantages: inappropriate tags cannot be assigned to tokens in the dictionary, and the beam search algorithm has to consider fewer possibilities and can search faster.

The dictionary is defined in an XML format and can be created and stored with the POSDictionary class. Please for now check the Javadoc and source code of that class.

Note: Contributions to extend this section are welcome. The format should be documented and sample code should show how to use the dictionary.

Evaluation

The built in evaluation can measure the accuracy of the pos tagger. The accuracy can be measured on a test data set or via cross validation.

Evaluation Tool

There is a command line tool to evaluate a given model on a test data set. The command line tool currently does not support the cross validation evaluation (contribution welcome). The following command shows how the tool can be run:

				
$bin/opennlp POSTaggerEvaluator -encoding utf-8 -model pt.postagger.model -data pt.postagger.test
			 

This will display the resulting accuracy score, e.g.:

				
Loading model ... done
Evaluating ... done

Accuracy: 0.9659110277825124
			 

Chapter 7. Chunker

Chunking

Text chunking consists of dividing a text into syntactically correlated groups of words, like noun groups and verb groups, but does not specify their internal structure, nor their role in the main sentence.

Chunker Tool

The easiest way to try out the Chunker is the command line tool. The tool is only intended for demonstration and testing.

Download the English maxent chunker model from the website and start the Chunker Tool with this command:

				
bin/opennlp ChunkerME en-chunker.bin
		

The Chunker now reads a pos tagged sentence per line from stdin. Copy these two sentences to the console:

				
Rockwell_NNP International_NNP Corp._NNP 's_POS Tulsa_NNP unit_NN said_VBD it_PRP signed_VBD a_DT tentative_JJ agreement_NN extending_VBG its_PRP$ contract_NN with_IN Boeing_NNP Co._NNP to_TO provide_VB structural_JJ parts_NNS for_IN Boeing_NNP 's_POS 747_CD jetliners_NNS ._.
Rockwell_NNP said_VBD the_DT agreement_NN calls_VBZ for_IN it_PRP to_TO supply_VB 200_CD additional_JJ so-called_JJ shipsets_NNS for_IN the_DT planes_NNS ._.
		

the Chunker will now echo the sentences with their grouped tokens to the console:

				
[NP Rockwell_NNP International_NNP Corp._NNP ] [NP 's_POS Tulsa_NNP unit_NN ] [VP said_VBD ] [NP it_PRP ] [VP signed_VBD ] [NP a_DT tentative_JJ agreement_NN ] [VP extending_VBG ] [NP its_PRP$ contract_NN ] [PP with_IN ] [NP Boeing_NNP Co._NNP ] [VP to_TO provide_VB ] [NP structural_JJ parts_NNS ] [PP for_IN ] [NP Boeing_NNP ] [NP 's_POS 747_CD jetliners_NNS ] ._.
[NP Rockwell_NNP ] [VP said_VBD ] [NP the_DT agreement_NN ] [VP calls_VBZ ] [SBAR for_IN ] [NP it_PRP ] [VP to_TO supply_VB ] [NP 200_CD additional_JJ so-called_JJ shipsets_NNS ] [PP for_IN ] [NP the_DT planes_NNS ] ._.
		

The tag set used by the English pos model is the Penn Treebank tag set. See the link below for a description of the tags.

Chunking API

TODO
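Until this section is written, the following sketch outlines the typical usage, analogous to the other components; the class and method names (ChunkerModel, ChunkerME, chunk) should be verified against the Javadoc of your OpenNLP version.

				
InputStream modelIn = new FileInputStream("en-chunker.bin");

ChunkerModel model = null;
try {
  model = new ChunkerModel(modelIn);
}
finally {
  modelIn.close();
}

ChunkerME chunker = new ChunkerME(model);

// the tokens of one sentence and their part-of-speech tags
String sent[] = new String[]{"Rockwell", "said", "the", "agreement", "calls", "for",
    "it", "to", "supply", "200", "additional", "shipsets", "for", "the", "planes", "."};
String pos[] = new String[]{"NNP", "VBD", "DT", "NN", "VBZ", "IN",
    "PRP", "TO", "VB", "CD", "JJ", "NNS", "IN", "DT", "NNS", "."};

// one chunk tag (e.g. B-NP, I-NP, B-VP) per token
String chunks[] = chunker.chunk(sent, pos);
		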

Chunker Training

The pre-trained models might not be available for a desired language, or their performance might not be good enough outside the news domain.

These are the typical reasons to do custom training of the chunker on a new corpus or on a corpus which is extended with private training data taken from the data which should be analyzed.

The training data must be converted to the OpenNLP chunker training format, which is based on CoNLL-2000: the training data consists of three columns separated by spaces. Each word is put on a separate line and there is an empty line after each sentence. The first column contains the current word, the second its part-of-speech tag and the third its chunk tag. The chunk tags contain the name of the chunk type, for example I-NP for noun phrase words and I-VP for verb phrase words. Most chunk types have two kinds of chunk tags, B-CHUNK for the first word of the chunk and I-CHUNK for each other word in the chunk. Here is an example of the file format:

Sample sentence of the training data:

				
He        PRP  B-NP
reckons   VBZ  B-VP
the       DT   B-NP
current   JJ   I-NP
account   NN   I-NP
deficit   NN   I-NP
will      MD   B-VP
narrow    VB   I-VP
to        TO   B-PP
only      RB   B-NP
#         #    I-NP
1.8       CD   I-NP
billion   CD   I-NP
in        IN   B-PP
September NNP  B-NP
.         .    O
		

Training Tool

OpenNLP has a command line tool which is used to train the models available from the model download page on various corpora.

Usage of the tool:

				
$ bin/opennlp ChunkerTrainerME
Usage: opennlp ChunkerTrainerME-lang language -encoding charset [-iterations num] [-cutoff num] -data trainingData -model model
-lang language     specifies the language which is being processed.
-encoding charset  specifies the encoding which should be used for reading and writing text.
-iterations num    specified the number of training iterations
-cutoff num        specifies the min number of times a feature must be seen
		

It is now assumed that the English chunker model should be trained from a file called en-chunker.train which is encoded as UTF-8. The following command will train the chunker and write the model to en-chunker.bin:

		
bin/opennlp ChunkerTrainerME -encoding UTF-8 -lang en -data en-chunker.train -model en-chunker.bin
		

Additionally it is possible to specify the number of iterations and the cutoff. A training API sketch is shown below.
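The following sketch assumes a ChunkSampleStream which parses the CoNLL-style format shown above and a ChunkerME.train method with the shown parameters; please verify both against the Javadoc of your OpenNLP version.

		
ObjectStream<String> lineStream =
    new PlainTextByLineStream(new FileInputStream("en-chunker.train"), "UTF-8");
ObjectStream<ChunkSample> sampleStream = new ChunkSampleStream(lineStream);

// assumed argument order: language, samples, cutoff, iterations
ChunkerModel model = ChunkerME.train("en", sampleStream, 5, 100);

OutputStream modelOut = null;
try {
  modelOut = new BufferedOutputStream(new FileOutputStream("en-chunker.bin"));
  model.serialize(modelOut);
} finally {
  if (modelOut != null)
    modelOut.close();
}
		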

Chunker Evaluation

The built in evaluation can measure the chunker performance. The performance is either measured on a test dataset or via cross validation.

Chunker Evaluation Tool

The following command shows how the tool can be run:

				
bin/opennlp ChunkerEvaluator
Usage: opennlp ChunkerEvaluator [-encoding charsetName] -data data -model model
		

A sample of the command, assuming you have a data sample named en-chunker.eval and you trained a model called en-chunker.bin:

				
bin/opennlp ChunkerEvaluator -lang en -encoding UTF-8 -data en-chunker.eval -model en-chunker.bin
		

and here is a sample output:

		
Precision: 0.9255923572240226
Recall: 0.9220610430991112
F-Measure: 0.9238233255623465
		

You can also use the tool to perform 10-fold cross validation of the Chunker. The following command shows how the tool can be run:

				
bin/opennlp ChunkerCrossValidator
Usage: opennlp ChunkerCrossValidator -lang language -encoding charset [-iterations num] [-cutoff num]
-lang language     specifies the language which is being processed.
-encoding charset  specifies the encoding which should be used for reading and writing text.
-iterations num    specified the number of training iterations
-cutoff num        specifies the min number of times a feature must be seen
-data trainingData      training data used for cross validation
		

It is not necessary to pass a model. The tool will automatically split the data for training and evaluation:

				
bin/opennlp ChunkerCrossValidator -lang pt -encoding UTF-8 -data en-chunker.cross
		

Chapter 8. Parser

Parsing

Parser Tool

The easiest way to try out the Parser is the command line tool. The tool is only intended for demonstration and testing. Download the English chunking parser model from our website and start the Parser Tool with the following command.

				
$bin/opennlp Parser en-parser.bin en-parser-chunking.bin
		

Loading the big parser model can take several seconds, be patient. Copy this sample sentence to the console.

				
The quick brown fox jumps over the lazy dog .
		

The parser should now print the following to the console.

				
(TOP (NP (NP (DT The) (JJ quick) (JJ brown) (NN fox) (NNS jumps)) (PP (IN over) (NP (DT the) (JJ lazy) (NN dog))) (. .)))
		

With the following command the input can be read from a file and be written to an output file.

				
$ bin/opennlp Parser en-parser.bin en-parser-chunking.bin < article-tokenized.txt > article-parsed.txt.
		

The article-tokenized.txt file must contain one sentence per line which is tokenized with the English tokenizer model from our website. See the Tokenizer documentation for further details.

Parsing API

The Parser can be easily integrated into an application via its API. To instantiate a Parser the parser model must be loaded first.

				
InputStream modelIn = new FileInputStream("en-parser-chunking.bin");
try {
  ParserModel model = new ParserModel(modelIn);
}
catch (IOException e) {
  e.printStackTrace();
}
finally {
  if (modelIn != null) {
    try {
      modelIn.close();
    }
    catch (IOException e) {
    }
  }
}
		

Unlike with the other components, a factory method should be used to instantiate the Parser instead of creating it via the new operator. The parser model is trained either for the chunking parser or for the tree insert parser, and the parser implementation must be chosen accordingly. The factory method will read a type parameter from the model and create an instance of the corresponding parser implementation.

				
Parser parser = ParserFactory.create(model);
		

Right now the tree insert parser is still experimental and there is no pre-trained model for it. The parser expects a whitespace tokenized sentence. A utility method from the command line tool can parse the sentence String. The following code shows how the parser can be called.

				
String sentence = "The quick brown fox jumps over the lazy dog .";
Parse topParses[] = ParserTool.parseLine(sentence, parser, 1);
		

The topParses array only contains one parse because the number of parses is set to 1. The Parse object contains the parse tree. To display the parse tree call the show method, which either prints the parse to the console or into a provided StringBuffer, similar to Exception.printStackTrace. TODO: Extend this section with more information about the Parse object.
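For example, the best parse can be printed like this (a minimal sketch; as described above, show either writes to the console or into a passed StringBuffer):

				
Parse bestParse = topParses[0];

// print the bracketed parse tree to the console
bestParse.show();

// or collect it into a StringBuffer
StringBuffer sb = new StringBuffer();
bestParse.show(sb);
System.out.println(sb.toString());
		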

Parser Training

OpenNLP offers two different parser implementations, the chunking parser and the treeinsert parser. The latter is still experimental and not recommended for production use. (TODO: Add a section which explains the two different approaches) The training can either be done with the command line tool or the training API. In the first case the training data must be available in the OpenNLP format, which is the Penn Treebank format, but with the limitation of one sentence per line.

				
(TOP (S (NP-SBJ (DT Some) )(VP (VBP say) (NP (NNP November) ))(. .) ))
(TOP (S (NP-SBJ (PRP I) )(VP (VBP say) (NP (CD 1992) ))(. .) ('' '') ))
		

(TODO: Insert link which explains the Penn Treebank format.) A parser model also contains a pos tagger model; depending on the amount of available training data it is recommended to replace that tagger model with one which was trained on a larger corpus. The pre-trained parser model provided on the website does this to achieve better performance. (TODO: State which data the model on the website is trained on, and which data the tagger model is trained on)

Training Tool

OpenNLP has a command line tool which is used to train the models available from the model download page on various corpora. The data must be converted to the OpenNLP parser training format, which is shortly explained above. To train the parser a head rules file is also needed. (TODO: Add documentation about the head rules file) Usage of the tool:

				
$ bin/opennlp ParserTrainer
Usage: opennlp ParserTrainer-lang language -encoding charset [-iterations num] [-cutoff num] -head-rules head_rules -data trainingData -model model
-lang language     specifies the language which is being processed.
-encoding charset  specifies the encoding which should be used for reading and writing text.
-iterations num    specified the number of training iterations
-cutoff num        specifies the min number of times a feature must be seen
		

The model on the website was trained with the following command:

		
$bin/opennlp ParserTrainer -encoding ISO-8859-1 -lang en -parserType CHUNKING -head-rules head_rules \
    -data train.all -model en-parser-chunking.bin
		

It is also possible to specify the cutoff and the number of iterations; these parameters are used for all trained models. The -parserType parameter is optional; to use the tree insertion parser, specify TREEINSERT as the type. The TaggerModelReplacer tool replaces the tagger model inside the parser model with a new one. Note: The original parser model will be overwritten with the new parser model which contains the replaced tagger model.

		
$bin/opennlp TaggerModelReplacer  models/en-parser-chunking.bin models/en-pos-maxent.bin
		

Additionally there are tools to just retrain the build or the check model.

Chapter 9. Coreference Resolution

TODO: Write documentation about the coref component. Any contributions are very welcome. If you want to contribute please contact us on the mailing list or comment on the jira issue OPENNLP-48.

Chapter 10. Corpora

OpenNLP has built-in support to convert various corpora into the native training format needed by the different trainable components.

CONLL

CoNLL stands for the Conference on Computational Natural Language Learning and is not a single project but a consortium of developers attempting to broaden the computing environment. More information about the entire conference series can be obtained on the CoNLL web site.

CONLL 2000

The shared task of CoNLL-2000 is chunking.

Getting the data

CoNLL-2000 made available training and test data for the Chunk task in English. The data consists of the same partitions of the Wall Street Journal corpus (WSJ) as the widely used data for noun phrase chunking: sections 15-18 as training data (211727 tokens) and section 20 as test data (47377 tokens). The annotation of the data has been derived from the WSJ corpus by a program written by Sabine Buchholz from Tilburg University, The Netherlands. Both training and test data can be obtained from http://www.cnts.ua.ac.be/conll2000/chunking.

Converting the data

The data does not need to be transformed because the Apache OpenNLP Chunker follows the CoNLL 2000 format for training. Check the Chunker Training section to learn more.

Training

We can train the model for the Chunker using the train.txt available at CONLL 2000:

			
bin/opennlp ChunkerTrainerME -encoding UTF-8 -lang en -iterations 500 \
-data train.txt -model en-chunker.bin
		

			
Indexing events using cutoff of 5

	Computing event counts...  done. 211727 events
	Indexing...  done.
Sorting and merging events... done. Reduced 211727 events to 197252.
Done indexing.
Incorporating indexed data for training...  
done.
	Number of Event Tokens: 197252
	    Number of Outcomes: 22
	  Number of Predicates: 107838
...done.
Computing model parameters...
Performing 500 iterations.
  1:  .. loglikelihood=-654457.1455212828	0.2601510435608118
  2:  .. loglikelihood=-239513.5583724216	0.9260037690044255
  3:  .. loglikelihood=-141313.1386347238	0.9443387003074715
  4:  .. loglikelihood=-101083.50853437989	0.954375209585929
... cut lots of iterations ...
498:  .. loglikelihood=-1710.8874647317095	0.9995040783650645
499:  .. loglikelihood=-1708.0908900815848	0.9995040783650645
500:  .. loglikelihood=-1705.3045902366732	0.9995040783650645
Writing chunker model ... done (4.019s)

Wrote chunker model to path: .\en-chunker.bin
		

Evaluating

We evaluate the model using the file test.txt available at CONLL 2000:

			
$ bin/opennlp ChunkerEvaluator -encoding utf8 -model en-chunker.bin -data test.txt
		

			
Loading Chunker model ... done (0,665s)
current: 85,8 sent/s avg: 85,8 sent/s total: 86 sent
current: 88,1 sent/s avg: 87,0 sent/s total: 174 sent
current: 156,2 sent/s avg: 110,0 sent/s total: 330 sent
current: 192,2 sent/s avg: 130,5 sent/s total: 522 sent
current: 167,2 sent/s avg: 137,8 sent/s total: 689 sent
current: 179,2 sent/s avg: 144,6 sent/s total: 868 sent
current: 183,2 sent/s avg: 150,3 sent/s total: 1052 sent
current: 183,2 sent/s avg: 154,4 sent/s total: 1235 sent
current: 169,2 sent/s avg: 156,0 sent/s total: 1404 sent
current: 178,2 sent/s avg: 158,2 sent/s total: 1582 sent
current: 172,2 sent/s avg: 159,4 sent/s total: 1754 sent
current: 177,2 sent/s avg: 160,9 sent/s total: 1931 sent


Average: 161,6 sent/s 
Total: 2013 sent
Runtime: 12.457s

Precision: 0.9244354736974896
Recall: 0.9216837162502096
F-Measure: 0.9230575441395671
		

CONLL 2002

TODO: Document how to use the converters for CONLL 2002. Any contributions are very welcome. If you want to contribute please contact us on the mailing list or comment on the jira issue OPENNLP-46.

CONLL 2003

The shared task of CoNLL-2003 is language independent named entity recognition for English and German.

Getting the data

The English data is the Reuters Corpus, which is a collection of news wire articles. The Reuters Corpus can be obtained free of charge from NIST for research purposes: http://trec.nist.gov/data/reuters/reuters.html

The German data is a collection of articles from the German newspaper Frankfurter Rundschau. The articles are part of the ECI Multilingual Text Corpus, which can be obtained for $75 (as of 2010) from the Linguistic Data Consortium: http://www.ldc.upenn.edu/Catalog/CatalogEntry.jsp?catalogId=LDC94T5

After one of the corpora is available, the data must be transformed into the CONLL format as explained in the README file. The transformed data can then be read by the OpenNLP CONLL03 converter.

Converting the data

To convert the training data to the OpenNLP format:

			
$ bin/opennlp TokenNameFinderConverter conll03 -data eng.train -lang en -types per > corpus_train.txt
		

Optionally, you can convert the test samples (testa and testb) as well.

			
bin/opennlp TokenNameFinderConverter conll03 -data eng.testa -lang en -types per > corpus_testa.txt
bin/opennlp TokenNameFinderConverter conll03 -data eng.testb -lang en -types per > corpus_testb.txt
		
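
After conversion, each line of the output contains one sentence with the person names marked in the OpenNLP name finder training format. The sentence below is made up and only illustrates the layout:

<START:person> John Smith <END> met <START:person> Mary Jones <END> in Brussels on Monday .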

Training with English data

To train the model for the name finder:

			
$ bin/opennlp TokenNameFinderTrainer -lang en -encoding utf8 -iterations 500 \
    -data corpus_train.txt -model en_ner_person.bin
		

			
Indexing events using cutoff of 5

	Computing event counts...  done. 203621 events
	Indexing...  done.
Sorting and merging events... done. Reduced 203621 events to 179409.
Done indexing.
Incorporating indexed data for training...  
done.
	Number of Event Tokens: 179409
	    Number of Outcomes: 3
	  Number of Predicates: 58814
...done.
Computing model parameters...
Performing 500 iterations.
  1:  .. loglikelihood=-223700.5328318588	0.9453494482396216
  2:  .. loglikelihood=-40525.939777363084	0.9467933071736215
  3:  .. loglikelihood=-24893.98837874921	0.9598518816821447
  4:  .. loglikelihood=-18420.3379471033	0.9712996203731442
... cut lots of iterations ...
498:  .. loglikelihood=-952.8501399442295	0.9988950059178572
499:  .. loglikelihood=-952.0600155746948	0.9988950059178572
500:  .. loglikelihood=-951.2722802086295	0.9988950059178572
Writing name finder model ... done (1.638s)

Wrote name finder model to
path: .\en_ner_person.bin
		
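
The trained model can also be used programmatically via the Name Finder API. The following is a minimal sketch; the model file name matches the training command above, and the token array is made-up example input.

import java.io.FileInputStream;
import java.io.InputStream;

import opennlp.tools.namefind.NameFinderME;
import opennlp.tools.namefind.TokenNameFinderModel;
import opennlp.tools.util.Span;

public class NameFinderUsageSketch {
    public static void main(String[] args) throws Exception {
        InputStream modelIn = new FileInputStream("en_ner_person.bin");
        try {
            TokenNameFinderModel model = new TokenNameFinderModel(modelIn);
            NameFinderME nameFinder = new NameFinderME(model);

            // Made-up example sentence, already tokenized.
            String[] tokens = { "John", "Smith", "is", "visiting", "London", "." };
            Span[] names = nameFinder.find(tokens);
            for (Span name : names) {
                System.out.println(name.getType() + ": "
                    + name.getStart() + "-" + name.getEnd());
            }

            // Forget adaptive data collected so far, e.g. between documents.
            nameFinder.clearAdaptiveData();
        } finally {
            modelIn.close();
        }
    }
}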

Evaluating with English data

Since we created the test A and B files above, we can use them to evaluate the model.

			
$ bin/opennlp TokenNameFinderEvaluator -lang en -encoding utf8 -model en_ner_person.bin \
    -data corpus_testa.txt
		

			
Loading Token Name Finder model ... done (0.359s)
current: 190.2 sent/s avg: 190.2 sent/s total: 199 sent
current: 648.3 sent/s avg: 415.9 sent/s total: 850 sent
current: 530.1 sent/s avg: 453.6 sent/s total: 1380 sent
current: 793.8 sent/s avg: 539.0 sent/s total: 2178 sent
current: 705.4 sent/s avg: 571.9 sent/s total: 2882 sent


Average: 569.4 sent/s
Total: 3251 sent
Runtime: 5.71s

Precision: 0.9366247297154147
Recall: 0.739956568946797
F-Measure: 0.8267557582133971
		

Arvores Deitadas

The Portuguese corpora available from the Linguateca project (http://www.linguateca.pt) follow the Arvores Deitadas (AD) format. Apache OpenNLP includes tools to convert from the AD format to its native format.

Getting the data

The Corpus can be downloaded from here: http://www.linguateca.pt/floresta/corpus.html

The Name Finder models were trained using the Amazonia corpus: amazonia.ad. The Chunker models were trained using the Bosque_CF_8.0.ad.

Converting the data

To extract NameFinder training data from Amazonia corpus:

			
$ bin/opennlp TokenNameFinderConverter ad -encoding ISO-8859-1 -data amazonia.ad \
    -lang pt -types per > corpus.txt
			

To extract Chunker training data from Bosque_CF_8.0.ad corpus:

			
$ bin/opennlp ChunkerConverter ad -encoding ISO-8859-1 -data Bosque_CF_8.0.ad.txt > bosque-chunk
			

Evaluation

To perform the evaluation, the corpus was split into a training part and a test part.

			
$ sed '1,55172d' corpus.txt > corpus_train.txt
$ sed '55172,100000000d' corpus.txt > corpus_test.txt
			

			
$ bin/opennlp TokenNameFinderTrainer -lang PT -encoding UTF-8 -data corpus_train.txt \
    -model pt-ner.bin -cutoff 20
..
$ bin/opennlp TokenNameFinderEvaluator -encoding UTF-8 -model ../model/pt-ner.bin \
    -data corpus_test.txt

Precision: 0.8005071889818507
Recall: 0.7450581122145297
F-Measure: 0.7717879983140168
			

Leipzig Corpora

The Leipzig Corpora collection provides corpora in many different languages. Each corpus is a collection of individual sentences collected from the web and from newspapers. The corpora are available as plain text and as MySQL database tables; the OpenNLP integration can only use the plain text version.

The corpora in the different languages can be used to train a document categorizer model which can detect the document language. The individual plain text packages can be downloaded here: http://corpora.uni-leipzig.de/download.html

After all packages have been downloaded, unzip them and use the following commands to produce a training file which can be processed by the Document Categorizer:

			
bin/opennlp DoccatConverter leipzig -lang cat -data Leipzig/cat100k/sentences.txt >> lang.train
bin/opennlp DoccatConverter leipzig -lang de -data Leipzig/de100k/sentences.txt >> lang.train
bin/opennlp DoccatConverter leipzig -lang dk -data Leipzig/dk100k/sentences.txt >> lang.train
bin/opennlp DoccatConverter leipzig -lang ee -data Leipzig/ee100k/sentences.txt >> lang.train
bin/opennlp DoccatConverter leipzig -lang en -data Leipzig/en100k/sentences.txt >> lang.train
bin/opennlp DoccatConverter leipzig -lang fi -data Leipzig/fi100k/sentences.txt >> lang.train
bin/opennlp DoccatConverter leipzig -lang fr -data Leipzig/fr100k/sentences.txt >> lang.train
bin/opennlp DoccatConverter leipzig -lang it -data Leipzig/it100k/sentences.txt >> lang.train
bin/opennlp DoccatConverter leipzig -lang jp -data Leipzig/jp100k/sentences.txt >> lang.train
bin/opennlp DoccatConverter leipzig -lang kr -data Leipzig/kr100k/sentences.txt >> lang.train
bin/opennlp DoccatConverter leipzig -lang nl -data Leipzig/nl100k/sentences.txt >> lang.train
bin/opennlp DoccatConverter leipzig -lang no -data Leipzig/no100k/sentences.txt >> lang.train
bin/opennlp DoccatConverter leipzig -lang se -data Leipzig/se100k/sentences.txt >> lang.train
bin/opennlp DoccatConverter leipzig -lang sorb -data Leipzig/sorb100k/sentences.txt >> lang.train
bin/opennlp DoccatConverter leipzig -lang tr -data Leipzig/tr100k/sentences.txt >> lang.train
	

Depending on your platform's default locale and encoding it might be problematic to output characters which are not supported by that encoding. We suggest running these commands on a platform which has a Unicode default encoding, e.g. Linux with UTF-8.

After the lang.train file is created, the actual language detection document categorizer model can be created with the following command.

			
bin/opennlp DoccatTrainer -lang x-unspecified -encoding MacRoman -data ../lang.train -model lang.model
Indexing events using cutoff of 5

	Computing event counts...  done. 10000 events
	Indexing...  done.
Sorting and merging events... done. Reduced 10000 events to 10000.
Done indexing.
Incorporating indexed data for training...  
done.
	Number of Event Tokens: 10000
	    Number of Outcomes: 2
	  Number of Predicates: 42730
...done.
Computing model parameters...
Performing 100 iterations.
  1:  .. loglikelihood=-6931.471805600547	0.5
  2:  .. loglikelihood=-2110.9654348555955	1.0
... cut lots of iterations ...

 99:  .. loglikelihood=-0.449640418555347	1.0
100:  .. loglikelihood=-0.443746359746235	1.0
Writing document categorizer model ... done (1.210s)

Wrote document categorizer model to
path: /Users/joern/dev/opennlp-apache/opennlp/opennlp-tools/lang.model

	

In the sample above the language detection model was trained to distinguish two languages, Danish and English.

After the model is created it can be used to detect the two languages:

			
$ bin/opennlp Doccat ../lang.
lang.model  lang.train  
karkand:opennlp-tools joern$ bin/opennlp Doccat ../lang.model
Loading Document Categorizer model ... done (0.289s)
The American Finance Association is pleased to announce the award of ...
en	The American Finance Association is pleased to announce the award of ..
.
Danskerne skal betale for den økonomiske krise ved at blive længere på arbejdsmarkedet .
dk	Danskerne skal betale for den økonomiske krise ved at blive længere på arbejdsmarkedet .	
	
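
The model can also be used programmatically through the Document Categorizer API. The following is a minimal sketch; the model file name matches the training command above, and the input string is just example text.

import java.io.FileInputStream;
import java.io.InputStream;

import opennlp.tools.doccat.DoccatModel;
import opennlp.tools.doccat.DocumentCategorizerME;

public class LanguageDetectorSketch {
    public static void main(String[] args) throws Exception {
        InputStream modelIn = new FileInputStream("lang.model");
        try {
            DoccatModel model = new DoccatModel(modelIn);
            DocumentCategorizerME categorizer = new DocumentCategorizerME(model);

            // Example input text; the categorizer returns a score per language.
            String text = "The American Finance Association is pleased to announce the award .";
            double[] outcomes = categorizer.categorize(text);
            System.out.println(categorizer.getBestCategory(outcomes) + "\t" + text);
        } finally {
            modelIn.close();
        }
    }
}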

Chapter 11. Machine Learning

Maximum Entropy

To explain what maximum entropy is, it will be simplest to quote from Manning and Schutze* (p. 589): Maximum entropy modeling is a framework for integrating information from many heterogeneous information sources for classification. The data for a classification problem is described as a (potentially large) number of features. These features can be quite complex and allow the experimenter to make use of prior knowledge about what types of informations are expected to be important for classification. Each feature corresponds to a constraint on the model. We then compute the maximum entropy model, the model with the maximum entropy of all the models that satisfy the constraints. This term may seem perverse, since we have spent most of the book trying to minimize the (cross) entropy of models, but the idea is that we do not want to go beyond the data. If we chose a model with less entropy, we would add `information' constraints to the model that are not justified by the empirical evidence available to us. Choosing the maximum entropy model is motivated by the desire to preserve as much uncertainty as possible.

So that gives a rough idea of what the maximum entropy framework is. Don't assume anything about your probability distribution other than what you have observed.
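
As a rough sketch of the standard conditional formulation (the notation below is generic and not tied to the OpenNLP API), such a model assigns to an outcome o in a context c the probability

p(o \mid c) = \frac{1}{Z(c)} \exp\left( \sum_{j=1}^{k} \lambda_j f_j(o, c) \right),
\qquad
Z(c) = \sum_{o'} \exp\left( \sum_{j=1}^{k} \lambda_j f_j(o', c) \right)

where each f_j is a binary feature indicating a property of the outcome and its context, each lambda_j is a weight estimated during training, and Z(c) normalizes the distribution over all outcomes.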

On the engineering level, using maxent is an excellent way of creating programs which perform very difficult classification tasks very well. For example, precision and recall figures for programs using maxent models have reached, or define, the state of the art on tasks like part-of-speech tagging, sentence detection, prepositional phrase attachment, and named entity recognition. An added benefit is that the person creating a maxent model only needs to inform the training procedure of the event space, and need not worry about independence between features.

While the authors of this implementation of maximum entropy are generally interested in using maxent models in natural language processing, the framework is certainly quite general and useful for a much wider variety of fields. In fact, maximum entropy modeling was originally developed for statistical physics.

For a very in-depth discussion of how maxent can be used in natural language processing, try reading Adwait Ratnaparkhi's dissertation. Also, check out Berger, Della Pietra, and Della Pietra's paper A Maximum Entropy Approach to Natural Language Processing, which provides an excellent introduction and discussion of the framework.

* Foundations of Statistical Natural Language Processing. Christopher D. Manning, Hinrich Schutze. Cambridge, Mass.: MIT Press, 1999.

Implementation

We have tried to make the opennlp.maxent implementation easy to use. To create a model, one needs (of course) the training data, and then implementations of two interfaces in the opennlp.maxent package, EventStream and ContextGenerator. These have fairly simple specifications, and example implementations can be found in the OpenNLP Tools preprocessing components.
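
For a quick experiment one does not even have to implement EventStream by hand; the bundled BasicEventStream can read events from a plain text file in which each line lists whitespace-separated feature strings followed by the outcome. The following is only a sketch (the file name events.train is made up, and the exact class and method names should be checked against the javadoc of the maxent version you are using):

import java.io.File;
import java.io.FileReader;

import opennlp.maxent.BasicEventStream;
import opennlp.maxent.GIS;
import opennlp.maxent.GISModel;
import opennlp.maxent.PlainTextByLineDataStream;
import opennlp.maxent.io.SuffixSensitiveGISModelWriter;

public class CreateMaxentModelSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical training file: one event per line, e.g.
        // "feature_1 feature_2 ... feature_n outcome"
        BasicEventStream events = new BasicEventStream(
            new PlainTextByLineDataStream(new FileReader("events.train")));

        // Train a GIS model with the default settings.
        GISModel model = GIS.trainModel(events);

        // Persist the model; the suffix-sensitive writer chooses the
        // on-disk format based on the file name suffix (.gz is compressed).
        new SuffixSensitiveGISModelWriter(model,
            new File("my-maxent-model.bin.gz")).persist();
    }
}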

We have also set in place some interfaces and code to make it easier to automate the training and evaluation process (the Evalable interface and the TrainEval class). It is not necessary to use this functionality, but if you do you'll find it much easier to see how well your models are doing. The opennlp.grok.preprocess.namefind package is an example of a maximum entropy component which uses this functionality.

We use several techniques to reduce the size of the models when writing them to disk, which also means that reading in a model for use is much quicker than with less compact encodings of the model. This was especially important to us since we use many maxent models in the Grok library, and we wanted the start-up time and the physical size of the library to be as small as possible. As of version 1.2.0, maxent has an io package which greatly simplifies the process of loading and saving models in different formats.
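
For instance, a model persisted with the suffix-sensitive writer from the sketch above could be read back and evaluated on a context of feature strings. Again, this is only a sketch with made-up feature names:

import java.io.File;

import opennlp.maxent.GISModel;
import opennlp.maxent.io.SuffixSensitiveGISModelReader;

public class LoadMaxentModelSketch {
    public static void main(String[] args) throws Exception {
        // Read the model back; the reader infers the format from the suffix.
        GISModel model = (GISModel) new SuffixSensitiveGISModelReader(
            new File("my-maxent-model.bin.gz")).getModel();

        // Evaluate the model on a made-up context of feature strings.
        String[] context = { "feature_1", "feature_2" };
        double[] probs = model.eval(context);
        System.out.println("Best outcome: " + model.getBestOutcome(probs));
    }
}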

Chapter 12. UIMA Integration

The UIMA Integration wraps the OpenNLP components in UIMA Analysis Engines which can be used to automatically annotate text and train new OpenNLP models from annotated text.

Running the pear sample in CVD

The Cas Visual Debugger (CVD) is shipped as part of the UIMA distribution and is a tool which can run the OpenNLP UIMA Annotators and display their analysis results. The source distribution comes with a build script which can create a sample UIMA application that includes the sentence detector, tokenizer, POS tagger, chunker and name finders for English. This sample application is packaged in the pear format and must be installed with the pear installer before it can be run by CVD. Please consult the UIMA documentation for further information about the pear installer.

The OpenNLP UIMA pear file must be built manually. First download the source distribution, unzip it and go to the apache-opennlp/opennlp folder. Type "mvn install" to build everything. To build the pear file, go to apache-opennlp/opennlp-uima and build it as shown below. Note that the models will be downloaded from the old SourceForge repository and are not licensed under the AL 2.0.

			
$ ant -f createPear.xml 
Buildfile: createPear.xml

createPear:
     [echo] ##### Creating OpenNlpTextAnalyzer pear #####
     [copy] Copying 13 files to OpenNlpTextAnalyzer/desc
     [copy] Copying 1 file to OpenNlpTextAnalyzer/metadata
     [copy] Copying 1 file to OpenNlpTextAnalyzer/lib
     [copy] Copying 3 files to OpenNlpTextAnalyzer/lib
    [mkdir] Created dir: OpenNlpTextAnalyzer/models
      [get] Getting: http://opennlp.sourceforge.net/models-1.5/en-token.bin
      [get] To: OpenNlpTextAnalyzer/models/en-token.bin
      [get] Getting: http://opennlp.sourceforge.net/models-1.5/en-sent.bin
      [get] To: OpenNlpTextAnalyzer/models/en-sent.bin
      [get] Getting: http://opennlp.sourceforge.net/models-1.5/en-ner-date.bin
      [get] To: OpenNlpTextAnalyzer/models/en-ner-date.bin
      [get] Getting: http://opennlp.sourceforge.net/models-1.5/en-ner-location.bin
      [get] To: OpenNlpTextAnalyzer/models/en-ner-location.bin
      [get] Getting: http://opennlp.sourceforge.net/models-1.5/en-ner-money.bin
      [get] To: OpenNlpTextAnalyzer/models/en-ner-money.bin
      [get] Getting: http://opennlp.sourceforge.net/models-1.5/en-ner-organization.bin
      [get] To: OpenNlpTextAnalyzer/models/en-ner-organization.bin
      [get] Getting: http://opennlp.sourceforge.net/models-1.5/en-ner-percentage.bin
      [get] To: OpenNlpTextAnalyzer/models/en-ner-percentage.bin
      [get] Getting: http://opennlp.sourceforge.net/models-1.5/en-ner-person.bin
      [get] To: OpenNlpTextAnalyzer/models/en-ner-person.bin
      [get] Getting: http://opennlp.sourceforge.net/models-1.5/en-ner-time.bin
      [get] To: OpenNlpTextAnalyzer/models/en-ner-time.bin
      [get] Getting: http://opennlp.sourceforge.net/models-1.5/en-pos-maxent.bin
      [get] To: OpenNlpTextAnalyzer/models/en-pos-maxent.bin
      [get] Getting: http://opennlp.sourceforge.net/models-1.5/en-chunker.bin
      [get] To: OpenNlpTextAnalyzer/models/en-chunker.bin
      [zip] Building zip: OpenNlpTextAnalyzer.pear

BUILD SUCCESSFUL
Total time: 3 minutes 20 seconds
		 

After the pear is installed, start the Cas Visual Debugger shipped with the UIMA framework and click on Tools -> Load AE. Then select the opennlp.uima.OpenNlpTextAnalyzer_pear.xml file in the file dialog. Now enter some text and start the analysis engine with "Run -> Run OpenNLPTextAnalyzer". Afterwards the results will be displayed. You should see sentences, tokens, chunks, POS tags and maybe some names. Remember that the input text must be written in English.

Further Help

For more information about how to use the integration please consult the javadoc of the individual Analysis Engines and check out the included XML descriptors.

TODO: Extend this documentation with information about the individual components. If you want to contribute, please contact us on the mailing list or comment on the jira issue OPENNLP-49.