# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

org.apache.stanbol.enhancer.engines.entitylinking.labeltokenizer.SimpleLabelTokenizer.name=Stanbol Enhancer \
EntityLinking SimpleLabelTokenizer
org.apache.stanbol.enhancer.engines.entitylinking.labeltokenizer.SimpleLabelTokenizer.description=This \
is the default LabelTokenizer implementation. It behaves like the OpenNLP SimpleTokenizer but does not \
have any external dependencies.

org.apache.stanbol.enhancer.engines.entitylinking.labeltokenizer.OpenNlpLabelTokenizer.name=Stanbol Enhancer \
EntityLinking LabelTokenizer using OpenNLP
org.apache.stanbol.enhancer.engines.entitylinking.labeltokenizer.OpenNlpLabelTokenizer.description=This \
LabelTokenizer implementation uses the OpenNLP Tokenizers API for tokenizing Entity labels processed \
by the EntityLinkingEngine. It can be configured to load custom tokenizer models for specific \
languages.

service.ranking.name=Ranking
service.ranking.description=If two LabelTokenizers support the same language, the one with the \
higher ranking is called first. Lower-ranked LabelTokenizers are only used if higher-ranked ones \
return NULL on tokenize requests.

enhancer.engines.entitylinking.labeltokenizer.languages.name=Supported Languages
enhancer.engines.entitylinking.labeltokenizer.languages.description=The supported languages \
of this Tokenizer. Values can use '!{lang}' to exclude languages, '{lang}' to explicitly \
include languages and '*' as a wildcard. In addition, parameters are supported by using the \
'{lang};{param}={value}' syntax. Example: "!zh,de;model=my-de-tokenizer-model.zip,*" This will disable \
processing of Chinese, use the model named "my-de-tokenizer-model.zip" for German, and use the defaults \
for all other languages.
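
# A minimal configuration sketch (assumption: the 'languages' property id above is used as the
# runtime configuration key of the OpenNlpLabelTokenizer; the model file name is the illustrative
# one from the example in the description, not a file shipped with Stanbol):
# enhancer.engines.entitylinking.labeltokenizer.languages=!zh,de;model=my-de-tokenizer-model.zip,*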