/// Works in conjunction with the SinkTokenizer to provide the ability to set aside tokens
/// that have already been analyzed. This is useful in situations where multiple fields share
/// many common analysis steps and then go their separate ways.
///
/// It is also useful for doing things like entity extraction or proper noun analysis as
/// part of the analysis workflow and saving off those tokens for use in another field.
///
///
/// SinkTokenizer sink1 = new SinkTokenizer();
/// SinkTokenizer sink2 = new SinkTokenizer();
/// TokenStream source1 = new TeeTokenFilter(new TeeTokenFilter(new WhitespaceTokenizer(reader1), sink1), sink2);
/// TokenStream source2 = new TeeTokenFilter(new TeeTokenFilter(new WhitespaceTokenizer(reader2), sink1), sink2);
/// TokenStream final1 = new LowerCaseFilter(source1);
/// TokenStream final2 = source2;
/// TokenStream final3 = new EntityDetect(sink1);
/// TokenStream final4 = new URLDetect(sink2);
/// d.add(new Field("f1", final1));
/// d.add(new Field("f2", final2));
/// d.add(new Field("f3", final3));
/// d.add(new Field("f4", final4));
///
/// In this example, sink1 and sink2 will both get tokens from both reader1 and reader2 after the
/// WhitespaceTokenizer, and now we can further wrap any of these in extra analysis, and more "sources"
/// can be inserted if desired.
/// It is important that tees are consumed before sinks (in the above example, the tee field names
/// must sort before the sink field names), so that the source streams are fully read before their
/// sinks are.
/// Note that the EntityDetect and URLDetect TokenStreams used in the example are fictitious and do not currently exist in Lucene.
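/// The tee/sink mechanics above can be sketched without Lucene. The following is a minimal,
/// self-contained illustration (the Stream, Tee, and Sink types here are simplified stand-ins,
/// not Lucene's real API); it also shows why a sink yields nothing until its tee has been consumed.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class TeeSinkSketch {
    // Simplified stand-in for a token stream: next() returns null when exhausted.
    interface Stream { String next(); }

    // A sink caches every token handed to it and replays the cache when read.
    static class Sink implements Stream {
        private final List<String> cached = new ArrayList<>();
        private int pos = 0;
        void store(String token) { cached.add(token); }
        public String next() { return pos < cached.size() ? cached.get(pos++) : null; }
    }

    // A tee passes tokens through unchanged, copying each one into its sink.
    static class Tee implements Stream {
        private final Stream input;
        private final Sink sink;
        Tee(Stream input, Sink sink) { this.input = input; this.sink = sink; }
        public String next() {
            String t = input.next();
            if (t != null) sink.store(t);
            return t;
        }
    }

    // A source backed by a fixed token list.
    static Stream source(String... tokens) {
        Iterator<String> it = Arrays.asList(tokens).iterator();
        return () -> it.hasNext() ? it.next() : null;
    }

    // Consume a stream to exhaustion, collecting its tokens.
    static List<String> drain(Stream s) {
        List<String> out = new ArrayList<>();
        for (String t = s.next(); t != null; t = s.next()) out.add(t);
        return out;
    }

    public static void main(String[] args) {
        Sink sink1 = new Sink();
        Sink sink2 = new Sink();
        Stream source1 = new Tee(new Tee(source("hello", "world"), sink1), sink2);
        // Reading a sink before its tee is consumed yields nothing.
        System.out.println(drain(sink1));   // []
        System.out.println(drain(source1)); // [hello, world]
        // Both sinks now hold the tokens that passed through the tee chain.
        System.out.println(drain(sink1));   // [hello, world]
        System.out.println(drain(sink2));   // [hello, world]
    }
}
```

/// The same ordering rule applies in actual Lucene code: a sink field indexed before its tee
/// fields would index no tokens.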
///
///
/// See LUCENE-1058.
///
/// WARNING: {@link TeeTokenFilter} and {@link SinkTokenizer} only work with the old TokenStream API.
/// If you switch to the new API, you need to use {@link TeeSinkTokenFilter} instead, which offers
/// the same functionality.
///
///