The IndexWriter type exposes the following members.
Constructors
Name | Description | |
---|---|---|
IndexWriter(FileInfo, Analyzer) | Obsolete. Constructs an IndexWriter for the index in `path`. | |
IndexWriter(String, Analyzer) | Obsolete. Constructs an IndexWriter for the index in `path`. | |
IndexWriter(Directory, Analyzer) | Obsolete. Constructs an IndexWriter for the index in `d`. | |
IndexWriter(FileInfo, Analyzer, IndexWriter.MaxFieldLength) | Obsolete. Constructs an IndexWriter for the index in `path`. | |
IndexWriter(FileInfo, Analyzer, Boolean) | Obsolete. Constructs an IndexWriter for the index in `path`; if `create` is true, a new, empty index replaces any existing index in `path`. | |
IndexWriter(String, Analyzer, IndexWriter.MaxFieldLength) | Obsolete. Constructs an IndexWriter for the index in `path`. | |
IndexWriter(String, Analyzer, Boolean) | Obsolete. Constructs an IndexWriter for the index in `path`; if `create` is true, a new, empty index replaces any existing index in `path`. | |
IndexWriter(Directory, Analyzer, IndexWriter.MaxFieldLength) | Constructs an IndexWriter for the index in `d`. | |
IndexWriter(Directory, Analyzer, Boolean) | Obsolete. Constructs an IndexWriter for the index in `d`; if `create` is true, a new, empty index replaces any existing index in `d`. | |
IndexWriter(Directory, Boolean, Analyzer) | Obsolete. Constructs an IndexWriter for the index in `d`. | |
IndexWriter(FileInfo, Analyzer, Boolean, IndexWriter.MaxFieldLength) | Obsolete. Constructs an IndexWriter for the index in `path`; if `create` is true, a new, empty index replaces any existing index in `path`. | |
IndexWriter(String, Analyzer, Boolean, IndexWriter.MaxFieldLength) | Obsolete. Constructs an IndexWriter for the index in `path`; if `create` is true, a new, empty index replaces any existing index in `path`. | |
IndexWriter(Directory, Analyzer, IndexDeletionPolicy, IndexWriter.MaxFieldLength) | Expert: constructs an IndexWriter with a custom IndexDeletionPolicy, for the index in `d`. | |
IndexWriter(Directory, Analyzer, Boolean, IndexWriter.MaxFieldLength) | Constructs an IndexWriter for the index in `d`; if `create` is true, a new, empty index replaces any existing index in `d`. | |
IndexWriter(Directory, Boolean, Analyzer, IndexDeletionPolicy) | Obsolete. Expert: constructs an IndexWriter with a custom IndexDeletionPolicy, for the index in `d`. | |
IndexWriter(Directory, Boolean, Analyzer, Boolean) | Obsolete. Constructs an IndexWriter for the index in `d`; if `create` is true, a new, empty index replaces any existing index in `d`. | |
IndexWriter(Directory, Analyzer, IndexDeletionPolicy, IndexWriter.MaxFieldLength, IndexCommit) | Expert: constructs an IndexWriter on a specific commit point, with a custom IndexDeletionPolicy, for the index in `d`. | |
IndexWriter(Directory, Analyzer, Boolean, IndexDeletionPolicy, IndexWriter.MaxFieldLength) | Expert: constructs an IndexWriter with a custom IndexDeletionPolicy, for the index in `d`; if `create` is true, a new, empty index replaces any existing index in `d`. | |
IndexWriter(Directory, Boolean, Analyzer, Boolean, IndexDeletionPolicy) | Obsolete. Expert: constructs an IndexWriter with a custom IndexDeletionPolicy, for the index in `d`; if `create` is true, a new, empty index replaces any existing index in `d`. | |
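For orientation, here is a minimal sketch of opening a writer with the non-obsolete IndexWriter(Directory, Analyzer, IndexWriter.MaxFieldLength) overload from the table above. The index path "index" and the choice of StandardAnalyzer are illustrative assumptions, not requirements:

```csharp
// Minimal sketch: open, then close, an IndexWriter on a filesystem index.
// The path "index" and the analyzer choice are assumptions for illustration.
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Index;
using Lucene.Net.Store;

class OpenWriterSketch
{
    static void Main()
    {
        FSDirectory dir = FSDirectory.Open(new System.IO.DirectoryInfo("index"));
        var analyzer = new StandardAnalyzer(Lucene.Net.Util.Version.LUCENE_29);

        // UNLIMITED disables per-field term truncation; MaxFieldLength.LIMITED
        // caps it at DEFAULT_MAX_FIELD_LENGTH (10,000 terms) instead.
        var writer = new IndexWriter(dir, analyzer, IndexWriter.MaxFieldLength.UNLIMITED);

        writer.Close(); // commits pending changes and releases the write lock
    }
}
```

Re-using a single writer for many operations is cheaper than repeatedly constructing one, since each constructor call must acquire the write lock and read the current segments.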
Methods
Name | Description | |
---|---|---|
Abort | Obsolete. | |
AddDocument(Document) | Adds a document to this index. If the document contains more than SetMaxFieldLength(int) terms for a given field, the remainder are discarded. Note that if an Exception is hit (for example, disk full) then the index will be consistent, but this document may not have been added. Furthermore, it's possible the index will have one segment in non-compound format even when using compound files (when a merge has partially succeeded). This method periodically flushes pending documents to the Directory (see above), and also periodically triggers segment merges in the index according to the MergePolicy in use. Merges temporarily consume space in the directory: up to 1X the size of all segments being merged when no readers/searchers are open against the index, and up to 2X when readers/searchers are open (see Optimize() for details). The sequence of primitive merge operations performed is governed by the merge policy. Note that each term in the document can be no longer than 16383 characters; otherwise an IllegalArgumentException will be thrown. Note that it's possible to create an invalid Unicode string if a UTF16 surrogate pair is malformed; in this case, the invalid characters are silently replaced with the Unicode replacement character U+FFFD. NOTE: if this method hits an OutOfMemoryError you should immediately close the writer. See above for details. | |
AddDocument(Document, Analyzer) | Adds a document to this index, using the provided analyzer instead of the value of GetAnalyzer(). If the document contains more than SetMaxFieldLength(int) terms for a given field, the remainder are discarded. See AddDocument(Document) for details on index and IndexWriter state after an Exception, and on flushing/merging temporary free-space requirements. NOTE: if this method hits an OutOfMemoryError you should immediately close the writer. See above for details. | |
AddIndexes(Directory[]) | Obsolete. Merges all segments from an array of indexes into this index. NOTE: if this method hits an OutOfMemoryError you should immediately close the writer. See above for details. | |
AddIndexes(IndexReader[]) | Merges the provided indexes into this index. After this completes, the index is optimized. The provided IndexReaders are not closed. NOTE: while this is running, any attempts to add or delete documents (from another thread) will be paused until this method completes. See AddIndexesNoOptimize(Directory[]) for details on transactional semantics, temporary free space required in the Directory, and non-CFS segments on an Exception. NOTE: if this method hits an OutOfMemoryError you should immediately close the writer. See above for details. | |
AddIndexesNoOptimize | Merges all segments from an array of indexes into this index. This may be used to parallelize batch indexing: a large document collection can be broken into sub-collections, each sub-collection indexed in parallel on a different thread, process, or machine, and the complete index then created by merging the sub-collection indexes with this method. NOTE: the index in each Directory must not be changed (opened by a writer) while this method is running; this method does not acquire a write lock in each input Directory, so it is up to the caller to enforce this. NOTE: while this is running, any attempts to add or delete documents (from another thread) will be paused until this method completes. This method is transactional in how Exceptions are handled: it does not commit a new segments_N file until all indexes are added, so if an Exception occurs (for example, disk full), either no indexes will have been added or they all will have been. Note that this requires temporary free space in the Directory up to 2X the sum of all input indexes (including the starting index); if readers/searchers are open against the starting index, the temporary free space required will be higher by the size of the starting index (see Optimize() for details). Once this completes, the final size of the index will be less than the sum of all input index sizes (including the starting index); it could be quite a bit smaller (if there were many pending deletes) or just slightly smaller. This requires that this index not be among those to be added. NOTE: if this method hits an OutOfMemoryError you should immediately close the writer. See above for details. | |
Close() | Commits all changes to an index and closes all associated files. Note that this may be a costly operation, so try to re-use a single writer instead of closing and opening a new one. See Commit() for caveats about write caching done by some IO devices. If an Exception is hit during close (e.g., due to disk full or some other reason), then both the on-disk index and the internal state of the IndexWriter instance will be consistent. However, the close will not be complete even though part of it (flushing buffered documents) may have succeeded, so the write lock will still be held. If you can correct the underlying cause (e.g., free up some disk space) then you can call close() again. Failing that, if you want to force the write lock to be released (dangerous, because you may then lose buffered docs in the IndexWriter instance) then you can do something like this: `try { writer.Close(); } finally { if (IndexWriter.IsLocked(directory)) { IndexWriter.Unlock(directory); } }` after which you must be certain not to use the writer instance anymore. NOTE: if this method hits an OutOfMemoryError you should immediately close the writer, again. See above for details. | |
Close(Boolean) | Closes the index with or without waiting for currently running merges to finish. This is only meaningful when using a MergeScheduler that runs merges in background threads. NOTE: if this method hits an OutOfMemoryError you should immediately close the writer, again. See above for details. NOTE: it is dangerous to always call close(false), especially when IndexWriter is not open for very long, because this can result in "merge starvation" whereby long merges will never have a chance to finish; this will cause too many segments in your index over time. | |
Commit() | Commits all pending changes (added and deleted documents, optimizations, segment merges, added indexes, etc.) to the index, and syncs all referenced index files, such that a reader will see the changes and the index updates will survive an OS or machine crash or power loss. Note that this does not wait for any running background merges to finish. This may be a costly operation, so you should test the cost in your application and do it only when really necessary. Note that this operation calls Directory.sync on the index files; that call should not return until the file contents and metadata are on stable storage. For FSDirectory, this calls the OS's fsync. But beware: some hardware devices may in fact cache writes even during fsync, and return before the bits are actually on stable storage, to give the appearance of faster performance. If you have such a device, and it does not have a battery backup (for example), then on power loss it may still lose data. Lucene cannot guarantee consistency on such devices. NOTE: if this method hits an OutOfMemoryError you should immediately close the writer. See above for details. | |
Commit(IDictionary<String, String>) | Commits all changes to the index, specifying a commitUserData Map (String -> String). This just calls PrepareCommit(Map) (if you didn't already call it) and then finishCommit. NOTE: if this method hits an OutOfMemoryError you should immediately close the writer. See above for details. | |
DeleteAll | Deletes all documents in the index. This method will drop all buffered documents and will remove all segments from the index. This change will not be visible until a Commit() has been called, and can be rolled back using Rollback(). NOTE: this method is much faster than using deleteDocuments(new MatchAllDocsQuery()). NOTE: this method will forcefully abort all merges in progress; if other threads are running Optimize() or any of the addIndexes methods, they will receive MergePolicy.MergeAbortedExceptions. | |
DeleteDocuments(Query) | Deletes the document(s) matching the provided query. NOTE: if this method hits an OutOfMemoryError you should immediately close the writer. See above for details. | |
DeleteDocuments(Query[]) | Deletes the document(s) matching any of the provided queries. All deletes are flushed at the same time. NOTE: if this method hits an OutOfMemoryError you should immediately close the writer. See above for details. | |
DeleteDocuments(Term) | Deletes the document(s) containing `term`. | |
DeleteDocuments(Term[]) | Deletes the document(s) containing any of the terms. All deletes are flushed at the same time. NOTE: if this method hits an OutOfMemoryError you should immediately close the writer. See above for details. | |
Dispose | .NET-specific: releases the resources used by this IndexWriter. | |
DoAfterFlush | A hook for extending classes to execute operations after pending added and deleted documents have been flushed to the Directory, but before the change is committed (new segments_N file written). | |
DoBeforeFlush | A hook for extending classes to execute operations before pending added and deleted documents are flushed to the Directory. | |
DocCount | Obsolete. Returns the number of documents currently in this index, not counting deletions. | |
EnsureOpen() | ||
EnsureOpen(Boolean) | Used internally to throw an AlreadyClosedException if this IndexWriter has been closed. | |
Equals | (Inherited from Object.) | |
ExpungeDeletes() | Expunges all deletes from the index. When an index has many document deletions (or updates to existing documents), it's best to either call optimize or expungeDeletes to remove all unused data in the index associated with the deleted documents. To see how many deletions are pending in your index, call IndexReader.numDeletedDocs. This saves disk space and memory usage while searching. expungeDeletes should be somewhat faster than optimize since it does not insist on reducing the index to a single segment (though this depends on the MergePolicy; see MergePolicy.findMergesToExpungeDeletes). Note that this call does not first commit any buffered documents, so you must do so yourself if necessary. See also ExpungeDeletes(boolean). NOTE: if this method hits an OutOfMemoryError you should immediately close the writer. See above for details. | |
ExpungeDeletes(Boolean) | Just like ExpungeDeletes(), except you can specify whether the call should block until the operation completes. This is only meaningful with a MergeScheduler that is able to run merges in background threads. NOTE: if this method hits an OutOfMemoryError you should immediately close the writer. See above for details. | |
Finalize | Allows an Object to attempt to free resources and perform other cleanup operations before the Object is reclaimed by garbage collection. (Inherited from Object.) | |
Flush() | Obsolete. Flushes all in-memory buffered updates (adds and deletes) to the Directory. Note: while this will force buffered docs to be pushed into the index, it will not make these docs visible to a reader; use Commit() instead. NOTE: if this method hits an OutOfMemoryError you should immediately close the writer. See above for details. | |
Flush(Boolean, Boolean, Boolean) | Flushes all in-memory buffered updates (adds and deletes) to the Directory. | |
GetAnalyzer | Returns the analyzer used by this index. | |
GetBufferedDeleteTermsSize | ||
GetDefaultInfoStream | Returns the current default infoStream for newly instantiated IndexWriters. | |
GetDefaultWriteLockTimeout | Returns the default write lock timeout for newly instantiated IndexWriters. | |
GetDirectory | Returns the Directory used by this index. | |
GetDocCount | ||
GetFlushCount | ||
GetFlushDeletesCount | ||
GetHashCode | Serves as a hash function for a particular type. (Inherited from Object.) | |
GetInfoStream | Returns the current infoStream in use by this writer. | |
GetMaxBufferedDeleteTerms | Returns the number of buffered deleted terms that will trigger a flush if enabled. | |
GetMaxBufferedDocs | Returns the number of buffered added documents that will trigger a flush if enabled. | |
GetMaxFieldLength | Returns the maximum number of terms that will be indexed for a single field in a document. | |
GetMaxMergeDocs | Returns the largest segment (measured by document count) that may be merged with other segments. Note that this is a convenience method: it just calls mergePolicy.getMaxMergeDocs as long as mergePolicy is an instance of LogMergePolicy; otherwise an IllegalArgumentException is thrown. | |
GetMaxSyncPauseSeconds | Obsolete. Expert: returns the max delay inserted before syncing a commit point. On Windows, at least, pausing before syncing can increase net indexing throughput. The delay is variable based on the size of the segment's files, and is only inserted when using ConcurrentMergeScheduler for merges. | |
GetMergedSegmentWarmer | Returns the current merged segment warmer. See IndexReaderWarmer. | |
GetMergeFactor | Returns the number of segments that are merged at once, which also controls the total number of segments allowed to accumulate in the index. Note that this is a convenience method: it just calls mergePolicy.getMergeFactor as long as mergePolicy is an instance of LogMergePolicy; otherwise an IllegalArgumentException is thrown. | |
GetMergePolicy | Expert: returns the current MergePolicy in use by this writer. | |
GetMergeScheduler | Expert: returns the current MergeScheduler in use by this writer. | |
GetNextMerge_forNUnit | ||
GetNumBufferedDeleteTerms | ||
GetNumBufferedDocuments | ||
GetRAMBufferSizeMB | Returns the value set by setRAMBufferSizeMB if enabled. | |
GetReader() | Expert: returns a readonly reader covering all committed as well as uncommitted changes to the index. This provides "near real-time" searching, in that changes made during an IndexWriter session can be quickly made available for searching without closing the writer or calling Commit(). Note that this is functionally equivalent to calling Commit() and then using IndexReader.open to open a new reader, but the turnaround time of this method should be faster since it avoids the potentially costly Commit(). You must close the IndexReader returned by this method once you are done using it. It's near real-time because there is no hard guarantee on how quickly you can get a new reader after making changes with IndexWriter; you'll have to experiment in your situation to determine if it's fast enough. As this is a new and experimental feature, please report back on your findings so we can learn, improve and iterate. The resulting reader supports IndexReader.reopen, but that call will simply forward back to this method (though this may change in the future). The very first time this method is called, this writer instance will make every effort to pool the readers that it opens for doing merges, applying deletes, etc.; this means additional resources (RAM, file descriptors, CPU time) will be consumed. For lower latency on reopening a reader, you should call setMergedSegmentWarmer to pre-warm a newly merged segment before it's committed to the index; this is important for minimizing index-to-search delay after a large merge. If an addIndexes* call is running in another thread, then this reader will only search those segments from the foreign index that have been successfully copied over so far. NOTE: once the writer is closed, any outstanding readers may continue to be used; however, if you attempt to reopen any of those readers, you'll hit an AlreadyClosedException. NOTE: this API is experimental and might change in incompatible ways in the next release. | |
GetReader(Int32) | Expert: like GetReader(), except you can specify which termInfosIndexDivisor should be used for any newly opened readers. | |
GetReaderTermsIndexDivisor | ||
GetSegmentCount | ||
GetSimilarity | Expert: returns the Similarity implementation used by this IndexWriter. This defaults to the current value of Similarity.GetDefault(). | |
GetTermIndexInterval | Expert: returns the interval between indexed terms. | |
GetType | Gets the Type of the current instance. (Inherited from Object.) | |
GetUseCompoundFile | Gets the current setting of whether newly flushed segments will use the compound file format. Note that this just returns the value previously set with setUseCompoundFile(boolean), or the default value (true); you cannot use this to query the status of previously flushed segments. Note that this is a convenience method: it just calls mergePolicy.getUseCompoundFile as long as mergePolicy is an instance of LogMergePolicy; otherwise an IllegalArgumentException is thrown. | |
GetWriteLockTimeout | Returns allowed timeout when acquiring the write lock. | |
HasDeletions | ||
IsLocked(String) | Obsolete. Returns `true` if the index in the named directory is currently locked. | |
IsLocked(Directory) | Returns `true` if the index in the directory is currently locked. | |
MaxDoc | Returns the total number of docs in this index, including docs not yet flushed (still in the RAM buffer), not counting deletions. | |
MaybeMerge | Expert: asks the mergePolicy whether any merges are necessary now and, if so, runs the requested merges and then iterates (testing again whether merges are needed) until no more merges are returned by the mergePolicy. Explicit calls to maybeMerge() are usually not necessary; the most common case is when merge policy parameters have changed. NOTE: if this method hits an OutOfMemoryError you should immediately close the writer. See above for details. | |
MemberwiseClone | Creates a shallow copy of the current Object. (Inherited from Object.) | |
Merge_ForNUnit | ||
Message | Prints a message to the infoStream (if non-null), prefixed with the identifying information for this writer and the thread that's calling it. | |
NewestSegment | ||
NumDeletedDocs | Obtains the number of deleted docs for a pooled reader. If the reader isn't being pooled, the segmentInfo's delCount is returned. | |
NumDocs | Returns the total number of docs in this index, including docs not yet flushed (still in the RAM buffer), and including deletions. NOTE: buffered deletions are not counted; if you really need these to be counted you should call Commit() first. | |
NumRamDocs | Expert: returns the number of documents currently buffered in RAM. | |
Optimize() | Requests an "optimize" operation on an index, priming the index for the fastest available search. Traditionally this has meant merging all segments into a single segment, as is done in the default merge policy, but individual merge policies may implement optimize in different ways. It is recommended that this method be called upon completion of indexing. In environments with frequent updates, optimize is best done during low-volume times, if at all. See http://www.gossamer-threads.com/lists/lucene/java-dev/47895 for more discussion. Note that optimize requires 2X the index size in free space in your Directory (3X if you're using compound file format); for example, if your index size is 10 MB then you need 20 MB free for optimize to complete (30 MB if you're using compound file format). If some but not all readers re-open while an optimize is underway, this will cause more than 2X temporary space to be consumed, as those new readers will then hold open the partially optimized segments at that time; it is best not to re-open readers while optimize is running. The actual temporary usage could be much less than these figures (it depends on many factors). In general, once the optimize completes, the total size of the index will be less than the size of the starting index; it could be quite a bit smaller (if there were many pending deletes) or just slightly smaller. If an Exception is hit during optimize(), for example due to disk full, the index will not be corrupt and no documents will have been lost; however, it may have been partially optimized (some segments were merged but not all), and it's possible that one of the segments in the index will be in non-compound format even when using compound file format (this occurs when the Exception is hit during conversion of the segment into compound format). This call will optimize those segments present in the index when the call started; if other threads are still adding documents and flushing segments, those newly created segments will not be optimized unless you call optimize again. NOTE: if this method hits an OutOfMemoryError you should immediately close the writer. See above for details. | |
Optimize(Boolean) | Just like Optimize(), except you can specify whether the call should block until the optimize completes. This is only meaningful with a MergeScheduler that is able to run merges in background threads. NOTE: if this method hits an OutOfMemoryError you should immediately close the writer. See above for details. | |
Optimize(Int32) | Optimizes the index down to at most maxNumSegments segments. If maxNumSegments == 1, this is the same as Optimize(). NOTE: if this method hits an OutOfMemoryError you should immediately close the writer. See above for details. | |
Optimize(Int32, Boolean) | Just like Optimize(int), except you can specify whether the call should block until the optimize completes. This is only meaningful with a MergeScheduler that is able to run merges in background threads. NOTE: if this method hits an OutOfMemoryError you should immediately close the writer. See above for details. | |
PrepareCommit() | Expert: prepare for commit. NOTE: if this method hits an OutOfMemoryError you should immediately close the writer. See above for details. | |
PrepareCommit(IDictionary<String, String>) | Expert: prepare for commit, specifying a commitUserData Map (String -> String). This does the first phase of a 2-phase commit; you can only call this when autoCommit is false. This method does all steps necessary to commit changes since this writer was opened: it flushes pending added and deleted docs, syncs the index files, and writes most of the next segments_N file. After calling this you must call either Commit() to finish the commit, or Rollback() to revert the commit and undo all changes done since the writer was opened. You can also just call Commit(Map) directly without prepareCommit first, in which case that method will internally call prepareCommit. NOTE: if this method hits an OutOfMemoryError you should immediately close the writer. See above for details. | |
RamSizeInBytes | Expert: returns the total size of all index files currently cached in memory. Useful for size management with flushRamDocs(). | |
Rollback | Closes the `IndexWriter` without committing any changes that have occurred since the last commit (or since it was opened, if commit hasn't been called). Only valid when `autoCommit=false`. | |
SegString | ||
SetAllowMinus1Position | Deprecated: emulates IndexWriter's buggy behavior when the first token(s) have positionIncrement==0 (i.e., prior to fixing LUCENE-1542). | |
SetDefaultInfoStream | If non-null, this will be the default infoStream used by a newly instantiated IndexWriter. | |
SetDefaultWriteLockTimeout | Sets the default (for any instance of IndexWriter) maximum time to wait for a write lock (in milliseconds). | |
SetInfoStream | If non-null, information about merges, deletes, and a message when maxFieldLength is reached will be printed to this. | |
SetMaxBufferedDeleteTerms | Determines the minimal number of delete terms required before the buffered in-memory delete terms are applied and flushed. If there are documents buffered in memory at the time, they are merged and a new segment is created. Disabled by default (writer flushes by RAM usage). | |
SetMaxBufferedDocs | Determines the minimal number of documents required before the buffered in-memory documents are flushed as a new segment. Large values generally give faster indexing. When this is set, the writer will flush every maxBufferedDocs added documents. Pass in DISABLE_AUTO_FLUSH to prevent triggering a flush due to the number of buffered documents. Note that if flushing by RAM usage is also enabled, the flush will be triggered by whichever comes first. Disabled by default (writer flushes by RAM usage). | |
SetMaxFieldLength | Sets the maximum number of terms that will be indexed for a single field in a document. This limits the amount of memory required for indexing, so that collections with very large files will not crash the indexing process by running out of memory. This setting refers to the number of running terms, not to the number of different terms. Note: this silently truncates large documents, excluding from the index all terms that occur further in the document. If you know your source documents are large, be sure to set this value high enough to accommodate the expected size. If you set it to Integer.MAX_VALUE, then the only limit is your memory, but you should anticipate an OutOfMemoryError. By default, no more than DEFAULT_MAX_FIELD_LENGTH terms will be indexed for a field. | |
SetMaxMergeDocs | Determines the largest segment (measured by document count) that may be merged with other segments. Small values (e.g., less than 10,000) are best for interactive indexing, as this limits the length of pauses while indexing to a few seconds. Larger values are best for batched indexing and speedier searches. The default value is Integer.MAX_VALUE. Note that this is a convenience method: it just calls mergePolicy.setMaxMergeDocs as long as mergePolicy is an instance of LogMergePolicy; otherwise an IllegalArgumentException is thrown. The default merge policy (LogByteSizeMergePolicy) also allows you to set this limit by net size (in MB) of the segment, using LogByteSizeMergePolicy.setMaxMergeMB. | |
SetMaxSyncPauseSeconds | Obsolete. Expert: sets the max delay before syncing a commit point. | |
SetMergedSegmentWarmer | Sets the merged segment warmer. See IndexReaderWarmer. | |
SetMergeFactor | Determines how often segment indices are merged by addDocument(). With smaller values, less RAM is used while indexing, and searches on unoptimized indices are faster, but indexing speed is slower. With larger values, more RAM is used during indexing, and while searches on unoptimized indices are slower, indexing is faster. Thus larger values (> 10) are best for batch index creation, and smaller values (< 10) for indices that are interactively maintained. Note that this is a convenience method: it just calls mergePolicy.setMergeFactor as long as mergePolicy is an instance of LogMergePolicy; otherwise an IllegalArgumentException is thrown. This must never be less than 2. The default value is 10. | |
SetMergePolicy | Expert: set the merge policy used by this writer. | |
SetMergeScheduler | Expert: set the merge scheduler used by this writer. | |
SetRAMBufferSizeMB | Determines the amount of RAM that may be used for buffering added documents and deletions before they are flushed to the Directory. Generally, for faster indexing performance it's best to flush by RAM usage instead of document count, and to use as large a RAM buffer as you can. When this is set, the writer will flush whenever buffered documents and deletions use this much RAM. Pass in DISABLE_AUTO_FLUSH to prevent triggering a flush due to RAM usage. Note that if flushing by document count is also enabled, the flush will be triggered by whichever comes first. NOTE: the accounting of RAM usage for pending deletions is only approximate; specifically, if you delete by Query, Lucene currently has no way to measure the RAM usage of individual Queries, so the accounting will under-estimate, and you should compensate by either calling commit() periodically yourself, or by using setMaxBufferedDeleteTerms to flush by count instead of RAM usage (each buffered delete Query counts as one). NOTE: because IndexWriter uses `int` when managing its internal storage, the absolute maximum value for this setting is somewhat less than 2048 MB. | |
SetReaderTermsIndexDivisor | ||
SetSimilarity | Expert: sets the Similarity implementation used by this IndexWriter. | |
SetTermIndexInterval | Expert: sets the interval between indexed terms. Large values cause less memory to be used by IndexReader, but slow random access to terms. Small values cause more memory to be used by an IndexReader, and speed random access to terms. This parameter determines the amount of computation required per query term, regardless of the number of documents that contain that term. In particular, it is the maximum number of other terms that must be scanned before a term is located and its frequency and position information may be processed. In a large index with user-entered query terms, query processing time is likely to be dominated not by term lookup but rather by the processing of frequency and positional data. In a small index, or when many uncommon query terms are generated (e.g., by wildcard queries), term lookup may become a dominant cost. In particular, `numUniqueTerms/interval` terms are read into memory by an IndexReader, and, on average, `interval/2` terms must be scanned for each random term access. | |
SetWriteLockTimeout | ||
TestPoint | ||
ToString | (Inherited from Object.) | |
Unlock | Forcibly unlocks the index in the named directory. Caution: this should only be used by failure-recovery code, when it is known that no other process or thread is in fact currently accessing this index. | |
UpdateDocument(Term, Document) | Updates a document by first deleting the document(s) containing `term` and then adding the new document. | |
UpdateDocument(Term, Document, Analyzer) | Updates a document by first deleting the document(s) containing `term` and then adding the new document, analyzed with the provided analyzer. | |
Verbose | Returns true if verbose output is enabled (i.e., infoStream != null). | |
WaitForMerges | Waits for any currently outstanding merges to finish. It is guaranteed that any merges started prior to calling this method will have completed once this method completes. |
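Taken together, AddDocument, UpdateDocument, Commit, and Close form the typical write path described above. The following sketch shows one pass through it; the field names ("id", "body") and values are invented for illustration:

```csharp
// Hedged sketch of the typical write path: add, update, commit.
using Lucene.Net.Documents;
using Lucene.Net.Index;

static class WritePathSketch
{
    public static void IndexAndUpdate(IndexWriter writer)
    {
        var doc = new Document();
        doc.Add(new Field("id", "42", Field.Store.YES, Field.Index.NOT_ANALYZED));
        doc.Add(new Field("body", "hello lucene", Field.Store.NO, Field.Index.ANALYZED));

        writer.AddDocument(doc);   // may flush pending docs and trigger merges

        // Delete-then-add keyed on the "id" term, replacing the old document.
        writer.UpdateDocument(new Term("id", "42"), doc);

        writer.Commit();           // syncs index files so a reader sees the changes
    }
}
```

Note that Commit is the durability point: until it returns, a crash can lose the buffered changes, and as the Commit() entry warns, even fsync can be undermined by write-caching hardware.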
Fields
Name | Description | |
---|---|---|
DEFAULT_MAX_BUFFERED_DELETE_TERMS | Disabled by default (because IndexWriter flushes by RAM usage by default). Change using SetMaxBufferedDeleteTerms(int). | |
DEFAULT_MAX_BUFFERED_DOCS | Disabled by default (because IndexWriter flushes by RAM usage by default). Change using SetMaxBufferedDocs(int). | |
DEFAULT_MAX_FIELD_LENGTH | Default value is 10,000. Change using SetMaxFieldLength(int). | |
DEFAULT_MAX_MERGE_DOCS | Obsolete. | |
DEFAULT_MAX_SYNC_PAUSE_SECONDS | Default for getMaxSyncPauseSeconds. On Windows this defaults to 10.0 seconds; elsewhere it's 0. | |
DEFAULT_MERGE_FACTOR | Obsolete. | |
DEFAULT_RAM_BUFFER_SIZE_MB | Default value is 16 MB (which means flush when buffered docs consume 16 MB of RAM). Change using setRAMBufferSizeMB. | |
DEFAULT_TERM_INDEX_INTERVAL | Default value is 128. Change using SetTermIndexInterval(int). | |
DISABLE_AUTO_FLUSH | Value to denote a flush trigger is disabled | |
MAX_TERM_LENGTH | Absolute hard maximum length for a term. If a term arrives from the analyzer longer than this length, it is skipped and a message is printed to the infoStream, if set (see setInfoStream). | |
WRITE_LOCK_NAME | Name of the write lock in the index. | |
WRITE_LOCK_TIMEOUT | Default value for the write lock timeout (1,000 milliseconds). |
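As a closing illustration, the flush-related constants above pair with the corresponding setters in the Methods table. A sketch, with illustrative values only (48 MB is an arbitrary choice, not a recommendation):

```csharp
// Sketch of tuning flush triggers with the constants above.
using Lucene.Net.Index;

static class FlushTuningSketch
{
    public static void Configure(IndexWriter writer)
    {
        // Flush by RAM usage only: raise the buffer and disable the
        // document-count trigger via DISABLE_AUTO_FLUSH.
        writer.SetRAMBufferSizeMB(48.0);  // DEFAULT_RAM_BUFFER_SIZE_MB is 16
        writer.SetMaxBufferedDocs(IndexWriter.DISABLE_AUTO_FLUSH);

        // Keep the default per-field term cap explicit (10,000 terms).
        writer.SetMaxFieldLength(IndexWriter.DEFAULT_MAX_FIELD_LENGTH);
    }
}
```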