Uses of Class
org.apache.lucene.analysis.TokenStream
Packages that use TokenStream
Package
Description
Text analysis.
Analyzer for Arabic.
Analyzer for Bulgarian.
Analyzer for Bengali.
Provides various convenience classes for creating boosts on Tokens.
Analyzer for Brazilian Portuguese.
Analyzer for Catalan.
Analyzer for Chinese, Japanese, and Korean, which indexes bigrams.
Analyzer for Sorani Kurdish.
Fast, general-purpose grammar-based tokenizers.
Analyzer for Simplified Chinese, which indexes words.
Construct n-grams for frequently occurring terms and phrases.
A filter that decomposes compound words found in many Germanic languages into their word parts.
Basic, general-purpose analysis components.
A general-purpose Analyzer that can be created with a builder-style API.
Analyzer for Czech.
Analyzer for Danish.
Analyzer for German.
Analyzer for Greek.
Fast, general-purpose tokenizers for URLs and email addresses.
Analyzer for English.
Analyzer for Spanish.
Analyzer for Estonian.
Analyzer for Basque.
Analyzer for Persian.
Analyzer for Finnish.
Analyzer for French.
Analyzer for Irish.
Analyzer for Galician.
Analyzer for Hindi.
Analyzer for Hungarian.
A Java implementation of the Hunspell stemming and spell-checking algorithms (Hunspell), and a stemming TokenFilter (HunspellStemFilter) based on it.
Analyzer for Armenian.
Analysis components based on ICU.
Tokenizer that breaks text into words with the Unicode Text Segmentation algorithm.
Analyzer for Indonesian.
Analyzer for Indian languages.
Analyzer for Italian.
Analyzer for Japanese.
Analyzer for Korean.
Analyzer for Lithuanian.
Analyzer for Latvian.
MinHash filtering (for LSH).
Miscellaneous TokenStreams.
Analyzer for Nepali.
Character n-gram tokenizers and filters.
Analyzer for Dutch.
Analyzer for Norwegian.
Analysis components for path-like strings such as filenames.
Set of components for pattern-based (regex) analysis.
Provides various convenience classes for creating payloads on Tokens.
Analysis components for phonetic search.
Analyzer for Polish.
Analyzer for Portuguese.
Filter to reverse token text.
Analyzer for Romanian.
Analyzer for Russian.
Word n-gram filters.
Analyzer for Serbian.
Fast, general-purpose grammar-based tokenizer StandardTokenizer, which implements the Word Break rules from the Unicode Text Segmentation algorithm, as specified in Unicode Standard Annex #29.
Stempel: Algorithmic Stemmer.
Analyzer for Swedish.
Analysis components for Synonyms.
Analysis components for Synonyms using Word2Vec model.
Analyzer for Tamil.
Analyzer for Telugu.
Analyzer for Thai.
Analyzer for Turkish.
Utility functions for text analysis.
Tokenizer that is aware of Wikipedia syntax.
Uses already seen data (the indexed documents) to classify new documents.
The logical representation of a Document for indexing and searching.
Code to maintain and access indices.
High-performance single-document main memory Apache Lucene fulltext search index.
Misc extensions of the Document/Field API.
Monitoring framework.
Intervals queries.
This package contains a flexible graph-based proximity query, TermAutomatonQuery, and geospatial
queries.
Highlighting search terms.
Analyzer-based autosuggest.
Support for document suggestion.
The UnifiedHighlighter -- a flexible highlighter that can get offsets from postings, term
vectors, or analysis.
Some utility classes.
Utility classes for working with token streams as graphs.
Uses of TokenStream in org.apache.lucene.analysis
Subclasses of TokenStream in org.apache.lucene.analysis:
- private static final class
- private static class: Token stream that outputs tokens from a topologically sorted graph.
- final class: This class can be used if the token attributes of a TokenStream are intended to be consumed more than once.
- class: Abstract base class for TokenFilters that may remove tokens.
- class: An abstract TokenFilter that exposes its input stream as a graph.
- class: Normalizes token text to lower case.
- class: Removes stop words from a token stream.
- class: A TokenFilter is a TokenStream whose input is another TokenStream.
- class: A Tokenizer is a TokenStream whose input is a Reader.

Fields in org.apache.lucene.analysis declared as TokenStream:
- protected final TokenStream TokenFilter.input: The source of tokens for this filter.
- protected final TokenStream Analyzer.TokenStreamComponents.sink: Sink TokenStream, such as the outer TokenFilter decorating the chain.

Methods in org.apache.lucene.analysis that return TokenStream:
- abstract TokenStream TokenFilterFactory.create(TokenStream input): Transform the specified input TokenStream.
- Analyzer.TokenStreamComponents.getTokenStream(): Returns the sink TokenStream.
- protected TokenStream Analyzer.normalize(String fieldName, TokenStream in): Wrap the given TokenStream in order to apply normalization filters.
- protected final TokenStream AnalyzerWrapper.normalize(String fieldName, TokenStream in)
- TokenFilterFactory.normalize(TokenStream input): Normalize the specified input TokenStream. While the default implementation returns the input unchanged, filters that should be applied at normalization time can delegate to the create method.
- final TokenStream Analyzer.tokenStream(String fieldName, Reader reader): Returns a TokenStream suitable for fieldName, tokenizing the contents of reader.
- final TokenStream Analyzer.tokenStream(String fieldName, String text): Returns a TokenStream suitable for fieldName, tokenizing the contents of text.
- static TokenStream AutomatonToTokenStream.toTokenStream(Automaton automaton): Converts an automaton into a TokenStream.
- TokenFilter.unwrap()
- protected TokenStream AnalyzerWrapper.wrapTokenStreamForNormalization(String fieldName, TokenStream in): Wraps or alters the given TokenStream for normalization purposes, taken from the wrapped Analyzer, to form new components.
- protected final TokenStream DelegatingAnalyzerWrapper.wrapTokenStreamForNormalization(String fieldName, TokenStream in)

Methods in org.apache.lucene.analysis with parameters of type TokenStream:
- abstract TokenStream TokenFilterFactory.create(TokenStream input): Transform the specified input TokenStream.
- protected TokenStream Analyzer.normalize(String fieldName, TokenStream in): Wrap the given TokenStream in order to apply normalization filters.
- protected final TokenStream AnalyzerWrapper.normalize(String fieldName, TokenStream in)
- TokenFilterFactory.normalize(TokenStream input): Normalize the specified input TokenStream. While the default implementation returns the input unchanged, filters that should be applied at normalization time can delegate to the create method.
- TokenStreamToAutomaton.toAutomaton(TokenStream in): Pulls the graph (including PositionLengthAttribute) from the provided TokenStream, and creates the corresponding automaton where arcs are bytes (or Unicode code points if unicodeArcs = true) from each term.
- protected TokenStream AnalyzerWrapper.wrapTokenStreamForNormalization(String fieldName, TokenStream in): Wraps or alters the given TokenStream for normalization purposes, taken from the wrapped Analyzer, to form new components.
- protected final TokenStream DelegatingAnalyzerWrapper.wrapTokenStreamForNormalization(String fieldName, TokenStream in)

Constructors in org.apache.lucene.analysis with parameters of type TokenStream:
- CachingTokenFilter(TokenStream input): Create a new CachingTokenFilter around input.
- Create a new FilteringTokenFilter.
- GraphTokenFilter(TokenStream input): Create a new GraphTokenFilter.
- Create a new LowerCaseFilter, that normalizes token text to lower case.
- StopFilter(TokenStream in, CharArraySet stopWords): Constructs a filter which removes words from the input TokenStream that are named in the Set.
- protected TokenFilter(TokenStream input): Construct a token stream filtering the given input.
- TokenStreamComponents(Consumer<Reader> source, TokenStream result): Creates a new Analyzer.TokenStreamComponents instance.
- TokenStreamComponents(Tokenizer tokenizer, TokenStream result): Creates a new Analyzer.TokenStreamComponents instance.
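The Analyzer.tokenStream methods listed in this section follow a fixed consumption contract: obtain the stream, add attributes, reset, loop over incrementToken, then end and close. A minimal sketch, assuming lucene-core and lucene-analysis-common are on the classpath; the class name TokenStreamDemo and the field name "body" are illustrative, not part of the Lucene API.

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.core.WhitespaceAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class TokenStreamDemo {

  /** Collects the terms an Analyzer produces for the given text. */
  public static List<String> analyze(Analyzer analyzer, String text) throws IOException {
    List<String> terms = new ArrayList<>();
    // Analyzer.tokenStream(String fieldName, String text), listed above.
    try (TokenStream ts = analyzer.tokenStream("body", text)) {
      CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
      ts.reset();                   // required before the first incrementToken()
      while (ts.incrementToken()) { // advance to the next token
        terms.add(term.toString());
      }
      ts.end();                     // consume end-of-stream state
    }                               // try-with-resources closes the stream
    return terms;
  }

  public static void main(String[] args) throws IOException {
    System.out.println(analyze(new WhitespaceAnalyzer(), "Uses of TokenStream"));
  }
}
```

Skipping reset() or close() is a common source of IllegalStateException when the same Analyzer is reused, which is why the try-with-resources form is preferred.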
Uses of TokenStream in org.apache.lucene.analysis.ar
Subclasses of TokenStream in org.apache.lucene.analysis.ar:
- final class: A TokenFilter that applies ArabicNormalizer to normalize the orthography.
- final class: A TokenFilter that applies ArabicStemmer to stem Arabic words.

Methods in org.apache.lucene.analysis.ar that return TokenStream:
- ArabicNormalizationFilterFactory.create(TokenStream input)
- protected TokenStream ArabicAnalyzer.normalize(String fieldName, TokenStream in)
- ArabicNormalizationFilterFactory.normalize(TokenStream input)

Methods in org.apache.lucene.analysis.ar with parameters of type TokenStream:
- ArabicNormalizationFilterFactory.create(TokenStream input)
- ArabicStemFilterFactory.create(TokenStream input)
- protected TokenStream ArabicAnalyzer.normalize(String fieldName, TokenStream in)
- ArabicNormalizationFilterFactory.normalize(TokenStream input)

Constructors in org.apache.lucene.analysis.ar with parameters of type TokenStream
Uses of TokenStream in org.apache.lucene.analysis.bg
Subclasses of TokenStream in org.apache.lucene.analysis.bg:
- final class: A TokenFilter that applies BulgarianStemmer to stem Bulgarian words.

Methods in org.apache.lucene.analysis.bg that return TokenStream:
- BulgarianStemFilterFactory.create(TokenStream input)
- protected TokenStream BulgarianAnalyzer.normalize(String fieldName, TokenStream in)

Methods in org.apache.lucene.analysis.bg with parameters of type TokenStream:
- BulgarianStemFilterFactory.create(TokenStream input)
- protected TokenStream BulgarianAnalyzer.normalize(String fieldName, TokenStream in)

Constructors in org.apache.lucene.analysis.bg with parameters of type TokenStream
Uses of TokenStream in org.apache.lucene.analysis.bn
Subclasses of TokenStream in org.apache.lucene.analysis.bn:
- final class: A TokenFilter that applies BengaliNormalizer to normalize the orthography.
- final class: A TokenFilter that applies BengaliStemmer to stem Bengali words.

Methods in org.apache.lucene.analysis.bn that return TokenStream:
- BengaliNormalizationFilterFactory.create(TokenStream input)
- BengaliStemFilterFactory.create(TokenStream input)
- protected TokenStream BengaliAnalyzer.normalize(String fieldName, TokenStream in)
- BengaliNormalizationFilterFactory.normalize(TokenStream input)

Methods in org.apache.lucene.analysis.bn with parameters of type TokenStream:
- BengaliNormalizationFilterFactory.create(TokenStream input)
- BengaliStemFilterFactory.create(TokenStream input)
- protected TokenStream BengaliAnalyzer.normalize(String fieldName, TokenStream in)
- BengaliNormalizationFilterFactory.normalize(TokenStream input)

Constructors in org.apache.lucene.analysis.bn with parameters of type TokenStream
Uses of TokenStream in org.apache.lucene.analysis.boost
Subclasses of TokenStream in org.apache.lucene.analysis.boost:
- final class: Characters before the delimiter are the "token", those after are the boost.

Methods in org.apache.lucene.analysis.boost with parameters of type TokenStream

Constructors in org.apache.lucene.analysis.boost with parameters of type TokenStream
Uses of TokenStream in org.apache.lucene.analysis.br
Subclasses of TokenStream in org.apache.lucene.analysis.br

Methods in org.apache.lucene.analysis.br that return TokenStream:
- protected TokenStream BrazilianAnalyzer.normalize(String fieldName, TokenStream in)

Methods in org.apache.lucene.analysis.br with parameters of type TokenStream:
- BrazilianStemFilterFactory.create(TokenStream in)
- protected TokenStream BrazilianAnalyzer.normalize(String fieldName, TokenStream in)

Constructors in org.apache.lucene.analysis.br with parameters of type TokenStream:
- Creates a new BrazilianStemFilter.
Uses of TokenStream in org.apache.lucene.analysis.ca
Methods in org.apache.lucene.analysis.ca that return TokenStream:
- protected TokenStream CatalanAnalyzer.normalize(String fieldName, TokenStream in)

Methods in org.apache.lucene.analysis.ca with parameters of type TokenStream:
- protected TokenStream CatalanAnalyzer.normalize(String fieldName, TokenStream in)
Uses of TokenStream in org.apache.lucene.analysis.cjk
Subclasses of TokenStream in org.apache.lucene.analysis.cjk:
- final class: Forms bigrams of CJK terms that are generated from StandardTokenizer or ICUTokenizer.
- final class: A TokenFilter that normalizes CJK width differences: folds fullwidth ASCII variants into the equivalent basic Latin, and folds halfwidth Katakana variants into the equivalent kana.

Methods in org.apache.lucene.analysis.cjk that return TokenStream:
- CJKBigramFilterFactory.create(TokenStream input)
- CJKWidthFilterFactory.create(TokenStream input)
- protected TokenStream CJKAnalyzer.normalize(String fieldName, TokenStream in)
- CJKWidthFilterFactory.normalize(TokenStream input)

Methods in org.apache.lucene.analysis.cjk with parameters of type TokenStream:
- CJKBigramFilterFactory.create(TokenStream input)
- CJKWidthFilterFactory.create(TokenStream input)
- protected TokenStream CJKAnalyzer.normalize(String fieldName, TokenStream in)
- CJKWidthFilterFactory.normalize(TokenStream input)

Constructors in org.apache.lucene.analysis.cjk with parameters of type TokenStream:
- CJKBigramFilter(TokenStream in, int flags)
- CJKBigramFilter(TokenStream in, int flags, boolean outputUnigrams): Create a new CJKBigramFilter, specifying which writing systems should be bigrammed, and whether or not unigrams should also be output.
- CJKWidthFilter(TokenStream input)
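The CJKBigramFilter(TokenStream in, int flags) constructor listed in this section takes script flags selecting which writing systems to bigram. A minimal sketch, assuming lucene-core and lucene-analysis-common are on the classpath and that StandardTokenizer emits single Han characters; the class name CjkBigramDemo is illustrative.

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.cjk.CJKBigramFilter;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class CjkBigramDemo {

  /** Produces Han bigrams for the given text. */
  public static List<String> bigrams(String text) throws IOException {
    Tokenizer source = new StandardTokenizer(); // emits one token per Han character
    source.setReader(new StringReader(text));
    List<String> out = new ArrayList<>();
    // CJKBigramFilter(TokenStream in, int flags): bigram only the selected scripts.
    try (TokenStream ts = new CJKBigramFilter(source, CJKBigramFilter.HAN)) {
      CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
      ts.reset();
      while (ts.incrementToken()) {
        out.add(term.toString());
      }
      ts.end();
    }
    return out;
  }

  public static void main(String[] args) throws IOException {
    // n Han characters yield n-1 overlapping bigrams.
    System.out.println(bigrams("我爱你"));
  }
}
```

Passing the three-argument constructor with outputUnigrams = true would additionally emit the single-character unigrams alongside the bigrams.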
Uses of TokenStream in org.apache.lucene.analysis.ckb
Subclasses of TokenStream in org.apache.lucene.analysis.ckb:
- final class: A TokenFilter that applies SoraniNormalizer to normalize the orthography.
- final class: A TokenFilter that applies SoraniStemmer to stem Sorani words.

Methods in org.apache.lucene.analysis.ckb that return TokenStream:
- SoraniNormalizationFilterFactory.create(TokenStream input)
- protected TokenStream SoraniAnalyzer.normalize(String fieldName, TokenStream in)
- SoraniNormalizationFilterFactory.normalize(TokenStream input)

Methods in org.apache.lucene.analysis.ckb with parameters of type TokenStream:
- SoraniNormalizationFilterFactory.create(TokenStream input)
- SoraniStemFilterFactory.create(TokenStream input)
- protected TokenStream SoraniAnalyzer.normalize(String fieldName, TokenStream in)
- SoraniNormalizationFilterFactory.normalize(TokenStream input)

Constructors in org.apache.lucene.analysis.ckb with parameters of type TokenStream
Uses of TokenStream in org.apache.lucene.analysis.classic
Subclasses of TokenStream in org.apache.lucene.analysis.classic:
- class: Normalizes tokens extracted with ClassicTokenizer.
- final class: A grammar-based tokenizer constructed with JFlex.

Methods in org.apache.lucene.analysis.classic that return TokenStream:
- protected TokenStream ClassicAnalyzer.normalize(String fieldName, TokenStream in)

Methods in org.apache.lucene.analysis.classic with parameters of type TokenStream:
- ClassicFilterFactory.create(TokenStream input)
- protected TokenStream ClassicAnalyzer.normalize(String fieldName, TokenStream in)

Constructors in org.apache.lucene.analysis.classic with parameters of type TokenStream
Uses of TokenStream in org.apache.lucene.analysis.cn.smart
Subclasses of TokenStream in org.apache.lucene.analysis.cn.smart:
- class: Tokenizer for Chinese or mixed Chinese-English text.

Methods in org.apache.lucene.analysis.cn.smart that return TokenStream:
- protected TokenStream SmartChineseAnalyzer.normalize(String fieldName, TokenStream in)

Methods in org.apache.lucene.analysis.cn.smart with parameters of type TokenStream:
- protected TokenStream SmartChineseAnalyzer.normalize(String fieldName, TokenStream in)
Uses of TokenStream in org.apache.lucene.analysis.commongrams
Subclasses of TokenStream in org.apache.lucene.analysis.commongrams:
- final class: Construct bigrams for frequently occurring terms while indexing.
- final class: Wrap a CommonGramsFilter optimizing phrase queries by only returning single words when they are not a member of a bigram.

Methods in org.apache.lucene.analysis.commongrams with parameters of type TokenStream:
- CommonGramsFilterFactory.create(TokenStream input)
- CommonGramsQueryFilterFactory.create(TokenStream input): Create a CommonGramsFilter and wrap it with a CommonGramsQueryFilter.

Constructors in org.apache.lucene.analysis.commongrams with parameters of type TokenStream:
- CommonGramsFilter(TokenStream input, CharArraySet commonWords): Construct a token stream filtering the given input using a Set of common words to create bigrams.
Uses of TokenStream in org.apache.lucene.analysis.compound
Subclasses of TokenStream in org.apache.lucene.analysis.compound:
- class: Base class for decomposition token filters.
- class: A TokenFilter that decomposes compound words found in many Germanic languages.
- class: A TokenFilter that decomposes compound words found in many Germanic languages.

Methods in org.apache.lucene.analysis.compound that return TokenStream:
- DictionaryCompoundWordTokenFilterFactory.create(TokenStream input)

Methods in org.apache.lucene.analysis.compound with parameters of type TokenStream:
- DictionaryCompoundWordTokenFilterFactory.create(TokenStream input)
- HyphenationCompoundWordTokenFilterFactory.create(TokenStream input)

Constructors in org.apache.lucene.analysis.compound with parameters of type TokenStream:
- protected CompoundWordTokenFilterBase(TokenStream input, CharArraySet dictionary)
- protected CompoundWordTokenFilterBase(TokenStream input, CharArraySet dictionary, boolean onlyLongestMatch)
- protected CompoundWordTokenFilterBase(TokenStream input, CharArraySet dictionary, int minWordSize, int minSubwordSize, int maxSubwordSize, boolean onlyLongestMatch)
- DictionaryCompoundWordTokenFilter(TokenStream input, CharArraySet dictionary): Creates a new DictionaryCompoundWordTokenFilter.
- DictionaryCompoundWordTokenFilter(TokenStream input, CharArraySet dictionary, int minWordSize, int minSubwordSize, int maxSubwordSize, boolean onlyLongestMatch): Creates a new DictionaryCompoundWordTokenFilter.
- HyphenationCompoundWordTokenFilter(TokenStream input, HyphenationTree hyphenator): Create a HyphenationCompoundWordTokenFilter with no dictionary.
- HyphenationCompoundWordTokenFilter(TokenStream input, HyphenationTree hyphenator, int minWordSize, int minSubwordSize, int maxSubwordSize): Create a HyphenationCompoundWordTokenFilter with no dictionary.
- HyphenationCompoundWordTokenFilter(TokenStream input, HyphenationTree hyphenator, CharArraySet dictionary): Creates a new HyphenationCompoundWordTokenFilter instance.
- HyphenationCompoundWordTokenFilter(TokenStream input, HyphenationTree hyphenator, CharArraySet dictionary, int minWordSize, int minSubwordSize, int maxSubwordSize, boolean onlyLongestMatch): Creates a new HyphenationCompoundWordTokenFilter instance.
- HyphenationCompoundWordTokenFilter(TokenStream input, HyphenationTree hyphenator, CharArraySet dictionary, int minWordSize, int minSubwordSize, int maxSubwordSize, boolean onlyLongestMatch, boolean noSubMatches, boolean noOverlappingMatches): Creates a new HyphenationCompoundWordTokenFilter instance.
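The DictionaryCompoundWordTokenFilter(TokenStream input, CharArraySet dictionary) constructor listed in this section keeps the original compound token and additionally emits each dictionary subword found inside it. A minimal sketch, assuming lucene-core and lucene-analysis-common are on the classpath; the class name CompoundDemo and the example compound are illustrative.

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import org.apache.lucene.analysis.CharArraySet;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.compound.DictionaryCompoundWordTokenFilter;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class CompoundDemo {

  /** Splits dictionary-listed subwords out of compound tokens. */
  public static List<String> decompose(String text, List<String> dictionary) throws IOException {
    Tokenizer source = new WhitespaceTokenizer();
    source.setReader(new StringReader(text));
    CharArraySet dict = new CharArraySet(dictionary, true); // ignoreCase = true
    List<String> terms = new ArrayList<>();
    // DictionaryCompoundWordTokenFilter(TokenStream input, CharArraySet dictionary), listed above.
    try (TokenStream ts = new DictionaryCompoundWordTokenFilter(source, dict)) {
      CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
      ts.reset();
      while (ts.incrementToken()) {
        terms.add(term.toString());
      }
      ts.end();
    }
    return terms;
  }

  public static void main(String[] args) throws IOException {
    // Emits the original token followed by the dictionary subwords it contains.
    System.out.println(decompose("Donaudampfschiff", List.of("donau", "dampf", "schiff")));
  }
}
```

The longer constructors control the minimum word and subword sizes and whether only the longest dictionary match per position is emitted.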
Uses of TokenStream in org.apache.lucene.analysis.core
Subclasses of TokenStream in org.apache.lucene.analysis.core:
- final class: Folds all Unicode digits in [:General_Category=Decimal_Number:] to Basic Latin digits (0-9).
- final class: Converts an incoming graph token stream, such as one from SynonymGraphFilter, into a flat form so that all nodes form a single linear chain with no side paths.
- final class: Emits the entire input as a single token.
- class: A LetterTokenizer is a tokenizer that divides text at non-letters.
- final class: Normalizes token text to lower case.
- final class: Removes stop words from a token stream.
- final class: Removes tokens whose types appear in a set of blocked types from a token stream.
- final class: A UnicodeWhitespaceTokenizer is a tokenizer that divides text at whitespace.
- final class: Normalizes token text to UPPER CASE.
- final class: A tokenizer that divides text at whitespace characters as defined by Character.isWhitespace(int).

Methods in org.apache.lucene.analysis.core that return TokenStream:
- DecimalDigitFilterFactory.create(TokenStream input)
- FlattenGraphFilterFactory.create(TokenStream input)
- LowerCaseFilterFactory.create(TokenStream input)
- StopFilterFactory.create(TokenStream input)
- TypeTokenFilterFactory.create(TokenStream input)
- UpperCaseFilterFactory.create(TokenStream input)
- DecimalDigitFilterFactory.normalize(TokenStream input)
- LowerCaseFilterFactory.normalize(TokenStream input)
- protected TokenStream SimpleAnalyzer.normalize(String fieldName, TokenStream in)
- protected TokenStream StopAnalyzer.normalize(String fieldName, TokenStream in)
- UpperCaseFilterFactory.normalize(TokenStream input)

Methods in org.apache.lucene.analysis.core with parameters of type TokenStream:
- DecimalDigitFilterFactory.create(TokenStream input)
- FlattenGraphFilterFactory.create(TokenStream input)
- LowerCaseFilterFactory.create(TokenStream input)
- StopFilterFactory.create(TokenStream input)
- TypeTokenFilterFactory.create(TokenStream input)
- UpperCaseFilterFactory.create(TokenStream input)
- DecimalDigitFilterFactory.normalize(TokenStream input)
- LowerCaseFilterFactory.normalize(TokenStream input)
- protected TokenStream SimpleAnalyzer.normalize(String fieldName, TokenStream in)
- protected TokenStream StopAnalyzer.normalize(String fieldName, TokenStream in)
- UpperCaseFilterFactory.normalize(TokenStream input)

Constructors in org.apache.lucene.analysis.core with parameters of type TokenStream:
- DecimalDigitFilter(TokenStream input): Creates a new DecimalDigitFilter over input.
- Create a new LowerCaseFilter, that normalizes token text to lower case.
- StopFilter(TokenStream in, CharArraySet stopWords): Constructs a filter which removes words from the input TokenStream that are named in the Set.
- TypeTokenFilter(TokenStream input, Set<String> stopTypes): Create a new TypeTokenFilter that filters tokens out (useWhiteList=false).
- TypeTokenFilter(TokenStream input, Set<String> stopTypes, boolean useWhiteList): Create a new TypeTokenFilter.
- Create a new UpperCaseFilter, that normalizes token text to upper case.
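The TokenStreamComponents constructors and filter constructors listed above are how an analysis chain is assembled: a Tokenizer source wrapped by successive TokenFilters. A minimal sketch of a whitespace + lower-case + stop-word chain, assuming lucene-core and lucene-analysis-common are on the classpath; the class names ChainAnalyzerDemo and ChainAnalyzer, the stop-word set, and the field name "f" are illustrative.

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.CharArraySet;
import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.StopFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class ChainAnalyzerDemo {

  /** Whitespace-tokenize, lower-case, then drop stop words. */
  public static final class ChainAnalyzer extends Analyzer {
    private static final CharArraySet STOP_WORDS =
        new CharArraySet(Arrays.asList("the", "of"), true); // ignoreCase = true

    @Override
    protected TokenStreamComponents createComponents(String fieldName) {
      Tokenizer source = new WhitespaceTokenizer();
      TokenStream sink = new LowerCaseFilter(source); // LowerCaseFilter ctor listed above
      sink = new StopFilter(sink, STOP_WORDS);        // StopFilter(TokenStream, CharArraySet)
      return new TokenStreamComponents(source, sink);
    }
  }

  /** Runs the chain on text and collects the surviving terms. */
  public static List<String> analyze(String text) throws IOException {
    List<String> out = new ArrayList<>();
    try (Analyzer a = new ChainAnalyzer();
         TokenStream ts = a.tokenStream("f", text)) {
      CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
      ts.reset();
      while (ts.incrementToken()) {
        out.add(term.toString());
      }
      ts.end();
    }
    return out;
  }

  public static void main(String[] args) throws IOException {
    System.out.println(analyze("The Uses of TokenStream"));
  }
}
```

Note the design split that TokenStreamComponents captures: the Tokenizer is the reusable source that receives new Readers, while the outermost filter is the sink the consumer reads from.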
Uses of TokenStream in org.apache.lucene.analysis.custom
Methods in org.apache.lucene.analysis.custom that return TokenStream:
- protected TokenStream CustomAnalyzer.normalize(String fieldName, TokenStream in)

Methods in org.apache.lucene.analysis.custom with parameters of type TokenStream:
- protected TokenStream CustomAnalyzer.normalize(String fieldName, TokenStream in)
Uses of TokenStream in org.apache.lucene.analysis.cz
Subclasses of TokenStream in org.apache.lucene.analysis.cz:
- final class: A TokenFilter that applies CzechStemmer to stem Czech words.

Methods in org.apache.lucene.analysis.cz that return TokenStream:
- CzechStemFilterFactory.create(TokenStream input)
- protected TokenStream CzechAnalyzer.normalize(String fieldName, TokenStream in)

Methods in org.apache.lucene.analysis.cz with parameters of type TokenStream:
- CzechStemFilterFactory.create(TokenStream input)
- protected TokenStream CzechAnalyzer.normalize(String fieldName, TokenStream in)

Constructors in org.apache.lucene.analysis.cz with parameters of type TokenStream
Uses of TokenStream in org.apache.lucene.analysis.da
Methods in org.apache.lucene.analysis.da that return TokenStream:
- protected TokenStream DanishAnalyzer.normalize(String fieldName, TokenStream in)

Methods in org.apache.lucene.analysis.da with parameters of type TokenStream:
- protected TokenStream DanishAnalyzer.normalize(String fieldName, TokenStream in)
Uses of TokenStream in org.apache.lucene.analysis.de
Subclasses of TokenStream in org.apache.lucene.analysis.de:
- final class: A TokenFilter that applies GermanLightStemmer to stem German words.
- final class: A TokenFilter that applies GermanMinimalStemmer to stem German words.
- final class: Normalizes German characters according to the heuristics of the German2 snowball algorithm.
- final class: A TokenFilter that stems German words.

Methods in org.apache.lucene.analysis.de that return TokenStream:
- GermanLightStemFilterFactory.create(TokenStream input)
- GermanMinimalStemFilterFactory.create(TokenStream input)
- GermanNormalizationFilterFactory.create(TokenStream input)
- protected TokenStream GermanAnalyzer.normalize(String fieldName, TokenStream in)
- GermanNormalizationFilterFactory.normalize(TokenStream input)

Methods in org.apache.lucene.analysis.de with parameters of type TokenStream:
- GermanLightStemFilterFactory.create(TokenStream input)
- GermanMinimalStemFilterFactory.create(TokenStream input)
- GermanNormalizationFilterFactory.create(TokenStream input)
- GermanStemFilterFactory.create(TokenStream in)
- protected TokenStream GermanAnalyzer.normalize(String fieldName, TokenStream in)
- GermanNormalizationFilterFactory.normalize(TokenStream input)

Constructors in org.apache.lucene.analysis.de with parameters of type TokenStream:
- GermanLightStemFilter(TokenStream input)
- Creates a GermanStemFilter instance.
Uses of TokenStream in org.apache.lucene.analysis.el
Subclasses of TokenStream in org.apache.lucene.analysis.el:
- final class: Normalizes token text to lower case, removes some Greek diacritics, and standardizes final sigma to sigma.
- final class: A TokenFilter that applies GreekStemmer to stem Greek words.

Methods in org.apache.lucene.analysis.el that return TokenStream:
- GreekLowerCaseFilterFactory.create(TokenStream in)
- GreekStemFilterFactory.create(TokenStream input)
- protected TokenStream GreekAnalyzer.normalize(String fieldName, TokenStream in)
- GreekLowerCaseFilterFactory.normalize(TokenStream input)

Methods in org.apache.lucene.analysis.el with parameters of type TokenStream:
- GreekLowerCaseFilterFactory.create(TokenStream in)
- GreekStemFilterFactory.create(TokenStream input)
- protected TokenStream GreekAnalyzer.normalize(String fieldName, TokenStream in)
- GreekLowerCaseFilterFactory.normalize(TokenStream input)

Constructors in org.apache.lucene.analysis.el with parameters of type TokenStream:
- Create a GreekLowerCaseFilter that normalizes Greek token text.
- GreekStemFilter(TokenStream input)
Uses of TokenStream in org.apache.lucene.analysis.email
Subclasses of TokenStream in org.apache.lucene.analysis.email:
- final class: This class implements Word Break rules from the Unicode Text Segmentation algorithm, as specified in Unicode Standard Annex #29; URLs and email addresses are also tokenized according to the relevant RFCs.

Methods in org.apache.lucene.analysis.email that return TokenStream:
- protected TokenStream UAX29URLEmailAnalyzer.normalize(String fieldName, TokenStream in)

Methods in org.apache.lucene.analysis.email with parameters of type TokenStream:
- protected TokenStream UAX29URLEmailAnalyzer.normalize(String fieldName, TokenStream in)
Uses of TokenStream in org.apache.lucene.analysis.en
Subclasses of TokenStream in org.apache.lucene.analysis.en:
- final class: A TokenFilter that applies EnglishMinimalStemmer to stem English words.
- final class: TokenFilter that removes possessives (trailing 's) from words.
- final class: A high-performance kstem filter for English.
- final class: Transforms the token stream as per the Porter stemming algorithm.

Methods in org.apache.lucene.analysis.en that return TokenStream:
- EnglishMinimalStemFilterFactory.create(TokenStream input)
- EnglishPossessiveFilterFactory.create(TokenStream input)
- protected TokenStream EnglishAnalyzer.normalize(String fieldName, TokenStream in)

Methods in org.apache.lucene.analysis.en with parameters of type TokenStream:
- EnglishMinimalStemFilterFactory.create(TokenStream input)
- EnglishPossessiveFilterFactory.create(TokenStream input)
- KStemFilterFactory.create(TokenStream input)
- PorterStemFilterFactory.create(TokenStream input)
- protected TokenStream EnglishAnalyzer.normalize(String fieldName, TokenStream in)

Constructors in org.apache.lucene.analysis.en with parameters of type TokenStream
Uses of TokenStream in org.apache.lucene.analysis.es
Subclasses of TokenStream in org.apache.lucene.analysis.es:
- final class: A TokenFilter that applies SpanishLightStemmer to stem Spanish words.
- final class: Deprecated.
- final class: A TokenFilter that applies SpanishPluralStemmer to stem Spanish words.

Methods in org.apache.lucene.analysis.es that return TokenStream:
- SpanishLightStemFilterFactory.create(TokenStream input)
- SpanishMinimalStemFilterFactory.create(TokenStream input): Deprecated.
- SpanishPluralStemFilterFactory.create(TokenStream input)
- protected TokenStream SpanishAnalyzer.normalize(String fieldName, TokenStream in)

Methods in org.apache.lucene.analysis.es with parameters of type TokenStream:
- SpanishLightStemFilterFactory.create(TokenStream input)
- SpanishMinimalStemFilterFactory.create(TokenStream input): Deprecated.
- SpanishPluralStemFilterFactory.create(TokenStream input)
- protected TokenStream SpanishAnalyzer.normalize(String fieldName, TokenStream in)

Constructors in org.apache.lucene.analysis.es with parameters of type TokenStream:
- Deprecated.
Uses of TokenStream in org.apache.lucene.analysis.et
Methods in org.apache.lucene.analysis.et that return TokenStream:
- protected TokenStream EstonianAnalyzer.normalize(String fieldName, TokenStream in)

Methods in org.apache.lucene.analysis.et with parameters of type TokenStream:
- protected TokenStream EstonianAnalyzer.normalize(String fieldName, TokenStream in)
Uses of TokenStream in org.apache.lucene.analysis.eu
Methods in org.apache.lucene.analysis.eu that return TokenStream:
- protected TokenStream BasqueAnalyzer.normalize(String fieldName, TokenStream in)

Methods in org.apache.lucene.analysis.eu with parameters of type TokenStream:
- protected TokenStream BasqueAnalyzer.normalize(String fieldName, TokenStream in)
Uses of TokenStream in org.apache.lucene.analysis.fa
Subclasses of TokenStream in org.apache.lucene.analysis.fa:
- final class: A TokenFilter that applies PersianNormalizer to normalize the orthography.
- final class: A TokenFilter that applies PersianStemmer to stem Persian words.

Methods in org.apache.lucene.analysis.fa that return TokenStream:
- PersianNormalizationFilterFactory.create(TokenStream input)
- protected TokenStream PersianAnalyzer.normalize(String fieldName, TokenStream in)
- PersianNormalizationFilterFactory.normalize(TokenStream input)

Methods in org.apache.lucene.analysis.fa with parameters of type TokenStream:
- PersianNormalizationFilterFactory.create(TokenStream input)
- PersianStemFilterFactory.create(TokenStream input)
- protected TokenStream PersianAnalyzer.normalize(String fieldName, TokenStream in)
- PersianNormalizationFilterFactory.normalize(TokenStream input)

Constructors in org.apache.lucene.analysis.fa with parameters of type TokenStream
Uses of TokenStream in org.apache.lucene.analysis.fi
Subclasses of TokenStream in org.apache.lucene.analysis.fi:
- final class FinnishLightStemFilter: A TokenFilter that applies FinnishLightStemmer to stem Finnish words.

Methods in org.apache.lucene.analysis.fi that return or take TokenStream:
- FinnishLightStemFilterFactory.create(TokenStream input)
- protected TokenStream FinnishAnalyzer.normalize(String fieldName, TokenStream in)

Constructors in org.apache.lucene.analysis.fi with parameters of type TokenStream
Uses of TokenStream in org.apache.lucene.analysis.fr
Subclasses of TokenStream in org.apache.lucene.analysis.fr:
- final class FrenchLightStemFilter: A TokenFilter that applies FrenchLightStemmer to stem French words.
- final class FrenchMinimalStemFilter: A TokenFilter that applies FrenchMinimalStemmer to stem French words.

Methods in org.apache.lucene.analysis.fr that return or take TokenStream:
- FrenchLightStemFilterFactory.create(TokenStream input)
- FrenchMinimalStemFilterFactory.create(TokenStream input)
- protected TokenStream FrenchAnalyzer.normalize(String fieldName, TokenStream in)

Constructors in org.apache.lucene.analysis.fr with parameters of type TokenStream
Uses of TokenStream in org.apache.lucene.analysis.ga
Subclasses of TokenStream in org.apache.lucene.analysis.ga:
- final class IrishLowerCaseFilter: Normalises token text to lower case, handling t-prothesis and n-eclipsis (i.e., 'nAthair' should become 'n-athair').

Methods in org.apache.lucene.analysis.ga that return or take TokenStream:
- IrishLowerCaseFilterFactory.create(TokenStream input)
- protected TokenStream IrishAnalyzer.normalize(String fieldName, TokenStream in)
- IrishLowerCaseFilterFactory.normalize(TokenStream input)

Constructors in org.apache.lucene.analysis.ga with parameters of type TokenStream:
- IrishLowerCaseFilter(TokenStream in): Create an IrishLowerCaseFilter that normalises Irish token text.
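The 'nAthair' -> 'n-athair' behavior above can be sketched outside Lucene. The class and method names below are illustrative, not part of the Lucene API, and the rule shown is a simplified reading of the filter's javadoc (a prefix 'n' or 't' before a capitalized vowel is separated with a hyphen before lowercasing):

```java
// Standalone sketch of the Irish lowercasing rule, assuming the simplified
// condition described above; not Lucene's actual IrishLowerCaseFilter code.
public class IrishLowerCaseSketch {
    private static final String UPPER_VOWELS = "AEIOUÁÉÍÓÚ";

    public static String lowerCaseIrish(String token) {
        if (token.length() > 1
                && (token.charAt(0) == 'n' || token.charAt(0) == 't')
                && UPPER_VOWELS.indexOf(token.charAt(1)) >= 0) {
            // eclipsed (n-) or prothesized (t-) word: keep the mutation
            // prefix separated so it matches the hyphenated written form
            return token.charAt(0) + "-" + token.substring(1).toLowerCase();
        }
        return token.toLowerCase();
    }

    public static void main(String[] args) {
        System.out.println(lowerCaseIrish("nAthair")); // n-athair
        System.out.println(lowerCaseIrish("tUisce"));  // t-uisce
        System.out.println(lowerCaseIrish("Gaeilge")); // gaeilge
    }
}
```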
Uses of TokenStream in org.apache.lucene.analysis.gl
Subclasses of TokenStream in org.apache.lucene.analysis.gl:
- final class GalicianMinimalStemFilter: A TokenFilter that applies GalicianMinimalStemmer to stem Galician words.
- final class GalicianStemFilter: A TokenFilter that applies GalicianStemmer to stem Galician words.

Methods in org.apache.lucene.analysis.gl that return or take TokenStream:
- GalicianMinimalStemFilterFactory.create(TokenStream input)
- GalicianStemFilterFactory.create(TokenStream input)
- protected TokenStream GalicianAnalyzer.normalize(String fieldName, TokenStream in)

Constructors in org.apache.lucene.analysis.gl with parameters of type TokenStream
Uses of TokenStream in org.apache.lucene.analysis.hi
Subclasses of TokenStream in org.apache.lucene.analysis.hi:
- final class HindiNormalizationFilter: A TokenFilter that applies HindiNormalizer to normalize the orthography.
- final class HindiStemFilter: A TokenFilter that applies HindiStemmer to stem Hindi words.

Methods in org.apache.lucene.analysis.hi that return or take TokenStream:
- HindiNormalizationFilterFactory.create(TokenStream input)
- HindiStemFilterFactory.create(TokenStream input)
- protected TokenStream HindiAnalyzer.normalize(String fieldName, TokenStream in)
- HindiNormalizationFilterFactory.normalize(TokenStream input)

Constructors in org.apache.lucene.analysis.hi with parameters of type TokenStream
Uses of TokenStream in org.apache.lucene.analysis.hu
Subclasses of TokenStream in org.apache.lucene.analysis.hu:
- final class HungarianLightStemFilter: A TokenFilter that applies HungarianLightStemmer to stem Hungarian words.

Methods in org.apache.lucene.analysis.hu that return or take TokenStream:
- HungarianLightStemFilterFactory.create(TokenStream input)
- protected TokenStream HungarianAnalyzer.normalize(String fieldName, TokenStream in)

Constructors in org.apache.lucene.analysis.hu with parameters of type TokenStream
Uses of TokenStream in org.apache.lucene.analysis.hunspell
Subclasses of TokenStream in org.apache.lucene.analysis.hunspell:
- final class HunspellStemFilter: TokenFilter that uses hunspell affix rules and words to stem tokens.

Constructors in org.apache.lucene.analysis.hunspell with parameters of type TokenStream:
- HunspellStemFilter(TokenStream input, Dictionary dictionary): Create a HunspellStemFilter outputting all possible stems.
- HunspellStemFilter(TokenStream input, Dictionary dictionary, boolean dedup): Create a HunspellStemFilter outputting all possible stems.
- HunspellStemFilter(TokenStream input, Dictionary dictionary, boolean dedup, boolean longestOnly): Creates a new HunspellStemFilter that will stem tokens from the given TokenStream using affix rules in the provided Dictionary.
Uses of TokenStream in org.apache.lucene.analysis.hy
Methods in org.apache.lucene.analysis.hy that return or take TokenStream:
- protected TokenStream ArmenianAnalyzer.normalize(String fieldName, TokenStream in)
Uses of TokenStream in org.apache.lucene.analysis.icu
Subclasses of TokenStream in org.apache.lucene.analysis.icu:
- final class ICUFoldingFilter: A TokenFilter that applies search term folding to Unicode text, applying foldings from UTR#30 Character Foldings.
- class ICUNormalizer2Filter: Normalize token text with ICU's Normalizer2.
- final class ICUTransformFilter: A TokenFilter that transforms text with ICU.

Methods in org.apache.lucene.analysis.icu that return or take TokenStream:
- ICUFoldingFilterFactory.create(TokenStream input)
- ICUNormalizer2FilterFactory.create(TokenStream input)
- ICUTransformFilterFactory.create(TokenStream input)
- ICUFoldingFilterFactory.normalize(TokenStream input)
- ICUNormalizer2FilterFactory.normalize(TokenStream input)
- ICUTransformFilterFactory.normalize(TokenStream input)

Constructors in org.apache.lucene.analysis.icu with parameters of type TokenStream:
- ICUFoldingFilter(TokenStream input): Create a new ICUFoldingFilter on the specified input.
- ICUFoldingFilter(TokenStream input, com.ibm.icu.text.Normalizer2 normalizer): Create a new ICUFoldingFilter on the specified input with the specified normalizer.
- ICUNormalizer2Filter(TokenStream input): Create a new Normalizer2Filter that combines NFKC normalization, Case Folding, and removes Default Ignorables (NFKC_Casefold).
- ICUNormalizer2Filter(TokenStream input, com.ibm.icu.text.Normalizer2 normalizer): Create a new Normalizer2Filter with the specified Normalizer2.
- ICUTransformFilter(TokenStream input, com.ibm.icu.text.Transliterator transform): Create a new ICUTransformFilter that transforms text on the given stream.
Uses of TokenStream in org.apache.lucene.analysis.icu.segmentation
Subclasses of TokenStream in org.apache.lucene.analysis.icu.segmentation:
- final class ICUTokenizer: Breaks text into words according to UAX #29: Unicode Text Segmentation (http://www.unicode.org/reports/tr29/).
Uses of TokenStream in org.apache.lucene.analysis.id
Subclasses of TokenStream in org.apache.lucene.analysis.id:
- final class IndonesianStemFilter: A TokenFilter that applies IndonesianStemmer to stem Indonesian words.

Methods in org.apache.lucene.analysis.id that return or take TokenStream:
- IndonesianStemFilterFactory.create(TokenStream input)
- protected TokenStream IndonesianAnalyzer.normalize(String fieldName, TokenStream in)

Constructors in org.apache.lucene.analysis.id with parameters of type TokenStream:
- IndonesianStemFilter(TokenStream input)
- IndonesianStemFilter(TokenStream input, boolean stemDerivational): Create a new IndonesianStemFilter.
Uses of TokenStream in org.apache.lucene.analysis.in
Subclasses of TokenStream in org.apache.lucene.analysis.in:
- final class IndicNormalizationFilter: A TokenFilter that applies IndicNormalizer to normalize text in Indian Languages.

Methods in org.apache.lucene.analysis.in that return or take TokenStream:
- IndicNormalizationFilterFactory.create(TokenStream input)
- IndicNormalizationFilterFactory.normalize(TokenStream input)

Constructors in org.apache.lucene.analysis.in with parameters of type TokenStream
Uses of TokenStream in org.apache.lucene.analysis.it
Subclasses of TokenStream in org.apache.lucene.analysis.it:
- final class ItalianLightStemFilter: A TokenFilter that applies ItalianLightStemmer to stem Italian words.

Methods in org.apache.lucene.analysis.it that return or take TokenStream:
- ItalianLightStemFilterFactory.create(TokenStream input)
- protected TokenStream ItalianAnalyzer.normalize(String fieldName, TokenStream in)

Constructors in org.apache.lucene.analysis.it with parameters of type TokenStream
Uses of TokenStream in org.apache.lucene.analysis.ja
Subclasses of TokenStream in org.apache.lucene.analysis.ja:
- final class JapaneseBaseFormFilter: Replaces term text with the BaseFormAttribute.
- final class JapaneseCompletionFilter: A TokenFilter that adds Japanese romanized tokens to the term attribute.
- final class JapaneseHiraganaUppercaseFilter: A TokenFilter that normalizes small letters (捨て仮名) in hiragana into normal letters.
- final class JapaneseKatakanaStemFilter: A TokenFilter that normalizes common katakana spelling variations ending in a long sound character by removing this character (U+30FC).
- final class JapaneseKatakanaUppercaseFilter: A TokenFilter that normalizes small letters (捨て仮名) in katakana into normal letters.
- class JapaneseNumberFilter: A TokenFilter that normalizes Japanese numbers (kansūji) to regular Arabic decimal numbers in half-width characters.
- final class JapanesePartOfSpeechStopFilter: Removes tokens that match a set of part-of-speech tags.
- final class JapaneseReadingFormFilter: A TokenFilter that replaces the term attribute with the reading of a token in either katakana or romaji form.
- final class JapaneseTokenizer: Tokenizer for Japanese that uses morphological analysis.

Methods in org.apache.lucene.analysis.ja that return or take TokenStream:
- JapaneseBaseFormFilterFactory.create(TokenStream input)
- JapaneseCompletionFilterFactory.create(TokenStream input)
- JapaneseHiraganaUppercaseFilterFactory.create(TokenStream input)
- JapaneseKatakanaStemFilterFactory.create(TokenStream input)
- JapaneseKatakanaUppercaseFilterFactory.create(TokenStream input)
- JapaneseNumberFilterFactory.create(TokenStream input)
- JapanesePartOfSpeechStopFilterFactory.create(TokenStream stream)
- JapaneseReadingFormFilterFactory.create(TokenStream input)
- protected TokenStream JapaneseAnalyzer.normalize(String fieldName, TokenStream in)

Constructors in org.apache.lucene.analysis.ja with parameters of type TokenStream:
- JapaneseCompletionFilter: Creates a new JapaneseCompletionFilter with default configurations.
- JapaneseCompletionFilter: Creates a new JapaneseCompletionFilter.
- JapaneseKatakanaStemFilter(TokenStream input, int minimumLength)
- JapaneseNumberFilter(TokenStream input)
- JapanesePartOfSpeechStopFilter(TokenStream input, Set<String> stopTags): Create a new JapanesePartOfSpeechStopFilter.
- JapaneseReadingFormFilter(TokenStream input, boolean useRomaji)
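The long-sound-mark rule behind JapaneseKatakanaStemFilter can be sketched as plain string logic. This is a hedged simplification: the class name below is illustrative, and the real filter additionally verifies that the token is katakana before stemming.

```java
// Sketch of the katakana long-sound stemming rule: terms longer than
// minimumLength that end in the prolonged sound mark U+30FC (ー) have it
// removed, so spelling variants collapse to one indexed term.
public class KatakanaStemSketch {
    public static final char LONG_SOUND_MARK = '\u30FC'; // ー

    public static String stem(String term, int minimumLength) {
        if (term.length() > minimumLength
                && term.charAt(term.length() - 1) == LONG_SOUND_MARK) {
            return term.substring(0, term.length() - 1);
        }
        return term; // too short, or no trailing long sound mark
    }

    public static void main(String[] args) {
        System.out.println(stem("ターミネーター", 4)); // ターミネータ
        System.out.println(stem("コピー", 4));       // コピー (unchanged: too short)
    }
}
```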
Uses of TokenStream in org.apache.lucene.analysis.ko
Subclasses of TokenStream in org.apache.lucene.analysis.ko:
- class KoreanNumberFilter: A TokenFilter that normalizes Korean numbers to regular Arabic decimal numbers in half-width characters.
- final class KoreanPartOfSpeechStopFilter: Removes tokens that match a set of part-of-speech tags.
- final class KoreanReadingFormFilter: Replaces term text with the ReadingAttribute, which is the Hangul transcription of Hanja characters.
- final class KoreanTokenizer: Tokenizer for Korean that uses morphological analysis.

Methods in org.apache.lucene.analysis.ko that return or take TokenStream:
- KoreanNumberFilterFactory.create(TokenStream input)
- KoreanPartOfSpeechStopFilterFactory.create(TokenStream stream)
- KoreanReadingFormFilterFactory.create(TokenStream input)
- protected TokenStream KoreanAnalyzer.normalize(String fieldName, TokenStream in)

Constructors in org.apache.lucene.analysis.ko with parameters of type TokenStream:
- KoreanNumberFilter(TokenStream input)
- KoreanPartOfSpeechStopFilter(TokenStream input): Create a new KoreanPartOfSpeechStopFilter with the default list of stop tags, KoreanPartOfSpeechStopFilter.DEFAULT_STOP_TAGS.
- KoreanPartOfSpeechStopFilter(TokenStream input, Set<POS.Tag> stopTags): Create a new KoreanPartOfSpeechStopFilter.
Uses of TokenStream in org.apache.lucene.analysis.lt
Methods in org.apache.lucene.analysis.lt that return or take TokenStream:
- protected TokenStream LithuanianAnalyzer.normalize(String fieldName, TokenStream in)
Uses of TokenStream in org.apache.lucene.analysis.lv
Subclasses of TokenStream in org.apache.lucene.analysis.lv:
- final class LatvianStemFilter: A TokenFilter that applies LatvianStemmer to stem Latvian words.

Methods in org.apache.lucene.analysis.lv that return or take TokenStream:
- LatvianStemFilterFactory.create(TokenStream input)
- protected TokenStream LatvianAnalyzer.normalize(String fieldName, TokenStream in)

Constructors in org.apache.lucene.analysis.lv with parameters of type TokenStream
Uses of TokenStream in org.apache.lucene.analysis.minhash
Subclasses of TokenStream in org.apache.lucene.analysis.minhash:
- class MinHashFilter: Generate min hash tokens from an incoming stream of tokens.

Constructors in org.apache.lucene.analysis.minhash with parameters of type TokenStream:
- MinHashFilter(TokenStream input, int hashCount, int bucketCount, int hashSetSize, boolean withRotation): Create a MinHash filter.
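The idea behind min hash tokens (and why they are useful for LSH) can be illustrated with a toy signature, independent of Lucene. The class, the seeded hash mixing, and the parameter names below are illustrative assumptions, not MinHashFilter's implementation, which hashes 128-bit values and supports bucketing and rotation:

```java
import java.util.Arrays;

// Toy MinHash: hash every token with k seeded hash functions and keep the
// minimum per function; the fraction of matching minima between two
// signatures estimates the Jaccard similarity of the two token sets.
public class MinHashSketch {
    public static long[] signature(String[] tokens, int hashCount) {
        long[] mins = new long[hashCount];
        Arrays.fill(mins, Long.MAX_VALUE);
        for (String t : tokens) {
            for (int i = 0; i < hashCount; i++) {
                // cheap seeded mixing; stands in for a real hash family
                long h = (t.hashCode() * 0x9E3779B97F4A7C15L)
                        ^ (i * 0xC2B2AE3D27D4EB4FL);
                h = Long.rotateLeft(h, i & 63) * 0xFF51AFD7ED558CCDL;
                if (h < mins[i]) mins[i] = h;
            }
        }
        return mins;
    }

    public static double estimateJaccard(long[] a, long[] b) {
        int same = 0;
        for (int i = 0; i < a.length; i++) if (a[i] == b[i]) same++;
        return same / (double) a.length;
    }

    public static void main(String[] args) {
        long[] s1 = signature(new String[] {"the", "quick", "brown", "fox"}, 32);
        long[] s2 = signature(new String[] {"the", "quick", "brown", "dog"}, 32);
        System.out.println(estimateJaccard(s1, s1)); // identical sets: 1.0
        System.out.println(estimateJaccard(s1, s2)); // rough Jaccard estimate
    }
}
```

Because the signature is a fixed-size set summary, near-duplicate documents can be detected by comparing (or bucketing) signatures instead of full token sets.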
Uses of TokenStream in org.apache.lucene.analysis.miscellaneous
Subclasses of TokenStream in org.apache.lucene.analysis.miscellaneous:
- final class ASCIIFoldingFilter: Converts alphabetic, numeric, and symbolic Unicode characters which are not in the first 127 ASCII characters (the "Basic Latin" Unicode block) into their ASCII equivalents, if one exists.
- final class CapitalizationFilter: A filter to apply normal capitalization rules to Tokens.
- final class CodepointCountFilter: Removes words that are too long or too short from the stream.
- final class ConcatenateGraphFilter: Concatenates/Joins every incoming token with a separator into one output token for every path through the token stream (which is a graph).
- final class ConcatenatingTokenStream: A TokenStream that takes an array of input TokenStreams as sources, and concatenates them together.
- class ConditionalTokenFilter: Allows skipping TokenFilters based on the current set of attributes.
- class DateRecognizerFilter: Filters all tokens that cannot be parsed to a date, using the provided DateFormat.
- final class DelimitedTermFrequencyTokenFilter: Characters before the delimiter are the "token"; the textual integer after is the term frequency.
- final class DropIfFlaggedFilter: Allows Tokens with a given combination of flags to be dropped.
- final class EmptyTokenStream: An always exhausted token stream.
- class FingerprintFilter: Outputs a single token which is a concatenation of the sorted and de-duplicated set of input tokens.
- final class FixBrokenOffsetsFilter: Deprecated. Fix the token filters that create broken offsets in the first place.
- final class HyphenatedWordsFilter: When plain text is extracted from documents, we will often have many words hyphenated and broken into two lines.
- final class KeepWordFilter: A TokenFilter that only keeps tokens with text contained in the required words.
- class KeywordMarkerFilter: Marks terms as keywords via the KeywordAttribute.
- final class KeywordRepeatFilter: This TokenFilter emits each incoming token twice, once as a keyword and once as a non-keyword; in other words, once with KeywordAttribute.setKeyword(boolean) set to true and once set to false.
- final class LengthFilter: Removes words that are too long or too short from the stream.
- final class LimitTokenCountFilter: This TokenFilter limits the number of tokens while indexing.
- final class LimitTokenOffsetFilter: Lets all tokens pass through until it sees one with a start offset <= a configured limit, which won't pass and ends the stream.
- final class LimitTokenPositionFilter: This TokenFilter limits its emitted tokens to those with positions that are not greater than the configured limit.
- final class PatternKeywordMarkerFilter: Marks terms as keywords via the KeywordAttribute.
- class ProtectedTermFilter: A ConditionalTokenFilter that only applies its wrapped filters to tokens that are not contained in a protected set.
- final class RemoveDuplicatesTokenFilter: A TokenFilter which filters out Tokens at the same position and Term text as the previous token in the stream.
- final class ScandinavianFoldingFilter: This filter folds Scandinavian characters åÅäæÄÆ->a and öÖøØ->o.
- final class ScandinavianNormalizationFilter: This filter normalizes use of the interchangeable Scandinavian characters æÆäÄöÖøØ and folded variants (aa, ao, ae, oe and oo) by transforming them to åÅæÆøØ.
- final class SetKeywordMarkerFilter: Marks terms as keywords via the KeywordAttribute.
- final class StemmerOverrideFilter: Provides the ability to override any KeywordAttribute-aware stemmer with custom dictionary-based stemming.
- final class TrimFilter: Trims leading and trailing whitespace from Tokens in the stream.
- final class TruncateTokenFilter: A token filter for truncating the terms into a specific length.
- final class TypeAsSynonymFilter: Adds the TypeAttribute.type() as a synonym.
- final class WordDelimiterFilter: Deprecated. Use WordDelimiterGraphFilter instead: it produces a correct token graph.
- final class WordDelimiterGraphFilter: Splits words into subwords and performs optional transformations on subword groups, producing a correct token graph.

Fields in org.apache.lucene.analysis.miscellaneous declared as TokenStream:
- private final TokenStream ConditionalTokenFilter.delegate
- private final TokenStream ConcatenateGraphFilter.inputTokenStream
- private final TokenStream[] ConcatenatingTokenStream.sources

Methods in org.apache.lucene.analysis.miscellaneous that return or take TokenStream:
- private static AttributeSource ConcatenatingTokenStream.combineSources(TokenStream... sources)
- ASCIIFoldingFilterFactory.create(TokenStream input)
- CapitalizationFilterFactory.create(TokenStream input)
- CodepointCountFilterFactory.create(TokenStream input)
- ConcatenateGraphFilterFactory.create(TokenStream input)
- ConditionalTokenFilterFactory.create(TokenStream input)
- protected abstract ConditionalTokenFilter ConditionalTokenFilterFactory.create(TokenStream input, Function<TokenStream, TokenStream> inner): Modify the incoming TokenStream with a ConditionalTokenFilter.
- DateRecognizerFilterFactory.create(TokenStream input)
- DelimitedTermFrequencyTokenFilterFactory.create(TokenStream input)
- DropIfFlaggedFilterFactory.create(TokenStream input)
- FingerprintFilterFactory.create(TokenStream input)
- FixBrokenOffsetsFilterFactory.create(TokenStream input) (Deprecated)
- HyphenatedWordsFilterFactory.create(TokenStream input)
- KeepWordFilterFactory.create(TokenStream input)
- KeywordMarkerFilterFactory.create(TokenStream input)
- KeywordRepeatFilterFactory.create(TokenStream input)
- LengthFilterFactory.create(TokenStream input)
- LimitTokenCountFilterFactory.create(TokenStream input)
- LimitTokenOffsetFilterFactory.create(TokenStream input)
- LimitTokenPositionFilterFactory.create(TokenStream input)
- protected ConditionalTokenFilter ProtectedTermFilterFactory.create(TokenStream input, Function<TokenStream, TokenStream> inner)
- RemoveDuplicatesTokenFilterFactory.create(TokenStream input)
- ScandinavianFoldingFilterFactory.create(TokenStream input)
- ScandinavianNormalizationFilterFactory.create(TokenStream input)
- StemmerOverrideFilterFactory.create(TokenStream input)
- TrimFilterFactory.create(TokenStream input)
- TruncateTokenFilterFactory.create(TokenStream input)
- TypeAsSynonymFilterFactory.create(TokenStream input)
- WordDelimiterFilterFactory.create(TokenStream input) (Deprecated)
- WordDelimiterGraphFilterFactory.create(TokenStream input)
- ASCIIFoldingFilterFactory.normalize(TokenStream input)
- ScandinavianFoldingFilterFactory.normalize(TokenStream input)
- ScandinavianNormalizationFilterFactory.normalize(TokenStream input)
- TrimFilterFactory.normalize(TokenStream input)

Constructors in org.apache.lucene.analysis.miscellaneous with parameters of type TokenStream:
- ASCIIFoldingFilter(TokenStream input)
- ASCIIFoldingFilter(TokenStream input, boolean preserveOriginal): Create a new ASCIIFoldingFilter.
- CapitalizationFilter(TokenStream in): Creates a CapitalizationFilter with the default parameters.
- CapitalizationFilter(TokenStream in, boolean onlyFirstWord, CharArraySet keep, boolean forceFirstLetter, Collection<char[]> okPrefix, int minWordLength, int maxWordCount, int maxTokenLength): Creates a CapitalizationFilter with the specified parameters.
- CodepointCountFilter(TokenStream in, int min, int max): Create a new CodepointCountFilter.
- ConcatenateGraphFilter(TokenStream inputTokenStream): Creates a token stream to convert input to a token stream of accepted strings by its token stream graph.
- ConcatenateGraphFilter(TokenStream inputTokenStream, boolean preserveSep, boolean preservePositionIncrements, int maxGraphExpansions)
- ConcatenateGraphFilter(TokenStream inputTokenStream, Character tokenSeparator, boolean preservePositionIncrements, int maxGraphExpansions): Creates a token stream to convert input to a token stream of accepted strings by its token stream graph.
- ConcatenatingTokenStream(TokenStream... sources): Create a new ConcatenatingTokenStream from a set of inputs.
- protected ConditionalTokenFilter(TokenStream input, Function<TokenStream, TokenStream> inputFactory): Create a new ConditionalTokenFilter.
- DateRecognizerFilter(TokenStream input)
- DateRecognizerFilter(TokenStream input, DateFormat dateFormat)
- DelimitedTermFrequencyTokenFilter(TokenStream input, char delimiter)
- DropIfFlaggedFilter(TokenStream input, int dropFlags): Construct a token stream filtering the given input.
- FingerprintFilter(TokenStream input): Create a new FingerprintFilter with default settings.
- FingerprintFilter(TokenStream input, int maxOutputTokenSize, char separator): Create a new FingerprintFilter with control over all settings.
- FixBrokenOffsetsFilter: Deprecated.
- HyphenatedWordsFilter: Creates a new HyphenatedWordsFilter.
- KeepWordFilter(TokenStream in, CharArraySet words): Create a new KeepWordFilter.
- protected KeywordMarkerFilter: Creates a new KeywordMarkerFilter.
- KeywordRepeatFilter(TokenStream input): Construct a token stream filtering the given input.
- LengthFilter(TokenStream in, int min, int max): Create a new LengthFilter.
- LimitTokenCountFilter(TokenStream in, int maxTokenCount): Build a filter that only accepts tokens up to a maximum number.
- LimitTokenCountFilter(TokenStream in, int maxTokenCount, boolean consumeAllTokens): Build a filter that limits the maximum number of tokens per field.
- LimitTokenOffsetFilter(TokenStream input, int maxStartOffset): Lets all tokens pass through until it sees one with a start offset <= maxStartOffset, which won't pass and ends the stream.
- LimitTokenOffsetFilter(TokenStream input, int maxStartOffset, boolean consumeAllTokens)
- LimitTokenPositionFilter(TokenStream in, int maxTokenPosition): Build a filter that only accepts tokens up to and including the given maximum position.
- LimitTokenPositionFilter(TokenStream in, int maxTokenPosition, boolean consumeAllTokens): Build a filter that limits the maximum position of tokens to emit.
- PatternKeywordMarkerFilter(TokenStream in, Pattern pattern): Create a new PatternKeywordMarkerFilter that marks the current token as a keyword if the token's term buffer matches the provided Pattern, via the KeywordAttribute.
- ProtectedTermFilter(CharArraySet protectedTerms, TokenStream input, Function<TokenStream, TokenStream> inputFactory): Creates a new ProtectedTermFilter.
- RemoveDuplicatesTokenFilter: Creates a new RemoveDuplicatesTokenFilter.
- SetKeywordMarkerFilter(TokenStream in, CharArraySet keywordSet): Create a new SetKeywordMarkerFilter that marks the current token as a keyword if the token's term buffer is contained in the given set, via the KeywordAttribute.
- StemmerOverrideFilter(TokenStream input, StemmerOverrideFilter.StemmerOverrideMap stemmerOverrideMap): Create a new StemmerOverrideFilter, performing dictionary-based stemming with the provided dictionary.
- TrimFilter: Create a new TrimFilter.
- TruncateTokenFilter(TokenStream input, int length)
- TypeAsSynonymFilter(TokenStream input)
- TypeAsSynonymFilter(TokenStream input, String prefix)
- TypeAsSynonymFilter(TokenStream input, String prefix, Set<String> ignore, int synFlagsMask)
- WordDelimiterFilter(TokenStream in, byte[] charTypeTable, int configurationFlags, CharArraySet protWords): Deprecated. Creates a new WordDelimiterFilter.
- WordDelimiterFilter(TokenStream in, int configurationFlags, CharArraySet protWords): Deprecated. Creates a new WordDelimiterFilter using WordDelimiterIterator.DEFAULT_WORD_DELIM_TABLE as its charTypeTable.
- WordDelimiterGraphFilter(TokenStream in, boolean adjustInternalOffsets, byte[] charTypeTable, int configurationFlags, CharArraySet protWords): Creates a new WordDelimiterGraphFilter.
- WordDelimiterGraphFilter(TokenStream in, int configurationFlags, CharArraySet protWords): Creates a new WordDelimiterGraphFilter using WordDelimiterIterator.DEFAULT_WORD_DELIM_TABLE as its charTypeTable.
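The character folding that ASCIIFoldingFilter performs can be approximated with the JDK alone. This is a hedged sketch, not the filter's implementation: the real filter uses a large hand-built mapping table rather than Unicode decomposition, and covers far more characters than the few extra replacements shown here.

```java
import java.text.Normalizer;

// Approximate ASCII folding: canonically decompose (NFD), strip the combining
// diacritical marks, then handle a few letters that have no canonical
// decomposition (ligatures, slashed o, sharp s).
public class AsciiFoldSketch {
    public static String fold(String input) {
        String decomposed = Normalizer.normalize(input, Normalizer.Form.NFD)
                .replaceAll("\\p{M}+", ""); // drop combining marks (e.g. accents)
        return decomposed.replace("Æ", "AE").replace("æ", "ae")
                         .replace("Ø", "O").replace("ø", "o")
                         .replace("ß", "ss");
    }

    public static void main(String[] args) {
        System.out.println(fold("café"));       // cafe
        System.out.println(fold("Ærøskøbing")); // AEroskobing
    }
}
```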
Uses of TokenStream in org.apache.lucene.analysis.ne
Methods in org.apache.lucene.analysis.ne that return or take TokenStream:
- protected TokenStream NepaliAnalyzer.normalize(String fieldName, TokenStream in)
Uses of TokenStream in org.apache.lucene.analysis.ngram
Subclasses of TokenStream in org.apache.lucene.analysis.ngramModifier and TypeClassDescriptionfinal classTokenizes the given token into n-grams of given size(s).classTokenizes the input from an edge into n-grams of given size(s).final classTokenizes the input into n-grams of the given size(s).classTokenizes the input into n-grams of the given size(s).Methods in org.apache.lucene.analysis.ngram with parameters of type TokenStreamModifier and TypeMethodDescriptionEdgeNGramFilterFactory.create(TokenStream input) NGramFilterFactory.create(TokenStream input) Constructors in org.apache.lucene.analysis.ngram with parameters of type TokenStreamModifierConstructorDescriptionEdgeNGramTokenFilter(TokenStream input, int gramSize) Creates an EdgeNGramTokenFilter that produces edge n-grams of the given size.EdgeNGramTokenFilter(TokenStream input, int minGram, int maxGram, boolean preserveOriginal) Creates an EdgeNGramTokenFilter that, for a given input term, produces all edge n-grams with lengths >= minGram and <= maxGram.NGramTokenFilter(TokenStream input, int gramSize) Creates an NGramTokenFilter that produces n-grams of the indicated size.NGramTokenFilter(TokenStream input, int minGram, int maxGram, boolean preserveOriginal) Creates an NGramTokenFilter that, for a given input term, produces all contained n-grams with lengths >= minGram and <= maxGram. -
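The n-gram filters above plug into the standard TokenStream workflow (reset, incrementToken loop, end, close). A minimal sketch, assuming lucene-core and lucene-analysis-common are on the classpath; the class and method names (EdgeNGramDemo, edgeNGrams) are illustrative, not part of Lucene:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.ngram.EdgeNGramTokenFilter;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class EdgeNGramDemo {
    // Collects the edge n-grams (lengths minGram..maxGram) produced for one word.
    static List<String> edgeNGrams(String word, int minGram, int maxGram) throws IOException {
        WhitespaceTokenizer tokenizer = new WhitespaceTokenizer();
        tokenizer.setReader(new StringReader(word));
        List<String> out = new ArrayList<>();
        // Canonical TokenStream consumption: reset, incrementToken loop, end, close.
        try (TokenStream ts = new EdgeNGramTokenFilter(tokenizer, minGram, maxGram, false)) {
            CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
            ts.reset();
            while (ts.incrementToken()) {
                out.add(term.toString());
            }
            ts.end();
        }
        return out;
    }
}
```

For "apple" with minGram=1 and maxGram=3 this yields the prefixes "a", "ap", "app"; passing preserveOriginal=true would additionally emit "apple" itself.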
Uses of TokenStream in org.apache.lucene.analysis.nl
Methods in org.apache.lucene.analysis.nl that return TokenStreamModifier and TypeMethodDescriptionprotected TokenStreamDutchAnalyzer.normalize(String fieldName, TokenStream in) Methods in org.apache.lucene.analysis.nl with parameters of type TokenStreamModifier and TypeMethodDescriptionprotected TokenStreamDutchAnalyzer.normalize(String fieldName, TokenStream in) -
Uses of TokenStream in org.apache.lucene.analysis.no
Subclasses of TokenStream in org.apache.lucene.analysis.no Modifier and Type Class Description final class A TokenFilter that applies NorwegianLightStemmer to stem Norwegian words. final class A TokenFilter that applies NorwegianMinimalStemmer to stem Norwegian words. final class This filter normalizes the use of the interchangeable Scandinavian characters æÆäÄöÖøØ and folded variants (ae, oe, aa) by transforming them to åÅæÆøØ. Methods in org.apache.lucene.analysis.no that return TokenStream Modifier and Type Method Description NorwegianLightStemFilterFactory.create(TokenStream input) NorwegianMinimalStemFilterFactory.create(TokenStream input) protected TokenStream NorwegianAnalyzer.normalize(String fieldName, TokenStream in) NorwegianNormalizationFilterFactory.normalize(TokenStream input) Methods in org.apache.lucene.analysis.no with parameters of type TokenStream Modifier and Type Method Description NorwegianLightStemFilterFactory.create(TokenStream input) NorwegianMinimalStemFilterFactory.create(TokenStream input) NorwegianNormalizationFilterFactory.create(TokenStream input) protected TokenStream NorwegianAnalyzer.normalize(String fieldName, TokenStream in) NorwegianNormalizationFilterFactory.normalize(TokenStream input) Constructors in org.apache.lucene.analysis.no with parameters of type TokenStream Modifier Constructor Description NorwegianLightStemFilter(TokenStream input, int flags) Creates a new NorwegianLightStemFilter NorwegianMinimalStemFilter(TokenStream input, int flags) Creates a new NorwegianMinimalStemFilter -
Uses of TokenStream in org.apache.lucene.analysis.path
Subclasses of TokenStream in org.apache.lucene.analysis.pathModifier and TypeClassDescriptionclassTokenizer for path-like hierarchies.classTokenizer for domain-like hierarchies. -
Uses of TokenStream in org.apache.lucene.analysis.pattern
Subclasses of TokenStream in org.apache.lucene.analysis.pattern Modifier and Type Class Description final class CaptureGroup uses Java regexes to emit multiple tokens - one for each capture group in one or more patterns. final class A TokenFilter which applies a Pattern to each token in the stream, replacing match occurrences with the specified replacement string. final class This tokenizer uses regex pattern matching to construct distinct tokens for the input stream. class Sets a type attribute to a parameterized value when tokens are matched by any of several regex patterns. final class final class Methods in org.apache.lucene.analysis.pattern that return TokenStream Methods in org.apache.lucene.analysis.pattern with parameters of type TokenStream Modifier and Type Method Description PatternCaptureGroupFilterFactory.create(TokenStream input) PatternReplaceFilterFactory.create(TokenStream input) PatternTypingFilterFactory.create(TokenStream input) Constructors in org.apache.lucene.analysis.pattern with parameters of type TokenStream Modifier Constructor Description PatternCaptureGroupTokenFilter(TokenStream input, boolean preserveOriginal, Pattern... patterns) PatternReplaceFilter(TokenStream in, Pattern p, String replacement, boolean all) Constructs an instance to replace either the first, or all occurrences PatternTypingFilter(TokenStream input, PatternTypingFilter.PatternTypingRule... replacementAndFlagByPattern) -
Uses of TokenStream in org.apache.lucene.analysis.payloads
Subclasses of TokenStream in org.apache.lucene.analysis.payloads Modifier and Type Class Description final class Characters before the delimiter are the "token", those after are the payload. class Assigns a payload to a token based on the TypeAttribute. class Adds the OffsetAttribute.startOffset() and OffsetAttribute.endOffset() as the payload; the first 4 bytes are the start offset. class Makes the TypeAttribute a payload. Methods in org.apache.lucene.analysis.payloads with parameters of type TokenStream Modifier and Type Method Description DelimitedPayloadTokenFilterFactory.create(TokenStream input) NumericPayloadTokenFilterFactory.create(TokenStream input) TokenOffsetPayloadTokenFilterFactory.create(TokenStream input) TypeAsPayloadTokenFilterFactory.create(TokenStream input) Constructors in org.apache.lucene.analysis.payloads with parameters of type TokenStream Modifier Constructor Description DelimitedPayloadTokenFilter(TokenStream input, char delimiter, PayloadEncoder encoder) NumericPayloadTokenFilter(TokenStream input, float payload, String typeMatch) -
Uses of TokenStream in org.apache.lucene.analysis.phonetic
Subclasses of TokenStream in org.apache.lucene.analysis.phoneticModifier and TypeClassDescriptionfinal classTokenFilter for Beider-Morse phonetic encoding.final classCreate tokens for phonetic matches based on Daitch–Mokotoff Soundex.final classFilter for DoubleMetaphone (supporting secondary codes)final classCreate tokens for phonetic matches.Methods in org.apache.lucene.analysis.phonetic that return TokenStreamMethods in org.apache.lucene.analysis.phonetic with parameters of type TokenStreamModifier and TypeMethodDescriptionBeiderMorseFilterFactory.create(TokenStream input) DaitchMokotoffSoundexFilterFactory.create(TokenStream input) DoubleMetaphoneFilterFactory.create(TokenStream input) PhoneticFilterFactory.create(TokenStream input) Constructors in org.apache.lucene.analysis.phonetic with parameters of type TokenStreamModifierConstructorDescriptionBeiderMorseFilter(TokenStream input, org.apache.commons.codec.language.bm.PhoneticEngine engine) BeiderMorseFilter(TokenStream input, org.apache.commons.codec.language.bm.PhoneticEngine engine, org.apache.commons.codec.language.bm.Languages.LanguageSet languages) Create a new BeiderMorseFilterDaitchMokotoffSoundexFilter(TokenStream in, boolean inject) Creates a DaitchMokotoffSoundexFilter by either adding encoded forms as synonyms (inject=true) or replacing them.DoubleMetaphoneFilter(TokenStream input, int maxCodeLength, boolean inject) Creates a DoubleMetaphoneFilter with the specified maximum code length, and either adding encoded forms as synonyms (inject=true) or replacing them.PhoneticFilter(TokenStream in, org.apache.commons.codec.Encoder encoder, boolean inject) Creates a PhoneticFilter with the specified encoder, and either adding encoded forms as synonyms (inject=true) or replacing them. -
Uses of TokenStream in org.apache.lucene.analysis.pl
Methods in org.apache.lucene.analysis.pl that return TokenStreamModifier and TypeMethodDescriptionprotected TokenStreamPolishAnalyzer.normalize(String fieldName, TokenStream in) Methods in org.apache.lucene.analysis.pl with parameters of type TokenStreamModifier and TypeMethodDescriptionprotected TokenStreamPolishAnalyzer.normalize(String fieldName, TokenStream in) -
Uses of TokenStream in org.apache.lucene.analysis.pt
Subclasses of TokenStream in org.apache.lucene.analysis.ptModifier and TypeClassDescriptionfinal classATokenFilterthat appliesPortugueseLightStemmerto stem Portuguese words.final classATokenFilterthat appliesPortugueseMinimalStemmerto stem Portuguese words.final classATokenFilterthat appliesPortugueseStemmerto stem Portuguese words.Methods in org.apache.lucene.analysis.pt that return TokenStreamModifier and TypeMethodDescriptionPortugueseLightStemFilterFactory.create(TokenStream input) PortugueseMinimalStemFilterFactory.create(TokenStream input) PortugueseStemFilterFactory.create(TokenStream input) protected TokenStreamPortugueseAnalyzer.normalize(String fieldName, TokenStream in) Methods in org.apache.lucene.analysis.pt with parameters of type TokenStreamModifier and TypeMethodDescriptionPortugueseLightStemFilterFactory.create(TokenStream input) PortugueseMinimalStemFilterFactory.create(TokenStream input) PortugueseStemFilterFactory.create(TokenStream input) protected TokenStreamPortugueseAnalyzer.normalize(String fieldName, TokenStream in) Constructors in org.apache.lucene.analysis.pt with parameters of type TokenStream -
Uses of TokenStream in org.apache.lucene.analysis.reverse
Subclasses of TokenStream in org.apache.lucene.analysis.reverse Modifier and Type Class Description final class Reverses the token string, for example "country" => "yrtnuoc". Methods in org.apache.lucene.analysis.reverse with parameters of type TokenStream Constructors in org.apache.lucene.analysis.reverse with parameters of type TokenStream Modifier Constructor Description Create a new ReverseStringFilter that reverses all tokens in the supplied TokenStream. ReverseStringFilter(TokenStream in, char marker) Create a new ReverseStringFilter that reverses and marks all tokens in the supplied TokenStream. -
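A minimal sketch of ReverseStringFilter in a chain, assuming lucene-core and lucene-analysis-common on the classpath; the class and method names (ReverseDemo, reversed) are illustrative only:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.reverse.ReverseStringFilter;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class ReverseDemo {
    // Reverses every token, e.g. "country" becomes "yrtnuoc".
    static List<String> reversed(String text) throws IOException {
        WhitespaceTokenizer tokenizer = new WhitespaceTokenizer();
        tokenizer.setReader(new StringReader(text));
        List<String> out = new ArrayList<>();
        try (TokenStream ts = new ReverseStringFilter(tokenizer)) {
            CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
            ts.reset();
            while (ts.incrementToken()) {
                out.add(term.toString());
            }
            ts.end();
        }
        return out;
    }
}
```

Reversed tokens are typically indexed into a separate field to support efficient leading-wildcard queries (a `*ing` query becomes a `gni*` prefix query against the reversed field).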
Uses of TokenStream in org.apache.lucene.analysis.ro
Methods in org.apache.lucene.analysis.ro that return TokenStreamModifier and TypeMethodDescriptionprotected TokenStreamRomanianAnalyzer.normalize(String fieldName, TokenStream in) Methods in org.apache.lucene.analysis.ro with parameters of type TokenStreamModifier and TypeMethodDescriptionprotected TokenStreamRomanianAnalyzer.normalize(String fieldName, TokenStream in) -
Uses of TokenStream in org.apache.lucene.analysis.ru
Subclasses of TokenStream in org.apache.lucene.analysis.ruModifier and TypeClassDescriptionfinal classATokenFilterthat appliesRussianLightStemmerto stem Russian words.Methods in org.apache.lucene.analysis.ru that return TokenStreamModifier and TypeMethodDescriptionRussianLightStemFilterFactory.create(TokenStream input) protected TokenStreamRussianAnalyzer.normalize(String fieldName, TokenStream in) Methods in org.apache.lucene.analysis.ru with parameters of type TokenStreamModifier and TypeMethodDescriptionRussianLightStemFilterFactory.create(TokenStream input) protected TokenStreamRussianAnalyzer.normalize(String fieldName, TokenStream in) Constructors in org.apache.lucene.analysis.ru with parameters of type TokenStream -
Uses of TokenStream in org.apache.lucene.analysis.shingle
Subclasses of TokenStream in org.apache.lucene.analysis.shingle Modifier and Type Class Description final class A FixedShingleFilter constructs shingles (token n-grams) from a token stream. final class A ShingleFilter constructs shingles (token n-grams) from a token stream. Methods in org.apache.lucene.analysis.shingle that return TokenStream Methods in org.apache.lucene.analysis.shingle with parameters of type TokenStream Modifier and Type Method Description FixedShingleFilterFactory.create(TokenStream input) ShingleFilterFactory.create(TokenStream input) Constructors in org.apache.lucene.analysis.shingle with parameters of type TokenStream Modifier Constructor Description FixedShingleFilter(TokenStream input, int shingleSize) Creates a FixedShingleFilter over an input token stream FixedShingleFilter(TokenStream input, int shingleSize, String tokenSeparator, String fillerToken) Creates a FixedShingleFilter over an input token stream ShingleFilter(TokenStream input) Constructs a ShingleFilter with the default shingle size: 2. ShingleFilter(TokenStream input, int maxShingleSize) Constructs a ShingleFilter with the specified shingle size from the TokenStream input ShingleFilter(TokenStream input, int minShingleSize, int maxShingleSize) Constructs a ShingleFilter with the specified shingle sizes from the TokenStream input ShingleFilter(TokenStream input, String tokenType) Constructs a ShingleFilter with the specified token type for shingle tokens and the default shingle size: 2 -
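A sketch of ShingleFilter producing word bigrams alongside the original unigrams (its default behavior), assuming lucene-core and lucene-analysis-common on the classpath; the class and method names (ShingleDemo, shingles) are illustrative, not Lucene API:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.shingle.ShingleFilter;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class ShingleDemo {
    // Emits unigrams plus word bigrams (maxShingleSize = 2).
    static List<String> shingles(String text) throws IOException {
        WhitespaceTokenizer tokenizer = new WhitespaceTokenizer();
        tokenizer.setReader(new StringReader(text));
        List<String> out = new ArrayList<>();
        try (TokenStream ts = new ShingleFilter(tokenizer, 2)) {
            CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
            ts.reset();
            while (ts.incrementToken()) {
                out.add(term.toString());
            }
            ts.end();
        }
        return out;
    }
}
```

For "please divide this" the stream contains both the single words and the space-joined bigrams "please divide" and "divide this"; call setOutputUnigrams(false) before consuming to emit shingles only.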
Uses of TokenStream in org.apache.lucene.analysis.sinks
Subclasses of TokenStream in org.apache.lucene.analysis.sinks Modifier and Type Class Description final class This TokenFilter provides the ability to set aside attribute states that have already been analyzed. static final class TokenStream output from a tee. Methods in org.apache.lucene.analysis.sinks that return TokenStream Modifier and Type Method Description TeeSinkTokenFilter.newSinkTokenStream() Returns a new TeeSinkTokenFilter.SinkTokenStream that receives all tokens consumed by this stream. Constructors in org.apache.lucene.analysis.sinks with parameters of type TokenStream -
Uses of TokenStream in org.apache.lucene.analysis.snowball
Subclasses of TokenStream in org.apache.lucene.analysis.snowballModifier and TypeClassDescriptionfinal classA filter that stems words using a Snowball-generated stemmer.Methods in org.apache.lucene.analysis.snowball with parameters of type TokenStreamConstructors in org.apache.lucene.analysis.snowball with parameters of type TokenStreamModifierConstructorDescriptionSnowballFilter(TokenStream in, String name) Construct the named stemming filter.SnowballFilter(TokenStream input, SnowballStemmer stemmer) -
Uses of TokenStream in org.apache.lucene.analysis.sr
Subclasses of TokenStream in org.apache.lucene.analysis.srModifier and TypeClassDescriptionfinal classNormalizes Serbian Cyrillic and Latin characters to "bald" Latin.final classNormalizes Serbian Cyrillic to Latin.Methods in org.apache.lucene.analysis.sr that return TokenStreamModifier and TypeMethodDescriptionSerbianNormalizationFilterFactory.create(TokenStream input) protected TokenStreamSerbianAnalyzer.normalize(String fieldName, TokenStream in) SerbianNormalizationFilterFactory.normalize(TokenStream input) Methods in org.apache.lucene.analysis.sr with parameters of type TokenStreamModifier and TypeMethodDescriptionSerbianNormalizationFilterFactory.create(TokenStream input) protected TokenStreamSerbianAnalyzer.normalize(String fieldName, TokenStream in) SerbianNormalizationFilterFactory.normalize(TokenStream input) Constructors in org.apache.lucene.analysis.sr with parameters of type TokenStreamModifierConstructorDescription -
Uses of TokenStream in org.apache.lucene.analysis.standard
Subclasses of TokenStream in org.apache.lucene.analysis.standardModifier and TypeClassDescriptionfinal classA grammar-based tokenizer constructed with JFlex.Methods in org.apache.lucene.analysis.standard that return TokenStreamModifier and TypeMethodDescriptionprotected TokenStreamStandardAnalyzer.normalize(String fieldName, TokenStream in) Methods in org.apache.lucene.analysis.standard with parameters of type TokenStreamModifier and TypeMethodDescriptionprotected TokenStreamStandardAnalyzer.normalize(String fieldName, TokenStream in) -
Uses of TokenStream in org.apache.lucene.analysis.stempel
Subclasses of TokenStream in org.apache.lucene.analysis.stempelModifier and TypeClassDescriptionfinal classTransforms the token stream as per the stemming algorithm.Methods in org.apache.lucene.analysis.stempel that return TokenStreamMethods in org.apache.lucene.analysis.stempel with parameters of type TokenStreamConstructors in org.apache.lucene.analysis.stempel with parameters of type TokenStreamModifierConstructorDescriptionStempelFilter(TokenStream in, StempelStemmer stemmer) Create filter using the supplied stemming table.StempelFilter(TokenStream in, StempelStemmer stemmer, int minLength) Create filter using the supplied stemming table. -
Uses of TokenStream in org.apache.lucene.analysis.sv
Subclasses of TokenStream in org.apache.lucene.analysis.svModifier and TypeClassDescriptionfinal classATokenFilterthat appliesSwedishLightStemmerto stem Swedish words.final classATokenFilterthat appliesSwedishMinimalStemmerto stem Swedish words.Methods in org.apache.lucene.analysis.sv that return TokenStreamModifier and TypeMethodDescriptionSwedishLightStemFilterFactory.create(TokenStream input) SwedishMinimalStemFilterFactory.create(TokenStream input) protected TokenStreamSwedishAnalyzer.normalize(String fieldName, TokenStream in) Methods in org.apache.lucene.analysis.sv with parameters of type TokenStreamModifier and TypeMethodDescriptionSwedishLightStemFilterFactory.create(TokenStream input) SwedishMinimalStemFilterFactory.create(TokenStream input) protected TokenStreamSwedishAnalyzer.normalize(String fieldName, TokenStream in) Constructors in org.apache.lucene.analysis.sv with parameters of type TokenStreamModifierConstructorDescription -
Uses of TokenStream in org.apache.lucene.analysis.synonym
Subclasses of TokenStream in org.apache.lucene.analysis.synonym Modifier and Type Class Description final class Deprecated. Use SynonymGraphFilter instead, but be sure to also use FlattenGraphFilter at index time (not at search time). final class Applies single- or multi-token synonyms from a SynonymMap to an incoming TokenStream, producing a fully correct graph output. Methods in org.apache.lucene.analysis.synonym that return TokenStream Modifier and Type Method Description SynonymFilterFactory.create(TokenStream input) Deprecated. SynonymGraphFilterFactory.create(TokenStream input) Methods in org.apache.lucene.analysis.synonym with parameters of type TokenStream Modifier and Type Method Description SynonymFilterFactory.create(TokenStream input) Deprecated. SynonymGraphFilterFactory.create(TokenStream input) Constructors in org.apache.lucene.analysis.synonym with parameters of type TokenStream Modifier Constructor Description SynonymFilter(TokenStream input, SynonymMap synonyms, boolean ignoreCase) Deprecated. SynonymGraphFilter(TokenStream input, SynonymMap synonyms, boolean ignoreCase) Applies previously built synonyms to incoming tokens. -
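A sketch of building a SynonymMap and applying SynonymGraphFilter, assuming lucene-core and lucene-analysis-common on the classpath; the class and method names (SynonymDemo, expand) and the sample synonym pair are illustrative only:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.synonym.SynonymGraphFilter;
import org.apache.lucene.analysis.synonym.SynonymMap;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.CharsRef;

public class SynonymDemo {
    // Expands "huge" so "big" is emitted at the same position (includeOrig = true).
    static List<String> expand(String text) throws IOException {
        SynonymMap.Builder builder = new SynonymMap.Builder(true); // dedup = true
        builder.add(new CharsRef("huge"), new CharsRef("big"), true);
        SynonymMap map = builder.build();

        WhitespaceTokenizer tokenizer = new WhitespaceTokenizer();
        tokenizer.setReader(new StringReader(text));
        List<String> out = new ArrayList<>();
        try (TokenStream ts = new SynonymGraphFilter(tokenizer, map, true)) {
            CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
            ts.reset();
            while (ts.incrementToken()) {
                out.add(term.toString());
            }
            ts.end();
        }
        return out;
    }
}
```

As the deprecation note above says: this graph output is correct for query-time analysis as-is, but for indexing the graph must first be flattened with FlattenGraphFilter.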
Uses of TokenStream in org.apache.lucene.analysis.synonym.word2vec
Subclasses of TokenStream in org.apache.lucene.analysis.synonym.word2vecModifier and TypeClassDescriptionfinal classApplies single-token synonyms from a Word2Vec trained network to an incomingTokenStream.Methods in org.apache.lucene.analysis.synonym.word2vec that return TokenStreamMethods in org.apache.lucene.analysis.synonym.word2vec with parameters of type TokenStreamConstructors in org.apache.lucene.analysis.synonym.word2vec with parameters of type TokenStreamModifierConstructorDescriptionWord2VecSynonymFilter(TokenStream input, Word2VecSynonymProvider synonymProvider, int maxSynonymsPerTerm, float minAcceptedSimilarity) Apply previously built synonymProvider to incoming tokens. -
Uses of TokenStream in org.apache.lucene.analysis.ta
Methods in org.apache.lucene.analysis.ta that return TokenStreamModifier and TypeMethodDescriptionprotected TokenStreamTamilAnalyzer.normalize(String fieldName, TokenStream in) Methods in org.apache.lucene.analysis.ta with parameters of type TokenStreamModifier and TypeMethodDescriptionprotected TokenStreamTamilAnalyzer.normalize(String fieldName, TokenStream in) -
Uses of TokenStream in org.apache.lucene.analysis.te
Subclasses of TokenStream in org.apache.lucene.analysis.teModifier and TypeClassDescriptionfinal classATokenFilterthat appliesTeluguNormalizerto normalize the orthography.final classATokenFilterthat appliesTeluguStemmerto stem Telugu words.Methods in org.apache.lucene.analysis.te that return TokenStreamModifier and TypeMethodDescriptionTeluguNormalizationFilterFactory.create(TokenStream input) TeluguStemFilterFactory.create(TokenStream input) protected TokenStreamTeluguAnalyzer.normalize(String fieldName, TokenStream in) TeluguNormalizationFilterFactory.normalize(TokenStream input) Methods in org.apache.lucene.analysis.te with parameters of type TokenStreamModifier and TypeMethodDescriptionTeluguNormalizationFilterFactory.create(TokenStream input) TeluguStemFilterFactory.create(TokenStream input) protected TokenStreamTeluguAnalyzer.normalize(String fieldName, TokenStream in) TeluguNormalizationFilterFactory.normalize(TokenStream input) Constructors in org.apache.lucene.analysis.te with parameters of type TokenStream -
Uses of TokenStream in org.apache.lucene.analysis.th
Subclasses of TokenStream in org.apache.lucene.analysis.thMethods in org.apache.lucene.analysis.th that return TokenStreamModifier and TypeMethodDescriptionprotected TokenStreamThaiAnalyzer.normalize(String fieldName, TokenStream in) Methods in org.apache.lucene.analysis.th with parameters of type TokenStreamModifier and TypeMethodDescriptionprotected TokenStreamThaiAnalyzer.normalize(String fieldName, TokenStream in) -
Uses of TokenStream in org.apache.lucene.analysis.tr
Subclasses of TokenStream in org.apache.lucene.analysis.trModifier and TypeClassDescriptionfinal classStrips all characters after an apostrophe (including the apostrophe itself).final classNormalizes Turkish token text to lower case.Methods in org.apache.lucene.analysis.tr that return TokenStreamModifier and TypeMethodDescriptionApostropheFilterFactory.create(TokenStream input) TurkishLowerCaseFilterFactory.create(TokenStream input) protected TokenStreamTurkishAnalyzer.normalize(String fieldName, TokenStream in) TurkishLowerCaseFilterFactory.normalize(TokenStream input) Methods in org.apache.lucene.analysis.tr with parameters of type TokenStreamModifier and TypeMethodDescriptionApostropheFilterFactory.create(TokenStream input) TurkishLowerCaseFilterFactory.create(TokenStream input) protected TokenStreamTurkishAnalyzer.normalize(String fieldName, TokenStream in) TurkishLowerCaseFilterFactory.normalize(TokenStream input) Constructors in org.apache.lucene.analysis.tr with parameters of type TokenStreamModifierConstructorDescriptionCreate a new TurkishLowerCaseFilter, that normalizes Turkish token text to lower case. -
Uses of TokenStream in org.apache.lucene.analysis.util
Subclasses of TokenStream in org.apache.lucene.analysis.util Modifier and Type Class Description class An abstract base class for simple, character-oriented tokenizers. final class Removes elisions from a TokenStream. class Breaks text into sentences with a BreakIterator and allows subclasses to decompose these sentences into words. Methods in org.apache.lucene.analysis.util that return TokenStream Modifier and Type Method Description ElisionFilterFactory.create(TokenStream input) ElisionFilterFactory.normalize(TokenStream input) Methods in org.apache.lucene.analysis.util with parameters of type TokenStream Modifier and Type Method Description ElisionFilterFactory.create(TokenStream input) ElisionFilterFactory.normalize(TokenStream input) Constructors in org.apache.lucene.analysis.util with parameters of type TokenStream Modifier Constructor Description ElisionFilter(TokenStream input, CharArraySet articles) Constructs an elision filter with the given set of elision articles -
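A sketch of ElisionFilter stripping elided French articles, assuming lucene-core and lucene-analysis-common on the classpath; the class and method names (ElisionDemo, strip) and the two-article set are illustrative, not a complete article list:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.apache.lucene.analysis.CharArraySet;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.util.ElisionFilter;

public class ElisionDemo {
    // Strips elided articles such as "l'" so that "l'avion" indexes as "avion".
    static List<String> strip(String text) throws IOException {
        CharArraySet articles = new CharArraySet(Arrays.asList("l", "d"), true); // ignoreCase
        WhitespaceTokenizer tokenizer = new WhitespaceTokenizer();
        tokenizer.setReader(new StringReader(text));
        List<String> out = new ArrayList<>();
        try (TokenStream ts = new ElisionFilter(tokenizer, articles)) {
            CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
            ts.reset();
            while (ts.incrementToken()) {
                out.add(term.toString());
            }
            ts.end();
        }
        return out;
    }
}
```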
Uses of TokenStream in org.apache.lucene.analysis.wikipedia
Subclasses of TokenStream in org.apache.lucene.analysis.wikipediaModifier and TypeClassDescriptionfinal classExtension of StandardTokenizer that is aware of Wikipedia syntax. -
Uses of TokenStream in org.apache.lucene.classification.document
Methods in org.apache.lucene.classification.document with parameters of type TokenStreamModifier and TypeMethodDescriptionprotected String[]SimpleNaiveBayesDocumentClassifier.getTokenArray(TokenStream tokenizedText) Returns a token array from theTokenStreamin input -
Uses of TokenStream in org.apache.lucene.document
Subclasses of TokenStream in org.apache.lucene.document Modifier and Type Class Description private static final class private static final class private static final class Fields in org.apache.lucene.document declared as TokenStream Modifier and Type Field Description protected TokenStream Field.tokenStream Pre-analyzed tokenStream for indexed fields; this is separate from fieldsData because you are allowed to have both; e.g., the field may have a String value while you customize how it is tokenized. Methods in org.apache.lucene.document that return TokenStream Modifier and Type Method Description FeatureField.tokenStream(Analyzer analyzer, TokenStream reuse) Field.tokenStream(Analyzer analyzer, TokenStream reuse) ShapeDocValuesField.tokenStream(Analyzer analyzer, TokenStream reuse) TokenStreams are not yet supported Field.tokenStreamValue() The TokenStream for this field to be used when indexing, or null. Methods in org.apache.lucene.document with parameters of type TokenStream Modifier and Type Method Description void Field.setTokenStream(TokenStream tokenStream) Expert: sets the token stream to be used for indexing and causes isIndexed() and isTokenized() to return true. FeatureField.tokenStream(Analyzer analyzer, TokenStream reuse) Field.tokenStream(Analyzer analyzer, TokenStream reuse) ShapeDocValuesField.tokenStream(Analyzer analyzer, TokenStream reuse) TokenStreams are not yet supported Constructors in org.apache.lucene.document with parameters of type TokenStream Modifier Constructor Description Field(String name, TokenStream tokenStream, IndexableFieldType type) Create field with TokenStream value. TextField(String name, TokenStream stream) Creates a new un-stored TextField with TokenStream value. -
Uses of TokenStream in org.apache.lucene.index
Fields in org.apache.lucene.index declared as TokenStreamModifier and TypeFieldDescription(package private) TokenStreamIndexingChain.PerField.tokenStreamMethods in org.apache.lucene.index that return TokenStreamModifier and TypeMethodDescriptionIndexableField.tokenStream(Analyzer analyzer, TokenStream reuse) Creates the TokenStream used for indexing this field.IndexingChain.ReservedField.tokenStream(Analyzer analyzer, TokenStream reuse) Methods in org.apache.lucene.index with parameters of type TokenStreamModifier and TypeMethodDescriptionIndexableField.tokenStream(Analyzer analyzer, TokenStream reuse) Creates the TokenStream used for indexing this field.IndexingChain.ReservedField.tokenStream(Analyzer analyzer, TokenStream reuse) -
Uses of TokenStream in org.apache.lucene.index.memory
Methods in org.apache.lucene.index.memory that return TokenStream Modifier and Type Method Description <T> TokenStream MemoryIndex.keywordTokenStream(Collection<T> keywords) Convenience method; creates and returns a token stream that generates a token for each keyword in the given collection, "as is", without any transforming text analysis. Methods in org.apache.lucene.index.memory with parameters of type TokenStream Modifier and Type Method Description void MemoryIndex.addField(String fieldName, TokenStream stream) Iterates over the given token stream and adds the resulting terms to the index; equivalent to adding a tokenized, indexed, termVectorStored, unstored Lucene Field. void MemoryIndex.addField(String fieldName, TokenStream stream, int positionIncrementGap) Iterates over the given token stream and adds the resulting terms to the index; equivalent to adding a tokenized, indexed, termVectorStored, unstored Lucene Field. void MemoryIndex.addField(String fieldName, TokenStream tokenStream, int positionIncrementGap, int offsetGap) Iterates over the given token stream and adds the resulting terms to the index; equivalent to adding a tokenized, indexed, termVectorStored, unstored Lucene Field. private void MemoryIndex.storeTerms(MemoryIndex.Info info, TokenStream tokenStream, int positionIncrementGap, int offsetGap) -
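A sketch of MemoryIndex scoring one in-memory document against a query, assuming lucene-core and lucene-memory on the classpath; the class and method names (MemoryIndexDemo, score) and the field name "content" are illustrative:

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.memory.MemoryIndex;
import org.apache.lucene.search.TermQuery;

public class MemoryIndexDemo {
    // Scores a single transient document against a query, no on-disk index needed.
    static float score(String fieldText, String queryTerm) {
        MemoryIndex index = new MemoryIndex();
        // addField analyzes the text and adds the resulting terms to the in-memory index.
        index.addField("content", fieldText, new StandardAnalyzer());
        // search returns a relevance score > 0 on a match, 0.0f otherwise.
        return index.search(new TermQuery(new Term("content", queryTerm)));
    }
}
```

The addField(String, TokenStream) overloads listed above serve the same purpose when you already hold a pre-built TokenStream instead of raw text.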
Uses of TokenStream in org.apache.lucene.misc.document
Methods in org.apache.lucene.misc.document that return TokenStreamModifier and TypeMethodDescriptionLazyDocument.LazyField.tokenStream(Analyzer analyzer, TokenStream reuse) Methods in org.apache.lucene.misc.document with parameters of type TokenStreamModifier and TypeMethodDescriptionLazyDocument.LazyField.tokenStream(Analyzer analyzer, TokenStream reuse) -
Uses of TokenStream in org.apache.lucene.monitor
Subclasses of TokenStream in org.apache.lucene.monitor Modifier and Type Class Description (package private) final class (package private) class A TokenStream created from a TermsEnum Methods in org.apache.lucene.monitor that return TokenStream Modifier and Type Method Description default TokenStream CustomQueryHandler.wrapTermStream(String field, TokenStream in) Adds additional processing to the TokenStream over a document's terms index RegexpQueryHandler.wrapTermStream(String field, TokenStream ts) Methods in org.apache.lucene.monitor with parameters of type TokenStream Modifier and Type Method Description default TokenStream CustomQueryHandler.wrapTermStream(String field, TokenStream in) Adds additional processing to the TokenStream over a document's terms index RegexpQueryHandler.wrapTermStream(String field, TokenStream ts) Constructors in org.apache.lucene.monitor with parameters of type TokenStream Modifier Constructor Description SuffixingNGramTokenFilter(TokenStream input, String suffix, String wildcardToken, int maxTokenLength) Creates a SuffixingNGramTokenFilter. -
Uses of TokenStream in org.apache.lucene.queries.intervals
Methods in org.apache.lucene.queries.intervals with parameters of type TokenStreamModifier and TypeMethodDescriptionstatic IntervalsSourceIntervals.analyzedText(TokenStream tokenStream, int maxGaps, boolean ordered) Returns intervals that correspond to tokens from the providedTokenStream.private static List<IntervalsSource> IntervalBuilder.analyzeGraph(TokenStream source) private static IntervalsSourceIntervalBuilder.analyzeSynonyms(TokenStream ts, int maxGaps, boolean ordered) private static IntervalsSourceIntervalBuilder.analyzeTerm(TokenStream ts) private static List<IntervalsSource> IntervalBuilder.analyzeTerms(TokenStream ts) -
Uses of TokenStream in org.apache.lucene.sandbox.search
Methods in org.apache.lucene.sandbox.search with parameters of type TokenStreamModifier and TypeMethodDescriptionTokenStreamToTermAutomatonQuery.toQuery(String field, TokenStream in) Pulls the graph (includingPositionLengthAttribute) from the providedTokenStream, and creates the corresponding automaton where arcs are bytes (or Unicode code points if unicodeArcs = true) from each term. -
Uses of TokenStream in org.apache.lucene.search.highlight
Subclasses of TokenStream in org.apache.lucene.search.highlight

(package private) final class LimitTokenOffsetFilter
    A simplified version of org.apache.lucene.analysis.miscellaneous.LimitTokenOffsetFilter that avoids a dependency on analysis-common.jar.
final class OffsetLimitTokenFilter
    This TokenFilter limits the number of tokens while indexing by adding up the current offset.
final class TokenStreamFromTermVector
    TokenStream created from a term vector field.

Fields in org.apache.lucene.search.highlight declared as TokenStream

Methods in org.apache.lucene.search.highlight that return TokenStream

static TokenStream TokenSources.getAnyTokenStream(IndexReader reader, int docId, String field, Analyzer analyzer)
    Deprecated.
static TokenStream TokenSources.getAnyTokenStream(IndexReader reader, int docId, String field, Document document, Analyzer analyzer)
    Deprecated.
static TokenStream TokenSources.getTermVectorTokenStreamOrNull(String field, Fields tvFields, int maxStartOffset)
    Get a token stream by un-inverting the term vector.
static TokenStream TokenSources.getTokenStream(String field, String contents, Analyzer analyzer)
    Deprecated.
static TokenStream TokenSources.getTokenStream(String field, Fields tvFields, String text, Analyzer analyzer, int maxStartOffset)
    Get a token stream from either un-inverting a term vector if possible, or by analyzing the text.
static TokenStream TokenSources.getTokenStream(Document doc, String field, Analyzer analyzer)
    Deprecated.
static TokenStream TokenSources.getTokenStream(IndexReader reader, int docId, String field, Analyzer analyzer)
    Deprecated.
static TokenStream TokenSources.getTokenStream(Terms tpv)
    Deprecated.
static TokenStream TokenSources.getTokenStream(Terms vector, boolean tokenPositionsGuaranteedContiguous)
    Deprecated.
TokenStream WeightedSpanTermExtractor.getTokenStream()
    Returns the tokenStream, which may have been wrapped in a CachingTokenFilter.
static TokenStream TokenSources.getTokenStreamWithOffsets(IndexReader reader, int docId, String field)
    Deprecated.
TokenStream QueryScorer.init(TokenStream tokenStream)
TokenStream QueryTermScorer.init(TokenStream tokenStream)
TokenStream Scorer.init(TokenStream tokenStream)
    Called to init the Scorer with a TokenStream.
private TokenStream QueryScorer.initExtractor(TokenStream tokenStream)

Methods in org.apache.lucene.search.highlight with parameters of type TokenStream

final String Highlighter.getBestFragment(TokenStream tokenStream, String text)
    Highlights chosen terms in a text, extracting the most relevant section.
final String[] Highlighter.getBestFragments(TokenStream tokenStream, String text, int maxNumFragments)
    Highlights chosen terms in a text, extracting the most relevant sections.
final String Highlighter.getBestFragments(TokenStream tokenStream, String text, int maxNumFragments, String separator)
    Highlights terms in the text, extracting the most relevant sections and concatenating the chosen fragments with a separator (typically "...").
final TextFragment[] Highlighter.getBestTextFragments(TokenStream tokenStream, String text, boolean mergeContiguousFragments, int maxNumFragments)
    Low-level API to get the most relevant (formatted) sections of the document.
WeightedSpanTermExtractor.getWeightedSpanTerms(Query query, float boost, TokenStream tokenStream)
    Creates a Map of WeightedSpanTerms from the given Query and TokenStream.
WeightedSpanTermExtractor.getWeightedSpanTerms(Query query, float boost, TokenStream tokenStream, String fieldName)
    Creates a Map of WeightedSpanTerms from the given Query and TokenStream.
WeightedSpanTermExtractor.getWeightedSpanTermsWithScores(Query query, float boost, TokenStream tokenStream, String fieldName, IndexReader reader)
    Creates a Map of WeightedSpanTerms from the given Query and TokenStream.
QueryScorer.init(TokenStream tokenStream)
QueryTermScorer.init(TokenStream tokenStream)
Scorer.init(TokenStream tokenStream)
    Called to init the Scorer with a TokenStream.
private TokenStream QueryScorer.initExtractor(TokenStream tokenStream)
void Fragmenter.start(String originalText, TokenStream tokenStream)
    Initializes the Fragmenter.
void NullFragmenter.start(String s, TokenStream tokenStream)
void SimpleFragmenter.start(String originalText, TokenStream stream)
void SimpleSpanFragmenter.start(String originalText, TokenStream tokenStream)

Constructors in org.apache.lucene.search.highlight with parameters of type TokenStream

(package private) LimitTokenOffsetFilter(TokenStream input, int maxStartOffset)
OffsetLimitTokenFilter(TokenStream input, int offsetLimit)
TokenGroup(TokenStream tokenStream)
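As a rough illustration of how the highlighter entry points above fit together, the sketch below scores a TokenStream against a query and extracts the best fragment via Highlighter.getBestFragment(TokenStream, String). It assumes the lucene-core, lucene-highlighter, and analysis-common jars are on the classpath; the field name "body" and the sample text are invented for the example.

```java
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.highlight.Highlighter;
import org.apache.lucene.search.highlight.QueryScorer;
import org.apache.lucene.search.highlight.SimpleSpanFragmenter;

public class HighlightSketch {
  public static void main(String[] args) throws Exception {
    String text = "Lucene is a search library; token streams feed the highlighter.";
    TermQuery query = new TermQuery(new Term("body", "lucene"));

    QueryScorer scorer = new QueryScorer(query, "body");
    // Default formatter wraps matched terms in <B>...</B>.
    Highlighter highlighter = new Highlighter(scorer);
    highlighter.setTextFragmenter(new SimpleSpanFragmenter(scorer, 40));

    try (Analyzer analyzer = new StandardAnalyzer()) {
      // Re-analyze the stored text; TokenSources can instead un-invert a term vector.
      TokenStream ts = analyzer.tokenStream("body", text);
      String fragment = highlighter.getBestFragment(ts, text);
      System.out.println(fragment);
    }
  }
}
```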
Uses of TokenStream in org.apache.lucene.search.suggest.analyzing
Subclasses of TokenStream in org.apache.lucene.search.suggest.analyzing

final class SuggestStopFilter
    Like StopFilter except it will not remove the last token if that token was not followed by some token separator.

Methods in org.apache.lucene.search.suggest.analyzing that return TokenStream

Methods in org.apache.lucene.search.suggest.analyzing with parameters of type TokenStream

Constructors in org.apache.lucene.search.suggest.analyzing with parameters of type TokenStream

SuggestStopFilter(TokenStream input, CharArraySet stopWords)
    Sole constructor.
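A small sketch of the SuggestStopFilter behavior described above: ordinary stopwords are dropped, but a trailing stopword with no following separator is kept, since the user may still be typing a longer word. Package locations (e.g. of CharArraySet) have moved between Lucene major versions, so treat the imports as assumptions for a recent release.

```java
import java.io.StringReader;
import java.util.Arrays;
import org.apache.lucene.analysis.CharArraySet;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.search.suggest.analyzing.SuggestStopFilter;

public class SuggestStopSketch {
  public static void main(String[] args) throws Exception {
    CharArraySet stopWords = new CharArraySet(Arrays.asList("a", "the"), true);
    WhitespaceTokenizer tokenizer = new WhitespaceTokenizer();
    // "a" is followed by a separator and is removed; the trailing "the" is
    // kept because it may be the prefix of a longer word ("theater", ...).
    tokenizer.setReader(new StringReader("rent a the"));
    try (TokenStream ts = new SuggestStopFilter(tokenizer, stopWords)) {
      CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
      ts.reset();
      while (ts.incrementToken()) {
        System.out.println(term);
      }
      ts.end();
    }
  }
}
```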
Uses of TokenStream in org.apache.lucene.search.suggest.document
Subclasses of TokenStream in org.apache.lucene.search.suggest.document

final class CompletionTokenStream
    A ConcatenateGraphFilter, but we can set the payload and provide access to config options.
private static final class ContextSuggestField.PrefixTokenFilter
    The ContextSuggestField.PrefixTokenFilter wraps a TokenStream and adds a set of prefixes ahead.

Fields in org.apache.lucene.search.suggest.document declared as TokenStream

(package private) final TokenStream CompletionTokenStream.inputTokenStream

Methods in org.apache.lucene.search.suggest.document that return TokenStream

Methods in org.apache.lucene.search.suggest.document with parameters of type TokenStream

TokenStream SuggestField.tokenStream(Analyzer analyzer, TokenStream reuse)
protected CompletionTokenStream ContextSuggestField.wrapTokenStream(TokenStream stream)
protected CompletionTokenStream SuggestField.wrapTokenStream(TokenStream stream)
    Wraps a stream with a CompletionTokenStream.

Constructors in org.apache.lucene.search.suggest.document with parameters of type TokenStream

(package private) CompletionTokenStream(TokenStream inputTokenStream)
(package private) CompletionTokenStream(TokenStream inputTokenStream, boolean preserveSep, boolean preservePositionIncrements, int maxGraphExpansions)
PrefixTokenFilter(TokenStream input, char separator, Iterable<CharSequence> prefixes)
    Create a new ContextSuggestField.PrefixTokenFilter.
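Most of the types above are package-private plumbing; the public entry point is SuggestField, whose tokenStream(...) wraps the analyzer's stream in a CompletionTokenStream at index time. A minimal indexing sketch; the field name "suggest_title", the value, and the weight are invented example values.

```java
import org.apache.lucene.document.Document;
import org.apache.lucene.search.suggest.document.SuggestField;

public class SuggestFieldSketch {
  public static void main(String[] args) {
    Document doc = new Document();
    // SuggestField(name, value, weight): weight orders suggestions at lookup time.
    doc.add(new SuggestField("suggest_title", "star wars", 4));
    System.out.println(doc);
  }
}
```

To search such a field, the index would normally be written with a completion-aware postings format and queried through the suggest.document lookup classes.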
Uses of TokenStream in org.apache.lucene.search.uhighlight
Subclasses of TokenStream in org.apache.lucene.search.uhighlight

private static final class MultiValueTokenStream
    Wraps an Analyzer and string text that represents multiple values delimited by a specified character.

Fields in org.apache.lucene.search.uhighlight declared as TokenStream

(package private) TokenStream TokenStreamOffsetStrategy.TokenStreamOffsetsEnum.stream

Methods in org.apache.lucene.search.uhighlight that return TokenStream

protected TokenStream AnalysisOffsetStrategy.tokenStream(String content)

Methods in org.apache.lucene.search.uhighlight with parameters of type TokenStream

private static FilteringTokenFilter MemoryIndexOffsetStrategy.newKeepWordFilter(TokenStream tokenStream, CharArrayMatcher matcher)

Constructors in org.apache.lucene.search.uhighlight with parameters of type TokenStream

private MultiValueTokenStream(TokenStream subTokenStream, String fieldName, Analyzer indexAnalyzer, String content, char splitChar, int splitCharIdx)
(package private) TokenStreamOffsetsEnum(TokenStream ts, CharArrayMatcher[] matchers)
Uses of TokenStream in org.apache.lucene.util
Methods in org.apache.lucene.util with parameters of type TokenStream

protected Query QueryBuilder.analyzeBoolean(String field, TokenStream stream)
    Creates a simple boolean query from the cached tokenstream contents.
protected Query QueryBuilder.analyzeGraphBoolean(String field, TokenStream source, BooleanClause.Occur operator)
    Creates a boolean query from a graph token stream.
protected Query QueryBuilder.analyzeGraphPhrase(TokenStream source, String field, int phraseSlop)
    Creates a graph phrase query from the tokenstream contents.
protected Query QueryBuilder.analyzeMultiBoolean(String field, TokenStream stream, BooleanClause.Occur operator)
    Creates a complex boolean query from the cached tokenstream contents.
protected Query QueryBuilder.analyzeMultiPhrase(String field, TokenStream stream, int slop)
    Creates a complex phrase query from the cached tokenstream contents.
protected Query QueryBuilder.analyzePhrase(String field, TokenStream stream, int slop)
    Creates a simple phrase query from the cached tokenstream contents.
protected Query QueryBuilder.analyzeTerm(String field, TokenStream stream)
    Creates a simple term query from the cached tokenstream contents.
protected Query QueryBuilder.createFieldQuery(TokenStream source, BooleanClause.Occur operator, String field, boolean quoted, int phraseSlop)
    Creates a query from a token stream.
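The analyze* methods above are protected hooks; they are normally reached through QueryBuilder's public create* methods, which tokenize the text and dispatch to the matching helper depending on how many tokens and positions come back. A sketch assuming lucene-core plus a StandardAnalyzer from the analysis-common module; "body" and the query text are arbitrary.

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.QueryBuilder;

public class QueryBuilderSketch {
  public static void main(String[] args) {
    QueryBuilder builder = new QueryBuilder(new StandardAnalyzer());
    // Both calls go through createFieldQuery internally, which consumes the
    // analyzer's TokenStream and picks analyzeTerm / analyzeBoolean /
    // analyzePhrase / analyzeGraph* as appropriate.
    Query bool = builder.createBooleanQuery("body", "fast token streams",
        BooleanClause.Occur.MUST);
    Query phrase = builder.createPhraseQuery("body", "token stream");
    System.out.println(bool);
    System.out.println(phrase);
  }
}
```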
Uses of TokenStream in org.apache.lucene.util.graph
Subclasses of TokenStream in org.apache.lucene.util.graph

private class

Methods in org.apache.lucene.util.graph that return types with arguments of type TokenStream

GraphTokenStreamFiniteStrings.getFiniteStrings()
    Get all finite strings from the automaton.
GraphTokenStreamFiniteStrings.getFiniteStrings(int startState, int endState)
    Get all finite strings that start at startState and end at endState.

Methods in org.apache.lucene.util.graph with parameters of type TokenStream

private Automaton GraphTokenStreamFiniteStrings.build(TokenStream in)
    Build an automaton from the provided TokenStream.

Constructors in org.apache.lucene.util.graph with parameters of type TokenStream
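To make the util.graph entries concrete: GraphTokenStreamFiniteStrings consumes a (possibly graph-shaped) TokenStream into an automaton, and getFiniteStrings() then enumerates each path as its own TokenStream. The sketch below assumes getFiniteStrings() returns an Iterator<TokenStream>, as in recent Lucene versions; with a plain analyzer there is a single path, while a synonym graph filter in the chain could produce several.

```java
import java.util.Iterator;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.graph.GraphTokenStreamFiniteStrings;

public class FiniteStringsSketch {
  public static void main(String[] args) throws Exception {
    try (Analyzer analyzer = new StandardAnalyzer()) {
      TokenStream in = analyzer.tokenStream("f", "token stream graphs");
      // The constructor consumes the input stream and builds the automaton.
      GraphTokenStreamFiniteStrings graph = new GraphTokenStreamFiniteStrings(in);
      Iterator<TokenStream> paths = graph.getFiniteStrings();
      while (paths.hasNext()) {
        try (TokenStream path = paths.next()) {
          CharTermAttribute term = path.addAttribute(CharTermAttribute.class);
          path.reset();
          StringBuilder sb = new StringBuilder();
          while (path.incrementToken()) {
            sb.append(term).append(' ');
          }
          path.end();
          System.out.println(sb.toString().trim());
        }
      }
    }
  }
}
```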