Package org.apache.lucene.queries.mlt
Class MoreLikeThis
- java.lang.Object
  - org.apache.lucene.queries.mlt.MoreLikeThis
public final class MoreLikeThis extends java.lang.Object

Generate "more like this" similarity queries. Based on this mail:

Lucene does let you access the document frequency of terms, with IndexReader.docFreq(). Term frequencies can be computed by re-tokenizing the text, which, for a single document, is usually fast enough. But looking up the docFreq() of every term in the document is probably too slow.

You can use some heuristics to prune the set of terms, to avoid calling docFreq() too much, or at all. Since you're trying to maximize a tf*idf score, you're probably most interested in terms with a high tf. Choosing a tf threshold even as low as two or three will radically reduce the number of terms under consideration. Another heuristic is that terms with a high idf (i.e., a low df) tend to be longer. So you could threshold the terms by the number of characters, not selecting anything less than, e.g., six or seven characters. With these sorts of heuristics you can usually find a small set of, e.g., ten or fewer terms that do a pretty good job of characterizing a document.

It all depends on what you're trying to do. If you're trying to eke out that last percent of precision and recall regardless of computational difficulty so that you can win a TREC competition, then the techniques I mention above are useless. But if you're trying to provide a "more like this" button on a search results page that does a decent job and has good performance, such techniques might be useful.

An efficient, effective "more-like-this" query generator would be a great contribution, if anyone's interested. I'd imagine that it would take a Reader or a String (the document's text), an Analyzer, and return a set of representative terms using heuristics like those above. The frequency and length thresholds could be parameters, etc.

Doug

Initial Usage
This class has lots of options to try to make it efficient and flexible. The simplest possible usage is as follows; the lines specific to this class are the ones creating and using the MoreLikeThis instance.
IndexReader ir = ...
IndexSearcher is = ...
MoreLikeThis mlt = new MoreLikeThis(ir);
Reader target = ... // orig source of doc you want to find similarities to
Query query = mlt.like("body", target); // "body" = whichever of your fields to analyze
TopDocs hits = is.search(query, 10);
// now the usual iteration through 'hits' - the only thing to watch for is to make sure
// you ignore the doc if it matches your 'target' document, as it should be similar to itself
Thus you:
- do your normal, Lucene setup for searching,
- create a MoreLikeThis,
- get the text of the doc you want to find similarities to
- then call one of the like() calls to generate a similarity query
- call the searcher to find the similar docs
More Advanced Usage
You may want to use setFieldNames(...) so you can examine multiple fields (e.g., body and title) for similarity.

Depending on the size of your index and the size and makeup of your documents, you may want to call the other set methods to control how the similarity queries are generated:
- setMinTermFreq(...)
- setMinDocFreq(...)
- setMaxDocFreq(...)
- setMaxDocFreqPct(...)
- setMinWordLen(...)
- setMaxWordLen(...)
- setMaxQueryTerms(...)
- setMaxNumTokensParsed(...)
- setStopWords(...)
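A typical tuning pass using those setters might look like the sketch below (the field names and threshold values are purely illustrative, not recommendations):

    MoreLikeThis mlt = new MoreLikeThis(ir);           // ir = your IndexReader
    mlt.setFieldNames(new String[] {"title", "body"}); // hypothetical field names
    mlt.setMinTermFreq(2);    // term must appear at least twice in the source doc
    mlt.setMinDocFreq(5);     // ...and in at least 5 docs in the index
    mlt.setMaxQueryTerms(25); // cap the size of the generated query
    mlt.setMinWordLen(3);     // skip very short tokens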
Changes: Mark Harwood, 29/02/04. Some bug fixing, some refactoring, some optimisation.
- bugfix: retrieveTerms(int docNum) was not working for indexes without a term vector - added missing code.
- bugfix: no significant terms were being created for fields with a term vector, because only one occurrence per term/field pair was counted (i.e. frequency info from the TermVector was not included).
- refactor: moved common code into isNoiseWord().
- optimise: when no term vector support is available, use maxNumTokensParsed to limit the amount of tokenization.
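The tf-threshold and word-length heuristics from Doug's mail above can be sketched in plain Java. This is only an illustration of the pruning idea, not this class's actual implementation; the class name, method name, and map layout here are assumptions:

```java
import java.util.*;
import java.util.stream.*;

public class TermPruning {
    // Keep only terms that pass a tf threshold and a minimum length,
    // mimicking the heuristics described in the mail.
    static Set<String> pruneTerms(Map<String, Integer> termFreqs,
                                  int minTermFreq, int minWordLen) {
        return termFreqs.entrySet().stream()
                .filter(e -> e.getValue() >= minTermFreq)   // tf threshold
                .filter(e -> e.getKey().length() >= minWordLen) // length threshold
                .map(Map.Entry::getKey)
                .collect(Collectors.toSet());
    }

    public static void main(String[] args) {
        Map<String, Integer> tf = new HashMap<>();
        tf.put("the", 12);        // frequent but too short -> pruned
        tf.put("lucene", 5);      // frequent and long enough -> kept
        tf.put("similarity", 3);  // kept
        tf.put("query", 1);       // below the tf threshold -> pruned
        System.out.println(pruneTerms(tf, 2, 6));
    }
}
```

Only the surviving terms would then need a docFreq() lookup, which is the point of the pruning.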
-
-
Nested Class Summary
Nested Classes:
- private static class MoreLikeThis.FreqQ - PriorityQueue that orders words by score.
- private static class MoreLikeThis.Int - Use for frequencies and to avoid renewing Integers.
- private static class MoreLikeThis.ScoreTerm
-
Field Summary
Fields:
- private Analyzer analyzer - Analyzer that will be used to parse the doc.
- private boolean boost - Should we apply a boost to the Query based on the scores?
- private float boostFactor - Boost factor to use when boosting the terms.
- static boolean DEFAULT_BOOST - Boost terms in query based on score.
- static java.lang.String[] DEFAULT_FIELD_NAMES - Default field names.
- static int DEFAULT_MAX_DOC_FREQ - Ignore words which occur in more than this many docs.
- static int DEFAULT_MAX_NUM_TOKENS_PARSED - Default maximum number of tokens to parse in each example doc field that is not stored with TermVector support.
- static int DEFAULT_MAX_QUERY_TERMS - Return a Query with no more than this many terms.
- static int DEFAULT_MAX_WORD_LENGTH - Ignore words greater than this length; if 0, this has no effect.
- static int DEFAULT_MIN_DOC_FREQ - Ignore words which do not occur in at least this many docs.
- static int DEFAULT_MIN_TERM_FREQ - Ignore terms with less than this frequency in the source doc.
- static int DEFAULT_MIN_WORD_LENGTH - Ignore words less than this length; if 0, this has no effect.
- static java.util.Set<?> DEFAULT_STOP_WORDS - Default set of stopwords.
- private java.lang.String[] fieldNames - Field names we'll analyze.
- private IndexReader ir - IndexReader to use.
- private int maxDocFreq - Ignore words which occur in more than this many docs.
- private int maxNumTokensParsed - The maximum number of tokens to parse in each example doc field that is not stored with TermVector support.
- private int maxQueryTerms - Don't return a query longer than this.
- private int maxWordLen - Ignore words greater than this length.
- private int minDocFreq - Ignore words which do not occur in at least this many docs.
- private int minTermFreq - Ignore words less frequent than this.
- private int minWordLen - Ignore words less than this length.
- private TFIDFSimilarity similarity - For idf() calculations.
- private java.util.Set<?> stopWords - Current set of stop words.
-
Constructor Summary
Constructors:
- MoreLikeThis(IndexReader ir) - Constructor requiring an IndexReader.
- MoreLikeThis(IndexReader ir, TFIDFSimilarity sim)
-
Method Summary
Methods:
- private void addTermFrequencies(java.io.Reader r, java.util.Map<java.lang.String,java.util.Map<java.lang.String,MoreLikeThis.Int>> perFieldTermFrequencies, java.lang.String fieldName) - Adds term frequencies found by tokenizing text from the reader into the Map words.
- private void addTermFrequencies(java.util.Map<java.lang.String,java.util.Map<java.lang.String,MoreLikeThis.Int>> field2termFreqMap, Terms vector, java.lang.String fieldName) - Adds terms and frequencies found in vector into the Map termFreqMap.
- private Query createQuery(PriorityQueue<MoreLikeThis.ScoreTerm> q) - Create the "more like this" query from a PriorityQueue.
- private PriorityQueue<MoreLikeThis.ScoreTerm> createQueue(java.util.Map<java.lang.String,java.util.Map<java.lang.String,MoreLikeThis.Int>> perFieldTermFrequencies) - Create a PriorityQueue from a word->tf map.
- java.lang.String describeParams() - Describe the parameters that control how the "more like this" query is formed.
- Analyzer getAnalyzer() - Returns the analyzer that will be used to parse the source doc.
- float getBoostFactor() - Returns the boost factor used when boosting terms.
- java.lang.String[] getFieldNames() - Returns the field names that will be used when generating the 'More Like This' query.
- int getMaxDocFreq() - Returns the maximum frequency in which words may still appear.
- int getMaxNumTokensParsed()
- int getMaxQueryTerms() - Returns the maximum number of query terms that will be included in any generated query.
- int getMaxWordLen() - Returns the maximum word length above which words will be ignored.
- int getMinDocFreq() - Returns the frequency at which words will be ignored which do not occur in at least this many docs.
- int getMinTermFreq() - Returns the frequency below which terms will be ignored in the source doc.
- int getMinWordLen() - Returns the minimum word length below which words will be ignored.
- TFIDFSimilarity getSimilarity()
- java.util.Set<?> getStopWords() - Get the current stop words being used.
- private int getTermsCount(java.util.Map<java.lang.String,java.util.Map<java.lang.String,MoreLikeThis.Int>> perFieldTermFrequencies)
- boolean isBoost() - Returns whether to boost terms in query based on "score" or not.
- private boolean isNoiseWord(java.lang.String term) - Determines whether the passed term is likely to be of interest in "more like" comparisons.
- Query like(int docNum) - Return a query that will return docs like the passed Lucene document ID.
- Query like(java.lang.String fieldName, java.io.Reader... readers) - Return a query that will return docs like the passed Readers.
- Query like(java.util.Map<java.lang.String,java.util.Collection<java.lang.Object>> filteredDocument)
- java.lang.String[] retrieveInterestingTerms(int docNum)
- java.lang.String[] retrieveInterestingTerms(java.io.Reader r, java.lang.String fieldName) - Convenience routine to make it easy to return the most interesting words in a document.
- private PriorityQueue<MoreLikeThis.ScoreTerm> retrieveTerms(int docNum) - Find words for a more-like-this query former.
- private PriorityQueue<MoreLikeThis.ScoreTerm> retrieveTerms(java.io.Reader r, java.lang.String fieldName) - Find words for a more-like-this query former.
- private PriorityQueue<MoreLikeThis.ScoreTerm> retrieveTerms(java.util.Map<java.lang.String,java.util.Collection<java.lang.Object>> field2fieldValues)
- void setAnalyzer(Analyzer analyzer) - Sets the analyzer to use.
- void setBoost(boolean boost) - Sets whether to boost terms in query based on "score" or not.
- void setBoostFactor(float boostFactor) - Sets the boost factor to use when boosting terms.
- void setFieldNames(java.lang.String[] fieldNames) - Sets the field names that will be used when generating the 'More Like This' query.
- void setMaxDocFreq(int maxFreq) - Set the maximum frequency in which words may still appear.
- void setMaxDocFreqPct(int maxPercentage) - Set the maximum percentage in which words may still appear.
- void setMaxNumTokensParsed(int i)
- void setMaxQueryTerms(int maxQueryTerms) - Sets the maximum number of query terms that will be included in any generated query.
- void setMaxWordLen(int maxWordLen) - Sets the maximum word length above which words will be ignored.
- void setMinDocFreq(int minDocFreq) - Sets the frequency at which words will be ignored which do not occur in at least this many docs.
- void setMinTermFreq(int minTermFreq) - Sets the frequency below which terms will be ignored in the source doc.
- void setMinWordLen(int minWordLen) - Sets the minimum word length below which words will be ignored.
- void setSimilarity(TFIDFSimilarity similarity)
- void setStopWords(java.util.Set<?> stopWords) - Set the set of stopwords.
-
-
-
Field Detail
-
DEFAULT_MAX_NUM_TOKENS_PARSED
public static final int DEFAULT_MAX_NUM_TOKENS_PARSED
Default maximum number of tokens to parse in each example doc field that is not stored with TermVector support.
- See Also:
getMaxNumTokensParsed(), Constant Field Values
-
DEFAULT_MIN_TERM_FREQ
public static final int DEFAULT_MIN_TERM_FREQ
Ignore terms with less than this frequency in the source doc.
- See Also:
getMinTermFreq(), setMinTermFreq(int), Constant Field Values
-
DEFAULT_MIN_DOC_FREQ
public static final int DEFAULT_MIN_DOC_FREQ
Ignore words which do not occur in at least this many docs.
- See Also:
getMinDocFreq(), setMinDocFreq(int), Constant Field Values
-
DEFAULT_MAX_DOC_FREQ
public static final int DEFAULT_MAX_DOC_FREQ
Ignore words which occur in more than this many docs.
-
DEFAULT_BOOST
public static final boolean DEFAULT_BOOST
Boost terms in query based on score.
- See Also:
isBoost(), setBoost(boolean), Constant Field Values
-
DEFAULT_FIELD_NAMES
public static final java.lang.String[] DEFAULT_FIELD_NAMES
Default field names. Null is used to specify that the field names should be looked up at runtime from the provided reader.
-
DEFAULT_MIN_WORD_LENGTH
public static final int DEFAULT_MIN_WORD_LENGTH
Ignore words less than this length; if 0, this has no effect.
- See Also:
getMinWordLen(), setMinWordLen(int), Constant Field Values
-
DEFAULT_MAX_WORD_LENGTH
public static final int DEFAULT_MAX_WORD_LENGTH
Ignore words greater than this length; if 0, this has no effect.
- See Also:
getMaxWordLen(), setMaxWordLen(int), Constant Field Values
-
DEFAULT_STOP_WORDS
public static final java.util.Set<?> DEFAULT_STOP_WORDS
Default set of stopwords. If null, stop words are allowed.
- See Also:
setStopWords(java.util.Set<?>), getStopWords()
-
stopWords
private java.util.Set<?> stopWords
Current set of stop words.
-
DEFAULT_MAX_QUERY_TERMS
public static final int DEFAULT_MAX_QUERY_TERMS
Return a Query with no more than this many terms.
-
analyzer
private Analyzer analyzer
Analyzer that will be used to parse the doc.
-
minTermFreq
private int minTermFreq
Ignore words less frequent than this.
-
minDocFreq
private int minDocFreq
Ignore words which do not occur in at least this many docs.
-
maxDocFreq
private int maxDocFreq
Ignore words which occur in more than this many docs.
-
boost
private boolean boost
Should we apply a boost to the Query based on the scores?
-
fieldNames
private java.lang.String[] fieldNames
Field names we'll analyze.
-
maxNumTokensParsed
private int maxNumTokensParsed
The maximum number of tokens to parse in each example doc field that is not stored with TermVector support.
-
minWordLen
private int minWordLen
Ignore words if less than this len.
-
maxWordLen
private int maxWordLen
Ignore words if greater than this len.
-
maxQueryTerms
private int maxQueryTerms
Don't return a query longer than this.
-
similarity
private TFIDFSimilarity similarity
For idf() calculations.
-
ir
private final IndexReader ir
IndexReader to use
-
boostFactor
private float boostFactor
Boost factor to use when boosting the terms
-
-
Constructor Detail
-
MoreLikeThis
public MoreLikeThis(IndexReader ir)
Constructor requiring an IndexReader.
-
MoreLikeThis
public MoreLikeThis(IndexReader ir, TFIDFSimilarity sim)
-
-
Method Detail
-
getBoostFactor
public float getBoostFactor()
Returns the boost factor used when boosting terms.
- Returns:
- the boost factor used when boosting terms
- See Also:
setBoostFactor(float)
-
setBoostFactor
public void setBoostFactor(float boostFactor)
Sets the boost factor to use when boosting terms.
- See Also:
getBoostFactor()
-
getSimilarity
public TFIDFSimilarity getSimilarity()
-
setSimilarity
public void setSimilarity(TFIDFSimilarity similarity)
-
getAnalyzer
public Analyzer getAnalyzer()
Returns the analyzer that will be used to parse the source doc. By default, no analyzer is set.
- Returns:
- the analyzer that will be used to parse the source doc.
-
setAnalyzer
public void setAnalyzer(Analyzer analyzer)
Sets the analyzer to use. An analyzer is not required for generating a query with the like(int) method; all other 'like' methods require an analyzer.
- Parameters:
analyzer - the analyzer to use to tokenize text.
-
getMinTermFreq
public int getMinTermFreq()
Returns the frequency below which terms will be ignored in the source doc. The default frequency is DEFAULT_MIN_TERM_FREQ.
- Returns:
- the frequency below which terms will be ignored in the source doc.
-
setMinTermFreq
public void setMinTermFreq(int minTermFreq)
Sets the frequency below which terms will be ignored in the source doc.
- Parameters:
minTermFreq - the frequency below which terms will be ignored in the source doc.
-
getMinDocFreq
public int getMinDocFreq()
Returns the frequency at which words will be ignored which do not occur in at least this many docs. The default frequency is DEFAULT_MIN_DOC_FREQ.
- Returns:
- the frequency at which words will be ignored which do not occur in at least this many docs.
-
setMinDocFreq
public void setMinDocFreq(int minDocFreq)
Sets the frequency at which words will be ignored which do not occur in at least this many docs.
- Parameters:
minDocFreq - the frequency at which words will be ignored which do not occur in at least this many docs.
-
getMaxDocFreq
public int getMaxDocFreq()
Returns the maximum frequency in which words may still appear. Words that appear in more than this many docs will be ignored. The default frequency is DEFAULT_MAX_DOC_FREQ.
- Returns:
- the maximum frequency at which words are still allowed; words which occur in more docs than this are ignored.
-
setMaxDocFreq
public void setMaxDocFreq(int maxFreq)
Set the maximum frequency in which words may still appear. Words that appear in more than this many docs will be ignored.
- Parameters:
maxFreq - the maximum count of documents that a term may appear in to be still considered relevant.
-
setMaxDocFreqPct
public void setMaxDocFreqPct(int maxPercentage)
Set the maximum percentage in which words may still appear. Words that appear in more than this percentage of all docs will be ignored. This method calls setMaxDocFreq(int) internally (both conditions cannot be used at the same time).
- Parameters:
maxPercentage - the maximum percentage of documents (0-100) that a term may appear in to be still considered relevant.
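Internally this amounts to converting the percentage into an absolute document count against the reader's numDocs(). A self-contained sketch of that arithmetic (the class and method names here are illustrative, not part of the Lucene API):

```java
public class MaxDocFreqPct {
    // Convert a percentage of the index size into an absolute
    // doc-frequency cap, mirroring what setMaxDocFreqPct(int)
    // computes from IndexReader.numDocs().
    static int pctToMaxDocFreq(int maxPercentage, int numDocs) {
        return maxPercentage * numDocs / 100;
    }

    public static void main(String[] args) {
        // In an index of 50,000 docs, a 10% cap means terms occurring
        // in more than 5,000 docs are ignored.
        System.out.println(pctToMaxDocFreq(10, 50000)); // prints 5000
    }
}
```

Note the integer arithmetic: the multiplication happens before the division, so very large indexes combined with large percentages could overflow an int.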
-
isBoost
public boolean isBoost()
Returns whether to boost terms in query based on "score" or not. The default is DEFAULT_BOOST.
- Returns:
- whether to boost terms in query based on "score" or not.
- See Also:
setBoost(boolean)
-
setBoost
public void setBoost(boolean boost)
Sets whether to boost terms in query based on "score" or not.
- Parameters:
boost - true to boost terms in query based on "score", false otherwise.
- See Also:
isBoost()
-
getFieldNames
public java.lang.String[] getFieldNames()
Returns the field names that will be used when generating the 'More Like This' query. The default field names that will be used are DEFAULT_FIELD_NAMES.
- Returns:
- the field names that will be used when generating the 'More Like This' query.
-
setFieldNames
public void setFieldNames(java.lang.String[] fieldNames)
Sets the field names that will be used when generating the 'More Like This' query. Set this to null for the field names to be determined at runtime from the IndexReader provided in the constructor.
- Parameters:
fieldNames - the field names that will be used when generating the 'More Like This' query.
-
getMinWordLen
public int getMinWordLen()
Returns the minimum word length below which words will be ignored. Set this to 0 for no minimum word length. The default is DEFAULT_MIN_WORD_LENGTH.
- Returns:
- the minimum word length below which words will be ignored.
-
setMinWordLen
public void setMinWordLen(int minWordLen)
Sets the minimum word length below which words will be ignored.
- Parameters:
minWordLen - the minimum word length below which words will be ignored.
-
getMaxWordLen
public int getMaxWordLen()
Returns the maximum word length above which words will be ignored. Set this to 0 for no maximum word length. The default is DEFAULT_MAX_WORD_LENGTH.
- Returns:
- the maximum word length above which words will be ignored.
-
setMaxWordLen
public void setMaxWordLen(int maxWordLen)
Sets the maximum word length above which words will be ignored.
- Parameters:
maxWordLen - the maximum word length above which words will be ignored.
-
setStopWords
public void setStopWords(java.util.Set<?> stopWords)
Set the set of stopwords. Any word in this set is considered "uninteresting" and ignored. Even if your Analyzer allows stopwords, you might want to tell the MoreLikeThis code to ignore them, as for the purposes of document similarity it seems reasonable to assume that "a stop word is never interesting".
- Parameters:
stopWords - set of stopwords; if null, stop words are allowed.
- See Also:
getStopWords()
-
getStopWords
public java.util.Set<?> getStopWords()
Get the current stop words being used.- See Also:
setStopWords(java.util.Set<?>)
-
getMaxQueryTerms
public int getMaxQueryTerms()
Returns the maximum number of query terms that will be included in any generated query. The default is DEFAULT_MAX_QUERY_TERMS.
- Returns:
- the maximum number of query terms that will be included in any generated query.
-
setMaxQueryTerms
public void setMaxQueryTerms(int maxQueryTerms)
Sets the maximum number of query terms that will be included in any generated query.
- Parameters:
maxQueryTerms - the maximum number of query terms that will be included in any generated query.
-
getMaxNumTokensParsed
public int getMaxNumTokensParsed()
- Returns:
- The maximum number of tokens to parse in each example doc field that is not stored with TermVector support
- See Also:
DEFAULT_MAX_NUM_TOKENS_PARSED
-
setMaxNumTokensParsed
public void setMaxNumTokensParsed(int i)
- Parameters:
i - the maximum number of tokens to parse in each example doc field that is not stored with TermVector support
-
like
public Query like(int docNum) throws java.io.IOException
Return a query that will return docs like the passed Lucene document ID.
- Parameters:
docNum - the document ID of the Lucene doc to generate the 'More Like This' query for.
- Returns:
- a query that will return docs like the passed lucene document ID.
- Throws:
java.io.IOException
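For example, given the best hit of a previous search, a follow-up "more like this" query could be generated as in this sketch (the mlt, searcher, and topDocs variables are assumed to exist already):

    int docNum = topDocs.scoreDocs[0].doc;    // doc ID of the best hit
    Query q = mlt.like(docNum);               // build the similarity query
    TopDocs similar = searcher.search(q, 10); // fetch similar docs
    // remember to skip docNum itself when iterating the results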
-
like
public Query like(java.util.Map<java.lang.String,java.util.Collection<java.lang.Object>> filteredDocument) throws java.io.IOException
- Parameters:
filteredDocument - Document with field values extracted for selected fields.
- Returns:
- More Like This query for the passed document.
- Throws:
java.io.IOException
-
like
public Query like(java.lang.String fieldName, java.io.Reader... readers) throws java.io.IOException
Return a query that will return docs like the passed Readers. This was added in order to handle multi-valued fields.
- Returns:
- a query that will return docs like the passed Readers.
- Throws:
java.io.IOException
-
createQuery
private Query createQuery(PriorityQueue<MoreLikeThis.ScoreTerm> q)
Create the "more like this" query from a PriorityQueue.
-
createQueue
private PriorityQueue<MoreLikeThis.ScoreTerm> createQueue(java.util.Map<java.lang.String,java.util.Map<java.lang.String,MoreLikeThis.Int>> perFieldTermFrequencies) throws java.io.IOException
Create a PriorityQueue from a word->tf map.
- Parameters:
perFieldTermFrequencies - a per-field map of words, keyed on the word (String), with Int objects as the values.
- Throws:
java.io.IOException
-
getTermsCount
private int getTermsCount(java.util.Map<java.lang.String,java.util.Map<java.lang.String,MoreLikeThis.Int>> perFieldTermFrequencies)
-
describeParams
public java.lang.String describeParams()
Describe the parameters that control how the "more like this" query is formed.
-
retrieveTerms
private PriorityQueue<MoreLikeThis.ScoreTerm> retrieveTerms(int docNum) throws java.io.IOException
Find words for a more-like-this query former.
- Parameters:
docNum - the id of the Lucene document from which to find terms
- Throws:
java.io.IOException
-
retrieveTerms
private PriorityQueue<MoreLikeThis.ScoreTerm> retrieveTerms(java.util.Map<java.lang.String,java.util.Collection<java.lang.Object>> field2fieldValues) throws java.io.IOException
- Throws:
java.io.IOException
-
addTermFrequencies
private void addTermFrequencies(java.util.Map<java.lang.String,java.util.Map<java.lang.String,MoreLikeThis.Int>> field2termFreqMap, Terms vector, java.lang.String fieldName) throws java.io.IOException
Adds terms and frequencies found in vector into the Map termFreqMap.
- Parameters:
field2termFreqMap - a Map of terms and their frequencies per field
vector - List of terms and their frequencies for a doc/field
- Throws:
java.io.IOException
-
addTermFrequencies
private void addTermFrequencies(java.io.Reader r, java.util.Map<java.lang.String,java.util.Map<java.lang.String,MoreLikeThis.Int>> perFieldTermFrequencies, java.lang.String fieldName) throws java.io.IOException
Adds term frequencies found by tokenizing text from the reader into the Map words.
- Parameters:
r - a source of text to be tokenized
perFieldTermFrequencies - a Map of terms and their frequencies per field
fieldName - used by the analyzer for any special per-field analysis
- Throws:
java.io.IOException
-
isNoiseWord
private boolean isNoiseWord(java.lang.String term)
Determines whether the passed term is likely to be of interest in "more like" comparisons.
- Parameters:
term - the word being considered
- Returns:
- true if the term should be ignored, false if it should be used in further analysis
-
retrieveTerms
private PriorityQueue<MoreLikeThis.ScoreTerm> retrieveTerms(java.io.Reader r, java.lang.String fieldName) throws java.io.IOException
Find words for a more-like-this query former. The result is a priority queue of arrays with one entry for every word in the document. Each array has 6 elements. The elements are:
- The word (String)
- The top field that this word comes from (String)
- The score for this word (Float)
- The IDF value (Float)
- The frequency of this word in the index (Integer)
- The frequency of this word in the source document (Integer)
For an easier method to call, see retrieveInterestingTerms().
- Parameters:
r - the reader that has the content of the document
fieldName - field passed to the analyzer to use when analyzing the content
- Returns:
- the most interesting words in the document ordered by score, with the highest scoring, or best entry, first
- Throws:
java.io.IOException
- See Also:
retrieveInterestingTerms(int)
-
retrieveInterestingTerms
public java.lang.String[] retrieveInterestingTerms(int docNum) throws java.io.IOException
- Throws:
java.io.IOException
- See Also:
retrieveInterestingTerms(java.io.Reader, String)
-
retrieveInterestingTerms
public java.lang.String[] retrieveInterestingTerms(java.io.Reader r, java.lang.String fieldName) throws java.io.IOException
Convenience routine to make it easy to return the most interesting words in a document. More advanced users will call retrieveTerms() directly.
- Parameters:
r - the source document
fieldName - field passed to the analyzer to use when analyzing the content
- Returns:
- the most interesting words in the document
- Throws:
java.io.IOException
- See Also:
retrieveTerms(java.io.Reader, String), setMaxQueryTerms(int)
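To quickly inspect which words MoreLikeThis considers interesting for a piece of text, a call might look like this sketch (the mlt variable is assumed to be a configured MoreLikeThis, and "body" is a hypothetical field name):

    Reader r = new StringReader("some document text to characterize");
    for (String term : mlt.retrieveInterestingTerms(r, "body")) {
        System.out.println(term); // best-scoring terms first
    }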
-
-