A set of Elastic/OpenSearch-inspired significance scoring heuristics for discovery of terms strongly related to a set.
Typically, the point of identifying significant terms is to suggest terms users can apply to refine their searches.
Watch this for background on the significance statistics.
These algorithms rely on comparing a focused set of data (the "foreground set") with a larger set of more general content (the "background set"). The sources of data generally fall into two camps:
- The words used in a specific set of search results are compared with a cache of word-usage statistics taken from a much broader selection of content. While the Elasticsearch and OpenSearch implementations compare search results with the full database (often at some expense), client applications can hold a much smaller cache of only the most common words as the background dataset. This small background set can be less than one megabyte of JSON and still be useful.
- A set of content, e.g. the latest news headlines, is grouped into clusters (for example by using binary vectors). The words used in each cluster are compared with word usage in all other clusters to help identify what makes each cluster distinctive.
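To make the inputs to these comparisons concrete, the sketch below derives the four counts for the clustering use case — the same four numbers every heuristic consumes. The data and helper names here are illustrative, not part of the package:

```typescript
// Sketch of the "clusters" use case: score a word in one cluster against
// all clusters combined. Names and data are illustrative only.
const clusters: string[][] = [
  ["h5n1 outbreak", "h5n1 vaccine news"], // cluster under examination
  ["election results", "senate vote"],    // everything else
];
const word = "h5n1";
const countDocsContaining = (docs: string[]) =>
  docs.filter((doc) => doc.includes(word)).length;

const allDocs = clusters.flat();
const subsetFreq = countDocsContaining(clusters[0]); // docs in this cluster containing the word
const subsetSize = clusters[0].length;               // docs in this cluster
const supersetFreq = countDocsContaining(allDocs);   // docs anywhere containing the word
const supersetSize = allDocs.length;                 // docs in all clusters
// These four numbers are exactly what a heuristic's score() takes.
```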
```typescript
import { getHeuristic } from "@andorsearch/significance-heuristics";

const jlh = getHeuristic("jlh");
// score(subsetFreq, subsetSize, supersetFreq, supersetSize)
const score = jlh.score(12, 100, 40, 1700000);
```
Each heuristic implements:
```typescript
interface SignificanceHeuristic {
  name: string;
  score(subsetFreq: number, subsetSize: number, supersetFreq: number, supersetSize: number): number;
}
```
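As a concrete illustration, here is a standalone object satisfying this interface. It uses the JLH formula as described in the Elasticsearch documentation — (foreground% − background%) × (foreground% / background%) — but this is an illustrative sketch, not the package's implementation:

```typescript
// Illustrative standalone heuristic; not the package's own code.
interface SignificanceHeuristic {
  name: string;
  score(subsetFreq: number, subsetSize: number, supersetFreq: number, supersetSize: number): number;
}

const jlhSketch: SignificanceHeuristic = {
  name: "jlh",
  score(subsetFreq, subsetSize, supersetFreq, supersetSize) {
    const fg = subsetFreq / subsetSize;     // frequency in the foreground set
    const bg = supersetFreq / supersetSize; // frequency in the background set
    if (bg === 0 || fg <= bg) return 0;     // no positive signal
    // JLH rewards both a large absolute and a large relative increase
    return (fg - bg) * (fg / bg);
  },
};

// e.g. a term in 12 of 100 results but only 40 of 1,700,000 background docs
const s = jlhSketch.score(12, 100, 40, 1700000);
```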
Significant keywords can be detected in search results and shown as suggestions to refine queries, e.g. adding the keyword "h5n1" to a search for "bird flu".
The language used in search results needs to be compared with some record of general word use we call "the background". A background is simply a list of words and how frequently they each occur. You can get a background of word use from a couple of places:
```shell
npx @andorsearch/significant-terms count-background-vocab 'MyExampleTextFile.txt' MyBackgroundWordStats.json
```

**or**
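A background can also be assembled directly in code. The sketch below is a hypothetical stand-in for the CLI step: it assumes `WordUsageCounts` is a plain word → document-count map (declared locally here), and its regex tokenizer is an assumption, not the package's `simpleTokenizer`:

```typescript
// Hypothetical sketch of building background word stats in code.
// Assumes WordUsageCounts is a plain word -> document-count map.
type WordUsageCounts = Record<string, number>;

function countBackgroundVocab(docs: string[]): { stats: WordUsageCounts; corpusSize: number } {
  const stats: WordUsageCounts = {};
  for (const doc of docs) {
    // count each word once per document (document frequency)
    const words = new Set(doc.toLowerCase().match(/[a-z0-9]+/g) ?? []);
    for (const w of words) stats[w] = (stats[w] ?? 0) + 1;
  }
  return { stats, corpusSize: docs.length };
}

const { stats, corpusSize } = countBackgroundVocab([
  "bird flu spreads",
  "flu season begins",
]);
```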
Once you have a background it can be used for comparison with search results:
```typescript
import {
  computeSignificantTerms,
  detectAndSortSequences,
  simpleTokenizer,
  WordUsageCounts,
} from "@andorsearch/significant-terms";

// ==== Your choice of background ====
// const backgroundWordStats: WordUsageCounts
// const backgroundCorpusSize: number

function findSignificantWordsOrPhrases(searchResultTexts: string[]) {
  // Tokenise the text found in search results
  const tokenStreams = searchResultTexts.map((textValue) => simpleTokenizer(textValue));

  // Find the significant words compared to the background
  const significantWords = computeSignificantTerms(
    tokenStreams,
    backgroundWordStats,
    backgroundCorpusSize
  );

  // Optionally, examine how the significant words are placed in the text
  // to identify word pairs, e.g. "Mitt" + "Romney"
  const significantWordsOrPhrases = detectAndSortSequences(significantWords, tokenStreams);

  // Display the words or phrases as a comma-delimited list
  const summary = significantWordsOrPhrases
    .map((termOrPhrase) => termOrPhrase.join(" "))
    .join(", ");
  console.log(summary);
}
```
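The sequence-detection step can be pictured with a minimal sketch of the general idea — merging adjacent significant tokens into phrases. This illustrates the concept only; it is not the package's actual `detectAndSortSequences` algorithm:

```typescript
// Illustrative sketch (not the package's implementation) of how adjacent
// significant words can be merged into phrases like "Mitt Romney".
function mergeAdjacent(significant: string[], tokenStreams: string[][]): string[][] {
  const sig = new Set(significant);
  const pairCounts = new Map<string, number>();
  for (const tokens of tokenStreams) {
    for (let i = 0; i < tokens.length - 1; i++) {
      // record each adjacent pair where both tokens are significant
      if (sig.has(tokens[i]) && sig.has(tokens[i + 1])) {
        const key = tokens[i] + " " + tokens[i + 1];
        pairCounts.set(key, (pairCounts.get(key) ?? 0) + 1);
      }
    }
  }
  // keep only pairs that co-occur more than once
  return [...pairCounts.entries()]
    .filter(([, n]) => n > 1)
    .map(([key]) => key.split(" "));
}

const phrases = mergeAdjacent(
  ["mitt", "romney"],
  [
    ["mitt", "romney", "wins"],
    ["poll", "mitt", "romney"],
  ]
);
```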