• Convenience wrapper for computeSignificantTermsFromCounts that accepts tokenized documents instead of precomputed counts.

    Parameters

    • foregroundDocs: string[][]

      Foreground documents, each represented as an array of tokens (typically words)

    • bgCounts: WordUsageCounts

      Map from each word to its document frequency (the number of background documents containing it)

    • totalBackgroundDocs: number

      Total number of documents in the background set

    • options: SignificanceFindingOptions = {}

      Same as computeSignificantTermsFromCounts

    Returns TermScore[]
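
A plausible sketch of what this wrapper does internally: collapse the tokenized foreground documents into per-word document frequencies (each word counted at most once per document), then delegate to computeSignificantTermsFromCounts. The type shapes and the helper toDocFrequencies below are assumptions for illustration, not the library's actual implementation.

```typescript
// Assumed shape: word -> number of documents containing it.
type WordUsageCounts = Record<string, number>;

// Hypothetical helper: collapse tokenized docs into document frequencies.
// Each word contributes at most once per document, matching the
// "document frequency" semantics of the background counts.
function toDocFrequencies(docs: string[][]): WordUsageCounts {
  const counts: WordUsageCounts = {};
  for (const doc of docs) {
    for (const word of new Set(doc)) {
      counts[word] = (counts[word] ?? 0) + 1;
    }
  }
  return counts;
}

const fgCounts = toDocFrequencies([
  ["cat", "dog", "cat"], // "cat" counts once for this doc
  ["cat", "fish"],
]);
// fgCounts: { cat: 2, dog: 1, fish: 1 }
```

With fgCounts in hand, the wrapper would presumably call computeSignificantTermsFromCounts(fgCounts, foregroundDocs.length, bgCounts, totalBackgroundDocs, options) and return its TermScore[] result directly.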