Feature: repository search

Britta Gustafson edited this page Nov 14, 2024 · 52 revisions

The purpose of this page is to document how our search works (at a high level).

We need to show the following materials in our blended search results:

  • Regulations
  • Public policy documents that we link to:
    • Federal Register rules
    • Public links: subregulatory guidance, technical assistance, etc.
  • Internal policy documents (internal to CMCS):
    • Internal files: uploaded files that are hosted within eRegulations
    • Internal links: Box, Sharepoint, or other URLs hosted within other CMCS tools

User experience goals

  • Return relevant results in less than 2 seconds.
  • Return meaningful highlighting of where your query term shows up in the document.
  • Provide a bit of interpretation instead of being 100% literal (such as applying stop words, stemming, etc.), but not so much that it prevents relevant results.

Helpful test queries:

  • state
    • At rank filter 0.05, this returns 2800+ results
  • "state plan"
    • At rank filter 0.05, this returns 1400+ results
  • state plan amendment
    • At rank filter 0.05, this returns 2000+ results
    • Should show the stemmed word "amendment" in highlights
  • Medicare
    • At rank filter 0.05, this returns 2100+ results
    • Shouldn't just return results for "medical" at the top
  • personal care services
    • At rank filter 0.05, this returns 1700+ results
    • Should show the stemmed words "personal" and "services" in highlights

Background info about technology and data

We use Postgres full-text search via Django's built-in support for it.
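To make this concrete, here is a rough sketch of the SQL that Django's Postgres full-text search machinery generates for a basic (unquoted) query against our pre-computed vector column. This is illustrative only: the table name is an assumption, and the real query is built by the ORM.

```python
import re

# Illustrative only: approximately the SQL that Django's SearchQuery /
# SearchRank machinery produces for a basic search. The table name
# below is a placeholder assumption, not our actual schema.
BASIC_SEARCH_SQL = """
SELECT id, title,
       ts_rank(vector_column, plainto_tsquery('english', %(q)s)) AS rank
FROM resources_document
WHERE vector_column @@ plainto_tsquery('english', %(q)s)
ORDER BY rank DESC;
"""

def placeholders(sql: str) -> set:
    """Tiny helper: list the named parameters the query expects."""
    return set(re.findall(r"%\((\w+)\)s", sql))
```

The `@@` operator does the matching against the stored `tsvector`, and `ts_rank` produces the relevance score discussed under "Search result ranking" below.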

In our Postgres database we have:

  • The full text of regulation sections in scope, imported via eCFR API
  • Metadata about each document:
    • Imported via Federal Register API for post-1994 rules (and hand-corrected as needed)
    • Entered by hand for everything else
  • The full text of most documents, extracted via our Text Extractor Lambda. See that page for details. It uses:
    • Python Requests to grab content from URLs, respecting robots.txt and providing a custom user agent (CMCSeRegsTextExtractorBot/1.0)
    • Google Magika to detect file types
    • AWS Textract to process PDFs, including text detection for scanned documents (this is about $1.50 per 1000 pages)
    • Several open source libraries to process a lot of file types

As of November 2024: for PDFs, our system only attempts to extract the first 50 pages of each document, to bound resource usage. We need to revisit this.

Notes about Federal Register links

Our FR parser includes a special step to enable search indexing, because the FR website does not allow scraping its normal URLs: we fetch the document's text-only URL via the FR API and give that Extract_URL to the Text Extractor Lambda instead of the normal URL.
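A minimal sketch of that step, assuming the Federal Register JSON API's per-document endpoint and its `raw_text_url` field (both are assumptions about the API shape, so verify against the FR API docs):

```python
FR_API = "https://www.federalregister.gov/api/v1/documents/{doc_number}.json"

def fr_api_url(doc_number: str) -> str:
    """Build the FR API metadata URL for a document number."""
    return FR_API.format(doc_number=doc_number)

def text_only_url(doc_number: str) -> str:
    """Fetch the document's metadata and return its raw-text URL --
    the Extract_URL we would hand to the Text Extractor Lambda
    instead of the normal (non-scrapable) FR page URL."""
    import requests  # imported here so the sketch loads without the dependency

    resp = requests.get(fr_api_url(doc_number), timeout=30)
    resp.raise_for_status()
    return resp.json()["raw_text_url"]
```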

The raw text content of an indexed FR link can be 2-3 MB or more.

We don't index pre-1994 FR links because we link to PDFs that we can't scrape.

Notes about internal links

Searching the full text of internal links depends on whether we can programmatically access and index those documents through APIs or other tools, and on whether we've been able to build an integration. Our current infrastructure does not include any such integrations.

Metadata fields

See Resources linking system for details about our metadata fields for public links, FR links, internal links, and internal files.

Display of resources search results

Example:

(Screenshot: example search results)

Factors for what you can see

  • FR links, public links, internal links, and internal files can be marked "approved" or not approved in the admin panel. Items that aren't approved are only visible in the admin panel (which is only available to logged-in users), never shown in search results or elsewhere on the site.
  • If you're not logged in, you cannot see internal documents (internal files or internal links) in search results or elsewhere on the site.

Display of metadata

In search results, we always show the following document metadata if available:

  • Document category
  • Date
  • Subjects
  • Related citations

If the desired keyword(s) exist only in the document metadata (FR link Document ID or Title, public link Document ID or Title, internal file Title or Summary, etc.), show that document metadata. This means:

  • FR link: Document ID (grey metadata) and Title (blue link)
  • Public link: Document ID (grey metadata) and Title (blue link)
  • Internal link: Document ID (grey metadata) and Title (blue link)
  • Internal files: Title (blue link) and Summary (black text)

If the desired keyword(s) also exist in the extracted document text, show the Document ID and Title (grey metadata and blue link) AND:

  • For all types of documents: if we have a relevant excerpt (headline) from the full-text content, display it in black text. (For internal files, this headline replaces the summary.)
    • The amount of text is configured via the variables SEARCH_HEADLINE_MAX_WORDS, SEARCH_HEADLINE_MIN_WORDS, and SEARCH_HEADLINE_MAX_FRAGMENTS. These correspond to MaxWords, MinWords, and MaxFragments as described in 12.3.4. Highlighting Results.
    • Note that we show headlines from only the first 50k characters in a document, because otherwise search is really slow. If you imagine a plain document in 12 point font, 50k characters is about 10 pages. (This is configured via the SEARCH_HEADLINE_TEXT_MAX variable.)
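The 50k-character bound can be sketched as a simple truncation applied before headline generation (the function name is illustrative, not our actual code):

```python
SEARCH_HEADLINE_TEXT_MAX = 50_000  # characters scanned for headlines

def headline_source(full_text: str, limit: int = SEARCH_HEADLINE_TEXT_MAX) -> str:
    """Return only the prefix of the extracted text that headline
    generation will scan. Bounding this is what keeps search fast on
    multi-megabyte documents (e.g. FR link raw text)."""
    return full_text[:limit]
```

A keyword that appears only past the cutoff still matches the document (the full text is in the search vector); it just won't produce an excerpt.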

We highlight matching keywords in bold in the document Title and Summary or excerpt. We could highlight them in the Document ID as well.

Search result ranking

Rank filtering

For background, see Ranking Search Results in the Postgres docs.

When Django asks Postgres for results, each potential result gets a ts_rank score. See the definition of ts_rank: "Computes a score showing how well the vector matches the query."

A high score (e.g. 0.1) means very relevant, while a low score (e.g. 0.01) means not very relevant.

We have an environment variable that tells Postgres how to filter the results: should it show only the most relevant results, or lots of results, including less relevant ones at the end? A higher filter value (like 0.1) means show fewer results, and a lower filter value (like 0.01) means show lots of results.

The rank filter value for each environment is in our parameter store: BASIC_SEARCH_FILTER and QUOTED_SEARCH_FILTER.

Rank filter is 0.05 in all environments, for both basic (not quoted) and phrase (quoted) search queries.
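In pure-Python terms, the rank filter behaves like this (a toy sketch over (document, rank) pairs; the real filtering happens inside the Postgres query):

```python
RANK_FILTER = 0.05  # BASIC_SEARCH_FILTER / QUOTED_SEARCH_FILTER in all environments

def filter_and_sort(results, rank_filter=RANK_FILTER):
    """Drop results below the rank threshold and sort best-first,
    mirroring what the rank filter does in the real query."""
    return sorted(
        (r for r in results if r[1] >= rank_filter),
        key=lambda r: r[1],
        reverse=True,
    )
```

For example, with `[("doc-a", 0.12), ("doc-b", 0.03), ("doc-c", 0.07)]`, only doc-a and doc-c survive the 0.05 threshold.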

Normalization

We're not yet using any of the Postgres document length normalization options.
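For context, Postgres exposes normalization as a bit mask passed to ts_rank; here is a sketch of two of the options from the "Ranking Search Results" docs (we currently pass 0, so long and short documents are ranked on raw match quality alone):

```python
import math

def normalized_rank(rank: float, doc_length: int, flags: int = 0) -> float:
    """Pure-Python sketch of two of Postgres's ts_rank normalization
    options:
      1 -> divide the rank by 1 + log(document length)
      2 -> divide the rank by the document length
    flags=0 (our current setting) applies no normalization."""
    if flags & 1:
        rank /= 1 + math.log(doc_length)
    if flags & 2:
        rank /= doc_length
    return rank
```

If we ever find that very long documents (like multi-megabyte FR rules) crowd out short ones, these options are the lever to reach for.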

Weights

To make search faster, we create and automatically maintain a "vector_column" with a pre-processed version of each content item. We create the pre-processed version using "weight" values for various parts of the metadata and content for an item, so that (for example) a word in the title of a document counts more toward relevance than a word in the body of a document.

Context about decisions we made for weights (login required).

Weights for documents:

  • (FR link) Document ID: A
  • (Public link) Document ID: A
  • (Internal file) Document ID: A
  • (FR link) Title: A
  • (Public link) Title: A
  • (Internal file) summary: B
  • (Internal file) filename: C
  • Date: C
  • Subjects (full names, short names, and abbreviations): D
  • Content: D

We may want to:

  • Add FR docket numbers to weight A
  • Bump subjects up to weight C
  • Add related regulation and statute citations to weight C
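To illustrate how the weights above affect ranking: Postgres's default per-weight multipliers are {D, C, B, A} = {0.1, 0.2, 0.4, 1.0}. The toy scorer below (illustrative field names, not our actual schema; real ts_rank also accounts for term frequency and proximity) shows why a title match outranks a content match:

```python
# Postgres's default per-weight multipliers.
WEIGHT_VALUES = {"A": 1.0, "B": 0.4, "C": 0.2, "D": 0.1}

# Field-to-weight mapping for documents, per the list above
# (field names here are illustrative).
DOCUMENT_WEIGHTS = {
    "document_id": "A",
    "title": "A",
    "summary": "B",
    "file_name": "C",
    "date": "C",
    "subjects": "D",
    "content": "D",
}

def toy_rank(term: str, doc: dict) -> float:
    """Toy scoring: sum the multiplier of every weighted field that
    contains the term."""
    return sum(
        WEIGHT_VALUES[DOCUMENT_WEIGHTS[field]]
        for field, text in doc.items()
        if field in DOCUMENT_WEIGHTS and term in text.lower()
    )
```

A document with "state" in its title (weight A, 1.0) scores well above one with "state" only in its content (weight D, 0.1), which is exactly the behavior the proposed tweaks (e.g. bumping subjects from D to C) would adjust.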

Weights for regulation text sections:

  • Section number: A
  • Section title: A
  • Part title: A
  • Content: B

We may want to:

  • Add subpart title to weight B
  • Reduce part title to weight B
  • Reduce content to weight C
