@bali0019 (Contributor) commented on Oct 7, 2025:

This example demonstrates incremental document processing using:

  • ai_parse_document for extracting structured data from PDFs/images
  • ai_query for LLM-based content analysis
  • Databricks Workflows with Structured Streaming and serverless compute

Key features:

  • Python notebooks with Structured Streaming for incremental processing
  • Serverless compute for cost efficiency
  • Parameterized workflow with catalog, schema, and table names
  • Checkpointed streaming to process only new data
  • Visual debugging notebook with interactive bounding boxes
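The core pattern above can be sketched in Databricks SQL. This is illustrative only: the table names, volume paths, and model endpoint below are placeholders, not values from the example, and the exact shape of `ai_parse_document`'s output may differ by runtime version.

```sql
-- Parse PDFs/images from a volume into structured output (illustrative paths).
CREATE OR REPLACE TABLE main.default.parsed_docs AS
SELECT
  path,
  ai_parse_document(content) AS parsed
FROM READ_FILES('/Volumes/main/default/raw_docs', format => 'binaryFile');

-- Run LLM-based analysis over the parsed output (endpoint name is a placeholder).
SELECT
  path,
  ai_query(
    'databricks-meta-llama-3-3-70b-instruct',
    CONCAT('Summarize this document: ', CAST(parsed AS STRING))
  ) AS summary
FROM main.default.parsed_docs;
```

The example itself drives this through Structured Streaming rather than a one-shot `CREATE TABLE`, so only newly arrived files are parsed on each run.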

@bali0019 force-pushed the add-ai-document-workflow branch from 0aeff00 to d5151d4 on October 7, 2025 at 16:43.
- Add job-level parameters block for catalog and schema (shared across all tasks)
- Move optional schedule configuration to top of job definition
- Replace Python notebook with Jupyter notebook format including visual outputs
- Revert to a .py notebook source plus an .html export to show outputs correctly, since the .ipynb format was not preserving the visual output properly
- Move directory from knowledge_base/ to contrib/
- Replace "workflow" terminology with "job" throughout
- Add recursiveFileLookup option for subdirectory file discovery
- Make all parameters explicit at job level
- Update checkpoint paths to use Unity Catalog volumes instead of /tmp
- Remove hardcoded personal paths from notebooks
- Add 6 visual screenshots showing parsing capabilities
- Update README with Example Output section
- Remove HTML output file in favor of Python notebook
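The streaming side of these changes might look roughly like the sketch below. It assumes a Databricks runtime (where `spark` and Auto Loader are available) and uses placeholder catalog, volume, and table names, not the example's actual values.

```python
# Sketch only: requires a Databricks runtime; `spark` is the ambient SparkSession.
# All paths and table names below are illustrative placeholders.
raw = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "binaryFile")
    .option("recursiveFileLookup", "true")   # discover files in subdirectories
    .load("/Volumes/main/default/raw_docs")
)

# Invoke ai_parse_document through a SQL expression on the streaming frame.
parsed = raw.selectExpr("path", "ai_parse_document(content) AS parsed")

# Checkpointing in a Unity Catalog volume (not /tmp) makes the stream resumable,
# so reruns process only files that arrived since the last checkpoint.
(
    parsed.writeStream
    .option("checkpointLocation", "/Volumes/main/default/checkpoints/parse_docs")
    .trigger(availableNow=True)   # drain the backlog, then stop
    .toTable("main.default.parsed_docs")
)
```

With `trigger(availableNow=True)`, a scheduled job run behaves like an incremental batch: it picks up where the checkpoint left off and terminates when the backlog is drained.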
```yaml
    default: /Volumes/main/default/parsed_output
  checkpoint_base_path:
    description: Base path for Structured Streaming checkpoints
    default: /tmp/checkpoints/ai_parse_workflow
```
This path is local to a driver. Checkpoints will be gone by the time the cluster terminates. Intentional?
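One way to address this (and what the "Update checkpoint paths to use Unity Catalog volumes" change points toward) is to default the checkpoint base path to a UC volume instead of the driver-local /tmp. The volume path below is illustrative:

```yaml
checkpoint_base_path:
  description: Base path for Structured Streaming checkpoints
  default: /Volumes/main/default/checkpoints/ai_parse_workflow
```

A volume-backed checkpoint survives cluster termination, so subsequent job runs can resume the stream instead of reprocessing everything.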

@pietern pietern merged commit 774414c into databricks:main Oct 15, 2025
1 check passed