Merged

Changes from 20 commits
2 changes: 1 addition & 1 deletion docs/codegen/src/cli_doc.rs
Original file line number Diff line number Diff line change
@@ -4,7 +4,7 @@ use std::{fs, path::Path};
 use crate::utils;
 
 pub fn generate_cli_doc(docs_dir: &Path) -> anyhow::Result<()> {
-    let file_path = docs_dir.join("cli_reference.md");
+    let file_path = docs_dir.join("reference/cli.md");
 
     let content = fs::read_to_string(&file_path)?;
2 changes: 1 addition & 1 deletion docs/codegen/src/default_configuration.rs
@@ -5,7 +5,7 @@ use crate::utils::replace_section;
 use pgt_configuration::PartialConfiguration;
 
 pub fn generate_default_configuration(docs_dir: &Path) -> anyhow::Result<()> {
-    let index_path = docs_dir.join("index.md");
+    let index_path = docs_dir.join("getting_started.md");
 
     let printed_config = format!(
         "\n```json\n{}\n```\n",
2 changes: 1 addition & 1 deletion docs/codegen/src/env_variables.rs
@@ -6,7 +6,7 @@ use std::path::Path;
 use crate::utils::replace_section;
 
 pub fn generate_env_variables(docs_dir: &Path) -> Result<()> {
-    let file_path = docs_dir.join("env_variables.md");
+    let file_path = docs_dir.join("reference/env_variables.md");
 
     let mut content = vec![];
2 changes: 1 addition & 1 deletion docs/codegen/src/rules_docs.rs
@@ -20,7 +20,7 @@ use std::{
 ///
 /// * `docs_dir`: Path to the docs directory.
 pub fn generate_rules_docs(docs_dir: &Path) -> anyhow::Result<()> {
-    let rules_dir = docs_dir.join("rules");
+    let rules_dir = docs_dir.join("reference/rules");
 
     if rules_dir.exists() {
         fs::remove_dir_all(&rules_dir)?;
2 changes: 1 addition & 1 deletion docs/codegen/src/rules_index.rs
@@ -17,7 +17,7 @@ use crate::utils;
 ///
 /// * `docs_dir`: Path to the docs directory.
 pub fn generate_rules_index(docs_dir: &Path) -> anyhow::Result<()> {
-    let index_file = docs_dir.join("rules.md");
+    let index_file = docs_dir.join("reference/rules.md");
 
     let mut visitor = crate::utils::LintRulesVisitor::default();
     pgt_analyser::visit_registry(&mut visitor);
11 changes: 10 additions & 1 deletion docs/codegen/src/rules_sources.rs
@@ -28,7 +28,7 @@ impl PartialOrd for SourceSet {
 }
 
 pub fn generate_rule_sources(docs_dir: &Path) -> anyhow::Result<()> {
-    let rule_sources_file = docs_dir.join("rule_sources.md");
+    let rule_sources_file = docs_dir.join("reference/rule_sources.md");
 
     let mut visitor = crate::utils::LintRulesVisitor::default();
     pgt_analyser::visit_registry(&mut visitor);
@@ -69,7 +69,16 @@ pub fn generate_rule_sources(docs_dir: &Path) -> anyhow::Result<()> {
         }
     }
 
+    writeln!(buffer, "# Rule Sources",)?;
+    writeln!(
+        buffer,
+        "Many rules are inspired by or directly ported from other tools. This page lists the sources of each rule.",
+    )?;
+
     writeln!(buffer, "## Exclusive rules",)?;
+    if exclusive_rules.is_empty() {
+        writeln!(buffer, "_No exclusive rules available._")?;
+    }
     for (rule, link) in exclusive_rules {
         writeln!(buffer, "- [{rule}]({link}) ")?;
     }
96 changes: 96 additions & 0 deletions docs/configuration.md
@@ -0,0 +1,96 @@
# Configuration

This guide will help you to understand how to configure the Postgres Language Server. It explains the structure of the configuration file and how the configuration is resolved.

The Postgres Language Server allows you to customize its behavior using CLI options or a configuration file named `postgrestools.jsonc`. We recommend that you create a configuration file for each project. This ensures that each team member has the same configuration in the CLI and in any editor that integrates the language server. Many of the options available in a configuration file are also available in the CLI.

## Configuration file structure

A configuration file is usually placed in your project’s root folder. It is organized around the tools that are provided. All tools are enabled by default, but some require additional setup like a database connection or the `plpgsql_check` extension.

```json
{
  "$schema": "https://pgtools.dev/latest/schema.json",
  "linter": {
    "enabled": true,
    "rules": {
      "recommended": true
    }
  },
  "typecheck": {
    "enabled": true
  },
  "plpgsqlCheck": {
    "enabled": true
  }
}
```

## Configuring a database connection

Some of the tools that the Postgres Language Server provides are implemented as thin interfaces on top of functionality provided by the database itself. This ensures correctness, but requires an active connection to a Postgres database. We strongly recommend connecting only to a local development database.

```json
{
  "$schema": "https://pgtools.dev/latest/schema.json",
  "db": {
    "host": "127.0.0.1",
    "port": 5432,
    "username": "postgres",
    "password": "postgres",
    "database": "postgres",
    "connTimeoutSecs": 10,
    "allowStatementExecutionsAgainst": ["127.0.0.1/*", "localhost/*"]
  }
}
```


## Specifying files to process

You can control which files and folders are processed using different strategies: the CLI, the configuration file, and your VCS.

### Include files via CLI
The first way to control which files and folders are processed is to list them in the CLI. In the following command, we only check `file1.sql` and all the files in the `src` folder, because folders are recursively traversed.

```shell
postgrestools check file1.sql src/
```

### Control files via configuration

The configuration file can be used to refine which files are processed. You can explicitly list the files to be processed using the `files.includes` field. `files.includes` accepts glob patterns such as `sql/**/*.sql`. Negated patterns starting with `!` can be used to exclude files.

Paths and globs inside the configuration file are resolved relative to the folder the configuration file is in. An exception to this is when a configuration file is extended by another.

#### Include files via configuration
Let’s take the following configuration, where we want to include only SQL files (`.sql`) that are inside the `sql/` folder:

```json
{
  "files": {
    "includes": ["sql/**/*.sql"]
  }
}
```
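
Negated patterns can be combined with plain globs in the same `includes` list. As a sketch (the `sql/generated` folder is only illustrative), this includes all SQL files except generated ones:

```json
{
  "files": {
    "includes": ["sql/**/*.sql", "!sql/generated/**"]
  }
}
```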

#### Exclude files via configuration
If you want to exclude files and folders from being processed, you can use the `files.ignore` field.

In the following example, we include all files, except those in any `test/` folder:

```json
{
  "files": {
    "ignore": ["**/test"]
  }
}
```

### Control files via VCS
You can ignore files that are ignored by your [VCS](/guides/vcs_integration.md).



39 changes: 39 additions & 0 deletions docs/features/editor_features.md
@@ -0,0 +1,39 @@
# Autocompletion & Hover

The language server provides autocompletion and hover information when connected to a database.

## Autocompletion

As you type SQL, the language server suggests relevant database objects based on your current context:

- **Tables**: Available tables from your database schema
- **Columns**: Columns from tables referenced in your query
- **Functions**: Database functions and procedures
- **Schemas**: Available schemas in your database
- **Keywords**: SQL keywords and syntax

The suggestions are context-aware: for example, when typing after `FROM`, you'll see table suggestions, and when typing after `SELECT`, you'll see column suggestions from relevant tables.
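
For instance, with a hypothetical `users` table, the completions offered depend on where the cursor sits:

```sql
-- In the SELECT list: column suggestions from the referenced table(s)
SELECT u.email, u.created_at
-- After FROM: table and schema suggestions from the connected database
FROM users u
WHERE u.created_at > now() - interval '7 days';
```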

## Hover Information

Hovering over database objects in your SQL shows detailed information:

- **Tables**: Schema, column list with data types
- **Columns**: Data type, nullable status, table location
- **Functions**: Return type, parameter information

The hover information is pulled from your database schema.

## Requirements

Both features require:
- A configured database connection
- The language server must be able to read schema information from your database

Without a database connection, these features are not available.

## Configuration

These features work automatically when you have a database connection configured. See the [database configuration guide](../guides/configure_database.md) for setup instructions.

The language server caches schema information on startup.
66 changes: 66 additions & 0 deletions docs/features/linting.md
@@ -0,0 +1,66 @@
# Linting

The language server provides static analysis through linting rules that detect potential issues in your SQL code. The linter analyses SQL statements for safety issues, best-practice violations, and problems that could break existing applications.

## Rules

Rules are organized into categories like Safety, Performance, and Style. Each rule can be configured individually or disabled entirely.

See the [Rules Reference](../reference/rules.md) for the complete list of available rules and their descriptions.

## Configuration

Configure linting behavior in your `postgrestools.jsonc`:

```jsonc
{
  "linter": {
    // Enable/disable the linter entirely
    "enabled": true,
    "rules": {
      // Configure rule groups
      "safety": {
        // Individual rule configuration
        "banDropColumn": "error", // error, warn, info, hint, off
        "banDropTable": "warn",
        "addingRequiredField": "off"
      }
    }
  }
}
```

## Suppressing Diagnostics

You can suppress specific diagnostics using comments:

```sql
-- pgt-ignore-next-line safety/banDropColumn: Intentionally dropping deprecated column
ALTER TABLE users DROP COLUMN deprecated_field;

-- pgt-ignore safety/banDropTable: Cleanup during migration
DROP TABLE temp_migration_table;
```

For more details on suppressions, check out [our guide](../guides/suppressions.md).

## Schema-Aware Analysis

Some rules require a database connection to perform schema-aware analysis. If no connection is configured, they are skipped.

## CLI Usage

The linter can also be used via the CLI for CI integration:

```bash
# Lint specific files
postgrestools check migrations/

# With specific rules
postgrestools check migrations/ --only safety/banDropColumn

# Skip certain rules
postgrestools check migrations/ --skip safety/banDropTable
```

See the [CLI Reference](../reference/cli.md) for more options, and check the guide on [linting migrations](../guides/checking_migrations.md).
40 changes: 40 additions & 0 deletions docs/features/plpgsql.md
@@ -0,0 +1,40 @@
# PL/pgSQL Support

The Postgres Language Server partially supports `PL/pgSQL`. By default, it uses `libpg_query` to parse the function body and reports any syntax errors. This is a great way to shorten the feedback loop during development. Unfortunately, the reported errors do not contain any location information, so we always report an error spanning the entire function body.

To get more sophisticated and fine-grained errors, we strongly recommend enabling the [`plpgsql_check`](https://github.com/okbob/plpgsql_check) extension in your development database.

```sql
CREATE EXTENSION IF NOT EXISTS plpgsql_check;
```

The language server will automatically detect the extension and start forwarding its reports as diagnostics.

For any `CREATE FUNCTION` statement with the language `PL/pgSQL`, the following process occurs:

1. The language server creates the function in a temporary transaction
2. It calls `plpgsql_check_function()` to perform comprehensive static analysis of the function body
3. For trigger functions, it runs the analysis against each table that has triggers using this function, providing context-specific validation
4. It maps errors back to source locations with token-level precision
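
As an illustration, assume a table `users(id int, name text)` (hypothetical); `plpgsql_check` would then flag the misspelled column inside the function body with a precise source location:

```sql
CREATE TABLE users (id int PRIMARY KEY, name text);

CREATE FUNCTION user_name(p_id int) RETURNS text
LANGUAGE plpgsql AS $$
DECLARE
  result text;
BEGIN
  -- plpgsql_check reports: column "naem" does not exist
  SELECT naem INTO result FROM users WHERE id = p_id;
  RETURN result;
END;
$$;
```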

The integration provides more detailed and actionable feedback compared to basic syntax checking, including:

> - checks fields of referenced database objects and types inside embedded SQL
> - validates you are using the correct types for function parameters
> - identifies unused variables and function arguments, unmodified OUT arguments
> - partial detection of dead code (code after an RETURN command)
> - detection of missing RETURN command in function (common after exception handlers, complex logic)
> - tries to identify unwanted hidden casts, which can be a performance issue like unused indexes
> - ability to collect relations and functions used by function
> - ability to check EXECUTE statements against SQL injection vulnerability

You can always disable the integration if you do not want the language server to hit your development database.

```json
{
  "plpgsqlCheck": {
    "enabled": false
  }
}
```

26 changes: 26 additions & 0 deletions docs/features/syntax_diagnostics.md
@@ -0,0 +1,26 @@
# Syntax Diagnostics

The Postgres Language Server reports diagnostics for syntax errors in your SQL files. Syntax diagnostics are enabled by default and cannot be disabled.

## How it Works

The language server uses [libpg_query](https://github.com/pganalyze/libpg_query) to parse SQL statements, which is the actual Postgres parser packaged as a library. This ensures 100% compatibility with Postgres syntax.

When you type or modify SQL, the language server:
1. Parses the SQL using `libpg_query`
2. Reports any syntax errors as diagnostics

## Features

- Always correct: Uses the same parser as Postgres itself for accurate syntax validation
- Named Parameter Support: We convert named parameters such as `:param` and `@param` (which Postgres itself does not support, but ORMs and other tooling commonly use) to positional parameters (`$1`, `$2`) before parsing
- `PL/pgSQL`: In addition to SQL, also validates `PL/pgSQL` function bodies for basic syntax errors

## Error Information

Syntax errors include:

- The exact error message from the Postgres parser
- Source location when available (though `libpg_query` does not always provide precise positions)
- Error severity (always "Error" for syntax issues)

Note: For more advanced `PL/pgSQL` validation beyond basic syntax, see the [PL/pgSQL feature](plpgsql.md) which integrates with the `plpgsql_check` extension.
56 changes: 56 additions & 0 deletions docs/features/type_checking.md
@@ -0,0 +1,56 @@
# Type Checking

The Postgres Language Server validates your SQL queries against your actual database schema. As you type, it checks that your tables exist, your columns are spelled correctly, and data types match, just as Postgres would when executing the query.

## How it Works

When you write a SQL query, the language server:

1. Connects to your database
2. Asks Postgres to validate your query without running it
3. Shows any errors directly in your editor

Since it uses your actual database, you get the same validation that happens at runtime, as soon as you type.

## Supported Statements

Since we are using `EXPLAIN`, type checking is only available for DML statements:

- `SELECT` statements
- `INSERT` statements
- `UPDATE` statements
- `DELETE` statements
- Common Table Expressions (CTEs)
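
Conceptually, the check is similar to asking Postgres to plan the statement yourself. This is a sketch of the idea, not the exact mechanism the language server uses:

```sql
-- Postgres plans the query without executing it,
-- surfacing schema errors such as a misspelled column:
EXPLAIN SELECT id, user_naem FROM users;
-- ERROR:  column "user_naem" does not exist
```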

## Configuration

You can configure the schemas included in the search path for type checking:

```json
{
  "typecheck": {
    "searchPath": ["public", "app_*", "auth"]
  }
}
```

The `searchPath` supports:

- Exact schema names (e.g., `"public"`)
- Glob patterns (e.g., `"app_*"` to match `app_users`, `app_products`, etc.)
- Ordered search: schemas are searched in the order specified

Even if not specified, the language server always searches `public` in last position.

## What Gets Checked

The type checker catches common SQL mistakes:

- **Typos in table and column names**: `SELECT user_naem FROM users` → "column 'user_naem' does not exist"
- **Type mismatches**: `WHERE user_id = 'abc'` when `user_id` is an integer
- **Missing tables**: `SELECT * FROM user` when the table is named `users`
- **Wrong number of columns**: `INSERT INTO users VALUES (1)` when the table has multiple required columns

## Requirements

Type checking requires:

- An active database connection
- Appropriate permissions to prepare statements in your database