Conversation

@UnboundVariable (Collaborator)

This PR improves the "signature help" language server feature in two ways:

  1. It adds support for the recently introduced "stub mapper", which maps symbol declarations within stubs to their implementation counterparts. This allows signature help to display docstrings from the original implementation.
  2. It incorporates a more robust fix for a bug that was addressed in a previous PR, and adds more comprehensive tests to cover this case.

@github-actions (Contributor)

mypy_primer results

No ecosystem changes detected ✅
No memory usage changes detected ✅

@AlexWaygood added the server (Related to the LSP server) and ty (Multi-file analysis & type inference) labels on Jul 26, 2025

// Create a range from the offset for the covering_node function.
let range = TextRange::new(offset, offset);
// Use length 1 if it fits within the root node, otherwise use zero-length range.

Member

What's the benefit of using a 1-byte offset? I think this is a bit dangerous, as it might panic if anyone ends up using the passed range to take a peek at the source text (or calls one of the token methods that enforce that the range falls directly on a token boundary).

It might be better to use the actual length of the character at offset rather than an incorrect range.

@UnboundVariable (Collaborator, Author)

This is currently required to "trick" covering_node into not returning the node to the left of the cursor position. Is there a reason why covering_node returns the node to the left of a zero-length range, and do other callers rely on this behavior? If not, perhaps the best fix here is to modify covering_node so that it doesn't return the node to the left of a zero-length range.

Is there an existing function that retrieves the length of the character at offset?

Member

To get the size of a character at a given offset:

a[offset..].chars().next().map(TextLen::text_len);
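
For a fuller picture, here is a small sketch of that idea (the helper name and the end-of-file fallback are my own assumptions, not code from this PR), using the ruff_text_size types: build the range from the actual UTF-8 length of the character at offset instead of a hard-coded 1-byte span.

use ruff_text_size::{TextLen, TextRange, TextSize};

// Hypothetical helper: cover the whole character at `offset` so the range
// never splits a multi-byte character.
fn range_of_char_at(source: &str, offset: TextSize) -> TextRange {
    match source[usize::from(offset)..].chars().next() {
        // `text_len()` returns the character's UTF-8 length as a `TextSize`.
        Some(c) => TextRange::at(offset, c.text_len()),
        // `offset` is at end-of-file: fall back to an empty range.
        None => TextRange::empty(offset),
    }
}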

> Is there a reason why covering_node returns a node to the left of a zero-length range?

covering_node uses contains to test whether two ranges overlap, which I find to be the correct behavior when we start with an actual range (in which case adjacent nodes don't overlap it).
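
To make that concrete, a minimal sketch (assuming contains here means TextRange::contains_range or something equivalent; the offsets are made up): an empty range that sits exactly at a node's end boundary is still contained by that node's range, which is why the node to the left of the cursor gets picked up.

use ruff_text_size::{TextRange, TextSize};

// Hypothetical node spanning bytes 10..20, with the cursor right after it.
let node_range = TextRange::new(TextSize::from(10), TextSize::from(20));
let cursor = TextRange::empty(TextSize::from(20));

// 10 <= 20 && 20 <= 20, so the zero-length cursor range is still "covered"
// by the node that ends at the cursor position.
assert!(node_range.contains_range(cursor));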

The idea behind the API is that covering_node is very low-level and unopinionated. It's also not intended to be used directly with an offset, because there are cases where the offset falls directly between two nodes and some prioritization is required. The way we've implemented this in other LSP methods is by using tokens().at_offset:

let token = parsed
        .tokens()
        .at_offset(offset)
        .max_by_key(|token| match token.kind() {
            TokenKind::Name
            | TokenKind::String
            | TokenKind::Complex
            | TokenKind::Float
            | TokenKind::Int => 1,
            _ => 0,
        })?;

This allows the caller to customize which tokens should be preferred. I'm not sure what the heuristic should be for signature help. We can consider extracting the above logic from find_goto_target if it is the same, or we can define our own here.

@dhruvmanila (Member) left a comment

Thanks!

@UnboundVariable merged commit 2413483 into astral-sh:main on Jul 28, 2025
39 checks passed
@UnboundVariable deleted the sig_help branch on July 28, 2025 at 17:57
dcreager added a commit that referenced this pull request on Aug 1, 2025:
* main: (24 commits)
  Add `Checker::context` method, deduplicate Unicode checks (#19609)
  [`flake8-pyi`] Preserve inline comment in ellipsis removal (`PYI013`) (#19399)
  [ty] Add flow diagram for import resolution
  [ty] Add comments to some core resolver functions
  [ty] Add missing ticks and use consistent quoting
  [ty] Reflow some long lines
  [ty] Unexport helper function
  [ty] Remove offset from `CompletionTargetTokens::Unknown`
  [`pyupgrade`] Fix `UP030` to avoid modifying double curly braces in format strings (#19378)
  [ty] fix a typo  (#19621)
  [ty] synthesize `__replace__` for dataclasses (>=3.13) (#19545)
  [ty] Discard `Definition`s when normalizing `Signature`s (#19615)
  [ty] Fix empty spans following a line terminator and unprintable character spans in diagnostics (#19535)
  Add `LinterContext::settings` to avoid passing separate settings (#19608)
  Support `.pyi` files in ruff analyze graph (#19611)
  [ty] Sync vendored typeshed stubs (#19607)
  [ty] Bump docstring-adder pin (#19606)
  [`perflint`] Ignore rule if target is `global` or `nonlocal` (`PERF401`) (#19539)
  Add license classifier back to pyproject.toml (#19599)
  [ty] Add stub mapping support to signature help (#19570)
  ...