[autodoc] Parse PEP 257 style docstring #410

vwxyzjn opened this issue Sep 26, 2023 · 0 comments

Copied from https://github.com/rr-/docstring_parser/issues/71#issue-1318744037

The approved PEP 257 mentions so-called "attribute docstrings": string literals placed on the line after an attribute is defined. These kinds of docstrings are supported by several packages, for example Sphinx's autodoc extension.

An example is shown below:

```python
from dataclasses import dataclass
from typing import Optional

from docstring_parser import parse_from_object


@dataclass
class RewardConfig:
    """
    RewardConfig collects all training arguments related to the [`RewardTrainer`] class.
    """

    # Each field is followed by an attribute docstring, per PEP 257.
    max_length: Optional[int] = None
    """The maximum length of the sequences in the batch. This argument is required if you want to use the default data collator."""
    gradient_checkpointing: Optional[bool] = True
    """If True, use gradient checkpointing to save memory at the expense of slower backward pass."""


doc = parse_from_object(RewardConfig)

print(doc.short_description)
print()
for param in doc.params:
    print(f"{param.arg_name}, {param.type_name} (default - {param.default})\n   {param.description}")
    print()
```
This prints:

```
RewardConfig collects all training arguments related to the [`RewardTrainer`] class.

max_length, Optional[int] (default - None)
   The maximum length of the sequences in the batch. This argument is required if you want to use the default data collator.

gradient_checkpointing, Optional[bool] (default - True)
   If True, use gradient checkpointing to save memory at the expense of slower backward pass.
```

This appears not to be supported by the autodoc feature of this repo. For example, in TRL our config is https://github.com/huggingface/trl/blob/d608fea0d107d4359f9c03a9d6dd434d292a9f50/trl/trainer/ppo_config.py, but the related docs do not show the attribute documentation.
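
For reference, attribute docstrings are discarded by the interpreter at runtime (they are not attached to the attribute as `__doc__`), so any tool has to recover them from the source text. Below is a minimal sketch of how they could be extracted with the standard `ast` module. The helper name `attribute_docstrings` is made up for illustration; this is not necessarily how `docstring_parser` or this repo's autodoc would implement it.

```python
import ast
import inspect
import textwrap


def attribute_docstrings(cls):
    """Collect PEP 257 attribute docstrings from a class body.

    An attribute docstring is a bare string literal on the statement
    right after an annotated assignment. Plain (unannotated)
    assignments are ignored in this sketch for simplicity.
    """
    source = textwrap.dedent(inspect.getsource(cls))
    class_node = ast.parse(source).body[0]  # the ClassDef node
    docs = {}
    pending = None  # attribute name assigned by the previous statement
    for node in class_node.body:
        if isinstance(node, ast.AnnAssign) and isinstance(node.target, ast.Name):
            pending = node.target.id
        elif (
            pending is not None
            and isinstance(node, ast.Expr)
            and isinstance(node.value, ast.Constant)
            and isinstance(node.value.value, str)
        ):
            docs[pending] = inspect.cleandoc(node.value.value)
            pending = None
        else:
            pending = None
    return docs


# Reusing the RewardConfig definition from the example above:
print(attribute_docstrings(RewardConfig))
# {'max_length': 'The maximum length of the sequences in the batch. ...',
#  'gradient_checkpointing': 'If True, use gradient checkpointing ...'}
```

Presumably Sphinx's autodoc does something along these lines when it renders attribute documentation, which is what this request amounts to.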
