Automatically generate literal models via codegen #64
Conversation
I have added an example in bc3dc8b of how to derive models from a remote source. In this case, harp-tech keeps a list of all harp devices. Since every harp device must reserve a whoami at the harp-tech website, we can simply use that list to generate our models. To version the list, we can lock it to a specific commit. This way the list will always be compatible with registered devices, it will encourage people to follow the standard, and it removes several sources of human labor and error. This pattern could easily be extended to other remote sources.
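A rough sketch of what that pattern can look like in Python — the helper name, the pinned-commit URL handling, and the assumed "devices" layout of whoami.yml below are illustrative, not necessarily what this PR implements:

```python
import urllib.request

import yaml  # assumes PyYAML is available


# Pin the registry to a specific commit so generated models are reproducible.
WHOAMI_URL = (
    "https://raw.githubusercontent.com/harp-tech/protocol/"
    "97ded281bd1d0d7537f90ebf545d74cf8ba8805e/whoami.yml"
)


def fetch_whoami_entries(url: str = WHOAMI_URL) -> dict:
    """Download the pinned whoami.yml and return its device table."""
    with urllib.request.urlopen(url) as response:
        data = yaml.safe_load(response.read())
    # The "devices" key and the shape of each entry are assumptions about the
    # file layout, not something guaranteed by this PR.
    return data.get("devices", {})


if __name__ == "__main__":
    for whoami, info in fetch_whoami_entries().items():
        print(whoami, info)
```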
I don't think C901 is that great in this case. The function has a few guard clauses, but they are small, self-contained, and well documented.
The URLs should be entered on a single line.
The coverage settings may need to change in order for tests to pass. I will defer to a maintainer for this decision...
discriminator="name",
data_source_identifier="https://raw.githubusercontent.com/harp-tech/protocol/97ded281bd1d0d7537f90ebf545d74cf8ba8805e/whoami.yml", # noqa: E501
parser=lambda: get_who_am_i_list(
    url="https://raw.githubusercontent.com/harp-tech/protocol/97ded281bd1d0d7537f90ebf545d74cf8ba8805e/whoami.yml" # noqa: E501
I wonder if we can separate this out into a different PR?
Sounds good!
if __name__ == "__main__":

    root = Path(__file__).parent / r"src/aind_data_schema_models/models"
    target_folder = Path(r".\src\aind_data_schema_models\_generated")
Could we define this in a similar way to how root is defined?
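For instance, a minimal sketch of what that could look like, assuming the generator script sits at the repository root so both paths can be resolved relative to __file__:

```python
from pathlib import Path

# Resolve both paths relative to this file so the script behaves the same
# regardless of the working directory it is launched from.
repo_root = Path(__file__).parent
root = repo_root / "src" / "aind_data_schema_models" / "models"
target_folder = repo_root / "src" / "aind_data_schema_models" / "_generated"
```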
Yup. Although my general question would be: where should this script run from? I left it at the root of the repo for now, but that may not be the preferred location. I don't think it should be shipped with the package either, though, since it should only be used during generation.
We could put the generator code in a separate repo and bundle it into a Docker image (or PyPI package). We could then just put the instructions in the README, e.g.:
docker run -e {parameters specifying the input and output directories} name-of-docker-image:latest
I'd like to get this merged (but with the external registry removed). Do you have time to split this into separate PRs? If not, I can split the code into branches in the main models repo, though I'm not sure there's an easy way to do that while preserving your commit history.
Closing in favor of #91
This PR attempts to fix #63 by automatically generating models.
I should preface this PR by saying that the underlying pattern of these enum-like classes is not ideal and smells a bit like an anti-pattern to me, mainly because of AllenNeuralDynamics/aind-data-schema#960.
Several of the patterns I used for the generators reflect the following discussions: