
Create unified script and workflow for llama-fast models validation #94

Merged: 1 commit merged on Apr 11, 2024

Conversation

guangy10 (Contributor) commented Apr 9, 2024

To run the workflow across different platforms for supported models, simply run the top-level script bash ./scripts/workflow.sh [cpu | cuda], which relies on a set of scripts under .ci/scripts. This way, CI and local dev run pretty much the same scripts. For platforms that are not supported in GitHub CI, we can run the script manually on the target platform.
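
For example, a local invocation would look like the following (a minimal sketch of the usage described above; the actual model export and validation steps live in the scripts under .ci/scripts):

```bash
# Run the end-to-end validation workflow on a CPU-only machine
bash ./scripts/workflow.sh cpu

# Run the same workflow on a CUDA-enabled machine
bash ./scripts/workflow.sh cuda
```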

facebook-github-bot added the CLA Signed label (managed by the Meta Open Source bot) on Apr 9, 2024
guangy10 force-pushed the et_validation branch 6 times, most recently from 4e3c734 to 8a57c22, on April 9, 2024 04:04
guangy10 marked this pull request as draft on April 9, 2024 04:04
guangy10 force-pushed the et_validation branch 8 times, most recently from eb57f3f to d949b12, on April 9, 2024 06:57
guangy10 requested review from kit1980 and huydhn on April 9, 2024 21:28
guangy10 marked this pull request as ready for review on April 9, 2024 21:29
guangy10 (Contributor, Author) commented Apr 9, 2024

It's not fully ready for review, as the newly added models will require running the hg_converter first, but the CI part should be okay. My main goal is to run model validation jobs on macOS x86 (supported), macOS M1/ARM (supported), Linux x86 (supported), and Linux ARM (?). @huydhn @kit1980 I know pytorch test-infra has lots of runners, but I'm not sure whether those are self-hosted and whether we can use one here. Also, is it possible to run the Android/iOS emulator on a host platform for correctness validation?

guangy10 force-pushed the et_validation branch 5 times, most recently from 9c19e53 to 8b0b17a, on April 11, 2024 03:13
guangy10 (Contributor, Author) commented:
The inference error using ExecuTorch is fixed in #122

guangy10 (Contributor, Author) commented:
The macOS x86 issue is the same as what Scott reported earlier when building ExecuTorch. Let's disable it in this PR and move on; we will have to re-enable it once the issue is resolved.

guangy10 changed the title from "Validate ExecuTorch path for more models in CI" to "Create unified script and workflow for llama-fast models validation" on Apr 11, 2024