From training to IR translation #4
The .pth file only stores the weights of the PyTorch model; the IR generation is performed by manual conversion. You may refer to https://github.com/mit-han-lab/tiny-training/blob/main/compilation/mcu_ir_gen.py for the detailed process.
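A minimal sketch of what that means in practice, assuming a hypothetical `build_model()` constructor and checkpoint name (neither is this repo's actual API): the architecture comes from the Python code, and the .pth file supplies only the tensors.

```python
import torch

# A .pth checkpoint is just a serialized state_dict (weights only);
# the model graph has to be rebuilt from the code itself.
# "ckpt.pth" and build_model() are placeholders, not this repo's API.
state_dict = torch.load("ckpt.pth", map_location="cpu")

model = build_model()               # hypothetical constructor for the same architecture
model.load_state_dict(state_dict)  # attach the trained weights to the rebuilt graph
```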
Yes, let me rephrase my question. What should be done if I want to manually modify the weights we get from the first step, and then load them into the model for conversion in the second step?
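For concreteness, a hedged sketch of that workflow, again with placeholder names (`ckpt.pth`, `build_model`) rather than this repo's actual API: edit entries of the state_dict in place, then load it into the model that the conversion script consumes.

```python
import torch

# Load the weights produced in step one.
state_dict = torch.load("ckpt.pth", map_location="cpu")

# Manually modify a weight tensor (a purely illustrative edit;
# the key name "classifier.weight" is an assumption).
for name, tensor in state_dict.items():
    if name.endswith("classifier.weight"):
        state_dict[name] = tensor * 0.5

# Load the edited weights into the model used for IR generation in step two.
model = build_model()  # hypothetical constructor
model.load_state_dict(state_dict)
# ...then run the conversion in compilation/mcu_ir_gen.py on this model.
```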
Hi @kgorgor, were you able to run the code generation.py file that converts your model into C++ code (from the other MIT TinyEngine repo) with your own customized model for on-device training? If yes, did the three files you used to compile your customized model come from:
Thanks!
Thanks for your reply! @729557989 Did you mean https://github.com/mit-han-lab/tinyengine? I didn't run any code from it, and I didn't find generation.py in it. I thought I would be able to translate my customized PyTorch models and get the IRs using just the code in this repo, right?
Hi @songhan @zhijian-liu @Lyken17 @tonylins @synxlin. To my understanding (correct me if I'm wrong), the first step is to train models and save the weights in .pth files, i.e., to complete the steps in the README in the "algorithm" folder. The second step is to translate the PyTorch models into .pkl and .json files, i.e., to complete the "compilation" folder.
I have completed the two steps separately, but the problem is how to connect them. In other words, how do we take the .pth files from the first step and perform the translation in the second step? I tried a simple model.load_state_dict, but the model from the first step had its mcu_head_type set to "fp", while the script in the second step (mcu_ir_gen.py) requires it to be "quantized" (see the sketch after this comment). I also tried running the first step with mcu_head_type set to "quantized", but that caused a huge accuracy drop.
I would appreciate it if you could provide some help!
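One possible way to bridge the head-type mismatch described above, sketched under assumptions: `build_mcu_model` and the checkpoint path are hypothetical, and `strict=False` merely skips the keys that don't line up; it does not convert an fp head into a quantized one.

```python
import torch

# Load the checkpoint trained with mcu_head_type="fp" (step one).
ckpt = torch.load("trained_fp_head.pth", map_location="cpu")  # placeholder path
state_dict = ckpt.get("state_dict", ckpt)  # some checkpoints nest the weights

# Build the model the way mcu_ir_gen.py expects it (step two).
model = build_mcu_model(mcu_head_type="quantized")  # hypothetical constructor

# strict=False loads every key that matches and reports the rest, so the
# backbone weights transfer while the incompatible head is left untouched.
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print("missing keys:", missing)        # e.g. the quantized head's parameters
print("unexpected keys:", unexpected)  # e.g. the fp head's parameters
```

Inspect the reported keys before converting: if anything beyond the head shows up as missing or unexpected, the two steps are probably building different architectures.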