Feature Request: Transformer Debugger - Debugging and controlling the behavior of transformer based LLM models. #1513
Comments
Thanks for the suggestion! How do you envision |
Note that there is an on-going effort to integrate Learning Interpretability Tool (LIT) with KerasNLP. #1521 is an example of adding the |
Hello @SamanehSaadat
I believe we should reserve a whole directory for interpretability tools here: https://github.com/keras-team/keras-nlp/tree/master/keras_nlp. We would need to incorporate the whole thing, but that's a time-consuming goal. Here's a one-minute video on how it works: https://www.youtube.com/watch?v=5D_GiJv7O-M
Thank you for pointing that out. I'm not an expert here, but LIT sounds like a very general approach, while TDB is specific to LLMs. TDB could be a lengthy feature to implement, so I'm happy to contribute if there is any need.
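To make the scope of such a tool concrete: the core quantity a transformer debugger like TDB surfaces is the per-head attention distribution inside each layer. Below is a minimal, hypothetical NumPy sketch of that computation (illustrative only; it is not the TDB or keras-nlp API, and the function name `attention_scores` is invented for this example):

```python
import numpy as np

def attention_scores(q, k):
    """Scaled dot-product attention weights for one head.

    This is the matrix an interpretability tool would visualize:
    row i shows how much query token i attends to each key token.
    """
    d = q.shape[-1]
    logits = q @ k.T / np.sqrt(d)
    # Numerically stable softmax over the key axis.
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))  # 4 query tokens, head dim 8
k = rng.normal(size=(4, 8))  # 4 key tokens
w = attention_scores(q, k)
print(w.shape)  # (4, 4); each row sums to 1
```

A real integration would hook these matrices out of the model's attention layers (e.g. Keras's `MultiHeadAttention` can return them via `return_attention_scores=True`) rather than recomputing them, but the inspected object is the same.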
Short Description
Paper
https://arxiv.org/pdf/2211.00593v1.pdf
Existing Implementations
Other Information
This tool could be a great guide for people working on the interpretability of LLMs. Keras-nlp already includes many LLMs, and engineers deploying them might find it very useful for ensuring the safety, reliability, interpretability, and controllability of the models available here.