There are a couple of great libraries for model explainability: SHAP and Transformers Interpret. These libraries primarily focus on explaining the outputs of classification, question-answering, summarization and translation models. There isn't an easy way to explain the output of embeddings models.
This issue will add explain methods to embeddings instances. The implementation will create n permutations for each input text element to determine term importance for query results, as sketched below.
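A minimal sketch of the permutation idea, assuming a generic `score(query, text)` similarity function; the `explain` name and signature here are illustrative, not the final API. Each token is dropped in turn, the modified text is re-scored against the query, and the drop in similarity is treated as that token's importance.

```python
from typing import Callable, List, Tuple

def explain(query: str, text: str, score: Callable[[str, str], float]) -> List[Tuple[str, float]]:
    """
    Estimate term importance by permutation: remove each token in turn,
    re-score the modified text against the query, and treat the drop in
    similarity as that token's importance.
    """
    tokens = text.split()
    baseline = score(query, text)

    importance = []
    for i, token in enumerate(tokens):
        # Rebuild the text with token i removed
        permutation = " ".join(tokens[:i] + tokens[i + 1:])
        # A large drop in score means the token mattered for the match
        importance.append((token, baseline - score(query, permutation)))

    # Highest-impact terms first
    return sorted(importance, key=lambda x: x[1], reverse=True)
```

For example, `score` could wrap a txtai similarity call such as `embeddings.similarity(query, [text])[0][1]`. Since a text with n tokens produces n permutations, batching the permuted texts through the model in a single call keeps the cost manageable.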