Currently, LLTFI takes a very long time to do fault injection in NLP models (BERT: 890 s, GPT: 60 s). For GPT there are 1.4B LLFI cycles, i.e. 1.4B calls into LLFI's shared library. How about we statically link the fault injection library in these cases? This should significantly reduce the fault injection time at the expense of a larger binary.
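For context, here is a minimal sketch of why the call count dominates. This is not LLTFI's actual runtime API; the hook name `fi_cycle_hook` and the state variables are hypothetical, just to illustrate that with a shared library every one of the ~1.4B hook calls goes through the PLT and cannot be inlined, while a statically linked (ideally LTO-built) runtime lets the compiler inline the common "not the chosen cycle yet" fast path:

```cpp
// Hypothetical per-instruction hook, assuming names not taken from LLTFI.
#include <cstdint>
#include <cstdio>

// Hypothetical runtime state: the cycle picked for injection.
static uint64_t g_current_cycle = 0;
static uint64_t g_target_cycle  = 987654321;  // chosen at random in real runs

// Hook inserted before each instrumented instruction.
// With static linking + LTO this can be inlined at every call site;
// behind a shared-library boundary it stays an indirect PLT call.
inline void fi_cycle_hook(int64_t* value) {
    if (++g_current_cycle != g_target_cycle)
        return;      // fast path: taken on all but one of the ~1.4B cycles
    *value ^= 1;     // slow path: flip a bit once per run
}

int main() {
    int64_t acc = 0;
    const uint64_t kCycles = 1'400'000'000ULL;  // roughly the GPT run size
    for (uint64_t i = 0; i < kCycles; ++i)
        fi_cycle_hook(&acc);
    std::printf("acc = %lld\n", static_cast<long long>(acc));
    return 0;
}
```

Comparing this built with the hook in a shared object versus statically linked at -O2 should show the per-call overhead the issue describes; the trade-off, as noted above, is a larger instrumented binary.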