Been curious for a while now, and more so since reading Disentangling Dense Embeddings with Sparse Autoencoders (https://arxiv.org/html/2408.00657v2).
It looks like most of the ingredients to do this with text embeddings are already here in pyvene?
@fblissjr Yes! We recently released an SAE tutorial on hidden layers, not the embedding layers. But if you specify the component to be the embedding layer output, you could essentially replicate this paper's results, IIUC: https://github.com/stanfordnlp/pyvene/blob/main/tutorials/basic_tutorials/Sparse_Autoencoder.ipynb
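For context, the core operation the paper applies to embeddings is just a standard sparse autoencoder forward pass: a ReLU encoder producing an overcomplete sparse code, a linear decoder reconstructing the input, and a loss combining reconstruction error with an L1 sparsity penalty. Here is a minimal NumPy sketch of that objective (toy sizes, random untrained weights, training loop omitted; this is not pyvene's API, just the math the tutorial implements):

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, d_hidden = 8, 32  # toy sizes; real embedding dims are much larger

# Randomly initialized SAE parameters (a real SAE would train these)
W_enc = rng.normal(scale=0.1, size=(d_model, d_hidden))
b_enc = np.zeros(d_hidden)
W_dec = rng.normal(scale=0.1, size=(d_hidden, d_model))
b_dec = np.zeros(d_model)

def sae_forward(x):
    """Encode embeddings into sparse features, then reconstruct them."""
    h = np.maximum(x @ W_enc + b_enc, 0.0)  # ReLU -> non-negative, sparse codes
    x_hat = h @ W_dec + b_dec               # linear reconstruction
    return h, x_hat

x = rng.normal(size=(4, d_model))  # a batch of 4 stand-in "embedding" vectors
h, x_hat = sae_forward(x)

# Training loss would be: recon_loss + lambda * l1_penalty
recon_loss = np.mean((x - x_hat) ** 2)  # reconstruction term
l1_penalty = np.mean(np.abs(h))         # sparsity term
```

With pyvene, the idea above would amount to collecting activations at the embedding-layer output (rather than a transformer block's output, as in the linked tutorial) and training the SAE on those.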
— frankaging
Suggestion / Feature Request