Thanks for your open-source model and paper; it's great.
While hacking on llama.cpp for one night, I noticed that it has a very similar architecture to GPT-J.
No offense, but did you just train it with trivial modifications and multiple open-source datasets?
A lot of LLMs are extremely similar architecture-wise. The training process and the resultant model are the novel contributions, not the architecture.
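To illustrate how small the architectural deltas between such models can be: one of LLaMA's changes relative to GPT-J-style blocks is replacing standard LayerNorm with RMSNorm. A minimal NumPy sketch of the two normalizations (illustrative only, not the actual model code; shapes and epsilon are assumptions):

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    # Classic LayerNorm (GPT-J style): center by the mean,
    # scale by the standard deviation, then apply gain and bias.
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return gamma * (x - mu) / np.sqrt(var + eps) + beta

def rms_norm(x, gamma, eps=1e-5):
    # RMSNorm (LLaMA style): no mean subtraction and no bias,
    # just rescale by the root-mean-square of the activations.
    rms = np.sqrt((x ** 2).mean(axis=-1, keepdims=True) + eps)
    return gamma * x / rms

# Tiny example: a batch of 2 vectors of width 8.
x = np.random.randn(2, 8).astype(np.float32)
g = np.ones(8, dtype=np.float32)
b = np.zeros(8, dtype=np.float32)
print(layer_norm(x, g, b).shape, rms_norm(x, g).shape)
```

The one-line difference between the two functions is typical of the kind of "trivial modification" being discussed; the heavy lifting is in the data and training run, not in code like this.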
Ah, Facebook is rich; that is all I took away from the LLaMA paper.