[DOCS] Inconsistency in supported LLMs engine names #520

@Chotom

Description

In the Supported LLM Models section, there is information about the supported engines:

You can use any LLM provider that is supported by LangChain, e.g., ai21, aleph_alpha, anthropic, anyscale, azure, cohere, huggingface_endpoint, huggingface_hub, openai, self_hosted, self_hosted_hugging_face. Check out the LangChain official documentation for the full list.
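For context, the engine name in question is the value set in the NeMo Guardrails `config.yml`. A minimal sketch, assuming the standard `models` section format from the NeMo Guardrails configuration docs (the model name `gpt-3.5-turbo-instruct` is illustrative):

```yaml
# Hedged sketch of a NeMo Guardrails config.yml `models` entry.
# The `engine` value (here `openai`) is one of the names quoted above;
# the ambiguity raised in this issue is which LangChain page defines
# the valid set of such names.
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct
```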

The problem is that the documentation doesn't link to any particular page in the LangChain docs. I would guess that it refers to this page: LangChain > Components > LLMs. But there is also a LangChain page dedicated to specific providers: LangChain > Providers.

And while some of the listed names seem to match the LangChain LLM components (e.g., ai21, aleph_alpha, anthropic), others, such as azure or huggingface_hub, in my opinion create an inconsistency in the naming convention. The image below shows what I mean (there is no provider called simply azure):

[screenshot: LangChain LLM integrations list, with no provider named simply "azure"]

So the questions are:

  1. Which part of the LangChain documentation does NeMo Guardrails refer to?
  2. Where can I find the actual list of supported LLMs/engines in NeMo?

Metadata

Labels

  - bug: Something isn't working
  - documentation: Improvements or additions to documentation
  - good first issue: Good for newcomers
