fix: Fix Image Path and Broken xref Links in Documentation #1813

Open · wants to merge 4 commits into base: main
208 changes: 104 additions & 104 deletions spring-ai-docs/src/main/antora/modules/ROOT/nav.adoc
@@ -1,114 +1,114 @@
-* xref:index.adoc[Overview]
-** xref:concepts.adoc[AI Concepts]
-* xref:getting-started.adoc[Getting Started]
-* xref:api/chatclient.adoc[]
-** xref:api/advisors.adoc[Advisors]
-* xref:api/index.adoc[AI Models]
-** xref:api/chatmodel.adoc[Chat Models]
-*** xref:api/chat/comparison.adoc[Chat Models Comparison]
-*** xref:api/bedrock-converse.adoc[Amazon Bedrock Converse]
-*** xref:api/bedrock-chat.adoc[Amazon Bedrock]
-**** xref:api/chat/bedrock/bedrock-anthropic3.adoc[Anthropic3]
-**** xref:api/chat/bedrock/bedrock-anthropic.adoc[Anthropic2]
-**** xref:api/chat/bedrock/bedrock-llama.adoc[Llama]
-**** xref:api/chat/bedrock/bedrock-cohere.adoc[Cohere]
-**** xref:api/chat/bedrock/bedrock-titan.adoc[Titan]
-**** xref:api/chat/bedrock/bedrock-jurassic2.adoc[Jurassic2]
-*** xref:api/chat/anthropic-chat.adoc[Anthropic 3]
-**** xref:api/chat/functions/anthropic-chat-functions.adoc[Function Calling]
-*** xref:api/chat/azure-openai-chat.adoc[Azure OpenAI]
-**** xref:api/chat/functions/azure-open-ai-chat-functions.adoc[Function Calling]
-*** xref:api/chat/google-vertexai.adoc[Google VertexAI]
-**** xref:api/chat/vertexai-gemini-chat.adoc[VertexAI Gemini]
-***** xref:api/chat/functions/vertexai-gemini-chat-functions.adoc[Function Calling]
-*** xref:api/chat/groq-chat.adoc[Groq]
-*** xref:api/chat/huggingface.adoc[Hugging Face]
-*** xref:api/chat/mistralai-chat.adoc[Mistral AI]
-**** xref:api/chat/functions/mistralai-chat-functions.adoc[Function Calling]
-*** xref:api/chat/minimax-chat.adoc[MiniMax]
-**** xref:api/chat/functions/minimax-chat-functions.adoc[Function Calling]
-*** xref:api/chat/moonshot-chat.adoc[Moonshot AI]
-//// **** xref:api/chat/functions/moonshot-chat-functions.adoc[Function Calling]
-*** xref:api/chat/nvidia-chat.adoc[NVIDIA]
-*** xref:api/chat/ollama-chat.adoc[Ollama]
+* xref:pages/index.adoc[Overview]

Review comment (Contributor): I don't think the `pages` prefix is needed. Why do you think it is?

+** xref:pages/concepts.adoc[AI Concepts]
+* xref:pages/getting-started.adoc[Getting Started]
+* xref:pages/api/chatclient.adoc[]
+** xref:pages/api/advisors.adoc[Advisors]
+* xref:pages/api/index.adoc[AI Models]
+** xref:pages/api/chatmodel.adoc[Chat Models]
+*** xref:pages/api/chat/comparison.adoc[Chat Models Comparison]
+*** xref:pages/api/bedrock-converse.adoc[Amazon Bedrock Converse]
+*** xref:pages/api/bedrock-chat.adoc[Amazon Bedrock]
+**** xref:pages/api/chat/bedrock/bedrock-anthropic3.adoc[Anthropic3]
+**** xref:pages/api/chat/bedrock/bedrock-anthropic.adoc[Anthropic2]
+**** xref:pages/api/chat/bedrock/bedrock-llama.adoc[Llama]
+**** xref:pages/api/chat/bedrock/bedrock-cohere.adoc[Cohere]
+**** xref:pages/api/chat/bedrock/bedrock-titan.adoc[Titan]
+**** xref:pages/api/chat/bedrock/bedrock-jurassic2.adoc[Jurassic2]
+*** xref:pages/api/chat/anthropic-chat.adoc[Anthropic 3]
+**** xref:pages/api/chat/functions/anthropic-chat-functions.adoc[Function Calling]
+*** xref:pages/api/chat/azure-openai-chat.adoc[Azure OpenAI]
+**** xref:pages/api/chat/functions/azure-open-ai-chat-functions.adoc[Function Calling]
+*** xref:pages/api/chat/google-vertexai.adoc[Google VertexAI]
+**** xref:pages/api/chat/vertexai-gemini-chat.adoc[VertexAI Gemini]
+***** xref:pages/api/chat/functions/vertexai-gemini-chat-functions.adoc[Function Calling]
+*** xref:pages/api/chat/groq-chat.adoc[Groq]
+*** xref:pages/api/chat/huggingface.adoc[Hugging Face]
+*** xref:pages/api/chat/mistralai-chat.adoc[Mistral AI]
+**** xref:pages/api/chat/functions/mistralai-chat-functions.adoc[Function Calling]
+*** xref:pages/api/chat/minimax-chat.adoc[MiniMax]
+**** xref:pages/api/chat/functions/minimax-chat-functions.adoc[Function Calling]
+*** xref:pages/api/chat/moonshot-chat.adoc[Moonshot AI]
+//// **** xref:pages/api/chat/functions/moonshot-chat-functions.adoc[Function Calling]
+*** xref:pages/api/chat/nvidia-chat.adoc[NVIDIA]
+*** xref:pages/api/chat/ollama-chat.adoc[Ollama]
*** OCI Generative AI
-**** xref:api/chat/oci-genai/cohere-chat.adoc[Cohere]
-*** xref:api/chat/openai-chat.adoc[OpenAI]
-**** xref:api/chat/functions/openai-chat-functions.adoc[Function Calling]
-*** xref:api/chat/qianfan-chat.adoc[QianFan]
-*** xref:api/chat/zhipuai-chat.adoc[ZhiPu AI]
-// **** xref:api/chat/functions/zhipuai-chat-functions.adoc[Function Calling]
-*** xref:api/chat/watsonx-ai-chat.adoc[watsonx.AI]
-** xref:api/embeddings.adoc[Embedding Models]
-*** xref:api/bedrock.adoc[Amazon Bedrock]
-**** xref:api/embeddings/bedrock-cohere-embedding.adoc[Cohere]
-**** xref:api/embeddings/bedrock-titan-embedding.adoc[Titan]
-*** xref:api/embeddings/azure-openai-embeddings.adoc[Azure OpenAI]
-*** xref:api/embeddings/mistralai-embeddings.adoc[Mistral AI]
-*** xref:api/embeddings/minimax-embeddings.adoc[MiniMax]
-*** xref:api/embeddings/oci-genai-embeddings.adoc[OCI GenAI]
-*** xref:api/embeddings/ollama-embeddings.adoc[Ollama]
-*** xref:api/embeddings/onnx.adoc[(ONNX) Transformers]
-*** xref:api/embeddings/openai-embeddings.adoc[OpenAI]
-*** xref:api/embeddings/postgresml-embeddings.adoc[PostgresML]
-*** xref:api/embeddings/qianfan-embeddings.adoc[QianFan]
+**** xref:pages/api/chat/oci-genai/cohere-chat.adoc[Cohere]
+*** xref:pages/api/chat/openai-chat.adoc[OpenAI]
+**** xref:pages/api/chat/functions/openai-chat-functions.adoc[Function Calling]
+*** xref:pages/api/chat/qianfan-chat.adoc[QianFan]
+*** xref:pages/api/chat/zhipuai-chat.adoc[ZhiPu AI]
+// **** xref:pages/api/chat/functions/zhipuai-chat-functions.adoc[Function Calling]
+*** xref:pages/api/chat/watsonx-ai-chat.adoc[watsonx.AI]
+** xref:pages/api/embeddings.adoc[Embedding Models]
+*** xref:pages/api/bedrock.adoc[Amazon Bedrock]
+**** xref:pages/api/embeddings/bedrock-cohere-embedding.adoc[Cohere]
+**** xref:pages/api/embeddings/bedrock-titan-embedding.adoc[Titan]
+*** xref:pages/api/embeddings/azure-openai-embeddings.adoc[Azure OpenAI]
+*** xref:pages/api/embeddings/mistralai-embeddings.adoc[Mistral AI]
+*** xref:pages/api/embeddings/minimax-embeddings.adoc[MiniMax]
+*** xref:pages/api/embeddings/oci-genai-embeddings.adoc[OCI GenAI]
+*** xref:pages/api/embeddings/ollama-embeddings.adoc[Ollama]
+*** xref:pages/api/embeddings/onnx.adoc[(ONNX) Transformers]
+*** xref:pages/api/embeddings/openai-embeddings.adoc[OpenAI]
+*** xref:pages/api/embeddings/postgresml-embeddings.adoc[PostgresML]
+*** xref:pages/api/embeddings/qianfan-embeddings.adoc[QianFan]
*** VertexAI
-**** xref:api/embeddings/vertexai-embeddings-text.adoc[Text Embedding]
-**** xref:api/embeddings/vertexai-embeddings-multimodal.adoc[Multimodal Embedding]
-*** xref:api/embeddings/watsonx-ai-embeddings.adoc[watsonx.AI]
-*** xref:api/embeddings/zhipuai-embeddings.adoc[ZhiPu AI]
-** xref:api/imageclient.adoc[Image Models]
-*** xref:api/image/azure-openai-image.adoc[Azure OpenAI]
-*** xref:api/image/openai-image.adoc[OpenAI]
-*** xref:api/image/stabilityai-image.adoc[Stability]
-*** xref:api/image/zhipuai-image.adoc[ZhiPuAI]
-*** xref:api/image/qianfan-image.adoc[QianFan]
-** xref:api/audio[Audio Models]
-*** xref:api/audio/transcriptions.adoc[]
-**** xref:api/audio/transcriptions/azure-openai-transcriptions.adoc[Azure OpenAI]
-**** xref:api/audio/transcriptions/openai-transcriptions.adoc[OpenAI]
-*** xref:api/audio/speech.adoc[]
-**** xref:api/audio/speech/openai-speech.adoc[OpenAI]
-** xref:api/moderation[Moderation Models]
-*** xref:api/moderation/openai-moderation.adoc[OpenAI]
-// ** xref:api/generic-model.adoc[]
+**** xref:pages/api/embeddings/vertexai-embeddings-text.adoc[Text Embedding]
+**** xref:pages/api/embeddings/vertexai-embeddings-multimodal.adoc[Multimodal Embedding]
+*** xref:pages/api/embeddings/watsonx-ai-embeddings.adoc[watsonx.AI]
+*** xref:pages/api/embeddings/zhipuai-embeddings.adoc[ZhiPu AI]
+** xref:pages/api/imageclient.adoc[Image Models]
+*** xref:pages/api/image/azure-openai-image.adoc[Azure OpenAI]
+*** xref:pages/api/image/openai-image.adoc[OpenAI]
+*** xref:pages/api/image/stabilityai-image.adoc[Stability]
+*** xref:pages/api/image/zhipuai-image.adoc[ZhiPuAI]
+*** xref:pages/api/image/qianfan-image.adoc[QianFan]
+** xref:pages/api/audio[Audio Models]
+*** xref:pages/api/audio/transcriptions.adoc[]
+**** xref:pages/api/audio/transcriptions/azure-openai-transcriptions.adoc[Azure OpenAI]
+**** xref:pages/api/audio/transcriptions/openai-transcriptions.adoc[OpenAI]
+*** xref:pages/api/audio/speech.adoc[]
+**** xref:pages/api/audio/speech/openai-speech.adoc[OpenAI]
+** xref:pages/api/moderation[Moderation Models]
+*** xref:pages/api/moderation/openai-moderation.adoc[OpenAI]
+// ** xref:pages/api/generic-model.adoc[]

-* xref:api/vectordbs.adoc[]
-** xref:api/vectordbs/azure.adoc[]
-** xref:api/vectordbs/azure-cosmos-db.adoc[]
-** xref:api/vectordbs/apache-cassandra.adoc[]
-** xref:api/vectordbs/chroma.adoc[]
-** xref:api/vectordbs/elasticsearch.adoc[]
-** xref:api/vectordbs/gemfire.adoc[GemFire]
-** xref:api/vectordbs/milvus.adoc[]
-** xref:api/vectordbs/mongodb.adoc[]
-** xref:api/vectordbs/neo4j.adoc[]
-** xref:api/vectordbs/opensearch.adoc[]
-** xref:api/vectordbs/oracle.adoc[Oracle]
-** xref:api/vectordbs/pgvector.adoc[]
-** xref:api/vectordbs/pinecone.adoc[]
-** xref:api/vectordbs/qdrant.adoc[]
-** xref:api/vectordbs/redis.adoc[]
-** xref:api/vectordbs/hana.adoc[SAP Hana]
-** xref:api/vectordbs/typesense.adoc[]
-** xref:api/vectordbs/weaviate.adoc[]
+* xref:pages/api/vectordbs.adoc[]
+** xref:pages/api/vectordbs/azure.adoc[]
+** xref:pages/api/vectordbs/azure-cosmos-db.adoc[]
+** xref:pages/api/vectordbs/apache-cassandra.adoc[]
+** xref:pages/api/vectordbs/chroma.adoc[]
+** xref:pages/api/vectordbs/elasticsearch.adoc[]
+** xref:pages/api/vectordbs/gemfire.adoc[GemFire]
+** xref:pages/api/vectordbs/milvus.adoc[]
+** xref:pages/api/vectordbs/mongodb.adoc[]
+** xref:pages/api/vectordbs/neo4j.adoc[]
+** xref:pages/api/vectordbs/opensearch.adoc[]
+** xref:pages/api/vectordbs/oracle.adoc[Oracle]
+** xref:pages/api/vectordbs/pgvector.adoc[]
+** xref:pages/api/vectordbs/pinecone.adoc[]
+** xref:pages/api/vectordbs/qdrant.adoc[]
+** xref:pages/api/vectordbs/redis.adoc[]
+** xref:pages/api/vectordbs/hana.adoc[SAP Hana]
+** xref:pages/api/vectordbs/typesense.adoc[]
+** xref:pages/api/vectordbs/weaviate.adoc[]

-* xref:observability/index.adoc[]
-* xref:api/prompt.adoc[]
-* xref:api/structured-output-converter.adoc[Structured Output]
-* xref:api/functions.adoc[Function Calling]
-** xref:api/function-callback.adoc[FunctionCallback API]
-* xref:api/multimodality.adoc[Multimodality]
-* xref:api/etl-pipeline.adoc[]
-* xref:api/testing.adoc[AI Model Evaluation]
+* xref:pages/observability/index.adoc[]
+* xref:pages/api/prompt.adoc[]
+* xref:pages/api/structured-output-converter.adoc[Structured Output]
+* xref:pages/api/functions.adoc[Function Calling]
+** xref:pages/api/function-callback.adoc[FunctionCallback API]
+* xref:pages/api/multimodality.adoc[Multimodality]
+* xref:pages/api/etl-pipeline.adoc[]
+* xref:pages/api/testing.adoc[AI Model Evaluation]

* Service Connections
-** xref:api/docker-compose.adoc[Docker Compose]
-** xref:api/testcontainers.adoc[Testcontainers]
-** xref:api/cloud-bindings.adoc[Cloud Bindings]
+** xref:pages/api/docker-compose.adoc[Docker Compose]
+** xref:pages/api/testcontainers.adoc[Testcontainers]
+** xref:pages/api/cloud-bindings.adoc[Cloud Bindings]

-* xref:contribution-guidelines.adoc[Contribution Guidelines]
+* xref:pages/contribution-guidelines.adoc[Contribution Guidelines]

* Appendices
-** xref:upgrade-notes.adoc[]
+** xref:pages/upgrade-notes.adoc[]
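
Note on the review question above: in a conventional Antora layout, xref targets already resolve relative to the module's pages/ directory, and image macros resolve against the module's images/ directory, so explicit `pages/` and `../images/` prefixes are normally redundant. A minimal sketch, assuming the standard `modules/ROOT` layout:

[source,asciidoc]
----
* xref:concepts.adoc[AI Concepts]
// resolves to modules/ROOT/pages/concepts.adoc

image::spring-ai-embeddings.jpg[Embeddings]
// resolves to modules/ROOT/images/spring-ai-embeddings.jpg
----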

@@ -221,7 +221,7 @@ Flux<ChatResponse> response = this.chatModel.stream(

=== Low-level AnthropicChatBedrockApi Client [[low-level-api]]

-The https://github.com/spring-projects/spring-ai/blob/main/models/spring-ai-bedrock/src/main/java/org/springframework/ai/bedrock/anthropic/api/AnthropicChatBedrockApi.java[AnthropicChatBedrockApi] provides is lightweight Java client on top of AWS Bedrock link:https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-claude.html[Anthropic Claude models].
+The https://github.com/spring-projects/spring-ai/blob/main/models/spring-ai-bedrock/src/main/java/org/springframework/ai/bedrock/anthropic/api/AnthropicChatBedrockApi.java[AnthropicChatBedrockApi] provides a lightweight Java client on top of AWS Bedrock link:https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-claude.html[Anthropic Claude models].

The following class diagram illustrates the AnthropicChatBedrockApi interface and building blocks:

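For a rough sense of how this low-level client is used, here is a minimal sketch; the constructor arguments and builder names below are assumptions based on the linked `AnthropicChatBedrockApi` class and may differ between Spring AI versions:

[source,java]
----
// Sketch only: verify these signatures against your Spring AI version.
AnthropicChatBedrockApi anthropicApi = new AnthropicChatBedrockApi(
        AnthropicChatModel.CLAUDE_V2.id(),   // Bedrock model id (assumed constant)
        Region.US_EAST_1.id());              // AWS region

AnthropicChatRequest request = AnthropicChatRequest
        .builder(String.format(AnthropicChatBedrockApi.PROMPT_TEMPLATE, "Name 3 famous pirates"))
        .withTemperature(0.8f)
        .withMaxTokensToSample(300)
        .build();

AnthropicChatResponse response = anthropicApi.chatCompletion(request);
----
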
14 changes: 7 additions & 7 deletions spring-ai-docs/src/main/antora/modules/ROOT/pages/concepts.adoc
@@ -14,7 +14,7 @@ Before ChatGPT, many people were fascinated by text-to-image generation models s

The following table categorizes several models based on their input and output types:

-image::spring-ai-concepts-model-types.jpg[Model types, width=600, align="center"]
+image::../images/spring-ai-concepts-model-types.jpg[Model types, width=600, align="center"]

Spring AI currently supports models that process input and output as language, image, and audio.
The last row in the previous table, which accepts text as input and outputs numbers, is more commonly known as embedding text and represents the internal data structures used in an AI model.
@@ -78,7 +78,7 @@ The length of the embedding array is called the vector's dimensionality.

By calculating the numerical distance between the vector representations of two pieces of text, an application can determine the similarity between the objects used to generate the embedding vectors.

-image::spring-ai-embeddings.jpg[Embeddings, width=900, align="center"]
+image::../images/spring-ai-embeddings.jpg[Embeddings, width=900, align="center"]

As a Java developer exploring AI, it's not necessary to comprehend the intricate mathematical theories or the specific implementations behind these vector representations.
A basic understanding of their role and function within AI systems suffices, particularly when you're integrating AI functionalities into your applications.
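
To make "numerical distance" concrete, here is a small, self-contained sketch of cosine similarity, one common way to compare two embedding vectors (plain Java, not a Spring AI API):

[source,java]
----
// Cosine similarity: close to 1.0 means the vectors point the same way (similar meaning);
// close to 0.0 means they are essentially unrelated (orthogonal).
static double cosineSimilarity(double[] a, double[] b) {
    double dot = 0.0, normA = 0.0, normB = 0.0;
    for (int i = 0; i < a.length; i++) {
        dot += a[i] * b[i];
        normA += a[i] * a[i];
        normB += b[i] * b[i];
    }
    return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
----
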
@@ -98,7 +98,7 @@ On input, models convert words to tokens. On output, they convert tokens back to

In English, one token roughly corresponds to 75% of a word. For reference, Shakespeare's complete works, totaling around 900,000 words, translate to approximately 1.2 million tokens.

-image::spring-ai-concepts-tokens.png[Tokens, width=600, align="center"]
+image::../images/spring-ai-concepts-tokens.png[Tokens, width=600, align="center"]

Perhaps more important is that Tokens = Money.
In the context of hosted AI models, your charges are determined by the number of tokens used. Both input and output contribute to the overall token count.
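
The 1.2 million figure follows directly from that ratio, as a quick back-of-the-envelope check shows:

[source,java]
----
// One token ≈ 0.75 English words, so tokens ≈ words / 0.75.
long words = 900_000;                            // Shakespeare's complete works, roughly
long estimatedTokens = Math.round(words / 0.75); // ≈ 1,200,000 tokens
----
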
@@ -120,7 +120,7 @@ Also, asking "`for JSON`" as part of the prompt is not 100% accurate.

This intricacy has led to the emergence of a specialized field involving the creation of prompts to yield the intended output, followed by converting the resulting simple string into a usable data structure for application integration.

-image::structured-output-architecture.jpg[Structured Output Converter Architecture, width=800, align="center"]
+image::../images/structured-output-architecture.jpg[Structured Output Converter Architecture, width=800, align="center"]

The xref:api/structured-output-converter.adoc#_structuredoutputconverter[Structured output conversion] employs meticulously crafted prompts, often necessitating multiple interactions with the model to achieve the desired formatting.
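
As an illustrative sketch of this conversion at the `ChatClient` level (the `entity(...)` call and the target record are assumptions; the exact fluent API varies by Spring AI version):

[source,java]
----
import java.util.List;

// Hypothetical target type; the converter turns the model's string reply into it.
record ActorFilms(String actor, List<String> movies) {}

ActorFilms actorFilms = ChatClient.create(chatModel)
        .prompt()
        .user("Generate the filmography for a random actor.")
        .call()
        .entity(ActorFilms.class); // assumed fluent-API method
----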

@@ -141,7 +141,7 @@ However, it is a challenging process for machine learning experts and extremely
This approach is colloquially referred to as "`stuffing the prompt.`"
The Spring AI library helps you implement solutions based on the "`stuffing the prompt`" technique otherwise known as xref::concepts.adoc#concept-rag[Retrieval Augmented Generation (RAG)].

-image::spring-ai-prompt-stuffing.jpg[Prompt stuffing, width=700, align="center"]
+image::../images/spring-ai-prompt-stuffing.jpg[Prompt stuffing, width=700, align="center"]

* **xref::concepts.adoc#concept-fc[Function Calling]**: This technique allows registering custom user functions that connect the large language models to the APIs of external systems.
Spring AI greatly simplifies the code you need to write to support xref:api/functions.adoc[function calling].
@@ -167,7 +167,7 @@ The next phase in RAG is processing user input.
When a user's question is to be answered by an AI model, the question and all the "`similar`" document pieces are placed into the prompt that is sent to the AI model.
This is the reason to use a vector database. It is very good at finding similar content.

-image::spring-ai-rag.jpg[Spring AI RAG, width=1000, align="center"]
+image::../images/spring-ai-rag.jpg[Spring AI RAG, width=1000, align="center"]

* The xref::api/etl-pipeline.adoc[ETL Pipeline] provides further information about orchestrating the flow of extracting data from data sources and storing it in a structured vector store, ensuring data is in the optimal format for retrieval when passing it to the AI model.
* The xref::api/chatclient.adoc#_retrieval_augmented_generation[ChatClient - RAG] explains how to use the `QuestionAnswerAdvisor` to enable the RAG capability in your application.
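
As a sketch of the `QuestionAnswerAdvisor` wiring mentioned above (advisor and builder names assumed from the ChatClient API; verify against your Spring AI version):

[source,java]
----
// The advisor runs the similarity search against the vector store and "stuffs"
// the retrieved document pieces into the prompt before it reaches the model.
String answer = ChatClient.builder(chatModel).build()
        .prompt()
        .advisors(new QuestionAnswerAdvisor(vectorStore))
        .user("How do I configure an ETL pipeline?")
        .call()
        .content();
----
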
@@ -186,7 +186,7 @@ It handles the function invocation conversation for you.
You can provide your function as a `@Bean` and then provide the bean name of the function in your prompt options to activate that function.
Additionally, you can define and reference multiple functions in a single prompt.
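
A minimal sketch of that `@Bean` registration (the request/response records and the weather values are hypothetical placeholders):

[source,java]
----
import java.util.function.Function;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Description;

@Configuration
class FunctionConfig {

    record WeatherRequest(String location) {}            // hypothetical input type
    record WeatherResponse(double temperatureCelsius) {} // hypothetical output type

    @Bean
    @Description("Get the current weather for a location") // description the model sees
    Function<WeatherRequest, WeatherResponse> currentWeather() {
        return request -> new WeatherResponse(22.5);     // placeholder implementation
    }
}
----
The bean name (`currentWeather` here) is what you reference in the prompt options to activate the function.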

-image::function-calling-basic-flow.jpg[Function calling, width=700, align="center"]
+image::../images/function-calling-basic-flow.jpg[Function calling, width=700, align="center"]

1. Perform a chat request sending along function definition information.
The latter provides the `name`, `description` (e.g. explaining when the Model should call the function), and `input parameters` (e.g. the function's input parameters schema).
4 changes: 2 additions & 2 deletions spring-ai-docs/src/main/antora/modules/ROOT/pages/index.adoc
@@ -1,7 +1,7 @@
[[introduction]]
= Introduction

-image::spring_ai_logo_with_text.svg[Integration Problem, width=300, align="left"]
+image::../images/spring_ai_logo_with_text.svg[Integration Problem, width=300, align="left"]

The `Spring AI` project aims to streamline the development of applications that incorporate artificial intelligence functionality without unnecessary complexity.

@@ -10,7 +10,7 @@ The project was founded with the belief that the next wave of Generative AI appl

NOTE: Spring AI addresses the fundamental challenge of AI integration: `Connecting your enterprise Data and APIs with the AI Models`.

-image::spring-ai-integration-diagram-3.svg[Interactive,500,opts=interactive]
+image::../images/spring-ai-integration-diagram-3.svg[Interactive,500,opts=interactive]

Spring AI provides abstractions that serve as the foundation for developing AI applications.
These abstractions have multiple implementations, enabling easy component swapping with minimal code changes.