
LangChain4j-Kotlin

Kotlin enhancements for LangChain4j, providing coroutine support and Flow-based streaming capabilities for chat language models.

See the discussion in the LangChain4j project.

ℹ️ I am using this repository to verify ideas for improving LangChain4j. If an idea is accepted, the code may be adopted into the upstream LangChain4j project. If not, you can still enjoy it here.

Features

See the API docs for more details.

Installation

Maven

Add the following dependencies to your pom.xml:

<dependencies>
    <!-- LangChain4j Kotlin Extensions -->
    <dependency>
        <groupId>me.kpavlov.langchain4j.kotlin</groupId>
        <artifactId>langchain4j-kotlin</artifactId>
        <version>[LATEST_VERSION]</version>
    </dependency>

    <!-- Extra Dependencies -->
    <dependency>
        <groupId>dev.langchain4j</groupId>
        <artifactId>langchain4j</artifactId>
        <version>0.36.2</version>
    </dependency>
    <dependency>
        <groupId>dev.langchain4j</groupId>
        <artifactId>langchain4j-open-ai</artifactId>
        <version>0.36.2</version>
    </dependency>
</dependencies>
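The two `dev.langchain4j` artifacts should stay on the same version. A shared Maven property is a small convenience (not required) for keeping them aligned:

```xml
<properties>
    <!-- Single place to bump the LangChain4j version -->
    <langchain4j.version>0.36.2</langchain4j.version>
</properties>
```

Then reference it as `<version>${langchain4j.version}</version>` in each `dev.langchain4j` dependency.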

Gradle (Kotlin DSL)

Add the following to your build.gradle.kts:

dependencies {
    implementation("me.kpavlov.langchain4j.kotlin:langchain4j-kotlin:$LATEST_VERSION")
    implementation("dev.langchain4j:langchain4j-open-ai:0.36.2")
}

Quick Start

Basic Chat Request

The extension can convert a ChatLanguageModel call into a Kotlin suspending function:

val model: ChatLanguageModel = OpenAiChatModel.builder()
    .apiKey("your-api-key")
    // more configuration parameters here ...
    .build()

// sync call
val response =
    model.chat(
        ChatRequest
            .builder()
            .messages(
                listOf(
                    SystemMessage.from("You are a helpful assistant"),
                    UserMessage.from("Hello!"),
                ),
            ).build(),
    )
println(response.aiMessage().text())

// Using coroutines
CoroutineScope(Dispatchers.IO).launch {
    val response =
        model.chatAsync(
            ChatRequest
                .builder()
                .messages(
                    listOf(
                        SystemMessage.from("You are a helpful assistant"),
                        UserMessage.from("Hello!"),
                    ),
                ).build(),
        )
    println(response.aiMessage().text())
}      

Streaming Chat Language Model support

The extension can convert a StreamingChatLanguageModel response into a Kotlin asynchronous Flow:

val model: StreamingChatLanguageModel = OpenAiStreamingChatModel.builder()
    .apiKey("your-api-key")
    // more configuration parameters here ...
    .build()

val messages = listOf(
    SystemMessage.from("You are a helpful assistant"),
    UserMessage.from("Hello!"),
)

model.generateFlow(messages).collect { reply ->
    when (reply) {
        is Completion ->
            println(
                "Final response: ${reply.response.content().text()}",
            )

        is Token -> println("Received token: ${reply.token}")
        else -> throw IllegalArgumentException("Unsupported event: $reply")
    }
}
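If you only need the final text, the streamed events can be folded into a single string. Below is a minimal, self-contained sketch using a stand-in `Reply` type to illustrate the pattern (the real `Token`/`Completion` event types come from langchain4j-kotlin, and the names here are assumptions for illustration):

```kotlin
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.flowOf
import kotlinx.coroutines.runBlocking

// Illustrative stand-in for the reply events emitted by generateFlow;
// the actual event types are provided by langchain4j-kotlin.
sealed interface Reply {
    data class Token(val token: String) : Reply
    data class Completion(val text: String) : Reply
}

// Accumulate streamed tokens into the full answer.
suspend fun collectAnswer(replies: Flow<Reply>): String {
    val buffer = StringBuilder()
    replies.collect { reply ->
        when (reply) {
            is Reply.Token -> buffer.append(reply.token)
            is Reply.Completion -> { /* final event; buffer already holds the text */ }
        }
    }
    return buffer.toString()
}

fun main() = runBlocking {
    // Simulated stream, standing in for model.generateFlow(messages)
    val fakeStream = flowOf(
        Reply.Token("Hello"),
        Reply.Token(", "),
        Reply.Token("world!"),
        Reply.Completion("Hello, world!"),
    )
    println(collectAnswer(fakeStream)) // prints "Hello, world!"
}
```

The same `collect`-and-append pattern applies to the real `generateFlow` events shown above.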

Kotlin Notebook

The Kotlin Notebook environment allows you to:

  • Experiment with LLM features in real-time
  • Test different configurations and scenarios
  • Visualize results directly in the notebook
  • Share reproducible examples with others

You can easily get started with LangChain4j-Kotlin notebooks:

%useLatestDescriptors
%use coroutines

@file:DependsOn("dev.langchain4j:langchain4j:0.36.2")
@file:DependsOn("dev.langchain4j:langchain4j-open-ai:0.36.2")

// add maven dependency
@file:DependsOn("me.kpavlov.langchain4j.kotlin:langchain4j-kotlin:0.1.1")
// ... or add project's target/classes to classpath
//@file:DependsOn("../target/classes")

import dev.langchain4j.data.message.*
import dev.langchain4j.model.openai.OpenAiChatModel
import me.kpavlov.langchain4j.kotlin.model.chat.generateAsync
  
val model = OpenAiChatModel.builder()
  .apiKey("demo")
  .modelName("gpt-4o-mini")
  .temperature(0.0)
  .maxTokens(1024)
  .build()

// Invoke using CoroutineScope
val scope = CoroutineScope(Dispatchers.IO)

runBlocking {
  val result = model.generateAsync(
    listOf(
      SystemMessage.from("You are a helpful assistant"),
      UserMessage.from("Make a haiku about Kotlin, LangChain4j and LLM"),
    )
  )
  println(result.content().text())
}

Try this Kotlin Notebook yourself:

Development Setup

Prerequisites

  1. Create a .env file in the root directory and add your API keys:
OPENAI_API_KEY=sk-xxxxx

Building the Project

Using Maven:

mvn clean verify

Using Make:

make build

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

Before submitting your changes, run:

make lint

Acknowledgements

License

MIT License