community[minor],docs[minor]: Add ChromeAI chat model (#5903)
* community[minor]: Add ChromeAI chat model

* extra

* chore: lint files

* move to experimental, add demo app and instructions

* chore: lint files

* cr

* cr

* chore: lint files

* chore: lint files

* chore: lint files

* docs

* fix docs link

* moved to community

* nits

* allow for custom prompt formatters
bracesproul authored Jun 28, 2024
1 parent f3585eb commit f2fe566
Showing 12 changed files with 3,115 additions and 0 deletions.
59 changes: 59 additions & 0 deletions docs/core_docs/docs/integrations/chat/chrome_ai.mdx
@@ -0,0 +1,59 @@
---
sidebar_label: ChromeAI
---

import CodeBlock from "@theme/CodeBlock";

# ChatChromeAI

:::info
This feature is **experimental** and is subject to change.
:::

:::note
The `Built-in AI Early Preview Program` by Google is currently in beta. To apply for access or find more information, please visit [this link](https://developer.chrome.com/docs/ai/built-in).
:::

ChatChromeAI leverages WebGPU and Gemini Nano to run LLMs directly in the browser, without the need for an internet connection.
This allows for running faster, private models without data ever leaving the consumer's device.

## Getting started

Once you've been granted access to the program, follow all steps to download the model.

Once downloaded, you can start using `ChatChromeAI` in the browser as follows:

```typescript
import { ChatChromeAI } from "@langchain/community/experimental/chat_models/chrome_ai";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatChromeAI({
temperature: 0.5, // Optional, defaults to 0.5
topK: 40, // Optional, defaults to 40
});

const message = new HumanMessage("Write me a short poem please");

const response = await model.invoke([message]);
```

### Streaming

`ChatChromeAI` also supports streaming chunks:

```typescript
import { AIMessageChunk } from "@langchain/core/messages";

let fullMessage: AIMessageChunk | undefined = undefined;
for await (const chunk of await model.stream([message])) {
if (!fullMessage) {
fullMessage = chunk;
} else {
fullMessage = fullMessage.concat(chunk);
}
console.log(fullMessage.content);
}
```
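
The accumulation pattern above can be sketched independently of the model. In this hedged, self-contained sketch, a mock async generator stands in for `model.stream()`, and a simplified `Chunk` type with a string-append `concat` stands in for `AIMessageChunk` (these stand-ins are illustrative, not the library's actual types):

```typescript
// Simplified stand-in for AIMessageChunk: only a content string.
type Chunk = { content: string };

// Mock stream: yields chunks the way model.stream() yields AIMessageChunks.
async function* mockStream(): AsyncGenerator<Chunk> {
  for (const piece of ["Roses ", "are ", "red"]) {
    yield { content: piece };
  }
}

// Simplified concat: AIMessageChunk.concat merges richer fields too.
function concat(a: Chunk, b: Chunk): Chunk {
  return { content: a.content + b.content };
}

// Same shape as the loop above: seed with the first chunk, then fold.
async function accumulate(): Promise<string> {
  let full: Chunk | undefined;
  for await (const chunk of mockStream()) {
    full = full ? concat(full, chunk) : chunk;
  }
  return full?.content ?? "";
}

accumulate().then((s) => console.log(s)); // logs "Roses are red"
```

The seed-then-fold shape matters because there is no natural "empty" message chunk to start from; the first chunk becomes the accumulator.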

We also provide a simple demo application that you can copy to start running `ChatChromeAI` in your browser right away.
See the [README.md](https://github.com/langchain-ai/langchainjs/tree/main/libs/langchain-community/src/experimental/chrome_ai/app/README.md) in the `./app` directory of the integration for instructions.
4 changes: 4 additions & 0 deletions libs/langchain-community/.gitignore
@@ -1030,6 +1030,10 @@ experimental/chat_models/ollama_functions.cjs
experimental/chat_models/ollama_functions.js
experimental/chat_models/ollama_functions.d.ts
experimental/chat_models/ollama_functions.d.cts
experimental/chat_models/chrome_ai.cjs
experimental/chat_models/chrome_ai.js
experimental/chat_models/chrome_ai.d.ts
experimental/chat_models/chrome_ai.d.cts
chains/graph_qa/cypher.cjs
chains/graph_qa/cypher.js
chains/graph_qa/cypher.d.ts
1 change: 1 addition & 0 deletions libs/langchain-community/langchain.config.js
@@ -316,6 +316,7 @@ export const config = {
"experimental/hubs/makersuite/googlemakersuitehub":
"experimental/hubs/makersuite/googlemakersuitehub",
"experimental/chat_models/ollama_functions": "experimental/chat_models/ollama_functions",
"experimental/chat_models/chrome_ai": "experimental/chat_models/chrome_ai/chat_models",
// chains
"chains/graph_qa/cypher": "chains/graph_qa/cypher"
},
13 changes: 13 additions & 0 deletions libs/langchain-community/package.json
@@ -3022,6 +3022,15 @@
"import": "./experimental/chat_models/ollama_functions.js",
"require": "./experimental/chat_models/ollama_functions.cjs"
},
"./experimental/chat_models/chrome_ai": {
"types": {
"import": "./experimental/chat_models/chrome_ai.d.ts",
"require": "./experimental/chat_models/chrome_ai.d.cts",
"default": "./experimental/chat_models/chrome_ai.d.ts"
},
"import": "./experimental/chat_models/chrome_ai.js",
"require": "./experimental/chat_models/chrome_ai.cjs"
},
"./chains/graph_qa/cypher": {
"types": {
"import": "./chains/graph_qa/cypher.d.ts",
@@ -4067,6 +4076,10 @@
"experimental/chat_models/ollama_functions.js",
"experimental/chat_models/ollama_functions.d.ts",
"experimental/chat_models/ollama_functions.d.cts",
"experimental/chat_models/chrome_ai.cjs",
"experimental/chat_models/chrome_ai.js",
"experimental/chat_models/chrome_ai.d.ts",
"experimental/chat_models/chrome_ai.d.cts",
"chains/graph_qa/cypher.cjs",
"chains/graph_qa/cypher.js",
"chains/graph_qa/cypher.d.ts",
@@ -0,0 +1,23 @@
# ChatChromeAI

This is a simple browser application that uses WebGPU and Gemini Nano.
Gemini Nano is an LLM that Google Chrome embeds directly in the browser. As of 06/26/2024 it is still in beta.
To request access or find more information, please visit [this link](https://developer.chrome.com/docs/ai/built-in).

## Getting Started

To run this application, you'll first need to build the dependencies locally. From the root of the `langchain-ai/langchainjs` repo, run the following command:

```bash
yarn build --filter=@langchain/community --filter=@langchain/openai
```

Once the dependencies are built, navigate into this directory (`libs/langchain-community/src/experimental/chat_models/chrome_ai/app`) and run the following commands:

```bash
yarn install # install the dependencies

yarn start # start the application
```

Then, open your browser and navigate to [`http://127.0.0.1:8080/src/chrome_ai.html`](http://127.0.0.1:8080/src/chrome_ai.html).
@@ -0,0 +1,16 @@
{
"name": "chrome_ai",
"packageManager": "yarn@3.4.1",
"scripts": {
"start": "rm -rf ./dist && yarn webpack && yarn http-server -c-1 -p 8080"
},
"devDependencies": {
"http-server": "^14.0.1",
"webpack": "^5.92.1",
"webpack-cli": "^5.1.4"
},
"dependencies": {
"@langchain/community": "file:../../../../../",
"@langchain/openai": "file:../../../../../../langchain-openai"
}
}
@@ -0,0 +1,111 @@
<!DOCTYPE html>
<html>
<head>
<title>ChatChromeAI Example</title>
<style>
body {
font-family: Arial, sans-serif;
max-width: 800px;
margin: 0 auto;
padding: 20px;
background-color: #f0f0f0;
}
h1 {
color: #333;
text-align: center;
}
button {
background-color: #4caf50;
border: none;
color: white;
padding: 10px 20px;
text-align: center;
text-decoration: none;
display: inline-block;
font-size: 16px;
margin: 4px 2px;
cursor: pointer;
border-radius: 4px;
}
#destroyButton {
background-color: #f44336;
}
form {
background-color: white;
padding: 20px;
border-radius: 8px;
box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1);
}
input[type="text"] {
width: 100%;
padding: 12px 20px;
margin: 8px 0;
box-sizing: border-box;
border: 2px solid #ccc;
border-radius: 4px;
}
#responseContainer {
background-color: white;
padding: 20px;
border-radius: 8px;
box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1);
margin-top: 20px;
}
.stats {
display: flex;
justify-content: space-around;
margin-top: 10px;
}
.stat-pill {
padding: 5px 10px;
border-radius: 20px;
font-size: 14px;
color: white;
}
</style>
</head>
<body>
<h1>LangChain.js🦜🔗 - ChatChromeAI Example</h1>

<button id="destroyButton">Destroy Model</button>

<form id="inputForm">
<label for="inputField">Enter your input:</label><br />
<input
type="text"
id="inputField"
name="inputField"
autocomplete="off"
/><br />
<button type="submit">Submit</button>
</form>

<div id="responseContainer">
<div id="responseText"></div>
<div id="statsContainer">
<div class="stats">
<span
class="stat-pill"
style="background-color: #3498db"
id="firstTokenTime"
>First Token: -- ms</span
>
<span
class="stat-pill"
style="background-color: #2ecc71"
id="totalTime"
>Total Time: -- ms</span
>
<span
class="stat-pill"
style="background-color: #e74c3c"
id="totalTokens"
>Total Tokens: --</span
>
</div>
</div>
</div>

<script src="../dist/bundle.js"></script>
</body>
</html>
@@ -0,0 +1,68 @@
import { ChatChromeAI } from "@langchain/community/experimental/chat_models/chrome_ai";
import { encodingForModel } from "@langchain/core/utils/tiktoken";

const model = new ChatChromeAI();
const destroyButton = document.getElementById("destroyButton");
const inputForm = document.getElementById("inputForm");
const submitButton = inputForm.querySelector("button[type='submit']");

// Initialize the model when the page loads
window.addEventListener("load", async () => {
try {
await model.initialize();
destroyButton.disabled = false;
submitButton.disabled = false;
} catch (error) {
console.error("Failed to initialize model:", error);
alert("Failed to initialize model. Please try refreshing the page.");
}
});

destroyButton.addEventListener("click", () => {
model.destroy();
destroyButton.disabled = true;
submitButton.disabled = true;
});

inputForm.addEventListener("submit", async (event) => {
event.preventDefault();
const input = document.getElementById("inputField").value;
const humanMessage = ["human", input];

// Clear previous response
const responseTextElement = document.getElementById("responseText");
responseTextElement.textContent = "";

let fullMsg = "";
let timeToFirstTokenMs = 0;
let totalTimeMs = 0;
try {
const startTime = performance.now();
for await (const chunk of await model.stream(humanMessage)) {
if (timeToFirstTokenMs === 0) {
timeToFirstTokenMs = performance.now() - startTime;
}
fullMsg += chunk.content;
// Update the response element with the new content
responseTextElement.textContent = fullMsg;
}
totalTimeMs = performance.now() - startTime;
} catch (error) {
console.error("An error occurred:", error);
responseTextElement.textContent = "An error occurred: " + error.message;
}

const encoding = await encodingForModel("gpt2");
const numTokens = encoding.encode(fullMsg).length;

// Update the stat pills
document.getElementById(
"firstTokenTime"
).textContent = `First Token: ${Math.round(timeToFirstTokenMs)} ms`;
document.getElementById("totalTime").textContent = `Total Time: ${Math.round(
totalTimeMs
)} ms`;
document.getElementById(
"totalTokens"
).textContent = `Total Tokens: ${numTokens}`;
});
@@ -0,0 +1,10 @@
const path = require("path");

module.exports = {
entry: "./src/index.js",
output: {
filename: "bundle.js",
path: path.resolve(__dirname, "dist"),
},
mode: "development",
};