
feat: upgrade Tavily API with comprehensive input and constrain the token consumption #1246

Merged — 3 commits, Dec 20, 2024
3 changes: 3 additions & 0 deletions .env.example
@@ -257,6 +257,9 @@ LARGE_AKASH_CHAT_API_MODEL= # Default: Meta-Llama-3-1-405B-Instruct-FP8
FAL_API_KEY=
FAL_AI_LORA_PATH=

# Web search API Configuration
TAVILY_API_KEY=

# WhatsApp Cloud API Configuration
WHATSAPP_ACCESS_TOKEN= # Permanent access token from Facebook Developer Console
WHATSAPP_PHONE_NUMBER_ID= # Phone number ID from WhatsApp Business API
4 changes: 4 additions & 0 deletions packages/core/src/generation.ts
@@ -1214,6 +1214,10 @@
api_key: apiKey,
query,
include_answer: true,
max_results: 3, // default: 5
topic: "general", // "general" (default) | "news"
search_depth: "basic", // "basic" (default) | "advanced"
include_images: false, // default: false

Codecov warning (packages/core/src/generation.ts#L1217–L1220): added lines were not covered by tests.
}),
});
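The request options added above can be sketched as a small body builder. The field names and constrained defaults mirror the diff; `TavilySearchBody` and `buildTavilyBody` are hypothetical names introduced here for illustration, not part of the PR:

```typescript
// Shape of the JSON body the PR sends to Tavily's search endpoint,
// with the constrained defaults introduced in this change.
interface TavilySearchBody {
    api_key: string;
    query: string;
    include_answer: boolean;
    max_results: number; // PR constrains this to 3 (API default: 5)
    topic: "general" | "news";
    search_depth: "basic" | "advanced";
    include_images: boolean;
}

// Hypothetical helper assembling the body with the PR's defaults.
function buildTavilyBody(apiKey: string, query: string): TavilySearchBody {
    return {
        api_key: apiKey,
        query,
        include_answer: true,
        max_results: 3,
        topic: "general",
        search_depth: "basic",
        include_images: false,
    };
}
```

Constraining `max_results` and `search_depth` at the request level keeps the response small before any client-side token trimming happens.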

26 changes: 24 additions & 2 deletions packages/plugin-web-search/src/index.ts
@@ -8,8 +8,30 @@ import {
State,
} from "@ai16z/eliza";
import { generateWebSearch } from "@ai16z/eliza";

import { SearchResult } from "@ai16z/eliza";
import { encodingForModel, TiktokenModel } from "js-tiktoken";

const DEFAULT_MAX_WEB_SEARCH_TOKENS = 4000;
const DEFAULT_MODEL_ENCODING = "gpt-3.5-turbo";

function getTotalTokensFromString(
    str: string,
    encodingName: TiktokenModel = DEFAULT_MODEL_ENCODING
): number {
    const encoding = encodingForModel(encodingName);
    return encoding.encode(str).length;
}

function MaxTokens(
    data: string,
    maxTokens: number = DEFAULT_MAX_WEB_SEARCH_TOKENS
): string {
    const encoding = encodingForModel(DEFAULT_MODEL_ENCODING);
    const tokens = encoding.encode(data);
    if (tokens.length <= maxTokens) {
        return data;
    }
    // Truncate by token count, not character count: slicing the raw string
    // to `maxTokens` characters would not guarantee the token budget holds.
    return encoding.decode(tokens.slice(0, maxTokens));
}
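The truncation pattern above can be sketched independently of js-tiktoken. Here a naive whitespace tokenizer stands in for the real BPE encoder (an assumption made purely for illustration; the plugin itself uses `encodingForModel`):

```typescript
// Stand-in tokenizer: splits on whitespace. This is NOT a real BPE
// encoder; it only demonstrates the encode → slice → decode pattern.
function toyEncode(text: string): string[] {
    return text.split(/\s+/).filter(Boolean);
}

// Keep at most `maxTokens` tokens, then decode back to text.
function truncateToTokenBudget(data: string, maxTokens: number): string {
    const tokens = toyEncode(data);
    if (tokens.length <= maxTokens) {
        return data;
    }
    return tokens.slice(0, maxTokens).join(" ");
}
```

With a real encoder the same three steps apply; only the encode/decode calls change.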

const webSearch: Action = {
name: "WEB_SEARCH",
@@ -68,7 +90,7 @@ const webSearch: Action = {
: "";

callback({
text: responseList,
text: MaxTokens(responseList, DEFAULT_MAX_WEB_SEARCH_TOKENS),
});
} else {
elizaLogger.error("search failed or returned no data.");