Issues with Handling Responses via stream in Dart/Flutter using Langchain and ChatOpenAI #519
Hello, I'm currently building a Korean assistant in a Flutter application using LangChain and ChatOpenAI. I am trying to process the user's conversation in real time using stream, but it is not working as expected. Below is the code I'm working with and a detailed description of the problem I'm encountering.

Problem Description

I'm looking for a way to properly use stream to handle the AI's response while ensuring that the assistant remembers details, like the user's name, throughout the conversation.

What I've Tried
Hey @Bamschool,

The following code should work for your use case:

import 'dart:io';

import 'package:langchain/langchain.dart';
import 'package:langchain_openai/langchain_openai.dart';

void main() async {
  await chat('안녕하세요 제 이름은 David Miguel 입니다, 어떻게 지내세요?');
  await chat('제 이름을 기억하고 있나요?');
}

// Define the tool specification
const tool = ToolSpec(
  name: 'koreanAssistant',
  description: 'Respond like a Korean friend',
  inputJsonSchema: {
    'type': 'object',
    'properties': {
      'korean': {
        'type': 'string',
        'description': 'Response in Korean as a friendly assistant',
      },
      'english': {
        'type': 'string',
        'description': 'Translation of the response in English',
      },
    },
    'required': ['korean', 'english'],
  },
);

// Initialize the ChatOpenAI model with the tool
final chatModel = ChatOpenAI(
  apiKey: Platform.environment['OPENAI_API_KEY'],
  defaultOptions: ChatOpenAIOptions(
    tools: const [tool],
    toolChoice: ChatToolChoice.forced(name: 'koreanAssistant'),
    parallelToolCalls: false,
  ),
);

final outputParser = ToolsOutputParser();

final promptTemplate = ChatPromptTemplate.fromTemplates(const [
  (
    ChatMessageType.system,
    'You are a Korean assistant who responds like a friendly Korean friend.',
  ),
  (ChatMessageType.messagesPlaceholder, 'history'),
  (ChatMessageType.human, '{korean}'),
]);

// Conversation history shared across calls
final history = <ChatMessage>[];

// Create the runnable sequence with the proper chaining
final chain = Runnable.mapInput<String, Map<String, dynamic>>(
  (input) => {
    'korean': input,
    'history': history,
  },
).pipe(promptTemplate).pipe(chatModel).pipe(outputParser);

Future<void> chat(String userInput) async {
  try {
    // Fetch and process the response as a stream
    final stream = chain.stream(userInput);
    String koreanMsg = '';
    String englishMsg = '';
    await for (final toolCalls in stream) {
      final toolCall = toolCalls.first;
      koreanMsg = toolCall.arguments['korean']?.toString() ?? '';
      englishMsg = toolCall.arguments['english']?.toString() ?? '';
      print('Korean: $koreanMsg');
      print('English: $englishMsg');
    }
    // Update the history so the model remembers previous turns
    history
      ..add(ChatMessage.humanText(userInput))
      ..add(ChatMessage.ai(koreanMsg));
  } catch (e) {
    // Handle and log the error
    print('Error occurred: $e');
  }
}

Output:
Instead of printing to the console, you can update the state of your Flutter app.

Notes:
I hope that helps! 🙂
Hi! Thank you so much for replying. However, could I ask how I can use it this way? I want to separate it into an init function and a sendMessage function, but I've spent a lot of time on it and couldn't find a way. Thank you!

import 'dart:developer';
import 'package:flutter_dotenv/flutter_dotenv.dart';
import 'package:langchain/langchain.dart';
import 'package:langchain_openai/langchain_openai.dart';
class LangchainJsonRepository {
  final history = <ChatMessage>[];

  Future<Map<String, dynamic>> init(String userInput) async {
    try {
      // Define the tool specification
      const tool = ToolSpec(
        name: 'koreanAssistant',
        description: 'Respond like a Korean friend',
        inputJsonSchema: {
          'type': 'object',
          'properties': {
            'content': {
              'type': 'string',
              'description': 'Response text in Korean',
            },
            'translation': {
              'type': 'string',
              'description': 'Translation of the content in English',
            },
            'guide': {
              'type': 'string',
              'description': 'Suggested reply in Korean',
            },
            'guideTranslation': {
              'type': 'string',
              'description': 'Translation of the guide in English',
            },
          },
          'required': ['content', 'guide', 'translation', 'guideTranslation'],
        },
      );

      // Initialize the ChatOpenAI model with the tool
      final chatModel = ChatOpenAI(
        apiKey: dotenv.env['OPENAI_API_KEY']!,
        defaultOptions: ChatOpenAIOptions(
          tools: const [tool],
          toolChoice: ChatToolChoice.forced(name: 'koreanAssistant'),
          parallelToolCalls: false,
        ),
      );

      final outputParser = ToolsOutputParser();

      final promptTemplate = ChatPromptTemplate.fromTemplates(const [
        (
          ChatMessageType.system,
          'You are a Korean assistant who responds like a friendly Korean friend. Provide the response in JSON format with the keys "content", "guide", "translation", and "guideTranslation".',
        ),
        (
          ChatMessageType.system,
          'Content should be the response text in Korean, and translation should be its English translation. Guide should be a suggested reply in Korean, and guideTranslation should be its English translation.',
        ),
        (
          ChatMessageType.system,
          'For example, if the content is "안녕! 나는 로빈이야! 너는 에이미의 친구야?", the translation could be "Hello! I am Robin! Are you Amy\'s friend?". Similarly, if the guide is "응, 반가워 나는 에이미 친구야.", the guideTranslation could be "Yes, nice to meet you, I am Amy\'s friend."',
        ),
        (ChatMessageType.messagesPlaceholder, 'history'),
        (ChatMessageType.human, '{korean}'),
      ]);

      // Create the runnable sequence with the proper chaining
      final chain = Runnable.mapInput<String, Map<String, dynamic>>(
        (input) => {
          'korean': input,
          'history': history,
        },
      ).pipe(promptTemplate).pipe(chatModel).pipe(outputParser);

      // Fetch and process the response as a stream
      final stream = chain.stream(userInput);
      String content = '';
      String translation = '';
      String guide = '';
      String guideTranslation = '';
      await for (final toolCalls in stream) {
        final toolCall = toolCalls.first;
        log('ToolCall arguments: ${toolCall.arguments}');
        content = toolCall.arguments['content']?.toString() ?? '';
        translation = toolCall.arguments['translation']?.toString() ?? '';
        guide = toolCall.arguments['guide']?.toString() ?? '';
        guideTranslation =
            toolCall.arguments['guideTranslation']?.toString() ?? '';
      }

      // Update the history
      history
        ..add(ChatMessage.humanText(userInput))
        ..add(ChatMessage.ai(content));

      // Return the final AI message as a map
      return {
        'content': content,
        'translation': translation,
        'guide': guide,
        'guideTranslation': guideTranslation,
      };
    } catch (e) {
      // Handle and log the error
      print('Error occurred: $e');
      return {};
    }
  }
}
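One possible way to get the separation asked about here is to build the tool, prompt, and chain once in `init()` and cache the chain in a field, then reuse it for every message. The following is only a sketch under the same langchain_dart API used above; the field `_chain`, the method names, and the `dynamic` chain type (used to avoid spelling out the full generic signature) are illustrative, not from the thread:

```dart
import 'dart:developer';

import 'package:flutter_dotenv/flutter_dotenv.dart';
import 'package:langchain/langchain.dart';
import 'package:langchain_openai/langchain_openai.dart';

class LangchainJsonRepository {
  final _history = <ChatMessage>[];

  // Cached chain, built once in init(). Typed dynamic to keep the sketch
  // short; in real code you would spell out the Runnable type parameters.
  dynamic _chain;

  /// Build the tool, model, prompt, and chain once. Safe to call repeatedly.
  void init() {
    if (_chain != null) return;

    const tool = ToolSpec(
      name: 'koreanAssistant',
      description: 'Respond like a Korean friend',
      inputJsonSchema: {
        'type': 'object',
        'properties': {
          'content': {'type': 'string', 'description': 'Response text in Korean'},
          'translation': {'type': 'string', 'description': 'English translation of the content'},
          'guide': {'type': 'string', 'description': 'Suggested reply in Korean'},
          'guideTranslation': {'type': 'string', 'description': 'English translation of the guide'},
        },
        'required': ['content', 'guide', 'translation', 'guideTranslation'],
      },
    );

    final chatModel = ChatOpenAI(
      apiKey: dotenv.env['OPENAI_API_KEY']!,
      defaultOptions: ChatOpenAIOptions(
        tools: const [tool],
        toolChoice: ChatToolChoice.forced(name: 'koreanAssistant'),
        parallelToolCalls: false,
      ),
    );

    final promptTemplate = ChatPromptTemplate.fromTemplates(const [
      (
        ChatMessageType.system,
        'You are a Korean assistant who responds like a friendly Korean friend.',
      ),
      (ChatMessageType.messagesPlaceholder, 'history'),
      (ChatMessageType.human, '{korean}'),
    ]);

    _chain = Runnable.mapInput<String, Map<String, dynamic>>(
      (input) => {'korean': input, 'history': _history},
    ).pipe(promptTemplate).pipe(chatModel).pipe(ToolsOutputParser());
  }

  /// Stream one user message through the chain built in init().
  Future<Map<String, dynamic>> sendMessage(String userInput) async {
    if (_chain == null) {
      throw StateError('Call init() before sendMessage()');
    }
    var result = <String, dynamic>{};
    await for (final toolCalls in _chain.stream(userInput)) {
      if (toolCalls.isEmpty) continue;
      // Each event carries the partially accumulated tool-call arguments.
      result = Map<String, dynamic>.from(toolCalls.first.arguments);
      log('Partial arguments: $result');
    }
    _history
      ..add(ChatMessage.humanText(userInput))
      ..add(ChatMessage.ai(result['content']?.toString() ?? ''));
    return result;
  }
}
```

With this split, `init()` can run once when the repository is created (for example in a widget's `initState` or a provider), while `sendMessage` is called per user turn; the shared `_history` field keeps the memory across calls, just as in the answer above.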