The app demonstrates real-time chat interactions powered by Mistral AI.
- Real-time chat interface with stream-based responses
- Powered by the Mistral Large model (`mistral-large-latest`)
- Cross-platform (iOS & Android) support
- Dark mode theme
- Multi-language support (English, French)
- Node.js (v16 or higher)
- Expo CLI
- Mistral API key
The backend is a simple Vercel Edge Function that integrates with Mistral AI:
```typescript
import { streamText } from "ai";
import { mistral } from "@ai-sdk/mistral";

// Run on Vercel's Edge runtime for low-latency streaming.
export const config = { runtime: "edge" };

export default async function handler(req: Request) {
  if (req.method !== "POST") {
    return new Response(JSON.stringify({ message: "Method not allowed" }), {
      status: 405,
      headers: { "Content-Type": "application/json" },
    });
  }

  try {
    const { messages } = await req.json();

    // safePrompt asks Mistral to prepend its safety prompt to the conversation.
    const result = streamText({
      model: mistral("mistral-large-latest", { safePrompt: true }),
      messages,
    });

    // Stream tokens back to the client as they are generated.
    return result.toDataStreamResponse();
  } catch (error) {
    return new Response(JSON.stringify({ message: "Internal server error" }), {
      status: 500,
      headers: { "Content-Type": "application/json" },
    });
  }
}
```
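`toDataStreamResponse()` streams the reply using the AI SDK's data stream protocol, where each line carries a type prefix followed by a JSON payload and text deltas use the `0:` prefix. The sketch below is purely illustrative of that wire format — a real client would consume the stream through the AI SDK's client hooks rather than parsing lines by hand:

```typescript
// Illustrative parser for the text parts of an AI SDK data stream payload.
// Assumption: text deltas arrive as lines of the form 0:"<JSON-encoded string>".
function extractText(payload: string): string {
  let out = "";
  for (const line of payload.split("\n")) {
    if (line.startsWith("0:")) {
      // Everything after the "0:" prefix is a JSON-encoded string fragment.
      out += JSON.parse(line.slice(2));
    }
  }
  return out;
}

const sample = '0:"Hello"\n0:" world"\nd:{"finishReason":"stop"}';
console.log(extractText(sample)); // "Hello world"
```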
- Clone the backend repository
- Install dependencies:

  ```shell
  npm install
  ```
- Create a `.env` file with your Mistral API key:

  ```
  MISTRAL_API_KEY=your_api_key_here # Get it from https://console.mistral.ai/api-keys/
  ```
For more information about the Mistral AI integration, check the Vercel AI SDK documentation.
- Deploy to Vercel or run locally:

  ```shell
  vercel dev
  ```
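With `vercel dev` running, the endpoint can be smoke-tested from the command line. The `/api/chat` path and port `3000` below are assumptions about the deployment layout, not taken from the repository:

```shell
# Hypothetical smoke test; adjust the path and port to match your deployment.
# -N disables output buffering so streamed chunks print as they arrive.
curl -N -X POST http://localhost:3000/api/chat \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"Say hello in one sentence."}]}'
```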
- Clone this repository
- Install dependencies:

  ```shell
  npm install
  ```
- Create a `.env` file pointing at your deployed backend:

  ```
  EXPO_PUBLIC_API_BASE_URL=https://your-backend-url.vercel.app
  ```
- Start the development server:

  ```shell
  npx expo start
  ```
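Inside the app, Expo inlines `EXPO_PUBLIC_*` variables at build time, so the base URL is available as `process.env.EXPO_PUBLIC_API_BASE_URL`. A hypothetical helper for turning it into a request URL — both the `chatEndpoint` name and the `/api/chat` path are assumptions, not code from this repository:

```typescript
// Hypothetical helper: build the chat endpoint URL from the configured base URL.
// The /api/chat path is an illustrative assumption.
export function chatEndpoint(baseUrl: string | undefined): string {
  if (!baseUrl) {
    throw new Error("EXPO_PUBLIC_API_BASE_URL is not set");
  }
  // Trim trailing slashes so we never produce "...vercel.app//api/chat".
  return `${baseUrl.replace(/\/+$/, "")}/api/chat`;
}

// Usage in the app: chatEndpoint(process.env.EXPO_PUBLIC_API_BASE_URL)
```

Failing fast on a missing variable surfaces configuration mistakes at startup rather than as opaque network errors later.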
The application uses a simple client-server architecture:
- Frontend: React Native with Expo
- Backend: Vercel Edge Functions with Mistral AI integration
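The two sides exchange messages in the AI SDK chat shape: an array of objects with a `role` and string `content`. The type and runtime guard below are illustrative sketches, not code from the app:

```typescript
// Assumed message shape for the /api/chat request body (AI SDK chat format).
type ChatMessage = {
  role: "system" | "user" | "assistant";
  content: string;
};

// Hypothetical runtime guard a handler could apply before forwarding messages.
function isChatMessage(value: unknown): value is ChatMessage {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    (v.role === "system" || v.role === "user" || v.role === "assistant") &&
    typeof v.content === "string"
  );
}

const body = { messages: [{ role: "user", content: "Hello!" }] };
console.log(body.messages.every(isChatMessage)); // true
```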
- End-to-end testing with Maestro
- Enhanced accessibility features
- Error tracking (Sentry/Firebase Crashlytics)
- Analytics implementation
MIT License