```shell
yarn add realtime-ai realtime-ai-react
# or
npm install realtime-ai realtime-ai-react
```
Instantiate a `VoiceClient` instance and pass it down to the `VoiceClientProvider`. Render the `<VoiceClientAudio>` component to have audio output set up automatically.
```tsx
import { VoiceClient } from "realtime-ai";
import { VoiceClientAudio, VoiceClientProvider } from "realtime-ai-react";

const voiceClient = new VoiceClient({
  baseUrl: "https://rtvi.pipecat.bot",
  enableMic: true,
});

render(
  <VoiceClientProvider voiceClient={voiceClient}>
    <MyApp />
    <VoiceClientAudio />
  </VoiceClientProvider>
);
```
We recommend starting the `voiceClient` from a button click, since browsers require a user gesture before allowing audio playback. Here's a minimal implementation of `<MyApp>` to get started:
```tsx
import { useVoiceClient } from "realtime-ai-react";

const MyApp = () => {
  const voiceClient = useVoiceClient();
  return <button onClick={() => voiceClient.start()}>OK Computer</button>;
};
```
Wrap your app with `<VoiceClientProvider>` and pass it a `voiceClient` instance.
Creates a new `<audio>` element that mounts the bot's audio track.
Creates a new `<video>` element that renders either the bot or local participant's video track, depending on the `participant` prop (`"local" | "bot"`).
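A minimal usage sketch. The component name `VoiceClientVideo` is an assumption inferred from the naming of `VoiceClientAudio`; check the package exports for the exact name:

```tsx
import { VoiceClientVideo } from "realtime-ai-react";

// Renders the bot's video track; pass participant="local" to show
// the local camera instead. (Component name assumed.)
const BotView = () => <VoiceClientVideo participant="bot" />;
```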
Creates a new canvas element that renders a customizable waveform effect for visualizing an audio track.
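A hypothetical sketch of the visualizer. Both the component name `VoiceVisualizer` and its props are assumptions for illustration; consult the package's type definitions for the real API:

```tsx
import { VoiceVisualizer } from "realtime-ai-react";

// Draws a waveform for the bot's audio track.
// Component and prop names are assumptions, not confirmed by the docs above.
const BotWaveform = () => <VoiceVisualizer participantType="bot" />;
```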
Returns the `voiceClient` instance that was originally passed to `VoiceClientProvider`.
Allows you to register event handlers for all supported event callbacks in the `VoiceClient`.
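A sketch of registering a handler. The hook name `useVoiceClientEvent` and the `VoiceEvent` enum member are assumptions inferred from the description above:

```tsx
import { useCallback } from "react";
import { VoiceEvent } from "realtime-ai";
import { useVoiceClientEvent } from "realtime-ai-react";

const BotStatus = () => {
  // Hook and event names assumed; wrap the handler in useCallback so the
  // subscription isn't re-registered on every render.
  useVoiceClientEvent(
    VoiceEvent.BotStartedTalking,
    useCallback(() => console.log("Bot started talking"), [])
  );
  return null;
};
```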
Returns lists of `availableMics` and `availableCams`, the `selectedMic` and `selectedCam`, and the methods `updateMic` and `updateCam` for switching to different media devices.
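A sketch of a mic picker built on the values listed above. The hook name `useVoiceClientMediaDevices` and the exact `updateMic` signature are assumptions:

```tsx
import { useVoiceClientMediaDevices } from "realtime-ai-react";

// Hypothetical mic selector: lists available mics and switches the
// active device on change. (Hook name and updateMic argument assumed.)
const MicSelect = () => {
  const { availableMics, selectedMic, updateMic } =
    useVoiceClientMediaDevices();
  return (
    <select
      value={selectedMic?.deviceId}
      onChange={(e) => updateMic(e.target.value)}
    >
      {availableMics.map((mic) => (
        <option key={mic.deviceId} value={mic.deviceId}>
          {mic.label}
        </option>
      ))}
    </select>
  );
};
```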
Returns the `MediaStreamTrack` with the given `trackType` (`"audio" | "video"`) for the given `participantType` (`"local" | "bot"`). If no track is available, it returns `null`.
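A sketch of consuming the returned track, assuming the hook is exported as `useVoiceClientMediaTrack`:

```tsx
import { useVoiceClientMediaTrack } from "realtime-ai-react";

// Hypothetical sketch (hook name assumed): fetch the bot's audio track
// and attach it to a custom <audio> element via a ref callback.
const BotAudio = () => {
  const track = useVoiceClientMediaTrack("audio", "bot");
  if (!track) return null; // null until a track is available
  return (
    <audio
      autoPlay
      ref={(el) => {
        if (el) el.srcObject = new MediaStream([track]);
      }}
    />
  );
};
```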
Returns `voiceClient.state` as React state.
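Because the value is React state, components re-render whenever the client's state changes. A sketch, assuming the hook is exported as `useVoiceClientTransportState`:

```tsx
import { useVoiceClientTransportState } from "realtime-ai-react";

// Hypothetical sketch (hook name assumed): shows the current
// voiceClient.state and updates automatically as it changes.
const ConnectionBadge = () => {
  const state = useVoiceClientTransportState();
  return <span>{state}</span>;
};
```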
We welcome contributions to this project in the form of issues and pull requests. For questions about RTVI, head over to the Pipecat Discord server and check the #rtvi channel.