Redefining Reality
Demo video with 3D printed hologram box
⚠️ Please don't attempt to install this right now! The project is undergoing major changes, and exllamav2 will likely throw many errors.
A virtual waifu / assistant that you can speak to through your mic, and she'll speak back to you! Features include:
- You can speak to her with a mic
- She can speak back to you
- Has short-term memory and long-term memory
- Can open apps
- Smarter than you
- Fluent in English, Japanese, Korean, and Chinese
- Can control your smart home like Alexa if you set up Tuya (more info in Prerequisites)
More features I'm planning to add soon are in the Roadmap. For those of you who want to know, here's a summary of how it works:

1. The Python package SpeechRecognition captures what you say into your mic and writes it to an audio (.wav) file.
2. The .wav file is sent to OpenAI's Whisper speech-to-text transcription AI; the transcribed result is printed in the terminal and written to conversation.jsonl.
3. The vector database HyperDB uses cosine similarity to find the 2 closest matches to what you said in conversation.jsonl and appends them to the prompt to give Megumin context.
4. The text is passed through NLI/RTE and other checks to see whether you want to open an app or do something with your smart home.
5. The prompt is sent to llama.cpp; Megumin's response is printed to the terminal and appended to conversation.jsonl.
6. Finally, the response is spoken aloud by VITS TTS.
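The retrieval step can be sketched roughly as follows. This is a minimal pure-Python stand-in for what HyperDB does, using toy 3-dimensional vectors in place of real Sentence Transformers embeddings (the names `top_k_matches` and the sample history are illustrative, not from the project):

```python
import math

def cosine_similarity(a, b):
    # dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k_matches(query_vec, history, k=2):
    # history: list of (text, embedding) pairs loaded from conversation.jsonl
    scored = sorted(history, key=lambda item: cosine_similarity(query_vec, item[1]),
                    reverse=True)
    return [text for text, _ in scored[:k]]

# Toy embeddings stand in for Sentence Transformers output
history = [
    ("I like ramen", [0.9, 0.1, 0.0]),
    ("Turn on the LEDs", [0.0, 0.2, 0.9]),
    ("What's your favorite food?", [0.8, 0.3, 0.1]),
]
context = top_k_matches([1.0, 0.2, 0.0], history, k=2)
print(context)  # the two food-related lines, which get appended to the prompt
```

The two returned lines are what gets prepended to the prompt so the model has conversational context.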
- Python
- Llama-cpp-python
- Whisper
- SpeechRecognition
- PocketSphinx
- VITS-fast-fine-tuning
- VITS-simple-api
- HyperDB
- Sentence Transformers
- Tuya Cloud IoT
- Install Python 3.10.11 and add it to your PATH environment variable
- Install Git
- Install CUDA 11.7 if you have an Nvidia GPU
- Install Visual Studio Community 2022 and select `Desktop Development with C++` in the install options
- Install VTube Studio on Steam
- Download Megumin's VTube Studio Model
- Extract the downloaded zip so it's only one folder deep (you should be able to open the folder and see all the files there, not one folder containing everything)
- Open VTube Studio > Settings icon > `Open Data Folder` and move the folder there > Person icon > c001_f_costume_kouma
- Install the VB-Cable audio driver, but don't set it as your audio device just yet
- Open Control Panel > Sound and Hardware > Sound > Recording > find CABLE Output > right-click > Properties > Listen > check `Listen to this device` > for `Playback through this device`, select your headphones or speakers
- (Optional) Create a Tuya cloud project if you want to control your smart devices with the AI; for example, you can say 'Hey Megumin, can you turn on my LEDs'. It's a bit complicated, and I'll probably make a video on it later because it's hard to explain through text, but here's a guide that should help you out: https://developer.tuya.com/en/docs/iot/device-control-practice?id=Kat1jdeul4uf8
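The smart-home branch of the pipeline could look roughly like this. It's a hypothetical keyword-based sketch, not the project's actual checks; the `SMART_HOME_INTENTS` table and `detect_intent` helper are made up for illustration:

```python
# Hypothetical intent matcher: maps spoken phrases to smart-home actions.
# The real project runs NLI/RTE-style checks; this keyword lookup is a sketch.
SMART_HOME_INTENTS = {
    "turn on my leds": ("leds", "on"),
    "turn off my leds": ("leds", "off"),
    "turn on the lights": ("lights", "on"),
}

def detect_intent(utterance):
    text = utterance.lower()
    for phrase, action in SMART_HOME_INTENTS.items():
        if phrase in text:
            return action  # (device, state) to forward to the Tuya cloud API
    return None  # no smart-home command; the prompt goes to the LLM instead

print(detect_intent("Hey Megumin, can you turn on my LEDs"))  # ('leds', 'on')
```

When an intent is found, the project would issue the corresponding Tuya cloud call instead of (or alongside) generating a chat reply.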
- Open cmd in whatever folder you want the project to be in, and run `git clone --recurse-submodules https://github.com/DogeLord081/OneReality.git`
- Open the folder, run `python setup.py`, and follow the instructions
- Edit the variables in `.env` if you need to
- Run `OneReality.bat`. While it's running, open the start menu, type `Sound Mixer Options`, and open it. You might have to wait and make Megumin say something first, but you should see Python in the App Volume list
- Change Python's output to `CABLE Input (VB-Audio Virtual Cable)`
- Open VTube Studio > Settings icon > scroll to Microphone Settings > Select Microphone > `CABLE Output (VB-Audio Virtual Cable)` > person-with-settings icon > scroll to Mouth Smile > copy these settings > scroll to Mouth Open > copy these settings
- Open `Sound Mixer Options` again and change the input for VTube Studio to `CABLE Output (VB-Audio Virtual Cable)`
- You may need to restart your computer if lip sync doesn't work
- You're good to go! If you run into any issues, let me know on Discord and I can help you. Once again, it's https://discord.gg/PN48PZEXJS
- When you want to stop, say "goodbye", "bye", or "see you" somewhere in your sentence; that automatically ends the program. Otherwise, you can just press Ctrl + C or close the window
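The shutdown check described in the last step can be sketched like this (a minimal illustration of "a farewell word anywhere in the sentence ends the program"; the helper name `should_exit` is made up, not from the codebase):

```python
# Hypothetical sketch: exit when the transcribed sentence contains a farewell.
FAREWELLS = ("goodbye", "bye", "see you")

def should_exit(utterance):
    text = utterance.lower()
    # Substring match, so the keyword can appear anywhere in the sentence
    return any(word in text for word in FAREWELLS)

print(should_exit("Okay, see you tomorrow Megumin"))  # True
print(should_exit("Open Spotify please"))             # False
```

In the real program, a `True` result would break the main listen/respond loop before the next microphone capture.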
- Long-term memory
- Time and date awareness
- Virtual reality / augmented reality / mixed reality integration
- Gatebox-style hologram
- Animatronic body
- Alexa-like smart home control
- More languages for the AI's voice
- Japanese
- English
- Korean
- Chinese
- Spanish
- Indonesian
- Mobile version
- Easier setup
- Compiling into one exe
- Localized
- VTube Studio lip sync without the virtual audio driver, like in this project, but I don't really understand the VTube Studio API it uses
Distributed under the GNU General Public License v3.0. See LICENSE.txt for more information.
E-mail: danu0518@gmail.com
YouTube: https://www.youtube.com/@OneReality-tb4ut
Discord: https://discord.gg/PN48PZEXJS
Project Link: https://github.com/DogeLord081/OneReality