- Watch the MP4 demo (I use a mirrored camera, so my movements appear opposite to Hiyori's)
- Tested behaviors: Nod, Shake, Rotation, Eyeball Rotation, Blink, Eye Half-opening, Mouth Opening
Recently, I have been studying Deep Learning and Computer Vision, and I realized that I could build a VTuber model in Unity that simulates my facial expressions via computer vision. After watching some tutorials, I made a fantastic Live2D model, Momose Hiyori, and successfully became a VTuber!
- Test System: Windows 10, 64-bit
- Camera: Integrated Webcam
- Socket Transmission: Intranet
- Model: made with Live2D Cubism Editor 4.0
- Engine: Unity
- Script Language: C#
- Recognition Algorithm: Deep Learning
- Language: Python 3.7 (Anaconda)
- Main Required Libraries: opencv, dlib, numpy, torch
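
For context on how the pieces fit together: the Python recognition side sends the recognized facial parameters to the Unity model over a local socket. Below is a minimal sketch of what such a sender could look like; the port number and the comma-separated message format are assumptions for illustration, not the project's actual protocol (see the scripts under `Recognition` for that).

```python
import socket

# Hypothetical endpoint; the real port is defined in the project's scripts.
HOST, PORT = '127.0.0.1', 5066

def send_params(sock, roll, pitch, yaw, eye_open, mouth_open):
    """Pack facial parameters into a comma-separated string
    and send them to the Unity side."""
    msg = f'{roll:.4f},{pitch:.4f},{yaw:.4f},{eye_open:.4f},{mouth_open:.4f}'
    sock.send(msg.encode('utf-8'))

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.connect((HOST, PORT))  # Unity listens on the same machine (intranet)
    send_params(s, 0.0, 0.0, 0.0, 1.0, 0.0)
```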
| File | Explanation |
| --- | --- |
| Recognition | Packed algorithm for facial recognition |
| UnityAssets | Tutorial materials for those who want to make a Live2D VTuber themselves |
| Hiyori酱~ | Starter, a quick way to launch the program |
- Download and unzip the ZIP source file
- Install the required Python libraries (Anaconda is recommended)
- I have only tested on Windows; if your OS is not Windows, please test it yourself
- Windows
  - You can install the libraries I use with `pip install -r requirements.txt`
  - CPU (recommended for testing)
    - Install the libraries with `pip install -r requirements_cpu.txt`
    - If `dlib` fails to install, open the Anaconda Prompt and run `conda install -c menpo dlib`
  - GPU
    - First, check your CUDA version: 9.0 / 10.1 / 10.2 / None
    - Install PyTorch with the corresponding command, e.g. `conda install pytorch torchvision cudatoolkit=10.2 -c pytorch` for CUDA 10.2
    - Install the other libraries with `pip install -r requirements_gpu.txt`
    - If you have CUDA 10, run `pip install onnxruntime-gpu` to get faster inference speed with the ONNX model
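
Before moving on, it can help to verify the GPU setup. This small check is not part of the project, just a quick sanity test:

```python
import torch
print(torch.cuda.is_available())     # True if PyTorch can use your CUDA device

try:
    import onnxruntime
    print(onnxruntime.get_device())  # 'GPU' with onnxruntime-gpu, otherwise 'CPU'
except ImportError:
    print('onnxruntime is not installed')
```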
- Download `VTuber_Hiyori.zip` and `ckpts.zip` (if you want to use `onnxruntime` for faster speed) from Release
- Unzip `ckpts` and put it under `Recognition\face_alignment`
- Unzip `VTuber_Hiyori` and start `VTuber_MomoseHiyori.exe` (please wait, and do not start any other applications at the same time!)
- Run `Hiyori酱~.bat`
- If ひよりちゃん starts to mimic your facial expressions, congratulations! You are now a VTuber!
- The latest version has been released; you can download and use it.
If you find that recognition does not work well, try the following:

- Use brighter lighting: make your face clearly visible; combining natural light with a point light works well.
- Adjust your position: add `--debug` to `Hiyori酱~.bat` and run it again to start a camera demo that shows your position. Keep the outer green boundary large and centered, but not larger than the demo boundary (a simplified sketch of such a preview follows this list).
- Do not wear glasses: glasses may reduce the accuracy of eye recognition.
- Show your forehead: hair covering your forehead may interfere with eye recognition.
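
For reference, the camera demo essentially draws the detected face box on each frame. The sketch below is a simplified illustration built on `opencv` and `dlib` (which the project already requires), not the project's exact debug code:

```python
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
cap = cv2.VideoCapture(0)            # integrated webcam

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray):      # detect faces in the grayscale frame
        # Draw the green boundary around the detected face
        cv2.rectangle(frame, (face.left(), face.top()),
                      (face.right(), face.bottom()), (0, 255, 0), 2)
    cv2.imshow('debug', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```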
- Uses Live2D instead of a 3D model
- Adds 2 eye events: Eye Half-opening and Eyeball Rotation (a sketch of the underlying idea follows this list)
- Optimizes some parameters for more accurate recognition
- Starts easily, with the window fixed on top and borderless, which is more convenient for live streaming
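
The eye events boil down to measuring how open each eye is. The standard way to do this from facial landmarks is an eye aspect ratio (EAR); the sketch below follows that common idea rather than the project's exact implementation, and the thresholds are placeholder values you would tune yourself:

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye, in dlib's 68-point order.
    Returns a ratio that is large for an open eye and near zero when closed."""
    eye = np.asarray(eye, dtype=float)
    v1 = np.linalg.norm(eye[1] - eye[5])   # vertical distance 1
    v2 = np.linalg.norm(eye[2] - eye[4])   # vertical distance 2
    h = np.linalg.norm(eye[0] - eye[3])    # horizontal (corner to corner)
    return (v1 + v2) / (2.0 * h)

def eye_state(ear, closed=0.15, half=0.25):  # hypothetical thresholds
    if ear < closed:
        return 'closed'
    if ear < half:
        return 'half-open'
    return 'open'
```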
- Description: this is a template for most Cubism Live2D models. If you want to customize your own Live2D model, you can read this tutorial and follow the steps below.
- Recommended Unity Engine: Unity 2019.4.1f1 LTS
- Before you start: equip yourself with basic knowledge of Unity operations
- Prepare the Live2D SDK: download the SDK from the official website, or use `CubismSdkForUnity-4-r.1`, which I have downloaded for you under `UnityAssets`
- Create a new Unity project
- Import the Live2D SDK: drag `CubismSdkForUnity-4-r.1` into `Assets` and choose to import all
- Restart Unity: do not forget this step, otherwise the SDK probably cannot work!
- Import assets: delete the default scene file, then drag the `Momose`, `Scene` and `Script` folders under `Assets`
- Import the model: a prefab will be generated automatically at `Assets/Momose/hiyori_pro_t08.prefab`. Open `Scene/MomoseHiyori` and drag the prefab into the scene
- Set position: select the prefab and move it ahead along the Y axis (blue)
- Initialization: move the control balls to initialize the pose
- Bind Script
- Export & Build
- Start to Test
- Recommended model website: https://www.live2d.com/en/download/sample-data/
- Now, enjoy making your own Live2D VTuber!
Thanks to the following blogs and projects, which I used as references:

- Algorithm

| Project | Author | LICENSE |
| --- | --- | --- |
| head-pose-estimation | Yin Guobing | LICENSE |
| face-alignment | Adrian Bulat | LICENSE |
| GazeTracking | Antoine Lamé | LICENSE |
| VTuber_Unity | AI葵 | LICENSE |
- Kennard Wang ( 2020.6.27 )