Author: Samat Zholdassov
Focus: Multimodal Machine Learning · AI for Health · Biomedical Informatics
This repository contains early-stage research and prototype code for a multimodal AI system that combines speech, text, behavioral, and wearable data to detect early signs of mental and physical health risks (e.g., depression, chronic stress, sleep disorders, and cardiometabolic risk).
The project is aligned with the broader vision of building learning health systems and exploring digital and behavioral biomarkers for proactive care.
- Build a reproducible multimodal data pipeline for:
  - Speech (audio features)
  - Text (sentiment and semantic embeddings)
  - Wearable biometrics (HRV, sleep, activity)
  - Behavioral digital signals
- Train and evaluate multimodal ML models for early risk detection.
- Develop a small prototype for a personal health assistant interface.
- Prepare the groundwork for future collaboration, publication, and clinical integration.
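As a rough illustration of the pipeline objective above, the sketch below shows early fusion of per-modality feature vectors into a single model input. The field names and example values are illustrative assumptions, not the project's actual schema.

```python
from dataclasses import dataclass

@dataclass
class Features:
    """Hypothetical per-modality feature vectors for one subject/session."""
    speech: list[float]      # e.g., pitch and energy statistics from audio
    text: list[float]        # e.g., sentiment score or embedding summary
    wearable: list[float]    # e.g., HRV, sleep duration, step count
    behavioral: list[float]  # e.g., screen-time or typing-rhythm signals

def fuse(f: Features) -> list[float]:
    """Early fusion: concatenate per-modality vectors into one input vector."""
    return f.speech + f.text + f.wearable + f.behavioral

sample = Features(
    speech=[0.42, 1.7],
    text=[-0.3],
    wearable=[52.0, 6.5, 8400.0],
    behavioral=[3.2],
)
fused = fuse(sample)  # a single 7-dimensional vector for a downstream model
```

In practice, each modality would be produced by its own extraction step in `src/`; fusion strategy (early vs. late) is an open design choice for the project.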
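For the risk-detection objective, a minimal sketch of scoring a fused feature vector with a logistic function is shown below. The weights and bias are placeholders, not trained values, and the function name is an assumption for illustration.

```python
import math

def risk_score(features: list[float], weights: list[float], bias: float = 0.0) -> float:
    """Map a fused feature vector to a risk probability in (0, 1)
    via a logistic (sigmoid) function. Weights would come from training."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# With zero weights the score is the neutral midpoint 0.5.
neutral = risk_score([0.0, 0.0], [0.0, 0.0])  # 0.5
```

A trained multimodal model would replace the hand-set weights; this only fixes the interface between the fused features and a probabilistic risk output.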
- proposal/ – Research proposal and documentation
- src/ – Python source code (pipelines, models, training scripts)
- notebooks/ – Jupyter notebooks for EDA and experiments
- requirements.txt – Python dependencies