
AI-Powered Multimodal Personal Health Assistant for Early Risk Detection

Author: Samat Zholdassov
Focus: Multimodal Machine Learning · AI for Health · Biomedical Informatics

This repository contains early research and prototype code for a multimodal AI system that combines speech, text, behavioral, and wearable data to detect early signs of mental and physical health risks (e.g., depression, chronic stress, sleep disorders, cardiometabolic risk).
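To make the fusion idea concrete, here is a minimal late-fusion sketch in PyTorch, assuming each modality has already been reduced to a fixed-length feature vector. The class name, dimensions, and the four-way risk head are illustrative assumptions, not this project's actual architecture.

```python
import torch
import torch.nn as nn

class LateFusionRiskModel(nn.Module):
    """Illustrative late-fusion classifier over pre-extracted modality features."""

    def __init__(self, dims=None, hidden=64, n_risks=4):
        super().__init__()
        # Per-modality input sizes (placeholder values).
        dims = dims or {"speech": 40, "text": 384, "wearable": 8}
        # One small encoder per modality, projecting to a shared hidden size.
        self.encoders = nn.ModuleDict({
            name: nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
            for name, dim in dims.items()
        })
        # Concatenated encodings -> one logit per risk (multi-label setup).
        self.head = nn.Linear(hidden * len(dims), n_risks)

    def forward(self, features):
        # features: dict of modality name -> (batch, dim) float tensor.
        parts = [self.encoders[name](x) for name, x in sorted(features.items())]
        return self.head(torch.cat(parts, dim=-1))  # raw logits

model = LateFusionRiskModel()
batch = {
    "speech": torch.randn(2, 40),   # e.g. pooled MFCC statistics
    "text": torch.randn(2, 384),    # e.g. sentence-embedding vectors
    "wearable": torch.randn(2, 8),  # e.g. HRV / sleep / activity summaries
}
probs = torch.sigmoid(model(batch))  # (2, 4) per-risk probabilities
```

Late fusion keeps each modality's encoder independent, which is one common way to cope later with missing modalities (e.g., days without wearable data).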

The project is aligned with the broader vision of building learning health systems and exploring digital and behavioral biomarkers for proactive care.


Goals

  • Build a reproducible multimodal data pipeline (see the feature-extraction sketch after this list) for:
    • Speech (audio features)
    • Text (sentiment & semantic embeddings)
    • Wearable biometrics (HRV, sleep, activity)
    • Behavioral digital signals
  • Train and evaluate multimodal ML models for early risk detection.
  • Develop a small prototype for a personal health assistant interface.
  • Prepare the groundwork for future collaboration, publication, and clinical integration.
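As a starting point for the pipeline goal above, the following rough sketch extracts one fixed-length feature vector per modality. It assumes librosa for audio, sentence-transformers for text embeddings, and raw RR intervals for HRV; the function names and the RMSSD summary are illustrative choices, not the repository's implemented pipeline.

```python
import numpy as np
import librosa
from sentence_transformers import SentenceTransformer

def speech_features(wav_path):
    """Pooled MFCC statistics as a fixed-length speech vector."""
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)  # (20, frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])  # (40,)

_text_model = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dim embeddings

def text_features(journal_entry):
    """Semantic embedding of free text (e.g. a daily journal entry)."""
    return _text_model.encode(journal_entry)  # (384,)

def wearable_features(rr_intervals_ms):
    """Simple HRV summary from RR intervals in milliseconds.

    A fuller version would append sleep and activity statistics.
    """
    rr = np.asarray(rr_intervals_ms, dtype=float)
    diffs = np.diff(rr)
    rmssd = np.sqrt(np.mean(diffs ** 2))  # classic short-term HRV metric
    return np.array([rr.mean(), rr.std(), rmssd])
```

Each extractor yields a plain NumPy vector, so the outputs can be cached to disk and fed to any downstream fusion model.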

Repository Structure

proposal/        – Research proposal and documentation
src/             – Python source code (pipelines, models, training scripts)
notebooks/       – Jupyter notebooks for EDA and experiments
requirements.txt – Python dependencies

About

My goal is to develop AI systems that detect early signs of mental and physical health decline by integrating speech, behavior, wearable data, and real-world biomedical information. I want to work at the intersection of multimodal machine learning, data engineering, and clinical decision support, fields central to the work of Stanford’s Biomedical Informatics Research group.
