🙈
This is life: eat, drink, sleep.


Hi 👋, I'm marsggbo

A student who keeps slim and smart

Personal Website


🇨🇳→🇭🇰→🇸🇬 I am currently a Research Scientist at A*STAR, Singapore, affiliated with CFAR, under the supervision of Prof. Ong Yew Soon. Prior to this, I completed my Ph.D. in Computer Science at Hong Kong Baptist University (HKBU), where I was advised by Prof. Chu Xiaowen. I earned my Bachelor's degree with honors from the School of Electronic Information and Communications at Huazhong University of Science & Technology (HUST).

Driven by a mission to democratize deep learning, my research is dedicated to advancing the accessibility and efficiency of large-scale deep learning models, particularly Large Language Models (LLMs). My goal is to bridge the theoretical aspects of machine learning with practical system design to create scalable, robust, and trustworthy AI systems that are widely accessible. My research interests include:

  • 1. Model-Centric AI:
    • Architecture Search: Neural Architecture Search (e.g., multi-objective NAS, training-free NAS, resource-aware NAS), Sparse Models (e.g., Mixture-of-Experts)
    • Hyper-parameter Optimization (HPO): Grid/Random Search, Evolutionary Algorithms, Differentiable Optimization
    • Model Compression: Pruning, Quantization, Knowledge Distillation
  • 2. Data-Centric AI:
    • Automatic Data Augmentation (ADA), Data Generation, Dataset Compression
    • RAG, LLM Alignment
  • 3. HPC AI:
    • Memory efficiency: Offloading, KV-cache
    • LLM training acceleration: Distributed Parallelism (data parallel, tensor parallel, pipeline parallel)
    • LLM inference optimization: Batch Scheduling, Dynamic Inference Paths
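As a tiny illustration of one of the directions above (Random Search for HPO), here is a minimal self-contained sketch; the objective function and search space below are purely hypothetical placeholders, not taken from any of my repositories:

```python
import random

def objective(lr, batch_size):
    """Hypothetical validation score: peaks near lr=0.01, batch_size=64."""
    return -abs(lr - 0.01) - abs(batch_size - 64) / 64

# Each entry maps a hyper-parameter name to a sampler for its search space.
search_space = {
    "lr": lambda: 10 ** random.uniform(-4, -1),        # log-uniform learning rate
    "batch_size": lambda: random.choice([16, 32, 64, 128]),
}

def random_search(n_trials=50, seed=0):
    """Sample configurations independently and keep the best one seen."""
    random.seed(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = {name: sample() for name, sample in search_space.items()}
        score = objective(**cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

best_cfg, best_score = random_search()
print(best_cfg, best_score)
```

Random search is often a strong HPO baseline: unlike grid search, it does not waste trials on unimportant dimensions, and it parallelizes trivially since trials are independent.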

Contact Me

AutoML & Machine Learning


Pinned

  1. hyperbox

    https://hyperbox-doc.readthedocs.io/en/latest/

    Python · 25 stars · 4 forks

  2. EAGAN

    (ECCV 2022) EAGAN: Efficient Two-stage Evolutionary Architecture Search for GANs

    Jupyter Notebook · 9 stars · 2 forks

  3. torchline

    Easy-to-use PyTorch

    Python · 71 stars · 2 forks

  4. automl_a_survey_of_state_of_the_art

    AutoML: A Survey of State-of-the-Art

    Python · 14 stars · 7 forks

  5. CovidNet3D

    [AAAI 2021] Automated Model Design and Benchmarking of 3D Deep Learning Models for COVID-19 Detection with Chest CT Scans

    Python · 12 stars · 6 forks

  6. deeplearning.ai_JupyterNotebooks

    Jupyter Notebook assignments from the DeepLearning.ai courses

    Jupyter Notebook · 576 stars · 322 forks