Group meeting schedule, Spring 2024

Time: 10-11am on Tuesday mornings in Building 90 (SICCS), second-floor meeting room (223)

What to present?

  1. Your research.
  2. Somebody else’s paper.
  3. Some useful software.
  • Jan 16: Toby, Title: mlr3resampling: cross-validation algorithms for the mlr3 framework in R. Abstract: Cross-validation is an essential algorithm in any machine learning analysis. The mlr3 framework in R provides a standard interface to several common machine learning algorithms, including “standard” K-fold cross-validation via mlr3::ResamplingCV. In this talk I will discuss two other kinds of K-fold cross-validation that I have implemented in the mlr3resampling R package. First, “Same versus Other” cross-validation can be used to determine the extent to which you can get accurate predictions by training on a different data subset/group (person, image, geographic region, year, etc.). Second, “Variable Size Train Set” cross-validation can be used to determine how many train set samples are necessary to get optimal prediction accuracy on the test set. Slides, Vignette (see the mlr3resampling sketch at the end of this page)
  • Jan 23:
    • Presenter: Tung L. Nguyen
    • Title: Optimizing Changepoint Detection through Deep Learning-based Penalty Tuning
    • Abstract: This study focuses on improving changepoint detection algorithms (OPART and LOPART) by dynamically learning the optimal penalty parameter, lambda. My approach aims to minimize predicted label errors using deep learning methods. Comparative analysis against established methods such as the Bayesian Information Criterion (BIC) and linear models shows that the proposed approach achieves higher accuracy. This research enhances the adaptability and effectiveness of changepoint detection algorithms for anomaly detection in time-series data. (See the penalty sketch at the end of this page.)
    • Link for the presentation: Slides
  • Jan 30: Doris, I will present basic data manipulation using data.table, as part of my research on expanding the open-source ecosystem around data.table in R. Link to my slides: here (see the data.table sketch at the end of this page)
  • Feb 6: Guangting Yu (ASU, Applied Math) Title: Deep Learning methods for numerical solutions to differential equations. Abstract: Physics-Informed Neural Networks (PINNs) have achieved significant success as a machine learning method for numerically solving differential equations. We explore the low-rank features that emerge from training PINNs to solve ordinary differential equations (ODEs) and build low-rank architectures to leverage them, achieving a reduction in model complexity. We also use the Deep Ritz method to solve partial differential equations (PDEs) in variational form, including boundary-value problems and eigenvalue problems of Laplace equations. We adopt an alternative optimizer, Ensemble Kalman Inversion (EKI), to replace stochastic gradient descent/ADAM in minimizing the proposed loss functions. This optimizer is used consistently to train the neural networks in the aforementioned examples for solving ODEs/PDEs. Additionally, we solve inverse problems in PDEs using the PINN setup to recover unknown parameters, given partial or sparse observations of the PDE solution. Further applications of PINNs to fractional differential equations will also be illustrated. Link for the presentation: https://mathpost.asu.edu/~gyu. If the audience is interested, I can stay after the presentation to run demo code (hands-on session).
  • Feb 13: Danny, Title: ML Experimentation with necromass data set. Slides
  • Feb 20: ML lab cross-country ski day, meet at SICCS lobby at 9:30am.
  • Feb 27:
    • Presenter: Tung L. Nguyen
    • Title: Fast Python For Loops: Boosting Speed for Efficiency
    • GitHub repo: Link
  • Mar 5:
    • Presenter: Ani
    • Topic: GitHub Actions; automated (performance) regression testing on PRs. Slides
  • Mar 12: spring break.
  • Mar 19: Toby not present.
  • Mar 26: Toby, A tutorial on interpretable machine learning algorithms for understanding factors related to childhood autism, with links to slides and source.
  • Apr 2: Danny: PhD Comprehensive Exams Practice Talk
  • Apr 9:
    • Presenter: Tung L. Nguyen
    • Title: Fine-Tuning Optimal Partitioning: Leveraging Deep Learning for Penalty Parameter Optimization
    • Abstract: This study introduces a new method to optimize the penalty parameter, lambda (λ), in the Optimal Partitioning (OPART) algorithm. Unlike existing methods, which rely solely on non-parametric or linear models, our method uses carefully chosen features and leverages deep learning techniques to enhance accuracy. Through experiments, we demonstrate that our method improves changepoint detection accuracy in sequence data compared with existing methods.
  • Apr 16:
    • Doris, practice talk for the useR conference: a lightning talk on benchmarking performance for data.table. See slides here.
  • Apr 23
  • Apr 30
    • Doris, second practice talk for the useR conference: a lightning talk on benchmarking performance for data.table. See slides here.
  • May 7
    • Ani, practice talk for JSM ’24, slides
  • May 14
  • May 21
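
Code sketches

A minimal sketch of “Same versus Other” cross-validation from the Jan 16 talk. This is my reading of the mlr3resampling API, not authoritative usage: the class name ResamplingSameOtherCV comes from the package, but depending on the package version the grouping column may need the standard “group” column role rather than “subset”, so check the Vignette linked above.

#+begin_src R
library(mlr3)
library(data.table)
library(mlr3resampling)

## Toy regression data with two groups; the question is whether a model
## trained on one group ("Other") predicts held-out rows as accurately
## as a model trained on the same group ("Same").
N <- 200
dt <- data.table(
  x = runif(N),
  loc = rep(c("A", "B"), each = N / 2))
dt[, y := ifelse(loc == "A", x, 2 * x) + rnorm(N, sd = 0.1)]

task <- TaskRegr$new("toy", dt, target = "y")
task$col_roles$feature <- "x"
## Assumption: mlr3resampling reads the grouping from this column role
## (older versions may use task$col_roles$group instead).
task$col_roles$subset <- "loc"

same_other <- ResamplingSameOtherCV$new()
same_other$param_set$values$folds <- 3

## Featureless learner as a baseline; score each same/other train-test split.
result <- resample(task, lrn("regr.featureless"), same_other)
result$score(msr("regr.mse"))
#+end_src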
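To make the role of the penalty parameter lambda in the Jan 23 and Apr 9 talks concrete, here is a sketch using the changepoint package's PELT solver as a stand-in for OPART; both minimize a penalized changepoint criterion, and the talks' contribution is learning lambda rather than fixing it (e.g. by BIC).

#+begin_src R
library(changepoint)

set.seed(1)
## Simulated sequence with two true mean shifts.
x <- c(rnorm(100, mean = 0), rnorm(100, mean = 3), rnorm(100, mean = 0))

## Each additional changepoint costs lambda, so larger lambda means
## fewer detected changes; BIC corresponds to one fixed choice of lambda.
for (lambda in c(1, 10, 100)) {
  fit <- cpt.mean(x, penalty = "Manual", pen.value = lambda, method = "PELT")
  cat("lambda =", lambda, "-> changepoints at:", cpts(fit), "\n")
}
#+end_src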
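A few basic data.table idioms in the spirit of the Jan 30 talk; the table and its columns are made up for illustration.

#+begin_src R
library(data.table)

## Made-up flight-delay table.
DT <- data.table(
  carrier = c("AA", "AA", "UA", "UA", "DL"),
  dep_delay = c(5, 12, -3, 20, 7),
  distance = c(1000, 1500, 800, 1200, 600))

## The general form is DT[i, j, by]: filter rows (i), compute (j), group (by).
DT[dep_delay > 0]                                     # subset rows
DT[, .(mean_delay = mean(dep_delay)), by = carrier]   # aggregate per group
DT[, long_haul := distance > 1000]                    # add a column by reference
setorder(DT, -dep_delay)                              # sort in place
#+end_src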