Problem:
Merlin provides documentation and a number of example notebooks on how to use tools like NVTabular, Dataloader and Merlin Models. To build a pipeline for training and evaluation, a Data Scientist needs to analyze that material, copy and paste code snippets demonstrating the API, and glue that code together into scripts for experimentation and benchmarking.
It might also not be clear to users which advanced API options offered by Merlin Models can be mapped to hyperparameters and potentially improve model accuracy.
Goal:
This RMP provides a Quick-start for building training pipelines for ranking models.
It addresses the ranking-models part of the larger RMP NVIDIA-Merlin/models#732, in particular steps 4-7 of the Data Scientist journey when experimenting with Merlin Models.
The Quick-start for ranking is composed of:
- Template scripts
  - Generic template script for preprocessing
  - Generic template script for training ranking models, exposing the main hyperparameters for ranking models. It includes support for ranking models such as DLRM, DCN-v2, DeepFM, and Wide&Deep, and for multi-task learning (MTL) models such as MMOE, CGC, and PLE.
- Documentation
  - Documentation of the scripts' command-line arguments
  - Documentation of best practices learned from our experimentation:
    - Hyperparameter tuning: search space, the most important hyperparameters, and the best hyperparameters found for the public TenRec dataset for STL and MTL models
    - Intuitions about API options (building blocks, arguments) that can improve model accuracy (a minimal sketch of how such options map to hyperparameters follows this list)
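As one illustration of that last point, the sketch below shows how Merlin Models building blocks and their arguments, here for DLRM, can be surfaced as hyperparameters by the training script. Paths, column names, and values are placeholder assumptions, not the script's actual defaults.

```python
# Sketch only: illustrates how Merlin Models API options map to hyperparameters.
# Paths, column names and hyperparameter values below are hypothetical.
import merlin.models.tf as mm
from merlin.io import Dataset

train = Dataset("train_preproc/*.parquet")  # preprocessed data with a Merlin schema

# embedding_dim and the MLP layer sizes are typical arguments a ranking
# training script could expose as command-line hyperparameters.
model = mm.DLRMModel(
    train.schema,
    embedding_dim=64,
    bottom_block=mm.MLPBlock([128, 64]),
    top_block=mm.MLPBlock([128, 64, 32]),
    prediction_tasks=mm.BinaryClassificationTask("click"),  # assumed binary target column
)
```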
Constraints:
Preprocessing - The preprocessing template notebook will perform basic feature encoding for categorical variables (e.g., Categorify) and continuous variables (e.g., standardization). The customer can expand the template with the more advanced preprocessing ops demonstrated in our examples. A minimal sketch of that basic encoding follows.
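In this sketch (column names and file paths are placeholder assumptions), categorical columns are encoded with NVTabular's Categorify and continuous columns are standardized with Normalize:

```python
# Sketch only: basic categorical encoding (Categorify) and continuous
# standardization (Normalize) with NVTabular. Columns and paths are hypothetical.
import nvtabular as nvt
from merlin.schema import Tags

cat_features = ["user_id", "item_id", "item_category"] >> nvt.ops.Categorify()
cont_features = ["user_age", "item_price"] >> nvt.ops.Normalize()
target = ["click"] >> nvt.ops.AddMetadata(tags=[Tags.BINARY_CLASSIFICATION, Tags.TARGET])

workflow = nvt.Workflow(cat_features + cont_features + target)

train = nvt.Dataset("train/*.parquet")
workflow.fit(train)
workflow.transform(train).to_parquet("train_preproc/")
workflow.save("workflow/")  # reused later for serving
```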
Training - The training and evaluation script for Merlin Models should be fully configurable, taking as input the Parquet files and schema plus a number of hyperparameters exposed via command-line arguments. The output of this script should be the evaluation metrics, logged to a CSV file and also to Weights & Biases. A minimal sketch of that flow is shown below.
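The sketch assumes a DCN-v2 model and a target column tagged in the schema; paths, the W&B project name, and hyperparameter values are illustrative placeholders (in the actual script they would come from command-line arguments):

```python
# Sketch only: train, evaluate and log metrics to CSV and Weights & Biases.
# In the actual script, batch size, learning rate, epochs, etc. would be
# parsed from command-line arguments.
import csv
import wandb
import merlin.models.tf as mm
from merlin.io import Dataset

train = Dataset("train_preproc/*.parquet")
valid = Dataset("valid_preproc/*.parquet")

model = mm.DCNModel(train.schema, depth=2, deep_block=mm.MLPBlock([128, 64]))
model.compile(optimizer="adam")
model.fit(train, validation_data=valid, batch_size=16 * 1024, epochs=1)
metrics = model.evaluate(valid, batch_size=16 * 1024, return_dict=True)

wandb.init(project="quick-start-ranking")  # hypothetical W&B project name
wandb.log(metrics)
with open("metrics.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(metrics.keys()))
    writer.writeheader()
    writer.writerow(metrics)
```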
Starting Point:
The ranking training scripts we have developed for the MTL research project.
Tasks:
- PR: NVIDIA-Merlin/models#988
- OutputBlock instead of PredictionsTasks: models#914
- PredictionBlock instead of PredictionTask: #917
- TenRec dataset experiments
- Documentation of preprocessing.py and ranking_train_eval.py
- Deployment and inference with Triton (see the sketch after this list)
- Testing
- Blog post
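For the Triton deployment task, one possible shape of the serving step is sketched below, under the assumption that the NVTabular workflow and the TensorFlow model were saved during preprocessing and training; paths are placeholders:

```python
# Sketch only: combine the saved NVTabular workflow and the trained TensorFlow
# model into an ensemble that Triton Inference Server can serve. Paths are hypothetical.
import nvtabular as nvt
import tensorflow as tf
from merlin.systems.dag import Ensemble
from merlin.systems.dag.ops.tensorflow import PredictTensorflow
from merlin.systems.dag.ops.workflow import TransformWorkflow

workflow = nvt.Workflow.load("workflow/")
model = tf.keras.models.load_model("dlrm_model/")

serving_graph = (
    workflow.input_schema.column_names
    >> TransformWorkflow(workflow)
    >> PredictTensorflow(model)
)
ensemble = Ensemble(serving_graph, workflow.input_schema)
ensemble.export("/tmp/ensemble")  # directory to point Triton's model repository at
```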