Training dataset preprocessing #9

@saeidnp

Description

Hi,

I'm trying to run the training code, and the README directs users to the Diffusion Planner repository for dataset preprocessing:

> Convert nuplan data into npz and generate corresponding data list json file as indicated in https://github.com/ZhengYinan-AIR/Diffusion-Planner.

However, there appears to be an incompatibility between the datasets generated by the Diffusion Planner preprocessing code and what Flow Planner expects. For example, the `ego_current_state` array in the Diffusion Planner-generated `.npz` files is 10-dimensional, while the Flow Planner model expects a 16-dimensional array. This mismatch is also reflected in the differing mean/std vector shapes in the Diffusion Planner normalization file (linked here) and the Flow Planner normalization stats (linked here).
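
For reference, here is a minimal check that shows the mismatch; the file path is just a placeholder for any sample produced by the Diffusion Planner preprocessing:

```python
import numpy as np

# Placeholder path: any .npz sample written by the Diffusion Planner
# preprocessing pipeline.
sample = np.load("path/to/processed_sample.npz")

print(sample["ego_current_state"].shape)
# Prints (10,) for Diffusion Planner output, while Flow Planner's
# normalization stats imply a 16-dimensional ego state.
```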

I also noticed that the `data_process` directory in this repo is very similar (but not identical) to the one in Diffusion Planner. Am I correct in assuming that dataset preprocessing for Flow Planner requires scripts analogous to `data_process.py` and `data_process.sh` from the Diffusion Planner repository? If so, could you include the appropriate preprocessing scripts in this repo as well?
