
v2.0.0

@Aradhye2002 released this 22 Jan 05:40

We’re excited to announce ECoDepth v2.0.0, a major restructuring of our monocular depth estimation codebase. This release offers significant improvements to make training and inference more straightforward and flexible:


What’s New

  • Integrated Model Downloading
    Automatically download and cache pretrained checkpoints; no more manual file handling.

  • Generic DepthDataset
    Load any custom dataset with ease, using a consistent API.

  • PyTorch Lightning Integration
    Enjoy streamlined training, validation, and checkpointing, with support for ONNX and TorchScript exports.

  • Config-Based Workflows
    JSON config files now replace bash scripts, improving clarity and maintainability.

  • Simplified Dependencies
    Removed extraneous packages like mmcv. Installing ECoDepth is now much smoother.

  • Single Model Download
    We no longer require separate downloads for Stable Diffusion, CLIP, and ViT checkpoints: one file is all you need.

  • Three Separate Workflows
    Clearly divided into train/, test/, and infer/ directories for easier navigation.
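
The "download once, cache locally" behavior can be sketched roughly as below. This is an illustrative stdlib-only sketch, not the actual ECoDepth downloader; the function name, cache directory, and URL handling are assumptions made for the example:

```python
# Sketch of integrated checkpoint downloading with a local cache.
# NOTE: fetch_checkpoint and the ~/.cache/ecodepth directory are
# illustrative assumptions, not the real ECoDepth API.
import urllib.request
from pathlib import Path

def fetch_checkpoint(url, cache_dir="~/.cache/ecodepth"):
    """Download a checkpoint unless a cached copy already exists."""
    cache = Path(cache_dir).expanduser()
    cache.mkdir(parents=True, exist_ok=True)
    target = cache / Path(url).name
    if not target.exists():
        # First call downloads; later calls reuse the cached file.
        urllib.request.urlretrieve(url, target)
    return target
```

The idea is simply that every entry point calls one helper like this, so users never manage checkpoint files by hand.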
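
The generic dataset API amounts to the usual index-based protocol (`__len__` / `__getitem__`). Here is a framework-free sketch; the class name, constructor arguments, and sample keys are assumptions for illustration (in practice such a class would subclass `torch.utils.data.Dataset`), not the actual `DepthDataset` implementation:

```python
# Illustrative sketch of a generic depth dataset: any custom dataset
# boils down to paired image/depth samples behind a consistent API.
# NOTE: all names here are assumptions, not the real ECoDepth classes.
class GenericDepthDataset:
    """Pairs each image with its depth map by index."""

    def __init__(self, image_paths, depth_paths, transform=None):
        assert len(image_paths) == len(depth_paths), "image/depth counts must match"
        self.image_paths = list(image_paths)
        self.depth_paths = list(depth_paths)
        self.transform = transform

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        sample = {"image": self.image_paths[idx], "depth": self.depth_paths[idx]}
        if self.transform is not None:
            sample = self.transform(sample)
        return sample
```

Because the interface is uniform, the same training and testing code can consume any dataset that exposes it.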


Getting Started

  1. Install PyTorch (with or without GPU support).
  2. Install Dependencies
    pip install -r requirements.txt
  3. Configure & Run
    • Training: python train/train.py --config train/train_config.json
    • Testing: python test/test.py --config test/test_config.json
    • Inference: python infer/infer_image.py --config infer/image_config.json
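
A config-driven entry point of this kind can be sketched with only the standard library. The config keys below are illustrative assumptions; consult train/train_config.json for the actual fields:

```python
# Minimal sketch of the config-based workflow: a JSON file replaces
# bash-script arguments. NOTE: the "max_epochs" key is an illustrative
# assumption, not necessarily a real ECoDepth config field.
import argparse
import json

def load_config(path):
    """Read a JSON config file into a plain dict."""
    with open(path) as f:
        return json.load(f)

def main(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument("--config", required=True, help="path to a JSON config")
    args = parser.parse_args(argv)
    cfg = load_config(args.config)
    print(f"training for {cfg.get('max_epochs', 1)} epochs")
    return cfg
```

Keeping all hyperparameters in one JSON file makes runs reproducible and diffs between experiments easy to read.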

For more details, please see the updated README.


Note:

  • The original code that generated the CVPR 2024 paper results is tagged as v1.0.0.
  • For new projects, we strongly recommend using v2.0.0 for its cleaner, more modular design.

Thank you for using ECoDepth! Feel free to open an issue if you have any questions or feedback.