3D-Autoencoder

A 3D auto-encoder project based on ShapeNet dataset

Introduction

  • This project is a real 3D auto-encoder based on the ShapeNet dataset.
  • The inputs are real 3D objects stored as 3D arrays, and 3D convolution layers are used to learn the patterns of the objects (see the input sketch after this list).
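As a minimal illustration of this input format (the 32x32x32 size comes from the Dataset section below; the explicit channel axis is an assumption for Keras-style Conv3D layers):

```python
import numpy as np

# A voxelized object is a binary occupancy grid: 1 = occupied voxel, 0 = empty space.
voxels = np.zeros((32, 32, 32), dtype=np.float32)
voxels[10:22, 10:22, 10:22] = 1.0  # toy "block" object, for illustration only

# 3D convolution layers expect an explicit channel axis, so a single sample
# becomes (depth, height, width, channels) and a batch adds a leading axis.
x = voxels[np.newaxis, ..., np.newaxis]
print(x.shape)  # (1, 32, 32, 32, 1)
```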

Installation

Dataset

  • We use 3D ShapeNet as our dataset.
  • To avoid working with the .off mesh files directly, we use the volumetric data provided with their source code.
  • To simplify training, we select only 10 classes; you can of course use more.
  • The original input size is 30x30x30. To fit our model, we pad each volume to 32x32x32 (see the padding sketch after this list). The original data looks like the example below (the class of this object is "chair"):
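A minimal sketch of the padding step, assuming symmetric zero padding (one empty voxel on every side); the exact scheme used in read_off.py may differ:

```python
import numpy as np

def pad_to_32(volume):
    """Pad a 30x30x30 occupancy grid to 32x32x32 with empty voxels."""
    assert volume.shape == (30, 30, 30)
    # One voxel of zero padding on each side of every axis: 1 + 30 + 1 = 32.
    return np.pad(volume, pad_width=1, mode='constant', constant_values=0)

volume = np.random.randint(0, 2, size=(30, 30, 30)).astype(np.float32)
print(pad_to_32(volume).shape)  # (32, 32, 32)
```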

Architecture

  • The architecture of this auto-encoder is shown below:
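For readers without access to the figure, here is a rough Keras sketch of a 3D convolutional auto-encoder of this kind; the number of layers, filter counts, strides, and loss are illustrative assumptions, not the exact configuration in train.py:

```python
from tensorflow.keras import layers, models

def build_autoencoder(input_shape=(32, 32, 32, 1)):
    inputs = layers.Input(shape=input_shape)

    # Encoder: strided 3D convolutions halve each spatial dimension.
    x = layers.Conv3D(16, 3, strides=2, padding='same', activation='relu')(inputs)   # 16x16x16
    x = layers.Conv3D(32, 3, strides=2, padding='same', activation='relu')(x)        # 8x8x8
    encoded = layers.Conv3D(64, 3, strides=2, padding='same', activation='relu')(x)  # 4x4x4

    # Decoder: transposed 3D convolutions upsample back to 32x32x32.
    x = layers.Conv3DTranspose(32, 3, strides=2, padding='same', activation='relu')(encoded)
    x = layers.Conv3DTranspose(16, 3, strides=2, padding='same', activation='relu')(x)
    decoded = layers.Conv3DTranspose(1, 3, strides=2, padding='same', activation='sigmoid')(x)

    model = models.Model(inputs, decoded)
    model.compile(optimizer='adam', loss='mse')  # loss choice is an assumption
    return model

model = build_autoencoder()
model.summary()
```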

Code

  • read_off.py: labels the original data, shuffles and pads the inputs, then converts them into an HDF5 file.
  • train.py: trains the auto-encoder model.
  • test.py: evaluates the trained model on the test set.
  • test_vis.py: visualizes the first 10 test results.
  • autoencoder.h5: a pre-trained model. If you don't want to train the model yourself, you can use this file directly and run test.py to see the results (see the usage sketch after this list). The training loss of this model should be 0.0062.
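A minimal sketch of how the pre-trained model might be loaded and evaluated; the HDF5 file name and dataset key below are assumptions, so check read_off.py for the actual names it writes:

```python
import h5py
import numpy as np
from tensorflow.keras.models import load_model

# Load the shipped pre-trained auto-encoder.
model = load_model('autoencoder.h5')

# Load the padded 32x32x32 test volumes. 'test.h5' and the 'data' key are
# assumed names; read_off.py defines the real HDF5 layout.
with h5py.File('test.h5', 'r') as f:
    x_test = f['data'][:].astype(np.float32)

# Add a trailing channel axis if it is missing, as Conv3D-based models expect it.
if x_test.ndim == 4:
    x_test = x_test[..., np.newaxis]

reconstructions = model.predict(x_test)
mse = np.mean((reconstructions - x_test) ** 2)
print('Reconstruction MSE on the test set:', mse)
```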

Results

  • Training loss:

  • Validation loss:

  • Reconstruction example:

    • demo 1 (class: "airplane")

    • demo 2 (class: "bathtub")

    • demo 3 (class: "chair")
