diff --git a/docs/assets/AToM_animal2400.png b/docs/assets/AToM_animal2400.png new file mode 100644 index 0000000..60aade7 Binary files /dev/null and b/docs/assets/AToM_animal2400.png differ diff --git a/docs/assets/banner-compressed.mp4 b/docs/assets/banner-compressed.mp4 new file mode 100644 index 0000000..ce6c2e9 Binary files /dev/null and b/docs/assets/banner-compressed.mp4 differ diff --git a/docs/assets/main-results-compressed.mp4 b/docs/assets/main-results-compressed.mp4 new file mode 100644 index 0000000..6a7193e Binary files /dev/null and b/docs/assets/main-results-compressed.mp4 differ diff --git a/docs/index.html b/docs/index.html index 598af10..d3b1e01 100755 --- a/docs/index.html +++ b/docs/index.html @@ -58,7 +58,7 @@
AToM generalizes to unseen interpolated prompts. We compare AToM to AToM Per-Prompt on the Pig64 compositional prompt set, whose prompts follow the format ``a pig {activity} {theme}'', where each row corresponds to a different activity and each column to a different theme. Models are trained on 56 prompts and tested on all 64; the 8 unseen test prompts lie on the diagonal.
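For illustration, a minimal sketch (not the authors' code) of how such a 64-prompt compositional split could be constructed; the activity and theme strings below are hypothetical placeholders, not the actual Pig64 entries:

# Build an 8x8 grid of compositional prompts and hold out the diagonal.
ACTIVITIES = [f"activity_{i}" for i in range(8)]  # placeholder, e.g. "riding a bike"
THEMES = [f"theme_{j}" for j in range(8)]         # placeholder, e.g. "made of lego"

train_prompts, test_prompts = [], []
for i, activity in enumerate(ACTIVITIES):
    for j, theme in enumerate(THEMES):
        prompt = f"a pig {activity} {theme}"
        # Diagonal cells (i == j) are the 8 unseen test prompts;
        # the remaining 56 off-diagonal prompts are used for training.
        (test_prompts if i == j else train_prompts).append(prompt)

assert len(train_prompts) == 56 and len(test_prompts) == 8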
AToM generalizes to unseen prompts (the diagonal, from top left to bottom right)
Per-prompt text-to-3D cannot generalize and yields low consistency
Trained on only 300 prompts, AToM generalizes to 2,400 interpolated prompts. Here we show a subset of them; note the consistent identity, orientation, and quality.
+AToM produces high-quality textured meshes in less than 1 second at inference. Here we show the results of AToM on the DF415 dataset.