
4. Reconstruct a category from videos

In this tutorial, we build a shape and pose model of a category using 48 videos of different humans, similar to the setup of RAC.

Get pre-processed data

First, download the pre-processed data (20 GB):

bash scripts/download_unzip.sh "https://www.dropbox.com/scl/fi/c6lrg2aaabat4gu57avbq/human-48.zip?dl=0&rlkey=ezpc3k13qgm1yqzm4v897whcj"

To use custom videos, see the preprocessing tutorial.

Note

Besides human-48, you may also download the pre-processed data for cat-85 and dog-98 with:

bash scripts/download_unzip.sh "https://www.dropbox.com/scl/fi/xfaot22qbzz0o0ncl5bna/cat-85.zip?dl=0&rlkey=wcer6lf0u4en7tjzaonj5v96q"
bash scripts/download_unzip.sh "https://www.dropbox.com/scl/fi/h2m7f3jqzm4a2u3lpxhki/dog-98.zip?dl=0&rlkey=x4fy74mbk7qrhc5ovmt4lwpkg"

Training

To train the dynamic neural fields, run:

# Args: training script, GPU ids, input args
bash scripts/train.sh lab4d/train.py 0,1,2,3,4,5,6 --seqname human-48 --logname skel-soft --fg_motion comp_skel-human_dense --nosingle_inst --num_rounds 120

Note

Training takes around 2 hours 50 minutes on seven RTX 3090 GPUs. You may find the full list of flags in lab4d/config.py.
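For a quick sanity check that the data and flags load correctly, you can first launch a shorter run on a single GPU. This is a minimal sketch, assuming a reduced --num_rounds is acceptable for debugging (the logname and round count below are arbitrary, not endorsed settings):

# hypothetical debug run: single GPU, fewer optimization rounds;
# output quality will be far below the full 120-round schedule
bash scripts/train.sh lab4d/train.py 0 --seqname human-48 --logname skel-soft-debug --fg_motion comp_skel-human_dense --nosingle_inst --num_rounds 20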

In this setup, we follow RAC and HumanNeRF in using a hybrid deformation model that contains both a skeleton and soft deformation fields (--fg_motion comp_skel-human_dense). The skeleton explains the rigid motion, and the soft deformation fields explain the remaining motion.

The skeleton specifies the 3D rest joint locations and a tree topology (defined in lab4d/nnutils/pose.py). We provide a human skeleton (modified from the MuJoCo human format) and a quadruped skeleton (modified from Mode-Adaptive Neural Networks for Quadruped Motion Control).

We also add --nosingle_inst to enable an instance-specific morphology code, which represents the between-instance variations in shape and bone lengths.
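Putting these flags together for the quadruped data downloaded above (cat-85 or dog-98), the training command would presumably look like the sketch below; comp_skel-quad_dense is an assumed flag value following the naming pattern of the human skeleton, so verify it against lab4d/config.py before running:

# hypothetical quadruped run; comp_skel-quad_dense is assumed to select
# the quadruped skeleton (check lab4d/config.py for the exact value)
bash scripts/train.sh lab4d/train.py 0,1,2,3,4,5,6 --seqname cat-85 --logname skel-soft --fg_motion comp_skel-quad_dense --nosingle_inst --num_rounds 120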

Visualization during training

Here we show the final bone locations (first), and the camera transformations and geometry (second).

The camera transformations are sub-sampled to 200 frames to speed up the visualization.

Rendering after training

After training, we can check the reconstruction quality by rendering the reference view and novel views. Pre-trained checkpoints are provided here.

To render the reference view of a video (e.g., video 0), run:

# reference view
python lab4d/render.py --flagfile=logdir/$logname/opts.log --load_suffix latest --inst_id 0 --render_res 256
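Since human-48 contains multiple videos, it can be handy to render several reference views in one pass. A minimal sketch, assuming instance ids are consecutive integers starting at 0:

# render reference views for the first three videos (instance ids assumed to be 0-2)
for inst in 0 1 2; do
  python lab4d/render.py --flagfile=logdir/$logname/opts.log --load_suffix latest --inst_id $inst --render_res 256
done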

Note

Some frames with small motion are not rendered (this is determined during preprocessing).

To render novel views, run:

# turntable views; --viewpoint rot-<elevation>-<angles>, --freeze_id <frame-id-to-freeze>
python lab4d/render.py --flagfile=logdir/$logname/opts.log --load_suffix latest --inst_id 0 --viewpoint rot-0-360 --render_res 256 --freeze_id 50
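Other elevations should follow the same rot-<elevation>-<angles> pattern. For example, assuming a 30-degree elevation is a valid value:

# turntable at an assumed 30-degree elevation, freezing frame 50
python lab4d/render.py --flagfile=logdir/$logname/opts.log --load_suffix latest --inst_id 0 --viewpoint rot-30-360 --render_res 256 --freeze_id 50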

Exporting meshes and motion parameters after training

To export meshes and motion parameters of video 0, run:

python lab4d/export.py --flagfile=logdir/$logname/opts.log --load_suffix latest --inst_id 0
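To check what the export step produced, you can list the log directory. The exact file layout is version-dependent, so treat this as a generic inspection step rather than a documented output format:

# inspect outputs written under the log directory (layout may vary across versions)
ls -R logdir/$logname/ | head -n 40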

Re-animation

RAC disentangles the space of morphology and motion, which enables motion transfer between instances.

We show the results of re-animating the motion of video 0 while keeping the instance-specific details of video 8. To render the re-animated video, run:

# reanimation in the reference view
python lab4d/reanimate.py --flagfile=logdir/$logname/opts.log --load_suffix latest --motion_id 0 --inst_id 8 --render_res 256
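Swapping the two ids reverses the transfer, i.e., it drives the motion of video 8 onto the instance of video 0 (assuming the disentangled representation supports both directions):

# hypothetical reverse transfer: motion of video 8, instance of video 0
python lab4d/reanimate.py --flagfile=logdir/$logname/opts.log --load_suffix latest --motion_id 8 --inst_id 0 --render_res 256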

Visit other tutorials.

