lab4d.utils package

lab4d.utils.cam_utils

lab4d.utils.camera_utils

lab4d.utils.decorator

lab4d.utils.decorator.train_only_fields(method)

Decorator to skip the method and return an empty field list if not in training mode.

lab4d.utils.geom_utils

lab4d.utils.geom_utils.K2inv(K)

Compute the inverse camera intrinsics matrix from an intrinsics tuple

Parameters:

K – (…, 4) Camera intrinsics (fx, fy, cx, cy)

Returns:

Kmat – (…, 3, 3) Inverse camera intrinsics matrix

lab4d.utils.geom_utils.K2mat(K)

Convert camera intrinsics tuple into matrix

Parameters:

K – (…, 4) Camera intrinsics (fx, fy, cx, cy)

Returns:

Kmat – (…, 3, 3) Camera intrinsics matrix
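
These conversions just pack and unpack the (fx, fy, cx, cy) tuple. A minimal sketch of the packing that K2mat performs (intrinsics_to_matrix is an illustrative name, not the library function):

    import torch

    def intrinsics_to_matrix(K: torch.Tensor) -> torch.Tensor:
        # Pack (..., 4) intrinsics (fx, fy, cx, cy) into (..., 3, 3) matrices
        fx, fy, cx, cy = K.unbind(dim=-1)
        zero, one = torch.zeros_like(fx), torch.ones_like(fx)
        rows = [fx, zero, cx, zero, fy, cy, zero, zero, one]
        return torch.stack(rows, dim=-1).reshape(*K.shape[:-1], 3, 3)

    K = torch.tensor([500.0, 500.0, 320.0, 240.0])
    print(intrinsics_to_matrix(K))  # [[500, 0, 320], [0, 500, 240], [0, 0, 1]]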

lab4d.utils.geom_utils.Kmatinv(Kmat)

Invert camera intrinsics matrix

Parameters:

Kmat – (…, 3, 3) Camera intrinsics matrix

Returns:

Kmatinv – (…, 3, 3) Inverse camera intrinsics matrix

lab4d.utils.geom_utils.apply_se3mat(se3, pts)

Apply an SE(3) rotation and translation to points.

Note

se3 and pts have the same number of batch dimensions. During skinning, there may be an additional bone dimension B.

Parameters:
  • se3 – (M,1,1,(B),4) Real-first quaternion and (M,1,1,(B),3) Translation

  • pts – (M,N,D,(1),3) Points to transform

Returns:

pts_out – (M,N,D,(B),3) Transformed points

lab4d.utils.geom_utils.check_inside_aabb(xyz, aabb)

Return a mask of whether the input points are inside the aabb

Parameters:
  • xyz – (N,3) Points in object canonical space to query

  • aabb – (2,3) axis-aligned bounding box

Returns:

inside_aabb – (N,) Inside mask, bool
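
The mask reduces to a per-axis comparison against the two corners. A minimal sketch of the documented behavior (inside_aabb_sketch is an illustrative name):

    import torch

    def inside_aabb_sketch(xyz: torch.Tensor, aabb: torch.Tensor) -> torch.Tensor:
        # xyz: (N,3) points; aabb: (2,3) with rows (min corner, max corner)
        return ((xyz >= aabb[0]) & (xyz <= aabb[1])).all(dim=-1)  # (N,) bool mask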

lab4d.utils.geom_utils.compute_crop_params(mask, crop_factor=1.2, crop_size=256, use_full=False)

Compute camera intrinsics transform from cropped to raw images

Parameters:
  • mask – (H, W) segmentation mask

  • crop_factor (float) – Ratio between crop size and size of a tight crop

  • crop_size (int) – Target size of cropped images

  • use_full (bool) – If True, return a full image

lab4d.utils.geom_utils.dual_quaternion_skinning(dual_quat, pts, skin)

Attach points to dual-quaternion bones according to skinning weights

Parameters:
  • dual_quat – ((M,B,4), (M,B,4)) per-bone SE(3) transforms, written as dual quaternions

  • pts – (M, …, 3) Points in object canonical space

  • skin – (M, …, B) Skinning weights from each point to each bone

Returns:

pts – (M, …, 3) Articulated points
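
Dual-quaternion skinning blends the per-bone transforms linearly in dual-quaternion space, renormalizes by the real part, and applies the result to each point. A rough sketch of the blending step under the shape conventions above (names are illustrative; this is not the library implementation):

    import torch

    def blend_dual_quats(dq_r, dq_d, skin):
        # dq_r, dq_d: (M,B,4) real/dual parts; skin: (M,N,B) normalized weights
        qr = torch.einsum("mnb,mbk->mnk", skin, dq_r)  # weighted sum of real parts
        qd = torch.einsum("mnb,mbk->mnk", skin, dq_d)  # weighted sum of dual parts
        norm = qr.norm(dim=-1, keepdim=True).clamp(min=1e-8)
        return qr / norm, qd / norm  # unit dual quaternion per point

The blended transform can then be applied per point, e.g. with dual_quaternion_apply from lab4d.utils.quat_transform.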

lab4d.utils.geom_utils.eval_func_chunk(func, xyz, chunk_size)

Evaluate a function in chunks to avoid OOM.

Parameters:
  • func – (M,x) -> (M,y)

  • xyz – (M,x)

  • chunk_size – int

Returns:

vals – (M,y)
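
The chunking itself is straightforward; a minimal sketch (eval_in_chunks is an illustrative name):

    import torch

    def eval_in_chunks(func, xyz: torch.Tensor, chunk_size: int) -> torch.Tensor:
        # Evaluate func on (M, x) inputs at most chunk_size rows at a time
        outs = [func(xyz[i : i + chunk_size]) for i in range(0, xyz.shape[0], chunk_size)]
        return torch.cat(outs, dim=0)  # (M, y)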

lab4d.utils.geom_utils.extend_aabb(aabb, factor=0.1)

Extend the aabb along each side by a factor of its previous size. For example, if aabb = [-1,1] and factor = 1, the extended aabb will be [-3,3]

Parameters:
  • aabb – Axis-aligned bounding box, (2,3)

  • factor (float) – Amount to extend on each side

Returns:

aabb_new – Extended aabb, (2,3)
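
The arithmetic behind the [-1,1] to [-3,3] example, as a sketch (extend_aabb_sketch is an illustrative name):

    import torch

    def extend_aabb_sketch(aabb: torch.Tensor, factor: float = 0.1) -> torch.Tensor:
        # aabb: (2,3) rows (min, max); grow each side by factor * current size
        size = aabb[1] - aabb[0]                                  # (3,) extent per axis
        return aabb + factor * torch.stack([-size, size], dim=0)  # shift both corners outward

    aabb = torch.tensor([[-1.0, -1.0, -1.0], [1.0, 1.0, 1.0]])
    print(extend_aabb_sketch(aabb, factor=1.0))  # [[-3, -3, -3], [3, 3, 3]]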

lab4d.utils.geom_utils.get_near_far(pts, rtmat, tol_fac=1.5)

Compute near and far depth planes for each camera from the depth range of the transformed points

Parameters:
  • pts – Point coordinates, (N,3), torch

  • rtmat – Object to camera transform, (M,4,4), torch

  • tol_fac – Tolerance factor

lab4d.utils.geom_utils.hat_map(v)

Returns the skew-symmetric matrix corresponding to the last dimension of a PyTorch tensor.

Parameters:

v – (…, 3) Input vector

Returns:

V – (…, 3, 3) Output matrix
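
The output is the usual cross-product matrix, so that hat_map(v) @ w equals the cross product v x w. A sketch (hat_map_sketch is an illustrative name):

    import torch

    def hat_map_sketch(v: torch.Tensor) -> torch.Tensor:
        # v: (..., 3) -> (..., 3, 3) skew matrix [[0,-z,y],[z,0,-x],[-y,x,0]]
        x, y, z = v.unbind(dim=-1)
        zero = torch.zeros_like(x)
        rows = [zero, -z, y, z, zero, -x, -y, x, zero]
        return torch.stack(rows, dim=-1).reshape(*v.shape[:-1], 3, 3)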

lab4d.utils.geom_utils.marching_cubes(sdf_func, aabb, visibility_func=None, grid_size=64, level=0, chunk_size=262144, apply_connected_component=False)

Extract a mesh from a signed-distance function using marching cubes. For the multi-instance case, we use the mean shape/visibility

Parameters:
  • sdf_func (Function) – Signed distance function

  • aabb – (2,3) Axis-aligned bounding box

  • visibility_func (Function) – Returns visibility of each point from camera

  • grid_size (int) – Marching cubes resolution

  • level (float) – Contour value to search for isosurfaces on the signed distance function

  • chunk_size (int) – Chunk size to evaluate the sdf function

  • apply_connected_component (bool) – Whether to apply connected-component filtering to the output mesh

Returns:

mesh (Trimesh) – Output mesh
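
A usage sketch with an analytic sphere SDF, assuming sdf_func maps (N, 3) query points to per-point signed distances (the exact input/output convention should be checked against the implementation):

    import torch
    from lab4d.utils.geom_utils import marching_cubes

    sphere_sdf = lambda xyz: xyz.norm(dim=-1) - 0.5  # zero level set: sphere of radius 0.5
    aabb = torch.tensor([[-1.0, -1.0, -1.0], [1.0, 1.0, 1.0]])
    mesh = marching_cubes(sphere_sdf, aabb, grid_size=64)  # Trimesh output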

lab4d.utils.geom_utils.mat2K(Kmat)

Convert camera intrinsics matrix into tuple

Parameters:

Kmat – (…, 3, 3) Camera intrinsics matrix

Returns:

K – (…, 4) Camera intrinsics (fx, fy, cx, cy)

lab4d.utils.geom_utils.obj_to_cam(pts, rtmat)

Transform points from object canonical space to camera space

Parameters:
  • pts – Point coordinates, (M,N,3) or (N,3), torch or numpy

  • rtmat – Object to camera transform, (M,4,4), torch or numpy

Returns:

verts – Transformed points, (M,N,3), torch or numpy

lab4d.utils.geom_utils.pinhole_projection(Kmat, xyz_cam)

Project points from camera space to the image plane

Parameters:
  • Kmat – (M, 3, 3) Camera intrinsics

  • xyz_cam – (M, …, 3) Points in camera space

Returns:

hxy – (M, …, 3) Homogeneous pixel coordinates on the image plane
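
This is the standard pinhole model: divide by depth, then apply the intrinsics. A sketch for (M,N,3) inputs (pinhole_projection_sketch is an illustrative name):

    import torch

    def pinhole_projection_sketch(Kmat: torch.Tensor, xyz_cam: torch.Tensor) -> torch.Tensor:
        # Kmat: (M,3,3); xyz_cam: (M,N,3) -> (M,N,3) homogeneous pixel coords (u, v, 1)
        xy1 = xyz_cam / xyz_cam[..., 2:3].clamp(min=1e-8)  # perspective divide
        return torch.einsum("mij,mnj->mni", Kmat, xy1)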

lab4d.utils.geom_utils.rot_angle(mat)

Compute rotation angle of a rotation matrix

Parameters:

mat – (…, 3, 3) Rotation matrix

Returns:

angle – (…,) Rotation angle
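
The angle follows from the trace identity tr(R) = 1 + 2 cos(angle). A sketch (rot_angle_sketch is an illustrative name):

    import torch

    def rot_angle_sketch(mat: torch.Tensor) -> torch.Tensor:
        # mat: (..., 3, 3) rotation -> (...,) angle in radians
        cos = (mat.diagonal(dim1=-2, dim2=-1).sum(-1) - 1.0) / 2.0
        return torch.acos(cos.clamp(-1.0, 1.0))  # clamp guards against numerical drift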

lab4d.utils.geom_utils.sample_grid(aabb, grid_size)

Densely sample points in a 3D grid

Parameters:
  • aabb – (2,3) Axis-aligned bounding box

  • grid_size (int) – Number of points to sample along each axis

Returns:

query_xyz – (grid_size^3,3) Dense xyz grid
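
A minimal sketch of such dense grid sampling (sample_grid_sketch is an illustrative name; the library may order axes differently):

    import torch

    def sample_grid_sketch(aabb: torch.Tensor, grid_size: int) -> torch.Tensor:
        # aabb: (2,3) -> (grid_size**3, 3) dense grid of xyz query points
        axes = [torch.linspace(aabb[0, i], aabb[1, i], grid_size) for i in range(3)]
        grid = torch.stack(torch.meshgrid(*axes, indexing="ij"), dim=-1)
        return grid.reshape(-1, 3)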

lab4d.utils.geom_utils.se3_mat2rt(mat)

Convert an SE(3) 4x4 matrix into rotation matrix and translation.

Parameters:

mat – (…, 4, 4) SE(3) matrix

Returns:
  • rmat – (…, 3, 3) Rotation

  • tmat – (…, 3) Translation

lab4d.utils.geom_utils.se3_mat2vec(mat, outdim=7)

Convert SE(3) 4x4 matrix into a quaternion or axis-angle vector

Parameters:
  • mat – (…, 4, 4) SE(3) matrix

  • outdim (int) – 7 to output a quaternion vector, 6 to output axis-angle

Returns:

vec – (…, outdim): Quaternion or axis-angle vector

lab4d.utils.geom_utils.se3_vec2mat(vec)

Convert an SE(3) quaternion or axis-angle vector into 4x4 matrix.

Parameters:

vec – (…, 7) quaternion real-last or (…, 6) axis angle

Returns:

mat – (…, 4, 4) SE(3) matrix

lab4d.utils.geom_utils.so3_to_exp_map(so3, eps=1e-06)

Converts a PyTorch tensor of shape (…, 3), representing an axis-angle element of the Lie algebra so(3), to a PyTorch tensor of shape (…, 3, 3) representing the corresponding rotation matrix via the exponential map.

Parameters:
  • so3 – (…, 3) Axis-angle element of so(3)

  • eps (float) – Small value to avoid division by zero

Returns:

exp_V – (…, 3, 3) Exponential map
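
This corresponds to Rodrigues' formula, exp(V) = I + (sin(theta)/theta) V + ((1 - cos(theta))/theta^2) V^2, where theta = ||so3|| and V is the hat map of so3. A self-contained sketch (exp_map_sketch is an illustrative name, not the library code):

    import torch

    def exp_map_sketch(so3: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
        # so3: (..., 3) axis-angle -> (..., 3, 3) rotation via Rodrigues' formula
        x, y, z = so3.unbind(dim=-1)
        zero = torch.zeros_like(x)
        V = torch.stack([zero, -z, y, z, zero, -x, -y, x, zero],
                        dim=-1).reshape(*so3.shape[:-1], 3, 3)  # hat map of so3
        theta = so3.norm(dim=-1, keepdim=True).clamp(min=eps)[..., None]  # (..., 1, 1)
        eye = torch.eye(3, dtype=so3.dtype, device=so3.device).expand_as(V)
        return (eye + torch.sin(theta) / theta * V
                + (1 - torch.cos(theta)) / theta ** 2 * (V @ V))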

lab4d.utils.gpu_utils

lab4d.utils.gpu_utils.gpu_map(func, args, gpus=None, method='static')

Map a function over GPUs

Parameters:
  • func (Function) – Function to parallelize

  • args (List(Tuple)) – List of argument tuples, to split evenly over GPUs

  • gpus (List(int) or None) – Optional list of GPU device IDs to use

  • method (str) – Either “static” or “dynamic” (default “static”). Static assignment is the fastest if workload per task is balanced; dynamic assignment better handles tasks with uneven workload.

Returns:

outs (List) – List of outputs
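
A usage sketch, where render_clip and its arguments are hypothetical:

    from lab4d.utils.gpu_utils import gpu_map

    def render_clip(seqname, start, end):  # hypothetical worker function
        ...

    args = [("cat-0", 0, 100), ("cat-1", 0, 200), ("dog-0", 0, 50)]
    # dynamic assignment suits uneven per-clip workloads
    outs = gpu_map(render_clip, args, gpus=[0, 1], method="dynamic")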

lab4d.utils.gpu_utils.gpu_map_dynamic_helper(func, arg, it, gpu_id, result_queue, gpu_queue)
lab4d.utils.gpu_utils.gpu_map_static_helper(func, args, rank, result_queue)

lab4d.utils.io

lab4d.utils.loss_utils

lab4d.utils.loss_utils.align_vectors(v1, v2)

Return the scale k that best aligns v1 to v2 in the L2 sense: min_k || k*v1 - v2 ||^2

Parameters:
  • v1 – (…,) Source vector

  • v2 – (…,) Target vector

Returns:

scale_fac (1,) – Scale factor
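
The minimizer has the closed form k = <v1, v2> / <v1, v1>, obtained by setting the derivative of || k*v1 - v2 ||^2 to zero. A sketch (align_scale_sketch is an illustrative name):

    import torch

    def align_scale_sketch(v1: torch.Tensor, v2: torch.Tensor) -> torch.Tensor:
        # Least-squares scale: k = <v1, v2> / <v1, v1>
        return (v1 * v2).sum() / (v1 * v1).sum().clamp(min=1e-12)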

lab4d.utils.loss_utils.cross_entropy_skin_loss(skin)

Compute the entropy of a probability distribution. In the case of skinning weights, each column is a distribution over assignments to B bones. We want to encourage low entropy, i.e. each point should be assigned to fewer bones.

Parameters:

skin – (…, B) un-normalized skinning weights

lab4d.utils.loss_utils.entropy_loss(prob, dim=-1)

Compute the entropy of a probability distribution. In the case of skinning weights, each column is a distribution over assignments to B bones. We want to encourage low entropy, i.e. each point should be assigned to fewer bones.

Parameters:

prob – (…, B) Probability distribution

Returns:

entropy (…,)
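
For reference, the entropy of a distribution p is -sum_b p_b * log(p_b). A minimal sketch (entropy_sketch is an illustrative name; the library version may differ in normalization):

    import torch

    def entropy_sketch(prob: torch.Tensor, dim: int = -1) -> torch.Tensor:
        # prob: (..., B) probability distribution -> (...,) entropy
        return -(prob * prob.clamp(min=1e-12).log()).sum(dim)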

lab4d.utils.numpy_utils

lab4d.utils.numpy_utils.bilinear_interp(feat, xy_loc)

Sample from a 2D feature map using bilinear interpolation

Parameters:
  • feat – (H,W,x) Input feature map

  • xy_loc – (N,2) Coordinates to sample, float

Returns:

feat_samp – (N,x) Sampled features
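
A minimal sketch of bilinear sampling (bilinear_interp_sketch is an illustrative name; boundary handling may differ from the library):

    import numpy as np

    def bilinear_interp_sketch(feat: np.ndarray, xy_loc: np.ndarray) -> np.ndarray:
        # feat: (H,W,x); xy_loc: (N,2) float (x, y) coordinates -> (N,x) samples
        H, W = feat.shape[:2]
        x = np.clip(xy_loc[:, 0], 0, W - 1 - 1e-6)
        y = np.clip(xy_loc[:, 1], 0, H - 1 - 1e-6)
        x0, y0 = x.astype(int), y.astype(int)  # top-left neighbor
        x1, y1 = x0 + 1, y0 + 1                # bottom-right neighbor
        wx, wy = (x - x0)[:, None], (y - y0)[:, None]
        return ((1 - wy) * ((1 - wx) * feat[y0, x0] + wx * feat[y0, x1])
                + wy * ((1 - wx) * feat[y1, x0] + wx * feat[y1, x1]))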

lab4d.utils.numpy_utils.interp_wt(x, y, x2, type='linear')

Map a scalar value from range [x0, x1] to [y0, y1] using interpolation

Parameters:
  • x – Input range [x0, x1]

  • y – Output range [y0, y1]

  • x2 (float) – Scalar value in range [x0, x1]

  • type (str) – Interpolation type (“linear” or “log”)

Returns:

y2 (float) – Scalar value mapped to [y0, y1]
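
For the linear mode, this is a clipped affine map; a sketch covering only that case (interp_wt_linear is an illustrative name, and the "log" mode is omitted):

    import numpy as np

    def interp_wt_linear(x, y, x2):
        # Map scalar x2 from [x0, x1] to [y0, y1], clipped to the output range
        t = np.clip((x2 - x[0]) / (x[1] - x[0]), 0.0, 1.0)
        return y[0] + t * (y[1] - y[0])

    print(interp_wt_linear([0.0, 1.0], [10.0, 20.0], 0.25))  # 12.5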

lab4d.utils.numpy_utils.pca_numpy(raw_data, n_components)

Return a function that applies PCA to input data, based on the principal components of a raw data distribution.

Parameters:
  • raw_data (np.array) – Raw data distribution, used to compute principal components.

  • n_components (int) – Number of principal components to use

Returns:

apply_pca_fn (Function) – A function that applies PCA to input data

lab4d.utils.profile_utils

lab4d.utils.profile_utils.decorate_module(module)

Modifies a module in place to decorate every class with @record_class and every function with @record_function

Parameters:

module – Module to modify in-place

Returns:

module – The input module with classes and functions decorated

class lab4d.utils.profile_utils.record_class(arg)

Bases: object

A class decorator that applies the @record_function decorator to every member function defined in the class

Parameters:

class_name (str) – Name of the decorated class

class lab4d.utils.profile_utils.record_function(name: str, args: str | None = None)

Bases: record_function

A context manager / function decorator that adds a label to a block of Python code (or function) when running autograd profiler. To avoid polluting the error messages, this class is invisible in the stack trace.

Parameters:

func_name (str) – Name of the decorated function

lab4d.utils.profile_utils.torch_profile(save_dir, out_prefix, enabled=True)

Wrapper around torch.profiler.profile() that profiles CPU time, CUDA time, and memory usage. Writes output tables and Chrome traces to disk

Parameters:
  • save_dir (str) – Directory to save output logs

  • out_prefix (str) – Prefix of output filenames

  • enabled (bool) – If False, this context manager does nothing

lab4d.utils.quat_transform

lab4d.utils.quat_transform.QuaternionTranslation

Quaternion utilities in this module are adapted from PyTorch3D: https://pytorch3d.readthedocs.io/en/latest/_modules/pytorch3d/transforms/rotation_conversions.html

alias of Tuple[Tensor, Tensor]

lab4d.utils.quat_transform.dual_quaternion_3rd_conjugate(dq: Tuple[Tensor, Tensor]) → Tuple[Tensor, Tensor]
lab4d.utils.quat_transform.dual_quaternion_apply(dq: Tuple[Tensor, Tensor], point: Tensor) → Tensor
lab4d.utils.quat_transform.dual_quaternion_d_conjugate(dq: Tuple[Tensor, Tensor]) → Tuple[Tensor, Tensor]
lab4d.utils.quat_transform.dual_quaternion_inverse(dq: Tuple[Tensor, Tensor]) → Tuple[Tensor, Tensor]
lab4d.utils.quat_transform.dual_quaternion_mul(dq1: Tuple[Tensor, Tensor], dq2: Tuple[Tensor, Tensor]) → Tuple[Tensor, Tensor]
lab4d.utils.quat_transform.dual_quaternion_norm(dq: Tuple[Tensor, Tensor]) → Tuple[Tensor, Tensor]
lab4d.utils.quat_transform.dual_quaternion_q_conjugate(dq: Tuple[Tensor, Tensor]) → Tuple[Tensor, Tensor]
lab4d.utils.quat_transform.dual_quaternion_to_quaternion_translation(dq: Tuple[Tensor, Tensor]) → Tuple[Tensor, Tensor]
lab4d.utils.quat_transform.dual_quaternion_to_se3(dq)
lab4d.utils.quat_transform.quaternion_apply(quaternion: Tensor, point: Tensor) → Tensor

Apply the rotation given by a quaternion to a 3D point. Usual torch rules for broadcasting apply.

Parameters:
  • quaternion – Tensor of quaternions, real part first, of shape (…, 4).

  • point – Tensor of 3D points of shape (…, 3).

Returns:

out – Tensor of rotated points of shape (…, 3).

lab4d.utils.quat_transform.quaternion_conjugate(q: Tensor) → Tensor
lab4d.utils.quat_transform.quaternion_mul(a: Tensor, b: Tensor) → Tensor
lab4d.utils.quat_transform.quaternion_translation_apply(q: Tensor, t: Tensor, point: Tensor) → Tensor
lab4d.utils.quat_transform.quaternion_translation_inverse(q: Tensor, t: Tensor) → Tuple[Tensor, Tensor]
lab4d.utils.quat_transform.quaternion_translation_mul(qt1: Tuple[Tensor, Tensor], qt2: Tuple[Tensor, Tensor]) → Tuple[Tensor, Tensor]
lab4d.utils.quat_transform.quaternion_translation_to_dual_quaternion(q: Tensor, t: Tensor) → Tuple[Tensor, Tensor]

https://cs.gmu.edu/~jmlien/teaching/cs451/uploads/Main/dual-quaternion.pdf

lab4d.utils.quat_transform.quaternion_translation_to_se3(q: Tensor, t: Tensor)
lab4d.utils.quat_transform.se3_to_quaternion_translation(se3, tuple=True)
lab4d.utils.quat_transform.standardize_quaternion(quaternions: Tensor) → Tensor

Convert a unit quaternion to a standard form: one in which the real part is non-negative.

Parameters:

quaternions – Quaternions with real part first, as tensor of shape (…, 4).

Returns:

out – Standardized quaternions as tensor of shape (…, 4).

lab4d.utils.render_utils

lab4d.utils.render_utils.compute_weights(density, deltas)

Compute weight and transmittance for each point along a ray

Parameters:
  • density (M,N,D,1) – Volumetric density of points along rays

  • deltas (M,N,D,1) – Distance along rays between adjacent samples

Returns:
  • weights (M,N,D) – Contribution of each point to the output rendering

  • transmit (M,N,D) – Transmittance from camera to each point along ray
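
These are the standard volume-rendering quantities: opacity alpha_i = 1 - exp(-density_i * delta_i), transmittance T_i = prod_{j<i} (1 - alpha_j), and weight w_i = T_i * alpha_i. A sketch (compute_weights_sketch is an illustrative name, not the library code):

    import torch

    def compute_weights_sketch(density: torch.Tensor, deltas: torch.Tensor):
        # density, deltas: (M,N,D,1) -> weights, transmit: (M,N,D)
        alpha = 1.0 - torch.exp(-density * deltas)         # per-sample opacity
        trans = torch.cumprod(1.0 - alpha + 1e-10, dim=2)  # survival prob along depth
        trans = torch.cat([torch.ones_like(trans[:, :, :1]), trans[:, :, :-1]], dim=2)
        return (alpha * trans).squeeze(-1), trans.squeeze(-1)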

lab4d.utils.render_utils.integrate(field_dict, weights)

Integrate neural field outputs over rays: render = sum_i w_i * value_i

Parameters:
  • field_dict (Dict) – Neural field outputs with arbitrary keys (M,N,D,x)

  • weights – (M,N,D) Contribution of each point to the output rendering

Returns:

rendered (Dict) – Output renderings with arbitrary keys (M,N,x)
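
The integral is a weighted sum over the depth dimension; a sketch under the shapes above (integrate_sketch is an illustrative name):

    import torch

    def integrate_sketch(field_dict, weights):
        # field_dict values: (M,N,D,x); weights: (M,N,D) -> dict of (M,N,x)
        return {k: (weights[..., None] * v).sum(dim=2) for k, v in field_dict.items()}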

lab4d.utils.render_utils.render_pixel(field_dict, deltas)

Volume-render neural field outputs along rays

Parameters:
  • field_dict (Dict) – Neural field outputs to render, with keys “density” (M,N,D,1), “vis” (M,N,D,1), and arbitrary keys (M,N,D,x)

  • deltas – (M,N,D,1) Distance along rays between adjacent samples

Returns:

rendered (Dict) – Rendered outputs, with arbitrary keys (M,N,x)

lab4d.utils.render_utils.sample_cam_rays(hxy, Kinv, near_far, n_depth=64, depth=None, perturb=False)

Sample NeRF rays in camera space

Parameters:
  • hxy – (M,N,3) Homogeneous pixel coordinates on the image plane

  • Kinv – (M,3,3) Inverse camera intrinsics

  • near_far – (M,2) Location of near/far planes per frame

  • n_depth (int) – Number of points to sample along each ray

  • depth – (M,N,D,1) If provided, use these Z-coordinates for each ray sample

  • perturb (bool) – If True, use stratified sampling and perturb depth samples

Returns:
  • xyz – (M,N,D,3) Ray points in camera space

  • dir – (M,N,D,3) Ray directions in camera space

  • delta – (M,N,D,1) Distance between adjacent samples along a ray

  • depth – (M,N,D,1) Z-coordinate of each ray sample

lab4d.utils.render_utils.sample_pdf(bins, weights, N_importance, det=False, eps=1e-05)

Sample N_importance samples from bins, with a distribution defined by weights. Adapted from https://github.com/kwea123/nerf_pl/

Parameters:
  • bins – (N_rays, n_samples+1) Depth bins, where n_samples is the number of coarse samples per ray - 2

  • weights – (N_rays, n_samples) Weights of each bin

  • N_importance (int) – Number of samples to draw from the distribution

  • det (bool) – Whether to sample deterministically

  • eps (float) – Small number to prevent division by zero

Returns:

samples – (N_rays, N_importance) Sampled depths

lab4d.utils.skel_utils

lab4d.utils.torch_utils

lab4d.utils.torch_utils.compress_state_with(state_dict, string)

Initialize model parameters with the mean of the instance embedding if the parameter name contains the given string

Parameters:
  • state_dict (Dict) – Model checkpoint, modified in place

  • string (str) – String to filter

lab4d.utils.torch_utils.compute_gradient(fn, x)

Compute the gradient of the function output fn(x) with respect to the input points x

lab4d.utils.torch_utils.frameid_to_vid(fid, frame_offset)

Given absolute frame ids [0, …, N], compute the video id of each frame.

Parameters:
  • fid – (nframes,) Absolute frame ids e.g. [0, 1, 2, 3, 100, 101, 102, 103, 200, 201, 202, 203]

  • frame_offset – (nvideos + 1,) Offset of each video e.g., [0, 100, 200, 300]

Returns:
  • vid – (nframes,) Maps idx to video id

  • tid – (nframes,) Maps idx to relative frame id
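
With frame_offset holding the first absolute frame id of each video, the lookup is a searchsorted plus a subtraction; a sketch (frameid_to_vid_sketch is an illustrative name):

    import numpy as np

    def frameid_to_vid_sketch(fid: np.ndarray, frame_offset: np.ndarray):
        vid = np.searchsorted(frame_offset, fid, side="right") - 1  # video index
        tid = fid - frame_offset[vid]                               # frame id within video
        return vid, tid

    fid = np.array([0, 1, 100, 101, 200])
    frame_offset = np.array([0, 100, 200, 300])
    print(frameid_to_vid_sketch(fid, frame_offset))  # vid=[0 0 1 1 2], tid=[0 1 0 1 0]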

lab4d.utils.torch_utils.remove_ddp_prefix(state_dict)

Remove distributed data parallel prefix from model checkpoint

Parameters:

state_dict (Dict) – Model checkpoint

Returns:

new_state_dict (Dict) – New model checkpoint
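
DistributedDataParallel prepends "module." to every parameter name; removing it is a key rewrite. A minimal sketch (remove_ddp_prefix_sketch is an illustrative name):

    def remove_ddp_prefix_sketch(state_dict):
        # Strip the leading "module." that DistributedDataParallel adds to keys
        return {k[len("module."):] if k.startswith("module.") else k: v
                for k, v in state_dict.items()}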

lab4d.utils.torch_utils.remove_state_startwith(state_dict, prefix)

Remove model parameters that start with a prefix

Parameters:
  • state_dict (Dict) – Model checkpoint

  • prefix (str) – Prefix to filter

Returns:

new_state_dict (Dict) – New model checkpoint

lab4d.utils.torch_utils.remove_state_with(state_dict, string)

Remove model parameters that contain a string

Parameters:
  • state_dict (Dict) – Model checkpoint

  • string (str) – String to filter

Returns:

new_state_dict (Dict) – New model checkpoint

lab4d.utils.transforms

lab4d.utils.transforms.get_bone_coords(xyz, bone2obj)

Transform points from object canonical space to bone coordinates

Parameters:
  • xyz – (…, 3) Points in object canonical space

  • bone2obj – ((…, B, 4), (…, B, 4)) Bone-to-object SE(3) transforms, written as dual quaternions

Returns:

xyz_bone – (…, B, 3) Points in bone space

lab4d.utils.transforms.get_xyz_bone_distance(xyz, bone2obj)

Compute squared distances from points to bone centers

Parameters:
  • xyz – (…, 3) Points in object canonical space

  • bone2obj – ((…, B, 4), (…, B, 4)) Bone-to-object SE(3) transforms, written as dual quaternions

Returns:

dist2 – (…, B) Squared distance to each bone center

lab4d.utils.vis_utils


© Copyright 2023, Gengshan Yang, Jeff Tan, Alex Lyons, Neehar Peri, Carnegie Mellon University.
