
notes on particle-based systems and deformables

Unified particle-based system: smoke is done by advecting tiny particles along the velocities provided by the unified particle system. A lot of it seems non-physically-based; for example, rigid-body collisions don’t actually work that way. But it is convenient to have everything run on the same system (maybe).
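
To make the advection idea concrete, here is a rough sketch of what I imagine the smoke update looks like: passively moving marker particles through whatever velocity field the particle system provides. The `swirl` field below is just a made-up placeholder, not anything from the actual system.

```python
import numpy as np

def advect_smoke(positions, velocity_field, dt):
    """Move passive smoke particles along a velocity field (explicit Euler).

    positions:      (N, 3) array of smoke particle positions
    velocity_field: callable mapping (N, 3) positions -> (N, 3) velocities,
                    e.g. sampled from the unified particle system
    """
    return positions + dt * velocity_field(positions)

# toy placeholder field: a swirl around the y-axis plus a small updraft
def swirl(p):
    v = np.zeros_like(p)
    v[:, 0] = -p[:, 2]
    v[:, 2] = p[:, 0]
    v[:, 1] = 0.5
    return v

smoke = np.random.rand(1000, 3)
for _ in range(100):
    smoke = advect_smoke(smoke, swirl, dt=1 / 60)
```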

Deformables

Position-based Dynamics (PBD)

At first, Position-based Dynamics seems similar to the Verlet integrator, which I have previously used for cloth simulation. However, the author highlights that where Verlet stores velocity implicitly in the previous and current positions, PBD stores and updates velocities explicitly.
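
The difference is easier to see side by side. A sketch of the two update rules as I understand them (simplified, not taken from any particular implementation):

```python
# Verlet: velocity never appears explicitly; it lives in (x - x_prev).
def verlet_step(x, x_prev, a, dt):
    x_next = 2 * x - x_prev + a * dt**2
    return x_next, x

# PBD-style: integrate an explicit velocity, predict a position,
# correct it with constraint projection (a distance-constraint example
# follows the next paragraph), then recover the velocity from the
# corrected position.
def pbd_step(x, v, a, dt, project):
    v = v + a * dt
    p = x + v * dt
    p = project(p)
    v = (p - x) / dt
    return p, v
```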

Later, I realised that in PBD we try to treat everything as a constraint, including forces (except general forces like gravity; see my annotation on the algorithm). When we used position constraints in the cloth simulation, they seemed to automatically correct for any instabilities introduced by the numerical integration method. But when everything is made into a constraint, the problem shifts from the integration step to the constraint solver. As a result, the stability of PBD no longer depends on the time step but on the shape of the constraint functions.
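
As a concrete example of “everything is a constraint”, here is a minimal sketch of projecting a set of distance constraints the way I understand the PBD solver loop, with stiffness handling and other details left out:

```python
import numpy as np

def solve_distance_constraints(p, inv_mass, edges, rest, iterations=10):
    """Gauss-Seidel-style projection of constraints C(p) = |p_i - p_j| - d."""
    for _ in range(iterations):
        for (i, j), d in zip(edges, rest):
            delta = p[i] - p[j]
            length = np.linalg.norm(delta)
            w = inv_mass[i] + inv_mass[j]
            if length < 1e-9 or w == 0:
                continue
            correction = (length - d) * delta / length
            p[i] -= (inv_mass[i] / w) * correction
            p[j] += (inv_mass[j] / w) * correction
    return p
```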

TODO: look more into XPBD. Updates soon.

Finite Element Method

The “Breaking Things” paper and video by O’Brien and Hodgins use a continuum model with finite elements made up of tetrahedra. I was unclear on the meaning of the vector u in the equations, as it does not correspond to vertex locations. Does it instead refer to the location of the element? Do we just take the centre of the tetrahedron as u? Additionally, they say that the rows of β are the coefficients of the shape functions, which I don’t understand.
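
Writing out my current guess for a single linear tetrahedron: stack the material (rest) coordinates of the four vertices as columns with a row of ones, and invert. Each row of that inverse then holds the coefficients of one vertex’s linear shape function (its barycentric coordinate), which would explain the “rows of β” comment. This is just standard linear-tet FEM; I’m not certain it matches the paper’s exact notation.

```python
import numpy as np

def tet_shape_function_coeffs(m):
    """m: (4, 3) material coordinates of the tetrahedron's vertices.

    Returns a 4x4 matrix whose row i holds (a, b, c, d) such that the linear
    shape function of vertex i is N_i(u) = a*u_x + b*u_y + c*u_z + d.
    """
    P = np.vstack([m.T, np.ones(4)])      # columns are [m_i; 1]
    return np.linalg.inv(P)

m = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
beta = tet_shape_function_coeffs(m)

u = np.array([0.25, 0.25, 0.25])          # any material point in the element
N = beta @ np.append(u, 1.0)              # barycentric weights, sum to 1
assert np.isclose(N.sum(), 1.0)
```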

Projective Dynamics

In PBD, the constraint solver, which does most of the work, operates in a “Gauss-Seidel-like fashion”, meaning each constraint is solved individually, independently of the others. When we project particles, the modifications are immediately visible to the following projections, so the order of the constraints is extremely important. In projective dynamics, constraints are instead handled from a local/global optimisation perspective. The authors say that PBD converges to inelastic behaviour (I’m not sure why yet) while PD converges to the “true implicit Euler solution”.
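
To get a feel for the local/global structure, I sketched it for the simplest case I could come up with: a 1D chain of springs in numpy. The constraint weight, iteration counts, and the fact that nothing is pinned are all arbitrary choices of mine, not from the paper.

```python
import numpy as np

n, h, mass = 5, 1.0 / 60.0, 1.0
rest, w = 1.0, 1e4                        # spring rest length, constraint weight
g = -9.8

x = np.arange(n, dtype=float)             # 1D positions of the chain
v = np.zeros(n)

def selector(i):
    """S_i picks the edge difference x[i+1] - x[i] for spring i."""
    S = np.zeros(n)
    S[i + 1], S[i] = 1.0, -1.0
    return S

# Global matrix (M/h^2 + sum_i w * S_i^T S_i) is constant, so a real
# implementation would prefactor it once.
M = np.eye(n) * (mass / h**2)
A = M + sum(w * np.outer(selector(i), selector(i)) for i in range(n - 1))

for step in range(100):
    s = x + h * v + h**2 * g              # inertial prediction (gravity only)
    y = s.copy()
    for _ in range(10):                   # local/global iterations
        b = M @ s
        for i in range(n - 1):            # local step: project each spring
            d = y[i + 1] - y[i]
            p = rest * np.sign(d) if d != 0 else rest
            b += w * selector(i) * p
        y = np.linalg.solve(A, b)         # global step: one linear solve
    v, x = (y - x) / h, y
```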

TODO: Self-study on deformables (lit review)

Week 4-5

Motion Editing

We looked at PRECISION, which can figure out what motions go with new geometry. I feel like this tool makes it much easier to produce large quantities of content for games and animations; however, it didn’t seem like the artists had much control over how the animations might look. There were also issues with transitioning between fixed motion-captured animations.

Skinning

Linear Blend Skinning is intuitive, but the candy-wrapper problem is well known. Dual quaternion skinning feels like a really smart way of using the mathematical properties of quaternions to avoid the candy-wrapper effect. To be honest, though, skinning organic bodies to skeletons has a lot of variation that can’t be described by just one algorithm. I guess that’s why there are still so many artists doing weight painting all the time.
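
For my own reference, a bare-bones version of linear blend skinning: transform each vertex by every influencing joint and blend the results by the weights. Averaging the transforms linearly like this is exactly where the candy-wrapper collapse comes from.

```python
import numpy as np

def linear_blend_skinning(rest_verts, weights, joint_mats):
    """rest_verts: (V, 3); weights: (V, J), rows summing to 1;
    joint_mats: (J, 4, 4) skinning matrices (joint transform * inverse bind)."""
    homo = np.hstack([rest_verts, np.ones((len(rest_verts), 1))])      # (V, 4)
    per_joint = np.einsum('jab,vb->jva', joint_mats, homo)             # (J, V, 4)
    blended = np.einsum('vj,jva->va', weights, per_joint)              # (V, 4)
    return blended[:, :3]
```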

On the note of artist input, we looked at this paper on using the line-of-action concept to generate 3D poses. Although this seemed like a very time-efficient method, the generated poses still needed to be tweaked by artists a fair bit, so I think the added value is limited.

On the topic of weight painting, we also looked at this paper on spline-based weight painting. This concept was really exciting to me as an artist who has always found Maya’s weight-painting tools to feel very clumsy, especially because it is hard to tell when your weights are blended properly. (The picture here was taken from this really good resource for learning how difficult and complex the weight-painting process is!)

Using splines to determine the falloff or interpolation of weights seems very elegant, although I have yet to try using a system like that.
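
I have no idea how close this is to the paper’s actual method, but as a toy version of the idea, weights could be driven by an editable 1D profile curve (a cubic Hermite segment here) evaluated over normalised distance from a bone:

```python
import numpy as np

def hermite_falloff(t, p0=1.0, p1=0.0, m0=0.0, m1=0.0):
    """Cubic Hermite profile on t in [0, 1]: 1 at the bone, 0 at max influence."""
    t = np.clip(t, 0.0, 1.0)
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1

def spline_weight(dist_to_bone, max_influence):
    return hermite_falloff(dist_to_bone / max_influence)
```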

Using cages to warp the mesh seemed like a really intuitive idea as well. We see cage methods all the time in 2D animation software; I’m pretty sure Live2D uses a cage-deformation method, and it produces some really beautiful results. I’m really curious why we don’t see cage methods in 3D more often. They seem really convenient: there are different methods for applying the cage transform to the cuboid or tetrahedron cells, which can be used for artist control. Perhaps it is more difficult to determine what happens at the joints.
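
The simplest cage deformer I can think of, just to illustrate the idea in 2D: express each point in the bilinear coordinates of an axis-aligned quad cage, then re-evaluate those coordinates with the deformed cage corners. Real cage methods (mean value, harmonic, Green coordinates) are more involved; this is only the toy version.

```python
import numpy as np

def bilinear_cage_deform(points, rest_cage, deformed_cage):
    """Cages are (4, 2) quad corners ordered [bottom-left, bottom-right,
    top-right, top-left]; the rest cage must be axis-aligned."""
    (x0, y0), (x1, y1) = rest_cage[0], rest_cage[2]
    u = (points[:, 0] - x0) / (x1 - x0)        # bilinear cage coordinates
    v = (points[:, 1] - y0) / (y1 - y0)
    c = deformed_cage
    return ((1 - u) * (1 - v))[:, None] * c[0] + (u * (1 - v))[:, None] * c[1] \
         + (u * v)[:, None] * c[2] + ((1 - u) * v)[:, None] * c[3]
```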

We also looked at implicit functions for mesh wrapping. It seems like we use a bounding mesh as a “cage”, and the bounding mesh can be represented by an implicit function. This method seems like it would not handle sharp details that well. I also want to read more on implicit functions for meshes, as the metaballs/blobbies idea is unfamiliar to me.
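
Note to self on what a metaball/blobby field actually is, as far as I can tell: each ball contributes a smooth falloff to a scalar field, and the surface is the isocontour where the summed field crosses a threshold. A minimal inverse-square version:

```python
import numpy as np

def metaball_field(points, centers, radii):
    """Summed inverse-square falloff; the implicit surface is field(p) == 1."""
    d2 = np.sum((points[:, None, :] - centers[None, :, :])**2, axis=-1)   # (P, B)
    return np.sum(radii**2 / (d2 + 1e-9), axis=-1)

centers = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0]])
radii = np.array([0.7, 0.5])
query = np.array([[0.6, 0.0, 0.0]])
inside = metaball_field(query, centers, radii) >= 1.0
```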

Week 1-3

Week 1: Techniques for creating animation

2D animators make use of keyframes to get poses down before doing in-betweens. In 3D, keyframes can be set and automatically tweened. 2D animation is often done on 2s, since not every one of the 24 frames per second gets its own drawing. In the recent Spider-Man animated movie, it was interesting to see that they sometimes used 2s in 3D by stepping the animation curves between keyframes as a stylistic choice.
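
A tiny illustration of what “stepping the curves onto 2s” means in practice: evaluate the smooth animation curve, but hold each sampled pose for two frames. This is my own toy version, not how any particular package implements it.

```python
import numpy as np

def sample_on_twos(curve, frame):
    """Hold each pose for two frames: frames 0-1 use the pose at 0, 2-3 at 2, ..."""
    return curve((frame // 2) * 2)

# toy animation curve: a smooth arc over one second at 24 fps
curve = lambda f: np.sin(f / 24.0 * np.pi)
smooth = [curve(f) for f in range(24)]                    # on 1s
stepped = [sample_on_twos(curve, f) for f in range(24)]   # on 2s
```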

Procedural animation has some really cool applications: automatic skinning and animation has been used in games like Spore, where players can make their own creatures. Procedurally generating variation in animations can also help give a more natural look to a character’s scripted motion.

Week 2-3: Inverse Kinematics

In Week 2, we visited the Motion Capture lab at CMU. As I had never been in a motion capture lab before, this experience was really enlightening for me. They use infrared cameras and plastic balls wrapped in reflective tape to capture positional data, with different sizes of reflective balls for different types of data. For facial capture, the motion capture lab assistant has to reconfigure and recalibrate all the infrared cameras. I’ve never used motion capture calibration and capture software before, but I think it might be similar to the camera calibration systems used in computer vision for 3D reconstruction, which I am somewhat more familiar with.

Inverse Kinematics Methods

We talked about the Jacobian transpose, pseudo-inverse and damped least squares methods. We have to implement an inverse kinematics solution for Miniproject 1. Previously I implemented CCD, and I remember it was pretty fast. I seem to have lost that implementation to poor organisation in my filesystem, but I did find my pseudo-inverse implementation, which is much slower!
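
Since I’ll be re-implementing this for the miniproject anyway, here is a reminder-to-self sketch of one damped least squares step for a planar N-link arm (the damping value is an arbitrary choice):

```python
import numpy as np

def fk_planar(thetas, lengths):
    """Joint angles and link lengths -> end-effector position of a planar chain."""
    angles = np.cumsum(thetas)
    return np.array([np.sum(lengths * np.cos(angles)),
                     np.sum(lengths * np.sin(angles))])

def dls_step(thetas, lengths, target, damping=0.1):
    """One damped least squares update: dtheta = J^T (J J^T + lambda^2 I)^-1 e."""
    angles = np.cumsum(thetas)
    J = np.zeros((2, len(thetas)))        # column i: d(end effector)/d(theta_i)
    for i in range(len(thetas)):
        J[0, i] = -np.sum(lengths[i:] * np.sin(angles[i:]))
        J[1, i] = np.sum(lengths[i:] * np.cos(angles[i:]))
    e = target - fk_planar(thetas, lengths)
    dtheta = J.T @ np.linalg.solve(J @ J.T + damping**2 * np.eye(2), e)
    return thetas + dtheta
```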

Anyway, I can’t remember most of my implementation, so I’m looking forward to doing it again; it will be a good chance to revise what I learned. We also looked at more heuristic methods, like CCD. I had heard of FABRIK before, but not from class, mostly from videos on the internet. Learning about it in class, though, it seemed really intuitive and to give really nice results. I might try my hand at implementing it somewhere.
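
And since FABRIK looked so simple in class, a sketch from memory of the forward-and-backward reaching passes (positions only, no joint constraints):

```python
import numpy as np

def fabrik(joints, lengths, target, iterations=10, tol=1e-3):
    """joints: (N, d) chain positions with the root at joints[0];
    lengths[i] is the bone length between joints[i] and joints[i + 1]."""
    joints = joints.astype(float).copy()
    root = joints[0].copy()
    for _ in range(iterations):
        # backward pass: place the end effector on the target, work towards the root
        joints[-1] = target
        for i in range(len(joints) - 2, -1, -1):
            d = joints[i] - joints[i + 1]
            joints[i] = joints[i + 1] + d / np.linalg.norm(d) * lengths[i]
        # forward pass: pin the root back in place, work towards the end effector
        joints[0] = root
        for i in range(len(joints) - 1):
            d = joints[i + 1] - joints[i]
            joints[i + 1] = joints[i] + d / np.linalg.norm(d) * lengths[i]
        if np.linalg.norm(joints[-1] - target) < tol:
            break
    return joints
```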