
The Motion Prediction Module
New since PD Howler 7.2




New to PD Pro Howler in v7.2: the Motion Prediction Module. It is accessed through the Animated filters menu:

the motion prediction module in the Animated filters menu


With it you can analyze the motion in a video in order to create slow motion from regular video. Stop-motion artists could also use it to introduce motion blur into their work. Instead of blurring the whole frame, only those parts that move will be subject to motion blur. This could also be great for 2D and 3D animations that were rendered without motion blur and now need such motion blur added to the rendered frame sequence or video.

Before you select this option, you should already have an animation loaded.

Here's a look at the interface that will appear:

initial view of the module's main panel

Near the top, notice the Dry run option and the three modes of operation: Motion analysis only, Introduce motion blur, and Extrapolate frames. Also notice Search distance, Grid spacing, and other options for further refining and optimizing.

When you click Go to run it, additional details show near the bottom: a progress bar and a Stop option. Here's an example, with Dry run mode enabled:


Running a Dry run


Below is an example of what you might see when using the Motion Prediction Module on a short video clip while adjusting the Grid spacing slider, creating additional frames (tweens) in between the existing ones. The intent in this case is to create a slow-motion version with 10x more frames than the original.

adjusting the Grid spacing
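For reference, the arithmetic behind the 10x goal is simple. Here's a quick sketch with made-up clip numbers, assuming (per the "3 for 1" description further below) that each original frame yields roughly as many output frames as the Tweens setting:

```python
# Back-of-the-envelope for a 10x slow-motion pass (illustrative numbers).
original_frames = 60    # e.g. a 2-second clip at 30 fps
tweens = 10             # "3 for 1" is the default; here we want 10 for 1

total_frames = original_frames * tweens
print(total_frames)     # 600 frames: the same clip, playing 10x longer
```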


We're also exploring ways to use this tool to help create certain animations faster by creating interpolations without blending or blurring.


A First Look - Tutorial Videos


The Cactus Walkthrough
(note: recorded with a pre-release version; the rendering quality and the interface of the motion prediction module's panel have been further improved in the final build.)



 
The Cable Cars
(note: also recorded with a pre-release version; the same caveat applies.)





How to Use it - a Primer

This tool is made for videographers, VFX (visual effects) artists, and animators. The scenario: you have a short video clip that you wish were a little longer, or much longer, perhaps running in slow motion. The traditional frame blending option of the Frames->Time stretch feature may not give satisfactory results: it is commonly associated with a sort of "swimming" side effect, because each blended tween shows parts of the current frame and parts of the next frame at the same time.
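For contrast, here is roughly what plain frame blending does. This is a minimal NumPy sketch of the general technique, not Howler's actual code:

```python
import numpy as np

def blended_tween(frame_a, frame_b, t):
    """Naive cross-fade between two frames, with t in (0, 1).

    Moving parts of both frames show through semi-transparently,
    which is exactly the 'swimming' ghosting described above.
    """
    a = frame_a.astype(np.float32)
    b = frame_b.astype(np.float32)
    return ((1.0 - t) * a + t * b).astype(np.uint8)
```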

The motion prediction module can in some cases yield better results, as it uses a fine grid to dissect the image (in each frame) into smaller squares (squarelets) and then proceeds to analyze those smaller blocks of the dissected images with respect to their motion. It is essentially tantamount to doing a motion tracking analysis on each squarelet.
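In spirit, the per-squarelet analysis is classic block matching: for each grid block in the current frame, search the next frame for the offset that minimizes a difference score such as the sum of absolute differences (SAD). A simplified sketch of that general idea, not the shipping SSE2 code, with parameter names borrowed loosely from the panel:

```python
import numpy as np

def best_motion_vector(curr, nxt, y, x, block=12, search=16):
    """Find the (dy, dx) offset that best matches the block at (y, x)
    of `curr` (a grayscale frame as a 2D array) in the next frame.

    Exhaustive SAD search; `block` plays the role of the Grid spacing,
    `search` the Search distance (kept small here so the sketch runs fast).
    """
    ref = curr[y:y + block, x:x + block].astype(np.int32)
    best, best_vec = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + block > nxt.shape[0] or xx + block > nxt.shape[1]:
                continue  # candidate block would fall outside the frame
            cand = nxt[yy:yy + block, xx:xx + block].astype(np.int32)
            sad = np.abs(ref - cand).sum()
            if best is None or sad < best:
                best, best_vec = sad, (dy, dx)
    return best_vec
```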

If there is a lot of noise in the video, a larger Fudge factor is recommended. Blending between the squarelets is advisable in most cases. There are other parameters to try; this is a bit of an art, best approached iteratively.

The motion prediction module is compute intensive. It analyzes the video in an attempt to predict which way the various parts of each image are moving. We did a little math, and this is very probably our most compute-intensive function to date: something like over 1 million tests per grid point, and there's also a pixel-accurate mode that tests every pixel, at something like 60,000 tests per actual pixel. We're using SSE2 in the inner loop, plain C code in the outer loops, and some VB for things like regular frame processing.
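One way to see where the big numbers come from, as an illustration using parameters close to the defaults listed at the end of this page (the module's actual counting may differ):

```python
# Where the big numbers come from (illustrative, default-ish parameters).
search = 64                          # Search distance
block = 12                           # Grid spacing (also used as block size here)

candidates = (2 * search + 1) ** 2   # offsets tested per grid point
pixels_per_test = block * block      # pixels compared per offset
per_grid_point = candidates * pixels_per_test

print(candidates)       # 16,641 candidate offsets
print(per_grid_point)   # ~2.4 million pixel comparisons per grid point
```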


You can use the "Dry run" option to do the calculations without saving the results. You'll see a visual of the detected motion as it progresses through the video's frames, but the data won't be kept. Try this to see whether some of the parameters need refining, i.e. to see if it misses some of the motion.

You can choose to have it analyze and render one of the following results:
  • Motion analysis only: show a colored motion map.
  • Introduce motion blur: use the motion analysis results to add motion blur. Unlike a motion blur filter that throws itself at the entire frame, only the parts that are actually moving will be blurred.
  • Extrapolate frames: produce additional frames, such as 3 for 1 by default, to effectively slow the video down (slow-mo). This is the most interesting use of the motion prediction module. In this case, you can set the number of tweens desired. There's also an option for cartoonists who may want to create tween frames without blurring the lines, and perhaps save some time along the way: Cartoon mode skips the block blending, which keeps line art and the edges of filled regions crisp. Well, sometimes. A toy sketch of the extrapolation idea follows below.
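To make the extrapolation idea concrete, here is a toy version of motion-compensated tweening, in the spirit of the above but not the module's actual algorithm. Each block is shifted along a fraction of its motion vector instead of being cross-faded:

```python
import numpy as np

def motion_tween(frame, vectors, t, block=12):
    """Shift each block of `frame` by a fraction t (0..1) of its motion vector.

    `vectors[gy, gx]` holds the (dy, dx) found for each grid block by the
    analysis pass. Pixels move instead of ghosting, which is the point.
    """
    out = frame.copy()
    h, w = frame.shape[:2]
    for gy in range(0, h - block + 1, block):
        for gx in range(0, w - block + 1, block):
            dy, dx = vectors[gy // block, gx // block]
            ty = gy + int(round(t * dy))
            tx = gx + int(round(t * dx))
            if 0 <= ty <= h - block and 0 <= tx <= w - block:
                out[ty:ty + block, tx:tx + block] = frame[gy:gy + block, gx:gx + block]
    return out
```

For the default 3-for-1 stretch, you would evaluate this at a couple of intermediate t values per frame pair.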

There's also an option to Show motion vectors. If an area of the image is not moving, no motion vector is shown for it. If an area is moving, a red vector of proportionate length and direction is drawn over that area during the analysis. This is mostly of informational and educational value: it shows in the window during calculations and rendering as a temporary overlay, and it won't be in the saved rendering.
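Purely as an illustration of what that overlay looks like conceptually, here is a hypothetical helper using Pillow (the module draws its own overlay internally):

```python
from PIL import Image, ImageDraw

def draw_motion_vectors(frame_img, vectors, block=12, scale=1.0):
    """Overlay red motion vectors on an RGB PIL image.

    `vectors` maps (grid_y, grid_x) -> (dy, dx); still areas carry (0, 0)
    and get no vector, matching the behavior described above.
    """
    overlay = frame_img.copy()
    draw = ImageDraw.Draw(overlay)
    for (gy, gx), (dy, dx) in vectors.items():
        if dy == 0 and dx == 0:
            continue  # no motion, no vector
        cx = gx * block + block // 2
        cy = gy * block + block // 2
        draw.line([(cx, cy), (cx + dx * scale, cy + dy * scale)],
                  fill=(255, 0, 0), width=1)
    return overlay
```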

There's also an option now to Stop the calculation while it is in progress; it appears near the lower left corner after you click Go, along with a progress bar.

Note that while it is calculating, you can change some of the parameters, such as whether to display the motion vectors. In Dry run mode you can even try changing some of the optimization parameters on the fly while it is number crunching.



In the latest version, you'll temporarily see a blue grid displayed over the current image to show you the density of the mesh as you move the Grid spacing slider. That will give you a good idea of how many trackers are involved.

changing the Grid spacing in the motion prediction module
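As a rough guide to what the slider implies, here is a back-of-the-envelope count, assuming one tracker per grid cell on a hypothetical HD frame:

```python
# How many trackers a given Grid spacing implies (illustrative).
width, height = 1920, 1080          # hypothetical HD frame
for spacing in (8, 12, 24, 48):
    trackers = (width // spacing) * (height // spacing)
    print(spacing, trackers)        # smaller spacing -> denser mesh, more work
```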


Here are a few more guidelines and rules for the motion module thus far.

First off, it's not magic.  It won't add any new information to a video.  It merely morphs between the frames that are already there. But it does so in a selective way, based on what's happening (moving) in the video. Pretty neat, huh?

Some frames (possibly the first and last) may be lost along the way. It's part of the algorithm. So be sure to keep a backup copy of the original animation or video before you run this filter on your frames.

The quality of the video is important: a good camera and steady lighting will help. If there's already a lot of motion blur, this won't make it any less blurry; the motion blur will only get more exaggerated. It's better to record with a "sports" mode (fast shutter). Dropped frames become all the more obvious.

The routines are sensitive to lighting changes and react strangely if the lighting changes, for instance due to changes in the environment, or even to the moving parts themselves (such as when they reflect or absorb much of the ambient light). In short: lighting changes are bad. Auto lighting adjustment is bad. There's probably not much you can do about it, but it's bad.

A few words about optimization and refinement: we did a little math and figured out that on a high-def frame, there can be over 4 trillion comparisons in the refinement pass. Obviously you need more than just threading to get that to run fast; you need to cut out about 90% of it before you even think of other optimizations. That's what the "refinement optimize" option does: it skips calculations that fall under a certain threshold, based on the first pass. Think of it this way: if there's camera shake and the pixels move 2 pixels per frame, a "refinement optimize" factor of 2 will skip those calculations and only work on things that move more than that. A sketch of the idea follows below.
If you did a stabilization on the animation first, it would then only refine the objects that are really moving. That's a huge savings.
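In pseudocode terms, the threshold test amounts to something like this (a sketch of the idea, with hypothetical names):

```python
import math

def needs_refinement(first_pass_vector, refinement_optimize=2):
    """Decide whether a block deserves the expensive per-pixel refinement.

    Blocks whose first-pass motion is at or below the threshold (e.g. 2
    pixels of camera shake) are skipped; only faster-moving blocks get
    refined. A threshold of 0 refines everything.
    """
    dy, dx = first_pass_vector
    return math.hypot(dx, dy) > refinement_optimize
```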

Sometimes you have really subtle motions on small objects. Those may not even register in the first pass, and thus will also be skipped in the refinement pass. In that case, you'll just have to set "refinement optimize" to 0, and that will recalculate every pixel, regardless. (It still uses the data from the first pass as a first guess, but it will recalculate every pixel using that guess.)

All that being said, the refinement pass is multi-threaded, and the inner loops use SSE2, so there can be as many as 64 parallel comparisons per step on a 4-core machine, and even more on a dual-XEON system with 6 cores per processor.
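The arithmetic behind those numbers, assuming 8-bit pixel comparisons in 128-bit SSE2 registers:

```python
# SSE2 lanes x cores = comparisons in flight per step.
sse2_lanes = 16              # 16 8-bit comparisons per 128-bit SSE2 op
print(sse2_lanes * 4)        # 64 on a 4-core machine
print(sse2_lanes * 2 * 6)    # 192 on a dual-XEON box, 6 cores each
```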


Edge conditions: the edges (the borders of your video frames) are not accounted for, so expect a bit of noise there. It's a small price to pay, though, in exchange for a much-improved animation inside. You will likely need to crop at some point if you want to eliminate the edges, for example when someone moves out of the camera's view and you don't want their face distorted along the edge as it exits the frame.

Areas around 'objects' can be tricky too. They can pick up some distortion, and it's hard to get a good motion vector there. The stochastic sampling option helps somewhat by taking averages (try stochastic 5, or 15 if you must). It takes additional time, but it may be well worth the wait.

Occlusions are problematic, for the same reason. For example, a walk sequence may show the legs perfectly extrapolated as long as they're away from each other, but when they come together, i.e. when one leg hides the other, things get a bit odd. Try it on a walk sequence rendered with a 3D animation tool to see what we mean.

All that being said...

Using a larger grid size (a higher Grid spacing value) works well enough for many videos, and it's a lot faster. Don't use a larger grid size than you need, though; you can do a dry run to see if you're getting misses.

Tweens is the number of new frames to create from each found frame. It should be at least 1, better yet 2; the default is 3. The result has to fit in memory, so keep it under a reasonable number of frames; a rough estimate follows below. Try 2 if you want approximately twice as many frames as before.
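A quick way to sanity-check the memory side, as a rough, uncompressed estimate with made-up clip dimensions:

```python
# Will the stretched animation fit in memory? (rough, uncompressed estimate)
frames = 150                # original frame count
tweens = 3                  # the default: roughly 3 output frames per original
width, height = 1280, 720   # hypothetical frame size
bytes_per_pixel = 4         # 32-bit RGBA

total_frames = frames * tweens
gb = total_frames * width * height * bytes_per_pixel / 1024**3
print(total_frames, round(gb, 2), "GB")   # 450 frames, ~1.54 GB
```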


Final words:

Start with the default parameters, or if you messed around, set them back to something like this:

- Search distance: 64
- Grid spacing: 12
- Fudge factor: 400
- 1 sample
- no refinement pass




 

Examples:

extending the duration of a short 3D animation to make it last 6 times longer


nodding slow-mo James

 

terrain fly-through 3D animation slowed down 10x, by producing many more frames



From 2 seconds to 20, in just 2 more minutes of rendering, after 8 hours of initial rendering



Comparing Frame Blending vs. Motion Prediction technique.

