Element (in)animate

In the ever-popular discussion of everybody’s favorite new toy in town, Element 3D, it’s now time to take a closer look at its animation capabilities. It’s been almost four weeks since the release, so everybody has had a chance to watch the tutorials and do their own experiments, and most of you should have an idea of what I’m talking about. First you may want to revisit one of my previous posts and watch the clips, because they illustrate some of what you can do and bear on the issues discussed here. And then of course there’s some more stuff to lay your eyes on:

So what are the strengths, the possibilities, the potential caveats? First let me reiterate that there are serious issues with scaling/sizing imported models. On pretty much any project I did during the beta, as well as in my current experiments, it seems you can always import two or three meshes and then the next one will be off, despite everything being to scale in the 3D program. That may not be a problem for some of the stuff you may be cooking up, but it certainly is for me. Imagine a scenario like a car with spinning wheels and other moving parts: when you import them, they just won’t align. This can waste a lot of time and lead to unwanted sliding and intersecting geometry. It’s one of the areas that needs some serious rewiring to be fixed, especially since there is no way to preview any of this in the editor.

That brings us to another problem: there is currently no intuitive (or at least predictable) way to build a hierarchy, nor to manage one. The editor only ever shows the currently selected mesh, and the item list is flat, meaning you cannot establish parent/child relations. While this may allow better performance in the preview window, the big disadvantage is that you’re flying blind. Imagine you bought a stock model of a trailer truck with the components separate and now want to make everything align. Without access to a 3D program where you could adjust the pivot points, this could push you to the verge of losing your sanity. Beyond the geometry problems, it also prevents you from tweaking your materials in context. Now imagine that truck having a chrome-y tank trailer and you are trying to build a nice sunset scene where good materials are the key. That could become really annoying. In fact it is even a problem for scenes like the one with the stars, where you are trying to establish a certain color palette.

With no hierarchy in sight, that leaves you with rigging everything with expressions, and that’s probably one of my biggest frustrations with the plug-in. Connect different positions with pickwhip expressions? Check! Animate and link randomness parameters for the Replicators? Check! Place other elements onto your geometry or link your meshes to other layers? Aww, rats, not quite so! Unfortunately that’s where everything falls apart. The plug-in treats the replicator rotations as a series of consecutive planar rotations in the local coordinate system. What does that mean? Basically, you cannot do any rotation that is not perpendicular to one of the main world axes without things getting wacky or requiring a lot of extra code. In essence you more or less have to reconstruct the rotations using cosine and sine functions and then subtract and multiply them with the rotation values (your own rotation matrix and matrix decomposition, more or less). This is a major workflow issue and seriously limiting. It would be much more straightforward if this were handled consistently with how After Effects deals with rotations, so you wouldn’t have to worry about doing something like attaching a bunch of spotlights to a car or targeting an item at a Null swirling through your scene.
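
To give you an idea of what that sin/cos reconstruction looks like in practice, here is a minimal sketch of an expression for the rotation that spins a group around the Y axis so it keeps facing a Null swirling through the scene. The two 3D Nulls (“Swirl Null” as the target, “Group Null” marking where the group sits) are placeholders, and which Element parameter you actually apply this to depends on your setup, so treat it as a starting point rather than a recipe:

    // Minimal sketch (layer names are placeholders): yaw angle towards a
    // target Null, computed from the vector between two 3D Nulls.
    var target = thisComp.layer("Swirl Null").transform.position;
    var origin = thisComp.layer("Group Null").transform.position;
    var d = target - origin;                      // direction towards the target
    radiansToDegrees(Math.atan2(d[0], d[2]));     // rotation around the Y axis

A companion expression for the pitch would use the same vector, e.g. -radiansToDegrees(Math.atan2(d[1], length([d[0], d[2]]))), and as soon as the motion is no longer aligned with the world axes, that is exactly where the extra matrix math starts creeping in.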

With all the small annoyances out of the way, let’s look at the more positive side of things. Of course Element supports animated textures by way of assigning, for example, a pre-comp as a custom layer input. Aside from color textures, that could also include normal maps, for instance to simulate structures like bark or water surfaces. Since the plug-in doesn’t do genuine displacement of any kind, that would be a way to fake such phenomena. You may need some plug-ins, though, like RevisionFX’s Shade/Shape to convert a greyscale map to a proper normal representation, or game-centric tools like xNormal or NVIDIA’s Photoshop plug-ins to do so. Similarly, you could create animated reflection maps. Want to simulate a car driving through a tunnel or a city at night? Make your life easy and animate the neons passing by in a pre-comp while softly hovering with the camera over a static model (see the sketch below). Finally, you can use those maps to distribute the instances. Think of this as similar to a property map in Particle Playground or Card Dance, or an alpha texture in Trapcode Form, where the map’s colors determine where a clone will appear. Of course there are some limitations with that kind of grid-based sampling, and the furry text illustrates the point: if the underlying shape changes, the distribution changes, and especially if even a tiny bit of randomization is involved, things can flicker like crazy. Nonetheless, this kind of thing could be exactly what you want.
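
Coming back to the “neons passing by” idea: instead of keyframing the lights, you could scroll them with a small expression inside the reflection pre-comp. The speed and loop width below are made-up values, so take it as a sketch only:

    // Minimal sketch: position expression on a wide layer of neon strips
    // inside the reflection pre-comp. It drifts left over time and wraps
    // around, so the lights appear to stream past endlessly.
    var speed = 800;                            // pixels per second (placeholder)
    var loopWidth = 2000;                       // distance after which the pattern repeats
    [value[0] - (time * speed) % loopWidth, value[1]]

If the pattern needs to cover the frame seamlessly while it wraps, duplicate the strip layer and offset the copy by loopWidth.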

Since the current version has no support for importing pre-animated objects or baked object sequences, an interesting, if somewhat unintentional, alternative is animated mask shapes. As you may already have discovered, you can assign mask paths on another layer to be used by the extrusion features, but you may not know that, within certain limitations and rules, those can also be animated. Keep in mind that this is an unsupported and somewhat experimental feature. It basically exploits the fact that the plug-in needs to update with every frame and check your scene for changes. In turn this means that, as a first prerequisite, something in your scene has to be animated, be it a light or the camera (see the sketch below for a low-impact way to do that). However, don’t ever try to turn on motion blur! Since the algorithm doesn’t calculate in-between shapes for the sub-frame intervals, this throws things off and is bound to crash. The second important part is to use only one mask and keep its shape consistent. This means it cannot intersect with other shapes, and its points should be animated fairly evenly. Sudden intersections or radical changes in point placement and sharpness could – you guessed it – cause crashes. For all these reasons the technique is still rather limited, but it may come in handy for animating a melting, laser-cut or sweeping logo. For anything more complex you should resort to a 3D program or plug-ins like Zaxwerks’ tools or ShapeShifter.
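
As for that prerequisite of having something animated: assuming you don’t already have a moving camera, one unobtrusive way is to put a barely visible wiggle on a light so the plug-in is forced to re-evaluate every frame. The values here are placeholders:

    // Minimal sketch: position expression on a light that otherwise stays put.
    // The wiggle is tiny, but it changes every frame, which keeps the plug-in
    // refreshing and picking up the animated mask path.
    wiggle(5, 0.5)   // 5 wiggles per second, half a pixel of movement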

Finally, and this is the part you came here for, isn’t it, let’s talk about the group animation. While many compare it to Cinema 4D’s MoGraph, it’s not quite that. I tend to think of it as a morph between two fixed states, masked with a wipe and disguised with some random motion here and there. In fact it would be pretty trivial to set up in most 3D programs. That of course doesn’t take away from its merits – even such a simple technique can be pretty powerful if you know how to handle it. Some of that I already explained for the Tron project, and it should be easy enough for most people to create these kinds of explosion/implosion projects. The technique also works for transitioning between any two different objects to create the illusion that something is assembled from the fragments of another object, a.k.a. your pre-fractured text transition. Just as interesting, however, are the things you can do with partial transitions and by tweaking the falloffs. The tentacle, the caterpillar and the pine tree are such examples. Of course the secret sauce here is again expressions, but I’m not giving away my best secrets yet. So try to figure out how I did them and maybe you can get to the bottom of it. That also includes the dynamic shadows, which no doubt is something you would just love to have, wouldn’t you? See you some time soon on that topic…
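
Without giving away the actual rigs behind those clips, here is the general flavor of an expression-driven partial transition: drive a value from 0 to 100 based on a control Null’s distance, so only part of the group has morphed at any given time. The layer names, the 500-pixel falloff radius and the idea of feeding the result into whatever controls the blend in your setup are all placeholders:

    // Minimal sketch: a distance-based falloff value between 0 and 100,
    // fully transitioned when the control Null touches the object Null and
    // untouched once it is more than 500 px away.
    var ctrl = thisComp.layer("Falloff Null").transform.position;
    var anchor = thisComp.layer("Object Null").transform.position;
    var d = length(ctrl, anchor);               // distance between the two Nulls
    linear(d, 0, 500, 100, 0)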
