Since I knew that the more info was out there, the more people would go crazy about Element, I decided to keep a low profile amid all the hubbub, especially since there is more yet to come. Still, since I have answered a few questions over the past days, as have Andrew and other people involved with this, let me summarize them here.
- First, most obviously and most importantly, Element is 100% OpenGL. It doesn’t use CUDA or OptiX like After Effects CS6. While that allows the plug-in to run on a much wider selection of hardware, it imposes some limitations on what features it can support. Most notably, there are no genuine reflections, and transparencies won’t look very "optical", for lack of a better term. Arguably, though, this is not much of an issue: even in a raytrace renderer you would need environment maps if your geometry is not surrounded by other geometry. No geometry, no reflections, if you get my meaning. Similarly, the complexity of any model you can create or import will depend on your graphics hardware’s capabilities, from the sheer number of polygons in a model to the number and size of textures to how many instances of the effect you can use in a project. The better your card, the more complex the scenes you will be able to handle, but even my 3.5-year-old GTX 285 still holds up just fine with a 200,000-polygon model or, conversely, about 8 to 10 instances of the effect with lighter models. I’m not using much texturing, though, and mostly rely on bare materials.
- Element does not do IBL (image-based lighting) to simulate global illumination, but you have a few options, from simulating it with lights to baking it into textures in your 3D program. You could even use the environment reflection or refraction texture creatively here. The plug-in does have SSAO (screen space ambient occlusion), on the other hand, so not all is lost. This will give you some of that "radiosity look".
- For now, the plug-in doesn’t do any shadows. That’s a slight bummer, of course, given how much they would contribute to your scene’s realism. The reasons they are not there are simple: a) there was just too much else to do, and b) calculating shadows takes a real performance hit. Scenarios where different items cast shadows on each other, and in addition may need to cast shadows on floors and walls, would require multiple passes to render everything in the correct order, and you’d definitely notice the slowdown. Don’t give up yet, though. There are ways of faking shadows, and you’ll be surprised how simple some of them are. Also, lights can have falloff, and you can do some pretty nifty things with spotlights and feathered cones.
- Slightly related to all the points before, you will have to live with the same limitations as with any of those other 3D plug-ins. They only exist inside their own 3D universe and do not interact with normal 3D layers, at least not beyond a certain degree. In this case you will be able to set the rendering mode to show the depth buffer, which you can then feed into effects like Trapcode Particular as an input or, more conventionally, use as a luma matte on any of your other layers, where you define the actual clipping with a simple Levels effect on the matte.
- In its current state, there is no support for object sequences or deformations of any kind, like the good old MDD format that originated back in Lightwave in its golden days, or Alembic (the latter being supported in AtomKraft, though). The reasoning behind this decision is actually quite logical: since Element does everything on your graphics card, shuffling complex meshes back and forth would again slow things down and, in addition, introduce one more potential breaking point that could cause issues if meshes are defective or have inconsistent polygon counts, bad topology or flaky texture coordinates. Likewise, holding deformation data on the card would once more chew up resources. So while none of that is impossible or technically unfeasible, it’s gonna take a while to figure things out and provide a stable and reliable solution.
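To make the environment-map point from the first bullet concrete, here is a rough sketch (my own Python illustration of the technique, not anything from Element's actual shaders): instead of tracing rays, you reflect the view direction off the surface normal and look that direction up in a panoramic environment image.

```python
# Sketch of environment-mapped "reflections" (illustrative only): reflect
# the view vector off the surface normal, then use the reflected direction
# to sample a latitude/longitude environment map.
import math

def reflect(incident, normal):
    """R = I - 2*(N.I)*N, the standard reflection vector (unit inputs)."""
    d = sum(i * n for i, n in zip(incident, normal))
    return tuple(i - 2 * d * n for i, n in zip(incident, normal))

def latlong_uv(direction):
    """Map a unit direction to (u, v) texture coordinates in a lat/long map."""
    x, y, z = direction
    u = 0.5 + math.atan2(z, x) / (2 * math.pi)
    v = 0.5 - math.asin(max(-1.0, min(1.0, y))) / math.pi
    return u, v

# A view ray hitting a surface that faces straight back at the camera
# bounces straight back toward it:
print(reflect((0.0, 0.0, -1.0), (0.0, 0.0, 1.0)))  # (0.0, 0.0, 1.0)
```

No geometry is ever intersected here, which is exactly why the environment image has to stand in for everything the surface would "see".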
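As for faking shadows, one of the simplest classic tricks can be sketched like this (my own illustration with a hypothetical helper, not anything shipped with the plug-in): squash the model onto the floor plane along the light direction and render that flattened copy as a dark, soft shape under the object.

```python
# Sketch of a faked ground shadow (illustrative only): project each vertex
# onto the floor plane y = 0 along the light direction, then draw the
# flattened silhouette as a dark layer beneath the model.

def project_to_floor(point, light_dir):
    """Project a 3D point onto the y=0 plane along light_dir (needs ly < 0)."""
    x, y, z = point
    lx, ly, lz = light_dir
    t = -y / ly                      # distance along the light to reach y = 0
    return (x + t * lx, 0.0, z + t * lz)

# A vertex 2 units up, with the light coming down and slightly from the left,
# lands offset across the floor:
print(project_to_floor((1.0, 2.0, 0.0), (0.5, -1.0, 0.0)))  # (2.0, 0.0, 0.0)
```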
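The depth-matte workflow mentioned above really boils down to a Levels-style remap of the depth buffer. A minimal sketch of the math (my own illustration, not plug-in code):

```python
# Sketch of the depth-matte trick (illustrative only): render the depth
# buffer, then use a Levels-style remap to turn a chosen depth range into
# a 0..1 luma matte that clips other layers at that distance.

def levels(value, black_point, white_point):
    """Levels-style remap: black_point -> 0, white_point -> 1, clamped."""
    t = (value - black_point) / (white_point - black_point)
    return max(0.0, min(1.0, t))

# Depth samples (0 = near, 1 = far); everything nearer than 0.25 stays fully
# opaque and everything past 0.75 clips away completely.
depth_row = [0.125, 0.5, 0.625, 1.0]
matte = [1.0 - levels(d, 0.25, 0.75) for d in depth_row]
print(matte)  # [1.0, 0.5, 0.25, 0.0]
```

Tightening the black and white points toward each other gives a harder clipping edge, just as it would with a Levels effect on the matte layer.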
People have asked about the so-called Animation Engine, but as much as I would like to share some details, I have to bite my tongue here. That’s a topic that will become clear in the next few days when Andrew does more of his videos. ;-) Since I already mentioned the rendering modes, you can also set your output to show the normals. What would you do with that? Well, you could use Pixel Cloud to relight it! Now granted, this doesn’t make much sense here, but it may if your normal pass actually comes from a 3D program like Cinema 4D. If you have ever heard of or even used the old Normality plug-ins, the concept will be familiar. They are also providing a plug-in to create a World Position pass in Cinema 4D. If you have no clue what that’s good for, the Smoke 2013 videos on FXGuide can help you with that.
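The normal-pass relighting idea behind Pixel Cloud and Normality comes down to recomputing a diffuse term per pixel from the stored normal. A minimal sketch of the concept (my own illustration, not either plug-in's code):

```python
# Sketch of normal-pass relighting (illustrative only): with a per-pixel
# normal from a 3D render, fresh Lambert shading can be computed in 2D for
# any new light direction, long after the original render is done.
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def lambert(normal, light_dir):
    """Diffuse term max(0, N.L) for a surface normal and a light direction."""
    n, l = normalize(normal), normalize(light_dir)
    return max(0.0, sum(a * b for a, b in zip(n, l)))

# A pixel facing the camera, lit head-on versus lit from the side:
print(lambert((0, 0, 1), (0, 0, 1)))  # 1.0
print(lambert((0, 0, 1), (1, 0, 0)))  # 0.0
```

A World Position pass extends the same idea: with a position per pixel as well, you can also compute light falloff and direction per pixel instead of assuming a distant light.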