Avengers: Age of Ultron is now out in the cinemas, including a sequence that a small team of us at Animal Logic worked on for about nine months. We were responsible for the ‘Birth of Ultron’ sequence, creating the Jarvis and Ultron characters’ hologram representations in Tony Stark’s lab, and also the ‘cyberspace’ sequence where Ultron becomes self-aware while searching the internet. The process of creating these was highly creative and interesting, and I’m happy to have had the chance to take responsibility for so much of what was shown on screen, from the months of design development through to the final animated imagery. There’s a bit more info in this article on fxGuide and this interview with SideFX.
May 9th, 2015 . 9 comments
Here’s a quick CVEX lens shader for Houdini, allowing you to render in Mantra with an equidistant fisheye lens. Also check out Matt Estela’s stereo spherical panorama camera that he got working for some VR tests we did.
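For reference, the core of the equidistant mapping is simple – the angle off the view axis is just proportional to the distance from the image centre. Here's a rough Python sketch of the same math (the CVEX shader computes a ray direction per sample along these lines; the function name and camera-space conventions here are my own):

```python
import math

def fisheye_ray(x, y, fov_deg=180.0):
    """Map normalized screen coords x, y in [-1, 1] to a ray direction
    for an equidistant fisheye: the angle from the view axis is
    proportional to the distance from the image centre (theta = r * fov/2)."""
    r = math.hypot(x, y)
    if r == 0.0:
        return (0.0, 0.0, -1.0)  # centre of frame looks straight ahead
    theta = r * math.radians(fov_deg) * 0.5  # angle off the -Z view axis
    s = math.sin(theta) / r
    return (x * s, y * s, -math.cos(theta))  # camera space, -Z forward
```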
February 4th, 2015 . 1 comment
The two above pieces are some sculptural objects I’ve produced for an exhibition a friend of mine runs every year. They’re the result of some ongoing work and research I’ve been doing into generative techniques for modelling and growing organic objects – in this case, coral. The objects were designed using a directed Laplacian growth process, then 3D printed and cast in brass.
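For the curious, diffusion-limited aggregation is a close cousin of the Laplacian growth family and easy to sketch in a few lines of Python. This toy version is only an illustration of the general idea – the actual coral setup was a fair bit more involved (and directed):

```python
import random

def grow_dla(n_particles=60, size=41, seed=1):
    """Minimal 2D diffusion-limited aggregation on a wrapping grid:
    random walkers stick when they arrive next to the cluster. A toy
    stand-in for the directed growth process used for the coral forms."""
    random.seed(seed)
    cluster = {(size // 2, size // 2)}  # seed particle in the centre
    nbrs = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    while len(cluster) < n_particles:
        # release a walker at a random free position
        x, y = random.randrange(size), random.randrange(size)
        if (x, y) in cluster:
            continue
        while True:
            # stick as soon as we're adjacent to the cluster
            if any((x + ax, y + ay) in cluster for ax, ay in nbrs):
                cluster.add((x, y))
                break
            dx, dy = random.choice(nbrs)
            x = (x + dx) % size
            y = (y + dy) % size
    return cluster
```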
The video below shows the progression preceding the final forms.
They’re currently on exhibition and available for purchase until Feb 23 at Art By Design 10, Wedge Gallery, near Books Kinokuniya, L2, The Galeries, 500 George St, Sydney.
August 3rd, 2014 . 0 comments
I’ve pre-ordered an Oculus Rift DK2 for some experimentation, and in the meantime have been looking into OpenFrameworks as a convenient way of creating things for use in VR. There’s a huge range of addons, including ofxOculusRift which looks like it will make things pretty easy.
While waiting for the headset to be shipped, I’ve been thinking about what input devices I could use with it. A Wacom tablet is an interesting candidate for VR because of its 1:1 mapping of movement in physical space to virtual space, which increases presence compared to something less direct like a gamepad. I did some searching to see if any OF addons for tablet data already existed, with only a few traces and broken links to show for it. I’d added support for tablets before in Blender’s game engine many years ago, so I ended up having a go at putting something together myself.
It currently only supports OS X, since that’s what I’m using. I only have an old Wacom Intuos 2, which doesn’t have any of the fancy newer touch strips to connect up, but basic stuff like pressure and tilt works pretty well, and it’s good enough to get a bare minimum of sensor data out to reconstruct a position and orientation in 3D space. While I’ve worked with tablets before, I’ve barely done anything in OF or Obj-C, so any contributions or fixes are very welcome.
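As a taste of the kind of reconstruction involved, here's a hedged Python sketch of turning the tablet's tilt angles into a 3D pen direction. The formula is an approximation of my own – drivers report tilt in slightly different conventions, so treat this as illustrative:

```python
import math

def pen_direction(tilt_x_deg, tilt_y_deg):
    """Approximate 3D pen direction from tablet tilt angles.
    tilt_x/tilt_y are the pen's lean from vertical along each axis
    (Wacom reports roughly +/-60 degrees). Returns a unit vector,
    with +Z pointing up out of the tablet surface."""
    tx = math.radians(tilt_x_deg)
    ty = math.radians(tilt_y_deg)
    v = (math.tan(tx), math.tan(ty), 1.0)
    n = math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)
    return (v[0] / n, v[1] / n, v[2] / n)
```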
Get the code here: https://github.com/mattebb/ofxTablet
July 7th, 2014 . 4 comments
Here’s a quick one, inspired by a tool I saw while at Double Negative earlier this year. The Point Wrangle and Attrib Wrangle nodes in Houdini 13 come in handy for a lot of things, but are a bit cumbersome when you want to add parameters to control them. This bit of Python will look over your VEX code snippet and create parameters for any channels that are referenced in your code but don’t already exist.
It’s in the form of a custom menu option, so you just need to drop this file in your houdini user folder (where all your .pref files are), and it will append the “Create Wrangler Parameters” option to the end of the parameter context menu.
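The interesting part is just scanning the snippet for channel references. Here's a stripped-down, pure-Python sketch of that scan (the real menu script then builds the matching parameters via hou.ParmTemplateGroup; the function and pattern here are illustrative and won't catch every VEX channel function, eg. chramp):

```python
import re

# Matches common VEX channel references like ch("foo"), chf('bar'),
# chv("dir"), chi("n"), chs("path"). The capture group is the name.
CH_PATTERN = re.compile(r"\bch[fiuvps234]?\s*\(\s*['\"]([A-Za-z_]\w*)['\"]")

def find_wrangle_channels(snippet, existing=()):
    """Return channel names referenced in a VEX snippet that don't yet
    exist as parameters, in order of first appearance."""
    found = []
    for name in CH_PATTERN.findall(snippet):
        if name not in existing and name not in found:
            found.append(name)
    return found
```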
July 3rd, 2014 . 0 comments
Today I finally got around to putting together a little tool for use in VEX/VOPs, to generate random numbers weighted according to a ramp. In the example below, I’m distributing 8000 points by ramp weight in the X axis, and randomly in Y – you can see how the density of random X values corresponds to the ramp.
This is using a simplistic method I remembered from a while ago, when I saw it used for importance sampling bright parts of an environment map during rendering.
The idea is a bit similar to making a histogram. Divide your function (in this case the ramp) up into a number of bins, and scan over them, accumulating the sum of all bins up to that point in an array. The final value in the array will be equal to the sum of all bins.
Above you can see the normalised sum curve overlaid on the ramp curve. Where the ramp has a high value, the slope of the sum is greatest (because the change in value of the sum is highest when it’s summing high ramp values).
If we then choose a range of random numbers in y, most of the corresponding points on the sum curve will be in the areas of greatest slope. It’s easy to visualise as if you’re projecting points horizontally and intersecting the curve – areas of low slope will tend to get missed, and the majority of the random values will stick to the areas of greater slope, and therefore the greatest values in the underlying ramp function.
You can drop the VEX code below into a wrangle SOP or a snippet VOP. Right now it’s not hugely optimised – it’s pre-processing the ramp every time it’s run (eg. for each point), and it’s using a linear search to find the ‘random number intersections’ when a binary search might be faster. It’s still super quick though, so further work may not be necessary.
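For anyone who'd rather see the idea outside of VEX, here's a minimal Python version, with the binary search included (names are mine; a fuller version would also interpolate within the chosen bin):

```python
import random
import bisect

def make_weighted_sampler(ramp, nbins=200):
    """Build a sampler returning values in [0, 1) distributed according
    to `ramp`, a function over [0, 1]. Same approach as the VEX snippet:
    accumulate bin sums into a running CDF, then invert it."""
    cdf = []
    total = 0.0
    for i in range(nbins):
        total += max(ramp((i + 0.5) / nbins), 0.0)
        cdf.append(total)

    def sample():
        # pick a random height on the sum curve and find which bin it
        # lands in -- a binary search replaces the linear scan
        u = random.uniform(0.0, total)
        i = bisect.bisect_left(cdf, u)
        return (i + 0.5) / nbins
    return sample
```

Sampling with a linear ramp (density proportional to x) should give a mean of 2/3, which is a handy sanity check.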
Example .hip file is here: http://mattebb.com/projects/houdini/weightedrandom.hipnc
December 22nd, 2013 . 0 comments
I just finished making a terrarium for Kat’s birthday. It’s an homage to her favourite scene in Jurassic Park, when the lawyer gets eaten on the toilet. I was able to pick up a plastic dinosaur from the Australian Museum, and some architectural model making supplies for the destroyed toilet structure, but I wanted to make it accurate, so used 3D printing for the lawyer on the toilet. I’m not a great modeller/sculptor, but at the size it was printed, I could get away with a pretty rough digital sculpt to generate the STL for printing. It’s a repurposed version of this toilet combined with a modified old human base mesh I made years ago.
Once again I used Rapid Prototyping Services in Sydney for the print – the level of fine detail and quality was impeccable, unfortunately a bit masked by my dodgy paint job with too many layers of undercoat. You can check how well it fares against the reference.
December 4th, 2013 . 4 comments
Although the 3Delight/blender addon is mostly abandoned due to lack of time to keep it maintained, I want to at least bring it up to date with the latest 3Delight release, which has had a lot of good updates in the pathtracing/physically based rendering department.
3Delight’s approach to the problem has been to extend some of the commonly used shadeops, seemingly with the intention of making it simple to convert over old shaders, or create simple shaders from scratch. It definitely has advantages in terms of the amount of work required to get something set up, but in my opinion it’s also a bit messy, and confusing to see how it all fits together, especially if you’re familiar with a more common and organised physically based shading infrastructure as in pbrt.
December 4th, 2013 . 2 comments
Last week I wrapped on The Lego Movie, produced at Animal Logic in Sydney. It was tons of fun to work on, with lots of unique challenges for us in the fx department. It also really surprised me, becoming a much better film than I’d initially imagined. I’m looking forward to seeing it when it comes out next year – until then, here’s the trailer:
September 4th, 2013 . 0 comments
If, like me, you use VOPs in Houdini a lot, you might also find it a bit annoying to keep a clean setup when using Import/Add Attribute VOPs – nicely naming nodes and adding local variables can mean a lot of typing.
I made a couple of scripts that you can add to your shelf and add keyboard shortcuts to, to automate this process a bit. They’ll pop up a text entry with the attribute name, add the VOP node, and then fill in the relevant parameters based on the attribute. For the import attribute node it also tries to be a bit clever and set the attribute type information by querying the incoming geometry. Enjoy!
August 12th, 2012 . 7 comments
One of the things I’ve been wanting to investigate more since rendering clouds on Happy Feet 2 is using more physically based phase functions for volume lighting. The volume phase function describes the direction of scattered light relative to the incoming light’s direction – either isotropic (scattering light evenly in all directions), or anisotropic – scattering mostly forward, or mostly back, or some more complicated distribution altogether. Many volumes, such as smoke, are generally pretty isotropic, but clouds have quite a particular scattering behaviour, determined by the size and distribution of microscopic water droplets that comprise their form. On HF2 we used a combination of two Henyey-Greenstein distributions, which was good enough for the task at hand, but after spending so long back then researching atmospheric optics, I’ve been curious to try something more realistic.
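For reference, the Henyey-Greenstein function itself is tiny – here's a Python sketch, along with the kind of two-lobe blend mentioned above (the blend weighting is illustrative, not the exact values we used on HF2):

```python
import math

def henyey_greenstein(cos_theta, g):
    """Henyey-Greenstein phase function, normalised so it integrates
    to 1 over the sphere. g in (-1, 1): positive = forward scattering,
    zero = isotropic, negative = backward."""
    denom = (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5
    return (1.0 - g * g) / (4.0 * math.pi * denom)

def dual_hg(cos_theta, g1, g2, blend):
    """Blend of two HG lobes, eg. a strong forward lobe plus a weaker
    backward one -- the sort of combination used for clouds."""
    return (blend * henyey_greenstein(cos_theta, g1)
            + (1.0 - blend) * henyey_greenstein(cos_theta, g2))
```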
Antoine Bouthors’ excellent Phd thesis, Realistic rendering of clouds in realtime, spends a few pages on the Mie phase function – a good fit for many types of clouds. Clouds are highly anisotropic – at each scattering event, over 90% of the energy gets scattered in roughly the same direction that it enters (the strong forward peak), which is what gives the characteristic ‘silver lining’ when looking through a cloud towards the sun. Of the remaining energy scattered in other directions, diffraction and interference create some interesting optical effects such as the fogbow, or glory.
As part of Antoine’s work, he used MiePlot to generate scattering data for a general purpose droplet size distribution found in clouds, and published the data online – the relevant files are Mie0.txt and MiePF3.txt. The scattering distributions are divided into two to make it easier to sample – I imagine because the forward peak is so strong, attempting to importance sample it would barely leave any samples in the sideways or backward directions, so it helps to sample the rest of the function independently from the peak, and combine with weighted probabilities. In a traditional lighting pipeline where the phase function is just being used to weight lighting contribution and not sample new scattering vectors, we can ignore this and combine the data into a single set. To verify the results I wrote a little python script using pyglet to add the split data together and graph them on a log scale, as in the thesis – it’s pleasing to see it matches up perfectly. I also normalised the data to sum to 1.0 over the RGB channels.
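On the normalisation: strictly speaking a phase function should integrate to 1 over the sphere, which means weighting each tabulated angle by sin(theta) when summing. Here's a hedged sketch of that step (assuming the table is sampled uniformly in angle from 0 to pi; the function name is mine):

```python
import math

def normalise_phase_table(values):
    """Scale a tabulated phase function (sampled uniformly in angle
    from 0 to pi) so it integrates to 1 over the sphere, using
    sin(theta) solid-angle weighting."""
    n = len(values)
    dtheta = math.pi / n
    integral = sum(v * math.sin((i + 0.5) * dtheta) * 2.0 * math.pi * dtheta
                   for i, v in enumerate(values))
    return [v / integral for v in values]
```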
For rendering in Houdini, the original plan was to copy the data in as a VEX array, and lookup/interpolate the data based on the input angle, but I think I may have hit a limit to the size of arrays VEX lets you define – arrays of a handful of float items worked fine, but using the full 1800 values introduced compilation errors. So as a more portable alternative, I brought the data into blender to generate an 1800 x 1 pixel OpenEXR image containing the scattering values. This is a lot more convenient to look up, and is easy to then reuse in a renderman shader for example. To sample the image using the U coordinate, the angle between light direction and eye direction needs to be fit to a 0-1 range. This is trivial to do in VOPs, replicating:
U = acos(normalize(I) . normalize(L)) / pi … and feeding that into a color map node.
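In Python, the same lookup fit looks like this – a direct transcription of the formula above, with a clamp to keep acos happy with floating point error:

```python
import math

def phase_lookup_u(I, L):
    """Fit the angle between eye direction I and light direction L to
    a 0-1 U coordinate for sampling the 1800x1 phase function image."""
    def norm(v):
        m = math.sqrt(sum(c * c for c in v))
        return [c / m for c in v]
    i, l = norm(I), norm(L)
    d = sum(a * b for a, b in zip(i, l))
    d = max(-1.0, min(1.0, d))  # clamp dot product for acos
    return math.acos(d) / math.pi
```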
Because this phase function is so heavily forward scattering, by default the volume may seem dark from any other angle than directly in front of the light. For best results:
- Set up your scene to be more physically plausible, with a high dynamic range in lighting, viewed through a tone mapping transform (or at least a gamma curve). This way, with enough intensity in your main light, you’ll be able to see the backward and sideways scattered light, even though the light in the strong forward peak will be very intense. The tone mapping should bring the peak back into a more acceptable range while still leaving the other angles visible.
- Render some form of multiple scattering additional to the single scattering, to fill out the overall light in the volume. This can be faked in any number of ways – in Houdini one way could be to add in an additional weak isotropic phase function (higher orders of scattering tend towards isotropic scattering after light has bounced around randomly for a while), perhaps with a blurry deep shadow map, or however you like to fake multiple scattering effects. Even adding in a constant ambient term can help substantially.
June 10th, 2012 . 6 comments
Today I tried out the super nice Sublime Text editor for writing Renderman shaders, and liked it enough to buy a license immediately. Looks like it’s made here in Sydney too!
To improve the workflow a bit, I made a little language pack for Renderman SL. It includes a syntax package for highlighting, which mostly inherits the C syntax, but adds a few extras for SL data types (eg. color, vector) and some global shader variables. It also contains a ‘build system’ for shaderdl, 3Delight’s shader compiler, which lets you compile a shader quickly with a hotkey. It should be very easy to copy that for other renderers/compilers too. It’s all pretty basic, but does enough for my needs.
You can grab it here: sublime_renderman_v1.zip
April 25th, 2012 . 9 comments
It’s been a while, but the 3Delight/blender exporter has been getting progressively more out of date, with changes in Blender’s python API leaving version 0.7.0 broken in current releases. Blender 2.63 will also include the new bmesh system, which is incompatible with old versions. I’ve updated the addon to fix these issues, and add a few more little things. This version now requires Blender 2.63 – until it’s released, you can use a pre-release version. As always, I’ve tried to test it on the main OSes, but if you find any issues, please let me know.
Update: There was a last minute Blender python API change which renders v0.7.5 incompatible with the Blender 2.63 release. The addon has been fixed and updated to v0.7.6.
Download the addon here: render_3delight_0.7.6.zip
- Enabled editable output paths, including RIB file export, shadow maps, and point clouds. These path properties support using environment variables, or other blender data variables that are built in to the exporter. Environment variables can be read from outside blender, or default environment variables can be edited from within the Environment Variables panel in Render Properties.
More info at: http://mattebb.com/3delightblender/documentation/
- Added option to either export the RIB and render interactively, or just export the RIB (better for render farms)
- Added choice of display drivers – currently supported are ‘auto’ (integrated in blender image editor), idisplay, and tiff.
- Added access to Hider settings. Using idisplay with the raytrace hider allows progressive rendering.
April 11th, 2012 . 3 comments
Spherical harmonics are a method for efficiently representing values that vary with angle – often, lighting. They’ve been in use for a long time in computer graphics, and with a Google search you can find plenty of interesting information explaining the subject – in particular this (now a bit old) paper by Robin Green of SCEA, Spherical Harmonic Lighting: The Gritty Details. The common use case for spherical harmonics is caching a slow-to-calculate value that varies by angle, storing it as SH coefficient data, then reproducing an approximated version of that original value later on. What makes spherical harmonics useful is that for certain types of things (like diffuse lighting) the amount of data you have to store is quite small, and the value can be reproduced later quite quickly.
I’d previously tinkered with SH a bit in blender, but this time decided to port the code in the above paper to VEX in Houdini, implemented as a couple of VOPs, used to generate and evaluate spherical harmonics as part of a VOP network. I started playing with this idea last year at Dr. D, and still haven’t implemented my original ideas yet after getting sidetracked with these fun examples of things you can do with SH. Maybe soon.
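To give a flavour of what the VOPs are doing internally, here's a Python transcription of the real SH basis function from Green's paper – the same associated Legendre recurrences I ported to VEX (function names are mine; note 4 bands means bands² = 16 coefficients, hence the 4×4 matrix storage below):

```python
import math

def _P(l, m, x):
    # associated Legendre polynomial P_l^m(x) via standard recurrences
    pmm = 1.0
    if m > 0:
        somx2 = math.sqrt((1.0 - x) * (1.0 + x))
        fact = 1.0
        for _ in range(m):
            pmm *= -fact * somx2
            fact += 2.0
    if l == m:
        return pmm
    pmmp1 = x * (2.0 * m + 1.0) * pmm
    if l == m + 1:
        return pmmp1
    pll = 0.0
    for ll in range(m + 2, l + 1):
        pll = ((2.0 * ll - 1.0) * x * pmmp1 - (ll + m - 1.0) * pmm) / (ll - m)
        pmm, pmmp1 = pmmp1, pll
    return pll

def _K(l, m):
    # normalisation constant for the SH basis
    return math.sqrt((2.0 * l + 1.0) * math.factorial(l - m) /
                     (4.0 * math.pi * math.factorial(l + m)))

def sh(l, m, theta, phi):
    """Real spherical harmonic basis function y_l^m (Green's convention)."""
    sqrt2 = math.sqrt(2.0)
    if m == 0:
        return _K(l, 0) * _P(l, 0, math.cos(theta))
    if m > 0:
        return sqrt2 * _K(l, m) * math.cos(m * phi) * _P(l, m, math.cos(theta))
    return sqrt2 * _K(l, -m) * math.sin(-m * phi) * _P(l, -m, math.cos(theta))
```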
You can download this example file, and the OTL here: houdini_sh_otl_hipnc.v001.zip.
The OTL includes three VOPs:
- SH Generate
Stores a single float sample along with its corresponding angle in a set of spherical harmonics coefficients. It’s currently using a 4×4 Matrix type as storage for this because it’s convenient to work with in VOPs/attributes, and because 16 floats will allow you to store up to 4 bands of spherical harmonics, which is enough for many situations involving smooth/diffuse values.
- SH Evaluate
Evaluates the value of the input spherical harmonics coefficients at a given lookup angle, as a single float.
- Cartesian to Spherical
The SH Generate/SH Evaluate VOPs take input angles in spherical coordinates (phi/theta). This VOP can be used to convert a Cartesian direction vector to spherical coordinates.
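The conversion itself is only a couple of lines – here's the equivalent in Python, using the usual acos/atan2 convention (theta as the polar angle from +Z; spherical coordinate conventions vary, so check signs against your own setup):

```python
import math

def cartesian_to_spherical(v):
    """Convert a unit direction vector to (theta, phi): theta is the
    polar angle from +Z, phi the azimuth around Z measured from +X."""
    x, y, z = v
    theta = math.acos(max(-1.0, min(1.0, z)))  # clamp for float error
    phi = math.atan2(y, x) % (2.0 * math.pi)
    return theta, phi
```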
The OTL was made in Houdini Apprentice, but probably isn’t difficult to convert to a commercial version. If you find any use for this, or any mistakes, please let me know!
December 28th, 2011 . 4 comments
I made another piece of jewelry for Kat‘s birthday last week – I thought I’d experiment with making a necklace rather than a ring like last time. It’s 3D printed and cast in sterling silver, and sits in three parts. Originally the idea was to have the arrangement customisable so they could be re-positioned along the chain, but in the end only a few combinations hang well in practice. Doing it this way, as opposed to a pendant, is much more complicated than I imagined and will require a bit more experimentation and prototyping if I attempt it again in the future.
The design is inspired by an Islamic tile pattern that we saw recently while travelling in Turkey. I modelled it in Houdini by first procedurally re-creating the tiling pattern, then randomly breaking it up and distorting the pieces with some final detailing and bevelling. The final form was chosen by spending a while experimenting with different random seeds and noise offsets to find something that worked aesthetically. I then brought it into Blender for final tweaks, cleaned up the geometry to be watertight, added sprues for ease of casting, and exported the STL file for the print service.
September 15th, 2011 . 23 comments
After fixing some cross-platform issues that people were having with the last few versions, here’s a new release of 3Delight/Blender. As well as the fixes, I’ve included some new stuff that’s been on the backburner for a while – a new point cloud global illumination method. When enabled, the addon will automatically generate a point cloud, and then use it in the render for indirect lighting and environment lighting.
It’s just doing one bounce of indirect lighting; in the future it should be reasonably easy to add more bounces via photon mapping in the point cloud generation stage. Eventually I’d like to make this a bit more advanced, with a more modern design for the lighting/shading pipeline and more control over baking pre-passes, but for now (especially since I’m quite short on time) I’d rather get it out and working in a simple, automatic way so people can use it.
I’ve tested this on my mac, and in both Linux and Windows XP VMs, but as always, if you have any problems on your system please let me know. Download the new addon here: render_3delight_0.7.0.zip