Blender
Overview
Hot take: The standard Houdini trope is 'it makes the easy things hard and the hard things easy'. The Blender take might be 'make the easy things easier'.
If you consider that maybe 80% of 3d is to model a thing, texture it, and render it, Blender streamlines that quite well. It focuses on hotkeys and a busy but easily accessed UI; once you know a handful of keys and where the important buttons are, you can churn out stuff quickly.
Good things:
- Poly modelling and UVs are pretty fully featured, good set of default keyboard shortcuts, 'traditional' poly modellers would like it
- Sculpting is remarkably good
- The cycles (offline) and eevee (realtime) render engines are tightly integrated, very responsive
- Python API is pretty good, lets you automate most things, get stuff done.
Less good things:
- Geometry nodes can't match 20+ years of SOPs, but they're pretty good for only being a couple of years old
- Can have issues scaling to big production scenes, but that's why you use Houdini right?
- Not internally consistent the way Houdini is. More on that below.
- Documentation is patchy, often misleading, community support is variable. More on that below also.
Houdini has a tight knit, engaged community. Houdini itself has strong internally consistent underpinnings, quite often tips and tricks from 5 years ago, 10, 15 years ago are still valid today. Both these facts mean if you ask a question in the forum or in a discord, you'll likely get an answer quickly. Often this answer is from a pretty experienced Houdini person, often from the sidefx developers themselves.
The Blender community is several orders of magnitude larger. A Houdini discord I'm on has 190 online users, a Blender discord I just joined has 30 THOUSAND online users. There's no way to say this without sounding elitist, but a lot of those users are teenagers and hobbyists. Combine that community size with the code churn and frequent UI and workflow inconsistencies, and it can be surprisingly hard to get answers. It feels like the numbers are flipped; a Houdini discord will get 1 question every 10 minutes, and 5 people will answer. A Blender discord will get 10 questions every minute, and maybe 1 will get answered. That rate of questions and the disparate nature of Blender's various features means support channels can feel like asking questions on a noisy stock exchange floor. If you search stackoverflow or similar, there'll be many answers, but they'll either be out of date or just wrong.
All that said, waiting until v4.0 to use Blender feels like the right choice. I'm happy I delayed until now, and I'm happy that I was forced to learn it for work. 😃
General keys and UI
Move the camera
- mmb will rotate the view
- mmb+shift to pan
- scroll wheel to zoom, or ctrl+mmb
- numpad . to focus
Move and duplicate things:
- g = 'grab' = translate selection
- r = rotate
- s = scale
- shift-d = duplicate
Create things:
- shift-a = add menu, can start typing straight away like the houdini tab menu (though search is less fuzzy)
Most hotkeys in Blender are immediate, so g will immediately start a freeform translate action. You can then press modifiers to constrain, eg tap x to drag only along the x axis. Fast when you get used to it, eg 'ry' will start a rotate action around the y axis.
The small icons in the top right of the 3d viewport set the shading quality. The line art mirrored sphere is full quality.
Default renderer is Eevee; change to Cycles by going to the render properties (in the properties pane on the right, the icon that looks like the back of a DSLR) and changing the render engine to 'Cycles'.
To quickly bring up the node search in the material view, shift-a, then s
Until you get used to Blender, use the magnifying glass and hand icons on the right of the viewport to pan and zoom.
Tech underpinnings
If Houdini is based around unix, Blender is based around pointers and occasional garbage collection. Eg you make an object, create a material, assign a texture to the material. Blender internally has pointers from the object to the material to the texture. If you remove the texture from the material, the texture hangs around as an unreferenced object, or an orphan in their terminology. It will remain there until you save the scene and reload, at which point all orphans are garbage collected, or until you run a File -> Cleanup operation, which will also sweep up all orphans. You might see references to 'fake user'; this attaches a reference to the thing, even if it's not used, so it won't be swept away.
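You can see this from Python too (a minimal sketch; zero-user data-blocks are the 'orphans', and use_fake_user is the 'fake user' flag):
python3
import bpy

# materials with no users are orphans, and get swept on save/reload or File -> Cleanup
orphans = [m for m in bpy.data.materials if m.users == 0]
print(orphans)

# give one a fake user so it survives the sweep
if orphans:
    orphans[0].use_fake_user = True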
A big thing to get used to as part of this, parent/child relationships are also stored as pointers. This means there's no 'real' parent/child links, it's more data on the objects. This has the surprising result that names have to be unique. In Maya or Houdini you expect if you duplicate a box you get box, box1, box2 etc. But if you parent them under different groups, eg /steve and /dave, you can have /steve/box and /dave/box. In other words, their full parent/child path is unique. Blender doesn't allow this, so even if you have that kind of parenting, Blender will insist the names be unique, so you'll get /steve/box and /dave/box.001. This is fundamental to how Blender is designed, you can't avoid it. You'll see that pipeline tools and python scripts often have to deal with this .number issue on object names, which can be quite tricky when exchanging data with other 3d apps.
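A common pipeline-side workaround (just a sketch, nothing official) is to strip that trailing .001/.002 suffix when matching names against other apps:
python3
import re

def strip_blender_suffix(name):
    # 'box.001' -> 'box', leaves 'box' untouched
    return re.sub(r'\.\d{3}$', '', name)

print(strip_blender_suffix('box.001'))  # box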
Another impact of this 'everything is pointers' approach is you can't easily filter for objects by hierarchy. You take it as given that you can look for /set//cam/*camShape or similar in Maya/Houdini. You can't directly access the scene hierarchy in Blender, because it doesn't really exist. There are alternatives, eg query an object for its children, and those objects for their children etc, or you can run filters over the entire scene object list very quickly, eg 'give me a list of all the cameras'. Takes a little getting used to.
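Both patterns look something like this in Python (a minimal sketch using standard bpy properties):
python3
import bpy

# filter the flat scene list by type, eg 'give me all the cameras'
cameras = [ob for ob in bpy.data.objects if ob.type == 'CAMERA']

# or walk the parent/child pointers yourself to fake a hierarchy query
def walk(ob):
    yield ob
    for child in ob.children:
        yield from walk(child)

for root in (ob for ob in bpy.data.objects if ob.parent is None):
    for ob in walk(root):
        print(ob.name)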
Environment light
Blender doesn't have this concept per se; instead the 'world' has a material. You can get to it either via the world properties, or in the shader editor flip the mode in the top left from 'object' to 'world' and see its node graph.
Mirrored expressions
Blender has object constraints and bone constraints, in theory either could be used so that if you drag a locator on its local x axis, another locator would move on its own local x axis by an equal and opposite amount. From what I gather, the constraints are a little too clever; they either calculate a final value in world space, or don't allow for simple 'value * -1' expressions. In my case I had two locators that were children of other objects, I explicitly needed their local translate x value.
A straight expression seemed the answer, but this is also too clever, and gets confused about what value you want even though it has options for local/parent/world space.
To get around this, I had to make the expression refer NOT to a transform, but to a single variable. I could then refer directly to location[0] as my expression var, then make the value be var*-1.
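From Python the setup looks roughly like this (a hedged sketch; 'locator_a' and 'locator_b' are hypothetical object names, and the driver sits on local translate X):
python3
import bpy

src = bpy.data.objects['locator_a']   # hypothetical source locator
dst = bpy.data.objects['locator_b']   # hypothetical mirrored locator

# put a driver on the destination's local X location
drv = dst.driver_add('location', 0).driver

# a single-property variable pointing straight at location[0],
# skipping the 'too clever' transform channel variables
var = drv.variables.new()
var.name = 'var'
var.type = 'SINGLE_PROP'
var.targets[0].id = src
var.targets[0].data_path = 'location[0]'

drv.expression = 'var * -1'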
Duplicate object
- shift-d will duplicate, or in the menus object->duplicate. ctrl-c/ctrl-v also work, but I've found it can paste to odd locations in the outliner, vs just next to the current shape with duplicate. Duplicate also assumes you want to immediately move the copy.
Wireframe on shaded
- search the property panel for 'wire', or find it in the 'object' property set (the orange square), turn on 'wire'
xray mode
- alt-z, or the two overlapping squares in the top right mini toolbar where you choose between wire/solid/eevee/cycles
Viewport colour
The default viewport keeps everything gray. There's a Viewport Display section within the object properties, it has a Color slot, but by default it has no effect.
To fix, open the viewport shading properties (little dropdown next to the small shading spheres in the top right of the display), set color to 'object'.
Panning the viewport top menu bar
Often when running a small view the shading buttons aren't visible. If you hover your cursor over the menu and use the scroll wheel, it will move left/right.
Sculpt mode
It's pretty good!
- f - set radius (I guess f is for falloff). Don't be confused by the UI, it's radius. Tap f, the old radius is shown, move the cursor to resize in realtime, tap again to confirm
- shift-f set falloff
Geometry Nodes
Somewhere between Unreal Blueprints and Vops (actually has a LOT in common with Apex!). Geo is passed along as a primary wire, other wires and nodes represent attributes on the geometry, and operators that get and set attributes.
The strangest thing to get used to is the order of operations. Unlike vex/vops where it's a clear data-in-data-out flow, and you get used to the idea that you can't (or shouldn't) read and write at the same time, Geometry Nodes lets you do this, and what is available when is very context dependent. For the most part you'll be doing little atomic operations that eventually get their results connected to a node that has both a green geometry input and an attribute input. Whatever attribute nodes are feeding into it will get results from THAT point in the graph, not from the initial geometry input. It's weird, but makes sense after a while.
The suite of nodes available is pretty powerful, you can encapsulate nodes into subgroups, you can have exposed inputs like an HDA, it's a fun playground to dive into for Houdini folk.
Careful with types
Has caught me out a few times, watch for types. In my specific case, the attrib blur node. Had a heart attack when a setup I'd been working on for a while was acting up, eventually realised it's because the attrib blur defaults to type 'float', when I was passing vector values through it. The GUI gives you clues that it's doing a conversion for you (vector wires are purple, floats are white, you get a gradient when it's casting), but it's super subtle and easy to miss. Typing it here to force it into my memory!
3d cursor
https://www.youtube.com/watch?v=JoVNtekpnX8
- Select cursor from toolbar on left
- can snap to surfaces by holding down shift while dragging
- then can snap the camera to cursor in the view/camera menu
- when done, hide the cursor in the view options
- shift r.click will set its location
- shift-s will bring up a pie menu of cursor related options, 2 handy ones are 'cursor to object' and 'object to cursor'
- 3d cursor is position only, it won't allow you to snap rotation values.
show shape keys in edit mode
Jump over to the shape keys lister, click the 'edit' button, the square with the filled in corner. Make sure to select the actual shape you want to see/edit.
Everything is pink in cycles mode
If objects are pink it means a material is missing a texture. If everything is tinted pink, it means the environment is missing a texture, probably an hdri. In the properties panel click the globe icon, that's where the environment map is set. Click the parameter that has a reference to the missing texture, and in the big menu that pops up, choose 'remove' from the link column.
Rivet or parent to mesh/polygon
Hiding in plain sight; it's called a vertex parent. From the object properties (orange square with highlighted corners), relationships section, parent type is '3 vertices'. More info here:
Video Editing
At the top where it shows shortcuts for various viewport configs (layout, modelling, sculpting etc), click the + sign and choose 'video editing'.
G to grab and move. Hold ctrl to snap
K to razor at current time
Can setup static file browser, drag drop single clips to current time
Can drag multiple clips to timeline, but from modal (ie shift-a) only
Issue with maya UI mode, change prefs -> input -> animation -> change frame to left mouse (the default of 'action mouse' won't scrub)
Set overall timeline length in properties
Text edit strip quick, but no font styles, hmm
GLTF
It defaults to a separate animation track for each object. If you don't want this, expand the animation options in the exporter, then animation under that, and uncheck 'Group by NLA track'.
Displacement
This was written for Blender 2.8x, I assume it's different in later builds.
TLDR: Render settings Cycles experimental, subdivision viewport to 1px. Subdivision modifier with 'adaptive' enabled. Material displacement mode 'displacement'.
'True' per pixel displacement is a combo of cycles experimental features, material settings, geometry subdivision modifier. Use the 'Shading' panel layout at the top to see a render view, material editor, settings all at once.
Render settings (the back-of-camera icon):
- Render Engine : Cycles
- Feature set : Experimental. Among other things, this enables pixel-level subdivision.
Jump to the final render view mode (top right of viewport, final 'mirrored' sphere).
Select your object, then:
Modifier settings (the spanner icon)
- Add modifier, subdivision surface (2/3 down the second column)
- Turn on 'adaptive'
The object should now be a perfectly subdivided surface in the render view.
In the material editor:
- Add menu, search for 'checker'
- Connect the colour output of the checker to the purple 'displacement' input of the material output
- You'll see the checker in the viewport, but very shallow; it's just a bump map.
Material settings (Sphere with checkerboard icon)
- Settings -> Surface -> Displacement: Displacement only
Now it's displacing, and most likely awful; all in the one direction, and lumpy. Let's fix that.
In the material editor:
- Add, search, 'geometry'
- Add, search, 'vector math'
- Connect geometry normal and checker color to the vector math inputs
- Set vector math mode from 'add' to 'multiply'
- Connect this to displacement in on the material output
Cool, displacement is now normal based, how to refine the dicing?
In the settings tab (the back-of-camera icon again):
- Subdivision, set viewport to 1px. This is roughly equivalent to shading rate in renderman, it controls the rate of subdivision dicing. Go to 0.5px if feeling crazy, but generally a harsh checkerboard is a worst case scenario for displacement, you usually won't need to go this hard.
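If you prefer setting this up from Python, roughly the equivalent is below (a hedged sketch; these are the 2.8x Cycles addon property names, and it assumes the active object already has a material assigned):
python3
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.cycles.feature_set = 'EXPERIMENTAL'      # enables adaptive subdivision
scene.cycles.preview_dicing_rate = 1.0         # viewport dicing rate, in pixels

ob = bpy.context.active_object
ob.modifiers.new('Subdivision', 'SUBSURF')
ob.cycles.use_adaptive_subdivision = True      # the 'adaptive' toggle

mat = ob.active_material
mat.cycles.displacement_method = 'DISPLACEMENT'  # 'displacement only'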
Material assign and export via python
Blender treats materials as 'something else' in the way that Houdini shop or vop materials are 'something else' that can't be exported via bgeo, for example.
Unfortunately Blender's export tools, its USD exporter in my specific case, think the same way. This means that you can export an object, and you'll get the object+material, or all objects, and you'll get all the objects and their materials, but you can't export all materials; any material that isn't assigned isn't visible to the exporter, so it's skipped.
This means you need to get a list of all the materials, create an object for each, and assign each material. Here's some python code to do that:
python3
import bpy

# make a small cube for every material in the file, so the exporter sees them all
mats = [x for x in bpy.data.materials]
for i, mat in enumerate(mats):
    bpy.ops.mesh.primitive_cube_add(size=0.3, location=(i*0.5, 0, 0))
    sel = bpy.context.active_object
    sel.name = 'cube_' + mat.name
    sel.data.materials.append(mat)
This will create a little line of cubes, nicely named, each assigned one of the materials in the blend file. Now you can export and be happy.
Compositor
I wanted to use the compositor like cops; do some quick alterations to textures on disk, write em out. I flipped to the compositing desktop, put down an image node, loaded an image from disk, and couldn't work out why I couldn't see the image in either the background of the comp window or the image viewer.
Clever clogs and my Blender agony aunt Hallam Roberts came to the rescue. The default composite output node doesn't work with arbitrary inputs. Put down a viewer node, hey presto, the background wakes up, and if you set an image window to show the 'Viewer Node' texture, you'll see it there too.
Getting a texture in the material editor into the compositor
Annoyingly you can't copy/paste image nodes between the material editor and the compositor. The fastest way I found was to select the image node in the material editor, highlight the image name in the image field, copy it. Jump over to the compositor, make an image node, click on the little button/icon to the left of the image name. That brings up a search bar for all the images currently loaded in blender. Paste the image name in there, it will find the image, hit enter or click it, now you have the image in the compositor.
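The Python equivalent is short if you ever need it (a sketch; 'my_texture.png' is a hypothetical name for an image already loaded in bpy.data.images):
python3
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

# reuse an image data-block that's already loaded (eg by the material editor)
img_node = tree.nodes.new('CompositorNodeImage')
img_node.image = bpy.data.images['my_texture.png']   # hypothetical name

# a viewer node so the comp backdrop and image editor have something to show
viewer = tree.nodes.new('CompositorNodeViewer')
tree.links.new(img_node.outputs['Image'], viewer.inputs['Image'])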
Animation with Blender and Freebird XR and Nomad
Nomad, Blender, Freebird XR, allowed me to puppeteer this little scene very quickly and easily in VR:
The TLDR version:
- Sculpted, painted, made blendshapes in Nomad, export GLB
- Imported GLB to Blender, rigged head joint, wings
- Used Freebird XR python plugin for Blender to puppeteer the rigs in VR with a Quest 2.
I built the scene in Nomad based on a design by Shafi Ahi. My fanboy status for Nomad is limitless, the app is so damn good:
I exported this as a glb, and imported to Blender. It retains vertex colours, scene hierarchy, object names, cameras, lights, and translates nomad layers to shapekeys (blendshapes). Pretty awesome.
I'd stumbled across Freebird XR on twitter, joined the discord and started asking questions about animation. The creator of Freebird was nice enough to share an early script he'd been working on to do this, it's what I used here.
The plugin is basically 'from your ready-to-animate Blender scene, link VR controllers to things', so first I had to get the lizard head rigged, the blendshapes linked, the bee wings flapping.
So, rigging:
- The teeth, eye, lizard are separate objects with their own shapekeys. To link them is similar to Maya or Houdini. R.click the lizard shapekey slider for jaw open, copy as driver, select the teeth jawopen slider, r.click, paste driver.
- The lizard head is a 3 bone rig; shift-a, armature, draw out 3 bones as roughly chest, neck, head, auto skin the lizard to the bones with ctrl-p. I could test by getting into pose mode, looked good enough for this little test.
- Parent the eyes, teeth, tongue to the head bone too.
- For the wings I used blender's sculpt tools to quickly sculpt a wings-wide shapekey, and animated it with an expression like sin(frame*4.2) (see the sketch after this list)
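That wing expression is just a driver on the shapekey value. Roughly, from Python (a sketch with hypothetical names; it assumes the wing object has a shapekey called 'wings_wide'):
python3
import bpy

wing = bpy.data.objects['bee_wings']                  # hypothetical object name
blocks = wing.data.shape_keys.key_blocks
key = blocks['wings_wide']                            # hypothetical shapekey name

# drive the shapekey value from the current frame; 'frame' and 'sin' are
# built in to blender's driver expression namespace
drv = key.driver_add('value').driver
drv.expression = 'sin(frame * 4.2)'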
Now that the rig is done, can move into VR puppeteering:
- Once you get the addon activated, it appears as another panel in the right viewport nav, pressing N on your keyboard will toggle that nav
- The plugin is basically 3 things; what is parented to the hand controllers and headset position, what the buttons do, and starting/stopping recording.
- For the hand controllers, the bee was parented to the left controller, the lizard head bone parented to the right.
- When you turn on VR mode and put on the headset, you see your blender scene in there with the bee/lizard head linked to controllers. Because VR is all about absolute positioning, it's likely things are twisted or sitting in places hard to see or control. You can add position and rotation offsets for the controllers and the headset to get it all in the right place.
- I then linked the controller buttons to shapekeys, using the add-on shortcut to set this up quickly. Right trigger was jawopen, right grip squeeze was eyebrow, right joystick left/right was the arms. A nice surprise was that most of the buttons on the Quest 2 have a range of motion; they don't just toggle on/off, but if you gradually squeeze triggers or buttons, the shape keys slowly activate.
- Hit record (can also map this to a button), puppeteer away, stop record
- I did some cleanup in the graph editor to fix my bad puppeteering skills; reduce the dense data, tweak ranges, remove jitter etc
- Setup lights, camera
- Render out an mp4; using Eevee this processed in near realtime.
Baking USD and Alembic to shape keys
Blender has a handy feature called 'packing' (not to be confused with Houdini packed objects!), it lets you embed external files within the blender file. So if you need to send a blend file to a render farm, pack it, it will copy all the external image references used for materials into the blend, apparently it can support most external files you might reference.
So imagine my surprise when I had it pointing to an external animated USD, but I couldn't pack it. Same goes for alembic.
Instead, you have to bake the animation down into per-frame blendshapes, called shape keys in blender.
Thanks to the fabulous Ben Skinner, who found a little python snippet to do just this. Pop it into a text field in the scripting environment in Blender, run it, bam, baked.
Make sure that if you have other modifiers, like subdiv modifiers, you turn them off before running the bake.
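I won't reproduce Ben's exact snippet, but the general idea looks something like this (a hedged sketch; it assumes the animation comes in via a Mesh Sequence Cache modifier named 'MeshSequenceCache', and uses the 'apply as shapekey' operator):
python3
import bpy

ob = bpy.context.active_object
scene = bpy.context.scene

# for every frame, bake the current cache result down into a new shape key,
# then keyframe that key so it's only on for its own frame
for f in range(scene.frame_start, scene.frame_end + 1):
    scene.frame_set(f)
    bpy.ops.object.modifier_apply_as_shapekey(
        keep_modifier=True, modifier='MeshSequenceCache')  # check your modifier's name

    blocks = ob.data.shape_keys.key_blocks
    key = blocks[len(blocks) - 1]
    key.name = 'frame_%04d' % f

    key.value = 0.0
    key.keyframe_insert('value', frame=f - 1)
    key.keyframe_insert('value', frame=f + 1)
    key.value = 1.0
    key.keyframe_insert('value', frame=f)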