Blender
Overview
Hot take: The standard Houdini trope is 'it makes the easy things hard and the hard things easy'. The Blender take might be 'make the easy things easier'.
If you consider that maybe 80% of 3d is to model a thing, texture it, render it, Blender streamlines that quite well. It focuses on hotkeys and a busy but easily accessed UI; once you know a handful of keys and where the important buttons are, you can churn out stuff quickly.
Good things:
- Poly modelling and UVs are pretty fully featured, good set of default keyboard shortcuts, 'traditional' poly modellers would like it
- Sculpting is remarkably good
- The cycles (offline) and eevee (realtime) render engines are tightly integrated, very responsive
- Python API is pretty good, lets you automate most things, get stuff done.
Less good things:
- Geometry nodes can't match 20+ years of SOPs, but they're remarkably good for only being a couple of years old
- Can have issues scaling to big production scenes, but that's why you use Houdini right?
- Not internally consistent in the way Houdini is. More on that below.
- Documentation is patchy, often misleading, community support is variable. More on that below also.
Houdini has a tight-knit, engaged community. Houdini itself has strong, internally consistent underpinnings; tips and tricks from 5, 10, even 15 years ago are often still valid today. Both these facts mean that if you ask a question in a forum or a discord, you'll likely get an answer quickly, often from a pretty experienced Houdini person, sometimes from the sidefx developers themselves.
The Blender community is several orders of magnitude larger. A houdini discord I'm on has 190 online users, a blender discord I just joined has 30 THOUSAND online users. There's no way to say this without sounding elitist, but a lot of those users are teenagers and hobbyists. Combine that community size with the code churn and frequent UI and workflow inconsistencies, and it can be surprisingly hard to get answers. It feels like the numbers are flipped; a Houdini discord will get 1 question every 10 minutes, and 5 people will answer. A blender discord will get 10 questions every minute, and maybe 1 will get answered. That rate of questions and the disparate nature of Blender's various features means support channels can feel like you're asking questions on a noisy stock exchange floor. If you search stackoverflow or similar, there'll be many answers, but they'll either be out of date or just wrong.
All that said, waiting until v4.0 to use Blender feels like the right choice; I'm happy I delayed until now, and I'm happy that I was forced to learn it for work. 😃
General keys and UI
Move the camera
- mmb will rotate the view
- mmb+shift to pan
- scrollwheel to zoom, or ctrl+mmb
- numpad . to focus
Move and duplicate things:
- g = 'grab' = translate selection
- r = rotate
- s = scale
- shift-d = duplicate
Create things:
- shift-a = add menu, can start typing straight away like the houdini tab menu (though search is less fuzzy)
Most hotkeys in blender are immediate, so g will immediately start a freeform translate action. You can then press modifiers to constrain, eg tap x to drag only along the x axis. Fast when you get used to it, eg 'ry' will start a rotate action around the y-axis.
The small icons in the top right of the 3d viewport are shading quality. The line art mirrored sphere is full quality.
Default renderer is eevee; change to cycles by going to render properties (in the properties pane on the right, the icon that looks like the back of a DSLR) and changing render engine to 'cycles'.
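The same switch in python is one line; a minimal sketch (the engine names are the internal identifiers, not the UI labels):

import bpy

# switch the active scene's renderer; the default is 'BLENDER_EEVEE'
bpy.context.scene.render.engine = 'CYCLES'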
To quickly bring up the node search in the material view, shift-a, then s
Until you get used to blender, use the magnifying glass and hand icons on the right of the viewport to zoom and pan.
Duplicate object
- shift-d will duplicate, or in the menus object->duplicate. ctrl-c/ctrl-v also work, but I've found it can paste to odd locations in the outliner, vs just next to the current shape with duplicate. Duplicate also assumes you want to immediately move the copy.
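If you're scripting, you can duplicate without bpy.ops; a minimal sketch that copies the active object (drop the data.copy() line if you want a linked duplicate instead):

import bpy

src = bpy.context.active_object
dup = src.copy()                           # copies transform, modifiers etc
dup.data = src.data.copy()                 # deep-copy the mesh data too
bpy.context.collection.objects.link(dup)   # new objects must be linked to a collection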
Wireframe on shaded
- search the property panel for 'wire', or find it in the 'object' property set (the orange square), turn on 'wire'
xray mode
- alt-z, or the two overlapping squares in the top right mini toolbar where you choose between wire/solid/eevee/cycles
sculpt mode
It's pretty good!
- f - set radius (i guess f is for falloff). Don't be confused by the UI, it's radius. Tap f, the old radius is shown, move the cursor, the new radius updates in realtime, click to confirm
- shift-f set falloff
3d cursor
https://www.youtube.com/watch?v=JoVNtekpnX8
- Select cursor from toolbar on left
- can snap to surfaces by holding down shift while dragging
- then can snap the camera to cursor in the view/camera menu
- when done, hide the cursor in the view options
show shape keys in edit mode
Jump over to the shape keys list, click the 'edit' button (the square with the filled-in corner). Make sure to select the actual shape you want to see/edit.
Everything is pink in cycles mode
If objects are pink it means a material is missing a texture. If everything is tinted pink, the environment is missing a texture, probably an HDRI. In the properties panel click the globe icon; that's where the environment map is set. Click the parameter that references the missing texture, and in the big menu that pops up, choose 'remove' from the link column.
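To track down exactly which images are missing, a quick sketch for the python console that lists image datablocks with no pixel data:

import bpy

# a broken path means the image datablock exists but has no pixels loaded
for img in bpy.data.images:
    if not img.has_data:
        print(img.name, img.filepath)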
Rivet or parent to mesh/polygon
Hiding in plain sight; it's called a vertex parent. From the object properties (orange square with highlighted corners), relationships section, set parent type to '3 vertices'. More info here:
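The same setup is scriptable; a minimal sketch, with hypothetical object names and vert indices:

import bpy

child = bpy.data.objects['rivet']      # hypothetical names
target = bpy.data.objects['body']

child.parent = target
child.parent_type = 'VERTEX_3'         # follow a triangle of verts
child.parent_vertices = [0, 1, 2]      # indices of the three verts to ride on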
Video Editing
At the top, where it shows shortcuts for various viewport configs (layout, modelling, sculpting etc), click the + sign and choose 'video editing'.
G to grab and move. Hold ctrl to snap
K to razor at current time
Can set up a static file browser, drag and drop single clips to the current time
Can drag multiple clips to the timeline, but only from the add modal (ie shift-a)
Issue with maya UI mode: change prefs -> input -> animation -> change frame to left mouse (the default of 'action mouse' won't scrub)
Set overall timeline length in properties (scriptable too, see the snippet after these notes)
Text edit strips are quick, but no font styles, hmm
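That timeline length tip as python, a minimal sketch:

import bpy

scene = bpy.context.scene
scene.frame_start = 1
scene.frame_end = 500    # overall edit length in frames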
GLTF
It defaults to a separate animation track for each object. If you don't want this, expand the animation options in the exporter, then animation under that, and uncheck 'Group by NLA track'.
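From python the exporter is a single operator call; a minimal sketch, assuming that checkbox maps to the export_nla_strips flag (the filepath is hypothetical):

import bpy

# export_nla_strips off = don't group a track per NLA strip/object
bpy.ops.export_scene.gltf(filepath='/tmp/out.glb', export_nla_strips=False)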
Displacement
This is Blender 2.8x; I assume it's different in later builds.
TLDR: Render settings Cycles experimental, subdivision viewport to 1px. Subdivision modifier with 'adaptive' enabled. Material displacement mode 'displacement'.
'True' per pixel displacement is a combo of cycles experimental features, material settings, geometry subdivision modifier. Use the 'Shading' panel layout at the top to see a render view, material editor, settings all at once.
Render settings (the back-of-camera icon):
- Render Engine : Cycles
- Feature set : Experimental. Among other things, this enables pixel-level subdivision.
Jump to the final render view mode (top right of viewport, final 'mirrored' sphere).
Select your object, then:
Modifier settings (the spanner icon)
- Add modifier, subdivision surface (2/3 of the way down the second column)
- Turn on 'adaptive'
The object should now be a perfectly subdivided surface in the render view.
In the material editor:
- Add menu, search for 'checker'
- Connect the colour output of the checker to the purple 'displacement' input of the material output
- You'll see the checker in the viewport, but very shallow; it's just a bump map.
Material settings (Sphere with checkerboard icon)
- Settings -> Surface -> Displacement: Displacement only
Now it's displacing, and most likely awful; all in the one direction, and lumpy. Let's fix that.
In the material editor:
- Add, search, 'geometry'
- Add, search, 'vector math'
- Connect geometry normal and checker color to the vector math inputs
- Set vector math mode from 'add' to 'multiply'
- Connect this to displacement in on the material output
Cool, displacement is now normal based. How do we refine the dicing?
In the settings tab (the back-of-camera icon again):
- Subdivision, set viewport to 1px. This is roughly equivalent to shading rate in renderman; it controls the rate of subdivision dicing. Go to 0.5px if you're feeling crazy, but a harsh checkerboard is a worst case scenario for displacement; you usually won't need to go that hard.
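The whole checklist is scriptable too; a minimal sketch against the 2.8x-era API (the cycles properties come from the addon, so names may shift between versions):

import bpy

obj = bpy.context.active_object
scene = bpy.context.scene

scene.render.engine = 'CYCLES'
scene.cycles.feature_set = 'EXPERIMENTAL'    # unlocks adaptive subdivision
scene.cycles.preview_dicing_rate = 1.0       # viewport dicing rate in pixels

obj.modifiers.new('Subdivision', 'SUBSURF')
obj.cycles.use_adaptive_subdivision = True   # per-object adaptive dicing

mat = obj.active_material
mat.cycles.displacement_method = 'DISPLACEMENT'   # true displacement, not bump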
Material assign and export via python
Blender treats materials as 'something else', much like Houdini shop or vop materials are 'something else' that can't be exported via bgeo, for example.
Unfortunately Blender's export tools, its USD exporter in my specific case, think the same way. This means you can export an object and you'll get the object+material, or all objects and you'll get all the objects and their materials, but you can't export all materials; any material that isn't assigned isn't visible to the exporter, so it's skipped.
This means you need to get a list of all the materials, create an object for each, and assign each material. Here's some python code to do that:
import bpy

# make one small cube per material, laid out in a row along x
mats = list(bpy.data.materials)
for i, mat in enumerate(mats):
    bpy.ops.mesh.primitive_cube_add(size=0.3, location=(i*0.5, 0, 0))
    sel = bpy.context.active_object      # the cube we just added
    sel.name = 'cube_' + mat.name
    sel.data.materials.append(mat)       # assign the material to the cube
This will create a little line of cubes, nicely named, each assigned one of the materials in the blend file. Now you can export and be happy.
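With the cubes in place, the USD export itself is one operator call; a minimal sketch (filepath hypothetical, and the available flags vary between Blender versions):

import bpy

bpy.ops.wm.usd_export(filepath='/tmp/materials.usd')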
Compositor
I wanted to use the compositor like cops; do some quick alterations to textures on disk, write em out. I flipped to the compositing desktop, put down an image node, loaded an image from disk, and couldn't work out why I couldn't see the image in either the background of the comp window or the image viewer.
Clever clogs and my Blender agony aunt Hallam Roberts came to the rescue. For reasons I don't fully understand, the default composite output node doesn't work with arbitrary inputs. Put down a viewer node and hey presto, the background wakes up, and if you set an image window to view the 'viewer node' texture, you'll see it there too.
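If you're setting the comp up in python, the viewer node is just another node to create and wire; a minimal sketch assuming the tree already has an image node named 'Image':

import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

img_node = tree.nodes['Image']     # assumes an existing image node
viewer = tree.nodes.new('CompositorNodeViewer')
tree.links.new(img_node.outputs['Image'], viewer.inputs['Image'])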
Getting a texture in the material editor into the compositor
Annoyingly you can't copy/paste image nodes between the material editor and the compositor. The fastest way I found was to select the image node in the material editor, highlight the image name in the image field, copy it. Jump over to the compositor, make an image node, click on the little button/icon to the left of the image name. That brings up a search bar for all the images currently loaded in blender. Paste the image name in there, it will find the image, hit enter or click it, now you have the image in the compositor.
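In python the copy/paste dance isn't needed, since both editors can point at the same image datablock; a minimal sketch (the image name is hypothetical):

import bpy

scene = bpy.context.scene
scene.use_nodes = True

img = bpy.data.images['mytexture.png']   # already loaded by the material editor
node = scene.node_tree.nodes.new('CompositorNodeImage')
node.image = img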
Animation with Blender and Freebird XR and Nomad
Nomad, Blender, Freebird XR, allowed me to puppeteer this little scene very quickly and easily in VR:
The TLDR version:
- Sculpted, painted, made blendshapes in Nomad, exported GLB
- Imported GLB to Blender, rigged head joint, wings
- Used Freebird XR python plugin for Blender to puppeteer the rigs in VR with a Quest 2.
I built the scene in Nomad based on a design by Shafi Ahi. My fanboy status for Nomad is limitless, the app is so damn good:
I exported this as a glb and imported it into Blender. It retains vertex colours, scene hierarchy, object names, cameras, lights, and translates nomad layers to shapekeys (blendshapes). Pretty awesome.
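If you're round-tripping a lot, the import is scriptable too; a minimal sketch (filepath hypothetical):

import bpy

bpy.ops.import_scene.gltf(filepath='/tmp/lizard.glb')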
I'd stumbled across Freebird XR on twitter, joined the discord and started asking questions about animation. The creator of Freebird was nice enough to share an early script he'd been working on to do this, it's what I used here.
The plugin is basically 'from your ready-to-animate blender scene, link VR controllers to things'. So first I had to get the lizard head rigged, the blendshapes linked, the bee wings flapping.
So, rigging:
- The teeth, eye, lizard are separate objects with their own shapekeys. Linking them is similar to Maya or Houdini: right-click the lizard shapekey slider for jaw open, choose 'copy as driver', then on the teeth jawopen slider right-click and 'paste driver'.
- The lizard head is a 3 bone rig; shift-a, armature, draw out 3 bones as roughly chest, neck, head, then auto skin the lizard to the bones with ctrl-p. I could test by going into pose mode; it looked good enough for this little test.
- Parent the eyes, teeth, tongue to the head bone too.
- For the wings I used blender's sculpt tools to quickly sculpt a wings-wide shapekey, and animated it with an expression like sin(frame*4.2) (see the snippet after this list)
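Here's that wing driver as python, a minimal sketch (the shapekey name is hypothetical; 'frame' is a built-in variable in blender driver expressions):

import bpy

obj = bpy.context.active_object
sk = obj.data.shape_keys.key_blocks['wings_wide']   # hypothetical shapekey name

# drive the shapekey value with an expression evaluated every frame
drv = sk.driver_add('value').driver
drv.expression = 'sin(frame * 4.2)'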
Now that the rig is done, can move into VR puppeteering:
- Once you get the addon activated, it appears as another panel in the right viewport nav; pressing N on your keyboard toggles that nav
- The plugin is basically 3 things; what is parented to the hand controllers and headset position, what the buttons do, and starting/stopping recording.
- For the hand controllers, the bee was parented to the left controller, the lizard head bone parented to the right.
- When you turn on VR mode and put on the headset, you see your blender scene in there with the bee/lizard head linked to the controllers. Because VR is all about absolute positioning, it's likely things are twisted or sitting in places hard to see or control. You can add position and rotation offsets for the controllers and the headset to get it all in the right place.
- I then linked the controller buttons to shapekeys, using the add-on shortcut to set this up quickly. Right trigger was jawopen, right grip squeeze was eyebrow, right joystick left/right was the arms. A nice surprise was that most of the buttons on the Quest 2 have a range of motion; they don't just toggle on/off, but if you gradually squeeze triggers or buttons, the shape keys slowly activate.
- Hit record (can also map this to a button), puppeteer away, stop record
- I did some cleanup in the graph editor to fix my bad puppeteering skills; reduce the dense data, tweak ranges, remove jitter etc
- Setup lights, camera
- Render out an mp4; using Eevee this processed in near realtime.