Blender network rendering for single frames
Blender users have been experimenting with all sorts of interesting ways to speed up render times for single images on their individual machines.
Some methods include scripts to split renders between separate GPUs, splitting render load between CPU and GPU, or rendering multiple images at a much lower sample rate with randomized seeds, then alpha merging them all back together.
That's fine for a single machine, but even if you just have two computers, that's double the potential rendering speed! Blender has a simple and powerful way to split the workload between machines when rendering an animation, using placeholders and shared storage. Unfortunately, that only divides the work at the frame level. What if you're rendering a single image, but want to take advantage of separate, network-connected machines?
A few render farm services support something they call "split and stitch", where they subdivide the viewport and render each piece across a number of machines. Individual Blender users don't usually have local access to a render farm, but this script emulates that behavior -- whether you have access to 2 machines or 200. Blender has had these capabilities for a while; this script just automates the process. Go pool your computing power with your friends, or spin up 50 temporary Digital Ocean machines and render the hell out of your blend.
There are no external dependencies, but the script makes some assumptions about your environment. First of all, it's not a regular Blender plug-in; you'll probably want to run it in the same directory as your blend file. As with rendering an animation with placeholders, it's assumed that the directory containing your blend file is on shared storage of some kind (NFS, SMB, whatever), and that all participating render machines have access to it.
It also assumes you can run Blender from the command line on all participating machines. How you do this is entirely up to you: a commercial job runner like Qube or Tractor, an ssh wrapper like ClusterIt, or vanilla ssh, where you just copy and paste the command on each machine. Your call!
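Shared storage isn't just for collecting output -- it can serve as the coordination mechanism itself, much like Blender's placeholder feature does for animations: each machine atomically creates a chunk's output file before rendering it, and if the file already exists, some other machine has already claimed that chunk. Here's a minimal sketch of that claiming logic (the `claim_chunk` helper and its naming are my own illustration of the technique, not necessarily what chunked_render.py does internally):

```python
import os

def claim_chunk(directory, index):
    """Try to claim a chunk by atomically creating its placeholder file.

    O_CREAT | O_EXCL ensures exactly one process wins the race to create
    the file -- this also works across NFS/SMB clients on reasonably
    modern servers (very old NFSv2 setups were unreliable here).
    Returns True if this machine should render the chunk.
    """
    path = os.path.join(directory, "chunk_%03d.png" % index)
    try:
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return False  # another machine got here first
    os.close(fd)
    return True
```

Each machine would then simply loop over chunk indices 0-99, rendering only the chunks it successfully claims; the render overwrites the empty placeholder when it finishes.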
The image is currently divided into a fixed 100 parts, so if you have more than 100 machines at your disposal, some will sit idle. (This should be adjustable; I might make it an option later.)
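Under the hood, splitting a single frame comes down to Blender's border-render feature: each chunk enables `scene.render.use_border` (plus `use_crop_to_border`) with a normalized border rectangle, then renders just that region. Here's a sketch of the grid math for a fixed 10x10 grid, assuming chunks are numbered row-major from the top-left so that a later `montage -tile 10x10` reassembles them in the right order (Blender's border coordinates put y=0 at the bottom, hence the flip; the exact numbering scheme is my assumption, not confirmed from the script):

```python
GRID = 10  # 10 x 10 = 100 chunks

def chunk_border(index, grid=GRID):
    """Return (min_x, min_y, max_x, max_y) in Blender's normalized
    border coordinates (origin at the bottom-left) for a chunk numbered
    row-major from the top-left corner of the image."""
    col = index % grid
    row = index // grid              # row 0 is the top of the image
    min_x = col / grid
    max_x = (col + 1) / grid
    max_y = 1.0 - row / grid         # flip: Blender's y grows upward
    min_y = 1.0 - (row + 1) / grid
    return (min_x, min_y, max_x, max_y)

# Inside Blender, these values would be applied to the scene before
# rendering, e.g.:
#   scene.render.use_border = True
#   scene.render.use_crop_to_border = True
#   (scene.render.border_min_x, scene.render.border_min_y,
#    scene.render.border_max_x, scene.render.border_max_y) = chunk_border(i)
```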
You're left with tiled images -- how you put them back together into a completed image is entirely up to your pipeline. However, the fantastic ImageMagick package includes a program called montage, which can stitch the completed image very quickly.
There is no support for rendering animations at the moment. I suppose it wouldn't be difficult to add -- each frame in its own subfolder? Maybe later.
Let's say you have 3 machines, and your blend file is in a directory that is accessible to all of them. Copy this script into that directory. Ensure all your settings are dialed in how you'd like them (samples, light bounces, output format, size, etc.) in the blend file. Then, run Blender on each machine:
machine-1$ blender file.blend -b -P chunked_render.py
machine-2$ blender file.blend -b -P chunked_render.py
machine-3$ blender file.blend -b -P chunked_render.py
The 3 machines will divvy up the work, and your image will render 3 times as quickly (assuming similar hardware).
When they are done, you'll be left with 100 separate output files, labeled "chunk_XXX.png". If you have ImageMagick installed, the easiest way to reassemble them into your final image is via the following (just on one machine):
machine-1$ montage -background none -mode concatenate -tile 10x10 chunk_* rendered.png
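The zero-padded chunk_XXX names matter here: montage assembles tiles in the order the shell glob hands them over, and an unpadded lexicographic sort would place chunk_10 before chunk_2, scrambling the grid. A quick illustration, assuming three-digit padding as in the filenames above:

```python
# Zero-padded names sort lexicographically in the same order as their
# numeric chunk index, which is what `montage ... chunk_*` relies on.
padded = ["chunk_%03d.png" % i for i in range(100)]
assert sorted(padded) == padded  # glob order matches render order

# Without padding, lexicographic order diverges from numeric order:
unpadded = ["chunk_%d.png" % i for i in range(100)]
assert sorted(unpadded) != unpadded  # e.g. chunk_10.png sorts before chunk_2.png
```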
This should finish very quickly. And that's all there is to it.
Special thanks to Justin Smith for his sweet sweet math action.
Update: I somehow missed Alan Taylor's DTR project, which looks great -- albeit with a few more moving parts than I personally prefer. Check it out for an alternative approach to tackling this!
Update #2: There is also a script that Campbell Barton wrote -- it was a solution to rendering high resolutions without running out of memory rather than network rendering (which is likely how I missed it), but it's extremely similar. I especially like the configuration via environment vars.