Going on Air Wide Open – Custom rig for Octane Render by Polder Animation

Originally I was planning to write a single piece about air cooling overall, merging different practices, but my friends gave me so much information that I'm publishing all of it untouched, as separate articles.

This one is by Jean-Paul from Polder Animation. He talks through why and how this beautiful machine was built, and without needing to say anything more, I simply leave you to enjoy reading =)


The reason we decided on this setup is that we wanted to build a rig with as many video cards as possible, so the limiting factor was the maximum number of usable PCIe slots on available motherboards.

We considered a water-cooled rig as well, but we decided on the air-cooled version because it would be less prone to leakage and require less maintenance. That of course meant we had to construct our own casing, because none of the off-the-shelf cases supported five dual-slot cards with enough breathing room between them. I used two old cases I had lying around, sawed one to pieces, and used those parts as best I could to construct the outside support rack for the cards.

A few things to consider though before starting your own project like this:

1) When choosing the motherboard, check whether all PCIe slots can be used simultaneously. We went for the ASUS Maximus VI Extreme, which has five PCIe 16x slots and one PCIe 4x slot, but as it turned out later, only four of the PCIe 16x slots could be used at the same time. Luckily the PCIe 4x slot could be used as well, and the loss of bandwidth does not impact the render times.

2) Make sure you choose a good power supply. Although the cards we used (GTX 780 Ti OC) have a TDP of only 250 W each, when all five cards power up at the same time and start to overclock themselves, the power peak at the socket measures around 2200 W. This lasts only a very short time, probably less than a second, and after that the system draws only around 1200 W while rendering, but you need a power supply that can handle the peak demand. We are using the LEPA G1600-MA-EU, which has a 1700 W peak (1888 W from the socket after efficiency conversion). But I recommend going as high as your budget allows, as long as the unit has high-grade components.
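The efficiency conversion above can be sketched in a few lines. This is a rough back-of-the-envelope estimate, not the article's exact measurement: the rest-of-system figure and the ~90% efficiency are my assumptions; the GPU TDP and the PSU's 1700 W peak come from the text.

```python
# Rough PSU sizing sketch. Assumptions (not from the article) are marked.

GPU_TDP_W = 250          # GTX 780 Ti TDP, per the article
NUM_GPUS = 5
REST_OF_SYSTEM_W = 200   # assumption: CPU, board, drives, fans
PSU_EFFICIENCY = 0.90    # assumption: roughly 90% efficient at this load

# Sustained DC load while rendering, before efficiency losses
dc_load_w = GPU_TDP_W * NUM_GPUS + REST_OF_SYSTEM_W

# The wall socket sees the DC load divided by the PSU efficiency
wall_draw_w = dc_load_w / PSU_EFFICIENCY

# The LEPA G1600's 1700 W DC peak corresponds at the socket to roughly:
lepa_peak_wall_w = 1700 / PSU_EFFICIENCY

print(f"estimated sustained wall draw: {wall_draw_w:.0f} W")
print(f"PSU peak seen at the socket:   {lepa_peak_wall_w:.0f} W")
```

Dividing the 1700 W DC rating by ~90% efficiency lands close to the 1888 W socket figure quoted above, which is why the unit survives the sub-second 2200 W power-up spike only barely, and why oversizing is recommended.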

3) The most important parts of this rig are the PCIe riser cards. I bought non-shielded PCIe 16x risers (20 euros apiece) designed for cheap bitcoin mining, but with these we had a very rough start. At first we could not get all the cards to work: the system would only recognise a few of them and randomly lose them while rendering. After some research I realised that the risers are very prone to electrical interference, and clumping five of them together in close proximity does NOT help. So I decided to shield them myself by wrapping them in tin foil. This was a very tedious job in which I wrapped each strand (three per riser) in a four-layer tin foil casing. After this all the cards worked, but the system still occasionally dropped a card; a simple reboot and fiddling with the riser would fix it, but it was not perfect. More recently I surrounded each riser with an anti-static bag and spaced them further apart with folded cardboard in between. Since then we have not had any problems, but it's not scientifically proven and does not win the prettiness award.

4) Another thing we had to do to get all the cards recognised was to set the PCIe generation back to 1 instead of 3; the higher speeds are more susceptible to interference (my conclusion, not a proven fact). Luckily, at 4 lanes and PCIe generation 1 the transfer speed is still 1 GB/s, so fully loading the card's memory takes 3 seconds at most, which is more than acceptable considering the render times. These calculations are theoretical and may vary depending on the quality of the risers, but in my opinion the loss of bandwidth for loading scene data is negligible compared to the render times. I compared the OBJ load times of two setups: the first being the machine this article is about (GTX 780 Ti at gen 1 and 4 lanes), the second a setup with a GTX 580 (at gen 2 and 16 lanes). I enabled only one card in each system; the difference in bandwidth should make the GTX 580 eight times faster. When I loaded an OBJ file of 180 MB (its size in GPU memory), it took 3.5 seconds on the GTX 580 and 7 seconds on the GTX 780 Ti; however, when I loaded a 300 MB OBJ, it took 11 seconds on the GTX 580 and only 9 on the GTX 780 Ti. This tells me that load times depend more on other things: probably network, HDD, memory and/or CPU. And of course this is a comparison of two different generations of GPU chips and systems, connected via different lengths of network cable (all data resides on a file server), so it is far from ideal, but it is a real-world situation.
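The bandwidth arithmetic behind the "1 GB/s, 3 seconds max" claim can be checked with a quick sketch. PCIe 1.x delivers roughly 250 MB/s of effective throughput per lane; the 3 GB figure is the GTX 780 Ti's memory size, which is my assumption for what "full memory loading" refers to here.

```python
# Back-of-the-envelope check of the PCIe gen 1, 4-lane bandwidth claim.

PCIE_GEN1_MB_PER_LANE = 250   # effective throughput per PCIe 1.x lane, MB/s
LANES = 4
VRAM_MB = 3 * 1024            # assumption: GTX 780 Ti's 3 GB of memory

# Four gen-1 lanes together give about 1 GB/s
link_bandwidth_mb_s = PCIE_GEN1_MB_PER_LANE * LANES

# Worst case: filling the entire card memory over that link
full_load_s = VRAM_MB / link_bandwidth_mb_s

print(f"link bandwidth:            {link_bandwidth_mb_s} MB/s")
print(f"worst-case full VRAM load: {full_load_s:.1f} s")
```

This lands at roughly 3 seconds to fill the whole card, matching the estimate above; as the OBJ comparison shows, in practice disk, network and CPU overhead dominate well before the link does.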

As far as GPU temperatures go, this air-cooled setup is more than sufficient. Depending on the time of year and room temperature, the card temperatures vary from 45 to 65 degrees Celsius after hours of non-stop full-load rendering. A side note though: due to the number of risers there is no room for a bigger CPU cooler, and combined with the severely impaired airflow in the casing, the CPU temperatures are less than ideal, running up to almost 100 degrees Celsius after longer periods at full load. This is something to take into account when designing your own case. And needless to say, this setup produces quite a lot of noise.

If I had to do it all over again, and we probably will at some point for an extra render system, I would go for professionally shielded risers such as 3M's shielded twin-axial cables. But at roughly 90 US dollars apiece, it is worth considering a water-cooled option where the cards can be plugged directly into the motherboard.

Kind regards,

Jean-Paul Tossings


So that’s it, guys! Again, I would like to thank Jean-Paul for sharing his thoughts, tips and findings with us. A truly ferocious-looking piece of hardware that you won’t see very often!
