Value-based GPU build for Octane Render

It’s easy to go to Your favorite shop (or e-shop) & buy the most expensive product, expecting it to perform the best. If You did so, well.., I have bad news for You. Reality is a bit more complicated & pricing is usually based on different performance metrics.

After spending a ton of money, You might not only get the worst value possible, but Your top of the line professional Quadro/Tesla might even perform slightly worse in GPU rendering (ray tracing, path tracing, etc.) than a gaming oriented GTX card costing a fraction of that price.

Sounds confusing? If so, simply keep reading, because here we are going to try to clear things up a bit. To help me handle this hardware topic I’m happy to introduce You to my guest from OTOY’s forums, Sebastian (better known to Octane Render users as Smicha).

Sebastian: Thank you, Tom. And welcome all Octane Render enthusiasts.

Together we’ll try to nail down some basics & then get into specifics, based on his personal build. We came across each other on OTOY’s forum discussing various topics over different threads & it seems we have very similar viewpoints (at least on hardware). I hope this dialog style article about GPU hardware will be eye opening for some & at least to some extent useful for those who already know quite a bit.

Quadro & Tesla cards from nVidia are not that different in terms of internal components compared to the GTX line.

Those PRO cards are tuned to perform better with some professional applications (where optimised driver support exists). They have higher double precision (DP) performance and ECC vRAM, but neither of those features makes any difference for GPU path tracing in programs like Octane Render. One good thing is that the PRO line usually offers higher amounts of vRAM compared to the GTX line, but on the other hand the chips that do all the calculations are usually tuned less aggressively (in pro cards) to keep temperatures and power usage as low as possible.

If You consider paying a few times more to get the same or even less rendering performance, the amount of available memory should be very important to You.

Here we are going to talk about value based builds (which doesn’t mean cheap) using GTX cards & how to tune them in order to squeeze out every little bit of available performance. We’ll try to look into differences between reference & non-reference coolers, at overall airflow in a case & how it influences rendering speed. Finally, we’ll end up diving into water-cooling & everything else that falls in between.

At the time the 980/970 were announced at the GAME24 event, the 780, 780 Ti & 770 got discontinued. Finding a 780 6GB model became a bit of a challenge; for weeks the only available model was the one from ASUS, more precisely called STRIX. As gamers started looking for the new Maxwell based cards, Octane Render users were waiting for news on how these cards were going to perform in CUDA applications.

Based on what we’d seen from first generation Maxwell cards (like the 750 & 750 Ti released earlier – 70W cards performed almost the same as Fermi based 160W 460s – impressive performance per W!) we were expecting to see good numbers from those 980/970s too.

While some can wait a while for new “ground breaking technologies”, others need to do work: make an investment & build tools to meet deadlines. That’s what my friend did with his build. Tell us a bit more, Sebastian.

S: I’ve been working on a project with tons of instances of classical architecture elements. A few months ago we got to a point where my single, although still great, watercooled Titan was too slow to render animations in decent time. I knew I could speed it up four times, but that meant money – a new motherboard, three more Titans (even Blacks), more cooling stuff, etc. I am the last person to claim that a faster computer means better renders. There are plenty of pretty talented and hard working Octane Render users who still use single GPU machines and their works are superb. But when you have to render dozens of stills for 24 hours or even longer, the difference in time saved is huge. Three more graphics cards means four times shorter rendering.

T: Actually one of the most attractive things in GPU rendering is expandability. Even if You own a top of the line GPU (let’s say a Titan Black for 1000$), You can easily add a second card & if Your motherboard allows, even a third or fourth. If You do things the old (CPU) way & buy a top of the line SKU (for the same 1000$) You are left with buying more computers, ‘cause there’s no way to simply add an extra processor. You’re left with the considerably more expensive Xeon line (where again 2 CPUs per workstation motherboard is the maximum, or You need a very expensive motherboard and special & expensive CPUs to house 4 of them on a single board).

S: That is why it is so convenient to have a computer that is easily expandable – just insert new cards and you are ready to go. And what is important for Octane Render users – extra CPU power does not mean faster rendering. Surely with a 12- or even 18-core CPU you’ll benefit in other CPU-based software, but we need to remember that Octane Render’s power comes from the GPU. If you own an old CPU (like I do) and you are pretty happy with the responsiveness of your system there is no need to throw your money away. You’ll hardly see any difference in Octane Render performance whether you use a 2nd or 5th generation Core i7 (or even i5) processor. Our golden rule is to focus on easy expandability of GPUs.

T: So let’s talk more about them. You bought three 780s, not Titans, nor 980s. Why?

S: In short – for about half the price you get similar performance to a Titan, but when you watercool them, oh Man, these babies show their true power (a 45% speed-up compared to 4 of them on air with no extra space for optimal cooling). I’ll talk about some benchmarks later on.

6GB is crucial for me. By the time I was planning the expansion of my rig they were easy to get. I thought I would wait for the GTX 980 and see how it performs in Octane Render. There had been rumours that it might perform close to the 780 Ti and draw significantly less power than the 700 series. The day after nVidia’s GAME24 event some guys on overclock.net posted that they had bought 900 series cards. I asked them immediately for help in benchmarking Octane Render. The preliminary scores indicated that 900 series cards were about 35% (or even more) slower than a GTX 780 or GTX Titan (the GTX 980 scored 4.5 ms/s vs 6.5-7.8 ms/s for a GTX 780 or GTX Titan).

T: With the Octane Render 2.13 release OTOY managed to rewrite a portion of the code to take advantage of the new architectural changes & it seems the 980 falls somewhere in between the 780 & 780 Ti in terms of rendering performance. But still, the GTX 980 (at least for now) is “only” a 4GB card.

You know what that reminds me of? A change in architecture before. The GTX 580, based on the Fermi architecture, was the “King of the Hill” for a long time – 3GB of vRAM, reasonable performance, not as hot as the first generation Fermi based GTX 480. Then came the GTX 680 4GB. A lot of guys were upset & even angry that the 680 was not performing as well as the 580.. when the reason was not so much in the software itself.. but in the green giant’s marketing – nVidia is making the same move now.

S: I remember those days. I had 2x 580s, then 2x 680 4GB. I was a bit surprised the 580s were faster. Still being focused on extra vRAM I sold the 680s immediately when the Titan came out. So history repeats itself, does it?

T: Well, the 580 was a GF110 based card, while the 680 – GK104. Now it’s the same with the transition from Kepler to Maxwell: nVidia is again moving from the high performing GK110 of the 780(Ti)/Titan(Black) to the GM204 equipped 980/970s, keeping something up their sleeve. So with the lower-end chip of a new architecture they manage to get on par with or even outperform the high-end chip of the older architecture – that’s impressive!

But let’s get back to our topic:

S: Yes. Unfortunately right after the launch of the 900 series the 780s 6GB were sold out. I literally couldn’t get a single piece. If I didn’t need 6GB of vRAM I would’ve bought a 780 Ti. After a week I received a phone call that there were 5 Asus Strix 780 6GB at a great price – I took three.

T: Are the 780s with 6GB gone from everywhere? I don’t see them anymore..

S: Yes. In the meantime I signed up at the EVGA website to be notified when 780s 6GB would be available again and a month ago I was notified about a few. There are also some STRIX left but I noticed sellers increased their prices. Luckily the price of the TITAN Z dropped by half and this beast seems to be attractive for Octane users.

Indeed, the Titan Z price now seems a bit more reasonable. Not so long ago I wrote a small piece (“Second look at Titan Z”) about it & other things guys looking for it might be interested in.

T: Without a doubt, the STRIX (780) is a powerful card. What I don’t like is the cooler. It’s perfect if You have plenty of space, but these custom coolers do not perform very well when cards are put very close to each other in multi GPU rigs.

Correct me if I’m wrong, but there was no 780 6GB with a reference cooler design.

S: I think I haven’t seen any of those. And this remark is of great importance, because all of these non-reference coolers dissipate heat into the case and strongly affect the temperatures of adjacent cards and slow them down, sometimes drastically. So yes – you’d better pay attention to cooling solutions too. If you are strictly focused on air-cooled systems get yourself reference cards with external heat dissipation.

Or go with a slightly relaxed, spaced out configuration, like Jeroen (a.k.a. Rappet from OTOY’s forums) did, leaving extra space between cards in order to keep optimal airflow.

T: Ok, let’s switch gears for a while from GPUs to other system components. Ever since I started with GPU rendering I haven’t been looking at high-end motherboards, unless I’m building a beast for someone who wants to use such a workstation for CPU intensive work too. Myself, I’m pretty much happy with an OC’ed 3570K as it offers great value & it’s enough to feed the GPUs while I’m doing other things. What CPU/motherboard have You chosen for this build and why?

S: Honestly, that’s the starting point – a motherboard. Imagine, I got my Asus P8P67 WS for 50$ at an auction.

Yes, I was lucky, but even today there are motherboards based on old chipsets, like P67 for 150$, that can handle 4 graphics cards. There are also the newer Gigabyte X99-UD4, Z97-SOC or even GA-Z87X-OC Force for half the price (about 250$) of a high-end gaming or workstation motherboard. If you are happy with your old CPU and want to have more GPUs – there are two ways to expand: dual GPU cards, like the Titan Z, or older but still great motherboards capable of handling from 4 up to 7 cards.

T: As GPU cards are backward compatible with even older motherboards there is little to no need to grab a top of the line & probably expensive motherboard. If You’re looking to have an overall balanced rig, not only crafted for GPU power, You can do that, but You’re not forced to – unlike with CPUs, where every top of the line CPU upgrade means You need to buy a new motherboard.

Any recommendations of what to look for? Hard learned mistakes made?

S: Three years ago my mistake was to buy a good motherboard, but one with only two PCI Express slots. So if you think about further development of your Octane Render computer, think of how much effort, cost and time it will take to replace your old motherboard (and CPU due to a new socket) & even RAM (like going from DDR3 to DDR4) and reinstall the system. My current CPU is still the same old i7 2600K which runs cool at 4.5GHz. Sure, you can get yourself a Rampage V Extreme and a 5960X, but isn’t it better to save some money and get yourself a high-end high-power PSU?

T: I agree with You here. What about PSUs?

S: I noticed there are many questions on the forum in this matter: ‘Will my 600W or 1000W or… be able to power this and that’. If a 1200/1500W PSU costs you 100-150$ more, isn’t it worth having one for many years and not bothering whether extra GPUs will run on it? Additionally there is a nice feature of such high power PSUs – they run silent at lower load, and sometimes even passive up to 40-60% power draw.

T: What do You use for the build we are talking about?

S: If you look at my build, everything apart from the 3x GTX 780s, some additional cooling stuff and the new case is old – even the Enermax 1350W with a 1500W peak; it’s not well sleeved, but it works.

T: Nice sleeving helps, not only for visual appeal, but also for cable routing – trying to deal with all the mess in a case might be a challenge if You have stiff cables. But in the end what matters most is the quality of the supply & the power it delivers. Have You any idea about power usage in Octane Render? I’ve read on forums that guys claimed lower usage under load & this surprised me a bit!

S: Recently I did some measurements of how power hungry a GTX 780 or Titan is. And this is an important part, because there are different opinions on this matter. First of all, Octane does not draw as much power out of your GPUs as games or heavy stress tests, FurMark let’s say. When I turn on my computer it starts drawing 190 W, while loading Windows 400-500 W, and it drops to 170 W at idle. While rendering in Octane on one GTX 780 the power draw is 320 W (or 350 W overclocked). Remember, this is for the entire system, not a single GPU. MSI Afterburner indicates 68% power usage on the GTX 780 (77% when overclocked). And these measurements seem to be correct, because when I turn on FurMark the power usage is 100% and the system draws 440 W (at 110% it is 470 W). So a GTX 780 draws 150-180 W in Octane Render (a Titan about 180-230 W, with power usage at 84-97%) and even 300 W under heavy stress tests. The difference is huge.

T: So under load Octane Render uses less power, but from time to time when the system is stressed You can observe higher draw. But then what about the PSU, what to choose?

S: In short – a 600 W power supply is capable of handling ONE GPU, and while rendering in Octane Render you’ll have 200-250 watts reserved for peaks (as they happen). My entire system with 3x GTX 780 and one Titan draws 955 W in Octane (overclocked), but my power meter shows that the maximum was 1500 W. I don’t know when it happened, but it did. And this is why we strongly recommend using high-end high-power supplies. If you plan to have 4 GPUs get yourself a 1500W PSU – although Octane will use 1000W, your system will remain stable.

My simple rule for choosing the right power supply is:

PSU power = (number of GPUs + 1)*300W. Hence:

1 GPU – 600W PSU
2 GPUs – 900W PSU
3 GPUs – 1200W PSU
4 GPUs – 1500W PSU

While Octane Render power consumption itself can be estimated by

Octane power consumption (W) = (number of GPUs + 1) * 200W

So for one Titan Z a 900W PSU is fine, for two – 1500W, for four you need at least 2x 1200W PSUs.

So those who have extra room for an additional power supply in their case are ready to expand even to 4 Titan Zs.
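To make those rules of thumb easy to reuse, here is a minimal Python sketch of both formulas (the constants come straight from the rules above; treat it as a back-of-the-envelope helper, not an exact sizing tool):

```python
def recommended_psu_watts(num_gpus: int) -> int:
    """PSU sizing rule of thumb: (number of GPUs + 1) * 300 W."""
    return (num_gpus + 1) * 300

def estimated_octane_draw_watts(num_gpus: int) -> int:
    """Rough whole-system draw while rendering in Octane: (number of GPUs + 1) * 200 W."""
    return (num_gpus + 1) * 200

for gpus in range(1, 5):
    print(f"{gpus} GPU(s): ~{estimated_octane_draw_watts(gpus)} W in Octane, "
          f"recommended PSU: {recommended_psu_watts(gpus)} W")
```

Note that a dual GPU card like the Titan Z counts as two GPUs in these formulas.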

*T: Or You can pick something like a 2000W PSU from SuperFlower.*

S: And one more very important remark – one of the Octane Render forum users reported a 1500W PSU burnout. My recommendation – keep the backside of the PSU cool, because all of its electronic components are mounted there. Don’t let any graphics card get close to the PSU, affecting its temperature, and keep your case, even a large one, always well ventilated.

T: In some cases the PSU takes air from outside (the bottom of the case). Having this area filtered & regularly vacuum cleaned is a must if You want to prevent surprises.

Last but not least, when You’re buying a power supply, think about whether You wish to save 100-200$ while putting Your entire system at risk. If a PSU fails, it can more than likely damage other components too & if You have a multi GPU setup.. that might be costly..

So, the power supply topic is more or less clear now. There aren’t that many cases capable of housing a pair of PSUs. I see You got a monstrous, though very solid one. How did You come to it?

S: 3 years ago when I got 2x GTX 580 and a Lian-Li X2000 server case (I thought that such an expensive case would be the best, but I was wrong) I had no idea how loud and hot the system would be. After I ran Octane Render I immediately started looking for any solution to cool my GPUs down and make them less noisy.

T: I guess water-cooling came onto the radar?

S: Water-cooling was so new to me, but I decided to give it a shot. Then it turned out that the server case couldn’t handle any bigger radiator, especially a 560mm one. I had to put it outside the case, but it worked. I regretted buying that case a lot and started looking for something that could handle water-cooling gear.

T: Housing a 560 (4x 140mm) isn’t so easy (as far as I know there are only a few products once You start playing at this scale).

S: And then ‘IT’ appeared in all its glory: CaseLabs. When the guy on YouTube who founded ‘Singularity Computers’ showed the TH10 I was stunned. The 2.3mm thick alloy/aluminium construction (compared to others’ 1 mm) makes it military class. It is entirely modular, can handle 2 to 6 (or more!) rads, even four power supplies, has great space for cable management, no cheap plastic parts. And the only thing that limits you is your imagination of how to place your PC parts inside and… your wallet. The latter made me give it up, especially since shipment to Europe and extra duty bump the 500-800$ price up by an additional 300-400$.

T: That’s 1k$ for a case. Far from cheap (& it seems not in line with our value topic at all). However, for building a multi GPU rig with ample radiator capacity You don’t have that much to choose from & in the end the possibility to put everything inside without leaving a mess is a great bonus that You will not have with smaller cases (especially if You have small kids or curious cats).

S: When the time came for three extra cards it turned out I needed an extra 480/560 radiator (one radiator was already lying on the floor). And the CaseLabs topic returned. I just browsed an auction portal and there was one guy selling it. Even without the shipping costs it was fairly expensive. But I could deduct some taxes and… asked my wife if my death would be quick and easy if I bought it.

T: How did she respond to this?

S: She looked at me and said ’If the rads disappear from the floor just don’t ask me, get it!’ Oh Man, when I received the box and saw how every single piece was well packed, how well it is powder coated, at first I thought I was touching leather, not metal… I knew it would be a great case, but what I experienced was way beyond my expectations.

T: Yeah, these cases are exceptional. As they say, You get what You pay for. Being a family owned business, this is not some sort of off-the-shelf product. I’ve read somewhere that they make every case only after the order is placed – there are not many products today You can buy made with such care & thus the price is fair. (Not so long ago tomsHardware wrote a CaseLabs Q&A AMA recap – read it if interested.)

So, from what we’ve already talked about, I guess Your original idea wasn’t to run these STRIX cards on air?

S: The STRIX dissipates heat into the case (these are non-reference cooler cards), so if you want to have 4 of them on air they will strongly affect each other, reaching the 80-85C limit (depending on your settings), operating at high RPM, making lots of noise and, most importantly, reducing their clocks to about 800 MHz. That means you lose a lot of their potential (we’ll talk about it in more detail later on). So yes, I could watercool them or… use risers and custom “mining like” solutions to give them some air to breathe.

Going on air wide open like the Guys from Polder Animation did – see the small piece about their rig.

As I already had some watercooling parts – a pump, radiator, fittings, reservoir – watercooling was the right way for me.

T: What water blocks have You used for them & why?

S: I already had an XXL EK block on my Titan. There are plenty of great waterblocks from companies like Bitspower, Aquacomputer, XSPC, Koolance… And these come at competitive prices, about 100 EUR per piece.

The most important thing is that the waterblock has to be compatible with your graphics card. EK had such a model. The only pain is that I had to do some custom connection with my ‘old’ Titan.

Before I bought the 3-way parallel bridge (see the photo) I emailed the EKWB company asking if a 4-way bridge would fit. Unfortunately not. It’s always good to ask. I ordered the waterblocks (with some more cooling stuff) directly from EK and after a week the package arrived (see the photo). I also took backplates; besides aesthetics, they cool the vRAM and chip from the other side of the PCB. Quality (and looks) – top-notch.

T: As this beast wasn’t built for looks, what were You expecting in terms of performance with the increased cooling capacity?

S: Usually I do simple calculations: take the number of CUDA cores, multiply it by the base core clock, multiply it by the memory clock, divide by the price and then compare the resulting score to a same-technology-based graphics card, e.g., GTX 780 to GTX Titan (Black).

Let me give you an example. Let’s consider the GTX 780 6GB (500$), Titan Black (1000$), and TITAN Z (whose price dropped lately to 1500$).

GTX 780: 2304 * 0.889 * 6 / 500 = 24.58
Titan Black: 2880 * 0.889 * 7 / 1000 = 17.92
Titan Z: 2 * 2880 * 0.705 * 7 / 1500 = 18.95

So looking at the performance to price ratio (24.58/17.92 = 1.37) you get 37% more power per dollar from the GTX 780 6GB than from the Titan Black. Surely these are rough approximations, but they give you a general idea of how attractive the GTX 780 6GB is.
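For reference, here is the same back-of-the-envelope score as a tiny Python snippet (the numbers are taken from the list above; core clock in GHz, memory clock as Sebastian uses it, price in $ – purely illustrative):

```python
def value_score(cuda_cores: int, core_clock_ghz: float, mem_clock: float, price_usd: float) -> float:
    """Rough performance-per-dollar score: cores * core clock * memory clock / price."""
    return cuda_cores * core_clock_ghz * mem_clock / price_usd

cards = {
    "GTX 780 6GB":     value_score(2304, 0.889, 6, 500),
    "GTX Titan Black":  value_score(2880, 0.889, 7, 1000),
    "GTX Titan Z":      value_score(2 * 2880, 0.705, 7, 1500),
}
for name, score in cards.items():
    print(f"{name}: {score:.2f}")  # 24.58, 17.92, 18.95 respectively
```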

Now we know that the Titan Z operates not at 705 MHz but at an 800-900 MHz core clock (depending on the available airflow inside Your case) on stock air cooling, which makes it an attractive choice (connected to a single PCIe slot & occupying only three slots, leaving more room for further build expansion compared to 2x Titans).

But this is not the end of the story. If you have one GPU there is no need to go with water-cooling. You may eventually get about a 10-20% rendering speed improvement, much lower temps (40C) and silence of course.

T: & considering the cost of a full custom loop it’s not worth it.. unless You go for some hybrid solution by Corsair and pair an AIO with the HG N780 bracket. Then You can get it “relatively cheap” & silent (if that’s Your plan) for a single GPU.

Let’s continue, Sebastian.

S: Today graphics cards are rather quiet and relatively cool on air (up to 70C), provided that they are at least one slot apart from each other. So even if you have two graphics cards in a well ventilated case, air cooling seems to be optimal. But when it comes to 4x graphics cards that sit very close to each other.. it’s a different story.

T: What do You gain from water-cooling those already great value 780s?

S: The question is rather what you lose on air, and you lose a lot – even 40-45% of the potential power of your graphics cards.

T: How is that even possible? (I mean, a 10-20% boost is somewhat expected, but 40%+ ???)

S: OK. Let me stress it – we are talking about four graphics cards that are placed next to each other. Let’s talk about real numbers. A while ago I helped render a scene for an animation. I used my machine (3x 780 + Titan) and my friend used 4x Titans on air. I scored 20 ms/s in PT, his score was 15 ms/s in PT, same settings. So my computer was 33% faster (or his was 25% slower, depending on how you look at it). But 4x Titans have 10,752 CUDA cores while my rig has 9600. So if I were on stock air cooling he should have had a 12% faster renderer (10752/9600 = 1.12). That means watercooling gave me a 49.3% speed-up ((20/9600)/(15/10752) = 1.493), or, to put it another way, stock air cooling stole 2 Titans out of 6. The OC was done at stock voltage: 1.16V on the STRIX and 1.2V on the Titan – 1200MHz on the core and 7000MHz on the memory.
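To make that comparison explicit, here is the same arithmetic as a quick Python check (figures from the paragraph above):

```python
# Normalise each rig's Octane throughput (ms/s in PT) by its CUDA core count.
water_per_core = 20 / 9600    # 3x GTX 780 + Titan, watercooled & overclocked
air_per_core   = 15 / 10752   # 4x Titan, stock air cooling
print(f"per-core advantage of the watercooled rig: {water_per_core / air_per_core:.3f}x")  # ~1.493x
```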

T: What about temperatures?

S: And temps… 40C. But what if I tell you that the STRIX goes to 1300MHz easily at 1.2V?

T: Actually it would be interesting to see how far You could stretch the OC on those cards. The question remains how stable that would be in the long run. I guess You chose a fairly safe OC on Your rig too, considering it’s a machine to do work, not to break records.

S: At stock volts (1.2V) the ASUS STRIX goes to about 1320MHz while rendering in Octane. But I never push it that high. Actually I am at stock clocks for everyday usage (1000MHz on water). 1250MHz on the core and 7000MHz on the memory (at 1.16V) seems to be all I need for rendering and I never let temps go above 43-45C on the GPUs, while the water temp is 35C. Stability is much more important for me, especially when rendering for long hours. It’s not a gaming rig, nor an FPS contest. I saw some BIOS mods with a crazy 1600MHz at 1.6V. But, you know, for how long?

T: That’s a good point (“speed kills” they say =). Another question about overclocking – how much of that speedup comes from overclocking the GPU core and how much from OCing the memory?

S: I did some measurements and, using the ordinary least squares method, estimated a simple logarithmic model (screenshot from gretl). It turns out that about ¾ of the overclocking gain comes from the core clock and ¼ from the memory. So first try to bump up the core, then the memory.

T: Do You think that could be more or less true for all cards? Or is this proportion specific to Your GPU only?

S: The estimation of the model parameters was done on data from a GTX Titan and a 780. There might be some discrepancies between various chips, but I believe the rule – ¾ from the clock and ¼ from the memory – should hold.
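Sebastian’s gretl model itself isn’t reproduced here, but conceptually it is just an ordinary least squares fit of log render speed against log core and memory clocks. A minimal sketch of that idea (the data points below are made up for illustration, not Sebastian’s measurements):

```python
import numpy as np

# Hypothetical measurements: (core MHz, memory MHz, Octane score in ms/s).
samples = np.array([
    [1000, 6000, 6.5],
    [1100, 6000, 7.0],
    [1200, 6000, 7.4],
    [1200, 7000, 7.7],
    [1250, 7000, 7.9],
])

# Model: log(speed) = b0 + b_core*log(core) + b_mem*log(mem), fitted by least squares.
X = np.column_stack([np.ones(len(samples)), np.log(samples[:, 0]), np.log(samples[:, 1])])
y = np.log(samples[:, 2])
b0, b_core, b_mem = np.linalg.lstsq(X, y, rcond=None)[0]
print(f"core coefficient: {b_core:.2f}, memory coefficient: {b_mem:.2f}")
# Sebastian's roughly 3/4 vs 1/4 split corresponds to the core coefficient
# coming out about three times larger than the memory one in his Titan/780 data.
```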

T: Then again some might ask: & what about the cost of water-cooling?

S: If water-cooling gives you the speedup of extra cards and costs you less than those cards, why not go for it?

T: Agreed, 100%.

S: All you need are radiators, waterblocks, a pump, a reservoir and fittings. This shouldn’t cost more than 1k$. And some fun. Once you go for it, you’ll never want to go back to air – like going from HDD to SSD.

T: For me performance is important, but that’s not the only thing I’d look for when building a water-cooled GPU rendering rig. Silent operation under full load is another thing (especially if You don’t have a dedicated server room) for which I would consider water-cooling worthwhile. How important was that for You?

S: For me – very important. I cannot imagine working in a noisy environment the entire day. But, you know, maybe I’m the only one who cares.

T: The comfort of the workspace is not the first priority for some, but for others it might be even more important than performance itself.

Let’s move on to other components (as we still have quite a few to cover).

If I remember correctly, Extreme OverClockers made an extensive test comparing multiple 360 radiators & the conclusion was that the 11 top performing rads differ by only 6W in terms of heat dissipation & in this context 6W might be the difference between using different pumps. So it’s questionable whether the choice of radiator matters that much, but You got something special in Your rig – tell us a bit more:

S: My first aim was the AlphaCool 560 Monsta, but in the meantime I got my CaseLabs and it turned out it has space for a 480mm radiator on its top. So you know – a 480 Monsta again, or EK, or XSPC, whatever.

And then I found this copper beauty. How could they – I mean AquaComputer – produce such a thing that teases you so much? Guided by tons of research I knew that its performance might not be of such extreme importance. A couple of degrees will not make my Octane Render workstation explode. I could have saved 50-60 EUR and gotten something black, like my old Phobya. I could have. And it confirms we (I mean me) buy with our eyes.

T: OK, but it is still the best, coolest running radiator in its category under low spinning fans – that’s a fact.

S: Come on, Man, we both know that this time the look was the most important thing. I just did it. When the package arrived I sat comfortably on a sofa and slowly opened the box. I took so much air into my lungs and then let it go slowly saying ‘ooooooooooo…’. Shiny copper and stainless steel…

But being very serious – yes, its performance is exceptional! When I put 8 be quiet! fans on it in push-pull, temps went down by 5C and the water temperature was reduced from 40C to 35C. The internal copper fins and pipes are extremely well optimized in terms of airflow. Imagine, I turned off the 4 lower fans and left only the 4 upper fans spinning and pushing air through the rad. Despite the be quiet! BL062 fans having 1.6 mmH2O of static pressure (which is not that high compared to the 2.6 mmH2O of the Noctua NF-F12) the 4 lower fans were still spinning! And, Man, what silence.

T: Any recommendations from what You’ve learned so far about radiators?

S: A general rule is that you need at least one radiator unit (120mm or 140mm) per GPU, one for the CPU, plus one spare. By the time I received the Aquacomputer radiator I had only one 560 rad – fine for two GPUs, but no more than that.

T: Check! Now from radiators let’s go to fans. I have always been a fan of the Noctua brand. Since I found them I don’t look anywhere else, but I’m also curious how be quiet! perform in comparison – any observations, as You had both on Your rig while testing?

S: Before I got the Noctuas (NF-F12) I did tons of research, looking not only at speed and noise but mainly at static pressure – the higher the better for a radiator. Because I already had be quiet! fans on the 560 radiator I was leaning towards the same brand – I just love these fans, they are so quiet. But Noctua fans get high scores among various users and the only thing some are annoyed with is their colour.

T: Yeah, the choice of colour scheme is interesting, but I see why they did it (purely from a marketing standpoint). However You can find the REDUX line in grey & the Industrial line in black (though for me colour wasn’t an issue).

S: Personally I like them too and performance is the key. So I gave them a shot and… I understood that their high static pressure (2.6 mmH2O, versus 1.6 mmH2O of the be quiet! 120mm) is the reason why the be quiet! fans are so silent – Noctua simply pushes air harder when applied to a radiator. I also got some recommendations that there are industrial fans out there – Sanyo Denki. I bought one for a test, but it was a 2200 RPM unit with crazy static pressure – 3.05 mmH2O (9S1212F401). It blew so much air that one could use it in summer to cool oneself. Anyway its quality was great, built like a tank, but – you know – too much for me. And I could not get the 1500 RPM version on short notice, which has almost the same specs as the be quiet! fan – the Silent Wings 2 120mm (BL062). Those are definitely the quietest fans I’ve tested.

T: I’ve read some users are unhappy with Noctuas at higher RPMs. I’m personally using them with the included U.L.N.A. (Ultra Low Noise Adapter) to reduce their speed down to 600 RPM. The higher static pressure still keeps enough airflow, but the noise signature is minimised (though if You’re looking for higher airflow, that might not be a solution).

S: So my short summary on fans:
*Noctua NF-F12 performs great, has great packaging, exceptional cable sleeving and extenders, high static pressure, but is too loud for me at 1500 RPM when applied to a radiator.
*be quiet! Silent Wings 2 are well packed, also have great sleeving, rubber rings on both sides, two different anti-vibration mounting systems (one of them lets you set 0 or 1mm distance from the case) and look awesome. In a push-pull setup they perform extremely well and their acoustics are exceptional. I simply love them.
*SanyoDenki fans are built like a tank, you feel their quality in your hands. You can get, e.g., the 9S1212 series in versions from 1500 up to 2700 RPM (if this is not enough for you – alloy framed for above 100 EUR; want 5000 RPM? – no problem; splash, oil and dust proof…). But there is one thing – as these are industrial fans they come without 3-pin connectors (and no sleeving).

I remember I had NoiseBlockers – those are also great fans. I believe one can find her/his best fan out there, but some experimentation might help a lot. If you can return a fan to a store – get 2-3 different models and test them on your own.

T: That’s probably the best way: try before You buy, or simply get a few & send back the rest, keeping the best one. As always, taste & preference matter. Some might find certain products intrusive (acoustically) while others might not hear them at all, as they are used to different noise levels.

Looking at Your last update on the build log in the OTOY forums I see You finally went all the way with be quiet!. What have You managed to achieve in terms of OC, overall performance & temperatures, in addition to whisper quiet operation?

S: As said earlier – silence and very low temps. At idle I set all fans to 5V and hear literally nothing, sitting at +1C or +2C above ambient temperature (the 2x D5 pumps are set to 40% of their max speed, high enough to cause a cyclone in the reservoir).

Under full load the water temperature reaches 35C (22-24C ambient) and the overclocked GPUs stay under 40C. I already have 16 be quiet! fans – 8x 120mm in push-pull on the Aquacomputer 480mm radiator, 3x 120mm as internal fans – two of them blowing air onto the motherboard, cooling the PSU and lower chamber at the same time, and one as exhaust. 4x 140mm are mounted on the 560mm Phobya rad. There is a lot of (if not excessive) positive pressure in my case – more fans are pumping air into the case than out of it. It prevents dust being sucked into the case through unfiltered holes and keeps it clean. Actually I am awaiting DemciFlex filters.

There is some future work planned: 4 extra 140mm fans for the 560 radiator in push, more extension fittings for the loop, cable sleeving. Having 4 GPUs seems to be enough, but even now I am thinking about the possibility of going with 8 of them. And yes – I have enough room in the case to fit 2x 240 radiators, an extra PSU on an optional mount and 2x 560 rads in an (optional) pedestal. My rendering speed is about 8 times faster than 3 years ago, so we’ll see what future technology brings. Anyway, the only thing I’ll be changing for sure are the GPUs.

T: I think we could talk about any of these topics in great detail, dedicating an entire article to a single piece of hardware. But as always, all good things have a beginning & an end. It’s been a great time chatting with You, Sebastian. Thank You for Your time & immense input explaining some concepts for the readers. Considering the fact there is a lot more to talk about, I would be glad to have more articles with You in the future.

The idea for 2015 is to focus on each group of parts: GPUs, PSUs, cooling gear, cases, etc. Having everything in one place is just too much to go deep into the differences & minor things that matter – that’s the reason why this article is so long. I just hope readers find it interesting & hopefully useful. For me there were so many new things & I bet You haven’t said everything You know anyway.. Let’s meet again to cover a different build or any changes You make to this one!

S: Tom, it is my pleasure. Thank you very much!

T: Cheers, Sebastian.


CORSAIR’s HG N780 Bracket to Couple GTX 7xx GPUs & AIO Liquid Coolers


The HG N780 is one of the most interesting things I’ve seen presented at CES 2015 so far. It is Corsair’s bracket for reference design GTX 7xx Kepler based GPUs (including the 770, 780, 780 Ti, Titan & Titan Black).

 

It’s not the first such accessory on the market (& Corsair actually modified their own older version made for AMD cards) to fit nVidia GPUs (the one for Maxwell based GPUs, a.k.a. 980 & 970, is coming too). But!.. it’s the first one done right!

Now let me explain. We had units like the G10 from NZXT, but using three slots is far from a perfect choice if You consider putting in more than one GPU (or something else into the PCIe slot below). Looking from the side it seems Corsair managed to make it a proper dual slot piece!

 

Reusing the original fan (from Your reference design card) to cool the VRAM & VRMs (the entire bracket acts as a huge aluminium heatsink) is a nice touch too. Some say it’s not that silent, but most heat comes from Your GPU chip itself (which the AIO cooler deals with, ideally removing it from Your case) & thus You can lower the speed of this blower type fan to minimise the noise output.

 

The only issue I see is that the AIO cooler pipes might prevent the whole package from actually fitting in a dual slot space, as the pipes near the pump unit might not be able to turn sideways enough (rotating the pump might work, but.. I guess we need to try it ourselves to know for sure =)

 

Having owned a Titan Black for a while, I wished to make a small loop with a 280mm radiator in the front of my mATX case to cool the GPU (I even got a pump/res combo from XSPC) but never triggered the build. Mainly because in my eyes a full custom loop for a single GPU is a waste of money & a pricy one, when You consider a full cover water block, pump & res, fittings & rads would cost at least 300$.

That’s where Corsair’s solution comes to the rescue:

* it’s relatively cheap (~40$ for the bracket & 65-130$ for the AIO unit),
* easy to set up & maintain,
* it keeps Your GPU temperatures low,
* allows plenty of OC (the increased performance “pays for” the investment),
* & last but not least, it looks good!

Check the promotional shot from Corsair (inside their own Air 240 cube case).

 

Also from CES 2015: a Single Slot GPU from EVGA (Update: sadly it turned out to be the k|ngp|n edition..)


Going on Air Wide Open – Custom rig for Octane Render by Polder Animation

Originally I was planning to write a single piece about air-cooling overall, merging different practices, but because my friends gave me so much info I’m providing all of it untouched as separate articles.

This one is by Jean-Paul from Polder Animation. He talks through why & how this beautiful machine was built & without any need to say anything more I simply leave You to enjoy reading =)

 

The reason we decided to go for this setup is that we wanted to make a rig with as many video cards as possible, so the limiting factor was the maximum number of usable PCIe slots on available motherboards.

We thought about a water cooled rig as well, but we decided to go for the air cooled version because it would be less prone to leakage and maintenance. That of course meant that we had to construct our own casing, because none of the off-the-shelf cases supported five dual-slot cards with enough breathing room in between them. I used 2 old cases I had lying around, sawed one to pieces and used those as well as possible to construct the outside support rack for the cards.

A few things to consider though before starting your own project like this:

1) When choosing the motherboard check if all PCIe slots can be used simultaneously. We went for the ASUS Maximus VI Extreme, which has five PCIe 16x slots and a single PCIe 4x slot, but as it turned out later, only 4 of the PCIe 16x slots could be used at the same time. Luckily the PCIe 4x slot could be used as well and the loss of bandwidth does not impact the render times.

2) Make sure you choose a good power supply. Although the cards we used (GTX 780 Ti OC) only have a TDP of 250W, when all 5 cards power up at the same time and start to overclock themselves the socket power peak measures around 2200W. Although this is only for a very short time, probably less than a second, and after that the system only uses around 1200 watts while rendering, you need a power supply that can handle the peak demand. We are using the LEPA G1600-MA-EU which has a 1700 watt peak (1888 watts from the socket after efficiency conversion). But I recommend going as high as possible within your price range, as long as it has high grade components.

3) The most important parts for this rig are the PCIe riser cards. I bought non-shielded PCIe 16x risers (20 euros a piece) designed for cheap bitcoin mining. But with these we had a very rough start. At first we could not get all cards to work: the system would only recognise a few of them and randomly lose them while rendering. After some research I realised that the risers are very prone to electrical interference, and clumping 5 of them together in close proximity does NOT help. So I decided to wrap them in tin foil to shield them myself. This was a very tedious job where I wrapped each strand (3 per riser) in a 4-layer tin foil casing. After this all cards worked, but the system still sometimes drops 1 card; a simple reboot and fiddling with the riser fixes it, but it’s not perfect. In addition I lately surrounded each riser with an anti-static bag and spaced them out more by adding folded cardboard in between. Since then we have not had any problems, but it’s not scientifically proven and does not win the prettiness award.

4) Another thing we had to do to get all the cards recognised was to set the PCIe generation back to 1 instead of 3; the higher speeds are more susceptible to interference (my conclusion, not a proven fact). Luckily at 4 lanes and generation 1 PCIe the transfer speed is still 1GB/s, so full memory loading still only takes 3 seconds max, which is more than acceptable considering the render times (see the quick check below). Although these calculations are theoretical and may vary depending on the quality of the risers, it is my opinion that the loss of bandwidth for loading scene data is negligible considering the render times. I compared the OBJ load times of 2 setups, the first being the machine this article is about (GTX 780 Ti @ gen1 and 4 lanes), the second a setup with a GTX 580 (@ gen2 and 16 lanes). I enabled only one card in each system; the difference in bandwidth should be 8x in favour of the GTX 580. When I loaded an OBJ file of 180MB (GPU memory size) it took 3.5 seconds on the GTX 580 and 7 seconds on the GTX 780 Ti, however when I loaded a 300MB OBJ, it took 11 seconds on the GTX 580 and only 9 on the GTX 780 Ti. This tells me that the load times are more dependent on other things, probably network, HDD, memory and/or CPU. And of course this is a comparison of 2 different generations of GPU chips and systems, connected via different length network cables (all data resides on a file server), so far from ideal, but it is a real world situation.
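A quick back-of-the-envelope check of those bandwidth figures (a sketch only; it uses the usual ~250 MB/s per lane for PCIe gen1 and assumes throughput roughly doubles per generation):

```python
def pcie_bandwidth_gb_s(generation: int, lanes: int) -> float:
    """Approximate usable PCIe bandwidth in GB/s (~0.25 GB/s per lane at gen1, roughly doubling per generation)."""
    return 0.25 * (2 ** (generation - 1)) * lanes

def transfer_time_s(data_gb: float, generation: int, lanes: int) -> float:
    """Theoretical time to push a given amount of scene data over the PCIe link."""
    return data_gb / pcie_bandwidth_gb_s(generation, lanes)

print(pcie_bandwidth_gb_s(1, 4))    # ~1.0 GB/s, as quoted above
print(transfer_time_s(3.0, 1, 4))   # ~3 s to fill a 3 GB card at gen1 x4
print(pcie_bandwidth_gb_s(2, 16))   # ~8.0 GB/s for the GTX 580 setup (gen2 x16)
```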

As far as the GPU temperatures go, this air cooled setup is more than sufficient. Depending on the time of year and room temperatures, all cards’ temperatures vary from 45 degrees Celsius to 65 after hours of non-stop full load rendering. A side note though: due to the number of risers there is no room for a bigger CPU cooler, and combined with the severely impaired airflow in the casing, the CPU temperatures are less than ideal, running up to almost 100 degrees Celsius after longer periods of time at full load. This is something to take into account when designing your own case. And needless to say, the amount of noise this setup produces is quite a lot.

If I had to do it all over again, and we probably will somewhere in the future for an extra render system, I would go for professionally shielded risers like the 3M shielded twin axial cables. But at roughly 90 US dollars a piece it is worth considering a water-cooled option where the cards can be plugged directly into the motherboard.

Kind regards,

Jean-Paul Tossings
http://www.polderanimation.com

 

So that’s it Guys! Again, I would like to thank Jean-Paul for sharing some thoughts, tips & findings with us. A truly furious looking piece that You will not see very often!

 


The Best GPU for Octane Render? (intro)

The answer might not be as simple as You think. The word “Best” is kind of subjective on its own, because it’s hard to say what You have in mind. Being more specific – let’s say fastest (in rendering), with the highest amount of vRAM (to fit bigger scenes), offering the best value (performance/$), etc. – would help in giving You proper advice.

But it’s not only the questions about the GPU that matter when You’re trying to make a choice. The environment You’re looking to put Your GPUs in (a case, just a frame, or none at all) is important too. As You can see from the article with Sebastian – Value based build for Octane Render – the difference that cooling makes can be as high as 40%! So when You choose a GPU to buy, think of the rest of Your system too.

If You go crazy with vRAM but, let’s say, Your scenes are small (You’re working with optimized models, small textures & low output resolutions), Your hard earned money might be wasted, as buying two lower vRAM parts for the same budget might give You better performance.

Again, PRO range cards costing a few times more than their GTX line counterparts will not give better (& in some cases will even give worse) performance in GPU rendering, but the advantage is that they usually offer a higher amount of vRAM.

In the end, for the majority, budget is important, as they might look to spend only a certain amount of money. Thus a list (of a chosen metric) in a certain price range might be the information they are looking for. In short, there are a lot of different things to consider before choosing the right card for Your needs, so You can finally call it “the Best”.

In 2015 I’ll try to regularly update this topic with the most relevant GPU models and put them in context of: performance, latest pricing, availability, cooling design, compatibility (case/airflow), software optimizations (which influence the speed of cards based on a certain architecture) & all other aspects that might be useful to know when You’re trying to make a decision!

(more to come..)


Single Slot GTX card?

I’ve been waiting for this for a while (actually since the GTX 580 Hydro Copper from EVGA). Of course there were some models in this form factor, but in terms of performance the GTs came nowhere close, and even the GTX 750 was only entry level (performance wise replacing the GTX 460, but with lower power usage). PRO cards are simply in another price category (without offering any additional performance) & some (read AMD) are not even capable of running CUDA.

As pre-CES 2015 news was flooding social networks I found a tweet from ExtremeRigs – “We are in love with single slot GPUs. …” – linking to a photo on EVGA’s fan page. “It’s practically begging for a full cover waterblock” added EK Water Blocks.

 

Hard to say what chip is under the hood (as it’s not in focus), but I hope that it’s a higher-end model (GM204, as found in the 980/970, or something along those lines). Judging from the 5 outputs on the back side it might be true.

Before this we saw Guys modding Titan cards to single slot, but You need to cut one of the DVI connections in order to change the bracket from dual to single slot. A possibility, though clearly not for everyone, as even disassembling the GPU is going to void Your warranty, not to mention the operation itself..

It’s very nice to see this; I hope this trend is going to continue & soon after we’re going to see more Single Slot options on the market! Especially high-end models, like this dual GPU R9 295x2 from VisionTek (EK water block, custom bracket..).

For some the question might be: what’s the point? Well, the point is that there are workstation motherboards on the market that have 7-8 PCIe slots to fill, but spaced only a single slot apart.

If You want to add 7-8 GPU cards You need risers (ribbon cables) in order to make this happen (as the most common GPU format takes two slots). The problem with ribbon cables is that they are just another point of failure in a rig & don’t always work perfectly, plus You need to modify the case to physically put those cards somewhere (for the majority of users that’s too much of a headache to consider this route).

Bringing a Single Slot GPU option to the market is very welcome & I’m pretty sure, if it’s the right card (compute capable), we’ll see it become very popular among the certain type of users who leverage CUDA applications for simulations, rendering & other types of heavy compute tasks.

 

UPDATE: As happy as I was (after seeing the first news covered above), I became equally sad after some footage from CES 2015 – check this tastefully captured piece by Hardware Canucks about EVGA’s products.

So where did that sadness come from? Well, it’s clear this single slot card is not going to be very useful or attractive to GPU compute guys.


It’s definitely not going to be cheap – released as the k|ngp|n edition GTX 980 & targeted at extreme overclockers (aiming to achieve the highest peaks in benchmarks).


The card is huge, to say the least & more than that it requires an extra 8-pin (a total of 2x 8-pin + 6-pin power connectors). We already know that Octane Render doesn’t draw as much power under load as some benchmarks or games do, so this card is less than ideal for the GPU raytracing crowd.

Unless.. You are going the water-cooling route & trying to squeeze out every little drop of performance (probably risking stability), but in the end this will not bring You good value (considering the fact it’s sort of a niche product, the water block probably isn’t going to be cheap either). So it’s questionable whether anyone is going to choose this card for rendering..

As sad as I could be about this, somewhere deep inside I hope we’ll see Single Slot options down the road. The GTX 980 overall is not the best GPU for Octane Render. The Maxwell architecture so far hasn’t proved to bring any advantage over Kepler (running Octane Render), just maybe slightly lower power usage (before You start to overclock or turn to the non-reference design route). It might be a question of software optimisation, but I’m not expecting this to improve things too much. Overall the GTX 980 is a gaming card & nVidia made some good optimisations in that area, but for compute we are waiting for something new (hopefully coming around GTC 2015).

 

So, that’s it on this topic, Guys. Another & probably even more interesting thing shown at CES 2015 was the GPU bracket from Corsair (HG10 N780) adapted for reference design 7xx cards & if You’re interested feel free to read some thoughts I’ve written on this topic!


Second look at Titan Z for GPU rendering

It’s been a while since nVidia announced & released the most powerful GTX card to date – the dual GPU card called Titan Z. Safe to say it’s not only the most powerful, but also one of the most expensive GTX line cards (sold for ~3000$ just after launch).

With recent releases in the Titan line (Titan, Titan Black & Titan Z) nVidia is raising prices & almost creating a new category of “semi professional cards” targeted as entry level products for compute intensive applications, even if nVidia is mainly marketing the Titan Z as a gaming card & keeping the GeForce GTX (“The ultimate GPU for gamers”) badge.

My guess is that we are going to see three segments in the future line of nVidia GPUs.

(I hope I’m wrong about that.. but again, only time will tell).

How does the Titan Z differ from other dual GPU cards?

When You look at this card compared to other dual GPU cards from the past You might notice one interesting fact: it has the full set of vRAM per GPU. I mean, the GTX 580 came in two SKUs (from a vRAM standpoint): 1.5GB & 3GB, but the GTX 590 (a dual GPU card based on the same GF110 chip) got “only” 3GB (1.5GB per GPU – which is what programs like Octane Render see as usable memory to fit Your scene). The same story was with the GTX 680 (2GB & 4GB) & GTX 690 (4GB “only”, split between two GPUs), so to that extent the Titan Z is unique as it has 12GB (6GB per GPU) – the same amount as the Titan (Black).

Recent Price-Cuts & value

After the recent price cuts You can get a Titan Z for “as low” as ~1500$ & that finally puts it on the list of appealing choices for anyone looking to leverage GPU compute with CUDA powered applications.

Looking from a value perspective, at its original price buying a Titan Z made little to no sense – at least that’s what I thought back then. You can find a small article I wrote just after GTC’14 – “Titan Z in the eyes of Octane Render user”.

I have to admit that I made a small mistake predicting its performance (based on SP figures) from the given data, but at the asking price that doesn’t make too much of a difference anyway.

From what we know now the Titan Z actually performs very close to two Titan Blacks. From early official data it should have been running at ~700 MHz, but it’s usually over 1000 MHz. Only as the temperature rises under load might it start throttling down, effectively settling at final clock speeds between 700-1000 MHz. With both cards (Titan Black & Titan Z) sitting in the system of Jeroen Tapper from Tapperworks (a.k.a. Rappet on the OTOY forums) we can figure out the exact differences between those Titans & how they behave under load. Also, putting a pair of Titan Zs into the same case would give us more precise results for four GPUs.. (I’ll cover the impact of the layout he’s using in his workstations & provide more data on this topic in a separate article coming soon!)

Where does Titan Z fit out of the box?

One of the best examples of using such a card is in SFF (Small Form Factor) builds. Boutique builders like DigitalStorm, Falcon Northwest, etc. showed some nice little GPU powered monsters – take a look:

When You start thinking about the amount of render power these small boxes possess (in terms of GPU computation capacity) – it compares to the hot & loud 4x GTX 580 equipped towers of just a few years ago & it amazes me how small a computer can be built these days.

The DIY route? Well, there are a few mITX motherboards, SFX power supplies & other parts to choose from.. (We’ll get into this topic soon, as I’ve been asked to make a parts list for a very portable, yet as powerful as possible rig for traveling).

Custom cooling for Titan Z

Soon after the release, one of the best known companies specializing in water-cooling, EKWB from Slovenia, brought out a water block for the Titan Z.

Apart from keeping the card cool (if Your loop has enough radiator capacity), in the kit You’ll also find a bracket to shrink the card from three slots down to two.

For those who don’t like to mess with expensive hardware, EVGA started selling its version of the Titan Z called Hydro Copper – for a little extra You get a pre-fitted water block (made by EK too).

What’s the point, You may ask? The obvious answer would be to overclock & still keep the cards cool, but even more than that You’d be able to house four of those cards in a single system (using any 4-way capable motherboard, whether You choose a consumer 1150/1155 socket board coupled with a PLX chip or a high-end X79/X99 board for socket 2011). The ATX/EATX form factor has 8-slot spacing & fitting more than two Titan Zs with stock cooling might become a problem (3 of these cards is the maximum, provided You have a capable case with 9 PCIe expansion slots on the back & a specific motherboard layout).

What if You took 4x Titan Zs, water-cooled them, and even OC’ed them to squeeze out more performance?

Check this teaser, 4x Titan Z powered EpicForce from MAINGEAR:

MAINGEAR, best known for mad gaming oriented systems, is stepping into an entirely different niche. It’s no surprise to any gamer that modern titles (because of driver limitations & game engine design) don’t benefit from anything more than quad-SLI (two Titan Z cards). That’s probably one of the main reasons (together with the excessive price tag) why we haven’t seen such systems before. But times are changing, prices are dropping & boutique builders are starting to have fun with them, building systems for those who can utilize all the power provided.

Stay tuned for more info on this (4x Titan Z) topic!


Architectural 3D Visualiser

Available for remote freelancing as a 3D Visualiser, providing not only graphics, but also coaching on how to leverage bleeding edge GPU technologies to speed up workflows & improve quality, while reducing cost at the same time (sounds too good to be true?).

A wide range of skills covers the full process: sketching, modelling, lighting, texturing, rendering, post-processing, etc.


 

Feel free to contact me if You wish

or head back to the main page

Testing noise levels in OTOY’s Octane Render

Not long before publishing this piece, OTOY, the developer of Octane Render, released the first version of the 2.1x development cycle. Among other welcome changes & new features (like render passes), speed improvements came along too.

“We improved the sampling in the direct lighting and path tracing kernels. In most scenes this should reduce the time to get to a similar level than before. And the sampling is now more optimized in a multi-GPU environment and in network rendering.” – wrote Marcus a.k.a. Abstrax (source: OctaneRender™ Standalone 2.10 via OTOY’s Release Candidate sub forum).

So what are these new optimisations exactly? Here is a bit more info:

“We also implemented a new path termination strategy, which is a bit tricky to explain: In the past the there was this ominous pin “RR probability” which in most cases only worked good when set to 0. With the new algorithm we try to provide a system where you can tweak render speed vs. convergence (how fast noise vanishes). Increasing the value, will cause he kernels to keep paths shorter and spend less time on dark areas (which means they stay noisy longer) but may increase samples/second a lot. Reducing the value will cause kernels trace longer paths in average and spend more time on dark areas. The current default of 0.3 works good in most scenes, but play with it. For example, in two interior test scenes it actually payed off to set the value to 0.5 – 0.6 which increases the samples/second a lot and then just to render more samples.
The direct lighting and path tracing kernels also have a new option “Coherent mode”, which increases the render speed, but causes some “flickering” during the first samples/pixel and should be mainly used for the final rendering and if only if you plan to render 500 samples/pixel or more.

The three changes above together should help you to speed up rendering quite a bit. How much depends on the scene of course.”

Needless to say, speed improvements were more than welcome for the majority of users, as 2.0 brought a bit of a slowdown. It is somewhat understandable considering the addition of new features like displacement, fur, etc. – You can’t expect to get better speeds with the additional complexity (added with features) in the code. But again OTOY’s team worked things out, made some improvements & came back with a solution. How does that work? Well, that’s what I’m going after..

Problem?

Now, after some tweaks, You might need a different amount of samples in order to get rid of noise & that brings up a challenge: how to test things out?

Back then (in the 2.0 days) someone on the OctaneRender forums said (I’m rephrasing) that we need fewer samples to get a cleaner image & thus judging performance from a samples/second perspective isn’t the best way to look at speed.

You can always do that test the old fashioned way: just render a scene, save it out, tweak a thing or two, render, save out.. and after that simply try to compare the images side by side. However that takes an insane amount of time & again I don’t see many people around who would like to invest their time into such a test.

Solution?

Lately I noticed a tweet from BOXX Technologies (a well known boutique builder specialising in custom workstations for VFX & CG work). They were trying to come up with an algorithm to compare render times on CPUs & GPUs running Vray RT.

The methodology used in their tests seemed very interesting to me. They used a plugin called Image Quality Calculator, developed by the University of Manitoba’s Physics and Astronomy Department, that runs on ImageJ, a Java-based image processing program developed at the National Institutes of Health.

Then I thought to myself: what if we could try to use this for testing noise in OctaneRender, to figure out differences between versions or how newly introduced parameters affect the end result? We can’t compare samples or speed directly – but with this tool we can compare the noise output after a certain amount of time & that’s what matters most!

For sure this plugin isn’t perfect & might have its own caveats.. I’m looking forward to coming up with some scenes & tweaking bits here & there, trying to observe their influence on speed, but for now I simply wanted to share this idea with You.
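Just to illustrate the idea (this is not the ImageJ plugin, only a conceptual stand-in I sketched): render the same scene for a fixed amount of time in each Octane version, plus one long “ground truth” render, and measure how far each timed render is from the reference. The file names below are placeholders.

```python
import numpy as np
from PIL import Image

def rms_noise(render_path: str, reference_path: str) -> float:
    """Rough noise metric: RMS pixel difference between a timed render and a converged reference."""
    img = np.asarray(Image.open(render_path).convert("RGB"), dtype=np.float64)
    ref = np.asarray(Image.open(reference_path).convert("RGB"), dtype=np.float64)
    return float(np.sqrt(np.mean((img - ref) ** 2)))

# Example: same scene, 60 s render in each version vs. a very long reference render.
print(rms_noise("octane_2_0_60s.png", "reference_50000spp.png"))
print(rms_noise("octane_2_1_60s.png", "reference_50000spp.png"))
```

The lower the number for the same render time, the faster the noise is clearing up.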

What do You think? (drop a line). Any ideas about how to make the most of this test are more than welcome, Guys, as in the end (hopefully) this will give us (end users) a better understanding of how all these different parameters work together to help us reduce noise.

To be continued..