When knowing too much holds you back
----------
TLDR Version:
I'm contemplating building a new system, but am unsure of what to get: i5 or i7, LGA1156 or LGA1366
----------

Other than the introduction of the 'new' LGA1366 platform and the more recent LGA1156 platform, the consumer CPU market has seemed pretty stagnant as of late: the newer LGA775 Core 2 Quads improve over the older Core 2 Quads only ever so slightly, though the same can't be said for the LGA1156 and LGA1366 based platforms.

Right now, however, I've been contemplating a new system build for my own use (3D animation rendering in particular), but this is where I can't easily separate what I actually need from what's merely nice to have, especially once budget comes into play.

As to why I want a new computer, the machine I am using at present, an Acer Aspire 8730G laptop, has these specs:
C2D T9600 @ 2.83GHz <- Decent dual core, but very limiting for final rendering
4GB PC2-6400 RAM 5-5-5-18 (64-bit Vista)
nVidia 9600M GT <- decent for gaming I guess, but calling it merely 'meh' for Maya and XSI is an understatement
1920x1080 18.4" screen <- Good workspace pixel count; I'd want to keep at least this resolution for work

Predominantly I use Maya 2009 and Softimage XSI, both of which support multiple cores (more than just 2). Given how much faster rendering is on a quad core compared to a dual core (so much so that even a mid-range Phenom II system will handily beat the fastest overclocked Intel C2Ds), dual-core chips (Core 2 Duo, Phenom II X2, Athlon II X2) are definitely out of the equation, and though it isn't a dual core, the Phenom II X3 is also pretty much out.

Right now, however, I'm pretty much avoiding the AM3/Phenom II platform due to upgradability issues, and also because of its 3D rendering performance compared to what Intel has to offer at present. Its price/performance ratio just doesn't do it for me here.

Decent quad cores absolutely destroy duals when it comes to rendering, and for commissioned projects and other work that's an investment of my time, full render times are practically halved (comparing a C2Q Q9650 [4x 3.0 GHz] against a C2D E8400 [2x 3.0 GHz]).
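(As a rough sketch of why that near-halving makes sense: final-frame rendering is close to embarrassingly parallel, so Amdahl's law with a high parallel fraction predicts almost linear scaling with cores. The 95% figure in the Python below is an assumption, not a measurement.)

# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the parallel
# fraction of the job and n is the number of cores. p = 0.95 is an assumption.
def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

print(speedup(0.95, 2))   # dual core: ~1.9x over a single core
print(speedup(0.95, 4))   # quad core: ~3.5x, i.e. roughly 1.8x over the dual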

Why CPUs? Wouldn't workstation graphics cards come into play?
Well, pro graphics cards do come into play, but only during real-time interaction with the program's viewport; software rendering and mental ray rendering are where the CPU does the work.
For the workstation GPU, I've more or less decided to settle on a Quadro FX 580, an entry-level workstation GPU that doesn't need separate power connectors (it draws its power from the PCI-E slot).

Right now, however, I'm at a loss as to whether to stick to budget and get an LGA775-based Q9550, or go with an LGA1156/1366 platform for possible future upgradability (the fastest of the Core i7s aren't even overkill for people in my line of work, but they're way out of reach in terms of budget).

The thing about LGA1156, though, is that I'm uncertain what kind of roadmap Intel intends to take for future CPU releases, whereas on the upcoming LGA1366 roadmap the six-core, HT-enabled Core i9s are coming in the very near future, which I believe will force the prices of the present batch of Core i7 CPUs down to a reasonable level if market sales are anything to go by.

The other thing about the LGA1156 and LGA1366 platforms is the dual- versus triple-channel DDR3 memory controller. Most gaming folks would probably say there's little to no benefit, making the LGA1156 platform a viable option, but once you take animation rendering into account, the tri-channel DDR3 option suddenly looks good because of the higher overall bandwidth.
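(For the curious, theoretical peak bandwidth is just channels x transfer rate x 8 bytes per 64-bit channel. Assuming DDR3-1333 on both platforms, the quick Python check below works out to roughly 21 GB/s versus 32 GB/s; actual sustained bandwidth will be lower.)

# Peak theoretical memory bandwidth: channels * MT/s * 8 bytes per 64-bit channel.
def peak_gb_per_s(channels, mt_per_s):
    return channels * mt_per_s * 8 / 1000.0

print(peak_gb_per_s(2, 1333))   # LGA1156, dual channel   -> ~21.3 GB/s
print(peak_gb_per_s(3, 1333))   # LGA1366, triple channel -> ~32.0 GB/s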

How so? One might argue that dual versus tri-channel shaves off a few seconds at most when rendering a completed project, but here's how animation works (rough numbers are sketched right after this list):
- 1 second has 24 frames of animation
- A scene generally runs for a few seconds of animation
- A personal project is generally no longer than a few minutes, but a few minutes still works out to a few hundred seconds of footage
- A few hundred seconds plays a large role in render time because of point 1
- 1 frame of animation may take a few minutes to render, especially in slightly more complex scenes (a few minutes per frame is not uncommon, even on the fastest of i7s)
- A few minutes = a few hundred seconds = thousands of frames of animation
- Thousands of frames of animation (a few seconds of difference per frame on dual VS tri-channel) = huge amount of time
- Time = money
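
To put (made-up but plausible) numbers on the list above, here's a quick Python back-of-envelope; the 3-minute project length, 5-minute-per-frame render time and 4-second-per-frame saving are all assumptions for illustration, not benchmarks:

# How a small per-frame saving adds up over a short animation project.
# Every number here is an assumption, purely for illustration.
fps = 24                       # frames per second of animation
project_s = 3 * 60             # a "few minutes" long personal project
frames = fps * project_s       # 4320 frames
render_per_frame_s = 5 * 60    # assume ~5 minutes to render one frame
saving_per_frame_s = 4         # assumed dual- vs tri-channel difference per frame

total_hours = frames * render_per_frame_s / 3600.0
saved_hours = frames * saving_per_frame_s / 3600.0
print(frames, total_hours, saved_hours)   # 4320 frames, ~360 h total, ~4.8 h saved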

Then again, the above point may also be moot since really complex animation requires render farms (ExaFLOP power here) to get the job done... But I don't see myself doing anything as complex as a 3D feature animation film as a personal project, so that just brings up even more questions, most of which tilt things in favour of the tri-channel i7 setup.

Yet here comes another whole can of worms. The amount of RAM a workstation has should matter even more than dual versus tri-channel, and for the most part I could save a bit of cost by going with 8GB of RAM on LGA1156 as opposed to 6GB on LGA1366 (12GB is very tempting, but it's excruciatingly expensive), since P55 mobos are cheaper than X58 mobos. I'm wondering, though, whether I'd actually be stunting the machine's future upgradability by doing so. A high-RAM, tri-channel system would be close to ideal.

In the end, the cost difference between the LGA1156 and LGA1366 machines I have in mind (with a Core i5 750 or a Core i7 920 as the heart of the machine, 8GB on the i5 and 12GB on the i7) is about CAD$1350 versus CAD$1600.

I wonder if I'm actually debating with myself too much over this issue here n.n;

I would go for the Core i7 (920) + X58 option, and for a few reasons:

Both the P55 and X58 motherboards support multiple graphics cards (CrossFire for ATI and SLI for nVidia). The main difference between the two is the amount of bandwidth provided to the PCI-E slots on the board. On the P55, plugging in one graphics card gives you the full x16 bandwidth available from the controller; plugging in two graphics cards, however, splits the bandwidth into an x8/x8 configuration. The X58 has more bandwidth available, and while some models sport 3 (or even 4) PCI-E slots, two graphics cards will still run in an x16/x16 configuration, which is great if you decide to push for a more powerful graphics setup in the future.
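(For a sense of scale: both chipsets use PCI Express 2.0, which carries roughly 500 MB/s per lane in each direction, so the lane split works out as in the short Python sketch below.)

# PCI Express 2.0: ~500 MB/s per lane, each direction.
def pcie2_gb_per_s(lanes):
    return lanes * 0.5

print(pcie2_gb_per_s(16))   # x16 slot (X58 with two cards): ~8 GB/s per card
print(pcie2_gb_per_s(8))    # x8 slot (P55 with two cards):  ~4 GB/s per card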

I'm harping on about graphics cards because, with the immense processing power of a GPU (compared to a CPU), there is great potential that can be unlocked, especially in the field of rendering and such. With OpenCL still in the process of maturing, it never hurts to prepare for the future.

The Core i7 also has other tricks up its sleeve, such as the ability to push (read: overclock) to about 4.0GHz on air cooling (with a more capable heatsink than the standard Intel stock cooler). The tri-channel memory bandwidth also helps, and while DDR3 kits are still quite pricey, it can be a little cheaper to buy individual sticks than a kit (e.g. OCZ 10666/1333 DDR3 CL9 2GB = SGD$70, compared to the OCZ 10666/1333 Platinum CL7 6GB kit = SGD$245).
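(Quick sanity check on the stick-versus-kit maths in Python, using the SGD prices quoted above; note the singles are CL9 while the kit is CL7, so they aren't identical parts.)

# Three 2GB sticks versus one 6GB tri-channel kit, at the prices quoted above (SGD).
single_stick = 70                     # OCZ DDR3-1333 CL9 2GB
kit_6gb = 245                         # OCZ DDR3-1333 Platinum CL7 6GB kit
print(3 * single_stick)               # 210
print(kit_6gb - 3 * single_stick)     # 35 saved by buying singles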

As a gamer, I would opt for the i7/LGA1366/X58 platform because of the potential it offers over the i5/1156/P55. As a 3D artist, if you intend to squeeze out every drop of performance now and in the future, I'd recommend the i7 + X58.

Good points, and that sums up pretty much what I have in mind with regard to GPUs and the X58 platform.

OC'ing is pretty much out of the question though; IIRC, some 3D render programs hate it, 3ds Max especially, where in some instances it will inform you that your CPU is being overclocked and refuse to work (this is what I was told, so I'm uncertain of its credibility).

The advent of GPGPU is exciting, but consumer-level graphics cards (here I'm assuming the GeForce and Radeon lines are the ones being used) generally aren't geared for the accuracy of the Quadro or FireGL line of cards, hence my decision to go with a single-card solution. While consumer cards are hyped on speed, they lack the floating-point accuracy that the workstation cards have.

IIRC:
The i7 965 manages roughly 70 GFLOPS at double precision
The GTX 295 can do about 74 GFLOPS at double precision
The HD 4870 X2 has a whopping 200+ GFLOPS of double-precision compute capacity

Honestly though, I'd be surprised if CUDA and Stream were developed to the level of accuracy that 3D modellers/animators would need, since that would jeopardize the existing Quadro/Tesla/FireGL families of cards.

That said, I'm also uncertain of the benefits of having a Quadro in SLI or FireGL in Crossfire mode.

For now, the workstation cards predominantly increase productivity in real-time interaction with the program's interface, while mental ray rendering and software rendering are still written to run on the CPU.

That said, the X58 platform is tempting in the sense that once OpenCL matures further, the extra bandwidth of the additional PCI-E x16 slots can actually be put to use for more computing power. I'm just hoping Maya 2011 or 2012 will harness CUDA/Stream, which might sway my decision towards actually purchasing a consumer-level card, as they're far more cost-friendly than the workstation cards.
