Building a Fast Graphics Computer for Poser 9/2012 Renders & Carrara

Consumer573 Posts: 282
edited February 2013 in Poser Discussion

NOTE: Post 15, below, has a direct link to compare Intel processors (laptop, desktop, server) that has turned out to be helpful.
.
.


I would like to build/spec out a computer to handle Poser 9 and above. Cost is an issue and I'm looking for a guide as to where I really should not skimp.


My main objective for the new system is to minimize render times. I would also like to create larger models that don't use up all the available memory, leaving nothing for the renderer.


I would like to know what people are currently using and what they think the critical components for fast graphics are.

What do you consider state of the art, and what have you all experienced with newer systems? What would you change?

Any idea what kind of rendering time improvements I can reasonably expect over the older XP system I have now?

.
.

I don't know much about Poser 9 system details. I am told it can access 64 bits for working, but that I can't efficiently render with it no matter what new computer I build [edit: no: it runs 32-bit only; it can run as a 32-bit app under Windows 7]. If that is the case, what do 'normal' 3D programs (or Poser 2012) run best on? Does Poser 9 access more than one core if you have a multi-core system? [edit: no. 32 bit only]

Here are my basic question areas, with the techy details, but feel free to ignore them and just tell me what you think. Also, I may not even be asking the pertinent questions:

(1) Intel or AMD? How many cores? Core memory size?
(2) If Intel I5 or I7?
(3) Graphics card (GPU). How important is it? How much dedicated memory makes a difference? Any particular brand/vendor?
(4) Are buses important? [Ivy or Sandy].


I welcome input from people who use other 3D Graphics software, too. Daz users!!! C4, Rhino, Maya, Lightwave, Autodesk and Blender. Shade, Hexagon.... What am I leaving out?

Any places I should look (magazine articles you may have seen, threads, forums)?

Thanks!

Post edited by Consumer573 on

Comments

  • wimvdb_dc63ee9ce6 Posts: 183
    edited December 1969

    Use a 64-bit program (such as PP2012), pick a CPU with the most cores, and put in as much memory as the motherboard can hold.
    If you want to do CUDA GPU rendering (Octane or similar), buy an Nvidia CUDA card in addition to your standard graphics card, with as much VRAM as possible, as many GPU cores as possible, and as many texture slots as possible (example: GTX 680 4GB). If you don't, just use or buy a "standard" gaming card, since it won't help you with render speed. A current-generation card is fast enough for OpenGL.

    In other words - it all depends on your budget. For "normal" rendering, render speed is determined by the number of cores, memory and CPU speed. For GPU rendering it is the number of GPU cores, and the size of the scene you can render depends on the VRAM and texture slots. This is the case for all 3D software.
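    That "cores, memory and CPU speed" point can be sketched with a toy Amdahl's-law model. The 95% parallel fraction below is an illustrative assumption for a bucket renderer like FireFly, not a measured figure:

```python
# Toy Amdahl's-law model of render time vs. core count.
# ASSUMPTION: 95% of the render parallelizes perfectly -- an illustrative
# number, not measured from Poser.

def render_time(base_minutes, cores, parallel_fraction=0.95):
    """Estimated wall-clock time for a render that takes base_minutes on one core."""
    serial = base_minutes * (1 - parallel_fraction)       # part that never speeds up
    parallel = base_minutes * parallel_fraction / cores   # part that scales with cores
    return serial + parallel

# A 60-minute single-core render:
quad = render_time(60.0, 4)    # 3 min serial + 14.25 min parallel = 17.25 min
hexa = render_time(60.0, 6)    # 12.5 min: diminishing returns as cores grow
```

    Note how the serial 5% caps the gain: even infinite cores can't get below 3 minutes in this model, which is why clock speed still matters alongside core count.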

    Is GPU rendering faster? Doing the same thing - yes. But once you have it, you want to render more realistically, and in the end render times are just as slow.

    The real thing you probably want to know is what the best price/performance ratio is. Generally that would be last year's top of the line.

  • Consumer573 Posts: 282
    edited January 2013

    These last few days I've been evaluating your comments. I looked at them right away and was hoping someone else would post, too, so I could do some research before following up.

    GPU

    The CUDA & GPU world is a world of its own, and I think it's important that any new graphics computer be built with that upgrade in mind.

    About 18 months ago I came to the conclusion that a rendering computer had to be at least one to one-and-a-half orders of magnitude faster (10x-15x) than my current computer. On the existing motherboards I didn't see how I would get that; I'd be lucky to get 4x-6x. But it looks like Nvidia is the leader, having developed a monster parallel-processor system specialized to render graphics for video and gaming, and now that it works for real-time gaming, the graphics spillover is a variant of, "why can't we just print out what we see? Why do we have to render on the motherboard?" Hence CUDA and the up-and-coming software Octane and Reality (I want, I want). Your comments helped point me in that direction, and I appreciated the specific model number to look at.

    NVidia has a conference coming up in March.

    CHIPSET

    I seem to have settled on an Intel i7 3700-series "K" version Ivy Bridge chip. The i5 seems to be only dual core; the i7 is quad core. Ivy Bridge seems to be the 22nm equivalent of the larger Sandy Bridge; that is, how Intel got Sandy technology to work on a smaller chip. They say it's not a radically new design, but rather the stepping stone from one chip size to another. The next technology (H-something) will build a new design at 22nm. The "3" in the 3700 seems to mean it is the 3rd generation, or most recent version of the i7 on the market, and the "K" seems to mean it is the unlocked (tweakable speed) version. I'm not sure how the "K" differs from a straight 3700, or if there is such a thing as a straight 3700. There is a "T" that has significantly lower power consumption (how do they do that? Do they slow the chip down when it gets too hot?) and an "S" for performance. If I get the chance I'll attach some links.

    AMD vs Intel, 2013

    Why not AMD? I hate to say it, but in all my searches AMD really does not seem to come up at the top. They don't seem to have anything that is being touted as equivalent. I have really liked the AMD machines I've used, and in the past, if Intel had something on top, AMD would have its staunch defenders. I'm not finding the obvious links or references to get me down that path this time.


    Steps to building a Powerful graphics Computer that I can't afford all at once

    My approach will be to build the minimum computer that can do the job and that I can upgrade over the course of 12-24 months. So I'm trying to choose a chipset that won't be outdated immediately and a motherboard that can be upgraded. Right now CUDA seems to be six months or more away because of cost, which is fine because the motherboards I'm looking at seem to come with on-board video. But I need to make sure the motherboard has the slots. 8GB seems to be a starting point, but it looks like I want the slots to upgrade to at least 32GB. I haven't really seen 64GB much.

    In going through this I'm also coming up with questions about USB 2.0 and USB 3.0. Why do motherboards carry both? It seems there are still problems and quirks with USB 3.0?

    Sources

    Right now one of the top gaming-computer custom companies seems to be Alienware (part of Dell now), so I went to see what they were doing; one of the questions to be answered (based on your comment above) is "What is last year's best technology?" I don't have that yet, but:

    http://www.alienware.com/

    And I'm looking at bundles on mwave:

    http://www.mwave.com/mwave/index.asp?

    I'm really not impressed with the selection I seem to get at mwave these days. I think it's their website software. Right now I go to bundles, choose the chipset (i7) and the type (Ivy Bridge), and I get about 2 motherboard bundles in the $300 range (with chipset and 8GB memory [expandable to 32], it totals about $700). But it has video to start with and seems to have the PCIe expansion slots that I need.

    Mwave has a lot of motherboards, but they seem to have slipped up in making me hunt for the ones that match the i7 and Ivy Bridge. Maybe I'm doing something wrong.

    Still looking. Still appreciative of comments.

  • wimvdb_dc63ee9ce6 Posts: 183
    edited December 1969

    I built a machine a few months ago. I selected the fastest components: an i7-3930K, an Nvidia GTX 680, 32GB of memory, an SSD drive and two 4TB hard drives. It has two gigabit network cards. That was the fastest I could get at that time. I buy at least one machine a year and use the previous machine as my web and email machine and as a secondary Poser machine.
    I have been building machines since the early 80s, so this is probably generation 15 or later. I don't play games anymore, so the game-specific features don't interest me, but with CUDA there is, for me, a renewed interest in capable video cards. I did have a GTX 580 card with 1.5GB and that really did not play well with the Poser Octane Plugin - there was simply too little VRAM on board and there were not enough texture slots. 90% of my scenery would not load due to these constraints.
    With the GTX 680 with 4GB, and using the GTX 580 as a secondary card, the situation has improved a lot. Now I can load 80% of the scenes and render them.
    But GPU rendering has a trade-off. Poser Firefly is a very capable renderer in its own right and has features which do not translate well to Octane (or Lux, for that matter). This means that any material room tinkering (texture rotation, image offsets, many of the special nodes) will not translate, and some cases are impossible to render without changing textures. So it is not a perfect match. But on many occasions you get really great results. It is definitely worth the money.

    Regarding AMD - I have never been a fan of AMD, so I have always used Intel. Motherboard - I define what I want on it, and that pretty much defines which boards I can use. Which make? I have always been reasonably happy with Asus, so that is what I usually go for. Sometimes I do run into problems (mostly because it is all pretty new) but that usually fixes itself with a BIOS update.

  • WandW Posts: 2,819
    edited December 1969

    I have a USB 3.0 motherboard, and it has both 3.0 and 2.0 ports. However, if you plug a USB 2.0 cable into a 3.0 port, it acts like a 2.0 port.

  • Consumer573 Posts: 282
    edited February 2013

    WimvdB said:
    ... I did have a GTX 580 card with 1.5GB and that really did not play well with the Poser Octane Plugin - there was simply too little VRAM on board and there were not enough texture slots. 90% of my scenery would not load due to these constraints.
    With the GTX 680 with 4GB and using the GTX 580 as a secondary card the situation has improved a lot. Now I can load 80% of the scenes and render them...

    You mentioned it before. What are you defining as a texture slot? I'm not really familiar with the term.

    Also, what is the advantage of the SSD drive? I'm not sure how that comes into play these days. I still think of it as a slower HD, but non-mechanical.

    And, any reason for the 3900? I'm thinking the previous-generation 3700, plus a really good CUDA card 6-8 months from now, will be fine. My immediate budget will use whatever video is on the motherboard, and so Firefly it will be.

    Asus. Yes, that is what my tendency would be for building. Glad to hear it reinforced. Surprised you never used AMD. For a while they had the best price/performance.

    By the way, I'm building this for 3D modeling and fast rendering, and potentially some video editing, not so much gaming; it just seems like the drive behind the GPU is the gaming movement.

    Finally, it sounds like your machine from 3 years ago would blow away anything I have. 50 bucks for it? (Said just for the smile).

  • wimvdb_dc63ee9ce6 Posts: 183
    edited December 1969

    CUDA has a limited number of textures you can use. This depends on the chip. The GTX 680 allows 64 RGB textures and 32 grayscale textures. This is not a lot. A Gen4 (or Genesis) figure will occupy 6-18 slots (body, head, limbs, eyes, mouth, eyelashes, each with possible bump and specular maps); add hair and clothes and you will use a large percentage of the available slots. Bump, displacement, specular and transmaps are all grayscale maps. A Stonemason scene can use well over a hundred texture maps.
    With CUDA/Octane you do not have the option to fall back to CPU rendering. With a single video card, CUDA and the OS share the video memory. Windows takes 300-400MB on a dual-screen setup. The VRAM is mostly used by CUDA for texture maps, so 4000x4000 color maps fill up VRAM fast.
    There are many other considerations, but this is the main one when deciding to buy a video card.
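    To make the slot and VRAM arithmetic above concrete, here is a quick back-of-the-envelope script using the 64 RGB / 32 grayscale limits quoted above. The per-figure and per-scene map counts are illustrative guesses, not numbers from any actual scene:

```python
# Back-of-the-envelope check against the CUDA texture-slot limits quoted
# above (64 RGB + 32 grayscale on a GTX 680 class card). All per-item map
# counts below are illustrative guesses.

RGB_SLOTS, GRAY_SLOTS = 64, 32

def fits(rgb_maps, gray_maps):
    """Does a scene's texture count fit within the card's slot limits?"""
    return rgb_maps <= RGB_SLOTS and gray_maps <= GRAY_SLOTS

def vram_mb(width, height, bytes_per_texel=4):
    """Approximate VRAM footprint of one uncompressed RGBA texture, in MB."""
    return width * height * bytes_per_texel / 2**20

# Hypothetical scene: one figure, hair/clothes, and an environment set.
scene_rgb = 9 + 6 + 20     # color maps
scene_gray = 8 + 4 + 25    # bump/displacement/specular/transmaps
fits(scene_rgb, scene_gray)   # False: 37 grayscale maps exceed the 32 limit
vram_mb(4000, 4000)           # ~61 MB -- a handful of these eats VRAM fast
```

    That ~61MB per uncompressed 4000x4000 map is why a 1.5GB card chokes once Windows has taken its 300-400MB share.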

    An SSD is much faster than a traditional drive. The lifetime of the SSD is a bit uncertain; most manufacturers claim the lifetime of an ordinary disk under normal usage. The key factor here is that there is a limit on how often a block on an SSD can be written to, so it is advised to move the temp files to a RAM disk or an ordinary disk. The big advantage of the disk is its speed. Booting takes a few seconds and installing an app will go 10 times as fast.
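    For a feel of how that write limit plays out, here is a rough endurance estimate. The program/erase cycle count, write amplification, and daily write volume are illustrative assumptions, not manufacturer specs:

```python
# Rough SSD write-endurance estimate for the lifetime concern above.
# ASSUMPTIONS (illustrative): 3000 program/erase cycles per block,
# 2x write amplification, 20GB of host writes per day.

def endurance_years(capacity_gb, pe_cycles, gb_per_day, write_amp=2.0):
    """Years until the drive's total write budget is exhausted."""
    total_budget_gb = capacity_gb * pe_cycles / write_amp
    return total_budget_gb / gb_per_day / 365

endurance_years(256, 3000, 20)   # ~52 years: wear is rarely the real limit
```

    Even with fairly pessimistic inputs the budget runs to decades, which is why the usual advice boils down to keeping heavy temp-file churn off the SSD and not worrying much beyond that.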

    The 3900 was the fastest at the time I bought it, and the price difference with the next best was not too much.

    AMD - compatibility was the main reason. I used to tinker a lot with the OS and did not want to be distracted by quirks of the processor.

  • Consumer573 Posts: 282
    edited February 2013

    Can you recommend an Asus board to look at? I'm having some trouble sorting out paired PCI express slots that will work. A number of boards I look at say the slots are shared and that to me means reduced performance.

    Do you have a recommended vendor for paired Motherboard / CPU combos? I liked mwave because in the past they would assemble and test the board and put on the CPU heatsink which I understand can be a pain to get right if you only do it once.

    I understand more now about the texture slots, but I'm not sure how you know what the specs for the board are (also, did you leave out a digit above, or is that generic to the whole series?). That's nice info you shared. I went to look at CUDA specs to try to understand, and also to the Nvidia site. Neither CUDA nor the specs for the boards seem to talk about texture slots. I wouldn't have picked up the importance on my own without your mention of it.

    Thank you.

  • Consumer573 Posts: 282
    edited February 2013

    Thread link to a recent (Jul 2012) dual-CPU discussion:

    http://www.overclock.net/t/1277467/dual-socket-x79-lga-2011-intel-motherboard

    This has some excerpted nuggets:

    Post # 3/28 (on dual CPUs):

    In things like CAD there will be a large difference in having two CPUs. However, in everything else there will be almost no difference whatsoever, so you need to make up your mind on what's more important to you. The second is overclocking - a single 3930K overclocked to 4.8GHz will be on par in CAD with two 2.4GHz X79 Xeons, while being much better at gaming and the like, and cheaper. However, if this is a 'workstation' where you will be doing CAD-like work that your livelihood depends on, you might not overclock at all, for reliability reasons. In that case a single six-core Xeon might be best, as it gives you ECC memory support. So it would help to outline what you do and why a little more. Personally, however, it seems that unless you have an unlimited budget or do hours of CAD and Maya a day, a single 3930K would be best.

    Post # 5, same reader: The Asus ROG Rampage boards are pretty popular, and so far I haven't found any reason to dislike my Extreme.

    Post # 8/28 (on why you can't use the i7 for dual processors and have to go to the Intel Xeon series):

    The reason you can't run two 3930Ks in a 2P board is that they only have one QPI (QuickPath Interconnect). Xeon chips have two: one for the system and one to go to a second processor. The 3930Ks don't physically have a way to communicate with each other.

    As far as a single cpu board you want to look at the Rampage IV Formula http://www.newegg.com/Product/Product.aspx?Item=N82E16813131808

    You can look at the extreme but I don't think there is anything to justify the extra $100 for it.

    I just picked up an ASRock X79 Extreme6 for my 3930K that I hope to get up and running soon.

  • Consumer573 Posts: 282
    edited February 2013

    Is it possible to buy a dual-CPU board and only put one CPU on it to start with? [Edit: Yes. Intel's server line, the Xeon processor, often appears in computers, such as Dell's Precision series, that come with dual-socket motherboards with only one CPU installed].


    Do you have a dual-CPU board recommendation that will be able to take a dual-slot GPU?

    .
    .
    .

    Edit: 6 cores per Xeon (Westmere) 5600 processor, 32nm technology. July 2010 review:

    http://techreport.com/review/19196/intel-xeon-5600-processors

  • Consumer573 Posts: 282
    edited December 1969

    From Creative Cow thread:

    http://forums.creativecow.net/thread/61/863196

    A fellow building a fast graphics computer for Autodesk/Maya:



    Josh Buchanan Computer for maya and 3ds max, more cores or faster cores?
    by Josh Buchanan on May 27, 2012 at 9:35:45 pm

    Hey guys,
    I plan on buying a new workstation for the Autodesk programs, Maya and 3ds Max, and other 3D rendering programs. My question is: should I aim for a workstation with a lot of cores, or fewer cores that are much faster? Say an Intel setup with 6 at 4.5GHz, or AMD with something like 24 at 2.1GHz. How much do the Autodesk programs take advantage of the multiple cores?
    Thanks






    Steve Sayer Re: Computer for maya and 3ds max, more cores or faster cores?
    by Steve Sayer on Jun 4, 2012 at 1:47:30 pm

    Very generally speaking, faster cores will help YOU while you're working in the software, while more cores will help the RENDERER when you leave it running on its own or in the background.

    If you're going to be doing a lot of processor-intensive work, like animating extremely complex characters or running elaborate simulations, faster cores will make that less painful. However, if you're going to be a doing a lot of rendering on that box, the more cores you have the less time you'll spend waiting for those renders to finish.
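    Steve's rule of thumb above can be sketched as a toy scoring model: interactive work mostly tracks single-core clock, while batch rendering tracks aggregate core-GHz. The two chips below are the hypothetical ones from the question, not benchmarks:

```python
# Toy scoring model for "few fast cores" vs. "many slow cores".
# Chips are the hypothetical examples from the question, not real benchmarks.

def interactive_score(cores, ghz):
    """Viewport/simulation work mostly rides one thread: clock wins."""
    return ghz

def render_score(cores, ghz):
    """Tile/bucket renderers scale near-linearly: aggregate core-GHz wins."""
    return cores * ghz

fast_few = (6, 4.5)    # e.g. a 6-core Intel at 4.5 GHz
slow_many = (24, 2.1)  # e.g. a 24-core AMD setup at 2.1 GHz

interactive_score(*fast_few) > interactive_score(*slow_many)  # True: 4.5 vs 2.1
render_score(*fast_few) < render_score(*slow_many)            # True: 27 vs 50.4
```

    So the same two boxes can each "win", depending on whether you spend more hours posing or more hours waiting on renders.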

  • Consumer573 Posts: 282
    edited February 2013

    Xeon chips are typically for workstations and servers; the Intel "i" series is typically for home (but there's still a lot to be said for going with multiple processors to build a fast 3D graphics render machine):

    Thinking out loud here, leaving a partial research trail: Xeon chips are workstation chips and can be as new as the 'i' series. It is a parallel line made for servers. They typically have more cores, come with ECC (error-checking and correction) memory for stability, and are more expensive. It looks like if you want to run multiple processors, the Xeon series really is set up to take advantage of that, whereas the "i" series is not.

    Googling the phrase "Asus, dual Xeon, GTX580" to try and come up with a motherboard I can start with and upgrade brings more info from an Adobe forum (but no motherboard). This fellow is not as concerned with 3D rendering, so his conclusion (going with an overclocked i7 and a GPU) may ultimately be different from mine:

    He's trying to build a fast graphics computer to run Adobe, in October 2012:

    http://forums.adobe.com/message/4755559

    Also, this article from Tom's Hardware favors a six core i7 over Xeon (EXCEPT when using a high number of threads, as in rendering):

    http://www.tomshardware.com/forum/331975-28-xeon-2690-3930k
    .
    .
    .
    Credit: link above. I found this excerpt a very helpful commentary for the discussion here (date: April 2012):


    "i7 3930K (or i7-3960X) vs. Xeon E5 2690 eight core

    Archi_B
    Hello,

    I'm wondering how the i7-3930K (or i7-3960X) holds up to the Xeon E5-2690 eight core; I saw that HP released their new workstation Z820, which supports a variety of 8-core processors (even dual 8 cores = 16 cores), but the system is very expensive.
    So in terms of performance, how are the six-core i7-3930K (or i7-3960X) in comparison with 8-core processors? Are the 8-core E5s really worth it? Are they really that powerful and fast? Because, as I said, in financial terms an i7-3930K system would be half the money (or more).
    * I couldn't find a proper benchmark where the E5s were listed

    any help would be much appreciated

    blazorthon 04-16-2012 at 09:48:05 AM

    The 8 cores with reduced clock speed are fairly similar to the X79 i7 six core CPUs in highly threaded performance and inferior in lightly threaded performance. The Xeons are more expensive because of their more server/workstation oriented features (ECC memory compatibility, more stable, multi-CPUs per board compatibility, etc.), not because of them being faster, except for the fastest of the Xeons (IE an eight core Xeon with a higher than 3GHz clock frequency and the ten core Xeons).

    Archi_B 04-16-2012 at 09:57:42 AM

    Thanks for your input, blazorthon.
    Regarding people's general opinion, this is what I found out:

    "For the money that you spent, dual E5s do not perform anywhere near that much faster than systems equipped with single i7-39xx CPUs. In fact, dual E5s might actually perform slower than single i7s in H.264 encodes due to the excessive latencies in the switching in dual-CPU systems (and the more CPUs within the single system, the greater the latency)."


    Archi_B
    04-16-2012 at 09:59:24 AM

    "Here is one major problem with all dual-CPU setups (not just dual e5s):

    No dual-CPU system performs anywhere near twice as fast as an otherwise comparable single-CPU system. In fact, without all of the latencies and bottlenecks that switchers, disk systems and graphics systems impose on the system, a dual-CPU system performs at best 41 percent faster than a single-CPU system. (In fact, one would need a quad-CPU system just to theoretically double the overall performance of a given single-CPU system.) Add in the chipset, disks and GPU, and the performance advantage could plummet to less than 20 percent. That's way too small of a performance improvement for such an astronomical increase in total system cost (which could amount to double or even triple the cost of an otherwise comparable single-CPU system). And that's not to mention that the second CPU increases the total system cost by at least $2,000 up to a whopping $6,000. No wonder why dual-CPU systems are relatively poor values (bang-for-the-buck)."


    blazorthon 04-16-2012 at 10:34:36 AM

    The E5-2670 and the i7-3930K and the i7-3960X should all be about equal in highly threaded performance (12/16 threads in this context) and the i7s pull ahead significantly in anything that uses less than 16 threads. Same goes for the E5-2690. If these are your CPU choices, then get an i7-3930K or an i7-2700 and overclock it to about 5GHz (or whatever it will go to at below 1.4v). If you are also willing to overclock the i7-3930K, then you can probably get it up to about 4.5 or 4.6GHz (maybe a little higher) with an $80-$100 cooler.


    Archi_B

    Thanks, blazorthon. Well, I'm looking for a new computer for work, and I must say that I have little experience with building one; most of my computers, laptops, and my current workstation are HP (I am an HP fan).
    So what I currently have in mind is this:
    HPE Phoenix h9se series
    - 2nd Generation Intel(R) Core(TM) i7-3960X six-core processor [3.3GHz, Shared 15MB Cache]
    - 16GB DDR3-1333MHz SDRAM [4 DIMMs]
    - 256GB Solid state drive
    - 1GB DDR5 NVIDIA GeForce GTX 550 Ti [2 DVI, mini-HDMI. VGA adapter]

    Hp: total cost 2400$
    Archi_B 04-16-2012 at 11:01:00 AM

    Is this worth it, or should I consider a custom build (ask a friend to help) that, for the same amount of money, gets better hardware?

    blazorthon 04-16-2012 at 05:33:24 PM (shortened from link)

    Get the i7-3930K instead of the i7-3960X. It has nearly identical performance to the 3960X at all workloads for $400 less money. Also, I recommend getting 1600MHz memory. It should cost about the same as 1333MHz memory does if you buy the memory yourself from a site such as newegg. Make sure that you get 1.5v quad channel memory if you do.

    Build it Yourself

    ...OEM computers tend to get more overcharged as they go up in performance. For example, for a low-end machine you might get the same performance as a home-built machine, but for a high-end machine you might pay 50% to several times what it would cost to build it yourself. The memory and video card are usually the worst offenders in cost. Since you were opting for a fairly low-end graphics card (at the bottom of the middle-end class today; it won't be long before it is considered the top of the low-end class), you probably weren't getting a price as bad as a similar gaming machine would have, but it still seems like too much money for such a system.

    Considering that Tom's built an X79 computer with some frills for looks and noise reduction that had a 3930K and Radeon 7970 (the 7970 alone was about $600 of that budget), I'd say that you should try either a home built or a partially home built such as what I suggested. If you have a friend that can help, then it would just be even easier.

    Going for something similar except with less frills (cheaper case, PSU, CPU cooler) and the GTX 550 TI instead of the powerhouse of a 7970, you should be able to get something similar (or greater than) the specs that you listed for about $1200-$1400.....
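    One note on the excerpt above: the "at best 41 percent faster" claim for dual-CPU systems is consistent with a square-root scaling rule of thumb (best-case speedup of roughly the square root of the CPU count), which also matches the claim that a quad-CPU box is needed just to double performance. A minimal sketch, assuming that model:

```python
# Square-root scaling rule of thumb for multi-CPU speedup, consistent with
# the "41 percent at best" quote above. A heuristic, not a law of hardware.
import math

def best_case_speedup(n_cpus):
    """Best-case speedup over one CPU under sqrt-scaling."""
    return math.sqrt(n_cpus)

best_case_speedup(2)   # ~1.41: dual-CPU at best 41% faster than single
best_case_speedup(4)   # 2.0: quad-CPU needed just to double throughput
```

    Real scaling depends heavily on the workload; highly threaded renders do better than this, which is why the same thread still recommends Xeons for heavy rendering.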

  • Consumer573 Posts: 282
    edited February 2013

    Models

    Dual Xeon E5 2687W on Asus Z9 PE-D8 WS - Cinebench 11.5

    http://www.youtube.com/watch?v=ZdkDsI8wihI

    Asus Review (Below) from September 2012 Motherboards.org:
    http://www.motherboards.org/


    ASUS Maximus V Extreme vs Maximus V Formula Thunder FX (a good explanation of the names and differentiation laying out the Asus line, not just the comparison between these two specific boards):

    http://www.youtube.com/watch?v=x7G-hZ7BNbc


    Nvidia vs AMD GPU (note: from 2011; starting to get old):

    http://hexus.net/tech/reviews/graphics/31451-asus-rog-dual-gtx-580-mars-ii/

  • Consumer573 Posts: 282
    edited February 2013

    One motherboard possibility (looks like it might be able to handle the latest 6-core i7 chip [not sure] as well as the Nvidia GPU) at a reasonable cost ($295):

    Asus P9X79 Pro:


    http://www.asus.com/Motherboard/P9X79_PRO/

    Retailers & Prices:

    http://www.google.com/shopping/product/3644098954551914614?hl=en&sugexp=les;&gs_rn=2&gs_ri=hp&cp=14&gs_id=6&xhr=t&q=asus+p9x79+pro&pf=p&output=search&sclient=psy-ab&oq=Asus+P9X79+Pro&gs;_l=&pbx=1&bav=on.2,or.r_gc.r_pw.r_qf.&bvm=bv.41934586,d.dmQ&biw=1024&bih=608&tch=1&ech=1&psi=vnwRUcGgCuil0AHynIHgCQ.1360100575625.1&sa=X&ei=wHwRUe3RHsHw0gHOzYD4Bg&sqi=2&ved=0CLgBEMwD

    .
    .
    Review: Asus P9X79 Pro (from Google Link Above)
    - Jan 6, 2012
    Having recently looked at the Sabretooth X79, it's now the turn of a motherboard a little lower down the food chain, the Asus P9X79 Pro.

    This board is lower down in relative terms only though, as the X79 is a high-end, enthusiast chipset. That means any motherboard featuring this chipset isn't going to be exactly cheap.

    The P9X79 Pro is about a tenner cheaper than the Sabretooth X79. And, you'd better sit down for this, nearly a hundred quid cheaper than Asus's flagship Republic of Gamers X79.

    The P9X79 Pro is still packed with up-to-the-minute features, such as PCIe 3.0 support, USB 3.0 boost technology, SSD caching, Asus's new USB BIOS Flashback, an updated UEFI BIOS with new features and eSATA 6Gbps. So in fact you're getting an awful lot of board for that price tag and with a fair degree of future-proofing built in as a bonus.

    The new processors feature a quad-channel memory controller, which goes some way to explaining the crazy physical size of the chip and its corresponding socket on the board. Small it ain't and while some companies are happy to stick with four DIMM slots, Asus being Asus has gone the whole hog and, as it did with the Sabertooth X79, the P9X79 Pro sports a complete set of eight DIMM slots, two per channel. In theory, that means you can load up the board with a maximum of 64GB of memory.

    In performance terms too the P9X79 Pro impresses. The Sabertooth X79 may be seen as the home overclocker's board of choice, but the P9X79 Pro seems just as capable of hitting 4.8GHz. An excellent result considering the Sabertooth only managed another 100MHz more. It may not have the looks, but it's still got the performance chops.

    Simon Crisp TechRadar Labs, UK

  • Consumer573 Posts: 282
    edited February 2013

    See the i3, i5, i7, Xeon and more!


    Intel Processor Comparison
    (Click on the tabs once you're on the web page for Laptop, Desktop and Server):

    http://www.intel.com/content/www/us/en/processor-comparison/compare-intel-processors.html


    or just follow me:

    Desktop: (i3, i5, i7 series, etc.)

    http://www.intel.com/content/www/us/en/processor-comparison/compare-intel-processors.html?select=desktop

    Server (Latest Xeon):

    http://www.intel.com/content/www/us/en/processor-comparison/compare-intel-processors.html?select=server

    This is a pretty nicely laid out group of web pages. When you click on "Compare" (you're allowed to compare 5 processors at once), that takes you to a more detailed comparison page where you can see things like the date the processors came online (how old they are) and more. I compared an i3, the latest 6-core i7, and the latest Xeon, for example. The rows are colored one way when values differ across cells and another when they are the same.

  • Standard Deviation Posts: 12
    edited December 1969

    Nice thread.

    I use an ancient dual-Xeon workstation: a great buy used, with careful owners and excellent spares (the IT guys save everything, and sell). Much better than consumer grade; the build quality is solid, and it's more reliable used.

    Architecture, TDP, voltages on the rails, are things I had to learn with mine.

    Also, if you can, look at the HP Z800 (even Z600, Z400). If you found a used one it would keep you happy for longer than a consumer unit.

  • Consumer573 Posts: 282
    edited February 2013

    So, you made me look at ebay for the Dell 690!:

    http://www.ebay.com/itm/Dell-Precision-690-Xeon-HT-3-0GHz-2GB-250GB-DVDRW-512MB-Video-Win-XP-Pro-/160700286184?pt=Desktop_PCs&hash=item256a7bc4e8


    Though this one looks like it's an older dual core with Hyper-Threading. The numbers are in the ballpark, though. This venue is a possibility if the right machine comes up.

  • Consumer573 Posts: 282
    edited February 2013

    ssgbryan said:
    I picked up a 4 core Dell Precision 690 workstation for 300 dollars. For an additional 30 or so, I can add a memory riser and go to 64gb of ram.


    Can anybody verify that the Dell 690s will run Windows 7?** I don't see why not, but I'd rather be surprised now, before I buy one.

    Where do you get memory for $30? Crucial, etc. are charging an arm and a leg ($280-330/8GB), as it's all ECC, and they only seem to have 4GB modules. I found someone who will sell me 8GB for $70.


    **It doesn't look like a Dell Precision 690 will run Windows 7:

    http://www.dell.com/support/drivers/us/en/04/Product/precision-690

  • Consumer573 Posts: 282
    edited February 2013

    And I only see memory in increments of 4GB, so I don't quite understand how you get 64GB; I get to 32. Do I need special brackets or fans for cooling?

    I'm intrigued.

    Post edited by Consumer573 on
  • frank0314frank0314 Posts: 14,054
    edited December 1969

The more sticks of RAM you have, the more cooling you need. Some come with heat spreaders that go over the top of the RAM to help dissipate the heat, but a couple of quality fans should be fine for 32GB.

  • Consumer573Consumer573 Posts: 282
    edited February 2013

    Looks like a number of 5-6 year old Dell Precisions are coming off lease about now.

Here's a Xeon quad-core processor (4 cores/4 threads) that I couldn't find in the Intel specs (I might have missed it). Reference:

    Product description
    Description
The Intel Xeon E5345 is a quad-core 2.33 GHz computer processor chip that runs four threads and is made for use with an Intel motherboard for personal computers used in a home setting as well as those that are put to work in small to medium sized businesses. This Intel Xeon quad-core processor has a clock speed of 2.33 GHz as well as an L2 cache with a maximum capacity of 8 MB. The Intel Xeon E5345 2.33 GHz processor has a front-side bus speed of 1333 MHz, which translates into a bus-to-core ratio of 7. Other features of this Intel Xeon quad-core processor include a VID voltage range of 1.000 to 1.500 volts, a maximum TDP rating of 80 watts, and a lithography rating of 65 nm. This 2.33 GHz computer processor supports 800 or 1066 DDR 3 memory with a maximum bandwidth of 25.6 GB per second. In addition to Intel Virtualization Technology and Enhanced SpeedStep processing power, this Intel Xeon quad-core processor supports multiple sockets including LGA 771 and PLGA 771 technology.

    Product Identifiers
    Brand Intel
    Processor Model Xeon E5345
    MPN HH80563QJ0538M
    UPC 735858199308

    Key Features
    Clock Speed 2.33 GHz
    CPU Socket Type LGA 771/Socket J
    Multi-Core Technology Quad-Core
    TDP 80 W
    Processor Quantity 1
    Product Type Processor

    Cache Memory
    Level 1 Size 64 KB
    Installed Size 8 MB
    Type Advanced Smart Cache

    Expansion / Connectivity
    Compatible Slots 1 x processor, 1 x processor - LGA771 Socket

    Other Features
    Bus Speed FSB Speed - 1333 MHz
    Manufacturing Process 65 nm
    64-bit Computing Yes
    Architecture Features Enhanced SpeedStep technology, Intel Virtualization Technology, Intel 64 Technology, Intel Advanced Smart Cache
    Platform Compatibility PC

    Miscellaneous
    Package Type OEM/tray
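As a sanity check on the listing above (my arithmetic, not from the seller): a 1333 MT/s front-side bus is quad-pumped from a ~333 MHz base clock, and multiplying by the quoted bus-to-core ratio of 7 recovers the 2.33 GHz clock speed. A quick sketch:

```python
# Sanity check: core clock = FSB base clock x bus-to-core ratio.
fsb_mt_per_s = 1333                  # front-side bus, mega-transfers/s (quad-pumped)
base_clock_mhz = fsb_mt_per_s / 4    # ~333 MHz underlying base clock
ratio = 7                            # bus-to-core ratio from the listing
core_ghz = base_clock_mhz * ratio / 1000
print(round(core_ghz, 2))            # → 2.33
```

Handy when comparing used listings: if the ratio and FSB don't multiply out to the advertised clock, the listing is probably describing a different stepping.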

    Post edited by Consumer573 on
  • Consumer573Consumer573 Posts: 282
    edited February 2013
    Post edited by Consumer573 on
  • Consumer573Consumer573 Posts: 282
    edited February 2013

    Intel® Xeon® Processor W3550
    (8M Cache, 3.06 GHz, 4.80 GT/s Intel® QPI)

    Status
    Launch Date: Q3'09
    Processor Number: W3550
    # of Cores: 4
    # of Threads: 8
    Clock Speed: 3.06 GHz
    Max Turbo Frequency: 3.33 GHz
    Intel® Smart Cache: 8 MB
    Bus/Core Ratio: 23
    Intel® QPI Speed: 4.8 GT/s
    # of QPI Links: 1
    Instruction Set: 64-bit
    Instruction Set Extensions: SSE4.2


    Embedded Options Available: No
    Lithography: 45 nm
    Max TDP: 130 W
    VID Voltage Range
    0.800V-1.375V
    Recommended Customer Price: TRAY: $294.00, BOX : $305.00

    Memory Specifications:
    Max Memory Size (dependent on memory type): 24 GB
    Memory Types: DDR3-800/1066
    # of Memory Channels: 3
    Max Memory Bandwidth: 25.6 GB/s
    Physical Address Extensions: 36-bit
    ECC Memory Supported: Yes
    Graphics Specifications: Integrated Graphics: No
    Max CPU Configuration: 1
    TCASE
    67.9°C
    Package Size: 42.5mm x 45.0mm
    Processing Die Size: 263 mm2
    # of Processing Die Transistors: 731 million
    Low Halogen Options Available: See MDDS
    Advanced Technologies
    Intel® Turbo Boost Technology
    Yes
    Intel® Hyper-Threading Technology
    Yes
    Intel® Virtualization Technology (VT-x)
    Yes
    Intel® Trusted Execution Technology
    No
    AES New Instructions
    No
    Intel® 64
    Yes
    Idle States
    Yes
    Enhanced Intel SpeedStep® Technology
    Yes
    Intel® Demand Based Switching
    Yes
    Thermal Monitoring Technologies
    No
    Execute Disable Bit
    Yes
    Intel® VT-x with Extended Page Tables (EPT)
    Yes
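The 25.6 GB/s max memory bandwidth quoted above follows from the memory configuration: three DDR3-1066 channels, each 64 bits (8 bytes) wide. A quick check (my arithmetic, not Intel's):

```python
# W3550 peak memory bandwidth: channels x transfer rate x channel width.
channels = 3              # triple-channel DDR3
mt_per_s = 1066           # DDR3-1066, mega-transfers/s
bytes_per_transfer = 8    # each channel is 64 bits wide
bw_gb_per_s = channels * mt_per_s * bytes_per_transfer / 1000
print(round(bw_gb_per_s, 1))  # → 25.6
```

This is why only populating one or two channels on these triple-channel boards leaves render-relevant bandwidth on the table.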



Note: As I see some out-of-date Xeon processors coming up regularly in Dell machines now coming off 3- and 5-year leases that are not on Intel's compare list (first page of this thread), I'm putting specs for a couple of them here for easy reference. If you're not potentially in the market for a used commercial graphics machine, you can just skip these references.

    Post edited by Consumer573 on
  • Consumer573Consumer573 Posts: 282
    edited December 1969

    Some additional interesting threads:

(1) An RDNA member asks about Poser 8 and XP and wants to know the maximum amount of RAM.

    "Thread: Optimizing Memory with Poser 9 and poser Pro 2012":

    http://forum.runtimedna.com/showthread.php?65432-Optimizing-Memory-with-Poser-9-and-poser-Pro-2012

=>Admin Kera points to the Windows 32-bit (XP etc.) 3GB switch location:
    "There is a 3GB switch for 32bit OS ...: http://msdn.microsoft.com/en-us/wind...dware/gg487508


(2) An RDNA member asks the difference between Poser 9 and Poser Pro 2012:

    http://forum.runtimedna.com/showthread.php?66611-Poser-9-vs-Pro-2012
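For reference, enabling the 3GB switch mentioned in thread (1) differs by Windows version: on 32-bit XP it's a `/3GB` flag on the OS line in boot.ini, while on 32-bit Vista/7 it's a bcdedit setting. A sketch (standard Microsoft commands, but back up boot.ini/BCD first):

```shell
:: 32-bit Windows XP: append the /3GB flag to the OS line in C:\boot.ini, e.g.
::   multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP" /fastdetect /3GB

:: 32-bit Windows Vista/7: set the equivalent from an elevated command prompt
bcdedit /set IncreaseUserVa 3072

:: To undo it later:
bcdedit /deletevalue IncreaseUserVa
```

The usual caveats apply: the application must be built large-address-aware to benefit, and squeezing the kernel down to 1GB can destabilize some drivers, so test before committing to it.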

  • WandWWandW Posts: 2,819
    edited December 1969

Old drivers on the install disk; I had to do the same for SATA drivers on my last system build. I just copied the MB drivers to a USB stick and pointed the installer at them, IIRC...

  • Standard DeviationStandard Deviation Posts: 12
    edited December 1969

    I also use the 3gb switch on a Win XP 32bit setup.

It works okay, but you really need to add extra virtual memory as well, and not on your main drive (that means not on another partition of your main drive, either), or you will slow the system down. I disabled the virtual memory on the C drive and use a dedicated drive; a form of flash memory is good for the extra virtual memory. Big scenes can be slow (the system runs as fast as your slowest memory, and a poor CPU works hard). As ssgbryan hinted, open up Task Manager and watch the Performance tab for Page File use: load an item, check the PF; load another item, check the PF again. Then, when you render, make sure you still have a surplus in your PF, because each light in the scene is going to cost you PF memory.

    Use all the tricks you can to optimize your system for speed first.

The Xeon is a star for data transfer (that's what servers do best) and is good for use in a system with a mass of virtual memory.

  • Consumer573Consumer573 Posts: 282
    edited December 1969

Standard Deviation said:
I also use the 3gb switch on a Win XP 32bit setup.

It works okay, but you really need to add extra virtual memory as well, and not on your main drive (that means not on another partition of your main drive, either), or you will slow the system down.

Do you have any experience, say, hooking up a Thermaltake BlacX Duet hard-drive dock with a 320 or 500 GB SATA drive (the current price sweet spot) to your USB port? The Thermaltake, or equivalent, is a way of making a standard hard drive seem like a huge memory stick. It's even supposedly hot-swappable.

    Example:
    http://www.amazon.com/Thermaltake-Drives-Docking-Station-ST0014U/dp/B002MUYOLW

  • frank0314frank0314 Posts: 14,054
    edited December 1969

We've got one, but it's only a single. Very nice to have.

  • Standard DeviationStandard Deviation Posts: 12
    edited December 1969

    Thanks Frank0314, for answering.

Sorry! No personal experience of using a Thermaltake BlacX Duet, or the single. Sounds nice.

I mount drives internally, because the BIOS/motherboard will certainly see the additional virtual memory that way.

Sign In or Register to comment.