3090

Is it possible to use four cards in a single system? Is there a motherboard that can take four stock 3090 cards without modifications? Thanks

Comments

  • I am jealous of your budget devil

  • Kitsumo Posts: 1,216

    Whatever store you shop at, select the filter for "4-way SLI Capable" and that should get you started.

    Of course you don't actually use SLI with Iray as far as I know, but the requirements are the same: it needs 4 PCIe slots adequately spaced to fit the cards.

  • kgrosser Posts: 141
    edited September 2020

    I am jealous of your budget devil

    God, so am I. I was so stoked for the 3080 with 20GB VRAM and will hold on to my cash until it's released with a 7nm respin and double GDDR6X chips next year.
    Anyway, on topic: no. Even EATX is just 7 slots; with the 3090 clocking in at 3 slots wide, four cards would add up to 12 slots' worth of PCIe spacing on the board and case (see the quick slot math after this post). There is no such thing, and it would bust every specified form factor. I have yet to see a contemporary motherboard sporting more than 3 full PCIe x16 slots. IIRC, quad SLI was also only possible with single-slot cards.

    Post edited by kgrosser on
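
    A quick back-of-the-envelope check of the slot math in the post above (a minimal sketch in Python; the 3-slot card width and 7-slot EATX limit are the figures from the post, not measurements):

    ```python
    # Slot-spacing arithmetic for four triple-slot cards on an EATX board.
    CARD_WIDTH_SLOTS = 3   # RTX 3090 coolers commonly occupy three slots
    NUM_CARDS = 4
    EATX_SLOTS = 7         # expansion slots on a standard EATX board/case

    slots_needed = CARD_WIDTH_SLOTS * NUM_CARDS
    print(f"Slots needed: {slots_needed}, EATX provides: {EATX_SLOTS}")
    # -> Slots needed: 12, EATX provides: 7; the cards simply don't fit.
    ```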
  • Kitsumo Posts: 1,216

    Wow. I didn't catch that part. So are all the 3090s going to be triple-slot? If so, that's insane. Never mind then, I don't think there are any motherboards that support 4 triple-slot cards. Plus the cooling requirements would be massive. Oh well, first world problems.

  • Plus, I don't think it would benefit one. VRAM doesn't add up in SLI/NVLink AFAIK, so you are "stuck" with 24GB anyway. Also, I don't feel the (current) Iray implementation really utilizes the raw power of multiple cards to speed up the render. I tried an external Cubic PCIe Expander loaded with 2 Titan GTX (12GB each) added to my 1080Ti and didn't see any speedup in either the viewport or rendering.

  • Plus, I don't think it would benefit one. VRAM doesn't add up in SLI/NVLink AFAIK, so you are "stuck" with 24GB anyway. Also, I don't feel the (current) Iray implementation really utilizes the raw power of multiple cards to speed up the render. I tried an external Cubic PCIe Expander loaded with 2 Titan GTX (12GB each) added to my 1080Ti and didn't see any speedup in either the viewport or rendering.

    It is possible to pool RAM, with some overhead I believe, for materials over NVLink, though speed will suffer as a result. It doesn't help with geometry and other resources, however.

  • It is possible to pool RAM, with some overhead I believe, for materials over NVLink, though speed will suffer as a result.

    Hm, why would one then go to the effort (and expense) if it doesn't speed things up?

  • SimonJM Posts: 5,997

    Maybe it's my age, but I saw the topic of 3090 and my first thought was this (which I used to work on)

  • Leana Posts: 11,812
    kgrosser said:

    It is possible to pool RAM, with some overhead I believe, for materials over NVLink, though speed will suffer as a result.

    Hm, why would one then go to the effort (and expense) if it doesn't speed things up?

    It would enable you to render a bigger scene with the GPU rather than having to use the CPU.

  • nonesuch00 Posts: 18,284
    SimonJM said:

    Maybe it's my age, but I saw the topic of 3090 and my first thought was this (which I used to work on)

    This is also the first computer they used to teach me to program, and I remember JCL and some of that stuff still. Thank goodness I programmed regular workstations after that class, as submitting a job and waiting a day or more for the results (and mistakes) made it hard to do good work or learn to be a good programmer. Sort of like the problem CPU rendering in Iray creates. laugh

  • SimonJM Posts: 5,997
    SimonJM said:

    Maybe it's my age, but I saw the topic of 3090 and my first thought was this (which I used to work on)

    This is also the first computer they used to teach me to program, and I remember JCL and some of that stuff still. Thank goodness I programmed regular workstations after that class, as submitting a job and waiting a day or more for the results (and mistakes) made it hard to do good work or learn to be a good programmer. Sort of like the problem CPU rendering in Iray creates. laugh

    For IBM kit I started on the 3081 (the first mainframe I worked on was an ICL-1904), and one main issue was, "hold on, you need to write a new running program for batch (JCL) and online (TSO)?", as the ICL just had George 3 macros which worked for either.

  • nicstt Posts: 11,715

    The 3090 is a triple-slot card, no?

    In effect, you would need 12 slots. Can four be linked using NVLink? No idea.

    Don't ask here, ask people who might actually know.

  • kgrosser said:

    Plus, I don't think it would benefit one. VRAM doesn't add up in SLI/NVLink AFAIK, so you are "stuck" with 24GB anyway. Also, I don't feel the (current) Iray implementation really utilizes the raw power of multiple cards to speed up the render. I tried an external Cubic PCIe Expander loaded with 2 Titan GTX (12GB each) added to my 1080Ti and didn't see any speedup in either the viewport or rendering.

    Iray scales very well with multiple GPUs. Consult RayDAnt's Iray Benchmark topic and check your setup.
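
    As a rough intuition for that scaling claim, a minimal render-time model (an illustration only; the 0.95 per-GPU efficiency is an assumed figure, not a measured Iray number):

    ```python
    # Idealized multi-GPU render-time model: path tracing is close to
    # embarrassingly parallel, so each added GPU contributes most of a
    # full GPU's worth of throughput.
    def estimated_render_time(single_gpu_seconds, num_gpus, scaling_eff=0.95):
        return single_gpu_seconds / (1 + (num_gpus - 1) * scaling_eff)

    for n in (1, 2, 4):
        print(f"{n} GPU(s): {estimated_render_time(600, n):.0f} s")
    # -> 600 s, 308 s, 156 s: close to linear for identical cards.
    ```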

  • Seven193 Posts: 1,103

    Don't forget about the power. 350W per card is 1400W of raw power for the GPUs alone, so a 2000W PSU would fit the bill. And then you would need to find a way to cool it, because I think it would get very, very hot.
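
    A rough tally of that estimate (a sketch; the 300W CPU/system figure and the 1.2x headroom factor are assumptions, not from the post):

    ```python
    # PSU sizing sketch for four 350 W cards.
    GPU_TDP_W = 350
    NUM_GPUS = 4
    CPU_AND_SYSTEM_W = 300   # assumed CPU, drives, fans, pumps
    HEADROOM = 1.2           # keep the PSU comfortably below full load

    total_draw_w = GPU_TDP_W * NUM_GPUS + CPU_AND_SYSTEM_W
    print(f"Estimated draw: {total_draw_w} W, "
          f"suggested PSU: {round(total_draw_w * HEADROOM, -2):.0f} W")
    # -> Estimated draw: 1700 W, suggested PSU: 2000 W
    ```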

  • Is it possible to run 4x 3090 cards in a single system? Yes, provided you're using a CPU with enough PCIe lanes (or a motherboard with a PLX chip to provide them) and a PSU capable of handling at least ~1400W just for the GPUs (see the lane-budget sketch after this post).
    Is it possible to run them without making any modifications, such as water-cooling the GPUs? Probably not the best idea.

    Water-cooling is the only solution, not just because of the temps and preventing the cards from melting, but also to actually fit them (as others have said, they're 3-slot cards).
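
    The lane-budget sketch mentioned above (the lane counts are illustrative assumptions; check your actual CPU and chipset spec sheets):

    ```python
    # PCIe lane budgeting for four GPUs at full x16 links.
    LANES_PER_GPU = 16
    NUM_GPUS = 4
    CPU_LANES = 64   # e.g. an HEDT platform; mainstream desktop CPUs
                     # expose far fewer (often 16-24 usable)

    lanes_needed = LANES_PER_GPU * NUM_GPUS
    print(f"Lanes needed: {lanes_needed}, CPU provides: {CPU_LANES}")
    # With fewer lanes the cards fall back to x8/x4 links, or you rely
    # on a PLX switch chip on the motherboard, as mentioned above.
    ```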
