New RTX-2070 Super, little to no render time improvement over CPU. Why?


Comments

  • jukingeo Posts: 711
    alexburca said:

    I am using a GeForce RTX 2070 graphics card with no issues. It sounds like you have a driver issue there.

    You should make sure you remove all instances of the old drivers first. From a command prompt, run:
     

    SET DEVMGR_SHOW_NONPRESENT_DEVICES=1

    devmgmt.msc

    Once Device Manager shows up on screen, select View > Show hidden devices.

    You should see some faded video card entries with transparent icons, probably from whatever video card you had installed previously. Right-click and delete them.

    Also, for the new RTX 2070, you should download and use the NVIDIA Studio drivers instead of the default Game Ready drivers.

    I hope this helps.

    Prior to installing the RTX 2070, I never had a GPU on my machine, so prior drivers wouldn't be an issue.  However, as Jason Galterio said above, it is a moot point now because I no longer have the card.  I expected much more than it delivered, and working or not, I wasn't going to keep it.  I ended up buying a GTX 1080 Ti Black for $100 less, and while I can't say the performance is overall better, it is certainly on par.  But the thing that matters more to me is the extra memory: 11GB over 8GB.  So for me and my use, the 1080 Ti is better.  It may not be that way on paper, but it works out for me.

    Thanks anyway for trying to help, but the RTX 2070 is long gone now.
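    For reference, the cleanup steps quoted above boil down to setting one environment variable before launching Device Manager. Here is a minimal Python sketch of those same steps (Windows-only; `devmgr_env` is just an illustrative helper name, not part of any tool mentioned in the thread):

    ```python
    import os
    import platform
    import subprocess

    def devmgr_env():
        """Copy of the current environment with the flag that makes
        Device Manager list non-present (ghost) devices."""
        env = dict(os.environ)
        env["DEVMGR_SHOW_NONPRESENT_DEVICES"] = "1"
        return env

    if platform.system() == "Windows":
        # Opens Device Manager with the flag set; then enable
        # View > Show hidden devices and delete any faded entries
        # left behind by the old video card.
        subprocess.run("devmgmt.msc", env=devmgr_env(), shell=True)
    ```

    This is equivalent to running the two commands from the same console session, so the variable is still set when devmgmt.msc starts.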

  • JamesJAB Posts: 1,760

    The other option would be to look at used Quadro cards.
    For a little over $1000 used, the Quadro P5000 is a 16GB GPU that will have roughly the same render speed as a GTX 1080.  This would allow you to render your larger scenes at a very respectable speed.

    Or if you want to prioritize video RAM over render speed, for $750 you can get a 24GB Quadro M6000 that will render at around the same speed as the GTX 980ti.

    Both of these cards will be a huge step up from your CPU rendering.

    BTW, Nvidia does sell larger VRAM cards.  The issue for us is the cost because they are Quadro cards and larger capacity ECC GDDR6 chips are expensive.
    48GB GDDR6: Quadro RTX 8000 - $5500
    32GB HBM2: Quadro GV100 - $9000

  • Padone Posts: 3,786
    JamesJAB said:

    for $750 you can get a 24GB Quadro M6000

    Not less than $1000-1500 on eBay right now... where do you get those prices from?

  • SpottedKitty Posts: 7,232
    JamesJAB said:

    The other option would be to look at used Quadro cards.
    For a little over $1000 used the Quadro P5000 is a 16GB GPU [...]

    [...] for $750 you can get a 24GB Quadro M6000 [...]

    I've just remembered that Nvidia is gradually phasing out their older cards from working in recent versions of Iray. Does anyone know if they do the same with older Quadro cards?

  • JamesJAB Posts: 1,760
    SpottedKitty said:

    The other option would be to look at used Quadro cards.
    For a little over $1000 used the Quadro P5000 is a 16GB GPU [...]

    [...] for $750 you can get a 24GB Quadro M6000 [...]

    I've just remembered that Nvidia is gradually phasing out their older cards from working in recent versions of Iray. Does anyone know if they do the same with older Quadro cards?

    It does not happen as often as you would think.
    The only Iray-capable GeForce/Quadro cards that have been removed are the old Fermi-based cards.  Even then, none of those cards have enough VRAM to be an effective render card anymore.
    The next series that will be removed at some point in the future is Kepler (GeForce 600/700 and Quadro K series cards).  Even now, most of the Kepler cards already don't have enough VRAM for effective Iray rendering.

    Here is a list of Nvidia GPU series and maximum VRAM values for each:
     

    Series    GeForce   Quadro
    Fermi     4GB       6GB
    Kepler    6GB       12GB
    Maxwell   12GB      24GB
    Pascal    12GB      24GB
    Volta     12GB*     32GB
    Turing    24GB      48GB

    * I do not count the CEO Edition because it was never sold.

  • Padone Posts: 3,786
    edited January 2020

    Thank you @JamesJAB. For some reason those didn't come up in my eBay search results. Indeed, a 24GB Maxwell card could be a very good alternative to Titans, especially at those prices.

    There are also 12GB K6000s at $300-400 that may be good alternatives as well.

    Post edited by Padone on
  • Flortale Posts: 611

    Jukingeo,

    DAZ Studio requires its users to learn basic memory management for Iray renders.

    GPU renders faster than CPU.

    Visit this link and read the text in all the promo images: https://www.daz3d.com/resource-saver-shaders-collection-for-iray. It offers good explanations so you can understand how to manage memory better.
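    To get a feel for why memory management matters, textures are usually the biggest VRAM consumer. The arithmetic below is just the standard uncompressed-bitmap size (width x height x channels x bytes per channel); Iray's actual footprint also includes geometry and working buffers, so treat this as a rough lower bound:

    ```python
    def texture_vram_mb(width, height, channels=4, bytes_per_channel=1):
        """Approximate VRAM for one uncompressed texture, in MiB."""
        return width * height * channels * bytes_per_channel / (1024 * 1024)

    # One 4096x4096 RGBA texture is already 64 MiB; a dozen such
    # maps on a single figure approaches 1 GB before geometry.
    print(texture_vram_mb(4096, 4096))  # 64.0
    print(texture_vram_mb(2048, 2048))  # 16.0
    ```

    This is why resource-saver shaders that downsize or drop unused maps can make the difference between a scene fitting on the GPU or falling back to CPU.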

  • JamesJAB Posts: 1,760
    Padone said:

    Thank you @JamesJAB for some reason those didn't come out in my ebay search list. Indeed a 24gb maxwell card could be a very good alternative to titans, especially at those prices.

    There are also k6000 12gb at $300-400 that may be good alternatives as well.

    For around $400 you would be better off grabbing a Maxwell-based GeForce GTX Titan X.  This card is pretty much a 12GB version of the Quadro M6000.

    Based on the complexity that you are talking about rendering, you would be much happier with a 24GB card.  This will allow you to focus more on creating art rather than "optimizing" a scene to fit it inside of hardware limitations.

  • The Kepler cards are slated for removal in an upcoming version of Iray; it was in the release notes a while back. Therefore, don't buy any K-series Quadros or GTX 6xx/7xx cards unless they're at fire-sale prices.

  • jukingeo said:

    That totally sucks!  I was under the impression that the purpose of linking cards is to add the memory.   So then what is the purpose of linking cards?   Now, on the flip side: if you have a dual-processor motherboard and two i9 processors, would that speed up render times over one CPU?

    Linking cards is mainly for the game industry. Most modern game engines use two cards to render the top and bottom halves of the view, or three cards to render the left, center, and right thirds. All that stuff is hard-coded and under the hood by the devs.

    Rendering PBR, however, will use all CUDA cores. That's why it's recommended you get twin cards or a single high-end card, like a Titan or Quadro, for animation reels. You can double your render speed without worrying about one card choking the other on VRAM.

     

  • sajiky Posts: 28

    There's something wrong with Iray rendering in DAZ 4.12, at least on some machines. I couldn't get it to render using the GPU no matter what I tried; it would always revert to CPU. Had to go back to 4.11 for the time being. In 4.11 I can render 99% converged, full 4K with high settings, in an hour and a half. In 4.12 I let one sit all night and it was maybe 20% through. Same scene.

  • sajiky said:

    There's something wrong with Iray rendering in DAZ 4.12, at least on some machines. I couldn't get it to render using the GPU no matter what I tried; it would always revert to CPU. Had to go back to 4.11 for the time being. In 4.11 I can render 99% converged, full 4K with high settings, in an hour and a half. In 4.12 I let one sit all night and it was maybe 20% through. Same scene.

    If it was a general issue, not just that scene, which driver are you using?

  • sajiky Posts: 28
    sajiky said:

    There's something wrong with Iray rendering in DAZ 4.12, at least on some machines. I couldn't get it to render using the GPU no matter what I tried; it would always revert to CPU. Had to go back to 4.11 for the time being. In 4.11 I can render 99% converged, full 4K with high settings, in an hour and a half. In 4.12 I let one sit all night and it was maybe 20% through. Same scene.

    If it was a general issue, not just that scene, which driver are you using?

    Well, the most recent driver, then I rolled back and tried the two before that as well.  I saw where one guy who had the problem said drivers before a certain version (can't remember the number) from several months ago all worked, so I tried a few from around then. I tried multiple scenes with every render setting I know of that could possibly change it.

    Every scene over about 1 GB of RAM (I have a GTX 970 8GB) would start with the GPU, then switch to CPU.  Did some research and found out I'm not alone. Went way down the rabbit hole, lol (I'm obsessed with learning DAZ): lots of finger-pointing at DAZ, at Nvidia, and at Microsoft. One guy said roll back to Studio 4.11. Only thing that worked.

  • sajiky Posts: 28

    Linking cards is mainly for the game industry. Most modern game engines use two cards to render the top and bottom halves of the view, or three cards to render the left, center, and right thirds. All that stuff is hard-coded and under the hood by the devs.

    Rendering PBR, however, will use all CUDA cores. That's why it's recommended you get twin cards or a single high-end card, like a Titan or Quadro, for animation reels. You can double your render speed without worrying about one card choking the other on VRAM.

     

    So would adding a second GTX 970 in SLI to my existing one be a better option DAZ-wise than replacing my 970 with a newer model?

  • JamesJAB Posts: 1,760

    SLI does nothing for Iray rendering.

    Also, the GTX 970 only came in a 4GB version.

    The extra card will speed up render times, provided that the render scene fits into the VRAM on each card.

    I would not spend money on any card with less than 8GB of VRAM.
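    The rule behind this advice is that Iray does not pool VRAM across cards: every GPU must hold the entire scene, or it drops out of the render. A sketch of that rule, with a hypothetical helper name and illustrative numbers:

    ```python
    def gpus_that_can_render(scene_gb, gpu_vram_gb):
        """Return the VRAM sizes of the GPUs that can participate.
        Iray does not pool memory across cards, so any GPU whose
        VRAM can't hold the whole scene drops out; if none fit,
        the render falls back to CPU."""
        return [v for v in gpu_vram_gb if v >= scene_gb]

    # Two 4GB GTX 970s with a 5GB scene: neither card can take it,
    # so adding more 4GB cards never helps with large scenes.
    print(gpus_that_can_render(5.0, [4.0, 4.0]))  # []
    print(gpus_that_can_render(3.0, [4.0, 4.0]))  # [4.0, 4.0]
    ```

    Extra cards only multiply speed on scenes that already fit; they never extend capacity.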

  • TheKD Posts: 2,696

    I had that issue a lot in Win 10, and ended up "upgrading" Win 10 to Win 7 after a few months. The problem went away. In my years of Win 7 my renders have never dropped to CPU in the middle unless the driver crashes, which rarely happens. Most of my driver crashes happen when trying to use dForce, not rendering.

  • fastbike1 Posts: 4,078

    If you are using Win 10, up to 1 GB of VRAM will be "reserved for system use" and will generally be unavailable for Iray rendering. That will leave a GTX 970 with about 3 GB, which will make it very difficult to render newer items in any but the absolute simplest scenes. Studio 4.12 will render Iray consistently within the VRAM limits of your card with Nvidia driver 430.86, at least for GTX cards. I suspect RTX as well, but I have no personal experience.
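    The arithmetic above can be written out as a tiny helper. The ~1 GB reservation is the post's own estimate, not an exact figure, and the function name is just illustrative:

    ```python
    def usable_iray_vram_gb(total_gb, on_windows10=True, reserved_gb=1.0):
        """Rough VRAM left for Iray after the Windows 10 system
        reservation described above (~1 GB is the post's estimate)."""
        return total_gb - reserved_gb if on_windows10 else total_gb

    print(usable_iray_vram_gb(4.0))                     # 3.0 (e.g. a GTX 970)
    print(usable_iray_vram_gb(8.0))                     # 7.0 (e.g. an RTX 2070)
    print(usable_iray_vram_gb(8.0, on_windows10=False)) # 8.0
    ```

    The practical takeaway is to budget scenes against the post-reservation figure, not the number on the box.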
