GTX 1080 Iray support?


Comments

• nicstt Posts: 11,715
    edited November 2016

    A point people are forgetting is that if Iray Texture Compression is on, the *CPU* will have to convert the textures from jpg/png to raw then compress them before sending them to the VRAM.  This is extremely CPU dependent AND a reason it takes so long for the render to start showing results.

    Kendall

Which is why (unless I really need the GPU memory) I leave a render image open. The time taken is then just about the render itself.

    jnwggs said:
    nicstt said:

    Let's keep this civil, without a descent into tribalism, please.

    Yeh, sorry; I'm not being tribal, merely incredulous.

    Sounds incredible, but there is so much variance from render to render, depending upon the scene and the lighting and the camera angle (pointing at lights vs otherwise), and the host computer, etc... Where one render can be extremely fast on one card, another can be the opposite. Remember, we are comparing Maxwell to Pascal, and at the moment, nobody really knows how each platform will perform relative to the other within particular aspects of a render. Since I first wrote what I wrote, there have been more tests, and although my 1070 kicks butt over the 980ti in general, I've found, there have been times where I've seen the reverse happening and I haven't been able to nail it down to anything in particular. Just some scenes are really fast, and others are slower. I'm still experimenting with this to determine what affects what. From what I've read, the 1070 is faster than the 1080 at the moment, but that could be because the 1080 isn't fully operational yet in this "Beta" version, or perhaps the Pascal isn't optimally configured for it yet, or who knows? That's why it's still in Beta and not final release. It could also be because I'm using a "Founders Edition", and the 980ti I tested it against isn't.

    I don't have a 980ti, my friend does and I compared rendering the same scene on her computer with her 980ti to rendering on mine with my 1070. It's probably because my GTX 1070 is a "Founders Edition" though, more than anything else :)

     


So far, I've not seen anything that makes me want to upgrade; definitely considering the Xeon route - and will still be able to use the graphics cards I have.

    Post edited by nicstt on
• jnwggs Posts: 89

    A point people are forgetting is that if Iray Texture Compression is on, the *CPU* will have to convert the textures from jpg/png to raw then compress them before sending them to the VRAM.  This is extremely CPU dependent AND a reason it takes so long for the render to start showing results.

    Kendall

    So how do you turn that off? And, what do you lose if it is turned off?

     

  • jnwggs said:

    A point people are forgetting is that if Iray Texture Compression is on, the *CPU* will have to convert the textures from jpg/png to raw then compress them before sending them to the VRAM.  This is extremely CPU dependent AND a reason it takes so long for the render to start showing results.

    Kendall

    So how do you turn that off? And, what do you lose if it is turned off?

     

The thresholds (the image sizes above which first medium and then high compression are applied) are set in the Advanced tab of Render Settings (if I recall correctly, the defaults are 512 px for medium and 1024 px for high). Do bear in mind that turning compression off may well tip a scene over the GPU memory limit, though as discussed elsewhere compression doesn't always make a positive contribution.

• Has ANYONE heard when in the heck Nvidia might actually release Iray drivers for the 1070 and 1080? Mostly the 1070. I can't seem to find the answer online. Last I knew they were saying end of October and here we are in November. I am seriously SERIOUSLY losing my patience with it. lol

    If no one knows about the driver release, do you know how I might be able to yell at Nvidia??? lol

     

    Thanks!

• mjc1016 Posts: 15,001

Nvidia drivers are out...the latest driver package provides full support for the 10xx series cards; what isn't available yet is a fully updated version of Studio...

    http://www.daz3d.com/forums/discussion/95436/daz-studio-pro-beta-version-4-9-3-117-release-candidate-2-updated/p1

• Gator Posts: 1,294

Has ANYONE heard when in the heck Nvidia might actually release Iray drivers for the 1070 and 1080? Mostly the 1070. I can't seem to find the answer online. Last I knew they were saying end of October and here we are in November. I am seriously SERIOUSLY losing my patience with it. lol

    If no one knows about the driver release, do you know how I might be able to yell at Nvidia??? lol

     

    Thanks!

    Lots of us here have been rendering with Daz Studio now with our Pascal cards.  Read the past 2-3 pages.  You need to install the latest Public Beta Build along with the latest Nvidia drivers.

  •  

    Lots of us here have been rendering with Daz Studio now with our Pascal cards.  Read the past 2-3 pages.  You need to install the latest Public Beta Build along with the latest Nvidia drivers.

Awesome!  I was reading this thread and that seemed to be the case, but I'm glad you spelled it out.  I already googled how to download the public beta build.  It seems rather straightforward.


I am on the fence about upgrading and getting a 1080.  I would be going from a 980 (not Ti).  I like having the extra memory (8 GB vs 4).  From gaming benchmarks it seems like the 1080 is quite a bit more powerful than the 980 (regular and Ti).  Would it make as much of a difference in Iray renders?

If I decided to break the bank and get a new system with two 1080s in SLI, from what I gather I can't render in SLI.  Would it simply be a matter of turning off SLI to make the 2 cards work?  Could you make a profile for DS with SLI off?  (I know my way around the Nvidia control panel, but I've never had an SLI system.)

Also, I've got one more question to sneak in, kinda OT.  If you have an Intel octo- or deca-core CPU, would that be like giving a steroid shot to renders in 3Delight?  Would it have any impact on Iray?


     

• I have two 1080s. I tried them in SLI and not in SLI, and it made no difference to me; it's been pretty fast either way. Some have said cards in SLI can cause crashes, but again I've not had any problems there. I did a render with 30+ figures (a mixture of Gen 2 and 3) that took just over 2 hrs; I did one run in SLI and the other with it disabled, and there was only a few seconds' difference. Others with only a few figures only took a few minutes in SLI. Probably take your CPU into account too, that may make a difference. Here's what I've got:

    Processors: Intel Core i7 6700K Quad-Core 4.0GHz (4.2GHz TurboBoost)
    Motherboard: ASUS Maximus VIII Gene
     Windows 10 Pro
    Graphic Cards: Dual 8GB NVIDIA GTX 1080

     

  • Thanks for the helpful info, AngelReaper1972.  BTW you have a lot of very nice work in your galleries!

• Gator Posts: 1,294
    edited November 2016

     

    Lots of us here have been rendering with Daz Studio now with our Pascal cards.  Read the past 2-3 pages.  You need to install the latest Public Beta Build along with the latest Nvidia drivers.

Awesome!  I was reading this thread and that seemed to be the case, but I'm glad you spelled it out.  I already googled how to download the public beta build.  It seems rather straightforward.


I am on the fence about upgrading and getting a 1080.  I would be going from a 980 (not Ti).  I like having the extra memory (8 GB vs 4).  From gaming benchmarks it seems like the 1080 is quite a bit more powerful than the 980 (regular and Ti).  Would it make as much of a difference in Iray renders?

If I decided to break the bank and get a new system with two 1080s in SLI, from what I gather I can't render in SLI.  Would it simply be a matter of turning off SLI to make the 2 cards work?  Could you make a profile for DS with SLI off?  (I know my way around the Nvidia control panel, but I've never had an SLI system.)

Also, I've got one more question to sneak in, kinda OT.  If you have an Intel octo- or deca-core CPU, would that be like giving a steroid shot to renders in 3Delight?  Would it have any impact on Iray?


     

    On the CPU, dunno on 3Delight.  Doubt going overkill on the CPU will help much for Iray.  If your CPU isn't a bottleneck then it shouldn't matter much if at all.  FYI, I didn't compare render times, but when I moved my Titan-X Maxwell cards to an AMD FX-8320 from an i7-6700K I didn't notice much difference.  The CPU was never close to maxed out for either, the cards hit 100% utilization.

    I have some comparisons on Maxwell vs. Pascal (Titan X cards, same exact scene done for comparison):

    Yuge scene, 8 Genesis 3 figures SubD at 1:  16.7% faster

    Scene with 2 Genesis 3 figures visible, SubD at 3: 57% faster

    Scene with 3 Genesis 3 figures, SubD at 3: 38.6% faster.

    Post edited by Gator on
• wf Posts: 19

    I did some quick tests with one character only and I think the speed doubled up with a GTX 1080 vs CPU only (The 1080 rendering wasn't working with the previous version)

    Amazing! 

  • Thanks for the helpful info, AngelReaper1972.  BTW you have a lot of very nice work in your galleries!

    oh thank you princesamoyed :)

     

  • wf said:

    I did some quick tests with one character only and I think the speed doubled up with a GTX 1080 vs CPU only (The 1080 rendering wasn't working with the previous version)

    Amazing! 

Your speed should be about 5-10x faster (as little as 10% of the time) if you're rendering with a GPU vs a CPU.  A scene that takes 5 minutes on my 980 Ti generally takes 50 minutes on the CPU.

• I purchased a new computer with a stronger CPU and a DDR4 RAM upgrade over a very strong machine running two Titan SCs, only to find out Iray didn't work with the 1080.

    So I put the two Titans into the new machine, one 1080 in the old machine and one on the shelf.

Is it time to shuffle the Titans out and put the 1080s in, or wait beyond the available beta build?

    Advice I guess is what I'm looking for.

• areg5 Posts: 617

    A point people are forgetting is that if Iray Texture Compression is on, the *CPU* will have to convert the textures from jpg/png to raw then compress them before sending them to the VRAM.  This is extremely CPU dependent AND a reason it takes so long for the render to start showing results.

    Kendall

What is texture compression and how do I turn it off?

• hphoenix Posts: 1,335

    A point people are forgetting is that if Iray Texture Compression is on, the *CPU* will have to convert the textures from jpg/png to raw then compress them before sending them to the VRAM.  This is extremely CPU dependent AND a reason it takes so long for the render to start showing results.

    Kendall

While I usually agree with Kendall, I think this point is a bit misleading.

When you load the scene, the images on disk (in png/jpg/tiff/whatever) have to be loaded into memory so that they can be used by OpenGL as well as by Qt for display in the UI.  These will be in RAW format in memory.  3Delight had to convert them to mipmap formats, which did take some extra time at render start; 3Delight actually runs a shell program (TDLMAKE) to do the mip-map conversions.

    While DS _does_ have to send each and every texture image to the GPU (separate from the OpenGL context), it doesn't have to do any special compression.  I believe the Iray SDK does that (at least partially) and the GPU itself handles part of it.

    So the whole 'decompress' side is already handled long before render-time (specifically, it happens at load time for the scene).  The presence of the textures in OpenGL is one of the reasons that using the same card for display as for rendering uses up so much GPU VRAM.....the images exist TWICE on the card, along with the geometry.  While the consumption varies from OpenGL to Iray, it's still a considerable amount, especially for involved scenes.

    But the increase in time between render start and iray first iteration start (the setup/load time) probably isn't a huge difference whether texture compression is on or off.  But it WILL affect render time, as well as whether or not a given scene may fit in GPU memory.  My experiments don't show much of a percentage difference when it's on or off.

     

     

• Kendall Sears Posts: 2,995
    edited November 2016
    hphoenix said:

    A point people are forgetting is that if Iray Texture Compression is on, the *CPU* will have to convert the textures from jpg/png to raw then compress them before sending them to the VRAM.  This is extremely CPU dependent AND a reason it takes so long for the render to start showing results.

    Kendall

While I usually agree with Kendall, I think this point is a bit misleading.

When you load the scene, the images on disk (in png/jpg/tiff/whatever) have to be loaded into memory so that they can be used by OpenGL as well as by Qt for display in the UI.  These will be in RAW format in memory.  3Delight had to convert them to mipmap formats, which did take some extra time at render start; 3Delight actually runs a shell program (TDLMAKE) to do the mip-map conversions.

    While DS _does_ have to send each and every texture image to the GPU (separate from the OpenGL context), it doesn't have to do any special compression.  I believe the Iray SDK does that (at least partially) and the GPU itself handles part of it.

    So the whole 'decompress' side is already handled long before render-time (specifically, it happens at load time for the scene).  The presence of the textures in OpenGL is one of the reasons that using the same card for display as for rendering uses up so much GPU VRAM.....the images exist TWICE on the card, along with the geometry.  While the consumption varies from OpenGL to Iray, it's still a considerable amount, especially for involved scenes.

    But the increase in time between render start and iray first iteration start (the setup/load time) probably isn't a huge difference whether texture compression is on or off.  But it WILL affect render time, as well as whether or not a given scene may fit in GPU memory.  My experiments don't show much of a percentage difference when it's on or off.

     

     

    This was a stretch to reply to without violating NDA and I contemplated not replying at all.  Luckily I am not a DS exclusive developer and I have a different source that will allow me to make the point and not get into trouble:

    https://knowledge.autodesk.com/support/3ds-max/learn-explore/caas/CloudHelp/cloudhelp/2016/ENU/3DSMax/files/GUID-83D8A821-190E-46D2-A7AD-87EC9038F875-htm.html

    Please see the last few paragraphs.  

    and from the Iray SDK:

virtual void mi::neuraylib::ITexture::set_compression(Texture_compression compression) = 0   (pure virtual)

    Sets the texture compression method.

    Note

    This setting does not affect the referenced image itself, it only affects image data that has been processed by the render modes. For example, in order to save GPU memory processed image data can be compressed before being uploaded to the GPU.

    See Also

    mi::neuraylib::Texture_compression

    bold emphasis is mine.

While the compression is handled by Iray, it is not done on the GPU side.  Please note that Iray is both a CPU and GPU system and splits the work across the two as necessary to get the work done.  Pushing huge amounts of uncompressed data across the PCI-e bus will cause massive stalls since most desktop systems do not contain bus controllers capable of bus mastering at the level necessary to keep systems from freezing up.  On Xeon systems and multiprocessor systems with asynchronous bus-mastering circuitry (AMD Opteron systems), this is less of a problem.  Note that many 4x Xeon/Opteron Server systems that push huge data use PCI-e x8 buses (most half-length slots) because the bus controllers can keep the systems from stalling without the extra lines.  Stalling the bus for too long can lead to many, many problems.
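
For anyone curious what that hook looks like in code, here is a rough, untested sketch of calling it through the SDK. The transaction setup, the texture name ("my_texture"), and the exact enum value are placeholders/assumptions on my part; DS drives all of this internally, so this is purely illustrative:

    #include <mi/base/handle.h>
    #include <mi/neuraylib/itexture.h>
    #include <mi/neuraylib/itransaction.h>

    // Sketch only: grab an editable handle to a texture element in the scene DB
    // and ask Iray to compress its processed (render-mode) copy.
    void apply_medium_compression(mi::neuraylib::ITransaction* transaction)
    {
        mi::base::Handle<mi::neuraylib::ITexture> texture(
            transaction->edit<mi::neuraylib::ITexture>("my_texture"));   // placeholder name
        if (!texture.is_valid_interface())
            return;

        // Per the doc note above, this only affects the processed data uploaded by
        // the render modes (e.g. to the GPU), not the referenced source image itself.
        texture->set_compression(mi::neuraylib::TEXTURE_MEDIUM_COMPRESSION);
    }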

EDIT: The post editor is fighting me... While hphoenix is correct that some textures are loaded into memory, the number of them that are is actually quite small.  Textures like displacement, normal, bump and other specialty textures are not used by the DS viewport and therefore (to my knowledge) are not loaded until render time.  These represent a massive amount of extra processing that isn't done at figure/prop load time.  The exception to this is when the auxiliary viewport is in use.  Then the rendering process is always active and the additional textures are loaded and sent to the GPU.

    Kendall

    Post edited by Kendall Sears on
• prixat Posts: 1,588

Looking at some of the latest posts in the benchmarking thread, it looks like load-balancing has been fixed.

    No need to switch off the CPU or slower GPU anymore? Can anyone confirm that?

• hphoenix Posts: 1,335
    edited November 2016
    hphoenix said:

    A point people are forgetting is that if Iray Texture Compression is on, the *CPU* will have to convert the textures from jpg/png to raw then compress them before sending them to the VRAM.  This is extremely CPU dependent AND a reason it takes so long for the render to start showing results.

    Kendall

While I usually agree with Kendall, I think this point is a bit misleading.

When you load the scene, the images on disk (in png/jpg/tiff/whatever) have to be loaded into memory so that they can be used by OpenGL as well as by Qt for display in the UI.  These will be in RAW format in memory.  3Delight had to convert them to mipmap formats, which did take some extra time at render start; 3Delight actually runs a shell program (TDLMAKE) to do the mip-map conversions.

    While DS _does_ have to send each and every texture image to the GPU (separate from the OpenGL context), it doesn't have to do any special compression.  I believe the Iray SDK does that (at least partially) and the GPU itself handles part of it.

    So the whole 'decompress' side is already handled long before render-time (specifically, it happens at load time for the scene).  The presence of the textures in OpenGL is one of the reasons that using the same card for display as for rendering uses up so much GPU VRAM.....the images exist TWICE on the card, along with the geometry.  While the consumption varies from OpenGL to Iray, it's still a considerable amount, especially for involved scenes.

    But the increase in time between render start and iray first iteration start (the setup/load time) probably isn't a huge difference whether texture compression is on or off.  But it WILL affect render time, as well as whether or not a given scene may fit in GPU memory.  My experiments don't show much of a percentage difference when it's on or off.

     

     

    This was a stretch to reply to without violating NDA and I contemplated not replying at all.  Luckily I am not a DS exclusive developer and I have a different source that will allow me to make the point and not get into trouble:

    https://knowledge.autodesk.com/support/3ds-max/learn-explore/caas/CloudHelp/cloudhelp/2016/ENU/3DSMax/files/GUID-83D8A821-190E-46D2-A7AD-87EC9038F875-htm.html

    Please see the last few paragraphs.  

    and from the Iray SDK:

virtual void mi::neuraylib::ITexture::set_compression(Texture_compression compression) = 0   (pure virtual)

    Sets the texture compression method.

    Note

    This setting does not affect the referenced image itself, it only affects image data that has been processed by the render modes. For example, in order to save GPU memory processed image data can be compressed before being uploaded to the GPU.

    See Also

    mi::neuraylib::Texture_compression

    bold emphasis is mine.

While the compression is handled by Iray, it is not done on the GPU side.  Please note that Iray is both a CPU and GPU system and splits the work across the two as necessary to get the work done.  Pushing huge amounts of uncompressed data across the PCI-e bus will cause massive stalls since most desktop systems do not contain bus controllers capable of bus mastering at the level necessary to keep systems from freezing up.  On Xeon systems and multiprocessor systems with asynchronous bus-mastering circuitry (AMD Opteron systems), this is less of a problem.  Note that many 4x Xeon/Opteron Server systems that push huge data use PCI-e x8 buses (most half-length slots) because the bus controllers can keep the systems from stalling without the extra lines.  Stalling the bus for too long can lead to many, many problems.

     

While I'll acquiesce to the GPU not doing the majority of the front-load compression, it would be crazy for it to not use the built-in hardware-level compression systems to offset some of the load.  These are high-speed encoders (not CUDA programs) that handle DCT and other kinds of video/image compression (they're used for video acceleration).  While the CPU will have to prepare the image data to an extent, if nVidia isn't using ANY hardware-level encoding/decoding for their image storage in VRAM, that's just plain counter-productive.

I've read through the linked docs, and I come away with a very different take on what they mean.  IMO, within the neuraylib docs, GPU means the actual GPU, not the whole graphics card. Therefore, to save memory, the image data is kept in memory compressed, and is decompressed (taking GPU cycles) when loaded into the GPU during a render mode action.  For acceleration, it can be stored in VRAM uncompressed, so that while there is more data to move (bitblt ops are fast in-circuit) there are no additional decompression operations, resulting in a net speed increase.

    I could be wrong, but until I hear back from nVidia concerning exactly where the compression algorithms are implemented, we'll just have to disagree on interpretation.

EDIT: The post editor is fighting me... While hphoenix is correct that some textures are loaded into memory, the number of them that are is actually quite small.  Textures like displacement, normal, bump and other specialty textures are not used by the DS viewport and therefore (to my knowledge) are not loaded until render time.  These represent a massive amount of extra processing that isn't done at figure/prop load time.  The exception to this is when the auxiliary viewport is in use.  Then the rendering process is always active and the additional textures are loaded and sent to the GPU.

    ALL the images are loaded, as they are visible in the UI when the surfaces are expanded and the thumbnails are present.  The non-rendered images are not loaded to the GPU (even for OpenGL, where displacement/bump/normal/etc. aren't being used) but they ARE already loaded in system memory, and have been uncompressed.  This is one of the main reasons it takes so long to load scenes in DS.  Now, I cannot say if DS is loading those, reducing them for the thumbnails, and then disposing them.....but that seems very unlikely, as it would double the workload to have to re-load and decompress the image again EVERY time something changed or a render was started.   But it IS possible, as TDLMAKE has to create temporary image files (with mip-maps) for 3Delight which are then loaded at render time.  But if the images could already be there in memory, as long as sufficient memory was available, I would think DS would save time and CPU by keeping the images cached in memory.

    And until one of the DAZ devs confirm it one way or the other, we'll just have to wonder......

     

    Post edited by hphoenix on
• SimonWM Posts: 924

    I would like to see benchmarks in DAZ Studio of the GTX980ti vs. the GTX 1080.

• namffuak Posts: 4,145
    edited November 2016
    SimonWM said:

    I would like to see benchmarks in DAZ Studio of the GTX980ti vs. the GTX 1080.

    With Sickleyield's test scene and a pair of factory overclocked boards (980 TI, core clock 1240 MHz, memory clock 3304 MHz; 1080, core clock 1949 MHz, memory clock 4513 MHz)

    Both cards - 1 minute 50.8 seconds

1080 - 3 minutes 12.16 seconds

    980 TI - 3 minutes 18.8 seconds

On a long-running scene - 1080, 1 hour 3 minutes 49 seconds; 95% GPU load, roughly 57% power dissipation. 980 TI, 57 minutes 35 seconds; 97% GPU load, roughly 77% power dissipation. In real terms, power usage for the 1080 was 102 watts, the 980 TI was 192 watts. And this scene will render in just over 26 minutes with both cards in use.
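
A quick sanity check on the two-card Sickleyield number, just adding the per-card iteration rates (rough back-of-the-envelope, assuming everything else is equal): 1 / (1/192.2 s + 1/198.8 s) ≈ 98 s, i.e. about 1 minute 38 seconds. The measured 1 minute 50.8 seconds is in that ballpark; the extra dozen or so seconds is presumably the fixed scene-load/setup time that doesn't split across the cards.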

    Post edited by namffuak on
• It kept crashing before finishing. I dropped SubD down to 1, which brought me down to just 11,900 MB usage, and then I was able to render the scene.

    Could you do that scene with texture compression on and tell me what it reduced mem utilization to?

• SimonWM Posts: 924
    namffuak said:
    SimonWM said:

    I would like to see benchmarks in DAZ Studio of the GTX980ti vs. the GTX 1080.

    With Sickleyield's test scene and a pair of factory overclocked boards (980 TI, core clock 1240 MHz, memory clock 3304 MHz; 1080, core clock 1949 MHz, memory clock 4513 MHz)

    Both cards - 1 minute 50.8 seconds

1080 - 3 minutes 12.16 seconds

    980 TI - 3 minutes 18.8 seconds

On a long-running scene - 1080, 1 hour 3 minutes 49 seconds; 95% GPU load, roughly 57% power dissipation. 980 TI, 57 minutes 35 seconds; 97% GPU load, roughly 77% power dissipation. In real terms, power usage for the 1080 was 102 watts, the 980 TI was 192 watts. And this scene will render in just over 26 minutes with both cards in use.

    Thank you, namffuak.

• nicstt Posts: 11,715

I'm not seeing much of the ten times improvement Nvidia was stating; actually, I'm not seeing any of it. :)

• Kendall Sears Posts: 2,995
    edited November 2016
    nicstt said:

I'm not seeing much of the ten times improvement Nvidia was stating; actually, I'm not seeing any of it. :)

You're (generic you) likely not using the system(s) that nVidia was using to make the claims.  There is a lot more involved with the card's performance than just the number of CUDA cores and VRAM size.  PCI-e bus conflicts and such play a SIGNIFICANT part.  If you're using a midlevel MB or lower, don't expect much out of the 10 series over similarly spec'd 900 series.

    Watch the nVidia vids and you will see that the systems they are using are NOT sub $1000 systems.  Most of those workstations are $2000+ BEFORE the costs of the video cards and very likely not running 'i' series CPUs.

    Kendall

    Post edited by Kendall Sears on
• Icedream Posts: 6
    edited November 2016

The advantages of the GTX 1070 are not especially the speed, but the memory and power consumption:
- allows rendering scenes with >10 G3F/G3M characters instead of only 3
- power consumption 40% less than Maxwell

    Post edited by Icedream on
• deleted user Posts: 1,204
    edited November 2016

Testing out my 1070. Everything worked as it should except for one minor thing. For some odd reason I'm getting black lines around the edges of the opacity maps. Is anyone else experiencing this issue?

    Post edited by deleted user on
• nicstt Posts: 11,715
    nicstt said:

I'm not seeing much of the ten times improvement Nvidia was stating; actually, I'm not seeing any of it. :)

You're (generic you) likely not using the system(s) that nVidia was using to make the claims.  There is a lot more involved with the card's performance than just the number of CUDA cores and VRAM size.  PCI-e bus conflicts and such play a SIGNIFICANT part.  If you're using a midlevel MB or lower, don't expect much out of the 10 series over similarly spec'd 900 series.

    Watch the nVidia vids and you will see that the systems they are using are NOT sub $1000 systems.  Most of those workstations are $2000+ BEFORE the costs of the video cards and very likely not running 'i' series CPUs.

    Kendall

Well, my comment relates to others' posts; I'm reserving final judgment until I decide if the hype is even on the same planet as the actuality, and buy one or two myself.

(My system is more high-end, not including the gfx card, and my next is likely to be a Xeon.)

• hphoenix Posts: 1,335
    nicstt said:
    nicstt said:

I'm not seeing much of the ten times improvement Nvidia was stating; actually, I'm not seeing any of it. :)

You're (generic you) likely not using the system(s) that nVidia was using to make the claims.  There is a lot more involved with the card's performance than just the number of CUDA cores and VRAM size.  PCI-e bus conflicts and such play a SIGNIFICANT part.  If you're using a midlevel MB or lower, don't expect much out of the 10 series over similarly spec'd 900 series.

    Watch the nVidia vids and you will see that the systems they are using are NOT sub $1000 systems.  Most of those workstations are $2000+ BEFORE the costs of the video cards and very likely not running 'i' series CPUs.

    Kendall

Well, my comment relates to others' posts; I'm reserving final judgment until I decide if the hype is even on the same planet as the actuality, and buy one or two myself.

(My system is more high-end, not including the gfx card, and my next is likely to be a Xeon.)

Also, remember a lot of the 'hype' was around supporting VR applications (the whole 'VR Ready' branding)... so a big part of the really large numbers they were putting up depended on the fact that Pascal (if your VR app uses the correct, new code) can effectively get the second view and the barrel correction for free. So 'compared' to a non-Pascal card, you get over DOUBLE the frame rate (since VR apps have to render TWO views per frame, they count it as double the frame rate for Pascal).

    But a lot of VR apps (and other apps) haven't been updated to take advantage of Pascal's new features.  So they will NOT show those kinds of gains.  And regular (non-VR) apps will show only modest gains (20% - 30%) at best.

    But the hype looked really cool, showing a stereo display render at full quality at a rock solid 90fps....too bad very little supports Ansel, SMP, and the other new features of Pascal yet......

     

• mjc1016 Posts: 15,001
    hphoenix said:
    But the hype looked really cool, showing a stereo display render at full quality at a rock solid 90fps....too bad very little supports Ansel, SMP, and the other new features of Pascal yet......

     

    That's the main thing...yet.  Pascal is still pretty new and while it may not be around for a very long time, what it is the start of and what it is introducing will be.  This is pretty standard with any first/next gen product.

    The other thing to keep in mind, when dealing with Studio specific uses...a Pascal card with 8 GB of memory can handle much larger scenes that would be CPU only on older cards.  So ANY GPU rendering is a major gain.  That can turn a multi-hour/day long render into an hour or two...how do you calculate the performance increase then? 

    Of course, if your scenes are fitting on the card already, you won't be seeing those kinds of performance increases, but should see something.   And as been mentioned in this and similar threads there are many factors that will influence exactly how much...
