GTX 1080 Iray support?


Comments

  • hphoenix Posts: 1,335

    So really there is no point in having 2 cards in SLI then. Nice to know I can use just about any card for ZBrush and the others as long as it's Nvidia; I think I saw that's all that's required, I just need a lot of drive space for ZBrush saves. Most PCs offered here from local stores are pre-made; the ones that build their own or give you custom build options are gamer computers, with the rest being very basic machines built just for browsing online. There are a tiny number of places that offer computers classified for other areas (video editing, Photoshop and 3D creation, so they say), but they are very basic in what they offer, with small drives and low-end cards, so for the most part the gamer ones offer much more and can do more. If one GTX 1080 instead of 2 cards in SLI is more than enough to do whatever I want and multitask with several things running, that'll save me a fair bit, give me more options, and also reduce the amount of power supply needed, correct? So 850W is more than enough? 850W is the max power supply you can get with that Chronos Pro; those other two rigs are still too much money for not much more, even though one had 64GB memory. 32GB is still really good though, right? Anyway, the single GTX 1080 puts another computer in my reach. Yes, it's billed as a gamer machine, but the price allows me to get extra drives; it even has 8TB drive options, though that raises the price a fair bit, and 5 x 4TB drives should take a long time to fill up. Below are my two new choice builds; the Chronos is still basically the same but with one GTX 1080 card, and for the other, hopefully you can say the 850W power supply is enough for this build. Oh, and one thing with this single-card option: if it turns out the single GTX 1080 isn't enough, or good enough, when it's Iray-ready, I can add another card just for Iray and use the 1080 for everything else.

    Chronos Pro - high performance silent fans, ASUS Maximus VIII Gene, FROSTBYTE 240 Sealed Liquid Cooling System, Intel Core i7 6700K Quad-Core 4.0GHz, 850W Toughpower 80+ Gold, single GTX 1080, 32GB Corsair Dominator Platinum DDR4 3000MHz,
    RAID 1, 1TB SSD, 3 x 4TB Western Digital Black SATA 6.0Gb/s, 7200RPM, 64MB Cache

    Battlebox - High-Performance Ultra Silent Fans, EVGA Z170 Classified, FROSTBYTE 240 Sealed Liquid Cooling System for 1151 Socket, Intel Core i7 6700K Quad-Core 4.0GHz, Professional Processor Overclocking (yes or no), Thermal Compound: GELID GC-Extreme CPU Application, single GTX 1080, 32GB Corsair Dominator Platinum DDR4 3000MHz,
    RAID 1, 2 x operating system drives, both 1TB Samsung 850 Pro Series, HotSwap Drives RAID Configuration: RAID 10, 5 x 4TB Seagate Solid State Hybrid Drives, 850 Watt Corsair RM850

    Having 2 cards in SLI will benefit gaming.  If you turn off SLI (don't have to remove the bridge....) in the nVidia control panel for the DAZ Studio app, it won't use SLI and the cards will function just as if they weren't bridged.  But your games can have it turned on, so they will benefit.

     

  • MEC4D Posts: 5,249

    I missed this one. Well, then it sucks, and that is a game changer, but good news for those that use Iray or other rendering programs, unless the GPU driver for Iray will not perform as well with more than 2 cards. However, for Iray enthusiasts 2 cards are more than enough anyway, and one card is always better than CPU.

    hphoenix said:
    MEC4D said:

    NVIDIA stopped supporting SLI for more than 2 cards; you need to request a code from NVIDIA if you want to use more than 2 cards in SLI for games,

    and as hphoenix said, for Iray you don't need SLI or any unlock codes.

    Actually, just a few days ago nVidia announced it was NOT going to provide codes and that only Benchmarking programs (or programs developed to utilize beyond the second card) would support 3 and 4 way SLI.  So no support for 3 and 4 way profiles for SLI without the support being directly in the app (i.e., the driver is not going to provide that support).

     

     

     

  • MEC4D Posts: 5,249

    Both sound good, but I would go for the Corsair Dominator Platinum 2133MHz; the 3000MHz kits are expensive, and with profile 2 the 2133MHz will run at almost 3000MHz, so it's a waste of money to pay for 3000MHz.

    You can add a second 1080 or another card with this system and use both for rendering and one for games. The CPU is 91W, but on turbo it can get higher; my i7-5960X is 140W, but on turbo it can go to 495W. Also, after I changed my motherboard to the MSI X99A Godlike, I noticed today that Iray used 6 cores while rotating in the viewport and while rendering where I selected only GPU; I never saw more than 3 cores before, but it rendered so much faster. The i7-6700K is a good CPU, and unlocked, and at its standard clock speed of 4.0GHz you will have good support for the GPU and 8 threads for very fast work in ZBrush. 32GB is enough too, even with 2 x 1080; they say if you have as much RAM for the system as you have for the cards, you are good.
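That RAM rule of thumb is easy to sanity-check. A minimal sketch (the 8GB figure is the GTX 1080's published VRAM; the helper name is made up for illustration):

```python
# Rule of thumb quoted above: have at least as much system RAM
# as the total VRAM across your render cards.
# Assumption: GTX 1080 = 8 GB VRAM per card.

def enough_system_ram(system_gb, vram_gb_per_card, num_cards):
    """True if system RAM covers the combined VRAM of the render cards."""
    return system_gb >= vram_gb_per_card * num_cards

print(enough_system_ram(32, 8, 2))  # True: 32 GB covers 2 x 8 GB cards
```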

    I purchased myself a $78 single-slot GT 730 4GB for the monitors, ZBrush and other apps that don't need a lot of CUDA cores, and I keep my major cards just for GPU rendering. But if the new cards perform well with GPU rendering I may get one more in place of a 4th Titan X, though definitely water-cooled, so I'm waiting for the EVGA version, as I have never had any issues with EVGA products and their support is fantastic.

    So if you decide to go with the 850W, you will have to skip the professional overclocking; there will not be enough power for that if you run 2 cards. With one card, yes, but the CPU runs at 4.2GHz turbo without OC, which is very fast for what you need. It will make your ZBrush 250% faster, and for a hobby it is more than you need, so go for it.
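For the 850W question, a rough back-of-the-envelope budget helps. This is a sketch with assumed published TDPs (~91W for the i7-6700K, ~180W per GTX 1080), a lumped ~100W guess for board/drives/fans, and a common 1.5x headroom rule of thumb; actual turbo and OC draw will differ:

```python
# Rough PSU budget sketch for the builds discussed above.
# All wattages are assumptions from published TDPs, not measurements.

def psu_budget(num_gpus, gpu_tdp=180, cpu_tdp=91, rest=100, headroom=1.5):
    """Return (estimated sustained draw, recommended PSU size).
    The 1.5x headroom factor covers turbo/peak spikes."""
    total = cpu_tdp + num_gpus * gpu_tdp + rest
    return total, total * headroom

for gpus in (1, 2):
    draw, recommended = psu_budget(gpus)
    print(f"{gpus} x GTX 1080: ~{draw} W draw, ~{recommended:.0f} W PSU")
```

By this estimate an 850W unit covers two stock 1080s with some margin, which matches the advice above that OC headroom is what gets tight.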

     

    Chronos Pro - high performance silent fans, ASUS Maximus VIII Gene, FROSTBYTE 240 Sealed Liquid Cooling System, Intel Core i7 6700K Quad-Core 4.0GHz, 850W Toughpower 80+ Gold, single GTX 1080, 32GB Corsair Dominator Platinum DDR4 3000MHz,
    RAID 1, 1TB SSD, 3 x 4TB Western Digital Black SATA 6.0Gb/s, 7200RPM, 64MB Cache

    Battlebox - High-Performance Ultra Silent Fans, EVGA Z170 Classified, FROSTBYTE 240 Sealed Liquid Cooling System for 1151 Socket, Intel Core i7 6700K Quad-Core 4.0GHz, Professional Processor Overclocking (yes or no), Thermal Compound: GELID GC-Extreme CPU Application, single GTX 1080, 32GB Corsair Dominator Platinum DDR4 3000MHz,
    RAID 1, 2 x operating system drives, both 1TB Samsung 850 Pro Series, HotSwap Drives RAID Configuration: RAID 10, 5 x 4TB Seagate Solid State Hybrid Drives, 850 Watt Corsair RM850

     

  • 3delinquent Posts: 355

    Angelreaper, I'm in Oz and I got my local PC place to source and build my computer. I spoke to them about what I wanted to use it for and the little bit I know and understand about it all. It gives me some confidence that they sourced the X99A motherboard and i7-5960X that MEC4D mentioned. It's a hyperthreaded 8-core, I think. The board is maxed out with 64GB RAM (8 x 8GB). The processor has a cooling system with 2 fans and a radiator in the top of the box. There are also 2 fans in the front of the box and one at the back, with room for more. It has a 250GB SSD for the system and 2 mirrored 1TB SATA 6Gb/s HDDs. The power supply is 850W, and it came with a 27" 1080p HD display. They built that for $4000. I was going to put a Titan in it for around $1700 more when the 1080 thing happened, so I have the funds to get a 1080; I'm just holding off until everything is sorted and I know what I'm getting. If it costs as much as $1300 installed, I'll be up for $5300 altogether. By comparison to what I've been working with, it's magic! I'm hoping it's going to have enough capacity to allow me to continue learning at a much faster rate for a fair while. At some point it would be nice to add a second GPU and upgrade the power supply if necessary. There's room to add more drives when required. For what I could afford it seems pretty good to me. I can't wait to start using Iray with a GPU.

  • ANGELREAPER1972 Posts: 4,505

    Thank you MEC4D for helping me nail down what I can get. Now that I know, I can wait until next month; since next month is Christmas in July, they may offer a good deal.

  • ANGELREAPER1972 Posts: 4,505

    Sounds pretty good, 3delinquent, you've got a bit more grunt than the one I'm going for, though with mine I'll have larger drives and more of them: 2 x 1TB SSD drives and 5 x 4TB hybrids. I think MEC4D or someone else said hybrids are good. If you had those too, our prices would probably be closer to each other. Oh well, maybe not as powerful as yours, but still pretty good, and at least I won't run out of room for a very long time. I am a gamer, yes, but really on consoles, and probably not on PC in the past because of space on hard drives; with the next one, no worries about that, plus with the GTX 1080, what the hell, might as well, especially for those huge open-world games, love those. Another thing with this next PC: it is huge, not kidding. It may or may not be as big as the Millennium or Genesis models, but still freaking massive; check out some of the videos of others getting one if you haven't seen them. Tons of room inside, plenty of air circulation; this guy Linus took one apart to show how it's put together, and the way cables and other parts are managed was pretty good. Oh, btw, did you get a liquid cooling system or air? Next month is Christmas in July, so now that I know what I want to get in a setup, I can wait for next month; they might have a really good deal and/or bonus. They have monthly deals, but being Christmas in July they might have something pretty sweet, either a really good discount or a free upgrade to larger drives or something.

  • MarkM Posts: 27

    Alright guys, I have a question for you. What's more important for rendering Iray, number of cores or speed? My card is a GTX 670 4GB. I thought I might get one of the new GTX 1070s when they are compatible, since they have more RAM, but the GTX 980 Tis have more cores. Just curious, because I'm not about to spend $600 for a video card.

  • MEC4D Posts: 5,249
    edited June 2016

    In GPU rendering the speed difference is not much: about 2 seconds faster for the 1080, and a little less for the 1070, vs the 980 Ti. But in your case I would go for the 1070 and keep the 670 only for the monitor; do not pair it together with the 1070 for rendering, as you will lose performance. The 1080 and 1070 may be a good choice for people that render right now with CPU only, but if you already own older cards that are slower, don't pair them with faster cards.

    With GPU rendering the new cards are not much faster at all, but more memory makes them more attractive here. Still, nobody knows how well they will perform in Iray; it can be less, the same, or a little more, all depending on how well the Nvidia driver works.

    You see, according to Nvidia, GPUs do not run faster with an 8-core CPU than with a 4-core CPU, but the truth is different; it all depends on the system. A better video card with a weaker system or slower CPU will not perform the same way, as it is not only about how many CUDA cores or how much clock speed you have. Iray is a hybrid: even if you don't use the CPU for rendering and only the GPU, it still needs a fast CPU to do the tasks needed for GPU rendering, like tone mapping, the viewport, and other tasks that are CPU-dependent.

    This means if you have a slow processor, everything will work slower no matter how fast a GPU you have. You need to find a good balance and a good system that can support the number of cards you are using.
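For MarkM's cores-vs-speed question, a first-order comparison is just cores times clock. This is a rough sketch using Nvidia's published reference core counts and boost clocks; it ignores architecture and driver differences, which is exactly the unknown for Iray here:

```python
# First-order throughput comparison: single-precision TFLOPS scales
# roughly with CUDA cores x clock (x2 for fused multiply-add).
# Figures are Nvidia's published reference specs, not measurements.

cards = {
    "GTX 980 Ti": (2816, 1.075),  # (CUDA cores, boost clock in GHz)
    "GTX 1070":   (1920, 1.683),
    "GTX 1080":   (2560, 1.733),
}

for name, (cores, ghz) in cards.items():
    tflops = 2 * cores * ghz / 1000.0
    print(f"{name}: ~{tflops:.1f} TFLOPS")
```

On paper the 1070's higher clock roughly makes up for its fewer cores vs the 980 Ti, so neither cores nor clock alone answers the question.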

    Below is the GPU rendering time for all dual cards; as you see, not much difference.

    MarkM said:

    Alright guys, I have a question for you. What's more important for rendering Iray, number of cores or speed? My card is a GTX 670 4GB. I thought I might get one of the new GTX 1070s when they are compatible, since they have more RAM, but the GTX 980 Tis have more cores. Just curious, because I'm not about to spend $600 for a video card.

     

    pic_disp.jpg
    650 x 383 - 46K
    Post edited by MEC4D on
  • hphoenix Posts: 1,335
    edited June 2016
    MEC4D said:

    Below is the GPU rendering time for all dual cards; as you see, not much difference.

    MarkM said:

    Alright guys, I have a question for you. What's more important for rendering Iray, number of cores or speed? My card is a GTX 670 4GB. I thought I might get one of the new GTX 1070s when they are compatible, since they have more RAM, but the GTX 980 Tis have more cores. Just curious, because I'm not about to spend $600 for a video card.

    That graph is for Premiere Pro 4K video encoding, not 3D work or Iray. It uses the video acceleration portion of the GPU, not CUDA. This is why that graph has the cards virtually identical (the only real difference is bandwidth, with faster cards getting a slight edge, as do higher bus-width cards). Those graphs really didn't demonstrate CUDA/arch/speed differences very much. They were a bit deceptive. Those graphs are really about handling 4K (2160p) monitors and 4K (2160p) video.

    We'll have to wait for more market penetration to start getting reliable and consistent averages on actual CUDA/3D performance of the new 1080/1070 cards vs the last generation.

    Now if they'll get their damn production issues fixed (or stop trying to keep them artificially small to boost demand) and let us get our hands on them....

    (hmm....that may be why they're keeping the production low.  So they have time to fix driver issues and such BEFORE too many people have them, so there isn't a huge kerfuffle about them not 'measuring up' to the hype....)

     

    Post edited by hphoenix on
  • MEC4D Posts: 5,249

    Do you use Adobe Premiere? No you don't, and I do; it uses CUDA (GPU acceleration) and OpenCL. There will be no magical drivers or improvements; the biggest success the cards have is only VR and games, the rest is just the way it is: a limited amount of CUDA cores that run at a higher clock speed, and the math is simple. It may be the new king in games and VR, but it is not in rendering and will not be, at least not the way everyone hoped for, and I will remind you again when we get the final Iray benchmarks. It is not a driver that Iray needs; Iray doesn't work with the new architecture, and OptiX doesn't support CUDA 8, so the software needs to be adjusted, but before that many other things, and how it is going to perform is up to Nvidia, and even if they can they will not do it, as these are cards for games and not workstation cards. Due to higher clock speed, GTX cards overheat too much with rendering on air cooling, limiting the performance of the GPU; those are Nvidia's suggestions, not mine.

    And I don't expect any superior performance; the new cards are not 2 times faster in games and will be even less so in rendering, and I am not going to argue about it again, as the cards are simply not worth my investment for the work I would like to use them for. I would rather spend $1000 on a water-cooled and superclocked Titan X that gives me better performance and memory in all my programs than the overpriced standard 1080, and I am very confident about that statement. I have it, I use it, I know what I have, and I don't believe in anything else until I test it for myself. Sadly the standard 1080 is not the small studio's choice; if that was the case I would have 4 of them already in my other rig, but I suspect next year may bring us something better than that.

     

    hphoenix said:
    MEC4D said:

    Below is the GPU rendering time for all dual cards; as you see, not much difference.

    MarkM said:

    Alright guys, I have a question for you. What's more important for rendering Iray, number of cores or speed? My card is a GTX 670 4GB. I thought I might get one of the new GTX 1070s when they are compatible, since they have more RAM, but the GTX 980 Tis have more cores. Just curious, because I'm not about to spend $600 for a video card.

    That graph is for Premiere Pro 4K video encoding, not 3D work or Iray. It uses the video acceleration portion of the GPU, not CUDA. This is why that graph has the cards virtually identical (the only real difference is bandwidth, with faster cards getting a slight edge, as do higher bus-width cards). Those graphs really didn't demonstrate CUDA/arch/speed differences very much. They were a bit deceptive. Those graphs are really about handling 4K (2160p) monitors and 4K (2160p) video.

    We'll have to wait for more market penetration to start getting reliable and consistent averages on actual CUDA/3D performance of the new 1080/1070 cards vs the last generation.

    Now if they'll get their damn production issues fixed (or stop trying to keep them artificially small to boost demand) and let us get our hands on them....

    (hmm....that may be why they're keeping the production low.  So they have time to fix driver issues and such BEFORE too many people have them, so there isn't a huge kerfuffle about them not 'measuring up' to the hype....)

     

     

  • hphoenix Posts: 1,335
    edited June 2016
    MEC4D said:

    Do you use Adobe Premiere? No you don't, and I do; it uses CUDA (GPU acceleration) and OpenCL. There will be no magical drivers or improvements; the biggest success the cards have is only VR and games, the rest is just the way it is: a limited amount of CUDA cores that run at a higher clock speed, and the math is simple. It may be the new king in games and VR, but it is not in rendering and will not be, at least not the way everyone hoped for, and I will remind you again when we get the final Iray benchmarks. It is not a driver that Iray needs; Iray doesn't work with the new architecture, and OptiX doesn't support CUDA 8, so the software needs to be adjusted, but before that many other things, and how it is going to perform is up to Nvidia, and even if they can they will not do it, as these are cards for games and not workstation cards. Due to higher clock speed, GTX cards overheat too much with rendering on air cooling, limiting the performance of the GPU; those are Nvidia's suggestions, not mine.

    And I don't expect any superior performance; the new cards are not 2 times faster in games and will be even less so in rendering, and I am not going to argue about it again, as the cards are simply not worth my investment for the work I would like to use them for. I would rather spend $1000 on a water-cooled and superclocked Titan X that gives me better performance and memory in all my programs than the overpriced standard 1080, and I am very confident about that statement. I have it, I use it, I know what I have, and I don't believe in anything else until I test it for myself. Sadly the standard 1080 is not the small studio's choice; if that was the case I would have 4 of them already in my other rig, but I suspect next year may bring us something better than that.

     

    hphoenix said:
    MEC4D said:

    Below is the GPU rendering time for all dual cards; as you see, not much difference.

    MarkM said:

    Alright guys, I have a question for you. What's more important for rendering Iray, number of cores or speed? My card is a GTX 670 4GB. I thought I might get one of the new GTX 1070s when they are compatible, since they have more RAM, but the GTX 980 Tis have more cores. Just curious, because I'm not about to spend $600 for a video card.

    That graph is for Premiere Pro 4K video encoding, not 3D work or Iray. It uses the video acceleration portion of the GPU, not CUDA. This is why that graph has the cards virtually identical (the only real difference is bandwidth, with faster cards getting a slight edge, as do higher bus-width cards). Those graphs really didn't demonstrate CUDA/arch/speed differences very much. They were a bit deceptive. Those graphs are really about handling 4K (2160p) monitors and 4K (2160p) video.

    We'll have to wait for more market penetration to start getting reliable and consistent averages on actual CUDA/3D performance of the new 1080/1070 cards vs the last generation.

    Now if they'll get their damn production issues fixed (or stop trying to keep them artificially small to boost demand) and let us get our hands on them....

    (hmm....that may be why they're keeping the production low.  So they have time to fix driver issues and such BEFORE too many people have them, so there isn't a huge kerfuffle about them not 'measuring up' to the hype....)

     

     

    Actually, I _DO_ use Adobe Premiere. Please don't make assumptions about what I do and don't use. I'm not using the new CC version. And while it might use OpenCL/CUDA for doing DCTs or such for speeding up encoding, I can guarantee that it is NOT remotely similar to the nature of PBR rendering OR 3D pipeline instructions. I stand by my statement. It is NOT a 3D rendering or even a PBR (Physically Based Rendering) related set of algorithms, and while it can speed up encoding, the very nature of frame-based compression algorithms does NOT parallelize well, as dependencies exist between blocks and the decomposition matrices.

    It is NOT a good comparison graph for basing 3D performance on.  Period.

    That graph was put in the article it was from to show comparative performance in another real-world use, namely 4K video encoding. It performs as well as the others, and they are all very close in performance. This is to be expected, as the nature of the problem is bound by the algorithm, not the number of cores. While more cores help, it is only helpful when they are a multiple of the resolution divided by the block size, as individual blocks have to be DCT'ed down first, then the various cores can take the adjacency matrices to compute the differences from the prior keyframes. So if you are doing 2160p encoding with an 8x8 block size, that's 129,600 blocks. So for each block (64 pixels) you can use ONE core to do the DCT (as the values depend on adjacent pixels, and you can't do it in parallel without everything stalling in wait-states for other values to finish resolving). So you STILL have a huge number of iterations, and even blocks have adjacency dependencies IF certain criteria are met in the bitstream (contrast between blocks, luminance cliffs and such).

    And given that the video acceleration included in EVERY GeForce card has hardware-based DCT computation blocks JUST FOR doing it, and much faster than trying to do it via CUDA/OpenCL, I'm pretty certain a LOT of that is still run through the general video acceleration.  And I think there is ONE of those blocks per SEGMENT of CUDA cores (they exist in blocks that share certain resources on the GPU) so even in the older cards (where they didn't have CUDA specifically, but pipelines) they had multiple ones, which allowed codec acceleration through the driver.  4k requires a LOT more per frame, so older cards struggle.  But with enough core segments, it gets a little faster.  But the speed ups from Maxwell to Pascal in this regard are minor, and mostly core clock speed based, not the number of CUDA cores.
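The block count quoted above checks out; a quick sanity check (assuming a 3840x2160 frame, which is what consumer "4K" usually means):

```python
# Tiling a 2160p frame into 8x8 DCT blocks, as described above.
width, height, block = 3840, 2160, 8
blocks_per_frame = (width // block) * (height // block)
print(blocks_per_frame)  # 129600, matching the figure in the post
```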

     

     

     

     

    Post edited by hphoenix on
  • MEC4D Posts: 5,249

    My assumption was correct, since you don't use the latest version. I have my information from Adobe documentation, and this is not just about 4K video encoding but real-time rendering and other rendering functions like filters, 3D, GPU scaling, etc. But this really doesn't matter at this point, since the 1080 will not be better with Adobe anyway, so I am glad I got myself another Titan X SC, as it renders faster. For me that is another reason to skip the 1080, since the things it is good for, generally games and VR, I don't use.

    And I'm not going to argue with you on this subject until there is definite proof. Speculation is fine, but you have no right to say that I am wrong, as you know nothing more at this moment than I do; you just assume I must be wrong. And if I am, I'll admit it, and I really hope I am, but until then nobody knows exactly the truth. From my practice I know it will not be what you are expecting it to be.

    It is simply a waste of money at the current price. I was right before, and I guess I can be right again, and no words can change my mind until I see it for myself in Iray. Do I really care? Not much; I have a super rig already anyway, and no Pascal cards can replace it at the current stage.

     

    hphoenix said:
    MEC4D said:

    Do you use Adobe Premiere? No you don't, and I do; it uses CUDA (GPU acceleration) and OpenCL. There will be no magical drivers or improvements; the biggest success the cards have is only VR and games, the rest is just the way it is: a limited amount of CUDA cores that run at a higher clock speed, and the math is simple. It may be the new king in games and VR, but it is not in rendering and will not be, at least not the way everyone hoped for, and I will remind you again when we get the final Iray benchmarks. It is not a driver that Iray needs; Iray doesn't work with the new architecture, and OptiX doesn't support CUDA 8, so the software needs to be adjusted, but before that many other things, and how it is going to perform is up to Nvidia, and even if they can they will not do it, as these are cards for games and not workstation cards. Due to higher clock speed, GTX cards overheat too much with rendering on air cooling, limiting the performance of the GPU; those are Nvidia's suggestions, not mine.

    And I don't expect any superior performance; the new cards are not 2 times faster in games and will be even less so in rendering, and I am not going to argue about it again, as the cards are simply not worth my investment for the work I would like to use them for. I would rather spend $1000 on a water-cooled and superclocked Titan X that gives me better performance and memory in all my programs than the overpriced standard 1080, and I am very confident about that statement. I have it, I use it, I know what I have, and I don't believe in anything else until I test it for myself. Sadly the standard 1080 is not the small studio's choice; if that was the case I would have 4 of them already in my other rig, but I suspect next year may bring us something better than that.

     

    hphoenix said:
    MEC4D said:

    Below is the GPU rendering time for all dual cards; as you see, not much difference.

    MarkM said:

    Alright guys, I have a question for you. What's more important for rendering Iray, number of cores or speed? My card is a GTX 670 4GB. I thought I might get one of the new GTX 1070s when they are compatible, since they have more RAM, but the GTX 980 Tis have more cores. Just curious, because I'm not about to spend $600 for a video card.

    That graph is for Premiere Pro 4K video encoding, not 3D work or Iray. It uses the video acceleration portion of the GPU, not CUDA. This is why that graph has the cards virtually identical (the only real difference is bandwidth, with faster cards getting a slight edge, as do higher bus-width cards). Those graphs really didn't demonstrate CUDA/arch/speed differences very much. They were a bit deceptive. Those graphs are really about handling 4K (2160p) monitors and 4K (2160p) video.

    We'll have to wait for more market penetration to start getting reliable and consistent averages on actual CUDA/3D performance of the new 1080/1070 cards vs the last generation.

    Now if they'll get their damn production issues fixed (or stop trying to keep them artificially small to boost demand) and let us get our hands on them....

    (hmm....that may be why they're keeping the production low.  So they have time to fix driver issues and such BEFORE too many people have them, so there isn't a huge kerfuffle about them not 'measuring up' to the hype....)

     

     

    Actually, I _DO_ use Adobe Premiere. Please don't make assumptions about what I do and don't use. I'm not using the new CC version. And while it might use OpenCL/CUDA for doing DCTs or such for speeding up encoding, I can guarantee that it is NOT remotely similar to the nature of PBR rendering OR 3D pipeline instructions. I stand by my statement. It is NOT a 3D rendering or even a PBR (Physically Based Rendering) related set of algorithms, and while it can speed up encoding, the very nature of frame-based compression algorithms does NOT parallelize well, as dependencies exist between blocks and the decomposition matrices.

    It is NOT a good comparison graph for basing 3D performance on.  Period.

    That graph was put in the article it was from to show comparative performance in another real-world use, namely 4K video encoding. It performs as well as the others, and they are all very close in performance. This is to be expected, as the nature of the problem is bound by the algorithm, not the number of cores. While more cores help, it is only helpful when they are a multiple of the resolution divided by the block size, as individual blocks have to be DCT'ed down first, then the various cores can take the adjacency matrices to compute the differences from the prior keyframes. So if you are doing 2160p encoding with an 8x8 block size, that's 129,600 blocks. So for each block (64 pixels) you can use ONE core to do the DCT (as the values depend on adjacent pixels, and you can't do it in parallel without everything stalling in wait-states for other values to finish resolving). So you STILL have a huge number of iterations, and even blocks have adjacency dependencies IF certain criteria are met in the bitstream (contrast between blocks, luminance cliffs and such).

    And given that the video acceleration included in EVERY GeForce card has hardware-based DCT computation blocks JUST FOR doing it, and much faster than trying to do it via CUDA/OpenCL, I'm pretty certain a LOT of that is still run through the general video acceleration.  And I think there is ONE of those blocks per SEGMENT of CUDA cores (they exist in blocks that share certain resources on the GPU) so even in the older cards (where they didn't have CUDA specifically, but pipelines) they had multiple ones, which allowed codec acceleration through the driver.  4k requires a LOT more per frame, so older cards struggle.  But with enough core segments, it gets a little faster.  But the speed ups from Maxwell to Pascal in this regard are minor, and mostly core clock speed based, not the number of CUDA cores.

     

     

     

     

     

  • hphoenix Posts: 1,335
    MEC4D said:

    My assumption was correct, since you don't use the latest version. I have my information from Adobe documentation, and this is not just about 4K video encoding but real-time rendering and other rendering functions like filters, 3D, GPU scaling, etc. But this really doesn't matter at this point, since the 1080 will not be better with Adobe anyway, so I am glad I got myself another Titan X SC, as it renders faster. For me that is another reason to skip the 1080, since the things it is good for, generally games and VR, I don't use.

    And I'm not going to argue with you on this subject until there is definite proof. Speculation is fine, but you have no right to say that I am wrong, as you know nothing more at this moment than I do; you just assume I must be wrong. And if I am, I'll admit it, and I really hope I am, but until then nobody knows exactly the truth. From my practice I know it will not be what you are expecting it to be.

    It is simply a waste of money at the current price. I was right before, and I guess I can be right again, and no words can change my mind until I see it for myself in Iray. Do I really care? Not much; I have a super rig already anyway, and no Pascal cards can replace it at the current stage.

     

    hphoenix said:
    MEC4D said:

    Do you use Adobe Premiere? No you don't, and I do; it uses CUDA (GPU acceleration) and OpenCL. There will be no magical drivers or improvements; the biggest success the cards have is only VR and games, the rest is just the way it is: a limited amount of CUDA cores that run at a higher clock speed, and the math is simple. It may be the new king in games and VR, but it is not in rendering and will not be, at least not the way everyone hoped for, and I will remind you again when we get the final Iray benchmarks. It is not a driver that Iray needs; Iray doesn't work with the new architecture, and OptiX doesn't support CUDA 8, so the software needs to be adjusted, but before that many other things, and how it is going to perform is up to Nvidia, and even if they can they will not do it, as these are cards for games and not workstation cards. Due to higher clock speed, GTX cards overheat too much with rendering on air cooling, limiting the performance of the GPU; those are Nvidia's suggestions, not mine.

    And I don't expect any superior performance. The new cards are not 2 times faster in games, and will be even less so in rendering, and I am not going to argue about it again, as the cards are simply not worth my investment for the work I would like to use them for. I would rather spend $1000 on a water-cooled, superclocked Titan X that gives me better performance and memory in all my programs than the overpriced standard 1080, and I am very confident about that statement. I have it, I use it, I know what I have, and I don't believe in anything else until I test it for myself. Sadly, the standard 1080 is not the small studio's choice; if it were, I would have 4 of them in my other rig already. But I suspect next year may bring us something better.

     

    hphoenix said:
    MEC4D said:

    Below is the GPU rendering time for all Dual cards , as you see no much difference 

    MarkM said:

    Alright guys, I have a question for you. What's more important for rendering Iray, number of cores or speed? My card is a GTX 670 4GB. I thought I might get one of the new GTX 1070s when they are compatible, since they have more RAM, but the GTX 980 Tis have more cores. Just curious, because I'm not about to spend $600 for a video card.

    That graph is for Premiere Pro 4K video encoding.  Not 3D work or Iray.  It uses the video acceleration portion of the GPU, not CUDA.  This is why that graph shows the cards as virtually identical (the only real difference is bandwidth, with faster cards getting a slight edge, as do higher bus-width cards).  Those graphs really didn't demonstrate CUDA/arch/speed differences very much.  They were a bit deceptive.  Those graphs are really about handling 4K (2160p) monitors and 4K (2160p) video.

    We'll have to wait for more market penetration to start getting reliable and consistent averages on actual CUDA/3D performance of the new 1080/1070 cards vs. the last generation.

    Now if they'll get their damn production issues fixed (or stop trying to keep them artificially small to boost demand) and let us get our hands on them....

    (hmm....that may be why they're keeping the production low.  So they have time to fix driver issues and such BEFORE too many people have them, so there isn't a huge kerfuffle about them not 'measuring up' to the hype....)

     

     

    Actually, I _DO_ use Adobe Premiere.  Please don't make assumptions about what I do and don't use.  I'm not using the new CC version.  And while it might use OpenCL/CUDA for doing DCTs or such to speed up encoding, I can guarantee that it is NOT remotely similar in nature to PBR rendering OR 3D pipeline instructions.  I stand by my statement.  It is NOT a 3D-rendering or even PBR (physically based rendering) related set of algorithms, and while it can speed up encoding, the very nature of frame-based compression algorithms means they do NOT parallelize well, as dependencies exist between blocks and the decomposition matrices.

    It is NOT a good comparison graph for basing 3D performance on.  Period.

    That graph was put in the article to show comparative performance in another real-world use, namely 4K video encoding.  It performs as well as the others, and they are all very close in performance.  This is to be expected, as the nature of the problem is bound by the algorithm, not the number of cores.  While more cores help, it is only helpful when they are a multiple of the resolution divided by the block size, as individual blocks have to be DCT'ed down first; then the various cores can take the adjacency matrices to compute the differences from the prior keyframes.  So if you are doing 2160p encoding with an 8x8 block size, that's 129,600 blocks.  For each block (64 pixels) you can use ONE core to do the DCT (as the values depend on adjacent pixels, and you can't do it in parallel without everything stalling in wait-states for other values to finish resolving).  So you STILL have a huge number of iterations, and even blocks have adjacency dependencies IF certain criteria are met in the bitstream (contrast between blocks, luminance cliffs, and such).
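    The block arithmetic above can be checked with a quick sketch (the 3840x2160 frame size and 8x8 block size are the figures from this discussion, not anything taken from Premiere itself):

```python
# Block-count arithmetic for block-based (DCT-style) video encoding.
# Figures match the paragraph above: a 2160p frame at an 8x8 block size.
WIDTH, HEIGHT = 3840, 2160   # 4K UHD frame dimensions
BLOCK = 8                    # block edge length in pixels

blocks_per_frame = (WIDTH // BLOCK) * (HEIGHT // BLOCK)
pixels_per_block = BLOCK * BLOCK

print(blocks_per_frame)  # 129600 blocks, each needing its own DCT pass
print(pixels_per_block)  # 64 pixels whose values depend on their neighbours
```

    Even with thousands of CUDA cores, those 129,600 per-frame DCTs plus the inter-block dependencies leave a long serial tail, which is the point being made here.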

    And given that the video acceleration included in EVERY GeForce card has hardware-based DCT computation blocks JUST FOR doing it, and much faster than trying to do it via CUDA/OpenCL, I'm pretty certain a LOT of that is still run through the general video acceleration.  And I think there is ONE of those blocks per SEGMENT of CUDA cores (they exist in blocks that share certain resources on the GPU), so even the older cards (where they didn't have CUDA specifically, but pipelines) had multiple ones, which allowed codec acceleration through the driver.  4K requires a LOT more per frame, so older cards struggle.  But with enough core segments, it gets a little faster.  The speedups from Maxwell to Pascal in this regard are minor, though, and mostly based on core clock speed, not the number of CUDA cores.

     

     

    So if I use DAZ Studio 4.8, and not 4.9, I don't use DAZ Studio?

    And according to Adobe themselves, CUDA/OpenCL is NOT used for encoding and decoding.  It's primarily used for applying filters, scaling, deinterlacing, blending, and color-space conversions.  http://blogs.adobe.com/creativecloud/cuda-mercury-playback-engine-and-adobe-premiere-pro/

    The graph in question is, in fact, labelled "Premiere Pro 4k EXPORT TIME", which is ENCODING.  It may include 3d overlays, or any of the above.  It may not.  Since it isn't mentioned, I'd think it didn't.

    And your statement "...but you have no right to say that I am wrong..", is fallacious.  If I know something about the internals of an algorithm, firmware, hardware, or software that conflicts with what you are claiming, I have every right to.  You have every right to disagree, and to prove me wrong.  You seem to have no problems stating or implying that I am wrong.....

    But in this case, your statement "Below is the GPU rendering time for all Dual cards , as you see no much difference " with reference to this particular graph (which you attached and were directly referring to) is completely inaccurate.  THAT graph has NOTHING to do with GPU rendering; it is all about encoding time for video, which is only very tenuously connected to CUDA/OpenCL performance or the fixed 3D pipeline.

     

  • nickalamannickalaman Posts: 196

    No one knows exactly what the performance of the GTX1080 will be within Iray, but if we look at the peak compute values of each card, those usually tend to correspond quite well with render performance. So based on the numbers provided by NVidia, we should be looking at a performance gain of about 45%.

    GTX1080 TFLOPS = 8.2
    GTX 980TI TFLOPS = 5.63

    This might take a while because I’m sure the first drivers will not be that efficient, but eventually it should get there. As far as heat, they will be running hot because of the increase in frequency, but the aftermarket cooling designs from ASUS for example should cool things down.
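    The ~45% figure follows directly from the two quoted peak-compute numbers; here is a minimal sketch of that back-of-the-envelope estimate (theoretical peak only, not a measured Iray result):

```python
# Naive render-speedup estimate from peak single-precision compute.
# These are the theoretical TFLOPS figures quoted above, so the result
# is an upper-bound guess, not an Iray benchmark.
gtx_1080_tflops = 8.2
gtx_980ti_tflops = 5.63

gain_pct = (gtx_1080_tflops / gtx_980ti_tflops - 1.0) * 100.0
print(f"estimated gain: {gain_pct:.1f}%")  # about 45.6%
```

    As noted below, real performance depends on how close each architecture gets to its theoretical peak, so the actual Iray gain could land well above or below this number.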

  • MEC4DMEC4D Posts: 5,249

    @nickalaman I really hope it will be this way, but I am not going to tell people to buy the card based on my hopes; that would be wrong. The cards were supposed to be 2 times faster before too; are they? Nope. So it is better to be cautious and wait it out, and nobody should recommend it. There are a lot of people who purchased it based on the hype and can't even use it yet, or can't return it without a loss, or can use it only for other tasks. So I would be very careful; I'd rather blow on cold than get burned.

     

  • hphoenixhphoenix Posts: 1,335

    No one knows exactly what the performance of the GTX1080 will be within Iray, but if we look at the peak compute values of each card, those usually tend to correspond quite well with render performance. So based on the numbers provided by NVidia, we should be looking at a performance gain of about 45%.

    GTX1080 TFLOPS = 8.2
    GTX 980TI TFLOPS = 5.63

    This might take a while because I’m sure the first drivers will not be that efficient, but eventually it should get there. As far as heat, they will be running hot because of the increase in frequency, but the aftermarket cooling designs from ASUS for example should cool things down.

    Those numbers are what are referred to as "Peak Performance", and are theoretical maximums.  There are no guarantees as to how close to theoretical values the performance will actually get (from architecture to architecture) so even percentage-based values are suspect.  We should expect a performance gain, and a significant one.  But I wouldn't count on 45%.  Could be less, could even be more (if the prior arch was less efficient than the newer one and the newer one gets even closer to theoretical performance.)

    The heat isn't just frequency based.  The new arch uses FinFET process, which reduces the logic voltage, as well as better regulation on the board, providing for smoother power with less spiking.  Both of these reduce the power consumption by the card overall.  At stock clock speeds, the 10x0 Founders cards run cool and smooth.  Overclocking them runs into problems due to the fan/throttle issue that's already been discussed, but will likely be fixed with upcoming drivers or firmware updates.

     

  • MEC4DMEC4D Posts: 5,249
    edited June 2016

    That is a huge difference between DAZ Studio 4.8 and 4.9.2.70, especially in rendering speed and overall performance.

    but coming back to Adobe ..

    hphoenix said:
    It uses the video acceleration portion of the GPU, not CUDA

     

    NVIDIA said :

    CUDA performance optimizations
    providing up to 40% faster Premiere Pro CC performance vs. the out-of-the-box configuration. Simply upgrade to the NVIDIA CUDA driver, and get speed for free. Make sure that GPU Acceleration (CUDA) is enabled under your project settings.
    The GPU-accelerated Adobe Mercury Playback Engine, co-developed by Adobe and NVIDIA, leverages NVIDIA GPUs and the NVIDIA® CUDA® Parallel Computing Platform to deliver interactive, real-time editing and up to 24x¹ faster performance on final rendered exports. 

    and please don't twist this subject into rendering light paths and other stuff that has nothing to do with this subject, and you don't know how it will even perform in Iray without speculating. This reply is about using CUDA in Adobe Premiere, which you stated it does not use at all. Adobe products use GPU 3D ray-traced rendering and OptiX with selected CUDA cards from Nvidia. It is under General options; if you had ever used it, you would know better.

     

     

     

    hphoenix said:
    MEC4D said:

     

     

    So if I use DAZ Studio 4.8, and not 4.9, I don't use DAZ Studio?

    And according to Adobe themselves, CUDA/OpenCL is NOT used for encoding and decoding.  It's primarily used for applying filters, scaling, deinterlacing, blending, and color-space conversions.  http://blogs.adobe.com/creativecloud/cuda-mercury-playback-engine-and-adobe-premiere-pro/

    The graph in question is, in fact, labelled "Premiere Pro 4k EXPORT TIME", which is ENCODING.  It may include 3d overlays, or any of the above.  It may not.  Since it isn't mentioned, I'd think it didn't.

    And your statement "...but you have no right to say that I am wrong..", is fallacious.  If I know something about the internals of an algorithm, firmware, hardware, or software that conflicts with what you are claiming, I have every right to.  You have every right to disagree, and to prove me wrong.  You seem to have no problems stating or implying that I am wrong.....

    But in this case, your statement "Below is the GPU rendering time for all Dual cards , as you see no much difference " with reference to this particular graph (which you attached and were directly referring to) is completely inaccurate.  THAT graph has NOTHING to do with GPU rendering; it is all about encoding time for video, which is only very tenuously connected to CUDA/OpenCL performance or the fixed 3D pipeline.

     

     

    Post edited by MEC4D on
  • Richard HaseltineRichard Haseltine Posts: 100,784

    Please remember to address the subject, not other posters.

  • hphoenixhphoenix Posts: 1,335
    edited June 2016
    MEC4D said:

    That is a huge difference between DAZ Studio 4.8 and 4.9.2.70, especially in rendering speed and overall performance.

    In a FEW particular cases, yes.  And 4.8 does better in some.  But I still "use" DAZ Studio, and I'm familiar with its functionality and what features it leverages.

    MEC4D said:

    but coming back to Adobe ..

    hphoenix said:
    It uses the video acceleration portion of the GPU, not CUDA

     

    NVIDIA said :

    CUDA performance optimizations
    providing up to 40% faster Premiere Pro CC performance vs. the out-of-the-box configuration. Simply upgrade to the NVIDIA CUDA driver, and get speed for free. Make sure that GPU Acceleration (CUDA) is enabled under your project settings.
    The GPU-accelerated Adobe Mercury Playback Engine, co-developed by Adobe and NVIDIA, leverages NVIDIA GPUs and the NVIDIA® CUDA® Parallel Computing Platform to deliver interactive, real-time editing and up to 24x¹ faster performance on final rendered exports. 

    nVidia pushing some marketing on us?  Say it ain't so.  Adobe's blog post (which is still considered current, as PP CC is basically PP CS6 moved to the cloud) enumerates exactly WHAT is accelerated using CUDA/OpenCL.  The link ON the linked page to the CS6 updates lists the additions.  If a PP user applies a bunch of effects, filters, corrections, and more, but doesn't actually render them to the current timeline, and then does an export (so that it has to do all those renders AS IT ENCODES), then yes, you could potentially see a huge speedup.  The article the graph is from mentions nothing about that, only that it is a 4K encoding export.

    You are arguing against what Adobe itself says about its own software.

    MEC4D said:

    and please don't twist this subject into rendering light paths and other stuff that has nothing to do with this subject

    Light Path Tracing IS the basic algorithm of Iray.  It most certainly is germane to the topic.  However, in this case I was referring to DCTs, which are Discrete Cosine Transforms, the primary decomposition used to compress motion video.  It's been modified and I believe H.264 uses Wavelets instead.  Doesn't change the fundamentals of frame decomposition, though.

    MEC4D said:

    and you don't know how it will even perform in Iray without speculating

    I wasn't.  This was specifically about that particular graph being used to 'speculate' about poor performance when it wasn't even applicable (for the reasons noted.)

    MEC4D said:

    this reply is about using CUDA in Adobe Premiere, which you stated it does not use at all

    No, I said the test in that benchmark graph did not use it.  And according to Adobe, encoding and decoding DO NOT use CUDA/OpenCL acceleration.  And that is an encoding benchmark.  I also DID mention that Premiere might use CUDA/OpenCL for other operations.  Which it does, and is confirmed by Adobe.

    MEC4D said:

    based on your own experiences ?

    Yes.

    MEC4D said:

    not at all as you don't even own the proper needed card to even test it out

    Surprisingly, I do own a nVidia card with CUDA cores.  So yes, I can.

    MEC4D said:

    but you are always the first to argue on a subject. Based on what? Internet information you fish out of different old articles? Come on, man, everyone can do that, and that is so artificial.

    Not even going to respond to this, in deference to @RichardHaseltine's request.

     

    MEC4D said:

     

    hphoenix said:

    So if I use DAZ Studio 4.8, and not 4.9, I don't use DAZ Studio?

    And according to Adobe themselves, CUDA/OpenCL is NOT used for encoding and decoding.  It's primarily used for applying filters, scaling, deinterlacing, blending, and color-space conversions.  http://blogs.adobe.com/creativecloud/cuda-mercury-playback-engine-and-adobe-premiere-pro/

    The graph in question is, in fact, labelled "Premiere Pro 4k EXPORT TIME", which is ENCODING.  It may include 3d overlays, or any of the above.  It may not.  Since it isn't mentioned, I'd think it didn't.

    And your statement "...but you have no right to say that I am wrong..", is fallacious.  If I know something about the internals of an algorithm, firmware, hardware, or software that conflicts with what you are claiming, I have every right to.  You have every right to disagree, and to prove me wrong.  You seem to have no problems stating or implying that I am wrong.....

    But in this case, your statement "Below is the GPU rendering time for all Dual cards , as you see no much difference " with reference to this particular graph (which you attached and were directly referring to) is completely inaccurate.  THAT graph has NOTHING to do with GPU rendering; it is all about encoding time for video, which is only very tenuously connected to CUDA/OpenCL performance or the fixed 3D pipeline.

     

     

    Post edited by hphoenix on
  • MEC4DMEC4D Posts: 5,249

    You stated it does not use CUDA, and that was all that bothered me in your statement. With a CUDA device the rendering speed can go from hours to minutes, and you can't do that with a non-CUDA card at all, as it was designed specifically for CUDA cards only. What about 3D ray-traced rendering and OptiX? Without CUDA there is no GPU acceleration in Adobe products. Besides, not every CUDA-enabled card will do it, just a group of selected cards.

    4.8 renders better in some cases? lol, definitely not with Iray; it is a bad apple, not to mention it runs the old material shaders with rather bad SSS and other bugs. The latest version is the fastest with Iray to date: I need 200 ms less just to start the Iray viewport, which is a huge improvement, and with pixel reduction it even works with my CPU in real-time photoreal mode without a GPU. Plus there is a lot more good stuff and features beyond rendering that 4.8 sadly does not offer; it should be forgotten already, too bad.

    My apologies for getting too personal, but I like to be direct and less shady with people so you know what is on my mind; I'd rather tell you to your face than behind your back. But this doesn't mean I am attacking or judging; we just have a spicier conversation than usual. If we agreed on everything, things would get boring very quickly. To make it short: at my age I'd rather spend less time on BS, LOL, and go straight to the core, without any evil behind it all.

    hphoenix said:
    MEC4D said:

    That is a huge difference between DAZ Studio 4.8 and 4.9.2.70, especially in rendering speed and overall performance.

    In a FEW particular cases, yes.  And 4.8 does better in some.  But I still "use" DAZ Studio, and I'm familiar with its functionality and what features it leverages.

    MEC4D said:

    but coming back to Adobe ..

    hphoenix said:
    It uses the video acceleration portion of the GPU, not CUDA

     

    NVIDIA said :

    CUDA performance optimizations
    providing up to 40% faster Premiere Pro CC performance vs. the out-of-the-box configuration. Simply upgrade to the NVIDIA CUDA driver, and get speed for free. Make sure that GPU Acceleration (CUDA) is enabled under your project settings.
    The GPU-accelerated Adobe Mercury Playback Engine, co-developed by Adobe and NVIDIA, leverages NVIDIA GPUs and the NVIDIA® CUDA® Parallel Computing Platform to deliver interactive, real-time editing and up to 24x¹ faster performance on final rendered exports. 

    nVidia pushing some marketing on us?  Say it ain't so.  Adobe's blog post (which is still considered current, as PP CC is basically PP CS6 moved to the cloud) enumerates exactly WHAT is accelerated using CUDA/OpenCL.  The link ON the linked page to the CS6 updates lists the additions.  If a PP user applies a bunch of effects, filters, corrections, and more, but doesn't actually render them to the current timeline, and then does an export (so that it has to do all those renders AS IT ENCODES), then yes, you could potentially see a huge speedup.  The article the graph is from mentions nothing about that, only that it is a 4K encoding export.

    You are arguing against what Adobe itself says about its own software.

    MEC4D said:

    and please don't twist this subject into rendering light paths and other stuff that has nothing to do with this subject

    Light Path Tracing IS the basic algorithm of Iray.  It most certainly is germane to the topic.  However, in this case I was referring to DCTs, which are Discrete Cosine Transforms, the primary decomposition used to compress motion video.  It's been modified and I believe H.264 uses Wavelets instead.  Doesn't change the fundamentals of frame decomposition, though.

    MEC4D said:

    and you don't know how it will even perform in Iray without speculating

    I wasn't.  This was specifically about that particular graph being used to 'speculate' about poor performance when it wasn't even applicable (for the reasons noted.)

    MEC4D said:

    this reply is about using CUDA in Adobe Premiere, which you stated it does not use at all

    No, I said the test in that benchmark graph did not use it.  And according to Adobe, encoding and decoding DO NOT use CUDA/OpenCL acceleration.  And that is an encoding benchmark.  I also DID mention that Premiere might use CUDA/OpenCL for other operations.  Which it does, and is confirmed by Adobe.

    MEC4D said:

    based on your own experiences ?

    Yes.

    MEC4D said:

    not at all as you don't even own the proper needed card to even test it out

    Surprisingly, I do own a nVidia card with CUDA cores.  So yes, I can.

    MEC4D said:

    but you are always the first to argue on a subject. Based on what? Internet information you fish out of different old articles? Come on, man, everyone can do that, and that is so artificial.

    Not even going to respond to this, in deference to @RichardHaseltine's request.

     

    MEC4D said:

     

    hphoenix said:

    So if I use DAZ Studio 4.8, and not 4.9, I don't use DAZ Studio?

    And according to Adobe themselves, CUDA/OpenCL is NOT used for encoding and decoding.  It's primarily used for applying filters, scaling, deinterlacing, blending, and color-space conversions.  http://blogs.adobe.com/creativecloud/cuda-mercury-playback-engine-and-adobe-premiere-pro/

    The graph in question is, in fact, labelled "Premiere Pro 4k EXPORT TIME", which is ENCODING.  It may include 3d overlays, or any of the above.  It may not.  Since it isn't mentioned, I'd think it didn't.

    And your statement "...but you have no right to say that I am wrong..", is fallacious.  If I know something about the internals of an algorithm, firmware, hardware, or software that conflicts with what you are claiming, I have every right to.  You have every right to disagree, and to prove me wrong.  You seem to have no problems stating or implying that I am wrong.....

    But in this case, your statement "Below is the GPU rendering time for all Dual cards , as you see no much difference " with reference to this particular graph (which you attached and were directly referring to) is completely inaccurate.  THAT graph has NOTHING to do with GPU rendering; it is all about encoding time for video, which is only very tenuously connected to CUDA/OpenCL performance or the fixed 3D pipeline.

     

     

     

  • jerhamjerham Posts: 155

    Please remember to address the subject, not other posters.

    On the subject ;) ... Maybe DAZ can make a sticky/notice or something regarding support for video cards (especially the lack of GTX 10xx support at this time). The DAZ product page ( http://www.daz3d.com/daz_studio ) does not say much about version support (or I missed it).

    Ordered mine and receiving it today. I was aware that it is not supported yet, so I'm using my old card in the second PCIe slot until then.

    On the Nvidia forum I've read that they expect to release the new version around the SIGGRAPH event (24-29 July). In combination with the post from DAZ_Spooky... does that mean that DAZ Studio will support the GTX 10xx cards without DAZ Studio updates as soon as Nvidia releases the new CUDA driver version?

    The post on the Nvidia forum said they're working on it, but with no forecast completion date. And once Nvidia gets it into Iray, DAZ will need to add it to Studio and go through Q/A and possibly a beta cycle.

    That is a driver thing. It should not involve DS. 

     

  • MEC4DMEC4D Posts: 5,249

    There is already a new CUDA driver (version 8) that comes with the display driver to support the new cards (that has nothing to do with Iray).

    I guess we need to wait for the new Iray patch when Nvidia is ready next month, and that will surely be earlier than July 24.

    A simple CUDA driver upgrade will not fix it on your end, as you already have the latest one.

    Also, OptiX doesn't yet support CUDA 8 or the new architecture, and the new build is still in BETA.

    So all you can do is wait and see... and hope for a fast release.

    jerham said:

    Please remember to address the subject, not other posters.

    On the subject ;) ... Maybe DAZ can make a sticky/notice or something regarding support for video cards (especially the lack of GTX 10xx support at this time). The DAZ product page ( http://www.daz3d.com/daz_studio ) does not say much about version support (or I missed it).

    Ordered mine and receiving it today. I was aware that it is not supported yet, so I'm using my old card in the second PCIe slot until then.

    On the Nvidia forum I've read that they expect to release the new version around the SIGGRAPH event (24-29 July). In combination with the post from DAZ_Spooky... does that mean that DAZ Studio will support the GTX 10xx cards without DAZ Studio updates as soon as Nvidia releases the new CUDA driver version?

    namffuak said:

    The post on the Nvidia forum said they're working on it, but with no forecast completion date. And once Nvidia gets it into Iray, DAZ will need to add it to Studio and go through Q/A and possibly a beta cycle.

    That is a driver thing. It should not involve DS. 

     

     

  • hphoenixhphoenix Posts: 1,335
    MEC4D said:

    You stated it does not use CUDA, and that was all that bothered me in your statement. With a CUDA device the rendering speed can go from hours to minutes, and you can't do that with a non-CUDA card at all, as it was designed specifically for CUDA cards only. What about 3D ray-traced rendering and OptiX? Without CUDA there is no GPU acceleration in Adobe products. Besides, not every CUDA-enabled card will do it, just a group of selected cards.

    4.8 renders better in some cases? lol, definitely not with Iray; it is a bad apple, not to mention it runs the old material shaders with rather bad SSS and other bugs. The latest version is the fastest with Iray to date: I need 200 ms less just to start the Iray viewport, which is a huge improvement, and with pixel reduction it even works with my CPU in real-time photoreal mode without a GPU. Plus there is a lot more good stuff and features beyond rendering that 4.8 sadly does not offer; it should be forgotten already, too bad.

    My apologies for getting too personal, but I like to be direct and less shady with people so you know what is on my mind; I'd rather tell you to your face than behind your back. But this doesn't mean I am attacking or judging; we just have a spicier conversation than usual. If we agreed on everything, things would get boring very quickly. To make it short: at my age I'd rather spend less time on BS, LOL, and go straight to the core, without any evil behind it all.

     

    My exact quote from back where this started: "That graph is for Premiere Pro 4K video encoding.  Not 3D work or Iray.  It uses the video acceleration portion of the GPU, not CUDA."  (emphasis added)

    I didn't claim it didn't use CUDA or OpenCL at all, or that it wouldn't help rendering performance, just not in video encoding.  And there is more to a GPU than CUDA or GPGPU instruction sets, as they've been accelerating other parts since way back before big 3D stuff.  Can a card do things with CUDA/OpenCL as well?  Of course.  But that doesn't mean it is the most efficient or fastest way to do it.  Not every algorithm scales to massive parallelism well (which is what using CUDA/OpenCL for performance is based on.)  Video compression and encoding is one of those algorithms that doesn't parallelize well.  For other raster operations, CUDA and OpenCL can give massive speed-ups, allowing greatly reduced time to generate effects, color-shifts, and even 3D overlays.  But it doesn't help during compression and encoding.  That uses the Video Acceleration portion of the GPU segments, which is pretty much based on core clock speed and memory bandwidth.  Which is why that particular graph does not provide appropriate comparison with regard to 3D rendering performance.
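    As a toy illustration of that parallelism argument (hypothetical workloads, not Premiere's actual code): a per-pixel operation where every element is independent maps well onto thousands of cores, while a chain where each block's result feeds the next cannot be split up, no matter how many cores are available.

```python
# Toy contrast between data-parallel and serially-dependent work.

def per_pixel_brighten(pixels, amount=10):
    # Each output depends only on its own input, so every element could be
    # handed to a separate GPU core (the CUDA/OpenCL-friendly case).
    return [min(255, p + amount) for p in pixels]

def dependent_blocks(blocks):
    # Each value depends on the previously computed one, forcing sequential
    # execution -- the encoding-like case that parallelizes poorly.
    out, prev = [], 0
    for b in blocks:
        prev = (b + prev) % 256  # needs prev, so no parallel scheduling
        out.append(prev)
    return out

print(per_pixel_brighten([0, 100, 250]))  # [10, 110, 255]
print(dependent_blocks([5, 5, 5]))        # [5, 10, 15]
```

    The first function is "embarrassingly parallel"; the second has a strict dependency chain, which is why throwing more CUDA cores at it buys little.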

     

  • hphoenixhphoenix Posts: 1,335
    jerham said:

    Please remember to address the subject, not other posters.

    On the subject ;) ... Maybe DAZ can make a sticky/notice or something regarding support for video cards (especially the lack of GTX 10xx support at this time). The DAZ product page ( http://www.daz3d.com/daz_studio ) does not say much about version support (or I missed it).

    Ordered mine and receiving it today. I was aware that it is not supported yet, so I'm using my old card in the second PCIe slot until then.

    On the Nvidia forum I've read that they expect to release the new version around the SIGGRAPH event (24-29 July). In combination with the post from DAZ_Spooky... does that mean that DAZ Studio will support the GTX 10xx cards without DAZ Studio updates as soon as Nvidia releases the new CUDA driver version?

    namffuak said:

    The post on the Nvidia forum said they're working on it, but with no forecast completion date. And once Nvidia gets it into Iray, DAZ will need to add it to Studio and go through Q/A and possibly a beta cycle.

    That is a driver thing. It should not involve DS. 

     

    IF the 1000-series cards' Iray problem is just driver support, then DS won't require any changes, and it will just 'start working' in DAZ Studio when you update the driver.

    HOWEVER......If it also requires a change to Iray itself (i.e., they have to change the Iray renderer to do things differently with Pascal) then that WILL require a change in DS.....specifically, they'll have to update to the newer version of Iray.

     

  • nicsttnicstt Posts: 11,715
    MEC4D said:

    @nickalaman I really hope it will be this way, but I am not going to tell people to buy the card based on my hopes; that would be wrong. The cards were supposed to be 2 times faster before too; are they? Nope. So it is better to be cautious and wait it out, and nobody should recommend it. There are a lot of people who purchased it based on the hype and can't even use it yet, or can't return it without a loss, or can use it only for other tasks. So I would be very careful; I'd rather blow on cold than get burned.

     

    +1

  • MEC4DMEC4D Posts: 5,249

    Yeah, exactly. The cards are not worth using as workstation cards, definitely not at the current price, and I am not expecting wonders in Iray; no matter what, they will not be faster than my card anyway, and on top of that it has 4GB more to offer for just $100 more.

    hphoenix said:
    MEC4D said:

    4.8 renders better in some cases? lol, definitely not with Iray; it is a bad apple, not to mention it runs the old material shaders with rather bad SSS and other bugs. The latest version is the fastest with Iray to date: I need 200 ms less just to start the Iray viewport, which is a huge improvement, and with pixel reduction it even works with my CPU in real-time photoreal mode without a GPU. Plus there is a lot more good stuff and features beyond rendering that 4.8 sadly does not offer; it should be forgotten already, too bad.

    My apologies for getting too personal, but I like to be direct and less shady with people, so you know what is on my mind. I would rather tell you to your face than behind your back, but this doesn't mean I am attacking or judging; we just have a spicier conversation than usual. If we agreed on everything, things would get boring very quickly. To make it short: at my age I would rather spend less time on BS, LOL, and go straight to the core, without any evil behind it all.

     

    My exact quote, from back up where this started: "That graph is for Premiere Pro 4K video encoding.  Not 3D work or Iray.  It uses the video acceleration portion of the GPU, not CUDA."  (emphasis added)

    I didn't claim it didn't use CUDA or OpenCL at all, or that it wouldn't help rendering performance; just not in video encoding.  And there is more to a GPU than CUDA or GPGPU instruction sets, as GPUs have been accelerating other stages since well before the big 3D push.  Can a card do these things with CUDA/OpenCL as well?  Of course.  But that doesn't mean it is the most efficient or fastest way to do it.  Not every algorithm scales well to massive parallelism (which is what using CUDA/OpenCL for performance is based on), and video compression and encoding is one of the algorithms that doesn't parallelize well.  For other raster operations, CUDA and OpenCL can give massive speed-ups, greatly reducing the time to generate effects, color shifts, and even 3D overlays.  But they don't help during compression and encoding; that uses the video acceleration portion of the GPU, which depends mostly on core clock speed and memory bandwidth.  That is why that particular graph is not an appropriate comparison for 3D rendering performance.
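    The parallelism point above can be sketched in a few lines. This is purely illustrative (the function names and data are made up, and it is not Iray or Premiere code): a per-pixel operation has no dependencies between pixels, so it maps cleanly onto many cores, while a delta-style encoding loop carries a dependency from each element to the next and must run in order.

    ```python
    # Illustrative only: independent per-pixel work vs. a dependency chain.
    from multiprocessing.dummy import Pool  # thread pool; enough to show the shape

    def color_shift(pixel):
        # Independent per-pixel op: any pixel can run on any core, in any order.
        return min(pixel + 40, 255)

    def delta_encode(frame):
        # Each output depends on the previous input value, so the loop must
        # run sequentially -- the shape of work that resists parallelism.
        out, prev = [], 0
        for value in frame:
            out.append(value - prev)
            prev = value
        return out

    frame = [10, 12, 15, 15, 200, 210]
    with Pool(4) as pool:
        shifted = pool.map(color_shift, frame)  # parallel-friendly
    encoded = delta_encode(frame)               # inherently sequential

    print(shifted)  # [50, 52, 55, 55, 240, 250]
    print(encoded)  # [10, 2, 3, 0, 185, 10]
    ```

    Real video encoders do find some parallelism (across slices or frames), but the core prediction loops look like `delta_encode`, which is why throwing thousands of CUDA cores at them helps far less than at per-pixel raster work.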

     

     

  • ANGELREAPER1972 Posts: 4,505
    edited June 2016

    Angelreaper, I'm in Oz and I got my local PC place to source and build my computer. I spoke to them about what I wanted to use it for and the little bit I know and understand about it all. It gives me some confidence that they sourced the X99A motherboard and i7-5960X that Mec4D mentioned. It's a hyperthreaded 8-core, I think. The board is maxed out with 64GB RAM (8 x 8GB). The processor has a cooling system with 2 fans and a radiator in the top of the box. There are also 2 fans in the front of the box and one at the back, with room for more. It has a 250GB SSD for the system and 2 mirrored 1TB SATA 6Gb/s HDDs. The power supply is 850W, and it came with a 27" 1080p HD display. They built that for $4000. I was going to put a Titan in it for around $1700 more when the 1080 thing happened, so I have the funds to get a 1080; I'm just holding off until everything is sorted and I know what I'm getting. If it costs as much as $1300 installed, I'll be up for $5300 altogether. Compared to what I've been working with, it's magic! I'm hoping it's going to have enough capacity to let me keep learning at a much faster rate for a fair while. At some point it would be nice to add a second GPU and upgrade the power supply if necessary. There's room to add more drives when required. For what I could afford, it seems pretty good to me. I can't wait to start using Iray with a GPU.

    Just wondering: is this the build they are offering you, but you haven't decided yet and are still deciding? Have you ever heard of mwave? https://www.mwave.com.au/tools/pc-custom-build-diy I was looking at the pro workstation. They have a lot of different types of PCs you can customize, with lots of options, probably too many at least for someone like me who knows nothing about these things; I'd probably put something together that would blow up. They have the new GTX 1070 cards but not the GTX 1080s, though they do have Titans. Hey, do you know what power supply we can go up to here with a standard-type plug? Like, is 850W our highest without having to get an electrician to convert? I know 1500W is out; do you reckon the same goes for 1200W and 1000W? Those other two I picked were because they sounded like the best builds with the easiest custom options I could get away with, but still costly. They may be a real good deal next month if they do the Christmas-in-July thing. Oh, btw, mwave has 4K monitors really cheap, starting at $549.

    Post edited by ANGELREAPER1972 on
  • MEC4D Posts: 5,249

    I checked the link you posted and they have OK prices, so you can have a very good and fast Iray rig for AU$3,002.95 / US$2,282.24 with one card, without a monitor; with a second card it will be around AU$3,600 / US$2,736.

    And you will be as fast as I was in my Iray spin video from last year, as that is close to what I had then.

    If you go for one card, you can get it much cheaper by reducing the memory, CPU, storage, and even the PSU.

     


     

  • hphoenix Posts: 1,335
    MEC4D said:

    Yeah, exactly. The cards are not worth being used as workstation cards, and definitely not at the current price. I am not expecting wonders in Iray either; no matter what, they will not be faster than my card anyway, and mine has 4GB more to offer on top of that, for just $100 more.


    Well, I personally expect that once they get the driver/Iray issues resolved, we will see performance gains similar to the current gaming gains, which, based on the benchmarks being posted, show a pretty consistent boost over a Titan X (stock, yes).  Not double, true; saying the card is 'twice as powerful' is that whole 'peak performance' thing that no one EVER sees in practice.  But the gaming framerates indicate a pretty consistent 25% - 45% boost at 4K compared to a Titan X (stock Titan vs. stock 1080), and the 1070 shows performance pretty consistently comparable to a Titan X (again, stock vs. stock).

    Now, obviously, with the possibility of nVidia intentionally hamstringing the 1000-series cards in Iray to prevent them from competing against Quadros, that may not be what we see.  Neither of us will know until they get the Iray fixes in place and resolve any bugs (no release is bug-free....)

    nVidia may also decide not to hamstring the 1000 series: let the pros upgrade cheap for 6-12 months, then spring Volta-based Quadros on them with specs that make the 1000 series look tame (at typical Quadro prices!) so they upgrade AGAIN, for even higher profits......I wouldn't put it past them.

    Obviously (again), the Titan X and the Quadro cards all have more VRAM, and in that respect I don't see any 1000-series card exceeding them.  The 1080 Ti (if/when we get specs) may offer 12GB, or even possibly 16GB; we won't know until later.  And some AIB partner may make 1000-series cards with additional memory.  They've done it before.....

    Such additional features are pure speculation at this point, based on prior releases and how they progressed (Kepler, Maxwell), but it'll be at least a month before we get any hard info on Iray performance.

    So until we do, let's just agree that it could go either way.
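    One thing worth making concrete about the "25% - 45% boost" figures above: a throughput boost and a render-time saving are not the same percentage. The sketch below runs that arithmetic with a made-up 60-minute baseline (the baseline and the framing are illustrative, not real Iray benchmarks):

    ```python
    # Rough arithmetic only: what a 25-45% throughput boost means for render
    # time, using a hypothetical 60-minute baseline rather than real benchmarks.
    def render_time(baseline_minutes, speedup_fraction):
        """Time on the faster card, if it does (1 + speedup) the work per second."""
        return baseline_minutes / (1.0 + speedup_fraction)

    baseline = 60.0  # hypothetical Titan X render: 60 minutes
    for boost in (0.25, 0.45):
        print("+%d%% throughput -> %.1f min" % (boost * 100, render_time(baseline, boost)))
    # A 45% throughput boost cuts render time by about 31%, not 45%;
    # time saved = 1 - 1/(1 + boost).
    ```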

     

  • MEC4D Posts: 5,249

    Oh @hphoenix, are you tired already? Hahaha. With respect, I love our conversations.

    I do hope so as well, but rendering as fast as gaming? I don't see it coming, as there are many factors: first, the competition with the Quadros; second, the higher temperatures while rendering, a lot higher than gaming, which could reduce performance; and last, very possibly a driver that is not optimal on purpose, to avoid everything I just said.

    But as always: if grandma had a mustache, she would be grandpa. Everything is "if, if, if", and we deserve some light on it already.

    I just hope for better cards. I have two years until my next upgrade, and by then I guess I will be running something better than a 1080 anyway. But I really do hope, for the regular users here, that they can enjoy Iray as I do with fewer resources and more efficiency; a fast card for the masses, as we all hoped for. One more month and the mystery will be solved.

    And whatever it will be, I am fine with it, as it does not affect me in any way, only the people that want to upgrade for a better Iray experience. If it really renders at least 150% faster than a Titan X, I may consider upgrading my monitor card as well and use it for running OpenGL, but only a water-cooled version, since I have an open-case rig (Thermaltake P5).


     

     
