Comments
So the 980 Ti being faster than the 1080 (while costing less), was that expected? I thought since game performance was better on the 1080, people kind of expected it to be better for Iray too.
I'm having the same crashing issue that others have reported, but I have a feeling I know what the cause is.
It seems one thing we all have in common is AMD CPUs, and in at least one case the person had the same generation CPU that I do. I've seen this error before (written as its hex value rather than spelled out as "ILLEGAL_INSTRUCTION"), and it occurred when I compiled a C++ application with the wrong settings, resulting in embedded SSE 4.1 instructions. The Barcelona line of AMD processors (the Phenom II X6 is among them) does not support SSE 4.1 instructions (4a, but not 4.1) and will appcrash like this. I did a little googling along the way and found that No Man's Sky and another game used to crash with this exact same message on similar hardware, and in both cases it was an issue of the native binary requiring SSE 4.1 support.
Just a guess, and if it is the case, I doubt there's anything that can be done about it until the next release comes along (short of buying a new processor and probably a motherboard). I'm also wondering what boards those who are having problems are using. I have had various problems with my Gigabyte 890FX chipset board (centered around IOMMU resulting in a hard lock when booting Windows with Hyper-V installed), along with some odd USB 3.0 compatibility issues. Though I doubt these are directly related to this issue, I wouldn't be surprised if this were yet another gremlin caused by this buggy board.
One of the folks who posted didn't give enough info on their processor for me to determine whether it was a CPU without SSE 4.1. If you have a moment, install CPU-Z and grab the processor type (or just look for "SSE 4.1" in the box labeled "Instructions" -- if it's there, then my theory is bunk).
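If you'd rather not install CPU-Z, a few lines of C++ can answer the same question. A minimal sketch, assuming GCC or Clang (whose __builtin_cpu_supports queries CPUID at runtime); nothing here is from the Daz code:

```cpp
// cpu_check.cpp -- print whether this CPU has the instruction sets in question.
// Build: g++ cpu_check.cpp -o cpu_check
#include <cstdio>

int main() {
    // Each call inspects CPUID at runtime; no special compile flags needed.
    std::printf("SSE3:   %s\n", __builtin_cpu_supports("sse3")   ? "yes" : "no");
    std::printf("SSSE3:  %s\n", __builtin_cpu_supports("ssse3")  ? "yes" : "no");
    std::printf("SSE4.1: %s\n", __builtin_cpu_supports("sse4.1") ? "yes" : "no");
    return 0;
}
```

If it prints "no" for SSE 4.1, my theory is in play; if it prints "yes" and you still crash, the theory is bunk.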
The 1080 is supposed to be the replacement for the 980, not the Ti. Regardless, the main thing is power consumption: the 1080 is designed to use about 180 W, while the higher-end cards use 250 W. That is a limiter, so to speak, on the card. Maybe liquid cooling with an overclock could extend the performance, but that is not known yet. Game performance is dynamic, so it doesn't draw max power constantly.
I have a Phenom II X4 910 at 2.6 GHz.
I noticed a few items rendered in Iray, but others caused a crash. This is frustrating. I will probably take the card back and just stop 3D altogether. This isn't worth the hassle if I need a new CPU.
This is the beta release? Then it can't be expected to be without issues. I was running the released version on a Phenom X4 with an older GTX without issue.
Ok, I understand. Thanks.
GTX 1080, 12 GB RAM, Phenom X6 1090T, Win10 Pro 64-bit, Nvidia driver 375.63 (desktop, Win10 64-bit, international, WHQL); also tried 373.06. Daz Studio Pro BETA, version 4.9.3.117, Release Candidate 2!
thx
True, but if the 980 Ti is cheaper and faster, it's not important which one the 1080 is supposed to replace. Hopefully it will improve with newer driver/software versions, but except for the RAM and lower power consumption there aren't many reasons to buy a 1080 for Iray at the moment.
Maybe it's an issue with older CPUs that don't support SSE4.1/4.2.
That would explain why some people can render (AMD FX CPUs and Intel) and some cannot (AMD Phenom and maybe older Intel).
In the benchmark thread
http://www.daz3d.com/forums/discussion/53771/iray-starter-scene-post-your-benchmarks#latest
a 980 Ti overclocked to 1395 MHz scored 2 minutes 39.27 seconds.
My result with an overclocked 1080: 2 minutes 30.61 seconds.
Practically the same result, only about a 6 percent advantage over the 980 Ti (159.27 s / 150.61 s ≈ 1.06), very little...
In Octane the 1080 is 20-30% faster than the 980 Ti.
Since it may have been lost in my previous post: I'm not *positive* that's the issue (and it looks like we've got at least one person here with a processor without SSE 4.1 that is working, so it may not be the issue), but even if it is related to SSE 4.1, it's something that is usually fixed via a compiler flag on their end, so if that *is* the issue, it'll end up being fixed in a future build. I'm going to see if I can narrow down other possible causes over here a bit later this afternoon. In my case, empty scenes and really simple models render. Add any "figure" and the Iray preview or render crashes the entire app.
"True but if the 980ti is cheaper and faster its not important which one the 1080 is supposed to replace. Hopefully it will improve with higher driver/software versions but except for ram and lower power consumption there aren't many reasons to buy a 1080 for iray at the moment."
It is still beta, and newer drivers may improve things. V-Ray tests found the 1080 closer to the previous Titan X. Of course there's no way to know how that translates to another renderer.
http://www.evermotion.org/articles/show/10189/geforce-gtx-1080-tested-with-v-ray
I'm also having a crash on render; the error messages follow. I've added my motherboard model as well, since there seems to be a theme... BTW, I recently upgraded from a Radeon HD 7770, so I'm hoping this will be resolved in subsequent beta updates.
Iray VERBOSE - module:category(IRAY:RENDER): 1.3 IRAY rend progr: CPU: Processing scene...
Iray VERBOSE - module:category(IRAY:RENDER): 1.2 IRAY rend progr: CUDA device 0 (GeForce GTX 1060 6GB): Processing scene...
Iray VERBOSE - module:category(IRAY:RENDER): 1.10 IRAY rend stat : Geometry memory consumption: 42.0146 MiB (device 0), 0 B (host)
Reason for crash
DAZStudio.exe caused ILLEGAL_INSTRUCTION in module "M:\DAZ 3D\DAZ 3D\DAZStudio4 Public Build\libs\iray\libneuray.dll" at 0033:00000000D5040728, mi_neuray_factory()+7334328 byte(s)
My system: AMD Phenom(tm) II X4 810 Processor; instruction sets: MMX (+), 3DNow! (+), SSE, SSE2, SSE3, SSE4A, x86-64, AMD-V; mainboard model GA-MA790XT-UD4P.
Is anyone able to render a frame larger than the active viewport? No matter what custom pixel size I set, the rendered frame always matches the viewport size. It works fine in the non-public beta. Is there a trick I can use in the public beta to get any custom-size frame I want?
Just checking the code at the crash address shows it fails on a pshufb instruction. That's an instruction from the SSSE3 set, which is not supported by some AMD CPUs (it's not part of either SSE3 or SSE4A).
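For the curious, pshufb is what compilers emit for the SSSE3 _mm_shuffle_epi8 intrinsic, so a single translation unit built with -mssse3 (or an over-broad -march) is enough to plant one in a binary. A tiny illustration, assuming GCC or Clang; running it on a Phenom produces exactly this kind of illegal-instruction crash:

```cpp
// pshufb_demo.cpp -- the pshufb instruction at the source level.
// Build: g++ -mssse3 pshufb_demo.cpp -o pshufb_demo
// On a CPU without SSSE3 (e.g. Phenom II) this dies with ILLEGAL_INSTRUCTION.
#include <tmmintrin.h>  // SSSE3 intrinsics
#include <cstdint>
#include <cstdio>

int main() {
    alignas(16) std::uint8_t bytes[16] = {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15};
    alignas(16) std::uint8_t rev[16]   = {15,14,13,12,11,10,9,8,7,6,5,4,3,2,1,0};

    __m128i data = _mm_load_si128(reinterpret_cast<const __m128i*>(bytes));
    __m128i mask = _mm_load_si128(reinterpret_cast<const __m128i*>(rev));
    __m128i out  = _mm_shuffle_epi8(data, mask);  // compiles to pshufb

    alignas(16) std::uint8_t result[16];
    _mm_store_si128(reinterpret_cast<__m128i*>(result), out);
    std::printf("first byte after byte-reverse: %u\n", result[0]);  // prints 15
    return 0;
}
```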
Has someone already informed the developers?
I want to file a bug report, but "http://www.daz3d.com/redirects/bugs.php" is not reachable.
They are working on a fix
Thank you!
It looks like the bottom line from most of the results right now is that the 1080 is slower than the 980 Ti with Daz Iray in this first build. Theoretically the 1080 should be faster. Iray is a GPGPU application, so it really should scale roughly with FLOPS, which is basically the product of cores * clock * IPC. The 1080 performs (FP32) at around 9 TFLOPS vs ~6 TFLOPS for the 980 Ti, about a 50% increase. Some of the CUDA-based GPGPU applications, like CUDA nbody, are showing a 20-40% performance increase. The V-Ray RT benchmarks I've seen look more like a ~20% improvement. I guess that seems about right given that the GPU is not doing 100% floating-point ops during a compute.
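As a rough sanity check on those figures, peak FP32 is cores x clock x 2 (an FMA counts as two FLOPs per core per cycle). A back-of-envelope sketch using the published core counts and Nvidia reference boost clocks (retail cards will clock differently):

```cpp
// flops_estimate.cpp -- back-of-envelope peak FP32 throughput.
#include <cstdio>

int main() {
    struct Card { const char* name; double cores; double boost_ghz; };
    // Reference specs; factory-overclocked cards run higher.
    const Card cards[] = {
        {"GTX 980 Ti", 2816, 1.075},
        {"GTX 1080",   2560, 1.733},
    };
    for (const Card& c : cards) {
        double tflops = c.cores * c.boost_ghz * 2.0 / 1000.0;  // FMA = 2 FLOPs
        std::printf("%-10s ~%.1f TFLOPS FP32\n", c.name, tflops);
    }
    return 0;
}
```

That works out to roughly 6.1 and 8.9 TFLOPS, which is where the ~50% theoretical gap above comes from.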
Remember also that the software stack is pretty complicated: Daz -> Iray SDK/plugin -> CUDA toolkit/driver, and each piece is first-gen, or beta in the case of the Daz build. It seems more likely that performance optimization will come from improvements in the CUDA toolkit or the Iray plugin, but maybe there are some improvements Daz can make in how they code against the Iray SDK that would help as well. Certainly the Pascal architecture is different enough (64 shaders per SM) from the Maxwell architecture that figuring out how to optimally feed all the cores probably leaves room for optimization.
I'm happy to see that the Pascal cards can now GPU render in Daz and hopeful that after optimization they'll get performance where it should be.
Good catch! Yes, that's an obscure one... the name always makes me think someone put a typo in there. On the AMD side, SSSE3 is only supported by the Bulldozer, Bobcat, and Piledriver CPUs (the other user with this problem had a processor that does SSE 4.1 but not SSSE3, which explains that).
Of course, Intel has supported it almost everywhere going back to the Core 2 Duo, which is always frustrating. I don't blame the Daz guys. I'm guessing it's one big ugly C++ application and someone didn't get the compiler flags exactly right on this build (and it can be an artform to produce a binary that contains the optimisations and uses them on processors that support them, but falls back to compatible code on those that don't).
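For anyone wondering what that artform looks like in practice, one common pattern is function multi-versioning: compile the hot function once per instruction set and let a generated resolver bind the right one at load time. A hedged sketch using GCC-style target attributes (GCC/Clang on ELF targets; MSVC would need a manual CPUID dispatch, and the function name here is made up, not anything from the Daz code):

```cpp
// fmv_sketch.cpp -- sketch of function multi-versioning for the
// "one binary, optional SSSE3" problem. Build: g++ fmv_sketch.cpp -o fmv_sketch
#include <cstdio>

// The compiler may freely emit SSSE3 instructions (pshufb etc.) in this version.
__attribute__((target("ssse3")))
void shuffle_kernel() { std::printf("SSSE3 path\n"); }

// Baseline version, restricted to instructions every x86-64 CPU has.
__attribute__((target("default")))
void shuffle_kernel() { std::printf("baseline path\n"); }

int main() {
    // The generated resolver checks CPUID once and binds the fastest
    // version this machine actually supports -- no crash on a Phenom.
    shuffle_kernel();
    return 0;
}
```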
I'll post a comment on the beta page letting them know of this finding if one isn't already there.
Remember, this is still a beta release, so performance numbers will be subject to change... the fact that it is working (for many, with the promise of full functionality in the near future) should be enough for now. Performance optimization can come later, especially since, in most cases, it is not horribly worse than the previous generation of cards.
Use https://helpdaz.zendesk.com/hc/en-us/requests/new
No it isn't. First, you're assuming that CUDA 7 cores are the same as CUDA 8 cores, which they are not. Second, you're not factoring in the Maxwell vs. Pascal architecture, which isn't fully implemented yet. Third, you're not considering that the 980 Ti has been well hashed out within Daz Studio while the 1080 has barely begun. Fourth, you don't know if he tested the 1080 first and the 980 Ti second, in which case the first test needed to set up the canvas, which affects the time significantly. Fifth, you cannot compare a souped-up card like the 980 Ti to an off-the-shelf card like the 1080. The only way this test could be useful is if both cards were the same model (Ti or not Ti), had the same kind of CUDA cores (7 or 8), ran at the same clock speed, and used the same architecture (Maxwell or Pascal). They don't even have the same amount of memory. If I ran a test with an 8 GB scene, the 980 Ti wouldn't even get off the ground.
I've been using my 1070 since I updated the driver and installed the latest beta, and it kicks ass over the 980 Ti. I have both the GPU and OptiX turned on, and what takes half an hour on a 980 Ti takes about 3 minutes on my 1070. Note that I'm talking about a 1070 here, not a 1080, so I have available about 75% of the cores a 1080 has, since it only uses 3 of the 4 quadrants. Perhaps that is why it seems to be much faster than the 1080 at the moment. My guess is that implementation of the cards is only beginning and the controller inside the 1000 series uses some overhead that hasn't been optimized properly yet. I expect the 1080 will be much better once it's fully online.
One more from me on this one: I've submitted a bug report on the issue, since there's really nothing we can do on this end. (Well, there may be -- there's a utility put out by Intel that can be used to emulate these instructions in software, but I couldn't get it working on my AMD box, which is strange because I had an older version working two years ago and that version is no longer available. Either way, the performance under emulation would be abysmal, which kind of defeats the point when the whole purpose is high-speed rendering.)
Might not hurt to have them hear from you, though, since you more deeply investigated the exception than I bothered to. :o) You can do so here: https://helpdaz.zendesk.com/hc/en-us/articles/207532333
Thanks again!
Brand new $4000 computer with a 1070 MSI Seahawk card, and it can't render a scene with Iray with only GPU checked. It only works with CPU checked, and it takes forever even with the default Iray settings. The quality IS amazing if you just let it run, and I'm after quality; 3Delight and OpenGL just suck for reflection, refraction, etc. Here it is almost November, and I'm simply amazed that this lovely render engine isn't working with the 1070. 3ds Max, Maya, and Poser all fly with their respective render engines, but not Daz Studio and Nvidia Iray. Guess I should have saved back some money for Octane. Harrumph. Over 90 minutes to render this 1280x720 single image, and my animation is going to be around 630 frames; that's around 39 days running full tilt at slow-as-hell speeds on a machine with an i7 6850K OCed to 4.6 GHz, an MSI Seahawk X GeForce GTX 1070, 64 GB of Corsair Dominator RAM at 3001 MHz, 2 TB of SSD space, and 3 TB of WD VelociRaptor space; the boot drive is a 512 GB Samsung 950 Pro M.2.
Have you downloaded the Daz Beta which has Pascal support?
http://www.daz3d.com/daz-studio-beta
In DIM, be sure you have Public Build checked off in "Download Filters".
Umm... as mentioned many, many times, everyone who is offering Iray is pretty much in the same boat as far as support for Pascal cards is concerned. Nvidia only released the final version of the SDK not long ago. So if anyone else has something, they used a beta release... and given the problems Studio is having in its beta release with the final version from Nvidia, I can't imagine any Iray plugins based on the beta SDK being anywhere close to production-ready.
Re-render of my "hello there ladies, here's Johnny" render, using my two GTX 1080 cards. The previous version took 2 hrs 2 minutes CPU-only; this one with the 1080s came in a fraction under that, at 2 hrs 1 minute 26 seconds. But there are a lot of characters and other content, and it took forever to load the scene. I also had trouble loading Bethany 7 characters on the first day of the new beta; this time they loaded fine, but I had to remove one character because she kept loading really distorted. She looked like the Slender Man: extremely long arms and fingers, and a weird face. I tried reloading her, swapping in a different character, and deleting and loading another. It looked OK in T-pose, but once any other pose was selected the warping happened regardless of character, so there's some kind of glitch in the new test build. I don't think it's the cards.
For all, please post your results (benchmarks) on this page: http://www.daz3d.com/forums/discussion/53771/iray-starter-scene-post-your-benchmarks/p1
Then we will see clearly how big the difference is.
Are you sure the GPU was even used? That's a huge scene; if you didn't downsample the textures, I would think it way beyond even an 8 GB card. The log file should tell you.