Comments
This is the secondary reason I got the GTX 1080 in the first place. I'm still irritated that I can't even access Iray's GPU rendering, just like with the R9 290X I upgraded from! I blame the dunder-heads over at Nvidia for this, though...
It's more that they put the emphasis on gaming than anything else.
Even though Iray isn't working for my two 1080s yet and I have to render via CPU, it's still pretty fast, faster for me than what I was using on my laptop. I've only done two renders so far, the longest being just a few seconds over two hours, and that's still really good for me. So when these cards are ready, if it's under two hours, or even one, I'll be really happy. Of course, if it took seconds like Mech4D's renders I'd be ecstatic, though those were just one character or prop and an HDRI? More complex, larger scenes with lots of characters, props, etc., which is what I like to do, will maybe take at least half an hour? Those of you with 980s and previous Titans, if any are reading: what are your render times for large scenes with several characters, props, etc., rendered at, say, Ultra HD 16:9, 3840 x 2160? That's the usual setting I use for my renders.
I was referring to the on-board one, yes...
Why do so many people keep saying this? Like it somehow justifies putting out a half-complete product or something. Nvidia didn't make this GPU "just" for the gaming customers; they made it for parallel computing on many platforms and for many purposes. If you read what Nvidia has to say about CUDA cores (and CUDA 8 and Pascal in general), they are not saying that the cards are intended mostly for gaming. They talk at great length about parallel computing with these multi-CUDA-core chips and Pascal technology. Yes, they do say that they are great for gamers, but they do not say that they are made exclusively for gamers. I'd like to see anyone here who claims that Nvidia hasn't gotten the GPUs fully up and running for everyone buying them supply documentation from Nvidia stating that as a fact. I would like to hear from Nvidia that the reason these cards are not usable for graphics rendering is that they consider our market insignificant compared to the gamers, and only worth minimal consideration when they get around to it. Does anyone have any official documentation from Nvidia stating that this is the case?
Ummm...the 'workstation' class cards also have CUDA and that is the product line that is geared toward parallel computing. The Geforce line has ALWAYS been the 'gaming' line.
+1
Before:
Up to August 22, 2016, Titan-line cards had also "always" been supported by the board partners.
Up to August 22, 2016, Titans had also "always" been available for purchase in any normal store.
Now:
- Board partners are not allowed to produce their own versions of the Pascal Titans.
- Cards are now sold exclusively in Nvidia's online store, with limited access and delivery control.
- The "gaming" cards are now sold at price levels of up to $1000, similar to the Titans before.
-> Nvidia made some major changes to their card lineup, and those changes do affect the GPU rendering community.
- -
Facts are:
- Nvidia found the "screen capture art community" important enough to create Ansel:
http://www.geforce.com/hardware/technology/ansel
- Many customers of the DAZ 3D store cannot afford even the high-end gaming cards.
- Even professional freelancers and smaller studios who work with GPU rendering software have been using the "gaming" line, not Quadro or Tesla, for rendering.
- To create large-scale scenes, cards with more VRAM than the current gaming line offers would be needed.
- - -
So yes, I would really love to see an official statement by Nvidia about all this.
Does the GPU rendering community matter to Nvidia?
Maybe DAZ 3D could put one staff member on a plane to attend the GPU Technology Conference in Amsterdam, September 28-29?
It is extremely important to communicate to Nvidia that the way they are currently treating GPU rendering is not working out for some people. Wouldn't the whole point of GPU conferences be for companies involved in GPU rendering to talk to each other, communicate their current issues, and find solutions together?
- - -
Nvidia isn't saying you can't use gaming cards for rendering, just that you have to wait a few months.
It's understandable that the priority for gaming cards is gaming.
Here's a listing of their market segments as they saw them in 2010: http://www.nvidia.com/object/citizenship-report-product-services.html
Cuda 8 was released today, so at least that part of it is sort of done. If any of you downloaded Cuda 8, there is already a patch to version 8.0.47...
Update/Edit: Rephrased to hopefully better describe the difference between expectations and reality.
A lot has happened since 2010. If you followed today's Amsterdam GTC keynote and compare it with this outdated listing, you realize how completely different the market looks now.
AI, deep learning, and self-driving cars were not even an important part of Nvidia's business in 2010. Now it seems those are the areas where most of the investment happens, because they expect to make the most profit there in the future.
Still, already on the 2010 chart you find indications that Nvidia did not have one clear purpose in mind for the Quadro line:
Quadro was already a multi-purpose card back in 2010. After Nvidia introduced the first Titan with 6 GB, there was no more reason to buy a Quadro for GPU rendering.
An indication of what customers actually want to use, or can afford, for GPU rendering can be found in OctaneBench:
https://render.otoy.com/octanebench/results.php
True. The issue nevertheless is that in the last year(s) Nvidia has neglected to develop cards targeted directly at the GPU rendering community.
We should not be forced to rely on gaming cards.
There is only talk of creating cards optimized for deep learning and AI. In the last year there has been no information from Nvidia about any plans to create cards for GPU rendering.
In the meantime, companies like Otoy are actively working with partners to create dedicated card solutions targeted at GPU rendering.
- - -
I hear you, but I also enjoy paying less for cards because they are made for the huge gaming market.
- Greg
Well, now that CUDA 8 is released and Iray 2016.2 has been available, it's in DAZ's hands to get the Iray functionality in Studio updated. Hopefully they will give us SOME kind of timeline for when we can expect to use our 1000-series cards with DS.
Haven't they actually done the opposite? By persisting with their program of adding CUDA cores all over the place, they've made almost all their cards capable of GPU rendering!
I realize there are probably two kinds of users:
- Some are happy that all cards now have CUDA cores, and that after waiting around for some time they will be usable for rendering.
And if you look at the latest rumor post about Pascal generation 2 in 2017 and Volta in 2018, this will continue:
http://wccftech.com/nvidia-pascal-volta-gpu-leaked-2017-2018/
Sooner or later all mainstream cards will be usable for GPU rendering, and yes, of course that is a good thing.
- - -
- Some others hoped that Nvidia would be as dedicated to producing cards "just for", meaning "only for", GPU rendering as other companies are:
https://home.otoy.com/otoy-and-imagination-unveil-breakthrough-powervr-ray-tracing-platform/
It is difficult to predict whether those dedicated render-boost technologies will be cheaper or more expensive than a multi-purpose card.
- They could be cheaper, because you can leave out all the features that are not used for GPU rendering.
- They could be more expensive, because a smaller audience would purchase the card and carry the development cost.
- - -
Fair enough, as I am an Nvidia newbie (first Nvidia GPU) after all. Ironic that I spent hours/days researching the GPU's prowess vs. price but neglected that important distinction! Thanks for the clarification!
Daz has an SDK, but I'm not clear on whether these are final releases of the SDK and CUDA 8, or still late betas.
Cuda 8 has been released and is no longer in beta.
If you have a developer account you can download it here from the main page:
https://developer.nvidia.com/cuda-toolkit
Additional files can be found in other directories there as well: open-source code, etc.
Cuda 8 has only been released for Windows and Linux, not Mac OS X, and Cuda 8 being out of beta doesn't mean Iray 2016.2 is. Then Daz has to integrate the non-beta Iray 2016 and test it.
Yes, they say that the cards are very good for gaming. No, they don't say "only use them for gaming", nor do they say "forget these cards if you want to do anything other than gaming and go use our workstation cards instead". On the GeForce website they also describe how the power of parallel computing using the CUDA cores on these cards makes the gaming experience better. Follow the link below and you can read all about how this technology (in the GeForce cards) is being used to create more realistic characters. Don't you think that "more realistic characters" is a little bit like what we do here?
http://www.geforce.com/hardware/technology/cuda
The trouble I'm having with accepting the notion that the cards are for gaming first, and that therefore everyone else has to wait, is that I think that is BS. It's like a car manufacturer selling a 12-seat minivan and saying that it's primarily designed for driving, so only the driver's seat is installed and usable, but in a few months we'll be adding a few more seats for passengers and cargo, as we get around to doing so, if we feel like it; meanwhile, we won't tell you anything about how they will look or feel.
A GPU is neither a "gaming card" nor a "rendering card". It is what you use it for that gives you cause to call it one or the other. Both applications use the card to "process graphics". In the case of games, it's used to process the moving images displayed on your screen. In rendering like we do, it's used to do the same thing: process an image for display on your screen. The fact that we choose to make them high-resolution images, which we then save in fancy high-resolution formats, has nothing to do with special rendering-only cards. Once an image is on your screen you can always save it, regardless of how it got there. Any card sold as a "gaming card" still "renders" each image for each frame of a moving game.

It's like saying such-and-such a car is a passenger car, but paint the same model differently and soup up the engine a bit and now you call it a police car. It's still a car with four wheels and an engine and a transmission and a steering wheel and seats, etc. What you choose to call it doesn't change what it actually is. Same with GPUs. Call one a gaming card, then turn around and call the same card a rendering card, and nothing's changed other than your point of view about it. The GeForce cards are de facto "gaming cards" AND they are de facto "rendering cards". Nvidia is just slow turning some parts of the cards on, and saying that is because "gamers get priority" is just a load of BS.
And one last bit of info... GPUs originally came into being to assist the CPUs on motherboards with the number-crunching needed to produce increasingly higher-resolution images on computer screens, much as their cousins the math co-processors helped with other number-crunching duties. They were originally on-board chips, but as time went by they were added to video cards, which then morphed into dedicated GPU add-on cards. These days they are so big that they need special power supplies and cooling systems, but at the end of the day they are still there to assist the CPU by producing the graphics for the screen, whether you are creating an image for a game or a still-frame render.
That's a fallacy. Using your car example, the van already has 12 seats installed, but you are angry that they cannot be detached to fit in your cargo. The manufacturer states that the van was "designed to transport lots of people" and that their cargo van with the features you want will be released later as the P6000 Quadro card, since that is what's used for professional delivery services. What you bought was a huge family car, and you're mad you cannot use it to deliver cargo efficiently. Saying you lack the budget for the P6000 Quadro does not change the fact that your 12-seat van was made for a different purpose, and because of that is way cheaper.
Geforce cards were originally created for gaming.
Quadro cards were designed for graphic design.
The Geforce Pascal card is already out. It's working for pretty much every new game out there. Every new game runs buttery smooth on my GTX 1080.
The Pascal Quadro card is not out yet. Why? It's waiting for full CUDA support and Iray upgrades in the most popular applications.
Even though the hobbyist market uses GeForces for GPU rendering, that does not mean the primary function of those cards was not gaming.
Only slightly OT (off topic), but I have tested rendering in Cycles in Blender 2.78 RC2, which already has GPU support for the GTX 1080, 1070, and 1060. The graphics card was a GTX 1080. I made a simple test scene with a torus and some terrain. Rendering with the GPU was only about 20 percent faster than rendering with the CPU (an i7 at 4 GHz). So either the Cycles implementation is very well optimized for CPU use, or the GPU acceleration needs further development. I am only afraid that if a similar thing happens with the Iray implementation for Pascal graphics cards, then GTX 1080 rendering in Iray will not be so fast. Fingers crossed; hopefully it will be faster, but who knows.
You might've used a scene that was too simple for that. 20% faster at what speed? If the CPU render took 20 seconds while the GPU took 16 seconds, that 20% means nothing.
Yes, you may be right. I made a test on an older computer with a GTX 670 (2 GB VRAM) and an i7-3770K @ 3.5 GHz, with a similar scene in Blender 2.77a, and got these render times: GPU 28.86 seconds, CPU 52.25 seconds. It looks like the GPU render was about 45% faster than the CPU one, so I guess such tests don't give any real estimate of how good performance will be in Iray in Daz Studio. One needs to wait for the release of the new Iray for Daz Studio, then.
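If anyone wants to double-check that percentage, here is a minimal Python sketch using only the render times quoted above; nothing in it is specific to Blender, Cycles, or Iray:

```python
# Rough speedup math for the render times quoted above.
cpu_seconds = 52.25   # CPU render time (i7-3770K)
gpu_seconds = 28.86   # GPU render time (GTX 670)

# Fraction of time saved relative to the CPU run:
# (52.25 - 28.86) / 52.25 ~= 0.448, i.e. "about 45% faster".
time_saved = (cpu_seconds - gpu_seconds) / cpu_seconds
print(f"GPU took {time_saved:.0%} less time than the CPU")

# The same comparison expressed as a speedup factor: ~1.81x.
print(f"Speedup factor: {cpu_seconds / gpu_seconds:.2f}x")
```

The same two lines of arithmetic also show why a 20% difference on a tiny scene means little: a few seconds of fixed scene-preparation overhead can swallow the whole gap.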
As per the NVIDIA developer website, the Iray 2016.2 SDK is now released; so I think it's up to DAZ to integrate and test. Hopefully coming soon!
That is not true.
The Geforce website clearly advertises that the cards are for rendering, and that they are "good for games" because of that. Read this from the Geforce website: http://www.geforce.com/hardware/technology/cuda and read the part about the realistic characters you can create with it.
My analogy stands. You changing it doesn't make it less true. The cards come with CUDA 8 cores that are not activated. They aren't to be removed at some later date to make room for something else; they are simply only partially installed. The hardware is there, but the firmware isn't. The cards were designed to render first and foremost. The fact that they render high quality fast is just good for gamers, so they point this out. Car manufacturers design and build cars to drive down the road first and foremost. Saying that a particular car is aimed at people who like to drive doesn't take away from the fact that it is a car, mechanically speaking. Same with these cards. Saying that they are aimed at gamers does not change the fact that they are built to render images, electronically speaking.
Who said anything about me lacking a budget?
I still stand by what I said. The logic behind your argument is based on a fallacy.
The fallacy in your argument is that you greatly broaden the definition of what the GPU does, and then make a point about cars that shows a problem that does not relate to the GPU. Translating the car example fully to the GPU problem, you basically say that "GPUs render images for the screen, so the process should be the same whether we render for games or PBR, and should clearly be optimized for both at the same time." And that is not true.
Also, the problem is not that the card is not activated; it's just that the software is not able to utilize it for what you want.
And since there already exists software that can use the Pascal cards, this means the card functions.
The GPU is performing its function as it should, by the way: it's rendering images for your PC. It was designed and optimized to work exceptionally well with a selected group of software, namely games, and it functions great for them; all are supported. If it was not working, you would see nothing when launching your latest games. So the card is functioning correctly. Software that is not games has technical issues with it, as is to be expected. But then, if money is not an issue, why did you not buy the Quadro card that is supported? If it's because you wanted to buy the Pascal Quadro card, it is not released yet, probably because they want the software to function for it before it's released.
Wow, you sure like to make stuff up.
First you state this as some kind of fact: "Saying you lack the budget for the P6000 Quadro". Where did you come up with that? I've never said that, or anything like that, or even remotely close to that.
And where did you come up with me saying this: "GPUs render images for the screen, so the process should be the same whether we render for games or PBR, and should clearly be optimized for both at the same time"? Don't use quotation marks to indicate that you are quoting me when I've never said it. You can't use quotation marks to quote me saying what you think I "basically said". That's not a quote; that's just your take on what you think I meant. I never said anything about the process "should be the same... etc." I've never said that I think the process "should be" anything at all. You just made all that up. I never mentioned PBR, or what the cards should be optimized for, at all, anywhere.
If you don't think that my car analogy applies to GPU cards, then why are you using it?
If I remember correctly, you can roughly estimate the power increase by taking clock speed x CUDA core count for each card you are comparing and dividing the two numbers. That gives you a very rough estimate of how much stronger the new card will be compared to the previous one.
For example, a GT 640 vs. a GTX 1080:
The GT 640 works out to 306,048.
The GTX 1080 works out to 4,113,920.
Divide the GTX 1080 number by the GT 640 number and you get around 13.4. So the rough estimate is that the GTX 1080 will be around 13 times faster.
Of course, that can be way off, mind you. It does give you a rough idea of how much faster you will render with a new card, though... well, it can be off, so don't bet your money on it if you can wait for quality results.
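As a sketch, here is that back-of-the-envelope estimate in Python. The totals above are cores x clock in MHz (the GTX 1080's 4,113,920 corresponds to 2,560 CUDA cores x 1,607 MHz base clock, and 384 x 797 matches the GT 640 figure quoted); plug in your own cards' numbers to compare anything else:

```python
# Very rough relative-performance estimate: CUDA cores x clock speed (MHz).
def rough_speedup(new_cores: int, new_clock_mhz: float,
                  old_cores: int, old_clock_mhz: float) -> float:
    """Estimate how many times faster the new card might render."""
    return (new_cores * new_clock_mhz) / (old_cores * old_clock_mhz)

# GTX 1080 (2,560 cores @ 1,607 MHz) vs. GT 640 (384 cores @ 797 MHz),
# matching the 4,113,920 and 306,048 totals quoted above.
print(f"~{rough_speedup(2560, 1607, 384, 797):.1f}x faster")  # ~13.4x
```

Keep in mind this ignores everything else that differs between architectures (memory bandwidth, VRAM size, driver and Iray support), which is exactly why it can be way off.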