Anyone still using a GTX 1060?
I have one with 6GB of VRAM. Can this graphics card still be used for DS 4.16 with the recent driver? I'm not into huge, resource-intensive renders, so I'm just asking: if anyone is using this card, how is it working for you?
Comments
I only have a 980ti
and it works fine as long as the scene fits
In 2019, Nvidia moved all Iray render modes over to its RTX and Tensor core technology for most everything that could be offloaded to those cores.
I don't remember the exact Iray version when this happened.
Cards that don't have those cores must emulate them, so there is some overhead for those cards in VRAM and computational resources.
I'm not really sure about the Tensor cores being emulated, just the RT cores. Someone more familiar with the inner workings of the SDK/engine would need to help us out on that one.
6GB will be very limiting, but it can be used, as you said, with very light and optimized scenes.
Reference link:
https://www.nvidia.com/en-us/design-visualization/iray/
I'm using a GTX 1070 with the latest Studio driver in DS 4.16, and it works fine. So I'd guess a 1060 will work fine too.
I have a GTX 1060 6GB card and it works fine. I'm running 4.15.0.2, as there was no compelling reason to go to 4.16. I've done renders with 3 G8 figures with outfits, some low-poly props and an HDRI that just fit. 2 G8 figures and a more complex set also works.
Thanks everyone, good to know. I've been dragging my feet on the next version (still using 4.12) and this is good news considering the price of graphics cards.
Yeah, I get a gigabyte less VRAM to play with in the 4.15 beta than in 4.11, but it still works OK.
Redacted
I wonder if this is the first time we get some indication of how much VRAM the software emulation of RTX functions actually reserves...
That would mean that users of GTX cards with 6GB VRAM (e.g. GTX 1660) on W10 would be limited to about 1.5GiB of VRAM for geometry and textures = not much.
It is certainly related;
since 4.12, I believe OptiX Prime no longer has an option to be disabled, which is what makes animated renders difficult for me in 4.12+,
and yes, it only affects GTX users:
the first frame will fit, then it drops to CPU
I think 1GB was a previously offered estimate too, but it's nice to have it confirmed.
I have a 6GB 1060 as well... and it works, but...
I think mine shows just a tad over 5GB available. What I like to render is 2 people (G8s) in a conversation in a decent setting.
I FEEL that back with 4.12, I could do that on the card. I am one of those who tend to download an update as soon as it's available... so with 4.16, those same types of scenes now almost always get kicked to my CPU (Ryzen 9 3900X, 32GB RAM).
My goal was to have a 3090 by now... but... I'll wait until I can get one at retail.
The change Nvidia made to Iray rendering uses about a gigabyte of VRAM on GTX cards to emulate the RTX functions.
Now a 6GB GTX card running W10 has just about 1.5GiB left for geometry + textures, and that is not much.
I mention W10 because you said you have "just a tad over 5gb available" and I'm assuming that is from the DS log; mine says I have 7.7GB available out of 8GB, but then again I'm using W7.
Is that before or after the fairly recent Windows 10 update that reserved less memory for unused but present connectors?
I still use a GTX 980 with DS 4.12. No plans on upgrading on that particular unit.
I haven't seen any significant change on the logs posted here at the forum. If MS did such an update, it should show already.
I'm running 3 monitors on W7 with my RTX 2070 Super
Thanks for the explanation, PerttiA... and yes, that was what I was referring to in my post (the Daz log stating there was 5GB available).
If what you are saying is correct... that there really is only 1.5GB left... that would explain why everything is getting sent to the CPU now.
Sigh....
My calculations are based on a test that I made some months ago to see how much RAM and VRAM was used while rendering in IRAY
Case a) just one lightweight G8 figure with lightweight clothing and hair
Case b) four similar G8 characters with architecture
Case c and d) started increasing SubD on the characters to see at which point the rendering would drop to CPU
"RAM/GB" and "VRAM/MB" taken from GPU-Z, "DS Log/GiB" taken from DS Log, no other programs were running but DS and GPU-Z
The "DS Log/GiB" is the sum of Geometry usage, Texture usage and Working Space - After Geometry and Textures, there should still be at least a Gigabyte of VRAM available for the Working space => In my case, Geometry + Textures should not exceed 4.7GiB
Note: Case c) was already using 38GB of RAM even though the rendering was done on the GPU; in Case d), when rendering on the CPU, the RAM usage almost exceeded my 64GB.
Using an RTX 2070 Super (8GB), i7-5820K and 64GB of RAM on W7 Ultimate.
Since emulating the RTX functions reserves about a gigabyte of VRAM, and the base load with W10 is almost a gigabyte (300MB on W7), that would leave about 3.5GiB available for textures + geometry on an 8GB GTX card and 1.5GiB on a 6GB GTX. Even then, the space left for the working space would be less than one GiB, which would slow down the rendering even if it is still done on the GPU.
Came to think about this as I do remember the talk about MS making a change due to given feedback, but as we (DS users) tend to have more than the usual 2-4GB of VRAM, it becomes a question of how they changed it. Was it to limit the amount to a certain percentage of the total, or to some maximum amount in MB?
If the change was to limit the reservations to a certain percentage of the total VRAM, it might not reduce the reservations on our GPUs.
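The budgeting logic in these posts can be sketched as simple subtraction. A minimal illustration, assuming the round overhead figures discussed in this thread (roughly 1 GiB for RTX emulation on GTX cards and roughly 1 GiB kept free as Iray working space); these are forum estimates, not official NVIDIA numbers, and the "available" figure comes from your own DS log:

```python
# Rough VRAM budget for Iray on a GTX card. The 1 GiB overhead figures
# are the estimates from this thread, not official NVIDIA numbers.

RTX_EMULATION_GIB = 1.0   # software emulation of RT cores on GTX (estimate)
WORKING_SPACE_GIB = 1.0   # headroom Iray wants while rendering (estimate)

def scene_budget_gib(reported_available_gib, gtx_card=True):
    """GiB left for geometry + textures, given the 'available' figure
    from the DS log (which already excludes the OS base load)."""
    budget = reported_available_gib - WORKING_SPACE_GIB
    if gtx_card:
        budget -= RTX_EMULATION_GIB
    return budget

# A 6GB GTX 1060 whose DS log reports ~5 GiB available:
print(round(scene_budget_gib(5.0), 1))                  # -> 3.0 GiB for the scene
# An RTX card with 7.7 GiB reported (no emulation overhead):
print(round(scene_budget_gib(7.7, gtx_card=False), 1))  # -> 6.7 GiB
```

This is only the structure of the estimate; actual OS reservations differ between W7 and W10, which is why the numbers in the thread vary.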
I have two GTX 1080 Ti's (11GB each), and unfortunately with GTX GPUs, on any driver over 436.30 the OptiX Prime accelerator is always on, which is why I stopped updating GPU drivers beyond 436.30 for my render rig. I do have a copy of the latest version of Daz 4.16 for some new plugins and shaders I bought, and I render in 3DL, which does seem to render faster than Daz 4.12's 3DL. Plus I play with Filament once in a while.
But I only use Daz 4.12.0.86 to render Iray with. IMO it's at least 10x faster than 4.16 with the updated 472.12 driver. The older 436.30 driver was, I believe, the last optimal driver for the GTX 1080 Ti's. All the drivers beyond 436.30 were designed for the RTX GPUs, which left the OptiX Prime accelerator option always on for all the GTX cards to make up for the ray tracing hardware that the RTX cards have. So the 436.30 Nvidia driver really is the optimal driver for older GTX GPUs. If you have Daz 4.12.0.86 or an earlier version like DS 4.11, and roll your driver back to 436.30, you will see a substantial increase in performance on your GTX GPU. The downside: the 436.30 driver will not work at all in 4.14 through 4.16 to render Iray. But Filament will use the 436.30 driver in OpenGL all right. So I recommend keeping a backup copy of 4.12.0.86 or an earlier version if you have a GTX GPU, because GTX GPUs are not much good for anything after the Daz 4.12.0.86 public build. It's more Nvidia's fault than Daz's.
On my Win11 laptop I have a GTX 1070 8GB GPU with the up-to-date 472.12 driver, which is useless to render Iray with. Seriously, it would be faster to create an image with a paintbrush.
You can try rolling back your driver and see if it works for you as well as it does for me; there is nothing to lose. https://www.nvidia.com/Download/driverResults.aspx/151275/en-us Or for Win 7 users: https://www.nvidia.com/Download/driverResults.aspx/151274/en-us
Edited for spelling sorry I am very dyslexic
4.16 is ten times slower than 4.12? That seems a bit harsh. I have not upgraded to 4.16, but I have used multiple 4.15 versions. I have also been tracking my speeds for a long time and still have note pads of my Iray Benchmark times. I also had two 1080tis for a long time.
4.12 with the 436 driver was faster, at least until 4.14 came along with its changes to normal maps. 4.14 basically recovered the speed that Iray lost after driver 436. 4.15 performed the same as 4.14 for me, regardless of driver. I read some things about 4.16 and decided to skip, I have nothing that needs 4.16 specifically anyway.
I did have some issues in the last year I had the 1080tis. I am not sure what it is though, it may even be hardware. But it probably is Daz. I started having instability. Daz would force close with errors like illegal memory access or whatever. This only happened when starting a render. This did not happen all the time, I could usually make a number of renders with no issue, but it was sure annoying when it did. I had to make a habit of saving my scene before hitting the render button or risk losing what I had done. Considering how painfully long Daz Studio takes to load any decent size scene any kind of crash wasted a huge amount of time for me.
I struck gold and snagged a 3090 in October. I used the Hot Stock app, and had it set for the Founder's Edition. That is the one Best Buy sells for MSRP of $1500. I had failed for months before finally scoring in October.
I kept one of my 1080tis in my machine. But I still had this error pop up sometimes, crashing Daz. At this point, I decided to leave the 1080ti off the renders, and only use the 3090. It is so much faster that the 1080ti didn't feel like it was adding much anyway. The 3090 is the real deal, guys.
Then a month later I got an email from EVGA for the 3060 Black Edition. I had signed up for their queue way back when the 3060 launched. In fact, I had totally forgotten about it, LOL. So I considered not buying it, but this is Daz Studio; I can use it with the 3090. So I bought it, too! I have the 3060 installed now, and so far I have had no trouble. I did have one crash a week or so ago, but only that one crash in the several months of using both cards to render.
So running the 3090 and 3060 has been a big deal for me. I have far fewer issues. But it is important to note that I have indeed still had a crash happen, and I am not sure what the deal is. The scene was not even that big. I was just rendering a character without any 3D background at the time of the crash. Daz Studio seriously needs to figure out what is going on, because that is without a doubt the best aspect of 4.12, I pretty much never crashed. And just to be a jerk, I will also point out that I frequently used multiple instances of Daz Studio without any issue. Ever.
At any rate, my render times were not the issue, especially after 4.14. They certainly aren't now! The 3060 all by itself is faster than my 1080tis. Together. I am planning on doing a direct head to head with the 3060 vs the 1080ti in a wide battery of test scenes to see just how much faster the 3060 truly is. I just have not got around to it yet. I have not felt motivated enough to pull that 3090 out of my rig, LOL. The 3060 also uses a comically small amount of power while rendering, which is real nice considering the beast 3090 is right beside it in my system. All in all, I am not actually using much more power with this 3090+3060 combo versus my old 1080ti+1080ti combo. And while a 3090 might use a lot of power, it certainly uses less than having two 1080tis.
Sadly EVGA is not doing the sign up program anymore, so there is no way to do that. But the Hot Stock app tracks most retailers. You can try tracking the Founders versions of different cards and maybe you will get lucky. I never disabled my Hot Stock app, out of curiosity I suppose, but I can tell you that it has alerted me numerous times for the 3090 Founders being restocked at Best Buy. The most recent was right at Christmas time, so just a couple weeks ago. I know it sucks, but that is the only way to get a MSRP card, since the Founders editions are the only ones being sold at MSRP. They are also exclusive to Best Buy, so non US residents are out of luck there. You need to have an account with Best Buy all set up and ready to go. I messed that up. I had a chance to get a 3090 in July or August, it was in my cart but I needed to log into stupid Best Buy. That cost me! So you need to have everything set up, including your payment info. You will not have time to type it in.
BTW, new RTX 3050s are coming soon, and hopefully they will be more available than other releases. The spec is for 8GB of VRAM, and they launch on Jan 27 with an MSRP of $250, so just two weeks from now! Of course, who knows what the street price will be. Nvidia did not release many real specs, other than saying it is a 9 TFLOP card. The 3060 hits over 12 TFLOPs, so you can make estimates for the 3050: roughly 30% slower than the 3060. However, I am confident the 3050 will be fine for any Daz user not currently running RTX. Some of the fastest Pascal cards might still beat it... might. Remember, the 3060 wasn't just faster than the 1080ti, it is way faster, and the 1080ti was the fastest Pascal you could buy (well, the Titan was a tiny hair faster, but only a hair). It is totally possible that the 3050 can render as fast as or faster than the 1080ti, which is kind of mind-blowing if it turns out to be true. Of course, we do not have any kind of benchmarks on the 3050 to go on, and no, gaming benchmarks do not translate to Iray. I can assure you the 1080ti will be faster than the 3050 at gaming.
There is a lot of competition coming all at once in the next few months, with Nvidia, AMD and even Intel all releasing new low-end GPUs. So the hope is that with all of these companies releasing so close together, these GPUs will be a little easier to get. I expect prices to actually be near MSRP, at least for the first day. That is what happened when the 6600XT released a while back: prices were right at MSRP that first day, then went up afterward. So if you are thinking of getting a 3050, mark January 27 down and try to grab one that day. That might be the best chance you have.
I would say you are very lucky if you found and can afford a new RTX 3080 or 3090 Ti GPU. Good for you.
I wish I could, but a lot of people can't. Life happens.
My comment was to the OP: GTX GPUs are driver dependent. So, to the OP, if you're buying an older GTX GPU like most people are right now, you will get better performance from using the older legacy driver that was meant for the GTX GPU.
RTX GPUs have ray tracing (RT) cores, where GTX cards do not; that's the change. So the newer RTX drivers leave some features on in the GTX GPU architecture, such as OptiX Prime acceleration, to compensate for the RTX cards' hardware ray tracing. That change in the driver uses extra resources in the GTX GPUs, slowing the GTX cards to a crawl when they are used in the newer Daz Studio 4.14 through 4.16 versions. This is a pretty old complaint, so no sense beating that drum again.
The reality is a lot of people cannot AFFORD or even find a new RTX graphics card, so the unlucky people in that category, like myself, use what we have or can afford: GTX gaming GPUs. They are cheap and plentiful on eBay; RTX GPUs, not so much.
So, in conclusion: if you're using Daz Studio 4.14 or newer, you might as well forget the GTX GPU and get an RTX GPU if you want to render Iray. Those GTX GPUs are going to be slower than cold molasses running out of the bottle in January using Daz Studio 4.16, and will most likely throw to CPU 95% of the time when rendering Iray, from my personal experience. I cannot speak for outrider's experiences.
It's not a secret that with the demand for graphics cards, GPU rendering can be a very expensive proposition; not everyone can afford a brand new or even used RTX GPU. We poor people are called GTX card users; we have to buy things like food and heat and fuel for work, etc.
So, for the OP: GTX cards will still work pretty well if you work within their parameters; there is nothing wrong with using older GPUs or older versions of Daz Studio to render Iray. It would be better if you have an older version of Daz Studio, pre-4.12; then the GTX 1060 GPU will be great!
I had to choose between iClone and DAZ for driver 436.30 and below (I think 429 was the previous one working for both), or I would use it with D|S 4.11.
But maybe not, as Unreal and other programs benefit from the latest drivers.
iClone won't load my Speedtrees on 430-436; funnily enough, they do load without a graphics card at all.
So I use the latest driver with D|S 4.11 and it seems fine.
I still get up to 5GB to use in the D|S 4.16 beta on my 6GB 980Ti, but yes, D|S 4.11 is slightly faster and gives me close to all 6GB.
I will say this: The Kingpin RTX 3090 with its 360mm radiator makes for a good space heater in a small room in the winter. I hope they start developing cards that run more efficiently, otherwise we'll be plugging our computers into ranges or dryer outlets in the next few years and will require much bigger desktop cases.
Actually, at these temperatures I miss my 7200rpm spinner tower that housed 10 drives at one time; together with two 21" CRT monitors, they kept the room warm as long as the temperature outside was above freezing. These new SSDs and flat panels just don't produce enough heat!
You can always undervolt or downclock to save power, and thus heat, if that is a concern. I have not done so myself yet, as it hasn't been an issue, but you can find a variety of videos and blogs that discuss undervolting and its benefits. If you look at the Quadro/A series, those cards are so stable in large part because they are simply clocked lower. You can see just how efficient Ampere can actually be by looking at the A series. You can always set your gaming cards to match the Quadro numbers, and thus their power efficiency. There is no drawback to doing this, other than the obvious potential to decrease performance. It does not do any harm.
I'm sorry, but I just feel that some of these things are a bit exaggerated. I mean, you said that 4.16 was ten times slower; that doesn't sound like an accurate statement. So I wrote up my experiences to give a different take. I did indeed see an uptick in VRAM usage, but this was not so extreme either. It was only a factor in very specific situations for me. I certainly admit it was annoying, but to say Iray RTX took a ton of VRAM away is just not factual. And I am one of the harshest critics of Daz you'll find in these forums.

We also cannot forget the role of Windows. Windows 10 shipped an update that changed the VRAM reservation on GPUs, and this allows users to get more out of their cards, quite close to what Win 7 allowed. For Windows 10 users, the Daz Studio help file now shows 10.1GB available for a 1080ti, as opposed to the barely 9-something it was before. This change shifted the balance so that the VRAM loss from Iray RTX had less impact on Windows 10 users.

I was able to render some pretty large scenes with my 1080tis. I rendered scenes that hit right at 10GB of VRAM while eating up nearly 50GB of system RAM. Both cards were rendering, too, so the card used for display was still able to run. I was not able to do anything else, though; if I opened up a browser, I could cause the render to drop to CPU.

And because I have 2 GPUs, I can prove how much memory Iray is using by looking at GPU #2. Since GPU 2 is not connected to a display, it only uses 200MB without a load. So I can compare the memory use between the cards with and without a display attached to see the Windows overhead. The next test I will do is compare memory use between the 3060 and 1080ti when I get a chance. I will swap them out so each will have a turn with the display attached or off. This will give us a real data point for exactly how much VRAM Iray RTX is using on GTX cards.
But the point here is that I was able to render my scenes on GTX. And I did this with Daz 4.14 and 4.15. I rendered very similar scenes with 4.12 as well, some of these were continuations from scenes created in 4.12. The VRAM use was fairly similar.
I play some video games and use features that need the newer drivers. So going back to 436 just for Daz was never ideal. And some like Wendy use other software like Unity and Unreal, which need much more up to date drivers to get their constantly updating new features. Unity and Unreal have been moving at a breakneck pace feature wise.
I also understand that the past couple years have been rough for a lot of people, but there are also plenty who are doing OK. Indeed, part of this crazy GPU demand comes from those staying at home more often. And while GPUs are very hard to get, it is not impossible, and I am proof of that. Actually, the 3090 is the first brand new GPU I have ever bought in my life. Up until then, every GPU I ever bought was second hand off eBay, including my 1080tis, which were bought at different times. I bought my second 1080ti right as the 2080ti was launching, because I saw the price of the 2080ti, LOL. I once bought an AMD 5870 for $50, and at the time that was the most high-end GPU I owned and also the most I could afford. For the people who can buy a GPU today, I just wanted to say it can be done. The 1060 that the OP uses was a $250 card when it launched way back in 2016. So I brought up the 3050 because it is a $250 card, at least at MSRP. If these can be found near MSRP, then the 3050 would be a decent upgrade for 1060 (and under) owners. Not only does the 3050 offer the magical RT cores, but it also has 8GB of VRAM, 2 more than the 1060 had, or 5 more than the 1060 3GB (I almost forgot about that one). Of course, if you can check reviews, that would be great, though I don't know if that would be possible given that reviews will probably come out the day it launches. And again, most reviews are going to be about gaming, which does not apply to Daz. But I expect it to be around 30% slower than the 3060 given the CUDA count, and we have benchmarks for the 3060.
The 3060 does over 8.2 iterations per second in the bench. A 1080ti does 4. Oddly enough there is no 1060 on the bench! So if anybody in this thread would like to try the bench out, the thread is linked in my sig, and any new data is always appreciated. The 1070 does about 2.5 iterations per second, so I would assume the 1060 would fall below that. I think the 1060 might do about 2 iterations per second in this test. If the 3050 is indeed 30% slower than the 3060, it would be doing around 5.7 iterations per second, which is still pretty darn respectable. So if this is correct (and I must stress this is my personal guess not a fact), then the 3050 would also be faster than the 1080ti by a pretty comfortable margin, and more than double the performance of the 1060. In the best case it could possibly even triple the 1060!
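The guess above can be written out explicitly. A back-of-the-envelope sketch: the 8.2 it/s benchmark figure and the TFLOPS numbers are the ones quoted in this thread, and scaling linearly by FP32 throughput is only a rough assumption, not how Iray performance actually has to behave:

```python
# Estimate an unreleased card's Iray throughput by scaling a known
# benchmark result by relative FP32 compute (a rough guess, not a measurement).

def estimate_iters_per_sec(known_iters, known_tflops, target_tflops):
    """Linear scaling of a benchmark result by FP32 TFLOPS ratio."""
    return known_iters * target_tflops / known_tflops

rtx3060_iters = 8.2     # it/s from the forum benchmark thread
rtx3060_tflops = 12.7   # ~12.7 FP32 TFLOPS ("over 12" per this thread)
rtx3050_tflops = 9.0    # the 9 TFLOP figure quoted above

est = estimate_iters_per_sec(rtx3060_iters, rtx3060_tflops, rtx3050_tflops)
print(round(est, 1))    # -> 5.8, close to the ~5.7 guessed in the post above
```

The same function reproduces the other comparisons in the thread only loosely, which is the point: it is an upper-bound style guess until real 3050 benchmarks exist.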
The bench we have is not definitive, "real world" results can be different depending on just what your scenes are. However it is still a good indicator of what the GPUs are capable of.
Nvidia is releasing the 3050, AMD is releasing the 6500, and Intel is releasing...something, they still have not really defined what their GPU is. All of these are going to offer similar performance and pricing, so the hope is that all of this can help make the market a little better. It certainly does not hurt to have more GPUs hit the market. I actually think the miners will be gunning for the Intel GPUs, just a hunch, but I think those will be great for mining. If that is true, it could help take some pressure off the 3050 stock. But I do think miners will like the 3050, too, since they like to have 6-8gb of VRAM as well these days for Ether. Ether currently requires more than 4gb to mine.
I have a GTX 1080Ti and I did not notice any significant change in performance as I went from 4.11 to 4.15 (not tried 4.16 yet). If anything, I would say rendering is a bit faster, so I am baffled why some people are seeing such a huge difference in render speed in the later versions. I have also managed to fit complex multiple-character scenes into the GPU. I don't have the patience to render on CPU, so on the rare occasions a scene I am working on drops to CPU, I will quit the render and optimise the scene until it fits.
It's likely down to individual equipment configurations, scene contents, shader selection, etc.
Maybe I should just stay with what I have for now. Luckily, when I built my own PC, I did it in such a way that upgrading things like the power supply, RAM and graphics card would be easy. Upgrading to a new power supply is easy; it's the graphics card that will cost a pretty penny. From what I've read here, sticking with simple scenes and such works, as it has for some time. I was really hoping to move to 4.16, but it doesn't sound like I have enough firepower.
So... to revisit: I am on the hunt for an RTX 2060 6GB; however, each time I find one that fits my budget, it gets snatched before I can get it. I have a 650 watt PSU and the required PSU is 500 watts, so I think this would suit me. I may just go ahead and move to 4.16 but keep my backup DS version handy. I am not a techie, so a lot of what is mentioned in benchmarks and such is kinda past me. What I gather, and I may be wrong, is that the present drivers are no use to a bloke using a GTX card but are more geared to RTX cards? I guess I just wanted to understand, and to reword my question: can I use 4.16 on a GTX 1060, or should I just stay where I am and use 4.12 on a GTX 1060? I mean, I am not unhappy with my current setup; I just have been seeing some really cool figures that require 8.