Comments
No - not remotely, since you can have it and simply not use it.
It's still there, adding to the complexity of the hardware and software environment. Over the years, iGPUs have been the culprit in countless problems, even in systems that do have a dedicated GPU.
Nice build, very similar to something I had going for a long time. This is a great setup for Daz.
System/Motherboard: SuperMicro X12
CPU: 2x Xeon Gold 6348 @ Stock 3.5 GHz
GPU: 5x A6000
System Memory: 512 GB ECC @ 3200 MHz
OS Drive: WD SN850 1 TB NVMe
Asset Drive: 256 GB RAM DRIVE
Operating System: Win 11 Pro
Nvidia Drivers Version: 516.25 DCH
Daz Studio Version: 4.20.1 public Beta
PSU: 2x Corsair AX1600
2022-07-03 23:02:53.887 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend info : Device statistics:
2022-07-03 23:02:53.887 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend info : CUDA device 0 (NVIDIA RTX A6000): 346 iterations, 2.466s init, 24.390s render
2022-07-03 23:02:53.887 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend info : CUDA device 1 (NVIDIA RTX A6000): 358 iterations, 1.749s init, 24.506s render
2022-07-03 23:02:53.887 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend info : CUDA device 2 (NVIDIA RTX A6000): 354 iterations, 1.846s init, 24.407s render
2022-07-03 23:02:53.887 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend info : CUDA device 3 (NVIDIA RTX A6000): 354 iterations, 1.743s init, 24.532s render
2022-07-03 23:02:53.887 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend info : CUDA device 4 (NVIDIA RTX A6000): 348 iterations, 1.781s init, 23.969s render
2022-07-03 23:02:53.887 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend info : CPU: 40 iterations, 1.032s init, 24.351s render
2022-07-03 23:02:54.694 [INFO] :: Finished Rendering
2022-07-03 23:02:54.783 [INFO] :: Total Rendering Time: 32.44 seconds
Loading Time: 7.908 Seconds
Rendering Performance: 73.38 Iterations Per Second
Finally got all five A6000 connected with PCIe Gen 4x16. Seems like a small milestone, but it was a long time coming.
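For anyone tallying these logs by hand: the per-device lines can be parsed and summed with a short script. This is just a sketch; the regex and the choice to divide total iterations by the slowest device's render time are my own assumptions about how the headline iterations-per-second number is derived.

```python
import re

# Per-device lines copied from the Iray "Device statistics" log above.
LOG = """\
CUDA device 0 (NVIDIA RTX A6000): 346 iterations, 2.466s init, 24.390s render
CUDA device 1 (NVIDIA RTX A6000): 358 iterations, 1.749s init, 24.506s render
CUDA device 2 (NVIDIA RTX A6000): 354 iterations, 1.846s init, 24.407s render
CUDA device 3 (NVIDIA RTX A6000): 354 iterations, 1.743s init, 24.532s render
CUDA device 4 (NVIDIA RTX A6000): 348 iterations, 1.781s init, 23.969s render
CPU: 40 iterations, 1.032s init, 24.351s render
"""

PATTERN = re.compile(r"(\d+) iterations, [\d.]+s init, ([\d.]+)s render")

total_iters = 0
longest_render = 0.0
for m in PATTERN.finditer(LOG):
    total_iters += int(m.group(1))
    longest_render = max(longest_render, float(m.group(2)))

# All devices render concurrently, so the wall-clock rate is total
# iterations divided by the slowest device's render time.
rate = total_iters / longest_render
print(f"{total_iters} iterations / {longest_render}s = {rate:.2f} it/s")
```

With the numbers above this lands within a hundredth or two of the 73.38 it/s figure posted, so rounding in the intermediate times likely accounts for the rest.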
Not if it's turned off or disconnected.
"countless problems" such as...?
And what about when the user is too afraid to touch any settings in the BIOS?
Manifests in different ways
You can apply these exact words to somebody who does NOT have an iGPU. If somebody is afraid of the BIOS, they are going to need help using their computer regardless. Suggesting people are afraid to use the BIOS is not even a valid point on the subject, because that affects them equally on any potential issue, iGPU or not.
They can have issues that "manifest in different ways"? Yeah... so can not having an iGPU. If you have no iGPU and you have a video problem, you are going to have a hard time troubleshooting that issue without a SECOND GPU on hand. How many people have that? You are fine with assuming some people don't like using the BIOS, but then expect them to know how to troubleshoot without an iGPU. These two things rather contradict each other.
My friend's GPU died. He was without a computer at all for a month while he waited for the painfully slow RMA process from that GPU maker. He didn't have a backup GPU, or a tablet or laptop, so he was pretty much offline for a whole month. If only he had an iGPU, his life would not have been so miserable for that month!
Another friend had a problem trying to upgrade his video card on an older motherboard. Without an iGPU, he was unable to fix the problem on his own. This is a situation that accessing the BIOS would fix, but this person was afraid to do that, which is exactly like your first statement about people who might not want to use the BIOS.
So I don't see the relevance of those words. Anybody can have any problem with a computer. To cast so much blame on the iGPU is just wrong here, when the presence of an iGPU can in fact be the saving grace of a troubleshooting session.
Can an iGPU cause an issue? Sure, but you are throwing the baby out with the bathwater here.
And again, the other issue is that ditching the iGPU is straight up leaving performance on the table for anybody who does content creation. While it may not directly affect Iray, it can in certain configurations.
Given how much crossover there is between Daz Studio and other content creation software, it is a fair bet that many Daz users also use software that benefits from iGPUs. In which case, telling them to avoid iGPUs is just flat-out wrong. An iGPU can also help streamers.
I combed the comments on both videos, and could only find ONE comment out of hundreds that made any reference to iGPUs potentially being a problem. Otherwise the comments almost universally support iGPU. The second fellow builds and repairs a lot of computers as well, he owns a shop.
iGPUs have come a long way. In the past, onboard graphics were terrible, consuming valuable system RAM and creating havoc with off-brand drivers. Today, they are built into the CPU dies and very stable. When I build workstations that are used for business, I always opt for the iGPU for simplicity and efficiency, especially for laptops.
For content creation, I skip the iGPU altogether. Discrete GPUs are better for video encoding, rendering, or even image libraries in Lightroom. They are great for taking the load off the chip and keeping it cooler. For content creation, I would suggest spending the money on extra CPU cores or higher clock speeds. I think price might be your bigger concern in this equation, though. You need to look at the overall value for what you do, and the processor you have is a great value. In this class of CPU, I would personally lean toward the i9-12900K for more cores and threads, because that allows more processing overhead and better system responsiveness. This is personal preference, though, and much more expensive.
Guys, I have a question. I currently have a bog-standard 2070 (MSI). According to the benches, it's 32.6 iterations per dollar per hour, and 4.5 iterations per second. For the 3080 it's 62.1 iterations per dollar per hour, and 12 iterations per second. So for Iray, the 3080 is roughly 3x better than the 2070 in performance terms.
The question is then as follows: given I want to upgrade, should I wait for a 4080 as they're mere months away or buy a 3080 and skip the 4xxx series.
You know, I'm not so sure about the RAM bump. I read they're going to 12GB for the 4080, with the '70 at 10GB. That's 3080 Ti territory. I'm really curious about the Iray performance of the new hardware. They're reporting huge improvements, but they're also reporting huge power use. That's going to be expensive in 2022/2023. If I did buy a 4000-series card, I'd probably undervolt it.
Anyway I think you're right. I should just wait.
You are asking a difficult question. The question you need to ask first is just how important is an upgrade to you, and how soon. Hardware is always getting better, with new hardware releasing about every 2 years. And of course the past 2 years have been pure chaos, to put it mildly.
The particular stat you mention can vary wildly, too. I am assuming the cost per iteration is based on MSRP, correct? But we haven't had MSRP pricing in so long that any cost-per-iteration comparison is not valid. Plus every model can have a different base price. The thing to really look at here is the iteration rate, the general speed of the card in question. Take the benchmark test for yourself; the download link to the DUF is in the first post. I believe the 2070 numbers are out of date, since the new releases of Daz nerf rendering speeds.
Then compare your iteration rate versus a 3080 in the same version of Daz. These numbers can vary a little depending on the scenes you build, so keep in mind this is a general number. The actual performance gap may increase or decrease in your scenes. But this bench will give you an idea of what kind of performance a 3080 will give you over the 2070.
Still, the 3080 should be a big increase over the 2070, while offering a little more VRAM. The Iray performance of Ampere is far greater than any gaming benchmarks indicate because the ray tracing cores are fully utilized. Even games that use ray tracing don't push them like Iray.
The 4000 series is just around the corner...but how close to that corner are we talking??? The latest rumors say that Nvidia is looking to delay the 4000 launch as long as possible because they are struggling to sell old 3000 stock. They do not want to launch a new product and leave old products collecting dust on store shelves. Some even say it might be December when the 4080 launches, which is 5 months away. Is that worth waiting for? That is up to you to decide. I personally do not believe Nvidia will wait longer than that, AMD is also set to launch new GPUs, and Nvidia does not want AMD to launch first. So AMD could force Nvidia's hand here. They are playing a corporate game of cat and mouse. Ultimately when RTX 4000 launches is actually up to AMD! If AMD announces they are launching in September, Nvidia will respond immediately by doing the same. But if AMD holds off or takes a while, Nvidia will as well.
BTW, rumors have been suggesting the 4080 will actually have 16GB of VRAM, not just 12. So if this is true, then that would be doubling your current 2070 VRAM.
There is one other option here, if you have not considered it, you can keep your 2070 and run two GPUs at once. If your PC can handle it, that would give you the biggest performance boost of all. Running a 2070+3080 is WAY faster than running a 3080 alone. This will probably not be as fast as a single 4080 though, and VRAM does not stack, so you would still be limited by the 8GB in your 2070. If the scene exceeded 8GB, the 2070 would not be used. And there is also the crazy chance you could go with a 2070+4080, LOL.
So you have lots of options. I wouldn't be too concerned about power, I don't think it will be as crazy as some people suggest. But it will be more than you are used to.
System Configuration
System/Motherboard: Gigabyte B450M DS3H Wifi
CPU: AMD Ryzen 7 5700G @ 3.80GHz with Radeon Vega 8 Graphics
GPU: EVGA GeForce RTX 3060 12GB @ 1.882GHz
System Memory: Patriot 2X16GB RAM @ 2.66GHz
OS Drive: PNY CS2130 2TB NVMe M.2 PCIe SSD
Asset Drive: same
Power Supply: SeaSonic 750Watts Gold
Operating System: MS Windows 11 21H2
Nvidia Drivers Version: nVidia GEForce 3060 Game Drivers 516.59
Daz Studio Version: DAZ Studio Pro Public Beta 4.20.1.43
Optix Prime Acceleration: N/A
Benchmark Results
2022-07-21 21:25:13.209 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend progr: Received update to 01781 iterations after 294.084s.
2022-07-21 21:25:14.121 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend progr: Received update to 01786 iterations after 294.995s.
2022-07-21 21:25:15.043 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend progr: Received update to 01791 iterations after 295.917s.
2022-07-21 21:25:15.970 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend progr: Received update to 01796 iterations after 296.844s.
2022-07-21 21:25:16.752 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend progr: Received update to 01800 iterations after 297.626s.
2022-07-21 21:25:16.753 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend progr: Maximum number of samples reached.
2022-07-21 21:25:17.258 [INFO] :: Finished Rendering
2022-07-21 21:25:17.293 [INFO] :: Total Rendering Time: 4 minutes 59.84 seconds
Iteration Rate: (DEVICE_ITERATION_COUNT / DEVICE_RENDER_TIME) : 1800 iterations/297.626s = 6.048 iterations per second
Loading Time: ((TRT_HOURS * 3600 + TRT_MINUTES * 60 + TRT_SECONDS) - DEVICE_RENDER_TIME) seconds : 42.584s
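The two formulas in the results template can be written as tiny helpers (the function names are mine, and this is only a sketch of how I read the template):

```python
# Iteration rate: device iteration count over device render time.
def iteration_rate(iterations, device_render_seconds):
    return iterations / device_render_seconds

# Loading time: total rendering time (from the DS log) minus the
# device render time reported in the Iray device statistics.
def loading_time(trt_hours, trt_minutes, trt_seconds, device_render_seconds):
    total = trt_hours * 3600 + trt_minutes * 60 + trt_seconds
    return total - device_render_seconds

print(round(iteration_rate(1800, 297.626), 3))  # matches the 6.048 it/s above
```

Note that results can vary depending on which log lines you treat as the render start, so small differences in the loading-time figure between posts are expected.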
+++++
The PNY GTX 1650 Super 4GB I had last year took about 16 minutes to render the test scene, so that's a decrease in render time of 11 minutes; in other words, the RTX 3060 12GB renders the scene roughly 3x faster than the GTX 1650 Super 4GB.
Special thanks to AgitatedRiot, who gifted me the RTX 3060. I am sort of excited to play with lighting, since the RTX 3060 handles Iray previews in the DS viewport quite easily.
Newegg knows I have a weak will and they fill my email box up with video card sales.
Which led to more experiments in an attempt to convince myself I don't need to make the investment.
System/Motherboard: PRIME Z390-A
CPU: Intel Core i9-9900K CPU @ 3.60GHz
GPU: NVIDIA GeForce RTX 2070 SUPER
NVIDIA GeForce RTX 3080
NVIDIA GeForce RTX 2080 SUPER
System Memory: DDR4-2666 16GB x2 STT
Operating System: Windows 10 Pro 64
Pass 1 (3 GPUs)
===============
2022-07-28 14:23:30.992 [INFO] :: Total Rendering Time: 1 minutes 16.9 seconds
2022-07-28 14:23:37.948 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend info : Device statistics:
2022-07-28 14:23:37.948 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend info : CUDA device 2 (NVIDIA GeForce RTX 2070 SUPER): 418 iterations, 5.588s init, 67.117s render
2022-07-28 14:23:37.948 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend info : CUDA device 0 (NVIDIA GeForce RTX 3080): 971 iterations, 2.861s init, 69.999s render
2022-07-28 14:23:37.948 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend info : CUDA device 1 (NVIDIA GeForce RTX 2080 SUPER): 411 iterations, 2.413s init, 69.275s render
Iteration Rate: 6 + 14 + 6 = 26 iterations per second
Loading Time: 7 seconds
Pass 2 (1 GPU)
==============
2022-07-28 14:33:00.640 [INFO] :: Total Rendering Time: 2 minutes 10.25 seconds
2022-07-28 14:33:30.176 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend info : Device statistics:
2022-07-28 14:33:30.176 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend info : CUDA device 0 (NVIDIA GeForce RTX 3080): 1800 iterations, 2.090s init, 126.593s render
Iteration Rate: 14 iterations per second
Loading Time: 4 seconds
Pass 3 (2 GPUs)
===============
2022-07-28 14:35:56.340 [INFO] :: Total Rendering Time: 1 minutes 35.57 seconds
2022-07-28 14:36:00.372 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend info : Device statistics:
2022-07-28 14:36:00.372 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend info : CUDA device 0 (NVIDIA GeForce RTX 3080): 1285 iterations, 1.987s init, 91.615s render
2022-07-28 14:36:00.373 Iray [INFO] - IRAY:RENDER :: 1.0 IRAY rend info : CUDA device 1 (NVIDIA GeForce RTX 2080 SUPER): 515 iterations, 2.873s init, 90.983s render
Iteration Rate: 14 + 6 = 20 iterations per second
Loading Time: 6 seconds
In a mostly non-scientific, theoretical way, swapping out the 2070 for a second 3080 should result in:
Iteration Rate: 14 + 14 + 6 = 34 iterations per second
New render time should be 54 seconds + load time, gain of about 16 seconds
If a real scene takes 10x longer than the benchmark scene, I am looking at a current render time of about 16 minutes. Investing in a new card would bring that down to around 9 minutes.
Can anyone poke some holes in this logic? The gain is just on the edge of being worth it.
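One way to sanity-check the arithmetic above: assume per-card iteration rates simply add (in practice they won't, quite, because of scheduling overhead), and divide the benchmark's 1800 iterations by the combined rate.

```python
# Estimate benchmark render time from per-card iteration rates,
# assuming perfect multi-GPU scaling (an optimistic simplification).
BENCH_ITERATIONS = 1800

def estimated_render_seconds(*card_rates_it_per_s):
    return BENCH_ITERATIONS / sum(card_rates_it_per_s)

current = estimated_render_seconds(14, 6, 6)    # 3080 + 2080 SUPER + 2070 SUPER
upgraded = estimated_render_seconds(14, 14, 6)  # 3080 + 3080 + 2080 SUPER
print(f"{current:.0f}s now, {upgraded:.0f}s after swapping the 2070 for a 3080")
```

This gives roughly 69s now versus roughly 53s after the swap, which lines up with the ~54s + load time estimate in the post.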
The biggest hole is that RTX 4000 is coming soon. You will have even more options for performance. While the 4090 might be the only one to launch at first, the prices of existing cards will keep dropping.
But otherwise, you can never have too much GPU power. It is always going to be a question of how much are you willing to pay for the extra speed. The 3080 would also offer extra VRAM over the 2070, so there is at least that.
The other caveat is that this bench will not always scale to your exact scenes. The new performance of RT cores is enhanced most by geometry. The more complex your scene gets, the more of a difference the RT cores make. So if your scenes are more complex, then maybe you will actually see more improvement than what you do in the bench. But if your scenes are not that complex, your improvements might be less. It just depends on what you do.
Since you have different cards, you have a unique chance to test them in your own scenes and see just how they do. You could cap the iteration or convergence to make the tests faster. Then you should have enough information to judge if another 3080 is worth it in your eyes. Or if waiting helps.
After all, if a 4090 more than doubles the 3090's performance, it might actually be logical to dump the 2000-series cards completely in favor of a 3080 + 4090 (or 4080).
BTW, which version of Daz Studio are you running?
DS version is 4.20.0.17, 64 bit.
I'm all over the place on the decision-making here. To be perfectly honest, what I have is more than adequate. The drop in 3XXX prices, plus their availability, pushed me to experiment more.
I wanted to see if my setup could even handle the third card. I'm stuck with the 2080 because nothing else will really fit inside the computer case without a lot of monkeying around.
The 2070 was gathering dust and I really should think about donating it. Just been burned in the past... The 3080 is on an 850 Power Supply by itself, so I am wasting energy there.
I've had the 3080 about a month now and I've been particularly impressed with how quiet it runs. Much quieter than both of my 2XXX. Granted that could be just the specific configuration of this 3080 GPU.
The jump from 2080 to 3080 also made Iray interactive mode much smoother. The 2080 could do it, but it would chug along at times.
Picking up a 3090 is tempting because of the increased memory.
Waiting for a 4XXX is also tempting, but concerned about the lack of supply being an issue again. I've been hearing the warning bells of low chip availability in the manufacturing sector. Nothing concrete, yet, but they usually don't start sounding the alarm unless there is a problem coming.
One thing I noticed a while ago is that it's not worth using your CPU in renders with most newer GPUs. Even an overclocked 32-core Threadripper struggles to match the performance of a 1080 Ti, and the 1080 Ti isn't actually that great in Daz rendering performance compared to most Nvidia RTX 2000- and 3000-series GPUs. Quite often, adding the CPU to the rendering task increases render times if it's not a high-core-count CPU.
I haven't had a chance to test the 5000-series Threadripper CPUs in Daz yet, but I don't expect them to be much better than a 1080 Ti without chilled liquid cooling.
Here is an interesting discovery that is sort of on topic.
My UPS absolutely does not like having the 3rd video card attached to it.
The UPS is fairly beefy, a CyberPower that I picked up at CostCo for ~$200 USD. The only things plugged into it are my desktop (750 PS), the external video card PS (850 PS), and my monitor.
Now, I'm not an electrical engineer, but the CyberPower website implies that this UPS would only output 600W, which wouldn't even be enough to power the desktop PSU at load. Since this is the first time I am having issues, I have to assume it's the last thing I changed, i.e. adding the 2070 into the mix.
With the 850 powering only one video card, no issues. With it powering both video cards, no issue until DS started rendering. Then it began beeping to alert me of an issue. I didn't lose power, but the UPS was warning me that it needed to pull from the battery to handle the load.
I shut down, pulled the power to the 2070, rebooted and rendered the same scene. No issues.
I will probably shift the monitor to another UPS and see if that will lighten the load. I don't need the UPS batteries overheating when I am away from my desk.
(Edited to change the price.)
I've never bought a UPS but I am surprised they'd even consider selling one that wasn't rated for 1500W.
Results for RTX 3060 12 GB (non-TI version) only GPU
System Configuration
System/Motherboard: Asrock X570 Phantom Gaming 4
CPU: AMD Ryzen 3400G @ 3.7GHz (default)
GPU: Gigabyte GeForce RTX 3060 Eagle OC 12GB GDDR6 (GV-N3060EAGLE OC-12GD 2.0) @ (Core clock 1320 MHz / Boost clock 1807 MHz) (default)
System Memory: GOODRAM 32GB (2x16GB) 3600MHz CL17 IRDM PRO @2133Mhz (default DR 1066/15/15/15/36/50/1.20V, no XMP enabled)
OS Drive: Silicon Power XD80 2 TB M.2 2280 PCI-E x4 Gen3 NVMe
Asset Drive: Same
Power Supply: Thermaltake Toughpower GF1 750W
Operating System: Windows 10 Home 19044.1826
Nvidia Drivers Version: 516.93 (studio drivers)
Daz Studio Version: 4.20.0.17
Benchmark Results
2022-07-30 10:46:16.909 [INFO] :: Total Rendering Time: 4 minutes 25.25 seconds
IRAY:RENDER :: 1.0 IRAY rend info : CUDA device 0 (NVIDIA GeForce RTX 3060): 1800 iterations, 2.059s init, 260.058s render
Iteration Rate: (1800 / 260.058) = 6.923 iterations per second
Loading Time: (265.25 - 260.058) = 5.192 seconds
Comment:
With MSRP at $329, that would give 75.50 iterations per dollar per hour, which is still not bad (2nd place in the table, if I'm correct).
The 3060 Ti is at 96.07 iterations per dollar per hour (1st place), but the major bonus of the 3060 over the 8GB 3060 Ti is the extra 4 GB for larger scenes (although it is not as easy to hit that bottleneck as I thought earlier; the test scene takes around 5 GB of GPU memory while rendering).
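For reference, here is my guess at how the iterations-per-dollar-per-hour figure is derived: the per-second rate scaled to an hour, divided by MSRP. The quoted numbers in the thread land close to, but not exactly on, this formula, so treat it as an approximation.

```python
# Assumed formula for the thread's price/performance metric:
# (iterations per second) * 3600 seconds / MSRP in dollars.
def iters_per_dollar_hour(rate_it_per_s, msrp_usd):
    return rate_it_per_s * 3600 / msrp_usd

print(round(iters_per_dollar_hour(6.923, 329), 2))    # RTX 3060 12GB, ~75.75
print(round(iters_per_dollar_hour(0.32519, 556), 2))  # i9-9880H CPU-only, ~2.11
```

Since MSRP is the denominator, this metric shifts wildly with street pricing, which is why iteration rate alone is the more stable comparison.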
System Configuration
System/Motherboard: MacBook Pro (16-inch, 2019)
CPU: Intel Core i9-9880H 8-Core CPU @ 2.30GHz
GPU: N/A
System Memory: 16 GB 2667 MHz DDR4
OS Drive: Macintosh HD 1TB
Asset Drive: Same
Power Supply: 96W
Operating System: macOS 12.5
Nvidia Drivers Version: N/A
Daz Studio Version: 4.20.0.2
Benchmark Results
Total Rendering Time: 1 hours 32 minutes 22.17 seconds
IRAY:RENDER :: 1.0 IRAY rend info : Device statistics:
IRAY:RENDER :: 1.0 IRAY rend info : CPU: 1800 iterations, 2.905s init, 5535.202s render
Iteration Rate: (1800 iterations / 5535.202s) = 0.325191384162674 iterations per second
Loading Time: ((1 hour * 3600 + 32 minutes * 60 + 22.17 seconds) - 5535.202s render) = 5542.17 - 5535.202 = 6.968 seconds
CPU MSRP: USD556
iterations per dollar per hour: 2.1029
iterations per second: 0.32519
Most home PC UPS systems are rated for 600-900 watts because most people don't have PCs that draw that much power. My 1200 watt rated UPS trips overload protection if I run game benchmarks with ray tracing enabled using both RTX 3090s. However that's because each 3090 is drawing 700 watts. During 3D rendering they usually don't draw much more than 400 watts, even with an uncapped power limit. When I had a pair of RTX 2080 Supers with a higher wattage manufacturer's BIOS they would trip overload on my old 900 watt rated UPS.
It's hard to find a 120V home UPS rated for more than 1200 watts. A 1500-watt 120V UPS is usually a commercial rack-mount system or a floor unit the size of a mid-tower PC case.
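A back-of-envelope way to check a UPS against your parts before it starts beeping. The draw figures below are placeholders based on the worst-case numbers mentioned above, not measured specs; plug in your own.

```python
# Simple UPS headroom check: sum the worst-case draws of everything
# plugged in and compare against the UPS's rated output wattage.
UPS_RATED_WATTS = 1200

loads_watts = {
    "RTX 3090 #1 (ray tracing burst)": 700,  # placeholder, not a spec
    "RTX 3090 #2 (ray tracing burst)": 700,  # placeholder, not a spec
    "CPU, board, drives, fans": 250,         # placeholder, not a spec
}

total_draw = sum(loads_watts.values())
verdict = "overload" if total_draw > UPS_RATED_WATTS else "ok"
print(f"{total_draw} W draw vs {UPS_RATED_WATTS} W rating: {verdict}")
```

With these placeholder numbers the check reports an overload, which is consistent with the benchmark-induced tripping described above; a rendering-only workload at ~400 W per card would fit.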
The 4090 is not yet available, anyone who has one is bound by strict NDAs. It is of course quite possible that an Iray update will be needed before the new cards can be used at all.
No problem finding one for 240V, I can get a 1900W APC at 1500eur (VAT 24% included) in a week, a 2850W would cost about twice as much.
I already made one guess. Historically, Iray has fared much better than the video game performance uplift with RTX. The 2000 series saw a huge bump, and the 3000 series saw another very large one, both much higher than what games got. The ray tracing cores do a lot more work in Iray, so the actual uplift is closer to the performance increase of the ray tracing cores themselves.
The 4090 is supposed to be twice as fast as the 3090, maybe even slightly more, like 2.1x or 2.2x. The ray tracing cores once again see a bigger uplift; they are talking about a 2.2x to 2.5x increase.
So Iray should easily fall in between these, and likely on the high side. The 3090 does between 18 and 20 iterations per second in this benchmark. Doubling that should give us 36-40, and a 2.5x increase would put it right at 50. I actually think this is very possible.
None of the big tech outlets benchmark the ultra niche Iray, but some do bench Octane and Vray. Their results should give us a good clue.
But like Richard said, the 4000 series might not even work with Iray for a while. That is one of Iray's biggest failing points, you have to update to use a new generation of GPU, and the process of getting that update to consumers is painfully slow. The Iray Dev Team first has to release an updated version of Iray. We can only hope that they already have dev models of Lovelace on hand to get it done before launch, but we don't know. THEN we have to wait for Daz to release an updated version of Daz Studio that uses this new Iray. In other words we have to go through 2 separate verification processes in order to get an update.
And we all know how fast "Daz time" can be. <.<
Maybe we get lucky and it just works. Oddly it seemed to work with the Titan V pretty quickly back when it launched.
I'm fully expecting next-generation cards to work out of the box with Iray, given the lack of any entirely new ASIC "core" processing unit seemingly on the horizon in the GPU world right now (the big GPU compute initiatives - ray-traced rendering, AI processing, and conventional rendering - are already addressed by existing solutions). Although obviously only time will tell.
According to rumors Lovelace is not just a refresh, there is a change in core design from Ampere. Last time the Iray Dev Team snuck in Ampere support before it released. But there is no guarantee that they repeat that, and there is no guarantee whatsoever that Daz will quickly release a beta channel with the update, either. Last time, even when the Dev Team put out an update for Ampere before Ampere even launched, we still had to wait until October 13 for a Daz Studio beta that added support. The hardware released on September 17, so it took over 3 weeks, almost 4 for Daz Studio to get the update out even under the very best circumstances where they had access to the new Iray before the cards launched!
The fact that the Iray Dev Team even needed to post an update for Ampere proves you cannot simply plug the new cards in and expect them to work. As Lovelace is not just a refresh of Ampere, there is a good chance that Daz Studio support will not arrive until a solid month or more after the 4000 series launches.
Basically, I would not want to place bets on this. It has happened enough times (Daz Studio getting support well after launch) that history shows we cannot make assumptions about it. The key will be watching the Iray updates on their own site, not the Daz forums. If they mention Lovelace support soon after it gets announced, then that is a good sign DS will be able to get it quickly. Of course they cannot post anything about it yet, since Lovelace is not official. The days after the announcement will be key.
If they give no word, we may have to reach out to them and ask them directly if Lovelace will get supported by update, or if it already works.
So if anybody is selling their current card in order to get a 4000 series card, you might be waiting for that update. It is probably better to keep your current card for at least a little while to make sure you can use the new card you buy.