Comments
I know the double-VRAM cards were cancelled, but the Ti's should have more memory than their non-Ti counterparts. We just don't know how much yet until we get better rumors. I am hoping the RTX 3080 Ti is a slightly cut-down 3090 with at least 12GB on a 3090 board, which would mean two could be connected via NVLink and share memory. I upgraded my power supply to 1200 watts and installed an X570 motherboard, Ryzen 3700X, and 32GB of memory in anticipation of running a dual-card RTX 3000 system, and boy did Nvidia slap me in the face with their offerings. Nvidia may still have better raytracing than AMD, but if all the new games are built for RX 6000 series cards due to both PlayStation and Xbox using basically the same graphics hardware, games that are ported over to PC from consoles are going to have a distinct advantage on AMD hardware.
BTW if you want to see something funny, go over to YouTube and check out Robtech's channel and watch "Hitler Reacts to Radeon 6000 Graphics Card Announcement - NVIDIA". I hesitate to link it here because the language is definitely not family friendly, but I can imagine Jensen's conversation with his board was quite similar on Wednesday after AMD's announcement.
I know it may be wishful thinking but I am STILL hoping that the RX 6900 XT is powerful enough to force Nvidia to drop the price on the RTX 3090. We'll see.
That was... just... so much funnier than I expected.
If you really want more VRAM the best option remains, IMO, dual 2070 Supers with an NVLink bridge. You'll get nearly 16GB for around $1100 (depending on what you actually pay for the cards). A quick check of eBay shows used cards selling for around $500, and searching around the web shows there are some new cards at that price as well. I strongly doubt you'd have seen a 16GB card from Nvidia much below that, to avoid cannibalizing the 3090's sales.
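For anyone who wants to sanity-check that math, here's a trivial sketch. The $500 card price is the ballpark street price mentioned above, and the ~$100 bridge cost is just an assumption for illustration (bridge prices vary a lot, as noted further down the thread):

```cpp
// Rough cost/VRAM math for a dual 2070 Super + NVLink setup.
// Prices are ballpark/assumed, not official.
#include <cstdio>

int main()
{
    const double card_price       = 500.0;  // used/new 2070 Super, per the eBay check above
    const double bridge_price     = 100.0;  // assumed NVLink bridge cost; varies widely
    const int    vram_per_card_gb = 8;

    double total_cost  = 2 * card_price + bridge_price;
    int    pooled_vram = 2 * vram_per_card_gb;  // memory can be pooled over NVLink in Iray

    std::printf("~%d GB pooled VRAM for about $%.0f\n", pooled_vram, total_cost);
    return 0;
}
```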
I also doubt that Nvidia will be forcing their AIBs to cut the prices of the 3090s; it isn't clear that Nvidia even could. Nvidia sets the price of their FE card, and the AIBs usually produce at least one design to sell at that price, but most of those who sell in the US did not produce a design at the 3080 MSRP, and by all accounts the two AIBs who did produce designs at the MSRP are not rushing to restock them. There might be more profit margin in the 3090, but considering how annoyed the AIBs are reported to be at Nvidia over the Ampere launch, I doubt Nvidia is also going to try to cut their profits.
To reiterate a point from earlier, all modern efforts to introduce realtime raytracing into gaming-type workloads on Windows are being done via hardware-agnostic APIs like DirectX DXR. Meaning that games "ported" over from the new AMD-powered console generation will actually work just as well, if not better (judging by the lack of raytracing performance bragging in AMD's release events), on Nvidia cards right away.
All of the Xbox ones will use DXR because DirectX is how all of those games work. Sony could conceivably use Vulkan for PS5 ports, but even then the Vulkan API team has said they will hook into both vendors' drivers agnostically once they get the AMD RT details.
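To illustrate the hardware-agnostic point, here's a minimal sketch (my own example, not taken from any particular engine) of how a DXR title asks Direct3D 12 whether raytracing is available; nothing in it cares whether the GPU underneath is a GeForce or a Radeon:

```cpp
// Minimal DXR capability check: the game talks to D3D12, not to a vendor.
// Assumes Windows 10 (1809+) with the D3D12 SDK headers available.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device5> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_1,
                                 IID_PPV_ARGS(&device))))
    {
        std::printf("No suitable D3D12 device found.\n");
        return 1;
    }

    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                &opts5, sizeof(opts5));

    // Tier 1.0 or better means DXR works, regardless of which vendor made the card.
    std::printf("DXR supported: %s\n",
                opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0 ? "yes" : "no");
    return 0;
}
```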
I was actually thinking about doing exactly that, except I need a 3 slot bridge and finding one is like pulling hen's teeth. The few that are out there cost almost as much as a video card.
Lots of times being best in some metric is irrelevant when people are just trying to minimize the probability of encountering bugs by using as nearly the same GPU as they can. AMD GPUs being in those two consoles is a big deal. AMD putting 16GB in their new cards isn't aimed at us so much as it's aimed at the game market, the Unity/UE4 game-making hobbyist crowd, and lastly Blender, DAZ, and the like. Nvidia made a mistake being stingy with RAM on the 30x0s.
There's a 3 slot 2070 Super? Or do you just have a board with weird spacing?
AMD made no headway despite having been in both consoles in the last generation. It flatly does not matter. DirectX, and Vulkan, make the card irrelevant. The days of having to check if the game you're buying supports the card you have are 25 years past, literally, as DirectX just passed its silver anniversary, as amazing as that is.
As I've patiently and repeatedly explained in this thread, Nvidia made no mistake, beyond losing a few sales to the clueless and the epeen wavers, in choosing to cap its flagship at 10GB. There is not going to be some huge rush of games that actually take up 16GB of textures porting over from console.
First, the console game makers are cheap. Making all those textures would make those games a lot more expensive, and console gamers are not going to pay $100 for a game. The big volume games like Madden and FIFA certainly aren't going to spend a dime on bigger textures, not when they can't sell them as a separate transaction and the EU has dropped the hammer on that. Second, there is no need for them until 8K is an actual thing.
I have Horizon Zero Dawn for PC, which is likely the best looking console game of the last generation. It looked amazing on my friend's 65" TV at 4K 30fps on the PS4, and looks better still on both my 32" 4K monitor at 60fps and my 55" OLED. And that is without using more than 6GB of textures, or DLSS, or even real-time ray tracing.
Personally I think about the time the first 8K monitors start to become an actual thing, along with next-gen GPUs (or gen after next) that can actually drive one, we'll also have PCIe gen 5, which is double the bandwidth of gen 4, IOW four times the transfer speed of gen 3 per lane. Or possibly even gen 6, which is coming up right quick as well and again doubles bandwidth. At those sorts of transfer speeds (roughly 2 or 4 GB/s per lane) it may become essentially irrelevant where a texture is stored for a game. The entire system may just be so fast that the various latencies no longer matter at that point, assuming you use NVMe storage etc.
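For the curious, here's a quick back-of-envelope calculation of per-lane PCIe bandwidth across generations. The raw GT/s rates are the published spec numbers; the effective GB/s figures assume 128b/130b encoding and ignore protocol overhead, and gen 6 actually switches to PAM4/FLIT encoding, so treat that row as approximate:

```cpp
// Rough per-lane and x16 bandwidth estimates for PCIe generations 3 through 6.
// 128b/130b encoding assumed; gen 6 (PAM4 + FLIT) is only approximated here.
#include <cstdio>

int main()
{
    struct Gen { const char* name; double gtps; };
    const Gen gens[] = { {"PCIe 3.0",  8.0}, {"PCIe 4.0", 16.0},
                         {"PCIe 5.0", 32.0}, {"PCIe 6.0", 64.0} };

    for (const Gen& g : gens)
    {
        // 128 payload bits per 130 bits transferred; divide by 8 to get bytes.
        double per_lane_gbs = g.gtps * (128.0 / 130.0) / 8.0;
        std::printf("%s: %2.0f GT/s -> ~%.2f GB/s per lane, ~%.1f GB/s on x16\n",
                    g.name, g.gtps, per_lane_gbs, per_lane_gbs * 16.0);
    }
    return 0;
}
```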
No, there are a 3 slot bridge and a 4 slot bridge. This refers to the spacing between the PCI Express slots. The NVLink bridges are rigid, not flexible like an SLI bridge. With my motherboard, it requires a 3 slot bridge.
To reiterate another point from earlier, direct-to-SSD GPU functionality makes large framebuffers (for gaming-type workloads especially) wholly unnecessary. And given that AMD is actually ahead of Nvidia in terms of direct-to-SSD GPU integration (courtesy of those new consoles), there is actually less of a rationale right now for large framebuffers on new-gen AMD cards than on new-gen Nvidia ones for pretty much any use case other than pure content creation or deep-learning type things. Neither of which are task spaces in which AMD graphics really has a serious stake (as demonstrated by the lackluster state of Radeon hardware/software integration into professional apps, unlike what you see with Nvidia's CUDA etc.).
Frankly, the best reason I can come up with for AMD's baseline jump to 16GB is because people who work there think people who buy graphics cards are attracted to increasing multiples of 2.
Single slot liquid cooled RTX 3090s? Apparently it's going to be a thing...
https://wccftech.com/ek-asus-unveil-geforce-rtx-3090-rtx-3080-rtx-3070-liquid-cooled-graphics-cards/
Yea, the lack of DVI on these means it's a single slot, but I can't see people really throwing 7 in a single machine with the power/heat side of things.
I'll be happy with a pair :)
The main advantage I see here is not blocking your other PCIe slots, allowing you to use those for other cards, say quad NVMe storage cards, video capture cards, adapter cards, etc...
Getting rid of DVI makes sense. Who is using DVI any more?
Depending on the cost these could be viable for someone looking to do a custom loop. Probably the biggest time sink of building a loop including a GPU is disassembling the card and installing the water block. I'd pay a reasonable premium to have that done properly by someone else. Plus you don't void the warranty by removing the cooler.
Being single slot isn't really all that important though, as every motherboard out there separates its full-length slots by a full slot. I guess you could put something in the x1 slot that is usually in between, but you'd have to really be jamming stuff into the machine to need that slot.
So, if someone wants to use more than one of these, do they tie all the liquid lines to a single radiator in the case, or do they require separate ones? I did get a kick out of the line in the article "the I/O plate is made out of 304 Stainless Steel so it won't corrode". As someone who used to work in the chemical industry, if you think 304 SS won't corrode, I've got a bunch of stuff I'd like to sell you (just for fun, put some rubbing alcohol or a few drops of household bleach on 304 SS and watch what happens).
Of course, all of this is moot if the cards continue to sell out in under 30 seconds every time they become available...
The loop is really whatever you want... but yeah, you'd want a few rads for a full loop if you have a 3090 + CPU etc.
The point is that it's stainless steel, not aluminum. Since there will be copper elsewhere in the loop, having aluminum would cause galvanic corrosion.
As to how many radiators you'd need? That would depend on the size and thickness of the radiator(s), the speed of the pump, how high you want to run the fans, how many GPUs you want, etc.
I definitely wouldn't try cooling a pair of 3090's with a single 120mm radiator. That would be unwise.
You could probably do just fine with dual 3090s and a thick 420mm rad with 6 static-pressure 140mm fans set up for both push and pull, assuming you have adequate airflow to the fans and are not trying to OC the cards. But that's a lot of bulk and would be hard to fit in most cases. So I'd lean toward a pair of radiators, 420 or 360 most likely, given space constraints in most cases.
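If you want to put rough numbers on that, here's a back-of-envelope sketch. The ~350W per 3090, ~125W for the CPU, and ~100W of heat per 120mm of radiator are just common rule-of-thumb assumptions for illustration, not measurements; plug in your own figures for your actual parts:

```cpp
// Very rough radiator sizing heuristic, not a real thermal model.
// All wattage figures below are assumptions for illustration only.
#include <cstdio>

int main()
{
    const double gpu_watts        = 350.0; // stock-ish 3090, no overclock
    const int    gpu_count        = 2;
    const double cpu_watts        = 125.0; // mainstream CPU under load
    const double watts_per_120mm  = 100.0; // common quiet-fan rule of thumb

    double heat = gpu_count * gpu_watts + cpu_watts;            // ~825 W total
    double radiator_mm_needed = heat / watts_per_120mm * 120.0; // ~990 mm of 120mm-class rad

    // A thick rad in push/pull dissipates more per millimetre than a thin one,
    // which is roughly why a single fat 420 (or a 420 + 360 pair) can cope.
    std::printf("Heat load ~%.0f W -> roughly %.0f mm of 120mm-class radiator\n",
                heat, radiator_mm_needed);
    return 0;
}
```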
You make an excellent point here. In my previous life teaching freshman chemistry I used to get a chuckle when students would try to weigh out copper chloride into an aluminum weigh boat and then wonder why everything turned black. For many of them that was the first time they actually witnessed a solid state reaction so it was fun to ask them what was happening.
I'm not sure whether my case will support adding an additional radiator (I have one installed in the top of my case for my CPU)--I guess I could try mounting one in front and then switch the rear fans to pull air in. Right now I am still waiting for 3090's to be available and not out of stock everywhere. I think I'll start with air cooling and then consider other options once I see how the card runs (I expect it will be a significant upgrade from my current card in any event).
I would not mix loops. If you have an AIO for your CPU or an existing custom loop for your CPU, I'd build one loop for the entire system rather than have separate loops with multiple radiators and tubes all over the place. Unless you are just overclocking the heck out of the CPU, it likely isn't putting out that much heat, not compared to a 3090.
If you already have a top-mounted radiator, you certainly could mount one in front as well. You could set up the front one as intake, mount the fans "outside" the radiator to pull air into the case, and then set up the fans on the other radiator as exhaust. With the rear fan as intake you get some cooler air to cool the mobo VRMs and the rest of the system, as well as some cooler air for the second radiator. With both radiators in the same loop you'd likely get pretty good cooling.
If you're leaning towards WC'ing the system, just wait for the Asus EKWB cards. They're likely to be more available since demand won't be as high, but who even knows at this point, nothing is staying on shelves. I sort of want a 5600X this week, but I am not dealing with scalpers etc. to get a CPU.
Not entirely true. While DXR is a standard, AMD already provides an extension for it: https://github.com/GPUOpen-LibrariesAndSDKs/RadeonRays_SDK
In Radeon Rays 4.0, there is a possibility to target two more RTRT levels: one with intrinsics, and one with fully custom BVH traversal support. These are not exposed in the standard, but they will come in the next DXR. A preview should be out next year.
Xbox Series S and X run code written to DXR, but these consoles provide a better approach, and this is not exposed on PC, at least not in a standard way. Radeon Rays 4.0 allows you to do more or less the same with the DXR intrinsics extension, but this is not a standard.
Sony is using a different API, with even more features and much better programmability. There is a Radeon Rays 4.0 mode for it, but it is not fully compatible, so the compute shaders must be ported, even if it's not a huge job.
They don't have a choice. GDDR6X is freaking expensive, and the PAM4 signaling is super hard to deal with. The situation is much worse when they add more memory to the opposite side of the PCB.
Just a note that the official launch date for Ryzen 5000 CPUs is still November 5th. The Ryzen 5000 CPUs continue to look rather impressive in leaked benchmarks versus the latest Intel CPUs, enough so that Intel is leaking more details about their own upcoming product stack, which isn't expected to hit the market until sometime in 2021.
Anyways, I'd imagine that we'll see the independent reviews for the Ryzen 5000 CPUs on the 5th, so only 3 days to go. I suspect that they will get hoovered up by scalpers on launch day, similar to other recent product launches, but as I'm waiting until the upcoming year to do my hardware upgrade anyways, it won't affect me much. Not expecting any new Threadripper news in the short term, but apparently the new EPYC CPUs based on the new Zen cores are trickling out to longtime AMD partners (Amazon, Google, etc.), and perhaps we'll hear more about the new EPYC CPUs in late November, and almost certainly at CES, which falls on January 11th to 14th, according to the CES website.
Another interesting note is that apparently the new Ryzen CPUs, when combined with a 500 series motherboard and the new Radeon 6000s, will improve direct GPU memory access from the CPU, which may boost performance slightly in certain workloads. I look forward to seeing how this plays out in practice, via the independent reviews later this month. Of course, unless/until AMD sells Daz3D on Radeon ProRender, most of this won't matter to most people around here, who are heavily invested in the Iray bandwagon, but some people may be able to make use of it...
Maybe in Poserland though. Renderosity is releasing teasers for Poser 12 regularly, but as I don't use Poser myself I haven't been paying much attention.
Edit: Here's the Poser 12 update that talks about GPU based rendering:
https://www.posersoftware.com/article/488/poser-12-update-how-the-new-superfly-improves-art-render-time
Apparently P12 will support CUDA for older Nvidia cards, OptiX for newer ones, and AMD cards via OpenCL.
Latest news: it seems that Nvidia has moved the release date for the 3060 Ti from November 17th to December 2nd. Just something else to make you go hmm. Conspiracy theories abound, but I am really beginning to think that Nvidia is sitting on chips just to create an artificial shortage (or maybe saving the few chips they have for the later Ti card releases). You can only hang your board partners out to dry so many times before they go elsewhere. At this rate, EVGA may actually start making cards with Radeon GPUs.
I love a good conspiracy theory!
It would be the AIBs driving this pushback of the release date, most likely. I think most realistically there simply aren't that many chips passing QC, even at the level that would make for this chip, to satisfy demand. This would be the card that parents buy for their kids and that more casual gamers would want, so supply would need to be sufficient to actually last at least a few days on shelves.
If Nvidia was artificially doing this word would eventually get out and they would be screwed.
R.E. the Nvidia conspiracy theories:
I read some speculation that Nvidia is depressing their production numbers a bit while they work out the kinks in the manufacturing process so that they can improve their yields at Samsung as the process matures. Of course, rumormill so grain of salt.
Still seeing a steady trickle of leaked benchmarks for Ryzen 5000 CPUs, and AMD released a few more in-house benchmarks of the RX 6000 GPUs, so not much new to report. Intel has shared a couple of benchmarks for their upcoming CPUs (2021) as well, but of course grain of salt until the independent reviewers get their hands on the silicon and share their results.
A batch of the latest leaks show the 6 core Ryzen 5 5600X beating the i5-10600K in Cinebench (Tech Radar used the term destroyed, but my adblock won't let me view the site, even if I 'allow' it, so I won't read that site).
Of course, a number of US of A based tech journalists may be focused on something else today, so I'm not really expecting much on the tech news front today...
So we are still in the (relative) calm before the November AMD storm. Less than 2 days to go on the CPU front!
BTW, for those that might be interested in Dr. Ian Cutress' breakdown of AMD's quarterly results and the Xilinx deal, to pass the time... video clocks in at 1:30:55
I may have spoken too soon. AMD may be announcing its MI100 Instinct CDNA-based accelerators on November 16th...
https://www.aroged.com/2020/11/amd-unveils-instinct-cdna-compute-accelerators-nov-16/
I'm not sure how useful accelerators are for rendering, plus if you could use it, it'd mainly only be of help for Blender, Poser 12, and other 'brand agnostic' GPU rendering engines, plus it'll likely be pricey. Has anyone here tried using an AMD Instinct for rendering? I'm just curious, still planning on that 3090 early next year...