Comments
From the rumor mill just today, it looks like you called it. 24/20/10 are the configurations being bandied about now. Again, just rumors and we can't know, but your logic seems sound. Even with the middle card and NVLink, I would never have to think about VRAM again.
No. Linus was right first and then was wrong.
The hardware is NVME, as I've said twice. The firmware is custom to make the fetch efficient. You can even go watch his "apology" to see him fumble the explanation. If they had some super-fast hardware, do you really think the enterprise wouldn't have it? I'll say this again: the PS5 is a $500 unit; servers are $50k a unit. There are more servers in just the AWS space than PS5s Sony will sell.
NVidia owns PhysX. I know of no way to check whether a specific card supports the full library but you can even run a second card as a PhysX card. So I have a hard time believing they don't fully support their own tech on their own cards.
What makes an SSD is more than the process technology of its cells. "The hardware is NVME" is like saying my car is "internal combustion": technically correct but incomplete enough to still call it wrong. And if we were all a firmware flash away from this kind of performance, you're right, we'd all have it, so it is not just firmware either.
Linus's mistake was thinking that it was merely that, a faster SSD, and not what it is: a totally different bus architecture. PCs will not embrace it, but the PS5 can, because it isn't beholden to compatibility with a universe of peripherals, over which Sony has no control, that can't connect to this new bus architecture.
They are free to innovate in their closed proprietary system, and that is what they've done. Remember the Cell processor in the PlayStation? It was awesome enough to make it into supercomputers, but it didn't exactly take the PC world or the enterprise by storm, did it? That the enterprise didn't pick it up is not the indictment that you think it is... the choice to do so is probably a lot more nuanced than you think it is.
You complained about me saying the HW was NVME! IT IS NVME! As you now admit. There is no new bus architecture. PCIE is the bus. Again, if they had developed a new bus, we'd have it. Why in the world you think a bus would somehow be better for moving one kind of electrical impulse than another is mind-boggling. PCs have switched bus architectures repeatedly. I forget what preceded VLB and EISA, but something did back in the 386 days. Then PCI replaced VLB. Then PCI-X came out. Then came PCIE gen 1, then gen 2, then gen 3, and we're in the middle of the switch to gen 4, with gen 5 coming in the next 2 or 3 years. If they have a bus faster than gen 5, great, but they don't, and they don't have a GPGPU that fast anyway.
Firmware is SOFTWARE. The firmware in question is specifically optimized for moving textures and other game assets, or so they claim. Since that just sounds like compression, we'll just have to wait and see.
But wow, just wow. Cell isn't in supercomputers. I know it was, a decade ago, in a failed IBM system, but emphasis on failed. IBM was trying to save the failed PowerPC architecture (again, note failed), and Cell was based on PowerPC. I did some checking around, and Cell has been dead for over a decade.
No, let me make this very clear. When you build a supercomputer now, what you do is take a whole lot of enterprise components (or, for what is known as a Beowulf cluster, a bunch of PCs) and just build a big datacenter with specific networking. Sometimes, for the guys going for the records, you'll get AMD or Intel to build you some custom stuff, but that gets cost-prohibitive really fast, and when you're dealing with 2,000+ nodes, adding $25,000 to the cost of each node takes these projects out of the range where even governments want to pay for them. So there is software in the supercomputing space most of us in the enterprise don't care about (baton waving for clusters that big isn't really interesting to me), but their hardware is our hardware.
I complained not about you saying it was NVME, but about you acting as if there was no innovation.
The bus architecture is more than the transport spec, which is what SCSI, ISA, EISA, VESA, and PCI are. It is a relatively dumb system responsible for reliably getting bits from one system component to another, agnostic of the purpose of those bits, because it has to work for diverse devices. But the interpretation of those bits is the job of the controller connected to the bus. The PS5's SSD controller is totally new and represents an innovation.
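If it helps, here is a toy sketch in plain Python of the split I mean (an analogy only, not actual PCIe or PS5 code): the "bus" just moves bytes and checks their integrity, and the "controller" is what decides what those bytes mean.

```python
import zlib

def bus_transfer(payload: bytes) -> bytes:
    """Dumb transport: deliver the bytes and verify a checksum, nothing more."""
    frame = payload + zlib.crc32(payload).to_bytes(4, "big")
    data, crc = frame[:-4], int.from_bytes(frame[-4:], "big")
    assert zlib.crc32(data) == crc, "integrity error on the bus"
    return data

def ssd_controller(data: bytes) -> str:
    """The controller is where interpretation (and any real innovation) lives."""
    return data.decode("utf-8").upper()

print(ssd_controller(bus_transfer(b"texture block")))
```

The bus function would look the same no matter what device sits on the other end; swapping in a smarter controller is where the PS5-style work happens.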
Compression is part of it, Kraken I think I read, but that is not all of it. If it were, again, everyone would have it.
It was in IBM's Roadrunner. A supercomputer.
I'm not sure what the precise definition of a supercomputer is, but from your description of what you think one is, I'm reasonably sure you've never built one.
I don't know how far down the top 100 you'd have to go to find a supercomputer based on commodity parts, but there isn't one in the top four. But by then, they are already an order of magnitude slower than Summit.
Their hardware is not your hardware. Supercomputers are one-offs with custom hardware based on commodity parts, like OpenSPARC or POWER, but with custom logic implemented on FPGAs, like the QCDOC architecture. The fact that you say "customers" plural indicates that you're talking about something else; a supercomputer is built for a specific task, by a specific research house, for a specific user, such as IBM for the Department of Energy. No one else needs to simulate nuclear reactions regardless of the cost, which in Sierra's case was $330 million. It's not a matter of wanting or not wanting, it's a matter of needing to do something that can't otherwise be done, say under a treaty that doesn't allow you to test a real nuke, at any cost.
Getting back to the topic, the PS5 represents a real innovation, and its SSD architecture is a large part of that.
People, can we get back to discussing Ampere and quit flaming each other? I don't want to see another thread closed because someone is trying to start a fight.
I am still hoping that Big Navi actually kicks Ampere's derrière this time around, so maybe we can get some cheaper render cards. Not gonna happen, because Nvidia will kick some serious... you know, but I am still hoping that they actually release a 3070 Super with 16 GB of memory and maybe an NVLink connection.
The top 4 are all commodity components!
The #1 was just a big splash because it is the first time it was ARM-based! Do you think someone spends a few billion to design custom silicon for a couple of thousand CPUs? How would they ever get those even remotely debugged? The top 10 are ARM, Xeons, POWER9, Epyc/Ampere, and one Chinese thing. You can look up their networking. They all use commodity components there; IIRC the plurality use InfiniBand.
I never once wrote "customers" when discussing supercomputers, but thanks so much for putting words in my mouth; it's always so honest when someone does that.
Now, onto the bus. You seem to have no idea WTF you are talking about, to be as nice as possible. The bus does not interpret anything. Bits come in and bits go out. The bus has two main jobs: to transmit the data and to guarantee the integrity of that data. That's it. PCIE also provides some power, but that is not part of the bus part of the spec. Above the serial bus part of PCIE there are some other protocols, but they just control how data is transmitted, not interpreted. In very simple terms, they make sure one side isn't sending when it should be receiving and make sure that the packets are read in the correct order.
As to the PS5 SSD firmware, we don't know what it is because they haven't delivered. When they do, we'll know. I anticipate it not being anything near as groundbreaking as they claim. Because, again, there's lots more money to be made elsewhere if it was.
Roadrunner, IOW, as I said, was a failed project based on the failed PowerPC architecture.
There is no precise definition of supercomputer, but I have built a Beowulf, back around 2005, out of Pentiums. It was mostly just curiosity about how powerful it would be. It's pretty easy to get that sort of thing going. Pick up 10 or 20 (or more) junk desktops and switches. Find a guide to setting up the network and installing one of the distros. Then find something for it to do; I had mine crunch SETI@home. If you can install a Linux distro and set up an Ethernet switch, you can handle setting up a Beowulf, assuming you don't need to do any repairs on the boxes.
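For anyone curious what running something on a pile of boxes like that looks like, here's a minimal sketch, assuming MPI and mpi4py are installed on every node and passwordless SSH is set up between them (the node names in the hostfile are just placeholders):

```python
# hello_mpi.py -- smoke test for a small Beowulf cluster
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()            # this process's ID within the job
size = comm.Get_size()            # total processes across all nodes
host = MPI.Get_processor_name()   # which box this rank landed on

print(f"Rank {rank} of {size} running on {host}")
```

You'd launch it from the head node with something like `mpirun --hostfile hosts -np 16 python3 hello_mpi.py`, where `hosts` lists each machine (e.g. `node01 slots=4`). If every rank reports in, the cluster plumbing works and you can move on to real workloads.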
While it's interesting watching the walls of text going back and forth re: the Sony SSD thing, let's get back on point, guys and gals! Agree to disagree, please!
Someone mentioned earlier the 'gulf' that currently exists between the RTX Titan and the RTX 2080 Ti (24 GB vs 11 GB). I'd just like to point out that just a generation ago, the Titan X and Xp had 12 GB at the same time the 1080 Ti had 11 GB. So we have 'precedent' for, say, a 24 GB Titan vs a 20 GB Ti, if that rumor holds true. As noted above, we had another leak today that indicates we may be looking at a 24/20/10 GB split, at least at launch. Per that rumor, other cards will come later.
I'd link Tech Radar's article, but their anti-adblock measures (as well as Tom's) piss me off. Even if I whitelist/allow their site with adblock, their web code still won't let me view the content. Hitting the esc key sometimes helps, until you try to scroll...
There was a PCB shot over at Chiphell today (shared on WCCFTech), but I don't speak/understand that language, so other than relying on others' translations of the article (which seems to be focused on the choke configuration), I can't tell you exactly what is being discussed in the Chiphell forum ATM.
No, let's talk about Intel! A company that actually provided information about its new products today as it battles to remain relevant. I'm not entirely sure what market they are targeting, what with being squeezed by AMD on one side and ARM on the other, but they're trying. Something. Mind you their really big Xe-LP reveal is scheduled for September 2nd, and probably no one will be listening a couple of days after the Nvidia announcement. They'll have to give people a reason to want their GPUs, integrated or otherwise. I wonder what it will be.
https://www.theverge.com/circuitbreaker/2020/8/13/21365544/intel-tiger-lake-11th-gen-xe-graphics-gpu-preview-first-look-architecture-day-2020
Summit is POWER, not ARM.
The interconnects are all custom FPGAs. China has two in the top 5, and they cannot even get these parts because of ITAR and EAR regulations.
You said that even governments didn't want to pay 2000 x $25,000. The entity that pays is the customer.
That is literally exactly what I said. The bus interprets nothing, it is just a transport as in the OSI model. The innovation is in the controller. I didn't say spec, I said architecture, implying that there is more than one component, i.e. the communicating sides.
That is not a convincing argument.
That is an insult to everyone who has ever designed and built a real Beowulf cluster.
Sorry, @billyben_0077a25354 you are right, that was my last volley, I promise.
@Sevrin I can't remember where I read it, but I was under the impression that Xe was going to be for mobile first?
Probably? But lots of people render on laptops nowadays. We'll see in a couple of weeks?
Summit isn't #1; Fugaku is. The rankings came out in June.
https://www.top500.org/lists/top500/2020/06/
Sure are a lot of InfiniBand systems on that list. The 2 Chinese ones seem to use custom FPGAs, but they sort of have to, and they could just as easily be using stolen IP, since to the best of my knowledge no non-Chinese person has ever seen either system.
No, you specifically wrote "The fact that you say "customers" plural indicates that you're talking about something else," not "You said that even governments didn't want to pay 2000 x $25,000. The entity that pays is the customer." I await your apology, not continued dissembling.
Actually, what you wrote is "The Bus Architecture is more than the transport spec," not what you are now claiming. You are now trying to equate the bus not just with the transport layer and the comm protocols but with the software on top of that.
Yes, you have been very insulting to me, since I've built a Beowulf. They really are very easy to build, as anyone who has done so can tell you.
https://beowulf.org/overview/faq.html
https://www.linux.com/training-tutorials/building-beowulf-cluster-just-13-steps/
Intel said Xe-HPG (HP-Gaming) will be the last product to launch in the Xe family and won't be available until 2021, probably late 2021 at that.
You won't be able to use Xe for Daz GPU rendering anyway. Iray is Nvidia only.
It would be nice if somebody could make a layer over the other GPU drivers that spoofs an Nvidia driver.
...so Intel is taking another crack at making GPUs
2021 is too long to hold my breath.
Yes, well, sorry. I had actually looked up the information only yesterday, because these cards are getting really stuffed with specialized math algorithms, and the website I chose from search claimed only a subset of collision calculations were directly optimized on the nVidia GPUs. But it was wrong, thanks, and the https://en.wikipedia.org/wiki/PhysX page (which is not the site that told me wrong) says otherwise, which is more than good enough for me. I'll be happy to let nVidia continue to grow the size of the PhysX library and add that to their GPUs. The next 3 years should be really exciting with regard to storage capacity, RAM capacity, GPU and CPU speed, and power consumption.
Another day, another leak:
https://wccftech.com/nvidia-geforce-rtx-3090-ampere-gaming-graphics-card-pcb-pictured/
In the article, we see a non-reference card (from Colorful, possibly?) with (presumably) 11 VRAM chips surrounding the GPU, on the BACK of the card (note the screw heads). The article suggests that said chips would probably be mirrored on the front of the card, and notes that a recent rumor on the Baidu forums suggests there may be a 22 GB card floating around. There could also be another pair of VRAM chips on the card (at the bottom, maybe), but since that part is obscured/blurred out, there's no way to really know.
So yeah, even more confusion for the rumormill... and we still have a bit over 2 weeks to go!!!
Well this is rather disappointing.
Micron Confirms Nvidia GeForce RTX 3090 Gets 21 Gbps GDDR6X Memory
Well, there's always a possibility of a "ti" after the number later on. Oooh, oooh, and a "SUPER"!
I take it you mean this is what is disappointing. I agree. I think I'll look for another hobby.
@Sevrin That's funny and not funny at the same time.
Yes, me too, and I hear sheep herding needs people.
I would treat the information about the amount of VRAM with a grain of salt, because the Micron source is only showing it as an example, and the figure is wrong for other GPUs as well. Probably Nvidia's plans have since changed; currently, the leaked board is showing 24 GB. Expect it to cost two legs and an arm, though.
Move to Blender; same hobby and not tied to Iray, which is what Studio effectively requires - at least imo.
... And a Kidney.
Oh, and 21 Gbps is the speed of the RAM not the amount; they state the amount and speed separately.
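For anyone mixing the two up, here's the back-of-the-envelope math, with the caveat that the 384-bit bus width is an assumption based on the rumored 24 GB / 12-chip layout, not anything confirmed:

```python
# GDDR6X bandwidth estimate: per-pin speed times bus width, divided by 8 bits per byte.
# Capacity (e.g. 24 GB) is a completely separate spec.
data_rate_gbps = 21        # Micron's quoted per-pin GDDR6X speed
bus_width_bits = 384       # assumed, not confirmed
bandwidth_gb_s = data_rate_gbps * bus_width_bits / 8
print(f"Theoretical peak bandwidth: {bandwidth_gb_s:.0f} GB/s")   # ~1008 GB/s
```

So the 21 Gbps figure tells you how fast the card can move data, not how much it can hold.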
So, has DAZ commented on how long we should expect before DAZ Studio iRay will support Ampere?
Right now I'm fine with my RTX 2080 for games; DAZ is the big incentive to upgrade, and if it takes months before an RTX 3090 or 3080 Ti will even work in it, then I'll just hang tight in the meantime.
I don't think it's actually down to Daz per se; my guess is it (an updated version of Iray, if needed) will have to come from Nvidia, then be added into DS (after testing etc.)... and quite possibly there isn't even an 'official' answer yet. So yes, hang tight basically, let the reviews hit, then go from there :)
Honestly, unless you're using it for actual paid work or you're a baller, the thought of spending over a grand on any graphics card seems completely ridiculous to me. Anyway on the software thing, if it requires API changes it's not transparent to Daz so some code and a new release will be needed. If the API stays the same, it's all down to the NVIDIA drivers. I suspect the former actually (new hardware architecture), so it'll be a while before it's in Daz.