Nvidia Ampere (2080 Ti, etc. replacements) and other rumors...


Comments

  • AMD is also not anywhere near release. They are the ones who will respond to Ampere. Nvidia may then respond, as they did with the Super line. But these initial cards will not be a response to some mythical big Navi, which, if it even happens, is still at least a couple of months away. 

    What we know, and we know it for a fact, is that Nvidia wants a flagship release on shelves by Black Friday. They like that buzz in the press and they like that bump in their sales volume. They certainly are not going to hold up a release waiting for AMD. The question that remains unanswered is whether AMD will push back Navi 2 (which is what they say is coming this year, not "big Navi", which they have never even said is a thing) to get Ryzen 4000 on shelves by Black Friday as well, since this all goes through TSMC's 7nm process and there are only so many wafers. Personally I'm betting we see Ryzen 4000 before any new Radeon cards, because that is where AMD is making money right now.

    And no, they could not release cards with 12GB of VRAM and software-lock it to 10. People would lose their shit. Those GDDR chips can be counted, and each one has its capacity printed right on it (1, 2 or 4GB). The guys who buy the first cards on the market are the ones most likely to water-cool them. They'd get the coolers off, see 12GB instead of 10, and it would be a huge scandal. Further, those things are expensive: a 1GB GDDR6 chip costs the manufacturer right around $12 US, so putting on two that you then turn off adds a lot of cost that they can't pass on to consumers unless they want to get sued when the truth comes out.
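    (Quick back-of-the-envelope on that, in Python, taking the ~$12 per 1GB GDDR6 chip figure above at face value; the chip price is the assumption here, since real contract pricing isn't public:)

        # Rough BOM sketch: what soldering VRAM you then disable would cost.
        price_per_chip_usd = 12.0   # assumed, per the figure quoted above
        chip_capacity_gb = 1

        populated_chips = 12        # chips physically on the PCB
        enabled_chips = 10          # chips the card actually exposes

        dead_cost = (populated_chips - enabled_chips) * price_per_chip_usd
        print(f"Advertised: {enabled_chips * chip_capacity_gb} GB, "
              f"actually on the board: {populated_chips * chip_capacity_gb} GB")
        print(f"Silicon paid for but switched off: ${dead_cost:.2f} per card")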

    No, the PS5 is not going to use an SSD in place of RAM. That marketing speak is really, really bad. They're claiming the SSD has very specific custom firmware meant to integrate with the APU such that it is very efficient at fetching assets. The APU will still have regular RAM and will still use it as regular RAM and as VRAM. Of course the SSD will be used as virtual RAM. That's been standard on every OS for the last 20+ years. That's not some new thing. Your PC does it, and so has every one since Win95 (or maybe Win 3.1, I forget when it was added).
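    (For anyone wondering what "virtual RAM" means in practice: the OS maps data on disk into a process's address space and pages it in on demand. A minimal Python sketch of that decades-old mechanism; the file name is made up for illustration:)

        import mmap

        # Map a file into memory. Reads from `view` look like ordinary memory
        # access; the OS pulls pages in from disk only when they are touched.
        with open("asset_pack.bin", "rb") as f:       # hypothetical file
            view = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
            header = view[:16]    # first access pages these bytes in from disk
            print(header.hex())
            view.close()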

    No, Nvidia is not worried at all about the new consoles. The announced graphics specs are beaten by Turing cards right now. I think people are getting caught up by some outlets saying the PS5 will support 4K 120fps. That's not what the games will run at; that's just the media describing HDMI 2.1. The actual GPU is described in terms of GPGPU performance, and it is nothing all that special (10.28 teraflops), which is roughly a 2080 (10.1). So assuming Nvidia does their usual generational step, the 3070 will beat the PS5.
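    (Those teraflop figures are just shader ALU count x 2 FMA ops x clock, so anyone can sanity-check them; a rough Python check using the publicly quoted boost clocks, which makes these peak numbers rather than sustained ones:)

        def fp32_tflops(alus, boost_ghz):
            # each ALU retires one fused multiply-add (2 FLOPs) per cycle
            return alus * 2 * boost_ghz / 1000

        ps5      = fp32_tflops(36 * 64, 2.23)  # 36 CUs x 64 ALUs, up to 2.23 GHz
        rtx_2080 = fp32_tflops(2944, 1.71)     # 2944 CUDA cores, ~1.71 GHz boost

        print(f"PS5 GPU  ~{ps5:.2f} TFLOPS")       # ~10.28
        print(f"RTX 2080 ~{rtx_2080:.2f} TFLOPS")  # ~10.07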

    Dude, Nvidia is launching before AMD to beat them to the punch. What Nvidia is doing is a preemptive strike. They are striking first, and by doing so, AMD will not be able to claim any superior hardware at all. If AMD had managed to get to market first, they would have had a GPU faster than the 2080 Ti. Even if Nvidia was launching Ampere the next month, AMD would still be able to say they had the fastest consumer graphics card in the world. It has been a very long time since AMD has been able to make such a claim at all. It would be huge news all around the tech industry, and do not doubt for a single second that people would be buying whatever they had. AMD has made big strides and some people are just tired of Nvidia.

    Instead, by launching first, Nvidia will leapfrog AMD before they even get out of the gate. This is a basic strategy of war: attack before they can attack you. The tech conversation will be focused on Ampere more than AMD, even when AMD finally launches. And unless AMD can outright beat Nvidia, their sales will be lower than if they had launched before Nvidia did. People are waiting for the next generation, and ready to snatch something up.

    And yes, Nvidia is concerned about consoles. Sure, Turing is faster than an Xbox Series X, but only with a $1200 piece of hardware! The 2080 Super is a pretty penny, too. That doesn't include the rest of the PC! A Series X will not require you to buy anything else, except obviously games. So a 3070 beating a PS5 or Series X... well, it better! And let's not forget the 2070 was $600 at launch. So you are comparing what may be a $600 GPU to a fully fledged box that is a complete package. Also, everybody knows that because consoles are designed to be gaming machines from the ground up, they can punch above their class and do more than a supposedly equivalent PC. Consoles are not weighed down by things like Windows or motherboard designs. The CPU and GPU are on a single package together in a chiplet design with pure VRAM right there close by. On a PC these things are physically separated and all that data has to travel across various buses and things. Consoles remove much of this congestion. Just look at The Last of Us Part II on PS4 Pro, which is only 4 teraflops. Imagine what they can do with more than double that power, plus an SSD and ray tracing.

    Did you not see the Unreal 5 demo that was confirmed to be running on PS5 hardware? There are not many gaming PCs that can handle what was shown there, and the ones that can certainly cost a LOT more than a console.

    There are going to be people who decide to buy a console because the GPUs that are better than consoles are outrageously expensive, and good gaming PCs can be very costly. This is one of the main selling points of consoles in the first place! Remember, a number of years ago PC gaming was basically left for dead. But it began to make a comeback. Why? Because the price of building a gaming machine came down. This was during the 4-core era, and almost any CPU was good enough to play video games. The Xbox 360 and PS3 had been out for ages, and gamers were hungry for better hardware. Then the Xbox One and PS4 were kind of weak, so you could easily build a PC that beat them for roughly the same price as those consoles. You could build a PC with an i3 and a 750 Ti that was about the same. A 750 Ti!

    Think about that. A 750 Ti was all it took to match a console. The 750 Ti was less than $150 brand new, and was sold for as little as $120. PC gaming made a huge comeback, and cheap hardware was the biggest reason why. Fast forward to now... you need to drop what, $800 to $900 to match the Series X??? That is not cheap anymore! And if the 3070 matches the PS5, like I said before, that's probably a $500-600 card by itself. You cannot build a PC that matches the consoles in both price and performance. It is just not possible like it was before. So sure, you can match the performance... but not the price.

    If Nvidia is not worried about consoles, they seriously should be. Because of their prices, their cards can't just be faster than consoles; they need to be laughably faster than consoles. The 3070 can't just match a PS5, it needs to beat it by a wide margin.

    Just FYI, Halo's multiplayer mode is already confirmed by MS themselves to run at 4K 120 frames per second. So... yeah, there will be 120 FPS games out there. It is not just a bullet point for marketing. I can imagine a lot of first-person shooters and other esports-style games will target 4K 120. It is going to happen and be a thing.

    Virtual RAM has been around a long time, but it has never been this fast or quickly accessible. The throughput speed is only PART of the equation. Both consoles are able to access the data on the SSD faster than a PC can. You cannot plug in an SSD that is as fast as the PS5's and expect it to be as fast in practical use. PCs are simply not designed that way. While Nvidia is developing NVCache and AMD has their version, these are still not the same as the streamlined design that the consoles will have.

    Let me give you another example of how far the console VRAM will go. Most PS4 developers designed their games to run with about 30 seconds of gameplay in VRAM. OK. For the PS5, they are targeting 1 to 2 seconds of gameplay. Let that sink in. They are only keeping a mere couple of seconds of gameplay in that 13.5 GB of VRAM. That is crazy efficient. What this means is that the games will be incredibly streamlined. The game files themselves will take up less space, because current games must duplicate their data all over the hard drive so that it can be found faster. And then you add in the fact that they only need a couple of seconds in VRAM at a time? Where do you think the data is coming from then, if it is not in VRAM? It is not coming from some magical place, it is coming from the SSD. The PS5 may be using virtual RAM, but it is doing it in a way that no PC has done before, or is even capable of doing.
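    (Rough math on that claim, using Sony's published figures of roughly 13.5 GB available to games and 5.5 GB/s of raw SSD throughput; the fraction of the working set replaced per window is my own guess for illustration:)

        game_ram_gb       = 13.5   # memory a PS5 game can use (Sony's figure)
        residency_seconds = 2.0    # "1 to 2 seconds of gameplay" kept resident
        turnover_fraction = 0.5    # assume half the working set is replaced per window

        needed_gbps  = game_ram_gb * turnover_fraction / residency_seconds
        ssd_raw_gbps = 5.5         # PS5 SSD raw read speed (Sony's figure)

        print(f"Sustained streaming needed: ~{needed_gbps:.1f} GB/s")              # ~3.4
        print(f"Headroom vs the raw SSD speed: ~{ssd_raw_gbps / needed_gbps:.1f}x")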

    What this means is that Sony could make games that are just impossible on a modern PC, even if the PC has an SSD. There is literally no way around this... unless you increase VRAM.

    They demonstrated this with Ratchet and Clank in the PS5 show. You now have an ability that opens portals to other worlds at any time. And not just a portal to one other world; it can open portals to a wide variety of entirely different worlds, with different enemies, different AI, different everything. And it can do this instantly during the game. There is no PC game that can match this exactly. It would need to be able to store ALL of that data for every single area available to the portal in VRAM at the same time. It should not take much effort to see that this would require a LOT of VRAM. That is why, way back when Valve's Portal came out, you could only use a portal back to a specific place. There were only two locations possible at any time, and the locations were very similar, often the same map even. It is nothing like what this new game is doing.

    In this video, the dev explains in very clear language that the SSD is why they are able to load these new worlds on the fly. You can call it virtual RAM or whatever you like; the fact is that they are using this SSD like RAM. That is, by definition, virtual RAM, LOL. But unlike a PC, which resorts to virtual RAM when actual VRAM and RAM are low, the PS5 is using virtual RAM constantly. You know, like RAM. :)

    From the rumor mill just today, it looks like you called it. 24/20/10 are the configurations being bandied about now. Again, just rumors and we can't know, but your logic seems sound. Even with the middle card and NVLink, I would never have to think about VRAM again.

  • kenshaw011267 Posts: 3,805
    No, the PS5 SSD is not faster than those on PC's. It's just a standard NVME one. If it was some better standard the enterprise would be all over that stuff and we have a shit ton more money than console peasants. It's just they claim, remember this is all unverified marketing stuff, that this firmware makes the fetching of assets more efficient. No such claim about virtual RAM has been made.

    Emphasis is mine.

    That's exactly what Linus at Linus Tech Tips thought as well. He later discovered that he was so wrong that he later published the best apology video I've ever seen, for violating the trust his viewers put in him, his words. Descriptions of what they've done with the architecture are all over the internet and I'm surprised a "hardware guy" like you hadn't heard about it.

    Saying objectively incorrect things but with great passion and conviction does not make them any more correct.

    No. Linus was right first and then was wrong.

    The hardware is NVME. As I've said twice. The firmware is custom to make the fetch efficient. You can even go watch his "apology" to see him fumble the explanation. If they had some super fast hardware do you really think the enterprise wouldn't have it? I'll say this again, PS5 $500 unit, servers $50k unit. There are more servers in just the AWS space than Sony will sell.

  • kenshaw011267 Posts: 3,805

    Well, regarding getting much more VRAM sooner rather than later: as anyone who has rendered a complex DAZ scene in Iray knows from having their scene kicked out of VRAM to a CPU render because of its size, if Nvidia is serious about ray tracing in their GPUs, those GPUs must have the VRAM to support it. And with AMD adding hardware ray tracing, AMD must as well. Both of those are nice, because it means GPUs will have an overabundance of VRAM for 99.9999999% of other GPU tasks, so you can buy GPUs based strictly on performance if you don't need ray tracing.

    I think GPU efficiency gains are starting to stagnate, and more VRAM, wider buses, and smaller dies are where they are going to have to push for performance over the next few years. There comes a time when the most efficient algorithms for GPUs are done relative to ROI, and they are nearly there. I mean, we don't expect them to add entire physics engines (some stuff is already added) next, do we? Well, maybe they will.

    Nvidia has supported entire physics engines on GPUs for quite some time: PhysX, HairWorks, and I think one I can't remember the name of. The big issue, in the gaming world at least, is that since they are proprietary, people with AMD cards get much worse visuals or performance when playing the game. IIRC it was The Witcher 3 that had a bunch of these features and looks amazing on a good Nvidia card with everything turned on, and just awful on any AMD one.

    Really, I read that they only supported parts of the PhysX engine, not all of it, as it's quite large and a long term growth proposition. Unity uses the PhysX library.

    NVidia owns PhysX. I know of no way to check whether a specific card supports the full library but you can even run a second card as a PhysX card.  So I have a hard time believing they don't fully support their own tech on their own cards.

  • No, the PS5 SSD is not faster than those on PC's. It's just a standard NVME one. If it was some better standard the enterprise would be all over that stuff and we have a shit ton more money than console peasants. It's just they claim, remember this is all unverified marketing stuff, that this firmware makes the fetching of assets more efficient. No such claim about virtual RAM has been made.

    Emphasis is mine.

    That's exactly what Linus at Linus Tech Tips thought as well. He later discovered that he was so wrong that he later published the best apology video I've ever seen, for violating the trust his viewers put in him, his words. Descriptions of what they've done with the architecture are all over the internet and I'm surprised a "hardware guy" like you hadn't heard about it.

    Saying objectively incorrect things but with great passion and conviction does not make them any more correct.

    No. Linus was right first and then was wrong.

    The hardware is NVME. As I've said twice. The firmware is custom to make the fetch efficient. You can even go watch his "apology" to see him fumble the explanation. If they had some super fast hardware do you really think the enterprise wouldn't have it? I'll say this again, PS5 $500 unit, servers $50k unit. There are more servers in just the AWS space than Sony will sell.

    What makes an SSD is more than the process technology of its cells. "The hardware is NVME" is like saying my car is "internal combustion": technically correct but incomplete enough to still call it wrong. And if we were all a firmware flash away from this kind of performance, you're right, we'd all have it, so it is not just firmware either.

    Linus's mistake was thinking that it was merely that, a faster SSD, and not what it is: a totally different bus architecture that PCs will not embrace, but the PS5 can because it isn't beholden to be compatible with a universe of peripherals that can't connect to this new bus architecture, over which they have no control.

    They are free to innovate in their closed proprietary system, and that is what they've done. Remember the Cell processor in the PlayStation? It was awesome enough to make it into supercomputers, but didn't exactly take the PC world nor the enterprise by storm, did it? That the enterprise didn't pick it up is not the indictment that you think it is... the choice to do so is probably a lot more nuanced than you think it is.

  • kenshaw011267 Posts: 3,805
    No, the PS5 SSD is not faster than those on PC's. It's just a standard NVME one. If it was some better standard the enterprise would be all over that stuff and we have a shit ton more money than console peasants. It's just they claim, remember this is all unverified marketing stuff, that this firmware makes the fetching of assets more efficient. No such claim about virtual RAM has been made.

    Emphasis is mine.

    That's exactly what Linus at Linus Tech Tips thought as well. He later discovered that he was so wrong that he later published the best apology video I've ever seen, for violating the trust his viewers put in him, his words. Descriptions of what they've done with the architecture are all over the internet and I'm surprised a "hardware guy" like you hadn't heard about it.

    Saying objectively incorrect things but with great passion and conviction does not make them any more correct.

    No. Linus was right first and then was wrong.

    The hardware is NVME. As I've said twice. The firmware is custom to make the fetch efficient. You can even go watch his "apology" to see him fumble the explanation. If they had some super fast hardware do you really think the enterprise wouldn't have it? I'll say this again, PS5 $500 unit, servers $50k unit. There are more servers in just the AWS space than Sony will sell.

    What makes an SSD is more than the process technology of its cells. "The hardware is NVME" is like saying my car is "internal combustion": technically correct but incomplete enough to still call it wrong. And if we were all a firmware flash away from this kind of performance, you're right, we'd all have it, so it is not just firmware either.

    Linus's mistake was thinking that it was merely that, a faster SSD, and not what it is: a totally different bus architecture that PCs will not embrace, but the PS5 can because it isn't beholden to be compatible with a universe of peripherals that can't connect to this new bus architecture, over which they have no control.

    They are free to innovate in their closed proprietary system, and that is what they've done. Remember the Cell processor in the PlayStation? It was awesome enough to make it into supercomputers, but didn't exactly take the PC world nor the enterprise by storm, did it? That the enterprise didn't pick it up is not the indictment that you think it is... the choice to do so is probably a lot more nuanced than you think it is.

    You complained about me saying the HW was NVME! IT IS NVME! As you now admit. There is no new bus architecture. PCIE is the bus. Again, if they had developed a new bus we'd have it. Why in the world you think a bus would somehow be better for moving one kind of electrical impulse than another is mind-boggling. PCs have switched bus architectures repeatedly. I forget what preceded VLB and EISA, but something did back in the 386 days. Then PCI replaced VLB. Then PCI-X came out. Then came PCIE gen 1, then gen 2, then gen 3, and we're in the middle of the switch to gen 4, with gen 5 coming in the next 2 or 3 years. If they have a bus faster than gen 5, great, but they don't, and they don't have a GPGPU that fast anyway.
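    (For reference, the per-generation bandwidth is easy to work out from the published per-lane rates; a rough sketch:)

        # Approximate usable bandwidth per PCIe lane in GB/s, after encoding
        # overhead (8b/10b for gen 1/2, 128b/130b from gen 3 onward).
        per_lane_gbs = {"gen1": 0.25, "gen2": 0.5, "gen3": 0.985,
                        "gen4": 1.969, "gen5": 3.938}

        for gen, lane in per_lane_gbs.items():
            print(f"{gen}: x4 ~{lane * 4:.1f} GB/s, x16 ~{lane * 16:.1f} GB/s")

        # A ~5.5 GB/s NVME drive like the PS5's needs a gen4 x4 link;
        # a gen3 x4 link (~3.9 GB/s) can't carry it.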

    Firmware is SOFTWARE. The firmware in question is specifically optimized for moving textures and other games assets, or so they claim. Since that just sounds like compression we'll just have to wait and see. 

    But wow, just wow. Cell isn't in supercomputers. I know it was, a decade ago, in a failed IBM system but emphasis on failed. IBM was trying to save the failed PowerPC architecture, again note failed, and Cell was based on PowerPC. I did some checking around and Cell has been dead for over a decade.

    No, let me make this very clear. When you build a supercomputer now what you do is take a whole lot of enterprise components, or for what is known as a Beowulf cluster a bunch of PC's, and just build a big datacenter with specific networking. Sometimes, for the guys going for the records, you'll get AMD or Intel to build you some custom stuff but that gets cost prohibitive really fast and when you're dealing with 2,000+ nodes adding $25,000 to the cost of each node takes these projects out of the range where even governments want to pay for them. So there is software in the supercomputing space most of us in the enterprise don't care about, baton waving for clusters that big aren't really interesting to me, but their hardware is our hardware.

  • You complained about me saying the HW was NVME! IT IS NVME! As you now admit.

    I complained not about you saying it was NVME, but about you acting as if there was no innovation.

    There is no new bus architecture. PCIE is the bus. Again, if they had developed a new bus we'd have it. Why in the world you think a bus would somehow be better for moving one kind of electrical impulse than another is mind-boggling. PCs have switched bus architectures repeatedly. I forget what preceded VLB and EISA, but something did back in the 386 days. Then PCI replaced VLB. Then PCI-X came out. Then came PCIE gen 1, then gen 2, then gen 3, and we're in the middle of the switch to gen 4, with gen 5 coming in the next 2 or 3 years. If they have a bus faster than gen 5, great, but they don't, and they don't have a GPGPU that fast anyway.

    The Bus Architecture is more than the transport spec, which is what SCSI, ISA, EISA, VESA, and PCI are. It is a relatively dumb system responsible for reliably getting bits from one system component to another, agnostic of the purpose of those bits because it has to work for diverse devices. But the interpretation of those bits is the job of the controller connected to the bus. The PS5's SSD controller is totally new and represents an innovation.

    Firmware is SOFTWARE. The firmware in question is specifically optimized for moving textures and other games assets, or so they claim. Since that just sounds like compression we'll just have to wait and see. 

    Compression is part of it, Kraken I think I read, but that is not all of it. If it were, again, everyone would have it.
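    (The two numbers Sony has quoted, 5.5 GB/s raw and "8-9 GB/s typical" after Kraken decompression, are consistent with plain raw-speed-times-compression-ratio arithmetic; the ratios below are just back-calculated from those quotes, not measured:)

        raw_gbps = 5.5                      # PS5 SSD raw read speed (Sony's figure)
        ratio_low, ratio_high = 1.45, 1.64  # implied "typical" Kraken ratios

        print(f"Effective asset throughput: {raw_gbps * ratio_low:.1f} "
              f"to {raw_gbps * ratio_high:.1f} GB/s")   # ~8.0 to ~9.0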

    But wow, just wow. Cell isn't in supercomputers. I know it was, a decade ago, in a failed IBM system but emphasis on failed. IBM was trying to save the failed PowerPC architecture, again note failed, and Cell was based on PowerPC. I did some checking around and Cell has been dead for over a decade.

    It was in IBM's Roadrunner. A supercomputer.

    No, let me make this very clear. When you build a supercomputer now what you do is take a whole lot of enterprise components, or for what is known as a Beowulf cluster a bunch of PC's, and just build a big datacenter with specific networking. Sometimes, for the guys going for the records, you'll get AMD or Intel to build you some custom stuff but that gets cost prohibitive really fast and when you're dealing with 2,000+ nodes adding $25,000 to the cost of each node takes these projects out of the range where even governments want to pay for them. So there is software in the supercomputing space most of us in the enterprise don't care about, baton waving for clusters that big aren't really interesting to me, but their hardware is our hardware.

    I'm not sure what the precise definition of a supercomputer is, but from your description of what you think one is, I'm reasonably sure you've never built one.

    I don't know how far down the top 100 you'd have to go to find a supercomputer based on commodity parts, but there isn't one in the top four. But by then, they are already an order of magnitude slower than Summit.

    Their hardware is not your hardware. Supercomputers are one-offs with custom hardware based on commodity parts, like OpenSPARC or POWER but with custom logic implemented on FPGAs, like the QCDOC architecture. The fact that you say "customers" plural indicates that you're talking about something else; a supercomputer is built for a specific task, by a specific research house, for a specific user, such as IBM for the Department of Energy. No one else needs to simulate nuclear reactions regardless of the cost, which in Sierra's case was $330 million. It's not a matter of wanting or not wanting, it's a matter of needing to do something that can't otherwise be done, say a treaty that doesn't allow you to test a real nuke, at any cost.

    Getting back to the topic, the PS5 represents a real innovation, and its SSD architecture is a large part of that.

  • billyben_0077a25354 Posts: 771
    edited August 2020

    People, can we get back to discussing Ampere and quit flaming each other.  I don't want to see another thread closed because someone is trying to start a fight.

    I am still hoping that Big NAVI actually kicks Ampere's derriere this time around so maybe we can get some cheaper render cards.  Not gonna happen because Nvidia will kick some serious... you know, but I am still hoping that they actually release a 3070 Super with 16 GB of memory and maybe an NV Link connection.

    Post edited by billyben_0077a25354 on
  • kenshaw011267 Posts: 3,805

    You complained about me saying the HW was NVME! IT IS NVME! As you now admit.

    I complained not about you saying it was NVME, but about you acting as if there was no innovation.

    There is no new bus architecture. PCIE is the bus. Again, if they had developed a new bus we'd have it. Why in the world you think a bus would somehow be better for moving one kind of electrical impulse than another is mind-boggling. PCs have switched bus architectures repeatedly. I forget what preceded VLB and EISA, but something did back in the 386 days. Then PCI replaced VLB. Then PCI-X came out. Then came PCIE gen 1, then gen 2, then gen 3, and we're in the middle of the switch to gen 4, with gen 5 coming in the next 2 or 3 years. If they have a bus faster than gen 5, great, but they don't, and they don't have a GPGPU that fast anyway.

    The Bus Architecture is more than the transport spec, which is what SCSI, ISA, EISA, VESA, and PCI are. It is a relatively dumb system responsible for reliably getting bits from one system component to another, agnostic of the purpose of those bits because it has to work for diverse devices. But the interpretation of those bits is the job of the controller connected to the bus. The PS5's SSD controller is totally new and represents an innovation.

    Firmware is SOFTWARE. The firmware in question is specifically optimized for moving textures and other games assets, or so they claim. Since that just sounds like compression we'll just have to wait and see. 

    Compression is part of it, Kraken I think I read, but that is not all of it. If it were, again, everyone would have it.

    But wow, just wow. Cell isn't in supercomputers. I know it was, a decade ago, in a failed IBM system but emphasis on failed. IBM was trying to save the failed PowerPC architecture, again note failed, and Cell was based on PowerPC. I did some checking around and Cell has been dead for over a decade.

    It was in IBM's Roadrunner. A supercomputer.

    No, let me make this very clear. When you build a supercomputer now what you do is take a whole lot of enterprise components, or for what is known as a Beowulf cluster a bunch of PC's, and just build a big datacenter with specific networking. Sometimes, for the guys going for the records, you'll get AMD or Intel to build you some custom stuff but that gets cost prohibitive really fast and when you're dealing with 2,000+ nodes adding $25,000 to the cost of each node takes these projects out of the range where even governments want to pay for them. So there is software in the supercomputing space most of us in the enterprise don't care about, baton waving for clusters that big aren't really interesting to me, but their hardware is our hardware.

    I'm not sure what the precise definition of a supercomputer is, but from your description of what you think one is, I'm reasonably sure you've never built one.

    I don't know how far down the top 100 you'd have to go to find a supercomputer based on commodity parts, but there isn't one in the top four. But by then, they are already an order of magnitude slower than Summit.

    Their hardware is not your hardware. Supercomputers are one-offs with custom hardware based on commodity parts, like OpenSPARC or POWER but with custom logic implemented on FPGAs, like the QCDOC architecture. The fact that you say "customers" plural indicates that you're talking about something else; a supercomputer is built for a specific task, by a specific research house, for a specific user, such as IBM for the Department of Energy. No one else needs to simulate nuclear reactions regardless of the cost, which in Sierra's case was $330 million. It's not a matter of wanting or not wanting, it's a matter of needing to do something that can't otherwise be done, say a treaty that doesn't allow you to test a real nuke, at any cost.

    Getting back to the topic, the PS5 represents a real innovation, and its SSD architecture is a large part of that.

    The top 4 are all commodity components!

    The #1 was just a big splash because it is the first time it was ARM based! Do you think someone spends a few billion to design custom silicon for a couple of thousand CPUs? How would they ever get those even remotely debugged? The top 10 are ARM, Xeons, Power9, Epyc/Ampere and one Chinese thing. You can look up their networking. They all use commodity components there; IIRC the plurality use InfiniBand.

    I never once wrote "customer" when discussing supercomputers, but thanks so much for putting words in my mouth; it's always so honest when someone does that.

    Now onto the bus. You seem to have no idea WTF you are talking about, to be as nice as possible. The bus does not interpret anything. Bits come in and bits go out. The bus has two main jobs: to transmit the data and to guarantee the integrity of that data. That's it. PCIE also provides some power, but that is not part of the bus part of the spec. Above the serial bus part of PCIE there are some other protocols, but they just control how data is transmitted, not interpreted. In very simple terms, they make sure one side isn't sending when it should be receiving and make sure that the packets are read in the correct order.

    As to the PS5 SSD firmware, we don't know what it is because they haven't delivered. When they do, we'll know. I anticipate it not being anything near as groundbreaking as they claim. Because again, there's lots more money to be made elsewhere if it was.

    Roadrunner, IOW, as I said, failed project based on failed PowerPC architecture.

    There is no precise definition of supercomputer, but I have built a Beowulf, back around 2005, out of Pentiums. It was mostly just curiosity about how powerful it would be. It's pretty easy to get that sort of thing going. Pick up 10 or 20 (or more) junk desktops and switches. Find a guide to setting up the network and installing one of the distros. Then find something for it to do. I had mine crunch SETI@home. If you can install a Linux distro and set up an ethernet switch, you can handle setting up a Beowulf, assuming you don't need to do any repairs on the boxes.
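    ("Find something for it to do" is honestly the hardest step. The classic smoke test once the nodes can see each other is an MPI hello-world; a minimal sketch using mpi4py, assuming you've listed your boxes in a hostfile:)

        # hello_cluster.py -- launched with something like:
        #   mpirun --hostfile nodes.txt -np 16 python3 hello_cluster.py
        # where nodes.txt lists the hostnames of the junk desktops.
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()           # this process's id within the job
        size = comm.Get_size()           # total processes across all nodes
        host = MPI.Get_processor_name()  # which box this process landed on

        print(f"process {rank} of {size} reporting in from {host}")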

  • tj_1ca9500b Posts: 2,057
    edited August 2020

    While it's interesting watching the walls of text going back and forth r.e. the Sony SSD thing, let's get back on point guys and gals!  Agree to disagree please!

    Someone mentioned earlier the 'gulf' that currently exists between the RTX Titan and the RTX 2080 Ti (24 GB vs 11 GB).  I'd just like to point out that just a generation ago, the Titan X and Xp had 12 GB at the same time the 1080 Ti had 11 GB.  So we have 'precedent' for, say, a 24 GB Titan vs a 20 GB Ti, if that rumor holds true.  As noted above, we had another leak today that indicates we may be looking at a 24/20/10 GB split, at least at launch.  Per that rumor, other cards will come later. 
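    (The 24/20/10 split falls straight out of bus width, since each GDDR6/6X chip hangs off its own 32-bit channel; a quick sketch of that arithmetic, with the card tiers being the rumored ones:)

        def vram_options(bus_width_bits, chip_densities_gb=(1, 2)):
            chips = bus_width_bits // 32             # one chip per 32-bit channel
            return chips, [chips * d for d in chip_densities_gb]

        for bus in (384, 320):
            chips, sizes = vram_options(bus)
            print(f"{bus}-bit bus -> {chips} chips -> {sizes} GB configurations")
        # 384-bit -> 12 chips -> 12 or 24 GB (Titan-class / rumored top card)
        # 320-bit -> 10 chips -> 10 or 20 GB (the rumored 10/20 GB cards)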

    I'd link Tech Radar's article, but their anti-adblock measures (as well as Tom's) piss me off.  Even if I whitelist/allow their site with adblock, their web code still won't let me view the content.  Hitting the esc key sometimes helps, until you try to scroll...

    There was a PCB shot over at Chiphell today (shared on WCCFTech), but I don't speak/understand that language, so other than relying on others' translations of the article (which seems to be focused on the choke configuration), I can't tell you exactly what is being discussed in the Chiphell forum ATM.

    Post edited by tj_1ca9500b on
  • Sevrin Posts: 6,310

    People, can we get back to discussing Ampere and quit flaming each other.  I don't want to see another thread closed because someone is trying to start a fight.

    I am still hoping that Big NAVI actually kicks Ampere's derriere this time around so maybe we can get some cheaper render cards.  Not gonna happen because Nvidia will kick some serious... you know, but I am still hoping that they actually release a 3070 Super with 16 GB of memory and maybe an NV Link connection.

    No, let's talk about Intel!  A company that actually provided information about its new products today as it battles to remain relevant.  I'm not entirely sure what market they are targeting, what with being squeezed by AMD on one side and ARM on the other, but they're trying.  Something.  Mind you their really big Xe-LP reveal is scheduled for September 2nd, and probably no one will be listening a couple of days after the Nvidia announcement.  They'll have to give people a reason to want their GPUs, integrated or otherwise.  I wonder what it will be.

    https://www.theverge.com/circuitbreaker/2020/8/13/21365544/intel-tiger-lake-11th-gen-xe-graphics-gpu-preview-first-look-architecture-day-2020

     

  • The top 4 are all commodity components!

     

    The #1 was just a big splash because it is the first time it was ARM based!

    Summit is POWER, not ARM.

    Do you think someone spends a few billion to design custom silicon for a couple of thousand CPUs? How would they ever get those even remotely debugged? The top 10 are ARM, Xeons, Power9, Epyc/Ampere and one Chinese thing. You can look up their networking. They all use commodity components there; IIRC the plurality use InfiniBand.

    The interconnects are all custom FPGAs. China has two in the top 5, and they cannot even get these parts because of ITAR and EAR regulations.

    I never once wrote "customer" when discussing supercomputers, but thanks so much for putting words in my mouth; it's always so honest when someone does that.

    You said that even governments didn't want to pay 2000 x $25,000. The entity that pays is the customer.

    Now onto the bus. You seem to have no idea WTF you are talking about to be as nice as possible. The bus does not interpret anything. Bits come in and bits go out.

    The bus has two main jobs: to transmit the data and to guarantee the integrity of that data. That's it. PCIE also provides some power, but that is not part of the bus part of the spec. Above the serial bus part of PCIE there are some other protocols, but they just control how data is transmitted, not interpreted. In very simple terms, they make sure one side isn't sending when it should be receiving and make sure that the packets are read in the correct order.

    That is literally exactly what I said. The bus interprets nothing, it is just a transport as in the OSI model. The innovation is in the controller. I didn't say spec, I said architecture, implying that there is more than one component, i.e. the communicating sides.

    As to the PS5 SSD firmware, we don't know what it is because they haven't delivered. When they do, we'll know. I anticipate it not being anything near as groundbreaking as they claim. Because again, there's lots more money to be made elsewhere if it was.

    That is not a convincing argument.

    Roadrunner, IOW, as I said, failed project based on failed PowerPC architecture.

    There is no precise definition of supercomputer, but I have built a Beowulf, back around 2005, out of Pentiums. It was mostly just curiosity about how powerful it would be. It's pretty easy to get that sort of thing going. Pick up 10 or 20 (or more) junk desktops and switches. Find a guide to setting up the network and installing one of the distros. Then find something for it to do. I had mine crunch SETI@home. If you can install a Linux distro and set up an ethernet switch, you can handle setting up a Beowulf, assuming you don't need to do any repairs on the boxes.

    That is an insult to everyone who has ever designed and built a real Beowulf cluster.

    Sorry, @billyben_0077a25354 you are right, that was my last volley, I promise.

  • Sevrin said:

    People, can we get back to discussing Ampere and quit flaming each other.  I don't want to see another thread closed because someone is trying to start a fight.

    I am still hoping that Big NAVI actually kicks Ampere's derriere this time around so maybe we can get some cheaper render cards.  Not gonna happen because Nvidia will kick some serious... you know, but I am still hoping that they actually release a 3070 Super with 16 GB of memory and maybe an NV Link connection.

    No, let's talk about Intel!  A company that actually provided information about its new products today as it battles to remain relevant.  I'm not entirely sure what market they are targeting, what with being squeezed by AMD on one side and ARM on the other, but they're trying.  Something.  Mind you their really big Xe-LP reveal is scheduled for September 2nd, and probably no one will be listening a couple of days after the Nvidia announcement.  They'll have to give people a reason to want their GPUs, integrated or otherwise.  I wonder what it will be.

    https://www.theverge.com/circuitbreaker/2020/8/13/21365544/intel-tiger-lake-11th-gen-xe-graphics-gpu-preview-first-look-architecture-day-2020

     

    @Sevrin I can't remember where I read it, but I was under the impression that Xe was going to be for mobile first?

  • Sevrin Posts: 6,310
    Sevrin said:

    People, can we get back to discussing Ampere and quit flaming each other.  I don't want to see another thread closed because someone is trying to start a fight.

    I am still hoping that Big NAVI actually kicks Ampere's derriere this time around so maybe we can get some cheaper render cards.  Not gonna happen because Nvidia will kick some serious... you know, but I am still hoping that they actually release a 3070 Super with 16 GB of memory and maybe an NV Link connection.

    No, let's talk about Intel!  A company that actually provided information about its new products today as it battles to remain relevant.  I'm not entirely sure what market they are targeting, what with being squeezed by AMD on one side and ARM on the other, but they're trying.  Something.  Mind you their really big Xe-LP reveal is scheduled for September 2nd, and probably no one will be listening a couple of days after the Nvidia announcement.  They'll have to give people a reason to want their GPUs, integrated or otherwise.  I wonder what it will be.

    https://www.theverge.com/circuitbreaker/2020/8/13/21365544/intel-tiger-lake-11th-gen-xe-graphics-gpu-preview-first-look-architecture-day-2020

     

    @Sevrin I can't remember where I read it, but I was under the impression that Xe was going to be for mobile first?

    Probably?  But lots of people render on laptops nowadays.  We'll see in a couple of weeks?

  • kenshaw011267 Posts: 3,805

    The top 4 are all commodity components!

     

    The #1 was just a big splash because it is the first time it was ARM based!

    Summit is POWER, not ARM.

    Do you think someone spends a few billion to design custom silicon for a couple of thousand CPUs? How would they ever get those even remotely debugged? The top 10 are ARM, Xeons, Power9, Epyc/Ampere and one Chinese thing. You can look up their networking. They all use commodity components there; IIRC the plurality use InfiniBand.

    The interconnects are all custom FPGAs. China has two in the top 5, and they cannot even get these parts because of ITAR and EAR regulations.

    I never once wrote "customer" when discussing supercomputers, but thanks so much for putting words in my mouth; it's always so honest when someone does that.

    You said that even governments didn't want to pay 2000 x $25,000. The entity that pays is the customer.

    Now onto the bus. You seem to have no idea WTF you are talking about to be as nice as possible. The bus does not interpret anything. Bits come in and bits go out.

    The bus has two main jobs: to transmit the data and to guarantee the integrity of that data. That's it. PCIE also provides some power, but that is not part of the bus part of the spec. Above the serial bus part of PCIE there are some other protocols, but they just control how data is transmitted, not interpreted. In very simple terms, they make sure one side isn't sending when it should be receiving and make sure that the packets are read in the correct order.

    That is literally exactly what I said. The bus interprets nothing, it is just a transport as in the OSI model. The innovation is in the controller. I didn't say spec, I said architecture, implying that there is more than one component, i.e. the communicating sides.

    As to the PS5 SSD firmware, we don't know what it is because they haven't delivered. When they do, we'll know. I anticipate it not being anything near as groundbreaking as they claim. Because again, there's lots more money to be made elsewhere if it was.

    That is not a convincing argument.

    Roadrunner, IOW, as I said, failed project based on failed PowerPC architecture.

    There is no precise definition of supercomputer, but I have built a Beowulf, back around 2005, out of Pentiums. It was mostly just curiosity about how powerful it would be. It's pretty easy to get that sort of thing going. Pick up 10 or 20 (or more) junk desktops and switches. Find a guide to setting up the network and installing one of the distros. Then find something for it to do. I had mine crunch SETI@home. If you can install a Linux distro and set up an ethernet switch, you can handle setting up a Beowulf, assuming you don't need to do any repairs on the boxes.

    That is an insult to everyone who has ever designed and built a real Beowulf cluster.

    Sorry, @billyben_0077a25354 you are right, that was my last volley, I promise.

    Summit isn't #1. Fugaku is. The rankings come out in June.

    https://www.top500.org/lists/top500/2020/06/

    Sure are a lot of InfiniBand systems on that list. The 2 Chinese ones seem to use custom FPGAs, but they sort of have to, and they could just as easily be using stolen IP since, to the best of my knowledge, no non-Chinese engineer has ever seen either system.

    No, you specifically wrote "The fact that you say "customers" plural indicates that you're talking about something else," not "You said that even governments didn't want to pay 2000 x $25,000. The entity that pays is the customer." I await your apology, not continued dissembling.

    Actually, what you wrote is "The Bus Architecture is more than the transport spec," not what you are now claiming. You are now trying to equate not just the bus and the transport layer and the comm protocols but the software on top of that.

    Yes, you have been very insulting to me, as I've built a Beowulf. They really are very easy to build, as anyone who has done so can tell you. 

    https://beowulf.org/overview/faq.html

    https://www.linux.com/training-tutorials/building-beowulf-cluster-just-13-steps/

     

  • i53570k Posts: 212

    Intel said Xe-HPG (HP-Gaming) will be the last product to launch in the Xe family and won't be available until 2021, probably late 2021 at that.

  • i53570k Posts: 212
    Sevrin said:
    Sevrin said:

    People, can we get back to discussing Ampere and quit flaming each other.  I don't want to see another thread closed because someone is trying to start a fight.

    I am still hoping that Big NAVI actually kicks Ampere's derriere this time around so maybe we can get some cheaper render cards.  Not gonna happen because Nvidia will kick some serious... you know, but I am still hoping that they actually release a 3070 Super with 16 GB of memory and maybe an NV Link connection.

    No, let's talk about Intel!  A company that actually provided information about its new products today as it battles to remain relevant.  I'm not entirely sure what market they are targeting, what with being squeezed by AMD on one side and ARM on the other, but they're trying.  Something.  Mind you their really big Xe-LP reveal is scheduled for September 2nd, and probably no one will be listening a couple of days after the Nvidia announcement.  They'll have to give people a reason to want their GPUs, integrated or otherwise.  I wonder what it will be.

    https://www.theverge.com/circuitbreaker/2020/8/13/21365544/intel-tiger-lake-11th-gen-xe-graphics-gpu-preview-first-look-architecture-day-2020

     

    @Sevrin I can't remember where I read it, but I was under the impression that Xe was going to be for mobile first?

    Probably?  But lots of people render on laptops nowadays.  We'll see in a couple of weeks?

    You won't be able to use Xe for Daz GPU rendering anyway.  Iray is Nvidia only.

  • NylonGirl Posts: 1,938

    It would be nice if somebody could make a layer over the other GPU drivers that spoofs an Nvidia driver.

  • kyoto kid Posts: 41,256

    ...so Intel is taking another crack at making GPUs again

    2021 is too long to hold my breath.

  • nonesuch00 Posts: 18,320

    Well, regarding getting much more VRAM sooner rather than later: as anyone who has rendered a complex DAZ scene in Iray knows from having their scene kicked out of VRAM to a CPU render because of its size, if Nvidia is serious about ray tracing in their GPUs, those GPUs must have the VRAM to support it. And with AMD adding hardware ray tracing, AMD must as well. Both of those are nice, because it means GPUs will have an overabundance of VRAM for 99.9999999% of other GPU tasks, so you can buy GPUs based strictly on performance if you don't need ray tracing.

    I think GPU efficiency gains are starting to stagnate, and more VRAM, wider buses, and smaller dies are where they are going to have to push for performance over the next few years. There comes a time when the most efficient algorithms for GPUs are done relative to ROI, and they are nearly there. I mean, we don't expect them to add entire physics engines (some stuff is already added) next, do we? Well, maybe they will.

    Nvidia has supported entire physics engines on GPUs for quite some time: PhysX, HairWorks, and I think one I can't remember the name of. The big issue, in the gaming world at least, is that since they are proprietary, people with AMD cards get much worse visuals or performance when playing the game. IIRC it was The Witcher 3 that had a bunch of these features and looks amazing on a good Nvidia card with everything turned on, and just awful on any AMD one.

    Really, I read that they only supported parts of the PhysX engine, not all of it, as it's quite large and a long term growth proposition. Unity uses the PhysX library.

    NVidia owns PhysX. I know of no way to check whether a specific card supports the full library but you can even run a second card as a PhysX card.  So I have a hard time believing they don't fully support their own tech on their own cards.

    Yes, well, sorry, I had actually looked up the information only yesterday because these cards are getting really stuffed with specialized math algorithms, and the website I picked from the search results claimed only a subset of collision calculations were directly optimized on Nvidia GPUs. But it was wrong, thanks, and the https://en.wikipedia.org/wiki/PhysX page (which is not the site that told me wrong) says otherwise, which is more than good enough for me. I'll be happy to let Nvidia continue to grow the size of the PhysX library and add that to their GPUs. The next 3 years should be really exciting with regards to storage capacity, RAM capacity, GPU and CPU speed, and power consumption.

  • tj_1ca9500b Posts: 2,057
    edited August 2020

    Another day, another leak:

    https://wccftech.com/nvidia-geforce-rtx-3090-ampere-gaming-graphics-card-pcb-pictured/

    In the article, we see a non-reference card (from Colorful possibly?) with (presumably) 11 VRAM chips surrounding the GPU, on the BACK of the card (note the screw heads).  The article suggests that said chips would probably be mirrored on the front of the card, and mentions that a recent rumor on the Baidu forums mentions that there may be a 22 GB card floating around.  There could also be another pair of VRAM chips on the card (at the bottom maybe) but since that part is obscured/blurred out there's no way to really know.

    So yeah, even more confusion for the rumor mill... and we still have a bit over 2 weeks to go!!!

    Post edited by tj_1ca9500b on
  • Sevrin Posts: 6,310

    Well, there's always a possibility of a "ti" after the number later on.  Oooh, oooh, and a "SUPER"!

  • marble Posts: 7,500

    I take it you mean this is what is disappointing. I agree. I think I'll look for another hobby.

    NVIDIA GeForce RTX 3090 graphics card is listed on the Micron website with 12GB GDDR6X memory.

  • Sevrin said:

    Well, there's always a possibility of a "ti" after the number later on.  Oooh, oooh, and a "SUPER"!

    @Sevrin That's funny and not funny at the same time.

  • marble said:

    I take it you mean this is what is disappointing. I agree. I think I'll look for another hobby.

    NVIDIA GeForce RTX 3090 graphics card is listed on the Micron website with 12GB GDDR6X memory.

    Yes, me too, and I hear sheep herding needs people.

  • I would treat the information about the amount of VRAM with a grain of salt, because the Micron source is only showing it as an example and the figure is wrong for other GPUs as well. Probably Nvidia's plans have since changed, and currently the leaked PCB is showing 24GB. Expect it to cost two legs and an arm, though.

  • nicstt Posts: 11,715
    edited August 2020
    marble said:

    I take it you mean this is what is disappointing. I agree. I think I'll look for another hobby.

    NVIDIA GeForce RTX 3090 graphics card is listed on the Micron website with 12GB GDDR6X memory.

    Move to Blender; same hobby and not tied to Iray, which is what Studio effectively requires - at least IMO.

     

    volpler11 said:

    I would treat the information about the amount of VRAM with a grain of salt, because the Micron source is only showing it as an example and the figure is wrong for other GPUs as well. Probably Nvidia's plans have since changed, and currently the leaked PCB is showing 24GB. Expect it to cost two legs and an arm, though.

    ... And a Kidney.

    Oh, and 21 Gbps is the speed of the RAM, not the amount; they state the amount and speed separately.
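    (And for anyone who wants to see what that per-pin speed buys: memory bandwidth is just data rate times bus width over 8; the bus widths below are the rumored ones:)

        def mem_bandwidth_gbs(data_rate_gbps, bus_width_bits):
            # per-pin data rate (Gbps) x bus width (bits) / 8 bits per byte
            return data_rate_gbps * bus_width_bits / 8

        print(mem_bandwidth_gbs(21, 384))  # ~1008 GB/s on a 384-bit bus at 21 Gbps
        print(mem_bandwidth_gbs(19, 320))  # ~760 GB/s on a 320-bit bus at 19 Gbps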

    Post edited by nicstt on
  • Diaspora Posts: 459

    So, has DAZ commented on how long we should expect before DAZ Studio iRay will support Ampere?

    Right now I'm fine with my RTX 2080 for games; DAZ is the big incentive to upgrade, and if it takes months before an RTX 3090 or 3080 Ti will even work in it, then I'll just hang tight in the meantime. 

  • Daz Jack Tomalin Posts: 13,500
    edited August 2020
    Diaspora said:

    So, has DAZ commented on how long we should expect before DAZ Studio iRay will support Ampere?

    Right now I'm fine with my RTX 2080 for games; DAZ is the big incentive to upgrade, and if it takes months before an RTX 3090 or 3080 Ti will even work in it, then I'll just hang tight in the meantime. 

    I don't think it's actually down to Daz per se; my guess is it (an updated version of Iray, if needed) will have to come from Nvidia, then be added into DS (after testing etc.)... and quite possibly there isn't even an 'official' answer yet.  So yes, hang tight basically, let the reviews hit, then go from there :)

    Post edited by Daz Jack Tomalin on
  • Robinson Posts: 751

    Honestly, unless you're using it for actual paid work or you're a baller, the thought of spending over a grand on any graphics card seems completely ridiculous to me.  Anyway on the software thing, if it requires API changes it's not transparent to Daz so some code and a new release will be needed.  If the API stays the same, it's all down to the NVIDIA drivers.  I suspect the former actually (new hardware architecture), so it'll be a while before it's in Daz.
