Accessing PC over RDP - Open GL version not high enough?

So I'm trying a few things out, and I've got my Titan Z in my HP Z600 (Win 7 Pro 64) as the lone GPU. I'm accessing the Z600 from my Win 8.1 (Pro 64) PC via Remote Desktop. The session loads fine, and I can do everything to the Z600 as if I were controlling it directly.

I did the BIOS mod of setting it to "headless" (no GPU assigned for direct display, or none in the machine at all). This step probably wasn't required, but I was trying to get the Z600 to free up enough resources to use a pair (or just one) of Tesla K40s. I figured that one less GPU in the machine means those dozen-or-more resources are now free. Turns out, they're not.

At any rate, I'm down to the Titan Z instead. No other GPUs in the machine, and the machine has no MOBO video.

When I try to launch DS 4.10 x64 or the 4.11 x64 Beta, I get an error saying DS requires OpenGL 1.3, that the system only reports OpenGL 1.1, and to please upgrade to an OpenGL 1.3-capable GPU.

I know the Titan Z (triple-wide, 12GB, 5760-CUDA-core monster) works with DS and Iray. I had it working before, when my Quadro K4000 was the primary display. However, I'm trying something different (and also because the Titan Z causes the machine to reboot during a render, which is probably heat-related).

Yes, I know the PSU for the Z600 can't run a Titan Z. That is not the issue.

The issue is DS not reading the GPU's capabilities properly when run over a Remote Desktop connection (both machines are in the same room, about 2 feet apart).

And to be clear, it's an error regarding OpenGL (gee-ell), not OpenCL (see-ell).

Comments

  • How do you know the issue isn't power related? Things get really flaky near 100% draw, in my experience.

  • Because the Titan Z is powered by a dedicated 550 W PSU that powers nothing else, and it meets all the requirements to handle a Titan Z (the 12 V rail, etc.).

  • Ok. If it's causing a reboot during renders, have you tried running GPU-Z to monitor the card's temps?

    I'd try and figure out your main problem before getting into this other stuff.

  • Edit the shortcut to DS by adding “ -allowRemote” to the end. It should launch now, but there are still some issues compared to launching it on the local machine. Hope this helps.
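
    For example, on a typical default install the edited Target line would look something like this (your install path may differ; note the space before the dash):

        "C:\Program Files\DAZ 3D\DAZStudio4\DAZStudio.exe" -allowRemote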

    - Greg

  • Ok. If it's causing a reboot during renders, have you tried running GPU-Z to monitor the card's temps?

    I'd try and figure out your main problem before getting into this other stuff.

    Well, I do have the option of using a GTX 980 for rendering on this machine. Granted, it's only 4GB and 2048 cores, but it gets the job done. The main issue is not being able to launch DS remotely, even though it works fine when accessed directly. The Titan Z is most likely overheating. It's an EVGA Superclocked edition with 2 big fans, but even then...

    I was able to get more stability from it by pulling the clock speed down using EVGA Precision X, but that also doubled the render time. I need to find the "happy medium". Right after I resolve the remote control issue. However, in the interim, I took the 1080ti out of my other machine and put it in the Z600 for rendering (direct access), and left the Titan X (Pascal) in the other one. I now have 2 machines that render at about the same speed, but it's robbing Peter to pay Paul.

    Once I get my GPU cluster replaced and a few more 1080tis or 2080tis........

     

    Edit the shortcut to DS by adding “ -allowRemote” to the end. It should launch now, but there are still some issues compared to launching it on the local machine. Hope this helps.

    - Greg

    Thanks! I'll give it a whack.

  • It should not be overheating during a render unless a fan is failing or the case isn't ventilated properly. The fact that downclocking it made it stable strongly supports overheating, and the 1080ti running in the same case without issue strongly suggests it's the fan on the Titan. I'd get GPU-Z and monitor the temps and other readings, like fan speed, on that card. Replacing a fan is possible; replacing the whole card is expensive.

  • Only trouble with monitoring the temps is that it BSODs, so unless I stare at it from start to flaming demise, I won't know.

    Also, Precision X and Afterburner both have real-time temp monitoring in the task tray. GPU-Z is just another process loaded needlessly.

    And the case gets plenty of ventilation - the side panel is off, otherwise the external PSU can't get to the card.

    It could also be that because the Titan Z is so big, it's right next to the main PSU in the case. Since it's an HP, it's a custom jobby with snap-on connectors, so I can't take it out of the case without adapter cables. Yet another argument against ready-made systems, which I try to avoid for these very reasons, but I was young and dumb at the time.

  • The -allowRemote switch helped - thanks!

    Also ran across an older thread from February about this while searching for something else (funny how that works). Wound up with Splashtop Personal (free for personal use, but it still requires an account for some reason - probably data mining). Both methods work fine - DS loads, I can select rendering devices (GPU, CPU, etc.) - it's just like being there. I do have to say, being able to control both machines with one mouse and KB is so much easier than keeping track of two of each.

    Running dual monitors means each machine can have its own screen, so it really is just like being there.

    Now, of course, I still have the initial problems to iron out regarding the Tesla K40s and the Titan Z, and then getting both machines to contribute to the same render over the in-house network from within DS, without any sort of render-queue add-ons - basically we need an option in DS besides Iray Server ($300 a year LOL yeh ryt) or VCA.
    Like Carrara's built-in Grid rendering, but without the pesky limitations (50 CPUs? Dude, that's a paltry TWO HP Z600s with dual Xeons set to virtualization, unless it's only counting physical CPUs. Even then, it needs to incorporate GPUs).

     

  • You should be able to see whether the temp keeps climbing over time. If that's happening, something is very wrong; the card should reach a steady state within a couple of minutes, a little longer if water cooled. A riser cable is less than $15, which isn't much to get the Titan some air.
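
    If the machine reboots before you can watch a real-time monitor, log the readings to a file instead so the history survives the crash. Something along these lines should work with the nvidia-smi tool that ships with the driver (the fields, interval, and filename here are just examples):

        nvidia-smi --query-gpu=timestamp,temperature.gpu,fan.speed,power.draw --format=csv -l 5 > gpu_log.csv

    The last rows written before the reboot will show how hot the card was getting and what the fan was doing.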

    Where's the Quadro in your rig, anyway? You have a K40 but no Quadro. You do know the K40 is made to add on to Quadros, right? Are you sure the K40 will work with GeForce (GTX/RTX) drivers?

    As to networking the renders, you're not going to be able to get Iray going that way. For Carrara, why are you setting the Z600s up for virtualization? There's overhead involved in that which you don't need. You'd be better off with the two boxes acting as individual PCs with lots of cores and threads. You wouldn't get GPU rendering, but I'm not sure what your end goal for all this decrepit hardware is.

  • Actually, I got the final word from Nvidia (finally) that the K40m is purpose-built to work only in a server configuration, as it requires access to upper memory ranges that typical workstation BIOSes don't support. I'm sure there's a way to butcher the BIOS to make it work, but I haven't found it.

    The K40c, on the other hand, has a built-in fan like any other GPU, and works a treat in Workstations. Makes no sense to me, but whatev.

    As for the Tesla "requiring" a Quadro, there's an incredible amount of vagueness surrounding Teslas. I know they don't have outputs. In my latest system reconfiguration, the outputs on my Titan X Pascal and 1080ti aren't being used because the 980 is acting as primary display while the other two do the Iray thing. As far as I'm concerned, they're doing what a Tesla does: everything except audio and video output. That means the Tesla doesn't allocate GPU memory to those functions. A Titan Z can be put into a similar state: TCC mode. Audio and video output are disabled by doing so.
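
    For reference, the driver model is switched with nvidia-smi from an elevated command prompt, something like the line below, where 0 is whatever device index "nvidia-smi -L" reports for the card (a reboot is required afterwards, and not every GeForce card will accept TCC):

        nvidia-smi -i 0 -dm 1

    -dm 1 selects TCC; -dm 0 puts it back on the normal WDDM display driver model.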


    I know Teslas are touted as "accelerators", but really, that's it. I have no idea what they're defining as "acceleration". Would a 12GB K40 only "help" the 3GB of my K4000 for a grand total of 3GB? Would the 2880 CUDA cores of the K40 stack with the 768 CUDA cores of the K4000, or would it only assign as many resources as the K4000 has, for a whopping grand total of 1536 CUDA cores (768*2) instead of 3648 (2880+768)?

    I did not get a clear answer to that question in the "official" documentation (what little I could find, given the age of the K40), so I presumed "it's just a 12GB 780ti without outputs" based on the specs.

    As for "decrepit" hardware, it still functions, and it's less expensive than new hardware, which is often under-supported at launch (the wait for Iray to catch up to Pascal was excruciating). I originally got the Z600 for Poser Firefly rendering, but once Iray hit, I switched over to GPU rendering.

  • Additionally, after further testing and analysis, it appears the Titan Z is borked. Only half the card is being used by Iray, even though both halves are selected in Iray. GPU-Z actually reads both halves correctly now, which is new (CUDA functionality was unchecked previously). After several rounds of flashing the BIOS for both halves, it's effectively a 6GB triple-wide 780ti. Looks like it's going under the hammer (along with the 2 780tis, which are also electrical toast).
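
    For anyone wanting to double-check the driver's view of the card, nvidia-smi lists every GPU it enumerates, and a healthy Titan Z should show up as two separate devices:

        nvidia-smi -L

    That's only a sanity check of what the driver sees, not proof that Iray will actually use both halves.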

    I'm guessing when my Amfeltec 4x GPU cluster died it took those with it, since they were on it at the time. $1500 total down the toilet. Ah well, I had my fun with them, anyway.

  • I work in a datacenter, but not directly with Teslas, so I'm basing this on ad copy and stuff from seminars I've attended over the years. It is my understanding that the Tesla line was intended to fulfill the role that NVLink now does. IOW, they combined VRAM and CUDA with the Quadro in the same rig, but since they do it over PCIe, that eats up a lot of chipset resources, particularly if there are multiple Teslas. IIRC there are some NVLink-equipped Teslas, but they are crazy expensive.

    But no matter what, everything I've ever seen has assumed that the Tesla(s) would be installed alongside a Quadro, even in a headless configuration.
