Nvidia Tesla K40m - The Word from Nvidia

These have been discussed here and there regarding their use with Iray. I've finally received official word from Nvidia regarding their use in workstation/non-server systems, and I quote:
It's a system incompatibility. Tesla cards (with a few historical exceptions, e.g. C2075, K20c, K40c) are designed to be purchased and installed (only) in an OEM server system certified for their use. HP does not certify any of their workstations for any current Tesla cards, nor were any ever certified for K40m usage. If you buy a Tesla card believing you can install it in any system you want, you are asking for trouble. It is simply not possible, in the general case, and there is no design intent to make it possible.
There is no documentation to support this configuration. There is no OEM system load or card lock. The resources in question are resources that would be assigned to PCI BAR regions by the system BIOS during the PCI plug-and-play enumeration process. The K40m requires a large complement of resources. A system that cannot or will not assign these resources will cause the cards to be non-functional. There is nothing you can do to fix this (barring modification of user-accessible BIOS settings that modify the BIOS resource assignment behavior).
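To make the resource-assignment point concrete, here is a small illustrative sketch (my own, not from Nvidia's reply) of why a BIOS limited to 32-bit MMIO decoding can fail to map a compute card's BARs. The window and BAR sizes below are assumptions for illustration; the actual figures for any given card and board will differ.

```python
# Illustrative only: the BAR sizes and MMIO window size are assumed
# values, not documented figures for the K40m or any specific board.

GiB = 1024 ** 3
MiB = 1024 ** 2

def can_map_bars(bar_sizes, window_size, allow_above_4g):
    """Return True if the BIOS could assign all requested BARs.

    A 32-bit-only BIOS must place every BAR inside a small MMIO
    window below 4 GiB; a BIOS with "above 4G" (64-bit) decoding
    can place large BARs in the far bigger space above 4 GiB.
    """
    if allow_above_4g:
        return True  # ample 64-bit address space available
    return sum(bar_sizes) <= window_size

# Hypothetical workstation with roughly 1 GiB of MMIO space below 4 GiB.
window = 1 * GiB

# A GPU with modest BARs fits easily...
print(can_map_bars([256 * MiB, 32 * MiB], window, allow_above_4g=False))  # True

# ...but a compute card requesting a multi-GiB BAR cannot be mapped
# unless the BIOS supports above-4G decoding, as server BIOSes do.
print(can_map_bars([16 * GiB, 32 * MiB], window, allow_above_4g=False))  # False
print(can_map_bars([16 * GiB, 32 * MiB], window, allow_above_4g=True))   # True
```

This is the scenario the quoted reply describes: the card itself is fine, but the enumeration step fails because the firmware cannot (or will not) assign the address space the card requests.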
A server designed to support this card of course has taken these requirements into account in the design of the server, which includes the design of the system BIOS. It's not a "lock" of any sort. In most cases, a PCIE Tesla GPU can be easily enough removed from a supported HP server configuration and placed in a supported Dell server configuration (just to pick a random pair/example), with full expectation that it should work normally.
But your workstation is not a supported configuration for that GPU. There are many statements like this on these forums.
Tesla K40 is an obsolete product. For non-obsolete products, you can find supported server configurations here:
https://www.nvidia.com/en-us/data-center/tesla/tesla-qualified-servers-catalog/
There is no suggestion anywhere that Tesla cards can be placed in any system you want with an expectation of proper behavior.
Original thread at Nvidia:
https://devtalk.nvidia.com/default/topic/1039289/cuda-setup-and-installation/k40-setup-on-lenovo-p510/
This was in response to my query on Nvidia's developer site for help as to why the two K40ms I'd recently purchased on eBay would not work in my HP Z600 workstation. I had posed the question of whether OEMs install some sort of proprietary lock in the card's BIOS so that it only works with their branded systems.
Note that the K40c is listed as an exception (presumably because it has its own cooling fan, where the K40m does not). The issue is not just the lack of a cooling system (which is easy to work around), but the card's actual resource demands, which server BIOSes are customized to accommodate.
Now, could a server BIOS be flashed onto a workstation board? And if so, would the card work then? I'm half tempted to ask (and expect the obligatory "LOL no" response) and half tempted to just throw these two back on eBay and focus on building a 1080 Ti-based "VCA alternative".
At any rate, this is more information about these cards than I've found in a month (literally) of searching. Most of what I'd found was simplistic "these are intended for AI/Deep Learning/use in servers, TL;DR" without any actual explanation of "why". Now I have the why, and I'm sharing it here to help others avoid the issue I've been dealing with every day since November 22nd.
Comments
Thanks for sharing. That is useful to know.
I could have told you that before, since I've had these issues since the release of DS 4.10, in my Dell workstation in combination with an Nvidia K6000. These cards are not supported by Dell even though they have 6 GB of memory, and using one causes constant system crashes. Too bad for the money spent on such high-quality graphics cards.