Computer Confusion

Jason Galterio Posts: 2,562
edited December 1969 in Technical Help (nuts n bolts)

Unhappy with my old desktop's performance with Luxrender, I decided I would upgrade. But the results have been less than spectacular. I was expecting to get a significant difference in performance, but instead it is almost the same.

The old computer, a Dell XPS 9100, 64 Bit Windows 7, 6 GB RAM, i7 930 @ 2.8 GHz, GeForce GTX 750 Ti:
LuxRender Slave: 67.84 kS/s

The new computer, a Dell XPS 8700, 64 Bit Windows 8.1, 8 GB RAM, i7 4790 @ 3.6 GHz, GeForce GT 720:
Luxrender: 74.10 kS/s

Now this doesn't seem right to me. I would think that the new computer would be doing considerably better.

Both computers are running 8 Threads, 1 GPU.

I know the video card in the new computer is woefully poor, but I did not think it would have that much of an effect on the system.

Would it be better to invert the arrangement? The old computer running Luxrender and the new computer running the slave?

Or am I missing something fundamental here?

I know this is hard to gauge without more information, but I was wondering what other people's gut reactions would be. Am I being unreasonable in expecting better performance?
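For a sense of scale, a quick calculation of the gap between those two results:

```python
# Reported LuxRender throughput (kS/s) from the two machines above
old_xps_9100 = 67.84  # i7 930 @ 2.8 GHz + GTX 750 Ti
new_xps_8700 = 74.10  # i7 4790 @ 3.6 GHz + GT 720

speedup = new_xps_8700 / old_xps_9100
print(f"speedup: {speedup:.2f}x")  # roughly 1.09x, i.e. ~9% faster
```

So the new machine is only about 9% faster in this test, which is why the upgrade feels underwhelming.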

Comments

  • Szark Posts: 10,634
    edited December 1969

    I went from a 32-bit dual core to a 64-bit quad core and I noticed the same... OK, a little better, but not as much as I would have thought. Anyway, use the old one as a slave. Well, that is what I would do. :)

  • Takeo.Kensei Posts: 1,303
    edited December 1969

    You'd get better performance just running on the GPU.

    With just the 750 Ti, I get 1 MS/s on a simple lighting, exterior scene vs 170 kS/s in a complex interior scene.

    I use the latest LuxCore beta with biased path, which is also quicker than the official v1.31 LuxRender.

    The drawback is that some materials are not supported on the GPU.
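For comparison with the numbers in the opening post, and treating Takeo's 1 MS/s as 1,000 kS/s (an approximation; scene content makes these figures only loosely comparable):

```python
# Throughput figures quoted in the thread, in kS/s
op_new_machine = 74.10   # OP's new machine, hybrid mode
gpu_simple     = 1000.0  # 750 Ti, simple lighting / exterior scene
gpu_complex    = 170.0   # 750 Ti, complex interior scene

print(f"simple scene:  {gpu_simple / op_new_machine:.1f}x the OP's rate")
print(f"complex scene: {gpu_complex / op_new_machine:.1f}x the OP's rate")
```

Even the complex-scene figure is more than double what the new machine is producing in hybrid mode.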

  • prixat Posts: 1,588
    edited December 1969

    I would swap the graphics cards.

    In 3Delight that CPU would render twice as fast as the older one, but how that translates to samples per second in Lux's hybrid mode? I've no idea!

  • Jason Galterio Posts: 2,562
    edited December 1969

    Szark:
    Yeah. I guess I inflated the difference in clock speed in my mind; I just expected a monumental difference in performance. I am looking on the bright side, though: with the old computer slaved I am still getting more than twice the performance I was getting before.

    Takeo:
    I have GPU acceleration turned on, but I am reluctant to go full GPU. I know the speed would be better, but I really don't want to lose texture quality. I was led to believe that there is a significant difference between the textures in each.

    Prixat:
    I thought about that as well, but I am planning to just leave it as is. The new desktop is still covered under warranty so I don't want to muck it up this early in its life. Plus the new graphics card in the old desktop gave it new life.

    (That and I still use the old desktop as a gaming rig while renders are going on the new machine.)

  • jestmart Posts: 4,449
    edited December 1969

    It looks to me like Dell is using a weaselly marketing trick: they are listing the maximum Turbo Boost speeds instead of the actual base speeds. Turbo Boost is essentially overclocking, so a newer, more energy- and thermally-efficient chip could have a much higher boost speed while the two chips' actual base speeds may not be that different.
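For reference, Intel's published figures for these two parts (an assumption; worth checking the exact SKUs on Intel's spec pages) put both the base-clock and boost-clock ratios at about 1.3x:

```python
# Published clock speeds in GHz (assumption: figures from Intel's spec pages)
i7_930  = {"base": 2.8, "turbo": 3.06}  # old machine
i7_4790 = {"base": 3.6, "turbo": 4.0}   # new machine

for key in ("base", "turbo"):
    ratio = i7_4790[key] / i7_930[key]
    print(f"{key} clock ratio: {ratio:.2f}x")
```

Either way, raw clock speed alone only accounts for roughly a 30% difference between the two chips, not a monumental one.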

  • StratDragon Posts: 3,168
    edited December 1969

    The big CPU speed advancements we saw a few years back are no longer happening with the current generation of Intel and AMD CPUs.
    My five-year-old i7 can still hold its own against the current crop of i7s because the performance gains we were seeing a decade ago were not sustained; the industry has slowed to smaller increments of performance gain over longer periods of time.

  • Takeo.Kensei Posts: 1,303
    edited December 1969

    Try the CPU-only mode then. No GPU. It seems hybrid is not as quick as pure GPU or pure CPU.

    I don't really see any difference in textures between CPU and GPU, but there is a difference in the algorithms.

    You could also try to begin with a GPU render and finish with CPU.

    See http://forum.runtimedna.com/showthread.php?85929-GPU-Acceleration-woes

  • Jason Galterio Posts: 2,562
    edited December 1969

    Takeo.Kensei said:
    Try the CPU-only mode then. No GPU. It seems hybrid is not as quick as pure GPU or pure CPU.

    I don't really see any difference in textures between CPU and GPU, but there is a difference in the algorithms.

    You could also try to begin with a GPU render and finish with CPU.

    See http://forum.runtimedna.com/showthread.php?85929-GPU-Acceleration-woes

    I started this same conversation over there too and that was Paolo's suggestion as well.

    I am going to start a new scene, without GPU acceleration, rendering this evening and let it go until the morning, then check the results.

    I have to say that this morning I was surprised at the end results. The old machine's progress had overtaken the new machine's by a significant margin; I want to say it was around 200 to 300k more than the new machine.

    While still rendering I turned on the Windows monitor to get an idea as to what was going on...

    I noted that the old machine (i7 930 @ 2.8 GHz) was consistently running at around 85 to 90% of CPU capacity.

    The new machine (i7 4790 @ 3.6 GHz) was running at between 75 and 80% of CPU capacity, but consistently clocking at 3.8 GHz. All eight cores had consistent activity.

    I checked the processes on the new machine and couldn't find anything odd running. Nor could I find anything else making any drain on the CPU. Again, all eight cores had consistent activity, but there was a ninth core that was labeled "default" or something like that, with no activity.

    Now I am starting to wonder if LuxRender only needs 75 to 80%, or if something else is stomping on its usage, like Windows 8.1 stepping in with some sort of threshold.
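One crude, stdlib-only way to test whether the machine can reach 100% at all (a sketch, assuming Python 3 is available on the box): saturate every logical core with a plain busy loop and watch Task Manager while it runs.

```python
import multiprocessing as mp
import time

BURN_SECONDS = 2.0  # bump this up for a longer look in Task Manager

def burn(seconds: float) -> int:
    """Busy-loop for roughly `seconds`; returns the iteration count."""
    deadline = time.perf_counter() + seconds
    n = 0
    while time.perf_counter() < deadline:
        n += 1
    return n

if __name__ == "__main__":
    cores = mp.cpu_count()  # logical cores, e.g. 8 on these i7s
    print(f"saturating {cores} logical cores for {BURN_SECONDS:.0f}s...")
    with mp.Pool(processes=cores) as pool:
        pool.map(burn, [BURN_SECONDS] * cores)
    print("done")
```

If every core pegs at 100% here but LuxRender still tops out around 75 to 80%, the cap is more likely in LuxRender's own scheduling than in a system-wide Windows throttle.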

  • Takeo.Kensei Posts: 1,303
    edited December 1969

    In CPU mode, my four cores go up to 100%

    I'd say you have some software or hardware function that is limiting the CPU usage

  • prixat Posts: 1,588
    edited December 1969

    Legionair said:
    ...and I still use the old desktop as a gaming rig while renders are going on the new machine.)

    Good to see someone who's got their priorities right :coolsmile:

  • Jason Galterio Posts: 2,562
    edited December 1969

    prixat said:
    Legionair said:
    ...and I still use the old desktop as a gaming rig while renders are going on the new machine.)

    Good to see someone who's got their priorities right :coolsmile:

    The old computer, with the new video card, runs AC Black Flag like a dream. So I am intent on finishing that before modifying the machine. :)

    I did run LuxMark on both machines and now I am even further perplexed by all of it.

    Before I get to the results: the first pass on the new machine gave a "no OpenCL devices" error when I tried to run the CPU-only test. Much trial and error then followed in getting the right OpenCL drivers installed.

    Old Computer (i7 930 / GTX 750 Ti)
    OpenCL 1.2 AMD-APP (1084.4)
    GPU: 1178
    GPU & CPU: 1496
    CPU: 381

    New Computer (i7 4790 / GT 720)
    OpenCL 1.2

    Before installing the OpenCL CPU drivers:
    GPU: 222
    GPU & CPU: 222
    CPU: - (no OpenCL CPU device)

    After installing the drivers:
    GPU: 222
    GPU & CPU: 583
    CPU: 445

    If I install a 750 Ti in the new computer, would the results then be (approximately): 1178 / 1178+445 / 445?
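A back-of-the-envelope check (an illustration only; LuxMark scores rarely add perfectly): on the old machine, the GPU (1178) and CPU (381) scores sum to 1559, yet the measured combined score was 1496, roughly 96% of the sum. Applying the same efficiency to the proposed 750 Ti + i7 4790 pairing:

```python
# LuxMark scores quoted above
gpu_750ti = 1178  # old machine, GPU-only
cpu_4790  = 445   # new machine, CPU-only (after the OpenCL driver fix)

naive_sum = gpu_750ti + cpu_4790
# Observed combined-mode efficiency on the old machine: 1496 / (1178 + 381)
efficiency = 1496 / (1178 + 381)

print(f"naive GPU+CPU estimate:  {naive_sum}")
print(f"efficiency-scaled guess: {naive_sum * efficiency:.0f}")
```

So something in the mid-1500s would be a reasonable expectation for the combined score, with the GPU-only and CPU-only scores staying near 1178 and 445.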

  • prixat Posts: 1,588
    edited December 1969

    Yes, the PassMark 3D results for the two cards show a similar difference.
