Out-of-core rendering and dual-channel RAM

I'm planning a new PC, and I guess having fast dual-channel RAM will speed up out-of-core rendering with Octane or Cycles. This would push me toward Ryzen CPUs, which seem to better support high-frequency RAM. But I can't find any benchmarks on this subject that would help me understand the possible advantages.

Any help, please?

Comments

  • Padone Posts: 3,778
    edited February 2019

    I found some data around the web. It seems that dual-channel RAM performs about 20% faster in memory operations, while out-of-core rendering takes about a 20% penalty in rendering time. The main factor for out-of-core performance seems to be the PCIe speed (x16 vs x8) rather than the RAM speed itself; but once the PCIe link is x16, RAM speed seems to matter.

    So in the end I guess dual-channel RAM would increase rendering speed by 5% or so. Not a major gain.
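    The few-percent guess can be sanity-checked with a back-of-envelope calculation. This is a sketch only: the two 20% figures are the rough numbers quoted above, not benchmarks, and it assumes only the memory-bound share of the render benefits from faster RAM.

    ```python
    # Assumptions (from the rough figures above, not measured):
    # - out-of-core rendering adds about a 20% time penalty, so roughly
    #   1/6 of the total out-of-core render time is memory traffic
    # - dual-channel RAM makes memory operations about 20% faster

    base = 1.0                  # in-core render time (arbitrary units)
    oc_single = base * 1.20     # out-of-core time, single-channel RAM
    penalty = oc_single - base  # the memory-bound share of the time

    # Only the memory-bound share speeds up with dual-channel RAM:
    oc_dual = base + penalty / 1.20
    speedup = oc_single / oc_dual - 1
    print(f"estimated gain from dual channel: {speedup:.1%}")  # 2.9%
    ```

    So under these assumptions the gain lands in the low single digits, consistent with the ~5% guess.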

    https://beebom.com/single-channel-vs-dual-channel-memory/

    https://developer.blender.org/T48651

    https://render.otoy.com/forum/viewtopic.php?f=9&t=53895


    EDIT. It seems the main advantage of dual-channel RAM is for Ryzen integrated GPUs: there, RAM speed does matter for a better framerate in games, and the 20% dual-channel advantage above seems to apply directly.

    I'm wondering whether Ryzen would somewhat accelerate out-of-core rendering too.

    https://www.techspot.com/review/1574-amd-ryzen-5-2400g-and-ryzen-3-2200g/page8.html


    EDIT. On second thought, from the results above there may be a real gain in the case where a Vega GPU is used for the viewport and a GTX GPU is used for rendering. In this case the dual-channel memory should keep the viewport fast while the GTX is rendering out of core.

    Post edited by Padone on
  • Padone Posts: 3,778
    edited February 2019

    UPDATE. I just found that Raven Ridge is limited to PCIe x8. So it seems that Vega APUs are very bad for out-of-core rendering. That is, the Blender article above states that the main speed factor for out-of-core rendering is the number of available PCIe lanes (x16 instead of x8), while a GTX card paired with a Raven Ridge CPU will be limited to PCIe x8.

    https://www.techpowerup.com/241444/amd-ryzen-raven-ridge-comes-with-a-limited-pcie-interface

    EDIT. It is also true that a DDR4-2666 channel has about 20 GB/s of bandwidth, while PCIe 3.0 x8 is about 8 GB/s. So maybe x16 wouldn't have any real advantage over x8: at x16 the memory bandwidth would be nearly saturated while the PCIe card is rendering out of core, which is bad for the CPU.

    https://www.kingston.com/en/memory/ddr4
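    The bandwidth figures above can be derived from theoretical peaks. A sketch, assuming a standard 64-bit DDR4 channel and PCIe 3.0's 8 GT/s signaling with 128b/130b encoding:

    ```python
    # Theoretical peak bandwidths. Assumptions: DDR4 bus is 64 bits
    # (8 bytes) per channel; PCIe 3.0 runs at 8 GT/s per lane with
    # 128b/130b encoding, i.e. ~0.985 GB/s usable per lane.

    def ddr4_gbps(mt_per_s, channels=1):
        """DDR4 bandwidth in GB/s: transfers/s * 8 bytes per transfer."""
        return mt_per_s * 8 * channels / 1000

    def pcie3_gbps(lanes):
        """PCIe 3.0 bandwidth in GB/s: 8 GT/s * 128/130 payload per lane."""
        return lanes * 8 * (128 / 130) / 8

    print(f"DDR4-2666 single channel: {ddr4_gbps(2666):.1f} GB/s")     # 21.3
    print(f"DDR4-2666 dual channel:   {ddr4_gbps(2666, 2):.1f} GB/s")  # 42.7
    print(f"PCIe 3.0 x8:  {pcie3_gbps(8):.1f} GB/s")                   # 7.9
    print(f"PCIe 3.0 x16: {pcie3_gbps(16):.1f} GB/s")                  # 15.8
    ```

    So a single DDR4-2666 channel (~21 GB/s) only modestly outruns PCIe 3.0 x16 (~16 GB/s), which is why an out-of-core stream at x16 could leave little memory bandwidth for the CPU on a single-channel system, while dual channel (~43 GB/s) has plenty of headroom.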

    Post edited by Padone on