How fast is Octane compared to iRay?
I would like to know if somebody can share a comparison between those two renderers. Especially I would like to know if it is viable to render a full-length movie with Octane. Thanks in advance.
Comments
Both will depend on material settings. Trying to render a full-length movie without a render farm is, regardless of engine, not likely to be a happy experience.
Do you know of any render farm that works with Daz Studio, Poser and even Carrara?
- - -
@ Why RENDER SPEED alone will not get the job done faster
- The render speed is pretty much the same for "simple" projects with a character in front of a standard background with HDRI lighting.
However, things change drastically with challenging scenes that use complex lighting, subsurface scattering, transmission, refraction, caustic setups and other advanced features.
-> It will take a lot longer to clean up noise from complex scenes in Iray if it is even possible to render them at a similar quality level as in OctaneRender.
- - -
In general:
It is not just about rendering speed.
What matters much more when creating projects are
-> features that help you save time while you create your scenes and in postproduction.
- - -
The big advantage of Iray is that all materials are already set up for licensed DAZ 3D content.
You currently lose time when converting scenes from Iray to OctaneRender because you may want to tweak the automatic conversion to get the look you want.
- - -
On the other hand, compared to OctaneRender, Iray is still lacking many basic and advanced features:
- Animating materials and render engine specific parameters (Discussed in another thread)
- VRAM management. (Several threads exist on that.)
- Advanced compositing.
When creating matte images with alpha layers in OctaneRender, not only object shadows on the floor are included but also reflections and caustics.
You can render out separate render layer passes for each light in the scene and then composite them in Photoshop with masks.
You can render out separate render layer passes for diffuse, reflection, subsurface scattering and transmission and composite them in Photoshop with masks (see the sketch after this list).
-> This means even if you choose the wrong render settings or are not happy with the results of your first test render there is no need to re-render:
-> all the passes you rendered out are enough to find a solution in postproduction.
- Texture baking extracts lighting information from a mesh's surface by using its UV map to generate a texture that can be mapped back to the mesh later on
etc.
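To illustrate why those separate passes are so useful, here is a minimal numpy sketch of recombining per-component passes in post. The arrays stand in for linear EXR passes loaded from disk, and the weights are arbitrary examples, not values taken from any particular engine:

```python
import numpy as np

# Stand-ins for the per-component passes exported from the render engine
# (in a real workflow these would be loaded from linear EXR files).
h, w = 1080, 1920
diffuse = np.zeros((h, w, 3), dtype=np.float32)
reflection = np.zeros((h, w, 3), dtype=np.float32)
sss = np.zeros((h, w, 3), dtype=np.float32)
transmission = np.zeros((h, w, 3), dtype=np.float32)

# In a linear workflow the beauty image is (approximately) the sum of the
# component passes, so each pass can be re-weighted in post without re-rendering.
weights = {"diffuse": 1.0, "reflection": 0.8, "sss": 1.0, "transmission": 1.2}
beauty = (weights["diffuse"] * diffuse
          + weights["reflection"] * reflection
          + weights["sss"] * sss
          + weights["transmission"] * transmission)
```

The point is only that the recombination is simple arithmetic, so tweaking the look after the fact costs seconds, not another render.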
- - -
- Support for and by 3rd party technology:
- The ability to read fibermesh hair from the most common plugins and transform it into memory-saving splines
- Support for VFX file formats like OpenVDB http://www.openvdb.org/
- Professional-level tools that help you set up advanced instancing, composite with photographs or create large architectural scenes
http://www.phantomtechnology.nl/
- The OTOY team makes a great effort to support existing features of each host application, like deep image rendering in Nuke.
etc.
- - -
Conclusion:
-> If you want to quickly render out a simple scene Iray will do just that.
-> If you are looking at an advanced, compositing-heavy project, OctaneRender may provide the features you need to save a lot of time.
- - -
- - -
@ CREATING FULL-LENGTH MOVIES with GPU render engines
Depends on what you want to do.
If you have access to Houdini and OpenVDB you may be able to get some special effects done.
https://www.sidefx.com/
With such technology you may be able to create impressive spots like intros or credit rolls.
Example:
The Westworld intro was created with OctaneRender and Houdini:
https://www.youtube.com/watch?v=elkHuRROPfk
The main limitation of GPU render engines for use in full-length feature movies is the limited VRAM space compared to the options available when using CPU rendering.
With GPU rendering it is still challenging to handle high-resolution "star" objects combined with large scenes.
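A rough back-of-the-envelope sketch of why VRAM fills up so quickly with a single hero character; the map counts and resolutions are made-up assumptions, and real engines use compression, mipmaps and out-of-core tricks, so treat the numbers as an illustration only:

```python
# Uncompressed 8-bit RGBA texture cost on the GPU: width * height * 4 bytes.
def texture_mb(width, height, channels=4, bytes_per_channel=1):
    return width * height * channels * bytes_per_channel / (1024 ** 2)

maps_per_surface = 4   # e.g. color, normal, roughness, SSS (assumed)
surfaces = 10          # skin, eyes, hair, clothing, ... (assumed)

total_gb = texture_mb(8192, 8192) * maps_per_surface * surfaces / 1024
print(total_gb, "GB")  # ~10 GB for textures alone, before geometry and lights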
- - -
-> If you just want to render a few animated CG objects and combine them with live action footage and other VFX shots you may be able to work with the current limitations.
- - -
- - -
@ Available RENDER FARMS and CLOUD RENDERING
I do not know of a service that allows you to upload scenes for rendering to the cloud directly from inside DAZ Studio.
-> You may need to export the animated scene in a file format that is supported by the cloud rendering service.
When rendering with OctaneRender standalone you can now get in contact with Otoy staff to get access to the OctaneRender Cloud services:
https://render.otoy.com/forum/viewtopic.php?f=100&t=55146
- - -
Basic DAZ Studio to OctaneRender cloud workflow:
- Create scene in DAZ Studio and animate objects and cameras.
- Use animation features provided by the OctaneRender for DAZ Studio plugin to animate lights and OR materials.
- Export scene & cameras to OctaneRender standalone as .ORBX or .OCS file.
- Upload to the OctaneRender cloud for rendering.
- - -
Well, I'm certainly not an expert, but if you just do the math and assume 30 frames per second and 60 seconds per minute, that means for every minute of your full-length movie you have to render 1,800 frames. And if each frame takes 10 minutes to render, that's 18,000 minutes to render 1 minute of a movie. Which is 300 hours of rendering. Which means roughly 2 weeks to render 1 minute of a movie.
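For anyone who wants to rerun that back-of-the-envelope math, here it is as a tiny Python sketch, using the same assumptions (30 fps, 10 minutes per frame):

```python
fps = 30
seconds_of_footage = 60                       # one minute of finished movie
frames = fps * seconds_of_footage             # 1800 frames
minutes_per_frame = 10
total_minutes = frames * minutes_per_frame    # 18,000 minutes
total_hours = total_minutes / 60              # 300 hours
total_days = total_hours / 24                 # 12.5 days, roughly two weeks
print(frames, total_hours, total_days)
```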
Soo.....
I just finished an Iray render of a gorgeous image that took over 2 hours on my 8-core CPU. With a better graphics card that will probably take only a fraction of that time. But still it's forever if you're gonna render a full-length movie.
So IMO, Iray vs. Octane isn't really the issue. You have to either figure out how to do a render farm, or do what most CG movie makers have done: figure out how to break your scene/animation into parts and composite stuff together, and fake the lighting so people believe it's real, instead of solely relying on a fancy (and slow) renderer for your entire frame.
So if your character is moving on a stationary background, render the background (with all the bounce light, etc.) once with Iray/Octane, and composite your character on top of that so you're not always rendering the entire environment. I think that's one of the points linvanchene was making.
I bookmarked this after someone here said they rendered something in DS Iray for them. You could contact them to find out more.
https://www.revuprender.com/
"
You can render out separate render layer passes for each light in the scene and then composite them in photoshop with masks.
You can render out separate render layer passes for diffuse, reflection, subsurface scattering, transmission and composite them in photoshop with masks.
"
I don't know OctaneRender well enough to say whether it's more flexible than Iray.
But Iray already has similar features. You can provide Light Path Expressions or use the "canvas" feature in DAZ 4.9 to render separate HDR layers and merge them in Photoshop or similar software.
Octane and Iray are ok for rendering stills but animation is not something you can do on video cards yet. There would have to be a way to break up the sequence to fit into the memory of the cards and then render that portion and move to the next sequence. That is not something Iray or Octane are capable of yet.
Ummm... I did this a few months ago in Iray... May of 2016.
![](http://img.youtube.com/vi/0TLpf4_c7_g/0.jpg)
This is indeed very challenging. But it is not only a question of the render engine.
-> It also depends on which software you are using and how much time you are willing to spend preparing the scene for an optimal time vs quality relationship.
Even the best CPU render engines are inferior to the in house tools the big studios are using.
The best technology to render animations is not shared with the public at all.
-> When video cards with 32 GB or 64 GB arrive, GPU rendering may become an alternative to the available CPU solutions for the public: hobbyists, freelancers, small studios.
But the big studios with their own proprietary solutions may always keep the edge and produce at a much higher quality level.
-> The viewers do not care which tools are used and how much of an achievement it is to create stunning work with access only to public software.
The clients and other viewers will compare anything they see to the highest-quality content they know from movies, games, music clips and Super Bowl advertisements.
If a technology cannot provide similar results they will not be impressed.
- - -
How are scenes set up to lower memory use for rendering animations?
Many beginners in animation just reduce object and render quality all across the board to bring down their render time to the often quoted "10 minutes per frame".
That way the scene may render at a somewhat reasonable speed but in the end the animation will not look as good as it could.
Instead you have to carefully think about each individual scene object and decide for each sequence of frames what quality is needed.
- - -
Think of
- car chases
- superheroes battling across a whole landscape
- - -
Such scenes are not rendered out in one take.
Even if the camera follows one hero for a very long time the clip consists of multiple tiny sequences only a few frames long.
You can achieve such scenes in different ways:
1) Automatic memory management on a scene level
The software itself is smart enough to display objects within a close radius around the camera at high resolution and objects beyond a set distance threshold at lower resolution. There may even be different types of thresholds for different types of objects.
Examples: grass, NPCs, buildings, mountains, etc.
This is a common practice in game engines that are used for large-scale environment scenes.
The switch between high and low resolution can happen by
- swapping in and out objects with low and high resolution
- swapping in and out real geometry and "billboards"
(-> common practice for plants; at far distances they consist of a simple plane that automatically always faces the camera)
- switching from lower to higher subdivision values
- switching to different map types (bump, normal vs displacement)
-> The challenge is to have that swap happen at a distance in which it is not noticeable.
This may be another reason why fast-cut action became so popular in VFX-heavy productions.
Between cuts from one camera to the other you can make changes to object quality without much worry!
-> You face different challenges when trying to use such techniques during rendering an animation or in real time while playing a game.
- When you are playing a game, objects in the background can load in continuously. You may only notice a change in object quality when you are traveling at higher speed in one direction. -> pop-ups
- When rendering animations you have to consider that any changes to the scene may also need to be processed for each frame.
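As a rough illustration of the distance-threshold idea described above, here is a minimal Python sketch of level-of-detail selection. The thresholds and representation names are made up for illustration, and real engines blend these transitions far more carefully:

```python
import math

LOD_THRESHOLDS = [         # (max distance in scene units, representation) - assumed values
    (10.0, "high_res_mesh"),
    (50.0, "low_res_mesh"),
    (200.0, "billboard"),  # flat plane that always faces the camera
]

def pick_lod(camera_pos, object_pos):
    # Choose a representation based on how far the object is from the camera.
    distance = math.dist(camera_pos, object_pos)
    for max_dist, representation in LOD_THRESHOLDS:
        if distance <= max_dist:
            return representation
    return "culled"  # beyond the farthest threshold: do not load at all

print(pick_lod((0, 0, 0), (3, 0, 4)))    # 5 units away   -> high_res_mesh
print(pick_lod((0, 0, 0), (120, 0, 0)))  # 120 units away -> billboard
```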
2) Compositing
You break up the scene into several foreground, midground and background sections.
- Render out each part of the scene and composite it in post production.
- Clever use of matte objects etc.
This is not as simple as it sounds.
For high quality it is not just a matter of capturing the shadows but you may also need to consider reflections, caustic light effects and physical interaction.
If you break up the scene, many interactions between objects are no longer accurate.
-> Which effects can be used to sell the illusion that the viewer is not looking at many different parts of a puzzle but a scene as a whole?
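For the basic color-and-alpha part of this, the standard tool is the "over" operator. Here is a minimal numpy sketch of it; note that it only handles straight layering, so shadows, reflections and caustics between layers still need matte or shadow-catcher passes as described above:

```python
import numpy as np

def over(fg_rgb, fg_alpha, bg_rgb):
    # Layer a foreground pass with premultiplied alpha over a background pass.
    # fg_rgb, bg_rgb: float arrays of shape (H, W, 3); fg_alpha: shape (H, W, 1)
    return fg_rgb + bg_rgb * (1.0 - fg_alpha)

# Toy 1x1-pixel example: 60% opaque red foreground over a blue background.
fg = np.array([[[0.6, 0.0, 0.0]]])   # already premultiplied by alpha = 0.6
alpha = np.array([[[0.6]]])
bg = np.array([[[0.0, 0.0, 1.0]]])
print(over(fg, alpha, bg))           # -> [[[0.6, 0.0, 0.4]]]
```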
- - -
3) Manual adjustment of the scene for each take
If the software does not support any automatic solutions you have to be very smart about breaking up your animation into very small sequences.
Example: Prepare 7-8 frames. Stop. Adjust the scene. Prepare the next 7-8 frames.
At the end of the day you queue each sequence up in a batch rendering application and let it render overnight.
Manually change maps, object resolutions and subdivision levels as long as the changes are not noticeable.
-> Add objects to the scene that will move in front of the camera next.
-> Remove objects from the scene that no longer are needed.
Any manual adjustments will take away much time.
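A minimal sketch of what such an overnight batch queue could look like in Python; the renderer command name and flags are placeholders, not the actual Octane or Iray command line:

```python
import subprocess

# Each entry: a manually prepared scene file and the frame range it covers.
# File names, the CLI name and its flags are hypothetical examples.
jobs = [
    ("shot01_frames_0001-0008.orbx", 1, 8),
    ("shot01_frames_0009-0016.orbx", 9, 16),
]

for scene, start, end in jobs:
    # Render each small, manually adjusted sequence one after the other.
    subprocess.run(["my_renderer_cli", scene,
                    "--start-frame", str(start),
                    "--end-frame", str(end)],
                   check=True)
```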
- - -
-> I hope by now it is more clear why just looking at the render speed is not enough.
You have to carefully consider which options the software and the render engine offer you to save time when you create your scenes and after rendering in postproduction.
- - -
@ Quality VS Time.
-> It is not impossible to render out stunning looking animations with the currently available tools.
But you have to prepare very carefully and invest a lot of time.
-> It is up to each and every one to decide if that is time well invested or if they prefer to put animations on hold until the technology available to the public has progressed to a more reasonable place.
- - -
Unity 3D is integrating the OTOY Octane render engine and creating a 'more animation/storytelling-centric timeline' workflow specifically to allow for the creation of movies. A feature-length movie would still have to be farmed off, but if you have a top-notch PC, or even a mediocre one like I do, it will be enough for proof-of-concept, simplified story rendering. You can do that even before Octane is integrated into Unity.
The Unity 2017 beta 5 is free to download and use. It is fully functional and not trial software. The Unity team is just now starting work on their new storytelling animation timeline, and no mention has been made of when they will start publicly integrating the OTOY Octane renderer, but it was said last year that sometime in 2017 they'd do that, if I remember correctly.