3Delight Laboratory Thread: tips, questions, experiments


Comments

  • Mustakettu85 Posts: 2,933
    edited December 1969

    mjc1016 said:
    The code block broke the window size...and with the colors of the forum, it's hard to read...

    But, I'm betting that Uber does use a sample cone. That could be a difference.

    And no, I haven't exported any that have that problem as I generally don't use UberArea...I use yours or one of the other ones (I brought in a couple other ones), but mostly yours.

    I use UberArea on the mesh lights that come with the kit. I figured these should be okay because I didn't notice speed difference between mine and Uber. But it looks like I'm better off putting my area light in the kit as well and swapping the mesh light presets.

    I had to customise the DS scripts shader builder generates for the area to load correctly when the scene is reloaded, and they should go into DS installation folder, but as all the render scripts go there as well, I guess three more files won't make the user confused.

    I sampleconed forty-five degrees out of my area lights with 16 transmission samples. It looks damn stupid, with backlight bleeding through her, but I don't seem to notice any fireflies. Rotated the camera, added DOF, still no fireflies, just the artefacts from the insane samplecone.

    unrealisticsamplecone.jpg
    500 x 500 - 48K
  • ZarconDeeGrissom Posts: 5,412
    edited December 1969

    mjc1016 said:
    The code block broke the window size...and with the colors of the forum, it's hard to read...

    But, I'm betting that Uber does use a sample cone. That could be a difference.

    And no, I haven't exported any that have that problem as I generally don't use UberArea...I use yours or one of the other ones (I brought in a couple other ones), but mostly yours.

    lol. Most forums are like that. I tend to copy and paste the text into Notepad, if that helps at all. It doesn't always help tho.

    Funny, even with my test chamber. The shadows on faces when the figure is moved (X translate) to the side of the room all emanate from the center of the 40ft soft boxes, even though it's farther away. I admit, I don't totally understand the light distribution of Uber versus that of a real light that large in surface area. I would just move the spotlights over to make up for that, lol. "Looks about right" lol.

    Fireflies, with no caustics, in 3delight. I would also guess samples on something, akin to grainy shadows, though I've not had that happen myself yet.

  • ZarconDeeGrissom Posts: 5,412
    edited June 2015

    Incredible difference having a map (OmDawn_EnvM.tif) in the UE2 thing makes.

    There are HDR versions of these maps included now (have been since a few DS builds). These seem to be a bit brighter (32 bit vs 16 bit?).
    Brighter, that is curious. I would think a 24bit RGB (8bit each) red value of 255 would be the same brightness of red as 31 in 16bit (5bit each), or as 1023 in 32bit (10bit each)? Mid gray being half the values listed for RGB color-space?

    Still, I did notice the three different files in that folder (JPG, TIF, HDR), and figured I'd go with whatever the 'Surfer guy thing started with, as that is what I used. It did seem like a good common ground, as it came with Studio 4.6 when I started.
    (EDIT)
    Also, the JPG and TIF are tiny little things. Though I'm guessing that for ambient light directional intensities, you don't 'Need' a huge map. That, or it was just enough to show the idea without taxing the install size of the demos.

    In the end, it also reduces the ram requirements of the test chamber :coolsmile:

    Post edited by ZarconDeeGrissom on
  • ZarconDeeGrissom Posts: 5,412
    edited June 2015

    mjc1016 said:
    The code block broke the window size...and with the colors of the forum, it's hard to read...

    But, I'm betting that Uber does use a sample cone. That could be a difference.

    And no, I haven't exported any that have that problem as I generally don't use UberArea...I use yours or one of the other ones (I brought in a couple other ones), but mostly yours.

    I use UberArea on the mesh lights that come with the kit. I figured these should be okay because I didn't notice speed difference between mine and Uber. But it looks like I'm better off putting my area light in the kit as well and swapping the mesh light presets.

    I had to customise the DS scripts shader builder generates for the area to load correctly when the scene is reloaded, and they should go into DS installation folder, but as all the render scripts go there as well, I guess three more files won't make the user confused.

    I sampleconed forty-five degrees out of my area lights with 16 transmission samples. It looks damn stupid, with backlight bleeding through her, but I don't seem to notice any fireflies. Rotated the camera, added DOF, still no fireflies, just the artefacts from the insane samplecone. Forgive me for saying it, there are many on the forum I'm sure would love that.

    That looks like a glazed porcelain doll of sorts, with a bit of overexposure on the film :coolsmile: 1900's Bisque doll? A perfect set of shader settings for many things, like ancient pottery (Roman, Greek, China, etc). Perhaps even for that terracotta army?

    "C.2000s American ceramic copy of antique German china doll head" by SherryRose. Wikipedia

    C.2000s_American_ceramic_copy_of_antique_German_china_doll_head_.jpg
    419 x 387 - 23K
    Post edited by ZarconDeeGrissom on
  • wowie Posts: 2,029
    edited December 1969

    The difference between 4.7 and 4.8 was incredible, especially with Progressive mode spot-renders.

    I've found with IPR, there's no more need for spot renders. :)


    Which shaders? Uber or Kettu's?

    There's a bit more of a speed up with Kettu's or other RTshaders than with Uber...though there is a speed up with Uber. I haven't noticed all that much of a difference with AoA's shader...but I think that may be because it's a ShaderMixer one.

    UberSurface2. Didn't test with Kettu's shader. I believe you're right, but I wanted to see the differences without raytraced SSS. Like you said, even on base shaders and old school SSS, there is indeed a speed up (measurable, but not really that big).

    The newer mode of progressive rendering irks me though - makes it hard to switch back and forth between renders to test settings out. That's the reason why I'm still on 4.7.

  • Rogerbee Posts: 4,460
    edited December 1969

    During my absence from this thread, and having seen G3F and V7, I've re-evaluated what I want out of DS and where I want to go with it. The path I have chosen is away from here. I just wasn't getting what I wanted or needed from what I've been working with since I started looking at and contributing to this thread.

    I wish you luck with what you do and hope you guys, and indeed gals, find what you're looking for

    Farewell and adieu

    CHEERS!

    PS (I've unsubscribed too.)

  • atticanne Posts: 3,009
    edited December 1969

    Well, damn Sam. Rogerbee, I'll miss you. Good luck with whatever direction you take.

  • wowie Posts: 2,029
    edited December 1969

    mjc1016 said:
    There was something on the 3DL forums about the differences in samples...I think it was one of the plausible shading threads. Basically the 'old' shaders needed a lot more samples to do the same job as the new 'trace' does, at much more sane levels.

    That would be MIS at work, I think. More samples from relevant areas, rather than just sampling uniformly.

    Finally got around to reading the tech behind Disney's Hyperion.
    Sorted Deferred Shading for Production Path Tracing
    https://disney-animation.s3.amazonaws.com/uploads/production/publication_asset/70/asset/Sorted_Deferred_Shading_For_Production_Path_Tracing.pdf

    The test scenes are enormous! The interior scene has 551K faces, subD at 70.5 M with 13.6 GB of textures. The city scene pushed those up to 53M and 21.6 GB of textures.

  • ZarconDeeGrissom Posts: 5,412
    edited December 1969

    wowie said:
    mjc1016 said:
    There was something on the 3DL forums about the differences in samples...I think it was one of the plausible shading threads. Basically the 'old' shaders needed a lot more samples to do the same job as the new 'trace' does, at much more sane levels.

    That would be MIS at work, I think. More samples from relevant areas, rather than just sampling uniformly.

    Finally got around to reading the tech behind Disney's Hyperion.
    Sorted Deferred Shading for Production Path Tracing
    https://disney-animation.s3.amazonaws.com/uploads/production/publication_asset/70/asset/Sorted_Deferred_Shading_For_Production_Path_Tracing.pdf

    The test scenes are enormous! The interior scene has 551K faces, subD at 70.5 M with 13.6 GB of textures. The city scene pushed those up to 53M and 21.6 GB of textures. That's one scene you'll never be able to cram into a 4GB graphics card with Iray, lol.

    Thanks for the link, reading now.

  • Mustakettu85 Posts: 2,933
    edited December 1969

    Forgive me for saying it, there are many on the forum I'm sure would love that.

    That looks like a glazed porcelain doll of sorts, with a bit of overexposure on the film :coolsmile:

    It's literally the same scene as the renders in the first UberArea-related post of mine. A different angle and the light made to make no sense. Other than that, the same.

    Either way, thank you.

  • Mustakettu85 Posts: 2,933
    edited December 1969

    Brighter, that is curious. I would think a 24bit RGB (8bit each) red value of 255 would be the same brightness of red as 31 in 16bit (5bit each), or as 1023 in 32bit (10bit each)? Mid gray being half the values listed for RGB color-space?

    I'm talking bit per channel, not per pixel.

    8bit is (s)RGB, low dynamic range (it's your "24bit" jpeg or "32bit" png with the fourth channel being alpha). 16 and 32 bit resolutions are high dynamic range. Their max values are correspondingly twice or four times as bright as the "255" of an 8bit. This is why 'tonemapping' exists: it brings the brightnesses of HDR images into the screen-displayable sRGB space. It's a topic you may want to research from primary sources, like Debevec's works, I believe.

    Of course a given 32 bit image may use less than the full range. It's often an issue, particularly these days when HDR maps are often sampled for full real-world lighting, shadows and all, not just vague ambient fills. There was some Iray resource that specifically warned against using dim HDRs since they will skew your colours towards the skydome, if the sun is not bright enough. But it's not an Iray-specific thing.
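The idea of bringing HDR values into displayable range can be sketched with a simple global operator. This uses the classic Reinhard curve purely as an illustration; it is not the specific tonemapper any tool in this thread uses:

```python
def reinhard(v):
    """Map an HDR luminance value in [0, inf) into [0, 1)."""
    return v / (1.0 + v)

# Values above 1.0 (brighter than the display can show) are compressed
# smoothly instead of clipping hard:
for hdr in (0.5, 1.0, 4.0, 100.0):
    print(hdr, round(reinhard(hdr), 3))
```

Note how 4.0 maps to 0.8 and even 100.0 stays just under 1.0, which is exactly the "bring HDR brightnesses into screen space" behaviour described above.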

  • mjc1016 Posts: 15,001
    edited July 2015

    There was some Iray resource that specifically warned against using dim HDRs since they will skew your colours towards the skydome, if the sun is not bright enough. But it's not an Iray-specific thing.

    No, it's not...and as many are finding out, HDR QUALITY is key. A good HDR should not need additional lights (except maybe in 3DL, if the GI shader doesn't do specular), nor should it skew the colors when used. Just because it has an hdr or exr file extension doesn't mean it actually has the range to be considered an HDR image...if it doesn't have the dynamic range, then all it is is a generic 32-bit image file with not all the bits used (that's per channel). If you want additional lighting, it should be an artistic decision, not a necessity...or why bother using an HDR IBL solution in the first place? Just use a jpeg backdrop and light the scene with a combination of other light types.

    The other thing that's being shoved in faces/down throats...LINEAR is not just a 'nice idea that's too complicated to do', but rather it's 100% necessary.

    Post edited by mjc1016 on
  • Mustakettu85 Posts: 2,933
    edited December 1969

    mjc1016 said:
    LINEAR is not just a 'nice idea that's too complicated to do', but rather it's 100% necessary.

    And it's not really complicated at all these days in DS.

    ...If you roll your own materials, of course. It seems the only people who are complaining about "difficulties" of linear workflow are those who rely a lot on someone else's presets.

    Which, in truth, I never understood. It's kinda like using box paint colours without blending them. Each paint is so nice on its own, but they can clash big time when just squeezed out side by side.

    And what you truly need for a cohesive painting is to select a triad out of your 48-shade box; a triad to make all other colours out of.

  • Mustakettu85 Posts: 2,933
    edited December 1969

    mjc1016 said:
    A good HDR should not need additional lights (except maybe in 3DL, if the GI shader doesn't do specular)

    You can just trace() glossy reflections in the same call that does pathtraced area lights with a specular distribution, instead of bsdf()-sampling specular deltas/oldschool areas. It takes more time, but not that much more anymore, for stills at least.

  • wowie Posts: 2,029
    edited July 2015

    mjc1016 said:

    The other thing that's being shoved in faces/down throats...LINEAR is not just a 'nice idea that's too complicated to do', but rather it's 100% necessary.

    Linear is the first step. Without it, everything else will be calculated with the wrong values. But it is only one step in a workflow that consists of many steps.

    Experimented with various gamma/gain settings today.

    Going with a gamma of 2.0 and a gain of 1.2 or 1.3 will give you generally the same luminance, but much darker shadows/dark areas. A gain of 1.2 will generally mean your brightest areas won't end up overblown, but the darker areas will be much darker. At 1.3, the darker areas are about the same as gamma 2.2 and gain 1.0, but the brighter areas will be brighter.

    Boosting gain also means you gain back some of the saturation generally lost with gamma 2.2.

    But of course, always set up your lights/materials at gamma 2.2 and gain 1.0.
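For anyone who wants to see the numbers, the RiSpec-style exposure control is output = (value * gain) ^ (1 / gamma); I'm assuming here that this is what the DS gamma/gain dials map to, since gain is a RiSpec concept. A quick sketch reproduces the behaviour described above:

```python
def exposure(value, gain=1.0, gamma=2.2):
    """RiSpec-style exposure control: (value * gain) ** (1 / gamma)."""
    return (value * gain) ** (1.0 / gamma)

# Compare a shadow, a midtone and a highlight (linear scene values):
for v in (0.05, 0.5, 0.9):
    print(v,
          round(exposure(v), 3),            # gamma 2.2, gain 1.0 (reference)
          round(exposure(v, 1.2, 2.0), 3),  # gamma 2.0, gain 1.2: darker shadows
          round(exposure(v, 1.3, 2.0), 3))  # gamma 2.0, gain 1.3: shadows ~same, highlights brighter
```

Running this shows the pattern from the post: at gain 1.2 the shadow value lands below the gamma 2.2 reference, at gain 1.3 it lands almost exactly on it, and the highlight comes out brighter in both cases.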

    That's one scene you'll never be able to cram into a 4GB graphics card with Iray, lol.

    Maybe not iray, but I've seen other GPU renderers that can cram around 100 M polys (tessellated/SubD) into about 3.5 GB of local memory. That's why GPU renderers don't make sense if you can't do out-of-core access efficiently. As I wrote a while back on the old thread, Hyperion allows Pixar/Disney to forgo GPU renderers altogether. Just look at the speed up with just hit sorting and larger batches - 20 times. That can still be further improved by doing ray sorting as well.

    But the later, extra tests (530 M polys) would definitely choke most GPU renderers, and probably even CPU renderers as well.

    Post edited by wowie on
  • mjc1016 Posts: 15,001
    edited December 1969

    wowie said:
    mjc1016 said:

    The other thing that's being shoved in faces/down throats...LINEAR is not just a 'nice idea that's too complicated to do', but rather it's 100% necessary.

    Linear is the first step. Without it, everything else will be calculated with the wrong values. But it is only one step in a workflow that consists of many steps.

    Experimented with various gamma/gain settings today.

    Going with a gamma of 2.0 and a gain of 1.2 or 1.3 will give you generally the same luminance, but much darker shadows/dark areas. A gain of 1.2 will generally mean your brightest areas won't end up overblown, but the darker areas will be much darker. At 1.3, the darker areas are about the same as gamma 2.2 and gain 1.0, but the brighter areas will be brighter.

    Boosting gain also means you gain back some of the saturation generally lost with gamma 2.2.

    But of course, always set up your lights/materials at gamma 2.2 and gain 1.0.

    But even as a first step, it is a necessary one...the days of 'what's linear' are over.

  • ZarconDeeGrissom Posts: 5,412
    edited July 2015

    mjc1016 said:
    There was some Iray resource that specifically warned against using dim HDRs since they will skew your colours towards the skydome, if the sun is not bright enough. But it's not an Iray-specific thing.


    No, it's not...and as many are finding out, HDR QUALITY is key. A good HDR should not need additional lights (except maybe in 3DL, if the GI shader doesn't do specular), nor should it skew the colors when used. Just because it has an hdr or exr file extension doesn't mean it actually has the range to be considered an HDR image...if it doesn't have the dynamic range, then all it is is a generic 32-bit image file with not all the bits used (that's per channel). If you want additional lighting, it should be an artistic decision, not a necessity...or why bother using an HDR IBL solution in the first place? Just use a jpeg backdrop and light the scene with a combination of other light types.

    I've thought about similar things with some bump/displacement maps, lol. I remember reading in the past that (something higher than 24bit, I think it was 48bit?) map files were all that and then some. Then I look at my putzing around in MS Paint with cloth weave patterns, and ask: how does all that wasted space for the unused bits make any bit of difference at all, when I'm going from max height to minimum height in less than sixteen pixels of thread length on my maps? lol.

    The other thing that's being shoved in faces/down throats...LINEAR is not just a 'nice idea that's too complicated to do', but rather it's 100% necessary. For the sake of this thread, I reserve my thoughts on the Gamma Correction On/Off cold war. lol.

    All I will add is, in the real world there is no limit to how bright something can get; image files do have limits, though. There is always the chance of losing information, and not just (lack of values) at the bright or dark end of the RGB color-space scale in binary formats, regardless of 'Linear' or 'Logarithmic' workflow.

    The same arguments have raged over 'Linear' vs 'Logarithmic' op-amps, 'Linear' vs 'Logarithmic' DACs and ADCs, 'Linear' vs 'Logarithmic' wave file formats, etc, for decades. The truth is they both have their uses.

    Post edited by ZarconDeeGrissom on
  • wowie Posts: 2,029
    edited July 2015


    All I will add is, in the real world there is no limit to how bright something can get; image files do have limits, though. There is always the chance of losing information, and not just (lack of values) at the bright or dark end of the RGB color-space scale in binary formats, regardless of 'Linear' or 'Logarithmic' workflow.

    The same arguments have raged over 'Linear' vs 'Logarithmic' op-amps, 'Linear' vs 'Logarithmic' DACs and ADCs, 'Linear' vs 'Logarithmic' wave file formats, etc, for decades. The truth is they both have their uses.

    I think you may have confused things. Linear workflow is not about limits, but about making sure you're using the correct values when doing shader calculations. To deal with limits and ranges, there are bits per color channel (8, 16, or 32 bits per channel). Also, there is such a thing as exposure - when you're seeing something very bright, your eyes can't distinguish dark areas, and vice versa. If you have both of them in an image, that's where tone mapping and LUTs come in.

    With those things, even with the 24 bit palette (8 bit per channel) used in common image/graphic formats, it is possible to capture most of the colors you generally see in real life. I wrote most, since there's also color space to consider (sRGB, Adobe RGB, etc).

    To illustrate, say you need to increase a midtone grey color by 10%. In gamma space, that means adding 10% to the color; let's pick 128,128,128. You'll end up with 141,141,141. But in linear space, that grey color is actually 187,187,187, and adding 10% gets you 206,206,206. That's a difference of 13 in gamma space and 19 in linear space. If you're adjusting things in gamma space and don't employ a linear workflow, it is easy to overshoot the correct values. That applies to lower values as well.

    With layers of textures and various bsdfs (diffuse, specular, reflection), that little overshoot accumulates and you're going to have the wrong colors/luminance.
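The overshoot is easy to check yourself. This sketch uses a plain 2.2 power curve rather than the exact piecewise sRGB transfer function, which is close enough for illustration:

```python
def srgb_to_linear(c):
    """Decode an 8-bit display value to linear light (2.2 power approximation)."""
    return (c / 255.0) ** 2.2

def linear_to_srgb(v):
    """Encode linear light back to an 8-bit display value."""
    return round(255.0 * v ** (1.0 / 2.2))

# Brighten mid grey (pixel value 128) by 10% in light energy, two ways:
naive = round(128 * 1.1)                              # edit directly in gamma space
correct = linear_to_srgb(srgb_to_linear(128) * 1.1)   # decode, edit, re-encode
print(naive, correct)  # the gamma-space edit overshoots the linear-workflow result
```

The naive gamma-space edit lands at 141 while the decode-edit-encode path lands at 134, so editing without linearising overbrightens by several levels, and that error compounds across stacked textures and shading terms.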

    Post edited by wowie on
  • ZarconDeeGrissom Posts: 5,412
    edited July 2015

    wowie said:

    All I will add is, in the real world there is no limit to how bright something can get; image files do have limits, though. There is always the chance of losing information, and not just (lack of values) at the bright or dark end of the RGB color-space scale in binary formats, regardless of 'Linear' or 'Logarithmic' workflow.

    The same arguments have raged over 'Linear' vs 'Logarithmic' op-amps, 'Linear' vs 'Logarithmic' DACs and ADCs, 'Linear' vs 'Logarithmic' wave file formats, etc, for decades. The truth is they both have their uses.

    I think you may have confused things. Linear workflow is not about limits, but about making sure you're using the correct values when doing shader calculations. To deal with limits and ranges, there are bits per color channel (8, 16, or 32 bits per channel). Also, there is such a thing as exposure - when you're seeing something very bright, your eyes can't distinguish dark areas, and vice versa. If you have both of them in an image, that's where tone mapping and LUTs come in.

    With those things, even with the 24 bit palette (8 bit per channel) used in common image/graphic formats, it is possible to capture most of the colors you generally see in real life. I wrote most, since there's also color space to consider (sRGB, Adobe RGB, etc).

    I was pointing out that there are limits to binary values that do not exist in reality. As for the rest, I've already been down that road, twice; it does not work for me.
    http://www.daz3d.com/forums/discussion/43316/

    IPR is not for everyone either. With HD figures in the scene I'm working on, it locks up my computer every time that shader "Pre compute" delay kicks in. Spin the view, move the figure, etc.
    And having some figures lock up the computer for over twenty minutes at a time, (going off like a frog in a sock). lol.

    So, it does not matter what you say; unless the difficulties with the interface are fixed, it's never going to happen.
    (The GPU can handle 8 lights; there are 7 lights in the test chamber. The same test chamber is now at ShareCG.)

    WorkingWithGcOn_001.png
    1746 x 1000 - 651K
    Post edited by ZarconDeeGrissom on
  • wowie Posts: 2,029
    edited July 2015


    I was pointing out that there are limits to binary values, that do not exist in reality.

    If you're referring to the difference between continuous and discrete signal (ie analog vs digital), that's a sampling issue.


    IPR is not for everyone either. With HD figures in the scene I'm working on, it locks up my computer every time that shader "Pre compute" delay kicks in. Spin the view, move the figure, etc.

    It's an easy enough solution - don't enable HD with test renders. :)

    As mjc1016 pointed out and I wholeheartedly agree, linear workflow is a very crucial step. Without it, you're very prone to using the wrong values. It affects everything - textures, shading, lighting. Even if you use physically correct values on everything, they will look wrong since they're not being calculated the way they should be.

    Post edited by wowie on
  • ZarconDeeGrissom Posts: 5,412
    edited December 1969

    wowie said:

    I was pointing out that there are limits to binary values, that do not exist in reality.

    If you're referring to the difference between continuous and discrete signal (ie analog vs digital), that's a sampling issue.


    IPR is not for everyone either. With HD figures in the scene I'm working on, it locks up my computer every time that shader "Pre compute" delay kicks in. Spin the view, move the figure, etc.

    It's an easy enough solution - don't enable HD with test renders. :)

    As mjc1016 pointed out and I wholeheartedly agree, linear workflow is a very crucial step. Without it, you're very prone to using the wrong values. It affects everything - textures, shading, lighting. Even if you use physically correct values on everything, they will look wrong since they're not being calculated the way they should be.

    And a conversion issue as well, for loss of data, lol.

    All of the HD figures I got lately, do not have separate HD dials, nor did they come with AltShaders. So that put a good solid nail in that coffin. Load "Faceplant Goddess" into a scene, and walk away from the computer for (I have no idea how long), before I can flip some hidden switch I don't know about, and continue setting up a scene? lol, that agrees with my signature below my posts in an alternate reality possibly.

  • mjc1016 Posts: 15,001
    edited December 1969

    All of the HD figures I got lately, do not have separate HD dials, nor did they come with AltShaders. So that put a good solid nail in that coffin. Load "Faceplant Goddess" into a scene, and walk away from the computer for (I have no idea how long), before I can flip some hidden switch I don't know about, and continue setting up a scene? lol, that agrees with my signature below my posts in an alternate reality possibly.

    Play around with the display optimization settings...it's under Preferences > Interface. There is no one setting for that for everyone...you need to find what works best for your setup.

    And I don't get it...I've got a mediocre quad core AMD CPU, a 1 GB 430 GT, 8 GB of RAM (with only 4 available to DS...it's 32 bit, because I can't get the 64 bit version to run) and don't have half the trouble a lot of folks complain about...I've had a few scene related lock ups...but that's with 23 dragons or a 1.8 million poly mecha, at SubD 1 (why did I do that, I can't remember...). And that is on top of running through WINE...and having a browser open to multiple tabs and/or music playing...or a render running...or Blender...or any combination of the above.

    Is Windows memory management that crappy?

    Here's what I've got running, right now...and in no way am I 'bogged down'.

    desktop.png
    1280 x 1024 - 460K
  • ZarconDeeGrissom Posts: 5,412
    edited December 1969

    Just 3delight IPR, with AoA/Omni figures, lol. Spot renders don't make Winamp stop playing; trying to rotate the view with IPR on does. If there were a way to tell IPR to use 7 cores instead of all 8 (without setting it every time I start Studio, aka Task Manager), I would at least have music as I wait for the IPR to catch up. I still would not be happy with that, though. OpenGL is much faster.

    Now, to be fair, I tried it once when IPR was launched on 4.7 I think it was, and decided it was a waste of my CPU. Let me see spot renders when I need them, as I can see what I'm doing in the view field with GC OFF just using the OpenGL interface. Replacing the Viewfield with the IPR thing, I just don't see that working for many reasons.

    It's not Windows memory management, it's the shader pre-compute thing, once a render starts showing pixels, I get music again.

  • ZarconDeeGrissom Posts: 5,412
    edited July 2015

    And here is another point about all this 'Gamma Correction is God' stuff. What gamma am I supposed to use in that other dial? And how do I get the color back into the washed-out renders, lol.

    wowie said:
    mjc1016 said:

    The other thing that's being shoved in faces/down throats...LINEAR is not just a 'nice idea that's too complicated to do', but rather it's 100% necessary.

    Linear is the first step. Without it, everything else will be calculated with the wrong values. But it is only one step in a workflow that consists of many steps.

    Experimented with various gamma/gain settings today.

    Going with a gamma of 2.0 and a gain of 1.2 or 1.3 will give you generally the same luminance, but much darker shadows/dark areas. A gain of 1.2 will generally mean your brightest areas won't end up overblown, but the darker areas will be much darker. At 1.3, the darker areas are about the same as gamma 2.2 and gain 1.0, but the brighter areas will be brighter.

    Boosting gain also means you gain back some of the saturation generally lost with gamma 2.2.

    But of course, always set up your lights/materials at gamma 2.2 and gain 1.0.

    That's one scene you'll never be able to cram into a 4GB graphics card with Iray, lol.

    Maybe not iray, but I've seen other GPU renderers that can cram around 100 M poly (tesselated/SubD) with about 3.5 GB of local memory. That's why GPU renderers don't make sense if you can't do out of core access efficiently. As I wrote awhile back on the old thread, Hyperion allows Pixar/Disney to forgo GPU renderers altogether. Just look at the speed up wit just hits sorting and larger batches - 20 times. That can still be further improved by doing ray sorting as well.

    But the later, extra tests (530 M polys) would definitely choke most GPU renderers and probably even CPU renderers as well.

    This is the first time I've seen a definitive answer to the other thread that went on for pages arguing over what the correct gamma settings are, ranging from 1.2 through 2.4 and everywhere in between. It honestly read like an 'audiophile' argument over which copper wire was better for hooking up speakers (cloth covered, twisted, felt lining between the insulator and inner copper strands, etc), lol.

    Now, everywhere else, it is listed with scientific numbers to back it up. Cameras vs monitors on a PC, for example: a gamma of 0.45 vs 2.2 for sRGB on a PC. Anything else gives you bad data.
    http://graphics.stanford.edu/courses/cs178/applets/gamma.html
    Now, you can spin those dials and see how it affects the contrast and especially the 'saturation' of color. This is the other thing that put me off using GC a while ago.
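For what it's worth, the 0.45 and 2.2 figures aren't competing choices; they are (near) inverses of each other, since 0.45 ≈ 1/2.2. The camera-side encode and the display-side decode are meant to cancel out. A two-line sketch:

```python
encode = lambda v: v ** 0.45   # camera/file-side gamma (roughly 1/2.2)
decode = lambda v: v ** 2.2    # display-side gamma

# Round-tripping a linear value returns (almost) the original,
# because 0.45 * 2.2 = 0.99, which is approximately 1:
v = 0.2
print(round(decode(encode(v)), 3))
```

The tiny residual error comes from 0.45 being a rounded convenience figure for 1/2.2 (0.4545...), which is why the two numbers always travel as a pair rather than being alternatives to argue over.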

    Post edited by ZarconDeeGrissom on
  • ZarconDeeGrissom Posts: 5,412
    edited July 2015

    I stand corrected. In either 4.8.0.55 or 4.8.0.56, they finally fixed the view field, I think. It no longer instantly goes pitch black when GC is turned on.

    I know I tried switching it back and forth in an earlier version of the Public build. Asking about the little ball that shows what view mode is on, lol.
    (EDIT)
    Same room, Same light settings, newer version of Studio. Hell, even the wall is almost the same shade of grey.
    Now, the jury is still out for when the lights are adjusted back to normal levels.
    (EDIT2)
    Looks like somewhere in the vicinity of 80% of the original light levels is good at floor level. Though I did the calibration at G2F head height at world center the first time, in the shared Test Chamber.

    Viewfield_48056_WithGcOn_LightsAllDown80pcnt_001.png
    1591 x 1145 - 636K
    Viewfield_48056_WithGcOn_001.png
    1626 x 1073 - 792K
    Post edited by ZarconDeeGrissom on
  • Mustakettu85 Posts: 2,933
    edited December 1969

    mjc1016 said:

    Is Windows memory management that crappy?

    Here's what I've got running, right now...and in no way am I 'bogged down'.

    Windows is okay once you finetune services and stuff by hand.

  • Mustakettu85 Posts: 2,933
    edited December 1969

    wowie said:

    Linear is the first step. Without it, everything else will be calculated with the wrong values. But it is only one step, in a workflow that consist of many steps.

    Experimented with various gamma/gain settings today.

    Gain is a RiSpec relic, but it is useful sometimes =)

    And out-of-core access... yeah. This one does it:

    https://www.redshift3d.com/support/faqs/

    Maybe 3Delight will too, one day...

  • ZarconDeeGrissom Posts: 5,412
    edited December 1969

    Hello Kettu. Been a long day, was just about to head for shut-eye. As for GC, well, I'll just leave it with this for now, lol.

    20150702_FwEveWachiwi_GcTest_001002_Render_1.jpg
    1800 x 1200 - 1M
  • Mustakettu85 Posts: 2,933
    edited December 1969

    Hello Kettu. Been a long day, was just about to head for shut-eye. As for GC, well, I'll just leave it with this for now, lol.

    I don't know why you're making such an issue out of it, Zarcon, I really don't.

    Linearise the inputs; tonemap the output. The "2.2" gamma curve is the simplest form of "tonemapping" here, designed to bring the linear output of the renderer correctly into your display sRGB colour space.

    That's that.
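That two-step rule can be sketched in a few lines. This is a hypothetical illustration (the function names are mine, and a plain 2.2 power curve stands in for the exact sRGB transfer function): light adds up in linear space, and only the final result gets gamma-encoded for the display:

```python
GAMMA = 2.2

def linearise(texel):
    """Decode a gamma-encoded input (texture value, colour swatch) to linear light."""
    return texel ** GAMMA

def tonemap(value):
    """The simplest 'tonemap': clamp, then apply the 2.2 display curve."""
    return min(value, 1.0) ** (1.0 / GAMMA)

# Two half-bright contributions lighting the same point: add their energy
# in linear space, then encode once at the very end.
a, b = linearise(0.5), linearise(0.5)
print(round(tonemap(a + b), 3))   # stays comfortably below white
print(0.5 + 0.5)                  # naive gamma-space addition slams straight into 1.0
```

The point of the sketch: two "50% grey" contributions carry much less than half the display's light energy each, so summed linearly they come out around 0.69 on screen, whereas adding the encoded values directly would clip to pure white.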

  • ZarconDeeGrissom Posts: 5,412
    edited December 1969

    Hello Kettu. Been a long day, was just about to head for shut-eye. As for GC, well, I'll just leave it with this for now, lol.

    I don't know why you're making such an issue out of it, Zarcon, I really don't.

    Linearise the inputs; tonemap the output. The "2.2" gamma curve is the simplest form of "tonemapping" here, designed to bring the linear output of the renderer correctly into your display sRGB colour space.

    That's that.

    lol. Yeah, you're right, it's a dead horse. Moving on. I'll post the Light Settings in the Test Chamber thread, for those that want to continue this, as Londo Mollari puts it, "debauchery". lol.

    I have a G2M Ambassador outfit to try to make less "over the top", for some space-uniform-like thing. More Trek-like than Flash Gordon, lol.

    20150702_FwEveWachiwi_GcTest_001002_SSSon_002.png
    1841 x 1094 - 2M