Everything to do with Lighting in Bryce 7.1

Comments

  • David BrinnenDavid Brinnen Posts: 3,136
    edited August 2012
    Saturation_test2.jpg
    700 x 700 - 157K
    Post edited by David Brinnen on
  • Rashad CarterRashad Carter Posts: 1,799
    edited December 1969

    David Brinnen said:
    Thanks Rashad, yes the inconsistency you correctly observed is central to the topic of RGB response.

    Here's another Bryce 20 minute scene lighting project - Using IBL with boost light and TA gels - by David Brinnen

    Here this issue with noise, disproportionate RGB response, and the incompatibility of IBL and boost light is overcome - by quite a devious strategy.

    Watched the video. I am speechless. Now ask yourself how often it is that I don't have a word to say? I am really excited by this find and I think your steps in this video might be just what the doctor ordered. Brilliant work, David, as always. You must be feeling good about yourself. This bit of cleverness is quite high-level. Wicked workaround. Thanks so much for sharing this insight. You clever one!

    I am not sure if you are planning products around these new discoveries. I am of the mind that these processes that require lots of steps and understanding of nuts and bolts are sometimes best presented as mature features within the next release of the application, if there ever is one. Here's hoping!

  • David BrinnenDavid Brinnen Posts: 3,136
    edited December 1969

    Thank you Rashad for your kind and enthusiastic remarks.

    Well, thing is, the uptake of the more advanced products Horo and I have produced has not been sufficient to even cover the costs of the electricity used in making them - not that we begrudge making them (it is our chosen hobby after all - we are not forced to do this), but now we recognise that we need to educate our potential customers to the point where they recognise the value of our products for themselves.

    Hence this Bryce Mentoring DVD and the continued commitment to making tutorials and offering guidance.

    And in that vein, here is another offering, Bryce 20 minute scene lighting project - the Xyzrgb Stanford Dragon - a tutorial by David Brinnen

    Xyzrgb_dragon4.jpg
    1280 x 720 - 303K
  • David BrinnenDavid Brinnen Posts: 3,136
    edited December 1969
    Vicky_from_dragon_lighting2.jpg
    720 x 405 - 88K
  • LordHardDrivenLordHardDriven Posts: 937
    edited December 1969

    David Brinnen said:
    Thank you Rashad for your kind and enthusiastic remarks.

    Well, thing is, the uptake of the more advanced products Horo and I have produced has not been sufficient to even cover the costs of the electricity used in making them - not that we begrudge making them (it is our chosen hobby after all - we are not forced to do this), but now we recognise that we need to educate our potential customers to the point where they recognise the value of our products for themselves.

    Hence this Bryce Mentoring DVD and the continued commitment to making tutorials and offering guidance.

    And in that vein, here is another offering, Bryce 20 minute scene lighting project - the Xyzrgb Stanford Dragon - a tutorial by David Brinnen

    Well, educating your customers is definitely a smart move and should ultimately be to your benefit, but recognizing value is only half the battle. I imagine a great many are like me and are blocked by the high price points Daz places on your products to cover their share. I just did the capsules tutorial yesterday and was frustrated when it came to adding the HDRI image, because I didn't own the particular product of Horo's that you got the HDRI from. I would have bought it, but alas, for someone on disability with a fixed income, $35 for HDRI lights is never going to happen, regardless of how much value I recognize in them. Same with the mentoring DVD. I know it's loaded with useful info I'd love to have and that would make me a better Brycer, but at $80 it's almost half of my and my wife's monthly grocery budget.

    Now that's not to say yours and Horo's products aren't worth every penny. Unlike a lot of vendors before you, with your products I know it's going to be done right and deliver as promised. Other products I've bought here for Poser, Bryce, Carrara and even Studio were defective out of the box, and nothing has been done to acknowledge those problems, let alone fix them.

    Anyway, enough of that; I'm sure these points are things you and Horo have debated long and hard over and drawn your own conclusions. On to the capsules. Fortunately, while I don't own the particular product of Horo's you referenced, I did find one that appeared to be based on the same image but maybe set up for a different resolution. I noticed the name in your video and found an HDRI in Bryce from Horo that had a very similar name, Pfaffort.hdr. I think it was a lower resolution though, because everything seemed grainier than it should be, especially since I didn't turn down any settings to speed up my render. Below is the final result, and as you can see, this time rather than just following it to the letter I made some changes of my own and added a pill bottle as an additional element. Now if only I had a tutorial to show me how to put a label on the bottle. :)

    PS I see you switched over to the other Stanford dragon. Any chance of getting a version of that, and any of the other free-to-the-public Stanford models you have? I ask because so far none of the conversions on the ones I got have worked in Bryce.

    Capsules.jpg
    800 x 446 - 24K
  • David BrinnenDavid Brinnen Posts: 3,136
    edited December 1969

    The "pills" look excellent, and your HDRI image works well - since it is not visible in the render as such, low resolution versions are fine.

    Oh and do check your email...

  • LordHardDrivenLordHardDriven Posts: 937
    edited December 1969

    David Brinnen said:
    The "pills" look excellent, and your HDRI image works well - since it is not visible in the render as such, low resolution versions are fine.

    Oh and do check your email...

    Thanks and will do :)

  • HoroHoro Posts: 10,636
    edited September 2012

    Mark - we know that not everyone can purchase every product that comes in the store. That's why a few HDRIs are supplied with Bryce. I also have a few others on my website for free. There are so many HDRIs because each one has another quality to it, but this doesn't mean that only the one shown can be used. There will be differences and those make your render stand apart.

    As for the Stanford models: I downloaded them all a few weeks ago but couldn't find a reliable program to convert them. Most programs that claimed to be able to tried to install crap on my machine, and I had a busy time cleaning it up. Lost a full afternoon on this and finally gave up, frustrated.

    Post edited by Horo on
  • David BrinnenDavid Brinnen Posts: 3,136
    edited December 1969

    We need to pester Graham to do a tutorial for us! He seems to have mastered this process.

    As for pricing, well it very much depends where you are in the world as to what you'd consider expensive. Here in the UK for example $80 would not fill my car up with petrol. And don't think my car has an exceptionally large fuel tank, it doesn't.

  • David BrinnenDavid Brinnen Posts: 3,136
    edited December 1969

    Mark, Bryce 5 minute project - put a label on a jar - a tutorial by David Brinnen

    Dummies? Aye, that's right, I had to record this video again because I spelt "label" wrong when I used it as text for the label. What an idiot at spelling I am.

    Jar2_label_tut.jpg
    700 x 525 - 205K
  • David BrinnenDavid Brinnen Posts: 3,136
    edited December 1969

    Caustics? I'll upload a video once it's edited... in the end it took twice as long as I'd hoped.

    Caustics_test3.jpg
    700 x 394 - 157K
  • Rashad CarterRashad Carter Posts: 1,799
    edited December 1969

    David,

    I was just thinking from your other tests that you should be arriving at some caustics soon. Glad to see that happen so soon after it struck my mind. Caustics are something Bryce has needed for a long time, so if this indeed works as it should, we are in business. Excellent!!

  • David BrinnenDavid Brinnen Posts: 3,136
    edited December 1969

    Rashad Carter said:
    David,

    I was just thinking from your other tests that you should be arriving at some caustics soon. Glad to see that happen so soon after it struck my mind. Caustics are something Bryce has needed for a long time, so if this indeed works as it should, we are in business. Excellent!!

    It indeed works as it should - I think... well, the image you have seen is the result of the video. Beyond that I cannot say. This is as far as I've got. Bryce 20 minute experiment - TA generated caustics - a video by David Brinnen

  • Rashad CarterRashad Carter Posts: 1,799
    edited December 1969

    Haven't watched the video yet but I like what I see very much. I really wish Len was around the forums to see what you are discovering using Boost Light. Caustics would simply not be possible without Boost Light. I am so pleased to see Boost Light finally being recognized for the TA fix that it really is. Boost Light gets us much closer to unbiased rendering than default TA ever could. Vasily was right. Excellent, David.

  • David BrinnenDavid Brinnen Posts: 3,136
    edited December 1969

    Rashad Carter said:
    Haven't watched the video yet but I like what I see very much. I really wish Len was around the forums to see what you are discovering using Boost Light. Caustics would simply not be possible without Boost Light. I am so pleased to see Boost Light finally being recognized for the TA fix that it really is. Boost Light gets us much closer to unbiased rendering than default TA ever could. Vasily was right. Excellent, David.

    Your faith in the system has been justified by recent testing - I was, I recall, more sceptical. The issue of the noise remains.

    My conclusion...

    What is needed is some system which allows the surface luminance information to be distributed across the local geometry and then interact with material properties (rather than afterwards, which would result in surface blurring).

  • Rashad CarterRashad Carter Posts: 1,799
    edited December 1969

    I think the problem is a lack of initial feeler rays, resulting in an incomplete gathering of the environment. Fewer rays means less processing time, which would have been important especially 12 years ago. TA is likely cheating and not firing nearly the number of feeler rays it should. My thought is that to account for this lost information, default TA fills it in somehow, which I think of as a "cap." Pixels that struck dark areas are blended with gray to produce some of the lightness that would have been gathered from other areas of the environment if more rays were fired. Same with bright areas: bright pixels are blended with gray to darken them, as an estimation of the darkness the pixel would have otherwise gathered if more rays had been fired at different areas of the environment. Boost Light, by contrast, doesn't "fix" the lost information at all, leaving visible holes in the illumination as noise. My thinking is that there is only but so much we can do at the end of the calculation to account for a lack of initial accuracy in the gathering process.

    Your point about at which point the gathered information should be dithered (distributed) is important. But at best it will only aid us in better hiding the initial problem which is a lack of rays. What needs to happen is that enough rays need to be fired so that the viewing perspectives of two adjacent pixels overlap a great deal. If both pixels are gathering the full environment with only a slight difference in perspective then they will tend to arrive at very similar final colors automatically limiting the need for last minute fixes at all.

    I don't think rays per pixel is even going to accomplish it. What we need is a way to control the number of rays fired by TA alone, similar to the way you can adjust photon count in Carrara and other render engines. More photons result in smoother illumination and transitions between light and shadow as well as reduced noise... and longer render times!!
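    Rashad's point about ray count and noise can be put in numbers. The toy model below (plain Python, invented for illustration - it has nothing to do with Bryce's actual internals) gathers a made-up environment with N random feeler rays per "pixel" and measures the pixel-to-pixel scatter. The scatter falls roughly as 1/sqrt(N), which is why halving the noise costs roughly four times the rays and render time.

```python
import math
import random

def env(theta, phi):
    """Toy environment: a bright 'sun' patch on a dim background sky."""
    return 10.0 if (theta < 0.5 and phi < 1.0) else 0.2

def gather(n_rays, rng):
    """One pixel's TA-style gather: average radiance over n_rays
    randomly chosen hemisphere directions."""
    total = 0.0
    for _ in range(n_rays):
        theta = math.acos(rng.random())     # elevation from the surface normal
        phi = 2.0 * math.pi * rng.random()  # azimuth
        total += env(theta, phi)
    return total / n_rays

def pixel_noise(n_rays, n_pixels=400, seed=1):
    """Scatter of the gathered value across identical surface points -
    they should all agree, so any spread shows up as render noise."""
    rng = random.Random(seed)
    vals = [gather(n_rays, rng) for _ in range(n_pixels)]
    mean = sum(vals) / n_pixels
    return (sum((v - mean) ** 2 for v in vals) / n_pixels) ** 0.5
```

    With 4 rays per pixel the scatter is roughly four times what it is with 64 rays - a sixteen-fold increase in work for a four-fold improvement, which is exactly the painful scaling described above.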

  • David BrinnenDavid Brinnen Posts: 3,136
    edited December 1969

    Either that, or... use statistical methods to address the noise issue directly. If the results can be made to look right (or at least how the artist wants them to look), controlling the noise level may prove more efficient than increasing the accuracy. The lack of interest in (and use of) the premium effects in Bryce is probably to some degree down to the long render times - particularly so for people with slower CPUs.

  • Rashad CarterRashad Carter Posts: 1,799
    edited December 1969

    David Brinnen said:
    Either that, or... use statistical methods to address the noise issue directly. If the results can be made to look right (or at least how the artist wants them to look), controlling the noise level may prove more efficient than increasing the accuracy. The lack of interest in (and use of) the premium effects in Bryce is probably to some degree down to the long render times - particularly so for people with slower CPUs.

    Yes, I think we are after the same thing but perhaps hoping to accomplish these goals in a different way. From a statistical standpoint it would seem that there is an opportunity to introduce a shortcut in the noise handling. If, as you propose, the final gathered values of adjacent pixels were to be compared and processed in a way that did not limit them to a narrow band of colors, such as that of default TA, then I am all for it. My only caveat is that I have been thinking that statistical fudging is exactly what has already been done with default TA, and that the comparison of pixels leads to certain other artifacts and compromises. But I am in no way one who understands higher mathematics, so my fears could be completely unfounded.

    That said, I think that your idea is quite valid if it actually helps gather more information rather than attempt to hide a lack of information. In fact I can almost imagine a way for it to be done.

    Yes, as you propose, ideally the information gathered by adjacent pixels could be exchanged with other adjacent pixels. This information would only be shared at the end of the gathering process. Even though each individual pixel sends out only a few feeler rays, it can borrow and lend a bit of information to and from the pixels adjacent to it, to help it gain a better understanding of the environment. Ideally, this would allow a greater extraction of information from the surrounding environment. However, this would already introduce some statistical error, since a range would have to be set on the number of pixels to be compared. Plus, it would be necessary to sample a second time with the corrected values per pixel, based on the considerations of the first round of gathering. It would be very similar to the way the new AA settings work. Too wide a range and you will get a very blurry and distorted view of the environment, even though it will appear smooth and noiseless. Too narrow a range and you will gain very little in terms of noise handling, but most of the dynamics of the environment will remain intact. The comparison of pixels will mean some degree of blurring no matter how you slice it. So finding the sweet spot is the goal. I think you are right.

    I would hope that we could get GPU rendering for Bryce some day. I know, that seems like a leap in the conversation we have going, but I really do wish it were true. I would love to have unbiased renders to compare, to see what types of shortcuts we can and cannot make with regard to gathering of the environment. With GPU rendering we could crank up the discovery rays per pixel and really get some realistic lighting in somewhat acceptable times. But with CPU rendering, I guess we will have to resort to statistical fudging to get the job done in less than a million years.
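    The range trade-off Rashad describes - too wide a comparison range blurs the environment, too narrow leaves the noise - can be illustrated with a one-dimensional toy: a noisy step edge smoothed by neighbour averaging at different radii. This is a generic sketch, not Bryce's TA code; the signal, noise level and radii are invented for illustration.

```python
import random

def box_blur(signal, radius):
    """Average each sample with its neighbours within `radius` (clamped
    at the ends) - the simplest possible pixel-comparison filter."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def noisy_step(n=200, seed=7):
    """A dark/bright step edge with per-pixel gathering noise on top."""
    rng = random.Random(seed)
    return [(0.1 if i < n // 2 else 0.9) + rng.gauss(0, 0.05) for i in range(n)]

def flat_noise(signal):
    """Std deviation over a flat stretch well away from the edge."""
    patch = signal[10:80]
    mean = sum(patch) / len(patch)
    return (sum((v - mean) ** 2 for v in patch) / len(patch)) ** 0.5

def edge_width(signal, lo=0.3, hi=0.7):
    """Samples spent crossing from lo to hi - blurring smears this out."""
    xs = [i for i, v in enumerate(signal) if lo < v < hi]
    return (max(xs) - min(xs) + 1) if xs else 0
```

    A narrow radius knocks the noise down a little and barely touches the edge; a wide radius flattens the noise almost completely but smears the step across many samples - the blur that "no matter how you slice it" comes with comparing pixels.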

  • David BrinnenDavid Brinnen Posts: 3,136
    edited December 1969

    OK, let me run this past you. The problem as I see it is that the light gathering probes are infinitely thin, and it is this very precision that is causing the issue with the noise. So what I was thinking was that Bryce needs to suspend two views of the inner world while rendering. One view, which deals with direct lighting, has to be pixel sharp; the light gathering view needs to be "defocused" (user controlled). The sharp view would resolve direct light, bump, anisotropy, pattern (in other words, material properties) and would dictate the definition of where things began and ended. Then the "defocused" gathered light would be brought into play - but instead of being defocused across the 2D surface of the monitor, like AA softening, it would defocus across the geometry within the scene. So it would be bound by the "sharp" information, and so disguise the fact it could be gathered at lower resolution.
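    One way to read David's two-view idea is as a geometry-guided blur: smooth the gathered-light buffer, but only average pixels that the sharp pass says belong to the same surface. A minimal one-dimensional sketch, using a depth buffer as the "sharp" information - the buffers, tolerance and radius here are invented for illustration, not taken from Bryce:

```python
def geometry_aware_blur(light, depth, radius, depth_tol=0.5):
    """Blur the gathered-light buffer, but only across pixels whose depth
    says they lie on the same piece of geometry. Noise is smoothed within
    each surface while object boundaries stay pixel sharp."""
    n = len(light)
    out = []
    for i in range(n):
        acc, count = 0.0, 0
        for j in range(max(0, i - radius), min(n, i + radius + 1)):
            if abs(depth[j] - depth[i]) <= depth_tol:  # same surface?
                acc += light[j]
                count += 1
        out.append(acc / count)
    return out
```

    Run on a buffer with a depth discontinuity, this removes the sample noise on each side without letting the bright background bleed into the dark foreground - which a plain screen-space blur, like AA softening, would do.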

  • HoroHoro Posts: 10,636
    edited December 1969

    Generally, I would say that more feeler rays would result in a more precise render and an accordingly longer render time - there's no free lunch. Another path would be statistical filtering, using whatever algorithm, or cross convolving filters with error correction and relatively large masks, with weights getting smaller the farther a pixel is from the centre pixel being convolved (a closing low-pass filter). Convolving filters are quite fast, and statistical filters can be realised as well. But in the end, all filters are cheating and decrease the accuracy of the render. It is a built-in post production, like AA-ing is (which most probably uses FFT).

    I'm not sure GPU processing would be the solution, though I'm not the expert here. GPUs are used to render scenes for games very fast. But games are like movies, and there is not much time to behold a frame. A still image is different. It needs to be of much higher quality.

    Adaptive rendering would probably be a good idea, too. We're doing something similar already. A terrain farther away and partly obscured by haze doesn't need to be in the same resolution as a terrain in the foreground. The farther away a feeler ray has to travel to hit an object, the less precise it has to be. The same is true when tracing a light source. A direct, or almost direct, light source needs to be considered with more detail and precision than one that can be found only after several reflections.
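    Horo's adaptive idea - spend less precision where haze or distance hides the result - can be sketched as a simple ray-budget rule. The attenuation constant and ray counts below are invented for illustration; only the exponential (Beer-Lambert style) fall-off of visibility through haze is standard physics.

```python
import math

def adaptive_ray_budget(distance, haze_density=0.05, base_rays=256, min_rays=8):
    """Fewer feeler rays for hits that are far away or buried in haze:
    their contribution is attenuated anyway, so coarse sampling there
    goes unnoticed. Visibility decays exponentially with distance."""
    visibility = math.exp(-haze_density * distance)  # 1.0 near, -> 0 far
    return max(min_rays, int(base_rays * visibility))
```

    A foreground hit at distance 0 gets the full 256 rays; a hit 100 units deep into the haze gets only the minimum 8, trading invisible precision for render time.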

  • David BrinnenDavid Brinnen Posts: 3,136
    edited December 1969

    In other news... prompted by a question from Rashad on Bryce5.com with respect to caustics.

    Preliminary tests suggest:

    Caustic reflection = possible.
    Caustic transmission = not good.

    The optics for the TA feelers are amiss. My ongoing theory was that it was like "looking" out from the surface that is gathering the light in random directions (prodding here and there), but otherwise it was still like looking out from the camera (but at the object surface). But... the feelers respond to optics in a way that is similar to the rays traced by direct light, as opposed to the rays traced by the camera. And in this case, refraction only serves to dim the rays depending on the angle of interception with the transparent surface (and the refractive value). The rays are not diverted from their path. TA probes see the rendered environment without optical refraction. Sorry to be the bearer of bad tidings.
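    The difference David describes can be stated with Snell's law: a camera ray entering glass bends, while on this evidence the TA feeler keeps its direction and is merely dimmed. A toy comparison - the bending half is real physics, but the attenuation formula in the second function is a made-up stand-in for whatever Bryce actually does:

```python
import math

def refract_angle(incident, n1=1.0, n2=1.5):
    """Snell's law: n1*sin(i) = n2*sin(t). Returns the transmitted angle
    (radians from the surface normal), or None on total internal reflection."""
    s = (n1 / n2) * math.sin(incident)
    if abs(s) > 1.0:
        return None
    return math.asin(s)

def ta_feeler_through_glass(incident, transparency=0.9):
    """What the feelers appear to do instead: the direction is unchanged
    and the ray is only dimmed (this weighting is illustrative only)."""
    weight = transparency * math.cos(incident)  # grazing hits dim more
    return incident, weight
```

    At 40 degrees incidence the transmitted ray should bend to about 25 degrees inside the glass; the feeler carries straight on at 40 degrees, so it can never be converged into a caustic by a transparent surface.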

  • Rashad CarterRashad Carter Posts: 1,799
    edited December 1969

    David,

    Fine. By "sharp" I assume you mean it as the tool to define the exact scaling of the "virtual tiles?" Fine. The measurements would have to be real-world, such as centimeters, millimeters, and perhaps smaller. Some software provides this type of control already; in fact I think it works this way in Carrara, allowing user-defined scaling for the photon mapping. Surely rays are infinitely thin by themselves, in a sense not so dissimilar to real photons. It stands to reason that if you want to catch something you would open your hand wide and not ball it up into a fist. But alas, fists are all Bryce knows how to throw, so it would take a lot of them to gather much of anything.

    The defocused ray firing sounds good. Perhaps a step of pre-processing to accomplish that? For at some point the user would need to decide the parameters of the blurring as you noted already in your initial response.

    Horo,

    I hadn't considered the vast number of ready-made filters already available for solving such problems. Though the results might run counter to realism, I can imagine that high-pass convolving filters could produce some super cool results. Maybe even negative effects.

    More adaptive rendering is probably a very wise way to go about speeding things up. They must have some good schools over there where you're from!!

  • Rashad CarterRashad Carter Posts: 1,799
    edited December 1969

    David Brinnen said:
    In other news... prompted by a question from Rashad on Bryce5.com with respect to caustics.

    Preliminary tests suggest:

    Caustic reflection = possible.
    Caustic transmission = not good.

    The optics for the TA feelers are amiss. My ongoing theory was that it was like "looking" out from the surface that is gathering the light in random directions (prodding here and there), but otherwise it was still like looking out from the camera (but at the object surface). But... the feelers respond to optics in a way that is similar to the rays traced by direct light, as opposed to the rays traced by the camera. And in this case, refraction only serves to dim the rays depending on the angle of interception with the transparent surface (and the refractive value). The rays are not diverted from their path. TA probes see the rendered environment without optical refraction. Sorry to be the bearer of bad tidings.

    That is unfortunate. This might be due to some problem or shortcut in the way refraction itself is generally handled, or it might be due to a switch being turned off. Does Blurry Transmissions help any? Or does it hurt? Actually, it probably won't have much if any effect. I wonder if there is any sort of ticker within the inner workings that would allow TA rays to have their paths diverted. So far there are no light rays in Bryce that respond as they should to caustic transmissions, to my knowledge, so it is no surprise that TA rays don't conform either.

    Well, I'll take caustic reflections for now. It's a start.

  • HoroHoro Posts: 10,636
    edited December 1969

    Rashad Carter said:
    Horo,

    I hadn't considered the vast number of ready-made filters already available for solving such problems. Though the results might run counter to realism, I can imagine that high-pass convolving filters could produce some super cool results. Maybe even negative effects.

    More adaptive rendering is probably a very wise way to go about speeding things up. They must have some good schools over there where you're from!!

    Right, that's why I said any filtering would be built-in post production - like the hated gamma correction.

    You're again right that some super cool effects can be produced with convolving filters. The old render below shows an HDRI backdrop I acquired with a mirror ball at the time and is blurred. I used a diagonal Sobel convolving filter in HDRShop (Banterle's Plug-ins My Filter) and promptly got negative values. Bryce cannot show them but I made all values absolute in HDRShop.

    Well, I don't know about the schools. I've read some books. Then I wrote a text book about CCD in amateur astronomy and programmed a set of graphic tools long before there was a Photoshop, including cross convolving filters (in assembler for speed - source code on my website). The pictures I got were already 12 bit per pixel and the display on my computer had only 4 (EGA). It took nearly 20 years until I understood that I had experimented with something like high dynamic range imaging and effect filtering. Hence my interest in the topic.

    Adaptive rendering isn't my idea. I've read about the possibility quite a while ago. I don't know whether it is feasible, but it seemed to be a good idea at the time, and I think it still is, if it can be implemented.

    sv.jpg
    800 x 400 - 168K
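    For the curious, Horo's Sobel experiment is easy to reproduce in miniature: a diagonal Sobel kernel has positive weights on one side and negative on the other, so convolving an edge genuinely produces negative pixel values - which is why the result had to be made absolute before Bryce could display it. A small self-contained sketch (not HDRShop's code):

```python
def convolve(img, kernel):
    """Valid-mode 2D convolution of a grayscale image (list of rows)."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(len(img) - kh + 1):
        row = []
        for x in range(len(img[0]) - kw + 1):
            row.append(sum(kernel[j][i] * img[y + j][x + i]
                           for j in range(kh) for i in range(kw)))
        out.append(row)
    return out

# Diagonal Sobel kernel: opposite signs on either side of a diagonal
# edge, so the filtered output swings negative.
DIAGONAL_SOBEL = [[ 0,  1, 2],
                  [-1,  0, 1],
                  [-2, -1, 0]]

def absolute(img):
    """Fold negative filter output back into displayable range, as
    making the values absolute in HDRShop does."""
    return [[abs(v) for v in row] for row in img]
```

    Feeding it an image with a diagonal edge yields negative values on one side of the edge; taking the absolute value keeps the edge response while making every pixel displayable.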
  • David BrinnenDavid Brinnen Posts: 3,136
    edited December 1969

    Three steps forwards - two steps back. Some questions answered, new questions raised.

    Bryce 20 minute scene lighting project - TA caustics effect - a tutorial by David Brinnen

    Ctest6_promo1.jpg
    700 x 394 - 97K
  • HoroHoro Posts: 10,636
    edited December 1969

    I did some experimenting with TA and IBL. I took Hitomi, gave her some clothes, boots, and hair on her head. I used a ready-made pose but moved the head so that she would look at the beholder.

    The first render uses 3 TA optimised radials with a gel and the sun. It took 65 minutes to render.

    The second has the same setup with the addition of an HDRI rotated so that it matches the panorama on the gel of the radials. The HDRI gives additional light, the backdrop is kept rather dark to make Hitomi stand out. Then, there is a negative parallel light with infinite width for shadow capture. The ground plane has a bit of diffuse to adjust the deepness of the shadows. This seems to work fine through the TA optimised radials. Rendered with HDRI quality 128 and soft sun shadows. The rest of the settings are like for the TA only render. This one took 99 minutes to render.

    The shadows give it away that she lifted her behind a bit, something that is not obvious in the TA only render.

    Hitomi04.jpg
    800 x 600 - 51K
    Hitomi03.jpg
    800 x 600 - 44K
  • David BrinnenDavid Brinnen Posts: 3,136
    edited December 1969

    The lighting on the model looks excellent, Horo, not at all like we have come to expect from Bryce - like me you seem to have fallen foul of some blurred reflection oddness do you think? But yes, it does look very promising.

  • HoroHoro Posts: 10,636
    edited December 1969

    Thank you David. Yes, the blurry reflections - I did that on purpose, but it is not convincing. Given the way I set the HDRI, which has no visible sun (the camera was in the shadow), I could not have achieved the light on Hitomi as I have here.

    The caustics are also a very interesting effect. A reflecting surface acts as a mirror, but it does not also cast the light it reflects onto other surfaces, something we have been missing in Bryce. This caustic effect emulates that to some extent. However, caustics appear in nature on a curved transparent or reflecting surface, so it's not quite the same thing, but it's a promising start.

  • Peter FulfordPeter Fulford Posts: 1,325
    edited December 1969

    I literally have to thank PJF for the idea which I then took to another level and implemented it as a full feature.

    Have checked PM folder, Hansard, Honours list, and PayPal balance - but the parched and cracked tumbleweed of forgotten desolation is the same discovery always. The collar of my threadbare coat is turned up against the steady drizzle of decline as I trudge wearily past abandoned tanks of yore on the trail of tears heading back to Germany.
    Or, something...


    Just catching up with this enjoyable and fascinating thread. Really wish I had more time to play. Feels especially nostalgic watching David’s video when he turns the sky off and starts messing about in a small area under blackness.

    Rashad Carter said:
    Caustics would simply not be possible without Boost Light.

    Not as good, probably not even useable – but certainly possible. This Renderosity thread from nine years ago (!) features me pratting about with caustics using Bryce5 TA.
    http://www.renderosity.com/mod/forumpro/showthread.php?message_id=1445509&ebot_calc_page#message_1445509

    Of course, David B. working with today’s improved TA options is way, way ahead of my earlier fiddlings. But they do show the potential was there. “If only there could be more light,” I said at one point. Well now there is, and David (B for brilliant) has found a way of reducing its horrible side effect.


    You’ve all done very well…

  • LordHardDrivenLordHardDriven Posts: 937
    edited December 1969

    Horo said:
    Mark - we know that not everyone can purchase every product that comes in the store. That's why a few HDRIs are supplied with Bryce. I also have a few others on my website for free. There are so many HDRIs because each one has another quality to it, but this doesn't mean that only the one shown can be used. There will be differences and those make your render stand apart.

    As for the Stanford models: I downloaded them all a few weeks ago but couldn't find a reliable program to convert them. Most programs that claimed to be able to tried to install crap on my machine, and I had a busy time cleaning it up. Lost a full afternoon on this and finally gave up, frustrated.

    I understand that one doesn't have to use the same HDRI, and each HDRI is going to have a different impact, which can make something unique - desirable when trying to create one's own work of art. When following a tutorial, however, using a different HDRI means your progress can begin to deviate from the instruction and make it difficult to fully follow along, because your results are different. So I would prefer to use the exact same elements when possible. Like I said, though, fortunately one of the HDRIs included with Bryce seems to be based on the same HDRI David used, and so other than subtle differences my results were able to follow the tutorial.

    On the Stanford model issue, I have good news. It seems my pronouncement of them being unimportable into Bryce was premature. They are still unimportable directly into Bryce, even though they've been converted to OBJ using MeshLab. They are importable into Studio, however, and can be sent over the bridge to Bryce successfully. As you can see below, I have successfully imported the head of a statue named igea, a bust of someone called maxplanck, an object called a heptoroid, a human brain, the Stanford Bunny, a statue called Happy Buddha, a figurine of an armadillo-like creature, a golf ball and an angel sculpture that looks like it might have been part of a larger work of art. There were two models that still failed to import via the bridge, presumably because they were just too much for my system to handle. One was another angel statue called Lucy and one was a totem-like statue that went with the xyzrgb dragon.

    Apparently my initial assumption was wrong, because the models were so complex it seemed like they were taking an impossibly long time to import. I assumed my system had locked up, as it often does when something doesn't complete transferring via the bridge, when in fact it was still processing the import for conversion. I didn't think they would take that long, as they appeared to be less complex meshes than some of the Poser figures and objects that import into Bryce in seconds via the bridge. Apparently, though, the scanning process Stanford used is more interested in preserving the exact shape of the model accurately than in making the models practical on mainstream computers. This notion was supported by the fact that what I was able to import had much denser meshes than I was anticipating. The models that didn't import were the most detailed of the selection, so it stands to reason they might have been too much for Bryce to digest. I'll have to poke around MeshLab some more and see if there is a way to reduce the mesh at some point during the conversion process and give it another try. I also may have found the angel named Lucy in a more workable form, from a site boasting free 3D models which has mostly a lot of nice cars and planes but assorted other models too.

    Anyway, if you or David are interested, I'd be delighted and consider it a privilege to make these already Bryce-converted models available to you as a token of appreciation for all you do for the community. I can strip the materials, as they were added in Bryce just to make the image below more visually appealing. In fact I already did, and then saved each one in the Bryce default scene. All together in one scene, with materials already applied as seen below, it is around 167MB zipped (170MB unzipped). I imagine there is some file hosting service by which I could make this available? That would simplify things, being all in one. Individually zipped, the biggest is the Buddha at almost 49MB and the smallest is the Stanford Bunny at a little over 3MB, the rest somewhere in between.

    Also, I had to stop midway through composing this message (phone call), and during that time I played around with MeshLab a bit more (it's fairly intuitive with the help of mouse-overs) and found a way to reduce the mesh of the ones that didn't import. I made four attempts, each working off the previous one and reducing further, and each time it almost cut the file size in half. None are importing into Bryce directly, each causing the same unexpected failed creation error message as my previous attempts. So far I've only tried to bring the first reduction over to Bryce via the bridge, and it seems to be resulting in a hang like before. Hopefully the next reduction will work, because if you reduce it too much the mesh begins to lose its integrity.

    Stanford_Models.jpg
    800 x 446 - 50K
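    On the mesh-reduction front: what MeshLab's decimation filters do (and do far better) can be illustrated with the crudest possible scheme, vertex clustering - snap every vertex to a coarse grid, merge the vertices that land in the same cell, and drop the triangles that collapse. A pure-Python sketch of the principle, not MeshLab's algorithm:

```python
def decimate_by_clustering(vertices, faces, cell=1.0):
    """Crude mesh reduction by vertex clustering. `vertices` is a list of
    (x, y, z) tuples, `faces` a list of (a, b, c) index triples."""
    cell_of = {}   # grid cell -> new vertex index
    new_verts = []
    remap = []     # old vertex index -> new vertex index
    for (x, y, z) in vertices:
        key = (round(x / cell), round(y / cell), round(z / cell))
        if key not in cell_of:
            cell_of[key] = len(new_verts)
            new_verts.append((key[0] * cell, key[1] * cell, key[2] * cell))
        remap.append(cell_of[key])
    new_faces = []
    for (a, b, c) in faces:
        a2, b2, c2 = remap[a], remap[b], remap[c]
        if a2 != b2 and b2 != c2 and a2 != c2:  # skip collapsed triangles
            new_faces.append((a2, b2, c2))
    return new_verts, new_faces
```

    The coarser the grid cell, the smaller the file - and past some point the mesh visibly loses its integrity, exactly the trade-off described in the post above.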
This discussion has been closed.