SSS Fast Skin shader not compatible with render elements? This is a tad tricky to do manually, and for this reason the skin shader is supplied as what is called in mental ray parlance a "material phenomenon".
Well, suffice to say, you don't need to think about it. However, in some applications, notably Maya, this is different, and the skin shader comes shipped as a separate light-mapping node and shader node, and there are scripts set up to combine them. Similarly for XSI, there already exists a "split" solution.
But the poor Max users are left behind. This is because a "material phenomenon" can't easily be combined with other things. And due to the peculiar requirements of the skin shader, it will not work if it is a "child" of some other material, like a Blend material in 3ds Max or similar.
So it's a wee bit hard to do from the UI. What is not hard to do, however, is to write a different Phenomenon! As a matter of fact, it is so simple that I thought people would hop about doing exactly that left, right and center. And indeed, Jonas, mentioned above, did exactly that. Which is why his renders are so cool. It so happens I've had a modified version of the skin phenomenon cluttering my hard disk for some time now; I just haven't gotten around to posting it before.
So, without further ado, here it is. It's experimental. It's unofficial. It's unsupported. If it makes your computer explode, so be it.
Don't say I didn't warn you. Making no promises it even works. Take the file skinplus.
With that, I bid adieu. Merry X-mas and a happy new year! Posted by Master Zap at AM 7 comments:. Labels: blur , large renders , siggraph , singapore , skin shader. I sit here outside the Electronic Theatre at Siggraph Asia posting this, waiting to get in. The Electronic Theatre has always been a cherished Siggraph event, the highlight of the entire conference. Yet, this year, in Los Angeles, there wasn't one. In my opinion, and that of many others, this was a disastrous move. My good friend Mike Seymour from FXGuide (who was also nice enough to MC my Masterclass) has started an online petition to bring it back, so that when we go to New Orleans in , this gem of the computer graphics industry will be back in its proper forum, to spread the digital joy we all so desperately crave (oh, how poetic!).
So please, if you are a Siggraph nut, head over to and sign it. I'll now go in and enjoy the show. Posted by Master Zap at AM 1 comment:. Labels: electronic theatre , siggraph , singapore. I arrived alive and well in Singapore, almost immediately ran into Duncan Brinsmead from Autodesk (pictured here on the left), and since we had some free time we walked around Singapore, and even took a trip on the "Singapore Flyer", which is the Singaporean version of the London Eye thingy.
As it happens, both Duncan and myself are doing presentations at tonight's Autodesk User Group event. So for us, today is "Teh Big Setup Day" for these presentations, as well as the booth theatre stuff.
For most of the week, I'll be alternating between the Autodesk and NVidia booths. I also hear Autodesk is streaming their booth theatre presentations over the net this year see link below.
On Friday, December th, the 3-hour version of my "Miracles and Magic" mental ray masterclass will be held. It's in the convention center, but note that it's an Autodesk masterclass and hence requires a ticket beyond the Siggraph one. More information on the User Group event, Masterclasses, and Streaming. Posted by Master Zap at AM 6 comments:. Labels: appearances , duncan brinsmead , mental ray , siggraph , singapore.
As a precaution, I took the photo on the right of my bag, so if it gets lost, I can show people what it looks like ;) Smaller updates of my travels and whereabouts will be on Twitter, so follow me there, please. Also, there may be QiK updates now and then. Posted by Master Zap at PM 5 comments:. Labels: appearances , siggraph , singapore. On Thursday, December 4th, , Blur Studios will hold a presentation about their "Warhammer Online" cinematic, and how they used mental ray for all of it.
The event is in Hollywood, CA, and is free of charge. Details are here. Posted by Master Zap at PM 8 comments:. Labels: blur , conferences , mental ray. I think. This will really be my first visit to this corner of the world, so it will be interesting. Assuming the current planning remains, you can most likely find me around our mental images corner of the NVidia booth, as well as doing mental ray demos in the Autodesk booth.
I'll be tweeting as well, as usual. It was clear I had tried to cram way too much stuff to fit in the 1. But I did get through it all, at a breakneck pace. The audience looked somewhat like this: So, to be totally clear: if you are going to Siggraph Asia and are interested in the same topics, feel free to attend. The blurb for the class reads as follows: The course will focus on photo-realistic rendering in mental ray in the context of visual effects, as well as for product and architectural visualization.
The session will open with a quick introduction to photometric concepts followed by a practical guide to a linear workflow and why proper gamma correction is imperative. It will then move on to efficient techniques for achieving highly realistic results when combining CG and live action by combining existing tools together e.
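Since the blurb stresses a linear workflow and proper gamma correction, here is a minimal hypothetical sketch of why it matters (not material from the class; the 2.2 exponent is a common approximation of the sRGB curve, and the numbers are illustrative only):

```python
# Hypothetical sketch: lighting math is only physically meaningful on
# linear radiance values, not on gamma-encoded pixel values. The 2.2
# exponent approximates the sRGB transfer curve.

def to_linear(c, gamma=2.2):
    """Decode a gamma-encoded value in 0..1 to linear light."""
    return c ** gamma

def to_display(c, gamma=2.2):
    """Encode a linear value back for display."""
    return c ** (1.0 / gamma)

# Averaging a black and a white pixel (as antialiasing or blending does):
a, b = 0.0, 1.0  # gamma-encoded values

wrong = (a + b) / 2  # averaged in gamma space: 0.5, too dark
right = to_display((to_linear(a) + to_linear(b)) / 2)  # ~0.73

print(round(wrong, 2), round(right, 2))  # prints: 0.5 0.73
```

The averaged-in-gamma result renders visibly darker than the physically correct mix, which is exactly the artifact a linear workflow avoids.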
For more information about attending the Autodesk Master Classes, go here. Posted by Master Zap at AM 10 comments:. Labels: appearances , final gathering , gamma , glossy reflections , linear workflow , mental ray , production library , siggraph , singapore.
As usual, when the same question pops up in multiple places, I tend to turn this into a blog post. I will try to explain this here. Traditional computer graphics, with no indirect lighting of any kind, would by default look like this; here's a scene lit with a single shadow-casting directional light: You can't even tell that the yellowish rectangular thing at the bottom is a table that has legs! So a couple of tricks were introduced.
One ugly, unrealistic hack was "shadow density", i.e. a shadow that only partially blocks the light, as if some of the light passed straight through the occluder. So you could set things like "Shadow Color" and "Shadow Density", which you all understand is completely absurd and totally contrary to any form of sanity and logic. That can't happen outside of actually transparent objects. RULE 1: Never, ever, no matter how "old-school" your rendering techniques are, use "shadow color" or "shadow density" settings.
Trust me on this. Enter "Ambient" light. "But", these early CG people said, "outdoor shadows from the sun are slightly blue; shouldn't I set a shadow density and color to get my blue shadows?" The reason "shadows are blue" is because they are filled in by blue light from the sky. Now, our early CG pioneers understood this, of course, so rather than the horrendous hack of "shadow color", they introduced a nearly-as-horrendous hack: Ambient Light.
But the problem was, this "Ambient Light" was ever-present and uniform, and yielded a totally unrealistic and unsatisfactory result when used on its own, something like this: That looks about as horrible as the original: sure, you can see something in the shadows - but it's all totally uniformly lit.
The "legs" of our table can now be seen, but are these round legs, or flat oval things, or what? Do the legs touch the floor, or are they hovering above it? The purple teapot looks almost flying, because the shadow behind the green teapot is just a flat color. The problem here is that light is hitting every point uniformly, with no regard to the position or angle of the surfaces. But if we are trying to simulate "blueish light from the sky", then a point that is exposed to a lot of the sky will receive more light than a point that is behind a bunch of other objects that are blocking it.
Basically, they wrote a shader that figures out how "occluded" a certain point is, i.e. how much of its surroundings is blocked by other geometry. This "Occlusion" on its own looks like this: Combining "Ambient" and "Occlusion": Now, if you apply this to the Ambient term, you get this very much nicer image: See how beautifully the details resolve; the legs of the table now correctly read as "contacting" the floor; the shape of the legs can be perceived. The area between the teapots is properly darkened, and the teapots have nice contact shadows.
The above is how it should be done! A common mistake, however, is to composite the occlusion pass on top of the entire image. The result is that you get a deceptively-sort-of-okay-but-somewhat-wrong looking result like this: Notice how this has a "dirty" feel. This is because the occlusion pass pictured above was applied on top of the entire image, affecting the "ambient" light as well as the directional light. But this makes no sense; the "occlusion" of the directional light is already taken into account - that's what the shadow of the light is.
Ambient occlusion isn't called "ambient occlusion" for no reason; it's supposed to be applied to the "ambient" term, i.e. the uniform "sky" light, and nothing else. But in the above image you will see darkening on the floor in front of the objects, making it appear "dirty".
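In compositing terms, the distinction is just where the occlusion multiply happens. A tiny hypothetical sketch (all per-pixel values are invented for illustration):

```python
# Correct vs. incorrect use of an ambient occlusion pass. AO should
# attenuate only the ambient term; the direct light's own shadow already
# accounts for its occlusion. All values are invented per-pixel
# intensities for illustration only.

direct  = 0.8  # directional light contribution, shadowing included
ambient = 0.2  # uniform "sky" ambient term
ao      = 0.3  # occlusion here (0 = fully blocked, 1 = open sky)

correct = direct + ambient * ao    # AO darkens only the ambient fill
wrong   = (direct + ambient) * ao  # AO darkens everything: "dirty" look

print(round(correct, 2), round(wrong, 2))  # prints: 0.86 0.3
```

The wrong version darkens a directly lit pixel by 70 percent, which is precisely the "dirt-shadow" artifact described above.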
Similarly, there is a bad looking "dirt-shadow" of the teapot's spout on the front of the teapot. And so on. Don't do it. But isn't this all fake? Why are we doing it? Let's compare the result with actually calculating the omnipresent light and the bounce light, i.e. computing true indirect illumination with Final Gathering. And with true indirect bounces!
So this result begs the question: when we can so easily get the correct result, why would we still want to go the "fake" route? There are a couple of answers: FG is an interpolated technique.
This means that the indirect lighting is sub-sampled (calculated less often than for every pixel) and the values between those samples are interpolated. However, if you are doing an animation, and the result between two frames is slightly different (for whatever reason), this may - hypothetically - cause not one pixel to change, but a large area of pixels to change, i.e. every pixel that interpolates from that sample.
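A toy 1-D illustration of this (invented for the purpose, not actual Final Gathering code): shade at sparse sample points, interpolate between them, and count how many pixels move when a single sample changes between frames.

```python
# Toy 1-D illustration of why interpolated indirect lighting can
# flicker "macroscopically": shading is computed at sparse samples and
# linearly interpolated in between, so perturbing ONE sample moves a
# whole neighborhood of pixels. Not actual FG code.

def interpolate(samples, spacing):
    """Linearly interpolate sparse samples to a full row of pixels."""
    pixels = []
    for i in range(len(samples) - 1):
        a, b = samples[i], samples[i + 1]
        for k in range(spacing):
            pixels.append(a + (b - a) * k / spacing)
    pixels.append(samples[-1])
    return pixels

spacing = 8
frame1 = interpolate([0.5, 0.5, 0.5, 0.5], spacing)
frame2 = interpolate([0.5, 0.6, 0.5, 0.5], spacing)  # one sample changed

changed = sum(p1 != p2 for p1, p2 in zip(frame1, frame2))
print(changed)  # prints: 15 -- a whole region changed, not one pixel
```

One perturbed sample drags every pixel in its two neighboring segments along with it, which is the "macroscopic" change the text describes.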
The result, in a final animation, will be visible flicker. Visible, because it is "macroscopic", i.e. larger than a pixel. Contrast this with using the occlusion technique: it is calculated for every pixel (every sample, even), and hence any noise in this calculation is of sub-pixel size.
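For contrast, here is roughly what such a per-sample occlusion estimate might look like as a Monte Carlo sketch. The scene (a single sphere hovering over points on a ground plane), the sample count, and all numbers are invented for illustration; a real occlusion shader traces against the full scene.

```python
# Minimal Monte Carlo sketch of a per-sample occlusion estimate: shoot
# random rays over the hemisphere above a shading point and count how
# many hit nearby geometry. Toy scene, invented for illustration.
import math, random

def sample_hemisphere():
    """Uniform random direction on the upward (z > 0) hemisphere."""
    z = random.random()
    phi = 2 * math.pi * random.random()
    r = math.sqrt(max(0.0, 1 - z * z))
    return (r * math.cos(phi), r * math.sin(phi), z)

def ray_hits_sphere(origin, direction, center, radius):
    """Standard ray/sphere intersection test (boolean, unit direction)."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return False
    root = math.sqrt(disc)
    return (-b - root) / 2 > 1e-6 or (-b + root) / 2 > 1e-6

def occlusion(point, center, radius, n=1000):
    """Fraction of the hemisphere blocked by the sphere (0 = open sky)."""
    hits = sum(ray_hits_sphere(point, sample_hemisphere(), center, radius)
               for _ in range(n))
    return hits / n

random.seed(1)
# A floor point directly under the sphere vs. one well off to the side:
print(occlusion((0, 0, 0), (0, 0, 2), 1.0))  # noticeably occluded
print(occlusion((5, 0, 0), (0, 0, 2), 1.0))  # nearly open sky
```

Each shading point gets its own independent estimate, so any estimation noise stays at sub-pixel scale instead of smearing across an interpolated region.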