Sponza atrium and best settings


#1

An evergreen test on this model by Marko Dabrovic. I played around with the path tracing settings, and these feel like a comfortable speed/quality trade-off:

  • max path length 6 to 8
  • num light samples 1
  • specular clamping 1000
  • secondary clamping 1000
    I stopped the render at 2000 spp (.28 msps).
    Can anyone suggest more tricks/hints? Thanks in advance
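For intuition, here is a minimal sketch of what those two clamp values do (plain Python, not FR’s actual code): any single specular or secondary sample brighter than the clamp is truncated, losing a little energy but killing fireflies at a given sample count.

```python
def clamp_contribution(radiance, clamp_value=1000.0):
    """Clamp an RGB radiance sample component-wise."""
    return [min(c, clamp_value) for c in radiance]

# A stray caustic sample of 5000 in the red channel is cut down:
print(clamp_contribution([5000.0, 2.0, 1.5]))  # [1000.0, 2.0, 1.5]
```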


#2

same settings + homogeneous medium


#3

Nice! In the test with the homogeneous medium, I can’t see much difference from the previous ones. Have you tried increasing the density scale of the medium?


#4

Thanks Nicola. I increased the density to .1 and changed the sun angle; this is the outcome. However, if you pull the camera back under the arcade, that haze becomes invisible (maybe because of the different AOV?!)

More tests with the same PT integrator settings, differing only in the max path length:
8 for the Chinatown
4 for the villa, to get the shadows a bit darker

On my monitor they have a cinematic look.


#5

Cool. You might also want to try increasing the Sigma_s parameter in the homogeneous medium. It determines the amount of scattering by the particles in the medium. You might be able to achieve some god-ray effects this way.


#6

Here is the god ray; Sponza is perfect for testing that:
sigma s .05
sigma a .000001
density scale .03
sampling density 50
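For intuition only, a Beer-Lambert sketch in Python (not FR’s actual implementation, and the 100-unit path length is a made-up figure): with the extinction coefficient sigma_a + sigma_s scaled by the density, the medium still transmits most of the light, which fits the subtle haze in the render.

```python
import math

def transmittance(sigma_a, sigma_s, density_scale, distance):
    # Beer-Lambert law: light falls off exponentially with the extinction
    # coefficient sigma_t = sigma_a + sigma_s, scaled by the density.
    sigma_t = (sigma_a + sigma_s) * density_scale
    return math.exp(-sigma_t * distance)

# With the settings above, over a hypothetical 100-unit path the medium
# still transmits about 86% of the light, a subtle haze rather than fog:
print(round(transmittance(0.000001, 0.05, 0.03, 100.0), 2))  # 0.86
```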

I would like a less grainy effect. How can I get that?


#7

The grainy effect is due to the scattering of the light in the medium; this effect takes a while to converge. There are plans to allow specifying more samples for this effect, but we don’t have a timeline yet.


#8

Thank you Nicola; a great addition to the engine indeed.
As for SSS, what is the most suitable scattering model to use? This is the test scene I’ll use.


#9

For SSS you should use a scattering model that is transparent, to let some light filter under the surface, such as the Obj Scattering, or you can create a custom one with a mix of Null Scattering and Glossy Diffuse. The medium should be a homogeneous medium with high density.


#10

Cool, it works! Playing around with opacity and transparency in an Obj Scattering material and using a homogeneous medium, you get the ears lighting up when lit from behind.
This one shows another head scan by ten24; if only we had camera animation in FR… :blush:


#11

Cool one! Also, you could use a texture on the opacity channel of the Obj Scattering. This way you’ll be able to have different amounts of SSS on different parts of the face.


#12

Thank you Nicola, this is my homework with a negative depth map (black is totally transparent). The medium density is set to 35, but with a good depth map you can do without it. The good news is the opacity color chip, which adds a bit of translucency to the diffuse map, while the transparency color affects the color of the SSS.
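In case it helps anyone reproduce this: a "negative" map simply flips each value of a normalized grayscale map, so white areas become black (transparent) and vice versa. A tiny sketch (the function name and the sample values are made up for illustration):

```python
def negative_map(gray):
    # Flip each value of a normalized grayscale map: white (1.0) becomes
    # black (0.0) and vice versa, so the map can drive opacity inversely.
    return [[1.0 - v for v in row] for row in gray]

# A 1x3 strip of depth values:
print(negative_map([[1.0, 0.6, 0.1]]))  # [[0.0, 0.4, 0.9]]
```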


#13

Nice! How long did it take to render?


#14

Only a couple of minutes on my i7; FR handles maps quickly indeed. The following is a comparison between the PT integrator and the Direct one (a 15 sec. render) to emphasize the SSS effect. By the way Nicola, does the bidirectional integrator have irradiance caching, as it’s supposed to?


#15

Nice! The bidirectional integrator doesn’t support participating media yet.


#16

A pity we can’t have some sort of lighting cache to speed up the bidirectional integrator. In the meantime I’ve been testing the direct one: the first image uses an HDRI + a big emissive plane (1 min. render), the second sun+sky + a big emissive plane (3 min. render).


#17

It looks like you are a pretty advanced user, did you have the chance to look at other advanced stuff, such as light path expressions?


#18

Sounds interesting from the lighting point of view. Assuming I need to create some anisotropy on hair shaders, or velvet as well, depending on the direction of view and light, how can I use these expressions to visualize and/or single out the effect?


#19

With LPEs you can’t single out based on light direction, but you can, for example, produce different output channels for light hitting a velvet shader or hair, or light coming from specific lights, and then compose them together in post-processing, tweaking each contribution. Here is a very good intro to LPEs:
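A minimal sketch of the compositing step described above: each channel would come from one LPE output of the renderer, then be recombined in post with per-channel gains. The channel names and the LPE strings in the comments are illustrative only (the common OSL-style syntax uses e.g. "C<RD>L" for direct diffuse; FR’s exact syntax may differ):

```python
def composite(channels, gains):
    # Weighted per-pixel sum of AOV channels (flat grayscale lists here);
    # each channel is one render output, each gain a post-processing tweak.
    out = [0.0] * len(channels[0])
    for chan, gain in zip(channels, gains):
        for i, v in enumerate(chan):
            out[i] += gain * v
    return out

direct_diffuse = [0.8, 0.2]  # e.g. an LPE like "C<RD>L" (illustrative)
hair_specular  = [0.1, 0.5]  # e.g. a channel restricted to the hair shader
print(composite([direct_diffuse, hair_specular], [1.0, 2.0]))  # [1.0, 1.2]
```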


#20

All right Nicola, I was asking how this remarkable advanced feature might help in tweaking some material behaviours (like anisotropy or Fresnel) and, generally speaking, contribute to achieving a physically based shading model.