All color computation in mental ray is based on shaders. There are various types of shaders for different situations, such as material shaders to evaluate the material properties of a surface, light shaders to evaluate the light-emitting properties of a light source, lens shaders to specify camera properties other than the default pinhole camera, and so on.
There are built-in shaders that support SOFTIMAGE, Wavefront, and Alias compatibility. Much of the power of mental ray comes from the ability to write custom shaders and link them dynamically into mental ray at runtime to replace some or all of the built-in shaders. Custom shaders are written in C, using the full language and library support available in C.
Shaders are written as C subroutines, stored in files with the extension ``.c''. To use these shaders in a scene, they must be dynamically linked into mental ray at runtime. mental ray accepts shaders in three forms:
The commands to create a DSO depend on the operating system type. To create a DSO named shader.so from a source file shader.c, use the following commands. Insert the -g command line option after -c to insert debugging information, or insert -O to compile an optimized version. On most systems -g and -O cannot be combined. Refer to the compiler documentation for details.
SGI machines search for DSO files whose name does not contain a ``/'' in all directories specified by the LD_LIBRARY_PATH environment variable. It contains a sequence of paths, separated by colons. If LD_LIBRARY_PATH is undefined, the directories /usr/lib, /lib, /lib/cmplrs/cc, and /usr/lib/cmplrs/cc are searched. If LD_LIBRARY_PATH is set by the user, it is very important to always include these default directories because otherwise the standard Unix libraries will no longer be found, and the shell will be unable to start virtually all utilities and applications. This can be fixed by setting LD_LIBRARY_PATH correctly, or by exiting the shell and starting another one. For example, if all .so files to be linked are in /tmp, the following command at a shell prompt before starting mental ray or the application containing mental ray will make mental ray search /tmp:
setenv LD_LIBRARY_PATH /usr/lib:/lib:/lib/cmplrs/cc:/usr/lib/cmplrs/cc:/tmp

LD_LIBRARY_PATH=/usr/lib:/lib:/lib/cmplrs/cc:/usr/lib/cmplrs/cc:/tmp
export LD_LIBRARY_PATH
The first line applies to the csh and tcsh shells; the other two lines apply to the sh and bash shells. Slaves started from the /usr/etc/inetd.conf file should be started with a matching LD_LIBRARY_PATH by using the following line as the last column of the respective /usr/etc/inetd.conf entry:
env LD_LIBRARY_PATH=/usr/lib:...:/tmp /path/rayslave
It is recommended that .so files are given with a ``/'' in the name to avoid having to change LD_LIBRARY_PATH. For example, to link a shared object ``myshader.so'' in the current directory, give the name as ``./myshader.so''.
Note that source code (.c extension) is normally portable, unless nonportable system features (such as fsqrt on SGIs) are used. This means that the shader will run on all other vendors' systems unchanged. Object files (.o extension) and DSOs (.so extension) do not have this advantage; they must be compiled separately for each platform and, usually, for each major operating system release. For example, a Hewlett-Packard object file will not run on an SGI system, and an SGI IRIX 4.x object file cannot be used on an IRIX 5.x system, and vice versa.
On SGI systems, a shader can be debugged after it has been called for the first time, which attaches it to the program and makes its symbols available for the debugger. It must have been compiled with the -g compiler option. On non-SGI systems, debugging shaders is, unfortunately, difficult. The reason is that most debuggers cannot deal with parts of a program that have been dynamically linked. In general, the debugger will refuse to set breakpoints in dynamically linked shaders, and will step over calls to these shaders as if they were a single operating system call. Some vendors are working on fixing these problems, but at this time the only option on non-SGI systems is using printf or mi_debug statements in the shader sources. Note that when using printf, you must include <stdio.h>, or mental ray will crash.
Internal space is the coordinate system mental ray uses to present intersection points and other points and vectors to shaders. All points and vectors in the state except bump basis vectors (which are in object space) are presented in internal space, namely org, dir, point, normal, and normal_geom. The actual meaning of internal space is left undefined; it varies between different versions of mental ray. A shader may not assume that internal space is identical to camera space, even though this was true in versions of mental ray prior to 1.9.
World space is the coordinate system in which modeling and animation takes place.
Object space is a coordinate system relative to the object's origin. The modeler that created the scene defines the object's origin; for example, the SOFTIMAGE translator uses the center of the bounding box of the object as the object origin.
Camera space is a coordinate system in which the camera is at the coordinate origin (0, 0, 0) with an up vector of (0, 1, 0) and points down the negative Z axis.
In addition to these 3D coordinate spaces, raster space is a two-dimensional pixel location on the screen, bounded by (0, 0) in the lower left corner of the image and the rendered image resolution. The center of the pixel in the lower left corner of raster space has the coordinate (0.5, 0.5).
Screen space is defined such that (-1, -1/a) is in the lower left corner of the screen and (1, 1/a) is in the upper right, where a is the aspect ratio of the screen.
Most shaders never need to transform between spaces. Texture shaders frequently need to operate in object space; for example, in order to apply bump basis vectors to state->normal, the normal must be transformed to object space before the bump basis vectors are applied, and back to internal space before the result is passed to any mental ray function such as mi_trace_reflect. mental ray offers twelve functions to convert points and vectors between coordinate spaces:
(see mi_point_to_world) (see mi_point_to_camera) (see mi_point_to_object) (see mi_point_from_world) (see mi_point_from_camera) (see mi_point_from_object) (see mi_vector_to_world) (see mi_vector_to_camera) (see mi_vector_to_object) (see mi_vector_from_world) (see mi_vector_from_camera) (see mi_vector_from_object)
function                        operation
mi_point_to_world(s,pr,p)       internal point to world space
mi_point_to_camera(s,pr,p)      internal point to camera space
mi_point_to_object(s,pr,p)      internal point to object space
mi_point_from_world(s,pr,p)     world point to internal space
mi_point_from_camera(s,pr,p)    camera point to internal space
mi_point_from_object(s,pr,p)    object point to internal space
mi_vector_to_world(s,vr,v)      internal vector to world space
mi_vector_to_camera(s,vr,v)     internal vector to camera space
mi_vector_to_object(s,vr,v)     internal vector to object space
mi_vector_from_world(s,vr,v)    world vector to internal space
mi_vector_from_camera(s,vr,v)   camera vector to internal space
mi_vector_from_object(s,vr,v)   object vector to internal space
Point and vector transformations are similar, except that the vector versions ignore the translation part of the matrix. The length of vectors is preserved only if the transformation matrix does not scale. The mi_point_transform and mi_vector_transform functions are also available to transform points and vectors between arbitrary coordinate systems given by a transformation matrix.
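For illustration, here is a minimal sketch of how a shader might use these functions; the parameter paras->center is a hypothetical miVector shader parameter assumed to be given in world space:

miVector world_point, internal_point;

/* express the intersection point in world space, e.g. for a
 * world-aligned procedural pattern */
mi_point_to_world(state, &world_point, &state->point);

/* convert a world-space point from the (hypothetical) shader
 * parameters back to internal space before using it with other
 * mental ray functions */
mi_point_from_world(state, &internal_point, &paras->center);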
There are five types of shaders, all of which can be substituted by user-written shaders:
The annotations set in italics are numbered; the events described happen in the sequence given by the numbers.
Since material shaders may do inside/outside calculations based on the surface normal or the parent state chain (see below), the volume shaders are marked (1) and (2), depending on whether the volume shader was left by A or by T/D in the refraction volume field of the state. The default refraction volume shader is the one found in the material definition, or the standard volume shader if the material defines no volume shader. For details on choosing volume shaders, see the section on writing material and volume shaders. Note that the volume shaders in this diagram are called immediately after the material shader returns.
The next diagram depicts the situation when the material shader at the intersection point M requests a light ray from the light source at L, by calling a function such as mi_sample_light. This results in the light shader of L being called. No intersection testing is done at this point. Intersection testing takes place when shadows are enabled and the light shader casts shadow rays (see shadow ray) from the light source L to the illuminated point M. For each obscuring object (A and B), a shadow ray is generated with the origin L and the intersection point A or B, and the shadow shaders of objects A and B are called to modify the light emitted by the light source based on the transparency attributes of the obscuring object. Note that no shadow ray is generated for the segment from B to M because no other obscuring object whose shadow shader could be called exists. Note also that although shadow rays always go from the light source towards the illuminated point, the order in which the shadow shaders are called is not defined unless the shadow_sort option is set in the view; here, steps 4 and 5 may be reversed if it is not. Two shadow rays are cast, even though the light shader has called trace_shadow only once.
The remainder of this chapter describes how to write all types of shaders. First, the concepts of the ray tracing state and of parameter passing common to all shaders are presented, followed by a detailed discussion of each type of shader.
Every shader needs to access information about the current state of mental ray, and information about the intersection that led to the shader call. This information is stored in a single structure known as the state. Not all information in the state is of interest or defined for all shaders; for example, lens shaders are called before an intersection is done and hence have no information such as the intersection point or the normal there. The state, and everything else needed to write shaders, is defined in mi_shader.h, which must be included by all shader source files. It is recommended to name the formal state parameter of a shader state, because several convenience macros provided in mi_shader.h that require access to the state rely on this name.
Before a shader is called, mental ray prepares a new state structure that provides global information to the shader. This state may be the same data structure that was used in the previous call (this is the case for shaders that modify another shader's result, like lens, shadow, and volume shaders); or it may be a new state structure that is a copy of the calling shader's state with some state variables changed (this is done if a secondary ray is cast with one of the tracing functions provided by mental ray). For example, if a material shader that is using state A casts a reflected ray, which hits another object and causes that object's material shader to be called with state B, state B will be a copy of state A except for the ray and intersection information, which will be different in states A and B. State A is said to be the parent of state B. The state contains a parent pointer that allows sub-shaders to access the state of parent shaders. If a volume shader is called after the material shader, the volume shader modifies the color calculated by the material shader, and gets the same state as the material shader, instead of a fresh copy.
This means that it is possible to pass information from one shader to another in the call tree for a primary ray, by one of two methods: either the parent (the caller) changes its own state, which will be inherited by the child, or the child follows the parent pointer. The state contains a user pointer in which a parent can store the address of a local data structure, for passing it to sub-shaders. Since every sub-shader inherits this pointer, it may access information provided by its parent. A typical application of this is inside/outside calculations performed by material shaders, which need to know whether the ray is inside a closed object in order to interpret parameters such as the index of refraction.
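As a rough sketch of the first method, a material shader might publish a small structure through the user pointer before casting secondary rays; the structure and field names here are illustrative only, not part of any built-in shader:

struct mydata {                 /* illustrative data shared with sub-shaders */
    miBoolean   inside;         /* e.g. result of an inside/outside test */
};

struct mydata   data;
void           *old_user = state->user;   /* preserve the inherited pointer */

data.inside      = miTRUE;
state->user      = &data;
state->user_size = sizeof(data);
/* ... cast reflection/refraction rays here; child shaders can read
 * ((struct mydata *)state->user)->inside, or reach this state through
 * their state->parent pointer ... */
state->user      = old_user;               /* restore before returning */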
Note that the state can be used to pass information from one shader to sub-shaders that are lower in the call tree. Care must be taken not to destroy information in the state because some shaders (shadow, volume, and the first eye shader) re-use the state from the previous call. In particular, the state cannot be used to pass information from one primary (camera) ray to the next. Static variables can be used in the shader for this purpose, but care must be taken to avoid multiple access on multiprocessor shared-memory machines. On such a machine, all processors share the same set of static variables, and every change by one processor becomes immediately visible to all other processors, which may be executing the same shader at the same time. Locking facilities are available in mental ray to protect critical sections that may execute only once at any time.
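A hedged sketch of such a critical section, assuming that mi_lock and mi_unlock are the locking functions referred to in the locking section, and using the global_lock state variable listed below:

static miBoolean initialized = miFALSE;
static float     shared_table[256];

mi_lock(state->global_lock);
if (!initialized) {
    /* one-time initialization of data shared by all threads */
    int i;
    for (i = 0; i < 256; i++)
        shared_table[i] = i / 255.0;
    initialized = miTRUE;
}
mi_unlock(state->global_lock);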
Here is a complete list of state variables usable by shaders. Variables not listed here are for internal use only and should not be accessed or modified by shaders. The first table lists all state variables that remain unchanged for the duration of the frame:
type            name         content
int             version      shader interface version
miTag           camera_inst  tag of camera instance
miCamera *      camera       camera information
miRc_options *  options      general rendering options
The camera data structure pointed to by camera has the following fields. None of these may be written to by a shader.
type        name          content
miBoolean   orthographic  orthographic rendering
miScalar    focal         focal length of the camera
miScalar    aperture      aperture of the camera
miScalar    aspect        aspect ratio y/x
miRange     clip          Z clipping distances
int         x_resolution  image width in pixels
int         y_resolution  image height in pixels
int         window.xl     left image margin
int         window.yl     bottom image margin
int         window.xh     right image margin
int         window.yh     top image margin
miTag       volume        view volume (atmosphere)
miTag       environment   view environment shader
int         frame         frame number
float       frame_time    frame time in seconds
The options data structure pointed to by options has the following format. The options structure may also not be written to by shaders.
type                    name              content
miBoolean               shadow            shadow casting turned on?
miBoolean               trace             ray tracing turned on?
miBoolean               scanline          scanline mode turned on?
miBoolean               shadow_sort       sort shadow shader calls?
miBoolean               contour           contours turned on?
miBoolean               motion            motion blur turned on?
enum miRc_sampling      sampling          image sampling mode
enum miRc_filter        filter            nonlocal sampling filter
enum miRc_acceleration  acceleration      ray tracing algorithm
enum miRc_face          face              front, back, or both faces
enum miRc_field         field             odd, even, or both fields
int                     reflection_depth  max reflection trace depth
int                     refraction_depth  max refraction trace depth
int                     trace_depth       max combined trace depth
The state variables in the next table describe an eye (primary) ray. There is one eye ray for every sample that contributes to a pixel in the output image. If a material shader that evaluates a material hit by a primary ray casts secondary reflection, refraction, transparency, light, or shadow rays, all shaders called as a consequence will inherit these variables unmodified:
type               name         content
miScalar           raster_x     X coordinate of image pixel
miScalar           raster_y     Y coordinate of image pixel
struct miFunction  shader       current shader
miLock             global_lock  lock shared by all shaders (see locking)
Whenever a ray is cast the following state variables are set to describe the ray:
type        name              content
miState *   parent            state of parent shader
int         type              type of ray: reflect, light...
miBoolean   contour           set in contour-line mode
miBoolean   scanline          from scanline algorithm
void *      cache             RC intersection cache
miVector    org               start point of the ray
miVector    dir               direction of the ray
miScalar    time              shutter interval time
miTag       volume            volume shader of primitive
miTag       environment       environment shader
int         reflection_level  current reflection ray depth
int         refraction_level  current refraction ray depth
The variables in the next table are closely related to the previous. They describe the intersection of the ray with an object, and give information about that object and how it was hit.
type          name               content
miTag         refraction_volume  volume shader for refraction
unsigned int  label              object label for label file
miTag         instance           instance of object
miTag         light_instance     instance of light
miScalar [4]  bary               barycentric coordinates
miVector      point              intersection (ray end) point
miVector      normal             interpolated normal at point
miVector      normal_geom        geometry normal at point
miBoolean     inv_normal         true if normals were inverted
miScalar      dot_nd             dot product of normal and dir
miScalar      dist               length of the ray
int           pri                identifies hit primitive
miScalar      shadow_tol         safe zone against self-shadows
miScalar      ior                index of refraction of medium
miScalar      ior_in             index of refraction of previous medium
(see texture) The following table is an extension to the previous. These variables give information about the intersection point for texture mapping. They are defined when the ray has hit a textured object:
type        name         content
miVector *  tex_list     list of texture coordinates
miVector *  bump_x_list  list of X bump basis vectors
miVector *  bump_y_list  list of Y bump basis vectors
miVector    tex          texture coord (tex shaders)
miVector    motion       interpolated motion vector
miVector    u_deriv      surface U derivative
miVector    v_deriv      surface V derivative
Finally, the user field allows a shader to store a pointer to an arbitrary data structure in the state. Subsequent shaders called as a result of operations done by this shader (such as casting a reflection ray or evaluating a light or a texture) inherit the pointer and can read and write this shader's local data. Sub-shaders can also find other parents' user data by following the state parent pointers, see above. With this method, extra parameters can be provided to, and extra return values received from, sub-shaders. The user variables are initialized to 0.
type          name       content
void *        user       user data pointer
int           user_size  user data size (optional)
miFunction *  shader     shader data structure
The shader pointer can be used to access the shader parameters, as state->shader->parameters. This is redundant for the current shader because the parameters are also passed as the third shader argument, but it can be used to find a parent shader's parameters. For example, the SOFTIMAGE material shader uses this to perform inside/outside calculations.
In addition to the state variables that are provided by mental ray and are shared by all shaders, every shader has user parameters. In the .mi scene file, shader references look much like a function call: the shader name is given along with a list of parameters. Every shader call may have a different list of parameters. mental ray does not restrict or predefine the number and types of user parameters; any kind of information may be passed to the shader. Typical examples for user parameters are ambient, diffuse, and specular colors for material shaders, attenuation parameters for light shaders, and so on. An empty parameter list in a shader call (as opposed to a shader declaration) has a special meaning; see the note at the end of this chapter.
In this manual, the term ``parameters'' refers to shader parameters in the .mi scene file; the term ``arguments'' is used for arguments to C functions.
Shaders need both state variables and user parameters. Generally, variables that are computed by mental ray, or whose interpretation is otherwise known to mental ray, and that are useful to different types or instances of shaders are found in state variables. Variables that are specific to a shader, and that may change for each instance of the shader, are found in user parameters. mental ray does not access or compute user parameters in any way; it merely passes them from the .mi file to the shader when it is invoked.
To interpret these parameters in the .mi file, mental ray needs a declaration of parameter names and types that is equivalent to the C struct that the shader later uses to access the parameters. The declaration in the .mi file must be exactly equivalent to the C struct, or the shader will mis-interpret the parameter data structure constructed by mental ray. This means that three parts are needed to write a shader: the C source of the shader, the C parameter struct, and the .mi declaration. The latter is normally stored in a separate file that is included into the .mi scene file using a $include statement.
Every .mi declaration has the following form:
declare "shadername" ( type "parametername", type "parametername", ... type "parametername" )
It is recommended that shadername and parametername are enclosed in double quotes to disambiguate them from reserved keywords and to allow special characters such as punctuation marks.
The declaration gives the name of the shader to declare, which is the name of the C function and the name used when the shader is called, followed by a list of parameters with types. Names are normally quoted to disambiguate them from keywords reserved by mental ray. Commas separate parameter declarations. The following types are supported:
declare "my_material" ( color "ambient", color "diffuse", color "specular", scalar "shiny", scalar "reflect", scalar "transparency", scalar "ior", vector texture "bump", array light "lights" )
If there is only one array, there is a small efficiency advantage in listing it last. The material shader declared in this example can be used in a material statement like this:
material "mat1" "my_material" ( "specular" 1.0 1.0 1.0, "ambient" 0.3 0.3 0.0, "diffuse" 0.7 0.7 0.0, "shiny" 50.0, "bump" "tex1", "lights" [ "light1", "light2", "light3" ], "reflect" 0.5 ) end material
Note that the parameters can be defined in any order, and that parameters can be omitted. This example assumes that the texture tex1 and the three lights have been defined prior to this material definition. Again, be sure to use the names of the textures and lights, not the names of the texture and light shaders. All names in the above two examples were written as strings enclosed in double quotes to disambiguate names from reserved keywords, and to allow special characters in the names that would otherwise be illegal.
When the shader my_material is called, its third argument will be a pointer to a data structure built by mental ray from the declaration and the actual parameters in the .mi file. In order for the C source of the shader to access the parameters, it needs an equivalent declaration in C syntax that must agree exactly with the .mi declaration. The type names can be translated according to the following table:
.mi syntax      mental ray 1.8 syntax   mental ray 1.9 syntax
boolean         int                     miBoolean
integer         int                     miInteger
scalar          float                   miScalar
vector          Vector                  miVector
transform       Trans                   miMatrix
color           Color                   miColor
color texture   Texture *               miTag
scalar texture  Texture *               miTag
vector texture  Texture *               miTag
light           Light *                 miTag
string          char *                  N/A
struct          struct                  struct
It is strongly recommended to use the same parameter names in the C declaration as in the .mi declaration.
Arrays are more complicated than the types in this table because the size of the array is not known at declaration time. In mental ray 1.8 syntax, a parameter declared with the array keyword in the .mi file must be declared in C as a pointer to the appropriate type, followed by an integer with the same name but with n_ prepended. mental ray stores the array elements in memory referenced by the pointer, and the number of elements in the integer. If the array is empty, both the pointer and the integer will be 0.
In mental ray 1.9, the C declaration consists of an integer start index whose name is the array name prefixed with i_, an integer array size prefixed with n_, and the array itself, declared as a pointer. (Future versions of mental ray will change this pointer to an array of size [1]; see below.) mental ray will allocate the structure as large as required by the actual array size at call time. To access array element i in the range 0 ... n_array, the C expression array[i + i_array] must be used. This expression allows mental ray 1.9 to store the user parameters in virtual shared memory regardless of the base address of the user parameter structure, which is different on every host on the network.
For the above example .mi declaration, the equivalent C structure declaration using mental ray types looks like this:
struct my_material {
    Color       ambient;
    Color       diffuse;
    Color       specular;
    miScalar    shiny;
    miScalar    reflect;
    miScalar    transparency;
    miScalar    ior;
    Texture     *bump;
    Light       **lights;
    int         n_lights;
};
while the equivalent declaration using mental ray 1.9 types is:
struct my_material {
    miColor     ambient;
    miColor     diffuse;
    miColor     specular;
    miScalar    shiny;
    miScalar    reflect;
    miScalar    transparency;
    miScalar    ior;
    miTag       bump;
    int         i_lights;
    int         n_lights;
    miTag       *lights;
};
Note that here the order of structure members must match the order in the .mi declaration exactly. For example, suppose a shader has a .mi declaration containing an array of integers:
declare "myshader" ( array integer "list" )
The C declaration for the shader's parameters is:
struct myshader {
    int         i_list;
    int         n_list;
    miInteger   *list;
};
A shader that needs to operate on this array, for example printing all the array elements to stdout, would use a loop like this:
int i;
for (i=0; i < paras->n_list; i++)
    printf("%d\n", paras->list[paras->i_list + i]);
assuming that paras is the third shader argument and has type struct myshader *. (Note that printf requires that stdio.h is included.) The use of the i_list parameter may seem strange to C programmers, who may wish to hide it in a macro like
#define EL(array,nel) array[i_##array + nel]
This macro requires an ANSI C preprocessor; K&R preprocessors do not support the ## notation and should use /**/ instead. This macro is not predefined in mi_shader.h. The reason for this peculiar way of accessing arrays is improved performance. Future versions of mental ray will not use a pointer to the array, as in *list above, but an array of size 1, like list[1]. The array list[1] has space for only one element, because the actual number of array elements depends on the shader instance in the .mi file, which may list an arbitrary number of elements. Since future versions of mental ray (2.0 and later) are based on a virtual shared database that moves pieces of data such as shader parameters transparently from one host to another, no such piece of data may contain a pointer. Pointers would not be valid in another host's virtual address space. Adjusting the pointer on the other host is impractical because it would significantly reduce performance for some scenes, and would require knowledge of the structure layout for finding the pointers that may not be available in versions of mental ray not based on a .mi front-end parser. Therefore, the array is appended to the parameter structure, so the entire block can be moved to another host in a single network transfer. It is safe to access the first element of the array, because space for it is always allocated by declarations such as list[1], but the second is a problem because in a C declaration like
struct myshader {
    int         i_list;
    int         n_list;
    miInteger   list[1];
    miScalar    factor;
    miBoolean   bool;
};
the second element, list[1], occupies the same address as factor, and the third overlays bool. The situation becomes more complex for arrays of structures. The solution is to put the value of the first element after the last ``regular'' shader parameter, bool in this example, followed by the other element values. This means that the first few C array elements that overlay other parameters must be skipped. The i_ variable tells the shader writer exactly how many. In the example, i_list would be 3. Assuming the following shader instance, used as part of a material, texture, or some other definition requiring a shader call:
"myshader" ( "factor" 1.4142136, "list" [ 42, 123, 486921, 777 ], "bool" on )
mental ray would arrange the values in memory like this:
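With four-byte integers, booleans, and scalars, the layout implied by the description above works out roughly as follows (a sketch, not the original figure):

offset  contents
 0      i_list   = 3
 4      n_list   = 4
 8      list[0]                (declared placeholder slot, not used here)
12      factor   = 1.4142136   (occupies the list[1] position)
16      bool     = on          (occupies the list[2] position)
20      list[3]  = 42
24      list[4]  = 123
28      list[5]  = 486921
32      list[6]  = 777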
This diagram assumes that a miScalar uses four bytes; this may not be true in all versions. If it used eight bytes, four bytes of padding would be inserted before it by mental ray and the C compiler, and i_list would have the value 5.
There is one exception to shader parameter passing that can be a hard-to-find source of errors. If a shader is called with no parameters in the .mi file, using an opening parenthesis directly followed by a closing parenthesis, the shader will receive a zero-sized parameter block instead of a zero-filled parameter block. This is done to support an optimization for shadow shaders: a shadow shader called with no parameters is called with the parameters of the material shader. This reduces memory consumption because the shadow shader and the material shader almost always have the same parameters, which can be quite large. The problem occurs if a shader other than a shadow shader is called with no parameters because there is no material shader whose parameters could be substituted.
The following sections discuss the various types of shaders, and how to write custom shaders of those types. Basic concepts are introduced step by step, including supporting functions and state variables supplied by mental ray. All support functions are summarized at the end of this chapter. All descriptions apply to mental ray 1.9.
Material shaders are the primary type of shaders. All materials defined in the scene must at least define a material shader. Materials may also define other types of shaders, such as shadow, volume, and environment shaders, which are optional and of secondary importance.
When mental ray casts a visible ray, such as those cast by the camera (called primary rays) or those that are cast for reflections and refractions (collectively called secondary rays), mental ray determines the next object in the scene that is hit by that ray. This process is called intersection testing. For example, when a primary ray cast from the camera through the viewing plane's pixel (100,100) intersects with a yellow sphere, pixel (100, 100) in the output image will be painted yellow. (The actual process is slightly complicated by supersampling, which can cause more than one primary ray to contribute to a pixel.)
The core of mental ray has no concept of ``yellow''. This color is computed by the material shader attached to the sphere that was hit by the ray. mental ray records general information about the sphere object, such as point of intersection, normal vector, transformation matrix etc. in a data structure called the state, and calls the material shader attached to the object. More precisely, the material shader, along with its parameters (called user parameters), is part of the material, which is attached to the polygon or surface that forms the part of the object that was hit by the ray. Objects are usually built from multiple polygons and/or surfaces, each of which may have a different material.
The material shader uses the values provided by mental ray in the state and the variables provided by the .mi file in the user parameters to calculate the color of the object, and returns that color. In the above example, the material shader would return the color yellow. mental ray stores this color in the frame buffer and casts the next primary ray. Note that if the material shader has a bug that causes it to return infinity or NaN (Not a Number) in the result color, the infinity or NaN is stored as 1.0 in integer color frame buffers. This usually results in white pixels in the rendered image. This is true for subshaders such as texture shaders also.
If an appropriate output statement is given (see the scene description chapter), mental ray computes depth, label, and normal-vector frame buffers in addition to the standard color frame buffer. The color returned by the first-generation material shader is stored in the color frame buffer (unless a lens shader exists; lens shaders also have the option of modifying colors). The material shader can control what gets stored in the depth, label, and normal-vector frame buffers by storing appropriate values into state->point.z, state->label, and state->normal, respectively. Depth is the negative Z coordinate.
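For example, a material shader could tag its object in the label frame buffer with a fragment like the one below; the label parameter is hypothetical and not part of any built-in declaration:

/* store a per-material label for the label frame buffer */
state->label = paras->label;     /* hypothetical integer shader parameter */
/* state->point.z and state->normal already hold the values written to the
 * depth and normal-vector frame buffers; overwrite them here only if
 * different values are desired */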
Material shaders normally do quite complicated computations to arrive at the final color of a point on the object:
(see texture)
(see mi_call_shader) (see mi_sample_light) (see mi_reflection_dir) (see mi_refraction_dir) (see mi_trace_reflection) (see mi_trace_refraction) (see mi_trace_environment)
miBoolean my_material(
    miColor             *result,
    miState             *state,
    struct my_material  *paras)
{
    miVector    bump, dir;
    miColor     color;
    int         num;
    miTag       light;
    miScalar    ns;

    /*
     * bump map
     */
    state->tex = state->tex_list[0];
    (void)mi_call_shader((miColor *)&bump, miSHADER_TEXTURE,
                         state, paras->bump);
    if (bump.x != 0 || bump.y != 0) {
        mi_vector_to_object(state, &state->normal, &state->normal);
        state->normal.x += bump.x * state->bump_x_list->x
                         + bump.y * state->bump_y_list->x;
        state->normal.y += bump.x * state->bump_x_list->y
                         + bump.y * state->bump_y_list->y;
        state->normal.z += bump.x * state->bump_x_list->z
                         + bump.y * state->bump_y_list->z;
        mi_vector_from_object(state, &state->normal, &state->normal);
        mi_vector_normalize(&state->normal);
        state->dot_nd = mi_vector_dot(&state->normal, &state->dir);
    }

    /*
     * illumination
     */
    *result = paras->ambient;
    for (num=0; num < paras->n_lights; num++) {
        miColor    color, sum;
        miInteger  samples = 0;
        miScalar   dot_nl;

        sum.r = sum.g = sum.b = 0;
        light = paras->lights[paras->i_lights + num];
        while (mi_sample_light(&color, &dir, &dot_nl,
                               state, light, &samples)) {
            sum.r += dot_nl * paras->diffuse.r * color.r;
            sum.g += dot_nl * paras->diffuse.g * color.g;
            sum.b += dot_nl * paras->diffuse.b * color.b;
            ns = mi_phong_specular(paras->shiny, state, &dir);
            sum.r += ns * paras->specular.r * color.r;
            sum.g += ns * paras->specular.g * color.g;
            sum.b += ns * paras->specular.b * color.b;
        }
        if (samples) {
            result->r += sum.r / samples;
            result->g += sum.g / samples;
            result->b += sum.b / samples;
        }
    }
    result->a = 1;

    /*
     * reflections
     */
    if (paras->reflect > 0) {
        miScalar f = 1 - paras->reflect;
        result->r *= f;
        result->g *= f;
        result->b *= f;
        mi_reflection_dir(&dir, state);
        if (mi_trace_reflection (&color, state, &dir) ||
            mi_trace_environment(&color, state, &dir)) {
            result->r += paras->reflect * color.r;
            result->g += paras->reflect * color.g;
            result->b += paras->reflect * color.b;
        }
    }

    /*
     * refractions
     */
    if (paras->transparency > 0) {
        miScalar f = 1 - paras->transparency;
        result->r *= f;
        result->g *= f;
        result->b *= f;
        result->a  = f;
        if (mi_refraction_dir(&dir, state, 1.0, state->ior) &&
            (mi_trace_refraction (&color, state, &dir) ||
             mi_trace_environment(&color, state, &dir))) {
            result->r += paras->transparency * color.r;
            result->g += paras->transparency * color.g;
            result->b += paras->transparency * color.b;
            result->a += paras->transparency * color.a;
        }
    }
    return(miTRUE);
}
Four steps are required for computing the material color in this shader. First, the normal is perturbed by looking up a vector in the vector texture, and using the bump basis vectors to determine the orientation of the perturbation (the lookup always returns an XY vector). The second step loops over all light sources in the light array parameter, adding the contribution of each light according to the Phong equation. In the case of area lights, the light is sampled more than once, until the light sampling function is satisfied.
Finally, reflection and refraction rays are cast if the appropriate parameters are nonzero. In both cases, first the direction vector dir is computed using a built-in function, and a ray is cast in that direction. If either trace function returns miFALSE, indicating that no object was hit, the material's environment map that forms a sphere around the entire scene is evaluated. (Note that if the material has no environment map, the environment map in the state defaults to the environment shader from the view, if present.) When all computations are finished, the calculated color, including the alpha component, is returned in the result parameter. The shader returns miTRUE, indicating that the computation succeeded.
Texture shaders evaluate a texture and return a color. Textures can either be procedural, for example evaluating a 3D texture based on noise functions or calling other shaders, or they can do an image lookup. The .mi format provides different texture statements for these two types, one with a function call and one with a texture file name. Refer to the scene description for details.
Texture shaders are not first-class shaders; mental ray never calls one by itself and provides no special support for them. Texture shaders are called exclusively by other shaders. There are three ways of calling a texture shader from a material shader or other shaders: by simply calling the shader by name like any other C function, by using a built-in convenience function like mi_lookup_color_texture, or by a statement like
(see mi_call_shader)
mi_call_shader(result, miSHADER_TEXTURE, state, tag);
The tag argument references the texture function. The texture function is a data structure in the scene database that contains a reference to the C function itself, plus a user parameter block that is passed to the texture shader when it is called. All textures listed in the .mi scene description file are installed as texture shaders callable only with the mi_call_shader method, because only then can user parameters be passed. Although the texture shader could also be called directly with a statement such as
soft_color(result, state, &soft_color_paras);
the caller would have to write the required arguments into the user argument structure soft_color_paras itself; it would not have access to user parameters specified in the .mi file. Also, this call (see shader call tree) does not copy the state, as mi_call_shader does.
Unlike material shaders, texture shaders return a simple color or scalar or other return value. There are no lighting calculations or secondary rays. This greatly simplifies the task of changing a textured surface. For example, a simple texture shader that does a simple, non-antialiased lookup in a texture image could be written as:
(see mi_db_access) (see mi_img_get_color) (see mi_db_unpin)
miBoolean mytexture(
    register miColor    *result,
    register miState    *state,
    struct image_lookup *paras)
{
    miImg_image *image;
    int         xs, ys;

    image = mi_db_access(paras->texture);
    mi_texture_info(paras->texture, &xs, &ys, 0);
    mi_img_get_color(image, result, state->tex.x * (xs - 1),
                                    state->tex.y * (ys - 1));
    mi_db_unpin(paras->texture);
    return(miTRUE);
}
This shader assumes that the texture coordinate can be taken from state->tex, where the caller (usually a material shader) has stored it, probably by selecting a texture coordinate from state->tex_list. A more complicated shader that can properly anti-alias image textures with a simple box filter, could look like this:
(see tag type) (see mi_db_type) (see mi_call_shader) (see mi_db_access) (see mi_img_get_color) (see mi_db_unpin)
miBoolean mytexture2(
    register miColor    *result,
    register miState    *state,
    struct image_lookup *paras)
{
    miImg_image         *image;
    int                 xs, ys;
    miColor             col00, col01, col10, col11;
    register int        x, y;
    register miScalar   u, v, nu, nv;

    image = mi_db_access(paras->texture);
    mi_texture_info(paras->texture, &xs, &ys, 0);
    x = u = state->tex.x * (xs - 2);
    y = v = state->tex.y * (ys - 2);
    u -= x;
    v -= y;
    nu = 1 - u;
    nv = 1 - v;
    mi_img_get_color(image, &col00, x,   y);
    mi_img_get_color(image, &col01, x+1, y);
    mi_img_get_color(image, &col10, x,   y+1);
    mi_img_get_color(image, &col11, x+1, y+1);
    result->r = nv * (nu * col00.r + u * col01.r) +
                v  * (nu * col10.r + u * col11.r);
    result->g = nv * (nu * col00.g + u * col01.g) +
                v  * (nu * col10.g + u * col11.g);
    result->b = nv * (nu * col00.b + u * col01.b) +
                v  * (nu * col10.b + u * col11.b);
    result->a = nv * (nu * col00.a + u * col01.a) +
                v  * (nu * col10.a + u * col11.a);
    mi_db_unpin(paras->texture);
    return(miTRUE);
}
The implementation of the body of this shader is equivalent to the built-in mi_lookup_color_texture function if called with paras->texture, except that this function also recognizes if the texture is a shader, and calls mi_call_shader in this case.
This shader can further be extended by applying texture transformations to state->tex before it is used for the lookup, for example for rotated, scaled, repeating, or cropped textures. The shader may also decide that a scaled-down texture was missed, and return miFALSE. The material shader must then skip this texture if mi_call_shader returns miFALSE; the built-in SOFTIMAGE material shader does this.
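As a small sketch of such a transformation, a shader could repeat the texture a given number of times before the lookup; repeat_u and repeat_v are hypothetical scalar parameters, not part of the examples above:

state->tex.x *= paras->repeat_u;      /* hypothetical repeat factors */
state->tex.x -= (int)state->tex.x;    /* keep the fractional part    */
state->tex.y *= paras->repeat_v;
state->tex.y -= (int)state->tex.y;
/* ... then perform the image lookup as shown above ... */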
Both of the above shaders have user parameters that consist of a single texture. Textures always have type miTag. Image file textures are read in by the translator and provided as a tag.
Volume shaders may be attached to the view or to a material. They modify the color returned from an intersection point to account for the distance the ray traveled through a volume. The most common application for volume shaders is atmospheric fog effects; for example, a simple volume shader may simulate fog by fading the input color to white depending on the ray distance. By definition, the distance dist given in the state is 0.0 and the intersection point is undefined if the ray has infinite length.
(see shader call tree) Volume shaders are normally called in three situations. When a material shader returns, the volume shader that the material shader left in the state->volume variable is called, without copying the state, as if it had been called as the last operation of the material shader. Copying the state is not necessary because the volume shader does not return to the material shader, so it is not necessary to preserve any variables in the state.
Volume shaders are also called when a light shader has returned; in this case the volume shader state->volume is called once for the entire distance from the light source to the illuminated point (i.e., to the point that caused the material shader that sampled the light to be called). Some volume shaders may decide that they should not apply to such light rays; this can be done by returning immediately if the state->type variable is miRAY_LIGHT. Finally, volume shaders are called after an environment shader was called. Note that if a volume shader is called after the material, light, or other shader, the return value of that other shader is discarded and the return value of the volume shader is used. The reason is that a volume shader can substitute a non-black color even if the original shader has given up. Volume shaders return miFALSE if no light can pass through the given volume, and miTRUE if there is a non-black result color.
(see material shader) Material shaders have two separate state variables dealing with volumes, volume and refraction_volume. If the material shader casts a refraction or transparency ray, the tracing function will copy the refraction volume shader, if there is one, to the volume shader after copying the state. This means that the next intersection point finds the refraction volume in state->volume, which effectively means that once the ray has entered an object, that object's interior volume shader is used. However, the material shader is responsible for detecting when a refraction ray exits an object, and for overwriting state->refraction_volume with an appropriate outside volume shader, such as state->camera->volume, or a volume shader found by following the state->parent links.
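A minimal sketch of the exit case, reusing the ior parameter of the my_material example above; the back-face test on dot_nd is a simplifying assumption, not the inside/outside logic of the built-in shaders:

miVector dir;
miColor  color;

if (state->dot_nd > 0) {    /* back face hit: the refraction ray will leave */
    state->refraction_volume = state->camera->volume;
    /* note the reversed index-of-refraction arguments when leaving */
    if (mi_refraction_dir(&dir, state, paras->ior, 1.0) &&
        mi_trace_refraction(&color, state, &dir)) {
        /* use color */
    }
}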
Since volume shaders modify a color calculated by a previous material shader, environment shader, or light shader, they differ from these shaders in that they receive an input color in the result argument that they are expected to modify. A very simple fog volume shader could be written as:
miBoolean myfog(
    register miColor        *result,
    register miState        *state,
    register struct myfog   *paras)
{
    register miScalar fade;

    if (state->type == miRAY_LIGHT)
        return(miTRUE);
    fade = state->dist > paras->maxdist ? 1.0
                                        : state->dist / paras->maxdist;
    result->r = fade * paras->fogcolor.r + (1-fade) * result->r;
    result->g = fade * paras->fogcolor.g + (1-fade) * result->g;
    result->b = fade * paras->fogcolor.b + (1-fade) * result->b;
    result->a = fade * paras->fogcolor.a + (1-fade) * result->a;
    return(miTRUE);
}
This shader linearly fades the input color to paras->fogcolor (probably white) within paras->maxdist internal space units. (see atmosphere) Objects more distant are completely lost in fog. The length of the ray to be modified can be found in state->dist, its start point in state->org, and its end point in state->point. Because of the miRAY_LIGHT check, this example shader does not apply to light rays, so light from light sources can penetrate fog of any depth. In this case, the shader returns miTRUE anyway because the shader did not fail; it merely decided not to apply fog.
If this shader is attached to the view, the atmosphere surrounding the scene will contain fog. Every state->volume will inherit this view volume shader, until a refraction or transparency ray is cast. (see shader call tree) The ray will copy the material's volume shader, state->refraction_volume, if there is one, to state->volume, and the ray is now assumed to be in the object. If the material has no volume shader to be copied, the old volume shader will remain in place and will be inherited by subsequent rays.
Some volume shaders employ ray marching techniques to sample lights from empty space, to achieve effects such as visible light beams. Before such a shader calls mi_sample_light, it should store 0 in state->pri to inform mental ray that there is no primitive to illuminate, and to not attempt certain optimizations such as backface elimination. Shaders other than volume shaders may do this too, but must restore pri before returning.
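A very rough sketch of the idea follows; the fixed step count and the single hypothetical paras->light parameter are assumptions made only for this example, and no attenuation model is applied:

#define NSTEPS 10
int       i;
miInteger samples;
miScalar  t, dot_nl;
miColor   sum, col;
miVector  ldir;
miVector  orig_point = state->point;

state->pri = 0;                      /* no primitive at the sample points */
sum.r = sum.g = sum.b = 0;
for (i = 0; i < NSTEPS; i++) {
    t = state->dist * (i + 0.5) / NSTEPS;
    state->point.x = state->org.x + t * state->dir.x;
    state->point.y = state->org.y + t * state->dir.y;
    state->point.z = state->org.z + t * state->dir.z;
    samples = 0;
    while (mi_sample_light(&col, &ldir, &dot_nl, state,
                           paras->light, &samples)) {
        sum.r += col.r;  sum.g += col.g;  sum.b += col.b;
    }
}
state->point = orig_point;           /* restore the original intersection */
/* ... blend sum into *result, divided by NSTEPS and the sample counts ... */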
Environment shaders provide a color for rays that leave the scene entirely, and for rays that would exceed the trace depth limit. Environment shaders are called automatically by mental ray if a ray leaves the scene, but not when a ray exceeds the trace depth. This can be done by the shader that tried to cast the ray if the ray-tracing function returned miFALSE, by calling mi_trace_environment:
(see mi_reflection_dir) (see mi_trace_reflection) (see mi_trace_environment)
mi_reflection_dir(&dir, state);
if (mi_trace_reflection (&color, state, &dir) ||
    mi_trace_environment(&color, state, &dir))
        /* use the returned color */
This code fragment was taken from the example material shader in the section on materials above. If the mi_trace_reflection call fails, call mi_trace_environment; if that also fails, do not use the returned color. Environment shaders, like any other shader, may return miFALSE to inform the caller that the environment lookup failed.
In both the explicit case and the automatic case (when a ray cast by a function call such as mi_trace_reflection leaves the scene without intersecting with any object) mental ray calls the environment shader found in state->environment. In primary rays, this variable is initialized with the global environment shader in the view (also found in state->camera->environment). (see shader call tree) Subsequent material shaders get the environment defined in the material if present, or the view environment otherwise. Material shaders never inherit the environment from the parent shader, they always use the environment in the material or the view. All other types of shaders inherit the environment from the parent shader.
Here is an example environment shader that uses a texture that covers an infinite sphere around the scene:
miBoolean myenv(
    register miColor        *result,
    register miState        *state,
    register struct myenv   *paras)
{
    register miScalar theta;
    miVector          coord;

    theta = fabs(state->dir.z)*HUGE < fabs(state->dir.x)
                ? state->dir.x > 0 ? 1.5*M_PI : 0.5*M_PI
                : state->dir.z > 0
                    ? 1.0*M_PI + atan(state->dir.x / state->dir.z)
                    : 2.0*M_PI + atan(state->dir.x / state->dir.z);
    if (theta > 2 * M_PI)
        theta -= 2 * M_PI;
    coord.x = 1 - theta / (2 * M_PI);
    coord.y = 0.5 * (state->dir.y + 1.0);
    coord.z = 0;
    state->tex = coord;
    return(mi_call_shader(result, miSHADER_TEXTURE, state,
                          paras->texture));
}
This shader gets a single parameter in its user parameter structure, a miTag for a texture shader. The texture is evaluated by storing the texture coordinate in state->tex and calling the texture shader with mi_call_shader. For a description of texture shaders and how to call them, see the Texture section above.
Light shaders are called from other shaders by sampling a light using the mi_sample_light or mi_trace_light functions, which perform some calculations and then call the given light shader. mi_sample_light may also request to be called more than once if an area light source is to be sampled, at locations determined by the sampling algorithm selected with the -mc or -qmc command-line options. For an example of using mi_sample_light, see the section on material shaders above. mi_trace_light performs less exact shading for area lights, and is provided for backwards compatibility only.
The light shader computes the amount of light contributed by the light source to a previous intersection point, stored in state->point. The calculation may be based on the direction state->dir to that point, and the distance state->dist from the light source to that point. There may also be user parameters that specify directional and distance attenuation. Directional lights have no location; state->dist is undefined in this case.
Light shaders are also responsible for shadow casting. Shadows are computed by finding all objects that are in the path of the light from the light source to the illuminated intersection point. This is done in the light shader by casting ``shadow rays'' after the standard light color computation including attenuation is finished. Shadow rays are cast from the light source back towards the illuminated point, in the same direction as the light ray. Every time an occluding object is found, that object's shadow shader is called, if it has one, which reduces the amount of light based on the object's transparency and color. If an occluding object is found that has no shadow shader, it is assumed to be opaque, so no light from the light source can reach the illuminated point. For details on shadow shaders, see the next section.
Here is an example for a simple point light that supports no attenuation, but casts shadows:
(see mi_trace_shadow)
miBoolean mypoint(
    register miColor        *result,
    register miState        *state,
    register struct mypoint *paras)
{
    *result = paras->color;
    return(mi_trace_shadow(result, state));
}
The user parameters are assumed to contain the light color. The shadows are calculated simply by giving the shadow shaders of all occluding objects the chance to reduce the light from the light source, by calling mi_trace_shadow. The shader returns miTRUE if some light reaches the illuminated point.
The point light can be turned into a spot light by adding directional attenuation parameters for the inner and outer cones and a spot direction parameter to the user parameters, and by changing the shader to reduce the light intensity if the illuminated point falls between the inner and outer cones, and to turn the light off if it doesn't fall into the outer cone at all:
(see mi_trace_shadow)
miBoolean myspot(
    register miColor            *result,
    register miState            *state,
    register struct soft_light  *paras)
{
    register miScalar d, t;

    *result = paras->color;
    d = mi_vector_dot(&state->dir, &paras->direction);
    if (d <= 0)
        return(miFALSE);
    if (d < paras->outer)
        return(miFALSE);
    if (d < paras->inner) {
        t = (paras->outer - d) / (paras->outer - paras->inner);
        result->r *= t;
        result->g *= t;
        result->b *= t;
    }
    return(mi_trace_shadow(result, state));
}
Again, miFALSE is returned if no illumination takes place, and miTRUE otherwise. Note that none of these light shaders takes the normal at the illuminated point into account; the light shader is merely responsible for calculating the amount of light that reaches (see material shader) that point. The material shader (or other shader) that sampled the light must use the dot_nd value returned by mi_sample_light, and its own user parameters such as the diffuse color, to calculate the actual fraction of light reflected by the material.
As described in the previous section, light shaders may trace a shadow ray from the light source to the point to be illuminated. When this ray hits an occluding object, that object's shadow shader is called, if present. (If the object has no shadow shader, the object is assumed to block all light.) Shadow shaders accept an input color that is dimmed according to the transparency and color of the occluding object.
If there is more than one occluding object between the light source and the illuminated point, the order in which the shadow shaders of the occluding objects is called is undefined, unless the shadow_sort option in the view is turned on. Shadow shaders that rely on being called in the correct order, the one closest to the light source first, should check that state->shadow_sort is miTRUE, and abort with a fatal error message otherwise, telling the user to turn on shadow sorting.
If a new material shader is written, it is often necessary to also write a matching shadow shader. The shadow shader performs a subset of the calculations done in the material shader: it may evaluate textures and transparencies, but it will not sample lights and it will not cast rays. The shader writer can either write a separate shadow shader, or let the material shader double as shadow shader by building the scene such that the material shader appears twice in the material definition. SOFTIMAGE shaders take the latter approach. It relies on the material shader to omit all calculations that are necessary only in the material shader when called as a shadow shader. The shader can find out whether it is called as a material shader or as a shadow shader by checking if state->type is miRAY_SHADOW: if yes, this is a shadow shader. This sharing of shaders pays off only when the texture computations are very complicated, as is the case in SOFTIMAGE materials.
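A hedged sketch of such a check at the top of a material shader, reusing the transparency parameter from the my_material example above:

if (state->type == miRAY_SHADOW) {
    /* called as a shadow shader: only attenuate the incoming light */
    miScalar f = paras->transparency;
    result->r *= f;
    result->g *= f;
    result->b *= f;
    return(f > 0 ? miTRUE : miFALSE);
}
/* ... otherwise continue with the full material computation ... */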
The following shadow shader is a separate shader that attenuates the light that passes through the object based on two user parameters, the diffuse color and the transparency. Material shaders usually also have ambient and specular colors, but the best approach is to pass the diffuse color to shadow shaders because it describes the ``true color'' of the object best. Note that the scene can be arranged such that although the shadow shader is separate from the material shader, (see shader parameters) it still gets a copy of the material shader's user parameters so the shadow shader can access the ``true'' material parameters. In a .mi file, this is done by declaring the shadow shader with no parameters and naming none in the shadow statement in the material definition (just give ()). This sharing of parameters even if the shader itself is not shared can save duplicating a large set of parameters.
miBoolean myshadow(
    register miColor         *result,
    register miState         *state,
    register struct myshadow *paras)
{
    register miScalar opacity;
    register miScalar f, omf;

    opacity = 1 - paras->transp;
    if (opacity < 0.5) {
        f = 2 * opacity;
        result->r *= f * paras->diffuse.r;
        result->g *= f * paras->diffuse.g;
        result->b *= f * paras->diffuse.b;
    } else {
        f   = 2 * (opacity - 0.5);
        omf = 1 - f;
        result->r *= f + omf * paras->diffuse.r;
        result->g *= f + omf * paras->diffuse.g;
        result->b *= f + omf * paras->diffuse.b;
    }
    return(result->r != 0 || result->g != 0 || result->b != 0);
}
The org variable in the state always contains the position of the light source responsible for casting the shadow rays; the point state variable contains the point on the shadow-casting object. The dist state variable is the distance to the light source (except for directional lights, which have no origin).
Lens shaders are called for primary rays from the camera. The camera is normally a simple pinhole camera. A lens shader modifies the origin and direction of a primary ray from the camera. More than one lens shader may be attached to the camera; each modifies the origin and direction calculated by the previous one. By convention, all rays up to and including the one leaving the last lens are called ``primary rays''. The origin and direction input parameters can be found in the state, in the origin and dir variables. The outgoing ray is cast with (see mi_trace_eye) mi_trace_eye, whose return color may be modified before the shader itself returns. Lens shaders are called recursively; a call to mi_trace_eye will call the next lens shader if there is another one.
Here is a sample lens shader that implements a fish-eye lens:
#include <shader.h>
#include <math.h>

miBoolean fisheye(
    register miColor  *result,
    register miState  *state,
    register void     *paras)
{
    register miVector camdir, dir;
    register miScalar x, y, r, t;

    mi_vector_to_camera(state, &camdir, &state->dir);
    t = state->camera->focal / -camdir.z / (state->camera->aperture/2);
    x = t * camdir.x;
    y = t * camdir.y * state->camera->aspect;
    r = x * x + y * y;
    if (r < 1) {
        dir.x = camdir.x * r;
        dir.y = camdir.y * r;
        dir.z = -sqrt(1 - dir.x*dir.x - dir.y*dir.y);
        return(mi_trace_eye(result, state, &state->point, &dir));
    } else {
        result->r = result->g = result->b = result->a = 0;
        return(miFALSE);
    }
}
This shader does not take the image aspect ratio into account, and is not physically correct. It merely bends rays away from the camera axis depending on their angle to the camera axis. Rays that fall outside the circle that touches the image edges are set to black (note that alpha is also set to 0). The rays are bent according to the square of the angle, which approaches the physically correct deflection for small angles. This example shader has no user parameters, which is why the type of the paras parameter is void *.
Output shaders are functions that are run after rendering has finished. They modify the resulting image or images. Typical uses are output filters and compositing operations. Since rendering has completed, the state variables are not available in an output shader; an output shader uses a simple structure called miOutstate:
type             name              content
int              xres              image X resolution in pixels
int              yres              image Y resolution in pixels
miImg_image *    frame_rgba        RGBA color frame buffer
miImg_image *    frame_z           depth frame buffer
miImg_image *    frame_n           normal-vector frame buffer
miImg_image *    frame_label       label frame buffer
miCamera *       camera            camera
miRc_options *   options           options
miMatrix         camera_to_world   world transformation
miMatrix         world_to_camera   inverse world transformation
All frame buffers have the same resolution of xres * yres pixels. The four frame buffers are passed for use by the frame buffer access functions, mi_img_get_color, mi_img_put_color, etc. For each type of frame buffer, there are functions to retrieve and store a pixel value that accept the frame buffer pointer as their first argument. All output shaders must be declared like any other type of shader, and the same types of arguments can be declared. This includes textures and lights. Nonprocedural textures can be looked up using functions like mi_lookup_color_texture and mi_texture_info, and lights can be looked up with mi_light_info. Since rendering has completed, it is not possible to look up procedural textures or to use tracing functions such as mi_sample_light.
Output shaders are called with two arguments, the output shader state and the shader parameters. There is no result argument like for the other types of shaders; output shaders do not return a value. By convention, they should still be declared as miBoolean, although the return value is discarded by mental ray. Here is a typical output shader C declaration:
miBoolean my_output( miOutstate *state, struct my_output *paras)
The my_output parameter data structure is defined normally, matching the declaration of the my_output shader in the .mi file. Here is a simple output shader that depth-fades the rendered image towards total transparency: first, the C code is written:
#include <shader.h>

struct out_depthfade {
    miScalar near;                  /* no fade closer than this */
    miScalar far;                   /* farther objects disappear */
};

miBoolean out_depthfade(
    register miOutstate    *state,
    struct   out_depthfade *paras)
{
    register int x, y;
    miColor      color;
    miScalar     depth, fade;

    for (y=0; y < state->yres; y++)
        for (x=0; x < state->xres; x++) {
            mi_img_get_color(state->frame_rgba, &color, x, y);
            mi_img_get_depth(state->frame_z,    &depth, x, y);

            if (depth >= paras->far || depth == 0.0)
                color.r = color.g = color.b = color.a = 0;
            else if (depth > paras->near) {
                fade = (paras->far - depth) / (paras->far - paras->near);
                color.r *= fade;
                color.g *= fade;
                color.b *= fade;
                color.a *= fade;
            }
            mi_img_put_color(state->frame_rgba, &color, x, y);
        }
    return(miTRUE);
}
This shader is stored in a file out_depthfade.c and installed in the .mi file with a code statement and a declaration:
code "out_depthfade.c" declare "out_depthfade" (scalar "near", scalar "far")
This declaration should appear before the frame statement of the first frame using this shader. The shader is referenced in a output statement in the view:
view output "rgba,z" "out_depthfade" ("near" 10.0, "far" 100.0) output "pic" "filename.pic" min samples 0 max samples 0 ...
Note that the output shader statement appears before the output file statement. The output shader must get a chance to change the output image before it is written to the file filename.pic. It is possible to insert another file output statement before the output shader statement; in this case two files would be written, one with and one without depth fading.
Note also that the output shader has a type string "rgba,z". This string tells mental ray to render both an RGBA and a Z (depth) frame buffer. The RGBA buffer would have been rendered anyway because the file output statement requires it, but the depth buffer would not have been rendered without the z in the type string; in that case, all depth values returned by mi_img_get_depth would be 0.0.
The min samples parameter should be set to 0 or greater, because otherwise there might be fewer than one sample per pixel, leaving gaps in the depth frame buffer. mental ray interpolates only the color frame buffer to ``bridge'' unsampled pixels; depths, normals, and labels cannot be interpolated by their nature. Either way, this shader does not anti-alias very well because there is only one depth value per pixel.
The shader makes pixels totally transparent when their depth is 0.0, so that edges of objects that have no other object behind them fade correctly. By definition, mi_img_get_depth returns 0.0 for a position x, y if no object was hit at that pixel. This can happen on anti-aliased edges because the last subsample shot for that pixel may happen to miss the object, and only the last sample for a pixel is stored in the depth frame buffer.
mental ray 1.9 makes a range of functions available to shaders that can be used to access data, cast rays, look up images, and perform standard mathematical computations. The functions are grouped by the module that supplies them. The shader writer may also use C library functions, but it is very important to include <stdio.h> and <math.h> if printing functions such as printf or math functions such as sin are used. Not including these headers may cause rendering to abort at runtime even though the compiler did not complain. All shaders must include the standard mental ray header file, mi_shader.h.
Here is a summary of functions provided by mental ray:
RC Functions
type        name                   arguments
miBoolean   mi_trace_eye           *result, *state, *org, *dir
miBoolean   mi_trace_reflection    *result, *state, *dir
miBoolean   mi_trace_refraction    *result, *state, *dir
miBoolean   mi_trace_transparent   *result, *state
miBoolean   mi_trace_environment   *result, *state, *dir
miBoolean   mi_trace_light         *result, *dir, *nl, *st, i
miBoolean   mi_sample_light        *result, *dir, *nl, *st, i, *s
miBoolean   mi_trace_shadow        *result, *state
miBoolean   mi_call_shader         *result, type, *state, tag
DB Functions
type     name           arguments
int      mi_db_type     tag
void *   mi_db_access   tag
void     mi_db_unpin    tag
void     mi_db_flush    tag
IMG Functions
type   name                 arguments
void   mi_img_put_color     *image, *color, x, y
void   mi_img_get_color     *image, *color, x, y
void   mi_img_put_scalar    *image, scalar, x, y
void   mi_img_get_scalar    *image, *scalar, x, y
void   mi_img_put_vector    *image, *vector, x, y
void   mi_img_get_vector    *image, *vector, x, y
void   mi_img_put_depth     *image, depth, x, y
void   mi_img_get_depth     *image, *depth, x, y
void   mi_img_put_normal    *image, *normal, x, y
void   mi_img_get_normal    *image, *normal, x, y
void   mi_img_put_label     *image, label, x, y
void   mi_img_get_label     *image, *label, x, y
Math Functions
type        name                    arguments
void        mi_vector_neg           *r
void        mi_vector_add           *r, *a, *b
void        mi_vector_sub           *r, *a, *b
void        mi_vector_mul           *r, f
void        mi_vector_div           *r, f
void        mi_vector_prod          *r, *a, *b
miScalar    mi_vector_dot           *a, *b
miScalar    mi_vector_norm          *a
void        mi_vector_normalize     *r
void        mi_vector_min           *r, *a, *b
void        mi_vector_max           *r, *a, *b
miScalar    mi_vector_det           *a, *b, *c
miScalar    mi_vector_dist          *a, *b
void        mi_matrix_ident         r
miBoolean   mi_matrix_invert        r, a
void        mi_matrix_prod          r, a, b
void        mi_matrix_rotate        a, x, y, z
double      mi_random
void        mi_point_transform      *r, *a, m
void        mi_vector_transform     *r, *a, m
void        mi_point_to_world       *state, *r, *v
void        mi_point_to_camera      *state, *r, *v
void        mi_point_to_object      *state, *r, *v
void        mi_point_from_world     *state, *r, *v
void        mi_point_from_camera    *state, *r, *v
void        mi_point_from_object    *state, *r, *v
void        mi_vector_to_world      *state, *r, *v
void        mi_vector_to_camera     *state, *r, *v
void        mi_vector_to_object     *state, *r, *v
void        mi_vector_from_world    *state, *r, *v
void        mi_vector_from_camera   *state, *r, *v
void        mi_vector_from_object   *state, *r, *v
Auxiliary Functions
type        name                       arguments
void        mi_reflection_dir          *dir, *state
miBoolean   mi_refraction_dir          *dir, *state, *in, *out
double      mi_fresnel                 n1, n2, t1, t2
double      mi_fresnel_reflection      *state, *i, *o
double      mi_phong_specular          spec, *state, *dir
double      mi_blinn_specular          spec, *state, *dir
void        mi_fresnel_specular        *ns, *ks, s, *st, *dir, *in, *out
double      mi_spline                  t, n, *ctl
double      mi_noise_1d                p
double      mi_noise_2d                u, v
double      mi_noise_3d                *p
double      mi_noise_1d_grad           p, *g
double      mi_noise_2d_grad           u, v, *gu, *gv
double      mi_noise_3d_grad           *p, *g
miBoolean   mi_lookup_color_texture    *col, *state, tag, *v
miBoolean   mi_lookup_scalar_texture   *scal, *state, tag, *v
miBoolean   mi_lookup_vector_texture   *vec, *state, tag, *v
void        mi_light_info              tag, *org, *dir, **paras
void        mi_texture_info            tag, *xres, *yres, **paras
miBoolean   mi_tri_vectors             *state, wh, nt, **a, **b, **c
Memory Allocation
type     name                arguments
void *   mi_mem_allocate     size
void *   mi_mem_reallocate   mem, size
void     mi_mem_release      mem
void     mi_mem_check
void     mi_mem_dump         mod
Thread Parallelism and Semaphores
type   name              arguments
void   mi_init_lock      *lock
void   mi_delete_lock    *lock
void   mi_lock           lock
void   mi_unlock         lock
int    mi_par_localvpu
int    mi_par_nthreads
Messages and Errors
type   name          arguments
void   mi_fatal      *message, ...
void   mi_error      *message, ...
void   mi_warning    *message, ...
void   mi_info       *message, ...
void   mi_progress   *message, ...
void   mi_debug      *message, ...
void   mi_vdebug     *message, ...
Note that many of these functions return double instead of miScalar, or have double parameters. This allows these functions to be used from shaders written in classic (K&R) C, which always promotes floating-point arguments to double.
These are the functions supplied by the Rendering Core of mental ray, RC. All of the following trace functions return miTRUE if any subsequent call of a shader returned miTRUE to indicate the presence of illumination; otherwise no illumination is present and miFALSE is returned. (see shader call tree) All trace functions derive the state of the ray to be cast from the given state of the parent ray. The state is always copied; the given state is not modified. This state is passed to subsequent calls of shaders, which may be a lens (see lens shader), material (see material shader), light (see light shader), or environment shader, and, in the case of material, light, and environment shaders, optionally a volume shader. The volume shader gets the same state as the previous (material) shader. Note that all point and direction vectors passed as arguments to tracing functions must be in internal space.
miBoolean mi_trace_eye( miColor *result, miState *state, miVector *origin, miVector *direction)
casts an eye ray from origin in direction, or calls the next lens shader. The allowed origin and direction values are restricted when using ray classification. If scanline is turned on and state->scanline is not zero, origin and direction must be the same as in the initial call of mi_trace_eye; lens shaders may not modify them. Origin and direction must be given in internal space.
miBoolean mi_trace_reflection( miColor *result, miState *state, miVector *direction)
casts a reflection ray from state->point to direction. It returns miFALSE if the trace depth has been exhausted. If no intersection is found, the optional environment shader is called. The direction must be given in internal space.
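For illustration only, a material shader might combine this call with mi_reflection_dir (described under Auxiliary Functions) roughly as follows; the ks reflectivity parameter is a hypothetical shader parameter:

/* Hedged sketch: add a mirror reflection contribution inside a
 * material shader. paras->ks is an illustrative reflectivity color. */
miVector rdir;
miColor  refl;

mi_reflection_dir(&rdir, state);                /* internal space */
if (mi_trace_reflection(&refl, state, &rdir)) {
    result->r += paras->ks.r * refl.r;
    result->g += paras->ks.g * refl.g;
    result->b += paras->ks.b * refl.b;
}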
miBoolean mi_trace_refraction( miColor *result, miState *state, miVector *direction)
casts a refraction ray from state->point to direction. It returns miFALSE if the trace depth has been exhausted. If no intersection is found, the optional environment shader is called. Before this function casts the refraction ray (after copying the state), it copies state->refraction_volume to state->volume, because the ray is now assumed to be ``inside'' the object, so the volume shader that describes the inside should be used to modify the ray while it travels inside the object. It is the caller's responsibility to set state->refraction_volume to the camera's volume shader state->camera->volume or some other volume shader if it determines that the ray has left the object. The direction must be given in internal space.
miBoolean mi_trace_transparent( miColor *result, miState *state)
does the same as mi_trace_refraction with dir == state->dir (that is, no change in the ray direction), but may be executed faster if the parent ray is an eye ray. It also works when ray tracing is turned off. If the ray direction does not change (because no index of refraction or similar modification is applied), it is more efficient to cast a transparency ray than a refraction ray. Like mi_trace_refraction, this function copies the refraction volume shader to state->volume because the ray is now assumed to be inside the object.
miBoolean mi_trace_environment( miColor *result, miState *state, miVector *direction)
casts a ray into the environment. The trace depth is not incremented or checked. The environment shader in the state is called to evaluate the returned color. The direction must be given in internal space.
miBoolean mi_sample_light( miColor *result, miVector *dir, miScalar *dot_nl, miState *state, miTag light_inst, miInteger *samples)
(see light shader) casts a light ray from the light source to the intersection point, causing the light source's light shader to be called. The light shader may then calculate shadows by casting a shadow ray to the intersection point. This may cause shadow shaders of occluding objects to be called, and will also cause the volume shader of the state to be called, if there is one. Before the light is sampled, the direction from the current intersection point in the state to the light and the dot product of this direction and the normal in the state are calculated and returned in dir and dot_nl if these pointers are nonzero. The direction is returned in internal space. The light instance to sample must be given in light_inst. samples must point to an integer that is initialized to 0. mi_sample_light must be called in a loop until it returns miFALSE. *samples will then contain the total number of light samples taken; it may be larger than 1 for area light sources.
For every call in the loop, a different dir and dot_nl is returned because the rays go to different points on the area light source. The caller is expected to use these variables, the returned color, and other variables such as diffuse and specular colors from the shader parameters to compute a color. These colors are accumulated until mi_sample_light returns miFALSE and the loop terminates. The caller then divides the accumulated color by the number of samples (*samples) if it is greater than 0, effectively averaging all the intermediate results.
Multiple samples are controlled by the -mc or -qmc command-line options. See the section on material shaders for an example. When casting light rays with mi_sample_light, mental ray may check whether the primitive's normal is pointing away from the light and ignore the light in this case. For this reason some shaders, such as ray-marching volume shaders, should assign 0 to state->pri first, and restore it before returning.
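The material shader section referred to above shows a complete example; a minimal sketch of the sampling loop, assuming hypothetical diffuse and light shader parameters, looks roughly like this:

/* Hedged sketch: accumulate diffuse illumination from one light.
 * paras->light and paras->diffuse are illustrative parameters. */
miColor   sum, lightcol;
miVector  dir;
miScalar  dot_nl;
int       samples = 0;

sum.r = sum.g = sum.b = 0;
while (mi_sample_light(&lightcol, &dir, &dot_nl, state,
                       paras->light, &samples)) {
    if (dot_nl > 0) {
        sum.r += dot_nl * paras->diffuse.r * lightcol.r;
        sum.g += dot_nl * paras->diffuse.g * lightcol.g;
        sum.b += dot_nl * paras->diffuse.b * lightcol.b;
    }
}
if (samples > 0) {                  /* average over area light samples */
    sum.r /= samples;
    sum.g /= samples;
    sum.b /= samples;
}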
miBoolean mi_trace_light( miColor *result, miVector *dir, miScalar *dot_nl, miState *state, miTag light_inst)
(see light shader) is a simpler variation of mi_sample_light that does not keep a sample counter, and is not called in a loop. It is equivalent to mi_sample_light except for area light sources. Area light sources must be sampled multiple times with different directions.
miBoolean mi_trace_shadow( miColor * const result, miState * const state)
(see shadow ray) computes shadows for the given light ray. This function is normally (see light shader) called from a light shader to take into account occluding objects that prevent some or all of the light emitted by the light source from reaching the illuminated (see material shader) point (whose material shader has probably called the light shader). The result color is modified by the shadow shaders that are called if occluding objects are found.
miBoolean mi_call_shader( miColor * const result, miShader_type type, miState * const state, miTag shader)
This function calls the shader specified by the tag shader. The tag is normally a texture shader or light shader or some other type of shader found in the calling shader's parameter list. The caller must pass its own state and the shader type, which must be one of miSHADER_LENS, miSHADER_MATERIAL, miSHADER_LIGHT, miSHADER_SHADOW, miSHADER_ENVIRONMENT, miSHADER_VOLUME, and miSHADER_TEXTURE. The sequence of operations is:
Database access functions can be used to convert pointers into tags, and to get the type of a tag. The scene database contains only tags and no pointers at all, because pointers are not valid on other hosts. All DB functions are available in all shaders, including output shaders.
int mi_db_type( const miTag tag)
Return the type of a database item, or 0 if the given tag does not exist. Valid types that are of interest in shaders are:
miSCENE_FUNCTION   Function to call, such as a shading function
miSCENE_MATERIAL   Material containing shaders and flags
miSCENE_LIGHT      Light source
miSCENE_IMAGE      Image in memory
The most important are functions and images, because general-purpose texture shaders need to distinguish procedural and image textures. (see procedural texture) (see image texture) See the texture shader example above.
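A minimal sketch of such a check, assuming a hypothetical miTag texture parameter named tex:

/* Hedged sketch: branch on the type of a texture parameter. */
int type = mi_db_type(paras->tex);

if (type == miSCENE_FUNCTION) {
    /* procedural texture: a texture shader will be called */
} else if (type == miSCENE_IMAGE) {
    /* image texture: the pixels can be accessed directly */
}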
void *mi_db_access( const miTag tag)
Look up the tag in the database, pin it, and return a pointer to the referenced item. Pinning means that the database item is guaranteed to stay in memory at the same location until the item is explicitly unpinned. Rendering aborts if the given tag does not exist; mi_db_access always returns a valid pointer. If an item is accessed twice, it must be unpinned twice; the pin count is a counter, not a flag. The maximum number of pins is 255.
void mi_db_unpin( const miTag tag)
Every tag that was accessed with mi_db_access must be unpinned with this function when the pointer is no longer needed. Failure to unpin can cause a pin overflow, which aborts rendering. After unpinning, the pointer may no longer be used.
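A minimal sketch of the access/unpin pairing, assuming the tag and the pixel coordinates are provided by the surrounding shader code:

/* Hedged sketch: read one pixel of an image texture. */
miImg_image *img = mi_db_access(tag);
miColor      c;

mi_img_get_color(img, &c, x, y);
mi_db_unpin(tag);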
void mi_db_flush( const miTag tag)
Normally, a shader does not use a pointer obtained with mi_db_access to write to a database item. If it does, other hosts on the network may still hold stale copies, which must explicitly be deleted by calling this function. This function must be used with great care; it is an error to flush an item that another shader has pinned. For this reason, it is not generally possible to pass information back and forth between shaders or hosts by writing into database items and then flushing them.
The IMG module of mental ray provides functions that deal with images. There are functions to read and write image files in various formats, and to (see texture) access in-core frame buffers such as image textures. First, the functions that access frame buffers are listed. These functions are (see texture shader) typically used by texture shaders, which can obtain an image pointer by calling mi_db_access with the image tag as an argument. All these functions do nothing or return defaults if the image pointer is 0. They do not check whether the frame buffer has the correct data type. All these functions are available in all shaders, including output shaders.
void mi_img_put_color( miImg_image *image, miColor *color, int x, int y)
Store the color color in the color frame buffer image at coordinate x y, after performing desaturation or color clipping, gamma correction, dithering, and compensating for premultiplication. This function works with 1, 2, or 4 components per pixel, and with 8 or 16 bits per component. The normal range for the R, G, B, and A color components is [0, 1] inclusive.
void mi_img_get_color( miImg_image *image, miColor *color, int x, int y)
This is the reverse function to mi_img_put_color; it returns the color stored in a frame buffer at the specified coordinates. Gamma compensation and premultiplication, if enabled by mi_img_mode, are applied in reverse. The returned color may differ from the original color given to mi_img_put_color because of color clipping and color quantization.
void mi_img_put_scalar ( miImg_image *image, float scalar, int x, int y)
Store the scalar scalar in the scalar frame buffer image at coordinate x y, after clipping to the range [0, 1]. Scalars are stored as 8-bit or 16-bit unsigned values. This function is intended for scalar texture files of type miIMG_S or miIMG_S_16.
void mi_img_get_scalar ( miImg_image *image, float *scalar, int x, int y)
This is the reverse function to mi_img_put_scalar; it returns the scalar stored in a frame buffer at the specified coordinates, converted to a scalar in the range [0, 1]. If the frame buffer pointer is 0, the scalar is set to 0.
void mi_img_put_vector ( miImg_image *image, miVector *vector, int x, int y)
Store the X and Y components of the vector vector in the vector frame buffer image at coordinate x y, after clipping to the range [-1, 1]. Vectors are stored as 16-bit signed values. This function is intended for vector texture files of type miIMG_VTA or miIMG_VTS.
void mi_img_get_vector ( miImg_image *image, miVector *vector, int x, int y)
This is the reverse function to mi_img_put_vector; it returns the UV vector stored in a frame buffer at the specified coordinates, with coordinates converted to the range [-1, 1]. The Z component of the vector is always set to 0. If the frame buffer pointer is 0, all components are set to 0.
void mi_img_put_depth( miImg_image *image, float depth, int x, int y)
Store the depth value depth in the frame buffer image at the coordinates x y. The depth value is not changed in any way. The standard interpretation of the depth is the (positive) Z distance of objects relative to the camera. mental ray uses this function internally to store -state->point.z (in camera space) if the depth frame buffer is enabled with an appropriate output statement.
void mi_img_get_depth( miImg_image *image, float *depth, int x, int y)
Read the depth value into the float pointed to by depth from frame buffer image at the coordinates x y. If the image pointer is 0, return the FLT_MAX constant from float.h.
void mi_img_put_normal( miImg_image *image, miVector *normal, int x, int y)
Store the normal vector normal in the frame buffer image at the coordinates x y. The normal vector is not changed in any way.
void mi_img_get_normal( miImg_image *image, miVector *normal, int x, int y)
Read the normal vector normal from frame buffer image at the coordinates x y. If the image pointer is 0, return a null vector.
void mi_img_put_label( miImg_image *image, miUint label, int x, int y)
Store the label value label in the frame buffer image at the coordinates x y. The label value is not changed in any way.
void mi_img_get_label( miImg_image *image, miUint *label, int x, int y)
Read the label value to the unsigned integer pointed to by label from frame buffer image at the coordinates x y. If the image pointer is 0, return 0.
Math functions include common vector and matrix operations. More specific rendering functions can be found in the next section, Auxiliary Functions.
void mi_vector_neg( miVector *r)
r := -r
void mi_vector_add( miVector *r, miVector *a, miVector *b)
r := a + b
void mi_vector_sub( miVector *r, miVector *a, miVector *b)
r := a - b
void mi_vector_mul( miVector *r, double f)
r := r * f
void mi_vector_div( miVector *r, double f)
r := r / f (If f is zero, r is left unchanged.)
void mi_vector_prod( miVector *r, miVector *a, miVector *b)
r := a × b (cross product)
double mi_vector_dot( miVector *a, miVector *b)
a · b (dot product)
double mi_vector_norm( miVector *a)
|a|
void mi_vector_normalize( miVector *r)
r := r / |r| (If r is a null vector, r is left unchanged.)
void mi_vector_min( miVector *r, miVector *a, miVector *b)
rx := ax < bx ? ax : bx
ry := ay < by ? ay : by
rz := az < bz ? az : bz
(componentwise minimum)
void mi_vector_max( miVector *r, miVector *a, miVector *b)
rx := ax > bx ? ax : bx
ry := ay > by ? ay : by
rz := az > bz ? az : bz
(componentwise maximum)
double mi_vector_det( miVector *a, miVector *b, miVector *c)
    | ax bx cx |
det | ay by cy |
    | az bz cz |
double mi_vector_dist( miVector *a, miVector *b)
|a - b|
void mi_matrix_ident( miMatrix r)
R :=  1 0 0 0
      0 1 0 0
      0 0 1 0
      0 0 0 1
miBoolean mi_matrix_invert( miMatrix r, miMatrix a)
R := A^-1 (Returns miFALSE if the matrix cannot be inverted.)
void mi_matrix_prod( miMatrix r, miMatrix a, miMatrix b)
R := A * B
void mi_matrix_rotate( miMatrix a, const double xrot, const double yrot, const double zrot)
Create a rotation matrix a rotating by xrot, then yrot, then zrot, in radians.
double mi_random(void)
Return a random number in the range [0, 1).
void mi_point_transform( miVector *r, miVector *v, miMatrix m)
r := v * m
All fourteen transformation functions may be called with identical pointers r and v; the vector is transformed in place in this case. If the result of one of the 14 transformations is a homogeneous vector with a w component that is not equal to 1.0, the result vector's x, y, and z components are divided by w. For the multiplication, a w component of 1.0 is implicitly appended to the input vector v. If the matrix m is a null pointer, no transformation is done and v is copied to r.
void mi_vector_transform( miVector *r, miVector *v, miMatrix m)
Same as mi_point_transform, but ignores the translation row in the matrix.
void mi_point_to_world( miState * const state, miVector * const r, miVector * const v)
Convert internal point v in the state to world space, r.
void mi_point_to_camera( miState * const state, miVector * const r, miVector * const v)
Convert internal point v in the state to camera space, r.
void mi_point_to_object( miState * const state, miVector * const r, miVector * const v)
Convert internal point v in the state to object space, r. For a light, object space is the space of the light, not the illuminated object.
void mi_point_from_world( miState * const state, miVector * const r, miVector * const v)
Convert point v in world space to internal space, r.
void mi_point_from_camera( miState * const state, miVector * const r, miVector * const v)
Convert point in camera space v to internal space, r.
void mi_point_from_object( miState * const state, miVector * const r, miVector * const v)
Convert point v in object space to internal space, r. For a light, object space is the space of the light, not the illuminated object.
void mi_vector_to_world( miState * const state, miVector * const r, miVector * const v)
Convert internal vector v in the state to world space, r. Vector transformations work like point transformation, except that the translation row of the transformation matrix is ignored. The resulting vector is not (re-)normalized. Vector transformations transform normals correctly only if there is no scaling.
void mi_vector_to_camera( miState * const state, miVector * const r, miVector * const v)
Convert internal vector v in the state to camera space, r.
void mi_vector_to_object( miState * const state, miVector * const r, miVector * const v)
Convert internal vector v in the state to object space, r. For a light, object space is the space of the light, not the illuminated object.
void mi_vector_from_world( miState * const state, miVector * const r, miVector * const v)
Convert vector v in world space to internal space, r.
void mi_vector_from_camera( miState * const state, miVector * const r, miVector * const v)
Convert vector in camera space v to internal space, r.
void mi_vector_from_object( miState * const state, miVector * const r, miVector * const v)
Convert vector v in object space to internal space, r. For a light, object space is the space of the light, not the illuminated object.
The following functions are provided for support of shaders, to simplify common mathematical operations required in shaders:
void mi_reflection_dir( miVector *dir, miState *state);
Calculate the reflection direction based on the dir, normal, and normal_geom state variables. The returned direction dir can be passed to mi_trace_reflection. It is returned in internal space.
miBoolean mi_refraction_dir( miVector *dir, miState *state, double ior_in, double ior_out);
Calculate the refraction direction in internal space based on the interior and exterior indices of refraction ior_in and ior_out, and on the dir, normal, and normal_geom state variables. The returned direction dir can be passed to mi_trace_refraction. Returns miFALSE and leaves *dir undefined in the case of total internal reflection.
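A minimal sketch of how a material shader might use this, with hypothetical ior_in and ior_out shader parameters and reflection as a fallback on total internal reflection:

/* Hedged sketch: cast a refraction ray, falling back to reflection
 * on total internal reflection. */
miVector dir;
miColor  col;

if (mi_refraction_dir(&dir, state, paras->ior_in, paras->ior_out))
    mi_trace_refraction(&col, state, &dir);
else {
    mi_reflection_dir(&dir, state);
    mi_trace_reflection(&col, state, &dir);
}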
double mi_fresnel( double n1, double n2, double t1, double t2);
double mi_fresnel_reflection( miState *state, double ior_in, double ior_out);
Call mi_fresnel with parameters appropriate for the given indices of refraction ior_in and ior_out, and for the dot_nd state variable.
double mi_phong_specular( double spec_exp, miState *state, miVector *dir);
Calculate the Phong factor based on the direction of illumination dir, the specular exponent spec_exp, and the state variables normal and dir. The direction must be given in internal space.
double mi_blinn_specular( double spec_exp, miState *state, miVector *dir);
As mi_phong_specular, but attenuated by a geometric attenuation factor (see [Foley 90]).
void mi_fresnel_specular( miScalar *ns, miScalar *ks, double spec_exp, miState *state, miVector *dir, double ior_in, double ior_out);
Calculate the specular factor ns based on the illumination direction dir, the specular exponent spec_exp, the inside and outside indices of refraction ior_in and ior_out, and the state variables normal and dir. ks is the value returned by mi_fresnel, which is called by mi_fresnel_specular. The direction must be given in internal space.
double mi_spline( double t, const int n, miScalar * const ctl)
This function calculates a one-dimensional cardinal spline at location t. The t parameter must be in the range 0 ... 1. The spline is defined by n control points specified in the array ctl. There must be at least two control points. To calculate multi-dimensional splines, this function must be called once for each dimension; for example, mi_spline can be called three times to interpolate colors smoothly.
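A minimal sketch of a per-channel color spline evaluation; the helper name and control point arrays are illustrative:

/* Hedged sketch: evaluate a color ramp at parameter t by calling
 * mi_spline once per channel. */
static void eval_color_spline(
    miColor  *out,
    double    t,
    int       n,
    miScalar *ctl_r,
    miScalar *ctl_g,
    miScalar *ctl_b)
{
    out->r = mi_spline(t, n, ctl_r);
    out->g = mi_spline(t, n, ctl_g);
    out->b = mi_spline(t, n, ctl_b);
    out->a = 1;
}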
double mi_noise_1d( const double p)
Return a one-dimensional coherent noise function of p. All six noise functions compute a Perlin noise function of the given one-, two-, or three-dimensional parameters such that the noise changes gradually with changing parameters. The returned values are in the range 0 ... 1, with a bell-shaped distribution centered at 0.5 and falling off to both sides. This means that 0.5 is returned most often, and values of less than 0.0 or more than 1.0 are never returned. See [Perlin 85].
double mi_noise_2d( const double u, const double v)
Return a two-dimensional noise function of u, v.
double mi_noise_3d( miVector * const p)
Return a three-dimensional noise function of the vector p. This is probably the most useful noise function; a simple procedural texture shader can be written that converts a copy of the state->point vector to object space, passes it to mi_noise_3d, and assigns the returned value to the red, green, and blue components of the result color. The average feature size of the texture will be approximately one unit in space.
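A minimal sketch of that texture shader, assuming it is declared without parameters; the shader name is illustrative:

#include <shader.h>

/* Hedged sketch of the procedural texture shader described above. */
miBoolean mynoise(miColor *result, miState *state, void *paras)
{
    miVector p;
    double   v;

    mi_point_to_object(state, &p, &state->point);
    v = mi_noise_3d(&p);
    result->r = result->g = result->b = v;
    result->a = 1;
    return(miTRUE);
}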
double mi_noise_1d_grad( const double p, miScalar * const g)
Return a one-dimensional noise function of p. The gradient of the computed texture at the location p is assigned to *g. Gradients are not normalized.
double mi_noise_2d_grad( const double u, const double v, miScalar * const gu, miScalar * const gv)
Return a two-dimensional noise function of u, v. The gradient is assigned to *gu and *gv.
double mi_noise_3d_grad( miVector * const p, miVector * const g)
Return a three-dimensional noise function of the vector p. The gradient is assigned to the vector g.
miBoolean mi_lookup_color_texture( miColor *color, miState *state, miTag tag, miVector *coord)
tag is assumed to be a texture as taken from a color texture parameter of a shader. This function checks whether the tag refers to a shader (procedural texture) or an image, depending on which type of color texture statement was used in the .mi file. If tag is a shader, coord is stored in state->tex, the referenced texture shader is called, and its return value is returned. If tag is an image, coord is brought into the range (0..1, 0..1) by removing the integer part, the image is looked up at the resulting 2D coordinate, and miTRUE is returned. In both cases, the color resulting from the lookup is stored in *color.
miBoolean mi_lookup_scalar_texture( miScalar *scalar, miState *state, miTag tag, miVector *coord)
This function is equivalent to mi_lookup_color_texture, except that tag is assumed to refer to a scalar texture shader or scalar image, as defined in the .mi file with a scalar texture statement, and a scalar is looked up and returned in *scalar.
miBoolean mi_lookup_vector_texture( miVector *vector, miState *state, miTag tag, miVector *coord)
This function is also equivalent to mi_lookup_color_texture, except that tag is assumed to refer to a vector texture shader or vector image, as defined in the .mi file with a vector texture statement, and a vector is looked up and returned in *vector.
void mi_light_info( miTag tag, miVector *org, miVector *dir, void **paras)
tag is assumed to be a light source as found in a light parameter of a shader. It is looked up, and its origin (location in internal space) is stored in *org, and its direction (also in internal space) is stored in *dir. Since light sources can only have one or the other but not both, the unused vector is set to a null vector. This can be used to distinguish directional (infinite) light sources; their org vector is set to (0, 0, 0). The paras pointer is set to the shader parameters of the referenced light shader; if properly cast by the caller, it can extract information such as whether a non-directional light source is a point or a spot light, and its color and attenuation parameters. (mental ray considers a spot light to be a point light with directional attenuation.) Any of the three pointers org, dir, and paras can be a null pointer.
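A minimal sketch of such a query, assuming a hypothetical miTag light parameter named light:

/* Hedged sketch: decide whether a light parameter refers to a
 * directional light. */
miVector org, dir;

mi_light_info(paras->light, &org, &dir, 0);
if (org.x == 0 && org.y == 0 && org.z == 0) {
    /* directional (infinite) light: only dir is meaningful */
} else {
    /* point or spot light located at org */
}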
void mi_texture_info( miTag tag, int *xres, int *yres, void **paras)
tag is assumed to be a texture as found in a texture parameter of a shader. If tag refers to a procedural texture shader, *xres and *yres are set to 0 and *paras is set to the shader parameters of the texture shader. If tag is an image texture, *xres and *yres are set to the image resolution in pixels, and *paras is set to 0. Any of the three pointers can be a null pointer.
void mi_tri_vectors( miState *state, int which, int ntex, miVector **a, miVector **b, miVector **c)
All the information in the state pertains to the interpolated intersection point in a triangle. This function can be used to obtain information about the uninterpolated triangle vertices. Together with the barycentric coordinates in the state, parameters retrieved with mi_tri_vectors may be interpolated differently by the shader. The which argument is a character that controls which triple of vectors is to be retrieved:
mental ray's memory allocation functions replace the standard malloc packages found on most systems. They have built-in functions for memory leak tracing and consistency checks, and handle errors automatically.
void *mi_mem_allocate( const int size)
Accepts one argument specifying the size of the memory to allocate. A pointer to the allocated memory is returned. If the allocation fails, an error is reported automatically, and mental ray is aborted. This call is guaranteed to return a valid pointer, or not to return at all. The allocated memory is zeroed.
void *mi_mem_reallocate( void * const mem, const int size)
Change the size of an allocated block of memory. There are two arguments: a pointer to the old block of memory, and the requested new size of that block. A pointer to the new block is returned, which may be different from the pointer to the old block. If the old pointer was a null pointer, mi_mem_reallocate behaves like mi_mem_allocate. If the new size is zero, mi_mem_reallocate behaves like mi_mem_release, and returns a null pointer. If there is an allocation error, an error is reported and mental ray is aborted. Like mi_mem_allocate, mi_mem_reallocate never returns if the re-allocation fails. If the block grows, the extra bytes are undefined.
void mi_mem_release( void * const mem)
Frees a block of memory. There is one argument: the address of the block. If a null pointer is passed, nothing is done. There is no return value.
void mi_mem_check(void)
This call is currently not available. It will be available in version 2.0.
void mi_mem_dump( const miModule module)
This call is currently not available. It will be available in version 2.0.
Thread Parallelism and Semaphores
In addition to network parallelism, mental ray also supports shared memory parallelism through threads. Network parallelism is a form of distributed memory parallelism in which processes cooperate by exchanging messages; messages are used both to exchange data and to synchronize. With shared memory, data can be exchanged easily: a process only needs to access the common memory to do so. A different mechanism has to be used for synchronization, usually locking. Essentially, one process has to tell the others that it is waiting to access data, and another process can signal that it has finished working with the data, so that any other process may now access it.
By default threads are used on shared memory multiprocessor machines. Threads are sometimes also called lightweight processes. Threads behave like processes running on a common shared memory.
Since memory is shared between threads, two threads can write to the same memory at the same time. It can also happen that one thread writes while another reads the same memory. Both cases can lead to surprising, unwanted results. Therefore -- to guard against these surprises -- certain precautions have to be observed when using threads. Care has to be taken with shared data such as global or static variables, since any thread may potentially modify them. To prevent corrupting data (or reading corrupted data), locking must be used whenever it is not otherwise guaranteed that concurrent accesses cannot occur.
In addition to making sure that write accesses to data are performed only when no other thread accesses the data, it is important to use only so-called concurrency-safe libraries and calls. If a nonreentrant function must be called, locking should be used. A function is called reentrant if it can be executed by multiple threads at the same time without adverse effects. (Reentrancy and concurrency safety are related, but the terms stem from different historical contexts.) Details and examples are explained below.
For example, static data on a shared memory multiprocessor can be modified by more than one processor at a time. Consider this test:
if (!is_init) {
    is_init = miTRUE;
    initialize();
}
This does not guarantee that initialize is called only once. The reason is that all threads share the is_init flag, so two threads may simultaneously examine the flag. Both will find that it has not been set, and enter the if body. Next, both will set the flag to miTRUE, and then both will call the initialize function. This situation is called a race condition. The example is contrived because initialization and termination should be done with init and exit functions as described in the next section, but this problem can occur with any heap variable. In general, all threads on a host share all data except local auto variables on the stack.
The behavior described above could also occur if more than one thread is used on a single processor, but by default mental ray does not create more threads than there are processors available.
There are two methods for guarding against race conditions. One is to guarantee that only one thread executes certain code at a time. Such code, surrounded by lock and unlock operations, is called a critical section. Code inside a critical section may access global or static data or call any function that does so (as long as all of it is protected by the same lock). The lock used in this example is assumed to have been created and initialized with a call to mi_init_lock before it is used here. (See below for how locks are initialized.) Here is an example of how a critical section may be used:
miLock lock;                /* created and initialized elsewhere with mi_init_lock */

mi_lock(lock);
if (!is_init) {             /* critical section */
    is_init = miTRUE;
    initialize();
}
mi_unlock(lock);
The other method is to use separate variables for each thread. This is done by allocating an array with one entry for each thread, and indexing this array with the current thread number. Allocation is done in the shader's initialization routine (which has the same name as the shader with _init appended). No locking is required because it is called only once. The termination routine (which also has the same name but with _exit appended) must release the array.
mental ray provides two locks for general use: state->global_lock is a lock shared by all threads and all shaders. No two critical sections protected by this lock can execute simultaneously on this host. The second is state->shader->lock, which is local to all instances of the current shader. The lock is tied to the shader, not the particular call with particular shader parameters. Every shader in mental ray, built-in or dynamically linked, has exactly one such lock.
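For example, a critical section shared by all instances of the current shader could be written as follows; what the section protects is left open:

/* Hedged sketch: protect data shared by all instances of this shader. */
mi_lock(state->shader->lock);
/* ... read or modify the shared data ... */
mi_unlock(state->shader->lock);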
The relevant functions provided by the parallelism modules are:
void mi_init_lock( miLock * const lock)
Before a lock can be used by one of the other locking functions, it must be initialized with this function. Note that the lock variable must be static or global. Shaders will normally use this function in their _init function.
void mi_delete_lock( miLock * const lock)
Destroy a lock. This should be done when the lock is no longer needed. The code should lock and immediately unlock the lock first to make sure that no other thread is in, or waiting for, a critical section protected by this lock. Shaders will normally use this function in their _exit function.
void mi_lock( const miLock lock)
Check whether any other code holds the lock. If so, block; otherwise set the lock and proceed. This is done in a parallel-safe way, so only one critical section protected by a given lock can execute at a time. Note that locking the same lock twice in a row without anyone unlocking it will block the thread forever, effectively freezing mental ray, because the second lock can never succeed.
void mi_unlock( const miLock lock)
Release a lock. If another thread was blocked attempting to set the lock, it can proceed now. Locks and unlocks must always be paired, and the code between locking and unlocking must be as short and as fast as possible to avoid defeating parallelism.
miVpu mi_par_localvpu(void)
int   miTHREAD(miVpu vpu)
The term VPU stands for Virtual Processing Unit. All threads on the network have a unique VPU number. VPUs are a concatenation of the host number and the thread number, both numbered from 0 to the number of hosts or threads, respectively, minus 1. mi_par_localvpu returns the VPU number of the current thread on the local host; the miTHREAD macro extracts the thread number from a VPU. Thread 0 on host 0 normally runs the translator that controls the entire operation.
int mi_par_nthreads(void)
Returns the number of threads on the local host. This is normally 1 on a single-processor system. This number can be used to allocate an array of per-thread variables in the shader initialization code. The array can then be indexed by the shader with miTHREAD(mi_par_localvpu()).
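A minimal sketch of per-thread storage, combined with the _init and _exit functions described in the next chapter; the array name and element type are illustrative:

#include <shader.h>

/* Hedged sketch: one scratch slot per local thread. */
static miScalar *per_thread;

void myshader_init(miState *state, void *paras, miBoolean *inst_req)
{
    if (!paras)                 /* main shader init: runs only once */
        per_thread = mi_mem_allocate(mi_par_nthreads() * sizeof(miScalar));
}

void myshader_exit(void *paras)
{
    if (!paras)                 /* main shader exit */
        mi_mem_release(per_thread);
}

/* inside the shader itself:
 *     miScalar *mine = &per_thread[miTHREAD(mi_par_localvpu())];
 */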
Shaders may print messages and errors. They are printed in the same format as rendering (RC) messages. Options given to the translator determine which messages are printed and which are suppressed. All message routines have the same parameters as printf(3). All append a newline to the message. Messages are printed in the form
RC host.thread level: message
with the module name RC, the host number host if available, the thread number thread with a leading dot if available, the message type level (fatal, error, warning etc), and the message given in the function call.
void mi_fatal( const char * const message, ...)
An unrecoverable error has occurred. Unlike all other message functions, this call does not return to the caller; it attempts to recover mental ray and returns to the top-level translator. Recovering may involve aborting all operations in progress and clearing the entire database. Fatal messages can be suppressed, but mental ray is always re-initialized.
void mi_error( const char * const message, ...)
An unrecoverable error has occurred. This call returns; the caller must abort the current operation gracefully and return.
void mi_warning( const char * const message, ...)
A recoverable error occurred. The current operation proceeds.
void mi_info( const char * const message, ...)
Prints information about the current operation, such as the number of triangles and timing information. Infos should be used sparingly; do not print information for every intersection point or shader call.
void mi_progress( const char * const message, ...)
Prints progress reports, such as rendering percentages.
void mi_debug( const char * const message, ...)
Prints debugging information useful only for shader development.
void mi_vdebug( const char * const message, ...)
Prints more debugging information useful only for shader development. Messages that are likely to be useful only in rare circumstances, or that generate a very large number of lines should be printed with this function.
mental ray 1.9 provides a way to define initialization and cleanup functions for each user-defined function. Many shaders need to perform operations such as initializing color tables or allocating arrays before rendering starts. They may also need to perform cleanup operations after rendering has finished, for example releasing storage to prevent memory leaks. Before a shader is called for the first time, mental ray checks whether a function of the same name with _init appended exists. If so, it assumes that this is an initialization routine and calls it once before the first call of the shader. The state passed to the initialization function is the same as the state passed to the first call of the shader being initialized. Note that the order of shader calls is unpredictable because the order of pixel samples is unpredictable, so the initialization function should not rely on sample-specific state variables such as state->point.
The initialization function can request shader instance initializations by setting the miBoolean variable pointed to by its third argument to miTRUE. A shader instance is a unique pair of shader and shader parameters. For example, if the shader soft_material is used in two different materials, it is said to have two different instances (even if the parameter values are similar).
When rendering has finished, mental ray checks, for each user-provided shader that was called, whether a function of the same name with _exit appended exists. If so, it assumes that this is a cleanup routine and calls it once. For example, if a shader myshader exists, the functions myshader_init and myshader_exit are called for initialization and cleanup if they exist.
Both routines are assumed to have the following type:
void myshader_init(miState *state, void *paras, miBoolean *inst_init_req); void myshader_exit(void *paras);
Here is an example of init and exit functions for a shader named myshader. When myshader is about to be used for the first time in a frame, the main initialization is called first with a null parameter pointer; if it requests instance initializations, an instance initialization follows for each shader instance, with the parameter pointer set:
void myshader_init(             /* must end with "_init" */
    miState         *state,
    struct myshader *paras,     /* valid for inst inits */
    miBoolean       *inst_req)  /* for inst init request */
{
    if (!paras) {               /* main shader init */
        *inst_req = miTRUE;     /* want inst inits too */
        ...
    } else {                    /* shader instance init */
        paras->something = 1;   /* just an example */
        ...
    }
}

void myshader_exit(             /* must end with "_exit" */
    struct myshader *paras)     /* valid for inst inits */
{
    if (!paras) {               /* main shader exit */
        ...                     /* no further inst exits will occur */
    } else {                    /* shader instance exit */
        paras->something = 0;   /* just an example */
        ...
    }
}
Note that there will generally be many instance init/exits (if enabled), but only one shader init/exit. If an init/exit shader isn't available, it isn't called; this is not an error. Initialization and cleanup are done on every host where the function was used, but only once on shared memory parallel machines. They are done for each frame separately.
Trace functions are functions provided by mental ray that allow a shader to cast a ray into the scene, most of them using standard ray tracing. Not all types of tracing functions can be used in all types of shaders. Conversely, many trace functions cause shaders to be called. This chapter lists these interdependencies.
The following list shows which shaders are called from which trace functions:
                       lens   material   environ   light   shadow   volume
mi_trace_eye           yes    yes        yes       no      no       yes
mi_trace_reflection    no     yes        yes       no      no       yes
mi_trace_refraction    no     yes        yes       no      no       yes
mi_trace_transparent   no     yes        yes       no      no       yes
mi_trace_environment   no     no         yes       no      no       yes
mi_sample_light        no     no         no        yes     no       yes
mi_trace_shadow        no     no         no        no      yes      no
(see shader call tree) mental ray's RC module holds internal data corresponding to the ray tree. Therefore shaders may not call arbitrary trace functions, since in RC's data structures entries are only provided for the following children at a node:
                       lens   material   environ   light   shadow   ray volume   light volume
mi_trace_eye           yes    no         no        no      no       no           no
mi_trace_reflection    **     yes        *         **      **       yes          **
mi_trace_refraction    **     yes        *         **      **       yes          **
mi_trace_transparent   **     yes        *         **      **       yes          **
mi_trace_environment   *      yes        yes       *       yes      yes          yes
mi_sample_light        *      yes        *         **      **       yes          **
mi_trace_shadow        **     **         **        yes     **       no           yes
The shader interface of the previous generation of mental ray, 1.8, differed from version 1.9 in various ways. When converting shaders from 1.8 to 1.9, the following changes should be made: