A .mi file contains commands and scene entities in any order, with the restriction that an element must be defined before it can be referenced. All entities are named; all references are done by name. The following entities can be defined:
options        global options
camera         camera: output files, aperture, resolution, etc.
material       shading, shadows, volumes, environments, contour, etc.
texture        procedural texture or texture image
light          light source
instance       places objects, lights, cameras, and groups in 3D space
instgroup      groups instances; the nodes of the scene DAG
shader         optional named shaders
data           arbitrary user data
lightprofile   light profile such as IES or Eulumdat (mental ray 3.1)
All of these can be defined at any place, as long as they are not nested (the definition of an element must be completed before the next element can be defined). All these entities can also be incrementally changed by introducing the definition with the incremental keyword, which tells mental ray to re-define an existing element instead of starting a new one. The contents of the existing element become the defaults for the new one.
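For example, an incremental change can replace a single statement of an existing element while keeping everything else; here, only the resolution of a previously defined camera (the name "cam1" is assumed for illustration) is changed:

    incremental camera "cam1"
        resolution 1024 768
    end camera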
options
This element contains rendering options such as the shadow mode, tracing depth, sampling algorithm and its parameters, acceleration algorithm and its parameters, dithering, and other modes.
All scene entities are described in more detail in the following subsections.
options "name"
option_statements
end options
Options contain rendering modes. An options element must be specified to
render a scene. There is a variety of option_statements that can
be listed in the options. Most of them can be overridden with an
appropriate command-line option; see appendix .
The following option_statements are supported:
photon trace depth, photonmap, and photonmap rebuild on|off have the same meaning as for caustics.
camera "name"
camera_statements
end camera
A camera describes a view of the scene, including the list of files to write, the lens shaders to use, volume shaders to be used as the global atmosphere or fog, global environment shaders that control what happens to rays that leave the scene, and other parameters. Cameras are scene entities that need to be placed in the scene with an instance element. In object space mode (see options element above), the location of the camera in world space is determined by the camera instance transformation. Note that the camera instance must be attached to the root instance group of the scene. See below for information on instance groups.
Cameras contain output statements that specify output shaders and output files to write to disk, and control which frame buffers mental ray creates and maintains during rendering. More than one output file can be created, and output shaders such as filters can be listed that operate on the final rendered image, before it is written to a picture file. outputs is one or more output statements. Output statements are very similar to shader lists, like lens shader statements, but the syntax is different to allow type specifications and output file names:
output ["datatype"] "filetype" [options] "filename"
output "datatype" "shader_name" (parameters)
The first kind writes a picture to a file named filename, using the file format filetype. Normally, file formats imply a data type, but the default can be overridden by naming an explicit datatype. For example, the file type "rgb", which stands for an SGI RGBA image file, implies the data type "rgba".
The options specify additional format related parameters. Currently, the "jpg" file format supports one option quality q, where q is an integer value between 0 and 100. Lower values force higher lossy compression resulting in lower image quality. A quality value of 0 will cause the use of the default quality 75. For Softimage "pic" file formats, the options keywords even and odd are available to set the corresponding fields in the file header.
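For example, an output statement selecting JPEG output at an explicit quality could look like this (the file name is chosen for illustration):

    output "rgba" "jpg" quality 90 "frame0001.jpg"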
The second kind of output statement calls an output shader, such as a filter, that may operate on all available frame buffers. Here, the datatype may be a comma-separated list of types if the shader requires multiple frame buffers. Each type can be prefixed with a ``+'' or ``-'' to turn interpolation on or off. Interpolation is averaging for color, depth, and normal images and max'ing for label images. Interpolation is on by default for the standard color frame buffer and off by default for all others. For example, a shader that filters the standard RGBA image with a filter whose size depends on the distance of objects needs both the interpolated RGBA buffer and the interpolated depth buffer, and would have a data type "rgba,+z". mental ray creates all types of frame buffers requested by at least one output statement of either kind.
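A sketch of such an output shader statement, assuming a hypothetical shader "depth_blur" with a hypothetical parameter that requires the interpolated color and depth frame buffers:

    output "rgba,+z" "depth_blur" ("strength" 0.5)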
The data type "rgbe" (mental ray 3.1), which stores high dynamic range RGBE data, is normally used for formats that understand RGBE data ("hdr" or "ht"), but it can also be stored in any format that accepts 8-bit RGBA. This will result in image files that cannot be displayed with standard viewers, but tools exist that can process such data. For example, the following output statement will store RGBE data in an RLA file:
output "+rgbe" "rla" "file1.rla"
Unless there is also a true floating-point buffer ("rgba_fp"), specification of the "rgbe" type will switch mental ray's color frame buffer to RGBE mode because its high dynamic range is considered a superset of regular RGB. This can significantly reduce memory usage for large frame buffers, like 4000x4000, compared to floating-point frame buffers which are four times as large. Note that RGBE stores no alpha.
The data types "fb0" through "fb7" refer to user frame buffers 0...7. User frame buffers are defined in the options statement using frame buffer statements. The actual data type of fbn is the type of frame buffer n. For example, the output statements
output "+rgba" "rgb" "file1.rgb"
output "fb0" "ctfp" "file2.ct"
write the standard frame buffer to the image file file1.rgb, and then write the contents of user frame buffer 0 to the image file file2.ct. This assumes that the options block contains a statement that defines user frame buffer 0, such as:
frame buffer 0 "+rgba_fp"
User frame buffers are empty unless some shader writes to them during rendering. Their purpose is to collect nonstandard image data during rendering and to make that data available for output shading and image file writing.
A special data type "contour" can be specified that enables contour rendering. Special contour output shaders must be specified that pick up the contour information from the contour cell frame buffer and compute a color image, which they can either put into the regular color frame buffer or composite on top of it. In the latter case, one rendering phase creates a color image with contours. The color frame buffer can then be written to an image file using a regular image output statement. There is also a built-in contour output shader that creates a PostScript file instead of a color image. See the Contour chapter in this manual for details and examples.
Multipass rendering is a feature set that allows saving the results of rendering not in the form of image files, but in the form of sample lists. A sample is the result of a single primary eye ray, including all collected frame buffer information. Oversampling creates more than one sample per pixel; these samples are filtered to compute the pixel. Since multipass rendering operates on samples, it has access to the complete set of subpixel information and does not suffer from the pixel aliasing problems that image compositing does. mental ray also supports merging of multiple pass files, sample by sample, to generate the final filtered output images. Finally, mental ray supports pass preprocessing, which applies a function with random access to a single pass file to perform operations such as subpixel motion-blur postprocessing.
All multipass rendering functionality is controlled by a script in the camera definition, consisting of five types of statements:
pass null
pass write "filename"
pass merge read [ "filename", "filename", ... ]
[ write "filename" ]
[ function ( parameters ) ]
pass prep read "filename"
write "filename"
function ( parameters )
pass delete "filename"
Pass statement lists are similar to output statement lists, which also allow storing and processing data in order.
The execution order of statements is important. Before rendering, all pass prep statements are executed in order; during rendering all pass and pass merge statements are executed in order for every finished rectangle; and after rendering all pass delete statements are executed. It is important to note that pass and pass merge statements are executed for every finished rectangle; this allows mental ray to minimize memory usage because only small sets of samples reside in memory at any one time. It also means that a pass prep statement cannot operate on the currently rendered pass because the pass prep function runs before rendering begins. In general, no two pass or pass merge statements should write to the same filename; this would result in sample data loss because of the per-rectangle interleaving.
All files are created before reading begins; hence, a pass prep statement should not read from a file written to during rendering, and the write filename should not be identical to any read filenames in any statement. In general, all filenames written to should be unique. The pass delete statement was provided to clean up temporary pass files after rendering completes. Pass files can become quite large, often in the hundreds of megabytes if the resolution, oversampling parameters, and the number and size of frame buffers is large.
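As a sketch, a camera might contain the following multipass script; the file names are illustrative, and the pass merge statement is assumed here to combine the samples of the current rendering with those read from the listed file:

    pass write "beauty.pass"
    pass merge read [ "background.pass" ] write "merged.pass"
    pass delete "background.pass"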
See page for more information on multipass rendering.
There is a variety of camera_statements that can be listed in
the camera. Some of them can be overridden by specifying an
appropriate command-line option; see appendix .
There are four camera statements that accept shader lists: output, lens, volume, and environment. As with all types of shaders, more than one shader can be listed, or more than one such statement can be given, to attach multiple shaders (or output files in the case of the output statement) to each type. In an incremental change (the incremental keyword is used before the camera keyword), the first occurrence of each of these four statements resets the list inherited from the previous definition instead of adding to it, as subsequent statements inside the same camera ... end camera block do.
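A minimal camera definition using several of these statements might look as follows; the volume and environment shader names are hypothetical:

    camera "cam1"
        output "rgba" "tif" "out.tif"
        volume "ground_fog" ()
        environment "sky_sphere" ()
        resolution 800 600
    end camera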
The following camera_statements are supported:
scalar texture "texture_name" [widthint heightint [depthint] ] bytes ...
[ local ] [ writable ] [ filter [scale_const]] scalar texture "texture_name" "filename"
scalar texture "texture_name" shader_list
color texture "texture_name" [widthint heightint [depthint] ] bytes ...
[ local ] [ writable ] [ filter [scale_const]] color texture "texture_name" "filename"
color texture "texture_name" shader_list
vector texture "texture_name" [widthint heightint ] bytes ...
[ local ] [ writable ] vector texture "texture_name" "filename"
vector texture "texture_name" shader_list
Textures are lookup functions. They come in two flavors: lookups of two-dimensional texture or picture files or literal bytes, and procedural lookups. File textures require a file name parameter or a byte list; procedural textures require a shading function parameter. There are three types of texture functions: textures computing scalars, colors, and vectors. Which one is chosen depends on what the texture is used for. Textures are used as parameters to other shaders, typically material shaders. A material shader could, for example, use a color texture to wrap a picture around an object, or a scalar texture as a transparency or displacement map, or a vector texture as a bump map. The actual use of the texture result is entirely up to the shader that uses the texture.
All of the above syntax variations define a texture texture_name. The texture_name should be quoted to avoid reserved words and to allow non-alphabetic characters. This is the name that the texture will later be referenced as.
Non-procedural textures can be defined by specifying the width and height of the texture and an optional depth (bytes per component, 1 or 2, default is 1; mental ray 3.0 also supports 4 for floating-point textures with four bytes per component), followed by a list of hexadecimal two-digit bytes, most significant digit first if depth is 2 or 4, in RGBA order for colors and UV order for vectors. Note that the brackets around the sizes are literally part of the .mi file, while the brackets around depth merely denote that the depth is optional and are not part of the .mi file. Textures defined literally by bytes in this way should be used with care because they can increase the scene file size enormously.
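As an illustrative sketch, a 1 by 2 pixel color texture with the default depth 1 could be written with four RGBA bytes per pixel, here an opaque red and an opaque green pixel (the grouping of the byte tokens is for readability only):

    color texture "two_pixels" [1 2]
        ff 00 00 ff
        00 ff 00 ff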
Non-procedural textures can also be defined by naming a texture or picture file. For a list of allowed file formats, see the section on Available Output File Formats. In this case, the sizes (width, height, and depth) are read from the file. If the local keyword is not present, the file is read once on the master host and then transmitted over the network to all slave hosts that participate in the rendering. With the local keyword, only the file name is transmitted to the slave hosts; this requires the file exists locally on all slave hosts but reduces network transfer times drastically if many texture files or very large texture files are used. Filename rewriting is available for interpreting the remote filename locally, for example to translate between Unix and Windows NT paths. Maximum speed improvements are achieved if local files are memory-mapped pyramids (see the -p option of the imf_copy tool), and reside on physical disks and not on NFS-mounted file systems (NFS stands for Network File System, distinguishable by the nfs type in the output of the Unix df command).
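For example, a texture that every slave host reads from its own disk could be declared as follows (the path is illustrative):

    local color texture "bark" "/maps/bark.map"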
If the writable (mental ray 3.x) keyword is present, the texture is written to a file after it has been filled in by a shader. This kind of texture is used by lightmap shaders to write back the light mapping result. Light mapping involves scanning the surface of an object, and collecting data for each point. This data is later written to a texture file. A typical example is ``baking'' indirect illumination into a texture that can then be simply texture-mapped in a later render pass, instead of computing the information at rendering time. If writable is specified, local should be specified as well because the file should be written to disk on the master host only.
The filter keyword, if present, enables texture filtering based on texture pyramids, a technique comparable to mip-map textures. Filtered textures are preprocessed before rendering begins and use approximately 30% more memory. Filtering should be used when the texture is large and seen at a distance, such that every sample covers many texture pixels. Without filtering, widely spaced samples ``overlook'' the areas between the samples; filtered textures perform a filter operation to take the skipped areas into account. The compression of the texture on the viewing plane can be scaled by the optional scale value if necessary.
When loading a texture image, it is checked whether the texture is memory-mappable. This is the case if the texture file has the special uncompressed .map format. If so, the texture is not loaded into memory but mapped into virtual memory. Memory-mapped textures use far less physical RAM and no swap space, but they use virtual memory. Memory mapping is especially useful for large textures that are not used often (i.e., many or most of their pixels are not sampled, or the textured object is small or far away from the camera), but is recommended for all nontrivial texture images. Memory-mapped textures are implicitly also local textures. Memory-mapped textures should be created with the imf_copy utility, with the -p option to create pyramids.
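For example, a memory-mappable pyramid texture could be created from an existing image with the -p option mentioned above (file names illustrative):

    imf_copy -p bark.tif bark.map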
Procedural textures are defined by naming a shading function with parameters; the shading function can either be one of the built-in functions or an external function from a code or link command.
When the material shader (or any other shader) evaluates a texture by calling a texture evaluation function, the program either performs a lookup into the non-procedural texture, with coordinates in the range [0, 1) in each dimension, or calls the named shader in the procedural case. The shader is free to interpret the point for which it evaluates the texture in any way it wants, two-dimensional or three-dimensional.
material "material_name"
[opaque]
shader_list
[displace [shader_list]]
[shadow [shader_list]]
[volume [shader_list]]
[environment [shader_list]]
[contour [shader_list]]
[photon [shader_list]]
[photonvol [shader_list]]
[lightmap [shader_list]]
end material
Materials determine the look of geometric objects. They are referenced by material_name in the geometry definition in object statements (see below). Lights and textures cannot be referenced by objects; they are referenced by the material which uses them to compute the color of a point on the object's surface. All built-in material shaders accept textures and light instances as shader parameters.
When a primary ray cast from the camera hits an object, that object's material shader (the first, mandatory, shader_list) is called. The material shader then calculates a color (and certain other optional values such as labels, depths, and normals that can be written to special output files). This color may then be modified by the optional volume shader if present. The resulting color is stored in the output frame buffer, which is written to the output picture file when rendering has finished. In order to calculate the color, the material shader may cast secondary reflection, refraction, or transparency rays, which in turn may hit objects and cause other (or the same; multiple objects may share a material) material shaders to be called. The material shader bases the decision whether to cast secondary rays on its parameters, which are part of the scene description and may contain parameters such as the material's diffuse color or its reflectivity and transparency, light instances, and textures. The parameters depend entirely on the material shader. In this sense, material shaders are ``primary'' shaders that get help from ``secondary'' texture and light shaders.
It is possible to specify a shader type such as shadow without following it with a shader_list. This is useful if an incremental change is done to the material. The incremental change leaves the contents of the material undisturbed except where explicitly rewritten, so the shadow shader list remains intact. It can be replaced by specifying a new one, but it can only be deleted with a shadow keyword not followed by any shaders. In an incremental change, the first statement (say, volume) first resets the old volume list; every subsequent volume statement in the same material block adds to the list.
The material_name should be quoted to avoid conflicts with reserved names and non-alphabetic characters. The opaque flag, if present, informs mental ray that this material is not transparent (i.e., it does not cast refraction or transparency rays and always sets its alpha result value to 1); this allows certain optimizations that improve rendering speed. The material shader and its parameters are the only mandatory part of a material.
There are several optional functions that can be listed in a material. The displacement shader is a function returning a scalar that displaces the object's surface in the direction of its normal, or in arbitrary directions. Displacement shaders can be applied to both free-form surface objects and polygonal objects.
The shadow shader is called when a shadow calculation is done, and the shadow ray from the light source towards the point in shadow intersects with this material. The shadow shader then changes the color of the ray, which is initially the (possibly attenuated) color of the light to another color, typically a darker or tinted color if the material is colored glass. It returns black if the material is totally opaque, which is also the default if no shadow shader is present. Shadow shaders are usually reduced versions of the material shaders; they evaluate transparencies and colors but cast no secondary rays. Shadow shaders are only required for transparent objects. If global illumination is enabled, no shadow shaders should be used because global illumination provides a more powerful way to compute light transmission, and using two ``competing'' methods at the same time for the same object may produce incorrect results. This is explained in more detail in the mental ray User's Guide.
It is possible to use the material shader as a shadow shader; material shaders can find out whether they are called as material or shadow shaders and do only the required subset in the latter case. This is done by naming the material shader after the shadow keyword, and giving no parameters (i.e., giving ()). mental ray will notice the absence of parameters and pass the material parameters instead. If the shadow shader has no parameters of its own, it is not defined whether it receives a pointer to the material shader parameters, or a pointer to a copy of the material shader parameters.
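As a sketch, a transparent material that reuses its material shader for shadow computation could look like this; the shader "mtl_glass" and its parameters are hypothetical:

    material "glass"
        "mtl_glass" ("ior" 1.5, "transparency" 0.9)
        shadow "mtl_glass" ()
    end material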
A volume shader affects rays traveling inside an object. Volume shaders are conceptually similar to fog or atmosphere shaders of other rendering programs. When a ray (either from the eye or from a light source) hits this material, the volume shader, if present, is called to change the color returned by the ray based on the distance the ray has traveled, and atmospheric or material parameters. A volume shader can also be named in the camera (see above); that shader is used for rays traveling outside objects. It is the material shader's responsibility to determine inside and outside of objects.
The environment shader is called when a reflection or refraction ray cast by the material shader leaves the scene entirely without striking another object. For example, the environment shader may map a texture on a sphere with an infinite radius surrounding the scene. (This is another example for an application of a texture; a texture name must be used as a parameter for the environment shader for this to work.) The camera statement also allows naming an environment shader; that shader is used when the ray leaves the scene without ever striking any object (or exceeding the trace depth).
If a contour shader is given, it is called when contours are enabled
with an appropriate output statement in the camera element, and certain
contour store and contour contrast shaders are specified in the options
element. For more information on contour rendering see chapter .
If caustics or global illumination computation is enabled, the photon shader is called during a preprocessing stage (before rendering) to determine the light distribution in the scene. Like shadow shaders, photon shaders without parameter lists are called with the material shader parameter lists. See the mental ray User's Guide for details.
A volume photon shader affects photons traveling inside an object. When a photon hits this material, the volume photon shader, if present, is called to trace the photon through the volume. Volume photon shaders are to volume shaders what photon shaders are to material shaders.
The lightmap shader is available only in mental ray 3.0. If present, it is called for the object that the material is attached to, and is expected to create a light map or other information collection about the object that can be saved to disk or used during rendering. In its most common form, the shader creates a texture that contains the illumination of the object; hence the term ``light map''.
Materials can be replaced with phenomena. In all places where the name of a material may be given, the name of a shader that references a phenomenon declaration of type material is legal. Given the following scene fragment:
declare phenomenon
    material "phen_mtl" (color "param")
    material "mtl"
        opaque
        "shader" ("diffuse" = interface "param")
    end material
    root material "mtl"
end declare
shader "mtl_sh" "phen_mtl" ("param" 1.0 0.7 0.3)
the name mtl_sh can be used like a material_name, for
example in polygon or free-form surface definitions in objects. For more
information on material phenomena, see section .
Note that there are three ways to use materials in a scene:
See section for more details on material lists and
material inheritance.
light "light_name"
shader_list
[ origin x y z ]
[ direction dx dy dz ]
end light
Lights have a large number of optional parameters that are used if global illumination, caustics or shadow maps are enabled. These techniques use a preprocessing step that analyzes how light travels through the scene. Lights that participate in this preprocessing stage must specify a number of extra parameters. For clarity, regular lights and more specialized lights are shown separately:
light "light_name"
shader_list
[ emitter shader_list ]
[ area_light_primitive ]
[ origin x y z ]
[ direction dx dy dz ]
[ spread spread ]
[ visible ]
[ tag labelint ]
[ data [ "data_name"|null ]]
[ energy r g b ]
[ exponent exp ]
[ caustic photons storeint [ emitint ]]
[ globillum photons storeint [ emitint ]]
[ shadowmap [ on|off ]]
[ shadowmap resolution resint ]
[ shadowmap samples numint ]
[ shadowmap softness size ]
[ shadowmap file "filename" ]
end light
This statement defines a light source. All light sources need a light shader, such as the mib_light_point shader in the base shader library, or another shader linked with a code or link command (see above). "shader" above stands for the quoted name of the shader. Like any other shader, a parameter list (possibly empty) enclosed in parentheses must be given. The parameters depend on the particular shader; they include the light color, attenuations, and spot light directions. The declaration of the shader determines which parameters are available in the parameter list; see chapter  for details on shader parameters.
mental ray distinguishes three kinds of light shaders: point lights, giving off light in all directions; directional (infinite) lights, whose light rays are all parallel in a particular direction, and spot lights which emit light from a point along a certain direction. Point lights must define an origin but no direction, while directional lights must define a direction but no origin. Spot lights must define an origin, a direction, and a spread. The spread defines the maximum angle of the cone defined along the direction in which the spot produces illumination. The value of spread is the cosine of this maximum angle; it must be between 1 (infinitely thin) and 0 (hemisphere). Spot lights often use a directional attenuation, but this is purely a function of the shader that is independent of the spread and direction keywords in the light definition. All types of lights can be turned into area light sources.
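For example, a spot light could be defined as follows; the shader name "mib_light_spot" from the base library and its parameter values are assumptions here:

    light "spot1"
        "mib_light_spot" ("color" 1 1 1)
        origin 0 10 0
        direction 0 -1 0
        spread 0.7
    end light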
After the definition, the light source can be instanced with an instance statement that references light_name. The instance can then be referenced in parameter lists of shaders (such as a material shader) by listing the light instance name. Material shaders normally have an array parameter accepting one or more light instances, which they loop over to accumulate the contribution by each light (unless they rely solely on the global light list). Light instances are one of the standard data types that are available for shader parameters. The light_name should be quoted to avoid clashes with predefined words, and to allow non-alphabetic characters.
Any point or spot light may be turned into an area light source by naming an area_light_primitive. Area light sources generate soft shadows because shadow-casting objects may partially obscure the light source. Four types of area light primitives are supported:
rectangle [ x0 y0 z0 x1 y1 z1 sampling ]
disc [ x y z radius sampling ]
sphere [ radius sampling ]
cylinder [ axis radius sampling ]
user sampling
object object_inst sampling
The common sampling substatement is optional:
[ u_samples v_samples [ level [ low_u_samples low_v_samples ]]]
All area light types are centered at the origin position given in the light definition. A rectangular area light is specified by two vectors that describe the lengths of the edges; a disc area light is specified by its normal vector and a radius; a sphere area light is specified only by its radius; and a cylinder area light is specified by its axis and radius. Note that the orientation of the rectangle, disc, or cylinder is independent of the direction and of any directional attenuation the shader applies, although both will generally be similar. Also note that the end caps of the cylinder are not sampled.
mental ray 3.1 supports a user-defined area light source. This requires a special light shader that controls the points at which it is sampled, instead of leaving the sample point location to mi_sample_light. The light shader is called in a loop until the shader decides it has been sampled enough and returns (miBoolean)2, or until the sample limit (u_samples times v_samples) is reached. User area light sources also do not apply the optimization that cancels light rays under the horizon of the illuminated point. It is not necessary (or desirable, because of self-shadowing issues) to set state->pri to null.
mental ray 3.1 also supports geometric area light sources for point lights, specified by the object keyword. Its first argument must be the instance of a single-group object that defines the geometry of the area light source. All points on the surface of the object will emit light uniformly. It is generally a good idea to keep the triangle count of the object as low as possible for maximum performance. The sampling rates usually have to be set much higher for object lights.
The u_samples and v_samples parameters subdivide the area light source primitive. For discs and spheres, u_samples subdivides the radius and v_samples subdivides the angle. For a cylinder, u_samples subdivides the height and v_samples subdivides the angle. When sampling the area light source, mental ray samples one point in each subdivision at a location precisely determined by the sample parameters and a predefined lighting distribution, and then combines the results. The default is 3 for each sample parameter, so an area light source without explicitly given samples parameters is sampled 9 times.
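For example, the following sketch turns a point light into a rectangular area light spanning two units along x and two units along z, sampled 4 times in each parameter direction (the light shader name and values are illustrative):

    light "area1"
        "mib_light_point" ("color" 1 1 1)
        origin 0 5 0
        rectangle 2 0 0  0 0 2  4 4
    end light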
If the optional level exists and is greater than 0, then mental ray will use low_u_samples and low_v_samples instead of u_samples and v_samples, respectively, if the sum of the reflection and refraction trace level exceeds level. The defaults for the low levels are 2. The effect is that reflections and refractions of soft shadows are sampled at lower precision, which can improve performance significantly. Since shaders have control over the trace level in the state, they can influence the switching depth, which can be used to sample soft volume shadows less precisely, for example.
If the rectangle, disc, sphere, or cylinder keyword is specified without any of the following arguments, then the light source reverts to a non-area light source. This is useful for incremental changes.
Light sources are by default invisible. However, area lights can be made visible by adding a visible flag to the light. Any visible flags on non-area lights are ignored since they have zero size. Light visibility cannot be inherited from the instance. It reduces performance if the number of visible lights is very large.
A label integer can be attached to a light using the tag statement. Labels are not used by mental ray in any way, but a shader can use the mi_query function to obtain the label of a light and perform light-specific operations.
Also, user data can be attached with a data statement. The argument must be the name of a previously defined data element in the scene file. If the argument is missing, a previously existing data reference is removed.
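Combining the area light statements described above, a rectangle area light might be sketched as follows. The shader name mib_light_point and its parameters are illustrative, the two vectors after rectangle are the edge vectors spanning the emitting area, and the five following numbers are assumed to be u_samples, v_samples, level, low_u_samples, and low_v_samples in that order:

light "area1"
"mib_light_point" (
"color" 1.0 1.0 1.0
)
origin 0.0 4.0 0.0
rectangle 1.0 0.0 0.0  0.0 0.0 1.0
4 4 2 2 2
visible on
tag 7
end light

With level 2, reflections and refractions deeper than two trace levels sample the light only 2 x 2 times instead of 4 x 4, as described above.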
The second light form is for caustics and global illumination.
It requires specification of the light energy. The light energy is
given as an RGB triple to allow colors, but the RGB values are
typically much higher than the usual 0...1 range for colors. The
number of photons stored from this light source in the preprocessing
step is determined by store, and the number of emitted photons is
determined by emit, if specified. When either limit is reached,
photon emission stops. If store is 0, emit must be specified and
storing is unlimited (this requires mental ray 2.1.37 or later).
Physical correctness demands a 1/r2 power law for energy falloff, causing the energy received from a light source to fall off with the square of the distance to the light source. However, the exponent parameter allows modification of the power law to 1/r^exp. For any exp other than 2, physical correctness is lost, but to achieve certain looks it is often useful to use exp values between 1 and 2 to reduce the falloff and better approximate classical, non-physically correct local illumination lights.
For caustics, one can specify a caustics photons value that controls the number of caustic photons stored during caustics preprocessing. Similarly, a globillum photons value can be specified for global illumination. Typical values range from 10,000 to 100,000; larger values improve accuracy and reduce blurriness.
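Putting these statements together, a point light set up for caustics and global illumination might be sketched as follows; the shader name mib_light_point and its parameter values are illustrative:

light "photon_light"
"mib_light_point" (
"color" 1.0 1.0 1.0
)
origin 0.0 5.0 0.0
energy 8000.0 8000.0 8000.0
exponent 2.0
caustic photons 50000
globillum photons 100000
end light

The energy values are far above the usual 0...1 color range, as described above, and exponent 2.0 keeps the physically correct inverse-square falloff.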
Shadow maps are controlled per light source, using the information about the light source type and the information provided by the shadow map keywords. Shadow maps are supported for spot lights with a cone angle of less than 90 degrees (i.e. spread > 0), for directional lights, and for point lights. A shadow map is activated for a light source by specifying the shadowmap keyword. The resolution of the shadow map, which controls both the quality and the amount of memory used, is specified with the shadowmap resolution keyword, which gives the width and height of the shadow map depth buffer in pixels. The shadowmap softness and shadowmap samples keywords determine the type of shadow produced with the shadow map: if the softness is zero, a sharp shadow is generated. If the softness is greater than zero, shadowmap samples different samples are taken from the shadow map, on a square region the size of shadowmap softness. This makes the boundaries of the shadows appear softer.
The softness is specified in internal space units on the shadow map projection plane. For directional lights, an orthographic projection is used, so the softness will be constant in the scene, the soft region being roughly the given softness value in size. For other lights, because of the perspective projection used, apparent softness will increase with distance from the light. This means that much smaller softness values are usually required for spot lights than for directional lights. If an excessively high softness value is specified, a warning is given during rendering. Very high values tend to blur the shadow out of existence. The number of samples determines the quality of the soft shadow.
The shadowmap file statement can be used to specify a shadow map file in which the shadow map is saved the first time it is rendered, and subsequently loaded every time it is used. In the case of point lights, six different files are saved, one for each direction (the resolution of each file is reduced so that the total number of pixels rendered is approximately resolution x resolution). If objects in the scene move, the old shadow map files should be deleted to prevent loading and re-using outdated shadow maps. If the file name contains the # character, it is expanded by mental ray into a hash code number identifying the transformation of the light instance. This is useful when a light is multiply instanced, because it allows distinguishing between files representing multiple instances of the same light. However, the user must take care to remove obsolete files or they will eventually fill all available disk space.
For spot light sources, the extent of the shadow map is determined by the spread parameter. For directional light sources, the extent of the shadow map is determined by the extent of the parts of the scene that cast shadows. For example, in a scene with small objects on a large background polygon, the small objects casting shadows should have a shadow flag, while the background polygon should not. Then the extent of the shadow map will only cover the small objects that cast shadows. If the large background polygon also has the shadow flag, the extent of the shadow map will be larger, and the shadow map will lack detail at the small objects where detailed shadows are needed.
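A spot light with a soft shadow map, combining the keywords above, might look like this; the shader name mib_light_spot, its parameters, and the file name are illustrative (the # in the file name is expanded to the instance hash code as described above):

light "spot1"
"mib_light_spot" (
"color" 1.0 1.0 1.0,
"cone" 0.8
)
origin 0.0 10.0 0.0
direction 0.0 -1.0 0.0
spread 0.7
shadowmap
shadowmap resolution 1024
shadowmap softness 0.01
shadowmap samples 16
shadowmap file "spot1_#.sm"
end light

The small softness value reflects the note above that spot lights, because of their perspective projection, usually need much smaller softness values than directional lights.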
data "name"
[ tag labelint ]
[ bytes list ]
end data
data "name"
[ tag labelint ]
"filename"
end data
data "name"
[ tag labelint ]
declaration_name (parameters)
end data
User data is arbitrary data stored in the scene file, which can be passed to shaders as a shader parameter of type data. This is useful for large amounts of data that is shared by several shaders, or too large or complex to be defined with individual shader parameters. There are three ways to define user data:
Shaders see a tag if they evaluate a parameter of type data. This tag can be accessed with the mi_query modes miQ_DATA_*, especially miQ_DATA_PARAM for accessing the data payload, and miQ_DATA_NEEDSWAP for determining whether a raw byte block (the first two data definition methods) needs swapping by the shader.
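For example, the three definition methods might be used as follows; the element names, byte values, file name, and declaration are illustrative, and the byte list syntax follows the grammar sketch above:

data "raw_block"
tag 1
bytes 0 1 2 3 255
end data

data "file_block"
tag 2
"lookup.bin"
end data

data "typed_block"
"my_data_decl" ("scale" 2.0)
end data

A shader parameter of type data can then name any of these elements, and the shader retrieves the contents with mi_query as described above.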
lightprofile "name"
format ies
file "filename"
[ flags flagsint ]
[ hermite degreeint ]
[ resolution xresint yresint ]
end lightprofile
Light profiles3.1 such as IES or Eulumdat are files supplied by lamp vendors to describe their products. They contain a mesh of measured light intensities. The light profile block in the scene file allows loading a light profile file on disk, and converting the data into an internal format that is efficient for lookup in light shaders using the mi_lightprofile_sample (or mi_lightprofile_value) shader interface function.
The only light profile formats supported at this time are IES and Eulumdat. Other formats such as CIBSE may be supported in later versions of mental ray. The format statement may not be omitted.
The file statement names the file supplied by the lamp vendor.
The flags statement can be used to override the horizontal sample order in an IES file. There are two IES file types in common use, type B and type C. The IES standard defines that samples are stored in counter-clockwise order. Type C files conform to this standard, but about 30% of the type B files deviate from the standard and store samples in clockwise order, without giving any indication in the IES file that mental ray could use to switch the order. (Sometimes there is an informal comment.) If flags is 1, mental ray assumes clockwise order contrary to the IES standard for these incorrect type B files. If flags is 2, mental ray assumes the normal counter-clockwise order. Type A IES files are not supported. Flags have no effect on Eulumdat light profiles.
At this time only linear and cubic interpolation (hermite 1 or hermite 3) are supported. The resolution statement defines the precision of this interpolation by specifying the number of points on the smoothed mesh; for linear interpolation it always matches the resolution of the sample mesh in the profile file.
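A complete light profile block using the statements above could read as follows; the profile name and file name are illustrative:

lightprofile "street_lamp"
format ies
file "vendor_lamp.ies"
flags 2
hermite 3
resolution 128 64
end lightprofile

Here hermite 3 selects cubic interpolation, so the resolution statement controls the precision of the smoothed mesh; with hermite 1 the resolution would match the sample mesh of the profile file, as noted above.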
All geometry is specified in either camera space
or object space, depending on the corresponding statement in the
options statement (see section ). In camera space mode,
the camera is assumed to sit at the coordinate origin and point
down the negative Z axis, and objects are defined using camera space
coordinates. In object space mode, the camera location is determined
by its instance, and objects are defined in local object coordinates
that are positioned in the scene with its object instance. Every
object, camera, and light requires an instance. Camera space mode is
only used for backwards compatibility with mental ray 1.x, and is now
obsolete and not recommended. All existing integrations of mental ray
on the market use object space exclusively.
The appearance of the object, such as color and transparency, is determined by naming materials in the object definition. Before a material can be used in an object, it must be defined. Naming the material determines all aspects of the object's appearance. No further parameters, textures, or lights need to be specified; they are all part of the material definition.
The two most common approaches to materials and objects are to name all materials first and then all objects, which may simplify the implementation of material editors because all materials can be kept in a separate file and included in the .mi file with a $include command; or to intersperse materials and objects. Either way, each material definition must precede its first use.
All polygonal and free-form surface objects have the same common format in the .mi file:
object "object_name"
[ visible [on|off] ]
[ shadow [on|off] ]
[ trace [on|off] ]
[ select [on|off] ]3.x
[ tagged [on|off] ]
[ caustic [on|off] ]
[ globillum [on|off] ]
[ caustic [mode] ]
[ globillum [mode] ]
[ box [minx miny minz maxx maxy maxz] ]3.x
[ motion box [minx miny minz maxx maxy maxz] ]3.x
[ max displace value ]3.x
[ samples min max ]3.1
[ data null|"data_name" ]3.x
[ tag label_numberint ]
[ file "file_name" ]3.x
[ basis list ]
group
[ merge epsilon ]
vector list
vertex list
geometry list
approximation list
end group
end object
The individual parameters are:
The mode argument controls the caustic operation: 1 enables caustic casting, 2 enables caustic receiving, 3 enables both, and 0 neither. off means that the object is invisible to caustic photons, and on is the same as 3. In the pool example, the water surface should have mode 1 and the floor should have mode 2. If the caustic keyword is given without a mode argument, the mode defaults to on (that is, 3). If no caustic keyword is given, caustics default to mode 0.
The mode argument controls the global illumination mode: 1 enables global illumination casting, 2 enables global illumination receiving, 3 enables both, and 0 neither. The default is specified by the options. off means that the object is invisible to global illumination photons, and on enables global illumination interactions with this object. In the table example, the red table should have (at least) mode 1 and the white wall should have (at least) mode 2. If the globillum keyword is given without a mode argument, the mode defaults to on (that is, 3).
If an object is very complex, it may be desirable to set only the visible flag but not the globillum flag, and create a second object that resembles the first one but is much simpler and set the globillum but not the visible flag on it. The effect is that the object appears unchanged, but simulation of global illumination is faster since a simpler object is used.
A value that is too large generates correct images but puts more pressure on the cache, so rendering may use more memory and run more slowly. In particular, mental ray 3.1 may suffer serious performance losses, easily by an order of magnitude, if the displaced object uses fine approximations. If the max displace value is too small, mental ray 3.0 may clip parts of the object; mental ray 3.1 always limits the absolute displacement to the max displace value.
At the end of each object group, approximation statements
can be given that control the tessellation method. They are
valid for both polygonal and free-form surface object groups.
In polygonal object groups, the approximation is used only for
polygons whose material contains a displacement shader.
Free-form surfaces are always controlled by their
approximations; see page for details.
The visible, shadow, trace, caustic, and globillum flags can be overridden by the instance using the standard inheritance mechanism. Instances can specify that a flag in the instanced object is turned on or off, or that it is left unchanged. The object flags are used only if all the instances from the root of the scene DAG down to the object all leave the flag unchanged.
Object groups contain the actual geometry. All geometry needs vector lists and vertex lists. The vector list contains 3D vectors that can describe points in space, normals, texture vertices, basis vectors, motion vectors, and others. Vectors are anonymous: they are triples of floating-point numbers separated by whitespace, without inherent meaning. They are numbered beginning with 0. Numbering restarts at 0 whenever a new object group starts.
mental ray also accepts a compressed binary format for vectors. Instead of three floating-point numbers, a sequence of 12 bytes enclosed in backquotes is accepted. These 12 bytes are the memory image of three floats in IEEE 754 format, using big-endian byte order. This format is intended for increasing translation and parsing speed when ray is connected to a native translator; it should not be used in files modified with text filters. Many filters and editors refuse to accept files containing binary data, or corrupt them.
Vertices build on vectors. In the .mi format, there is no syntactical difference between polygon vertices and control point vertices for free-form surfaces; both are collectively referred to as ``vertices'' in this discussion. All vertices define a point in space and optional vertex normals, motion vectors, derivatives, zero or more textures and basis vectors, and user vectors:
v indexint
[ n indexint ]
[ d indexint indexint [ indexint [ indexint indexint ] ] ]
[ t indexint [ indexint indexint ] ]
[ m indexint ]
[ u indexint ]
...
Polygon vertices may use all of these. Free-form surface control points may use v and m only; the others are either computed analytically or are specified in other ways as part of the surface definition.
Every vertex begins with a v statement and ends with the next v statement or with the start of the geometry description. All occurrences of index above reference the vector list; 0 is the first vector in this group. References of different types (for example, v and n) may not reference the same vector. As stated before, all vectors are 3D. If the third coordinate is not used (as is the case for 2D texture vertices, for 2D curve control points, and for 2D surface special points) it should be set to 0.0 by convention. If both the second and third coordinates are unused (as is the case for 1D curve special points), they should both be set to 0.0.
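For example, a polygon vertex built from position vector 0, vertex normal vector 4, a 2D texture vertex from vector 5, and motion vector 6 would be written:

v 0 n 4 t 5 m 6

The third coordinate of vector 5 would be set to 0.0 by the convention for 2D texture vertices described above.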
Vertices themselves are numbered independently of vectors. The first vertex in every group is numbered 0. The geometry description references vertices by vertex index, just as vertices reference vectors by vector index. This results in a three-stage definition of geometry:
The reason for this three-stage process is that it allows both sharing vectors and sharing vertices. This is best illustrated with an example. Consider two triangles ABC and ABD sharing an edge AB. (This example will use the simplest form of polygon syntax that will be described later in this section.) The simplest definition of this two-triangle object is:
object "twotri"
visible
group
0.0 0.0 0.0
1.0 0.0 0.0
0.0 1.0 0.0
1.0 0.0 0.0
1.0 1.0 0.0
0.0 1.0 0.0
v 0
v 1
v 2
v 3
v 4
v 5
p "material_name" 0 1 2
p 3 4 5
end group
end object
The first three vectors are used to build the first three vertices, which are used in the first triangle. The remaining three vectors build the next three vertices, which are used for the second triangle. Two vectors are listed twice and can be shared:
object "twotri"
visible
group
0.0 0.0 0.0
1.0 0.0 0.0
0.0 1.0 0.0
1.0 1.0 0.0
v 0
v 1
v 2
v 1
v 3
v 2
p "material_name" 0 1 2
p 3 4 5
end group
end object
The order of vector references is noncontiguous to ensure that the second triangle is in counter-clockwise order. Two vertices are redundant and can also be removed by sharing:
object "twotri"
visible
group
0.0 0.0 0.0
1.0 0.0 0.0
0.0 1.0 0.0
1.0 1.0 0.0
v 0
v 1
v 2
v 3
p "material_name" 0 1 2
p 1 3 2
end group
end object
The need for sharing both vectors and vertices can be shown by specifying vertex normals:
object "twotri"
visible
group
0.0 0.0 0.0
1.0 0.0 0.0
0.0 1.0 0.0
1.0 1.0 0.0
0.0 0.0 1.0
v 0 n 4
v 1 n 4
v 2 n 4
v 3 n 4
p "material_name" 0 1 2
p 1 3 2
end group
end object
In this last example, both vector sharing and vertex sharing take place. The normal in this example is actually redundant: if no normal is specified, mental ray uses the polygon normal. Defaulting to the polygon normal is slightly more efficient than interpolating explicitly specified vertex normals.
Two types of geometry can be contained in the geometry list: polygonal geometry and free-form surfaces. The next sections describe the syntax of polygonal geometry and free-form surface definitions, illustrated by examples.
If the optional mental matter product is available, objects can also contain subdivision surface geometry. Although part of the standard mental ray grammar (see the appendix), subdivision surfaces are not described here. Refer to the mental matter manual instead.
An object group permits only one type of geometry, either polygons or surfaces but not both. It is recommended that objects contain only a single object group, so normally objects are either polygonal or surface objects but not both at the same time. Also, vector sharing is supported only for vectors of the same type (point in space, normal, motion, texture, basis vector, derivative, or user vector). A vector may not be referenced by vertices once as a point in space and once as a normal, for example.
Polygonal geometry consists of polygons. For efficiency reasons, mental ray distinguishes simple convex polygons from general concave polygons or polygons with holes. Both are distinguished by keyword:
c ["material_name"] vertex_ref_list
cp ["material_name"] vertex_ref_list
p ["material_name"] vertex_ref_list
p ["material_name"] vertex_ref_list hole vertex_ref_list ...
If the enclosing object has the tagged flag set, mandatory label integers must be given instead of the optional materials:
c label_numberint vertex_ref_list
cp label_numberint vertex_ref_list
p label_numberint vertex_ref_list
p label_numberint vertex_ref_list hole vertex_ref_list ...
The c keyword selects convex polygons without holes. The results are unpredictable if the polygon is not convex. The cp keyword is a synonym for c for backwards compatibility; c should be used in new translators. The p keyword also renders concave polygons correctly, and allows specification of holes, using one or more hole keywords, each followed by a vertex_ref_list. If all polygons within the same object group are simple convex polygons containing three sides (i.e. triangles), mental ray will pre-process them in a more efficient manner than non-triangular polygons.
A vertex_ref_list is a list of non-negative integer indices that reference vertices in the vertex list of the group, described in the previous section. The first vertex in the vertex list is numbered 0.
Any vertex index can be used in both polygon and hole vertex_ref_lists. A polygon with n vertices is defined by n index values in the vertex list following the material name. The order of the polygon vertices is important. A counter-clockwise ordering of the vertices yields a front-facing polygon. The vertex list of a hole may be ordered either way. Any polygon violating this rule, for example because it has been displaced such that its new normal points the wrong way, causes the error message ``orientation of triangles inconsistent'' and the surface to be dropped.
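For example, a quadrilateral with a triangular hole, assuming vertices 0 through 3 form the counter-clockwise outer loop and vertices 4 through 6 describe the hole:

p "material_name" 0 1 2 3 hole 4 5 6

As noted above, the outer loop must be counter-clockwise for a front-facing polygon, while the hole loop may be ordered either way.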
The material name must have been defined before the object definition that contains the polygon definition, in a statement like
material "material_name"
...
end material
In both cases, it is recommended to quote the material name to avoid
conflicts with reserved words, and to allow arbitrary characters in the
name. For a detailed description of material definitions, see
section . Once a material name has been specified for
a polygon, it becomes the default material. All following polygons may
omit the material name. Polygons without explicit material use the same
material as the last polygon that does have an explicit material. Not
specifying materials improves parsing speed because no names must be
looked up in the symbol table.
If no material is specified, polygons remain without material; in this case the material from the closest instance up the scene DAG is used instead. This is called material inheritance. Tagged objects always inherit their material from the instance. A shader can distinguish polygons by using the miQ_GEO_LABEL mode of the mi_query function during rendering (not in displacement shaders).
The tessellation of polygons assumes that polygons are ``reasonably'' planar. This means that every polygon will be tessellated, but the exact subdivision into triangles does not attempt to minimize curvature. If the curvature is low, different tessellations cannot be distinguished, but consider the extreme case where the four corners of a tetrahedron are given as polygon vertices: the resulting polygon will consist of two triangles, but it cannot be predicted which two of the four possible triangles will be chosen.
The behavior differs for convex polygons without holes (c keyword) and polygons which contain holes or are concave (p keyword). Convex polygons without holes are triangulated by picking a vertex on the outer loop and connecting it with every other vertex except its direct neighbors. If polygons are not flagged with the c keyword but do not have any holes, an automatic convexity test is performed, and if they are indeed convex they are triangulated as described. Convex polygons with holes and concave polygons are triangulated with a different algorithm. In either case a projection plane is chosen such that the extents of the projection of the bounding box of the (outer) loop have maximal size. If the projection of the polygon onto that plane is not one-to-one, the results of the triangulation will be erroneous.
If a textured polygon's material contains a displacement map the
vertices are shifted along the normals accordingly. If an approximation
statement is given triangles are subdivided until the specified criteria
are fulfilled; see section for details.
Free-form surfaces are polynomial patches of any degree up to twenty-one.2.4 Supported basis types include Bézier, Taylor, B-spline, cardinal, and basis-matrix form. Any type can be rational or non-rational. Patches can be explicitly or automatically connected to one another, or may be defined to contain explicitly defined points or curves in their approximation. Various approximation types are supported, including (regular) parametric, spatial, curvature-dependent, and view-dependent approximations, and combinations of these. mental ray 3.1 also introduces fine approximation, which can generate microtriangle tessellations very efficiently. Surfaces may be bounded by a trimming curve, and may contain holes.
Surface geometry, like polygonal geometry, is defined by a series of sections. An object containing only surface geometry follows this broad outline:
object "object_name"
[ visible [on|off] ]
[ shadow [on|off] ]
[ trace [on|off] ]
[ select [on|off] ]3.x
[ tagged [on|off] ]
[ caustic [on|off] ]
[ globillum [on|off] ]
[ caustic [mode] ]
[ globillum [mode] ]
[ box [minx miny minz maxx maxy maxz] ]3.x
[ motion box [minx miny minz maxx maxy maxz] ]3.x
[ max displace value ]3.x
[ samples min max ]3.1
[ data null|"data_name" ]3.x
[ tag label_numberint ]
[ basis list ]
group
[ merge epsilon ]
vector list
vertex list
[ list of curves ]
surface
[ list of surface derivative requests ]
[ list of texture or vector surfaces ]
... # more surfaces
[ list of approximation statements ]
[ list of connection statements ]
end group
end object
Curves, surfaces, approximations, and connections may be interspersed as long as names are defined before they are used. For example, a curve must come before the surface it is trimming, and an approximation must come after the surface to be approximated. Texture and vector texture surfaces must always directly follow the surface they apply to. The individual sections are:
For a description of vector lists and vertex lists, refer to
page .
When surfaces and curves are present within an object group, it is mandatory that at least one basis has been defined within the object. Bases define the degree and type of polynomials (denoted by Ni, n below) to be used in the description of curves or surfaces. Curves and surfaces reference bases by name. Every surface needs two bases, one for the U and one for the V parameter direction. Both can have a different degree, but must have the same type (for example, rational Bézier in U and Cardinal in V is not allowed). There are five basis types:
basis "basis_name" [rational] taylor degreeint
basis "basis_name" [rational] bezier degreeint
basis "basis_name" [rational] cardinal
basis "basis_name" [rational] bspline degreeint
basis "basis_name" [rational] matrix degreeint stepsizeint basis_matrix
A parametric representation may be either non-rational or rational as indicated by the rational flag. Rational curves and surfaces specify additional weights at each control point. This flag is optional; it can also be specified in the curves and surfaces that reference the basis.
The degree specifies the degree of the polynomials used in the description of curves or surfaces. Recall that the degree of a polynomial is the highest power of the parameter occurring in its definition. When bases of degree 1 are used control points are connected with straight lines. Cardinal bases always have degree 3. The degree and the type combined determine the length of the parameter vector and the number of control points needed for the surface. The meaning of the parameter vector differs for the different basis types. This is described in detail below.
The supported polynomial types for curves and surfaces are bezier, bspline, taylor, cardinal and matrix.
taylor specifies the monomial (power) basis functions Ni,n(t) = t^i, for i = 0, ..., n.
bezier specifies the Bernstein basis functions Ni,n(t) = C(n,i) t^i (1 - t)^(n-i), for i = 0, ..., n, where C(n,i) is the binomial coefficient.
cardinal specifies third-degree curves and surfaces. The cardinal splines, also known as Catmull-Rom splines, are most easily formulated as a conversion from Bézier form. If we let Bi,3(t) be the cubic Bézier basis functions (i.e., the above basis functions Ni,n(t) with n = 3), then we may write the cardinal basis functions as

N0,3(t) = -(1/6) B1,3(t)
N1,3(t) = B0,3(t) + B1,3(t) + (1/6) B2,3(t)
N2,3(t) = (1/6) B1,3(t) + B2,3(t) + B3,3(t)
N3,3(t) = -(1/6) B2,3(t)
bspline specifies a non-uniform B-spline representation whose basis functions are given by the following recursive definition over the knot vector (x0, ..., xq):

Ni,0(t) = 1 if xi <= t < xi+1, and 0 otherwise
Ni,n(t) = ((t - xi) / (xi+n - xi)) Ni,n-1(t) + ((xi+n+1 - t) / (xi+n+1 - xi+1)) Ni+1,n-1(t)

where terms with a zero denominator are taken to be zero.
A matrix (bi,j), with 0 <= i <= n and 0 <= j <= n, specifies the basis functions:

Ni,n(t) = bi,0 + bi,1 t + ... + bi,n t^n
When a curve or surface is being evaluated and a transition from one segment or patch to the next occurs, the set of control points (the `evaluation window') used is incremented by the stepsize. The appropriate stepsize depends on the representation type expressed through the basis matrix and on the degree.
Consider a curve with k control points {v1, ..., vk}. If the curve is of degree n, then n + 1 control points are needed for each polynomial segment. If the stepsize is given as s, then the (i + 1)-th polynomial segment will use the control points {v(is+1), ..., v(is+n+1)}. For example, for Bézier curves s = n, whereas for cardinal curves s = 1. For surfaces, the above description applies independently to each parametric dimension.
The basis_matrix specifies the basis functions used to evaluate a parametric representation. For a basis of degree n the matrix must be of size (n + 1) x (n + 1). The matrix is laid out in the order b0,0, b0,1, ..., b0,n, ..., bn,n. Note that the generalization to the rational case is admitted for all representations.
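As a worked example of this layout, the degree-3 Bézier (Bernstein) basis can equivalently be written as a matrix basis with stepsize 3, since B0,3(t) = 1 - 3t + 3t^2 - t^3, B1,3(t) = 3t - 6t^2 + 3t^3, B2,3(t) = 3t^2 - 3t^3, and B3,3(t) = t^3; row i holds the coefficients bi,0, ..., bi,n of Ni,3(t):

basis "bez3m" matrix 3 3
1.0 -3.0  3.0 -1.0
0.0  3.0 -6.0  3.0
0.0  0.0  3.0 -3.0
0.0  0.0  0.0  1.0

A surface referencing "bez3m" evaluates exactly like one referencing a basis declared as bezier 3.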
As an example, an object containing a nonrational Bézier surface of degree 3 in one parameter direction and degree 1 in the other parameter direction needs two bases defined at the beginning of the object like this:
object "mysurface"
visible
basis "bez1" bezier 1
basis "bez3" bezier 3
group
...
The surface definition will reference the two bases by their names, bez1 and bez3.
A surface specifies a name and a list of control points. For both parametric dimensions it specifies a basis, a global parameter range, and a parameter list. Optionally, it specifies surface derivative requests, texture surfaces, trimming curves, hole curves, special curves and special points. Special curves and points are included as edges and vertices in the approximation (triangulation) of the surface.
surface "surface_name" "material_name"
"u_basis_name" range u_param_list
"v_basis_name" range v_param_list
hom_vertex_ref_list
[ derivative_request ]
[ texture_surface_list ]
[ surface_specials_list ]
If the enclosing object has the tagged flag
set, a label integer must be given instead of a material name (see
page ). This changes the first line of the preceding
syntax block to:
surface "surface_name" label_numberint
The bases used in the definition of the surface must have been defined in the basis list of the object. They are referenced by their basis_names. Their ranges consist of two floating-point numbers specifying the minimum and maximum parameter values used in the respective direction.
The parameter_lists in the basis specifications define the number
of patches of the surface and the number of control points. For bases
of the types taylor, bezier, cardinal and matrix
such a parameter_list consists of a strictly increasing list
of at least two floating-point numbers. For bspline bases the
parameter_lists specify the knot vector. If the B-spline
basis to be used is of degree n the knot vector (x0,..., xq) must
have at least q + 1 = 2(n + 1) elements. Knot values represent a monotone
sequence of floating-point numbers but are not necessarily strictly
increasing, i.e. xi <= xi+1. Moreover, they must satisfy the
following conditions:
(1) x0 < xn+1
(2) xq-n-1 < xq
(3) xi < xi+n for 0 < i < q - n - 1
(4) xn <= tmin < tmax <= xq-n
where [tmin, tmax] is the range over which the B-spline is to be evaluated. Equation (1) demands that no more than n + 1 parameters at the beginning of the parameter list may have the same value. Equation (2) is the same restriction for the end of the parameter list. Equation (3) says that in the middle of the parameter list, at most n consecutive parameters may have the same value. To generate closed B-spline curves, it is often necessary to write a parameter list where the first n and last n parameters in the list produce initial and final curve segments that should not become part of the curve; in this case equation (4) allows choosing a start and end parameter in the range bounded by the first and last parameter of the parameter list.
The number of control points per direction can be derived from the number of parameters p, the degree of the basis n, and the step size s. Their total number can be calculated by multiplying the numbers taken from the following table for each of the U and V directions.
type | min # of parameters | # of control points |
Taylor | 2 | (p - 1) · (n + 1) |
Bézier | 2 | (p - 1) · n + 1 |
cardinal | 2 | p + 2 |
basis matrix | 2 | (p - 2) · s + n + 1 |
B-spline | 2(n + 1) | p - n - 1 |
Note that only certain numbers of control points are possible; for example, if the U basis is a degree-3 Bézier, the number of control points in the U direction can be 4, 7, 10, 13, and so on, but not 3 or 5. For B-spline bases of degree 3 the minimum number of parameters is 8 corresponding to 4 control points.
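As a worked example, a degree-3 B-spline basis referenced with a clamped knot vector of p = 8 parameters yields p - n - 1 = 4 control points, i.e. a single cubic segment; written in the style of the surface basis references shown below:

"bspl3" 0.0 1.0  0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0

Here q = 7, so q + 1 = 8 = 2(n + 1) as required, and the conditions above hold: x0 = 0 < x4 = 1, x3 = 0 < x7 = 1, and x3 <= tmin = 0 < tmax = 1 <= x4.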
Each vertex reference in the hom_vertex_ref_list is an integer index into the vertex list of the current group in the object (index 0 is the first vertex). When the surface is rational, homogeneous coordinates must be given with the control points, by appending a floating-point weight to every vertex reference integer in the hom_vertex_ref_list. There are two methods for specifying weights: either a simple floating-point number that must contain a decimal point to distinguish it from an integer index, or the keyword w followed by a weight value that need not contain a decimal point. The w keyword method is recommended because it eliminates the requirement that numbers contain decimal points, so translators can use %g format specifiers. Weights are used only if the surface is rational and ignored otherwise. If a weight in a rational surface is missing, it defaults to 1.0.
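For example, the beginning of a hom_vertex_ref_list for a rational surface might mix both weight notations; here vertices 0 through 3 carry weights 1.0, 2.0, 0.5, and the default 1.0:

0 1.0  1 w 2  2 0.5  3

The first weight must contain a decimal point to distinguish it from a vertex index, whereas the w keyword form may omit it, which is why the w form is recommended above.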
The surface specials list is used to define trimming curves, hole curves, special curves, and special points (vertex references). A surface may be further modified by approximation and connection statements, as described below.
For example, an object with a simple degree-3 Bézier surface can be written as:
object "mysurface"
    visible
    basis "bez3" bezier 3
    group
        0.314772 -3.204608 -7.744229    # vector 0
        0.314772 -2.146943 -6.932366
        0.314772 -1.089277 -6.120503
        0.314772 -0.031611 -5.308641
        -0.660089 -2.650739 -8.465791   # vector 4
        -0.660089 -1.593073 -7.653928
        -0.660089 -0.535407 -6.842065
        -0.660089 0.522259 -6.030203
        -1.634951 -2.096869 -9.187352   # vector 8
        -1.634951 -1.039203 -8.375489
        -1.634951 0.018462 -7.563627
        -1.634951 1.076128 -6.751764
        -2.609813 -1.543000 -9.908914   # vector 12
        -2.609813 -0.485334 -9.097052
        -2.609813 0.572332 -8.285189
        -2.609813 1.629998 -7.473326
        v 0 v 1 v 2 v 3                 # vertices
        v 4 v 5 v 6 v 7
        v 8 v 9 v 10 v 11
        v 12 v 13 v 14 v 15
        surface "surf1" "material"
            "bez3" 0.0 1.0 0.0 1.0
            "bez3" 0.0 1.0 0.0 1.0
            0 1 2 3 4 5 6 7
            8 9 10 11 12 13 14 15
    end group
end object
First, 16 vectors are defined, each of which is used to build one vertex (control point). Next, a surface is defined that uses basis bez3 for both the U and V parameter directions. Since the surface is built from only one 4 x 4 Bézier patch, the parameter vector after the basis range has only length 2. If there had been two patches in the U direction and three in the V direction, the bases would have been referenced as
"bez3" 0.0 1.0 0.0 0.5 1.0 "bez3" 0.0 1.0 0.0 0.33333 0.66667 1.0
Alternatively, the parameter vector may be given as
"bez3" 0.0 2.0 0.0 1.0 2.0 "bez3" 0.0 3.0 0.0 1.0 2.0 3.0
by changing the parameter range of the basis. This has no influence on the geometry of the surface, but generates UV texture coordinates in a different range (here, [0.0, 2.0] x [0.0, 3.0]). However, a different parametrization does affect the texture surface range (see below), and the range of trimming, hole, and special curves (which do not define their own ranges but borrow the range from the surface they apply to).
The optional surface_specials_list that completes the surface definition is a sequence of trimming curves, hole curves, special curves, and special points as described in the next section.
mental ray can automatically generate surface derivative vectors if requested. First derivatives describe the UV parametric gradient of a surface; second derivatives describe the curvature. They are computed and stored only if requested by derivative_request statements in the surface definition:
derivative numberint [numberint]
There can be one or more derivative statements that request first and/or second derivatives. Valid values for number are 1 and 2, for first and second derivatives, respectively.
mental ray does not use derivative vectors itself but makes them available to shaders. First derivatives are presented as two vectors (dPdu and dPdv, with P being the point in space); second derivatives are presented as three vectors (d2Pdu2, d2Pdv2, and d2Pdudv). This is the same format that can be explicitly given for polygonal data using the d keyword in vertices. Surfaces always compute the vertex derivatives analytically; explicit vertex derivatives given by d keywords are ignored.
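For example, a surface requesting both first and second derivatives might be written as follows (a sketch based on the Bézier surface example above; the placement of the derivative statement within the surface definition follows the derivative_request grammar):

    surface "surf1" "material"
        "bez3" 0.0 1.0 0.0 1.0
        "bez3" 0.0 1.0 0.0 1.0
        0 1 2 3 4 5 6 7
        8 9 10 11 12 13 14 15
        derivative 1 2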
A plain surface statement defines the geometry of the surface. If a texture is to be mapped on the surface, it is necessary to include texture surfaces. A texture surface defines a mapping from raw UV coordinates to texture coordinates as used by shaders. A vector texture is a variation of a texture surface that additionally defines a pair of basis vectors; it is used for bump mapping.
The texture or vector texture directly following a surface defines texture space number 0, the next defines texture space number 1, and so on, exactly like the first t statement after the v statement in a vertex used for building polygonal geometry defines texture space number 0, the next t defines texture space number 1, and so on. Basically, texture and vector texture surfaces replace the t statements used by polygonal geometry, because attaching textures to control points that usually are not part of the surface is not useful.
Texture spaces are what ends up in the state->tex_list array, where texture shaders can access them to decide which texture is mapped which way. Texture space 0 is the first entry in that array; it is used by the shader for the first texture listed in the texture list in the material definition. In general, there is one texture space per texture on a material, although shaders making nonstandard use of texture spaces could be written.
The syntax for texture surfaces is a simplified version of geometric surfaces. The texture_surface_list in the grammar summary at the beginning of the ``Surfaces'' section above expands to zero or more copies of the following block:
[ volume ] [ vector ] texture
"u_basis_name" u_param_list
"v_basis_name" v_param_list
vertex_ref_list
Unlike geometric surfaces, no surface name or material name is given. Bases are given as in geometric surfaces. Texture surfaces use the ranges of the geometric surface they are attached to; the ranges are not repeated in the texture surface basis statements. The vertex_ref_list follows the same rules as the geometric surface's vertex_ref_list. Texture surfaces have no specials such as trimming curves or holes.
The optional volume keyword in the texture surface definition disables seam compensation. It should be used for 3D textures where each texture vector should be used verbatim. If the volume flag is missing, the tessellator detects textures that span the geometric seam on closed surfaces, and prevents rewinding. Consider a sphere with a 2D texture that is shifted slightly in the U parameter direction: a triangle might have u0 = 0.0 on one side and u1 = 0.1 on the other side. If the texture is shifted towards higher u coordinates by 0.05, u0 and u1 will map to texture coordinates t0 = 0.95 and t1 = 0.05, assuming an otherwise normal UV mapping. Even though u0 < u1, t0 >> t1, causing a fast ``rewind'' of the texture. Seam compensation corrects t1 to 1.05. This is undesirable for 3D textures, which should have the volume keyword set. Most problems with strangely shifted textures are caused by inappropriately used or missing volume keywords.
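A 3D texture surface with seam compensation disabled might look like this (a sketch; the basis and vertex indices are illustrative):

    volume texture "bez1" 0.0 1.0
                   "bez1" 0.0 1.0
                   16 17 18 19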
The optional vector keyword in the texture surface definition is a flag that causes bump basis vectors to be calculated during tessellation. This flag must be used if the texture surface is used for a bump map that expects to find bump basis vectors in the geometry. However, this is an extremely rare requirement: none of the standard shaders (base, physics, and contour) or any standard modeling tool integration shaders require such bump basis vectors, so automatic bump basis vector generation is largely obsolete now. It was originally introduced for Wavefront (not Alias|Wavefront) compatibility.
This is an example of the simplest of all texture surfaces, a bilinear mapping:
object "mysurface"
    visible
    basis "bez1" bezier 1
    basis "bez3" bezier 3
    group
        # ... 16 vectors used for the surface
        0.0 0.0 0.0             # vector number 16
        0.0 1.0 0.0             # vector number 17
        1.0 0.0 0.0             # vector number 18
        1.0 1.0 0.0             # vector number 19
        # ... 16 vertices used for the surface
        v 16                    # vertex number 16
        v 17                    # vertex number 17
        v 18                    # vertex number 18
        v 19                    # vertex number 19
        surface "surf1" "material"
            "bez3" 0.0 1.0 0.0 1.0
            "bez3" 0.0 1.0 0.0 1.0
            0 1 2 3 4 5 6 7
            8 9 10 11 12 13 14 15
            texture "bez1" 0.0 1.0
                    "bez1" 0.0 1.0
                    16 17 18 19
    end group
end object
This texture surface defines a bilinear mapping from the UV coordinates computed during surface tessellation to the texture coordinates. To define other than bilinear mappings, the texture surface needs to have more control points than just one at every corner of the surface. Whenever the surface tessellator generates a triangle vertex, it uses the UV coordinate of that vertex to look up the texture surface and interpolate the texture coordinate from the nearest four points of the texture surface. The resulting texture coordinate is stored with the vertex and becomes available in state->tex_list when the material shader is called because a ray has hit the surface.
If more than one texture surface is given, one texture coordinate is computed for each texture surface and stored in sequence in the generated triangle vertices. Each texture surface is said to define a ``texture space''. They are available in the state->tex_list array in the same order. The number and order of texture surfaces should agree with the number and order of textures given in the texture list in the material definition. (Note that not all material shaders support multiple textures.)
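Two texture surfaces, defining texture spaces 0 and 1, could follow the geometric surface like this (a sketch that assumes four additional texture vertices 20 through 23 have been defined in the group):

    texture "bez1" 0.0 1.0
            "bez1" 0.0 1.0
            16 17 18 19
    texture "bez1" 0.0 1.0
            "bez1" 0.0 1.0
            20 21 22 23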
If the material name of a surface is empty (two consecutive double quotes), the surface uses the material from the closest instance (this is called material inheritance).
Curves are two-dimensional parametric curves when they are referenced by surfaces. They are used as trimming curves, hole curves, and special curves. They must be defined before the surface which references them. Curves are three-dimensional parametric curves when referenced by space curves. A curve is defined as:
curve "curve_name" "basis_name"
parameter_list
hom_vertex_ref_list
[ special special_point_list ]
The parameter_list of a curve is a list of monotonically increasing floating-point numbers that define the number of segments of the curve and the number of control points. Curve parameter lists work very much the same way as surface parameter lists except that no range needs to be provided, because they are supplied by the surfaces that reference the curve under consideration as explained in the next section. For details on parameter lists, see the sections on bases and surfaces above.
Each vertex reference in the hom_vertex_ref_list is an integer index into the vertex list of the current group in the object (index 0 is the first vertex), optionally followed by the keyword w and a weight value. (For backwards compatibility, the w keyword may be omitted if the weight is a floating-point value containing a decimal point.) Weights are used only if the curve is rational; they are ignored otherwise. If a weight in a rational curve is missing, it defaults to 1.0. The vertices indexed by the integers in the hom_vertex_ref_list should have no normals or textures (no n and t statements), and the third component of the vector (v statement) should be 0.0 because curves are defined in UV space, not 3D space.
The optional special_point_list specifies points that are included in the approximation of the curve. After the special keyword, a sequence of integers follows that index into the vertex list, just like the integers in the hom_vertex_ref_list. The first component of the vector is used as the t parameter; it forces the point on the curve at parameter value t to become part of the curve approximation. Of course t must be in the range of parameters allowed by the surface definition.
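A curve with a special point might be declared as follows (a sketch; here vertex 4 is assumed to reference a vector whose first component is, say, 2.5, forcing the curve point at t = 2.5 into the approximation):

    curve "c1" "bez1"
        0.0 1.0 2.0 3.0 4.0
        0 1 2 3 0
        special 4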
A surface may reference curves to trim the surface, to cut holes into it, and to specify ``special curves'' that become part of the tessellation of the surface. Special points in surfaces work like special points in curves, except that they provide a point in the parameter range of the surface, that is, a two-dimensional UV coordinate, rather than a one-dimensional curve parameter. They specify single points on the surface that are to be included in the tessellation. As all curves and points are in UV space, the third component of the vectors provided for them is ignored. None of the above types of curves and points may exceed the range of (0.0, 1.0) at any point.
No two curves may intersect each other, and no curve may self-intersect. This is important because trimming and hole curves that are not closed, or that intersect themselves or other loops, can produce unexpected tessellation results.
Trimming, hole, and special curves and special points are defined at the end of the surface definition. The curves are composed of segments from the list of curves of the surface's group. The surface_specials_list given in the previous section is a list of zero or more of the following four items:
trim "curve_name" min max
...
hole "curve_name" min max
...
special "curve_name" min max
...
special vertexint
...
The dots indicate that each trim, hole, and special statement may be followed by more than one curve segment or vertex, respectively. All listed segments are concatenated to form a single curve.
The vertex integers specify vertices from the vertex section of the current group in the current object. Such a vertex specifies the UV coordinate of the special point that is to be included in the tessellation.
Each of the three types of curves references a curve that has been defined earlier with a curve statement. If a single trim, hole, or special statement is followed by more than one curve, the resulting trimming, hole, or special curve is pieced together by concatenating the given curves. The min and max parameters allow using only part of the curve referenced. min and max must be in the range of the parameter vector of the curve, which in turn must be mapped into the parameter range of the surface. The min and max parameters of two different curve pieces are independent; they depend only on the curve parameter lists. For example, a trimming curve can be built from two curves, using the first three quarters of the first curve and the last three quarters of the second curve:
curve "trim1" "bez1"
    0.0 1.0 2.0 3.0 4.0
    0 1 2 3 4
curve "trim2" "bez1"
    0.0 1.0 2.0
    3 5 0
surface "patch1" "mtl"
    "bez3" 0.0 1.0 0.0 1.0
    "bez3" 0.0 1.0 0.0 1.0
    6 7 8 9 10 11 12 13
    14 15 16 17 18 19 20 21
    trim "trim1" 0.0 3.0
         "trim2" 0.5 2.0
Both trimming curves use the basis bez1, which is assumed to be a degree-1 linear curve. Hence, trim1 connects the UV vertices 0, 1, 2, 3, and 4 with straight lines, and trim2 connects the vertices 3, 5, and 0. If these two curves are put together by the trim statement in the surface definition, all parts of the surface that fall outside the polygon formed by the UV vertices 0, 1, 2, 3, and 5 are trimmed off. The trim2 curve includes vertex 0 to close the trimming curve. Holes and special curves are constructed exactly the same way. Trimming curves and holes must form closed loops but special curves are not restricted in this way.
Note that trimming and hole curves must be listed in the correct order, outside in. If there is an outer trimming curve, it must be listed first, followed by the holes. If a hole has a hole, the inner hole must be listed after the outer hole. Since curves may never intersect, there is always an unambiguous order: if a curve A encloses curve B, A must be listed before B. Curves that do not enclose one another can be listed in any order.
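For example, an outer trimming curve, a hole, and a smaller hole nested inside that hole would be listed outside in (a sketch with illustrative curve names):

    trim "outer" 0.0 1.0
    hole "hole1" 0.0 1.0
    hole "hole2" 0.0 1.0    # lies inside "hole1", so it is listed after it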
This example omits the vector and vertex parts of the group in the object. Here is an example that defines a complete object containing a surface with a trimming curve that precisely follows the outer boundary. A trimming curve that follows the outer surface boundary does not actually clip off any part of the surface, but it is still useful if surfaces are to be connected, because connections work on trimming curves.
object "mysurface"
    visible
    basis "bez1" bezier 1
    basis "bez3" bezier 3
    group
        # ... 16 vectors used for the surface
        0.0 0.0 0.0             # vector number 16
        1.0 0.0 0.0             # vector number 17
        1.0 1.0 0.0             # vector number 18
        0.0 1.0 0.0             # vector number 19
        # ... 16 vertices used for the surface
        v 16                    # vertex number 16
        v 17                    # vertex number 17
        v 18                    # vertex number 18
        v 19                    # vertex number 19
        curve "trim1" "bez1"
            0.0 0.25 0.5 0.75 1.0
            16 17 18 19 16
        surface "surf1" "material"
            ...
            trim "trim1" 0.0 1.0
    end group
end object
The trimming curve in the example is linear, using a degree-1 Bézier basis. This means that the parameter vector has five equally-spaced parameters, one for each corner in counter-clockwise order and back to the first corner to close the trimming curve. Trimming and holes always require a closed curve or sequence of curves (they can be pieced together by multiple curves as long as the pieces form a closed loop together). The results are undefined if trimming or hole loops are not closed, or intersect.
If the trimming curve were a degree-3 Bézier going through four corner points, a parameter vector with 3 . 5 + 1 = 16 parameters would be required (again, the 5 is the number of corners visited including the return to the first to close the curve).
For details on the parameter vector following the basis name in the
definition of the curve, refer to section . The bases and
parameter vectors for curves and surfaces follow the same rules, except
that curves have no explicit range; they always use the implicit range
given by the parameter list.
Free-form surfaces may be connected to each other either manually, using the connect statement, or automatically, using the merge statement.
The connect statement is ``manual'' in that it requires an explicit
specification of the parts of two surfaces to be connected. These parts refer
to intervals of trimming curves or hole curves of these surfaces,
see section .
Edge merging is ``automatic'' in that it only requires the specification
of a merge epsilon. Any surfaces having boundary
components that are within epsilon of each other will then automatically
get connected along these components. Effectively, all gaps narrower than
epsilon will get eliminated.
A merge epsilon can either be defined in an object group or instance group.
When a merge epsilon is specified in an object group, it merges all free-form surfaces contained in that group:
object "object_name"
[ ... ]
group
[ merge epsilon ]
...
end group
end object
When a merge epsilon is specified in an instance group, it merges all surfaces contained in all the objects in the sub-scene rooted at the instance group. The merge epsilon can be inserted directly after the instance group name:
instgroup "instgroup_name"
[ merge epsilon ]
[ ... ]
end instgroup
The merge epsilon determines the maximum
gap between boundary components of surfaces (specified as
trimming curves or hole curves) that still leads to the
automatic connection of these components.
The smaller this epsilon, the closer any two
surfaces must be to become merged. Trimming curves have to be specified for
each surface that participates in automatic edge merging; for an example of
a simple trimming curve that goes around the edge of a surface, see
section .
An example that uses edge merging is given below.
A connection is defined as:
connect "surface_name1" "curve_name1" min1 max1
"surface_name2" "curve_name2" min2 max2
This statement closes the gap between two surfaces surface_name1 and surface_name2 by connecting their trimming curves curve_name1 and curve_name2. The curves are connected only in the range (min1...max1) and (min2...max2), respectively. They share the same points, but normals, textures, etc. are evaluated on the individual surfaces. Only surfaces that have trimming curves can be connected by an explicit connect statement. Trimming curves used in connections must satisfy three conditions:
The range values min1/max1 and min2/max2 must not exceed the range of the trimming curve segment as referenced by a trim statement of the corresponding surface. The minimum value must be less than the maximum value; it is not possible to satisfy the third condition by inverting the range.
Best results are obtained if the curves to be connected are close to each other in world space and have at least approximately the same length. Neither the connect nor the merge statement is meant to be a replacement for proper modeling. For carefully modeled surfaces these techniques will not be necessary most of the time. Their purpose is to close small cracks between adjacent surfaces that are already not too far from each other. Topologically complex situations with several connections meeting in a point are beyond their scope.
Here is an example of two surfaces that meet along one of their edges such that a gap remains. Either the merge or the connect keyword may be used to close that gap. The four control points defining the straight trimming curves that are connected are marked as #0, #1, #2, and #3; the control points of the second surface marked (*) have been modified slightly to create the gap. This is a complete .mi file that can be rendered directly.
verbose on
link "base.so"
$include <base.mi>

options "opt"
    samples -1 1
    contrast .1 .1 .1 .1
    trace depth 2 2
end options

camera "cam"
    frame 1
    output "rgb" "x.rgb"
    focal 50.000000
    aperture 44.724029
    aspect 1.179245
    resolution 500 424
end camera

instance "cam_inst" "cam" end instance

light "light"
    "mib_light_point" (
        "color" 1 1 1,
        "shadow" on,
        "factor" 1
    )
    origin 140.189178 83.103180 50.617714
end light

instance "light_inst" "light" end instance

material "mtl" opaque
    "mib_illum_phong" (
        "ambience" .3 .3 .3,
        "ambient" .5 .5 .5,
        "diffuse" .7 .7 .7,
        "specular" 1 1 1,
        "exponent" 50,
        "lights" [ "light_inst" ]
    )
end material

object "obj"
    visible shadow trace
    basis "bez1" bezier 1
    basis "bez3" bezier 3
    group "example"
        merge 1.0
        0.314772 -3.204608 -7.744229
        0.314772 -2.146943 -6.932366
        0.314772 -1.089277 -6.120503
        0.314772 -0.031611 -5.308641    #0
        -0.660089 -2.650739 -8.465791
        -0.660089 -1.593073 -7.653928
        -0.660089 -0.535407 -6.842065
        -0.660089 0.522259 -6.030203    #1
        -1.634951 -2.096869 -9.187352
        -1.634951 -1.039203 -8.375489
        -1.634951 0.018462 -7.563627
        -1.634951 1.076128 -6.751764    #2
        -2.609813 -1.543000 -9.908914
        -2.609813 -0.485334 -9.097052
        -2.609813 0.572332 -8.285189
        -2.609813 1.629998 -7.473326    #3
        0.000000 0.000000 -5.000000     #0 (*)
        1.224400 0.561979 -6.081950
        2.134028 1.155570 -6.855258
        3.043655 1.749160 -7.628566
        -0.500000 0.700000 -6.000000    #1 (*)
        0.249538 1.115849 -6.803511
        1.159166 1.709439 -7.576819
        2.068794 2.303029 -8.350128
        -1.200000 1.000000 -7.000000    #2 (*)
        -0.725323 1.669719 -7.525073
        0.184305 2.263309 -8.298381
        1.093932 2.856899 -9.071690
        -2.000000 2.000000 -7.500000    #3 (*)
        -1.700185 2.223588 -8.246634
        -0.790557 2.817178 -9.019943
        0.119071 3.410769 -9.793251
        0.0 0.0 0.0
        1.0 0.0 0.0
        1.0 1.0 0.0
        0.0 1.0 0.0
        v 0 v 1 v 2 v 3
        v 4 v 5 v 6 v 7
        v 8 v 9 v 10 v 11
        v 12 v 13 v 14 v 15
        v 16 v 17 v 18 v 19
        v 20 v 21 v 22 v 23
        v 24 v 25 v 26 v 27
        v 28 v 29 v 30 v 31
        v 32 v 33 v 34 v 35
        curve "curve1" "bez1"
            0.0 0.25 0.5 0.75 1.0
            32 33 34 35 32
        curve "curve2" "bez1"
            0.0 0.25 0.5 0.75 1.0
            32 35 34 33 32
        surface "patch1" "mtl"
            "bez3" 0.0 1.0 0.0 1.0
            "bez3" 0.0 1.0 0.0 1.0
            0 1 2 3 4 5 6 7
            8 9 10 11 12 13 14 15
            trim "curve1" 0.0 1.0
        surface "patch2" "mtl"
            "bez3" 0.0 1.0 0.0 1.0
            "bez3" 0.0 1.0 0.0 1.0
            16 17 18 19 20 21 22 23
            24 25 26 27 28 29 30 31
            trim "curve2" 0.0 1.0
        approximate surface parametric 1.0 1.0 "patch1"
        approximate surface parametric 1.0 1.0 "patch2"
        approximate trim parametric 3.0 "patch1"
        approximate trim parametric 3.0 "patch2"
    end group
end object

instance "obj_inst" "obj" end instance

instgroup "root"
    "light_inst" "cam_inst" "obj_inst"
end instgroup

render "root" "cam_inst" "opt"
If, instead of automatic edge merging, one would like to use the explicit connect statement, one would have to include
connect "patch1" "curve1" 0.25 0.50
"patch2" "curve2" 0.00 0.25
right before the end group keyword and remove the merge keyword in the above example.
Note that the trimming curves curve1 and curve2 have a different orientation, one clockwise and one counterclockwise, because their control point lists are in a different order. This means that where both trimming curves run in parallel, they run in the same direction in 3D space, which is a required condition for trimming curves when connect is used; this is not necessary if merge is used, however. The trimming curves must be closed (another condition) and so run around all four edges of the (square) surfaces. Since only one edge of each surface is connected to the other, the connection ranges select only one quarter (0.25...0.5 and 0.0...0.25) of each curve.
The example produces the following image, once rendered without and then with the connect statement:
Unlike trimming curves, space curves are defined in 3D space. A space curve object may not contain any other type of geometry, such as free-form surfaces or polygons. Space curves are not rendered but serve as input geometry for modeling operations, by passing the space curve object as a parameter to a geometry shader. The curve geometry is defined as a list of space curves, each referencing multiple curve segments.
An object containing space curve geometry follows this outline:
object "object_name"
[ tag label_numberint ]
[ basis list ]
group
[ merge epsilon ]
vector list
vertex list
[ list of curves ]
space curve
[ list of curve segment references ]
end group
end object
A single space curve definition follows this outline:
space curve "spacecurve_name"
    "curve_name" min max
    ...
The dots indicate that each space curve statement may be followed by more than one curve segment reference. The min and max parameters allow using only part of the curve referenced. min and max must be in the range of the parameter vector of the curve.
Here is an example of a space curve object:
object "myspacecurve"
    basis "bezier_1" bezier 1
    group
        0.4 0.4 1.2
        0.6 0.4 1.2
        0.6 0.6 1.2
        0.4 0.6 1.2
        v 0 v 1 v 2 v 3
        curve "curve1" "bezier_1"
            0. 1. 2. 3. 4.
            0 1 2 3 0
        space curve "sp1"
            "curve1" 0. 4.
    end group
end object
Hair support was introduced in mental ray 3.1. Rendering hair is a difficult problem for triangle rendering because the number of hairs is typically very large (hundreds of thousands or millions), and each hair has a very large bounding box that it fills poorly. For this reason, a new hair primitive was introduced that avoids this problem, and also has a highly optimized storage format that avoids the storage overhead for triangle hairs.
Like other geometry, hair is defined in an object:
object "object_name"
[ visible [on|off] ]
[ shadow [on|off] ]
[ trace [on|off] ]
[ select [on|off] ]
[ tagged [on|off] ]
[ caustic [on|off] ]
[ globillum [on|off] ]
[ caustic [mode] ]
[ globillum [mode] ]
[ box [minx miny minz maxx maxy maxz] ]
[ motion box [minx miny minz maxx maxy maxz] ]
[ max displace value ]
[ samples min max ]3.1
[ data null|"data_name" ]
[ tag label_numberint ]
hair
[ material "material_name" ]
[ radius radius ]
[ approximate segments ]
[ degree degree ]
[ max size size ]
[ max depth depth ]
[ hair n ]
[ hair m hm ]
[ hair t ht ]
[ hair u hu ]
[ hair radius ]
[ vertex n ]
[ vertex m vm ]
[ vertex t vt ]
[ vertex u vu ]
[ vertex radius ]
scalar [ nscalars ]
scalar list
hair [ nhairs ]
hair offset list
end hair
end object
The object header is similar to the headers for all other geometry types, but the standard group...end group block is replaced with hair...end hair. This block begins with a common header, followed by a scalar list, followed by a hair list. The header has the following optional statements:
It does not make sense to specify hair n if there is also a vertex n statement, because all hair normals will be overridden. Similarly, if vertex radii are used, no hair radii (or the global radius) may be present. If any normals are specified, the hairs are not cylinders but flat ribbons.
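For instance, a hair block using a per-vertex radius instead of a global radius could begin like this (a sketch; one degree-1 hair with two vertices, each vertex carrying three position scalars plus one radius scalar, for 8 scalars in total):

    hair
        material "mtl"
        degree 1
        vertex radius
        scalar [ 8 ]
            0.0 0.0 0.0  0.05       # root vertex: position + radius
            0.0 1.0 0.0  0.01       # tip vertex: position + radius
        hair [ 2 ]
            0 8
    end hair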
The scalar list defines a sequence of hairs. Each hair consists of a certain number of scalars that describe the entire hair, followed by another number of scalars that describe each vertex of the hair. The layout of these sequences of scalars is identical for all hairs, except that each hair may have a different number of vertices. It is not possible to have one hair with three texture scalars and another with two, for example. Here is the exact sequence of scalars for a single hair:
All vertices begin with three scalars for the location. The order of the other vertex scalars, and the order of the header scalars, is determined by the order of hair and vertex statements. The lists above correspond to the syntax listing at the beginning of this section: first n, then m, then t, then u, then radius.
Each hair has only one header but multiple vertices. As described above, the number of vertices must be (1 + degree . segments), where segments may be different for each hair. This number is not encoded in the hair but in the separate hair list. Note that hairs do not use texture vectors like polygons and free-form surfaces but texture scalars. It's up to the shader to interpret state->tex_list as a list of scalars, and properly use them in groups of one, two, three, or whatever is needed. This makes it possible to avoid the third null component if only two-component UV textures are needed, for example, which can save a lot of memory because hair objects tend to have a very large number of vertices, probably millions.
The hair list specifies where in the scalar list each hair begins, by offset in the scalar list such that 0 is the first scalar, 1 is the second scalar, and so on. At the end of this offset list, one extra offset specifies the last scalar plus one, where the next hair would begin if there were another one. If all scalars are used (which is normally the case), the first offset is 0 and the extra one at the end equals the number of scalars. Here is a simple example for a hair object:
object "hair1"
    visible trace shadow
    tag 1
    hair
        material "mtl"
        radius 0.3
        degree 1
        hair t 2
        vertex t 1
        scalar [ 42 ]
            11 22
            0.0 0.0 0.0 1111
            0.0 1.0 0.0 1112
            1.0 1.0 0.0 1113
            33 44
            0.0 0.0 0.0 1114
            0.0 -1.0 0.0 1115
            -1.0 -1.0 0.0 1116
            55 66
            0.0 0.0 0.0 1117
            0.0 -1.0 0.0 1118
            -1.0 -1.0 0.0 1119
        hair [ 4 ]
            0 14 28 42
    end hair
end object
This example consists of three hairs, each with three vertices. The header of each hair consists of two texture scalars (11 22, 33 44, and 55 66, respectively). Each vertex consists of four scalars, three for the location in object space and one more for a vertex texture. The shader will receive three texture scalars, two from the hair and one more from the vertices. The former are copied from the hair that was hit, and the latter is interpolated from the nearest two vertices. They are stored in state->tex_list as if it were a scalar array, header texture scalars first, so the two hair texture scalars end up in tex_list[0].x and tex_list[0].y, and the vertex texture scalar ends up in tex_list[0].z. It is best to re-cast tex_list to a miScalar pointer in the hair shader.
Hair objects may use the same material shaders as any other object, but special hair material shaders are often used: although a hair may be a cylinder, it is too thin to properly resolve the upper and lower edge, the diffuse terminator, and the highlight. The base shader library contains a simplified hair illumination shader, mib_illum_hair, which implements a much more effective hair shading model.
Subdivision surfaces are created by repeated refinement of polyhedral control meshes according to certain subdivision rules. The control mesh is basically a polygon mesh with certain restrictions (triangles and quads only) and certain extensions (for creases, for example), which is approximated more and more finely to generate a smooth limit surface. The limit surface obtained by this process is tangent-plane smooth at extraordinary vertices and curvature smooth at regular vertices. The main advantage of subdivision surfaces over NURBS is their ability to define arbitrary topology geometry, that is, they do not require a rectangular parameter domain.
Subdivision surfaces are available in mental ray as an optional product, called mental matter. Subdivision surface support can be added to mental ray either by linking libmisubdiv.so in the same way a shader library is linked, or by using an integrated version of mental ray that combines mental ray and mental matter. The library implements approximation and management of subdivision surfaces, and has a powerful C++ API allowing multiresolution modeling operations. The .mi format allows definition of subdivision surfaces. For complex modeling operations, geometry shaders using the above-mentioned C++ API can be written. This API comes with separate documentation; see [CAPI2].
The subdivision surface implementation supports the Loop scheme,
which operates on a control mesh consisting of triangles, and the
Catmull-Clark scheme, which operates on a control mesh consisting
of quads (polygons with four vertices). Face refinement is adaptive,
supporting the LDA approximation criteria described in
section . Vertices may have features assigned which modify
the subdivision rules. Detail vectors may be specified for vertices on
any level in the face hierarchy for multiresolution representation.
Trim edges can be specified on any level to cut holes into the surface.
Edges can be tagged as creases with fractional sharpness to model,
for example, wrinkle features. All these features are described in the
following sections.
Other features such as multiresolution modeling, approximation precision per face, face visibility, animation capabilities and NURBS export and import are not accessible from the .mi format. See [CAPI2] for details on these advanced modeling capabilities.
Subdivision surface geometry, like polygonal geometry, is defined by a series of sections. An object containing only subdivision surface geometry follows this broad outline:
object "object_name"
... # flags, boxes, data, etc.
group
[ merge epsilon ]
vector list
vertex list
subdivision surface
... # more subdivision surfaces
[ approximation list ]
end group
end object
The vector list in the group is a list of (x, y, z) vectors used for face vertex positions, detail vectors, texture and motion vectors.
The vertex list that follows the vector list builds control vertices from the vectors. This works like the vertex list of polygonal geometry, except that no normals can be defined. Position vectors, texture coordinates, and motion vectors can be referenced here.
Features may be associated with vertices by specifying either corner, conic, cusp, dart or smooth behind the regular vertex definition:
v vec_ref [ tex_list ] [ m motion_ref ] [ corner [ level level ] ]
v vec_ref [ tex_list ] [ m motion_ref ] [ conic [ level level ] ]
v vec_ref [ tex_list ] [ m motion_ref ] [ cusp [ level level ] ]
v vec_ref [ tex_list ] [ m motion_ref ] [ dart [ level level ] ]
v vec_ref [ tex_list ] [ m motion_ref ] [ smooth [ level level ] ]
Here vec_ref is the reference of a position or detail vector, tex_list is a list of texture coordinates with t keywords and corresponding coordinate indices, and motion_ref is an optional motion vector. Only one motion vector may be specified for each vertex with mental ray 2.1 and 3.0, and up to 15 with mental ray 3.1.
The optional vertex feature follows, and an optional level can be specified if the feature is active on a level above vertex definition level. Texture coordinates may only be specified for base face level vertices. The vertex features are described in more detail in a separate section below.
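For illustration, vertex-feature statements following this grammar might look like the lines below; all vector and texture indices are hypothetical:

```
v 0                    # plain smooth vertex built from vector 0
v 1 t 9 corner         # vertex with texture vector 9, tagged as a corner
v 2 cusp level 2       # cusp feature active on hierarchy level 2
```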
The subdivision surface geometry list consists of subdivision surface statements, much like free-form geometry consists of surface statements. For a description of vector lists and vertex lists, refer to page .
The approximation statements are very similar to the free-form case,
except that approximate surface is replaced by approximate
subdivision surface. See section for details.
A subdivision surface specifies a name and a list of optionally refined base faces:
subdivision surface "surface_name"
[ base_mesh ]
end subdivision surface
Triangles and quads can be mixed in the same subdivision surface, but they may not share vertices.
The base mesh of a subdivision surface is very similar to polygons. It uses the same syntax, but only faces with three or four vertices are allowed, since the Loop subdivision scheme operates on triangle meshes and the Catmull-Clark scheme on quad meshes. The face vertices must be specified in counter-clockwise order, and the mesh must be 2-manifold.
p ["material_name"] vertex_ref_list
[ crease crease_mask sharpness_list ]
[ trim trim_mask ]
[ { hira_spec } ]
The p keyword begins definition of a base face. An optional material name may follow; otherwise the material of the previous base face is assigned to the current base face. If the current object is marked as tagged, a label integer must be given instead of the material name (this is not shown here). Face vertices must be specified in the vertex_ref_list.
The optional crease statement allows specification of crease edges. With crease_mask up to four edges are selected for crease, the floating point sharpness values follow in the sharpness_list. crease_mask is a bitmap where crease edge indices are associated with corresponding bit positions in the mask. The diagram below shows the vertex and edge labeling of a base triangle:
The numbers inside the triangle indicate vertex numbers, the numbers at the three edges indicate the corresponding bit position in the crease bitmap. Bit 0 is assumed to be the least significant bit. Here is an example where edges 0 and 1 in a triangle should be crease edges, with corresponding sharpness values of 0.7 and 1.0:
crease 3 0.7 1.0
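As another hypothetical example, marking edges 0 and 2 of a quad base face as creases requires the bit mask 1 + 4 = 5 and two sharpness values (the material name and vertex indices are placeholders):

```
p "mtl" 0 1 2 3
    crease 5 0.8 0.8    # edges 0 and 2 selected, sharpness 0.8 each
```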
For quads the vertex and edge labeling is similar, as shown in this diagram:
For more information on crease edges, see the section on creases below. A base face may be trimmed with the trim statement. The trim_mask is a bitmap where trim edges are selected according to their index in the face, in a way very similar to the crease bitmap described above. For more information on trim edges, see the section on trim edges below.
It is possible to explicitly subdivide faces and to specify detail vectors on the resulting higher levels. Subdivision is at the heart of subdivision surfaces. The surface begins with a base mesh, which is subdivided into finer and finer levels until the final smooth limit surface is reached. These levels form a hierarchy with the base mesh at its root. Modeling allows control over the levels of this hierarchy: the subdivision can be guided by predefining vertices on higher levels. This is useful to introduce local detail. For example, a head can be modeled by defining the general shape as the base mesh, and the nose can be introduced by defining one or more higher levels that add the required local detail. It is sufficient to define higher levels only where detail is needed; it is not necessary to define all parts of a higher level. (This would very quickly require very large numbers of vertices, even where no detail is required.)
Definition of detail on higher levels of the hierarchy is done by subdividing a face of the next-lower hierarchy level. High-level vertices can not be placed just anywhere, but only by subdividing a triangle or quad that already exists on the next-lower level, in a fixed way by dividing its edges.
In the .mi scene language, a face is subdivided by specifying a pair of curly brackets around hira_spec. Inside the hira_spec block a configuration of four children on the next level is selected, called a kit. The kit specification hira_spec is defined as:
[ child child_index { hira_spec } ]
[ detail detail_mask detail_list ]
[ trim child_index trim_mask ]
[ crease child_index crease_mask sharpness_list ]
[ material child_index material_name ]
[ material child_index label ]
In all the statements above, the child within the kit is selected with a child_index. A child may be further subdivided explicitly by specifying a hira_spec block inside curly brackets. The maximum number of subdivision levels that can be specified is 15.
A detail vector specifies the offset to be added to the result of subdivision on a certain level for a certain vertex. The detail vector is specified in object coordinates, but is transformed using local reference frames. Adjacent subdivided faces automatically create a shared midpoint on the edge they share. A detail vector can be assigned using both faces, but a single assignment is sufficient since the vertex is shared.
Detail vectors may be specified for the current kit vertices for the current subdivision level. A triangle kit has six vertices where detail vectors can be assigned, a quad kit has nine vertices. Detail vectors may be defined also on higher hierarchy levels above the current vertex definition level by explicitly subdividing the face and specifying details on the higher level kits.
The same detail vector may be assigned to different vertices, even on different levels, since there is no relationship between the vertices in the vertex_list and the vertices that are created automatically for the limit surface. Here the detail vectors are simply looked up in the vector list and their value is assigned to internal subdivision surface vertices.
Detail vertices for the current kit are selected with the bitmap detail_mask. Each vertex of the kit has a corresponding bit in the bitmap. The vertex labeling for a triangle kit is shown below:
For a quad kit the labeling is similar:
The detail vectors are specified with the detail_list, which is a list of vertex indices. For example, if a detail vector is given by vertex index 10 and it should be assigned to the central vertex of a quad kit, one would have to specify 256 for the detail_mask, followed by 10.
Crease edges on the kit subdivision level can be specified by selecting a child in the kit with child_index, followed by an edge mask selecting the edges within that child, and finally followed by floating point values specifying the sharpness of the selected edges.
Trim edges on the kit subdivision level can be specified by selecting a child in the kit with child_index, followed by an edge mask similar to the trim mask for base faces.
Materials can be assigned to individual faces by using the material keyword, followed by a child index selecting the face within the kit, finally followed by the material name (or label index if the object is tagged). Materials are inherited in the hierarchy.
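Putting the kit statements together, a hypothetical quad base face could be refined like this; all indices and material names are illustrative:

```
p "base_mtl" 0 1 2 3 {
    detail 256 8                # detail vector 8 at the central kit vertex
    material 1 "patch_mtl"      # override the material of child 1
    crease 0 3 1.0 1.0          # edges 0 and 1 of child 0, sharpness 1.0
    child 0 {
        detail 1 9              # one level deeper: detail vector 9 at vertex 0
    }
}
```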
Vertices may be tagged with the following features to modify the subdivision rules:
Creases are discontinuities introduced into the surface in order to define sharp geometric details such as wrinkles. Crease edges may be specified for edges which are not on the geometric boundary. Connecting crease edges must be defined on the same level.
A vertex with exactly two incident crease edges is internally marked as a crease vertex. Normal vectors for vertices on an infinitely sharp crease line are not shared; instead, two normal vectors are created for the faces on each of the two sides of the crease line. A crease sharpness value of 0 assigned to an edge will result in smooth subdivision, a value of 1 will generate an infinitely sharp crease, and fractional values between 0 and 1 will generate smooth creases. Sharpness values of subsequent higher subdivision levels are computed using a quadratic B-spline function applied to the sharpness values of the parent edges.
Trim edges may not be defined on the boundary or on crease edges. Connecting trim edges must be defined on the same level. Trim loops may not intersect or touch. Regions where faces are located inside a closed trim loop will generate a hole in the surface. It is not allowed to create trim regions within trim regions. It is legal to specify open trim loops, but here no holes are created in the surface.
When vertex features must be defined for vertices on trim loops above level 0, one has to reference a vertex defining that feature. If this vertex does not have significant detail, a zero length detail vector must be referenced to satisfy the .mi grammar in the vertex section.
At trim loops, boundary subdivision rules are applied.
In the example below a cube base mesh is created, the four bottom level 0 edges are marked for infinite crease (1.0 for each edge), the top face is subdivided once and a detail vector is assigned to the central vertex on level 1.
object "quadcube"
    group
        -1 -1 -1   1 -1 -1   1  1 -1   -1  1 -1
        -1 -1  1   1 -1  1   1  1  1   -1  1  1
         0  0  0.4
        v 0 v 1 v 2 v 3 v 4 v 5 v 6 v 7 v 8
        subdivision surface "surf1"
            p 0 1 5 4
            p 1 2 6 5
            p 2 3 7 6
            p 3 0 4 7
            p 4 5 6 7 { detail 256 8 }
            p 0 3 2 1 crease 15 1 1 1 1
        end subdivision surface
        approximate subdivision surface angle 7 "surf1"
    end group
end object
Approximations are defined within an object group and they specify how previously defined polygons, surfaces, and curves should be tessellated. Within an object group containing free-form surface geometry the approximation statements are given separately for the surface itself and for curves used by the surface. The surface approximation statement sets the approximation technique for the surface itself. If it carries a displacement map this statement refers to the underlying geometric base surface and does not take the displacement into account. One may specify the approximation criteria on the displaced surface with an additional displace approximation statement or even leave out the surface approximation statement altogether.
If the material of the surface does not contain a displacement shader the displace approximation statement is ignored. A trim approximation statement applies to all trimming, hole and special curves attached to the given surface or surfaces collectively; it is equivalent to separate curve approximations for each individual curve. When the keyword approximate is directly followed by an approximation technique it refers to a polygon or a list of polygons. It only has an effect on displacement mapped polygons. If the options statement specifies approximation statements for base surfaces and/or displacements, they override the approximation statements in the object. This can be used for quick previews with low tessellation quality, for example.
approximate
technique [ minint maxint ]
approximate surface
technique [ minint maxint ] [ max maxint ] "surface_name" ...
approximate displace
technique [ minint maxint ] "surface_name" ...
approximate trim
technique [ minint maxint ] "surface_name" ...
approximate curve
technique [ minint maxint ] "curve_name" ...
approximate space curve
technique [ minint maxint ] "spacecurve_name" ...
The dots indicate that there may be more than one surface_name or curve_name following the approximation statement. The given approximation is then used for all named surfaces or curves.
technique stands for one or more of the following:
view
tree
grid
fine3.1
delaunay
[ regular ] parametric u_subdiv [ v_subdiv ]
any
sharp sharp3.1
length edge
distance dist
angle angle
spatial [ view ] edge
curvature [ view ] dist angle
grading angle
tree, grid, and delaunay are mutually exclusive. parametric cannot be combined with any of the others except grid, which is the default for the parametric case anyway. regular can only be used together with parametric. view has no effect unless one of length, distance, spatial, or curvature is also given. Grading can only be used in combination with Delaunay triangulation.
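A few hypothetical approximate statements combining these techniques; surface names, subdivision levels, and thresholds are placeholders:

```
approximate surface regular parametric 4 4 "surf1"
approximate surface tree view length 1.0 0 3 "surf1" "surf2"
approximate surface delaunay distance 0.1 angle 30.0 max 20000 "surf3"
approximate trim parametric 8 "surf1"
```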
View-dependent approximation is enabled if the view statement is present. It controls whether the edge argument of the length and spatial statements, and the dist argument of the distance and curvature statements, are in the space the object is defined in or in raster space.
Tree, grid, and Delaunay approximation algorithms are available for surface approximation. The grid algorithm tessellates on a regular grid of isolines in parameter space; the tree algorithm tessellates in a hierarchy of successive refinements that produces fewer triangles for the same quality criteria; the Delaunay algorithm creates a successive refinement that maximizes triangle equiangularity. By definition, parametric approximations always use the grid algorithm; all others can use any of the three, but tree is the default. tree, grid, and delaunay have no effect on curve approximations. Delaunay triangulation creates more regular triangles but takes longer to compute.
Parametric approximation subdivides each patch of the surface into u_subdiv * degree equal-sized pieces in the U parameter direction, and v_subdiv * degree equal-sized pieces in the V parameter direction. If regular is specified, the number of pieces the whole surface is subdivided into simply equals the parameter value. v_subdiv must be present for surface approximations and must be omitted for curve and trim approximations. Note that the factor is a floating point number, although a patch can only be subdivided an integral number of times. For example, if a factor of 2.0 is given and the surface is of degree three, each patch will be subdivided six times in each parametric direction. If a factor of 0.0 is given, each patch is approximated by two triangles.
Curves are subdivided into subdiv * degree equal pieces by the parametric approximation and into subdiv equal pieces by the regular parametric approximation.
For displacement mapped polygons and displacement mapped surfaces with a displace statement, regular parametric has the same meaning as parametric in the approximation. For displacement mapped polygons, the u_subdiv constant specifies that each edge in the triangulation of the original polygon is subdivided 2^u_subdiv times for the displacement. If a displace approximation is given for a displacement mapped surface, the initial tessellation of the underlying geometric surface is subdivided in the same way as for polygons. For example, a value of 2 leads to a fourfold subdivision of each edge. Non-integer values for the subdivision constant are admissible. Nothing is done if 2^u_subdiv is smaller than 2 (i.e. if u_subdiv < 1). The v_subdiv constant is ignored for the parametric approximation of displacement maps.
Length/distance/angle (LDA) approximation specifies curvature-dependent approximation according to the criteria specified by the length, distance, and angle statements. These statements can be given in any combination and order, but cannot be combined with parametric approximation in the same approximate statement. If they are preceded by the any keyword the approximation stops as soon as any of the criteria is satisfied.
The length statement subdivides the surface or curve such that no edge length of the tessellation exceeds the edge parameter. edge is given as a distance in the space the object is defined in, or as a fraction of a pixel diagonal in raster space if the view keyword is present. Small values such as 1.0 are recommended. For tree and grid approximation the min and max parameters, if present, specify the minimum and maximum number of recursion levels of the adaptive subdivision. The min parameter is a means to enforce a minimal triangulation fineness without any tests. Edges are further subdivided until the given criterion is fulfilled or the max subdivision level is reached. The defaults are 0 and 5, respectively; 5 is a very high number. Good results can often be achieved with a maximum of 3 subdivisions. For Delaunay approximation, the number max following the keyword max specifies the maximum number of triangles of the surface tessellation. This number will be exceeded only if required by trimming, hole, and special curves, because every curve vertex must become part of the tessellation regardless of the specified maximum.
For displacement mapped polygons and displacement mapped surfaces with a displace approximation statement the length criterion in the approximation limits the size of the edges of the displaced triangles and ensures that at least all features of this size are resolved. Subdivision stops as soon as an edge satisfies the criterion or when the maximum subdivision level is reached. The possibility that at an even finer scale new details may show up which would lead again to longer edges of course cannot be ruled out. This caveat about the potential miss of high-frequency detail applies also to the distance and angle criteria.
The distance statement specifies the maximum distance dist between the tessellation and the actual curve or surface. The value of dist is a distance in the space the object is defined in, or a fraction of a pixel diagonal in raster space if the view statement is present. As a starting point, a small distance such as 0.1 is recommended. For tree and grid approximation the min and max parameters, if present, specify the minimum and maximum number of recursion levels of the adaptive subdivision. For Delaunay approximation, the number max following the keyword max specifies the maximum number of triangles of the surface tessellation.
For displacement mapped polygons and displacement mapped surfaces with a displace approximation statement the distance criterion cannot be used in the same way because the displaced surface is not known analytically. Instead, the displacements of the vertices of a triangle in the tessellation are compared. The criterion is fulfilled only if they differ by less than the given threshold. Subdivision is finest in areas where the displacement changes. For example, if a black-and-white picture is used for the displacement map the triangulation will be finest along the borders between black and white areas but the resolution will be lower away from them in the uniformly colored areas. In such a case one could choose a moderately dense parametric surface approximation that samples the displacement map at sufficient density to catch small features, and use the curvature-dependent displace approximation to resolve the curvature introduced by the displacement map. Even if the base surface is triangulated without adding interior points, as if its trim curve defined a polygon in parameter space, it is still possible to guarantee a certain resolution by increasing the min subdivision level. Only the consecutive subdivisions are then performed adaptively.
The angle statement specifies the maximum angle angle in degrees between normals of adjacent tiles of a displaced polygon or the tessellation of a surface or its displacement or between tangents of adjacent segments of the curve approximation. Large angles such as 45.0 are recommended. For tree and grid approximation the min and max parameters, if present, specify the minimum and maximum number of recursion levels of the adaptive subdivision. For Delaunay approximation, the number max following the keyword max specifies the maximum number of triangles of the surface tessellation.
Spatial approximation as specified by a spatial statement is a special case of an LDA approximation that specifies only the length statement. For backwards compatibility, the spatial statement has been retained; it is equivalent to the length statement plus an optional view statement.
Curvature-dependent approximation as specified by the curvature statement is also a special case of LDA approximation, equivalent to a distance statement, an angle statement, and an optional view statement. The spatial and curvature statements can be combined, but future designs should use length, distance, and angle directly.
Grading applies only to Delaunay triangulation and controls the density of triangles around the border of the surface. It allows the density of triangles to vary quickly in a smooth transition between a finer curve approximation and a coarser surface approximation. The angle constant specifies a lower bound on the minimum angle, in degrees, of a triangle. Values from 0.0 to 30.0 can be specified; small values up to 20.0 are recommended. The default is 0.0. When using high grading values it is recommended to specify a maximum number of triangles, because otherwise high grading values might result in a huge number of triangles or endless mesh refinement. The purpose of this option is to prevent a large number of tiny triangles at the trimming or hole curve from abruptly joining very large triangles in the interior of the surface.
The sharp3.1 keyword controls the normal vector interpolation. If set to 0.0, mental ray uses the interpolated normal as specified by the base surface, modified by displacement if available. If the argument sharp is set to 1.0, mental ray will use the geometric normal for a faceted look. This is primarily useful in fine mode. Future versions will be able to blend between these two modes.
If no approximation statement is given the parametric technique is used by default with u_subdiv = v_subdiv = 1 for surfaces, or u_subdiv = 1 in the case of curves and u_subdiv = 0 for polygons.
Standard approximations as described in the previous section work under the assumption that as few triangles as possible should be used to approximate a surface to achieve a user-defined quality. mental ray 3.1 also supports a new approximation mode called fine approximation, which addresses the problem from a different angle: it is capable of efficiently expending very large numbers of triangles to faithfully approximate even very complex surfaces, especially displaced surfaces, without excessive memory consumption.
This is done by reducing the granularity of mental ray's cache manager. In mental ray 3.0, it operated on entire objects, which could become very large when tessellated. mental ray 3.1 applies cache management to smaller units formed by splitting objects into smaller sets, which can be individually tessellated without excessive memory requirements. This is especially useful for extremely detailed displacement maps.
Fine approximations support a small subset of approximation techniques since the remainder exists only to trade off triangle counts vs. quality, which is no longer a problem for fine approximations:
fine
[ sharp sharp ] 3.1
[ view ] length edge
parametric u_subdiv v_subdiv
The fine keyword enables fine approximation. It can be used for polygon displacement, free-form surface displacement, subdivision-surface displacement and free-form surface approximations, but not for curves (because they are not tessellated to triangles). As with standard approximations, the sharp keyword controls normal-vector calculations. If set to 0.0, mental ray uses the interpolated normal as specified by the base surface, modified by displacement if available; if set to 1.0, mental ray will use the geometric normal to achieve a crisp faceted look.2.5
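Hypothetical fine approximation statements following the grammar above (the surface name and edge lengths are placeholders):

```
approximate surface fine view length 0.5 "surf1"    # free-form surface
approximate fine sharp 1.0 view length 0.3          # displaced polygons
```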
Fine approximation requires the choice of one of three techniques:
This simplicity makes it very easy to control fine displacement, without the risk of accidentally creating billions of triangles until memory runs out, and without juggling a large number of temperamental displacement-mapping parameters.
However, fine displacement critically depends on the specification of a cache size limit, because otherwise the fine tessellation results would not flow through the cache but accumulate until memory runs out. For this reason, mental ray 3.1 has a default cache limit of 512 MB. This can be overridden with the -jobmemory command-line option. A good choice is half the amount of physical RAM, or 500-800 MB on 32-bit machines, whichever is smaller. If the number is too large, the operating system may run out of virtual address space; if it is too small, mental ray will perform too many cache flush operations.
If fine is used to approximate displaced geometry, it is also critical to specify a correct max displace. This parameter specifies the maximum absolute scalar value a displacement shader may return, and serves to give mental ray a hint of the maximum extension of the displaced object. It is measured in object space. This parameter is somewhat sensitive: if chosen too big, it may affect rendering performance; if chosen too small, any bigger values returned by a displacement shader will be truncated to it. mental ray issues a warning message if max displace is chosen too small and a greater value is returned by the shader during rendering, thus providing a way to adjust max displace optimally. mental ray relies on max displace exclusively; if accidentally left at the default of zero, all displacement will disappear (with a warning message).
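As a sketch, max displace is given as an object statement; the object name and the bound 1.5 (an assumed limit on the displacement shader's return values, in object space) are hypothetical:

```
object "terrain"
    visible trace shadow
    max displace 1.5        # assumed upper bound on displacement shader output
    group
        ...
    end group
end object
```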
Fine approximation cannot be used together with merging and connections.
instance "name"
"element"|geometry function
[ hide on|off ]
[ visible on|off ]
[ shadow on|off ]
[ trace on|off ]
[ caustic [ mode ]]
[ globillum [ mode ]]
[ transform [ matrix ]]
[ motion transform [ matrix ]]
[ motion off ]
[ material "material_name" ]
[ material [ "material_name" [ , "material_name" ... ] ] ]
[ tag labelint ]
[ data [ "data_name" ]]
[ (parameters) ]
end instance
Instances place cameras, lights, objects, and instance groups into the scene. Without instances, these entities have no effect; they are not tessellated and are not scheduled for processing. An instance has a name that identifies the instance when it is placed into an instance group (see below). Every instance references exactly one element, which must be the name of a camera, a light, an object, or an instance group. If the instanced item is a geometry shader function, the scene element created by this special shader is actually used as the instanced item.
The hide flag can be set to on to disable the instance and the element it references. This is useful to temporarily suspend an instance to evaluate a subset of the scene, without deleting and later recreating suspended parts. hide is off by default.
The visible, shadow, trace, caustic, and globillum modes are inherited down the scene DAG. Flags in instances lower (closer to the objects) override flags in instances higher up. The flags from the instance closest to the object are merged with the corresponding object flags. The resulting values become the effective flags for rendering. If no flags are specified in the relevant instances, only the object flags are used. For the exact definition of these flags refer to the Object section. The caustics mode bitmap contains six bits, and the desired behavior is the sum of:
Obviously, 1 and 4, 2 and 8, and 16 and 32 cannot be mixed, respectively. If mode is omitted, the default is 3 (enable casting and receiving). The fifth and sixth bits control ``invisibility to photons''; that is, if visibility is disabled, photons do not intersect this object and fly right through. This also affects the portion of the scene where photons have an effect and will be traced by mental ray. For example, if caustics occur only in a small part of the scene, objects outside that area should be made invisible to caustic photons to tell mental ray it should not waste time tracing photons there. The globillum mode bitmap works the same way and has the same bit layout, but controls global illumination instead of caustics.
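Since the default of 3 enables both casting (1) and receiving (2), hypothetical per-instance settings could look like this:

```
caustic 3       # cast (1) + receive (2) caustics
globillum 2     # receive global illumination only
```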
The transform statement is followed by 16 numbers that define a 4 x 4 matrix in row-major order. The matrix establishes the transformation from the parent coordinate space to the object space of the instanced element. If the instance is directly attached to the root instance group (see below), the parent coordinate space is world space. For example, the following matrix translates the instanced element to the coordinate (x, y, z):
transform 1 0 0 0
0 1 0 0
0 0 1 0
x y z 1
Instance transformations are ignored if the options element explicitly sets the coordinate space to camera space, using the camera space statement. This is not recommended. The parent-to-local space transformation direction has the effect that in order to move an instanced object one (local) unit in the (local) +X direction, x must be decremented by 1.
The motion transform matrix specifies a transformation from parent space to local space for motion blurred geometry. If not specified, the instance transformation is used for the motion blur transformation. In this case the parent instance determines whether motion blur is active or not. Motion blur is activated by specifying a motion transformation in the scene DAG. This transformation is propagated through the scene DAG in the same way as the instance transformations. The motion off statement turns off all motion information inherited up to this point, as if the camera and all instances above did not have motion transforms. This can be used to disable motion transformations for a scene subtree. The motion steps3.1 option in the options block controls the number of segments of the curved motion path resulting from evaluating the transformation at different times in the interval 0..1.
If a motion transformation is specified in an object instance, the triangle vertex points of the tessellated geometry are transformed by the matrix product of the accumulated instance matrix and the inverse accumulated motion transformation matrix. The difference vector between the transformed and the untransformed triangle vertex point is used as a motion vector in local object space. If an object has motion vectors attached to the vertices, the motion vector calculated as described above is combined with the object motion vector. A motion transformation can be given for both object and camera instances. If a motion transformation is specified in a camera instance, the effective motion transformation for the triangle vertices is the matrix product of the relative instance and relative camera motion transformation.
The material_name is the name of a previously defined material. It is stored along with the instance. Instance materials are inherited down the scene DAG. Materials in instances lower in the scene DAG (closer to the leaves) override materials in instances higher up. The material defined lowest becomes the default material for any polygon or surface in a geometrical object that has no material of its own.
If a bracketed, comma-separated list of material_names is given, mental ray uses the n-th material in the list if the polygon or surface label is n. If the label exceeds the length of the list, the first material in the list is used. Polygon and surface labels can be specified in object definitions that have the tagged flag set; if this flag is not set, the first material in the list is used. The list may not be empty.
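For example, an instance of a tagged object might map labels 0, 1, and 2 to three previously defined materials (all names here are hypothetical, and the bracketed list is assumed to follow the material statement):

instance "tagged_inst" "tagged_obj"
    material [ "mtl0", "mtl1", "mtl2" ]
end instance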
A label integer can be attached to an instance using the tag statement. Labels are not used by mental ray in any way, but a shader can use the mi_query function to obtain the label and perform instance-specific operations.
Also, user data can be attached with a data statement. The argument must be the name of a previously defined data element in the scene file. If the argument is missing, a previously existing data reference is removed.
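For example, a label and a user data reference might be attached together (the names are hypothetical; "user_data1" must have been defined by an earlier data statement):

instance "labeled_inst" "some_obj"
    tag 7
    data "user_data1"
end instance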
An instance may define parameters. Instance parameters are evaluated during scene preprocessing. Whenever the initial scene traversal finds an instance, it calls the inheritance shader defined in the options element with the parent instance parameters and the parameters of the new instance. The inheritance function must then compute a new parameter set, which becomes the parent parameters for any future instances found in the element subtree below the new instance, if element is an instance group (if not, no sub-instances can exist and recursion ends). The inheritance function is also called if there is no parent instance yet or if the new instance contains no parameters. The final parameter set created by the inheritance function called for the bottom-level instance (which instances a camera, light, or object) is made available to shaders, in addition to the regular shader parameters.
mental ray 3.1.2 introduces traversal functions in the options block, which are called like inheritance functions but have more control over the inheritance process. For example, they can control not only instance parameters but also flags, materials, and transformation matrices.
The instance parameters must be declared just like shader parameters. The declare command must name the inheritance function, as specified in the options element. All instances share the same declaration. Note that this limits the portability of the scene -- it is difficult to merge it with another scene that uses a different parameter inheritance function.
If transform, motion transform, and material are given without arguments, the respective feature is turned off. This is useful for incremental changes. It is not relevant for the initial definition because these features are off by default when an instance is created.
The element may be named in more than one instance. This is called ``multiple instancing''. If two instances name the same object, the object appears twice in the scene, even though it is stored only once in the scene database. This greatly reduces memory consumption. For example, it is sufficient to create one wheel object for a car, and then instance it four times. All four instances contain different transformation matrices to place the wheels in four different locations. (This implies that multiple instancing is not useful in camera space mode because in this mode the transformations are ignored.) It is also possible to apply multiple instancing to instance groups to replicate entire sub-scenes.
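The wheel example might be written as follows (all names are hypothetical; note that the translations are negated relative to the intended world positions because the matrices map parent space to local space):

instance "wheel1" "wheel" transform 1 0 0 0  0 1 0 0  0 0 1 0 -1 0 -2 1 end instance
instance "wheel2" "wheel" transform 1 0 0 0  0 1 0 0  0 0 1 0  1 0 -2 1 end instance
instance "wheel3" "wheel" transform 1 0 0 0  0 1 0 0  0 0 1 0 -1 0  2 1 end instance
instance "wheel4" "wheel" transform 1 0 0 0  0 1 0 0  0 0 1 0  1 0  2 1 end instance
instgroup "wheels" "wheel1" "wheel2" "wheel3" "wheel4" end instgroup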
If the instanced item is a ``geometry shader'', the function is called with shader parameters and the scene element created by the shader is defined in the local coordinate space of the instance. The geometry shader is called just before tessellation takes place. The following example uses a geometry shader mib_geo_sphere:
instance "sphere" geometry "mib_geo_sphere" () end instance
This example creates a spherical object procedurally. It uses the syntax for anonymous shaders; as usual the named shader syntax using the shader keyword and named shader assignments using the ``='' sign can also be used. As usual, shader lists may be used; if the shader is correctly written all created objects are put in a group and instanced together. Named shaders created inside or outside procedural object definitions are in global scope and can be shared with other objects.
For a complete example for building scene graphs with instances and instance groups, see below.
instgroup "name"
"name"
[ tag labelint ]
[ merge epsilon ]
[ data [ "data_name" ]]
...
end instgroup
Instance groups, together with instances, provide the structure from which scenes are built. The scene is anchored at a root instance group, which contains instances referencing cameras, lights, objects, and/or other instance groups. In the simplest case, all cameras, lights, and objects can be collected into a single group, forming a ``flat scene'' because there is no hierarchy. Cameras, lights, and objects are never put into an instance group directly. Instead, an instance must be defined, one for each element, and the instance is then put into the group. (This is why it is called an ``instance group.'')
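A minimal flat scene might look like this, assuming the entities "cam", "light1", "obj1", and the options element "opt" were defined earlier in the file:

instance "cam_inst"   "cam"    end instance
instance "light_inst" "light1" end instance
instance "obj_inst"   "obj1"   end instance
instgroup "rootgrp"
    "cam_inst" "light_inst" "obj_inst"
end instgroup
render "rootgrp" "cam_inst" "opt"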
Instance groups can be nested. An instance group is placed into a parent instance group exactly like a camera, light, or object: an instance must be defined for the child instance group, and the instance is put into the parent instance group. As with other entities, it is possible to create more than one instance for an instance group; this allows multiple instancing of sub-scenes. There is no limit on the nesting depth of instance groups.
Since the only purpose of instance groups is as a container for instances, the syntax is very simple. After the name of the instance group, one or more names of instances follow. An incremental change to an instance group clears the old instance list (without deleting the instances themselves); to add or remove an instance in an instance group, the incremental change must respecify the entire instance list.
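For example, to add a fourth instance to a group that already contains three, the incremental change must list all four names (the names here are hypothetical):

incremental instgroup "rootgrp"
    "cam_inst" "light_inst" "obj_inst" "extra_inst"
end instgroup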
The top-level instance group has no instance. It is called the root instance group and stands for the entire scene. It is passed to the render command to process the scene. More than one root instance group can exist, but only one can be processed at a time. Camera instances must always be attached to the root instance group, not a lower-level instance group, and a camera instance may not be multiply instanced, so that the camera position remains unambiguous. Multiple cameras can exist in the root instance group, but only one can be passed to the render command.
A label integer can be attached to an instance group using the tag statement. Labels are not used by mental ray in any way, but a shader can use the mi_query function to obtain the label and perform group-specific operations.
Also, user data can be attached with a data statement. The argument must be the name of a previously defined data element in the scene file. If the argument is missing, a previously existing data reference is removed.
If a nonzero merge epsilon is specified, the instance group is a merge group. It behaves like a single object, and is constructed by merging all objects below this merge group. Merging is done by finding all surface edges closer than the merge epsilon, and adjusting them to remove the gap. This can be used to correct small modeling problems.
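For example, two adjacent surface patches whose edges nearly coincide might be stitched together with a small merge epsilon (the instance names are hypothetical):

instgroup "merged_patches"
    merge 0.001
    "patch1_inst" "patch2_inst"
end instgroup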