Next: 1.11 Texture Filtering Up: 1. Functionality Previous: 1.9 Shadow Maps

1.10 Texture, Bump, Displacement, and Reflection Mapping

mental ray supports texture, bump, displacement, and reflection mapping, all of which may be derived from an image file or procedurally defined using user-supplied functions.

The following table lists the  file formats accepted by mental ray:

format   description                   comp   bits/comp      colormap  compress
rla/rlb  Wavefront image               3, 4   8              no        RLE
                                       3, 4   16             no        RLE
pic      SOFTIMAGE image               3, 4   8              no        RLE, no
alias    Alias image                   3      8              no        RLE
rgb      SGI color                     3, 4   8, 16          no        RLE, no
jpg      JFIF image                    3      8              no        JPEG
tif      TIFF image                    1      1, 4, 8        no        RLE, no
                                       1      4, 8           yes       RLE, no
                                       3, 4   8              no        RLE, no
                                       3, 4   16             no        RLE, no
picture  Dassault Systèmes picture     3      8              no        RLE
ppm      Portable pixmap               3      8, 16          no        no
tga      Targa image                   1      8              no, yes   RLE, no
                                       3      5              no        RLE, no
                                       3/1    5/1            no        RLE, no
                                       3, 4   8              no        RLE, no
bmp      MS Windows/OS2 bitmap         1      1, 4, 8        yes       no
                                       3, 4   8              no        no
qnt      Quantel/Abekas YUV image      3      8              no        no
ct       mental images texture         4      8, 16, float   no        no
st       mental images alpha texture   1      8, 16          no        no
vt/wt    mental images basis vectors   2      16             no        no
zt       mental images depth channel   1      float          no        no
nt/mt    mental images vectors         3      float          no        no
tt       mental images tag channel     1      32             no        no
bit      mental images bit mask        1      1              no        no
map      memory-mapped textures        any    any            no        no

In the table, any combination of comma-separated values determines a valid format subtype. For example, the SGI RGB image format will be read when the data type is 8 bits per component, with or without alpha, either RLE-compressed or uncompressed. The actual image format is determined by analyzing the file content, not just by checking the filename extension. This allows, for example, replacing texture files with memory-mapped textures without changing the name.
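Content-based format detection of this kind usually amounts to checking the well-known magic bytes at the start of the file. The following is an illustrative Python sketch only, not mental ray's actual code; it distinguishes a few of the formats listed above by their standard magic numbers:

```python
# Illustrative only: identify a few of the listed formats by file content
# rather than by filename extension, using their standard magic bytes.
MAGIC = [
    (b"\xff\xd8\xff", "jpg"),   # JFIF/JPEG
    (b"II*\x00", "tif"),        # TIFF, little-endian
    (b"MM\x00*", "tif"),        # TIFF, big-endian
    (b"BM", "bmp"),             # MS Windows/OS2 bitmap
    (b"\x01\xda", "rgb"),       # SGI image
    (b"P6", "ppm"),             # binary portable pixmap
]

def sniff_format(header: bytes) -> str:
    """Return a format name from the first bytes of a file, or 'unknown'."""
    for magic, name in MAGIC:
        if header.startswith(magic):
            return name
    return "unknown"
```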

Typical image types like black-and-white, grayscale, colormapped, and truecolor images, optionally compressed, are supported. Some of them can be used to supply additional alpha channel information (number of components greater than 3). The collection covers the most common platform-independent formats like TIFF and JFIF/JPEG, special UNIX (PPM) and Windows bitmap (BMP) types, and well-known application formats. The mental images formats, normally created by mental ray itself, are mainly available to exchange data that cannot be stored in other formats.

The other way to define any kind of map is to supply user functions, which are linked into mental ray at run time without user intervention. Such a function may accept parameters that specify, for example, the turbulence of a procedural marble texture.
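To make the marble example concrete, here is a hedged Python sketch of such a procedural texture, with a parameter controlling the turbulence. The sine-based noise is a stand-in for the Perlin-style noise a real shader would use, and all names here are hypothetical:

```python
import math

def turbulence(x, octaves=4):
    """Toy 1-D 'noise': a sum of scaled sines (real shaders use Perlin noise)."""
    t = 0.0
    for i in range(octaves):
        f = 2.0 ** i
        t += abs(math.sin(x * f)) / f
    return t

def marble(u, turb_amount=2.0):
    """Scalar marble pattern in [0, 1] along texture coordinate u; the
    turb_amount parameter corresponds to a shader parameter."""
    return 0.5 * (1.0 + math.sin(u * 10.0 + turb_amount * turbulence(u)))
```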

Frequently, a function is used to apply texture coordinate transformations such as scaling, cropping, and repetitions. Such a function would have a sub-texture argument that refers to the actual image file texture.
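Such a remapping function can be illustrated with a small Python sketch (the names `remapped_texture` and `subtex` are hypothetical): it transforms the incoming coordinates, here by a scale and a wrap, and then defers to the sub-texture:

```python
def remapped_texture(subtex, u, v, scale=(2.0, 2.0)):
    """Apply a scale, then wrap into [0, 1) before calling the sub-texture.
    subtex is any callable taking (u, v) and returning a texture value."""
    su, sv = u * scale[0], v * scale[1]
    return subtex(su % 1.0, sv % 1.0)
```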

A user-defined  material shader is not restricted to the above applications for textures. It is free to evaluate any texture and any number of textures for a given point, and use the result for any purpose.

In the parameter list of the standard material shaders, a list of  texture maps may be given in addition to, for example, a literal RGB value for the diffuse component of a material. The color of the diffuse component will then vary across a surface. To shade a given point on a surface, the coordinates in  texture space are first determined for the point. The diffuse color used for shading calculations is then the value of the  texture map at these coordinates. The SOFTIMAGE-compatible material shader uses a different approach; it accepts a single list of textures, with parameters attached to each texture that control the way the texture is applied to ambient, diffuse, and other parameters. The shader interface is extremely flexible and permits user-defined shaders to use either of these approaches, or completely different formats. The remainder of this section describes the standard shader parameters only.

The standard material shaders support texture mapping for all standard material parameters except the index of refraction. Shininess, transparency, refraction transparency, and reflectivity are scalar values and may be mapped by a scalar map. Most shaders implement bump mapping by sampling a color map three times, but some shaders, such as the Wavefront material shader, accept a vector map that requires only a single sample.
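The three-sample technique can be pictured as finite differencing of a height value stored in the map. A minimal Python sketch, assuming a hypothetical scalar `height` lookup function:

```python
def bump_gradient(height, u, v, eps=1e-3):
    """Approximate the (x, y) bump displacements from three samples of a
    scalar height map: one at (u, v) and one offset along each axis."""
    h0 = height(u, v)
    bx = (height(u + eps, v) - h0) / eps
    by = (height(u, v + eps) - h0) / eps
    return bx, by
```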

Determining the  texture coordinates of a point on a surface to be shaded requires defining a mapping from points in  camera space to points in  texture space. Such a mapping is itself referred to as a  texture space for the surface. Multiple texture spaces may be specified for a surface. If the geometry is a  polygon, a texture space is created by associating texture vertices with the geometric vertices. If the geometry is a  free-form surface, a texture space is created by associating a  texture surface with the surface. A texture surface is a free-form surface which defines the mapping from the natural surface parameter space to texture space. Texture maps, and therefore texture spaces and texture vertices, may be one, two, or three dimensional.
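For a triangle of a polygon, the texture coordinates of an interior point follow from the texture vertices by barycentric interpolation. A small illustrative Python sketch (not mental ray code), shown for the two-dimensional case:

```python
def interpolate_uv(bary, uv0, uv1, uv2):
    """Texture coordinates at a point given its barycentric weights over a
    triangle whose vertices carry texture vertices uv0, uv1, uv2."""
    w0, w1, w2 = bary
    return (w0 * uv0[0] + w1 * uv1[0] + w2 * uv2[0],
            w0 * uv0[1] + w1 * uv1[1] + w2 * uv2[1])
```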

Pyramid textures are a variant of mip-map textures. When loading a texture that is flagged with the filter keyword, mental ray builds a hierarchy of different-resolution texture images that allows elliptical filtering of texture samples. Without filtering, distant textures would be point-sampled at widely separated locations, missing the texture areas between the samples and causing texture aliasing. Texture filtering attempts to project the screen pixel onto the texture, which results in an elliptical area on the texture. Pyramid textures allow this ellipse to be sampled very efficiently, taking every texture pixel inside the ellipse into account without sampling each one individually. Pyramid textures are not restricted to square or power-of-two resolutions, and work with any RGB or RGBA picture file format. The shader can either rely on mental ray's texture projection or specify its own. Filter blurriness can be adjusted per texture.
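The pyramid idea can be sketched in Python: repeatedly average 2x2 texel blocks into coarser levels, then pick a level whose texel spacing roughly matches the sample footprint. This toy version assumes a square power-of-two grayscale image (mental ray itself has no such restriction) and is an illustration only:

```python
import math

def build_pyramid(img):
    """img: square 2-D list of floats with power-of-two size.
    Returns the list of levels, finest first, coarsest (1x1) last."""
    levels = [img]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        n = len(prev) // 2
        levels.append([[(prev[2*y][2*x] + prev[2*y][2*x+1] +
                         prev[2*y+1][2*x] + prev[2*y+1][2*x+1]) / 4.0
                        for x in range(n)] for y in range(n)])
    return levels

def level_for_footprint(pyramid, texels_covered):
    """Pick the level whose texel spacing matches a footprint spanning
    roughly texels_covered texels of the finest level."""
    lvl = max(0.0, math.log2(max(texels_covered, 1.0)))
    return min(int(round(lvl)), len(pyramid) - 1)
```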

A procedural texture is free to use the texture space in any way it wants, but texture files are always defined to have unit size and to be repeated through all of texture space. That is, the lower-left corner of the file maps to (0.0, 0.0) in texture space, and again to (1.0, 0.0), (2.0, 0.0), and so on; the lower-right corner maps to (1.0, 0.0), (2.0, 0.0), ..., and the upper-right corner to (1.0, 1.0), (2.0, 1.0), ....
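The unit-size repetition amounts to wrapping the coordinates into [0, 1) before indexing the image. A minimal Python sketch with nearest-neighbour lookup (illustrative names, not mental ray code):

```python
def file_texture_lookup(pixels, u, v):
    """Nearest-neighbour lookup in a unit-size, infinitely repeated image.
    pixels: 2-D list indexed [row][col], row 0 at the lower-left corner."""
    h, w = len(pixels), len(pixels[0])
    x = int((u % 1.0) * w) % w    # wrap into the unit tile, then index
    y = int((v % 1.0) * h) % h
    return pixels[y][x]
```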

Just as a  texture map can vary a parameter such as the diffuse color over every point on a surface, a  bump map can be associated with a  material, perturbing the normal at every point on a surface which uses the material. This will affect the shading, though not the geometry, giving the illusion of a pattern being embossed on the surface.

Bump maps, like  texture maps, require a texture space. In addition, bump maps require a pair of basis vectors to define the coordinate system in which the normal is displaced. A  bump map defines a scalar x and a scalar y displacement over the texture space. These components are used together with the respective basis vectors in order to calculate a perturbed surface normal. The basis vectors are automatically defined for  free-form surfaces in a way which conforms to the texture space. For polygons, the basis vectors must be explicitly given along with the  texture coordinates for every polygon vertex.
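The perturbation can be written out directly: offset the normal along the two basis vectors by the bump components and renormalize. A hedged Python sketch of this calculation:

```python
def perturb_normal(n, bu, bv, bx, by):
    """Perturb unit normal n by bump displacements bx, by taken along the
    basis vectors bu, bv, and renormalize the result."""
    p = [n[i] + bx * bu[i] + by * bv[i] for i in range(3)]
    length = sum(c * c for c in p) ** 0.5
    return [c / length for c in p]
```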

A  displacement map is a  scalar map which is used to displace a  free-form surface or a  polygon at each point in the direction of the local normal. Like texture, bump and  reflection maps, a  displacement map may be either a file or a user-defined function, or a combination of the two.

The surface must be triangulated fine enough to reveal the details of the displacement map. In general, the triangles must be smaller than the smallest feature of the displacement map which is to be resolved.

Displacement-mapped polygons are first triangulated as ordinary polygons. The initial triangulation is then further subdivided according to the specified approximation criteria. The parametric technique subdivides each triangle a given number of times. All the other techniques take the displacement into account. The length criterion, for example, limits the size of the edges of the triangles of the displaced polygons and ensures that at least all features of this size are resolved. Because the displaced surface is not known analytically, the distance criterion compares the displacements of the vertices of a triangle with each other; the criterion is fulfilled only if they differ by less than the given threshold, so subdivision is finest in areas where the displacement changes. The angle criterion limits the angle at which two triangles meet in a shared edge of the triangulation. Subdivision stops as soon as the given criterion, or combination of criteria, is satisfied or the maximum subdivision level is reached. This does not preclude the possibility that at an even finer scale new details may show up which would again violate the approximation criteria.
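The distance criterion with a maximum subdivision level can be sketched in Python as a recursive midpoint split of (u, v) triangles. This is a toy version for illustration; the real tessellator also applies the length and angle criteria:

```python
def needs_subdivision(displace, tri, threshold):
    """Distance criterion: vertex displacements must agree within threshold."""
    d = [displace(u, v) for (u, v) in tri]
    return max(d) - min(d) > threshold

def subdivide(displace, tri, threshold, max_level, level=0):
    """Split a (u, v) triangle at its edge midpoints until the distance
    criterion is met or the maximum subdivision level is reached."""
    if level >= max_level or not needs_subdivision(displace, tri, threshold):
        return [tri]
    a, b, c = tri
    mab = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    mbc = ((b[0] + c[0]) / 2, (b[1] + c[1]) / 2)
    mca = ((c[0] + a[0]) / 2, (c[1] + a[1]) / 2)
    out = []
    for t in [(a, mab, mca), (mab, b, mbc), (mca, mbc, c), (mab, mbc, mca)]:
        out.extend(subdivide(displace, t, threshold, max_level, level + 1))
    return out
```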

For displacement-mapped free-form surfaces, approximation techniques can be specified either on the underlying geometric surface or for the surface resulting from the displacement. Before mental ray version 2.0, only the former method existed. It is still available, but it does not take into account variations in curvature imparted to the surface as a result of displacement mapping. If one wants to control the approximation from the geometric surface, the most suitable technique for use with displacement mapping on free-form surfaces is usually the view-dependent uniform spatial subdivision technique, which allows specification of triangle size in raster space. An alternative is to place special curves on the surface which follow the contours or isolines of the displacement map, thus creating flexibility in the surface tessellation at those points where it is most needed for displacement. This also facilitates the approximation of the displacement map by the new adaptive triangulation method. In addition to, or even instead of, specifying the subdivision criteria for the base surface, they can be given for the displaced surface itself. This approximation statement works exactly the same way as for polygons, i.e. an initial tessellation is subdivided until the criteria on the displaced surface are met.

The final type of map which may be associated with a  material is an  environment map. This is a color-mapped virtual sphere of usually infinite radius which surrounds any object referencing the given  material. This sphere is also seen by refracted rays; the environment seen by first-generation (primary) rays can also be specified but is part of the camera, not of any particular material. In general, if a ray does not intersect any objects, or if casting such a ray would exceed the  trace depth, the ray is considered to strike the sphere of the  environment map of the last material visited, or the camera environment map in the case of first-generation rays that did not hit any material.

The  environment map always covers the entire sphere exactly once. Rotations, translations, and repetitions are supported by special shader parameters or remapping shader nodes such as the ones found in the base shader library. There are also environment shaders that implement cubical instead of spherical environment maps, and cubical maps of finite instead of infinite size.
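A spherical environment lookup maps a ray direction to coordinates covering the sphere exactly once, typically via a longitude/latitude parameterization. A hedged Python sketch (actual shader conventions for the axes and seam vary):

```python
import math

def env_uv(direction):
    """Map a unit ray direction (x, y, z) to (u, v) in [0, 1) x [0, 1] on a
    spherical environment map: u from the longitude, v from the latitude,
    with v = 0 at the pole along +y. Axis conventions are an assumption."""
    x, y, z = direction
    u = (math.atan2(x, z) / (2.0 * math.pi)) % 1.0
    v = math.acos(max(-1.0, min(1.0, y))) / math.pi
    return u, v
```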



Footnotes

... JFIF/JPEG
The JPEG software is based in part on the work of the Independent JPEG Group.

Copyright 2000 by mental images