mental ray has been designed to take full advantage of parallel hardware. On multiprocessor machines that provide the necessary facilities, it automatically exploits thread parallelism, where multiple threads of execution access shared memory. No user intervention is required to take advantage of this type of parallelism. mental ray is also capable of exploiting thread-level and process-level parallelism where multiple threads or processes cooperate in the rendering of a single image but do not share memory. This is done using a distributed shared database that provides demand-driven, transparent sharing of database items on multiple systems. This allows parallel execution across a network of computers.
Load balancing and distribution are achieved by subdividing the entire rendering operation into a large number of small jobs. Jobs may perform a wide variety of operations: tessellating a surface, loading a texture, casting a bundle of rays, rendering a small section of the image, balancing a photon map, and many others. mental ray 2.1 executes these jobs in phases: first the scene is read into memory, then all objects are tessellated, then photon maps and shadow maps are generated, and finally the image is rendered. Each phase involves execution of all the jobs required for successful completion of that phase. Jobs are distributed to all available threads and hosts.
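As an illustration only, and not mental ray code, the following C sketch shows the shared-memory side of this model: the work is split into many small jobs held in a shared queue, and a pool of threads repeatedly pulls the next job from the queue until none are left. The job count, thread count, and the render_region placeholder are invented for this example.

    /* Hypothetical sketch of job-based thread parallelism (not mental ray code). */
    #include <pthread.h>
    #include <stdio.h>

    #define NUM_JOBS    64   /* e.g. one job per small image region */
    #define NUM_THREADS 4

    static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
    static int next_job = 0;             /* shared job queue, reduced to an index */

    static void render_region(int job)   /* stand-in for the real work of a job */
    {
        printf("rendering image region %d\n", job);
    }

    static void *worker(void *arg)       /* one rendering thread */
    {
        (void)arg;
        for (;;) {
            pthread_mutex_lock(&queue_lock);
            int job = (next_job < NUM_JOBS) ? next_job++ : -1;
            pthread_mutex_unlock(&queue_lock);
            if (job < 0)
                break;                   /* no jobs left */
            render_region(job);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t threads[NUM_THREADS];
        int i;
        for (i = 0; i < NUM_THREADS; i++)
            pthread_create(&threads[i], NULL, worker, NULL);
        for (i = 0; i < NUM_THREADS; i++)
            pthread_join(threads[i], NULL);
        return 0;
    }

Because each job is small, threads that finish early simply pick up further jobs from the queue, which is what balances the load across processors.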
mental ray 3.0 uses finer-grained jobs, manages a dependency graph of jobs, and executes jobs on demand. There are no more phases; a job is simply executed when another job accesses data generated by that job. Data generated by jobs enters a memory pool called the geometry cache. (In fact, the cache manages almost all data stored in memory by mental ray, including textures, photons, images, and so on, not just geometry; the name geometry cache is mostly traditional.) Data can not only enter the cache but can also be deleted from it when memory fills up and the data has not been used for a while.
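Again as a hypothetical sketch rather than mental ray's actual implementation, the following C fragment illustrates the principle of demand-driven job execution backed by a cache: a job's result is computed only when it is first accessed, later accesses are served from the cache, and a cached result can be flushed when memory runs low and is recomputed on the next access. The Job type and the job_access and job_flush functions are invented for this example.

    /* Hypothetical sketch of demand-driven jobs with a cache (not mental ray code). */
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct Job Job;
    struct Job {
        const char *name;
        void       *result;                /* cached result, NULL if not computed or flushed */
        void      *(*execute)(Job *self);  /* recomputes the result on demand */
    };

    /* Demand-driven access: execute the job only if its result is not in the cache. */
    static void *job_access(Job *job)
    {
        if (job->result == NULL) {
            printf("executing job: %s\n", job->name);
            job->result = job->execute(job);
        }
        return job->result;
    }

    /* Cache flush: a result may be discarded under memory pressure and recomputed later. */
    static void job_flush(Job *job)
    {
        free(job->result);
        job->result = NULL;
    }

    /* Example job: "tessellate" an object into a dummy vertex buffer. */
    static void *tessellate(Job *self)
    {
        (void)self;
        return calloc(1024, sizeof(float));
    }

    int main(void)
    {
        Job tess = { "tessellate sphere", NULL, tessellate };

        job_access(&tess);   /* first access triggers execution */
        job_access(&tess);   /* second access is served from the cache */
        job_flush(&tess);    /* memory fills up: the result is discarded */
        job_access(&tess);   /* the next access recomputes the result */

        job_flush(&tess);
        return 0;
    }

mental ray 3.0 applies this principle to a whole dependency graph of jobs, so tessellation, texture loading, and similar work is performed only when its results are actually accessed, and can be repeated if the cache has discarded them in the meantime.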
Dynamic job execution and the geometry cache as the central hub of mental ray 3.0 have several advantages.
In general, geometry caching exploits scene coherence very effectively. Frequently, mental ray 3.0 achieves significantly higher speed while reducing memory usage by more than 80%. Unlike traditional geometry caching methods, this advantage is not limited to scanline-only scenes; it applies to all aspects of mental ray, including ray tracing and global illumination. Since the technique exploits scene coherence, it works best if the scene has a high degree of coherence, but the advantage degrades gracefully for less coherent scenes instead of shutting off at some point. For example, scenes that make extensive use of global illumination, which is by definition global and less coherent, require a larger cache but still benefit from it.
The host that reads or translates the scene, or that runs front-end application software into which mental ray is integrated, is called the master host. The master host is responsible for connecting to all other hosts, called slave hosts. A slave host may also act as a master host if another user runs an independent copy of mental ray; systems do not become unavailable for other jobs when used as slaves. However, running a mental ray slave on a host may significantly degrade the performance of independent interactive application programs, such as modelers, on that host.
The list of hosts to connect to is stored in the .rayhosts file. The first of the following files that exists is used as the .rayhosts file: .ray2hosts (ray 2.x) or .ray3hosts (ray 3.x), then .rayhosts, then $HOME/.ray2hosts (ray 2.x) or $HOME/.ray3hosts (ray 3.x), then $HOME/.rayhosts. Each line contains a host name, optionally followed by a colon-separated port number of the service to connect to, and an optional whitespace-separated parameter list that is passed to the host to supply additional command line parameters. Such a remote host is called a slave, while the main host that reads the .rayhosts file is called the master. The first line that literally matches the name of the master host is ignored; this allows all hosts on the network to share a single .rayhosts file, each ignoring the first reference to itself. Only masters ever access the host list. If the -hosts option is given to mental ray, the .rayhosts file is ignored and the list of slave hosts is taken from the command line; in this case, no hosts are ignored. The library version of mental ray may get its host list directly from the application.
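As an illustration only, with placeholder host names and port number, a shared .rayhosts file might look like this:

    mercury
    venus
    mars:7104

If the host mercury reads this file as the master, it ignores the first line that names itself and connects to the slaves venus and mars, the latter on port 7104.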