
90 3 Describing Models and Scenes in RenderMan

MotionBegin times

MotionBegin initiates a motion block. A motion block contains two or more RI API calls that are time-sampled versions of the same call at different points in time. The parameter times is an array of floating-point numbers specifying the time to which each of the following samples corresponds. Naturally, the number of samples in the block must match the number of values in the times array.

MotionEnd

The MotionEnd call ends a motion block. At this point, the renderer validates that it can correctly interpolate (through time) between the motion samples provided. If there are errors in the block, such as mismatched API calls or topological inconsistencies, the entire motion block will typically be discarded.

For example, a sphere that is inflating from a radius of 1.0 to a radius of 1.5 can be specified with the following snippet of RIB:

MotionBegin [ 0.0 1.0 ]
    Sphere 1 -1 1 360
    Sphere 1.5 -1.5 1.5 360
MotionEnd

RenderMan is very particular about the data that appears in a motion block. In particular, each time sample

must be the same API call

must be on the short list of API calls that are amenable to motion blur

must differ only in parameter data that can be interpolated through time

This is because the renderer must be able to generate interpolated versions of the calls at any point in time at which it needs data. Not all API calls and not all parameters can be interpolated, so time samples are restricted to the set that can.

PRMan and BMRT implement motion blur on slightly different sets of data. Both can handle two-sample motion blur of all transformation calls (such as Rotate, Translate, or ConcatTransform). PRMan can motion blur primitive geometry calls (excepting Procedural) quite robustly, though BMRT's ability to do so is restricted to only certain primitives. Both renderers also require that any primitive geometry so blurred must be "topologically equivalent." That is, the control points of the primitives can move, but the number of polyhedral facets, edge connectivity, or any other values that affect the topology of the primitive are not allowed to change between samples.
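For instance, a transformation motion block that translates a (non-deforming) sphere during the frame might be written as follows (an illustrative sketch; shutter times of 0 and 1 are assumed):

    MotionBegin [ 0.0 1.0 ]
        Translate 0 0 0
        Translate 0.5 0 0
    MotionEnd
    Sphere 1 -1 1 360

Because the blur is carried by the transformation, the sphere itself needs no time samples of its own.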

In addition, recent versions of PRMan permit motion blocks that are slightly more general, such as those with more than two time samples in the block or those with time samples that do not occur exactly at shutter times. PRMan also permits geometry to be created (or destroyed) during the frame time so that it can appear to "blink in" (or out). The instantaneous occurrence of a transformation is not permitted because it implies that objects have discontinuous motion.

Neither PRMan nor BMRT currently supports motion blur of shading parameters, although this will probably be implemented by one or both eventually. This has three important consequences. First, shading and other attribute calls are not on the "short list" of motion-blurrable API calls, so there is no way to specify that a primitive changes color during a frame. Second, as objects move, they do not correctly respond to the changing lighting environment around them. In PRMan, shading samples occur at some particular time (depending on several factors), and the moving geometry that inherits this color "drags" it across the screen as it moves. If an object leaves a shadow, for example, it drags its darkened in-shadow color with it out of the shadow. Note that BMRT suffers this particular problem only on the radiosity pass, not on the ray-tracing pass. Third, as objects move, the lighting changes that they imply for neighboring objects do not occur. For example, the shadow cast by a moving object does not move, because the shadow receiver is not sampling the shadow through time.

3.8.2 Depth of Field

Another effect that real cameras exhibit, but that most computer graphics cameras do not, is depth of field. Depth of field refers to the photographic effect that occurs when a camera focuses at a particular distance. Objects at or near that distance from the camera appear in focus, while objects far from that distance are blurry. Most computer graphics cameras do not simulate out-of-focus objects; they are equivalent to a pinhole camera, which has infinite depth of field (all distances are in focus, so nothing is ever out of focus).

The depth of field depends on three parameters: the focal length of the lens, the distance at which the camera is focused, and the diameter of the aperture, expressed as its f-stop. RenderMan allows these camera parameters to be specified as part of the scene description, and RenderMan renderers then simulate the appropriate amount of blurriness for the objects in the scene.

DepthOfField fstop focallength focaldistance

The DepthOfField call is a camera option that sets the parameters of the virtual camera that determine the camera's depth of field. The focallength specifies the focal length of the camera's lens, which for most modern physical cameras is in the range of 30 to 300 mm. The focaldistance specifies the distance at which the camera is focused. Both of these distances are measured in the units of "camera" space. For example, if one unit in the "camera" coordinate system is 1 meter, then a reasonable number for focallength is 0.055, and focaldistance might be 5.0. On the other hand, if one unit in the "camera" coordinate system is 1 inch, a reasonable number for focallength is more like 2.2. The fstop specifies the camera's aperture f-stop in the normal way (a value between 2.8 and 16 is typical).

To turn off depth-of-field calculations, we specify the camera to be a pinhole camera; this is the default. A pinhole camera has an infinite f-stop and consequently infinite depth of field. This can be done by specifying infinity for fstop, in which case the values of the other parameters are irrelevant. As a shortcut, specifying DepthOfField with no parameters does the same thing.
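For example, with "camera" units of meters, a 55mm lens focused at 5 meters with an aperture of f/5.6 might be specified as follows (an illustrative sketch):

    DepthOfField 5.6 0.055 5.0

and the pinhole default can be restored in a later frame simply with

    DepthOfField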

Because depth of field requires a large amount of blurring of objects that are significantly out of focus, the stochastic sampling algorithms that compute it usually require extremely large numbers of samples to create an image with acceptable amounts of grain and noise. For this reason, accurate depth of field is quite expensive to compute. On the other hand, simple approximations of depth of field can be made with blurred composited layers. Although there are limits to the generality and the fidelity of this 2½-D approximation, it is so much faster that it is often used instead of rendered DepthOfField.

3.9 The Rest of the Story

There are some options and attributes defined in the RenderMan Interface Specification that, with 20/20 hindsight, we think should not have been there, and they are rarely used. Perhaps they have never been implemented by any renderer, or perhaps they have been subsumed by more powerful mechanisms provided in a different part of the interface. Without spending too much space going into the details of unimportant calls, we will simply mention that the API has these calls and refer the curious to the RenderMan Interface Specification and/or renderer documentation to learn the details. These calls include:

TextureCoordinates: sets default texture coordinates on patches

Deformation: sets the deformation shader to be used

Perspective: pushes a perspective matrix onto the modeling transformation

Exterior: sets the volume shader to be used on reflected rays

Interior: sets the volume shader to be used on refracted rays

Bound: provides a bounding box for subsequent primitives

MakeBump: creates bump texture maps

ObjectBegin/ObjectEnd: creates a piece of reusable geometry

PixelVariance: provides an error bound for sampling algorithms that can compute error metrics

Imager: sets the imager shader to be used on pixels prior to display

RiTransformPoints: a C-only call that transforms data between named coordinate systems

Geometric Primitives

Probably the most important RI API calls are the ones that draw geometric primitives. The original specification had a wide variety of geometric primitives, and over the years, renderers have added new geometric primitive types to meet the evolving needs of the users.

Many graphics APIs support only a small number of "fundamental" primitives, such as triangle strips or perhaps polyhedra, under the assumptions that (1) any other primitive can always be tessellated into or approximated by these, and (2) it simplifies the renderer to optimize it for a few well-understood primitives. These primitives, which are chosen for their ease of rendering, might be considered to be "drawing" primitives.


RenderMan, on the other hand, supports a large variety of high-level curved-surface primitives, which we consider to be "modeling" primitives. It does so for three reasons. First, high-level primitives are a very compact way to represent an object. Tessellating primitives into large collections of simpler ones will clearly increase the size of the geometric database, and usually dramatically so. Second, they are more appropriate for a photorealistic renderer. Tessellated primitives usually show artifacts such as polygonal silhouettes, which need to be avoided in high-quality renderings. Third, a renderer with a different algorithm might find them significantly more efficient to process. A ray tracer, for example, can make short work of a single sphere, but it takes a lot of extra (wasted) computation to handle a tessellated version.

Of course, RenderMan doesn't try to implement every possible geometric primitive that has ever been suggested in the computer graphics literature. It also relies on the modeler approximating any other primitive types with some from the basic set. But the basic set has enough variety and spans enough computational geometry to handle almost anything.

4.1 Primitive Variables

On a few occasions now we have alluded to primitive variables, which are attached to the geometric primitives. Primitive variables refer to the geometric, appearance, and other data that the modeler provides on the parameter list of each geometric primitive.

As we will soon see, nearly every primitive is made up of vertices that are combined into strings of edges that outline facets, which are then joined to form the geometric primitive. The minimal data necessary to describe the shape and position of each geometric primitive is the vertex data itself. This is provided (on vertex-based primitives) with a parameter list entry for the variable "P" (or in some cases "Pw"). It is common for simple graphics APIs to provide for additional visual information at each vertex as well, for example, specifying the surface color at each vertex or putting a Phong normal vector at each vertex. The parameter list mechanism provides this functionality easily by specifying variables such as "Cs" or "N" for the primitive. Table 4.1 lists several of the commonly used predefined primitive variables.
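For instance, a single triangle carrying a color and a normal at each vertex might be written as follows (an illustrative sketch):

    Polygon "P"  [ 0 0 0   1 0 0   0 1 0 ]
            "Cs" [ 1 0 0   0 1 0   0 0 1 ]
            "N"  [ 0 0 1   0 0 1   0 0 1 ]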

RenderMan generalizes this simple concept in two powerful ways. First, RenderMan provides that any visual attribute or geometric parameter that might be interesting to the shaders on a surface can be provided on each vertex. This includes a relatively rich set of data that becomes the global variables of the shaders (see Chapter 7). But more powerfully and uniquely, it includes any or all of the parameters to the shaders themselves. This means that shader writers can define arbitrary interesting data as being parameters to the shader (perhaps geometric information, perhaps a new surface property, perhaps something completely unique) and instruct the modeler to attach that data to the primitives, extending the attribute set of the renderer at runtime. Because primitive variables are for the benefit of the shaders that are attached to objects, they can be created using any data type that the Shading Language accepts. Second, RenderMan provides that this data attached to the primitives can have a variety of different granularities, so data that smoothly changes over the surface can be described as easily as data that is identical over the whole surface. There are four such granularities, known as storage classes. Some RenderMan geometric primitives are individual simple primitives, whereas others are collections of other primitives connected in some way. The number of data values required for a particular class depends on the type and topology of the primitive.

vertex: takes data at every primitive vertex. There are as many data values in a vertex variable as there are in the position variable "P", and the data is interpolated using the same geometric equations as "P" (bicubically, with the basis matrix, etc.).

varying: takes data at every parametric corner. For simple parametric primitives, there are obviously four such corners. For patch meshes and other collective parametric primitives, the number of corners is data dependent. varying data is interpolated bilinearly in the parametric space of the primitive.

uniform: takes data at every facet and is identical everywhere on the facet. For individual primitives, there is a single facet, but for polyhedra and other collective primitives, the number of facets is data dependent.

constant: takes exactly one piece of data, no matter what type of primitive, and so is identical everywhere on the primitive.

The formulas for determining exactly how many data values are required for each class on each primitive type are therefore dependent on the topology of the primitive and will be mentioned with the primitives as they are discussed. Notice that when primitive variable data is accessed inside the Shading Language, there are only two data classes, uniform and varying. All constant and uniform RI primitive variables become uniform Shading Language variables. All varying and vertex RI primitive variables become varying Shading Language variables.
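As an illustrative sketch, a single bilinear patch carrying data of two different classes; here "rustiness" stands in for a hypothetical shader parameter, not a predefined variable:

    Patch "bilinear" "P" [ 0 0 0   1 0 0   0 1 0   1 1 0 ]
          "varying color Cs" [ 1 0 0   0 1 0   0 0 1   1 1 1 ]
          "constant float rustiness" [ 0.3 ]

For a single patch, vertex and varying parameters each take four values (one per corner), while uniform and constant parameters each take one.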


When a primitive variable of type "point", "vector", "normal", or "matrix" (any of the geometric data types) is supplied for a primitive, the data are assumed to be expressed in the current coordinate system, that is, in "object" coordinates. However, as we will see in Section 7.2.3, shader calculations do not happen in "object" space. Therefore, there is an implied coordinate transformation that brings this "object" space data into the Shading Language's "current" space before the shader starts execution.

4.2 Parametric Quadrics

The simplest primitives to describe are the seven quadrics that are defined by RI. Each quadric

is defined parametrically, using the trigonometric equation that sweeps it out as a function of two angles

is created by sweeping a curve around the z-axis in its local coordinate system, so z is always "up." Sweeping a curve by a negative angle creates a quadric that is inside out

has simple controls for sweeping a partial quadric, using ranges of z or the parametric angles

is placed by using a transformation matrix, since it has no built-in translation or rotational controls

has a parameter list that is used solely for applying primitive variables, and so does not affect the shape of the primitive

requires four data values for any vertex or varying parameter, one for each parametric corner, and one data value for any uniform or constant parameter.
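Since a quadric has no placement controls of its own, positioning one is a matter of the transformation stack, for example (an illustrative sketch):

    TransformBegin
        Translate 0 2 5
        Rotate -90 1 0 0      # reorient the local z-axis
        Sphere 1 -1 1 360
    TransformEnd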

The seven quadric primitives are illustrated in Figures 4.1 and 4.2. In each of the following descriptions, the parameters are floating-point values (with the exception of point1 and point2, which are obviously points). Angles are all measured in degrees.

Sphere radius zmin zmax sweepangle parameterlist

Sphere creates a partial or full sphere, centered at the origin, with radius radius. zmin and zmax cut the top and bottom off of the sphere to make ring-like primitives. As with all the quadrics, sweepangle (denoted θmax in this chapter's figures) controls the maximum angle of sweep of the primitive around the z-axis. With these controls, hemispheres can be made in two different ways (around the z-axis or around the y-axis).
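For instance (an illustrative sketch):

    Sphere 1 0 1 360     # hemisphere made with zmin, cut at the equator
    Sphere 1 -1 1 180    # hemisphere made with sweepangle, cut by a vertical plane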

Cylinder radius zmin zmax sweepangle parameterlist

Cylinder creates a partial or full cylinder with radius radius. Because zmin and zmax are arbitrary, it can be slid up and down the z-axis to match the location of other quadric primitives.
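As a sketch, a closed can may be assembled from a cylinder and two disks of the same radius (the Disk primitive is described below):

    Cylinder 0.5 0 2 360
    Disk 2 0.5 360       # cap at the top, z = 2
    Disk 0 0.5 360       # cap at the bottom, z = 0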


Figure 4.1 The parametric quadrics (part I).

Cone height radius sweepangle parameterlist

Cone creates a cone that is closed at the top (at the position (0.0, 0.0, height)) and open with radius radius at the bottom (on the x-y plane).

Paraboloid topradius zmin zmax sweepangle parameterlist

Paraboloid creates a partial paraboloid, swept around the z-axis. The paraboloid has its minimum at the origin and radius topradius at height zmax; only the portion above zmin is drawn.

Hyperboloid point1 point2 sweepangle parameterlist

The Hyperboloid (of one sheet) is perhaps the hardest quadric to visualize. It is created by rotating a line segment around the z-axis, where the segment is defined by the two points point1 and point2. If point1 and point2 are not both in an axial plane, it will generate a cooling-tower-like shape.

Figure 4.2 The parametric quadrics (part II).

The hyperboloid is actually quite a flexible superset of some of the other primitives. For example, if these points have the same x- and y-coordinates, and differ only in z, this will create a cylinder. If the points both have the same z-coordinate, it will make a planar ring (a disk with a hole cut out of the center). If the points are placed so that they have the same angle with the x-axis (in other words, are on the same radial line if looked at from the top), they will create a truncated cone. In truth, some of these special cases are more useful for geometric modeling than the general case that creates the "familiar" hyperboloid shape.
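These special cases might be written as follows (an illustrative sketch; the two points are given as six floats):

    Hyperboloid 1 0 0    1 0 2    360    # a cylinder of radius 1
    Hyperboloid 0.5 0 0  1 0 0    360    # a planar ring from radius 0.5 to 1
    Hyperboloid 0.5 0 0  1 0 2    360    # a truncated cone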

Disk height radius sweepangle parameterlist

The Disk primitive is usually used to cap the tops or bottoms of partial quadrics such as cylinders, hemispheres, or paraboloids. For this reason, it has a height control, which allows it to be slid up and down the z-axis, but it stays parallel to the x-y plane. Partial sweeps look like pie segments.

Torus majorradius minorradius phimin phimax sweepangle parameterlist

Torus creates the quartic "donut" surface (so it isn't a quadric, but it is defined with two angles, so we let it go). The majorradius defines the overall size of the torus (the distance from the z-axis to the center of the cross section), and minorradius defines the radius of the cross section itself. The cross section of a torus is a circle on the x-z plane, and the angles phimin and phimax define the arc of that circle that will be swept around z to create the torus.

 
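For instance (an illustrative sketch of full and partial tori):

    Torus 2 0.5 0 360 360    # the full donut: ring radius 2, tube radius 0.5
    Torus 2 0.5 -90 90 360   # only the outer half of the tube's cross section
    Torus 2 0.5 0 360 180    # a full cross section swept halfway around z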

4.3 Polygons and Polyhedra

Compared to other graphics APIs, RenderMan appears to have less support for polygons and polyhedra. This is somewhat true because the RenderMan API was clearly optimized for parametric curved surfaces, and polygons are neither parametric nor curved. However, the major difference is that most graphics APIs consider polygons to be drawing primitives, whereas RenderMan considers polygons to be modeling primitives. As a result, those other graphics APIs have many variants of the same primitive based on hardware drawing efficiency constraints (for example, triangles, triangle meshes, triangle strips, etc.) or coloring parameters. RenderMan has only four polygon primitives, based on one modeling constraint and one packaging efficiency.

RenderMan recognizes that there are two types of polygons: convex and concave. Convex polygons are loosely defined as polygons where every vertex can be connected to every other vertex by a line that stays within the polygon. In other words, they don't have any indentations or holes. Concave polygons do. This difference matters because renderers can usually make short work of convex polygons by chopping them quickly into almost random small pieces, whereas concave polygons require careful and thoughtful segmentation in order to be cut into smaller chunks without losing or adding surface area. In either case, polygons must be planar in order to render correctly in all renderers at all viewing angles.

RenderMan calls convex polygons polygons and concave polygons general polygons. General polygons also include any polygons that have holes in them. Therefore, the description of a general polygon is a list of loops, the first of which specifies the outside edge of the polygon and all the rest describe holes cut out of the interior.¹ It is not an error to call a polygon "general" when it is actually not, just to be cautious. It is simply a little less efficient. However, if a general polygon is called

¹ Some modeling packages accept general polygons that have multiple disconnected "islands" as a single polygon. RenderMan does not permit this and considers these islands to be separate polygons.
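As a sketch of the two calls, a convex quad and a square with a square hole cut out of its interior (the array gives the vertex count of each loop):

    Polygon "P" [ 0 0 0   4 0 0   4 4 0   0 4 0 ]

    GeneralPolygon [ 4 4 ]
        "P" [ 0 0 0   4 0 0   4 4 0   0 4 0
              1 1 0   1 3 0   3 3 0   3 1 0 ]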
