Perhaps one of the most enjoyable topics in the 3D adventure is producing "volumetric" effects. What are these effects? They are natural phenomena such as clouds, fire, or fog, created and rendered realistically. Some forms of light can also be described as volume effects (e.g. volume lights and god rays). Producing such effects is very difficult and requires a special technique. This technique, which was used in medical, seismic, and other scientific fields long before it entered 3D applications, is called "Volume Rendering". The term refers to visualizing volumetric datasets, and it has been studied and developed for over 20 years; what we use today is the fruit of that development. When it comes to rendering, however, it is a time-consuming process. As we have mentioned before, every good thing has a price, and in 3D rendering that price comes back to us as time.


Since it is a very technical subject, we will give only a brief description here. Volume rendering means rendering voxel-based data into a viewable 2D image. Our display screens are composed of a two-dimensional array of pixels, each representing a unit of area. A volume, by contrast, is a three-dimensional array of cubic elements, each representing a unit of space. The individual elements of this three-dimensional space are called volume elements, or voxels. The voxel (also called a "volume cell") is the basic element of the volume and the 3D conceptual counterpart of the 2D pixel. Each voxel is a quantum unit of volume and has a numeric value (or values) associated with it that represents some measurable property or independent variable of the real object or phenomenon.
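The idea above can be sketched in a few lines of NumPy. This is a minimal illustration, not how any particular renderer stores its data: a volume as a 3D array of voxels, each holding one scalar value (here, a made-up density that is 1 at the center of a sphere and falls to 0 at the edges).

```python
import numpy as np

res = 32
coords = np.indices((res, res, res)).astype(float)    # grid coordinates of every voxel
center = (res - 1) / 2.0
dist = np.sqrt(((coords - center) ** 2).sum(axis=0))  # distance of each voxel from the center
volume = np.clip(1.0 - dist / center, 0.0, 1.0)       # scalar value per voxel: dense core, empty edges

print(volume.shape)        # (32, 32, 32) -- one scalar per voxel
print(volume[16, 16, 16])  # close to 1.0 near the center
print(volume[0, 0, 0])     # 0.0 at the corner, outside the sphere
```

The array `volume` is the scalar field the next paragraph talks about: the whole collection of per-voxel values.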

The collection of all these values is called a scalar field on the volume. The set of all points in the volume with a given scalar value is called a level surface. Volume rendering is the process of displaying scalar fields; it is a method for visualizing a three-dimensional data set. The interior information of the data set is projected onto the display screen: along the ray path from each screen pixel, interior data values are examined and encoded for display. How the data is encoded depends on the application. Seismic data, for example, is often examined to find the maximum and minimum values along each ray; the values can then be color coded to give information about the width of the interval and the minimum value. In medical applications, the data values are opacity factors in the range from 0 to 1 for the tissue and bone layers: bone layers are completely opaque, while tissue is somewhat transparent. Voxels can represent various physical characteristics, such as density, temperature, velocity, or pressure, and other measurements, such as area and volume, can be extracted from the volume datasets.
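The "examine values along each ray" step can be sketched very simply if we assume one axis-aligned ray per screen pixel (real renderers cast arbitrary rays; this is only an illustration of the encoding idea, using random data in place of a real seismic dataset):

```python
import numpy as np

# A 16x16x16 block of scalar data standing in for a real dataset.
rng = np.random.default_rng(0)
volume = rng.random((16, 16, 16))

# One ray per (x, y) screen pixel, traveling along z. As in the seismic
# example: encode the maximum and minimum value met along each ray.
max_image = volume.max(axis=2)
min_image = volume.min(axis=2)

print(max_image.shape)  # (16, 16): one encoded value per screen pixel
```

Swapping the reduction (max, min, sum, first-above-threshold, ...) is exactly what "how the data is encoded depends on the application" means.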

Where are my polygons?

To put it briefly: there are no polygons in the case of a volume. Volume rendering is a process that lets us hunt through the data and see what we have gathered at the end. What you would call the polygon here is the data itself. The classic isosurface rhetoric does not work in this realm: polygons are not needed to produce meaningful results from 3D voxel data, because the "meaning" is already there. You have to dive into that data. Therefore, while working with voxel-based volumes, we have to put the isosurface logic aside.

If there are no polygons, how do we render it?

A number of methods have been developed to visualize voxel data, and they give different outputs according to their purpose. The two major families are Direct Volume Rendering (object order, image order, hybrid order) and Indirect Volume Rendering (surface tracking, isosurfacing, domain based). Since these are very technical, we will not explain them here. The method used today in 3D applications is usually Direct Volume Rendering. In this type of rendering, every voxel in the volume is rendered directly, without conversion to geometric primitives. It usually includes an illumination model that supports semi-transparent voxels, which means every voxel in the volume is (potentially) visible: each voxel contributes to the final 2D image. The following diagram shows this process.
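The core of direct volume rendering, stripped of illumination and camera details, is a compositing loop: march through the voxels along each ray and let every semi-transparent voxel contribute to the pixel. The sketch below assumes one axis-aligned ray per pixel and made-up density and emission values; a real renderer casts arbitrary rays and adds a proper lighting model.

```python
import numpy as np

rng = np.random.default_rng(1)
density = rng.random((8, 8, 32)) * 0.1   # small per-voxel opacity
emission = rng.random((8, 8, 32))        # per-voxel "color" (grayscale here)

color = np.zeros((8, 8))
transmittance = np.ones((8, 8))          # how much light still reaches the eye
for z in range(density.shape[2]):        # march front to back through the voxels
    a = density[:, :, z]                 # opacity of this voxel slab
    color += transmittance * a * emission[:, :, z]  # every voxel contributes
    transmittance *= (1.0 - a)           # semi-transparent voxels attenuate the rest

print(color.shape)  # (8, 8): the final 2D image
```

Note that no surface is ever extracted: the image is accumulated directly from the voxel values, which is exactly what separates this from the indirect (isosurfacing) family.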


VDB (created by DreamWorks) is a generic volume format used to create effects such as smoke, fog, vapor, and similar gaseous objects. VDBs are usually generated and exported from other 3D software packages such as Houdini. A number of VDB files are also available for download at www.openvdb.org/download. A VDB can be a single frame or an animated file sequence. We will explain this topic in more detail in the VDB Loader section.

For more information about OpenVDB, please see http://www.openvdb.org
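What makes VDB efficient is that it is a sparse, hierarchical grid: only "active" voxels (the cloud itself) are stored, and empty space costs almost nothing. The real data structure is far more sophisticated (see openvdb.org); this toy stand-in, using a plain dictionary with a background value, only illustrates the sparsity idea.

```python
# Store values only where the volume is non-empty; everything else
# implicitly takes the background value (0.0 here).
sparse = {}
sparse[(10, 4, 7)] = 0.8   # density stored only where the cloud exists
sparse[(11, 4, 7)] = 0.5

def density_at(ijk):
    """Look up a voxel; unstored (inactive) voxels return the background."""
    return sparse.get(ijk, 0.0)

print(density_at((10, 4, 7)))  # 0.8
print(density_at((0, 0, 0)))   # 0.0 -- empty space, nothing stored
```

A dense 1000³ grid would need a billion values; a sparse structure only pays for the voxels the cloud actually occupies.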


You will often use these 2 options in all volume effects. Unlike the medium previously in the material section, these 2 options now work according to the voxel grid. And as input, they refer to the Voxel grid entirely. In the "Volume Effects" section you will find plenty of examples to use. As you can see in the below image, the HDR is placed as a background of the  VDB cloud. It's also a lighting model. Octane Daylight is also added to enhance the sun effect. As you can see, Scattering and Absorption are almost at maximum levels. This is because of both the density and the setting of the step length. A cloud of this mass will absorb minimal of light. But scattering has spread throughout the entire cloud. The picture taken from the Live Viewer. No compositing done. You can find this VDB cloud tutorial in the Volume effect section.