HA7 Task 2 - Displaying 3D Polygon Animations

How are 3D objects processed and rendered so they can be shown from different angles?

This is done through a process called rendering, which uses the graphics card and processor to produce the image as seen from a particular angle. Every 3D object in a computer is made of polygons; rendering displays only the polygons the viewer needs to see from a given angle while hiding the polygons on the other side, and it also applies any lighting and textures added by the user.
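Hiding the polygons on the far side is commonly done with back-face culling: a triangle whose surface normal points away from the viewer is simply skipped. A minimal Python sketch of that test follows; the counter-clockwise winding convention and the viewer looking down the -z axis are assumptions, not something fixed by the text above.

```python
# Back-face culling sketch. Assumes front faces are wound
# counter-clockwise and the viewer looks down the -z axis.

def cross(a, b):
    """Cross product of two 3D vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def is_front_facing(v0, v1, v2, view_dir=(0.0, 0.0, -1.0)):
    """True if the triangle faces the viewer and should be drawn."""
    e1 = tuple(b - a for a, b in zip(v0, v1))
    e2 = tuple(b - a for a, b in zip(v0, v2))
    n = cross(e1, e2)                       # triangle normal
    # Front-facing when the normal points against the view direction.
    return sum(ni * di for ni, di in zip(n, view_dir)) < 0

front = is_front_facing((0, 0, 0), (1, 0, 0), (0, 1, 0))   # faces the viewer
back = is_front_facing((0, 0, 0), (0, 1, 0), (1, 0, 0))    # faces away
```

Swapping two vertices reverses the winding, which is why the same triangle flips from front-facing to back-facing in the second call.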

API

An API, or Application Programming Interface, is a set of protocols, routines and tools used to build software applications. An API makes a programmer's work easier by letting distinct applications share data, which helps integrate and enhance the functionality of an application.



Some of these APIs include:

Direct3D -

Direct3D is a graphics API made by Microsoft, as part of DirectX, for rendering 3D objects in applications where performance is important. It uses hardware acceleration when available, allowing the 3D rendering pipeline to run on the graphics hardware.

OpenGL -

OpenGL is a cross-language, cross-platform application programming interface for rendering 2D and 3D graphics. It is used to interact with a GPU (graphics processing unit) to achieve hardware-accelerated rendering.

GRAPHICS PIPELINE

A graphics pipeline is an important concept in computer graphics: it is the sequence of stages used to convert instructions in a computer into graphics on a screen. Each API has its own pipeline, but the stages used are similar, as are the final results: the pipeline accepts a 3D primitive/model as input and produces a 2D raster image as output.

(http://common.ziffdavisinternet.com/encyclopedia_images/GRAFPIPE.GIF)


Stages of the graphics pipeline


(http://upload.wikimedia.org/wikipedia/commons/0/01/Render_Types.png)


3D geometric primitives - The scene is first created from 3D primitives, normally polygons or triangles

Modelling and transformation - This is when the model is transformed from its local co-ordinate system into the world co-ordinate system
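The modelling transform above is typically a 4x4 matrix applied to each vertex. The sketch below is a minimal pure-Python version; the particular transform (a 90-degree rotation about z followed by a translation) is just an illustrative choice.

```python
import math

# Modelling transform sketch: a 4x4 row-major matrix carries a vertex
# from local (model) co-ordinates into world co-ordinates.

def mat_mul_point(m, p):
    """Apply a 4x4 matrix to the point (x, y, z, 1); return (x', y', z')."""
    v = (p[0], p[1], p[2], 1.0)
    return tuple(sum(m[r][c] * v[c] for c in range(4)) for r in range(3))

def model_matrix(angle_z, tx, ty, tz):
    """Rotation about the z axis followed by a translation."""
    c, s = math.cos(angle_z), math.sin(angle_z)
    return [
        [c, -s, 0.0, tx],
        [s,  c, 0.0, ty],
        [0.0, 0.0, 1.0, tz],
        [0.0, 0.0, 0.0, 1.0],
    ]

# Rotate the local x axis 90 degrees about z, then shift by (5, 0, 0).
m = model_matrix(math.pi / 2, 5.0, 0.0, 0.0)
world = mat_mul_point(m, (1.0, 0.0, 0.0))   # ends up near (5, 1, 0)
```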

Camera transformation - The object is converted from the world co-ordinate system into camera co-ordinates, where the origin is traditionally the camera.
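For a camera that is only translated (no rotation), this step reduces to subtracting the camera position from every world-space point, so the camera ends up at the origin. This simplified, rotation-free case is an assumption made for the sketch:

```python
# Camera (view) transform sketch: world co-ordinates re-expressed
# relative to the camera. A full view transform would also rotate;
# here the camera is assumed to be translated only.

def world_to_camera(p, eye):
    """Move a world-space point into the space of a camera at `eye`."""
    return tuple(pc - ec for pc, ec in zip(p, eye))

# A point 10 units in front of a camera looking down -z:
cam_space = world_to_camera((5.0, 1.0, 0.0), eye=(5.0, 1.0, 10.0))
```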

Lighting - Lighting is done by adding light sources; light travels from the origin point of each source and hits the objects in the scene. This is what allows the camera to see the objects; if no light source is added, the camera sees nothing but pitch black.
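A common way to compute how brightly a surface point is lit is the Lambert (diffuse) model: brightness is proportional to the cosine of the angle between the surface normal and the direction toward the light, clamped at zero. The source doesn't name a specific model, so this is one standard choice sketched in Python:

```python
import math

# Diffuse (Lambert) lighting sketch: surfaces facing the light are
# bright, surfaces facing away receive nothing (clamped to 0).

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def diffuse(normal, light_dir, intensity=1.0):
    """Brightness of a surface with `normal`, lit from `light_dir`."""
    n = normalize(normal)
    l = normalize(light_dir)
    return intensity * max(0.0, sum(a * b for a, b in zip(n, l)))

bright = diffuse((0, 1, 0), (0, 1, 0))    # light directly above: full
dark = diffuse((0, 1, 0), (0, -1, 0))     # light behind: pitch black
```

With no light source at all, every surface evaluates to the `dark` case, which is why an unlit scene renders as pure black.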

Projection transformation - This is the transformation of the 3D scene into the 2D view as seen by the camera. The camera can be adjusted to get a good view of the 3D objects, but its output will always be a 2D image.
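At its simplest, perspective projection divides a camera-space point's x and y by its depth, so distant objects shrink toward the centre of the image. The image-plane distance `d` below is an assumed parameter for the sketch:

```python
# Perspective projection sketch: camera at the origin looking down -z,
# image plane at distance d. Dividing by depth causes foreshortening.

def project(p, d=1.0):
    """Project camera-space (x, y, z) with z < 0 onto the image plane."""
    x, y, z = p
    depth = -z                      # z is negative in front of the camera
    return (d * x / depth, d * y / depth)

near = project((1.0, 1.0, -1.0))    # close point
far = project((1.0, 1.0, -4.0))     # same offset, four times farther
```

The two calls show why a 2D image always comes out: the depth co-ordinate is consumed by the division and only (x, y) survive.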

Clipping - 3D objects (or parts of objects) that fall outside the camera's viewport are discarded, so only what the camera can actually see appears in the final image.
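After projection, clipping can be sketched as a test against the normalized device-coordinate cube; a real clipper also splits triangles that straddle the boundary, which this minimal version skips:

```python
# Clipping sketch: points outside the [-1, 1] NDC cube are discarded.

def inside_ndc(p):
    """True if a projected point (x, y, z) lies inside the unit NDC cube."""
    return all(-1.0 <= c <= 1.0 for c in p)

visible = inside_ndc((0.5, -0.25, 0.0))    # on screen, kept
clipped = inside_ndc((1.5, 0.0, 0.0))      # off screen, thrown away
```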

Scan conversion or rasterization - The 2D view of the scene is converted into a raster format. From this point on, all tasks can be carried out on each individual pixel rather than having to take multiple geometric steps.
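One standard way to rasterize a triangle is to test each pixel centre against the triangle's three edge functions; pixels inside all three edges become fragments. A small Python sketch (the counter-clockwise winding is an assumption):

```python
# Rasterization sketch: test each pixel centre against the three
# edge functions of a counter-clockwise 2D triangle.

def edge(a, b, p):
    """Signed area term: >= 0 when p is on the inner side of edge a->b."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def rasterize(v0, v1, v2, width, height):
    """Return the (x, y) pixel co-ordinates covered by the triangle."""
    pixels = []
    for y in range(height):
        for x in range(width):
            p = (x + 0.5, y + 0.5)          # sample at the pixel centre
            if (edge(v0, v1, p) >= 0 and
                    edge(v1, v2, p) >= 0 and
                    edge(v2, v0, p) >= 0):
                pixels.append((x, y))
    return pixels

# A right triangle covering the corner of a 5x5 pixel grid.
covered = rasterize((0, 0), (4, 0), (0, 4), 5, 5)
```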

Texturing, fragment shading - At this stage the individual fragments are given colour based on values assigned to the vertices and interpolated during rasterization.

Display - After all these steps, the user can see the final rendered image on a display.