CS 260 01 / Zelle | May 2022 |
To use your own computer for this class, make sure that:
I'd like to get a little information from all of you to help me in shaping the class. Please complete this short survey in the time between class sessions (10--11:30) on Monday, 5/2. CS 260 Initial Survey
Required Project: Complete and test the image.py file located in handouts/image
Potentially useful resources:
PPM file format
Note: we are using the binary (P6) form.
Required Project: Write and test at least 2 image processing algorithms. See sunset.py in handouts/images for an example.
Optional Project: Write a menu-driven image processing program that incorporates a number of the above effects.
Required Project: Complete the painter.py file in handouts/painter.
Required Project: Use the Painter class to draw at least one interesting picture. I suggest using the information in drawing.dat to draw a polyline figure, or writing a program that draws a Sierpinski triangle (using the recursive algorithm and filled triangles). You may have other ideas---perhaps a picture of a hobby. Extra drawings count as optional projects.
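The recursive Sierpinski algorithm can be sketched independently of the Painter API: subdivide a triangle at its edge midpoints and recurse on the three corner triangles, skipping the middle one. This version just collects the filled triangles to draw; wiring the output to Painter's actual fill method is left to you, since its interface is defined in your own painter.py.

```python
def midpoint(p, q):
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

def sierpinski(a, b, c, depth, out):
    if depth == 0:
        out.append((a, b, c))   # base case: one filled triangle
    else:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        # Recurse on the three corner triangles; the middle one is skipped,
        # which is what produces the holes in the figure.
        sierpinski(a, ab, ca, depth - 1, out)
        sierpinski(ab, b, bc, depth - 1, out)
        sierpinski(ca, bc, c, depth - 1, out)

tris = []
sierpinski((0, 0), (400, 0), (200, 346), 3, tris)
# depth d produces 3**d triangles, so depth 3 gives 27
```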
Required Project: matrix operations Complete the matrix.py file (see handouts/render2d) so that it passes the doctests.
Required Project: 2D transforms Complete the trans2d.py file (see handouts/render2d) so that it passes the doctests.
Required Project: Render2d class Complete the render2d.py file (see handouts/render2d). Note: this relies on your Painter and Image classes; make sure they are all working before attempting this step. Some notes:
Required Project: tic tac toe Use the Render2d class to draw a completed game of tic tac toe.
Optional Project: Dinos! Use the Render2d class and the drawing2D/line_art.dat file to draw a family of dinosaurs. Note: you can change the size, shape, and position (including flipping/reflecting) of a drawing just by modifying the window and viewport.
Optional Project: 2D drawing. Use the Render2d class to create a simple, but interesting drawing of your own invention.
Optional Project: More transforms. Add shears and/or reflections to Render2d and draw some scenes demonstrating them. You can find more information about these transforms here. Note: specify the amount of shear as an "angle" parameter. The shear factor will then be the tangent (tan) of the angle.
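The "angle" parameterization works like this: a shear along x by angle a adds tan(a)·y to each x coordinate. A sketch of the matrix construction, using the usual 3x3 homogeneous layout (adapt it to however your matrix.py represents matrices):

```python
import math

def shear_x(angle_degrees):
    """x-shear matrix: x' = x + tan(angle) * y, y unchanged."""
    t = math.tan(math.radians(angle_degrees))
    return [[1, t, 0],
            [0, 1, 0],
            [0, 0, 1]]

m = shear_x(45)   # tan(45 degrees) = 1, so x' = x + y
```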
Exam 1 is Friday (5/6), second session
Check out the exam guide here.
Download the starter code for the 3D rendering project. It is supplied as a zip file: render3/version1.zip
Required Project: Points and Vectors Complete and test the Vector class in math3d.py.
Required Project: RGB Complete and test rgb.py. Important Note: You should also add an __add__ method to the class.
Required Project: perspective camera Complete the top half of Camera class (camera.py) so that it passes the doctests.
Required Project: wire frames
Write the necessary methods of the Sphere class and render_oo.py to render wireframe perspective images of spheres. Test on scene0 and scene1.
Important Note: Change iter_triangles to iter_polygons.
Required Project: raytracer
Complete the portions of ren3d required to raytrace and shade scene0 and scene1, assuming a single light source located at the eye, simple Lambertian shading, and ambient lighting. These scenes are rendered with scene.ambient = (0.2, 0.2, 0.2). I also "reduced" the max value of the colors on the spheres to .8 (instead of 1) to avoid saturation.
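The shading formula being asked for can be sketched directly: each color channel is the diffuse color times the ambient term, plus the diffuse color scaled by cos(theta) between the surface normal and the direction to the light. The names below are illustrative, not the ones in ren3d.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def lambert(diffuse, normal, to_light, ambient):
    """Lambertian + ambient shading; normal and to_light must be unit vectors."""
    nl = max(0.0, dot(normal, to_light))    # clamp: faces turned away get no diffuse
    # Per channel: ambient term plus diffuse term scaled by cos(theta).
    return tuple(d * a + d * nl for d, a in zip(diffuse, ambient))

# Surface facing the light head-on, diffuse capped at 0.8 as suggested above:
c = lambert((0.8, 0.8, 0.8), (0, 0, 1), (0, 0, 1), (0.2, 0.2, 0.2))
# each channel is 0.8*0.2 + 0.8*1.0 = 0.96 -- just under saturation
```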
Required Project: Box
Add an axis-aligned box to our rendering engine. Test with scene2 and scene3.
Optional Project(s): Create some scenes using boxes and spheres and render them. Warning: for wireframe rendering, make sure all objects remain in front of the eye (z < 0). In fact, you should change the size of the "floor" box in scene3 to be 18 in the z so that it stays well in front of the eye.
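For the box's intersect method, the standard approach is the "slab" test: intersect the ray with each pair of axis-aligned planes and keep the overlapping t-interval. This is a sketch of the technique, not the required interface for your Box model.

```python
def hit_box(start, direction, lo, hi):
    """Return the nearest t where start + t*direction enters the box
    [lo, hi] (each a 3-tuple), or None if the ray misses."""
    t_near, t_far = -float("inf"), float("inf")
    for axis in range(3):
        if direction[axis] == 0:
            # Ray parallel to this slab: it must already lie inside it.
            if not (lo[axis] <= start[axis] <= hi[axis]):
                return None
        else:
            t0 = (lo[axis] - start[axis]) / direction[axis]
            t1 = (hi[axis] - start[axis]) / direction[axis]
            if t0 > t1:
                t0, t1 = t1, t0
            # Shrink the surviving interval to this slab's overlap.
            t_near, t_far = max(t_near, t0), min(t_far, t1)
    if t_near > t_far or t_far < 0:   # empty interval, or box behind the ray
        return None
    return t_near
```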
Important Note:
So far you have: wireframe and Lambert-shaded scenes of spheres and boxes, rendered in a perspective view from a camera at the origin looking down the z-axis. We will call this Version 1.0 of our rendering engine. Your work so far has been additive, but features we will be adding soon require systematic surgical changes to the system. You should take your working renderer and "snapshot" it by copying all of the necessary files into a separate folder called version 1. That way, you will always have a working system to refer back to.
Important Note 2:
Version 1.0 of the renderer is the required work for Portfolio 2 (due 5/16). The remaining week 2 assignments will be considered OPTIONAL enhancements for Portfolio 2, but they will be required work for Portfolio 3.
Required Project: depth buffering
The new render_oo.py module from version2 reimplements the Framebuffer to carry along the z coordinate associated with the pixels being drawn. It also includes a draw_filled_triangle method. However, it does not yet incorporate an actual depth buffer. This is illustrated by running the associated run_oosig.py file on scene1, as shown in the first image below. Your task is to add a depth buffer to implement hidden surface removal, as discussed in class (see the second image).
Make sure to also include your (updated) render_wireframe so that it can take advantage of depth buffering as well.
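The core of depth buffering is small: keep one z value per pixel and only plot a fragment when it is nearer than what is already stored. The sketch below is a stand-alone illustration, not your Framebuffer's actual interface; it assumes our convention that the camera looks down the negative z-axis, so "nearer" means a larger (less negative) z.

```python
class DepthBuffer:
    def __init__(self, width, height):
        self.width, self.height = width, height
        # Initialize depths to -infinity so the first fragment always wins.
        self.depth = [[-float("inf")] * width for _ in range(height)]
        self.color = [[(0, 0, 0)] * width for _ in range(height)]

    def plot(self, x, y, z, rgb):
        # Hidden surface removal in one comparison: draw only if nearer.
        if z > self.depth[y][x]:
            self.depth[y][x] = z
            self.color[y][x] = rgb
```

With this in place, triangles can be rasterized in any order and the nearest surface still ends up on screen.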
Required Project: Gouraud Shading
Add the ability to produce shaded images in render_oo. Models must provide normals for each vertex when iterating the polygons, and the renderer can use the normals to compute a Lambert-shaded color at each vertex. These colors are then used to interpolate shading across filled triangles. You will also need a run_gs.py to call your new rendering method.
Required Project: Moveable light source
Add the ability to set the position of the light source in our scenes. The new scene operation is set_light(pos=(0,0,0)).
Optional Project: Phong shader
A better approximation of the ray-traced image can be obtained in the OO renderer by using Phong shading. Whereas Gouraud shading computes a color at each vertex and then smoothly interpolates those colors across the triangle face, Phong shading pushes the color computations into the rendering of the triangle: the normals are interpolated across the triangle, and the shading calculations for a point are done with the interpolated normal. In this project, you would add draw_phong_triangle to the Framebuffer, as well as a render_phong function and a run_ps.py.
Important Note: You should now have a 3D rendering engine that produces ray-traced images with Lambert shading, wireframe images rendered with proper depth buffering, and images with Gouraud-shaded triangles. This is version 2 of our rendering engine. It's time for another snapshot. Make sure to save a copy of your current system somewhere safe.
From here on out, we will be concentrating our efforts on enhancing the realism of the ray tracer.
ren3d Version 3
Update the lighting/shading in the ray-tracing engine so that we can make more realistic images. Specifically, you should implement the following required enhancements:
You can find the materials.py file here.
Optional enhancement:
You can see these enhancements implemented in scene files 3a-3e here.
I will be putting you in teams for the final project. Please complete the CS 260 Teams Survey by midnight on Wednesday (5/18). It will only take a couple minutes.
Important Note: Version 4 starts here
Make sure to "checkpoint" your working version 3 before proceeding.
ren3d Version 4
Update the renderer to include texture mapping using Boxtexture and Spheretexture.
You can find the starter code for textures.py as well as the necessary texture files here.
You can see these enhancements implemented in scene files 3f and 3g here.
Important Note: Version 5 starts here
Make sure to "checkpoint" your working version 4 before proceeding.
Required Project: trans3d.py
Complete the trans3d file from version 5 so that it passes the doctests.
Update the Point, Vector, and Ray classes to implement a trans method so that we can use 3D transforms like this: p1 = p.trans(transform)
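The reason Point and Vector need separate trans methods comes down to the homogeneous coordinate: points carry w = 1 (so translation applies), while vectors carry w = 0 (so it does not). A sketch with illustrative helper names, using a 4x4 matrix as nested lists:

```python
def mat_apply(m, xyzw):
    """Multiply 4x4 matrix m by homogeneous column xyzw."""
    return tuple(sum(m[r][c] * xyzw[c] for c in range(4)) for r in range(4))

def trans_point(m, p):
    return mat_apply(m, (p[0], p[1], p[2], 1))[:3]   # w = 1: translated

def trans_vector(m, v):
    return mat_apply(m, (v[0], v[1], v[2], 0))[:3]   # w = 0: not translated

# Translation by (5, 0, 0) moves points but leaves directions alone:
T = [[1, 0, 0, 5],
     [0, 1, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1]]
```

A Ray.trans then just transforms its start as a point and its direction as a vector.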
Required Project: moveable camera
Enhance the renderer with a (re)positionable camera. To accomplish this, you will have to add a set_view method to the Camera class and rework a number of the other methods:
def set_view(self, eye=(0,0,10), lookat=(0,0,0), up=(0,1,0)):
Here is a docstring with tests for the new and improved Camera. You can put it at the top of the class for testing.
Camera is used to specify the view of the scene.
>>> c = Camera()
>>> c.set_perspective(60, 1.333, 20)
>>> c.set_view(eye=(1, 2, 3), lookat=(0, 0, -10))
>>> c.trans[0]
[0.9970544855015816, 0.0, -0.07669649888473705, -0.7669649888473704]
>>> c.trans[1]
[-0.01162869315077414, 0.9884389178158018, -0.15117301096006383, -1.5117301096006381]
>>> c.trans[2]
[0.07580980435789034, 0.15161960871578067, 0.9855274566525744, -3.335631391747175]
>>> c.trans[3]
[0, 0, 0, 1]
>>> c.set_resolution(400, 300)
>>> r = c.ij_ray(0, 0)
>>> r.start
Point([1.0, 2.0, 3.0])
>>> r.dir
Vector([-12.900010270830052, -11.566123962675615, -17.521989305329008])
>>> r = c.ij_ray(100, 200)
>>> r.start
Point([1.0, 2.0, 3.0])
>>> r.dir
Vector([-7.277823674777881, -0.14976036620738498, -19.7108288275589])

Scene scene3h looks like this:
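Behind set_view is the usual look-at basis construction: the camera's w axis points backwards along the gaze, u points right, and v points up. This sketch shows just the basis; assembling these rows (plus the eye translation) into the 4x4 trans matrix is the rest of the project. Helper names are illustrative.

```python
def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def normalize(v):
    n = sum(x * x for x in v) ** 0.5
    return tuple(x / n for x in v)

def view_basis(eye, lookat, up=(0, 1, 0)):
    w = normalize(sub(eye, lookat))   # backwards along the view direction
    u = normalize(cross(up, w))       # camera "right"
    v = cross(w, u)                   # camera "up" (already unit length)
    return u, v, w
```

For the default view (eye on the positive z-axis looking at the origin), this reduces to the identity basis, which is a handy sanity check.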
Required Project: Transformable Class
Add a Transformable class to scenedef.py. In addition to the doctest below, the class will need to implement our standard iter_polygons, intersect, and anyhit methods. See the scene4.py and scene5.py files for example usage.
class Transformable: A wrapper to add transforms to surfaces
>>> s = Transformable(Sphere(color=(0, 0, 0)))
>>> x = s.scale(2, 3, 4).rotate_y(30).translate(5, -3, 8)
>>> s.trans[0]
[1.7320508075688774, 0.0, 1.9999999999999998, 5.0]
>>> s.trans[1]
[0.0, 3.0, 0.0, -3.0]
>>> s.trans[2]
[-0.9999999999999999, 0.0, 3.464101615137755, 8.0]
>>> s.trans[3]
[0.0, 0.0, 0.0, 1.0]
>>> s.itrans[0]
[0.43301270189221935, 0.0, -0.24999999999999997, -0.16506350946109705]
>>> s.itrans[1]
[0.0, 0.3333333333333333, 0.0, 1.0]
>>> s.itrans[2]
[0.12499999999999999, 0.0, 0.21650635094610968, -2.357050807568877]
>>> s.itrans[3]
[0.0, 0.0, 0.0, 1.0]
>>> s.ntrans[0]
[0.43301270189221935, 0.0, 0.12499999999999999, 0.0]
>>> s.ntrans[1]
[0.0, 0.3333333333333333, 0.0, 0.0]
>>> s.ntrans[2]
[-0.24999999999999997, 0.0, 0.21650635094610968, 0.0]
>>> s.ntrans[3]
[-0.16506350946109705, 1.0, -2.357050807568877, 1.0]

You should be able to render scene4.py and scene5.py.
Optional Project: Square Class
Add a new primitive, Square(), to your models.py. A square's only parameter is color. It produces a unit square (vertices -.5 to .5) in the XZ plane (Y=0). You will probably want to design a very simple test scene for the purposes of debugging your square. Once it's working, you can uncomment the section of scene5 that draws lines to get a nicely completed tic tac toe board.
Important Note: Version 6 starts here
Make sure to "checkpoint" your working version 5 before proceeding.
Required Project: Triangle Class
Complete the triangle class from mesh.py in version 6. We will be using this in the polygonal mesh implementation. A triangle will be created from 3 (3D) vertices. In addition to color, it will have an optional parameter for normals. If normals is supplied, it is a list of vertex normals that are interpolated across the face for smooth shading. Note: Triangle is an internal class used to implement other models, so vertices are actual Point objects, the color is a Material, and normals are Vectors. This will allow a set of Triangles (say those of a mesh) to share Points, Normals, and Material, rather than having separate copies stored for each Triangle.
Triangle must implement our standard rendering methods: iter_polygons and intersect. You can use the barycentric coordinates of the intersection to calculate the normal for the hit point from the normals of the vertices:
normal = (1-beta-gamma)*n0 + beta*n1 + gamma*n2
Scene6a--c is a simple test scene. Make sure to read the comments in that file.
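The formula above can be implemented as a small helper: blend the three vertex normals by the barycentric coordinates of the hit point, then re-normalize, since a blend of unit vectors is generally shorter than unit length. The function name is illustrative.

```python
def interp_normal(beta, gamma, n0, n1, n2):
    """Smooth-shading normal at barycentric coordinates (beta, gamma)."""
    a = 1 - beta - gamma   # weight on n0
    n = tuple(a*x + beta*y + gamma*z for x, y, z in zip(n0, n1, n2))
    # Re-normalize: interpolation does not preserve unit length.
    length = sum(c * c for c in n) ** 0.5
    return tuple(c / length for c in n)
```

At a vertex (beta = gamma = 0, say) the result is just that vertex's normal, which makes a convenient doctest for your Triangle class.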
Required Project: Mesh
Complete the Mesh class in the mesh.py file in version 6 to add meshes to our renderer. Our meshes will use simple OFF files; you can find a description of these files here.
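A minimal OFF reader can be sketched in a few lines, assuming the plain variant of the format: an "OFF" marker line, then the vertex/face/edge counts, then the vertices, then each face as a vertex count followed by indices. Comment lines and optional color fields are not handled here, so treat this as a starting point rather than the full reader mesh.py needs.

```python
def read_off(lines):
    """Parse plain-variant OFF data given as an iterable of text lines."""
    it = iter(line.split() for line in lines if line.strip())
    assert next(it) == ["OFF"]
    nv, nf, _ = map(int, next(it))    # vertex, face, edge counts (edges unused)
    verts = [tuple(map(float, next(it))) for _ in range(nv)]
    faces = []
    for _ in range(nf):
        fields = next(it)
        count = int(fields[0])        # number of vertices in this face
        faces.append([int(i) for i in fields[1:1 + count]])
    return verts, faces
```

Faces with more than three vertices would still need to be fanned into triangles before handing them to your Triangle class.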
Here are example renderings of scenes 7 and 8:
Optional Project: Bounding Box incorporation
Add bounding boxes throughout the raytracer to improve efficiency. Every model will have to maintain an appropriate bbox in its init; however, you should only do the bounding box checks in the intersect method for the "costly" objects: Group and Transform.
Once you have Boundingbox incorporated, your Mesh no longer needs to be a separate class. It can just become a function that returns a group, thus eliminating an extra bbox test.
Besides efficiency improvement, bounding boxes also provide every model (including Transformable and Group) a natural way to compute generic coordinates. You could use this to implement textures at any level.
Optional project(s): Create some nice scenes with transformables and meshes.
The final project for the class is a team project. It is up to your team to choose your project. The only firm requirements are that you:
All teams will "present" their projects on Thursday. The presentation will be informal and consist of explaining your project to the class and showing your results. You will be graded on the difficulty of the project and the quality of your product and code.
Below are some ideas of what your team might do. You can choose to do multiple enhancements, but keep in mind that one enhancement that works well is infinitely superior to several that do not. For most of the projects listed, you will have to do a bit of online research to understand the technique.
Project ideas:
Efficiency. Improve the efficiency of our ray tracer by incorporating bounding boxes throughout, combined with space partitioning and/or by parallelizing to use multiple threads/ processors. (This is what I will be building/discussing in the morning sessions next week. You can follow along and implement what I discuss).
Texture enhancement. Allow textures on objects after they have been transformed. Add new useful textures (marble, simple flat maps for drawing features onto objects, 6 sided box, ...)
More sophisticated meshes. Find some meshes that include color and normal information and incorporate them into scenes. You could start with more sophisticated OFF files, but being able to import meshes from other systems (e.g., Blender) would be great.
New Surfaces. Cone, Cylinder, Torus (as solids, not just a mesh). Maybe implicit modeling (look it up).
Allow the camera to be placed inside objects. Think texture-mapped room box or sky sphere.
Anti-aliasing/soft shadowing. Distribution ray tracing can do this easily (but slowly).
Refraction. Allow transparent materials.
Stereo Rendering. You can use this to produce red-blue anaglyphs and side-by-side images suitable for 3D viewing.
Animation. If you generate a sequence of frames, our lab has tools to build them into an animation.
Constructive Solid Geometry. Allow objects defined by intersection and difference of surfaces.
Emissive objects. Putting realistic looking light sources into our scenes (like a lamp).
Interactive previewer. Use a fast 2D graphics system to draw our wireframe polygons and allow interactive positioning of the camera via orbiting and zooming to find the best view before ray tracing.
Just about anything graphics related that tickles your fancy. Just be sure to clear it with me, if it is not on this list.
The final project Team evaluation form is here. You must complete this evaluation to receive a grade on your team project.