CS426 Assignment 3 (20%)
Due: Friday November 25 (midnight)

This is to be done in pairs. Solo efforts are allowed, but will be held to the same requirements; groups of three or more are not allowed. This is a multi-part assignment, but you may tackle the parts in any order, and plan an earlier part according to the requirements of a later part. Sample Python and GLSL code, along with more algorithm details, will be provided on an ongoing basis.

Part 1: rendering with shaders
------------------------------

Write code to render an animation of some surface using OpenGL 2 via PyQt / PyOpenGL / numpy. There is no particular requirement on what you animate or how you do it, but it must have smoothly curved surfaces (more precisely, a triangle mesh with varying vertex normals). On the rendering side the following requirements hold:

- Certain old-style OpenGL calls are not allowed: glBegin, glEnd, glVertex*, glNormal*, glColor*, glTexCoord*, glLight*.
- You must instead load the triangle mesh (or meshes) using vertex buffers, and use vertex and fragment shaders to calculate the lighting.
- You are encouraged to use a very simple vertex shader that mostly just passes parameters through to the fragment shader, and do the heavy work in the fragment shader.
- In particular, the fragment shader should do a lighting calculation based on at least one light (not including ambient light), using a reasonable model such as diffuse shading.

Hand in your source code along with a sequence of images produced by it. Aim for around 100 frames to be played at 24fps, at resolution 640x360.

Extra marks: implement antialiasing via supersampling. For example, you could render to a much larger image the usual way, and then average down (ideally with a weighted filter) to the final 640x360 image, e.g. by rendering to a texture and exploiting mipmaps, or by using Qt's built-in smooth image resizing.

Part 2: shadows
---------------

Implement a shadow map. That is, introduce an extra rendering pass from the point of view of the light source, storing depth (or even world-space position) to a texture (the "shadow map"). Then, in the fragment shader for the main render, look up the shadow map to see whether a surface is obscuring the light (a sketch of a shader pair combining this lookup with the Part 1 lighting follows below). Again, hand in source code and a sequence of images at 640x360 demonstrating shadows (if you get this working, you needn't hand in anything separate for Part 1).

Extra marks: smooth out the jagged edges of the shadows in some way. For example, you can use several nearby lookups (search for "Percentage Closer Filtering", or even better, "Percentage-Closer Soft Shadows" by Randima Fernando at NVIDIA).
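To make the shader requirements in Parts 1 and 2 concrete, here is a minimal sketch of a pass-through vertex shader plus a fragment shader that does diffuse shading and a shadow-map comparison, compiled with PyOpenGL. The attribute and uniform names, the assumption that the model matrix is the identity (so positions and normals are already in world space), and the fixed depth bias are illustrative choices for this sketch, not requirements; adapt them to your own setup.

    # Minimal sketch (not a complete program): GLSL 1.20 shaders for diffuse
    # lighting with a shadow-map test, compiled via PyOpenGL. Assumes a GL
    # context already exists (e.g. call this from your widget's initializeGL).
    from OpenGL.GL import GL_VERTEX_SHADER, GL_FRAGMENT_SHADER
    from OpenGL.GL.shaders import compileProgram, compileShader

    VERTEX_SRC = """
    #version 120
    attribute vec3 position;        // from your vertex buffer
    attribute vec3 normal;
    uniform mat4 mvp;               // camera model-view-projection
    uniform mat4 light_mvp;         // the same, but from the light's viewpoint
    varying vec3 v_normal;
    varying vec3 v_position;
    varying vec4 v_shadow_coord;
    void main() {
        // Pass-through vertex shader: assumes positions and normals are
        // already in world space (identity model matrix, uniform scaling).
        v_normal = normal;
        v_position = position;
        v_shadow_coord = light_mvp * vec4(position, 1.0);
        gl_Position = mvp * vec4(position, 1.0);
    }
    """

    FRAGMENT_SRC = """
    #version 120
    varying vec3 v_normal;
    varying vec3 v_position;
    varying vec4 v_shadow_coord;
    uniform vec3 light_pos;         // world-space light position
    uniform vec3 diffuse_color;
    uniform sampler2D shadow_map;   // depth texture from the light's pass
    void main() {
        vec3 N = normalize(v_normal);
        vec3 L = normalize(light_pos - v_position);
        float diff = max(dot(N, L), 0.0);

        // Shadow test (Part 2): project into the light's clip space, then
        // compare this fragment's depth against the stored shadow-map depth.
        vec3 proj = v_shadow_coord.xyz / v_shadow_coord.w;
        proj = proj * 0.5 + 0.5;                  // NDC -> [0,1] texture coords
        float closest = texture2D(shadow_map, proj.xy).r;
        float bias = 0.005;                       // avoids self-shadow "acne"
        float lit = (proj.z - bias <= closest) ? 1.0 : 0.0;

        vec3 ambient = 0.1 * diffuse_color;
        gl_FragColor = vec4(ambient + lit * diff * diffuse_color, 1.0);
    }
    """

    program = compileProgram(
        compileShader(VERTEX_SRC, GL_VERTEX_SHADER),
        compileShader(FRAGMENT_SRC, GL_FRAGMENT_SHADER),
    )

For Part 1 alone you can simply drop the shadow_map uniform and the shadow test, keeping only the diffuse term.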
Part 3: matchmove and compositing
---------------------------------

Take a real photograph of a scene on top of which you will composite a computer-generated animation with consistent perspective. For full marks the computer-generated images will also have to include visible shadows cast on the real scene. This will involve matchmoving to figure out the camera position relative to the scene, building a simple model of the element in the scene on which the synthetic shadow will be cast, and making a good estimate of the real lighting in the scene for the rendering. To make this tractable, I highly recommend setting up the photograph with care to make life easier. For example:

- Make the central part of the photo a simple flat surface like a table or the floor, and arrange for your CG element to only cast shadows on this, so you only have to use a simple flat plane to model it.
- Prepare and measure a calibration target to drive the matchmove, and in particular lay it on the flat surface so the model of the surface is at a known position with respect to the target. You could even use the whole surface as the target, e.g. if you have a rectangular table with well-defined sharp corners that you can measure, and that will be entirely visible in the photograph. If you do use an obvious calibration target, placing it where the computer-generated model is going to be has two benefits: it will get you the most accuracy where you need it, and the computer-generated model might be large enough to completely cover it in the final image, which looks a bit cleaner.
- Do this inside, in a dark room, with only a single light source like a lamp as far as possible; this can then be modeled with a single point light without too much error.
- Either arrange for the light to be at a measurable location with respect to the calibration target (for example, 2.5m directly above it), or include, say, a short vertical post of known length in the target so that you can measure its shadow and from that infer the direction to the light (a small numpy sketch of this calculation appears at the end of this handout).
- Avoid having anything in the foreground which might block where the computer-generated part is supposed to end up: handling that is going to be very tricky.

Hand in source code, the original photograph and notes on your calibration set-up, and a sequence of images at 640x360 giving the animation. (If you get this working, you needn't hand in anything separate for Parts 1 or 2.)

Handing it in
-------------

Both members of a group should hand in a README identifying the other person and describing very briefly how you split up the work (it's OK to say you worked on everything together if that's the reality). One member should additionally include the code that was written, with a description of how to run it, and the best image sequence you produced as described above.
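For the vertical-post suggestion in Part 3, the geometry is simple: the ray from the tip of the post's shadow through the tip of the post points straight back at the light. Below is a minimal numpy sketch of that calculation; the function names, and the convention that the flat surface is the z=0 plane of the calibration target's coordinate frame, are illustrative assumptions rather than a prescribed interface.

    import numpy as np

    def light_direction_from_post(post_base, post_height, shadow_tip):
        """Unit vector pointing from the surface toward the light.

        Assumes the flat surface is the z=0 plane of the target's frame,
        post_base = (x, y) of the post on that plane, post_height is the
        post's length, and shadow_tip = (x, y) where the tip of the post's
        shadow falls on the surface (all measured in the same units)."""
        tip = np.array([post_base[0], post_base[1], post_height], dtype=float)
        shadow = np.array([shadow_tip[0], shadow_tip[1], 0.0], dtype=float)
        d = tip - shadow              # ray from the shadow tip through the post tip
        return d / np.linalg.norm(d)

    def light_position_from_height(post_base, post_height, shadow_tip, light_height):
        """If the light's height above the surface has also been measured,
        extend the same ray to that height to get a point-light position."""
        shadow = np.array([shadow_tip[0], shadow_tip[1], 0.0], dtype=float)
        d = light_direction_from_post(post_base, post_height, shadow_tip)
        return shadow + (light_height / d[2]) * d

    # Example: a 0.10 m post at the target origin casting a 0.25 m shadow
    # along +x means the light lies off in the -x direction, above the table.
    direction = light_direction_from_post((0.0, 0.0), 0.10, (0.25, 0.0))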