Tuesday, February 9, 2010

Adventures in GLSL Shaders

I'm finally getting deep into the world of GLSL and learning what can be done with it. I wanted to implement a mesh deformation shader that I could control with an audio signal. Inspired by sea anemones and spore-like forms, I decided to create a radial deformation.

This deformation shader program works as follows: for each vertex, I take its XYZ position and convert it to 3D spherical coordinates (r, theta, phi). From there, I push the point's radius outward as a function of the two angles theta and phi, then convert back into XYZ space and pass the new coordinates on to the fragment shader. The model pictured is a simple sphere, loaded as a VBO. The sphere is great for testing a deformer like this, because it shows a very pure representation of the deformation itself. I pass FFT values of the incoming audio signal into the vertex shader from Java, which control displacement amount, frequency, progression of time, etc. I can get 400 FPS with about 50k polygons on the object, even with the complex deformation... Pretty good, I'd say.
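As a host-side sketch of that per-vertex pipeline (Python here rather than GLSL, purely for illustration; the function names tri and displace, and the single shared amount/frequency parameters, are simplifications of the shader's audio-driven uniforms):

```python
import math

def tri(phase):
    """Triangle wave with period 2*pi, output in [-1, 1]."""
    phase = math.fmod(phase, 2 * math.pi)
    ramp = math.fmod(phase, math.pi / 2) / (math.pi / 2)
    if phase < math.pi / 2:
        return ramp
    elif phase < math.pi:
        return 1.0 - ramp
    elif phase < 1.5 * math.pi:
        return -ramp
    else:
        return -1.0 + ramp

def displace(x, y, z, amount, freq, time):
    # Cartesian -> spherical (r, theta, phi)
    r = math.sqrt(x*x + y*y + z*z)
    theta = math.atan2(y, x)   # azimuth
    phi = math.acos(z / r)     # inclination
    # Push the radius outward as a function of the two angles
    r += tri(theta * freq + time) * amount
    r += tri(phi * freq + time) * amount
    # Spherical -> Cartesian
    return (r * math.sin(phi) * math.cos(theta),
            r * math.sin(phi) * math.sin(theta),
            r * math.cos(phi))
```

With amount set to zero this is just a round trip through spherical coordinates, which is a handy sanity check before wiring in the audio values.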

The problem I quickly discovered is that although the vertices are easy to displace in this manner, their normals retain the original direction from when the model was a simple sphere. So, to get the proper shading and specular highlights, each vertex's normal must be recalculated along with its new position.

Thankfully, people a lot smarter than me have already encountered and solved this problem. Although the solution is not exactly simple to implement, it is somewhat straightforward. I found this link explaining it very well.

Basically, the idea is to take your deformation function and calculate its partial derivatives. From those you can build a new transformation matrix that converts the original normals into the proper normals for the deformed surface.
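In code terms, a minimal Python sketch of that idea (the names jacobian and transform_normal are mine, not from the article): the partial derivatives form the 3x3 Jacobian J of the deformation at a point, and the deformed normal is the old normal multiplied by the inverse-transpose of J, renormalized. Here the Jacobian is taken numerically; in a shader you'd derive it analytically from the deformation function.

```python
import math

def jacobian(deform, p, eps=1e-5):
    """3x3 Jacobian of deform at point p, by central differences."""
    J = [[0.0] * 3 for _ in range(3)]
    for j in range(3):
        hi = list(p); hi[j] += eps
        lo = list(p); lo[j] -= eps
        fhi, flo = deform(hi), deform(lo)
        for i in range(3):
            J[i][j] = (fhi[i] - flo[i]) / (2 * eps)
    return J

def transform_normal(J, n):
    """New normal direction = inverse-transpose(J) * n, renormalized."""
    # Cofactor matrix of J; cofactor(J)*n points the same way as
    # inverse-transpose(J)*n, since inverse-transpose = cofactor/det
    # (assuming det(J) > 0, i.e. the deformation preserves orientation).
    c = [[J[(i+1) % 3][(j+1) % 3] * J[(i+2) % 3][(j+2) % 3] -
          J[(i+1) % 3][(j+2) % 3] * J[(i+2) % 3][(j+1) % 3]
          for j in range(3)] for i in range(3)]
    m = [sum(c[i][k] * n[k] for k in range(3)) for i in range(3)]
    length = math.sqrt(sum(x * x for x in m))
    return [x / length for x in m]
```

For example, stretching space by 2 along x tilts a 45-degree normal toward the y axis, exactly as the inverse-transpose predicts.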

Next step, to brush up those calculus skills and see if the university Math degree pays off for fun eye candy! I hope the radial coordinate transform doesn't make the partial derivatives too weird...

Here's the vertex shader program as it stands... This version does not recalculate the normals after deformation.

const float TWO_PI=6.28318531;
const float PI=3.14159265;
const float HALF_PI=1.57079633;

varying vec4 screenPos, worldPos;

varying vec3 normal, lightDir, halfVector;
varying vec4 diffuse, ambient;

varying float radius,redShift,greenShift,blueShift;

uniform float time;
uniform float displaceAmount1;
uniform float displaceAmount2;

uniform float frequency1;
uniform float frequency2;

varying vec4 eyePosition;
varying vec3 diffuseColor;
varying vec3 specularColor;
varying vec3 emissiveColor;
varying vec3 ambientColor;
varying float shininess;

float d0;
float d1;
float d2;
float x,y,z;
float a1,a2,r;

vec4 displacedPoint;

float tri(float phase)
{
    //Triangle wave function: period TWO_PI, output in [-1,1]
    float ramppoint;
    phase = mod(phase, TWO_PI);
    ramppoint = mod(phase, HALF_PI) / HALF_PI;
    if(phase >= 0.0 && phase < HALF_PI) return(ramppoint);
    else if(phase >= HALF_PI && phase < PI) return(1.0 - ramppoint);
    else if(phase >= PI && phase < (HALF_PI*3.0)) return(-ramppoint);
    else if(phase >= (HALF_PI*3.0) && phase <= TWO_PI) return(-1.0 + ramppoint);
    else return(1.0);
}

void main()
{
    x = gl_Vertex[0];
    z = gl_Vertex[1]; //flip flopped for aesthetics
    y = gl_Vertex[2];

    //Convert to spherical coordinates: r, a1 = theta, a2 = phi
    r  = length(vec3(x, y, z));
    a1 = atan(y, x);
    a2 = acos(z / r);

    //Displace the radius as a function of the two angles (audio-driven
    //uniforms), plus a small fixed high-frequency ripple
    d1 = tri(a1*frequency1 + time*1.2)*displaceAmount1;
    d0 = tri(a2*frequency2 + time*1.2)*displaceAmount2;
    d2 = tri(gl_Vertex[0]*25. + time*1.27)*.0511;
    radius = r + d0 + d1 + d2;

    //Convert back into XYZ space, undoing the y/z flip
    displacedPoint = vec4(radius*sin(a2)*cos(a1),
                          radius*cos(a2),
                          radius*sin(a2)*sin(a1),
                          1.0);

    worldPos = gl_ModelViewMatrix * displacedPoint;
    screenPos = gl_ModelViewProjectionMatrix * displacedPoint;
    gl_Position = screenPos;

    eyePosition = gl_ModelViewMatrix * displacedPoint;

    redShift = 0.0;//abs(r); // USED BY MY FRAGMENT SHADERS
    greenShift = 0.0;
    blueShift = 0.0;

    //NOTE: normals are NOT recalculated after deformation; this just
    //perturbs the sphere's normals and transforms them to eye space
    normal = normalize(gl_Normal + normalize(gl_Normal - worldPos.xyz));
    normal = gl_NormalMatrix * normal;//gl_Normal;

    //Standard gl_LightSource setup for the fragment shader
    lightDir = normalize(vec3(gl_LightSource[0].position));
    halfVector = normalize(gl_LightSource[0].halfVector.xyz);
    diffuse = gl_FrontMaterial.diffuse * gl_LightSource[0].diffuse;
    ambient = gl_FrontMaterial.ambient * gl_LightSource[0].ambient;

    diffuseColor = vec3(gl_FrontMaterial.diffuse);
    specularColor = vec3(gl_FrontMaterial.specular);
    emissiveColor = vec3(gl_FrontMaterial.emission);
    ambientColor = vec3(gl_FrontMaterial.ambient);
    shininess = gl_FrontMaterial.shininess;
}

