Much later than anticipated, here is the source code for my screen saver, Particle Life, along with a short discussion of some of the techniques that went into it. I used GPU processing, pre-rendered depth blurring, and some fun camera tricks.
Particle Life Source Code
GPU Processing
You’ll see this in PositionsEffect.fx. Basically, I pre-generate a velocities texture filled with random noise, and a positions texture with incremental depths for the particles (to ensure a good distribution) and random x/y values (within the camera’s frustum). Then every frame I sample both the position and the velocity, add the two together (multiplying the velocity by the elapsed time), and write the new values back to the positions texture.
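Roughly, that update pass looks like the sketch below. The names here are simplified and don’t necessarily match the actual PositionsEffect.fx, and it assumes ping-ponging between two position render targets, since you can’t read and write the same texture in a single pass:

```hlsl
// Position-update pass: a full-screen quad runs this shader every frame,
// reading the current positions and the pre-generated velocities and
// writing the integrated positions to a second render target.

texture PositionsTexture;   // current particle positions, one texel per particle
texture VelocitiesTexture;  // pre-generated random velocities

sampler PositionsSampler = sampler_state
{
    Texture   = <PositionsTexture>;
    MinFilter = Point; MagFilter = Point; MipFilter = None;
};

sampler VelocitiesSampler = sampler_state
{
    Texture   = <VelocitiesTexture>;
    MinFilter = Point; MagFilter = Point; MipFilter = None;
};

float ElapsedTime;          // seconds since the last frame

float4 UpdatePositionsPS(float2 texCoord : TEXCOORD0) : COLOR0
{
    float4 position = tex2D(PositionsSampler, texCoord);
    float4 velocity = tex2D(VelocitiesSampler, texCoord);

    // Simple Euler integration: new position = old position + velocity * dt
    return position + velocity * ElapsedTime;
}

technique UpdatePositions
{
    pass P0
    {
        PixelShader = compile ps_2_0 UpdatePositionsPS();
    }
}
```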
Now, here’s where I could have used vertex texture fetch, but I wanted something that would work on shader model 2.0 (and on my laptop), so I did something simpler. The position texture is stored as HalfVector4 data. Every frame, I dump it straight into an array and then push that into a vertex buffer of HalfVector4 vertices. Since I’m using point sprites, that’s all I need to do. It’s not the most efficient piece of code (the dump sure isn’t nice), but it works.
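For reference, here’s roughly what the point-sprite vertex shader ends up doing with those HalfVector4 vertices: the packed position arrives as a plain float4, gets transformed, and the sprite size is shrunk with distance. Again, this is a simplified sketch; WorldViewProjection, ParticleSize, and ViewportHeight are placeholder names rather than the exact ones in the effect file:

```hlsl
// Point-sprite vertex shader: one vertex per particle, no extra data needed.

float4x4 WorldViewProjection;
float    ParticleSize;       // base sprite size in pixels
float    ViewportHeight;     // used to scale sprite size with distance

struct VS_OUTPUT
{
    float4 Position : POSITION0;
    float  Size     : PSIZE0;
};

VS_OUTPUT ParticleVS(float4 position : POSITION0)
{
    VS_OUTPUT output;

    // Ignore the packed w and transform with w = 1.
    output.Position = mul(float4(position.xyz, 1), WorldViewProjection);

    // Shrink the sprite as the particle gets further away.
    output.Size = ParticleSize * ViewportHeight / output.Position.w;

    return output;
}
```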
Pre-rendered Depth Blurring
I wanted particles to look blurrier as they became more distant. However, with alpha-blended particles, I can’t quite do this as a post-processing step. I’d basically have to blur the texture every time I rendered a particle, taking several sample taps on the texture for each pixel, and the fill-rate cost would be horrible.
So I took the easier route. Instead of doing all that work on the GPU just to get the particles to blur, I built the mipmaps in the texture by hand. If you open up part.dds, you’ll notice that each successive mipmap is not only smaller, but blurrier. Since the mipmaps are automatically selected and blended when I sample the texture, the blurring happens for free.
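The only thing the shader has to do to take advantage of that is sample with mip filtering enabled. Something like the sketch below (the texture and sampler names are placeholders, not the shipped code): when a distant particle covers only a few pixels on screen, the hardware picks one of the smaller, hand-blurred mip levels and blends between them.

```hlsl
// Particle pixel shader: plain trilinear sampling is all that's needed,
// because the blur is baked into the smaller mip levels of part.dds.

texture ParticleTexture;    // part.dds, with hand-blurred mipmaps

sampler ParticleSampler = sampler_state
{
    Texture   = <ParticleTexture>;
    MinFilter = Linear;
    MagFilter = Linear;
    MipFilter = Linear;     // blends between the blurry mips
};

float4 ParticlePS(float2 texCoord : TEXCOORD0) : COLOR0
{
    // With point sprites, TEXCOORD0 is generated per sprite; the mip
    // level comes from how small the sprite is on screen, so distant
    // particles automatically read the blurrier levels.
    return tex2D(ParticleSampler, texCoord);
}
```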
Camera Work
One of the important things to take from this sample is that there are sometimes ways you can “cheat” without being caught by the end-user, and save a lot of processing power in the meantime. One more example of this is how the particles are “culled”. Nothing exists outside of the camera’s viewing frustum; there’s no reason for it to, since it isn’t visible to the user and dealing with it would just waste processing power. So when a particle leaves the camera’s frustum, it’s wrapped around to the other side.
So how do I know when a particle has left the view? Well, I could do a bunch of nasty math involving rotation, tilt, field of view, and so on. Or I can just align the camera straight down the Z axis. Then the width and height of the visible plane at a given depth are linearly proportional to the particle’s Z position; the constants of proportionality come from the tangent of half the field of view. So I just set those as variables in the shader (FieldWidthFactor and FieldHeightFactor in PositionsEffect.fx). Now I don’t even have to do any transformations of the particle positions.
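In shader terms, the wrap-around ends up being a handful of compares. The sketch below captures the idea; the helper name and the exact wrapping style are illustrative rather than lifted from the real effect:

```hlsl
// Wrap a particle back into the visible volume, assuming the camera
// looks straight down the Z axis from the origin.

float FieldWidthFactor;    // ~ tan(horizontal fov / 2)
float FieldHeightFactor;   // ~ tan(vertical fov / 2)

float3 WrapToFrustum(float3 position)
{
    // Half-extents of the visible plane at this depth grow linearly with z.
    float halfWidth  = position.z * FieldWidthFactor;
    float halfHeight = position.z * FieldHeightFactor;

    // If the particle drifts out one side, wrap it to the opposite side.
    if (position.x >  halfWidth)  position.x -= 2.0f * halfWidth;
    if (position.x < -halfWidth)  position.x += 2.0f * halfWidth;
    if (position.y >  halfHeight) position.y -= 2.0f * halfHeight;
    if (position.y < -halfHeight) position.y += 2.0f * halfHeight;

    return position;
}
```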
One final note
You might recognize this screenshot; it’s from Grand Theft Auto IV (image from gtanet.com). If you’ve played the game, you’ve probably noticed how short the full-detail draw distance is. After a couple of blocks, cars turn into simple circles standing in for their headlights, and everything becomes really blurry. My theory is that, to help with memory streaming and file sizes, a lot of the intermediate mipmaps have been cut out of the textures. That way the full-detail texture only has to be loaded when you’re close to an object, but the object can still have some definition in the background. However, due to either space or processing constraints, that detail can’t be shown very far out, so a post-processing effect adds some grainy blur to distant objects, helping to hide the low-resolution textures.
That’s in no way the official word on things, just my guess, but it would be a good example of how you can sometimes make clever use of graphics tricks to keep your asset sizes and processing requirements low while still maintaining good visual quality.