Heya, it's been a while since I posted something here.
http://www.theinstructionlimit.com/wp-content/uploads/2008/06/FullScreenShaders.zip (.NET 3.5, Visual Studio 2008 - 8.67 MB)
This sample showcases different post-processing shaders with different-sized input surfaces (including the straight Main Buffer). It's really simple in terms of visuals, and it doesn't reinvent anything at all, but my intent is to show how to properly use texture coordinate offsets to do pixel-perfect sampling with post-processing shaders.
It also shows how to accumulate different effects in any order without using more than two surfaces, sometimes even a single one!
The Vignette shader was heavily inspired by (i.e. stolen from) Arius' HDR demo.

I realize that the shaders and sample code have no documentation at all, so here's a little explanation.
When using the main buffer, a single RenderSurface is needed to do as many (single-input, single-output) post-process shaders as you want. I call it the Post surface. So if you want to apply a Grayscale effect after TV.RenderToScreen(), you do the following (sketched in code below):
- Blit the main buffer onto the Post surface
- Draw_FullscreenQuadWithShader from (0, 0) to (1, 1), passing Post as the input texture
- In the vertex shader, offset the texture coordinates by half the texel size of what you sample. That means a resolution of 1024x768 will need a positive offset of (0.5/1024, 0.5/768) to have pixel-perfectness.
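In code, the whole frame could look something like this. It's a sketch, not a copy-paste from the sample: the exact TV3D overloads may differ slightly, and the "sourceTexture"/"texelOffsetX"/"texelOffsetY" parameter names come from my hypothetical .fx file, not from TV3D itself.

```
using MTV3D65;

class PostProcessExample
{
    TVEngine TV;                  // engine, initialized elsewhere
    TVScreen2DImmediate screen2D; // from TV.GetScreen2DImmediate()
    TVRenderSurface post;         // screen-sized (1024x768) render surface
    TVShader grayscale;           // the grayscale .fx, loaded elsewhere

    void RenderFrame()
    {
        TV.Clear();
        // ... render the scene ...
        TV.RenderToScreen();

        // 1. Grab the freshly rendered frame so the shader can read it
        post.BltFromMainBuffer();

        // 2. Half-texel offset for pixel-perfect sampling at 1024x768
        //    (parameter names are from my hypothetical shader, not TV3D)
        grayscale.SetEffectParamFloat("texelOffsetX", 0.5f / 1024f);
        grayscale.SetEffectParamFloat("texelOffsetY", 0.5f / 768f);
        grayscale.SetEffectParamTexture("sourceTexture", post.GetTexture());

        // 3. Draw the shaded quad from (0, 0) to (1, 1) over the main buffer
        screen2D.Draw_FullscreenQuadWithShader(grayscale, 0, 0, 1, 1);
    }
}
```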
When using a smaller or larger intermediary surface, you actually need two surfaces: a Temp surface and the Post surface. So if you want to apply a Grayscale effect...
Before TV.Clear() (once per effect; see the sketch after these steps):
- Blit the Post surface onto the Temp surface (this will be read by the shader)
- StartRender on the Post surface
- Draw_FullscreenQuadWithShader from (0, 0) to (1, 1), passing Temp as the input texture
- In the vertex shader, offset the texture coordinates by half the texel size of what you sample. That means a resolution of 1024x768 downscaled by a factor of two (so a 512x384 surface) will need a positive offset of (0.5 / 512, 0.5 / 384) to have pixel-perfectness. It entirely and solely depends on the size of the texture you sample.
- EndRender on the Post surface
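In code, one such pass could look like this. Same caveats as the previous sketch: "temp" and "post" are both half-sized (512x384) TVRenderSurfaces here, and I'm assuming BltFromRenderSurface is how you copy one surface into another.

```
// Runs once per effect, before TV.Clear()
void ApplyEffect(TVShader effect)
{
    // 1. The accumulated result so far becomes the shader's input
    temp.BltFromRenderSurface(post);

    // 2. Render the shaded quad into Post, sampling Temp with a half-texel
    //    offset of the *sampled* texture: 0.5 / (1024 / 2) and 0.5 / (768 / 2)
    post.StartRender();
    effect.SetEffectParamFloat("texelOffsetX", 0.5f / 512f);
    effect.SetEffectParamFloat("texelOffsetY", 0.5f / 384f);
    effect.SetEffectParamTexture("sourceTexture", temp.GetTexture());
    screen2D.Draw_FullscreenQuadWithShader(effect, 0, 0, 1, 1);
    post.EndRender();
}
```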
After TV.RenderToScreen() (once for all effects; again, sketched below):
- Blit the main buffer onto the Temp surface (this captures the scene that was just rendered)
- Draw_Texture using the Post surface (which contains the accumulated effects), making sure that you upscale/downscale the coordinates to fill the entire screen
- Blit the Temp surface onto the Post surface (this will be used for the next frame's effects)
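And the end-of-frame part, under the same assumptions; I'm also guessing that Draw_Texture takes pixel coordinates and that 2D draws go between Action_Begin2D()/Action_End2D(), so check the sample for the exact calls.

```
// Runs once per frame, after TV.RenderToScreen()
void PresentEffects()
{
    // 1. Capture (and downscale) this frame's scene into Temp
    temp.BltFromMainBuffer();

    // 2. Upscale the half-sized Post surface back over the full screen
    screen2D.Action_Begin2D();
    screen2D.Draw_Texture(post.GetTexture(), 0, 0, 1024, 768);
    screen2D.Action_End2D();

    // 3. Post now holds the fresh scene for next frame's effect passes
    post.BltFromRenderSurface(temp);
}
```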
If you followed this right, you'll notice that my second method (the one with two RenderSurfaces) is "one frame late" at all times. But this is pretty much unavoidable, I think, and it never shows at 30 fps and up.

But that's only part of what this sample's about. You'll also find a satellite project called ComponentFramework, which is my proposal for a basic, deliberately simplified yet very functional component/service architecture for TV3D.
As you may know, I've been working with XNA on Fez for almost a year now, and when coming back to TV3D for some tests or prototypes, I find it hard not to work with game components like XNA provides.
So I've decided to take what I liked from XNA's component framework, add in what I already extended in Fez, and even some more stuff to keep the game code as lean and simple as possible.
I have documented my ComponentFramework entirely with XML comments so it should be understandable, and the sample serves as an example usage.
A couple of things to note (the sketch below illustrates them):
- Components marked with [AutoLoad] and Services which are not marked with [Disable] will be automatically loaded by reflection when calling Core.Run()
- Services are injected wherever the [ServiceDependency] attribute is used on properties with public setters. This works inside both services and components.
- The "Controllers/Effects/Hosts" namespaces I made in the sample are not necessary or important, it's a design choice. You can do whatever you want, you can even not use namespaces at all.
Enjoy!