CRT simulation

For years, I’ve been sharing this link whenever I was asked about the CRT simulation featured in a number of my games. Over time, that post has gotten increasingly out of date, so I figured I would do a proper write-up about my CRT simulation algorithm as it exists today in Super Win the Game. This method varies slightly from the one used in the original You Have to Win the Game, but the core principles are the same.

UPDATE: Additional slides and HLSL source from my August 2014 Dallas Society of Play talk are now available online: http://superwinthegame.com/dsoptalk/

I start by drawing the game scene to a pixel-perfect 1:1 buffer. This buffer is 256×224 pixels, shown here blown up 300% for clarity. The NES actually draws 240 lines vertically, but the eight rows at the top and bottom are not visible under normal circumstances, so I can ignore them. For reference, the buffer in YHtWtG was 320×200, with a pixel aspect ratio of 5:6 to fit it precisely to a 4:3 frame.

crtsim1

Next I apply a number of effects within this space to recreate the look of an NTSC signal. I should note that none of my methods are physically accurate, and I am not attempting to actually emulate how the NES outputs color data or how NTSC signals are carried. This process is entirely based on my perception of what an ideal CRT should look like.

Let’s take a look at the output first, and then I’ll step back and explain how I got there.

crtsim2

So, that’s pretty different from the previous image. Let’s break down everything we’re seeing here.

The first and most obvious difference is the motion trails. These were present in YHtWtG as well, although they were much more subtle. (The large gap between the current frame and the end of the trail is due to the delay caused by taking a PIX capture. During actual gameplay, this gap would not exist.)

To produce trails, I save off the final output of the previous frame and blend it with the current scene. In the pixel shader, I also sample the pixels directly to the left and right of the local pixel on the previous frame. This makes the trails blur horizontally over time (as seen in the detail below) and also produces some blur or “fuzz” on static images. I scale all samples made from the previous frame buffer by an input RGB value to emphasize the reds and oranges.

crtsim_zoom1

One unfortunate downside to this implementation of trails is that it depends on having a consistent frame rate; it has been tuned for 60fps and will not behave exactly as expected at other rates or if vsync is disabled.
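
As a rough sketch of how that blend might look in HLSL (illustrative only; the sampler names, neighbor weights, tint, and blend amount are all placeholders rather than the shipping shader):

```hlsl
// Sketch only: names, weights, and tint are placeholder assumptions.
sampler2D CurrFrame : register(s0); // current clean scene
sampler2D PrevFrame : register(s1); // saved final output of the previous frame

static const float2 Texel       = float2(1.0 / 256.0, 1.0 / 224.0);
static const float3 TrailTint   = float3(1.0, 0.6, 0.35); // emphasize reds/oranges
static const float  TrailWeight = 0.4;                    // tuned assuming 60fps

float4 TrailsPS(float2 uv : TEXCOORD0) : COLOR0
{
    float3 curr = tex2D(CurrFrame, uv).rgb;

    // Sample the previous frame at the local pixel and its immediate
    // horizontal neighbors so trails smear sideways over time and static
    // images pick up a little horizontal fuzz.
    float3 prev = 0.5  * tex2D(PrevFrame, uv).rgb
                + 0.25 * tex2D(PrevFrame, uv - float2(Texel.x, 0.0)).rgb
                + 0.25 * tex2D(PrevFrame, uv + float2(Texel.x, 0.0)).rgb;

    return float4(lerp(curr, prev * TrailTint, TrailWeight), 1.0);
}
```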

To produce the vertical bands seen at the edges of high-contrast changes, I sample a few neighboring pixels to the left and right of the local pixel and scale the local pixel’s brightness by the difference in brightness between it and its neighbors. Each neighbor’s contribution is weighted by its distance, and every other weight is negated in order to produce the alternating bands seen here.

crtsim_zoom2
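
In shader terms, the idea might be sketched like the following, with the tap count, luma weights, and overall strength invented for the example:

```hlsl
// Sketch only: tap count, weights, and strength are illustrative guesses.
sampler2D CurrFrame : register(s0);

static const float2 Texel = float2(1.0 / 256.0, 1.0 / 224.0);

float Luma(float3 rgb)
{
    return dot(rgb, float3(0.299, 0.587, 0.114));
}

float4 RingingPS(float2 uv : TEXCOORD0) : COLOR0
{
    float3 local = tex2D(CurrFrame, uv).rgb;
    float localLuma = Luma(local);

    float ringing = 0.0;
    [unroll]
    for (int i = 1; i <= 4; ++i)
    {
        // Brightness difference between the local pixel and each pair of
        // neighbors, weighted by distance...
        float neighbors = Luma(tex2D(CurrFrame, uv - float2(i * Texel.x, 0.0)).rgb)
                        + Luma(tex2D(CurrFrame, uv + float2(i * Texel.x, 0.0)).rgb);
        float weight = 1.0 / (float)i;

        // ...with every other weight negated to create alternating bands.
        if (i % 2 == 1)
        {
            weight = -weight;
        }

        ringing += weight * (neighbors - 2.0 * localLuma);
    }

    return float4(local * (1.0 + 0.15 * ringing), 1.0);
}
```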

Finally, in order to break up vertical lines, I’ve added a new feature to the shader for Super Win the Game, which is a cheap, simplified approximation of the way an NTSC scanline signal behaves. An explanation of the mechanics behind this aesthetic can be found on the excellent Nesdev wiki here: http://wiki.nesdev.com/w/index.php/NTSC_video

My approximation is to simply weight all pixel samples by a mask consisting of a cycle of pure red, green, and blue values.

crtsim_ntsc

I offset this mask by one pixel vertically every other frame in order to recreate the animated “roll” or “jitter” or “sparkle” characteristic of actual CRT screens. Once again, though, this depends very heavily on running at a consistent 60fps, so optionally (or necessarily if vsync is disabled), I blend the regular and offset mask values to produce a temporally stable result that still breaks up vertical lines.
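
Putting the mask and the per-frame offset together, a sketch could look like this; the diagonal three-texel cycle and the uniform names are my assumptions about one possible layout:

```hlsl
// Sketch only: the diagonal three-texel cycle and the uniforms are assumed.
uniform float FrameParity; // alternates between 0 and 1 each tick
uniform float MaskBlend;   // 0 = animated roll, 1 = temporally stable blend

float3 RGBMask(float2 pixelPos, float parity)
{
    // Repeating cycle of pure red, green, and blue, stepped per row so it
    // forms a diagonal pattern; adding the parity shifts the whole mask by
    // one pixel vertically on alternating frames.
    // (pixelPos is the source pixel coordinate, e.g. uv * float2(256, 224).)
    int phase = (int)fmod(pixelPos.x + pixelPos.y + parity, 3.0);
    return (phase == 0) ? float3(1.0, 0.0, 0.0)
         : (phase == 1) ? float3(0.0, 1.0, 0.0)
         :                float3(0.0, 0.0, 1.0);
}

float3 ApplyRGBMask(float3 color, float2 pixelPos)
{
    float3 maskNow  = RGBMask(pixelPos, FrameParity);
    float3 maskNext = RGBMask(pixelPos, 1.0 - FrameParity);

    // Blending the two parities gives a temporally stable result that
    // still breaks up vertical lines when a steady 60fps isn't guaranteed.
    return color * lerp(maskNow, 0.5 * (maskNow + maskNext), MaskBlend);
}
```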

Once all this is done, I save off the image to be used as the “previous” frame when it’s time to render the next frame. (To avoid having to copy the pixel data, I actually implement this by flipping between two buffers each tick.)

The next step is to draw this 256×224 image to the screen. I use the game scene buffer as a texture map and draw it across the surface of a 3D mesh of a curved screen. It is the geometry of this mesh as viewed by the scene’s camera that produces the curved lines seen here. If I were to use an orthographic camera to render this scene, the lines would be perfectly cardinal, i.e., axis-aligned (ignoring additional distortion that I apply intentionally, which I will mention later).

crtsim3

In this step, I also apply pixel aspect ratio compensation, scanlines, and environmental effects. The NES uses an 8:7 pixel aspect ratio, which makes everything look slightly wider than authored. Scaling the source 256×224 image by this ratio produces an image with a ratio of 64:49, which is slightly narrower/taller than the 4:3 dimensions of the screen mesh, so when we fit this image to the horizontal bounds of the screen, it pushes a few pixels off the top and bottom. We can also apply additional overscan at this time if desired, but I am not currently doing this. (For comparison, YHtWtG actually underscanned the image a bit, so there was a black border between the edge of the image and the edge of the screen mesh.)
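
For the curious, the arithmetic works out like this: 256 × 8/7 ≈ 292.6, and 292.6:224 reduces to 64:49 ≈ 1.306, versus 4:3 ≈ 1.333 for the screen, so fitting the width crops roughly 2% of the height, or a couple of source rows at each edge. A hypothetical way to express that fit in UV space (a sketch, not necessarily how the game applies it):

```hlsl
// Sketch only: one possible expression of the width-fit crop.
static const float SourceAspect = (256.0 * 8.0 / 7.0) / 224.0; // 64:49, ~1.306
static const float ScreenAspect = 4.0 / 3.0;                   // ~1.333

float2 FitToScreenWidth(float2 uv)
{
    // Sampling a slightly narrowed vertical range (~98%) crops a couple of
    // source rows off the top and bottom once the width is fit to the mesh.
    float vScale = SourceAspect / ScreenAspect; // ~0.98
    return float2(uv.x, (uv.y - 0.5) * vScale + 0.5);
}
```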

I also apply a scanline mask at this time. For Super Win the Game, this is the mask I am using:

crtsim_scanlines

This creates alternating horizontal lines, as seen in the detail below.

crtsim_zoom3

In contrast to YHtWtG, where the scanlines perfectly overlaid the source pixels, I’m applying some pincushion distortion to intentionally stagger these a bit and also to compensate for some of the curvature due to camera perspective. This can be seen to some extent in the detail above; the white line on the left side of the block does not perfectly align with the scanlines.
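
A minimal sketch of that sort of UV warp, with the strength constant made up:

```hlsl
// Sketch only: the strength value is a placeholder.
float2 Pincushion(float2 uv)
{
    static const float Strength = 0.03; // assumed distortion strength

    // Push sample coordinates outward as a function of squared distance
    // from the center; on screen this bows straight lines inward, which
    // staggers the scanline mask against the source pixels.
    float2 centered = uv - 0.5;
    float r2 = dot(centered, centered);
    return 0.5 + centered * (1.0 + Strength * r2);
}
```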

Also visible in this detail is another new addition, which is a reflection on the screen border. This is a cheap hack that just samples from the game scene using the exact same shader code as the screen mesh, but with flipped texture coordinates to create a mirror image. At the corners of the screen, where texture distortion starts to look a little wonky as it wraps from one edge to another, I taper off the amount of reflection.
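
Sketched in HLSL, with a hypothetical SampleScreenMesh() standing in for the shared screen shader path and the taper numbers invented:

```hlsl
// Sketch only: SampleScreenMesh() and all constants are placeholders.
sampler2D GameScene : register(s0);

float3 SampleScreenMesh(float2 uv)
{
    // In the real shader this would be the exact same code path the screen
    // mesh uses; it's reduced to a plain fetch here.
    return tex2D(GameScene, uv).rgb;
}

float4 BorderReflectionPS(float2 uv : TEXCOORD0) : COLOR0
{
    // Flipped texture coordinates create the mirror image on the border.
    float3 reflection = SampleScreenMesh(1.0 - uv);

    // Taper the reflection toward zero near the corners, where the wrap
    // from one edge to the next starts to look wrong.
    float2 d = abs(uv - 0.5) * 2.0;                        // 0 center, 1 edges
    float cornerness = min(d.x, d.y);                      // ~1 only near corners
    float taper = 1.0 - smoothstep(0.6, 1.0, cornerness);  // assumed falloff
    return float4(reflection * taper * 0.25, 1.0);         // assumed intensity
}
```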

Once all that’s done, I blur the scene in a two-pass process. First, I downsample to a render target texture at 1/16 scale in each dimension. I do a number of Poisson taps during this step to blur the image, and I can also take advantage of automatic mipmap generation to produce a smoother image.

crtsim4
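
A sketch of what such a downsample pass could look like; the disk points and radius here are generic placeholders, not the shipping kernel:

```hlsl
// Sketch only: disk points and radius are placeholder values.
sampler2D SceneSampler : register(s0); // mipmapped full-scale scene

static const int NumTaps = 8;
static const float2 PoissonDisk[8] =
{
    float2(-0.326, -0.406), float2(-0.840, -0.074),
    float2(-0.696,  0.457), float2(-0.203,  0.621),
    float2( 0.962, -0.195), float2( 0.473, -0.480),
    float2( 0.519,  0.767), float2( 0.185, -0.893),
};

float4 DownsamplePS(float2 uv : TEXCOORD0) : COLOR0
{
    static const float Radius = 4.0 / 256.0; // assumed tap radius in UVs

    // Scattered taps blur while downsampling; because the source texture
    // is mipmapped, each tap already lands on a prefiltered image.
    float3 sum = 0.0;
    [unroll]
    for (int i = 0; i < NumTaps; ++i)
    {
        sum += tex2D(SceneSampler, uv + PoissonDisk[i] * Radius).rgb;
    }
    return float4(sum / NumTaps, 1.0);
}
```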

I then upsample to a full-scale buffer and again use a number of Poisson taps to further blur the image. The resulting image can be added back to the scene to produce bloom, and can also be drawn under menus when the game is paused.

crtsim5

To avoid washing out the whole scene, I take the square of the blurred image before adding it back to the scene. This produces a more subtle glow than if I used the blurred image directly. (Another common solution is to subtract off a threshold value, which may be near or equal to 1.0 if HDR rendering is enabled. I do not use HDR for this game, so my solution must anticipate values exclusively within the [0,1] range.)
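
As a sketch, with sampler names assumed:

```hlsl
// Sketch only: sampler names are placeholders. Squaring keeps dim values
// from washing the scene out, since x*x <= x everywhere on [0,1].
sampler2D Scene : register(s0);
sampler2D Bloom : register(s1); // upsampled blur from the previous passes

float4 CompositePS(float2 uv : TEXCOORD0) : COLOR0
{
    float3 scene = tex2D(Scene, uv).rgb;
    float3 blur  = tex2D(Bloom, uv).rgb;
    return float4(scene + blur * blur, 1.0); // squared blur added as bloom
}
```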

crtsim6

And that’s it!

I’ve skipped over a few non-crucial details like the specular and environmental reflections on the screen or the saturation increase to compensate for some muting that occurs when the scanline texture is applied, but this covers most of the process.

Of course, the really fun part (and a great sanity check) is seeing how it looks when the source image is…something more familiar.

crtsim_smb3

Looks good to me!

Update, June 22, 2014: I’ve recently added an additional step.

I have been authoring all the content for Super Win the Game using this standardized palette. In many cases, this is perfectly acceptable and probably familiar to anyone who’s played NES games on modern hardware, but it’s not quite what things looked like back in the 80s. This is because the NES generated color values in YIQ space (the space used by NTSC signals) rather than RGB space. The standardized palette provided on Wikipedia is an idealized representation of how these colors are intended to look, but in practice, it is substantially different from how NES games actually looked when played on a circa-1980s CRT screen.

In order to recreate the look of an NTSC signal in YIQ space, I use a color grading lookup table to alter the RGB values rendered to the “clean” frame prior to blending with the previous frame. This lookup table is 32 texels to a side, represented as a 1024×32 2D texture map.

NTSCPaletteForArticle

This texture is programmatically generated at runtime given a few input parameters. The algorithm is a simplified version of Drag’s NES Palette Generator.
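
For reference, here is one way the lookup itself could be done in the shader, assuming the 1024×32 strip packs 32 slices of 32×32 side by side (my guess at the layout, not necessarily how the game arranges it):

```hlsl
// Sketch only: slice layout and manual blue-axis blend are assumptions.
sampler2D PaletteLut : register(s2);

float3 ApplyColorGrade(float3 rgb)
{
    static const float LutSize = 32.0;

    // Red and green select a texel within one 32x32 slice; blue selects
    // which slice along the 1024-texel strip.
    float slice   = rgb.b * (LutSize - 1.0);
    float sliceLo = floor(slice);

    float2 uv;
    uv.y = (rgb.g * (LutSize - 1.0) + 0.5) / LutSize;
    uv.x = (sliceLo * LutSize + rgb.r * (LutSize - 1.0) + 0.5) / (LutSize * LutSize);

    // The hardware can't interpolate across slice boundaries in a 2D strip,
    // so blend the two nearest blue slices by hand.
    float3 lo = tex2D(PaletteLut, uv).rgb;
    float3 hi = tex2D(PaletteLut, uv + float2(1.0 / LutSize, 0.0)).rgb;
    return lerp(lo, hi, slice - sliceLo);
}
```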

NTSCResultsForArticle

As you can see above, this drastically alters the color palette in some cases, but more importantly, it serves to provide contrast between light and dark values. As with much of this process, it’s debatable whether the end results look “better.” To my eyes, the content as authored looks more vibrant and cohesive than the altered images, but this palette conversion is an important part of establishing an authentic retro aesthetic, so it’s a tradeoff I’m happy to make.

2 thoughts on “CRT simulation”

  1. Hi. Cool post! Most emulator programs just have the option to enable a post-process shader or two to fake the CRT curvature and add some scanlines. You go quite a bit farther, and it shows. :)

    For automatically creating mipmaps in OpenGL, the function glGenerateMipmap(GLenum target) will create mipmaps for the texture currently bound to the target you specify (you’d probably use GL_TEXTURE_2D.)

    https://www.opengl.org/sdk/docs/man4/html/glGenerateMipmap.xhtml
