Image Processing
Become a master of bi-colour imaging.
Colour images of deep-sky objects fall into two categories: broadband and narrowband. Broadband images can be captured in a number of ways: either all of the colour data can be recorded simultaneously, as an RGB image using a DSLR or colour CCD camera, or it can be assembled from R, G, B and luminance (L) captures made with a monochrome CCD and external filters. Regardless of the capture method, broadband images deliver an approximation of the colours you would see if your eyes were sensitive enough.
Narrowband images are quite different. They are created using a monochrome CCD camera fitted with a narrowband filter that passes a slim portion of the visible light spectrum – popular filters include hydrogen-alpha (Ha), doubly ionised oxygen (OIII), singly ionised sulphur (SII) and hydrogen-beta (Hb).
By allocating narrowband data to the red, green and blue channels of an image, a process known as ‘mapping’, you can produce different colour combinations. One of the most popular is known as the ‘Hubble Palette’, where SII is mapped to red, Ha to green and OIII to blue. The striking false-colour images that result are suitable for scientific examination, although amateur astronomers tend to manipulate their images for a more pleasing colour balance.
Three’s a crowd
Bi-colour imaging is another method of producing a false-colour narrowband vista, using just two filters instead of the traditional three. This reduces the amount of data that needs to be captured by a third – ideal for fickle British skies! From our diagram of the visible light spectrum (right), you can see that OIII emissions are on the cusp of green and blue light, something imagers can make use of by mapping Ha to the red channel and OIII to both the green and blue channels.
Although this mapping also produces a false-colour image, the appearance is not far removed from a broadband image, but it will have crisper detail as the data has been collected from specific emission regions. Most astrophotographers adjust the colour hues to increase the contrast in certain regions of the image and, as the colours are already false, anything goes.
The process starts with stacking your Ha and OIII data into two master files using
your normal stacking software. Align the OIII image with the Ha image using your software of choice, then save the aligned OIII image. Open the Ha and aligned OIII files in Photoshop or an equivalent graphics editor and then apply adjustments using Levels and Curves to produce two acceptable mono images. Don’t be surprised to
discover that the OIII data is generally weaker than the Ha data. Save these files in PSD format with suitable filenames. Flatten both images using Layer > Flatten Image, then select the OIII image and duplicate it using Image > Duplicate.
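If you prefer to script the alignment step rather than use a graphics editor, a simple translation-only registration can be done with phase correlation. This pure-NumPy sketch is illustrative only: it handles whole-pixel shifts, whereas dedicated alignment tools also correct sub-pixel offsets, rotation and scale. The frames here are synthetic stand-ins for your Ha and OIII masters.

```python
import numpy as np

def estimate_shift(reference, target):
    """Estimate the integer (dy, dx) by which `target` must be rolled
    to line up with `reference`, using phase correlation."""
    f_ref = np.fft.fft2(reference)
    f_tgt = np.fft.fft2(target)
    cross_power = f_ref * np.conj(f_tgt)
    cross_power /= np.abs(cross_power) + 1e-12  # keep only phase information
    correlation = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Wrap shifts larger than half the frame into negative offsets
    h, w = reference.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx

# Example: a synthetic 'star' frame and a displaced copy of it
ha = np.zeros((64, 64))
ha[20, 30] = 1.0                                    # one bright pixel
oiii = np.roll(np.roll(ha, 3, axis=0), -5, axis=1)  # OIII offset by (3, -5)
dy, dx = estimate_shift(ha, oiii)
aligned_oiii = np.roll(np.roll(oiii, dy, axis=0), dx, axis=1)
```

After the rolls, `aligned_oiii` matches the Ha frame exactly; on real data you would save this aligned OIII master before moving on.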
The next task is to composite the images to produce an RGB colour image by mapping Ha, OIII and the OIII copy to the R, G and B channels respectively. Click on Window > Channels, then click on the ‘menu’ button at the top-right corner and choose Merge Channels. Select RGB from the drop-down menu, pick the correct image for each channel and click on OK.
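In array terms, this Merge Channels step is nothing more than stacking the two mono frames into the three colour planes. A minimal NumPy sketch, using illustrative flat frames in place of your stretched 0–1 masters:

```python
import numpy as np

# Illustrative mono frames (real data would be your stretched Ha and
# aligned OIII masters, scaled to the 0-1 range)
ha = np.full((4, 4), 0.8)
oiii = np.full((4, 4), 0.3)

# Bi-colour mapping: Ha -> red, OIII -> both green and blue
rgb = np.dstack([ha, oiii, oiii])
```

Because OIII feeds both green and blue, regions dominated by OIII emission come out cyan, while Ha-dominated regions stay red.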
You now have the starting point for your colour image. From here, you can decide the colour hue that you want represented in your image. You could choose a good representation of the colours that you’d expect to see if this were a standard RGB colour image, which is the palette produced initially by merging the channels; alternatively, you can choose a colour palette that increases the contrast between the various regions, despite moving away from more natural colours. This is the route we have taken with our image of the Pelican Nebula. Select Image > Adjustments > Hue/Saturation and adjust the Hue slider to achieve the colour tones that appeal to you.
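For scripted workflows, the same hue adjustment can be sketched with Matplotlib’s colour-space helpers. This is a rough stand-in for the Hue slider, not a replica of Photoshop’s exact behaviour; the 120° rotation below is purely illustrative.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def shift_hue(rgb, degrees):
    """Rotate every pixel's hue by `degrees` around the colour wheel,
    roughly what dragging the Hue slider does."""
    hsv = rgb_to_hsv(rgb)
    hsv[..., 0] = (hsv[..., 0] + degrees / 360.0) % 1.0
    return hsv_to_rgb(hsv)

# A single pure-red pixel rotated by 120 degrees becomes pure green
red = np.array([[[1.0, 0.0, 0.0]]])
green = shift_hue(red, 120)
```

Brightness and saturation are untouched, so the adjustment only re-tints the image rather than altering its stretch.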
Once you have settled on your colour choice, you can continue to process your image conventionally. Careful use of the Selective Colour dialogue box (Image >
Adjustments > Selective Colour) lets you tweak individual colour hues minutely, something that allowed us to really enhance the dusty areas. Using the Ha data as a false luminance channel also worked wonders in terms of overall detail.
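The false-luminance trick can also be sketched in code. This HSV-based version simply swaps the colour image’s brightness for the Ha frame; it is an approximation of Photoshop’s Luminosity blend mode (which works in a luma-like space), not an exact match, and the flat frames below are illustrative.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def apply_luminance(rgb, lum):
    """Swap in `lum` (0-1) as the brightness channel while keeping each
    pixel's hue and saturation -- an HSV approximation of a
    Luminosity blend."""
    hsv = rgb_to_hsv(rgb)
    hsv[..., 2] = lum
    return hsv_to_rgb(hsv)

# Illustrative: a flat red colour image re-lit by a stand-in Ha frame
colour = np.zeros((2, 2, 3))
colour[..., 0] = 1.0           # pure red everywhere
ha_lum = np.full((2, 2), 0.5)  # stand-in for the stretched Ha master
result = apply_luminance(colour, ha_lum)
```

Because Ha is usually the deepest, sharpest dataset, lending its detail to the brightness channel while the bi-colour composite supplies the hues often gives the cleanest final image.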