
Narrowband imaging


oguerrea

Question

Hello,

 

I am just starting narrowband imaging with my Canon DSLR.

 

Is there any specific setting I should use? (Should there be since it is an RGB sensor and not mono?)

There will be some specific post-processing work to be done, but should there be any pre-processing or setting to use?

 

Thanks,

Olivier



2 answers to this question

Recommended Posts

NB subs will take a long time. Just be prepared for that. Where my OSC subs were 8 min, the NB subs were 30.

 

Since the other channels will contain essentially nothing but noise, you may want to remove the unused channels in processing.

 

Ha and SII are both "red", so you could remove blue and green.

OIII is blue-green, so you could remove red and possibly keep just green (there are twice as many green pixels as blue or red anyway), but you should compare your green and blue channels to decide. There's no sense in throwing away a channel with good signal-to-noise.
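
The green-vs-blue comparison above can be sketched in a few lines of numpy. This is only an illustration, not any stacking tool's actual routine: it assumes a debayered OIII sub loaded as an H x W x 3 float array, and it uses a crude, robust background-over-noise statistic just to rank the two channels.

```python
import numpy as np

def channel_snr(channel):
    """Crude SNR estimate: median level over background scatter.

    Uses the median absolute deviation (MAD, scaled to match a
    Gaussian sigma) as a robust noise estimate so that a few bright
    stars do not dominate the statistic.
    """
    background = np.median(channel)
    noise = 1.4826 * np.median(np.abs(channel - background))
    return background / noise if noise > 0 else 0.0

# Hypothetical debayered OIII sub, simulated here for the example:
# OIII lands in green and blue; the red channel is noise only.
rng = np.random.default_rng(0)
sub = np.stack([
    rng.normal(10, 5, (100, 100)),    # red: noise only
    rng.normal(200, 5, (100, 100)),   # green: strong OIII signal
    rng.normal(120, 5, (100, 100)),   # blue: weaker OIII signal
], axis=-1)

g_snr = channel_snr(sub[..., 1])
b_snr = channel_snr(sub[..., 2])
print(f"green SNR ~ {g_snr:.1f}, blue SNR ~ {b_snr:.1f}")

# Keep the blue channel only if it carries worthwhile signal relative
# to green, rather than discarding it blindly.
keep_blue = b_snr > 0.5 * g_snr
```

The 0.5 threshold is an arbitrary placeholder; eyeballing the two channels after a stretch works just as well in practice.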

 

I think the exact workflow would depend on what you're using for stacking. If DSS, I'd probably stack the OSC data, then split the result into channels and remove the non-signal data before further processing.

 

If you are using something more oriented to mono data, like CCDstack, you would split the subs into channels before anything else.

 

What are you going to use for post processing, Photoshop?


dts350z is correct. Using a DSLR for narrowband imaging will require very long exposures, since the filter is passing only a small fraction of the total light.

 

There should be no other special considerations during capture, except that if the filter is a clip-in filter, focusing can be a major pain. You can get close to critical focus without the filter, but then you have to dismount the camera, insert the filter, and remount the camera. You then need to re-adjust the focus, which requires a star bright enough at the narrowband wavelength to focus on. This most often means you will not be able to use LiveView and will instead have to take snap images for focusing.

 

Your processing workflow will depend on how you intend to use the narrowband data. 

 

Say that you are using a Hydrogen alpha (Ha) filter. Since the Ha wavelength is deep in the red range, only the red pixels on your sensor will collect any photons. The values for the green and blue pixels will contain only noise. Now you go through your normal stacking process (debayer, calibrate, normalize, grade, align, and combine are typical steps). You will end up with a color (RGB) image, but only the red channel will have any data. So, what do you do now?

 

The simplest thing is to extract the red channel data into a monochrome image. How you do that will depend on what app you are using. You can then process and publish that monochrome image, or you could process it and re-combine it with other data in some way. There are really several options. One possibility is to combine Ha data with RGB data for the same target by aligning the Ha data with the RGB data and then layering the Ha data in as a luminance layer. Another option is to replace the red data in the RGB image with the Ha data.
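
To make those options concrete, here is a minimal numpy sketch of all three: extracting the red channel as a mono frame, layering aligned Ha in as luminance, and substituting Ha for the red channel. The blend recipe and the `blend` parameter are my own illustration, not a specific app's method, and it assumes both images are already aligned, the same shape, and scaled to [0, 1].

```python
import numpy as np

def extract_red(rgb):
    """Pull the red channel out of a stacked RGB image as a mono frame."""
    return rgb[..., 0].copy()

def ha_as_luminance(rgb, ha, blend=0.7):
    """Layer aligned Ha data in as luminance (one possible recipe).

    Computes a simple luminance estimate from the RGB image, blends
    the Ha frame into it, then rescales each channel so the image
    carries the new luminance while keeping its color ratios.
    """
    lum = rgb.mean(axis=-1)
    new_lum = (1 - blend) * lum + blend * ha
    scale = np.divide(new_lum, lum, out=np.ones_like(lum), where=lum > 0)
    return np.clip(rgb * scale[..., None], 0.0, 1.0)

def replace_red(rgb, ha):
    """Alternative: substitute the Ha frame for the red channel."""
    out = rgb.copy()
    out[..., 0] = ha
    return out

# Tiny synthetic example: a flat gray RGB stack plus a bright Ha frame.
rgb = np.full((4, 4, 3), 0.5)
ha = np.full((4, 4), 1.0)
mono = extract_red(rgb)
lum_combined = ha_as_luminance(rgb, ha)
red_swapped = replace_red(rgb, ha)
```

In Photoshop the luminance option corresponds to pasting the Ha frame as a layer set to Luminosity blend mode; the book mentioned below walks through that in detail.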

 

One good source for how to combine Ha with RGB data is Photoshop Astronomy by R. Scott Ireland, published by Willmann-Bell.

 

IMO, and with all of that said, I do not recommend shooting narrowband with a DSLR, or any one-shot color camera for that matter. Others will likely disagree, since just purchasing a filter is much cheaper than purchasing a CCD camera with a filter wheel, plus filters.

 

In my above example, with an Ha filter, only 25% of the pixels (those with red filters) collect any data. The debayering process then interpolates (makes up) red data for the green and blue pixels. Contrast that with a monochrome astro CCD camera, where every pixel collects the Ha data with a 16-bit data range and you can bin 2x2 to increase sensitivity. That makes it much easier to create a satisfying image in processing.
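
That 25% figure falls straight out of the Bayer mosaic geometry. A quick sketch, assuming the common RGGB pattern (each 2x2 cell holds one red, two green, and one blue filter) tiled over a hypothetical sensor:

```python
import numpy as np

# One RGGB cell, tiled across a hypothetical 1024 x 1024 sensor.
pattern = np.array([["R", "G"],
                    ["G", "B"]])
sensor = np.tile(pattern, (512, 512))

red_fraction = np.mean(sensor == "R")
print(f"red pixels: {red_fraction:.0%}")  # only these sites see Ha photons
```

Green comes out at 50% and blue at 25%, which is also why the earlier advice suggests green may be the channel worth keeping for OIII.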


Archived

This topic is now archived and is closed to further replies.

