I have 16-bit packed RAW grayscale images.
Each image may have a different high bit, for example 9, 11, 13, or 15 (10, 12, 14, or 16 bits per pixel respectively). By building and using image statistics I can calculate min/max pixel values and expand the dynamic range of an image to 16 bits per pixel, or use other embedded information to expand the range to 16 bits. Note that these images have > 10 megapixels and file sizes > 20 MB ;) speed is an issue here.

All of the above is very easily done, and I have that part finished already.
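For reference, here is a minimal sketch of the kind of min/max range expansion described above (the function name and the fixed-point scaling trick are illustrative assumptions, not the poster's actual code):

#include <algorithm>
#include <cstddef>
#include <cstdint>

// Stretch the measured [lo, hi] pixel range to the full 16-bit range.
// Uses 24.8 fixed-point math instead of a per-pixel divide, since speed
// matters on > 10-megapixel images. Assumes pixels are already unpacked
// to one uint16_t per pixel.
void ExpandTo16Bit(uint16_t* pixels, size_t count, uint16_t lo, uint16_t hi)
{
    if (hi <= lo) return;                              // flat image
    const uint32_t scale = (65535u << 8) / (hi - lo);  // 24.8 fixed point
    for (size_t i = 0; i < count; ++i)
    {
        uint32_t v = std::min<uint32_t>(std::max<uint32_t>(pixels[i], lo), hi) - lo;
        pixels[i] = static_cast<uint16_t>((v * scale) >> 8);  // v*scale <= 65535<<8, no overflow
    }
}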

Now I wish to display this image data on a CDC without losing the dynamic range of the image (i.e. without converting pixel data from 16 to 8 bits). I also want to avoid keeping duplicate images, for example one 16-bit original for processing and a second 8-bit clone just for displaying on the DC. Is there any VC++/MFC solution for this, or is this only my wishful thinking?

I briefly tried to use a DIBSection with palettes, but to my knowledge that is meant for RGB displays with less than 24 bits (8 bits per channel). In any case, I got stuck trying to find a proper working example for palettes; none of them uses 16-bit grayscale image data.

I guess I've tried almost everything, and it seems Windows GDI doesn't like me much when 16-bit grayscale images are in question.

Any other ideas on how to solve the problem would be very appreciated.

thanks

Normally, Windows machines support at most 8 bits per color plane, so you'll need to trim your dynamic range down to 8 bits.

That gives you two simple options for grayscale.

Use 24-bit color and no palette.
In this mode, give each plane (red, green, and blue) exactly the same value:
Color = RGB(x, x, x);
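For example, a hypothetical conversion loop for this 24-bit route might look like the following (names are illustrative; 'highBit' is the image's highest used bit, and padding of DIB rows to 4-byte boundaries is ignored for brevity):

#include <cstddef>
#include <cstdint>

// Fill a 24-bpp BGR buffer from 16-bit gray pixels, keeping the top
// 8 bits of each pixel. GDI DIBs store channels in B, G, R order.
void FillBGR24(const uint16_t* src, uint8_t* dst, size_t count, int highBit)
{
    const int shift = highBit - 7;   // e.g. high bit 11 -> shift 4
    for (size_t i = 0; i < count; ++i)
    {
        uint8_t x = static_cast<uint8_t>(src[i] >> shift);
        *dst++ = x;   // blue
        *dst++ = x;   // green
        *dst++ = x;   // red
    }
}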

Use 8-bit mode with a palette.
In this mode your palette is simple: it's just the 256 values with each color plane set to the same value:
Palette[x] = RGB(x, x, x);
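A minimal GDI sketch of this palette route, assuming an 8-bpp DIB section whose color table is a linear gray ramp (standard Win32 calls; the surrounding setup is simplified):

#include <windows.h>
#include <vector>

// Create an 8-bpp DIB section with a 256-entry grayscale color table.
// Returns the HBITMAP; *ppBits receives the pixel buffer to fill.
HBITMAP CreateGrayDIB(HDC hdc, int width, int height, void** ppBits)
{
    // BITMAPINFO declares only one color entry; allocate room for 256.
    std::vector<BYTE> buf(sizeof(BITMAPINFOHEADER) + 256 * sizeof(RGBQUAD), 0);
    BITMAPINFO* bmi = reinterpret_cast<BITMAPINFO*>(buf.data());
    bmi->bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi->bmiHeader.biWidth       = width;
    bmi->bmiHeader.biHeight      = -height;   // negative = top-down rows
    bmi->bmiHeader.biPlanes      = 1;
    bmi->bmiHeader.biBitCount    = 8;
    bmi->bmiHeader.biCompression = BI_RGB;
    for (int x = 0; x < 256; ++x)             // Palette[x] = RGB(x, x, x)
    {
        bmi->bmiColors[x].rgbRed   = (BYTE)x;
        bmi->bmiColors[x].rgbGreen = (BYTE)x;
        bmi->bmiColors[x].rgbBlue  = (BYTE)x;
    }
    return CreateDIBSection(hdc, bmi, DIB_RGB_COLORS, ppBits, NULL, 0);
}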

A more complex option is dithering, but I expect you won't like that option. Each pixel in your image would need roughly four display pixels to render. This option was passable on CGA displays only because true color wasn't available.

A final option is to buy video hardware that supports 16 bits per color plane. I have no recommendations on this option.
Comments
SolarNigerija 17-Nov-12 22:30pm    
So, to put it simply, two images is the only solution?
- the original 16-bit image in memory (and only in memory; all processing is done on this image)
- a clone of the original, as an 8-bit representation (less memory consumption), for display purposes only (is it faster or slower than 24/32-bit? I guess I'll start using the high-performance counter sooner rather than later)

I wonder how Adobe solved this problem in PS,
since I can load these raw images with "Open As -> Adobe RAW".

PS: the final option would be a Matrox medical-grade GFX card, which is not a targeted client GFX (much less display), as the app would then work only on those cards ;)
JackDingler 17-Nov-12 22:38pm    
8-bit could be faster than 24-bit. It takes up a much smaller memory footprint, and graphics cards have long had native support for color palettes.

It's common in applications to build a second 'copy' of data for visual presentation.
I was discussing the subject with an old friend who has a degree in audio and video technologies. Although he doesn't program, he gave me an idea that so far stands: what we have is a 24-bit RGB color-space display (8 bits per channel), and therefore only 8-bit (256) levels of gray.
Any conversion of 16-bit gray -> 8-bit gray destroys image information, so we need to preserve as much data as we can, while presenting it as well as we can without losing speed in the conversion. Since monitors interpret the signal through a non-RGB system, we should look through those systems (or only one system; I need more intel on this). What we actually need is to preserve as much data as we can in luminance or brightness (I am only guessing here at the components of that system).

Now, his idea is not only to use 24-bit RGB images with primary levels of gray, where all components are equal to each other, but also to vary the individual RGB components by 1 (or 2 in the case of the blue component) to display secondary levels of gray, using the coefficients of RGB->GRAY conversion algorithms.
For example:
RMY grayscale weights are Red: 0.5, Green: 0.419, Blue: 0.081. Blue has the lowest intensity, so we would map the 1st secondary level of gray to it, the 2nd to green, the 3rd to red, the 4th to blue+green, the 5th to blue+red, and the 6th to green+red; together with the primary level this gives me 7 levels of gray (I could add the blue component a second time to gain an 8th, which would approach 16-bit levels in luma). With BT.709 grayscale (Red: 0.2125, Green: 0.7154, Blue: 0.0721) the approach is similar.
My friend suggests using BT.709 before RMY.

And his claim is that the human eye is not good enough to recognize these little differences in the color components, while we preserve the luma of the 16-bit grayscale images.

Can anyone confirm or deny this?
Should I go down this road, given that I'd need to build huge static LUT tables for the conversion if I want speed ;) A sketch of what such a LUT might look like follows.
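To make the idea concrete, here is a hypothetical sketch of such a static LUT for 11-bit (2048-level) gray, with the sub-level ordering following the RMY scheme above (blue first, then green, red, and the pairs); whether the extra levels are actually distinguishable on a given monitor is exactly the open question:

#include <cstdint>

// Per-sub-level {B, G, R} increments, ordered by increasing luma
// contribution under the RMY weights (B < G < R); the 8th sub-level
// reuses the blue component a second time, as described above.
static const uint8_t kSub[8][3] = {
    {0,0,0}, {1,0,0}, {0,1,0}, {0,0,1},
    {1,1,0}, {1,0,1}, {0,1,1}, {2,0,0},
};

struct BGR { uint8_t b, g, r; };
static BGR g_lut[2048];   // 256 base levels x 8 sub-levels

void BuildGrayLut()
{
    for (int v = 0; v < 2048; ++v)
    {
        uint8_t base = static_cast<uint8_t>(v >> 3);   // top 8 bits
        const uint8_t* d = kSub[v & 7];                // sub-level 0..7
        // Clamp near white so bumped components don't wrap past 255.
        g_lut[v].b = (uint8_t)(base + (base < 254 ? d[0] : 0));
        g_lut[v].g = (uint8_t)(base + (base < 255 ? d[1] : 0));
        g_lut[v].r = (uint8_t)(base + (base < 255 ? d[2] : 0));
    }
}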
Windows has supported 16-bit grayscale since Windows 7 (and Vista SP2).
See the System.Windows.Media namespace (present in .NET 3 and above).

See here, especially the sample at the end:
http://msdn.microsoft.com/en-us/library/system.windows.media.pixelformats.aspx

Your friend is right that the human eye is more sensitive to luminance than to chrominance. However, truncating the blue channel is not correct, since the eye is sensitive to all of R, G, and B (in fact, it is most sensitive to green). The correct way is to convert RGB to YCbCr (you can also convert it to YUV). To compress the data, you can scale down the chrominance components (Cb, Cr): Cb can be scaled down by 50% and Cr to 25%. If you need only a grayscale image, then discard the chrominance components completely.
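For reference, the standard BT.601 full-range RGB -> YCbCr conversion looks like this (the coefficients are the well-known JPEG/JFIF ones; the function itself is only an illustrative sketch):

#include <algorithm>
#include <cstdint>

// BT.601 full-range RGB -> YCbCr. Y carries the luminance; Cb and Cr
// are the chrominance components that can be subsampled or dropped.
void RgbToYCbCr(uint8_t r, uint8_t g, uint8_t b,
                uint8_t& y, uint8_t& cb, uint8_t& cr)
{
    double Y  =  0.299    * r + 0.587    * g + 0.114    * b;
    double Cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128.0;
    double Cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128.0;
    y  = static_cast<uint8_t>(std::min(255.0, std::max(0.0, Y)));
    cb = static_cast<uint8_t>(std::min(255.0, std::max(0.0, Cb)));
    cr = static_cast<uint8_t>(std::min(255.0, std::max(0.0, Cr)));
}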

Since you seem to have only the grayscale and not an original color image, this conversion is not really applicable in your case. If you want to fit 16-bit grayscale into 8 bits, you need to scale it down to 8 bits.
Comments
SolarNigerija 19-Nov-12 18:54pm    
Yes, true ;) I know about WPF; it supports 128-bit floats, 64-bit RGB, ... I even have a LEADTOOLS Medical PACS Imaging license, but I would need a deployment/project license, and all that is a pain in the a$$; it is also too much overhead == too big a final exe (and yes, they are that crazy: I can't even build a library exposing only calls to my own functions, if under the hood I use their SDK, without a new project/deployment license).
So I am doing it the old-fashioned way: pure statically linked MFC and all other libraries (~450 KB so far in the release version, supporting most DICOM datasets with different compression methods, ...). That way I can drop the exe on any PC without any other requirements, licenses, fees, and other things that annoy people ;) At the same time it is a useful tool for me, my colleagues, and other people: it looks similar to ACDSee and can view/edit pure RAW formats + DICOM, and possibly some other digital image formats used in radiology. And I want this app to be free ;)

Conversion to 8 bits is unavoidable if I wish to display on standard monitors. But I am still seeking any solution that preserves more of the original image information in the presentation/preview (not for diagnostic use). The current state of display monitors worldwide is that only ~5% can display more than 1000 (10-bit) levels of gray, though that should soon start to change, so I tend to explore tricks to expand the dynamic range of current monitors as much as I can, if possible. It might be very useful later.

Also, an hour ago I tried using the BT.709 constants to build a conversion->8-bit RGB LUT that could accommodate at least 2048 (11-bit) levels of gray (by shifting 1 or 2 of the RGB components by 1 in the midtones) to gain better dynamics.
But to be honest, I don't see any difference on my monitor compared with a linear 8-bit grayscale conversion; I need to try this on some better monitors to see if there is an actual gain from a human perspective.

In any case, thanks. I love ideas, good or wrong;
just throw them all at me, I always find something useful in them ;)
manoranjan 20-Nov-12 0:28am    
I would be very careful about modifying medical images. Usually, one uses histogram equalization to improve the dynamic range of a low-contrast image. However, understand its pros and cons before you implement it!

You don't need the Leadtools library for this task. You can call the .NET API directly from your MFC app using either C++/CLI or Managed Extensions for C++. If your target is Win 7 (it should be, since > 8-bit is not supported on XP), then .NET is already available and you don't have to redistribute it.

Another option could be OpenGL (it supports 16-bit bit depths). It would be more portable than .NET; however, OpenGL would involve more work.
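A minimal sketch of that OpenGL route, assuming a GL rendering context already exists on the window (on Windows, <windows.h> must be included before <GL/gl.h>); note that GL_LUMINANCE16 is only a request, and the precision that actually reaches the screen still depends on the driver and framebuffer:

#include <windows.h>
#include <GL/gl.h>

// Upload a 16-bit grayscale image as a 16-bit luminance texture.
GLuint Upload16BitGray(const unsigned short* pixels, int w, int h)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 2);   // rows are 2-byte aligned
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE16, w, h, 0,
                 GL_LUMINANCE, GL_UNSIGNED_SHORT, pixels);
    return tex;
}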
To conclude the story.

WPF, DX, or anything else first needs hardware support (graphics card + driver) to display anything beyond 8-bit grayscale (unless we use a software fallback, which we do not).
So, generally speaking, we need to query the hardware for support of bit depths beyond 8-bit grayscale,
instantiate the display with that bit depth (I used DirectX in my case for the entire process),
and then iterate the static LUTs for the selected bit depth during image conversion for display. A sketch of the capability query follows.
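The capability query could look roughly like this with Direct3D 11 (my reconstruction under stated assumptions; the post does not say which DirectX version or pixel format was actually used):

#include <d3d11.h>
#pragma comment(lib, "d3d11.lib")

// Ask the default D3D11 hardware device whether a 16-bit single-channel
// format is usable as a render target / for display scan-out.
bool Supports16BitGrayOutput()
{
    ID3D11Device* dev = nullptr;
    D3D_FEATURE_LEVEL fl;
    if (FAILED(D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr,
                                 0, nullptr, 0, D3D11_SDK_VERSION,
                                 &dev, &fl, nullptr)))
        return false;

    UINT support = 0;
    bool ok = SUCCEEDED(dev->CheckFormatSupport(DXGI_FORMAT_R16_UNORM, &support))
              && (support & D3D11_FORMAT_SUPPORT_RENDER_TARGET) != 0
              && (support & D3D11_FORMAT_SUPPORT_DISPLAY) != 0;
    dev->Release();
    return ok;
}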

Mostly, no one saw any difference in the displayed images.
There was one HP monitor that displayed weird lines when no conversion was used, and slightly fewer when the conversion was used, but that monitor had extremely bad image quality in general. When a high-grade monitor (for example, one for medical use) was used, the displayed image quality was as expected in both cases (whether or not the image was processed for display with the static LUT).

This approach has little or no impact on perceived image quality, so it's not really worth pursuing.
In conclusion: a nice brain exercise and nothing more ;)