|
I want to display several bitmaps that have different color keys. Suppose I have 3 bitmaps with black, red and gray color keys respectively. I have to detect from the bitmap what its color key is; I think pixel (0,0) holds the color key. How can I do this? I am new to DirectX programming. I have already used the CreateTextureFromFileEx function, but in this function I have to supply a fixed color key, and I don't know the actual color key in advance. I have to decide it from the bitmap file. How is this possible? Please help me.
|
|
|
|
|
What you are trying to do is simple, but DirectX does not have an appropriate function for you to call. The simplest approach would be to just read the pixel from memory yourself. Have a look at the MSDN GDI Bitmap Reference[^]
The concept is to load the bitmap into memory (it should exist as one contiguous block of memory), by creating a DIB section (see, Create DIB Section[^]), or a similar technique. Then you locate the pointer to the actual bitmap bits (this is provided in the header structure), and read the value at the appropriate offset.
If all of this is new territory for you, you should probably begin with, Bitmap Creation[^], or, Device Independent Bitmaps[^]
|
|
|
|
|
This is jia. I want to have C++ code for a human-like robot.
Features of the Robot character
Your 3D robot character should have all of the following features and body parts:
1. Torso: should have at least two parts: upper torso and lower torso.
2. Left and Right leg: each leg should have at least two parts, upper leg (above the knee) and lower leg.
3. Left and Right arm: each arm should have at least two parts, upper arm (above the elbow) and lower arm.
4. Left and Right Feet
5. Left and Right Hand
6. Neck
7. Head
To make your Robot character look good, you should include the following features for your Robot character:
1. Each body part should have its own OpenGL lighting material property.
2. Each body part should have its own texture.
Features of the program
Your program must contain all of the following features:
1. The scene should also include a floor and a visualized 3D coordinate system.
2. You should be able to use the mouse to change the view angle.
|
|
|
|
|
Do your own homework, and try reading the forum rules before you waste our time.
Harvey Saayman - South Africa
Junior Developer
.Net, C#, SQL
you.suck = (you.passion != Programming)
|
|
|
|
|
I could find topics on applying skins to applications developed in .NET Framework 3.0 and higher using WPF, but I need to apply a skin in .NET Framework 2.0 using C#. Can anyone give me a hint or some sample code on how to apply a skin?
Your help will be really appreciated.
Thanks in advance.
|
|
|
|
|
|
Mr. Adams,
First of all, thank you for the quick reply. I will definitely go through your articles and try to learn as much as I can. I am hopeful that they are going to be a great help in trying to skin a desktop application.
I also hope it helps.
Regards Sudyp
|
|
|
|
|
Mr. Adams,
I went through the article in the link you posted in reply to my query. The article is great; maybe it will be helpful for me in the future.
However, I could only see material about WPF and its use in data visualization.
Currently I need to skin an application in .NET Framework 2.0, so I guess I need to keep looking through other articles for help on it.
Thank you
Regards Sudyp
|
|
|
|
|
Go to http://www.skinengine.com
|
|
|
|
|
I'm trying to figure out some memory issues with wglUseFontBitmaps, with "hardware acceleration" set to none in the display settings:
In our legacy software, when using wglUseFontBitmaps, a lot of virtual memory (as shown in the Task Manager) is being eaten up (600 MB!).
When doing the same thing in a test program, I only get around 40 MB for the same fonts (size, pitch, weight, ...).
Are there factors that can affect the virtual memory footprint of my application with regard to OpenGL bitmap fonts?
Here is our OpenGL pixel format (simple, basic and straightforward):
bool bErrFound = false;
PIXELFORMATDESCRIPTOR pixelDesc;
pixelDesc.nSize = sizeof(PIXELFORMATDESCRIPTOR);
pixelDesc.nVersion = 1;
pixelDesc.dwFlags = dwFlags;
pixelDesc.iPixelType = PFD_TYPE_RGBA;
pixelDesc.cColorBits = 24;
pixelDesc.cRedBits = 0;
pixelDesc.cRedShift = 0;
pixelDesc.cGreenBits = 0;
pixelDesc.cGreenShift = 0;
pixelDesc.cBlueBits = 0;
pixelDesc.cBlueShift = 0;
pixelDesc.cAlphaBits = 0;
pixelDesc.cAlphaShift = 0;
pixelDesc.cAccumBits = 0;
pixelDesc.cAccumRedBits = 0;
pixelDesc.cAccumGreenBits = 0;
pixelDesc.cAccumBlueBits = 0;
pixelDesc.cAccumAlphaBits = 0;
pixelDesc.cDepthBits = 16;
pixelDesc.cStencilBits = 0;
pixelDesc.cAuxBuffers = 0;
pixelDesc.iLayerType = PFD_MAIN_PLANE;
pixelDesc.bReserved = 0;
pixelDesc.dwLayerMask = 0;
pixelDesc.dwVisibleMask = 0;
pixelDesc.dwDamageMask = 0;
int iPixelIndex = ::ChoosePixelFormat( hDC, &pixelDesc );
if( iPixelIndex == 0 )
{
iPixelIndex = 1;
}
if( ::DescribePixelFormat( hDC, iPixelIndex, sizeof(PIXELFORMATDESCRIPTOR), &pixelDesc) != 0 )
{
if( !::SetPixelFormat( hDC, iPixelIndex, &pixelDesc ) )
bErrFound = true;
}
else
bErrFound = true;
and the fonts are created like this (one of 6 fonts):
strcpy(logfont.lfFaceName, "Arial");
logfont.lfHeight = 24;
logfont.lfWidth = 0;
logfont.lfEscapement = 0;
logfont.lfOrientation = logfont.lfEscapement;
logfont.lfWeight = FW_NORMAL;
logfont.lfItalic = FALSE;
logfont.lfUnderline = FALSE;
logfont.lfStrikeOut = FALSE;
logfont.lfCharSet = ANSI_CHARSET;
logfont.lfOutPrecision = OUT_DEFAULT_PRECIS;
logfont.lfClipPrecision = CLIP_DEFAULT_PRECIS;
logfont.lfQuality = DEFAULT_QUALITY;
logfont.lfPitchAndFamily = FF_DONTCARE | DEFAULT_PITCH;
hBigFont = CreateFontIndirect(&logfont);
if ( hBigFont == NULL)
{
glLargeFontBitmapListBase = 0;
}
else
{
HFONT oldHFont = (HFONT) SelectObject(hDC, hBigFont);
glLargeFontBitmapListBase = ::glGenLists(224);
IW_ASSERT(IsNoGLError());
if ( glLargeFontBitmapListBase)
{
::wglUseFontBitmaps(hDC, 32, 224, glLargeFontBitmapListBase);
}
else
{
}
SelectObject(hDC, oldHFont );
}
Thanks for any tips, hints or anything I could have forgotten.
Max.
|
|
|
|
|
Maximilien wrote: Thanks for any tips, hints or anything I could have forgotten.
Your best bet for fonts is to build a texture font. Bitmap fonts get swapped between video memory and system memory as needed, which increases your memory footprint and, ironically, prevents higher frame rates. Your 3D card is designed for texture mapping, not for bit-blitting.
|
|
|
|
|
El Corazon wrote: texture font
Thanks.
Yeah, but I would need to have bitmaps for all the fonts I need in my application; unfortunately, not a good short-term solution.
|
|
|
|
|
I need to add an animated image to a smart device application. I use a PictureBox control for that, but the GIF image is not animated. Please help me.
|
|
|
|
|
Edit: I've only just spotted that you cross-posted to the Mobile forum, so "smart" means "on a smartphone" instead of "very pretty". I have no idea whether the article I pointed you to works on whatever version of Windows Mobile you are using, but it should be a good starting point. The 2nd solution of using an AVI and the Animation control looks more attractive, as it will use preexisting code and be better for your RAM. - Iain.
Dushan123 wrote: I use picture box controler for that
That's where you're going wrong. The picture control purely shows a picture. You have a few choices. You could write a timer routine and set the picture control to show successive frames of your GIF - probably not easy, as I remember the format...
Or you could use CAnimateCtrl, which is an MFC wrapper around the Animation common control. But this will only work with AVIs.
Or you could do what I did.
"Hmm, a control that shows gif files...".
So go to your favourite programmer's website that has articles, and type in "gif control" into the search box, and look at the second entry. (Hint: It's an article called "GIF Animation Control"). It has good reviews, looks well written and simple to follow.
Good luck!
Iain.
modified on Friday, August 1, 2008 6:14 AM
|
|
|
|
|
I have a series of incoming byte arrays of unknown size (anywhere from 1667 to 51200 elements; once the size is set it should stay steady for long periods, but it may change). The value in each element is a shade of grey. Currently the bitmap sits in a fixed-size PictureBox, so the bitmap has a maximum height of 900, and each incoming array is decimated to 900 elements for display. The whole array (or as much as is left after decimating) needs to be visible; there is no zooming. It is a waterfall display, so once the bitmap width is full, each incoming array causes the oldest column to "disappear". Is there a way (something other than a bitmap or PictureBox, perhaps) to reduce or eliminate the decimation without increasing the size taken up on the screen?
Thanks Jim
|
|
|
|
|
If you have more pixels in the bitmap than will fit in the area you want to render them, then how can you render the pixels without some kind of decimation?
Mark
Mark Salsbery
Microsoft MVP - Visual C++
|
|
|
|
|
My hope is (was?) that there is some method other than a bitmap I can use to avoid or reduce the decimation. Side note: the bitmap appears to be hard-coded to a max of 32K.
Yes, if there are more pixels than the bitmap will hold there will be some kind of decimation.
Jim
this thing looks like it was written by an epileptic ferret
Dave Kreskowiak
|
|
|
|
|
32K bitmap dimensions are a limit of GDI. You may want to take a look at GDI+[^] instead of GDI. GDI+ works with larger bitmap dimensions and has a variety of built-in interpolation modes for stretching bitmaps.
Mark
Mark Salsbery
Microsoft MVP - Visual C++
|
|
|
|
|
What language are you coding in?
Mark
Mark Salsbery
Microsoft MVP - Visual C++
|
|
|
|
|
I am coding with C# in VS2005 (may be able to get access to VS2008).
I will take a look at GDI+, thanks.
Jim
this thing looks like it was written by an epileptic ferret
Dave Kreskowiak
|
|
|
|
|
jimwawar wrote: I am coding with C# in VS2005
In that case, look at the System.Drawing namespace - classes like Graphics and Bitmap. These use GDI+ internally.
Mark
Mark Salsbery
Microsoft MVP - Visual C++
|
|
|
|
|
Thanks Mark.
this thing looks like it was written by an epileptic ferret
Dave Kreskowiak
|
|
|
|
|
I'm trying to call glGenLists and it fails with the GL_INVALID_OPERATION error.
The error description (from MSDN) says that glGenLists was called between a glBegin and its matching glEnd.
I have many glBegin/glEnd pairs in my program; is there a way to:
- know where that call happened (i.e. after which glBegin), if that is of any use?
- or to synchronize things so that glGenLists does not fall between the two calls?
--------------
Solution :
I just needed to put my code at the beginning of my view's OnDraw, guarded by a boolean that flags the font change.
For example (pseudo-ish code):
OnChangeFont( int iSize )
{
m_bChangeFont = true;
m_iFontSize = iSize;
}
OnDraw(CDC* pDC)
{
if ( m_bChangeFont)
{
DoTheFontChange();
m_bChangeFont = false;
}
DrawTheScene();
}
Thanks.
Max
Last modified: 34 minutes after originally posted --
|
|
|
|
|
Hi,
I have a drawing routine in a class, two different object instances are alive:
void MyClass::DrawThumb(HDC hdc, RECT rc)
{
static Gdiplus::Bitmap bmThumb(m_hResource, MAKEINTRESOURCE(IDB_THUMB));
Gdiplus::Graphics gr(hdc);
gr.DrawImage(&bmThumb, rc.left, rc.top);
}
This compiles (VS2008) and runs OK under Vista, but crashes on some Vista systems when the final destruction happens, i.e. at the end of _crtMain().
What is the problem?
Thanks,
AR
|
|
|
|
|
Why is drawing code getting called during destruction?
Mark
Mark Salsbery
Microsoft MVP - Visual C++
|
|
|
|