|
Well, since I have not got any answers in the C/C++/MFC forum, I hope I can get some help in other discussions. I cannot see anything improper in that, can I?
|
|
|
|
|
You have already been given the answer; reposting in other forums is not likely to change that.
Use the best guess
|
|
|
|
|
I'm writing a paint program in OpenGL 3.0. It works fine so far,
except that when I try to draw a square over another square it is drawn
behind it instead of on top of it. How can I fix this so that
I can draw on top of an image? Why does this happen?
C++
OpenGL3.0
|
|
|
|
|
Enable the depth test so that objects correctly appear in front of or behind one another.
glEnable( GL_DEPTH_TEST );
glDepthFunc( GL_LESS ); // nearer objects are displayed in front: an object at z -3 is drawn on top of one at z -4
If you are creating an app similar to MSPAINT, set up the projection matrix with glOrtho().
glOrtho() is used because it creates an orthographic projection, so objects of the same size render at the same size regardless of their z value.
Give each object a different z value based on its stacking order on the screen:
objects that should appear on top must be nearer to the viewer, i.e. in eye space z = -2 is in front of z = -3.
The render code should look like this.
glDepthRange(0, 1);  // glDepthRange/glClearDepth values are clamped to [0,1]
glClearDepth(1);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMatrixMode( GL_PROJECTION );
glLoadIdentity();
glOrtho(-20, 20, -20, 20, 0, 100);
glEnable( GL_DEPTH_TEST );
glDepthFunc( GL_LESS );
glMatrixMode( GL_MODELVIEW );
glLoadIdentity();
// blue triangle at z = -3 (farther away)
glColor3f(0, 0, 1);
glBegin( GL_TRIANGLES );
glVertex3f( 0, 0, -3 );
glVertex3f( 1, 1, -3 );
glVertex3f( 0, 1, -3 );
glEnd();
// red triangle at z = -2 (nearer, so it is drawn on top)
glColor3f(1, 0, 0);
glBegin( GL_TRIANGLES );
glVertex3f( 0, 1, -2 );
glVertex3f( 1, 1, -2 );
glVertex3f( 1, 0, -2 );
glEnd();
SwapBuffers( m_hDC );
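For a paint program there is also a simpler route worth mentioning: since strokes arrive in order, you can leave the depth test disabled and rely on draw order alone (the painter's algorithm). A minimal sketch of just the ordering step, with a hypothetical Shape record that is not from the original code:

```cpp
#include <algorithm>
#include <vector>

// Hypothetical shape record: only the fields needed to show the idea.
struct Shape {
    int id;
    float z;  // stacking order: larger z (nearer) = drawn later = on top
};

// Painter's algorithm: sort back-to-front, then draw in that order.
// With depth testing disabled, the last shape drawn wins every pixel.
std::vector<int> drawOrder(std::vector<Shape> shapes) {
    std::sort(shapes.begin(), shapes.end(),
              [](const Shape& a, const Shape& b) { return a.z < b.z; });
    std::vector<int> order;
    for (const Shape& s : shapes) order.push_back(s.id);
    return order;
}
```

In the paint-program case you usually do not even need the sort: call glDisable(GL_DEPTH_TEST) and issue the draw calls in stroke order, and each new stroke lands on top of the previous ones.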
|
|
|
|
|
Someone has inquired whether <insert random computer device> will support gestures such as swipe. I've already written the code to reboot the device if you give it the middle finger.
Seriously, I'm looking for articles and descriptions of the underlying technology and approach for implementing a page swipe. I can visualize that I'm working with off-screen images, moving them in a column at a time, but this would seem quite tedious at the Win32 level (which is where I'm working).
Resource suggestions?
thanks
Charlie Gilley
You're going to tell me what I want to know, or I'm going to beat you to death in your own house.
"Where liberty dwells, there is my country." B. Franklin, 1783
“They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759
|
|
|
|
|
What type of swipe? Touching the screen? In mid air?
|
|
|
|
|
There's my ignorance kicking in... what is a swipe?
I've become familiar with a couple of gestures - found a good article on wiki:
http://en.wikipedia.org/wiki/Multi-touch[^]
So, in the context of a gesture, a swipe would be a scroll or flick, as you would expect on your smart phone to move to the next page.
Charlie Gilley
You're going to tell me what I want to know, or I'm going to beat you to death in your own house.
"Where liberty dwells, there is my country." B. Franklin, 1783
“They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759
|
|
|
|
|
I've been working heavily in Perceptual Computing for the last few months. A swipe is a single longer movement in any direction - in the case of PerC, this would be a swipe in mid air.
|
|
|
|
|
Fair enough. But if you do receive this "event", the application responds to it in some fashion. In the case of a smart phone, it might "smoothly scroll the page". And it's at that level that I'm living at the moment. I'm curious how it's generally implemented... the smooth scrolling, or page animation if you prefer.
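One common way to drive such an animation at the Win32 level is a timer: each tick, compute an eased pixel offset and BitBlt the off-screen page bitmap at that offset. A sketch of just the offset math (the function names and frame counts are hypothetical, not from any framework):

```cpp
#include <cmath>

// Ease-out curve: fast at the start of the swipe, slowing to a stop.
// t is normalized time in [0,1]; returns normalized progress in [0,1].
double easeOut(double t) {
    return 1.0 - (1.0 - t) * (1.0 - t);
}

// Pixel offset of the incoming page at animation frame `frame` out of
// `totalFrames`, for a page `pageWidth` pixels wide. A WM_TIMER handler
// would BitBlt the off-screen page bitmap shifted by this offset.
int pageOffset(int frame, int totalFrames, int pageWidth) {
    if (frame >= totalFrames) return pageWidth;
    double t = static_cast<double>(frame) / totalFrames;
    return static_cast<int>(easeOut(t) * pageWidth + 0.5);
}
```

The blit itself is one BitBlt per frame from the off-screen DC, so the per-tick cost is dominated by the copy, not the arithmetic.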
Charlie Gilley
You're going to tell me what I want to know, or I'm going to beat you to death in your own house.
"Where liberty dwells, there is my country." B. Franklin, 1783
“They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759
|
|
|
|
|
Ah, I see what you're getting at. Well, it certainly depends on the development environment you are using. I use WPF, which provides inbuilt access to animation - I would simply hook into that. If you're using Win32, then you're going to have a lot more work because you're going to be hooked into GDI+.
|
|
|
|
|
I actually think I'll have to go below Win32. I have some algorithms worked up, and we'll see later this morning if the processor can keep up with what I need to do.
I suspect that most implementations have hardware acceleration of some sort to aid the process.
Charlie Gilley
You're going to tell me what I want to know, or I'm going to beat you to death in your own house.
"Where liberty dwells, there is my country." B. Franklin, 1783
“They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759
|
|
|
|
|
They do, and that brings its own problems.
|
|
|
|
|
Hi all,
I want to import a 3D model into a C# Windows Forms application
and rotate it.
Would you help me, please!
Thanks
|
|
|
|
|
Please don't crosspost[^].
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
|
Hi all.
I need to draw a thick line with the SetPixel API. I have searched a lot, but can't find any open-source algorithm (the Bresenham algorithm draws a one-pixel line; Murphy's algorithm is for lines more than 3 pixels thick, ...).
Please help me if you can.
Note that I cannot use CPen or the other drawing facilities in Windows; I have to use SetPixel.
Excuse my bad English.
Thanks.
|
|
|
|
|
Hello,
you can draw a filled rectangle along the line and a filled circle at each end.
Uwe
|
|
|
|
|
Did you read that he has to build it on his own with SetPixel? So the problem is precisely that he can't draw a rectangle directly.
------------------------------
Author of Primary ROleplaying SysTem
How do I take my coffee? Black as midnight on a moonless night.
War doesn't determine who's right. War determines who's left.
|
|
|
|
|
Some years ago (nearly ten) I adapted Xiaolin Wu's line algorithm for that. It's an antialiasing algorithm, but with small changes you can use it.
You can even do it on your own: when you draw, you have a direction corresponding to an angle. You can turn 90° and paint the neighboring points.
------------------------------
Author of Primary ROleplaying SysTem
How do I take my coffee? Black as midnight on a moonless night.
War doesn't determine who's right. War determines who's left.
|
|
|
|
|
Sounds like a school assignment to me...
It's been a few years (20+) since I had to do this, but I seem to remember the best way to achieve this was to use the standard Bresenham algorithm, but to set more than one pixel at each point. Basically, for a three-pixel-wide line, set the 8 pixels around each point as well as the point itself... Some optimisation is possible.
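That suggestion can be sketched in plain C++: standard Bresenham, but stamping a small square of pixels around every point the line visits. `setPixel(x, y)` here is a stand-in for the Win32 `SetPixel(hdc, x, y, color)` call, so the sketch stays testable without a device context:

```cpp
#include <cstdlib>
#include <functional>

// Bresenham line, thickened by stamping a (2*halfWidth+1)-pixel square
// around each point the line visits. halfWidth = 1 gives a 3-pixel line.
void thickLine(int x0, int y0, int x1, int y1, int halfWidth,
               const std::function<void(int, int)>& setPixel) {
    int dx = std::abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -std::abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;
    for (;;) {
        // Stamp the square centred on the current point.
        for (int ox = -halfWidth; ox <= halfWidth; ++ox)
            for (int oy = -halfWidth; oy <= halfWidth; ++oy)
                setPixel(x0 + ox, y0 + oy);
        if (x0 == x1 && y0 == y1) break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }
        if (e2 <= dx) { err += dx; y0 += sy; }
    }
}
```

The obvious optimisation mentioned above is to avoid re-setting pixels the previous stamp already covered, e.g. by stamping only the leading edge of the square in the direction of travel.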
|
|
|
|
|
|
Background
I have an application that displays large (2500 × 3000) 12-bit grayscale images within an image control. I am using the Gray16 pixel format, and I scale the data to 16 bits by simple bit-shifting. The images have low contrast, so I implemented functionality to adjust brightness and contrast. The functionality was based on setting the image's Effect to a pixel shader that accepts brightness and contrast parameters. Obviously, this implementation is very efficient and avoids moving large amounts of data around in memory.
Problem
When adjusting the contrast to a high setting, the displayed image exhibits unacceptable contouring, as though the bit-depth of the image has been reduced to 5 or 6. As a baseline test, I implemented the same functionality without a pixel shader - I applied the contrast and brightness settings to the image, reduced the bit-depth to 8 bits and displayed the image with the Gray8 pixel format (one that I have a great deal of familiarity with). The latter implementation showed little to no contouring.
Question
Does anyone know why applying a pixel shader would reduce the apparent bit depth of an image? Does the pixel shader's HLSL code operate on reduced bit-depth data (e.g. eight bits or less), and is that a function of the hardware/drivers or the OS?
All data in the pixel shader are floats scaled from 0 to 1, so the quantization level is not apparent from the shader code itself (at least to my knowledge).
Thanks in advance for any feedback
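The symptoms described are consistent with the shader effectively sampling 8-bit-per-channel data somewhere in the pipeline: a strong contrast stretch then spreads the few surviving levels apart into visible bands. A quick simulation of that effect (plain C++, not WPF or HLSL; the input window and contrast factor are arbitrary illustration values):

```cpp
#include <algorithm>
#include <cmath>
#include <set>

// Simulate quantizing a normalized sample to `bits` of precision,
// as an 8-bit intermediate surface would.
double quantize(double v, int bits) {
    double levels = static_cast<double>((1 << bits) - 1);
    return std::round(v * levels) / levels;
}

// Quantize, apply a linear contrast stretch around 0.5, and count how
// many distinct output levels survive across a narrow input window.
// Fewer surviving levels means coarser steps, i.e. visible contouring.
int distinctLevels(double lo, double hi, int samples, int bits, double contrast) {
    std::set<long> seen;
    for (int i = 0; i < samples; ++i) {
        double v = lo + (hi - lo) * i / (samples - 1);
        double q = quantize(v, bits);
        double c = std::clamp((q - 0.5) * contrast + 0.5, 0.0, 1.0);
        seen.insert(std::lround(c * 65535.0));  // bin into 16-bit display levels
    }
    return static_cast<int>(seen.size());
}
```

Running this with bits = 8 versus bits = 16 over the same window shows the 8-bit path collapsing to a few dozen output levels, which is exactly the 5-to-6-bit look described above.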
|
|
|
|
|
|
How can I change the hue and saturation of an incoming stream while capturing in DirectShow?
Thank you in advance.
|
|
|
|
|
Based on what I've read, I would look to write a DirectX Media Object which I could host in the DirectShow graph and use that to filter the stream. You can find details here[^].
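For reference, the per-pixel math such a filter or DMO would apply to each frame is an RGB→HSV round trip: rotate the hue angle, scale the saturation, convert back. A sketch of the pixel transform only (components assumed normalized to [0,1]; none of the DirectShow plumbing is shown):

```cpp
#include <algorithm>
#include <cmath>

struct RGB { double r, g, b; };  // components in [0,1]

// Rotate hue by hueShiftDeg degrees and scale saturation by satScale,
// via an RGB -> HSV -> RGB round trip on a single pixel.
RGB adjustHueSat(RGB in, double hueShiftDeg, double satScale) {
    double mx = std::max({in.r, in.g, in.b});
    double mn = std::min({in.r, in.g, in.b});
    double d = mx - mn;
    // RGB -> HSV
    double h = 0.0;
    if (d > 0.0) {
        if (mx == in.r)      h = std::fmod((in.g - in.b) / d, 6.0);
        else if (mx == in.g) h = (in.b - in.r) / d + 2.0;
        else                 h = (in.r - in.g) / d + 4.0;
        h *= 60.0;
        if (h < 0.0) h += 360.0;
    }
    double s = (mx > 0.0) ? d / mx : 0.0;
    double v = mx;
    // Apply the adjustments.
    h = std::fmod(h + hueShiftDeg + 360.0, 360.0);
    s = std::clamp(s * satScale, 0.0, 1.0);
    // HSV -> RGB
    double c = v * s;
    double x = c * (1.0 - std::fabs(std::fmod(h / 60.0, 2.0) - 1.0));
    double m = v - c;
    double r = 0, g = 0, b = 0;
    if      (h <  60) { r = c; g = x; }
    else if (h < 120) { r = x; g = c; }
    else if (h < 180) { g = c; b = x; }
    else if (h < 240) { g = x; b = c; }
    else if (h < 300) { r = x; b = c; }
    else              { r = c; b = x; }
    return {r + m, g + m, b + m};
}
```

In a real transform filter this runs per pixel on the raw frame buffer, so for video rates you would work on the native YUV format directly or vectorize; the RGB version above just shows the arithmetic.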
|
|
|
|