|
Source is now available to download!
|
Hi,
I am trying to develop a CAD-like application using OpenGL and wxWidgets. I have reached a stage where I can draw a circle, a square, a polygon, load an image, and so on, and I can also zoom and pan.
As a next step, I want to move, scale, and rotate objects. To do this I have written code to select and pick the object that lies under the mouse pointer. Now, what do I have to do to move the picked object? I tried finding the difference between the old and new mouse positions and recalculating the points of the object:
C.x += dx;
C.y += dy;
where dx and dy are the differences between the old and new x and y values respectively, and C.x and C.y are the object's original coordinates.
This shows some movement, but not as desired: the object does not track the mouse. For a small mouse movement it jumps by a huge displacement, and very fast. What can be done to make a selected object move along with the mouse pointer?
Also, I am unable to zoom with reference to the mouse pointer; it zooms with reference to the origin (0,0,0) only. Please help me.
Thanks in advance..
|
I have a program running on PCs with Windows 2000 or later that needs to do a per-pixel operation on a Windows bitmap. Currently I'm using a tight loop that does little more than a pointer increment followed by a quick add operation on each pixel over the entire bitmap. This is done in real time, so I need it fast. On a 320 x 240 image that ends up being 230,400 addition operations, since these are RGB bitmaps (320 x 240 x 3). This is being done in real time on an image stream that is pumping out 25 frames per second. At that resolution it's fast enough, but at 640 x 480 I have to start dropping frames significantly to keep up. I am not displaying the modified bitmap to the screen at any time. Instead I am shipping it off to a remote location over the Internet for display on the destination system.
I was wondering if I could push this operation to the graphics accelerator using one of the Windows graphics APIs like DirectDraw, etc. Or do PC graphics accelerators only help with operations that are drawn to the screen (local PC video memory)? I assume that, if it's possible, I'd need to pump the bitmap to the graphics accelerator, then know how to do a global operation on each pixel, and then copy it back to local RAM? If it is possible to use the graphics accelerator to help with per-pixel operations on an off-screen bitmap, what are the pros and cons?
Finally, is there a way to push JPEG decompression and compression operations onto the Graphics Accelerator?
If it is possible to do these things, I would like to know where there is a quick, easy-to-dive-into sample. I don't have any need for shading or rotations or any complex graphics operations like that, so I would like to avoid wading through a ton of reading just to learn how to do a simple global pixel-operation task.
Thanks in advance.
|
It is well known that calling GetPixel and SetPixel on a bitmap, addressing each pixel separately, is an extremely time-consuming graphics operation. I recently ran into a thread at the MASM32 forum that tests a number of typical graphics operations and determines the required clock cycles: Graphics, Memory DIB[^]. Skip the assembly code and just check out the listings of clock cycles for the GDI operations... I think you'll be amazed.
The DirectX SDK comes with a utility that displays the capabilities of your graphics accelerator. Basically, what you want is GetDeviceCaps[^]. If your graphics accelerator supports pixel shaders, perhaps that would be faster. But this approach is very time-consuming and error-prone.
BitBlt is much faster... but I think what you want is to have the bitmap already altered before you need to display it. Undoubtedly, that occurred to you. I'm assuming that you cannot access the bitmap before your application loads or starts processing.
|
Hello Baltoro,
I am not using GetPixel/SetPixel. Instead I am using the Bitmap's scanline property to get a pointer to that memory area and simply walking a pointer over that area in a tight loop.
I do have access to the bitmap; in fact I call Delphi's JPEG code to decompress, with some optimizations I added to avoid needless memory reallocations between JPEG frames. I have since learned that there is an extension called DXVA (DirectX Video Acceleration), but that it is really tough to work with. However, I was also told that there probably already are JPEG compressors on my system that make use of DXVA. It might end up being a matter of learning how (if possible) to use DirectX to utilize those hardware-accelerated compressors, but I currently have no idea how to go about doing that.
Thanks,
Robert
|
Hi all,
I've been developing a mosaic system for two calibrated cameras using VS2008 and OpenCV, but my algorithm works badly.
Here is my program routine; any advice or help would be greatly appreciated!
1. I calibrate the two cameras, which view the same area, and undistort the input images,
2. then I use cvFindChessBoardCornerGuesses and cvFindCornerSubPix to get corresponding points in the two undistorted input images,
3. then I process the corresponding points with cvFindHomography to get the homography between the two cameras,
4. and finally I use cvWarpPerspective to warp the right camera's image plane to the left camera's image plane.
But the warped image is extremely different from the ideal one.
What's wrong with it?
Best Wishes!
|
How far apart are your cameras? And when you say that the right image warped to the left image plane is far from "ideal", do you know from theory what the ideal image should be, or is it just what you expect it should be?
Have you tried the OpenCV group on Yahoo Groups? You might be more likely to get a good answer there than here.
You measure democracy by the freedom it gives its dissidents, not the freedom it gives its assimilated conformists. I'm a proud denizen of the Real Soapbox[^] ACCEPT NO SUBSTITUTES!!!
|
Hey Tim, thanks for your reply.
My right camera is only rotated about 15 degrees to the right of the left camera.
I know the ideal warp result.
I've posted a message on the OpenCV group:
http://tech.groups.yahoo.com/group/OpenCV/message/64370
You're welcome to join the discussion.
Do you know how to get the homography between two cameras from the camera matrices or the extrinsic parameters?
Thanks!
|
Do you have the book "Learning OpenCV"[^]? I noticed a section on homography in it. In my exploration of OpenCV I haven't gotten quite that far; I've been spending a lot of time getting a framework in place so I can quickly and easily write test applications. That, and fighting the lack of documentation and weirdness in OpenCV.
|
I've got the book "Learning OpenCV", but it hasn't solved my problem.
So how is your framework going?
|
The one thing I got out of the homography section was that he suggests using at least 10 images when using the chessboard technique to get a good matrix. That's more than I expected.
My framework has been one step forward, then two steps back. I had to get a new computer last year and, to get it quickly, I took one with Vista preinstalled. OpenCV and Vista don't get along well. I've finally started gaining traction. After a false start, I settled on wxWidgets for the GUI and have a lot of the basics of OpenCV wrapped in C++ to hide some of the ugliness (like the relationship between CvMatrix and IplImage). Right now I'm working through object detection and obstacle avoidance for mobile robots.
|
Object detection and obstacle avoidance for mobile robots?
That's a really interesting thing. What development library are you using: OpenCV, or another robot-vision library on SourceForge.net?
|
Right now I'm using OpenCV, particularly for acquiring the image. There's a lot there for testing ideas, but I sometimes wonder if it was really worth it. I suspect that for any production system I'll want to recode the algorithms to remove the generality that OpenCV brings and do specialized versions that work with just the format of the camera I'll be using. I've looked at a few other libraries, but so far OpenCV has the best support, and that's minimal. I do have a friend from Homebrew Robotics here who's interning at Willow Garage, so I could get access to Gary Bradski if I really needed to. The last few days I've been looking at what it will take to visually detect the target cones for a RoboMagellan robot. How are you planning to use your stereo rig? Another friend has a two-wheel balancing robot that has a stereo vision setup running into a BeagleBoard.
|
My two cameras are fixed relative to each other, so I want to program a reliable algorithm based on something stable, like the camera parameters.
In the past, I've done mosaic algorithms based on SIFT, SURF, KLT, and Harris corners, but the mosaic results of these feature-point-matching algorithms are very unstable: sometimes good, sometimes bad.
I've seen some robot stereo vision projects before; they are mostly based on feature-point matching.
|
Getting something robust enough to reliably work over a wide range of situations seems to be one of the big stumbling blocks in machine vision. Laser scanners seem to be more widely successful for mobile robots today since they provide easy range information but we're talking $$$. While a number of the DARPA Urban Challenge teams investigated vision systems, I don't think any actually used them for navigation. Willow Garage is using vision but they still have laser scanners on their robot.
|
Hey Tim,
Today I got another project from my boss:
a car safety system based on cameras.
That means fixing some cameras (two or more) on the car, to protect the car from objects getting too close.
It reminded me of you; I think this project is similar to your robot obstacle-avoidance project.
But my hardware is a DM6467 or DM6446 from TI (Texas Instruments).
Have you developed the software for your obstacle-avoiding robot yet?
|
That sounds similar to the OMAP35x they use in the BeagleBoard, except the OMAP has an ARM Cortex-A8. I haven't gotten that far along yet. I have a level with a laser line generator that I want to try to detect with the camera, using the parallax shift to detect objects and get a range estimate, similar to this article[^], although I envisioned the laser and camera positions reversed so the line would always be in view on the floor. It probably wouldn't work too well outside. I've been able to detect the spot from a laser pointer fairly reliably with a webcam but haven't gotten around to trying the line yet. I might have to spring for an optical bandpass filter; I'll try some red plastic first.
Are you planning on trying to extract depth with a single image or are you going to use your stereo rig?
|
Apologies for the late reply; I've been away on a business trip these days.
It sounds like your robot project is on the right track. Congratulations!
I now use a stereo rig to extract depth.
What's your e-mail address?
I'll send you my project solution and project report.
Your advice is welcome!
|
Is anyone aware of any existing work to find the centerline of a font boundary?
How about a suggested method?
For example, the letter V would then be just 2 lines instead of 7.
This will be used for engraving text.
I have tried exploding the glyph boundary into lines and then creating perpendicular lines from their midpoints, then trimming to the nearest intersection points. The results are not great.
Thanks,
Jason
|
You could find the top-left and bottom-right points of a bounding box and find the midpoints.
If you are referring to something more complex, please give more detail.
|
Hi guys,
Thanks to the colleagues who have answered my previous questions. My new question is a modification of my first one.
How can I read from and display two USB webcams using MATLAB? I did it for just one USB webcam, and changed the name of the variable for the second webcam, but it didn't work.
Thank you in advance.
Sarkuzi
|
I don't know MATLAB well enough to answer this off the top of my head, but if you show us the code that worked for the first webcam and the code that isn't working, we might be able to offer ideas on what you might be doing wrong.
PS: Also list any error messages (if any) that you are getting.
|
Hi there,
I'm working on a project to find depth using a 3D stereo camera.
I have been advised to use Ch Professional 6.1 with two USB cameras. Can anyone please point me to, or send me a complete code that reads from these two cameras and gives me the stereo picture? I just want to concentrate on my work: finding depth.
Many thanks in advance
Sarkuzi
|
sarkuzi wrote: send me a complete code
I bid $10,000.
Henry Minute
Do not read medical books! You could die of a misprint. - Mark Twain
Girl: (staring) "Why do you need an icy cucumber?"
“I want to report a fraud. The government is lying to us all.”
|
I raise you $5,000.
Luc Pattyn [Forum Guidelines] [My Articles]
DISCLAIMER: this message may have been modified by others; it may no longer reflect what I intended, and may contain bad advice; use at your own risk and with extreme care.
|