|
In a previous life, we had one system (as in "app", but it was mainframes not PCs). Three copies - dev, test, prod. (Oh, and test was hot standby for prod. Testers knew that their system could disappear at the drop of a hat if prod hardware chucked a wobbly.)
All worked fine until a second system was brought in (which of course had to interface with the first one). After a while, it became obvious that we needed another test copy of each system. Basically, to test a change for system A (which is going in as an isolated change, not a big-bang upgrade), you need a clone of system B prod. This is different from B test, where the B testers are playing with new stuff which will go into B prod down the track. We wound up combining that integration testing function with user training, which wasn't ideal but did save the odd megabuck's worth of big iron.
Just my two bob.
Peter
Software rusts. Simon Stephenson, ca 1994.
|
|
|
|
|
I'm not even sure if I can ask this question properly (shows how unclear I am about it myself).
So I developed a small part of a rather complex piece of software. I am to run a system test on this software (the whole thing) to exercise its error-handling procedures.
But it's not going so smoothly.
It's all right for those errors that I can actually cause (e.g. running the software without some essential hardware).
But of course I can't generate errors such as "Hardware broken" or "Hardware is connected but not responding", as provoking them for real risks doing harm to the hardware.
So I need a cheat.
Putting breakpoints in the software and overwriting values is not an option, because this is a system test.
So I duplicated a small part of my software and put an error-testing mechanism into the second copy (let's call it the tester version). The software launches with one or the other of these versions depending on a registry value I set.
The problem is that I kept the duplicated portion of the code small, which I thought (and still think) is a good idea, but it means that the dummy errors can only be emitted from one place.
In reality there are many paths errors can take. Depending on the path, the final output to the user can be different.
I want to map the various error paths to help decide how best to generate the dummy errors.
It appears I need to do something like "Generate error A as if function B caused it", "Generate error A as if function C caused it", etc.
So I need a mapping from error A to functions B and C, and so on.
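Roughly, I imagine something like the following (just a sketch in C++; the error and function names are made up):

#include <map>
#include <string>
#include <utility>

// Which error to fake, and which function it should appear to originate from.
enum class ErrorId { HardwareBroken, NotResponding };
using InjectionKey = std::pair<ErrorId, std::string>;

// Table of injection points; the tester version flips entries to true.
std::map<InjectionKey, bool> g_injectionTable = {
    {{ErrorId::HardwareBroken, "OpenDevice"}, false},
    {{ErrorId::HardwareBroken, "ReadSensor"}, false},
    {{ErrorId::NotResponding,  "ReadSensor"}, false},
};

// Asked at each potential error origin: should we pretend the error happened here?
bool ShouldInject(ErrorId error, const std::string& origin)
{
    auto it = g_injectionTable.find({error, origin});
    return it != g_injectionTable.end() && it->second;
}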
Is there a clever way of doing this?
Or maybe a better question would be, what is the best way to run a system test on error handling functions?
Thanks for any input.
Almost, but not quite, entirely unlike... me...
|
|
|
|
|
There is no general-purpose, easy way to do everything.
It is possible, but difficult, to use code-insertion techniques to simulate any error. At the most powerful, and most complicated, end you actually modify the code at run time to force an error.
Some people suggest adding interfaces that are outside those required by the design and are put in place solely to support testing. The problem is that this doesn't solve everything, and it might require a lot of interfaces.
One can also be creative about testing. For example, if I need to test connectivity problems I can use a system call, in the test code, to drop my IP (of course, I'd better restore it as well). Or I can stop SQL Server programmatically when doing database error tests (again, making sure to restart it).
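To make the interface idea concrete: a minimal fault-injection hook, switched per call site from outside the program (the environment-variable name here is invented), might look like this:

#include <cstdlib>
#include <cstring>
#include <stdexcept>

// Test-only interface: error-prone call sites ask whether to fake a failure.
// Controlled from outside the program, so the release code path is untouched.
bool FaultRequested(const char* site)
{
    const char* faults = std::getenv("TEST_FAULT_SITES"); // e.g. "OpenDevice,ReadSensor"
    return faults != nullptr && std::strstr(faults, site) != nullptr;
}

int ReadSensor()
{
    if (FaultRequested("ReadSensor"))
        throw std::runtime_error("Hardware is connected but not responding"); // injected fault
    // ... real hardware access goes here ...
    return 42;
}

The same idea extends to the environment tricks above: std::system("net stop MSSQLSERVER") from test code can simulate a database outage ("net start" afterwards to restore it).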
|
|
|
|
|
What is the best method for testing an application that uses Windows Forms and accesses a database, especially the UI part?
|
|
|
|
|
A human who knows how to operate your application and who knows what behavior would be expected.
I'd recommend AutoHotKey if you want to create automation scripts to test your application; it's small, free, and uses almost no resources at all. Keep in mind that a script merely clicks where you tell it to; it will not verify anything else.
I are Troll
|
|
|
|
|
I recommend a human who doesn't know your application, so that he can test as he wants, under different conditions.
|
|
|
|
|
|
Hmmm. That's not always the best way. How, for instance, do you tell whether a button is meant to be enabled only under certain conditions? In-depth knowledge of what the application is supposed to do is invaluable, and should not be ignored.
|
|
|
|
|
Check out the MVP pattern. Of course, you need to implement the pattern in your code before you can create the unit tests, so it may not be a cost-effective option for legacy code.
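The core of the pattern is small; here is a bare-bones sketch (C++-flavoured, names hypothetical) of why it makes UI logic unit-testable:

#include <string>

// The presenter sees the view only through an interface, so a unit test
// can drive it with a fake view instead of a real form.
struct ILoginView {
    virtual std::string UserName() const = 0;
    virtual void ShowError(const std::string& msg) = 0;
    virtual ~ILoginView() = default;
};

class LoginPresenter {
    ILoginView& view_;
public:
    explicit LoginPresenter(ILoginView& view) : view_(view) {}
    void OnLoginClicked() {
        if (view_.UserName().empty())
            view_.ShowError("User name is required"); // logic runs without any real UI
    }
};

// In a test, a FakeLoginView records the ShowError calls and the test asserts
// on them; in production, the real form implements ILoginView.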
[edit] Only just discovered that VS2010 Ultimate has a way of testing UI called "Coded UI Test". Ultimate licenses are expensive, but it may also be a viable option, especially for upgraded legacy code.
"You get that on the big jobs."
modified on Thursday, May 12, 2011 9:56 AM
|
|
|
|
|
Hi, can anybody share some write-ups on design patterns with C++ code?
Regards
msr
|
|
|
|
|
You can find some (limited, I grant you) details on design patterns with C++ code here[^].
|
|
|
|
|
click here ->[^] and here ->[^]
Yes U Can... If U Can, Dream it, U can do it... ICAN
|
|
|
|
|
|
View the book and change its code to C++.
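For example, the book's Observer pattern comes out roughly like this in C++ (a bare sketch, simplified from the original):

#include <iostream>
#include <vector>

// Rough C++ rendering of the classic Observer pattern.
struct Observer {
    virtual void Update(int value) = 0;
    virtual ~Observer() = default;
};

class Subject {
    std::vector<Observer*> observers_;
    int value_ = 0;
public:
    void Attach(Observer* o) { observers_.push_back(o); }
    void SetValue(int v) {
        value_ = v;
        for (Observer* o : observers_) o->Update(value_); // notify everyone
    }
};

struct Printer : Observer {
    void Update(int value) override { std::cout << "value = " << value << '\n'; }
};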
|
|
|
|
|
Have a look here: http://www.codeguru.com/forum/showthread.php?t=327982
|
|
|
|
|
Hello,
This question refers to testing using MS Visual Studio 2010 Ultimate Edition.
I think this is a simple scenario that can happen during coded UI testing, and it would be useful if someone could point me in the right direction.
Let's say we have an ASP.NET application that has several controls on one web form. We wrote a coded UI test and everything is OK. After some time, the development team decided to change the content of the form according to the new requirements, and one button was removed. But the test remained the same as it was when the button was on the form. What to do in this case? Of course, we can change the test as well. But what can be done efficiently if we have, for example, 1000+ coded UI tests and the changes affected a lot of forms? How can we find the changes on many forms programmatically, in order to learn about significant changes earlier and not execute the tests where those significant changes occurred?
I'm interested in whether we can use the .uitest file, or anything else, as a central repository of the elements on all forms that we're testing. Is this possible to achieve?
Thank you in advance.
Regards,
Goran
|
|
|
|
|
Tesic Goran wrote: But what can be done efficiently if we have, for example, 1000+ coded UI tests and the changes affected a lot of forms? How can we find the changes on many forms programmatically, in order to learn about significant changes earlier and not execute the tests where those significant changes occurred?
Using reflection. Load the assembly, enumerate all forms, and enumerate all components on those forms. You can then write all the names of the controls into a database table, along with a date. You could add other properties too; it might be easier to decorate them with a custom attribute.
When validating changes, loop over the controls again and compare them to the values in the database. Count how many things have changed, and drop all tests scoring more than a previously defined threshold.
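The enumeration would use reflection as described; the comparison step itself is plain set logic, roughly like this (sketched in C++ here, names invented):

#include <cstddef>
#include <set>
#include <string>

// Given the control names recorded for a form on two dates, count
// additions and removals; flag the form if the churn exceeds a threshold.
std::size_t CountChanges(const std::set<std::string>& before,
                         const std::set<std::string>& after)
{
    std::size_t changes = 0;
    for (const auto& name : before)
        if (after.find(name) == after.end()) ++changes;   // control removed
    for (const auto& name : after)
        if (before.find(name) == before.end()) ++changes; // control added
    return changes;
}

bool SkipTestsForForm(const std::set<std::string>& before,
                      const std::set<std::string>& after,
                      std::size_t threshold)
{
    return CountChanges(before, after) > threshold;
}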
Or, ask the developers to mark the forms that they've been modifying extensively.
I are Troll
|
|
|
|
|
Thank you for suggestion.
Goran
|
|
|
|
|
It's one of the most important facets of product development!
In my experience, there are definite phases of product life:
1. Conceptual Design - in which marketing types toss around ideas that will make life miserable for the engineers and programmers who will eventually be called upon to actualize their insane, drunken imaginings.
2. Detail Design - in which phase the marketers release a "requirements" document to engineering, leading to much anguish and scribblings on cocktail napkins.
3. Implementation - wherein the engineers attempt to read the alleged minds of Marketing, and provide specifications to the programmers who have to code the vague descriptions from Marketing into a product that someone will want to buy.
4. Internal Testing (alpha) - in which phase the experts are asked to test their own code against the ever-changing requirements promulgated by Marketing; they patch the most obvious problems themselves, bypassing version control.
5. External Testing (beta) - during which selected computer-savvy customers are given free software to try out in real-world situations in return for feedback and bug reports to help the programmers make Marketing's drug-induced wet dream into a product someone will actually find useful.
6. Release - finally a product that does something useful, however badly! Of course, it only works for those computer-savvy beta testers; real people haven't a clue how to make it work, and there's no manual.
7. Maintenance - pesky customers will persist in finding flaws that must be fixed, else those stock options will expire worthless. Support programmers are busy in this phase just making the product function for users who want to do more than just log on and watch the pretty videos.
8. Retirement - the phase that begins about 30 minutes after entering the Maintenance phase - maintenance is expensive! Tech Support changes their phone number, and patches are phased out over a period of time. After all, the new version has just been released; who could possibly be using the old one?
Of course, for those on a tight budget, the Microsoft Endrun is available:
1. Marketing - drink heavily and promise the sky.
2. Conceptual Design - build flashy visuals (without using Flash, of course) to promote the product.
3. Implementation - just code something.
4. Internal Testing (alpha) - get the coders to test their own stuff.
5. Release - sell the damned thing before someone notices that it doesn't work.
6. Maintenance - Aww, why bother? Unless someone wants to pay through the nose for advice.
7. Retirement - What, that old thing? We stopped supporting that years ago!
"A Journey of a Thousand Rest Stops Begins with a Single Movement"
|
|
|
|
|
What's to discuss? You have explained everything in a nutshell; have a 5.
It's time for a new signature.
|
|
|
|
|
What's your question? I believe you described the whole life cycle...
Don't forget to Click on [Vote] and [Good Answer] on the posts that helped you.
Regards - Kunal Chowdhury | Software Developer | Chennai | India | My Blog | My Tweets | Silverlight Tutorial
|
|
|
|
|
Perhaps I am the one that ...
|
|
|
|
|
Good message to all. Have a 5.
|
|
|
|
|
Good message... 5 for that.
|
|
|
|