How many environments do you have, or think are necessary?
As many as needed to solve the business needs of the company.
I think three is the minimum; I cannot decide whether the fourth is necessary or not.
For me "dev" would mean what is on my box and nowhere else. This of course would be necessary.
"prod" is where the business makes money. So it is necessary.
Anything else is optional.
A "test" one is only reasonable if it is in fact used. And used consistently. This can often require dedicated QA.
You didn't mention "build", which is something I prefer to have. It is doable in "dev", but I prefer that everything is blown away first, and that is problematic on "dev".
As for "train", that would presumably be a mirror of "prod" with less control and less need for stability. Some businesses would require it. It can be available in house and/or externally. It would require a business driver rather than a development process driver.
You can also have "integration test", "system test" and "user acceptance test".
In a previous life, we had one system (as in "app", but it was mainframes not PCs). Three copies - dev, test, prod. (Oh, and test was hot standby for prod. Testers knew that their system could disappear at the drop of a hat if prod hardware chucked a wobbly.)
All worked fine until a second system was brought in (which of course had to interface with the first one). After a while, it became obvious that we needed another test copy of each system. Basically, to test a change for system A (which is going in as an isolated change, not a big-bang upgrade), you need a clone of system B prod. This is different from B test, where the B testers are playing with new stuff which will go into B prod down the track. We wound up combining that integration testing function with user training, which wasn't ideal but did save the odd megabuck's worth of big iron.
I'm not even sure if I can ask this question properly (shows how unclear I am about it myself).
So I developed a small part of a rather complex piece of software. I am to run a system test on the software as a whole to test its error-handling procedures.
But it's not going so smoothly.
It's alright for those errors that I can actually cause (e.g. run the software without some essential hardware).
But of course I can't generate errors such as "Hardware broken" or "Hardware is connected but not responding" as there is a chance of actually doing harm to the hardware.
So I need a cheat.
Putting break points in the software and overwriting values is not an option because this is a system test.
So I duplicated a small part of my software and put some error testing mechanism in the second copy (let's call it the tester version). The software launches with one or the other of these versions depending on the registry value I set.
The problem is that I kept the duplicated portion of the code small (which I thought, and still think, was a good idea), but this means the dummy errors can only be emitted from one place.
In reality there are many paths errors can take. Depending on the path, the final output to the user can be different.
I want to map the various error paths to help decide how best to generate the dummy errors.
It appears I need to do something like "generate error A as if function B caused it", "generate error A as if function C caused it", and so on.
So I need a mapping from error A to functions B and C, and so on.
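That mapping could be as simple as a lookup table that the tester version consults when deciding which fault to emulate. Here is a minimal sketch in Python; the error and function names are made up for illustration:

```python
# Hypothetical sketch: map each simulated error to every production
# function that could be its origin. The tester build would iterate
# over these pairs to cover all the error paths.
ERROR_ORIGINS = {
    "HardwareBroken": ["read_sensor", "run_self_test"],
    "NotResponding": ["poll_device", "send_command"],
}

def injection_points(error):
    """List every (error, origin) pair the tester build should emulate."""
    return [(error, origin) for origin in ERROR_ORIGINS.get(error, [])]
```

A registry value (or any external switch) could then select one pair per test run, so each path through the error handling gets exercised in turn.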
Is there a clever way of doing this?
Or maybe a better question would be, what is the best way to run a system test on error handling functions?
There is no general-purpose, easy way to do all of this.
It is possible, but difficult, to use code insertion techniques to simulate any error. At the most powerful (and most complicated) end, you actually modify the code at run time to force an error.
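In a language with first-class functions, run-time code insertion can amount to swapping a function out for a stub that raises the fault. A minimal sketch, assuming the system under test is importable Python (the module and function names would be whatever the real system uses):

```python
# Replace holder.func_name with a stub that raises `error`; the caller
# gets back a zero-argument function that undoes the patch. This keeps
# the rest of the system untouched, so the test still exercises the
# real error-handling path end to end.
def inject_fault(holder, func_name, error):
    original = getattr(holder, func_name)
    def broken(*args, **kwargs):
        raise error
    setattr(holder, func_name, broken)
    return lambda: setattr(holder, func_name, original)
```

Wrapping the test run between `inject_fault` and the returned restore function means only the fault's origin is faked; everything downstream runs for real.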
Some people suggest adding interfaces that the design itself does not require, put in place solely to support testing. The problem is that this does not solve everything, and it can require a lot of interfaces.
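The interface idea looks something like this in Python (the hardware names and messages are hypothetical; the point is that the real driver and a fault-injecting fake share one interface, selected at start-up):

```python
from abc import ABC, abstractmethod

class Hardware(ABC):
    """Test-only seam: both implementations satisfy the same contract."""
    @abstractmethod
    def read(self): ...

class RealHardware(Hardware):
    def read(self):
        return 42  # would talk to the actual device in production

class NotRespondingHardware(Hardware):
    def read(self):
        raise TimeoutError("hardware connected but not responding")

def handle_read(hw):
    """The error-handling path under test: map faults to user output."""
    try:
        return hw.read()
    except TimeoutError:
        return "ERROR: device not responding"
```

The system test then launches against `NotRespondingHardware` and checks the user-visible result, without ever risking the real device.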
One can also be creative about testing. For example, if I need to test connectivity problems, I can use a system call in the test code to drop my IP address (and, of course, restore it afterwards). Or I can stop SQL Server programmatically when doing database error tests (again, making sure to restart it).
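The "stop it, test, restart it" pattern is worth wrapping so the restart happens even when the test fails. A sketch, assuming Windows and the `net stop` / `net start` commands (the service name and commands are illustrative, and the runner is injectable so the wrapper itself can be tested without touching any service):

```python
import subprocess
from contextlib import contextmanager

@contextmanager
def service_stopped(name, run=subprocess.check_call):
    """Stop a service for the duration of a test, then restart it."""
    run(["net", "stop", name])
    try:
        yield
    finally:
        # Restart even if the test body raises.
        run(["net", "start", name])
```

Usage would be something like `with service_stopped("MSSQLSERVER"): run_database_error_tests()`, with the context manager guaranteeing the cleanup the answer warns about.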
A human who knows how to operate your application and who knows what behavior to expect.
I'd recommend AutoHotKey if you want to create automation scripts to test your application; it's small, free, and uses almost no resources. A script will merely click where you tell it to; it will not verify anything else.
Hmmm. That's not always the best way. How, for instance, do you tell if a button is meant to be enabled only under certain conditions? In depth knowledge of what the application is supposed to do is invaluable, and should not be ignored.
This question refers to testing using MS Visual Studio 2010 Ultimate Edition.
I think this is a simple scenario that can happen during coded UI testing, and it would be useful if someone could point me in the right direction.
Let's say we have an ASP.NET application with several controls on one web form. We wrote a coded UI test and everything was fine. After some time, the development team decided to change the content of the form according to new requirements, and one button was removed. But the test remained the same as it was when the button was on the form. What to do in this case? Of course, we can change the test as well. But what can we do efficiently if we have, for example, 1000+ coded UI tests and the changes affected a lot of forms? How can we find the changes on many forms programmatically, so that we learn about significant changes earlier and don't execute the tests affected by them?
I'm interested in whether we can use a .uitest file, or anything else, as a central repository of the elements on all the forms we're testing. Is this possible to achieve?
But what can we do efficiently if we have, for example, 1000+ coded UI tests and the changes affected a lot of forms? How can we find the changes on many forms programmatically, so that we learn about significant changes earlier and don't execute the tests affected by them?
Use reflection. Load the assembly, enumerate all forms, and enumerate all components on those forms. You can then write all the names of the controls into a database table, along with a date. You could add other properties too; it might be easier to decorate them with a custom attribute.
When validating changes, loop over the controls again and compare them to the values in the database. Count how many things have changed, and drop all tests scoring above a previously defined threshold.
Or, ask the developers to mark the forms that they've been modifying extensively.
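The snapshot-and-compare step is language-agnostic once reflection has produced the control names. A hedged Python sketch of the comparison (in .NET you'd build `old` and `new` from the enumerated forms; here they map each form name to the set of control names found on it, and the threshold is a made-up default):

```python
def changed_forms(old, new, threshold=1):
    """Return forms whose set of control names changed by more than
    `threshold` names; tests touching those forms can be skipped and
    flagged for review instead of executed blindly."""
    flagged = []
    for form in set(old) | set(new):
        before = old.get(form, set())
        after = new.get(form, set())
        # Symmetric difference counts both removed and added controls.
        if len(before ^ after) > threshold:
            flagged.append(form)
    return sorted(flagged)
```

A form with one renamed label stays below the threshold, while a heavily reworked form gets all of its tests parked until someone updates them.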
It's one of the most important facets of product development!
In my experience, there are definite phases of product life:
1. Conceptual Design - in which marketing types toss around ideas that will make life miserable for the engineers and programmers who will eventually be called upon to actualize their insane, drunken imaginings.
2. Detail Design - in which phase the marketers release a "requirements" document to engineering, leading to much anguish and scribblings on cocktail napkins.
3. Implementation - wherein the engineers attempt to read the alleged minds of Marketing, and provide specifications to the programmers who have to code the vague descriptions from Marketing into a product that someone will want to buy.
4. Internal Testing (alpha) - in which phase the experts are asked to test their own code against the ever-changing requirements promulgated by Marketing; they patch the most obvious problems themselves, bypassing version control.
5. External Testing (beta) - during which selected computer-savvy customers are given free software to try out in real-world situations in return for feedback and bug reports to help the programmers make Marketing's drug-induced wet dream into a product someone will actually find useful.
6. Release - finally a product that does something useful, however badly! Of course, it only works for those computer-savvy beta testers; real people haven't a clue how to make it work, and there's no manual.
7. Maintenance - pesky customers will persist in finding flaws that must be fixed, else those stock options will expire worthless. Support programmers are busy in this phase just making the product function for users who want to do more than just log on and watch the pretty videos.
8. Retirement - the phase that begins about 30 minutes after entering the Maintenance phase - maintenance is expensive! Tech Support changes their phone number, and patches are phased out over a period of time. After all, the new version has just been released; who could possibly be using the old one?
Of course, for those on a tight budget, the Microsoft Endrun is available:
1. Marketing - drink heavily and promise the sky.
2. Conceptual Design - build flashy visuals (without using Flash, of course) to promote the product.
3. Implementation - just code something.
4. Internal Testing (alpha) - get the coders to test their own stuff.
5. Release - sell the damned thing before someone notices that it doesn't work.
6. Maintenance - Aww, why bother? Unless someone wants to pay through the nose for advice.
7. Retirement - What, that old thing? We stopped supporting that years ago!
"A Journey of a Thousand Rest Stops Begins with a Single Movement"