Hi all,
I am looking for suggestions about implementing SOA using the .NET Framework 3.0/3.5 in an enterprise environment.
I am considering MVC, EntLib, a plug-in based architecture, etc.
Thanks in advance.
-muneeb
A thing of beauty is a joy forever.
|
I think you've been drinking a bit too much of the buzzword juice. In any case, this "question" is much too vague and general to generate any useful answers. There are several good books on the general topic of implementing SOA with .NET.
This one by Juval Lowy[^] is quite good.
|
I'm used to doing waterfall design and development. For my next project, we're going to use an agile methodology. Well, it's not really agile. The only parts of agile they want us to do are iterative releases and daily stand-up meetings. I estimated the project at 1,900 hours if we use waterfall (which we won't, but I estimated it as waterfall because that's what I'm used to).
Now, I want to convert that waterfall estimate into an agile estimate. Is there any rule of thumb or simple formula I can use?
I'm thinking that agile should take longer because of the iterative releases, in that every release requires a new round of testing and deployments.
Any thoughts?
|
emunews wrote: I'm thinking that agile should take longer because of the iterative releases
That's not the way it normally works. Because you do smaller, faster releases you should aim at producing better quality code up front. It's a discipline that you have to work at, but it pays dividends.
|
Yes, I understand that it purports to produce better code. But I'm not really asking about code quality. I'm asking about project estimation.
EDIT: Also, note that this is not a true agile approach. There won't be pair programming, for example. The two agile tenets they want to use are iterative releases and stand-up meetings. That's it. The business rules for the entire scope of the project are already done.
|
emunews wrote: The two agile tenets they want to use are iterative releases
Well, if you don't do the iterative part correctly (this gets bastardized into total chaos in many cases) you will wind up in a giant, out-of-control mess. Then no estimating technique has any chance of being accurate.
led mike
|
emunews wrote: I'm asking about project estimation.
The quality of what you do up front has a huge impact on your project estimation. Traditional waterfall has to cope with the testing phase occurring fairly late in the development process, which means that your test teams come in late, have a much larger surface area of code to test, and defect correction takes a heckuva lot longer.
Here's a hint - work out the high-level use cases of what your project needs to do. Then break these down into scope areas. This will give you a much better idea of how long it's going to take based on an iterative approach, where you normally have a week of realisation, two weeks of in-depth coding and a tidy-up week. This level of scoping really does help - we made the leap and we haven't looked back.
BTW - standup meetings. Yuck. Try to introduce them to the concept of getting all the interested parties together on a regular basis. Realisation phase - a meeting to talk about the scope of the current iteration, and then a meeting later on in the week to show the high level use case along with what you've realised for it (typically this is 80% complete). This meeting would involve the developers, any architects you need, the testers and an end-user "champion". Then, at regular intervals over the development part of the iteration, have informal get-togethers where you show what's been done, and identify what's left to do. I'll guarantee that stuff will fall out of scope, but you'll carry it over into the next phase.
|
Don't you do a round of testing for each new release?
|
Yes. But the point is, you do it at the end of a release, and the new phase of development starts at the same time. Defects get rolled up into the next phase, so you don't have that time-consuming round of testing at the very end. Tests are smaller and more focused, and defects are cheaper to fix because they are caught earlier on.
|
Well, that means that time (measured as start of project to end) is quicker but time (measured in man-hours) is longer. Correct?
|
Actually, we've found that the man-hours cost is lower, because we don't have developers sitting around at the end of a project waiting to fix bugs, and the test teams aren't wasting time up front.
|
emunews wrote: every release requires a new round of testing and deployments
The sooner you find a bug, the cheaper the project becomes.
Agile projects become expensive when new requirements are brought up between iterations, because more attention is being paid to how the program actually works. The final quality is better.
So the main issue is to keep track of changes in requirements during the iterations and make sure those changes are estimated and added to your budget.
|
I would give the same estimate for a given project regardless of methodology. When I give an estimate, I am essentially saying "I think this will take N days if done using reasonable development techniques." These do not have to be the exact development techniques used previously. Methodologies evolve, and what you're describing is the kind of incremental adjustment in methodology which is assumed to be going on all the time in any good shop.
Hopefully you will get faster over time, and the estimating model should always be updated as discrepancies are observed, but there's no immediate need that I perceive for you to adjust it right now.
Also, I do not think I have ever seen or heard anyone claiming to use the "waterfall method." It's considered a pejorative term these days, almost like saying "my coding style is spaghetti" or "our team's style is garage hacker." When you say "my estimating technique works for waterfall" you're basically saying it's an estimating model for bad techniques.
Finally, I think Agile is much better than waterfall, or (as proponents of Agile might say) it's much better than BDUF (Big Design Up Front). I don't think there's much value anymore to the style (call it waterfall, BDUF, or just mid-90s orthodoxy) in which the architect types spend weeks or months dicking around with object hierarchies, UML, etc. before coding ever starts. That time almost always ends up wasted, in my experience. In the absence of code, the architects don't have any real basis for their decisions.
Programming instructors are quite wise when they implore us to use natural language, pencil and paper, diagrams, etc., but I think many of us in the 90s went too far in this direction. Also, I think people attempted to over-formalize good technique. What emerged from this effort was a bunch of simplistic, canned methodologies that isolated "design" into its own step at the beginning of the process, performed by an elite cadre of non-programmers. Hopefully we have left, or are leaving, this era!
|
Hi,
We have an application coded in C++ that runs on Windows. We also have an API that can be used by third-party Unix apps. So the current architecture (in simplified form) is:
UI (in VC++) --> Functionality Dll (in C++)
Third party Unix client --> API --> Functionality Dll (C++ code recompiled as shared object in Unix)
We are now planning to redesign the UI in C#/.NET. The question before us is: how do we maintain the code base (of the functionality DLL) common to both the Unix API and the Windows UI? If we just recompile the functionality DLL with the /clr switch and use it from .NET, will there be any loss in performance for the main app (the functionality DLL involves a lot of math calculations)?
Guys, please help. Hope I was clear. Thank you in advance.
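In case it helps, one alternative we're weighing is to keep the functionality DLL fully native (so the Unix build stays untouched) and call it from the new C# UI via P/Invoke instead of recompiling with /clr. A rough sketch of what I mean - the exported function here is made up:
using System.Runtime.InteropServices;
public static class FuncDllInterop
{
    // Hypothetical flat C export added on top of the existing C++ code;
    // the same C++ source still compiles unchanged as a Unix shared object.
    [DllImport("FuncDll.dll", CallingConvention = CallingConvention.Cdecl)]
    public static extern double ComputeResult(double input);
}
// Usage from the C# UI:
// double result = FuncDllInterop.ComputeResult(42.0);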
|
Mika Wendelius wrote: Not sure if I understood your problem correctly
Me neither. However, I suspect he is looking for a two-word book report on War and Peace.
led mike
|
That's an excellent interpretation; why didn't I figure that out?
The need to optimize rises from a bad design.
My articles[ ^]
|
Sorry for confusing you, guys.
As you must have guessed by now, I'm new to the interoperability stuff. The questions on my mind are: does compiling the existing C++ code with the /clr switch automatically emit MSIL for all the unmanaged code? Will we get to keep the existing C++ code the same across Unix and .NET (Windows), just the way it is now (needing only recompilation)? If yes, is there any performance loss?
I know there is a chance that I am still not clear, but at least I added one more question to what I already asked. Perhaps you can get a clue about where I'm leading/misleading myself.
|
Btw, Mika, Mono seems a good option. But the API + functionality DLL we have runs on IBM AIX, SCO UnixWare, HP HP-UX and Sun Solaris, apart from Linux. I don't find any mention of these on the Mono website.
|
I have a question about the Builder pattern: why do we need the Product class? The Director constructs the components of the product, and when I want to produce a product I implement the IBuilder interface, write the behaviour of each component, and pass the concrete builder to the Director without needing the Product; I can also produce different representations this way. Please give me the benefit of the Product class, and a real-world example if you can.
Discover Other ....
http://www.islamHouse.com
|
The Builder pattern consists of four parts:
1. Builder - an abstract class (or interface)
2. ConcreteBuilder
3. Director - the site where construction occurs, by calling the ConcreteBuilder
4. Product - the thing being built
public class Demo
{
    public static void main(String[] args)
    {
        Builder b = new ConcreteBuilder1();
        Director d = new Director();
        // We tell the director to construct a product using b as the builder;
        // the director will call the appropriate methods on b. We do not need
        // to know how the director does this, we assume the director knows.
        d.construct(b);
        Product aProduct = b.getProduct(); // Now we ask b for the built product

        b = new ConcreteBuilder2();
        d = new Director();
        d.construct(b);
        Product anotherProduct = b.getProduct();
    }
}
Therefore, you need the Product class because, after all, the built product is what you are after at the end.
It is like going to a construction site and bringing a builder with you. You ask the person in charge at the site (the Director) to construct a house for you using the builder you introduced. The person in charge knows the sequence and what parts are needed to build a house, but not how to build them; he simply asks the builder to build each part. The builder ends up holding the complete house, so to see it, you ask the builder for the house. I personally think we should ask the Director for the finished product as well, and the Director should ask the appropriate builder, but for some reason that is not the case.
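To make the four parts concrete, here is a rough sketch in C# (the snippet above is Java-ish, but the shape is the same; all names here are just illustrative):
using System.Collections.Generic;
// Product - the thing being built
public class Product
{
    private readonly List<string> parts = new List<string>();
    public void AddPart(string part) { parts.Add(part); }
}
// Builder - knows HOW to build each part
public interface IBuilder
{
    void BuildFoundation();
    void BuildWalls();
    Product GetProduct();
}
// ConcreteBuilder - one particular representation
public class BrickHouseBuilder : IBuilder
{
    private readonly Product product = new Product();
    public void BuildFoundation() { product.AddPart("concrete foundation"); }
    public void BuildWalls() { product.AddPart("brick walls"); }
    public Product GetProduct() { return product; }
}
// Director - knows the ORDER in which the parts are built
public class Director
{
    public void Construct(IBuilder builder)
    {
        builder.BuildFoundation();
        builder.BuildWalls();
    }
}
The point is the separation: the Director only sequences the steps, the builder owns the representation, and GetProduct is how you finally take delivery of what was built.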
|
Ok, so I am looking into the .NET 3.5 workflow services and developing support for some application form workflows. So far I have only come across fairly simplistic tutorials, the best being on Channel 9. I can understand that; there seems to be an astonishing level of complexity involved in putting one of these together.
My intentions are to have ASPX forms deployed via sharepoint services. The forms are to be served by a workflow service and data stored on SQL Server.
Basically a form to be filled, various levels of sign off with a reject or cancel option where reject allows the originator to resubmit. There may be substantial delays during the workflow. From my investigations so far it seems a state machine workflow will be the best solution but a sequential workflow may meet most requirements.
Issues are
Which workflow to use?
It seems all interaction needs to go through a send/receive event and therefore serialised datasets will be passed through the service.
Should the supporting data for the form (tables for dropdowns) also pass through the workflow service, or should I separate the static data from the workflow data?
So far the tutorials seem to assume the data is discarded once the workflow is completed (it is stored as XML during the workflow), but I would like to retain the information for analysis. Should the XML data be parsed out to tables etc. for easier query access?
Am I missing anything from the following list?
Persistence data store - SQL Server
Form information and static data - SQL Server
Logic layer to interact with the form information database (Program.cs)
Contract/Interface per application form (rough sketch below)
Workflow Service
Client (ASPX)
Also if there is a tutorial/article out there related to this type of workflow I would appreciate a link.
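For concreteness, here is roughly what I have in mind for the contract per application form - all the names are placeholders:
using System;
using System.Runtime.Serialization;
using System.ServiceModel;
[DataContract]
public class LeaveRequestData
{
    [DataMember] public string Originator;
    [DataMember] public DateTime From;
    [DataMember] public DateTime To;
}
// One WCF contract per application form; the workflow service sits behind it.
[ServiceContract]
public interface ILeaveRequestForm
{
    [OperationContract]
    Guid Submit(LeaveRequestData data);            // originator submits the filled form

    [OperationContract]
    void Approve(Guid requestId, string approver); // one call per sign-off level

    [OperationContract]
    void Reject(Guid requestId, string reason);    // sends it back for resubmission
}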
Never underestimate the power of human stupidity
RAH
|
A common task we always have to accomplish is to allow users to edit objects from the UI. For example, let's say I have an Employee class with the usual properties. The UI will have textboxes and other controls to display the properties. A user can select an employee by name from a listbox, treeview, combobox or whatever (not important to this question) and the form will display the employee. These are the different approaches I have been taking to display the employee, and I was wondering which is the better approach, or whether you can recommend one:
1. Create a property within the employeeForm called Employee and when it is set the form will call its private ReLoad() method and display the employee.
2. Create a property as mentioned in 1, but do not ReLoad in the setter, instead make the ReLoad method public and clients should call the ReLoad method after setting the employee.
3. Forget the property altogether and just have a method called Load(Employee emp). This is basically a method which takes employee as a parameter and then displays it.
What do you think?
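To make the options concrete, here's roughly what I mean (Employee is simplified):
public class Employee
{
    public string Name;
    public decimal Salary;
}
public class EmployeeForm // : Form in the real code
{
    private Employee employee;

    // Option 1: setting the property reloads the display automatically.
    public Employee Employee
    {
        get { return employee; }
        set { employee = value; ReLoad(); }
    }

    // Option 3: a single explicit Load method instead of the property.
    public void Load(Employee emp)
    {
        employee = emp;
        ReLoad();
    }

    // Option 2 would simply make this method public and leave the setter bare.
    private void ReLoad()
    {
        // copy the employee's fields into the textboxes here
    }
}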