|
B413 wrote: I understand the usefulness of business rules in each of these layers of a program. But the technologies that I have used forced us to rewrite or generate business rules in each of these layers because the languages are different:
If you have the same set of business rules in all the layers then something is suspect.
But other than that, you already mentioned code generation, so why not just do that for the common functionality?
B413 wrote: Is there a language that can accommodate all business rules in a single place or language
Attempting to generate all rules ends up being a mess because the exceptions add so much complexity that maintaining the generation itself becomes very difficult. Generating the easy stuff is easy and removes much of the rote work.
If you fail to properly design the different layers to support insertion of generated rules then that will also become a mess.
|
|
|
|
|
I can now confirm that it's possible to use C# to build a WinForms application and reuse the entity and business-rule code from the business layer in the data and UI layers. I can also confirm this is possible with the (MongoDB, Node, JavaScript) stack.
|
|
|
|
|
I want to build a simple chat application.
After some research on the internet I found that there are already many messaging protocols, such as XMPP and STOMP, as well as open-source messaging servers that implement them.
But how can I use these servers if I need to add some logic to my app? For example, a user's ability to send messages depends on his state.
I thought about adding a controlling server that would receive a user's state and, according to that state, change the state of the messaging server. But I'm not sure that this is the best approach to my problem...
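Roughly, the idea is something like this (a minimal sketch, with made-up class and state names; not part of any real XMPP/STOMP API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: a gate in front of the messaging server that
// checks the sender's state before a message is forwarded. The class
// and state names are made up, not part of any real XMPP/STOMP API.
public class MessageGate {
    public enum UserState { ACTIVE, MUTED, BANNED }

    private final Map<String, UserState> states = new ConcurrentHashMap<>();

    // Called by the controlling side whenever a user's state changes.
    public void setState(String user, UserState state) {
        states.put(user, state);
    }

    // Consulted by the messaging path: may this user send right now?
    public boolean allowSend(String user) {
        return states.getOrDefault(user, UserState.ACTIVE) == UserState.ACTIVE;
    }
}
```

Whether such a gate lives in a separate controlling server or in a server-side plug-in (many open-source messaging servers offer extension points) would then be a deployment decision.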
I need your advice and suggestions about the architecture and techniques that will help me to build this app.
Thank you
|
|
|
|
|
This is a very arcane question, but I'm going to ask it anyway: Is there a performance hit by having a method in a separate assembly?
I am designing a website with custom providers. Passwords will be hashed, which means my hashing routine will be used A LOT. Ideal design would be to put the method in the utility assembly, where it could be accessed by both the MembershipUser and MembershipProvider modules. Would giving these modules their own copy of the method noticeably improve performance, or would the nuisance of having two methods that must remain identical outweigh any improvement?
|
|
|
|
|
Strictly from a performance point of view, one assembly is always better than several assemblies.
Each assembly loading has a fixed overhead. For multiple assemblies, you pay the overhead several times.
The overhead of assembly loading at minimum includes the following:
1. Finding the assembly.
2. The loader's in-memory data structures that track the assembly.
3. Assembly initialization.
Cold starts, in particular, can make this overhead noticeable.
From a design point of view, duplicate code is a bad choice.
|
|
|
|
|
Gregory.Gadow wrote: Passwords will be hashed, which means my hashing routine will be used A LOT.
I sort of doubt that. The only way that would occur is if you required the password on every single request (not once per session).
Not to mention, of course, that this would suggest you think the time for this process is at least significant, if not the biggest factor in the request.
And regardless the impact of the call itself is going to be trivial compared to the cost of actually doing the hash.
Gregory.Gadow wrote: Is there a performance hit by having a method in a separate assembly?
If you have profiled your application using real message-processing flows and determined that the calling semantics themselves are significant, then you probably want to write everything in C++ and create one massive, statically linked application.
Myself, I doubt that you can measure normal calling semantics in any of the major languages in anything other than a benchmark. And it won't be significant.
There are, of course, ways to produce non-normal calling semantics: for instance, if you dynamically loaded a module for every single method call (unloading it each time), I would expect that to be measurable. And very, very likely to be a bad design as well.
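To put rough numbers behind that: even a fast, non-password-grade hash such as SHA-256 costs far more than the call that reaches it. A minimal sketch (in Java for brevity; the shape is the same in C#, and `HashCost` here is a made-up stand-in for the utility assembly):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Rough illustration: one hash computation costs far more than the
// method call that reaches it. hashPassword is a stand-in for a
// routine living in a separate "utility" assembly.
public class HashCost {

    public static byte[] hashPassword(String password) {
        try {
            return MessageDigest.getInstance("SHA-256")
                                .digest(password.getBytes(StandardCharsets.UTF_8));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);   // SHA-256 is always available
        }
    }

    public static void main(String[] args) {
        long t0 = System.nanoTime();
        byte[] digest = hashPassword("s3cret");
        long hashNanos = System.nanoTime() - t0;
        // The hash itself dominates; the call dispatch is a few nanoseconds.
        System.out.println("one hash: ~" + hashNanos + " ns, digest " + digest.length + " bytes");
    }
}
```

And a real password-hashing scheme (PBKDF2, bcrypt) is deliberately slower still, widening the gap between hash cost and call cost by orders of magnitude.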
|
|
|
|
|
This is more of a paranoid situation of having to decide between performance and maintainability. Been there, done that
Gregory.Gadow wrote: Is there a performance hit by having a method in a separate assembly? Obviously, there is. But it occurs only when the assembly is loaded for the first time, and if accessing another class has already loaded the said assembly, then you do not even have to worry about it. If you carefully set the base DLL address of each of your assemblies according to the order in which they're likely to load and their sizes, you can actually slightly reduce the load time (although the difference will most probably be unnoticeable).
Gregory.Gadow wrote: would the nuisance of having two methods that must remain identical outweigh any improvement? Remember DRY. Having multiple copies of code results in a maintenance headache that the performance improvement, if any, does not justify.
|
|
|
|
|
Gregory.Gadow wrote: separate assembly In IIS, every assembly adds a penalty to the start-up time of your website; after start-up, however, it's not an issue anymore.
Gregory.Gadow wrote: Ideal design would be to put the method in the utility assembly Not exactly true. Good design only says that the function should be written once; it does not violate the design rules to compile the very same source into several assemblies if that proves to have a good impact on performance...
Gregory.Gadow wrote: their own copy Not a copy, but a linked source file...
I'm not questioning your powers of observation; I'm merely remarking upon the paradox of asking a masked man who he is. (V)
|
|
|
|
|
Have you? I'm going to try it... so I want to hear your assessment.
|
|
|
|
|
At first it seems kind of cheap, I think,
but its quality is greater than I expected.
Still, ERwin or PowerDesigner support better technology,
but the basic functions are supported in eXERD as well.
modified 7-Feb-14 2:47am.
|
|
|
|
|
Technical Details:
.NET Framework 4.0 (C#)
Windows Server 2008
Requirement:
1. Polling a local folder for files
2. Polling a Message Queue for messages
3. Polling an FTP folder for files
4. Polling an SMTP server for mail
We need a single architecture that can be easily extended and also independently maintained for different requirements.
For example if we need to add a new mail server to be monitored, we would just add an entry (with sufficient details) in a config-like file and the system should ideally pick the config entry and start polling for that.
As another example, to stop a single polling instance, it should be sufficient to just remove its entry from the config-like file.
Whenever polling succeeds, i.e. a new mail is received, a new file is found in a folder, etc., the system should be able to instantiate a new controlled thread to process the item in question.
Deciding on an architecture for this is a complex requirement for me. I would be thankful to all who can provide possible solutions along with their pros & cons.
Thanks in advance.
|
|
|
|
|
Hi,
I would use something like Strategy[^] to make different polling implementations.
If a strategy finds something, I would let it create a Command[^] that can be executed by a thread. A Command can hold the data and have its own implementation.
All Strategy and Command implementations regarding a technology e.g. mailing could go into its own package/assembly and get wired to a common execution platform at start-up.
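Something along these lines (a minimal Java sketch; the interface and class names are made up, not a real framework):

```java
import java.util.List;
import java.util.Optional;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Minimal sketch of the Strategy + Command pairing. Each strategy polls
// one kind of source; a hit is wrapped in a command (here just a
// Runnable) and handed to a worker thread.
interface PollingStrategy {
    Optional<Runnable> poll();   // empty if the source had nothing new
}

class FolderPollingStrategy implements PollingStrategy {
    @Override
    public Optional<Runnable> poll() {
        // A real implementation would list a directory here.
        String found = "invoice.txt";   // pretend a file appeared
        return Optional.of(() -> System.out.println("processing " + found));
    }
}

public class PollingPlatform {
    private final ExecutorService workers = Executors.newFixedThreadPool(4);

    // In a real system the strategy list would be built from the config file.
    public void runOnce(List<PollingStrategy> strategies) {
        for (PollingStrategy s : strategies) {
            s.poll().ifPresent(workers::submit);   // each hit runs on its own thread
        }
        workers.shutdown();
    }

    public static void main(String[] args) {
        new PollingPlatform().runOnce(List.of(new FolderPollingStrategy()));
    }
}
```

Adding a new source kind then means adding one strategy (and its commands) in its own assembly, without touching the execution platform.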
I hope it makes sense.
Kind Regards,
Keld Ølykke
|
|
|
|
|
Thanks Keld Ølykke!
I got the overall idea, but I will need to read up on these two patterns and think through the solution (pros & cons). Also, the "common execution platform" itself might need a little more thinking (for an amateur like me, at least).
|
|
|
|
|
|
I would suggest building a pluggable framework where the main app's job is maintaining plug-in modules. Each module can be standalone and handle a specific source, yet be discoverable by the main app. The main app can scan a folder for DLLs or EXEs containing a class that implements some common interface and dynamically instantiate the modules using a factory. Modules can be added to or removed from the folder at will.
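Roughly like this (a Java sketch of the same idea, with made-up names; a .NET host would use Assembly.LoadFrom and Activator.CreateInstance in the same role):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the pluggable idea: modules implement a common interface and
// are instantiated by name through reflection, the way a host would
// instantiate types discovered by scanning a plug-in folder.
interface PollingModule {
    String source();      // which source this module watches
    void pollOnce();
}

class FolderModule implements PollingModule {
    public String source() { return "local-folder"; }
    public void pollOnce() { System.out.println("checking folder..."); }
}

public class PluginHost {
    // In a real host, classNames would come from scanning the plug-in folder.
    public static List<PollingModule> load(List<String> classNames) {
        List<PollingModule> modules = new ArrayList<>();
        for (String name : classNames) {
            try {
                Class<?> c = Class.forName(name);
                if (PollingModule.class.isAssignableFrom(c)) {
                    modules.add((PollingModule) c.getDeclaredConstructor().newInstance());
                }
            } catch (ReflectiveOperationException e) {
                throw new IllegalStateException("cannot load module " + name, e);
            }
        }
        return modules;
    }

    public static void main(String[] args) {
        for (PollingModule m : load(List.of("FolderModule"))) {
            System.out.println("loaded module for " + m.source());
            m.pollOnce();
        }
    }
}
```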
|
|
|
|
|
One way to make it generic and easily adaptable is:
Extend the .NET FileSystemWatcher class and create a component that listens for certain file changes in a directory. Next, create a Windows Service application that will use this component. The settings and number of instances can be made configurable through the app config file of your Windows Service application. When an instance notices a file change, it writes a message to the Application log of the Windows Event Log.
Using the Windows Event Viewer you can then create custom views that filter for the messages generated by your Windows Service application. To each custom view you can attach a task that will be triggered each time a message is written.
To this task you can attach a batch file, PowerShell script or VBScript that will execute and perform your needed functionality each time the task is triggered.
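The watcher component might look roughly like this (a Java sketch, with Java's WatchService standing in for .NET's FileSystemWatcher; names are illustrative):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.*;

// Sketch of the watcher component. A real service would loop on the
// watch service's take() and write each event to a log instead of
// printing it.
public class FolderWatcher {

    /** Registers a directory for create/modify events and returns the key. */
    public static WatchKey watch(Path dir) {
        try {
            WatchService ws = dir.getFileSystem().newWatchService();
            return dir.register(ws,
                    StandardWatchEventKinds.ENTRY_CREATE,
                    StandardWatchEventKinds.ENTRY_MODIFY);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    /** Convenience: watch a fresh temporary directory. */
    public static WatchKey watchTempDir() {
        try {
            return watch(Files.createTempDirectory("watched"));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        WatchKey key = watchTempDir();
        System.out.println("watching, key valid=" + key.isValid());
    }
}
```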
|
|
|
|
|
Hi everyone,
My post will be a bit long but, I think, it is necessary in order to understand the problem. It is not really a problem, since I have identified several solutions, but I would like to know what you think and whether it is feasible.
On one side I have a native C++ framework, and on the other side I have a C# framework. Both of them use a modular approach (component-based frameworks).
I have to write some modules that will be compatible with the two frameworks (C# and native C++).
First solution: I can write my modules for each platform, respecting its formalism. But in terms of maintainability this is not perfect, since I have duplicate software.
Second solution: I develop my modules for the C# (or C++) platform and use the compiled DLL on the second platform.
Third solution (my preferred solution): I would like to write a common base for the two platforms (my modules) and then, in order to integrate these modules into both frameworks, I will have two branches. I think this solution may be technically hard to do, but it seems (to me) stylish.
You will understand that the challenge is to take into account the two programming languages: C# and C++.
So my questions are really simple:
1) What do you think about these solutions?
2) Are they feasible?
3) Maybe I have forgotten some other solutions?
Thanks for reading this looong post and for your kind responses.
S.E
|
|
|
|
|
C++/CLI (Managed C++) allows wrapping native C++ code for the .NET Framework, which in turn can easily be consumed by any VB.NET/C# application.
So I'd suggest writing your modules in native C++, then the wrapper in C++/CLI.
|
|
|
|
|
Question seems confused.
The concept of "branch" normally applies to source control which is a different subject than creating the code in the first place.
In terms of the code and only the code the following possibilities exist
A. Create two distinct implementations
B. Create an implementation that can be accessed by both and which shares common code in some way.
In terms of library management (based on your use of "branch"), you first must decide whether the library is in fact a separate deliverable or not. If it is, then it has its own source control tree, its own builds and its own deliveries, and the two applications consume builds that come from it, not code (keep in mind that a 'deliverable' could in fact include some source code, or be entirely source code, but the concept of 'deliverable' remains.)
If however you want to manage the code as part of the existing applications then the following is true.
1. The two applications ALREADY use a common source control tree. If so, you add the library at an appropriate spot in the tree. There really isn't any point in doing this if you are using A above.
2. The two applications are in different source control trees, but you are going to MOVE them into one tree. Then the comment for 1 still applies.
3. The two applications are in different source control trees and will remain there. In this case option B is NOT an ideal solution, since it requires code copying.
|
|
|
|
|
Hi jschell, Bernhard
Thanks for your response.
jschell, you're right... The word "branch" is not appropriate here, so my explanation is a little bit confused.
I am only thinking in terms of coding. I'm trying to explain what my purpose is.
As these two platforms are component-based platforms (a re-usability process), I would like to reuse the source code as much as possible.
So, I don't want to create two distinct implementations, because of maintainability and/or code evolution.
The second approach I'm thinking of is to implement for one platform (native C++, for example) and then directly use the original DLL on the other platform (here C#, for example). This solution seems "hard" to me in the sense that I have to implement some P/Invoke and/or marshalling (for variable types).
Then, I'm thinking of writing a common base of code which will be used on the two platforms. For example, I develop my base in native C++ and then write a C++/CLI wrapper so that I can use it in the C# implementation.
The simple diagram of this solution would be something like this:
Native C++ impl(common base) ----> C++/CLI wrapper----> C# impl
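For illustration (not compiled here; the library and class names are made up), the three layers might look like:

```cpp
// --- native C++ common base (mathlib.h / mathlib.cpp) ---
namespace mathlib {
    int add(int a, int b);            // shared business logic, plain C++
}

// --- C++/CLI wrapper assembly, compiled with /clr ---
public ref class MathWrapper {
public:
    static int Add(int a, int b) { return mathlib::add(a, b); }
};

// --- C# side: just reference the wrapper assembly ---
// int r = MathWrapper.Add(2, 3);
```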
I think (as Bernhard suggested to me) that I'm taking this solution. The idea is that if the code evolves, I only have to change the common part and the wrapper.
Regards,
S.E
|
|
|
|
|
I already mentioned that you need to determine what your deliverables are first. You haven't discussed that at all.
Your existing applications are either being treated as two deliverables or one. Simple as that.
|
|
|
|
|
Hi,
Sorry for the misunderstanding.
So it will have a common part and, as I have to distribute on the two platforms, there will be two deliverables.
One for each platform respecting the coding convention of both of them.
|
|
|
|
|
nonogus wrote: One for each platform respecting the coding convention of both of them.
So presumably you are saying that your library will be a deliverable.
You can either keep all of the interface code in the library, or you could make one or both deliverables responsible for their own interfaces. Either option has merits, although I would probably tend to keep the library all in one language, and thus one deliverable would be responsible for providing its own interface (and unit testing it).
|
|
|
|
|
Many thanks for your reply,
you're right, and my first idea (still under consideration) is to build a main library in C++ and then provide a wrapper in order to consume my C++ library from my C# platform.
The wrapper will be written in C++/CLI, and thus I can simply add a reference in my C# project.
Regards,
|
|
|
|
|
For me it would be Option 3.
I would write the common services as web services and consume them from each application: one code base, properly abstracted, to provide the functions that you need.
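For example (a minimal Java sketch using the JDK's built-in com.sun.net.httpserver.HttpServer; the endpoint path and the rule itself are made up):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Sketch of "common services as web services": one small HTTP endpoint
// exposes a shared business rule that every client application can call,
// so the rule lives in exactly one code base.
public class RuleService {

    // The single, shared business rule (made up for illustration).
    public static boolean validOrderTotal(int total) {
        return total > 0 && total <= 10_000;
    }

    public static HttpServer start(int port) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/rules/order-total", exchange -> {
            String q = exchange.getRequestURI().getQuery();      // e.g. "total=42"
            int total = Integer.parseInt(q.substring(q.indexOf('=') + 1));
            byte[] body = String.valueOf(validOrderTotal(total))
                                .getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws IOException {
        start(8080);
        System.out.println("rule service listening on :8080");
    }
}
```

Each application (WinForms, web, mobile) then calls the endpoint instead of carrying its own copy of the rule.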
Thanks
JD
http://www.seitmc.com/seitmcWP
|
|
|
|