|
Kevin Marois wrote: The system must be controllable from the server side
Not sure what that is supposed to mean.
Actually pushing from the server would be possible, but only if all of the following are true:
1. The client has an addressable IP. From the internet that means a public IP address.
2. The client must be running. So certainly one can never go on vacation and leave the computer off.
3. The client must be on the network. So no taking the computer somewhere with no coverage.
4. The client has software installed and running which is expecting that push request.
All of the above is why the client requests updates from a server rather than the other way around.
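For what it is worth, the pull approach can be as simple as a periodic check. A rough sketch in C++ using libcurl; the URL, the poll interval and what is done with the response are all made up for illustration:

    // Minimal client-side "pull" sketch: poll the server for pending updates.
    // The endpoint URL and the meaning of the response body are hypothetical.
    #include <curl/curl.h>
    #include <chrono>
    #include <iostream>
    #include <string>
    #include <thread>

    static size_t collect(char* data, size_t size, size_t nmemb, void* out)
    {
        static_cast<std::string*>(out)->append(data, size * nmemb);
        return size * nmemb;
    }

    int main()
    {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        for (;;)
        {
            std::string body;
            CURL* curl = curl_easy_init();
            if (curl)
            {
                curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/api/pending-updates");
                curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
                curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);
                if (curl_easy_perform(curl) == CURLE_OK && !body.empty())
                    std::cout << "Server says: " << body << '\n';   // act on the update here
                curl_easy_cleanup(curl);
            }
            std::this_thread::sleep_for(std::chrono::minutes(5));   // poll interval
        }
    }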
|
|
|
|
|
This needs to be supported by the client software, which should check with the server and then take the appropriate action; I doubt there is a framework for your client. The server should certainly maintain the state for each client, i.e. optional, non-financial.
Never underestimate the power of human stupidity -
RAH
I'm old. I know stuff - JSOP
|
|
|
|
|
Hello,
I have a project to develop a classified ads website. A project has already been created, but the technologies used are obsolete.
My question is the following: if you had to build this project, which technologies and tools would you use for each step?
Note that the project will be hosted on an Infomaniak cloud server.
What I have tried:
I already have an idea, but I would like to get some other opinions before starting.
Thank you!
|
|
|
|
|
My first step would be to ask for more detailed requirements, and also what exact market space they are attempting to target.
|
|
|
|
|
The Pointer to Implementation (PIMPL) pattern can help when working with a dynamic link library, as it helps maintain a stable ABI at the cost of an extra indirection on almost every data access.
My question is: do you see any merit to using it in a static link library? My gut feeling is no, but maybe you can sway me.
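For context, the flavor of PIMPL I have in mind looks roughly like this (the class and file names are made up):

    // widget.h -- the only thing clients of the library see.
    // The private data lives behind an opaque pointer, so its layout
    // can change without recompiling client code (the ABI argument).
    #include <memory>

    class Widget
    {
    public:
        Widget();
        ~Widget();
        int value() const;
    private:
        struct Impl;                  // defined only in widget.cpp
        std::unique_ptr<Impl> pimpl_; // the extra indirection on every access
    };

    // widget.cpp -- free to change without touching the header.
    struct Widget::Impl { int value = 42; };
    Widget::Widget() : pimpl_(std::make_unique<Impl>()) {}
    Widget::~Widget() = default;
    int Widget::value() const { return pimpl_->value; }

The point being that Widget::Impl can change freely without altering the size or layout of Widget as seen by client code.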
Mircea
|
|
|
|
|
Mircea Neacsu wrote: when working with a dynamic link library as it helps maintain a stable ABI
Not the only use case but that statement sounds like protectionism intended to defeat other programmers. So basically not trusting them. So not something that I engage in.
Mircea Neacsu wrote: in a static link library
Presuming C++, the library type has nothing to do with it.
Over time, and hopefully rarely, one comes up with a case where a 'lot' of information is needed to fully declare the functionality that a class needs. The normal idiom is to put it all in the include file, even though there is no need for the user of the class to be exposed to all of that.
So one moves most of that extra information either into the class file itself, or into a secondary include file (if one finds it emotionally difficult to put it in the class file) which is only included by the class file.
Using an actual redirect is not required. It is fluff, and probably only makes maintenance more complicated. A void pointer works as well, and then using it is just a cast, which has no runtime impact.
And if performance is a problem with the redirect, as in an actual measurable impact rather than something hypothetical and unrealistic, then it seems likely to me that there is something wrong with the design that led to the class in the first place.
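If that helps, the void-pointer variant I mean looks roughly like this (a sketch only, names are made up):

    // widget.h -- same public surface, but the hidden state is an untyped pointer.
    class Widget
    {
    public:
        Widget();
        ~Widget();
        int value() const;
    private:
        void* state_;   // real type known only to widget.cpp
    };

    // widget.cpp -- the cast from void* is resolved at compile time, so there is
    // no extra runtime cost beyond the same single indirection.
    struct WidgetState { int value = 42; };

    Widget::Widget()  : state_(new WidgetState) {}
    Widget::~Widget() { delete static_cast<WidgetState*>(state_); }
    int Widget::value() const { return static_cast<WidgetState*>(state_)->value; }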
|
|
|
|
|
jschell wrote: Over time and hopefully rarely one comes up with a case where there is a 'lot' of information that is needed to fully declare the functionality that a class needs. The key word here is "rarely". Yes, I know the "God class" anti-pattern and I agree it is something to avoid, but the situation I ran into is just mindless application of a design principle. Every class, no matter how small, had its entrails taken out and replaced with PIMPLs.
Also, for good measure, all classes have only private or protected constructors and objects can be created only through static member factories. Of course, that means all objects live on the heap, not on the stack, and this is, again, anywhere and everywhere!
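(For anyone who has not met that style, it looks roughly like this; the class is made up:)

    #include <memory>

    // Every class in that codebase followed this template: constructor hidden,
    // creation only through a static factory, so every object lives on the heap.
    class Account
    {
    public:
        static std::unique_ptr<Account> create(int id)
        {
            return std::unique_ptr<Account>(new Account(id));
        }
    private:
        explicit Account(int id) : id_(id) {}
        int id_;
    };

    // Account a(1);                  // does not compile: constructor is private
    // auto a = Account::create(1);   // the only way in, and it heap-allocates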
What was that saying, that to a man with a hammer everything looks like a nail?
Mircea
|
|
|
|
|
Mircea Neacsu wrote: is just mindless application of a design principle. Every class, no matter how small, had it's entrails taken out and replaced with PIPMLs.
Which is wrong.
Not a unique event though. Someone, I believe on this site, mentioned years ago an application where every class was required to have an interface and a factory.
The fallacy of that, of course, is that one could then insist that the factory must also have an interface and a factory, and that recurses.
I have seen multiple times where people suggest that code should be written so code injection can be used at run time to load the various parts of the application. Yet they are unable to explain in realistic terms how that will in any way be cost effective.
|
|
|
|
|
A simple question with a not-so-simple answer. Do you bind your DTO classes directly to the UI (web page, REST API deserialization, etc.), or do you use a middle layer of objects/properties between the DTO and the UI?
Thanks!
|
|
|
|
|
A "DTO" be definition is a (Data Transport) "middle layer" (Object).
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
|
|
|
|
|
Data transfer objects. It's an architectural pattern in .Net Core, Java etc.
|
|
|
|
|
As always, "it depends".
If the data is simple enough and the view doesn't need any additional data for other UI elements, like filling in options for a dropdown, you could get away with using the DTOs in the UI. If your UI is more complex, you'd be better off building a view model to hold the data and any additional information needed to build out the UI.
Typically, I don't use DTOs to do anything other than talk to the data sources.
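Roughly, the split I mean looks like this; the names are made up and the idea is language-agnostic (shown here as a C++ sketch):

    #include <string>
    #include <vector>

    // DTO: exactly what the data source hands back, nothing more.
    struct CustomerDto
    {
        int         id;
        std::string name;
        int         countryCode;
    };

    // View model: the DTO's data plus whatever the UI needs to render itself,
    // e.g. the options for a country dropdown and a preformatted display name.
    struct CustomerViewModel
    {
        CustomerDto              customer;
        std::vector<std::string> countryOptions;   // filled from a lookup service
        std::string              displayName;      // "name (id)" or similar
    };

    CustomerViewModel buildViewModel(const CustomerDto& dto,
                                     std::vector<std::string> countries)
    {
        return { dto, std::move(countries),
                 dto.name + " (" + std::to_string(dto.id) + ")" };
    }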
|
|
|
|
|
Thanks!
|
|
|
|
|
I think the real answer is unfortunately yes it goes in the UI.
What happens is that the start-up does it that way because it is quick. And they keep doing it the quick way even after they have funding.
And even after they start hacking the DTOs into morphed states that make it confusing as to what is going on (or even where the DTOs live in the code stack).
Then, 5 years in, when the application becomes large, everyone complains about why the interfaces are not clean, but no one, and I mean no one, wants to do the major refactor that would be required to separate the different functional units (and provide appropriate DTOs for each).
Of course eventually, if the company lasts, it gets so big that they have to re-write major portions.
...but they want to do that quick.
|
|
|
|
|
I have 10 different classes, like a, b, c, which have different primary keys and structures, but every class will have a button which opens the same class, e.g. Charges, when clicked.
I want to save the Charges class information along with the primary key of the relevant class (a, b, c) from which it gets called. What would be the best design pattern for it? Thanks
|
|
|
|
|
You'll need a "class id and instance key" for the "10 different classes" if you want to relate them to a single source of "Charge classes". (bi-directional parent-child relation).
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
|
|
|
|
|
The question doesn't make sense.
Presumably "a,b,c" are in fact different classes and not in fact different instances. And in the context of programming (not data) there is no point in having a "primary key". And it is probably a design flaw if you do in fact have 10 different classes and you intend to store then in a single table (for which the 'primary key' would in fact be a type value and not a 'primary key'.)
You do not "save" classes. Instead you save data.
Design patterns apply to programming designs. The term does not apply to data.
Nothing in your description defines that there is any data at all to save with the "button".
But if you meant the following
1. You do in fact have 10 different classes where the data is the same (guaranteed) but the behavior is different.
2. You have an attribute that differentiates each class. That is what you are referring to as the "primary key". I will, however, refer to it as the 'type'.
3. The button has a state of either on or off.
Then the persistent storage object (database table) would consist of three columns: Id, Type and ButtonState.
The 'Id' exists because, presumably, you are going to want to store more than one instance of a, and more than one instance of b, etc.
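In code the stored record amounts to something like the following sketch (names and enum values are placeholders):

    #include <cstdint>

    // One row of the persistent store: which kind of object (a, b, c, ...),
    // which instance of that kind, and the state of its button.
    enum class SourceType : std::uint8_t { A, B, C /* ... up to the 10 kinds */ };

    struct ChargeRecord
    {
        std::int64_t id;        // more than one instance per type
        SourceType   type;      // replaces the so-called 'primary key'
        bool         buttonOn;  // the button state to be saved
    };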
|
|
|
|
|
The Reactive Manifesto (https://www.reactivemanifesto.org/) is several paragraphs long and lists a large number of alleged characteristics and advantages of reactive programming; so many in fact, that reactive programming appears to be offered as some sort of panacea that will solve all of our problems forever.
To me it seems that most of the alleged advantages of reactive programming listed in that manifesto are either false or non sequiturs, so despite what the manifesto says, I posit that the sole reason for the existence of reactive programming is performance, in other words saving threads.
Incidentally, this means that in any environment that offers virtual threads (for example, in Java starting from version 19) reactive programming is irrelevant.
Change my mind.
P.S.
Unfortunately, when people become salespersons for a certain cause, they seem to be never content with just mentioning the one game-changing advantage of their product over the competition; they seem to always want to throw as much as possible at the customer, hoping to make them buy; so, they tend to include a torrent of inconsequential or even entirely fictitious advantages, which often has the effect of drowning the one important advantage in the noise. For example I have seen this with microservices, whose lists of advantages are often twenty items long; most of them are preposterous, and almost all of them are nothing but filler, because in fact microservices only have one game-changing advantage, which is scalability, or two if we want to also count resilience.
|
|
|
|
|
Sorry, I'm not familiar with reactive systems but the reference you gave is just a bunch of gobbledygook. Platitudes and truisms not a design method make.
I spent many years in the real-time data acquisition world, and all the stuff mentioned in the "manifesto" was so obvious that no one would even bother to mention it:
- Responsive: show me a real time system that is not responsive
- Resilient: nah, we'll just let our system crash at the first bad input
- Elastic: we had to wait until 2014 for someone to bring up the subject, because we never heard of the 1201 and 1202 error codes in '69[^]
- Message driven: another novelty... who would have thought?!
Laughable!
Mircea
|
|
|
|
|
It's Version 2 and published over eight years ago. The world has moved on ...
|
|
|
|
|
My apps reflect all the characteristics referred to as "reactive", though I never thought of labeling them as such:
- Responsive: sub 100ms response time
- Resilient: graceful degradation in the face of system issues
- Elastic: a frame rate that accommodates an increased load
- Message driven: all operations that need to be async are so
So, "reactive" seems to be a new word to describe what should have been taken for granted in any worthwhile app.
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
|
|
|
|
|
Mike Nakis wrote: reactive programming appears to be offered as some sort of panacea that will solve all of our problems forever.
Of course. Not much of an evangelist if one has a new idea and then says 'but this solves almost nothing, so you should almost never use it. And especially not when considering long-term maintenance costs.'
Mike Nakis wrote: become salespersons for a certain cause, they seem to be never content with just mentioning the one game-changing advantage of their product over the competition; they seem to always want to throw as much as possible at the customer,
Also of course. As a salesperson one's income is derived from that. Probably not a good idea for the owner of a McDonald's to tell you that Burger King burgers are cheaper and taste better.
-------------------------------------------------
From the link
"Systems built as Reactive Systems are more flexible, loosely-coupled and scalable."<pre>
And with complexity that is significant. And that is by far the most significant factor is any large scale enterprise. No technology will fix that.
<pre>"The system responds in a timely manner if at all possible."<pre>
WTF? What system of any sort does not do that?
<pre>"The system stays responsive in the face of failure."
I doubt that is a given. If the architecture, design and implementation does not specifically do that then there is nothing a specific technology can do to fix that.
"that ensures loose coupling,"<pre>
Which also insures complexity.
Componentizing a system, any system using any methodology, means that the individual component (pick your granularity) is easier to understand but it adds to the complexity of understanding and diagnosing problems that impact the enterprise.
I just wish all of the difficult problems that I have worked on for the last 20 years could have been easily solved by looking at one component. The problems that were caused by a single component often could be diagnosed and fixed in less than a day.
<pre>"The largest systems in the world rely upon architectures based on these properties"
The largest systems in the world have been built over decades and have been based on many false starts and trial and error solutions to meet the specific business needs (and that is a very big plural) of the companies which hold those systems.
Small start-ups cannot start with the assumption that they will need to support 1 billion users in 20 years, because right now they are not even sure they have enough money to run for 3 years. So they need something that works quickly, no matter how it might fare in 20 years.
|
|
|
|
|
It's a raging tech sea. Many want anchors.
If you want a quick ride to the bottom, just beeline to the nearest lighthouse-looking thing like this while pretending it's a silver bullet. Sell it to management as such too.
It worked for agile! Even though that manifesto's creators have basically since gone, "WTF, people? We weren't meaning to found a religion."
Not to say I think agile is even "bad". But snake oil, silver bullets, and fascist purists sure tend to be.
|
|
|
|
|
(I brought this up as a sidetrack in a Lounge thread. Even though I ask out of curiosity only, not any 'need' or immediate problem, the question is on the technical side, so I am moving it over here.)
There is no real reason why stack frames are allocated edge to edge! The 'previous frame' pointer in the stack frame header could point anywhere. Stack-relative addressing never goes outside the current frame. Traditionally, with a million objects, each running its own thread, each of them is allotted a stack for its maximum call depth. This can tie up quite a few megabytes that are hardly ever used, and most certainly not all at the same time. Never is every single thread preempted at its very deepest nesting of calls at exactly the same time. In a non-preemptive, co-routine based system, they won't all yield (or whatever it is called in your favorite co-routine package) at the maximum stack depth at the same time.
Stack frames could be allocated from the heap, with space occupied only as long as a function call is active. Then a given amount of RAM would be capable of handling a much larger number of threads. Especially in event-driven architectures, a large fraction of threads idle at a low stack level. They receive an event, go into a nest of calls to handle it, and then return to the low stack level waiting for the next event. A thread might do some sort of yield halfway, but the great majority of threads would be back at 'ground base', or at a moderate call nesting level, most of the time.
The argument against heap allocation is of course the processing cost. There are (/were) machines offering micro coded entry point instructions doing stack allocation from a buddy heap, essentially unlinking the top element from the free list for the appropriate frame size. Function return links the frame back to the top of the list. This requires a couple of memory cycles for each operation. What is that on the total instruction budget for a modern style method call?
Even though on desktop PCs you can just plug in another 64 GiB of RAM to satisfy stack needs, in other systems (such as embedded) this is not always possible. Yet lots of programmers recoil in horror at buddy allocation: it causes lots of internal fragmentation! 25% with binary buddies! Well, but how much can you save in stack requirements? Besides, lots of architectures demand word, doubleword or paragraph stack alignment anyway - that leads to internal fragmentation/waste as well!
If the allocation cost worries you, lots of optimizations are possible, in particular if frame allocation is the responsibility of the caller. E.g. if a function whose stack frame has a lot of unused (internal fragmentation) space calls a function asking for a small frame, the new frame could fit in that unused space, not requiring another heap allocation. There are dozens of such optimizations.
I have never heard of any software (compiler, run-time system) allocating stack frames on the heap. I have never heard of anyone using the microcoded buddy stack frame allocation on the one architecture I know of that provides it. Is that because I am ignorant? Is heap stack allocation 'commonplace' anywhere? Or has it been considered and rejected? If that is the case, what were the arguments for rejecting it?
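To make the idea concrete, here is a toy C++ simulation of what I mean by heap-allocated frames linked by a 'previous frame' pointer (not a real calling convention, just the shape of it):

    #include <cstdlib>
    #include <cstdio>

    // Toy activation record: allocated when a "call" starts, freed on "return".
    // Frames need not be adjacent; each just remembers its caller's frame.
    struct Frame
    {
        Frame* previous;   // the 'previous frame' pointer
        int    locals[4];  // this frame's local variables
    };

    Frame* enter(Frame* caller)                 // function prologue
    {
        Frame* f = static_cast<Frame*>(std::malloc(sizeof(Frame)));
        f->previous = caller;
        return f;
    }

    Frame* leave(Frame* f)                      // function epilogue
    {
        Frame* caller = f->previous;
        std::free(f);                           // space held only while the call is active
        return caller;
    }

    int main()
    {
        Frame* top = enter(nullptr);            // 'main' frame
        top = enter(top);                       // nested call
        top->locals[0] = 7;
        std::printf("%d\n", top->locals[0]);
        top = leave(top);                       // return from nested call
        leave(top);
        return 0;
    }

A buddy allocator would simply replace malloc/free here with unlinking and relinking a free-list node of the appropriate size class.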
|
|
|
|
|
Lot of text there with no context.
trønderen wrote: There is no real reason why stack frames are allocated edge to edge!
In C/C++ there were library calls that allowed one to chop the stack off, i.e. to reset it to a specific point. I never used them, so I don't know why they existed; it might have been for performance or error handling. Such a call would certainly have been easier (and faster) if frames were next to each other.
However, I also believe that is not the case for Java. I believe I looked at the source code for that at one point, and it was just allocating on the heap.
trønderen wrote: Stack relative addressing never goes outside the current frame
False.
In C and C++ one can overwrite the frame. I have in fact done so in the past.
Since both Java and C# support native code, overwriting the frame is also possible there.
trønderen wrote: Stack frames could be allocated from the heap, with space is occupied only as long as a function call is active
That is somewhat simplistic. Allocations of any sort still require a mechanism to deallocate them.
So, for example, a standard in C/C++ (at least when I looked at source code) was that local variables were allocated, maybe even in the stack frame. Then the deallocation happened when the frame was freed on exit.
That is why they were (or are) on the stack frame: because the deallocation is already there.
Heap deallocation on the other hand is a different mechanism.
trønderen wrote: Then, a given amount of RAM would be capable of handling a much larger number of threads.
As stated, that is wrong. Memory is not the only limitation on the number of threads. Not to mention that no developer should ever consider an unlimited or large number of threads a 'good' idea.
trønderen wrote: Even though you with desktop PCs can just plug in another 64 GiB of RAM to satisfy stack needs
Just noting that I have never seen a computer that actually allows for the full addressable range of 64 bits. Certainly no desktop PC does. So one cannot just keep plugging RAM in.
And cloud servers are always limited too, sometimes to surprisingly low limits.
trønderen wrote: (such as embedded), this is not always possible. Yet, lots of programmers recoil in horror
Because all resources, not just memory, are scarce. If you add complication to the processing model then it will require more resources. There is no way around that.
trønderen wrote: If the allocation cost worries you, lots of optimizations are possible,
The discussion however is not about me (and the general audience) but rather you.
If you want to fix it then do so.
I have in the past replaced the entire heap in a C compiler ecosystem.
Certainly you could also do the same for the stack frame allocation. So do it. I suspect you will find that the deallocation part is going to be complicated but doable.
trønderen wrote: I never heard of any software (compiler, run time system) allocating stack frames on the heap
As I suggested, when I looked I found the Java (JVM) source code to do that.
The following has a response that suggests the same thing.
https://stackoverflow.com/questions/26741925/is-frame-in-jvm-heap-allocated-or-stack-allocated[^]
|
|
|
|
|