I have been using the IODA architecture since its very beginning, nearly 10 years now, and I have been developing software professionally for more than 30 years.
It is the first approach/technique that offers the option to produce reusable, small classes like Lego building blocks.
It is orthogonal to other best practices like Clean Code, DRY and others.
It has a strong impact on unit testing: unit testing becomes easy.
The reason is simple:
IODA was designed to reduce dependencies between classes to a minimum.
I - Integration - calls all the others, but contains no logic or data.
O - Operation - contains the logic and works only with data; it does not know other operations or integrations.
D - Data - holds the data and knows/deals only with itself and its children. No operations here.
A - API - Application Interfaces - to deal with the environment or to provide general functions like data validation and other stuff.
All of these classes are developed by the project. Standard libraries sit below this code.
If you then divide the operations into "IO operations" and "other operations", you get the following:
The IO operation "reader" reads "data_ger",
the operation "translator" translates "data_ger" to the English data "data_eng",
and then the IO operation "View_data" presents it to the user.
And the integrator "TextManager" has a method like
void viewAsEnglish(int textIdGer, string readLang = "ger")
{
    var data_ger = reader.Read(textIdGer, readLang);
    var data_eng = translator.Translate(data_ger, "eng");
    dataViewer.View(data_eng);
}
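Here is a minimal sketch of how the classes behind this method could look. The names and bodies are only my illustration of the role split, not part of the official IODA description:

// Data: holds values only; no logic beyond itself.
class TextData
{
    public int Id { get; set; }
    public string Lang { get; set; }
    public string Content { get; set; }
}

// IO operation: talks to the environment, knows no other operation.
class Reader
{
    public TextData Read(int textId, string lang)
    {
        // Hypothetical storage access; any file or database read would do here.
        return new TextData { Id = textId, Lang = lang, Content = "..." };
    }
}

// Operation: pure logic on data; knows neither Reader nor DataViewer.
class Translator
{
    public TextData Translate(TextData source, string targetLang)
    {
        // Placeholder for the real translation logic.
        return new TextData { Id = source.Id, Lang = targetLang, Content = source.Content };
    }
}

// IO operation: presents data, knows no other operation.
class DataViewer
{
    public void View(TextData data) => System.Console.WriteLine(data.Content);
}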
After longer practice you will automatically think in these four types when creating a new class.
And yes, it works fine in real projects.
Now let's start a discussion about it, if you want.
**Links**
Detailed description:
https://www.infoq.com/news/2015/05/ioda-architecture/
Bigger sample project (documentation only in German, sorry):
GitHub - jpr65/VersandEtiketten: a reference project for IODA and also a developer test ticket. Only in German; possibly later also in English.
**Images**
IODA - Principle Diagram
IODA - Libraries
|
I haven't seen this architecture before. Seems interesting but there is one thing I don't get:
Ralf Peine 2023 wrote: I - Integration - calls all the others, but contains no logic or data. O - Operation - contains the logic and works only with data; it does not know other operations or integrations.
If I need to implement something like "if operation1 fails, do operation2", where do I place the logic? I cannot place it in the integration units (they cannot contain logic or data) and I cannot place it in one of the operations (they do not know other operations).
What is the solution?
Mircea
|
Yes, that's an important point.
You may add minimal logic to the integrator to set switches for the workflow.
However, the decision why a switch is set should again be made in an operation, if it is not a simple true/false decision.
It is ***allowed*** to put logic in integrators; sometimes it is easy for the moment, but it leads to technical debt and is sometimes difficult to test. Especially while doing hotfixes you will break the IODA principles, and that is OK, for the moment.
The more you stick to the IODA principles, the more testable and reusable the code you create will be.
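As a small sketch (the names operation1/operation2 and the Succeeded flag are hypothetical): the operation computes the decision, the integrator only routes on it:

var result = operation1.TryProcess(input);   // the operation decides success or failure
if (!result.Succeeded)                       // the integrator only routes on that flag
    result = operation2.Process(input);
dataViewer.View(result.Data);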
|
Like perhaps everything along these lines, it has some utility as long as it is not taken as an absolute.
I certainly do not want to see every single operation in a million-line application decomposed like this.
For example, I have written reports at various times in various languages, and attempting to decompose them often led to nothing but making the whole harder to understand.
Beyond that, in standard business programming, where an application is maintained over years with multiple changes in employees, one must ask: is this likely to be rigorously maintained? If it isn't, doesn't it just devolve into what the original code might have looked like in the first place?
|
I see ETL: extract, translate, load.
Your sample "integrator" looks more like a "translator".
While my "data" may not have methods, it resides in a repository that does: add, delete, update, query. Data as another object. The methods represent an "API".
And I have "operations" that depend on other operations: "distance" calculators for anything that needs to calculate a distance (in pixels, paces, yards, or feet); or an angle; or "a collision detector" while moving.
I don't disagree with the architecture, but things aren't that clear cut.
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
|
This example and the documentation are very brief. Please read the detailed description by Ralf Westphal.
"Translation" is what the integrator manages, and "integration" is the type of the class in the IODA sense.
The reason for the IODA approach is the following:
"Functional dependencies are evil!"
Only the integrator classes are allowed to call operation classes.
So, in an ideal world, you have no functional dependencies.
|
If you want people to appreciate the benefits of this approach, write an article, hosted here on Code Project, showing a non-trivial application created using these techniques. I find myself deeply distrustful of prescriptive ways of working when they are presented as absolutes (e.g. "Functional dependencies are evil!"). No approach is 100% perfect and, generally, there will always be exceptions to the absolutes that are the driving reason for a new way of working.
|
I'm working on a WPF app for a client that gets invoice remittance data from their customers as CSV files. My app extracts the Invoice Date, Reference Number, and Amount using metadata defined by my client for the input files. This metadata defines the columns in the file, and the index position of the Invoice Date, Reference Number, and Amount.
The key is the reference number. It is used to look up the invoice row in their accounting system. Once in a while their customer formats the reference number wrong. For example, one of the reference number formats is eight numbers, as in '20158011'. But from time to time, they send invalid data, as in 'RECOUP19718470_002ZT'.
Another customer's format is 'RDHC608965'. This column's reference number always starts with 'RDHC'. The remainder is the six-digit reference number.
What I would like to do is provide a place in the UI to define the required format of the reference number. If it was internal I would specify a regex, but this is something that my client has to define.
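For example, I'm picturing the client typing a simple mask that I translate to a regex internally. Just a sketch; the '#' and 'A' tokens are a convention I made up:

using System.Text;
using System.Text.RegularExpressions;

// '#' = one digit, 'A' = one letter, anything else is taken literally.
static Regex MaskToRegex(string mask)
{
    var pattern = new StringBuilder("^");
    foreach (char c in mask)
    {
        if (c == '#') pattern.Append(@"\d");
        else if (c == 'A') pattern.Append("[A-Za-z]");
        else pattern.Append(Regex.Escape(c.ToString()));
    }
    pattern.Append("$");
    return new Regex(pattern.ToString());
}

// "########"   would match 20158011
// "RDHC######" would match RDHC608965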
What's the right way to go about this?
“In theory, theory and practice are the same. But in practice, they never are.”
If it's not broken, fix it until it is.
Everything makes sense in someone's mind.
|
"Reference data" is just that; there's nothing to validate. It's like a "comment"; subject to the whims of the client and not something to build a "system" around.
"Invoice payments" are generally applied to outstanding balances, and not any particular invoice; that shows how "important" the reference number is (not).
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
|
Gerry Schmitz wrote: "Reference data" is just that; there's nothing to validate. It's like a "comment"; subject to the whims of the client and not something to build a "system" around.
It's not 'Reference data'. It's a Reference Number. The Reference Number in the customer's data file is a 'reference' to an Id in the client's accounting system. This matches an invoice from the client to a customer's payment. It's a reconciliation.
But that's not the question. See this[^]
Since my client can define new customers in the app, I need a way for them to tell me the FORMAT of the Reference Number. Then, as my app reads the file, it can verify the number so it can be looked up. The format of the Reference Number CAN be validated. I COULD use a RegEx, but defining the validation has to be user friendly.
The question, again, is how to allow the user some way of manually inputting the format of the reference number.
“In theory, theory and practice are the same. But in practice, they never are.”
If it's not broken, fix it until it is.
Everything makes sense in someone's mind.
|
It's reference "junk" when it's in some one else's system that you have no control over. And anything you put in the way of "the payment process" makes you the problem.
A company issues their own "client numbers", Purchase Order numbers, Invoice Numbers, Product Numbers, etc. which becomes "reference junk" in someone else's system that they then send back to you as more "reference junk". You reconcile your own paperwork; not someone else's.
I'd send "checklists" of the outstanding invoices if they wanted to "reference" something.
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
|
This whole process is outside the scope of this discussion. I have no control over how my client processes their invoices.
Thanks anyhow.
“In theory, theory and practice are the same. But in practice, they never are.”
If it's not broken, fix it until it is.
Everything makes sense in someone's mind.
|
You have (some) control when inputting from documents; you can use "fuzzy" searching then and there.
You said CSV ... which means you have to get it into the system "before" you can do anything with it. Throughput.
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
|
Kevin Marois wrote: What's the right way to go about this?
Your requirements description is incomplete, for a start.
You must also answer the question: what do they want to happen when it is wrong? For example: discard the entire CSV, ignore that one row, collect the failures and present them immediately to a human, or collect them somewhere and allow for a report. There might be some others.
You should also decide who actually manages it. Is it set up on install? Can it change day to day? Can different users change it (so not the customer but individual customer users)?
Those answers drive how you configure it. For example, if it is set at install then you would need to ask during install. (This probably is not viable, since their needs might change over time.)
But other than that, your application should have a section specifically for customer configurations. Yes, plural. Presume one now and more in the future. And possibly one for the application (admin users only) and one for normal users. Your application might not support multiple users, so the different levels might not apply.
With both application level and user level, you need to decide whether the user one overrides the application one completely or whether both work.
You can save the configuration in a configuration file or a database, or another persistent store. You would normally load the configuration on start-up. Naturally, updates while running must affect the loaded configuration as well.
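As a sketch of what one customer configuration entry could hold (the field and type names here are only illustrative):

enum FailureAction { DiscardFile, SkipRow, ReportImmediately, CollectForReport }

class CustomerImportConfig
{
    public string CustomerId { get; set; }
    public int InvoiceDateColumn { get; set; }
    public int ReferenceNumberColumn { get; set; }
    public int AmountColumn { get; set; }
    public string ReferenceNumberMask { get; set; }  // e.g. "RDHC######"
    public FailureAction OnBadRow { get; set; }      // what to do with a failing row
}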
|
I agree 100%: have some way or other to capture this for future use, irrespective of which customer adds what as their own reference. Then just do a callback to that specific customer and you should be able to tie the reference up to the correct file reference.
|
I just started working with a business that made a web application that has a nodejs-expressjs backend api and a react front end. The business wants to sell its software as a white label solution to some enterprise sized businesses. My manager says that the customers will be expecting a detailed report to convince them that our solution is "secure". I need to determine steps to producing such a security report.
My first thoughts are to follow these steps:
1. Run the npm audit command on our backend and front end projects to identify all known vulnerabilities, then fix them according to recommended approaches I read about on the internet. This step has been done. The npm audit command shows no vulnerabilities or issues of any kind.
2. We upload our code as docker images to dockerhub.com. Dockerhub shows a list of vulnerabilities for us to address. I am currently in this step, and I have some issues which I will elaborate further down in this post.
3. Hire a 3rd party cyber security firm to test our solution. This firm will give us a report of issues to address.
That's my overall plan. However, I am currently stuck on step 2. Dockerhub is showing me MANY Critical and High priority vulnerabilities such as the following:
cve-2021-44906 - An Uncontrolled Resource Consumption flaw was found in minimist
https://access.redhat.com/security/cve/cve-2021-44906
CVE-2022-37434 - zlib through 1.2.12 has a heap-based buffer over-read or buffer overflow in inflate in inflate.c via a large gzip header extra field.
https://nvd.nist.gov/vuln/detail/CVE-2022-37434
...etc...
According to dockerhub, there are about 100 of these types of vulnerabilities, where maybe 10% are critical, 15% are high, and the rest are medium or low. These issues look very difficult to address, because they come from transitive dependencies (modules of modules) that I don't directly use in my own software. Trying to replace these modules of modules basically means a complete rewrite of our software to not depend on ANY open source solutions at all! And I'm sure if I were to scan packages with another type of scanner, different sets of vulnerabilities would be exposed. And I haven't even gotten to step 3 yet.
So this got me wondering: how do other organizations selling white-labelled solutions go about disclosing vulnerabilities to their end clients, and how do they protect themselves?
I started thinking that maybe I don't have to deal with every single security vulnerability that exists. Instead, I should only address security issues that I am confident hackers will exploit, or things that are easy to address. Then I hire a third-party security firm to find other vulnerabilities. Anything not caught by the security firm we deem "not important". And we develop some contract and service agreement that protects our business from legal action if our clients experience a security vulnerability not covered in our report?
But then a customer will say, "dockerhub.com clearly shows vulnerability X, and you as the seller were aware of vulnerability X; please justify to us why you did not address it." How do we respond then?
That's what's in my head right now.
So back to my original question - what steps should a team take to address security concerns of a software that will be white labelled and sold to customers?
|
(Duplicate post).
If you're getting a "third party" to "certify" your software, you should be consulting with them, not the public.
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
|
mozilly wrote: My first thoughts are to follow these steps:
That is not how you go about it.
That is like attempting to write code when you do not even know what the requirements are.
mozilly wrote: My manager says that the customers
Any larger company will expect this. Mid-size companies are likely to as well. Depending on the business domain, every customer might require it.
mozilly wrote: what steps should a team take to address security concerns
Obviously application security is a part of it. But also company security.
Large companies will require 3rd party security audits. Smaller ones might also.
Steps:
1 - Investigate the various parts of security needed.
2 - Software security.
3 - Employee training.
4 - Employee access. Specifically, how access is turned off when an employee exits the company, and who has access to what.
5 - Reviewing code for security vulnerabilities - specifically. Tools and manual review.
6 - 3rd party audits.
7 - A DOCUMENTED Security Plan for the company that includes all of the above.
8 - DOCUMENT all of the steps taken (which would be in the Security Plan). You will need to track where those documents live.
9 - The Security Plan must include how to DOCUMENT exceptions to the plan and solutions to problems discovered.
10 - One or more people assigned to the role of ensuring that the Security Plan is followed.
3rd party audits will likely look at all of the above.
People tend to skip 9 because they think/claim that exceptions will not occur. Then when they do occur, they don't have any way to deal with them and thus end up ignoring the issue.
|
We're getting pressure from one of our customers to internationalize our software product. All of our currency and dates are handled correctly, or get fixed quickly. We have about 70% of the words translated via resx files. There are also some database translations where we allow customization. All of this works.
However, since it isn't 100% complete, it came up in a discussion with management. One of the devs wants to remove the resx files and put all translations in a database table (actually 3). I'm curious whether anybody out there has strong opinions on why database-only translations are better or worse than resx. There are articles out there and Stack Overflow questions, but most of it is older.
Is resx still in favor? Is it a good choice? My feeling is that re-working all of the resx into some new custom format isn't a good use of our time.
Thanks for your thoughts.
Hogan
|
That's ... interesting.
Putting every string of text into the database is only going to slow down the entire app and, as an added bonus, give any admin user the ability to change the translations at will! Imagine the fun with a disgruntled admin!
Oh, and when database access fails, how do you go to the database and get the localized version of the message explaining that you can't get to the database?
It makes sense for some things, like customer data, but not for localization of the app.
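For contrast, a resx lookup is a local, compiled-in call. A minimal sketch (the base name "MyApp.Strings" and the key "DbUnavailable" are hypothetical):

using System.Globalization;
using System.Resources;

// Strings.resx, Strings.de.resx, ... are compiled into the assembly.
var rm = new ResourceManager("MyApp.Strings", typeof(App).Assembly);

// Uses the current UI culture and falls back to the neutral resource;
// no database round trip, and it works even when the database is down.
string message = rm.GetString("DbUnavailable", CultureInfo.CurrentUICulture);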
|
On top of the obvious drawbacks that Dave pointed out, translations have a bad habit of being longer than their English equivalent (in many/most cases). A four-letter word like "Desk" can become a 12-letter word like "Schreibtisch" in German. I read somewhere that you should expect a 15 to 30% increase in length when you go from English to other European languages. A user interface that is not size-adjusted looks clunky, with truncated fields or big empty spaces.
Mircea
|
Mircea Neacsu wrote: translations have a bad habit of being longer than their English equivalent (in many/most cases)
This is most certainly true if the translating is done by a native English speaker. When I translate Norwegian text to English, the English text is frequently longer, most certainly in the first version. I do not know the entire English language, so I often use rather cumbersome linguistic constructions where there is a much shorter way of expressing the same thing. The same goes for the native English speaker: he doesn't know Norwegian well enough to find the short (and correct!) Norwegian translations.
Following up your example: a translator might look up 'Desk' in an English-Norwegian dictionary, finding 'Skrivebord' (a close parallel to the German word). 'Skrivebord' is long and cumbersome for daily talk; we say 'pult'. (That is, if you pronounce it with a short 'u' sound. With a long 'u' sound, the English equivalent would be 'had intercourse', although that is not very likely to appear in a user interface.) Also note that both Norwegian terms are more specific than the English 'table': it is not a dinner table, not a sofa end table, not a set of columns, not a pedestal for a lamp, artwork or whatever. It is a worktable where you do some sort of writing ('skriving'). If you need to express that specific kind of table in English, the English term will grow in length.
Finding individual words that are longer in other languages than in English is a trivial task. So is the opposite. I have written quite a few documents in both English and Norwegian versions, and translated English ones not written by me into Norwegian. If the number of text pages differs at all, it is by a tiny amount.
On average, that is. A user interface usually needs a few handfuls of words, some shorter, some longer than in the original language. You must prepare for those that are longer - and you don't know in advance which those are. So when translating from any original language, not only English, to any other language, including English, be prepared for single terms, prompts etc. being 30% longer than the original. (15% is definitely too little.) Although there is a significant effect of the translator not knowing the target language well enough, the increased length may be completely unavoidable, regardless of original or target language.
|
Couldn't agree more. My point was that simply plucking text from a database and putting it in a user interface will make it look bad to the point of being useless.
Here is a horror story I've seen "in a galaxy far, far away".
A programmer who knew everything told his team: just put all the texts you need translated between some kind of delimiters. I'm going to write a nice little program that extracts all those texts, puts them in a database and passes them to the i18n team. They will just have to enter translations for those texts and Bob's your uncle: I've solved all these pesky problems.
Trouble came soon after, first when they realized some words have multiple meanings. In English "port" can be a harbour or the left side of a ship, but in French "port" and "bâbord" are very different words. Translators had no clue in what context a word was used; besides, they could enter only one translation per word. The source code also became a cryptic mess, where something like SetLabel("Density") became SetLabel(load_string(ID_452)). Some of the texts were too long, others too short; in brief, it was such a mess that most users gave up on the localized versions and stuck to English. But the programmer who knew everything remained convinced he had solved the problem.
Moral of the story: humans are messy, and their languages too. There is no silver bullet, and text in a database is very, very far from being one.
Mircea
|
I just have to add my 'horror story', from at least as long ago:
My company went to a professional translator to have the prompts for our word processor (remember when we used to call it a word processor?) translated to German. The translator was given all the strings as one list of English terms. Nothing else. No context indication.
This text processor had, of course, a function for 'Replace all -old text- with -new text-', with a checkmark for 'Manual check?'. In the German translation this came out as 'Ersetze -- mit --', and a checkbox 'Handbuch kontrollieren?'
This was discovered in time for the official release, but only a few days before.