|
mozilly wrote: My first thoughts are to follow these steps:
That is not how you go about it.
That is like attempting to write code when you do not even know what the requirements are.
mozilly wrote: My manager says that the customers
Any larger company will expect this, and mid-size companies likely will too. Depending on the business domain, every customer might require it.
mozilly wrote: what steps should a team take to address security concerns
Obviously application security is a part of it. But so is company security.
Large companies will require 3rd party security audits. Smaller ones might also.
Steps
1 - Investigate various parts of security needed.
2 - Software security
3 - Employee training
4 - Employee access. And specifically how access is turned off when an employee exits the company and who has access to what.
5 - Reviewing code for security vulnerabilities specifically, using both tools and manual review.
6 - 3rd party audits.
7 - A DOCUMENTED Security Plan for the company. That includes all of the above.
8 - DOCUMENT all of the steps taken (which would be in the Security Plan.) You will need to track where those documents live.
9 - The Security Plan must include how to DOCUMENT exceptions to the plan and solutions to problems discovered.
10 - One or more people assigned to the role of ensuring that the Security Plan is followed.
3rd party audits will likely look at all of the above.
People tend to skip 9 because they think/claim that exceptions and problems will not occur. Then when they do occur, they don't have any way to deal with them and thus end up ignoring the issue.
|
|
|
|
|
I just started working with a business that made a web application that has a Node.js/Express backend API and a React front end. The business wants to sell its software as a white-label solution to some enterprise-sized businesses. My manager says that the customers will be expecting a detailed report to convince them that our solution is "secure". I need to determine the steps to produce such a security report.
My first thoughts are to follow these steps:
- Run the npm audit command on our backend and front end projects to identify all known vulnerabilities, and then fix them according to recommended approaches I read about on the internet. This step has been done; the npm audit command shows no vulnerabilities or issues of any kind.
- We upload our code as Docker images to dockerhub.com. Dockerhub shows a list of vulnerabilities for us to address. I am currently in this step, and I have some issues which I will elaborate on further down in this post.
- Hire a 3rd party cyber security firm to test our solution. This firm will give us a report of issues to address.
That's my overall plan. However, I am currently stuck on step 2. Dockerhub is showing me MANY Critical and High priority vulnerabilities such as the following:
...etc...
According to dockerhub, there are about 100 of these types of vulnerabilities, where maybe 10% are critical, 15% are high, and the rest are medium or low. These issues look very difficult to address, because they come from modules of modules that I don't directly reference in my own software. Trying to replace these modules of modules basically means a complete rewrite of our software to not depend on ANY open source solutions at all! And I'm sure that if I were to scan packages with another type of scanner, different sets of vulnerabilities would be exposed. And I haven't even gotten to step 3 yet.
So this got me wondering...how do other organizations selling white labelled solutions go about disclosing vulnerabilities to their end clients and how do they protect themselves?
I started thinking that maybe I don't have to deal with every single security vulnerability that exists. Instead, I should only address security issues that I am confident hackers will exploit, or things that are easy to address. Then I hire a 3rd party security firm to find other vulnerabilities. Anything that's not caught by the security firm we deem "not important". And we develop some contract and service agreement that protects our business from legal action if our clients experience a security vulnerability not covered in our report?
But then, a customer will say, "But dockerhub.com clearly shows vulnerability X, and you as the seller were aware of vulnerability X, please justify to us why you did not address it." And how do we respond then?
That's what's in my head right now.
So back to my original question - what steps should a team take to address security concerns of a software that will be white labelled and sold to customers?
|
|
|
|
|
We're getting pressure from one of our customers to internationalize our software product. All of our currency and dates are handled correctly or get fixed quickly. We have about 70% of the words translated via resx files. There are also some database translations where we allow customization. All of this works.
However, since it isn't 100% complete, it came up in a discussion with management. One of the devs wants to remove the resx files and put all translations in a database table (actually 3). Curious if anybody out there has any strong opinions on whether database-only translations are better than resx or not. There are articles out there and Stack Overflow questions, but most of it is older.
Is resx still in favor? Is it a good choice? My feeling is that re-working all of the resx for some new custom format isn't a good use of our time.
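For context, the resx path in question is just the standard ResourceManager lookup with per-culture satellite files, roughly like this sketch (the base name and key are only illustrative):

using System.Globalization;
using System.Resources;

class ResxDemo
{
    static void Main()
    {
        // Strings.resx plus Strings.de.resx, Strings.fr.resx, ... compiled as satellite assemblies.
        var resources = new ResourceManager("MyApp.Strings", typeof(ResxDemo).Assembly);

        // Lookup falls back automatically: de-AT -> de -> neutral resource.
        string label = resources.GetString("SaveButton", new CultureInfo("de-AT"));
        System.Console.WriteLine(label);
    }
}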
Thanks for your thoughts.
Hogan
|
|
|
|
|
That's ... interesting.
Putting every string of text into the database is only going to slow down the entire app and, as an added bonus, give any admin users the ability to change the translations at will! Imagine that fun with a disgruntled admin!
Oh, and when the database access fails, how do you go to the database and get the localized version of the messages to explain that it can't get to the database?
It makes sense for some things, like customer data, but not for localization of the app.
|
|
|
|
|
On top of the obvious drawbacks that Dave pointed out, translations have a bad habit of being longer than their English equivalent (in many/most cases). A four letter word like "Desk" can become a 12 letter word like "Schreibtisch" in German. I read somewhere that you should expect a 15 to 30% increase when you go from English to other European languages. A user interface that is not size adjusted looks clunky with truncated fields or big empty spaces.
Mircea
|
|
|
|
|
Mircea Neacsu wrote: translations have a bad habit of being longer than their English equivalent (in many/most cases) This is most certainly true if the translating is done by a native English speaker. When I translate Norwegian text to English, the English text is frequently longer, and most certainly in the first version. I do not know the entire English language, often using rather cumbersome linguistic constructions where there is a much shorter way of expressing it. Same with the native English speaker: He doesn't know Norwegian well enough to find the short (and correct!) Norwegian translations.
Following up your example: A translator might look up 'Desk' in an English-Norwegian dictionary, finding 'Skrivebord' (a close parallel to the German word). 'Skrivebord' is long and cumbersome for daily talk; we say 'pult'. (That is, if you pronounce it with a short 'u' sound. With a long 'u' sound, the English equivalent would be 'had intercourse', although that is not very likely to appear in a user interface.) Also note that both the Norwegian terms are more specific than the English 'table': It is not a dinner table, not a sofa end table, not a set of columns, not a pedestal for a lamp, artwork or whatever. It is a worktable where you do some sort of writing ('skriving'). If you need to express that specific kind of table in English, the English term will increase in length.
Finding individual words that are longer in other languages than in English is a trivial task. So is the opposite. I have written quite a few documents in both English and Norwegian versions, and translated English ones not written by me into Norwegian. If the number of text pages differs at all, it is by a tiny amount.
On the average, that is. A user interface usually needs a few handfuls of words, some shorter, some longer than in the original language. You must prepare for those that are longer - and you don't know which those are. So when translating from any original language, not only English, to any other language, including English, be prepared for single terms, prompts etc. being 30% larger than the original. (15% is definitely too little.) Although there is a significant effect of the translator not knowing the target language well enough, the increased length may be completely unavoidable, regardless of original or target language.
|
|
|
|
|
Couldn't agree more. My point was that simply plucking text from a database and putting it in a user interface will make it look bad to the point of being useless.
Here is a horror story I've seen "in a galaxy far, far away".
Programmer who knew everything tells his team: just put all the texts you need translated between some kind of delimiters. I'm going to write a nice little program that extracts all those texts, put them in a database and pass them to the i18n team. They will just have to enter translations for those texts and Bob's your uncle, I solved all these pesky problems.
Trouble came soon after, first when they realized some words had multiple meanings. In English "port" can be a harbour or the left side of the ship, but in French "port" and "bâbord" are very different words. Translators had no clue in what context a word was used; besides, they could enter only one translation for a word. Source code also became a cryptic mess, where something like SetLabel("Density") became SetLabel(load_string(ID_452)). Some of the texts were too long, others too short; in brief, such a mess that most users gave up on using localized translations and stuck to English. But the programmer who knew everything remained convinced he had solved the problem.
Moral of the story: humans are messy and their languages too. There is no silver bullet and text in a database is very, very far from being one.
Mircea
|
|
|
|
|
I just have to add my 'horror story' from at least as long ago:
My company went to a professional translator to have the prompts for the word processor (remember that we used to call it a word processor?) translated to German. The translator was given all the strings as one list of English terms. Nothing else. No context indication.
This text processor had, of course, a function for 'Replace all -old text- with -new text-', with a checkmark for 'Manual check?'. In the German translation this came out as 'Ersetze -- mit --', and a checkbox 'Handbuch kontrollieren?'
This was discovered in time for the official release, but only a few days before.
|
|
|
|
|
This point exactly. He has continued on the research ticket and identified how many duplicate translated items we have in the system. I believe he intends to do the optimization specified above.
Hogan
|
|
|
|
|
Yes but unfortunately there are very few companies, if any (even very large) that can afford to hire 100 people fluent in living languages to work exclusively on context translations for each software project.
And keeping in mind that 100 is not even close to the number of identified living languages. But it likely is close to what one might consider a viable market.
So one just hopes that one can get by.
Mircea Neacsu wrote: Trouble came soon after, first when they realized some words had multiple meanings.
I have worked for a number of companies that had no problems using services to provide translations based on provided text.
And there are more difficult problems than just providing the context for a specific word.
Mircea Neacsu wrote: gave up on using localized translations and stuck to English.
France and Quebec (province of Canada) both have laws that basically state that a company cannot require an employee to speak/read any language except French. So if you bring in that software there the company could end up with a number of employees sitting around staring at the walls all day.
And the governments stipulate that the software they use must be in French. You can't get the contract without agreeing to that.
Mircea Neacsu wrote: became SetLabel(load_string(ID_452)).
If programming was easy they wouldn't need people to do it.
|
|
|
|
|
jschell wrote: France and Quebec (province of Canada) both have laws... I lived in Montreal for over 30 years so I know a bit about language laws in Quebec. Incidentally, I also know a bit about those in France. I cannot say more because I would run afoul of CP rules 🤐
No amount of regulation can force people to use a dysfunctional product. They will find a way to go over/under/behind those regulations. If, in your case, a database or a simple text file was good enough, more power to you.
Mircea
|
|
|
|
|
jschell wrote: And keeping in mind that 100 is not even close to the number of identified living languages. But it likely is close to what one might consider a viable market. If you cover 100 languages, you are bound to also run into a lot of cultural aspects that are not language-specific.
20 years ago, 'everyone' wanted to collect the entire internet in their databases. Archive.org is one of the (few) survivors of that craze. I was in it, and went to an international conference. Access control to the collected information was an essential issue, and one of the speakers told us that he had been in negotiations with delegates from US native groups about how to protect information that should be available only to males, or only to females. Also, some information should be available only during the harvesting season, other information only during the seeding season. The limits of either of course depended on the kind of crop.
Needless to say, the access control of the system presented by the speaker did not have sufficient provisions for these demands. He presented it as an unsolved issue. If we simply state "We can't honor such cultural restrictions - The whole world must simply adapt to our culture, accept our freedoms (and most certainly respect all our taboos)!", then we are cultural imperialists as bad as in the era of colonization.
And we are.
|
|
|
|
|
Mircea Neacsu wrote: There is no silver bullet One could use a trick we used in the 1980's, when IT books were not translated.
You learn English
Bastard Programmer from Hell
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
|
|
|
|
|
99% of my heavy criticism of computer book authors and their editors is directed towards English language textbooks. They are most certainly no better than the translated ones.
I guess that part of the problem is that major parts of the English speaking world (read: in the US of A) do not read very much any more. Their critical sense to reject (sometimes very) bad books, from a language, editorial and presentation point of view, has worn out. They do not know how to distinguish a well written book from a crappy one. So the fraction of crappy books is steadily increasing.
My impression is that the average IT textbook written in other languages (my experience is with Scandinavian languages, but I suspect that it holds for a lot of other languages) is written under much stricter editorial control, and is a lot less smudged with 'edutainment' elements, going much more directly to the point. So the number of pages is about half.
Originating in the US of A has not in any way been any guarantee for quality for an IT textbook. Quite to the contrary. When I feel the temptation to dig out my marker and my pen to clean up the text, I often think of how I could reshape this text into something much better in a Norwegian edition, half the pages. But at the professional level I am reading new texts today, the market for a Norwegian textbook is too small for it ever to pay the expenses. Making an abridged English version would lead to a lot of copyright issues.
|
|
|
|
|
trønderen wrote: Originating in the US of A has not in any way been any guarantee for quality Full stop there, as that is not just limited to books.
Learning English (not American) gives you a wider range, just as learning to write in English does. To drive that point home, our little CP community is English only.
Bastard Programmer from Hell
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
|
|
|
|
|
trønderen wrote: I guess that part of the problem is that major parts of the English speaking world (read: in the US of A) do not read very much any more. Their critical sense to reject (sometimes very) bad books, from a language, editorial and presentation point of view has worn out. They do not know how to distinguish a well written book from a crappy one. So the fraction of crappy books is steadily increasing.
I doubt the implied cause there.
It is much easier to produce (publish) a book now than even 20 years ago. And much, much easier than 50 years ago.
And it is orders of magnitude different for self publishing.
50 years ago one would need a publisher to accept the book and then an editor working for that publisher would edit it. (Not totally true but one would need much more knowledge and money to self publish then.)
Now even when that path is followed the role of the editor is less. Probably due to the publisher wanting to save costs but also because there are so many more books published.
I would be very surprised if the publishers were not seeking quantity rather than quality now. Much more so than in the past.
I suspect all of those factors have even more of an impact for 'text books'. After all just one consideration is that there is quite a bit of difference in editing a romance novel versus editing a programming language book.
|
|
|
|
|
The reduction of quality is most certainly not limited to self published books. I guess every English IT book I have bought(*) was published by what everybody would classify as highly respected publishing houses. These no longer need to spend resources on keeping the quality up, through editors and reviewers. The books sell anyway.
One thing that one could mention to explain all the talkety-talk and lack of conciseness: The entry of the PC as a writing tool. When the authors were still using typewriters, doing editing was much more cumbersome; it required a lot more work to switch two sentences around, or move a paragraph to another chapter. The first thing that happened was that authors wrote down every thought they could think of, without filtering the way they did before. The second thing was that they forgot how to use the delete key, and how to do cut and paste to clean up the structure of the text.
I guess that the cost of publishing, the process, makes up a larger fraction of the budget today. The cost of the paper is a smaller fraction than it used to be. Publishing/printing a 600 page book is not three times as expensive as a 200 page one. (Well it never was three times as expensive, but the cost of the materials made much more impact on the sales price 50 years ago.)
(*) I have got one self-published IT book - Ted Nelson: Computer Lib/Dream Machines[^], the book introducing the concept of hypertext. It was published 49 years ago, before you had MS Word for writing your manuscript. Most of it is typewriter copy, or hand written. This is probably the first IT book I'd try to save if a fire broke out in my home.
|
|
|
|
|
Right. "Time" or "lag".
Resource files are easier and faster to update versus a "resource management system" sitting on a server (IMO).
You can easily write a file parser at some point to report on your "resources".
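For example, since a resx file is just XML, a small report along these lines is about all it takes (the folder layout and the neutral file name here are made up for illustration):

using System;
using System.IO;
using System.Linq;
using System.Xml.Linq;

class ResxReport
{
    static void Main()
    {
        // Hypothetical layout: Resources\Strings.resx, Strings.de.resx, Strings.fr.resx, ...
        var files = Directory.GetFiles(@".\Resources", "*.resx");

        // Collect the resource keys defined in each file.
        var keysPerFile = files.ToDictionary(
            f => Path.GetFileName(f),
            f => XDocument.Load(f)
                          .Descendants("data")
                          .Select(d => (string)d.Attribute("name"))
                          .Where(n => n != null)
                          .ToHashSet());

        // Report keys present in the neutral file but missing from a translation.
        var neutral = keysPerFile["Strings.resx"];
        foreach (var kv in keysPerFile.Where(kv => kv.Key != "Strings.resx"))
        {
            foreach (var missing in neutral.Except(kv.Value))
                Console.WriteLine($"{kv.Key}: missing translation for '{missing}'");
        }
    }
}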
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
|
|
|
|
|
snorkie wrote: We're getting pressure from one of our customers to internationalize our software product
Basically if you want to sell in France and/or Quebec (Canada) you better be able to support French.
snorkie wrote: One of the devs wants to remove the resx files and put all translations in a database table (actually 3)
Certainly not something I would want to see happen.
The UI is going to just end up caching that every single time. What happens if someone does a browser refresh? Do you load each page or everything all at once? If you have 1,000 distinct text items on a new page, do you really want to do a pass-through cache (1,000 separate database calls)?
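To be clear about what that caching means in practice, roughly the minimum you end up writing is a per-culture bulk load kept in memory; a sketch, with made-up table and column names and an arbitrary ADO.NET provider:

using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Data.SqlClient; // or whichever ADO.NET provider is actually in use

public class DbTranslations
{
    private readonly string _connectionString;
    // One dictionary per culture, loaded once, instead of one query per string.
    private readonly ConcurrentDictionary<string, Dictionary<string, string>> _cache = new();

    public DbTranslations(string connectionString) => _connectionString = connectionString;

    public string Get(string culture, string key)
    {
        var map = _cache.GetOrAdd(culture, LoadCulture);
        return map.TryGetValue(key, out var text) ? text : key; // fall back to the key itself
    }

    private Dictionary<string, string> LoadCulture(string culture)
    {
        // Hypothetical schema: Translations(Culture, ResourceKey, Text)
        var map = new Dictionary<string, string>();
        using var conn = new SqlConnection(_connectionString);
        conn.Open();
        using var cmd = new SqlCommand(
            "SELECT ResourceKey, Text FROM Translations WHERE Culture = @c", conn);
        cmd.Parameters.AddWithValue("@c", culture);
        using var reader = cmd.ExecuteReader();
        while (reader.Read())
            map[reader.GetString(0)] = reader.GetString(1);
        return map;
    }
}

And even that still leaves cache invalidation, startup cost, and the failure cases below unanswered.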
There would of course be process (not code) considerations for how it gets into the database during normal feature deliveries. Does it end up being treated as a database update which means there are also rollback considerations if a feature fails?
Same consideration applies to all non-prod boxes also such as Developer and QA.
What happens if the database is down?
I think the 'dev' that wants this should be required to write the conversion Epic along with all of the stories and designs needed to support it. Then cost it out and present that cost to management. I suspect that it will be nixed by management, even presuming the 'dev' has the willingness and expertise to fully write out the plan. It should include, at a minimum:
- Specific analysis of why this is better.
- How UI uses it. Specifically what needs to change in the UI.
- Performance impacts
- Deployment steps for changes
- Removing the old code.
- How this will be supported with current translation service.
snorkie wrote: There are also some database translations where we allow customization.
Presumably a customer can change this. That should not be a consideration for this case. However, if a customer wants to support users with different language needs, does that existing design account for that?
|
|
|
|
|
I'm looking for a replacement for the VLC Media Player, because it does not support IPv6. Here are some of my requirements:
- Supporting both IPv4 and IPv6
- Integration to WPF
- Sixteen Cameras
- Taking Snapshots
- Support audio and video codecs ( H.264 / H.265 and AAC / MP3 )
- RTSP and HLS Stream
- Playing DVR Data
- Start/Stop
- Debug Logs
- Custom Parameters
- Custom Menu for player
- Set volume
- Disable Keyboard/Mouse input
- Player Events
- Mediaplayer Events
- Track/Stream Info
- Set Crop Rectangle
- Select channel to play the audio (e.g., Left vs Right)
I've looked at LeadTools, which is pretty good and their documentation and support are nice, but it's expensive. I've also been looking at 'ffply'.
Anyone have any suggestions?
In theory, theory and practice are the same. But in practice, they never are.”
If it's not broken, fix it until it is.
Everything makes sense in someone's mind.
|
|
|
|
|
I'm working on a WPF app that connects to SharePoint.
From time to time, upon connecting, I get a 'Server not available' exception. Sometimes I get the error while downloading data. It's intermittent and I can't seem to reproduce it. I restart the app and it works fine.
I could just catch it and retry, but I'd like to get your thoughts on the best way to deal with this.
In theory, theory and practice are the same. But in practice, they never are.”
If it's not broken, fix it until it is.
Everything makes sense in someone's mind.
|
|
|
|
|
If you're downloading "a lot of data" (or processing), you take "check points" so you don't have to "restart the app" from the beginning. This assumes the app (or "query") can restart / continue from a check point / sync point.
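A very rough sketch of the idea, assuming the items come back in a stable order and a small file is an acceptable checkpoint (every name here is made up):

using System;
using System.IO;
using System.Threading.Tasks;

public static class CheckpointedDownload
{
    private const string CheckpointFile = "download.checkpoint"; // hypothetical location

    public static async Task RunAsync(Func<int, Task> downloadItemAsync, int totalItems)
    {
        // Resume from the last successfully processed index, if any.
        int start = File.Exists(CheckpointFile)
            ? int.Parse(File.ReadAllText(CheckpointFile))
            : 0;

        for (int i = start; i < totalItems; i++)
        {
            await downloadItemAsync(i);                            // caller-supplied work for one item
            File.WriteAllText(CheckpointFile, (i + 1).ToString()); // record progress after each success
        }

        File.Delete(CheckpointFile); // finished cleanly; nothing to resume next time
    }
}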
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
|
|
|
|
|
GitHub - App-vNext/Polly[^] is "a .NET resilience and transient-fault-handling library that allows developers to express policies such as Retry, Circuit Breaker, Timeout, Bulkhead Isolation, and Fallback in a fluent and thread-safe manner."
As recommended by Microsoft[^].
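A minimal sketch of a retry policy with it; the retry count, back-off and exception type are placeholders, so adjust them to whatever the SharePoint client actually throws:

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Polly;

class Example
{
    static async Task Main()
    {
        // Retry up to 3 times with exponential back-off on a (placeholder) transient exception type.
        var retryPolicy = Policy
            .Handle<HttpRequestException>()
            .WaitAndRetryAsync(
                retryCount: 3,
                sleepDurationProvider: attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

        await retryPolicy.ExecuteAsync(() => DownloadFromSharePointAsync());
    }

    // Stand-in for the real call that intermittently fails with "Server not available".
    static Task DownloadFromSharePointAsync() => Task.CompletedTask;
}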
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|
It is an external server. One should always design and code with the expectation that external calls will fail.
They can fail for the following known intermittent reasons.
1. The server is down
2. The network is having issues.
They can fail repeatedly for the following reasons.
1. Something is incorrectly configured.
2. Your code is not correctly setting up the call. (Credentials, wrong message, invalid message, etc.)
There can be processing issues
1. The server never responds
2. The server takes too long (a timeout is exceeded.)
3. The server has an internal error and returns an error code.
4. The server returns an error code that suggests a retry is possible.
5. The server returns an error code that suggests a retry is not possible.
There can be other known/unknown reasons not in the lists above.
You can choose to implement a retry strategy but that can only work for some of the cases above.
The problem with retry strategies is the following
1. Are there situations where it must not be retried? For example, if you just attempted to update the inventory by removing 10 items (data), do you want to keep retrying that again and again for every possible error? That could be a problem if the server is in fact succeeding (the 10 were removed) but then fails when attempting to format a correct response back to you.
2. Are there situations where it will never work so retries are pointless?
One must also evaluate what retry strategies can do to the entire enterprise. For example if simple retries are in place and there is a chain of 5 services that keep retrying (service A retries B which retries C etc) what happens to the original caller while they wait?
Even more complicated what happens with timeouts? If service B has three retries at 90 seconds each and service A also looks for timeouts then a single call to B would require a minimum timeout of 270 seconds in A. And B would need 3 of those. Presuming of course that A even knows that B is using a timeout like that.
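To make those trade-offs concrete, here is a rough sketch of a bounded retry with an overall time budget. The exception type and the numbers are only illustrative, and even this only makes sense for operations that are safe to repeat:

using System;
using System.Diagnostics;
using System.Threading.Tasks;

public static class BoundedRetry
{
    public static async Task<T> ExecuteAsync<T>(
        Func<Task<T>> call,
        int maxAttempts = 3,
        int totalBudgetSeconds = 90) // the budget the *caller* is willing to wait overall
    {
        var clock = Stopwatch.StartNew();

        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return await call();
            }
            catch (TimeoutException) when (
                attempt < maxAttempts &&
                clock.Elapsed < TimeSpan.FromSeconds(totalBudgetSeconds))
            {
                // Transient and still within budget: back off briefly and try again.
                await Task.Delay(TimeSpan.FromSeconds(attempt));
            }
            // Anything else (bad credentials, invalid message, non-retryable error codes,
            // or a call that may have partially succeeded) escapes the filter and is
            // rethrown, so the caller decides what to do.
        }
    }
}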
|
|
|
|
|
You can find a solution in recommendations X.215/X.225 ...
We did have a standardized solution to this 40 years ago, in 1984. But it was too ambitious - you couldn't give IT students a homework assignment of implementing the OSI Session Layer. You could have them make a primitive implementation of, say, (too) Simple Mail Transfer Protocol, (too) Simple Network Management Protocol or (too) Simple File Transfer Protocol. Besides, the Internet standards were freely available, while OSI specs were copyrighted and expensive. So the Internet Protocol Suite won the war against OSI.
In 1984, you may say that OSI Session was overkill (considering resources available at the time). If it had been generally available for 40 years, we certainly wouldn't have considered it overkill today. There might be running implementations out there, even today, but I don't know of any.
|
|
|
|
|