As stated, that is inconsistent with what you said in the rest of your post, such as "then another user runs the app, makes a change, and saves BEFORE the first user is done". That scenario cannot happen unless there is a network.
Also, SharePoint is a teamwork collaboration tool. You can't use it without a network.
Perhaps you meant a shared file system.
But other than that, the configuration file is either per user, or there is only one for the entire company.
If per user then you store it on each computer. There is no problem.
If one per company, then since you know SharePoint exists, use it. It has a REST API, and I would be really surprised if there were not a way to store a file. Your app can provide the management, including a place for the current user to type in their SharePoint credentials (or use some other method to authenticate).
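For example, the SharePoint REST API can add a file to a document library. A minimal sketch with a placeholder site URL; authentication is omitted (depending on your setup you would attach credentials, an OAuth bearer token, and possibly a request digest header):

```csharp
// Upload (or overwrite) a shared config file in a SharePoint document library
// via the classic REST endpoint. Site URL and file name are placeholders.
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class SharePointConfigStore
{
    const string Site = "https://contoso.sharepoint.com/sites/app"; // hypothetical

    public static async Task SaveAsync(HttpClient client, string json)
    {
        var url = Site + "/_api/web/GetFolderByServerRelativeUrl('Shared Documents')" +
                  "/Files/add(url='app.config.json',overwrite=true)";
        using (var content = new StringContent(json, Encoding.UTF8, "application/json"))
        {
            var response = await client.PostAsync(url, content);
            response.EnsureSuccessStatusCode();
        }
    }
}
```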
As for the overwrite problem...
There is something wrong with your design if you expect users to be constantly updating the configuration file. It should be something that is rarely updated, and when it is updated, one specific person will be tasked with doing it. It will likely be the case that only one person even knows how to do it. So it is a non-issue.
Otherwise it isn't a configuration file. It is in fact a persisted data store. And your design should have accounted for that in the first place.
That description does not sound like a configuration file. Everything suggests you want a database but have decided you are not going to use one.
Note that all of this requires a shared data location otherwise it is pointless to even discuss this.
If you have customer data then you need to store it in one location.
If you have user data then you store it in a different location.
If there is shared data then you will need to hack a solution that is in fact a database.
Database servers handle the concurrency issue by managing all access inside a single process and by using locks, either at the table level or the row level.
You can implement something like that with a marker: a lock file that each app must look for before it attempts to write. If it is there, the user must refresh before they can update. The problem with that is that if the user's computer exits or the network goes down, the lock file is still there, so you will need to provide a way to force its removal. The granularity and layout are something you would need to design and implement.
On second thought, even this wouldn't work because a person might still write an older version of the file when committing changes.
So, either lock the file, read what you need to make a decision, and write it back, in one go.
Or, if you require some time to ponder the changes: Read lock, take note of the last modification time, read, release. Do all your pondering, preparing your changes. When changes are ready to be written: Exclusive lock, check time of last modification. If later than the one you saw the first time, read the new data, release and repeat from the pondering step. If write timestamp is unchanged, write your new data and release.
This is a file-system version of database 'optimistic locking'. With database transactions, you should always be prepared for a rollback if some other transaction modifies data you have based your calculations on. I am not saying that all database applications are prepared for rollback, only that they should be. Databases with optimistic locking favor short transactions: the shorter the time span from when data are read to the commit, the less risk of someone else making modifications in between. Your DIY optimistic file-system locking is similar: do all the preparations that you can do without reading the file, leaving the minimum work between reading and writing, to reduce the risk of a rollback. But you must be prepared for it, forcing you to read the updated data before attempting to redo the transaction.
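A minimal sketch of that read/ponder/verify/write loop in C#, assuming a single shared file; the UNC path and the applyChanges delegate are illustrative:

```csharp
// DIY optimistic locking over a shared file: note the last write time under a
// read lock, prepare changes, then re-check the timestamp under an exclusive
// lock before writing. If the file changed in between, re-read and redo.
using System;
using System.IO;
using System.Text;

static class SharedConfigWriter
{
    const string ConfigPath = @"\\server\share\app.config.json"; // hypothetical

    public static void UpdateWithRetry(Func<string, string> applyChanges)
    {
        while (true)
        {
            DateTime seenWrite;
            string current;

            // Read lock: shared read access, note the modification time.
            using (var fs = new FileStream(ConfigPath, FileMode.Open,
                                           FileAccess.Read, FileShare.Read))
            using (var reader = new StreamReader(fs))
            {
                seenWrite = File.GetLastWriteTimeUtc(ConfigPath);
                current = reader.ReadToEnd();
            }

            // "Pondering": prepare the new content with no lock held.
            string updated = applyChanges(current);

            // Exclusive lock: verify nobody wrote in the meantime.
            using (var fs = new FileStream(ConfigPath, FileMode.Open,
                                           FileAccess.ReadWrite, FileShare.None))
            {
                if (File.GetLastWriteTimeUtc(ConfigPath) != seenWrite)
                    continue; // rollback: re-read and repeat the pondering step

                var bytes = Encoding.UTF8.GetBytes(updated);
                fs.SetLength(0);
                fs.Write(bytes, 0, bytes.Length);
                return;
            }
        }
    }
}
```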
We have (as of today; there will be more) 200 devices sending telemetry into a SQL Server database at a rate of 5 rows per device per second. Each row contains 6 decimal values to be processed into another dataset/table containing the min/max/avg of these values in 1-minute intervals.
I have never worked with data lakes and similar tech, so I wonder: should I read up on storing the raw telemetry in a data lake and set up the post-processing there, or just go with my existing SQL Server and create a C# job that post-processes the data in the background myself?
So, you have data coming in from 200 devices, 5 times a second. Let's think about the problem areas here:
When you add data, at that volume, you're adding 1000 items a second. Not a problem in SQL Server.
I'm assuming it's one aggregation per device every minute, so you're looking at a read of 60K rows every minute, with a write of 200 rows.
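Taking the question's "C# job against the existing SQL Server" option, the per-minute rollup is a single GROUP BY. A minimal sketch, assuming hypothetical table and column names (Telemetry, TelemetryMinute, RecordedUtc, Value1) and showing only one of the six values:

```csharp
// Background job: once a minute, aggregate the minute that just completed.
// Table/column names are hypothetical; repeat MIN/MAX/AVG for the other five values.
using System;
using System.Data.SqlClient;
using System.Threading;
using System.Threading.Tasks;

class MinuteAggregator
{
    const string ConnectionString = "Server=.;Database=Telemetry;Integrated Security=true";

    public static async Task RunAsync(CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            // Truncate "now" to the minute boundary; aggregate the previous minute.
            var now = DateTime.UtcNow;
            var to = new DateTime(now.Ticks - now.Ticks % TimeSpan.TicksPerMinute,
                                  DateTimeKind.Utc);
            var from = to.AddMinutes(-1);

            using (var conn = new SqlConnection(ConnectionString))
            using (var cmd = conn.CreateCommand())
            {
                cmd.CommandText = @"
INSERT INTO TelemetryMinute (DeviceId, MinuteUtc, MinValue1, MaxValue1, AvgValue1)
SELECT DeviceId, @from, MIN(Value1), MAX(Value1), AVG(Value1)
FROM Telemetry
WHERE RecordedUtc >= @from AND RecordedUtc < @to
GROUP BY DeviceId;";
                cmd.Parameters.AddWithValue("@from", from);
                cmd.Parameters.AddWithValue("@to", to);
                await conn.OpenAsync(token);
                await cmd.ExecuteNonQueryAsync(token);
            }

            await Task.Delay(TimeSpan.FromMinutes(1), token);
        }
    }
}
```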
I would be tempted to invert the problem slightly. Rather than write to one destination, write to two: write the incoming messages to SQL Server and, at the same time, write them to something like Redis Cache. Then, every minute, get your data from Redis and use it to calculate the values for the other table and write those out. This approach follows (somewhat) CQRS, in that you're separating the reads and writes from each other.
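A minimal sketch of that dual-write idea, assuming StackExchange.Redis; the key naming, the rename-to-swap trick, and the one-value-per-reading simplification are all illustrative, not a prescribed design:

```csharp
// Writers push each reading to a per-device Redis list (alongside the SQL insert);
// a once-a-minute job swaps the list out and computes min/max/avg from it.
using System.Linq;
using StackExchange.Redis;

class RedisRollup
{
    static readonly ConnectionMultiplexer Redis = ConnectionMultiplexer.Connect("localhost");

    // Called on every incoming row, alongside the SQL Server insert.
    public static void Record(string deviceId, double value1)
    {
        Redis.GetDatabase().ListRightPush("telemetry:" + deviceId, value1);
    }

    // Called once a minute per device by a timer.
    public static (double Min, double Max, double Avg)? Aggregate(string deviceId)
    {
        var db = Redis.GetDatabase();
        RedisKey key = "telemetry:" + deviceId;
        RedisKey swap = "telemetry:" + deviceId + ":swap";

        if (!db.KeyExists(key))
            return null; // no readings this minute

        // Rename so writers start a fresh list while we aggregate the old one.
        // (There is a small race between the check and the rename; a real
        // implementation would wrap both in a Lua script or handle the error.)
        db.KeyRename(key, swap);

        var values = db.ListRange(swap).Select(v => (double)v).ToArray();
        db.KeyDelete(swap);

        if (values.Length == 0)
            return null;
        return (values.Min(), values.Max(), values.Average());
    }
}
```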
"The number of devices will hopefully increase 10 to a 100-fold"
You should have a very firm idea of what this increase looks like before you start making decisions. You should know the following:
1. What is the timeline for the increase? Also, will the increase be gradual or abrupt?
2. What is the maximum? If you owned the market what size would you need to support?
Typically, when I size, I do the following:
1. Document my assumptions.
2. Document my sources - where did I get the assumptions and how did I derive them.
3. I then increase the size from that by a factor of 3 to 10.
Don't attempt to create a system that will work for the next 100 years. One that works for 10 is fine.
After you have those numbers, you start looking for solutions that will handle them. Keep in mind, of course, that your architecture/design should support the larger values, but the implementation does not need to support them now. You just want to make sure the implementation does not preclude sizing it up.
I am building an application in which one part handles calendar updates and push notifications. I am wondering what approach, architectural pattern, and technologies are best suited for this scenario. I am a .NET developer; if possible, kindly guide me on the same platform.
Pushing requires that the destination is available and capable of accepting the request. That seems unlikely for "calendar" in general, since it would suggest client machines (which can be off or have no network access).
So you are going to need to refine your requirements more before you can do anything.
If the requirements are very specific, such as using a MS Exchange server, then there still is not enough detail for an "architecture". But in that case you would start with how you are going to get the updates in the first place.
If you update the target framework for an assembly, then you need to update the target for anything that references that assembly. That should only be a problem if you're not updating the target for everything, which doesn't sound like the case here.
If you start from the bottom up, then you may get errors when you first open a project that targets 4.5.x and references an assembly you've just updated to 4.7.x; but those should go away once you change that project's target framework.
If that bothers you, then it may be easiest to work from the top down; start with the applications, then the assemblies they reference, then any transitive references, and so on. Once that's done, you can work your way back up, updating the references at each level.
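If it helps to see exactly what changes: in an old-style (non-SDK) .csproj, the target framework is a single element in the first PropertyGroup, so "updating the target" means editing this value in each project as you work along the reference chain (v4.7.2 below is just an example version):

```xml
<PropertyGroup>
  <!-- Bump this in each project; referencing projects must target the same or higher. -->
  <TargetFrameworkVersion>v4.7.2</TargetFrameworkVersion>
</PropertyGroup>
```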
"These people looked deep within my soul and assigned me a number based on the order in which I joined." - Homer