|
(I brought this up as a sidetrack in a Lounge thread. Even though I ask out of curiosity only, not out of any 'need' or immediate problem, the question is on the technical side, so I'm moving it over here.)
There is no real reason why stack frames are allocated edge to edge! The stack frame head's 'previous frame' pointer could go anywhere. Stack-relative addressing never goes outside the current frame. Traditionally, with a million objects, each running its own thread, each of them is allotted a stack for its maximum call depth. This can tie up quite a few megabytes that are hardly if ever used, and most certainly not all at the same time. Never will every single thread be preempted at its very deepest nesting of calls at exactly the same time. In a non-preemptive, co-routine based system, they won't all yield (or whatever it is called in your favorite co-routine package) at maximum stack depth at the same time.
Stack frames could instead be allocated from the heap, with space occupied only as long as a function call is active. Then a given amount of RAM would be capable of handling a much larger number of threads. Especially in event-driven architectures, a large fraction of threads idle at a low stack level: they receive an event, go into a nest of calls to handle it, and then return to the low stack level to wait for the next event. A thread might do some sort of yield halfway, but the great majority of threads would be back at 'ground base', or at a moderate call nesting level, most of the time.
The argument against heap allocation is of course the processing cost. There are (/were) machines offering microcoded entry-point instructions doing stack allocation from a buddy heap, essentially unlinking the top element from the free list for the appropriate frame size. Function return links the frame back to the top of the list. This requires a couple of memory cycles for each operation. What is that on the total instruction budget for a modern-style method call?
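(To make 'a couple of memory cycles' concrete, here is a minimal sketch in C#, with an int array standing in for RAM and my own invented names - not how any particular machine or compiler does it. Entry pops the head of the free list for the frame's size class; return pushes it back.)

using System;

// Sketch only: fixed power-of-two frame sizes, freeList[x] holds the
// address of the first free block in size class x, and the first word
// of a free block holds the address of the next free block.
class FrameAllocator
{
    readonly int[] memory = new int[1 << 20];   // stand-in for RAM
    readonly int[] freeList = new int[32];      // one list head per size class (0 = empty)

    // Function entry: unlink the top element of the free list.
    public int AllocateFrame(int sizeClass)
    {
        int frame = freeList[sizeClass];
        if (frame == 0)
            throw new OutOfMemoryException("a real buddy allocator would split a larger block here");
        freeList[sizeClass] = memory[frame];    // successor becomes the new list head
        return frame;
    }

    // Function return: link the frame back onto the top of the list.
    public void FreeFrame(int frame, int sizeClass)
    {
        memory[frame] = freeList[sizeClass];    // frame's 'next' field = old list head
        freeList[sizeClass] = frame;            // frame becomes the new list head
    }
}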
Even though on a desktop PC you can just plug in another 64 GiB of RAM to satisfy stack needs, in other systems (such as embedded) this is not always possible. Yet lots of programmers recoil in horror over buddy allocation: it causes lots of internal fragmentation! 25% with binary buddies! Well, but how much can you save in stack requirements? Besides, lots of architectures demand word, doubleword or paragraph stack alignment anyway - that leads to internal fragmentation/waste as well!
If the allocation cost worries you, lots of optimizations are possible, in particular if frame allocation is the responsibility of the caller. E.g. if a function whose stack frame has a lot of unused (internal fragmentation) space calls a function asking for a small frame, the callee's frame could fit into that slack, not requiring another heap allocation. There are dozens of such optimizations.
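(One of those optimizations, sketched with made-up names just to illustrate the idea: the caller tracks how much of its over-sized buddy frame it actually uses, and a small callee frame is carved out of the slack instead of going to the heap.)

using System;

struct Frame
{
    public int Base;        // start address of the caller's frame
    public int Allocated;   // size the buddy allocator handed out (power of two)
    public int Used;        // bytes the caller actually needs
}

static class CallSupport
{
    // Returns the callee's frame base: inside the caller's slack if it fits,
    // otherwise from the frame heap (heapAlloc stands in for that path).
    public static int AllocateCalleeFrame(ref Frame caller, int calleeSize, Func<int> heapAlloc)
    {
        int slack = caller.Allocated - caller.Used;
        if (calleeSize <= slack)
        {
            int calleeBase = caller.Base + caller.Used;
            caller.Used += calleeSize;          // reserve the slack for the callee
            return calleeBase;
        }
        return heapAlloc();
    }
}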
I have never heard of any software (compiler, run-time system) allocating stack frames on the heap. I have never heard of anyone using the microcoded buddy stack frame allocation on the one architecture I know of that provides it. Is that because I am ignorant? :) Is heap stack allocation 'commonplace' anywhere? Or has it been considered and rejected? If that is the case, what were the arguments for rejecting it?
|
|
|
|
|
Lot of text there with no context.
trønderen wrote: There is no real reason why stack frames are allocated edge to edge!
In C/C++ there were library calls that allowed one to chop the stack off, i.e. reset it to a specific point. I never used it, so I don't know why it existed. Might have been performance or error handling. That call would certainly have been easier (and faster) if frames were next to each other.
However I also believe that is not the case for Java. I believe I looked at the source code for that at one point and it was just allocating on the heap.
trønderen wrote: Stack relative addressing never goes outside the current frame
False.
In C and C++ one can overwrite the frame. I have in fact done so in the past.
Since both Java and C# support native code, overwriting the frame is also possible there.
trønderen wrote: Stack frames could be allocated from the heap, with space is occupied only as long as a function call is active
That is somewhat simplistic. Allocations of any sort still require a mechanism to deallocate them.
So for example, a standard in C/C++ (at least when I looked at source code) was that local variables were allocated there - might even have been in the stack frame. Then the deallocation happened when the frame was freed on exit.
That is why they were (or are) on the stack frame. Because the deallocation is already there.
Heap deallocation on the other hand is a different mechanism.
trønderen wrote: Then, a given amount of RAM would be capable of handling a much larger number of threads.
As stated that is wrong. Memory is not the only limitation for the number of threads. Not to mention that no developer should ever consider the idea that an unlimited or large number of threads is a 'good' idea.
trønderen wrote: Even though you with desktop PCs can just plug in another 64 GiB of RAM to satisfy stack needs
Just noting that I have never seen a computer that actually allows for the full addressable range of 64 bits. Certainly no desktop PC does that. So one cannot just keep plugging it in.
And cloud servers are always limited also. Sometimes to surprisingly low limits.
trønderen wrote: (such as embedded), this is not always possible. Yet, lots of programmers recoil in horror
Because all resources, not just memory, are scarce. If you add complication to the processing model then it will require more resources. There is no way around that.
trønderen wrote: If the allocation cost worries you, lots of optimizations are possible,
The discussion however is not about me (and the general audience) but rather you.
If you want to fix it then do so.
I have in the past replaced the entire heap in a C compiler ecosystem.
Certainly you could also do the same for the stack frame allocation. So do it. I suspect you will find that the deallocation part is going to be complicated but doable.
trønderen wrote: I never heard of any software (compiler, run time system) allocating stack frames on the heap
As I suggested, when I looked I found that Java (the JVM source code) does that.
The following has a response that suggests the same thing.
https://stackoverflow.com/questions/26741925/is-frame-in-jvm-heap-allocated-or-stack-allocated
|
|
|
|
|
jschell wrote: Lot of text there with no context. What kind of 'context' are you requesting? A call stack is a general concept, employed in practically speaking all current programming languages. Do you want the context limited to one specific language? One specific hardware platform?
In C/C++ there were library calls that allowed one to chop the stack off. To reset it to a specific point. I never used it, so I don't know why it existed. I never knew of anything like that, and your reference is so unspecific (no function names etc.) that it is difficult to search for. From what you write, it sounds like a mechanism intended for threads requiring a lot of stack space during initialization, which then enter a 'working' state with small stack requirements, so the stack limit is reduced after init to make the address space above it available to other threads (started at a later time). I am guessing now, and may be completely wrong, but I can't think of many other uses that make sense (based on the information you provide).
trønderen wrote: Stack relative addressing never goes outside the current frame
False. In C and C++ one can overwrite the frame. I am not sure what you mean by 'overwriting', suspecting that you refer to something similar to out-of-bounds array indexing: If you put a somearray[1] at the very top of the stack, you may index it beyond the limit of the current frame. That is as bad as any other out-of-bounds indexing! Conceptually, you are still within the same stack frame, you are just pretending that the frame is larger than initially allocated. There is no guarantee that the space you are trying to (mis)use is available; you might hit the stack limit.
In languages with a static link (I haven't been working with any more recent one than Pascal; most newer languages don't have a static link), code of an inner function may address locations in frames lower on the stack. However, they do not do this by negative offsets (or positive, if the stack is growing downwards) from their own frame pointer. Rather, they use the static link to find the stack frame of the outer function, and do relative addressing from that frame base address - as if it was a pointer to a struct. The addressing never goes outside that frame.
Allocations of any sort still require a mechanism to deallocate it. [...] That is why they were (or are) on the stack frame. Because the deallocation is already there. Eh? What's the problem? When a method completes, it returns, whether the frame was allocated edge-by-edge or on the heap. In the former case, you adjust the top of stack register to the top of the previous frame. In the heap case, you move the freelist[x] value to the 'next' field (in the freelist interpretation of 'next') of the frame and store the address of the frame in freelist[x]. In both cases, you set the current stack frame pointer to the previous frame. I don't see any big difference at all!
trønderen wrote: Then, a given amount of RAM would be capable of handling a much larger number of threads.
As stated that is wrong. Memory is not the only limitation for the number of threads. You certainly have not explained why this would be 'wrong'. There may certainly be other limitations on the number of threads, but memory is a significant one. Not in all cases, but quite often. Other limitations may be e.g. the number/size of thread descriptors; this is highly OS / runtime system dependent, and need be neither a hard nor a significant limit.
Not to mention that no developer should ever consider the idea that an unlimited or large number of threads is a 'good' idea. It may be a bad idea, e.g. if the OS/RTS puts a hard limit on the number of descriptors, or the memory cost for each thread is large. Heap allocation may reduce the average memory cost significantly, reducing one limiting factor.
Just noting that I have never seen a computer that actually allows for the full addressable range of 64 bits. Certainly no desktop PC does that. You are certainly confusing 64 address bits with 64 gigabytes, which can be addressed with 36 bits. Current mainboards can handle up to 256 GiB.
And cloud servers are always limited also. Sometimes to surprisingly low limits. You are repeating the point I made, except that I referred to embedded systems, not to cloud servers. I used '64 Gi' not as a specific value, just to emphasize that the option to add more memory can be extremely limited compared to a plain desktop PC - most tower-cabinet PC users still have empty memory slots on the mainboard and can add more RAM. Not so in an IoT chip.
Because all resources, not just memory, is scarce. If you add complication to the processing model then it will require more resources. There is no way around that. You certainly haven't justified why unlinking a block from a freelist rather than adding to the top of stack register would 'add complication to the processing model'. It is so simple that at least one machine provided it as a single entry point instruction (and a corresponding return instruction), everything else was 100% identical. The purpose of it is to save (memory) resources.
I suspect you will find that the deallocation part is going be complicated but doable. In the ND-500 architecture, it was a single instruction covering both deallocation and return, copying the current freelist[x] into the freed frame and the frame address into freelist[x] - that is not 'complicated but doable'! (The instruction also did all the other return stuff.)
For all heap systems, you must handle splitting of large free blocks to get one of the size you want, and when space runs out, you must recombine smaller, adjacent free blocks. In buddy systems, both are fairly trivial operations. Furthermore, recombination can be done incrementally, e.g. when waiting for I/O and no thread is runnable. It is also cheap, especially binary buddy, both for the splitting (as implemented in the ND-500) and the recombination.
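(For anyone who has not seen it: the textbook binary-buddy split and recombination, sketched in C# with block addresses relative to a base aligned to the largest block size. This is the generic scheme, not the ND-500's microcode. Merging really is just 'flip one address bit and check whether that buddy is free'.)

using System;
using System.Collections.Generic;

// Sketch of textbook binary-buddy bookkeeping. freeLists[k] holds the start
// addresses (relative to an aligned base) of free blocks of size 2^k.
class BinaryBuddy
{
    readonly SortedSet<int>[] freeLists;

    public BinaryBuddy(int maxOrder)
    {
        freeLists = new SortedSet<int>[maxOrder + 1];
        for (int k = 0; k <= maxOrder; k++) freeLists[k] = new SortedSet<int>();
        freeLists[maxOrder].Add(0);                  // start with one big free block
    }

    // Splitting: take the smallest free block that is big enough and halve it
    // until a block of the requested order exists; the cut-off halves stay free.
    public int Allocate(int order)
    {
        int k = order;
        while (k < freeLists.Length && freeLists[k].Count == 0) k++;
        if (k == freeLists.Length) throw new OutOfMemoryException();

        int block = freeLists[k].Min;
        freeLists[k].Remove(block);
        while (k > order)
        {
            k--;
            freeLists[k].Add(block + (1 << k));      // upper half goes back on the free list
        }
        return block;
    }

    // Recombination: the buddy's address differs in exactly one bit, so merging
    // is "is my buddy free? then coalesce and try again one order up".
    public void Free(int block, int order)
    {
        while (order + 1 < freeLists.Length && freeLists[order].Remove(block ^ (1 << order)))
        {
            block &= ~(1 << order);                  // merged block starts at the lower address
            order++;
        }
        freeLists[order].Add(block);
    }
}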
Following has a response that suggests the same thing.
https://stackoverflow.com/questions/26741925/is-frame-in-jvm-heap-allocated-or-stack-allocated I find it slightly funny that you spend a lot of space and energy telling me how non-workable my proposal for a heap-allocated stack is, and then round it off with a reference to a post documenting that it certainly is a viable alternative and has been used for a language as widespread as Java.
|
|
|
|
|
trønderen wrote: From what you write, it may sound like a mechanism intended for threads requiring a lot of stack space during initialization,
Incorrect. C existed before threads.
The call stack existed then only for controlling the method calling process.
Even now, threads in C are more of an add-on. So threads must work within the way C does things.
trønderen wrote: suspecting that you refer to something similar to out-of-bounds array indexing:
No I wrote what I meant. You are referring to reading. I was talking about writing.
trønderen wrote: There may certainly be other limitations on the number of threads, but memory is a significant one.
Of all the resources available to a running program, memory is the one of which there is the 'most' available. Every other resource of any kind is going to be much, much smaller.
As a specific example of that, it doesn't matter if a language provides a way to create an unlimited number of threads when the OS will not. And it never will, because nothing in computing is unlimited.
trønderen wrote: You are certainly confusing 64 address bits with 64 gigabyte,
No confusion on my part. You suggested that someone could just plug in an additional 64 gig of memory. That would of course only be to a 64-bit (or higher) CPU. However, as I said, there are very real practical limits on existing computers (the physical boxes and cloud ones) which mean that your suggestion is not generally true.
trønderen wrote: I find it slightly funny that you spend a lot of space and energy to tell me how non-workable my proposal for a heap allocated stack is
I addressed specific points that you made because, as I pointed out in the very first sentence of my response, your long post seemed to ramble on quite a bit.
So for example in the first part of my response you claimed there was never any reason for doing it - so I showed that there was. I based that on my understanding of the history of programming, and more specifically of the history of the C programming language.
But in general you claimed that no one does heap-based stack allocations, and I pointed out that they do. So there is certainly nothing contradictory in my posting a link that says that.
|
|
|
|
|
trønderen wrote: In languages with a static link (I haven't been working with any more recent one than Pascal; most newer languages don't have a static link), code of an inner function may address locations in frames lower on the stack. However, they do not do this by negative offsets (or positive, if the stack is growing downwards) from their own frame pointer. Rather, they use the static link to find the stack frame of the outer function, and do relative addressing from that frame base address - as if it was a pointer to a struct. The addressing never goes outside that frame.
In the late 1970s, I had a project to optimise expression evaluation in Pascal (based on the Pascal PCode compiler). As you surmised, in the PCode compiler, accessing higher nested values from lower nested functions did skip down the stack frame back pointers (one skip per depth of nesting); but that is an implementation technique, it is not part of the language definition. I guessed that it could be improved by maintaining a vector of nested stack frame start locations to access stack specific offsets directly using this vector. This proved to be a useful optimisation for evaluating expressions. However, when I ran comparisons on a large Pascal program (my test large program was the original version of the PCode compiler) I discovered that whilst the overheads for evaluating expressions were reduced, the number of times that the test program actually accessed variables in higher scopes was so low that the overhead of maintaining the vector outweighed the advantages of having the vector.
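For readers who have not met either technique, the two addressing strategies look roughly like this, sketched here in C# with the stack modelled as an int array (all names are mine, nothing Pascal- or PCode-specific): the static-link walk follows one pointer per level of nesting difference, while the display (the 'vector' above) is a single indexed load but must be maintained on every call and return that changes nesting depth.

using System;

static class UpLevelAccess
{
    // 'stack' models the runtime stack; by convention (for this sketch) each
    // frame stores its static link - the frame base of the lexically enclosing
    // function - at offset 0. 'display[d]' caches the frame base for depth d.

    // Static-link chain: walk one link per level, then address relative to
    // that frame base, as if it were a pointer to a struct.
    public static int ViaStaticLink(int[] stack, int currentFrame, int levelsUp, int offset)
    {
        int frame = currentFrame;
        for (int i = 0; i < levelsUp; i++)
            frame = stack[frame];                 // follow the static link
        return stack[frame + offset];
    }

    // Display: one load, but the display must be updated whenever a call or
    // return changes which frame is current for a given depth.
    public static int ViaDisplay(int[] stack, int[] display, int depth, int offset)
    {
        return stack[display[depth] + offset];
    }
}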
|
|
|
|
|
Branching stacks aka spaghetti stacks are used in the implementations of some functional languages. They work basically how you suggest they could work, plus the added detail of allowing "sibling frames" to exist simultaneously.
Go supposedly used to use segmented/split stacks (which also work roughly as you suggested, not on a per-frame basis exactly but more as a list of blocks), but later apparently switched to just copying the whole damn thing to keep it contiguous (which still leaves "the stack(TM)" as essentially a bunch of arraylists that exist on the heap). Other userspace-threading solutions may use similar techniques for their stacks for "fake threads", but I don't really pay attention to what happens in that space, hell I don't really know what Go is doing either, I just remembered that it did something funny and spent 5 minutes on google to look it up.
By the way, the way stacks normally work is by reserving a bunch of contiguous virtual address space and lazily committing pages of actual physical memory only as needed. Which is why large stack allocations involve a call to _chkstk, to ensure that every page is touched in order, to avoid missing the guard page. So already, with the stack working the way it normally does, you're probably not paying the full memory cost for each stack. But there is no mechanism to shrink the stack. Once a page has been touched, you keep it.
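(The same behaviour is easy to poke at from C#, by the way - the exact probing sequence is up to the compiler/JIT, but a large stack allocation gets the page-by-page touch treatment for the same guard-page reason _chkstk exists:)

using System;

class StackProbeDemo
{
    static void Main()
    {
        // 64 KiB spans many 4 KiB pages; the JIT emits probes that touch the
        // pages in order so the guard page is hit and the OS commits more
        // stack - the managed counterpart of the _chkstk call a C compiler
        // inserts for large locals. Once touched, those pages stay committed.
        Span<byte> big = stackalloc byte[64 * 1024];
        big[0] = 1;
        big[big.Length - 1] = 1;
        Console.WriteLine("Touched 64 KiB of stack.");
    }
}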
|
|
|
|
|
If you do dynamic allocation of physical pages to a stack, each stack has a minimum memory cost of one RAM page (typically 4 KiB), and grows in similar increments. If the typical thread nests deeply, and you have a few threads, this is OK, but if you rather go for tiny, lightweight threads and lots of them, 4 KiB granularity might lead to a lot of internal fragmentation.
One disadvantage: if you start new threads very frequently, and each start operation leads to a stack page fault, causing allocation of a new physical page and updating of MMS tables, then the cost of starting new threads, as well as the delay before the new thread is running, will increase noticeably, to phrase it politely.
Btw: I have seen such 'on demand' allocation of physical memory pages not just for stacks, but also for heap memory.
Small association: A long time ago, CPU architects were arguing whether stacks ought to grow upwards or downwards. One of the downwards arguments was that if you put the stack at the bottom of the address range, a stack overflow could be detected by an attempt to address below address zero, by the sign bit. If that logic was still in use, it would prevent an 'on demand extension' of the stack.
|
|
|
|
|
trønderen wrote: One disadvantage: If you start new threads very frequently, and each start operation leads to a stack page fault, causing allocation of a new physical page and updating of MMS tables, then the cost of starting new threads, as well as the delay before the new thread is running, will increase noticeable, to phrase it politely.
That however is an architecture/design problem. Not a technical one.
Computers have limits and always will have limits. So one can never create an architecture/design that is based on unlimited resources. In fact, one should plan on smaller-than-actual resources to leave room for unexpected problems.
|
|
|
|
|
I have been working on a WPF app for a client for 3 years. They have someone on site who verifies changes I make.
What I've been doing up to now is making a copy of the SQL database on their server, restoring it to a test DB, and applying any SQL changes to it. I then copy all test files into a test folder on their system. They have a Test Mode icon on the desktop which allows them to run the test environment.
Once all changes are verified I repeat the above for their Production environment.
This has worked well, but I'd like to have a way to test different changes independently. A test environment for each branch would be nice, but that means duplicating what I wrote above for each new branch. Something really easy to set up and take down as branches change would be great.
I'm open to ideas.
|
|
|
|
|
The above is not all that clear. Perhaps because the process has not actually been defined.
Kevin Marois wrote: What I've been doing up to now is
This is a 'deployment' or a 'release'.
Kevin Marois wrote: A test environment for each branch would be nice,
Ok but what does that have to do with a 'deployment'?
Generally a branch is used to work on a feature/fix/etc. So let's say right now there is a new feature and two bug fixes that you need to work on.
I am going to presume you are using source control. So locally you can work on each as a branch. When a branch is complete you merge it to the main branch in source control. You only 'deliver' the main branch. So if you have finished the feature and one bug fix when you decide to deliver, then you use the main branch to do that. The other bug fix is in a branch that is not complete and not merged.
Items in the above
1. Depending on how you deliver, you might need to rebuild the main branch. This is a process step that you would need to do. You should probably have an actual list of steps. An important step is that you should always extract from the main branch even if you think it is already there.
2. SQL, or however you update the database, should be checked into source control also.
Kevin Marois wrote: now is making a copy of the SQL database on their server,...
Steps for testing
1. Create test database. Specific steps are irrelevant.
2. Apply NEW updates to the test database.
3. Deploy NEW application to test folder. Specific steps are irrelevant.
4. Use the Test Mode icon on the desktop to test.
Steps for production
1. Apply NEW updates to the real database.
2. Deploy NEW application to real folder. Specific steps are irrelevant.
In the above, the two production steps match the corresponding test steps except that they apply to a different destination. So you can certainly automate that using a script (Linux or Windows). You can also create an application (installer) to do the same thing.
Don't forget potential errors. For example, someone accidentally marks a file in the real folder as read-only and so your script/installer can't copy the new file in. Might just be a matter of manually inspecting the output, but there are other ways.
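For what it's worth, a rough sketch of the 'application' variant in C# - the folder names are whatever your actual test/production paths are; the point is the error report at the end so a read-only file can't slip through silently:

using System;
using System.IO;

class Deploy
{
    // Usage: Deploy.exe <stagingFolder> <destinationFolder>
    static int Main(string[] args)
    {
        string source = args[0], dest = args[1];
        int errors = 0;

        foreach (string file in Directory.GetFiles(source, "*", SearchOption.AllDirectories))
        {
            string target = Path.Combine(dest, Path.GetRelativePath(source, file));
            try
            {
                Directory.CreateDirectory(Path.GetDirectoryName(target));
                File.Copy(file, target, overwrite: true);
            }
            catch (Exception ex)                     // e.g. the read-only file mentioned above
            {
                Console.Error.WriteLine($"FAILED: {target}: {ex.Message}");
                errors++;
            }
        }

        Console.WriteLine(errors == 0 ? "Deployment complete." : $"{errors} file(s) failed.");
        return errors == 0 ? 0 : 1;
    }
}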
|
|
|
|
|
Flyway... I'd use Flyway.
Whether using repeatable migrations or versioned + undo, you could accomplish what I'll call "feature-branched DB state management", which is what it sounds like you want.
|
|
|
|
|
I know commands and converters have been a staple of the MVVM pattern and XAML interfaces, but it seems like WinUI 3 is moving away from them, particularly in terms of the functionality offered (direct binding to functions and events) and the seeming encouragement in the documentation to use those instead of commands and converters.
I'm in the middle of refactoring a WPF application to WinUI 3, and after writing several XAML pages, it looks like it's going to be possible to avoid commands and converters altogether. So is it an MVVM sin to wholly replace commands and converters with x:Bind functions?
Randy
|
|
|
|
|
MS samples are meant to show language features. They don't "do MVVM" unless the article is about MVVM. x:Bind provides some additional type checking at design time and some performance advantages versus "Binding"; with trade-offs. Nothing to do with MVVM in particular.
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
|
|
|
|
|
Thanks, Gerry. My WPF application code was fairly consistent in its application of Binding with commands and converters, and I'd like to continue that level of consistency as I refactor it to WinUI 3.
I'd like to take advantage of the x:Bind performance improvements, but also remain consistent in the way it's used. In other words, I'd prefer to continue to wholly use Binding (to DataContext) with traditional commands and converters, wholly use x:Bind (to code-behind objects) with traditional commands and converter classes, or wholly use x:Bind with functions (no commands or converters).
This may not make sense or be possible, I just haven't gotten far enough into it to tell. So far I haven't found any showstoppers, and that's why I'm curious. But I'm also only about 10% into it.
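For what it's worth, the 'wholly x:Bind with functions' style mostly means converter and command logic becomes ordinary view-model methods that the XAML calls directly. A minimal sketch of the C# side (the class and member names are made up for illustration, not from my actual code; the XAML usages are shown in the comments):

using Microsoft.UI.Xaml;   // WinUI 3 (System.Windows in WPF)

public class MainViewModel
{
    public int ItemCount { get; set; }

    // Replaces an IValueConverter; bound with something like
    //   Visibility="{x:Bind ViewModel.CountToVisibility(ViewModel.ItemCount), Mode=OneWay}"
    public Visibility CountToVisibility(int count)
        => count > 0 ? Visibility.Visible : Visibility.Collapsed;

    // Replaces an ICommand; bound with something like
    //   Click="{x:Bind ViewModel.Refresh}"
    public void Refresh()
    {
        // reload data here
    }
}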
|
|
|
|
|
I'm working on a WPF app that will connect to Google and retrieve contacts. Right now, the Google key info is all hardcoded unencrypted into the app and is a security risk. I'd like to refactor this so that the key is not compiled in.
One idea we had was to make a backend call to our server to retrieve the key, then use it to connect to Google.
What's the right way to do this?
|
|
|
|
|
The server connects to / queries Google; the client makes the Google request via the server.
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
|
|
|
|
|
OK.
The issue is that the Security Key & Secret are stored locally in the client. They have to be passed. I'm asking how to design this for security.
|
|
|
|
|
But if the key is shared, shouldn't it be on the server? You route the queries through the server; the query runs from the server; the client never needs to see the key; only the results.
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
|
|
|
|
|
You pass a Key and Secret to the server. We're trying to avoid this.
|
|
|
|
|
Where do I say pass the key? Put the key on the server. Why do you have a server? The client queries the server; the server queries Google and whatever else, and returns the result to the client. It acts like a proxy or a firewall. If you think "2 hops" is an issue, that's another matter, and only if you benchmark it and it says so.
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
|
|
|
|
|
Gerry Schmitz wrote: The client queries the server; the server queries Google
It's a WPF app. It's calling the Google People API directly. The Secret and Key are hardcoded as constants in the C# code. The app directly queries the Google API passing the Secret and Key.
But that's what I said in my OP. We could store the Key & Secret on our server and add an endpoint to simply return them, thereby removing them from the WPF app's code. The client app would still call the Google API directly, passing the Secret and Key; it would just first, on app start, go to OUR server to get them, instead of having them in the code.
Old
- App starts
- App calls Google API, passing hardcoded Key & Secret
New
- App starts
- App calls OUR server, which returns the Key & Secret, and stores them
- App calls Google API, passing the stored Key & Secret
Again, the ultimate goal is to get the Secret & Key out of the code.
|
|
|
|
|
Wpf <-> internet <-> server <-> internet <-> Google; UPS; USPS; VISA; etc.
I have WPF apps, running as kiosks, calling into my (ASP.NET) web server (which has "no presence" other than to handle client requests), which calls multiple APIs for credit card verification, postal rates, address verification, and retrieving shipping label images; all using different accounts and passwords "stored on the server", along with "back end code" and an SQL database.
Does that help?
(Sounds like you have a simple / local "file server"; and not a remote / distributed application / database / web server).
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
|
|
|
|
|
Seems like the other thread has a miscommunication.
Your current app has the credential information in the client code. The type of credential information is irrelevant.
The credential information is hard-coded because you stated that. (That means every actual user of the client will be using the same exact credentials.)
So the other solution is to modify the code to do the following
1. Create a server API method that expects requests from the client code. The client code does NOT make a call to Google. The client code does not have the credentials.
2. The server API code uses the credentials and makes the call to Google.
3. The server code returns the result of the Google call to the client.
Note in the above that I did not specify where the server code gets the credentials from. Could be it still hard-coded but in the server code. There are other possible solutions to providing the credentials to the server code.
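Very roughly, in ASP.NET Core terms (the route, the Google URL, and the configuration key below are placeholders I made up, not the real People API surface; however the credentials are actually presented to Google - API key, OAuth client secret - the point is that they stay on the server):

using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Configuration;

[ApiController]
[Route("api/contacts")]
public class ContactsProxyController : ControllerBase
{
    private static readonly HttpClient http = new HttpClient();
    private readonly string googleKey;

    public ContactsProxyController(IConfiguration config)
    {
        // The credential lives only in server-side configuration (or a vault).
        googleKey = config["Google:ApiKey"];
    }

    // Steps 1 + 2: the WPF client calls this endpoint; the server calls Google.
    [HttpGet]
    public async Task<IActionResult> GetContacts()
    {
        // Placeholder URL - substitute whatever People API request the client was making.
        var response = await http.GetAsync(
            $"https://people.googleapis.com/v1/people/me/connections?key={googleKey}");
        string body = await response.Content.ReadAsStringAsync();

        // Step 3: return Google's result; the key/secret never leaves the server.
        return Content(body, "application/json");
    }
}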
|
|
|
|
|
I have a really strange problem with WebRequest in a ServiceStack web application (hosted by XSP on Mono). It seems that the registration of request modules works in a very strange way; I am using WebRequest to create an HTTP request, and it is failing because it was not able to find a creator for that "prefix" (HTTP).
The exception I am seeing is NotSupportedException, and I was able to track it to the fact that no creator is registered for the HTTP prefix (I am hitting https://github.com/mono/mono/blob/master/mcs/class/System/System.Net/WebRequest.cs, around line 479)
EDIT: more details: NotSupportedException is thrown by WebRequest.GetCreator, which uses the URL prefix as a key to choose which creator to return; in my case, an HttpRequestCreator. The exception is thrown because there is no creator registered for the "HTTP" prefix (actually, there are no creators at all).
So I searched around a little bit, dug into Mono sources, and found that modules are (or should be) added to the webRequestModules section of system.web in one of the various *.config files.
I looked at my machine.config file, and there it is:
System.Net.HttpRequestCreator, System, Version=4.0.0.0
Looking at WebRequest Mono sources it seems that prefixes are added from configuration(s) inside the class static constructor (not a good choice, IMHO, but still.. it should work).
To test it, I tried to add an HttpRequestCreator to system.net/webRequestModules in my web.config; this is loaded by XSP/Mono and results in a duplicate key exception (which is expected since HttpRequestCreator should be already loaded, as it is already present in machine.config).
Even stranger: if I add a mock handler for Http, like this:
bool res = System.Net.WebRequest.RegisterPrefix ("http", new MyHttpRequestCreator ());
Debug.Assert (res == false);
The assertion sometimes passes... sometimes not! (RegisterPrefix returns "false" if a creator for the same prefix is already registered; I expect it always to return false, but this is not the case! Again, it is completely random.)
When the registration "fails" (i.e., returns false because an "HTTP" prefix is already registered), then the WebRequest can create requests for HTTP. It is as if calling RegisterPrefix "wakes up" the static constructor and let it run.
I am perplexed: it seems like a race condition in the execution of the static constructor of WebRequest, but this does not make sense (the runtime protects static constructors with a lock, IIRC)
What am I missing? How could I solve or work around this problem? Is it my fault (misunderstanding or missing something), or does it look like a Mono bug, so should I submit it?
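One thing I plan to try, in case the symptom really is the WebRequest static constructor not having finished its registrations: forcing it to run once at startup, before ServiceStack handles any request. RunClassConstructor is a standard BCL call; the wrapper class and where it is called from (Global.asax Application_Start, the AppHost's Configure) are my own choices, not anything Mono- or ServiceStack-mandated.

using System;
using System.Net;
using System.Runtime.CompilerServices;

public static class WebRequestWarmup
{
    // Call once at startup, before any request handling. This forces
    // WebRequest's static constructor (which reads the webRequestModules
    // config section and registers the http/https creators) to complete.
    public static void Run()
    {
        RuntimeHelpers.RunClassConstructor(typeof(WebRequest).TypeHandle);

        // Sanity check: this should now yield an HttpWebRequest rather than
        // throwing NotSupportedException.
        var probe = WebRequest.Create("http://localhost/");
        Console.WriteLine("WebRequest ready: " + probe.GetType().Name);
    }
}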
|
|
|
|
|
Isaac Tack wrote: (it is not possible to reduce the amount of data).
That statement is unqualified.
You might not be able to reduce the total amount of data but you might be able to reduce the amount of data that is returned.
Isaac Tack wrote: I have the following ideas:
What happens if they want to turn off access for a user? Or change access? So the database is updated (delete the user, set a flag, or change associated attributes).
So now what happens with either of your ideas?
|
|
|
|
|