|
Hmm...
I'm pretty sure that AV solutions will go berserk on that....
Who the f*** is General Failure, and why is he reading my harddisk?
|
now I'm curious... don't tell me you provide documentation for exceptions / custom rules
|
I wish I could, but I've said as much as I can.
You know how it goes.
|
*NOW* I am curious..... that sounds a bit dirty and a bit black arts.....
|
Seriously, I'm going to have to do a bit of research as my present job involves *preventing* that kind of thing....
|
Dave Kreskowiak wrote: The C# script would be compiled and executed without generating an .EXE on disk. It would all be in-memory.
Pretty sure you could have done that since C# 1.0. And you can certainly do it now.
You create the code.
You compile the code into a 'file' which is actually just a hunk of memory. That is the "dll".
You then run the code in that "dll".
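A rough sketch of those three steps (a minimal example assuming the Roslyn Microsoft.CodeAnalysis.CSharp package; the Script/Run names are invented for the illustration):

using System;
using System.IO;
using System.Reflection;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;

class InMemoryCompileDemo
{
    static void Main()
    {
        // 1. Create the code.
        string source = "public static class Script { public static int Run() => 42; }";

        // 2. Compile it into a "file" that is really just a hunk of memory.
        var compilation = CSharpCompilation.Create(
            "InMemoryScript",
            new[] { CSharpSyntaxTree.ParseText(source) },
            new[] { MetadataReference.CreateFromFile(typeof(object).Assembly.Location) },
            new CSharpCompilationOptions(OutputKind.DynamicallyLinkedLibrary));

        using var ms = new MemoryStream();
        if (!compilation.Emit(ms).Success)
            throw new InvalidOperationException("compilation failed");

        // 3. Load and run the code in the in-memory "dll"; nothing is ever written to disk.
        Assembly asm = Assembly.Load(ms.ToArray());
        object answer = asm.GetType("Script").GetMethod("Run").Invoke(null, null);
        Console.WriteLine(answer); // 42
    }
}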
|
Yep, and it was ugly and included certain restrictions on how the code had to be written.
|
Dave Kreskowiak wrote: Yep, and it was ugly and included certain restrictions on how the code had to be written.
I would need more details: how would the code itself, rather than, say, process failures, lead to problems?
I have worked on two C# products that did dynamic code compiling. There were certainly no restrictions on the actual code that ever stopped what I wanted to do, or, in one case, stopped the many customers who were using the product to write code.
I didn't try to keep it in memory, but the DLLs were loaded dynamically in both cases, so converting that part to in-memory would have been easy.
Now, the entire process is "ugly", but in both cases much of what was done could not have been delivered as a built-in product feature in a way that would have removed that requirement.
In both cases people tended to get excited and then overuse it. I have done the same with Java (at least 3 times), and that problem happens there as well. However, that is a process problem, not a code problem.
So in C#, does the issue have to do with actually keeping it in memory?
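For what it's worth, the loading side barely changes between disk and memory; a minimal sketch (the path and file name are invented for the illustration):

using System;
using System.IO;
using System.Reflection;

// Loading a dynamically compiled DLL from disk:
Assembly fromDisk = Assembly.LoadFrom(@"plugins\GeneratedRules.dll");

// Loading the same bytes without a file on disk (read from the file here only
// for the demo; they could come straight from the compiler's output stream):
byte[] rawAssembly = File.ReadAllBytes(@"plugins\GeneratedRules.dll");
Assembly fromMemory = Assembly.Load(rawAssembly);

Console.WriteLine(fromDisk.FullName);
Console.WriteLine(fromMemory.FullName); // same assembly identity either way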
|
You're thinking in technical terms.
My issues with the previous ways of doing it are more "customer" issues than anything technical.
|
Noticed that too during my escapades with VS Code and .NET Core 6.0 on Zorin OS last Thursday.
But at the end of the day I was glad I got the example working ...

|
You were able to install VS Code & dotnet core SDK etc on Zorin and create & compile a C# program on that OS?
Very interesting.
|
Yes, but Zorin OS Lite was apparently not a good choice for bleeding edge things like .Net Core 6.0.
Things probably would have been easier on the newer Zorin OS full version using the "Snap package manager".
Btw. in this video the new and strange ways of .NET Core 6.0 are explained:
Hello World: .NET 6 and .NET Conf - YouTube[^]
|
Thanks for the link directly to that section of that longer video.
That was great additional info on this.

|
Super Lloyd wrote: .NET.. err.. 4.7?
The version of .NET is irrelevant; it's the compiler and language version that matters. The compiler turns local functions into code that would work in pretty much any version of .NET - either static functions, instance functions, or functions on a closure class, depending on what you've referenced in the local function.
E.g.:
void Foo()
{
    int Bar() => 42;
    Console.WriteLine(Bar());
}

becomes something similar to:

[CompilerGenerated]
internal static int <Foo>g__Bar|0_0()
{
    return 42;
}

void Foo()
{
    Console.WriteLine(<Foo>g__Bar|0_0());
}
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
I think they are going after two main areas: be more like Python (the REPL approach) and be more like Node (see the new ASP.NET 6 project templates).
Eusebiu
|
BASIC -> QBASIC -> VisualBasic -> C# -> BASIC -> ... VB7?
GCS d--(d-) s-/++ a C++++ U+++ P- L+@ E-- W++ N+ o+ K- w+++ O? M-- V? PS+ PE- Y+ PGP t+ 5? X R+++ tv-- b+(+++) DI+++ D++ G e++ h--- r+++ y+++* Weapons extension: ma- k++ F+2 X
|
That's a simple continuation of the "pay for play" philosophy. While the unavailability of the Main function is a thing I really hate about Python (how the hell am I supposed to know where complex code starts operating?), its absence is a huge win for small code bases.
Don't get me wrong: for a kLoC of code, spread across 4 or so different modules, the lack of structure which this particular C# template brings to the table would be a bloody nightmare (which is why I'm not using this style for my kLoC, multi-module project). But for something of only mild complexity, that's a win.
Boilerplate code, like any other overhead, starts paying off eventually, but if you have something nowhere near big enough for that overhead to pay off, low-overhead alternatives rule.
Take file systems as an example. NTFS (or ext, if you're so inclined) is more advanced than FAT by orders of magnitude. Yet FAT (be it FAT32 or exFAT) has its own raison d'être, which is low requirements and low overhead.
PS: the part that you highlighted, namely local functions, is older than .NET 6. They started with C# 7.0, which started its life with .NET 4.x.
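For anyone following along, a minimal sketch of the two styles under discussion (the comparison is mine, not the poster's):

// .NET 6 console template with top-level statements: no Program class, no Main in the source.
// The compiler generates the entry point around these statements.
Console.WriteLine("Hello, World!");

// The classic equivalent that the compiler effectively produces for you:
// internal class Program
// {
//     private static void Main(string[] args)
//     {
//         Console.WriteLine("Hello, World!");
//     }
// }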
|
There are a number of shorthand changes (but little real innovation) that have been made to C# over the years that are of limited or questionable value. It is a good idea to try these shorthands out and then look at what the compiler does with them in the generated MSIL. Having done that is, as one example, why I no longer use "using" for IDisposable objects.
If you like a particular shortcut, use it. But my advice is to at least know what the compiler does with it. In the case of the OP, just make your own Main() and go with it.
|
Could you expand on what you meant by the using example? From what I can see, these end up equivalent:
using (SomeResource res = new SomeResource())
{
}

SomeResource res = new SomeResource();
try
{
}
finally
{
    if (res != null)
        ((IDisposable)res).Dispose();
}
which seems right to me.
|
In short, the using statement swallows constructor errors. Since the actual code being executed is a try … finally, why not just use try … finally (or better yet, try … catch … finally) and use your own code for capturing and logging all exceptions? And given the unpredictability of the GC, scalability is better served by following the principle, "if you create an object, clean it up when done with it". Relying on the GC and using shortcuts like the using statement are things I consider poor engineering choices in the context of the SDLC. Others may disagree, but I have yet to see a reasoned argument against my approach that ends in better software.
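A minimal sketch of the explicit pattern argued for here, reusing the SomeResource type from the earlier snippet (DoWork and Log are placeholders, not real APIs):

SomeResource res = null;
try
{
    res = new SomeResource();
    res.DoWork();       // placeholder for the actual work
}
catch (Exception ex)
{
    Log(ex);            // placeholder for the application's own capture/logging
    throw;
}
finally
{
    // "If you create an object, clean it up when done with it."
    res?.Dispose();
}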
|
Aah, ok. I can understand not wanting to use the pattern if you need to catch exceptions, or strongly expect that you will need to in the future with that class (if you're gonna have to expand the using into a try...catch anyway, you might as well go ahead and do it).
That being said, if the exceptions happen in the constructor, they will not be swallowed, since object creation happens outside of the try...finally. The only exceptions that would be swallowed are the ones that occur while using the object.
Tested using:
class Test : IDisposable
{
    public Test() => throw new NotImplementedException();
    public void Dispose() { }
}
using (Test t = new Test())
{
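    // The constructor throws before the using body is entered, so this line never runs and the exception propagates to the caller.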
    Console.WriteLine("constructor exception swallowed.");
}
|
I did a demo on this about 1-1/2 years ago, as my fellow developers didn’t see the harm, either. They did afterwards.
Since in a well designed app, the try-catch-finally is mostly copy and paste, it really does not save any meaningful development time to use the using statement.
Exception handling is a key to reducing dev and QA testing, as well as production troubleshooting. By utilizing the exception’s Data collection, the developer can capture runtime values that are very helpful in diagnosing problems in execution. I have, on many occasions, seen production troubleshooting that would have taken a day or more, shortened to minutes, by smart exception handling. In many production systems, that difference in time can mean thousands to millions in revenue losses avoided by significantly quicker resolution.
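As an illustration of the Data-collection point (the method, arguments, and keys are invented for the example):

using System;

try
{
    ProcessOrder(42, 7);
}
catch (Exception ex)
{
    // The outer handler now has the captured runtime context available for logging.
    Console.WriteLine($"{ex.Message} (OrderId={ex.Data["OrderId"]}, CustomerId={ex.Data["CustomerId"]})");
}

static void ProcessOrder(int orderId, int customerId)
{
    try
    {
        // ... the real work would go here ...
        throw new InvalidOperationException("demo failure");
    }
    catch (Exception ex)
    {
        // Attach the runtime values that make production troubleshooting fast.
        ex.Data["OrderId"] = orderId;
        ex.Data["CustomerId"] = customerId;
        throw;
    }
}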
Using is a shortcut that alleviates the burden of a developer having to remember to call Dispose(). I'd rather use developers who don't need such shortcuts.
|