I use dotnet-script. No real complaints from me.
Easy to install: dotnet tool install -g dotnet-script
Easy scaffolding in VS Code (for debug support): dotnet script init
Easy to execute: dotnet script <filename>
You can skip the second step if you just want to execute stuff with no debug support.
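For anyone who hasn't tried it, a minimal .csx script might look like this (the file name and package are just examples; the #r "nuget: ..." directive is a dotnet-script feature):

// hello.csx -- run with: dotnet script hello.csx
#r "nuget: Newtonsoft.Json, 13.0.1"
using System;
using Newtonsoft.Json;

var payload = new { Message = "Hello from dotnet-script" };
Console.WriteLine(JsonConvert.SerializeObject(payload));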
|
Not usable in my situation. I have to include all libraries in the resulting .EXE and distribute a single executable. Using any "dotnet" commands will not work for me.
|
I did that a few years ago. It makes a nice scripting utility. I had to figure out which shortcuts in the IDE were not in the compiler, but that did not take long. Mine worked out well, and I used it in a production app. I added in some safeguards, like making sure the C# script had not been tampered with (lots of opportunity for a disgruntled employee to alter an existing script to wreak havoc on production systems).
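A sketch of what such a safeguard could look like, assuming the trusted hash is recorded somewhere the script author can't reach (the names here are hypothetical, not necessarily how the poster did it):

using System;

// Verify a script against a known-good SHA-256 hash captured when the
// script was approved for production.
static bool ScriptIsUntampered(string scriptPath, string trustedSha256Hex)
{
    using var sha = System.Security.Cryptography.SHA256.Create();
    byte[] hash = sha.ComputeHash(System.IO.File.ReadAllBytes(scriptPath));
    // Convert.ToHexString is .NET 5+; use BitConverter.ToString on older frameworks.
    return Convert.ToHexString(hash)
        .Equals(trustedSha256Hex, StringComparison.OrdinalIgnoreCase);
}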
|
Hmm...
I'm pretty sure that AV solutions will go berserk on that....
Who the f*** is General Failure, and why is he reading my harddisk?
|
Now I'm curious... don't tell me you provide documentation for exceptions / custom rules
|
I wish I could, but I've said as much as I can.
You know how it goes.
|
*NOW* I am curious... that sounds a bit dirty and a bit black arts...
|
Seriously, I'm going to have to do a bit of research as my present job involves *preventing* that kind of thing....
|
Dave Kreskowiak wrote: The C# script would be compiled and executed without generating an .EXE on disk. It would all be in-memory.
Pretty sure you could have done that since C# 1.0. And you can certainly do it now.
You create the code.
You compile the code into a "file" which is actually just a chunk of memory. That is the "dll".
You then run the code in that "dll".
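A rough sketch of that flow with today's Roslyn APIs (this assumes the Microsoft.CodeAnalysis.CSharp NuGet package; back in the C# 1.0 days the equivalent was CodeDOM / CSharpCodeProvider):

using System;
using System.IO;
using System.Reflection;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;

string source = "public static class Script { public static int Run() => 42; }";

// Compile the code into a "file" that is really just a memory stream.
var compilation = CSharpCompilation.Create(
    "InMemoryScript",
    new[] { CSharpSyntaxTree.ParseText(source) },
    new[] { MetadataReference.CreateFromFile(typeof(object).Assembly.Location) },
    new CSharpCompilationOptions(OutputKind.DynamicallyLinkedLibrary));

using var ms = new MemoryStream();
if (!compilation.Emit(ms).Success)
    throw new InvalidOperationException("Compilation failed.");

// Load and run the in-memory "dll"; nothing ever touches disk.
var assembly = Assembly.Load(ms.ToArray());
var run = assembly.GetType("Script").GetMethod("Run");
Console.WriteLine(run.Invoke(null, null)); // 42

(A real host would add more metadata references than just the core library.)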
|
Yep, and it was ugly and included certain restrictions on how the code had to be written.
|
Dave Kreskowiak wrote: Yep, and it was ugly and included certain restrictions on how the code had to be written.
I would need more details on how the code itself, rather than, for example, process failures, would lead to problems.
I have worked on two products in C# that did dynamic code compiling. For the actual code, there were certainly no restrictions that ever stopped what I wanted to do, or, in one case, what the many customers using the product to write code wanted to do.
I didn't try to keep it in memory, but the DLLs were loaded dynamically in both cases, so converting that part to memory would have been easy.
Granted, the entire process is "ugly", but in both cases much of what was done could not have been delivered as a product feature in a way that would have removed that requirement.
In both cases people tended to get excited and then overuse it. I have done the same with Java (at least 3 times), and that problem happens there as well. However, that is a process problem, not a code problem.
So in C#, does it have to do with actually keeping it in memory?
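The disk-based variant described above is only a few lines to load; a sketch (the path and type names are hypothetical):

using System;
using System.Reflection;

// Load a previously compiled script assembly from disk and invoke it.
var assembly = Assembly.LoadFrom(@"C:\scripts\GeneratedScript.dll");
var result = assembly.GetType("Script").GetMethod("Run").Invoke(null, null);
Console.WriteLine(result);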
|
You're thinking in technical terms.
My issues with the previous ways of doing it are more "customer" issues than anything technical.
|
Noticed that too during my escapades with VS Code and .NET Core 6.0 on Zorin OS last Thursday.
But at the end of the day I was glad I got the example working ...
|
You were able to install VS Code & the .NET Core SDK etc. on Zorin and create & compile a C# program on that OS?
Very interesting.
|
Yes, but Zorin OS Lite was apparently not a good choice for bleeding-edge things like .NET Core 6.0.
Things probably would have been easier on the newer Zorin OS full version using the Snap package manager.
BTW, in this video the new and strange ways of .NET Core 6.0 are explained:
Hello World: .NET 6 and .NET Conf - YouTube
|
Thanks for the link directly to that section of that longer video.
That was great additional info on this.
|
Super Lloyd wrote: .NET.. err.. 4.7?
The version of .NET is irrelevant; it's the compiler and language version that matters. The compiler turns local functions into code that would work in pretty much any version of .NET: either static functions, instance functions, or functions on a closure class, depending on what you've referenced in the local function.
E.g.:
void Foo()
{
    int Bar() => 42;
    Console.WriteLine(Bar());
}
becomes something similar to:
[CompilerGenerated]
internal static int <Foo>g__Bar|0_0()
{
    return 42;
}

void Foo()
{
    Console.WriteLine(<Foo>g__Bar|0_0());
}
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
I think they are going after two main areas: being more like Python (the REPL approach) and being more like Node (see the new ASP.NET 6 project templates).
Eusebiu
|
BASIC -> QBASIC -> VisualBasic -> C# -> BASIC -> ... VB7?
GCS d--(d-) s-/++ a C++++ U+++ P- L+@ E-- W++ N+ o+ K- w+++ O? M-- V? PS+ PE- Y+ PGP t+ 5? X R+++ tv-- b+(+++) DI+++ D++ G e++ h--- r+++ y+++* Weapons extension: ma- k++ F+2 X
|
That's a simple continuation of the "pay for play" philosophy. While the unavailability of the Main function is a thing I really hate about Python (how the hell am I supposed to know where complex code starts operating?), its absence is a huge win for small code bases.
Don't get me wrong: for a kLoC of code spread across 4 or so different modules, the lack of structure which this particular C# template brings to the table would be a bloody nightmare (which is why I'm not using this style for my kLoC multi-module project). But for something of only mild complexity, it's a win.
Boilerplate code, like any other overhead, starts paying off eventually, but if you have something not nearly big enough for that overhead to pay off, low-overhead alternatives rule.
Take file systems as an example. NTFS (or ext, if you're so inclined) is by orders of magnitude more advanced than FAT. Yet FAT (be it FAT32 or exFAT) has its own raison d'être, which is low requirements and low overhead.
PS: the part that you highlighted, namely local functions, is older than .NET 6. They started with C# 7.0, which started its life with .NET 4.x.
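For reference, a sketch of the two styles being discussed; the top-level form is what the .NET 6 templates now generate, and the classic form is shown commented out, since a file can only have one entry point:

// .NET 6 template style: top-level statements, no explicit Main.
// The compiler synthesizes the entry point.
using System;

Console.WriteLine("Hello, World!");

// Classic style, same program:
// internal class Program
// {
//     private static void Main(string[] args)
//     {
//         Console.WriteLine("Hello, World!");
//     }
// }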
|
There are a number of shorthand changes (but little innovation) that have been made to C# over the years that are of limited or questionable value. It is a good idea to test out these shorthand C# changes, then look at what the compiler does with them by examining the generated MSIL. As one example, doing exactly that is why I no longer use "using" for IDisposable objects.
If you like a particular shortcut, use it. But my advice is to at least know what the compiler does with it. In the case of the OP, just make your own Main() and go with it.
|
Could you expand on what you meant by the using example? From what I can see, these end up equivalent:
using (SomeResource res = new SomeResource())
{
}

expands to:

SomeResource res = new SomeResource();
try
{
}
finally
{
    if (res != null)
        ((IDisposable)res).Dispose();
}
which seems right to me.
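For what it's worth, the newer using declaration (C# 8) lowers to that same try/finally shape, just scoped to the end of the enclosing block; a quick sketch (the method here is hypothetical):

using System;
using System.IO;

static void PrintFirstLine(string path)
{
    // "using declaration": no braces; Dispose() runs via the same
    // try/finally lowering when execution leaves the enclosing block.
    using var reader = new StreamReader(path);
    Console.WriteLine(reader.ReadLine());
}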