|
It's not undefined behavior yet. Almost every compiler just passes the unchecked nullptr reference to the called function, which goes off the cliff if it uses the member selection operator. The called function can therefore protect itself in the way described, and fail gracefully if it wishes.
|
Greg Utas wrote: It's not undefined behavior yet.
It is:
A reference shall be initialized to refer to a valid object or function. [Note: in particular, a null reference cannot exist in a well-defined program, because the only way to create such a reference would be to bind it to the “object” obtained by dereferencing a null pointer, which causes undefined behavior. As described in 9.6, a reference cannot be bound directly to a bit-field.]
Just because it works today on your set of compilers does not mean it will continue to work tomorrow, when, for instance, the standard introduces contracts or a compiler implementer decides to optimize based on that assumption. Compilers already take UB as a license to optimize, giving language users all kinds of WTF moments.
The most famous case of a similar nature is probably this one in Linux: Fun with NULL pointers, part 1 [LWN.net]
The point is you cannot reason about UB.
|
I agree that the naughty code should be trapped, but most compilers are also naughty.
|
That's execrable if they're not trapping null references.
|
In section [dcl.ref] of the C++ Standard: A reference shall be initialized to refer to a valid object or function. [ Note: in particular, a null reference cannot exist in a well-defined program, because the only way to create such a reference would be to bind it to the “object” obtained by dereferencing a null pointer, which causes undefined behavior. As described in 9.6, a reference cannot be bound directly to a bit-field. —end note ]
Yes, compilers typically implement references as "pointers with object semantics". This works fine when the reference is valid, but breaks the C++ programming model for a null reference. As I said before, relying on undefined behavior is a bad idea.
I agree that your code will work for many (most? all?) C++ compilers, but that does not mean that it is good code. I reiterate that checking for a program that breaks the C++ programming model should not be done within the bounds of the C++ programming model; a lower-level mechanism must be used.
EDIT: Beaten to the draw by Mladen
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
Daniel Pfeffer wrote: EDIT: Beaten to the draw by Mladen
Language lawyers are not much different from ambulance-chasing lawyers
|
It is good code if it fails gracefully under some compilers, especially the one actually being used. But I agree that a lower-level mechanism is also needed.
|
When is a reference not a pointer? What comes first? The variable, the reference or the pointer?
It was only in wine that he laid down no limit for himself, but he did not allow himself to be confused by it.
― Confucian Analects: Rules of Confucius about his food
|
Pointless. Used by inferior practitioners.
|
PIEBALDconsult wrote: Pointless
So are circles, but I still use them.
|
I suspect enabling this will wreak havoc on all the entity models with string properties where the backing field in the DB is nullable.
|
That's probably one of the reasons it's an option and not default.
Entity framework will of course be updated to support it though.
Wrong is evil and must be defeated. - Jeff Ello
Never stop dreaming - Freddie Kruger
|
It will close the gap between C# and the database. Both will be able to deal with nullable and non-nullable types (as they already can for the C# value types), i.e. fewer hidden traps. So once the code is updated, you will know whether something can be null right from the types in your C# code, and your compiler will be able to point out that you are an idiot, instead of you having to run the code and learn it the hard way.
Of course it will require a code change to update any existing C# entities, just as you need to update the rest of the code.
I do not think of enabling nullable checks as "enabling some more warnings". I see it as a late attempt to fix what I can only describe as a huge mistake in the C# type system, by splitting nullable off from the type, just as databases did decades ago (well, Oracle did it wrong for strings, but still).
Unfortunately it comes so late that it still needs to live with the crappy code that isn't using nullable checks, so you will still need to throw ArgumentNullException from public types.
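That last point can be sketched briefly (the class and method names below are invented for illustration): with `#nullable enable`, the signature states intent, but a public method still needs a runtime guard, because callers compiled without nullable checks can pass null anyway.

```csharp
#nullable enable
using System;

public static class Greeter
{
    // Non-nullable parameter: NRT-aware callers get a compile-time
    // warning if they pass a possibly-null value...
    public static string Greet(string name)
    {
        // ...but nullable-oblivious callers can still pass null at
        // runtime, so the guard remains necessary on public surfaces.
        if (name is null) throw new ArgumentNullException(nameof(name));
        return $"Hello, {name}!";
    }
}
```

So the annotation and the exception are complementary: one catches mistakes at compile time, the other defends against code that opted out of the checks.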
|
Sounds like a great addition to me.
And if I have understood it correctly there is no actual addition of nullable reference types.
What you get is an option where the compiler looks for possible null reference errors.
So when null references are EXPECTED you need to mark those fields as nullable to not get a warning.
Wrong is evil and must be defeated. - Jeff Ello
Never stop dreaming - Freddie Kruger
|
I didn't / don't get it either.
If all your function does is return an object, what do you do when there's a problem?
I return null instead of throwing an exception (which seems to be the knee-jerk response).
New LINQ queries? We have FirstOrNull(), LastOrNull(). Will it now be FirstOrNullOrNotNull()?
It was only in wine that he laid down no limit for himself, but he did not allow himself to be confused by it.
― Confucian Analects: Rules of Confucius about his food
|
Returning null: everything is fine, but the stuff you asked for does not exist, which is expected to happen.
Throwing an exception: there is a problem that is stopping me from getting the stuff you asked for, or it is really freaking weird that it is missing; your data model probably exploded.
Two VERY different situations, and if you handle them both the same way... ehh... well... yikes.
Unfortunately it is not always clear-cut which is best, so you often end up wishing the API you use did "the opposite" (Queue.Dequeue, I am looking at you; it's not a freaking problem that a queue is empty, that is very often the desirable state). Some APIs have both (TryDequeue would be nice).
But at least now I can look at the return type and know right away if whatever I call can return null or not.
I don't follow your comment on LINQ. I think you have a misunderstanding of the nullable type system. All that will change is that the return type of FirstOrNull becomes nullable, so if it is strings, FirstOrNull's return type will be string? instead of just string.
Normally the compiler is on top of it and will not bother you at all if you check for null before assigning it to a non-nullable string. If the compiler bothers you, maybe ask yourself whether you really want to proceed with code the compiler's type system can't understand. If you do, simply override the null check with a !.
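As a hedged sketch of that flow analysis (note: in actual LINQ the method is FirstOrDefault, not FirstOrNull; the helper class and method below are invented for illustration):

```csharp
#nullable enable
using System;
using System.Linq;

static class LinqNullDemo
{
    // Returns the length of the first word starting with `prefix`,
    // or -1 when no such word exists. Under NRT, FirstOrDefault on a
    // sequence of string gives us string?; the explicit null check
    // satisfies the compiler's flow analysis.
    public static int FirstMatchLength(string[] words, string prefix)
    {
        string? match = words.FirstOrDefault(w => w.StartsWith(prefix));
        if (match != null)
        {
            string definitely = match;  // no warning: the check was seen
            return definitely.Length;
        }
        return -1;
    }
}
```

Assigning `match` to a non-nullable string without the check would produce a warning; with the check, the compiler tracks that `match` is non-null inside the if.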
|
I've enabled them, and I quite like them.
Rob Philpott wrote: Nullable reference types (they're nullable already surely?)
Yes, perhaps the name of the feature is unfortunate. It might have been more accurate to call it "non-null by default" or something, but that's a mouthful. But whatever; it is what it is, and the name is what it is, even if the name and the thing it names are slightly different.
To me what this feature is (and I think some people in this thread are misinterpreting the feature), is a way to turn a NullReferenceException that would have happened at runtime (or worse: usually doesn't happen because it's on an uncommon code-path), into a warning at compile time. There is basically no downside. If you wrote code that was already null-safe, there's usually no warning, the checker is pretty clever about your explicit null tests and control flow. A few places in the code where null is actually expected need to have the infamous question mark added to make the warning go away, no big deal.
It's not an overbearing safety mechanism like Rust's borrow checker. You can ignore warnings, and you can lie through your teeth: the expression null! is valid and essentially means "I swear that this null here is not null". The compiler will believe you. You hold all the power; the compiler just occasionally alerts you to mistakes that you make. I think that's a good deal.
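A minimal sketch of that escape hatch (the class and method names are invented for illustration): the null-forgiving operator silences the compile-time warning, but the runtime is not fooled.

```csharp
#nullable enable
using System;

static class Escape
{
    // null! suppresses the warning: we "swear" this null is not null.
    public static string PretendNotNull() => null!;

    public static bool ThrowsAtRuntime()
    {
        string s = PretendNotNull();   // no warning, yet s really is null
        try { _ = s.Length; return false; }
        catch (NullReferenceException) { return true; }  // runtime still throws
    }
}
```

In other words, the checker is purely advisory: lying to it trades a compile-time warning for the same old NullReferenceException at runtime.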
|
I first tried them in Swift, where they are called (at least to me) more appropriately: Optionals.
They basically have the same syntax and behavior, and they are neat! (Unless you force-unwrap a null value and everything explodes, as a normal null usage would.)
|
The way I see it, if you've turned on NRT, have no strings with ? at the end, and no compiler warnings, you're doing well
I use NRT all the time, since it was in the prototype stages, and I love it. The ability for the compiler to warn when you're accidentally using something that is potentially null is really useful.
Especially things that you didn't previously realise could return null!
|
How does this work?
So a string, which can normally be null, will not accept null with NRT turned on. This creates an ambiguity: a string in something like an interface (a strict definition of properties and methods) can suddenly become ambiguous depending on the NRT context.
I'm struggling to see how this works in the framework and libraries. Well-worn methods that have been around for decades return string but should now return string?. It's almost like you'd need two frameworks: one enabled for NRT and one not.
I don't get it. But I have decided to turn it on, and it'll all probably make sense soon.
Regards,
Rob Philpott.
|
Bear in mind that this is a compiler-only thing - once compiled, all strings are back to being normally nullable.
If you have a function declaration void M(string s) { }, the idea is that you can be assured that s is going to be a non-null string, because where you call that function you will be warned if you try M(null), or:
string? s;
M(s);
The only times you are expected to test for null are:
1) If you receive a string? from a function. You should check for non-null before you use it.
2) Even if you have a non-null string parameter, if that method is external-facing, another project may still send a null string.
(obviously this all works for any reference type, not just strings...)
Does that help?
|
It does - thank you. I think I should perhaps be figuring this out for myself rather than burdening others' minds! The code below shows my confusion: a simple routine that writes a text file's contents to the console. Now that I turn NRT on, my 'line' variable should become type 'string?', but ReadLine() doesn't suddenly start returning 'string?'; surely it maintains its 'string' return type.
There is a (perceived by me) mismatch between my NRT code and the non-NRT framework.
public void Test()
{
using (StreamReader s = new StreamReader(@"c:\myfile.txt"))
{
string line = s.ReadLine();
while (line != null)
{
Console.WriteLine(line);
line = s.ReadLine();
}
}
}
You know what, I think I'll try it right now...
Regards,
Rob Philpott.
|
It depends on which version of C# you're using.
In .NET 5 (or maybe even Core 3.1), StreamReader.ReadLine does return string?.
All of the core library is now annotated with ? and with attributes that assist the NRT analysis.
You'll find this a lot in reflection and other methods which used to potentially return null.
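A hedged sketch of how the earlier loop reads once ReadLine is annotated (the class and method below are invented for illustration; a StringReader stands in for the StreamReader so the example needs no file on disk):

```csharp
#nullable enable
using System;
using System.IO;

static class LineCounter
{
    // In modern BCL annotations, ReadLine() returns string?, where
    // null signals end of input. The while condition is the null
    // check the flow analysis wants to see.
    public static int CountLines(string text)
    {
        using TextReader reader = new StringReader(text);
        int count = 0;
        string? line = reader.ReadLine();   // string?: may be null at EOF
        while (line != null)
        {
            count++;                        // flow analysis: non-null here
            line = reader.ReadLine();
        }
        return count;
    }
}
```

The shape of the original loop is unchanged; only the declared type of `line` goes from string to string?, which is exactly what the annotated framework hands back.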
|
Yes, I did just try my example (under .NET 5, as it happens) and the return type magically changed. I just find it a bit strange that turning on nullable here makes something nullable over there. I notice also that 'HasValue' and 'Value' are missing, unlike with nullable value types, but I guess this is because it's a compiler thing, not a framework thing, as you say.
Thanks for your help.
Regards,
Rob Philpott.
|