I haven't touched C/C++ in a long time, but C# has a similar keyword, var. I used to use it until I recognized how unreadable it made code. It forces you to know more than you really need to know and is, frankly, a lazy way to write code.
I do still use it, but only in the lazy way of using Intellisense to figure out what the actual type is supposed to be and give me the option of replacing var with the actual type. That came in handy just last week, when using an API client library generated by Swagger codegen and the holy-sh*t-those-are-long-class-names it produced. The longest is 86 characters, and the average is about 40-45. I'm not typing those. I have to get the code working this week, not next year.
|
I use it.
Suppose you have
unordered_map<string, int> um{ {"foo", 1}, {"goo", 2}, {"boo", 42} };
I find
for (const auto & p : um)
{
cout << p.first << ", " << p.second << "\n";
}
'somewhat simpler' when compared to
for (const pair<const string, int>& p : um)  // note: the element type's key is const; write pair<string, int> and you silently copy every element
{
cout << p.first << ", " << p.second << "\n";
}
Maybe I am used to it.
"In testa che avete, Signor di Ceprano?"
-- Rigoletto
|
The only time I don't use auto is to override the type that the compiler would deduce. That's very rare, and usually about the size of a numeric type. What auto stands in for is almost always a type returned by a function, or maybe the type of a class member, so there's nothing "dangerous" about it in those situations. Someone reading the code needs to be familiar with the functions and classes being used anyway, or they're fooling themselves as to their level of understanding.
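A minimal sketch of that distinction (make_data is just a stand-in function here):
#include <cstdint>
#include <vector>

std::vector<int> make_data() { return {1, 2, 3}; }

int main()
{
    auto v = make_data();   // fine: the type comes from the function, no surprise
    std::int64_t sum = 0;   // explicit width: auto would deduce int here,
                            // which could overflow when summing large inputs
    for (const auto& x : v)
        sum += x;
    return sum == 6 ? 0 : 1;
}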
|
I use auto as much as I can.
I will still use explicit types for POD.
I also use variable names that make sense, so that I know what type the variable should be (though obviously not Hungarian notation, reversed or otherwise).
CI/CD = Continuous Impediment/Continuous Despair
|
POD?
Charlie Gilley
“They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759
Has never been more appropriate.
|
Plain Old Data.
Usually simple types like int, char, float...
CI/CD = Continuous Impediment/Continuous Despair
|
It appears to me that you may be doing much programming that doesn't really fit into a strong, static typing world. Maybe a dynamically typed language, with any variable having the type of its value (at any time) would suit your problems better.
I love the strictness of strong static typing. It makes it possible for the compiler to give me far more detailed and to-the-point error messages and warnings. When reading the code, it provides more information, making it simpler to comprehend the code.
There are situations where auto/var is required, e.g. in database operations; I am not objecting to using it in such cases. In most cases, you can extract the values to strongly typed variables. I do not leave them in untyped variables for long.
Corollary: I try to avoid deep subclass nesting. Choosing between having to inspect 4-6 superclass definitions to find the definition of a field (hopefully with a comment explaining its use) or extending a superclass with a field that is left unused for some instances, I definitely prefer the latter. (I have many times seen subclasses created for adding a single field - even with several sibling classes adding the same single field!)
Religious freedom is the freedom to say that two plus two make five.
|
I think you're mixing things up.
C++ is still strongly typed even when you use auto.
When I declare a variable with auto, it will be typed accordingly and I cannot change the type.
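A minimal sketch of what that means:
#include <string>

int main()
{
    auto n = 42;           // n is deduced as int, once, at initialization
    n = 7;                 // fine: still an int
    // n = std::string{};  // compile error: auto did not make n dynamically typed
    return n;
}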
CI/CD = Continuous Impediment/Continuous Despair
|
Maximilien wrote: when I declare a variable with auto, it will be typed accordingly and I cannot change the type.
Err...in C++?
Rather certain you can in fact change the type. Not generally a good idea but one can certainly do it.
char* s = ....;
int* p = (int*)s;
I have seen very limited situations where it provided value.
|
That's a C-like dirty hack. Useful at times.
There are also unions and variants. Nonetheless C++ remains a strongly typed programming language.
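For example, std::variant reuses storage for several types but keeps the checking - a minimal sketch:
#include <iostream>
#include <variant>

int main()
{
    std::variant<int, float> v = 42;   // currently holds an int
    v = 3.14f;                         // now holds a float; the variant tracks this
    // std::get<int>(v) here would throw std::bad_variant_access
    std::cout << std::get<float>(v) << "\n";
    return 0;
}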
"In testa che avete, Signor di Ceprano?"
-- Rigoletto
|
That does not change anything, it's a cast. It merely tells the compiler: "even though s is a char*, for this statement only, pretend it points to an int."
|
I believe that, in terms of the semantic functionality, the type is now changed.
If you have a method that takes the second type, the compiler will complain if you pass the first but not the second.
I have in fact used a char array as an integer before. At least in that case there was no definable difference between the two.
So exactly, in terms of the language, how does the cast not make it into a different type?
|
Well, think of it this way: What is a type? What do we mean when we declare the type of a variable?
We're declaring how we want the compiler to treat the data value. It's not an existential property of the variable, it's the way that we interpret the value.
So:
char* b = "ABCD";
And:
int* a = (int*)b;
We're declaring an action, not a property of the variable.
The difficult we do right away...
...the impossible takes slightly longer.
|
A char* is in reality just an index into a portion of memory. So at the machine level it has no type-ness; it can be used to address anything from a byte to a quadword. But as far as the language is concerned it only ever points to a character. When you use a cast the compiler does what can be done at machine level, but the object itself is still a char*, and any attempt to use it in any other way will raise a compiler error. If you have something like the following:
int somefunction(int* pi, int count)
{
int sum = 0;
for (int i = 0; i < count; ++i)
{
sum += *pi;
}
return sum;
}
char* ci = "Foobar";
int total = somefunction((int*)ci, strlen(ci));
The type of ci does not change at all; it is just that its value is passed to somefunction, as the cast allows you to break or ignore the rules of the language. And the result of calling that function may, or may not, make sense.
|
In your example, it should be noted that if the target CPU requires that an int have, for example, an even byte alignment, you may get an exception when trying to dereference the int pointer.
I also wondered if you meant to increment the int pointer inside the loop, in which case, at some point, you would invoke undefined behavior. Unless, of course, sizeof(int) == sizeof(char), which isn't impossible, but I don't know of any system where that might be true. Maybe a 6502 or other 8-bit system?
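As an aside, the usual way to sidestep both the alignment trap and strict-aliasing trouble is to copy the bytes instead of dereferencing a cast pointer - a minimal sketch:
#include <cstdio>
#include <cstring>

// Copy the bytes into a properly typed object instead of dereferencing
// a cast pointer: no alignment trap, no strict-aliasing violation.
int read_int(const char* p)
{
    int value;
    std::memcpy(&value, p, sizeof value);  // compilers optimize this to a plain load
    return value;
}

int main()
{
    char buffer[8] = {};
    int x = 42;
    std::memcpy(buffer + 1, &x, sizeof x); // deliberately misaligned location
    std::printf("%d\n", read_int(buffer + 1));
    return 0;
}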
"A little song, a little dance, a little seltzer down your pants"
Chuckles the clown
|
k5054 wrote: if the target CPU ... True, but hardly relevant to the point I was trying to make.
And yes, I should have incremented the integer pointer - writing (poor) code in a hurry.
|
Richard MacCutchan wrote: If you have something like the following:
For background, I have 10 years of C and 15 of C++ after that, so I do understand a bit of how it works. Not to mention wild forays into assembler, interpreters, compilers, compiler theory, and spelunking through compiler libraries. I have written my own heaps (memory management), my own virtual memory driver, device drivers, and hardware interfaces. So I do understand quite a bit about how computer languages work and how the language is processed.
I have used char arrays as ints. I have used char arrays as functions. I have used void* to hide underlying data types. I have used void* in C to simulate C++ functionality.
Richard MacCutchan wrote: When you use a cast the compiler does what can be done at machine level, but the object itself is still a char*, and any attempt to use it in any other way will raise a compiler error.
That specifically is not true.
Once a char* is cast to an int (or int*) then the compiler specifically and exclusively treats it as that new type.
The question is not how it is used but rather how it is defined to the compiler.
Richard MacCutchan wrote: And the result of calling that function may, or may not, make sense.
All of those worked because the compiler did what it was told. The cast changed the type. The compiler respected the type and it did not and does not maintain information about previous types.
A human understands that the underlying data originated from a character array.
However the compiler does what it is told. And once it is cast to a different type, it is in fact a different type to the compiler. By definition. You, the human, can use it incorrectly, but you (again the human) can use the original type incorrectly as well. That has nothing to do with the cast but rather with how the human uses it.
The easiest way, perhaps the only way, for a language to preserve type is to not allow the type to be changed at all. Java and C# do that.
Going back to what was originally said by you.
"pretend it points to an int"
The compiler is not doing that. To the compiler once the cast occurs the data is now the new type. Whether that is a problem or not is a human problem, not a compiler problem.
For the compiler to be involved in this at all the underlying data would need to keep track of the type. And it does not do that.
|
Well I disagree entirely, but I have no intention of arguing further.
|
What you have is a set of bits, commonly called a byte/octet, a halfword, a word, ...
You declare an interpretation of the bit pattern as a character.
You declare an alternative interpretation of the same bit pattern as a small integer.
You might declare a third interpretation of it as, say, an enumeration variable. You can declare as many alternate interpretations of the bit pattern as you like. The various interpretations are independent and coexistent. It is the same bit pattern all the time, nothing changes.
Unless, of course, you call a function that interprets the bit pattern in one specific way and then creates another bit pattern that can be interpreted as something resembling the first interpretation made by the function. Say, the function interprets the bit pattern as an integer, and forms a bit pattern that, if interpreted as a floating point value, has an integer part equal to the integer interpretation value of the first bit pattern. Yet even the constructed bit pattern is nothing more than a bit pattern, which can be given arbitrary other interpretations.
When you declare a typed variable / pointer / parameter, you are just telling the compiler: When I use this symbol to refer to the bit pattern, it should be interpreted so-and-so. The compiler will see to that, without making any modifications to the bit pattern, and - at least in some languages - making no restrictions on other interpretations.
A problem with some languages is that in some cases a cast will just declare another interpretation of the bit pattern, while in other cases (other interpretations), it will create a new bit pattern. If you want full control - always interpreting the same bit pattern, never creating a new one - a union is a good alternative to casting.
Besides, by declaring a union, you signal to all readers of the code: Beware - this bit pattern is interpreted in multiple ways! Casting can be done behind your back, risking e.g. that a variable with limited range (e.g. an enumeration) is given an illegal value. With a union, you will be aware of this risk.
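A minimal sketch of that signaling (note: reading the inactive member is well-defined in C; C++ formally wants memcpy or std::bit_cast, though compilers accept the union form):
#include <cstdint>
#include <cstdio>

union Bits {                 // one bit pattern, two coexisting interpretations
    float         f;
    std::uint32_t u;
};

int main()
{
    Bits b;
    b.f = 1.0f;
    std::printf("%08x\n", b.u);  // prints 3f800000: same bits, read as an integer
    return 0;
}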
Religious freedom is the freedom to say that two plus two make five.
|
trønderen wrote: while in other cases (other interpretations), it will create a new bit pattern
Keeping in mind, of course, that at least here the discussion is about C/C++ (the forum).
C++ can do that, since it supports operator overloading - but not, as far as I know, for native types.
Even with operator overloading, though, once the cast operation happens the compiler does consider that a new type is in play.
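A sketch of that distinction, with a made-up fixed-point type (the conversion operator manufactures a genuinely new bit pattern, unlike a pointer cast):
struct Fixed16 {                 // hypothetical 16.16 fixed-point type
    int raw;
    operator double() const { return raw / 65536.0; }
};

int main()
{
    Fixed16 x{ 3 * 65536 };      // the raw bits encode 3.0
    double d = x;                // conversion runs and builds a new bit pattern
    return d == 3.0 ? 0 : 1;
}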
|
I didn't consider syntactic sugar, such as operator overloading.
Alternate interpretations of a given bit pattern can be made with old style simple operators, overloaded operators, method argument preparation, explicit casting, ... The essential point is not the wordiness of the syntax, but that the bit pattern is not changed. We have just added another interpretation, regardless of which coding syntax we use for making it. (This isn't limited to C/C++ - rather, C/C++ is limited in its alternate interpretations.)
I really hate it when people 'explaining' computers say that 'Inside the computer, everything is numbers. Say, the letter 'A' is 65 inside the computer'. Noooo!!! Inside the computer is a bit pattern that has no predefined, "natural" interpretation as a number! Sure, you can interpret it numerically, and divide it by two to show that half an 'A' is a space - but that is plain BS. It is like uppercasing the value 123456789!
Sometimes it is useful to make alternate interpretations. E.g. with a 24-bit value intended to be interpreted as a color, human eyes cannot determine whether two colors are identical (and maybe the screen isn't capable of resolving 16 million colors, either). In an alternate interpretation, as a three-part RGB value 0-255, we can very easily see whether the colors are identical or not. But that doesn't mean the color 'really' is numbers - no more on the screen than on the rose in the vase next to it. Both reds are red - not 255,0,0! RGB values are 'false' interpretations (i.e. deviating from the interpretation assumed by the photo editor) to help us humans with our limited color-interpreting abilities.
Religious freedom is the freedom to say that two plus two make five.
|
In my world - close to hardware - it's important to know and understand the type. Sure, if I'm using a modern IDE with Intellisense (only one comes to mind), auto might help. But because of the proximity to hardware, we really don't use complex C++ types. Shoot, the last time I tried to use a C++ map, it was 10x slower than a simple linear search loop. I did not believe it at first...
But getting back to using auto with its Intellisense interaction: Intellisense does its thing for plain and complex types as well, so I'm not sure what the point is (other than reduced typing).
Charlie Gilley
“They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759
Has never been more appropriate.
|
As already mentioned in several answers, the auto keyword doesn't make C++ code less strongly typed or type-safe. So using auto is individual preference. In some cases (such as templates and lambdas) there is no other choice.
When auto is optional, I always use it for complicated types, like container iterators. I also like auto in container enumeration code:
for(const auto& x: my_container)
{
}
As for local variables, it depends. Sometimes we want the variable to have another type. But if the variable must have the same type as the expression, auto helps when the code is changed:
std::vector<short> v;
short n = v[0];
Consider the situation when we change the container type to int:
std::vector<int> v;
short n1 = v[0];                    // silently narrows each int element to short
auto n2 = v[0];                     // follows the container: n2 is now int
decltype(v)::value_type n3 = v[0];  // explicit, verbose way of saying the same
I find myself using auto more and more. Sometimes, when I want to see exactly what I am doing, I prefer an explicit type.
|
I clearly have a limited understanding of C++. I admittedly come from a C background, and I have embraced the general concepts of C++ (most of the 4 pillars). But I'm going to be honest here.
It seems to me that auto is fixing, or making easier to use, some of the more spurious features of C++. Just a general thought, but it gets back to my original post/question. For example, your comment "decltype(v)::value_type n3 = v[0];" means absolutely nothing to me. I'm at the level of wtf?
So I went out to the internet and read the description of decltype: "Inspects the declared type of an entity or the type and value category of an expression." I still don't know what that means. Are we off in the top 0.01% land of coding? It's okay, I found my niche long ago, but seriously, it feels like so many special features have been added that only apply to the religious fanatics of code land.
Charlie Gilley
“They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759
Has never been more appropriate.
|
I also prefer C over C++, and the decltype example was kind of a joke. Bad joke, I guess. In any case:
decltype(v) means: the type of the variable v - a vector of int in this case. The vector type has a value_type typedef, defined as T; see std::vector - cppreference.com[^]
So this ridiculous (for anyone except C++ snobs) line is translated by the compiler to int, i.e. the vector's template parameter.
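Spelled out as a compilable snippet:
#include <type_traits>
#include <vector>

int main()
{
    std::vector<int> v{1, 2, 3};
    decltype(v)::value_type n = v[0];                 // n is int: the element type of v
    static_assert(std::is_same_v<decltype(n), int>);  // confirmed at compile time
    return 0;
}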
|