Because I have a brain?
Jeremy Falcon
Jeremy Falcon wrote: Because I have a brain and I know how to use it for that rare thing called critical thinking? FTFY
(Sorry, I forgot the /s at the end of my previous comment)
M.D.V.
If something has a solution... Why do we have to worry about it? If it has no solution... For what reason do we have to worry about it?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
But they are completely different: INI files use a ";" for comments and TOML uses a "#".
Bond
Keep all things as simple as possible, but no simpler. -said someone, somewhere
Jeremy Falcon wrote: in the 90s when MS started pushing XML like crazy. It most definitely was not limited to MS! I never even experienced them as particularly dominant in the XML rush. It was always the *nix guys who insisted on all information in the system being stored as 7-bit ASCII, so any hacker would be able to modify any system data using ed.
To pick one example: Open Office XML entered the market several years before MS changed from binary formats to OOXML.
I never loved XML (and some of the tools even less than the plain XML format, e.g. XSLT). I went to a Digital Library conference around 2002, and for the first twelve presentations I visited, eleven of them made a major issue of their use of XML and all its benefits. Methinks not. But the issue was not up for discussion; it was settled, carved in stone: "XML is good for you! Always!"
After XML we went into a period of 'Every decent programmer has his own data description language'. One project I was in - and it wasn't a very big one - used four different description languages, all of which covered the same functionality. If JSON is the one to kick out all the others, it is a step forward, no matter its weaknesses. But it seems like new alternatives keep popping up all the time. Maybe JSON is The Answer this year, but I am fully prepared for it being thrown to the wolves within a couple of years.
Jeremy Falcon wrote: The irony is, all this struct talk is reminding me of C. In my first University level programming class, Pascal was the programming language. We were taught a programming discipline where all attributes relating to a phenomenon (such as a physical object) were put together in a struct. All manipulation of variables of a given struct type were done by a set of functions and procedures (those are the Pascal terms) declared together with the struct type definition. All functions / procedures should take a pointer to a struct as its first argument.
The course included creating general struct types, and including these in more specialized substruct types adding more attributes. It also included handling polymorphic types, using the variant structure facility of Pascal.
I took this first course in object oriented use of Pascal structs (no, we didn't label it as such!) in the fall of 1977.
When OO arrived (C++ in 1985), we moved the first argument - the struct pointer - to become a prefix to the function / procedure name, with a separating dot. The biggest change was the OO terminology: starting to say 'method' rather than function / procedure. Sub/superclasses, polymorphism and a general OO mindset had been in place for several years; we just didn't know them by that name.
So quite fancy use of structs for creating objects / blackboxed types has at least 45 years on its back. Back then, some of it relied on programming discipline, not compiler support - but use of structs to create distinct types, as the book author suggests, doesn't really have any compiler support at the concept level, either.
Extremely fascinating story. Thanks for sharing.
As I was reading your description...
trønderen wrote: All manipulation of variables of a given struct type were done by a set of functions and procedures (those are the Pascal terms) declared together with the struct type definition. All functions / procedures should take a pointer to a struct as its first argument.
...I was thinking, that sounds like a class (or just a half-step away) -- data encapsulation with associated functions that work on the data. Very interesting.
trønderen wrote: Back then, some of it relied on programming discipline, not compiler support
I have talked about this for a long time.
1. If you don't have disciplined devs (engineering mentality of "do the right thing"), then
2. you better have a technology that forces the discipline (example, private vars cannot be manipulated outside class).
This is also why old timers (who had to have a disciplined mindset so they didn't cause themselves problems) see a lot of the new stuff as just fluff.
Two Thoughts
1. There are people who still create total crap, even with all the tools and automated discipline we have now.
2. There were people in the past who created amazing feats of software, even though all the discipline was required to be inside them.
Quote: This is also why old timers (who had to have a disciplined mindset so they didn't cause themselves problems) see a lot of the new stuff as just fluff.
I resemble that remark.
"They have a consciousness, they have a life, they have a soul! Damn you! Let the rabbits wear glasses! Save our brothers! Can I get an amen?"
Additionally, it's not only old timers; people like me or d2k who have worked / work with limited resources (PLCs, embedded...) can be counted in too
M.D.V.
If something has a solution... Why do we have to worry about it? If it has no solution... For what reason do we have to worry about it?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
Jeremy my man, you said it all and very well indeed.
Software Zen: delete this;
Thanks buddy.
Jeremy Falcon
Is "prefer composition over inheritance" functional? I'm not sure that it is. I think it's simply another OO approach to code re-use. Personally, I'm not against a certain level of inheritance, but I much prefer composing objects for functionality.
I'm not sure if I'm missing something, but what has a sealed class to do with immutability? A class is immutable if you can't change its data; a sealed class means it can't be inherited from.
I agree on the C# function point; it annoys me how everything now needs to be functional. I chose C# for its OO properties; when I want to do functional programming I'll use F#. (It'll be a pretty cold day in hell for that to happen though.)
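Incidentally, the same distinction shows up in Java, which also has sealed types. A minimal sketch (all class names here are made up for illustration): sealed only restricts who may inherit, while immutability is about whether state can change, so a type can have either property without the other.

```java
// Sealed restricts *inheritance*; immutability restricts *state changes*.
sealed class Counter permits AuditedCounter {
    private int count = 0;            // mutable: sealed does not imply immutable
    void increment() { count++; }
    int count() { return count; }
}
final class AuditedCounter extends Counter { }

record Point(int x, int y) { }        // immutable: final fields, no setters

public class SealedVsImmutable {
    public static void main(String[] args) {
        Counter c = new Counter();
        c.increment();                           // sealed, yet freely mutated
        System.out.println(c.count());           // 1
        System.out.println(new Point(1, 2).x()); // 1, and can never change
    }
}
```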
Chris Baker 2021 wrote: Is "prefer composition over inheritance" functional? In functional programming that concept is talked about a lot. I mean a lot. Mainly because in purely functional programming you don't have inheritance. So, when I speak of composition, I'm specifically referring to functional composition.
Function composition (computer science) - Wikipedia
There's also object composition in OOP-land. Not really sure which one came first though...
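For anyone following along, function composition is just chaining functions so one's output feeds the next. A small Java sketch (names arbitrary) using the standard Function.compose:

```java
import java.util.function.Function;

public class ComposeDemo {
    static Function<Integer, Integer> doubleIt = x -> x * 2;
    static Function<Integer, Integer> addOne = x -> x + 1;
    // addOne.compose(doubleIt) applies doubleIt first, then addOne
    static Function<Integer, Integer> composed = addOne.compose(doubleIt);

    public static void main(String[] args) {
        System.out.println(composed.apply(5)); // 2*5 + 1 = 11
    }
}
```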
Jeremy Falcon
Jeremy Falcon wrote: Not really sure which one came first though
I'm reasonably sure functional composition came first. I think Lisp predates the OO paradigm by quite a bit. And if I remember correctly, in the early days, Lisp was purely functional.
Keep Calm and Carry On
Thanks for that. That's what I thought too but wasn't sure.
Jeremy Falcon
There were languages (in this case, CHILL might be an example) that allowed you to define incompatible types ('modes' in CHILL lore):
NEWMODE Weight = FLOAT, Distance = FLOAT;
DCL amountOfApples Weight, LondonToNewcastle Distance;
amountOfApples and LondonToNewcastle both have all the properties of a floating point value, but they cannot be added, multiplied, compared, ...
This is a much simpler and more obvious way to get the protection the book writer is aiming at. Implementing it in the compiler should be a trivial matter, and the runtime cost 0.0. Checking type compatibility between primitive types is a pure compile-time matter. (And I assure you: this one will not make any impact on compile time.)
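The NEWMODE idea can be approximated in modern languages with thin wrapper types. A hedged Java sketch using records (type names reused from the CHILL example; the plus method is my own addition) - mixing the two types becomes a compile error, and the type check itself happens entirely at compile time:

```java
// Weight and Distance both wrap a double, but are distinct, incompatible types.
record Weight(double value) {
    Weight plus(Weight other) { return new Weight(value + other.value); }
}
record Distance(double value) {
    Distance plus(Distance other) { return new Distance(value + other.value); }
}

public class Modes {
    public static void main(String[] args) {
        Weight apples = new Weight(12.5);
        Distance trip = new Distance(450.0);
        Weight total = apples.plus(new Weight(3.0));
        // apples.plus(trip);   // does not compile: incompatible types
        System.out.println(total.value()); // 15.5
    }
}
```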
General lament: There are so many grains of gold in old, phased-out technology. We should spend much more time checking if a problem already has an old, forgotten, but totally satisfactory solution, before we design a new one.
There are too many Magpies in software development. Cannot resist anything shiny and new!
And too many who instead of offering their opinion for critique, seek to impose it as the New Standard.
I've been working a lot lately with Spring Webflux/Reactor and they liberally use the Duration class for any time specs.
//so instead of
long ticks
long ms
long s
//etc, etc, you see
Duration t
//and you create values using stuff like
Duration.ofSeconds
Duration.ofMillis
By not obsessing over primitives, they made it so that all methods that use times can accept any time. You don't have to constantly remind yourself what the context for that time value is (e.g. seconds, milliseconds, etc), because the method doesn't specify the context, you do. So I love the idea of better contextualizing values beyond their strict storage type. As long as there's a useful context that adds value.
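A tiny illustration of that point (the describe method is made up, but Duration.ofSeconds and Duration.ofMillis are the real java.time API): the method specifies no unit, so every caller picks their own.

```java
import java.time.Duration;

public class TimeoutDemo {
    // The method doesn't care which unit the caller thinks in.
    static String describe(Duration timeout) {
        return timeout.toMillis() + " ms";
    }

    public static void main(String[] args) {
        System.out.println(describe(Duration.ofSeconds(2)));   // "2000 ms"
        System.out.println(describe(Duration.ofMillis(1500))); // "1500 ms"
    }
}
```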
From your example, I think an Angle abstraction that handled both radians and degrees could prove useful in a similar manner to Duration , for example. As given, I'm not sure abstracting a double to an Angle solely to remove the primitive is a good pattern though. My assumption is that the intention is to force the developer to explicitly contextualize the double value, but the thing is if the developer didn't care about the context before, they aren't going to care now. They'll just wrap the double they have and move on (e.g. ex.handleAngle(new Angle(someDoubleThatIsntAnAngle)) ). Elevating a primitive in this way doesn't actually achieve anything that variable naming and/or named arguments couldn't already do. Just having a nondescript Angle with a double size property does nothing to further describe a double angle parameter. There has to be more sauce to it to make the abstraction worth it in my opinion.
That's a nice example of a good use of creating types for the parameters and it makes sense.
Also, I'm just at the beginning of the author's example, and it seems he is taking it much further, so it probably isn't that the author is actually saying "well, just wrap all those primitives in structs" but rather building the case for it as he continues his longer example.
I was just astonished to see this "newer" idea of wrapping primitives like that.
I will continue reading the book because it is making me think differently, and the author's point is to make "more readable" code too; any hints toward that always go a long way.
Thanks for your interesting post which really adds to the conversation.
Might wanna check my new, new reply. I'm being annoying in it.
Jeremy Falcon
That's awesome! I've been learning universal algebra and category theory recently for a similar purpose. Having a new perspective on things really opens up your problem-solving ability. I feel like I'm less of a hammer looking at everything like a nail.
Realistically though, how often do you need to contextualize inputs like that? If it's external user input, it should always be sanitized first. Which means you can transform any exceptions in that layer. If it's internal user input, how often do you really change contexts like that in practice?
Don't get me wrong, nothing against structs as a param, but having said logic to handle the contextualization in every last routine that uses it (it's for inputs) isn't ideal.
I'd argue structs are useful for abstracting complex data types only, irrespective of the context in which they are used.
Jeremy Falcon
I agree it's easy to go overboard with it. That's why I mentioned I feel like there has to be "more sauce" to the abstraction - e.g. an abstraction that abstracts multiple parameters, an abstraction that adds functionality, etc.
As a more concrete example with the Angle idea - you have Radians and Degrees as options (so Angle is basically an Either sum type) and Radians and Degrees are isomorphic.
Why is that useful? Here's some pseudo-code:
class Angle<T> = Radian<T> | Degree<T>
(+) :: Angle a -> Angle b -> Angle c
(+) x = match x
| Radian z => \y -> z + toRadian(y)
| Degree z => \y -> z + toDegree(y)
public double addRightAngle(double degrees) => degrees + 90; //Fails if you pass in radians
public double addRightAngle(double radians) => radians + (90*(pi/180)); //Fails if you pass in degrees
public Angle<double> addRightAngle(Angle<double> angle) => angle + new Degree(90); //Succeeds in all cases
public Angle<double> addRightAngle(Angle<double> angle) => angle + new Radian(1.5708); //Succeeds in all cases
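For what it's worth, that pseudo-code maps fairly directly onto Java 21 sealed interfaces and records. This is only a sketch under that assumption; everything not named in the pseudo-code is invented for illustration:

```java
// Angle is a sum type: Radian | Degree. plus() normalizes the right operand
// to the left operand's unit, mirroring the pattern match above.
sealed interface Angle permits Radian, Degree {
    double radians();
    default Angle plus(Angle other) {
        return switch (this) {
            case Radian r -> new Radian(r.value() + other.radians());
            case Degree d -> new Degree(d.value() + Math.toDegrees(other.radians()));
        };
    }
}
record Radian(double value) implements Angle {
    public double radians() { return value; }
}
record Degree(double value) implements Angle {
    public double radians() { return Math.toRadians(value); }
}

public class AngleDemo {
    // Succeeds whether the argument is a Radian or a Degree.
    static Angle addRightAngle(Angle a) { return a.plus(new Degree(90)); }

    public static void main(String[] args) {
        System.out.println(addRightAngle(new Degree(45)));
        System.out.println(addRightAngle(new Radian(0.0)));
    }
}
```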
How useful this is depends on how important angles are to your code-base, but I think abstracting inputs is very powerful. Another example is if you're doing functional programming and have a function that accepts impure inputs like a database function. You can group all impure inputs together into a tuple and shift that tuple to the right of the parameter list. This effectively turns your function into a pure function that returns an impure Reader with that environment tuple as input and the result as output (i.e. "functional" dependency injection). Makes a lot of things easier especially unit testing. Credit to Mark Seemann for that insight[^].
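A rough Java rendering of that last idea (the Database interface here is hypothetical): the pure function returns a Reader-like Function<Database, Boolean> awaiting the impure environment, so a unit test can supply a fake one.

```java
import java.util.function.Function;

public class ReaderDemo {
    interface Database { boolean save(String val); }

    // Pure: nothing impure happens until a Database is supplied.
    static Function<Database, Boolean> saveToDatabase(String val) {
        return db -> db.save(val);
    }

    public static void main(String[] args) {
        // "Inject" a fake environment -- no real database needed.
        Database fake = val -> !val.isEmpty();
        System.out.println(saveToDatabase("hello").apply(fake)); // true
        System.out.println(saveToDatabase("").apply(fake));      // false
    }
}
```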
modified 27-Sep-23 1:09am.
Jon McKee wrote: Here's some pseudo-code: Got it. I didn't think of it in the context of replacing overloads. Just calling it that because if I were to code up your first two calls I'd at least have two strong (primitive-based) types that would differentiate the signature. I've been in JavaScript too long where that's not really done.
Jon McKee wrote: This effectively turns your function into a pure function that returns an impure Reader with that environment tuple as input and the result as output That one I gotta look into man. My understanding of pure functions is that all inputs are deterministic. So, not following how shifting parameter order changes that, since non-deterministic input is still going into the routine. Will check out the link though.
Btw, thanks for knowing what you're talking about. Makes these conversations much better.
Jeremy Falcon
Jeremy Falcon wrote: Btw, thanks for knowing what you're talking about. Makes these conversations much better.
Haha, thanks, but I don't think I deserve that quite yet. I'm still learning from people like Bartosz Milewski and Mark Seemann.
Jeremy Falcon wrote: That one I gotta look into man. My understanding of pure functions is that all inputs are deterministic. So, not following how shifting parameter order changes that, since non-deterministic input is still going into the routine. Will check out the link though.
This is an interesting topic that really broke my brain when I first ran into it. So, functions of more than one input have a lot of equivalent representations. For example, string -> int -> string can be seen as a function taking a string and returning a function of int -> string , or as a function of two inputs (the tuple (string, int) ) that returns a string . The important part with regards to purity is binding order, or in other words "what is provided when". You can only act upon what is provided, so if arguments that are (potentially) impure are not provided yet, the function is still pure. For example:
public Func<Value, bool> saveToDatabase(Database db) => val => db.save(val);
public Func<Database, bool> saveToDatabase(Value val) => db => db.save(val);
The first function is impure, the second function is pure. Why? They both take a Database and Value and return a Bool . Both are lazy (i.e. they only evaluate when all arguments are supplied). Well, because purity is a logical result of inputs and outputs. In the first example, if I apply the Database parameter, get the result function, then drop the database tables, then apply the Value , the operation fails. The partially applied function is impure. The database object that was already bound (partially-applied) was side-effected by the tables dropping. In the second example, no matter what I do after applying the Value , I can't create a situation where the Database is invalid AFTER applying it. The returned function itself is impure since we're side-effecting a database, but the original function is not, because there is no way to change the Database -> Bool that's returned.
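To make the binding-order point concrete, here is a hedged Java sketch (Database is a stand-in, with an open flag simulating the tables being dropped): binding the database first captures something that can be side-effected afterwards, while binding the value first cannot be invalidated that way.

```java
import java.util.function.Function;

public class BindingOrder {
    static class Database {
        boolean open = true;                       // false simulates dropped tables
        boolean save(String val) { return open; }
    }

    // Database bound first: the returned function captures a mutable DB.
    static Function<String, Boolean> dbFirst(Database db) { return val -> db.save(val); }

    // Value bound first: nothing impure is captured yet.
    static Function<Database, Boolean> valFirst(String val) { return db -> db.save(val); }

    public static void main(String[] args) {
        Database db = new Database();
        Function<String, Boolean> partial = dbFirst(db);
        db.open = false;                           // side-effect after partial application
        System.out.println(partial.apply("x"));    // false: the capture went stale

        System.out.println(valFirst("x").apply(new Database())); // true: DB applied last
    }
}
```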
I might be off on some stuff, always learning, but that's my understanding of it.