|
Excellent reply. Thank you.
|
|
|
|
|
The schools are teaching with Python, and academics continue to use what they are familiar with.
10-15 years ago there was a controversy about using Microsoft Windows in schools and universities. Some switched to Linux. Then they asked, "Why are we teaching C in schools?"
Python is used by most academics today. I see Python packages all the time doing amazing things.
|
|
|
|
|
|
One of the reasons Python is wildly popular.
CI/CD = Continuous Impediment/Continuous Despair
|
|
|
|
|
Wikipedia wrote: Written in Python, C
C for the win!
modified 26-Sep-22 11:29am.
|
|
|
|
|
Quick & dirty prototyping and great libraries.
Don't know which one produced the other, however.
"In testa che avete, Signor di Ceprano?"
-- Rigoletto
|
|
|
|
|
We use it as a scripting language for our software (integrated into a C++ application).
It's relatively lightweight, and a lot of engineering folks outside the programming domain use it as a simple tool to do a lot of math.
Our clients (and internal folks) do a lot of data analysis with it.
CI/CD = Continuous Impediment/Continuous Despair
|
|
|
|
|
Maximilien wrote: do lot of data analysis with it
But what language is the data analysis code written in?
|
|
|
|
|
Python with NumPy.
We deal with large datasets of 3D points and 3D measurements. (See bio for company.)
We do a lot in the software itself, but sometimes the clients have domain-specific (proprietary) things they want to do.
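To make that concrete, here is a hedged sketch of the kind of NumPy work on 3D point sets described above; the data and variable names are illustrative, not the actual client code:

```python
import numpy as np

# Illustrative only: a small cloud of 3D points as an (N, 3) array.
points = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 2.0, 0.0],
    [0.0, 0.0, 3.0],
])

# Centroid of the cloud (mean over the point axis).
centroid = points.mean(axis=0)

# Distance of each point from the centroid, vectorised - no Python loop.
distances = np.linalg.norm(points - centroid, axis=1)

print(centroid)
print(distances.max())
```

The appeal for non-programmer engineers is exactly this: a whole-dataset measurement in a handful of lines, with the loops pushed down into compiled code.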
CI/CD = Continuous Impediment/Continuous Despair
|
|
|
|
|
I would add:
- it's free
- it works everywhere. And I mean browsers, iPhones, computing cards, workstations, PCs, macs, watches
- there's a huge number of libraries and code examples available.
- it's a very approachable language for teaching. As much as Python makes me swear some days, I'd recommend it as a teaching language over C or C++ for power and simplicity, and also over C# and Java for how ubiquitous it is and how it's not tied down to any platform or vendor. I love C#, very much, but it's a very Microsoft-centric experience (and mindset) still and that's not healthy for someone starting out who needs every option open to them.
- It's the language that anyone dealing with data analysis will use: engineers, environmental scientists, data analysts, AI researchers (obviously). The introduction of Jupyter notebooks was such a boon (and obviously notebooks are no longer Python-only).
I will say, though, that the library system will do your head in. It's wonderful until it's not. BUT: you generally get the code, so last ditch efforts of debugging and manual patching can work in emergencies.
It also has some awkward syntax. Very awkward.
And the whole culture is a little weird, and dare I say fanatical at times. And can Guido please stop pasting Monty Python quotes in the Python docs. Dude. Seriously.
cheers
Chris Maunder
|
|
|
|
|
For writing small server programs it is fine. I would not use it for anything with a GUI (and admittedly have never tried). For large programs it becomes harder to maintain. By large I mean > 1 man-year of code.
And for small programs, starting from scratch, it is actually fun, because starting from zero you quickly make something that does something!
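As a sketch of that "small server program" sweet spot: the standard library alone gets you a working HTTP service in a couple of dozen lines. This is illustrative only (`EchoHandler` is a made-up name), not production code:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading

class EchoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Reply with the requested path as plain text.
        body = f"you asked for {self.path}\n".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# Port 0 lets the OS pick a free port; serve in a background thread.
server = HTTPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
print(f"listening on port {server.server_address[1]}")
```

Starting from zero, something that does something, quickly - which is exactly the point.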
"If we don't change direction, we'll end up where we're going"
modified 26-Sep-22 13:49pm.
|
|
|
|
|
I've backed myself into far tighter corners than Python on 2+ year projects.
It's all about architecting the solution. And some tech stacks are really well suited to this.
And some just aren't...
cheers
Chris Maunder
|
|
|
|
|
I am not saying that it is useless for large projects. And "architecting" is of course important, but let's say 2+ years PLUS volatile requirements PLUS a situation where developers come and go. Then I would not recommend Python as the first choice.
"If we don't change direction, we'll end up where we're going"
|
|
|
|
|
I've not had experience with large (and long) Python projects yet, but two things really stick out for me when I think about your statement:
- Comments seem to be very much a second (or third, or fourth) consideration. Not even having formal syntax support for multi-line comments strikes me as the mindset that comments are a necessary evil, not an integral part of documenting the niggly things that a dev, 3 years later, will need to refer to.
- The absolutely terrible variable names you see in so many samples. Having started my career being forced to maintain a massive legacy FORTRAN codebase in a research institution many, many years ago, I'm still scarred by the variables named A, AA, AAA, A2, B2 and so on. And no, this isn't hyperbole: this was the common naming method. I don't see anything quite as bad in Python, but naming isn't exactly deeply rooted in the Pythonic culture, and it really does not help the cause of maintainability.
cheers
Chris Maunder
|
|
|
|
|
Now I am quite confused and almost wonder if you are commenting on my comment. #1 I said nothing about comments. #2 I said nothing about variable names. But...
Maybe I just triggered you to think about these things. The funny thing is that neither of these two is what I was thinking of.
As for comments, I dunno. You surely remember the hype around the Ada programming language. It only (AFAIK) had -- comments, running to the end of the line. And one of the key language design criteria was "WORMS, i.e. Write Once Read Many timeS". Actually, Python has docstrings that lazy people could use for multi-line comments, but that would be considered extremely bad style; any formal code review would certainly reject such stuff.
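For the record, the two styles side by side (an illustrative sketch; `scale` is a made-up function). The docstring trick "works" only because a bare string literal is an expression that Python evaluates and throws away:

```python
def scale(values, factor):
    """Return each value multiplied by factor."""  # real docstring: fine
    # Proper multi-line commentary in Python is simply
    # a run of consecutive single-line comments like these.
    """
    A bare string like this one is NOT a comment: it is an expression
    that Python evaluates and discards. Some people abuse it as a block
    comment; most reviewers will reject it as bad style.
    """
    return [v * factor for v in values]

print(scale([1, 2, 3], 10))  # [10, 20, 30]
```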
As for naming, there are at least strong linting/formatting tools (PEP 8 / Black). My organisation has them in our CI; you cannot check in without passing them. And from what I've seen, the naming culture in the Python community is quite strong. In web snippets? Maybe less so. Yeah, I have done my fair share of FORTRAN too, where naming was darkness...
Despite such draconian measures Python code rots rapidly, but IMO the key reason is the lack of typing combined with optional arguments. In long call chains, by the end, you have no idea what sort of elephant is being passed or returned.
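A small sketch of that failure mode (the function names here are made up, not from any real codebase): without annotations the reader gets nothing, while type hints at least write the contract down, even though the interpreter itself does not enforce them (a checker such as mypy would):

```python
from typing import Optional

# Without hints: what is cfg? What comes back? Only the callee knows,
# and three calls deep in the chain, nobody does.
def process(data, cfg=None):
    ...

# With hints, the contract is at least documented at the boundary.
def process_hinted(data: list[float],
                   cfg: Optional[dict[str, float]] = None) -> float:
    scale = (cfg or {}).get("scale", 1.0)
    return sum(data) * scale

print(process_hinted([1.0, 2.0], {"scale": 2.0}))  # 6.0
```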
Cheerz
"If we don't change direction, we'll end up where we're going"
|
|
|
|
|
megaadam wrote: #1 I said nothing about comments. #2 I said nothing about variable names. But...
My mind is merely meandering and thinking about what Python in a big project would be like. I started thinking about the things that I struggle with now when I look at new code, and the rush of comment and varname angst came out. Sorry if that seemed a little random. Pent up venting...
And typing. I completely forgot about typing! They have type hints, but they only get you so far.
Yeah, there's tools, and conventions, and reviews, and everything that's available to disciplined teams. And then there's the slippery slope of getting away with whatever you can.
Naming was darkness. What a great turn of phrase.
cheers
Chris Maunder
|
|
|
|
|
FORTRAN had a six-char limit on variable names, and I found that names like A, AA, B2, etc. were typical of code written by engineers, not comp sci graduates. And bad variable names are a problem in ALL languages. It has nothing to do with Python.
|
|
|
|
|
Quote: And can Guido please stop pasting Monty Python quotes in the Python docs. Dude. Seriously.
Are you really asking Chris Maunder to remove Bob from CodeProject?
"In testa che avete, Signor di Ceprano?"
-- Rigoletto
|
|
|
|
|
Python is interpreted and easier to get started with (IMO). Popular in schools and universities.
Unless one has a specific problem to solve, it's another solution looking for one.
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
|
|
|
|
|
Gerry Schmitz wrote: Python is interpreted; easier to get started with (IMO).
That certainly was an essential argument in favor of interpreted languages... a long time ago. In my student days as a junior, around 1980, our group project - no more than a couple thousand lines - required more than half an hour of compilation time on a VAX. So we made sure to make all known changes/fixes in the source code before starting a recompilation.
I haven't timed compilers for a few years. The last time I did, on a complete recompilation of a system with a few hundred modules, the compiler produced on average eight object-code modules per second. If you use an IDE such as VS, which takes responsibility for recompiling only the modified modules, compilation is practically unnoticeable.
No compilation delay once was an argument in favor of interpretation. It no longer is.
|
|
|
|
|
I decided to do a couple of compiles, partly to show how much faster CMake/Ninja is than MSBuild.
240KLOCs of C++, MSVC compiler, VS2022:
- MSBuild, x86 Release: 2m 52s
- CMake/Ninja, x64 Release: 0m 32s
I was forced to switch to CMake/Ninja when targeting for Linux instead of just Windows. I wish I'd gotten around to it sooner.
|
|
|
|
|
Greg Utas wrote: MSBuild, x86 Release: 2m 52s
CMake/Ninja, x64 Release: 0m 32s
A factor of 5.4 for (presumably) identical jobs would make me very cautious. I would not take that at face value as an indicator of 'typical' performance, but would spend some effort on learning what makes the one alternative more than five times faster.
Although x86 vs. x64 is not quite apples and oranges, it is at least apples and pears. So the two jobs are not identical.
Obviously, the compilers are different. Even if they have the same command-line interface, different modules are activated. Were all the options exactly the same? E.g. the same level of optimization, the same amount of runtime checks, etc.
Did the two jobs generate the same number of compiler activations, and the same number of object files? With two different target architectures you cannot expect exactly the same number of object files, but they should be comparable.
Were both jobs clean compiles? This includes e.g. precompilation of header files. For a 'fair' comparison, you could run the job on a newly formatted disk, but if this forces one setup to do heavy one-time preparations that save a lot of time later, maybe it isn't as 'fair' as you first thought. If you are doing an incremental, non-clean compile: were exactly the same changes made in both cases? Are the dependency rules set up identically in the two alternatives?
Did both jobs do the same kind of preparatory work, e.g. building the dependencies? If the jobs are 'in principle' of the same kind, were there significant differences, such as the developer supplying the dependencies in one case, while in the other they are automatically detected through analysis of the source code?
Even for a clean compile: are the dependency rules set up 'ideally'? I have seen compile logs from large builds (typically 30-60 minutes build time) compiling the same source file five times. This happens not once, but often! The developers argue that to maintain their part of the build files efficiently, they need to be independent of what the other guys are doing, and need to maintain their own independent dependencies... (And they refuse to let a separate team or expert do all build-file maintenance, claiming that it is too tightly interwoven with the source code.)
Did both jobs provide the same (/comparable) results? E.g. did they both include complete linking into an executable? Did both deliver auto-generated documentation? Did both generate the same amount and quality of debug information?
Given different target architectures: Was the number of actually compiled lines almost identical, or did #ifdef's make it significantly different?
How large a part of the 32 or 172 seconds is actual compilation time? If the compiler is the same, and the number of files and their contents are the same, the chances are epsilon squared that the one run requires 140 seconds more compilation time than the other. A large fraction of the time difference most likely lies in the build-system processing. Note that this discussion started from a compilation-vs-interpretation consideration; a lousy build system is not a good argument in favor of interpreted languages! (Ref. jmaida's joke in the JOTD a few threads higher up: "Yep, I used to have a truck like that once.")
I actually never got around to properly learning CMake/Ninja myself, but I have worked shoulder to shoulder with people who have. What I have seen in everyday use is actually the very opposite of your experience: when you have made your source changes and hit F5, your debugger is running within a few seconds, yet those Ninja runs tend to recompile half the code base. After having tried to learn CMake/Ninja, I have realized that it is very sensitive to the level of expertise of the build maintainers; you can give it very poorly designed (wrt. build performance) input files. But the developers don't care: after some unfortunate incidents about ten years ago, when we still used classical make and a few deliveries were made without one file being recompiled, we spent a few years doing a complete rebuild for any edit. "Just get faster machines for the build cluster!" has been the mantra for years. So who cares to fine-tune CMake/Ninja build files?
I am sure that setting up MSBuild jobs 'by hand' can result in just as inefficient builds as can be made with CMake/Ninja!
If you hand your build over to a build cluster, it really isn't significant whether it takes half a minute or two minutes. Far more essential is the turnaround time from when your modified source file is saved to when your debugger gives you control. That is not the time to do a complete rebuild. While one setup may reduce the time available for a coffee break compared to the other, the save-to-debug time really is the most essential, and a ranking of the alternatives might come out quite differently.
Bottom line: I very strongly doubt that a factor of 5.4 for 'identical' jobs is caused by simply 'more efficient code' in the faster alternative. The two build systems certainly did much more different jobs than just the code generating for two target architectures.
And it contributes very little if anything to the interpreted vs. compiled languages discussion - except that if you can do a complete 240 KLOC job generating, compilation and linking, with all its associated management, in 32 seconds, you may conclude that compilation time is not any significant argument in favor of interpreted languages.
|
|
|
|
|
Both were clean compiles, and both use the MSVC compiler. It's the front ends that differ. The options for x86 and x64 are basically the same, and so were MSBuild times for x86 and x64 when I used to build both with MSBuild. When I switched to CMake, I got rid of my .vcxproj files. The MSBuild time for x86 is still about the same. I have no idea why CMake/Ninja is that much faster; I'm just happy about it and have no desire to investigate why.
True, this has nothing to do with whether code is compiled or interpreted. I just wanted to point out that C++ compiles for a large code base are fairly fast but that, even then, there can be significant differences.
|
|
|
|
|
trønderen wrote: In my student days as a junior, around 1980, our group project - no more than a couple thousand lines - required more than half an hour compilation time, on a VAX. So we made sure to make all known changes/fixes in the source code before starting a recompilation.
My first year of college we used punch cards -- you had to carefully type the deck (make a mistake, throw out the card), bundle it up, and drop it off at the data center. At the beginning of the semester, you could come back an hour later to pick up the printout and deck. At the end of the semester? The turnaround was 12 hours.
That type of hassle makes more careful programmers, as we needed to get things right on the first try, not F5 / fix a line / F5 / fix a line / F5 / repeat until it looks like it works.
I hated it, but it was excellent training. Those who screwed around and waited to write & run their programs typically switched majors in their second semester.
|
|
|
|
|