|
FORTRAN had a six-character limit on variable names, and I found that names like A, AA, B2, etc. were typical of code written by engineers, not comp sci graduates. And bad variable names are a problem in ALL languages. It has nothing to do with Python.
|
|
|
|
|
Quote: And can Guido please stop pasting Monty Python quotes in the Python docs. Dude. Seriously. Are you really asking Chris Maunder to remove Bob from CodeProject?
"In testa che avete, Signor di Ceprano?"
-- Rigoletto
|
|
|
|
|
Python is interpreted; easier to get started with (IMO). Popular in schools and universities.
Unless one has a specific problem to solve, it's another solution looking for one.
"Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I
|
|
|
|
|
Gerry Schmitz wrote: Python is interpreted; easier to get started with (IMO). That certainly was an essential argument in favor of interpreted languages ... a long time ago. In my student days as a junior, around 1980, our group project - no more than a couple thousand lines - required more than half an hour of compilation time on a VAX. So we made sure to make all known changes/fixes in the source code before starting a recompilation.
I haven't timed compilers for a few years. Last time I did timing, on a complete recompilation of a system with a few hundred modules, on the average the compiler produced eight object code modules per second. If you use an IDE such as VS, which takes the responsibility for recompiling modified modules only, compilation is practically unnoticeable.
No compilation delay once was an argument in favor of interpretation. It no longer is.
|
|
|
|
|
I decided to do a couple of compiles, partly to show how much faster CMake/Ninja is than MSBuild.
240KLOCs of C++, MSVC compiler, VS2022:
- MSBuild, x86 Release: 2m 52s
- CMake/Ninja, x64 Release: 0m 32s
I was forced to switch to CMake/Ninja when targeting for Linux instead of just Windows. I wish I'd gotten around to it sooner.
|
|
|
|
|
Greg Utas wrote: MSBuild, x86 Release: 2m 52s
CMake/Ninja, x64 Release: 0m 32s A factor of 5.4 for (presumably) doing identical jobs would make me very cautious. I would not take that at face value as an indicator of 'typical' performance, but spend some effort on learning what makes the one alternative more than five times faster.
Although x86 and x64 are not quite apples and oranges, it is at least apples and pears. So the two jobs are not identical.
Obviously, the compilers are different. Even if they have the same command line interface, different modules are activated. Were all the options exactly the same? E.g. the same level of optimization, the same amount of runtime checks, etc.
Did the two jobs generate the same number of compiler activations, and the same number of object files? With two different target architectures, you cannot expect exactly the same number of object files, but they should be comparable.
Were both jobs clean compiles? This includes e.g. precompilation of header files. For a 'fair' comparison, you could run the job on a newly formatted disk, but if this forces one setup to do heavy one-time preparations that save a lot of time later, maybe it isn't as 'fair' as you first thought. If you are doing an incremental, non-clean compile: Were exactly the same changes made in both cases? Are the dependency rules set up identically in the two alternatives?
Did both jobs do the same kind of preparatory work, e.g. building the dependencies? If the job 'in principle' is of the same kind, were there significant differences, such as in one case, the developer supplies dependencies, while in the other, it is automatically detected through analysis of the source code?
Even for a clean compile: Are the dependency rules set up 'ideally'? I have seen compile logs from large compilations (typically 30-60 minutes build time) compiling the same source file five times. This happens not once, but often! The developers argue that to maintain their part of the build files efficiently, they need to be independent of what the other guys are doing, and need to maintain their own independent dependencies ... (And they refuse to let a separate team or expert do all build file maintenance, claiming that it is too tightly interwoven with the source code.)
Did both jobs provide the same (/comparable) results? E.g. did they both include complete linking into an executable? Did both deliver auto-generated documentation? Did both generate the same amount and quality of debug information?
Given different target architectures: Was the number of actually compiled lines almost identical, or did #ifdef's make it significantly different?
How large a part of the 32 or 172 seconds is actual compilation time? If the compiler is the same, and the number of files and their contents the same, chances are epsilon squared that the one run requires 140 seconds more compilation time than the other. A large fraction of the time difference most likely lies in the build system processing. Note that this discussion started from a compilation/interpretation consideration; a lousy build system is not a good argument in favor of interpreted languages! (ref. jmaida's joke in the JOTD a few threads higher up: "Yep, I used to have a truck like that once.")
I actually never got around to properly learning CMake/Ninja myself, but I have worked shoulder to shoulder with people who did. What I have seen in everyday use is actually the very opposite of your experience: when you have made your source changes and hit F5, your debugger is running within a few seconds. Those Ninja runs tend to recompile half the code base. After having tried to learn CMake/Ninja, I have realized that it is very sensitive to the level of expertise of the build maintainers; you can give it very poorly designed (wrt. build performance) input files. But the developers don't care: after some unfortunate incidents about ten years ago, when we still used classical make and a few deliveries were made without one file being recompiled, we spent a few years doing a complete rebuild for any edit. "Just get faster machines for the build cluster!" has been the mantra for years. So who cares to fine-tune CMake/Ninja build files?
I am sure that setting up MSBuild jobs 'by hand' can result in just as inefficient builds as can be made with CMake/Ninja!
If you hand your build over to a build cluster, it really isn't significant whether it takes half a minute or two minutes to do the build. Far more essential is the turnaround time from when your modified source file is saved to when your debugger gives you control. That is not the time for a complete rebuild. While one setup may reduce the time available for a coffee break compared to the other, the save-to-debug time really is the most essential, and a ranking of alternatives might come out quite differently.
Bottom line: I very strongly doubt that a factor of 5.4 for 'identical' jobs is caused simply by 'more efficient code' in the faster alternative. The two build systems certainly did much more different jobs than just generating code for two target architectures.
And it contributes very little, if anything, to the interpreted vs. compiled languages discussion - except that if you can do a complete 240 KLOC job (generation, compilation, and linking, with all its associated management) in 32 seconds, you may conclude that compilation time is not a significant argument in favor of interpreted languages.
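One way to make such a comparison more trustworthy is to script it, so both builds start from a clean tree and only the build step itself is timed. A minimal sketch of that idea in Python; the configure/build commands shown in the comment are hypothetical placeholders, not the actual project setup being discussed:

```python
import shutil
import subprocess
import time

def time_clean_build(build_dir, configure_cmd, build_cmd):
    """Wipe the build tree, re-run configuration, then time only the build step."""
    shutil.rmtree(build_dir, ignore_errors=True)  # force a genuinely clean build
    subprocess.run(configure_cmd, check=True)     # untimed one-time preparation
    start = time.perf_counter()
    subprocess.run(build_cmd, check=True)         # the part actually being compared
    return time.perf_counter() - start

# Hypothetical usage, comparing the two front ends with matching options:
#   time_clean_build("build",
#                    ["cmake", "-S", ".", "-B", "build", "-G", "Ninja"],
#                    ["cmake", "--build", "build"])
#   time_clean_build("out",
#                    ["msbuild", "/t:Clean", "app.sln"],
#                    ["msbuild", "/t:Build", "/p:Configuration=Release", "app.sln"])
```

Timing only the build step (not the one-time configure) separates build-system overhead from compilation proper, which is exactly the distinction the questions above are probing.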
|
|
|
|
|
Both were clean compiles, and both use the MSVC compiler. It's the front ends that differ. The options for x86 and x64 are basically the same, and so were MSBuild times for x86 and x64 when I used to build both with MSBuild. When I switched to CMake, I got rid of my .vcxproj files. The MSBuild time for x86 is still about the same. I have no idea why CMake/Ninja is that much faster; I'm just happy about it and have no desire to investigate why.
True, this has nothing to do with whether code is compiled or interpreted. I just wanted to point out that C++ compiles for a large code base are fairly fast but that, even then, there can be significant differences.
|
|
|
|
|
|
trønderen wrote: In my student days as a junior, around 1980, our group project - no more than a couple thousand lines - required more than half an hour compilation time, on a VAX. So we made sure to make all known changes/fixes in the source code before starting a recompilation. My first year of college we used punch cards -- had to carefully type the deck (make a mistake, throw out the card), bundle it up, and drop it off at the data center. At the beginning of the semester, come back an hour later to pick up the printout and deck. At the end of the semester? The turnaround was 12 hours.
That type of hassle makes more careful programmers, as we needed to get things right on the first try, not F5 / fix a line / F5 / fix a line / F5 / repeat until it looks like it works.
I hated it, but it was excellent training. Those that screwed around and waited to write & run their programs typically switched majors in their second semester.
|
|
|
|
|
In 1978, I was in the last freshman class to use punched cards. Two years later, a group of professors and graduate students from our university was on a visit to MIT. Somewhat embarrassed, they revealed that not until last year (i.e. 1979) was the introductory programming course run on interactive terminals. The MIT people balked: Interactive terminals in an introductory programming course? At that time at MIT, interactive terminals were reserved for graduate work!
In 1978, a 12 h turnaround was unheard of. Usually, the printout was on the shelves the next day, but in rush periods, it could take two days. Be careful to note, though, that out of those 48 hours, maybe five seconds were compilation and running time. The rest of the time, the deck was sitting in the input queue (a physical one!), being handled mechanically, or the printout lay stacked up in the line printer output tray waiting to be carried to the output shelves. If the operators had been given interpreters for interpreting the card decks, rather than compilers and run time systems, it would not have affected the turnaround time at all.
You are most certainly correct: It made us more careful programmers. It was excellent training.
Old memory worth recalling: In the compiler construction course, one essential quality metric was the compiler's ability to detect all, or as many as possible, (real, primary) errors in one compilation run. So we all became fans of LALR over recursive descent. The first compiler I studied close up was the classic recursive descent Pascal P4 compiler (open source didn't come with Linux!), and I was impressed by the number of tricks it used to continue compilation even after quite serious syntax errors, while generating as few second-order error messages as possible.
Today, who cares at all for such qualities? Many times I have seen coworkers get a long list of error reports, fix the first ten, and ignore the rest before rebuilding the system from scratch 'so that we are not bothered by second-order error messages from the first ten errors'. To some degree they are right: modern compilers do a much poorer job of hiding already-reported errors and avoiding second-order errors.
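The error-recovery quality described above can be illustrated with a toy example. Below is a minimal sketch (a made-up grammar, not the P4 compiler's actual technique) of panic-mode recovery in a hand-written parser: on a syntax error it reports once, then skips ahead to the next ';' synchronization point, so later, independent errors are still found in the same pass instead of drowning in second-order noise.

```python
def parse(tokens):
    """Parse statements of the form NAME '=' NUMBER ';' from a token list.

    Returns a list of error messages; an empty list means the input parsed
    cleanly. One primary error is reported per bad statement.
    """
    errors = []
    i, n = 0, len(tokens)
    while i < n:
        ok = (
            i + 3 < n
            and tokens[i].isidentifier()   # NAME
            and tokens[i + 1] == "="
            and tokens[i + 2].isdigit()    # NUMBER
            and tokens[i + 3] == ";"
        )
        if ok:
            i += 4
        else:
            errors.append(f"syntax error near token {i}: {tokens[i]!r}")
            # Panic mode: discard tokens up to and including the next ';',
            # then resume parsing at the following statement.
            while i < n and tokens[i] != ";":
                i += 1
            i += 1
    return errors
```

With input containing two independent mistakes, e.g. `x = 1 ; y = = ; z 2 ; w = 3 ;`, the parser reports exactly two errors and still accepts the valid statements around them, which is the "as many primary errors as possible in one run" behavior the post describes.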
|
|
|
|
|
For us it is the huge range of third-party packages and how easily Python interfaces with other systems.
|
|
|
|
|
import framework
import that-cool-library-someone-made
Compare that with C#/.NET, which includes so many everyday functions out of the box. And .NET 6/C# 10 have global usings that hide even more boilerplate, yay.
So I would lean toward saying that Python got huge, "free" publicity growth in its early years, which then lingers: it continues to grow because it's popular, which makes it more popular, and repeat.
Think back 10 years: .NET 4 was great, but still Windows-only, and Visual Studio was a heavyweight install.
Python worked with Scratch and other simple editors, and, being an interpreted language, made errors comparatively easier to deal with.
Again, comparing the 2012 Python world to C#/.NET.
Add 10 years of people putting time and effort into Python: they create packages, share and improve them, and write guides and how-tos.
Look at just the C# changes in the last 5 years (yearly major version bumps) compared to the slow pace, if anything, of 2008-2012.
So what you start with has a big influence. I can switch between JavaScript and C#/.NET fairly easily because I don't have to worry about a space versus a tab, or whether things align. VS error reporting has improved somewhat in the last 10 years.
But if someone starts with Python's style of writing, then semicolons and braces will be a pain.
Cross-platform support (running on a Raspberry Pi, or Linux) is another area where .NET Core had to rewrite most things (or whatever was needed to replace copyrighted methods so the code could be open-sourced), and that has only happened in the last 5-ish years.
Performance gains from .NET Core 3 to 5 were significant, with leaps in things like JSON parsing; ASP.NET is comparable to, if not faster than, Node.js.
Not sure if any of this makes sense; I'm simply attempting to compare the legacy with what we have today, and why Python might appear more popular.
|
|
|
|
|
Great questions. I have recently started using Python for machine vision prototyping as well as exploring the machine learning libraries. While I like the language, the lack of structured architecture is a bit unnerving. It reminds me of National Instruments LabVIEW which is also very easy to use and make a big mess in.
I have had conversations with our software department, and there are some very good libraries available in Python for complex math operations that are well documented and have community support. There are similar libraries available in C#; however, they are difficult to implement and lack proper documentation or support. I think this may have more to do with the fact that the underlying code supporting Python is C++ rather than C#, so there is some conversion/wrapping that needs to be done to make such a library available to C#.
I think it comes down to your use case, but I don't think there is anything available in Python that isn't available in C#. A lot of the available Python libraries are wrappers for general-purpose code blobs available in other languages. A case in point is the Kivy library for building UIs. The code base underpinning Kivy is god-level genius. Some of the implementation requirements are a bit clunky, but I have been impressed so far with the library.
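The wrapping mentioned above is often surprisingly thin. As a hedged sketch of what a Python wrapper over native code can look like, here is the standard-library `ctypes` module calling the C library's `strlen` directly (POSIX-specific assumption: `CDLL(None)` exposes the symbols of the running process, which include libc on Linux):

```python
import ctypes

# Load the symbols already linked into the current process (POSIX-specific;
# on Linux this exposes libc functions such as strlen).
libc = ctypes.CDLL(None)

# Declare the C signature: size_t strlen(const char *s);
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

def c_strlen(s: str) -> int:
    """A minimal Python-friendly wrapper around the raw C call."""
    return libc.strlen(s.encode("utf-8"))
```

Real wrapper libraries add type conversion, memory management, and error handling on top, but the basic mechanism, declare the C signature and forward the call, is the same idea.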
|
|
|
|
|
MSBassSinger wrote: What value-add(s) does Python bring that I cannot get now in C#? What disadvantages are there, if any, to using Python over C#?
Not to be flippant, but if you have C#/T-SQL experience, go ahead and learn Python and you will discover the advantages and disadvantages, and sometimes they overlap.
I used Python for a large (60+ Beaglebone SBC's [Single Board Computer]) inhouse project for a customer. (If any of you know my somewhat colorful past, you'll know what "industry" this was for.) We implemented a simple web server for each Beaglebone, used RabbitMq for messaging, had a small screen with a GTK interface for the graphics, and custom IO for the various buttons and one-wire iButton readers, all running under Debian.
The cool thing was that I could test all the software (GTK runs in Windows as well) and emulate the hardware I/O on my Windows machine, debugging it directly in Visual Studio. And the software included an auto-update process that would automatically update all 60+ Beaglebones (that took some trial and error but eventually worked.)
Being able to test the app on Windows and deploy it automatically with WinSCP to the test jigs (I had 6 Beaglebones at home for testing) was, frankly, a very pleasant experience.
Would I write a professional web server with database requirements in Python? Heck no, but Python definitely has its uses, certainly in the SBC arena.
|
|
|
|
|
I work out in the real world writing C#, JavaScript, TypeScript, and SQL everyday.
I also teach one night a week at a community college, Data Structures in the Spring semester and a programming language (currently C) in the Fall semester.
My observation is that Python is a scripting language and, like other scripting languages, useful for doing things a little more rapidly but inexactly.
My observation of students who want to know why they can't use Python in the Data Structures class (college requires C++ for transfer reasons) is that the new crop of students really don't understand what or why they are doing things but are monkey-see, monkey-do programmers. Of course that doesn't apply to the 5 - 10 % of my students who really DO understand how a computer works.
The net being, a lot of computing today is "close enough is good enough". Obviously that doesn't apply to certain financial transactions in banking, real estate, etc., but it DOES apply to a lot of things that are just providing info.
|
|
|
|
|
i have zero learning hours of python and yet i have never had a problem reading Python code. to be real, that was only 3-4 times and the code was short
the first time i saw Python was in the book Foundations of Python Network Programming. somehow i got this book in my hand and i started reading immediately. i was surprised how easy i understood the language. it was so clear that i didn't bother to write the examples in Python, but i translated them on the fly to C on the Windows platform. the only gotchas were the ones with the Win32 API
next comes what's important about C# and Python, which i think is highly subjective. at the moment i get my living from writing C# code. i'm not good at it, just barely good enough for people to put up with me. i feel repulsion toward languages like Java and C#. i was assigned to read data from a server and luckily for me i found code examples:
https://github.com/flightaware/firehose_examples
i was looking at the C# example and looking and looking... even had it compiled and it was working, but still i couldn't grasp it. then i turned to the Python example and i immediately understood what needs to be done. what is the essence. that's what i mean when i say your question is highly subjective. there is no "the right language" and "the only right thing to do for the common good", because too much right turns into left
i believe, no matter what others say (although some of those people i have known for decades and value their opinion highly), based on my experience, that Python is a clean and very good language for an introduction to programming, regardless of whether it will introduce bad habits in may-have-been-future professional programmers.
you know when you ask how to do something in git and you get a 10 page explanation of the theoretical possibilities and historical background of all version control software VS an answer saying: type this 4 words and it will do? for me, the former is C# and the latter is Python
cheers
|
|
|
|
|
Hi,
I am Harley, new to the group. I would like to contribute the best of my knowledge to the group and would love to learn more about technology from you guys.
modified 26-Sep-22 8:57am.
|
|
|
|
|
Welcome, maybe you can tell us a bit more about yourself and what technologies you are interested in.
|
|
|
|
|
Welcome Harley. Looking forward to your contributions and enjoy exploring the site!
|
|
|
|
|
|
I see ransomware as such a threat, as does any sane IT person... it's such a great business model. Salesforce is completely SaaS. I've been dealing with it some. It's interesting and ... can be standoffish, hard to reach. I'm wondering if that barrier protects it from ransomware.
|
|
|
|
|
There is no such thing as proof.
And there is also no point in asking Salesforce, I mean, they're probably pretty good, but no-one is perfect. Also: Security Questionnaire | CommitStrip[^]
Just make sure you have off site backups. And also a backup plan.
|
|
|
|
|
That's the thing. While my knowledge of Salesforce is not advanced, having your own offsite backups seems pretty near impossible. Just getting to the data is very hard. Since it is all browser-based, among other things, it might be fairly easy for the Salesforce folks to protect the data pretty well on their own. I don't know.
|
|
|
|
|
The only systems that are ransomware-proof are systems that don't have network access.
|
|
|
|
|
That's the point. Salesforce seems to have very limited access other than by browser. It may be designed that way with security in mind and it may allow a very high degree of protection. That is what I am curious about.
|
|
|
|
|