|
You can fix this to varying degrees pretty easily on the command line:
If the branch is newly created or no conflicts are expected:
git merge master
You can also use this and just resolve any conflicts manually if you prefer.
If conflicts are expected and you know you want your side to win wherever they occur:
git merge -X ours master
(Note: -X ours resolves only the conflicting hunks in your favor while still taking master's other changes; -s ours would discard everything from master, which is rarely what you want here.)
If you've been working on something but haven't committed before realizing the problem:
git stash && git merge master && git stash pop
If you want a clean history without the additional master merge:
git rebase master
|
Thank you very much, I'll give it a shot.
"We can't stop here - this is bat country" - Hunter S Thompson - RIP
|
GitHub, and I have TortoiseGit on my machine, so I can get a visual UI and not need to worry about commands unless I truly screw things up.
Zen and the art of software maintenance : rm -rf *
Maths is like love : a simple idea but it can get complicated.
|
Finally, the answer has come!
It only took 20 years, the mark of a good question!
|
If so, I was just wondering what sort of apps you've written or played with in order to use CUDA.
cheers
Chris Maunder
|
I played around a little with OpenCL and DirectCompute (when it was still fashionable), but never with the CUDA API directly. I wouldn't know the first thing about that.
All I did was throw some contrived distributed problems at it.
Eventually I wanted to make some acoustic modeling Digital Signal Processing software using it to provide nice tube amp and analog synth sounds, or maybe go further and implement low latency real-time vocoding and such.
I never did though. Too much work and too much math.
Real programmers use butterflies
|
honey the codewitch wrote: too much math
Is that even a thing??
On a completely unrelated note: that syntax thingamajig you're building (sorry for getting technical...). Can that be adapted to guess what syntax it's looking at? I'm assuming not because I'm guessing you have to provide it the syntax rules (laborious?) for it to understand a syntax. What I was thinking was "does your syntax thingy load a syntax from a standard syntax description library and parse from that?"
I have a problem and am randomly looking around for a solution
cheers
Chris Maunder
|
Chris Maunder wrote: What I was thinking was "does your syntax thingy load a syntax from a standard syntax description library and parse from that?"
Kind of. Glory takes a syntax description as a "grammar". The grammar itself is in XBNF format which is a lot like EBNF in terms of functionality but not as ugly.
Chris Maunder wrote: Can that be adapted to guess what syntax it's looking at?
It can't guess what it's looking at, nor can it change grammar rules on the fly (table regeneration would be too slow), but what it can do is return all possible variations of what it's looking at.
In order to get it to guess a grammar it would have to solve an undecidable problem, which is a problem.
Real programmers use butterflies
|
honey the codewitch wrote: it would have to solve an undecidable problem
So what you're saying is it needs a marriage counsellor.
cheers
Chris Maunder
|
haha!
The problems crop up when trying to divine hierarchies/trees of data from flat text.
Real programmers use butterflies
|
Chris Maunder wrote: I have a problem and am randomly looking around for a solution
Let me introduce you to your next secret weapon: Quick Answers[^]
M.D.V.
If something has a solution... Why do we have to worry about it? If it has no solution... For what reason do we have to worry about it?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
|
I used CUDA, after looking at OpenCL. Opinion: OpenCL was what AMD got IBM, HP, et al. to impose on NVidia, so that "the same code" could run on AMD's (ATI's) video chips too. Having written asm to do the latter, I can say it's ridiculous; you need different algorithms when the underlying chipset is that much less powerful. CUDA was really straightforward: high-level, but targeting a GPU built for GPGPU.
That being said, I have not used it in 10 years.
|
Yes, I have. We are rewriting a significant piece of an application to utilize it. This is just for HPC stuff. We haven't gone into machine learning yet, but we have some targets in mind.
I have also messed around with fractal generation and other graphical things using CUDA, and it is lightning fast at that. On the cards I have been using, double-precision performance is considerably slower than single (more than twice as slow), but it is still much faster than using a CPU. I can see the difference in detail in my graphics work when using single precision vs. double.
I went to Nvidia's GTC (GPU Technology Conference) last year and was going to go this year as well, until it was cancelled. I will be certain to catch the online sessions when they happen next week.
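For anyone curious what the fractal side looks like, here is a minimal sketch of an escape-time Mandelbrot kernel. To be clear, this is illustrative only, not code from our application; the names, sizes, and launch configuration are made up. Templating on the real type makes it easy to flip between the single- and double-precision behaviour mentioned above:

#include <cstdio>
#include <cuda_runtime.h>

// Escape-time Mandelbrot iteration; Real is float or double.
template <typename Real>
__global__ void mandelbrot(int* out, int w, int h, int maxIter)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;

    // Map the pixel into the classic viewing window.
    Real cr = Real(-2.5) + Real(3.5) * x / w;
    Real ci = Real(-1.25) + Real(2.5) * y / h;
    Real zr = 0, zi = 0;
    int i = 0;
    while (i < maxIter && zr * zr + zi * zi < Real(4)) {
        Real t = zr * zr - zi * zi + cr;
        zi = Real(2) * zr * zi + ci;
        zr = t;
        ++i;
    }
    out[y * w + x] = i;  // iteration count = escape speed
}

int main()
{
    const int w = 1024, h = 1024, maxIter = 1000;
    int* d_out;
    cudaMalloc(&d_out, w * h * sizeof(int));

    dim3 block(16, 16);
    dim3 grid((w + 15) / 16, (h + 15) / 16);

    // Swap <float> for <double> to see the precision/speed trade-off.
    mandelbrot<float><<<grid, block>>>(d_out, w, h, maxIter);
    cudaDeviceSynchronize();

    cudaFree(d_out);
    return 0;
}

Changing that one template argument is enough to see the difference in both image detail and run time.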
"They have a consciousness, they have a life, they have a soul! Damn you! Let the rabbits wear glasses! Save our brothers! Can I get an amen?"
|
Nice
cheers
Chris Maunder
|
I used CUDA in my Masters for simulating spiking neural networks.
I ended up using Cudafy (a .NET library) as I wanted C# familiarity.
It certainly allowed me to run my simulations far quicker, but don't underestimate the amount of effort required to tune non-trivial algorithms and get them right; debugging 1000s of threads isn't fun.
|
I used CUDA in my doctoral work in physics. Solving a non-linear partial differential equation via finite differences, I achieved a 32x speedup on the NVIDIA GPU in my laptop, which has about 96 cores. It requires a different mode of thinking than we are used to, but it's worth it.
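To give a flavour of why finite differences map so well onto a GPU (this is just an illustrative sketch of a 1D explicit heat-equation step, with made-up names and sizes, not the nonlinear PDE from my thesis), every grid point simply becomes one thread:

#include <cstdio>
#include <cuda_runtime.h>

// One explicit time step of u_t = alpha * u_xx on a 1D grid:
// uNew[i] = u[i] + r * (u[i-1] - 2*u[i] + u[i+1]), r = alpha*dt/dx^2.
__global__ void heatStep(const float* u, float* uNew, int n, float r)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i <= 0 || i >= n - 1) return;  // boundary values stay fixed
    uNew[i] = u[i] + r * (u[i - 1] - 2.0f * u[i] + u[i + 1]);
}

int main()
{
    const int n = 1 << 20;
    const float r = 0.25f;  // alpha*dt/dx^2; must be <= 0.5 for stability
    float *a, *b;
    cudaMalloc(&a, n * sizeof(float));
    cudaMalloc(&b, n * sizeof(float));
    cudaMemset(a, 0, n * sizeof(float));
    cudaMemset(b, 0, n * sizeof(float));

    for (int step = 0; step < 1000; ++step) {
        heatStep<<<(n + 255) / 256, 256>>>(a, b, n, r);
        float* t = a; a = b; b = t;  // ping-pong the buffers
    }
    cudaDeviceSynchronize();
    printf("status: %s\n", cudaGetErrorString(cudaGetLastError()));

    cudaFree(a);
    cudaFree(b);
    return 0;
}

The host just launches one kernel per time step and swaps the buffers; the speedup comes from thousands of grid points updating concurrently instead of one after another.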
|
I used it for a basic convolution-like problem with a large overlap. The code I wrote was rather basic, and the scaffolding around it needed some attention to get working, but it delivered despite my not having studied it all that much. A sketch of the general idea follows below.
But I'll wait for another real-life application before delving into it again.
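For what it's worth, the large overlap is exactly what shared-memory tiling exploits. Here is a generic illustrative sketch (my names and sizes, not the code I actually wrote) of a 1D convolution where each block stages its inputs plus a halo once, so neighbouring threads reuse them from fast shared memory:

#include <cstdio>
#include <cuda_runtime.h>

#define TILE 256
#define RADIUS 8  // kernel half-width; illustrative value

__constant__ float d_weights[2 * RADIUS + 1];

// 1D convolution with shared-memory tiling. Because neighbouring outputs
// reuse almost the same inputs (the "large overlap"), each block stages
// its tile plus a halo once. Launch with blockDim.x == TILE.
__global__ void conv1d(const float* in, float* out, int n)
{
    __shared__ float tile[TILE + 2 * RADIUS];
    int gid = blockIdx.x * TILE + threadIdx.x;

    int src = gid - RADIUS;  // left halo plus body
    tile[threadIdx.x] = (src >= 0 && src < n) ? in[src] : 0.0f;
    if (threadIdx.x < 2 * RADIUS) {  // right halo
        int s2 = src + TILE;
        tile[threadIdx.x + TILE] = (s2 >= 0 && s2 < n) ? in[s2] : 0.0f;
    }
    __syncthreads();

    if (gid >= n) return;
    float acc = 0.0f;
    for (int k = 0; k <= 2 * RADIUS; ++k)
        acc += tile[threadIdx.x + k] * d_weights[k];
    out[gid] = acc;
}

int main()
{
    const int n = 1 << 20;
    float w[2 * RADIUS + 1];
    for (int k = 0; k <= 2 * RADIUS; ++k)
        w[k] = 1.0f / (2 * RADIUS + 1);  // simple box filter
    cudaMemcpyToSymbol(d_weights, w, sizeof(w));

    float *d_in, *d_out;
    cudaMalloc(&d_in, n * sizeof(float));
    cudaMalloc(&d_out, n * sizeof(float));
    cudaMemset(d_in, 0, n * sizeof(float));

    conv1d<<<(n + TILE - 1) / TILE, TILE>>>(d_in, d_out, n);
    cudaDeviceSynchronize();
    printf("status: %s\n", cudaGetErrorString(cudaGetLastError()));

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}

Since adjacent outputs share almost all of their inputs, reading from shared memory instead of global memory is usually the first optimization worth making here.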
|
I have used CUDA C/C++ for simple pattern matching on fairly large data sets. Keep in mind that there are some performance limitations when using CUDA, due to the time required to marshal data to and from GPU memory and when the algorithm requires multiple synchronizations, but the performance is still impressive.
Keep in mind that CUDA is not the solution for all problems; a clever algorithm implemented on the CPU alone can match or even outperform GPU code in some scenarios. It is fun to play with good old C and the different memory types of the GPU. Debugging is more challenging, and the separate compilation that requires two compilers (NVCC and a host C/C++ compiler) sometimes creates unexpected issues. Finding help on the web is harder than with more established technologies. I am using Visual Studio to do all of this on Windows.
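To put a number on the marshaling cost, a rough sketch like this one (the trivial kernel and buffer size are made up purely for illustration) times each leg separately; on many systems the two copies dominate a kernel this cheap:

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void touch(float* d, int n)  // stand-in for a real kernel
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] *= 2.0f;
}

int main()
{
    const int n = 1 << 24;  // ~16M floats, 64 MB
    float* h = (float*)calloc(n, sizeof(float));
    float* d;
    cudaMalloc(&d, n * sizeof(float));

    cudaEvent_t t0, t1, t2, t3;
    cudaEventCreate(&t0); cudaEventCreate(&t1);
    cudaEventCreate(&t2); cudaEventCreate(&t3);

    cudaEventRecord(t0);
    cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaEventRecord(t1);
    touch<<<(n + 255) / 256, 256>>>(d, n);
    cudaEventRecord(t2);
    cudaMemcpy(h, d, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaEventRecord(t3);
    cudaEventSynchronize(t3);

    float up, run, down;
    cudaEventElapsedTime(&up, t0, t1);
    cudaEventElapsedTime(&run, t1, t2);
    cudaEventElapsedTime(&down, t2, t3);
    printf("H->D %.2f ms, kernel %.2f ms, D->H %.2f ms\n", up, run, down);

    cudaFree(d);
    free(h);
    return 0;
}

If the work per byte is small, keeping data resident on the GPU across multiple kernels, rather than round-tripping it, is what makes or breaks the speedup.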
|
I used CUDA for parallelization of a Java program. The task was accelerating a fairly simple algorithm for projecting bitmaps. It was fun, and I made an infographic on my method. I found GPGPU quite exciting but lost sight of it, anyway... Regards, Jürgen.
|
I tried out CUDA as a way of speeding up a raytracing graphics engine I was working on. My goal was to do raytracing without expensive (RTX) hardware. If I remember correctly, my program ran about as fast using most of the CUDA cores on a GTX 1050 Ti as it did using all the CPU cores on a Ryzen 7 1700X. I probably could have got it to run faster by optimizing it more for CUDA (I think I was using doubles), but the main problem for that project was my core algorithm being slow on anything.
|
The endless, endless texts, Slacks, Skypes, tweets, emails, FaceTimes, calls and messages.
cheers
Chris Maunder
|
Unlike meetings, where your absence or lack of attention is obvious, with most of these it isn't. They can be ignored or handled at your convenience, or you can do other things while pretending to be engaged.
One of the managers where I used to work told me he liked it when he had two meetings scheduled simultaneously. If he went to neither, people would assume that he was in the other one.
|
Greg Utas wrote: two meetings scheduled simultaneously. If he went to neither, people would assume that he was in the other one.
Ingenious!!
Those would be the two best meetings ever!
The best meeting is a cancelled meeting!
|