|
I wrote a tray app that communicates with a Windows service using named pipes, and it worked fine. Are you talking about .Net Core maybe?
|
|
|
|
|
I don't know "honey the codewitch's" situation, but your experience matches my recollection
|
|
|
|
|
Nah, it's DNF 4.7.2
Real programmers use butterflies
|
|
|
|
|
In my several decades of working in the Win32 environment and now Win64, I have never, ever had an issue with credentials and inter-process communication. Not a single time and I have worked with pipes, events, mutexes, and shared memory extensively.
What I mean by this is, in and of themselves, none of those IPC mechanisms require anything involving credentials. Any issues you may have with them are arising from a different level such as dealing with a service OR they are being imposed by something in .nyet.
"They have a consciousness, they have a life, they have a soul! Damn you! Let the rabbits wear glasses! Save our brothers! Can I get an amen?"
|
|
|
|
|
It might be a .NET issue, but it looked like I was getting a Win32 error result, since it was 5: access denied
Real programmers use butterflies
|
|
|
|
|
"I would be looking for better tekkies, too. Yours are broken." -- Paul Pedant
Context: The inability to create or consume CSV files.
|
|
|
|
|
|
"Anti-escape baffle", yeah, so when that 7,000 volts doesn't do the job.
|
|
|
|
|
They finally built a better one.
Yuya Tuya want your mouse? I've got one.
Yuya Tuya want your mouse? I'm sayin'
modified 4-Aug-20 22:32pm.
|
|
|
|
|
I need a bowl and a half of walnut to catch a mouse - and it does not kill it...
"The only place where Success comes before Work is in the dictionary." Vidal Sassoon, 1928 - 2012
|
|
|
|
|
Don't eat the walnuts yourself, put them in the trap!
|
|
|
|
|
Do they make them burglar-sized, too?
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|
|
Hilarious.
I find it odd no-one else has made mention of something.
Why would you fire the pulse 5 times if the device INSTANTLY KILLS 'em?
Also of note,
The device lures mice and sustains a high-voltage discharge for 2 minutes to ensure the mouse is dead.
There's a very slim chance the marketing might not be quite accurate. Very slim.
|
|
|
|
|
Someone's come to me with what I thought was a good question.
He's got some 320kbps MP3s that were upconverted from files that originally were anywhere between 128kbps and 256kbps. Don't ask me why this was done. Someone must've thought introducing extra bits would magically improve the lower-res recording. He no longer has the original versions of the files.
The question he asked me, and I had no answer for, is this: Is there software that can analyze the audio in a given file, and determine that it's something that does NOT require 320kbps and there would be "no loss" converting it back to 256 or 128kbps, or whatever it was originally encoded from? His argument is that his library is now taking roughly 2x+ the amount of disk space it used to, with no benefit to be gained. Of course, he's already got some MP3s that were originally ripped at 320kbps, so he doesn't want to bulk-convert everything back to 256kbps or lower - only those that were at the lower resolution to begin with.
Obviously this isn't "audio fingerprinting" like MusicBrainz Picard can do. And I can't come up with the right keywords for googling.
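One approach worth a try (my own sketch, not something anyone in this thread suggested): low-bitrate MP3 encoders discard high frequencies, and that cutoff survives an upconversion, so a 128kbps file re-encoded at 320kbps still has nothing above roughly 16 kHz, while a true 320kbps rip keeps content near 20 kHz. Estimating the spectral cutoff of the decoded audio hints at the original bitrate. A minimal sketch in Python with numpy, assuming the MP3 has already been decoded to a mono sample array (the -60 dB floor and the kHz thresholds are assumptions to tune, not established constants):

```python
import numpy as np

def estimate_cutoff_hz(samples, sample_rate, floor_db=-60.0):
    """Estimate the highest frequency with significant energy.

    Heuristic: a 128 kbps MP3 typically cuts off near 16 kHz, while a
    true 320 kbps file keeps content up to ~20 kHz (assumed thresholds).
    """
    # Window to limit spectral leakage, then take the magnitude spectrum.
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    # Normalize to the peak bin and convert to decibels.
    db = 20 * np.log10(spectrum / (spectrum.max() + 1e-12) + 1e-12)
    significant = np.where(db > floor_db)[0]
    return freqs[significant[-1]] if len(significant) else 0.0

# Synthetic check: a signal containing nothing above 8 kHz
rate = 44100
t = np.arange(rate) / rate
sig = sum(np.sin(2 * np.pi * f * t) for f in (440, 2000, 8000))
print(estimate_cutoff_hz(sig, rate))  # close to 8000 Hz
```

On real decoded MP3 data you would run this over several short windows and take a typical value, since quiet passages can read artificially low.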
|
|
|
|
|
dandy72 wrote: he doesn't want to bulk-convert everything back to 256kbps or lower - only those that were at the lower resolution to begin with. OK but how about this: re-encode everything anyway, then compare the loss between the 320kbps version and the new version. If there's a lot of loss, keep the big version. Audio comparison tools exist, so this wouldn't involve anything fancy.
|
|
|
|
|
That's an interesting idea, and makes sense.
There's that second step however, comparing whether there was a loss...
|
|
|
|
|
I always encode everything to 128kbps.
This comes from a time when my HD was only 500 GB (ancient times) and it was full.
It's still very relevant for my (old) 160 GB MP3 player today.
This may sound like I'm really old skool, or hipster maybe, but my music tastes aren't always on streaming platforms such as Spotify or Bandcamp.
I don't really hear the 128 vs. 320 difference though.
Maybe with headphones, but not from my laptop speaker or earplugs on my bike.
Besides, the brain is great at filling in missing parts.
The soundtracks back in the (S)NES days sounded like actual orchestras to me
Anyway, "upcoding" isn't possible as far as I know.
Tried it once, but it gets glitchy.
I can't imagine "upcoding" and then decoding back is good for your quality.
I'm afraid your friend is out of luck
But couldn't he just re-encode the 256 ones down to 128 and leave the 320 alone?
Or re-encode the 320 to 256 or 192 to save some space while still having decent quality?
He should also check VBR (Variable Bit Rate), which is kind of what you want, but not completely.
It will work for the original 320, but probably won't make the "upcoded" stuff better.
With VBR you get like 128 (or even less) at the quiet parts and maybe 320 at loud parts with lots of instruments, and everything in between, basically.
Your bit rate with VBR can be something like 213kbps because it's an average of the various parts.
It's the best of both worlds although, as said, I don't use it myself.
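To make that "average" concrete: an MP3 is a sequence of frames (each 1152 samples, about 26 ms at 44.1 kHz), and with VBR each frame picks its own bitrate from the allowed set, so the figure a player reports is just the mean over the frames. A toy illustration (the frame bitrates are made up):

```python
# With VBR, each ~26 ms MP3 frame chooses its own bitrate:
# quiet passages get low rates, busy passages get high ones.
frame_bitrates = [128, 128, 160, 320, 320, 256, 192, 128]  # kbps, hypothetical

# The "bitrate" a player displays is the mean over equal-duration frames.
average = sum(frame_bitrates) / len(frame_bitrates)
print(average)  # an in-between figure like the 213 kbps mentioned above
```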
|
|
|
|
|
Sander Rossel wrote: I don't really hear the 128 vs. 320 difference though.
Maybe with headphones, but not from my laptop speaker or earplugs on my bike.
Besides, the brain is great at filling in missing parts.
The soundtracks back in the (S)NES days sounded like actual orchestras to me
For some reason that just reminds me of this scene: Everybody Loves Raymond - Vinyl vs CDs - YouTube[^]
|
|
|
|
|
Fun fact, in 2019, vinyl revenue surpassed that of CDs for the first time since 1986.
CDs still sold more in absolute numbers, but vinyl is more expensive
I say, bring back the VHS!
|
|
|
|
|
Sander Rossel wrote: I don't really hear the 128 vs. 320 difference though.
Maybe with headphones, but not from my laptop speaker or earplugs on my bike.
That much is a given. Laptop speakers are notoriously bad (and I laugh at laptops that have built-in Harman Kardon speakers).
Steve Jobs has also done a fantastic job at lowering expectations given the hardware he hawks.
I'm no audiophile by any stretch, but depending on the material, I'll almost always immediately make the distinction between 128 and 320kbps. 256 is where I'll typically start to get it wrong.
|
|
|
|
|
Chances are very good that re-encoding (up or down) introduces more losses; however those losses may be acceptable. As suggested elsewhere, he could try downward re-encoding and see if he can notice the difference. If he uses a variable bitrate with target 128, he'll get the smaller size and perhaps less loss.
------------------------------------------------
If you say that getting the money
is the most important thing
You will spend your life
completely wasting your time
You will be doing things
you don't like doing
In order to go on living
That is, to go on doing things
you don't like doing
Which is stupid.
|
|
|
|
|
The easiest solution (I have actually done similar for bulk video processing) is to, for each file (named SONG.mp3 in my example below):
1. Convert to raw audio samples (the type you can cat to /dev/dsp, for example): SONG_ORG.pcm
2. Convert to 256kbps, then convert that to raw audio samples: SONG_256KPS.pcm
3. Convert to 128kbps, then convert that to raw audio samples: SONG_128KPS.pcm
At this point you'll have three uncompressed (raw audio) versions of the song. If you specified the same sampling rate for all of them they should all be the same size. Now this is the slightly more difficult part.
Write a program that takes two files and finds the statistics on the bytes in both files (mean, median, variance, std-dev, frequency distribution, etc.) AND finds the statistics on the deltas between the two files taken together (some programs can do this).
Running this program on SONG_ORG.pcm and SONG_256KPS.pcm would show you how large the difference in sound is between the two files. If it is too large (experiment with the threshold) then you cannot re-encode to the smaller bitrate format because too much information was lost. If the difference is small then you can.
The only time-consuming part will be writing the program to examine PCM samples and give stats on the deltas. When I did this with video, I used ffmpeg to generate stills and image-magick to generate stats from those stills, and let it run over the weekend on all the videos I was checking. You can use sox/liblame to do the PCM/mp3 generation, but I don't know of a program that does for sound what image-magick does for images.
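The delta-statistics step above can be sketched in a few lines of Python. This assumes 16-bit little-endian mono PCM and that both files cover the same samples at the same rate, as in the procedure described; the threshold you compare the results against is something to find by experiment:

```python
import struct

def pcm_delta_stats(path_a, path_b):
    """Compare two raw 16-bit LE PCM files sample by sample.

    Returns (mean absolute delta, peak absolute delta, RMS delta).
    """
    with open(path_a, "rb") as f:
        raw_a = f.read()
    with open(path_b, "rb") as f:
        raw_b = f.read()
    n = min(len(raw_a), len(raw_b)) // 2  # 2 bytes per 16-bit sample
    a = struct.unpack(f"<{n}h", raw_a[: 2 * n])
    b = struct.unpack(f"<{n}h", raw_b[: 2 * n])
    deltas = [abs(x - y) for x, y in zip(a, b)]
    mean = sum(deltas) / n
    peak = max(deltas)
    rms = (sum(d * d for d in deltas) / n) ** 0.5
    return mean, peak, rms
```

Running it on SONG_ORG.pcm and SONG_256KPS.pcm, say, gives the numbers to compare against your experimental threshold: a small RMS delta suggests the lower bitrate loses nothing audible, a large one suggests keeping the big file.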
|
|
|
|
|
That sounds like the sort of analysis I had in mind. However I'm not the one with the vested interest in solving that "problem" so I'm not gonna be spending any time coding this. I'd be curious to know whether someone's already written that sort of thing...
The more I read about this, the more I think my buddy needs to re-rip everything from the original source, or live with the extra space requirements. Storage is cheap nowadays...
|
|
|
|
|
The MP3 codec is a lossy format. Unfortunately, there is no way back to the original file size without loss. The only real way to fix it would be to encode them at the desired bitrate from uncompressed WAV files.
|
|
|
|
|
Jason Hutchinson wrote: The MP3 codec is a lossy format. Unfortunately, there is no way back to the original file size without loss.
Well, the file started life as 128kbps, then was converted to 320kbps. The question is, how much loss would be incurred when going from 320 back to 128, and comparing that version with the original that already was at 128. Even though, I realize, he no longer has it.
But it would be an interesting experiment one way or another - and hang on to all versions each step of the way so they can later be compared.
|
|
|
|