|
Sander Rossel wrote: I don't really hear the 128 vs. 320 difference though.
Maybe with headphones, but not from my laptop speaker or earplugs on my bike.
Besides, the brain is great at filling in missing parts.
The soundtracks back in the (S)NES days sounded like actual orchestras to me
For some reason that just reminds me of this scene: Everybody Loves Raymond - Vinyl vs CDs - YouTube[^]
|
|
|
|
|
Fun fact: in 2019, vinyl revenue surpassed that of CDs for the first time since 1986.
CDs still sold more in absolute numbers, but vinyl is more expensive.
I say, bring back the VHS!
|
|
|
|
|
Sander Rossel wrote: I don't really hear the 128 vs. 320 difference though.
Maybe with headphones, but not from my laptop speaker or earplugs on my bike.
That much is a given. Laptop speakers are notoriously bad (and I laugh at laptops that have built-in Harman Kardon speakers).
Steve Jobs also did a fantastic job of lowering expectations given the hardware he hawked.
I'm no audiophile by any stretch, but depending on the material, I'll almost always immediately make the distinction between 128 and 320kbps. 256 is where I'll typically start to get it wrong.
|
|
|
|
|
Chances are very good that re-encoding (up or down) introduces more losses; however, those losses may be acceptable. As suggested elsewhere, he could try downward re-encoding and see if he can notice the difference. If he uses a variable bitrate with a target of 128kbps, he'll get the smaller size and perhaps less loss.
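As a sketch of that downward VBR re-encode, assuming an ffmpeg build with libmp3lame (the filenames are placeholders; -q:a is ffmpeg's mapping of LAME's VBR quality scale, and level 5 typically averages out near 128kbps on ordinary material):

```shell
# VBR re-encode aiming at roughly a 128 kbps average.
# -q:a 0 is the largest/best setting, 9 the smallest/worst.
ffmpeg -i SONG_320.mp3 -codec:a libmp3lame -q:a 5 SONG_vbr.mp3
```

Keep the 320kbps files around until he's A/B-tested a few results by ear.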
------------------------------------------------
If you say that getting the money
is the most important thing
You will spend your life
completely wasting your time
You will be doing things
you don't like doing
In order to go on living
That is, to go on doing things
you don't like doing
Which is stupid.
|
|
|
|
|
The easiest solution (I have actually done similar for bulk video processing) is to, for each file (named SONG.mp3 in my example below):
1. Convert to raw audio samples (the type you can cat to /dev/dsp, for example): SONG_ORG.pcm
2. Convert to 256kbps, then convert that to raw audio samples: SONG_256KPS.pcm
3. Convert to 128kbps, then convert that to raw audio samples: SONG_128KPS.pcm
At this point you'll have three uncompressed (raw audio) versions of the song. If you specified the same sampling rate for all of them they should all be the same size. Now this is the slightly more difficult part.
Write a program that takes two files and computes statistics on the bytes in both files (mean, median, variance, std-dev, frequency distribution, etc.) AND on the deltas between the two files taken together (some programs can do this).
Running this program on SONG_ORG.pcm and SONG_256KPS.pcm would show you how large the difference in sound is between the two files. If it is too large (experiment with the threshold) then you cannot re-encode to the smaller bitrate format because too much information was lost. If the difference is small then you can.
The only time-consuming part will be writing the program to examine PCM samples and give stats on the deltas. When I did this with video, I used ffmpeg to generate stills and ImageMagick to generate stats from those stills, and let it run over the weekend on all the videos I was checking. You can use sox/liblame to do the PCM/MP3 generation, but I don't know of a program that does for sound what ImageMagick does for images.
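For the "stats on the deltas" step, here's a minimal sketch in Python (stdlib only; the function name and file layout are my own invention). Real use would add per-channel handling, alignment for MP3 encoder delay, and probably numpy for speed:

```python
import array
import math

def pcm_delta_stats(path_a, path_b):
    """Compare two raw 16-bit (native-endian) PCM files sample by sample.
    Returns (mean absolute difference, RMS difference) in sample units.
    Assumes both files were decoded at the same rate and channel count."""
    a = array.array("h")
    b = array.array("h")
    with open(path_a, "rb") as f:
        a.frombytes(f.read())
    with open(path_b, "rb") as f:
        b.frombytes(f.read())
    n = min(len(a), len(b))  # decoders may pad the ends differently
    if n == 0:
        return 0.0, 0.0
    deltas = [a[i] - b[i] for i in range(n)]
    mean_abs = sum(abs(d) for d in deltas) / n
    rms = math.sqrt(sum(d * d for d in deltas) / n)
    return mean_abs, rms
```

Run it on SONG_ORG.pcm against each re-encoded version and compare the numbers against a threshold you calibrate by ear.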
|
|
|
|
|
That sounds like the sort of analysis I had in mind. However I'm not the one with the vested interest in solving that "problem" so I'm not gonna be spending any time coding this. I'd be curious to know whether someone's already written that sort of thing...
The more I read about this, the more I think my buddy needs to re-rip everything from the original source, or live with the extra space requirements. Storage is cheap nowadays...
|
|
|
|
|
The MP3 codec is a lossy format. Unfortunately, there is no way back to the original file size without loss. The only real way to fix it would be to encode them at the desired bitrate from uncompressed WAV files.
|
|
|
|
|
Jason Hutchinson wrote: The MP3 codec is a lossy format. Unfortunately, there is no way back to the original file size without loss.
Well, the file started life as 128kbps, then was converted to 320kbps. The question is, how much loss would be incurred when going from 320 back to 128, and comparing that version with the original that already was at 128. Even though, I realize, he no longer has it.
But it would be an interesting experiment one way or another - and hang on to all versions each step of the way so they can later be compared.
|
|
|
|
|
Send him here: mp3ornot.com[^] and if he doesn't score (say) 5/5 then he may as well give up, and encode them all to 128. I got the first 2 right, but failed on the third (didn't like the sound anyway) so stopped. I am not an audiophile.
|
|
|
|
|
That's an interesting test, although they really need more than just the one sample song. Some recordings can sound fine at 128kbps, and others might sound terrible at that bitrate.
But then I've only tried this with my desktop speakers with the window open, traffic going by and a fan running. Not exactly a great listening setup.
|
|
|
|
|
There is more than one sample - by some magic, if one waits, the sample changes on next play - don't ask me how that works! I think you have to do all 3 listens, then select an answer - then the clip changes!
|
|
|
|
|
I hit refresh about 20 times, and the same sample keeps coming up. Maybe you do have to listen to them all, rather than immediately jumping to step 2.
[Edit]
Ok, my mistake was hitting Refresh to try to get a new sample. It seems to restart the whole thing, so refresh is entirely pointless...
|
|
|
|
|
I think you can tell by looking at the frequency content. It's easy to tell the difference between an "it started as an MP3" file and a "it started as a WAV" in my audio noise clean-up software because the spectrum for the MP3 has a brick-wall low pass filter well below the Nyquist frequency for the file's sample rate. That frequency is probably lower for lower-resolution MP3 files. (But no, I don't think I've ever checked this. And though I'm working from home, I risk far too much distraction from "I'm supposed to be working now" if I turn on the home computer and its associated audio editing tools to run some tests.)
|
|
|
|
|
I wondered if the frequency spectrum would be enough to indicate the bit rate. Although I knew that MP3 encoding 'threw away' bits of the sound, I didn't know whether that would show clearly on a spectrum. So I took the same 5s clip from the soundtrack to Clockwork Orange (The Thieving Magpie), which I know contains some well-defined high-frequency parts (a flute or some such - I used this track to compare recordings onto audio cassettes many years ago, to establish which tape quality (cost) level was needed to avoid unacceptable loss). I loaded the original WAV file from CD into Audacity and did a spectrum, then saved as MP3 at a variety of bitrates and did the same. Without being able to load pictures here, I can tabulate the maximum frequency seen in the plots for various bit rates, as there is a very clear pattern:
bit rate     -100dB freq (Hz)
Full (WAV)   21,300
320 kbps     20,100
256 kbps     19,400
128 kbps     16,600
I picked the frequency at which the signal went below -100dB, as that corresponded roughly with the visual scale on the plot (which only goes down to -90 by default). The cutoff was well-defined, and if there is a place to put them, I can supply screenshots.
So one could take a resampled file and, if its frequency cutoff (measured at a suitable threshold) was below, say, 18kHz, judge that it came from a 128kbps original. It would be a lot of work to do that for all files, of course - you really need a tool that measures the frequency cutoff in one go.
What surprised me was that there was very little difference in the spectrum apart from the upper end - but of course, that is where the sample rate required for faithful reproduction is greatest, so where the greatest reduction in file size can be obtained. Now off to look at the waveforms in more detail...
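A rough one-shot cutoff detector along those lines can be sketched in pure Python (naive DFT, fine for a short clip; real use would want numpy's FFT plus windowing, and my -60dB threshold here is an assumption, not the -100dB point used above):

```python
import math

def dft_magnitudes(samples):
    """Naive DFT magnitude spectrum. O(n^2), so keep the clip short."""
    n = len(samples)
    mags = []
    for k in range(n // 2):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        mags.append(math.hypot(re, im))
    return mags

def cutoff_hz(samples, sample_rate, threshold_db=-60.0):
    """Highest frequency whose level stays above threshold_db
    relative to the spectral peak."""
    mags = dft_magnitudes(samples)
    peak = max(mags)
    cutoff_bin = 0
    for k, m in enumerate(mags):
        if m > 0 and 20 * math.log10(m / peak) > threshold_db:
            cutoff_bin = k
    return cutoff_bin * sample_rate / len(samples)
```

On a clip that started life as a 128kbps MP3 you'd expect a reading somewhere around the 16-17kHz region; a full-band WAV should read close to Nyquist.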
I, of course, am retired, so have nothing better to do - and it was fun.
|
|
|
|
|
Thanks for confirming my hunch.
A WAV file from CD should be sampled at 44.1 kHz, which puts Nyquist at 22.05 kHz -- pretty close to the 21,300 Hz you see. My older ears don't hear the difference between 128kbps and 320kbps unless I'm paying close attention.
|
|
|
|
|
You can compare decoded raw bitstreams with tools like Audacity, but IMHO it is unnecessary.
Re-encoding with VBR (variable bitrate) encoding would be the way to go. It will use a lower bitrate when there is less sound information, and will use your maximum provided bitrate only when it actually needs it.
Also, I would strongly advise migrating from MP3 to a better format unless you're hardware-locked.
128 kbps Opus is said to be transparent (almost indistinguishable from the uncompressed form to 99% of ears), while 80-96 kbps Opus is roughly equivalent to 320 kbps MP3. And the format is royalty-free with wide support across OSes.
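A sketch of what that migration could look like, assuming an ffmpeg build with libopus (filenames are placeholders; Opus encodes VBR by default, and -b:a sets the average target):

```shell
# Batch-convert a folder of MP3s to 96 kbps Opus.
for f in *.mp3; do
    ffmpeg -i "$f" -c:a libopus -b:a 96k "${f%.mp3}.opus"
done
```

Keep the originals until you've spot-checked a few results; transcoding from MP3 compounds losses, so this makes most sense when re-ripping from the source isn't an option.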
|
|
|
|
|
Never heard of Opus until now. If I can't throw something at a random player and have it "just work", it's not even a contender in my book.
I'm reading that "[...] in many respects Ogg Opus is the successor to Ogg Vorbis". That name, I recognized, and I stopped reading there.
You can have the most awesome format in the world, if the support isn't there, it's a non-starter. Re-encoding is never a good idea, and going down this path to me sounds like having to re-encode in a different format every couple of years for the sake of using the format-du-jour. No thanks.
|
|
|
|
|
No, there will not be any such software. In short, your friend cannot return to their original 128k quality files from what they have now.
The process you're referring to is called transcoding. Whenever you transcode across bitrates of a lossy codec, you lose information because the quantization of the samples is different. So quality was lost in the 128k -> 320k transcoding, and more quality will be lost doing another transcoding from 320k -> 128k.
Maybe your friend is now at a financial point in their life where they can afford to replace the mangled files?
|
|
|
|
|
patbob wrote: Maybe your friend is now at a financial point in their life where they can afford to replace the mangled files?
There's "affording" the time, and "affording" the money. Despite the old saying, I hope you're not suggesting both are interchangeable.
|
|
|
|
|
Hope all our Lebanese friends are safe and sound.
I'd rather be phishing!
|
|
|
|
|
For those that haven't seen it yet: RAW VIDEO: Beirut blast caught on camera - YouTube[^]
Good luck to all there, not just CP members - that's one big explosion.
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
"Common sense is so rare these days, it should be classified as a super power" - Random T-shirt
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
From the angle shown, it looks like a fire, that set off the explosion. Someone's ammunition dump go up in smoke?
NOT a good day for civilians in Beirut.
EDIT: according to the Israeli web site www.ynet.co.il (the web site of a major Israeli paper), the small explosions were set off by a fire in a warehouse storing fireworks. The big explosion occurred when the fire reached a warehouse containing a few tons of nitrates. Nitrates, for those who've forgotten their chemistry, are the starting point for chemical fertilizer and also for explosives.
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|
|
Daniel Pfeffer wrote: NOT a good day for civilians in Beirut.
Slight understatement there - the airport (10km / 6 miles from the blast) was damaged ... Latest I've seen is 50 dead, 2,750 injured, but they are still looking for the firemen / firewomen who were working on the original blaze. 90% of the wheat for the Lebanese staple flatbread is imported through the now destroyed port. And with the recent economic fun-and-games in Lebanon, it's all just going to make a very bad situation much, much worse.
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
"Common sense is so rare these days, it should be classified as a super power" - Random T-shirt
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
OriginalGriff wrote: 90% of the wheat for the Lebanese staple flatbread is imported through the now destroyed port.
It's worse than that. The big building near the port looks like a grain storage silo. A still picture from another angle shows that the silo was damaged, so even if grain were delivered, they have nowhere to store it!
The two closest ports which have grain-handling facilities are Haifa (in Israel), and Latakia (in Syria). The questions are (a) whether they have the combined additional capacity to handle the grain destined for Lebanon, (b) whether the Lebanese government will accept aid coming via Israel, and (c) whether Hezbollah will interfere with any shipments coming from Israel.
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|
|
Technically, I think Lebanon is still at war with Israel - I just checked and there has been a cease fire in effect since '07, but there is still border fun-and-games plus Hezbollah working in Lebanon - and as for Syria ... they are still having a civil war, plus what's left of ISIS waiting for an opportunity.
Couldn't have happened in a worse place at a worse time, I guess. What a mess.
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
"Common sense is so rare these days, it should be classified as a super power" - Random T-shirt
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|