first: I have done this test myself many times in various ways, including recreating albums as a mix of 16bit FLAC and v0 MP3 (track by track, not within tracks), putting them on, and listening on speakers. I can tell sometimes, but the v0 still sounds great.
I was able to distinguish the 3 rock recordings with confidence; high-frequency transients sounded more impactful in WAV. The Queensryche in particular has a lot of (well-applied!) dynamic compression on the acoustic guitar and vocal which really brings out those transients.
However, if I heard the MP3 in isolation I would not detect anything was off. They all sounded good.
The Morricone and Vangelis I had no conviction either way and I guessed wrong both times. I suspect in their recording/mixing/mastering a lot of high frequency sound was lost anyway. In either case, I don't know if the CD master was made from original tapes or not. I know the Blade Runner OST has had a convoluted release history. Morricone has a 2004 CD master which is pretty well liked.
"Moving Pictures" was recorded to tape, but was notably an early digitally mastered album. Maybe that has resulted in preserved high frequency sound.
Compressed audio is great, I love it and I use it a lot.
I use CD Quality for archival purposes and my home library; for most of the past decade hard disks have been inexpensive. I convert to Opus 192 for mobile devices.
Another reason for CD Quality archiving - I have a long term idea of recreating a CD collection. I want to get printable CDs and burn the audio/print the art because I want my children to have the experience of going thru a shelf or flipping through a binder, putting the disk in the tray, pressing play. I always loved doing that.
Again, could I tell if I transcoded a well-encoded MP3 back to Red Book? Maybe not consistently, but it's more likely that the mp3 -> CD transcode would introduce audible problems than the WAV -> mp3 encoding would.
aitchnyu 9 hours ago [-]
I managed only the Queensryche with my Sennheiser HD 490 Pro and Qudelix 5k. Thanks for giving the name to my vague feeling.
kimixa 1 days ago [-]
It could often depend on the encoder - things like LAME have a hard low-pass filter even on the "insane" settings [0]. This can often mean that, if you're someone who can detect those high frequencies (probably not most adults), you may pretty easily be able to tell the difference when those frequencies are present in the recording.
Additionally, a lot of audio pipelines (even beyond the DAC - amplifiers and the like) can end up with artifacts and harmonics in more audible frequencies. This is often more notable with extremely high-frequency content (like 96 kHz sample rates and similar) - there's honestly nothing any human can actually hear near that range - but that doesn't mean it doesn't affect audible ranges when actually played back on real equipment.
The big point is that "Being Able To Tell The Difference" isn't always the same as "Better Quality". You're often just replacing one artifact of the playback pipeline with another. Neither may truly match the original performance.
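The point about inaudible content creating audible problems can be shown numerically. In this toy Python model (a made-up nonlinearity, not a simulation of any particular amplifier), two purely ultrasonic tones pass through a slightly nonlinear stage and an audible 2 kHz intermodulation product appears:

```python
import math

def tone(freq, sr, n):
    return [math.sin(2 * math.pi * freq * i / sr) for i in range(n)]

def level_at(samples, freq, sr):
    """Magnitude at one frequency (a single-bin DFT)."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * freq * i / sr) for i, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * freq * i / sr) for i, s in enumerate(samples))
    return 2 * math.hypot(re, im) / n

sr, n = 192000, 19200  # 0.1 s at a 192 kHz sample rate
# Two tones well above any human's hearing range
ultrasonic = [a + b for a, b in zip(tone(28000, sr, n), tone(30000, sr, n))]
# A mildly nonlinear "amplifier" stage: adds a small squared term
distorted = [s + 0.1 * s * s for s in ultrasonic]

print(level_at(ultrasonic, 2000, sr))  # ~0: nothing audible going in
print(level_at(distorted, 2000, sr))   # ~0.1: an audible 2 kHz difference tone
```

The squared term turns the 28 kHz and 30 kHz inputs into a 30 - 28 = 2 kHz difference tone, which is squarely in the audible band.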
Huh, I guessed all correct. Random guess would have a 1/(2^5) = 1/32 chance of being correct.
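The arithmetic above checks out and generalizes: the chance of scoring k or better out of n coin-flip trials is a binomial tail. A quick Python check (the 5-trial count comes from the test in question):

```python
from math import comb

# Chance of guessing all 5 two-way (WAV vs MP3) trials correctly
n = 5
print(0.5 ** n)  # 0.03125, i.e. 1/32

# More generally: chance of k or more correct out of n by pure guessing
def p_at_least(k, n, p=0.5):
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(p_at_least(4, 5))  # 0.1875 -- getting 4 of 5 right is not that unlikely
```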
I don't make any claim to any special hearing or expertise. I've been listening to practically only lossy music since around '98, ripping from CDs at that time.
Morricone and Vangelis have been especially hard for me to tell apart, could have been a random guess on my part (I listened to those ~20 times).
When I read the title I expected to hear the actual _difference_ between the lossless and lossy waveform - i.e. only the actual artifacts. Could be a fun exercise.
LazyMans 1 days ago [-]
Correctly identified with 100% accuracy. The author said they can't, but for me the MP3 versions have noticeable high-frequency artifacts that make the recording sound slightly less clear. Using Sony XM5.
elabajaba 17 hours ago [-]
Part of that might be that you're using them wireless, because then you're double-compressing the audio, which amplifies the artifacts (mp3 -> Bluetooth compression).
littlexsparkee 1 days ago [-]
Acoustic guitar, drums are a good signal - lower quality just sounds hollow / spacey. The most obvious a/b was the Gamma Ray sample, imo (with mid-range Beyer headphones, wired). It's easiest to tell with recordings you know well, for me Steely Dan is a good reference. I rip to FLAC for archiving even though 320 or 250+ VBR is probably 'close enough' unless I'm scrutinizing.
astrange 10 hours ago [-]
> I rip to FLAC for archiving even though 320 or 250+ VBR is probably 'close enough' unless I'm scrutinizing.
MP3 is fundamentally flawed and has audible artifacts no matter what the bitrate is. If you use a newer codec (AAC or Opus) you'll probably not notice anything.
MoonWalk 1 days ago [-]
The high-frequency "swishiness" is the usual giveaway.
But sadly today most popular music is ruined beyond repair with dynamic compression, not data compression. The craven stupidity of the loudness war may be unequaled in the history of art, and yet even the artists often don't seem to understand what the problem is. You see legendary artists complaining about modern sound quality (Dylan, Neil Young, and so forth) but then cheerleading for absurd sampling rates and bit depth. NO. That isn't the problem. I have 45-RPM records that sound better than their "lossless," "remastered" incarnations on streaming services.
The biggest problem in popular music (and I would say this probably pervades everything but classical at this point) is dynamic compression.
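For what it's worth, the damage is measurable. A crude proxy is the crest factor: peak level over RMS level in dB, which collapses as a master is pushed into limiters. A stdlib Python sketch of the idea (real dynamic-range meters use windowed, percentile-based measurements; this is just the principle):

```python
import math

def crest_factor_db(samples):
    """Peak-to-RMS ratio in dB; heavily limited masters score low."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(peak / rms)

# A pure sine has a crest factor of sqrt(2), about 3.01 dB
sine = [math.sin(2 * math.pi * i / 100) for i in range(1000)]
print(round(crest_factor_db(sine), 2))  # 3.01

# Drive it into a hard clipper ("smash" it) and the crest factor drops
smashed = [max(-0.5, min(0.5, 4 * s)) for s in sine]
print(round(crest_factor_db(smashed), 2))  # well under 1 dB: loud but lifeless
```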
Slow_Hand 1 days ago [-]
It’s not so simple.
Today “loudness” is an aesthetic choice and good mixers and producers know how to craft a record that is both loud and of good sonic quality.
There is a place for both dynamic records (in the sense of classical or old jazz records) and contemporary loudness aesthetic.
Can inexperienced producers/mixers do a hack job trying to emulate the loud mixes of pros? Yes. The difference comes down to taste and ability to execute with minimal sonic tradeoffs.
Source: I have a long history producing, mixing, and mastering records and work among Grammy winners regularly. Very much in the dirt on contemporary records.
MoonWalk 2 hours ago [-]
From my observations and from industry people I've read opinions from, the early '90s were the peak for mastering quality. Digital was well-understood, but wasn't being abused.
Listen to the original pressings of songs like "Creep." That guitar noise punched through because there were still dynamics back then. Music was fun to listen to, especially with headphones. The soundscape of an album sometimes led me to give music a second chance that I might not have bothered with if it didn't sound so good.
Now, even very catchy music is tiresome and quickly abandoned because of dynamic compression. It's fatiguing (if not grating) to listen to. Yes, there are a few exceptions here and there. "Gives You Hell" by the All-American Rejects comes to mind. But in general music sounds like ass now. Take Coldplay... regardless of what you think of the content, this music should sound great. But it's sonically dull trash.
Slow_Hand 2 hours ago [-]
The thing about mastering is that unless you're a part of the production team and get to hear the before/after you'll almost never know what the mastering engineer's contribution actually was. Done well, their role is invisible.
Mastering engineers work with the record that they receive from the mixer. It's entirely possible that the smashed (over-limited) record was handed to them by the mixer and approved by the artist. In that case the ME's hands are usually tied. They work with what they receive.
Likewise, the mixer may receive a reference mix (from the producer) that is smashed. The mixer has far more ability to influence the sonics than the ME (waaay more), but they too can have their hands tied if the artist is really attached to the vibe of that rough producer mix.
Professional mixers and ME's are well aware of the negative effects of the loudness wars. It's well understood by any working professional today. Ultimately the buck stops with the record's producer and the artist. They're the ones seeing the project through from beginning to end.
The difference falls on them, between a "loud" record that sounds like lifeless trash and a "loud" record crafted with skill, taste, and intention that has depth and impact. As I said, amazing "loud" records do exist when all stages of the record's production team are aligned. But it requires restraint and taste on the production team and the artist.
---
You're not wrong that something changed around the mid 90s. Until the late 80s records were being mixed primarily for vinyl. The limitations of the medium (namely the needle would skip out of the groove if you tried to print a loud or bass-y mix) kept the loudness in check. You simply COULDN'T make a record that loud. This limitation acted like speed bumps. But perceptual loudness has always been an objective of recording engineers since the dawn of recording.
What happened is that in the 90's digital tools (particularly digital limiting) in combination with digital playback mediums (CDs) opened up the door to squeeze greater loudness and new sonic aesthetics out of records. As such, these tools have been abused and over-cooked. In some cases that abuse may be the objective.
Today we're well aware of the trade-offs and to some artists it just doesn't matter. They WANT it smashed. It ultimately comes down to restraint, taste, and good technical know-how to get a flavor of loudness that doesn't have too many tradeoffs.
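For readers who haven't seen one, the digital limiter at the center of all this is conceptually tiny. A toy Python version (hypothetical: no lookahead, naive release, nothing like a production plugin) that ducks the gain whenever a peak would exceed the ceiling:

```python
import math

def toy_limiter(samples, ceiling=0.9, release=0.999):
    """Duck gain instantly on peaks, recover slowly toward unity."""
    gain = 1.0
    out = []
    for s in samples:
        if abs(s) * gain > ceiling:
            gain = ceiling / abs(s)  # instant gain reduction at the peak
        out.append(s * gain)
        gain = gain * release + (1.0 - release)  # slow recovery toward 1.0
    return out

# Drive a too-hot signal into it: peaks are pinned at the ceiling
loud = [1.5 * math.sin(2 * math.pi * i / 64) for i in range(1000)]
limited = toy_limiter(loud)
print(max(abs(v) for v in limited))  # ~0.9
```

Turning the input gain up before a stage like this is the "smashing" described above: every peak flattens to the ceiling while the average level climbs.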
SoleilAbsolu 1 days ago [-]
Agreed regarding the audibility of (data-) compressed audio, just put on some classic jazz with trumpets and lots of cymbals and the artifacts are immediately apparent.
Not going to argue with you regarding dynamic compression, but after backing away from the worst excesses of the volume wars by mastering engineers in the mid '00s, things are sounding better to my ears. Dynamic compression can sound good (even in the extreme) if done for artistic effect. Like here's Beck's Ramona where the drums & cymbals have the tar squashed out of them with serious limiting, which to my ears nicely tames the sonics of Joey Waronker's spirited performance, while fitting well dynamically into the rest of the song.
https://www.youtube.com/watch?v=e3yZ9OVjzbE
That said, maybe the engineers responsible for some of the worst dynamic squashing could be pressed into TV/film audio service where in 2026, there are still extreme volume imbalances between on-screen dialogue and everything else (hint the dialogue isn't loud enough and the everything else, especially crashes and explosions, are wayyy too loud).
MoonWalk 2 hours ago [-]
Sure, compressing individual elements judiciously is a valid and even necessary choice. But the so-called "remastering" that has ruined our whole pop/rock heritage as represented today on streaming services is a heinous, lazy hack job that ruins people's enjoyment of music... even though they can't put their finger on why.
When I was a little kid, I'd ride my bike to the record store and buy my two or three favorite current songs on 45. I noticed that they didn't sound as "fat" as they did on the radio. So I got an equalizer. But that of course wasn't the answer.
Over time I realized that I liked the sound of the records better. They were more fun to turn up loud. Likewise I realized that the oddly-quiet station on my FM dial (WXRT in Chicago) sounded the best. All because it, like the records, was less dynamically compressed than the other stations.
A huge number of people alive today have never heard good-sounding pop music, which is disgraceful. Near-perfect sound reproduction is within everyone's reach now, but the recordings themselves are ruined before we get them.
It's all even more stupid when you consider that compression could have been (and was) done ON THE PLAYBACK DEVICE. My 1996 Ford CD player has a button on it labeled "Compress."
Duh. People aren't getting smarter.
wavemode 22 hours ago [-]
Also got 100% (Presonus Eris + sub), but I had to struggle. Especially on The Good, The Bad and The Ugly.
I would never know the difference during casual listening. Only in this setting where I'm told upfront that there is a difference, do I notice it.
Once you hear the difference in sound quality / see difference in image quality you cannot undo it.
I have become very picky with display resolution and text clarity, and it has not served me well. I miss the days I was happy with a 1080p monitor.
throwawaytea 1 days ago [-]
In 2011 I had a 28" 1080p monitor I thought was amazing. Like ground breaking enjoyment using it for my sales job inside the CRM.
Now if you ask me, that monitor is causing eye damage and I'd rather not use the computer that day than use it.
saltcured 1 days ago [-]
Well, nobody should be happy with 16:9 aspect ratio, but as you get older you may find that your happiness with a lower pixel pitch returns... ;-)
oliyoung 1 days ago [-]
As the author points out, it's not really a "MP3 vs Uncompressed" conversation, it's a "which encoder are you using" conversation ...
because any of us from the late 90s/early 2000s who used the early versions of LAME will tell you in a second how easy it was to pick MP3 over raw, even at 320kb/s
saltcured 1 days ago [-]
I remember this repeated with the opensource AAC encoders. We had pretty decent LAME MP3 output by then, but everybody wanted to squeeze bytes and suddenly we were hearing a lot of terrible artifacts again.
Few audio things bug me more than the kind of tinkly pre-echo effects that were pervasive for a while.
PaulHoule 1 days ago [-]
I will concur with that.
When I first started encoding MP3s I used a 128kbps rate, which is noticeably inferior to the original CD. I noticed this in the early 2000s when I wound up listening to a CD of some music I usually listened to as a 128kbps MP3 and was blown away by how much more I heard.
I'd say that 192kbps is much better and the 320kbps that the author advocates is basically transparent.
_kulang 15 hours ago [-]
Apparently, I’m very easily able to tell them apart. It’s just that I always picked the MP3 as the WAV
hxorr 1 days ago [-]
Some people simply have better hearing than others.
Also, you can train yourself for what to listen for, to a point.
Tagbert 1 days ago [-]
Noted, but I think I'll pass. Doesn't seem to be much benefit if you have to train yourself to discern a difference just so you can stream massive files.
Of course this does matter to some people and I say "have fun".
maxwg 1 days ago [-]
Pretty great demo! It'd be great to see a 128/192 comparison.
I had Tidal many years back, and from the Lossless v Regular I only ever noticed a difference when it came to breathy sounds/etc. I did see that Tidal would burn through like 50GB of data monthly though.
Also - you may want to test some more modern recordings, the microphone/mastering quality of things nowadays is far better than what it was 2 decades ago (despite what some audiophiles may claim)
parkersweb 1 days ago [-]
I’ve done a bunch of testing over the years including a similar test of ‘can people hear mp3 compression’ as well as comparison of mp3 variable bit rate qualities.
In practice, on average playback equipment (by which I mean decent hifi) in an average listening environment most people can’t tell the difference.
But… I’ve also done blind testing with a top mastering engineer on studio speakers, and he was able to identify 48 vs 192 reliably.
Mastering quality was ruined by the battle for perceived loudness, so masters with a decent degree of dynamic range are definitely helpful.
saltcured 24 hours ago [-]
On the other hand, I visited a friend's recording studio in my prime listening years and remember being blown away when they played me some recording masters that were 24 bit/192 kHz. This was just one raw, uncompressed bit stream versus another. It was the first and only time I had felt that a straight up stereo speaker reproduction was completely transparent, like the performers must actually be there somehow in that acoustic space.
I've heard things get close using regular CD audio with some umpteen-channel DSP effects, but nothing like that from two speakers and a straight playback with no effects processing.
I've also had a binaural headset demo get really really close. I imagine it could be better, but this was for some generic model, not anything that is tuned to your own personal ear shape etc.
mmmlinux 1 days ago [-]
I mean, 48 is pretty much trash. I'd hope a top mastering engineer can tell the difference between that and 192...
DiskoHexyl 1 days ago [-]
It was really easy to tell which is which for the vocals.
On the other hand, the only sample in which I didn't hear ANY difference is Ennio Morricone's, to the point where I couldn't really tell it apart from its 56kbit/s version.
Can the hearing be selectively bad for some frequencies within the standard 20-20000 range, and normal for the others?
beAbU 13 hours ago [-]
> Can the hearing be selectively bad for some frequencies within the standard 20-20000 range, and normal for the others?
Yes. Your ears are acoustic filters, just like microphones and speakers. When you get your ears tested, you get a chart that looks suspiciously like a speaker/mic response chart: frequency on the x-axis, dB attenuation on the y-axis.
So a person with bad ears could have fine hearing below, say, 5 kHz, but with a sharp cut-off beyond that. Or it could be the other way round. Or you could have a notch in the middle. Calibrated hearing aids just take this chart and boost the frequencies your ears are attenuating. You can EQ your own sound equipment based on the chart, to get a result that compensates for your ears.
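The compensation idea in the last paragraph is just interpolation over the audiogram. A minimal Python sketch with made-up numbers (real hearing-aid fittings use prescription formulas rather than a flat inversion, so treat this purely as illustration):

```python
def compensation_gain_db(freq_hz, audiogram):
    """Boost (dB) for a frequency, interpolated from (freq_hz, loss_db)
    audiogram points -- mirroring the attenuation the ear chart shows."""
    pts = sorted(audiogram)
    if freq_hz <= pts[0][0]:
        return pts[0][1]
    if freq_hz >= pts[-1][0]:
        return pts[-1][1]
    for (f0, g0), (f1, g1) in zip(pts, pts[1:]):
        if f0 <= freq_hz <= f1:
            t = (freq_hz - f0) / (f1 - f0)
            return g0 + t * (g1 - g0)

# Hypothetical chart: fine below 2 kHz, a notch at 4 kHz, partial recovery at 8 kHz
audiogram = [(250, 0), (1000, 0), (2000, 5), (4000, 25), (8000, 15)]
print(compensation_gain_db(3000, audiogram))  # 15.0 (halfway between 5 and 25)
```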
jrmg 1 days ago [-]
I wonder how likely it is that the people who are posting that they got most of them correct are just the people who happened to randomly guess correctly with 50/50 chance each time - people who guessed wrong or thought they couldn’t tell probably aren’t going to post…
etempleton 1 days ago [-]
I was right for all but one. High frequencies give it away. I can tell the difference, but it was certainly close enough that I am not sure I care anymore.
[0] https://sound.stackexchange.com/questions/38109/lame-why-is-... - while not an explicit "low-pass" filter, the default option of "-Y" does something similar.