Hi-res (high-resolution) audio means music files recorded at 24-bit depth and at sampling rates higher than 44.1kHz (often 48kHz, 96kHz and 192kHz). High definition audio promises "better than CD quality", and is being offered by companies like Apple and Sony. But can you notice the difference? Will your library of songs suddenly become more realistic and lifelike? And is the price that you pay (for devices like the expensive Pono Player) actually worth it?
It is true that higher resolutions have benefits over CD when it comes to dynamic range and extended frequency response. But the files take up far more space on your hard disk for a quality improvement that's all but negligible. In all honesty, the loudness war, recording techniques, etc. make a far bigger difference to the end result than high-res audio ever will.
Hey, I'm HandyAndy and thanks so much for watching my video! If you did enjoy it, then please subscribe to my channel, and make sure to hit that like button!
 Nothing to see here, that's just an example.
 Undithered, 16-bit/44.1kHz audio has up to 96dB dynamic range. If you apply proper dithering and noise shaping to the signal, then you can theoretically hear a signal that's more than 120dB down - that's 24dB below the noise floor. (http://people.xiph.org/~xiphmont/demo/neil-young.html)
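That 96dB figure isn't magic, it just falls out of the bit depth. Here's a quick sketch (my own, not from the linked article) using the usual rule of thumb of roughly 6dB of dynamic range per bit:

```python
import math

# Dynamic range of undithered n-bit PCM: 20 * log10(2**n),
# i.e. about 6.02 dB per bit. (This ignores the small +1.76 dB
# term you get when comparing a full-scale sine to the
# quantization noise floor.)
def dynamic_range_db(bits: int) -> float:
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # ~96.3 dB for 16-bit
print(round(dynamic_range_db(24), 1))  # ~144.5 dB for 24-bit
```

Dithering then lets you hear material below that noise floor, which is why the 96dB number understates what 16-bit can actually deliver.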
 You can't notice the difference when you're playing back 16-bit and 24-bit audio side-by-side. However, you CAN when you're recording. Multitrack recording at 24-bit means that far less quantization distortion is added whenever you apply an effect, adjust the amplitude, etc. (http://productionadvice.co.uk/dither-or-distort/)
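To see why recording at 24-bit matters, you can simulate a chain of level changes where the signal is re-quantized after each step, the way a fixed-point workflow would. This is a toy sketch with made-up gain values, not anything from the linked article:

```python
import math

def quantize(x, bits):
    # Round to the nearest n-bit step (signal values in [-1, 1))
    steps = 2 ** (bits - 1)
    return round(x * steps) / steps

def process(signal, bits, gains):
    # Re-quantize after every gain change, accumulating error each time
    out = signal
    for g in gains:
        out = [quantize(s * g, bits) for s in out]
    return out

def rms_error(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

# Hypothetical quiet test tone and an arbitrary chain of level changes
sig = [0.1 * math.sin(2 * math.pi * 1000 * t / 44100) for t in range(441)]
gains = [0.5, 1.9, 0.7, 1.4]

# The "ideal" result: the same gain chain applied in full precision
total_gain = math.prod(gains)
ideal = [s * total_gain for s in sig]

err16 = rms_error(process(sig, 16, gains), ideal)
err24 = rms_error(process(sig, 24, gains), ideal)
print(err24 < err16)  # 24-bit accumulates far less quantization error
```

Each extra processing step adds another layer of rounding error at 16-bit, while at 24-bit the error stays well below anything audible.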
 Why does CD go to 22.05kHz instead of just 20k? Well, history is to blame. Early CD digital masters used to be sent to pressing plants on U-matic video tapes, and the digital information was modulated into the analogue video signal. 44.1kHz was chosen because it resulted in the highest data rate that was supported by both PAL and NTSC cassettes. Weird, hey? (https://en.wikipedia.org/wiki/44,100_Hz)
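The arithmetic behind that choice is neat: with three audio samples stored per active video line, the commonly cited line counts for both standards land on exactly the same number.

```python
# 44,100 Hz fits both video standards when you store
# 3 samples per active video line:
ntsc = 60 * 245 * 3   # 60 fields/s x 245 active lines x 3 samples
pal  = 50 * 294 * 3   # 50 fields/s x 294 active lines x 3 samples
print(ntsc, pal)  # both come out to 44100
```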
Well, we MIGHT. The reason why 96k (and later 192k) became standard in the pro audio world was that it was difficult to build an anti-aliasing low-pass filter that would cut off at 22.05kHz without attenuating the signal any more than was necessary. But for a consumer format, the oversampling on modern DACs has kind of bypassed that requirement.
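You can put numbers on why that filter was hard to build. The filter has to pass everything up to ~20kHz and reject everything above the Nyquist frequency (half the sample rate), so the room it has to roll off in looks like this (a sketch of my own, assuming a 20kHz audible top):

```python
# Width of the transition band an anti-aliasing filter must cover:
# from the top of the audible band to the Nyquist frequency.
def transition_band(sample_rate, audible_top=20_000):
    nyquist = sample_rate / 2
    return nyquist - audible_top

print(transition_band(44_100))  # 2050.0 Hz - a brutally steep filter
print(transition_band(96_000))  # 28000.0 Hz - a gentle slope is fine
```

At 44.1kHz the filter has barely 2kHz to drop from full pass to full stop; at 96kHz it has 28kHz, which is why high rates made analogue filter design so much easier.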
 Yes, they aren't necessary for a playback-only consumer format. But in the pro sound engineering field, I must admit that they are quite useful.