ETH Zurich has published a highly interesting paper on "extreme learned image compression" with neural networks. You can try out the exciting result yourself, interactively, on the linked project page.
This is because the page contains an image viewer that uses JavaScript to let you switch between the "learned compression" and several comparison codecs (BPG, WebP, JPEG 2000, JPEG, and the uncompressed original).
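We have not inspected the project page's actual script, but such a before/after viewer is typically just two stacked images whose boundary follows the mouse. A minimal sketch (all names hypothetical) could look like this:

```javascript
// Toy sketch of a before/after comparison viewer: the "learned
// compression" image sits on top of a comparison codec's image, and
// the mouse x position decides how much of the top layer is revealed
// via a CSS clip rectangle.

// Pure helper: map a mouse x coordinate inside the viewer to a
// reveal percentage for the top image, clamped to [0, 100].
function clipPercent(mouseX, viewerLeft, viewerWidth) {
  const relative = (mouseX - viewerLeft) / viewerWidth;
  return Math.min(100, Math.max(0, relative * 100));
}

// In the browser one would wire this up roughly like so
// (commented out so the sketch also runs outside a DOM):
// viewer.addEventListener("mousemove", (e) => {
//   const pct = clipPercent(e.clientX, viewer.offsetLeft, viewer.offsetWidth);
//   topImage.style.clipPath = `inset(0 ${100 - pct}% 0 0)`;
// });

console.log(clipPercent(150, 100, 200)); // mouse 25% into the viewer -> 25
```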
At very low data rates, neural compression performs significantly better. But the comparison with the original picture is especially revealing: while conventional compression methods let details dissolve into blocky mush, neural compression "invents" the missing details.
This happens because the image is synthesized from learned image content, and because the original serves, more or less, only as an object reference description during compression. If you take a closer look at the bus or the facades, however, you will notice that apart from being positioned correctly in the image, the objects differ significantly from the original.
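A toy numeric example (our own illustration, not the paper's method) shows why a codec tuned for pixel error prefers mush while a generative decoder can prefer plausible but wrong detail. We compare two "reconstructions" of a one-dimensional high-frequency signal by mean squared error:

```javascript
// Toy illustration: a blurred reconstruction beats a sharp-but-wrong
// one on pixel error, even though the latter "looks" more detailed.

const original = [0, 1, 0, 1, 0, 1, 0, 1];                  // fine detail
const blurred  = [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5];  // detail averaged away
const invented = [1, 0, 1, 0, 1, 0, 1, 0];                  // detail present, phase wrong

// Mean squared error between two equal-length signals.
function mse(a, b) {
  return a.reduce((sum, v, i) => sum + (v - b[i]) ** 2, 0) / a.length;
}

console.log(mse(original, blurred));  // 0.25 -- "wins" on pixel error
console.log(mse(original, invented)); // 1    -- loses, despite looking detailed
```

This is the core of the trade-off: a decoder that invents details can look far better to the eye at the same bit rate while being further from the original pixel-for-pixel.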
So what is better at low data rates: no details, or fictitious details? We think that depends on the application. In any case, it already looks as if images produced by such an imaginative decoder will have to be labeled as such in some way.
Oh, and for smooth video compression this method should not be usable for the time being, because the decision about which learned details are used for which object can "flip" with even a minimal change of perspective. The model of a car could thus change abruptly between two frames...
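The flicker argument can be sketched in a few lines (a deliberately crude caricature with hypothetical names, not the actual model): if the decoder picks the nearest learned template for an object descriptor, a tiny change in that descriptor between frames is enough to flip the choice:

```javascript
// Toy sketch of temporal instability: learned "car models" reduced to
// scalar descriptors, and a nearest-template decoder.

const templates = { sedan: 0.49, hatchback: 0.51 };

// Return the name of the template closest to the given descriptor.
function pickTemplate(descriptor) {
  let best = null;
  let bestDist = Infinity;
  for (const [name, value] of Object.entries(templates)) {
    const d = Math.abs(descriptor - value);
    if (d < bestDist) { bestDist = d; best = name; }
  }
  return best;
}

console.log(pickTemplate(0.499)); // "sedan"
console.log(pickTemplate(0.501)); // "hatchback" -- a tiny shift flips the car model
```

A real video codec would need some form of temporal conditioning or consistency loss to keep such choices stable across frames.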