[15:34 Fri, 30. September 2016 by Rudi Schmidts]
Gated Recurrent Units (GRUs) are the latest craze in the broad field of artificial intelligence. They resemble LSTMs (Long Short-Term Memory units) but work without an output gate, and in recurrent network structures they apparently excel at image compression. At least that is what Google's AI researchers have now demonstrated with a published TensorFlow model: at the same data rate, the picture quality can be significantly "better" than JPEG's.

Skimming the paper and the linked blog post, the first thing that stands out is that macroblocks actually disappear with Google's new approach. Hardly surprising, since the neural networks here try to represent the entire image through one very complex formula rather than through individual parts (the sketches after the list below illustrate the idea). Still, fine details naturally get lost as the data rate decreases, as the image below shows:

(Image: PIC 1)

From our own modest experience with the subject, we want to add two points about compression using neural networks:

- First, the required compression time will not be comparable to today's common methods for the foreseeable future. For video applications with a pragmatic time budget, this kind of compression is definitely not usable for now.
- Second, these methods and ideas are highly exciting for upscaling images, and in some cases already very successful. Simply put: image structures expressed as formulas are, to a certain degree, resolution-independent. Learned structures can therefore be rendered at a higher resolution.
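To make the GRU remark concrete, here is a minimal NumPy sketch of a single GRU step. This is not Google's actual model; the function name and weight shapes are illustrative assumptions. The point is merely that, unlike an LSTM, a GRU has no output gate and no separate cell state, just two gates that blend the old hidden state with a candidate one.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, params):
    """One GRU step (illustrative): update gate z, reset gate r,
    candidate state h_tilde. No output gate, no separate cell state."""
    Wz, Uz, Wr, Ur, Wh, Uh = params      # illustrative weight matrices
    z = sigmoid(Wz @ x + Uz @ h_prev)    # update gate: how much to renew
    r = sigmoid(Wr @ x + Ur @ h_prev)    # reset gate: how much history to use
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev))  # candidate hidden state
    return (1.0 - z) * h_prev + z * h_tilde        # blended new hidden state
```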
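And to show why macroblocks can disappear: the paper describes a recurrent codec that repeatedly encodes the residual of the whole image, so the bitstream grows in quality steps rather than in spatial blocks. The loop below is a rough sketch of that scheme under our reading of the paper; `encode`, `decode` and `binarize` are hypothetical stand-ins for the trained networks, not real API calls.

```python
import numpy as np

def compress_progressive(image, encode, decode, binarize, n_iters=16):
    """Rough sketch of iterative residual coding: each pass encodes only
    the error the previous reconstruction left behind, so cutting the
    bitstream off early simply yields a coarser version of the whole image."""
    bits = []
    reconstruction = np.zeros_like(image)
    residual = image
    for _ in range(n_iters):
        code = binarize(encode(residual))          # a few bits per pass
        bits.append(code)
        reconstruction = reconstruction + decode(code)
        residual = image - reconstruction          # what is still missing
    return bits, reconstruction
```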