MIT regularly has something special to show when it comes to artificial intelligence. This time it is a program that can filter individual sound sources out of a video depending on where in the scene they originate.
This lets you click on the spot where an instrument appears during a music performance and then hear it, more or less cleanly filtered, as a solo instrument. For musicians who want to listen to individual parts of an ensemble performance, this is certainly a great help. Within limits, it also makes it possible to change a mix after the fact, raising or lowering an instrument's share of the volume. Similar tools would be welcome in audio post-production for video as well, for example to bring out speech in location sound without amplifying the background noise.
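The MIT system uses a learned, location-dependent separation model, which is far beyond a few lines of code. But the underlying idea, isolating one source from a mixture by filtering in the frequency domain, can be sketched with a toy example. Everything below is synthetic and purely illustrative: two sine tones stand in for "instrument of interest" and "rest of the ensemble", and a simple frequency mask plays the role of the learned filter.

```python
import numpy as np

sr = 8000                                  # sample rate in Hz (assumed)
t = np.arange(sr) / sr                     # one second of audio
voice = np.sin(2 * np.pi * 220 * t)        # stand-in for the clicked instrument
backing = np.sin(2 * np.pi * 1760 * t)     # stand-in for the rest of the ensemble
mix = voice + backing                      # the recorded mixture

# Transform the mixture into the frequency domain.
spectrum = np.fft.rfft(mix)
freqs = np.fft.rfftfreq(len(mix), 1 / sr)

# A crude "solo" filter: keep only the bins near the target source.
# The real system learns such a mask from video and audio jointly.
mask = np.abs(freqs - 220) < 50
solo = np.fft.irfft(spectrum * mask, n=len(mix))
```

With these synthetic tones the recovered `solo` signal is nearly identical to `voice`; with real recordings, overlapping spectra make the problem much harder, which is exactly where the learned model earns its keep.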
As is often the case, there is also a video on YouTube for better illustration:
Neural networks will definitely turn audio processing upside down in the future.