Who knew you could combine Python and music and keep on jamming with a little help from AI? I recently came across a clever tool called Spleeter.
Spleeter is a command line tool for separating an audio track into its component tracks. It was developed by a team at the streaming company Deezer and is written in Python using the TensorFlow library. I’d assume they had a huge training data set available to get the tool’s output to the quality it has.
The tool uses machine learning to try to identify the separate instruments in a piece of recorded music, and it was trained on data where the individual component tracks (stems) were available. The main options are to separate out just the vocals (two stems), four stems (vocals, bass, drums, other), or five stems (which adds a piano stem to the four-stem setting). Users can also supply their own stems to further train the model.
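If you want to try it yourself, here’s a minimal sketch of calling Spleeter from Python using its documented Separator API; the file names and paths are just placeholders.

```python
# Minimal sketch of separating a track with Spleeter's Python API.
# Install first with: pip install spleeter
from spleeter.separator import Separator

# Pick a pretrained model: spleeter:2stems, spleeter:4stems or spleeter:5stems.
separator = Separator("spleeter:4stems")

# Writes vocals.wav, drums.wav, bass.wav and other.wav into a folder
# named after the input file, inside the given output directory.
separator.separate_to_file("my_song.mp3", "output/")
```

The same thing is available from the command line via the `spleeter separate` command if you’d rather not write any Python at all.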
This is a great learning tool (for music), as you can isolate the part you are learning. Once you’ve got the part down, you can load the tracks into a DAW and play along with just the drums, or any combination of the separated instruments.
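If you don’t want to fire up a DAW just to practise, here’s a rough sketch of building a custom backing track by summing a chosen subset of the stems with numpy and soundfile; the directory layout and file names assume Spleeter’s default 4-stem output, so adjust them to match your own run.

```python
# Sketch: mix selected Spleeter stems into a single practice backing track.
# Assumes the 4-stem output files (vocals.wav, drums.wav, bass.wav, other.wav).
from pathlib import Path

import numpy as np
import soundfile as sf

stem_dir = Path("output/my_song")   # folder Spleeter wrote the stems into
keep = ["drums", "bass"]            # stems to keep; drop "vocals" to sing along, etc.

mix = None
samplerate = None
for name in keep:
    data, samplerate = sf.read(stem_dir / f"{name}.wav")
    mix = data if mix is None else mix + data

# Normalise so the summed stems don't clip, then write the result.
mix = mix / max(1.0, float(np.max(np.abs(mix))))
sf.write(stem_dir / "backing_track.wav", mix, samplerate)
```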
I’ve enjoyed learning and playing music in these times, when getting together with others to play has been difficult.
Messing around with the separated tracks, you can mute and add back in the different parts. Together they sound like the original (no surprise), but on their own they tend to contain traces of the other tracks, or what just sounds like noise. This must be the audio equivalent of the fuzzy spaces that don’t fit the model cleanly.