Polymath is a tool that uses machine learning to convert any music library into a sample library for music production: it automatically separates songs into stems, quantizes them to a common tempo, analyzes their musical structure, and converts audio to MIDI.
Features
- Music source separation with the Demucs neural network
- Music structure segmentation/labeling with the sf_segmenter neural network
- Music pitch tracking and key detection with the Crepe neural network
- Music to MIDI transcription with the Basic Pitch neural network
- Music quantization and alignment with pyrubberband
- Music info retrieval and processing with librosa
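The quantization step boils down to time-stretching each stem by the ratio between its detected tempo and the library's target tempo. A minimal sketch of that calculation is below; `stretch_ratio` is an illustrative helper, not part of Polymath's API, and the rate convention follows pyrubberband's `time_stretch`, where a rate greater than 1 speeds the audio up.

```python
def stretch_ratio(source_bpm: float, target_bpm: float) -> float:
    """Time-stretch rate that maps audio at source_bpm onto target_bpm.

    A rate > 1 speeds the audio up (shortens it); a rate < 1 slows it down.
    The result could be passed as the `rate` argument to a time-stretcher
    such as pyrubberband.time_stretch(y, sr, rate).
    """
    if source_bpm <= 0 or target_bpm <= 0:
        raise ValueError("tempos must be positive")
    return target_bpm / source_bpm


# Example: a 90 BPM stem quantized to a 120 BPM library must play
# 4/3 as fast, so its duration shrinks to 3/4 of the original.
rate = stretch_ratio(90.0, 120.0)
```

Because duration scales by the inverse of the rate, a 60-second stem stretched with `rate = 4/3` ends up 45 seconds long.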
Use Cases
- Creating a searchable sample library for music producers, DJs, and ML audio developers
- Combining elements from different songs to create unique compositions
- Creating a large music dataset for training generative models
Suited For
- Music producers
- DJs
- ML audio developers
FAQ
Q: What do I need to run Polymath?
A: You need ffmpeg installed on your system.
Q: How do I install Polymath?
A: Clone the GitHub repository and install the required dependencies with pip.
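The install steps might look like the following (the repository URL and requirements file name are assumptions; check the project's README for the exact commands):

```shell
# Assumed repository location -- adjust if the project lives elsewhere.
git clone https://github.com/samim23/polymath
cd polymath

# Install the Python dependencies (assumes a requirements.txt is provided).
pip install -r requirements.txt
```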
Q: Does Polymath support GPU acceleration?
A: Yes, most of the libraries Polymath uses have native GPU support through CUDA. Follow the setup instructions on the TensorFlow website to enable CUDA for Polymath.
Q: Can I use Polymath with Docker?
A: Yes. If you have Docker installed, you can use the provided Dockerfile to quickly build a Polymath Docker image.
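A typical build-and-run sequence is sketched below; the image tag and the volume mount path are illustrative, not Polymath's documented defaults:

```shell
# Build the image from the Dockerfile in the repository root.
docker build -t polymath .

# Run the container, mounting a local folder so processed stems persist
# on the host (mount path is an assumption -- adjust to the project docs).
docker run -v "$(pwd)/library:/polymath/library" polymath
```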