Looking for an Android developer experienced with OpenSL ES audio to make improvements to the audio integration of a music game.
We currently have an NDK C++ wrapper around FluidSynth, the well-known real-time synthesizer library written in C. The wrapper is based on Raph Levien's High-Performance Audio on Android talk from Google I/O 2013 (see [url removed, login to view]). The goal is low-latency synthesis. This wrapper is incorporated into a Unity C# game.
It is working well, but at least two improvements are needed:
1. Feed the PCM waveform back to the Unity game so the synthesized music can be visualized on the Unity side.
2. Allow stereo synthesis, i.e. synthesis with independent left/right volumes.
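To illustrate the second item, a stereo gain stage over interleaved 16-bit PCM might look roughly like this. This is only a sketch with made-up names; the actual wrapper's buffer layout and gain handling may differ:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>

// Apply independent left/right gains to an interleaved stereo
// 16-bit PCM buffer (L R L R ...). Gains are expected in [0.0, 1.0].
// Names here are illustrative, not from the project's actual code.
void applyStereoGain(int16_t* pcm, size_t frames,
                     float leftGain, float rightGain) {
    for (size_t i = 0; i < frames; ++i) {
        float l = pcm[2 * i]     * leftGain;
        float r = pcm[2 * i + 1] * rightGain;
        // Clamp to the 16-bit range before writing back.
        pcm[2 * i]     = static_cast<int16_t>(std::clamp(l, -32768.0f, 32767.0f));
        pcm[2 * i + 1] = static_cast<int16_t>(std::clamp(r, -32768.0f, 32767.0f));
    }
}
```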
You should have experience with Android's low-latency OpenSL ES audio stack. You should be familiar with the ring-buffer based approach explained in the Levien talk above and in this blog post: [url removed, login to view]
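For reference, the ring-buffer approach amounts to a lock-free single-producer/single-consumer queue: a synthesis thread writes PCM samples, and the audio callback reads them without taking locks. A minimal sketch (illustrative only, not the project's code; assumes one writer and one reader, and counters that do not overflow within a session):

```cpp
#include <algorithm>
#include <atomic>
#include <cstddef>
#include <cstdint>
#include <vector>

// Minimal single-producer/single-consumer PCM ring buffer.
// The synth thread calls write(); the audio callback calls read().
// No locks on the audio path; head/tail are monotonic counters.
class PcmRingBuffer {
public:
    explicit PcmRingBuffer(size_t capacity)
        : buf_(capacity), head_(0), tail_(0) {}

    // Producer side: returns the number of samples actually written.
    size_t write(const int16_t* src, size_t n) {
        size_t head = head_.load(std::memory_order_relaxed);
        size_t tail = tail_.load(std::memory_order_acquire);
        size_t todo = std::min(n, buf_.size() - (head - tail));
        for (size_t i = 0; i < todo; ++i)
            buf_[(head + i) % buf_.size()] = src[i];
        head_.store(head + todo, std::memory_order_release);
        return todo;
    }

    // Consumer side (audio callback): returns samples actually read.
    size_t read(int16_t* dst, size_t n) {
        size_t tail = tail_.load(std::memory_order_relaxed);
        size_t head = head_.load(std::memory_order_acquire);
        size_t todo = std::min(n, head - tail);
        for (size_t i = 0; i < todo; ++i)
            dst[i] = buf_[(tail + i) % buf_.size()];
        tail_.store(tail + todo, std::memory_order_release);
        return todo;
    }

private:
    std::vector<int16_t> buf_;
    std::atomic<size_t> head_, tail_;
};
```

Feeding the waveform back to Unity (item 1 above) could reuse the same structure: a second ring buffer that the audio callback writes into and a C-linkage export that Unity polls, though the exact interop mechanism is up for discussion.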
Experience with Unity is optional.
About me: I'm a former Google product manager based in New York working on this mobile music game with a small group of teammates.