Feel free to follow my iOS app development on Twitter: @radiiresearch
Added about 30 tests to the app today. Caught some really interesting edge cases between the render callback and audio unit configuration/init. Should lead to a more stable and responsive experience.
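For the curious, the ordering those tests pin down looks roughly like this. A minimal XCTest sketch, assuming AVAudioSourceNode stands in for the real render callback; test and variable names are mine, not the app's:

```swift
import XCTest
import AVFoundation

final class RenderOrderingTests: XCTestCase {
    func testRenderNeverFiresBeforeConfiguration() throws {
        let engine = AVAudioEngine()
        let format = AVAudioFormat(standardFormatWithSampleRate: 44_100, channels: 2)!

        // Plain Bools keep the sketch short; a production test would use
        // atomics, since the render block runs on the audio thread.
        var configured = false
        var renderedBeforeConfig = false

        let source = AVAudioSourceNode(format: format) { isSilence, _, _, _ in
            if !configured { renderedBeforeConfig = true }
            isSilence.pointee = true  // output silence; we only care about ordering
            return noErr
        }
        engine.attach(source)
        engine.connect(source, to: engine.mainMixerNode, format: format)

        // Finish configuration *before* starting the engine; flipping this
        // ordering is exactly the kind of edge case such a test should catch.
        configured = true
        try engine.start()

        Thread.sleep(forTimeInterval: 0.1)  // let a few render cycles run
        engine.stop()
        XCTAssertFalse(renderedBeforeConfig)
    }
}
```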
Anyone else find that LLMs have more trouble with declarative (as opposed to imperative) code? Maybe a lack of training data, or maybe it's harder to follow as a chain of thought?
Fixed the bug with scratching and playback transitions. Bluetooth is now integrated into the app: auto-detection of connections, and playback transitions seamlessly to headphones and back. :)
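The auto-detection piece is essentially AVAudioSession's route-change notification. A rough sketch of that part, with the playback hooks as hypothetical closures rather than the app's real engine hand-off:

```swift
import AVFoundation

/// Observes headphone connect/disconnect via AVAudioSession route changes.
/// `onConnect`/`onDisconnect` are hypothetical hooks into the player.
func observeRouteChanges(onConnect: @escaping () -> Void,
                         onDisconnect: @escaping () -> Void) -> NSObjectProtocol {
    // With the .playback category, Bluetooth A2DP output is routed automatically.
    try? AVAudioSession.sharedInstance().setCategory(.playback)

    return NotificationCenter.default.addObserver(
        forName: AVAudioSession.routeChangeNotification,
        object: nil,
        queue: .main
    ) { note in
        guard let raw = note.userInfo?[AVAudioSessionRouteChangeReasonKey] as? UInt,
              let reason = AVAudioSession.RouteChangeReason(rawValue: raw) else { return }
        switch reason {
        case .newDeviceAvailable:
            onConnect()      // headphones appeared; session re-routes, keep playing
        case .oldDeviceUnavailable:
            onDisconnect()   // headphones gone; pause rather than blast the speaker
        default:
            break
        }
    }
}
```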
Two steps forward, one step back yesterday. Got Bluetooth headphones working and fixed a wraparound bug in scratching, but exposed an edge case in seamlessly interrupting playback with scratching. Will try to fix it today.
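For context, the wraparound class of bug: a sketch of the fix, assuming (my guess at the cause) the playhead is modular position math on a loop and Swift's sign-preserving remainder was the culprit:

```swift
/// Wraps a scratch playhead into [0, length), handling negative positions
/// (scratching backwards past the loop start). Swift's remainder keeps the
/// dividend's sign, so naive `%`-style math goes negative at the boundary.
func wrappedPlayhead(_ position: Double, length: Double) -> Double {
    let r = position.truncatingRemainder(dividingBy: length)
    return r < 0 ? r + length : r
}

// e.g. wrappedPlayhead(-0.5, length: 4.0) == 3.5, not -0.5
```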