Programmable Synth

See the full project on GitHub

2022

Code-based synthesiser built in Processing.js, playable with a MIDI keyboard. It takes a recorded sound, maps it across a 12-pitch octave, and supports reverb, granulation, and delay effects.
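
To make the pitch-mapping idea concrete, here is a minimal sketch in Processing’s Java mode. The project’s actual libraries and file names aren’t listed here, so this assumes the Sound library, The MidiBus, and a hypothetical sample file recording.wav; each incoming MIDI note scales the sample’s playback rate by a power of 2^(1/12), the standard equal-temperament semitone ratio.

```java
import processing.sound.*;
import themidibus.*;

SoundFile sample;  // the recorded sound to be pitched
MidiBus midi;      // connection to the MIDI keyboard

void setup() {
  size(400, 200);
  sample = new SoundFile(this, "recording.wav"); // hypothetical file name
  midi = new MidiBus(this, 0, 0);                // open the first MIDI device
}

void draw() {
  background(0);
}

// The MidiBus calls this whenever a key is pressed on the keyboard.
void noteOn(int channel, int pitch, int velocity) {
  // Equal-temperament mapping: each of the 12 semitones in an octave
  // scales the playback rate by 2^(1/12), with middle C (60) as the root.
  float rate = pow(2, (pitch - 60) / 12.0);
  sample.stop();
  sample.rate(rate);
  sample.amp(velocity / 127.0); // harder press, louder note
  sample.play();
}
```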

Background and Context

In this project at UTS, I was tasked with creating a program in Processing.js that fulfilled a design brief consolidating the knowledge I had gained throughout my interactive media course. The brief called for a software-based system capable of capturing live user interaction data, such as mouse movement, keyboard input, audio, gestures, or other forms of input, as time-series data across multiple samples.

This data had to be transformed into a visual, auditory, or tactile output, either static or dynamic. The design needed to include real-time feedback during the recording process, as well as a final output that meaningfully reflected the accumulated input. Users were also expected to be able to modify various transformation parameters. The chosen output medium could span a wide range, from graphics, sound, and animation to tactile or physical forms, and could be either fully code-generated or derived through creative processing of existing material.
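
As an illustration of what the brief asked for, rather than the project’s actual code, the sketch below records mouse movement as time-series data across multiple samples, drawing every stored gesture plus the one in progress so the user gets real-time feedback while recording.

```java
// Each sample is one gesture: a list of mouse positions over time.
ArrayList<ArrayList<PVector>> samples = new ArrayList<ArrayList<PVector>>();
ArrayList<PVector> current;

void setup() {
  size(600, 400);
}

void draw() {
  background(20);
  // Real-time feedback: show all completed samples in grey...
  for (ArrayList<PVector> s : samples) drawTrail(s, color(120));
  // ...and the sample being recorded in red, one reading per frame.
  if (current != null) {
    current.add(new PVector(mouseX, mouseY));
    drawTrail(current, color(255, 80, 80));
  }
}

void drawTrail(ArrayList<PVector> trail, int c) {
  stroke(c);
  noFill();
  beginShape();
  for (PVector p : trail) vertex(p.x, p.y);
  endShape();
}

void mousePressed()  { current = new ArrayList<PVector>(); }
void mouseReleased() { samples.add(current); current = null; }
```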

Theoretical Framework

The theoretical framework for this project draws upon principles from creative coding, interactive system design, and generative media arts. Key theoretical references include Daniel Shiffman’s Learning Processing, Joshua Noble’s Programming Interactivity, and Shiffman’s The Nature of Code.

Shiffman’s Learning Processing: A Beginner's Guide to Programming Images, Animation, and Interaction offers foundational knowledge in creative coding using Processing. This resource supports the development of the synthesiser’s graphical interface and real-time visual feedback mechanisms. It informs the implementation of interactive elements that respond to MIDI input and user control, ensuring the system is both functional and engaging.
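
As a small example of the kind of real-time visual feedback this informs (again assuming The MidiBus, and not drawn from the project’s actual code), the sketch below flashes the canvas in proportion to the velocity of each incoming MIDI note, then fades back to black.

```java
import themidibus.*;

MidiBus midi;
float flash = 0;  // brightness that decays each frame

void setup() {
  size(400, 400);
  midi = new MidiBus(this, 0, 0);  // open the first MIDI device
}

void draw() {
  background(flash);
  flash = max(0, flash - 8);  // fade the flash back to black
}

void noteOn(int channel, int pitch, int velocity) {
  flash = map(velocity, 0, 127, 60, 255);  // harder press, brighter flash
}
```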

Noble’s Programming Interactivity provides a broader understanding of building interactive systems that integrate hardware, software, and user input. Its relevance lies in guiding the connection between user actions and sound synthesis behaviours, particularly the mapping of interface inputs (e.g., sliders, buttons, or sensors) to MIDI signals and audio outputs. It also informs strategies for creating responsive systems that encourage experimentation and exploration.
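
The sketch below illustrates this kind of input-to-parameter mapping under the same assumptions as before (Processing’s Sound library, a hypothetical recording.wav). In place of dedicated slider widgets, the mouse position stands in for two sliders: horizontal position sets a delay effect’s time, vertical position sets its feedback.

```java
import processing.sound.*;

SoundFile sample;
Delay delay;

void setup() {
  size(400, 200);
  sample = new SoundFile(this, "recording.wav"); // hypothetical file name
  delay = new Delay(this);
  sample.loop();
  delay.process(sample, 2.0);  // route the sample through a delay (max 2 s)
}

void draw() {
  background(0);
  // A simple "slider": mouse X sets the delay time, mouse Y the feedback.
  float time = map(mouseX, 0, width, 0.05, 2.0);
  float fb   = map(mouseY, 0, height, 0.0, 0.8);
  delay.time(time);
  delay.feedback(fb);
}
```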

Shiffman’s The Nature of Code explores concepts such as randomness, forces, and emergence in programming, which are especially useful for designing generative behaviours within the synthesiser. This text provides insight into creating more expressive and organic-sounding output by integrating rule-based or algorithmic approaches to sound modulation and pattern generation.
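
One concrete technique from The Nature of Code that suits this goal is Perlin noise, which drifts smoothly rather than jumping at random. The hypothetical sketch below, under the same Sound-library assumption, uses Processing’s noise() to let a looping sample’s pitch wander organically.

```java
import processing.sound.*;

SoundFile sample;
float t = 0;  // "time" input for the noise function

void setup() {
  size(400, 200);
  sample = new SoundFile(this, "recording.wav"); // hypothetical file name
  sample.loop();
}

void draw() {
  background(0);
  // Perlin noise changes gradually from frame to frame, so the playback
  // rate glides between values instead of stepping at random.
  float rate = map(noise(t), 0, 1, 0.8, 1.25);
  sample.rate(rate);
  t += 0.01;
}
```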

Together, these sources form the theoretical foundation for developing an interactive, programmable MIDI synthesiser. They guide the project’s technical implementation and influence design decisions relating to interactivity, user experience, and generative sound design. This framework supports an iterative design process aimed at creating a compelling and creative tool for digital music-making.