Back in the 1970s and '80s, pioneering home computers such as the Tandy Color Computer (COCO) had built-in sound. You could program simple tunes in the extended BASIC language. I remember coding Tchaikovsky's 1812 Overture to amuse my four-year-old son. But the sound quality was horrible, and real musical complexity on a 4K machine simply wasn't possible.
The concept of personal computing itself was not yet clear back then. Were PCs being developed for children and entertainment? For schools and education? Or for business and productivity? Apple, Commodore, Amiga, and Tandy fought to define the PC market. Then Nintendo introduced its 'killer app': a dedicated game player that was not a PC at all. Meanwhile, giant IBM steered its new PC, the AT (Advanced Technology), toward the lucrative business market with text-based word processing and flexible spreadsheet applications. After this tremendous push into game players and business productivity, the idea of producing musical-quality sound on the PC was not even in the running.
Meanwhile, the musician was listening to a different tune. Back then, musicians and record producers came from an analog, not a digital, world. Recording studios were equipped with huge sound-editing 'mixers' that required specialized skills to run. Dubbing over a basic recording was done by laying down separate 'tracks' on 4- or 8-track analog magnetic tape.
About the same time, new electronic instruments were being developed. Les Paul had pioneered the field by hooking his guitar up to an electronic amplifier. Soon, in the studios, musicians were exploring a new richness of musical sound as they experimented with this electronic technology. So it was sound-synthesizer designers, not PC engineers, who worked on the sound-production problem. The Moog was the earliest success. Big and expensive, the Moog produced the sounds for John Cage and Merce Cunningham's breakthrough choreography. My husband and I ran an independent film business then, and struggled within our budgets to afford the costly hours needed to make soundtracks on a rented Moog. Prices could run $10,000 a day. We envied the better-funded Yoko Ono and John Lennon in those same NYC sound studios as they created synthesizer sounds for their LOVE films.
Soon, music-equipment innovators such as Yamaha, Casio, and Roland were developing scaled-down synthesizers that the average musician could afford. These were semi-portable keyboards and control boxes that could produce decent sounds. Built from specialized computing chips and electronics, each keyboard had its own way of using that computing power to interpret the key action and produce its sounds. Setting DIP switches became the new musician's equivalent of tuning an instrument.
On the road, these 'portable' synthesizers could deliver the volume needed for huge stadium audiences, and they suited the anti-establishment style of the times. But it was hard to combine different keyboards in a group. Live performances were particularly rough, with bulky computing boxes and cables all over the stage, and musicians tied down to their keyboards just as the performance trend was heading toward freedom of movement and plenty of live action.
Meanwhile, back in the computing world, PCs were being equipped with modems so they could talk to each other. The key was the use of standard protocols to send and receive messages. Musicians began to wonder whether their various synthesizers could talk to each other in a similar way.
At the 1983 National Association of Music Merchants (NAMM) show in Los Angeles, one exhibit broke through into the music world's consciousness. What was so startling? Two synthesizers, each from a different manufacturer, connected by two cables. When a musician played one synthesizer, both sounded. If he or she walked over to the other synthesizer and played that one, again both sounded. The two synthesizers were communicating with each other. How? Through the MIDI command protocol. The rest, as they say, is history.
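The protocol behind that demo is remarkably simple, which is part of why it endured. In the MIDI 1.0 specification, each command is a status byte (high bit set, encoding the command type and a channel number) followed by one or two data bytes (high bit clear). As a minimal sketch, not part of the original story, here is how the Note On and Note Off messages that made both keyboards sound can be built; the helper function names are my own, only the byte layout comes from the specification:

```python
def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Build a three-byte MIDI Note On message.

    channel: 0-15 (channel 1 is 0 on the wire), note: 0-127
    (60 is middle C), velocity: 0-127 (how hard the key was struck).
    """
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    # Status byte 0x9n = Note On for channel n, then two data bytes.
    return bytes([0x90 | channel, note, velocity])


def note_off(channel: int, note: int, velocity: int = 0) -> bytes:
    """Build a three-byte MIDI Note Off message (status 0x8n)."""
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x80 | channel, note, velocity])


# Middle C on channel 1, full velocity: three bytes 0x90, 0x3C, 0x7F.
msg = note_on(0, 60, 127)
```

Sending those three bytes down the cable is all it took for one manufacturer's keyboard to play another's, which is exactly what the 1983 demonstration showed.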