An Apple patent (number 20100223400) has appeared at the US Patent & Trademark Office covering current and future features of Apple's GarageBand app. The patent describes correlating changes in one audio signal to another.

Exemplary embodiments of methods and apparatuses to correlate changes in one audio signal to another audio signal are described. A first audio signal is outputted. A second audio signal is received. The second audio signal may be stored in a memory buffer. The first audio signal is correlated to conform to the second audio signal. The first audio signal may be dynamically correlated to match with the second audio signal while the second audio signal is received. At least in some embodiments, a size of a musical time unit of the second audio signal is determined to correlate the first audio signal. At least in some embodiments, the adjusted first audio signal is stored in another memory buffer. Chris Moulios is the inventor.

Here’s Apple’s background and summary of the invention: “Audio signal processing, sometimes referred to as audio processing, is the processing of a representation of auditory signals, or sound. The audio signals, or sound, may be in digital or in analog data format. The analog data format is normally electrical, wherein a voltage level represents the air pressure waveform of the sound. A digital data format expresses the air pressure waveform as a sequence of symbols, usually binary numbers. The audio signals presented in analog or in digital format may be processed for various purposes, for example, to correct timing of the audio signals.

“Currently, audio signals may be generated and modified using a computer. For example, sound recordings or synthesized sounds may be combined and altered as desired to create standalone audio performances, soundtracks for movies, voiceovers, special effects, etc. To synchronize stored sounds, including music audio, with other sounds or with visual media, it is often necessary to alter the tempo (i.e., playback speed) of one or more sounds.

“Generally, a loop in audio processing may refer to a finite element of sound which is repeated using, for example, technical means. Loops may be repeated through the use of tape loops, delay effects, cutting between two record players, or with the aid of computer software. Many musicians may use digital hardware and software devices to create and modify loops, often in conjunction with various electronic musical effects. Live looping generally refers to the recording and playback of looped audio samples in real time, using either hardware (magnetic tape or dedicated hardware devices) or software. A user typically determines the duration of the recorded musical piece to set the length of a loop. The speed or tempo of playing of the musical piece may define the speed of the loop. The recorded piece of music is typically played in the loop at a constant reference tempo. New musical pieces can be recorded subsequently on top of the previously recorded musical pieces played at a tempo of the reference loop.

“Because the tempo and/or speed of recording of the new musical pieces may change, the loops of the newly recorded musical pieces may be non-synchronized to each other. The lack of synchronization between the musical pieces can severely impact a listening experience. Therefore, after being recorded, the tempo of the new musical pieces may be changed to the constant reference tempo of the previously recorded musical piece played in the reference loop.

“Unfortunately, merely changing the tempo of all newly recorded musical pieces to a constant reference tempo may result in undesired audible side effects such as pitch variation (e.g., the “chipmunk” effect of playing a sound faster) and clicks and pops caused by skips in data as the tempo of the newly recorded pieces is changed. Currently there are no ways to dynamically adjust the tempo of the musical pieces during recording.
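The "chipmunk" effect the patent mentions follows from basic signal math: naively playing audio faster scales every frequency by the speed ratio, shifting pitch by 12·log2(ratio) semitones. A small sketch of that calculation (my own illustration, not from the patent):

```python
import math

# Hypothetical illustration of the "chipmunk" effect: a naive speed change
# multiplies all frequencies by the tempo ratio, shifting pitch by
# 12 * log2(ratio) semitones.
def pitch_shift_semitones(original_bpm: float, target_bpm: float) -> float:
    return 12.0 * math.log2(target_bpm / original_bpm)

# Speeding a 100 BPM take up to a 120 BPM reference raises its pitch by
# about 3.16 semitones -- clearly audible, which is why tempo must be
# changed with time stretching rather than simple resampling.
print(round(pitch_shift_semitones(100, 120), 2))
```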

“Exemplary embodiments of methods, apparatuses, and systems to correlate changes in one audio signal to another audio signal are described. In one embodiment, a first audio signal is outputted, and a second audio signal is received. The second audio signal may be stored in a memory buffer. The first audio signal is correlated to conform to changes in the second audio signal. The first audio signal may be dynamically correlated to match with the second audio signal while the second audio signal is received. At least in some embodiments, a size of a musical time unit of the second audio signal is determined to correlate the first audio signal. At least in some embodiments, the adjusted first audio signal is stored in another memory buffer.

“At least in some embodiments, correlating the first audio signal may include time stretching the first audio signal, time compressing the first audio signal, or both. In some embodiments, correlating the first audio signal includes adjusting a tempo of the first audio signal to the tempo of the second audio signal.
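Time stretching and compressing change duration without changing pitch. The patent does not disclose a specific algorithm; one simple textbook approach is overlap-add (OLA), sketched below in pure Python. This is an illustration under my own assumptions, not Apple's method; production implementations (e.g., WSOLA or a phase vocoder) add waveform alignment to reduce artifacts.

```python
import math

# Minimal overlap-add (OLA) time-stretch sketch: windowed input frames are
# read at one hop size and written at a scaled hop size, changing duration
# while leaving the per-frame waveform (and thus the pitch) intact.
def time_stretch_ola(samples, ratio, frame=1024, hop=256):
    """Stretch `samples` by `ratio` (>1 = longer/slower, <1 = shorter/faster)."""
    out_len = int(len(samples) * ratio)
    out = [0.0] * (out_len + frame)
    norm = [0.0] * (out_len + frame)
    out_hop = int(hop * ratio)  # synthesis hop scaled by the stretch ratio
    pos_in, pos_out = 0, 0
    while pos_in + frame <= len(samples) and pos_out + frame <= len(out):
        for i in range(frame):
            # Hann window keeps overlapping frames from clicking at the seams.
            w = 0.5 - 0.5 * math.cos(2 * math.pi * i / frame)
            out[pos_out + i] += samples[pos_in + i] * w
            norm[pos_out + i] += w
        pos_in += hop
        pos_out += out_hop
    # Normalize by the summed window energy where frames overlapped.
    return [o / n if n > 1e-9 else 0.0 for o, n in zip(out, norm)][:out_len]
```

To conform a 100 BPM take to a 120 BPM reference, the stretch ratio would be 100/120 (shorter), since duration scales inversely with tempo.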

“At least in some embodiments, a first audio signal is outputted, and a second audio signal is received. For example, the first audio signal may be played back, generated, or both. Data of the second audio signal may be stored in a memory buffer. The data of first audio signal may be dynamically correlated to conform to the changes in the second audio signal while the second audio signal is received. Further, a third audio signal may be received. The third audio signal may be stored in another memory buffer. At least the second audio signal may be adjusted to conform to the third audio signal.

“At least in some embodiments, a first audio signal is outputted while a second audio signal is received. The data of the second audio signal may be stored in a memory buffer. Further, a determination is made whether to commit data of the second audio signal to mix with the data of the first audio signal. The data of the first audio signal is dynamically correlated to match with the data of the second audio signal if the data of the second audio signal is committed to mix with the data of the first audio signal.
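The commit-to-mix decision described above can be sketched as simple control flow: the incoming take sits in its own buffer and only affects the mix once committed. This is a hypothetical sketch of the bookkeeping (class and method names are mine); the real correlation step would time-stretch the existing material rather than the length check shown here.

```python
# Hypothetical sketch of the commit-to-mix flow: buffer the incoming take
# separately, and only merge it into the mix if the user commits it.
class LoopRecorder:
    def __init__(self, reference):
        self.mix = list(reference)   # previously recorded material
        self.pending = []            # incoming take, not yet committed

    def receive(self, chunk):
        self.pending.extend(chunk)   # buffer the new signal as it arrives

    def commit(self):
        # Conform the existing mix to the new take's length before mixing.
        # (Stand-in for the patent's correlation step: a real implementation
        # would time-stretch rather than truncate/pad.)
        if len(self.pending) != len(self.mix):
            self.mix = self.mix[: len(self.pending)] + [0.0] * max(
                0, len(self.pending) - len(self.mix)
            )
        self.mix = [a + b for a, b in zip(self.mix, self.pending)]
        self.pending = []

    def discard(self):
        self.pending = []            # uncommitted takes never touch the mix
```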

“At least in some embodiments, a new audio signal is received. The new audio signal is stored in a memory buffer. A size of a musical unit of the new audio signal may be determined. The musical time unit may be, for example, a beat, a measure, a bar, or any other musical time unit. The size of the musical unit of a recorded audio signal is adjusted to the size of the musical unit of the new audio signal. At least in some embodiments, the new audio signal may be grouped with one or more previously recorded audio signals.

“At least in some embodiments, a new audio signal is received. The new audio signal is stored in a memory buffer. A size of a musical unit of the new audio signal may be determined. The size of the musical unit may be determined based on a tempo of the new audio signal. The size of the musical unit may include a time value. The size of the musical unit of a recorded audio signal is adjusted to the size of the musical unit of the new audio signal.

“At least in some embodiments, a determination is made whether to commit data of the new audio signal to mix with the data of the recorded audio signal. The size of the musical unit of a recorded audio signal is adjusted to the size of the musical unit of the new audio signal when the data of the new audio signal are committed to mix with the data of the recorded audio signal.

“At least in some embodiments, adjusting data of the recorded audio signal to the data of the new audio signal comprises time stretching data of the recorded audio signal to match the size of the musical unit of the new audio signal, time compressing data of the recorded audio signal to match the size of the musical unit of the new audio signal, or both. At least in some embodiments, the recorded audio signal is faded out after being correlated to changes in the new audio signal.”
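The final fade-out step mentioned above amounts to a gain ramp on the tail of the already-correlated signal. As a minimal sketch (a linear ramp is my assumption; the patent does not specify a fade curve):

```python
# Hypothetical sketch of the fade-out step: ramp the last `fade_len`
# samples linearly down to silence so the correlated signal exits the
# mix smoothly instead of cutting off.
def fade_out(samples, fade_len):
    n = len(samples)
    out = list(samples)
    for j in range(min(fade_len, n)):
        # The final sample gets gain 0; gain rises linearly toward 1.0
        # moving back from the end of the signal.
        out[n - 1 - j] *= j / fade_len
    return out
```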