The Ultimate Sample-Library Glossary
This glossary contains brief definitions of the audio sample-library terms and acronyms used here on wrongtools.com.
|AAX – A plugin format exclusive to Avid Pro Tools, which offers improved performance and more efficient use of computer resources compared to its predecessor RTAS.
|Additive Synthesis – A sophisticated audio synthesis technique that involves the mathematical addition of sine waves to produce sound. It allows for the creation of complex timbres by combining fundamental frequencies and harmonics.
|ADSR – An essential envelope control tool used to shape the evolution of a sound over time. It is widely used to control various parameters of a sound, such as amplitude, filter, pitch, and more.
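The four ADSR stages can be sketched as a simple function of time. This is a minimal illustration in Python, not taken from any particular synth; the parameter names and the fixed `note_off` time are assumptions for the example.

```python
def adsr(t, attack=0.01, decay=0.1, sustain=0.7, release=0.2, note_off=1.0):
    """Envelope amplitude (0..1) at time t seconds, for a note released at note_off."""
    if t < 0:
        return 0.0
    if t < attack:                        # attack: ramp from 0 up to 1
        return t / attack
    if t < attack + decay:                # decay: fall from 1 down to the sustain level
        frac = (t - attack) / decay
        return 1.0 - frac * (1.0 - sustain)
    if t < note_off:                      # sustain: hold while the key is down
        return sustain
    frac = (t - note_off) / release       # release: fade to silence after note-off
    return max(0.0, sustain * (1.0 - frac))
```

Feeding this envelope to a sound's amplitude produces the classic shape: a fast rise, a short fall to the sustain level, a hold, then a fade-out when the key is released.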
|Aftertouch is a MIDI keyboard feature that generates a control signal in a synthesizer based on the amount of pressure applied to the keys after they are initially pressed. Although most instruments that support this feature do not have individual pressure sensing for each key, some models do provide ‘polyphonic aftertouch’ for each key.
|AIFF or AIF – A high-quality, uncompressed audio file format developed by Apple. AIFF files are similar to WAV files and are often used for professional audio production.
|An algorithm refers to a set of instructions that define how to perform a particular task. Typically, algorithms are written in a computer language and compiled into a program. In the context of effects units, algorithms describe a software building block designed to create specific effects or combinations of effects.
|Ambience refers to the sound reflections that occur in a confined space, which add to the original direct sound. Some digital reverb units can also create electronic ambience. Unlike reverberation, ambience does not have a characteristic long delay time, and the reflections mainly provide a sense of space and sonic character to the room.
|An arpeggiator is a device or software that enables a synthesizer or MIDI instrument to sequence around any notes currently being played. This allows for a repeating sequence of notes to be played, which can be sequenced over several octaves for an impressive sound. With an arpeggiator, even a simple chord can be transformed into a complex and dynamic sequence of notes.
|ASIO is a low-latency and high-fidelity computer sound card driver protocol used on Windows operating systems. ASIO provides a more efficient and reliable interface between digital audio software and a computer’s sound card, improving performance and reducing latency.
|Attack is the initial portion of a sound wave, during which the amplitude of the wave increases. It is a critical parameter for shaping the overall character and impact of a sound.
|Attenuate: To reduce the level of a signal, typically through the use of a gain control or fader.
|AU (Audio Unit) – A plugin format developed by Apple exclusively for macOS/OSX. AU plugins are widely used in professional audio production and are known for their high-quality sound and efficient use of computer resources.
|Audible Spectrum – The range of frequencies humans can hear, measured in Hertz (Hz) and typically spanning 20Hz to 20kHz. Audio signals within this range are considered to be within the audible spectrum and are the focus of music production, sound design, and other audio applications.
|Audio Interface – A hardware device used to connect a computer or other digital audio device to other audio equipment, such as microphones, speakers, and instruments. Audio interfaces can be external or internal and come in various shapes and sizes, depending on the intended use and requirements.
|Auto-Tune is a brand name registered by Antares in 1997 for their automatic pitch correction processor. However, the term is now commonly used to refer to any software or hardware device that performs pitch correction, regardless of the manufacturer.
|Autoload – A feature found in digital audio workstations such as MASCHINE that enables instant loading of sound, pattern, plugin, or sample into the current context of a project, streamlining the workflow and enhancing productivity.
|Automation is the process of using software to record and play back changes to parameters such as volume, panning, and effect settings over time. This allows for precise control over the mix and can save time during the mixing process by allowing for repetitive tasks to be automated.
|Bandpass Filter – A type of filter that allows only a specific range of frequencies to pass through, combining the characteristics of both highpass and lowpass filters.
|Bar – A musical term used to describe a measure of beats, usually consisting of a specific number of beats in a specific time signature.
|A Beat is a unit of musical time, typically defined by the tempo and time signature of a piece of music. Beats provide a rhythmic foundation for music and are a fundamental element of most musical genres.
|Beatmatching is a DJing technique of aligning the tempos and beats of two or more tracks to achieve seamless transitions between them.
|Bit depth refers to the number of bits used to represent the amplitude of each audio sample. Higher bit depths provide a wider dynamic range and finer resolution, resulting in a more accurate representation of the original audio signal. Common bit depths for audio production include 16-bit, 24-bit, and 32-bit float.
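The link between bit depth and dynamic range follows from the math: each extra bit doubles the number of amplitude steps, adding roughly 6.02 dB. A quick sketch:

```python
import math

def dynamic_range_db(bits):
    # Each bit doubles the available amplitude steps, which adds
    # 20 * log10(2) ≈ 6.02 dB of dynamic range per bit.
    return 20 * math.log10(2 ** bits)
```

This is why 16-bit audio is usually quoted as having about 96 dB of dynamic range, and 24-bit about 144 dB.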
|Bitrate is the number of bits that are contained in an audio file per second, measured in kilobits per second (kbps). A higher bitrate generally translates to higher audio quality.
|Bouncing is the process of mixing down multiple audio tracks into a single audio file. Bouncing is a common final step in the mixing and mastering process and is used to create a final mixdown for distribution or further processing.
|BPM is short for Beats Per Minute, a measure of tempo in music.
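Since BPM counts beats per minute, converting between tempo and time is a one-line calculation:

```python
def seconds_per_beat(bpm):
    # 60 seconds per minute divided by beats per minute
    return 60.0 / bpm

def beats_to_seconds(beats, bpm):
    return beats * seconds_per_beat(bpm)
```

At 120 BPM each beat lasts half a second, so one 4-beat bar lasts two seconds. The same formula is what delay plugins use for tempo-synced delay times.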
|A breath controller is a tool that can convert breath pressure into MIDI controller data, allowing woodwind players to control synthesizers in a unique way.
|Brickwall Limiter: A type of digital limiter that completely prevents the output from exceeding a defined level, regardless of the input level. It provides a hard ceiling for the signal, making it useful for mastering and other situations where maximum level is critical.
|Buffer is a type of temporary memory storage used to manage varying periods of data read or write operations. It allows data to be temporarily stored in a sequence until it is ready to be processed or transferred to another part of the system. Buffers are commonly used in digital audio systems to ensure smooth and uninterrupted playback and recording, preventing glitches and dropouts caused by delays or inconsistencies in data flow. Buffer size is one of those settings you might have to experiment with when setting up large templates with many sampler instruments.
|A bus is a virtual or physical path within a digital audio workstation or mixing console for routing and processing audio signals from multiple sources to a common destination. Buses are commonly used to group channels together and apply processing to multiple channels simultaneously.
|Bus Powered refers to a device that draws its power from its data connection, typically USB, rather than from an external power supply. Not to be confused with the +48V phantom power that condenser microphones draw from an interface or mixer.
|Bypass is a function that allows a user to temporarily disable an effect on a track, allowing the unprocessed audio signal to be heard.
|A channel is a virtual or physical path within a digital audio workstation or mixing console for routing and processing audio signals. Each channel can be used to process a single audio source, such as a microphone or instrument, and can be adjusted independently of other channels.
|Chorus is a time-based effect that creates a sense of depth and fullness by combining the original audio signal with one or more slightly delayed, shifting detuned copies of itself. This effect is commonly used on vocals, guitars, and other instruments to thicken the sound and create a lush, harmonized texture.
|Clock Signal – A signal that provides timing information to synchronize devices. Clock signals can be transmitted over MIDI or CV and are used to keep devices in time with each other.
|Close-miking is a microphone technique that involves placing a microphone very close to the sound source with the intention of capturing more of the desired sound while minimizing any unwanted sounds from other sources or room acoustics.
|Comb Filtering: Frequency cancellations that occur in regular intervals due to phase interference between multiple identical signals. These cancellations produce a comb-like appearance in the frequency response of the combined signal.
|Compression is the process of reducing the dynamic range of an audio signal, making the loudest parts quieter and the quietest parts louder. This process is used to even out the volume of a track and is commonly used in music production to create a more polished and professional sound.
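The static behaviour of a compressor is easiest to see as an input/output curve. This sketch shows only the level math (threshold and ratio); a real compressor also has attack, release, and knee controls, which are omitted here for clarity:

```python
def compressed_level_db(input_db, threshold_db=-20.0, ratio=4.0):
    # Below the threshold the signal passes unchanged; above it,
    # every `ratio` dB of input produces only 1 dB of output.
    if input_db <= threshold_db:
        return input_db
    return threshold_db + (input_db - threshold_db) / ratio
```

With a -20 dB threshold and a 4:1 ratio, a peak at -12 dB (8 dB over the threshold) comes out at -18 dB, i.e. 6 dB of gain reduction.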
|Controller – A hardware device, typically using MIDI, that allows the user to control parameters of a software or another device, such as a MIDI keyboard, drum pad, or mixer.
|CPU stands for “Central Processing Unit.” It is the primary component of a computer that performs the instructions of a computer program. The CPU is often referred to as the “brain” of the computer, as it is responsible for executing most of the instructions that allow the computer to function. The CPU receives input from various sources, including keyboard and mouse input, and then processes that input, executing the appropriate instructions to produce the desired output.
|Crossfade is the process of smoothly blending two audio clips together by fading out the first clip while fading in the second clip. Crossfading can be used to create seamless transitions between different sections of a song or to remove unwanted clicks or pops at the beginning or end of audio clips.
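A common crossfade shape is the equal-power curve, which keeps the combined loudness roughly constant through the transition. A minimal sketch, assuming both clips are lists of samples of equal length:

```python
import math

def equal_power_gains(position):
    """Gains for the outgoing and incoming clips at crossfade position 0..1."""
    fade_out = math.cos(position * math.pi / 2)  # 1 -> 0 across the fade
    fade_in = math.sin(position * math.pi / 2)   # 0 -> 1 across the fade
    return fade_out, fade_in

def crossfade(clip_a, clip_b, position):
    g_out, g_in = equal_power_gains(position)
    return [g_out * a + g_in * b for a, b in zip(clip_a, clip_b)]
```

At the midpoint both gains are about 0.707, so the summed power (the squares of the gains) stays at 1 and the transition avoids the volume dip a straight linear crossfade can produce.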
|Cutoff Frequency – A control on a filter that specifies where frequencies begin to roll off, allowing only a certain range of frequencies to pass through. In Wrongtools synthesizers, you can tweak LP filters from the front user interface.
|Control Voltage (CV) is a variable voltage signal commonly used to control various parameters in analogue synthesizers, such as pitch, filter frequency, or modulation depth. In most analogue synthesizers, pitch follows the one-volt-per-octave convention.
|DAW (Digital Audio Workstation) – A software used for music production, recording, and editing in a modern studio environment. Some popular DAWs include Logic Pro, Cubase, Ableton Live, FL Studio, and more. It provides a platform for digital audio manipulation, enabling users to mix multiple tracks, add effects, and create professional-sounding music.
|De-esser is a type of dynamic processor that reduces or removes sibilance in vocal recordings. It is often used to tame harsh, high-frequency sounds such as “s” and “sh” sounds.
|The Decca Tree is a microphone setup technique. It involves three omnidirectional microphones arranged in a triangular shape, with two mics positioned about 1.2 meters apart and the third placed above and slightly behind the center mic. The Decca Tree is known for its ability to capture a wide and natural stereo image, with the left and right microphones picking up the stereo image and the center microphone providing depth and clarity. The technique was originally developed by the Decca Record Company in the 1950s and has since become widely used in classical music recording.
|Decibel (dB) – The standard unit of measurement for loudness. It is a ratio measurement that requires a reference point to measure from. Some common dB measurements include dBFS (digital audio, where 0dB is clipping) and dB SPL (in acoustics, where 0dB is near silence).
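Because decibels are a ratio, converting between linear amplitude and dB is just a logarithm. A sketch using full scale (1.0) as the reference, as in dBFS:

```python
import math

def amplitude_to_db(amplitude, reference=1.0):
    # 20 * log10 of the ratio: halving the amplitude drops
    # the level by about 6 dB; amplitude 1.0 sits at 0 dBFS.
    return 20 * math.log10(amplitude / reference)

def db_to_amplitude(db, reference=1.0):
    return reference * 10 ** (db / 20)
```

So a fader showing -20 dB is passing one tenth of the full-scale amplitude, and a signal at half amplitude reads roughly -6 dB.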
|Delay is an effect that creates a repeat of the audio signal at a set time interval. This effect can be used to create rhythmic patterns or to add depth and dimension to a track.
|Depth: The perception of differentiation between close and distant sounds. It is achieved through a combination of volume, frequency balance, and reverberation.
|Distortion – The process of adding harmonics to an audio signal, resulting in a fuller, more aggressive sound. It can be achieved using hardware or software plugins and is often used in guitar and bass effects.
|Dithering is the process of adding a small amount of noise to an audio signal to reduce quantization errors when converting between different bit depths or sample rates. Dithering can improve the overall resolution and clarity of an audio signal by minimizing the effects of quantization distortion.
|Dolby Atmos is an immersive audio format that was developed by Dolby Laboratories. It adds height channels to traditional surround sound setups, allowing for more precise placement of audio elements in a three-dimensional space. This creates a more immersive audio experience for the listener, as sounds can be placed and moved around in a more realistic way. Dolby Atmos is used in a variety of settings, including movie theaters, home theater systems, and music production.
|DSP (Digital Signal Processing) – The use of algorithms and mathematical techniques to process audio signals in the digital domain. It allows for advanced audio processing and manipulation in real-time.
|Dynamic range refers to the gap between a signal’s loudest and quietest levels. A wider dynamic range provides greater contrast between the different elements of a mix and can result in a more engaging and impactful sound. It is an important characteristic of music and can be controlled or tamed using tools such as compressors and limiters.
|Early Reflections (ER) – The initial body of reverberation produced by a natural space or an algorithmic reverb. Early reflections are the first sound reflections to arrive at a listener’s ears and play a critical role in creating the perception of the acoustic space in which the sound was recorded.
|Echo – A reflection of sound that arrives at the listener with a delay after the direct sound. Delays can be created artificially using digital signal processing techniques, or they can occur naturally in acoustic environments.
|Envelope refers to the shape of a sound wave over time, including the attack, sustain, and release portions. It is a critical tool for shaping the dynamics and character of a sound.
|EQ (Equalization) is the process of adjusting the balance of frequencies within an audio signal. This process is used to improve the clarity and balance of a mix, allowing each element to be heard clearly and in balance with the others.
|In audio, feedback refers to the process of feeding the output of a system back into its input, creating a loop that can result in a build-up of sound energy. It can be used intentionally to create effects like delay or reverb, but it can also cause unwanted noise or distortion.
|A filter is a type of audio effect that selectively removes or allows certain frequencies in a sound. Filters can be used for many purposes, including shaping the tonal balance of a mix, removing unwanted noise, or creating special effects.
|Flanger is an effect that creates a sweeping, whooshing sound by combining the original audio signal with a delayed version of itself. This effect is commonly used in guitar solos and can also be used to create interesting rhythmic patterns.
|FM – Frequency Modulation is a method of sound synthesis that involves using one waveform to modulate the frequency of another waveform. This technique is used to create complex, evolving sounds that can be used in a variety of musical contexts.
|Foley is the process of adding sound effects to a film or video to enhance the realism of the audio. These sound effects are usually recorded or created specifically for the film and are synchronized with the action on screen to create a more immersive audio experience.
|Frequency is the number of vibrations per second of a sound wave, measured in Hertz (Hz). The frequency of a sound wave determines the pitch of the sound, with higher frequencies producing higher pitched sounds.
|An effect (or ‘FX’) modifies the audio signals it receives. Effects can range from basic processing, like EQ and compression, to more complex transformations like granular synthesis or convolution reverb. Effects can be applied to individual tracks, busses, or the master output of a mix. The FX button on Wrongtools sampler instruments opens a menu with pre-programmed multieffects.
|Gain refers to the amplification of an audio signal, measured in decibels (dB). It is the amount of amplification applied to an audio signal before it is fed into other processing tools such as equalizers, compressors, and limiters. Proper gain staging is essential for producing clean and transparent recordings without any unwanted noise or distortion.
|Grain – A grain is a small, discrete segment of sound that can be manipulated or repeated to create unique audio effects.
|Grain Delay – A grain delay is a type of audio effect that uses short, repeated segments of sound (grains) to create echoes or delays. This technique is often used in electronic music production to create complex rhythms and textures.
|Granular synthesis is a method of sound synthesis that involves breaking a sound down into tiny, discrete particles called grains. These grains can then be manipulated and recombined to create new, complex sounds.
|Harmonic Distortion: Coloration or modification of a signal caused by the introduction of harmonics that were not present in the original signal. This can be desirable in some contexts, such as guitar distortion effects.
|Harmonics: Multiples of a fundamental frequency, generated by resonances in a vibrating system. For example, the second harmonic of a 1 kHz tone is a 2 kHz tone. In instrumental terminology, ‘harmonics’ also refers to a playing style that isolates these overtones, such as lightly touching a string at a node.
|Harshness: An excessive amount of high-frequency content in a signal, which can result in a piercing or grating sound.
|Headroom refers to the amount of available space in an audio signal before it reaches the maximum level and “clips.” Leaving sufficient headroom prevents distortion and preserves the dynamic range of the mix; it can be increased by reducing the gain of individual tracks. In the sampler world there are many gain stages, so it can be smart to keep levels conservative, preserving plenty of headroom, while a song is being made.
|High Pass: A filter that attenuates low frequencies below a certain cutoff frequency while allowing higher frequencies to pass through unaffected.
|Imaging: The ability to accurately position or distinguish sounds in the stereo field. This is achieved through careful use of panning, volume, and frequency balance.
|An input is a connection or port on an audio device or software program used to receive audio signals. Inputs can be used to connect microphones, instruments, or other audio sources to a recording or mixing setup.
|IR or Impulse Response. It is an audio file that captures the characteristics of a specific space’s acoustics, including reflections, frequency response, and decay time. When loaded into a convolution reverb plugin, it can apply the characteristics of that space to any sound or mix, making it useful for creating realistic reverb effects or recreating the acoustics of a particular space.
|Jitter in the context of digital audio, jitter refers to the variation in timing of the samples in a digital audio signal, caused by fluctuations in the clock that governs the sampling rate. Jitter can cause distortion, unwanted noise, and other artifacts in the audio signal.
|kHz – Abbreviation for kilohertz, which is a unit of measurement used to describe the sample rate of digital audio. A sample rate of 44.1kHz is standard for CD-quality audio, while higher sample rates such as 96kHz or 192kHz are used for high-resolution audio.
|Knee – In the context of audio compression, the knee control determines how smoothly the compressor engages when the signal exceeds the threshold level. A soft knee setting allows for a gradual increase in compression as the signal approaches the threshold, resulting in a more natural and transparent sound, while a hard knee setting causes the compressor to engage more abruptly, resulting in a more noticeable and aggressive compression effect.
|Latency is the delay between the time an audio signal is input into a device or software program and the time it is output. Latency can be caused by processing time or by the time it takes for the signal to travel through a system, and can be an issue in real-time applications such as recording or live performance.
|LFO – An LFO is a low-frequency oscillator that generates a waveform below the audio range that can be used to modulate other parameters, such as amplitude, filter cutoff, or pitch. LFOs are often used in electronic music to create rhythmic or evolving textures.
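A classic LFO application is tremolo: a slow sine wave modulating amplitude. This sketch returns a gain value per point in time; the `rate_hz` and `depth` parameter names are illustrative, not from any particular synth:

```python
import math

def lfo_gain(t, rate_hz=5.0, depth=0.5):
    """Tremolo gain at time t: oscillates between 1 - depth and 1.0."""
    wave = (math.sin(2 * math.pi * rate_hz * t) + 1) / 2  # sine rescaled to 0..1
    return 1.0 - depth * (1.0 - wave)
```

Multiplying each audio sample by this gain produces tremolo; routing the same low-frequency wave to filter cutoff or pitch instead gives wah-like sweeps or vibrato.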
|A limiter is a type of dynamic range compressor that limits the maximum level of a signal. Unlike a compressor, a limiter has an infinite ratio, which means that it can prevent the signal from exceeding a certain level. Limiters are often used in mastering to increase the perceived loudness of a track without introducing distortion.
|A Loop is a repeating section of audio that can be looped to create a longer piece of music. Loops are commonly used in electronic music production, where a repetitive beat or melody can be sustained throughout a song.
|Low Pass: A filter that attenuates high frequencies above a certain cutoff frequency while allowing lower frequencies to pass through unaffected. The slope of the filter determines the rate at which frequencies are attenuated above the cutoff.
|Metering: a tool used to help measure and evaluate the level, frequency spectrum, stereo image, and dynamic range of an audio signal. Metering can provide various measurements such as peak, RMS, LUFS, correlation, and phase, among others.
|MIDI (Musical Instrument Digital Interface) is a protocol used for communicating musical information between devices, such as synthesizers and computers. MIDI allows for the control of various aspects of music production, including pitch, velocity, duration, and volume, as well as control messages such as tempo, modulation, and expression. MIDI is a widely used standard in music production and performance.
|The ModWheel is a control found on many keyboards and synthesizers that can be used to modulate a particular parameter. Its most common use is to add dynamics to a sampler instrument, but it can also drive other types of modulation, such as filter cutoff, LFO rate, or vibrato.
|Modulation – Modulation is the process of using one signal to control another signal. In music production, modulation can refer to a wide range of effects, including vibrato, tremolo, chorus, and flanger. Modulation can also be used to control parameters such as filter cutoff, resonance, and delay time.
|Monophonic refers to a synthesizer or instrument that can only play one note at a time. This is in contrast to polyphonic instruments, which can play multiple notes simultaneously. Monophonic instruments are often used for basslines, leads, and other single-note melodies.
|Normalization is the process of adjusting the level of an audio signal to the maximum level without clipping, typically to 0 dBFS. Normalization does not affect the dynamic range or tonal balance of the audio signal, but it can be useful for increasing the overall level of a track without manually adjusting the gain.
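Normalization boils down to one gain calculation applied uniformly to the whole signal, which is why relative dynamics are preserved. A sketch, treating the audio as a plain list of samples:

```python
def normalize(samples, target_peak=1.0):
    # Scale the entire signal so its loudest sample hits the target;
    # one constant gain means the dynamic range is untouched.
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return list(samples)  # silence stays silent
    gain = target_peak / peak
    return [s * gain for s in samples]
```

In a real DAW the target is usually expressed in dBFS (e.g. normalize to -0.3 dBFS) rather than a linear value, but the principle is the same single multiplication.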
|An oscillator is a fundamental building block of synthesis that generates a waveform of a particular shape, such as sine, sawtooth, pulse, or triangle. In addition to controlling the pitch, oscillators can be modulated by various sources, including envelopes and LFOs, to create complex and evolving sounds.
|An output is a connection or port on an audio device or software program used to send audio signals. Outputs can be used to connect speakers, headphones, or other devices for monitoring or playback.
|Panning is the process of adjusting the balance of an audio signal between the left and right channels of a stereo field. In addition to traditional panning, some plugins and hardware devices offer advanced panning options, including binaural panning and circular panning.
|Parallel Processing: a technique in which an original audio signal is duplicated and processed separately from the original signal, and then both signals are mixed together. Parallel processing can help preserve the original character of the signal while applying additional processing such as compression, EQ, or distortion.
|A patch is a set of connections between audio devices or software programs, used to route audio signals between them. Patches can be created manually or saved as presets for easy recall.
|Phantom Power – A feature found on most audio interfaces and mixers that provides +48V of power to condenser microphones. This voltage is required to polarize the diaphragm in the microphone capsule and amplify the audio signal.
|Phase refers to the position of a sound wave in relation to its starting point. Phase can affect the perceived tone and timbre of a sound, and can be adjusted using phase-shifting tools or by manipulating the placement of microphones. When two identical signals are combined, their phase relationship determines whether they will add constructively or destructively, resulting in either a reinforcing or cancelling effect.
|Phaser – A modulation effect that uses a series of all-pass filters to shift the phase of a signal. This produces a sweeping, swirling sound that can be modulated by an LFO to create dynamic, evolving textures.
|Phasing: the audible effect that occurs when two identical or nearly identical audio signals are combined, resulting in frequency cancellations and reinforcements. Phasing can occur due to time delays, phase shifts, or interference patterns between the signals. Techniques like time alignment, phase inversion, and filtering can help mitigate phasing issues.
|Pitch refers to the perceived highness or lowness of a sound, determined by the frequency of the sound wave. The frequency of a sound wave is measured in Hertz (Hz), and the higher the frequency, the higher the pitch of the sound.
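The relationship between pitch and frequency is exponential: each octave doubles the frequency, and in equal temperament each semitone multiplies it by the twelfth root of two. The standard MIDI-note-to-frequency formula, anchored at A4 = 440 Hz:

```python
def midi_to_hz(note):
    # MIDI note 69 is A4 at 440 Hz; each semitone away
    # multiplies or divides the frequency by 2 ** (1/12).
    return 440.0 * 2 ** ((note - 69) / 12)
```

Middle C (MIDI note 60) works out to about 261.63 Hz, and note 81 (A5), one octave above A4, lands at exactly 880 Hz.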
|Pitch Bend wheel is a control found on many musical instruments that allows the player to change the pitch of a note in real time by moving a wheel, joystick, or other controller. Pitch bend is commonly used for expressive playing and creative sound design.
|A plugin is a software module that can be used within a digital audio workstation to add functionality, such as virtual instruments or audio effects. Plugins can be used to expand the capabilities of a DAW and can be purchased or downloaded for free. Plug-ins come in various formats, including VST, AU, and AAX, and are an essential part of modern digital audio production.
|Polyphonic in electronic audio is the ability of a synthesizer or sampler to play multiple notes simultaneously, allowing for the creation of chords, harmonies, and complex textures.
|Pre-delay refers to the time delay between the arrival of the direct sound and the arrival of the first early reflections in a space. Pre-delay can be used to create a sense of depth and space in a mix and can be adjusted to create different reverberation characteristics.
|Preamp amplifies a weak audio signal before it is sent to a recorder, mixer, or other audio processing device. Preamps are used to boost the gain of microphones, instruments, and other sources, and can greatly affect the tone and character of the recorded sound.
|PWM – Pulse Width Modulation is a synthesis technique that involves varying the width of a pulse wave to create complex and evolving timbres.
|Quantization – The process of aligning MIDI or audio to a grid to correct timing errors. In addition to fixing timing, quantization can be used creatively to achieve a robotic or mechanical feel in music production.
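Grid quantization is simply snapping each event's time to the nearest grid line. This sketch works in beats, with a hypothetical `strength` parameter for the partial ("soft") quantize many DAWs offer:

```python
def quantize(time_beats, grid=0.25, strength=1.0):
    """Snap a note's start time toward the nearest grid line.

    grid=0.25 means a 16th-note grid (in 4/4);
    strength 1.0 snaps fully, 0.5 moves halfway for a softer feel.
    """
    nearest = round(time_beats / grid) * grid
    return time_beats + strength * (nearest - time_beats)
```

A note played at beat 1.13 snaps to 1.25 at full strength, while 50% strength only moves it halfway there, keeping some of the human timing.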
|RAM – Random Access Memory is a type of computer memory that allows for faster access to frequently used data, such as software applications or virtual instruments. RAM size can affect the performance of a computer when running memory-intensive music software. How much RAM you have available influences the number of sampler-plugins you can use in a session.
|Ratio refers to the amount by which an audio signal is compressed or limited above the threshold. It is a critical parameter for controlling the amount of gain reduction applied to a signal, affecting its dynamic range and overall level. A higher ratio means more gain reduction, while a lower ratio means less gain reduction.
|Release is the final portion of a sound’s envelope, after a note ends, during which the amplitude falls back to silence. It is a crucial parameter for controlling how a sound decays and fades away.
|Reverb is an effect that simulates the reflections of sound waves in a physical space, creating a sense of depth and space in the audio. This effect is used to make a track sound like it was recorded in a specific space, such as a concert hall or a small room. Algorithmic and convolution reverbs are common types used in digital music production, with convolution reverbs using real-world impulse responses to accurately capture the sound of physical spaces.
|Reverse Polarity: To invert the polarity of a signal, so that positive excursions become negative and vice versa. This can be used to eliminate phase cancellation when combining multiple signals, among other applications.
|Sample – A pre-existing audio recording that can be used in music production as a sound source. Samples can be manipulated in various ways, such as pitch shifting, time stretching, or chopping, to create unique textures and patterns. When programming samples in KONTAKT, audio samples are called zones.
|Sample rate refers to the number of samples of an audio signal taken per second, measured in Hz. A higher sample rate provides a greater frequency response and more accurate representation of high-frequency content in the audio signal. Common sample rates for audio production include 44.1 kHz, 48 kHz, and 96 kHz.
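The reason sample rate limits frequency response is the Nyquist theorem: a digital system can only represent frequencies below half its sample rate. As a sketch:

```python
def nyquist_hz(sample_rate):
    # The highest representable frequency is half the sample rate.
    return sample_rate / 2

def can_represent(frequency, sample_rate):
    # Frequencies at or above Nyquist fold back as aliasing artifacts.
    return frequency < nyquist_hz(sample_rate)
```

At 44.1 kHz the Nyquist frequency is 22.05 kHz, comfortably above the ~20 kHz upper limit of human hearing, which is why that rate became the CD standard.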
|Saturation is a technique used to add harmonic content to an audio signal, typically by overdriving an analog or digital device. Saturation can add warmth, depth, and character to a sound, making it sound more organic and less sterile. It can be used subtly or aggressively depending on the desired effect.
|A Sequencer is a software or hardware tool used to create, record, program, and arrange sequences of musical events. It enables musicians to build complex rhythms and melodies with a high degree of precision and control over the music.
|Shelf – A filter type on an equalizer that applies a consistent boost or cut to all frequencies above or below a defined frequency. Shelf filters are commonly used for tonal adjustments to high or low frequencies.
|Sibilance is a harsh or hissing sound that occurs in vocal recordings, usually caused by strong energy in high-frequency ranges. Sibilance can be mitigated using de-essing techniques or EQ adjustments to reduce the energy in the problematic frequency range.
|Sidechain – A technique used in music production where one signal is used to control the processing of another signal. For example, sidechain compression is commonly used to create a “pumping” effect in dance music by using a kick drum to trigger the compression of a synth or pad sound.
|Standalone Mode – Refers to using a music software application as a standalone program rather than as a plugin within a digital audio workstation (DAW). This mode is often used for live performances or standalone music production setups. The KONTAKT sampler either runs in standalone mode, or as a plugin.
|Stereo widening is the process of creating a sense of width and separation in a stereo audio signal. This can be achieved through techniques such as panning, EQ, and stereo imaging tools, which can create a more immersive and spacious sound.
|Sustain is the portion of a sound’s envelope between the decay and release stages, during which the amplitude holds constant while a note is held. It is a critical parameter for creating sustained sounds such as pads, strings, and drones.
|A Synthesizer is an electronic instrument that generates sound using oscillators and filters. Synthesizers are used to create a wide range of sounds, including those not possible with traditional instruments.
|Tempo refers to the speed of a piece of music, measured in beats per minute (BPM). The tempo of a piece of music determines the overall pace and feel of the composition, with faster tempos creating a more energetic and upbeat atmosphere, while slower tempos tend to create a more relaxed and contemplative mood.
|Threshold refers to the level at which an effect such as compression or limiting is applied to an audio signal. It is a crucial parameter for controlling the amount of processing applied to a signal.
|Time signature is a notational symbol that indicates the number of beats per measure and the type of note that represents one beat. This notation provides a framework for musicians to play together in time and is essential for creating rhythmic patterns in music.
|Timeline – refers to the horizontal axis of the arrangement window where a track is being recorded and edited. This area displays the progression of time in measures, beats, and ticks, allowing the user to place and arrange audio and MIDI clips.
|Transient refers to the initial high-amplitude portion of a sound wave, such as the attack of a drum hit. Transients contain a lot of energy and can be challenging to process, but they are essential for creating punchy and impactful mixes.
|The transport refers to the area that contains the playback controls such as play, pause, stop, rewind, fast-forward, etc. This area is usually located at the top or bottom of the DAW window and provides the user with essential tools for navigating their project.
|Velocity – The MIDI parameter for each performed and recorded note that determines the loudness of the notes. It can also be used to modify other parameters on synthesizers so as to affect a sound based on performance. Velocity is an essential part of MIDI sequencing, allowing for dynamic performances to be captured and reproduced accurately.
|VST (Virtual Studio Technology) is a software interface that allows for the use of virtual instruments and audio effects within a DAW. This technology enables musicians to simulate the sound of traditional instruments and effects without needing to physically own them.
|WAV – Short for “Waveform Audio File Format”. It is the standard lossless audio file format in the digital domain. WAV files can contain a wide range of audio data, including uncompressed or compressed audio, and support high sample rates and bit depths, much like AIFF files.
|Wavetable Synthesis – A method of sound synthesis that uses wavetables, which are series of waveform cycles, to generate sound. Wavetable synthesis allows for smooth transitions between different waveforms, resulting in more complex and evolving sounds.