Thursday, July 3, 2014

InfoComm 2014 Attendees: Your Info is Being Sold

Earlier this week I received this heartwarming email message, offering me the complete contact information for everyone who attended InfoComm last month, for a price:

Hi,

Would you like to have access to InfoComm 2014 Expo Attendees Email Contacts? If yes, we have 18,000 Attendee Contacts with full details including verified emails. Grab this offer at economical price.

Our list comes with information such as; Company Name, Web site, Contact Name, Title, Verified Email address, Mailing Address, Telephone Number, Fax number, Industry, Sic Code, Revenue, Number of employees etc. Samples are available on the request.

Let me know if you need any further information. Looking forward to hear from you.

Regards,

James Carter | Business Development & Marketing Specialist|
101 Wood Avenue South, Suite 900. Metro Park 101, NJ 08830


I found it extraordinary that someone apparently unconnected with InfoComm is selling InfoComm's database. What makes it even more fishy is that the address at the bottom of the marketing email is actually that of Microsoft's New York district office in Iselin, NJ (http://www.microsoft.com/about/companyinformation/usaoffices/nymetro/en/us/iselin.aspx).

I've written to InfoComm to see if they can shed light on this apparent data breach, and will post their reply.

Monday, June 23, 2014

M-S Recording: A Useful Technique for Working in Stereo

M-S (mid-side) recording is a two-mic approach to recording orchestras, jazz combos and similar self-balancing ensembles. M-S goes a long way toward reducing the “hole-in-the-middle” that can result from other two-mic stereo techniques, while affording extensive control over the width of the ensemble.

In brief, the M (or mid) mic exhibits a bi-directional or cardioid pickup pattern and faces the front and center of the orchestra. The S (or side) mic exhibits a bi-directional pickup pattern and is oriented at 90 degrees to the M mic, facing the side walls.

By mixing the output of the S mic with positive polarity (“in phase”) with the M mic to derive the left channel, and mixing it with reversed polarity (“out of phase”) with the M mic to derive the right channel, the apparent width of the sound stage can be adjusted to be wider or narrower simply by varying the overall level of the S mic.

This technique provides results consistent with the X-Y technique (crossed cardioid or bi-directional mics), while offering two additional advantages:

1. instruments in the center of the stage are not picked up off-axis as they are with both X-Y mics (which are pointing away from the center); and

2. the recording is 100% mono-compatible (e.g., for FM radio transmission), since in mono the +S and -S signals are added together and cancel each other out, leaving just the M component.
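To put the sum-and-difference arithmetic in concrete terms, here is a minimal sketch in Python (NumPy assumed; the array names m and s are simply placeholders for the two mic signals). It derives the left and right channels and confirms the mono fold-down cancellation noted in point 2:

```python
import numpy as np

def ms_decode(m, s, width=1.0):
    """Derive left/right channels from mid (M) and side (S) mic signals.
    width scales the S level: 0 collapses to mono, larger values widen the image."""
    left = m + width * s   # S mixed in with positive polarity
    right = m - width * s  # S mixed in with reversed polarity
    return left, right

# Placeholder signals standing in for one second of the two mic feeds at 48 kHz
m = np.random.randn(48000)
s = np.random.randn(48000)

left, right = ms_decode(m, s, width=0.8)

# Mono fold-down: the +S and -S components cancel, leaving only the M component
mono = left + right
assert np.allclose(mono, 2 * m)
```

Varying the width argument is exactly the "overall level of the S mic" adjustment described above: the M component is untouched, so only the apparent width of the sound stage changes.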

I have used the M-S technique extensively with good results. The first example on this page illustrates the technique used to record Tafelmusik, a world-class Baroque orchestra.

Here's an excellent technical introduction to M-S.

The authors also provide a comprehensive and comparative review of various stereo miking techniques here.

Wednesday, October 2, 2013

On intellectual property theft: music vs books

You could do worse today than spend 7 or 8 minutes viewing the trailer to the upcoming film Unsound, which examines the impact that intellectual property theft and the ensuing collapse of the traditional music business model have had on the lives and careers of five different performers and bands.

As publisher of Philip Giddings' much loved classic reference book Audio Systems Design and Installation, I made an explicit decision NOT to make the book available as an e-book for many of the reasons explained in Unsound. Printed books, unlike music recordings, cannot be duplicated and shared with one or two mouse clicks by those with a sense of personal entitlement to the content.

At Post Toronto Books, we consider that the time and expense of acquiring and then scanning or photocopying this 600-page text just to share it with friends and others would tax the patience of even the most diehard file-sharing enthusiast, and so act as a deterrent to the theft of Philip's intellectual property. Unfortunately, the same cannot be said for music, the groundwork for the wholesale copying and outright theft of which was laid some 50 years ago with the introduction of the recordable compact cassette. It's just a little easier with today's technology.

Watch the Unsound trailer on Vimeo.

Info on Audio Systems Design and Installation

Thursday, September 12, 2013

Just Because it's Sound Doesn't Mean it has to be Mixed

Mixing is like driving—everybody does it, it gets you from here to there, and it seems like it’s been part of the culture forever.

For recording or broadcast requirements with a limited channel count, a stereo or mono mix will usually fit the bill, but for live events, perhaps we can do better.

As a case in point, consider a talker at a lectern in a large meeting room. Conventional practice would dictate routing the talker’s microphone to two loudspeakers at the front of the room via the left and right masters, and then feeding the signal with appropriate delays to additional loudspeakers throughout the audience area. A mono mix with the lectern midway between the loudspeakers will allow people sitting on or near the center line of the room to localize the talker more or less correctly by creating a phantom center image, but for everyone else, the talker will be localized incorrectly toward the front-of-house loudspeaker nearest them.

In contrast to a left-right loudspeaker system, natural sound in space does not take two paths to each of our ears. Discounting early reflections, which are not perceived as discrete sound sources, direct sound naturally takes only a single path to each ear. A bird singing in a tree, a speaking voice, a car driving past—all these sounds emanate from single sources. It is the localization of these single sources amid innumerable other individually localized sounds, each taking a single path to each of our two ears, that makes up the three-dimensional sound field in which we live. All the sounds we hear naturally, a complex series of pressure waves, are essentially “mixed” in the air acoustically with their individual localization cues intact.

Our binaural hearing mechanism employs inter-aural differences in the time-of-arrival and intensity of different sounds to localize them in three-dimensional space—left-right, front-back, up-down. This is something we’ve been doing automatically since birth, and it leaves no confusion about who is speaking or singing; the eyes easily follow the ears. By presenting us with direct sound from two points in space via two paths to each ear, however, conventional L-R sound reinforcement techniques subvert these differential inter-aural localization cues.
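To put rough numbers on those inter-aural time differences, here is a deliberately crude sketch (Python; it treats the path difference between the ears as a straight line and ignores diffraction around the head, so the figures are order-of-magnitude only):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, assumed
EAR_SPACING = 0.21      # m, rough assumed distance between the ears

def itd_ms(angle_deg):
    """Approximate inter-aural time difference for a source at angle_deg
    off the median plane (0 = straight ahead, 90 = hard left or right)."""
    return EAR_SPACING * math.sin(math.radians(angle_deg)) / SPEED_OF_SOUND * 1000.0

print(round(itd_ms(90), 2))  # ~0.61 ms for a source directly to one side
print(round(itd_ms(10), 2))  # ~0.11 ms, still enough to shift the perceived image
```

Cues this small are easily swamped by the much larger arrival-time differences imposed by a pair of front-of-house loudspeakers, which is precisely the subversion just described.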

On this basis, we could take an alternative approach in our meeting room and feed the talker’s mic signal to a single nearby loudspeaker, perhaps one built into the front of the lectern, thus permitting pinpoint localization of the source. A number of loudspeakers with fairly narrow horizontal dispersion, hung over the audience area and in line with the direct sound so that each covers a fairly small portion of the audience, will subtly reinforce the direct sound as long as each loudspeaker is individually delayed so that its output is indistinguishable from early reflections in the target seats.
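As a rough illustration of how those individual delays might be arrived at (my own sketch, not a prescription; the speed of sound and the 10 ms offset are assumptions to be tuned on site):

```python
SPEED_OF_SOUND = 343.0  # m/s, assumed

def delay_ms(source_to_seat_m, speaker_to_seat_m, haas_offset_ms=10.0):
    """Electronic delay for one loudspeaker covering one seating zone.
    The feed is held back so the loudspeaker's output arrives shortly AFTER
    the direct sound, where the ear treats it as an early reflection."""
    direct_arrival_ms = source_to_seat_m / SPEED_OF_SOUND * 1000.0
    speaker_arrival_ms = speaker_to_seat_m / SPEED_OF_SOUND * 1000.0
    return max(0.0, direct_arrival_ms - speaker_arrival_ms + haas_offset_ms)

# Example: a seat 20 m from the lectern, 4 m below its nearest delay loudspeaker
print(round(delay_ms(20.0, 4.0), 1))  # about 56.6 ms
```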

Such a system can achieve up to 8 dB of gain throughout the audience without the delay loudspeakers being perceived as discrete sources of sound, thanks to the well-known Haas (or precedence) effect. A talker or singer with strong vocal projection may not need an “anchor” loudspeaker at the front at all.

In addition to delivering intelligibility at a more natural level, such a system leaves the audience largely unaware that there is a sound system in operation at all, an important step in reaching the elusive system design goal of transparency—people simply hear the talker clearly and intelligibly at a more or less normal level. This approach, which has been dubbed “source-oriented reinforcement,” prevents the sound system from acting as a barrier separating the performer from the audience, because it merely replicates what happens naturally, and does not disembody the voice through the removal of localization cues.

Traditional amplitude-based panning, which, as noted above, works only for those seated in the sweet spot along the centre axis of the venue, is replaced in this approach by time-based localization, which has been shown to work for better than 90 per cent of the audience, no matter where they are seated. Free from constraints related to phasing and comb-filtering that are imposed by a requirement for mono-compatibility or potential down-mixing—and that are largely irrelevant to live sound reinforcement—operators are empowered to manipulate delays to achieve pin-point localization of each performer for virtually every seat in the house.

Source-oriented reinforcement has been used successfully by a growing number of theatre sound designers, event producers and even DJs over the past 15 years or so, and this is where a large matrix comes into its own. Happily, many of today’s live sound boards are suitably equipped, with delay and EQ on the matrix outputs.

The situation becomes more complex when there is more than one talker, a wandering preacher, or a stage full of actors, but fortunately, such cases can be readily addressed as long as correct delays are established from each source zone to each and every loudspeaker on a one-to-one basis.

This requires more than a console level matrix with just output delays, or even assigning variable input delays to individual mics, since it necessitates a true delay-matrix allowing multiple independent time-alignments between each individual source zone and the distributed speaker system.
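A toy model of what such a delay matrix does (illustrative only: integer-sample delays, no interpolation, and none of the refinements of a real product):

```python
import numpy as np

def render_delay_matrix(sources, delays_ms, gains, sample_rate=48000):
    """Apply an independent delay and level at every crosspoint.

    sources:   (n_sources, n_samples) array of source-zone signals
    delays_ms: (n_sources, n_speakers) delay from each source zone to each speaker
    gains:     (n_sources, n_speakers) linear crosspoint levels
    Returns a (n_speakers, n_samples) array of loudspeaker feeds."""
    n_sources, n_samples = sources.shape
    n_speakers = delays_ms.shape[1]
    outputs = np.zeros((n_speakers, n_samples))
    for src in range(n_sources):
        for spk in range(n_speakers):
            offset = min(int(round(delays_ms[src, spk] * sample_rate / 1000.0)),
                         n_samples)
            delayed = np.zeros(n_samples)
            delayed[offset:] = sources[src, :n_samples - offset]
            outputs[spk] += gains[src, spk] * delayed
    return outputs
```

The point is simply that every source-to-speaker path gets its own time alignment, not just each output, which is what a console matrix with output delays alone cannot provide.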

One such delay matrix that I have used successfully is the TiMax2 Soundhub, which offers control of both level and delay at each crosspoint in matrixes ranging from 16 x 16 up to 64 x 64, allowing unique image definitions to be established anywhere on the stage or field of play.

The Soundhub is easily added to a house system via analog, AES digital, and any of the various audio networks currently available, with the matrix typically being fed by input-channel direct outputs, or by a combination of console sends and/or output groups, as is the practice of the Royal Shakespeare Company, among others.

A familiar-looking software interface allows for easy programming as well as real-time level control and 8-band parametric EQ on the outputs. A PanSpace graphical object-based pan programming screen allows the operator to drag input icons around a set of image definitions superimposed onto a jpg of the stage, a novel and intuitive way of localizing performers or manually panning sound effects.


 
The TiMax PanSpace graphical object-based pan programming screen


For complex productions involving up to 24 performers, designers can add the TiMax Tracker, a radar-based performer-tracking system that interpolates softly between image definitions as performers move around the stage, thus affording a degree of automation that is otherwise unattainable.

Where very high SPLs are not required, reinforcement of live events may best be achieved not by mixing voices and other sounds together, but by distributing them throughout the house with the location cues that maintain their separateness, which is, after all, a fundamental contributor to intelligibility, as anyone familiar with the “cocktail party” effect will attest.

As veteran West End sound designer Gareth Fry says, “I’m quite sure that in the coming years, source-oriented reinforcement will be the most common way to do vocal reinforcement in drama.”

While mixing a large number of individual audio signals together into a few channels may be a very real requirement for radio, television, cinema, and other channel-restricted media such as consumer audio playback systems, this is certainly not the case for corporate events, houses of worship, theatre and similar staged entertainment.

It may sound like heresy, but just because it’s sound doesn’t mean it has to be mixed. With the proliferation of matrix consoles, adequate DSP, and sound design devices such as the TiMax2 Soundhub and TiMax Tracker available to the sound system designer, mixing is no longer the only way to work with live sound—let alone the best way for every occasion.

Thursday, June 20, 2013

Putting "Excellence" Into Perspective

One of the hard lessons learned while working on an episodic TV series is that there is neither the time nor the budget to make everything "perfect." It's been said that a show is never finished, just abandoned when you run out of time or money or both.

In a recent blog entry on the ProTools Expert site, editor Russ Hughes wrote, "Perfection is said to be as much a curse as it is a blessing, especially for creative types. We record, edit, mix audio or shoot, cut and grade video and often we just can’t leave it alone, or indeed be satisfied with the end results.

"Our clients often never know the lengths we go to when working on their projects; they certainly won’t pay for half the work we did in the name of perfection. It’s a difficult balance being a creative professional with a budget on the one hand and a personal desire to do the best we can on the other. It’s the little things that take a project from good to great, as I’ve already alluded to most of us seldom feel we have done that, despite our best efforts," he said. (http://bit.ly/11IioLh)

One of the areas that causes us the most grief is all those annoying, unwanted noises that plague our recordings, especially production tracks for film and TV shows.

As supervising sound editor on 66 episodes of Relic Hunter a few years back, I calibrated our edit rooms to 82 dBA SPL and worked on eliminating unwanted noise perceptible at that level, and not at a higher level, since 82 was the level set in the mix theatre at Deluxe. Of course you would hear every little glitch and tick if you were to boost the monitor pot, but the average audience won't. We're not going for absolute perfection, just doing the appropriate job for our client, the production company, within the available time and budgetary constraints, without killing ourselves.

One takeaway from this is that for picture work, calibrate your monitor level appropriately for the program type and then DON'T TOUCH IT AGAIN. That's how it's done on the mix stage. In fact, I've seen film consoles with the monitor pot removed. Seasoned mixers know when dialog is at the right level just using their ears and never looking at a meter. If you mix consistently to, say, the feature-film level of 85 dB SPL each and every day for as little as 4 weeks without ever changing the level, you'll soon train yourself to accurately gauge level, and you'll love the freedom this brings to the work, along with a concomitant lack of stress over little things that will never be heard in the intended listening environment.

This is also the antidote to level creep in music mixes, where the monitor level goes up as the hours stretch on, and ultimately changes the track's spectral content due to the way we perceive the amount of bass and treble at different listening levels (due to the equal-loudness contours). Try to avoid this temptation at all costs and leave the monitor pot alone. Failing that, calibrate it to something like 85 or 90 dBA SPL and remove the knob. Try it. You may like it.

Wednesday, May 29, 2013

80 Years On: Getting it Right for Speech Reinforcement

April 27 marked the 80th anniversary of a historic milestone in the history of audio. On this date in 1933, the Philadelphia Orchestra under deputy conductor Alexander Smallens was picked up by three microphones at the Academy of Music in Philadelphia—left, center, and right of the orchestra stage—and the audio transmitted over wire lines to Constitution Hall in Washington, where it was replayed over three loudspeakers placed in similar positions to an audience of invited guests. Music director Leopold Stokowski manipulated the audio controls at the receiving end in Washington.

This historic event was reported and analyzed by audio pioneers Harvey Fletcher, J.C. Steinberg and W.B. Snow, E.C. Wente and A.L. Thuras, and others, in a collection of six papers published in January 1934 as the Symposium on Auditory Perspective in Electrical Engineering, the journal of the American Institute of Electrical Engineers (a forerunner of today's IEEE). Paul Klipsch referred to the Symposium as "one of the most important papers in the field of audio."


April 27, 1933: Leopold Stokowski at the controls with Harvey Fletcher observing
 
Prior to 1933, Fletcher had been working on what has since been termed the wall of sound. “Theoretically, there should be an infinite number of such ideal sets of microphones and sound projectors [i.e., loudspeakers] and each one should be infinitesimally small,” he wrote.

Fletcher’s dual curtains of microphones and loudspeakers
 
Fletcher continued, “Practically, however, when the audience is at a considerable distance from the orchestra, as usually is the case, only a few of these sets are needed to give good auditory perspective; that is, to give depth and a sense of extensiveness to the source of the music.”

In this regard, Floyd Toole’s conclusions—following a career spent researching loudspeakers and listening rooms—are especially noteworthy. In his 2008 magnum opus, Sound Reproduction: Loudspeakers and Rooms, Toole noted that the “feeling of space”—apparent source width plus listener envelopment—which turns up in the research as the largest single factor in listener perceptions of “naturalness” and “pleasantness,” two general measures of quality, is increased by the use of surround loudspeakers in typical listening rooms and home theatres.

Given that these smaller spaces cannot be compared in either size or purpose to concert halls where sound is originally produced, Toole noted that in the 1933 experiment, “there was no need to capture ambient sounds, as the playback hall had its own reverberation."

Localization Errors

Recognizing that systems of as few as two and three channels were “far less ideal arrangements,” Steinberg and Snow observed that, nevertheless, “the 3-channel system was found to have an important advantage over the 2-channel system in that the shift of the virtual position for side observing positions was smaller."

In other words, for listeners away from the sweet spot along the hall’s center axis, localization errors due to shifts in the phantom images between loudspeakers were smaller in the case of a Left-Center-Right system compared with a Left-Right system.

Significantly, Fletcher did not include localization along with “depth and a sense of extensiveness” among the characteristics of “good auditory perspective.”

Regarding localization, Steinberg and Snow realized that “point-for-point correlation between pick-up stage and virtual stage positions is not obtained for 2-and 3-channel systems.” Further, they concluded that the listener “is not particularly critical of the exact apparent positions of the sounds so long as he receives a spatial impression. Consequently 2-channel reproduction of orchestral music gives good satisfaction, and the difference between it and 3-channel reproduction for music probably is less than for speech reproduction or the reproduction of sounds from moving sources.”

The 1933 experiment was intended to investigate “new possibilities for the reproduction and transmission of music,” in Fletcher’s words. Many, if not most, of the developments in multichannel sound have been motivated and financed by the film industry in the wake of Hollywood's massive financial investment in the "talkies" that single-handedly sounded the death knell of Vaudeville, and led to the conversion of a great many theatres into cinemas.

Given that the growth of the audio industry stemmed from research and development into the reproduction and transmission of sound for the burgeoning telephone, film, radio, television, and recorded music industries, it is curious that the term “theatre” continued (and still continues to this day) to be applied to the buildings and facilities of both cinemas and theatres. This reflects the confusion not only in their architecture, on which the noted theatre consultant Richard Pilbrow commented in his wonderful 2011 memoir A Theatre Project, but also in the development of their respective audio systems.

Theatre is Not Cinema: The Differing Requirements of Speech Reinforcement

Sound reinforcement was an early offshoot, eagerly adopted by demagogues and traveling salesmen alike to bend crowds to their way of thinking; yet, as Don Davis noted in 2013 in Sound System Engineering, “Even today, the most difficult systems to design, build, and operate are those used in the reinforcement of live speech. Systems that are notoriously poor at speech reinforcement often pass reinforcing music with flying colors. Mega churches find that the music reproduction and reinforcement systems are often best separated into two systems.”

The difference lies partly in the relatively low channel count of audio reproduction systems that makes localization of talkers next to impossible. Since delayed loudspeakers were widely introduced into the live sound industry in the 1970s, they have been used almost exclusively to reinforce the main house sound system, not the performers themselves. This undoubtedly arose from the sheer magnitude of the sound pressure levels involved in the stadium rock concerts and outdoor festivals of the era.

However, in the case of, say, an opera singer, the depth, sense of extensiveness, and spatial impression that lent appeal to the reproduced sound of the symphony orchestra back in 1933 likely won’t prove satisfying in the absence of the ability to localize the sound image of the singer’s voice accurately. Perhaps this is one reason why "amplification" has become such a dirty word among opera aficionados.

In the 1980s, however, the English theatre sound designer Rick Clarke and others began to explore techniques of making sound appear to emanate from the lips of performers rather than from loudspeaker boxes. They were among a handful of pioneers who used the psychoacoustics of delay and the Haas effect “to pull the sound image into the heart of the action,” as sound designer David Collison recounted in his 2008 volume, The Sound of Theatre.

Out Board Electronics in the UK has since taken up the cause of speech sound reinforcement, with a unique delay-based input-output matrix in its TiMax2 Soundhub that enables each performer’s radio mic to be fed to dozens of loudspeakers—if necessary—arrayed throughout the house, with unique levels and delays to each loudspeaker such that more than 90 per cent of the audience is able to localize the voice back to the performer via Haas effect-based perceptual precedence, no matter where they are seated. Out Board refers to this approach as source-oriented reinforcement (SOR).

The delay matrix approach to SOR originated in the former DDR (East Germany), where in the 1970s, Gerhard Steinke, Peter Fels and Wolfgang Ahnert introduced the concept of Delta-Stereophony in an attempt to increase loudness in large auditoriums without compromising directional cues emanating from the stage. In the 1980s, Delta-Stereophony was licensed to AKG and embodied in the DSP 610 processor, which offered only six inputs and 10 outputs, yet came at the price of a small house.

Out Board started working on the concept in the early 1990s and released TiMax (now known as TiMax Classic) around the middle of the decade, progressively developing and enlarging the system up to the 64 x 64 input-output matrix, with 4,096 cross points, that characterizes the current generation, TiMax2.

The TiMax Tracker, an ingenious radar-based location system, locates performers to within six inches in any direction, so that the system can interpolate softly between pre-established location image definitions in the Soundhub for up to 24 performers simultaneously. The audience is thereby enabled to localize performers’ voices accurately as they move around the stage, or up and down on risers, thus addressing the deficiency of conventional systems regarding the localization of both speech and moving sound sources.

Source-Oriented Reinforcement

Out Board director Dave Haydon put it this way: “First thing to know about source-oriented reinforcement is that it’s not panning. Audio localization created using SOR makes the amplified sound actually appear to come from where the performers are on stage. With panning, the sound usually appears to come from the speakers, but biased to relate roughly to a performer’s position on stage. Most of us are also aware that level panning only really works for people sitting near the center line of the audience. In general, anybody sitting much off this center line will mostly perceive the sound to come from whichever stereo speaker channel they’re nearest to.

“This happens because our ear-brain combo localizes to the sound we hear first, not necessarily the loudest. We are all programmed to do this as part of our primitive survival mechanisms, and we all do it within similar parameters. We will localize even to a 1 ms early arrival, all the way up to about 25 ms, then our brain stops integrating the two arrivals and separates them out into an echo. Between 1 ms and about 10 ms arrival time differences, there will be varying coloration caused by phasing artifacts.

“This localization effect, called precedence or Haas Effect after the scientist who discovered it, works within a 6-8 dB level window. This means the first arrival can be up to 6-8 dB quieter than the second arrival and we’ll still localize to it. This is handy as it means we can actively apply this localization effect and at the same time achieve useful amplification.

“If we don’t control these different arrivals they will control us. All the various natural delay offsets between the loudspeakers, performers and the different seat positions cause widely different panoramic perceptions across the audience. You only have to move 13 inches to create a differential delay of 1 ms, causing significant image shift. Pan pots just controlling level can't fix this for more than a few audience members near the center. You need to manage delays, and ideally control them differentially between every mic and every speaker, which requires a delay-matrix and a little cunning, coupled with a fairly simple understanding of the relevant physics and biology,” Haydon said.
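A quick sanity check on those numbers (Python; 343 m/s assumed for the speed of sound):

```python
SPEED_OF_SOUND = 343.0                    # m/s, assumed
metres_per_ms = SPEED_OF_SOUND / 1000.0   # ~0.343 m per millisecond
inches_per_ms = metres_per_ms / 0.0254    # ~13.5 in, Haydon's "13 inches"

# Differential arrival for an off-centre seat 3.0 m from the near stereo
# loudspeaker and 7.5 m from the far one:
near_ms = 3.0 / SPEED_OF_SOUND * 1000.0   # ~8.7 ms
far_ms = 7.5 / SPEED_OF_SOUND * 1000.0    # ~21.9 ms
print(round(inches_per_ms, 1), round(far_ms - near_ms, 1))  # 13.5 13.1
```

That 13 ms offset sits well inside the integration window Haydon describes, so the listener locks onto the nearer loudspeaker no matter what the pan pot says; only a per-crosspoint delay can restore the intended image for that seat.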

Into the Mainstream

More and more theatres are adopting this approach, including New York’s City Center and the UK’s Royal Shakespeare Company. A number of Raymond Gubbay productions of opera-in-the-round at the notoriously difficult Royal Albert Hall—including Aida, Tosca, The King and I, La Bohème and Madam Butterfly—as well as Carmen at the O2 Arena, have benefited from source-oriented reinforcement, as have recent productions of Les Miserables, Jesus Christ Superstar, Into the Woods, Beggar’s Opera, Marie Antoinette, Andromache, Tanz der Vampire, Lord of the Flies, Fela!, and many others at venues around the world.

Veteran West End sound designer Gareth Fry employed the technique earlier this year at the Barbican Theatre for The Master and Margarita, to make it possible for all audience members to continuously localize to the actors’ voices as they moved around the Barbican’s very wide stage. He noted that, in the three-hour show with a number of parallel story threads, this helped greatly with intelligibility to ensure the audience’s total immersion in the show’s complex plot lines.

Based on the experience, Fry said, “I’m quite sure that in the coming years, SOR will be the most common way to do vocal reinforcement in drama.”

As we mark the 80th anniversary of that historic first live stereo transmission, it’s worth noting that, in spite of the proliferation of surround formats for sound reproduction that has to date culminated in the cinematic marvel of 64-channel Dolby Atmos, we are only now getting onto the right track with regard to speech reinforcement.

It’s about time.

(photo source: http://www.stokowski.org) 

Saturday, April 13, 2013

The Passing of Online

The essential distinction between offline and online is that an offline process is one of construction; an online process, one of execution. In media production, online usually follows offline, as in the case of video editing, where a product that has been laboriously constructed in an offline edit suite—perhaps over the course of days or weeks—is executed by machinery following an edit decision list (EDL) in minutes or hours in an online suite.

Since the hourly rate of a well-appointed online suite is typically many times higher than that of a small offline studio—often equipped with not much more than a desktop computer running editing software—the distinction between online and offline has long been etched into the steely heart of many a production manager.

Applying this distinction to the field of music, you might say that playing an instrument is generally an online process, and requires the talent to perform. Constructing a musical performance using MIDI step input, for example, is an offline process, and requires a different skill set.

Before Bing Crosby teamed up with Jack Mullin back in 1947 and seized on the potential for splicing tape offline to construct complete recorded performances, recording musicians had to execute a complete work flawlessly to the end while it was being recorded direct to phonograph disc—an online process. If they made a mistake, they had to go back to the beginning, scrap the disc, and start all over again.

Likewise, dialing a phone on a traditional land line is an online process. If you realize you’ve made a mistake, you have to abort—hang up—and begin again. Dialing a cell phone, on the other hand, is an offline process. You compose the number and, if you make a mistake, you go back a step and delete the wrong input—edit it out—and input the right number. When the entire telephone number has been constructed to your liking, you go online—literally, hit the green online button—and the call is executed by the service provider.

The ability to edit is what distinguishes offline from online processes.

Sound mixing for film used to be mostly an online activity. It was common practice in the early decades of film sound for an entire 10-minute reel to be mixed in a single pass, following one or more rehearsals. With the development of pick-up and record electronics for film dubbers making punching in possible, the two- or three-person re-recording team enjoyed the ability at last to go back and fix a flawed portion of a mix—usually refining their console settings listening to the sound backwards while the dubbers rewound in real time—without causing undue delay and excessive cost to the production.

Mix automation changed all that, from the introduction of console automation systems in the 1970s to today’s digital audio workstations featuring the ability to graph not just volume and mute, but just about every conceivable control parameter. Automation has allowed the offline construction of mixes to become standard operating procedure, with the mix being subsequently executed online in a single record pass or internal bounce-to-disk.

Now this has all changed again with the introduction of offline bounce in ProTools 11, which enables freezing a mix—that is, rendering the final mix up to 150 times faster than real time, according to Avid.

A mix need never be onlined at all: the final mix can be rendered into a single file without ever being played through, until it is played back after the fact for quality-control checking and approval.

The notion of online vs. offline, once so central to the production process and necessitating the development of the all-important EDL, is in the process of being relegated to the status of a quaint curiosity, a byway in the development of modern studio practices and procedures. It will soon be forgotten, along with such other bygone realities as the daily tape recorder alignment ritual, analog noise reduction devices, and uniformed gas station attendants.

It brings to mind the day that I finally sold my once invincible Synclavier and 16-track Direct-to-Disk recorder—to a couple of vintage synth collectors, no less. The only things I hung onto were two blank rack panels and an AC power bar. Some things, at least, are irreplaceable.