Wednesday 31 December 2014

Why are $20k loudspeakers so expensive?

Here is a short video that should give pause to those who have asked that question with the confident skepticism of someone who has never tried to actually make a pair themselves. This person has made his own pair of B&W 800 Diamond loudspeakers. Has he succeeded? We will never know, but it sure looks most impressive.

In practice, he has restricted himself to making his own set of elaborate cabinets, as it looks as though he has bought all the drive units from B&W. But even so, the overwhelming impression is of the expensive resources he has had to bring to bear to realize the project. OK, he has done the grunt work himself, but the project has clearly taken a HUGE amount of time and effort. Aside from some initial consternation, I imagine that the executives at B&W are having a good chuckle over it.

Presumably his motivation was purely the satisfaction of creating his own work of art. Think about it. How much money can he possibly have saved by doing it himself? Do you think you could do it yourself for less, without sacrificing at least some of the core design objectives?

Whatever, as I contemplate my own B&W 802 Diamonds, I am sure glad I bought mine!

https://www.youtube.com/watch?v=fHgdNQkiNVk

Monday 15 December 2014

DSD - Is DST Compression Really Lossless?

The SACD format was built around DSD right from the start.  Since DSD takes up about four times the amount of disk space of a 16/44.1 equivalent this meant that a new physical disc format with more capacity than a CD was going to be required.  Additionally, SACD was specified to deliver multi-channel content, which increases the storage requirement by another factor of 3 or more, depending on how many channels you want to support.  The only high-capacity disc format that was on the horizon at the time was the one eventually used for DVD, and even this was going to be inadequate for the full multi-channel capability envisaged for SACD.

The solution was to adopt a lossless data compression protocol to reduce the size of a multi-channel DSD master file so that it would fit.  The protocol chosen is called DST, an elaborate DSP-based method loosely reminiscent of the way MP3 works.  Essentially, you store a bunch of numbers that represent the actual data as a mathematical function, which you can later use to try to re-create the original data.  You then store a bunch of additional numbers which represent the differences between the actual data and the attempted recreation.  If you do this properly, the mathematical function numbers, plus the difference data, take up less space than the original data.  On a SACD the compression achieved is about 50%, which is pretty good, and permits a lot of content to be stored.
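
To make the principle concrete, here is a deliberately toy sketch of predict-and-store-the-difference coding in Python.  This is emphatically not the actual DST algorithm - DST uses far more sophisticated prediction filters and entropy coding - but it shows why the size of the difference signal governs everything:

# Toy illustration of prediction-plus-residual lossless coding.
# NOT the actual DST algorithm - just the general principle:
# store a simple predictor's errors instead of the raw samples.

def compress(samples):
    # Trivial predictor: assume each sample equals the previous one.
    residuals = [samples[0]]
    for i in range(1, len(samples)):
        residuals.append(samples[i] - samples[i - 1])  # store only the error
    return residuals  # small residuals can be entropy-coded into few bits

def decompress(residuals):
    samples = [residuals[0]]
    for r in residuals[1:]:
        samples.append(samples[-1] + r)  # prediction plus stored difference
    return samples

data = [10, 11, 12, 12, 13, 20, 21]
assert decompress(compress(data)) == data  # bit-for-bit identical

For a smooth, predictable signal the residuals are tiny and the scheme wins handsomely; feed it noise-like data and the residuals are as big as the samples themselves, which is precisely the problem discussed below.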

Given that DST compression is lossless, it is interesting that the SACD format allows discs to be mastered with your choice of compressed or non-compressed data.  And, taking a good look at a significant sample of SACDs, it appears that a substantial proportion of those discs do not use compression.  Additionally, if you look closely, you will see that almost all of the serious audiophile remasters released on SACD are uncompressed.  So the question I have been asking is - is there any reason to believe that DST-compressed SACDs might sound worse than uncompressed ones?

First of all, let me be clear on one thing.  The DST compression algorithm is lossless.  This means that the reconstructed bit stream is bit-for-bit identical to the original uncompressed bit stream.  This is not at issue here.  Nor is the notion that compressing and decompressing the bits somehow stresses them so that they don’t sound so relaxed on playback.  I don’t buy mumbo jumbo.  The real answer is both simpler than you would imagine (although technically quite complicated), and at the same time typical of an industry which has been known to upsample CD content and sell it for twice the price on a SACD disc.

To understand this, we need to take a closer look at how the DSD format works.  I have written at length about how DSD makes use of a combination of massive oversampling and noise shaping to encode a complex waveform in a 1-bit format.  In a Sigma-Delta Modulator (SDM) the quantization noise is pushed out of the audio band and up into the vast reaches of the ultrasonic bandwidth which dominates the DSD encoding space.  The audio signal only occupies the frequency space below 20kHz (to choose a number that most people will agree on).  But DSD is sampled at 2,822kHz, so there is a vast amount of bandwidth between 20kHz and the Nyquist frequency of 1,411kHz available, into which the quantization noise can be swept.

One of the key attributes of a good clean audio signal is that it has low noise in the audio band.  In general, the higher quality the audio signal, the lower the noise it will exhibit.  The best microphones can capture sounds that cannot be fully encoded using 16-bit PCM.  However, 24-bit PCM can capture anything that the best microphones will put out.  Therefore, if DSD is to deliver the very highest in audio performance standards it needs to be able to sustain a noise floor better than that of 16-bit audio, and approaching that of 24-bit audio.
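
(As a point of reference, the theoretical signal-to-noise ratio of ideal N-bit PCM is roughly 6.02N + 1.76 dB, which works out to about 98dB for 16-bit audio and about 146dB for 24-bit audio.  Those are the benchmarks the previous paragraph refers to.)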

The term “Noise Shaping” is a good one.  Because quantization noise cannot be eliminated, all you can hope to do is to take it from one frequency band where you don’t want it, and move it into another where you don’t mind it - and in the 1-bit world of DSD there is an awful lot of quantization noise.  This is the job of an SDM.  The design of the SDM determines how much noise is removed from the audio frequency band, and where it gets put.  Mathematically, DSD is capable of encoding a staggeringly low noise floor in the audio band.  Something down in the region of -180dB to -200dB has been demonstrated.  What good DSD recordings achieve is nearer to -120dB, and the difference is partly due to the fact that practical real-world SDM designs seriously underperform their theoretical capabilities.  But it also arises because better performance requires a higher-order SDM design, and beyond a certain limit high-order SDMs are simply unstable.  A workmanlike SDM would be a 5th-order design, but the best performance today is achieved with 8th or 9th order SDMs.  Higher than that, and they cannot be made to work.

So how does a higher-order SDM achieve superior performance?  The answer is that it packs more and more of the quantization noise into the upper reaches of the ultrasonic frequency space.  So a higher-performance higher-order SDM will tend to encode progressively more high-frequency noise into the bitstream.  A theoretically perfect SDM will create a bitstream whose high frequency content is virtually indistinguishable from full-scale white noise.
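
To illustrate the mechanism (and only the mechanism - a first-order modulator like this toy is far below what DSD actually uses), here is a minimal sigma-delta modulator sketch in Python.  The integrator carries the running quantization error forward, which is what pushes the noise up and out of the audio band:

import math

def sdm_first_order(signal):
    # signal: floats in the range -1.0 to +1.0
    # returns: a list of 0/1 output bits
    bits = []
    integrator = 0.0
    feedback = 0.0
    for x in signal:
        integrator += x - feedback           # accumulate the running error
        bit = 1 if integrator >= 0.0 else 0  # brutal 1-bit quantizer
        feedback = 1.0 if bit else -1.0      # the analog meaning of the bit
        bits.append(bit)
    return bits

# A 1kHz tone at the DSD64 sample rate.  Low-pass filtering the bit
# stream recovers the tone; the quantization noise sits mostly above
# the audio band, and higher-order designs push it up far harder.
fs, f = 2_822_400, 1000
tone = [0.5 * math.sin(2 * math.pi * f * n / fs) for n in range(10_000)]
stream = sdm_first_order(tone)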

This is where DST compression comes in.  Recall that DST compression works by storing a set of numbers that enable you to reconstruct a close approximation of the original data, plus all of the differences between the reconstructed bit stream and the original bit stream.  Obviously the size of the compressed (DST-encoded) file will be governed to a large degree by how much data is needed to store the difference signal.  It turns out that the set of numbers that reconstruct the ‘close approximation’ do a relatively good job of encoding the low frequency data, but a relatively poor job of encoding the high frequency data.  Therefore, the more high frequency data is present, the more additional data will be needed to encode the difference signal.  And the larger the difference signal, the larger the compressed file will be.  In the extreme, the difference signal can be so large that you will not be able to achieve much compression at all.

This is the situation we are in with today’s technology.  We can produce the highest quality DSD signal and be unable to compress it effectively, or we can accept a reduction in quality and achieve a useful degree of (lossless) compression.

So what happens when we have a nice high-resolution DSD recording all ready to be sent to the SACD mastering plant?  What happens if the DSD content is too large to fit onto a SACD, and cannot be compressed enough so that it does?  The answer will disappoint you.  What happens is that the high quality DSD master tape is remodulated using a modest 5th-order SDM, in the process producing a new DSD version which can now be efficiently compressed using DST compression.  Most listeners agree that a 5th order SDM produces audibly inferior sound to a good 8th order SDM, but with real music recordings it is essentially impossible to inspect a DSD data file and determine unambiguously what order of SDM was used to encode it.  So it is easy enough to get away with.

How do you tell if a SACD is compressed or not?  Well, if you have the underground tools necessary, you can rip it and analyze it definitively.  For the rest of us there is no sure method except for one.  You simply add up the total duration of the music on the disc, and calculate 2,822,400 bits of data per second, per channel.  If the answer amounts to more than 4.7GB then the data must be compressed.  If it adds up to less, there is no guarantee that it won’t be DST-compressed, but the chances are pretty good that it is not.  After all, if the record company wants to compress it, they’d have to pay someone to do that, and that probably ain’t gonna happen.  The other simple guideline is that if it is multi-channel it is probably compressed, but if it is stereo it probably is not.
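
If you want to do that arithmetic without a pencil, here is the back-of-envelope calculation as a Python sketch (assuming a plain stereo disc, and taking the 4.7GB figure at face value, i.e. ignoring the disc’s file-system overheads):

# Back-of-envelope check: does a stereo DSD album fit on a SACD
# without DST compression?  (4.7GB here means 4.7e9 bytes.)

DSD_RATE = 2_822_400   # bits per second, per channel
CHANNELS = 2           # stereo
DISC_BYTES = 4.7e9     # nominal SACD data layer capacity

minutes = 80           # total duration of the music on the disc
raw_bytes = DSD_RATE * CHANNELS * minutes * 60 / 8
print(f"raw DSD size: {raw_bytes / 1e9:.2f} GB")   # 3.39 GB - it fits
print("must be DST-compressed" if raw_bytes > DISC_BYTES
      else "can fit uncompressed")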

Of course, none of this need apply to downloaded DSD files.  If produced by reputable studios these will have been produced using the best quality modulators they can afford, and since DST encoding is not used on commercial DSF and DFF* files this whole issue need not arise.  However, if the downloaded files are derived from a SACD (as many files are which are not distributed by the original producers), then the possibility does exist that you are receiving an inferior 5th-order remodulated version.  The take-away is that not all DSD is created equal.  Yet another thing for us to have to bear in mind!

[* Actually, the DFF file format does allow for the DSD content to be DST compressed, because the DFF file format is what is used by the mastering house to provide the final distribution-ready content to the SACD disc manufacturing plant.  However, for consumer DSD file distribution, I don’t think anybody employs DST compression.]

Saturday 13 December 2014

BitPerfect 2.0.2 Released

Today, BitPerfect 2.0.2 has been released to the App Store.  It may take up to 48hrs before it shows up in all regions.  V2.0.2 contains several minor bug fixes, plus some minor enhancements to the audio engine to improve stability.

BitPerfect 2.0.2 is a free upgrade for existing BitPerfect users.


Monday 24 November 2014

24-Bits Of Christmas

Once again LINN Records are announcing a "24-Bits Of Christmas" promotion, where they are offering a free high-resolution download every day from December 1st through Christmas.  The doors are opening early this year, and the first track is already available.  Check it out!

A Fix for the Yosemite Console Log Problem

BitPerfect user Stefan Leckel has come up with a useful solution to the Yosemite Console Log problem.  In case you are unaware, under Yosemite, when you use BitPerfect, iTunes floods the Console Log with a stream of entries - several per second - which rapidly fills it to capacity.  At that point, the oldest messages are deleted.  In effect, this renders the Console Log pretty useless as a diagnostic tool.

Stefan's ingenious solution is a simple script file which, in effect, sets up the Console App so that it ignores these specific messages.  However, because the script works at the system level, using it requires a level of comfort with working on OS X using tools that are capable of wreaking havoc, although hopefully the instructions below are easy enough for most people to use with a degree of comfort.  As with anything that involves tinkering at the system level, YOU USE THIS TOOL ENTIRELY AT YOUR OWN RISK, AND WITH NO EXPRESS OR IMPLIED WARRANTY.  If in doubt, channel Nancy Reagan, and "Just Say No":)

First, you need a special script file, which you can download by clicking here.  This will download a file called ConsoleFix.sh.  It doesn't matter where you place this file.  Your downloads folder would do fine.  If you are concerned about the authenticity of this file, or what it might be doing to your Mac, the contents are reproduced below for you to inspect and compare.

To use the script file, you need to first open a Terminal window.  Inside the terminal window type the following: "sudo bash " - don't type the quote marks, and be sure to leave a space after the bash - and DON'T press the ENTER key.  Next, drag and drop the ConsoleFix.sh file that you just downloaded into the Terminal window.  This action will complete the "sudo bash " line with the full path of the ConsoleFix.sh file, so that it reads something like "sudo bash /Users/yourname/Downloads/ConsoleFix.sh" (the exact path will depend on where you saved it).  Now you can press ENTER.  You will be prompted to enter your system password.  Enter it (nothing will show in the Terminal as you type), and hit ENTER.

That's it.  The Console Log should now work fine.  If you want to reset it back to how it was, just re-run the same sequence.  The same command is designed to toggle the modification on and off.

Thank you Stefan!

Below, for reference, I have reproduced the content of ConsoleFix.sh in full (long lines that were previously wrapped for display have been rejoined here, so the script can be inspected or copied directly):

#!/bin/bash
#
# !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
# use at your own risk, no warranty
# !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
#
# checks if asl.conf is already modified
#
set -x
cat /etc/asl.conf|grep -F "? [= Facility com.apple.coreservices.appleevents] [= Sender BitPerfect] [S= Message com.apple.root.default-qos.overcommit] ignore" > /dev/null

if [ $? -eq 0 ]
then
    echo "removing bitperfect modification from /etc/asl.conf file"
    cat /etc/asl.conf|grep -v -F "? [= Facility com.apple.coreservices.appleevents] [= Sender BitPerfect] [S= Message com.apple.root.default-qos.overcommit] ignore" > /etc/asl.bitperfect
else
    echo "adding bitperfect modifications to /etc/asl.conf file"
    echo "? [= Facility com.apple.coreservices.appleevents] [= Sender BitPerfect] [S= Message com.apple.root.default-qos.overcommit] ignore" > /etc/asl.bitperfect
    cat /etc/asl.conf >> /etc/asl.bitperfect
fi

echo "backup /etc/asl.conf to /etc/asl.conf.bitperfect"
cp /etc/asl.conf /etc/asl.conf.bitperfect

echo "activating new config"
mv /etc/asl.bitperfect /etc/asl.conf

echo "restarting syslogd daemon"
killall syslogd

echo "done."
exit
 


Friday 21 November 2014

The Importance Of Being Roger

Roger has been somewhat shunted unceremoniously to one side in the modern world.  We seem to have forgotten why he was ever there in the first place, and the important role he used to play.  Without Roger, our world today is a less friendly place, one in which misunderstandings are easy to come by.  Personally, I miss him, but then again I suppose I am just another old fart.

In the early days of person-to-person radio communications, Roger played a critically important role.  If you are flying an aeroplane, and you want to announce to the control tower that you’re commencing your takeoff roll, you want to be sure that the control tower is aware of that, otherwise all sorts of unpredictable outcomes could potentially result, some of them dire.  That’s where Roger comes in.  The control tower responds “Roger” and now you know your message has been received and, by extension, that the control tower knows you are rolling.  It is part of what we today recognize as a handshaking protocol, something that ensures the effectiveness of a two-way communication.  Handshaking is a tool to ensure that a message has been received, that it has been understood, and that both parties know either who is expected to speak next, or that they are agreed that the conversation is over.

When speaking to someone face-to-face, or over the telephone, there are implied cues to which we tend to adhere in order to provide this handshaking element.  These can be turns of phrase, vocal inflections, gestures, and the like.  They often vary among cultures.  How we communicate with a person has important ramifications as to how the other person perceives us, and how we in turn perceive the other person.  We may perceive that person to be brusque, friendly, rude, gregarious, or to have any of a number of attributes.  If, as a person, your inter-personal communications cause others in the world to perceive you wrongly, it is well-understood that you could have problems in your life.

Generally, it is important in our day-to-day inter-personal communications that we understand how the subtext of our communication is being received.  If you ask someone if they want to have a beer with you after work, there is a world of difference between “No” and “Gee, I’m sorry, but my daughter has soccer practice”.  Most of us, when we speak with someone face-to-face or on the telephone, understand the subtext, even as we recognize that the understanding itself is sometimes in error.

Roger’s absence first became a problem with the widespread introduction of e-mail, initially into mostly business correspondence.  If I send an e-mail inviting a colleague out for dinner when I’m in town next week, many people will find it acceptable to reply “No” in an e-mail when they really mean “Gee, I’m sorry, but I’m out of town that day”, even though they would never dream of responding with the terse “No” in a face-to-face situation.  It is part of a complex issue, one on which I don’t propose to write a treatise, but a major contributory factor is that, for most of us, it takes far longer to compose an e-mail message that properly encapsulates the subtext with which we wish to endow our response, and often we just don’t feel we have that time.  Personally, I find that excuse to be a lazy one, or failing that, a disrespectful one towards the recipient.

In today’s world, for many people the text message has replaced the e-mail, particularly for one-on-one conversations.  Partly by their nature, and partly due to the hardware typically used to send them, text messages tend to be terse by default.  Additionally, text message conversations tend to replace telephone conversations for many people.  They want to multi-task.  They will fit your text conversation in as they find time during the course of their day.  And so will you.  Consequently, the alternative of a phone call takes on something of the aura of an intrusion.  Which is rather frustrating, since, in the grand scheme of things, a phone conversation is always many, many times more effective.  But that too is another discussion.

This is where Roger comes in.  The lingua franca of texting is the curt message.  I don’t know about you, but I really feel the need to know that a message has been received and understood.  If I send a text that says something to the effect of “I need to see your report by the end of the day”, I feel unhappy if I don’t get a response.  It’s like if I said the same thing to that person in my office, and he just walked out without responding.  It is very clear to me how I would interpret such an action.  And shortly thereafter it would be equally clear to the other person.  What is missing is Roger.  All you need is a text response that says “Will do”.  Or even “K”, if that’s your thing.  Sadly, and frustratingly, I am finding that Roger is very much the exception rather than the rule these days.

I said earlier that inter-personal “handshaking” protocols are to a large degree cultural, and maybe that’s what’s going on here.  A texting culture is arising - or has arisen - in which subtext is no longer being conveyed at all - even emoticons, as best as I can tell, are en route to being passé.  If so, that is a counter-productive development.  I do converse with a few people routinely whose preferred mode of communication is the text message, and have done so for some time.  But I still find it a frustrating medium, mainly because I dearly miss Roger, which I still give, but rarely receive.
 

Thursday 13 November 2014

The PS Audio DirectStream DAC - Part IV.

It has been several months now since I concluded my review of the PS Audio DirectStream DAC, and a pretty positive review it was.  Since that time the unit has continued to very gradually break in, and there have also been a couple of firmware updates (all of which are available on PS Audio’s web site at no charge), each an improvement, but not sufficient to justify adding substantially to the gist of my review.  But recently there has been a major firmware update - version 1.2.1 is its designation - which is a sufficient game-changer that it warrants a whole update of its own.

So why should something as mundane as firmware change the sound of a DAC?  Normally when we think of firmware updates we think of functionality rather than performance.  And indeed there are functionality issues which are addressed here - the DirectStream now fully supports 24/352.8 PCM, which it did not do with the original firmware.  But in a DAC in particular, a large part of what it does performance-wise lies in the processing of the incoming data in the digital domain, and those processes are often under the control of the firmware.  This is particularly true of the DirectStream, where all of that processing runs on PS Audio’s own in-house firmware rather than within the proprietary workings of a third-party chipset.  What goes on under the aegis of its firmware is to a large degree the heart and soul of what the DirectStream is all about.

I have communicated at length with Ted Smith, designer of the DirectStream, about the nature and effect of the changes he has made.  I’m not sure how open those discussions were intended to be, and so I will not share them in detail with you, but there are two areas in which his attention has been primarily focussed.  The first is on the digital filters, and how their optimal implementation is found to affect jitter, something which initially surprised me.  The second is on the Delta-Sigma Modulators which generate the single-bit output stage, always an area ripe for improvement, in which Ted has reined in the attack dogs which stand guard to protect against the onslaught of instabilities.  Together, the effect of these significant updates has been transformative, and that is not a word I use lightly.

The simple description of the sound of the 1.2.1 firmware update is that it has opened up the sound.  Everything has more space and air around it.  Sonic textures have acquired a more tactile quality.  The music just communicates more freely.  It would be easy to sit back and characterize the sound as more “Analog-” or “Tube-” like.  These are words the audiophile community likes to use as currency for sound that is quite simply easy to listen to.  It is interesting that we audiophiles admire and value attributes such as sonic transparency, detailed resolving power, and dynamic response, and yet how often is it that when we are able to bring them together the result is painfully unlistenable?  It is these Yin and Yang elements that are foremost in my mind as I listen to the 1.2.1 version of the DirectStream.

So, without further ado, what am I listening to, and how is it sounding?

First up is “Bending New Corners” by the French jazz trumpeter Erik Truffaz.  This is a curious fusion of early-‘70s Miles Davis, ambient groove jazz, and trip-hop, which brings to mind the sort of music that might have deafened you in the über-trendy restaurant scene of the 1990’s.  I first heard it on LP at the Montreal high-end dealer Coup de Foudre, and today I’m playing the CD version.  The mix is a relatively simple one involving trumpet, bass, keyboards and drums, plus the occasional vocal stylings of a rapper called ‘Nya’.  The music is set in an atmospheric ambience, and is quite simple in its sonic palette, but nevertheless I have always had trouble separating out the individual instruments.  I was keen to know what the additional resolving power of the 1.2.1 DS would make of it.

What the additional clarity brought was the realization that I have been hearing the limits of this recording all along.  The trumpet has a very rich spectrum of harmonics which overlay most of the audio spectrum, and when it plays as a prominent solo instrument those harmonics can intermodulate with the sounds of many other instruments, making it difficult to hear through the trumpet and follow the rest of the mix.  If the intermodulation is baked into the recording, then no degree of fidelity in the playback chain is going to solve that problem.  This is what I am plainly hearing with the 1.2.1 DS.  This recording, far from being a clean and atmospheric gem waiting for an extraordinary DAC to liberate its charms, is a bit of a digital creation.  The extraordinary DAC instead reveals its ambience as a digital artifact.  The lead trumpet and vocals can be heard to have a processed presence about them.

Once you have heard something, you can never “un-hear” it again.  It’s a bit like skiing, in that once you’ve mastered it, it becomes impossible to ski like you did when you were still learning.  At best, all you’ll manage is a caricature of a person skiing like a novice.  I can now go back to the CD of “Bending New Corners” on a lesser system and will recognize its flaws for what they are, even though previously I would have interpreted what I was hearing differently.

My experience with Bending New Corners was to be repeated many times.  As I type, I am listening to Ravel’s Bolero with the Minnesota Orchestra conducted by Eiji Oue on Reference Recordings, ripped from CD.  It begins with a pianissimo snare drum some 20 feet behind the speakers and slightly to the right of center.  This recording has always been one of which I have thought highly.  The solo pianissimo snare is a good test for system imaging.  However, I now hear the snare as living in a slightly smeared space.  I perceive its sonic texture differently - more plausibly accurate if you will (a layer of sonic mush hovering around the instrument itself has evaporated away like the early mist on a spring morning) - but I somehow cannot place the image more accurately than a few feet.  I surmise that, because my brain is more confident that it is hearing the sound of a pianissimo snare drum, it therefore also expects to hear that sound more accurately localized in space.  But it is unable to do that.  As a consequence, although I never previously thought that the stereo image was wanting, I now appreciate that in fact it is, and I wonder how a higher-resolution version of this recording would compare.

Here is a song my wife likes.  It is “Hollow Talk” from the CD “This is for the White in your Eyes” by the Danish band Choir of Young Believers.  My wife had me track it down because it is the theme tune on a Danish/Swedish TV show we have been watching on Netflix called The Bridge (Bron/Broen).  It is another example of how the DS 1.2.1 can render a studio’s clumsy machinations clearly manifest.  The echo applied to the vocal adds atmospherics but is just unnatural.  As the track proceeds, the production gets layered on and layered on - and then layered on some more.  The effect is all very nice when heard on TV, but on my reference system driven by the DS 1.2.1 it just calls out for a lighter touch.  For example, at the beginning I heard a faint sound off to the left like someone getting into or out of a car and closing the door.  I don’t see why they wanted to include that - I can’t imagine it is particularly audible unless you have a highly resolving system such as a DS 1.2.1, one which makes clear the dog’s breakfast nature of the recording.

Next up is another old favourite of mine, “Unorthodox Behaviour” by 1970’s fusion combo Brand X.  I saw the band live at Ronnie Scott’s club in London back in 1975 (or thereabouts) and bought the album on LP as soon as it came out.  Today, I’m playing a 24/96 needle-drop.  I just love the opening track, Nuclear Burn.  Percy Jones’ bass lick is original and memorable, and extremely demanding of technique.  DirectStream 1.2.1 lets me hear the bass line more clearly than I have ever heard it before.  I had always thought it to have a slightly muddy texture - not surprising, given that playing it would tie most people’s fingers into inextricable knots - but now I hear just how extraordinarily skilled Jones’ bass chops really were.  And below it, Phil Collins’ kick drum has acquired real weight.  Not that it sounds any louder, or deeper.  It is more like the pedal mechanism has had an extra 5lb of lead affixed to it.

Now to a lonely corner of your local music store, where the Jazz, Folk, and Country aisles peter out.  This is where you’ll find Bill Frisell’s 2000 CD “Ghost Town”, a finely recorded ensemble of mostly acoustic guitar and banjo music with Frisell playing all the instruments.  Despite the album’s soulful and contemplative mood, due at least in part to the sparse arrangements and absence of a drum track, I keep expecting it to break out suddenly into 'Duelling Banjos'.  The track list comprises mostly Frisell original compositions together with a handful of well-chosen covers.  Apart from enjoying the music, the idea here is to play Spot The Guitar.  On a rudimentary system this involves telling which are the guitars and which the banjos.  As the system gets better, you start to be able to tell how many different models of each instrument are being played.  With the DS 1.2.1 I suspect you could go further and identify the brands (Klein, Anderson, Martin, etc.).  Me, I’m not a guitar head, and can’t do that (although, back in the day, I used to be able to reliably tell a Strat from a Les Paul, even on the most rudimentary systems), but I do hear the different tonalities and sonorities very clearly.

Gil Scott-Heron is credited in some circles as being the father of rap.  He was a soulful yet extremely cerebral poet-musician with a strong sense of a social message.  His 1994 CD “Spirits” was a bit of a swan song, and contains a track “Work for Peace” which is a political rant against the 'military and the monetary', who 'get together whenever it’s necessary'.  I kinda like it - it is, I imagine, great doper music … yeah, man.  But the mostly spoken voice is very soulfully and plausibly captured.  You can imagine the man himself, in the room with you.  I would just love to hear the original master tape transferred to DSD.

“I Remember Miles” is a 1998 CD from Shirley Horn.  It’s a terrific recording, and won the Grammy for Best Jazz Vocal Performance.  But really, it is an all-round wonderful album.  And the standout track is an absolute classic 10-minute workout of Gershwin’s “My Man’s Gone Now” from Porgy and Bess.  It begins with Ron Carter’s stunning, ripely textured, ostinato-like bass riff which underpins the track.  It has always sounded to me like two basses - one electric and one acoustic - but with the latest DS 1.2.1 the electric bass tones now sound more and more like an expertly played and finely recorded acoustic bass, and in addition I’m beginning to think there’s just the one bass - perhaps even double-tracked.  I’d love to know what you think.  Aside from the tasty bass, the rest of the recording is revealed to have a smooth but slightly congested, slightly coloured sound, a bit like what I hear when I listen to SETs played through horn speakers (I know, I know, heresy.  Kill me now.).  The immediacy and sheer presence of a fine DSD recording is just not there.  Unfortunately, this has not been released on SACD either.  Perhaps a DSD remaster will finally put the bass conundrum to bed?

Which brings me to the nub of this review.  Finally, the DirectStream is delivering on its huge promise as a DSD reference.  With the 1.2.1 firmware, it is opening up a clear gap between its performance with DSD and PCM source material, along the exact same lines as my previous experience with the Light Harmonic Da Vinci Dual DAC.  The DSD playback just adds that extra edge of organic reality to the sound.  It just sounds that little bit more like the actual performer in the room with you.  Sure, CD sounds great on it - probably as good as I’ve ever heard it sound - but the DS 1.2.1 consistently shows CD at its limits.  Great sound requires more than CD can deliver across the board, and in my view the DS 1.2.1 - through its excellent performance - makes this about as clear as it’s ever going to be.

In Part II of my review I mentioned the CD of Acoustic Live by Nils Lofgren.  I recently came across a SACD of music from the TV series “The Sopranos”, and it contains “Black Books” from the Lofgren album.  The CD is a pretty special recording, but the DSD ripped from the SACD just blows it clean out of the water, if you can imagine such a thing.  The vocal has incredible in-the-room-in-front-of-you presence.  All of the acoustics, which were already pretty open, really open up.  The pair of tom-toms I mentioned take on individual tonality, texture, and weight.  And the guitar work, which I previously characterized as being 'aggressively picked' comes across with a much more natural and plausible sound.  You just cannot go back to the CD and hear it the same way.  DAMN!  Someone needs to release this whole album on SACD, and preferably as a DSD download.

Another great SACD is MFSL’s remastering of Stevie Ray Vaughan’s “Couldn’t Stand The Weather”, with its perennial audiophile favourite “Tin Pan Alley”.  Beginning with a solid kick drum thwack, it launches into a cool, laid-back, 12-bar blues.  Vaughan’s guitar has just the right combination of restraint and blistering finger work, and his vocal is very present and stable, just to the left of centre.  The rhythm section lays down a fine metronomic beat, playing the appropriate foundational role upon which SRV builds his performance.  By contrast, in their uncomplicated take on Hendrix’s “Voodoo Chile”, the drums are given full rein to pound out a tight and impactful rhythm, and SRV gives his guitar hero chops a good airing.  If you’re unfamiliar with SRV and want to know what the man was about, this would be the place to start.  It is a fantastic recording, and one that has been expertly transferred to SACD.

The Japanese Universal Music Group has remastered and released many classic albums in their SHM-SACD series, all of which are both hard to come by outside of Japan, and ruinously expensive.  Their work on Dire Straits’ “Brothers in Arms” is interesting.  To the best of my knowledge the original recording was on 20-bit 44.1kHz digital tape (but there are people around that know more than me about those things).  Anyway, the fact is that there is no obvious reason why a remastered SACD should sound significantly better than the original CD, unless, of course, the latter was not well mastered.  However, the conventional wisdom is that Mark Knopfler was particularly anal about the recording and mastering quality, and so maybe that argument doesn’t hold water.  Additionally, the Universal SHM-SACD can be compared with a contemporary remastering by MFSL, and both can be compared to the original CD.

Right away, both SACDs come across as superior to the CD in all the important ways.  The title track, Brothers in Arms, is one of my all-time go-to tracks.  On both remasterings, with the DS 1.2.1 the vocal has that signature SACD presence, and Knopfler’s guitar work sounds more organic, more like a real instrument in the room with you - just like with the Nils Lofgren.  I puzzled over how and why two SACD remasters from impeccable digital sources could sound different.  But they do, and maybe someone could enlighten me about that.  The two remasters sound almost stereotypical (there’s gotta be a pun in there somewhere) of how we think of Japanese and American musical tastes.  The Japanese SHM-SACD is massively detailed, but with slightly flat tonal and spatial perspectives compared to the American MFSL.  The latter’s tonal bloom fills the acoustic space in a more immediately appealing manner, but at the apparent cost of some of that delicious detail.  If one is right, then the other must be wrong, so they say.  You pays your money, and you takes your choice.  But the bottom-line is that with a DAC of the resolving power of the DS 1.2.1 considerations such as these are going to weigh more heavily than might otherwise be the case.

So there you have it.  The 1.2.1 firmware update will transform your DirectStream from a great product into a game-changing product.  I concluded my last review by comparing the DirectStream, with its original firmware, to my all-time reference, the Light Harmonic Da Vinci Dual DAC.  I felt, based on my aural memory - since I no longer have the Da Vinci to hand - that the DirectStream was not quite up to the latter’s lofty standards.  With the 1.2.1 firmware I am no longer so sure about that.  I would need to have both DACs side-by-side in order to be certain.  But this time around my aural memory tells me that the DirectStream in its 1.2.1 incarnation could very well give the Da Vinci a good run for its money.  And in some areas, such as its bass performance, I even wonder if the DirectStream might not come out on top.  Let's bear in mind the price difference - $6k vs $31k.  That’s an extraordinary achievement.

Tuesday 11 November 2014

The Promised AirPlay Update

Now that Tim’s Vermeer is out of the way, here, finally, is my promised AirPlay update.  You will recall that, following my system-wide upgrade to Yosemite and iTunes 12.x, when I set about evaluating BitPerfect’s AirPlay behaviour under that configuration it started out looking very bleak.  Nothing seemed to want to work, and there seemed to be a number of different and quite independent failure modes.  At the same time I was running short of patience with my AppleTV for non-audio reasons, and eventually discovered that it was one of a number of early ATV3 units which was eligible for free replacement under an Apple program.  With the AppleTV removed from my network, AirPlay suddenly started working very well, very predictably, and very stably.  Now that I have my replacement ATV3 from Apple, the question is what would happen when I re-installed it?

The answer is good news.  It has had no adverse impact whatsoever on my AirPlay setup.  So I set about devising a torture test, one I have never tried before.  I have three Macs in my test network, all running Yosemite and iTunes 12 - the first is a 2013 Mac Mini, the second a 2014 RMBP, and the third a 2009 MBP.  I also have three AirPlay devices in my network - the first is my Classé CP-800 (which has an ethernet-connected AirPlay interface built in), which is connected to my main reference system; the second is an Airport Express connected to a set of computer speakers; and the third is my new AppleTV3, connected to a TV set.

All three Macs are running BitPerfect 2.0.1 straight from the App Store, and all three are playing through AirPlay.  The Mac Mini is playing Sade’s “Diamond Life” to the CP-800, the MBP is playing Laura Mvula’s “Live with Metropole Orkest” to the AirPort Express, and the RMBP is playing AC/DC’s “Back in Black” to the AppleTV.  All three are playing music quite happily, simultaneously, and the music choices are ones which my brain can at least separate out from the cacophony.  There have been no dropouts or other problems as far as I can tell (it is not easy listening to three systems simultaneously!).  So what had the potential to be a metaphorical headache is now instead a physical one, but for the time being I am not too unhappy about it :)

Next, I decided to stress the system to breaking point.  Although BitPerfect “hogs” its audio output device, thereby preventing other Apps - including OS X itself - from accessing it, this is not so simple with AirPlay.  BitPerfect can only hog AirPlay’s standard audio interface, but OS X does not control AirPlay through the standard audio interface.  So, even while BitPerfect is hogging it, you can still access the AirPlay subsystem via OS X’s Audio Midi setup.  So what would happen if I messed with the AirPlay settings in Audio Midi Setup while BitPerfect was busy playing through it?  And suppose I did that simultaneously with all three systems, while each one was busy playing?  Ugh.

So, on my Mac Mini I changed the AirPlay device in Audio Midi Setup from the CP-800 to the AppleTV.  Nothing happened.  BitPerfect continued playing to the CP-800, and the RMBP continued playing to the AppleTV.  So then I changed the RMBP’s AirPlay device from AppleTV to Airport Express, and the MBP’s from AirPort Express to CP-800.  Now, each of my Macs has its AirPlay device set to a different one from the one which BitPerfect is playing through, but BitPerfect’s playback has continued unchanged.  It is as though BitPerfect’s “hog” on the AirPlay device has a lot more teeth to it than was previously the case under Mavericks and Mountain Lion.  This is a good thing, although the net result would be seriously confusing to someone who came in off the street right now and set about inspecting my setup.

Finally, the playlist playing on each of the systems has moved on to the next album in the queue.  In each case I have selected a 24/176.4 album so that (i) BitPerfect is doing some extra work to downsample the incoming signal, and (ii) the WiFi network is now being more seriously challenged.  The Wi-Fi network now has to stream 24/176.4 music into two of the three computers (the audio files live on a NAS), and then stream a 16/44.1 AirPlay stream out of the computers and then into the AirPlay devices.  That’s two 24/176.4 streams and four 16/44.1 streams simultaneously.  The third computer, and the third AirPlay device, are both connected via ethernet.  Everything continues to play just fine.  Credit here, to be fair, must go to my trusty Cisco E4200 router.
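
Out of curiosity, the raw numbers are easy to tally (a rough sketch using uncompressed PCM rates; AirPlay streams are losslessly compressed in transit, so the real traffic is somewhat lower):

# Rough Wi-Fi bandwidth tally for the torture test (overheads ignored).
hires = 24 * 176_400 * 2   # one 24/176.4 stereo stream, in bits/s
cd    = 16 *  44_100 * 2   # one 16/44.1 AirPlay-rate stream, in bits/s
total = 2 * hires + 4 * cd # two hi-res streams in, four CD-rate streams out
print(f"{total / 1e6:.1f} Mbit/s")  # about 22.6 Mbit/s over the air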

And still my headache continues.  Eric Clapton’s “Slowhand”, Deep Purple’s “Machine Head”, and Arkady Volodos playing Chopin, now permeate the soundscape.  Excuse me while I take a couple of Tylenol….


To read about AirPlay under OS X 10.11 (El Capitan), read this post.

Friday 7 November 2014

Tim’s Vermeer - II. DSD

Yesterday I wrote about Tim Jenison and his cool research on the topic of Vermeer and the photo-realism of the Dutch School, captured in a wonderful documentary called “Tim’s Vermeer” from Sony Classics.  I mentioned the extraordinary story of how Jenison constructed a plausible apparatus by which, he posited, Vermeer may have actually produced his revolutionary paintings.  Jenison, a graphics designer and non-artist, went further and used it to produce his own version of Vermeer’s “The Music Lesson”, something which, had he done it in 1650, might conceivably have elevated the name Jenison into the pantheon of the greats.

Jenison’s technique in effect had him work his way across the canvas, comparing each fragment of the painting with the corresponding fragment of an image of his subject produced by a camera obscura.  In essence, you consider a spot on your canvas, and compare what the obscura image suggests should go there with what is already there.  If you perceive a discrepancy, you can easily correct it.  Or modify it later if, maybe as a result of what you put in an adjacent area, you come up with something better - in contrast to, for example, how an ink-jet printer might approach the same job.  “The Music Lesson” is about two feet square, and it took Jenison something like 130 days to complete the painting.  The unbroken concentration required was enormous, and in turn nearly broke him.

While watching all this I was immediately struck by an interesting comparison with the Sigma Delta Modulator (SDM) used to produce a DSD data stream.  The SDM takes some input data, which may be an analog signal or a digital data stream, and sets about producing an output data stream.  Each time it needs to create an output value it sets about comparing two things - the input value, plus the input values that preceded it, and the previous output values.  It uses those to calculate what the new output value should be. 

This is like Jenison’s apparatus.  He goes to a place on the canvas, looks at what’s there, and compares that to what’s in the equivalent place on the original image.  He then uses his judgement to decide what he needs to paint in that particular place.  In a practical sense, it’s not that he seeks something right or wrong in absolute terms about the smudge of paint that needs to be applied, more a question of what looks best given the available comparables.  “Using his judgement” is a convenient phrase which undervalues the colossal amount of visual processing power that the brain is able to bring to bear on the task.

This illuminates one of the limitations of a true SDM in a DSD application.  DSD requires the output of the SDM to be either 1 or 0.  If, on balance, the SDM figures out that the best output value is actually 0.5, DSD doesn’t have that as an option.  It has to choose either 1 or 0.  The SDM architecture only allows us to look historically at both input and output data and use that to make our choice.  If both 1 and 0 are equally wrong, then it doesn’t matter which one we choose.  We just hope that the SDM can take the error fully into account when it comes to choosing the next output value, and the ones after that.  In fact the situation is always like that.  The SDM, in reality, always figures out a best output value somewhere between 0 and 1, and never comes up with an output value which is either exactly 1 or exactly 0.  And, unlike Jenison, the SDM doesn’t get to go back and do a make-over once it’s made its choice.  In that regard, the SDM is a bit like the ink-jet printer analogy.

Given that the ideal output value is never 1 or 0, and that we have to pick one or the other and hope for the best, what do we do if it turns out that we’d have been better off choosing the other one?  The answer is that, in the grand scheme of things, we end up with a combination of higher background noise and increased distortion.  But in the end, by designing our SDMs optimally, we do get those parameters down to the point where the overall performance is pretty darned good.

Actually, there is a way to get around that problem.  Let’s take an ordinary SDM whose output value is going to be either 1 or 0.  We can do some kind of “what if” calculation, and say “What if we chose a 1?” and calculate what the output value after that was going to be.  We can do the same thing for “What if we chose a 0?”.  In each case the SDM will choose either 1 or 0 for the subsequent output value.  What we are doing is, instead of selecting between two possible output values, 1 and 0, we are selecting between 4 possible output sequences, 10, 11, 00, and 01, which begin with either 0 or 1.  We get to choose which of the 4 gives us the best result, but this comes at the expense of a doubling of the amount of processing that we have to do.  Note that by choosing between those four possible values, we are only selecting the first bit, and not both bits of the sequence.  In other words, if we prefer 11 or 10, then all we are doing is selecting the single output value of 1, and if we prefer 01 or 00 all we are doing is selecting the single output value 0.  This process is called “Look-Ahead” for obvious reasons.
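
As a sketch of the idea (a toy only - real look-ahead modulators search much deeper trees, and the simple squared-error cost used here is my own simplification, not any production design), one level of look-ahead grafted onto a first-order modulator might look like this:

# Toy one-step look-ahead for a first-order SDM (illustration only).
# For each candidate first bit we also try both possible second bits,
# score the two-bit sequence, then commit ONLY the first bit.

def lookahead_bit(integrator, x_now, x_next):
    best_bit, best_cost = 0, float("inf")
    for b1 in (0, 1):
        i1 = integrator + x_now - (1.0 if b1 else -1.0)
        for b2 in (0, 1):                      # the "what if" second step
            i2 = i1 + x_next - (1.0 if b2 else -1.0)
            cost = i1 * i1 + i2 * i2           # accumulated error energy
            if cost < best_cost:
                best_cost, best_bit = cost, b1 # keep the first bit only
    return best_bit

def modulate(signal):
    bits, integ = [], 0.0
    padded = list(signal) + [0.0]              # pad so x_next always exists
    for n in range(len(signal)):
        b = lookahead_bit(integ, padded[n], padded[n + 1])
        integ += padded[n] - (1.0 if b else -1.0)
        bits.append(b)
    return bits

Note how the nested loop doubles the work for each extra bit of look-ahead - exactly the cost explosion, and the opportunity for shared-calculation and pruning tricks, described in the next paragraph.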

Look-Ahead can give seriously improved performance in both noise floor and distortion, but in order to achieve that, it turns out you need to be able to look a long way ahead, not just two or three bits.  In reality, 10 or 16 bits of look-ahead are required, and, at first sight, each additional bit of look-ahead doubles the amount of processing time required.  At that rate, 16 (or even 10) bits of look-ahead comes at a prohibitive processing cost.  However, like in most things mathematical, when smart people are motivated to look into it, solutions can often be found.  By analyzing how the mathematics of look-ahead works, you realize that the same calculations are being repeated in multiple branches of the look-ahead tree, and can find ways of only doing them once.  Additionally, some of the branches can be identified early on as being bad candidates for the final decision and can be pruned at an early stage.  Finally, it is possible to expand the basis upon which we decide how ‘good’ or ’bad’ a branch is, and thereby do a better job of eliminating the ‘bad’ ones early on.

Taken together, along with the relentless rate of progress of computer power, these “look-ahead” SDM architectures are on the verge of being implementable.  They may not have an immediate impact on consumer DACs, but they could make their presence felt in DSD studio equipment, where the lower noise floor and distortion may open the door to effective mixing and other rudimentary signal processing.

All stuff that Vermeer would never have thought of.  Or, for that matter, Tim Jenison.

Thursday 6 November 2014

Tim’s Vermeer - I. Tim’s Vermeer

I finally got my AppleTV replaced by Apple.  It’s not that it took them a long time - in fact they replaced it on the spot with no hassle at all - it’s more that it took me a long time to get round to hauling my ass to the Apple Store.  So now I have a brand new AppleTV 3, presumably without the known bugs that afflicted my original unit.  What we now need to find out is what difference that has made to my AirPlay network, given that the latter has performed flawlessly since I removed the AppleTV from it.

But that’s going to have to wait a short while, because there is something else I want to write about first.  You see, as I mentioned in a previous post, my AppleTV’s main role in life is to drive the TV set in my gym, to provide the boredom relief necessary to get me through my daily workout.  So this morning, I watched a documentary from Sony Classics called Tim’s Vermeer.  It was a profoundly interesting program, and I felt it necessary to write about it, and about the parallels I drew from it (which will be the subject of a separate post tomorrow).

Johannes (Jan) Vermeer was one of the Dutch Masters who painted in the latter part of the 17th Century.  His paintings, typical in general of the Dutch Golden Age, possess a quality we today refer to as ‘photo-realism’.  They possess an accuracy of perspective, and of illumination, that we today take for granted in photographs, but which was quite unknown in Vermeer’s time.

Tim Jenison is an American entrepreneur who built a successful career in software for the TV and video industries.  Although he is a graphic artist, he is not trained in any way as a painter.  Jenison, like many people before him, was deeply intrigued by how the Dutch Masters, and Vermeer in particular, were able to make the leap in perception which they did.

As an expert in the field of video, he came to appreciate that Vermeer’s paintings differed from many more modern artworks in a key aspect.  The way he saw it, the works of the Dutch School looked more like video stills than photographs.  This would only come about if they were ‘copied’ from life, rather than created independently where you can continuously modify the result if it isn’t exactly what you want, even to the extent that what you want is no longer a strictly accurate representation.  Many experts in the field have postulated that the Dutch School arose due to the concurrent development of the “camera obscura”.  This would throw an image of a real-life scene onto a screen or wall in a darkened room, and the artist could paint from that.  Vermeer’s “video-still” brand of photo-realism could arise if he painted by ‘copying’ what he saw on a camera obscura image.

Books have been published on this topic (the so-called Hockney-Falco thesis, named for the British artist David Hockney and the American physicist Charles Falco), which made an impression on Jenison.  The interesting thing is that none of Vermeer’s works show any evidence of the sort of procedures an artist would presumably have had to follow if that were the case.  A camera obscura (Latin for “dark room”) is a low-light environment, and not one at all conducive to painting a masterpiece.  Therefore, if an artist were to use one as the tool to throw an image directly onto a canvas that he would then paint over, it is likely that he would record the basic framework of the image in the camera obscura, and finish it off elsewhere.  However, X-ray analysis of Vermeer’s works shows no evidence of any such structures beneath the final layers of paint.  His works appear to have been deposited in their final form directly upon the canvas, and with extraordinary precision in some critical aspects.

Intrigued by these findings, Jenison set about his own experiments, to see what would happen in practice if you tried to paint Vermeer-style art from camera obscura images, and from there to imagine how Vermeer might have responded to these challenges.  One of the obvious problems is that the image in a camera obscura is upside-down and back-to-front.  Although the latter is not too much of a hindrance, the human brain - and the artist’s eye - have a lot more trouble interpreting an inverted image.  Jenison realized that using a mirror would be the simplest way to correct for that problem and set about experimenting with one.

He immediately found an intriguing solution.  He placed a small canvas flat on a table which he positioned directly in front of an inverted photograph.  He then placed a small mirror directly above the canvas, at the same distance from the canvas as from the photograph.  Peering at the canvas from above, a viewer would see the canvas, except for the area where the mirror impinged, where instead he would see a reflection of a small portion of the photograph.  Both photograph and canvas would be in focus.  You could then use the setup as a tool to draw a replica of the photograph, a bit like dividing a picture into a grid of squares like they taught you in high school.

As a graphics designer, Jenison saw this configuration as a 17th century version of an editing window in which you could place the original and the copy side-by-side for comparison purposes.  In particular, it would enable very precise colour matching, which is otherwise rather more challenging than you might imagine, since the human eye/brain combination has very poor absolute colour memory.  Jenison then put this theory to the test by attempting to copy a simple B&W portrait photo in oils.  As a non-painter, this would be his first ever attempt at an oil painting.  You really need to see the film itself to appreciate what an incredible job he was able to do.

Essentially, what his technique does is to nibble away at the whole image, by adjusting his viewpoint so that, bit by bit, the entire image passes by the interface between the mirror and the canvas, allowing him to compare and replicate the exact tint of the applied paint at each point.

Armed with a primitive (but authentic) camera obscura device, his mirror, and his B&W portrait in oil, Jenison visited David Hockney in London, plus a couple of other authorities, to see if there was any interest in the notion that this might have been the technique that Vermeer himself had used.  The reception he received was quite encouraging.  Also, while in London, he was granted special dispensation to visit Buckingham Palace and spend a half hour looking at his personal favourite Vermeer painting, “The Music Lesson”.

He came away convinced of what the next step should be.  He would attempt to use the self same techniques to try and replicate Vermeer’s “The Music Lesson”.  Actually, he would not so much try to replicate the painting itself, rather he would replicate the method by which it was created in the first place.  Given that he was not in any way a painter, and Vermeer is a revered master, this would be a major challenge.

Jenison was quite thorough in his approach.  Like Vermeer, he chose to make his own paints by grinding his own pigments and mixing them with oils.  He made his own furniture when he could not buy authentic originals.  He cast and ground his own lenses to use in the camera obscura.  He recreated as exactly as possible the room in which the original painting was set, the clothes worn by the subjects, the decoration and the furnishings - which included a Viola da Gamba, on which he gave a rustic and rather baroque rendition of the iconic riff from “Smoke On The Water”.

I won’t elaborate on the outcome, save to say that it all comes to a fitting conclusion.  Along the way, a couple of extraordinary things emerge.  One of the first things Jenison observes when he begins his marathon paint job is the appearance of chromatic aberration, caused by the fact that he has obliged himself to use authentic glass and lens designs in his camera obscura.  If he is going to be true to his aim of objective authenticity, he must include the faint blue blurs which are visible at certain high contrast edges.  But, looking at high-magnification images of the original Vermeer, he is astonished to find that it, too, has rendered the same blue blur, in the same places.  There is no reason to believe that Vermeer understood chromatic aberration.

More dramatically, though, part way through the painting, Jenison discovers that his lens also shows some mild pincushioning, a fact that only becomes evident due to the unerring optical accuracy of his method.  Again, and quite astonishingly, the original Vermeer is shown to exhibit the very same pincushion distortion, in such a way as to suggest that not only did he follow Jenison’s method, but also the precise (and highly practical) order in which Jenison implemented it.  Unfortunately, the narrator did not address the question of whether or not this pincushioning had ever been detected by experts prior to Jenison’s work.  That would have been interesting to know.

I found the whole thing to be wonderfully entertaining and informative.  Since the program was produced by Penn and Teller - with Penn Jillette doubling as presenter and narrator, and Teller directing - one can comfortably eliminate the notion that the wool is being pulled over our eyes in the service of a good yarn.  Some limited follow-up research on my part shows that while Jenison’s theory does indeed receive a great deal of credence - seriously unusual in itself for the work of a rank amateur in the rarefied world of fine art - there is little to support it in terms of the historical record.  Vermeer is not known to have had any particular interest in optics, and his personal effects after his death were not found to include a camera obscura or anything similar.

Tomorrow I will explain what any of this has to do with audio.

Tuesday 28 October 2014

AirPlay Still Good

It has been a week now since I put my AppleTV in its box and took it back to the Apple Store.  Unfortunately, I was told I needed an appointment to receive an audience with a “Genius” in order to get it seen to, and nobody was available.  Since this involves a 45-minute drive through the West Island’s lethal road construction, I haven’t been back yet.

The upside is that I have had a week without an AppleTV in my system, and during that week AirPlay playback has been flawless, provided I followed the (revised for Yosemite) procedure I described last week.  That’s three systems - a 2014 RMBP, a 2013 bare-bones Mac Mini, and a 2009 MBP.  All running Yosemite, all working just fine, first time, every time, with AirPlay.

I thought that was worth reporting.


I now have my new AppleTV from the Apple Store.  To find out what happened when I installed it in the system, read on here.

Thursday 23 October 2014

AppleTV

I have an AppleTV 3.  It is an incredibly buggy device.  As an audio device it has been a source of frustration for me since day one, to the point where I now no longer use it - ever - as part of my BitPerfect test regimen.  It is relegated to use in my Gym, where I watch YouTube or Netflix with it while working out.

The AppleTV has an annoying habit of dropping its WiFi connection periodically.  Actually, it doesn’t so much drop its connection - it is more like its entire WiFi system shuts off.  This happens after between a few minutes and a few hours of use, and has persisted across several firmware updates.  The solution is to re-boot it.  Sometimes it takes three or four re-boots.  Rarely do I get through a solid hour without it failing.

Today it appears to have given up for good.  I can’t get it to come back up at all.  I have just found out that there is an Apple recall program in place and that my AppleTV is one of the affected units, so it is now boxed up and ready to go back to Apple.  Let's see what happens.

So, while dealing with that, I got round to thinking a little.  If you have read my recent posts on AirPlay, you will have noted that I spent a few days exhaustively testing AirPlay with BitPerfect under Yosemite and iTunes 12.0.1 with mixed results.  I was using both my AirPort Express and the AirPlay receiver in my Classe CP-800 as the target AirPlay device.  It may not have come across in my post, but my test experience seemed to go through two phases.  The first was an initial three-hour phase during which nothing seemed to work at all.  This was followed by a lengthy period during which AirPlay seemed to function with at least some semblance of predictability, as reported in my post, a situation which still persists this morning.

Here is what is going through my mind.  Is it possible that when my AppleTV was active on the network I was having uncontrollable AirPlay problems?  And that as soon as its WiFi transceiver ‘died’ (causing it to drop off the network) things started to play more predictably?  As I write this, it occurs to me that whenever the AppleTV is active on my network, my RMBP seems to want to select it as its ‘default’ AirPlay device whenever it can, even though I never want to use it in that role and therefore never - ever - select it.  Hmmmm….


Read on here....

Tuesday 21 October 2014

Adventures in AirPlay

I have been working hard on AirPlay to try to understand what it takes to get BitPerfect to work smoothly with it under the combination of Yosemite and iTunes 12.0.1.  Unfortunately I don’t have a definitive answer for you, but I am at least starting to get a handle on its behaviour.  I thought you might be interested to read some of this.

The problem is, it either works or it doesn’t, and I can’t figure out why.  There are two main modes of “doesn’t work”.  One is where BitPerfect’s menu bar icon stays black.  This one happens rarely and generally only at the first attempt.  It means that BitPerfect cannot access the AirPlay Device.  The other is where the icon starts green, goes briefly black, and then stays green but with no music audible.  This means that BitPerfect is streaming music to the AirPlay Device, which as far as BitPerfect is concerned is responding in the way it normally would.  I have been wrestling with every combination of the various settings and sequences that might impact AirPlay behaviour but despite some successes, nothing has proven to be the magic bullet.

My first potential “Aha!” moment was when I got to the point where iTunes would throw up a message to the effect that “I can’t find the Airport Express” and offer me two options, “Cancel” or “Continue using the Computer Speaker”.  The secret seems to be to select “Continue using the Computer Speaker”.  “Cancel” is the wrong choice.  I spent some time trying to determine what would cause this message to appear, but after a while I just stopped seeing it, and I haven’t actually seen it now since early yesterday.  So that remains a puzzle.

The next interesting observation is a significant deviation from the setup that we have been recommending since Mountain Lion and Mavericks.  iTunes has its own little AirPlay icon (next to its volume control) where you can select between the various AirPlay devices and “Computer”.  It used to be that it was necessary to select the desired AirPlay device, but now I am finding that, when using BitPerfect, AirPlay never works unless “Computer” is selected.

Yesterday, using my Mac Mini, it appeared that the required solution was to select AirPlay as the default system output device using Audio MIDI Setup, then launch BitPerfect, and have BitPerfect launch iTunes (whether automatically or manually), then select “Computer” as the output device from the iTunes AirPlay control.  But when I came to confirm my findings this morning, I found that it didn’t seem to matter whether I set the default system output device to AirPlay or to something else.  All that matters is that I set the iTunes AirPlay control to “Computer”.  You must still select the desired AirPlay device (if you have more than one) in Audio MIDI Setup.  I even experimented with connecting the Mac Mini to the network by Ethernet (its normal configuration) or by WiFi.  It didn’t make any difference.

While all this was happening, on my RMBP (which had Yosemite and iTunes 12.0.1 installed) it seemed that AirPlay would always work first time.  I have a second, older MBP and so I installed Yosemite and iTunes 12.0.1 on that machine also.  This morning I have added that to the mix.  It seems that both MBPs have no problems at all getting BitPerfect and AirPlay to work together, provided I set the iTunes AirPlay control to “Computer”.  For the most part the Mac Mini works too.  However, it took three or four attempts, restarting BitPerfect and iTunes each time in between, before it started working consistently.  With each of these Macs, once AirPlay starts working, it seems to stay working until you stop playback for a while, or quit iTunes/BitPerfect.

So there you have a summary of a couple of days of intensive AirPlay experimentation.  Set the iTunes AirPlay control to “Computer” and it will either work or it won’t.  If it doesn’t, then quit BitPerfect and iTunes and start again.  Rinse and repeat as necessary.  You may be lucky in that you have a Mac which is pre-disposed to work well with AirPlay (like my two MBPs), or you may be unlucky in that your Mac is not inclined to play ball (like my Mac Mini).  It’s all I have at the moment, I’m afraid.  I have no idea whether or not you will see the same behaviour.  I will continue my experiments, albeit at a less intense level, as I (a) am running short of good ideas, and (b) have other things piling up on my plate.


... As an important follow-up, here are some thoughts and observations regarding my AppleTV.

Monday 20 October 2014

Yosemite / iTunes 12.0.1

We have been working on evaluating BitPerfect on the latest version of Yosemite / iTunes 12.0.1, and we are coming up with a mixed bag of results.  For the most part it is working quite well, but there are two areas of concern for us for the moment.

The first is with AirPlay.  I have two Macs right now that have been updated to the new configuration.  The first is a RMBP and the second is a headless Mac Mini.  I seem to have no problems getting AirPlay to work on the RMBP, but thus far not with the headless Mac Mini.  I have no idea what the problem is.  I am currently updating a second, older MBP, and will see what happens with that one in due course.

The second issue is with the Console App.  We use the Console Log as a valuable debugging tool, but unfortunately, under Yosemite, BitPerfect is flooding the Console with a raft of unhelpful messages.  In effect, this is amounting to a Denial-of-Service attack on the Console App!!  While this seems to have no obvious impact on BitPerfect's performance, it is rendering our primary diagnostic tool almost ineffective.

More on all this as developments arise ....


UPDATE 21 Oct 2014: see “Adventures in AirPlay” above for my follow-up findings.

Monday 6 October 2014

Our Own League Of Nations

One of the useful things about Apple's App Store is that they give you some very detailed breakdowns of product sales, including by Country.  To date, BitPerfect has been sold in 71 different countries, which is pretty amazing when you think about it.  And it was just this week that our first customer from Pakistan joined the BitPerfect community, extending the list now to 72.  [Ask yourself - can you even name 72 Countries off the top of your head?]  So, whoever you are - if you are reading this - I would like to extend a warm welcome to the sole representative of Pakistan to the BitPerfect Community!

If you are interested, here are the 72 Countries:
Japan
USA
UK
Canada
France
Germany
Netherlands
Australia
Italy
Hong Kong
Russia
Switzerland
Sweden
Taiwan
Belgium
Denmark
China
Norway
Singapore
Thailand
Korea
Poland
New Zealand
Spain
Austria
Finland
Mexico
Brazil
Turkey
Greece
Chile
Portugal
Malaysia
South Africa
India
Ireland
Hungary
Czech
Indonesia
Luxembourg
Argentina
Israel
Ukraine
Romania
Croatia
Philippines
Venezuela
Slovenia
Colombia
Estonia
UAE
Slovakia
Peru
Bulgaria
Uruguay
Lithuania
Belarus
Latvia
Kazakhstan
Macau
Macedonia
Saudi Arabia
Malta
Ecuador
Kuwait
Guatemala
Costa Rica
Dominican
Nicaragua
Namibia
Cyprus
... and ...
Pakistan!

Monday 22 September 2014

The Lucy Show


To the extent that I can claim to be a qualified anything, it would be a physicist.  I am not a biologist, geneticist, anthropologist, or theologian for that matter.  But that doesn’t stop my mind from wandering into these areas from time to time.  And recently, I have been thinking a bit about evolution.

While working out on my cross-trainer I like to watch a TV show to alleviate the boredom.  Preferably something that will interest me sufficiently to extend my workout to the point where it actually does me some good.  Recently, I saw a show that made some unconvincing point or other about “Lucy” - the Australopithecus afarensis who supposedly lived some 3-odd million years ago.  The gist of the program seemed to be founded on the notion that Lucy was a common ancestor of all humanity, and that we are therefore all her direct descendants.  It is all speculation, of course, but it got me to thinking about what it means to be descended from someone or something.  Because, unless you buy into some sort of creation theory, we all, ultimately, have to be descendants of something that first appeared in the primordial ooze.  And I got to thinking about that.

The first thing that strikes me is the notion of being descended from someone.  Usually, long-chain bloodlines are traced down from a given person, and not up to him or her.  This is because the historical record is light on the general, and heavy on the specific.  So, if you want to trace your ancestry, you probably won’t have to go back very far before you strike out, with apparently no traces remaining of any records regarding certain individuals.  In my case, my family tree peters out after only three or four generations.  Perhaps not a bad thing, I sometimes think.

But one thing we can be pretty sure of, and that is that every human being who ever lived had exactly one biological mother and one biological father (OK, all but one, if that is a point you want to argue).  So, starting with myself, I can say with certainty that I had two parents, four grandparents, eight great-grandparents, sixteen great-great grandparents, and so on.  I may never know who they all were, but I know they did exist.

You can imagine a hypothetical map of the world on your computer screen, with a slider that controls the date going back as far in history as you want.  As you move the slider, a pixel lights up showing the whereabouts of every one of your direct ancestors who was alive on the corresponding date.  If such a thing could ever exist, I wonder what it would show.  My father is a Scot and my mother Austrian, so I imagine that for the first few hundred years or so mostly Scotland and Austria would be lit up.

For my most recent ancestors, it is a fair assumption that they were all distinct individuals.  In other words, that there was no cross-pollination (if I may put it that way), no matter how far removed the individuals were in my family tree.  Realistically, though, that approximation is going to falter once you include a sufficient number of generations.  Therefore, as we go further back in time, the net number of my ancestors stops growing at an exponential rate.  But while the number of my ancestors grows with each generation we step back, the total number of humans on the planet shrinks.  The growing ancestral base and the shrinking overall population must surely meet somewhere.
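
Just to put some illustrative numbers on the argument, here is a little Python sketch.  The 25-year generation length and the rough historical population figures are assumptions of mine, chosen purely for illustration:

    # Nominal ancestor count (doubling per generation, ignoring overlap)
    # versus an assumed world population at a few dates in history.

    populations = {          # very rough world population estimates (assumed)
        1500: 4.5e8,
        1000: 3.0e8,
        500:  2.0e8,
    }

    GEN_YEARS = 25           # assumed average generation length
    START_YEAR = 2014

    def nominal_ancestors(year):
        """Ancestor slots at a given date, doubling every generation."""
        generations = (START_YEAR - year) // GEN_YEARS
        return 2 ** generations

    for year in sorted(populations, reverse=True):
        slots, pop = nominal_ancestors(year), populations[year]
        print(f"{year:>4}: {slots:.3g} ancestor slots vs ~{pop:.3g} people alive")

By the year 1000 the doubling has already produced about a trillion ancestor “slots” against a population of a few hundred million, so the same individuals must each be filling vast numbers of slots - which is exactly why the exponential growth has to stop.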

It seems reasonable that the same line of logic should apply to us all.  In other words, if we go back far enough in time, we should find that we are all descended from the same group of humans, no matter how disparate geographically, culturally, or any-other-ally.  But this group would not comprise all of humanity at that point in time.  Some of those individuals will die childless.  Others will bear children who will die childless, and so forth.  So the entire human race at that point will comprise two groups of people.  Those who are the direct ancestors of every living person in the world today, and those whose bloodlines died out completely in the intervening millennia.

So the questions that I arrived at were these.  What would the expected ratio be of ancestors to non-ancestors?  Would we expect it to be a relatively large percentage or a relatively small percentage?  And in particular, if the latter, what would it take for that small percentage to actually be one person?  Is that even possible?  I have never seen this line of thinking expanded upon, but one thing I have learned is that whenever something like that crosses my mind, it has always previously crossed the mind of someone who is a proper expert in the field.  Maybe one day I’ll get to hear that expert’s opinion.  But, in the meantime, it seems highly improbable to me that the ancestral percentage would even be a minority, let alone a minority of one.
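
As for the second group - the bloodlines that die out completely - a crude way to get a feel for it is what mathematicians call a Galton-Watson branching process.  Here is a toy Python simulation.  Be warned that the offspring distribution (Poisson, with a mean just above replacement) is purely an assumption of mine, and that a bloodline surviving is not the same thing as its founder becoming an ancestor of everyone alive today:

    import math
    import random

    def poisson(lam):
        """Knuth's method for a Poisson-distributed random integer."""
        threshold = math.exp(-lam)
        k, p = 0, 1.0
        while True:
            p *= random.random()
            if p <= threshold:
                return k
            k += 1

    def lineage_survives(mean_children=1.05, generations=500, cap=1_000):
        """Follow one person's descendants; True if the line never dies out."""
        alive = 1
        for _ in range(generations):
            if alive == 0:
                return False
            if alive >= cap:               # effectively established for good
                return True
            alive = sum(poisson(mean_children) for _ in range(alive))
        return alive > 0

    trials = 1_000
    survived = sum(lineage_survives() for _ in range(trials))
    print(f"{survived / trials:.1%} of bloodlines survived")

With these made-up numbers, roughly nine bloodlines out of ten eventually peter out - which at least suggests that the “died out completely” group need not be a small one.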

The next point concerns what you might term the crossing of the man-ape barrier.  This troubles a lot of people.  Scientists dig up ancient skeletons and fossils and assign them to categories such as human, proto-human, and ape.  Actually, they are a lot more scientific about it, but you get my drift.  The theory of evolution provides a mechanism or road-map for the development of ape into proto-human and proto-human into human, but has little to say on the specifics.  Meanwhile, all we have in our historical record are a seriously limited number of archaeological specimens that we can do little with other than to fit them into a timeline.

The transformation of proto-human into human took - I don’t know - let’s call it a million years.  Yet all we have are isolated archaeological specimens - a proto-human here, a human there.  We ordinary folk look at them - and also at the artists’ renderings of what the original individuals may have looked like - and many of us have a hard time grasping how it is at all possible for one to become the other.  Of course, if we had a perfect fossil record - say, one specimen for every thousand years over the span of that million years - we might be able to understand and communicate convincingly how the development played out.  But we don’t.  And so we can’t.  We can just make guesses - albeit highly-informed and very well-educated guesses.

These things happened over timescales so vast that all of recorded history is just a blink right at the end.  It is wrong to think of evolution as a set of stable eras characterized by specific inhabitant species, separated by periods of transition.  Certainly major transformative periods did occur, such as those at the Triassic/Jurassic, Jurassic/Cretaceous, and Cretaceous/Tertiary (K/T) boundaries, but in general, for the last 66 million years, evolution has been a continuous thing.  We are evolving today as a species at least as quickly as - if not orders of magnitude faster than - our ancestors did as they transitioned from proto-human to human.  We’ve just not been around for long enough to be able to observe it.

Let’s close with my computerized ancestral map, and slide the time dial back to the age when proto-humans were evolving into humans.  Assuming that all humanity does not derive from The Lucy Show after all, my ancestral map will become a map of proto-human occupation.  Slide it back a bit more and it will reflect ape occupation.  Slide it back even further - and then what?  At that point, as far as I can tell, anything science has to offer is speculative at best.  Our mammalian ancestors were already around in the Cretaceous period.  So, although we pretty much certainly were not descended from dinosaurs, it seems likely that some of our ancestors will have been eaten by them (although if a T-Rex ate some of my relatives, it would perish from alcohol poisoning).  On the other hand, the well-known Dimetrodon - a lizard-like creature characterized by a spiny sail along its back, and recognized by five-year-olds everywhere - is a close relative, and quite possibly an ancestor, of today’s mammals.

At the far end of its travel, my ancestral map ends up in the primordial soup, presumably as a population of bacteria.  But if mine ends up there, then so does yours!…

Thursday 18 September 2014

OS X 10.9.5

While waiting for the results of the Scottish Independence referendum to trickle out, I installed the latest OS X 10.9.5 and gave it a quick workout.  So far so good.  I can see no reason why BitPerfect Users should not upgrade.

Of course, like the Scottish Referendum, it may not look so rosy by tomorrow. :)

Friday 12 September 2014

Has DSD met its Waterloo?

In May of 2001, Stanley Lipshitz and John Vanderkooy of the University of Waterloo, in Canada, published a paper titled “Why 1-bit Sigma-Delta Conversion is Unsuitable for High-Quality Applications”.  In the paper’s Abstract (a kind of introductory paragraph summing up what the paper is all about) they made some unusually in-your-face pronouncements, including “We prove this fact.”, and “The audio industry is misguided if it adopts 1-bit sigma-delta conversion as the basis for any high quality processing, archiving, or distribution format…”.  DSD had, apparently, met its Waterloo.

What was the basis of their arguments?  Quite simple, really.  They focussed on the problem of dither.  As I mentioned in an earlier post, with a 1-bit system the quantization error is enormous.  We rely on dither to eliminate its distortion components, and we can prove mathematically that TPDF dither at a depth of ±1LSB is necessary to do the job.  But with a 1-bit system, ±1LSB exceeds the full modulation depth.  Applying ±1LSB of TPDF dither to a 1-bit signal will subsume not only the distortion components of the quantization error, but also the entire signal itself.  Lipshitz and Vanderkooy study the phenomenon in some detail.
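
If you want to put numbers on that, here is a quick Python sketch.  It assumes a 1-bit quantizer with output levels of ±1, so that 1LSB - the step between the two levels - is 2, and TPDF dither of ±1LSB is the sum of two uniform variates each spanning ±0.5LSB:

    import random

    LSB = 2.0                            # the step between the levels -1 and +1

    def tpdf_dither():
        """TPDF dither of +/-1 LSB: two uniform +/-0.5 LSB variates summed."""
        return random.uniform(-LSB/2, LSB/2) + random.uniform(-LSB/2, LSB/2)

    signal = 0.5                         # a modest, half-scale signal
    peak = max(abs(signal + tpdf_dither()) for _ in range(100_000))
    print(f"peak quantizer input: {peak:.2f} (full scale is 1.0)")

The dithered input routinely reaches two and a half times full scale, so a quantizer with only the levels ±1 at its disposal has no choice but to clip - which is precisely the overload problem Lipshitz and Vanderkooy describe.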

They then go on to characterize the behaviour of SDMs.  SDMs and noise shapers are more or less the same thing.  I described how they work a couple of posts back, so you should read that if you missed it first time round.  An SDM goes unstable (or ‘overloads’) if the signal presented to the quantizer is so large as to cause the quantizer to clip.  As Lipshitz and Vanderkooy observe, a 1-bit SDM must clip if it is dithered at ±1LSB.  In other words, if you take steps to prevent it from overloading, then those same steps will have the effect that distortions and other unwanted artifacts can no longer be eliminated.

They also do some interesting analysis to counter some of the data shown by the proponents of DSD, which purport to demonstrate that by properly optimizing the SDM, any residual distortions will remain below the level of the noise.  Lipshitz and Vanderkooy show that this is an artifact of the measurement technique rather than a property of the data, and that if the signal is properly analyzed, the actual noise levels are found to be lower but the distortion levels are not, and do in fact stand proud of the noise.

Lipshitz and Vanderkooy do not suggest that SDMs themselves are inadequate.  The quantizer at the output of an SDM is not constrained to being only a single-bit quantizer.  It can just as easily have a multi-bit output.  In fact they go on to state that “… a multi-bit SDM is in principle perfect, in that its only contribution is the addition of a benign … noise spectrum”.  This, they point out, is the best that any system, digital or analog, can do.

The concept of a stable SDM with a multi-bit output is what underlies the majority of chipset-based DAC designs today, such as those from Wolfson, ESS, Cirrus Logic, and AKM.  These types of DAC upsample any incoming signal - whether PCM or DSD - using a high sample rate SDM with a small number of bits in the quantizer - usually not more than three - driving a simplified multi-bit analog conversion stage.
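
To illustrate the structure - and only the structure, since this is a textbook first-order toy rather than anything resembling an actual Wolfson or ESS design - here is a minimal sigma-delta modulator in Python with a selectable quantizer width:

    import math

    def quantize(v, levels):
        """Round v to the nearest of `levels` uniform levels on [-1, +1]."""
        grid = [-1 + 2 * k / (levels - 1) for k in range(levels)]
        return min(grid, key=lambda g: abs(g - v))

    def sigma_delta(samples, bits):
        """First-order SDM: accumulate the quantization error, feed it back."""
        integrator, y, out = 0.0, 0.0, []
        for x in samples:
            integrator += x - y
            y = quantize(integrator, 2 ** bits)
            out.append(y)
        return out

    fs = 64 * 44_100                     # DSD64-style oversampled rate
    x = [0.5 * math.sin(2 * math.pi * 1000 * n / fs) for n in range(2000)]

    print(sorted(set(sigma_delta(x, bits=1))))   # just [-1.0, 1.0]
    print(len(set(sigma_delta(x, bits=3))))      # up to 8 distinct levels

With bits=1 the output is the familiar DSD-style bitstream; with bits=3 the quantization error per sample is a fraction of the size, which is what makes the loop so much easier to keep stable and properly dithered.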

Lipshitz and Vanderkooy’s paper was of course subjected to counter-arguments, mostly (but not exclusively) from within the Sony/Philips sphere of influence.  This spawned a bit of thrust and counter-thrust, but by and large further interest within the academic community dried up within a very short time.  The prevailing opinion appears to accept the validity of Lipshitz and Vanderkooy from a mathematical perspective, but is willing to also accept that once measures are taken to keep any inherent imperfections of 1-bit audio below certain presumed limits of audibility, 1-bit audio bitstreams can indeed be made to work extremely well.

From a theoretical perspective, we have reached the point where our ability to actually implement DSD in the ADC and DAC domains is more of a limiting factor than our ability to understand the perfectibility (or otherwise) of the format itself.  Most of the recently published research on 1-bit audio focuses instead on the SDMs used to construct ADCs.  These are implemented in silicon on mixed-signal ICs, and are often quite stunningly complex.  Power consumption, speed, stability, and chip size are the areas that interest researchers.  From a practicality perspective, 1-bit audio has broad applicability and interest beyond the limited sphere of high-end audio, which alone could not come close to justifying such an active level of R&D.  Interestingly though, papers on DACs remain few and far between.

For all that, the current resurgence of DSD which has swept the high-end audio scene grew up after the Lipshitz and Vanderkooy debate had blown over.  Clearly, the DSD movement did NOT meet its Waterloo in Lipshitz and Vanderkooy.  Its new-found popularity is based not on arcane adherence to theoretical tenets, but on broadly-based observations that, to many ears, DSD persists in sounding better than PCM.  It is certainly true that the very best audio that I personally have ever heard was from DSD sources, played back through the Light Harmonic Da Vinci Dual DAC.  However, using my current reference, a PS Audio DirectStream DAC, I do not hear any significant difference at all between DSD and the best possible PCM transcodes.

There is no doubt in my mind that we haven’t heard the last of this.  We just need to be true to ourselves at all times and keep an open mind.  The most important thing is to not allow ourselves to become too tightly wed to one viewpoint or another to the extent that we become blinkered.

Thursday 11 September 2014

iTunes 11.4

I have been testing the latest iTunes update (11.4) and it seems to be working fine. Initially I was concerned that it had caused my RMBP to grind to a halt and require a re-boot, but this did not repeat on my Mac Mini so I am thinking that was an unrelated issue. It has been working fine on both Macs since then.

It looks like BitPerfect users can install it with confidence.

DSD64 vs DSD128

As a quick follow-up to my post on noise shaping, I wanted to make some comments on DSD playback.  DSD’s specification flies quite close to the edge, in that its noise shaping causes ultrasonic noise to begin to rise almost immediately above the commonly accepted upper limit of the audio band (20kHz).  This means that if DSD is directly converted to analog, the analog signal needs to go through a very aggressive low-pass filter which strips off this ultrasonic noise while leaving the audio frequencies intact.  Such a filter is very similar in its performance to the anti-aliasing filters required in order to digitally sample an analog signal at 44.1kHz.  These aggressive filters almost certainly have audible consequences, although there is no widely-held agreement as to what they are.

In order to get around that, the playback standard for SACD players provides for an upsampling of the DSD signal to double the sample rate, which we nowadays refer to as DSD128.  With DSD128 we can arrange for the ultrasonic noise to start its rise somewhere north of 40kHz.  When we convert this to analog, the filters required can be much more benign, and can extend the audio band’s flat response all the way out to 30kHz and beyond.  Many audiophiles consider such characteristics to be more desirable.  By the way, we don’t have to stop at DSD128, nor do we have to restrict ourselves to 1-bit formats, but those are entirely separate discussions.
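
To put some illustrative numbers on that, assume the textbook noise transfer function (1 - z^-1)^n, whose magnitude at frequency f is |2 sin(πf/fs)|^n.  Actual SACD modulators use more elaborate noise shaping than this, but the way it scales with sample rate is the same.  A short Python sketch:

    import math

    def ntf_db(f, fs, order=5):
        """Magnitude (in dB) of the (1 - z^-1)^order noise transfer function."""
        return 20 * order * math.log10(2 * math.sin(math.pi * f / fs))

    for name, fs in (("DSD64", 64 * 44_100), ("DSD128", 128 * 44_100)):
        readings = " ".join(f"{f // 1000}kHz: {ntf_db(f, fs):+.0f}dB"
                            for f in (20_000, 40_000, 80_000))
        print(name, readings)

At any given frequency the DSD128 noise sits some 30dB lower (fifth order times 6dB per octave); equivalently, the whole noise “knee” moves up an octave, which is what buys you the gentler playback filter.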

If that was all there was to it, life would be simple.  But it isn’t.  The problem is that the original DSD signal (which I shall henceforth refer to as DSD64 for clarity) still contains ultrasonic noise from about 20-something kHz upwards.  This is now part of the signal, and cannot be unambiguously separated from it.  If nothing is done about it, it will still be present in the output even after remodulating it to DSD128.  So you need to filter it out before upsampling to DSD128, using a filter with similar performance to the one we just discussed - and trashed - as a possible solution in the analog domain.

The saving grace is that this can now be a digital filter.  There are three advantages that digital filters have over analog filters.  The first is that they approach very closely the behaviour of theoretically perfect filters, something which analog filters do not.  This makes the design of a good digital filter massively easier as a practical matter than that of an equivalent analog filter.  The second advantage is that digital filters have a wider design space than analog filters, and some performance characteristics can be attained using them that are not possible using analog filters.  The third advantage is that analog filters are constructed using circuit elements which include capacitors, inductors, and resistors - components which high-end audio circuit designers will tell you can (and do) contribute to the sound quality.  Well-designed digital filters have no equivalent sonic signatures.
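
As an illustration of how routine this kind of filter has become, here is a sketch using the scipy.signal library.  The tap count, cutoff, and window are illustrative choices of mine, not anything taken from the SACD specification:

    import numpy as np
    from scipy import signal

    fs = 64 * 44_100                     # the 2.8224MHz DSD64 bit rate

    # A linear-phase FIR, flat to ~20kHz, deeply attenuating by ~28kHz
    taps = signal.firwin(4001, cutoff=24_000, window=("kaiser", 14.0), fs=fs)

    # Spot-check the response at a few frequencies of interest
    w, h = signal.freqz(taps, worN=[1_000, 20_000, 40_000, 100_000], fs=fs)
    for f, mag in zip(w, np.abs(h)):
        print(f"{f / 1000:6.1f} kHz: {20 * np.log10(mag + 1e-12):+7.1f} dB")

A 4001-tap filter is well within the reach of modern DSP hardware even at these data rates, and - being exactly linear phase - it has no analog equivalent at all, which is the “wider design space” point in action.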

So - good news - the unwanted ultrasonic noise can be filtered out digitally with less sonic degradation than we could achieve with an analog filter.  Once the DSD64 is digitally filtered, it can be upsampled to 5.6MHz and processed into DSD128 using a Sigma-Delta Modulator (SDM).  It is an unresolved question in digital audio whether an SDM introduces any audible sonic degradation.  Together with the question of whether a 1-bit representation is adequate for the purposes of high-fidelity representation of an audio signal, these are the core technical issues at the heart of the PCM-vs-DSD debate.

So the difference between something that was converted from DSD64 to DSD128, and something that was recorded directly to DSD128, is that the former has been filtered to remove ultrasonic artifacts adjacent to the audio frequency band, and the latter has not.  If DSD128 sounds better than DSD64 it is because it dispenses with that filtering (and re-modulation) requirement.  Such arguments can be further extended to DSD256, DSD512, and the like.  The higher the 1-bit sampling frequency, the further the onset of ultrasonic noise can be pushed away from the audio band, and the more benign the filtering can be to remove it for playback.

It is interesting to conclude with the observation that, unlike the situation with 44.1kHz PCM, DSD64 allows the encoded signal to retain its frequency spectrum all the way out to 1MHz and beyond, if you want it to.  By contrast, 44.1kHz PCM requires the original analog signal itself to be strictly filtered to eliminate all content above a meager 22.05kHz.  DSD64 retains the full bandwidth of the signal, but allows it to be submerged by extremely high levels of added noise.  In the end you still have to filter out the noise - and any remaining signal components with it - but at least the original signal is still present.