Friday, 11 April 2014

Sales & Marketing

I overheard a conversation the other day.  I know it’s rude to listen in on other people’s conversations, but some people talk so loud it’s like they want to be overheard.  Anyway, one fellow was holding forth about the way a certain product was being marketed.  He advocated certain solutions to address these concerns, involving, among other things, a brick wall, a party of uniformed gentlemen sporting rifles, and a group of unhappy former Marketing executives.

It is sad that the term “Marketing” engenders a rather negative response these days, and I thought it was worth a comment or two to address the topic.  The problem we are seeing is that the term “Marketing” is not really being applied correctly.  And the reason for that is the way many large companies - particularly those which serve consumer markets - organize their sales and marketing efforts.

I suppose very few people who have not worked in these areas of business have ever bothered to stop and consider the differences between Sales and Marketing.  In most people’s minds the two are conflated into one single entity, “Sales-and-Marketing”.  So what exactly is Marketing if it is not the same thing as Sales?

The easiest definition is that Marketing is the activity that stops once the product is designed and put into manufacture.  Sales is the activity that starts at that point and seeks to put the product into customers’ hands.  To illustrate that distinction, I want to invent an imaginary product and discuss the task facing Umberto, an equally imaginary VP of Marketing.  For relevance, my hypothetical product will be an audio product.  (And for my sins, I did serve time in the trenches as a Product Marketing Manager, although not in any consumer-oriented field.)

Umberto first heard of the Globular Diaphragm in a meeting with his VP of R&D.  The product of fruitless research in an unrelated field, the Globular Diaphragm, it turned out, could be used to make a loudspeaker with some remarkable properties.  For a start, the loudspeaker would be incredibly efficient - close, in fact, to the maximum theoretical efficiency.  In addition, it would exhibit an unusually large auditivity coefficient, and a remarkably low dispersive deflation, qualities which, if I hadn’t made them up, would permit the design of a remarkably good, and unusually small, loudspeaker.

As VP of Marketing, Umberto’s job was to figure out which markets would be best served by this new technology.  Small and efficient are two words which immediately crystallized two more words in his mind: Smart and Phone.  With Globular Diaphragm technology at his disposal he could own the Smart Phone market!  Unfortunately, reviewing this possibility with his VP of Engineering, he found that, because the technology relied on a material known as Barely Obtainium (a commercially produced version of Unobtainium), it was extremely expensive.  In fact, it would make the loudspeaker so expensive that no Smart Phone manufacturer would be willing to consider it.  So that option was discarded.

Umberto next considered the Military market.  In general, when something new comes along which offers a great technological advance, but is too expensive to be used in a mainstream consumer product, the natural thing to do is to sell it to the Army.  Umberto envisaged a product that could be deployed behind enemy lines.  It would be so light it would blow about in the wind, all the while emitting (rather loudly, due to its dispersive deflation) that god-awful tune “I know a song that gets on everybody’s nerves!”.  Such a device could cause massive disruption in enemy morale.  As it happened, Umberto met with his local Senator - a powerful member of the Standing Committee on Military Spending.  The Senator was wildly enthusiastic, and was sure the Joint Chiefs would all want to be on board.  He outlined the road map to making the program work.  If all went well, the product could go into service in as little as ten years’ time!  Eventually, Umberto discarded that option.

Next, Umberto decided to investigate the high-end audio market.  He soon found, much to his astonishment, that not only was there a market for $1,000 loudspeakers, there was also a market for $10,000 loudspeakers.  Even $100,000 loudspeakers.  This market was tailor-made for Globular Diaphragm technology.  He commissioned a prototype from his VP of Engineering - a cost-no-object design that would blow away every other loudspeaker ever produced.  The prototype was about the size and shape of an electric kettle, but sounded for all the world like a pair of Wilson Alexandria XLFs.  The incredible quantity of high-quality Barely Obtainium used in that design would mean that such a product, if he ever committed it to manufacture, would have to sell for north of $100,000.  But every dealer he met told him the same story.  Yes, it was better than the Big Wilsons.  And yes, the small size meant that it was much easier to deploy in a real-world installation.  But anyone who was willing to blow that kind of cash on a pair of speakers wanted it to look like the Big Wilsons, and not like a pair of kettles.  And it needed to weigh 2 tons, not 2 kg.

When he got home, Umberto sat down, somewhat deflated, and poured himself a nice glass of Brunello.  Needing to relax, he turned on his Bose Wave radio.  Having listened to nothing but incredible audio on his lengthy road trip, he suddenly realized just how god-awful his Bose Wave radio actually sounded.  He wondered how good it might sound if he ripped out the tinny speakers and popped in a pair of the smallest and cheapest Globular Diaphragms.  Within a week he had a prototype.  He knew the chief buyer at Best Buy, where he bought his Bose Wave radio, and made an appointment to demo his prototype.  Unfortunately, the prototype looked pretty strange, due to the fact that it had been cobbled together from the bits and pieces that were lying around.  It looked like the love-child of a Retro table-top radio and a futuristic Plasma TV.  But it sounded quite amazing.  And the Best Buy buyer absolutely loved the bizarre new look.

This turned out to be a killer idea.  His Best Buy buddy indicated his strong interest in carrying such a product.  Umberto arranged meetings with all the major distribution outlets.  They all thought it was a killer product.  They all wanted a copy of the prototype to take home - and they all got one.  They all reported back that their wives (they were all men, of course) all thought it looked really cool.  They all wanted another one for their country cottages - and they all got one.  And they all agreed it was the most desirable new lifestyle product they had come across in a long time.  They all wanted to carry it.

Umberto’s job was now done, and, per company policy, ownership of the product was handed off to the VP of Sales, Greasy Pete.  Greasy Pete’s team opened preliminary sales talks with all of the enthusiastic retailers.  Negotiations were tough.  While the product looked and sounded amazing, it was still way too expensive.  Only Apple can get away with asking that kind of money.  Wasn’t there some way of getting the price down? 

Greasy Pete met again with Umberto and the VP of Engineering to brainstorm ways of trimming costs.  It turned out, of course, that the Globular Diaphragm was pushing the cost through the roof.  Greasy Pete suggested they replace the Globular Diaphragm with a cheap parts-bin loudspeaker cone.  That way they could realize massive cost savings.  He could push it at retail for one-third of the price originally envisaged, and actually make double the projected revenue.  The retailers would love it even more.   After all, he assured Umberto, customers won’t care what it sounds like.  It would still look way cooler than anything else in the store.

Greasy Pete immediately christened the new product “21st Century Soundscape” and launched it with a slick advertising campaign, which was based on the notion that a newly-married couple’s first priority would be to purchase a new “21st Century Soundscape” portable audio system, whose pounding room-filling beat could be carried up to the bedroom eliciting spectacular, although unspecified, benefits.  Sales turned out to be equally spectacular, and the product was a roaring success.  As to the Globular Diaphragm technology?  It was set aside and forgotten about.

Several months later, two audiophiles passed a shop window with a big display of 21st Century Soundscape products.  “So, do you enjoy a 21st Century Performance in your bedroom?”, asked the first, quoting the product’s now ubiquitous catch phrase.  “Have you actually heard the thing?”, replied the other, “It sounds awful.”  “I know,” said the first, “It’s nothing but marketing hype.  Those guys wouldn’t know great sound if it chased them down the street.”

That’s my little story.  It is not a morality play.  In the end Greasy Pete made what looks like the right call, but who knows?  In most modern companies Greasy Pete holds the title VP Sales & Marketing, and he is a senior executive with a lot of clout.  Umberto holds the title Product Marketing Manager, and wields a much smaller club.  What Greasy Pete wants, Greasy Pete usually gets.  A very wise man, the Chairman of the Board of one of my companies, had ten rules of business.  One of those was “Never Let The Salesman Set The Price”.  That was - and remains - a fine piece of advice.  The path it describes can be a very hard one to follow, and straying from it, should you give in to temptation, can lead to disaster.

Monday, 31 March 2014

Real-Time DSD Conversion

BitPerfect user Eugene Vivino writes: “As you may be aware, PS Audio is releasing the DirectStream - a product that upscales all signals to 2xDSD.  They say they do this because all PCM processors mask the sound in some way, while DSD outputs a more realistic and believable signal.  And reviewers say that it sounds ‘right’.  Now that BitPerfect supports DSD, would it be possible to add real time DSD conversion?”.

Actually, we will soon be taking delivery of a DirectStream.  We are getting one of the first batch of production units.  We continue to hear great things about this product, from both public and private sources, and as the hype has been quite intentionally built up I feel justified in expecting great things of it.  But does DirectStream’s contribution to the state-of-the-art derive from the fact that it upscales to 2xDSD?  I have received both private and public communications from Paul McGowan and his team on this subject, and I have no intention of commenting on any aspect of any information communicated privately.  But I am prepared to comment on some of the publicly-acknowledged elements of DirectStream’s design.

One of the defining aspects of the DirectStream design lies in the fact that it does not use anything that you would describe as a conventional DAC chip.  So I need to recap briefly on what a ‘conventional DAC chip’ actually does.  What these devices do is convert the incoming PCM signal to something that is often described - erroneously, but with the entirely laudable motive of avoiding customer confusion - as DSD.  Erroneously, because DSD is 1-bit 2.8224MHz, and nothing else.  To create DSD it is necessary to pass an incoming signal - whether analog or digital - through a thing called a Sigma-Delta Modulator (SDM).  These SDMs can be configured to produce all sorts of different outputs, and 1-bit 2.8224MHz (DSD) is just one possibility.  More specifically, SDMs can produce a multi-bit output as well, with as many bits as you like.  It turns out that there is at least one seriously good reason to use an SDM with a multi-bit output.  This is that SDMs can be unstable, and a 1-bit SDM in particular can be seriously unstable.  SDM design is not for the faint of heart.  A typical commercial DAC chip will take the incoming PCM and pass it through an SDM to create a multi-bit, even-higher-sample-rate version of “DSD”, which it then converts to analog.
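To make the idea a little more concrete, here is a minimal sketch in Python of a first-order Sigma-Delta Modulator.  It is purely illustrative - real DAC silicon uses much higher-order, carefully stabilized modulators - but it shows how the same basic structure can produce either a 1-bit (DSD-like) stream or a multi-bit one, simply by changing the coarseness of the quantizer.  The signal levels, sample rates and level counts below are my own assumptions for illustration.

```python
import numpy as np

def first_order_sdm(x, levels=2):
    """Toy first-order sigma-delta modulator (illustrative only).
    x      : oversampled input signal, scaled to roughly [-1, +1]
    levels : 2 gives a 1-bit (DSD-like) stream; larger values give the
             multi-bit output that most commercial DAC chips actually use."""
    steps = np.linspace(-1.0, 1.0, levels)       # available output levels
    y = np.empty_like(x)
    integrator = 0.0
    feedback = 0.0
    for n, u in enumerate(x):
        integrator += u - feedback               # accumulate the quantization error
        feedback = steps[np.argmin(np.abs(steps - integrator))]  # coarse quantizer
        y[n] = feedback
    return y

# Hypothetical example: a 1 kHz tone oversampled 64x relative to 44.1 kHz
fs = 44_100 * 64
t = np.arange(fs // 100) / fs                    # 10 ms of signal
tone = 0.5 * np.sin(2 * np.pi * 1_000 * t)
dsd_like = first_order_sdm(tone, levels=2)       # 1-bit stream
multibit = first_order_sdm(tone, levels=32)      # multi-bit "DSD-like" stream
```

Low-pass filter either output and you recover the tone; the 1-bit version simply pushes far more quantization noise up to high frequencies, where the filter removes it.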

What Paul’s team have done is to focus on one of the fundamental benefits of single-bit SDM-based DACs.  With a multi-bit output - such as PCM itself - the DAC has to create an output voltage whose magnitude is determined by the bit pattern.  The more bits, the more different possible output levels there are.  With 1-bit, there are only two levels - encoded as 1 and 0 - and from an electrical perspective, all a DAC has to do is switch its output between two fixed voltage sources represented by those numbers.  These voltage sources can be, for example, +1V and -1V, and can be controlled and regulated with fantastic precision, and with extremely low noise.  The job of the DAC is then simply to switch the output signal line between one voltage source and the other.  This is something you don’t need a chip to do, and is furthermore something to which you can apply a lifetime of audio electronics circuit design experience, realizing it in the best possible manner. 
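Again as a purely illustrative sketch, reusing the toy first_order_sdm() from above, this is all the “DAC” has to do once it has a 1-bit stream in hand: switch between two well-regulated rails and low-pass filter the result.  The crude moving-average filter and its cutoff are my own stand-ins for the real analog output stage.

```python
def one_bit_to_analog(bits, fs, cutoff_hz=30_000, v_hi=+1.0, v_lo=-1.0):
    """Switch the output between two fixed voltage rails according to the
    1-bit stream, then low-pass filter - a rough stand-in for the analog
    reconstruction stage after a 1-bit SDM."""
    switched = np.where(bits > 0, v_hi, v_lo)    # the "DAC" is literally a switch
    taps = max(1, int(fs / cutoff_hz))           # very crude moving-average filter
    return np.convolve(switched, np.ones(taps) / taps, mode="same")

analog_out = one_bit_to_analog(dsd_like, fs)     # recovers the 1 kHz tone
```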

Of course, DirectStream has some other quite unique elements to its design, such as its approach to jitter rejection.  But all that aside, the only thing that counts is what it sounds like, and I am as intrigued as the next guy on that score, and quite impatient to boot!

So when Eugene Vivino observes that DirectStream upscales all incoming PCM to 2xDSD, the truth is that virtually all DACs do something very similar, except that it isn’t necessarily 2xDSD.  And there is nothing particularly special about 2xDSD, other than that it is a bit better than 1xDSD.  Unless you are using what is termed a ladder DAC, a vanishing beast with only a few examples still in production, the process of Digital-to-Analog conversion fundamentally involves converting the PCM data to one of the “DSD-like” SDM-produced formats.  This in and of itself is unlikely to be the source of any differentiating performance attributes that might emerge from DirectStream.

Now, as I mentioned earlier, SDM design is not for the faint of heart.  It is based on some massively complicated signal-processing mathematics.  A breakthrough in SDM design will often earn you a PhD.  There is also a lot of untapped potential in SDM design waiting patiently in the wings for computing power - always inexorably moving forward - to reach a level necessary for these process-intensive SDM designs to be brought into real-world implementation. 

Yes, BitPerfect could implement PCM-to-DSD conversion on-the-fly to enable your PCM content to be delivered to your DAC in DSD form.  But simply converting PCM to DSD is no panacea.  It will not suddenly release trapped information in your PCM data stream.  The only reason you would want to do that is if BitPerfect’s PCM-to-DSD conversion algorithm would produce a significantly better result than the equivalent functionality inside your DAC’s DAC chip.  Make no mistake about it, these chips contain algorithms designed by teams of dedicated mathematicians who know their stuff, implemented on silicon designed expressly for signal processing.  The plain truth is that, at BitPerfect, we do not have an SDM of that quality, and, with all respect to the other fine player apps out there, some of which offer this capability, none of them do either.

The SDM technology we are working on requires seriously impressive amounts of processing power, so when we reach the point where we are able to implement it, it will not be introduced in a real-time system, but likely as an option in a forthcoming version of DSD Master.

Sunday, 30 March 2014

Seeing is Believing

They say “seeing is believing”, but nobody ever proposes that “hearing is believing”.  Yet how our brains make sense of what we see, and how our brains make sense of what we hear, seem to be accomplished more or less the same way.  In each case, the brain starts by trying to figure out what it might be seeing/hearing, and tries to correlate what it actually sees/hears with its pre-determined idea.  If the correlation is good, then our brains are able to conclude convincingly what it is that we are seeing/hearing.  So, in order to see or hear something accurately, we don’t so much have to actually see it, or actually hear it.  Rather what we need is an ensemble of evidence that allows our brains to make the necessary correlation in order for us to feel confident that we know what we are looking at or listening to.

There are many examples of this in action.  The most obvious ones are ambiguous images, such as the one that is either a black table lamp against a white background or the white silhouettes of two faces looking at each other against a black background.  I’m sure you can think of many other examples.  When we look at such a picture, we cannot see both interpretations simultaneously.  When we consider it to be a table lamp, we don’t perceive the faces.  And when we see the faces, we don’t perceive the table lamp.  For us to switch the way we see the image, we must consciously switch from one mode to the other.  The more complex the image, the more effort is needed to switch our perception from one perspective to the other.

Over the last couple of weeks this has been illustrated to me in an interesting way.  During this time, the hour I have been waking up in the morning has coincided with the time the sun rises and starts to illuminate my bedroom.  I have quite heavy curtains that do a pretty good job of keeping the sun out.  But nonetheless the sun works its way past the cracks.  In doing so, it starts to illuminate the ceiling above my bed, throwing shadows of my curtain rail as it does so, making a pattern of dark lines across the ceiling.  My ceiling is a flat white, and has an all-white 5-blade ceiling fan in the middle.  At this time of year the fan is turned off.  The angle of the fan blades is such that some of the blade faces are in the shadow of the encroaching sunlight, whereas one in particular faces it directly.

OK, you get the idea.  Now, as the sun gradually rises and the illumination level increases, the fan blade which faces the sun just happens to take on the exact same shade of white as the ceiling behind it, and I cannot tell the two surfaces apart.  I know exactly where that blade should be, because I can see the rest of the fan outlined quite clearly by its shadows.  But over the course of about ten minutes, as the light gradually improves, what I see on my ceiling is a five-bladed fan with one blade clearly missing.  It is just invisible, as though it had been removed.  My eyes do not detect the difference in tone between the white fan blade and the white ceiling behind it, so my brain tries to interpret the image as best it can, and the best correlation it can come up with is the one with the missing fan blade.  I know it is there, but try as I might I cannot perceive any indication of the presence of the apparently missing blade.

As the sun continues to rise, and the room gets brighter, eventually the optical illusion is replaced with the reality of the five-bladed fan, but at the point of transition an interesting thing happens.  At a certain moment, if I concentrate hard enough I can see the emergence of some contrast around the edges of the “missing” fan blade.  My brain can lock onto that and suddenly can “see” all five blades.  However, like the aforementioned optical illusions, with some effort I can switch between perception modes.  The fan can have either four blades or five.

This is where it gets interesting, though.  I mentioned at the start that the curtain rail casts a shadow across the ceiling comprising a number of thin dark lines.  Bear in mind that this is still a low-light situation, so the thin shadow lines are only just visible, but visible nonetheless.  One of those shadow lines happens to run right through the disappearing fan blade.  So when my brain sees the five-bladed fan, it sees that the shadow runs across the ceiling behind the fan blade but not across the fan blade itself.  In other words, my brain recognizes that the fan blade obscures the shadow, so I see a shadow line interrupted by the 6 inches or so of the fan blade.  Am I making myself clear?  Good. 

So what happens when I switch my perception mode to that of the four-bladed fan?  In that case, my brain’s model has constructed no blade to obscure the shadow line, and so it expects to see the shadow line pass uninterrupted across the portion of the ceiling no longer obscured by the apparently missing fan blade.  In short, when I visualize the five-bladed version of the fan, I see the shadow line interrupted by the fan, but when I visualize the four-bladed fan, I see the continuous shadow line.  This is not a sub-conscious artifact - I can consciously switch my perception between the two versions of the apparent optical illusion.  I am fully aware of the apparent contradiction which is that something which is quite obviously visible in one version becomes equally obviously invisible in the other version.  Seeing is believing, indeed.

This holds some lessons for us in interpreting how we hear, if we are willing to accept this model of cognitive perception.  It tells us that in order to be satisfied that we are hearing a certain thing, it is not sufficient to determine whether that thing is in and of itself audible.  What we need is for that thing to make sense in the light of everything else that the brain is hearing at the same time.  And I should say perceiving, rather than hearing.

As an example, humans have an uncanny ability to locate sounds in three dimensions.  Not only can we locate something from left to right, but we can also locate it in terms of distance, and also in terms of height.  How we are able to do this remains a topic of active research.  The simple picture of how we hear is that we have two ears, and that our brains can infer the direction from which a sound is coming based on the time delay between the arrival of those sounds at our two ears.  But this does not explain, for example, how we can perceive any vertical component to the localization.

This area of research has also shown up some remarkably interesting results.  In the right conditions, test subjects can reliably differentiate between sounds originating from two points in space which are remarkably close together.  When you crunch the numbers, the time difference between the arrival of the signals from those two locations is absolutely minuscule - of the order of 10 microseconds.  This generates some tough problems, since in order to achieve that degree of temporal resolution, it is more or less a requirement that whatever is doing the detecting must have a bandwidth of around 250kHz.  Considering that our hearing has a measurable upper limit of the order of 20kHz, this is problematic, and no theories yet exist to physically account for it. 
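As a back-of-the-envelope check on that 10-microsecond figure, here is a tiny sketch using a deliberately simplified model: two point “ears” about 17.5cm apart, no head in between, and a distant source moved by one degree of azimuth.  The ear spacing and the one-degree step are my own assumptions for illustration, not figures taken from the research.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s, in air at roughly room temperature
EAR_SPACING = 0.175      # m, assumed distance between the two ears

def interaural_time_difference(azimuth_deg):
    """Arrival-time difference between the two ears for a distant source,
    using a bare two-point model with no head shadowing (seconds)."""
    return (EAR_SPACING / SPEED_OF_SOUND) * math.sin(math.radians(azimuth_deg))

# Move the source by one degree of azimuth and see how much the timing cue changes
delta = interaural_time_difference(1.0) - interaural_time_difference(0.0)
print(f"Change in interaural delay: {delta * 1e6:.1f} microseconds")
# prints roughly 8.9 microseconds - the same order as the figure quoted above
```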

However, some interesting observations do suggest that humans can perceive audio signals at frequencies considerably higher than 20kHz.  By connecting a person up to a brain scanning device of some type, researchers have shown that the human brain can show a measurable response to audio signals at frequencies as high as 45kHz, even though the subject reports that they don’t hear a thing.

So if we base our models of audio reproduction theory solely upon simple stereo signals with a bandwidth of 20kHz, we may find ourselves unable to account for everything that practical experience throws at us.  There are new things that we need to learn, but the fact that we don't know what they are is not an excuse for assuming they don't exist.

Wednesday, 26 March 2014

Why Moore’s Law will Blow Your Mind

Most of you will be very familiar with Moore’s Law, formulated by Gordon E. Moore, a co-founder of Intel, way back in 1965.  Imagine, if you can, the state of electronics components technology back then.  Integrated circuits were in their infancy, and indeed few people today would look at the first-ever 1961 Fairchild IC and recognize it as such.  This was the state of the art when Moore formulated his law, which states that the number of transistors in an IC doubles roughly every two years.  Considering the infancy of the industry at the time Moore made his prediction, it is astonishing that his law continues to hold today.  In 1965, commercial ICs comprised up to a few hundred transistors.  Today, the biggest commercial ICs have transistor counts in the billions.  Also, every ten years or so, sage observers can be counted on to pronounce that Moore’s Law is bound to slow down over the coming decade due to [fill-in-the-blanks] technology limitations.  I can recall at least two such major movements, one in the early 1990s, and again about 10 years later.  The movers and shakers in the global electronics industry, however, continue to base their long-range planning on the inexorable progress of Moore’s Law.
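As a quick sanity check on those numbers, here is a two-line sketch of the doubling law.  The 1965 starting count of 500 transistors is an assumption for illustration - the point is only the order of magnitude it predicts for today.

```python
def transistors_per_chip(year, base_year=1965, base_count=500):
    """Moore's Law as stated above: the transistor count doubles every two years.
    base_count of ~500 transistors in 1965 is an assumed starting point."""
    return base_count * 2 ** ((year - base_year) / 2)

for year in (1965, 1975, 1995, 2014):
    print(year, f"{transistors_per_chip(year):,.0f}")
# 2014 comes out at roughly 12 billion - the same order of magnitude as the
# largest commercial chips of that era
```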

Last night I attended a profoundly illuminating talk given by John La Grou, CEO of Millennia Media.  John showed how Moore’s Law applies in a similar vein to a number of core technologies that relate to the electronics industry.  He touched on the mechanisms that underlie these developments.  However, what was most impressive was how he expressed dry concepts such as transistor counts in more meaningful terms.  The one which particularly caught my attention was a chart that expressed the growth in computing power.  Its Y-axis has units like the brainpower of a flea, the brainpower of a rat, the brainpower of a human, and the combined brainpower of all humans on earth.  In his chart, today’s CPU has slightly more than the brainpower of a rat, but falls massively short of the brainpower of a human.  However, by 2050, which will be within the lifetimes of many of you reading this, your average computer workstation will be powered by something approaching the combined brainpower of every human being on earth.

I wonder if, back in 1965, Gordon Moore ever paused to imagine the practical consequences of his law.  I wonder if he contemplated the possibility of having a 2014 Mac Pro on his office desk, a computer possessed of processing power equivalent to the sum total of every computer ever built up to the time Apple introduced their first ever PC.  Now Moore was a smart guy, so I’m sure he did the math, but if he did, I wonder if he ever asked himself what a person might ever DO with such a thing.  I don’t know if posterity records his conclusions.  In the same way, I wonder (and I most assuredly do not stand comparison to Moore) what a person might do in 2050 with a computer having at its disposal the combined brainpower of every human being on the planet.  As yet, posterity does not record my conclusions.

La Grou’s talk focussed on audio-related applications.  In particular he talked about what he referred to as immersive applications.  In effect, wearable technology that would immerse the wearer in a virtual world of video and audio content.  He was very clear indeed that the technology roadmaps being followed by the industry would bring about the ability to achieve those goals within a remarkably short period of time.  He talked about 3D video technology with resolution indistinguishable from reality, and audio content to match.  He was very clear that he did not think he was stretching the truth in any way to make these projections, and expressed a personal conviction that these things would come to fruition quite a lot faster than the already aggressive timescales he was presenting to the audience.  He showed some really cool video footage of unsuspecting subjects trying out the new Oculus Rift virtual reality headsets, made by the company acquired yesterday by Facebook.  I won’t attempt to describe it, but we watched people who could no longer stand upright.  La Grou has tried the Oculus Rift himself and spoke of its alarmingly convincing immersive experience.

At the start of La Grou’s talk, he played what he described as the first ever audio recording, made by a Frenchman some two decades before Edison.  Using an approach similar to the one Edison would later adopt, his recording was made by a needle which scratched the resultant waveform onto a piece of (presumably moving) inked paper.  This recording was made without the expectation that it would ever be replayed; in fact the object was never to listen to the recorded sound, but rather to examine the resultant waveforms under a microscope.  By digitizing the images, however, we can replay that recording today, more than 150 years after the fact.  We can hear the Frenchman humming rather tunelessly over a colossal background noise level.  One imagines he never rehearsed his performance, or even paused to consider what he might attempt to capture as history’s first ever recorded sound.  Anyway, the result is identifiable as a man humming tunelessly, but not much more than that.

At the end of the talk we watched the results of an experiment where researchers were imaging the brains of subjects while they (the subjects, that is, not the researchers) were watching movies and other visual stimuli.  They confined themselves to imaging only the visual cortex.  To the naked eye there was no obvious pattern to how particular images caused the various regions within the cortex to light up, but computers being the powerful things they are (i.e. smarter than the average rat), they let the computer attempt to correlate the images being observed with the patterns being produced.  If I understand correctly, they then showed the subjects some quite unrelated images, and asked the computer to come up with a best guess for what the subject was seeing, based on the correlations previously established.  There is no doubt that the images produced by the computer corresponded quite remarkably with the images the subject was looking at.  In fact, the computer’s reproduction of the image the subject was looking at was about as faithful as the playback of the 150-year-old French recording was to what one might imagine the original sounded like. 

I couldn’t help but think that it would be something less than - quite a lot less than - 150 years before this kind of technology advances to a practically useful level, one with literally mind-bending ramifications.

Thursday, 20 March 2014

Tip for ripping CDs with gapless content

Here is a tip from BitPerfect user Jim Brower:  "When ripping CDs, do not auto rip and eject, which lets iTunes determine the ripping order.  Instead, use show CD, which allows you to view the ripping order.  Where you see the track numbers begin, there is an arrow.  You can click the arrow until it points up.  This organizes the track order in the correct sequence from 1 to the end of the album.  The first track will be on top and the normal linear sequence will be from top to bottom.  This only has to be done once, iTunes will remember this setting.  I have rerecorded about 50 live CDs with no gaps using this method."

Thank you, Jim!

BitPerfect 2.0.1 Released.

BitPerfect 2.0.1 contains only bug fixes.  This version runs on OS X 10.7 (Lion) and up.  We may return to supporting Snow Leopard in the future if we are able to do so.
BitPerfect 2.0.1 is a free upgrade for existing BitPerfect users.

Monday, 17 March 2014

Snow Leopard Update

Apple has now got back to us, and while they acknowledge that the iTunes bug which we reported does exist, they do not intend to devote any resources to fixing it.  This is disappointing.  In effect, Snow Leopard users are apparently no longer being supported by Apple.

So where do we go from here?  Well, we do have a couple of ideas.  They should work well for Snow Leopard users from a functional perspective, but the question is entirely whether Apple will approve them for sale on the App Store.  There are reasons to be pessimistic on that front.  However, we will continue to plug away at it until we get an answer one way or the other.

In the meantime we are about to release an update on the App Store, version 2.0.1, containing numerous bug fixes.  Version 2.0.1 will be marked for OS X 10.7 (Lion) and upwards, so Snow Leopard users will not have access to it.

Until the situation resolves itself, Snow Leopard users are advised to contact BitPerfect via our support line.