Things I’ve learned, published for the public benefit

Windows 7: The almost-there operating system

One thing that struck me soon after I upgraded my main computer from Windows XP to Windows 7 is how many things it gets almost right. The OS is full of well-engineered features that seem awesome, yet – upon closer inspection – turn out to have some hidden flaw that renders them useless or at least very frustrating.

Math Input Panel

I’ll start with the Math Input Panel. This is a feature so awesome that you want to show it to your friends. You scribble a mathematical expression with your mouse, touch screen or graphics tablet, and it is magically converted into proper typographical form.

Screenshot of the Math Input Panel

But then you want to insert your formula into a document. You open the built-in (and greatly improved) WordPad editor of Windows 7. You click “Insert”. Nothing happens. You open Paint (also improved in Windows 7) and try again. Nothing. OpenOffice Writer? Nothing. Word 2003? Nada. Does this thing even work?

Then you read the small print. The Math Input Panel only works with applications that support MathML. As of this writing, the only popular application with MathML support is Word 2007. There are no other output options. The Math Input Panel cannot generate code in LaTeX, which is the de facto standard in the mathematical community and has been adopted by projects such as Wikipedia, WordPress and jsMath. It cannot generate OLE objects for older versions of Word. It does not even let you paste the damn equation as an image. How can something so ingenious be so useless?

Windows Firewall

On its face, the Windows Firewall has everything you need to say goodbye to third-party firewalls like Comodo. It’s lean, well-integrated with the OS, and the new “Windows Firewall with Advanced Security” console lets you specify detailed rules for inbound and outbound connections to/from specific programs and ports:

Screenshot of the Windows Firewall control panel

Perfect, isn’t it? Unfortunately, it has two fatal shortcomings:

  • Any application can add its own exceptions to it by means of a simple API call. Why? The official rationale is that it is not the firewall’s job to block malicious applications from accessing the network – once you have executed malicious code on your computer, it can pretty much do whatever it wants, including sending data via a trusted process in a way that is invisible to the firewall. There is some truth to this, but a less permissive firewall would still make things that much harder for wrongdoers. More importantly, however, this reasoning misses the use case where you want to prevent legitimate applications from “phoning home”. If I block Adobe Photoshop from using my Internet connection, it probably won’t go so far as to hijack another process, but it will make use of an official Windows API to add an outbound rule for itself.
  • There is no way to get pop-up notifications about outbound connections. In a typical software firewall, when a new application attempts to establish an outbound connection, you get a pop-up window which enables you to allow or block the connection, and add a permanent rule for this application. The Windows Firewall does not have this functionality. The only thing you can enable is a notification about blocked incoming connections, which gives you a chance to unblock an application. What about outbound connections? The best you can do is block all unknown applications, but then you will never know that an application wanted to access the Net. It will just silently fail.
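
To see how low the bar is for the first problem: any program running with administrative privileges can whitelist itself with one call to the firewall API or, equivalently, one netsh command. Here is a minimal sketch in Python (the netsh syntax is real; the rule name and program path are made up, and on a real Windows machine you would actually execute the command with subprocess):

```python
# Sketch: how an application might whitelist itself in the Windows Firewall.
# The netsh syntax is real; "MyApp Updater" and the path are hypothetical.

def build_allow_rule(name, program):
    """Build a netsh command that adds an outbound allow rule for `program`."""
    return [
        "netsh", "advfirewall", "firewall", "add", "rule",
        "name=" + name,
        "dir=out",
        "action=allow",
        "program=" + program,
    ]

cmd = build_allow_rule("MyApp Updater", r"C:\Program Files\MyApp\updater.exe")
# On Windows, running this via subprocess.run(cmd) is all it takes.
print(" ".join(cmd))
```

One successful call and the application’s outbound traffic sails past the firewall, no questions asked.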

Sticky Notes

Screenshot showing two sticky notes on the desktop

The Sticky Notes feature looks really useful at first. For someone who stares at his screen for most of the day, the Windows desktop seems to be a logical place for “notes to self”. The UI is pretty straightforward and has some nice touches, such as the fact that every note has a little plus button that lets you quickly add another note.

Unfortunately, for some unknown reason Sticky Notes is not a gadget, like the weather thingy you can see on the screenshot above. It’s a separate application. One that cannot be minimized to the system tray. And I don’t know about you, but I don’t like tiny utilities like this taking up space on my taskbar. I need the space so I can comfortably switch between my productivity applications.

Windows Backup

The final “almost perfect” Windows 7 feature I’m going to talk about is Windows Backup. Now this is a seriously exciting utility that promises to replace third-party backup applications like Acronis True Image. On the face of it, it has everything you need. Scheduled and on-demand backups? Check. System drive snapshots? Check. Backups of selected folders? Check. Incremental backups? Check. Restore from bootable CD/DVD? Check. Time needed to back up 500 GB of data to an external USB hard drive? 35 hours. That’s right. Thirty-five freaking hours. (If you suspect there is something wrong with my setup, read these other reports.) Try it once and you’ll never try it again.
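
For perspective, a quick back-of-the-envelope calculation (assuming the full 500 GB actually gets written):

```python
# Effective throughput of the 500 GB / 35-hour backup described above.
size_mb = 500 * 1024         # 500 GB expressed in MB
seconds = 35 * 3600          # 35 hours expressed in seconds
throughput = size_mb / seconds
print(round(throughput, 1))  # roughly 4 MB/s
```

That works out to roughly 4 MB/s, well below what a USB 2.0 external drive can typically sustain.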

It’s as if Microsoft developed a perfectly good backup application and then decided to cripple it on purpose, just to let ISVs make a buck. I don’t want to give my money to Acronis again, especially after reading their official response to a compression bug in TrueImage Home 11 (“just turn off compression”), but it seems I’m going to have to.


Why you should use English versions of your OS and other software

Even though I’m writing this blog in English, I know I have a considerable number of readers in non-English-speaking countries, such as my native Poland. This post is for them. If you are American, British, Australian, or a Kiwi — sorry, there’s nothing for you here. See you next week.

Now for the rest of you. As you can probably figure out from the title, I’m going to try to convince you to use English versions of your software. Now, I am the webmaster of a site which tells you how to learn English, so you might expect I would tell you how daily exposure to English menu items, system messages, help files, and all the other textual UI elements will program your brain with correct English. (Which, by the way, would all be true.)

But today I’m not going to write about the importance of getting English input every chance you get. Instead, I will give you a very practical reason to install English versions of your operating system and other software rather than versions localized in your native language.

Suppose you have just updated the drivers for your nVidia card. Unfortunately, something has gone wrong and every time you reboot your machine you see the following error message:

Sterownik ekranu przestał działać, ale odzyskał sprawność.

(The error message is in Polish because, in this example, we will assume you are Polish and use the Polish version of Windows.) “Motyla noga”, you curse to yourself while opening your Web browser. If there’s one thing you’ve learned online, it’s that the Internet has the answer to your computer question. Other people must have had the same problem and there must be a forum post somewhere which has the solution.

But what are you going to type into Google? What keywords would be likely to occur in this forum post you want to find? In all likelihood, the poster would have quoted the error message itself.

Except they would have quoted it in English, not Polish. Let’s face it — it is much more probable that the solution to your problem is posted on one of the many English-language tech forums than on one of the few Polish-language ones. A Google Groups search on “nVidia” turns up 17,000,000 group threads in English and only 211,000 in Polish (1/80 of the English figure).

So now you’re stuck with your Polish error message, trying to figure out the exact words the English version might have used. “The screen driver has failed?” “Malfunctioned?” “Stopped working?”

Of course, I have an English-language version of Windows, so if I am having computer issues, I can simply read the English error message off the screen (in our example it’s “The display driver has stopped responding and has successfully recovered”), type that magic phrase into Google together with the name of the malfunctioning device or application and boom! — within minutes I’m reading about the secret registry setting that makes it all okay.

Now that I think about it, having an English-language version of Windows probably accounts for something like 30% of my troubleshooting ability. Moreover, using English-language software is useful not only when troubleshooting — I find it equally helpful when I just want to learn how to do something in Windows, Office, Photoshop or even a Web app like GMail. I can just search on the names I see instead of wondering what the English name for warstwy dopasowania (adjustment layers) is. And I can apply the solution more easily because I don’t have to translate all the names back into Polish.

It would perhaps behoove me to give you “the other side” of the argument, but the matter seems pretty clear-cut to me: If you want to get help with your software (and who doesn’t?), it helps to use the same version that most of the potential helpers use. And with this, I leave you.


What you should know about Volume Shadow Copy/System Restore in Windows 7 & Vista (FAQ)

What is volume shadow copy?

Volume Shadow Copy is a service that creates and maintains snapshots (“shadow copies”) of disk volumes in Windows 7 and Vista. It is the back-end of the System Restore feature, which enables you to restore your system files to a previous state in case of a system failure (e.g. after a failed driver or software installation).

Does volume shadow copy protect only my system files?

No. Volume Shadow Copy maintains snapshots of entire volumes. By default, it is turned on for your system volume (C:) and protects all the data on that volume, including all the system files, program files, user settings, documents, etc.

How is this different from what’s in Windows XP?

In Windows XP, System Restore does not use the Volume Shadow Copy service. Instead, it uses a much simpler mechanism: the moment a program attempts to overwrite a system file, Windows XP makes a copy of it and saves it in a separate folder. In Windows XP, System Restore does not affect your documents – it only protects files with certain extensions (such as DLL or EXE), the registry, and a few other things (details). It specifically excludes all files in the user profile and the My Documents folder (regardless of file extension).

When are the shadow copies created?

Volume shadow copies (restore points) are created before the installation of device drivers, system components (e.g. DirectX), Windows updates, and some applications.

In addition, Windows automatically creates restore points at hard-to-predict intervals. The first thing to understand here is that the System Restore task on Vista and 7 will only execute if your computer is idle for at least 10 minutes and is running on AC power. Since the definition of “idle” is “0% CPU usage and 0% disk input for 90% of the last 15 minutes, plus no keyboard/mouse activity” (source), it could take days for your machine to be idle, especially if you have a lot of programs running in the background.

As you see, the frequency with which automatic restore points are created is hard to estimate, but if you use your machine every day on AC power and nothing prevents it from entering an idle state, you can expect automatic restore points to be created every 1-2 days on Windows Vista and every 7-8 days on Windows 7. Of course, the actual frequency will be higher if you count the restore points created manually by you and those created before software installations.

Here’s a more precise description: By default, the System Restore task is scheduled to run every time you start your computer and every day at midnight, as long as your computer is idle and on AC power. The task will wait for the right conditions for up to 23 hours. These rules are specified in Scheduled Tasks and can be changed by the user. If the task is executed successfully, Windows will create a restore point, but only if enough time has passed since the last restore point (automatic or not) was created. On Windows Vista the minimum interval is 24 hours; on Windows 7 it is 7 days. As far as I know, this interval cannot be changed.
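
The decision rules above can be condensed into a few lines of Python. This is my own paraphrase of the logic, not actual Windows code:

```python
def should_create_restore_point(os_name, hours_since_last_rp,
                                on_ac_power, idle_minutes):
    """Sketch of the rules described above: the System Restore task creates
    a restore point only when the machine has been idle for at least
    10 minutes, is on AC power, and the minimum interval since the last
    restore point (24 hours on Vista, 7 days on Windows 7) has passed."""
    if not on_ac_power or idle_minutes < 10:
        return False
    min_interval_hours = {"Vista": 24, "Windows 7": 7 * 24}[os_name]
    return hours_since_last_rp >= min_interval_hours

print(should_create_restore_point("Vista", 30, True, 15))      # True
print(should_create_restore_point("Windows 7", 30, True, 15))  # False
```

The same 30-hour-old restore point is stale enough for Vista but not for Windows 7, which is exactly why automatic restore points are so much rarer on the newer OS.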

What cool things can I do with Volume Shadow Copy?

  • If your system malfunctions after installing a new video card driver or firewall software, you can launch System Restore and roll back to a working system state from before the installation. If you can’t get your system to boot, you can also do this from the Windows Setup DVD. This process is reversible, i.e. your current state will be automatically saved as a restore point, to which you can later go back. (Note: System Restore will not roll back your documents and settings, just the system files.)
  • If you accidentally delete 10 pages of your dissertation, you can right-click the document, choose Restore previous versions, and access a previous version of it. You can open it (in read-only mode) or copy it to a new location.
  • If you accidentally delete a file or folder, you can right-click the containing folder, choose Restore previous versions, and open the folder as it appeared at the time a shadow copy was made (see screenshot below). All the files and folders that you deleted will be there!


Note: While the Volume Shadow Copy service and System Restore are included in all versions of Windows Vista, the Previous versions user interface is only available in Vista Business, Enterprise and Ultimate. On other Vista versions, the previous versions of your files are still there; you just cannot access them easily. The Previous versions UI is available in all versions of Windows 7. It is not available in any version of Windows 8.

Is Volume Shadow Copy a replacement for versioning?

No. A versioning system lets you access all versions of a document; every time you save a document, a new version is created. Volume Shadow Copy only allows you to go back to the moment when a restore point was made, which could be several days ago. So if you do screw up your dissertation, you might have to roll back to a very old version.

Is Volume Shadow Copy a replacement for backups?

No, for the following reasons:

  • Shadow copies are not true snapshots. When you create a restore point, you’re not making a new copy of the drive in question — you’re just telling Windows: start tracking the changes to this drive; if something changes, back up the original version so I can go back to it. Unchanged data will not be backed up. If the data on your drive gets changed (corrupted) for some low-level reason like a hardware error, VSC will not know that these changes happened and will not back up your data. (see below for a more detailed description of how VSC works)
  • The shadow copies are stored on the same volume as the original data, so when that volume dies, you lose everything.
  • With the default settings, there is no guarantee that shadow copies will be created regularly. In particular, Windows 7 will only create an automatic restore point if the most recent restore point is more than 7 days old. On Windows Vista, the minimum interval is 24 hours, but remember that the System Restore task will only run if your computer is on AC power and idle for at least 10 minutes, so it could take days before the conditions are right, especially if you run a lot of background processes or do not use your computer frequently.
  • There is no guarantee that a suitable shadow copy will be there when you need it. Windows deletes old shadow copies without a warning as soon as it runs out of shadow storage. With a lot of disk activity, it may even run out of space for a single shadow copy. In that case, you will wind up with no shadow copies at all; and again, there will be no message to warn you about it.

How much disk space do Volume Shadow Copies take up?

By default, the maximum amount of storage available for shadow copies is 5% (on Windows 7) or 15% (on Vista), though only some of this space may be actually allocated at a given moment.

You can change the maximum amount of space available for shadow copies in Control Panel | System | System protection | Configure.

How efficient is Volume Shadow Copy?

It’s quite efficient. The 5% of disk space that it gets by default is usually enough to store several snapshots of the disk in question. How is this possible?

The first thing to understand is that volume shadow copies are not true snapshots. When a restore point is created, Volume Shadow Copy does not create a full image of the volume. If it did, it would be impossible to store several shadow copies of a volume using only 5% of that volume’s capacity.

Here’s what really happens when a restore point is created: VSC starts tracking the changes made to all the blocks on the volume. Whenever anyone writes data to a block, VSC makes a copy of that block and saves it on a hidden volume. So blocks are “backed up” only when they are about to get overwritten. The benefit of this approach is that no backup space is wasted on blocks that haven’t changed at all since the last restore point was created.

Notice that VSC operates on the block level, that is, below the file system level. It sees the disk as a long series of blocks. (Still, it has some awareness of files, as you can tell it to exclude certain files and folders.)

The second important fact is that shadow copies are incremental. Suppose it’s Wednesday and your system has two shadow copies, created on Monday and Tuesday. Now, when you overwrite a block, a backup copy of the block is saved in the Tuesday shadow copy, but not in the Monday shadow copy. The Monday copy only contains the differences between Monday and Tuesday. More recent changes are only tracked in the Tuesday copy.

In other words, if we were to roll back an entire volume to Monday, we would take the volume as it is now, “undo” the changes made since Tuesday (using the blocks saved in the Tuesday shadow copy), and finally “undo” the changes made between Monday and Tuesday. So the oldest shadow copy is dependent on all the more recent shadow copies.
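
The copy-on-write scheme described in the last few paragraphs can be modeled in a few lines of Python. This is a deliberately simplified toy (real VSC tracks fixed-size blocks on a hidden volume), but the bookkeeping is the same:

```python
class Volume:
    """Toy block-level volume with VSC-style copy-on-write snapshots."""

    def __init__(self, blocks):
        self.blocks = list(blocks)
        self.snapshots = []          # each: {block_index: original_data}

    def create_restore_point(self):
        self.snapshots.append({})    # start tracking changes from now on

    def write(self, index, data):
        if self.snapshots:
            current = self.snapshots[-1]
            if index not in current:       # back up only the first overwrite
                current[index] = self.blocks[index]
        self.blocks[index] = data

    def restore(self, snapshot_number):
        """Undo changes snapshot by snapshot, newest first."""
        for snap in reversed(self.snapshots[snapshot_number:]):
            for index, original in snap.items():
                self.blocks[index] = original
        # Toy simplification: forget the rolled-back history.  Real System
        # Restore saves the current state first, so a rollback is reversible.
        del self.snapshots[snapshot_number:]

vol = Volume(["mon-a", "mon-b", "mon-c"])
vol.create_restore_point()           # "Monday"
vol.write(0, "tue-a")                # Monday copy backs up "mon-a"
vol.create_restore_point()           # "Tuesday"
vol.write(0, "wed-a")                # Tuesday copy backs up "tue-a"
vol.write(1, "wed-b")
vol.restore(0)                       # roll all the way back to Monday
print(vol.blocks)                    # ['mon-a', 'mon-b', 'mon-c']
```

Note how restoring to Monday has to walk back through the Tuesday copy first — exactly the dependency between shadow copies described above.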

When I delete a 700 MB file, does VSC add 700 MB of data to the shadow copy?

No. When you delete a file, all that Windows does is remove the corresponding entry (file name, path, properties) from the Master File Table. The blocks (units of disk space) that contained the file’s contents are marked as unused, but they are not actually deleted. So all the data that was in the file is still there in the same blocks, until the blocks get overwritten (e.g. when you copy another file to the same volume).

Therefore, if you delete a 700 MB movie file, Volume Shadow Copy does not have to back up 700 MB of data. Because it operates on the block level, it does not have to back up anything, as the blocks occupied by the file are unchanged! The only thing it has to back up is the blocks occupied by the Master File Table, which has changed.

If you then start copying other files to the same disk, some of the blocks formerly occupied by the 700 MB file will get overwritten. VSC will make backups of these blocks as they get overwritten.

If VSS is constantly backing up blocks of data as they get overwritten, what actually happens when a restore point is created?

Not much — VSS simply starts backing up the data to a new place, while leaving the “old place” there (at least until it runs out of space). Now you have two places to which you can restore your system, each representing a different point in time. When you create a restore point, you’re simply telling VSS: “I want to be able to go back to this point in time”.

Note that it’s a mistake to think that VSS is backing up every change you make! It only backs up enough to enable you to go to a specific point in time. Here’s an example scenario to clear things up:

  1. You create a file (version #1)
  2. You create a restore point
  3. You change the file (resulting in version #2) — VSS backs up version #1
  4. A week later, you change the file again (resulting in version #3) — VSS doesn’t back anything up, because it already has version #1 backed up. As a result, you can no longer go back to version #2. You can only go back to version #1 — the one that existed when the restore point was created.

(Note that actually VSS doesn’t operate on files but on blocks, but the principle is the same.)

What are the security implications of Volume Shadow Copy?

Suppose you decide to protect one of your documents from prying eyes. First, you create an encrypted copy using an encryption application. Then, you “wipe” (or “secure-delete”) the original document, which consists of overwriting it several times and deleting it. (This is necessary, because if you just deleted the document without overwriting it, all the data that was in the file would physically remain on the disk until it got overwritten by other data. See question above for an explanation of how file deletion works.)

Ordinarily, this would render the original, unencrypted document irretrievable. However, if the original file was stored on a volume protected by the Volume Shadow Copy service and it was there when a restore point was created, the original file will be retrievable using Previous versions. All you need to do is right-click the containing folder, click Restore previous versions, open a snapshot, and, lo and behold, you’ll see the original file that you tried so hard to delete!

The reason wiping the file doesn’t help, of course, is that before the file’s blocks get overwritten, VSC will save them to the shadow copy. It doesn’t matter how many times you overwrite the file, the shadow copy will still be there, safely stored on a hidden volume.

Is there a way to securely delete a file on a volume protected by VSC?

No. Shadow copies are read-only, so there is no way to delete a file from all the shadow copies.

A partial solution is to delete all the shadow copies (by choosing Control Panel | System | System protection | Configure | Delete) before you wipe the file. This prevents VSC from making a copy of the file right before you overwrite it. However, it is quite possible that one of the shadow copies you just deleted already contained a copy of the file (for example, because it had recently been modified). Since deleting the shadow copies does not wipe the disk space that was occupied by them, the contents of the shadowed file will still be there on the disk.

So, if you really wanted to be secure, you would also have to wipe the blocks that used to contain the shadow copies. This would be very hard to do, as there is no direct access to that area of the disk.

Some other solutions to consider:

  • You could make sure you never save any sensitive data on a volume that’s protected by VSC. Of course, you would need a separate VSC-free volume for such data.
  • You could disable VSC altogether. (After disabling VSC, you may want to wipe the free space on your drive to overwrite the blocks previously occupied by VSC, which could contain shadow copies of your sensitive data.) However, if you disable VSC, you also lose System Restore functionality. Curiously, Windows offers no option to enable VSC only for system files. If you want to protect your system, you also have to enable Previous versions (see screenshot to the right).
  • The most secure approach is to use an encrypted system volume. That way, no matter what temporary files, shadow copies, etc. Windows creates, it will all be encrypted.

Notice that VSC only lets you recover files that existed when a restore point was created. So if the sequence of events is as follows:

create file → create restore point → make encrypted copy → overwrite original file

the original file will be recoverable. But if the sequence is:

create restore point → create file → make encrypted copy → overwrite original file

you are safe. If you make sure to encrypt and wipe files as soon as you create them, so that no restore point gets created after they are saved on disk in unencrypted form, there will be no way to recover them with VSC. However, it is not easy to control when Windows creates a restore point; for example, it can do it at any time, just because your computer happens to be idle.

Can I prevent VSC from keeping snapshots of certain files and folders?

Yes, but you have to edit the registry to do that. Here are detailed instructions from MSDN.

What happens when VSC runs out of space?

Most of the time, most of the data on your disk stays unchanged. However, suppose you uninstall a 5 GB game and then install another 5 GB game in its place. This means that 5 GB worth of blocks got overwritten and had to be backed up by VSC.

In such “high-churn” scenarios, VSC can run out of space pretty quickly. What happens then? VSC deletes as many previous shadow copies as necessary, starting from the oldest, until it has enough space for the latest copy. In the rare event that there isn’t enough space even for the one most recent copy, all the shadow copies will be deleted. There are no partial copies.
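
The eviction policy amounts to this (the sizes below are arbitrary illustrative numbers, in GB):

```python
def evict(snapshot_sizes, capacity):
    """snapshot_sizes: shadow copy sizes, oldest first.
    Returns the shadow copies that survive after VSC reclaims space:
    oldest copies are deleted until everything fits."""
    kept = list(snapshot_sizes)
    while kept and sum(kept) > capacity:
        kept.pop(0)                  # delete the oldest shadow copy
    return kept                      # may be empty: there are no partial copies

print(evict([3, 4, 2], capacity=7))  # [4, 2]
print(evict([3, 4, 9], capacity=7))  # [] -- even the newest copy doesn't fit
```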

Thanks to Adi Oltean, who was one of the engineers of Volume Shadow Copy at Microsoft, for answering my questions on the subject.


The Hidden Shadow

The flower delivery van had been parked across the street for far too long. Cahey peered outside through the window blinds for the third time. By now he was certain they had him under surveillance. He had been careful not to discuss the subject matter of his current project with anyone, but there were a few souls at the Tribune who knew he was working on a major investigative piece. Apparently that was enough to pique the government’s interest.

Cahey lit a cigarette and reflected on the van’s relatively conspicuous location. Sloppy surveillance work or a deliberate attempt to scare him into silence? There was no way to know. He was, however, sure of one thing: if they came here, they would find nothing. Knowing that digital content was much easier to protect from prying eyes than papers, photographs and recordings, he had disposed of every physical record of his investigation, leaving only a digitized copy on the hard drive of his laptop computer. Two days ago, he had encrypted all this data using an open-source application called TrueCrypt, making sure to overwrite the original files several times before deletion. Now his data was unrecoverable without the password, and there was nothing anybody could do about it, not even the NSA with their army of PhDs and their supercomputers. The spooks would be in for a surprise.

“Drrrrrt” — the sound of the doorbell pierced the smoke-infused air. Cahey glanced through the window. The van was gone. As he walked towards the door, he contemplated logging out of his Windows account, but decided against it. Bypassing that layer of security would be a trivial exercise, and it wouldn’t do the government much good anyway, given the fact that everything of interest was now encrypted. He opened the door. On his porch stood five serious-looking men in suits. “Stephen Cahey? We have a warrant to search the premises.”


Agent Jack Trallis looked at the machine he had been ordered to process. It was a pretty standard Dell laptop with a dual-core CPU and a 15-inch screen that was covered with fingerprints. “God, do I hate those glossy displays”, he muttered to himself. He was alone in the room; the other agents were in the living room questioning the suspect. Trallis noticed the prominent TrueCrypt icon on the machine’s desktop. “Uh oh. Strong encryption.” He fixed his eyes on the taskbar at the bottom of the screen. There was a row of oversized, unlabeled icons that reminded him of the Hackintosh he had once built for his girlfriend. The guy’s laptop was running Windows 7. There was still a chance.

He located the Documents folder, opened its Properties window, and clicked on the “Previous Versions” tab. Just as he thought, there were five previous versions of the folder – “shadow copies” created regularly by the operating system as part of the System Restore mechanism. As these snapshots were prepared silently in the background and stored on a hidden disk volume, few users were aware of them. Agent Trallis was smiling. The good guys from Redmond were going to make his job easy again.

He selected one of the snapshots and clicked Open. An Explorer window popped up, showing the contents of the Documents folder exactly as it had appeared three days ago. “This is too funny”, he thought. There was a subfolder labeled Project Foxhunt full of scanned documents and audio files. Trallis grabbed his radio. “Sir”, he called out to his commanding officer, “I’ve got something you might want to have a look at.”

For technical information on Volume Shadow Copy, read What you should know about Volume Shadow Copy/System Restore in Windows 7 & Vista


An audiophile’s look at the audio stack in Windows Vista and 7

If you are an audiophile who uses a PC as a source in your audio system, you’re probably aware of the fact that Windows Vista introduced a brand-new audio engine to replace the much hated KMixer of Windows XP. In my opinion, there are a few reasons why audiophiles should be happy with this change:

  • The new audio stack automatically upconverts all streams to a 32-bit floating-point sample depth (the same that is used in professional studios) and mixes them with the same precision. Because of the amount of headroom that comes with using 32-bit floats, there is no more clipping when playing two samples at the same time. There is also no loss of resolution when you lower the volume of a stream (see below).
  • The Vista/Win7 audio engine automatically feeds your sound card with the highest-quality output stream that it can handle, which is usually 24 bits per sample. Perhaps you’re wondering why you should care, given that most music uses only 16 bits per sample. Suppose you’re playing a 16-bit song with a digital volume control set to 10%. This corresponds to dividing each sample by 10. Now let’s assume the song contains the following two adjacent samples: 41 and 48. In an ideal world, after the volume control we would get 4.1 and 4.8. However, if the output stream has a 16-bit depth just like the input stream, then both output samples will have to be truncated to 4. There is now no difference between the two samples, which means we have lost some resolution. But if we can have an output stream with 24 bits per sample, for each 16-bit level we get 2^8 = 256 additional (“fractional”) levels, so we can still preserve the difference between the two attenuated samples. In fact, we get ≈4.0977 and ≈4.7969, which is within 0.07% of the “ideal” samples of 4.1 and 4.8.
  • Don’t you hate it when you change the volume in your movie player or instant messaging software and instead of changing its own volume, it changes your system volume? Or have you ever used an application with its own poorly implemented volume control (iTunes, I’m pointing at you!)? Well, these abominations should now be behind us. In Vista and Win7, each application gets its own audio stream (or streams) and a separate high-quality volume control, so there should no longer be any reason for application vendors to mess with the system volume or roll their own and botch the job.
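
The numbers in the second point are easy to verify. A quick sketch, using truncation (rounding toward zero) as the quantization step:

```python
import math

def attenuate(sample, volume, fractional_bits):
    """Scale a sample, then truncate it to 1/2**fractional_bits steps."""
    step = 2 ** fractional_bits
    return math.floor(sample * volume * step) / step

# 16-bit output: no fractional levels, so both samples collapse to 4
print(attenuate(41, 0.10, 0), attenuate(48, 0.10, 0))  # 4.0 4.0
# 24-bit output: 8 fractional bits preserve the difference
print(attenuate(41, 0.10, 8), attenuate(48, 0.10, 8))  # 4.09765625 4.796875
```

With no fractional bits the two attenuated samples become indistinguishable; with 8 extra bits they stay distinct, which is the whole point of feeding the device a 24-bit stream.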

So Windows Vista and Windows 7 upconvert all your samples to 32-bit floats and mix them with 32-bit precision into an output stream that, by default, has the highest bit depth that your hardware can handle. The output bit depth is customizable; you can change it in the properties of your audio device. If you change it e.g. to 16 bits, the audio engine will still use 32-bit floats for internal processing — it will just downconvert the resulting stream to 16 bits before sending it to your device.

Now, what about the sample rate? You can set the output sample rate in the audio device properties window, but is there also some internal sample rate that the Windows audio engine uses regardless of your setting? For example, does it upsample your 44.1 kHz songs to 96 or 128 kHz? Unlike the upconverting from 16-bit integers to 32-bit floats (which should be completely lossless), this could potentially introduce some distortion as going from 44.1 kHz to 96 or 128 kHz requires at least some interpolation.

I couldn’t find the answer to this question anywhere, so I wrote to Larry Osterman, who developed the Vista and Win7 audio stacks at Microsoft. His answer was that the sample rate that the engine uses is the one that the user specifies in the Properties window. The default sample rate is chosen by the audio driver (44.1 kHz on most devices). So if your music has a sample rate of 44.1 kHz, you can choose that setting and no sample rate conversion will take place. (Of course, any 48 kHz and higher samples will then be downsampled to 44.1 kHz.)
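To see why a sample rate change is less innocent than the bit-depth conversion, here is a deliberately naive linear-interpolation resampler (real resamplers use far more sophisticated filters, but the need to interpolate is the same):

```javascript
// Naive sample rate converter: linear interpolation between input samples.
function resample(input, fromRate, toRate) {
  const output = [];
  const outLength = Math.floor(input.length * toRate / fromRate);
  for (let i = 0; i < outLength; i++) {
    const pos = i * fromRate / toRate; // fractional position in the input
    const left = Math.floor(pos);
    const right = Math.min(left + 1, input.length - 1);
    const frac = pos - left;
    output.push(input[left] * (1 - frac) + input[right] * frac);
  }
  return output;
}

// Upsampling invents samples that were never in the input:
const up = resample([0, 1, 0, -1], 44100, 88200);
console.log(up); // [0, 0.5, 1, 0.5, 0, -0.5, -1, -1]

// A 1:1 "conversion" is the only one that leaves the data untouched:
console.log(resample([1, 2, 3, 4], 44100, 44100)); // [1, 2, 3, 4]
```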

There is some interesting technical information on the Windows Vista audio stack in this Channel9 video.

→ 25 CommentsTags:

Setting up cross-domain tracking of e-commerce transactions with Google Analytics and FastSpring

The problem

You have a website running Google Analytics (with the latest tracking code). You are selling widgets using a 3rd party online store provided by FastSpring.

You would like to open Google Analytics, display a list of all your sales, and be able to see where your paying customers are coming from. You want to know what websites they are being referred from, what search keywords they are using to find your site, and which pages on your site they are landing on. You also want to know which keywords (organic or paid) and which landing pages are earning you the most money.

The non-solution

FastSpring boasts easy Google Analytics integration, so getting your e-commerce transactions to show up in your Analytics reports is a piece of cake. Pretty much all you need to do is enter your Analytics profile number in the FastSpring management panel and set “E-commerce website” to “Yes” in your profile settings, if you haven’t done so already.

You can now see all your sales in Analytics. However, when you start looking at the referrals and keywords that led to the sales, you will get a nasty surprise: all your sales appear as though they were referred from your own site! Instead of “cheap widgets”, or whatever keywords your customers use to find your site, you’ll see “(direct)”. Instead of Google, you’ll see your own domain, or whatever you called your silly widget site. This will not stand!

Here’s why Analytics loses track of the referral information: The Google Analytics code on your website’s pages recognizes visitors by a cookie which stores a unique visitor ID. However, this cookie is assigned to your domain. When a visitor leaves your site and enters your FastSpring store (which lives on FastSpring’s domain), the Google Analytics code FastSpring added to your store pages does not have access to that cookie. So what it does is create a brand-new tracking cookie. And the only referral information that cookie gets is the page on your site that linked to the store, which is why the transaction will be shown as having been referred from your own site.

The proper solution

Step 1
First, change the Google Analytics code on all your pages to:

<script type="text/javascript">
var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www.");
document.write(unescape("%3Cscript src='" + gaJsHost + "' type='text/javascript'%3E%3C/script%3E"));
</script>
<script type="text/javascript">
var pageTracker = _gat._getTracker("UA-XXXXXX-1");
pageTracker._setAllowHash(false);
pageTracker._trackPageview();
</script>

Of course, you should replace “UA-XXXXXX-1” with your actual profile ID. If your website consists of multiple subdomains that you want to track under a single Analytics profile, you should also add this line (with “” replaced by your own domain):

pageTracker._setDomainName("");

This line will ensure that GA will use only one cookie (for and all its subdomains) rather than a separate cookie for each subdomain.

It is critical that you use the exact same code on all your pages. Google Analytics is very sensitive and will set a new cookie when it detects the slightest change in the tracking code from one page to another. When this happens, you lose information on what your customer was doing before.

Google’s official help tells you to use pageTracker._setDomainName(“none”) instead of pageTracker._setAllowHash(false), but this solution is not recommended by the gurus at the Analytics Help forum. Caleb Whitmore says:

setDomainName(“none”) will make GA set the cookies to the full, EXPLICIT domain of the current hostname. Thus, if someone is on “” and then clicks to a page on “” they’ll get new cookies on the “www” domain, or vice-versa.

Step 2
Add the exact same code (with the exception of the _setDomainName line) to your FastSpring store pages. First, you need to disable Google Analytics tracking in the FastSpring management panel. FastSpring does not let you customize the tracking code and they use a very old version of the code anyway (from back when Google Analytics was known as Urchin).

You will need to insert the proper GA code into your store page template. In the FastSpring panel, go to “Styles” and create a new style (if you haven’t created one before). You will need a ZIP file with your current style files. You can get this file if you e-mail FastSpring. Unzip the file, then open a file named “window.xhtml”. Add the tracking code described above (with the exception of the _setDomainName line) right after the <body> tag.

Step 3
Change all the links leading from your site to your FastSpring store so that they look like this (with the href pointing at your actual store URL):

<a href=""
onClick="pageTracker._link(this.href, true); return false;">Buy my widget</a>

This code will take all the information stored in the GA tracking cookie for your domain and append it to the URL query string used to navigate to your online store. The GA tracking code on the target store page will retrieve all this data from the query string and put it into the new cookie (assigned to the FastSpring store’s domain) that it will create. That way, all the information about your visitor’s previous activities will be preserved.

Remember that for the new cookie to get this information, every single link to your store must look as shown above.
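Conceptually, what _link does is copy the Google Analytics cookie values into the query string before navigating. The sketch below is illustrative only: the store URL is a placeholder, and the real _link appends additional parameters (such as a checksum) beyond the four cookies shown here.

```javascript
// Simplified sketch of what pageTracker._link(href, true) does conceptually:
// append the GA cookies (__utma, __utmb, __utmc, __utmz) to the target URL so
// the tracking code on the destination domain can rebuild them in a new cookie.
function buildCrossDomainUrl(href, cookies) {
  const params = ["__utma", "__utmb", "__utmc", "__utmz"]
    .filter(function (name) { return cookies[name] !== undefined; })
    .map(function (name) { return name + "=" + encodeURIComponent(cookies[name]); })
    .join("&");
  return href + (href.indexOf("?") === -1 ? "?" : "&") + params;
}

var url = buildCrossDomainUrl("", {
  __utma: "123.456.789",
  __utmz: "utmcsr=google|utmcmd=organic"
});
console.log(url); // the visitor's history rides along in the query string
```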

Step 4
Now the GA code on your store pages has access to all the interesting information on search keywords, landing pages, etc. The only remaining problem is that no transactions are being recorded, since you disabled FastSpring’s built-in Analytics support in step 2. We will fix that now.

In the FastSpring management panel, go to “External Tracking”. This is where you previously disabled Google Analytics tracking. Now you are going to turn the tracking back on, but with the updated Analytics code.

Click “Add Custom Tracking Method”. Name it something like “Google Analytics (new code)”. Choose “Free-form XHTML Fragment”. Paste the following code into the text box:

<script type="text/javascript">
try {
pageTracker._addTrans(
"#{order.reference}",          // Order ID
"",                            // Affiliation
"#{order.subTotal.textValue}", // Total
"#{}",      // Tax
"",                            // Shipping
"#{}",       // City
"#{order.address.region}",     // State
"#{}"     // Country
);
<repeat var="item" value="#{order.allItems}">
pageTracker._addItem(
"#{order.reference}",           // Order ID
"#{item.productName}",          // SKU
"#{item.productName}",          // Product Name
"",                             // Category
"#{item.priceTotal.textValue}", // Price
"#{item.quantity}"              // Quantity
);
</repeat>
pageTracker._trackTrans();
} catch(err) { }
</script>
Whereas the code you added to window.xhtml (your store Style) is inserted into every page of your FastSpring store, the above code appears only on the order confirmation page (near the bottom). It sends some basic transaction data to Analytics, then it takes each order item and sends information about it to Analytics as well (the <repeat> loop).

A few comments are in order:

  • FastSpring has no SKUs or product IDs, but Analytics needs this parameter — otherwise it does not record all the line items in an order correctly. So the best solution is to pass the product name as the SKU.
  • #{order.subTotal.textValue} is passed as the “Total”, because the order amount before taxes and shipping is probably what you are interested in. If you’d rather see the total amount paid by the customer in your Analytics reports, you can change it to #{}.
  • “Shipping” is left empty because FastSpring does not provide that variable.

Neither the above variables nor the <repeat> construct are documented anywhere in the manuals that FastSpring provides to sellers. I got this information from Ryan Dewell, the VP of Software Development at FastSpring. He has mentioned to me that FastSpring will be updating its Google Analytics support, so it is possible that full Analytics integration will eventually be possible without all the hacking described here.

$latex i\hbar\frac{\partial}{\partial t}\left|\Psi(t)\right>=H\left|\Psi(t)\right>$

→ 6 CommentsTags:

How to clean eyeglasses

Picture of Ludwik dishwashing liquid

I’ve worn eyeglasses since I was 3 years old. A few years ago, I started getting annoyed with the dust and grease that keep building up on my glasses. Maybe it’s old-age grumpiness kicking in, or maybe it’s because I started to use LCD displays whose immaculate picture quality sensitized me to any blurriness between the LCD matrix and my retina.

Anyway, I started cleaning my glasses regularly. The problem was that I couldn’t figure out a good cleaning technique. First, I tried washing my glasses under running water and then drying them with towels. That didn’t work so well on the grease, and the towels (cloth and paper alike) would leave tons of lint on my glasses. So I bought a professional microfiber cloth, the same kind that I use for cleaning photographic lenses, and some isopropyl alcohol (isopropanol), the stuff they put in those overpriced “lens cleaning kits” you’ll find in the photography section of your electronics store. That was a lot better than my previous technique, but the alcohol would not dissolve all the grease, which was impossible to remove completely with the cloth alone.

Well, I’ve finally figured it out. (Actually, I wish I could claim the credit: I learned about this technique from my optician.) The answer is dishwashing liquid (AKA dish soap).

  1. Rinse your glasses under running water.
  2. Put a bit of dishwashing liquid on one of the lenses, then use your fingers to gently rub the liquid on both sides of both lenses.
  3. Rinse glasses again to remove the dish soap. You don’t need to use your fingers to get the dish soap off – just use running water. You should be looking at perfectly clean lenses with a few drops of water on them. If there’s any grease or other spots, repeat steps 2 and 3.
  4. Use a microfiber cloth to gently clean off remaining water drops. Use light touches – there might be small pieces of dirt on the cloth and if you rub it too hard, they might scratch the lenses. The microfiber cloth leaves no fluff, so your glasses should be perfectly clean.

It’s really a perfect combination. The dish soap dissolves all the grease, so you don’t get any smudges when you use the microfiber cloth. The microfiber cloth removes the remaining water drops and (non-greasy) stains made by evaporating water, and leaves no lint. The result: pristine-looking glasses in one minute.

What’s more, this technique is fairly convenient to use. Many online how-tos recommend special eyeglass-cleaning sprays or vinegar, which may be expensive or unavailable. On the other hand, most people have dish soap in their kitchen, so the only special accessory you need is a microfiber cloth, which costs $7 (for a top-quality one) and can be re-used for years. And even that isn’t really necessary, as paper towels or tissues work almost as well.

→ 54 CommentsTags:

Google Toolbar shows incorrect PageRank for HTTPS pages

If you are a webmaster looking to improve your Google ranking by arranging links from high-PageRank sites, you should be wary of pages whose URL starts with “https” (secure pages). As I have found, the Google Toolbar reports incorrect (usually inflated) PageRank for such pages, so that a secure page which appears to have a PageRank of 8 may in fact have a much lower PR or may not even be indexed at all.

When you visit a secure (https) page, the PageRank you see in the Google Toolbar is not the PageRank of the page — instead, the toolbar shows you the PR of the root page of the domain.

  • The user info page of Jose Bolanos, one of the many developers at the Mozilla add-on community website, is served over https. In both IE and Firefox, it appears to have a PageRank of 8. Now, if that were true, Jose would truly have a reason to smile. Unfortunately for him, what the toolbar is actually showing is the PageRank of the homepage of the add-ons site. If you try any other secure page on the site, you will see that they all appear to have a PR of 8.
  • The Google AdSense login page appears to have a PR of 10 not because it has so many incoming links or because Google has given it an artificial boost — it’s just that the Google Toolbar is reporting the PR of the root page of
  • This obscure page at Michigan State University and all its subpages appear to have a PR of 8. Of course, by now we know that what we’re really seeing is the PR of the MSU homepage (; the actual PR of the page is unknown.

The bug seems to affect the latest versions of the Google Toolbar for IE and Firefox. However, I have seen an earlier version of the toolbar that did not suffer from this problem, so I believe the issue is version-dependent.

→ 7 CommentsTags:

Should we care about ABX test results?

Policy no. 8 in the Terms of Service of the respected audiophile community Hydrogenaudio states:

8. All members that put forth a statement concerning subjective sound quality, must — to the best of their ability — provide objective support for their claims. Acceptable means of support are double blind listening tests (ABX or ABC/HR) demonstrating that the member can discern a difference perceptually, together with a test sample to allow others to reproduce their findings.

What a breath of fresh air. Other audio forums are full of snake-oil-peddling and Kool-Aid-drinking evangelists who go on and on about how replacing $200 speaker wires with $400 speaker wires “really opened up the soundstage and made the upper-midrange come alive”. The people at Hydrogenaudio know that such claims demand proper scientific evidence. How nice to see that they dismiss subjective nonsense and rely instead on the ultimate authority of ABX tests, which really tell us what makes a difference and what doesn’t.

Except that ABX tests don’t measure what really matters to us. ABX tests tell us whether we can hear a difference between A and B. What we really want to know, however, is whether A is as good as B.


“Wait a second!”, I hear you exclaim. “Surely if I cannot tell A from B, then for all intents and purposes, A is as good as B and vice versa. If you can’t see the difference, why pay more?”

Actually, there could be tons of reasons. To take a somewhat contrived example, suppose I magically replaced the body of your car with one that were less resistant to corrosion, leaving all the other features of your vehicle intact. Looking at the car and driving it, you would not notice any difference. Even if I gave you a chance to choose between your original car and the doctored one, they would seem identical to you and you could choose either of them. However, if you were to choose the one I tampered with, five years later your vehicle’s body would be covered in spots of rust.

The obvious lesson here is that “not seeing a difference” does not guarantee that A is as good as B. Choosing one thing over another can have consequences that are hard to detect in a test because they are delayed, subtle, or so odd-ball that no one even thinks to record them during the test.

But how is this relevant to listening tests? Assuming that music affects us through our hearing, how could we be affected by differences that we cannot hear?

In his fascinating book The Burning House: Unlocking the Mysteries of the Brain, Jay Ingram describes the case of a 49-year-old woman suffering from a condition called hemispatial neglect (the case was researched by neuropsychologists John Marshall and Peter Halligan). Patients with hemispatial neglect are unable to perceive one (usually the left) side of the objects they see. When asked to copy drawings, they draw only one side; when reading out words, they read them only in half (e.g. they read simile as mile).

In Marshall and Halligan’s experiment, the woman was given two simple drawings showing two houses. In one of the drawings, the left side of the house was covered in flames and smoke; the houses looked the same otherwise. Since the flames were located on the left side, the patient was unable to see them and claimed to see no difference between the drawings. When Marshall and Halligan asked her which of the houses she would rather live in, she replied — rather unsurprisingly — that it was a silly question, given that the houses were identical.

However, when the experimenters persuaded her to make a choice anyway, she picked the flameless house 14 out of 17 times, all the while insisting that both houses looked the same.

Marshall and Halligan’s experiment shows (as do other well-known psychological experiments, including those pertaining to subliminal messages) that it is possible for information to be in a part of the brain where it is inaccessible to conscious processes. This information can influence one’s state of mind and even take part in decision-making processes without one realizing it.

If people can be affected by information that they don’t even know is there, then who says they cannot be affected by inaudible differences between an MP3 and a CD? Failing an ABX test tells you that you are unable to consciously tell the difference between two music samples. It does not mean that the information isn’t in your brain somewhere — it just means that your conscious processes cannot access it.

So the fact that you cannot tell the difference between an MP3 and a CD in an ABX test does not mean that an MP3 is as good as a CD. Who knows? Maybe listening to MP3s causes more fatigue in the long run. Maybe it makes you get bored with your music more quickly. Or maybe the opposite is true and MP3s are actually better. We can formulate and test all sorts of plausible hypotheses — the point is, an ABX test which shows no audible difference is not the end of the discussion.


I have shown that the lack of audible differences between A and B in an ABX test does not imply that A is as good as B. Before you read this post as an apology for lossless audio formats, here is a statement that will surely upset hard-core audiophiles:

The fact that you can tell the difference between an MP3 and a CD in an ABX test does not mean that the MP3 is worse than a CD.

First of all, the differences between MP3s encoded at mainstream bitrates (128 kbps and 192 kbps) and original recordings are really subtle and can be detected only under special conditions (quiet environment, good equipment, full listener concentration, direct comparisons of short samples). Because the differences are so tiny, we cannot automatically assume that it is the uncompressed version that sounds better. Subtle compression artifacts such as slightly reduced sharpness of attacks on short, loud sounds may in fact be preferred by some listeners in a direct comparison.

Secondly, even if we found that the uncompressed version is preferred by listeners, that wouldn’t necessarily mean that it is better. People prefer sitting in front of the TV to exercising, but the latter might make them feel much better overall. If it were discovered, for example, that compressed music is less tiring to listen to (this is of course pure speculation), then that fact might outweigh any preference for uncompressed sound in blind tests.


The relevance of ABX tests to the lives of music lovers is questionable. Neither does the absence of audible differences imply equal quality, nor does the presence of audible differences imply that the compressed version is inferior. Rather than being the argument to end all debate, the results of ABX tests are just one data point and the relative strengths of various audio formats may well be put in a new light by further research.

→ 21 CommentsTags:

Blind-testing MP3 compression

Among music listeners, the use of lossy audio compression technologies such as MP3 is a controversial topic. On one side, we have the masses who are glad to listen to their favorite tunes on $20 speakers connected to their PC’s onboard audio device and couldn’t care less what bitrate MP3s they get as long as the sound quality is better than FM radio. On the other side, we have the quasi-audiophiles (not true audiophiles, of course, as those would never touch anything other than a high-quality CD or LP player properly matched to the amplifier) who stick to lossless formats like FLAC due to MP3’s alleged imperfections.

If I considered myself part of either group, my life would be easy, as I would know exactly what to do. Unfortunately, I fall somewhere in between. I appreciate music played through good equipment and I own what could be described as a budget audiophile system. On the other hand, I am not prepared to follow the lead of the hard-core lossless format advocates, who keep repeating how bad MP3s sound, yet do not offer anything in the way of objective evidence.

So, me being me, I had to come to my own conclusions about MP3 compression. Is it okay for me to listen to MP3s and if so, what bitrate is best? To answer these questions, I spent many hours doing so-called ABX listening tests.

What is an ABX test?

An ABX test works like this: You get four samples of the same musical passage: A, B, X and Y. A is the original (uncompressed) version. B is the compressed version. With X and Y, one is the original version (same as A), the other is the compressed version (same as B), and you don’t know which is which. You can listen to each version (A, B, X or Y) as many times as you like. You can select a short section of the passage and listen to it in each version. Your objective is to decide whether X = A (and Y = B) or X = B (and Y = A). If you can get a sufficient number of right answers (e.g. 7 times out of 7 or 9 times out of 10), you can conclude that there is an audible difference between the compressed sample and the original sample.

What I found

  1. The first thing I found was that telling the difference between a well-encoded 128 kbps MP3 and a WAV file is pretty damn hard. Since 128 kbps is really the lowest of the popular MP3 bitrates and it gets so much bad rap on forums like Head-Fi, I expected that it would fail miserably when confronted with the exquisite work of artists like Pink Floyd or Frank Sinatra. Not so. Amazingly, the Lame encoder set at 128 kbps (ABR, high quality encoding) held its own against pretty much anything I’d throw at it. The warm, deeply human quality of Gianna Nannini’s voice in Meravigliosa Creatura, the measured aggression of Metallica’s Blitzkrieg, the spacious guitar landscapes of Pink Floyd’s Pulse concert — it all sounded exactly the same after compression. There were no changes to the ambiance of the recording, the quality of the vocals, the sound of vowels and consonants, the spatial relationships between the instruments on the soundstage, or the ease with which individual instruments could be picked out.
  2. That said, MP3s at 128 kbps are not truly transparent. With some training, it is possible to distinguish them from original recordings in blind listening tests. My trick was to look for brief, sharp, loud sounds like beats or certain types of guitar sounds — I found that compression takes some of the edge off them. Typically, the difference is so subtle that successful identification is only possible with very short (a few seconds long) samples, a lot of concentration and a lot of going back and forth between the samples. Even then, the choice was rarely obvious for me; more often, making the decision felt like guessing. Which of the identical bass riffs I just heard seemed to carry more energy? A few times I was genuinely surprised that I was able to get such high ABX scores after being so unsure of my answers.
  3. With some effort, it is possible to find passages that make the difference between 128 kbps MP3 and uncompressed audio quite obvious. For me, it was just a matter of finding a sound that was sharp enough and short enough. In David Bowie’s Rock ‘n Roll Suicide, I used a passage where Bowie sings the word “song” in a particular, Dylanesque way (WAV file). Another example is a 1.2-second-long sample from Thom Yorke’s Harrowdown Hill (WAV file). The second beat in the sample is accompanied by a static-like click (clipping) that is considerably quieter in the compressed version. More samples that are “difficult” for the MP3 format can be found on the Lame project page (I found the “Castanets” sample especially revealing).
  4. What about higher bitrates? As I increased the bitrate, the differences that were barely audible at 128 kbps became inaudible and the differences that were obvious became less obvious.
    • At 192 kbps, the Bowie and Yorke samples were still too much of a challenge and I was able to reliably tell the MP3 from the original, though with much less confidence and with more going back and forth between the two versions.
    • At 256 kbps (the highest bitrate I tested), I was not able to identify the MP3 version reliably — my ABX results were 7/10, 6/10 and 6/7, which can be put down to chance.


Obviously, the results I got apply to my particular situation. If you have better equipment or better hearing, it is perfectly possible that you will be able to identify 256 kbps MP3s in a blind test. Conversely, if your equipment and/or hearing is worse, 192 kbps or even 128 kbps MP3s may sound transparent to you, even on “difficult” samples.

Test setup

  • Lame MP3 encoder version 3.98.2. I used Joint Stereo, High Quality, and average bitrate encoding (ABR).
  • Foobar2000 player with ABX plugin. I used ReplayGain to equalize the volume between the MP3 and the original file — otherwise I found it too easy to tell the difference in ABX tests, since MP3 encoding seems to change the volume of the track somewhat.
  • Auzentech X-Meridian 7.1 — a well-respected audiophile-quality sound card with upgraded LM4562 op-amps.
  • RealCable copper jack-RCA interconnect.
  • Denon PMA-350SE — an entry-level audiophile integrated amplifier designed in England.
  • Sennheiser HD 25-1 II, top-of-the-line closed headphones with stock steel cable.

When I write that there was an audible difference in an ABX test, I mean that I got 7/7 or 9/10 correct answers without repeating the test.
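These thresholds are not arbitrary: a quick binomial calculation shows how unlikely each score is under pure guessing (a 50% chance of a correct answer per trial):

```javascript
// Probability of getting at least `correct` answers out of `trials` ABX
// trials by pure guessing (binomial tail with p = 0.5 per trial).
function choose(n, k) {
  let result = 1;
  for (let i = 1; i <= k; i++) result = result * (n - i + 1) / i;
  return result;
}
function pValue(correct, trials) {
  let p = 0;
  for (let k = correct; k <= trials; k++) {
    p += choose(trials, k) * Math.pow(0.5, trials);
  }
  return p;
}

console.log(pValue(7, 7).toFixed(4));  // 0.0078 — very unlikely by chance
console.log(pValue(9, 10).toFixed(4)); // 0.0107 — very unlikely by chance
console.log(pValue(7, 10).toFixed(4)); // 0.1719 — plausibly just guessing
```

This is also why the 7/10, 6/10 and 6/7 results at 256 kbps can be put down to chance: each of them is quite likely to occur by guessing alone.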


If my goal was to use an MP3 bitrate that is indistinguishable from the original in a blind listening test, I would use 256 kbps, since that is the bitrate which I was unable to identify in a reliable way, despite repeated attempts on a variety of samples (including the “difficult” samples posted on the Lame website).

Whether I will actually standardize on 256 kbps, I’m not sure. The fact that a 192 kbps MP3 can be distinguished from the original in a contrived test (good equipment, quiet environment, high listener concentration, specially selected samples) does not mean it is unsuitable for real-world scenarios. Sure, at 192 kbps the music is not always identical to the original, but judging by my experiments, the difference affects less than 1% of my music (in a 100-second sample, more than 99 seconds would probably be transparent). Even if all I did was listen to this tiny proportion of my music, I would be in a position to perceive the difference less than 1% of the time (what percent of the time do I listen to music in a quiet environment? what percent of the time am I really focused on the music as opposed to other things I’m doing?). Besides, there is the rarely-posed question of whether “different” necessarily means “inferior” — it is quite possible that subtle compression artifacts might actually improve the perceived quality of music in some cases.

→ 23 CommentsTags: