Why I hate Microsoft (2) - The not-so-good, the bad and the ugly
“… it is easy to be blinded to the essential uselessness of them by the sense of achievement you get from getting them to work at all. In other words … their fundamental design flaws are completely hidden by their superficial design flaws.” – The Hitchhiker’s Guide to the Galaxy, on the products of the Sirius Cybernetics Corporation.
Let’s be honest: there’s no such thing as bug-free software. Initial versions of programs may occasionally crash, fail to de-allocate memory, or encounter untested conditions. Developers may overlook security holes, users may do things nobody thought of, and not all systems are identical. Software developers are only human, and they make mistakes now and then. It happens. But of all major software vendors Microsoft has the worst record by far when it comes to the quality of their products in general.
Outlining the battle field
Microsoft boasts a rather extensive product range, but in fact there’s less here than meets the eye. Microsoft has forever been selling essentially the same software over and over again, in a variety of colorful new wrappers.
Microsoft products can be divided into three categories: applications, operating systems, and additional server products. The applications include the Microsoft Office suite, but also Internet Explorer, Media Player, Visio, FrontPage, etc. The operating systems comprise desktop and server versions of Windows. On the desktop we find Windows 9x/ME, NT Workstation, Windows 2000, XP and Vista, and at the server end we have Windows NT Server, Windows Server 2003 and varieties such as Datacenter. The additional server products, e.g. Internet Information Server (IIS) and SQL Server, run on top of one of the Windows server products. They add services (e.g. web server or database server functions) to the basic file, print and authentication services that the Windows server platform provides.
Two different Windows families
Windows on the desktop comes in two flavors: the Windows 9x/ME product line, and the Windows NT/2000/XP/Vista product line. The different versions within one product line are made to look a bit different, but the difference is in the details only; they are essentially the same. Windows ‘95, ‘98 and ME are descended from DOS and Windows 3.x, and contain significant portions of old 16-bit legacy code. These Windows versions are essentially DOS-based, with 32-bit extensions. Process and resource management, memory protection and security were added as an afterthought and are rudimentary at best. This Windows product line is totally unsuited for applications where security and reliability are an issue. It is completely insecure, e.g. it may ask for a password but it won’t mind if you don’t supply one. There is no way to prevent the user or the applications from accessing and possibly corrupting the entire system (including the file system), and each user can alter the system’s configuration, either by mistake or deliberately. The Windows 9x/ME line primarily targets consumers (although Windows ‘95 marketing was aimed at corporate users as well). Although this entire product line was retired upon the release of Windows XP, computers running Windows ‘98 or (to a lesser degree) Windows ME are still common.
The other Windows product line includes Windows NT, 2000, XP and Vista, and the server products. This Windows family is better than the 9x/ME line and at least runs new (i.e. post-DOS) 32-bit code. Memory protection, resource management and security are a bit more serious than in Windows 9x/ME, and they even have some support for access restrictions and a secure filesystem. That doesn't mean that this Windows family is anywhere near as reliable and secure as Redmond's marketeers claim, but compared to Windows 9x/ME its additional features at least have the advantage of being there at all. But even this Windows line contains a certain amount of 16-bit legacy code, and the entire 16-bit subsystem is a direct legacy from Microsoft's OS/2 days with IBM. In short, all 16-bit applications share one 16-bit subsystem (just as with OS/2). There's no internal memory protection, so one 16-bit application may crash all the others and the entire 16-bit subsystem as well. This may create persistent locks from the crashed 16-bit code on 32-bit resources, and eventually bring Windows to a halt. Fortunately this isn't much of a problem anymore now that 16-bit applications have all but died out.
While Windows has seen a lot of development over the years, relatively little has really improved. The new features in new versions of Windows all show the same half-baked, patchy approach. For each fixed problem, at least one new problem is introduced (and often more than one). Windows XP for example comes loaded with more applications and features than ever before. While this may seem convenient at first sight, the included features aren't as good as those provided by external software. For example, XP insists on supporting DSL ("broadband Internet") networking, scanners and other peripherals with the built-in Microsoft code instead of relying on third-party code. So you end up with things like DSL networking that uses incorrect settings (and no convenient way to change them), scanner support that won't let you use your scanner's photocopy feature, or a digital camera interface that will let you download images from the camera but won't let you use its webcam function. Wireless (WiFi) network cards are even more of a problem: where manufacturers could include their own drivers and client manager software in previous versions of Windows, users are now reduced to using XP's native WiFi support. Unfortunately XP's WiFi support is full of problems that may cause wireless PCs to lose their connection to the access point with frustrating regularity. It also lacks extra functions (such as advanced multiple-profile management) that manufacturers used to include in their client software.
Even basic services are affected. Windows 2000 and later have built-in DNS (Domain Name System) caching. DNS is the mechanism that resolves Internet host and domain names (e.g. www.microsoft.com) into the numerical IP addresses used by computers (e.g. 194.134.0.67). Windows' DNS caching remembers resolved hostnames, for faster access and fewer repeated lookups. This would be a nice feature, if it weren't for the blunder that failed DNS lookups get cached by default as well. When a lookup fails (due to a temporary DNS problem) Windows caches the unsuccessful query, and keeps refusing to connect to that host even though the DNS server may be responding properly again a few seconds later. And of course applications (such as Internet Explorer and Outlook) have been integrated in the operating system more tightly than ever before, and more (formerly separate) products have been bundled with the operating system.
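To see concretely why caching failed lookups is such a bad default, consider the following sketch. This is not Windows' actual resolver code, just a minimal, hypothetical illustration in C of what negative caching means: once the first (failed) answer has been cached, every later lookup is answered from the cache for the lifetime of the entry, even though the name server would answer correctly if it were simply asked again. (On a real Windows system the resolver cache, failed entries included, can at least be flushed manually with ipconfig /flushdns.)

/* negcache.c - illustration only, not Windows' resolver.
   Shows what "caching failed lookups" means in practice. */
#include <stdio.h>
#include <string.h>
#include <time.h>

#define NEG_TTL 900                    /* assumed lifetime of a cached failure */

struct entry {
    char   name[256];
    char   address[64];                /* empty string marks a cached failure */
    time_t expires;
};

static struct entry cache[64];
static int cached = 0;

/* Stand-in for a real DNS query: the first attempt fails (a brief server
   hiccup), every later attempt would succeed. */
static int query_dns(const char *name, char *out, size_t outlen)
{
    static int attempts = 0;
    (void)name;
    if (attempts++ == 0)
        return -1;
    snprintf(out, outlen, "194.134.0.67");
    return 0;
}

static const char *resolve(const char *name)
{
    time_t now = time(NULL);
    for (int i = 0; i < cached; i++)
        if (strcmp(cache[i].name, name) == 0 && cache[i].expires > now)
            return cache[i].address[0] ? cache[i].address : NULL;  /* cached failure wins */

    struct entry *e = &cache[cached++ % 64];
    snprintf(e->name, sizeof e->name, "%s", name);
    e->expires = now + NEG_TTL;
    if (query_dns(name, e->address, sizeof e->address) != 0)
        e->address[0] = '\0';          /* the blunder: the failure is cached too */
    return e->address[0] ? e->address : NULL;
}

int main(void)
{
    for (int i = 1; i <= 3; i++) {
        const char *a = resolve("www.example.com");
        printf("lookup %d: %s\n", i, a ? a : "failed");
    }
    return 0;                          /* prints "failed" three times */
}

Drop the line that caches the failure and the second lookup succeeds; that one line is essentially the difference the default setting makes.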
Design flaws common to all Windows versions
All versions of Windows share a number of structural design flaws. Application installation procedures, user errors and runaway applications may easily corrupt the operating system beyond repair. Networking support is poorly implemented. Inefficient code leads to sub-standard performance, and both scalability and manageability leave a lot to be desired. (See also appendix A.) In fact, NT and its successors (or any version of Windows) simply do not offer the functionality, robustness or performance that the UNIX community has taken for granted for decades. They may work well, or they may not. On one system Windows will run for weeks on end, on another it may crash quite frequently. I've attended training sessions at a Microsoft Authorized Education Center, and I was told: "We are now going to install Windows on the servers. The installation will probably fail on one or two systems [they had ten identical systems in the classroom] but that always happens - we don't know why and neither does Microsoft." I repeat, this from a Microsoft Authorized Partner.
Be that as it may… Even without any installation problems or serious crashes (the kind that require restore operations or reinstallations) Windows doesn’t do the job very well. Many users think it does, but they generally haven’t experienced any alternatives. In fact Windows’ unreliability has become commonplace and even proverbial. The dreaded blue screen (popularly known as the Blue Screen of Death or BSOD for short) has featured in cartoons, screen savers and on t-shirts, it has appeared at airports and on buildings, and there has even been a Star Trek episode in which a malfunctioning space ship had to be switched off and back on in order to get it going.
Even if Windows stays up it leaves a lot to be desired. On an old-but-still-good desktop PC (e.g. a 450MHz PII CPU with 256MB RAM, something we could only dream of fifteen years ago) four or five simultaneous tasks are enough to tax Windows’ multitasking capabilities to their limits, even with plenty of core memory available. Task switches will start to take forever, applications will stop responding simply because they’re waiting for system resources to be released by other applications (which may have halted without releasing those resources), or kernel and library routines lock into some unknown wait condition. Soon the whole system locks up entirely or becomes all but unusable. In short, Windows’ process management is as unimpressive as its memory protection and resource management are, and an operating system that may crash entirely when an application error occurs should not be sold as a serious multi-tasking environment. Granted, it does run several processes at once - but not very well. Recent versions of Windows (i.e. XP and Vista) are somewhat better in this respect and more stable than their predecessors, but not spectacularly so. Although they have been patched up to reduce the impact of some of the most serious problems, their multitasking still depends to a large degree on the application rather than on the operating system. That means that a single process may still paralyze the entire system or a process can become impossible to terminate without using dynamite. The basic flaws in the OS architecture remain; a crashing application (e.g. a video player or a communications package) can still lock up the system, crash it into a BSOD or cause a sudden and spontaneous reboot.
Code separation, protection and sharing flaws
Windows is quite fragile, and the operating system can get corrupted quite easily. This happens most often during the installation of updates, service packs, drivers or application software, and the problem exists in all versions of Windows so far. The heart of the problem lies in the fact that Windows can’t (or rather, is designed not to) separate application and operating system code and settings. Code gets mixed up when applications install portions of themselves between files that belong to the operating system, occasionally replacing them in the process. Settings are written to a central registry that also stores vital OS settings. The registry database is basically insecure, and settings that are vital to the OS or to other applications are easily corrupted.
Even more problems are caused by the limitations of Windows' DLL subsystem. A good multi-tasking and/or multi-user OS utilizes a principle called code sharing. Code sharing means that if an application is running n times at once, the code segment that contains the program code (which is called the static segment) is loaded into memory only once, to be used by n different processes which are therefore instances of the same application. Apparently Microsoft had heard about something called code sharing, but obviously didn't really understand the concept and the benefits, or they didn't bother with the whole idea. Whatever the reason, they went and used DLLs instead. DLL files contain Dynamic Link Libraries and are intended to contain library functions only. Windows doesn't share the static (code) segment - if you run 10 instances of Word, the bulk of the code will be loaded into memory 10 times. Only a fraction of the code, e.g. library functions, has been moved to DLLs and may be shared.
The main problem with DLL support is that the OS keeps track of DLLs by name only. There is no adequate signature system to keep track of different DLL versions. In other words, Windows cannot see the difference between one WHATSIT.DLL and another DLL with the same name, although they may contain entirely different code. Once a DLL in the Windows directory has been overwritten by another one, there’s no way back. Also, the order in which applications are started (and DLLs are loaded) determines which DLL will become active, and how the system will eventually crash. There is no distinction between different versions of the same DLL, or between DLLs that come with Windows and those that come with application software. An application may put its own DLLs in the same directory as the Windows DLLs during installation, and may overwrite DLLs by the same name if they exist.
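The name-only resolution is easy to demonstrate with the Win32 API itself. The sketch below (WHATSIT.DLL and SomeFunction are the hypothetical names used in this chapter, not a real library) loads a DLL exactly the way applications do: by file name, with no version or signature check, so whichever copy the search order happens to find first is the one every application gets.

#include <stdio.h>
#include <windows.h>

int main(void)
{
    /* Resolved strictly by name: the application directory, the system
       directory, the PATH and so on are searched in order, and the first
       WHATSIT.DLL found is loaded -- whatever code it happens to contain. */
    HMODULE lib = LoadLibraryA("WHATSIT.DLL");
    if (lib == NULL) {
        printf("WHATSIT.DLL not found (error %lu)\n", GetLastError());
        return 1;
    }

    char path[MAX_PATH];
    GetModuleFileNameA(lib, path, sizeof path);
    printf("Loaded: %s\n", path);      /* which copy did we actually get? */

    /* Symbols are looked up by name only as well. */
    FARPROC fn = GetProcAddress(lib, "SomeFunction");
    printf("SomeFunction is %s\n", fn ? "present" : "missing");

    FreeLibrary(lib);
    return 0;
}

Nothing in this sequence tells the caller which version it got, who shipped it, or whether it is even the library the application was built against; that information simply is not part of the lookup.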
What it boils down to is that the application may add portions of itself to the operating system. (This is one of the reasons why Windows needs to be rebooted after an application has been installed or changed.) That means that the installation procedure introduces third-party code (read: uncertified code) into the operating system and into other applications that load the affected DLLs. Furthermore, because there is no real distinction between system level code and user level code, the software in DLLs that has been provided by application programmers or the user may now run at system level. This corrupts the integrity of the operating system and other applications. A rather effective demonstration was provided by Bill Gates himself who, during a Comdex presentation of the Windows 98 USB Plug-and-Play features, connected a scanner to a PC and caused it to crash into a Blue Screen. "Moving right along," said Gates, "I guess this is why we're not shipping it yet." Nice try, Mr. Gates, but of course the release versions of Windows '98 and ME were just as unstable, and in Windows 2000 and its successors new problems have been introduced. These versions of Windows use a firmware revision number to recognize devices, so an update of a peripheral's firmware may cause that device to be 'lost' to PnP.
Another, less harmful but most annoying, side-effect of code confusion is that different language versions of software may get mixed up. A foreign language version of an application may add to or partially overwrite Windows’ list of dialog messages. This may cause a dialog window to prompt “Are you sure?” in English, followed by two buttons marked, say, “Da” and “Nyet”.
Peripheral drivers also use a rather weak signature system and suffer from problems similar to those of DLLs, albeit to a lesser degree. For example, it's quite possible to replace a printer driver with a similar driver from another language version of Windows and mess up the page format as a result. Printer drivers from different language versions of Windows sometimes contain entirely different code that generates different printer output, but Windows is unaware of the difference. This problem has been addressed somewhat with the release of Windows 2000, but it's still far from perfect.
Mixing up OS and application code: why bundling is bad
Designing an OS to deliberately mix up system and application code fits Microsoft’s strategy of product bundling and integration. The results are obvious: each file operation that involves executable code essentially puts the entire OS and its applications at risk, and application errors often mean OS errors (and crashes) as well. This leads to ridiculous “issues” such as Outlook Express crashing Windows if it’s a “high encryption” version with the locale set to France. Replying to an e-mail message may crash the entire system, a problem which has been traced to one of the DLLs that came with Outlook. (Are you still with me?)
In a well-designed and robustly coded OS something like this could never happen. The first design criterion for any OS is that the system, the applications, and (in a multi-user environment) the users all be separated and protected from each other. Not only does no version of Windows do that by default, it actively prevents you from setting things up that way. The DLL fiasco is just the tip of the iceberg. You can’t maintain or adequately restore OS integrity, you can’t maintain version control, and you can’t prevent applications and users from interfering with each other and the system, either by accident or on purpose.
Integrating applications into the OS is also not a good idea for very practical reasons. First of all it's a mistake from a standpoint of efficiency and reliability. Think of the OS as a delivery van and of the applications as the parcels you want to deliver with it. Imagine that, when you buy your van, it comes with a number of large parcels already in it. These parcels are welded in place so that it is all but impossible to remove them without damaging your van. You have to live with them, drive them around to wherever you go and burn up extra fuel to do so, and quite often these built-in parcels get in the way of the items you wanted to deliver when you bought your van for that purpose. Even worse: when one of the parcels is damaged, quite often your van has to be serviced in order to fix the problem! But when you complain about it, the manufacturer of the van tells you that this actually makes it a much better delivery van. Ridiculous? Yes, of course. It's just as ridiculous as Windows using up valuable system resources for bundled applications that more often than not get in the way of what you need to do, and add points-of-failure to boot.
The second practical issue, related to the first one, is control, from the user’s point of view. A basic operating system allows the user to install, configure and run applications as required, and to choose the right application based on their features and performance, and the user’s preference. Bundling applications in Windows removes their functions from the application level (where the user has control over them) to the operating system (where the user has no control over them). For example, a user application such as a web browser can be installed, configured and uninstalled as necessary, but Internet Explorer is all but impossible to remove without dynamite. The point here is not that Internet Explorer should not be provided (on the contrary; having a web browser available is a convenience much appreciated by most users) but that it should be available as an optional application, to be installed or uninstalled as per the user’s preference without any consequence for how the rest of Windows will function.
Beyond repair
Then there’s Windows’ lack of an adequate repair or maintenance mode. If anything goes wrong and a minor error or corruption occurs in one of the (literally) thousands of files that make up Windows, often the only real solution is a large-scale restore operation or even to reinstall the OS. Yes, you read correctly. If your OS suddenly, or gradually, stops working properly and the components which you need to repair are unknown or being locked by Windows, the standard approach to the problem (as recommended by Microsoft) is to do a complete reinstallation. There’s no such thing as single user mode or maintenance mode to do repairs, nor is there a good way to find out which component has been corrupted in the first place, let alone to repair the damage. (The so-called ‘safe mode’ merely swaps configurations and does not offer sufficient control for serious system repairs.)
Windows has often been criticized for the many problems that occur while installing it on a random PC, which may be an A-brand or clone system in any possible configuration. This criticism is not entirely justified; after all it's not practically feasible to foresee all the possible hardware configurations that may occur at the user end. But that's not the point. The point is that these problems are often impossible to fix or even properly diagnose, because most of the Windows operating system is beyond the users' or administrators' control. This is of course less true for Windows 9x/ME. Because these are essentially DOS products, you can reboot the system using DOS and do manual repairs to a certain degree. With Windows NT and its successors this is generally impossible. Windows 2000, XP and Vista come with an external repair console utility on the CD that allows you some access to the file system of a damaged Windows installation. But that's about it.
The inability to make repairs has been addressed, to a certain degree, in Windows XP, which comes with a 'System Restore' feature that tracks changes to the OS, so that administrators may 'roll back' the system to a state from before the problem occurred. Also, the 'System File Check' feature attempts to make sure that some 1,000 system files are the ones that were originally installed. If a "major" system file is replaced by another version (for example if a Windows XP DLL file is overwritten by a Windows '95 DLL with the same name) the original version will be restored. (Of course this also prevents you from removing things like Outlook Express or Progman.exe, since the specification of what is an important system file is rather sloppy.) Windows Vista takes these features even further, by incorporating transaction-based principles. This improves the chances of successfully rolling back changes that have not yet been committed permanently.
Most of these workarounds are largely beyond the user's control. While some of them may have adverse effects (e.g. System File Check may undo necessary modifications), their effectiveness is inherently limited. There are many fault conditions that prevent the automated repair features from working correctly in the first place. When Windows breaks, the automated features to recover from that fault generally break as well. Also, the number of faults that the automated repair options can deal with is limited. At some point manual intervention is the only option, but that requires the adequate maintenance mode that Windows doesn't have. The inability of a commercial OS to allow for its own maintenance is a good demonstration of its immaturity.
Even so, even the added options for system restore in XP and Vista are an improvement over the previous situation, in that at least a certain amount of recovery is now possible. On the other hand, this illustrates Microsoft’s kludgy approach to a very serious problem: instead of implementing changes in the architecture to prevent OS corruption, they perpetuate the basic design flaw and try to deal with the damage after the fact. They don’t fix the hole in your roof, they sell you a bucket to put under it instead. When the bucket overflows (i.e. the system recovery features are insufficient to solve a problem) you’re still left with a mess.
Wasted resources, wasted investments
The slipshod design of Windows is not only reflected in its flawed architecture. The general quality of its code leaves a lot to be desired as well. This translates not only into a disproportionately large number of bugs, but also into a lot of inefficiency. Microsoft software needs at least three or four times as much hardware to deliver the performance that other operating systems (e.g. Unix) achieve on far less. Likewise, on similar hardware competing products perform much better, or will even run well on hardware that does not meet Microsoft's minimum system requirements.
Inefficient code is not the only problem. Another issue is that most bells and whistles in Microsoft products are expensive in terms of additional hardware requirements and maintenance, but do not increase productivity at all. Given the fact that ICT investments are expected to pay off in increased productivity, reduced cost or both, this means that most “improvements” in Microsoft products over the past decades have been a waste of time from a Return On Investment standpoint. Typical office tasks (e.g. accounting, data processing, correspondence) have not essentially changed, and still take as much time and personpower as they did in the MS-DOS era. However the required hardware, software and ICT staff have increased manifold. Very few of these investments have resulted in proportional increases in profit.
Less than 80 kilobytes of memory in the Apollo Guidance Computer was enough to put men on the moon and safely get them back to Earth. The Voyager deep space probes that sent us a wealth of images and scientific data from the outer reaches of the solar system (and still continue to do so from interstellar space) have on-board computers with only a few tens of kilobytes of memory. An 80C85 CPU with 176 kilobytes of ROM and 576 kilobytes of RAM was all that controlled the Sojourner robot that drove across the surface of Mars and delivered geological data and high-resolution images in full-color stereo. But when I have an 800MHz Pentium III with 256 Megabytes of RAM and 40 Gigabytes of disk space, and I try to type a letter to my grandmother using Windows XP and Office XP, the job will take me forever because my computer is underpowered! And of course Windows Vista won't even run on such a machine…
Server-based or network-based computing is no solution either, mainly because Windows doesn't have any real code sharing capability. If you were to shift the workload of ten workstations to an application server (using Windows Terminal Server, Citrix Server or another ASP-like solution), the server would in theory need ten times the system resources of each of the workstations it replaces to maintain the same performance, not counting the inevitable overhead, which can easily add another 10 or 20 percent.
Then there's the incredible amount of inefficient, or even completely unnecessary code in the Windows file set. Take the 3D Pinball game in Windows 2000 Professional and XP Professional, for example. This game (you'll find it under \Program Files\Windows NT\Pinball) is installed with Windows and takes up a few megabytes of disk space. But most users will never know that it's sitting there, wasting storage and doing nothing productive at all. It doesn't appear in the program menu or control panel, and no shortcuts point to it. The user isn't asked any questions about it during installation. In fact its only conceivable purpose would be to illustrate Microsoft's definition of 'professional'. No wonder Windows has needed more and more resources over the years. A few megabytes doesn't seem much, perhaps, but that's only because we've become used to the enormous footprints of Windows and Windows applications. Besides, if Microsoft installs an entire pinball game that most users neither need nor want, they obviously don't care about conserving resources (which are paid for by the user community). What does that tell you about the resource-efficiency of the rest of their code? Let me give you a hint: results published in PC Magazine in April 2002 show that the latest Samba software surpasses the performance of Windows 2000 by about 100 percent under benchmark tests. In terms of scalability, the results show that Unix and Samba can handle four times as many client systems as Windows 2000 before performance begins to drop off.
Another example is what happened when one of my own clients switched from Unix to Windows (the reason for this move being the necessity to run some web-based accounting package with BackOffice connectivity on the server). Their first server ran Unix, Apache, PHP and MySQL and did everything it had to do with the engine barely idling. On the same system they then installed Windows Server 2003, IIS, PHP and MySQL, after which even the simplest of PHP scripts (e.g. a basic 100-line form generator) would abort when the 30-second execution timeout was exceeded.
Paradoxically, though, the fact that Microsoft products need humongous piles of hardware in order to perform decently has contributed to their commercial success. Many integrators and resellers push Microsoft software because it enables them to prescribe the latest and heaviest hardware platforms in the market. Unix and Netware can deliver the same or better performance on much less. Windows 2000 and XP however need bigger and faster systems, and are often incompatible with older hardware and firmware versions (especially the BIOS). This, and the fact that hardware manufacturers discontinue support for older hardware and BIOSes, forces the user to purchase expensive hardware with no significant increase in return on investment. This boosts hardware sales, at the expense of the “dear, valued customer”. Resellers make more money when they push Microsoft products. It’s as simple as that.
Many small flaws make a big one
Apart from the above (and other) major flaws there’s also a staggering amount of minor flaws. In fact there are so many minor flaws that their sheer number can be classified as a major flaw. In short, the general quality of Microsoft’s entire set of program code is sub-standard. Unchecked buffers, unverified I/O operations, race conditions, incorrectly implemented protocols, failures to deallocate resources, failures to check environmental parameters, et cetera ad nauseam… You name it, it’s in there. Microsoft products contain some extremely sloppy code and bad coding practices that would give an undergraduate some well-deserved bad marks. As a result of their lack of quality control, Microsoft products and especially Windows are riddled with literally thousands and thousands of bugs and glitches. Even many of the error messages are incorrect!
Some of these blunders can be classified as clumsy design rather than mere sloppiness. A good example is Windows' long filename support. In an attempt to allow for long filenames in Windows '9x/ME, Microsoft deliberately broke the FAT file system. They stored the long-name information in deliberately cross-linked directory entries, which is probably one of their dirtiest kludges ever. And if that wasn't enough, they made it legal for filenames to contain whitespace. Because this was incompatible with Windows' own command line parsing (which still expects the old FAT notation), another kludge was needed: filenames containing whitespace have to be enclosed in quotation marks. This confused (and broke) many programs, including many of Microsoft's own that came with Windows.
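The quoting kludge is easy to see from any program that launches another one. In the sketch below the path is a made-up example, but the behaviour itself is documented: given an unquoted command line containing spaces, CreateProcess will try C:\Program.exe first, then C:\Program Files\My.exe, and so on, before it finds the intended program. Only the quoted form is unambiguous, and every application that spawns another one has to remember to add the quotation marks itself.

#include <stdio.h>
#include <windows.h>

int main(void)
{
    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi;

    /* Ambiguous: Windows will try "C:\Program.exe" first, then
       "C:\Program Files\My.exe", and only then the intended program. */
    char unquoted[] = "C:\\Program Files\\My App\\app.exe /install";
    (void)unquoted;

    /* Unambiguous, but only because the caller remembered the quotes. */
    char quoted[] = "\"C:\\Program Files\\My App\\app.exe\" /install";

    if (!CreateProcessA(NULL, quoted, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi)) {
        printf("CreateProcess failed: %lu\n", GetLastError());
        return 1;
    }
    WaitForSingleObject(pi.hProcess, INFINITE);
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return 0;
}

The same quoting rules have to be applied by every shell, installer and script on the platform.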
Another good example is Windows’ apparent case-sensitivity. Windows seems to make a distinction between upper and lower case when handling filenames, but the underlying software layers are still case-insensitive. So Windows only changes the case of the files and directories as they are presented to the user. The names of the actual files and directories may be stored in uppercase, lowercase or mixed case, while they are still presented as capitalized lower case file names. Of course this discrepancy causes no problems in a Windows-only environment. Since the underlying code is essentially case-insensitive, case is not critical to Windows’ operation. However as soon as you want to incorporate Unix-based services (e.g. a Unix-based webserver instead of IIS) you discover that Windows has messed up the case of filenames and directories.
But most of Windows’ errors and glitches are just the result of sloppy work. Of course there is no such thing as bug- free software, but the amount of bugs found in Windows is, to put it mildly, disproportionate. For example, Service Pack 4 for Windows NT 4.0 attempted to fix some 1200 bugs (yes, one thousand two hundred). But there had already been three previous service packs at the time! Microsoft shamelessly admitted this, and even boasted about having “improved” NT on 1200 points. Then they had to release several more subsequent service packs in the months that followed, to fix remaining issues and of course the additional problems that had been introduced by the service packs themselves.
An internal memo among Microsoft developers mentioned 63,000 (yes: sixty-three thousand) known defects in the initial Windows 2000 release. Keith White, Windows Marketing Director, did not deny the existence of the document, but claimed that the statements therein were made in order to “motivate the Windows development team”. He went on to state that “Windows 2000 is the most reliable Windows so far.” Yes, that’s what he said. A product with 63,000 known defects (mind you, that’s only the known defects) and he admits it’s the best they can do. Ye gods.
And the story continues: Windows XP Service Pack 2 was touted to address a large number of security issues and make computing safer. Instead it breaks many things (mostly products made by Microsoft's competitors, but of course that is merely coincidence) while not really fixing any real security flaws. The first major security hole in XP-SP2 was described by security experts as "not a hole but rather a crater" and allowed downloadable code to spoof firewall information. Only days after XP-SP2 was released the first Internet Explorer vulnerability of the SP2 era was discovered. Furthermore SP2 leaves many unnecessary networking components enabled, bungles permissions, leaves IE and OE open to malicious scripts, and installs a packet filter that lacks any capacity for egress filtering. It also makes it more difficult for third-party products (especially multimedia plugins) to access ActiveX controls, which in turn prevents the installation of quite a bit of multimedia software made by Microsoft's competitors. XP-SP2's most noticeable effect (apart from broken application compatibility) is frequent popups that give the user a false sense of security. Apart from this placebo effect the long-awaited and much-touted XP-SP2 doesn't really fix very much.
In the summer of 2005 Jim Allchin, then group VP in charge of Windows, finally went and admitted all this. In a rare display of corporate honesty, he told the Wall Street Journal that the first version of Longhorn (then the code name for Windows Vista) had to be scrapped entirely because the quality of the program code had deteriorated too far. The root of the problem, said Allchin, was Microsoft's historical approach to developing software (the so-called "spaghetti code culture") in which the company's thousands of programmers would each develop their own piece of code, and it would then all be stitched together at the end. Allchin also said he had faced opposition to his call for a completely new development approach, first from Gates himself and then from the company's engineers.
MS developers: “We are morons”
Allchin's revelations came as no great surprise. Part of the source code to Windows 2000 had been leaked onto the Internet before, and pretty it was not. Microsoft's flagship product turned out to be a vast sprawl of spaghetti in Assembly, C and C++, all held together with sticky tape and paper clips. The source code files contained many now-infamous comments including "We are morons" and "If you change tabs to spaces, you will be killed! Doing so f***s the build process".
There were many references to idiots and morons, some external but mostly at Microsoft. For example:
In the file private\ntos\rtl\heap.c, which dates from 1989:
// The specific idiot in this case is Office95, which likes
// to free a random pointer when you start Word95 from a desktop
// shortcut.
In the file private\ntos\w32\ntuser\kernel\swp.c from 11-Jul-1991:
// for idiots like MS-Access 2.0 who SetWindowPos( SWP_BOZO )
// and blow away themselves on the shell, then lets
// just ignore their plea to be removed from the tray.
Morons are also to be found in the file private\genx\shell\inc\prsht.w:
// We are such morons. Wiz97 underwent a redesign between IE4 and IE5
And in private\shell\shdoc401\unicpp\desktop.cpp:
// We are morons. We changed the IDeskTray interface between IE4
In private\shell\browseui\itbar.cpp:
// should be fixed in the apps themselves. Morons!
Likewise in private\shell\ext\ftp\ftpdrop.cpp:
We have to do this only because Exchange is a moron.
Microsoft programmers also take seriously their duty to warn fellow developers against unsavory practices, which are apparently committed on a regular basis. There are over 4,000 references to "hacks". These include:
In the file private\inet\mshtml\src\core\cdbase\baseprop.cxx:
// HACK! HACK! HACK! (MohanB) In order to fix #64710
// at this very late date
In private\inet\mshtml\src\core\cdutil\genutil.cxx:
// HACK HACK HACK. REMOVE THIS ONCE MARLETT IS AROUND
In private\inet\mshtml\src\site\layout\flowlyt.cxx:
// God, I hate this hack ...
In private\inet\wininet\urlcache\cachecfg.cxx:
// Dumb hack for back compatibility. *sigh*
In private\ispu\pkitrust\trustui\acuictl.cpp:
// ACHTUNG! HACK ON TOP OF HACK ALERT:
// Believe it or not there is no way to get current height
In private\ntos\udfs\devctrl.c:
// Add to the hack-o-rama to fix formats.
In private\shell\shdoc401\unicpp\sendto.cpp:
// Mondo hackitude-o-rama.
In private\ntos\w32\ntcon\server\link.c:
// HUGE, HUGE hack-o-rama to get NTSD started on this process!
In private\ntos\w32\ntuser\client\dlgmgr.c:
// HACK OF DEATH!!
In private\shell\lib\util.cpp:
// TERRIBLE HORRIBLE NO GOOD VERY BAD HACK
In private\ntos\w32\ntuser\client\nt6\user.h:
// The magnitude of this hack compares favorably with that
// of the national debt.
The most worrying aspect here is not just that these bad practices persist and even find their way into release builds in large numbers; after all, few things are as permanent as a "temporary" solution. Nor is it how much ancient code still survives in the most recent versions of Windows (although it is somewhat unsettling to see decades-old mistakes continue to cause problems). No, the most frightening thing is that Microsoft's developers obviously know they are doing terrible things that seriously undermine the quality of the end product, but are apparently unable to remedy the known shortcomings of their own code.
As you may remember, Windows XP was already out by the time the above source code was leaked. In fact, by 2004 Microsoft had been talking about Longhorn (Windows Vista) for three years. Just a few months after the source code leaked out, it was announced that WinFS, touted as Microsoft's flagship Relational File System Of The Future, would not ship with Vista after all. The reason isn't hard to guess: Windows' program code has become increasingly unmaintainable and irreparable over the years.
In the long years since XP was launched, Apple has come out with five major upgrades to OS X, upgrades which (dare I say it?) install with about as much effort as it takes to brush your teeth in the morning. No nightmare calls to tech support, no sudden hardware incompatibilities, no hassle. Yet Microsoft has failed to keep up, and the state of their program code shown above clearly demonstrates why.
Unreliable servers
All these blunders of course take their toll on Windows' reliability and availability. Depending on application and system load, most Windows systems tend to need frequent rebooting, either to fix problems or simply at regular intervals to prevent the performance degradation that results from Windows' shaky resource management.
On the desktop this is bad enough, but the same flaws exist in the Windows server products. Servers are much more likely to be used for mission-critical applications than workstations are, so Windows’ limited availability and its impact on business become a major issue. The uptimes of typical Windows-based servers in serious applications (i.e. more than just file and print services for a few workstations) tend to be limited to a few weeks at most. One or two server crashes (read: interruptions of business and loss of data) every few months are not uncommon. As a server OS, Windows clearly lacks reliability.
Windows server products aren’t even really server OSes. Their architecture is no different from the workstation versions. The server and workstation kernels in NT are identical, and changing two registry keys is enough to convince a workstation that it’s a server. Networking capabilities are still largely based on the peer-to-peer method that was part of Windows for Workgroups 3.11 and that Microsoft copied, after it had been successfully pioneered by Apple and others in the mid-eighties. Of course some code in the server products has been extended or optimized for performance, and domain-based authentication has been added, but that doesn’t make it a true server platform. Neither does the fact that NT Server costs almost three times as much as NT Workstation. In fact we’re talking about little more than Windows for Workgroups on steroids.
In November 1999, Sm@rt Reseller’s Steven J. Vaughan-Nichols ran a test to compare the stability of Windows NT Server (presumably running Microsoft Internet Information Server) with that of the Open Source Linux operating system (running Samba and Apache). He wrote:
Conventional wisdom says Linux is incredibly stable. Always skeptical, we decided to put that claim to the test over a 10-month period. In our test, we ran Caldera Systems OpenLinux, Red Hat Linux, and Windows NT Server 4.0 with Service Pack 3 on duplicate 100MHz Pentium systems with 64MB of memory. Ever since we first booted up our test systems in January, network requests have been sent to each server in parallel for standard Internet, file and print services. The results were quite revealing. Our NT server crashed an average of once every six weeks. Each failure took roughly 30 minutes to fix. That’s not so bad, until you consider that neither Linux server ever went down.
Interesting: a crash that takes 30 minutes to fix means that something critical has been damaged and needs to be repaired or restored. At least it takes more than just a reboot. This happens once every six weeks on a server, and that’s considered “not so bad”… Think about it. Also note that most other Unix flavors such as Solaris, BSD or AIX are just as reliable as Linux.
But the gap between Windows and real uptime figures is even greater than Vaughan-Nichols describes above. Compare that half hour downtime per six weeks to that of Netware, in the following article from Techweb on 9 April 2001:
Server 54, Where Are You? The University of North Carolina has finally found a network server that, although missing for four years, hasn’t missed a packet in all that time. Try as they might, university administrators couldn’t find the server. Working with Novell Inc. (stock: NOVL), IT workers tracked it down by meticulously following cable until they literally ran into a wall. The server had been mistakenly sealed behind drywall by maintenance workers.
Although there is some doubt as to the actual truth of this story, it’s a known fact that Netware servers are capable of years of uninterrupted service. Shortly before I wrote this, I brought down a Netware server at our head office. This was a Netware 5.0 server that also ran software to act as the corporate SMTP/POP3 server, fax server and main virus protection for the network, next to providing regular file and print services for the whole company. It had been up and running without a single glitch for more than a year, and the only reason we shut it down was because it had to be physically moved to another building. Had the move not been necessary, it could have run on as long as the mains power held out. There’s simply no reason why its performance should be affected, as long as nobody pulls the plug or rashly loads untested software. The uptimes of our Linux and Solaris servers (mission-critical web servers, database servers and mail servers, or just basic file and print servers) are measured in months as well. Uptimes in excess of a year are not uncommon for Netware and Unix platforms, and uptimes of more than two years are not unheard of either. Most OS updates short of a kernel replacement do not require a Unix server to be rebooted, as opposed to Windows that expects a complete server reboot whenever a DLL in some subsystem is updated. But see for yourself: check the Netcraft Uptime statistics and compare the uptimes of Windows servers to those of Unix servers. The figures speak for themselves.
Microsoft promises 99.999% availability with Windows 2000. That’s a little over 5 minutes of downtime per year. Frankly I can’t believe this is a realistic target for Windows. Microsoft products have never even approached such uptime figures. Even though most of the increased availability of Windows 2000 must be provided through third-party clustering and redundancy solutions (something that the glossy ads neglect to mention) it’s highly unlikely that less than five minutes of downtime per year for the entire Windows cluster is practically feasible.
Perhaps even more serious is the fact that, short of clustering, there is no adequate solution for the many software glitches that jeopardize the availability of a typical Windows server. A typical NT or 2000 server can spontaneously develop numerous transient problems. These may vary from network processes that seem healthy but ignore all network requests, to runaway server applications that lock up the entire operating system. Usually the only solution in these cases is to power cycle and restart the system. I remember having to do that three times a week on a production server. Believe me, it’s no fun. Perhaps it’s understandable that some network administrators feel that the best way to accelerate a Windows system is at 9.81 meters per second squared.
More worries, more cost, or both
Does all this make Windows an entirely unusable product that cannot run in a stable configuration anywhere? No, fortunately not. There are situations where Windows systems (both workstations and servers) may run for long periods without crashing. A vendor-installed version of Windows NT or 2000 on an HCL-compliant, A-brand system, with all the required service packs and only certified drivers, should give you relatively few problems (provided that you don't use it for much more than basic file and print services, of course). The rule of thumb here is to use only hardware that is on Microsoft's Hardware Compatibility List (HCL), to use only vendor-supplied, certified drivers and other software, and to use third-party clustering solutions for applications where availability is a critical issue.
Another rule of thumb is: one service, one server. Unix sysadmins would expect to run multiple services on one server and still have resources to spare. Good Windows sysadmins generally don’t do that. If you need to run a file/print server, a web server and a mail server, all under Windows, use three servers. This will minimize the risk of software conflicts, and it will help prevent overload. On the other hand, you now have to maintain three servers instead of one, which in turn requires more IT staff to keep up with the work.
A diligent regime of upgrading and running only the latest versions of Microsoft products may help as well. Such a policy will cost a small fortune in license upgrades, but it may help to solve and even prevent some problems. To be honest, Windows XP and Vista on the desktop, and Windows Server 2003 in the server room, are somewhat better (or rather, less bad) than NT4 was: more stable, and less prone to spontaneous crashes. Some of NT's most awkward design blunders have been fixed. For example, the user home directories are no longer located under the WINNT directory. On most systems (especially on notebook computers) XP and Vista are considerably less shaky (albeit by no means perfect) and hardware support is certainly a lot better. Which goes to show that a few relatively trivial changes may go a long way.
But still, given the general quality of Microsoft products and Windows in particular, there are absolutely no guarantees. And of course Microsoft introduced a whole new set of glitches and bugs in Windows XP, which largely undid many of the improvements in Windows 2000. So now Windows XP is less stable in some situations than Windows 2000 was, and Vista will stumble on issues that didn’t bother XP, starting with the inability to copy files in less than a few days, an issue that even Service Pack 1 didn’t solve on most PCs. But that’s innovation for you, I suppose.
Denial will see us through
One frightening aspect about all this is that Microsoft doesn’t seem to realize how serious these problems are. Or rather, they probably realize it but they don’t seem to care as long as sales hold up. While the core systems of large companies still run on either mainframes or midrange Linux systems in order to provide sufficient reliability and performance, Microsoft sales reps pretend that Windows is good enough to compete in that area.
Microsoft likes to pretend that Windows’ huge shortcomings are only minor. Their documents on serious problems (which are always called ‘Issues’ in Microsoft-speak) are very clear on that point. Take the classic ‘TCP/IP Denial Of Service Issue’ for example: a serious problem that was discovered a few years ago. It caused NT servers to stop responding to network service requests, thus rendering mission-critical services unavailable. (This should not be confused with deliberate Denial Of Service attacks to which most operating systems are vulnerable; this was a Windows issue only.) At the time there was no real solution for this problem. Microsoft’s only response at the time was to state that “This issue does not compromise sensitive data in any way. It merely forces a server to become unavailable for a short time, which is easily remedied by rebooting the server.” NT sysadmins had to wait for the next service pack that was released several months later before this problem was addressed. In the meantime they were expected to accept downtime and the rebooting of mission-critical servers as a matter of course. After all no data was lost, so how bad could it be?
And Microsoft thinks that this stuff can compete with Unix and threaten the mainframe market for mission-critical applications? Uh-huh. I don’t think so.
In September 2001 Hewlett-Packard clustered 225 PCs running the Open Source Linux operating system. The resulting system (called I-cluster) benchmarked itself right into the global top-500 of supercomputers, using nothing but unmodified, out-of-the-box hardware. (A significant number of entries in that top-500, by the way, run Linux, and more and more Unix clusters are being used for supercomputing applications.) Microsoft, with a product line that is descended solely from single-user desktop systems, can't even dream of such scalability - not now, not ever. Nevertheless Microsoft claimed on a partner website with Unisys that Windows would outperform Unix, because Unisys' server with Windows 2000 Datacenter could be scaled up to 32 CPUs. This performance claim is of course a blatant lie: the opposite is true and they know it. Still Microsoft would have us believe that the performance, reliability and scalability of the entire Windows product line is on par with that of Unix, and that clustered Windows servers are a viable replacement option for mainframes and Unix midrange systems. I'm not kidding, that's what they say. If you're at all familiar with the scalability of Unix midrange servers and the requirements of the applications that mainframes are being used for, you will realize how ludicrous this is.
Microsoft lacks confidence in own products
Dog food is sold to the dog owners who buy it, not to the dogs who have to eat it. “Eating your own dog food” is a metaphor for a programmer who uses the system he or she is working on. Is it yet functional enough for real work? Would you trust it not to crash and lose your data? Does it have rough edges that scour your hand every time you use a particular feature? Would you use it yourself by choice?
When Microsoft acquired the successful free e-mail service Hotmail, the system had roughly 10 million users, and the core systems that powered Hotmail all ran Unix. A few years later the number of Hotmail users had exceeded 100 million, but in spite of Microsoft's claims about the power of Windows and their previous intentions to replace Hotmail's core systems with Windows servers, Hotmail's core systems still run Unix. This was discussed thoroughly in a leaked internal paper by David Brooks, a member of Microsoft's Windows 2000 Server Product Group. It mentioned the proverbial stability of the Unix kernel and the Apache web server, and the system's transparency and combination of power and simplicity. Windows, on the other hand, it considered to be needlessly GUI-biased (Brooks wrote: "Windows […] server products continue to be designed with the desktop in mind") as well as complex, obscure and needlessly resource-hungry. (Brooks: "It's true that Windows requires a more powerful computer than Linux or FreeBSD [and treats a server] reboot as an expectation".)
Hotmail is not the only example of Microsoft's refusal to eat their own dog food. The "We have the way out" anti-Unix website that Microsoft (along with partner Unisys) put up in the first months of 2002 initially ran Unix and Apache. (It was ported to IIS on Windows 2000 only after the entire ICT community had had a good laugh.)
For many years Microsoft's own e-mail servers have been protected by third-party security software. This amounts to an admission that Exchange on Windows needs such third-party assistance to provide even a basic level of system security.
Microsoft's SQL Labs, the part of the company that works on Microsoft's SQL Server, purchased NetScreen's 500-series security appliance to defend its network against Code Red, Nimda and other worm attacks. Apparently the labs' choice was made despite the fact that Microsoft then already sold its own security product touted as a defense against such worms. The Microsoft ISA [Internet Security and Acceleration] Server was introduced in early 2001 and was hailed by Microsoft as their first product aimed entirely at the security market. In fact, the most important reason businesses ought to switch to ISA Server, according to Microsoft, was that "ISA Server is an […] enterprise firewall and secure application gateway designed to protect the enterprise network from hacker intrusion and malicious worms". Still Microsoft's SQL Labs prudently decided to rely on other products than their own to provide basic security.
Microsoft's own accounting division used IBM's AS/400 midrange platform for critical applications such as the payroll system until well into the late nineties.
The most recent example of Microsoft's awareness of the shortcomings of their own products is that some of Microsoft's own top executives had trouble getting Windows Vista to work in the weeks after its release. The officials, including a member of the Microsoft board of directors, voiced some of the same complaints about missing drivers and crippled graphics that users have raised since Vista debuted in January 2007. Steven Sinofsky, the Microsoft senior vice president who took charge of Windows development the day after Vista's retail release, complained that some of his hardware wouldn't work with the new OS. "My home multi-function printer did not have drivers until 2/2 and even then [they] pulled their 1/30 drivers and released them (Brother)" said Sinofsky in an e-mail dated Feb. 18, 2007. Sinofsky's e-mail was one of hundreds made public in February 2008 by U.S. District Court Judge Marsha Pechman as part of a lawsuit that claimed Microsoft deceived buyers when it promoted PCs as "Windows Vista Capable" in the run-up to the 2006 holiday season. Mike Nash, vice president for Windows product management, was nailed by the Vista Capable debacle more than a year later when he bought a new laptop. "I know that I chose my laptop (a Sony TX770P) because it had the Vista logo and was pretty disappointed that it not only wouldn't run [Aero], but more important wouldn't run [Windows] Movie Maker" Nash said in an e-mail on Feb. 25, 2007. "Now I have a $2,100 e-mail machine."
Network pollution
It should also be mentioned that Microsoft doesn't know the first thing about networking. A Windows system in a TCP/IP environment still uses a NetBIOS name. Microsoft networking is built around NetBEUI, which is an extended version of NetBIOS. This is a true Stone Age protocol which is totally unroutable. It uses lots of broadcasts, and on a network segment with Windows PCs broadcasts indeed make up a significant portion of the network traffic, even for point-to-point connections (e.g. between a Microsoft mailbox and a client PC). If it weren't for the fact that it is possible to encapsulate NetBIOS/NetBEUI traffic in a TCP/IP envelope, connecting Windows to the real world would be totally impossible. (Note that Microsoft calls the IP encapsulation of NetBEUI packets 'native IP'. Go figure.) The problem is being made worse by the ridiculous way in which Microsoft applications handle file I/O. Word can easily do over a hundred 'open' operations on one single file, and saving a document involves multiple write commands with only one single byte each. Thus Windows PCs tend to generate an inordinate amount of garbage and unnecessary traffic on the network.
Microsoft’s design approach has never shown much understanding of computer networking. I remember reading a document from Microsoft that stated that a typical PC network consists of ten or at most twenty peer-to-peer workstations on a single cable segment, all running Microsoft operating systems. And that explains it, I suppose. If you want anything more than that, on your own head be it.
Here’s a simple test. Take a good, fast FTP server (i.e. one that runs on Unix). Upload and download a few large files (say, 50MB) from and to a Windows NT or 2000 workstation. (I used a 233MHz Pentium-II.) You will probably see a throughput in the order of 1 Mbit/s for uploads and 2 to 3 Mbit/s for downloads, or more on faster hardware. Then boot Linux on the same workstation (a quick and easy way is to use a Linux distribution on a ready-to-run CD that requires no installation, such as Knoppix). Then repeat the upload and download test. You will now see your throughput limited only by the bandwidth of your network connection, the capacity of your FTP server, or by your hardware performance, whichever comes first. On 10 Mbit/s Ethernet, 5 Mbit/s upload and download throughput are the least you may expect. To further test this, you can repeat it with a really slow client (e.g. a 60 or 75MHz Pentium) running Linux. The throughput limit will still be network-bound and not system-bound. (Note: this is not limited to FTP but also affects other network protocols. It’s a performance problem related to the code in Windows’ IP stack and other parts of the architecture involved with data throughput.)
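For those who want to reproduce this test without a stopwatch, here is a rough sketch using Python’s standard ftplib. The host name, credentials and test file name are placeholders; point it at your own FTP server before running it.

```python
# A rough FTP throughput test using Python's standard ftplib.
# Host, credentials and file name are placeholders.
import io
import time
from ftplib import FTP

HOST, USER, PASSWORD = "ftp.example.com", "user", "secret"   # placeholders
SIZE = 50 * 1024 * 1024                                      # roughly 50 MB

def mbit_per_s(nbytes: int, seconds: float) -> float:
    return nbytes * 8 / seconds / 1_000_000

received = 0

def sink(chunk: bytes) -> None:
    """Count downloaded bytes and discard them."""
    global received
    received += len(chunk)

with FTP(HOST) as ftp:
    ftp.login(USER, PASSWORD)

    # Upload: push ~50 MB of zeroes and time it.
    start = time.perf_counter()
    ftp.storbinary("STOR throughput-test.bin", io.BytesIO(b"\0" * SIZE))
    up = time.perf_counter() - start

    # Download: retrieve the same file again and time it.
    start = time.perf_counter()
    ftp.retrbinary("RETR throughput-test.bin", sink)
    down = time.perf_counter() - start

    ftp.delete("throughput-test.bin")   # clean up the test file

print(f"upload:   {mbit_per_s(SIZE, up):.1f} Mbit/s")
print(f"download: {mbit_per_s(received, down):.1f} Mbit/s")
```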
New Windows versions bring no relief here. Any network engineer who uses PPPoE (Point-to-Point Protocol over Ethernet) with ADSL will tell you that the MTU (a setting that limits packet size) should be set to 1492 or less. In XP it’s set by default to 1500, which may lead to problems with the routers of many DSL ISPs. Microsoft is aware of the problem, but XP nevertheless persists in setting up PPPoE with an MTU of 1500. There is a registry hack for PPPoE users, but there is no patch, and XP has no GUI-based option which enables the user to change the MTU conveniently.
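For the record, the registry work-around usually boils down to writing an MTU value under the TCP/IP interface keys. The sketch below uses Python’s standard winreg module (Windows only, administrator rights required) and follows the commonly documented Tcpip\Parameters\Interfaces location; since it bluntly touches every interface, treat it as an illustration of the idea rather than a ready-made fix, and back up the registry first.

```python
# A sketch of the registry work-around: write an MTU value on every TCP/IP
# interface key. Python's standard winreg module, Windows only, requires
# administrator rights; the change takes effect after a reboot.
import winreg

IFACES = r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces"
NEW_MTU = 1492   # PPPoE overhead leaves 1492 bytes for the IP packet

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, IFACES) as root:
    subkey_count = winreg.QueryInfoKey(root)[0]       # one subkey per adapter
    for i in range(subkey_count):
        guid = winreg.EnumKey(root, i)
        with winreg.OpenKey(root, guid, 0, winreg.KEY_SET_VALUE) as key:
            # Store the MTU as a DWORD value on this interface.
            winreg.SetValueEx(key, "MTU", 0, winreg.REG_DWORD, NEW_MTU)
            print(f"set MTU={NEW_MTU} on interface {guid}")
```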
The above example is fairly typical of XP. It tries to do things itself and botches the job, rather than give you control over it to do it properly. But all versions of Windows share a number of clumsily designed and coded network features, starting with Windows file sharing. This service uses fixed ports, and can’t be moved to other ports without using dynamite. This means that routing the essentially insecure Windows file sharing connections through a secure SSH tunnel is extremely cumbersome, and requires disabling (or, on XP, uninstalling) file sharing services on the local client, so that using both a tunneled and a non-tunneled file sharing connection at the same time is impossible, and switching back and forth between the two requires rebooting. Yes, you could conceivably solve this with a VPN configuration, but that’s not the point. The point is that any self-respecting network client will let you configure the ports it uses but, apparently for reasons of user-friendliness, Microsoft hard-coded the file sharing ports into their software, thereby making it impossible to extend file sharing beyond insecure connections on a local LAN.
While many of Windows’ networking limitations are rapidly being phased out now that the 9x/ME product line has been abandoned, others persist. Set an XP or Vista box or a Windows 2003 server to share files, and then try to get Windows networking to ‘see’ those shares over a VPN or from the other end of an Internet router. You can’t, or at least not without cumbersome and unnecessarily expensive workarounds, due to Windows Networking still being based on a non-routable IBM protocol from the 1970s.
On top of all this, readers of this paper report that, according to John Dvorak in PC Magazine, the Windows kernel maxes out at 483 Mbps. He remarks that as many businesses are upgrading to 1 Gigabit Ethernet, Windows (including XP) just can’t keep up.
Now go read what Microsoft writes about Windows 2000 and XP being the ideal platform for Internet applications…
Denial of Service vulnerabilities
The sloppy nature of Windows’ networking support code and protocol stacks also makes the system more vulnerable to Denial of Service attacks. A DoS attack is a form of computer vandalism or sabotage, with the intention to crash a system or otherwise render it unavailable. In a typical DoS attack a deliberately malformed network packet is sent to the target system, where it triggers a known flaw in the operating system to disrupt it. In the case of Windows, though, there are more ways to bring down a system. For example, the kernel routines in Windows 2000 and XP that process incoming IPsec (UDP port 500) packets are written so badly that sending a stream of regular IPsec packets to the server will cause it to bog down in a CPU overload. And of course Windows’ IPsec filters cannot block a 500/udp packet stream.
Another way to render a system unavailable is a Distributed Denial of Service attack. A DDoS attack involves many networked systems that send network traffic to a single target system or network segment, which is then swamped with traffic and becomes unreachable. There’s very little that can be done against DDoS attacks, and all platforms are equally vulnerable.
With all these DoS and DDoS vulnerabilities, it’s a worrying development that Windows 2000 and XP provide new platforms to generate such attacks. The only real ‘improvement’ in Windows 2000’s and XP’s IP stacks is that, for no good reason whatsoever, Microsoft has extended the control that an application has over the IP stack. This does not improve Windows’ sub-standard networking performance, but it gives applications the option to build custom IP packets and generate incredibly malicious Internet traffic, including spoofing source IP addresses and mounting full-scale SYN-flood Denial of Service (DoS) attacks. As if things weren’t bad enough…
Cumulative problems on the server
So far we have concentrated on Windows. Most of the problems with Microsoft products originate here, since Windows is by far the most complex Microsoft product line, and there are more interactions between Windows and other products than anywhere else. But unfortunately most server and desktop applications are cut from the same cloth as Windows is. The general quality of their code and design is not much better.
The additional server products generally run on a Windows server. This means that all the disadvantages of an insecure, unstable platform also apply to the server products that run on those servers. For example, Microsoft SQL Server is a product that has relatively few problems. Granted, it suffers from the usual glitches, but nothing unexpected. It’s basically a straightforward implementation of a general SQL server, based on technology not developed by MS but purchased from Sybase. Prior to V7, SQL Server was mostly Sybase code. It wasn’t until V7 that SQL Server saw major rewrites.
While SQL Server causes relatively few problems, it is not a very remarkable or innovative product. Not only does it bear all the worst of Microsoft’s hallmarks (things like Service Pack 4 for MS SQL Server 2000 having to mostly fix problems caused by Service Pack 3), but if I had waited until 2005 to implement database partitioning, I think I’d be covering it up, not trumpeting it to the world…
Still SQL Server is not a bad product as far as it goes, certainly not by Microsoft standards. However, no database service can perform better or be more reliable than the platform it’s running on. (This goes of course for any software product, not just for a database server.) All vulnerabilities that apply to the Windows server directly apply to the database service as well.
Other additional server products come with their own additional problems. Microsoft’s webserver product, Internet Information Server (IIS), is designed not just to serve up web pages written in the standard HTML language, but also to provide additional authentication and links to content databases, to add server and client side scripting to web pages, to generate Dynamic HTML and Active Server Pages, et cetera. And it does all these things, and more, but often not very well. IIS is outperformed by all other major webserver products (especially Apache). IIS’ authentication is far from robust (the general lack of security in MS products is discussed below) and the integration of an IIS webserver with a content database server is far from seamless. Dynamic HTML, ASP and scripting require the webserver to execute code at the server end, and there Microsoft’s bad process management comes into play: server load is often excessive. Running code at the server end in response to web requests creates a lot of security issues as well, and on top of all that the web pages that are generated do not conform to the global HTML standards; they are rendered correctly only in Microsoft’s own web browser products.
Microsoft’s mail server product, Exchange, has a few sharp edges as well. To begin with, its performance is definitely sub-standard. Where one Unix-based mail server will easily handle thousands of users, an Exchange server maxes out at about one hundred. So replacing large Unix-based email services with Exchange generally requires a server farm. A much bigger problem is Exchange’s lack of stability and reliability. To lose a few days’ worth of corporate E-mail in an Exchange crash is not uncommon. Most of these problems are caused by the hackish quality of the software. Exchange is designed to integrate primarily with other Microsoft products (especially the Outlook E-mail client) and it doesn’t take the Internet’s global RFC standards too seriously. This limits compatibility and may cause all kinds of problems. Outlook Express also has a strange way of talking IMAP to the Exchange server. It makes a bazillion IMAP connections; each connection logs in, performs one task, sets the connection to IDLE, and then drops the connection. Since OE does not always close the mailbox properly before dropping the connection, the mailbox and Outlook do not necessarily sync up. This means that you may delete messages in OE that magically return to life in a few minutes because those deletions did not get disseminated to the mailbox before the connection terminated.
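By way of contrast, this is roughly what a well-behaved IMAP client does; a minimal sketch using Python’s standard imaplib, with placeholder host and credentials. One connection, one login, and the mailbox is expunged and closed before the connection goes away, so deletions cannot silently come back to life.

```python
# A minimal, well-behaved IMAP session using Python's standard imaplib.
# Host and credentials are placeholders.
import imaplib

HOST, USER, PASSWORD = "imap.example.com", "user", "secret"  # placeholders

with imaplib.IMAP4_SSL(HOST) as imap:
    imap.login(USER, PASSWORD)
    imap.select("INBOX")                     # open the mailbox once

    imap.store("1", "+FLAGS", "\\Deleted")   # flag message 1 for deletion

    # Make the deletion permanent and close the mailbox properly
    # *before* the connection goes away.
    imap.expunge()
    imap.close()
    # Leaving the with-block logs out cleanly.
```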
Just like other Microsoft applications, the additional server products are tightly integrated into Windows during installation. They replace DLLs belonging to the operating system, and they run services at the system level. This does not improve the stability of the system as a whole to begin with, and of course most of the code in the additional server products is of the same doubtful quality as the code that makes up Windows. The reliability and availability of any service can never be better than the server OS it runs on. However, most of Microsoft’s additional server products add their own complexity, bugs and glitches to the system, which only makes things worse. The resulting uptime and reliability figures are rather predictable. The inefficiency that exists in Windows is also present in the additional server products, so as a rule of thumb each additional service needs its own server platform. In other words: if you need a file and print server, a web server and a mail server, you need three separate systems, whereas Unix or Netware could probably do the job on only one system.
Desktop: bigger but not better
Microsoft desktop applications (like Word and Excel) are largely more of the same. They’re in the terminal stages of feature bloat: they’re full of gadgets that don’t really add anything useful to the product, but that ruin productivity because of their complexity, that introduce more errors, that increase resource demands, and that require more code, which in turn increases risk. After years of patching and adding, the code base for these products has become very messy indeed. Security, if any, has been added as an afterthought here, too. For example, a password-protected Word document is not encrypted in any way. Inserting a ‘protected’ document into another non-protected document (e.g. an empty new document) is enough to get around the ‘protection’. And if that fails, a simple hex editor is enough to change the ‘Password To Modify’ in a document. Microsoft is aware of this, but now claims that the ‘Password To Modify’ is only intended to “prevent accidental changes to a document” and not to offer protection from modifications by malicious third parties. Uh-huh.
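Anyone who doubts this can check for themselves: if a ‘protected’ document still contains its text as plain bytes, no encryption was applied. A minimal sketch in plain Python; the file name and search phrase are placeholders.

```python
# A quick sanity check, not a Word parser: look for a known phrase in the
# raw bytes of a 'protected' document. File name and phrase are placeholders.
from pathlib import Path

DOC = Path("protected.doc")   # placeholder: a document with a 'Password To Modify'
PHRASE = "confidential"       # placeholder: text you know appears in the document

raw = DOC.read_bytes()
# Legacy .doc files usually store text as 8-bit ANSI or UTF-16LE, so try both.
found = PHRASE.encode("ascii") in raw or PHRASE.encode("utf-16-le") in raw
print("text is readable in the raw file" if found else "text not found as plain bytes")
```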
Animated paper clips don’t really make Word a better word processor. We’d be better off with other things, such as a consistent behavior of the auto-format features, the ability to view markup codes, or a more complete spell checking algorithm and dictionary. But in spite of all the “new” versions of Office and persistent feature requests from their users, Microsoft still hasn’t gotten around to that. Instead we have multi-language support that tends to ‘forget’ its settings occasionally, and an ‘auto-correct’ feature that’s limited to the point of being more annoying than useful. Word documents have become excessively large and unwieldy, and occasionally they are corrupted while being saved to disk. When that happens, Word cannot recover these documents and will crash in the attempt to open them.
In fact it’s hilarious that the latest version of Office, well into the 21st century, still can’t handle multiple users reading and writing the same data. It’s stuck in the eighties, when multiple users might have been able to read the same data, but all but the best systems couldn’t properly handle writing to the same files, let alone database records. This problem was solved by record locking, which became standard in other modern software over a decade ago.
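Record locking is nothing exotic; byte-range locks have been a standard operating system facility for decades. A minimal illustration (POSIX only, using Python’s standard fcntl and os modules; the data file is a throwaway) of two writers being able to update different records of the same file without blocking each other:

```python
# Byte-range (record) locking with the standard fcntl module (POSIX only).
# Each writer locks only the record it is updating; other records stay free.
import fcntl
import os

RECORD_SIZE = 128   # pretend the file is divided into fixed-size records

def lock_record(fd: int, record_no: int) -> None:
    """Take an exclusive lock on one record's byte range."""
    fcntl.lockf(fd, fcntl.LOCK_EX, RECORD_SIZE, record_no * RECORD_SIZE)

def unlock_record(fd: int, record_no: int) -> None:
    fcntl.lockf(fd, fcntl.LOCK_UN, RECORD_SIZE, record_no * RECORD_SIZE)

fd = os.open("shared.dat", os.O_RDWR | os.O_CREAT, 0o644)
os.ftruncate(fd, RECORD_SIZE * 10)       # room for ten records

lock_record(fd, 3)                       # writer A locks record 3 only
os.pwrite(fd, b"updated by A".ljust(RECORD_SIZE), 3 * RECORD_SIZE)
unlock_record(fd, 3)                     # records 0..2 and 4..9 were never blocked

os.close(fd)
```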
We can be brief about Excel: it has similar problems, and calculation errors in formula-based spreadsheets on top of that. Excel is full of frills and spiffy graphics and animations, but essentially it’s still a spreadsheet that cannot count and that requires many formulas and macros to be rewritten for each new version of Excel.
The database component of Office, Microsoft Access, isn’t exactly a stellar piece of work either. Access, apart from its quirky way of interfacing with backend databases, can still lock out an entire (possibly mission-critical) database, just because one user hasn’t shut down the application used to write or modify data. Access is actually supposed to be able to properly handle this condition, but it doesn’t. And in a stunning display of lack of understanding, Access 2007 introduced the use of multi-valued data types in SQL databases, in an attempt to make the product easier for power users to drive. The development team felt that power users find the creation of many-to-many joins using three tables conceptually very difficult, and will find multi-valued data types a much easier solution. In this they are correct; users certainly do struggle with the concept of creating many-to-many joins using three tables, as is the ‘classic’ way in SQL. However, the reason for doing it the old-fashioned way is that this is totally accurate and predictable, and that every bit of data (every atomic value) will always be accessible, which was one of the main design principles (perhaps even the whole point) of SQL’s design around atomic values. The multi-valued approach is like putting cruise control on a backhoe or a bulldozer in an attempt to make it easier for unskilled operators to use, and it will result in a similar mess.
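For readers who have never seen it, the ‘classic’ three-table construction looks like this. The sketch uses Python’s built-in sqlite3 so it can be run anywhere, and the table and column names are of course just an example; the point is that every student/course pairing is one atomic row in the junction table, so every value remains individually queryable, which is exactly what multi-valued fields give up.

```python
# The classic three-table many-to-many construction, using the built-in
# sqlite3 module. Table and column names are just an example.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE students (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE courses  (id INTEGER PRIMARY KEY, title TEXT);
    -- the junction table: one atomic row per student/course combination
    CREATE TABLE enrollments (
        student_id INTEGER REFERENCES students(id),
        course_id  INTEGER REFERENCES courses(id),
        PRIMARY KEY (student_id, course_id)
    );
    INSERT INTO students VALUES (1, 'Alice'), (2, 'Bob');
    INSERT INTO courses  VALUES (1, 'SQL'), (2, 'Networking');
    INSERT INTO enrollments VALUES (1, 1), (1, 2), (2, 1);
""")

# The many-to-many relation is resolved with two ordinary joins.
rows = db.execute("""
    SELECT s.name, c.title
    FROM students s
    JOIN enrollments e ON e.student_id = s.id
    JOIN courses     c ON c.id = e.course_id
    ORDER BY s.name, c.title
""")
for name, title in rows:
    print(name, "takes", title)
```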
Menu interfaces in all Microsoft applications, even in those that are bundled in MS Office, are inconsistent and non-intuitive. For example, the menu option to set application preferences, which may be titled ‘Preferences’ in some products but ‘Options’ in others, may be found under the ‘File’ menu, under the ‘Edit’ menu, under ‘View’ or somewhere else, depending on what product you’re currently using. To create even more confusion, the same options in different applications do not work identically. For example the ‘Unsorted list’ button (to create a bullet list) handles indentation correctly in Word but ignores it completely in PowerPoint (PowerPoint adds bullets but messes up the left outline of the text). And the ‘F3’ key activates the ‘search again’ function for string searches in practically all Microsoft products, except in Internet Explorer and Excel where it brings up something totally different for no apparent reason.
Microsoft does the Internet (In a manner not unlike Debbie did Dallas)
Microsoft’s most important application outside MS Office is without doubt Internet Explorer. In its first incarnation IE was a very unremarkable web browser; e.g. version 2.0 as it was shipped with Windows NT 4 was so backward that it didn’t even have frame capability. This soon changed as Microsoft began integrating the web browser with Windows as a part of their integration and bundling strategies (which are discussed in detail below).
In all honesty it must be said that recent versions of IE (starting with version 6) aren’t really bad web browsers. That is, from the end users’ point of view they mostly do what they’re supposed to do. They do have many nasty bugs and problems, but these mainly cause headaches for web developers and not for end users. IE has more than its share of annoying errors in the implementation of style sheets and some strange discrepancies in the rendering of tables. It also tends to become confused by some rather elementary things, such as submitting a page that contains more than one button, in which case IE erroneously returns values for multiple button names. It has its own ideas about the Document Object Model (DOM) and does not support the DOM Level 2 Events module, even though Microsoft participated in the definition of this module and had ample time to implement it. IE6 also boasts Jscript behavior that differs from every other browser and is full of implementation quirks, and it lacks proper support for xHTML and character encoding negotiation. And of course PNG support in IE6 is so bad that it has singlehandedly delayed the acceptance of this image format by many years, and perhaps forever.
Even so, on the whole IE6 and 7 do the job well enough for most users. At least they display standards-compliant HTML as more or less correctly rendered web pages, at a speed that is by all means acceptable. Previous versions of IE weren’t nearly this good, and even contained deliberate deviations from the global HTML standards that were intended to discourage the use of standardized HTML in favor of Microsoft’s own proprietary and restrictive ideas.
The main drawbacks of Internet Explorer lie in the fact that it tries to be more than just a web browser. It adds scripting support (with the ability to run Visual BASIC or Jscripts that are embedded in web pages) and it hooks directly into the Windows kernel. I’ve seen web pages that would shut down the Windows system as soon as the page was viewed with Internet Explorer. Microsoft doesn’t seem to have bothered very much with basic security considerations, to put it mildly. And of course the installation of a new version of Internet Explorer replaces (overwrites) significant portions of the Windows operating system, with all the drawbacks discussed above.
Similar problems are found in Outlook, Microsoft’s E-mail client. Outlook is in fact a separate application, but it isn’t shipped separately. There are two versions: one is bundled with Internet Explorer (this version is called Outlook Express) and the other is part of MS-Office (this version is named ‘Outlook’ and comes with groupware and scheduler capabilities). In itself Outlook is an acceptable, if unremarkable, E-mail client; it allows the user to read and write E-mail. It comes with a few nasty default settings, but at least these can be changed, although the average novice user of course never does that. (For example, messages are sent by default not as readable text but as HTML file attachments. When a user replies to an E-mail, the quoting feature sometimes uses weird formatting that won’t go away without dynamite. And there’s often a lot of junk that accompanies an outgoing E-mail message.) More serious is the fact that both Outlook and its server-end companion Exchange tend to strip fields from E-mail headers, a practice that is largely frowned upon. This also makes both network administration and troubleshooting more difficult.
The most worrying problem with Outlook is that it comes with a lot of hooks into Internet Explorer. IE code is being used to render HTML file attachments, including scripts that may be embedded into an HTML-formatted E-mail message. Again Microsoft seems to have been completely unaware of the need for any security here; code embedded in inbound E-mail is by default executed without any further checking or intervention from the user.
Basic insecurity of MS products
Which brings us to another major weakness of all Microsoft products: security, or rather the lack thereof. The notorious insecurity of Microsoft software is a problem in itself.
It all begins with Windows’ rather weak (not to say naive) security models, and its appalling quality control. The number of reports on security holes has become downright embarrassing, and it still keeps increasing. On the other hand, Windows security holes have become so common that they hardly attract attention anymore. Microsoft usually downplays the latest security issues and releases another patch… after the fact. If Microsoft really wanted to resolve these software problems, they would take greater care to ensure such problems were fixed before their products went on sale, and thus reverse the way they traditionally conduct business. Doing so would mean fewer resources wasted by their customers each year patching and re-patching their systems in an attempt to clean up after Microsoft’s mistakes, but it would also decrease the customers’ dependency on what Microsoft calls ‘software maintenance’.
In the meantime, hackers are having a ball with Microsoft’s shaky security models and even weaker password encryption (which includes simple XOR bitflip operations, the kind of ‘encryption’ that just about every student reinvents in school). Hackers, script kiddies and other wannabes get to take their pick from the wide choice of elementary security weaknesses to exploit. Some historic and highly virulent worms, for example, spread so rapidly because they could crack remote share passwords in about twenty seconds. This did not stop Microsoft from running an advertising campaign in spring 2003 that centered on hackers becoming extinct along with the dodo and the dinosaur, all because of Microsoft’s oh so secure software. Unsurprisingly this violated a few regulations on truth in advertising, and the campaign had to be hastily withdrawn.
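To see why that kind of ‘encryption’ is worthless, consider this toy sketch (plain Python; the key and password are made up): the same XOR operation both scrambles and unscrambles the data, and a single known plaintext byte gives the key away.

```python
# A toy demonstration of XOR 'encryption': the same operation encrypts and
# decrypts, and one known plaintext byte reveals the key. Key and password
# are made up for the example.
KEY = b"\x5a"                      # a single-byte XOR key

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret = b"hunter2"                # a made-up password
stored = xor(secret, KEY)          # what ends up 'protected' on disk or on the wire

print(stored)                      # looks scrambled...
print(xor(stored, KEY))            # ...but XORing again restores b'hunter2'

# If an attacker knows any plaintext byte, the key falls out immediately.
recovered_key = bytes([stored[0] ^ secret[0]])
print(recovered_key == KEY)        # True
```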
In an attempt to clean up their image somewhat, Microsoft made sure that Windows Vista was launched with a lot of security-related noise. For starters, Vista has a better set of default access privileges. Well, finally! Ancient commercial OSes like Univac Exec, CDC Scope and DEC VMS all had special accounts with various permissions ages ago as a matter of course and common sense. On Windows every user needed administrator rights to do basic tasks. But apart from being decades too late, this basic requirement has been met in a typical Microsoft fashion: it creates a security hole so large that it might more properly be called a void. In Windows Vista the need for certain access privileges is now tied to… program names! For example, if Vista sees that an application developer has created a Microsoft Visual C++ project with the word “install” in the project name, then that executable will automatically require admin rights to run. Create exactly the same project but call it, say, Fred, and the need for elevated access permissions magically disappears. In short, all that malicious software has to do is present a harmless-looking name to Vista, and Vista will let it through.
Apart from the fact that proper access control should have been implemented in Windows NT right from the start, and that Microsoft botched it when it finally did appear in Vista, the rest of Vista’s security is the usual hodge-podge of kludges and work-arounds that often attempt to patch one hole and create another one in the process. For example, let’s look at Vista’s “PatchGuard” service. PatchGuard crashes the computer when it detects that specific internal data structures have been “hooked”, which is a common way for malicious software to start doing its damage. Not only does this work-around still not amount to proper protection of operating system code in the first place, but it also prevents third-party security products (e.g. anti-virus and anti-spyware programs) from working correctly. In order to remedy this, Microsoft released APIs that essentially enable a user-level program to shut down Vista’s Security Center. Uh-huh.
An important part of the problem is Windows’ lack of proper separation between code running at the various system and user levels. Windows was designed around the basic assumption that code always runs with the highest privilege, so that it can do almost anything, including deliberate damage. This makes it impossible to prevent malicious code from invading the system. Users may (inadvertently or deliberately) download and run code from the Internet, but it’s impossible to adequately protect system-level resources from damage by user-level code.
Integrated vulnerabilities
The tight integration between the various Microsoft products does little to improve overall security. All software components are loaded with features, and all components can use each other’s functions. Unfortunately this means that all security weaknesses are shared as well. For example, the Outlook E-mail client uses portions of Internet Explorer to render HTML that is embedded in E-mail messages, including script code. And of course IE and Outlook hook into the Windows kernel with enough privileges to run arbitrary malicious code that happens to be embedded in a received E-mail message or a viewed web page. Since Outlook uses portions of IE’s code, it’s vulnerable to IE’s bugs as well. So a scripting vulnerability that exists in Outlook also opens up IE and vice versa, and if IE has a hook into certain Windows kernel functions, those functions can also be exploited through a weakness in Outlook. In other words, a minor security leak in one of the components immediately puts the entire system at risk. Read: a vulnerability in Internet Explorer means a vulnerability in Windows Server 2003! A simple Visual BASIC script in an E-mail message has sufficient access rights to overwrite half the planet, as has been proven by E-mail virus outbreaks (e.g. Melissa, ILOVEYOU and similar worms) that have caused billions of dollars’ worth of damage.
A good example is the Word macro virus: essentially a VBS (Visual BASIC Script) routine embedded in a Word document as a macro. The creation of a relatively simple macro requires more programming skill than the average office employee can be expected to have, but at the same time a total lack of even basic security features makes Word users vulnerable to malicious code in Word documents. Because of the integrated nature of the software components, a Word macro is able to read Outlook’s E-mail address book and then propagate itself through the system’s E-mail and/or networking components. If Windows’ security settings prevent this, the malicious code can easily circumvent this protective measure by the simple expedient of changing the security settings. How’s that for security?
Similarly, VBS scripts embedded in web pages or E-mail messages may exploit weaknesses in IE or Outlook, so that viewing an infected web page or receiving an infected E-mail is enough to corrupt the system without any further action from the user (including manually downloading a file or opening an attachment). Through those weaknesses the malicious code may access data elsewhere on the system, modify the system’s configuration or even start processes. In March 2000, a hacker wrote (of course anonymously) on ICQ:
21/03/2k: Found the 1st Weakness: In Windows 2000 […] there is a Telnet daemon service, which is not started by default. It can be remotely started by embedding a COM object into HTML code that can be posted on a web page, or sent to an Outlook client. Following script will start the Telnet service: <SCRIPT LANGUAGE=VBScript> CreateObject(“TlntSvr.EnumTelnetClientsSvr”)</SCRIPT>
We’ve tried it and it really works. Only a Problem… we’ve put it into a html page. When opening the page… our best friend “IE5” shows an alert msg saying that “you’re going to run some commands that can be dangerous to your PC…Continue?” We must fix it! No problem using Outlook… [sic]
Note that after patching no fewer than seven different security holes in the Windows 2000 telnet code (yes, that’s seven security leaks in telnet alone!) Microsoft released another patch in February 2002, to fix security issue number eight: another buffer overflow vulnerability. Somehow I don’t think this patch will be the last. If you don’t succeed at first, try seven more times, try, try (and try some more) again. Seen in this light, it’s not surprising that J.S. Wurzler Underwriting Managers, one of the first companies to offer hacker insurance, have begun charging clients 5 to 15 percent more if they use Microsoft’s Windows NT software in their Internet operations.
Microsoft knows exactly how bad their own product security is. Nevertheless they wax lyrical about new features rather than accept their responsibility for their actions. To quote Tom Lehrer:
“The rockets go up, who cares where they come down? That’s not my department, says Wernher von Braun.”
Microsoft can’t be unaware of the risks and damages they cause. After all, they prudently refuse to rely on their own products for security, but use third-party protection instead. (See above.) And while they try to push their user community into upgrading to new product versions as soon as possible, Microsoft can hardly be called an early adopter. In the autumn of 2001 they still did not run Windows and Exchange 2000 on their own mail servers, long after these versions had been released to the market. On other internal systems (less visible but still there) a similar reluctance can be seen to upgrade to new versions of MS products. Only after many security patches and bug fixes have been released will Microsoft risk upgrading their own critical systems.
Sloppiness makes the problem worse
Many security problems are caused by the sloppy code found in many Microsoft products. The many buffer overrun vulnerabilities can be combined with scripting weaknesses. You don’t need to open E-mail attachments or even read an incoming E-mail message to risk the introduction of malicious code on your system. Just receiving the data (e.g. downloading E-mail from a POP3 server or viewing a web page) is enough. Yes, stories like this have long been urban legend, but Outlook has made it reality. Microsoft explains: “The vulnerability results because a component used by both Outlook and Outlook Express contains an unchecked buffer in the module that interprets E-mail header fields when certain E-mail protocols are used to download mail from the mail server. This could allow a malicious user to send an E-mail that, when retrieved from the server using an affected product, could cause code of his choice to run on the recipient’s computer.” This vulnerability has been successfully exploited by Nimda and other malicious worm programs. Other worm programs (e.g. Code Red) combine vulnerabilities like this with creatively constructed URL’s that trigger buffer overruns in IIS. Even without the Frontpage extensions installed it is relatively easy to obtain unencrypted administration passwords and non-public files and documents from an IIS webserver. Furthermore, this “E-commerce solution of the future” contains a prank (a hardcoded passphrase deriding Netscape developers as “weenies”) in the code section concerned with the access verification mechanism for the whole system. And there are many more weaknesses like this. The list goes on and on and on.
IIS is supposed to power multi-million dollar E-commerce sites, and it has many backend features to accomplish this application. But each and every time we hear about a large online mailorder or E-commerce website that has spilled confidential user data (including credit card numbers) it turns out that that website runs IIS on Windows NT or 2000. (And that goes for adult mailorder houses too. I’m not quite sure what kind of toy a Tarzan II MultiSpeed Deluxe is, but I can probably tell you who bought one, and to which address it was shipped. Many E-commerce websites promise you security and discretion, but if they run IIS they can only promise you good intentions and nothing more. Caveat emptor!)
The Code Red and Nimda worms provided a nice and instructive demonstration of how easy it is to infect servers running IIS and other Microsoft products, and use them for malicious purposes (i.e. the spreading of malicious code and DDoS attacks on a global scale). Anyone who bothers to exploit one of the many documented vulnerabilities can do this. Some of the vulnerabilities exploited by Code Red and Nimda were months old, but many administrators just can’t keep up with the ridiculous amount of patches required by IIS. Nor is patching always a solution: the patch that Microsoft released to counter Nimda contained bugs that left mission-critical IIS production servers non-operational.
On 20 June 2001, Gartner vice president and analyst John Pescatore wrote:
IIS security vulnerabilities are not even newsworthy anymore as they are discovered almost weekly. This latest bug echoes the very first reported Windows 2000 security vulnerability in the Indexing Service, an add-on component in Windows NT Server incorporated into the code base of Windows 2000. As Gartner warned in 1999, pulling complex application software into operating system software represents a substantial security risk. More lines of code mean more complexity, which means more security bugs. Worse yet, it often means that fixing one security bug will cause one or more new security bugs.
The fact that the beta version of Windows XP also contains this vulnerability raises serious concerns about whether XP will show any security improvement over Windows 2000.
On 19 September 2001, Pescatore continued:
Code Red also showed how easy it is to attack IIS Web servers […] Thus, using Internet-exposed IIS Web servers securely has a high cost of ownership. Enterprises using Microsoft’s IIS Web server software have to update every IIS server with every Microsoft security patch that comes out - almost weekly. However, Nimda (and to a lesser degree Code Blue) has again shown the high risk of using IIS and the effort involved in keeping up with Microsoft’s frequent security patches.
Gartner recommends that enterprises hit by both Code Red and Nimda immediately investigate alternatives to IIS, including moving Web applications to Web server software from other vendors, such as iPlanet and Apache. Although these Web servers have required some security patches, they have much better security records than IIS and are not under active attack by the vast number of virus and worm writers. Gartner remains concerned that viruses and worms will continue to attack IIS until Microsoft has released a completely rewritten, thoroughly and publicly tested, new release of IIS. Sufficient operational testing should follow to ensure that the initial wave of security vulnerabilities every software product experiences has been uncovered and fixed. This move should include any Microsoft .Net Web services, which requires the use of IIS. Gartner believes that this rewriting will not occur before year-end 2002 (0.8 probability).
As it turns out, Gartner’s estimate was overly optimistic. Now, several years later, no adequately reworked version of IIS has been released.
So how serious is this?
In all honesty it must be said that Microsoft has learned to react generally well to newly discovered security holes. Although the severity of many security problems is often downplayed and the underlying cause (flawed or absent security models) is glossed over, information and patches are generally released promptly and are available to the user community without cost. This is commendable. But then the procedure has become routine for Microsoft, since new leaks are discovered literally several times a week, and plugging leaks has become part of Microsoft’s core business. The flood of patches has become so great that it’s almost impossible to keep up with it. This is illustrated by the fact that most of today’s security breaches successfully exploit leaks for which patches have already been released. In fact the sheer volume of patchwork eventually became sufficient to justify the automated distribution of patches. For recent versions of Windows there is an automatic service to notify the user of required “critical updates” (read: security patches) which may then be downloaded with a single mouseclick. This service (which does work fairly well) has become very popular. And for good reason: in the year 2000 alone MS released about 100 (yes, one hundred) security bulletins - that’s an average of one newly discovered security-related issue every three to four days! The number of holes in Microsoft products would put a Swiss cheese to shame.
And the pace has increased rather than slowed down. For example, once you install a “recommended update” (such as Media Player) through the Windows Update service, you discover immediately afterwards that you must repeat the whole exercise in order to install a “critical update” to patch the new security leaks that were introduced with the first download! It’s hardly reasonable to expect users to keep up with such a rat race, and not surprising that most users can’t. As a result, many E-mail viruses and worms exploit security holes that are months or years old. The MSBlaster worm that spread in the summer of 2003 managed to infect Windows Server 2003 using a vulnerability that was already present in NT4!
In an age where smokers sue the tobacco industry for millions of dollars over health issues, all Microsoft products had better come with a warning on the package, stating that “This product is insecure and will cause expensive damage to your ICT infrastructure unless you update frequently and allocate time on a daily basis to locate, download, test and install the patch-du-jour”. Unfortunately they don’t, and Windows-based macro and script viruses emerge at a rate of several hundreds a month, while the average time for an unpatched Windows server with a direct Internet connection to be compromised is only a few minutes.
Patch release as a substitute for quality
An interesting side effect of the ridiculous rate at which patches have to be released is that some users now get the impression that Microsoft takes software maintenance very seriously and that they are constantly working to improve their products. This is of course rather naive. If they’d bought a car that needed serious maintenance or repairs every two weeks or so, they probably wouldn’t feel that way about their car dealer. Redmond has exploited this misconception more than once. In recent comparisons of Windows vs. Linux they quoted patch response times, in an attempt to show that Windows is more secure than Linux. Of course, they had to reclassify critical vulnerabilities as non-critical, misinterpret a lot of figures, and totally ignore the fact that Windows develops many times as many vulnerabilities as any other product.
Even so, if Microsoft’s patching policy were effective we’d have run out of security holes in most MS products about now, starting with IE. Obviously no amount of patching can possibly remedy the structural design flaws in (or absence of) Microsoft products’ security. A patch is like a band-aid: it will help to heal a simple cut or abrasion, but it won’t prevent getting hurt again, in the same way or otherwise, and for a broken leg or a genetic deficiency it’s totally useless, even if you apply a few thousand of them. The obvious weak point in the system is of course the integration of application software into the OS. Microsoft likes to call Windows “feature-rich”, but when they have to release an advisory on a serious vulnerability in Windows Server 2003 that involves MIDI files, it becomes obvious that the set of “features” integrated in Windows has long since passed the limits of usefulness.
Microsoft’s solution: security through obscurity
Lately Microsoft lobbyists are trying to promote the idea that free communication about newly discovered security leaks is not in the interest of the user community, since public knowledge of the many weaknesses in their products would enable and even encourage malicious hackers to exploit those leaks. Microsoft’s Security Response Center spokesman Scott Culp blamed security experts for the outbreak of worms like Code Red and Nimda, and in an article on Microsoft’s website in October 2001 he proposed to restrict circulation of security-related information to “select circles”. And it’s all for our own good, of course. After all, censorship is such a nasty word.
In August 2002, during a court hearing discussing a settlement between Microsoft and the DoJ, Windows OS chief Jim Allchin testified how cover-ups are Microsoft’s preferred (and recommended) course of action:
“There is a protocol dealing with software functionality in Windows called message queueing, and there is a mistake in that protocol. And that mistake, if we disclosed it, would in my opinion compromise a company who is using that particular protocol.”
In the meantime things are only getting worse with the lack of security in Microsoft products. The latest incarnation of Office (Office XP) provides full VBA support for Outlook, while CryptoAPI provides encryption for messages and documents, including VBS attachments and macros. In other words, anti-virus software will no longer be able to detect and intercept viruses that come with E-mail and Word documents, rendering companies completely defenseless against virus attacks.
Clearly this is a recipe for disaster. It’s like a car manufacturer who floods the market with cars without brakes, and then tries to suppress all consumer warnings in order to protect his sales figures.
Count the bugs: 1 + 1 = 3
Another worrying development is that leaky code from products such as IIS or other products is often installed with other software (and even with Windows XP) without the system administrators being aware of it. For example: SQL Server 2000 introduced ‘super sockets’ support for data access via the Dnetlib DLL. It provides multi-protocol connectivity, encryption, and authentication; in other words a roll-up of the different implementations of these technologies in past versions of the product. A system would only have this DLL if SQL Server 2000, the client administration tools, MSDE, or a vendor-specific solution was installed on the box. However, with XP this DLL is part of the default installation, even on the home edition. One has to wonder how a component goes from “installed only in specialized machines on a particular platform” to “installed by default on all flavors of the OS.” What other components and vulnerabilities are now automatically installed that we don’t know about?
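A quick way to see for yourself what is quietly sitting in the system directory is simply to look. The trivial sketch below (plain Python on Windows) checks for the DLL named above; the list is easy to extend, although the presence of a file of course proves nothing beyond the fact that it was installed.

```python
# Check whether a few 'specialised' DLLs are present in the system directory.
import os

SYSTEM32 = os.path.join(os.environ.get("SystemRoot", r"C:\Windows"), "System32")
SUSPECTS = ["dnetlib.dll"]   # the SQL Server 2000 'super sockets' DLL named above

for name in SUSPECTS:
    path = os.path.join(SYSTEM32, name)
    status = "present" if os.path.exists(path) else "not found"
    print(f"{name}: {status} ({path})")
```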
And the Windows fileset is getting extremely cluttered as it is. Looking through the WINNT directory on a Windows 2000 or XP system, you’ll find lots of legacy executables that are obsolete and never used: Media Player 5, 16-bit legacy code from previous Windows versions as far back as version 3.10 (in fact the bulk of the original Windows 3.10 executable code is there), files that belong to features that are never used in most cases (e.g. RAS support) or accessibility options that most users fortunately don’t need (such as the Narrator and Onscreen Keyboard features). Dormant code means possible dormant security issues. The needless installation of such a roundup reflects the laziness of Microsoft’s developers: if you just install everything but the kitchen sink, you can just assume it’s there at a later time and not bother with the proper verifications. Of course this practice doesn’t improve quality control at all, it merely adds to the bloat that has plagued Windows from day one.
Expect no real improvement
The future promises only more of the same. Since Microsoft is always working on the next versions of Windows, it seems a safe assumption that we’re stuck with the current flawed Windows architecture and that no structural improvements are to be expected. So far Microsoft has never seemed capable of cleaning up their software architectures. Instead they concentrate on finding workarounds to avoid the real problem.
A prime example was Microsoft’s recommendation that PCs “designed for Windows XP” should no longer accept expansion cards but only work with USB peripherals. The reason for this recommendation was that XP and its successor Vista still suffer from the architecture-related driver problems that have caused so many Windows crashes in the past. In an attempt to get rid of the problem, Microsoft tried to persuade PC manufacturers to abandon the PCI expansion bus. The fact that this recommendation was immediately rejected by the hardware industry is irrelevant; the point is that Microsoft tried to get rid of expansion bus support rather than improve Windows’ architecture to make it robust. When this attempt failed, they resorted to their second option, which was to discourage (in XP) and prevent (in Vista) the user from installing any device driver that hasn’t been tested and certified not to cause OS instabilities.
In fact Microsoft does recognize the need for structural improvements in Windows’ architecture. However, in spite of years of effort it has proven impossible for them to deliver any. This is perfectly illustrated by the early announcements of the .Net initiative, shortly after the turn of the century. Eventually .Net materialized as a framework for programmers to facilitate the development of network applications. During the first year or so after its initial announcement, though, .Net was presented as the future of desktop computing that was going to solve all deficiencies in Windows once and for all. Its most touted “innovation” was “Zero Impact Install”, which amounted to doing away with the tight integration between application and operating system. Instead of the current mess of DLLs being inserted into the OS and settings spread throughout an insecure registry database, applications would live in their own subdirectories and be self-contained. Code would be delivered in a cross-platform format and be JIT-compiled (Just In Time) for the platform it was to run on.
While these things would have meant a dramatic improvement over the current situation, their innovation factor was of course close to zero: Windows’ lack of adequate separation between OS and application code is exactly what makes sophisticated ICT professionals long for Unix, mainframe environments or even DOS. JIT-compilation is nothing new either; it wasn’t even a new idea when Sun Microsystems proposed Java in the mid-1990s. But more importantly, none of these changes ever materialized. Windows XP and Vista were going to be the first step to accomplish this enlightened new vision, but after the release of both Windows versions no trace of these bold plans remains. Instead the release of Vista was delayed by several years, because it was impossible to make even minor improvements without massive code rewrites.
So what we ultimately got instead was Windows Vista: a version of Windows so bloated with useless and annoying features that its sales have slumped in a manner never seen before in the history of Windows, and that is almost universally hated by the user community. As an illustration of how seriously Microsoft botched the release of Vista, Service Pack 1 for Vista involved a complete kernel replacement! Shortly after releasing SP1, though, Microsoft had to retract it in a hurry because it caused “updated” PCs to get stuck in an endless loop of reboots.
Microsoft will soon (read: sometime before the end of the decade) release Vista’s successor (Windows version 8), but especially after the dog’s breakfast that Vista turned out to be, there’s not much to hope for. Microsoft announced early in the development stage that Windows 8 is based on the same kernel and code base as Windows Vista.
Trustworthy computing? Not from Microsoft
Lately Microsoft has been making a lot of noise about how they now suddenly take security very seriously, but the bad overall quality of their product code makes it impossible to live up to that promise. Their Baseline Security Analyzer (which they released some time ago as part of their attempts to improve their image) was a good indication: it didn’t scan for vulnerabilities but merely for missing patches, and it did a sloppy job at that with a lot of false positives as a result.
Let’s face it: Microsoft’s promises about dramatic quality improvement are unrealistic at best, not to say misleading. They’re impossible to fulfil in the foreseeable future, and everyone at Microsoft knows it. To illustrate, in January 2002 Bill Gates wrote in his “Trustworthy computing” memo to all Microsoft employees:
“Today, in the developed world, we do not worry about electricity and water services being available. With telephony, we rely both on its availability and its security for conducting highly confidential business transactions without worrying that information about who we call or what we say will be compromised. Computing falls well short of this, ranging from the individual user who isn’t willing to add a new application because it might destabilize their system, to a corporation that moves slowly to embrace e-business because today’s platforms don’t make the grade.”
Now, for “today’s platforms” read “a decade of Windows, in most cases” and keep in mind that Microsoft won’t use their own security products but relies on third party products instead. Add to that the presence of spyware features in Windows Media Player, Internet Explorer 7 and XP’s Search Assistant (all of which contact Microsoft servers regularly whenever content is being accessed), the fact that Windows XP Home Edition regularly connects to a Microsoft server for no clearly explained reason, and the hooks for the Alexa data gathering software in IE’s ‘Tools/Show Related Links’ feature… and the picture is about complete. Trustworthy? Sure! Maybe Big Brother isn’t watching you, and maybe nothing is being done with the information that’s being collected about what you search for and what content you access… and maybe pigs really can fly. For those who still don’t get it: in November 2002 Microsoft made customer details, along with numerous confidential internal documents, freely available from a very insecure FTP server. This FTP server sported many known vulnerabilities, which made gaining access a trivial exercise. Clearly, Microsoft’s recent privacy-concerned and quality-concerned noises sound rather hollow at best. They don’t even have any respect for their customers’ privacy and security themselves.
As if to make a point, a few weeks after Gates’ memo on Trustworthy Computing, Microsoft managed to send the Nimda worm to their own Korean developers, along with the Korean language version of Visual Studio .Net, thus spreading an infection that had originated with the third-party Korean translators. How ‘trustworthy’ can we expect a company to be, if they aren’t even capable of basic precautions such as adequate virus protection in their own organisation?
Of course nothing has changed since Gates wrote the above memo. Security holes and vulnerabilities in all MS products, many of which allow blackhat hackers to execute arbitrary code on any PC connected to the Internet, continue to be discovered and exploited with a depressing regularity. Microsoft claims to have put 11,000 engineers through security training to solve the problem, but all users of Microsoft products continue to be plagued by security flaws. It’s obvious that real improvement won’t come around anytime soon. Windows Server 2003 was marketed as “secure by design” but apart from a couple of improved default settings and the Software Restriction Policies not much has changed. Right after Windows Server 2003 was released, its first security patch (to plug a vulnerability that existed through Internet Explorer 6) had to be applied, to nobody’s surprise.
Lip service will do
Following Gates’ memo on Trustworthy Computing, Microsoft has made a lot of noise about taking security very seriously. However, Stuart Okin, MS Security Officer for the UK, described security as “a recent issue”. During an interview at Microsoft’s Tech Ed event in 2002, Okin explained that recent press coverage on viruses and related issues had put security high on Microsoft’s agenda. Read: it was never much of an issue, but now it’s time to pay lip service to security concerns in order to save public relations.
And indeed Microsoft’s only real ‘improvement’ so far has been an advertising campaign that touts Windows XP as the secure platform that protects the corporate user from virus attacks. No, really - that’s what they said. They also made a lot of noise about having received “Government-based security certification”. In fact this only means that Windows 2000 SP3 met the CCITSE Common Criteria, so that it can be part of government systems without buyers having to get special waivers from the National Security Agency or perform additional testing every time. CC-compliance does not mean the software is now secure; it merely means the testing has confirmed that the code works as per its specifications. That’s all; the discovery of new security holes at least once a week has nothing to do with it. But even so, Windows 2000 SP3 was the first Microsoft product ever that worked well enough to be CC-certified. Go figure.
Gates’ initial launch of the Trustworthy Computing idea was much like the mating of elephants. There was a lot of trumpeting and stamping around the bush, followed by a brief moment of activity in high places, and then nothing happened for almost two years. Eventually Steve Ballmer made the stunning announcement that the big security initiative would consist of… a lot of minor product fixes (yes, again), training users, and rolling up several minor patches into bigger ones. Microsoft’s press release actually used the words, quote, “improving the patch experience”, unquote. So far this “improvement” has mainly consisted of monthly patch packages, which more than once had to be re-released and re-installed in a ‘revised’ version within the same month.
Another sad aspect of Microsoft’s actual stance on security is neatly summed up by Internet.com editor Rebecca Lieb, who investigated Microsoft’s commitment on fighting the epidemic flood of spam. She concludes:
“[Microsoft] executives are certainly committed to saying they are [committed to helping end the spam epidemic]. These days, Bill Gates is front and center: testifying before the Senate; penning a Wall Street Journal editorial; putting millions up in bounty for spammer arrests; building a Web page for consumers; and forming an Anti-Spam Technology & Strategy Group, “fighting spam from all angles– technology, enforcement, education, legislation and industry self-regulation.”
When I meet members of that group, I always ask the same question. Every version of the Windows OS that shipped prior to XP’s release last year is configured –by default– as an open relay. Millions have been upgraded to broadband. Ergo, most PCs on planet Earth emit a siren call to spammers: “Use me! Abuse me!” Why won’t Microsoft tell its millions of registered customers how to close the open relay?”
True enough, in 2004 over 75% of all spam was distributed via Windows PCs (on DSL and cable Internet connections) that had been compromised by email worms and Trojan Horse infections. But rather than fix the vulnerabilities in their products, Microsoft has so far concentrated on high-profile actions such as a collaboration with the New York State Attorney General and a highly publicized crusade against Internet advertising companies. Bill Gates’ reckless prediction that the spam problem would be solved in the year 2006 has only served to demonstrate the value of Microsoft’s promises on quality and security.
Neither is Microsoft’s own implementation of the Sender Policy Framework even remotely effective. Microsoft touted their use of SPF as a significant step in spam reduction, and introduced it with so much fanfare that you’d think they’d developed it themselves. However, security appliance firm CipherTrust soon found that spammers adopted the new standard for email authentication much faster than legitimate emailers, and shortly after its introduction more spam than legitimate email was sent using the Sender Policy Framework. While this is going on, implementors are balking at MS’s licensing policy for the Sender ID system, which amounts to creating a lasting dependency on Microsoft’s permission to (continue to) use Sender ID.
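For context, an SPF record is nothing more than a TXT record in DNS listing which hosts may send mail for a domain; it says nothing about whether that mail is spam, which is why spammers could simply publish records of their own. A minimal lookup sketch, assuming the third-party dnspython package; the domain is a placeholder.

```python
# Look up a domain's SPF policy, which is published as a DNS TXT record.
# Assumes the third-party dnspython package (pip install dnspython).
import dns.resolver

DOMAIN = "example.com"   # placeholder

for rdata in dns.resolver.resolve(DOMAIN, "TXT"):
    txt = b"".join(rdata.strings).decode("ascii", "replace")
    if txt.lower().startswith("v=spf1"):
        # The mechanisms after "v=spf1" name the hosts allowed to send
        # mail for this domain (ip4:, include:, mx, and so on).
        print(f"{DOMAIN} SPF: {txt}")
```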
Meanwhile Microsoft is branching into new markets and so far runs true to form. They have already shipped their first cellphone products. Orange, the first cellnet operator to run Microsoft Smartphone software on their SPV phones, has already had to investigate several security leaks. On top of that, the phones are reported to crash and require three subsequent power-ups to get going again, to call random numbers from the address book, and to have flaky power management. I shudder to think what will happen when their plans for the automotive software market begin to materialize.
Bottom line: things are bad
In spite of what Microsoft’s sales droids would have us believe, the facts speak for themselves: developments at Microsoft are solely driven by business targets and not by quality targets. As long as they manage to keep up their $30 billion plus yearly turnover, nobody’s posterior is on the line no matter how bad their software is.
Microsoft products are immature and of inferior quality. They waste resources, do not offer proper options for administration and maintenance, and are fragile and easily damaged. Worse, new versions of these products provide no structural remedy, but are in fact point releases with bugfixes, minor updates and little else but cosmetic improvement. Recent versions of Microsoft products are only marginally more secure than those that were released years ago. In fact, if it weren’t for additional security products such as hardware-based or Unix-based filters and firewalls, it would be impossible to run an even remotely secure environment with Windows.
MS products are bloated with an almost baroque excess of features, but that’s not the point. The point is that they are to be considered harmful, lacking robustness and security as a direct result of basic design flaws that are in many cases over a decade old. They promise to do a lot, but in practice they don’t do any of it very well. If you need something robust, designed for mission-critical applications, you might want to look elsewhere. Microsoft’s need for compatibility with previous mistakes makes structural improvements impossible. The day Microsoft makes something that doesn’t suck, they’ll be making vacuum-cleaners.
Besides, 63,000 known defects in Windows should be enough for anyone.