Between 10:03 a.m. and 10:23 a.m. U.S. Pacific Time, only about 30 percent of visitors managed to enter Amazon.com, according to mobile and Internet management firm Keynote Systems, which tracks Web site performance.
After stabilizing, Amazon.com again wobbled, and its availability dropped to about 68 percent between 10:56 a.m. and 11:09 a.m., said Shawn White, Keynote's director of external operations.
After that, the site returned to normal and remained stable as of press time.
However, the technical gremlins also hit the company's U.K. storefront on Monday, and the problems there were still ongoing.
The U.K. site first experienced problems at 10:06 a.m. PT, and its availability dropped as low as 38 percent -- meaning that about six of 10 people couldn't enter -- but by 12:11 p.m. the availability had climbed back to about 96 percent, White said.
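Availability figures like these are typically derived from periodic probe requests: the percentage of probes that successfully load the page over a measurement window. As a rough illustration (the probe outcomes below are invented, and Keynote's actual methodology is more involved):

```python
def availability(results):
    """Percentage of probe requests that succeeded in a window."""
    return 100.0 * sum(results) / len(results)

# Hypothetical probe outcomes over one window: True = page loaded OK
probes = [True, False, False, True, False, True, False, False, True, False]
print(f"{availability(probes):.0f}% available")  # -> 40% available
```

At 38 percent availability, roughly six of every ten probe requests (and, by extension, visitors) fail to get through.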
Asked for comment, Amazon provided this statement via e-mail: "Some customers reported intermittent problems accessing Amazon retail Web sites on Monday morning. However, we are working to resolve the issues, and Amazon's web services are not affected."
Even people who managed to enter and browse the sites faced slow performance: While Amazon.com pages typically load in six seconds or less, that average climbed to about 15 seconds during the affected periods, White said.
Gomez, another Web site monitoring firm, puts Amazon's normal average response times between 3 seconds and 8.5 seconds; that average rose to 14 seconds on Friday and ranged between 2.5 and 14 seconds on Monday.
On Friday, when the availability problems lasted about three hours, and again on Monday, most shoppers who couldn't get in received a cryptic error message reading "Http/1.1 Service Unavailable," which means nothing to nontechnical people.
To White, this indicates that whatever caused the problem proved hard to isolate, preventing the company from configuring its system to display a more intelligible message acknowledging the problem in plain English.
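The bare status line shoppers saw corresponds to HTTP status code 503 (Service Unavailable). A front-end server can be set up to answer with that code but a plain-English body instead. This is not Amazon's actual setup, just a minimal sketch using Python's standard-library `http.server`:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

FRIENDLY_503 = (
    "We're sorry -- the store is temporarily unavailable. "
    "Please try again in a few minutes."
)

class MaintenanceHandler(BaseHTTPRequestHandler):
    """Answers every request with HTTP 503 plus a plain-English body,
    rather than the bare 'Service Unavailable' status line."""

    def do_GET(self):
        body = FRIENDLY_503.encode("utf-8")
        self.send_response(503)                 # Service Unavailable
        self.send_header("Retry-After", "300")  # standard hint: retry in 5 min
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), MaintenanceHandler).serve_forever()
```

In a real deployment this kind of fallback page usually lives on a load balancer or reverse proxy in front of the application servers, which is exactly where it helps only if the failing component can be identified and routed around.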
White's guess is that a misconfiguration somewhere in Amazon's complex e-commerce system disrupted otherwise unrelated pieces of its vast network of databases, data centers, and application and Web servers.
If that is indeed the cause, the lesson for Amazon and anyone else is to perform rigorous testing before making any alterations, especially when a change touches many moving parts in the system, White said.