About 2,000 miles separate San Francisco and Little Rock, but for The Sharper Image, it might as well have been an infinite distance.
The company had been streaming transaction logs between two data centers -- the San Francisco headquarters and the Little Rock backup. Now, it was moving its primary data center from one side of San Francisco to the other and wanted to find a way to speed up the link between headquarters and the backup site.
But before the move, Sharper Image hit a couple of bumps, especially when the movers didn't show as expected.
The company had given itself a three-day weekend to get the primary data center moved and cut over, figuring that would be more than enough time. It wasn't.
During a presentation at Burton Group's Catalyst Conference, Steve Matsuo, Sharper Image's senior manager of systems and programming, said the movers eventually showed up -- but days later than planned. That left him and his staff only a few hours, instead of days, to set up the new data center and cut over.
After a weekend of sweat and tears, Matsuo said, the cutover ended up going relatively smoothly.
Once the move was completed, however, restricted bandwidth created more problems. The transaction log streaming rate is up to 25 gigabytes per hour, or roughly 60 megabits per second. That's more than a DS-3, but Sharper Image was budgeted for only a couple of T-1s. That restriction caused queuing on the links, and at times the backup site in Little Rock was hours, sometimes nearly a full day, behind headquarters.
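Those numbers hold up to a back-of-the-envelope check. The sketch below (assuming binary gigabytes, i.e. 2^30 bytes, and standard T-1/DS-3 line rates) shows why two T-1s let the backup site fall nearly a day behind:

```python
# Back-of-the-envelope check of the article's figures.
# Assumption: "gigabyte" here means 2**30 bytes.
GB = 2**30

# 25 GB/hour of transaction logs, expressed in megabits per second.
rate_mbps = 25 * GB * 8 / 3600 / 1e6      # ~59.7 Mbps, i.e. "roughly 60"

T1_MBPS = 1.544                            # one T-1 line
DS3_MBPS = 44.736                          # one DS-3 line
budgeted_mbps = 2 * T1_MBPS                # two T-1s = ~3.1 Mbps

# How long two T-1s need to drain one hour's worth of logs:
drain_hours = rate_mbps / budgeted_mbps    # ~19 hours -> nearly a day behind
```

With the offered load nearly twenty times the budgeted capacity, the queue on the links could only grow during busy periods, which matches the "nearly a full day behind" observation.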
Matsuo said Sharper Image needed a way to chop disaster recovery time down to one hour or less for the entire system. That meant that the company would need either more bandwidth, compression of the transaction log file by servers, or WAN optimization. On the advice of a reseller, Sharper Image signed on for Juniper's WX.
According to Juniper, the WX acceleration platform uses compression, sequence caching, TCP and application-specific acceleration, bandwidth management, and path optimization to keep transport moving swiftly.
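Sequence caching is, at heart, dictionary-style deduplication: both ends of the link remember data patterns they have already exchanged, so repeated patterns are sent as short references instead of full payloads. The sketch below is an illustrative simplification, not Juniper's implementation -- the fixed chunk size, SHA-256 keys, and zlib fallback compression are all arbitrary choices here:

```python
import hashlib
import zlib

def dedup_send(stream: bytes, cache: dict, chunk: int = 1024) -> list:
    """Split a byte stream into chunks; send a short reference for any
    chunk the peer's cache already holds, otherwise send it compressed."""
    out = []
    for i in range(0, len(stream), chunk):
        block = stream[i:i + chunk]
        key = hashlib.sha256(block).digest()
        if key in cache:
            # Peer has seen this pattern: send only the reference key.
            out.append((b"REF", key))
        else:
            # New pattern: cache it and send it compressed.
            cache[key] = block
            out.append((b"RAW", zlib.compress(block)))
    return out
```

Transaction logs tend to be highly repetitive, which is why this combination of caching and compression can shrink the traffic far more than generic compression alone.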
The company switched the WX in and out, tested it for data integrity, and set up a conditional purchase order. Matsuo said the WX did the trick. It compressed the transaction log being sent over the pipe and accelerated the flow. Then he was charged with convincing the COO that Sharper Image needed it. After some clever illustration of the problem and the solution, the COO was also sold.
That was two years ago. Since then, the company has had no failures, no loss of data integrity, and no other problems. The backup site is now less than one second behind headquarters, Matsuo said, and Sharper Image has cut bandwidth costs up to 90%.
With that backup problem out of the way, he said, it was time to speed up CAD/CAM files sent by remote engineers from the main office in San Francisco to the engineers in Novato and Marin, about 25 miles north. Matsuo said the merchandising and marketing staff and the design engineers frequently send huge CAD/CAM drawings and other documents back and forth.
The link was slow and hogged a lot of bandwidth, and the CEO-founder, who lives in Marin, wasn't pleased with the poor performance when he tried to drag files across his Windows desktop to or from a San Francisco server.
Since it was already using the Juniper WX on its backup links, Sharper Image tried a pair of Juniper WXC appliances -- similar to the WX, but with a hard disk to store data patterns.
That, too, sped things up. Adding in the WXCs cut bandwidth use by more than 50%, Matsuo said, massively improving the performance of Windows CIFS files. The obvious improvement made it clear to executives that the solution was worth paying for, he said.
"Feel the pain," he said. "Let them know about the pain. That's how you get the money."
And while everything went swimmingly for Sharper Image, Matsuo suggested that companies experiencing similar troubles shop around for solutions, test them for data integrity, and ensure that they work within the architecture in all backup and failure modes. He also recommended retesting the system whenever the architecture changes.
Once a workable solution is found, he said, management will see the ROI.