Ozinga, which operates 21 sites across its distributed network, owns and operates 500 trucks that are vital to its operations. The trucks need maintenance, but with siloed data centers and server farms, there was no single point of reference for tracking it. Some branches used software; others scribbled on a chalkboard to indicate that trucks were up to date on repairs and other work.
The company always had a wide area network (WAN) in place but lacked the tools to tie all 21 locations together cohesively. Tom Allen, Ozinga's IT director, said the company needed a way for truck mechanics to communicate with one another and a place to keep an inventory of maintenance records and other pertinent information.
The first step was data center consolidation. The company, which had data centers scattered across its 21 locations, moved to a single data center in a central location. It also consolidated its Citrix server into the same central spot. In each branch, the company rolled out client terminals so mechanics could access the maintenance software over the WAN.
The maintenance software, called TMT, matters for several reasons: It saves Ozinga money by avoiding overstocked parts; it keeps the truck fleet well maintained, heading off unexpected repair costs; and it limits liability by giving Ozinga a verifiable record that every truck's maintenance is up to date.
To ease into the data center consolidation project, Allen said, the company moved slowly and added one application at a time. That allowed the team to monitor each application's performance on the IT side and to gauge user perception. Ozinga also added and upgraded WAN monitoring tools, which Allen called "a very important step forward" for the consolidation project.
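WAN monitoring tools of the kind Allen describes typically track per-link latency and jitter to each site. As a minimal sketch (not Ozinga's actual tooling; branch names and RTT values are hypothetical), summarizing round-trip-time probe samples might look like:

```python
import statistics

# Hypothetical per-branch RTT samples in milliseconds. In a real monitor
# these would come from periodic probes (e.g., ICMP ping) to each site.
rtt_samples = {
    "branch_a": [42.0, 44.5, 41.8, 90.2, 43.1],  # one spike stands out
    "branch_b": [18.3, 18.9, 19.1, 18.6, 18.4],
}

def link_health(samples):
    """Summarize a link's RTT samples: mean latency plus jitter
    (standard deviation), the two numbers a WAN monitor watches."""
    return {
        "mean_ms": round(statistics.mean(samples), 1),
        "jitter_ms": round(statistics.stdev(samples), 1),
    }

for branch, samples in rtt_samples.items():
    print(branch, link_health(samples))
```

A steadily rising jitter figure on one branch, rather than raw bandwidth use, is often the first visible sign of the kind of interactive-application trouble described below.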
Despite all the planning, when the TMT software was rolled out over the WAN, mechanics began experiencing problems immediately.
"We thought this would work great, and it didn't," Allen said. Mechanics encountered performance problems and freezing with the maintenance application.
To try to quell the problems, each site attempted its own fix. One site added bandwidth. Another tried quality of service. Nothing worked. Consultants were called in for a second opinion but could find nothing wrong. Still, the application was plagued with jitter and freezes and was deemed worthless by the mechanics whose lives it was supposed to make easier.
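The article doesn't describe the quality-of-service configuration that one site tried, but on a Linux WAN router such an attempt commonly means traffic shaping with `tc`. The sketch below is purely illustrative: the interface name, rates, and the classification of Citrix ICA traffic (TCP port 1494) are assumptions, not Ozinga's configuration.

```shell
# Illustrative QoS policy: prioritize interactive Citrix ICA sessions
# over bulk traffic. All names and rates are hypothetical.
WAN_IF=eth1

# Root HTB qdisc; unclassified traffic falls into class 1:20.
tc qdisc add dev "$WAN_IF" root handle 1: htb default 20

# Parent class capped at the (assumed) upstream rate of the link.
tc class add dev "$WAN_IF" parent 1: classid 1:1 htb rate 3mbit

# High-priority class for interactive Citrix traffic.
tc class add dev "$WAN_IF" parent 1:1 classid 1:10 htb rate 1mbit ceil 3mbit prio 0

# Best-effort class for everything else (video, file transfers).
tc class add dev "$WAN_IF" parent 1:1 classid 1:20 htb rate 2mbit ceil 3mbit prio 1

# Steer Citrix ICA sessions (TCP port 1494) into the priority class.
tc filter add dev "$WAN_IF" parent 1: protocol ip u32 \
    match ip dport 1494 0xffff flowid 1:10
```

Shaping like this only controls the router's own outbound queue; it cannot fix loss or jitter introduced elsewhere on the path, which may be why the per-site fixes failed here.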
"Basically," Allen said, "it came down to this. It was a million-dollar rollout of this application that came to a halt because the mechanics said it was unusable."
Allen said he lost a lot of sleep racking his brain about the problem. His colleague, Alex Kropiewnicki, said he "turned to prayer."
The issue sparked dozens of meetings, and several potential solutions were discussed. One suggestion was to move everything to a Multiprotocol Label Switching (MPLS) network, but Allen said that option was quickly dismissed because of the cost involved. Running MPLS to 21 sites would have added thousands of dollars a month to a budget that was already stretched too thin for comfort.
"Fortunately, we never got to that," he said, adding that Ozinga wanted to keep the WAN running with relatively inexpensive connections strung together with VPNs, an architecture that in the past had provided adequate performance.
The team also considered removing all video from the WAN, because video applications are notorious bandwidth hogs. Ozinga had been using video for years to monitor concrete production.
"We had the same issue with jitter and packet loss," Allen said. The company determined that it had plenty of bandwidth for both video and the new truck maintenance applications to coexist in harmony.
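The capacity check behind that conclusion amounts to simple arithmetic: add up the video and thin-client loads and compare against the link rate. The numbers below are hypothetical, since the article gives no actual link speeds or stream rates; Citrix thin-client sessions typically consume only tens of kilobits per second each.

```python
# Hypothetical figures to illustrate the capacity check; the article
# does not state Ozinga's actual link speeds or traffic rates.
link_mbps = 10.0
video_streams = 4
video_mbps_each = 1.0
citrix_sessions = 6
citrix_kbps_each = 40  # thin-client sessions are typically tens of kbps

used_mbps = video_streams * video_mbps_each + citrix_sessions * citrix_kbps_each / 1000
print(f"{used_mbps:.2f} of {link_mbps} Mbit/s in use")
```

With numbers in this range the link is nowhere near saturated, which matches the team's finding that raw bandwidth was not the bottleneck; the jitter had another cause.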
Someone suggested WAN acceleration, but Allen and his crew were skeptical because most attempts to alleviate the problem had already failed. "We didn't have high hopes for it," he said.
Ultimately, the company tried it, using Citrix's WANScaler application accelerators. Application performance improved, and jitter, latency and congestion disappeared. The mechanics picked up on the change right away, Allen said. Once the 30-day product trial ended, the phone started to ring with mechanics asking what had happened, because the application's performance had reverted to its old, slow ways. Ozinga was able to extend the trial until it could install accelerators at each location.
With that application taken care of, Allen said, his team could return its focus to its larger network upgrade projects.
"We're on a trend to continue to consolidate and move more applications into the central data center," he said. "We still have some applications distributed, like the mechanics' software used to be, but we're moving a lot of that."
This was first published in August 2008