EXHIBIT 7 Move Element Project Plan (Tasks 3.0 through the end of the plan repeat the same five steps for each move element: Define and Document Move Requirements, Detail Planning, Pre-Move Execution, Move Execution, and Post-Move Execution.)
One major difference between the approach adopted by Sallie Mae and the approach introduced by their IT consultants was that move elements were first identified, and managed, from a business-application perspective rather than a technology-infrastructure perspective. As described in Exhibit 8, the software application view provided a vertical business perspective, and the technology infrastructure view provided a horizontal cross-application perspective. Move elements were therefore managed as a set of interdependent hardware and software components. Large business applications were divided into smaller logical move elements to better facilitate project planning and move flexibility.

EXHIBIT 8 Move Element Definition Approach (1) Define the move element from the applications point of view (vertical); 2) then define it from the infrastructure point of view, to catch any gaps (horizontal). The diagram crosses applications such as Wired Scholar, OpenLineSS, and Class against infrastructure layers such as Mainframe, Tandem, and Network.)

[The IT consultants] had us looking at this move from a technology viewpoint. The way we looked at it, the superstructure was the application. We defined move elements as a set of hardware and/or software components that can and should move together because of interdependencies.

—John Bennett, Project Manager for Data Center Relocation

The systems development people became the team leads. They understood the dependencies, the integration points. . . . It was a hard decision to make, and it was difficult to let someone else be in charge of the data-center move. But it was critical that we didn’t lose sight of the applications because they were our primary concern. It then became our job to focus on a higher level of coordination, resolving the dependencies. It took time to get to an organized state . . . the whole structure shook out over about two months.

—Becky Robinson, Director of Systems Management

The plans were incredibly detailed, and they were written from an application-owner point of view. The trick was to see the dependencies. The IT applications people knew those dependencies and were able to align them with what they had to do.

—Allan Horn, Vice President, Technology Operations

Move-element freeze policies (see Exhibit 9) required that no application changes be implemented for a 2-week period prior to the move-element implementation and for a 1-week period afterward. Infrastructure changes were not allowed 4 weeks before the move-element implementation and for a 1-week period afterward.

All plans, policies, timelines, progress reports, and other relevant documents were coordinated by the FPMO staff for each area of the integration. These documents were then e-mailed to key managers and posted on Sallie Mae Central, the company’s intranet. This provided companywide access to the information needed by both IT and business managers to manage the project effectively as well as to manage ongoing operations.

Equipment Move Strategies

One of the first steps in moving the data center was to make decisions on how all of the equipment from Reston would fit into the Indianapolis facility. A number of approaches were used to redesign the space to accommodate the new equipment.

Our first impression, and theirs, was “this will never fit.” The Sallie Mae data center was the length of a football field. Once we started doing floor plans, we found that Sallie Mae was underutilizing their space. We used a number of strategies here. We eliminated local monitors and keyboards on storage racks, for example. This saved a lot of space. Replacing old equipment with smaller new equipment helped too.

—Jon Jones, Director of Client Server Computing

Excellent vendor relationships were key to rationalizing equipment, making infrastructure improvements, and carrying out the project on time and within budget.
Production Environment
The production environment move element freeze standard requires that:
• No application changes affecting the move element to be moved are implemented for a two-week period prior and one-week period after the move element implementation date.
• No infrastructure changes affecting the move element to be moved are implemented for a four-week period prior and one-week period after the move element implementation date.
In addition, it is desirable that any application changes be implemented early enough to have been executed successfully in production before the move element transition.
This generally would mean the following:
1) For changes that impact monthly processing, no changes should be made after the month-end execution preceding the move element transition date.
2) For changes that impact weekly processing, no changes should be made after the weekly processing immediately preceding the move element transition date.
3) For changes that impact daily processing, no changes should be made for two weeks prior to the move element transition date.
Example: If a move element is scheduled to move on 3/17, any application changes affecting month-end processing should be implemented in time to process for February month end. If the application change cannot make the February month-end implementation, it should be held and implemented after the post-move freeze ends on 3/24.
Quality Assurance and Development Environment
The QA and Development Environment move element freeze standard requires that:
• No application changes affecting the move element to be moved are implemented for a three-day period prior and three-day period after the move element implementation date. Any changes made during the two-week period prior to the move must be documented and given to the Work Plan Manager (WPM) for inclusion in the Move Control Book prior to the move occurring. This way, issues caused by normal development will not be mistaken for move issues.
• No infrastructure changes affecting the move element to be moved are implemented for a four-week period prior and one-week period after the move element implementation date.
Appeal Process
In cases where the move freeze policy presents unusual hardships for the business, an appeal process is available. Any change requests falling within the standard move-freeze period require prior approval by the Data Center Relocation Steering Committee. Information to be presented to the Steering Committee for them to consider a change within the freeze period includes:
• Explanation of the move element, the move approach, and the level of complexity
• Explanation of the requested change and primary business contact
• Increased risk associated with implementing the change during the freeze period
• Effect on the business of holding the change
• Benefits associated with completing the change during the freeze period
Questions concerning the freeze should be directed to the individual Work Plan Managers and then to their vice presidents for initial resolution. Issues that cannot be resolved through the Work Plan Manager should be documented and raised to the Data Center Relocation project manager (John Bennett). If not resolvable, the project manager will raise the issue to the Data Center Relocation steering committee and if necessary to Greg Clancy.
Move Element Freeze Communication Responsibility
Work Plan Managers are responsible for communicating to affected technology and business area management the freeze periods associated with each move element. The freeze dates will also be posted on the Data Center Relocation Web site. Any freeze period issues must be communicated to the Work Plan Manager and, if necessary, to the Data Center Relocation project manager (John Bennett).
EXHIBIT 9 Move Element Freeze Policies
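The freeze windows in Exhibit 9 come down to simple date arithmetic. The following sketch (Python; the function name, the window table, and the idea of encoding the policy in code are illustrative assumptions, not part of the case) shows how a proposed change date could be checked against the production-environment freeze window for a move element:

from datetime import date, timedelta

# Production-environment freeze windows from Exhibit 9:
# application changes are frozen 2 weeks before through 1 week after the move date;
# infrastructure changes are frozen 4 weeks before through 1 week after the move date.
FREEZE_WINDOWS = {
    "application": (timedelta(weeks=2), timedelta(weeks=1)),
    "infrastructure": (timedelta(weeks=4), timedelta(weeks=1)),
}

def change_requires_appeal(change_date: date, move_date: date, change_type: str) -> bool:
    """Return True if the proposed change falls inside the freeze window and
    would therefore need Data Center Relocation Steering Committee approval."""
    before, after = FREEZE_WINDOWS[change_type]
    return move_date - before <= change_date <= move_date + after

# Using the 3/17 move element from the policy's example:
move = date(2001, 3, 17)
print(change_requires_appeal(date(2001, 3, 10), move, "application"))     # True: inside the 2-week pre-move freeze
print(change_requires_appeal(date(2001, 2, 20), move, "infrastructure"))  # True: inside the 4-week pre-move freeze
print(change_requires_appeal(date(2001, 3, 26), move, "application"))     # False: the post-move freeze ends 3/24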
When we were negotiating with vendors for USA Group, we weren’t really a big player in the market and had less latitude. . . . As part of Sallie Mae, we were nearly able to write our own terms and conditions. In some cases, we were working with vendor reps in D.C., and they were able to make better and faster deals and were more flexible. We told them what we wanted, and they delivered.
—Becky Robinson, Director of Systems Management

Three different strategies were used for installing computer equipment in the Indianapolis facility that had been used in Reston. (Improvements to infrastructure components—such as backup equipment and more secure firewalls—were also built into the migration plans.)

1. The Asset Swap method was used for equipment in Reston ready to be retired. New equipment was purchased from the hardware vendor for the Indianapolis facility. Once the applications on the old equipment had been moved to (installed on) the new equipment in Indianapolis, the old equipment in Reston was traded in to the vendor.
2. The Push/Pull strategy was used for select equipment in Reston that was not ready to be retired and difficult to replace: the old equipment was taken out, moved, and reinstalled in Indianapolis.
3. The Swing method was used for equipment that existed in multiples. New equipment was purchased for Indianapolis for the earliest moves. After the initial applications were successfully installed on the new equipment, the relevant piece of equipment was removed from Reston and shipped to Indianapolis for the next move, and so on.

The strategy choice for each move element was based on allowable downtime for the application(s) involved, vendor prices for replacement equipment and trade-ins, and physical move costs. The overall objective was to decrease integration risks by minimizing the hardware assets that would be physically moved from Reston to Indianapolis. For example, a Reston data warehouse was stored on a Sun E10K server. Since this was a very expensive piece of equipment, an asset swap strategy would have been very costly. However, it was learned that the business could tolerate up to a week of downtime outside of the peak processing season, so a push/pull strategy was used instead. The vendor tore down the machine in Reston, trucked it to Indiana, and rebuilt it at the Indianapolis facility.
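As a rough illustration of the trade-off just described (hypothetical code; the thresholds, parameter names, and function are assumptions made for illustration, not drawn from the case), the choice among the three strategies can be sketched as:

# Hypothetical sketch of the strategy choice described above. The criteria
# (downtime tolerance, cost of replacing the hardware, availability of duplicates)
# come from the case; the function and thresholds are illustrative only.
def pick_move_strategy(max_downtime_days: float,
                       replacement_cost_high: bool,
                       exists_in_multiples: bool) -> str:
    if exists_in_multiples:
        return "swing"       # buy new gear for the first move, then reuse the freed equipment
    if max_downtime_days >= 7 and replacement_cost_high:
        return "push/pull"   # e.g., the Sun E10K data warehouse: tear down, truck, rebuild
    return "asset swap"      # e.g., the Class mainframe: minimal downtime, trade in the old gear

print(pick_move_strategy(7, True, False))    # push/pull
print(pick_move_strategy(0.5, True, False))  # asset swap
print(pick_move_strategy(3, False, True))    # swing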
Lack of backup was the biggest risk. We determined the maximum downtime that the operation could handle without losing customers, and we established backup and system redundancies as needed.
—John Bennett, Project Manager for Data Center Relocation

In contrast, the company couldn’t afford much downtime for the mainframe on which the Class loan-servicing system was run: the service centers needed to be able to communicate with customers. The push/pull method required too much downtime, so the asset swap method was used. For DASD storage, a new vendor was selected in order to provide newer technology that would better handle the company’s increased storage needs, as well as reduce the maintenance risks associated with older technologies.
Redundancies were built in, wherever possible, to minimize the business impact of the “go live” dates of crit- ical applications. For example, beginning in March 2001, three T1 lines were leased from AT&T in order to have a fast electronic backup for the major moves.
We spent a lot of money, almost a million dollars in two months, to create a pretty significant pipe between the two centers, in order to have a very fast link, so that you could essentially run the business out of either center if your migration had a problem. Fortunately, most of our migrations went well, and they did not have any problems, but that was a great big insurance.
—Hamed Omar, Senior Vice President, Technology Group

Prior to any move, the sending technology owners worked with the target (receiving) technology owners to transfer the knowledge needed by the Indianapolis operations staff to handle the changes in applications, hardware, and large increases in transaction volume.
Early in the process, we asked teams to start planning and scheduling tasks related to training. The approach was tell them, show them, watch them.
—Allan Horn, Vice President, Technology Operations
The Major Data Center Moves
The DCR team began moving applications in February 2001, as they were ready (see Exhibit 10). From the director level down, the IT staffs at Reston and Indianapolis
were paired by function to work on knowledge-transfer issues—including computer operators, help-desk people, database administrators, and other technical support people.
Trial runs were conducted as needed to determine the length of time that various transitions would take.
Individual move element teams met every day, and a project room was dedicated for this purpose. Anyone on the DCR team was authorized to call a meeting.
The Data Center Relocation team had an action orientation. They weren’t waiting for someone to tell them what to do . . . they were off doing it. We had a number of moves from March to mid-June, and none were failures. Very little didn’t work. It was truly amazing.
—Greg Clancy, Chief Information Officer
We clicked. We had a mission. No confusion, illusions, or secrets. . . . It was critical that we main- tained people’s confidence.
—Allan Horn, Vice President, Technology Operations

Starting February 1, DCR team members were responsible for written status reports and updated project plans on a weekly basis. Plans were submitted to the FPMO staff on Mondays, and 2-hour meetings were held via videoconference every Tuesday to discuss project status. About 20 people in Indianapolis participated in the meetings, with about 15 people participating from Reston. Meetings were held with the steering committee two times a week.
EXHIBIT 10 Calendar for Data Center Moves (a week-by-week schedule running from the week of 01/14/01 through the week of 05/27/01 that lists the move elements completed each week, from early facilities preparation and Web-site moves through the mainframe move the week of 05/13/01 and the data warehouse move of 5/11–5/22; weekends staffed by a full command center are flagged, and an asterisk marks moves with primarily an internal impact)

Videoconferencing was a key medium for us. This merger was an emotional event. In Reston, people were losing their jobs, and in Indianapolis, people were struggling to keep their jobs. Body language was everything. If you couldn’t see people, you didn’t know what was going on. Videoconferencing also helped us create the impression of a big group working together toward a common goal.
—John Bennett, Project Manager for Data Center Relocation

During this time, the typical workday was 12 hours long. Since weekends afforded some downtime for executing a move element, there were many weekends when IT people couldn’t go home. A command center was set up in the Indianapolis office every weekend to help monitor move activities; full command centers were set up for the three most critical weekend moves in mid-March, mid-April, and mid-May.
On the weekends with very large numbers of move elements, complicated moves, or very integrated move elements, we created move control books that listed everything that would be moving, implementation plans, vendor information, contingency plans, risks, disaster-recovery plans, and the possible impact to the business if a move failed. We manned the PMO communications center with team members, PMO members, vendors, and specific key contacts related to the implementation.
—Cheri E. Dayton, Senior Manager, Guarantee Systems Development

We took every opportunity for community building.
On those long weekends we set up games for them to play. About once a week, we held informal lunch- eons to recognize successes along the way. We had an open budget on food: it was delivered around the clock every day. You can’t take too good care of your people, and you can’t communicate too much.
—Allan Horn, Vice President, Technology Operations

The movement of the Reston mainframe operations for the loan-servicing application was studied the most.
Timings were made for truck hauls between Reston and Indianapolis, including loading and unloading times. The T1 lines would allow the company to revert to operations in Reston if a glitch in Indianapolis precluded running the loan-servicing system from its new location.
The mainframe move was scheduled to take place over a weekend in mid-May. A full command center was in place, and about a dozen representatives from hardware and
software vendors were required to be on site for the weekend.
In addition, all vendors were required to have plans in place to provide immediate access to other people and resources should there be a problem with their equipment or software.
The team members were on location round the clock.
The technical part of that weekend was challenging, but it wasn’t the hardest part. The political and business ramifications of that move were huge. The mainframe is the lifeblood of the company. The call centers use it every minute of every day. It was the most critical part of the entire move.
—John Bennett, Project Manager for Data Center Relocation

You have to tell yourself that you can be the first ones to do it, that you’re not an average company and can find a way to succeed. That can-do attitude is critical to success.
—Hamed Omar, Senior Vice President, Technology Group

Before the mainframe switch was flipped, John Bennett pulled the team together for a go/no-go decision at 2 a.m. Sunday morning.
I got a lot of flack for having a meeting at that time of morning: people wanted to know why it couldn’t wait until 7 a.m. We couldn’t wait that long to know if there was a problem. We had duplicated enough tapes so that we could start installing manually in Reston, and a corporate Learjet was waiting to fly the other tapes back to Reston. We never used this backup, but it was good to know it was there. We had spent a lot of time working to make sure the mainframe move went smoothly, and it went grand.
—John Bennett, Project Manager for Data Center Relocation

A month before the Reston data center move was even completed [the IT consultant firm that lost the contract] knew we would be successful. They told us that they planned to come up with a “FastTrack”
method for mergers based on what they had learned through the engagement.
—Greg Clancy, Chief Information Officer

The Challenges Ahead
The new IT group at Sallie Mae would soon be tested during peak lending season: not only would the transaction