- By Michael Fickes
- October 1st, 1999
The University of Pennsylvania in Philadelphia has an approach to the information age that is proving to be a great success: determined yet flexible, forward-thinking yet practical. Their campus network responds to present needs, while keeping an eye on what’s coming down the pipeline. How do they manage this feat?
Bandwidth and Beyond
The university has launched a disciplined attack on the next great challenge of technology implementation: bandwidth.
Translated into lay terms, bandwidth means the speed with which a computer user can access or download large files located across campus or around the world. Insufficient bandwidth may require students to wait several minutes to download information. Researchers on slow networks might find time for lunch and a nap during particularly complex operations.
Among the most advanced campus networks in the country, the university’s 200-building campus network, called PennNet, has never skimped on bandwidth. Several years ago, Michael Palladino, Penn’s executive director of information services and computing, installed an Ethernet system capable of communicating at 10 megabits per second (Mbps). That’s nearly 200 times faster than today’s standard modems, which transmit and receive data at 56 kilobits per second.
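As a rough illustration of what those figures mean in practice, a short calculation compares transfer times at the two link speeds. The 10-megabyte file size is a hypothetical example, not a figure from Penn:

```python
# Rough transfer-time comparison: a 56 Kbps modem vs. 10 Mbps Ethernet.
# The 10 MB file size is a made-up example, not a figure from the article.

def transfer_seconds(file_megabytes: float, link_megabits_per_sec: float) -> float:
    """Seconds to move a file, ignoring protocol overhead and congestion."""
    file_megabits = file_megabytes * 8  # 1 byte = 8 bits
    return file_megabits / link_megabits_per_sec

file_mb = 10  # a hypothetical 10-megabyte download
print(f"56 Kbps modem:    {transfer_seconds(file_mb, 0.056):.0f} s")  # ~24 minutes
print(f"10 Mbps Ethernet: {transfer_seconds(file_mb, 10):.0f} s")
```

The same download that occupies a modem user for nearly half an hour clears a 10 Mbps link in seconds, which is the gap the article describes between waiting “several minutes” and near-instant access.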
Then again, this is not really a fair comparison. A network infrastructure requires significantly more bandwidth than individual desktop computers because the infrastructure must accommodate numerous individual computers communicating across the network at any given time. In other words, a network designer must “scale” network infrastructure to match the number of users and the nature of their applications. All colleges and universities face the challenge of scaling networks to accommodate increasing numbers of users running more and more data-intensive operations.
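A minimal sketch of that scaling arithmetic, using invented traffic figures rather than Penn’s actual numbers, shows why a link speed that is generous for one desktop can be inadequate for a shared backbone:

```python
# Illustrative scaling check: can a shared backbone carry its users?
# The user count and per-user rate below are hypothetical examples.

def aggregate_demand_mbps(active_users: int, avg_mbps_per_user: float) -> float:
    """Total bandwidth demanded when many users transmit at once."""
    return active_users * avg_mbps_per_user

demand = aggregate_demand_mbps(active_users=500, avg_mbps_per_user=0.5)
backbone = 100  # a 100 Mbps Fast Ethernet backbone
print(f"Aggregate demand: {demand:.0f} Mbps on a {backbone} Mbps backbone")
print("Overloaded" if demand > backbone else "Within capacity")
```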
At Penn, a number of advanced network applications began to strain the 10-megabit capacity of Palladino’s Ethernet system about two years ago.
One of the most exciting programs is QBone, a project sponsored by the Internet2 consortium of industry, government and 130-plus universities. This project, part of a larger effort to establish next-generation networking technologies, studies quality-of-service standards for services running across packet-based networks. The goal is to ensure that time-sensitive applications such as voice, video and multimedia run smoothly despite the transmission delays associated with Internet Protocol, the data transmission standard used by most networks.
Another advanced application for PennNet occurs at the university’s medical school, where the radiology department, for example, has eliminated the use of traditional film for viewing x-rays. Instead the images flow into a computer, and physicians view them on a monitor screen. This system then uses PennNet to send the images to archival storage. “This level of data transfer requires very large pipes,” notes Palladino.
Penn also plans to roll out a number of real-time multimedia applications in the next three to six years. Network Radio will broadcast audio signals over PennNet. Network Telephone will establish interactive, two-way communications between users of campus workstations and may eventually allow calls from a workstation to a standard outside telephone. Network TV will facilitate talks and presentations, Penn classroom lectures and a Penn video network. Network Audio conferencing will allow multiple network users to conduct a conference call, while Network Video-conferencing will enable face-to-face meetings with distant colleagues.
Building a network capable of handling such advanced bandwidth-intensive applications requires more than 10 Mbps Ethernet speeds. “Video, for example, requires a minimum bandwidth of 150 Mbps and perhaps even higher,” Palladino says. “At that rate, a network could provide video phone and conferencing services.”
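Taking Palladino’s 150 Mbps figure at face value, a quick division shows why even the new 100 Mbps backbone cannot carry such a video service, while the 155 Mbps speed discussed later in the article just fits one stream:

```python
# Why 100 Mbps Fast Ethernet can't carry the video service described:
# each stream needs at least 150 Mbps (the figure quoted in the article).

def streams_supported(backbone_mbps: float, per_stream_mbps: float) -> int:
    """Whole video streams a link can carry concurrently."""
    return int(backbone_mbps // per_stream_mbps)

print(streams_supported(100, 150))  # 0 -- the Fast Ethernet backbone is too slow
print(streams_supported(155, 150))  # 1 -- the next planned speed just fits
```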
Bringing such capabilities to PennNet will await at least the next generation of infrastructure equipment. For now, Palladino has decided to upgrade PennNet to Fast Ethernet, a network backbone that operates at 100 Mbps, combined with connections to individual computers that will operate at 10 Mbps or 100 Mbps depending upon the need.
The 10/100 network infrastructure can handle most current needs. “Penn has found that existing speeds suffice for e-mail, administrative and financial applications,” says Rose Rodd, education industry manager with the Santa Clara, Calif.-based 3Com Corp., which has supplied equipment for the latest upgrade. “But this system could not handle higher-end applications such as multimedia.”
The word infrastructure describes what lies behind the wall jack that connects most desktops to a network. For most home computers, the wall jack leads to a telephone line connected to a phone company switching system that allows communications with an Internet Service Provider.
For a large organization such as a college campus, the wall jack connects to a local network infrastructure of switches that carry information from the desktop through a number of switching centers en route to one or more other computers.
The upgrade design of PennNet’s infrastructure begins with 3Com SuperStack II Ethernet switches located in equipment closets in each residence hall, administrative building, classroom building and research facility on campus. The units stack on top of one another and provide ports for individual computer connections. A switch allows these units to operate at 10 Mbps or 100 Mbps.
The stackable local switches pass information along to a larger switch, in this case a 3Com CoreBuilder 3500 located in a building’s main equipment room. CoreBuilder 3500 switches in each networked building feed central information cores composed of several CoreBuilder 9000 enterprise switches.
Information moves from a desktop computer through the stackable switches to the 3500s and out to the 9000s, which direct the data back down the line to another campus user or out across the Internet. Once information moves beyond the 10/100 Mbps stackable units, it flies through the network at a rate of 100 Mbps.
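The path just described can be sketched as a toy model. The link speeds are the article’s figures; the code itself, including the hop labels, is purely illustrative:

```python
# A toy model of PennNet's three-tier switching hierarchy, as described
# above: desktop -> SuperStack II closet switch -> CoreBuilder 3500 ->
# CoreBuilder 9000 core. Speeds in Mbps, taken from the article.

PATH = [
    ("desktop to SuperStack II closet switch", 10),   # or 100, per need
    ("closet switch to CoreBuilder 3500", 100),
    ("CoreBuilder 3500 to CoreBuilder 9000 core", 100),
]

def end_to_end_mbps(path):
    """The slowest hop bounds the end-to-end rate (ignoring contention)."""
    return min(speed for _, speed in path)

print(f"End-to-end ceiling: {end_to_end_mbps(PATH)} Mbps")
# Here the 10 Mbps desktop link is the bottleneck; a 100 Mbps desktop
# connection would lift the ceiling to the full backbone rate.
```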
Off-campus users, totaling 15,000 at Penn, connect to the campus infrastructure by way of a modem and phone line that can dial in to a 3Com Total Control unit connected to the CoreBuilder system.
Palladino hopes to complete the phased campuswide upgrade to the 100 Mbps backbone by June 2000.
Even now, however, he has directed his technical engineering staff to begin planning for the next upgrade, which will take the system to 155 Mbps or higher in about three years. “We plan our upgrades in three-year replacement cycles,” Palladino says. “Every three years we upgrade the components of the network, not because the equipment wears out, but because it becomes obsolete.”
Why not go to 155 Mbps right now? Wouldn’t that help to fight off the obsolescence problem? The answer has to do with cost and with planning that follows the evolution of technology. Different network speeds operate according to different protocols or data formats. While 155 Mbps equipment is available today, it operates in something called Asynchronous Transfer Mode or ATM. Palladino expects different protocols to appear within three years. In his judgment, it is better to use proven technologies now and to wait for video and audio technologies to mature and for new higher-speed network protocols to emerge before taking the system to the next level. By making such judgments, he can not only achieve faster network speeds with each evolution, but also combine quality network operations with acceptable budgets.
Budgeting represents a challenge. Palladino estimates that installing a system from the ground up could cost as much as $1,000 per user in today’s money. In other words, building the 40,000-user PennNet from scratch could cost $40 million. By scaling the system to match the needs of various users and by upgrading the system regularly to avoid obsolescence, he keeps his system at the technological forefront for $7 million per year.
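The arithmetic behind those figures, using only the numbers quoted in the article, checks out:

```python
# Quick check of the budget figures quoted in the article.
users = 40_000
cost_per_user = 1_000  # dollars per user for a ground-up installation
build_cost = users * cost_per_user
print(f"From-scratch build: ${build_cost:,}")  # matches the $40 million cited

annual_spend = 7_000_000  # dollars per year under the phased approach
print(f"Annual upgrade budget: ${annual_spend:,}")
```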
“Our annual network spending includes about a half-million dollars for new servers, another half-million for stackable closet electronics, about $1 million for building entrance equipment like the 3Com Fast Ethernet system, and between $2 million and $3 million per year for main network routers and switches,” Palladino says.
This approach has allowed PennNet to position itself on the leading edge of technology, without falling over the bleeding edge.