Project: Next Generation Backbone, Phase I
Sponsor: Backbone Engineering Group
Summary and Relevance
The Backbone Engineering Group has been working for the past 7 months
on developing a design for enhancing the backbone network services available
at UCSB. The principal goal of this "Next Generation Backbone" (NGB) is
to deliver bandwidth and network services as needed across campus.
Our approach has been to build on the design and equipment of the Calren2
research network and evolve that network into a production network capable
of addressing identified needs. Those needs include:
- Access to higher bandwidth and enhanced network services. [1]
- VLAN capabilities on the backbone network. [2]
- Retirement of departmental routers (at dept. discretion). [3]
- Better departmental access to 100Mb backbone connections. [4]
- Broader access to Internet2 (Calren) to allow high bandwidth off-campus
connectivity. [5]
- Address problems with Lanplex reliability on the FDDI backbone. [6]
- Retirement of RIP routing on the backbone. [7]
- Allow traffic filtering as a security tool. [8]
Design Overview
Our vision of a complete implementation of an expanded Calren network
would consist of the following (an illustrative sketch follows the list):
- A core of gigabit Ethernet multi-layer switches ("distribution switches").
- Each networked building would have, as its backbone demarc, a single layer-2,
VLAN-capable switch ("building switch") with a gigabit or 100Mb Ethernet
link back to a distribution switch.
- Each building switch would have enough 10/100 UTP ports to support the
connections in that building.
- Each distribution switch would be capable of providing IP routing for organizations
attached to its building switches. OSPF will be the routing protocol on
this network; it will not support RIP routing.
- The VLAN configuration would allow the delivery of a routed subnet to a
port or ports on a building switch, and would allow departmental VLANs
between buildings.
- The implementation of the NGB would occur in an evolutionary manner, allowing
more predictable equipment and maintenance costs, and taking advantage
of decreasing equipment costs over time. This requires the ongoing support
of parallel networks – the FDDI and the NGB – during the implementation
period.
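The sketch below is illustrative only: it models the relationships described
in the list above (a routed distribution core, layer-2 VLAN-capable building
switches, and routed subnets delivered as VLANs, including a departmental VLAN
spanning two buildings) as a small Python program. All switch names, uplink
speeds, VLAN tags, and subnets are hypothetical placeholders, not part of the
Phase I design.

    # Illustrative sketch only (not part of the proposal's specification).
    # Switch names, uplink speeds, VLAN tags, and subnets are hypothetical.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Vlan:
        tag: int       # 802.1Q VLAN tag carried across the backbone
        subnet: str    # routed subnet delivered to ports on building switches

    @dataclass
    class BuildingSwitch:
        name: str                 # layer-2, VLAN-capable demarc in a building
        uplink_mbps: int          # 100 in Phase I; 1000 where single-mode fiber exists
        vlans: List[Vlan] = field(default_factory=list)

    @dataclass
    class DistributionSwitch:
        name: str                 # gigabit multi-layer switch in the core (OSPF, no RIP)
        buildings: List[BuildingSwitch] = field(default_factory=list)

        def routed_vlans(self) -> List[Vlan]:
            """VLANs this core switch routes for its attached building switches."""
            unique = {}
            for b in self.buildings:
                for v in b.vlans:
                    unique[v.tag] = v
            return list(unique.values())

    # A hypothetical departmental VLAN spanning two buildings, as described above.
    dept_vlan = Vlan(tag=210, subnet="10.10.210.0/24")
    core = DistributionSwitch("dist-1", buildings=[
        BuildingSwitch("bldg-A", uplink_mbps=100, vlans=[dept_vlan]),
        BuildingSwitch("bldg-B", uplink_mbps=100,
                       vlans=[dept_vlan, Vlan(tag=220, subnet="10.10.220.0/24")]),
    ])
    for v in core.routed_vlans():
        print(v.tag, v.subnet)

The point of the sketch is simply that routing and VLAN policy live at the
distribution layer, while each building switch remains a layer-2 demarc with a
single uplink.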
Preliminary estimates for a complete implementation of this design would
be $1.5M in initial costs, and $380K/year of recurring costs. The tasks
associated with centralized routing, VLAN implementation, and the potential
for other advanced network services (such as filtering) imply additional
staffing to support this network design.
Phase I Proposal
In order to meet as many of the driving needs as possible while providing
a base for further evolution and expansion, we propose the following initial
implementation of the NGB:
- Placement of distribution switches in locations where there are currently
Lanplexes.
- Establishing a single-mode fiber route between the distribution switches
to allow gigabit Ethernet between them.
- Placement of building switches in locations where there are currently departmental
routers attached to the backbone (initially 100Mb links back to distribution
switches; Gigabit will generally require single-mode fiber and additional
ports on the switches, which is planned for Phase II).
- Units that need Gigabit connections to the backbone, or that need a building
switch in a location currently not served by a router connected to the
backbone, can self-fund any necessary fiber and the backbone switch ports
to accomplish this. Building switches are required for departmental connections;
there will be no direct connections into a distribution switch. [9]
- At the end of Phase I implementation, the Lanplexes can be retired.
The placement of the Phase I distribution and building switches is intended
to make optimal use of existing fiber routes between departmental routers
and the Lanplexes.
Design Goals Achieved by Phase I
- Access to higher bandwidth and enhanced network services
- VLAN capabilities on the backbone network
- Retirement of departmental routers (at dept. discretion)
- Better departmental access to 100Mb backbone connections
- Broader access to Internet2 (Calren) to allow high bandwidth off-campus
connectivity
- Address problems with Lanplex reliability on the FDDI backbone
- Retirement of RIP routing on the backbone
- Allow traffic filtering as a security tool
These goals are achieved for buildings with switches in Phase I; RIP
retirement takes the form of isolating RIP to a single VLAN.
Costs
Initial hardware costs: $533,658
Wiring costs: $20,000 (not including any single-mode fiber costs on the
distribution switch backbone)
Maintenance costs: $54,900/year (starting year 2)
Cyclical replacement costs after year 1 (assuming 5 year equipment
life-cycle): $106,781/year
See attachment #1 for details.
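As a rough cross-check (illustrative only), the cyclical replacement figure is
approximately the initial hardware cost spread over the assumed five-year
life-cycle; the small difference from the $106,781 figure above presumably
reflects rounding in the per-item estimates of attachment #1.

    # Rough check: replacement cost is roughly initial hardware cost / life-cycle.
    initial_hardware = 533_658   # initial hardware costs, from above
    lifecycle_years = 5          # assumed equipment life-cycle
    print(f"~${initial_hardware / lifecycle_years:,.0f}/year")  # ~$106,732/year (quoted above as $106,781/year)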
Staff Support
As indicated above, the tasks associated with centralized routing, VLAN
implementation, and the potential for other advanced network services (such
as filtering) imply additional NOC staffing to support this network design.
Timeline
Once funding has been allocated for this proposal, an RFP process for
the equipment can be initiated, and any necessary fiber routes can be put
in place. The infrastructure of distribution and building switches should
be in place within two months of the conclusion of the RFP process. Departmental
connections to this infrastructure can overlap the placement of the building
switches, and will continue based upon the urgency of departmental networking
needs.
Likely Phase II
Because the current FDDI design has encouraged units to minimize the number
of routers they must purchase and support by placing multiple buildings
behind a single router, many buildings with significant networking needs
will not be directly served by Phase I. Extending the NGB to those buildings
should proceed in a timely manner to deliver the benefits of this new network
to the broadest possible segment of the campus.
The BEG will develop a proposal for Phase II after the funding of Phase
I. It is likely that Phase II will include:
- Single-mode fiber along all the campus fiber rings to support Gigabit links
between distribution switches and building switches where needed.
- More building switches.
Notes
[1] The backbone network should provide an infrastructure flexible
enough to address varied needs across our distributed computing environment.
This requires a design that provides the needed bandwidth and services,
and also maximizes ease of access to the backbone from departmental networks.
It should be noted that traffic on the existing FDDI backbone doubled between
January and August 1999.
[2] VLANs provide a method of addressing the needs of buildings
with multiple departments, and of departments occupying multiple buildings.
The alternative approach currently in use results in either multiple
routers in a single building [e.g., South Hall] (with increased local costs
for equipment and networking staff, and increased complexity of the overall
network design), or the allocation of campus fiber or the placement of
local fiber to extend the local network to buildings that are not necessarily
contiguous [e.g., the classroom network] (at potentially significant local
cost, and in some cases not feasible because of distance limitations).
[3] Departmental routing requires local investments in equipment
and staff, and if done poorly can negatively impact the entire campus network.
The Calren2 design provides routing centrally -- that is, the connection
to the backbone is a routed connection. Many of the departmental routers
currently in place are old and need replacement; a centralized routing
design would allow departments to choose not to replace those routers.
[4] Many departments are transitioning from 10Mb to 100Mb networks
internally, but have 10Mb connections to the FDDI backbone. This networking
bottleneck limits the ability of departments to take advantage of their
100Mb bandwidth when communicating with other on-campus locations (e.g.,
between a server in English and the Residence Halls) or off-campus (e.g.,
to take advantage of the increased external bandwidth that Calren has brought
to campus). The cost of FDDI ports is significantly higher than the cost
of 100Mb Ethernet ports, which has discouraged departments from upgrading
their connections to the FDDI from 10Mb to 100Mb. If many departments did
choose to upgrade to 100Mb FDDI connections, this could saturate the campus
backbone, since the FDDI is a shared 100Mb network and interdepartmental
communication at 100Mb would contend with all other network traffic.
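(The following back-of-the-envelope sketch is illustrative only; the
department count and per-department load are hypothetical values chosen to
show how quickly a shared 100Mb medium is oversubscribed, in contrast to
dedicated switched uplinks.)

    # Hypothetical illustration of shared-medium contention on the FDDI ring.
    shared_capacity_mbps = 100   # total capacity shared by all FDDI attachments
    departments = 10             # hypothetical number of 100Mb-attached departments
    avg_load_mbps = 20           # hypothetical average offered load per department
    offered = departments * avg_load_mbps
    print(f"{offered}Mb offered vs. {shared_capacity_mbps}Mb shared: "
          f"{offered / shared_capacity_mbps:.0f}x oversubscribed")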
[5] Access to the high bandwidth external connectivity of the
Calren network was provided to the original Calren researchers through
the funding for that project. For others to access that bandwidth would
require a physical route to a switch on the Calren network. For some areas
of campus, this would require extensive cable costs. The evolution of the
NGB should attempt to improve general access to Calren with the minimum
fiber installation costs.
[6] The 3Com LanPlex 2500 Ethernet switches which connect departmental
Ethernets to the FDDI have been subject to hardware failures in both the
backplane/CPU and the line cards, making them expensive in terms of the
staff time required. They also do not support "hot swap" insertion of new
cards without network disruptions. Several units have already been repaired,
and if we continue to rely on this approach for attaching to the backbone,
the others will need to be replaced.
[7] The redistribution of network routes between devices using
the current OSPF protocol and devices using the older RIP protocol has
been problematic, and has resulted in the periodic inability of some areas
of campus to connect to services on other areas of campus. (E.g., ccnh.ucsb.edu,
which houses the services running under COMPLETE, can become unreachable
to departments in some areas of L&S.) The BEG has been committed to
eliminating RIP routing on the backbone for some time; it persists only
to support legacy departmental devices.
[8] Inadequate traffic filtering capabilities result in significant
impacts to service during security incidents. For example, a single compromised
host in a department may be used to launch denial-of-service attacks. The
most appropriate response would be to filter traffic involving the compromised
host. The alternative is to disconnect the entire department subnet.
[9] Some existing Calren locations do not have an equivalent
of a building switch in place. When a local building switch with a Gigabit
uplink is available, either in Phase I or later, these units will need
to connect to the local building switch instead of directly to a Calren
switch (which has in effect become a distribution switch).
Attachment #1
Cost estimates, as of October 22, 1999
Assuming each location has the following gear (parts named are for reference,
using street retail pricing; the switch is generic, unspecified):
Qty | Price    | Description
1   | $399.99  | DWR-10-22-PD Datatel Datawall 19" cabinet, wallmount
1   | $429.99  | 0520-750RU Best Power Fortress 750 rackmount
1   | $340.20  | AT-MCR12-10 12-slot media converter rack chassis
1   | $229.50  | AT-MC101XL-10 100TX-100FX media converter w/ST fiber
1   | $5200.00 | Switch, 24 port 10/100BT, SNMP/802.1Q VLAN, etc.
[Note that the wallmount cabinet is an enclosed unit and somewhat expensive. An exposed wallmount
rack with 20+" depth, 15+" height, and minimum 100 lbs. capacity would suffice.]
Total building equipment for 35 locations (see below) and one spare switch:
$236,200.
Estimated maintenance: $15,000/yr ($14,400 equipment, plus replacement UPS
batteries).
Estimate $500 each to pull interfaces back to the BTP, 30 interfaces (yes,
this is probably high): $15,000.
We should be able to acquire 6 distribution boxes (and 1 spare) at ~$38K each
(based upon 16 ports of 10/100BT and between 2 and 8 ports of GigLX). 7 UPSs
will run about $1,400 each. Media converters: $21,658. Emergency power has not
been wired at HRC, Snidecor, or PSN; assume $5,000. (PSN should be very easy,
but HRC and Snidecor are more expensive.)
Hardware: $297,458; Wiring: $5,000; Maintenance: $39,900/year.
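For reference, the figures above roll up into the Phase I totals quoted in the
Costs section. The short sketch below is a cross-check of that arithmetic only;
reading the UPS line as approximately $1,400 per unit is an interpretation, and
the quoted building total of $236,200 appears to be rounded up slightly from
the itemized sum.

    # Cross-check of the attachment #1 roll-up against the Costs section totals.
    per_location = 399.99 + 429.99 + 340.20 + 229.50 + 5200.00  # cabinet + UPS + converter rack + converter + switch
    building_hw = per_location * 35 + 5200        # 35 locations plus one spare switch: ~$236,189 (quoted as $236,200)
    dist_hw = 7 * 38_000 + 7 * 1_400 + 21_658     # 7 distribution boxes, 7 UPSs, media converters: $297,458
    hardware_total = 236_200 + dist_hw            # $533,658 initial hardware, as quoted
    wiring_total = 15_000 + 5_000                 # interface pulls + emergency power wiring: $20,000
    maintenance_total = 15_000 + 39_900           # building + distribution maintenance: $54,900/year
    print(hardware_total, wiring_total, maintenance_total)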
Attachment #2
List of Building Termination Panels to be covered by Phase I

BTP | Bldg# | Building Name | Department
534 | 534 | Arts | Art Studio, Art Museum, Art History
572 | 572 | Broida | Physics
552 | 552 | Cheadle | L&S, Acad Personnel, Chancellor, Public Affairs, Grad Div, Ofc of Research
489 | 489 | Chem Research | Chemistry
494 | 494 | Creative Studies | Creative Studies
525 | 525 | Davidson Lib | Library, Alexandria
563A, 563B | 563 | Ellison | HSSC, Geography, ICESS
556 | 556 | Engineering I | College of Engineering
ERC | ERC | Engr. Research | Ocean Engineering Lab
581 | 581 | Faculty Club | Faculty Club
564 | 564 | Girvetz | Env. Studies, Academic Senate, Crustal
515B? | 515 | HSSB | L&S (various)
HRC | HRC | Hollister Research | Inst. Adv., Educ. Abroad, Loan Collect., Extension
591 | 591 | Kerr Hall | Instructional Development
567 | 567 | Kohn Hall | Inst. for Theor. Physics
555 | 555 | Mar Bio Lab | Marine Biology
342 | 342 | Marine Science | Marine Science Institute
451 | 451 | Military Science | Military Science
531 | 531 | Music | Music
544C | 544 | Noble Hall | NS2, LifeSci/Psych
535 | 535 | North Hall | IS&C, Econ Frct Proj, CalTrans, ISBER
560A, 560C? | 560 | Phelps | IC, L&S, GSE
557A | 657 | PhysSci North | Chemistry
574 | 574 | Public Safety | Police, CommServ
981 | 981 | QUEST | QUEST (may change soon)
533 | 533 | Rob Gym | Athletics
568 | 568 | SAASB | Student Affairs
554B | 554 | Snidecor | Speech & Hearing, Dramatic Arts
528A | 528 | South Hall | L&S (various), Ofc of Research
558 | 558 | UCen | Assoc Students, Bookstore, UCen admin
526 | 526 | Webb Hall | Geology
434 | 434 | Women's Center | CLAS, Tutorial Center, Women's Center