Technology Stocks : The *NEW* Frank Coluccio Technology Forum

From: Frank A. Coluccio | 10/21/2005 3:07:25 PM
Education: Wireless Campuses:

Survey ranks schools' wireless access
From eSchool News staff and wire service reports
October 21, 2005

[ As students become accustomed to, and in many instances entirely dependent on, the mobility and portability afforded by wireless computing and communications at school, they will seek to have those same capabilities satisfied in their home locales later on, when they leave. ]

Ball State University in Muncie, Ind., has the best wireless internet access of any college campus in the nation, according to a survey by Intel Corp. The survey also shows an explosion of wireless internet access on college campuses nationwide over the past year alone.

"Last year, it was almost a novelty," said Bert Sperling, principal author of the survey. "This year, it's almost expected."

Thirty-four of the top 50 schools in the survey have 100-percent wireless coverage, up from seven of the top 50 schools last year. According to the survey, the top 50 most "unwired" campuses are, on average, 98 percent covered by a wireless network, up from an average coverage of 64 percent in last year's survey.

In fact, Sperling said, last year there were frequent instances of campuses with no wireless network deployment, while this year he reports that nearly every school examined had some degree of wireless infrastructure.

Sperling looked at nearly 1,000 colleges across the United States. The top 50 were ranked based on the amount of Wi-Fi coverage their campuses have, how the technology is used, the number of undergraduate students enrolled, and the computer-to-student ratio on campus.

Rounding out the top five were Western Michigan University in Kalamazoo; the University of Akron in Akron, Ohio; Dartmouth College in Hanover, N.H.; and Carnegie Mellon University in Pittsburgh.

At Ball State, wireless access points--which send and receive signals to personal computers and other devices--began going up in 2002. But most of the campus wasn't covered until this year.

The school has more than 625 wireless access points spread across about 600 acres. "Students can work on projects wherever they are," said O'Neal Smitherman, Ball State's vice president of information technology.

Not only that, but Ball State is one of at least three schools in the survey--the others are Western Michigan University and No. 14 Purdue--where students can watch sporting events broadcast wirelessly across campus networks. This was one of many ways the technology was being used to enhance campus life, the survey found. Here are some others:

* Professors at Coppin State University in Baltimore (No. 19) and Winona State University in Winona, Minn. (No. 23), use wirelessly enabled tablet PCs to transmit data to LCD projectors from anywhere they roam in the classroom.
* At Carnegie Mellon and Dartmouth, students can use wireless laptops to check the status of their laundry loads and washing machine availability.
* Professors are conducting virtual office hours and administering exams online. University operations also are being streamlined through wireless internet access, as schools equip campus security staff, housing services staff, and facility managers with wireless laptops or handheld computers to complete paperwork and submit work orders instantly from the field.

Richard Beckwith, an ethnographer with Intel's Corporate Technology People and Practices Research Group, said today's college campuses offer a window into the future of computing.

"The class of 2009 will graduate to a world far more technologically advanced than it is today," said Beckwith. "Today's campuses are like a living laboratory, providing a window into how tomorrow's digital communities will define the way people work, live, learn, and play as wireless infrastructure continues to advance and evolve."

Sperling agrees.

"Dartmouth has been wireless for a few years. The main computing guy there told me something changed that they hadn't anticipated: the way people use their computers. They expect to be in touch with everyone all the time," he said.

Sperling drew a parallel between how American culture assimilated cell phones and how it's adopting Wi-Fi access.

"I certainly remember the time when cell phones were notoriously unavailable and ineffective, and you never knew when they were going to work," he said. "These days, they're everywhere, you have coverage, you put it in your pocket and go.

"Wi-Fi is playing out similarly on campus, showing its effect [on communication]."

Done in conjunction with the Center for Digital Education, the study examined schools with at least 1,000 students. Data were gathered from university interviews, public documents and industry sources, and an online survey that schools completed between May 1 and Sept. 1.


From: Frank A. Coluccio | 10/21/2005 9:20:50 PM
Underseas Cable Bandwidth: Xtera Provides Additional Nu-Wave DWDM Equipment to Support Surge in Orders for Wavelength Capacity on FLAG Atlantic-1 Submarine Cable

ALLEN, TX -- (MARKET WIRE) -- 10/21/2005 -- Xtera, an innovator of optical transport solutions, announced that FLAG Telecom, a leading provider of international wholesale network transport and communications services, is purchasing additional Nu-Wave multi-reach DWDM equipment to respond to a 500-percent increase in orders for wavelength capacity on FLAG Atlantic-1 (FA-1), a multi-terabit/s optical submarine cable. The increase, measured quarter on quarter in the first half of 2005, has been driven by the increased use of broadband services by businesses and customer demand for bandwidth-intensive applications, including video, data and advanced voice services.

Xtera's Nu-Wave multi-reach DWDM system was installed in FLAG's FA-1 backhaul system last year to support anticipated capacity growth, network simplification and demand for high-quality bandwidth required by today's growing broadband applications. Current purchases include add-on devices that provide easy access to the capacity supplied by Xtera's Nu-Wave DWDM.

"FLAG Telecom has a sterling reputation for quality and flexibility, which is the driving force behind their substantial quarter-over-quarter growth," said Jon Hopper, CEO of Xtera. "Our 18-month relationship with FLAG Telecom has enhanced their ability to service customers with superior bandwidth for converging applications."

Xtera's unique DWDM technology not only provides three times the capacity of other DWDM systems on the market, but also uses low-loss OADMs to simplify the network while simultaneously increasing its flexibility -- a capability that is paramount in responding to both planned and spontaneous surges in orders.

"We are very pleased to continue our relationship with Xtera," said Peter Boland, SVP of Engineering and Network Planning, FLAG Telecom. "Our initial requirement was to increase capacity and minimize wavelength turn-up time. With the implementation of Xtera's Nu-Wave equipment, all back-to-back terminals in the previous FA-1 backhaul system were replaced with more flexible optical add/drop multiplexers. The result is that 50 percent fewer network elements have to be serviced to fulfill end-to-end FA-1 backhaul orders."

About FLAG Telecom

FLAG Telecom, a Reliance Infocomm company, has an established customer base of more than 180 leading operators, including all of the top ten international carriers. FLAG owns and manages an extensive optical fibre network spanning four continents and connecting key business markets in Asia, Europe, the Middle East and the USA. FLAG also owns and operates a low latency global MPLS based IP network, which connects most of the world's principal international Internet exchanges. FLAG offers a focused range of global products, including global bandwidth, IP, Internet, Ethernet and Co-location services. Recent news releases and further information are on FLAG Telecom's website at:

About Reliance Infocomm

Reliance Infocomm Ltd., an Anil Dhirubhai Ambani Enterprises group company, is India's largest private information and communications services provider, with a subscriber base of over 15 million. Reliance Infocomm has established a pan-India, high-capacity, integrated (wireless and wireline), convergent (voice, data and video) digital network, to offer services spanning the entire Infocomm value chain. The Anil Dhirubhai Ambani Enterprises group is a member of the Reliance Group, founded by Shri Dhirubhai H. Ambani (1932-2002).

About Xtera Communications

Xtera Communications supplies innovative optical transport solutions enabling high-bandwidth service providers to offer profitable converged broadband services. Using patented all-Raman technologies, Xtera delivers the industry's only actual Wide Reach™ DWDM transport solution with the headroom to deliver abundant high-quality video, data and advanced voice services. Xtera's Wide Reach solution benefits regional, long haul, ultra-long haul and unrepeatered submarine applications. Recent news releases and further information are on Xtera Communications' Web site at:


From: Frank A. Coluccio | 10/21/2005 9:45:30 PM
Hype-Busting: IMS Is Gonna Take a While
By Edward J. Finegold - October 2005
Billing World & OSS Today

The telecommunications industry seems slow to learn certain lessons, like those that demonstrate the dangers of over-hyping innovation. Once again the industry finds itself predicting an ambitious future, often without citing the massive hurdles that must be overcome. The acronym IMS—which can stand for either IP Multimedia Subsystem, or IP Multimedia Services, depending on who’s talking—is being tossed around the industry with little regard for what it really means and what promises it is intended to deliver. The dirty little secret, however, is that implementing true IMS architectures is a long way off.

In the more immediate future, for most large U.S.-based telcos, is nothing overwhelmingly innovative—VoIP, maybe even SIP-based VoIP and television, however it might be delivered. VoIP and TV may be new to the telcos, but they aren’t new to customers. Hence, the question arises: When will we see telcos deliver something truly new, based on real IMS capabilities? In short, there are some things that look a little like IMS, but true IMS services—the kind of things marketers and trade show speakers love to hype—are several years away.

What Is IMS?

To set the record straight, IMS means IP Multimedia Subsystem. It represents an organizing principle and architecture for building cross-domain service interworking in an all-IP environment. “IMS is a facilitating framework,” explains Roberta Cohen, vice president for IMS business development at Telcordia Technologies. “It allows carriers to deliver converged services, … but these are two different things. One is a facilitating architecture, the other a set of services. You could offer these new services—like VoIP, IPTV and fixed-mobile convergence—but without IMS it wouldn’t be as easy to interwork these things. No one has really implemented IMS yet.”

Others agree with Cohen’s assessment. “Customers are still going through their education around IMS, what it is, and what it enables,” says Mark Nicholson, CTO at Syndesis. “I see IMS as a next-generation control plane.” Nicholson compares the development of IMS to that of SS7 and the Intelligent Network (IN). “We had in-band signaling, but when SS7 was invented it allowed us to separate control from the voice path,” he says. “IMS is doing the same—bringing in a new control plane for the IP network that gives us a separate control layer to enable greater service interactivity and control over that interactivity.”

Nicholson also points out, however, that IMS architecture makes certain assumptions that have not necessarily been realized today. For example, IMS assumes an underlying IP and connectivity network with all transport and access aligned to support all-IP capabilities from premises to premises. Once the all-IP network layer is in place, IMS can begin to work its magic. Until that time, however, IMS will be limited by the availability of IP networking.

So IMS is an enabling architecture, and IP services are IP services. The two do not necessarily need to co-exist, though most carriers are beginning to build out IMS operations architecture with future IP service interworking in mind. Though most VoIP devices being installed today are IMS compliant, IPTV launches that carriers like SBC and Telus have scheduled for the coming months, as well as upcoming content services, are not based in the IMS domain. For now, these services are being rolled out on a one-off basis.

“Most content is not delivered over IMS,” says Chris Daniel, senior director of business development at Leapstone Systems. “Could you do IPTV over IMS? Sure, if you put the energy into it. Could you do gaming and streaming? Probably, but I’m not sure that’s going to happen. In the next five years, you’ll have those three domains independent but interconnected.” As a result, services will come to market that look like they utilize IMS interworking, and may be delivered over a common pipe like a residential DSL connection, but an IMS architecture implementation will not yet be in the background. (See sidebar, “BT and IMS-like Services.”)

What’s the Big Problem?

The barrier separating today’s environment from the reality of IMS is in fact a set of complex problems, some of which telcos are just discovering, and others they’ve been combating for several years. “In my 10 years with a service provider, I’ve learned that with these kinds of projects, half of what you’re doing is figuring out what doesn’t work. That’s no one’s fault in particular, you’re just dealing with a lot of moving parts that are new,” says Daniel.

The first difficulty is that, because IMS is so new, the market suffers a general shortage of carrier-class IMS components. This is a typical problem for any new technology set, and it's somewhat safe to assume this will be solved over time, likely in the near future. Carriers are pressing suppliers—those that build network elements and application servers—to fall in line with IMS. Some carriers are taking a hard line, telling suppliers "you need to evolve your product to fit into the IMS architecture. If you can't commit to that for initial deployment, you're no longer a player," says Daniel.

Stories abound of equipment manufacturers struggling to deliver not only systems that support IMS, but also solid Management Information Base (MIB) databases that make their gear manageable. A source inside BellSouth who has spent the past several months working on the LEC's VoIP architecture says that one major equipment manufacturer has lost an opportunity to supply application servers because of its lack of reasonable MIBs. If this is the type of basic problem holding up IMS development, it's clear that most IMS gear is simply not ready to come out of the oven yet. "We're still seeing 24-month [estimates] before we see widespread deployment of standard systems," says Syndesis' Nicholson.

Second, as mentioned, to realize the full potential of IMS, an all-IP network environment must be in place, which is not the case today. IP networks are extensive and growing, but it is not reasonable to suggest that all telco customers are connected to a core IP network over a common IP pipe. Residential broadband in the United States is still in the minority, and most enterprise customers have not migrated their range of access services to a single IP pipe, though they are calling for native IP and Ethernet services from carriers. As IP networks are being built out, remember that most of the technology involved—such as MPLS—is also still very new, which means the problems in implementing these new network technologies are still to be ironed out. (See “Managing Capacity in IP,” Billing World and OSS Today, September 2005.)

Looking deeper, IP developments are happening on top of changing layer 1 and 2 transport and access networks. The optical layer is a mix of SONET, WDM, DWDM, PON and other technologies that are being engineered to coexist. Layer 2 networks are gradually shifting from ATM and frame relay to various flavors of Ethernet. In short, there is significant instability in these underlying networks. Before there can be a dynamic, all-IP, IMS-based environment, layers 1 and 2 must stabilize to provide a solid foundation to support the real-time capacity, quality and service delivery capabilities the IMS control plane assumes will be in place.

Lastly, as these networks drive toward the all-IP future, operational organizations remain in flux. Most major carriers are still consolidating their operations in order to attain a centralized, service-centric view of their entire network base with efforts to roll all ordering, provisioning and diagnosis into common streams. In addition, this is all happening in the context of recent mega-mergers, which further complicate these efforts. Carriers are banging away at this problem to try, for example, to automate provisioning from layer 1 up through the services layer, at least in transport networks (most access pipes will continue to be provisioned in a linear, mechanistic fashion for the time being). In short, underlying networks are not yet aligned or managed in such a way as to allow them to meet the needs of a high-volume, dynamic, real-time services environment. Until the underlying network and operations problems are solved, the service examples being hyped today will not be realized.

What’s at Issue Right Now?

Those big picture issues are going to take years to solve. In the meantime, the folks in the trenches are tackling these problems incrementally, but issues from carrier to carrier are not necessarily the same. “I don’t think there’s a lot of commonality,” says David Sharpley, vice president of marketing and product management at MetaSolv Software. “We see delays because of hardware and software not being ready, because of changing business requirements, and a lack of understanding of the impacts on operational processes and the people who use them. You’re into human change now, and that’s a big challenge for large companies. We haven’t seen a common thread, though, because each project is so unique.”

Certainly, every carrier has unique technical, organizational and personnel matters to overcome, and the biggest challenges aren’t necessarily centered on IMS yet. For example, companies like Telus and Telecom Italia are focused on delivering broadband, unified IP networks, VoIP and TV while also consolidating their operations. SBC’s Project Lightspeed operations architecture is being built from the ground up to support its triple-play launch and future IMS offerings. The approaches are different, but what’s common is that they are still wrestling with OSS problems that have more to do with operating underlying networks today than with delivering an IMS architecture.

“Putting the service out there isn’t the problem,” says Telcordia’s Cohen. “It’s how to cost-effectively operate it when you scale it out to a large audience. Carriers are getting sticker shock over the operational cost of going to market quickly. The cost is so high because so many systems have to be touched and revamped.”

Provisioning Turned Inside Out

Provisioning is one of those systems being overhauled, and in fact turned on its head. Until now, telco provisioning has been almost entirely linear, mechanistic and silo-oriented. A complex order is broken down, provisioning orders are sent to each relevant network silo, and a linear, mechanized process is followed. If something goes wrong, the order is excepted, thrown back into the queue, and the process begins again until everything is delivered. Services are often tested after the fact, and as any telco customer knows, the entire service provisioning process can take weeks or months to complete. This model is becoming irrelevant.
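
The linear, exception-driven flow described above can be sketched in a few lines. This is an illustrative model only, not any carrier's actual system: the silo names and the retry-on-exception behavior are assumptions drawn from the paragraph's description of orders being decomposed, pushed through each silo in sequence, and thrown back into the queue when something fails.

```python
# Illustrative sketch of linear, silo-oriented telco provisioning:
# an order is run through each network silo in sequence; any failure
# "excepts" the order back into the queue, and the process restarts.
from collections import deque

SILOS = ["transport", "switching", "access"]  # hypothetical silos


def provision(order, work_silo):
    """Run an order through each silo in turn.

    work_silo is a callable(silo, order) -> bool supplied by the caller,
    standing in for the mechanized work done inside each silo.
    """
    queue = deque([order])
    completed = []
    while queue:
        current = queue.popleft()
        for silo in SILOS:
            if not work_silo(silo, current):
                # Exception: re-queue the whole order and start over.
                queue.append(current)
                break
        else:
            completed.append(current)  # every silo succeeded
    return completed


# Example: an order that fails once in "switching", then succeeds on retry.
attempts = {"switching": 0}

def flaky(silo, order):
    if silo == "switching" and attempts["switching"] == 0:
        attempts["switching"] += 1
        return False
    return True

print(provision("order-001", flaky))  # → ['order-001']
```

The point of the sketch is the shape of the loop: nothing is parallel, nothing is negotiated in real time, and a single failure restarts the whole linear pass — which is why the article calls the model irrelevant for on-demand IP services.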

In the new model, the IP environment is supposed to be about rapid service introduction, on-demand capability and ubiquitous access to services. The network must be able to provide the proper connectivity from the point of origin—like an application server—to the customer device. To deal with all of the complexity, what needs to materialize in the IP and IMS domains is the ability to create standard service components that can be mixed and matched to build targeted service offerings very quickly. These service components will include things like network connections, IP streams, applications and features, as well as capabilities such as presence, QoS management and various types of data that either relate to service performance or must be delivered to the user.

These service components will ideally be managed in a way that feels like web services integration: different components will announce their capabilities, and some central, organizing system will be the source for calling on these capabilities to tie them together. The ability to mix and match reusable service components is definitely an element of the IMS domain and architecture. Companies like Leapstone are supplying just the first IMS building blocks to SBC and others today. Admittedly, little of this functionality is automated, and most service components must be defined manually in the IMS applications designed to handle service creation. That said, it’s an incremental process that will require several iterations to perfect.
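
The announce-and-compose model described above can be sketched as a tiny registry. Everything here is hypothetical — the component names, capability names, and the `ComponentRegistry` API are invented for illustration, not drawn from any vendor's product:

```python
# Minimal sketch of the web-services-style model: service components
# announce their capabilities to a central registry, and a composition
# step ties named capabilities together into a targeted offering.
class ComponentRegistry:
    def __init__(self):
        self._capabilities = {}

    def announce(self, component, capability, handler):
        """A component registers a capability it can provide."""
        self._capabilities[capability] = (component, handler)

    def compose(self, offering_name, required):
        """Build an offering from a list of required capabilities."""
        missing = [c for c in required if c not in self._capabilities]
        if missing:
            raise LookupError(f"no component announces: {missing}")
        return {
            "offering": offering_name,
            "components": {c: self._capabilities[c][0] for c in required},
        }


registry = ComponentRegistry()
# Hypothetical components announcing hypothetical capabilities.
registry.announce("session-server", "presence", lambda user: True)
registry.announce("edge-router", "qos", lambda stream: "gold")
registry.announce("media-server", "ip-stream", lambda stream: iter(()))

offer = registry.compose("converged-voice", ["presence", "qos", "ip-stream"])
print(offer["components"]["qos"])  # → edge-router
```

The design choice the article points at is exactly this separation: components describe what they can do once, and new services become a matter of composition rather than fresh integration — though, as noted, today most of that composition is still defined manually.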

BT Embraces Service Components

BT sees this service component approach as the key to managing the complexity of personalized, targeted IP services. "You do have a significant clash between the different ways you operate these different networks," says Daryl Dunbar, BT's director of 21st Century Network (21CN) design and development. "We're borrowing from the software development model … ideas like objects and reusable components. To accelerate service introduction, we want to let people pick and mix components to build their own services."

The idea is to define reusable service components and assign systems and network “hooks” that reveal them as reusable components. What’s defined is all of the capabilities of the service and its network and provisioning requirements. These components can then be grouped to build specific services that are ready for delivery. BT plans to make these hooks available internally and to wholesale customers to enable them to combine BT’s capabilities with their own to create new services. This is perhaps the initial inkling of the MVNO model being realized in the landline, IP realm.

“I can … build a product much more rapidly than ever before,” says Dunbar. “We’re targeting dropping the service introduction time down to 9 weeks.” With that in mind, Dunbar also says that access provisioning isn’t going to change radically from the traditional process, but “once you’re connected to the cloud, the service you want is dynamic.” In other words, facility provisioning may not change much in the short term, though many carriers are looking at concepts such as “left in place” networking and pre-building access pipes into new construction that can provide access to service provisioning the day a customer moves in. Service provisioning, however, is headed toward a real-time mode.

Device Support a Vendor Issue

Another major challenge in provisioning has to do with the number of new devices being introduced to the network and service layers. "There's constant change in hardware suppliers but also in the number of devices provided by each and how unique each one is, plus the different IOS and software versions on each device," says MetaSolv's Sharpley. Carriers are telling OSS vendors to collaborate with equipment manufacturers to ensure device support, but often the carrier needs to press the equipment vendor to answer an OSS vendor's calls. Those with established relationships and protocols for joint solution work have the advantage, because the message coming from many carrier CTOs is "make sure you're there when we are." At a recent TeleStrategies conference, a spokesman from BT made it clear that it considers device support the responsibility of the network equipment manufacturer, hence placing the onus on NEMs and OSS vendors to collaborate.

Telus Presses Vendors

BT is not the only carrier to expect more proactive collaboration from its vendors. Despite well-publicized labor issues, Telus is aiming for an extremely aggressive IPTV launch which, at press time, was slated for the September 2005 timeframe but not yet announced. According to vendor business developers with direct access to CTO-level staff, Telus has laid the burden on its OSS and network vendors to take on the cost and risk of delivering joint solutions to meet its deadlines, while insisting on a high level of software engineering quality.

In exchange for meeting these demands, Telus is making large, long-term commitments to six key OSS/BSS suppliers, though not all of them will necessarily stay in the game, depending on their performance. The current list includes Amdocs, NetCracker, Syndesis, Intelliden, Atreus Systems and Micromuse, and each is responsible for collaborating collectively where necessary and with network vendors Alcatel and Nokia to support ADSL2+, IPTV, VoIP and future offerings.

European Counterparts

As Telus presses forward, Telecom Italia and Telefonica are taking different approaches to set the triple-play pace in continental Europe.

Telecom Italia was one of the first to realize it needed to centralize and streamline its operations to achieve a seamless, pan-European IP and broadband network, and its foresight is a big part of the reason the company is able to win DSL business away from major PTTs outside Italy. Even with that head start, TI is just beginning to roll out basic IPTV offerings in limited trials and is being very careful about its service quality in the process. The company is running into issues like loop quality, choppy QoS in the service stream, and the fallout from integrating brand new vendor software and network gear. Its goal is to maintain a strong customer experience when it launches network-wide, by first ironing out the majority of these technical issues.

Other European carriers, such as Telefonica, are moving into IPTV more rapidly and with less of a focus on solving quality problems up front. Telefonica has already gained more than 40,000 IPTV customers and has deployed more than 100,000 IPTV ports in Lucent DSL access concentrators. At the same time, Telefonica has been migrating its transport networks to an all-Ethernet architecture for two years and plans to merge its commercial and residential Ethernet networks by the end of 2005.

Technical staff members and supplier business developers say that Telefonica's subscriber numbers have been achieved despite significant customer churn related to dissatisfaction with the initial quality of the 50-channel television and video-on-demand service. This reflects both the immaturity of the technology and Telefonica's willingness to keep marching and work out the kinks along the way. Telecom Italia and Telefonica are worth watching for companies like SBC, to determine whether customers will accept poor quality initially and continue to try out new services, or whether carriers have only one chance to make a first impression when it comes to TV.


SBC Wrestles With Lightspeed

Today, SBC’s video service is the center of focus and the source of some of its biggest challenges. IMS is in the works, but is not center stage at this point. It’s basically common knowledge that Microsoft is dealing with a number of problems around IPTV. Senior business developers close to Lightspeed’s executive managers and chief architects report that executives at SBC were shocked when confronted with the sheer number of servers necessary for Microsoft’s IPTV solution and are therefore concerned with its scalability and related costs. Microsoft may be the only player with the resources necessary to overcome this problem, but no one knows when it will be solved and ready for mass production.

SBC is also reportedly very concerned about quality, particularly in the home. Currently SBC has no technology to provide the reach or visibility necessary to manage independent service streams delivered over a common pipe and distributed to various devices in the household. It can test loop quality prior to service initiation, but once the service is up and running, there’s virtually no visibility down to the set-top.

For that reason, the carrier is issuing RFXs for agent technologies that can provide quality measurements and problem diagnosis. Metrics such as channel change response time and video quality are high on the list—especially considering that it can take several seconds to change channels in some IPTV implementations. In the VoIP domain, SBC wants to see SIP call setup metrics and voice quality measurements. Altogether, one major goal is to understand how one service affects the quality of all the others traveling over the same access pipe. The carrier is also looking at ways to build capabilities such as virus and worm detection directly into the pipe. BellSouth reportedly shares these concerns as it examines its VoIP and video offerings.
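
The monitoring goal described above — per-service quality metrics collected from one shared access pipe, so one service's effect on the others is visible side by side — can be sketched as a simple aggregation. The metric names, sample values, and thresholds here are illustrative assumptions, not SBC's actual RFX requirements:

```python
# Hedged sketch: summarize per-service quality metrics (e.g. IPTV
# channel-change time, VoIP SIP call-setup time) gathered from a
# single shared access pipe, flagging averages that breach a threshold.
from statistics import mean


def summarize(samples, thresholds):
    """samples: {service: {metric: [values]}};
    thresholds: {metric: max acceptable average}.
    Returns per-service averages plus an ok/not-ok flag per metric."""
    report = {}
    for service, metrics in samples.items():
        report[service] = {}
        for metric, values in metrics.items():
            avg = mean(values)
            report[service][metric] = {
                "avg": round(avg, 2),
                "ok": avg <= thresholds.get(metric, float("inf")),
            }
    return report


# Hypothetical samples from two services sharing one pipe.
pipe_samples = {
    "iptv": {"channel_change_s": [1.8, 2.4, 3.1]},
    "voip": {"sip_setup_s": [0.4, 0.5, 0.6]},
}
report = summarize(pipe_samples, {"channel_change_s": 2.0, "sip_setup_s": 1.0})
print(report["iptv"]["channel_change_s"]["ok"])  # → False
```

Even this toy version shows why agent technologies matter: without measurement points past the loop, down toward the set-top, there are no samples to feed a report like this once the service is live.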

Industry rumors have it that SBC plans a "controlled market entry" in one market around the end of 2005, with plans to expand the offering to new markets in mid-2006, which makes a lot of sense. It is believed that the company has begun to install its IP video equipment and is building out its IP video operations center to monitor the network, a super-hub to acquire national video content, and four video hub offices for storage and on-demand delivery to subscribers. The operations center is scheduled for full operation by year end, with the super hub and video hubs operational in 2006 and 2007. Regardless of these plans, it's likely that the initial TV offering will not be on par with cable offerings, as challenges such as multi-second delays when changing channels persist.

Sources with a major systems integrator responsible for delivering significant portions of the architecture confirm that SBC’s initial Project Lightspeed trial is slated to launch in San Antonio, Milwaukee and Kansas City in November 2005. This was initially the Lightspeed launch date, but it has since been re-categorized as a trial. The launch is now slated for April or May 2006 and will include IPTV, Internet access and VoIP, but not video on demand—in other words, basic triple play. An April-May launch would match the timing of SBC’s second planned OSS/BSS upgrade.

Currently, OSS and IMS vendors involved in Lightspeed include Amdocs, Granite, SAP, Syndesis, EMC Smarts, Micromuse and a variation of Trendium’s solution. IBM and MQ Sonic are providing integration middleware, and a joint IBM-Leapstone solution will serve as the IP service delivery platform—an IMS component. Network providers include Alcatel, 2Wire and Microsoft for set-top boxes. In the midst of these triple-play launches, SBC will migrate its VoIP service from the AT&T CallVantage platform to one being developed internally, though reasons for the switch are unclear. Whatever the cause, it’s clear that SBC is dealing with a large number of new moving parts, making the delays in what were optimistic launch schedules not very surprising.

When We Build It, Will They Buy?

Ultimately, the point of overcoming all of this outrageous complexity is to increase the subscriber base, revenue streams and profit margins. With the billions carriers are shelling out to make it all happen, there’s real concern over whether the costs will be justified. Will customers give all of their communications dollars to telcos, rather than cable operators or other competitors? Currently, the estimates aren’t promising.

“We’re basically in the 5 to 9 million subscriber range, in terms of how many subscribers telcos will have as of 2010, and that number is underwhelming. If by 2010 they aren’t sporting 10 to 15 million subscribers, I have to believe that’s a disappointment,” says Robin Knight, executive director at Agilent Technologies.

While the telcos are publicly very excited and optimistic about their future in multimedia services, staff members with several key suppliers report that a senior executive with at least one of the leading carriers has expressed fears that if its new initiative isn’t profitable within two years, the carrier will be going to the capital markets just to make payroll. Such is the danger, and the promise, of IMS.

Author’s note: Alcatel declined to comment for this story. SBC and Telus did not reply to requests for comment.

Share RecommendKeepReplyMark as Last Read

From: Frank A. Coluccio10/22/2005 1:09:24 AM
   of 46642
Ireland gets poor marks for broadband
By Deirdre McArdle | Friday, October 21 2005

Yet another broadband survey has shown that Ireland is languishing at the bottom of the league table in Europe.

This time the survey comes courtesy of ECTA, the European association of alternative telecom service providers, which positioned Ireland 14th out of the 15 European Union Member States. Embarrassingly, the report shows that Ireland has been overtaken in terms of broadband penetration by some of the new EU member states, like Hungary, Slovenia and Lithuania.

The report, which tracks progress in broadband over the past year, reveals that Ireland is not catching up, but rather maintaining its position at the bottom of the table. ECTA outlines the damage that has been done to the Irish broadband market by the slow processes in place for unbundling the local loop.

It concludes that the leading broadband countries are those where competitors have been able to come in and build a share of the market using competing technologies. In these countries there is competition in broadband from DSL, LLU, and from cable networks.

"The evidence is clear: in France and the UK where action was taken on local loop unbundling and bitstream, they moved up three places in the broadband league table in 18 months," said Tom Hickey, chairman of ALTO, which represents ECTA in Ireland. "Italy also rose two places in the past two years as a result of its policy of building a path towards competition through bitstream and local loop unbundling; the countries at the top of the table are those with the highest LLU rates," Hickey concluded.

The report noted that countries at the bottom of the table were all similar in that the incumbent's market share was consistently bigger than in those at the top of the table, with new competitors providing less than 40 percent of broadband lines. In Ireland, new competitors provide only 37 percent of lines.

"Competition works, but competitors need to be able to access the copper loop in a reasonable manner to bring services to their customers," said Hickey, who went on to say that broadband penetration needs to be increased significantly.

"The ECTA report found that the countries with the highest broadband penetration had strong competition, with at least 50 percent of broadband lines being provided by competitors of the incumbent," said Hickey.

The ECTA study comes on the back of the OECD's Science, Technology and Industry Scoreboard 2005 survey, which ranked Ireland 19th out of 22 countries. Poland, the Czech Republic and Hungary now have more households with high-speed internet access than Ireland, which has penetration of about 5 percent and is the fourth-worst performer in the league table. The only other countries ranking lower for the period from 2000 to 2004 were Mexico, Turkey and Greece.


From: Frank A. Coluccio10/22/2005 1:23:32 AM
   of 46642
Henderson aims to be Nevada's first wireless city

Going wi-fi proves challenging for municipalities
By VALERIE MILLER | October 21, 2005

Keith Hedgcock likes sipping his coffee while he works on homework and listens to music on his laptop. But the best part of his visits to the Coffee Bean & Tea Leaf shop on Maryland Parkway? The free wi-fi.

At age 50, the gray-haired college student has seen what wireless access can be in his former home of Minneapolis, Minn. There, free wi-fi was commonplace. By comparison, he complains, Las Vegas really doesn't stack up too well.

"There's that song, 'She's still occupied in 1985,'" Hedgcock says, paraphrasing a line from the song "1985" by the rock band Bowling for Soup. "That's like Las Vegas. It's behind the times."

It's not just one man's opinion. A survey of the most unwired cities, released earlier this year by Intel Corp., ranked Minneapolis ninth best for wireless Internet access. Las Vegas was almost in the middle of the pack of 100 major cities, at 42.

Curiously, a municipality ranking a little lower on the survey than Las Vegas -- Philadelphia -- is making big news in the wireless world. The City of Brotherly Love's plan to partner with Earthlink to provide affordable municipal wi-fi for all its citizens has set off a flurry of proposals across the country. Other municipalities may not want to risk looking, as Keith Hedgcock says, behind the times.

"Everybody and his brother is coming out of the woodwork," mused communications consultant Frank Dzubeck. "It is an interesting issue and it will go to the courts." Dzubeck, president of the Washington, D.C.-based consulting company Network Communications Architects, was alluding to the opposition would-be wireless cities have been facing from telecommunications companies.

The announcement of the ambitious "Wireless Philadelphia" project last year sped up the pace of wireless expansion, contended Dinah Neff, the city of Philadelphia's chief information officer.

"Philadelphia will always be the leader in this, and now there are about 99 cities that have looked at this and implemented it in some sort of way, whether it was for the city government only or for the citizens," Neff said. "We brought the public's attention to it."

Still, no major metropolitan city yet offers municipal broadband wireless or wi-fi access to all its citizens. Big cities such as Chicago and New York are studying it, while smaller cities like Austin, Texas, have come close to achieving it.

The college town provides 100 hot spots for wi-fi connection through its Austin Wireless City Project network. Recently, San Francisco has threatened to give Philadelphia a run for its money in becoming the nation's first major wi-fi city. The City by the Bay is talking with potential partners, including Google, which has offered to provide the service for free.

The Las Vegas Valley also has the right qualities to become a wireless Mecca, said Mike Ballard, president of the Technology Business Alliance of Nevada. "Number one, our geography and our topography make it good. It is all flat," he said. "Number two, with all the growth in the valley, it is hard for the incumbent utility to put it in fast enough, (and) because of all that growth, there is more openness to change. People may not have heard of Cox or Sprint."

The first city in Southern Nevada to make a wireless name for itself may not be Las Vegas proper. Instead, the one-time industrial burg of Henderson, now Las Vegas' largest suburb, could lead the way. Henderson soon could be following in the footsteps of Philadelphia and San Francisco by offering wireless services to all its residents.



The Henderson initiative will likely take a year to "get started on a pilot program," said Curlie Matthews, the city's chief information officer. It could be another 18 months to 2 years before the service is available to all its nearly 250,000 residents, he added.

"There would be 'hot spots,' like Starbucks has, but all over the city," Matthews said.

The move into wireless could take different forms, he explained. One would have city workers connected to a wi-fi or WiMAX network first, with citywide access following via a separate provider. Another plan would have city employees and residents using the same provider, but on separate networks.

Antennas would be installed around the city but Matthews said it was uncertain whether the city would pay for the build-out or a private partner would.

Though the project is in the early planning stages, city officials are already bracing for obstacles. The ambitious plan may face opposition from local utility providers who have the backing of a six-year-old state law on their side. Nevada Revised Statute 710.147 prohibits the governing body of a county with a population of 50,000 or more from providing telecommunication services to the general public.

Both Clark County and North Las Vegas officials have cited the 1999 law as a problem when considering offering municipal broadband to residents. "The law affects our ability to put it up and offer it to the public ourselves. It doesn't stop us from partnering with someone else," said Roma Haynes, a management analyst with Clark County.

Meanwhile, the city of Las Vegas is just looking at providing wireless access in its city buildings for now, according to city spokesman Jace Radke.

Henderson officials are expecting a possible challenge from Sprint and have already contacted the utility about their plans, Matthews said. He is now waiting to hear back from the company. Sprint currently sells wireless access to the Henderson Police Department and the Henderson Fire Department. It would lose that business if the city provided its own service.

Municipal wireless could also compete with Sprint's DSL high-speed Internet for customers.

Local Sprint officials responded that they were unaware of Henderson's wireless plan.

"If they are interested, we would be very pleased to partner with them," said Vicki Soares, a spokeswoman for Sprint's LTD division. "The process should be left up to the experts. Government excels in its respective roles and responsibilities, and private companies excel in their respective functions."

The LTD division is the soon-to-be-spun-off landline division of Sprint Nextel, which includes DSL.


The legal issues arising from municipal wireless services are far from resolved. The cities of Philadelphia and Austin had to fight for their right to provide citizens with wi-fi.

The Texas town battled telecom giant SBC and won. The telephone company lobbied to get a law passed through the Texas Legislature to prohibit Austin Wireless City from providing free wi-fi service to residents. The municipal plan had such wide popular support that state lawmakers simply let the clock run out on the SBC proposal. Austin Wireless City went forward as a result.

Philadelphia's Neff recalls an even tougher fight to move Wireless Philadelphia forward. Verizon succeeded in persuading the Pennsylvania Legislature to pass a law prohibiting cities from competing with private telecommunications companies. Philly wi-fi supporters refused to give up.

"We took it to the street and made the battle public," Neff said. "We got an exception for Philadelphia with a non-litigation clause." The law's effective date was also delayed until July 1, 2006, in order to give other Pennsylvania cities and towns time to roll out their wireless plans. Verizon has publicly criticized San Francisco's plan to blanket the city with wi-fi, but has not taken legal action against it.

Henderson's Matthews is also looking at the possibility of approaching the state Legislature to get an exception to the Nevada law. Unlike Clark County officials, who feel private partnerships are allowed, the Henderson official said the statute is vague.

"Chances are that we would have to get an exception, unless we provided services just to employees. Then we might have to get an exception anyway," he said. "We are following the Philadelphia project, and more than likely what happens there will have an impact on Henderson."

Philadelphia's Neff said the battles with the state are over, but the war might not be.

Federal legislation has been proposed to ban the building of municipal wireless networks. The Preserving Innovation in Telecommunications Act of 2005 is sponsored by Rep. Pete Sessions, R-Texas, who is a former employee of SBC. An opposing piece of legislation, the Community Broadband Act of 2005 is being sponsored by Sen. Frank Lautenberg, D-N.J., and Arizona Republican Sen. John McCain.

"The battle is now federal in that they are now trying to rewrite the Telecommunications Act of 1996 to include wireless broadband," Neff said. "We should be grandfathered in, but there is no guarantee that we will be."

Network Communications Architects consultant Dzubeck doubts a state law would offer much protection in that case. "I believe that if it went to the Supreme Court, it would overturn any law allowing government competition with private entities."

Private companies may not complain, however, if there is something in it for them. Cox Las Vegas, which offers wireless broadband to more than 20 local resorts, wasn't immediately concerned upon learning of the Henderson plan.

"If a municipal government is going to develop a broadband network and Cox is the dominant provider, who are they going to come to? They will probably come to Cox," said company spokesman Jurgen Barbusca. In the case of any wireless service, the signal has to eventually come back to a ground line, which would also benefit the line's owner, such as Cox or Sprint, he added.


There is no clear consensus that municipal wi-fi is a benefit. Using public money to provide wireless services has been called both a waste and inequitable, communications consultant Dzubeck points out. "The other problem is spending taxpayer money," he said. "If you have people who need housing and neighborhoods that need to be rebuilt, but then you can't do it because you've spent the money on wireless broadband. The other issue is if you have nothing to connect (to the wireless network) with, how does it benefit you?"

A Vanderbilt University report in 2001 found that twice as many Caucasian Americans had the Internet in their homes as did Hispanics or African Americans. A similar Department of Commerce study reported that 73 percent of Caucasian students had computers in their homes, while only 33 percent of African American students owned them.

The so-called "digital divide" has been used both to support and to criticize the Philadelphia Wireless project.

In the end, Earthlink agreed to pay the $10 million to $15 million required to build the network. A fee of $20 for most residential users and $10 for low-income households will be charged, according to Philadelphia's Neff. Earthlink will pay Wireless Philadelphia a portion of the ad revenue. There are about 1.5 million people in Philadelphia, and the city is trying to make sure they'll all have access.

"We want to place 10,000 computers in low-income households," she said. To that end, Wireless Philadelphia has been providing desktop computers priced below $200 and laptops under $400.

San Francisco has also made it a priority to provide all its citizens with computer training as part of its wireless push, said Chris Vein, that city's senior technology adviser to the mayor's office. Free, low-cost and refurbished equipment is also being looked at. Whether the Northern California city uses tax dollars for the project depends to some extent on whether it goes with Google, he said.

Henderson aims to open the Internet up to all residents with a citywide wireless service, said the city's Matthews. The shifting demographics of the once-industrial city present a challenge, however.

"Do we just put the service out there and the only ones that can use it are the ones who can afford computers?" he asked. "A big segment of the people coming to Henderson are senior citizens, but what about the people who are longtime Henderson residents who don't have a big retirement income? These are part of 1,001 issues facing us."

One local wireless broadband provider, Verde Communications, sees even more problems with Henderson running the wi-fi show.

"When you are talking about Philadelphia or San Francisco, you are talking about one entity," Verde President Jason Mendenhall pointed out. "Here, if Henderson does it, it'll only work in Henderson and not Las Vegas, North Las Vegas and Clark County. And if your service only works in Henderson, how likely are you to buy that?"


Southern Nevada firms may not necessarily lose out if the valley never blooms into a wireless superstar.

Some business owners admit they may be better off the way things are now. After all, free wi-fi access has become quite a selling point for the relatively few shops that offer it. Locally owned Rejavanate gets as much as 20 percent of its customers because of the wireless link, said owner Bruce Ewing. The access costs the Flamingo Road coffee shop only $35 a month.

Complimentary wi-fi is important for coffee houses and eateries like Panera Bread, Rejavanate, the Coffee Bean & Tea Leaf and It's a Grind because they are part of a rare breed. Still, sipping his coffee, Coffee Bean & Tea Leaf customer Keith Hedgcock said the famous Seattle franchise just a block down the street was losing business by charging for the service.

"There aren't too many coffee houses with free Internet," he said, putting his refillable glass down for a moment. "Starbucks charges. That's why I chose this one." | 702-871-6780 x331


Wi-Fi: Formerly known as 802.11b, it has been dubbed "Wi-Fi" because it is easier to remember. Wi-Fi spreads signals across a broader band of frequencies within the radio frequency spectrum. Wi-Fi requires 'hot spots' and has shorter reach than wireless broadband.

Wireless: Communication takes place via the airwaves instead of cables or telephone lines. It is enabled by packet radio, spread spectrum, satellites, cellular technology and microwave towers. Wireless can be used for voice, data, video and images.

Broadband: Short for "broad bandwidth," it is a type of high-speed, high-capacity data transmission channel where a single wire, or medium, can carry several signals at once. Broadband sends and receives information on a coaxial cable or fiber optic cable. Cable television also uses broadband.

WiMAX: Also called WirelessMAN and defined by the IEEE 802.16 Air Interface Standard, WiMAX is a specification for fixed broadband wireless metropolitan access networks. It supports very high bit rates in uploading and downloading. The technology is expected to be standardized around 2008.

DSL: Short for Digital Subscriber Line, it provides a fast, permanent connection to the Internet. DSL uses the copper found in most homes and offices. Special hardware connected at both ends of the line allows data to transmit at a much faster speed than traditional telephone wiring can.


From: Frank A. Coluccio10/22/2005 2:55:26 AM
   of 46642
EarthLink: Where competition meets convergence
Marguerite Reardon | CNET
October 20, 2005, 19:05 BST

EarthLink chief executive Garry Betty knows that his company faces a slew of challenges in the next decade but still believes that the ISP can thrive in a new — and perhaps very different — Internet era.

Not only is EarthLink losing dial-up customers, but its broadband business took some blows earlier this year on the regulatory front. First, the US Supreme Court ruled that cable companies don't have to share their networks with ISPs. Soon after, the Federal Communications Commission (FCC) said the same thing about DSL providers. For EarthLink, which depends on access to these networks to sell its broadband Internet service, the news was bad.

Betty has been with the company since 1996. He helped steer it through an initial public offering as well as a merger with former competitor MindSpring. He recently spoke with ZDNet UK sister site CNET about the changing Internet landscape and what that will mean to EarthLink. He also had a message for critics ready to count the company out of the game: We still have a few tricks up our sleeve.

Q: With the Brand X Supreme Court decision basically saying that cable operators don't have to share their networks and the FCC changing its classification of DSL, it seems like it's getting harder for EarthLink to compete in broadband Internet service.

A: It hasn't gotten any more difficult. It just hasn't gotten any easier. We expect to continue negotiating commercial agreements with DSL providers because it's good business. I've got 400,000 retail DSL customers in the US. That's a big chunk of business, and our relationship with these providers has never been better.

But it's important for you to own some infrastructure yourself, right?

It's important to have an alternative to broadband and cable in order to create some real competition. For 75 percent of the country, I can't provision an EarthLink service over the cable plant. With networks like the one Philadelphia is developing, we'll have another option beyond the telephone network to give consumers a choice, where that choice today doesn't exist.

Is that why EarthLink has chosen to build the wireless network in Philadelphia instead of just leasing capacity from another provider, like HP?

We've been so disenchanted with our ability to get access to broadband pipes that we felt like we needed to take a more proactive stand. We would prefer to be a non-facilities-based provider. But if you don't have the people who own the network willing to sell it to you at a price where you can make a living, you have to change the name of the game. This is part of changing the name of the game.

But I thought you said that your relationships with access providers are going well.

They are. But it's a hard living. It's like being a sharecropper. They are basically selling [access] to me almost for what they are selling it to consumers. It's hard to tell the consumer, "Hey! I've got a better service, but you're going to pay me $10 more a month for it."

So, why be in this business at all?

Because it's the business we're in, and we have 1.5 million broadband subscribers. We're probably getting about 20 percent of the relative market share in broadband. But we're not getting what I believe, ultimately, we can get in terms of market share, if we had a level playing field.

Do you anticipate having to fight any more regulatory battles?

Oh, yeah! The telephone and the cable companies never quit. They will continue to take every advantage they can in putting up roadblocks for other people to compete against them.

Would you say Wi-Fi is essential to EarthLink's strategy going forward?

It's a piece of the strategy, but Wi-Fi isn't going to displace a very high-speed connection. As a very cost-effective alternative to entry-level broadband and as a way to provide customers a better always-on higher-speed solution, this very much fits the bill.

What about broadband over power lines?

That's a very important technology. There's been a lot of testing over the past 10 years. Soon you'll start seeing some large-scale commercial deployment of broadband over power line in the United States.

What do you think about plans from companies like Google that say they want to offer municipal wireless access for free?

Free sounds great, doesn't it? But you just can't run a network, roll trucks and provide customer support and all the other back-end services for free. Every other ad revenue model for providing [free] service has failed in the past. It would take somewhere in the neighbourhood of $5 or $6 (per user) a month just in ad revenue to cover your costs. And that's not even getting a return. It's a tough model.
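Betty's back-of-the-envelope claim can be sanity-checked with a quick sketch. The cost categories and dollar figures below are illustrative assumptions; the article supplies only his $5-to-$6 break-even estimate:

```python
# Rough break-even check for an ad-supported municipal Wi-Fi network.
# The three monthly per-user cost inputs are hypothetical; the article
# gives only the resulting $5-$6/user/month break-even figure.

def breakeven_ad_revenue(network_opex: float,
                         customer_support: float,
                         truck_rolls: float) -> float:
    """Monthly ad revenue per user needed just to cover monthly costs."""
    return network_opex + customer_support + truck_rolls

# Assumed monthly per-user costs (illustrative only):
needed = breakeven_ad_revenue(network_opex=3.00,
                              customer_support=1.50,
                              truck_rolls=1.00)
print(f"Break-even ad revenue: ${needed:.2f} per user per month")
```

Any plausible split of operating costs along these lines lands in the range Betty cites, and that is before the operator earns any return at all.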

Does it bother you that Google got so much attention for the San Francisco bid?

I thought it was pretty interesting, since it was supposed to be a closed bid. We put in a proposal, just like 28 other companies did.

Do you view Google as a competitive threat?

We've got a great partnership with Google. We've integrated their search technology into the core of our services. They will be a prominent part of what we do in Philadelphia.

But everybody is a competitive threat. We compete against Yahoo, Microsoft, AOL and Google. In certain instances, we partner with them. We compete against the telephone companies, but we have to rely on them to provision DSL. We compete against the cable companies, but again, we rely on them, too. In this world of convergence, I think it's inevitable that in certain instances, your interests are not going to be perfectly aligned. But that doesn't prevent you from continuing to have very cordial, very beneficial business relationships.

How are EarthLink's services evolving to stay competitive?

In the early days, we had great customer support and provided software that made it easier for people to get connected. Over the last three or four years, it's been about providing protection, getting rid of pop-ups, spam, viruses, and not allowing our customers' identities to get stolen.

We've been an undisputed leader in protecting and enhancing what users can do on the Internet. That has paid very big dividends, and quite frankly, it has allowed me to sell my product at a premium. For the future, I think voice is another example of where EarthLink can differentiate our product offering.

What do you think is going to happen to traditional voice players? There's a lot of competition now. Will they go away?

I don't think they're going away. But their business is going to continue to shift. Phone companies are trying to get in the video business, and cable companies are getting in the voice business. You've got independent players like Skype and Vonage, and people like EarthLink. So I think it's just going to be more fragmented.

Won't that just create chaos in the market?

I don't know if I would say chaos, but it creates opportunity. The great thing about where we are is that these markets are so large, it doesn't take a huge amount of business to be very meaningful for EarthLink.


From: Frank A. Coluccio10/22/2005 3:10:12 AM
   of 46642
ROADM Evolution [ Reconfigurable optical add/drop multiplexers ]

By Ed Gubbins | Oct. 20, 2005

Reconfigurable optical add/drop multiplexers have excited network operators with the promise of greater flexibility and lower operational costs, elevating ROADM to a charged industry buzzword in recent years. But ROADMs are also continuously evolving toward more advanced forms, from long-haul to metro, from simpler to more complex, achieving greater flexibility and efficiency while also confronting the problematic costs associated with the technology. Just as a new evolution in ROADMs took place this year, next year's changes are already coming into view.

ROADMs started out in long-haul wavelength division multiplexing networks, offering remote reconfigurability of wavelengths so that network operators didn't have to manually replace the physical hardware of their networks on-site every time they wanted a change. ROADMs also granted network operators more flexibility, freeing them from the previously standard practice of having to plan networks carefully and meticulously beforehand and committing faithfully to that plan.

"Planning WDM networks is not trivial," said Vinay Rathore, Ciena's senior product marketing manager. "You really have to know the beginning and endpoints of all your circuits."

The technology eventually made its way into metro networks, but the first metro ROADMs weren't especially scalable. They typically used "wavelength blockers" to add or remove wavelengths from a ring, using inexpensive liquid crystal devices to simply block signals selected for removal. These ROADMs split each wavelength among all possible routes and blocked only the undesired paths. Using several blockers in a cascading series depleted the wavelength's power along the way, requiring more money to be spent on amplifiers to get power levels back up.
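The power problem described above can be sketched numerically. This is a minimal model under assumed per-stage losses; real split and blocker losses vary by design and are not given in the article:

```python
# Sketch of optical power depletion through a cascade of wavelength
# blockers in a broadcast-and-block ROADM ring. Each node splits the
# wavelength among all routes, then passes it through a blocker; both
# steps cost a fixed number of dB. Loss values here are illustrative.

def power_after_cascade(launch_dbm: float, stages: int,
                        split_loss_db: float = 3.5,
                        blocker_loss_db: float = 5.0) -> float:
    """Signal power in dBm after traversing `stages` blocker nodes."""
    per_stage = split_loss_db + blocker_loss_db
    return launch_dbm - stages * per_stage

launch = 0.0  # dBm at the add site
for n in (1, 2, 4):
    print(f"after {n} node(s): {power_after_cascade(launch, n):+.1f} dBm")
```

Because each node subtracts a fixed dB penalty, loss grows linearly with cascade length, which is why operators had to spend more on amplifiers to restore power levels.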

A later enhancement allowed more scale and flexibility: Wavelength-selective switching (WSS) presented more choices than just adding or dropping a given wavelength from a ring. Often using small angled mirrors, WSS could switch a signal optically to one of perhaps three other rings.

Some equipment vendors announced WSS ROADMs this year, including Fujitsu Network Communications, which dominated the North American metro ROADM market last year. Other vendors, however, have described WSS as either too immature (as in the case of Tropic Networks), too expensive or both.

Some say the cost of WSS components will come down once the technology starts shipping in larger volumes, yielding economies of scale. But that notion may pose a paradox: How will the technology be deployed in large volumes unless the cost comes down first?

The next step for ROADMs, according to Ciena's Rathore, is a combination of WSS with sub-wavelength grooming (SWG). Instead of switching entire wavelengths, carriers can use SWG to switch smaller sections within that wavelength that comprise individual services. Ciena's CN 4200 multiservice transport platform, unveiled in May, allows grooming in 155 Mb/s increments.

"This way, rather than reconfiguring just wavelengths on the network, the entire network is reconfigurable down to the circuit level," Rathore said. "You can drop an individual service on the wavelength and reintroduce that same wavelength with a new service on it. We're making the ROADM far more granular and improving the efficiency of it."
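As a rough illustration of that granularity: 155 Mb/s is the SONET OC-3/STM-1 rate, so a 2.5 Gb/s (OC-48) wavelength divides into 16 such grooming increments. A minimal sketch, with a hypothetical service mix:

```python
# Sketch of sub-wavelength grooming: packing individual services into
# 155 Mb/s increments on one wavelength. The slot count follows the
# SONET hierarchy (OC-48 = 16 x OC-3); the service mix is hypothetical.

SLOT_MBPS = 155
SLOTS_PER_WAVELENGTH = 16  # one 2.5 Gb/s (OC-48) wavelength

def slots_needed(service_mbps: int) -> int:
    """Whole 155 Mb/s grooming increments a service occupies."""
    return -(-service_mbps // SLOT_MBPS)  # ceiling division

services = {"GigE customer A": 1000,
            "OC-3 leased line": 155,
            "Fast Ethernet customer B": 100}
used = sum(slots_needed(rate) for rate in services.values())
print(f"{used} of {SLOTS_PER_WAVELENGTH} slots used; "
      f"{SLOTS_PER_WAVELENGTH - used} free on the same wavelength")
# -> 9 of 16 slots used; 7 free on the same wavelength
```

With blocker- or WSS-only ROADMs, dropping any one of these services would mean switching the entire wavelength; grooming at the 155 Mb/s level lets the remaining slots ride through untouched.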

Ciena uses the term "dynamic wavelength routing" to describe the combination of WSS and SWG, which the company hails as "the third generation of ROADM." Ciena is planning to introduce dynamic wavelength routing into its products early next year.

Meanwhile, other ROADM vendors are reaching out in different directions, including making attempts to address some of the cost concerns surrounding WSSs and ROADMs in general. At the Supercomm 2005 trade show in June, Movaz Networks introduced a low-cost pizza-box-sized ROADM as an alternative to its larger, more scalable system. Component vendors are working to get the price of their wares down, too. At the European Conference on Optical Communications last month, WSS module manufacturer Metconnex demonstrated a cascade of 16 WSS ROADM modules at 40 Gb/s, boasting lower cost of ownership for carriers.

Another approach offers an alternative to ROADMs entirely. Infinera uses integrated photonics--microchips that do the work normally performed by optical components--to convert optical signals into electronic ones for transparent switching and management and back to optical signals for efficient travel. The company and its technology gained an important validation in May, when Level 3 announced it would deploy the system nationwide.

Though Infinera's product is a remotely reconfigurable optical switch, it is not considered a ROADM. However, Infinera often competes head-to-head with ROADM vendors for the same applications. In fact, Infinera director of marketing Rick Dodd has at times considered referring to the company's approach as "READM," substituting "electrical" for the word "optical," to capture some of the ROADM buzz.

According to Ron Kline, research director for optical networks at Ovum-RHK, "Multiple RBOCs and MSOs in North America are expected to begin ROADM deployments within the next 12 months."

In February, Infonetics Research predicted the overall ROADM market would double this year to more than $200 million. And by 2007, it should be double the size it is now, Ovum-RHK said recently.

A major ROADM request for proposal from Verizon Communications this year followed one last year from SBC. And beyond the current RFPs, Bell companies will need ROADMs to join their networks with those of the interexchange carriers they are acquiring, according to Clif Holliday, president of B&C Consulting. "The way they'll tie those networks together--AT&T, SBC, MCI and Verizon--is going to be with three- and four-degree ROADMs. The other choices are extremely uneconomical."

Those new networks have ROADM vendors jockeying for position, pointing to new differentiators in a space that doesn't allow many. In its metro ROADM RFP issued this year (rumored to have been awarded to Fujitsu and Tellabs), Verizon searched for a versatile approach that would serve two different needs. The carrier determined that only a small number of its offices (the "superhubs") were optimized by the sort of multi-node mesh networks enabled by WSS. For most hubs, rings and blockers worked just fine.

The notion of that sort of mesh/ring mix was at the heart of Meriton Networks' recent acquisition of Mahi Networks, which entered the ROADM market by acquiring Photuris after its homegrown product (a large transport platform) failed to catch on. Meriton and Mahi said they often found themselves in competition with each other for the business of large incumbent carriers that wanted a mix of mesh and ring topologies. Both Meriton--whose gear is optimized for mesh networks--and Mahi--whose gear is optimized for rings--were forced to offer prospective customers future products that covered the other's bases. Finally, they decided to combine their efforts. By next year, the gear will be integrated with a common management system.

"Meshes are problematic because to make meshes work, you use routers," Holliday said. "The problem with that is routers are slow." Though mesh networks can't keep up with Sonet's 50-millisecond reflexes, he said, meshes are advantageous in that they allow carriers to switch more granular traffic streams inside the wavelength itself instead of having to switch only the entire wavelength.

But whether mesh or ring, there's no question about carriers' demand for ROADM. "ROADMs have become table stakes for new networks," Holliday said. "It's not imaginable for someone to build a big fiber network that's not ROADM-based."


From: Frank A. Coluccio 10/22/2005 3:17:16 AM
   of 46642
WATER REPORT: Central California Levees Have State Flirting with Disaster
10/24/2005 | By J.T. Long

California’s 1,600-mile Sacramento-San Joaquin Delta levee system is at risk. Water resource managers and engineers say the old and frail network offers protection against only a 100-year flood event, less security than the floodwalls in New Orleans. A magnitude-6 earthquake nearby, they say, could shut down Central California’s water delivery system, which serves 20 million people, two-thirds of the state’s population.

Built to channel the Sacramento, Cosumnes, Mokelumne and San Joaquin rivers and the lacy Delta region where they converge en route to San Francisco Bay, the levees bring flood protection to a vast swath of fertile farmland and urban areas, including Sacramento. They also funnel 7.5 million acre-feet of water to drinking water transmission pumps in Tracy.

But the aging earthen levee system offers marginal protection at best. Construction began in the mid-1800s and much of it rests on organic loam or a deep layer of sand. The dikes are susceptible to seepage that can lead to blowouts in extreme flooding events.

In the Delta, many of the levees are privately owned structures protecting sunken islands, built to the Federal Emergency Management Agency’s "agricultural" standards. They have minimum levee crown widths of 16 ft, waterside slopes of at least 1.5 horizontal to 1 vertical, landside slopes of at least 2:1 and levee freeboard of 1.5 ft above the average 100-year flood level. Failure of such a levee west of Stockton in 2004–possibly due to animal burrowing–flooded 12,000 acres of farmland. The event caused $100 million in damage (ENR 7/12/2004 p. 7).

Engineers say the options for improvements and paying for them are as varied as the politicians, engineers and geography of the system. But improvements are being made. Last February, after 15 years of work to raise levees, insert slurry walls and build pumping facilities, Sacramento’s 107 miles of urban levees were declared in compliance with FEMA’s 100-year urban flood protection standard. Freeboard is 3 ft above the base flood level, waterside slopes are 2:1 and there is at least a 15-ft crown.

That’s the good news. The bad news is that some of those levees are underseeping. Raising them to a 200-year standard would require building 80-ft-deep seepage berms and closing gaps in slurry walls at a cost of $200 million, according to the Corps’ initial estimate. Stein Buer, the Sacramento Area Flood Control Agency’s executive director, says he hopes the Levee Seepage Task Force "will find a way to do the work for less."

In the meantime, several projects are planned. Chris Neudeck, principal at consulting engineers Kjeldsen, Sinnock, Neudeck Inc., Stockton, says levee work in Sacramento and the Delta can range from "treating a superficial wound to heart surgery," depending on the problem’s severity and the money available.

The least expensive fix ($1 million per mile or less) often consists of adding a plastic sheet and a 200-foot-wide berm halfway up the landside of the levee. This counteracts the weight of the water on the other side and increases the distance subterranean seepage must travel. Local materials work even on porous, loamy foundations–if the material is applied slowly and the load is spread widely, Neudeck says. The method is not an option, however, when development has reached the toe of the levee.

Seepage barriers are another option. They do not broaden the levee and vary in size, style and cost, says Ray Costa Jr., principal engineer for geotechnical services at Kleinfelder Inc., San Diego.

Costa says contractors working for SAFCA started building seepage barriers by using backhoes to trench to levee depth–about 30 ft–and insert a wall of bentonite clay. When seepage was discovered, deep soil mixing augers were brought in to reach a non-porous clay layer as deep as 150 ft.

Seepage barriers cost $3 million to $4 million per mile, depending on the depth of the barrier and access. R. Kevin Tillis, a principal engineer at Concord, Calif.-based Hultgren-Tillis Engineers, calls bentonite barriers "a temporary fix" that can break down over time.

A similar solution, but more difficult and costly ($4 million to $6 million per mile), is sheet-pile insertion. "This has to be done where there aren’t homeowners to complain about plates falling off shelves," says Neudeck.

Setback or attached-setback levees are more controversial. A second levee is built 500 to 1,000 ft behind the old one, using locally mixed and compacted material or imported "select" clay. Where no select material is available, geogrid plastic reinforcement and deep dynamic compaction (dropping 20-ton weights from 100-ft cranes) can strengthen the foundation. The old levee is allowed to erode over time, giving the river room to spread during storms. The approach can increase the safety factor from 1.2 to 1.8 or more and costs between $5 million and $15 million per mile, but it takes land off the table for agriculture or development.

Another option is planned failure. By making part of a levee a few feet shorter and covering the landside with rock to protect the foundation while allowing overtopping, pressure can be relieved downstream, says Costa. One big drawback is determining where overtopping will be allowed.

To pay for improvements, State Sen. Don Perata (D) is sponsoring a $10-billion infrastructure bond bill that includes $1.2 billion for levees. On Sept. 15, Gov. Arnold Schwarzenegger (R) also asked for $90 million in federal aid for 12 state levee programs.


To: Peter Ecclesine who wrote (11836) 10/22/2005 3:25:05 AM
From: axial
   of 46642
Hi, Peter -

I recall studying the problem of "staying within the linear portion" of amplified signals some 40 years ago. Ditto "tuning" the system as amplifiers worked to the end of their life: the decline and wearout characteristic.

Theory and simulation of nonrelativistic elliptic-beam formation with one-dimensional Child-Langmuir flow characteristics stimulated a headache, and not much else. I couldn't find any information suggesting that the decline characteristic had been eliminated.

My understanding, long ago, was that the decline characteristic was caused by physical deterioration of component materials subjected to heat, radiation, etc., and accompanying "boil-off" at the atomic level. Nothing suggests that these new tube components work in a less demanding environment.

Nor is there any indication that such devices could be used in anything but base stations. The problems of power consumption and linearity in solid-state amplifiers for CPE and mobile transceivers remain.

Which is a shame, because I had hoped that science would have solved the problems associated with OFDM amplifiers, in the decade after I began reading about them.

Looking forward to your comments.




To: axial who wrote (11850) 10/22/2005 12:06:37 PM
From: Frank A. Coluccio
   of 46642
Article: A Discussion on Optical Extinction Ratio

Hi Jim,

Since the work describing "The Theory and simulation of nonrelativistic elliptic-beam formation with one-dimensional Child-Langmuir flow characteristics" gave you so much pleasure, while leaving you to wonder where the decline characteristic went, I thought the following tutorial from this month's Lightwave Magazine might interest you. I suggest going to the website (URL below) for the graphics, for the best read. I post the text of the article below for posterity, on the assumption (often borne out) that this SI post will outlast the Web presence of the material being posted. <s>


The Increasing Importance of Extinction Ratio in Telecommunications

Though it is defined and required by many standards, variations in how designers, manufacturers, and end users apply extinction ratio continue to plague the industry.

Bob Hasenick
Agilent Technologies, Inc.

This article describes how extinction ratio (ER) is defined and used within the telecommunications industry. A companion article in the October 2005 issue of Lightwave describes measurement and calibration methods for the characterization of extinction ratio.

Several physical-layer parameters are used to characterize optical signals, and most of these have specific limits and test conditions. Extinction ratio is an important measure of the quality of an optical signal, especially for modern transmitters. Extinction ratio is commonly misunderstood, can be harder to measure than other parameters, and is traded off against other parameters such as chirp,1 fiber dispersion, and self-phase modulation.2

Designers, suppliers, and users of transceivers can benefit greatly from a common understanding of extinction ratio, where the careful choice of component parameters improves the interoperability of complex devices in short- and long-haul communications systems.

Use and significance of extinction ratio

Let's start with the impact of extinction ratio on system performance and other parameters. As the extinction ratio improves, the bit-error ratio (BER) improves, reducing the number of errors and the amount of error correction required. As higher data rates are pushed through materials such as FR-4 printed-circuit boards, loss and dispersion close the eye and errors increase. This effect is commonly known as "power penalty," which has been thoroughly described in other sources.3,4

Briefly, poorer values of ER increase the power penalty (PP), worsen BER, and diminish the benefit of increased power. Equation 1 defines the power penalty as:

PP = -10 log10 {(ER - 1)/(ER + 1)}   (1)

where ER is defined as E1/E0. For example, an ER of 6 dB, a common minimum called out in standards, corresponds to a power penalty of about 2 dB.
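To make equation (1) concrete, here is a minimal Python sketch (my own illustration, not from the article) that converts an ER quoted in dB to a linear ratio and computes the resulting power penalty:

```python
import math

def er_db_to_linear(er_db):
    """Convert an extinction ratio in dB to a linear power ratio E1/E0."""
    return 10 ** (er_db / 10)

def power_penalty_db(er_linear):
    """Power penalty per equation (1): PP = -10*log10((ER-1)/(ER+1))."""
    return -10 * math.log10((er_linear - 1) / (er_linear + 1))

er = er_db_to_linear(6.0)   # 6-dB ER -> linear ratio of about 3.98
pp = power_penalty_db(er)   # -> power penalty of about 2.2 dB
print(f"ER = {er:.2f}, power penalty = {pp:.2f} dB")
```

Running it for a 6-dB ER reproduces the roughly 2-dB penalty cited above.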

These factors become significant when the measured values from manufacturers differ from what designers measured, and the end user obtains different values than the manufacturer. The resulting discussions on yield and failures are rarely productive and can be remedied by the techniques in the October article.

ER provides a common metric for all users to characterize an optical signal. Minimum values of extinction ratio are called out in nearly all optical standards, including ITU-T G.691, G.957, and G.959.1.5 Most standards reference EIA/TIA OFSTP-4A6 for the definition of ER and associated measurement conditions.

All standards require the use of a filter when measuring ER; the most commonly required is a fourth-order Bessel-Thomson lowpass with a -3-dB bandwidth of three-quarters of the bit rate. The filter closely approximates an integration of the signal, yielding an equivalent value of energy.

Definition of ER and associated challenges

To state a minimum or characteristic value for ER, we need to accurately determine the levels of "0" and "1." Three equations summarize how ER is expressed:

ER(dB) = 10 * log10 (E1/E0)   (2)

ER(%) = 100 * (E0/E1)   (3)

ER = E1/E0   (4)

where E1 is the energy in a nominal logic 1 pulse and E0 is the energy in a nominal logic 0 pulse.7

Equations (2) and (3) are the dominant definitions, likely because dB and % are common measures for other parameters and are easier to compare than a ratio that can take very large numerical values. Note the use of the term energy rather than the more usual power. While power would seem the obvious choice, energy better characterizes the levels because the time interval is specified.
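The three expressions are transformations of the same underlying measurement. A small Python sketch (the helper names are mine) makes the relationships, including the inversion in the percentage form, explicit:

```python
import math

def er_linear(e1, e0):
    """Equation (4): plain ratio of logic-1 to logic-0 energy."""
    return e1 / e0

def er_db(e1, e0):
    """Equation (2): the same ratio expressed in dB."""
    return 10 * math.log10(e1 / e0)

def er_percent(e1, e0):
    """Equation (3): note the inversion -- E0 over E1, as a percentage."""
    return 100 * (e0 / e1)

# A transmitter with 1.0 mW in the "1" level and 0.1 mW in the "0" level:
print(er_linear(1.0, 0.1))   # -> 10.0
print(er_db(1.0, 0.1))       # -> 10.0 (dB)
print(er_percent(1.0, 0.1))  # -> 10.0 (%)
```

For this example all three forms happen to read "10," which shows why stating the units (dB, %, or plain ratio) matters.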

Industry standards vary in the portion of the bit period used. ITU-T G.691 defines ER using values over the full width of the eye. Most other standards specify histograms over the central 40-60% of the bit period. The latter definition is easier to measure because it excludes the significant amount of energy in the bit transitions.

Let's now consider the means to establish the "0" and "1" levels. Figure 1 shows a typical eye diagram, which is used to obtain several parameters including ER.

Standard IEC 61280-2-2 calls for the use of histograms, which characterize the levels and give statistical parameters of the energy between the 40% and 60% points of the eye.8 At low power levels or very poor extinction ratios, the histograms start to overlap; at some point discerning a "0" from a "1" becomes very difficult and results in large errors. For the specified output powers of most modern transceivers this is of less concern, yet it needs to be considered in the uncertainty calculations.
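To illustrate the histogram method, the sketch below (entirely synthetic data and assumed noise levels, for illustration only) draws samples as if from the central 40-60% window of an eye, splits the bimodal distribution at its midpoint, and takes the mean of each cluster as the "0" and "1" levels:

```python
import math
import random
import statistics

random.seed(42)

# Synthetic samples from the 40-60% window of an eye diagram:
# logic "1" around 1.0 mW, logic "0" around 0.1 mW, Gaussian noise.
ones = [random.gauss(1.0, 0.03) for _ in range(5000)]
zeros = [random.gauss(0.1, 0.03) for _ in range(5000)]
samples = ones + zeros

# Split the bimodal histogram at the midpoint between min and max,
# then estimate each level as the mean of its cluster.
threshold = (max(samples) + min(samples)) / 2
e1 = statistics.mean(s for s in samples if s >= threshold)
e0 = statistics.mean(s for s in samples if s < threshold)

er_db = 10 * math.log10(e1 / e0)
print(f"E1 = {e1:.3f} mW, E0 = {e0:.3f} mW, ER = {er_db:.2f} dB")
```

With well-separated levels the split is trivial; as the article notes, at low powers or poor ER the two clusters begin to overlap and this simple midpoint threshold breaks down.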

Factors to consider when designing for ER

Designers and those who validate designs must consider several factors when designing transmitters, including choice of eye width, overshoot, slow edges (undershoot), and lack of a proper filter. Consider each of these while referring to Figure 2, which shows the effect on ER for different transmitter bias voltages.

Recalling that the levels for "0" and "1" are determined by the use of histograms, note that the histograms will have a much wider spread when evaluating the eye for ER over the entire bit period. This flatter histogram is affected by a wide range of bit transitions and more easily results in variations of the perceived values of "0" and "1," further contributing to differences between designers, manufacturers, and users.

Overshoot and the lack of a proper filter similarly affect the histogram, whether measured over the center 20% or the entire bit period. A missing or inadequate filter leaves the higher-frequency components of the signal in the eye, contributing to a higher value of "1" and a lower value of "0" in the central region of the eye. Figure 2 shows that this increases the ER of the unfiltered signal, which can be attractive to a designer struggling to meet compliance specifications; but such a design will often violate another aspect of system design or not work well with other components.

The opposite effect occurs for slow edges or filters with too much rejection. The values for "0" and "1" are closer together than they would be for a cleaner eye, resulting in a poorer ER and more challenging system performance.

The Fibre Channel standard defines an alternative measure called optical modulation amplitude (OMA).9 While it initially seems to offer the same measure of quality, OMA depends on a standard test pattern similar to a square wave and so reflects how a transmitter performs with a typical data pattern differently than ER does. Viewed as a more stringent test of a transmitter, it can penalize a designer using atypical operating conditions.
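Since OMA is simply the difference between the "1" and "0" power levels, it relates algebraically to ER for a given average power: because Pavg = (P1 + P0)/2 and ER = P1/P0, it follows that OMA = 2 * Pavg * (ER - 1)/(ER + 1). A short sketch of that standard relation (the helper name is mine):

```python
def oma_mw(p_avg_mw, er_linear):
    """OMA = P1 - P0, expressed via average power and linear ER.

    Derivation: P_avg = (P1 + P0)/2 and ER = P1/P0, so
    OMA = 2 * P_avg * (ER - 1) / (ER + 1).
    """
    return 2 * p_avg_mw * (er_linear - 1) / (er_linear + 1)

# 1 mW (0 dBm) average power with a linear ER of 10:
print(oma_mw(1.0, 10.0))   # -> ~1.636 mW
```

This is why a transmitter can trade average power against ER while holding OMA constant, one reason the two camps measure "the same" eye differently.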


Extinction ratio has become an important and common, albeit controversial, measure of transmitter quality. Though it is defined and required by many standards, variations in use among designers, manufacturers, and end users continue to plague the industry. These variations diminish or disappear when several design aspects of the transmitter are carefully considered in light (no pun intended!) of the principles outlined in this article. Refer to the follow-on article in the October 2005 issue of Lightwave for specific techniques regarding measurements and calibration.


1. Sung Kee Kim, O. Mizuhara, Y. K. Park, L. D. Tzeng, Y. S. Kim, and Jichai Jeong, "Theoretical and Experimental Study of 10 Gb/s Transmission Performance Using 1.55um LiNbO3-Based Transmitters with Adjustable Extinction Ratio and Chirp," IEEE Journal of Lightwave Technology, Vol. 17, No. 8, pp. 1320-1325, August 1999.

2. Zhuang Li, Yongqi He, Bo Foged Jorgensen and Rune J. Pedersen, "Extinction Ratio Effect for High-Speed Optical Fiber Transmissions," International Conference on Communication Technology, Beijing, China, pp. S35-02-1 to 5, October, 1998.

3. Maxim Integrated Products, Application Note HFAN-2.2.0 entitled "Extinction Ratio and Power Penalty," Rev 0; May 2001.

4. Rajiv Ramaswami and Kumar N. Sivarajan, Optical Networks: A Practical Perspective, Second Edition, Chapter 5, Morgan Kaufmann Publishers, 2002.

5. ITU Series G: Transmission Systems and Media, Digital Systems and Networks:
G.691 on Transmission media characteristics -- Characteristics of optical components and subsystems, December 2003.
G.957 on Digital transmission systems -- Digital sections and digital line system -- Digital line systems, July 1999.
G.959.1 on Digital sections and digital line system -- Digital line systems, December 2003.

6. TIA-EIA Document OFSTP-4A, "Optical Eye Pattern Measurement Procedure," TIA-526-4-A, November 1997.

7. "Measuring Extinction Ratio of Optical Transmitters," Agilent Technologies Application Note 1550-8, January 2001.

8. IEC Document 61280-2-2, "Fibre Optic Communication Subsystem Test Procedures, Part 2-2: Digital systems -- optical eye pattern, waveform, and extinction ratio measurement."

9. "Fibre Channel Physical Interfaces (FC-PI-2)," ANSI Working Draft, March 2005.

Bob Hasenick is product marketing engineer, digital signal analysis, at Agilent Technologies Inc. (Santa Rosa, CA).

