Wednesday, 07/04/2001 6:32:47 PM

Testing... and apparently it passed... long post from Trats site.
Date: Monday, 2 July 2001, at 1:22 a.m.

The bandwidth famine

David Prior, consultant at PBI Media, challenges the argument put forward by many in the industry that there will be a glut of bandwidth to contend with.

Over the past three months the media has concluded that the great seduction is over: the Internet proposition is dead. The understandable fear is that, if the Internet model is failing, the supporting layers of that model will fail too. Certain Wall Street analysts have aggressively argued that we are experiencing a bandwidth glut. By association, share values of network infrastructure owners and suppliers have suffered severe declines. A glut of network capacity would negatively impact investment prospects, and so the market caps of builders and manufacturers of oceanic and terrestrial fibre have tumbled. This fear has generated a plethora of anecdotal and, allegedly, empirical evidence supporting the argument for glut. With issues of context and content in some cases unresolved, such reports nevertheless sell into the investment market, and the cycle begins again.

The issue of glut is misdirected: a glut will not occur in terms of capacity. There may be an argument that a glut pertains to the number of providers of capacity, but with 83 percent of the global population having no ready access to communications networks, and only around 6 percent having advanced Internet access, it seems hasty to prejudge consolidation in the supply sector.

Capacity service providers are typically subject to traditional telecommunications influence and methodology, and in the transition period between point-to-point voice minutes and node-to-node IP routes, traditional business and financial models have little or no relevance. Between 1994 and 1999 telecommunications infrastructure companies deployed a traditional network model to support the evolution and expansion of the Internet. However, widespread adoption of the network, backed by unparalleled financial and intellectual resources, has now created a situation where the traditional infrastructure cannot support the emerging application layer.

This illustrates the critical distinction between old-model telecommunications and new-model communications: the infrastructure must be in place to support the continued technological, economic, social and commercial development of the network. It is no longer the case that capacity and performance can be drip-fed based on what the infrastructure will support. In a common infrastructure environment, capacity underpins the potential for development of the services that utilise it.

As the true promise of the Internet becomes apparent – that the Internet is not an end unto itself but a channel to be used in conjunction with existing outlets – so expectations of the network are changing. At present, corporate users do not find capacity easy to obtain or financially viable to deploy. Commercial website operators such as eBay and Amazon have to use multiple providers in order to attain sufficient capacity to support their enterprise. Users find that the legacy investment of their traditional operator restricts their ability to access the network at greater levels of speed and reliability. The arrival and deployment of a single MP3 distribution application overwhelms university backbones. Capacity is not for life; it's just for Christmas, or for the holiday season, or for the launch of a new lingerie catalogue.
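The MP3 claim above is easy to sanity-check with back-of-envelope arithmetic. A minimal sketch in Python, assuming a 128 kbps encoding rate and an OC-3 (155 Mbps) campus uplink; both figures are illustrative assumptions, not numbers from the article:

```python
# Back-of-envelope: how many concurrent MP3 transfers saturate a campus uplink?
# All inputs are illustrative assumptions, not figures from the article.

MP3_BITRATE_BPS = 128_000        # a typical MP3 encoding rate circa 2001
OC3_CAPACITY_BPS = 155_520_000   # OC-3 line rate, a common campus uplink of the era

concurrent_transfers = OC3_CAPACITY_BPS / MP3_BITRATE_BPS
print(f"~{concurrent_transfers:.0f} concurrent 128 kbps transfers fill an OC-3")
# -> ~1215: a small fraction of a large campus's user base, which is why a
#    single file-sharing application could overwhelm a university backbone.
```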

Unpredictable

IP networks are unpredictable in their applications, their utilisation, and their direction. This erratic nature forces the adoption of a new model: customer-managed services. The customer must be able to take control of the capacity and direction of services deployed, and the provider must ensure that this happens.

At the same time, customers are becoming more and more aware of the falling price-points for IP capacity. At one level this works in the operators' favour: customers spend the same amount of money to make use of greater capacity. At another level, operators are disadvantaged because the market sees the price fall without a parallel increase in demand.

Internet Protocol (IP) operates through two standard subsets: Transmission Control Protocol (TCP) and User Datagram Protocol (UDP). Each has 65,000-plus potential application 'ports' (the 16-bit port field allows 65,536 values) that can be used to deploy an IP-enabled application within the network, globally. This means that there are over 65,000 different applications that can be deployed within the network, each generating demand.

Setting aside unpredictable applications such as Napster, Gnutella, and network gaming, Figure 1 shows forecast growth in demand for three simple applications: email, Instant Messaging, and Streamed Video. The chart illustrates global demand in Mbps for each application, derived by multiplying the number of events per day for each application by the average number of bits each event generates, then dividing by the number of seconds in a day. Although this representation unrealistically distributes demand equally across every second of the day, the smoothed total demand of 1.8 Tbps in 2000 (20.1 Tbps by 2003) indicates that the peak load to be provisioned for will be much higher.
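The demand model behind Figure 1 is simple enough to reproduce. A minimal sketch in Python of the stated calculation; the per-application event counts and event sizes below are illustrative placeholders, since the article does not publish its Figure 1 inputs:

```python
# The article's smoothed-demand model:
#   Mbps = (events per day * bits per event) / seconds per day / 1e6
# Per-application event counts and sizes are illustrative placeholders,
# not the Figure 1 inputs.

SECONDS_PER_DAY = 86_400

applications = {
    # name:              (events per day, bits per event)
    "email":             (10e9,  80_000),   # ~10 KB average message (assumed)
    "instant messaging": (5e9,   4_000),    # ~500 B average message (assumed)
    "streamed video":    (50e6,  180e6),    # ~5 min at ~600 kbps (assumed)
}

total_mbps = 0.0
for name, (events_per_day, bits_per_event) in applications.items():
    mbps = events_per_day * bits_per_event / SECONDS_PER_DAY / 1e6
    total_mbps += mbps
    print(f"{name:>17}: {mbps:>12,.0f} Mbps smoothed demand")
print(f"{'total':>17}: {total_mbps:>12,.0f} Mbps")

# Spreading demand evenly over 86,400 seconds understates peak load, which,
# as the article notes, must be provisioned well above the smoothed figure.
```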

Quantifiable

Given this forecast, why does the glut theory continue to attract credibility? Glut, in this sense, refers to available capacity with little or no quantifiable demand to utilise it. The context of the theory is a clue: the glut perspective is generated primarily by US market analysts and investors, and takes little account of the stage of build and demand in Europe, Asia and Latin America.

The critical issue is the location of the bottleneck. Currently the bottleneck occurs at the user access point. Our prediction is that within two years it will occur at the borders of the national entity. In terrestrial environments bottlenecks will be more easily overcome through the deployment of express wavelengths, optical add-drop multiplexing, and application-defined, colour-based QoS. The provisioning of 432-pair/864-fibre cable systems in conjunction with DWDM and optical switching is taking place, and will continue to extend national backbone services within terrestrial regions where political and cultural acceptance permits.

The bottleneck will, as a result, shift to the shorelines. Bill Carter of Global Crossing, speaking at the East-West Centre conference on Asia-Pacific eCommerce, referred to the present terrestrial-to-oceanic transition as being "like the New Jersey turnpike meeting a dirt-track". Technical limitations confine oceanic deployment to 6-pair fibre cables, so the disparity between terrestrial and oceanic capacity becomes acute. Carter's vision is that terrestrial capacity will be given away whilst value remains in the oceanic sector.

Refocusing Internet content to regional environments will also reduce the impact of the bottleneck. Market-specific content will be hosted in, as well as accessed from, the relevant region, reducing demand on long-haul networks in general.

The market opportunity in the short term, therefore, remains in the provision of local, national and regional infrastructure that is capable of both superseding the traditional telecommunications infrastructure and supporting the foreseeable medium- to long-term demands of the new communications model. In the medium term, the opportunity lies in the provision of reliable, efficient, dynamic network services that funnel demand in the context of the regional market space. Although prices for basic capacity will continue to fall as supply becomes both more widespread and more usable, a process of consolidation and co-operation will allow a stable, diversified revenue model to be created. It is the mindset that allows the creation of this model, backed by knowledgeable, supportive investment, that will be the key to network success in the long term.
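Carter's turnpike-meets-dirt-track disparity can be made concrete. A rough sketch, assuming 80-wavelength DWDM at 10 Gbps per wavelength on every fibre pair; these parameters are illustrative only, and deployed systems of the period varied widely:

```python
# Rough comparison of terrestrial vs. oceanic cable-system capacity.
# The DWDM parameters are illustrative assumptions, not article figures.

WAVELENGTHS_PER_PAIR = 80   # DWDM channels per fibre pair (assumed)
GBPS_PER_WAVELENGTH = 10    # line rate per channel (assumed)

def system_capacity_tbps(fibre_pairs: int) -> float:
    """Aggregate capacity of a cable system, in Tbps."""
    return fibre_pairs * WAVELENGTHS_PER_PAIR * GBPS_PER_WAVELENGTH / 1_000

terrestrial = system_capacity_tbps(432)  # 432-pair / 864-fibre terrestrial system
oceanic = system_capacity_tbps(6)        # 6-pair oceanic cable

print(f"terrestrial: {terrestrial:,.1f} Tbps")        # 345.6 Tbps
print(f"oceanic:     {oceanic:,.1f} Tbps")            # 4.8 Tbps
print(f"ratio:       {terrestrial / oceanic:.0f}:1")  # the turnpike/dirt-track gap
```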

Mindset

Mindset changes are required not only in the approach to market. For the market to fully appreciate the change under way, those who analyse, predict, and forecast market trends need to understand the nature of the market for IP-based services. Traditional forecasting methods, like traditional business models, have no place here. The nature of the IP space is uncertainty: new applications come from nowhere to raise user expectations, increase demand, and change the face of the business.

A failure to appreciate this model leads to a misrepresentation of demand. Forthcoming changes in access-technology throughput, in the device and human user population, and in the application base in common use are not being considered in the overall equation. Such a failure leads to a gap between the expected utilisation – and the build required to support it – and the actual experienced demand. It is our contention that, far from a bandwidth glut, this gap will create a situation where limited supply leads to a reversal of the present decline in bandwidth prices.

Although at face value such a situation would appear to resolve the current market trend, in actuality it would only preserve the present model for Internet infrastructure services and continue to promote the digital divide. What is required is an open acceptance that we are currently at the base of a vertical rise in adoption, utilisation and demand, the upper limits of which are still difficult to determine. It is only through such an acceptance, supported by build activity to meet this demand, that the Internet will be able to continue to evolve. Without sufficient capacity to support emerging applications, new business models, and the growing user and content populations, the Internet will remain as it is at present: a hub-and-spoke model focused on the US, accessed at less than optimal speeds.

Transition

Today's Internet infrastructure is in transition. The emergence of new technology, new commercial interests, and new applications requires that the traditional infrastructure be replaced. Issues of latency, packet loss, bottlenecks, overheads, and traffic volumes have created a situation where the traditional infrastructure cannot cope. This legacy infrastructure must be bypassed or replaced if we are to experience the Internet as a critical commodity service with true global scope.

Figure 2 illustrates the emerging communications 'stack'. It is our belief that the emerging 'Internet II' architecture, layered above the traditional 'Public Internet', will cause the legacy network to drop away. Increased demand and a decreasing tolerance for poor performance, in both the consumer and business Internet models, will tend to telescope the timeframe for this succession. The arrival of new core infrastructure with guaranteed quality of service, coupled with a trend towards 10 Mbps access technology and focused on new colocation facilities, is generating a new 'web above the web'. It is this iteration of the Internet that will promote further, wider adoption and generate unanticipated demand.

rollin'