Performance evaluation is used to gain an understanding of how to make the best use of scarce resources. Storage, memory, processing, and communications bandwidth are all relatively plentiful and inexpensive. What is the next frontier for communications networks and performance evaluation? I will argue that it is power management to achieve cost-effective operation. In the past few years, entirely new network protocols have been developed for battery-powered sensor networks. But what about the existing Internet? Estimates place the Internet as consuming from 2% to 8% of the total electricity produced in the USA, and much of this power consumption is unnecessary. Do our “always on” desktop computers really need to be fully powered up all the time? What can be done to achieve power savings in these computers? The goal is to eliminate unnecessary energy usage by desktop computers in the near future and by networked embedded systems in the longer term. Traffic characterization is the first step toward this goal, and characterization at the inter-flow, intra-flow, and protocol levels is being done to investigate power management. The resulting savings achievable from relatively simple power management schemes are measured in TWh per year, roughly equivalent to the electricity generated by one nuclear power plant. This is cost-effectiveness on a large scale!
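The TWh-per-year scale of the claim can be sanity-checked with a back-of-the-envelope calculation. Every figure in this sketch (number of desktops, idle draw, idle hours) is an illustrative assumption, not a number from the talk:

```python
# Back-of-the-envelope check of the TWh-per-year claim. All figures here are
# illustrative assumptions for the sketch, not measurements from the talk.
desktops = 100e6            # assumed number of "always on" desktops in the USA
idle_power_w = 60.0         # assumed average idle draw per desktop (watts)
idle_hours_per_day = 16     # assumed idle-but-powered hours per day

wasted_twh = desktops * idle_power_w * idle_hours_per_day * 365 / 1e12
# A 1 GW nuclear plant at a 90% capacity factor generates about 7.9 TWh/year.
plant_twh = 1e9 * 8760 * 0.9 / 1e12
print(f"potential savings: {wasted_twh:.1f} TWh/year "
      f"(~{wasted_twh / plant_twh:.1f} nuclear plants)")
```

Even if simple power management recovers only a fraction of this idle waste, the result lands in the TWh range the abstract describes.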
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access. Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
We study modeling approaches for the traffic characteristics of real-time video traffic, including the distribution function of the size of each group of pictures (GoP) in MPEG coding and their short-term correlation. A numerical solution of the steady-state workload distribution is computed for semi-Markovian state models in discrete time. A verification step based on interval arithmetic is able to enclose the numerical result within tight bounds in many cases. We compare the delay and loss performance of data forwarding by a buffered switch, as predicted by the model, with a direct evaluation of publicly available video traces.
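The discrete-time workload of such a buffered switch follows the Lindley recursion W_{n+1} = max(0, W_n + A_n − C). A minimal simulation sketch, with an assumed four-point GoP-size distribution standing in for a fitted one:

```python
import random

random.seed(1)
# Discrete-time Lindley recursion for the workload of a buffered switch:
#   W_{n+1} = max(0, W_n + A_n - C)
# The GoP-size distribution below is an illustrative stand-in for a fitted one.
C = 12                                        # service capacity per slot
def gop_size():
    return random.choice([4, 8, 12, 16])      # assumed discrete GoP sizes

W, hist = 0, {}
for _ in range(100_000):
    W = max(0, W + gop_size() - C)
    hist[W] = hist.get(W, 0) + 1

total = sum(hist.values())
print({w: round(hist[w] / total, 3) for w in sorted(hist)[:4]})
```

The paper's semi-Markov models additionally capture the short-term GoP correlation, which an i.i.d. sampler like this one ignores.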
In a previous work we introduced a multifractal traffic model based on so-called stochastic L-Systems, which were introduced by the biologist A. Lindenmayer as a method to model plant growth. L-Systems are string-rewriting techniques characterized by an alphabet, an axiom (initial string), and a set of production rules. In this paper, we propose a novel traffic model, and an associated parameter fitting procedure, that jointly describes the packet arrival and packet size processes. The packet arrival process is modeled through an L-System whose alphabet elements are packet arrival rates. The packet size process is modeled through a set of discrete distributions (of packet sizes), one for each arrival rate. In this way, the model is able to capture correlations between arrivals and sizes. We applied the model to measured traffic data: the well-known pOct Bellcore trace, a trace of aggregate WAN traffic, and two traces of specific applications (Kazaa and Operation Flashing Point). We assess the multifractality of these traces using Linear Multiscale Diagrams. The suitability of the traffic model is evaluated by comparing the empirical and fitted probability mass and autocovariance functions; we also compare the packet loss ratio and average packet delay obtained with the measured traces and with traces generated from the fitted model. Our results show that our L-System-based traffic model can achieve very good fitting performance in terms of first- and second-order statistics and queuing behavior.
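The string-rewriting mechanism is easy to sketch. The alphabet, rate values, and production probabilities below are illustrative placeholders, not the fitted parameters of the paper:

```python
import random

random.seed(0)
# Sketch of a stochastic L-System traffic model (alphabet, rates, and
# production probabilities below are illustrative, not fitted values).
# Each symbol stands for a packet-arrival rate and rewrites into two symbols.
rules = {
    "L": [(0.6, ("L", "M")), (0.4, ("M", "L"))],   # low rate
    "M": [(0.5, ("L", "H")), (0.5, ("H", "L"))],   # medium rate
    "H": [(0.7, ("M", "H")), (0.3, ("H", "M"))],   # high rate
}
rate = {"L": 10.0, "M": 50.0, "H": 200.0}          # packets/s per symbol

def choose(productions):
    r, acc = random.random(), 0.0
    for p, prod in productions:
        acc += p
        if r < acc:
            return prod
    return productions[-1][1]      # guard against float round-off

def expand(axiom, iterations):
    s = list(axiom)
    for _ in range(iterations):
        nxt = []
        for sym in s:
            nxt.extend(choose(rules[sym]))
        s = nxt
    return s

symbols = expand("M", 8)           # 2**8 = 256 rate symbols
rates = [rate[x] for x in symbols]
print(len(rates), round(sum(rates) / len(rates), 1))
```

Because each rewriting level doubles the string, the resulting rate sequence has structure at every dyadic time scale, which is what gives the model its multifractal flavor.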
In this paper we compare two traffic models based on Markov-modulated Poisson processes (MMPPs) that were designed to capture self-similar behavior over multiple time scales. Both models are constructed by fitting the distribution of packet counts at a number of time scales. The first model is a superposition of MMPPs, where each MMPP describes a different time scale. The second is equivalent to a hierarchical construction process that, starting at the coarsest time scale, successively decomposes MMPP states into new MMPPs to incorporate the characteristics revealed at finer time scales. We evaluate the accuracy of the models by comparing the probability mass function at each time scale, as well as the loss probability and average waiting time in queue, for measured traces and for traces synthesized according to the proposed models. The analysis is based on three measured traffic traces exhibiting self-similar behavior: the well-known pOct Bellcore trace and two traces measured in a Portuguese ISP. Based on the obtained results, we conclude that both Markovian models perform well, and very similarly, in matching the characteristics of the data traces over the relevant time scales. However, one advantage of the hierarchical approach is that the number of states of the corresponding MMPP can be much smaller.
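A single MMPP building block is simple to simulate: a hidden Markov state modulates the Poisson arrival rate in each slot. Parameters below are illustrative, not fitted to any of the traces:

```python
import math
import random

random.seed(2)
# A minimal two-state MMPP in discrete time (parameters are illustrative):
# a hidden Markov state modulates the Poisson arrival rate in each slot.
rates = [1.0, 20.0]            # packets per slot in state 0 / state 1
stay = [0.99, 0.95]            # probability of staying in the current state

def poisson(lam):              # Knuth's method; fine for small lam
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

state, counts = 0, []
for _ in range(50_000):
    counts.append(poisson(rates[state]))
    if random.random() > stay[state]:
        state = 1 - state

mean = sum(counts) / len(counts)
print(round(mean, 2))    # stationary analysis predicts (5/6)*1 + (1/6)*20 ≈ 4.17
```

The models in the paper go further by superposing, or hierarchically decomposing, several such processes so that the bursty behavior appears at every relevant time scale rather than just one.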
In earlier work, we presented a simple measurement-based admission control (MBAC) scheme for a modified Bandwidth Broker framework. In that scheme, real-time (RT) traffic is essentially able to starve non-admission-controlled non-real-time (NRT) traffic. By concentrating only on real-time application requirements, it may be hard or even impossible to use other objectives in admission decisions; some non-real-time applications (e.g., audio streaming) also need a certain minimum bandwidth for proper operation. To fix this problem, we present a solution in which the bottleneck link bandwidth is shared dynamically between real-time and non-real-time traffic. As a second enhancement to our earlier work, we propose the use of coefficients for the requested resources. These coefficients are derived from the price the user is paying for the connection: the requested peak rate is multiplied by the coefficient before being compared with the available bandwidth. The proposed scheme is validated through simulations, and its performance is compared against other admission control schemes. The simulation results show that a network operator can gain more revenue with the proposed scheme.
Web sites are exposed to high rates of incoming requests. Since web sites are sensitive to overload, admission control mechanisms are often implemented; the purpose of such a mechanism is to prevent requests from entering the web server during high loads. This paper presents how admission control mechanisms can be designed and implemented with a combination of queueing theory and control theory. Since web servers behave nonlinearly and stochastically, queueing theory can be used for web server modelling. However, queueing theory offers no mathematical tools for designing admission control mechanisms; control theory contains the needed tools. By analysing queueing systems with control-theoretic methods, good admission control mechanisms can be designed for web server systems. In this paper we model an Apache web server as a GI/G/1 system. Then, we use control theory to design a PI controller, commonly used in automatic control, for the web server. We describe the design of the controller and also how it can be implemented in a real system. The controller has been implemented and tested together with the Apache web server. The server was placed in a laboratory network together with a traffic generator that was used to represent client requests. Measurements in the laboratory setup show how robust the implemented controller is and how well it corresponds to the results of the theoretical analysis.
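The control loop can be sketched in a few lines: the PI controller compares the measured server load with a setpoint and adjusts the probability of admitting new requests. The gains, setpoint, and output bias here are illustrative choices, not the values derived in the paper:

```python
# Minimal discrete-time PI admission controller. Gains, setpoint, and the
# 0.5 output bias are illustrative choices, not the values from the paper.
class PIAdmission:
    def __init__(self, kp=0.5, ki=0.1, ref=0.8):
        self.kp, self.ki, self.ref = kp, ki, ref
        self.integral = 0.0

    def step(self, measured_load):
        """One sampling interval: return the new admission probability."""
        error = self.ref - measured_load
        self.integral += error
        u = 0.5 + self.kp * error + self.ki * self.integral
        return min(1.0, max(0.0, u))          # clamp to a valid probability

ctrl = PIAdmission()
for _ in range(10):                            # server stuck at full load
    p = ctrl.step(1.0)
print(round(p, 2))                             # admission probability drops
```

The integral term is what removes the steady-state error that a purely proportional gate would leave; in a production deployment the clamping would need anti-windup handling as well.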
Broadband satellite constellation networks will be required to carry all types of IP traffic, real-time interactive traffic as well as non-real-time traffic, warranting the need for appropriate QoS for these different traffic flows. In this paper we investigate the advantages of employing constraint-based routing using MPLS in a multilayered hierarchical satellite constellation. Bandwidth availability, or residual bandwidth, on a satellite link is taken into account when setting up routes for high-priority real-time traffic, e.g., VoIP, which is sensitive to delay and jitter. Also, to protect the VoIP traffic from being swamped by bursty best-effort traffic, we propose a separate queue for high-priority traffic. The performance of the prioritized load-balancing routing algorithm on a multilayered satellite network is simulated and analyzed.
Many services have recently been offered based on a peer-to-peer (P2P) communication model. Peers connect to each other to build an overlaid logical network, and available services are communicated over this network. The robustness of the P2P network against frequent peer failure must be considered: when peers leave the network, the stability of the entire logical network is directly affected. Replication of content is one of the most useful techniques for increasing robustness. However, the overall effectiveness of replication is heavily dependent on the topology of the logical network.
Since the topology of networks, including the Internet and P2P overlays, follows a power-law degree distribution, we first investigate the effect of the logical network topology (especially its power-law characteristics) on replication methods. We use a search method called "n-walkers random walk," in which multiple queries move randomly across the P2P logical network, and a "path replication method," which creates replicas at all the intermediate nodes on the path between the requesting and responding nodes. Through simulation experiments, we observed that peers with a large degree (e.g., degree > 10) make four times as many replicas as peers with a small degree. In addition, replicas on large-degree peers are used ten times as frequently as those on small-degree peers.
Based on these observations, we propose a query forwarding method that considers the power-law property of the network topology in order to improve the performance of the P2P service. In our method, queries are forwarded with different probabilities, depending on the degree of each adjacent node. Our simulation results show that the proposed method can greatly improve query performance by exploiting the power-law characteristics: it reduces the average hop count in finding replicas by up to 60% compared with the random forwarding method.
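The degree-dependent forwarding step can be sketched as a weighted random choice. The proportional rule and bias exponent are illustrative assumptions; the paper's exact probability assignment may differ:

```python
import random

random.seed(3)
# Degree-biased query forwarding (the proportional rule and bias exponent are
# illustrative assumptions): pick the next hop with probability proportional
# to the neighbor's degree, so queries drift toward replica-rich hubs.
def next_hop(neighbors, degree, bias=1.0):
    weights = [degree[n] ** bias for n in neighbors]
    r, acc = random.random() * sum(weights), 0.0
    for n, w in zip(neighbors, weights):
        acc += w
        if r < acc:
            return n
    return neighbors[-1]

degree = {"a": 2, "b": 10, "c": 3}
picks = [next_hop(["a", "b", "c"], degree) for _ in range(10_000)]
print(round(picks.count("b") / len(picks), 2))   # close to 10/15 ≈ 0.67
```

Since the simulations show replicas concentrating on high-degree peers, steering walkers toward those peers is exactly what shortens the search.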
The reserved delivery service can help information service providers deliver more consistent performance to their customers through the provisioning of reserved bandwidth on a delivery subnetwork. However, the configuration of a reserved delivery subnetwork is a hard optimization problem with no efficient exact algorithm besides exhaustive search. In this paper, we introduce a reserved delivery subnetwork configuration algorithm based on the idea of the maximum sharing shortest path tree (MSSPT). The proposed algorithm is motivated by the observation that path sharing among multiple flows reduces the cost of reserved delivery subnetworks; thus, a solution close to the optimum is likely to occur in a subnetwork with the maximum degree of flow sharing. The maximum sharing shortest path tree problem can be categorized as a multicriteria shortest path problem. Using an algorithm based on the shortest path network (SPN, a unique subnetwork in which every path s → u is a shortest path in the original graph), we develop an efficient algorithm for the maximum sharing shortest path problem. The proposed algorithm is an approximation algorithm in nature because it takes the MSSPT as the approximate solution to the reserved delivery subnetwork configuration problem. Our experimental results show that the proposed algorithm performs well against an easily computed lower bound while having time complexity comparable to a single-source shortest path algorithm.
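The SPN itself is cheap to construct, which is why the overall complexity stays close to a single-source shortest path run: after one Dijkstra pass, an edge (u, v) belongs to the SPN exactly when dist[u] + w(u, v) == dist[v]. A sketch on a made-up four-node graph:

```python
import heapq

# Sketch of the shortest path network (SPN): after one Dijkstra run from the
# source, edge (u, v) belongs to the SPN iff dist[u] + w(u, v) == dist[v].
# The tiny graph below is illustrative.
graph = {"s": {"a": 1, "b": 1}, "a": {"t": 2}, "b": {"t": 2}, "t": {}}

def dijkstra(g, src):
    dist, pq = {src: 0}, [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in g[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

dist = dijkstra(graph, "s")
spn = sorted((u, v) for u in graph for v, w in graph[u].items()
             if dist[u] + w == dist.get(v, float("inf")))
print(spn)   # both s-a-t and s-b-t survive, leaving room to maximize sharing
```

The MSSPT step then selects, among the shortest paths preserved in this subnetwork, a tree that maximizes the sharing of edges across flows.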
Recent and future communications networks have to provide QoS guarantees for a rapidly growing number of telecommunications services. This can be ensured by an efficient MAC layer. Various communications technologies, such as cellular networks and PLC access networks, apply reservation MAC protocols, which provide good network utilization, important for networks with limited data rate, and ensure the realization of different QoS guarantees. In this investigation, we compare the performance of so-called one-step reservation protocols, represented by slotted ALOHA and active polling, with a hybrid two-step reservation protocol. The protocols are investigated in their extended variants, including piggybacking, signaling over data channels, and a dynamic backoff mechanism. The protocols are implemented within a simulation model representing an OFDMA/TDMA scheme, which is outlined as a suitable solution for PLC networks. Nevertheless, the achieved results can be generalized and interpreted for other multiple access schemes, as well as for other communications technologies. To observe networks with different subscriber behavior, we define a traffic mix representing Internet-based data traffic, which is mainly expected in access networks such as PLC. As expected, the two-step protocol always achieves the best performance of the three. On the other hand, fairness between subscribers belonging to the same traffic class is ensured by all three investigated MAC protocols, although a performance variation can be observed between different traffic classes.
In this paper, the model focuses on available-server management in network environments. The (remote) backup servers are connected by a virtual private network (VPN) and immediately replace failed main servers. A VPN uses a public network infrastructure to connect long-distance servers within a single network infrastructure. The servers can be represented as "machines," so that the system deals with unreliable main machines and a random number of auxiliary spare (remote backup) machines. When the system performs mandatory routine maintenance, the auxiliary machines are used for backups during idle periods. Unlike existing models, in this enhanced model the availability of the auxiliary machines changes at each activation. Analytically tractable results are obtained using several mathematical techniques, and the results are demonstrated in the framework of optimized networked server allocation problems.
In wireless communication systems, each user's signal contributes to the interference seen by the other users. Given limited available battery power, this creates a need for effective and efficient power control strategies. These strategies may be designed to achieve quality of service (QoS) or system capacity objectives, or both. We show how the power control problem is naturally suited to formulation as a noncooperative game in which users choose to trade off between signal-to-interference ratio (SIR) error and power usage. Koskie (2003) studied the static Nash game formulation of this problem. The solution obtained led to a system of nonlinear algebraic equations. In this paper we present a novel distributed power control strategy based on the Newton iteration used to solve the corresponding algebraic equations. That method accelerates the convergence of the Nash game algorithm owing to the quadratic convergence of the Newton iterations. A numerical example demonstrates the efficiency of the new algorithm.
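For context, the simplest distributed power-control iteration is the classic SIR-balancing update, in which each user scales its power by target-SIR over achieved-SIR. This is shown only as a baseline for comparison; the paper's Newton-based Nash scheme is different and additionally trades SIR error against power cost. All gains below are made up:

```python
# Classic Foschini-Miljanic-style distributed power-control iteration, shown
# as a simple baseline (the paper's Newton-based Nash scheme is different and
# also trades SIR error against power cost). All gains here are made up.
G = [[1.0, 0.1], [0.2, 1.0]]   # G[i][j]: gain from user j into receiver i
noise, target = 0.1, 2.0        # receiver noise power and common SIR target

def sir(i, p):
    interference = noise + sum(G[i][j] * p[j] for j in range(len(p)) if j != i)
    return G[i][i] * p[i] / interference

p = [1.0, 1.0]
for _ in range(50):
    # each user only needs its own achieved SIR: a fully distributed update
    p = [p[i] * target / sir(i, p) for i in range(len(p))]
print([round(x, 3) for x in p])   # both users end up meeting the SIR target
```

This baseline converges linearly; the point of the Newton iteration in the paper is to replace such linear convergence with quadratic convergence toward the Nash equilibrium.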
The effect of traffic variability on statistical multiplexing gain is analyzed in a bufferless continuous fluid-flow traffic model. Two different methods are used to account for traffic loss: one based on overflow probability and another based on traffic loss ratio. It is shown that, for both methods, bandwidth savings due to statistical multiplexing gain (SMG) can be significant and increase with increasing traffic variability. It is also shown that SMG and channel utilization increase as the number of composite traffic streams increases and as the traffic loss probability/ratio is lowered.
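For homogeneous on-off sources, the bufferless overflow probability reduces to a binomial tail, which makes the multiplexing gain easy to see. The parameters below are illustrative, not the paper's:

```python
from math import comb

# Bufferless fluid multiplexing of N homogeneous on-off sources (parameters
# illustrative): each source is "on" with probability a and emits at rate r.
# Overflow occurs when more than C/r sources are on simultaneously.
N, a, r, C = 100, 0.2, 1.0, 30.0

def overflow_prob(N, a, on_limit):
    return sum(comb(N, k) * a**k * (1 - a) ** (N - k)
               for k in range(on_limit + 1, N + 1))

p_over = overflow_prob(N, a, int(C / r))
# Peak-rate allocation would require C = N*r = 100; C = 30 already gives a
# small overflow probability, i.e., a bandwidth saving of 70%.
print(f"{p_over:.2e}")
```

Increasing the source variability (larger r with smaller a at the same mean) widens the binomial tail and thus raises the capacity needed for the same loss target, which is the variability effect the paper quantifies.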
In this paper, we propose analytical models to capture the statistical behavior of real traces of MPEG-4 encoded variable bit rate (VBR) video data in a video server. We study the scattered disk storage of video frames and periodic scheduling policies, and we calculate the user disk service rate, buffer size, and the maximum number of simultaneous subscribers by using the Chernoff bound asymptotic technique. We have included a self-similar Gamma model which seems to be very close to the actual data behavior.
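The Chernoff-bound step can be sketched generically: for n i.i.d. streams with per-stream moment generating function M, P(S_n > c) ≤ min over theta of exp(−theta·c)·M(theta)^n. The two-point demand mix and all parameters below are illustrative stand-ins, not the paper's fitted Gamma model:

```python
from math import exp

# Chernoff-bound sketch for admission dimensioning (the two-point demand mix
# and all parameters are illustrative, not the paper's fitted Gamma model):
#   P(S_n > c) <= min_theta  exp(-theta * c) * M(theta)**n
# where M is the moment generating function of one stream's demand.
def chernoff_bound(mgf, n, c, thetas):
    return min(exp(-t * c) * mgf(t) ** n for t in thetas)

mgf = lambda t: 0.7 * exp(1.0 * t) + 0.3 * exp(4.0 * t)   # 1 or 4 Mb/s
thetas = [i / 100 for i in range(1, 100)]                  # crude grid search
bound = chernoff_bound(mgf, n=50, c=120.0, thetas=thetas)
print(f"{bound:.2e}")      # 50 streams, mean demand 95 Mb/s, capacity 120
```

The maximum number of simultaneous subscribers is then the largest n for which this bound stays below the admissible overflow probability.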
We consider a ring in which simultaneous transmission of messages by different stations is allowed, a property referred to as spatial reuse. A ring network with spatial reuse can achieve a network-level throughput much higher than the channel rate. A widely used scheme to achieve spatial reuse is the Buffer Insertion Ring (BIR). However, because non-preemptive priority is given to the ring traffic, the BIR scheme can lead to fairness problems in distributing the ring bandwidth among distinct nodes. In this paper, we propose a novel approach that provides fair access to all nodes and features low complexity. Within each node, the proposed approach allocates a separate queue for every upstream node, and each queue receives its fair share of the ring bandwidth based on an assigned weight value. The performance of the proposed scheme in terms of fairness and average packet delay has been evaluated through both simulations and analysis. The results show that the new scheme, called Source-Based Queuing (SBQ), can provide fairness with less end-to-end delay compared to the BIR scheme.
Synthetic self-similar traffic is essential in computer network simulation for capturing and reproducing actual Internet data traffic behavior. A widely used procedure for generating self-similar traffic is to aggregate On/Off sources whose active (On) and idle (Off) periods exhibit heavy-tailed distributions. This work analyzes the balance between accuracy and computational efficiency in generating self-similar traffic and presents results that can be used to parameterize existing heavy-tailed distributions, such as the Pareto, Weibull, and Lognormal, in a simulation analysis. Our results were obtained through the simulation of various scenarios and were evaluated by estimating the Hurst (H) parameter, which measures the level of self-similarity, using several methods.
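Sampling the heavy-tailed On/Off periods is a one-line inverse transform for the Pareto case. The shape and scale values below are illustrative, not parameterizations recommended by the paper:

```python
import random

random.seed(4)
# Inverse-transform sampling of Pareto On/Off periods (parameters are
# illustrative): shape alpha in (1, 2) gives a finite mean with infinite
# variance, the heavy-tail regime behind self-similar aggregate traffic.
def pareto(alpha, xm):
    u = 1.0 - random.random()          # u in (0, 1], avoids division by zero
    return xm / u ** (1.0 / alpha)

alpha, xm = 1.4, 1.0
on_periods = [pareto(alpha, xm) for _ in range(200_000)]
mean = sum(on_periods) / len(on_periods)
print(round(mean, 2), "theoretical mean:", round(alpha * xm / (alpha - 1), 2))
```

The slow convergence of the sample mean visible here is exactly the accuracy-versus-cost trade-off the paper studies: heavy tails need very long runs before the aggregate exhibits a stable Hurst estimate.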
Traditionally, dynamic load balancing is applied in resource-reserved, connection-oriented networks with a large degree of managed control. Load balancing in connectionless networks is rather rudimentary and is either static or requires network-wide load information. This paper presents a fully automated, traffic-driven dynamic load balancing mechanism that uses only local load information. The proposed mechanism is easily deployed in a multi-vendor environment in which only a subset of routers supports the function.
The Dynamic Localized Load Balancing (DLLB) mechanism distributes traffic based on two sets of weights. The first set is fixed and is inversely proportional to the path cost, typically the sum of reciprocal bandwidths along the path. The second weight reflects the utilization of the link to the next hop along the path and is therefore variable. The ratio of the static weights defines the ideal load distribution; the ratio of the variable weights defines the node-local estimate of the load distribution. By minimizing the difference between the variable and fixed ratios, the traffic distribution is optimal given the available node-local knowledge. This mechanism significantly increases throughput and decreases delay from a network-wide perspective. Optionally, the variable weight can include load information from downstream nodes to prevent congestion on those nodes. The latter function further improves network performance and is easily implemented on top of standard OSPF signaling. The mechanism does not require many node resources and can be implemented on existing router platforms.
The trend in the service architectures developed in telecommunications today is that they should be open, in the sense that they can communicate across the borders of different networks. Instead of each network having its own service architecture with its own applications, all networks should be able to use the same applications. 3GPP, the organization developing specifications for the 3G networks, has specified the Open Service Access (OSA) standard as part of the 3G specification. OSA offers application programming interfaces that enable an application residing outside a network to use the capabilities of the network. This paper analyses the performance of an OSA gateway and examines how overload control can be handled in a way that best satisfies both operators and third parties. There are some guiding principles in the specifications, but many decisions are left to the implementers of application servers and OSA gateways. Proposals for different requirements on an OSA architecture exist, such as a minimum number of accepted calls per second and a constraint on the maximal total delay for an application. Maximal and fair throughput are the priorities from the third parties' point of view, but profit is the main interest from the operator's point of view. Therefore, this paper examines a priority-based proposal for an overload control mechanism that takes these aspects and requirements into account.
A new Web proxy cooperation model is introduced and analyzed that preserves the advantages of cooperative caching in the presence of proxy link capacity variations. The Restricted Broadcast Query (RBQ) cooperation model uses a score table containing dynamic information that describes proxy connectivity. This information is used to redistribute load among proxies, thereby compensating for changes in link capacities. An analytic model was developed to evaluate network congestion effects on alternative Web proxy cooperation mechanisms (CMs). The model was applied to a system of identical, fully connected proxies in order to compare the performance of two common CMs (broadcasting and URL hashing) with that of RBQ.
Emerging delay-sensitive applications on the Internet increase the importance of the quality of service (QoS) parameters of a path for Internet service providers (ISPs) as well as for users. However, it is costly to frequently monitor delays along individual paths between every pair of edge routers in an ISP. The most widely used way of estimating such statistics is to actively send probe packets along each path, despite the wasteful traffic that the probe packets themselves introduce for frequent and accurate estimation. On the other hand, each router can passively observe the local queuing delays experienced at that router. While the mean delay of a path can always be obtained simply by summing the mean delays at the tandem routers along it, other statistics, such as the 90th percentile, cannot be estimated accurately by such a simple-sum scheme because of the dependence among delays at routers on the Internet. In this work, a novel scheme to estimate the QoS parameters of a path is proposed, which combines statistics gathered at each router with data obtained from a small number of samples along the path. For delays, considering an unknown joint discrete distribution of quantized queuing delays at the routers along a path, we find the maximum likelihood estimator of the unknown distribution (under the constraints of the marginal distributions measured at each router) from the samples. Theoretical analysis and numerical simulations indicate that this scheme effectively estimates the delay statistics along a path even with a small number of samples, which allows continual measurement capturing statistics over a broad range of time scales.
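The failure of the simple-sum scheme for percentiles is easy to demonstrate with a toy two-hop model (exponential hop delays are an illustrative assumption). Means add regardless of dependence, but the 90th percentile of the path delay depends on how the hops are correlated:

```python
import random

random.seed(5)
# Why summing per-router percentiles fails (illustrative two-hop model):
# means add regardless of dependence, but the 90th percentile of the path
# delay depends on the correlation between the per-hop delays.
def p90(xs):
    return sorted(xs)[int(0.9 * len(xs))]

n = 100_000
hop1 = [random.expovariate(1.0) for _ in range(n)]
hop2_ind = [random.expovariate(1.0) for _ in range(n)]   # independent hop
hop2_dep = hop1[:]                                       # fully dependent hop

q_naive = p90(hop1) + p90(hop2_ind)                      # "simple-sum" estimate
q_ind = p90([a + b for a, b in zip(hop1, hop2_ind)])
q_dep = p90([a + b for a, b in zip(hop1, hop2_dep)])
print(round(q_naive, 2), round(q_ind, 2), round(q_dep, 2))
```

The naive sum only matches the fully dependent case; with independent hops it overestimates the true path percentile, and intermediate correlations fall in between, which is why the joint distribution has to be estimated from path samples.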
We have started a long-term experiment of end-to-end active measurements along a number of Internet paths. Although distributed measurement infrastructures of this kind have been deployed on the Internet and a number of experiments on them have already been reported, our objective is to explore correlations among various properties of an individual path measured within a period in which the path state does not change, which have not yet been clearly covered. A PC-based measurement system has been developed to measure a set of path properties, in sequence or in parallel, for this purpose. In our preliminary experiment over several Internet paths in Japan, we measure loss (rate and pattern) and delay (RTT and queuing delay) statistics; bottleneck bandwidths (capacity and available bandwidth); and TCP throughput, as well as the end-to-end route (to validate that it does not change). Some interesting correlations (and non-correlations) among those properties are shown, which indicate the potential for efficient and/or reliable measurement of a path property by utilizing the multiple properties measured on the path.
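The pairwise correlation analysis described above can be sketched as follows; the values in the usage example are hypothetical stand-ins for per-run measurements (e.g., loss rate versus mean queuing delay), not data from the experiment.

```python
def pearson(xs, ys):
    """Sample Pearson correlation coefficient between two equal-length
    series of per-run path measurements (e.g., loss rate vs. delay)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5
```

A value near +1 or -1 across runs suggests one property could serve as an efficient proxy for another; a value near 0 suggests the properties must be measured independently.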
This paper demonstrates the existence of considerable dependencies between Web server arrival and service times, as well as strong dependencies within the arrival process. We derive a heavy-traffic stochastic-process limit for Web server performance, under various control policies, that captures these forms of correlations. This includes an analysis of control policies that provide near-optimal expected response times while also maintaining good response time variance properties.
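A minimal way to study response-time mean and variance for a single-server FCFS queue is the Lindley recursion. This is a generic sketch, not the paper's heavy-traffic limit or its control policies, and its Poisson arrivals ignore the arrival-process dependencies the paper emphasizes; the function name is hypothetical.

```python
import random

def simulate_fcfs(n, arrival_rate, service_sampler, seed=1):
    """Simulate per-customer response times of a single-server FCFS queue
    via the Lindley recursion: W_next = max(0, W + S - A)."""
    rng = random.Random(seed)
    w = 0.0                       # waiting time of the current customer
    resp = []
    for _ in range(n):
        s = service_sampler(rng)  # service time
        resp.append(w + s)        # response time = wait + service
        a = rng.expovariate(arrival_rate)  # gap to the next (Poisson) arrival
        w = max(0.0, w + s - a)
    return resp
```

With `arrival_rate=0.5` and unit-mean exponential service, the M/M/1 prediction for the mean response time is 2.0; comparing service distributions with the same mean but different variance illustrates the response-time variance effects the paper's policies are designed to control.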
This paper introduces Extreme Value Theory (EVT) for the analysis of network traffic. EVT allows the development of scientifically and statistically rational procedures for estimating the extreme behavior of random processes. In this paper, we propose an EVT-based procedure to fit a model to a traffic trace. We have performed simulation experiments on real traffic traces, such as video data, to study the feasibility of the proposed method. Our experiments show that the EVT method can be applied to the statistical analysis of real traffic. Furthermore, since only the data above the threshold are processed, the computational overhead is greatly reduced, which indicates that the EVT method could be applied to real-time network control.
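The peaks-over-threshold step of EVT can be sketched with a method-of-moments fit of the Generalized Pareto Distribution to the exceedances; this is a generic illustration, not necessarily the estimator used in the paper.

```python
def gpd_fit_mom(data, threshold):
    """Peaks-over-threshold: fit a Generalized Pareto Distribution (GPD)
    to the exceedances over `threshold` by the method of moments.
    Returns (shape xi, scale sigma, number of exceedances)."""
    exc = [x - threshold for x in data if x > threshold]
    n = len(exc)
    m = sum(exc) / n                              # sample mean of exceedances
    v = sum((e - m) ** 2 for e in exc) / (n - 1)  # sample variance
    xi = 0.5 * (1.0 - m * m / v)                  # MOM shape estimate
    sigma = m * (1.0 - xi)                        # MOM scale estimate
    return xi, sigma, n
```

An exponential-tailed trace yields xi near 0 while a heavy-tailed one yields xi > 0, and only the exceedances are processed, which is the source of the computational saving noted above.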
Since Resilient Packet Ring has been the subject of intense research, it is necessary to study and analyze the performance of the technology. In this paper, based on queuing theory and the M/G/1/K queuing system, the average packet transfer delay under the Darwin preliminary draft is analyzed. The results show that high-priority traffic experiences the lowest delay and that the difference between medium- and low-priority traffic is small; moreover, the larger the network, the smaller this difference. The maximum network throughput is also derived in theory, which is instructive for further promoting the related standard and even for designing networks in practice.
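For intuition, the mean transfer delay of the simpler infinite-buffer M/G/1 queue follows the Pollaczek-Khinchine formula; the paper's M/G/1/K analysis with priorities is more involved, so this is only a simplified sketch.

```python
def pk_mean_delay(lam, es, es2):
    """Pollaczek-Khinchine: mean transfer delay (wait + service) in an
    M/G/1 queue. lam: arrival rate, es: E[S], es2: E[S^2]; needs rho < 1."""
    rho = lam * es
    assert rho < 1, "queue is unstable"
    wq = lam * es2 / (2.0 * (1.0 - rho))  # mean wait in queue
    return wq + es                        # mean transfer delay
```

With lam = 0.5 and unit-mean service, exponential service (E[S^2] = 2) gives a mean delay of 2.0, while deterministic service (E[S^2] = 1) gives 1.5, showing how service-time variance inflates delay.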
In Optical Burst Switched (OBS) networks, the requirements on packet loss probability vary among user classes. In order to meet different packet loss probability demands, besides the traditional methods of supporting Differentiated Services (DiffServ) through resource allocation and contention resolution, the scheduling of control packets should also support DiffServ. A new scheduling strategy, Priority-based Weighted Fair Queuing (PWFQ), is proposed. An equivalent analysis model is also presented to simplify the derivation of the scheduling weight of each class. We further define a parameter, the normalized deviation factor, to evaluate the validity of our analysis model as well as the fairness of the scheduling strategy in supporting DiffServ. Numerical results confirm that it is feasible to analyze a queuing system in which the PWFQ scheduling strategy is adopted with our equivalent model, and that our scheduling strategy performs well in providing fair DiffServ in terms of packet loss probability.
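A weighted fair queuing discipline of the general kind PWFQ builds on can be sketched with per-class virtual finish tags; this illustrates plain WFQ for a fully backlogged system, not the proposed PWFQ strategy or its equivalent model, and all names are hypothetical.

```python
import heapq

def wfq_schedule(queues, weights):
    """Weighted fair queuing sketch for a fully backlogged system:
    packet k of class c gets a virtual finish tag F = F_prev[c] + size/weight[c],
    and packets are served in increasing tag order."""
    heap = []
    last_f = {c: 0.0 for c in queues}
    for c, sizes in queues.items():          # stamp every backlogged packet
        for k, size in enumerate(sizes):
            last_f[c] += size / weights[c]
            heapq.heappush(heap, (last_f[c], c, k))
    order = []
    while heap:
        _, c, k = heapq.heappop(heap)        # serve the smallest tag first
        order.append((c, k))
    return order
```

Giving one class twice the weight of another makes its equal-size packets accumulate finish tags half as fast, so it receives roughly twice the service share, which is the lever a priority-based variant would tune per loss-probability class.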
In this paper, we propose a new threshold-based mixed-assembly technique with QoS support in optical burst switched networks. The most striking characteristic of the mixed-assembly policy is that both the low and high packet classes are aggregated into one burst simultaneously. Once contention occurs, there is an overlap between the tail of the earlier-arriving burst and the head of the contending burst. Combining this with the burst segmentation technique, we drop the tail of the earlier-arriving burst, which is made up mainly of low-priority packets, so that the packet loss probability of the high packet classes is guaranteed. Simulation results show that the proposed burst assembly scheme performs well in terms of performance metrics such as the average packet loss probability and the packet loss probability of the high class of traffic.
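The mixed-assembly idea, high-class packets at the burst head and low-class packets at the tail so that a contention-driven tail drop removes mostly low-priority data, can be sketched as follows; the helper names are hypothetical and the sketch omits the assembly threshold logic.

```python
def assemble_burst(high_pkts, low_pkts):
    """Mixed assembly: place high-class packets at the head and low-class
    packets at the tail of a single burst."""
    return list(high_pkts) + list(low_pkts)

def drop_tail(burst, overlap):
    """On contention, segment the earlier burst by dropping its last
    `overlap` packets, which by construction are mostly low-class."""
    return burst[:len(burst) - overlap] if overlap > 0 else burst
```

In an example with two high-class and three low-class packets, an overlap of two packets drops only low-class data, leaving the high-class head intact.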
In the current Internet, most traffic is transmitted by TCP (Transmission Control Protocol). In our previous work, we proposed a modeling approach that treats the entire network, including the TCP congestion control mechanisms operating at source hosts and the network seen by the TCP connections, as a single feedback system. However, that analytic model is limited to a simple network in which all TCP connections have identical propagation delays. In this paper, we therefore extend our analytic approach to a more generic network in which multiple TCP connections are allowed to have different propagation delays. We derive the packet loss probability in the network, and the throughput and average round-trip time of each TCP connection in steady state. Through several numerical examples, we quantitatively investigate how fairness among TCP connections degrades when multiple TCP connections with different propagation delays share a single bottleneck link.
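For intuition about RTT-induced unfairness, the standard square-root TCP throughput approximation (not the paper's feedback-system model) combined with Jain's fairness index shows how a longer propagation delay depresses a connection's share of the bottleneck:

```python
def tcp_throughput(mss, rtt, p):
    """Square-root approximation of steady-state TCP throughput:
    roughly MSS * sqrt(3/(2p)) / RTT for loss probability p."""
    return mss * (1.5 / p) ** 0.5 / rtt

def jain_fairness(xs):
    """Jain's fairness index over per-connection throughputs;
    1.0 means a perfectly equal allocation."""
    return sum(xs) ** 2 / (len(xs) * sum(x * x for x in xs))
```

With the same loss probability at the shared bottleneck, a connection with a 200 ms RTT attains a quarter of the throughput of one with a 50 ms RTT, and the fairness index drops below 1.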