Research Areas

Although some of our research is rather generic in nature, we spend most of our time modelling and simulating concrete systems of immediate relevance to BT, where a technical improvement would be of immediate financial advantage. At present the main research areas are as follows.

Contents

  Performance of Web Servers
  Risk Management and Capacity Planning
  Topology of networks
  Random Boolean networks
  Behaviour of round-trip times in the internet
  Simultaneous Diophantine approximation
  Stability of Generalised Packet Switched Networks
  Performance of CDMA Based Access Systems
  Access Control for Wireless Multimedia Applications
  Phase transitions in data networks
  Models for Internet Browsing
  Evolutionary Game Theory
  Adaptive Response Model
  Simulating distributed adaptive filing systems
  Performance Analysis of Cache Networks
  Other previous work

Performance of Web Servers

The essential elements of any Web server are server hardware, server software, and an interconnection to the Internet. The overall performance of a Web server depends on the interrelated effects of these three elements, as well as on network bandwidth, customer modem speed, file-size distribution, round-trip times and caching characteristics, to mention just a few.

In spite of the phenomenal growth in Web-based services, the performance characteristics of Web servers are still rather poorly understood. For most Web server arrangements there is only narrow empirical evidence of their performance, and quantitative models are definitely lacking. It is a major challenge to develop quantitative models that capture all the essential interrelationships between the variables contributing to a server's performance.

In parallel to the analytical work we have developed a toolbox which simulates the effects of all the essential variables that impact Web server performance. The toolbox allows us to identify bottlenecks that emerge under heavy load and to track how these move from one location to another as the workload or the performance parameters are changed. It provides an invaluable guide to finding the optimal Web server configuration, given some knowledge of the file request distribution, Internet load, file-size distribution, number of server threads, number of input-output buffers, and other relevant variables.
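The toolbox itself is not reproduced here, but a minimal Python sketch of the kind of simulation involved may be helpful: a single server with a fixed pool of worker threads (an M/M/c queue) fed by Poisson request arrivals, showing how the mean response time grows as the load approaches the capacity of the thread pool. All parameter names and values below are illustrative assumptions.

  import heapq, random

  def simulate_server(arrival_rate, service_rate, n_threads, n_requests, seed=1):
      """Toy Web-server model: Poisson request arrivals, exponential service
      times, n_threads identical worker threads, FIFO queueing.
      Returns the mean response time (waiting + service)."""
      random.seed(seed)
      t, arrivals = 0.0, []
      for _ in range(n_requests):
          t += random.expovariate(arrival_rate)
          arrivals.append(t)
      free_at = [0.0] * n_threads        # time at which each thread next becomes idle
      heapq.heapify(free_at)
      total = 0.0
      for arr in arrivals:
          earliest = heapq.heappop(free_at)
          start = max(arr, earliest)     # wait if all threads are busy
          finish = start + random.expovariate(service_rate)
          heapq.heappush(free_at, finish)
          total += finish - arr
      return total / n_requests

  # Illustrative numbers: 4 threads, each serving 100 requests/s on average.
  for lam in (200, 300, 380):
      print(lam, "req/s ->", round(simulate_server(lam, 100.0, 4, 50000), 4), "s mean response")

Replacing the exponential service times with a measured file-size distribution, or limiting the number of input-output buffers, would let one explore the kind of bottleneck movement described above.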

Our present activities focus on establishing a connection between the results generated by the toolbox and those predicted by our analytical models.

If you want to read a bit more on this subject click here or contact: Dr. Sverrir Olafsson or download Complexity Research Group papers on this subject:

Back to top of the page



More Information on Web server work

Back to top of the page



Risk Management and Capacity Planning

Our work on risk management has so far focused on the question of how companies requiring an unknown amount of future network capacity for their communications and data services can hedge the uncertainties of their position by entering into a futures contract or by buying a call option on capacity. Similarly, companies with an excess of capacity installed on their networks can take a short position and sell futures or call options on their capacity. They might also consider buying puts on capacity they expect not to require in the future, or on capacity whose price they expect to fall faster than the market anticipates. It is the simultaneous existence of market participants with complementary interests and market positions that provides the main condition for active trading in capacity.

We have developed analytical tools which enable us to fix the premium for options (calls or puts) on capacity. The approach is based on the principles of portfolio dominance and requires a liquid market in capacity that is free of arbitrage opportunities. That is of course not yet the case, but all the indicators are that the near future will bring efficient markets trading capacity and bandwidth along with other commodities like copper, coffee and timber. The effect on BT is likely to be huge, but a timely understanding of events and proper preparation will turn these developments into major opportunities.
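The portfolio-dominance approach itself is not reproduced here, but as a point of reference the standard no-arbitrage Black-Scholes formula shows how a call or put premium on a unit of capacity could be fixed once a liquid, arbitrage-free market exists; all the numbers below are purely illustrative.

  from math import log, sqrt, exp, erf

  def norm_cdf(x):
      return 0.5 * (1.0 + erf(x / sqrt(2.0)))

  def black_scholes(S, K, T, r, sigma, kind="call"):
      """European option premium under Black-Scholes assumptions.
      S: current price of a capacity unit, K: strike price,
      T: time to expiry (years), r: risk-free rate, sigma: price volatility."""
      d1 = (log(S / K) + (r + 0.5 * sigma * sigma) * T) / (sigma * sqrt(T))
      d2 = d1 - sigma * sqrt(T)
      if kind == "call":
          return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
      return K * exp(-r * T) * norm_cdf(-d2) - S * norm_cdf(-d1)

  # Illustrative only: capacity priced at 100, strike 110, six months to expiry, 40% volatility.
  print("call premium:", round(black_scholes(100.0, 110.0, 0.5, 0.05, 0.40, "call"), 2))
  print("put premium: ", round(black_scholes(100.0, 110.0, 0.5, 0.05, 0.40, "put"), 2))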

If you want to read a bit more on this subject click here or contact: Dr. Sverrir Olafsson or download Complexity Research Group papers on this subject:


More Information on risk management work

Recently, strong evidence has emerged that traditional mechanisms for acquiring network capacity are inadequate in modern telecommunications and Internet markets. These developments are primarily driven by globalization, deregulation and an increasing supply of capacity on networks with global reach. Capacity can be acquired by anyone, at any time and anywhere in the world. This increasing supply has had a strong impact on the price of bandwidth, making it converge towards a universal market price. This situation is in strong contrast to the conventional acquisition mechanism based on bilateral agreements between national and multinational carriers. These developments bring new risks to carriers and service providers, but also new financial opportunities and instruments which enable them to manage these risks. Our work has focused on analysing this risk, in particular how it can be quantified and ultimately controlled, and on the implications of these developments for incumbents and entrants in the rapidly developing telecommunications and Internet markets.

Network operators and service providers alike, requiring access to bandwidth, face considerable potential risks in their everyday operations. A network operator may have installed a huge amount of capacity on its network, some of which may remain idle for long and continuous periods of time. Similarly, a service provider faces uncertainty about the bandwidth it will need in the future to provide a high-quality service to its customers. This exposes both the network operator and the service provider to a risk which, given the right market environment, could be managed. Traditionally, capacity has not been viewed as a commodity. This is partly because the infrastructure required for trading capacity has not been developed, and partly because of the nationalised monopoly mentality which has dominated the telecoms market for many years. In view of the changing regulatory environment and increased competition and globalisation in telecoms and Internet services, this may be about to change dramatically.

Provided that the right infrastructures are in place, network operators and service providers may be in a position to hedge some of the risks they are exposed to in their everyday operations. This requires that they have the opportunity to trade the capacity they already possess or may require in the future. What they essentially want to trade is their risk exposure. That, however, requires other participants in the market willing to take on their risk for a fee. This situation is very likely to arise, but trading platforms have to be developed to enable standardised and liquid trading in the capacity market.

Conventionally, network operators and service providers have traded capacity or bandwidth by means of bilateral contracts or leasing schemes fixed over some period of time, possibly years, often at a price fixed for the whole period. Since prices for capacity have been falling rapidly in recent years, long-term fixed-price agreements have tended to be unfavourable to the long position, that is the buyer. Contracts have therefore increasingly been subjected to regular updates to reflect recent price movements. However, the uncertainties resulting from future price movements are not appropriately managed by long-term contracts, even if they are regularly modified. Only an active spot market can provide market players with the opportunity to buy capacity at the correct market price. Recent years have in fact seen the emergence of a number of exchanges which enable network operators and service providers connected to the exchange to buy and sell capacity at displayed rates. More recently, exchanges have opened which allow market players to buy and sell capacity on a minute-to-minute basis. This development is very likely to accelerate in the coming months and years.

Back to top of the page



Topology of networks

There are many interesting mathematical questions raised by the simple question: 'What is a good design for the topology of a network?'

In the last year there has been an explosion of interest amongst theoretical physicists in so-called 'small-world' networks, with more than 40 papers published on the subject. These networks are specially constructed (with some degree of randomness) so that the degree (the number of edges incident on each node) is small, while at the same time the diameter (the mean shortest path between any pair of vertices) is also small. The question thus arises as to whether such networks are of practical use in real communication networks; that is, whether the property of small diameter (rather than, say, large capacity) is relevant.

We have therefore made a study of such small-world networks, with some new features: small-worlds based on higher-dimensional graphs, hierarchical small-worlds, traffic distribution of small-world networks, and a comparison with older techniques like Cayley graphs.
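A minimal sketch of the basic small-world effect, using the networkx library as an assumed dependency: rewiring a small fraction of the edges of a ring lattice leaves the degree unchanged but sharply reduces the mean shortest path while keeping the clustering high.

  import networkx as nx

  # Watts-Strogatz construction: n nodes on a ring, each joined to its k nearest
  # neighbours, then each edge rewired at random with probability p.
  n, k = 1000, 6
  for p in (0.0, 0.01, 0.1):
      G = nx.connected_watts_strogatz_graph(n, k, p, seed=1)
      print(f"p={p:<5}  mean shortest path={nx.average_shortest_path_length(G):6.2f}"
            f"  clustering={nx.average_clustering(G):.3f}")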

If you want to read a bit more on this subject click here or contact: Dr. Keith Briggs or download Complexity Research Group papers on this subject:

Back to top of the page



Random Boolean networks

These are networks of interacting nodes, each node taking the value +1 or -1, with an updating rule causing some nodes to flip their state at each time step. We are studying some generalizations in which the topology also undergoes random changes, as a model of a communication network whose links may or may not be busy. Such models show an interesting scaling behaviour with respect to size, which we are studying. We are also looking at Boolean analogs of Lyapunov exponents to characterize the dynamics of such networks.
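A minimal sketch of such a network (with fixed topology, i.e. without the link dynamics studied in the project) and of a damage-spreading measure that plays the role of a Boolean Lyapunov exponent: two copies of the system differing in a single node state are run in parallel and the normalised Hamming distance between them is tracked. Sizes, connectivities and run lengths are illustrative.

  import random

  def make_rbn(n, k, rng):
      """Each node reads k randomly chosen inputs and applies a randomly chosen
      Boolean function of them; node states are +1 or -1."""
      inputs = [rng.sample(range(n), k) for _ in range(n)]
      tables = [[rng.choice((-1, 1)) for _ in range(2 ** k)] for _ in range(n)]
      return inputs, tables

  def step(state, inputs, tables):
      new = []
      for i in range(len(state)):
          idx = 0
          for j in inputs[i]:                  # encode the input pattern as an integer
              idx = (idx << 1) | (state[j] == 1)
          new.append(tables[i][idx])
      return new

  def damage(n=200, k=2, steps=100, seed=0):
      """Normalised Hamming distance between two trajectories whose initial
      states differ in a single node -- a crude Boolean 'Lyapunov' measure."""
      rng = random.Random(seed)
      inputs, tables = make_rbn(n, k, rng)
      a = [rng.choice((-1, 1)) for _ in range(n)]
      b = list(a)
      b[0] = -b[0]
      for _ in range(steps):
          a, b = step(a, inputs, tables), step(b, inputs, tables)
      return sum(x != y for x, y in zip(a, b)) / n

  for k in (1, 2, 3, 4):          # k=2 is the classical frozen/chaotic boundary
      print("k =", k, " damage after 100 steps:", damage(k=k))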

If you want to read a bit more on this subject contact: Dr. Keith Briggs or download Complexity Research Group papers on this subject:

Back to top of the page



Behaviour of round-trip times in the internet

We are gathering a large data set of round-trip times to several sites all over the world and developing vector autoregressive models to characterize the time and space correlation behaviour of the round-trip times.
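A minimal sketch of the model-fitting step, with synthetic data standing in for the measured round-trip times (numpy assumed): a VAR(1) model x_t = c + A x_{t-1} + noise is fitted by ordinary least squares, and the off-diagonal entries of A indicate correlation between sites.

  import numpy as np

  def fit_var1(X):
      """Least-squares fit of x_t = c + A x_{t-1} + noise, where X is a
      (T, d) array of round-trip times to d sites.  Returns (c, A)."""
      Y = X[1:]
      Z = np.hstack([np.ones((len(X) - 1, 1)), X[:-1]])
      B, *_ = np.linalg.lstsq(Z, Y, rcond=None)
      return B[0], B[1:].T

  # Synthetic stand-in for RTT measurements to 3 sites (milliseconds).
  rng = np.random.default_rng(0)
  A_true = np.array([[0.6, 0.1, 0.0], [0.0, 0.5, 0.2], [0.1, 0.0, 0.7]])
  X = np.zeros((2000, 3)); X[0] = 50.0
  for t in range(1, 2000):
      X[t] = 20.0 + A_true @ X[t - 1] + rng.normal(0.0, 2.0, 3)
  c, A = fit_var1(X)
  print("intercept:", np.round(c, 1))
  print("coefficient matrix:\n", np.round(A, 2))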

If you want to read a bit more on this subject contact: Dr. Keith Briggs or download Complexity Research Group papers on this subject:

Back to top of the page



Simultaneous Diophantine approximation

This is the classical problem of simultaneously approximating a set of given irrational numbers by rational numbers with a common denominator. There have been some recent developments in this old field, which we have implemented and intend to apply to problems in dynamical systems, pulse parameter estimation, and the breaking of cryptographic codes.
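The recent developments referred to above are not reproduced here, but a brute-force sketch makes the problem concrete: among all common denominators q up to some bound, find the one minimising the worst distance of any q·alpha_i from an integer.

  from math import sqrt, pi, e

  def dist_to_int(x):
      return abs(x - round(x))

  def best_common_denominator(alphas, Q):
      """Among q = 1..Q, find the q minimising max_i || q * alpha_i ||,
      the worst distance of any q*alpha_i from an integer."""
      best = None
      for q in range(1, Q + 1):
          err = max(dist_to_int(q * a) for a in alphas)
          if best is None or err < best[2]:
              best = (q, [round(q * a) for a in alphas], err)
      return best

  # Illustrative: approximate sqrt(2), pi and e with a single denominator.
  q, numerators, err = best_common_denominator([sqrt(2), pi, e], 100000)
  print("q =", q, " numerators =", numerators, " worst error =", err)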

If you want to read a bit more on this subject contact: Dr. Keith Briggs or download Complexity Research Group papers on this subject:

Back to top of the page



Stability of Generalised Packet Switched Networks

Summary

We study the dynamic behaviour of multiple access packet switched networks, taking system instability into account. We have focused on three issues:

  1. how system instability arises;
  2. how the system can be described mathematically and how solutions can be found;
  3. which system parameters most strongly influence the dynamics of the system.

We have used a Poisson process and a two-dimensional Markov process to study the system and presented numerical results on throughput and delay. We then introduced equilibrium point analysis (EPA) as an alternative method for studying more complex systems where multidimensional Markov processes may be required. We use EPA to predict the stability of these systems.

If you want to read a bit more on this subject contact: Dr. Xuanye Gu or download Complexity Research Group papers on this subject:

Details

Many access protocols have been proposed to date. One of the important issues in the design of such access networks is how multiple users can effectively share a common channel while keeping the system operating in a stable region. Traffic characteristics and the system load determine the performance of a multiple access protocol, so appropriate methods are needed for evaluating system performance and designing the protocols. The objective of this work has been to develop methods to analyse packet switched networks and to apply them to current wireless spread spectrum systems.

Exact solutions for the performance analysis of a protocol are generally difficult to obtain. However, analytical techniques are still possible provided that some assumptions and approximations are made; such assumptions and approximations are often adopted for contention-based (random access channel) protocols. The performance of a system under a contention-based protocol depends on how heavily the channel is loaded by traffic from all users. Gridlock may occur if the load exceeds its limit, and hence instability arises. Contention-based protocols are not only used as multiple access protocols in their own right; they also serve as an essential component of many of today's more complex multiple access protocols. Therefore, in order to evaluate the performance of various system protocols, we have developed techniques for analysing contention-based protocols.

Stochastic models can be used to evaluate the performance of multiple access protocols. Three kinds of model are often adopted: (a) dynamic flow models, (b) Poisson models, and (c) Markov models. We have used Poisson and Markov models. We have noticed that the Poisson model has limitations for the analysis of stability; to enable stability analysis it is necessary to include a Markov model. We have modelled the access protocol as a two-dimensional discrete Markov chain. More complex protocols can be modelled as multi-dimensional Markov chains, the number of dimensions required depending on the complexity of the protocol. It is difficult to obtain closed-form solutions for such complicated protocols, because the system throughput depends on the (multi-dimensional) system state. We therefore need approximate analytical techniques for multi-dimensional Markov chains. One solution comes from another project, which solved the state transition probabilities for a CDMA system in which multiple packets are permitted in each time slot. Another option is equilibrium point analysis (EPA), which does not require the transition probabilities for multiple states to be calculated. EPA is a fluid-type approximation which applies only to systems with a steady state; it assumes that the system always operates at an equilibrium point, so the state transition probabilities need not be calculated. An equilibrium point can be obtained by numerically solving a set of simultaneous non-linear equations. Alternatively, a desired equilibrium point can be chosen from throughput-delay considerations and used to maximise the system performance. An equilibrium point is defined as a point at which the channel input rate is equal to the channel output rate. If there is exactly one such point the system is said to be stable; otherwise it is said to be unstable.

We use both the Poisson model and the Markov model. The Poisson model can be used to analyse throughput and delay, but it must be pointed out that Poisson analysis cannot treat the stability problem. This is because Poisson analysis assumes an infinite population, Poisson input and statistical equilibrium, and these assumptions are only approximations to real systems. It can be shown that an access system with an infinite population of users is always unstable, whereas in real systems the population is finite and the system may be either stable or unstable. In Poisson analysis the channel traffic is modelled as a homogeneous Poisson process. In reality, the channel traffic consists of both new packet transmissions ("forward" traffic) and retransmissions ("backward" traffic) and cannot be modelled as a homogeneous Poisson process, particularly if the random retransmission delay is not large. Therefore, the Markov model is used to study the stability problem.
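The protocols studied in this project are not reproduced here, but the equilibrium-point idea can be sketched for textbook finite-population slotted ALOHA: with n of M users backlogged, the channel attempt rate is G(n), the approximate success (output) rate is G(n)·exp(-G(n)), and the equilibrium points are where this equals the new-packet input rate; their number indicates stability, as described above. All parameter values are illustrative assumptions.

  from math import exp

  def equilibrium_points(M=100, sigma=0.003, nu=0.1):
      """Toy EPA for finite-population slotted ALOHA: a thinking user sends a
      new packet with probability sigma per slot, a backlogged user retries
      with probability nu.  With n users backlogged the attempt rate is
      G(n) = (M - n)*sigma + n*nu, the output rate is roughly G(n)*exp(-G(n))
      and the input rate is (M - n)*sigma.  Equilibrium points are where the
      drift (input - output) changes sign."""
      points, prev = [], None
      for n in range(M + 1):
          G = (M - n) * sigma + n * nu
          drift = (M - n) * sigma - G * exp(-G)
          if prev is not None and (prev > 0) != (drift > 0):
              points.append(n)
          prev = drift
      return points

  for nu in (0.02, 0.1, 0.3):                 # illustrative retransmission probabilities
      eq = equilibrium_points(nu=nu)
      verdict = "stable" if len(eq) == 1 else "unstable (multiple equilibria)"
      print(f"retry probability nu={nu}: equilibria near n={eq} -> {verdict}")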

Back to top of the page



Performance of CDMA Based Access Systems

Summary

We investigate methods and tools for the performance analysis of CDMA based access systems. The performance measures include the probability of bit error, the probability of packet success, network throughput, delay and stability. Previously, no methods were readily available for measuring the performance of CDMA systems; moreover, it was difficult to compute an accurate bit error probability in the presence of multiple access interference. We have solved the difficulty of obtaining the accurate bit error probability and developed a set of methods and tools for CDMA systems.

We have also developed a graphical user interface that brings together all of the above computations. Users can view and simulate the probability of bit error, packet success probability, network throughput and delay based on the most common parameters used in a packet CDMA network: the PN sequence length N, packet length L, error correcting capability t, number of users K and offered load G.
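The project's more accurate bit-error calculation is not reproduced here, but the standard Gaussian approximation for multiple access interference, combined with the usual packet-success formula, illustrates how the parameters N, L, t and K listed above enter such calculations.

  from math import comb, erfc, sqrt

  def ber_gaussian_approx(N, K):
      """Standard Gaussian approximation for the bit error probability of a
      DS-CDMA system with processing gain N and K simultaneous users:
      p_b ~ Q(sqrt(3N / (K - 1)))."""
      if K <= 1:
          return 0.0
      x = sqrt(3.0 * N / (K - 1))
      return 0.5 * erfc(x / sqrt(2.0))        # Q(x)

  def packet_success(p_b, L, t):
      """Probability that a packet of L bits is received correctly when up to
      t bit errors can be corrected (bit errors assumed independent)."""
      return sum(comb(L, i) * p_b ** i * (1 - p_b) ** (L - i) for i in range(t + 1))

  N, L, t = 127, 1024, 10                     # illustrative values
  for K in (5, 10, 20, 40):
      p_b = ber_gaussian_approx(N, K)
      print(f"K={K:3d}  bit error prob={p_b:.2e}  packet success prob={packet_success(p_b, L, t):.4f}")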

If you want to read a bit more on this subject contact: Dr. Xuanye Gu or download Complexity Research Group papers on this subject:

Back to top of the page



Access Control for Wireless Multimedia Applications

Summary

Traditional medium access control protocols designed for a single type of traffic provide only a simple access mechanism, and are therefore not efficient for multimedia traffic in terms of resource allocation and performance. This work investigates scheduling algorithms that schedule data packets according to their characteristics and requirements, thereby achieving fair and best possible performance.
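The scheduling algorithms investigated here are not reproduced, but the general idea of serving packets according to their traffic class can be sketched with a standard deficit-round-robin scheduler; the class names and quanta below are assumptions.

  from collections import deque

  class DeficitRoundRobin:
      """Each traffic class receives a per-round quantum (its weight in bytes)
      and may send packets as long as their sizes fit within its accumulated
      deficit; classes with larger quanta get a larger share of the link."""
      def __init__(self, quanta):
          self.quanta = quanta                          # class name -> bytes per round
          self.queues = {c: deque() for c in quanta}
          self.deficit = {c: 0 for c in quanta}

      def enqueue(self, cls, size):
          self.queues[cls].append(size)

      def next_round(self):
          sent = []
          for cls, q in self.queues.items():
              if not q:
                  self.deficit[cls] = 0                 # empty queues keep no credit
                  continue
              self.deficit[cls] += self.quanta[cls]
              while q and q[0] <= self.deficit[cls]:
                  size = q.popleft()
                  self.deficit[cls] -= size
                  sent.append((cls, size))
          return sent

  # Illustrative classes: voice gets a larger share of the link than bulk data.
  drr = DeficitRoundRobin({"voice": 600, "video": 1500, "data": 300})
  for _ in range(20):
      drr.enqueue("voice", 200); drr.enqueue("video", 1000); drr.enqueue("data", 1500)
  for r in range(3):
      print("round", r, drr.next_round())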

If you want to read a bit more on this subject contact: Dr. Xuanye Gu or download Complexity Research Group papers on this subject:

Back to top of the page



Phase transitions in data networks

Complex systems consisting of a large number of interacting components can display interesting and unexpected temporal and spatial effects. A temporal effect which has received some attention in recent years is the emergence of a type of noise common in a large number of extended multi-component systems. This is frequently accompanied by the emergence of scale-invariant spatial structures demonstrating a type of self-organisation resulting from the interaction between the system components. It is particularly interesting that these spatial and temporal structures often result without any fine tuning of the system parameters. In this case one says that the system has evolved towards a state of self-organised criticality.

The use of statistical mechanics

Statistical physics has successfully been applied to the analysis of multi-component systems where tracing every single component is not possible. Usually one reduces the problem to a few essential degrees of freedom which adequately capture the aggregate properties of the system at a macroscopic level. This is equivalent to working with expectation values evaluated under some probability distribution assumed to describe the system's properties at the microscopic level. The macroscopic dynamics then describes the temporal evolution of these aggregated quantities, or expectation values.

Application to data networks

Modern data networks have many properties which make them amenable to analysis by the methods of statistical mechanics. When analysing the flows on extended data networks one is faced with the same kind of problem, i.e. at what level of granularity should the modelling process start. Over the years, different approaches have been developed, ranging from very low-level packet analysis to the construction of macroscopic flows using methods borrowed from hydrodynamics. The real challenge is to relate the two descriptions at different granularities by constructing appropriate mappings between them.

In this work we analyse the flow of data in two directions over a system of cross-connected buffers. We therefore have two types of data, flowing from left to right and from top to bottom on a two-dimensional array of buffers. The rules for moving data from an occupied buffer to an empty one are deterministic. Depending on the total amount of data in the buffer system, the flow across the system can lead to congestion, which can be characterised in terms of surprisingly simple patterns. We identify the spontaneous occurrence of congestion with the emergence of extended, connected clusters of occupied buffers. This observation enables us to undertake performance analysis in terms of percolation theory.
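A toy version of such a two-species buffer lattice (a deliberately simplified sketch, not the model used in this work): rightward and downward data occupy single-packet buffers on a periodic square array, the two species move alternately into empty neighbouring buffers, and the size of the largest connected cluster of occupied buffers serves as a crude congestion indicator. The grid size, densities and run length are illustrative.

  import random
  from collections import deque

  def make_grid(L, density, seed=0):
      """L x L buffers: 0 = empty, 1 = rightward data, 2 = downward data."""
      rng = random.Random(seed)
      return [[rng.choice((1, 2)) if rng.random() < density else 0
               for _ in range(L)] for _ in range(L)]

  def step(grid, species):
      """Deterministic sub-step: every packet of the given species moves into
      the next buffer (right or down, periodic boundaries) iff it is empty."""
      L = len(grid)
      new = [row[:] for row in grid]
      for i in range(L):
          for j in range(L):
              if grid[i][j] != species:
                  continue
              ti, tj = (i, (j + 1) % L) if species == 1 else ((i + 1) % L, j)
              if grid[ti][tj] == 0:
                  new[ti][tj], new[i][j] = species, 0
      return new

  def largest_cluster(grid):
      """Size of the largest 4-connected cluster of occupied buffers."""
      L, seen, best = len(grid), set(), 0
      for i in range(L):
          for j in range(L):
              if grid[i][j] == 0 or (i, j) in seen:
                  continue
              size, queue = 0, deque([(i, j)])
              seen.add((i, j))
              while queue:
                  x, y = queue.popleft()
                  size += 1
                  for nx, ny in (((x + 1) % L, y), ((x - 1) % L, y),
                                 (x, (y + 1) % L), (x, (y - 1) % L)):
                      if grid[nx][ny] != 0 and (nx, ny) not in seen:
                          seen.add((nx, ny))
                          queue.append((nx, ny))
              best = max(best, size)
      return best

  L, n_steps = 40, 400
  for density in (0.2, 0.3, 0.4, 0.5):        # total data load on the buffer system
      g = make_grid(L, density)
      for t in range(n_steps):
          g = step(g, 1 if t % 2 == 0 else 2)
      print(f"density {density}: largest occupied cluster = {largest_cluster(g)} of {L*L} buffers")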

If you want to read a bit more on this subject contact: Dr. Sverrir Olafsson or download Complexity Research Group papers on this subject:

Back to top of the page



Models for Internet Browsing

Motivation for Work

The role of the Internet as a platform for the search and supply of information and services is growing at an ever increasing rate. These rapid developments place severe demands on the management of bandwidth and network resources. To be able to cope with this steadily increasing demand, a good understanding of user behaviour is required. In this work we model user browsing on the Web as a Markov process with an appropriate balance between forward and backward browsing. We derive a power law for the probability distribution over all possible browsing lengths. Within this framework, compact and elegant expressions for the mean browsing length and its variance are derived. There are plans to use real data on Internet usage to fit the model parameters.

Introduction

The usage of the Internet and the services provided on it is growing at a very fast rate. Presently there are over 100 million Web pages connected to the Internet, and it is estimated that their number doubles in little over a year. Also, the kinds of applications and services being offered on the Internet require continuously increasing bandwidth, so the demands on network resources are steadily increasing. This development puts serious load constraints on the management and resource allocation techniques presently used on the Internet.

In order to provide the users of the Internet with reliable and efficient access and service provision, a good understanding of their behaviour is required. This includes questions like how often users log on to the WWW, how long they stay there, how they move about, i.e. what Web pages they typically visit and for how long, and what type and size of files they download. This WWW browsing behaviour has a strong impact on the traffic patterns generated on local networks and on the backbone network itself. As many of the links point to other countries and continents, these traffic patterns do not only have a local impact but also contribute to the traffic patterns on the backbone network. Accurate understanding and modelling of user behaviour therefore provides an essential input to network design and data/file management issues.

In this work we develop a model for the browsing or surfing pattern of Internet users. To formulate the browsing problem we make some basic assumptions, as follows. A user accesses a given address (URL) on the WWW. As he scrolls through the Web page a number of hypertext links are presented to him, which he may click on or ignore. Each accessed Web page therefore has many different exit points via these hypertext links. If the user clicks on one of these hypertext links he accesses a new URL on the WWW. This new page also makes the user aware of other links, which he may or may not follow. The described process can repeat itself many times. By undertaking the whole clicking process the user seeks to gain some information he considers relevant to him. At some stage the browsing process is interrupted, and the user may well initiate another sequence of clicks, starting from a Web page not part of the previous browsing sequence. The timing of the interruption will relate to the value the user expects to gain from continuing his browsing.

In fact we can view this process, or string of actions, as a kind of branching process with limited life expectancy. Here, by life expectancy we mean the number of sequential hypertext links initiated by visiting one URL address. The branching comes from the fact that each visited Web page can be exited via a number of different hyperlinks. Visiting a sequence of Web pages as a result of visiting one URL can be viewed as a string of correlated activities which eventually comes to an end. When that happens the user resets himself to some new Web page and the whole process may start again, i.e. the user generates a new string of visits to many different and remote Web pages. Alternatively, we can view this process as a kind of avalanche which is triggered off by the visit to one particular Web page. We are interested in calculating the typical size of the avalanche, its variance and other statistical parameters that may be relevant for its description.

In this work we set up a stochastic rate dynamics which describes forward browsing, from one Web page to another, followed by occasional backward moves. The resulting dynamic equations describing the browsing process are generalized Master equations. As we are interested in the aggregated behaviour of a large group of typical WWW users, we assume that the Master equation reaches an equilibrium which satisfies the principle of detailed balance. After making further assumptions about the forward and backward browsing rates, iterative solutions to the Master equation can be found. These solutions give the probabilities over the length of browsing, i.e. p_n is the probability that a user browses over n hyperlinks. This also enables us to find the first and second moments of the probability distribution. For certain parameter values we find that the resulting probability distribution follows a power law.
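The Master-equation solution itself is not reproduced here, but a toy Monte Carlo version of the forward/backward browsing walk gives an empirical p_n to compare against: at each click the user follows a new link with an assumed probability or steps back one page otherwise, and the session ends when the user backs out of the original page.

  import random
  from collections import Counter

  def browse_once(p_forward, rng, max_clicks=10000):
      """One toy session: start one page deep; each click goes forward with
      probability p_forward or back one page otherwise; the session ends
      when the user backs out completely.  Returns the number of clicks."""
      depth, clicks = 1, 0
      while depth > 0 and clicks < max_clicks:
          depth += 1 if rng.random() < p_forward else -1
          clicks += 1
      return clicks

  rng = random.Random(0)
  p_forward, sessions = 0.5, 200000            # assumed forward-browsing probability
  counts = Counter(browse_once(p_forward, rng) for _ in range(sessions))
  for n in (1, 3, 7, 15, 31, 63):              # browsing lengths are odd in this toy model
      print(f"estimated p_{n} = {counts.get(n, 0) / sessions:.2e}")

In this toy model, for forward probabilities near one half the tail of the empirical distribution is heavy, which is at least qualitatively consistent with the power-law behaviour found analytically.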

If you want to read a bit more on this subject contact: Dr. Sverrir Olafsson or download Complexity Research Group papers on this subject:

Back to top of the page



Evolutionary Game Theory

If you want to read a bit more on this subject contact: Dr. Sverrir Olafsson or download Complexity Research Group papers on this subject:

Back to top of the page



Adaptive Response Model

If you want to read a bit more on this subject contact: Dr. Sverrir Olafsson or download Complexity Research Group papers on this subject:

Back to top of the page



Simulating distributed adaptive filing systems

Introduction

The performance of computer systems has been improving rapidly over the past years, while the speed of communications lags behind and becomes a performance bottleneck for applications requiring network resources. The goal of our investigation has been to develop algorithms for organizing and optimizing network resources so that they rapidly adapt and respond to user requests.

We have developed a distributed file system model and used it as an experimental simulation tool to design, implement and test network adaptation algorithms. Two adaptation algorithms have been developed. One is based on the "greedy" heuristic principle and the other is a genetic algorithm tailored to handle the constraints of our problem.
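Neither algorithm is reproduced here, but the flavour of a greedy approach can be sketched as follows: repeatedly place whichever (file, server) copy yields the largest reduction in total expected access cost, subject to server storage limits. The cost model, the names and the small instance at the end are all assumptions.

  import itertools

  def greedy_placement(access_rate, latency, capacity, file_size, miss_penalty=1000.0):
      """access_rate[c][f]: how often client c requests file f;
      latency[c][s]: cost for client c to fetch from server s;
      capacity[s]: storage of server s; file_size[f]: size of file f.
      Greedily add the replica giving the largest cost reduction."""
      clients = range(len(access_rate))
      files = range(len(file_size))
      servers = range(len(capacity))
      placed = {f: set() for f in files}              # servers holding each file
      used = [0.0] * len(capacity)

      def total_cost():
          cost = 0.0
          for c in clients:
              for f in files:
                  if placed[f]:
                      cost += access_rate[c][f] * min(latency[c][s] for s in placed[f])
                  else:
                      cost += access_rate[c][f] * miss_penalty
          return cost

      while True:
          base, best = total_cost(), None
          for f, s in itertools.product(files, servers):
              if s in placed[f] or used[s] + file_size[f] > capacity[s]:
                  continue
              placed[f].add(s)
              gain = base - total_cost()
              placed[f].remove(s)
              if gain > 0 and (best is None or gain > best[0]):
                  best = (gain, f, s)
          if best is None:
              return placed
          _, f, s = best
          placed[f].add(s)
          used[s] += file_size[f]

  # Tiny illustrative instance: 2 clients, 3 files, 2 servers.
  access = [[5, 1, 2], [1, 4, 2]]
  lat = [[1, 10], [10, 1]]
  print(greedy_placement(access, lat, capacity=[2, 2], file_size=[1, 1, 1]))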

If you want to read a bit more on this subject contact: Dr. Sverrir Olafsson or download Complexity Research Group papers on this subject:

Back to top of the page



Performance Analysis of Cache Networks

This work has developed and implemented a World Wide Web cache infrastructure model to be used for the analysis of features that are otherwise difficult to extract from existing log data, and for the evaluation of cache scenarios that do not yet exist. A prominent feature of our model that differentiates it from other similar models is its dynamical aspect, which allows temporal features to be investigated. Using the model we verify and quantify observations made from real log data and provide a more comprehensive picture of the caching processes that take place behind the scenes. We also show how the model can be used to assess the economic viability of a caching solution.
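The infrastructure model itself is not reproduced here, but a minimal dynamic sketch of a single cache conveys the idea: an LRU cache fed by Zipf-distributed document requests, with the hit rate reported in successive windows so that the warm-up dynamics are visible. All parameters are illustrative.

  import bisect, random
  from collections import OrderedDict

  def zipf_cdf(n_docs, s):
      """Cumulative popularity distribution of n_docs documents, Zipf exponent s."""
      w = [1.0 / (rank ** s) for rank in range(1, n_docs + 1)]
      total, acc, cdf = sum(w), 0.0, []
      for x in w:
          acc += x / total
          cdf.append(acc)
      return cdf

  def simulate_lru(cache_size=500, n_docs=10000, n_requests=200000, s=0.8, seed=0):
      rng = random.Random(seed)
      cdf = zipf_cdf(n_docs, s)
      cache, hits, window = OrderedDict(), 0, 20000
      for i in range(1, n_requests + 1):
          doc = bisect.bisect_left(cdf, rng.random())   # sample a document id
          if doc in cache:
              cache.move_to_end(doc)
              hits += 1
          else:
              cache[doc] = True
              if len(cache) > cache_size:
                  cache.popitem(last=False)             # evict least recently used
          if i % window == 0:
              print(f"requests {i - window + 1}-{i}: hit rate {hits / window:.3f}")
              hits = 0

  simulate_lru()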

If you want to read a bit more on this subject contact: Dr. Sverrir Olafsson or download Complexity Research Group papers on this subject:

Back to top of the page



Other previous work

Evolutionary Game Theory
Capacity Pricing
Neural Networks
Risk Management Approach to Network reliability

