RUDN scientists suggest method to increase speed and reliability of wireless channels


RUDN University mathematicians have developed a model of a simple system consisting of two unreliable servers and one common queue of waiting customers, with a view to creating a wireless communication technology that combines high speed and reliability.

After analysing their hybrid model, the scientists demonstrated that the heterogeneity of the servers helps to increase the speed and reliability of data transmission systems.

"We suggested algorithms for calculating performance and reliability characteristics, the distribution of survival probability, and average life time of each server and the whole system to the first failure. The new reliability measurement is introduced as the distribution function of the number of failures within a given operation period. We've also carried out quantity evaluation of the influence of various parameters on such reliability characteristics and found out what values are required for optimal system operation," says Assistant Professor Dmitry Efrosinin of the department of the probability theory and mathematical statistics at RUDN.

During the first stage of the work, the scientists built a mathematical model of a controllable Markovian queueing system (QS) - that is, a model of customers arriving at a system, waiting in a queue and being served - consisting of two unreliable servers with high and low data transmission speeds. In QSs with heterogeneous servers, the mechanism chosen to allocate customers between the servers makes a significant contribution to the system's performance and reliability. Therefore, by varying this allocation mechanism, one may increase the system's capacity, reduce energy consumption and the sojourn time of a customer in the system, and increase the system's reliability without altering the characteristics of the servers themselves.
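To give a rough sense of what such a model contains, the sketch below (in Python) lists the rates involved and the state that the controller observes. The service rates and failure intensity echo the example values given later in this article; the arrival and repair rates are purely illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class TwoServerQueue:
    """Parameters of a two-server Markovian queueing model with unreliable,
    heterogeneous servers (all values illustrative)."""
    lam: float = 3.0      # Poisson arrival rate of customers (assumed)
    mu1: float = 4.8      # service rate of the fast server
    mu2: float = 1.2      # service rate of the slow server
    alpha1: float = 0.01  # failure rate of the fast server
    alpha2: float = 0.01  # failure rate of the slow server
    beta1: float = 0.5    # repair rate of the fast server (assumed)
    beta2: float = 0.5    # repair rate of the slow server (assumed)

# A state of the model: the number of waiting customers plus the status of
# each server (0 = idle, 1 = busy, 2 = failed).  The controller's decision
# is which free server, if any, receives the next customer.
State = tuple[int, int, int]
```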

According to the optimal allocation mechanism, or optimal allocation policy, which minimises the average number of customers in the system, the faster server should be used whenever it is free, while the slower one should be engaged only when the number of waiting customers exceeds a given threshold level.
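A minimal sketch of such a threshold rule, with the threshold of two waiting customers chosen arbitrarily for illustration:

```python
from typing import Optional

def allocate(queue_len: int, fast_free: bool, slow_free: bool,
             threshold: int = 2) -> Optional[int]:
    """Threshold allocation rule of the kind described above; the default
    threshold value is an arbitrary example, not a figure from the study."""
    if fast_free:
        return 1      # the fast server is used whenever it is free
    if slow_free and queue_len > threshold:
        return 2      # the slow server is engaged only above the threshold
    return None       # otherwise the customer keeps waiting in the queue
```

For instance, allocate(queue_len=3, fast_free=False, slow_free=True) returns 2, meaning a waiting customer is finally sent to the slower server once the queue has grown past the threshold.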

The RUDN mathematicians introduced a new reliability measure: the distribution function of the number of failures of an individual server, and of the system as a whole, within a given period of time. Based on this definition of reliability, the scientists say they found a way to increase it. Using computer modelling, they calculated how the reliability of the system depends on the parameters of each of its servers (failure rates and service rates). The researchers demonstrated that, at a failure intensity of 0.01, the system is most reliable when the heterogeneity of the servers is increased, for example when the service rates are 4.8 and 1.2 operations per second for the first and the second server respectively. According to the scientists, in each case one can choose system parameters for which the capacity and reliability meet pre-set requirements.
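The reliability measure itself is easy to illustrate: the Monte Carlo sketch below estimates the distribution of the number of failures of a single server within a fixed operation period, assuming exponential times to failure and to repair. The failure intensity of 0.01 matches the figure above; the period length and repair rate are assumptions.

```python
import random

def failure_count_distribution(failure_rate: float = 0.01, period: float = 1000.0,
                               repair_rate: float = 0.5, runs: int = 10_000):
    """Empirical distribution of the number of failures of one server within
    a fixed operation period (all parameters except the failure rate assumed)."""
    counts = []
    for _ in range(runs):
        t, n = 0.0, 0
        while True:
            t += random.expovariate(failure_rate)   # time until the next failure
            if t > period:
                break
            n += 1
            t += random.expovariate(repair_rate)    # time spent under repair
        counts.append(n)
    return {k: counts.count(k) / runs for k in sorted(set(counts))}

print(failure_count_distribution())
```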

The idea is that this model will help to reduce the number of failures and increase the processing speed in the hosts of information networks where high- and low-speed, as well as high- and low-reliability, channels are combined in one hybrid communication system. Such a system turns out to be very flexible, able to react quickly to changes in the intensity of the data packets arriving at the hosts while preserving a high processing speed with minimal risk of complete system failure.