Russian Federation
BISAC TEC000000 General
The monograph presents the fundamentals of the theory of new-generation modems. These modems are built on the principles of statistical communication theory and use a random signal (chaos) as the carrier of information. In such a signal, the characteristic function, a fundamental characteristic of a random process, is modulated. The patented signal modulation and demodulation method makes it possible to create modems whose efficiency and noise immunity exceed those of existing devices of the same name by several orders of magnitude. New-generation modems immediately improve the technical characteristics of digital IT equipment by several orders of magnitude, since they operate without errors in wired and radio channels when receiving one hundred duodecillion binary symbols. The book is recommended for scientists and specialists in the field of digital communication systems, statistical radio engineering and instrumentation, and may be useful for graduate students, master's students and undergraduates of the relevant specialties.
new-generation modems, information, signal, signal modulation and demodulation method, IT equipment, digital communication, radio engineering
Introduction
Large-scale digitalization of our planet, artificial intelligence, and much more are coming. People's comfort will be enhanced by intelligent digital technologies based on modern digital systems. Each such system has many important characteristics, of which we single out noise immunity. It determines the number of errors in a document depending on the levels of the signal and the interference. Currently, digital systems provide one error per ten million bits of information. For a document, this means roughly one error in a text set in 12-point font on one hundred sheets of A4 paper. Is that a lot or a little? For a hundred pages of an e-book, it is not much. But for a document consisting of one hundred receipts for transfers of funds, it is a lot, since a single error can alter the amount paid to any one of the hundred clients. In the digital economy there should be no such mistake, nor any other. Therefore, the introduction of the digital economy will require increasing the noise immunity of digital systems by several orders of magnitude at once.
Historically, it became the rule to transmit messages by modulating the parameters of a deterministic oscillation, in the belief that such an oscillation can exist in practice. This confidence persisted until 1950, when the results of measurements of the amplitude, phase, and frequency fluctuations of physical sources of harmonic oscillations appeared. It then turned out that a deterministic oscillation is nothing more than a mathematical abstraction, unrealizable in practice. The experimentally established fluctuations of the oscillation parameters were hidden behind the term "parasitic amplitude, phase and frequency fluctuations". This term has survived to the present day, and the fluctuations are still being fought. As a result, the theoretically established indicators of the noise immunity of communication systems remain unattainable. To overcome this situation and raise the noise immunity of digital systems in one big leap, we propose an alternative: abandon the unpromising deterministic oscillation and switch to a random or, at the first stage, a quasi-deterministic signal. Both signals, in our understanding, are synonymous with dynamic chaos, which opens up great opportunities in the development of new methods for transmitting, storing and processing information. With dynamic chaos it becomes appropriate to speak of digital systems with high noise immunity, in which the error probability will be 1·10^{−41}, i.e. one error when receiving one hundred duodecillion binary symbols.
In addition, to achieve high noise immunity of digital systems, a new approach to signal modulation is proposed, which includes what was said above about chaos and a transition to modulating the characteristic function of the signal, which will serve as a "space suit" for the modulated signal, by analogy with astronautics. The new method of random-signal modulation will be conditionally called statistical modulation; it forms the basis of the theory of statistical communication. The results of the analysis show that with the new method it is possible to reach the limiting values of the noise immunity and of the spectral and energy efficiency of digital systems.
Thus, the monograph is devoted to increasing the noise immunity of digital data transmission systems by several orders of magnitude using new methods, techniques and devices. Its material complements the avant-garde direction of statistical radio engineering aimed at involving random processes in solving problems of the theory of statistical communication, which remain relevant to this day. Using random signals with known probabilistic characteristics, the author proves in his research the promise of dynamic chaos in a new generation of radio engineering devices, for example, in modems.
The monograph considers various modem structures, algorithms for the modulation and demodulation of a quasi-deterministic signal, and the characteristic functions of signals proposed for use in digital data transmission. Along the way, new knowledge was obtained about the characteristic function and the distribution laws of signals that was not previously known in probability theory. It is shown that the signal, the characteristic function, and the signal modulation and demodulation algorithms together constitute a product of digital technology. The noise immunity of new-generation modems operating in a noisy channel is assessed qualitatively and quantitatively. It was found that at a signal-to-noise ratio of 3 dB the noise immunity of the modem is at least ten orders of magnitude higher than the same characteristic of known analogues, for example, with phase modulation.
1. CHARACTERISTIC FUNCTION
The characteristic function (ch.f.) is a probabilistic characteristic of a random process or a random variable. It was proposed and used in 1901 by the mathematician A.M. Lyapunov to prove the limit theorem of probability theory. Since then, the ch.f. has acquired independent significance and is successfully used to solve both fundamental and applied problems [1, 2]. In our opinion, it is appropriate to call it the characteristic function of A.M. Lyapunov, as is done, for example, with the functions of F. Bessel, P. Dirac, H. Struve, O. Heaviside and other scientists. For more than fifty years the characteristic function of A. Lyapunov served as a tool of mathematics, and it remains so to this day: thanks to this function, mathematicians solve complex problems and construct difficult proofs of theorems. Since 1954 this function has also been studied and applied in engineering. With its help, new results have been obtained in non-destructive testing and diagnostics, in communication technology, in noise filtering in the probability space, and in signal detection, improving on previously known results by an order of magnitude or more [2, 3].
1.1. Characteristic function: definition, properties
A one-dimensional characteristic function is the statistical mean of an exponential with the imaginary exponent jV_{m}ξ(t), in which the random process ξ(t) is multiplied by an arbitrary real parameter V_{m}. The mathematical model of the ch.f. is represented by the expression

θ_{1}(V_{m}) = m_{1}{exp[jV_{m}ξ(t)]}, (1.1)

where θ_{1}(V_{m}) is the characteristic function (ch.f.); m_{1}{·} is the sign of the statistical average (the expectation operator); V_{m} = m∆V is the ch.f. parameter; ∆V is the discretization (quantization) step of the ch.f. parameter; m = 0, ±1, ±2, … In expression (1.1), the subscript 1 denotes a one-dimensional function. Obviously, using Euler's formula, we can write
θ_{1}(V_{m}) = m_{1}{cos[V_{m}ξ(t)]} + jm_{1}{sin[V_{m}ξ(t)]} = A(V_{m}) + jB(V_{m}), (1.2)

where A(V_{m}) = m_{1}{cos[V_{m}ξ(t)]} is the real part of the ch.f.; B(V_{m}) = m_{1}{sin[V_{m}ξ(t)]} is the imaginary part of the ch.f. Then there will be the equality
θ_{1}(V_{m}) = |θ_{1}(V_{m})| exp[jγ(V_{m})], (1.3)

where |θ_{1}(V_{m})| = [A^{2}(V_{m}) + B^{2}(V_{m})]^{1/2} and γ(V_{m}) = arctan[B(V_{m})/A(V_{m})] are the modulus and argument of the ch.f., respectively. A geometric interpretation of the ch.f. is shown in Figure 1.1. It depicts a spatial figure formed by rotating a radius vector, of length equal to the modulus |θ_{1}(V_{m})|, around the axis on which the values of the parameter V_{m} are plotted. The projections of a point of the figure onto the coordinate axes are equal, respectively, to the real and imaginary parts of the ch.f. The shape and dimensions of the figure are determined by the random process ξ(t). The figure in Figure 1.1 is built for a Gaussian process with m_{1} = 0.3, σ = 0.01 and looks like a funnel; here m_{1} is the expectation of the process and σ is its mean square deviation (MSD). In a cross-section of the figure at a fixed value of V_{m}, a circle is obtained.
Figure 1.1. One of the possible geometric interpretations of the ch.f.
From a physical point of view, the parameter V_{m} is the coefficient (or multiplicity) of amplification (attenuation) of the instantaneous values of the random process, and the product V_{m}ξ(t) is the instantaneous phase of the analytical signal

s(t) = exp[jV_{m}ξ(t)]. (1.4)
Then the ch.f. is the mathematical expectation of an analytical signal with a constant modulus equal to one, while the random process ξ(t) only determines the law of change of the instantaneous phase of the signal (1.4). Figure 1.1 can then be read as follows. As the multiplication factor of the instantaneous phase of the analytical signal increases, its expectation decreases, since a rapidly changing signal is strongly averaged, and its constant component tends to zero. At a certain multiplication factor V_{m} it even becomes equal to zero; at this point the top of the spatial figure appears.
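The decay of the ch.f. modulus described above is easy to reproduce numerically. The sketch below is an illustration, not the author's instrument: the sample size, the seed, and σ = 1 (rather than the σ = 0.01 of Figure 1.1) are chosen only to make the effect visible over small V_{m}. It estimates θ_{1}(V_{m}) as the sample mean of exp[jV_{m}ξ] for a Gaussian process and compares it with the known ch.f. exp(jm_{1}V_{m} − σ^{2}V_{m}^{2}/2):

```python
import cmath
import random

def ecf(samples, v):
    """Empirical ch.f.: sample mean of exp(j*v*x) over the observed values."""
    return sum(cmath.exp(1j * v * x) for x in samples) / len(samples)

rng = random.Random(1)
m1, sigma = 0.3, 1.0  # sigma = 1 chosen so the decay is visible at small V_m
xs = [rng.gauss(m1, sigma) for _ in range(20000)]

mods = []
for v in (0.0, 1.0, 2.0, 4.0):
    est = ecf(xs, v)
    theory = cmath.exp(1j * m1 * v - 0.5 * (sigma * v) ** 2)
    mods.append(abs(est))
    print(f"V_m={v}: |estimate|={abs(est):.4f}, |theory|={abs(theory):.4f}")
```

The modulus falls from one toward zero as the multiplicity grows, which is exactly the "funnel" of Figure 1.1.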
The well-known relation of the ch.f. to the probabilistic characteristics of a random process is represented by the formula [4]

θ_{1}(V_{m}) = ∫_{−∞}^{+∞} W_{1}(x) exp(jV_{m}x) dx, (1.5)

where W_{1}(x) is the one-dimensional probability density of the random process ξ(t). In a particular case, we have
θ_{1}(V_{m}) = ∫_{−∞}^{+∞} W_{1}(x) cos(V_{m}x) dx, (1.6)

if the imaginary part of the ch.f. equals zero. This implies that if the ch.f. takes on only real values, then it is even, and the corresponding probability density is symmetric. Conversely, if the probability density of a random process is symmetric, then the ch.f. is a real, even function of the parameter V_{m}. In addition to (1.5), a relation is established between the ch.f. and the initial and central moment functions of a random process. The book [4] gives the equality
m_{k}{ξ(t)} = j^{−k} d^{k}θ_{1}(V_{m})/dV_{m}^{k} |_{V_{m}=0}. (1.7)

It follows from this that the initial moment functions of the k-th order differ from the values of the k-th derivatives of the ch.f. at V_{m} = 0 only by the factor j^{−k}. In a particular case, if k = 1, we obtain an expression for the expectation of a random process
m_{1}{ξ(t)} = −j dθ_{1}(V_{m})/dV_{m} |_{V_{m}=0}. (1.8)
In the general case, if the initial moments of all orders exist, then, as follows from (1.7), the ch.f. can be represented by a Maclaurin series

θ_{1}(V_{m}) = Σ_{k=0}^{∞} [(jV_{m})^{k}/k!] m_{k}{ξ(t)}. (1.9)
If we expand into a Maclaurin series not the ch.f. itself but the cumulant function ψ(V_{m}) = ln θ_{1}(V_{m}), then we obtain the expression

ln θ_{1}(V_{m}) = Σ_{k=1}^{∞} [(jV_{m})^{k}/k!] κ_{k}. (1.10)
The coefficients κ_{k} of this series, called cumulants (or semi-invariants) of the distribution, are expressed in terms of the central moments by the formulas [4]

κ_{1} = m_{1}{ξ(t)}, κ_{2} = M_{2}{ξ(t)}, κ_{3} = M_{3}{ξ(t)},
κ_{4} = M_{4}{ξ(t)} − 3M_{2}^{2}{ξ(t)}, κ_{5} = M_{5}{ξ(t)} − 10M_{2}{ξ(t)}M_{3}{ξ(t)}, (1.11)

where M_{k}{ξ(t)} are the central moment functions of the k-th order of the random process ξ(t). It can be seen from formulas (1.11) that the 1st-order semi-invariant is equal to the expectation, and the 2nd-order semi-invariant is equal to the variance of the random process.
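The cumulant formulas (1.11) can be checked numerically. A minimal sketch, assuming the unit-rate exponential law as a test case: its cumulants are known to equal (k − 1)!, and its central moments are M_n = n!·Σ_{k=0}^{n}(−1)^{k}/k!:

```python
import math

def central_moment_exp1(n):
    """Central moments of the Exp(1) law: M_n = n! * sum_{k=0..n} (-1)^k / k!."""
    return round(math.factorial(n) * sum((-1) ** k / math.factorial(k) for k in range(n + 1)))

m1 = 1  # expectation of Exp(1)
M = {n: central_moment_exp1(n) for n in range(2, 6)}

# Semi-invariants via formulas (1.11)
kappa = [
    m1,                       # kappa_1
    M[2],                     # kappa_2
    M[3],                     # kappa_3
    M[4] - 3 * M[2] ** 2,     # kappa_4
    M[5] - 10 * M[2] * M[3],  # kappa_5
]
print(kappa)  # cumulants of Exp(1) are (k-1)!: [1, 1, 2, 6, 24]
```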
For many random processes the ch.f. has been defined and calculated; information about them is systematized in the literature [2, 5]. A graph of the ch.f. of the widely used Gaussian random process is shown in Figure 1.2. It is plotted for the function

θ_{1}(V_{m}) = exp(jm_{1}V_{m} − σ^{2}V_{m}^{2}/2), (1.12)
for which the coefficient m_{1} = 0. When σ = 1, the value of the ch.f. at V_{m} = 4 differs from θ_{1}(0) = 1 by a factor of about 3000 (e^{−8} ≈ 3.4·10^{−4}). Therefore, the ch.f. of a normal random process rapidly decreases to zero. This behavior of the ch.f. is determined by its noise-filtering property, which will be discussed below. In addition, the ch.f. has
Figure 1.2. Characteristic function of a Gaussian process
(m_{1}=0, σ=1)
other properties: for example, its maximum value is equal to one at V_{m} = 0, where the modulus |θ_{1}(0)| = 1. This implies the measurability and boundedness of the ch.f., since |θ_{1}(V_{m})| ≤ 1 for all values of V_{m}.
When solving many problems, it is especially useful that the ch.f. of an additive mixture of signal and noise is equal to the product of the characteristic functions of the individual terms. This property is easy to extend to the sum of independent random processes. The ch.f. of the sum will be

θ_{1z}(V_{m}) = Π_{i=1}^{n} θ_{1i}(V_{m}), (1.13)
where θ_{1i}(V_{m}) is the ch.f. of the i-th random process. Here we can also speak of the probability density of the sum of independent random processes, which, as is well known, is calculated by the convolution formula

W_{1z}(z) = ∫_{−∞}^{+∞} W_{1u}(x) W_{1y}(z − x) dx, (1.14)
where z(t) = u(t) + y(t). Then, using transformation (1.5), we can pass to the characteristic function of the additive mixture z(t). These two ways of solving the problem do not exclude each other: in some cases it is easier to find the probability density, in others the characteristic function. For example, to obtain the probability density of the signal alone from the mixture z(t), it is much easier to measure the ch.f. of the mixture z(t) and the ch.f. of the interference y(t) when the signal u(t) = 0. Then the ch.f. of the signal is found from the ratio of the characteristic functions, and its probability density is calculated using the Fourier transform
W_{1u}(x) = (1/2π) ∫_{−∞}^{+∞} [θ_{1z}(V)/θ_{1y}(V)] exp(−jVx) dV. (1.15)
The solution of such a problem with the help of convolution (1.14) is much more complicated.
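The ratio method just described can be sketched in a few lines. The example below is an illustration under assumed distributions (a uniform "signal" plus Gaussian "interference", with arbitrary sample size and seed); it recovers the ch.f. of the signal by dividing the empirical ch.f. of the mixture by that of the interference and compares the result with the exact ch.f. sin(V)/V of the uniform law:

```python
import cmath
import math
import random

def ecf(samples, v):
    """Empirical ch.f.: sample mean of exp(j*v*x)."""
    return sum(cmath.exp(1j * v * x) for x in samples) / len(samples)

rng = random.Random(7)
N = 50000
u = [rng.uniform(-1.0, 1.0) for _ in range(N)]  # signal; its ch.f. is sin(V)/V
y = [rng.gauss(0.0, 0.5) for _ in range(N)]     # interference, measured when u = 0
z = [a + b for a, b in zip(u, y)]               # additive mixture

v = 1.5
theta_u = ecf(z, v) / ecf(y, v)  # ch.f. of the signal from the ratio
exact = math.sin(v) / v
print(abs(theta_u.real - exact), abs(theta_u.imag))
```

Doing the same recovery through the convolution (1.14) would require a numerical deconvolution, which illustrates the remark above.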
If we divide the characteristic function of the additive mixture by the ch.f. of the interference, we obtain a quotient. It contains expressions for the real and imaginary parts of the ch.f. of the signal alone. The expressions are

A_{u}(V_{m}) = [A_{z}(V_{m})A_{y}(V_{m}) + B_{z}(V_{m})B_{y}(V_{m})]/[A_{y}^{2}(V_{m}) + B_{y}^{2}(V_{m})], (1.16)

B_{u}(V_{m}) = [B_{z}(V_{m})A_{y}(V_{m}) − A_{z}(V_{m})B_{y}(V_{m})]/[A_{y}^{2}(V_{m}) + B_{y}^{2}(V_{m})], (1.17)

where A_{u}(V_{m}), A_{y}(V_{m}), A_{z}(V_{m}) are the real parts of the ch.f. of the signal, interference and additive mixture, respectively; B_{u}(V_{m}), B_{y}(V_{m}), B_{z}(V_{m}) are the imaginary parts of the ch.f. of the signal, interference and additive mixture, respectively. When the noise has a symmetric distribution function and mathematical expectation equal to zero, the calculation algorithms (1.16), (1.17) become simpler:
A_{u}(V_{m}) = A_{z}(V_{m})/A_{y}(V_{m}), (1.18)

B_{u}(V_{m}) = B_{z}(V_{m})/A_{y}(V_{m}). (1.19)
In accordance with (1.2), the ch.f. of the signal will be equal to

θ_{1u}(V_{m}) = A_{u}(V_{m}) + jB_{u}(V_{m}) = [A_{z}(V_{m}) + jB_{z}(V_{m})]/A_{y}(V_{m}). (1.20)
If the additive mixture of signals is represented as z(t) = K_{0}u(t) + U_{0}, then the ch.f. of the sum looks like this:

θ_{1z}(V_{m}) = exp(jV_{m}U_{0}) θ_{1u}(K_{0}V_{m}), (1.21)

where U_{0} is a deterministic signal constant in time (for example, a constant voltage); θ_{1z}(V_{m}) is the ch.f. of the process z(t), and θ_{1u}(V_{m}) is the ch.f. of the signal u(t). When U_{0} = 0, expression (1.21) is transformed into the equality
θ_{1z}(V_{m}) = θ_{1u}(K_{0}V_{m}). (1.22)
Consequently, amplification (attenuation) of the signal by a factor of K_{0} leads only to a change of the real parameter of the ch.f. by the same factor. This property of the ch.f. is especially important when constructing instruments of a new class with the conditional name of characterometers. When the sign of the real parameter of the ch.f. is reversed, equality (1.22) takes the form

θ_{1z}(−V_{m}) = θ_{1u}(−K_{0}V_{m}) = θ*_{1u}(K_{0}V_{m}), (1.23)
i.e. θ_{1z}(−V_{m}) and θ_{1u}(K_{0}V_{m}) are complex conjugate functions. If we put K_{0} = 1 in expression (1.23), then it reduces to the known relation

θ_{1}(−V_{m}) = θ*_{1}(V_{m}). (1.24)
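Properties (1.22) and (1.24) are easy to verify on sampled data. A small sketch with illustrative parameters (K_{0} = 2.5, V_{m} = 0.8, Gaussian samples with an arbitrary seed):

```python
import cmath
import random

def ecf(samples, v):
    """Empirical ch.f.: sample mean of exp(j*v*x)."""
    return sum(cmath.exp(1j * v * x) for x in samples) / len(samples)

rng = random.Random(3)
u = [rng.gauss(0.5, 1.0) for _ in range(10000)]
K0 = 2.5
z = [K0 * x for x in u]  # amplified signal

v = 0.8
# (1.22): amplification by K0 only rescales the real parameter of the ch.f.
scale_err = abs(ecf(z, v) - ecf(u, K0 * v))
# (1.24): theta(-V) is the complex conjugate of theta(V)
conj_err = abs(ecf(u, -v) - ecf(u, v).conjugate())
print(scale_err, conj_err)
```

Both differences vanish to within floating-point rounding, since the same samples enter both sides of each identity.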
Let us consider a complex signal ξ(t) = u(t) + jv(t), in which the terms are functions of a real variable. Then, for independent u(t) and v(t),

θ_{1ξ}(V_{m}) = θ_{1u}(V_{m}) m_{1}{exp[−V_{m}v(t)]}, (1.24a)

where θ_{1ξ}(V_{m}), θ_{1u}(V_{m}) are the characteristic functions of the signals ξ(t), u(t), respectively. Thus, the ch.f. of a complex signal is equal to the product of the ch.f. of the real part of this signal and the expectation of an exponential function whose exponent, taken with the opposite sign, is the imaginary part of the complex signal amplified V_{m} times. If the random signal v(t) obeys the distribution law W_{1}(x), then
m_{1}{exp[−V_{m}v(t)]} = ∫_{−∞}^{+∞} W_{1}(x) exp(−V_{m}x) dx. (1.24b)

For example, for a law of the form W_{1}(x) = 1/(2π), |x| ≤ π, we have

m_{1}{exp[−V_{m}v(t)]} = sh(πV_{m})/(πV_{m}). (1.24c)
The multidimensional ch.f. of a random process has the following form:

θ_{n}(V_{1}, V_{2}, …, V_{n}) = m_{1}{exp[j(V_{1}ξ(t_{1}) + V_{2}ξ(t_{2}) + … + V_{n}ξ(t_{n}))]}. (1.25)
The quantization step ∆V is applicable to any ch.f. parameter. Taking into account the previously adopted notation, we can write

V_{i} = m_{i}∆V, i = 1, 2, …, n. (1.26)

Moreover, it is not at all necessary in (1.26) to have a constant quantization step for all parameters of the ch.f.; each parameter (or group of parameters) of the ch.f. can be discretized with its own step.
If the instantaneous values of the random process, taken at the moments of time t_{1}, t_{2}, …, t_{n}, do not depend on each other, then expression (1.25) takes a different form:

θ_{n}(V_{1}, V_{2}, …, V_{n}) = Π_{i=1}^{n} θ_{1i}(V_{i}), (1.27)

where θ_{1i}(V_{i}) is the ch.f. of the i-th set of instantaneous values of the random process.
For stationary random processes, expression (1.25) is written more simply:

θ_{n}(V_{1}, V_{2}, …, V_{n}; τ) = m_{1}{exp[jΣ_{k=1}^{n} V_{k}ξ(t_{1} + (k − 1)τ)]}, (1.28)

since the ch.f. does not depend on time or depends only on the differences of the individual moments of time t_{2} − t_{1}, t_{3} − t_{1}, …, t_{n} − t_{1} = (n − 1)τ, i.e.

θ_{n}(V_{1}, …, V_{n}; t_{1}, …, t_{n}) = θ_{n}(V_{1}, …, V_{n}; t_{2} − t_{1}, …, t_{n} − t_{1}). (1.29)
It is quite clear that a ch.f. of lower dimension is obtained quite simply from the n-dimensional characteristic function, namely:

θ_{n−1}(V_{1}, …, V_{n−1}) = θ_{n}(V_{1}, …, V_{n−1}, 0). (1.30)
Calculation of the n-dimensional ch.f. of a random process is a difficult task; therefore, few examples of multidimensional ch.f. appear in the literature. Nevertheless, the expression for the n-dimensional ch.f. of a normal random process is known:

θ_{n}(V_{1}, …, V_{n}) = exp[jm_{1}Σ_{k=1}^{n}V_{k} − (σ^{2}/2)Σ_{i=1}^{n}Σ_{k=1}^{n}R_{ik}(τ)V_{i}V_{k}], (1.31)

where m_{1}, σ^{2} are the mathematical expectation and variance of the random process; R_{ik}(τ) is the normalized correlation function, with R_{ik}(τ) = R_{ki}(τ), R_{ii}(0) = 1, R_{kk}(0) = 1. Examples of other multidimensional ch.f. can be found in the books [2, 5].
1.2. Estimates of the characteristic function of instantaneous values of a random process
Formula (1.1) contains the mathematical operation m_{1}{·}, the statistical mean (or expectation operator), which involves averaging an infinitely large number of values of the function exp[jV_{m}ξ(t)] that depend on the instantaneous values of the random process ξ(t).
It is quite clear that in the instrumental analysis of the ch.f. a finite number of instantaneous values of the random process or of its parameters (envelope, instantaneous phase, frequency) will be used. The result of calculating the value of the ch.f. from a limited set of sample data[1]^{)} of a random process is called an estimate of the function. The estimate of the ch.f. will be denoted by the previously adopted symbol, marked with an asterisk to distinguish it:

θ*_{1}(V_{m}) = L{exp[jV_{m}ξ_{i}(t_{n})]}, (1.32)

θ*_{1}(V_{m}) = L{exp[jV_{m}ξ_{k}(t)]}, (1.33)

where L is the transformation operator applied to an array of sample data, a set of sample values, or time-limited realizations of a random process; k is the number of realizations (or sequences) used.
The transformation operator L can be different. In [6], three operators are considered, namely: an ideal integrator with normalization with respect to T,

L{·} = (1/T)∫_{0}^{T}(·) dt; (1.34)

an ideal adder with normalization with respect to N,

L{·} = (1/N)Σ_{n=1}^{N}(·); (1.35)

and an ideal adder-integrator with normalization with respect to N and T,

L{·} = (1/NT)Σ_{k=1}^{N}∫_{0}^{T}(·) dt, (1.36)

where T is the averaging time (the duration of a realization of the random process); N is the number of sample data values or of sample realizations. Taking into account (1.34)–(1.36), we can write:
θ*_{1}(V_{m}, t_{n}) = (1/N)Σ_{k=1}^{N}exp[jV_{m}ξ_{k}(t_{n})], (1.37)

θ*_{1}(V_{m}, k) = (1/T)∫_{0}^{T}exp[jV_{m}ξ_{k}(t)] dt, (1.38)

θ*_{1}(V_{m}) = (1/NT)Σ_{k=1}^{N}∫_{0}^{T}exp[jV_{m}ξ_{k}(t)] dt. (1.39)
By definition [6], estimates (1.37)–(1.39) are called k-current, t-current and average, respectively. The application of these estimates is largely determined by the type of random process. Using the classification [4] of random processes, we can state the following:
- for a stationary ergodic random process, all estimates of the ch.f. are equal to each other;
- for a stationary non-ergodic random process, the t-current and average estimates of the ch.f. are equal to each other;
- for a non-stationary ergodic random process, the k-current and average estimates of the ch.f. are equal to each other;
- for a non-stationary non-ergodic random process, none of the estimates of the ch.f. are equal to each other. In this case, we recommend using the average estimate of the ch.f., since it converges better than the others to the probabilistic characteristic (the characteristic function).
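For a stationary ergodic process, the agreement of the three estimates is easy to observe numerically. A sketch under assumed conditions: independent Gaussian samples stand in for realizations of an ergodic process, and N, T and the seed are arbitrary illustrative choices:

```python
import cmath
import random

def k_current(realizations, n, v):
    """(1.37)-style estimate: average over realizations at a fixed time index n."""
    return sum(cmath.exp(1j * v * r[n]) for r in realizations) / len(realizations)

def t_current(realization, v):
    """(1.38)-style estimate: average over time along a single realization."""
    return sum(cmath.exp(1j * v * x) for x in realization) / len(realization)

def average(realizations, v):
    """(1.39)-style estimate: average over both realizations and time."""
    return sum(t_current(r, v) for r in realizations) / len(realizations)

rng = random.Random(11)
reals = [[rng.gauss(0.0, 1.0) for _ in range(2000)] for _ in range(200)]

v = 1.0
ek = k_current(reals, 0, v)
et = t_current(reals[0], v)
ea = average(reals, v)
# All three converge to the Gaussian ch.f. exp(-v*v/2), about 0.6065 at v = 1
print(abs(ek), abs(et), abs(ea))
```

The average estimate uses the most data and is the closest of the three, in line with the recommendation above.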
Passing to the estimates of the real and imaginary parts of the ch.f., taking into account (1.37)–(1.39), we write

A*(V_{m}, t_{n}) = (1/N)Σ_{k=1}^{N}cos[V_{m}ξ_{k}(t_{n})], (1.40)

B*(V_{m}, t_{n}) = (1/N)Σ_{k=1}^{N}sin[V_{m}ξ_{k}(t_{n})], (1.41)

A*(V_{m}, k) = (1/T)∫_{0}^{T}cos[V_{m}ξ_{k}(t)] dt, (1.42)

B*(V_{m}, k) = (1/T)∫_{0}^{T}sin[V_{m}ξ_{k}(t)] dt, (1.43)

A*(V_{m}) = (1/NT)Σ_{k=1}^{N}∫_{0}^{T}cos[V_{m}ξ_{k}(t)] dt, (1.44)

B*(V_{m}) = (1/NT)Σ_{k=1}^{N}∫_{0}^{T}sin[V_{m}ξ_{k}(t)] dt. (1.45)
Our further analysis is connected with the k-current estimate of the ch.f. To simplify the notation, we omit the symbols k, n in formulas (1.37), (1.40), (1.41), while still keeping them in mind. If in estimates (1.37), (1.40), (1.41) the averaging operation is performed not by an ideal integrator but by a real device with impulse response q(t), then these expressions take the form

θ*_{1}(V_{m}) = ∫_{0}^{T}q(T − t)exp[jV_{m}ξ(t)] dt, (1.46)

A*(V_{m}) = ∫_{0}^{T}q(T − t)cos[V_{m}ξ(t)] dt, (1.47)

B*(V_{m}) = ∫_{0}^{T}q(T − t)sin[V_{m}ξ(t)] dt. (1.48)
Let us extend the notion of an estimate to the n-dimensional ch.f. Similarly to estimates (1.37)–(1.39), let us write

θ*_{n}(V_{1}, …, V_{n}, t) = (1/N)Σ_{k=1}^{N}exp[jΣ_{i=1}^{n}V_{i}ξ_{k}(t_{i})], (1.49)

θ*_{n}(V_{1}, …, V_{n}, k) = (1/T)∫_{0}^{T}exp[jΣ_{i=1}^{n}V_{i}ξ_{k}(t + τ_{i})] dt, (1.50)

θ*_{n}(V_{1}, …, V_{n}) = (1/NT)Σ_{k=1}^{N}∫_{0}^{T}exp[jΣ_{i=1}^{n}V_{i}ξ_{k}(t + τ_{i})] dt. (1.51)
Separately for the real and imaginary parts of the ch.f. we have

A*_{n}(V_{1}, …, V_{n}, t) = (1/N)Σ_{k=1}^{N}cos[Σ_{i=1}^{n}V_{i}ξ_{k}(t_{i})], (1.52)

B*_{n}(V_{1}, …, V_{n}, t) = (1/N)Σ_{k=1}^{N}sin[Σ_{i=1}^{n}V_{i}ξ_{k}(t_{i})], (1.53)

A*_{n}(V_{1}, …, V_{n}, k) = (1/T)∫_{0}^{T}cos[Σ_{i=1}^{n}V_{i}ξ_{k}(t + τ_{i})] dt, (1.54)

B*_{n}(V_{1}, …, V_{n}, k) = (1/T)∫_{0}^{T}sin[Σ_{i=1}^{n}V_{i}ξ_{k}(t + τ_{i})] dt, (1.55)

A*_{n}(V_{1}, …, V_{n}) = (1/NT)Σ_{k=1}^{N}∫_{0}^{T}cos[Σ_{i=1}^{n}V_{i}ξ_{k}(t + τ_{i})] dt, (1.56)

B*_{n}(V_{1}, …, V_{n}) = (1/NT)Σ_{k=1}^{N}∫_{0}^{T}sin[Σ_{i=1}^{n}V_{i}ξ_{k}(t + τ_{i})] dt. (1.57)
For the k-current estimate of a multidimensional ch.f., taking into account (1.46)–(1.48), we can write

θ*_{n}(V_{1}, …, V_{n}) = ∫_{0}^{T}q(T − t)exp[jΣ_{i=1}^{n}V_{i}ξ(t + τ_{i})] dt, (1.58)

A*_{n}(V_{1}, …, V_{n}) = ∫_{0}^{T}q(T − t)cos[Σ_{i=1}^{n}V_{i}ξ(t + τ_{i})] dt, (1.59)

B*_{n}(V_{1}, …, V_{n}) = ∫_{0}^{T}q(T − t)sin[Σ_{i=1}^{n}V_{i}ξ(t + τ_{i})] dt. (1.60)
In the above formulas, the model of the random process can be arbitrary; models of the impulse response of a physically realizable integrator are presented in Table 1.1.
In the theory of estimating the probabilistic characteristics of a random process, much attention is paid to the fundamental nature of estimates. Fundamentality is characterized by the properties of the estimates: they must be consistent, efficient and unbiased. The properties of estimates (1.40)–(1.45), (1.47), (1.48) are studied in detail and described in the book [2]. It is also shown there that the estimates of the real and imaginary parts of the ch.f. are consistent, efficient, and asymptotically unbiased, i.e. they are fundamental.
Table 1.1. Classification of integrators

Integrator type | Characteristic q(T − t)
RC integrator | βe^{β(t − T/2)}/[2sh(βT/2)]
RC integrator | (1 − βT + βt)e^{βt}/T
RC integrator | 2(T − t − βe^{β(t − T)})/(2e^{−βT} + T^{2} − 2)
Measuring instrument with critical damping | β^{2}(T − t)e^{βT}/(e^{βT} − βT − 1)
Turning to estimates of the form (1.36)–(1.39), (1.46), we can say that they are also fundamental. Their properties will be the same as those of estimates (1.40), (1.41), (1.47), (1.48) of the real and imaginary parts of the ch.f. It is appropriate to add the following. Estimates (1.40), (1.41), (1.47), (1.48) play the role of approximators, with the help of which estimates of the probability density, correlation function, and initial and central moment functions are constructed. Since the initial estimates (approximators) of the probabilistic characteristics of a random process are consistent, the other estimates of the probabilistic characteristics constructed from them by classical methods will also be consistent, which agrees with the conclusion in [2].
In the instrumental analysis of random processes, it can be difficult to obtain an estimate of a probabilistic characteristic that satisfies all of the above properties at once. For example, it may turn out that even though an efficient estimate exists, the algorithms for calculating it are too complicated, and one has to be content with a simpler estimate whose variance is somewhat larger. The same can be said about biased estimates, since slightly biased estimates are sometimes used. The final choice of the estimate, as a rule, is dictated by the results of its analysis in terms of simplicity, ease of implementation in equipment or in mathematical statistics, and reliability of its properties. Estimates of the probability distribution function, probability density, correlation function, and initial and central moment functions of the k-th order are given in the book [2].
1.3. Estimation of the characteristic function of a discrete quantity
Let us consider a special case when the process ξ(t) is represented only by instantaneous values ξ_{i}(t_{n}). To simplify the formulas, we introduce the notation ξ_{i}(t_{n}) = ξ_{i}. This is a discrete random variable for which the probability density has the form [4]

W_{1}(x) = Σ_{i=1}^{N}p_{i}δ(x − ξ_{i}), (1.61)

where p_{i} is the probability of occurrence of the i-th value of the discrete random variable; N is the total number of discrete values of the random variable; δ(·) is the delta function [4]. In view of (1.61), the ch.f. of a discrete random variable is
θ_{1}(V_{m}) = Σ_{i=1}^{N}p_{i}exp(jV_{m}ξ_{i}). (1.62)
If the discrete random variable ξ_{i} = U_{0} = const with probability p = 1, then the ch.f. of the constant value is calculated by the formula

θ_{1}(V_{m}) = exp(jV_{m}U_{0}). (1.63)
The known distribution laws of a discrete random variable are tabulated in the book [5], along with the characteristic functions obtained from them. For example, a discrete random variable distributed according to the Poisson law has a ch.f. of the form

θ_{1}(V_{m}) = exp[λ(e^{jV_{m}} − 1)], (1.64)
where λ is the Poisson distribution parameter. This function is widely used when observing flows of elementary particles, such as electrons.
Formulas (1.7)–(1.11) can be used to calculate the distribution moments of a discrete random variable. Substituting the ch.f. (1.64) into formula (1.7), we obtain

m_{1}{ξ} = λ, M_{2}{ξ} = λ, M_{3}{ξ} = λ. (1.65)
These coincide with the results contained in [4]. Thus, the above material regarding the ch.f. of a random process can be extended to a discrete random variable, whose ch.f. is defined by formula (1.62).
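Formula (1.64) and the first moment in (1.65) can be checked directly. A small sketch (λ = 2 and the series truncation point are arbitrary illustrative choices):

```python
import cmath
import math

lam = 2.0

def poisson_cf(v, terms=60):
    """Direct sum over the discrete law, as in (1.62): sum_i p_i * exp(j*v*i)."""
    return sum(math.exp(-lam) * lam ** i / math.factorial(i) * cmath.exp(1j * v * i)
               for i in range(terms))

v = 0.7
direct = poisson_cf(v)
closed = cmath.exp(lam * (cmath.exp(1j * v) - 1))  # formula (1.64)

# First moment via (1.7)-(1.8): m1 = theta'(0) / j, here by a central difference
h = 1e-5
m1_num = ((poisson_cf(h) - poisson_cf(-h)) / (2 * h * 1j)).real
print(abs(direct - closed), m1_num)
```

The direct sum over the law and the closed form agree to machine precision, and the numerical derivative recovers m_{1} = λ.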
Let us consider a complex number ξ_{i} = u_{i} + jv_{i}, which, in accordance with the notation adopted earlier, can be called a complex discrete quantity. Similarly to (1.24a), we obtain

θ_{1}(V_{m}) = θ_{1u}(V_{m}) m_{1}{exp(−V_{m}v_{i})}. (1.66)
Let us denote y_{i} = exp(−V_{m}v_{i}). Then we have

m_{1}{y} = Σ_{i=1}^{N}p_{i}y_{i}, (1.67)

where p_{i} is the probability with which the random variable takes the value y_{i}. If the probability density of this quantity is
W_{1}(y) = Σ_{i=1}^{N}p_{i}δ(y − y_{i}), (1.68)
and the probability density of the random variable v is equal to

W_{1}(v) = Σ_{i=1}^{N}p_{i}δ(v − v_{i}), (1.69)
then we get the equation

m_{1}{exp(−V_{m}v_{i})} = Σ_{i=1}^{N}p_{i}exp(−V_{m}v_{i}). (1.70)
Taking into account (1.67), (1.70), we have the ch.f. of the complex discrete random variable

θ_{1}(V_{m}) = θ_{1u}(V_{m})Σ_{i=1}^{N}p_{i}exp(−V_{m}v_{i}). (1.71)
When the probabilities are equal, then

p_{i} = 1/N, (1.72)

where N is the number of equiprobable values of the discrete random variable v. For this case, expression (1.71) takes the form

θ_{1}(V_{m}) = θ_{1u}(V_{m})(1/N)Σ_{i=1}^{N}exp(−V_{m}v_{i}). (1.73)
New formulas of the form (1.73) follow from consideration of other laws of distribution of a discrete random variable.
1.4. New property of the characteristic function
In the above analysis of the characteristic function, special attention was paid to its properties known from the literature, such as boundedness, measurability, and others. We have discovered a new property of the ch.f., which concerns filtering of noise in the probability space with the help of this function.
Expressions (1.16)–(1.19) form the basis of the filtering capacity of the characteristic function. The filtering capacity of the ch.f. has applied value [3, 7] and the following physical meaning.
The probability density of a random process and its ch.f. are connected by the pair of Fourier transforms (1.5), (1.15). Thus the ch.f. is the spectral density of the probability density (in short, the spectral density of the probabilities) of a random process in the probability domain, if we borrow the terminology of the Fourier transform of signals, where the transform domain is called the frequency domain. The ch.f. carries information about the probabilities of occurrence of the instantaneous values of a random process depending on the parameter V_{m}, which we earlier proposed to call the multiplicity of the values of the random process. The multiplicity can be an integer or a fractional real number. For integer multiplicities, V_{m} = ±1, ±2, …, ±∞, while for fractional multiplicities V_{m} takes any other values on the real axis from −∞ to +∞. This parallels the frequency domain, where, in the image of a line spectrum, the spectral lines are located at points with abscissas ω, 2ω, 3ω, …, nω, with ω the circular frequency of the signal. By analogy with the physical spectrum, in practical use of the ch.f. the multiplicity V_{m} is taken to be integer and positive.

At V_{m} = 0 the ch.f. θ_{1}(0) = 1 is the total probability. This total probability is distributed among the probabilities of the presence in the signal of instantaneous values with multiplicity one (V_{m} = 1), multiplicity two (V_{m} = 2), multiplicity three (V_{m} = 3), and so on. For example, for "white" noise with variance σ^{2} = 1 and a ch.f. of the form (1.12) with m_{1} = 0 we have: p_{1} = 0.6065 at V_{m} = 1; p_{2} = 0.1353 at V_{m} = 2; p_{3} = 0.0111 at V_{m} = 3, and so on to infinity, with probability p_{∞} = 0 at V_{m} = ∞. When filtering the additive mixture using expressions (1.16)–(1.19) with V_{m} = 1, all instantaneous noise values that are present with probability p_{1} = 0.6065 are "cut out" from it.
In this case, after filtering, instantaneous noise values remain in the additive mixture with a total probability of 0.3935, i.e. with a probability less than one. At the filter output, the additive mixture is different, it will be partially “cleaned” of noise. And, as a result of this, the signaltonoise ratio at the filter output increases.
One can continue filtering the "purified" additive mixture further, "cutting out" from it, at V_{m} = 2, the instantaneous noise values that are present with probability p_{2} = 0.1353. Then, at the output of the second filter, the signal-to-noise ratio increases still further, and the additive mixture is "cleaned" twice. This can be continued further, changing only the value of the multiplicity V_{m} = m∆V. When ∆V = 1, it is appropriate to use the well-known term "filter section" and define the new device as an m-link virtual filter.
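The probability weights quoted above follow directly from the Gaussian ch.f. (1.12). A one-line check, with m_{1} = 0 and σ = 1 as in the text:

```python
import math

# theta_1(V_m) = exp(-V_m^2 / 2) for a Gaussian process with m1 = 0, sigma = 1.
# The text reads theta_1(m) at integer multiplicity V_m = m as the probability
# weight "cut out" by the m-th filter link.
p = [math.exp(-m * m / 2) for m in (1, 2, 3)]
print([round(x, 4) for x in p])  # p1, p2, p3 at multiplicities 1, 2, 3
print(round(1 - p[0], 4))        # weight remaining after the first "cut"
```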
Simulation of virtual filters using the characteristic function with V_{m} = 1 showed good results. The works [3, 7] present quantitative and graphical data obtained using equation (1.19). As an example, Figure 1.3 shows graphs of noise and signal suppression when filtering an additive mixture, where σ_{0}, σ_{f} are the MSD of the noise at the input and output of the filter, respectively; σ_{s} is the MSD of the signal; h_{in} is the signal-to-noise ratio at the filter input. The designation N is taken from expressions (1.42), (1.43). An analysis of the curves in Figure 1.3 shows that the filter suppresses signal and noise differently. "White" noise is suppressed with the help of the ch.f. by a factor of 50–53, while the quasi-deterministic signal is attenuated by only a factor of 1–20. Thus, the signal-to-noise ratio at the output of a single-link virtual filter increased by an average of 30 times.
Figure 1.3. Filter suppression of noise (a) and signal (b):
1, 4 – N=5; 2 – N=10; 3 – N=50
2. QUASI-DETERMINISTIC SIGNALS
To implement statistical modulation, quasi-deterministic signals are required, which, by definition [4, p. 171], "... are described by time functions of a given type, containing one or more random parameters that do not depend on time." From this class of signals, the model and probabilistic characteristics of a quasi-deterministic signal with the arcsine distribution law are quite fully presented in the literature. Information about other quasi-deterministic signals is available only in fragments.
2.1. Signal model with arcsine distribution law
Let us consider a signal with a mathematical model of the form

u(t) = a cos(ω_{0}t + η), (2.1)

where a is the constant amplitude of the signal; ω_{0} is the constant circular frequency of the signal; η is a random variable (the initial phase angle) with a uniform distribution law within (−π, π]; u(t) are the instantaneous values of the signal, obeying the arcsine distribution law. In the literature this signal is called quasi-deterministic.
We begin the description of the probabilistic characteristics with the probability density of the instantaneous values of the signal, which, by definition [4], is equal to

W_{1}(x) = 1/[π(a^{2} − x^{2})^{1/2}], |x| < a, (2.2)

where σ^{2} = a^{2}/2 is the dispersion (average power) of the signal. Here and below, the subscript 1 denotes the one-dimensionality of the function. The graph of function (2.2) is shown in Figure 2.1.
Let us take a look at the shape of the graph. It looks like a horseshoe and is centered around zero, because the mathematical expectation of the signal is zero. At the edges the function tends to infinity as the argument approaches the amplitude of the signal. It is as if the probability of occurrence of the signal amplitude were equal to one, as when the probability density of a constant value is depicted by a delta function. This is explained by the fact that the signal model (2.1) is a mathematical abstraction. A physical process has amplitude fluctuations [8], which are random. And as
Figure 2.1. Signal probability density
a result of this, the specified equality a=const is violated. The arcsine law was established in 1939 by the mathematician P. Levy for random walks of a point on a straight line and later adapted for signal (2.1).
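The arcsine model is easy to check numerically. The following sketch is illustrative only (the amplitude U_{0} = 1 and the sample size are assumptions, not values from the text): it samples the uniform phase of (2.1), confirms that the dispersion approaches U_{0}^{2}/2, and shows the "horseshoe" rise of the density toward the edges.

```python
import numpy as np

rng = np.random.default_rng(0)
U0 = 1.0                                     # assumed signal amplitude
eta = rng.uniform(-np.pi, np.pi, 200_000)    # uniform initial phase of (2.1)
x = U0 * np.sin(eta)                         # instantaneous values at a fixed instant

print(round(x.var(), 3))                     # dispersion ≈ U0**2 / 2

# The arcsine density rises toward the edges x = ±U0 (the "horseshoe" of Fig. 2.1)
hist, _ = np.histogram(x, bins=50, range=(-U0, U0), density=True)
print(hist[0] > hist[25], hist[-1] > hist[25])
```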
The characteristic function of the signal, in accordance with (1.5), is equal to
Θ_{1}(V_{m}) = J_{0}(V_{m}U_{0}), (2.3)
where J_{0}(∙) is the zero-order Bessel function of the first kind (see Fig. 2.2). For a random variable η with probability density W_{1}(η) = 1/2π, the ch.f. will be [4]
Θ_{η}(V_{m}) = sin(πV_{m})/(πV_{m}). (2.4)
For a constant signal amplitude, the ch.f. was defined earlier with the help of expression (1.63), which we repeat with our designations
Θ(V_{m}) = exp(jV_{m}U_{0}). (2.5)
The signal correlation function (2.1) will be
B(τ) = ∫_{−π}^{π} u(t)u(t + τ)W(η)dη = (U_{0}^{2}/2) cos ω_{0}τ, (2.6)
where W(η) is the probability density of the random phase η and τ is the time shift.
The dependence of the correlation function on the time shift turns out to be harmonic; it is shown in Figure 2.3.
Figure 2.2. Ch.f. of the signal (real and imaginary parts shown separately)
Figure 2.3. Correlation function of a signal with a frequency of 30 Hz
Let us proceed to the analysis of the power spectral density (energy spectrum) of the signal (2.1). Let us write down its energy spectrum
S(ω) = (πU_{0}^{2}/2)[δ(ω + ω_{0}) + δ(ω − ω_{0})]. (2.7)
The spectrum (2.7) turned out to be a line spectrum: it contains one spectral component at the frequency −ω_{0} and one at the frequency +ω_{0}, where δ(∙) is the delta function. Passing to the physical spectrum, i.e. to the spectrum in the region of positive frequencies, we get
S(ω) = πU_{0}^{2} δ(ω − ω_{0}). (2.8)
The energy spectrum as a function of frequency is shown in Figure 2.4.
Figure 2.4. Signal power spectral density at 30 Hz
To clarify, Figures 2.1–2.4 show estimates of the probabilistic characteristics of the signal (2.1), measured using the virtual instrument "Characteriometer" [3, 9]. The signal (2.1) was taken from the output of a G3-54 generator.
^{1)} The instantaneous value of a realization of a random process is called the sample value and is denoted by the symbol ξ_{i}(t_{n}), the value of the i-th realization at the time t_{n}.
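The ch.f. (2.3) can also be verified by time-averaging a single realization of (2.1), in the spirit of the "Characteriometer" measurements, though the sketch below is only an illustration: the 30 Hz frequency, amplitude U_{0} = 1, and sampling rate are assumptions, not parameters of the actual instrument.

```python
import numpy as np
from scipy.special import j0

rng = np.random.default_rng(1)
U0, f0, fs = 1.0, 30.0, 10_000.0     # assumed amplitude, 30 Hz carrier, sampling rate
t = np.arange(0, 2.0, 1 / fs)        # two seconds = 60 full periods
u = U0 * np.sin(2 * np.pi * f0 * t + rng.uniform(-np.pi, np.pi))  # realization of (2.1)

# Time-averaged estimate of the ch.f. versus the theoretical J0(Vm * U0) of (2.3);
# the imaginary part averages to zero because the signal is centered
for Vm in (0.5, 1.0, 2.0):
    est = np.cos(Vm * u).mean()
    print(round(est, 2), round(j0(Vm * U0), 2))
```

Averaging over an integer number of carrier periods is what makes the time average match the phase average here.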
2.2. Signal model with the Veshkurtsev distribution law
Let's repeat the mathematical model of the signal (2.1)
u(t)= a sin (ω_{0}t +η), (2.9)
where a, η are random variables (the amplitude and the initial phase shift angle, respectively); ω_{0} is the constant circular frequency of the signal; u(t) denotes the instantaneous signal values, distributed according to the Veshkurtsev law [10]. There is no description of such a signal in the literature.
Let the signal amplitude (2.9) be distributed according to the Gauss law (normal law)
W_{1}(a) = (1/(σ√(2π))) exp(−a^{2}/(2σ^{2})), (2.10)
and the random phase according to a uniform law within −π…π, where σ^{2} is the amplitude dispersion. Then the instantaneous values of the signal obey the Veshkurtsev distribution
W_{1}(x) = (1/(√2 π^{3/2} σ)) exp(−x^{2}/(4σ^{2})) K_{0}(x^{2}/(4σ^{2})), (2.11)
where K_{0}(∙) is the cylindrical function of the imaginary argument (the Macdonald function) [11]. The Macdonald function at x = 0 tends asymptotically to infinity from both sides of the y-axis (Fig. 2.5). In this respect, Veshkurtsev's law resembles the arcsine law (Fig. 2.1), in which the probability density tends to infinity at the edges as the argument approaches the signal amplitude. Veshkurtsev's law is thus a kind of transformed copy of the arcsine law and is therefore also a mathematical abstraction. A physical process with such a law does not exist, and only digital technology makes it possible to realize it in practice in the form of a random-process sensor.
Figure 2.5. Signal Probability Density
Since this law was obtained here for the first time, we shall agree to call it the Veshkurtsev law, after the author who first wrote it down analytically and applied it in practice to new problems [10, 12, 13]. Naturally, all the properties required of a statistical law in probability theory have been verified by the author, and they hold. Applying the Fourier transform to this law, we obtain the ch.f. of the signal
Θ_{1}(V_{m}) = exp(−σ^{2}V_{m}^{2}/4) I_{0}(σ^{2}V_{m}^{2}/4), (2.12)
where I_{0}(∙) is the zero-order Bessel function of the imaginary argument. A similar transformation of the distribution law (2.10) gives the ch.f.
Θ_{a}(V_{m}) = exp(−σ^{2}V_{m}^{2}/2), (2.13)
of signal amplitude with a Gaussian distribution law. Ch.f. for the random phase of the signal will be
Θ_{η}(V_{m}) = sin(πV_{m})/(πV_{m}). (2.14)
The quasi-deterministic signal (2.9) is centered (its mathematical expectation is zero); it has the dispersion (average power)
, (2.15)
where … denotes the generalized hypergeometric series [11] and σ^{2} is the signal amplitude dispersion.
Concluding the analysis of the probabilistic characteristics of the quasi-deterministic signal (2.9), we clarify that its instantaneous values are distributed according to the Veshkurtsev law, the amplitude according to the Gauss law, and the phase according to the uniform law.
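Since the law (2.11) exists so far only as a mathematical object, a digital sensor of such a signal reduces to drawing a Gaussian amplitude and a uniform phase. The sketch below is illustrative (σ = 1 and the sample size are assumptions): it checks that the dispersion of the samples is σ^{2}/2 and that the empirical ch.f. agrees with the closed form exp(−σ^{2}V_{m}^{2}/4)·I_{0}(σ^{2}V_{m}^{2}/4), which follows from averaging J_{0}(V_{m}a) over the Gaussian amplitude and is used here for (2.12).

```python
import numpy as np
from scipy.special import i0

rng = np.random.default_rng(2)
sigma = 1.0                                  # assumed amplitude MSD
a = rng.normal(0.0, sigma, 500_000)          # Gaussian amplitudes, law (2.10)
eta = rng.uniform(-np.pi, np.pi, a.size)     # uniform phases
x = a * np.sin(eta)                          # instantaneous values of signal (2.9)

print(round(x.var(), 2))                     # dispersion ≈ sigma**2 / 2

# Empirical ch.f. versus exp(-sigma^2 Vm^2/4) * I0(sigma^2 Vm^2/4)
Vm = 1.0
b = sigma**2 * Vm**2 / 4
print(round(np.cos(Vm * x).mean(), 2), round(np.exp(-b) * i0(b), 2))
```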
The signal correlation function (2.9) has the form
(2.16)
The energy spectrum of the signal (2.9) coincides with the spectrum (2.7)
. (2.17)
In the transition to the physical spectrum, i.e. to the spectrum in the region of positive frequencies, we obtain
. (2.18)
The physical spectrum of the signal with the Veshkurtsev distribution law contains only one spectral component, located on the frequency axis at the point with abscissa ω_{0}; when ω_{0} = 0 it coincides with the origin.
2.3. Signal model with cosine distribution law
Let's repeat the mathematical model of the signal (2.1)
, (2.19)
where a, η are random variables (the amplitude and the phase shift angle, respectively), each with its own distribution law; ω is the constant circular frequency; u(t) denotes the instantaneous signal values, obeying the distribution law
at , where > 0. (2.20)
Here and below, the subscript 1 denotes a one-dimensional function. The statistical law (2.20) is given in the book [5, p. 46] without a title or additional explanation; there is no information about its use in the literature. Apparently, we are the first to pay attention to this statistical law. For what follows, we take the value … in formula (2.20); then …, and expression (2.20) takes the form
(2.21)
at … . The statistical law (2.21) has all the properties prescribed in probability theory. We will henceforth call it the cosine law. There are no quantitative parameters in the mathematical description of the cosine law. Note that this law is centered: its mathematical expectation is zero, and its dispersion is
(2.22)
It is always constant and depends only on the bounds of the variable's values. In this respect, the law is inferior to the Gauss law (normal law), whose mathematical description includes the dispersion and the mean square deviation (MSD).
If the instantaneous values of the quasi-deterministic signal (2.19) are distributed according to the cosine law, then the signal amplitude will be distributed according to the law [14]
at , (2.23)
where J_{0}(∙) is the zero-order Bessel function of the first kind and Γ(∙) is the gamma function. Since this law was obtained by us for the first time, we will henceforth call it the Bessel law, by analogy with the function of the same name that enters it. Using the Bessel law, we determine the initial moments of the distribution of the signal amplitude (2.19), obtaining the initial moment of the first order (the expectation) [14]
(2.24)
and the initial moment of the second order [14]
(2.25)
where … is the Lommel function [11] and J_{k}(∙) is the Bessel function of the k-th order of the first kind [11].
Turning to the random phase of the signal (2.19), we note that mathematical calculation yields a uniform law, according to which the phase is distributed within … .
The characteristic function (ch.f.) of the centered quasi-deterministic signal (2.19) is [14]
. (2.26)
The work [2] describes the properties of the ch.f. that the function (2.26) satisfies. In particular, one property of the ch.f. concerns the signal distribution law: the ch.f. of a signal with a noncentered distribution law is equal to the ch.f. obtained for the signal with the centered distribution law, multiplied by the exponential exp(jV_{m}e_{0}), where e_{0} is the expectation of the signal. Let us use this property and write the function (2.26) for a noncentered quasi-deterministic signal, i.e. a signal that has an expectation. As a result, we will have
, (2.27)
where A(V_{m}), B(V_{m}) are the real and imaginary parts of the ch.f., respectively. In contrast to (2.27), the ch.f. (2.26) has only a real part.
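The shift property invoked above is a one-line identity for the empirical ch.f., and a quick numerical check makes it concrete. The offset e_{0} = 0.5 and the parameter V_{m} = 1 are arbitrary choices for this sketch, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.sin(rng.uniform(-np.pi, np.pi, 100_000))  # a centered signal, zero expectation
e0, Vm = 0.5, 1.0                                # assumed offset and ch.f. parameter

def chf(y):
    """Empirical characteristic function M{exp(j*Vm*y)}."""
    return np.exp(1j * Vm * y).mean()

lhs = chf(x + e0)                      # ch.f. of the noncentered signal
rhs = chf(x) * np.exp(1j * Vm * e0)    # centered ch.f. times exp(j*Vm*e0)
print(abs(lhs - rhs) < 1e-9)           # the shift property holds sample by sample
```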
Passing to the ch.f. of the random amplitude of the centered signal (2.19), we have [14]
, (2.28)
where J_{1}(∙) is the first-order Bessel function of the first kind and J_{2}(∙) is the second-order Bessel function of the first kind [11]. Expression (2.28) is a particular solution, valid for the value …, while the general solution for the ch.f. of the signal amplitude (2.19) is still being sought. Since the expectation of the signal amplitude (2.19) is not equal to zero, the ch.f. (2.28) is a complex function.
For the random phase of the signal (2.19), the ch.f. is known [4, p. 162] and is equal to
, (2.29)
it is a real function, since the phase distribution law is centered.
Concluding the analysis of the probabilistic characteristics of the quasi-deterministic signal (2.19), we clarify that its instantaneous values are distributed according to the cosine law, the amplitude according to the Bessel law, and the phase according to the uniform law.
The signal correlation function (2.19) will be
, (2.30)
where W(y) is the amplitude probability density (2.23); W(η) is the probability density of the random phase η; … is the initial moment of the second order (2.25). Let us proceed to the analysis of the power spectral density (energy spectrum) of the signal (2.19). Let us write the energy spectrum of the signal
. (2.31)
The spectrum (2.31) turned out to be a line spectrum: it contains one spectral component at the frequency −ω and one at the frequency +ω, where δ(∙) is the delta function. Passing to the physical spectrum, i.e. to the spectrum in the region of positive frequencies, we obtain
(2.32)
Like the signal (2.9), the signal distributed according to the cosine law has a physical spectrum containing only one spectral component, located on the frequency axis at the point with abscissa ω; when ω = 0 it coincides with the origin of coordinates.
2.4. Signal model with the Tikhonov distribution law
Let's repeat the mathematical model of the signal (2.1)
, (2.33)
where a, η are random variables (the amplitude and the phase shift angle, respectively), each with its own distribution law; ω is the constant circular frequency; u(t) denotes the instantaneous signal values, obeying the Tikhonov distribution law
within −π…π. (2.34)
The authors of the book [5, p. 46] call the statistical law (2.34), without additional explanation, the distribution of V.I. Tikhonov, a well-known scientist who first proposed it to describe the phase of self-oscillations of a synchronized generator in a phase-locked loop system. Apparently, we are the first to use this statistical law in the formation of a quasi-deterministic signal.
Similarly to the cosine law (2.20), the mathematical model (2.34) contains no quantitative parameters of the distribution law, except for the coefficient D, which determines the shape of the probability density graph, since it enters the Bessel function I_{0}(D). Tikhonov's law is centered, and its dispersion is determined by [15, p. 334]
, (2.35)
it is constant and depends on the coefficient D; for example, at D = 1 the dispersion is equal to …, where I_{n}(∙) is the Bessel function of the imaginary argument of the n-th order of the first kind.
The signal amplitude (2.33) is distributed according to the law described by the probability density of the form [16]
. (2.36)
Since the statistical law (2.36) was obtained for the first time, we will call it the Bessel–Lommel law, by analogy with the known functions that enter it. The properties of the law (2.36) prescribed in probability theory have been verified by us, and they hold. The Bessel–Lommel law describes the distribution of the random signal amplitude (2.33) within … .
The expectation of a random signal amplitude is [16]
, (2.37)
and the initial moment of the second order of the signal amplitude will be
. (2.38)
In this case, the dispersion of the signal amplitude will be … . The designations in expressions (2.37) and (2.38) were explained earlier, in the descriptions of formulas (2.24), (2.25), (2.28), and (2.35).
From the analysis of Tikhonov's law, it follows that the random phase of the signal (2.33) is distributed according to a uniform law within −π…+π.
The characteristic function of the signal (2.33) is the Fourier transform of the probability density (2.34)
. (2.39)
The properties of the ch.f. (2.39) depend on the properties of the Bessel function of the imaginary argument of the …-th order of the first kind. The graph of this Bessel function is shown in the handbook [17, p. 196]. For each value of the parameter, the graph of the function is different; however, for the value … the function … . Thus, it can be argued that the properties of the ch.f. (2.39) hold. If the signal (2.33) has an expectation, then its ch.f. will be
. (2.40)
Concluding the analysis of the probabilistic characteristics of the quasi-deterministic signal (2.33), let us clarify that its instantaneous values are distributed according to the Tikhonov law, the amplitude according to the Bessel–Lommel law, and the phase according to the uniform law.
The signal correlation function (2.33) will be [16]
, (2.41)
where W(y) is the amplitude probability density (2.36); W(η) is the probability density of the random phase η; … is the initial moment of the second order (2.38). Let us proceed to the analysis of the power spectral density (energy spectrum) of the signal (2.33). Let us write the energy spectrum of the signal
. (2.42)
The spectrum (2.42) turned out to be a line spectrum: it contains one spectral component at the frequency −ω and one at the frequency +ω, where δ(∙) is the delta function. Passing to the physical spectrum, i.e. to the spectrum in the region of positive frequencies, we obtain
(2.43)
Similarly to what was said earlier, the physical spectrum of the signal distributed according to Tikhonov's law contains only one spectral component, located on the frequency axis at the point with abscissa ω; when ω = 0 it coincides with the origin of coordinates.
Completing the stage of forming mathematical models of quasi-deterministic signals for statistical modulation, we note that three of them were recorded here for the first time with all their probabilistic characteristics, including the characteristic function. The quasi-deterministic signal with the arcsine distribution law already exists physically as a source of oscillations and can be used for statistical modulation. The remaining quasi-deterministic signals can at present be implemented only as computer programs, which will later serve as signal sensors within digital technologies. In our opinion, it is premature to speak of creating new sources of physical oscillations. However, this class of random processes must be continually replenished with other quasi-deterministic signals.
3. NEW GENERATION MODEMS
Modems with amplitude, phase, and frequency modulation are widely used in communication technology, but they have low noise immunity when operating in a noisy channel, with amplitude modulation the least protected from interference. Currently, attempts are made to increase the noise immunity of modems by combating interference, inventing various interference-suppression devices and blocks that are, at times, far more complicated than the modems themselves.
We offer another direction for developing modulation theory and another way of building modems, based on complicating the mathematical model of the signal and modulating its characteristics, in particular the characteristic function. The ch.f. is defined in the probability space proposed in 1933 by academician A. Kolmogorov in his axiomatization of probability theory. Probability theory thus serves as the mathematical tool for describing all signal transformations in the probability space. For a signal with mathematical model (2.1), (2.9), (2.19), or (2.33), the characteristic function (ch.f.) is strictly, i.e. fundamentally, defined. Thus, by introducing random variables into the models (2.1), (2.9), (2.19), (2.33), a transition is achieved to the model of the so-called quasi-deterministic signal, an element of statistical radio engineering. At first glance, replacing a deterministic oscillation model with a quasi-deterministic signal model is a fairly simple operation, but the modem noise immunity after replacing the oscillation model turns out to be limiting, in the sense that no errors occur when receiving data.
3.1. The first method of signal modulation
We will consider a new modulation method [18], in which all signal parameters are "hidden" inside the expectation operator M{∙}, as a result of which we obtain the function
Θ_{1}(V_{m}) = M{exp(jV_{m}u(t))}, (3.1)
widely known in mathematics, physics, and statistical radio engineering. The mathematician A. Lyapunov proposed this function and published its description in 1901 [19]. In the literature [4], it is called the characteristic function. Applying L. Euler's formula, let us write
Θ_{1}(V_{m}) = M{cos(V_{m}u(t))} + jM{sin(V_{m}u(t))} = A(V_{m}) + jB(V_{m}), (3.2)
where A(V_{m}), B(V_{m}) – real and imaginary parts of the characteristic function;
V_{m} is the parameter of the characteristic function.
By analogy with cosmonautics, the characteristic function (ch.f.) is a "spacesuit" for a signal. It serves as a fundamental probabilistic characteristic of a signal, for example of the quasi-deterministic oscillation (2.1) with constant amplitude and frequency, where η is a random phase shift angle with a uniform distribution law within −π…π. The physical meaning of the ch.f. was studied in [2], where it is shown that it is the spectral density of the probabilities of the instantaneous values of the signal (2.1). The ch.f. depends on the probability density of the signal; consequently, each model of a quasi-deterministic signal has its own fundamental ch.f., which has many positive properties: it is bounded, measurable, filters noise, and has known limiting values. Its other remarkable properties are described in [2]. Based on the advantages of the ch.f., we propose a method for modulating this function.
A ch.f. modulation method in which a constant voltage e_{0} is multiplied by a telegraph signal s(t) that takes the value either "1" or "0", after which the product e_{0}s(t) is summed with the centered quasi-deterministic signal (2.1), whose expectation is zero; the ch.f. of the transformed quasi-deterministic signal is thereby modulated according to the law:
for s(t)=0 to obtain functions of the form
A(V_{m},t) = J_{0}(V_{m}U_{0},t), B(V_{m},t) = 0 ; (3.3)
for s(t)=1 to obtain functions of the form
A(V_{m},t) = J_{0}(V_{m}U_{0},t) cos (V_{m} e_{0}), B(V_{m},t) = J_{0}(V_{m}U_{0},t) sin (V_{m} e_{0}), (3.4)
where J_{0}(∙) is the zero-order Bessel function; U_{0} is the signal amplitude; V_{m} is the ch.f. parameter; at V_{m} = 1 the functions A(1,t) and B(1,t) change in antiphase. Incidentally, the dependence of the ch.f. on time appeared because of the signal modulation, since the modulated signal is a nonstationary process.
In what follows, we propose to call this new type of modulation statistical modulation (SSK, statistical shift keying).
A block diagram of the modulator is shown in Figure 3.1; it contains a multiplier 1 and an adder 2. Timing diagrams explaining its operation are shown in Figure 3.2. The figures can be explained as follows. In accordance with the definition of the modulation method, a noncentered quasi-deterministic signal is formed
u(t) = U_{0} sin(ω_{0}t + η) + e_{0}s(t), (3.5)
with a ch.f. of the form [2]
Θ_{1}(V_{m}) = J_{0}(V_{m}U_{0}) exp(jV_{m}e_{0}). (3.6)
Let the telegraph signal be a sequence of logical zeros and ones (Fig. 3.2a). If s(t) = 0, then the ch.f. has only a real part, and its imaginary part is equal to zero [2], i.e. A(V_{m},t) = J_{0}(V_{m}U_{0},t), B(V_{m},t) = 0. In this case, with V_{m} = 1, we have A(1,t), B(1,t) in Figure 3.2d,e. When s(t) = 1, the ch.f. is equal to (3.6). Then we get
A(V_{m},t) = J_{0}(V_{m}U_{0},t) cos (V_{m} e_{0}), B(V_{m},t) = J_{0}(V_{m}U_{0},t) sin (V_{m} e_{0}).
At V_{m}=1 we have functions
A(1,t) = J_{0}(U_{0},t) cos (e_{0} ), B(1,t) = J_{0}(U_{0},t) sin (e_{0}), (3.7)
which are shown in Figure 3.2d,e. These functions change according to the law of the telegraph signal. Therefore, the ch.f. is modulated by the telegraph signal, and the functions A(1,t), B(1,t) change in antiphase.
Figure 3.1. Signal modulator circuit
In our opinion, the structure of the modulator turned out to be simple: it contains no complex nodes or oscillation sources. The quasi-deterministic signal (2.1) is present at the output of any self-oscillator, up to and including atomic frequency standards. It is known from the review [20] that both they and quartz oscillators have short-term phase instability, in other words phase fluctuations, and thus fall under the signal model (2.1). Moreover, the characteristics in Figures 2.1–2.4, measured experimentally for the signal at the output of a standard generator, confirm this. The constant voltage, by analogy with computers and cell phones, can be obtained from a battery. In addition, the amplitude of the high-frequency oscillation in the modulator does not change; as a result, the power amplifier of the transmitting device retains linear input-output characteristics over a narrow range of input signals. Taken together, all this characterizes the modulator positively.
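A minimal software model of the first modulation method can illustrate relations (3.3) and (3.4). Every numeric value here (carrier 30 Hz, sampling rate, bit duration, e_{0} = 0.5) is an assumption for the sketch, not a design value from the text; the ch.f. parts are estimated as simple time averages over one symbol.

```python
import numpy as np
from scipy.special import j0

rng = np.random.default_rng(4)
U0, e0, Vm = 1.0, 0.5, 1.0        # assumed amplitude, offset voltage, ch.f. parameter
f0, fs, Tb = 30.0, 3000.0, 1.0    # assumed carrier 30 Hz, sampling rate, bit duration
n = int(Tb * fs)                  # samples per telegraph symbol

def modulate(bits):
    """Multiply e0 by s(t) and add the centered quasi-deterministic signal (2.1)."""
    t = np.arange(0, Tb, 1 / fs)
    return np.concatenate([U0 * np.sin(2 * np.pi * f0 * t + rng.uniform(-np.pi, np.pi))
                           + e0 * b for b in bits])

u = modulate([0, 1])
# Imaginary part of the ch.f. per symbol: 0 for s=0, J0(Vm*U0)*sin(Vm*e0) for s=1,
# in agreement with (3.3) and (3.4)
B0 = np.sin(Vm * u[:n]).mean()
B1 = np.sin(Vm * u[n:]).mean()
print(round(B0, 2), round(B1, 2), round(j0(Vm * U0) * np.sin(Vm * e0), 2))
```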
Fig. 3.2. Timing waveform diagram in the modulator
However, this modulation method, in which the signal contains a constant component, is used only in wired communication and not in radio communication, since antenna-feeder devices (AFD) do not pass the constant component of the signal. For radio communication, a method of amplitude shift keying (AM) of the signal has been developed, which can be used in statistical modulation, since the ch.f. of a quasi-deterministic signal will change similarly to the amplitude of a deterministic oscillation. In this case, the amplitude-time diagram in Figure 3.2c will change and take the form shown in Figure 3.2e. The power amplifier of the transmitting device will then change from a linear to a nonlinear mode of operation.
Under amplitude keying of the centered quasi-deterministic signal (2.1), we obtain only the real part of the ch.f., which changes in antiphase with the telegraph signal and has the form shown in Figure 3.2d. The imaginary part of the ch.f. of the centered signal (2.1) is always zero. Schemes of modulators with amplitude keying of the signal are described in detail in textbooks [21] and monographs [22].
Figure 3.2f. Time diagrams of changes in telegraph and AM signals
In passing, let us present in more detail the diagram of Figure 3.2c for the telegraph signal shown in Figure 3.2e. As a result, we obtain the noncentered quasi-deterministic signal with amplitude keying shown in Figure 3.2g.
Figure 3.2g. Noncentered amplitudeshift keying signal
3.2. The second method of signal modulation
Let a quasi-deterministic signal (2.9) be modulated, for which the random variable a is distributed according to the normal law with the quantitative parameters e_{0} (expectation) and σ^{2} (dispersion), and the random variable η is distributed uniformly within 0…2π. Then we have
u(t) = a sin(ω_{0}t + η). (3.8)
The characteristic function of A. Lyapunov for the quasi-deterministic signal (2.9), obtained by us earlier using the well-known expression [4, p. 263]
(3.9)
and tables [11], taking into account the distribution law of the random variable a, at V_{m} > 0 and e_{0} = 0 has the form (2.12). The ch.f. has the properties [2]: for example, if the law W_{1}(x) corresponds to the ch.f. Θ(V_{m}), then the law W_{1}(x − e_{0}) corresponds to the ch.f. Θ(V_{m})exp(jV_{m}e_{0}). Therefore, for a law with expectation e_{0} ≠ 0, the ch.f. (2.12) is transformed into the expression
Θ_{1}(V_{m}) = exp(−σ^{2}V_{m}^{2}/4) I_{0}(σ^{2}V_{m}^{2}/4) exp(jV_{m}e_{0}). (3.10)
The ch.f. (3.10), at V_{m} = const (excluding zero and infinity), depends on the variables e_{0}, σ^{2}. Consequently, by changing the expectation and the amplitude dispersion of the quasi-deterministic signal (2.9) with a telegraph signal s(t), one can modulate the ch.f. of this signal. By acting on the amplitude in this way, twelve variants of the considered ch.f. modulation method can be implemented. The new method, so-called statistical modulation, using a characteristic function, a quasi-deterministic signal, and a telegraph message, consists in changing the quantitative parameters of the distribution law of the amplitude of the quasi-deterministic signal in accordance with the change of the telegraph message containing a sequence of logical "0" and logical "1".
A block diagram of the modulator is shown in Figure 3.3 [13]; it contains an interface circuit (IC), a centered quasi-deterministic signal sensor (c.q.s.), and a settings block (SB). The settings block stores in memory the values of the quantitative parameters of the amplitude distribution law of the quasi-deterministic signal, which are written to the signal sensor through the interface circuit. The parameter-writing algorithm involves the telegraph signal s(t): the settings depend on its logical "0" and "1". For example, when a logical «0» arrives, the setting σ_{0}^{2} = 1 is selected, and when a logical «1» arrives, σ_{1}^{2} = 0.0009; in both cases the setting e_{0} = 0. Then at the modulator output we obtain a modulated centered quasi-deterministic signal, which is shown in Figure 3.4. In form, the time diagram of the signal resembles a random process obeying Veshkurtsev's statistical law, with a probability density of the form (2.11).
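The settings quoted above (σ_{0}^{2} = 1, σ_{1}^{2} = 0.0009, e_{0} = 0) are enough to sketch the sensor-based modulator in software. The per-symbol sample count is an assumption; the signal samples are generated directly from the amplitude and phase laws of Section 2.2.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 2000                              # samples per telegraph symbol (assumed)
var0, var1, e0 = 1.0, 0.0009, 0.0     # settings from the text

def modulate(bits):
    """Second method: the telegraph signal switches the amplitude dispersion of (2.9)."""
    out = []
    for b in bits:
        sigma = np.sqrt(var1 if b else var0)
        a = rng.normal(e0, sigma, N)          # Gaussian amplitudes, law (2.10)
        eta = rng.uniform(-np.pi, np.pi, N)   # uniform phases
        out.append(a * np.sin(eta))           # centered quasi-deterministic samples
    return np.concatenate(out)

u = modulate([0, 1, 0])
# Each symbol's measured dispersion tracks the setting: sigma**2 / 2 (Section 2.2)
print([round(seg.var(), 2) for seg in np.split(u, 3)])
```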
Figure 3.3. Modulator circuit
Figure 3.4. Timing diagram of the modulated signal
In the modulator circuit of Figure 3.3 there is no source of physical oscillations; instead, a centered quasi-deterministic signal sensor (c.q.s.) is used. This sensor is built as a computer program using digital technology. The same can be said of the other blocks of the modulator diagram, which are separate files of the overall program.
The instantaneous values of the centered quasi-deterministic signal (2.9) vary within the range ±3σ and can reach large values at σ = 1, where σ is the mean square deviation (MSD) of the signal amplitude. This, in turn, places severe demands on the linearity of the input and output characteristics of the power amplifier of the transmitting device.
3.3. Combined signal modulator
The first and second signal modulation methods are each implemented with their own modulator. However, it is possible to combine the two modulators [16], as shown in Figure 3.5. The combined modulator circuit includes a multiplier 1, an adder 2, a settings block 3, and a generator or sensor 4 of a centered quasi-deterministic signal. In contrast to the modulator of Figure 3.3, the settings block contains only the energy parameters of the generator (or sensor) oscillation in the form of the signal dispersions σ_{c}^{2}, σ_{0}^{2}, σ_{1}^{2}, which are set by the logical "1" and "0" of the telegraph signal.
Figure 3.5. Modulator circuit
The expectation is introduced using the telegraph signal s(t) through another channel, containing multiplier 1 and adder 2, which receives the centered quasi-deterministic signal from the sensor (or generator). As a result of these transformations, at the modulator output we obtain the noncentered quasi-deterministic signal shown in Figure 3.6.
а)
b)
Figure 3.6. Timing diagrams of the modulated signal
The amplitude-time dependence of the fluctuation in Figure 3.6a contains no expectation (e_{0} = 0), while in Figure 3.6b the quasi-deterministic signal (2.9) has the expectation e_{0} = 1.
As a result, the characteristic function of the signal (2.9) is modulated, while the dispersion σ_{0}^{2} = σ_{1}^{2} of the signal (2.9) is constant in both pictures, where σ_{0}^{2}, σ_{1}^{2} are the dispersions of the signal amplitude (2.9) when a logical «0» or a logical «1», respectively, arrives at the modulator in the telegraph signal.
The amplitude-time dependence of the oscillation at the modulator output with the noncentered quasi-deterministic signal (3.5) is shown in Figure 3.2c. Recall that in this case adder 2 of the modulator receives the quasi-deterministic signal (2.1) with constant dispersion σ_{c}^{2} from generator 4, which here replaces the sensor.
3.4. Two-channel signal demodulator
To demodulate the signal, we propose a new method [23] that uses analog-to-digital signal conversion, multiplication of the discrete instantaneous signal values by the parameter V_{m}, and a functional transformation to obtain the sine and cosine functions of the products, followed by accumulation of the values of these functions over a time interval equal to the duration of the logical "0" or logical "1" symbol. After that, the estimate of the imaginary part of the ch.f. is calculated from the sine function, and the estimate of the real part from the cosine function; their current values are compared with thresholds, and the decision is made in accordance with the fulfillment of the following inequalities:
1) if …, then it is considered that the logical «0» is accepted;
2) if …, then it is considered that the logical «1» is accepted;
3) if …, then it is considered that the logical «0» is accepted;
4) if …, then it is considered that the logical «1» is accepted.
The block diagram of the demodulator is shown in Figure 3.7. It contains an analog-to-digital converter (ADC) 1, a multiplier 2, functional converters 3 and 4, accumulating averaging adders 5 and 6, threshold devices 7 and 8, and an inverter 9. The principle of operation of the demodulator is as follows. The demodulator input receives, for example, the signal (3.8). After conversion in the ADC, the discrete instantaneous values of the signal are multiplied by the parameter V_{m}, and the products are transformed to obtain the sine function and the cosine function. The accumulating averaging adders 5 and 6 work simultaneously: adder 5 accumulates the current values of the sine function, and adder 6 the current values of the cosine function. When a synchronization pulse appears at the strobe inputs of the adders, the estimates of the real and imaginary parts of the ch.f. appear at their outputs:
A*(V_{m}) = (1/N) Σ_{n=1}^{N} cos[V_{m}u(n∆t)], (3.11)
B*(V_{m}) = (1/N) Σ_{n=1}^{N} sin[V_{m}u(n∆t)], (3.12)
where N is the sample size of instantaneous signal values and ∆t is the signal sampling interval. The properties of the estimates (3.11), (3.12) were studied in [2], where it was found that for N >> 1 they are asymptotically consistent, efficient, and unbiased.
The values of the ch.f. estimates (3.11), (3.12) at V_{m} = 1 are compared in threshold devices 7 and 8 with the thresholds П_{1c}, П_{2k}. For convenience of analysis, the series connection of blocks 3, 5, 7 will be called the sine channel of the demodulator, and the series connection of blocks 4, 6, 8, 9 the cosine channel. Each channel has its own output, so the demodulator has two outputs. At the output of the cosine channel, the telegraph signal is received inverted with respect to the original; therefore, inverter 9 is connected at that channel's output. If the above inequalities at V_{m} = 1 are not met, errors occur in the decision on the received telegraph symbol.
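The decision procedure of the two-channel demodulator can be sketched end to end for a noiseless wired channel. The modulator follows the first method (constant offset e_{0}s(t)); the carrier frequency, sampling rate, and threshold value are assumptions chosen so that B* ≈ 0 for a logical "0" and B* ≈ J_{0}(V_{m}U_{0})sin(V_{m}e_{0}) for a logical "1".

```python
import numpy as np

rng = np.random.default_rng(6)
U0, e0, Vm = 1.0, 0.5, 1.0          # assumed amplitude, offset, ch.f. parameter
f0, fs, Tb = 30.0, 3000.0, 1.0      # assumed carrier, sampling rate, symbol duration
n = int(Tb * fs)                    # samples per telegraph symbol

def tx(bits):
    """First-method modulator: centered signal (2.1) plus the offset e0*s(t)."""
    t = np.arange(0, Tb, 1 / fs)
    return np.concatenate([U0 * np.sin(2 * np.pi * f0 * t + rng.uniform(-np.pi, np.pi))
                           + e0 * b for b in bits])

def demodulate(u, thresh=0.18):
    """Sine-channel decision using the ch.f. estimates (3.11), (3.12)."""
    bits = []
    for seg in np.split(u, len(u) // n):
        A = np.cos(Vm * seg).mean()    # estimate of the real part, (3.11)
        B = np.sin(Vm * seg).mean()    # estimate of the imaginary part, (3.12)
        bits.append(int(B > thresh))   # threshold value is an assumption
    return bits

sent = [0, 1, 1, 0, 1]
print(demodulate(tx(sent)) == sent)
```

The cosine-channel rule would instead compare the estimate A* with its own threshold and invert the result, as blocks 8 and 9 do in Figure 3.7.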
Figure 3.7. Twochannel signal demodulator
The thresholds П_{1c} (sine channel) and П_{2k} (cosine channel) are set in accordance with the equalities
, , (3.13)
where К_{1}, К_{2} are variable coefficients and П_{1}, П_{2} are thresholds, whose values differ depending on the model of the quasi-deterministic signal and are calculated below, in the analysis of the modem's noise immunity.
3.5. Single-channel signal demodulator
The sine and cosine channels of the demodulator in Figure 3.7 are not equally affected by interference and consequently have different noise immunity. Combining the advantages of each channel yields the single-channel demodulator circuit [24] shown in Figure 3.8.
Figure 3.8. Structural diagram of the demodulator
The block diagram of the demodulator is shown in Figure 3.8. It contains an analog-to-digital converter (ADC) 1, a multiplier 2, functional converters 3 and 4, accumulating averaging adders 5 and 6, threshold devices 7 and 8, an inverter 9, and a logical AND circuit 10. Logic circuit 10 combines the outputs of the demodulator channels of Figure 3.7, after which the demodulator has only one output. The demodulator becomes a single-channel device.
Up to logic circuit 10, the single-channel demodulator operates in full accordance with the description of the principle of operation of the circuit in Figure 3.7. Then logic circuit 10 comes into operation; its action is explained in Table 3.1.
Table 3.1.
Truth (state) table

Sequence number       | 1        | 2        | 3        | 4
Sine channel output   | log. «1» | log. «1» | log. «0» | log. «0»
Cosine channel output | log. «1» | log. «0» | log. «1» | log. «0»
Demodulator output    | log. «1» | log. «0» | log. «0» | log. «0»
Looking ahead, let us say that in the first case, when determining the logical "1", errors are possible, since the sine channel determines the logical "1" only satisfactorily. But the sine channel determines the logical "0" without errors. Therefore, in all subsequent cases the absence of errors can be expected. The simulation of the circuit in Figure 3.8 confirms what has been said [24]. On average, the single-channel demodulator in Figure 3.8 reduces the error rate by a factor of 20 compared with the cosine channel of the two-channel demodulator. The circuit in Figure 3.8 is not the only one; other options for combining the demodulator channels of Figure 3.7 are possible. Additional studies are required to determine the optimal option for combining the two demodulator channels into one.
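The combining rule of Table 3.1 is a plain logical AND of the two channel outputs. A one-line sketch makes this explicit:

```python
def combined_output(sine_bit: int, cosine_bit: int) -> int:
    """Logic circuit 10 of Figure 3.8: an AND of the two channel outputs
    (each already including its inverter), reproducing Table 3.1."""
    return sine_bit & cosine_bit

# The four states of Table 3.1 as (sine output, cosine output) pairs
states = [(1, 1), (1, 0), (0, 1), (0, 0)]
outputs = [combined_output(s, c) for s, c in states]
print(outputs)  # [1, 0, 0, 0] -- only state 1 yields a logical "1"
```

A logical "1" is declared only when both channels agree on it, which is exactly why the error-free "0" detection of the sine channel carries over to the combined output.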
Consider another version of the single-channel demodulator [25], shown in Figure 3.9. There is actually only one channel in it, namely the previously designated cosine channel. However, the circuit in Figure 3.9 can equally belong to the sine channel of the demodulator if the FC forms a sine function.
Figure 3.9. Single-channel signal demodulator
The circuit in Figure 3.9 includes an ADC, which is an analog-to-digital converter; P1, a multiplier; FC, the functional converter of the cosine function; AA, an accumulating averaging adder; and TD, a threshold device, in the output circuit of which an inverter is included, as is done, for example, in the circuit of Fig. 3.7. Therefore, at the output of the demodulator we receive an inverse set of logical "0" and "1", taken from the output of the TD through the inverter.
Signal conversion, for example of (3.8), in the demodulator proceeds in the following sequence. The quasi-deterministic signal (3.8) is discretized by the ADC, and each discrete instantaneous value of the signal is multiplied by the ch.f. parameter V_{m}; the product is converted by the functional converter into the value of the function cos[V_{m}z(k∆t)], where ∆t is the sampling interval of the signal. The values of the cosine function are accumulated in the adder and, when a synchronization command is received, they are averaged. The result of the averaging enters the threshold device and is compared with the threshold, and the decision is made in accordance with the inequalities:
1) if A*(V_{m}) < П_{2k}, then it is considered that the logical "1" is accepted;
2) if A*(V_{m}) ≥ П_{2k}, then it is considered that the logical "0" is accepted.
The result after averaging the AA adder data is
A*(V_{m}) = (1/N)∑_{k=1}^{N} cos[V_{m}z(k∆t)], (3.14)
where N is the sample size of discrete instantaneous values of the signal. In expression (3.14), the expectation operator is replaced by an ideal adder. Studies of the estimate of the real part of the ch.f. showed [2] that as N → ∞ it is asymptotically consistent, efficient, and unbiased, i.e., the properties of the estimate tend to the fundamental ones. Consequently, the value of the estimate (3.14) will be equal to the value of the ch.f. (2.12), while the threshold is set in accordance with (3.13) as К_{1}∙П_{1}, where К_{1} is a variable coefficient and П_{1} is the threshold of the TD. The coefficient К_{1} is different in each modem; it depends on the signal modulation algorithm.
4. NOISE IMMUNITY OF THE MODEM IN THE CHANNEL WITH "WHITE" NOISE
The theoretical analysis of modem noise immunity is based on determining the real and imaginary parts of the ch.f. of the additive mixture. Then the values are calculated separately for each of the parts of the ch.f. of the additive mixture. Finally, these values of the ch.f. are compared with the thresholds set in the sine and cosine channels of the demodulator in order to make decisions in accordance with the inequalities recorded in the signal discrimination algorithm. Separately, a quantitative analysis of the probability of errors is carried out. In total, thirteen different modems of the new generation are considered. To relate the material of this chapter to a particular device model, a modem name is used, consisting of a cipher of letters and numbers denoting the following: A is the arcsine law; K is the cosine law; B is Veshkurtsev's law; T is Tikhonov's law; 1 means one channel; 2 means two channels; 21 means one channel resulting from combining two different demodulator channels using digital logic circuits. Let us correctly write down and decipher, for example, such a name: modem A21 is a single-channel modem for receiving signals distributed according to the arcsine law.
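The naming cipher can be captured in a small helper. The decoding strings below paraphrase the definitions above and are illustrative, not the book's wording:

```python
def decode_modem_name(name: str) -> str:
    """Decode the modem cipher of Chapter 4: the letter names the
    distribution law, the digits name the channel structure."""
    laws = {
        "A": "arcsine law",
        "K": "cosine law",
        "B": "Veshkurtsev's law",
        "T": "Tikhonov's law",
    }
    channels = {
        "1": "single-channel",
        "2": "two-channel",
        "21": "single-channel (two channels combined by a logic circuit)",
    }
    return f"{channels[name[1:]]} modem, signal distributed by the {laws[name[0]]}"

print(decode_modem_name("A21"))
```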
4.1. Noise immunity of modem A when receiving an additive mixture of noise and a signal with the distribution of instantaneous values according to the arcsine law
Let us recall that a signal with the distribution of instantaneous values according to the arcsine law can be modulated in two ways, described above in Section 3.1. We will consider each of them separately and, on their basis, build modems that differ in structure and characteristics. As a result, we obtain three models of a new-generation modem.
4.1.1. Noise immunity of the modem A2 when receiving an additive mixture of noise and a non-centered signal with the distribution of instantaneous values according to the arcsine law
The modem contains a modulator (Fig. 3.1), the quasi-deterministic signal at the output of which is shown in Fig. 3g, and a two-channel demodulator (Fig. 3.7). Its name will be: modem A2. The modulation algorithm for the quasi-deterministic signal (2.1) is written in Table 4.1.
Table 4.1.
Signal modulation algorithm with V_{m} = 1

Telegraph signal | Signal dispersion value | Signal expectation value
logical "0"      | 0.18                    | 0
logical "1"      | 0.18                    | 0.9
The methodology and results of the studies were published in [26]. Let us turn to the analysis of the noise immunity of the demodulator under the action of an additive mixture of a quasideterministic signal (2.1) and "white" noise at its input
z(t)=u(t)+n(t), (4.1)
where n(t) is "white" noise; u(t) is a signal with a = U_{0}, whose probabilistic characteristics are known from Section 2.1.
Using expressions (3.3, 3.4), the data in Table 4.1, and formulas (3.13), we calculate the thresholds in the sine and cosine channels of the demodulator. As a result, with the values V_{m} = 1 and U_{0} = 0.6 we get
П_{1} = J_{0}(U_{0})∙sin(0.9) = 0.7116; П_{2} = J_{0}(U_{0}) = 0.912,
where J_{0}(∙) is the zero-order Bessel function of the first kind and 0.9 is the signal expectation from Table 4.1.
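The numerical values of the thresholds can be checked with a short script. J_{0} is computed here from its integral representation; note that the product J_{0}(0.6)∙sin(0.9) evaluates to about 0.714, close to the book's 0.7116, the small difference presumably being rounding in the source.

```python
import math

def bessel_j0(x: float, n: int = 2000) -> float:
    """Zero-order Bessel function of the first kind via the integral
    representation (1/pi) * integral of cos(x*sin(t)) over t in [0, pi],
    evaluated with the midpoint rule."""
    h = math.pi / n
    return sum(math.cos(x * math.sin((k + 0.5) * h)) for k in range(n)) / n

U0 = 0.6
p2 = bessel_j0(U0)                  # cosine-channel base threshold П_2
p1 = bessel_j0(U0) * math.sin(0.9)  # sine channel; 0.9 is the "1"-symbol expectation
print(round(p2, 3), round(p1, 3))   # 0.912 0.714
```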
Further, with the value V_{m}=1 and s(t) =0 we define for the additive mixture (4.1)
A(1) = ∫ cos(z)W(z)dz = J_{0}(U_{0})∙exp(−σ²_{n}/2). (4.2)
When s(t) = 0, similarly to (4.2), we calculate for the value V_{m} = 1 for the additive mixture (4.1)
B(1) = ∫ sin(z)W(z)dz = 0, (4.3)
where W(z) is the probability density of instantaneous values of the additive mixture; h² = σ²_{s}/σ²_{n} is the signal-to-noise power ratio; σ²_{s} is the dispersion of the quasi-deterministic signal; σ²_{n} is the dispersion of the "white" noise. The results (4.2, 4.3) need to be quantified. Tables 4.2, 4.3 present the results of calculations at П_{1} = 0.7116; П_{2} = 0.912; К_{1} = 0.56; К_{2} = 0.88, written in the line with the name of the evaluation.
Table 4.2.
The probability of errors in the cosine channel at a logical "1"

Threshold П_{2k} = 0.912∙0.88 = 0.8

Signal-to-noise ratio       | 0.001 | 0.01 | 0.1  | 1.0         | 10         | 100
Evaluation                  | 0     | 0    | 0.37 | 0.83        | 0.9        | 0.9
Probability of errors P_{1} | 1     | 1    | 1    | 2.2∙10^{-5} | 2∙10^{-45} | 2∙10^{-45}
Table 4.3.
Probability of errors in the sine channel at a logical "0"

Threshold П_{1c} = 0.7116∙0.56 = 0.4

Signal-to-noise ratio       | 0.001 | 0.01 | 0.1 | 1.0 | 10 | 100
Evaluation                  | 0     | 0    | 0   | 0   | 0  | 0
Probability of errors P_{0} | 0     | 0    | 0   | 0   | 0  | 0
When analyzing the data in tables 4.2, 4.3, we always compare the values of the estimates of the ch.f. of the additive mixture with the thresholds recorded in the first line of the tables. We see that the data in Table 4.2 exceed the threshold starting from the signal-to-noise ratio of 1 up to 100, i.e. in a range of 20 dB. This means that there will be no errors here when receiving a logical "0", so the modem has maximum noise immunity. In the range of signal-to-noise ratios from 0.1 to 1, errors when receiving a logical "0" are possible. However, it can be stated that the noise immunity of the cosine channel of the modem is an order of magnitude better than the data given in the publication. Analyzing the data in Table 4.3, we see ideal results. In the sine channel of the demodulator, all data are below the set threshold. Therefore, we have maximum noise immunity when receiving a logical "0" in the range of signal-to-noise ratios from 10^{-3} to 10^{2}, or 50 dB, with the lower limit of the range at minus 30 dB. These data are at least twenty orders of magnitude better than the noise immunity of the device known from the publication.
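The Evaluation row of Table 4.2 can be reproduced numerically under the assumption that the ch.f. of the additive mixture factorizes into the signal ch.f. J_{0}(U_{0}) and the Gaussian-noise factor exp(−σ²_{n}/2), with σ²_{n} = σ²_{s}/h² and σ²_{s} = 0.18 from Table 4.1:

```python
import math

J0_U0 = 0.912       # J0(0.6), value of the signal ch.f. at V_m = 1
SIGNAL_VAR = 0.18   # signal dispersion from Table 4.1

def cosine_channel_value(snr: float) -> float:
    """Real part of the ch.f. of the centered-signal-plus-noise mixture at
    V_m = 1, assuming the product form J0(U0) * exp(-noise_var / 2)."""
    noise_var = SIGNAL_VAR / snr   # sigma_n^2 = sigma_s^2 / h^2
    return J0_U0 * math.exp(-noise_var / 2)

for snr in (0.001, 0.01, 0.1, 1.0, 10, 100):
    print(snr, round(cosine_channel_value(snr), 2))
```

The printed values (0, 0, 0.37, 0.83, 0.9, …) track the Evaluation row of Table 4.2: at low ratios the noise factor drives the ch.f. toward zero, below the threshold 0.8, which is where the error probabilities of 1 arise.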
Suppose the additive mixture (4.1) contains a non-centered quasi-deterministic signal at the demodulator input; this corresponds to the condition s(t) = 1. Similarly to (4.2), for the value V_{m} = 1 we define
A(1) = ∫ cos(z)W(z)dz = cos(0.9)∙J_{0}(U_{0})∙exp(−σ²_{n}/2), (4.4)
or similarly to (4.3) for the value V_{m}=1 we calculate
B(1) = ∫ sin(z)W(z)dz = sin(0.9)∙J_{0}(U_{0})∙exp(−σ²_{n}/2). (4.5)
The results (4.4), (4.5) need a quantitative analysis. Tables 4.4, 4.5 show calculation data at П_{1} = 0.7116; П_{2} = 0.912; К_{1} = 0.56; К_{2} = 0.88, written in the line with the name of the evaluation.
Table 4.4.
Probability of errors in the cosine channel at a logical "0"

Threshold П_{2k} = 0.912∙0.88 = 0.8

Signal-to-noise ratio       | 0.001 | 0.01 | 0.1  | 1.0        | 10         | 100
Evaluation                  | 0     | 0    | 0.23 | 0.52       | 0.57       | 0.57
Probability of errors P_{0} | 0     | 0    | 0    | 2∙10^{-45} | 2∙10^{-45} | 2∙10^{-45}
Table 4.5.
The probability of errors in the sine channel at a logical "1"

Threshold П_{1c} = 0.7116∙0.56 = 0.4

Signal-to-noise ratio       | 0.001 | 0.01 | 0.1  | 1.0        | 10         | 100
Evaluation                  | 0     | 0    | 0.29 | 0.65       | 0.71       | 0.71
Probability of errors P_{1} | 1     | 1    | 1    | 8∙10^{-32} | 2∙10^{-45} | 2∙10^{-45}
Similarly to the analysis of tables 4.2, 4.3, we study the data of tables 4.4, 4.5. The data in Table 4.4 are below the set threshold; hence, they correspond to the ideal case. Here we can say that the reception of the logical "1" in the cosine channel of the demodulator occurs without errors, i.e. with the limiting noise immunity, in the range of signal-to-noise power ratios from 10^{-3} to 10^{2}, or in a range of 50 dB. These data are at least twenty orders of magnitude better than the noise immunity of the device known from the publication. The data in Table 4.5 are much more modest than the previous ones. In the sine channel of the demodulator, a logical "1" is received without errors only when the signal-to-noise ratio is from 1 to 100, or in a range of 20 dB. With a signal-to-noise ratio from 0.1 to 1, errors are possible in the sine channel of the demodulator when receiving a logical "1".
Let us move on from qualitative data analysis to a quantitative assessment of modem noise immunity. In tables 4.2–4.5, the following designations are adopted: Р_{0} is the probability of errors when receiving a logical "0"; Р_{1} is the probability of errors when receiving a logical "1"; together they make up the total probability of device errors.
Quantitative assessment of the noise immunity of the modem A2. In expressions (3.11, 3.12), instead of the expectation operator, an ideal adder is used. As a result, we obtain the estimates of the real and imaginary parts of the ch.f. recorded in tables 4.2–4.5. Both estimates are random variables with their own properties and distribution laws. Let us recall that the estimates of the real and imaginary parts of the ch.f. are efficient, consistent, and unbiased. This is shown in earlier works, for example [2], in which the efficiency of the estimates is characterized by their variances. In the book [2, p. 95–96] the dependence of the variance of the estimates (3.11, 3.12) on the dimensionless time S = T∙∆f is shown, where T is the duration of the signal realization and ∆f is the width of the energy spectrum of the signal. With a value of S = 100, the variances of the estimates of the real and imaginary parts of the ch.f. are of the order of 10^{-4}. The value S = 100 is obtained with T = N∆t, where the designations are borrowed from expressions (3.11, 3.12).
The law of distribution of the estimates of the real and imaginary parts of the ch.f. depends on the probability density of the additive mixture of signal and noise. Let it be normal in the first approximation, since it is difficult, and perhaps impossible, to solve this problem mathematically exactly. According to Professor S.Ya. Vilenkin, who solved similar problems for many decades, "an exact solution is possible only in some cases" [27, p. 106]. For example, in the same work, the author obtained the exact distribution law for the estimate of the correlation function of a Gaussian signal, and then, after some assumptions, suggested that it be considered approximately normal. Let us follow this example. Looking ahead, we note that when modeling the demodulator (Fig. 3.7), the validity of such a hypothesis was proved in [28].
Next, we proceed similarly to the procedure of discretizing a continuous value by level, with one level equal to the threshold, while the second level is not limited by the threshold, i.e. it is variable without negative consequences for the probability of errors. At the same time, we consider that the center of the distribution law coincides with the value of the ch.f. recorded in tables 4.2–4.5, since the estimates of the ch.f. are unbiased. There is a corridor between the value of the estimate and the threshold; its width differs for each signal-to-noise ratio. If the value of the estimate of the ch.f. goes beyond the corridor boundary, then an error occurs when receiving a logical element. For example, the corridor is 0.23 in Table 4.4 at the ratio equal to 10. We divide the value of the corridor by "sigma", i.e. by the mean square value of the estimate of the real or of the imaginary part of the ch.f., depending on the demodulator channel in question. The mean square value of the estimate in the cosine channel is 0.01, so we get the number of "twenty-three sigma" separating these two values. Then we apply a rule similar to the "three sigma" rule and calculate the value of the error integral at "L sigma". In our example L = 23. The error probability that interests us is equal to the difference between unity and the value of the error integral. Unfortunately, in reference books on special functions [17], the values of the error integral are limited to the size L ≤ 10. Therefore, in tables 4.2–4.5, the values of the error probability are sometimes overestimated, for example, in Table 4.4 at the ratio equal to 10. In fact, the errors will be smaller by many orders of magnitude.
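The "L sigma" computation sketched above amounts to a one-sided Gaussian tail probability, which standard double-precision arithmetic evaluates well beyond the L ≤ 10 limit of printed tables:

```python
import math

def tail_probability(corridor: float, sigma: float) -> float:
    """One-sided Gaussian tail beyond the threshold: the 'L sigma' rule,
    P = 0.5 * erfc(L / sqrt(2)) with L = corridor / sigma."""
    L = corridor / sigma
    return 0.5 * math.erfc(L / math.sqrt(2))

# The familiar three-sigma case: corridor 0.03, sigma 0.01
p3 = tail_probability(0.03, 0.01)   # about 1.35e-3
# L = 23 (corridor 0.23, sigma 0.01) lies far outside any printed table,
# but double-precision erfc still evaluates it directly (around 1e-116)
p23 = tail_probability(0.23, 0.01)
```

This illustrates the point made in the text: where the tables stop at L = 10, direct evaluation shows the true probabilities at L = 23 are smaller by dozens of orders of magnitude.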
For greater clarity and understanding of what was said above, we use the known distribution law for the estimate of the real part of the ch.f. [28], the form of which is shown in Figure 4.1. We will calculate the modem error probabilities using a method developed on the basis of statistical decision theory [29]. The estimate is a random variable that depends on the signal-to-noise ratio and has the dispersions of the estimates of the real and imaginary parts of the ch.f. given above. The values of the estimates (3.11, 3.12) are distributed according to the Gauss law [28], in which the values recorded in tables 4.2–4.5 are the most probable, i.e. the expectations. The distribution law and the initial data are shown in Figure 4.1a, where W(A) is the probability density of the ch.f. estimate and m_{1}{A} is the expectation of the estimate of the ch.f. One can also see the interval between the mathematical expectation of the estimate and the threshold in the demodulator. In Figure 4.1a, the thresholds are shown to the left and to the right of the expected value. This is done because either the left or the right half of the distribution law is involved in determining the logical "0" and the logical "1".
Let us start calculating the error probabilities in the demodulator when a logical "0" is received. The mathematical expectation of the estimate is equal to the value (4.4). All values of the estimate (3.11) must exceed the threshold П_{2k}, shown in Figure 4.1a on the left. Let us use the three sigma rule and define the number of sigmas using the relation L = |m_{1}{A} − П_{2k}|/σ. The probability of errors when receiving a logical "0" is equal to the area under the curve W(A) lying to the left of the threshold П_{2k}.
Fig. 4.1. The probability density of the estimates of the real (a) and imaginary (b) parts of the ch.f.
It is equal numerically
Р_{0} = 0.5[1 − erf(L/√2)] = 0.5∙erfc(L/√2), (4.5d)
where erf(∙), erfc(∙) are the probability integral (error function) and its complement. Formula (4.5d) is also suitable for calculating errors in the demodulator when a logical "1" is accepted. Only the expectation of the estimate in this case is equal to the value (4.2), and the probability of errors is equal to the area under the curve W(A) which lies to the right of the threshold П_{2k} (colored black); it will be denoted by Р_{1}. The total probability of modem errors is then made up of Р_{0} and Р_{1}. When calculating the error probability according to formula (4.5d), data from tables 4.2, 4.4 were substituted in place of m_{1}{A}. A similar description can be repeated for the estimate (3.12) of the imaginary part of the ch.f. using the data in tables 4.3, 4.5 and Figure 4.1b.
The total error probability of the sine (curve 1) and cosine (curve 2) demodulator channels is shown in Figure 4.2, and its main values are listed in Table 4.6. For comparison, the probability of errors (curve 3) of ideal phase modulation (PM), calculated for a noisy channel, is given there from [15, p. 473]. The choice of PM for comparison is not accidental: it is universally recognized as the most noise-resistant modulation.
Table 4.6.
Probability of errors of different modems

Signal-to-noise ratio                  | 0.1 | 0.5         | 1.0         | 10         | 100
Total sine channel error probability   | 0.5 | 2∙10^{-21}  | 4∙10^{-32}  | 1∙10^{-45} | less than 1∙10^{-45}
Total cosine channel error probability | 0.5 | 4.9∙10^{-1} | 1.1∙10^{-5} | 1∙10^{-45} | less than 1∙10^{-45}
PM error probability                   | 0.9 | 3.2∙10^{-1} | 1.5∙10^{-1} | 8∙10^{-6}  | 2∙10^{-45}
Comparison of the noise immunity of the new modem with the noise immunity of the known device using ideal PM shows its superiority by at least four orders of magnitude, and up to thirty orders of magnitude when working with weak signals. This causes distrust among modem developers, whose opinion says: "this cannot be, because it can never be." In the cosine channel (curve 2), the new modem has a reference point with a nonzero error probability, i.e. a deviation from the limiting noise immunity of the device, if the probability of errors 10^{-32} is conventionally equated to zero. Its occurrence may be associated with a random, unjustified choice of the quantitative parameters of the distribution law of the quasi-deterministic signal used for modulation. Probably, optimization of these parameters will eliminate the modem reference point. There are no reference points in the sine channel (curve 1) of the modem. Therefore, even with such data, one can hope for a good future for the new modem with two channels.
Figure 4.2. Probability of errors of the two-channel modem A2
As a result, we can say that in the presence of "white" noise in the data transmission channel, the potential noise immunity according to Kotelnikov of the proposed modem is the limiting one, because with accurate synchronization of both channels of the modem there are no errors when receiving a telegraph signal. In the sine and cosine channels of the modem, the range of signal-to-noise power ratios is different. In the sine channel it is equal to 30 dB, and in the cosine channel 25 dB; the lower limit of the range in the sine channel lies at the level of minus 10 dB, while in the cosine channel it is equal to minus 5 dB. Thus, the sine channel of the modem has better noise immunity than the cosine channel. In Figure 4.2, curves 2 and 3 run parallel at large signal-to-noise ratios. Therefore, the cosine channel at the error probability level of 1∙10^{-5} and less has an energy gain of 10 dB relative to the modem with an ideal PM signal.
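The error-free operation of the cosine channel at high signal-to-noise ratios can also be checked by direct simulation. The sketch below is illustrative and rests on the assumptions of Section 4.1: an arcsine-law carrier with U_{0} = 0.6, a mean shift of 0.9 for the logical "1", Gaussian channel noise with σ²_{n} = 0.18/h², and the threshold 0.8 with output inversion.

```python
import math
import random

def simulate_a2_cosine(snr: float, bits: int = 100, n: int = 2000, seed: int = 7) -> float:
    """Monte Carlo sketch of the cosine channel of modem A2 in 'white' noise.
    Returns the observed bit error rate over `bits` symbols."""
    rng = random.Random(seed)
    u0, shift, noise_sigma = 0.6, 0.9, math.sqrt(0.18 / snr)
    errors = 0
    for _ in range(bits):
        bit = rng.getrandbits(1)
        mean = shift if bit else 0.0
        # estimate A*(1) over n samples of the signal-plus-noise mixture
        a_est = sum(
            math.cos(mean + u0 * math.cos(rng.uniform(0, 2 * math.pi))
                     + rng.gauss(0, noise_sigma))
            for _ in range(n)
        ) / n
        decided = 0 if a_est >= 0.8 else 1  # inverter at the channel output
        errors += decided != bit
    return errors / bits

print(simulate_a2_cosine(10.0))  # error-free at a high signal-to-noise ratio
```

With these parameters the estimate settles near 0.9 for the logical "0" and near 0.56 for the logical "1", so the margin to the 0.8 threshold is tens of standard deviations wide and no errors occur, consistent with the 2∙10^{-45}-level entries of Table 4.4.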
Single-channel modem A21. The new modem contains a modulator (Fig. 3.1) and a single-channel demodulator (Fig. 3.8). Its name will be: modem A21. The modulation algorithm for the quasi-deterministic signal (2.1) remains the same and is recorded in Table 4.1. At the same time, the above theoretical analysis of modem noise immunity when operating in a noisy channel remains unchanged for the new modem model. However, the new modem model has only one channel and one output. Let us recall that the demodulator (Figure 3.8) combines the advantages of the sine and cosine channels of the demodulator in Figure 3.7.
Table 4.3 shows that in the sine channel of the demodulator, the logical "0" is determined without errors in the entire range of signal-to-noise power ratios, i.e. in the range of 50 dB. Table 4.4 shows that in the cosine channel of the demodulator, the logical "1" is determined without errors also in the entire range of signal-to-noise power ratios, i.e. in the range of 50 dB, if the probability of errors 2∙10^{-45} is conventionally equated to zero. Theoretically, when these advantages of both channels are combined, one should get a new modem model with maximum noise immunity in the range of signal-to-noise ratios of 50 dB, with the lower limit of the range equal to minus 30 dB. However, in practice this does