CHARACTERISTIC FUNCTION
Abstract and keywords
Abstract (English):
The monograph presents the fundamentals of the theory of constructing new-generation modems. The modems are built on the principles of statistical communication theory, which is based on using a random signal (chaos) as the carrier of information. In such a signal the characteristic function, a fundamental characteristic of a random process, is modulated. The method of signal modulation and demodulation is patented and makes it possible to create modems whose efficiency and noise immunity exceed those of existing devices of the same class by several orders of magnitude. New-generation modems immediately improve the technical characteristics of digital IT equipment by several orders of magnitude, since they operate without errors in wired and radio channels when receiving one hundred duodecillion binary symbols. The book is intended for scientists and specialists in the field of digital communication systems, statistical radio engineering and instrumentation, and may be useful to graduate students, master's students and students of related specialties.

Keywords:
new-generation modems, information, signal, signal modulation and demodulation method, IT equipment, digital communication, radio engineering
Text
The characteristic function (ch.f.) is a probabilistic characteristic of a random process or a random variable. It was proposed and used in 1901 by the mathematician A.M. Lyapunov to prove the central limit theorem of probability theory. Since then the ch.f. has acquired independent significance and is successfully used to solve both fundamental and applied problems [1, 2]. In our opinion, it is appropriate to call it the characteristic function of A.M. Lyapunov, just as is done, for example, with the functions of F. Bessel, P. Dirac, H. Struve, O. Heaviside and other scientists. For more than fifty years the characteristic function of A. Lyapunov served as a tool of mathematics, and it remains one to this day: with its help mathematicians solve complex problems and construct difficult proofs of theorems. Since 1954 the function has also been studied and applied in the applied sciences. With its help, new results have been obtained in non-destructive testing and diagnostics, in communication technology, in noise filtering in the probability space, and in signal detection, results superior by an order of magnitude or more to those previously known [2, 3].

1.1. Characteristic function: definition, properties

A one-dimensional characteristic function is the statistical mean of an exponential whose imaginary exponent has the form jV_m ξ(t), in which the random process ξ(t) is multiplied by an arbitrary real parameter V_m. The mathematical model of the ch.f. is given by the expression

θ_1(V_m) = m_1{exp[jV_m ξ(t)]}, (1.1)

where θ_1(V_m) is the characteristic function (ch.f.); m_1{·} is the sign of the statistical mean (the expectation operator); V_m = mΔV is the parameter of the ch.f.; ΔV is the discretization (quantization) step of the ch.f. parameter; m ∈ (−∞, +∞). In expression (1.1) the subscript 1 denotes a one-dimensional function.
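As a minimal numerical sketch of definition (1.1) (assuming Python with NumPy; the sample size, seed and parameter values are arbitrary illustrative choices), the statistical mean can be approximated by an average over a finite sample of instantaneous values:

```python
import numpy as np

rng = np.random.default_rng(0)

def ecf(sample, v):
    """Empirical ch.f.: the sample mean of exp(j*v*xi) over instantaneous values."""
    return np.mean(np.exp(1j * v * sample))

# Samples of a Gaussian process with m1 = 0, sigma = 1;
# theory for this case: theta_1(v) = exp(-v**2 / 2)
xi = rng.normal(0.0, 1.0, 200_000)
for v in (0.5, 1.0, 2.0):
    est = ecf(xi, v)
    # real and imaginary parts A(v), B(v), cf. the Euler decomposition below
    a, b = np.mean(np.cos(v * xi)), np.mean(np.sin(v * xi))
    print(v, abs(est - np.exp(-v**2 / 2)), abs(est - (a + 1j * b)))
```

For a symmetric zero-mean density the imaginary part stays near zero, so the estimate is nearly real, in line with the symmetry property discussed below.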
Using Euler's formula, we can obviously write

θ_1(V_m) = m_1{cos[V_m ξ(t)] + j sin[V_m ξ(t)]} = A(V_m) + jB(V_m), (1.2)

where A(V_m) = m_1{cos[V_m ξ(t)]} is the real part of the ch.f. and B(V_m) = m_1{sin[V_m ξ(t)]} is its imaginary part. Then the equality

θ_1(V_m) = |θ_1(V_m)| exp[jΥ(V_m)] (1.3)

holds, where |θ_1(V_m)| = √(A²(V_m) + B²(V_m)) and Υ(V_m) = arctan[B(V_m)/A(V_m)] are the modulus and the argument of the ch.f., respectively.

The geometric interpretation of the ch.f. is shown in Figure 1.1. It depicts a spatial figure formed by rotating a radius vector, whose length equals the modulus |θ_1(V_m)|, around the axis on which the values of the parameter V_m are plotted. The projections of a point belonging to the figure onto the coordinate axes are respectively equal to the real and imaginary parts of the ch.f. The shape and dimensions of the figure are determined by the random process ξ(t). The figure is built for a Gaussian process with m_1 = 0.3, σ = 0.01 and looks like a funnel; here m_1 is the expectation of the process and σ is its mean square deviation (MSD). In a cross-section of the figure at a fixed value of V_m, a circle is obtained.

Figure 1.1. One of the possible geometric interpretations of the ch.f.

From a physical point of view, the parameter V_m is the coefficient (or multiplicity) of amplification (attenuation) of the instantaneous values of the random process, and the product V_m ξ(t) is the instantaneous phase of the analytic signal

w(t) = e^{jV_m ξ(t)} = cos[V_m ξ(t)] + j sin[V_m ξ(t)]. (1.4)

Then the ch.f. is the mathematical expectation of an analytic signal with constant modulus √(cos²[V_m ξ(t)] + sin²[V_m ξ(t)]) = 1, while the random process ξ(t) only determines the law of variation of the instantaneous phase of the signal (1.4). In this case, the meaning of Figure 1.1 is as follows.
As the amplification factor of the instantaneous phase of the analytic signal grows, the expectation of the signal decreases, because a rapidly changing signal is strongly averaged and its constant component tends to zero. At a certain value of V_m it becomes exactly zero; at this point the apex of the spatial figure appears.

The well-known relation between the ch.f. and the probabilistic characteristics of a random process is given by the formula [4]

θ_1(V_m) = ∫_{−∞}^{+∞} W_1(x) e^{jV_m x} dx, (1.5)

where W_1(x) is the one-dimensional probability density of the random process ξ(t). In the particular case when the imaginary part of the ch.f. is zero, we have

θ_1(V_m) = ∫_{−∞}^{+∞} W_1(x) cos(V_m x) dx. (1.6)

Hence, if the ch.f. takes only real values, it is even, and the corresponding probability density is symmetric. Conversely, if the probability density of a random process is symmetric, the ch.f. is a real, even function of the parameter V_m.

In addition to (1.5), a relation connects the ch.f. with the initial and central moment functions of a random process. The book [4] gives the equality

m_k{ξ(t)} = (−j)^k [d^k θ_1(V_m)/dV_m^k]_{V_m=0}. (1.7)

It follows that the initial moment functions of the k-th order differ from the values of the k-th derivatives of the ch.f. at V_m = 0 only by the factor j^k. In the particular case k = 1 we obtain an expression for the expectation of the random process

m_1{ξ(t)} = −j θ̇_1(0). (1.8)

In the general case, if initial moments of any order exist, then, as follows from (1.7), the ch.f. can be represented by a Maclaurin series

θ_1(V_m) = 1 + Σ_{k=1}^{∞} [m_k{ξ(t)}/k!] (jV_m)^k. (1.9)

If we expand in a Maclaurin series not the ch.f. itself but the cumulant function ln θ_1(V_m), we obtain the expression

M(V_m) = ln θ_1(V_m) = Σ_{k=1}^{∞} [χ_k/k!] (jV_m)^k. (1.10)

The coefficients of this series, called the cumulants (or semi-invariants) of the distribution, are expressed in terms of the central moments by the formulas [4]

χ_1 = m_1{ξ(t)}, χ_2 = M_2{ξ(t)}, χ_3 = M_3, χ_4 = M_4 − 3M_2², χ_5 = M_5 − 10M_2 M_3, (1.11)

where M_k{ξ(t)} are the central moment functions of the k-th order of the random process ξ(t). Formulas (1.11) show that the first-order semi-invariant equals the expectation, and the second-order semi-invariant equals the variance, of the random process.

For many random processes the ch.f. has been derived and calculated; this information is systematized in the literature [2, 5]. A graph of the ch.f. of the widely used Gaussian random process is shown in Figure 1.2. It is plotted for the function

θ_1(V_m) = exp[j m_1 V_m − (σ²V_m²/2)], (1.12)

with the coefficient m_1 = 0. When σ = 1, the value of the ch.f. at V_m = 4 differs from θ_1(0) = 1 by roughly a factor of 3000. Thus the ch.f. of a normal random process decreases rapidly to zero. This behavior of the ch.f. underlies its noise filtering property, which is discussed below.

Figure 1.2. Characteristic function of a Gaussian process (m_1 = 0, σ = 1)

The ch.f. has other properties as well: for example, its maximum value, equal to one, is reached at V_m = 0, where the modulus |θ_1(0)| = 1. This implies the measurability and boundedness of the ch.f. for all values V_m ∈ (−∞, +∞). When solving many problems, it is especially useful that the ch.f. of an additive mixture of signal and noise equals the product of the characteristic functions of the individual terms. This property is easy to extend to the sum of independent random processes: the ch.f. of the sum is

θ_1(V_m) = Π_{i=1}^{n} θ_1i(V_m), (1.13)

where θ_1i(V_m) is the ch.f. of the i-th random process.
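Relation (1.8) above can be checked numerically: a sketch (NumPy assumed; the mean 0.3, the MSD 0.1 and the step h are arbitrary illustrative choices) that recovers the expectation from a central-difference derivative of the empirical ch.f. at V_m = 0:

```python
import numpy as np

rng = np.random.default_rng(1)
xi = rng.normal(0.3, 0.1, 100_000)   # process samples with m1 = 0.3

def ecf(sample, v):
    """Empirical ch.f.: sample mean of exp(j*v*xi)."""
    return np.mean(np.exp(1j * v * sample))

h = 1e-4                                       # small step for the derivative
deriv = (ecf(xi, h) - ecf(xi, -h)) / (2 * h)   # numerical estimate of theta'(0)
m1_est = (-1j * deriv).real                    # formula (1.8): m1 = -j * theta'(0)
print(m1_est)                                  # close to the sample mean, i.e. ~0.3
```

The same finite-difference scheme applied k times yields the higher initial moments of (1.7), up to the factor j^k.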
Here one can also speak of the probability density of the sum of independent random processes, which, as is well known, is calculated by the convolution formula

W_1(z) = ∫_{−∞}^{+∞} W_1(u) W_1(z−u) du = ∫_{−∞}^{+∞} W_1(z−y) W_1(y) dy, (1.14)

where z(t) = u(t) + y(t). Then, using transformation (1.5), one can pass to the characteristic function of the additive mixture z(t). These two routes to the solution do not exclude each other: in some cases it is easier to find the probability density, in others the characteristic function. For example, instead of measuring the probability density of the signal alone within the mixture z(t), it is much easier to measure the ch.f. of the mixture z(t) and the ch.f. of the interference y(t) when the signal u(t) = 0. One then finds the ch.f. of the signal from the ratio of these characteristic functions and calculates its probability density using the Fourier transform

W_1(x) = (1/2π) ∫_{−∞}^{+∞} θ_1(V_m) e^{−jV_m x} dV_m. (1.15)

Solving the same problem by means of the convolution (1.14) is much more complicated. Dividing the characteristic function of the additive mixture by the ch.f. of the interference gives a quotient that contains expressions for the real and imaginary parts of the ch.f. of the signal alone:

A_u(V_m) = [A_z(V_m) A_y(V_m) + B_z(V_m) B_y(V_m)] / [A_y²(V_m) + B_y²(V_m)], (1.16)

B_u(V_m) = [B_z(V_m) A_y(V_m) − A_z(V_m) B_y(V_m)] / [A_y²(V_m) + B_y²(V_m)], (1.17)

where A_u(V_m), A_y(V_m), A_z(V_m) are the real parts of the ch.f. of the signal, the interference and the additive mixture, respectively, and B_u(V_m), B_y(V_m), B_z(V_m) are the corresponding imaginary parts.
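The inversion (1.15) is easy to verify numerically for the Gaussian ch.f. (1.12); a sketch assuming NumPy, with an arbitrary truncation of the infinite integration interval to [−10, 10] and an arbitrary grid step:

```python
import numpy as np

# Gaussian ch.f. with m1 = 0, sigma = 1: theta_1(v) = exp(-v**2 / 2)
v = np.linspace(-10.0, 10.0, 4001)
dv = v[1] - v[0]
theta = np.exp(-v**2 / 2)

def density(x):
    """Probability density via the inverse transform (1.15), as a Riemann sum."""
    return np.real(np.sum(theta * np.exp(-1j * v * x)) * dv) / (2 * np.pi)

for x in (0.0, 1.0):
    # compare with the exact Gaussian density exp(-x**2/2)/sqrt(2*pi)
    print(density(x), np.exp(-x**2 / 2) / np.sqrt(2 * np.pi))
```

Truncating the interval is harmless here because the Gaussian ch.f. is negligible beyond |v| ≈ 6; for slowly decaying ch.f. the grid would have to be wider and finer.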
When the noise has a symmetric distribution and zero mathematical expectation, the calculation algorithms (1.16), (1.17) become simpler:

A_u(V_m) = A_z(V_m) / A_y(V_m), (1.18)

B_u(V_m) = B_z(V_m) / A_y(V_m). (1.19)

In accordance with (1.2), the ch.f. of the signal is then

θ_1u(V_m) = A_z(V_m)/A_y(V_m) + jB_z(V_m)/A_y(V_m). (1.20)

If the additive mixture of signals is represented as z(t) = K_0 u(t) + U_0, then the ch.f. of the sum has the form

θ_1z(V_m) = θ_1u(K_0 V_m) e^{jU_0 V_m}, (1.21)

where U_0 is a deterministic signal constant in time (for example, a constant voltage), θ_1z(V_m) is the ch.f. of the process z(t), and θ_1u(V_m) is the ch.f. of the signal u(t). When U_0 = 0, expression (1.21) turns into the equality

θ_1z(V_m) = θ_1u(K_0 V_m). (1.22)

Consequently, amplification (attenuation) of the signal by a factor of K_0 merely scales the real parameter of the ch.f. by the same factor. This property of the ch.f. is especially important when constructing instruments of a new class, conventionally named characterometers. When the sign of the real parameter of the ch.f. is reversed, equality (1.22) takes the form

θ_1z(−V_m) = θ̄_1u(K_0 V_m), (1.23)

i.e. θ_1z(−V_m) and θ_1u(K_0 V_m) are complex conjugate functions. Setting K_0 = 1 in expression (1.23) reduces it to the known relation

θ_1z(−V_m) = θ̄_1u(V_m) = A(V_m) − jB(V_m). (1.24)

Let us consider a complex signal ξ(t) = u(t) + jν(t), in which both terms are functions of a real variable. Then

θ_1ξ(V_m) = m_1{exp(−V_m ν(t))} θ_1u(V_m), (1.24a)

where θ_1ξ(·), θ_1u(·) are the characteristic functions of the signals ξ(t), u(t), respectively. Thus the ch.f. of a complex signal equals the product of the ch.f. of the real part of this signal and the expectation of an exponential function whose exponent, taken with the opposite sign, is the imaginary part of the complex signal amplified V_m times.
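The recovery of the signal ch.f. by formulas (1.18)-(1.20) can be sketched as follows (NumPy assumed; the uniform "signal", the Gaussian noise and the sample size are arbitrary illustrative choices, not taken from the book):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500_000
u = rng.uniform(0.5, 1.5, n)     # hypothetical signal samples
y = rng.normal(0.0, 1.0, n)      # symmetric zero-mean noise, so B_y = 0
z = u + y                        # additive mixture observed at the input

def ecf(sample, v):
    """Empirical ch.f.: sample mean of exp(j*v*xi)."""
    return np.mean(np.exp(1j * v * sample))

v = 1.0
Az, Bz = ecf(z, v).real, ecf(z, v).imag   # mixture: measured with signal on
Ay = ecf(y, v).real                       # noise alone: measured with u(t) = 0
theta_u = Az / Ay + 1j * Bz / Ay          # formula (1.20)
print(abs(theta_u - ecf(u, v)))           # small: the signal ch.f. is recovered
```

The division works because, for independent terms, the mixture ch.f. factors into the product of the signal and noise ch.f., per (1.13).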
If the random signal ν(t) obeys the distribution law W_1(x), then

m_1{exp(−V_m ν(t))} = ∫_{−∞}^{+∞} W_1(x) e^{−V_m x} dx. (1.24b)

For example, for the uniform law W_1(x) = 1/(2π), |x| ≤ π, we have

θ_1ξ(V_m) = [sh(πV_m)/(πV_m)] θ_1u(V_m). (1.24c)

A multidimensional ch.f. of a random process has the following form:

θ_n(V_1, V_2, …, V_n, t_1, t_2, …, t_n) = m_1{exp[j Σ_{i=1}^{n} V_i ξ(t_i)]}. (1.25)

The quantization step ΔV is applicable to any parameter of the ch.f. With the previously adopted notation we can write

V_1 = V_1m = mΔV, V_2 = V_2m = mΔV, …, V_n = V_nm = mΔV. (1.26)

Moreover, it is not at all necessary in (1.26) to keep the quantization step the same for all parameters of the ch.f.; each parameter (or group of parameters) can be discretized with its own step. If the instantaneous values of the random process taken at the moments of time t_1, t_2, …, t_n are mutually independent, expression (1.25) takes a different form:

θ_n(V_1, V_2, …, V_n, t_1, t_2, …, t_n) = Π_{i=1}^{n} θ_1i(V_i), (1.27)

where θ_1i(V_i) is the ch.f. of the i-th set of instantaneous values of the random process. For stationary random processes expression (1.25) is written more simply:

θ_n(V_1, V_2, …, V_n) = m_1{exp[j Σ_{i=1}^{n} V_i ξ(t_i)]}, (1.28)

since the ch.f. does not depend on time, or depends only on the differences of the individual moments of time t_2 − t_1, t_3 − t_1, …, t_n − t_1 = nτ, i.e.

θ_n(V_1, V_2, …, V_n, τ, 2τ, …, (n−1)τ) = m_1{exp[j Σ_{i=1}^{n} V_i ξ(t + iτ)]}. (1.29)

Clearly, a ch.f. of lower dimension is obtained quite simply from the n-dimensional characteristic function, namely:

θ_k(V_1, V_2, …, V_k) = θ_n(V_1, V_2, …, V_k, 0, 0, …, 0). (1.30)

Calculating the n-dimensional ch.f. of a random process is a difficult task, so the literature contains few examples of multidimensional ch.f. Nevertheless, the expression for the n-dimensional ch.f. of a normal random process is known:

θ_n(V_1, V_2, …, V_n) = exp[j Σ_{i=1}^{n} V_i m_1i − (1/2) Σ_{i=1}^{n} Σ_{k=1}^{n} V_i V_k σ_i σ_k R_ik(τ)], (1.31)

where m_1i, σ_i are the mathematical expectation and the MSD of the random process; R_ik(τ) is the normalized correlation function, with R_ik(τ) = R_ki(τ), R_ii(0) = 1, R_kk(0) = 1. Examples of other multidimensional ch.f. can be found in the books [2, 5].

1.2. Estimates of the characteristic function of instantaneous values of a random process

Formula (1.1) contains the mathematical operation m_1{·}, the statistical mean (the expectation operator), which involves averaging an infinitely large number of values of the function exp[jV_m ξ(t)] depending on the instantaneous values of the random process ξ(t). Clearly, in the instrumental analysis of the ch.f. only a finite number of instantaneous values of the random process or of its parameters (envelope, instantaneous phase, frequency) will be used. The result of calculating the value of the ch.f. from a limited set of sample data of a random process is called an estimate of the function. We denote the estimate of the ch.f. by the previously adopted symbol, marked with an asterisk to distinguish it:

θ_1*(V_m) = L{exp[jV_m ξ_i(t_n)]_1^k}, (1.32)

θ_1*(V_m) = L{exp[jV_m ξ(t_n)]_1^k}, (1.33)

where L is the operator transforming an array of sample data, a set of sample values, or time-limited realizations of a random process; k is the number of realizations or sequences used. The transformation operator L can be different.
In [6] three operators are considered, namely: an ideal integrator with normalization to T,

L_T = lim_{T→∞} (1/T) ∫_0^T dt, (1.34)

an ideal adder with normalization to N,

L_N = lim_{N→∞} (1/N) Σ_{i=1}^{N}, (1.35)

and an ideal adder-integrator with normalization to N and T,

L_NT = lim_{N,T→∞} (1/NT) Σ_{i=1}^{N} ∫_0^T dt, (1.36)

where T is the averaging time (the duration of a realization of the random process) and N is the amount of sample data or the number of sample values. Taking (1.34)–(1.36) into account, we can write:

θ_1*(V_m) = lim_{T→∞} (1/T) ∫_0^T exp[jV_m ξ_k(t)] dt, (1.37)

θ_1*(V_m) = lim_{N→∞} (1/N) Σ_{i=1}^{N} exp[jV_m ξ_i(t_n)], (1.38)

θ_1*(V_m) = lim_{N,T→∞} (1/NT) Σ_{i=1}^{N} ∫_0^T exp[jV_m ξ_i(t)] dt. (1.39)

By definition [6], estimates (1.37)–(1.39) are called k-current, t-current and average, respectively. The applicability of these estimates is largely determined by the type of the random process. Using the classification of random processes in [4], we can state the following:

- for a stationary ergodic random process, all estimates of the ch.f. are equal to each other;
- for a stationary non-ergodic random process, the t-current and average estimates of the ch.f. are equal to each other;
- for a non-stationary ergodic random process, the k-current and average estimates of the ch.f. are equal to each other;
- for a non-stationary non-ergodic random process, none of the estimates of the ch.f. are equal to each other.

In the latter case we recommend using average estimates of the ch.f., since they converge better than the others to the probabilistic characteristic (the characteristic function).
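The first statement of the list above can be illustrated numerically for a stationary ergodic process. A discrete-time sketch (NumPy assumed; N, T, the seed and the use of independent Gaussian samples are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(4)
N, T, v = 100, 1000, 1.0
# N realizations, T samples each, of a stationary ergodic Gaussian process
xi = rng.normal(0.0, 1.0, (N, T))

e = np.exp(1j * v * xi)
k_current = e[0, :].mean()    # time average over one realization, cf. (1.37)
t_current = e[:, 0].mean()    # ensemble average at one time instant, cf. (1.38)
average   = e.mean()          # average over realizations and time, cf. (1.39)

theory = np.exp(-v**2 / 2)    # exact ch.f. value, about 0.6065
for est in (k_current, t_current, average):
    print(abs(est - theory))  # all three converge; "average" uses the most data
```

The average estimate pools N·T values and therefore has the smallest scatter, which is one reason it is recommended above for the non-stationary non-ergodic case.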
Passing to the estimates of the real and imaginary parts of the ch.f., with (1.37)–(1.39) taken into account, we write

A*(V_m) = lim_{T→∞} (1/T) ∫_0^T cos[V_m ξ_k(t)] dt, (1.40)

B*(V_m) = lim_{T→∞} (1/T) ∫_0^T sin[V_m ξ_k(t)] dt, (1.41)

A*(V_m) = lim_{N→∞} (1/N) Σ_{i=1}^{N} cos[V_m ξ_i(t_n)], (1.42)

B*(V_m) = lim_{N→∞} (1/N) Σ_{i=1}^{N} sin[V_m ξ_i(t_n)], (1.43)

A*(V_m) = lim_{N,T→∞} (1/NT) Σ_{i=1}^{N} ∫_0^T cos[V_m ξ_i(t)] dt, (1.44)

B*(V_m) = lim_{N,T→∞} (1/NT) Σ_{i=1}^{N} ∫_0^T sin[V_m ξ_i(t)] dt. (1.45)

Our further analysis concerns the k-current estimate of the ch.f. To simplify the notation, we omit the symbols k, n in formulas (1.37), (1.40), (1.41), keeping them in mind. If in estimates (1.37), (1.40), (1.41) the integration is performed not by an ideal integrator but by a real device with impulse response q(t), these expressions take the form

θ_1*(V_m) = ∫_0^T q(T−t) exp[jV_m ξ(t)] dt, (1.46)

A*(V_m) = ∫_0^T q(T−t) cos[V_m ξ(t)] dt, (1.47)

B*(V_m) = ∫_0^T q(T−t) sin[V_m ξ(t)] dt. (1.48)

Let us extend the notion of an estimate to the n-dimensional ch.f. Similarly to estimates (1.37)–(1.39), we write

θ_n*(V_1, V_2, …, V_n) = lim_{T→∞} (1/T) ∫_0^T exp[j(V_1 + V_2 + … + V_n)ξ_k(t)] dt, (1.49)

θ_n*(V_1, V_2, …, V_n) = lim_{N→∞} (1/N) Σ_{i=1}^{N} exp[j(V_1 + V_2 + … + V_n)ξ_i(t_n)], (1.50)

θ_n*(V_1, V_2, …, V_n) = lim_{N,T→∞} (1/NT) Σ_{i=1}^{N} ∫_0^T exp[j(V_1 + V_2 + … + V_n)ξ_i(t)] dt. (1.51)

Separately, for the real and imaginary parts of the ch.f. we have

A_n*(V_1, V_2, …, V_n) = lim_{T→∞} (1/T) ∫_0^T cos[(V_1 + V_2 + … + V_n)ξ_k(t)] dt, (1.52)

B_n*(V_1, V_2, …, V_n) = lim_{T→∞} (1/T) ∫_0^T sin[(V_1 + V_2 + … + V_n)ξ_k(t)] dt, (1.53)

A_n*(V_1, V_2, …, V_n) = lim_{N→∞} (1/N) Σ_{i=1}^{N} cos[(V_1 + V_2 + … + V_n)ξ_i(t_n)], (1.54)

B_n*(V_1, V_2, …, V_n) = lim_{N→∞} (1/N) Σ_{i=1}^{N} sin[(V_1 + V_2 + … + V_n)ξ_i(t_n)], (1.55)

A_n*(V_1, V_2, …, V_n) = lim_{N,T→∞} (1/NT) Σ_{i=1}^{N} ∫_0^T cos[(V_1 + V_2 + … + V_n)ξ_i(t)] dt, (1.56)

B_n*(V_1, V_2, …, V_n) = lim_{N,T→∞} (1/NT) Σ_{i=1}^{N} ∫_0^T sin[(V_1 + V_2 + … + V_n)ξ_i(t)] dt. (1.57)

For the k-current estimate of a multidimensional ch.f., with (1.46)–(1.48) taken into account, we can write

θ_n*(V_1, V_2, …, V_n) = ∫_0^T q(T−t) exp[j(V_1 + V_2 + … + V_n)ξ(t)] dt, (1.58)

A_n*(V_1, V_2, …, V_n) = ∫_0^T q(T−t) cos[(V_1 + V_2 + … + V_n)ξ(t)] dt, (1.59)

B_n*(V_1, V_2, …, V_n) = ∫_0^T q(T−t) sin[(V_1 + V_2 + … + V_n)ξ(t)] dt. (1.60)

In the above formulas the model of the random process can be arbitrary, and models of the impulse response of physically realizable integrators are presented in Table 1.1.

In the theory of estimating the probabilistic characteristics of a random process, much attention is paid to the fundamental nature of estimates. Fundamentality is characterized by the properties of the estimates: they must be consistent, efficient and unbiased. The properties of estimates (1.40)–(1.45), (1.47), (1.48) are studied in detail and described in the book [2]. It is also shown there that the estimates of the real and imaginary parts of the ch.f. are consistent, efficient and asymptotically unbiased, i.e. they are fundamental.

Table 1.1. Classification of integrators

Integrator type | Impulse response q(T−t)
RC integrator | βe^{β(t−T/2)} / [2 sh(βT/2)]
RC integrator | (1 − βT + βt)e^{βt} / T
RC integrator | 2(T − t − βe^{β(t−T)}) / (2e^{−βT} + T² − 2)
Measuring instrument with critical damping | β²(T − t)e^{βT} / (e^{βT} − βT − 1)

Turning to estimates of the form (1.36)–(1.39), (1.46), we can say that they are also fundamental.
Their properties will be the same as those of the estimates (1.40), (1.41), (1.47), (1.48) of the real and imaginary parts of the ch.f. It is appropriate to add the following. Estimates (1.40), (1.41), (1.47), (1.48) play the role of approximators, with whose help estimates of the probability density, the correlation function, and the initial and central moment functions are constructed. Since these initial estimates (approximators) of the probabilistic characteristics of a random process are consistent, the other estimates of probabilistic characteristics constructed from them by classical methods will also be consistent, which agrees with the conclusion in [2].

In the instrumental analysis of random processes it can be difficult to obtain an estimate of a probabilistic characteristic that satisfies all of the above properties at once. For example, it may turn out that although an efficient estimate exists, the algorithms for calculating it are too complicated, and one has to be content with a simpler estimate whose variance is somewhat larger. The same applies to biased estimates, as slightly biased estimates are sometimes used. The final choice of an estimate is, as a rule, dictated by the results of its analysis in terms of simplicity, ease of implementation in equipment or in mathematical statistics, and reliability of its properties. Estimates of the probability distribution function, the probability density, the correlation function, and the initial and central moment functions of the k-th order are given in the book [2].

1.3. Estimation of the characteristic function of a discrete quantity

Let us consider the special case when the process ξ(t) is represented only by instantaneous values ξ_i(t_n). To simplify the formulas, we introduce the notation ξ_i(t_n) = ξ_i.
This is a discrete random variable, for which the probability density has the form [4]

W_1(ξ) = Σ_{i=1}^{N} p_i δ(ξ − ξ_i), (1.61)

where p_i is the probability of occurrence of the i-th value of the discrete random variable; N is the total number of discrete values of the random variable; δ(·) is the delta function [4]. In view of (1.61), the ch.f. of a discrete random variable is

θ_1(V_m) = Σ_{i=1}^{N} p_i exp(jV_m ξ_i). (1.62)

If a discrete random variable ξ_i = U_0 = const with probability p = 1, then the ch.f. of this constant value is calculated by the formula

θ_1(V_m) = exp(jV_m U_0). (1.63)

The known distribution laws of a discrete random variable are tabulated in the book [5], together with the characteristic functions obtained for them. For example, a discrete random variable distributed according to the Poisson law has a ch.f. of the form

θ_1(V_m) = exp[λ(e^{jV_m} − 1)], (1.64)

where λ is the parameter of the Poisson distribution. This function is widely used when observing flows of elementary particles, such as electrons. Formulas (1.7)–(1.11) can be used to calculate the distribution moments of a discrete random variable. Substituting the ch.f. (1.64) into formula (1.7), we obtain

m_1{ξ} = λ, m_2{ξ} = λ² + λ, m_3{ξ} = λ³ + 3λ² + λ. (1.65)

These coincide with the results contained in [4]. Thus, the material given above for the ch.f. of a random process can be extended to a discrete random variable, whose ch.f. estimate is defined by formula (1.32).

Let us consider a complex number ξ_i = u_i + jν_i, which, in accordance with the notation adopted earlier, can be called a complex discrete quantity. Similarly to (1.24a), we obtain

θ_1ξ(V_m) = m_1{exp(−V_m ν_i)} θ_1u(V_m). (1.66)

Let us denote this expectation by a separate symbol (1.67), where p_i is the probability with which the random variable takes the value ν_i. If the probability density of this quantity is given by (1.68), and the probability density of the random variable ν is given by (1.69), then we obtain equation (1.70). Taking into account (1.67) and (1.70), we have the ch.f. of the complex discrete random variable (1.71). When the probabilities are equal,

p_i = 1/N, (1.72)

where N is the number of equiprobable values of the discrete random variable ν, expression (1.71) takes the form

θ_1ξ(V_m) = (1/V_m) θ_1u(V_m). (1.73)

New formulas of the form (1.73) follow from considering other distribution laws of a discrete random variable.

1.4. A new property of the characteristic function

In the above analysis of the characteristic function, special attention was paid to its properties known from the literature, such as boundedness, measurability and others. We have discovered a new property of the ch.f., which concerns filtering noise in the probability space with the help of this function. Expressions (1.16)–(1.19) form the basis of the filtering capacity of the characteristic function. The filtering capacity of the ch.f. has applied value [3, 7] and the following physical meaning. The probability density of a random process and its ch.f. are connected by the pair of Fourier transforms (1.5), (1.15). It turns out that the ch.f. is the spectral density of the probability density (or, in short, the spectral density of the probabilities) of a random process in the probability domain, if we use the terminology of the Fourier transform of signals, in which the transform domain is called the frequency domain. The ch.f. carries information about the probabilities of occurrence of the instantaneous values of a random process depending on the parameter V_m, which we earlier proposed to call the multiplicity of the values of the random process. The multiplicity can be an integer or a fractional real number. For integer multiplicities V_m = ±1, ±2, …, ±∞, while for fractional multiplicities V_m takes any other values on the real axis from −∞ to +∞. This is analogous to the frequency domain, where, when depicting a line spectrum, the spectral lines are located at points with abscissas ω, 2ω, 3ω, …, nω, where ω is the circular frequency of the signal.
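The line-spectrum analogy can be made concrete for an integer-valued discrete variable: from the ch.f. (1.62) the individual probabilities are recovered by an inversion over one period. A sketch (NumPy assumed; the three-point distribution is an arbitrary example):

```python
import numpy as np

values = np.array([0, 1, 2])          # values of the discrete variable
probs  = np.array([0.2, 0.5, 0.3])    # their probabilities, summing to one

def theta(v):
    """Discrete ch.f., formula (1.62)."""
    return np.sum(probs * np.exp(1j * v * values))

# For integer values theta(v) is 2*pi-periodic, and each probability is
# recovered as p_k = (1/2pi) * integral over [-pi, pi] of theta(v)*exp(-j*v*k)
v = np.linspace(-np.pi, np.pi, 20001)
dv = v[1] - v[0]
th = np.array([theta(x) for x in v])
for k, p in zip(values, probs):
    p_rec = np.real(np.sum(th * np.exp(-1j * v * k)) * dv) / (2 * np.pi)
    print(k, p_rec, p)   # recovered probability matches the assigned one
```

Each "spectral line" of the ch.f. thus carries the probability mass of one discrete value, just as a spectral line of a periodic signal carries the power of one harmonic.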
By analogy with the physical spectrum, when the ch.f. is used in practice the multiplicity V_m is taken integer and positive. At V_m = 0 the ch.f. θ_1(0) = 1 is the total probability. This total probability is distributed among the probabilities that the signal contains instantaneous values with multiplicity one (V_m = 1), multiplicity two (V_m = 2), multiplicity three (V_m = 3), and so on. For example, for "white" noise with variance σ² = 1 and a ch.f. of the form (1.12) with m_1 = 0 we have: p_1 = 0.6065 at V_m = 1; p_2 = 0.1353 at V_m = 2; p_3 = 0.0111 at V_m = 3, and so on to infinity, with probability p_∞ = 0 at V_m = ∞. When the additive mixture is filtered using expressions (1.16)–(1.19) with V_m = 1, all instantaneous noise values present with probability p_1 = 0.6065 are "cut out" of it. After such filtering, instantaneous noise values remain in the additive mixture with a total probability of 0.3935, i.e. with a probability less than one. At the filter output the additive mixture is different: it has been partially "cleaned" of noise, and as a result the signal-to-noise ratio at the filter output increases. Filtering of the "cleaned" additive mixture can be continued by "cutting out" of it, at V_m = 2, the instantaneous noise values present with probability p_2 = 0.1353. Then at the output of the second filter the signal-to-noise ratio rises further, and the additive mixture has been "cleaned" twice. The process can be continued in the same way, changing only the value of the multiplicity V_m = mΔV. When ΔV = 1, it is appropriate to use the well-known term "filter section" and define the new device as an m-section virtual filter. Simulation of virtual filters using the characteristic function with V_m = 1 has shown good results; the works [3, 7] present quantitative and graphical data obtained using equation (1.19).
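The probabilities quoted above for "white" Gaussian noise follow directly from (1.12); a one-line check (assuming Python's standard math module):

```python
import math

# Gaussian ch.f. with m1 = 0, sigma = 1: theta_1(Vm) = exp(-Vm**2 / 2)
for vm in (1, 2, 3):
    print(vm, round(math.exp(-vm**2 / 2), 4))
# yields 0.6065, 0.1353 and 0.0111, the values p1, p2, p3 given in the text
```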
As an example, Figure 1.3 shows plots of noise and signal suppression when filtering an additive mixture, where σ_0, σ_f are the noise MSD at the input and output of the filter, respectively; σ_s is the MSD of the signal; h_in is the signal-to-noise ratio at the filter input. The designation N is taken from expressions (1.42), (1.43). Analysis of the curves in Figure 1.3 shows that the filter suppresses signal and noise differently. "White" noise is suppressed with the help of the ch.f. by a factor of 5053, while the quasi-deterministic signal is attenuated only by a factor of 120. As a result, the signal-to-noise ratio at the output of a single-section virtual filter increased by an average of 30 times.

Figure 1.3. Filter suppression of noise (a) and signal (b): 1, 4 – N = 5; 2 – N = 10; 3 – N = 50
References

1. Lukacs E. Characteristic functions / trans. from English; ed. by V.M. Zolotarev. – Moscow: Nauka, 1979. – 424 p.

2. Veshkurtsev Yu.M. Applied analysis of the characteristic function of random processes. – Moscow: Radio and communication, 2003. – 204 p.

3. Veshkurtsev Yu.M., Veshkurtsev N.D., Titov D.A. Instrumentation based on the characteristic function of random processes. – Novosibirsk: publishing house ANS "SibAK", 2018. – 182 p.

4. Levin B.R. Theoretical Foundations of Statistical Radio Engineering. – Moscow: Sov. radio, 1966. –728 p.

5. Goryainov V.T., Zhuravlev A.G., Tikhonov V.I. Statistical radio engineering. Examples and tasks: textbook for universities / under. ed. by V.I. Tikhonov. – 2nd ed. revis. and add. – Moscow: Sov. radio, 1980. – 544 p.

6. Tsvetkov E.I. Fundamentals of the theory of statistical measurements. – Leningrad: Energoatomizdat. Leningrad department, 1986. – 256 p.

7. Veshkurtsev Yu.M., Veshkurtsev N.D., Titov D.A. Filtering in the probability space of an additive mixture of a non-centered quasi-deterministic signal and noise // Devices and systems. Management, control, diagnostics, 2018. – No. 3. – P. 18 – 23.

8. Bershtein I.L. Fluctuations of the amplitude and phase of a tube oscillator // Bulletin of the Academy of Sciences of the USSR, Physical Series, 1950. – V. 14. – No. 2. – P. 146 - 173.

9. Veshkurtsev Yu.M., Veshkurtsev N.D. Statistical control of substances. – Novosibirsk: publishing house ANS "SibAK", 2016. – 64 p.

10. Veshkurtsev Yu.M. Noise immunity and efficiency of the new modulation method // international scientific journal "Science and World", 2019. – No. 3 (67). Volume 2. – P. 8 - 16.

11. Gradshtein I.S., Ryzhik I.M. Tables of integrals, series and products. Under ed. by A. Jeffrey, D. Zwillinger. – 7th edition: Trans. from eng. under ed. by V.V. Maksimov. – St. Petersburg: publishing house "BHV-Petersburg", 2011. – 1232 p.

12. Veshkurtsev Yu.M. Noise immunity of a modem based on dynamic chaos according to the Veshkurtsev law in a channel with Gaussian noise // Digital Signal Processing, 2019. – No. 4. – P. 42 – 45.

13. Veshkurtsev Yu.M. Signal modulation and demodulation method // Electrosvyaz, 2019. – No. 5. – P. 66 – 69.

14. Veshkurtsev Yu.M. Theoretical foundations of statistical modulation of a quasi-random signal // international scientific journal "Science and World", 2019. – No. 4 (68). V. 1. – P. 36 – 46.

15. Tikhonov V.I. Statistical radio engineering. – Moscow: Soviet Radio, 1966. – 678 p.

16. Veshkurtsev Yu.M. Building a modulation theory using a new statistical law for the formation of a quasi-deterministic signal // international scientific journal "Science and World", 2019. – No. 5 (69). V.2. – P. 17 - 26.

17. Handbook of Special Functions / ed. by M. Abramowitz and I. Stegun; trans. from English, ed. by V.A. Ditkin and L.N. Karamzina. – Moscow: Main Editorial Board of Physical and Mathematical Literature, 1979. – 832 p.

18. Patent 2626554 RF, IPC N03S 5/00. Method of signal modulation / Yu.M. Veshkurtsev, N.D. Veshkurtsev, E.I. Algazin. – No. 2016114366; filed 13.04.2016, published 28.07.2017. Bull. No. 22.

19. Lyapunov A.M. On a theorem of the theory of probability. One general proposition of probability theory. A new form of the theorem on the limit of probabilities // Collected works: In 6 volumes. – Moscow, 1954. – V. 1. – P. 125 – 176.

20. Veshkurtsev Yu.M. Formation of a reference oscillation in the statistical analysis of phase fluctuations // Instruments and Experimental Technique, 1977. – No. 3. – P. 7 – 13.

21. Baskakov S.I. Radio engineering circuits and signals: Textbook for universities. – 2nd ed., rev. and enl. – Moscow: Higher School, 1988. – 448 p.

22. Sudakov Yu.I. Amplitude modulation and self-modulation of transistor generators. – Moscow: Energiya, 1969. – 392 p.

23. Patent 2626332 RF, IPC H04L 27/06. Signal demodulation method / Yu.M. Veshkurtsev, N.D. Veshkurtsev, E.I. Algazin. – No. 2016131149; filed 27.07.2016, published 26.07.2017. Bull. No. 21.

24. Veshkurtsev Yu.M., Titov D.A. Study of the signal modem model with a new modulation // Theory and Technology of Radio Communication, 2021. – No. 3. – P. 23 – 29.

25. Veshkurtsev Yu.M. Improving the noise immunity of the modem of digital systems with amplitude manipulation // Instruments and systems. Management, control, diagnostics, 2019. – No. 7. – P. 38 – 44.

26. Veshkurtsev Yu.M. New generation modem for future data transmission systems. Part 1 // Omsk Scientific Bulletin, 2018. – No. 4 (160). – P. 110 – 113.

27. Vilenkin S.Ya. Statistical processing of the results of the study of random functions: monograph. – Moscow: Energiya, 1979. – 320 p.

28. Veshkurtsev Yu. M., Titov D. A. Determination of probabilistic characteristics of random values of estimates of the Lyapunov function in describing a physical process // Metrology. 2021. No. 4. P. 53–67. https://doi.org/10.32446/0132-4713.2021-4-53-67

29. Veshkurtsev Yu.M. Study of a modem in a channel with smooth fading of a signal with a modulated characteristic function // Proceedings of the International Conference "Scientific Research of the SCO Countries: Synergy and Integration", Beijing, China, 28 October 2020. – Part 2. – P. 171 – 178. doi: 10.34660/INF.2020.71.31.026

30. Veshkurtsev Yu.M. New generation modem for future data transmission systems. Part 2 // Omsk Scientific Bulletin, 2018. – No. 5 (161). – P. 102 – 105.

31. Veshkurtsev Yu.M. New modem built in the space of probabilities // Current state and prospects for the development of special radio communication and radio control systems: Proceedings of the All-Russian jubilee scientific-technical conference. – Omsk: JSC "ONIIP", October 3 – 5, 2018. – P. 114 – 119.

32. Veshkurtsev Yu.M. Addition to the theory of statistical modulation of a quasi-deterministic signal with distribution according to the Tikhonov law // Elektrosvyaz, 2019. – No. 11. – P. 56 – 61.

33. Veshkurtsev Yu.M. Modem for receiving modulated signals using the Tikhonov distribution law // Instruments and systems. Management, control, diagnostics, 2019. – No. 8. – P. 24 - 31.

34. Veshkurtsev Yu.M. Modem noise immunity when receiving a signal with the distribution of instantaneous values according to the Tikhonov law // Digital Signal Processing, 2019. – No. 2. – P. 49 – 53.

35. Zyuko A.G., Falko A.I., Panfilov I.P. et al. Noise immunity and efficiency of information transmission systems: monograph / Ed. by A.G. Zyuko. – Moscow: Radio and Communication, 1985. – 272 p.

36. Zubarev Yu.B., Krivosheev M.I., Krasnoselsky I.N. Digital television broadcasting. Fundamentals, methods, systems: monograph. – Moscow: Publishing House of SRIR, 2001. – 568 p.

37. Veshkurtsev Yu.M. Investigation of the noise immunity of a modem of digital systems with amplitude shifting when operating in a channel with Gaussian noise // Instruments and systems. Management, control, diagnostics, 2019. – No. 9. – P. 28 – 33.

38. Bychkov E.D., Veshkurtsev Yu.M., Titov D.A. Noise immunity of a modem in a noisy channel when receiving a signal with the Tikhonov distribution // Instruments and systems. Management, control, diagnostics, 2020. – No. 3. – P. 38 – 43.

39. Lazarev Yu.F. MatLAB 5.x. – Kiev: Publishing group BHV, 2000. – 384 p. – ISBN 966-552-068-7.

40. Dyakonov V.P. MATLAB. Processing of signals and images. Special handbook. – St. Petersburg: Peter, 2002. – 608 p. – ISBN 5-318-00667-1.

41. Dyakonov V.P., Kruglov V.I. Mathematical extension packages MATLAB. Special handbook. – St. Petersburg: Peter, 2001. – 480 p. – ISBN 5-318-00004-5.

42. Sergienko A.B. Adaptive filtering algorithms: implementation features in MATLAB // Exponenta Pro, 2003. – No. 1. – P. 11 – 20.

43. Dyakonov V.P. MATLAB 6/6.1/6.5 + Simulink 4/5. Basics of application. – Moscow: SOLON-Press, 2004. – 768 p. – ISBN 5-98003-007-7.

44. Solonina A.I. Digital signal processing. Simulation in Simulink. – St. Petersburg: BHV-Petersburg, 2012. – 425 p. – ISBN 978-5-9775-0686-1.

45. Dyakonov V.P. Digital signal processing. Simulation in Simulink. – Moscow: DMK-Press, 2008. – 784 p. – ISBN 978-5-94074-423-8.

46. Simulink Environment Fundamentals. Blocks. The MathWorks. URL: https://ch.mathworks.com/help/referencelist.html?type=block (date of access: 21.04.2021).
