An analytical approach to compute lower bounds of 𝝁-values

Article history: Received 17 January 2020; Received in revised form 5 May 2020; Accepted 7 May 2020

The lower bounds of μ-values for a family of square real- and complex-valued matrices are computed analytically. The proposed methodology consists of factorizing an admissible perturbation from a set of block diagonal matrices into a block diagonal matrix. The computation of the lower bounds of μ-values is then carried out by computing the spectral radius and the numerical radius of the matrix under consideration. The lower bounds of the μ-value provide conditions that guarantee the instability of the linear feedback system.


Introduction
The numerical approximation of eigenvalues corresponding to a family of matrices plays a vital role in science and engineering. For instance, the largest eigenvalue of a Leslie matrix, computed by the power method, describes the long-term growth rate of a population. Likewise, vibration frequencies are described by the eigenvalues of matrices appearing in structural mechanics, and eigenvalues measure the data variance that can be used for dimensionality reduction in multivariate data analysis.
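The power-method computation mentioned above can be sketched as follows; the Leslie matrix entries are illustrative values, not taken from any cited source:

```python
import numpy as np

# Illustrative Leslie matrix: first row holds fecundities,
# sub-diagonal holds survival rates between age classes.
L = np.array([
    [0.0, 1.5, 1.2],
    [0.6, 0.0, 0.0],
    [0.0, 0.8, 0.0],
])

def power_method(A, iters=500):
    """Approximate the dominant eigenvalue of A by repeated multiplication."""
    x = np.ones(A.shape[0])
    lam = 0.0
    for _ in range(iters):
        y = A @ x
        lam = np.linalg.norm(y)   # converges to |lambda_max|
        x = y / lam
    return lam

growth_rate = power_method(L)
# Agrees with the largest-magnitude eigenvalue from a dense eigensolver.
print(abs(growth_rate - max(abs(np.linalg.eigvals(L)))) < 1e-8)
```

For a primitive Leslie matrix the dominant eigenvalue is real and positive, so the norm-ratio iterate above converges to the long-term growth rate itself.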
The eigenvalues are the roots of the characteristic polynomial, and these roots can be computed by iterative methods. The class of matrices arising from real systems possesses uncertainties, and the computation of properties for such matrices is NP-hard (Nemirovskii, 1993; Braatz et al., 1994). The μ-value is a well-known tool available in the MATLAB Control Toolbox (Doyle, 1982; Safonov, 1982; Safonov and Doyle, 1984) and has been used to discuss the stability, instability, performance, and robustness of feedback systems in linear control. Numerical methods (Packard et al., 1988; Fan and Tits, 1986; Packard and Doyle, 1993) are used to approximate bounds of the μ-value, but unfortunately the computation of its exact value is NP-hard (Braatz et al., 1994).
Classes of positive matrices, such as symmetric matrices and Hermitian positive definite matrices, are widely used in mathematics and in various engineering applications, for instance computer vision (Nemirovskii, 1993), machine learning (Kishida and Braatz, 2014), and convex optimization (Braatz et al., 1994). The Linear Matrix Inequality (LMI) technique, based on the positive definiteness of a matrix, is widely used to study the stability of feedback systems in linear control. In control, however, various control systems are designed on top of state-space models that are symmetric in nature; this includes power networks and electrical networks (Doyle, 1982).
The algorithm of Dailey (1990) provides tighter lower bounds for μ-values when real uncertainties are under consideration. It is based on simple matrix-algebra operations and iterates with respect to only one variable, returning not only the size of the worst-case parameter but also its actual value. In Karamancıoğlu and Kasimbeyli (2011), a non-linear programming technique is introduced to approximate tighter lower bounds of real μ-values. The real structured singular value (RSSV) problem is formulated as a non-linear programming problem, which is then solved using the F-modified sub-gradient (F-MSG) technique to compute lower bounds of the structured singular value. The F-MSG algorithm solves a large class of non-convex optimization problems without requiring differentiability.
In Kim et al. (2009), a geometrical approach is introduced to approximate the lower bounds of μ-values for purely real repeated perturbations. The formulation is such that the resulting parametric search space does not depend on how many times a parameter is repeated in the structured perturbation matrix.
In Fabrizi et al. (2014), a detailed comparison of the developed numerical methods is presented. Each numerical method listed in Fabrizi et al. (2014) approximates, and gives improved results for, lower bounds of μ-values.
A gain-based lower bound algorithm is presented in Seiler et al. (2010) to compute lower bounds of μ-values. The algorithm takes both real and mixed perturbations along with the given matrix whose μ-value computation is under consideration. Its key idea is to make use of worst-case gain problems to approximate the real perturbation, while a standard power algorithm computes the complex perturbation.
In Rehman et al. (2019), the lower bounds of μ-values are approximated by means of a low-rank ODE-based technique. The proposed methodology is a two-level, inner-outer algorithm. In most cases, this iterative method approximates tighter lower bounds of μ-values than the well-known MATLAB function mussv.
This manuscript is organized as follows. Section 2 consists of mathematical preliminaries, which include the basic definitions of the spectrum, the pseudospectrum, and μ-values. In Section 3 we present our methodology, which consists of decomposing the uncertainty from a set of block diagonal matrices into block diagonal matrices that have identity matrices along the principal diagonal. Section 4 summarizes our findings.

Mathematical preliminaries
The n-dimensional real or complex matrices are denoted by K^{n,n}, with K = C (or R). Real and complex scalars are denoted by K = C (or R). Complex vectors are denoted by C^n.
For a complex matrix M, M* denotes the complex conjugate transpose. The n × n identity matrix is denoted by I_n, with n representing its dimension.
The spectrum of a matrix is denoted by Λ(⋅), while Λ_ϵ(⋅) denotes the ϵ-pseudospectrum. The notation ∥⋅∥ denotes the norm of a matrix or vector. The notations B and B* denote the sets of block diagonal matrices having mixed real and complex uncertainties and purely complex uncertainties, respectively. ρ(⋅) denotes the spectral radius of a matrix (⋅).
Definition 2.1: The spectrum (set of eigenvalues) of an n-dimensional complex matrix M ∈ C^{n,n} is defined as:

Λ(M) := {λ ∈ C : Mx = λx for some non-zero x ∈ C^n}.

Definition 2.2: The ϵ-pseudospectrum of a complex-valued matrix M ∈ C^{n,n} with a small parameter ϵ > 0 is defined as:

Λ_ϵ(M) := {λ ∈ C : λ ∈ Λ(M + Δ) for some Δ ∈ C^{n,n} with ∥Δ∥ ≤ ϵ}.
Definition 2.3: For ϵ > 0, a scalar λ ∈ C (or R) belongs to the ϵ-pseudospectrum of M ∈ C^{n,n} if and only if it satisfies the following equivalent properties:

λ ∈ Λ(M + Δ) for some Δ with ∥Δ∥ ≤ ϵ;  ∥(λI − M)^{−1}∥ ≥ ϵ^{−1};  σ_min(λI − M) ≤ ϵ.
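The characterization σ_min(λI − M) ≤ ϵ in Definition 2.3 gives a direct numerical membership test for the ϵ-pseudospectrum. The following minimal sketch applies it to an illustrative non-normal matrix (the matrix is chosen for demonstration and does not come from the text):

```python
import numpy as np

# Illustrative highly non-normal matrix: its pseudospectra are much
# larger than small disks around the eigenvalues 1 and 2.
M = np.array([[1.0, 100.0],
              [0.0,   2.0]])

def in_pseudospectrum(lam, M, eps):
    """Test lam ∈ Λ_ϵ(M) via the smallest singular value of lam*I - M."""
    n = M.shape[0]
    smin = np.linalg.svd(lam * np.eye(n) - M, compute_uv=False)[-1]
    return smin <= eps

# An eigenvalue belongs to Λ_ϵ(M) for every ϵ > 0 ...
print(in_pseudospectrum(1.0, M, 1e-8))        # True: 1 ∈ Λ(M)
# ... and non-normality inflates the pseudospectrum beyond Λ(M).
print(in_pseudospectrum(1.5 + 0.5j, M, 1e-2))  # True, despite λ ∉ Λ(M)
```

Scanning this test over a grid of λ values is the standard way to plot pseudospectral level sets.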
Definition 2.6: For M ∈ C^{n,n} and the set B, the μ-value is defined as:

μ_B(M) := 1 / min{∥Δ∥₂ : Δ ∈ B, det(I − MΔ) = 0},

with μ_B(M) := 0 if det(I − MΔ) ≠ 0 for all Δ ∈ B. Here ∥Δ∥₂ denotes the largest singular value of an admissible perturbation Δ ∈ B and det(⋅) denotes the determinant of the matrix (⋅).
Definition 2.7: For M ∈ C^{n,n} and the set B*, the μ-value is defined as:

μ_{B*}(M) := 1 / min{∥Δ∥₂ : Δ ∈ B*, det(I − MΔ) = 0},

with μ_{B*}(M) := 0 if no such Δ ∈ B* exists.
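As a concrete illustration of Definition 2.6, consider the simplest admissible structure, a single full complex block. It is a standard fact (not specific to this paper) that the smallest Δ with det(I − MΔ) = 0 is then the rank-one matrix Δ = (1/σ₁) v₁ u₁ᴴ built from the SVD M = U Σ Vᴴ, so that μ(M) = σ₁ = ∥M∥₂. The sketch below verifies this with an arbitrary test matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))   # arbitrary test matrix

# SVD M = U diag(s) Vh; the smallest destabilizing full-block
# perturbation is Δ = (1/σ₁) v₁ u₁ᴴ, of 2-norm 1/σ₁.
U, s, Vh = np.linalg.svd(M)
delta = (1.0 / s[0]) * np.outer(Vh[0].conj(), U[:, 0].conj())

# M Δ = u₁ u₁ᴴ, which has eigenvalue 1, so I − MΔ is singular.
print(abs(np.linalg.det(np.eye(4) - M @ delta)))   # ≈ 0
# ‖Δ‖₂ equals 1/σ₁, hence μ(M) = 1/‖Δ‖₂ = σ₁ for this structure.
print(np.linalg.norm(delta, 2), 1.0 / s[0])
```

For structured sets B with repeated or real blocks no such closed form exists, which is why lower-bound algorithms are needed.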

Computing structured singular values
In this section, we present an analytical approach for the computation of lower bounds of μ-values. The proposed methodology is based on the idea of factorizing the set of block diagonal uncertainties into diagonal matrices having identity matrices along the principal diagonal. Furthermore, the proposed methodology involves the computation of the spectral and numerical radii of a given matrix, which in turn yields the lower bounds of μ-values.
Definition 3.1: The modified block diagonal matrix is denoted by B and is defined as:
Let Γ_n be the space of n-dimensional complex matrices and let H_n be the space of n-dimensional Hermitian matrices. Let x be a unit vector.

Definition 3.4: The numerical range W(A) for A ∈ Γ_n is defined as:

W(A) = {x*Ax : x ∈ C^n, ∥x∥ = 1}.

Definition 3.5: The numerical radius r(A) of A ∈ Γ_n is defined as:

r(A) = max{|z| : z ∈ W(A)},

where ẑ ∈ C with |ẑ| = 1 is a scalar on the unit circle Ĉ. For Δ = diag(δ̂₁Î₁, δ̂₂Î₂, ⋯, δ̂ₙÎₙ) we assume that each δ̂ᵢÎᵢ, ∀i = 1:n, with δ̂ᵢ ∈ Ĉ and Îᵢ an identity block, is an invertible matrix. From here onwards, we write δ̂ᵢÎᵢ = Dᵢ, ∀i = 1:n.

Theorem 3.6 constructs a block diagonal matrix having identity matrices along its principal diagonal. This idea makes it possible to compute the spectral and numerical radii of a given matrix rather than computing these quantities for the product of the given matrix with an admissible perturbation.

Theorem 3.6: Let A and B be n-dimensional square matrices which are identically partitioned into block diagonal matrices:
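As a numerical companion to Definition 3.5, the numerical radius can be evaluated through the known characterization r(A) = max_θ λ_max((e^{iθ}A + e^{−iθ}A*)/2), scanning the rotation angle θ over the unit circle. This formula is a standard identity, not taken from the present text; the sketch below uses it:

```python
import numpy as np

def numerical_radius(A, n_angles=3600):
    """Approximate r(A) = max_θ λ_max((e^{iθ}A + e^{-iθ}A*)/2)."""
    r = 0.0
    for theta in np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False):
        # H is Hermitian by construction, so eigvalsh applies.
        H = (np.exp(1j * theta) * A + np.exp(-1j * theta) * A.conj().T) / 2
        r = max(r, np.linalg.eigvalsh(H)[-1])
    return r

# For a Hermitian matrix the numerical radius equals the spectral radius.
A = np.array([[2.0, 1.0],
              [1.0, -3.0]])
print(numerical_radius(A))
print(max(abs(np.linalg.eigvals(A))))
```

For non-normal matrices r(A) can be strictly smaller than ∥A∥₂; for instance, the 2 × 2 Jordan block with a single off-diagonal 1 has numerical radius 1/2.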
Proof: The block diagonal matrices A(i), ∀i = 1:n, and B(i), ∀i = 1:n, are identically partitioned, which implies that each pair of corresponding blocks possesses the same numbers of columns and rows. The partitioning of the block diagonal matrices can be represented by a non-zero vector v⃗ of length n + 1; in this situation, the i-th block, ∀i = 1:n, possesses (v_{i+1} − v_i) non-zero columns and rows.
Since A(i)B(i) = C(i), ∀i = 1:n, each C(i) can be described in terms of a block diagonal matrix possessing the same number of non-zero columns and rows. The matrix B is partitioned as B = [B₁, B₂, ⋯, Bₙ] with B₁, B₂, ⋯, Bₙ being the block columns of B. The submatrices of B outside the diagonal blocks are null matrices, because B is a block diagonal matrix. On the other hand, A(i)B(j) = 0 whenever i ≠ j. This shows that the product AB is itself block diagonal with the same partition. We give Theorem 3.7 in order to compute the spectral radius of a given matrix M. Furthermore, Theorem 3.7 shows that λ ∈ Λ(M) attains the maximum value exactly equal to 1.
Theorem 3.8: Let M be a non-negative real-valued stochastic matrix such that, for some power k, M^k > 0, that is, the matrix M^k is positive. Then there exists a unique eigenvector w > 0 such that ρ(M) = 1 and, for the all-ones vector e = (1, 1, ⋯, 1), Me = e.
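Theorem 3.8 can be illustrated on a small primitive row-stochastic matrix (entries chosen purely for illustration): the all-ones vector is a right eigenvector with eigenvalue 1, and the spectral radius equals 1:

```python
import numpy as np

# Rows sum to 1; M² > 0, so M is primitive (some power is positive).
M = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.4, 0.4, 0.2]])

e = np.ones(3)
print(np.allclose(M @ e, e))                       # e is an eigenvector
print(max(abs(np.linalg.eigvals(M))))              # spectral radius = 1
```

The positive left eigenvector of M (the stationary distribution, unique up to scaling) plays the role of w in the theorem.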
Theorem 3.9 shows that the eigenspectrum of a Hermitian matrix contains a maximum eigenvalue exactly equal to 1, that is, |λ_max(M)| = 1,
where I is an identity matrix.

Conclusion
An analytical method is presented, based on the decomposition of an admissible perturbation from a set of block diagonal matrices into a block diagonal matrix having identity matrices along the principal diagonal. The main contribution is the computation of the lower bounds of μ-values. The proposed methodology is based on the idea of computing the spectral and numerical radii of the perturbed matrix. The structured singular value is a tool in control for the stability analysis of linear systems, and the lower bounds of the structured singular value provide a measure for the instability analysis of feedback systems in linear control.