Approximating structured singular values for Chebyshev spectral differentiation matrices

In this article, we present the numerical computation of lower bounds of the structured singular value, known as the µ-value, for a family of Chebyshev spectral differentiation matrices. The µ-value is a versatile tool in control theory for analyzing the robustness, performance, stability, and instability of feedback systems. The proposed methodology is based on a low-rank ordinary differential equation technique and provides tight lower bounds of the µ-value when compared with the well-known MATLAB routine mussv available in the MATLAB control toolbox.


Introduction
The structured singular value, known as the µ-value and defined by Packard and Doyle (1993), is a valuable tool in system theory for analyzing the robustness and performance of uncertain control systems. The µ-value is applicable to the stability analysis of control systems with the help of the main loop theorem discussed in Packard and Doyle (1993). However, further analysis is needed in order to deal with complex robustness questions.
The structures addressed by the µ-value are generic in nature. In principle, these structures allow us to cover all kinds of uncertainties and perturbations that can be included in linear control systems with the help of both real and complex linear fractional transformations (LFTs). For applications and examples of structured singular values, interested readers may see Bernhardsson et al. (1998), Hinrichsen and Pritchard (2005), Chen et al. (1996), Zhou et al. (1996), Qiu et al. (1995), Karow (2011), and Karow et al. (2006) and the references therein.
Unfortunately, the exact computation of the structured singular value is not a trivial task and is an NP-hard problem; for more details, see Braatz et al. (1994). In the case of purely real perturbations, even approximating the µ-value is NP-hard. In practice, the computation of the µ-value therefore relies on the approximation of both lower and upper bounds.
For the special case when only repeated parametric perturbations are allowed, lower bounds are particularly valuable, because the upper bound can be conservative, especially when repeated parametric perturbations occur. The widely used MATLAB routine mussv approximates an upper bound by means of a diagonal balancing technique; for further details, readers may consult Young et al. (1992) and a linear matrix inequality (LMI) technique developed in Fan et al. (1991). The lower bound of the µ-value is approximated by means of a power method; the interested reader may consult Packard et al. (1988) and Young et al. (1994). The algorithm resembles the power method for approximating the largest eigenvalue and the largest singular value of a given matrix.
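As a hedged illustration (not the mussv implementation, whose internals differ), the following sketch shows the kind of power iteration that such lower-bound algorithms resemble, here approximating the largest singular value of a matrix; the function name and parameters are our own.

```python
import numpy as np

# A sketch of power iteration for the largest singular value of M:
# repeatedly apply M*M to a random vector and normalize.
def power_sigma_max(M, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(M.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = M.conj().T @ (M @ v)      # apply M*M
        v = w / np.linalg.norm(w)
    return np.linalg.norm(M @ v)      # estimate of sigma_max(M)

M = np.array([[2.0, 1.0], [0.0, 3.0]])
est = power_sigma_max(M)
```

The iterate converges linearly at a rate governed by the gap between the two largest singular values.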
In this paper, we present numerical approximations to lower bounds of the µ-values of Chebyshev spectral differentiation matrices, where the underlying perturbations are purely complex or mixed real and complex uncertainties. The proposed methodology for approximating the lower bounds of the µ-value is based on a two-level, inner-outer algorithm (Rehman and Tabassum, 2017).
In Section 2, we focus on the basic framework of the problem under consideration. We describe how the approximation of the µ-values can be addressed by means of a two-level algorithm, that is, an inner algorithm and an outer algorithm. In Section 3 of this article, we introduce the inner algorithm for the case of purely complex uncertainties. The outer algorithm is presented in Section 4. Finally, in Section 5, we give numerical experiments comparing the lower bounds of µ-values for Chebyshev spectral differentiation matrices obtained with the algorithm of Rehman and Tabassum (2017) to those obtained with the MATLAB function mussv.

Framework
Let M ∈ C^{n,n} (or R^{n,n}), where C^{n,n} denotes the family of complex n × n matrices and R^{n,n} the family of real n × n matrices throughout this article, and let B ⊂ C^{n,n} be an underlying perturbation set of block diagonal matrices with prescribed repeated real scalar blocks, repeated complex scalar blocks, and full complex blocks along the main diagonal.
The following definition is given in Packard and Doyle (1993), where I denotes the n × n identity matrix.
Definition 2.1. Let M ∈ C^{n,n}, consider the set B of block diagonal matrices described above, and let ∆ ∈ B be an admissible perturbation. Then the structured singular value of M is denoted by µ_B(M) and is defined as

µ_B(M) = (min{‖∆‖₂ : ∆ ∈ B, det(I − M∆) = 0})⁻¹,

with µ_B(M) = 0 if det(I − M∆) ≠ 0 for all ∆ ∈ B. For a structured set B, the minimum is taken over fewer perturbations than in the unstructured case, so the structured singular value becomes smaller and the largest singular value of M provides an upper bound. In the important case when the underlying perturbation set allows purely complex perturbations, we write B* instead of B.
For ∆ ∈ B*, the rotated perturbation e^{iθ}∆ belongs to B* for any θ ∈ R. Thus we may choose ∆ ∈ B* such that the spectral radius attains the value one, that is, ρ(M∆) = 1; this is possible exactly when there is a ∆ ∈ B* of the same norm such that the matrix M∆ possesses an eigenvalue equal to one, so that the matrix (I − M∆) is singular. This gives the following alternative definition of the structured singular value:

µ_B*(M) = max{ρ(M∆) : ∆ ∈ B*, ‖∆‖₂ ≤ 1}. (3)

In Eq. 3, the quantity ρ(⋅) denotes the spectral radius of a matrix.
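As a hedged numerical illustration of Eq. 3 for the simplest structure, a single full complex block, the smallest destabilizing perturbation can be built from the dominant singular triple of M; in that case the maximal spectral radius over unit-norm perturbations equals the largest singular value. The matrix M below is an arbitrary example of ours.

```python
import numpy as np

# For one full complex block: Delta = v1 u1^* / sigma1 has 2-norm 1/sigma1
# and makes I - M*Delta singular; scaled to unit norm, rho(M*Delta) = sigma1.
M = np.array([[1.0, 2.0], [3.0, 4.0]])
U, s, Vh = np.linalg.svd(M)
u1, v1, sigma1 = U[:, 0], Vh[0, :].conj(), s[0]

Delta = np.outer(v1, u1.conj()) / sigma1              # smallest destabilizing Delta
singular_check = np.linalg.det(np.eye(2) - M @ Delta)  # should be ~ 0

Delta_unit = sigma1 * Delta                            # ||Delta_unit||_2 = 1
rho = max(abs(np.linalg.eigvals(M @ Delta_unit)))      # spectral radius
```

Here M∆_unit = σ₁u₁u₁*, whose nonzero eigenvalue is exactly σ₁.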

Reformulation of definition of SSV
The structured spectral value set of M ∈ C^{n,n} with respect to a perturbation level ε > 0 is defined as

Λ_ε(M) = {λ ∈ Λ(εM∆) : ∆ ∈ B, ‖∆‖₂ ≤ 1}, (4)

where the set Λ(•) denotes the set of eigenvalues of a matrix. For purely complex uncertainties B*, the set in Eq. 4, denoted Λ*_ε(M), is simply a disk centered at the origin. Thus, for purely complex uncertainties, Eq. 3 can be reformulated as

µ_B*(M) = 1 / min{ε > 0 : max{|λ| : λ ∈ Λ*_ε(M)} = 1}. (5)
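The disk property can be checked numerically in the full complex block case: every eigenvalue produced by a unit-norm perturbation at level ε lies in the disk of radius ε·σ_max(M), and the radius is attained. This is a hedged sketch with an illustrative matrix of ours, not one of the paper's test matrices.

```python
import numpy as np

rng = np.random.default_rng(4)
M = np.array([[1.0, 2.0], [0.0, 1.0]])
eps = 0.5
radius = eps * np.linalg.norm(M, 2)        # radius of the spectral value disk

worst = 0.0
for _ in range(200):                        # random unit 2-norm perturbations
    D = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
    D /= np.linalg.norm(D, 2)
    worst = max(worst, max(abs(np.linalg.eigvals(eps * M @ D))))

U, s, Vh = np.linalg.svd(M)
D_opt = np.outer(Vh[0].conj(), U[:, 0].conj())   # perturbation attaining the radius
attained = max(abs(np.linalg.eigvals(eps * M @ D_opt)))
```

Random sampling stays inside the disk, while the perturbation built from the dominant singular vectors reaches its boundary.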

Overview of the proposed methodology
For a fixed parameter ε > 0, we need to solve the maximization problem

λ(ε) = arg max{|λ| : λ ∈ Λ*_ε(M)}. (6)

From the above discussion, it is clear that the quantity µ_B*(M) is the reciprocal of the minimal value of ε for which |λ(ε)| = 1. In the inner algorithm, we solve the problem addressed in Eq. 6. In the outer algorithm, we vary the perturbation level ε by means of a fast Newton method, which exploits knowledge of the computed extremizers. We address Eq. 6 by solving a system of ordinary differential equations (ODEs).

Computation of local extremizers
In this section, we consider the solution of the problem in Eq. 6 by means of the inner algorithm. We use the following standard eigenvalue perturbation result of Kato (1980): for a smooth matrix family A(t) possessing a simple eigenvalue λ(t),

λ′(0) = (y₀* A′(0) x₀) / (y₀* x₀),

where x₀ and y₀ are the right and left eigenvectors of A₀ = A(0) associated with the simple eigenvalue λ₀ = λ(0), that is, (A₀ − λ₀I)x₀ = 0 and y₀*(A₀ − λ₀I) = 0.
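The perturbation formula can be verified numerically against a finite difference; the sketch below uses a random matrix family A(t) = A₀ + tE of our own choosing.

```python
import numpy as np

rng = np.random.default_rng(1)
A0 = rng.standard_normal((4, 4))
E = rng.standard_normal((4, 4))

w, V = np.linalg.eig(A0)
k = np.argmax(abs(w))                   # a simple eigenvalue of A0
lam0, x0 = w[k], V[:, k]
wl, W = np.linalg.eig(A0.conj().T)      # left eigenvectors via A0^*
y0 = W[:, np.argmin(abs(wl.conj() - lam0))]

# Kato's first-order formula: lambda'(0) = y0^* E x0 / (y0^* x0)
deriv_formula = (y0.conj() @ E @ x0) / (y0.conj() @ x0)

h = 1e-7                                # finite-difference comparison
wp = np.linalg.eig(A0 + h * E)[0]
lam_h = wp[np.argmin(abs(wp - lam0))]   # track the same eigenvalue
deriv_fd = (lam_h - lam0) / h
```

The two derivative values agree up to the finite-difference truncation error.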
Definition 3.1.1. An admissible perturbation ∆ ∈ B* with ‖∆‖₂ ≤ 1 such that, for some fixed parameter ε > 0, the matrix (εM∆) has a largest eigenvalue λ which maximizes the modulus over the structured spectral value set Λ*_ε(M) is known as a local maximizer.
In the next theorem, we replace the full blocks of an extremizer by rank-1 matrices. Let ∆ ∈ B* with ‖∆‖₂ = 1 be an extremizer of the structured spectral value set Λ*_ε(M), and let λ, x, y be as given in Theorem 3.1.2. Assume additionally that the nondegeneracy condition of Eq. 10 holds and that every full block of ∆ possesses a singular value exactly equal to 1. Then the matrix ∆ = diag(δ₁I_{r₁}, …, δ_sI_{r_s}; σ₁u₁v₁*, …, σ_f u_f v_f*), in which every full block is replaced by a rank-1 matrix, is also a local extremizer of the structured spectral value set.
Remark 3.1.4. Theorem 3.1.3 allows us to restrict the admissible perturbations, that is, the uncertainties in the spectral value set of Eq. 4. Making use of the fact that the Frobenius norm and the matrix 2-norm of a rank-1 matrix coincide, we may search for extremizers within the sub-manifold B*₁ of perturbations in B* whose full blocks are rank-1 matrices of unit Frobenius norm.
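The norm identity invoked here is easy to confirm numerically: for a rank-1 matrix uv*, both the 2-norm and the Frobenius norm equal ‖u‖₂‖v‖₂ (the vectors below are illustrative choices of ours).

```python
import numpy as np

rng = np.random.default_rng(2)
u = rng.standard_normal(5) + 1j * rng.standard_normal(5)
v = rng.standard_normal(3) + 1j * rng.standard_normal(3)
R = np.outer(u, v.conj())         # rank-1 matrix u v^*

norm2 = np.linalg.norm(R, 2)      # spectral norm
normF = np.linalg.norm(R, 'fro')  # Frobenius norm
```

Since a rank-1 matrix has a single nonzero singular value, the two norms coincide, so a unit Frobenius norm constraint on the rank-1 blocks loses nothing against the 2-norm constraint.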

System of ODEs to approximate extremal points of Λ*_ε(M)
In order to approximate a local maximizer of the structured spectral value set Λ*_ε(M), we aim to construct and then solve an ODE for a matrix valued function ∆(t). The matrix valued function ∆(t) ∈ B*₁ is chosen so that the modulus of the largest eigenvalue λ ∈ Λ*_ε(M) of the matrix family (εM∆(t)) achieves the maximal growth. Our next aim is to derive a gradient system of ODEs, together with a suitable choice of the initial admissible perturbation ∆(0).

The orthogonal projection onto B*
Lemma 3.2.1. For C ∈ C^{n,n}, consider the entrywise product of C with the pattern matrix of the structure B*, which yields a block diagonal matrix. The orthogonal projection of the matrix C onto the family B* is then obtained blockwise: each repeated scalar block is given by the normalized trace dᵢ = trace(Cᵢ)/rᵢ of the corresponding diagonal block Cᵢ of C, for i = 1:s, and each full block of C is copied unchanged, for j = 1:f; all entries outside the block structure are set to zero.
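A hedged sketch of such a blockwise projection, for a structure consisting of repeated complex scalar blocks followed by full blocks (the block sizes and the function name are illustrative assumptions, since the paper's pattern-matrix notation is more general):

```python
import numpy as np

def project_onto_structure(C, scalar_sizes, full_sizes):
    # Orthogonal projection of C onto block-diagonal matrices with
    # repeated-scalar blocks d_i*I_{r_i} followed by full blocks.
    P = np.zeros_like(C, dtype=complex)
    pos = 0
    for r in scalar_sizes:                       # scalar blocks: normalized trace
        blk = C[pos:pos + r, pos:pos + r]
        P[pos:pos + r, pos:pos + r] = (np.trace(blk) / r) * np.eye(r)
        pos += r
    for m in full_sizes:                         # full blocks: copied unchanged
        P[pos:pos + m, pos:pos + m] = C[pos:pos + m, pos:pos + m]
        pos += m
    return P

C = np.arange(16, dtype=float).reshape(4, 4) + 0j
P = project_onto_structure(C, scalar_sizes=[2], full_sizes=[2])
```

Orthogonality means the residual C − P(C) is perpendicular to every structured matrix; since P(C) is itself structured, the inner product of P(C) with the residual vanishes, which is a convenient check.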

The optimization problem
We consider the case where λ = |λ|e^{iθ} is the simple and largest eigenvalue, with corresponding right and left eigenvectors x and y, respectively, normalized as in Theorem 3.1.2. From the result of Lemma 3.2.1, we obtain an expression for the change in the largest eigenvalue along a perturbation direction. Now, considering a suitable perturbation ∆(t) ∈ B*₁ with B*₁ as in Eq. 11, we search for the direction Z that maximizes the growth of the modulus of the largest eigenvalue λ. For this we need to determine the direction Z as given in Eq. 16, which solves the following optimization problem:

Z = arg max{Re(y* M Z x)} subject to Re(δ̄ᵢ zᵢ) = 0, ∀ i = 1:s, and Re⟨∆ⱼ, Ωⱼ⟩ = 0, ∀ j = 1:f. (17)

The linear constraints in the maximization problem of Eq. 17 ensure that Z lies in the tangent space of the manifold B*₁ at ∆(t); in particular, they ensure that the norm of each block of the admissible perturbation ∆(t) remains conserved. The normalization constant appearing in the first constraint is the reciprocal of the absolute value of the quantity on the right-hand side of the expression for the scalar blocks, if this quantity is nonzero, and equals one otherwise. Similarly, the normalization constant in the second constraint is the reciprocal of the Frobenius norm of the quantity on the right-hand side of the expression for Ωⱼ, if this is nonzero, and equals one otherwise. Note also that if the quantities appearing on the right-hand sides are nonzero, then Z ∈ B*₁.

Corollary 3.3.2.
The result of the previous Lemma 3.3.1 can be written as in Eq. 18. In Eq. 18, the quantity P_{B*}(•) denotes the orthogonal projection onto the manifold of pattern matrices. Also, D₁, D₂ ∈ B* are the corresponding scaling matrices, with D₁ positive.

Gradient system of ordinary differential equations
Lemma 3.3.1 and Corollary 3.3.2 suggest focusing on the following differential equation on the manifold of rank-1 perturbations B*₁.
The vector x(t) is an eigenvector associated with the simple largest eigenvalue λ(t) of the matrix family (εM∆(t)) for a fixed parameter ε > 0. Note also that the quantities x(t), D₁, and D₂ depend on the matrix valued function ∆(t). The differential Eq. 19 represents a gradient system of ODEs, since its right-hand side is nothing but the projected gradient of the objective functional maximized in Eq. 17.
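A hedged Euler discretization of a gradient flow of this type for a single full complex block is sketched below; the step size, iteration count, and retraction by renormalization are our illustrative choices rather than the paper's scheme. The free gradient of |λ(M∆)| with respect to ∆ follows from the eigenvalue perturbation formula (the factor ε is dropped since it only rescales the flow), and for one full block the flow should approach max |λ| = σ_max(M).

```python
import numpy as np

rng = np.random.default_rng(3)
M = np.array([[0.0, 1.0], [2.0, 0.0]])            # sigma_max(M) = 2
Delta = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
Delta /= np.linalg.norm(Delta, 'fro')

h = 0.05
for _ in range(3000):
    A = M @ Delta
    w, V = np.linalg.eig(A)
    k = np.argmax(abs(w))
    lam, x = w[k], V[:, k]                         # largest eigenvalue, right eigvec
    wl, W = np.linalg.eig(A.conj().T)
    y = W[:, np.argmin(abs(wl - np.conj(lam)))]    # matching left eigenvector
    # free gradient of |lambda(M Delta)| with respect to Delta
    G = (lam / abs(lam)) / np.conj(y.conj() @ x) * np.outer(M.conj().T @ y, x.conj())
    Delta = Delta + h * G                          # Euler step
    Delta /= np.linalg.norm(Delta, 'fro')          # retract to the unit sphere

rho_final = abs(lam)
```

Since ρ(M∆) ≤ ‖M‖₂‖∆‖_F, the iterates can never exceed σ_max(M), and the flow typically stalls only at stationary points of the modulus.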

Choice of initial value matrix ∆₀ and perturbation level ε₀
In our two-level algorithm, for each new value of the perturbation level ε, we make use of the admissible perturbation ∆ obtained for the previous value of the perturbation level as the initial value matrix for the system of ODEs in Eq. 19.
Assume that the given matrix M is invertible; then the matrix I − ε₀M∆₀ can be expressed as I − ε₀M∆₀ = M(M⁻¹ − ε₀∆₀). To compute the initial choice of the admissible perturbation ∆₀, we perform an asymptotic analysis around ε₀ ≈ 0. To achieve this, we consider the matrix family M(ε) = M⁻¹ − ε∆₀, and let λ(ε) be the eigenvalue of the matrix family M(ε) possessing the smallest modulus. Let x and y denote the right and left eigenvectors corresponding to the initial eigenvalue λ(0) = λ₀ = |λ₀|e^{iθ₀}, scaled such that y*x > 0. From Lemma 3.1, we obtain the derivative of λ(ε) at ε = 0. To achieve the locally maximal decline of |λ(ε)|² at ε = 0, we take the initial perturbation as in Eq. 20, where the matrix appearing there is diagonal and positive definite, and the initial admissible perturbation ∆₀ ∈ B*₁. On the other hand, a very natural choice of ε₀ is the reciprocal of the upper bound for the structured singular value approximated by the MATLAB routine mussv.

Outer algorithm
In the following, we let λ(ε) denote the maximizers obtained by approximating the stationary points of the gradient system of ODEs in Eq. 19.
To make use of a fast Newton method for the solution of the equation |λ(ε)| = 1, we approximate the derivative of |λ(ε)| − 1 = 0 with respect to the perturbation level ε.
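The outer iteration can be sketched as a Newton method on g(ε) = |λ(ε)| − 1 with a finite-difference derivative. As a hedged stand-in for the inner algorithm's maximizer, we fix the optimal perturbation for a single full complex block, so that |λ(ε)| = ε·σ_max(M) and the exact answer µ = σ_max(M) is known; the matrix and tolerances are illustrative.

```python
import numpy as np

M = np.array([[0.0, 3.0], [1.0, 0.0]])             # sigma_max(M) = 3
U, s, Vh = np.linalg.svd(M)
Delta = np.outer(Vh[0].conj(), U[:, 0].conj())     # unit-norm maximizing Delta

def abs_lambda(eps):
    # modulus of the largest eigenvalue of eps*M*Delta
    return max(abs(np.linalg.eigvals(eps * M @ Delta)))

eps = 1.0
h = 1e-6
for _ in range(50):
    g = abs_lambda(eps) - 1.0
    gprime = (abs_lambda(eps + h) - abs_lambda(eps)) / h   # finite difference
    eps -= g / gprime                                       # Newton step
    if abs(g) < 1e-12:
        break

mu_lower = 1.0 / eps        # reciprocal of the critical perturbation level
```

Since g is linear in ε here, the Newton iteration converges in essentially one step; in general the inner maximizer must be recomputed after each update of ε.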

Numerical experimentation
In this last section of the article, we present various numerical experiments for purely complex and for mixed real and complex admissible perturbations (uncertainties). We present comparisons of lower bounds of µ-values for a family of Chebyshev spectral differentiation matrices.
Making use of the MATLAB function mussv, we obtain an admissible perturbation ∆, given below. In this case, the admissible uncertainty has unit 2-norm, while the obtained lower bound of the structured singular value is 5.4677 − 0.006.
In Table 1, we give the comparison of lower bounds of structured singular values for three-dimensional Chebyshev spectral differentiation matrices.
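For reference, Chebyshev spectral differentiation matrices of the kind used in these examples can be generated by the standard construction (as in Trefethen's cheb routine); this is a hedged reconstruction, since the paper does not list its generator. For a single full complex block, the µ-value equals σ_max(D), which provides an easy sanity value; the real block structures used in the examples generally give smaller µ-values.

```python
import numpy as np

def cheb(N):
    # Standard (N+1)-point Chebyshev differentiation matrix and grid.
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)                      # Chebyshev points
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))               # off-diagonals
    D -= np.diag(D.sum(axis=1))                                   # diagonal entries
    return D, x

D, x = cheb(2)                        # 3-point Chebyshev differentiation matrix
mu_complex = np.linalg.norm(D, 2)     # mu-value for one full complex block
```

Each row of D sums to zero, reflecting that the derivative of a constant function vanishes.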
Example 2: Consider the following four-dimensional real Chebyshev spectral differentiation matrix, together with the set of block diagonal uncertainties as an input argument. The uncertainty set is taken as B = {diag(∆₁) : ∆₁ ∈ R^{4,4}}.
Making use of the MATLAB function mussv, we obtain an admissible perturbation ∆, given below. In this case, the admissible uncertainty has unit 2-norm, while the obtained lower bound of the structured singular value is 6.4745.
In Table 2, we give the comparison of lower bounds of structured singular values for four-dimensional Chebyshev spectral differentiation matrices. Example 3: Consider the five-dimensional real Chebyshev spectral differentiation matrix, together with the set of block diagonal uncertainties as an input argument. The uncertainty set is taken as B = {diag(∆₁) : ∆₁ ∈ R^{5,5}}.
Making use of the MATLAB function mussv, we obtain an admissible perturbation ∆, given below. In this case, the admissible uncertainty has unit 2-norm, while the obtained lower bound of the structured singular value is 10.3961.
In Table 3, we give the comparison of lower bounds of structured singular values for five-dimensional Chebyshev spectral differentiation matrices. Example 4: Consider the six-dimensional real Chebyshev spectral differentiation matrix, together with the set of block diagonal uncertainties as an input argument. The uncertainty set is taken as B = {diag(∆₁) : ∆₁ ∈ R^{6,6}}.
Making use of the MATLAB function mussv, we obtain an admissible perturbation ∆, given below. In this case, the admissible uncertainty has unit 2-norm, while the obtained lower bound of the structured singular value is 15.3612.
In Table 4, we give the comparison of lower bounds of structured singular values for six-dimensional Chebyshev spectral differentiation matrices.
In Figs. 1-8, we present a graphical comparison of the bounds of the µ-value obtained by our algorithm with those obtained by the MATLAB function mussv.

Conclusion
In this article, we have considered the problem of computing lower bounds of µ-values for a family of Chebyshev spectral differentiation matrices. The numerical computation of µ-values plays an important role in the stability analysis of linear systems in system theory.
The numerical experiments show that the lower bounds of µ-values computed by the algorithm presented in this article compare favorably with those computed by the well-known MATLAB control toolbox routine mussv.

Nomenclature
B — family of block diagonal matrices
ε₀ — perturbation level
∆₀ — initial admissible perturbation
µ — structured singular value