Approximating structured singular values for discrete Fourier transformation matrices

In this article, we present numerical computations of singular values and of lower bounds of structured singular values, known as μ-values, for a family of Discrete Fourier Transform matrices. The μ-value is a well-known mathematical tool in linear control theory used in the stability and instability analysis of feedback systems. The lower bounds of the μ-values are compared with those computed by the well-known MATLAB routine mussv.


Introduction
In the present article, our main objective is to discuss the numerical approximation of Structured Singular Values (SSV) for a family of Discrete Fourier Transform matrices. The μ-values, introduced by Packard and Doyle (1993), are a well-known mathematical tool in control theory concerning the stability and synthesis of linear control systems subject to certain classes of uncertainties. The perturbation structures addressed by the structured singular value are very generic: they cover all kinds of parametric perturbations which can be incorporated into a linear control system by means of both real and complex Linear Fractional Transformations (LFTs) (Chen et al., 1996; Hinrichsen and Pritchard, 2005; Karow et al., 2006, 2010; Packard and Doyle, 1993; Qiu et al., 1995). Unfortunately, it is not possible in general to compute the exact value of the SSV, especially in higher dimensions, since its exact computation is NP-hard (Braatz et al., 1994). Much has been written about the approximation of the SSV; almost all of the numerical methods used in practice approximate upper and lower bounds of the SSV.
The upper bound of the SSV provides sufficient conditions which guarantee the robust stability of feedback systems, while a lower bound provides sufficient conditions which guarantee the instability of linear feedback systems. The well-known MATLAB function mussv, available in the MATLAB Control Toolbox, approximates an upper bound of the structured singular value by means of known methodologies such as the diagonal balancing technique and Linear Matrix Inequality techniques (Fan et al., 1991). The lower bound of the SSV is approximated by means of a generalization of the power method (Packard et al., 1998).
Consider an n-dimensional real or complex matrix M. The matrix M may be either square or rectangular. Also consider a family of block diagonal matrices

Δ = {diag(δ_1 I_{r_1}, …, δ_s I_{r_s}; Δ_1, …, Δ_F) : δ_i ∈ ℂ (or ℝ), Δ_j ∈ ℂ^{m_j × m_j}}. (1)

In Eq. 1, I_{r_i} is an identity matrix of dimension r_i, and the matrices in Δ have the same dimension as the given matrix M.
Definition 1.1: Let M be an n-dimensional square or rectangular, real or complex matrix and consider the family of block diagonal matrices Δ. Then the SSV, known as the μ-value, is defined as:

μ_Δ(M) := 1 / min{‖Δ‖_2 : Δ ∈ Δ, det(I − MΔ) = 0}, (2)

with μ_Δ(M) := 0 if det(I − MΔ) ≠ 0 for every Δ ∈ Δ. In Definition 1.1, det(·) denotes the determinant of the matrix (I − MΔ). We also consider the special case when the set Δ admits only purely complex uncertainties; in that case we write Δ* instead of Δ for the family of block diagonal matrices. We note that Δ ∈ Δ* implies exp(iθ)Δ ∈ Δ* for any scalar θ ∈ ℝ. This leads to the fact that there exists Δ ∈ Δ* such that ρ(MΔ) = 1 if and only if there exists a perturbation Δ′ ∈ Δ* with the same unit 2-norm such that the matrix MΔ′ has largest eigenvalue exactly equal to 1, which in turn implies det(I − MΔ′) = 0. The above discussion allows us to write down the following alternative expression for the μ-value:

μ_Δ*(M) = max{ρ(MΔ) : Δ ∈ Δ*, ‖Δ‖_2 ≤ 1}. (3)

In Eq. 3, ρ(·) denotes the spectral radius of the matrix MΔ.
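Although exact computation is NP-hard in general, Definition 1.1 has a simple closed form in the special case of a single full complex block, where μ_Δ(M) equals the largest singular value of M. The following NumPy sketch (our illustration, not part of the paper) checks this for the unnormalized 2×2 DFT matrix:

```python
import numpy as np

def dft_matrix(n):
    # Unnormalized DFT matrix: F[j, k] = exp(-2*pi*i*j*k/n)
    j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return np.exp(-2j * np.pi * j * k / n)

def mu_full_complex(M):
    # For the single full complex block Delta = C^{n x n},
    # mu_Delta(M) equals the largest singular value of M.
    return np.linalg.svd(M, compute_uv=False)[0]

M = dft_matrix(2)
mu = mu_full_complex(M)          # sqrt(2) for the 2x2 DFT matrix

# Check against Definition 1.1: the rank-1 perturbation built from the
# dominant singular vectors makes I - M*Delta singular with ||Delta||_2 = 1/mu
U, s, Vh = np.linalg.svd(M)
Delta = np.outer(Vh.conj()[0], U[:, 0].conj()) / s[0]
assert abs(np.linalg.norm(Delta, 2) - 1.0 / mu) < 1e-12
assert abs(np.linalg.det(np.eye(2) - M @ Delta)) < 1e-10
```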

μ-value based on structured spectral value sets
The structured epsilon spectral value set of a given matrix M ∈ ℂ^{n×n} with respect to a perturbation level ε > 0 is given as:

Λ_ε^Δ(M) = {λ ∈ Λ(εMΔ) : Δ ∈ Δ, ‖Δ‖_2 = 1}. (4)

In Eq. 4, the quantity Λ(·) denotes the spectrum of a matrix, while the admissible perturbation Δ possesses a unit 2-norm, that is, ‖Δ‖_2 = 1. For the special case of purely complex perturbations, that is Δ*, the structured spectral value set Λ_ε^Δ*(M) is simply a disk centered at the origin. For the more generic case of mixed complex and real uncertainties, the set allows us to express the μ-value as:

μ_Δ(M) = 1 / min{ε > 0 : 1 ∈ Λ_ε^Δ(M)}. (5)

For purely complex uncertainties, the underlying set Δ* allows us to write down the alternative form of the SSV as:

μ_Δ*(M) = 1 / min{ε > 0 : max |λ| = 1, λ ∈ Λ_ε^Δ*(M)}. (6)
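The disk-shaped structure of the purely complex spectral value set can be observed numerically. The sketch below (our own illustration, using a single full complex block on the 4×4 unnormalized DFT matrix) samples eigenvalues of εMΔ over random unit-norm perturbations; all samples lie in the disk of radius εσ_max(M):

```python
import numpy as np

rng = np.random.default_rng(0)
n, eps = 4, 0.5
M = np.exp(-2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n)  # 4x4 DFT
sigma_max = np.linalg.svd(M, compute_uv=False)[0]

# Sample eigenvalues of eps*M*Delta over random unit-norm full complex Delta;
# for this block structure the spectral value set is a disk at the origin.
radii = []
for _ in range(200):
    D = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    D /= np.linalg.norm(D, 2)
    radii.extend(abs(np.linalg.eigvals(eps * M @ D)))

# Every sampled eigenvalue lies inside the disk of radius eps*sigma_max
assert max(radii) <= eps * sigma_max + 1e-10
```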

The mathematical problem
We consider the following optimization problem (Rehman, 2017):

min |1 − λ|  subject to λ ∈ Λ_ε^Δ(M), (8)

for a fixed value of the perturbation level ε > 0. From the above discussion, the structured singular value μ_Δ(M) is the reciprocal of the minimum perturbation level ε for which the minimum in Eq. 8 equals 0. This suggests a two-level algorithm, consisting of an inner and an outer algorithm. In the inner algorithm, we solve Eq. 8. In the outer algorithm, we vary the perturbation level ε by means of an iterative method which exploits the knowledge of the exact derivative of an extremizer Δ(ε) with respect to the perturbation level ε. We solve the inner optimization problem by first solving a gradient system of Ordinary Differential Equations (ODEs). This computation only produces a local minimum of Eq. 8 which, in turn, gives an upper bound for the perturbation level ε and hence a lower bound for μ_Δ(M). The purely complex uncertainty set Δ* can be addressed by taking as inner algorithm the computation of local optima of the maximization problem:

max |λ|  subject to λ ∈ Λ_ε^Δ*(M), (9)

which then produces a lower bound for μ_Δ*(M).
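For the single full complex block, where the inner maximizer is known in closed form (a rank-1 perturbation built from the dominant singular vectors), the two-level scheme can be sketched as follows. This is our simplified illustration, not the paper's gradient-flow algorithm:

```python
import numpy as np

def inner_max_radius(M, eps):
    # Inner level (cf. Eq. 9): maximize |lambda| over unit-norm perturbations.
    # For a single full complex block the maximizer is the rank-1 matrix
    # v1 u1^* built from the dominant singular vectors of M; structured
    # block-diagonal cases need an iterative inner algorithm instead.
    U, s, Vh = np.linalg.svd(M)
    Delta = np.outer(Vh.conj()[0], U[:, 0].conj())   # unit 2-norm
    return max(abs(np.linalg.eigvals(eps * M @ Delta)))

def mu_lower_bound(M, lo=1e-8, hi=10.0, iters=60):
    # Outer level: bisect on eps until the inner maximum reaches 1;
    # the mu-value estimate is the reciprocal of that eps.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if inner_max_radius(M, mid) >= 1.0:
            hi = mid
        else:
            lo = mid
    return 1.0 / hi

F = np.exp(-2j * np.pi * np.outer(np.arange(3), np.arange(3)) / 3)  # 3x3 DFT
mu_lb = mu_lower_bound(F)
```

On the 3×3 unnormalized DFT matrix the bisection recovers μ = √3 ≈ 1.7321; replacing the closed-form inner maximizer with the ODE-based inner algorithm gives the structured version.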

Pure complex perturbations
In this section, we establish the solution of the inner problem in Eq. 9. This includes the estimation of the quantity μ_Δ*(M) for a matrix M ∈ ℂ^{n×n}, while taking into account a purely complex uncertainty set.
In the following Lemma 3.1, we give an eigenvalue perturbation result in order to approximate the rate of change of the eigenvalue λ(t).
Lemma 3.1: Consider a smooth matrix family Ω: ℝ → ℂ^{n×n}, and suppose that λ(t) is an eigenvalue of Ω(t) which converges to a simple eigenvalue λ_0 of Ω_0 = Ω(0) as t → 0. Then λ(t) is analytic near t = 0 with

dλ(t)/dt |_{t=0} = (y_0* Ω_1 x_0) / (y_0* x_0),

where Ω_1 = Ω̇(0) and x_0, y_0 are the right and left eigenvectors of Ω_0 associated to λ_0, that is, (Ω_0 − λ_0 I)x_0 = 0 and y_0*(Ω_0 − λ_0 I) = 0. Our goal is to give a solution of the maximization problem addressed in Eq. 9. This requires the computation of a local uncertainty Δ such that ρ(εMΔ) has the maximum growth among all admissible perturbations Δ ∈ Δ* with ‖Δ‖_2 ≤ 1. In the following we call λ_1 the greatest eigenvalue if |λ_1| equals the spectral radius of the matrix εMΔ. Next we define a matrix valued function Δ(t) which acts as a local extremizer and maximizes the modulus of the greatest eigenvalue λ_1(t).
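The derivative formula of Lemma 3.1 can be checked numerically. The sketch below (our example, with a random matrix family Ω(t) = A + tB) compares it against a finite difference:

```python
import numpy as np

# Finite-difference check of the eigenvalue derivative in Lemma 3.1
# for the matrix family Omega(t) = A + t*B (A, B random; our example).
rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

# Right/left eigenpair of Omega(0) = A for a (generically simple) eigenvalue
w, V = np.linalg.eig(A)
i = int(np.argmax(abs(w)))
lam0, x0 = w[i], V[:, i]
wl, W = np.linalg.eig(A.conj().T)    # A^H y = conj(lam) y  =>  y^* A = lam y^*
y0 = W[:, int(np.argmin(abs(wl.conj() - lam0)))]

deriv = (y0.conj() @ B @ x0) / (y0.conj() @ x0)   # Lemma 3.1 with Omega_1 = B

h = 1e-6
lam_h = np.linalg.eigvals(A + h * B)
lam_h = lam_h[np.argmin(abs(lam_h - lam0))]
fd = (lam_h - lam0) / h                            # finite-difference estimate
assert abs(fd - deriv) < 1e-4
```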
The following theorem provides the characterization of local extremizers Δ(t).
With the help of a further theorem, we replace the full blocks Δ_j in a local extremizer with rank-1 matrices.

The optimization problem (Rehman, 2017)
Consider a simple eigenvalue λ_1(t), with |λ_1(t)| equal to the spectral radius, having right and left eigenvectors x, y normalized as in Lemma 3.1, so that the growth of |λ_1(t)| is governed by the inner product of Δ̇ with the matrix Z = xy*. By considering Δ ∈ Δ, we compute the direction Δ̇ = Z* which locally maximizes the growth of the modulus of the greatest eigenvalue λ_1. From this discussion, we obtain the solution of the maximization problem

Z* = arg max Re⟨Z, Δ̇⟩  subject to Re(δ̄_i d_i) = 0, i = 1:s, and ⟨Δ_j, Ω_j⟩ = 0, j = 1:F, (20)

where d_i and Ω_j denote the blocks of the direction corresponding to the repeated scalar and full blocks, respectively; the constraints preserve the norm of the perturbation. Lemma 3.2 gives the solution of the optimization problem in Eq. 20.

Lemma 3.2:
With the notation introduced in the above discussion, a solution Z* of the maximization problem discussed in Eq. 20 is given as (Rehman, 2017):

Z* = diag(d_1 I_{r_1}, …, d_s I_{r_s}; Ω_1, …, Ω_F), (21)

with

d_i = ν_i (x_i* y_i − Re(x_i* y_i δ̄_i) δ_i), ∀ i = 1:s, (22)

and

Ω_j = ζ_j (y_j x_j* − Re⟨Δ_j, y_j x_j*⟩ Δ_j), ∀ j = 1:F. (23)

Here, x_i, y_i and x_j, y_j denote the subvectors of the eigenvectors x, y corresponding to the respective blocks. The coefficient ν_i > 0 is the reciprocal of the absolute value of the right-hand side in Eq. 22 if it is other than zero, and ν_i = 1 otherwise. Similarly, the coefficient ζ_j > 0 is the reciprocal of the Frobenius norm of the matrix on the right-hand side in Eq. 23 if it is other than zero, and ζ_j = 1 otherwise.
The result obtained in Lemma 3.2 can alternatively be expressed in terms of the orthogonal projection P_Δ*(·) onto the structure Δ*, together with diagonal matrices D_1, D_2 ∈ Δ*, where D_1 is positive.

The gradient system of ODEs
Lemma 4.1 allows us to consider the following gradient system of differential equations on the manifold Δ*:

Δ̇(t) = P_Δ*(x(t)y(t)*) − Re⟨Δ(t), P_Δ*(x(t)y(t)*)⟩ Δ(t). (25)

In Eq. 25, x(t) is an eigenvector of unit 2-norm associated to a simple eigenvalue λ_1(t) of the matrix εMΔ(t) for a fixed perturbation level ε > 0. The ordinary differential Eq. 25 represents a gradient system because its right-hand side is the projected gradient of Δ ↦ |λ_1(εMΔ)|.
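A minimal Euler discretization of such a projected gradient system, specialized to a single full complex block, might look as follows. This is our sketch; the step size, iteration count, and the unit Frobenius-norm normalization are our ad hoc choices, not the paper's:

```python
import numpy as np

def gradient_flow_lower_bound(M, eps, steps=500, h=0.05, seed=0):
    # Euler discretization of a projected gradient flow on the unit
    # Frobenius sphere of a single full complex block, driving up
    # |lambda_1(eps*M*Delta)| for a fixed perturbation level eps.
    rng = np.random.default_rng(seed)
    n = M.shape[0]
    Delta = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    Delta /= np.linalg.norm(Delta, 'fro')
    for _ in range(steps):
        T = eps * M @ Delta
        w, V = np.linalg.eig(T)
        i = int(np.argmax(abs(w)))
        lam, x = w[i], V[:, i]
        wl, W = np.linalg.eig(T.conj().T)
        y = W[:, int(np.argmin(abs(wl.conj() - lam)))]
        # Free gradient of |lambda_1| with respect to Delta (up to positive scale)
        G = lam * (eps * M).conj().T @ np.outer(y, x.conj()) / (x.conj() @ y)
        # Project onto the tangent space of the norm constraint,
        # take an Euler step, and renormalize
        D = G - np.real(np.vdot(Delta, G)) * Delta
        Delta = Delta + h * D
        Delta /= np.linalg.norm(Delta, 'fro')
    return max(abs(np.linalg.eigvals(eps * M @ Delta)))

M = np.exp(-2j * np.pi * np.outer(np.arange(3), np.arange(3)) / 3)  # 3x3 DFT
lb = gradient_flow_lower_bound(M, eps=0.5)   # bounded above by eps*sqrt(3)
```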

Choice of initial value matrix Δ_0 and ε_0
In the two-level algorithm, for each new perturbation level ε we make use of the admissible perturbation obtained for the previous value of ε as the initial value matrix for the gradient system of ODEs. In order to obtain the locally maximal growth of |λ_1(t)|, we choose the initial perturbation Δ_0 as in (Rehman, 2017), where the positive diagonal matrix is taken in such a way that Δ_0 ∈ Δ. On the other hand, a very natural choice for ε_0 is

ε_0 = 1 / μ̂_Δ(M),

where μ̂_Δ(M) is the upper bound of the μ-value approximated by the well-known MATLAB function mussv.

Outer algorithm
In this section, we approximate the lower bound of the SSV, μ_Δ(M), by means of the outer algorithm. We note that the principles remain the same as in the previous discussion, so one can treat the case of purely complex uncertainties in detail and provide a briefer discussion of the extension to the case of mixed complex and real uncertainties (Rehman, 2017).

Numerical experimentation
In this section, we present a comparison of the lower bounds of the μ-value approximated by mussv and by the algorithm of Rehman (2017). For the first example, mussv returns the upper bound 1.4142, while the lower bound is approximated as 1.3874. By making use of the algorithm of Rehman (2017), we obtain the perturbation ε*Δ* with

Δ* = [−1 0; 0 −1],

ε* = 0.7071 and ‖Δ*‖_2 = 1.
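The reported perturbation can be verified directly. The following NumPy check (ours, assuming the unnormalized 2×2 DFT matrix F_2 = [[1, 1], [1, −1]]) confirms that ε*Δ* makes I − ε*F_2Δ* singular, so that 1/ε* = 1.4142 is indeed a valid lower bound:

```python
import numpy as np

# Direct check of the reported extremal perturbation for the 2x2 DFT matrix
F2 = np.array([[1.0, 1.0], [1.0, -1.0]])
Delta_star = -np.eye(2)
eps_star = 1.0 / np.sqrt(2.0)               # 0.7071...

assert abs(np.linalg.norm(Delta_star, 2) - 1.0) < 1e-12
# det(I - eps* F2 Delta*) = 0, so Definition 1.1 gives mu >= 1/eps* = 1.4142
assert abs(np.linalg.det(np.eye(2) - eps_star * F2 @ Delta_star)) < 1e-12
```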
The same lower bound, 1.4142, is approximated as the one computed by the MATLAB function mussv.
Similarly, the same lower bound, 2.2362, is approximated as the one computed by the MATLAB function mussv.
Figs. 1-3 present a graphical comparison of the bounds of structured singular values obtained by NewAlgo with those calculated by the MATLAB function mussv.

Conclusion
In this article, we have presented the approximation of μ-values for a family of Discrete Fourier Transform matrices. Different experiments have been performed taking into account Discrete Fourier Transform matrices of various dimensions. The experimental results show how the lower bounds of the SSV approximated by the mussv function and those approximated by the algorithm of Rehman (2017) are related to each other.