Linear operators acting on symmetric matrices #101
If I understand your question correctly, symmetry of the solution of your Lyapunov equation doesn't really depend on the operator, but rather on how you model the equation. Couldn't you formulate the problem by only considering tril(X) as your set of unknowns? If I misunderstood, could you give a short example illustrating the issue you're having?
Let's consider the inverse operator inv(L) of a Lyapunov operator L(X) = AX + XA'. This operator maps a symmetric matrix Y to the corresponding symmetric X which satisfies the Lyapunov equation AX + XA' + Y = 0, so inv(L)*vec(Y) = vec(X). To determine X you need to solve the above Lyapunov equation, and Y must be symmetric. If a specialized solver is used which exploits symmetry, a substantial gain in efficiency can be achieved. If such a solver underlies the product operation, then inv(L)*vec(Y) can be computed only for a symmetric Y. However, this leads to an error when executing Matrix(inv(L)), due to the way the n*n columns of this matrix are generated from the columns of the identity matrix of order n*n (n is the order of A). Except for n of these columns, the rest do not correspond to symmetric matrices, so the solution process fails. Would it be possible to extend the definition of the linear operators by specifying the classes of structured matrices on which the operator acts?
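A minimal sketch of the failure mode, using a stand-in `solve_lyap_sym` for a symmetry-exploiting solver (a hypothetical helper, not the actual MatrixEquations.jl or LinearOperators.jl API): materializing inv(L) column by column applies it to every column of the n²×n² identity, and all but n of those columns reshape to non-symmetric matrices.

```julia
using LinearAlgebra

n = 3
A = randn(n, n) - 2I               # a stable test matrix

# Stand-in for a symmetry-exploiting Lyapunov solver: it refuses a
# non-symmetric right-hand side instead of silently symmetrizing it.
function solve_lyap_sym(A, Y)
    issymmetric(Y) || error("right-hand side must be symmetric")
    lyap(A, Y)                     # X with A*X + X*A' + Y = 0
end

# Action of inv(L) on a vectorized Y.
invL = y -> vec(solve_lyap_sym(A, reshape(y, n, n)))

# Materializing inv(L) column by column, as Matrix(op) effectively does,
# feeds it the columns of the identity of order n^2:
E = Matrix{Float64}(I, n^2, n^2)
count(issymmetric(reshape(E[:, k], n, n)) for k in 1:n^2)  # only n of n^2
# invL(E[:, 2]) therefore throws, and the full matrix cannot be formed.
```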
So it's the default implementation of …
Yes, this could be an elegant solution to my problem. In the meantime, I think I found a less elegant way to manage the issue, by assuming that only the upper triangular part of X is packed into the vector x, which is then used in the product op * x. The result is the packed upper triangular part of AX + XA'. The corresponding matrix is therefore not n^2 x n^2 but only n(n+1)/2 x n(n+1)/2, and the specialized solvers can be used to implement the inverse operators.
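A rough sketch of that packing idea (the helper names are illustrative, not taken from either package): the upper triangle is packed column by column, so the operator only ever sees symmetric X.

```julia
using LinearAlgebra

# Pack the upper triangle of a symmetric n×n matrix column by column into a
# vector of length n(n+1)/2, and unpack it back into a full symmetric matrix.
pack_utri(X) = [X[i, j] for j in 1:size(X, 1) for i in 1:j]

function unpack_utri(x, n)
    X = zeros(eltype(x), n, n)
    k = 0
    for j in 1:n, i in 1:j
        k += 1
        X[i, j] = x[k]
        X[j, i] = x[k]
    end
    X
end

# Lyapunov operator on the packed representation:
# x -> packed upper triangle of A*X + X*A' (symmetric whenever X is).
function lyap_packed(A, x)
    n = size(A, 1)
    X = unpack_utri(x, n)
    pack_utri(A*X + X*A')
end
```

The operator built from `lyap_packed` is n(n+1)/2 square, so forming its matrix column by column only ever presents symmetric X to a specialized solver.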
Many thanks for your time.
Another issue: a LAPACK wrapper which I implemented as an interface to the functions *lacn2.f could be helpful for your LinearOperators project; it is instrumental for efficiently estimating the 1-norm of linear operators, and thus can serve for condition number estimation. This function is in the file lapackutil.jl of the previously mentioned package https://github.com/andreasvarga/MatrixEquations.jl
I am still working hard towards version 1.0 to be registered, but this basic tool will not change (I hope) in the near future.
Yes, that's what I meant in my first reply. I'm open to adding a function for estimating the 1-norm of an operator but it cannot be based on LAPACK because we can't assume that operators are explicit matrices.
No matrices are involved, only vectors which are the results of the product operations op*x and op'*x. So it seems to be perfectly suited for LinearOperators.
I see. It's reverse communication. That's an idea but it would be better in the long run to rewrite it in Julia, and have all four versions (and much more) for the price of one. It's simple enough: http://www.netlib.org/lapack/explore-html/d3/d0a/dlacn2_8f_source.html
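For reference, a bare-bones Julia sketch of the idea behind *lacn2 (a Hager/Higham-style 1-norm estimate), written only in terms of products op*x and op'*x; this is a simplified version and omits dlacn2's extra test vector and restart safeguards.

```julia
using LinearAlgebra

# Simplified Hager-style 1-norm estimator using only operator-vector
# products: Aprod(x) = op*x, Atprod(y) = op'*y.
function normest1(Aprod, Atprod, n; maxiter = 5)
    x   = fill(1.0 / n, n)
    est = 0.0
    for _ in 1:maxiter
        y   = Aprod(x)
        est = norm(y, 1)
        s   = sign.(y)
        s[s .== 0] .= 1.0
        z   = Atprod(s)
        j   = argmax(abs.(z))
        abs(z[j]) <= dot(z, x) && break   # Hager's convergence test
        x    = zeros(n)
        x[j] = 1.0
    end
    est
end

# Example: estimate the 1-norm of an operator given only through products.
A = randn(20, 20)
normest1(v -> A * v, w -> A' * w, 20)     # compare with opnorm(A, 1)
```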
Just an update: I formulated an issue for my package which is related to the above discussion. Simply said, I was not able to ensure the condition …
I implemented several linear operators related to solving control problems. An example is the Lyapunov operator L: X -> AX+XA', where A is a square matrix. This operator usually acts on symmetric/hermitian matrices X and is a special case of the more general Sylvester operator S: X -> AX+XB, with A and B square matrices (not necessarily of the same size). The definition of L as a linear operator is possible regardless of whether X is symmetric/hermitian or not, and Matrix(L) is the usual Kronecker-expansion based matrix.
However, the definition of the inverse operator inv(L): Y -> X involves the solution of a Lyapunov equation AX + XA' + Y = 0, where for a symmetric/hermitian Y the resulting X is also symmetric/hermitian. Solvers of Lyapunov equations usually exploit the symmetry of the solution by computing, for example, only the upper triangular part of X (the lower triangular part results via symmetry). It is possible to define inv(L) ignoring the symmetry of the input data. Unfortunately, in this case, some functions in the LinearOperators collection will fail, for example Matrix(inv(L)), which will not generate the inverse operator matrix due to the restriction to symmetric right-hand sides in the specialized solvers. To cope with this aspect, I was forced to use the more general solvers for Sylvester equations to compute the full solution X, with the associated efficiency losses.
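For real A, the Kronecker expansion mentioned above is Matrix(L) = I ⊗ A + A ⊗ I; a quick sketch checking the identity vec(AX + XA') = (I ⊗ A + A ⊗ I) vec(X):

```julia
using LinearAlgebra

n  = 4
A  = randn(n, n)
X  = randn(n, n)
Id = Matrix{Float64}(I, n, n)

Lmat = kron(Id, A) + kron(A, Id)     # Kronecker-expansion matrix of L (real A)
vec(A*X + X*A') ≈ Lmat * vec(X)      # true
```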
I wonder if the problem of restricting the domain of input data for forming the products L * vec(X) can be addressed somehow, by assuming certain structural constraints on X (e.g., symmetric, hermitian, or even diagonal).
Many thanks in advance for your time considering this question.
Note: The implemented linear operators belong to the recently developed (not yet registered) package MatrixEquations.jl.