
## Pole Placement | State Space, Part 2

From the series: State Space

Brian Douglas

This video provides an intuitive understanding of pole placement, also known as full state feedback. This is a control technique that feeds back every state to guarantee closed-loop stability and is the stepping stone to other methods like LQR and H infinity.

We’ll cover the structure of the control law and why moving poles or eigenvalues around changes the dynamics of a system. We’ll also go over an example in MATLAB and touch on a few interesting comparisons to other control techniques.

In this video, we’re going to talk about a way to develop a feedback controller for a model that’s represented using state-space equations. And we’re going to do that with a method called pole placement, or full state feedback. Now, my experience is that pole placement itself isn’t used extensively in industry; you might find that you’re using other methods like LQR or H infinity more often. However, pole placement is worth spending some time on because it’ll give you a better understanding of the general approach to feedback control using state-space equations and it’s a stepping stone to getting to those other methods. So I hope you stick around. I’m Brian, and welcome to a MATLAB Tech Talk.

To start off, we have a plant with inputs u and outputs y. And the goal is to develop a feedback control system that drives the output to some desired value. A way you might be familiar with doing this is to compare the output to a reference signal to get the control error. Then you develop a controller that uses that error term to generate the input signals into the plant with the goal of driving the error to zero. This is the structure of the feedback system that you’d see if you were developing, say, a PID controller.

But for pole placement, we’re going to approach this problem in a different way. Rather than feed back the output y, we’re going to feed back the value of every state variable in our state vector, x. We’re going to claim that we know the value of every state even though it’s not necessarily part of the output y. We’ll get to that in a bit, but for now, assume we have access to all of these values. We then take the state vector and multiply it by a matrix that is made up of a bunch of different gain values. The result is subtracted from a scaled reference signal, and this result is fed directly into our plant as the input.

Now you’ll notice that there isn’t a block here labeled “controller” like we have in the top block diagram. In this feedback structure, this whole section is the controller. And pole placement is a method by which we can calculate the proper gain matrix to guarantee system stability, and the scaling term on the reference is used to ensure that steady state error performance is acceptable. I’ll cover both of these in this video.

In the last video, we introduced the state equation x dot = Ax + Bu. And we showed that the dynamics of a linear system are captured in this first part, Ax. The second part is how the system responds to inputs, but how the energy in the system is stored and moves is captured by the Ax term. So you might expect that there is something special about the A matrix when it comes to controller design. And there is: Any feedback controller has to modify the A matrix in order to change the dynamics of the system. This is especially true when it comes to stability.

The eigenvalues of the A matrix are the poles of the system, and the location of the poles dictates stability of a linear system. And that’s the key to pole placement: Generate the required closed-loop stability by moving the poles or the eigenvalues of the closed-loop A matrix.

I want to expand a bit more on the relationship between poles, eigenvalues, and stability before we go any further because I think it’ll help you understand exactly how pole placement works.

For this example, let’s just start with an arbitrary system and focus on the dynamics, the A matrix. We can rewrite this in non-matrix form so it’s a little bit easier to see how the state derivatives relate to the states. In general, each state can change as a function of the other states. And that’s the case here; x dot 1 changes based on x2 and x dot 2 changes based on both x1 and x2. This is perfectly acceptable, but it makes it hard to visualize how eigenvalues are contributing to the overall dynamics. So what we can do is transform the A matrix into one that uses a different set of state variables to describe the system.

This transformation is accomplished using a transform matrix whose columns are the eigenvectors of the A matrix. What we end up with after the transformation is a modified A matrix consisting of the complex eigenvalues along the diagonal and zeroes everywhere else. These two models represent the same system. They have the same eigenvalues, the same dynamics; it’s just the second one is described using a set of state variables that change independently of each other.
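To make the transformation concrete, here is a small Python sketch that diagonalizes a 2x2 A matrix using its eigenvectors. The matrix is a hypothetical example chosen for illustration (it is not the one shown in the video):

```python
# Sketch: diagonalizing a 2x2 A matrix with its eigenvectors.
# Hypothetical example: A = [[0, 1], [-2, -3]] has eigenvalues -1 and -2.
A = [[0.0, 1.0], [-2.0, -3.0]]

# Eigenvalues from the characteristic polynomial l^2 - trace*l + det = 0.
trace = A[0][0] + A[1][1]                 # -3
det = A[0][0]*A[1][1] - A[0][1]*A[1][0]   # 2
disc = (trace**2 - 4*det) ** 0.5          # sqrt(9 - 8) = 1
lam1 = (trace + disc) / 2                 # -1
lam2 = (trace - disc) / 2                 # -2

# For this A, (A - lam*I)v = 0 gives eigenvectors v = [1, lam].
V = [[1.0, 1.0], [lam1, lam2]]            # columns are eigenvectors

# V^-1 * A * V should be the diagonal matrix of eigenvalues.
dV = V[0][0]*V[1][1] - V[0][1]*V[1][0]
Vinv = [[ V[1][1]/dV, -V[0][1]/dV],
        [-V[1][0]/dV,  V[0][0]/dV]]

def matmul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

D = matmul(Vinv, matmul(A, V))
print(D)  # approximately [[-1, 0], [0, -2]]
```

The off-diagonal entries of D come out zero, which is exactly the "states that change independently of each other" picture described above.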

With the A matrix written in diagonal form, it’s easy to see that we’re left with a set of first-order differential equations where the derivative of each state is only affected by that state and nothing else. And here’s the cool part: The solution to a differential equation like this is of the form z_n = C e^(lambda t), where lambda is the eigenvalue for that given state variable.

Okay, let’s dive into this equation a little bit more. z_n shows how the state changes over time given some initial condition, C. Or another way of thinking about this is that if you initialize the state with some energy, this equation shows what happens to that energy over time. And by changing lambda, you can affect how the energy is dissipated or, in the case of an unstable system, how the energy grows.

Let’s go through a few different values of lambda so you can visually see how energy changes based on the location of the eigenvalue within the complex plane.

If lambda is a negative real number, then this mode is stable since the solution is e raised to a negative number, and any initial energy will dissipate over time. If it’s positive, then it’s unstable because the energy will grow over time. And if there is a pair of imaginary eigenvalues, then the energy in the mode will oscillate, since e ^ imaginary number produces sines and cosines. And any combination of real and imaginary numbers in the eigenvalue will produce a combination of oscillations and exponential energy dissipation.
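These cases are easy to check numerically. A short Python sketch (with illustrative eigenvalues, not values from the video) evaluates the magnitude of e^(lambda t) for each pole location:

```python
# Sketch: how the magnitude of e^(lambda*t) evolves for different
# eigenvalue locations in the complex plane (C = 1 for simplicity).
import cmath

def magnitude_at(lam, t):
    """|e^(lambda t)|, the size of the mode's energy at time t."""
    return abs(cmath.exp(lam * t))

# Negative real part: energy dissipates.
print(magnitude_at(-1.0, 5.0))        # ~0.0067, decaying
# Positive real part: energy grows (unstable).
print(magnitude_at(+1.0, 5.0))        # ~148, growing
# Purely imaginary: magnitude stays at 1; the mode just oscillates.
print(magnitude_at(1j, 5.0))          # 1.0
# Complex: a decaying oscillation.
print(magnitude_at(-0.5 + 2j, 5.0))   # ~0.082, oscillating decay
```

Only the real part changes the magnitude over time; the imaginary part contributes the sines and cosines.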

I know this was all very fast, but hopefully it made enough sense that now we can state the problem we’re trying to solve. If our plant has eigenvalues that are at undesirable locations in the complex plane, then we can use pole placement to move them somewhere else. Certainly if they’re in the right half plane it’s undesirable since they’d be unstable, but undesirable could also mean there are oscillations that you want to get rid of, or maybe just speed up or slow down the dissipation of energy in a particular mode.

With that behind us, we can now get into how pole placement moves the eigenvalues. Remember the structure of the controller that we drew at the beginning? This results in an input u = Kr·r - K·x, where Kr·r is the scaled reference, which again we’ll get to in a bit, and K·x is the state vector that we’re feeding back multiplied by the gain matrix.

Here’s where the magic happens. If we plug this control input into our state equation, we are closing the loop and we get the closed-loop state equation x dot = (A - BK)x + B·Kr·r. Notice that A and -BK both act on the state vector, so we can combine them to get a modified A matrix, A - BK.

This is the closed-loop A matrix and we have the ability to move the eigenvalues by choosing an appropriate K. And this is easy to do by hand for simple systems. Let’s try an example with a second-order system with a single input. We can find the eigenvalues by setting the determinant of A - lambda I to zero and then solving for lambda. And they are at -2 and +1. One of the modes will blow up to infinity because of the presence of the positive real eigenvalue and so the system is unstable. Let's use pole placement to design a feedback controller that will stabilize this system by moving the unstable pole to the left half plane.

Our closed-loop A matrix is A - BK and the gain matrix, K, is 1x2 since there is one input and two states. This results in the closed-loop matrix Acl with entries -K1 and 1 - K2 in the first row, and 2 and -1 in the second row. We can solve for the eigenvalues of Acl like we did before and we get a characteristic equation that is a function of our two gain values.

Let’s say we want our closed-loop poles at -1 and -2. In this way, the characteristic equation needs to be lambda^2 + 3 lambda + 2 = 0. So at this point, it’s straightforward to find the appropriate K1 and K2 that make these two equations equal. We just set the coefficients equal to each other and solve. And we get K1 = 2, and K2 = 1 and that’s it. If we place these two gains in the state feedback path of this system, it will be stabilized with eigenvalues at -1 and -2.
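The arithmetic can be checked numerically. The A and B below are an assumption, reconstructed to be consistent with the numbers in the example (open-loop eigenvalues at -2 and +1, closed-loop entries -K1, 1-K2, 2, -1), since the on-screen matrices aren't in the transcript:

```python
# Sketch of the worked example. A and B are assumptions consistent with
# the transcript's numbers, not matrices confirmed by the source.
def eig2(M):
    """Eigenvalues of a 2x2 matrix via the quadratic formula."""
    tr = M[0][0] + M[1][1]
    det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    disc = complex(tr*tr - 4*det) ** 0.5
    return (tr + disc) / 2, (tr - disc) / 2

A = [[0.0, 1.0], [2.0, -1.0]]
B = [1.0, 0.0]          # single input, so B is a column (stored flat)
K = [2.0, 1.0]          # gains found by matching coefficients

# Open loop: unstable (one eigenvalue at +1).
print(eig2(A))

# Closed loop: Acl = A - B*K has eigenvalues at -1 and -2.
Acl = [[A[i][j] - B[i]*K[j] for j in range(2)] for i in range(2)]
print(eig2(Acl))
```

Running this shows the open-loop eigenvalues +1 and -2 and the closed-loop eigenvalues -1 and -2, matching the hand calculation.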

Walking through an example by hand, I think, gives you a good understanding of pole placement; however, the math involved starts to become overwhelming for systems that have more than two states. The idea is the same; just solving the determinant becomes impractical. But we can do this exact same thing in MATLAB with pretty much a single command.

I’ll show you quickly how to use the place command in MATLAB by recreating the same system we did by hand. I’ll define the four matrices, and then create the open-loop state-space object. I can check the eigenvalues of the open-loop A matrix just to show you that there is, in fact, that positive eigenvalue that causes this system to be unstable.

That’s no good, so let’s move the eigenvalues of the system to -2 and -1. Now solving for the gain matrix using pole placement can be done with the place command. And we get gain values 2 and 1 like we expected.

Now the new closed-loop A matrix is A - BK, and just to double check, this is what Acl looks like and it does have eigenvalues at -1 and -2. Okay, I’ll create the closed-loop system object and now we can compare the step responses for both.

The step response of the open-loop system is predictably unstable. The step response of the closed-loop system looks much better. However, it’s not perfect. Rather than rising to 1 like we’d expect, the steady state output is only 0.5. And this is finally where the scaling on the reference comes in. So far, we’ve only been concerned with stability and have paid little attention to steady state performance. But even addressing this is pretty straightforward. If the response to the input is only half of what you expect, why don’t we just double the input? And that’s what we do. Well, not double it, but we scale the input by the inverse of the steady state value.

In MATLAB, we can do this by inverting the DC gain of the system. You can see that the DC gain is 0.5, and so the inverse is 2. Now we can rebuild our closed-loop system by scaling the input by Kr, and checking the step response. No surprise; its steady state value is 1.
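The Kr calculation can also be sketched outside MATLAB. Here C = [1, 0] and D = 0 are assumptions chosen to be consistent with the reported DC gain of 0.5, and the closed-loop A matrix follows from the earlier worked example:

```python
# Sketch of the reference-scaling step. Acl, B, and C are assumptions
# consistent with the example's reported DC gain of 0.5.
Acl = [[-2.0, 0.0], [2.0, -1.0]]   # closed-loop A from the example
B = [1.0, 0.0]
C = [1.0, 0.0]

# DC gain of x' = Acl x + B u, y = C x is  -C * inv(Acl) * B.
det = Acl[0][0]*Acl[1][1] - Acl[0][1]*Acl[1][0]
inv = [[ Acl[1][1]/det, -Acl[0][1]/det],
       [-Acl[1][0]/det,  Acl[0][0]/det]]
x_ss = [-(inv[0][0]*B[0] + inv[0][1]*B[1]),
        -(inv[1][0]*B[0] + inv[1][1]*B[1])]
dc_gain = C[0]*x_ss[0] + C[1]*x_ss[1]
Kr = 1.0 / dc_gain

print(dc_gain)  # 0.5
print(Kr)       # 2.0
```

With the input scaled by Kr = 2, the unit step settles at 1 instead of 0.5.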

And that’s pretty much what there is to basic pole placement. We feed back every state variable and multiply them by a gain matrix in such a way that moves the closed-loop eigenvalues, and then we scale the reference signal so that the steady state output is what we want.

Of course, there’s more to pole placement than what I could cover in this 12-minute video, and I don’t want to drag this on too long, but I also don’t want to leave this video without addressing a few more interesting things for you to consider. So in the interest of time, let’s blast through these final thoughts lightning-round style. Are you ready? Let’s go!

Pole placement is like fancy root locus. With root locus you have one gain that you can adjust that can only move the poles along the locus lines. But with pole placement, we have a gain matrix that gives us the ability to move the poles anywhere in the complex plane, not just along single-dimensional lines.

A two-state pole placement controller is very similar to a PD controller. With PD, you feed back the output and generate the derivative within the controller. With pole placement, you are feeding back the derivative as a state, but the results are essentially the same: 2 gains, one for a state and one for its derivative.

Okay, we can move eigenvalues around, but where should we place them? The answer to that is a much longer video, but here are some things to think about. If you have a high-order system, consider keeping two poles much closer to the imaginary axis than the others so that the system will behave like a common second-order system. These are called the dominant poles since they are slower and tend to dominate the response of the system.

Keep in mind that if you try to move a bunch of eigenvalues really far left in order to get a super-fast response, you may find that you don’t have the speed or authority in your actuators to generate the necessary response. This is because it takes more gain, or more actuator effort, to move the eigenvalues further from their open-loop starting points.
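This effect is easy to see numerically. A short sketch, reusing the assumed A = [[0, 1], [2, -1]] and B = [[1], [0]] from the worked example (an assumption, since the transcript doesn't print the matrices), compares the gains needed for nearby versus far-away pole locations:

```python
# Sketch: the farther you push the closed-loop poles, the larger the gains.
# Assumed A = [[0, 1], [2, -1]], B = [[1], [0]]. For that system,
# Acl = [[-k1, 1-k2], [2, -1]], with characteristic polynomial
# l^2 + (1 + k1) l + (k1 + 2*k2 - 2), so matching coefficients gives:
def gains_for_poles(p1, p2):
    a1 = -(p1 + p2)          # desired coefficient of l
    a0 = p1 * p2             # desired constant coefficient
    k1 = a1 - 1
    k2 = (a0 + 2 - k1) / 2
    return k1, k2

print(gains_for_poles(-1.0, -2.0))     # (2.0, 1.0): modest gains
print(gains_for_poles(-10.0, -20.0))   # (29.0, 86.5): much larger gains
```

A tenfold move of the poles costs well over tenfold in gain, which is exactly the actuator-effort problem described above.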

Full state feedback is a bit of a misnomer. You feed back every state in your mathematical model, but you don’t, and can’t, feed back every state in a real system. For just one example, at some level, all mechanical hardware is flexible, which means additional states, but you may choose to ignore those states in your model and develop your feedback controller assuming a rigid system. The important part is that you feed back all states critical to your design so that your controller will still work on the real hardware.

You have to have some kind of access to all of the critical states in order to feed them back. The output, y, might include every state, in which case you’re all set. However, if this isn’t the case, you will either need to add more sensors to your system to measure the missing states or use the existing outputs to estimate or observe the states you aren’t measuring directly. In order to observe your system, it needs to be observable, and similarly, in order to control your system it needs to be controllable. We’ll talk about both of those concepts in the next video.

So that’s it for now. I hope these final few thoughts helped you understand a little more about what it means to do pole placement and how it’s part of an overall control architecture.

If you want some additional information, there are a few links in the description that are worth checking out that explain more about using pole placement with MATLAB.

If you don’t want to miss the next Tech Talk video, don’t forget to subscribe to this channel. Also, if you want to check out my channel, control system lectures, I cover more control theory topics there as well. Thanks for watching. I’ll see you next time.



## Intro to Control Theory Part 6: Pole Placement

In Part 4, I covered how to make a state-space model of a system to make running simulations easy. In this post, I'll talk about how to use that model to make a controller for our system.

For this post, I'm going to use an example system that I haven't talked about before - A mass on a spring:

If we call \(p\) the position of the cart (we use \(p\) instead of \(x\), since \(x\) is the entire state once we're using a state space representation), then we find that the following equation describes how the cart will move:

\[ \ddot{p} = -\frac{k}{m}p \]

Where \(p\) is position, \(k\) is the spring constant of the spring (how strong it is), and \(m\) is the mass of the cart.

You can derive this from Hooke's Law if you're interested, but the intuitive explanation is that the spring pulls back against the cart proportionally to how far it is away from the equilibrium state of the spring, but gets slowed down the heavier the cart is.

This describes an ideal spring, but one thing that you'll notice if you run a simulation of this is that it will keep on oscillating forever! We haven't taken into account friction. Taking friction into account gets us the following equation:

\[ \ddot{p} = -\frac{k}{m}p - \frac{c}{m}\dot{p} \]

Where \(c\) is the "damping coefficient" - essentially the amount of friction acting on the cart.

Now that we have this equation, let's convert it into state space form!

This system has two states - position, and velocity:

\[ x = \begin{bmatrix} p \\ \dot{p} \end{bmatrix} \]

Since \(x\) is a vector of length 2, \(A\) will be a 2x2 matrix. Remember, a state space representation always takes this form:

\[ \dot{x} = Ax + Bu \]

We'll find \(A\) first:

\[ \begin{bmatrix} \dot{p} \\ \ddot{p} \end{bmatrix} = \begin{bmatrix} ? & ? \\ ? & ? \end{bmatrix} \begin{bmatrix} p \\ \dot{p} \end{bmatrix} \]

The way that I like to think about this is that each number in the matrix is asking a question - how does X affect Y? So, for example, the upper left number in the A matrix is asking "How does position affect velocity?". Position has no effect on velocity, so the upper left number is zero. Next, we can look at the upper right number. This is asking "How does velocity affect velocity?" Well, velocity is velocity, so we put a 1 there (since you need to multiply velocity by 1 to get velocity). If we keep doing this process, we get the following equation:

\[ \begin{bmatrix} \dot{p} \\ \ddot{p} \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -\frac{k}{m} & -\frac{c}{m} \end{bmatrix} \begin{bmatrix} p \\ \dot{p} \end{bmatrix} \]

For the sake of this post, I'll pick some arbitrary values for \(m\), \(k\), and \(c\): \(m = 1 \text{ kg}\), \(k = 0.4 \text{ N/m}\), \(c = 0.3 \text{ N·s/m}\). Running a simulation of this system, starting at a position of 1 meter, we get the following response:
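Since the response plot isn't reproduced here, a quick Euler-integration sketch in Python (not the author's code) shows the same behavior, a decaying oscillation:

```python
# Sketch: Euler simulation of the damped mass-spring system
# p'' = -(k/m) p - (c/m) p', starting 1 m from equilibrium, at rest.
m, k, c = 1.0, 0.4, 0.3
p, v = 1.0, 0.0
dt = 0.001

history = []
for step in range(int(60 / dt)):   # simulate 60 seconds
    a = -(k/m)*p - (c/m)*v
    p += v * dt
    v += a * dt
    history.append(p)

print(min(history))      # negative: the cart overshoots (it oscillates)
print(abs(history[-1]))  # near zero: friction damps the motion out
```

The position swings through zero repeatedly while the envelope shrinks, exactly the "oscillating but also decreasing" response described next.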

Notice that this plot shows two things happening - the position is oscillating, but also decreasing. There's actually a way to quantify how much the system will oscillate and how quickly it will converge to zero (if it does at all!). In order to see how a system will act, we look at the "poles" of the system. In order to understand what the poles of a system mean, we need to take a quick detour into linear algebra.

Our matrix \(A\) is actually a linear transformation. That means that if we multiply a vector by \(A\), we will get out a new, transformed vector. Multiplication and addition are preserved, such that \( A(x \times 5) = (Ax) \times 5 \) and \( A(x_1 + x_2) = Ax_1 + Ax_2 \). When you look at \(A\) as a linear transformation, you'll see that some vectors don't change direction when you apply the transform to them:

The vectors that don't change direction when transformed are called "eigenvectors". For this transform, the eigenvectors are the blue and pink arrows. Each eigenvector has an "eigenvalue", which is how much it stretches the vector by. In this example, the eigenvalue of the blue vectors is 3 and the eigenvalue of the pink vectors is 1.

So how does this all relate to state space systems? Well, the eigenvalues of the system (also called the poles of a system) have a direct effect on the response of the system. Let's look at our eigenvalues for our system above. Plugging the matrix into Octave/MATLAB gives us:
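The Octave output isn't reproduced here, but for a 2x2 system the eigenvalues follow directly from the quadratic formula; a Python sketch:

```python
# Sketch: eigenvalues of A = [[0, 1], [-k/m, -c/m]] by the quadratic
# formula. The characteristic polynomial is l^2 + (c/m) l + (k/m) = 0.
import cmath

k, m, c = 0.4, 1.0, 0.3
disc = cmath.sqrt((c/m)**2 - 4*(k/m))
lam1 = (-(c/m) + disc) / 2
lam2 = (-(c/m) - disc) / 2
print(lam1, lam2)   # approximately -0.15 + 0.6144i and -0.15 - 0.6144i
```

So the system has a complex-conjugate pair with negative real part, which is why the simulated response both oscillates and decays.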

So we can see that we have two eigenvalues, both of which are complex numbers. What does this mean? Well, the real component of the number tells you how fast the system will converge to zero. The more negative it is, the faster it will converge to zero. If it is above zero, the system is unstable, and will trend towards infinity (or negative infinity). If it is exactly zero, the system is "marginally stable" - it won't get larger or smaller. The imaginary part of the number tells you how much the system will oscillate. For every positive imaginary part, there is a negative one of the same magnitude with the same real part, so it's just the magnitude of the imaginary part that determines how much the system will oscillate - the higher the magnitude of the imaginary part, the more the system will oscillate.

Why is this the case? Well, as it turns out, the derivative of a specific state is the current value of that state times the eigenvalue associated with that state. So, a negative eigenvalue will result in a derivative that drives the state to zero, whereas a positive eigenvalue will cause the state to increase in magnitude forever. An eigenvalue of zero will cause the derivative to be zero, which obviously results in no change to the state.

That explains real eigenvalues, but what about imaginary eigenvalues? Let's imagine a system that has two poles, at \(0+0.1i\) and \(0-0.1i\). Since this system has a real component of zero, it will be marginally stable, but since it has an imaginary component, it will oscillate. Here's a way of visualizing this system:

The blue vector is the position of the system. The red vectors are the different components of that position (the sum of the red vectors will equal the blue vector). The green vectors are the time derivatives of the red vectors. As you can see, the eigenvalue being imaginary causes each component of the position to be imaginary, but since there is always a pair of imaginary poles of the same magnitude but different signs, the actual position will always be real.

So, how is this useful? Well, it lets us look at a system and see what its response will look like. But we don't just want to be able to see how the system will respond, we want to be able to change how the system will respond. Let's return to our mass on a spring:

Now let's say that we can apply an arbitrary force \(u\) to the system. For this, we use our \(B\) matrix:

\[ \begin{bmatrix} \dot{p} \\ \ddot{p} \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -\frac{k}{m} & -\frac{c}{m} \end{bmatrix} \begin{bmatrix} p \\ \dot{p} \end{bmatrix} + \begin{bmatrix} 0 \\ \frac{1}{m} \end{bmatrix} u \]

Now, let's design a controller that will stop there from being any oscillation, and drive the system to zero much more quickly. Remember, all "designing a controller" means in this case is finding a matrix \(K\), where setting \( u = -Kx \) will cause the system to respond in the way that you want it to. How do we do this? Well, it turns out that it's actually fairly easy to place the poles of a system wherever you want. Since we want to have no oscillation, we'll make the imaginary part of the poles zero, and since we want a fast response time, we'll make the real part of the poles -2.5 (this is pretty arbitrary). We can use MATLAB/Octave to find what our K matrix must be to have the poles of the closed loop system be at -2.5:

Which gives us the output:

So our K matrix is:

\[ K = \begin{bmatrix} 5.85 & 4.7 \end{bmatrix} \]
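These gains can be verified by coefficient matching; a short sketch (assuming the feedback convention \( u = -Kx \)):

```python
# Sketch: with u = -K x, the closed-loop matrix is
# A - B K = [[0, 1], [-0.4 - k1, -0.3 - k2]], and we want the
# characteristic polynomial (l + 2.5)^2 = l^2 + 5 l + 6.25.
k1 = 6.25 - 0.4   # constant term: 0.4 + k1 = 6.25
k2 = 5.0 - 0.3    # l term:        0.3 + k2 = 5
print(k1, k2)     # approximately 5.85 and 4.7
```

Matching coefficients by hand reproduces exactly the K that the `place` call returns.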

And running a simulation of the system with that K matrix gives us:

Much better! It converges in under five seconds with no oscillation, compared with >30 seconds with lots of oscillations for the open-loop response. But wait, if we can place the poles anywhere we want, and the more negative they are the faster the response, why not just place them as far negative as possible? Why not place them at -100 or -1000 or -100000? For that matter, why do we ever want our system to oscillate, if we can just make the imaginary part of the poles zero? Well, the answer is that you can make the system converge as fast as you want, so long as you have enough energy that you can actually apply to the system. In real life, the motors and actuators are limited in the amount of force that they can apply. We ignore this in the state-space model, since it makes the system non-linear, but it's something that you need to keep in mind when designing a controller. This is also the reason that you might want some oscillation - oscillation will make you reach your target faster than you would otherwise. Sometimes, getting to the target fast is more important than not oscillating much.
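To see that closed-loop behavior without Octave, here is an Euler-integration sketch (not the author's code) of the damped spring under \( u = -Kx \) with the gains above:

```python
# Sketch: closed-loop simulation of p'' = -(k/m) p - (c/m) p' + u/m
# with full state feedback u = -(k1*p + k2*v).
k_spring, m, c = 0.4, 1.0, 0.3
K = [5.85, 4.7]
p, v = 1.0, 0.0
dt = 0.001
min_p = p

for step in range(int(5 / dt)):            # 5 seconds
    u = -(K[0]*p + K[1]*v)                 # full state feedback
    a = -(k_spring/m)*p - (c/m)*v + u/m
    p += v*dt
    v += a*dt
    min_p = min(min_p, p)

print(abs(p))   # tiny: converged well within 5 s
print(min_p)    # stays essentially non-negative: no oscillation
```

With both poles at -2.5 the response is critically damped: it never crosses zero and is effectively settled within five seconds, as the text describes.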

So, that's how you design a state space controller with pole placement! There are also a ton of other ways to design controllers (most notably LQR) which I'll go into later, but understanding how poles determine the response of a system is important for any kind of controller.



## Introduction: State-Space Methods for Controller Design

In this section, we will show how to design controllers and observers using state-space (or time-domain) methods.

Key MATLAB commands used in this tutorial are: eig, ss, lsim, place, acker


## Controllability and Observability

Control design using pole placement, introducing the reference input, observer design.

There are several different ways to describe a system of linear differential equations. The state-space representation was introduced in the Introduction: System Modeling section. For a SISO LTI system, the state-space form is given below:

\[ \dot{x} = Ax + Bu \]

\[ y = Cx + Du \]

To introduce the state-space control design method, we will use the magnetically suspended ball as an example. The current through the coils induces a magnetic force which can balance the force of gravity and cause the ball (which is made of a magnetic material) to be suspended in mid-air. The modeling of this system has been established in many control text books (including Automatic Control Systems by B. C. Kuo, the seventh edition).

The equations for the system are given by:

From inspection, it can be seen that one of the poles is in the right-half plane (i.e. has positive real part), which means that the open-loop system is unstable.

To observe what happens to this unstable system when there is a non-zero initial condition, add the following lines to your m-file and run it again:

It looks like the distance between the ball and the electromagnet will go to infinity, but probably the ball hits the table or the floor first (and also probably goes out of the range where our linearization is valid).

Let's build a controller for this system using a pole placement approach. The schematic of a full-state feedback system is shown below. By full-state, we mean that all state variables are known to the controller at all times. For this system, we would need a sensor measuring the ball's position, another measuring the ball's velocity, and a third measuring the current in the electromagnet.

The state-space equations for the closed-loop feedback system are, therefore,

From inspection, we can see the overshoot is too large (there are also zeros in the transfer function which can increase the overshoot; you do not explicitly see the zeros in the state-space formulation). Try placing the poles further to the left to see if the transient response improves (this should also make the response faster).

This time the overshoot is smaller. Consult your textbook for further suggestions on choosing the desired closed-loop poles.

Note: If you want to place two or more poles at the same position, place will not work. You can use a function called acker which achieves the same goal (but can be less numerically well-conditioned):

K = acker(A,B,[p1 p2 p3])

Now, we will take the control system as defined above and apply a step input (we choose a small value for the step, so we remain in the region where our linearization is valid). Replace t, u, and lsim in your m-file with the following:

The system does not track the step well at all; not only is the magnitude not one, but it is negative instead of positive!

Scaling the reference input by the inverse of the system's DC gain compensates for this, and now a step can be tracked reasonably well. Note, our calculation of the scaling factor requires good knowledge of the system. If our model is in error, then we will scale the input an incorrect amount. An alternative, similar to what was introduced with PID control, is to add a state variable for the integral of the output error. This has the effect of adding an integral term to our controller which is known to reduce steady-state error.

From the above, we can see that the observer estimates converge to the actual state variables quickly and track the state variables well in steady-state.

Published with MATLAB® 9.2

## ECE 486 Control Systems Lecture 21

Pole placement.

Last time, we discussed the phenomenon of pole-zero cancellation. Further, we introduced coordinate transformation or linear transformation and used a quick example to show how to convert the state-space model of a completely controllable system to Controllable Canonical Form.

In this lecture, we will learn how to assign arbitrary closed loop poles of a completely controllable system \(\dot{x} = Ax + Bu\) by means of (full) state feedback \(u=-Kx\).

## Pole Placement by State Feedback

Recall a system state-space model shown in Figure 1

Figure 1: State-space model of a system with input \(u\) and output \(y\)

corresponds to a transfer function \(G(s)\)

\[ G(s) = C(sI-A)^{-1}B. \]

Its open loop poles are the eigenvalues of \(A\) matrix

\[ \det(sI-A) = 0. \]

We can add a controller to move the poles to desired locations; this is called pole placement.

Figure 2: Pole placement via controller \(KD(s)\)

Consider the single-input system in Figure 1 with \(y = x\) (i.e., \(C = I\)).

Let’s introduce a state feedback law

\[ u = -Kx + r. \]

Then the closed loop system becomes

\[ \dot{x} = (A-BK)x + Br. \]

If we enclose the open loop system with feedback \(K\) and reference \(r\), as shown in Figure 3, we can derive the closed loop transfer function \(\frac{Y}{R}(s)\) as follows.

Figure 3: State feedback \(K\) encloses system \((A, B, I, 0)\) with reference \(r\)

Taking the Laplace transform, we have

\[ sX = (A-BK)X + BR \quad\Longrightarrow\quad \frac{Y}{R}(s) = \big(sI-(A-BK)\big)^{-1}B. \]

We notice the closed loop poles are just the eigenvalues of \(A-BK\).

Pole Placement : Assigning closed-loop poles \(=\) Assigning eigenvalues of \(A-BK\).

Now we will see that this is particularly straightforward if the system model \((A,B,C,D)\) is in CCF, i.e., with

Claim: Let

\[ \det(sI-A) = s^n + a_1 s^{n-1} + \cdots + a_{n-1} s + a_n. \]

The last row of the \(A\) matrix in CCF consists of the coefficients of the characteristic polynomial in reverse order, with “\(-\)” signs.

Proof: We prove the claim by induction on \(n\).

The base case \(n = 2\) is obvious.

Assuming true for \(n = k \geq 2\), consider a \((k+1)\times (k+1)\) matrix \(A_{k+1}\) of the form in the claim.

Its determinant can be written as

The first equality uses the fact that we can calculate the determinant by expansion by minors using the two entries \(s\) and \(-a_{k+1}\) in the first column.

The second equality uses the induction hypothesis for \(n = k\).

Using this claim, we shall see that the closed loop poles can be assigned by state feedback. By Laplace transforms of the CCF state equations, \(sX_i = X_{i+1}\) for \(i = 1, \ldots, n-1\), so \(X_i = s^{i-1}X_1\). Substituting into the last state equation

\[ sX_n = -a_n X_1 - a_{n-1} X_2 - \cdots - a_1 X_n + U \]

gives us

\[ \underbrace{\left(s^n + a_1 s^{n-1} + \cdots + a_n\right)}_{\text{characteristic polynomial}}X_1 = U. \]

With the feedback \(u = -Kx\), \(K = [k_1 \;\; k_2 \;\; \cdots \;\; k_n]\), the closed loop matrix \(A - BK\) is again in CCF: since \(BK\) has \(K\) as its last row and zeros elsewhere, only the last row of \(A\) changes. \(\det \big(sI - (A-BK) \big)= 0\) determines the eigenvalues of \(A - BK\), i.e., the closed loop poles. By the claim we proved above,

\[ \det\big(sI - (A-BK)\big) = s^n + (a_1 + k_n)s^{n-1} + \cdots + (a_{n-1} + k_2)s + (a_n + k_1). \]

Observation : When the system is in Controllable Canonical Form (CCF), each control gain affects one and only one of the coefficients of the characteristic polynomial, and these coefficients can be assigned arbitrarily by a suitable choice of \(k_1,k_2,\ldots,k_n\).

Hence the name Controllable Canonical Form — convenient for control design.
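This observation is easy to check numerically. A sketch with arbitrary third-order coefficients (not from the lecture): in CCF, \(u = -Kx\) with \(K = [k_1\; k_2\; k_3]\) shifts each coefficient independently.

```python
import numpy as np

# Open-loop polynomial: s^3 + 6 s^2 + 11 s + 6 (coefficients chosen arbitrarily).
a1, a2, a3 = 6.0, 11.0, 6.0
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-a3, -a2, -a1]])          # CCF: negated coefficients in the last row
B = np.array([[0.0], [0.0], [1.0]])
K = np.array([[2.0, 3.0, 4.0]])          # k1, k2, k3
# Expect det(sI - A + BK) = s^3 + (a1+k3)s^2 + (a2+k2)s + (a3+k1)
#                         = s^3 + 10 s^2 + 14 s + 8.
print(np.poly(A - B @ K))                # [1, 10, 14, 8]
```

Each gain lands in exactly one coefficient of the characteristic polynomial, which is why the CCF makes the design trivial.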

We can summarize the general procedure for any completely controllable system.

- Convert state-space model of the system to CCF using a suitable invertible coordinate transformation \(T\); such a transformation exists by controllability.
- Solve the pole placement problem in CCF in the new coordinates.
- Convert the solution in CCF framework back to original coordinates.

Example : Consider the given state-space model below,

Find a state feedback to place closed loop poles at \(-10\pm j\).

Solution : Step 1 : Check controllability and find the linear transformation that converts our state-space model to CCF. (We reuse the transform \(T\) from the previous lecture.)

Step 2 : Find \(u = -\bar{K}\bar{x}\) to place closed-loop poles at \(-10 \pm j\).

The desired characteristic polynomial is

\[ (s + 10 - j)(s + 10 + j) = s^2 + 20s + 101. \]

Thus the closed loop system matrix (in CCF) should be

\[ \bar{A} - \bar{B}\bar{K} = \begin{bmatrix} 0 & 1 \\ -101 & -20 \end{bmatrix}. \]

On the other hand, with \(\bar{K} = [\bar{k}_1 \;\; \bar{k}_2]\), we know

\[ \bar{A} - \bar{B}\bar{K} = \begin{bmatrix} 0 & 1 \\ -15 - \bar{k}_1 & -8 - \bar{k}_2 \end{bmatrix}. \]

Matching the matrix entries, we get \(\bar{k}_1 = 86, \, \bar{k}_2 = 12\).

This gives the control law

\[ u = -\bar{K}\bar{x} = -\begin{bmatrix} 86 & 12 \end{bmatrix}\bar{x}. \]

Step 3 : Convert the obtained state feedback back to the old coordinates.

The desired state feedback control law therefore is

PDF slides by Prof. M. Raginsky and Prof. D. Liberzon. Edited and HTML-ized by Yün Han.

Last updated: 2018-04-27 Fri 14:15


## 9.1: Controller Design in State-Space


- Kamran Iqbal
- University of Arkansas at Little Rock

## State Feedback Controller Design

The state feedback controller design refers to the selection of individual feedback gains for the complete set of state variables. It is assumed that all the state variables are available for observation. The design goal is to improve the transient response of the system and reduce the steady-state tracking error.

Let \({\bf x}(t)\) denote a vector of \(n\) state variables, \(u(t)\) denote a scalar input, and \(y(t)\) denote a scalar output; then the state variable model of a SISO system is given as: \[\dot{\bf x}(t)={\bf Ax}(t)+{\bf b}u(t), \;\; y(t)={\bf c}^{T} {\bf x}(t) \nonumber \]

where \({\bf A}\) is an \(n\times n\) system matrix, \({\bf b}\) is an \(n\times 1\) column vector, and \({\bf c}^{T}\) is a \(1\times n\) row vector. The above state variable model represents a strictly proper input-output transfer function.

Controller design in state-space involves selection of suitable feedback gain vector, \({\bf k}^T\), that imparts desired stability and performance characteristics to the closed-loop system.

## Pole Placement Design

Pole placement design refers to the selection of the feedback gain vector that places the poles of the characteristic polynomial at the desired locations. The control law for the pole placement design is expressed as: \[u(t)=-{\bf k}^{T} {\bf x}(t)+r(t), \nonumber \]

where \({\bf k}^{T} =[k_{1} ,\; \; k_{2} ,\; \ldots ,k_{n} ]\) is a vector of \(n\) feedback gains, one for each of the \(n\) state variables, and \(r(t)\) is a reference input.

A necessary condition for pole placement using state feedback is that pair \(\left({\bf A,\,b}\right)\) is controllable, i.e., its controllability matrix, \({\bf M}_{\rm C}\), is of full rank, where \({\bf M}_{\rm C} =[{\bf b,\; Ab,\;\ldots ,\;A}^{n-1} {\bf b}]\) .

By including control law in the state equations, the closed-loop system is given as:

\[\dot{\bf x}(t)=({\bf A-bk}^{T}){\bf x}(t)+{\bf b}r(t) \nonumber \]

The design problem, then, is to select the feedback gain vector \({\bf k}^T\) that multiplies the state vector \({\bf x}(t)\), such that the characteristic polynomial of the closed-loop system matrix \({\bf A-bk}^{T}\) matches a desired polynomial, i.e.,

\[\left|s{\bf I-A+bk}^{T} \right|=\Delta _{\rm des} (s) \nonumber \]

The above equation relates two \(n\)th order polynomials; by equating the polynomial coefficients on both sides of the equation, we obtain \(n\) linear equations that can be solved for the \(n\) feedback gains: \(k_{i} ,\; i=1,\ldots ,n\).
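This coefficient matching can be checked numerically for a concrete plant. A sketch with a hypothetical second-order system (not from the text); SciPy's `place_poles` solves the equivalent pole-assignment problem:

```python
import numpy as np
from scipy.signal import place_poles

# Hypothetical plant; desired closed-loop polynomial (s+3)(s+4) = s^2 + 7 s + 12.
A = np.array([[0.0, 1.0],
              [-5.0, -2.0]])
b = np.array([[0.0], [1.0]])
K = place_poles(A, b, [-3.0, -4.0]).gain_matrix   # u = -K x
print(K)                      # [[7, 5]], the gains coefficient matching gives
print(np.poly(A - b @ K))     # [1, 7, 12], the desired polynomial
```

For a single-input system the solution of the \(n\) linear equations is unique, so any correct method returns the same gain vector.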

The desired characteristic polynomial for pole placement design is selected with root locations that meet the time-domain performance criteria.

## Closed-loop System

The closed-loop system model is given as: \[\dot{\bf x}(t)={\bf A}_{\rm cl}{\bf x}(t)+{\bf b}r(t) \nonumber \]

where \({\bf A}_{\rm cl} =({\bf A-bk}^{T})\). Assuming closed-loop stability, for a constant input \(r(t)=r_{\rm ss}\), the steady-state response, \({\bf x}_{\rm ss}\), obeys:

\[0 ={\bf A}_{\rm cl} {\bf x}_{ss} +{\bf b} r_{ss} ,\;\; y_{\rm ss} ={\bf c}^T {\bf x}_{ss} \nonumber \]

Hence, \(y_{\rm ss}=-{\bf c}^T\,{\bf A}_{\rm cl}^{-1}\,{\bf b}\,r_{\rm ss}\).
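A short numeric check of the steady-state relation, using a hypothetical stable closed loop (the matrices are illustrative, not from the text):

```python
import numpy as np

# Solve 0 = Acl x_ss + b r_ss for x_ss, then read off y_ss = c^T x_ss.
Acl = np.array([[0.0, 1.0],
                [-12.0, -7.0]])   # stable: eigenvalues -3, -4
b = np.array([[0.0], [1.0]])
c = np.array([[1.0, 0.0]])
r_ss = 1.0
x_ss = -np.linalg.solve(Acl, b) * r_ss   # x_ss = -Acl^{-1} b r_ss
y_ss = float(c @ x_ss)
print(y_ss)                               # 1/12 for this example
```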

## Example 1

The state and output equations for a DC motor model are given as:

\[\frac{\rm d}{\rm dt} \left[\begin{array}{c} {i_a } \\ {\omega } \end{array}\right]=\left[\begin{array}{cc} {-100} & {-5} \\ {5} & {-10} \end{array}\right] \left[\begin{array}{c} {i_a } \\ {\omega } \end{array}\right]+\left[\begin{array}{c} {100} \\ {0} \end{array}\right]V_a , \;\;\omega =\left[\begin{array}{cc} {0} & {1} \end{array}\right]\left[\begin{array}{c} {i_a } \\ {\omega } \end{array}\right]. \nonumber \]

The motor model has the denominator polynomial \(\Delta (s)=s^{2} +110s+1025\); its roots, located at \(s_{1,2} =-10.28,\; -99.72\), are the negative reciprocals of the motor time constants \((\tau_{\rm m} \cong 100\; \rm msec,\; \tau_{\rm e} \cong 10\; \rm msec)\).

Assume that the design specifications require the closed-loop response to have a time constant of \(20\;\rm msec\).

To reduce the dominant time constant to \(20\;\rm msec\), we may choose to place the closed-loop poles at \(-50,\,-100\). Then, the desired characteristic polynomial is given as: \(\Delta _{\rm des} (s)=s^{2} +150s+5000\).

We may assume \(r=0\), so the control law is given as: \(V_a =-k^{T} x\), where \(k^{T} =[k_{1} ,\;k_{2} ]\).

With the inclusion of state feedback, the closed-loop system matrix is given as:

\[A-bk^{T} =\left[\begin{array}{cc} {-100(1+k_{1} )} & {-5(1+20k_{2})} \\ {5} & {-10} \end{array}\right] \nonumber \]

The closed-loop characteristic polynomial is given as:

\[\left|sI-A+bk^{T} \right|=(s+10)\left[s+100(1+k_{1} )\right]+25(1+20k_{2} ). \nonumber \]

By comparing the coefficients of the two polynomials, we obtain the feedback gains: \(k_{1} =0.4,k_{2} =7.15\). The resulting control law is given as:

\[V_a\left(t\right)=-0.4\ i_a\left(t\right)-7.15\ \omega (t) \nonumber \]

The closed-loop system matrix is given as: \(\left[\begin{array}{cc} {-140} & {-720} \\ {5} & {-10} \end{array}\right]\).

The steady-state system response is given as: \(y_{\rm ss}=0.1r_{\rm ss}\).
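The numbers in this example can be replayed in a few lines of NumPy (a sketch; `np.poly` returns characteristic-polynomial coefficients):

```python
import numpy as np

# DC motor model and gains from the example.
A = np.array([[-100.0, -5.0],
              [5.0, -10.0]])
b = np.array([[100.0], [0.0]])
c = np.array([[0.0, 1.0]])
k = np.array([[0.4, 7.15]])
Acl = A - b @ k
print(np.poly(Acl))            # [1, 150, 5000], the desired polynomial
# DC gain from r to y: y_ss = -c^T Acl^{-1} b r_ss
y_ss_gain = float(-c @ np.linalg.solve(Acl, b))
print(y_ss_gain)               # 0.1
```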

The step response of the closed-loop system is plotted in Figure 1.

## Pole Placement in Controller Form

The pole placement design is facilitated if the system model is in the controller form (Section 8.3.1). In the controller form structure, the coefficients of the characteristic polynomial appear, negated and in reverse order, in the last row of the \(A\) matrix.

\[A=\left[\begin{array}{ccccc} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ -a_{n} & -a_{n-1} & -a_{n-2} & \cdots & -a_{1} \end{array}\right],\quad b=\left[\begin{array}{c} 0 \\ 0 \\ \vdots \\ 1 \end{array}\right]. \nonumber \]

Then, using full state feedback through \(u=-k^{T} x+r\), the closed-loop system matrix is given as:

\[A-bk^{T} =\left[\begin{array}{ccccc} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ -a_{n}-k_{1} & -a_{n-1}-k_{2} & -a_{n-2}-k_{3} & \cdots & -a_{1}-k_{n} \end{array}\right]. \nonumber \]

The closed-loop characteristic polynomial can be written by inspection, and is given as:

\[\Delta (s)=s^{n} +(a_{1} +k_{n} )s^{n-1} +\ldots +(a_{n} +k_{1}) \nonumber \]

Let the desired characteristic polynomial be defined as:

\[\Delta _{\rm des} (s)=s^{n} +\bar{a}_{1} s^{n-1} +\ldots +\bar{a}_{n-1} s+\bar{a}_{n} . \nonumber \]

By comparing the coefficients, the desired feedback gains are computed as:

\[k_{1} =\bar{a}_{n} -a_{n} ,\; \, \, k_{2} =\bar{a}_{n-1} -a_{n-1} ,\, \, \ldots \, \, ,\; \, \; k_{n} =\bar{a}_{1} -a_{1} . \nonumber \]

Since the state variables in the controller form include system output and its derivatives, pole placement using state feedback is a generalization of proportional-derivative (PD) and rate feedback controllers.
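The controller-form shortcut above is a one-liner; the third-order coefficients below are arbitrary illustrations, not from the text:

```python
import numpy as np

# k_i = abar_{n-i+1} - a_{n-i+1}: subtract coefficient vectors and reverse.
a    = np.array([6.0, 11.0, 6.0])     # open loop: s^3 + 6 s^2 + 11 s + 6
abar = np.array([9.0, 26.0, 24.0])    # desired:  s^3 + 9 s^2 + 26 s + 24
k = (abar - a)[::-1]                  # [abar_n - a_n, ..., abar_1 - a_1]
print(k)                              # [18, 15, 3]

# Verify against the controller-form realization.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-6.0, -11.0, -6.0]])
b = np.array([[0.0], [0.0], [1.0]])
print(np.poly(A - b @ k.reshape(1, -1)))   # [1, 9, 26, 24]
```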

## Example 2

The state variable model of a mass–spring–damper is given as:

\[\frac{\rm d}{\rm dt} \left[\begin{array}{c} {x} \\ {v} \end{array}\right]=\left[\begin{array}{cc} {0} & {1} \\ {-10} & {-1} \end{array}\right]\left[\begin{array}{c} {x} \\ {v} \end{array}\right]+\left[\begin{array}{c} {0} \\ {1} \end{array}\right]f, \;\; x=\left[\begin{array}{cc} {1} & {0} \end{array}\right]\left[\begin{array}{c} {x} \\ {v} \end{array}\right]. \nonumber \]

Assume that the design specifications require raising the damping to \(\zeta =0.6\).

Since the system model is in controller form, its characteristic polynomial can be written by inspection, and is given as: \(\Delta (s)=s^{2} +s+10\).

In order to improve the damping, a desired characteristic polynomial is selected as: \(\Delta _{\rm des} (s)=s^{2} +4s+10\).

The desired feedback gains can be written by inspection, and are given as: \(k^T =[0\; \; 3]\).
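A quick check of this example (values from the text): the closed-loop polynomial and the achieved damping ratio follow directly.

```python
import numpy as np

# Mass-spring-damper model and the gains read off by inspection.
A = np.array([[0.0, 1.0],
              [-10.0, -1.0]])
b = np.array([[0.0], [1.0]])
k = np.array([[0.0, 3.0]])
coeffs = np.poly(A - b @ k)       # expect s^2 + 4 s + 10
wn = np.sqrt(coeffs[2])           # natural frequency
zeta = coeffs[1] / (2.0 * wn)     # achieved damping ratio
print(coeffs, zeta)
```

The achieved damping ratio is \(2/\sqrt{10}\approx 0.63\), slightly above the \(\zeta = 0.6\) specification.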

## Pole Placement using Bass-Gura Formula

The Bass-Gura formula gives a simple expression for the state feedback controller gains in terms of the coefficients of the open-loop and desired closed-loop characteristic polynomials. To proceed, let the state variable model be given as:

\[\dot{x}(t)=Ax(t)+bu(t), \;\; y(t)=c^{T} x(t) \nonumber \]

Assuming that pair \(\left(A,b\right)\) is controllable, its controllability matrix is given as: \(M_{\rm C }=[b,\; Ab,\; \ldots, \;A^{n-1} b]\).

The model is transformed into controller form by a linear transformation, \(z=Px\); the transformed model is described as:

\[\dot{z}(t)=A_{\rm CF} z(t)+b_{\rm CF} u(t), \;\; y(t)=c_{\rm CF}^{T} z(t) \nonumber \]

The controllability matrix of the controller form representation is given as: \(M_{\rm CF} =[b_{\rm CF} ,\; A_{\rm CF} b_{\rm CF} ,\ldots ,\; A_{\rm CF} {}^{n-1} b_{\rm CF} ].\) The matrix comprises the coefficients of the characteristic polynomial, \(\mathit{\Delta}\left(s\right)=\left|sI-A\right|\), and can be written by inspection.

The required linear transformation matrix is obtained from the following relation:

\[P^{-1} =M_{\rm C} M_{\rm CF}^{-1} ,\;\; P=M_{\rm CF} M_{\rm C}^{-1} . \nonumber \]

The state feedback control law in the controller form is defined as:

\[u=-k_{\rm CF} {}^{T} z(t)=-k_{\rm CF} {}^{T} Px(t). \nonumber \]

In terms of polynomial coefficients, the controller gains are given as: \(k_{\rm CF} {}^{T} =\left[\begin{array}{cccc} {\bar{a}_{n} -a_{n} } & {\bar{a}_{n-1} -a_{n-1} } & {\cdots } & {\bar{a}_{1} -a_{1} } \end{array}\right]\).

The controller gains for the original state variable model are obtained as: \(k^{T} =k_{\rm CF} {}^{T} P\).

Hence, the Bass-Gura formula is obtained as:

\[k^{T} =\left[\begin{array}{cccc} {\bar{a}_{n} -a_{n} } & {\bar{a}_{n-1} -a_{n-1} } & {\cdots } & {\bar{a}_{1} -a_{1} } \end{array}\right]\, M_{\rm CF} M_{\rm C}^{-1} \nonumber \]
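A NumPy sketch of the formula; the helper name `bass_gura` and the second-order test plant are illustrative assumptions, not from the text:

```python
import numpy as np

def bass_gura(A, b, desired_poly):
    """Bass-Gura: k^T = [abar_n - a_n, ..., abar_1 - a_1] M_CF M_C^{-1}."""
    n = A.shape[0]
    a = np.poly(A)                                    # [1, a_1, ..., a_n]
    # Controllability matrix of the original model.
    MC = np.hstack([np.linalg.matrix_power(A, i) @ b for i in range(n)])
    # Controller-form model, written by inspection from the coefficients.
    ACF = np.zeros((n, n))
    ACF[:-1, 1:] = np.eye(n - 1)
    ACF[-1, :] = -a[1:][::-1]                         # last row: [-a_n, ..., -a_1]
    bCF = np.zeros((n, 1)); bCF[-1, 0] = 1.0
    MCF = np.hstack([np.linalg.matrix_power(ACF, i) @ bCF for i in range(n)])
    kCF = (np.asarray(desired_poly)[1:] - a[1:])[::-1]
    return kCF @ MCF @ np.linalg.inv(MC)

# Hypothetical plant, desired polynomial s^2 + 7 s + 12.
A = np.array([[0.0, 1.0], [-5.0, -2.0]])
b = np.array([[0.0], [1.0]])
k = bass_gura(A, b, [1.0, 7.0, 12.0])
print(k)                                                       # [7, 5]
print(np.sort(np.linalg.eigvals(A - b @ k.reshape(1, -1))))    # [-4, -3]
```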

## Example 3

The state equations for a DC motor model are given as:

\[\frac{\rm d}{\rm dt} \left[\begin{array}{c} {i_a } \\ {\omega } \end{array}\right]=\left[\begin{array}{cc} {-100} & {-5} \\ {5} & {-10} \end{array}\right]\left[\begin{array}{c} {i_a } \\ {\omega } \end{array}\right]+\left[\begin{array}{c} {100} \\ {0} \end{array}\right]V_a \nonumber \]

Assume that the design specifications require a time constant of \(20ms\).

The characteristic polynomial of the model is: \(\left|sI-A\right|=s^{2} +110s+1025.\)

The controllability matrix of the model is formed as: \(M_{\rm C} =[b,\; Ab]=\left[\begin{array}{cc} {100} & {-10^{4} } \\ {0} & {500} \end{array}\right]\).

The controller form representation for the model is developed as:

\[\frac{\rm d}{\rm dt} \left[\begin{array}{c} {x_1} \\ {x_{2} } \end{array}\right]=\left[\begin{array}{cc} {0} & {1} \\ {-1025} & {-110} \end{array}\right]\left[\begin{array}{c} {x_1 } \\ {x_{2} } \end{array}\right]+\left[\begin{array}{c} {0} \\ {1} \end{array}\right]V_a \nonumber \]

The controllability matrix for the controller form representation is: \(M_{\rm CF} =\left[\begin{array}{cc} {0} & {1} \\ {1} & {-110} \end{array}\right]\).

Assume that the desired characteristic polynomial is selected as: \(\Delta _{\rm des} (s)=s^{2} +150s+5000\).

The feedback gains for the controller form representation are obtained as: \(k_{\rm CF} {}^{T} =\left[\begin{array}{cc} {3975} & {40} \end{array}\right]\).

Using the Bass-Gura formula, the state feedback controller gains for the DC motor model are computed as: \(k^{T} =k_{\rm CF} {}^{T} \, M_{\rm CF} M_{\rm C}^{-1} =\left[\begin{array}{cc} {0.4} & {7.15} \end{array}\right]\).

The resulting state feedback control law is given as: \(V_{a} =-0.4i_{a} -7.15\omega\).
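The matrix arithmetic in this example can be replayed directly (values copied from the text):

```python
import numpy as np

# k^T = k_CF^T M_CF M_C^{-1} with the matrices computed in the example.
MC  = np.array([[100.0, -1.0e4],
                [0.0, 500.0]])
MCF = np.array([[0.0, 1.0],
                [1.0, -110.0]])
kCF = np.array([[3975.0, 40.0]])
k = kCF @ MCF @ np.linalg.inv(MC)
print(k)    # [[0.4, 7.15]]
```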

## Pole Placement using Ackermann’s Formula

Ackermann’s formula is, likewise, a simple expression to compute the state feedback controller gains for pole placement. To develop the formula, let an \(n\)-dimensional state variable model be given as:

\[\dot{x}(t)=Ax(t)+bu(t) \nonumber \]

The controllability matrix of the model is formed as: \(M_{\rm C} =[b,\; Ab,\;\ldots ,\;A^{n-1} b].\)

Assume that a desired characteristic polynomial is given as: \(\Delta _{\rm des} (s)=s^{n} +\bar{a}_{1} s^{n-1} +\ldots +\bar{a}_{n-1} s+\bar{a}_{n} \).

Next, we evaluate the following matrix polynomial that involves the system matrix:

\[\Delta _{\rm des} (A)=A^{n} +\bar{a}_{1} A^{n-1} +\ldots +\bar{a}_{n-1} A+\bar{a}_{n} I. \nonumber \]

where \(I\) denotes an \(n\times n\) identity matrix. The state feedback controller gains are obtained from the following Ackermann’s formula:

\[k^{T} =\left[\begin{array}{cccc} {0} & {\cdots } & {0} & {1} \end{array}\right]\, M_{\rm C} {}^{-1} \Delta _{\rm des} (A) \nonumber \]
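The formula translates directly into code; the helper name `ackermann` and the test plant below are illustrative assumptions, not from the text:

```python
import numpy as np

def ackermann(A, b, desired_poly):
    """Ackermann's formula: k^T = [0 ... 0 1] M_C^{-1} Delta_des(A)."""
    n = A.shape[0]
    MC = np.hstack([np.linalg.matrix_power(A, i) @ b for i in range(n)])
    # Matrix polynomial Delta_des(A) = A^n + abar_1 A^{n-1} + ... + abar_n I.
    Delta = sum(c * np.linalg.matrix_power(A, n - i)
                for i, c in enumerate(desired_poly))
    e_last = np.zeros((1, n)); e_last[0, -1] = 1.0
    return e_last @ np.linalg.inv(MC) @ Delta

# Hypothetical plant, desired polynomial s^2 + 7 s + 12.
A = np.array([[0.0, 1.0], [-5.0, -2.0]])
b = np.array([[0.0], [1.0]])
k = ackermann(A, b, [1.0, 7.0, 12.0])
print(k)                          # [[7, 5]]
print(np.poly(A - b @ k))         # [1, 7, 12]
```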

## Example 4

\[\frac{\rm d}{\rm dt} \left[\begin{array}{c} {i_a } \\ {\omega } \end{array}\right]=\left[\begin{array}{cc} {-100} & {-5} \\ {5} & {-10} \end{array}\right]\left[\begin{array}{c} {i_a } \\ {\omega } \end{array}\right]+\left[\begin{array}{c} {100} \\ {0} \end{array}\right]V_a \nonumber \]

Let the desired characteristic polynomial be given as: \(\Delta _{\rm des} (s)=s^{2} +150s+5000\).

The controllability matrix for the model is computed as: \(M_{\rm C} =[b,\; Ab]=\left[\begin{array}{cc} {100} & {-10^{4} } \\ {0} & {500} \end{array}\right]\).

The matrix polynomial used in the formula is evaluated as: \(\Delta _{\rm des} (A)=\left[\begin{array}{cc} {-25} & {-200} \\ {200} & {3575} \end{array}\right]\).

Using Ackermann’s formula, the pole placement controller gains are computed as: \(k^{T} =\left[\begin{array}{cc} {0.4} & {7.15} \end{array}\right]\).
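As a sketch, these numbers check out numerically (values copied from the example):

```python
import numpy as np

# Evaluate Delta_des(A) = A^2 + 150 A + 5000 I and apply Ackermann's formula.
A = np.array([[-100.0, -5.0],
              [5.0, -10.0]])
b = np.array([[100.0], [0.0]])
Delta = A @ A + 150.0 * A + 5000.0 * np.eye(2)
MC = np.hstack([b, A @ b])                       # controllability matrix [b, Ab]
k = np.array([[0.0, 1.0]]) @ np.linalg.inv(MC) @ Delta
print(Delta)   # [[-25, -200], [200, 3575]]
print(k)       # [[0.4, 7.15]]
```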

The MATLAB Control System Toolbox includes the ‘place’ command for pole placement design (the related ‘acker’ command implements Ackermann’s formula for single-input systems). The command is invoked by entering the system matrix, the input vector, and a vector of desired roots of the characteristic polynomial. The function returns the feedback gain vector that places the eigenvalues of \(A-bk^{T}\) at the desired root locations.

## Pole Placement using Sylvester’s Equation

The Sylvester equation in linear algebra is a matrix equation, given as: \(AX+XB=C\), where \(A\) and \(B\) are square matrices of not necessarily equal dimensions, and \(C\) is a matrix of appropriate dimensions. The solution matrix \(X\) is of the same dimension as \(C\).

In order to apply the Sylvester equation for pole placement design, let an \(n\)-dimensional state variable model be given as: \[\dot{x}(t)=Ax(t)+bu(t) \nonumber \] The pair \((A,b)\) is assumed to be controllable.

Let \(A-bk^T\) represent the closed-loop system matrix; assume that a diagonalizing similarity transform is defined as: \[X^{-1}\left(A-bk^T\right)X=\mathit{\Lambda} \nonumber \] The matrix \(\mathit{\Lambda}\) is diagonal and contains the desired eigenvalues; it may be assumed in modal form for complex eigenvalues.

Let \(k^TX=g^T\); then, the Sylvester equation is formulated as: \[AX-X\mathit{\Lambda}=bg^T \nonumber \] A unique solution \(X\) to the Sylvester equation exists if \(A\) and \(\mathit{\Lambda}\) have no common eigenvalues, i.e., \({\lambda }_i\left(A\right)-{\lambda }_j\left(\mathit{\Lambda}\right)\neq 0\) for all \(i,j\).

The design procedure for pole placement using Sylvester’s equation is given as follows:

- Choose a matrix \(\mathit{\Lambda}\) of desired eigenvalues in modal form. Choose any \(g^T\) and solve \(AX-X\mathit{\Lambda}=bg^T\) for \(X\).
- Recover the feedback gain vector from \(k^T=g^TX^{-1}\). If \(X\) is not invertible, choose a different \(g^T\) and repeat the procedure.
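The two steps above can be sketched with SciPy's Sylvester solver; the plant, pole choices, and \(g\) below are hypothetical. Note that `scipy.linalg.solve_sylvester(A, B, Q)` solves \(AX + XB = Q\), so we pass \(-\Lambda\) as the second argument:

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Hypothetical plant with desired distinct closed-loop eigenvalues -3, -4.
A = np.array([[0.0, 1.0], [-5.0, -2.0]])
b = np.array([[0.0], [1.0]])
Lam = np.diag([-3.0, -4.0])          # desired eigenvalues (diagonal, all real)
g = np.array([[1.0, 1.0]])           # arbitrary choice; retry if X is singular
X = solve_sylvester(A, -Lam, b @ g)  # solves A X - X Lam = b g^T
k = g @ np.linalg.inv(X)             # recover the gain vector
print(k)                                        # [[7, 5]]
print(np.sort(np.linalg.eigvals(A - b @ k)))    # [-4, -3]
```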

## Example 5

The state equation for the mass–spring–damper model is described as: \[\frac{\rm d}{\rm dt} \left[\begin{array}{c} {x} \\ {v} \end{array}\right]=\left[\begin{array}{cc} {0} & {1} \\ {-10} & {-1} \end{array}\right]\left[\begin{array}{c} {x} \\ {v} \end{array}\right]+\left[\begin{array}{c} {0} \\ {1} \end{array}\right]f \nonumber \]

Let a desired characteristic polynomial be selected as: \(\Delta _{\rm des} (s)=s^{2} +4s+10\); then, the modal matrix of the desired eigenvalues is: \(\Lambda =\left[\begin{array}{cc} {-2} & {\sqrt{6} } \\ {-\sqrt{6} } & {-2} \end{array}\right]\).

Let \(g^{T} =[1\; \; 0]\); then, \(bg^{T} =\left[\begin{array}{cc} {0} & {0} \\ {1} & {0} \end{array}\right]\). The solution to Sylvester’s equation is obtained as: \(X=\left[\begin{array}{cc} {-0.067} & {-0.082} \\ {0.333} & {0.0} \end{array}\right]\).

The feedback controller gains are computed as: \(k^{T} =[0\; \; 3]\).
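The example's numbers can be reproduced with SciPy's Sylvester solver (a sketch; the modal \(\Lambda\) and \(g\) are taken from the text):

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Mass-spring-damper plant; desired poles -2 +/- j*sqrt(6) in modal form.
A = np.array([[0.0, 1.0], [-10.0, -1.0]])
b = np.array([[0.0], [1.0]])
s6 = np.sqrt(6.0)
Lam = np.array([[-2.0, s6], [-s6, -2.0]])
g = np.array([[1.0, 0.0]])
X = solve_sylvester(A, -Lam, b @ g)   # solves A X - X Lam = b g^T
k = g @ np.linalg.inv(X)
print(k)                              # approximately [[0, 3]]
print(np.poly(A - b @ k))             # [1, 4, 10], the desired polynomial
```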

Fusion of Engineering, Control, Coding, Machine Learning, and Science

Aleksandar Haber

## Pole Placement With Integral Control Action to Eliminate Steady-State Error (State-Space Control Design)

In this post, we explain how to integrate an integral control action into a pole placement control design in order to eliminate steady-state error. A YouTube video accompanying this post is also available.

First, we explain the main motivation for creating this tutorial. When designing control algorithms, we are primarily concerned with two design challenges. First of all, we have to make sure that our control algorithm behaves well during the transient response. That is, as designers we specify the acceptable behavior of the closed-loop system during the transient response. For example, we want to make sure that the rise time (settling time) is within a certain time interval specified by the user. Also, we want to ensure that the system’s overshoot is below a certain value. For example, below 10 or 15 percent of the steady-state value. Secondly, we want to make sure that the control algorithm is able to eliminate the steady-state control error.

In your introductory course on control systems, you have probably heard of the pole placement problem and its solutions. The classical pole placement method is used to stabilize the system or to improve the transient response. This method finds a state feedback control matrix that assigns the poles of the closed-loop system to desired locations specified by the user. However, it is not obvious how to use the pole placement method for set-point tracking and for eliminating steady-state error in set-point tracking design problems. This tutorial explains how to combine the pole placement method with an integral control action in order to eliminate steady-state error and achieve a desired transient response. The technique presented in this lecture is very important for designing control algorithms.

Consider the following state-space model:

\[ \dot{x} = Ax + Bu, \qquad y = Cx. \]

The classical pole placement method finds a feedback matrix \(K\) such that, under \(u = -Kx\), the closed-loop matrix \(A - BK\) is stable and the poles are placed at the desired locations (that are specified by the designer). The issue with this approach is that although we can place the poles at the desired locations, we do not have full control of the steady-state error.

The basic idea for tackling this problem is to augment the original system with an integrator of the tracking error:

\[ \dot{\sigma} = y - r = Cx - r. \]

We can write this compactly as an augmented state-space model with the state variable \(z = \begin{bmatrix} x \\ \sigma \end{bmatrix}\):

\[ \dot{z} = \begin{bmatrix} A & 0 \\ C & 0 \end{bmatrix} z + \begin{bmatrix} B \\ 0 \end{bmatrix} u + \begin{bmatrix} 0 \\ -I \end{bmatrix} r. \]

The state-feedback controller now has the following form:

\[ u = -\begin{bmatrix} K_x & k_I \end{bmatrix} z = -K_x x - k_I \sigma. \]

By substituting this feedback control algorithm into the augmented state-space model, we obtain the closed-loop system

\[ \dot{z} = \begin{bmatrix} A - BK_x & -Bk_I \\ C & 0 \end{bmatrix} z + \begin{bmatrix} 0 \\ -I \end{bmatrix} r. \]

The new system matrix can be stabilized, with poles at desired locations, by applying pole placement to the augmented pair.

Next, we present MATLAB code for implementing this control approach. The first set of code lines defines the system, computes the eigenvalues of the open-loop system, performs basic diagnostics, and computes the open-loop response.

The open-loop step response is given below.

The open-loop system is asymptotically stable. However, we can observe that we have a significant steady-state error. To eliminate the steady-state error, we use the developed approach.

Next, we define the augmented system for pole placement, define the desired closed-loop poles, compute the feedback control gain, define the closed-loop system, compute the transfer function of the system, and compute the step response.

The resulting step response of the system is shown in the figure below.

A few comments about the presented code are in order. When defining the closed-loop poles, we shift the open-loop poles to the left compared to their original locations in order to ensure a faster response. Also, we set the additional pole corresponding to the introduced integral action to be a shifted version of the open-loop pole with the minimal real part (maximal absolute distance from the imaginary axis). From the step response of the closed-loop system, we can observe that the steady-state error has been eliminated. Consequently, our control system with the additional integral action is able to successfully track reference set points. The computed transfer function of the closed-loop system confirms this behavior.
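Since the original MATLAB listing is not reproduced here, the following NumPy/SciPy sketch shows the same construction on a hypothetical plant (all matrices and pole choices are illustrative, not the post's):

```python
import numpy as np
from scipy.signal import place_poles

# Augment a hypothetical plant with an error integrator sigma_dot = y - r,
# place the augmented poles, and verify that a unit step is tracked exactly.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

Aa = np.block([[A, np.zeros((2, 1))],
               [C, np.zeros((1, 1))]])     # augmented system matrix
Ba = np.vstack([B, [[0.0]]])
K = place_poles(Aa, Ba, [-3.0, -4.0, -5.0]).gain_matrix   # u = -K [x; sigma]

Acl = Aa - Ba @ K
Br = np.array([[0.0], [0.0], [-1.0]])      # r enters via sigma_dot = C x - r
z_ss = -np.linalg.solve(Acl, Br)           # steady state for unit step r = 1
y_ss = float(C @ z_ss[:2])
print(y_ss)                                # 1.0, so zero steady-state error
```

At steady state the integrator row forces \(Cx_{\rm ss} = r\), which is why the tracking error vanishes regardless of the plant's DC gain.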

That would be all. A related post and tutorial explains how to compute a Linear Quadratic Regulator (LQR) optimal controller.


## Robust Stability and Pole Placement

An Application of Parametric Interval Analysis

- Published: 07 September 2021
- Volume 32 , pages 1498–1509, ( 2021 )

Heloise Assis Fazzolari (ORCID: orcid.org/0000-0003-4113-2598) and Paulo Augusto Valente Ferreira

In this paper, we propose an integration of classic and parametric interval analysis methods for addressing robust stability and robust pole placement problems associated with linear dynamic systems with interval parameters. In order to reduce the conservatism of classic interval analysis and synthesis methods due to the parameter dependency phenomenon, we adopt a less conservative approach that explicitly considers multi-incident interval parameters. The paper includes numerical experiments which illustrate the characteristics and properties of the proposed methods, as well as their application to the control of an interval gyroscope system.



## Acknowledgements

This work was sponsored by the National Council for Scientific and Technological Development (CNPq), Brazil, Grants 159829/2013-5 and 307926/2009-5.

## Author information

Heloise Assis Fazzolari

Present address: Engineering, Modeling and Applied Social Sciences Center - CECS, Federal University of ABC - UFABC, Av. dos Estados, 5001, Santo André- SP, Brazil

## Authors and Affiliations

School of Electrical and Computer Engineering, University of Campinas, Campinas, 13083-852, Brazil

Heloise Assis Fazzolari & Paulo Augusto Valente Ferreira


## Corresponding author

Correspondence to Heloise Assis Fazzolari.

## Ethics declarations

Conflict of interest.

The authors declare that they have no conflict of interest.

## Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


## About this article

Fazzolari, H.A., Ferreira, P.A.V. Robust Stability and Pole Placement. J Control Autom Electr Syst 32 , 1498–1509 (2021). https://doi.org/10.1007/s40313-021-00798-7


Received : 21 October 2020

Revised : 23 June 2021

Accepted : 06 August 2021

Published : 07 September 2021

Issue Date : December 2021



- Control theory
- Interval systems
- Robust control
- Parametric interval analysis


## COMMENTS

Pole placement is a method of calculating the optimum gain matrix used to assign closed-loop poles to specified locations, thereby ensuring system stability. Closed-loop pole locations have a direct impact on time response characteristics such as rise time, settling time, and transient oscillations. For more information, see Pole Placement.

Full state feedback (FSF), or pole placement, is a method employed in feedback control system theory to place the closed-loop poles of a plant in pre-determined locations in the s-plane. [1]
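The feedback law behind this definition is \(u = r - Kx\), so the closed-loop state matrix becomes \(A - BK\). A minimal numeric sketch in pure Python; the matrices and gain values are illustrative, chosen to match the second-order system used in the Octave example elsewhere on this page:

```python
# Forming the closed-loop matrix A_cl = A - B*K for full state feedback.

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

def matsub(X, Y):
    """Element-wise difference of two equally sized matrices."""
    return [[a - b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

A = [[0.0, 1.0],
     [-0.4, -0.3]]   # open-loop state matrix
B = [[0.0],
     [1.0]]          # input matrix (single input)
K = [[5.85, 4.70]]   # feedback gain row vector

A_cl = matsub(A, matmul(B, K))   # closed-loop state matrix A - B*K
print(A_cl)                      # last row ~ [-6.25, -5.0]
```

With these gains the closed-loop characteristic polynomial is \(s^2 + 5s + 6.25 = (s + 2.5)^2\), i.e., both poles sit at \(-2.5\).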

Closed-loop pole locations have a direct impact on time response characteristics such as rise time, settling time, and transient oscillations. Root locus uses compensator gains to move closed-loop poles to achieve design specifications for SISO systems. You can, however, use state-space techniques to assign closed-loop poles directly.

We can select \(n + m\) desired closed-loop pole locations, leading to a desired characteristic polynomial \(\Delta_d(s) = d_{n+m}s^{n+m} + d_{n+m-1}s^{n+m-1} + \cdots + d_0\), and use our free parameters (the \(l_i\) and \(p_i\)) to satisfy \(\Delta(s) = \Delta_d(s)\). In the next section we will show when this problem has a solution and provide an algorithm for finding the required controller.
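The expansion of a chosen pole set into the coefficients of a desired characteristic polynomial can be sketched in a few lines of Python; the pole locations below are illustrative, not taken from the quoted notes:

```python
# Expand prod_i (s - r_i) into coefficients [1, c1, ..., cn]
# in descending powers of s.

def poly_from_roots(roots):
    coeffs = [1.0]
    for r in roots:
        coeffs = coeffs + [0.0]             # multiply the polynomial by s
        for i in range(len(coeffs) - 1, 0, -1):
            coeffs[i] -= r * coeffs[i - 1]  # subtract r * (previous poly)
    return coeffs

# A complex-conjugate pair at -2 +/- 2i gives s^2 + 4s + 8
d = poly_from_roots([-2 + 2j, -2 - 2j])
print(d)   # coefficients of s^2 + 4s + 8 (imaginary parts cancel)
```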

Find the closed-loop dynamics: \(\dot{x}(t) = Ax(t) + B(r - Kx(t)) = (A - BK)x(t) + Br = A_{cl}x(t) + Br\), with \(y(t) = Cx(t)\). Objective: pick \(K\) so that \(A_{cl}\) has the desired properties; e.g., if \(A\) is unstable, we want \(A_{cl}\) stable, say with two poles placed at \(-2 \pm 2i\).
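As a concrete sketch of that objective, here is a pure-Python check that a gain placing two poles at \(-2 \pm 2i\) does what it claims; the double-integrator \(A\) and \(B\) are assumed for illustration and are not from the quoted lecture:

```python
import cmath

A = [[0.0, 1.0],
     [0.0, 0.0]]     # double integrator (illustrative plant)
B = [[0.0],
     [1.0]]

# Desired polynomial (s + 2 - 2i)(s + 2 + 2i) = s^2 + 4s + 8; since A is
# in controllable canonical form, matching coefficients gives K = [8, 4].
K = [8.0, 4.0]

# Closed-loop matrix A_cl = A - B*K
A_cl = [[A[i][j] - B[i][0] * K[j] for j in range(2)] for i in range(2)]

# Eigenvalues of a 2x2 matrix from its trace and determinant
tr = A_cl[0][0] + A_cl[1][1]
det = A_cl[0][0] * A_cl[1][1] - A_cl[0][1] * A_cl[1][0]
disc = cmath.sqrt(tr * tr - 4 * det)
eigs = [(tr + disc) / 2, (tr - disc) / 2]
print(eigs)   # the placed poles, -2 +/- 2i
```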

The main role of state feedback control is to stabilize a given system so that all closed-loop eigenvalues are placed in the left half of the complex plane. The following theorem gives a condition under which it is possible to place the system poles in the desired locations. Theorem 8.1: assuming that the pair \((A, B)\) is controllable, there exists a state feedback gain that assigns the closed-loop eigenvalues to any desired self-conjugate set of locations.
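The condition in question is controllability of the pair \((A, B)\): the poles can be placed arbitrarily iff the controllability matrix \([B \; AB \; \cdots \; A^{n-1}B]\) has full rank. A small sketch for a two-state, single-input system (the numeric values are illustrative):

```python
A = [[0.0, 1.0],
     [-0.4, -0.3]]
B = [0.0, 1.0]    # single input column, stored flat

# Controllability matrix [B, A*B] for n = 2
AB = [sum(A[i][k] * B[k] for k in range(2)) for i in range(2)]
ctrb = [[B[0], AB[0]],
        [B[1], AB[1]]]

# Full rank for a 2x2 matrix <=> nonzero determinant
det = ctrb[0][0] * ctrb[1][1] - ctrb[0][1] * ctrb[1][0]
controllable = abs(det) > 1e-12
print(controllable)   # True: the poles of this pair can be placed anywhere
```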

- How to choose the closed-loop poles: the million-dollar question
- How do the closed-loop poles influence performance?
- How do the closed-loop poles influence robustness?
- A bit of history: Mats Lilja's PhD thesis, TFRT 1031 (1989); insight into model reduction
- Pole placement design: introduction, simple examples, polynomial design, state-space design

This lecture describes a method to tune PID controllers using pole placement. For second-order systems, the approach is to use PID control and select the gains to place the three closed-loop poles at desired locations. A PI controller (without the D term) should be used if the plant has sufficient damping.
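For a plant \(G(s) = 1/(s^2 + a_1 s + a_0)\), closing the loop with \(C(s) = k_p + k_i/s + k_d s\) gives the characteristic equation \(s^3 + (a_1 + k_d)s^2 + (a_0 + k_p)s + k_i = 0\), so the three gains can be read straight off a desired cubic. A sketch with assumed plant coefficients and pole locations, not taken from the quoted lecture:

```python
# Plant: G(s) = 1 / (s^2 + a1*s + a0)  (illustrative coefficients)
a1, a0 = 0.3, 0.4

# Desired closed-loop poles at -5 and -2 +/- 2i:
# (s + 5)(s^2 + 4s + 8) = s^3 + 9 s^2 + 28 s + 40
d2, d1, d0 = 9.0, 28.0, 40.0

# Match coefficients of s^3 + (a1 + kd) s^2 + (a0 + kp) s + ki
kd = d2 - a1
kp = d1 - a0
ki = d0
print(kd, kp, ki)   # approx 8.7, 27.6, 40.0
```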


We can use MATLAB/Octave to find the gain matrix K that places both poles of the closed-loop system at -2.5:

```matlab
pkg load control               % Octave only: load the control package
A = [0 1; -0.4/1 -0.3/1];
B = [0; 1/1];
K = place(A, B, [-2.5 -2.5])   % note: some versions of place reject repeated
                               % poles; acker(A, B, [-2.5 -2.5]) also works here
```

which gives the output:

```
K =

   5.8500   4.7000
```

So our gain matrix is K = [5.85 4.70], and that's how you design a state-space controller with pole placement.

The first step in pole placement is the selection of the pole locations of the closed-loop system. It is always useful to keep in mind that the control effort required is related to how far the open-loop poles are moved by the feedback. Furthermore, when a zero is near a pole, the system may be nearly uncontrollable, and moving such a pole demands a large control effort.

Control Design Using Pole Placement. Let's build a controller for this system using a pole placement approach. The schematic of a full-state feedback system is shown below. The stability and time-domain performance of the closed-loop feedback system are determined primarily by the location of the eigenvalues of the matrix \(A - BK\).

Closed-loop (feedback) control is more robust: by measuring the output we can detect that the output is not converging to the input (due to bad modeling or disturbances), and the controller changes the control signal to improve the system behavior. It is, however, more complicated and expensive.

Successful controller emulation requires a sampling rate at least ten times the frequency of the dominant closed-loop poles of the system. In the following we illustrate the emulation of a pole placement controller, designed for the DC motor model (Example 8.3.4), for controlling the discrete-time model of the DC motor.
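Applied to dominant closed-loop poles at, say, \(-2 \pm 2i\) (an assumed example, not the DC motor design itself), the ten-times rule of thumb translates into a sample period like this:

```python
import math

pole = complex(-2.0, 2.0)          # assumed dominant closed-loop pole
wn = abs(pole)                     # natural frequency, rad/s (~2.83)
f_dominant = wn / (2 * math.pi)    # dominant frequency in Hz (~0.45)
f_sample = 10 * f_dominant         # rule of thumb: sample >= 10x faster
T_max = 1 / f_sample               # largest acceptable sample period, s
print(round(T_max, 3))             # about 0.222 s
```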

For LTI systems you can design the observer and the state feedback separately, due to certainty equivalence, so the observer does not have to be faster than the state feedback in order to ensure stability. Often a system will have input constraints (such as saturation), which will indirectly place constraints on \(K\).

Pole placement means assigning the closed-loop poles, i.e., the eigenvalues of \(A - BK\). Now we will see that this is particularly straightforward if the system model \((A, B, C, D)\) is in controllable canonical form (CCF), i.e., with

\[ A = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 0 & 1 \\ -a_n & -a_{n-1} & -a_{n-2} & \cdots & -a_2 & -a_1 \end{pmatrix}, \qquad B = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{pmatrix}. \]

Claim: if the open-loop characteristic polynomial is \(s^n + a_1 s^{n-1} + \cdots + a_n\) and the desired one is \(s^n + d_1 s^{n-1} + \cdots + d_n\), then the gain \(K = (d_n - a_n, \; d_{n-1} - a_{n-1}, \; \ldots, \; d_1 - a_1)\) assigns the desired closed-loop eigenvalues.

Pole Placement in Controller Form. The pole placement design is facilitated if the system model is in the controller form (Section 8.3.1). In the controller form structure, the coefficients of the characteristic polynomial appear in reverse order in the last row of the \(A\) matrix.
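In that structure the gain computation reduces to subtracting polynomial coefficients. A pure-Python sketch using the same second-order system as the Octave example on this page (open loop \(s^2 + 0.3s + 0.4\), both poles moved to \(-2.5\)):

```python
# Controller-form pole placement: with the characteristic coefficients in
# the last row of A, the feedback gains are differences of coefficients.
a = [0.3, 0.4]     # open-loop polynomial   s^2 + a[0] s + a[1]
d = [5.0, 6.25]    # desired polynomial (s + 2.5)^2 = s^2 + 5 s + 6.25

# Gains ordered to match the state vector: K = [d_n - a_n, ..., d_1 - a_1]
K = [d[1] - a[1], d[0] - a[0]]
print(K)   # approx [5.85, 4.7], matching the place() result on this page
```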

An assisted pole placement method has been proposed to help the designers in their choice of the design parameters. Once the desired rise time, settling time and percentage of overshoot of the step response of the closed-loop system with respect to a reference change have been specified, the computation of the controller is performed automatically.

The classical pole placement method is used to stabilize the system or for improving the transient response. This method finds a state feedback control matrix that assigns the poles of the closed-loop system to desired locations specified by the user.

The pole placement (PP) technique for design of a linear state feedback control system requires specification of all the closed-loop pole locations even though only a few poles dominate the system's transient response characteristics. The linear quadratic regulator (LQR) method, on the other hand, optimizes the system transient response and does not directly impose the location of the dominant poles.

In robust pole placement (Soh et al. 1987), we focus on placing closed-loop poles in a prescribed region of the left-hand complex plane within the same uncertainty context.

A short history of control, the engineering of continuously operating dynamical systems: Minorsky worked on automatic (PID) controllers for steering ships; frequency response methods (Norbert Wiener, Cybernetics) made it possible to design linear closed-loop systems; and state-space methods (Rudolf Kalman, Apollo) led to linear model-based (MIMO) optimal, stochastic, and adaptive control, i.e., modern control theory.