
Linearization in Control

Linearisation for a Function with Two Variables

posted Nov 5, 2014, 9:14 PM by Javad Taghia   [ updated Nov 5, 2014, 9:15 PM ]

For a function of one variable:

Linearisation of f(x) at x = a is L(x) = f(a) + f'(a) (x - a)

Linear Approximations
Let f be a function of two variables x and y defined in a neighbourhood of (a, b). The linear function
L(x, y) = f(a, b) + fx(a, b)(x − a) + fy(a, b)(y − b)
is called the linearisation of f at (a, b), and the approximation
f(x, y) ≈ f(a, b) + fx(a, b)(x − a) + fy(a, b)(y − b)
is called the linear approximation of f at (a, b).
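The two-variable linearisation above can be checked numerically. The following is a minimal sketch in Python (not Matlab), where the partial derivatives fx and fy are estimated by central differences; the function f(x, y) = x·e^y and the point (1, 0) are hypothetical choices for illustration.

```python
import math

def linearize_2var(f, a, b, h=1e-6):
    """Return L(x, y) = f(a,b) + fx(a,b)(x-a) + fy(a,b)(y-b),
    with the partials estimated by central differences."""
    fx = (f(a + h, b) - f(a - h, b)) / (2 * h)
    fy = (f(a, b + h) - f(a, b - h)) / (2 * h)
    f0 = f(a, b)
    return lambda x, y: f0 + fx * (x - a) + fy * (y - b)

# Example: f(x, y) = x * e^y, linearised at (a, b) = (1, 0).
f = lambda x, y: x * math.exp(y)
L = linearize_2var(f, 1.0, 0.0)

# Near (1, 0) the linearisation tracks f closely.
print(f(1.1, 0.05), L(1.1, 0.05))
```

At the point (1, 0) itself, L and f agree exactly; the error grows as you move away from the linearisation point.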


posted Oct 31, 2014, 1:20 AM by Javad Taghia   [ updated Nov 5, 2014, 8:22 PM ]

YouTube video: Linearization Basics

posted Mar 14, 2011, 6:29 AM by Javad Taghia   [ updated Apr 24, 2011, 12:37 AM ]

 What Is Linearization and Why?

In this short article I am going to introduce some useful techniques for linearization.
First, it is important to know why we need linearization. There are different scenarios that make us use this technique in control.
It is worth noting that linearization is not specific to control engineering. As you can read on the wiki page http://en.wikipedia.org/wiki/Linearization, the definition is more general:
"In mathematics and its applications, linearization refers to finding the linear approximation to a function at a given point. In the study of dynamical systems, linearization is a method for assessing the local stability of an equilibrium point of a system of nonlinear differential equations or discrete dynamical systems. This method is used in fields such as engineering, physics, economics, and ecology."

In this short article, I am going to focus on the usage of linearization in control engineering. 

As you know, the aim of control engineering is controlling dynamic systems to obtain particular responses or behavior.
What is a dynamic system? A dynamic system is a system whose states change over time (for more information, see http://en.wikipedia.org/wiki/Dynamical_system).
What is the state of a system? Every dynamic system that is the subject of control engineering has states. States are the variable parameters that determine the response or behavior of the system; they depend on time. In control it is essential to identify the states of the system and write the relationships between them as a set of equations. These state equations are the formulas for analyzing and understanding the behavior of the system.
Systems range from very simple to very complicated, so the resulting equations may be easy to obtain for simple systems and unobtainable for very complicated dynamics.
When the equations are obtainable, there are two possibilities. The first is when the relationship between the states of the system is linear. In that case you can easily write the state space equations and describe the system in state space form, as shown below in EQ.1.

\dot{\mathbf{x}}(t) = A(t) \mathbf{x}(t) + B(t) \mathbf{u}(t)
\mathbf{y}(t) = C(t) \mathbf{x}(t) + D(t) \mathbf{u}(t)


\mathbf{x}(\cdot) is called the "state vector",  \mathbf{x}(t) \in \mathbb{R}^n;

\mathbf{y}(\cdot) is called the "output vector",  \mathbf{y}(t) \in \mathbb{R}^q;

\mathbf{u}(\cdot) is called the "input (or control) vector",  \mathbf{u}(t) \in \mathbb{R}^p;

A(\cdot) is the "state matrix",  \operatorname{dim}[A(\cdot)] = n \times n,

B(\cdot) is the "input matrix",  \operatorname{dim}[B(\cdot)] = n \times p,

C(\cdot) is the "output matrix",  \operatorname{dim}[C(\cdot)] = q \times n,

D(\cdot) is the "feedthrough (or feedforward) matrix" (in cases where the system model does not have a direct feedthrough, D(\cdot) is the zero matrix),  \operatorname{dim}[D(\cdot)] = q \times p,

\dot{\mathbf{x}}(t) := \frac{\operatorname{d}}{\operatorname{d}t} \mathbf{x}(t)

Taken from http://en.wikipedia.org/wiki/State_space_(controls)
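To make the state space form EQ.1 concrete, here is a minimal Python sketch (not Matlab) that simulates a single-state LTI system with forward-Euler integration; the first-order lag x' = −x + u and the step input are hypothetical choices for illustration.

```python
def simulate_lti(A, B, C, D, x0, u, dt, steps):
    """Forward-Euler simulation of x' = A x + B u, y = C x + D u
    for a single-state, single-input system (plain floats)."""
    x = x0
    ys = []
    for _ in range(steps):
        ys.append(C * x + D * u)      # output y(t) = C x + D u
        x = x + dt * (A * x + B * u)  # Euler step for x' = A x + B u
    return ys

# First-order lag x' = -x + u with a unit step input:
ys = simulate_lti(A=-1.0, B=1.0, C=1.0, D=0.0, x0=0.0, u=1.0, dt=0.01, steps=1000)
print(ys[-1])  # approaches the steady-state value 1
```

For vector-valued states the same loop works with matrix products in place of the scalar multiplications.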


As you can see, \mathbf{x}(t) is the vector of states of our system; the states of the system can be summarized in this vector. The matrices A, B, C, D define the relationships between the state vector, the input vector, and the output vector.

When do we call a system linear? A system is linear when its description is based on linear operations between its signals.
We can define linearity as follows (EQ.2): given two arbitrary inputs

x_1(t)
x_2(t)

as well as their respective outputs

y_1(t) = H \left \{ x_1(t) \right \}
y_2(t) = H \left \{ x_2(t) \right \}

then a linear system must satisfy

\alpha y_1(t) + \beta y_2(t) = H \left \{ \alpha x_1(t) + \beta x_2(t) \right \}

Taken from http://en.wikipedia.org/wiki/Linear_system
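The superposition condition in EQ.2 can be tested numerically. The sketch below (Python, hypothetical example systems) treats a system H as a map from an input signal to an output signal and checks α·y1 + β·y2 = H{α·x1 + β·x2} at a few sample times; a pure gain passes, a squarer fails.

```python
def is_linear(H, x1, x2, t_samples, alpha=2.0, beta=-3.0, tol=1e-9):
    """Numerically check superposition: H{a*x1 + b*x2} == a*H{x1} + b*H{x2}
    at the given sample times. H maps a signal (function of t) to a signal."""
    combo = H(lambda t: alpha * x1(t) + beta * x2(t))
    parts = lambda t: alpha * H(x1)(t) + beta * H(x2)(t)
    return all(abs(combo(t) - parts(t)) < tol for t in t_samples)

gain = lambda x: (lambda t: 5.0 * x(t))   # linear: a pure gain
square = lambda x: (lambda t: x(t) ** 2)  # nonlinear: a squarer

ts = [0.0, 0.5, 1.0]
print(is_linear(gain, lambda t: t, lambda t: 1.0, ts))    # True
print(is_linear(square, lambda t: t, lambda t: 1.0, ts))  # False
```

Of course, passing such a spot check does not prove linearity; failing it does prove nonlinearity.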
So when we can write the relationships between the states in the state space form of EQ.1, we are dealing with a linear system.

Moreover, if A(t) is a matrix of coefficients that are constant in time, the system is called an LTI (linear time invariant) system; in this case A(t) = A. If the states have a linear relationship but some coefficients vary in time, the system is called a linear time-variant (LTV) system.
If we cannot obtain state space equations in the form of EQ.1, then our system is nonlinear. Nonlinear systems fall into two major categories: nonlinear time-invariant systems and nonlinear time-variant systems.

I would like to explain linear time-variant systems a little more, because it helps us understand how they differ from linear time-invariant systems.
Time-variant systems can be divided into two main cases.
The first is when the change in the time-dependent parameters of the system is not significant relative to the time constant of our control loop. For example, aging and wear can change parameters over time, but these changes are not important for the control design over a short period; after a long time, re-analysis and re-tuning of the system become necessary. Time-variant systems of this kind can be treated as time-invariant in the control design procedure. By contrast, the wiki page lists time-variant systems that cannot be treated this way:

"The following time varying systems cannot be modelled by assuming that they are time invariant:

  • Aircraft – Time variant characteristics are caused by different configuration of control surfaces during take off, cruise and landing as well as constantly decreasing weight due to consumption of fuel.
  • The Earth's thermodynamic response to incoming solar radiation varies with time due to changes in the Earth's albedo and the presence of greenhouse gases in the atmosphere.
  • The human vocal tract is a time variant system, with its transfer function at any given time dependent on the shape of the vocal organs. As with any fluid-filled tube, resonances (called formants) change as the vocal organs such as the tongue and velum move. Mathematical models of the vocal tract are therefore time-variant, with transfer functions often linearly interpolated between states over time.
  • Linear time varying processes such as amplitude modulation occur on a time scale similar to or faster than that of the input signal. In practice amplitude modulation is often implemented using time invariant nonlinear elements such as diodes.
  • The Discrete Wavelet Transform, often used in modern signal processing, is time variant because it makes use of the decimation operation."

The second case is systems whose parameters change rapidly over time. In behavior, these systems are more similar to nonlinear systems.

In general, many well-developed techniques exist for control design and response analysis of LTI systems. For continuous-time systems the Laplace and Fourier transforms apply, and for discrete-time systems the Z-transform and the discrete Fourier transform apply.
So, if we can treat a nonlinear system as an LTI system under some assumptions, we gain the advantage of the many powerful LTI control design techniques.

Thus, it is IMPORTANT to know how to simplify a nonlinear system to an LTI system so that these well-known general techniques can be used. Based on this introduction I think you also believe linearization is important for control engineering. In the following sections I am going to explain linearization in more detail and provide some examples as well.

 Linearization Approach:

Linearization is done around a point, usually an operating point. The operating point should be chosen carefully, because the resulting controller will correspond to the chosen operating point.

I think it is logical to divide the discussion into two main topics: methods that are easy to use by hand, and methods that rely on software such as Matlab.

 Linearization Approaches Based on Taylor Series.

We know that every sufficiently smooth function can be described by a Taylor series. The number of derivative terms kept is the main factor in reducing the error between the resulting series and the original function.
The general formula for the Taylor series, taken from the wiki page (http://en.wikipedia.org/wiki/Taylor_series), is shown in EQ.3.
As you know, the accuracy of the Taylor approximation is related to the order of the derivatives kept.

The Taylor series of a real or complex function ƒ(x) that is infinitely differentiable in a neighborhood of a real or complex number a is the power series

f(a)+\frac {f'(a)}{1!} (x-a)+ \frac{f''(a)}{2!} (x-a)^2+\frac{f^{(3)}(a)}{3!}(x-a)^3+ \cdots.

which can be written in the more compact sigma notation as

 \sum_{n=0} ^ {\infty} \frac {f^{(n)}(a)}{n!} \, (x-a)^{n}

where n! denotes the factorial of n and f^{(n)}(a) denotes the nth derivative of f evaluated at the point a. The zeroth derivative of f is defined to be f itself, and (x − a)^0 and 0! are both defined to be 1. In the case that a = 0, the series is also called a Maclaurin series.
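The effect of truncating the series at a given order can be seen directly. Below is a small Python sketch for f(x) = e^x about a = 0, a convenient choice because every derivative f^(n)(a) equals e^a; the error against math.exp shrinks as more terms are kept.

```python
import math

def taylor_exp(x, a=0.0, n_terms=8):
    """Partial sum of the Taylor series of e^x about a:
    sum over n of f^(n)(a)/n! * (x-a)^n, with f^(n)(a) = e^a for all n."""
    fa = math.exp(a)
    return sum(fa * (x - a) ** n / math.factorial(n) for n in range(n_terms))

# More terms -> smaller error against math.exp:
for n in (1, 2, 4, 8):
    print(n, abs(taylor_exp(1.0, n_terms=n) - math.exp(1.0)))
```

Keeping only the terms up to n = 1 gives exactly the linearization f(a) + f'(a)(x − a) used in the next section.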


In linearization, we use the Taylor series for approximation, but we keep only the first derivative term and no more, and 'a' is taken to be the operating point.
The general formula for linearization, taken from the wiki page http://en.wikipedia.org/wiki/Linearization, is shown below as EQ.4.

The equation for the linearization of a function f(x,y) at a point p(a,b) is:

 f(x,y) \approx f(a,b) + \left. {\frac{{\partial f(x,y)}}{{\partial x}}} \right|_{a,b} (x - a) + \left. {\frac{{\partial f(x,y)}}{{\partial y}}} \right|_{a,b} (y - b)

The general equation for the linearization of a multivariable function f(\mathbf{x}) at a point \mathbf{p} is:

f({\mathbf{x}}) \approx f({\mathbf{p}}) + \left. {\nabla f} \right|_{\mathbf{p}}  \cdot ({\mathbf{x}} - {\mathbf{p}})

where \mathbf{x} is the vector of variables, and \mathbf{p} is the linearization point of interest.
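The multivariable form f(p) + ∇f|_p · (x − p) can be sketched in Python for a scalar-valued f of n variables, with the gradient estimated component-wise by central differences; the quadratic f and the point p = (1, 2) are hypothetical choices for illustration.

```python
def linearize(f, p, h=1e-6):
    """f: R^n -> R. Return L(x) = f(p) + grad_f(p) . (x - p),
    with the gradient estimated by central differences."""
    n = len(p)
    fp = f(p)
    grad = []
    for i in range(n):
        hi = [p[j] + (h if j == i else 0.0) for j in range(n)]
        lo = [p[j] - (h if j == i else 0.0) for j in range(n)]
        grad.append((f(hi) - f(lo)) / (2 * h))
    return lambda x: fp + sum(g * (xi - pi) for g, xi, pi in zip(grad, x, p))

# f(x) = x0^2 + x0*x1, linearized at p = (1, 2); grad f(p) = (4, 1).
f = lambda x: x[0] ** 2 + x[0] * x[1]
L = linearize(f, [1.0, 2.0])
print(f([1.05, 2.1]), L([1.05, 2.1]))
```

With n = 2 this reduces exactly to the two-variable formula above EQ.4.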


Thus, linearization is done by applying the EQ.4 formula and calculating the partial derivatives.

The following picture, from the wiki page http://en.wikipedia.org/wiki/Linearization, shows how linearization works on a simple nonlinear curve: we obtain the tangent line at the point a, with slope f'(a).

I think it is more useful to see the performance of this formula in some examples. From them you can understand the different cases in which we use the linearization technique based on this formula.

 Some Examples

The first example is an easy one that is not explicitly related to control. In this example we use the delta symbol, which is useful in linearization: as you can see in the example, the resulting linear function is easy to understand with this symbol. We use this notation in the more general examples that follow.

In the second example we use a shortcut for calculating linearized differential equations.
The simple shortcut rule is:
For every differential term, just evaluate its coefficient at the linearization point.
For the other, non-differential terms, calculate the linear function using the linearization formula.
In the following example you can see how to apply this method.
I want to point out that you can also use the general linearization formula for the calculation; this shortcut approach is likewise based on EQ.4.
In the next example we are going to deal with state vectors in a nonlinear system.
Before going through these two examples, I would like to present a more general form of the linearization formula, following the book A First Course in Fuzzy and Neural Control by Hung T. Nguyen, Nadipuram R. Prasad, Carol L. Walker, and Ebert A. Walker.

We can rewrite EQ.4 as the following formula for the state vector, EQ.5:

\dot{\mathbf{x}}(t) = f(\mathbf{x}, \mathbf{u}) \approx f(\mathbf{x}_0, \mathbf{u}_0) + \left. {\frac{{\partial f}}{{\partial \mathbf{x}}}} \right|_{(\mathbf{x}_0, \mathbf{u}_0)} (\mathbf{x} - \mathbf{x}_0) + \left. {\frac{{\partial f}}{{\partial \mathbf{u}}}} \right|_{(\mathbf{x}_0, \mathbf{u}_0)} (\mathbf{u} - \mathbf{u}_0)
Now, we define the delta-type states as follows:

\Delta \mathbf{x} = \mathbf{x} - \mathbf{x}_0, \quad \Delta \mathbf{u} = \mathbf{u} - \mathbf{u}_0

So, we can rewrite EQ.5 in the form of the following equation, EQ.6:

\Delta \dot{\mathbf{x}} \approx \left. {\frac{{\partial f}}{{\partial \mathbf{x}}}} \right|_{(\mathbf{x}_0, \mathbf{u}_0)} \Delta \mathbf{x} + \left. {\frac{{\partial f}}{{\partial \mathbf{u}}}} \right|_{(\mathbf{x}_0, \mathbf{u}_0)} \Delta \mathbf{u}
Now we can write the new form of the linearized state space equations, EQ.7:

\Delta \dot{\mathbf{x}} = A \, \Delta \mathbf{x} + B \, \Delta \mathbf{u}
\Delta \mathbf{y} = C \, \Delta \mathbf{x} + D \, \Delta \mathbf{u}

where A = \left. {\frac{{\partial f}}{{\partial \mathbf{x}}}} \right|_{(\mathbf{x}_0, \mathbf{u}_0)} and B = \left. {\frac{{\partial f}}{{\partial \mathbf{u}}}} \right|_{(\mathbf{x}_0, \mathbf{u}_0)}.
There are two important points before using EQ.7:
- A and B are evaluated at the nominal operating point.
- In the general case this equation can lead to some time-varying elements. We can see this effect in the following example.
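Computing A and B at an operating point amounts to taking Jacobians of the state equations. As a minimal Python sketch (not Matlab), the block below linearizes a pendulum-like system x1' = x2, x2' = -(g/l) sin(x1) + u around the downward equilibrium; the value g/l = 9.81 and the single scalar input are hypothetical choices for illustration.

```python
import math

def jacobian(f, x0, u0, h=1e-6):
    """Numerical A = df/dx and B = df/du at the operating point (x0, u0),
    for f: (state list, scalar input) -> state-derivative list."""
    n = len(x0)
    A = [[0.0] * n for _ in range(n)]
    B = [0.0] * n
    for j in range(n):
        xp = list(x0); xp[j] += h
        xm = list(x0); xm[j] -= h
        fp, fm = f(xp, u0), f(xm, u0)
        for i in range(n):
            A[i][j] = (fp[i] - fm[i]) / (2 * h)  # central difference in x_j
    fp, fm = f(x0, u0 + h), f(x0, u0 - h)
    for i in range(n):
        B[i] = (fp[i] - fm[i]) / (2 * h)          # central difference in u
    return A, B

# Pendulum: x1' = x2, x2' = -(g/l) sin(x1) + u, linearized at the
# downward equilibrium x0 = (0, 0), u0 = 0, with g/l = 9.81:
f = lambda x, u: [x[1], -9.81 * math.sin(x[0]) + u]
A, B = jacobian(f, [0.0, 0.0], 0.0)
print(A)  # close to [[0, 1], [-9.81, 0]]
print(B)  # close to [0, 1]
```

Linearizing the same system around a non-equilibrium trajectory would make these matrices time-varying, which is exactly the effect noted in the second point above.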

The third example is from the book (A First Course in Fuzzy and Neural Control, by Hung T. Nguyen, Nadipuram R. Prasad, Carol L. Walker, and Ebert A. Walker). I add more explanation to the example, compared to the book, for better understanding.

Finally, I would like to present the last example, which is easier than the third one, for more practice.

As you can see over these four examples, we discussed different situations. I hope these simple examples get you started on more complicated linearization.

In the next section I am going to introduce some Matlab functions for linearizing a nonlinear system.

