## What Is Linearization and Why?

In this short article I am going to introduce some useful techniques for linearization. First, it is important to know why we need linearization. There are several scenarios that push us toward this technique in control. It is worth noting that linearization is not specific to control engineering; as you can read on the wiki page http://en.wikipedia.org/wiki/Linearization, the definition is more general:

"In mathematics and its applications, linearization refers to finding the linear approximation to a function at a given point. In the study of dynamical systems, linearization is a method for assessing the local stability of an equilibrium point of a system of nonlinear differential equations or discrete dynamical systems. This method is used in fields such as engineering, physics, economics, and ecology."

In this article I will focus on the use of linearization in control engineering. As you know, the aim of control engineering is to make dynamic systems exhibit a particular response or behavior. What is a dynamic system? A dynamic system is a system whose states change over time (for more, see http://en.wikipedia.org/wiki/Dynamical_system). What is a state? Every dynamic system that is the subject of control engineering has states: the time-dependent variables that determine the response or behavior of the system. In control it is essential to identify the states of our system and write their relationships as a set of equations. These state equations are the formulas we use for analyzing and understanding the behavior of the system. Systems range from very simple to very complicated, so the equations may be easy to obtain for simple systems and unobtainable for very complicated dynamics. When the equations are obtainable, two possibilities arise.
The first possibility is that the relationships between the states of the system are linear. In that case you can write the state space equations directly and describe the system in the standard state space form shown below as EQ.1:

x'(t) = A(t) x(t) + B(t) u(t)
y(t)  = C(t) x(t) + D(t) u(t)     (EQ.1)

where x(t) is called the "state vector", y(t) is called the "output vector", u(t) is called the "input (or control) vector", A(t) is the "state matrix", B(t) is the "input matrix", C(t) is the "output matrix", and D(t) is the "feedthrough (or feedforward) matrix" (in cases where the system model does not have a direct feedthrough, D is the zero matrix). Taken from http://en.wikipedia.org/wiki/State_space_(controls).
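To make EQ.1 concrete, here is a minimal sketch of the state-space matrices for a hypothetical mass-spring-damper, m·x'' + c·x' + k·x = u, with illustrative values m = 1, c = 0.5, k = 2 (the system and its numbers are my own example, not from the source):

```python
# States: x1 = position, x2 = velocity; output y = position.
m, c, k = 1.0, 0.5, 2.0

A = [[0.0, 1.0],
     [-k / m, -c / m]]   # state matrix
B = [[0.0],
     [1.0 / m]]          # input matrix
C = [[1.0, 0.0]]         # output matrix (we measure position only)
D = [[0.0]]              # no direct feedthrough

def matvec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in M]

def derivatives(x, u):
    """Evaluate x' = A x + B u of EQ.1 for a scalar input u."""
    Ax = matvec(A, x)
    Bu = [row[0] * u for row in B]
    return [a + b for a, b in zip(Ax, Bu)]

x = [1.0, 0.0]           # unit displacement, at rest
print(derivatives(x, 0.0))   # [0.0, -2.0]: the spring pulls the mass back
```

Because both equations of EQ.1 are built only from matrix-vector products, this system is linear by construction.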
As you can see, x(t) is the vector of states of our system: the states of the system can be summarized in this vector. The matrices A, B, C, D are responsible for defining the relationships between the state vector, the input vector, the output vector, and so forth. When do we call a system linear? A system is linear when its description is based on linear operations between the states. A linear operation (map) f is one that satisfies the superposition property, shown as EQ.2:

f(a x1 + b x2) = a f(x1) + b f(x2)     (EQ.2)

for all inputs x1, x2 and all scalars a, b. Taken from http://en.wikipedia.org/wiki/Linear_system.
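The superposition property of EQ.2 can be checked numerically. The two maps below are my own illustrative examples: one doubles its input (linear), one squares it (nonlinear):

```python
def f_lin(x):
    return 2.0 * x       # linear map

def f_nonlin(x):
    return x * x         # nonlinear map

def satisfies_superposition(f, x1, x2, a, b, tol=1e-12):
    """Check EQ.2: f(a*x1 + b*x2) == a*f(x1) + b*f(x2)."""
    lhs = f(a * x1 + b * x2)
    rhs = a * f(x1) + b * f(x2)
    return abs(lhs - rhs) < tol

print(satisfies_superposition(f_lin, 1.0, 3.0, 2.0, -1.0))     # True
print(satisfies_superposition(f_nonlin, 1.0, 3.0, 2.0, -1.0))  # False
```

A single pair of test points is of course only evidence, not a proof; EQ.2 must hold for all inputs and scalars.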
So when we can write the relationships between the states in the state space form of EQ.1, we are dealing with a linear system. Moreover, if the coefficient matrices are constant in time (A(t) = A, and similarly for B, C, D), the system is called an LTI (linear time-invariant) system. If the states have linear relationships but some coefficients vary in time, the system is called a linear time-variant (LTV) system. If we cannot obtain state space equations in the form of EQ.1, our system is nonlinear. Nonlinear systems likewise fall into two major categories: nonlinear time-invariant and nonlinear time-variant systems.

I would like to explain time-variant systems a little more, because it clarifies the difference from linear time-invariant systems. Time-variant systems can be divided into two main cases. In the first case, the change in the time-dependent parameters of our system is not significant relative to the time constants of our control loop. For example, aging and wear can change parameters over time, but these changes do not matter for the control design over a short period; only after a long time is re-analysis and re-tuning of the system necessary. This kind of time-variant system can be treated as time-invariant in the control design procedure. By contrast, the wiki page http://en.wikipedia.org/wiki/Time-variant_system gives examples of time-variant systems for which this simplification is not valid:

"The following time varying systems cannot be modelled by assuming that they are time invariant:

- Aircraft – Time variant characteristics are caused by different configuration of control surfaces during take off, cruise and landing as well as constantly decreasing weight due to consumption of fuel.
- The Earth's thermodynamic response to incoming solar radiation varies with time due to changes in the Earth's albedo and the presence of greenhouse gasses in the atmosphere.
- The human vocal tract is a time variant system, with its transfer function at any given time dependent on the shape of the vocal organs. As with any fluid-filled tube, resonances (called formants) change as the vocal organs such as the tongue and velum move. Mathematical models of the vocal tract are therefore time-variant, with transfer functions often linearly interpolated between states over time.
- Linear time varying processes such as amplitude modulation occur on a time scale similar to or faster than that of the input signal. In practice amplitude modulation is often implemented using time invariant nonlinear elements such as diodes.
- The Discrete Wavelet Transform, often used in modern signal processing, is time variant because it makes use of the decimation operation."
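The time-invariance property behind this classification can be probed numerically: starting the same free response of x' = a(t)·x at two different times is a pure time shift, and for a time-invariant coefficient the two decays coincide. The coefficients below are my own illustrative values:

```python
def simulate(a_of_t, x0, t_start, duration, dt=1e-3):
    """Forward-Euler integration of x' = a(t) * x from t_start."""
    x, t = x0, t_start
    for _ in range(int(duration / dt)):
        x += dt * a_of_t(t) * x
        t += dt
    return x

const_a = lambda t: -0.5              # LTI: coefficient fixed in time
varying_a = lambda t: -0.5 - 0.1 * t  # LTV: coefficient drifts with time

lti_now   = simulate(const_a, 1.0, 0.0, 2.0)
lti_later = simulate(const_a, 1.0, 5.0, 2.0)
ltv_now   = simulate(varying_a, 1.0, 0.0, 2.0)
ltv_later = simulate(varying_a, 1.0, 5.0, 2.0)

print(abs(lti_now - lti_later) < 1e-9)   # True: the start time does not matter
print(abs(ltv_now - ltv_later) > 1e-3)   # True: the LTV response depends on when it starts
```

This is exactly why the first category above can be treated as LTI: when a(t) drifts slowly relative to the experiment, the two runs would still be nearly identical.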
The second case is systems whose parameters change rapidly in time. Such systems behave more like nonlinear systems. In general, many well-developed techniques exist for designing controllers for, and analyzing the response of, LTI systems: the Laplace and Fourier transforms for continuous-time systems, and the Z-transform and discrete Fourier transform for discrete-time systems. So if, under some assumptions, we can treat a nonlinear system as an LTI system, we gain the advantage of this powerful LTI design toolbox. Thus it is IMPORTANT to know how to simplify a nonlinear system to an LTI system so that these well-known, general techniques can be used. Based on this introduction, I think you will agree that linearization is important for control engineering. In the following sections I explain linearization in more detail and provide some examples as well.

## Linearization Approaches Based on Taylor Series

The general formula for the Taylor series, taken from the wiki page (http://en.wikipedia.org/wiki/Taylor_series), is shown in EQ.3. As you know, the accuracy of a truncated Taylor approximation depends on the order at which the differentiation is stopped.
f(x) = f(a) + f'(a)(x - a) + f''(a)/2! (x - a)^2 + f'''(a)/3! (x - a)^3 + ...     (EQ.3)

The general formula for linearization, taken from the wiki page http://en.wikipedia.org/wiki/Linearization, is simply the Taylor series truncated after the first-order term, shown below as EQ.4:

y = f(a) + f'(a)(x - a)     (EQ.4)
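Here is a quick numeric check of EQ.4 on f(x) = sqrt(x), linearized at a = 4 (an illustrative choice of my own): L(x) = f(4) + f'(4)·(x - 4) = 2 + (x - 4)/4.

```python
import math

def f(x):
    return math.sqrt(x)

def L(x, a=4.0):
    """Tangent-line approximation of sqrt at x = a (EQ.4)."""
    f_a = math.sqrt(a)
    fprime_a = 1.0 / (2.0 * math.sqrt(a))   # d/dx sqrt(x) = 1/(2*sqrt(x))
    return f_a + fprime_a * (x - a)

print(f(4.1), L(4.1))          # ~2.02485 vs 2.025: close near a
print(abs(f(4.1) - L(4.1)))    # small error near the linearization point
print(abs(f(9.0) - L(9.0)))    # much larger error far from it
```

The growing error away from a illustrates why linearization is a local approximation: the model is only trusted near the operating point.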
Thus the linearization is done by applying the EQ.4 formula and calculating the partial derivatives. The picture on the wiki page http://en.wikipedia.org/wiki/Linearization shows how linearization works on a simple nonlinear curve: at the chosen point we obtain the tangent line with slope f'(x). I think it is more useful to see the performance of this formula in some examples, so you can understand the different cases in which we use this linearization technique.

## Some Examples

In the second example we use a shortcut for calculating the linearized differential equations. The simple shortcut rule is: for every differential term, just evaluate its coefficient at the linearization point; for every non-differential term, compute the linear approximation using the linearization formula. In the following example you can see how to apply this method. Note that you can also use the general linearization formula for the calculation; this shortcut is itself based on EQ.4.

Before going through these two examples, I would like to present a more general form of the linearization formula. It is taken from the book A First Course in Fuzzy and Neural Control by Hung T. Nguyen, Nadipuram R. Prasad, Carol L. Walker, and Elbert A. Walker (http://www.amazon.com/First-Course-Fuzzy-Neural-Control/dp/1584882441). For a nonlinear state equation x' = f(x, u), we can rewrite EQ.4 around a nominal operating point (x0, u0) as EQ.5:

x' = f(x, u) ≈ f(x0, u0) + (∂f/∂x)|(x0,u0) (x - x0) + (∂f/∂u)|(x0,u0) (u - u0)     (EQ.5)

Now we define delta-type (deviation) states:

δx = x - x0,   δu = u - u0

Since x0' = f(x0, u0) along the nominal solution, subtracting it from EQ.5 lets us rewrite EQ.5 in terms of the deviations, as EQ.6:

δx' ≈ (∂f/∂x)|(x0,u0) δx + (∂f/∂u)|(x0,u0) δu     (EQ.6)

Now we can write the new form of the linearized state space equation as EQ.7:

δx' = A* δx + B* δu,   A* = (∂f/∂x)|(x0,u0),   B* = (∂f/∂u)|(x0,u0)     (EQ.7)

There are two important points before using EQ.7:

- A* and B* are evaluated at the nominal operating point.
- In the general case this equation can contain time-varying elements. We can see this effect in the following example.
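As a sketch of how A* and B* in EQ.7 are obtained in practice, the snippet below linearizes a hypothetical pendulum (x1' = x2, x2' = -(g/l)·sin(x1) + u; my own example, not one from the book) at the nominal point x0 = (0, 0), u0 = 0, estimating the partial derivatives with central finite differences:

```python
import math

g, l = 9.81, 1.0

def f(x, u):
    """Nonlinear pendulum dynamics: returns [x1', x2']."""
    return [x[1], -(g / l) * math.sin(x[0]) + u]

def jacobians(f, x0, u0, eps=1e-6):
    """Estimate A* = df/dx and B* = df/du at (x0, u0) by central differences."""
    n = len(x0)
    A = [[0.0] * n for _ in range(n)]
    B = [[0.0] for _ in range(n)]
    for j in range(n):
        xp, xm = list(x0), list(x0)
        xp[j] += eps
        xm[j] -= eps
        fp, fm = f(xp, u0), f(xm, u0)
        for i in range(n):
            A[i][j] = (fp[i] - fm[i]) / (2 * eps)
    fp, fm = f(x0, u0 + eps), f(x0, u0 - eps)
    for i in range(n):
        B[i][0] = (fp[i] - fm[i]) / (2 * eps)
    return A, B

A_star, B_star = jacobians(f, [0.0, 0.0], 0.0)
print(A_star)   # approximately [[0, 1], [-9.81, 0]]
print(B_star)   # approximately [[0], [1]]
```

Near the downward equilibrium this reproduces the textbook small-angle result sin(x1) ≈ x1, so δx' = A* δx + B* δu is an LTI model valid for small swings.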
The third example is from the book A First Course in Fuzzy and Neural Control (Hung T. Nguyen, Nadipuram R. Prasad, Carol L. Walker, and Elbert A. Walker). I add more explanation to the example, compared with the book, for better understanding. Finally, I present a last exercise, which is easier than the third one, for more practice. As you can see, across these four examples we discuss different situations. I hope these simple examples get you started on more complicated linearization problems. In the next section I introduce some Matlab functions for linearizing a nonlinear system.

For one variable, the linearisation of f(x) at x = a is

L(x) = f(a) + f'(a)(x - a)

For two variables, let f be a function of x and y defined in a neighbourhood of (a, b). The linear function

L(x, y) = f(a, b) + fx(a, b)(x - a) + fy(a, b)(y - b)

is called the linearisation of f at (a, b), and the approximation

f(x, y) ≈ f(a, b) + fx(a, b)(x - a) + fy(a, b)(y - b)

is called the linear approximation of f at (a, b).
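The two-variable formula above can be worked through on a small example of my own choosing, f(x, y) = x²·y at (a, b) = (1, 2): f(1,2) = 2, fx = 2xy = 4, fy = x² = 1, so L(x, y) = 2 + 4(x - 1) + (y - 2).

```python
def f(x, y):
    return x * x * y

def L(x, y):
    """Linearisation of f at (a, b) = (1, 2)."""
    a, b = 1.0, 2.0
    f_ab = f(a, b)       # 2
    fx_ab = 2 * a * b    # partial df/dx = 2xy evaluated at (1, 2) -> 4
    fy_ab = a * a        # partial df/dy = x**2 evaluated at (1, 2) -> 1
    return f_ab + fx_ab * (x - a) + fy_ab * (y - b)

print(f(1.1, 2.1))   # ~2.541
print(L(1.1, 2.1))   # ~2.5, close to f near (1, 2)
```

Exactly as in the one-variable case, the tangent plane is only accurate close to the point of linearisation.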