We have seen that a first-order ODE of degree one can be written as \[y'= \varphi(t,y)\] for a function #\varphi# of two variables (provided we restrict the domain of the unknown function #y# in such a way that the coefficient #f(t,y)# of #y'# in the general form #f(t,y)\cdot y'=g(t,y)# of the ODE has no zeroes). If #\varphi# does not depend on its second argument, #y#, then solving the equation amounts to computing an antiderivative of the function #\varphi# of #t#. The other extreme is the following case.
A first-order ODE is called autonomous if it can be written as \[ y'=\varphi(y)\] for a function #\varphi#. In other words, it is a first-order ODE whose right-hand side does not depend on the independent variable.
Constant solutions #y=a#, where #a# is a solution of the equation #\varphi(y)=0#, are called equilibrium solutions. The corresponding solution curves are horizontal lines in the direction field of the ODE.
- If all solutions in the vicinity of an equilibrium solution converge to the equilibrium, then the solution is called stable.
- If all solutions close to the equilibrium run away from it (diverge), then the solution is called unstable.
- If all solutions on one side of the equilibrium diverge from that equilibrium and all solutions on the other side of the equilibrium converge to the equilibrium, then the solution is called semi-stable.
If #\varphi# is continuous and #y=a# is the only equilibrium solution in an open interval around #a#, then this equilibrium solution is stable, unstable, or semi-stable.
Here is an example of a first-order differential equation that is not autonomous:
\[\begin{array}{rcl}\dfrac{\dd y}{\dd t}&=&t+y\end{array}\]
Here is an example of a first order autonomous differential equation:
\[\begin{array}{rcl}\dfrac{\dd y}{\dd t}&=&\left(2-{y}\right)\cdot y\end{array}\] Its equilibrium solutions are #y=0# and #y=2#. An impression of the stability of these equilibrium solutions can be obtained by drawing the direction field, but below we will give an algebraic method.
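Before turning to that algebraic method, the behavior near these two equilibria can be observed numerically. The sketch below is an ad-hoc illustration, not part of the course method; the function name `euler`, the step size, and the chosen starting values are all arbitrary.

```python
# A numerical illustration: integrate y' = (2 - y) * y with Euler steps
# from several starting values and watch the behavior near the
# equilibria y = 0 and y = 2.

def euler(phi, y0, t_end=10.0, h=0.001):
    """Approximate y(t_end) for y' = phi(y) with y(0) = y0 (Euler's method)."""
    y, t = y0, 0.0
    while t < t_end:
        y += h * phi(y)
        t += h
    return y

phi = lambda y: (2 - y) * y

print(euler(phi, 0.1))               # starts below 2, climbs towards 2
print(euler(phi, 3.0))               # starts above 2, descends towards 2
print(euler(phi, -0.01, t_end=2.0))  # starts just below 0 and moves away from 0
```

Solutions started on either side of #y=2# approach it, while a solution started just below #y=0# moves away from #0#.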
More generally, an ODE with unknown function #y# is called autonomous if it can be written in the form
\[ y^{(n)} = \Phi(y,y',\ldots,y^{(n-1)}) \]
where #n# is the order of the equation and #\Phi# is a function of #n# variables. For example, #y''=-\sin(y)# is an autonomous second-order ODE. For #n\gt1#, the degree of an autonomous ODE need not be defined.
In order to prove the last statement we use the following fact: If #y# and #y'# are continuous functions and both #\lim_{t\to\infty}y(t)# and #\lim_{t\to\infty}y'(t)# exist, then #\lim_{t\to\infty}y'(t)=0#. This can be derived from l'Hôpital's rule as follows:
\[\begin{array}{rcl}\displaystyle\lim_{t\to\infty} y(t)&=&\displaystyle\lim_{t\to\infty}\frac{y(t)\cdot \e^t}{\e^t}\\ &&\phantom{xx}\color{blue}{\text{numerator and denominator multiplied by }\e^t}\\ &=&\displaystyle\lim_{t\to\infty}\frac{\left(y'(t)+y(t)\right)\cdot \e^t}{\e^t}\\ &&\phantom{xx}\color{blue}{\text{l'Hôpital's rule}}\\ &=&\displaystyle\lim_{t\to\infty}\left(y'(t)+y(t)\right)\\&&\phantom{xx}\color{blue}{\text{numerator and denominator divided by }\e^t}\\ &=&\displaystyle\lim_{t\to\infty}y'(t)+\lim_{t\to\infty}y(t)\\ &&\phantom{xx}\color{blue}{\text{sum rule for limits}}\\ \end{array}\]
If we compare the end result with the first expression, then we see that #\lim_{t\to\infty}y'(t)=0#, as claimed.
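This fact can be illustrated with a concrete function, say #y(t)=\arctan(t)# (an example chosen here, not taken from the text): both limits exist, and the limit of the derivative is indeed #0#.

```python
import math

# Illustration with y(t) = arctan(t): lim y(t) = pi/2 exists,
# lim y'(t) exists as well, and is forced to be 0.

def y(t):
    return math.atan(t)

def y_prime(t):
    return 1 / (1 + t * t)  # derivative of arctan(t)

for t in [10.0, 1e3, 1e6]:
    print(t, y(t), y_prime(t))  # y(t) nears pi/2 while y'(t) nears 0
```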
Now assume that #y=a# is the only equilibrium solution in the open interval #\ivoo{c}{d}# around #a# and that #\varphi# is continuous on this interval. We apply the above-mentioned fact to a solution #z# of the autonomous ODE with #z(t_0)\in\ivoo{c}{a}# for a given #t_0# and #z'(t_0)\gt0#.
Since #a# is the only zero of #\varphi# in #\ivoo{c}{d}#, the continuous function #\varphi# has no zeroes on #\ivoo{c}{a}# and therefore has constant sign there; because #\varphi(z(t_0))=z'(t_0)\gt0#, this sign is positive. Hence #z'(t)=\varphi(z(t))\gt0# as long as #z(t)# lies in #\ivoo{c}{a}#, so #z# is increasing. It cannot cross the equilibrium #y=a#, so it is bounded above by #a#, and #\lim _{t\to\infty} z(t)# exists. We denote this limit by #b#. Then, thanks to the continuity of #\varphi#, we also have: \[\lim_{t\to\infty}z'(t) = \lim_{t\to\infty}\varphi(z(t))=\varphi(\lim_{t\to\infty}z(t)) =\varphi(b) \] As a consequence, we can apply the above fact and conclude that #\varphi(b)=0#. That means that #b# is a zero of #\varphi#. The assumption that #y=a# is the only zero in the interval #\ivoo{c}{d}# thus forces #b=a#. The conclusion is that #\lim _{t\to\infty} z(t)=a#, that is, the solution #z# converges to the equilibrium solution #y=a#.
The same kind of reasoning shows that, if #z# is a solution with #z(t_0)\in\ivoo{c}{a}# and #z'(t_0)\lt0#, this solution decreases to #c# and possibly further down, thus diverging from the equilibrium #y=a#. Likewise, if #z# is a solution with #z(t_0)\in\ivoo{a}{d}# and #z'(t_0)\gt0#, this solution diverges from #y=a#, and if #z(t_0)\in\ivoo{a}{d}# and #z'(t_0)\lt0#, then the solution converges to #y=a#.
Because, in each case, the solutions in the vicinity of #y=a# either converge to or diverge from #y=a#, there are exactly four possibilities for the equilibrium #y=a#:
- if the solutions on both sides of the equilibrium solution converge, the equilibrium solution is stable;
- if they diverge on both sides, the equilibrium solution is unstable;
- in the other two cases, the solutions converge on one side and diverge on the other side, so the equilibrium solution is semi-stable.
It may happen that an equilibrium solution occurs at the boundary of an interval of values of #y# for which solutions occur. The notion of semi-stability will then coincide with stability or instability, as there are no solutions on one side of the equilibrium. An example can be found at the bottom of this page.
The slope that an autonomous ODE prescribes for a solution does not depend on the independent variable (think of the time #t#); it depends only on the value of the unknown function. This means that the direction field has the first property described below.
The direction field of a first order autonomous differential equation #y'=\varphi(y)# does not change under horizontal shifts.
If #y# is a solution of this ODE and #c# is a constant, then the function #y_c# defined by #y_c(t)=y(t-c)# is also a solution.
The nature of an equilibrium solution #y=a# is determined by the sign of #\varphi(a+\delta)# for values of #\delta# close to #0#:
- If #\varphi(a+\delta)\lt0# for all positive #\delta# close to #0# and #\varphi(a+\delta)\gt0# for all negative #\delta# close to #0#, then #y=a# is stable.
- If #\varphi(a+\delta)\gt0# for all positive #\delta# close to #0# and #\varphi(a+\delta)\lt0# for all negative #\delta# close to #0#, then #y=a# is unstable.
- If #\varphi(a+\delta)# has the same sign for all #\delta\ne0# close to #0#, then #y=a# is semi-stable.
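These sign criteria translate directly into a small sampling test. The sketch below is an illustration only: the function name `classify` is made up, and the tolerance `delta` is assumed small enough that #a# is the only zero of #\varphi# within distance `delta`.

```python
# Classify the equilibrium y = a of y' = phi(y) by the sign of phi
# just above and just below a, following the three criteria above.

def classify(phi, a, delta=1e-6):
    above = phi(a + delta)
    below = phi(a - delta)
    if above < 0 < below:
        return "stable"      # solutions move towards a from both sides
    if below < 0 < above:
        return "unstable"    # solutions move away from a on both sides
    return "semi-stable"     # phi has the same sign on both sides

phi = lambda y: (2 - y) * y
print(classify(phi, 0))  # unstable
print(classify(phi, 2))  # stable
```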
The autonomous ODE \[\begin{array}{rcl}\dfrac{\dd y}{\dd t}&=&\left(2-y\right)\cdot y\end{array}\] has equilibria #y=0# and #y=2#.
If #\delta# is a number with a small absolute value (that is, close to #0#), then
- #\varphi(\delta)= \left(2-\delta\right)\cdot \delta# is positive if #\delta# is positive and negative if #\delta# is negative. This means that the equilibrium #y=0# is unstable.
- #\varphi(2+\delta)=-\delta\cdot\left(2+\delta\right)# is negative if #\delta# is positive and positive if #\delta# is negative. This means that the equilibrium #y=2# is stable.
Let #a# and #b# be real numbers. The derivative of a solution #y# of the ODE at the point #\rv{t,y}=\rv{a,b}# is equal to \[y'(a)=\varphi(y(a)) = \varphi(b)\] The conclusion is that the slope of the integral curve through #\rv{a,b}# does not depend on the first coordinate #a#; it only depends on the value #b=y(a)# of #y# at #a#.
Let #y# be a solution of the differential equation #y'=\varphi(y)# and let #c# be a constant. Due to the chain rule for differentiating, the function #y_c(t)=y(t-c)# satisfies
\[\begin{array}{rcl} y_c'(t)&=&\dfrac{\dd}{\dd t}(y(t-c))\\ &&\phantom{x}\color{blue}{\text{definition of }y_c}\\ &=&y'(t-c)\\ &&\phantom{x}\color{blue}{\text{chain rule}}\\ &=&\varphi(y(t-c))\\ &&\phantom{x}\color{blue}{y \text{ is a solution of the given ODE}}\\ &=&\varphi(y_c(t))\\ &&\phantom{x}\color{blue}{\text{definition of }y_c} \end{array}\]
It follows that #y_c'=\varphi(y_c)#. This proves that #y_c# is a solution of the ODE for any #c# and any solution #y#.
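The shift property can also be checked numerically for the earlier example #y'=(2-y)\cdot y#, using its particular solution #y(t)=2/(1+\e^{-2t})# (assumed here; it satisfies #y(0)=1#). The helper name `is_solution`, the shift #c=1.7#, and the sample points are arbitrary choices for the illustration.

```python
import math

# Check numerically that the shifted function y_c(t) = y(t - c) still
# satisfies y' = (2 - y) * y, comparing a central-difference derivative
# with phi evaluated at the function value.

phi = lambda v: (2 - v) * v
y = lambda t: 2 / (1 + math.exp(-2 * t))  # a solution of y' = (2 - y) * y

def is_solution(f, t, h=1e-6):
    """True if a central-difference derivative of f at t matches phi(f(t))."""
    derivative = (f(t + h) - f(t - h)) / (2 * h)
    return abs(derivative - phi(f(t))) < 1e-6

c = 1.7                   # an arbitrary shift
y_c = lambda t: y(t - c)  # shifted function

print(all(is_solution(y_c, t) for t in [-2.0, 0.0, 0.5, 3.0]))  # True
```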
The stability criteria follow from the fact that a differentiable function #y# is decreasing at #t# if and only if #y'(t)\lt0#, and increasing if and only if #y'(t)\gt0#.
A concrete example of a logistic equation is \[\frac{\dd y}{\dd t}=3 y \cdot \left(1-\frac{y}{2}\right)\] The figure below shows a solution curve. Experiment with the solution curve and study how the line elements run in the direction field to get to know the behavior of the solution curve.
Can you find the specific solution with initial value \(y(0)=1\)?
\( y(t)=\frac{2}{1+\e^{-3 t}}\)
Later we will see how to find such a solution.
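Although the method for finding it is treated later, the proposed solution can already be verified numerically. A quick sketch, with an ad-hoc step size and tolerance:

```python
import math

# Check that y(t) = 2 / (1 + e^{-3t}) satisfies y' = 3y(1 - y/2)
# and the initial condition y(0) = 1.

y = lambda t: 2 / (1 + math.exp(-3 * t))
phi = lambda v: 3 * v * (1 - v / 2)

print(y(0.0))  # 1.0, so the initial condition holds

h = 1e-6
for t in [-1.0, 0.0, 2.0]:
    derivative = (y(t + h) - y(t - h)) / (2 * h)  # central difference
    print(abs(derivative - phi(y(t))) < 1e-5)     # True at every sample point
```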