Why Non-Linear Differential Equations are Hard to Solve

In the study of ordinary differential equations, it's well known that most nonlinear equations are essentially impossible to solve analytically. In contrast, linear equations can always be solved, at least up to integrals, and have nice closed form solutions. This is a consequence of the structure of the general solution to a linear equation: it's a sum of two solutions that are easier to find. Solutions to nonlinear equations do not always have this property. To understand this more deeply, we will briefly study a small variation of a linear equation, attempt to solve it, and see where the linear method fails.

For first order equations, that is, equations involving only the first derivative, a linear equation looks like y' + a(t)y = q(t). Note that there are no strange functions of y in the expression, like y^(3/2) or sqrt(y); there is only multiplication of y by a function of t and addition of terms, both linear operations. To solve this equation, we start by looking for two kinds of solutions: all of the solutions yn of the equation with q(t) = 0 (called the null solutions) and one particular solution yp of the full equation. The general solution y is just the sum of these two solutions. This can be seen by considering the two solutions separately: yn' + a(t)yn = 0 and yp' + a(t)yp = q(t). Adding these two equations gives yn' + yp' + a(t)yn + a(t)yp = q(t). Since differentiation and multiplication by a(t) are linear operations, we can rewrite this as (yn + yp)' + a(t)(yn + yp) = q(t). Letting y = yn + yp, we see that this sum represents all solutions to the equation.
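To make this concrete, here is a minimal sketch in SymPy that checks the superposition property symbolically. The choices a(t) = 1 and q(t) = t (and the corresponding solutions) are illustrative assumptions, not taken from the discussion above.

```python
import sympy as sp

t, C = sp.symbols('t C')

a = 1                # illustrative choice: a(t) = 1
q = t                # illustrative choice: q(t) = t

yn = C * sp.exp(-t)  # null solution: solves yn' + a*yn = 0 for any C
yp = t - 1           # one particular solution: solves yp' + a*yp = q

y = yn + yp          # candidate general solution

# The residual of the full equation should simplify to 0,
# confirming that the sum is again a solution.
residual = sp.diff(y, t) + a * y - q
print(sp.simplify(residual))   # -> 0
```

Because C is left symbolic, this one check covers every null solution at once, which is exactly the linearity at work.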

Now consider the equation y' + a(t)y^2 = q(t), and let's attempt to solve it using the same principle. Assume that we know all the null solutions yn and a particular solution yp to this equation. After adding them like we did before, we get the equation (yn + yp)' + a(t)[yn^2 + yp^2] = q(t). You might already notice that our previous trick no longer works. We know from algebra that for most numbers x and y, x^2 + y^2 != (x + y)^2. There are three straightforward exceptions, obtained by letting x or y be 0. If y = 0 then x^2 + 0^2 = (x + 0)^2 = x^2, and the argument is identical if we let x = 0. Finally, there is the most obvious case, x = 0 and y = 0, which boils down to the equation 0 = 0. To proceed with our discussion, we need to know exactly when x^2 + y^2 = (x + y)^2. This will be useful for us since for every input t in the domain, y(t) is just a number. We will soon see that these three cases are in fact the only ones where equality holds.
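Before proving that, here is a sketch of the failure itself, again in SymPy. The choices a(t) = 1 and yp = t (with q(t) defined to match) are assumptions made only for this example.

```python
import sympy as sp

t, C = sp.symbols('t C')

a = 1                            # illustrative choice: a(t) = 1
yn = 1 / (t + C)                 # null solution: solves yn' + a*yn**2 = 0
yp = t                           # pick a particular solution...
q = sp.diff(yp, t) + a * yp**2   # ...and define q(t) so that yp solves the equation

y = yn + yp                      # try the "sum of solutions" trick anyway

# This time the residual does not vanish.
residual = sp.simplify(sp.diff(y, t) + a * y**2 - q)
print(residual)                  # -> 2*t/(C + t), i.e. 2*a*yn*yp
```

The leftover residual is exactly the cross term 2*a(t)*yn*yp produced by squaring the sum, and that cross term is what the rest of this post is about.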

Let's be totally clear about what we want to prove. Formally, we want to show that for all real numbers x and y where neither x nor y is 0, we have (x + y)^2 != x^2 + y^2. Put another way, if we pick any two non-zero real numbers x and y, then x^2 + y^2 != (x + y)^2. For starters, we assume that x and y are real numbers with x != 0 and y != 0. There are four sign combinations for x and y: (x > 0, y > 0), (x < 0, y < 0), (x > 0, y < 0), and (x < 0, y > 0). To get to the point, and avoid repeating nearly identical arguments, we will group these four cases into two: x and y have the same sign (both positive or both negative), and x and y have opposite signs (one positive, one negative).

Let's start with the case where x and y have the same sign. We will just show the case when x and y are both positive, but the same argument holds when they are both negative, since the product of two negative numbers is also positive. Expanding the square, (x + y)^2 = x^2 + 2xy + y^2. Since x and y are positive, 2xy is also positive. Therefore x^2 + 2xy + y^2 > x^2 + y^2, and we now know that (x + y)^2 > x^2 + y^2.

The next case is when x and y have opposite signs. As in the previous case, we will just show the case when x is positive and y is negative. To make the argument clearer, we split this case into three subcases: |y| > x, |y| = x, and |y| < x. Suppose |y| > x. Then (x + y)^2 < y^2. We can understand this by thinking about what happens when you square a number: the result is always non-negative, so only the magnitude matters. Since y is negative with |y| > x, adding x to y decreases the magnitude, bringing the total closer to 0. Therefore y^2 alone is larger than (x + y)^2. Since x^2 is positive, we immediately have y^2 < x^2 + y^2, and therefore (x + y)^2 < x^2 + y^2. Next, suppose |y| = x. Then (x + y)^2 = 0 < x^2 + y^2, since x^2 and y^2 are both positive. Finally, suppose |y| < x. By the same magnitude argument we used when |y| > x (now x + y is positive and smaller than x), we immediately have (x + y)^2 < x^2 < x^2 + y^2.

Where are we now? We've shown that in every subcase (x + y)^2 < x^2 + y^2. Again, the same argument can be made when y > 0 and x < 0, so we omit the details for brevity. With that said, we've shown that for any combination of non-zero real numbers x and y, (x + y)^2 != x^2 + y^2.
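As a quick sanity check, the whole case analysis boils down to one algebraic identity: (x + y)^2 - (x^2 + y^2) = 2xy, which is zero exactly when x = 0 or y = 0. The short SymPy sketch below just verifies that identity and spot checks the cases above; the specific numbers are arbitrary.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# The difference between the two sides is exactly the cross term 2*x*y,
# which vanishes precisely when x = 0 or y = 0.
difference = sp.expand((x + y)**2 - (x**2 + y**2))
print(difference)                       # -> 2*x*y

# Spot checks mirroring the cases above (numbers chosen arbitrarily).
print(difference.subs({x: 3, y: 5}))    # 30:  same sign, (x + y)^2 is strictly larger
print(difference.subs({x: 3, y: -5}))   # -30: opposite signs, (x + y)^2 is strictly smaller
print(difference.subs({x: 0, y: -5}))   # 0:   one of them is zero, equality holds
```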

What does this mean for our differential equation y' + a(t)y^2 = q(t)? We just showed that the only time x^2 + y^2 = (x + y)^2 is when x = 0, y = 0, or both. This means that y = yn + yp can only solve the equation if, at each point t of some shared domain, yn(t) = 0 or yp(t) = 0. For simplicity, we consider only the constant function 0 for yn or yp. If we let the particular solution be the constant function 0, then q(t) must also be 0, and the general solution is just the solution to y' + a(t)y^2 = 0, which is solvable via separation of variables. We find that in general the null solution is yn = 1 / (integral from 0 to t of a(s) ds + C), where C is a constant of integration. If instead we let the null solution be the constant function 0, we get no closer to the general solution: it contributes only the equation 0 = 0. The family of null solutions we found is no help either, since it is never 0: for any input t and choice of C, yn(t) is the reciprocal of a real number, which can never equal 0. Therefore, we don't know much about the general solution besides the basics: it will depend on the input q(t) and will be affected by the y^2 term.
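For completeness, here is a small sketch, once more in SymPy and with a(t) left as a generic symbolic function, checking that the separated form really does satisfy y' + a(t)y^2 = 0.

```python
import sympy as sp

t, s, C = sp.symbols('t s C')
a = sp.Function('a')

# Null solution from separation of variables:
#   yn = 1 / (integral from 0 to t of a(s) ds + C)
A = sp.Integral(a(s), (s, 0, t))    # the running integral of a
yn = 1 / (A + C)

# Plug back into yn' + a(t)*yn^2; the residual should simplify to 0.
residual = sp.diff(yn, t) + a(t) * yn**2
print(sp.simplify(residual))        # -> 0
```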

With a relatively small adjustment to a nice linear equation, the solution becomes much harder to find using traditional techniques. I hope this small example gives some insight into why solving nonlinear differential equations can be so much harder than solving linear ones.

 

