Integration and Differential Equations
The derivative of x in this case is constant, as we show in the v(t) graph above.

Varying derivatives

What if v is not constant? Here's a simple case in which the same bloke is accelerating forwards, so his velocity is increasing. Let's work out his velocity at any particular time t, just from the graph of x(t).
We want to find the slope of this curve at different values of t, as is shown in the animation. How can we do it? Well, in most cases, and especially in the case of experimental measurement, we do the same thing that we did for the simple case: again, this is numerical differentiation. In fact, the slope of the black line gives us the average velocity between 0 and 2 seconds, but that is not what we want. Try it, but you'll find that that gives an overestimate, too. Here arises a practical problem: as we make the interval smaller, the change in x becomes the difference of two nearly equal numbers, so errors in the values become relatively more important. If we are calculating x and t numerically, we face the same problem.
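As a sketch of the idea, we can estimate the slope numerically with a finite difference. The function x(t) = t³ here is a made-up example of accelerating motion, not the one in the animation:

```python
# Numerical differentiation: estimate the slope of x(t) at time t
# using a small finite difference. x(t) = t**3 is a hypothetical
# displacement, chosen only for illustration.

def x(t):
    return t**3

def velocity_estimate(t, dt=1e-5):
    # Central difference: slope of a short chord straddling t.
    return (x(t + dt) - x(t - dt)) / (2 * dt)

print(velocity_estimate(2.0))   # close to the exact slope 3*t**2 = 12
```

Making dt smaller improves the estimate, up to the point where round-off in the subtraction takes over.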
There are other tricks that we'll see below.

Analytical derivatives

But what if we 'know' the formula for the function x(t)? I have put 'know' in quotation marks, because for anything in physics, the only things that we know are the measurements.
There are only a finite number of these, so we just have a set of points on a graph. What we can do is to find a mathematical model: a formula that goes close to the points on the graph. For constant acceleration, that model is x(t) = x0 + v0t + ½at². We can now choose whatever t we like, and calculate x to whatever precision we need, though of course the final precision will depend on how well we know x0, v0 and a, so we are still limited by measurement.
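A minimal sketch of that idea, assuming the constant-acceleration model with made-up values for x0, v0 and a, as if they had been fitted to measurements:

```python
# Evaluate the model x(t) = x0 + v0*t + (1/2)*a*t**2 at any chosen t.
# The parameter values are illustrative, standing in for fitted values.
x0, v0, a = 1.0, 2.0, 0.5

def x_model(t):
    return x0 + v0 * t + 0.5 * a * t**2

print(x_model(4.0))   # 1 + 8 + 4 = 13.0
```

The precision of the result is limited by how well x0, v0 and a are known, not by the arithmetic.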
Power terms and polynomials

Let's have a look at these terms in turn. This is like the first example we did: so, the derivative of a constant is zero, and the derivative of a term that is proportional to t is just the constant of proportionality or, in standard terms, the coefficient of t.
As we've said, dt is very small, and can be made smaller than anything that we could measure. So we can neglect it on the right hand side.
We don't neglect it on the left hand side, because there we have the ratio of two small things, and that ratio need not be small. So here we have one useful case for taking derivatives.

I may get into trouble for pointing this out, but the universe doesn't have infinitesimals, and quantities don't go to zero in physics.
Infinitesimals, like many things in mathematics, are human inventions. So, for most purposes in physics, the limit taken is just the size necessary to have mathematical precision greater than that of our measurements, or greater than that of our numerical calculation. You really should take that mathematics course, but you won't need infinitesimals in physics.

Let's summarise what we have so far: the derivative of a constant is zero, the derivative of ct is the constant c, and the derivative of a power of t brings the power down as a factor and lowers the power by one. Let's graph these, setting the constants equal to one. We'll also omit units on the axes because, although you may find it helpful to think of the vertical axis as displacement and t as time to give a concrete example, the results are general.
For that reason, we'll use y as the vertical axis from here on. In all of the graphs on this page, the red curve is the derivative of the purple one. It is a good exercise to compare the two, and to check that, in all cases and over the whole curve, the red line represents the slope of the purple one. Perhaps you see a pattern here?
This is often written d(tⁿ)/dt = n tⁿ⁻¹. This result is more general than our derivation suggests: it holds for any power n, not just whole numbers. This will be important when we come to look at integrals, below. The next rule is an easy one: the rate of change of a sum of functions is equal to the sum of their individual rates of change. The derivative of the sum is the sum of the derivatives.
With this unsurprising result, we can now differentiate polynomials, such as x(t) = x0 + v0t + ½at², whose derivative is v(t) = v0 + at.

Trigonometric functions

Sine and cosine functions are important, especially in circular motion, simple harmonic motion, components of forces and other cases involving components of vectors.
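A quick numerical check of the power and sum rules on a polynomial (the coefficients are made up for the example):

```python
# Differentiate x(t) = x0 + v0*t + (1/2)*a*t**2 term by term:
# the derivative of a constant is 0, of v0*t is v0, of (1/2)*a*t**2 is a*t.
x0, v0, a = 1.0, 2.0, 0.5   # illustrative coefficients

def x(t):
    return x0 + v0 * t + 0.5 * a * t**2

def v(t):                    # derivative found with the power and sum rules
    return v0 + a * t

# Compare with a small finite difference at t = 3:
dt = 1e-6
numerical = (x(3 + dt) - x(3)) / dt
print(v(3), numerical)       # both close to 2 + 0.5*3 = 3.5
```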
Fortunately, the derivatives here are simple. Let's work them out, using this diagram, which shows a segment of a circle whose radius is one unit.
We say 'a circle of unit radius'. The definition of the sine of an angle uses a right angled triangle: it is the ratio of the side opposite the angle to the hypotenuse of the triangle. The definition of cosine is the side adjacent to the angle divided by the hypotenuse. In this diagram, first look at the triangle with blue sides. For clarity, this triangle is repeated outside the main diagram.
The run goes to the left in this case, so it has a negative sign. Now look at the small right triangle in red. We call its hypotenuse h. The hypotenuse approaches more and more closely the length of the arc of the circle between the two radii (the radii are the blue hypotenuse and the green hypotenuse).
Further, h becomes closer and closer to being at a right angle to the radius. In the limit, this geometry gives the simple results d(sin t)/dt = cos t and d(cos t)/dt = −sin t.

The chain rule

Suppose we have a function z that depends on t, in a way that allows us to calculate z if we know t. And suppose that x depends on z in a similarly explicit way.
Just by cancelling a factor, we can write Δx/Δt = (Δx/Δz)(Δz/Δt). If all of these are very small quantities, then we write dx/dt = (dx/dz)(dz/dt). This is the chain rule of differentiation, which we use when analysing circular and simple harmonic motion. For example, with x = sin z and z = ωt, the chain rule gives dx/dt = ω cos ωt. So much for maths: what happens physically if we double ω? The time for one complete circle or cycle is halved.
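A numerical sketch of the chain rule for x = sin(ωt), with an arbitrary ω; note how the maximum slope scales with ω:

```python
import math

def x(t, omega):
    return math.sin(omega * t)

def dxdt(t, omega):
    # Chain rule: dx/dt = (dx/dz)(dz/dt) with z = omega*t,
    # giving omega * cos(omega*t).
    return omega * math.cos(omega * t)

# Check against a finite difference at t = 0.7, omega = 3 (both arbitrary):
dt = 1e-6
numerical = (x(0.7 + dt, 3.0) - x(0.7 - dt, 3.0)) / (2 * dt)
print(dxdt(0.7, 3.0), numerical)        # the two should agree closely

# Doubling omega doubles the maximum slope (at t = 0):
print(dxdt(0.0, 3.0), dxdt(0.0, 6.0))   # 3.0 then 6.0
```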
If the displacement goes through the same variation in half the time, then the velocity is doubled. How do the results of a variable rate add up? Let's leave displacement time graphs for a moment, because my favourite example of an integrator is a bucket.
A bucket integrates the flow of water from a tap above it. That function f(t) is shown as the red curve in the figure. The tap is already on, with a flow rate f0, called the initial flow rate. Notice that, when the flow is high, the area under the f(t) curve, and so the volume in the bucket, increases rapidly. Note that, when the flow falls to zero (tap off), the volume is no longer increasing.
And of course a fall in the V(t) curve would mean water flowing out of the bucket, which we should call negative flow into the bucket. For example, the bucket might have a leak. What is the volume Vf in the bucket at a final time tf? (The subscript f here stands for 'final', not flow.) So the equation above becomes Vf = V0 + f(t1)Δt + f(t2)Δt + …: the right hand side has the initial volume, plus a long sum of terms like f(t)Δt.
That sum is called the integral of f with respect to t. We saw earlier that differentiating was subtracting and dividing. We've now seen that integration is just multiplying and adding.
So integration is the opposite of differentiation. We'd waste time and look a bit silly writing this every time. So instead we write it like this: Vf = V0 + ∫ f dt, where the integral runs from t = 0 to t = tf. The integral sign is s shaped, which can stand for 'sum' and remind us that's all it is. We say we are integrating "with respect to t", because t is varying during our sum. In the way I've presented this example, V0 is a constant of integration.
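The bucket picture translates directly into code. Here is a sketch with a made-up flow rate f(t) and made-up initial values, integrated as a plain sum of f·Δt slices:

```python
# Integrate a flow rate f(t) (litres/second) to get the volume in the bucket.
# f(t) is hypothetical: it starts at f0 and falls smoothly towards zero.
f0 = 2.0                     # initial flow rate, litres/s (illustrative)
V0 = 5.0                     # volume already in the bucket, litres

def f(t):
    return f0 / (1 + t)**2   # an illustrative decreasing flow

def volume(tf, steps=100000):
    # Riemann sum: volume = V0 + sum of f(t) * dt slices.
    dt = tf / steps
    V = V0
    for i in range(steps):
        V += f(i * dt) * dt
    return V

print(volume(9.0))           # analytic answer: V0 + 2*(1 - 1/10) = 6.8
```

Note that V0 must be supplied separately: the sum only tells us how much water was added.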
Integration doesn't tell you the complete answer; it only tells you how much something has changed during the process. In this case, to know the final volume in the bucket, we need to know not only the integral of the flow, but also how much was in the bucket before it started to integrate the flow.
In most cases, you will need to find the constant of integration — very often by using the initial conditions, as we did here. Perhaps now is a good time to go back to the animations above and check that integrating the velocity (finding the area under the curve) gives the displacement.
Numerical integration

If we had a set of numerical values for f(t) — whether experimental values or values calculated for a given mathematical function — then we could integrate just as described above: multiply each value by the time interval and add them all up. A very important practical point: differentiation required dividing one small difference by another, which magnifies errors. This problem does not arise in multiplication.
Even better, the computation errors, being sometimes positive and sometimes negative, tend to cancel out. So numerical integration is much easier and safer than numerical differentiation. The latter requires considerable caution, and that is why my calculator doesn't have a "differentiate" button.

Analytical integration

This section might be shorter than you expect.
We've mentioned above that integration is the opposite of differentiation: the rate at which something changes is its derivative, and you can recover that something by integrating the rate at which it changes. So for analytical integration, we can use in reverse the tricks we established above for differentiation. Omitting constants of integration, we write: the derivative of tⁿ is ntⁿ⁻¹, so the integral of ntⁿ⁻¹ is tⁿ, provided that n is not equal to zero, because in that case the first equation gives us no information.
This is an important exception, which we'll deal with below.

The exponential function

One very useful function in both differentiation and integration is the exponential function, eᵗ: it is its own derivative. So it is also its own integral. In this graph the derivative is not shown in red, because the function and its derivative are equal.
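A numerical illustration (a sketch using a finite difference and a simple sum) that eᵗ is its own derivative and, up to a constant, its own integral:

```python
import math

def f(t):
    return math.exp(t)

# Derivative: a central difference at t = 1 should return f(1) itself.
dt = 1e-6
slope = (f(1 + dt) - f(1 - dt)) / (2 * dt)
print(slope, f(1))            # both close to e = 2.71828...

# Integral: a Riemann sum of exp from 0 to 1 should give e**1 - e**0.
steps = 100000
h = 1.0 / steps
area = sum(f(i * h) * h for i in range(steps))
print(area, math.e - 1)       # both close to 1.71828...
```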
Finally, here is a case where these ideas combine: taking the derivative of an integral whose upper limit is itself a function of x. Suppose we define F(x) as the integral from π to x of cot²t dt. You might think you have to find the antiderivative, evaluate it at both boundaries, and then differentiate, but the fundamental theorem of calculus makes this straightforward and fast: F′(x) is just cot²x.

Now let's mix it up a little bit. Suppose the upper limit is x² instead of x, and we want the derivative with respect to x of the integral from π to x² of cot²t dt. The recognition here is that this is exactly F(x²): wherever F(x) had an x, we now have an x². So we just apply the chain rule. The derivative is F′ evaluated at x², times the derivative of x² with respect to x:

d/dx [integral from π to x² of cot²t dt] = cot²(x²) · 2x.
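We can check that result numerically. This is a sketch: the function name G is ours, the integral is evaluated by a simple midpoint sum, and we integrate from a constant lower limit a = 1 rather than π just to keep the numerics well behaved (a constant lower limit does not change the derivative):

```python
import math

def cot2(t):
    return (math.cos(t) / math.sin(t))**2

def G(x, steps=20000):
    # G(x) = integral from 1 to x*x of cot(t)**2 dt, by a midpoint sum.
    # (Lower limit 1 instead of pi only to keep the sum finite; being a
    # constant, it does not change the derivative with respect to x.)
    a, b = 1.0, x * x
    h = (b - a) / steps
    return sum(cot2(a + (i + 0.5) * h) for i in range(steps)) * h

x = 1.2                      # so x*x = 1.44, safely inside (0, pi)
dx = 1e-5
numerical = (G(x + dx) - G(x - dx)) / (2 * dx)
formula = cot2(x * x) * 2 * x          # cot^2(x^2) * 2x, from the chain rule
print(numerical, formula)    # the two should agree closely
```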