To get here, we first had to work out the point-slope formula and then figure out limits. Derivatives are very powerful. This post was inspired by doing gradient descent on artificial neural networks, but I won’t cover that here. Instead, we will focus on the definition of a derivative itself.
So let’s get started. A secant is a line that passes through 2 points on a curve. In the graph below, the points are (x, f(x)) and (x + dx, f(x + dx)).
To derive a formula for this, we can use the point-slope form of an equation of a line: y - y1 = m(x - x1).
Plugging in the values, we get: f(x + dx) - f(x) = m * ((x + dx) - x) = m * dx.
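To make this concrete, here is a minimal Python sketch of the secant slope; the choice f(x) = x^2 is just an illustrative example, not something fixed by the derivation:

```python
def secant_slope(f, x, dx):
    """Slope of the line through (x, f(x)) and (x + dx, f(x + dx))."""
    return (f(x + dx) - f(x)) / dx

f = lambda x: x ** 2
print(secant_slope(f, 1.0, 0.5))  # slope of the secant from x = 1 to x = 1.5
```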
What is interesting about this formula for the secant is that, as we will see, it provides us with a neat approximation of f(x + dx).
Let’s define m = (f(x + dx) - f(x)) / dx. So now we have: f(x + dx) = f(x) + m * dx.
The limit of m as dx approaches 0 will give us the actual slope of f (the slope of the tangent line) at x.
So, let’s define f'(x) = lim as dx → 0 of (f(x + dx) - f(x)) / dx. This limit is precisely the definition of a derivative. This definition lies at the heart of calculus.
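We can watch the secant slope converge to this limit numerically. A small Python sketch, again using the illustrative choice f(x) = x^2 (whose derivative at x = 1 is 2):

```python
# Watch the secant slope approach the true slope f'(1) = 2 as dx shrinks.
f = lambda x: x ** 2

for dx in (0.5, 0.1, 0.01, 0.001):
    slope = (f(1.0 + dx) - f(1.0)) / dx  # secant slope over [1, 1 + dx]
    print(f"dx = {dx:<6} secant slope = {slope}")
```

The printed slopes approach 2, which matches taking the limit as dx → 0.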
The image below (taken from Wikipedia) demonstrates this for h = dx.
Back to the secant approximation, we now have: f(x + dx) ≈ f(x) + f'(x) * dx. This is an approximation rather than an equality because we took the limit for the slope term but not for the rest. As dx → 0, the approximation becomes an equality.
For example, to calculate the square of 1.5, we let x = 1 and dx = 0.5. Additionally, if f(x) = x^2 then f'(x) = 2x. So f(1.5) ≈ f(1) + f'(1) * 0.5 = 1 + 2 * 0.5 = 2, while the true value is 2.25. That’s an error of just 0.25 for dx = 0.5. A bit of algebra shows the error in this particular case to be dx^2. For dx = 0.1, the error is just 0.01.
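The worked example above can be checked in a few lines of Python:

```python
# Approximating 1.5 ** 2 with f(x + dx) ≈ f(x) + f'(x) * dx,
# where f(x) = x ** 2 and f'(x) = 2 * x.
f = lambda x: x ** 2
fprime = lambda x: 2 * x

x, dx = 1.0, 0.5
approx = f(x) + fprime(x) * dx  # 1 + 2 * 0.5 = 2.0
exact = f(x + dx)               # 1.5 ** 2 = 2.25
print(approx, exact, exact - approx)  # the error 0.25 equals dx ** 2
```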
Pretty cool, right?
Here are some of the many applications that show why derivatives are useful:
- We can use the value of the slope to find minima (or maxima) using gradient descent (or ascent)
- We can determine the rate of change given the slope
- We can find intervals of monotonicity (where a function is increasing or decreasing)
- We can do neat approximations, as shown
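As a sketch of the first application, here is a minimal gradient descent loop; the function (x - 3)^2, the starting point, and the step size 0.1 are illustrative choices, not anything from a real neural network:

```python
# Minimal gradient descent sketch: repeatedly step opposite the slope.
# f(x) = (x - 3) ** 2 has its minimum at x = 3, where f'(x) = 0.
f = lambda x: (x - 3) ** 2
fprime = lambda x: 2 * (x - 3)   # derivative of f

x = 0.0    # starting guess
lr = 0.1   # learning rate (step size)
for _ in range(100):
    x -= lr * fprime(x)          # move downhill along the slope
print(x)   # ends up very close to 3, the minimizer of f
```

Where the slope is steep, the steps are large; near the minimum the slope (and hence the step) shrinks toward zero, which is why the loop settles at x ≈ 3.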