Horner's Rule for Polynomials
A general polynomial of degree $n$ can be written as
\begin{equation}
P(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n = \sum_{i=0}^n a_i x^i .
\end{equation}
If we use the Newton-Raphson method for finding roots of the polynomial, we need to evaluate both the polynomial $P(x)$ and its derivative $P'(x)$ at each iteration.
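Each Newton-Raphson update has the form $x_{\rm new} = x - P(x)/P'(x)$. A minimal C++ sketch of one such step is shown below; the names newtonStep, p, and dp, and the example polynomial $x^2 - 2$, are purely illustrative assumptions, not part of the lesson's code.

```cpp
#include <functional>
#include <iostream>

// One Newton-Raphson update: x_new = x - P(x)/P'(x).
// Both the polynomial and its derivative must be evaluated at every step,
// which is why an efficient evaluation scheme matters.
double newtonStep(const std::function<double(double)>& p,
                  const std::function<double(double)>& dp,
                  double x)
{
    return x - p(x) / dp(x);
}

int main()
{
    // Illustrative example: find a root of x^2 - 2 (derivative 2x), starting at x = 1.
    auto p  = [](double x) { return x * x - 2.; };
    auto dp = [](double x) { return 2. * x; };

    double x = 1.;
    for (int i = 0; i < 5; i++)
        x = newtonStep(p, dp, x);
    std::cout << x << std::endl;   // converges toward sqrt(2) = 1.41421...
    return 0;
}
```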



It is often important to write efficient algorithms to complete a project in a timely manner. So let us try to design the algorithm for evaluating a polynomial so it takes the fewest flops (floating point operations, counting both additions and multiplications). For concreteness, consider the polynomial
\begin{displaymath}
7x^3 + 5x^2 - 4x + 2 .
\end{displaymath}

The most direct evaluation computes each monomial $a_k x^k$ from scratch: building $x^k$ and multiplying by the coefficient costs $k$ multiplications, and summing the terms costs another $n$ additions. For the cubic above that is $1 + 2 + 3 = 6$ multiplications plus $3$ additions, or $9$ flops in all; for a general polynomial of degree $n$ the multiplication count grows like $n^2/2$.
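In C++ the brute-force evaluation might look like the following minimal sketch; the function name evalDirect and the convention that a[k] holds the coefficient of $x^k$ are choices made for illustration.

```cpp
#include <iostream>

// Brute-force evaluation: compute each monomial a[k]*x^k from scratch.
// For degree n this takes about n*(n+1)/2 multiplications and n additions.
double evalDirect(const double a[], int n, double x)
{
    double sum = a[0];
    for (int k = 1; k <= n; k++) {
        double term = a[k];
        for (int j = 0; j < k; j++)   // build x^k by repeated multiplication
            term *= x;
        sum += term;
    }
    return sum;
}

int main()
{
    double a[] = {2., -4., 5., 7.};   // 2 - 4x + 5x^2 + 7x^3
    std::cout << evalDirect(a, 3, 1.5) << std::endl;   // prints 30.875
    return 0;
}
```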





A more efficient evaluation rewrites the polynomial in nested form:
\begin{displaymath}
2 - 4x + 5x^2 + 7x^3 = 2 + x[-4 + x(5 + 7x)] .
\end{displaymath}
(Check the identity by multiplying it out.) This procedure can be generalized to an arbitrary polynomial. Computation starts with the innermost parentheses, using the coefficients of the highest degree monomials, and works outward, each time multiplying the previous result by $x$ and adding the coefficient of the next lower power. This scheme is Horner's rule; it evaluates a polynomial of degree $n$ with only $n$ multiplications and $n$ additions, so the cubic above needs just $6$ flops instead of $9$.
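A minimal C++ sketch of Horner's rule follows; the function name evalHorner and the same coefficient-array convention as above are assumptions for illustration.

```cpp
#include <iostream>

// Horner's rule: start from the highest coefficient and repeatedly
// multiply the running result by x and add the next lower coefficient.
// Costs n multiplications and n additions for a polynomial of degree n.
double evalHorner(const double a[], int n, double x)
{
    double result = a[n];
    for (int k = n - 1; k >= 0; k--)
        result = result * x + a[k];
    return result;
}

int main()
{
    double a[] = {2., -4., 5., 7.};   // 2 - 4x + 5x^2 + 7x^3
    std::cout << evalHorner(a, 3, 1.5) << std::endl;   // prints 30.875, as before
    return 0;
}
```

Storing the coefficients lowest degree first matches the ordering $a_0, a_1, \ldots, a_n$ in the definition above, and the same loop is readily extended to accumulate the derivative $P'(x)$ alongside $P(x)$, which is exactly what the Newton-Raphson iteration needs.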

