EULER stores a polynomial a+bx+...+cx^n in the form [a,b,...,c], lowest coefficient first. This is the reverse of the MATLAB convention. It can evaluate a polynomial with the Horner scheme
>polyval(p,x)
where x can be a matrix, of course. If you need a more accurate answer for a badly conditioned polynomial, you may use
>xpolyval(p,x)
An additional maximal number of iterations may be passed. This function uses the exact scalar product for a residual iteration.
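The Horner scheme behind polyval can be sketched in Python (an illustrative stand-in, not EULER code), taking the lowest-coefficient-first storage into account:

```python
def polyval(p, x):
    """Horner scheme: evaluate a + b*x + ... + c*x^n,
    with coefficients stored lowest first as in EULER."""
    result = 0
    for coeff in reversed(p):
        result = result * x + coeff
    return result

# 1 + 2x + 3x^2 at x = 2  ->  1 + 4 + 12 = 17
print(polyval([1, 2, 3], 2))   # 17
```

The loop performs one multiplication and one addition per coefficient, which is the usual reason for preferring Horner's form over computing the powers of x directly.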
It can multiply polynomials with
>polymult(p,q)
or add them with
>polyadd(p,q)
Of course, the polynomials need not have the same degree.
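As a rough Python illustration of what polymult and polyadd compute (coefficient convolution and padded addition; the names mirror the EULER functions, but this is only a sketch):

```python
def polymult(p, q):
    """Product of two polynomials: convolution of coefficients."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def polyadd(p, q):
    """Sum of two polynomials of possibly different degree."""
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

# (1 + x) * (1 + x) = 1 + 2x + x^2
print(polymult([1, 1], [1, 1]))    # [1, 2, 1]
print(polyadd([1, 2, 3], [1, 1]))  # [2, 3, 3]
```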
>polydiv(p,q)
divides p by q and returns {result,remainder}.
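Polynomial long division with lowest-coefficient-first storage can be sketched in Python as follows (an illustration of the algorithm, not EULER's implementation):

```python
def polydiv(p, q):
    """Divide p by q; return (quotient, remainder),
    coefficients lowest first as in EULER."""
    rem = list(p)
    n, m = len(p), len(q)
    if n < m:
        return [0], rem
    quot = [0] * (n - m + 1)
    for k in range(n - m, -1, -1):        # highest quotient term first
        quot[k] = rem[k + m - 1] / q[-1]
        for j in range(m):
            rem[k + j] -= quot[k] * q[j]
    return quot, rem[:m - 1]

# (x^2 - 1) / (x - 1) = x + 1, remainder 0
print(polydiv([-1, 0, 1], [-1, 1]))
```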
>polytrunc(p)
truncates a polynomial to its true degree (using epsilon). In UTIL
>polydif(p)
is defined. It differentiates the polynomial once. To construct a polynomial with prescribed zeros z=[z1,...,zn]
>polycons(z)
is used. The reverse is obtained with
>polysolve(p)
This function uses the Bauhuber method, which converges very stably to the zeros. However, multiple zeros remain a problem: they destroy the accuracy (but not the speed of convergence). Another issue is the scaling of the polynomial, which can considerably improve the stability and accuracy of the method.
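Under the same lowest-coefficient-first convention, polytrunc, polydif and polycons can be sketched in Python (the names mirror the EULER functions; the tolerance default is an assumption, not EULER's internal epsilon):

```python
def polytrunc(p, eps=1e-12):
    """Trim trailing near-zero coefficients to the true degree.
    eps is an assumed tolerance, not EULER's internal epsilon."""
    n = len(p)
    while n > 1 and abs(p[n - 1]) < eps:
        n -= 1
    return p[:n]

def polydif(p):
    """Differentiate once: d/dx (a + b*x + c*x^2 + ...) = b + 2c*x + ..."""
    return [k * p[k] for k in range(1, len(p))] or [0]

def polycons(zeros):
    """Monic polynomial with prescribed zeros, built by
    repeated multiplication with (x - z)."""
    p = [1]
    for z in zeros:
        q = [0] + p                 # p * x
        for k in range(len(p)):
            q[k] -= z * p[k]        # ... minus z * p
        p = q
    return p

print(polytrunc([1, 2, 0.0, 0.0]))  # [1, 2]
print(polydif([1, 2, 3]))           # [2, 6]
print(polycons([1, 2]))             # [2, -3, 1], i.e. (x-1)(x-2)
```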
Polynomial interpolation can be done by
>d=interp(t,s)
where t and s are vectors. The result is a polynomial in divided differences (Newton) form, and can be evaluated by
>interpval(t,d,x)
at x. To transform the Newton form to the usual polynomial form
>polytrans(t,d)
may be used.
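The divided-difference scheme behind interp and interpval can be illustrated in Python (same names as the EULER functions, but only an illustrative sketch):

```python
def interp(t, s):
    """Divided-difference (Newton) coefficients for the
    interpolating polynomial through the points (t[i], s[i])."""
    d = list(s)
    n = len(t)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            d[i] = (d[i] - d[i - 1]) / (t[i] - t[i - j])
    return d

def interpval(t, d, x):
    """Evaluate the Newton form at x with a Horner-like scheme."""
    result = d[-1]
    for i in range(len(d) - 2, -1, -1):
        result = result * (x - t[i]) + d[i]
    return result

# interpolate x^2 at t = 0, 1, 2, then evaluate at x = 3
t, s = [0, 1, 2], [0, 1, 4]
d = interp(t, s)
print(interpval(t, d, 3))   # 9.0
```

The Newton form is cheap to extend when a new interpolation point is added, which is the usual reason for returning divided differences instead of monomial coefficients.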
Interpolation in the roots of unity can be done with the fast Fourier transform
>p=ifft(s)
Then p(exp(2*pi*i*k/n))=s[k+1] for 0<=k<=n-1. For maximal speed, n should be a power of 2. The reverse evaluates a polynomial at the roots of unity simultaneously
>s=fft(p)
Note that polynomials have the lowest coefficient first.
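To see what the transform computes, here is a naive O(n^2) Python evaluation at the roots of unity exp(2*pi*i*k/n), as in the formula above (EULER's sign and normalization conventions for fft/ifft are not reproduced here; an FFT computes the same values in O(n log n)):

```python
import cmath

def eval_at_roots_of_unity(p):
    """Evaluate p (lowest coefficient first) at the n-th roots
    of unity exp(2*pi*i*k/n), k = 0..n-1."""
    n = len(p)
    return [sum(c * cmath.exp(2j * cmath.pi * k * m / n)
                for m, c in enumerate(p))
            for k in range(n)]

p = [1, 2, 3, 4]                 # 1 + 2x + 3x^2 + 4x^3
s = eval_at_roots_of_unity(p)
print(round(s[0].real))          # p(1)  = 1+2+3+4 = 10
print(round(s[2].real))          # p(-1) = 1-2+3-4 = -2
```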
If A is a matrix
>fft(A)
will compute the two-dimensional fast Fourier transform of A. In this case, the numbers of columns and rows of A must both be powers of 2. ifft(A) returns the inverse.
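The two-dimensional transform can be understood as the one-dimensional transform applied first to all rows, then to all columns. A naive Python sketch of that structure (ignoring the power-of-2 speed requirement and EULER's sign/normalization conventions):

```python
import cmath

def dft(v):
    """Naive 1D transform at the roots of unity (what an FFT computes fast)."""
    n = len(v)
    return [sum(c * cmath.exp(2j * cmath.pi * k * m / n)
                for m, c in enumerate(v))
            for k in range(n)]

def dft2(A):
    """2D transform: 1D transform of each row, then of each column."""
    B = [dft(row) for row in A]                        # transform rows
    cols = [dft([B[i][j] for i in range(len(B))])      # transform columns
            for j in range(len(B[0]))]
    return [[cols[j][i] for j in range(len(cols))]     # transpose back
            for i in range(len(B))]

A = [[1, 2], [3, 4]]
F = dft2(A)
print(round(F[0][0].real))   # entry (0,0) is the sum of all entries: 10
```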