%************************ INITIALIZATION ************************
% In TeXing this file, remember that TeX will be asking for the
% file satmacros.tex, so have it available in the same directory
% as this file.
\def\updated{21 December 2004} % date, to keep track of versions of this paper
\count100= 46 \count101= 24 % first page, and number of pages
\input satmacros
\singlecount %% use just one numbering system
\def\presect{} %% make numbering independent of sections
% \draft %% turn this on to see, in the margin, the label names
\title{Divided Differences}
\author{Carl de Boor}
\def\shorttitle{Divided Differences}
\def\shortauthor{C. de Boor}
%************************* MACROS ******************************
\def\divdif{\mathord{\kern.43em\vrule width.6pt height5.6pt depth-.28pt\kern-.42em\Delta}}
\def\dvd#1{\divdif(#1)}
\newcount\excount\excount0
\def\nextex{\global\advance\excount by 1{}\formal{Example \the\excount.}}
\def\ootpii{{1\over2\pi\ii}}
\def\nwt#1{w_{#1}}
\def\Nwt#1{W_{#1}}
\def\tp{{}^t}
\def\hatc{\widehat{c}}
\def\hatt{\widehat{t}}
\def\Matrix#1{\left[\matrix{#1}\right]}
\def\braket#1{\hbox{$[\![$}#1\hbox{$]\!]$}}
\let\bs\backslash
\def\dist{\mathop{\rm dist}{}}
\def\ran{\mathop{\rm ran}\nolimits}
\def\Null{\mathop{\rm null}\nolimits}
\def\fromto{\mathbin{\ldotp\ldotp}} % for interval specs, as in [a\fromto b]
\let\gD\Delta \let\gd\delta \let\ga\alpha \let\gz\zeta
\let\gl\lambda \let\gs\sigma \let\gt\tau
\def\Fn{\FF^n}
\def\FF{{{\rm I}\kern-.16em {\rm F}}}
\def\Piln{\Pi_{<n}} % the polynomials of degree < n
\def\inv#1{#1^{-1}}
\def\id#1{{\rm id}{}_{#1}}
\def\nextline{\hfill\break\noindent} \let\newline\nextline
\def\newpar{\par}\let\nextpar\newpar
\def\Iff{\hskip1em\Longleftrightarrow\hskip1em}
\let\proof\pf
\def\endproof{\hbox{}~\hfill\endproofsymbol\nopf}
\let\endexample\endproof
\def\rhl#1{\rahel#1::/}
\def\rahel#1:#2:#3/{\edef\rh{#1}\def\next{#2}\ifx\next\empty\edef\rl{#1}%
\else\edef\rl{#2}\fi%
\edef\griff{cit\rh}\edef\inhalt{\rl}\definieres}
\def\cite#1{\excite#1::/}
\def\excite#1:#2:#3/{\edef\griff{cit#1}%
\formcite{\plazieres\def\next{#2}\ifx\next\empty\else\formciteaddl{#2}\fi}}
\def\formcite#1{{\bf[}#1{\bf]}}
\def\formciteaddl#1{: #1}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% QUOTE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\bigskip
{\ninepoint
\rightline{\it``Est enim fere ex pulcherrimis qu\ae\ solvere desiderem.''}
\rightline{\rm (It is among the most beautiful I could desire to solve.) \cite{N76}}}
% This is the second letter, the epistola posterior, of Newton to Leibniz.
% It is in [H. W. Turnbull (ed), The Correspondence of Isaac Newton, Vol. II,
% 1676--1687, Cambridge University Press (for the Royal Society); 1960].
\vskip-\baselineskip
%************************* ABSTRACT **********************************
\abstract{Starting with a novel definition of divided differences, this
essay derives and discusses the basic properties of, and facts about,
(univariate) divided differences.}
%******************************* BODY *********************************
\sect{Introduction and basic facts}

While there are several ways to think of divided differences, including the
one suggested by their very name, the most efficient way is as the
coefficients in a Newton form. This form provides an efficient
representation of Hermite interpolants.

Let $\Pi\subset(\FF\to\FF)$ be the linear space of polynomials in one real
($\FF=\RR$) or complex ($\FF=\CC$) variable, and let $\Piln$ be the subspace
of polynomials of degree $<n$. Let $t:=(t_1,t_2,\ldots)$ be a sequence of
\dword{sites} in $\FF$, and set
$$\nwt{n,t}\;:=\;\prod_{j=1}^n(\cdot-t_j),\quad n\ge0,$$
with $\nwt{0,t}:=1$. Here and below, $t_{1:j}:=(t_1,\ldots,t_j)$, and
$j=1{:}n$ is short for $j=1,\ldots,n$.

Since $\deg\nwt{j-1,t}=j-1$, the sequence $(\nwt{j-1,t}: j=1,2,\ldots)$ is a
basis for $\Pi$, i.e., every $p\in\Pi$ can be written in exactly one way in
the \dword{Newton form}
$$p\;=\;\sum_{j\ge1}\nwt{j-1,t}\,c(j), \label{newtonform}$$
with only finitely many of the coefficients $c(j)$ nonzero. The $j$th
coefficient is, by definition, the \dword{divided difference of $p$ at
$t_{1:j}$}:
$$\dvd{t_{1:j}}p\;:=\;c(j).$$

Since all the terms in (\recall{newtonform}) with $j>n$ have $\nwt{n,t}$ as a
factor, we can write
$$ p\;=\; p_n + \nwt{n,t} q_n, \label{defpnqn}$$
with $q_n$ a polynomial we will look at in the example below, and with
$$p_n\;:=\;\sum_{j=1}^n\nwt {j-1,t}c(j)$$
a polynomial of degree $<n$. By (\recall{defpnqn}), $p-p_n$ vanishes at each
of the sites $t_{1:n}$, a $k$-fold site being a $k$-fold zero. In other
words, $p_n$ is the unique polynomial of degree $<n$ that agrees with $p$ at
$t_{1:n}$ counting multiplicities, i.e., the \dword{Hermite interpolant} to
$p$ at $t_{1:n}$. In particular, $p_n$ does not depend on the order in which
the sites $t_{1:n}$ appear, hence neither does its coefficient of
$(){}^{n-1}$: the divided difference $\dvd{t_{1:n}}p$ is a symmetric
function of $t_1,\ldots,t_n$.

\nextex To identify the polynomial $q_n$ in (\recall{defpnqn}), let
$x\in\FF$ be arbitrary and use (\recall{defpnqn}) for the extended sequence
$(t_1,\ldots,t_n,x,\ldots)$:
$$p\;=\;p_n + \nwt{n,t}\,\dvd{t_{1:n},x}p + \nwt{n,t}(\cdot-x)\,r$$
for some $r\in\Pi$. Comparison with (\recall{defpnqn}) gives
$q_n = \dvd{t_{1:n},x}p + (\cdot-x)r$, and evaluation at $x$ shows that
$q_n(x) = \dvd{t_{1:n},x}p$ for every $x$, i.e., $q_n = \dvd{t_{1:n},\cdot}\,p$.
On the other hand, directly from the Newton form (\recall{newtonform}),
$$\dvd{t_{1:n},\cdot}\,p \;=\; q_n \;=\;
\sum_{j>n}{\nwt{j-1,t}\over\nwt{n,t}}\dvd{t_{1:j}}p, \label{dvdasfunction}$$
and, since $\nwt{j-1,t}/\nwt{n,t} = \prod_{n<i<j}(\cdot-t_i)$, we recognize
the sum here as a Newton form with respect to the sequence $(t_j: j>n)$.
This provides us with the following \dword{basic divided difference
identity}:
$$ \dvd{t_{n+1:j}}\dvd{t_{1:n},\cdot} = \dvd{t_{1:j}},\quad j>n.
\label{compofdvds} $$
\endexample

Since, by the example, $q_n = \dvd{t_{1:n},\cdot}\,p$, (\recall{defpnqn})
becomes
$$p\;=\;\sum_{j=1}^n\nwt{j-1,t}\,\dvd{t_{1:j}}p \;+\;
\nwt{n,t}\,\dvd{t_{1:n},\cdot}\,p, \label{withremainder}$$
valid for every $p\in\Pi$ and every $n$.

For the special case $n=j-2$, the basic divided difference identity,
(\recall{compofdvds}), reads
$$ \dvd{t_{j-1:j}}\dvd{t_{1:j-2},\cdot}=\dvd{t_{1:j}},$$
or, perhaps more suggestively,
$$ \dvd{t_{j-1},\cdot}\dvd{t_{1:j-2},\cdot}=\dvd{t_{1:j-1},\cdot},$$
hence, by induction,
$$ \dvd{t_{j-1},\cdot}\dvd{t_{j-2},\cdot}\,\cdots\dvd{t_1,\cdot} \;=\;
\dvd{t_{1:j-1},\cdot}. \label{dvdisdvd}$$
Since, by (\recall{withremainder}) with $n=1$,
$\dvd{t_1,x}p = (p(x)-p(t_1))/(x-t_1)$ for $x\ne t_1$, a difference
quotient, (\recall{dvdisdvd}) says that $\dvd{t_{1:j}}$ is obtainable by
forming difference quotients $j-1$ times. This explains our calling
$\dvd{t_{1:j}}$ a `divided difference of order $j-1$'.

\proclaim Proposition \label{propcont}. For every $p\in\Pi$ and every $k$,
$\dvd{t_{1:k}}p$ is a continuous function of the sites $t_{1:k}\in\FF^k$.

\proof With $n:=1+\deg p$, $\dvd{t_{1:k}}p = 0$ for all $k>n$, hence
trivially continuous. As for $k\le n$, let
$$\Nwt{t,n}: \Fn\to\Piln: a\mapsto \sum_{j=1}^n \nwt{j-1,t}\,a(j)$$
be the column map of the Newton basis. Its matrix with respect to the power
basis $((){}^{j-1}: j=1{:}n)$ is unit upper triangular, with entries that
are polynomials in $t_{1:n}$, hence the same is true of its inverse.
Consequently, $\dvd{t_{1:k}}p = (\inv{\Nwt{t,n}}p)(k)$ is a polynomial in
the sites, therefore continuous. \endproof

Consider next the linear projector
$$P:\Pi\to\Piln: p\mapsto p_n$$
that drops from the Newton form (\recall{newtonform}) all terms of degree
$\ge n$, and, for $p\in\Pi$, the corresponding \dword{multiplication
operator}
$$M_p:\Piln\to\Piln: q\mapsto P(pq).$$
Since $\dvd{t_{1:i}}(\nwt{n,t}g)=0$ for $i\le n$ and any $g\in\Pi$, $P$
vanishes on $\nwt{n,t}\Pi$, hence $M_pM_q = M_{pq}$, i.e., $p\mapsto M_p$ is
an algebra homomorphism. From the Newton form for $p$ with respect to the
shifted sequence $(t_j,t_{j+1},\ldots)$, multiplied by $\nwt{j-1,t}$, we
conclude that
$$ P(p\,\nwt{j-1,t}) = \sum_{i=1}^n \nwt{i-1,t}\,\dvd{t_{j:i}}p,\quad
j=1{:}n$$
(with $\dvd{t_{j:i}}p:=0$ for $i<j$), at least when the $t_i$ are pairwise
distinct. In other words, the $j$th column of the matrix
$\widehat{M_p} = \inv{V}M_pV$ (which represents $M_p$ with respect to the
Newton basis $V:=\Nwt{t,n}$ for $\Piln$) has the entries
$$(\dvd{t_{j:i}}p: i=1{:}n) =
(0,\ldots,0,p(t_j),\dvd{t_j,t_{j+1}}p,\ldots,\dvd{t_{j{:}n}}p).$$
In particular, for $p=(){}^1$ the identity function,
$$\widehat{M_{(){}^1}} \;=\; A_{n,t}\;:=\;
\Matrix{t_1\cr 1&t_2\cr &\ddots&\ddots\cr &&1&t_n\cr},$$
while, $p\mapsto\widehat{M_p}$ being an algebra homomorphism,
$\widehat{M_p} = p(A_{n,t})$. By the continuity of the divided difference
(see Proposition \recall{propcont}), this implies

\proclaim Proposition \label{propopitz}: Opitz formula. For any $p\in\Pi$,
$$p(A_{n,t})\;=\;(\dvd{t_{j:i}}p: i,j=1{:}n).\label{opitzformula} $$

The remarkable identity (\recall{opitzformula}) is due to G. Opitz; see
\cite{O} which records a talk announced but not delivered. Opitz calls the
matrices $p(A_{n,t})$ \eword{Steigungsmatrizen} (`difference-quotient
matrices'). Surprisingly, Opitz explicitly excludes the possibility that
some of the $t_j$ might coincide. \cite{BR} ascribe
(\recall{opitzformula}) to Sylvester, but I have been unable to locate
anything like this formula in Sylvester's collected works.
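In computational terms, (\recall{dvdisdvd}) is the classical divided
difference table, and the Newton form is summed from the inside out. The
following Python sketch is an illustration added here, not part of the
essay: the function names are made up, and the sites are assumed pairwise
distinct so that plain difference quotients apply. It computes the
coefficients $\dvd{t_{1:j}}p$, $j=1{:}n$, from function values, then
reproduces $p$:

\medskip
    # Sketch: Newton coefficients via the divided difference table,
    # for pairwise distinct sites t[0..n-1] (0-based indexing here);
    # names are our own, not from the essay.
    def newton_coeffs(t, f):
        """Return [divdif(t[0:j+1])f : j=0..n-1] from values f[j]=f(t[j])."""
        c = list(f)
        n = len(t)
        for k in range(1, n):                 # k = order of the difference
            for j in range(n - 1, k - 1, -1): # update in place, high to low
                c[j] = (c[j] - c[j - 1]) / (t[j] - t[j - k])
        return c

    def newton_eval(t, c, z):
        """Evaluate sum_j w_{j-1,t}(z) c(j), from the inside out."""
        r = c[-1]
        for j in range(len(c) - 2, -1, -1):
            r = c[j] + (z - t[j]) * r
        return r

    t = [0.0, 1.0, 3.0, 4.0]
    p = lambda x: x**3 - 2*x + 1
    c = newton_coeffs(t, [p(x) for x in t])
    assert abs(newton_eval(t, c, 2.5) - p(2.5)) < 1e-12
\medskip

In exact arithmetic the final assertion holds by the uniqueness of the
Newton form; in floating point, it holds to roundoff.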
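The Opitz formula (\recall{opitzformula}) also invites a direct numerical
check. Here is a minimal sketch, again our illustration only (numpy
assumed, names made up, sites pairwise distinct): build the bidiagonal
$A_{n,t}$, evaluate $p(A_{n,t})$ for $p = (){}^3-2(){}^1+1$, and compare
each entry against the difference quotient recurrence:

\medskip
    # Sketch: check p(A_{n,t})(i,j) = divdif(t_{j:i})p entrywise,
    # for pairwise distinct sites; numpy and all names are our assumptions.
    import numpy as np

    t = np.array([0.0, 1.0, 3.0, 4.0])
    n = len(t)
    A = np.diag(t) + np.diag(np.ones(n - 1), -1)   # bidiagonal A_{n,t}

    p = lambda x: x**3 - 2*x + 1
    pA = np.linalg.matrix_power(A, 3) - 2*A + np.eye(n)  # p at the matrix

    def divdif(ts, f):
        """divdif(ts)f by difference quotients, ts pairwise distinct."""
        c = [f(v) for v in ts]
        for k in range(1, len(ts)):
            c = [(c[j+1] - c[j]) / (ts[j+k] - ts[j]) for j in range(len(c)-1)]
        return c[0]

    for i in range(n):
        for j in range(n):
            want = divdif(t[j:i+1], p) if i >= j else 0.0
            assert abs(pA[i, j] - want) < 1e-10
\medskip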
\nextex For the monomial $(){}^k$, Opitz' formula gives
$$\dvd{t_{1:n}}(){}^k = (A_{n,t})^k(n,1) = \sum_{\nu\in\{1{:}n\}^{k-1}}
A_{n,t}(n,\nu_{k-1}) A_{n,t}(\nu_{k-1},\nu_{k-2})\cdots A_{n,t}(\nu_1,1),$$
and, since $A_{n,t}$ is bidiagonal, the $\nu$th summand is zero unless the
sequence $(1,\nu_1,\ldots,\nu_{k-1},n)$ is increasing, with any strict
increase no bigger than 1, in which case the summand equals $t^\ga$, with
$\ga_j+1$ the multiplicity with which $j$ appears in that sequence,
$j=1{:}n$. This confirms that $\dvd{t_{1:n}}(){}^k = 0$ for $k<n-1$ and
that $\dvd{t_{1:n}}(){}^{n-1} = 1$, and shows, more generally, that
$\dvd{t_{1:n}}(){}^k$ is the sum of all monomials in $t_{1:n}$ of total
degree $k-n+1$, i.e., the complete homogeneous symmetric polynomial of that
degree. \endexample

\proclaim Proposition \label{prophorner}: Horner's method. Let $r\in\Pi$ be
given by its Newton form coefficients $c(j):=\dvd{t_{1:j}}r$, $j=1{:}n$,
with $n>\deg r$, $z\in \FF$, and
$$\eqalign{ \hatc(n)&:=c(n),\cr
\hatc(j)&:=c(j)+(z-t_j)\hatc(j+1),\quad j=n{-}1, n{-}2, \ldots,1.\cr}$$
\newpar Then $\hatc(1)=r(z)$. More than that,
$$ r = \sum_{j=1}^n \nwt{j-1,\hatt\ }\hatc(j), $$
with
$$ \hatt\;:=\;(z,t_1,t_2,\ldots). $$

\proof The first claim follows from the second, or else directly from the
fact that Horner's method is nothing but the evaluation, from the inside
out, of the nested expression
$$ r(z) = c(1) + (z-t_1)(c(2) + \cdots + (z-t_{n-2})(c(n-1) +
(z-t_{n-1})c(n))\cdots), $$
for which reason Horner's method is also known as \dword{Nested
Multiplication}. As to the second claim, note that
$\dvd{z,t_{1:n-1}}r = \dvd{t_{1:n}}r$ since $\deg r<n$, either divided
difference being the coefficient of $(){}^{n-1}$ in $r$. Further, by the
symmetry of the divided difference in its sites and the difference quotient
property,
$$\dvd{z,t_{1:j-1}}r = \dvd{t_{1:j}}r + (z-t_j)\,\dvd{z,t_{1:j}}r,$$
hence, by downward induction on $j$,
$\hatc(j) = \dvd{z,t_{1:j-1}}r = \dvd{\hatt_{1:j}}r$, $j=n,\ldots,1$, which
is the second claim. \endproof

For the divided difference of a product, there is the following
\dword{Leibniz formula} (see \cite{St39}).

\proclaim Proposition \label{propleibniz}. For any $f,g$ (smooth enough for
the following to make sense),
$$\dvd{t_{1:n}}(fg)\;=\;\sum_{j=1}^n \dvd{t_{1:j}}f\;\dvd{t_{j:n}}g.
\label{leibnizformula}$$

\nextex For the reciprocal $(){}^{-1}$ and $t_j\ne0$, all $j$,
$$\dvd{t_{1:n}}(){}^{-1} = (-1)^{n-1}/(t_1\cdots t_n).$$
Indeed, $\dvd{t_1}(){}^{-1} = 1/t_1$ while, for $n>1$, by
(\recall{leibnizformula}), $0= \dvd{t_{1:n}}((){}^{-1}(){}^1) =
\dvd{t_{1:n}}(){}^{-1}\,t_n + \dvd{t_{1:n-1}}(){}^{-1}$, hence
$\dvd{t_{1:n}}(){}^{-1} =-\dvd{t_{1:n-1}}(){}^{-1}/t_n$, and induction
finishes the proof. This implies the handy formula
$$\dvd{t_{1:n}}(z-\cdot)^{-1} = 1/\nwt{n,t}(z),\quad z\not=t_1,\ldots,t_n.
\label{chakalovone}$$
Therefore, with $\#\xi:=\#\{j: \xi = t_j, 1\le j\le n\}$ the multiplicity
with which $\xi$ occurs in the sequence $t_{1:n}$, and
$$ 1/\nwt{n,t}(z) =: \sum_{\xi\in t}\sum_{0\le \mu <\#\xi}{\mu!
A_{\xi\mu}\over(z-\xi)^{\mu+1}} $$
the partial fraction expansion of $1/\nwt{n,t}$, we obtain Chakalov's
expansion
$$ \dvd{t_{1:n}}f = \sum_{\xi\in t}\sum_{0\le \mu <\#\xi}A_{\xi\mu}D^\mu
f(\xi) \label{chakalovtwo} $$
(from \cite{Ch38}) for $f:=1/(z-\cdot)$ for arbitrary $z$ since
$D^\mu 1/(z-\cdot) = \mu!/(z-\cdot)^{\mu+1}$, hence also for any smooth
enough $f$, by the density of $\{1/(z-\cdot): z\in\FF\}$. This is an
illustration of the peculiar effectiveness of the formula
(\recall{chakalovone}), for the divided difference of $1/(z-\cdot)$, for
deriving and verifying divided difference identities. \endexample

\nextex When the $t_j$ are pairwise distinct, (\recall{chakalovtwo}) must
reduce to
$$\dvd{t_{1:n}}f = \sum_{j=1}^n f(t_j)/D\nwt{n,t}(t_j),
\label{dvdexplicit}$$
since this is readily seen to be the leading coefficient of the polynomial
of degree $<n$ that agrees with $f$ at the $n$ pairwise distinct sites
$t_{1:n}$: write that interpolant in Lagrange form and compare coefficients
of $(){}^{n-1}$. \endexample

\sect{The Genocchi-Hermite formula}

For smooth enough $f$ and arbitrary sites $x_0,\ldots,x_n$ in $\RR$, the
divided difference has the integral representation
$$\dvd{x_0,\ldots,x_n}f = \int_0^1\int_0^{s_1}\cdots\int_0^{s_{n-1}}
D^nf(x_0 + s_1(x_1-x_0) + \cdots + s_n(x_n-x_{n-1}))
\,\dd s_n\cdots\dd s_1, \label{genocchihermite}$$
the \dword{Genocchi-Hermite formula}. It shows, in particular, that the
divided difference depends continuously on its sites even as some of them
coalesce, and that $\dvd{x_0,\ldots,x_n}f\to D^nf(x)/n!$ as all the sites
approach the single site $x$.
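A quick plausibility check of (\recall{genocchihermite}): the domain of
integration, $\{1\ge s_1\ge\cdots\ge s_n\ge0\}$, has volume $1/n!$, so a
Monte Carlo estimate is the sample mean of the integrand at uniform draws
from that simplex, divided by $n!$. The sketch below is our illustration
only (numpy assumed, names made up); it takes $f=\exp$, for which
$D^nf=\exp$, and compares against the difference quotient recurrence at
pairwise distinct sites:

\medskip
    # Sketch (our illustration, numpy assumed): Monte Carlo check of the
    # Genocchi-Hermite formula, for f = exp so that D^n f = exp.
    import numpy as np
    from math import factorial

    rng = np.random.default_rng(0)
    x = np.array([0.0, 0.5, 1.0, 2.0])   # sites x_0,...,x_n, here n = 3
    n = len(x) - 1
    dx = np.diff(x)                       # increments x_i - x_{i-1}

    # uniform draws from {1 >= s_1 >= ... >= s_n >= 0} (volume 1/n!):
    # sort uniform variates in decreasing order
    s = -np.sort(-rng.random((200000, n)), axis=1)
    mc = np.exp(x[0] + s @ dx).mean() / factorial(n)

    def divdif(ts, f):                    # difference quotient recurrence
        c = [f(v) for v in ts]
        for k in range(1, len(ts)):
            c = [(c[j+1] - c[j]) / (ts[j+k] - ts[j]) for j in range(len(c)-1)]
        return c[0]

    print(mc, divdif(x, np.exp))          # agree to Monte Carlo accuracy
\medskip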
\cite{G69}, on the other hand, is explicitly concerned with a
\eword{representation formula for the divided difference}. However, the
`divided difference' he represents is the following:
$$ \dvd{x,x+h_1}\dvd{\cdot,\cdot+h_2}\cdots\dvd{\cdot,\cdot+h_n}
=(\Delta_{h_1}/h_1)\cdots(\Delta_{h_n}/h_n), $$
and for it he gets the representation
$$ \int_0^1\cdots\int_0^1 D^n f(x + h_1t_1 + \cdots + h_nt_n)
\,\dd t_1\cdots \dd t_n. $$
\cite{No:p.16} cites \cite{G78a} and \cite{G78b} as places where
formulations equivalent to the Genocchi-Hermite formula can be found. So
far, I have only been able to find \cite{G78b}. It is a letter to Hermite,
in which Genocchi brings, among other things, the above representation
formula to Hermite's attention, and refers to a paper of his in [Archives
de Grunert, t.~XLIX, 3e cahier] as containing a corresponding error formula
for Newton interpolation. He states that he, in continuing work, had
obtained such a representation also for Amp\`ere's \eword{fonctions
interpolatoires} (aka divided differences), and finishes with the formula
$$ \eqalign{ \int_0^1\cdots\int_0^1 s_1^{n-1} s_2^{n-2} \cdots s_{n-1}\,&\cr
&\kern-2.5truecm D^nf (x_0 + s_1(x_1-x_0) + \cdots + s_1 s_2 \cdots s_n
(x_n-x_{n-1})) \,\dd s_1\cdots \dd s_n\cr} $$
for $\dvd{x_0,\ldots,x_n}f$, and says that it is equivalent to the formula
$$ \dvd{x_0,\ldots,x_n}f = \int\cdots\int D^nf (s_0x_0 + s_1x_1 + \cdots +
s_nx_n) \,\dd s_1\cdots \dd s_n $$
in which the conditions $s_0+\cdots+s_n=1$, $s_i\ge0$, all $i$, are
imposed. \cite{St27:p.17f} proves the Genocchi-Hermite formula but calls it
Jensen's formula, because of \cite{J}.

\sect{Divided difference expansions of the divided difference}

By applying $\dvd{s_{1:m}}$ to both sides of the identity
$$ \dvd{\cdot} = \sum_{j=1}^n \nwt{j-1,t}\dvd{t_{1:j}} +
\nwt{n,t}\dvd{t_{1:n},\cdot}$$
obtained from (\recall{withremainder}), one obtains the expansion
$$ \dvd{s_{1:m}} = \sum_{j=m}^n \dvd{s_{1:m}}\nwt{j-1,t}\;\dvd{t_{1:j}} +
E(s,t),$$
where, by the Leibniz formula (\recall{leibnizformula}),
$$ \eqalign{ E(s,t) :=\;& \dvd{s_{1:m}}(\nwt{n,t} \dvd{t_{1:n},\cdot})\cr
=\;&\sum_{i=1}^m \dvd{s_{i:m}}\nwt{n,t}\; \dvd{s_{1:i},t_{1:n}}. } $$
But, following \cite{Fl} and with $p:=n-m$, one gets the better formula
$$ E(s,t) = \sum_{i=1}^m(s_i-t_{i+p}) (\dvd{s_{1:i}}\nwt{i+p,t})\;
\dvd{t_{1:i+p},s_{i:m}}, $$
in which all the divided differences on the right side are of the same
order, $n$. The proof (see \cite{B03}), by induction on $n$, uses the easy
consequence of (\recall{leibnizformula}) that
$$ (s_i-y)\dvd{s_{i:m}}f = \dvd{s_{i:m}}((\cdot-y)f)-\dvd{s_{i+1:m}}f.$$
The induction is anchored at $n=m$, for which the formula
$$ \dvd{s_{1:m}} - \dvd{t_{1:m}} =\sum_{i=1}^m (s_i-t_i)\dvd{s_{1:i},t_{i:m}} $$
can already be found in \cite{Ho}.
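This anchor formula is an identity valid for arbitrary $f$ and arbitrary
sites, so it, too, can be checked numerically. In the sketch below (our
illustration only; names made up, $f=\sin$ chosen arbitrarily, and all
sites kept pairwise distinct so the difference quotient recurrence
applies), {\tt divdif(s[:i+1] + t[i:], f)} computes
$\dvd{s_{1:i},t_{i:m}}f$:

\medskip
    # Sketch (our illustration): check of the anchor formula
    #   divdif(s_{1:m})f - divdif(t_{1:m})f
    #     = sum_i (s_i - t_i) divdif(s_{1:i}, t_{i:m})f
    # for pairwise distinct sites and f = sin.
    import math

    def divdif(ts, f):
        c = [f(v) for v in ts]
        for k in range(1, len(ts)):
            c = [(c[j+1] - c[j]) / (ts[j+k] - ts[j]) for j in range(len(c)-1)]
        return c[0]

    s = [0.1, 0.7, 1.3]
    t = [0.4, 0.9, 2.0]
    m = len(s)
    f = math.sin

    lhs = divdif(s, f) - divdif(t, f)
    rhs = sum((s[i] - t[i]) * divdif(s[:i+1] + t[i:], f) for i in range(m))
    assert abs(lhs - rhs) < 1e-12
\medskip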
\sect{Notation and nomenclature}

It is quite common in earlier literature to use the notation
$$[y_1,\ldots,y_j]$$
for the divided difference of order $j-1$ of data $((t_i,y_i): i=1{:}j)$.
This reflects the fact that divided differences were thought of as
convenient expressions in terms of the given data rather than as linear
functionals on some vector space of functions. The presently most common
notation for $\dvd{t_{1:j}}p = \dvd{t_1,\ldots,t_j}p$ is
$$p[t_1,\ldots,t_j]$$
(or, perhaps, $p(t_1,\ldots,t_j)$), which enlarges on the fact that
$\dvd{z}p = p(z)$ but becomes awkward when the divided difference is to be
treated as a linear functional. In that regard, the notation
$$[t_1,\ldots,t_j]p$$
is better, but suffers from the fact that the resulting notation
$$[t_1,\ldots,t_j]$$
for the linear functional itself conflicts with standard notations, such
as that for the matrix (or, more generally, the column map) with columns
$t_1,\ldots, t_j$, or, in the special case $j=2$, i.e.,
$$[t_1,t_2],$$
that for the closed interval with endpoints $t_1$ and $t_2$, or else for
the scalar product of the vectors $t_1$ and $t_2$. The notation
$$[t_1,\ldots,t_j;p]$$
does not suffer from this defect, as it leads to the notation
$[t_1,\ldots,t_j;\cdot]$ for the linear functional itself, though it
requires the reader not to mistakenly read that semicolon as yet another
comma. The notation $\divdif$ used in this essay was proposed by W. Kahan
some time ago (see, e.g., \cite{Ka}); it does not suffer from any of the
defects mentioned, and has the advantage of being literal (given that
$\gD$ is standard notation for a difference). Here is a \TeX\ macro for it:
\medskip
\def\lb{\char'173} \def\rb{\char'175}
{\let\bs\relax \def\bs{\char'134}
\centerline{\vbox{\tt\noindent
\bs def\bs divdif\lb\bs mathord\lb\bs kern.43em\bs vrule width.6pt height5.6pt
\newline\phantom{\bs def\bs divdif\lb\bs} depth-.28pt \bs kern-.43em\bs
Delta\rb\rb}}}
\medskip

Although divided differences are rightly associated with Newton (because of
\cite{N87:Book iii, Lemma v, Case ii} and \cite{N11}; for a detailed
discussion, including facsimiles of the originals and their translations,
see \cite{Fra18}, \cite{Fra19}, \cite{Fra27}), the term `divided
difference' was, according to \cite{WR:p.20}, first used in \cite{M:p.550},
even though, by then, Amp\`ere \cite{A} had called it \dword{fonction
interpolaire}, and this is the term used in the French literature of the
1800s. To be sure, in \cite{N11}, Newton repeatedly uses the term
``differentia sic divisa''.

%*************************** ACKNOWLEDGEMENTS ***********************
\Acknowledgments{On 20jun05, the change $i(1)$ {\tt -->} $i(0)$ was made in
the formula in Corollary 30. Thank you, Michael Floater!}

%******************************* REFERENCES *************************
\def\AM{Appl.\ Math.} % Applications of Mathematics, formerly Aplikace Matematiky
\def\CRASP{C. R. Acad.\ Sci.\ Paris}
\def\JAM{J. Analyse Math.}
\def\JAT{J. Approx.\ Theory}
\def\JRAM{J. reine angew.\ Math.}
\def\SJNA{SIAM J. Numer.\ Anal.}
\def\ZAMM{Z. Angew.\ Math.\ Mech.}
\References

%Ampere26 % carl 23jun03
\rhl{A:Amp\`ere 1826}
\refJ Amp\`ere, Andr\'e-Marie; Essai sur un nouveau mode d'exposition des
principes du calcul diff\'erentiel, du calcul aux diff\'erences et de
l'interpolation des suites, consid\'er\'ees comme d\'erivant d'une source
commune; Annales de math\'ematiques pures et appliqu\'ees de Gergonne
(Paris, Bachelier, in $4^o$); 16; 1826; 329--349;
% divided differences alias fonctions interpolaires

%Boor72 % carl
\rhl{B72:de Boor 1972}
\refJ de Boor, C.; On calculating with $B$-splines; \JAT; 6; 1972; 50--62;

%Boor03a % carl 20jan03
\rhl{B03:de Boor 2003a}
\refJ de Boor, C.; A divided difference expansion of a divided difference;
\JAT; 122; 2003a; 10--12;
% ms: 2002
% erratum \JAT: 124(1): 2003: 137:

%Boor03b % carl 20nov03
\rhl{B03b:de Boor 2003b}
\refJ de Boor, C.; A Leibniz formula for multivariate divided differences;
\SJNA; 41(3); 2003b; 856--868;

%BoorPinkus03
\rhl{BP:de Boor et al. 2003}
\refJ de Boor, C., Pinkus, A.; The B-spline recurrence relations of
Chakalov and of Popoviciu; \JAT; 124; 2003; 115--123;

%BulirschRutishauser68b % carl 20nov03
\rhl{BR:Bulirsch et al. 1968}
\refQ Bulirsch, R., Rutishauser, H.; Interpolation und gen\"aherte
Quadratur; (Mathematische Hilfsmittel des Ingenieurs, Teil III), R. Sauer \& I.
Szab\'o (eds.), Grundlehren vol.~141, Springer-Verlag (Heidelberg); 1968;
232--319;

%Cauchy40 % carl 23jun03
\rhl{Ca:Cauchy 1840}
\refJ Cauchy, Augustin; Sur les fonctions interpolaires; \CRASP; 11; 1840;
775--789;
% Oeuvres (1) 5, Paris: 1885: 409--424:
% basic divided difference stuff, including a special case of the refinement
% formula of Popoviciu33=34a

%Chakalov34 % author 20jan03
\rhl{Ch34:Tchakaloff 1934}
\refJ Tchakaloff, L.; Sur la structure des ensembles lin\'eaires d\'efinis
par une certaine propri\'et\'e minimale; Acta Math.; 63; 1934; 77--97;
% looks for minimal sets for given (n+1)-sequence tau and given function class
% F, i.e., the smallest possible set E with the property that,
% for every \xi in E there is some f in F with divdif{tau}f = D^nf(\xi)/n! .

%Chakalov38a % carl 20jan03
\rhl{Ch38:Chakalov 1938}
\refJ Chakalov, L.; On a certain presentation of the Newton divided
differences in interpolation theory and its applications (in Bulgarian);
Annuaire Univ.\ Sofia, Fiz.\ Mat.\ Fakultet; 34; 1938; 353--394;
% first occurrence of the contour integral formula for B-splines,
% found later also by Meinardus74
% An extensive summary in German follows directly; see Chakalov38b

%CurrySchoenberg66
\rhl{CS:Curry \& Schoenberg 1966}
\refJ Curry, H. B., Schoenberg, I. J.; On P\'olya frequency functions IV:
the fundamental spline functions and their limits; \JAM; 17; 1966; 71--107;

%ErdosTuran40 % carl 20nov03
\rhl{ET:Erd\H os et al. 1940}
\refJ Erd\H os, P., Tur\'an, P.; On interpolation, III. Interpolatory
theory of polynomials; Ann.\ of Math.\ (2); 41; 1940; 510--553;

% \sum_{j>n} \psi^+_{j-n+1,j-1}(x_j-x_{j-n})\dvd{x_{j-n},\ldots,x_j}
% with \psi^+_{r,s}:= (\cdot-x_r)_+\cdots(\cdot-x_s)_+, for the interpolant
% at the sequence x_1< x_2< \cdots that agrees, on [x_j\fromto x_{j+1}], with
% the polynomial interpolant at x_{j-n+1},\ldots,x_j. Evaluation of this
% formula at x_i provides a unique description of the linear functional [x_i]
% in terms of the linear functionals \dvd{x_{max(1,j-n+1)},\ldots,x_{j-1}},
% j=1,2,\ldots, hence Popoviciu's name `transformation formula' for the
% latter. Also proves the `mean-value formula for divided differences':
% \dvd{t_0,\ldots,t_n} = \sum_j a_j(t,s)\dvd{s_j,\ldots,s_{j+n}}
% for any monotone refinement s of monotone t, with a_j(t,s)\ge0 and summing
% to 1.

%Schwarz81 % carlrefs
\rhl{Sc:Schwarz 1881-2}
\refJ Schwarz, H. A.; D\'emonstration \'el\'ementaire d'une propri\'et\'e
fondamentale des fonctions interpolaires; Atti Accad.\ Sci.\ Torino; 17;
1881-2; 740--742;

%Steffensen27a % shayne 26oct95
\rhl{St27:Steffensen 1927}
\refB Steffensen, J. F.; Interpolation; Williams and Wilkins (Baltimore);
1927;
% a good reference for classical results concerning Lagrange and Hermite
% interpolation, and the corresponding differentiation and integration rules

%Steffensen39a % MR 16aug02
\rhl{St39:Steffensen 1939}
\refJ Steffensen, J. F.; Note on divided differences; Danske Vid.\ Selsk.\
Math.-Fys.\ Medd.; 17(3); 1939; 1--12;
% MR 42.3X
% the Leibniz formula for divided differences; but see Popoviciu33=34a.

%Taylor15
\rhl{T:Taylor 1715}
\refB Taylor, Brook; Methodus incrementorum directa et inversa; London
(England); 1715;

%WhittakerRobinson37 % carlrefs
\rhl{WR:Whittaker et al. 1937}
\refB Whittaker, E. T., Robinson, G.; The Calculus of Observations, A
Treatise on Numerical Mathematics, 2nd ed.; Blackie (Glasgow, UK); 1937;

%******************************* ADDRESS *************************
% Beginning of addresses
{
\bigskip\obeylines
C. de Boor
P.O. Box 1076
Eastsound WA 98245 USA
{\tt deboor@cs.wisc.edu}
{\tt http://www.cs.wisc.edu/}$\sim${\tt deboor/}
}
% End of addresses
\bye