      SUBROUTINE VARPRO (L, NL, N, NMAX, LPP2, IV, T, Y, W, ADA, A,
     X                   IPRINT, ALF, BETA, IERR)
C
C        GIVEN A SET OF N OBSERVATIONS, CONSISTING OF VALUES Y(1),
C     Y(2), ..., Y(N) OF A DEPENDENT VARIABLE Y, WHERE Y(I)
C     CORRESPONDS TO THE IV INDEPENDENT VARIABLE(S) T(I,1), T(I,2),
C     ..., T(I,IV), VARPRO ATTEMPTS TO COMPUTE A WEIGHTED LEAST
C     SQUARES FIT TO A FUNCTION ETA (THE 'MODEL') WHICH IS A LINEAR
C     COMBINATION
C
C                                  L
C        ETA(ALF, BETA; T)  =     SUM  BETA * PHI (ALF; T) + PHI   (ALF; T)
C                                 J=1      J      J             L+1
C
C     OF NONLINEAR FUNCTIONS PHI(J) (E.G., A SUM OF EXPONENTIALS AND/
C     OR GAUSSIANS).  THAT IS, DETERMINE THE LINEAR PARAMETERS
C     BETA(J) AND THE VECTOR OF NONLINEAR PARAMETERS ALF BY
C     MINIMIZING
C
C                         2     N                                 2
C          NORM(RESIDUAL)   =  SUM  W * (Y  - ETA(ALF, BETA; T ))  .
C                              I=1   I    I                   I
C
C     THE (L+1)-ST TERM IS OPTIONAL, AND IS USED WHEN IT IS DESIRED
C     TO FIX ONE OR MORE OF THE BETA'S (RATHER THAN LET THEM BE
C     DETERMINED).  VARPRO REQUIRES FIRST DERIVATIVES OF THE PHI'S.
C
C   NOTES:
C
C        A)  THE ABOVE PROBLEM IS ALSO REFERRED TO AS 'MULTIPLE
C     NONLINEAR REGRESSION'.  FOR USE IN STATISTICAL ESTIMATION,
C     VARPRO RETURNS THE RESIDUALS, THE COVARIANCE MATRIX OF THE
C     LINEAR AND NONLINEAR PARAMETERS, AND THE ESTIMATED VARIANCE OF
C     THE OBSERVATIONS.
C
C        B)  AN ETA OF THE ABOVE FORM IS CALLED 'SEPARABLE'.  THE
C     CASE OF A NONSEPARABLE ETA CAN BE HANDLED BY SETTING L = 0
C     AND USING PHI(L+1).
C
C        C)  VARPRO MAY ALSO BE USED TO SOLVE LINEAR LEAST SQUARES
C     PROBLEMS (IN THAT CASE NO ITERATIONS ARE PERFORMED).  SET
C     NL = 0.
C
C        D)  THE MAIN ADVANTAGE OF VARPRO OVER OTHER LEAST SQUARES
C     PROGRAMS IS THAT NO INITIAL GUESSES ARE NEEDED FOR THE LINEAR
C     PARAMETERS.  NOT ONLY DOES THIS MAKE IT EASIER TO USE, BUT IT
C     OFTEN LEADS TO FASTER CONVERGENCE.
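C
C        EXAMPLE.  A TYPICAL SEPARABLE MODEL IN ONE INDEPENDENT
C     VARIABLE T IS A CONSTANT PLUS TWO DECAYING EXPONENTIALS,
C
C        ETA(ALF, BETA; T)  =  BETA(1)  +  BETA(2)*EXP(-ALF(1)*T)
C                                       +  BETA(3)*EXP(-ALF(2)*T),
C
C     FOR WHICH L = 3, NL = 2, IV = 1, AND PHI(L+1) IS NOT USED.
C     THIS MODEL IS USED AGAIN BELOW TO ILLUSTRATE INC AND LPP2, AND
C     A SKETCH OF A CORRESPONDING SUBROUTINE ADA IS GIVEN AT THE END
C     OF THIS FILE.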
C
C   DESCRIPTION OF PARAMETERS
C
C     L        NUMBER OF LINEAR PARAMETERS BETA (MUST BE .GE. 0).
C     NL       NUMBER OF NONLINEAR PARAMETERS ALF (MUST BE .GE. 0).
C     N        NUMBER OF OBSERVATIONS.  N MUST BE GREATER THAN L + NL
C              (I.E., THE NUMBER OF OBSERVATIONS MUST EXCEED THE
C              NUMBER OF PARAMETERS).
C     IV       NUMBER OF INDEPENDENT VARIABLES T.
C     T        REAL N BY IV MATRIX OF INDEPENDENT VARIABLES.  T(I, J)
C              CONTAINS THE VALUE OF THE I-TH OBSERVATION OF THE J-TH
C              INDEPENDENT VARIABLE.
C     Y        N-VECTOR OF OBSERVATIONS, ONE FOR EACH ROW OF T.
C     W        N-VECTOR OF NONNEGATIVE WEIGHTS.  SHOULD BE SET TO 1'S
C              IF WEIGHTS ARE NOT DESIRED.  IF VARIANCES OF THE
C              INDIVIDUAL OBSERVATIONS ARE KNOWN, W(I) SHOULD BE SET
C              TO 1./VARIANCE(I).
C     INC      NL X (L+1) INTEGER INCIDENCE MATRIX.  INC(K, J) = 1 IF
C              NONLINEAR PARAMETER ALF(K) APPEARS IN THE J-TH
C              FUNCTION PHI(J).  (THE PROGRAM SETS ALL OTHER INC(K, J)
C              TO ZERO.)  IF PHI(L+1) IS INCLUDED IN THE MODEL, THE
C              APPROPRIATE ELEMENTS OF THE (L+1)-ST COLUMN SHOULD BE
C              SET TO 1'S.  INC IS NOT NEEDED WHEN L = 0 OR NL = 0.
C              CAUTION:  THE DECLARED ROW DIMENSION OF INC (IN ADA)
C              MUST CURRENTLY BE SET TO 12.  SEE 'RESTRICTIONS' BELOW.
C     NMAX     THE DECLARED ROW DIMENSION OF THE MATRICES A AND T.
C              IT MUST BE AT LEAST MAX(N, 2*NL+3).
C     LPP2     L+P+2, WHERE P IS THE NUMBER OF ONES IN THE MATRIX INC.
C              THE DECLARED COLUMN DIMENSION OF A MUST BE AT LEAST
C              LPP2.  (IF L = 0, SET LPP2 = NL+2.  IF NL = 0, SET
C              LPP2 = L+2.)
C     A        REAL MATRIX OF SIZE MAX(N, 2*NL+3) BY L+P+2.  ON INPUT
C              IT CONTAINS THE PHI(J)'S AND THEIR DERIVATIVES (SEE
C              BELOW).  ON OUTPUT, THE FIRST L+NL ROWS AND COLUMNS OF
C              A WILL CONTAIN AN APPROXIMATION TO THE (WEIGHTED)
C              COVARIANCE MATRIX AT THE SOLUTION (THE FIRST L ROWS
C              CORRESPOND TO THE LINEAR PARAMETERS, THE LAST NL TO THE
C              NONLINEAR ONES), COLUMN L+NL+1 WILL CONTAIN THE
C              WEIGHTED RESIDUALS (Y - ETA), A(1, L+NL+2) WILL CONTAIN
C              THE (EUCLIDEAN) NORM OF THE WEIGHTED RESIDUAL, AND
C              A(2, L+NL+2) WILL CONTAIN AN ESTIMATE OF THE (WEIGHTED)
C              VARIANCE OF THE OBSERVATIONS,
C              NORM(RESIDUAL)**2/(N - L - NL).
C     IPRINT   INPUT INTEGER CONTROLLING PRINTED OUTPUT.  IF IPRINT IS
C              POSITIVE, THE NONLINEAR PARAMETERS, THE NORM OF THE
C              RESIDUAL, AND THE MARQUARDT PARAMETER WILL BE OUTPUT
C              EVERY IPRINT-TH ITERATION (AND INITIALLY, AND AT THE
C              FINAL ITERATION).  THE LINEAR PARAMETERS WILL BE
C              PRINTED AT THE FINAL ITERATION.  ANY ERROR MESSAGES
C              WILL ALSO BE PRINTED.  (IPRINT = 1 IS RECOMMENDED AT
C              FIRST.)  IF IPRINT = 0, ONLY THE FINAL QUANTITIES WILL
C              BE PRINTED, AS WELL AS ANY ERROR MESSAGES.  IF
C              IPRINT = -1, NO PRINTING WILL BE DONE.  THE USER IS
C              THEN RESPONSIBLE FOR CHECKING THE PARAMETER IERR FOR
C              ERRORS.
C     ALF      NL-VECTOR OF ESTIMATES OF NONLINEAR PARAMETERS (INPUT).
C              ON OUTPUT IT WILL CONTAIN OPTIMAL VALUES OF THE
C              NONLINEAR PARAMETERS.
C     BETA     L-VECTOR OF LINEAR PARAMETERS (OUTPUT ONLY).
C     IERR     INTEGER ERROR FLAG (OUTPUT):
C              .GT. 0 - SUCCESSFUL CONVERGENCE, IERR IS THE NUMBER OF
C                   ITERATIONS TAKEN.
C              -1   TERMINATED FOR TOO MANY ITERATIONS.
C              -2   TERMINATED FOR ILL-CONDITIONING (MARQUARDT
C                   PARAMETER TOO LARGE).  ALSO SEE IERR = -8 BELOW.
C              -4   INPUT ERROR IN PARAMETER N, L, NL, LPP2, OR NMAX.
C              -5   INC MATRIX IMPROPERLY SPECIFIED, OR P DISAGREES
C                   WITH LPP2.
C              -6   A WEIGHT WAS NEGATIVE.
C              -7   'CONSTANT' COLUMN WAS COMPUTED MORE THAN ONCE.
C              -8   CATASTROPHIC FAILURE - A COLUMN OF THE A MATRIX
C                   HAS BECOME ZERO.  SEE 'CONVERGENCE FAILURES'
C                   BELOW.
C
C              (IF IERR .LE. -4, THE LINEAR PARAMETERS, COVARIANCE
C              MATRIX, ETC. ARE NOT RETURNED.)
C
C   SUBROUTINES REQUIRED
C
C        NINE SUBROUTINES, VPDPA, VPFAC1, VPFAC2, VPBSOL, VPPOST,
C     VPCOV, VPNORM, VPINIT, AND VPERR ARE PROVIDED.  IN ADDITION,
C     THE USER MUST PROVIDE A SUBROUTINE (CORRESPONDING TO THE
C     ARGUMENT ADA) WHICH, GIVEN ALF, WILL EVALUATE THE FUNCTIONS
C     PHI(J) AND THEIR PARTIAL DERIVATIVES D PHI(J)/D ALF(K), AT THE
C     SAMPLE POINTS T(I).  THIS ROUTINE MUST BE DECLARED 'EXTERNAL'
C     IN THE CALLING PROGRAM.  ITS CALLING SEQUENCE IS
C
C        SUBROUTINE ADA (L+1, NL, N, NMAX, LPP2, IV, A, INC, T, ALF,
C                        ISEL)
C
C     THE USER SHOULD MODIFY THE EXAMPLE SUBROUTINE 'ADA' (GIVEN
C     ELSEWHERE) FOR HIS OWN FUNCTIONS.
C
C        THE VECTOR SAMPLED FUNCTIONS PHI(J) SHOULD BE STORED IN THE
C     FIRST N ROWS AND FIRST L+1 COLUMNS OF THE MATRIX A, I.E.,
C     A(I, J) SHOULD CONTAIN PHI(J, ALF; T(I,1), T(I,2), ...,
C     T(I,IV)), I = 1, ..., N; J = 1, ..., L (OR L+1).  THE (L+1)-ST
C     COLUMN OF A CONTAINS PHI(L+1) IF PHI(L+1) IS IN THE MODEL,
C     OTHERWISE IT IS RESERVED FOR WORKSPACE.  THE 'CONSTANT'
C     FUNCTIONS (THESE ARE FUNCTIONS PHI(J) WHICH DO NOT DEPEND UPON
C     ANY NONLINEAR PARAMETERS ALF, E.G., T(I)**J) (IF ANY) MUST
C     APPEAR FIRST, STARTING IN COLUMN 1.  THE COLUMN N-VECTORS OF
C     NONZERO PARTIAL DERIVATIVES D PHI(J) / D ALF(K) SHOULD BE
C     STORED SEQUENTIALLY IN THE MATRIX A IN COLUMNS L+2 THROUGH
C     L+P+1.  THE ORDER IS
C
C        D PHI(1)   D PHI(2)         D PHI(L)   D PHI(L+1)   D PHI(1)
C        --------,  --------, ...,   --------,  ----------,  --------,
C        D ALF(1)   D ALF(1)         D ALF(1)   D ALF(1)     D ALF(2)
C
C        D PHI(2)        D PHI(L+1)         D PHI(1)         D PHI(L+1)
C        --------, ..., ----------, ...,   ---------, ...,   ----------,
C        D ALF(2)        D ALF(2)           D ALF(NL)         D ALF(NL)
C
C     OMITTING COLUMNS OF DERIVATIVES WHICH ARE ZERO, AND OMITTING
C     PHI(L+1) COLUMNS IF PHI(L+1) IS NOT IN THE MODEL.  NOTE THAT
C     THE LINEAR PARAMETERS BETA ARE NOT USED IN THE MATRIX A.
C     COLUMN L+P+2 IS RESERVED FOR WORKSPACE.
C
C        THE CODING OF ADA SHOULD BE ARRANGED SO THAT:
C
C     ISEL = 1 (WHICH OCCURS THE FIRST TIME ADA IS CALLED) MEANS:
C              A.  FILL IN THE INCIDENCE MATRIX INC
C              B.  STORE ANY CONSTANT PHI'S IN A.
C              C.  COMPUTE NONCONSTANT PHI'S AND PARTIAL DERIVATIVES.
C          = 2 MEANS COMPUTE ONLY THE NONCONSTANT FUNCTIONS PHI
C          = 3 MEANS COMPUTE ONLY THE DERIVATIVES
C
C     (WHEN THE PROBLEM IS LINEAR (NL = 0) ONLY ISEL = 1 IS USED,
C     AND DERIVATIVES ARE NOT NEEDED.)
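C
C        FOR THE TWO-EXPONENTIAL EXAMPLE ABOVE (PHI(1) = 1,
C     PHI(2) = EXP(-ALF(1)*T), PHI(3) = EXP(-ALF(2)*T), L = 3,
C     NL = 2), ONLY INC(1, 2) = 1 AND INC(2, 3) = 1, SO P = 2 AND
C     LPP2 = L+P+2 = 7, AND THE COLUMNS OF A ARE USED AS FOLLOWS --
C
C        COLUMN 1       PHI(1) = 1.  (CONSTANT FUNCTION, PLACED FIRST)
C        COLUMNS 2, 3   PHI(2), PHI(3)
C        COLUMN 4       RESERVED (PHI(L+1) IS NOT IN THE MODEL)
C        COLUMN 5       D PHI(2) / D ALF(1)
C        COLUMN 6       D PHI(3) / D ALF(2)
C        COLUMN 7       RESERVED FOR WORKSPACE.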
C
C   RESTRICTIONS
C
C        THE SUBROUTINES VPDPA, VPINIT (AND ADA) CONTAIN THE LOCALLY
C     DIMENSIONED MATRIX INC, WHOSE DIMENSIONS ARE CURRENTLY SET FOR
C     MAXIMA OF L+1 = 8, NL = 12.  THEY MUST BE CHANGED FOR LARGER
C     PROBLEMS.  DATA PLACED IN ARRAY A IS OVERWRITTEN ('DESTROYED').
C     DATA PLACED IN ARRAYS T, Y AND INC IS LEFT INTACT.  THE PROGRAM
C     RUNS IN WATFIV, EXCEPT WHEN L = 0 OR NL = 0.
C
C        IT IS ASSUMED THAT THE MATRIX PHI(J, ALF; T(I)) HAS FULL
C     COLUMN RANK.  THIS MEANS THAT THE FIRST L COLUMNS OF THE MATRIX
C     A MUST BE LINEARLY INDEPENDENT.
C
C        OPTIONAL NOTE:  AS WILL BE NOTED FROM THE SAMPLE SUBPROGRAM
C     ADA, THE DERIVATIVES D PHI(J)/D ALF(K) (ISEL = 3) MUST BE
C     COMPUTED INDEPENDENTLY OF THE FUNCTIONS PHI(J) (ISEL = 2),
C     SINCE THE FUNCTION VALUES ARE OVERWRITTEN AFTER ADA IS CALLED
C     WITH ISEL = 2.  THIS IS DONE TO MINIMIZE STORAGE, AT THE
C     POSSIBLE EXPENSE OF SOME RECOMPUTATION (SINCE THE FUNCTIONS AND
C     DERIVATIVES FREQUENTLY HAVE SOME COMMON SUBEXPRESSIONS).  TO
C     REDUCE THE AMOUNT OF COMPUTATION AT THE EXPENSE OF SOME
C     STORAGE, CREATE A MATRIX B OF DIMENSION NMAX BY L+1 IN ADA, AND
C     AFTER THE COMPUTATION OF THE PHI'S (ISEL = 2), COPY THE VALUES
C     INTO B.  THESE VALUES CAN THEN BE USED TO CALCULATE THE
C     DERIVATIVES (ISEL = 3).  (THIS MAKES USE OF THE FACT THAT WHEN
C     A CALL TO ADA WITH ISEL = 3 FOLLOWS A CALL WITH ISEL = 2, THE
C     ALF'S ARE THE SAME.)
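C
C        FOR EXAMPLE, IN THE TWO-EXPONENTIAL MODEL ABOVE, ADA MIGHT
C     SAVE THE EXPONENTIALS IN A LOCAL NMAX BY L+1 ARRAY B,
C
C        AFTER ISEL = 2 --        DO 10 I = 1, N
C                                 DO 10 J = 2, 3
C                              10 B(I, J) = A(I, J)
C
C        AND FOR ISEL = 3 --      DO 20 I = 1, N
C                                 A(I, 5) = -T(I, 1)*B(I, 2)
C                              20 A(I, 6) = -T(I, 1)*B(I, 3)
C
C     SO THAT EACH EXPONENTIAL IS EVALUATED ONLY ONCE PER ALF.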
C
C        TO CONVERT TO OTHER MACHINES, CHANGE THE OUTPUT UNIT IN THE
C     DATA STATEMENTS IN VARPRO, VPDPA, VPPOST, AND VPERR.  THE
C     PROGRAM HAS BEEN CHECKED FOR PORTABILITY BY THE BELL LABS PFORT
C     VERIFIER.  FOR MACHINES WITHOUT DOUBLE PRECISION HARDWARE, IT
C     MAY BE DESIRABLE TO CONVERT TO SINGLE PRECISION.  THIS CAN BE
C     DONE BY CHANGING (A) THE DECLARATIONS 'DOUBLE PRECISION' TO
C     'REAL', (B) THE PATTERN '.D' TO '.E' IN THE 'DATA' STATEMENT IN
C     VARPRO, (C) DSIGN, DSQRT AND DABS TO SIGN, SQRT AND ABS,
C     RESPECTIVELY, AND (D) DEXP TO EXP IN THE SAMPLE PROGRAMS ONLY.
C
C   NOTE ON INTERPRETATION OF COVARIANCE MATRIX
C
C        FOR USE IN STATISTICAL ESTIMATION (MULTIPLE NONLINEAR
C     REGRESSION) VARPRO RETURNS THE COVARIANCE MATRIX OF THE LINEAR
C     AND NONLINEAR PARAMETERS.  THIS MATRIX WILL BE USEFUL ONLY IF
C     THE USUAL STATISTICAL ASSUMPTIONS HOLD:  AFTER WEIGHTING, THE
C     ERRORS IN THE OBSERVATIONS ARE INDEPENDENT AND NORMALLY
C     DISTRIBUTED, WITH MEAN ZERO AND THE SAME VARIANCE.  IF THE
C     ERRORS DO NOT HAVE MEAN ZERO (OR ARE UNKNOWN), THE PROGRAM WILL
C     ISSUE A WARNING MESSAGE (UNLESS IPRINT .LT. 0) AND THE
C     COVARIANCE MATRIX WILL NOT BE VALID.  IN THAT CASE, THE MODEL
C     SHOULD BE ALTERED TO INCLUDE A CONSTANT TERM (SET PHI(1) = 1.).
C
C        NOTE ALSO THAT, IN ORDER FOR THE USUAL ASSUMPTIONS TO HOLD,
C     THE OBSERVATIONS MUST ALL BE OF APPROXIMATELY THE SAME
C     MAGNITUDE (IN THE ABSENCE OF INFORMATION ABOUT THE ERROR OF
C     EACH OBSERVATION), OTHERWISE THE VARIANCES WILL NOT BE THE
C     SAME.  IF THE OBSERVATIONS ARE NOT THE SAME SIZE, THIS CAN BE
C     CURED BY WEIGHTING.
C
C        IF THE USUAL ASSUMPTIONS HOLD, THE SQUARE ROOTS OF THE
C     DIAGONALS OF THE COVARIANCE MATRIX A GIVE THE STANDARD ERROR
C     S(I) OF EACH PARAMETER.  DIVIDING A(I,J) BY S(I)*S(J) YIELDS
C     THE CORRELATION MATRIX OF THE PARAMETERS.  PRINCIPAL AXES AND
C     CONFIDENCE ELLIPSOIDS CAN BE OBTAINED BY PERFORMING AN
C     EIGENVALUE/EIGENVECTOR ANALYSIS ON A.  ONE SHOULD CALL THE
C     EISPACK PROGRAM TRED2, FOLLOWED BY TQL2 (OR USE THE EISPAC
C     CONTROL PROGRAM).
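C
C        FOR EXAMPLE, AFTER A SUCCESSFUL RETURN ONE MIGHT COMPUTE
C
C        LNL = L + NL
C        DO 10 I = 1, LNL
C     10 S(I) = DSQRT(A(I, I))
C
C     AND TAKE A(I, J)/(S(I)*S(J)) AS THE (I, J) ENTRY OF THE
C     CORRELATION MATRIX (HERE S IS A USER-SUPPLIED WORK VECTOR OF
C     LENGTH AT LEAST L + NL).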
C
C   CONVERGENCE FAILURES
C
C        IF CONVERGENCE FAILURES OCCUR, FIRST CHECK FOR INCORRECT
C     CODING OF THE SUBROUTINE ADA.  CHECK ESPECIALLY THE ACTION OF
C     ISEL, AND THE COMPUTATION OF THE PARTIAL DERIVATIVES.  IF THESE
C     ARE CORRECT, TRY SEVERAL STARTING GUESSES FOR ALF.  IF ADA
C     IS CODED CORRECTLY, AND IF ERROR RETURNS IERR = -2 OR -8
C     PERSISTENTLY OCCUR, THIS IS A SIGN OF ILL-CONDITIONING, WHICH
C     MAY BE CAUSED BY SEVERAL THINGS.  ONE IS POOR SCALING OF THE
C     PARAMETERS; ANOTHER IS AN UNFORTUNATE INITIAL GUESS FOR THE
C     PARAMETERS; STILL ANOTHER IS A POOR CHOICE OF THE MODEL.
C
C   ALGORITHM
C
C        THE RESIDUAL R IS MODIFIED TO INCORPORATE, FOR ANY FIXED
C     ALF, THE OPTIMAL LINEAR PARAMETERS FOR THAT ALF.  IT IS THEN
C     POSSIBLE TO MINIMIZE ONLY ON THE NONLINEAR PARAMETERS.  AFTER
C     THE OPTIMAL VALUES OF THE NONLINEAR PARAMETERS HAVE BEEN
C     DETERMINED, THE LINEAR PARAMETERS CAN BE RECOVERED BY LINEAR
C     LEAST SQUARES TECHNIQUES (SEE REF. 1).
C
C        THE MINIMIZATION IS BY A MODIFICATION OF OSBORNE'S (REF. 3)
C     MODIFICATION OF THE LEVENBERG-MARQUARDT ALGORITHM.  INSTEAD OF
C     SOLVING THE NORMAL EQUATIONS WITH MATRIX
C
C            T            2
C          (J J  +  NU * D ),     WHERE  J = D(ETA)/D(ALF),
C
C     STABLE ORTHOGONAL (HOUSEHOLDER) REFLECTIONS ARE USED ON A
C     MODIFICATION OF THE MATRIX
C
C          (   J  )
C          (------) ,
C          ( NU*D )
C
C     WHERE D IS A DIAGONAL MATRIX CONSISTING OF THE LENGTHS OF THE
C     COLUMNS OF J.  THIS MARQUARDT STABILIZATION ALLOWS THE ROUTINE
C     TO RECOVER FROM SOME RANK DEFICIENCIES IN THE JACOBIAN.
C     OSBORNE'S EMPIRICAL STRATEGY FOR CHOOSING THE MARQUARDT
C     PARAMETER HAS PROVEN REASONABLY SUCCESSFUL IN PRACTICE.
C     (GAUSS-NEWTON WITH STEP CONTROL CAN BE OBTAINED BY MAKING THE
C     CHANGE INDICATED BEFORE THE INSTRUCTION LABELED 5).  A
C     DESCRIPTION CAN BE FOUND IN REF. (3), AND A FLOW CHART IN (2),
C     P. 22.
C
C   FOR REFERENCE, SEE
C
C     1.  GENE H. GOLUB AND V. PEREYRA, 'THE DIFFERENTIATION OF
C         PSEUDO-INVERSES AND NONLINEAR LEAST SQUARES PROBLEMS WHOSE
C         VARIABLES SEPARATE,' SIAM J. NUMER. ANAL. 10, 413-432
C         (1973).
C     2.  ------, SAME TITLE, STANFORD C.S. REPORT 72-261, FEB. 1972.
C     3.  OSBORNE, MICHAEL R., 'SOME ASPECTS OF NON-LINEAR LEAST
C         SQUARES CALCULATIONS,' IN LOOTSMA, ED., 'NUMERICAL METHODS
C         FOR NON-LINEAR OPTIMIZATION,' ACADEMIC PRESS, LONDON, 1972.
C     4.  KROGH, FRED, 'EFFICIENT IMPLEMENTATION OF A VARIABLE
C         PROJECTION ALGORITHM FOR NONLINEAR LEAST SQUARES PROBLEMS,'
C         COMM. ACM 17, PP. 167-169 (MARCH, 1974).
C     5.  KAUFMAN, LINDA, 'A VARIABLE PROJECTION METHOD FOR SOLVING
C         SEPARABLE NONLINEAR LEAST SQUARES PROBLEMS', B.I.T. 15,
C         49-57 (1975).
C     6.  DRAPER, N., AND SMITH, H., APPLIED REGRESSION ANALYSIS,
C         WILEY, N.Y., 1966 (FOR STATISTICAL INFORMATION ONLY).
C     7.  C. LAWSON AND R. HANSON, SOLVING LEAST SQUARES PROBLEMS,
C         PRENTICE-HALL, ENGLEWOOD CLIFFS, N. J., 1974.
C
C                 JOHN BOLSTAD
C                 COMPUTER SCIENCE DEPT., SERRA HOUSE
C                 STANFORD UNIVERSITY
C                 JANUARY, 1977
C
C     ..................................................................
C
      DOUBLE PRECISION A(NMAX, LPP2), BETA(L), ALF(NL), T(NMAX, IV),
     2   W(N), Y(N), ACUM, EPS1, GNSTEP, NU, PRJRES, R, RNEW, VPNORM
      INTEGER B1, OUTPUT
      LOGICAL SKIP
      EXTERNAL ADA
      DATA EPS1 /1.D-6/, ITMAX /50/, OUTPUT /6/
C
C        THE FOLLOWING TWO PARAMETERS ARE USED IN THE CONVERGENCE
C     TEST:  EPS1 IS AN ABSOLUTE AND RELATIVE TOLERANCE FOR THE
C     NORM OF THE PROJECTION OF THE RESIDUAL ONTO THE RANGE OF THE
C     JACOBIAN OF THE VARIABLE PROJECTION FUNCTIONAL.
C        ITMAX IS THE MAXIMUM NUMBER OF FUNCTION AND DERIVATIVE
C     EVALUATIONS ALLOWED.  CAUTION:  EPS1 MUST NOT BE
C     SET SMALLER THAN 10 TIMES THE UNIT ROUND-OFF OF THE MACHINE.
C
C-----------------------------------------------------------------
CALL LIB MONITOR FROM VARPRO, MAINTENANCE NUMBER 509, DATE 77178
C     CALL LIBMON(509)
C***PLEASE DON'T REMOVE OR CHANGE THE ABOVE CALL.  IT IS YOUR ONLY
C***PROTECTION AGAINST YOUR USING AN OUT-OF-DATE OR INCORRECT
C***VERSION OF THE ROUTINE.  THE LIBRARY MONITOR REMOVES THIS CALL,
C***SO IT ONLY OCCURS ONCE, ON THE FIRST ENTRY TO THIS ROUTINE.
C-----------------------------------------------------------------
      IERR = 1
      ITER = 0
      LP1 = L + 1
      B1 = L + 2
      LNL2 = L + NL + 2
      NLP1 = NL + 1
      SKIP = .FALSE.
      MODIT = IPRINT
      IF (IPRINT .LE. 0) MODIT = ITMAX + 2
      NU = 0.
C           IF GAUSS-NEWTON IS DESIRED REMOVE THE NEXT STATEMENT.
      NU = 1.
C
C        BEGIN OUTER ITERATION LOOP TO UPDATE ALF.
C     CALCULATE THE NORM OF THE RESIDUAL AND THE DERIVATIVE OF
C     THE MODIFIED RESIDUAL THE FIRST TIME, BUT ONLY THE
C     DERIVATIVE IN SUBSEQUENT ITERATIONS.
C
    5 CALL VPDPA (L, NL, N, NMAX, LPP2, IV, T, Y, W, ALF, ADA, IERR,
     X   IPRINT, A, BETA, A(1, LP1), R)
      GNSTEP = 1.0
      ITERIN = 0
      IF (ITER .GT. 0) GO TO 10
      IF (NL .EQ. 0) GO TO 90
      IF (IERR .NE. 1) GO TO 99
C
      IF (IPRINT .LE. 0) GO TO 10
      WRITE (OUTPUT, 207) ITERIN, R
      WRITE (OUTPUT, 200) NU
C                                  BEGIN TWO-STAGE ORTHOGONAL
C                                  FACTORIZATION
   10 CALL VPFAC1(NLP1, NMAX, N, L, IPRINT, A(1, B1), PRJRES, IERR)
      IF (IERR .LT. 0) GO TO 99
      IERR = 2
      IF (NU .EQ. 0.) GO TO 30
C
C        BEGIN INNER ITERATION LOOP FOR GENERATING NEW ALF AND
C     TESTING IT FOR ACCEPTANCE.
C
   25 CALL VPFAC2(NLP1, NMAX, NU, A(1, B1))
C
C        SOLVE A NL X NL UPPER TRIANGULAR SYSTEM FOR DELTA-ALF.
C     THE TRANSFORMED RESIDUAL (IN COL. LNL2 OF A) IS OVERWRITTEN
C     BY THE RESULT DELTA-ALF.
C
   30 CALL VPBSOL (NMAX, NL, A(1, B1), A(1, LNL2))
      DO 35 K = 1, NL
   35 A(K, B1) = ALF(K) + A(K, LNL2)
C                                  NEW ALF(K) = ALF(K) + DELTA ALF(K)
C
C        STEP TO THE NEW POINT NEW ALF, AND COMPUTE THE NEW
C     NORM OF RESIDUAL.  NEW ALF IS STORED IN COLUMN B1 OF A.
C
   40 CALL VPDPA (L, NL, N, NMAX, LPP2, IV, T, Y, W, A(1, B1), ADA,
     X   IERR, IPRINT, A, BETA, A(1, LP1), RNEW)
      IF (IERR .NE. 2) GO TO 99
      ITER = ITER + 1
      ITERIN = ITERIN + 1
      SKIP = MOD(ITER, MODIT) .NE. 0
      IF (SKIP) GO TO 45
      WRITE (OUTPUT, 203) ITER
      WRITE (OUTPUT, 216) (A(K, B1), K = 1, NL)
      WRITE (OUTPUT, 207) ITERIN, RNEW
C
   45 IF (ITER .LT. ITMAX) GO TO 50
      IERR = -1
      CALL VPERR (IPRINT, IERR, 1)
      GO TO 95
   50 IF (RNEW - R .LT. EPS1*(R + 1.D0)) GO TO 75
C
C                                  RETRACT THE STEP JUST TAKEN
C
      IF (NU .NE. 0.) GO TO 60
C                                  GAUSS-NEWTON OPTION ONLY
      GNSTEP = 0.5*GNSTEP
      IF (GNSTEP .LT. EPS1) GO TO 95
      DO 55 K = 1, NL
   55 A(K, B1) = ALF(K) + GNSTEP*A(K, LNL2)
      GO TO 40
C                                  ENLARGE THE MARQUARDT PARAMETER
   60 NU = 1.5*NU
      IF (.NOT. SKIP) WRITE (OUTPUT, 206) NU
      IF (NU .LE. 100.) GO TO 65
      IERR = -2
      CALL VPERR (IPRINT, IERR, 1)
      GO TO 95
C                                  RETRIEVE UPPER TRIANGULAR FORM
C                                  AND RESIDUAL OF FIRST STAGE.
   65 DO 70 K = 1, NL
      KSUB = LP1 + K
      DO 70 J = K, NLP1
      JSUB = LP1 + J
      ISUB = NLP1 + J
   70 A(K, JSUB) = A(ISUB, KSUB)
      GO TO 25
C                                  END OF INNER ITERATION LOOP
C                                  ACCEPT THE STEP JUST TAKEN
C
   75 R = RNEW
      DO 80 K = 1, NL
   80 ALF(K) = A(K, B1)
C                                  CALC. NORM(DELTA ALF)/NORM(ALF)
      ACUM = GNSTEP*VPNORM(NL, A(1, LNL2))/VPNORM(NL, ALF)
C
C        IF ITERIN IS GREATER THAN 1, A STEP WAS RETRACTED DURING
C     THIS OUTER ITERATION.
C
      IF (ITERIN .EQ. 1) NU = 0.5*NU
      IF (SKIP) GO TO 85
      WRITE (OUTPUT, 200) NU
      WRITE (OUTPUT, 208) ACUM
   85 IERR = 3
      IF (PRJRES .GT. EPS1*(R + 1.D0)) GO TO 5
C                                  END OF OUTER ITERATION LOOP
C
C        CALCULATE FINAL QUANTITIES -- LINEAR PARAMETERS, RESIDUALS,
C     COVARIANCE MATRIX, ETC.
C
   90 IERR = ITER
   95 IF (NL .GT. 0) CALL VPDPA(L, NL, N, NMAX, LPP2, IV, T, Y, W, ALF,
     X   ADA, 4, IPRINT, A, BETA, A(1, LP1), R)
      CALL VPPOST(L, NL, N, NMAX, LNL2, EPS1, R, IPRINT, ALF, W, A,
     X   A(1, LP1), BETA, IERR)
   99 RETURN
C
  200 FORMAT (9H     NU =, E15.7)
  203 FORMAT (12H0  ITERATION, I4, 24H    NONLINEAR PARAMETERS)
  206 FORMAT (25H     STEP RETRACTED, NU =, E15.7)
  207 FORMAT (1H0, I5, 20H  NORM OF RESIDUAL =, E15.7)
  208 FORMAT (34H     NORM(DELTA-ALF) / NORM(ALF) =, E12.3)
  216 FORMAT (1H0, 7E15.7)
      END
C
      SUBROUTINE VPFAC1(NLP1, NMAX, N, L, IPRINT, B, PRJRES, IERR)
C
C        STAGE 1:  HOUSEHOLDER REDUCTION OF
C
C           (    .   )       ( DR' . R3 )   NL
C           ( DR . R2 )  TO  (-----. -- ) ,
C           (    .   )       (  0  . R4 )   N-L-NL
C
C             NL   1            NL    1
C
C     WHERE DR = -D(Q2)*Y IS THE DERIVATIVE OF THE MODIFIED RESIDUAL
C     PRODUCED BY VPDPA, R2 IS THE TRANSFORMED RESIDUAL FROM VPDPA,
C     AND DR' IS IN UPPER TRIANGULAR FORM (AS IN REF. (2), P. 18).
C     DR IS STORED IN ROWS L+1 TO N AND COLUMNS L+2 TO L+NL+1 OF
C     THE MATRIX A (I.E., COLUMNS 1 TO NL OF THE MATRIX B).  R2 IS
C     STORED IN COLUMN L+NL+2 OF THE MATRIX A (COLUMN NL+1 OF B).
C     FOR K = 1, 2, ..., NL, FIND REFLECTION I - U * U' / BETA
C     WHICH ZEROES B(I, K), I = L+K+1, ..., N.
C
C     ..................................................................
C
      DOUBLE PRECISION ACUM, ALPHA, B(NMAX, NLP1), BETA, DSIGN, PRJRES,
     X   U, VPNORM
C
      NL = NLP1 - 1
      NL23 = 2*NL + 3
      LP1 = L + 1
C
      DO 30 K = 1, NL
      LPK = L + K
      ALPHA = DSIGN(VPNORM(N+1-LPK, B(LPK, K)), B(LPK, K))
      U = B(LPK, K) + ALPHA
      B(LPK, K) = U
      BETA = ALPHA * U
      IF (ALPHA .NE. 0.0) GO TO 13
C                                  COLUMN WAS ZERO
      IERR = -8
      CALL VPERR (IPRINT, IERR, LP1 + K)
      GO TO 99
C                                  APPLY REFLECTIONS TO REMAINING
C                                  COLUMNS OF B AND TO RESIDUAL VECTOR.
   13 KP1 = K + 1
      DO 25 J = KP1, NLP1
      ACUM = 0.0
      DO 20 I = LPK, N
   20 ACUM = ACUM + B(I, K) * B(I, J)
      ACUM = ACUM / BETA
      DO 25 I = LPK, N
   25 B(I, J) = B(I, J) - B(I, K) * ACUM
   30 B(LPK, K) = -ALPHA
C
      PRJRES = VPNORM(NL, B(LP1, NLP1))
C
C        SAVE UPPER TRIANGULAR FORM AND TRANSFORMED RESIDUAL, FOR USE
C     IN CASE A STEP IS RETRACTED.  ALSO COMPUTE COLUMN LENGTHS.
C
      IF (IERR .EQ. 4) GO TO 99
      DO 50 K = 1, NL
      LPK = L + K
      DO 40 J = K, NLP1
      JSUB = NLP1 + J
      B(K, J) = B(LPK, J)
   40 B(JSUB, K) = B(LPK, J)
   50 B(NL23, K) = VPNORM(K, B(LP1, K))
C
   99 RETURN
      END
C
      SUBROUTINE VPFAC2(NLP1, NMAX, NU, B)
C
C        STAGE 2:  SPECIAL HOUSEHOLDER REDUCTION OF
C
C        NL      ( DR' . R3 )      (DR'' . R5 )
C                (-----. -- )      (-----. -- )
C        N-L-NL  (  0  . R4 )  TO  (  0  . R4 )
C                (-----. -- )      (-----. -- )
C        NL      (NU*D .  0 )      (  0  . R6 )
C
C                  NL     1          NL     1
C
C     WHERE DR', R3, AND R4 ARE AS IN VPFAC1, NU IS THE MARQUARDT
C     PARAMETER, D IS A DIAGONAL MATRIX CONSISTING OF THE LENGTHS OF
C     THE COLUMNS OF DR', AND DR'' IS IN UPPER TRIANGULAR FORM.
C     DETAILS IN (1), PP. 423-424.  NOTE THAT THE (N-L-NL) BAND OF
C     ZEROES, AND R4, ARE OMITTED IN STORAGE.
C
C     ..................................................................
C
      DOUBLE PRECISION ACUM, ALPHA, B(NMAX, NLP1), BETA, DSIGN, NU, U,
     X   VPNORM
C
      NL = NLP1 - 1
      NL2 = 2*NL
      NL23 = NL2 + 3
      DO 30 K = 1, NL
      KP1 = K + 1
      NLPK = NL + K
      NLPKM1 = NLPK - 1
      B(NLPK, K) = NU * B(NL23, K)
      B(NL, K) = B(K, K)
      ALPHA = DSIGN(VPNORM(K+1, B(NL, K)), B(K, K))
      U = B(K, K) + ALPHA
      BETA = ALPHA * U
      B(K, K) = -ALPHA
C                                  THE K-TH REFLECTION MODIFIES ONLY
C                                  ROWS K, NL+1, NL+2, ..., NL+K, AND
C                                  COLUMNS K TO NL+1.
      DO 30 J = KP1, NLP1
      B(NLPK, J) = 0.
      ACUM = U * B(K, J)
      DO 20 I = NLP1, NLPKM1
   20 ACUM = ACUM + B(I, K) * B(I, J)
      ACUM = ACUM / BETA
      B(K, J) = B(K, J) - U * ACUM
      DO 30 I = NLP1, NLPK
   30 B(I, J) = B(I, J) - B(I, K) * ACUM
C
      RETURN
      END
C
      SUBROUTINE VPDPA (L, NL, N, NMAX, LPP2, IV, T, Y, W, ALF, ADA,
     X   ISEL, IPRINT, A, U, R, RNORM)
C
C        COMPUTE THE NORM OF THE RESIDUAL (IF ISEL = 1 OR 2), OR THE
C     (N-L) X NL DERIVATIVE OF THE MODIFIED RESIDUAL (N-L) VECTOR
C     Q2*Y (IF ISEL = 1 OR 3).  HERE Q * PHI = S, I.E.,
C
C        L    ( Q1 )   (     .   .        )   ( S  . R1 . F1 )
C             (----)   ( PHI . Y . D(PHI) ) = (----. -- . -- )
C        N-L  ( Q2 )   (     .   .        )   (  0 . R2 . F2 )
C
C                         N     L   1    P       L    1    P
C
C     WHERE Q IS N X N ORTHOGONAL, AND S IS L X L UPPER TRIANGULAR.
C     THE NORM OF THE RESIDUAL = NORM(R2), AND THE DESIRED DERIVATIVE
C     ACCORDING TO REF. (5), IS
C                                          -1
C        D(Q2 * Y)  =  -Q2 * D(PHI) * S   * Q1 * Y.
C
C     ..................................................................
C
      DOUBLE PRECISION A(NMAX, LPP2), ALF(NL), T(NMAX, IV), W(N), Y(N),
     X   ACUM, ALPHA, BETA, RNORM, DSIGN, DSQRT, SAVE, R(N), U(L),
     X   VPNORM
      INTEGER FIRSTC, FIRSTR, INC(12, 8)
      LOGICAL NOWATE, PHILP1
      EXTERNAL ADA
C
      IF (ISEL .NE. 1) GO TO 3
      LP1 = L + 1
      LNL2 = L + 2 + NL
      LP2 = L + 2
      LPP1 = LPP2 - 1
      FIRSTC = 1
      LASTC = LPP1
      FIRSTR = LP1
      CALL VPINIT(L, NL, N, NMAX, LPP2, IV, T, W, ALF, ADA, ISEL,
     X   IPRINT, A, INC, NCON, NCONP1, PHILP1, NOWATE)
      IF (ISEL .NE. 1) GO TO 99
      GO TO 30
C
    3 CALL ADA (LP1, NL, N, NMAX, LPP2, IV, A, INC, T, ALF,
     X   MIN0(ISEL, 3))
      IF (ISEL .EQ. 2) GO TO 6
C                                  ISEL = 3 OR 4
      FIRSTC = LP2
      LASTC = LPP1
      FIRSTR = (4 - ISEL)*L + 1
      GO TO 50
C                                  ISEL = 2
    6 FIRSTC = NCONP1
      LASTC = LP1
      IF (NCON .EQ. 0) GO TO 30
      IF (A(1, NCON) .EQ. SAVE) GO TO 30
      ISEL = -7
      CALL VPERR (IPRINT, ISEL, NCON)
      GO TO 99
C                                  ISEL = 1 OR 2
   30 IF (PHILP1) GO TO 40
      DO 35 I = 1, N
   35 R(I) = Y(I)
      GO TO 50
   40 DO 45 I = 1, N
   45 R(I) = Y(I) - R(I)
C                                  WEIGHT APPROPRIATE COLUMNS
   50 IF (NOWATE) GO TO 58
      DO 55 I = 1, N
      ACUM = W(I)
      DO 55 J = FIRSTC, LASTC
   55 A(I, J) = A(I, J) * ACUM
C
C        COMPUTE ORTHOGONAL FACTORIZATIONS BY HOUSEHOLDER
C     REFLECTIONS.  IF ISEL = 1 OR 2, REDUCE PHI (STORED IN THE
C     FIRST L COLUMNS OF THE MATRIX A) TO UPPER TRIANGULAR FORM,
C     (Q*PHI = S), AND TRANSFORM Y (STORED IN COLUMN L+1), GETTING
C     Q*Y = R.  IF ISEL = 1, ALSO TRANSFORM J = D PHI (STORED IN
C     COLUMNS L+2 THROUGH L+P+1 OF THE MATRIX A), GETTING Q*J = F.
C     IF ISEL = 3 OR 4, PHI HAS ALREADY BEEN REDUCED, TRANSFORM
C     ONLY J.  S, R, AND F OVERWRITE PHI, Y, AND J, RESPECTIVELY,
C     AND A FACTORED FORM OF Q IS SAVED IN U AND THE LOWER
C     TRIANGLE OF PHI.
C
   58 IF (L .EQ. 0) GO TO 75
      DO 70 K = 1, L
      KP1 = K + 1
      IF (ISEL .GE. 3 .OR. (ISEL .EQ. 2 .AND. K .LT. NCONP1)) GO TO 66
      ALPHA = DSIGN(VPNORM(N+1-K, A(K, K)), A(K, K))
      U(K) = A(K, K) + ALPHA
      A(K, K) = -ALPHA
      FIRSTC = KP1
      IF (ALPHA .NE. 0.0) GO TO 66
      ISEL = -8
      CALL VPERR (IPRINT, ISEL, K)
      GO TO 99
C                                  APPLY REFLECTIONS TO COLUMNS
C                                  FIRSTC TO LASTC.
   66 BETA = -A(K, K) * U(K)
      DO 70 J = FIRSTC, LASTC
      ACUM = U(K)*A(K, J)
      DO 68 I = KP1, N
   68 ACUM = ACUM + A(I, K)*A(I, J)
      ACUM = ACUM / BETA
      A(K, J) = A(K, J) - U(K)*ACUM
      DO 70 I = KP1, N
   70 A(I, J) = A(I, J) - A(I, K)*ACUM
C
   75 IF (ISEL .GE. 3) GO TO 85
      RNORM = VPNORM(N-L, R(LP1))
      IF (ISEL .EQ. 2) GO TO 99
      IF (NCON .GT. 0) SAVE = A(1, NCON)
C
C        F2 IS NOW CONTAINED IN ROWS L+1 TO N AND COLUMNS L+2 TO
C     L+P+1 OF THE MATRIX A.  NOW SOLVE THE L X L UPPER TRIANGULAR
C     SYSTEM S*BETA = R1 FOR THE LINEAR PARAMETERS BETA.  BETA
C     OVERWRITES R1.
C
   85 IF (L .GT. 0) CALL VPBSOL (NMAX, L, A, R)
C
C        MAJOR PART OF KAUFMAN'S SIMPLIFICATION OCCURS HERE.  COMPUTE
C     THE DERIVATIVE OF ETA WITH RESPECT TO THE NONLINEAR PARAMETERS
C
C      T  D ETA      T   L           D PHI(J)   D PHI(L+1)
C     Q * -------- = Q * (SUM BETA(J) -------- + ----------) = F2*BETA
C          D ALF(K)      J=1          D ALF(K)   D ALF(K)
C
C     AND STORE THE RESULT IN COLUMNS L+2 TO L+NL+1.  IF ISEL NOT
C     = 4, THE FIRST L ROWS ARE OMITTED.  THIS IS -D(Q2)*Y.  IF
C     ISEL NOT = 4 THE RESIDUAL R2 = Q2*Y (IN COL. L+1) IS COPIED
C     TO COLUMN L+NL+2.  OTHERWISE ALL OF COLUMN L+1 IS COPIED.
C
      DO 95 I = FIRSTR, N
      IF (L .EQ. NCON) GO TO 95
      M = LP1
      DO 90 K = 1, NL
      ACUM = 0.
      DO 88 J = NCONP1, L
      IF (INC(K, J) .EQ. 0) GO TO 88
      M = M + 1
      ACUM = ACUM + A(I, M) * R(J)
   88 CONTINUE
      KSUB = LP1 + K
      IF (INC(K, LP1) .EQ. 0) GO TO 90
      M = M + 1
      ACUM = ACUM + A(I, M)
   90 A(I, KSUB) = ACUM
   95 A(I, LNL2) = R(I)
C
   99 RETURN
      END
C
      SUBROUTINE VPINIT(L, NL, N, NMAX, LPP2, IV, T, W, ALF, ADA, ISEL,
     X   IPRINT, A, INC, NCON, NCONP1, PHILP1, NOWATE)
C
C        CHECK VALIDITY OF INPUT PARAMETERS, AND DETERMINE NUMBER OF
C     CONSTANT FUNCTIONS.
C
C     ..................................................................
C
      DOUBLE PRECISION A(NMAX, LPP2), ALF(NL), T(NMAX, IV), W(N),
     X   DSQRT
      INTEGER OUTPUT, P, INC(12, 8)
      LOGICAL NOWATE, PHILP1
      DATA OUTPUT /6/
C
      LP1 = L + 1
      LNL2 = L + 2 + NL
C                                  CHECK FOR VALID INPUT
      IF (L .GE. 0 .AND. NL .GE. 0 .AND. L+NL .LT. N .AND. LNL2 .LE.
     X   LPP2 .AND. 2*NL + 3 .LE. NMAX .AND. N .LE. NMAX .AND.
     X   IV .GT. 0 .AND. .NOT. (NL .EQ. 0 .AND. L .EQ. 0)) GO TO 1
      ISEL = -4
      CALL VPERR (IPRINT, ISEL, 1)
      GO TO 99
C
    1 IF (L .EQ. 0 .OR. NL .EQ. 0) GO TO 3
      DO 2 J = 1, LP1
      DO 2 K = 1, NL
    2 INC(K, J) = 0
C
    3 CALL ADA (LP1, NL, N, NMAX, LPP2, IV, A, INC, T, ALF, ISEL)
C
      NOWATE = .TRUE.
      DO 9 I = 1, N
      NOWATE = NOWATE .AND. (W(I) .EQ. 1.0)
      IF (W(I) .GE. 0.) GO TO 9
C                                  ERROR IN WEIGHTS
      ISEL = -6
      CALL VPERR (IPRINT, ISEL, I)
      GO TO 99
    9 W(I) = DSQRT(W(I))
C
      NCON = L
      NCONP1 = LP1
      PHILP1 = L .EQ. 0
      IF (PHILP1 .OR. NL .EQ. 0) GO TO 99
C                                  CHECK INC MATRIX FOR VALID INPUT AND
C                                  DETERMINE NUMBER OF CONSTANT FCNS.
      P = 0
      DO 11 J = 1, LP1
      IF (P .EQ. 0) NCONP1 = J
      DO 11 K = 1, NL
      INCKJ = INC(K, J)
      IF (INCKJ .NE. 0 .AND. INCKJ .NE. 1) GO TO 15
      IF (INCKJ .EQ. 1) P = P + 1
   11 CONTINUE
C
      NCON = NCONP1 - 1
      IF (IPRINT .GE. 0) WRITE (OUTPUT, 210) NCON
      IF (L+P+2 .EQ. LPP2) GO TO 20
C                                  INPUT ERROR IN INC MATRIX
   15 ISEL = -5
      CALL VPERR (IPRINT, ISEL, 1)
      GO TO 99
C                                  DETERMINE IF PHI(L+1) IS IN THE
C                                  MODEL.
   20 DO 25 K = 1, NL
   25 IF (INC(K, LP1) .EQ. 1) PHILP1 = .TRUE.
C
   99 RETURN
  210 FORMAT (33H0  NUMBER OF CONSTANT FUNCTIONS =, I4 /)
      END
      SUBROUTINE VPBSOL (NMAX, N, A, X)
C
C        BACKSOLVE THE N X N UPPER TRIANGULAR SYSTEM A*X = B.
C     THE SOLUTION X OVERWRITES THE RIGHT SIDE B.
C
      DOUBLE PRECISION A(NMAX, N), X(N), ACUM
C
      X(N) = X(N) / A(N, N)
      IF (N .EQ. 1) GO TO 30
      NP1 = N + 1
      DO 20 IBACK = 2, N
      I = NP1 - IBACK
C                                  I = N-1, N-2, ..., 2, 1
      IP1 = I + 1
      ACUM = X(I)
      DO 10 J = IP1, N
   10 ACUM = ACUM - A(I, J)*X(J)
   20 X(I) = ACUM / A(I, I)
C
   30 RETURN
      END
      SUBROUTINE VPPOST(L, NL, N, NMAX, LNL2, EPS, RNORM, IPRINT, ALF,
     X   W, A, R, U, IERR)
C
C        CALCULATE RESIDUALS, SAMPLE VARIANCE, AND COVARIANCE MATRIX.
C     ON INPUT, U CONTAINS INFORMATION ABOUT HOUSEHOLDER REFLECTIONS
C     FROM VPDPA.  ON OUTPUT, IT CONTAINS THE LINEAR PARAMETERS.
C
      DOUBLE PRECISION A(NMAX, LNL2), ALF(NL), R(N), U(L), W(N), ACUM,
     X   EPS, PRJRES, RNORM, SAVE, DABS
      INTEGER OUTPUT
      DATA OUTPUT /6/
C
      LP1 = L + 1
      LPNL = LNL2 - 2
      LNL1 = LPNL + 1
      DO 10 I = 1, N
   10 W(I) = W(I)**2
C
C        UNWIND HOUSEHOLDER TRANSFORMATIONS TO GET RESIDUALS,
C     AND MOVE THE LINEAR PARAMETERS FROM R TO U.
C
      IF (L .EQ. 0) GO TO 30
      DO 25 KBACK = 1, L
      K = LP1 - KBACK
      KP1 = K + 1
      ACUM = 0.
      DO 20 I = KP1, N
   20 ACUM = ACUM + A(I, K) * R(I)
      SAVE = R(K)
      R(K) = ACUM / A(K, K)
      ACUM = -ACUM / (U(K) * A(K, K))
      U(K) = SAVE
      DO 25 I = KP1, N
   25 R(I) = R(I) - A(I, K)*ACUM
C                                  COMPUTE MEAN ERROR
   30 ACUM = 0.
      DO 35 I = 1, N
   35 ACUM = ACUM + R(I)
      SAVE = ACUM / N
C
C        THE FIRST L COLUMNS OF THE MATRIX HAVE BEEN REDUCED TO
C     UPPER TRIANGULAR FORM IN VPDPA.  FINISH BY REDUCING ROWS
C     L+1 TO N AND COLUMNS L+2 THROUGH L+NL+1 TO TRIANGULAR
C     FORM.  THEN SHIFT COLUMNS OF DERIVATIVE MATRIX OVER ONE
C     TO THE LEFT TO BE ADJACENT TO THE FIRST L COLUMNS.
C
      IF (NL .EQ. 0) GO TO 45
      CALL VPFAC1(NL+1, NMAX, N, L, IPRINT, A(1, L+2), PRJRES, 4)
      DO 40 I = 1, N
      A(I, LNL2) = R(I)
      DO 40 K = LP1, LNL1
   40 A(I, K) = A(I, K+1)
C                                  COMPUTE COVARIANCE MATRIX
   45 A(1, LNL2) = RNORM
      ACUM = RNORM*RNORM/(N - L - NL)
      A(2, LNL2) = ACUM
      CALL VPCOV(NMAX, LPNL, ACUM, A)
C
      IF (IPRINT .LT. 0) GO TO 99
      WRITE (OUTPUT, 209)
      IF (L .GT. 0) WRITE (OUTPUT, 210) (U(J), J = 1, L)
      IF (NL .GT. 0) WRITE (OUTPUT, 211) (ALF(K), K = 1, NL)
      WRITE (OUTPUT, 214) RNORM, SAVE, ACUM
      IF (DABS(SAVE) .GT. EPS) WRITE (OUTPUT, 215)
      WRITE (OUTPUT, 209)
   99 RETURN
C
  209 FORMAT (1H0, 50(1H'))
  210 FORMAT (20H0  LINEAR PARAMETERS // (7E15.7))
  211 FORMAT (23H0  NONLINEAR PARAMETERS // (7E15.7))
  214 FORMAT (21H0  NORM OF RESIDUAL =, E15.7,
     X   33H EXPECTED ERROR OF OBSERVATIONS =, E15.7, /
     X   39H   ESTIMATED VARIANCE OF OBSERVATIONS =, E15.7)
  215 FORMAT (47H  WARNING -- EXPECTED ERROR OF OBSERVATIONS IS ,
     X   48HNOT ZERO.  COVARIANCE MATRIX MAY BE MEANINGLESS. /)
      END
      SUBROUTINE VPCOV(NMAX, N, SIGMA2, A)
C
C        COMPUTE THE SCALED COVARIANCE MATRIX OF THE L + NL
C     PARAMETERS.  THIS INVOLVES COMPUTING
C
C               2    -1   -T
C          SIGMA  * T  * T
C
C     WHERE THE (L+NL) X (L+NL) UPPER TRIANGULAR MATRIX T IS
C     DESCRIBED IN SUBROUTINE VPPOST.  THE RESULT OVERWRITES THE
C     FIRST L+NL ROWS AND COLUMNS OF THE MATRIX A.  THE RESULTING
C     MATRIX IS SYMMETRIC.  SEE REF. 7, PP. 67-70, 281.
C
C     ..................................................................
C
      DOUBLE PRECISION A(NMAX, N), SUM, SIGMA2
C
      DO 10 J = 1, N
   10 A(J, J) = 1./A(J, J)
C
C                                  INVERT T UPON ITSELF
C
      IF (N .EQ. 1) GO TO 70
      NM1 = N - 1
      DO 60 I = 1, NM1
      IP1 = I + 1
      DO 60 J = IP1, N
      JM1 = J - 1
      SUM = 0.
      DO 50 M = I, JM1
   50 SUM = SUM + A(I, M) * A(M, J)
   60 A(I, J) = -SUM * A(J, J)
C
C                                  NOW FORM THE MATRIX PRODUCT
C
   70 DO 90 I = 1, N
      DO 90 J = I, N
      SUM = 0.
      DO 80 M = J, N
   80 SUM = SUM + A(I, M) * A(J, M)
      SUM = SUM * SIGMA2
      A(I, J) = SUM
   90 A(J, I) = SUM
C
      RETURN
      END
      SUBROUTINE VPERR (IPRINT, IERR, K)
C
C        PRINT ERROR MESSAGES
C
      INTEGER ERRNO, OUTPUT
      DATA OUTPUT /6/
C
      IF (IPRINT .LT. 0) GO TO 99
      ERRNO = IABS(IERR)
      GO TO (1, 2, 99, 4, 5, 6, 7, 8), ERRNO
C
    1 WRITE (OUTPUT, 101)
      GO TO 99
    2 WRITE (OUTPUT, 102)
      GO TO 99
    4 WRITE (OUTPUT, 104)
      GO TO 99
    5 WRITE (OUTPUT, 105)
      GO TO 99
    6 WRITE (OUTPUT, 106) K
      GO TO 99
    7 WRITE (OUTPUT, 107) K
      GO TO 99
    8 WRITE (OUTPUT, 108) K
C
   99 RETURN
  101 FORMAT (46H0  PROBLEM TERMINATED FOR EXCESSIVE ITERATIONS //)
  102 FORMAT (49H0  PROBLEM TERMINATED BECAUSE OF ILL-CONDITIONING //)
  104 FORMAT (/ 50H INPUT ERROR IN PARAMETER L, NL, N, LPP2, OR NMAX. /)
  105 FORMAT (47H0  ERROR -- INC MATRIX IMPROPERLY SPECIFIED, OR,
     X   21H DISAGREES WITH LPP2. /)
  106 FORMAT (19H0  ERROR -- WEIGHT(, I4, 14H) IS NEGATIVE. /)
  107 FORMAT (28H0  ERROR -- CONSTANT COLUMN , I3,
     X   37H MUST BE COMPUTED ONLY WHEN ISEL = 1. /)
  108 FORMAT (33H0 CATASTROPHIC FAILURE -- COLUMN , I4,
     X   28H IS ZERO, SEE DOCUMENTATION. /)
      END
      DOUBLE PRECISION FUNCTION VPNORM(N, X)
C
C        COMPUTE THE L2 (EUCLIDEAN) NORM OF A VECTOR, MAKING SURE TO
C     AVOID UNNECESSARY UNDERFLOWS.  NO ATTEMPT IS MADE TO SUPPRESS
C     OVERFLOWS.
C
      DOUBLE PRECISION X(N), RMAX, SUM, TERM, DABS, DSQRT
C
C                                  FIND LARGEST (IN ABSOLUTE VALUE)
C                                  ELEMENT
      RMAX = 0.
      DO 10 I = 1, N
      IF (DABS(X(I)) .GT. RMAX) RMAX = DABS(X(I))
   10 CONTINUE
C
      SUM = 0.
      IF (RMAX .EQ. 0.) GO TO 30
      DO 20 I = 1, N
      TERM = 0.
      IF (RMAX + DABS(X(I)) .NE. RMAX) TERM = X(I)/RMAX
   20 SUM = SUM + TERM*TERM
C
   30 VPNORM = RMAX*DSQRT(SUM)
   99 RETURN
      END
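C
C     ..................................................................
C
C        THE FOLLOWING SUBROUTINE IS AN ILLUSTRATIVE SKETCH OF AN ADA
C     ROUTINE (IT IS NOT THE SAMPLE PROGRAM REFERRED TO ABOVE) FOR
C     THE TWO-EXPONENTIAL MODEL USED AS AN EXAMPLE IN THE COMMENTS,
C
C        ETA = BETA(1) + BETA(2)*EXP(-ALF(1)*T) + BETA(3)*EXP(-ALF(2)*T),
C
C     WITH L = 3, NL = 2, IV = 1, P = 2, LPP2 = 7.  THE CORRESPONDING
C     CALL WOULD BE OF THE FORM
C
C        CALL VARPRO (3, 2, N, NMAX, 7, 1, T, Y, W, ADAEXP, A, 1,
C       X             ALF, BETA, IERR)
C
C     WITH ADAEXP DECLARED EXTERNAL IN THE CALLING PROGRAM.  THE NAME
C     ADAEXP AND THE MODEL ARE ILLUSTRATIVE ONLY.
C
      SUBROUTINE ADAEXP (LP1, NL, N, NMAX, LPP2, IV, A, INC, T, ALF,
     X   ISEL)
C
C        EVALUATE THE PHI'S (COLUMNS 1 TO 3 OF A) AND THEIR NONZERO
C     DERIVATIVES (COLUMNS 5 AND 6 OF A) FOR THE MODEL ABOVE,
C     ACCORDING TO THE VALUE OF ISEL (SEE THE VARPRO COMMENTS).
C
      DOUBLE PRECISION A(NMAX, LPP2), T(NMAX, IV), ALF(NL), DEXP
      INTEGER INC(12, 8)
C
      IF (ISEL .NE. 1) GO TO 10
C                                  ISEL = 1.  FILL IN INC AND STORE
C                                  THE CONSTANT FUNCTION PHI(1) = 1.
      INC(1, 2) = 1
      INC(2, 3) = 1
      DO 5 I = 1, N
    5 A(I, 1) = 1.0D0
C                                  ISEL = 1 OR 2.  NONCONSTANT PHI'S.
   10 IF (ISEL .EQ. 3) GO TO 30
      DO 20 I = 1, N
      A(I, 2) = DEXP(-ALF(1)*T(I, 1))
   20 A(I, 3) = DEXP(-ALF(2)*T(I, 1))
      IF (ISEL .EQ. 2) GO TO 99
C                                  ISEL = 1 OR 3.  DERIVATIVES
C                                  D PHI(2)/D ALF(1) AND
C                                  D PHI(3)/D ALF(2).
   30 DO 40 I = 1, N
      A(I, 5) = -T(I, 1)*DEXP(-ALF(1)*T(I, 1))
   40 A(I, 6) = -T(I, 1)*DEXP(-ALF(2)*T(I, 1))
   99 RETURN
      END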