Con't Neural Nets
Volume Number:6
Issue Number:11
Column Tag:Programmer's Workshop

Continuous Neural Networks

By Wayne Joerding, Pullman, WA

Code for Continuous Neural Networks

Introduction

For the past few years there has been considerable renewed interest in the use of Neural Networks in such diverse areas as cognitive science, artificial intelligence, and physics. My interest in the subject derives from a paper by K. Hornik, M. Stinchcombe, and H. White, who show that a three-layer feedforward neural network can approximate any continuous nonlinear function to any desired degree of accuracy by expanding the number of units in the hidden layer. This result probably explains the success physicists have had in using neural networks to model nonlinear dynamic systems, i.e. chaos. I plan to investigate the use of neural networks to model economic behavior (yes, I’m an economist).

Programming neural networks presents several problems, among them an algorithm to compute the network’s output and a method of training the network. This article, after a brief introduction to neural networks which can be skipped by those already familiar with the topic, describes code to compute with and train a particular type of neural network. The code is sometimes more awkward than is strictly needed for this article since it is to be put into a friendlier interface I am developing with Prototyper™. In fact the current code has no interface at all, so you’ll need to add appropriate printf statements in order to see the results. The code is written in Lightspeed C™ with double floating point data. If you don’t have a math coprocessor, performance will be significantly reduced.

Neural Networks

For an excellent introduction to neural networks see Neural Computing: Theory and Practice by P. D. Wasserman. A neural network is a collection of simple processing units which are connected to each other. The output of one unit helps determine the output of other units in a potentially simultaneous system. The neural network originated as a model of neural activity and is inherently parallel, but can be modeled on a sequential machine like the Macintosh. My work concerns a particular type of neural network called a layered feedforward network. In this system the units are organized into layers: an input layer, a number of hidden layers, and an output layer. The output layer has only one unit. Each unit is connected to all the units in the previous layer by a linear relationship, n = Wa, where n is the net input level to units of a layer, a is the output (activation) level of units in the previous layer, and W is a matrix with appropriate dimensions.

Figure 1.

A simple three layer network is shown in Figure 1. In this example, the connection between the input layer and the hidden layer can be represented by a 2X2 matrix W. The hidden units are depicted with a line in the middle because they behave differently from input and output units. An input unit is nothing more than the input value to the network; it does no processing of this value. For example, if one were trying to model the behavior of stock prices then the two input values might be last period’s stock price and earnings. The hidden units, on the other hand, have some simple processing capability. Each hidden unit takes its net input, as described by the formula above, and transforms this value by a simple nonlinear relationship, called a squashing function or an activation function. The result of the squashing function is then passed along to the next layer as the output of the hidden unit. In the example, the single output unit combines the outputs of each hidden unit to form the output for the network.

A tiny bit of algebra should make this example clear. Let x = (x1 , x2) be the values of the input variable, i.e. the outputs of the input layer. Let W be the connection between the input layer and the hidden layer and n = (n1, n2) be the net input values to the hidden layer units. Then n = Wx. Breaking this down further, let Wij be the connection between input unit i and hidden unit j, then n1 = W11*x1 + W21*x2 and n2 = W12*x1 + W22*x2. Let a = (a1,a2) be the output values of each hidden unit, then a1 = f(n1) and a2 = f(n2) where f() represents the squashing function. Finally, let o be the output value, and V the connections between the hidden layer and the output layer, a 1X2 matrix, then o = Va. Again breaking this down further, let Vk be the connection between hidden unit k and the output unit, then o = V1*a1 + V2*a2.
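
To make the example concrete, here is a minimal stand-alone sketch (not part of the project listings; the weight and input values are arbitrary placeholders) that evaluates the 2-2-1 network just described, using the logistic squashing function discussed below:

/* forward pass for the 2-2-1 example network */
#include <stdio.h>
#include <math.h>

static double squash(double n)  /* logistic squashing function */
{
 return( 1.0/(1.0 + exp(-n)) );
}

int main()
{
 double x[2] = { 0.5, 1.5 };       /* input values (arbitrary) */
 double W[2][2] = { {0.1, -0.2}, {0.3, 0.4} };  /* W[i][j] connects input i to hidden j */
 double V[2] = { 0.7, -0.5 };      /* V[k] connects hidden k to the output */
 double n1, n2, a1, a2, o;

 n1 = W[0][0]*x[0] + W[1][0]*x[1]; /* net inputs to the hidden units */
 n2 = W[0][1]*x[0] + W[1][1]*x[1];
 a1 = squash(n1);                  /* hidden unit outputs */
 a2 = squash(n2);
 o  = V[0]*a1 + V[1]*a2;           /* network output */
 printf("output = %lf\n", o);
 return(0);
}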

Important characteristics of a neural network are determined by the squashing function. A common squashing function is the threshold function, which outputs a zero if the net input is less than some threshold number and a one otherwise. I am interested in continuous squashing functions because the output of a neural network with a continuous squashing function is a continuous variable, which is more appropriate for most economic data. The most common squashing function is the logistic function, and it is the one used in the code shown here. This choice can easily be changed.
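
For reference, the logistic function maps a net input n into 1/(1 + exp(-n)), so a unit’s output always lies between zero and one. Its derivative has the convenient form f(n)*(1 - f(n)), a fact exploited later when the Jacobian is computed; this is exactly what logisticSquash() and logisticSquash_dSquash() implement in the listing “Compute SS.c”.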

Thus, mathematically, neural networks are fairly simple constructs. Their appeal comes from their demonstrated ability to model strikingly intuitive relationships reminiscent of human mental activity. This ability is accomplished by choosing the parameters, the elements of W and V in our example, so that input values generate the desired output. For example, the weights W and V can be chosen along with a threshold squashing function to create the logical XOR operation.
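
For instance, one simple choice (with inputs restricted to zero and one) is W11 = W21 = W12 = W22 = 1 with thresholds of 0.5 and 1.5 on the two hidden units, so that they compute OR and AND respectively, and V1 = 1, V2 = -1 with a threshold of 0.5 on the output unit; the network then outputs exactly the XOR of its two inputs.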

The process of choosing parameters is often called training. Training comes in two varieties, supervised and unsupervised. I will only consider the supervised variety, in which a neural network is trained by presenting it with input values and desired output levels. There are many specific techniques for supervised training, the most commonly discussed being back propagation. But, conceptually, supervised training is nothing more than finding parameter values that cause the neural network to optimize some objective criterion, and the most common criterion is the sum of squared deviations of the output from some target variable. Thus, traditional nonlinear least squares methods can be used for training. That is the approach taken here.
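
In symbols, if the training data consist of T observations, with y(t) the target value and o(t) the network output for the t-th input, training chooses the weight parameters to minimize SS = (y(1) - o(1))^2 + ... + (y(T) - o(T))^2. This is precisely the quantity returned by Compute_SS() in the code described below.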

Computation

In this section I introduce the code relevant to computing the output of a neural network from a given input. This code is contained in the listing “Compute SS.c”, along with code for the training methods to be used later. First, examine the NeuralNet structure defined in listing “Neural Network.h”. Most of the fields are relevant to training and the user interface; the only fields needed for computing output are the first three: OutLayer, Units[MaxLay], and W[MaxLay-1]. OutLayer gives the index of the output layer, and so essentially defines the number of layers in the network; it must be less than or equal to MaxLay-1. Units[] specifies the number of units in each layer; remember, however, that the number of units in the output layer must be one, i.e. Units[OutLayer] must be 1. Note the type, method, and squash fields, to be used in the future to allow for different types of neural networks, methods of training, and different squashing functions.
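
As an illustration (the field values here are hypothetical; in the actual code these fields live in the global structure pointed to by theNet), the three layer network of Figure 1 would be described roughly as follows before calling AllotInitNewNetWeights() to allot and initialize the weight matrices:

 theNet->OutLayer = 2;  /* index of the output layer, so three layers in all */
 theNet->Units[0] = 2;  /* two input units */
 theNet->Units[1] = 2;  /* two hidden units */
 theNet->Units[2] = 1;  /* the output layer must have exactly one unit */
 theNet->squash = Logistic; /* use the logistic squashing function */
 theNet->method = GaussNew; /* train with the Gauss-Newton method */
 AllotInitNewNetWeights();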

There are two functions that compute output, ComputeOutputOnly() and ComputeOutputJac(); the latter also computes the derivative of the squashing function and the Jacobian used by the Gauss-Newton training method. Calculation of the analytical derivative of the network function is only slightly more expensive than computation of the network itself, so analytical derivatives are used in the Gauss-Newton method instead of the more common approximate derivatives. These functions call matrix manipulation routines defined in “DTypeMatrixObject.c” and “DTypeVectorObject.c”.

Training

I have investigated two training methods, a gradient and a non-gradient method. I have found the gradient method, Gauss-Newton, difficult to work with because network saturation results in an ill-conditioned Jacobian matrix. Consequently, I have also coded a non-gradient method due to Hooke and Jeeves; see Methods of Optimization by G. R. Walsh. This method is slower but should work under almost all conditions. Finally, both types of methods suffer from the local minima problem, which appears to be rather serious for neural network models. In the future I plan to experiment with the simulated annealing method to overcome the local minima problem.

Regardless of the method, the objective is to minimize a sum of squares criterion. The sum of squares function is calculated in both Compute_SS() and Compute_SS_Jac(); the latter also calculates the Jacobian matrix and is used only by the gradient methods. The code for the sum of squares function is straightforward but becomes rather complicated when the Jacobian must also be computed. The actual derivation of the Jacobian formula goes beyond this article, but is available from the author upon request.

The two gradient methods, Gauss-Newton and quasi Gauss-Newton, are defined in the file “Gradient methods.c”. The two driver routines are do_GaussNewton() and do_quasiGaussNewton(). Within each you will see considerable use of printf() and other awkward output functions I have used at this stage for obtaining results and debugging. This code follows the methods described in Numerical Methods for Unconstrained Optimization and Nonlinear Equations by J. E. Dennis and R. B. Schnabel.

The calculation of the next quasi-Newton step in the minimization process involves an ordinary least squares regression of residuals on the gradient. The code for doing this is in the file “Ord Least Squares.c”. The regression algorithm uses QR decomposition rather than the more common Gaussian elimination. The reason is that the QR decomposition is somewhat more numerically stable, and since I plan to use neural networks to model chaotic systems it is crucial that the maximum degree of stability be attained. The cost is that while Gaussian elimination takes about n³/3 operations, the QR method takes about 2n³/3 operations.
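
Inside do_GaussNewton() this regression step reduces to the single call shown below; Jac_T holds the Jacobian (stored as its transpose), Resid holds the current residuals, the computed step comes back in the vector Diag, and the return value flags a singular Jacobian:

 sing = OLSbyQRmethod(&Jac_T, &Pi, &Diag, &Resid);
 if(sing)
  termcode = SINGJAC; /* give up: the Jacobian is singular */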

The non-gradient method is coded in the file “Search methods.c”. It looks more complicated than it is, but the interested reader should consult the Walsh book cited earlier for a detailed description.

Miscellaneous

There are several other files provided that are peripheral to the main body of code. The main() function is defined in “Neural Network Main.c”. “Setup Network.c” contains code used to initialize or manipulate various data structures. “General.c” is just a convenient place for a few functions that are used in more than one other file. “test data.c” contains some test data I have used to test the methods; however, the gradient methods completely fail with this data. It is only included here so that you can see how to run the routines. The gradient methods seem to work better with larger data sets. Finally, the Lightspeed C project should include “MacTraps”, “Math881”, “stdio881”, and “storage”. I haven’t tried my code without the Math881 option on, but I know of no reason it shouldn’t work with this option off, a necessity for machines without the 68881 math coprocessor.

Listing:  Neural Network.h

/* Constants and structure definitions */

#define _MC68881_

#include <stdio.h>
#include <math.h>

#define MaxLay   5
#define ActiveNets 4

/*---- return codes for linesearch ----*/
#define LINEOK   0
#define LINENOTOK  1

/*---- return codes for gradient methods ----*/
#define FAIL     0
#define METGRADTOL 1
#define METSTEPTOL 2
#define LINEFAIL 4
#define EXCEDITNLIM 8
#define SINGJAC  16 /* if Jacobian is singular */
#define NETSAT   32

/*---- codes used in Hooke & Jeeves search method ----*/
#define CHANGED  1

/*---- codes for type of network ----*/
#define SigUB    1
#define SigB 2 /* biased Sigma network */
#define Lin 3  /* Linear model */

/* codes for squashing function */
#define Logistic 1 /* logistic squash function */
#define hyperTan 2 /* hyperbolic tangent */
#define Thresh   3 /* threshold squashing function */

/* codes for estimation method */
#define BackProp 1
#define SimAnn   2 /* Simulated annealing */
#define GaussNew 3 /* Gauss Newton method */
#define qGaussNew 4 /* quasi Gauss-Newton method */
#define HookeJeeves 6 /* Hooke & Jeeves method for direct search */

#define DataType double

typedef struct /*type for a matrix of floating values*/
{
 int rows,cols;
 double ** cells;
}DTypeMatrix;
 
typedef struct /*type for a vector of floating values*/
{
 int rows;
 double ** cells;
}DTypeVector;

typedef struct /*type for neural network data struc*/
 {
unsigned int OutLayer;
unsigned int Units[MaxLay];
DTypeMatrix W[MaxLay-1];
double  gradtol;
double  steptol;
double  maxstep;
double  maxrelstep;
unsigned int itnlimit;
short int usemaxstep;
short int type;  /* the type of network */
short int method;/* method to use */
short int squash;/* squashing function to use */
short int valid;
short int prtitinfo;
short int saveout;
}NeuralNet;

/*-------------------------- Prototypes ------------*/
/*---- Prototypes for Neural Network Main.c ----*/
 main(void);
 printf(char *, ...);
FILE *  fopen(char *,char *);
   fclose(FILE *);
   fprintf(FILE *, char *, ...);
 Estimate(void);
 
/*---- Prototypes for General.c ----*/
 RestoreParms(DTypeVector *);
 SaveParms(DTypeVector *);
 NotYetAvail(void);
 HandleOutOfMem(void);
 
/*---- Prototypes for Search methods.c ----*/
 do_HookeJeeves(void);
 ExMove(void);
 PatMove(void);
 SetInitStep(void);
 AllotSearchWorkSpace(void);
 LockSearchWorkSpace(void);
 UnlockSearchWorkSpace(void);

/*---- Prototypes for Gradient methods.c ----*/
 do_GaussNewton(void);
 do_quasiGaussNewton(void);
 ComputegradSSforqGN(void);
 AppendResidZeros(void);
 AppendMuIdentity(void);
 LineSearch(double *);
 ConstrainStep(void);
 UpdateParms(DataType);
 StopYet(double,int,int);
 StopAtFirst(double);
double  Compute_minlambda(void);
 AllotGradientWorkSpace(void);
 LockGradientWorkSpace(void);
 UnlockGradientWorkSpace(void);

/*---- Prototypes for Ord Least Squares.c ----*/
 OLSbyQRmethod(DTypeMatrix *,DTypeVector *,DTypeVector *,DTypeVector *);
 ComputeQY(DTypeMatrix *,DTypeVector *,DTypeVector *);
 SolveRbY(DTypeMatrix *,DTypeVector *,DTypeVector *);
 QRDecomposition(DTypeMatrix *,DTypeVector *,DTypeVector *);
 WriteYXToFile(FILE *, DTypeVector *, DTypeMatrix * );

/*---- Prototypes for Compute_SS.c ----*/
 logisticSquash_dSquash(DTypeVector *,DTypeVector *);
DataType  ComputeOutputJac(void);
 ComputeJacobian(DataType *);
double  Compute_SS_Jac(void);
 logisticSquash(DTypeVector *);
DataType  ComputeOutputOnly(void);
double  Compute_SS(void);
 HLock_Alpha_dSquash(void);
 HUnlock_Alpha_dSquash(void);
 HLock_Alpha(void);
 HUnlock_Alpha(void);
 
/*---- Prototypes for Setup Network.c ----*/
 SetupNetDefaults(void);
 AllotInitNewNetWeights(void);
 HLockNet(void);
 HUnlockNet(void);
 DisplayNet(void);
 InitWeights(DTypeMatrix *);
 SetupTolerances(void);

/*---- Prototypes for DTypeMatrix.c ----*/
 AllotDTypeMatrix(DTypeMatrix *,int,int);
 DisplayDTypeMatrix(DTypeMatrix *);
 WriteMatrixToFile(FILE *,DTypeMatrix * );
 ClearDTypeMatrix(DTypeMatrix *);
 Matrix_by_Vec(DTypeMatrix *,DTypeVector *,DTypeVector *);     
 Matrix_by_Diag(DTypeMatrix *,DTypeVector *,DTypeMatrix *);
 Matrix_by_Matrix(DTypeMatrix *,DTypeMatrix *,DTypeMatrix *);

/*---- Prototypes for DTypeVector.c ----*/
double  L2Norm(DTypeVector *);
DataType  Vec_by_Vec(DTypeVector *,DTypeVector *);
 CopyDTypeVector(DTypeVector *,DTypeVector *);
 AllotDTypeVector(DTypeVector *,int);
 DisplayDTypeVector(DTypeVector *);
 WriteVectorToFile(FILE *,DTypeVector *);
 ClearDTypeVector(DTypeVector *);
 
/*---- Prototypes for testdata.c ----*/
 SetTestNet(NeuralNet *);
 testData(NeuralNet *);
 testJacobian(DTypeMatrix *,DTypeVector *,DTypeVector *,DTypeVector *);
 testdepdata(NeuralNet *);
Listing:  Compute SS.c

#include "Neural Network.h"
#include <math.h>

extern FILE * Jac;

extern NeuralNet * theNet;
extern DTypeVector yData;
extern DTypeMatrix XData;
extern DTypeVector Alpha[];
extern DTypeMatrix  Jac_T;
extern DTypeVector Resid;
extern DTypeVector dSquash[];
extern DTypeMatrix  Phi, T2;

/******************************************************/
/*------------------
 Logistic squashing function and its derivative.
*/
static void logisticSquash_dSquash(vector,dvector)
 DTypeVector * vector;
 DTypeVector * dvector;
{
 register int    i;
 register DataType temp;
 register DataType * cell;
 register DataType * dcell;
 
 cell = *vector->cells;
 dcell = *dvector->cells;
 
 for(i=0; i<vector->rows; i++, cell++, dcell++)
 { temp = exp(-*cell);
 *cell = 1/(1+temp);
 *dcell = (*cell)*(1.0-*cell);
 }
}

/*------------------
 Compute output for the neural net where the input 
 values are specified by the Alpha[0] vector.  Also 
 computes derivative of 
 squash function.
*/
static DataType  ComputeOutputJac()
{
int i;
 
for(i=0; i < theNet->OutLayer-1; i++)
/* skip the output layer since don’t want squash */
{Matrix_by_Vec(&theNet->W[i],&Alpha[i],&Alpha[i+1]);
 logisticSquash_dSquash(&Alpha[i+1], &dSquash[i+1]);
}
Matrix_by_Vec(&theNet->W[theNet->OutLayer-1], &Alpha[theNet->OutLayer-1], 
&Alpha[theNet->OutLayer]);
 /* no squash for output layer */
return(**Alpha[theNet->OutLayer].cells);
}

/*------------------
 Compute Jacobian for the SS problem where the input values are specified 
by the Alpha[0] vector.   Expects activation values for each layer in 
Alpha[i], i=1,..,OutLayer
*/
static void ComputeJacobian(jcell)
DataType* jcell; 
{
int i,j,k;
int OL;
DataType * acell;/* pntr to elem of Alpha vec */
DataType * pcell;/* pntr to elements of Phi matrix */
 
OL = theNet->OutLayer;

/*** 1st, get the alpha values just prior to output ***/
acell = *Alpha[OL-1].cells+Alpha[OL-1].rows-1;
for(i=0;i<Alpha[OL-1].rows;i++,acell--,
 jcell-=Jac_T.cols)
 *jcell = *acell;

/**** Second, calculate the Phi_OL-1 matrix and use with the second to 
last layer ****/
Phi.cols = theNet->W[OL-1].cols;
Matrix_by_Diag(&theNet->W[OL-1], &dSquash[OL-1], &Phi);
acell = *Alpha[OL-2].cells + Alpha[OL-2].rows-1;
 
for(j=0; j<Alpha[OL-2].rows; j++, acell--)
{pcell = *Phi.cells + (Phi.cols-1);
 
 for(k=0; k<Phi.cols; k++, pcell--, 
 jcell -= Jac_T.cols)
 *jcell = (*acell)*(*pcell);
}

/**** Third, calculate Jacobian values for any other layers ****/
 
for(i=OL-3; i>-1; i--)
 
 /* indexing specifies the Alpha vector */
{T2.cols = theNet->W[i+1].cols;
 Matrix_by_Matrix(&Phi, &theNet->W[i+1], &T2);
 Phi.cols = T2.cols;
 Matrix_by_Diag(&T2, &dSquash[i+1], &Phi);
 acell = *Alpha[i].cells + Alpha[i].rows-1;  
 /* points to the last element of the Alpha[i] vec */
 for(j=0; j<Alpha[i].rows; j++, acell--)
 { pcell = *Phi.cells + (Phi.cols-1);
 for(k=0; k<Phi.cols; k++, pcell--, 
 jcell -= Jac_T.cols)
 *jcell = (*acell)*(*pcell);
 }
 }
}

/*------------------
 Compute the sum of squares and Jacobian for net.
 Output layer must be a singleton, so that dependent 
 values can be represented as a vector of scalars.  
 Because the SS function is only appropriate
 for scalars, to use vector output variables would 
 need to formulate a multidimensionl
 sum of squares criteria.
*/
double Compute_SS_Jac()
{
int i;
DataType * y;    /* pntr to actual dependent value */
DataType * r;  /* pntr to cells of residual vec */
DataType * jcell;/* pntr to cells of Jacobian 
 matrix, stored as transpose */
double SS;
DataType * handle; /* used for pseudo vector */
 
HLock(Phi.cells);
HLock(T2.cells);
HLock_Alpha_dSquash();

Alpha[0].cells = & handle;
y = *yData.cells;
r = *Resid.cells;
 
jcell = *Jac_T.cells + (Jac_T.rows-1)*Jac_T.cols;  
SS = 0.0;
for(i=0; i < XData.rows; i++, r++, jcell++)
{
 handle = *XData.cells + i*XData.cols;
 *r = y[i] - ComputeOutputJac();
 SS += (double)pow( *r , 2);
 ComputeJacobian(jcell);
}

HUnlock(Phi.cells);
HUnlock(T2.cells);
HUnlock_Alpha_dSquash();
return(SS);
}

/*------------------
 Logistic squashing function and its derivative.
*/
static void logisticSquash(vector)
DTypeVector * vector;
{
register int i;
register DataType  * cell;
 
cell = *vector->cells;
 
for(i=0; i<vector->rows; i++, cell++)
 *cell = 1/(1+exp(-*cell));
}

/*------------------
 Compute output for the neural net where the input values are specified 
by the Alpha[0] vector.
*/
static DataType  ComputeOutputOnly()
{
int i;
 
for(i=0; i < theNet->OutLayer-1; i++)
{Matrix_by_Vec(&theNet->W[i],&Alpha[i],&Alpha[i+1]);
 logisticSquash(&Alpha[i+1]);
}
 Matrix_by_Vec(&theNet->W[theNet->OutLayer-1], 
 &Alpha[theNet->OutLayer-1], 
 &Alpha[theNet->OutLayer]);
return(**Alpha[theNet->OutLayer].cells);
}

/*------------------
 Compute the sum of squares
*/
double Compute_SS()
{
int i;
DataType * y;
double SS;
DataType * handle; /* used for pseudo vector */
double r = 0.0;

HLock_Alpha();
 
Alpha[0].cells = & handle;
y = *yData.cells;
SS = 0.0;
for(i=0; i < XData.rows; i++)
{
 handle = *XData.cells + i*XData.cols;
 r = y[i] - ComputeOutputOnly();
 SS += (double)pow( r , 2);
}

HUnlock_Alpha();
 
return(SS);
}

/*------------------*/
static void HLock_Alpha_dSquash()
{
int i;
 
for(i=0; i<theNet->OutLayer; i++)
{HLock(Alpha[i].cells);
 HLock(dSquash[i].cells);
}
}

static void HUnlock_Alpha_dSquash()
{
int i;
 
for(i=0; i<theNet->OutLayer; i++)
{HUnlock(Alpha[i].cells);
 HUnlock(dSquash[i].cells);
}
}

static void HLock_Alpha()
{
int i;
 
for(i=0; i<theNet->OutLayer; i++)
 HLock(Alpha[i].cells);
}

static void HUnlock_Alpha()
{
int i;
 
for(i=0; i<theNet->OutLayer; i++)
 HUnlock(Alpha[i].cells);
}
Listing:  General.c

#include "Neural Network.h"
#include <math.h>

extern FILE * Jac;

extern NeuralNet * theNet;
extern DTypeVector yData;
extern DTypeMatrix XData;
extern DTypeVector Alpha[];
extern DTypeVector Pi;
extern DTypeVector Diag;

/*----------------------
 Restore the weight parameters in vector Pi to W[i] 
 for use in line search algorithm.
*/
RestoreParms(vec)
DTypeVector * vec;
{
int j,k,N;
DataType * v;
DataType * w;
 
v = *vec->cells;
for(j=0; j<theNet->OutLayer; j++)
{N = (theNet->W[j].rows)*(theNet->W[j].cols);
 w = *theNet->W[j].cells;
 for(k=0; k<N; k++, w++, v++)
 *w = *v;
}
}

/*----------------------
 Save the weight parameters in vector Pi for use in 
 line search algorithm.
*/
SaveParms(vec)
DTypeVector * vec;
{
int j,k,N;
DataType * v;
DataType * w;
 
v = *vec->cells;
for(j=0; j<theNet->OutLayer; j++)
{N = (theNet->W[j].rows)*(theNet->W[j].cols);
 w = *theNet->W[j].cells;
 for(k=0; k<N; k++, w++, v++)
 *v = *w;
}
}

NotYetAvail()
{
printf("chosen method not yet available") , ExitToShell();
}

HandleOutOfMem()
{
printf("Out of memory") , ExitToShell();
}
Listing:  Neural Network Main.c

#include "Neural Network.h"
#include <math.h>

/*---- Debugging prototypes and structs ----*/
Stestnet(NeuralNet *);
FILE * Jac;
/*----------------------------------------*/

/*** Global structures *****************/
NeuralNet * theNet;/* pointer to a NeuralNet struc */
NeuralNet Nets[ActiveNets];

DTypeVector yData;
DTypeMatrix XData;
DTypeMatrix Jac_T;
DTypeVector Pi, Diag;
DTypeVector Resid;
DTypeVector gradSS;
DTypeMatrix Phi, T2;
DTypeVector dSquash[MaxLay];
DTypeVector Alpha[MaxLay];

/******************************************************/
main()
{
theNet = &Nets[0];
printf("this is test\n");
if((Jac = fopen("Testdata:Jacobian","w"))==NULL)
{printf("Can't open file");
 ExitToShell();
}

SetupNetDefaults();
SetTestNet(theNet);
AllotInitNewNetWeights();
testData(theNet);

Estimate();
theNet->method = HookeJeeves;
Estimate();

fclose(Jac);

}

/******************************************************/

Estimate()
{
int termcode;  

HLock(yData.cells);
HLock(XData.cells);
 
switch(theNet->method)
{case BackProp:
 { break;
 }
 case SimAnn:
 { break;
 }
 case HookeJeeves:
 { AllotSearchWorkSpace();
 LockSearchWorkSpace();
 termcode = do_HookeJeeves();
 UnlockSearchWorkSpace();
 break;
 }
 case GaussNew:
 { AllotGradientWorkSpace();
 LockGradientWorkSpace();
 termcode = do_GaussNewton();
 UnlockGradientWorkSpace();
 break; 
 }
 case qGaussNew:
 { AllotGradientWorkSpace();
 LockGradientWorkSpace();
 termcode = do_quasiGaussNewton();
 UnlockGradientWorkSpace();
 break; 
 }
 default:
 NotYetAvail();
 }

HUnlock(yData.cells);
HUnlock(XData.cells);
 
return(termcode);
}
Listing:  Setup Network.c

#include "Neural Network.h"
#include <math.h>

extern NeuralNet * theNet;
extern DTypeVector yData;
extern DTypeMatrix XData;
extern DTypeVector Alpha[];
extern DTypeMatrix  Jac_T;
extern DTypeVector Pi, Diag;
extern DTypeVector Resid;
extern DTypeVector  gradSS;
extern DTypeVector dSquash[];
extern DTypeMatrix  Phi, T2;

double  CalcMachEPS();

/******************************************************/
SetupNetDefaults()
{
SetupTolerances();
theNet->OutLayer = 2;
theNet->Units[0] = 0;
theNet->Units[1] = 2;
theNet->itnlimit = 15;
theNet->maxstep = 10.0;   /* default step limit */
theNet->maxrelstep = .1;
theNet->usemaxstep = TRUE;
theNet->type = SigUB;
theNet->method = HookeJeeves;
theNet->squash = Logistic;
theNet->valid = FALSE;
}

/*------------------
Allot memory for weight matrices and initialize weight values.  Expects 
the number of units in each layer to be already defined.
*/
AllotInitNewNetWeights()
{
int i;
 
for(i=0; i<theNet->OutLayer; i++)
{/* ---- allot memory for Weight matrices ----*/
 AllotDTypeMatrix(&(theNet->W[i]), theNet->Units[i+1], theNet->Units[i]);
 
 /*---- initialize parameter values ----*/
 HLock(theNet->W[i].cells);
 InitWeights(&(theNet->W[i]));
 HUnlock(theNet->W[i].cells);
}
}

HLockNet()
{
int i;
 
for(i=0; i<theNet->OutLayer; i++)
 HLock(theNet->W[i].cells);

}

HUnlockNet()
{
int i;

for(i=0; i<theNet->OutLayer; i++)
 HUnlock(theNet->W[i].cells);
}

DisplayNet()
{
int i;
printf("Output layer is %d \n",theNet->OutLayer);
for(i=0; i<=theNet->OutLayer; i++)
printf("Neurons in layer %d is %d\n",i,theNet->Units[i]);
for(i=0; i<theNet->OutLayer; i++)
{
 printf("Weight matrix for layer %d is:\n",i);
 DisplayDTypeMatrix(&theNet->W[i]);
}
}

InitWeights(matrix)
DTypeMatrix * matrix;/* weight matrix to use in network calculation
 */
{
int i,j,k;
DataType * cell;
DataType d = .10;

cell = *matrix->cells;
for(i=0, k=0; i<matrix->rows; i++) 
{
 for(j=0; j<matrix->cols; j++, k++, d = d+.01) 
 *(cell + (i*matrix->cols + j)) = d*(DataType)pow(-1,k);
}
}

SetupTolerances()
{
theNet->gradtol = .000001;
theNet->steptol = .000000000001;
}
Listing:  test data.c

#include "Neural Network.h"
#include <math.h>

extern DTypeVector yData;
extern DTypeMatrix XData;
extern DTypeVector Alpha[];

SetTestNet(net)
NeuralNet * net;
{
net->OutLayer = 2;
net->Units[0] = 3;
net->Units[1] = 2; 
net->Units[2] = 1;
net->method = GaussNew;
net->maxstep = 1.0;
}

testData(net)
NeuralNet * net;
{
int i,j,k,m;
DataType * cell;
DataType * ycell;
int Obs = 16;
int grid = 4;
int Parms = net->Units[0];
DataType data;
 
AllotDTypeMatrix(&XData,Obs,net->Units[0]);
AllotDTypeVector(&yData,Obs);

HLock(XData.cells);
HLock(yData.cells);
cell = *XData.cells; 
ycell = *yData.cells;
for(j=0; j<Obs; j++)
 *(cell + (j*XData.cols + 0)) = 1.0;
for(j=0, k=1, m=0; j<Obs; j++, k++)
{if(k==Obs/grid+1) k=1;
 if(k==1) m += 1;
 *(cell + (j*XData.cols + 1)) = 10*m;
 *(cell + (j*XData.cols + 2)) = 10*k;
}
for(j=0; j<Obs; j++)
 ycell[j]=pow(*(cell+(j*XData.cols+1)),.2)*pow(*(cell+(j*XData.cols+2)),.8);

HUnlock(XData.cells);
HUnlock(yData.cells);
}
Listing:  DTypeMatrixObject.c

/*------ Double FloatMatrix --------------*/
#include "Neural Network.h"
#include <math.h>

AllotDTypeMatrix(matrix,r,c)
DTypeMatrix * matrix;
int r,c;
{
matrix->rows = r;
matrix->cols = c;
matrix->cells = (DataType **) NewHandle( r*c*sizeof(DataType) );
if(!matrix->cells) HandleOutOfMem();
}

WriteMatrixToFile(jac,matrix)
FILE * jac;
DTypeMatrix * matrix;
{
int i,j;
DataType * cell;
 
HLock(matrix->cells);
 
cell = *matrix->cells;
for(i=0; i<matrix->rows; i++) 
{
 for(j=0; j<matrix->cols; j++)  
 fprintf(jac,"%.5lf ",*(cell + (i*matrix->cols + j)) );
 fprintf(jac,"\n");
}
HUnlock(matrix->cells); 
}

DisplayDTypeMatrix(matrix)
DTypeMatrix *matrix;
{
int i,j;
DataType * cell;
 
HLock(matrix->cells);
 
cell = *matrix->cells;
for(i=0; i<matrix->rows; i++) 
{
 for(j=0; j<matrix->cols; j++)  
 printf("(%.5le)  ",*(cell + (i*matrix->cols + j)) );
 printf("\n");
}
HUnlock(matrix->cells); 
}

ClearDTypeMatrix(matrix)
DTypeMatrix *matrix;
{
int i,j;
DataType * cell;
 
HLock(matrix->cells);
 
cell = *matrix->cells;
for(i=0; i<matrix->rows; i++) 
{
 for(j=0; j<matrix->cols; j++) 
 *(cell + (i*matrix->cols + j)) = 0.0;
}
HUnlock(matrix->cells);
}

/*---------------------- 
 postmultiply matrix by vector, result in prod 
 no checking for conformability
*/
Matrix_by_Vec(matrix,vector,prod)  
DTypeMatrix *  matrix;  
DTypeVector *  vector;  
DTypeVector *  prod; 
{
register int i;
register int j;
register int R;
register int C;
register DataType temp;
register DataType * y;
register DataType * row;
register DataType * vec;

R = matrix->rows;
C = matrix->cols;
vec = *vector->cells;
y = *prod->cells;
for(i=0; i<R; y++, i++)
{row = *matrix->cells + i*C;
 temp = (*row)*(*vec);
 for(j=1; j<C; j++)  
 temp += (*(row+j))*(*(vec+j));
 *y = temp;
}
}

/*---------------------- 
postmultiply matrix by diagonal matrix, result in prod
no checking for conformability
diagonal matrix must be square, stored as a vector
*/
Matrix_by_Diag(matrix,diag,prod)
DTypeMatrix *  matrix;
DTypeVector *  diag;
DTypeMatrix *  prod;
{
register int i;
register int j;
register int R;
register int C;
register DataType * y;
register DataType * row;
register DataType * vec;

R = matrix->rows;
C = matrix->cols;
vec = *diag->cells;
row = *matrix->cells;
y = *prod->cells;
for(i=0; i<R; i++)
{for(j=0; j<C; j++, row++, y++)  
 *y = (*row)*(*(vec+j));
}
}

/*---------------------- 
 postmultiply matrix1 by matrix2, result in prod 
 no checking for conformability
*/
Matrix_by_Matrix(matrix1,matrix2,prod)
DTypeMatrix *  matrix1;
DTypeMatrix *  matrix2;
DTypeMatrix *  prod;
{
register int i;
register int j;
register int k;
register int C1;
int R1;
register int C2;
register DataType sum;
register DataType * y;
register DataType * row;
register DataType * col;
 
R1 = matrix1->rows;
C1 = matrix1->cols;
C2 = matrix2->cols;
y = *prod->cells;
 
for(i=0; i<R1; i++)
{for(j=0; j<C2; j++) 
 { col = *matrix2->cells + j;
 row = *matrix1->cells + i*C1;
 sum = 0.0;
 for(k=0; k<C1; k++, col += C2, row++)
 sum += (*row)*(*(col));
 *(y + i*C2 +j) = sum;
 }
}
}
Listing:  DTypeVectorObject.c

/*------ DataType Vector ----------------------*/

#include "Neural Network.h"
#include <math.h>

/*-------------------- 
Compute the L2 norm of a vector
*/
double L2Norm(vec)
DTypeVector * vec;
{
register int i;
register int N = vec->rows;
register double norm = 0.0;
register DataType * cell = *vec->cells;

for(i=0; i<N; i++, cell++)
 norm += pow((double)*cell,2.0);
 
return(norm);
}

/*--------------------
Multiply vector 1 by vector 2 and return answer as scalar.
*/
DataType Vec_by_Vec(vec1,vec2)
DTypeVector * vec1;
DTypeVector * vec2;
{
register int i;
register DataType * cell1;
register DataType * cell2;
register DataType ans;
 
ans = 0.0;
cell1 = *vec1->cells;
cell2 = *vec2->cells;
for(i=0; i<vec1->rows; i++, cell1++, cell2++)
 ans += (*cell2)*(*cell1);
return(ans);
}
/*--------------------
 Copy vector vec1 to vec2.
*/
CopyDTypeVector(vec1,vec2)
DTypeVector * vec1;
DTypeVector * vec2;
{
register int i;
register DataType * cell1;
register DataType * cell2;
 
cell1 = *vec1->cells;
cell2 = *vec2->cells;
for(i=0; i<vec1->rows; i++, cell1++, cell2++)
 *cell2 = *cell1;
}

AllotDTypeVector(vector,r)
DTypeVector * vector;
int r;
{
vector->rows = r;
vector->cells = (DataType **) NewHandle( r*sizeof(DataType) );
if(!vector->cells) HandleOutOfMem();
}

DisplayDTypeVector(vector)/* display vector */
DTypeVector *  vector;  /* address of vector record      */
{
int i;
DataType * cell;
 
HLock(vector->cells);
 
cell = *vector->cells;
for(i=0; i<vector->rows; i++) 
{
 printf("(%.5le)  ",cell[i]);
}
printf("\n");

HUnlock(vector->cells); 
}

WriteVectorToFile(file,vector)
FILE * file;
DTypeVector * vector;
{
int i;
DataType * cell;
 
HLock(vector->cells);
 
cell = *vector->cells;
for(i=0; i<vector->rows; i++) 
{
 fprintf(file,"  %.5lf  ",cell[i]);
}
fprintf(file,"\n");

HUnlock(vector->cells); 
}

ClearDTypeVector(vector)
DTypeVector *  vector;  /* address of vector record */
{
int i;
DataType * cell;
 
HLock(vector->cells);
 
cell = *vector->cells;
for(i=0; i<vector->rows; i++) 
{
 cell[i] = 0.0;
}
HUnlock(vector->cells);
}
Listing:  Gradient Methods.c

#include "Neural Network.h"
#include <math.h>

extern FILE * Jac;

extern NeuralNet * theNet;
extern DTypeVector yData;
extern DTypeMatrix XData;
extern DTypeVector Alpha[];
extern DTypeMatrix  Jac_T;
extern DTypeVector Pi;
extern DTypeVector Diag;
extern DTypeVector Resid;
extern DTypeVector dSquash[];
extern DTypeVector gradSS;
extern DTypeMatrix  Phi, T2;

static unsigned int totparms;
/*total parms in model,set by AllotGradientWorkSpace()*/

/*------------------------
Find parameter values that minimize sum of squares, uses line search 
method.
Expects:
 1. tolerance values gradtol, steptol; which are stored in NeuralNet 
structure.
 2. validly defined and alloted NeuralNet structure.
 3. input data in matrix XData and output data values in vector yData. 
Returns termcode
 FAIL if current parm value is not an approximate critical point
 METGRADTOL if norm of scaled gradient is less than gradtol
 METSTEPTOL if scaled step is less than steptol
 LINEFAIL if linesearch failed to find next parm distinct from current value
 EXCEDITNLIM if iteration limit exceeded
 SINGJAC if Jacobian is singular
*/
do_GaussNewton()
{
int itncount = 0;/* start iteration count at zero */
int termcode = FAIL;
int retcode;
int sing; /* flag for singularity of Jacobian */
double SS = 0;
 
while(termcode == FAIL)
{itncount += 1;  /* increment iteration count */
 SS = Compute_SS_Jac();
 Matrix_by_Vec(&Jac_T, &Resid, &gradSS); 
 
 sing = OLSbyQRmethod(&Jac_T,&Pi,&Diag,&Resid);
 
 if(sing)
 termcode = SINGJAC;
 else
 { SaveParms(&Pi); 
 if(theNet->usemaxstep == TRUE)
 ConstrainStep();
 retcode = LineSearch(&SS);
 termcode = StopYet(SS,retcode,itncount);
 }
 }/* end of while(termcode == FAIL) */
return(termcode);
}

/*------------------------
 Find parameter values that minimize sum of squares, uses quasi Gauss-Newton 
method combined with line search.  Adds a nonsingular diagonal matrix 
to Jacobian to overcome singularity problems.  Takes more memory than 
Gauss-Newton. Similar to Levenberg-Marquardt but trust region is a constant 
since only trying to fix singularity of Jacobian problem.
Expects:
 1. tolerance values gradtol, steptol; which are stored in NeuralNet 
structure.
 2. validly defined and alloted NeuralNet structure.
 3. input data in matrix XData and output data values in vector yData.
 4. total number of parameters for model in “totparms”.
Returns termcode
 FAIL if current parm value is not an approximate critical point
 METGRADTOL if norm of scaled gradient is less than gradtol
 METSTEPTOL if scaled step is less than steptol
 LINEFAIL if linesearch failed to find next parm distinct from current value
 EXCEDITNLIM if iteration limit exceeded
 NETSAT if network is possibly oversaturated
*/
do_quasiGaussNewton()
{
int i;
int itncount = 0;/* start iteration count at zero */
int termcode = FAIL; 
int retcode;
int sing; /* flag for singularity of Jacobian */
double SS = 0;
 
while(termcode == FAIL)
{itncount += 1;  /* increment iteration count */
 ClearDTypeMatrix(&Jac_T);
 SS = Compute_SS_Jac(); 
 AppendResidZeros(); 
 AppendMuIdentity();
 ComputegradSSforqGN(); 
 sing = OLSbyQRmethod(&Jac_T,&Pi,&Diag,&Resid);    
 if(sing)
 termcode = NETSAT;
 else
 { SaveParms(&Pi); 
 if(theNet->usemaxstep == TRUE)
 ConstrainStep();
 retcode = LineSearch(&SS);
 termcode = StopYet(SS,retcode,itncount);
 }
 } /* end of while(termcode == FAIL) */
 return(termcode);
}

/*------------------------
 Append zeros to the Resid vector.
*/
static AppendResidZeros()
{
int i;
DataType * v;

v = *Resid.cells + XData.rows;
for(i=0; i<totparms; i++, v++)
 *v = 0.0;
}

/*------------------------
Special function to compute gradient for quasi Gauss-Newton method.
*/
static ComputegradSSforqGN()
{
register int i;
register int j;
register int R; /* number of rows in matrix   */
register int C; /* number of columns in matrix */
register DataType temp;
register DataType * y;
register DataType * row;
register DataType * vec;

R = Jac_T.rows;
C = Jac_T.cols;
vec = *Resid.cells;
y = *gradSS.cells;
for(i=0; i<R; y++, i++)
{row = *Jac_T.cells + i*C;
 temp = (*row)*(*vec);
 for(j=1; j<XData.rows; j++)
 temp += (*(row+j))*(*(vec+j));
 *y = temp;
}
}

/*------------------------
Append a matrix Mu times the Identity matrix to the Jacobian.
*/
static AppendMuIdentity()
{
register int i,j;
register DataType * jcell;
register DataType MuI = .001;
 
for(i=0; i<totparms; i++)
{jcell = *Jac_T.cells + XData.rows + i*Jac_T.cols;
 for(j=0; j<i; j++, jcell++)
 *jcell = 0.0;
 *jcell = MuI; 
 j++ , jcell++;
 for(; j<totparms; j++, jcell++)
 *jcell = 0.0;
}
}

/*----------------------
Constrain the step of a Gauss-Newton or quasi Gauss-Newton method.
*/
static ConstrainStep()
{
register int i;
register int N = Diag.rows;
register double snorm = 0.0;
register double pnorm = 0.0;
register DataType * cell;
register double K;
 
cell = *Diag.cells;
for(i=0; i<N; i++, cell++)
 snorm += pow((double)*cell,2.0);
cell = *Pi.cells;
for(i=0; i<N; i++, cell++)
 pnorm += pow((double)*cell,2.0);
K = (theNet->maxrelstep*pnorm/snorm) + (theNet->maxstep/sqrt(snorm));
cell = *Diag.cells;
for(i=0; i<N; i++, cell++)
 *cell = (*cell)*K;
}

/*----------------------
Alg A6.3.1 of Dennis and Schnabel.
Expects full Gauss-Newton step in vector Diag.
Must set the SS value.
Returns retcode = LINEOK if satisfactory new parm value found
 = LINENOTOK if can’t find a step small 
 enough to reduce SS 
*/
static LineSearch(ss_)
double * ss_;  /* pointer to value of Sum of Squares */
{
double ss;/* value of Sum of Squares */
double minlambda;
double initslope;/* initial slope of SS function */
double lambda = 1.0; 
int retcode = 2; /* return code for search */
double a = .0001;

initslope = -Vec_by_Vec(&gradSS,&Diag);
minlambda = Compute_minlambda();
SaveParms(&Pi);
while(retcode > 1) 
{
 UpdateParms(lambda);
 ss = Compute_SS();
 if(ss < *ss_ + a*lambda*initslope)
 { retcode = LINEOK;
 *ss_ = ss;
 }
 else if(lambda < minlambda)
 { retcode = LINENOTOK;
 RestoreParms(&Pi);
 }
 else 
 lambda = .1*lambda;
 }
return(retcode);
}

/*----------------------
Update weight params by lambda times step given in vector Diag.
Expects old weight values to be in the vector Pi.
*/
static UpdateParms(lambda)
DataType lambda;
{
int j,k,N;
DataType * p;  /* pointer to step value */
DataType * w_; /* pointer to previous weight values */
DataType * w;  /* pointer to weight matrix */
 
p = *Diag.cells;
w_ = *Pi.cells;
for(j=0; j<theNet->OutLayer; j++)
{N = (theNet->W[j].rows)*(theNet->W[j].cols);
 w = *theNet->W[j].cells;
 for(k=0; k<N; k++, w++, w_++, p++)
 *w = *w_ - lambda*(*p);
}
}
 
/*------------------------
Determine if should stop searching for minimum.
Expects step in vector Diag.
Returns termcode
 FAIL if current parm value is not an approximate critical point
 METGRADTOL if norm of scaled gradient is less than gradtol
 METSTEPTOL if scaled step is less than steptol
 LINEFAIL if linesearch failed to find next parm distinct from current value
 EXCEDITNLIM if iteration limit exceeded
*/
static StopYet(ss,retcode,itncount)
double ss;
int retcode;/* return code from LineSearch() */
int itncount;  
{
int j,k,N;
DataType * v;  
DataType * w;  /* pointer to cell of weight matrix */
double rel = 0.0;/* hold value of relative gradient */
double max = 0.0;
int termcode = FAIL;
 
if (retcode == LINENOTOK) 
 termcode = termcode | LINEFAIL;
 
/*---- First check for zero gradient ----*/
v = *gradSS.cells;
for(j=0; j<theNet->OutLayer; j++)
{N = (theNet->W[j].rows)*(theNet->W[j].cols);
 w = *theNet->W[j].cells;
 for(k=0; k<N; k++, v++, w++)
 { rel = fabs((double)((*v))*(fabs((double)(*w))/ss));
 max = (max < rel) ? rel : max;
 }
}
if (max < theNet->gradtol) termcode =(termcode | METGRADTOL);
 
/*---- Second check for zero step ----*/
v = *Pi.cells;
max = 0.0;
for(j=0; j<theNet->OutLayer; j++)
{N = (theNet->W[j].rows)*(theNet->W[j].cols);
 w = *theNet->W[j].cells;
 for(k=0; k<N; k++, v++, w++)
 { rel = fabs(((double)(*v-*w))/((double)(*w)));   /* don’t need abs(ss) 
since ss is always positive */
 max = (max < rel) ? rel : max;
 }
}
if (max < theNet->steptol) termcode =(termcode | METSTEPTOL);
 
if (itncount > theNet->itnlimit) termcode = (termcode | EXCEDITNLIM);

return(termcode);
}
 
/*----------------------
Compute the minimum lambda allowed for line search.
Any lambda value lower than this value would meet the stop criteria for 
minimum step anyway.
*/
static double Compute_minlambda()
{
int j,k,N;
DataType * p;
DataType * w;  /* pointer to cell of weight matrix */
double rellength = 0.0; 
double maxrel = 0.0;

p = *Diag.cells;
for(j=0; j<theNet->OutLayer; j++)
{N = (theNet->W[j].rows)*(theNet->W[j].cols);
 w = *theNet->W[j].cells;
 for(k=0; k<N; k++, p++, w++)
 { rellength = fabs((double)((*p)/(*w)));    
 if (maxrel < rellength) maxrel = rellength;
 }
}
rellength = theNet->steptol/maxrel;/* just using rellength to calculate 
return value */
return(rellength);
}

/*----------------------
Allot memory for data structures used by the Jacobian matrix and other 
structures used in minimization. Physical storage of Jacobian is as the 
transpose. Requires # observations from the data structure. Since this is
always run before the method, it also sets the totparms variable.
*/
AllotGradientWorkSpace()
{
int i;
int mxprms = 1;
 
totparms = 0;    
 
for(i=0; i<theNet->OutLayer; i++)
{totparms += (theNet->Units[i+1])*(theNet->Units[i]);
 
 AllotDTypeVector(&Alpha[i], theNet->Units[i]);
 AllotDTypeVector(&dSquash[i], Alpha[i].rows);
 if(Alpha[i].rows > mxprms) mxprms =  Alpha[i].rows;
 }
 AllotDTypeVector(&(Alpha[theNet->OutLayer]), 1);
 if (theNet->method == qGaussNew)
 { AllotDTypeMatrix(&Jac_T, totparms, XData.rows+totparms);
 AllotDTypeVector(&Resid,XData.rows+totparms);
 }
 else
 { AllotDTypeMatrix(&Jac_T,totparms,XData.rows);
 AllotDTypeVector(&Resid,XData.rows);
 }
 AllotDTypeVector(&Pi,totparms);
 AllotDTypeVector(&Diag,totparms);
 AllotDTypeVector(&gradSS,totparms);
 AllotDTypeMatrix(&Phi,1,mxprms);
 AllotDTypeMatrix(&T2,1,mxprms);
}

LockGradientWorkSpace()
{
 HLock(Resid.cells);
 HLock(Jac_T.cells);
 HLock(Pi.cells);
 HLock(Diag.cells);
 HLock(gradSS.cells);
 HLockNet();
}

UnlockGradientWorkSpace()
{
 HUnlock(Resid.cells);
 HUnlock(Jac_T.cells);
 HUnlock(Pi.cells);
 HUnlock(Diag.cells);
 HUnlock(gradSS.cells);
 HUnlockNet();
}
Listing:  Ord Least Squares.c

#include "Neural Network.h"
#include <math.h>

extern FILE * Jac;

/*------------------------
Compute the next step of iteration by solving linear system.
*/
OLSbyQRmethod(X,P,D,Y)
DTypeMatrix * X;/* pointer to transpose of explanatory data */
DTypeVector * P;/* pointer to Pi vector */
DTypeVector * D;/* pointer to Diag vector */
DTypeVector * Y;/* pointer to vector of dependent variables*/
{
int sing;

sing = QRDecomposition(X,P,D);
if(sing==FALSE)
{ComputeQY(X,P,Y);
 SolveRbY(X,D,Y);
}
return(sing);
}


/*------------------------
Compute Q*Y to get dependent variable for triangularized system, where 
Q=U(N)*U(N-1)*...*U(1), is the product of N elementary reflecting matrices. 
Uses output from QR decomposition.
*/
ComputeQY(Q,P,Y)
DTypeMatrix * Q;
DTypeVector * P;
DTypeVector * Y; /* vector of dependent values */
{
register int i;
register int k;
register int M;
register int N;
register DataType sum;
register DataType * u;
register DataType * y;
register DataType * pi;
 
HLock(Q->cells);
HLock(P->cells);
HLock(Y->cells);
 
M = Q->cols;
N = Q->rows;
pi = *P->cells;
y = *Y->cells;

for(i=0; i<N; i++, pi++)
{u = *Q->cells +i*M;
 sum = 0.0;
 for(k=i; k<M; k++)
 sum += u[k]*y[k];
 sum = sum/(*pi);
 for(k=i; k<M; k++)
 y[k] -= sum*u[k];
}
 
HUnlock(Q->cells);
HUnlock(P->cells);
HUnlock(Y->cells);
}

/*------------------------
Algorithm 3.1.3 of Stewart. Solve linear system Rb=y for b, where R is 
an upper triangular nonsingular matrix.
R is stored as its transpose so R(i,j) is at jth row, ith column. The 
diagonal terms are in separate vector D as described in Algorithm 5.3.8 
of Stewart.  Answer is returned in vector D.
*/
SolveRbY(R,D,Y)
DTypeMatrix * R;
DTypeVector * D;
DTypeVector * Y;
{
DataType  * r;
register int i;
register int j;
register int M;
register int N;
register DataType * b;/* pointer to parameter values */
register DataType * y;
register DataType sum;
register DataType * temp;
 
HLock(R->cells);
HLock(D->cells);
HLock(Y->cells);
 
r = *R->cells;
M = R->cols;
N = D->rows;
b = *D->cells;
y = *Y->cells;
for(i=N-1; i>-1; i--)
{sum = 0.0;
 for(j=i+1,temp = r+i+j*M; j<N; j++, temp = temp + M)
 sum += (*temp)*b[j];
 b[i] = (y[i]-sum)/b[i];
}
 
HUnlock(R->cells);
HUnlock(D->cells);
HUnlock(Y->cells);
}

/*--------------------
QR decomposition algorithm, see Algorithm 3.8 in Introduction to Matrix 
Computations by G. Stewart, also Algorithm A3.2.1 in Numerical Methods 
for Unconstrained Optimization and Nonlinear Equations by Dennis and 
Schnabel.  Used to solve linear system Ax=b, where A is (MxN), x is (Nx1). 
For coding efficiency the input matrix A is the transpose of the matrix 
given in the statement of the algorithm available in above references. 
Assumes more observations than parameters, i.e. M>N.
*/

QRDecomposition(A,P,D)
DTypeMatrix * A; 
DTypeVector * P, * D;
{
register int k, j, i;
register int N;   /* number of rows in A */
register int M;   /* number of columns in A */
int sing = FALSE;/* flag for singular A matrix */
DataType * kth_col;
DataType * pi; /* pointer to array for the pi values */
register DataType * diag;
register DataType * alpha;
register DataType * temp; /* temporary pointer */
register DataType sign;
register DataType aida, sigma, tau;
 
HLock(A->cells);
HLock(P->cells);
HLock(D->cells);
kth_col  = *A->cells;/* start off in row zero */
N = A->rows;
M = A->cols;
pi = *P->cells;
diag = *D->cells;
for(k=0; k<N; k++, kth_col = kth_col + M, pi++, diag++)
{
 aida = 0.0;/* initialize max abs value to zero */
 temp = kth_col+k;
 for(i=k; i<M; i++, temp++ )
 { tau = fabs(*temp);/* calculate aida */
 if(aida < tau) aida = tau;
 }
 if(aida == 0.0)
 { *pi = 0.0;  /* column is already triangular */
 *diag = 0.0;
 sing = TRUE;
 }
 else
 { 
 for(i=k, alpha = kth_col + k; i<M; i++, alpha++)
 *alpha = (*alpha)/aida;
 sigma = 0.0;
 temp = kth_col+k;
 if(*temp>0) sign = 1;/* calculate sign term */
 else sign = -1;
 for(i=k; i<M; i++, temp++ )
 sigma += (*temp)*(*temp);
 sigma = sign*sqrt(sigma);
 *(kth_col + k) += sigma;
 *pi = (*(kth_col+k))*sigma;
 *diag = -aida*sigma;

 for(j=k+1; j<N; j++)
 { 
 temp = kth_col+k;
 alpha = kth_col+k + (j-k)*M;
 tau = 0.0;
 for(i=k; i<M; i++, alpha++, temp++)
 tau += (*temp)*(*alpha);
 tau = tau/(*pi);
 temp =kth_col+k;
 alpha=kth_col+k + (j-k)*M;
 for(i=k; i<M; i++, alpha++, temp++)
 *alpha -= (*temp)*tau;
 }
 }
 }
HUnlock(A->cells);
HUnlock(P->cells);
HUnlock(D->cells);
return(sing);
}

WriteYXToFile(jac,vector,matrix)
FILE * jac;
DTypeVector * vector;
DTypeMatrix * matrix;
{
int i,j;
DataType * mcell;
DataType * vcell;
 
HLock(matrix->cells);
HLock(vector->cells);
mcell = *matrix->cells;
vcell = *vector->cells;
for(j=0; j<matrix->cols; j++)
{fprintf(jac,"%.5lf ",vcell[j]);
 for(i=0; i<matrix->rows; i++)
 fprintf(jac,"%.8lf ",*(mcell + (i*matrix->cols + j)) );
 fprintf(jac,"\n");
}
HUnlock(matrix->cells);
HUnlock(vector->cells);
}

Listing:  Search methods.c

#include "Neural Network.h"
#include <math.h>

extern FILE *    Jac;

extern NeuralNet * theNet;
extern DTypeVector yData;
extern DTypeMatrix XData;
extern DTypeVector Alpha[];
extern DTypeVector Pi;
extern DTypeVector Diag;
extern DTypeVector gradSS;

static unsigned int totparms;
static int itncount = 0;
static double baseSS;
static double patSS;
static double RelStep;
static DTypeVector * base;
static DTypeVector * base_;

/*----------------------
Hooke and Jeeve’s method for direct search to find least squares, see 
“Methods of Optimization” by G.R. Walsh.
Expects:
 1. tolerance value steptol; stored in NeuralNet structure.
 2. validly defined and alloted NeuralNet structure.
 3. input data in matrix XData and output data values in vector yData.
 4. total number of parameters for model in “totparms”.
Returns termcode = FAIL if current parm value is not an approximate critical point
  = METSTEPTOL if scaled step is less than steptol
  = EXCEDITNLIM if iteration limit exceeded
*/
do_HookeJeeves()
{
int termcode = 0;
int success;
 
base = &Pi;
base_ = &gradSS;
RelStep = 1.0;
SaveParms(&gradSS);
 
printf("Starting Hooke and Jeeves method\n");
while(termcode == 0)
{patSS = Compute_SS();
 success = ExMove();
 switch(success)
 { case TRUE:
 {
 PatMove();
 break;
 }
 case FALSE:
 {
 RelStep = .5*RelStep;
 if (RelStep < theNet->steptol)
 termcode = METSTEPTOL;
 else if (itncount > theNet->itnlimit)
 termcode = EXCEDITNLIM;  
 break;
 }
 } /* end of “switch(success)” */
 } /* end of “while(termcode == 0)” */
 return(termcode);
}

/*----------------------
Pattern move where previous successful steps given in Diag and current 
weights given in Pi.
*/
PatMove()
{
int j,k,N;
int flag = 0;
double ss;
DTypeVector * temp;
DataType * w;  /* pointer to weight matrix */
DataType * b;
DataType * b_;
 
SaveParms(base);
baseSS = patSS;
do
{
/*---- install new pattern step and update patSS ----*/
 b_ = *base_->cells;
 for(j=0; j<theNet->OutLayer; j++)
 { N = (theNet->W[j].rows)*(theNet->W[j].cols);
 w = *theNet->W[j].cells;
 for(k=0; k<N; k++, w++, b_++)
 *w = 2*(*w) - *b_;
 }
 patSS = Compute_SS();

/*---- try an exploratory step from new pattern ----*/
 ExMove();

/*---- check if explore step found better point ----*/
 if(patSS<baseSS)
 { temp = base_; 
 base_ = base;
 base = temp;
 SaveParms(base);/* save weights as new base */
 baseSS = patSS; /* reset the base SS value */
 }
 else
 { RestoreParms(base);
 temp = base_;   
 base_ = base;
 base = temp;
 flag = 1;/* set flag to exit pattern move */
 }
 } while(flag < 1);
printf("patSS = %lf \n",patSS);
}

/*----------------------
Do the exploratory moves. Resets patSS to minimum SS value found in exploratory 
search.
Returns success = TRUE if found SS reducing change 
 = FALSE if couldn’t find SS reducing 
 change
*/
ExMove()
{
int i,j,N;
int success;
double ss;/* value of SS for an exploratory move */
double ss_; 
DataType step;
DataType * w;
 
itncount +=1;
ss_ = patSS;
for(j=0; j<theNet->OutLayer; j++)
{N = (theNet->W[j].rows)*(theNet->W[j].cols);
 w = *theNet->W[j].cells;
 for(i=0; i<N; i++, w++)
 { step = RelStep*(fabs(*w) + theNet->maxstep);
 *w = *w + step;
 ss = Compute_SS();
 if(ss<ss_)
 ss_ = ss;
 else
 { *w = *w - 2.0*step;
 ss = Compute_SS();
 if(ss<ss_)
 ss_ = ss;
 else
 { *w = *w + step;
 }
 }
 } /* end of for(i=0; i<N; i++, w++) */
} /* end of for(j=0; j<theNet->OutLayer; j++) */
success = ( ss_ < patSS ) ? TRUE : FALSE;
patSS = ss_;
return(success);
}

/*----------------------
Allot memory for Alpha, Pi, and Diag data structures.
Requires # observations from the data structure.
Since this is always run before execution of the method, it sets the totparms variable.
*/
AllotSearchWorkSpace()
{
int i;
int mxprms = 1;

totparms = 0;    
 
for(i=0; i<theNet->OutLayer; i++)
{totparms += (theNet->W[i].rows)*(theNet->W[i].cols);
 AllotDTypeVector(&Alpha[i], theNet->Units[i]);
 if(Alpha[i].rows > mxprms) mxprms =  Alpha[i].rows;
}
AllotDTypeVector(&(Alpha[theNet->OutLayer]), 1);
AllotDTypeVector(&Pi,totparms);
AllotDTypeVector(&Diag,totparms);
AllotDTypeVector(&gradSS,totparms);
}

LockSearchWorkSpace()
{
HLock(Pi.cells);
HLock(Diag.cells);
HLock(gradSS.cells);
HLockNet();
}

UnlockSearchWorkSpace()
{
HUnlock(Pi.cells);
HUnlock(Diag.cells);
HUnlock(gradSS.cells);
HUnlockNet();
}
 
