# This code is for the cubic correlation function and for
# the constant mean model.

# Inputting data and defining some initial objects.

# Initializing and then inputting the predictors. Below is
# for the 2-D case. Add additional xi for more dimensions.
x1 <- c(0)
x2 <- c(0)
x.training <- data.frame(x1, x2)
data.entry(x.training)

# Saving the data in a data frame.
x.training <- data.frame(x.training)
p <- ncol(x.training)

# Initializing and then inputting the response.
y.training <- c(0)
data.entry(y.training)

# Initializing and defining some parameters.
n <- length(y.training)
one <- rep(1, n)
RD <- diag(1, n)

# The values at which we wish to do prediction. Again, this is
# for the 2-D case and would need to be modified for more dimensions.
x1.pred <- seq(0, 1, .05)
rp <- rep(21, 21)
x2.pred <- rep(x1.pred, rp)
x1.pred <- rep(x1.pred, 21)
x.pred <- data.frame(x1.pred, x2.pred)
q <- nrow(x.pred)

# One could also do something like
#   x.pred <- latin.hypercube(400, 2)
# but you first have to load the BACCO functions.

# Finding the MLE of theta.
# S is the indicator that |a| <= b; T is the cubic spline piece.
# (Note: defining a function named T masks the usual shorthand T for TRUE.)
S <- function(a, b) { if (abs(a) > b) 0 else 1 }
T <- function(a, b) {
  if (abs(a) > b/2) 2*(1 - abs(a)/b)^3
  else 1 - 6*(a/b)^2 + 6*(abs(a)/b)^3
}
Rtheta <- function(h, theta) {
  R <- 1
  for (i in 1:p) R <- R * S(h[i], theta[i]) * T(h[i], theta[i])
  return(R)
}

# The objective (proportional to -2 log profile likelihood), minimized over theta.
# drop() reduces the 1 x 1 matrices to scalars so that y.training - mu
# is a valid vector operation.
l <- function(theta) {
  for (i in 1:n)
    for (j in 1:n)
      RD[i, j] <- Rtheta(x.training[i, ] - x.training[j, ], theta)
  RDinv <- solve(RD)
  mu <- drop(t(one) %*% RDinv %*% y.training / (t(one) %*% RDinv %*% one))
  sigma2 <- drop(1/n * t(y.training - mu) %*% RDinv %*% (y.training - mu))
  return(n*log(sigma2) + log(det(RD)))
}

# The starting values, lower, and upper bounds are for the 2-D case and
# will need to be modified for more than 2 dimensions.
W <- optim(c(5, 5), l, method = "L-BFGS-B",
           lower = c(0.1, 0.1), upper = c(100, 100))

# EBLUP
theta <- W$par
for (i in 1:n)
  for (j in 1:n)
    RD[i, j] <- Rtheta(x.training[i, ] - x.training[j, ], theta)
RDinv <- solve(RD)

# Kriging predictors and std. errors.
mu <- drop(t(one) %*% RDinv %*% y.training / (t(one) %*% RDinv %*% one))
yhat <- numeric(q)
r <- numeric(n)
for (i in 1:q) {
  for (j in 1:n) r[j] <- Rtheta(x.pred[i, ] - x.training[j, ], theta)
  yhat[i] <- mu + t(r) %*% RDinv %*% (y.training - mu)
}
dim(yhat) <- c(q, 1)

# Prediction variances. sigmahat and sub do not depend on i, so they are
# computed once, outside the loop.
sigmahat <- drop(1/n * t(y.training - mu) %*% RDinv %*% (y.training - mu))
sub <- drop(t(one) %*% RDinv %*% one)
predvars <- numeric(q)
for (i in 1:q) {
  for (j in 1:n) r[j] <- Rtheta(x.pred[i, ] - x.training[j, ], theta)
  h <- 1 - t(r) %*% RDinv %*% one
  predvars[i] <- sigmahat * (1 - t(r) %*% RDinv %*% r + (t(h) %*% h)/sub)
}
dim(predvars) <- c(q, 1)

# abs() guards against tiny negative variances caused by rounding.
prederrs <- sqrt(abs(predvars))
results <- data.frame(x.pred, yhat, prederrs)
results
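The one-dimensional cubic correlation used by Rtheta can be checked in isolation before running the full script. Below is a minimal stand-alone sketch; the names S.chk, T.chk, and R.chk are introduced here only so the check does not overwrite the script's own S, T, and Rtheta.

```r
# Stand-alone copy of the one-dimensional cubic correlation from the script,
# renamed so it does not clash with S, T, and Rtheta defined above.
S.chk <- function(a, b) if (abs(a) > b) 0 else 1
T.chk <- function(a, b) {
  if (abs(a) > b/2) 2*(1 - abs(a)/b)^3
  else 1 - 6*(a/b)^2 + 6*(abs(a)/b)^3
}
R.chk <- function(h, theta) S.chk(h, theta) * T.chk(h, theta)

R.chk(0, 1)     # 1: perfect correlation at zero lag
R.chk(0.25, 1)  # 0.71875: decays smoothly with |h|
R.chk(0.5, 1)   # 0.25: the two spline pieces agree at |h| = theta/2
R.chk(1.5, 1)   # 0: correlation vanishes once |h| > theta
```

Because Rtheta is just the product of this one-dimensional function over the p coordinates, these checks carry over dimension by dimension: the multidimensional correlation is 1 at zero lag and drops to 0 as soon as any coordinate of h exceeds the corresponding theta.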