# This code is for the cubic correlation function and for
# the constant mean model.

# Input the data and define some initial objects.
# Here I use equally spaced values of x as my inputs.
# You could also use inputs from a Latin hypercube,
# a uniform design, or something else.
x.training <- seq(0, 1, 0.1)

# In this example I use a function I call datagenerator
# to produce the observations y.training.
# In general you would simply enter the observed values of the code.
datagenerator <- function(x) {
  exp(-x) * sin(2 * pi * x) + x
}
y.training <- datagenerator(x.training)
n   <- length(y.training)
one <- rep(1, n)
RD  <- diag(1, n)

# The values at which we wish to do prediction
x.pred <- seq(0, 1, 0.01)
q <- length(x.pred)

# For illustrative purposes, plot the function datagenerator.
plot(x.pred, datagenerator(x.pred), type = "l", ylab = "y(x)",
     col = "green", lwd = 1.5)

# Finding the MLE of theta.
# S truncates the correlation to zero beyond range theta;
# T is the piecewise cubic part.
# (Note: naming a function T masks the base R shorthand T for TRUE.)
S <- function(h, theta) {
  if (abs(h) > theta) 0 else 1
}
T <- function(h, theta) {
  if (abs(h) > theta / 2) 2 * (1 - abs(h) / theta)^3
  else 1 - 6 * (h / theta)^2 + 6 * (abs(h) / theta)^3
}
Rtheta <- function(h, theta) {
  S(h, theta) * T(h, theta)
}

# Negative profile log-likelihood (up to additive constants) for theta;
# drop() reduces the 1x1 matrix products to scalars so the vector
# arithmetic below conforms.
l <- function(theta) {
  for (i in 1:n)
    for (j in 1:n)
      RD[i, j] <- Rtheta(x.training[i] - x.training[j], theta)
  RDinv <- solve(RD)
  mu <- drop(t(one) %*% RDinv %*% y.training / (t(one) %*% RDinv %*% one))
  sigma2 <- drop(1/n * t(y.training - mu) %*% RDinv %*% (y.training - mu))
  return(n * log(sigma2) + log(det(RD)))
}

# The next command finds the MLE of theta. You may need to modify
# the search limits specified in c(1, 100): the first entry is the
# lower bound and the second the upper bound over which the search
# for the MLE is conducted. If your estimator is not an interpolator,
# this suggests that the MLE you found produces a correlation matrix
# that is nearly singular; another choice of theta is necessary.
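As a quick cross-check of the piecewise cubic correlation defined by `S` and `T` above, here is a small sketch outside the R script (Python, purely illustrative; the function name `cubic_corr` is mine, not from the original code). It verifies that the correlation is 1 at lag zero, that the two cubic pieces meet continuously at `|h| = theta/2`, and that the correlation vanishes beyond the range `theta`:

```python
def cubic_corr(h, theta):
    """Cubic correlation: the product S(h) * T(h) from the R code.

    With u = |h| / theta:
      R(h) = 1 - 6 u^2 + 6 u^3   for u <= 1/2
           = 2 (1 - u)^3         for 1/2 < u <= 1
           = 0                   for u > 1
    """
    u = abs(h) / theta
    if u > 1:
        return 0.0
    if u > 0.5:
        return 2 * (1 - u) ** 3
    return 1 - 6 * u**2 + 6 * u**3

# R(0) = 1, and both cubic pieces equal 1/4 at u = 1/2:
print(cubic_corr(0.0, 2.0))   # 1.0
print(cubic_corr(1.0, 2.0))   # 0.25  (u = 0.5)
print(cubic_corr(2.5, 2.0))   # 0.0   (|h| beyond the range theta)
```

Because the correlation is exactly zero for `|h| > theta`, widely separated training points contribute nothing to each other's predictions, which is what makes the resulting correlation matrix sparse and (for moderate `theta`) well conditioned.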
W <- optimize(l, c(1, 100))

# EBLUP
theta <- W$minimum
S <- function(h) {
  if (abs(h) > theta) 0 else 1
}
T <- function(h) {
  if (abs(h) > theta / 2) 2 * (1 - abs(h) / theta)^3
  else 1 - 6 * (h / theta)^2 + 6 * (abs(h) / theta)^3
}
R <- function(h) {
  S(h) * T(h)
}

# Correlation matrix of the training inputs at the estimated theta
A <- matrix(rep(x.training, n), n, n)
B <- matrix(rep(x.training, rep(n, n)), n, n)
for (i in 1:n)
  for (j in 1:n)
    RD[i, j] <- R((A - B)[i, j])
RDinv <- solve(RD)

# Kriging predictors and standard errors
mu <- drop(t(one) %*% RDinv %*% y.training / (t(one) %*% RDinv %*% one))
yhat <- numeric(q)
r <- numeric(n)
for (i in 1:q) {
  for (j in 1:n)
    r[j] <- R(x.pred[i] - x.training[j])
  yhat[i] <- mu + t(r) %*% RDinv %*% (y.training - mu)
}

# Prediction variances; sigmahat and sub do not depend on the
# prediction point, so they are computed once outside the loop.
sigmahat <- drop(1/n * t(y.training - mu * one) %*% RDinv %*% (y.training - mu * one))
sub <- drop(t(one) %*% RDinv %*% one)
predvars <- numeric(q)
for (i in 1:q) {
  for (j in 1:n)
    r[j] <- R(x.pred[i] - x.training[j])
  h <- 1 - t(r) %*% RDinv %*% one
  predvars[i] <- sigmahat * (1 - t(r) %*% RDinv %*% r + h * (1 / sub) * h)
}
# abs() guards against tiny negative variances from rounding error
prederrs <- sqrt(abs(predvars))

results <- data.frame(x.pred, yhat, prederrs)
results

# Overlay the predictor and approximate 95% prediction bands
lines(x.pred, yhat, type = "l", lty = 4, col = "red", lwd = 1.5)
lines(x.pred, yhat + 2 * prederrs, lty = 4, lwd = 1.5)
lines(x.pred, yhat - 2 * prederrs, lty = 4, lwd = 1.5)
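To see concretely why the EBLUP interpolates the training data when the correlation matrix is well conditioned, here is a minimal two-point worked example outside the R script (Python, pure standard library; the helper names `cubic_corr` and `krige_two_points` are mine, not from the original code). At a training input, the correlation vector r equals a row of R, so r'R⁻¹ picks out that observation exactly:

```python
def cubic_corr(h, theta):
    # Same piecewise cubic correlation as the R functions S and T.
    u = abs(h) / theta
    if u > 1:
        return 0.0
    if u > 0.5:
        return 2 * (1 - u) ** 3
    return 1 - 6 * u**2 + 6 * u**3

def krige_two_points(x, y, theta, xstar):
    """Constant-mean kriging predictor with n = 2 training points."""
    r12 = cubic_corr(x[0] - x[1], theta)
    det = 1 - r12 * r12                      # det of [[1, r12], [r12, 1]]
    Rinv = [[1 / det, -r12 / det],
            [-r12 / det, 1 / det]]
    # GLS estimate of the constant mean: (1' R^-1 y) / (1' R^-1 1)
    num = sum(Rinv[i][j] * y[j] for i in range(2) for j in range(2))
    den = sum(Rinv[i][j] for i in range(2) for j in range(2))
    mu = num / den
    # Predictor: mu + r(x*)' R^-1 (y - mu 1)
    r = [cubic_corr(xstar - x[0], theta), cubic_corr(xstar - x[1], theta)]
    w = [sum(Rinv[i][j] * (y[j] - mu) for j in range(2)) for i in range(2)]
    return mu + r[0] * w[0] + r[1] * w[1]

x, y = [0.0, 0.5], [1.0, 2.0]
# At a training input the predictor reproduces the observation:
print(krige_two_points(x, y, theta=2.0, xstar=0.0))   # 1.0 (up to rounding)
print(krige_two_points(x, y, theta=2.0, xstar=0.5))   # 2.0 (up to rounding)
```

If `r12` were driven very close to 1 (a nearly singular R, as happens for an over-large theta), `det` would approach 0 and the computed weights would be dominated by rounding error — which is exactly the "estimator is not an interpolator" symptom the comments above warn about.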