Archive for category R/S-Plus

Generate PowerPoint slides in R

A new R package, “officer”, makes generating PowerPoint slides in R much easier. It is newly developed and still has bugs, but there are ways to work around them, and I believe the developer will fix them quickly.

First, install the package before using it.
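The standard installation call should do it (assuming the package is available on CRAN; the development version lives on GitHub):

```r
# install the released version from CRAN
install.packages("officer")

# or, for the latest development version (assumes the
# davidgohel/officer GitHub repository), via devtools:
# devtools::install_github("davidgohel/officer")
```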


Second, use the examples on the package’s GitHub site and other sites as a starting point for creating your own PowerPoint slides. I found one site in particular really useful.

Third, work around the bugs. When I used the package to automate the generation of PowerPoint slides, I found that no slide with an inserted image worked at all. After the PowerPoint files were generated, they could not be opened correctly: an error message always popped up asking to “repair” the presentation. I searched for several hours without finding any useful information. Then I visited the “officer” package’s GitHub page and found a hit in the issue tracker. The error message is caused by a bug in the package: image filenames cannot contain spaces; otherwise, you get the error.

So, I went ahead and changed my program to make sure there are no spaces in any image filenames. Voilà, it works great.

Another useful point is the slide size. If you want an inserted image to occupy the whole 16:9 widescreen slide, you should use the following parameters:

left = 0.0
top = 0.0
width = 40.0/3.0
height = 7.5

Passing this set of parameters to the ph_with_img_at function guarantees that the inserted image takes up the whole slide.
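Here is a minimal sketch of how it fits together, assuming the officer API of the time (read_pptx, add_slide, and the ph_with_img_at function mentioned above) and a hypothetical image file my_image.png:

```r
library(officer)

# create a new presentation and add a blank slide
doc <- read_pptx()
doc <- add_slide(doc, layout = "Blank", master = "Office Theme")

# insert the image so it covers the whole 16:9 widescreen slide
# (remember: the image filename must not contain spaces)
doc <- ph_with_img_at(doc,
                      src    = "my_image.png",  # hypothetical file
                      left   = 0.0,
                      top    = 0.0,
                      width  = 40.0 / 3.0,      # about 13.33 inches
                      height = 7.5)             # 7.5 inches

print(doc, target = "full_slide_image.pptx")
```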

Enjoy using the package, and thanks to the developer for sharing this wonderful work with us.



Using rsqlserver package in R

In the R programming environment, you can conveniently connect to SQL Server through the RSQLServer package. However, it is a little bit tricky, and not like the way you connect to a MySQL server through the RMySQL package. I searched for RSQLServer on Google and tried to find a good example of connecting to SQL Server. At the beginning, nothing worked. At one point I read a post and used the method it described; I was able to connect to the SQL Server, but the connection opened a system database instead of the database I wanted. Eventually I used R’s help command to learn how to use the dbConnect function, which helped me figure out the correct way to connect to the right database on the SQL server.

1) Install the RSQLServer package. Run this command in R:
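The standard installation call (assuming the package is available on CRAN):

```r
install.packages("RSQLServer")
```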


2) To use the package, you only need to use the following R command:
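Loading the package is the usual library call:

```r
library(RSQLServer)
```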


3) To connect to a specific database on a given SQL Server:

con <- dbConnect(RSQLServer::SQLServer(), server = "localhost",
                 database = "yourdatabasename",
                 properties = list(user = "yourusername",
                                   password = "yourpassword"))

That should work well for you. You have to provide your SQL Server address, database name, and login information to the function.

Once connected, you can use all the functions available in the DBI package to query database tables and manipulate data in the database. There are no more tricks.
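For example, a typical session using the generic DBI functions (the table name here is hypothetical):

```r
library(DBI)

# list the tables in the connected database
dbListTables(con)

# run a query and fetch the result as a data frame
# ("yourtable" is a hypothetical table name)
df <- dbGetQuery(con, "SELECT TOP 10 * FROM yourtable")

# close the connection when done
dbDisconnect(con)
```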



Clustering in R

There is a nice post that provides a number of examples illustrating how to determine the number of clusters.

Below are two examples:

# load the packages used below
library(reshape2)  # for acast
library(vegan)     # for cascadeKM

# prepare data from a dataframe (called df here for illustration)
# var1 includes one group
# var2 includes the second group
# var3 includes all the values used in clustering
# reshape the dataframe data into a matrix
kmdata <- acast(df, var1 ~ var2, value.var = 'var3')

# k-means: try every number of groups from 1 to nrow(kmdata)-1
my.cluster <- cascadeKM(kmdata, inf.gr = 1, sup.gr = nrow(kmdata) - 1)
plot(my.cluster, sortg = TRUE, grpmts.plot = TRUE)
calinski.best <- as.numeric(which.max(my.cluster$results[2, ]))
cat("Calinski criterion optimal number of clusters:", calinski.best, "\n")

# sum of squared error (elbow) method
wss <- (nrow(kmdata) - 1) * sum(apply(kmdata, 2, var))
for (i in 2:(nrow(kmdata) - 1)) wss[i] <- sum(kmeans(kmdata, centers = i)$withinss)
plot(1:(nrow(kmdata) - 1), wss, type = "b", xlab = "Number of Clusters",
     ylab = "Within groups sum of squares")
my.cluster2 <- kmeans(kmdata, 4)

Here is another nice article talking about clustering.



GET BLUE or BLUP from lmer in R

The following question and answer are copied and pasted from another site:


Why do the estimated values from a Best Linear Unbiased Predictor (BLUP) differ from a Best Linear Unbiased Estimator (BLUE)?

I understand that the difference between them is related to whether the grouping variable in the model is estimated as a fixed or random effect, but it’s not clear to me why they are not the same (if they are not the same).

I am specifically interested in how this works when using small area estimation, if that’s relevant, but I suspect the question is relevant to any application of fixed and random effects.


The values that you get from BLUPs aren’t estimated in the same way as the BLUE estimates of fixed effects; by convention BLUPs are referred to as predictions. When you fit a mixed effects model, what are estimated initially are the mean and variance (and possibly the covariance) of the random effects. The random effect for a given study unit (say a student) is subsequently calculated from the estimated mean and variance, and the data. In a simple linear model, the mean is estimated (as is the residual variance), but the observed scores are considered to be composed of both that and the error, which is a random variable. In a mixed effects model, the effect for a given unit is likewise a random variable (although in some sense it has already been realized).

You can also treat such units as fixed effects, if you like. In that case, the parameters for that unit are estimated as usual. In such a case however, the mean (for example) of the population from which the units were drawn is not estimated.

Moreover, the assumption behind random effects is that they were sampled at random from some population, and it is the population that you care about. The assumption underlying fixed effects is that you selected those units purposefully because those are the only units you care about.

If you turn around and fit a mixed effects model and predict those same effects, they tend to be ‘shrunk’ towards the population mean relative to their fixed effects estimates. You can think of this as analogous to a Bayesian analysis where the estimated mean and variance specify a normal prior and the BLUP is like the mean of the posterior that comes from optimally combining the data with the prior.

The amount of shrinkage varies based on several factors. An important determinant of how far the random effects predictions will be from the fixed effects estimates is the ratio of the variance of the random effects to the error variance. Here is a quick R demo for the simplest case with 5 ‘level 2’ units with only means (intercepts) fit. (You can think of this as test scores for students within classes.)

library(lme4) # we'll need to use this package
set.seed(1673) # this makes the example exactly reproducible
nj = 5; ni = 5; g = as.factor(rep(c(1:nj), each=ni))

##### model 1
pop.mean = 16; sigma.g = 1; sigma.e = 5
r.eff1 = rnorm(nj, mean=0, sd=sigma.g)
error = rnorm(nj*ni, mean=0, sd=sigma.e)
y = pop.mean + rep(r.eff1, each=ni) + error

re.mod1 = lmer(y~(1|g))
fe.mod1 = lm(y~0+g)
df1 = data.frame(fe1=coef(fe.mod1), re1=coef(re.mod1)$g)

##### model 2
pop.mean = 16; sigma.g = 5; sigma.e = 5
r.eff2 = rnorm(nj, mean=0, sd=sigma.g)
error = rnorm(nj*ni, mean=0, sd=sigma.e)
y = pop.mean + rep(r.eff2, each=ni) + error

re.mod2 = lmer(y~(1|g))
fe.mod2 = lm(y~0+g)
df2 = data.frame(fe2=coef(fe.mod2), re2=coef(re.mod2)$g)

##### model 3
pop.mean = 16; sigma.g = 5; sigma.e = 1
r.eff3 = rnorm(nj, mean=0, sd=sigma.g)
error = rnorm(nj*ni, mean=0, sd=sigma.e)
y = pop.mean + rep(r.eff3, each=ni) + error

re.mod3 = lmer(y~(1|g))
fe.mod3 = lm(y~0+g)
df3 = data.frame(fe3=coef(fe.mod3), re3=coef(re.mod3)$g)

So the ratios of the variance of the random effects to the error variance are 1/5 for model 1, 5/5 for model 2, and 5/1 for model 3. Note that I used cell means coding for the fixed effects models. We can now examine how the estimated fixed effects and the predicted random effects compare for these three scenarios.

# fe1 re1
# g1 17.88528 15.9897
# g2 18.38737 15.9897
# g3 14.85108 15.9897
# g4 14.92801 15.9897
# g5 13.89675 15.9897

# fe2 re2
# g1 10.979130 11.32997
# g2 13.002723 13.14321
# g3 26.118189 24.89537
# g4 12.109896 12.34319
# g5 9.561495 10.05969

# fe3 re3
# g1 13.08629 13.19965
# g2 16.36932 16.31164
# g3 17.60149 17.47962
# g4 15.51098 15.49802
# g5 13.74309 13.82224

Another way to end up with random effects predictions that are closer to the fixed effects estimates is when you have more data. We can compare model 1 from above, with its low ratio of random effects variance to error variance, to a version (model 1b) with the same ratio, but much more data (notice that ni = 500 instead of ni = 5).

##### model 1b
nj = 5; ni = 500; g = as.factor(rep(c(1:nj), each=ni))
pop.mean = 16; sigma.g = 1; sigma.e = 5
r.eff1b = rnorm(nj, mean=0, sd=sigma.g)
error = rnorm(nj*ni, mean=0, sd=sigma.e)
y = pop.mean + rep(r.eff1b, each=ni) + error

re.mod1b = lmer(y~(1|g))
fe.mod1b = lm(y~0+g)
df1b = data.frame(fe1b=coef(fe.mod1b), re1b=coef(re.mod1b)$g)
Here are the effects:

# fe1 re1
# g1 17.88528 15.9897
# g2 18.38737 15.9897
# g3 14.85108 15.9897
# g4 14.92801 15.9897
# g5 13.89675 15.9897

# fe1b re1b
# g1 15.29064 15.29543
# g2 14.05557 14.08403
# g3 13.97053 14.00061
# g4 16.94697 16.92004
# g5 17.44085 17.40445

On a somewhat related note, Doug Bates (the author of the R package lme4) doesn’t like the term “BLUP” and uses “conditional mode” instead (see pp. 22-23 of his draft lme4 book pdf). In particular, he points out in section 1.6 that “BLUP” can only meaningfully be used for linear mixed-effects models.



ggplot2 draw bar chart with error bars

When I tried to generate a nice bar chart, I came across the following post. It provided all the information I needed. Very nice work!

The R Graphics Cookbook also provides a lot of good examples. Some may be a little bit out of date, so error messages can show up when you apply an example directly, but it is still a good book.
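A minimal sketch of the usual approach, assuming a summary data frame with hypothetical columns group, mean, and se (standard error):

```r
library(ggplot2)

# hypothetical summary data: group means and standard errors
df <- data.frame(group = c("A", "B", "C"),
                 mean  = c(4.2, 6.8, 5.1),
                 se    = c(0.5, 0.7, 0.4))

# bar chart of the means, with error bars of +/- one standard error
ggplot(df, aes(x = group, y = mean)) +
  geom_bar(stat = "identity", fill = "steelblue") +
  geom_errorbar(aes(ymin = mean - se, ymax = mean + se), width = 0.2) +
  labs(x = "Group", y = "Mean value")
```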

