Posts Tagged R

Slicing a data frame in R

Slice data rows in a data frame

Normally, we can easily slice a data frame based on given values in a column. Since a data frame can be treated as a list, we can use the expression tbData[[colName]] to refer to a column whose name is stored in a variable. That makes the program more generic: colName holds the column header as a string.

tbData[tbData[[colName]]==colValue, ]
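As a minimal runnable sketch (the data frame and the names tbData, colName, and colValue are made up for illustration):

```r
# Small example data frame to slice (hypothetical data)
tbData <- data.frame(group = c("a", "b", "a", "c"),
                     value = c(1, 2, 3, 4))

# Both the column name and the target value live in variables,
# which keeps the slicing code generic
colName  <- "group"
colValue <- "a"

# Keep only the rows where the chosen column equals the value
sliced <- tbData[tbData[[colName]] == colValue, ]
print(sliced)  # rows 1 and 3
```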

The %in% operator in R is very useful. It tests whether the elements of one vector occur in another, and it is especially handy with character data. It can be used to:
1) Test whether the elements of a shorter vector are in a longer vector (e.g., 6:10 %in% 1:36);
2) Test which elements of a longer vector are in a shorter vector (e.g., 1:36 %in% 6:10);
3) Work with character vectors or factors (e.g., c("d", "e") %in% c("a", "b", "c", "d")).
If you want the indexes of the matching elements inside the larger vector, use the function "which": the indexes of the elements of 1:36 that fall in 6:10 are given by which(1:36 %in% 6:10).
By using this operator, we can slice a data frame based on a given string or string vector.

myDf[myDf$contrast %in% compstr, ]
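For instance, a small self-contained example (myDf and compstr are illustrative names matching the line above):

```r
# Hypothetical data frame of contrast labels and p-values
myDf <- data.frame(contrast = c("A-B", "A-C", "B-C"),
                   pvalue   = c(0.01, 0.20, 0.03))

# Keep only the contrasts listed in this character vector
compstr <- c("A-B", "B-C")

subset_df <- myDf[myDf$contrast %in% compstr, ]
print(subset_df)  # the "A-B" and "B-C" rows

# which() gives the matching row indexes instead of the rows
which(myDf$contrast %in% compstr)  # 1 and 3
```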

Slice data columns in a data frame

In R, slicing data columns in a data frame is pretty simple: either use the index (position) information of the columns or use the column names directly.
Using the index or position of columns to slice the data frame is more generic.

df[c(1, 4:ncol(df))]

Using column names to slice the data frame is useful when you know the column headers before you slice it.

df[c('c1', 'c5', 'c10')]
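For a concrete, runnable illustration of both styles on the built-in mtcars data set (the specific columns picked here are arbitrary):

```r
df <- mtcars

# By position: first column plus columns 4 through the last
by_index <- df[c(1, 4:ncol(df))]

# By name: only possible when you know the headers up front
by_name <- df[c("mpg", "hp", "wt")]

names(by_index)  # "mpg" "hp" "drat" "wt" "qsec" "vs" "am" "gear" "carb"
names(by_name)   # "mpg" "hp" "wt"
```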


Construct R formula with variable names in a vector automatically

Suppose you have a list of variable names in a vector and you want to construct a standard linear model from it. How can we do that in R? It is actually pretty easy: using two R functions, formula and paste, the model can be constructed automatically.

1. Additive model

PredictorVariables <- c("x1", "x2")

Or, for many predictors, generate the names automatically:

PredictorVariables <- paste("x", 1:100, sep="")

We can then construct a formula as follows:

Formula <- formula(paste("y ~ ", 
     paste(PredictorVariables, collapse=" + ")))
lm(Formula, Data)
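A self-contained version of the additive construction, with simulated data standing in for Data (the predictor names are illustrative):

```r
set.seed(42)

# Simulate a small data set with three predictors
Data <- data.frame(x1 = rnorm(50), x2 = rnorm(50), x3 = rnorm(50))
Data$y <- 1 + 2 * Data$x1 - Data$x2 + rnorm(50)

PredictorVariables <- c("x1", "x2", "x3")

# Builds the formula y ~ x1 + x2 + x3 from the vector
Formula <- formula(paste("y ~ ",
     paste(PredictorVariables, collapse = " + ")))
fit <- lm(Formula, Data)

length(coef(fit))  # intercept plus three slopes -> 4
```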

2. Multiplicative model

PredictorVariables <- paste("x", 1:100, sep="")

We can then construct a formula as follows:

Formula <- formula(paste("y ~ ", 
     paste(PredictorVariables, collapse=" * ")))
lm(Formula, Data)

If you have more than two variable names in the vector, the model can become quite complex, including higher-order interactions.

By using a similar idea, you can also include random terms in the constructed model.
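A minimal sketch of that idea: a random term is just another piece of text pasted into the formula. The grouping factor name block below is hypothetical, and the (1 | block) syntax assumes an lme4-style mixed model:

```r
PredictorVariables <- c("x1", "x2")

# Paste a random-intercept term for a (hypothetical) grouping
# factor "block" onto the fixed part of the formula
Formula <- formula(paste("y ~",
     paste(PredictorVariables, collapse = " + "),
     "+ (1 | block)"))

# The result is an ordinary R formula object;
# lme4::lmer(Formula, data = Data) would then fit it
print(Formula)
```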

Appendix: Get all column headers that match a certain pattern.
Suppose I have a data frame called my.df and some of its column headers include the string "trt_". We can use the following method to get all column headers containing "trt_" and store them in a vector.

my.colnames <- colnames(my.df)
my.headers <- my.colnames[grepl("trt_", my.colnames)]
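For example, on a toy data frame (the column names are invented for illustration):

```r
# Toy data frame whose headers mix treatment and other columns
my.df <- data.frame(id = 1:3, trt_a = 0, trt_b = 1, note = "x")

my.colnames <- colnames(my.df)
my.headers  <- my.colnames[grepl("trt_", my.colnames)]
print(my.headers)  # "trt_a" "trt_b"
```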


Generate PowerPoint slides in R

A new R package, officer, makes generating PowerPoint slides in R much easier. It is newly developed and still has bugs, but there are ways to work around them. I believe the developer of the package will fix the bugs quickly.

First, you have to install the package before using it:

install.packages("officer")

Second, use the examples on the package's GitHub site and other sites as a starting point to create your own PowerPoint slides; I found one site in particular really useful.

Third, work around the bugs. When I used the package to automate the generation of PowerPoint slides, I found that slides with inserted images did not work at all: after the PowerPoint files were generated, they could not be opened correctly. PowerPoint always popped up an error message asking to "repair" the presentation. I searched for several hours without finding anything useful. Then I visited the officer package's GitHub page and found a hit in the issue tracker: the error message is caused by a bug in the package. Image filenames cannot contain spaces; otherwise, you get the error.

So, I went ahead and changed my program to make sure there were no spaces in any image filenames. Voilà, it works great.

Another useful point is the slide size. If you want an inserted image to occupy a whole 16:9 widescreen slide, you should use the following parameters:

left = 0.0
top = 0.0
width = 40.0/3.0
height = 7.5

Use this set of parameters in the ph_with_img_at function to guarantee that inserted images take up the whole slide.
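A sketch of how those parameters might be wired together. The helper function add_full_slide_image is hypothetical, it only touches officer when called, and the "Blank" layout and "Office Theme" master names are assumptions:

```r
# Full-slide dimensions for a 16:9 widescreen deck, in inches
full_slide <- list(left = 0.0, top = 0.0,
                   width = 40.0 / 3.0,  # 13.333...
                   height = 7.5)

# Hypothetical helper: add one image as a full-bleed slide.
# Requires the officer package at call time; img_path must
# contain no spaces (the bug discussed above).
add_full_slide_image <- function(doc, img_path) {
  doc <- officer::add_slide(doc, layout = "Blank",
                            master = "Office Theme")
  officer::ph_with_img_at(doc, src = img_path,
                          left = full_slide$left,
                          top = full_slide$top,
                          width = full_slide$width,
                          height = full_slide$height)
}
```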

Enjoy using the package, and thanks to the developer for sharing this wonderful work with us.



Using the RSQLServer package in R

In the R programming environment, you can conveniently connect to SQL Server through the RSQLServer package. However, it is a little bit tricky, not like the way you connect to a MySQL server through the RMySQL package. I searched for RSQLServer on Google and tried to find a good example of connecting to the SQL server. At the beginning, nothing worked. At one point I read a post and used the method it described; I was able to connect to the SQL Server, but the connection opened a system database instead of the database I wanted. Eventually I used R's help command to learn how to use the dbConnect function, which helped me figure out the correct way to connect to the right database on the SQL server.

1) Install the package RSQLServer. Run this command in R:

install.packages("RSQLServer")

2) To load the package, you only need the following R command:

library(RSQLServer)

3) To connect to a specific database on a given SQL Server:

con <- dbConnect(RSQLServer::SQLServer(),
                 server = "localhost",
                 database = "yourdatabasename",
                 properties = list(user = "yourusername",
                                   password = "yourpassword"))

That should work well for you. You have to provide your SQL Server address, the database name, and your login information to the function.

Once connected, to query database tables and manipulate data in the database, you can use all the functions available in the DBI package. There are no more tricks.
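For example, typical DBI usage, sketched as a helper function so that nothing runs without a live connection (read_table and its query are illustrative; con is the connection created above):

```r
# Hypothetical helper: pull a whole table into a data frame.
# DBI is only needed when the function is actually called.
read_table <- function(con, table_name) {
  DBI::dbGetQuery(con, paste0("SELECT * FROM ", table_name))
}

# Other common DBI verbs, for reference:
#   DBI::dbListTables(con)           list tables in the database
#   DBI::dbWriteTable(con, "t", df)  write a data frame to a table
#   DBI::dbDisconnect(con)           close the connection
```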



Get BLUE or BLUP from lmer in R

The following text is copied and pasted from


Why do the estimated values from a Best Linear Unbiased Predictor (BLUP) differ from a Best Linear Unbiased Estimator (BLUE)?

I understand that the difference between them is related to whether the grouping variable in the model is estimated as a fixed or random effect, but it’s not clear to me why they are not the same (if they are not the same).

I am specifically interested in how this works when using small area estimation, if that’s relevant, but I suspect the question is relevant to any application of fixed and random effects.


The values that you get from BLUPs aren’t estimated in the same way as the BLUE estimates of fixed effects; by convention BLUPs are referred to as predictions. When you fit a mixed effects model, what are estimated initially are the mean and variance (and possibly the covariance) of the random effects. The random effect for a given study unit (say a student) is subsequently calculated from the estimated mean and variance, and the data. In a simple linear model, the mean is estimated (as is the residual variance), but the observed scores are considered to be composed of both that and the error, which is a random variable. In a mixed effects model, the effect for a given unit is likewise a random variable (although in some sense it has already been realized).

You can also treat such units as fixed effects, if you like. In that case, the parameters for that unit are estimated as usual. In such a case however, the mean (for example) of the population from which the units were drawn is not estimated.

Moreover, the assumption behind random effects is that they were sampled at random from some population, and it is the population that you care about. The assumption underlying fixed effects is that you selected those units purposefully because those are the only units you care about.

If you turn around and fit a mixed effects model and predict those same effects, they tend to be ‘shrunk’ towards the population mean relative to their fixed effects estimates. You can think of this as analogous to a Bayesian analysis where the estimated mean and variance specify a normal prior and the BLUP is like the mean of the posterior that comes from optimally combining the data with the prior.

The amount of shrinkage varies based on several factors. An important determinant of how far the random effects predictions will be from the fixed effects estimates is the ratio of the variance of the random effects to the error variance. Here is a quick R demo for the simplest case with 5 ‘level 2’ units with only means (intercepts) fit. (You can think of this as test scores for students within classes.)

library(lme4) # we'll need to use this package
set.seed(1673) # this makes the example exactly reproducible
nj = 5; ni = 5; g = as.factor(rep(c(1:nj), each=ni))

##### model 1
pop.mean = 16; sigma.g = 1; sigma.e = 5
r.eff1 = rnorm(nj, mean=0, sd=sigma.g)
error = rnorm(nj*ni, mean=0, sd=sigma.e)
y = pop.mean + rep(r.eff1, each=ni) + error

re.mod1 = lmer(y~(1|g))
fe.mod1 = lm(y~0+g)
df1 = data.frame(fe1=coef(fe.mod1), re1=coef(re.mod1)$g)

##### model 2
pop.mean = 16; sigma.g = 5; sigma.e = 5
r.eff2 = rnorm(nj, mean=0, sd=sigma.g)
error = rnorm(nj*ni, mean=0, sd=sigma.e)
y = pop.mean + rep(r.eff2, each=ni) + error

re.mod2 = lmer(y~(1|g))
fe.mod2 = lm(y~0+g)
df2 = data.frame(fe2=coef(fe.mod2), re2=coef(re.mod2)$g)

##### model 3
pop.mean = 16; sigma.g = 5; sigma.e = 1
r.eff3 = rnorm(nj, mean=0, sd=sigma.g)
error = rnorm(nj*ni, mean=0, sd=sigma.e)
y = pop.mean + rep(r.eff3, each=ni) + error

re.mod3 = lmer(y~(1|g))
fe.mod3 = lm(y~0+g)
df3 = data.frame(fe3=coef(fe.mod3), re3=coef(re.mod3)$g)

So the ratios of the variance of the random effects to the error variance is 1/5 for model 1, 5/5 for model 2, and 5/1 for model 3. Note that I used cell means coding for the fixed effects models. We can now examine how the estimated fixed effects and the predicted random effects compare for these three scenarios.
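The expected shrinkage can also be computed directly. For a balanced one-way design with ni observations per group, a group's predicted random effect is roughly its raw deviation from the grand mean multiplied by sigma.g^2 / (sigma.g^2 + sigma.e^2 / ni). This is a standard result, added here for illustration; it is not part of the original answer:

```r
# Shrinkage multiplier for a group mean in a balanced
# one-way random-intercept model
shrink <- function(sigma.g, sigma.e, ni) {
  sigma.g^2 / (sigma.g^2 + sigma.e^2 / ni)
}

shrink(1, 5, 5)    # model 1:  1/6     ~ 0.167, strong shrinkage
shrink(5, 5, 5)    # model 2:  25/30   ~ 0.833, mild shrinkage
shrink(5, 1, 5)    # model 3:  25/25.2 ~ 0.992, almost none
shrink(1, 5, 500)  # same ratio as model 1 but much more data ~ 0.952
```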

# fe1 re1
# g1 17.88528 15.9897
# g2 18.38737 15.9897
# g3 14.85108 15.9897
# g4 14.92801 15.9897
# g5 13.89675 15.9897

# fe2 re2
# g1 10.979130 11.32997
# g2 13.002723 13.14321
# g3 26.118189 24.89537
# g4 12.109896 12.34319
# g5 9.561495 10.05969

# fe3 re3
# g1 13.08629 13.19965
# g2 16.36932 16.31164
# g3 17.60149 17.47962
# g4 15.51098 15.49802
# g5 13.74309 13.82224

Another way to end up with random effects predictions that are closer to the fixed effects estimates is when you have more data. We can compare model 1 from above, with its low ratio of random effects variance to error variance, to a version (model 1b) with the same ratio, but much more data (notice that ni = 500 instead of ni = 5).

##### model 1b
nj = 5; ni = 500; g = as.factor(rep(c(1:nj), each=ni))
pop.mean = 16; sigma.g = 1; sigma.e = 5
r.eff1b = rnorm(nj, mean=0, sd=sigma.g)
error = rnorm(nj*ni, mean=0, sd=sigma.e)
y = pop.mean + rep(r.eff1b, each=ni) + error

re.mod1b = lmer(y~(1|g))
fe.mod1b = lm(y~0+g)
df1b = data.frame(fe1b=coef(fe.mod1b), re1b=coef(re.mod1b)$g)

Here are the effects:

# fe1 re1
# g1 17.88528 15.9897
# g2 18.38737 15.9897
# g3 14.85108 15.9897
# g4 14.92801 15.9897
# g5 13.89675 15.9897

# fe1b re1b
# g1 15.29064 15.29543
# g2 14.05557 14.08403
# g3 13.97053 14.00061
# g4 16.94697 16.92004
# g5 17.44085 17.40445

On a somewhat related note, Doug Bates (the author of the R package lme4) doesn’t like the term “BLUP” and uses “conditional mode” instead (see pp. 22-23 of his draft lme4 book pdf). In particular, he points out in section 1.6 that “BLUP” can only meaningfully be used for linear mixed-effects models.

