Posts Tagged R

Generate PowerPoint slides in R

A new R package, “officer”, makes generating PowerPoint slides in R much easier. It is newly developed and still has bugs, but there are ways to work around them, and I believe the developer will fix them quickly.

First, you have to install the package before using it.

install.packages("officer")

Second, use the examples on the GitHub site and other sites as a starting point to create your own PowerPoint slides. I found this site (http://lenkiefer.com/2017/09/23/crafting-a-powerpoint-presentation-with-r/) really useful.

Third, get around the bugs. When I used the package to automate the generation of PowerPoint slides, I found that no slide with an inserted image worked at all: after the PowerPoint files were generated, they could not be opened correctly, and an error message always popped up asking you to “repair” the presentation. I searched for several hours without finding any useful information. Then I visited the “officer” package’s GitHub page and found a hit in the issue tracker. The error message is caused by a bug in the package: image filenames cannot include spaces; otherwise, you get the error.

So, I went ahead and changed my program to make sure there are no spaces in any image filenames. Voilà, it works great.
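
If your image files already exist on disk, one way to apply this workaround is to rename them before building the slides. Here is a minimal sketch in base R (the "images" folder and the PNG pattern are just assumptions for illustration):

# rename every PNG file in a folder, replacing spaces with underscores
old_names <- list.files("images", pattern = "\\.png$", full.names = TRUE)
new_names <- gsub(" ", "_", old_names)
file.rename(old_names, new_names)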

Another useful point is the slide size. If you want an inserted image to occupy a whole 16:9 widescreen slide, you should use the following parameters:

left = 0.0
top = 0.0
width = 40.0/3.0
height = 7.5

Using this set of parameters in the ph_with_img_at function guarantees that the inserted image fills the whole slide.
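
Putting it together, here is a minimal sketch (the image filename chart.png and the "Blank" layout of the default template are assumptions, and ph_with_img_at is the function from the officer version current when this was written):

library(officer)

pres <- read_pptx()  # the default template uses the 13.33 x 7.5 inch widescreen size
pres <- add_slide(pres, layout = "Blank", master = "Office Theme")
pres <- ph_with_img_at(pres, src = "chart.png",   # no spaces in the filename!
                       left = 0.0, top = 0.0,
                       width = 40.0/3.0, height = 7.5)
print(pres, target = "full_slide_image.pptx")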

Enjoy using the package, and thanks to the developer for sharing this wonderful work with us.

Using the RSQLServer package in R

In the R programming environment, you can conveniently connect to SQL Server through the RSQLServer package. However, it is a little bit tricky, and not like the way you connect to a MySQL server through the RMySQL package. I searched for RSQLServer on Google and tried to find a good example of connecting to the SQL server. At the beginning, nothing worked. At one point I read a post and used the method it described; I was able to connect to the SQL Server, but the connection opened a system database instead of the database I wanted. Eventually I used the R help command to learn how to use the dbConnect function, which helped me figure out the correct way to connect to the right database on the SQL server.

1) Install the RSQLServer package by running this command in R:


install.packages("RSQLServer")

2) To use the package, you only need to load the DBI package:


library(DBI)

3) To connect to a specific database on a given SQL Server:


con <- dbConnect(RSQLServer::SQLServer(),
                 server = "localhost",
                 database = "yourdatabasename",
                 properties = list(user = "yourusername",
                                   password = "yourpassword"))

That should work well for you. You have to provide your SQL Server address, database name, and login information to the function.

Once you are connected, you can use all the functions available in the DBI package to query tables and manipulate data in the database. There are no more tricks.
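
For example, here is a minimal sketch using standard DBI functions (the table name "mytable" is a hypothetical placeholder):

dbListTables(con)                                      # list the tables in the database
df <- dbGetQuery(con, "SELECT TOP 10 * FROM mytable")  # run a query, get a data frame
dbDisconnect(con)                                      # close the connection when done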

Get BLUE or BLUP from lmer in R

The following text is copied and pasted from https://stats.stackexchange.com/questions/122218/why-do-the-estimated-values-from-a-best-linear-unbiased-predictor-blup-differ.

Q:

Why do the estimated values from a Best Linear Unbiased Predictor (BLUP) differ from a Best Linear Unbiased Estimator (BLUE)?

I understand that the difference between them is related to whether the grouping variable in the model is estimated as a fixed or random effect, but it’s not clear to me why they are not the same (if they are not the same).

I am specifically interested in how this works when using small area estimation, if that’s relevant, but I suspect the question is relevant to any application of fixed and random effects.

A:

The values that you get from BLUPs aren’t estimated in the same way as the BLUE estimates of fixed effects; by convention BLUPs are referred to as predictions. When you fit a mixed effects model, what are estimated initially are the mean and variance (and possibly the covariance) of the random effects. The random effect for a given study unit (say a student) is subsequently calculated from the estimated mean and variance, and the data. In a simple linear model, the mean is estimated (as is the residual variance), but the observed scores are considered to be composed of both that and the error, which is a random variable. In a mixed effects model, the effect for a given unit is likewise a random variable (although in some sense it has already been realized).

You can also treat such units as fixed effects, if you like. In that case, the parameters for that unit are estimated as usual. In such a case however, the mean (for example) of the population from which the units were drawn is not estimated.

Moreover, the assumption behind random effects is that they were sampled at random from some population, and it is the population that you care about. The assumption underlying fixed effects is that you selected those units purposefully because those are the only units you care about.

If you turn around and fit a mixed effects model and predict those same effects, they tend to be ‘shrunk’ towards the population mean relative to their fixed effects estimates. You can think of this as analogous to a Bayesian analysis where the estimated mean and variance specify a normal prior and the BLUP is like the mean of the posterior that comes from optimally combining the data with the prior.
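
To make the analogy concrete: in a simple random-intercept model, the BLUP for group j pulls the group mean toward the overall mean by a shrinkage factor that depends on the variance ratio and the group size. This is not from the quoted answer; it is the standard textbook formula, computed here with the true simulation values from model 1 below:

# shrinkage factor: lambda_j = sigma.g^2 / (sigma.g^2 + sigma.e^2 / n_j)
# BLUP_j = mu + lambda_j * (ybar_j - mu)
sigma.g <- 1; sigma.e <- 5; n.j <- 5
lambda <- sigma.g^2 / (sigma.g^2 + sigma.e^2 / n.j)
lambda  # 1/6: predictions sit much closer to the overall mean than to the group means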

The amount of shrinkage varies based on several factors. An important determinant of how far the random effects predictions will be from the fixed effects estimates is the ratio of the variance of the random effects to the error variance. Here is a quick R demo for the simplest case with 5 ‘level 2’ units with only means (intercepts) fit. (You can think of this as test scores for students within classes.)

library(lme4) # we'll need to use this package
set.seed(1673) # this makes the example exactly reproducible
nj = 5; ni = 5; g = as.factor(rep(c(1:nj), each=ni))

##### model 1
pop.mean = 16; sigma.g = 1; sigma.e = 5
r.eff1 = rnorm(nj, mean=0, sd=sigma.g)
error = rnorm(nj*ni, mean=0, sd=sigma.e)
y = pop.mean + rep(r.eff1, each=ni) + error

re.mod1 = lmer(y~(1|g))
fe.mod1 = lm(y~0+g)
df1 = data.frame(fe1=coef(fe.mod1), re1=coef(re.mod1)$g)

##### model 2
pop.mean = 16; sigma.g = 5; sigma.e = 5
r.eff2 = rnorm(nj, mean=0, sd=sigma.g)
error = rnorm(nj*ni, mean=0, sd=sigma.e)
y = pop.mean + rep(r.eff2, each=ni) + error

re.mod2 = lmer(y~(1|g))
fe.mod2 = lm(y~0+g)
df2 = data.frame(fe2=coef(fe.mod2), re2=coef(re.mod2)$g)

##### model 3
pop.mean = 16; sigma.g = 5; sigma.e = 1
r.eff3 = rnorm(nj, mean=0, sd=sigma.g)
error = rnorm(nj*ni, mean=0, sd=sigma.e)
y = pop.mean + rep(r.eff3, each=ni) + error

re.mod3 = lmer(y~(1|g))
fe.mod3 = lm(y~0+g)
df3 = data.frame(fe3=coef(fe.mod3), re3=coef(re.mod3)$g)

So the ratios of the random-effects standard deviation to the error standard deviation are 1/5 for model 1, 5/5 for model 2, and 5/1 for model 3. Note that I used cell means coding for the fixed effects models. We can now examine how the estimated fixed effects and the predicted random effects compare for these three scenarios.

df1
# fe1 re1
# g1 17.88528 15.9897
# g2 18.38737 15.9897
# g3 14.85108 15.9897
# g4 14.92801 15.9897
# g5 13.89675 15.9897

df2
# fe2 re2
# g1 10.979130 11.32997
# g2 13.002723 13.14321
# g3 26.118189 24.89537
# g4 12.109896 12.34319
# g5 9.561495 10.05969

df3
# fe3 re3
# g1 13.08629 13.19965
# g2 16.36932 16.31164
# g3 17.60149 17.47962
# g4 15.51098 15.49802
# g5 13.74309 13.82224

Another way to end up with random effects predictions that are closer to the fixed effects estimates is to have more data. We can compare model 1 from above, with its low ratio of random effects variance to error variance, to a version (model 1b) with the same ratio but much more data (notice that ni = 500 instead of ni = 5).

##### model 1b
nj = 5; ni = 500; g = as.factor(rep(c(1:nj), each=ni))
pop.mean = 16; sigma.g = 1; sigma.e = 5
r.eff1b = rnorm(nj, mean=0, sd=sigma.g)
error = rnorm(nj*ni, mean=0, sd=sigma.e)
y = pop.mean + rep(r.eff1b, each=ni) + error

re.mod1b = lmer(y~(1|g))
fe.mod1b = lm(y~0+g)
df1b = data.frame(fe1b=coef(fe.mod1b), re1b=coef(re.mod1b)$g)

Here are the effects:

df1
# fe1 re1
# g1 17.88528 15.9897
# g2 18.38737 15.9897
# g3 14.85108 15.9897
# g4 14.92801 15.9897
# g5 13.89675 15.9897

df1b
# fe1b re1b
# g1 15.29064 15.29543
# g2 14.05557 14.08403
# g3 13.97053 14.00061
# g4 16.94697 16.92004
# g5 17.44085 17.40445

On a somewhat related note, Doug Bates (the author of the R package lme4) doesn’t like the term “BLUP” and uses “conditional mode” instead (see pp. 22-23 of his draft lme4 book pdf). In particular, he points out in section 1.6 that “BLUP” can only meaningfully be used for linear mixed-effects models.
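
Back to the title of this post: once you have fit a model with lmer, pulling out the BLUEs and BLUPs is straightforward. A minimal sketch, using the model re.mod1 from the demo above:

fixef(re.mod1)   # BLUEs: the estimated fixed effects (here just the intercept)
ranef(re.mod1)   # BLUPs (conditional modes) of the random effects for g
coef(re.mod1)$g  # fixed effects plus BLUPs combined, one row per group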

ggplot2: draw bar charts with error bars

When I tried to generate a nice bar chart, I came across the following post. It provides all the information I needed. Very nice work!

http://www.sthda.com/english/wiki/ggplot2-barplots-quick-start-guide-r-software-and-data-visualization

R Graphics Cookbook also provides a lot of good examples. Some of them may be a little bit obsolete; when you apply them directly, error messages show up. But it is still a good book.
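
For reference, here is a minimal self-contained sketch with made-up summary data (group means and standard errors):

library(ggplot2)

df <- data.frame(group = c("A", "B", "C"),
                 mean  = c(4.2, 5.8, 3.1),
                 se    = c(0.4, 0.6, 0.3))

ggplot(df, aes(x = group, y = mean)) +
  geom_bar(stat = "identity", fill = "steelblue") +
  geom_errorbar(aes(ymin = mean - se, ymax = mean + se), width = 0.2)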

Using R to create brick block graphs

R provides powerful graphics functions, and we can use them to draw very complicated and beautiful graphs. Here is a complete solution for drawing a brick block graph in R. The function, named brickblock, takes a matrix, a graph height, a vector of x-labels, and a group name as inputs. The length of the x-label vector should be the same as the second dimension of the matrix. The function code is listed below.

brickblock <- function(m, height, xlabs, groupname) 
{
	mdim <- dim(m); # dimension of the input matrix
	margin <- 30;  # default margin
	padding <- 5;
	
	# create a new matrix filled with 0
	m1 <- matrix(0, mdim[1]+1, mdim[2]) 
	## relative percentage of height
	for (i in 1:mdim[2]) { m1[1:mdim[1],i] <- m[,i]/sum(m[,i]) }
	## height of each rectangles
	m1 <- m1 * height
	## vertical position
	for (j in (mdim[1]-1):1) { m1[j,] = m1[j,] + m1[j+1,] }
	m1[mdim[1]+1,] <- rep(0,mdim[2])
	m1 <- m1 + margin
	v1 <- m1[2:(mdim[1]+1),]
	v2 <- m1[1:mdim[1],] - padding

	## width of rectangles
	rectwidth = height / mdim[2]
	## horizontal positions
	h1 <- rep(0, mdim[2])
	h2 <- h1
	for (i in 1:mdim[2]) 
	{
	    h1[i] <- margin + (i - 1) * rectwidth
	    h2[i] <- margin + (i) * rectwidth - padding ## leave a space between two rectangles
	}

	## an example showing colouring and shading
	plot(c(margin, margin+height), c(margin, margin+height), type= "n", xlab="", ylab="", axes=FALSE)
	for (i in 1:mdim[2])
	{
	    for (j in 1:mdim[1])
	    {
	        rect(h1[i],v1[j,i],h2[i],v2[j,i], col="grey", border="black") # coloured
	        text(h1[i]+15, v1[j,i]+11, j, col="white");
	    }
	    text(h1[i]+15, v2[1,i]+11, xlabs[i])
	}
	mtext(groupname, side=2, line=0, cex=1.6);
}

Using the function is pretty simple: first create a matrix, then call the brickblock function like the following.

x<-matrix(c(18,8,10,8,14,19,12,3,14,3,11,17,18,7,11,12,64,37,44,40),4)
brickblock(x, 500, c("2CC", "2SC", "1CC", "1SC", "Total"),"Cluster"); 

It will draw a graph similar to the following.

[Image: brickblock example graph]
