I don’t use R regularly, though I’m somewhat familiar with it. My work is 90% Excel (the lingua franca of my world) and 10% Python, which I just plain like.
Yet here is a paper evaluating R’s design and how it is *actually* used. Neat.
We assembled a body of over 3.9 million lines of R code. This corpus is intended to be representative of real-world R usage, but also to help understand the performance impacts of different language features. We classified programs into 5 groups. The Bioconductor project open-source repository collects 515 Bioinformatics-related R packages.
The Shootout benchmarks are simple programs from the Computer Language Benchmark Game implemented in many languages that can be used to get a performance baseline. Some R users donated their code; these programs are grouped under the Miscellaneous category. The fourth and largest group of programs was retrieved from the R package archive on CRAN.
Some excerpts of the results:
We used the Shootout benchmarks to compare the performance of C, Python and R. Results appear in Fig. 7. On those benchmarks, R is on average 501 times slower than C and 43 times slower than Python. Benchmarks where R performs better, like regex-dna (only 1.6 times slower than C), are usually cases where R delegates most of its work to C functions.
…Not only is R slow, but it also consumes significant amounts of memory. Unlike C, where data can be stack allocated, all user data in R must be heap allocated and garbage collected.
…One of the key claims made repeatedly by R users is that they are more productive with R than with traditional languages. While we have no direct evidence, we will point out that, as shown by Fig. 10, R programs are about 40% smaller than C code. Python is even more compact on those shootout benchmarks, at least in part, because many of the shootout problems are not easily expressed in R. We do not have any statistical analysis code written in Python and R, so a more meaningful comparison is difficult. Fig. 11 shows the breakdown between code written in R and code in Fortran or C in 100 Bioconductor packages. On average, there is over twice as much R code. This is significant as package developers are surely savvy enough to write native code, and understand the performance penalty of R, yet they would still rather write code in R.
…Parameters. The R function declaration syntax is expressive and this expressivity is widely used. In 99% of the calls, at most 3 arguments are passed, while the percentage of calls with up to 7 arguments is 99.74% (see Fig. 12).
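The expressivity being measured here is largely defaults plus named arguments: a function can declare many parameters while most call sites supply only a few, which is consistent with the 99%-at-most-3-arguments figure. A rough Python analogue (the `summarize` function is purely illustrative, not a real R or Python API):

```python
# A hedged Python sketch of R-style flexible argument matching:
# many declared parameters, few supplied at the call site.
def summarize(x, trim=0.0, digits=7, na_rm=False):
    """Illustrative only; loosely modeled on R's mean(x, trim=, na.rm=)."""
    xs = [v for v in x if v is not None] if na_rm else list(x)
    xs = sorted(xs)
    k = int(len(xs) * trim)        # trimmed mean, like R's mean(trim=)
    if k:
        xs = xs[k:-k]
    return round(sum(xs) / len(xs), digits)

print(summarize([1, 2, 3]))                 # one of four arguments supplied
print(summarize([1, None, 3], na_rm=True))  # named argument, R-style
```

With sensible defaults, the long parameter list costs call sites nothing, which is presumably why so few calls pass more than three arguments.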
…Laziness. Lazy evaluation is a distinctive feature of R that has the potential for reducing unnecessary work performed by a computation. Our corpus, however, does not bear this out. Fig. 14(a) shows the rate of promise evaluation across all of our data sets.
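Since the paper leans on "promises" without unpacking them, here is how I think of R's lazy arguments: each argument arrives as an unevaluated thunk that is forced at most once, when (and if) it is used. A minimal Python sketch — the `Promise` class and `pick` function are my own illustrative names, not R internals:

```python
class Promise:
    """Sketch of an R-style promise: a deferred expression,
    evaluated at most once (memoized). Illustrative only."""
    _UNSET = object()

    def __init__(self, thunk):
        self._thunk = thunk            # the unevaluated expression
        self._value = Promise._UNSET

    def force(self):
        if self._value is Promise._UNSET:
            self._value = self._thunk()  # evaluate exactly once
            self._thunk = None           # let the closure be collected
        return self._value

def pick(cond, a, b):
    # Only the argument actually used is ever evaluated,
    # mirroring how R forces a promise on first use.
    return a.force() if cond else b.force()

cheap = Promise(lambda: 1)
expensive = Promise(lambda: sum(range(10**6)))
print(pick(True, cheap, expensive))  # prints 1; the expensive thunk is never forced
```

The paper's finding is that in practice nearly all promises get forced anyway, so this machinery mostly adds overhead rather than saving work.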
And the upshot:
The R user community roughly breaks down into three groups. The largest group is the end users. For them, R is mostly used interactively, and R scripts tend to be short sequences of calls to prepackaged statistical and graphical routines. This group is mostly unaware of the semantics of R; they will, for instance, not know that arguments are passed by copy or that there is an object system (or two)…
One of the reasons for the success of R is that it caters to the needs of the first group, end users. Many of its features are geared towards speeding up interactive data analysis. The syntax is intended to be concise.
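The pass-by-copy point is the one I'd expect to trip up Python users most, since Python passes object references. A sketch of the difference, approximating R's behavior with an explicit deep copy (real R optimizes this with copy-on-modify, which this sketch ignores):

```python
import copy

def mutate_in_place(xs):
    # Python semantics: the callee receives a reference,
    # so the caller's list changes underneath them.
    xs.append(99)

def mutate_r_style(xs):
    # R-style call-by-copy (sketch): the callee works on its
    # own copy, so the caller's data is untouched.
    xs = copy.deepcopy(xs)
    xs.append(99)
    return xs

data = [1, 2, 3]
mutate_in_place(data)
print(data)     # [1, 2, 3, 99] -- the caller sees the mutation

data = [1, 2, 3]
result = mutate_r_style(data)
print(data)     # [1, 2, 3] -- caller's data unchanged
print(result)   # [1, 2, 3, 99]
```

Copy semantics are friendlier for interactive end users (no spooky action at a distance), which fits the paper's point that R's design caters to that first group, at some cost in memory and speed.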