2018 R Consortium Silver member representatives for Board and ISC

By Announcement, Blog, News, R Consortium Project

Per the R Consortium Bylaws and ISC Charter, the Silver Member class is entitled to elect representatives of the class for a term running from January 1, 2018 through December 31, 2018, as follows:

  • 1 representative to the ISC
  • 1 Silver Member Board Director for every 7 Silver Members, subject to provisions 4.2 and 4.3(d) of the R Consortium Bylaws. This means the Silver Member class can elect up to 2 Board Directors representing the class.

These elections ran during the month of November 2017, with 3 nominees for Silver Member Board Director and 3 nominees for the Silver Member ISC representative.

I am pleased to announce those elected by the Silver member class to serve on the Board of Directors and ISC effective 1/1/2018 through 12/31/2018.

Silver Member ISC representative

Silver Member Board Directors
Please join me in congratulating each of the elected representatives.
We would also like to share a big thank you to the outgoing Silver Member Board Director, Richard Pugh of Mango Solutions. His guidance and leadership within the R Consortium have had a huge impact on its current success.

Recap of the uRos2017 conference

By Blog, Events

Editor’s Note: This post comes from Nicoleta Caragea, uRos2017 conference organizer. uRos2017 is a conference held in November 2017 for collaboration around the use of R in Romania. Through the RUGS program, the R Consortium was honored to sponsor this event. If you have a smaller event you would like support for, stay tuned for the official program announcement in early 2018.

uRos2017 conference

The International Conference New Challenges for Statistical Software – The Use of R in Official Statistics (uRos2017), the fifth in a series of events organized at the Romanian National Institute of Statistics (NIS) and dedicated to the use of the R Project in Romania, was held on 6–7 November 2017. The conference, which provides a public forum for researchers from academia and institutes of statistics, brought together over 60 participants from 20 countries (Austria, Canada, Colombia, Croatia, France, Germany, Italy, Iraq, Japan, Lithuania, Luxembourg, Morocco, the Netherlands, Norway, Poland, Romania, Spain, Switzerland and Turkey). Moreover, representatives from Eurostat and other international organizations (United Nations/UNIDO and FAO) attended as guests.

Not only was uRos2017 an opportunity to develop new ideas and cooperation in the field of official statistics, the event also once again demonstrated the significant role played by the National Institute of Statistics in official statistics and gave Romania a prominent spot on the map of useRs.

uRos2017 growth

Over the event’s five editions, international participation has grown markedly.

Besides the presentations, the event hosted eight workshops led by distinguished professionals from official statistics and academia:

  • Mark van der Loo (Statistics Netherlands), Statistical data cleaning with R
  • Valentin Todorov (United Nations Industrial Development Organization), R in the statistical office: the UNIDO experience
  • Bernhard Meindl (Statistics Austria), Current developments in R-packages sdcMicro and sdcTable for statistical disclosure control
  • Marcello D’Orazio (Food and Agriculture Organization of The United Nations), Outlier detection in R: some remarks
  • Camelia Goga (Institut de Mathématiques de Bourgogne, Université de Bourgogne, France), Survey sampling techniques with R
  • Hervé Cardot (Institut de Mathématiques de Bourgogne, Université de Bourgogne, France), Fast robust center estimation, clustering and Principal Components Analysis with large samples in high dimension with R
  • Bogdan Oancea (National Institute of Statistics/University of Bucharest, Romania) and Ciprian Alexandru (National Institute of Statistics/Ecological University of Bucharest, Romania), From unstructured data to structured data – Web scraping for Official Statistics
  • Elena Druică (University of Bucharest, Department of Economic and Administrative Sciences, Romania), Working with the ‘pglm’ package in R. Explaining the number of nosocomial infections in Romanian hospitals


The conference ran in parallel sessions and included 22 presentations and 8 thematic workshops. The proceedings will be published in two issues of the Romanian Statistical Review: no. 4/2017 and no. 1/2018. The first has already been published and was handed to participants during the conference; the second will be released in March 2018.

Romanian Statistical Review 4/2017

A novelty of this year’s edition is that the conference joined with the “International Conference On Computing, Mathematics And Statistics 2017” (iCMS2017), held on Langkawi Island, Malaysia. Nicolaas Jan Dirk Nagelkerke, Matthias Templ and Martin Everett delivered keynote talks at uRos2017 Asia Pacific/iCMS2017.

As a satellite event of uRos2017, a meeting between Japan’s, Austria’s (UN/UNIDO) and Romanian NIS representatives took place on November 8. The meeting was an opportunity to exchange ideas and knowledge. The discussions regarded the following subjects:

  • Modernization of Romanian Official Statistics
  • The use of R in statistical surveys
  • Data editing (outlier detection, imputation etc.)
  • Generation of statistical reports using R with Sweave/knitr
  • Online data collection for business statistics surveys

You can find more information about uRos2017 at the conference website.

The R-omanian team has agreed to organize uRos2018 together with our colleagues from CBS-Netherlands. Keep in contact on:


By Blog, News, R Language

by Joseph Rickert

The ISC has determined that an error in the ISC proposal submission process caused us to lose some, but not all, proposals. If you submitted a project proposal to the ISC and have received a confirmation email, then you are fine. However, if you have not received a confirmation email, please email your proposal as a pdf attachment to

If you do not receive a confirmation within 24 hours, please send a follow-up email. Once again, if you have already received a confirmation that your proposal was received, you do not need to take further action.

The revised deadline for submitting proposals is now midnight PST, Sunday October 15th.

We apologize for the inconvenience.

R Consortium Call for Proposals: Summer 2017

By Announcement, Blog, News

by Joseph Rickert and Hadley Wickham

The second and final ISC Call for Proposals for 2017 is now open. In this round, with the intention of spreading the available funds as widely as possible, the ISC is encouraging the R community to submit proposals for projects that are smaller in scope than those solicited earlier this year. For this round, the total funds requested for an individual grant should be less than $10,000. Look at the Simple Features Project as an example of what can be achieved with this level of funding.

Note that the current funding cap should not discourage anyone with plans for a more ambitious project. The ISC tends to be conservative with initial grants for large projects. So, framing your initial proposal as a “proof of concept” or “initial objective” of a large project – with an estimate of the total project cost – will not necessarily slow down the work.

As always, proposals should clearly describe the problem that needs to be solved, and be likely to have an impact on a broad segment of the R Community. Keep in mind that the ISC generally does not fund projects that apply to a limited geographic region, or a very specialized domain.

Please do not submit proposals to sponsor conferences, workshops or meetups. The R Consortium is in the process of establishing a “Marketing Committee”, reporting directly to the Board of Directors, for this purpose. Until the Marketing Committee establishes a more formal procedure, please send your request for a conference or meeting sponsorship to me, and I will see that it gets forwarded to the committee.

The R Consortium and ISC are proud to report that, so far, we have awarded nearly half a million dollars in grants. With your help, we can continue this pace in the future. We need solid, well thought-out proposals. Act now! Submit a proposal using this form. The current call for proposals will end at midnight PST on September 15, 2017.

Take the R Consortium’s Survey on R!

By Announcement, Blog, News, R Consortium Project, R Language

by Joseph Rickert and Hadley Wickham

Help us keep the conversation going: Take the R Consortium’s Survey. Let us know: What are you thinking? What do you make of the way R is developing? How do you use R? What is important to you? How could life be better? What issues should we be addressing? What does the big picture look like? We are looking for a few clues and we would like to hear from the entire R Community.


The R Consortium exists to promote R as a language, environment and community. In order to answer some of the questions above and to help us understand our mission better we have put together the first of what we hope will be an annual survey of R users. This first attempt is a prototype. We don’t have any particular hypothesis or point of view. We would like to reach everyone who is interested in participating. So please, take a few minutes to take the survey yourself and help us get the word out. The survey will adapt depending on your answers, but will take about 10 minutes to complete.

The anonymized results of the survey will be made available to the community for analysis. Thank you for participating.

                                                                                   Take the survey now!      

现在进行调查!     今すぐ調査をしてください!    Participez à l’enquête en ligne!    ¡Tome la encuesta ahora!



Code Coverage Tool for R Working Group Achieves First Release

By Blog, News, R Consortium Project, R Language

by Mark Hornick, Code Coverage Working Group Leader

The “Code Coverage Tool for R” project, proposed by Oracle and approved by the R Consortium Infrastructure Steering Committee, started just over a year ago. Project goals included providing an enhanced tool that determines code coverage upon execution of a test suite, and leveraging such a tool more broadly as part of the R ecosystem.

What is code coverage?

As defined in Wikipedia, “code coverage is a measure used to describe the degree to which the source code of a program is executed when a particular test suite runs. A program with high code coverage, measured as a percentage, has had more of its source code executed during testing which suggests it has a lower chance of containing undetected software bugs compared to a program with low code coverage.”

Why code coverage?

Code coverage is an essential metric for understanding software quality. For R, developers and users alike should be able to easily see what percent of an R package’s code has been tested and the status of those tests. By knowing code is well-tested, users have greater confidence in selecting CRAN packages. Further, automating test suite execution with code coverage analysis helps ensure new package versions don’t unknowingly break existing tests and user code.

Approach and main features in release

After surveying the available code coverage tools in the R ecosystem, the working group decided to use the covr package, started by Jim Hester in December 2014, as a foundation and continue to build on its success. The working group has enhanced covr to support even more R language aspects and needed functionality, including:

  • Support for R6 methods
  • Parallel code coverage
  • Support for compiling R with the Intel compiler (ICC)
  • Enhanced documentation and vignettes
  • A tool for benchmarking and a canonical test suite for covr
  • Resolution of license conflicts in dependent packages and a change of the covr license to GPL-3
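To make the workflow concrete, here is a minimal sketch of how a package author measures coverage with covr; the package path is a placeholder.

```r
# Minimal sketch of measuring coverage with covr (the package path is a placeholder)
library(covr)

cov <- package_coverage("path/to/yourpkg")  # runs the package's test suite and records coverage
percent_coverage(cov)                       # overall percentage of source lines executed
zero_coverage(cov)                          # lines never executed by any test
report(cov)                                 # interactive HTML coverage report
```

The same `package_coverage()` object also feeds services such as Codecov or Coveralls via covr's upload helpers.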

CRAN Process

Today, code coverage is an optional part of R package development. Some package authors/maintainers provide test suites and leverage code coverage to assess code quality. As noted above, code coverage has significant benefits for the R community to help ensure correct and robust software. One of the goals of the Code Coverage project is to incorporate code coverage testing and reporting into the CRAN process. This will involve working with the R Foundation and the R community on the following points:

  • Encourage package authors and maintainers to develop, maintain, and expand test suites with their packages, and use the enhanced covr package to assess coverage
  • Enable automatic execution of provided test suites as part of the CRAN process: just as binaries of software packages are made available, test suites would be executed and code coverage computed per package
  • Display each package’s code coverage results on its CRAN web page, e.g., the overall coverage percentage and a detailed report showing coverage per line of source code

Next Steps

The working group will assess additional enhancements for covr that will benefit the R community. In addition, we plan to explore with the R Foundation the inclusion of code coverage results in the CRAN process.


The following individuals are members of the Code Coverage Working Group:

  • Shivank Agrawal
  • Chris Campbell
  • Santosh Chaudhari
  • Karl Forner
  • Jim Hester
  • Mark Hornick
  • Chen Liang
  • Willem Ligtenberg
  • Andy Nicholls
  • Vlad Sharanhovich
  • Tobias Verbeke
  • Qin Wang
  • Hadley Wickham – ISC Sponsor

Improving DBI: A Retrospect

By Blog, News, R Consortium Project, R Language

by Kirill Müller

The “Improving DBI” project, funded by the R Consortium and started about a year ago, includes the definition and implementation of a testable specification for DBI and making RSQLite fully compliant with the new specification. Besides the established DBI and RSQLite packages, I have spent a lot of time on the new DBItest package. Final updates to these packages will be pushed to CRAN at the end of May. This should give downstream maintainers some time to make accommodations. The follow-up project “Establishing DBI” will focus on fully DBI-compliant backends for MySQL/MariaDB and PostgreSQL, and on minor updates to the specs where appropriate.

DBItest: Specification

The new DBItest package provides a comprehensive backend-agnostic test suite for DBI backends. When the project started, it was merely a collection of test cases. I have considerably expanded the test cases and provided a human-readable description for each, using literate programming techniques powered by roxygen2. The DBI package weaves these chunks of text into a single document that describes all test cases covered by the test suite: the textual DBI specification. This approach ensures that further updates to the specification are reflected in both the automatic tests and the text.

This package is aimed at backend implementers, who now can programmatically check with very little effort if their DBI backend conforms to the DBI specification. The verification can be integrated in the automated tests which are run as part of R’s package check mechanism in R CMD check. The odbc package, a new DBI-compliant interface to ODBC, has been using DBItest from day one to enable test-driven development. The bigrquery package is another user of DBItest.

Because not all DBMS support all aspects of DBI, the DBItest package allows developers to restrict which parts of the specification are tested, and “tweak” certain aspects of the tests, e.g., the format of placeholders in parameterized queries. Adapting to other DBMS may require more work due to subtle differences in the implementation of SQL between various DBMS.
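A backend author might wire up the suite roughly as follows; this is only a sketch, using RSQLite for illustration, and the tweaks an actual backend needs will differ.

```r
# Sketch: registering a backend with DBItest and running part of the spec suite
library(DBItest)

make_context(
  RSQLite::SQLite(),
  connect_args = list(dbname = ":memory:"),
  tweaks = tweaks(constructor_name = "SQLite")  # tell the tests how connections are created
)

test_getting_started()  # run one test group; test_all() runs the entire specification
```

Individual tests can be skipped by name via the `skip` argument when a DBMS genuinely cannot support a part of the spec.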

DBI: Definition

This package has been around since 2001; it defines the actual DataBase Interface in R.

I have taken over maintenance, and released versions 0.4-1, 0.5-1, and 0.6-1, with release of version 0.7 pending. The most prominent change in this package is, of course, the textual DBI specification, which is included as an HTML vignette in the package. The documentation for the various methods defined by DBI is obtained directly from the specification. These help topics are combined in a sensible order to a single, self-contained document. This format is useful for both DBI users and implementers: users can look up the behavior of a method directly from its help page, and implementers can browse a comprehensive document that describes all aspects of the interface. I have also revised the description and the examples for all help topics. Other changes include:

  • the definition of new generics dbSendStatement() and dbExecute(), for backends that distinguish between queries that return a table and statements that manipulate data,
  • the new dbWithTransaction() generic and the dbBreak() helper function (thanks to Barbara Borges Ribero),
  • improved or new default implementations for methods like dbGetQuery(), dbReadTable(), dbQuoteString(), dbQuoteIdentifier(),
  • internal changes that allow methods that don’t have a meaningful return value to return silently,
  • translation of a helper function from C++ to R, to remove the dependency on Rcpp (thanks Hannes Mühleisen).
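A short sketch of the new generics in action, using RSQLite (which implements them):

```r
library(DBI)

con <- dbConnect(RSQLite::SQLite(), ":memory:")
dbWriteTable(con, "mtcars", mtcars)

# dbExecute() is for statements that manipulate data; it returns the number of rows affected
n <- dbExecute(con, "UPDATE mtcars SET mpg = mpg + 1 WHERE cyl = 4")

# dbWithTransaction() commits on success and rolls back if an error occurs
dbWithTransaction(con, {
  dbExecute(con, "DELETE FROM mtcars WHERE cyl = 8")
})

dbGetQuery(con, "SELECT COUNT(*) AS n FROM mtcars")  # queries that return a table use dbGetQuery()
dbDisconnect(con)
```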

Fortunately, none of the changes seems to have introduced any major regression issues with downstream packages. The news file contains a comprehensive list of changes.

RSQLite: Implementation

RSQLite 1.1-2 is a complete rewrite of the original C implementation. Before focusing on compliance with the new DBI specification, it was important to assert compatibility with the more than 100 packages on CRAN and Bioconductor that use RSQLite. These packages revealed many usage patterns that were difficult to foresee. Most of these usage patterns are supported in version 1.1-2; the more esoteric ones (such as supplying an integer where a logical is required) trigger a warning.

Several rounds of “revdep checking” were necessary before most packages showed no difference in their check output compared to the original implementation. The downstream maintainers and the Bioconductor team were very supportive, and helped spot functional and performance regressions during the release process. Two point releases were necessary to finally achieve a stable state.

Supporting 64-bit integers was also trickier than anticipated. There is no built-in way to represent 64-bit integers in R. The bit64 package works around this limitation by using a numeric vector as storage (which also happens to use 8 bytes per element) and providing coercion functions. But when an integer column is fetched, it cannot be foreseen whether a 64-bit value will occur in the result, and smaller integers must use R’s built-in integer type. For this purpose, an efficient data structure for collecting vectors, capable of changing the data type on the fly, has been implemented in C++. This data structure will be useful for many other DBI backends that need support for a 64-bit integer data type, and will be ported to the RKazam package in the follow-up project.
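The underlying limitation is easy to demonstrate with a small sketch using base R and bit64:

```r
# R's doubles silently lose integer precision above 2^53
(2^53 + 1) == 2^53       # TRUE: the "+ 1" is lost
.Machine$integer.max     # built-in integers stop at 2^31 - 1

# bit64 stores true 64-bit integers inside a double vector
library(bit64)
x <- as.integer64("9007199254740993")  # 2^53 + 1, represented exactly
x + 1L                                 # exact 64-bit arithmetic
class(x)                               # "integer64"
```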

Once the DBI specification was completed, the process of making RSQLite compliant was easy: enable one of the disabled tests, fix the code, make sure all tests pass, rinse, and repeat. If you haven’t tried it, I seriously recommend test-driven development, especially when the tests are already implemented.

The upcoming release of RSQLite 2.0 will require stronger adherence to the DBI specification also from callers. Where possible, I tried to maintain backward compatibility, but in some cases breaks were inevitable because otherwise I’d have had to introduce far too many exceptions and corner cases in the DBI spec. For instance, row names are no longer included by default when writing or reading tables. The original behavior can be re-enabled by calling pkgconfig::set_config(), so that packages or scripts that rely on row names continue to work as before. (The setting is active for the duration of the session, but only for the caller that has called pkgconfig::set_config().) I’m happy to include compatibility switches for other breaking changes if necessary and desired, to achieve both adherence to the specs and compatibility with existing behavior.

A comprehensive list of changes can be found in the news.

Other bits and pieces

The RKazam package is a ready-to-use boilerplate for a DBI backend, named after the hypothetical DBMS used as an example in a DBI vignette. It already “passes” all tests of the DBItest package, mostly by calling a function that skips the current test. Starting a DBI backend from scratch requires only copying and renaming the package’s code.

R has limited support for time-of-day data. The hms package aims at filling this gap. It will be useful especially in the follow-up project, because SQLite doesn’t have an intrinsic type for time-of-day data, unlike many other DBMS.
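A quick sketch of what the package provides:

```r
library(hms)

# hms() takes seconds, minutes, hours (in that order)
t <- hms(56, 34, 12)
t               # prints as a time of day, 12:34:56
as.numeric(t)   # seconds since midnight
```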

Next steps

The ensemble CRAN release of the three packages DBI, DBItest and RSQLite will occur in parallel to the startup phase for the “Establishing DBI” follow-up project. This project consists of:

  • Fully DBI compatible backends for MySQL/MariaDB and Postgres
  • A backend-agnostic C++ data structure to collect column data in the RKazam package
  • Support for spatial data

In addition, it will contain an update to the DBI specification, mostly concerning support for schemas and for querying the structure of the table returned for a query. Targeting three DBMS instead of one will help properly specify these two particularly tricky parts of DBI. I’m happy to take further feedback from users and backend implementers towards further improvement of the DBI specification.


Many thanks to the R Consortium, which has sponsored this project, and to the many contributors who have spotted problems, suggested improvements, submitted pull requests, or otherwise helped make this project a great success. In particular, I’d like to thank Hadley Wickham, who suggested the idea, supported initial development of the DBItest package, and provided helpful feedback; and Christoph Hösler, Hannes Mühleisen, Imanuel Costigan, Jim Hester, Marcel Boldt, and @thrasibule for using it and contributing to it. I enjoyed working on this project, looking forward to “Establishing DBI”!

Q1 2017 ISC Grants

By Blog, Events

by Hadley Wickham and Joseph Rickert

The Infrastructure Steering Committee (ISC) was very pleased with both the quantity and quality of proposals received during the recent round of funding which closed on February 10th. Funding decisions were difficult. In the end, the ISC awarded grants to ten of the twenty-seven proposals it received for a total award of $234,000. Here is a brief summary of the projects that received awards.

Adding Linux Binary Builders to R-Hub – Award: $15,000. Primary Contact: Dirk Eddelbuettel (edd at

This project proposes to take the creation of binary Linux packages to the next level by providing R-Hub with the ability to deliver directly installable binary packages with properly-resolved dependencies. This will allow large-scale automated use of CRAN packages anywhere: laptops, desktops, servers, cluster farms and cloud-based deployments.

The project would like to hear from anyone who could possibly host a dedicated server in a rack for long term use.

An Infrastructure for Building R Packages on MacOS with Homebrew – Award: $12,000. Primary Contact: Jeroen Ooms (jeroenooms at

When installing CRAN packages, Windows and MacOS users often rely on binary packages that contain precompiled source code and any required external C/C++ libraries. By eliminating the need to set up a full compiler environment or manage external libraries, binary packages tremendously improve the usability of R on these platforms. Our project will improve the system by adapting the popular Homebrew system to facilitate static linking of external libraries.

Conference Management System for R Consortium Sponsored Conferences – Award: $19,000. Primary Contact: Heather Turner (ht at

This project will evaluate a number of open source conference management systems to assess their suitability for use with useR! and satRdays. Test versions of these systems will be set up to test their functionality and ease of use for all roles (systems administrator, local organizer, program chair, reviewer, conference participant). A system will be selected and a production system set up, with a view to be ready for useR! 2018 and future satRdays events.

Continued Development of the R API for Distributed Computing – Award:  $15,000. Primary Contact: Michael Lawrence (michafla at

The ISC’s Distributed Computing Working Group explores ways of enabling distributed computing in R. One of its outputs, the CRAN package ddR, defines an idiomatic API that abstracts different distributed computing engines, such as DistributedR and potentially Spark and TensorFlow. The goal of the project is to enable R users to interact with familiar data structures and write code that is portable across distributed systems.

The working group will use this R Consortium grant to fund an internship to help improve ddR and implement support for one or more additional backends. Please contact Michael Lawrence to apply or request additional information.

Establishing  DBI – Award: $26,500. Primary Contact Kirill Müller (krlmlr at

Getting data in and out of R is an important part of a statistician’s or data scientist’s work. If the data reside in a database, this is best done with a backend to DBI, R’s native DataBase Interface. The ongoing “Improving DBI” project supports the DBI specification, both in prose and as automated tests. It also supports the adaptation of the `RSQLite` package to these specs. This follow-up project aims to implement modern, fully spec-compliant DBI backends for two major open-source RDBMS, MySQL/MariaDB and PostgreSQL.

Forwards Workshops for Women and Girls – Award $25,000. Primary Contact: Dianne Cook (rowforwards at

The proportion of female package authors and maintainers has remained persistently low, at best 15%, despite 20 years of the R project’s existence. This project will conduct a grassroots effort to increase the participation of women in the R community. One-day package development workshops for women engaged in research will be held in Melbourne, Australia and Auckland, New Zealand in 2017, and at locations yet to be determined in the USA and Europe in 2018. Additionally, one-day workshops for teenage girls focused on building Shiny apps will be developed to encourage an interest in programming. These will be rolled out in the same locations as the women’s workshops. All materials developed will be made available under a Creative Commons share-alike license on the Forwards website.

Joint Profiling of Native and R Code – Award: $11,000. Primary Contact: Kirill Müller (krlmlr at

R has excellent facilities for profiling R code: the main entry point is the Rprof() function, which starts an execution mode where the R call stack is sampled periodically, optionally at source line level, and written to a file. Profiling results can be analyzed with summaryRprof(), or visualized using the profvis, aprof, or GUIProfiler packages. However, the execution time of native code is only available in bulk, without detailed source information.
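The base-R workflow this project builds on looks like this (a minimal sketch; the output file name is arbitrary):

```r
# Sample the R call stack while some work runs, then summarize the profile
Rprof("prof.out", line.profiling = TRUE)
invisible(replicate(20, sum(sort(rnorm(1e6)))))  # some work worth profiling
Rprof(NULL)                                      # stop profiling

head(summaryRprof("prof.out")$by.self)           # time spent in each function itself
unlink("prof.out")
```

Time spent inside compiled code (here, in `sort`'s C routines) shows up only as a lump under the calling R function, which is exactly the gap the project addresses.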

This project aims at bridging this gap with a drop-in replacement to Rprof() that records call stacks and memory usage information at both R and native levels, and later commingles them to present a unified view to the user.

R-hub #2 – Award: $89,500. Primary Contact: Gábor Csárdi (csardi.gabor at

R-hub is the first top level project of the R Consortium. The first stage of the project created a multi-platform, R package build server. This proposal includes the maintenance of the current R-hub infrastructure and a number of improvements and extensions including:

  1. R-hub as the first step of package submissions to CRAN
  2. R package reverse dependency checks, on R-hub and locally
  3. General R code execution, on all R-hub platforms
  4. Check and code quality badges
  5. Database of CRAN code
  6. The CRAN code browser

School of Data Material Development – Award: $11,200. Primary Contact: Heidi Seibold (heidi at

School of Data is a network of data literacy practitioners, both organizations and individuals, implementing training and other data literacy activities in their respective countries and regions. Members of School of Data work to empower civil society organizations (CSOs), journalists, civil servants and citizens with the skills they need to use data effectively in their efforts to create better, more equitable and more sustainable societies.

Our R Consortium project will develop learning materials about R for journalists, with a focus on making them accessible and relevant to journalists from various countries. As a consequence, our content will use country-relevant examples and will be translated into several languages (English, French, Spanish, German).

Stars: Scalable, Spatiotemporal Tidy Arrays for R – Award: $10,000. Primary Contact Edzer Pebesma (edzer.pebesma at

Spatiotemporal and raster data often come as dense, two-dimensional arrays, while remote sensing and climate model data are often presented as higher-dimensional arrays. Data sets of this kind often do not fit in main memory. This project will make it easier to handle such data with R by using dplyr-style, pipe-based workflows, and will also consider the case where the data reside remotely, in a cloud environment. Questions and offers to support are welcome through issues at:


ISC Project Status Webinar

By Blog, Events

Join us for a webinar on Jan 31, 2017 at 9:30 AM PST.

View Recording

Hear about R Consortium activities by watching the first ISC Project Status Webinar, held on Tuesday, January 31, 2017 at 9:30AM PST (5:30PM GMT). Join us for 5-minute lightning talks on each active R Consortium project, including:

  • R Hub – Gabor Csárdi
  • SatRdays – Gergely Daroczi
  • A Unified Framework for Distributed Computing in R – Michael Lawrence
  • Simple Features for R – Edzer Pebesma
  • Interactive data manipulation in mapview – Tim Appelhans
  • R Documentation Task Force – Andrew Redd
  • R-Ladies – Gabriela de Queiroz
  • Software Carpentry R Instructor Training – Laurent Gatto
  • Improving DBI – Kirill Mueller
  • RL10N: R Localization Proposal – Richard Cotton
  • RC RUGS (R Consortium) – Joseph Rickert
  • Future-proof native APIs for R – Lukas Stadler
  • Code Coverage Tooling for R – Jim Hester
  • RIOT Workshops – Lukas Stadler

The webinar will run approximately 90 minutes.

Simple Features Now on CRAN

By Blog, R Consortium Project, R Language

by Edzer Pebesma

Support for handling and analyzing spatial data in R goes back a long way. In 2003, a group of package developers sat together and decided to adopt a shared understanding of how spatial data should be organized in R. This led to the development of the package sp and its helper packages rgdal and rgeos. sp offers simple classes for points, lines, polygons and grids, which may be associated with further properties (attributes), and takes care of coordinate reference systems. The sp package has helped many users and has made it attractive for others to develop new packages that share sp’s conventions for organizing spatial data by reusing its classes. Today, approximately 350 packages directly depend on sp and many more are indirectly dependent.

After 2003, the rest of the world has broadly settled on adopting a standard for so-called “features”, which can be thought of as “things” in the real world that have a geometry along with other properties. A feature geometry is called simple when it consists of points connected by straight line pieces, and does not intersect itself. Simple feature access is a standard for accessing and exchanging spatial data (points, lines, polygons) as well as for operations defined on them that has been adopted widely over the past ten years, not only by spatial databases such as PostGIS, but also more recent standards such as GeoJSON. The sp package and supporting packages such as rgdal and rgeos predate this standard, which complicates exchange and handling of simple feature data.

The “Simple Features for R” project, one of the projects supported by the R Consortium in its first funding round, addresses these problems by implementing simple features as native R data. The resulting package, sf, provides functionality similar to that of the sp, rgdal (for vector data) and rgeos packages combined, but for simple features. Instead of the S4 classes used by the sp family, it extends R’s data.frame directly, adding a list-column for geometries. This makes spatial objects easier to manipulate with tools that assume all data objects are data.frames, such as dplyr and the tidyverse. Package sf links to the GDAL, PROJ.4 and GEOS libraries, three major geospatial “swiss army knives” for, respectively, input/output, cartographic (re)projections, and geometric operations (e.g. unions, buffers, intersections and topological relations). sf can be seen as a successor to sp, rgdal (for vector data), and rgeos.
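In practice this means a spatial dataset is just a data.frame with a geometry list-column, as this minimal sketch shows:

```r
library(sf)

# build a geometry list-column from two points, then attach attributes
pts <- st_sfc(st_point(c(10, 12)), st_point(c(4, 0)))
d <- st_sf(name = c("site A", "site B"), geometry = pts)

class(d)        # "sf" "data.frame": ordinary data.frame tools still apply
st_geometry(d)  # the geometry list-column itself
```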

The simple feature standard describes two encodings: well-known text, a human readable form that looks like “POINT(10 12)” or “LINESTRING(4 0,3 2,5 1)”, and well-known binary, a simple binary serialization. The sf package can read and write both. Exchange routines for binary encodings were written in Rcpp, to allow for very fast exchange of data with the linked GDAL and GEOS libraries, but also with other data formats or spatial databases.
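Both encodings are directly accessible from sf, as in this small sketch:

```r
library(sf)

g <- st_as_sfc("LINESTRING(4 0,3 2,5 1)")  # parse well-known text
st_as_text(g[[1]])                          # back to a WKT string
st_as_binary(g[[1]])                        # well-known binary, as a raw vector
```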

The sf project on GitHub has received considerable attention. Over 100 issues have been raised, many of which received dozens of valuable contributions, and several projects currently under development (mapview, tmap, stplanr) are experimenting with the new data classes. Several authors have provided useful pull requests, and efforts have begun to implement spatial analysis in pipe-based workflows, support dplyr-style verbs and integrate with ggplot.

Besides using data.frames and offering considerably simpler data structures for spatial geometries, advantages of sf over the sp family include: simpler handling of coordinate reference systems (using either EPSG code or PROJ.4 string), the ability to return distance or area values with proper units (meter, feet or US feet), and support for geosphere functions to compute distances or areas for longitude/latitude data, using datum-dependent values for the Earth’s radius and flattening.
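For instance, a sketch using the North Carolina dataset shipped with sf:

```r
library(sf)

nc <- st_read(system.file("shape/nc.shp", package = "sf"), quiet = TRUE)
st_crs(nc)$epsg   # EPSG code of the dataset's coordinate reference system
st_area(nc[1, ])  # area of the first county, returned with proper units
```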

The sf package is now available from CRAN, both in source form and in binary form for Windows and MacOSX platforms. The authors are grateful to the CRAN team for their strong support in getting the sf package compiled on all platforms. Support from the R Consortium has helped greatly to give this project priority, draw attention in a wider community, and facilitate travel and communication events.

For additional technical information about sf, look here on my website.