Approaches to estimating the universe of natural history collections data
Keywords: estimates, natural history collections, primary biodiversity data, size
This contribution explores the problem of recognizing and measuring the universe of specimen-level data held in natural history collections around the world, in the absence of a complete, worldwide census or register. Estimates of size seem necessary to plan resource allocation for digitization or data capture, and may help to indicate how much vouchered primary biodiversity data (in terms of collections, specimens, or curatorial units) might remain to be mobilized. Such estimates further help to set priorities and to assess uncertainties. Three general approaches are proposed for further development, and initial estimates are given. Probabilistic models involve crossing data from a set of biodiversity datasets, finding commonalities, and estimating the likelihood of totally obscure data from the fraction of known data missing from specific datasets in the set. Distribution models aim to find the underlying distribution of collections' compositions, estimating the hidden sector of the distributions. Finally, case studies compare digitized data from collections known to the world with the amount of data known to exist in a collection but not generally available or not digitized. Preliminary estimates of size range from 1.2 to 2.1 gigaunits (10⁹ units), of which at most a mere 3% is currently web-accessible through GBIF's mobilization efforts. However, further data and analyses, along with other approaches relying more heavily on surveys, might change this picture and possibly help to narrow the estimate. In particular, unknown collections that have not emerged through the literature are the major source of uncertainty.
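The probabilistic approach of crossing datasets and inferring the unseen fraction from their overlaps resembles classical capture-recapture estimation. A minimal sketch of the idea for two datasets, using the Lincoln-Petersen estimator with invented record counts (this is an illustration of the general technique, not the paper's actual model):

```python
def lincoln_petersen(n1: int, n2: int, overlap: int) -> float:
    """Estimate the total size of an underlying universe of records
    from two independent samples of it and the number of records
    found in both (Lincoln-Petersen estimator: N = n1 * n2 / overlap)."""
    if overlap == 0:
        raise ValueError("No overlap between samples: estimator undefined")
    return n1 * n2 / overlap


# Hypothetical example: dataset A exposes 400 specimen records,
# dataset B exposes 300, and 60 records appear in both; the
# estimated universe is then 400 * 300 / 60 = 2000 records.
print(lincoln_petersen(400, 300, 60))  # 2000.0
```

In practice, matching "the same record" across heterogeneous collection datasets (and correcting for non-independent digitization effort) is the hard part, which is why the contribution treats this as one of several complementary approaches.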