diff --git a/.Rbuildignore b/.Rbuildignore
index 79f31ef..9e09538 100644
--- a/.Rbuildignore
+++ b/.Rbuildignore
@@ -1,14 +1,5 @@
 ^LICENSE\.md$
 ^codecov\.yml$
-^vignettes/articles$
-^vignettes/$
-^vignettes/.*
-
-^inst/doc/*
-^inst/doc/.*
-^inst/doc/neotoma2-package_files/$
-inst/doc/neotoma2-package_files/leaflet-markercluster-1.0.5/leaflet.markercluster.freezable.js
-inst/doc/neotoma2-package_files/leaflet-markercluster-1.0.5/leaflet.markercluster.layersupport.js
 ^.*\.Rproj$
 ^\.Rproj\.user$
diff --git a/R/get_datasets.R b/R/get_datasets.R
index 44bfd8b..ed6c8ff 100644
--- a/R/get_datasets.R
+++ b/R/get_datasets.R
@@ -386,4 +386,3 @@ get_datasets.site <- function(x, ...) {
   return(output)
 }
-
diff --git a/inst/doc/neotoma2-package.html b/inst/doc/neotoma2-package.html
index e7fe00e..670d26d 100644
--- a/inst/doc/neotoma2-package.html
+++ b/inst/doc/neotoma2-package.html
@@ -1,106 +1,4485 @@
The neotoma2 R Package

Figure. Closeup of several Neotoma sites in the Caribbean.

Neotoma Resources
The Neotoma Paleoecology Database is a domain-specific data resource containing millions of fossil records from around the globe, covering the last 5.4 million years. The neotoma2 R package simplifies some of the data access for these records.

Resources
  • Neotoma Database Manual
    • Documentation for the database itself, with examples of SQL queries and descriptions of Neotoma tables.
  • Neotoma (JSON) API
    • A tool to obtain data in JSON format directly through calls to Neotoma.
  • Neotoma GitHub Organization
    • Open code repositories to see how folks are using Neotoma and what kinds of projects we’re working on.
  • Workshops and Code Examples (see the section below)

Neotoma Data Structure

Figure. Three panels showing context for Neotoma’s geographic representation of sites. In panel a, a site is defined by the boundaries of a lake; the site also has a bounding box, and the core location is defined by a collection unit within the site with precise coordinates. In panel b, a site is defined as a single point, for example from a textual reference indicating the site is at the intersection of two roads; here the site and collection unit share the unique point location. In panel c, the site location is obfuscated using a bounding box as the site delimiter; in this case the collection unit would not be defined (but is represented as the triangle for illustration). Figure obtained from the Neotoma Database Manual.
Data in Neotoma is associated with sites, specific locations with lat/long coordinates. Within a site, there may be one or more collection units – locations at which samples are physically collected within the site. For example, an archaeological site may have one or more collection units, pits within a broader excavation area.
Figure. The structure of sites, collection units and datasets within Neotoma. A site contains one or more collection units. Chronologies are associated with collection units. Data of a common type (pollen, diatoms, vertebrate fauna) are assigned to a dataset.

Researchers often begin by searching for sites within a particular study area, whether that is defined by geographic or political boundaries.

Figure. A diagram showing the different major classes within the neotoma2 R package, and the way the elements are related to one another. Individual boxes represent the major classes (sites, site, collectionunits, etc.). Each box has a list of the specific metadata contained within the class, and the variable type (e.g., siteid: integer). Below these are the functions that can be applied to the object (e.g., [[<-).

Package Requirements

The earlier neotoma package tried to use base R as much as possible. The neotoma2 package now draws primarily on the dplyr and purrr packages from the tidyverse.
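A minimal setup sketch for the examples that follow (assuming the package is available on CRAN, or as a development version from the NeotomaDB GitHub organization):

# Install once, then load the packages used throughout this vignette.
# install.packages("neotoma2")
# remotes::install_github("NeotomaDB/neotoma2")  # development version
library(neotoma2)
library(dplyr)
library(sf)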

Site Searches

The highest level object in Neotoma is the site. Sites have spatial coordinates and, in many cases, additional metadata related to lake parameters or other site-specific properties.

Sites can be searched using the get_sites() function, or created using the set_site() function. A single site object is a special object in R that can be combined with other sites into a sites object. A sites object is effectively a list() of site objects with special methods for printing, plotting, and exporting information.
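Because a sites object behaves like a list(), standard list idioms apply. A small sketch (the site IDs here are arbitrary):

some_sites <- get_sites(c(24, 47))
length(some_sites)  # number of site objects in the collection
some_sites[[1]]     # list-style access returns a single site object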

Finding Sites

All sites in Neotoma have a unique numeric identifier. With the neotoma2 package you can search for a site using the get_sites() function by its unique site id (siteid), by name (sitename), by altitude (altmin, altmax), by geopolitical name (gpid), location (loc) or age bounds.
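These search arguments can also be combined in a single call; a hedged sketch (assuming the arguments can be combined, and with arbitrary values):

# Sites with "lake" in the name, between 100 m and 500 m elevation:
lake_sites <- get_sites(sitename = "%lake%", altmin = 100, altmax = 500)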

If we’re looking for a site and we know its specific identifier, we can use the simplest implementation of get_sites(). Here we are searching for a site (Alexander Lake), where we know that the siteid for the record in Neotoma is 24. We can get these siteids using the Neotoma Explorer web application, or if we have some familiarity with the site records already.

# Search for site by a single numeric ID:
alex <- get_sites(24)
alex
#>  siteid       sitename      lat      long altitude
#>      24 Alexander Lake 53.33333 -60.58333      143

# Search for sites with multiple IDs using c():
multiple_sites <- get_sites(c(24, 47))
multiple_sites
#>  siteid       sitename      lat      long altitude
#>      24 Alexander Lake 53.33333 -60.58333      143
#>      47        Liberty 43.52000 -90.78000      305

Once you search for a site, the neotoma2 R package makes a call to the Neotoma Database and returns a structured sites object that contains metadata about the sites, and the collection units and datasets associated with them.

Figure. The result of a hypothetical get_sites() call is a sites object containing two individual site objects. Each site object contains a collunits object with some limited metadata. The top site appears to have two collection units, while the lower site has only a single collection unit. Each of the top two collection units appears to contain two datasets, while the bottom site has only the one collection unit with only one dataset.

Searching for Sites by Name

Often we do not know the particular siteid. If we’re looking for a site and we know its name or a part of its name, we can search using the sitename argument, get_sites(sitename = 'XXX'), where 'XXX' is the site name. This does not support multiple text strings (i.e., you can’t use c()).

alex <- get_sites(sitename = "Alexander Lake")
alex
#>  siteid       sitename      lat      long altitude
#>      24 Alexander Lake 53.33333 -60.58333      143

Neotoma uses a Postgres database to manage data. Postgres uses the % sign as a general wildcard, so we can use % in the sitename argument to help us find sites when we’re not sure of the exact match. Note that the search is case insensitive, so a search for alex% or Alex% will return the same results.

alex <- get_sites(sitename = 'Alex%')
alex
#>  siteid           sitename      lat      long altitude
#>      24     Alexander Lake 53.33333 -60.58333      143
#>      25        Alexis Lake 52.51667 -57.03333      200
#>    4478 Alexander [3CN117] 35.25000 -92.61667       88
#>   26226     Alexandra Lake 43.29030 -74.16966      354

Searching for Sites by Age

    There are several ways of searching for sites using age parameters. These are represented below:

Figure. Site searches using age parameters, including ageof, ageyounger, ageolder, minage and maxage. ageof selects datasets that span a range including a specific point in time; ageyounger and ageolder select datasets that completely span an age range (ageyounger defines the minimum age that must be covered by the dataset, ageolder the maximum age); minage and maxage define age bounds, selecting datasets with any samples that fall within those bounds.

For example, to find sites with any samples falling between 5,000 and 8,000 years ago, we could use get_sites(minage = 5000, maxage = 8000).

We can see how these age bounds differ:

# Note: we are using the `all_data = TRUE` flag here to avoid the default limit
# of 25 records, discussed below. Because these queries search through every
# record they are slow, and they are not run in knitting this vignette.
get_sites(ageof = 8200, all_data = TRUE) %>% length()
get_sites(ageyounger = 5000, ageolder = 8000, all_data = TRUE) %>% length()
get_sites(minage = 5000, maxage = 8000, all_data = TRUE) %>% length()

It is possible to pass all of the age parameters (ageof, minage, maxage, ageyounger, …), but it is likely that these will conflict and result in an empty set of records.
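As a quick illustration (a sketch, not evaluated here), the constraints below cannot all hold at once, so the result is likely empty:

conflicted <- get_sites(minage = 5000, maxage = 8000, ageof = 12000)
length(conflicted)  # likely 0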

Accessing sites metadata

Although the sites are structured using S4 objects (see Hadley Wickham’s S4 documentation), we’ve added helper functions to make accessing elements easier for users.

The alex object is composed of several smaller objects of class site. We can access an individual site using [[ ]], placing the index of the desired object between the brackets. Then we can also call the particular variable we want using the $ symbol.

alex <- get_sites(sitename = "Alexander Lake")
alex[[1]]$siteid
#> [1] 24

The elements within a site are the same as the defined columns within the Neotoma ndb.sites table, with the exception of the collunits slot, which contains the collection units and associated datasets found within a site. You can see all the site slots using the names() function. You can select individual elements of a site, and you can assign values to these parameters:

names(alex[[1]])
#> [1] "siteid"       "sitename"     "geography"    "altitude"     "geopolitical"
#> [6] "area"         "notes"        "description"  "collunits"

# Modify a value using $<- assignment:
alex[[1]]$area
#> [1] NA
alex[[1]]$area <- 100
alex[[1]]$area
#> [1] 100

# Modify a value using [<- assignment:
alex[[1]]["area"] <- 30
# alex[[1]][7] <- 30  # This fails because the `notes` field expects a character string.

    Using assignment, we can add information programmatically, for example, by working interactively with a digital elevation model or hydrographic data to obtain lake area measurements. Although not currently implemented, the goal is to support direct upload of updated information by users.
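For example, a hypothetical workflow for assigning lake areas from an external lookup table (lake_areas is an assumed data.frame, not part of Neotoma):

# Hypothetical lookup table of lake areas keyed by siteid.
lake_areas <- data.frame(siteid = 24, area_ha = 110)
for (i in seq_along(alex)) {
  match_row <- which(lake_areas$siteid == alex[[i]]$siteid)
  if (length(match_row) == 1) {
    alex[[i]]$area <- lake_areas$area_ha[match_row]
  }
}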

Creating a Site

As explained above, a site is the fundamental unit of the Neotoma Database. If you are working with your own data, you might want to create a site object to allow it to interact with other data within Neotoma. You can create a site with the set_site() function. It will ask you to provide important information such as the sitename, lat, and long attributes.

my_site <- set_site(sitename = "My Lake",
                    geography = st_sf(a = 3, st_sfc(st_point(1:2))),
                    description = "my lake",
                    altitude = 30)
my_site
#>                                siteid sitename lat long altitude
#>  24fcaf8f-239a-4c40-a221-c6e33f270404  My Lake   2    1       30

If we have a set of sites that we are analyzing, we can add the new site to the set of sites, either by appending it to the end, using c(), or by replacing a particular element using [[<-.

    This method allows us to begin modifying site information for existing sites if we have updated knowledge about site properties.

# Add a new site that's been edited using set_site()
longer_alex <- c(alex, my_site)
# Or replace an element within the existing list of sites
# with the newly created site.
longer_alex[[2]] <- my_site

# Or append to the `sites` list with assignment:
longer_alex[[3]] <- my_site

We can also use set_site() as a tool to update the metadata associated with an existing site object:

# Update a value within an existing `sites` object:
longer_alex[[3]] <- set_site(longer_alex[[3]],
  altitude = 3000)
longer_alex
    -
Datasets

If you need to get to a deeper level of the sites object, you may want to look at the get_datasets() function. You can use get_datasets() with search parameters, or you can use it on an existing sites object, such as our prior alex dataset.
    get_datasets() adds additional metadata to the site objects, letting us know which datasettypes are associated with a site, and the dataset sample locations at the site.

Figure. Using get_datasets() provides more complete metadata about a record, including the addition of chronological information and more complete metadata about the datasets, compared to the get_sites() call shown above. The objects here are the same as above, but now have chronology metadata and contact metadata for the records. Note that there is still no sample or taxonomic information about these records; this comes from the get_downloads() function.

Getting datasets by ID is the easiest call; you can also pass a vector of IDs or, if you already have a sites object, pass the sites object directly.

# Getting datasets by ID
my_datasets <- get_datasets(c(5, 10, 15, 20))
my_datasets
#>  siteid                   sitename       lat      long altitude
#>       5                       17/2  55.25000 -74.93333      335
#>      10 Site 1 (Cohen unpublished)  30.83000 -82.33000       37
#>      15                    Aguilar -23.83333 -65.75000     4000
#>      20                   Akuvaara  69.12326  27.67406      170

You can also retrieve datasets by type directly from the API.

# Getting datasets by type
my_pollen_datasets <- get_datasets(datasettype = "pollen", limit = 25)
my_pollen_datasets
#>  siteid                            sitename       lat       long altitude
#>       7                     Three Pines Bog  47.00000  -80.11667      294
#>       8                 Abalone Rocks Marsh  33.95639 -119.97667        0
#>       9                              Adange  43.30556   41.33333     1750
#>      11        Konus Exposure, Adycha River  67.75000  135.58333      130
#>      12                       Ageröds Mosse  55.93329   13.42559       58
#>      13                     Aguas Calientes -23.08333  -67.40000     4210
#>      14                   Aguas Calientes 2 -23.50000  -67.58333     4210
#>      15                             Aguilar -23.83333  -65.75000     4000
#>      16                           Ahlenmoor  53.69908    8.74688        2
#>      17                               Ajata -18.25000  -69.20000     4700
#>      18                    South Soefje Bog  29.60000  -97.51694       97
#>      19             Akulinin Exposure P1282  47.11667  138.55000       20
#>      20                            Akuvaara  69.12326   27.67406      170
#>      21 Alazeya River Exposure, 8 m Terrace  68.50000  154.50000       40
#>      22 Alazeya River Exposure, 9 m Terrace  64.33333  154.50000       40
#>      24                      Alexander Lake  53.33333  -60.58333      143
#>      25                         Alexis Lake  52.51667  -57.03333      200
#>      27                          Aliuk Pond  54.58333  -57.36667       25
#>      29                          Lake Allie  44.80156  -94.55982      328
#>      30                         Almora Lake  46.20611  -95.29361      437
#>      31                           Alut Lake  60.13667  152.31278      480
#>      32                             Amarete -15.23333  -68.98333     4000
#>      33             Amba River Exposure 596  43.31667  131.81667        5
#>      68     Amguema River Valley Exposure 1  67.75000  178.70000      175
#>      69     Amguema River Valley Exposure 2  67.66667  178.60000       87

It can be computationally intensive to obtain the full set of records for sites or datasets. By default the limit for all queries is 25. The default limit can be overridden with the all_data = TRUE flag.

We can use the all_data = TRUE flag in the following way:

allSites_dt <- get_sites(datasettype = "diatom")
allSites_dt_all <- get_sites(datasettype = "diatom", all_data = TRUE)

# Because we used the `all_data = TRUE` flag, there will be more sites
# in allSites_dt_all, because it represents all sites containing diatom datasets.
length(allSites_dt_all) > length(allSites_dt)

Spatial Searches
You can get the coordinates to create a GeoJSON bounding box from an online GeoJSON editor, or you can use pre-existing objects within R, for example, country-level data within the spData package:

Accessing datasets by bounding box:

brazil <- '{"type": "Polygon",
            "coordinates": [[
                [-73.125, -9.102],
                [-56.953, -33.138],
                [-36.563, -7.711],
                [-68.203, 13.923],
                [-73.125, -9.102]
              ]]}'

# We can make the geojson a spatial object if we want to use the
# functionality of the `sf` package.
brazil_sf <- geojsonsf::geojson_sf(brazil)

brazil_datasets <- get_datasets(loc = brazil_sf)
brazil_datasets
#>  siteid                   sitename       lat      long altitude
#>      32                    Amarete -15.23333 -68.98333     4000
#>     211             Lago do Aquiri  -3.16667 -44.98333       10
#>     323                 Cala Conto -17.56667 -65.93333     2700
#>     347               Chacaltaya 1 -16.36667 -68.15000     4750
#>     348               Chacaltaya 2 -16.36667 -68.15000     4350
#>     523             Cumbre Unduavi -16.35000 -68.04167     4620
#>     532            Lagoa das Patas   0.26667 -66.68333      300
#>     843        Serra Campos Gerais -24.66667 -50.21667     1200
#>    1406                  Katantica -14.80000 -69.18333     4820
#>    1994                 Río Kaluyo -16.43333 -68.13333     4070
#>    2827                  Wasa Mayu -17.53333 -65.81667     2720
#>    9685              Laguna Granja -13.26341 -63.71025      138
#>    9686             Laguna Orícore -13.34833 -63.52796      139
#>   11923              Lake Valencia  10.16576 -67.76755      410
#>   13964              Lake Chalalán -14.42868 -67.92125      330
#>     526            Lagoa da Curuça  -0.76667 -47.85000       35
#>    1717               Monte Blanco -17.02500 -67.35000     4780
#>    2248 Lagoa Campestre de Salitre -19.00000 -46.76667      980
#>   11877              Lake Consuelo -13.95125 -68.99162     1333

Now we have an object called brazil_datasets that contains 19 sites.

    You can plot these findings!

plotLeaflet(brazil_datasets)
    Filtering Records

Sometimes we take a large number of records, do some analysis, and then choose to select a subset. For example, we may want to select all sites in a region, and then subset those by dataset type. If we want to look at only the geochronological datasets from Brazil, we can start with the set of records returned from our get_datasets() query, and then use the filter function in neotoma2 to select only those datasets that are geochronologic:

brazil_dates <- neotoma2::filter(brazil_datasets,
  datasettype == "geochronologic")

# or:

brazil_dates <- brazil_datasets %>%
  neotoma2::filter(datasettype == "geochronologic")

# With boolean operators:

brazil_space <- brazil_datasets %>% neotoma2::filter(lat > -18 & lat < -16)

The filter() function takes, as its first argument, a datasets object, followed by the criteria we want to use to filter. Currently supported criteria include:

      @@ -617,63 +8057,158 @@
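These criteria can be combined with boolean operators in a single call; a sketch using the brazil_datasets object defined above (the thresholds are arbitrary):

brazil_pollen_south <- brazil_datasets %>%
  neotoma2::filter(datasettype == "pollen" & lat < -15)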

Sample and Taxonomic data

Once we have the set of records we wish to examine, we then want to recover the actual sample data. This will provide us with information about the kinds of elements found at the site, within the dataset, their sample ages, and their counts or measurements. To do this we use the get_downloads() call. Note, as before, we are returning a sites object, but this time with the most complete metadata.

Figure. Using get_downloads() returns a sites object, but one that contains dataset objects with filled samples slots. The samples slot is often very large relative to the other metadata associated with sites, and so it is commonly held back until a direct request is provided. Helper functions at the sites level can pull out sample data once get_downloads() has been called.

Assuming we continue with our example from Brazil, we want to extract records from the country, filter to only pollen records with samples covering the last 10,000 years, and then look at the relative frequency of taxa across sites. We might do something like this:

brazil <- '{"type": "Polygon",
            "coordinates": [[
                [-73.125, -9.102],
                [-56.953, -33.138],
                [-36.563, -7.711],
                [-68.203, 13.923],
                [-73.125, -9.102]
              ]]}'

# We can make the geojson a spatial object if we want to use the
# functionality of the `sf` package.
brazil_sf <- geojsonsf::geojson_sf(brazil)

brazil_records <- get_datasets(loc = brazil_sf) %>%
  neotoma2::filter(datasettype == "pollen" & age_range_young <= 1000 & age_range_old >= 10000) %>%
  get_downloads(verbose = FALSE)

count_by_site <- samples(brazil_records) %>%
  dplyr::filter(elementtype == "pollen" & units == "NISP") %>%
  group_by(siteid, variablename) %>%
  summarise(n = n()) %>%
  group_by(variablename) %>%
  summarise(n = n()) %>%
  arrange(desc(n))
#> `summarise()` has grouped output by 'siteid'. You can override using the
#> `.groups` argument.

In this code chunk we define the bounding polygon for our sites, filter by time and dataset type, and then return the full records for those sites. We get a sites object with dataset and sample information (because we used get_downloads()). We execute the samples() function to extract all the samples from the sites objects, and then filter the resulting data.frame to pull only pollen (a pollen dataset may contain spores and other elements that are not, strictly speaking, pollen). Finally, we use arrange() so that the records show the most common taxa first in the resulting variable count_by_site.
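To inspect the result, something like the following would show the most frequently occurring taxa (a trivial sketch; the actual output depends on the live database):

head(count_by_site, n = 10)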

Publications

Many Neotoma records have publications associated with them. The publication object (and the publications collection) provides access to this information. The publication table in Neotoma contains an extensive number of fields. The methods for publications in the neotoma2 package provide us with tools to retrieve publication data from Neotoma, to set and manipulate publication data locally, and to retrieve publication data from external sources (e.g., using a DOI).

get_publications() from Neotoma

The simplest case is a search for a publication based on one or more publication IDs. Most people do not know the unique publication ID of individual articles, but this provides a simple method to highlight the way Neotoma retrieves and presents publication information.

Get Publication By ID

      We can use a single publication ID or multiple IDs. In either case the API returns the publication(s) and creates a new publications object (which consists of multiple individual publications).

one <- get_publications(12)
two <- get_publications(c(12, 14))

From there we can subset and extract elements from the list using the standard [[ format. For example:

two[[2]]
#>   publicationid
#> 1            14
#>                                                                                                                                                                                                      citation
#> 1 Baker, R.G., R.S. Rhodes II, D.P. Schwert, A.C. Ashworth, T.J. Frest, G.R. Hallberg, and J.A. Janssens. 1986. A full- glacial biota from southeastern Iowa, USA. Journal of Quaternary Science 1(2):91-107.
#>                      doi
#> 1 10.1002/jqs.3390010202

This will return the second publication in the list, corresponding to the publication with publicationid 14 in this case. We can also search for publications by text string using the search argument:

      michPubs <- get_publications(search = "Michigan", limit = 2)

This results in a set of 2 publications from Neotoma, equal to the limit. If the number of matching publications is less than the limit, then the length() will be smaller.
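A trivial check of the returned set (the count here follows from the limit used above):

length(michPubs)
#> [1] 2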

Text matching in Neotoma is approximate, meaning it is a measure of the overall similarity between the search string and the set of article titles. This means that using a nonsense string may still return results:

noise <- get_publications(search = "Canada Banada Nanada", limit = 5)

      This returns a result set of length 5.

Each publication record includes the (Neotoma) ID, the citation and the publication DOI (if that is stored in Neotoma). We can get the first publication using the standard [[ nomenclature:

two[[1]]
#>   publicationid
#> 1            12
#>                                                                                                                          citation
#> 1 King, J.E. 1973. Late Pleistocene palynology and biogeography of the western Missouri Ozarks. Ecological Monographs 43:539-565.
#>    doi
#> 1 <NA>

The output will look similar to the output for two above; however, you will see that only a single result is returned, and the class (for a single publication) will be of type publication.

# Select publications with Neotoma Publication IDs 1 - 10.
pubArray <- get_publications(1:10)
# Select the first five publications:
subPub <- pubArray[[1:5]]
subPub
#>   publicationid
#> 1             1
#> 2             2
#> 3             3
#> 4             4
#> 5             5
#>   citation
#> 1 International Organization for Standardization. 2006. Codes for the representation of names of countries and their subdivisions — Part 1: Country codes. ISO 3166-1, Geneva, Switzerland.
#> 2 United States Central Intelligence Agency. 2004. CIA World Factbook. Washington, DC, USA.
#> 3 Davis, M.B. 1978. Climatic interpretation of pollen in Quaternary sediments. Pages 35-51 in D. Walker and J.C. Guppy, editors. Biology and Quaternary environments. Australian Academy of Science, Canberra City, Australia.
#> 4 Nanavati, W., C. Whitlock, V. Outes, and G. Villarosa. 2011. Diatom-inferred wind activity at Lac du Sommet, southern Québec, Canada: A multiproxy paleoclimate reconstruction based on diatoms, chironomids, and pollen for the past 9500 years. The Holocene 21(6):925-938. http://hol.sagepub.com/content/21/6/925.full.pdf+html. [DOI: 10.1177/0959683611400199]
#> 5 Janssen, C.R. 1968. Myrtle Lake: a late- and post-glacial pollen diagram from northern Minnesota. Canadian Journal of Botany 46(11):1397-1410. [DOI: 10.1139/b68-190]
#>                        doi
#> 1                     <NA>
#> 2                     <NA>
#> 3                     <NA>
#> 4 10.1177/0959683611400199
#> 5          10.1139/b68-190
      -

      Create (or Import) New Publications -

      +
      +

      Create (or Import) New Publications

      Just as we can use the set_sites() function to set new site information, we can also create new publication information using -set_publications(). With set_publications() +set_publications(). With set_publications() you can enter as much or as little of the article metadata as you’d like, but it’s designed (in part) to use the CrossRef API to return information from a DOI.

new_pub <- set_publications(
  articletitle = "Myrtle Lake: a late- and post-glacial pollen diagram from northern Minnesota",
  journal = "Canadian Journal of Botany",
  volume = 46)
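Because set_publications() is designed in part to use the CrossRef API, a DOI-driven call may also be possible; a hedged sketch (the doi argument name is an assumption based on the description above, not confirmed by this vignette):

# Hypothetical: populate publication metadata from a DOI via CrossRef.
doi_pub <- set_publications(doi = "10.1139/b68-190")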

A publication has a large number of slots that can be defined. These may be left blank, or they may be set directly after the publication is defined:

new_pub@pages <- "1397-1410"
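Slots can be read back in the same way (a trivial check):

new_pub@pages
#> [1] "1397-1410"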
Workshops and Code Examples
  • English Language Complex Workflow
    • Topics: Chronology building, Bchron
    • Spatial Domain: South America
    • Dataset Types: Diatoms
  • Spanish Language Complex Workflow
    • Topics: Chronology building, Bchron
    • Spatial Domain: South America
    • Dataset Types: Diatoms
  • 2022 European Pollen Database Meeting; Prague, Czech Republic
    • English Language Complex Workflow
      • Topics: Chronology building, Bchron
      • Spatial Domain: Europe/Czech Republic
      • Dataset Types: Pollen
  • 2022 American Quaternary Association Meeting
    • English Language Complex Workflow
      • Topics: Chronologies
      • Spatial Domain: North America
      • Dataset Types: Pollen
  • Neotoma-charcoal Workshop, Göttingen, Germany. Authors: Petr Kuneš & Thomas Giesecke
    • English Language Workflow
      • Topics: Simple Search, PCA, DCA, Charcoal/Pollen Correlation
      • Spatial Domain: Global/Czech Republic
      • Dataset Types: Pollen, Charcoal