Data from the Evolving Seas RCN

Synthesis activities conducted with RCN funds have generated new data in the form of simulations or meta-analyses. To comply with our NSF grant, these data are archived at this site:

Data sharing agreements

We encourage RCN working groups to share author guidelines, collaboration agreements, and data sharing agreements. The following examples are from Allie Cramer for the “Spatial and Genomic Connectivity” Working Group.

For RCN members: archiving data and code

To achieve longer-term storage and open access after publication, data products will undergo final quality-control checks and will then be archived in an appropriate online repository. Oceanographic data resulting from simulations, aggregated observations, or other types of synthesis products should be submitted to BCO-DMO (the Biological and Chemical Oceanography Data Management Office). Non-oceanographic data, reports, and other gray literature can be submitted to generalist repositories like Dryad or Figshare.

While the RCN does not fund the collection of new data, data have still been generated through RCN activities such as synthesis projects, meta-analyses, and simulations. If your group has generated data with the support of RCN funds, you are responsible for archiving these data in a compliant manner.

First, please read these OCE Data Sharing resources:

  • OCE Sample and Data Policy
  • OCE Approved Data and Sample Repositories
    • Note that generalist archives like Dryad and Figshare are generally not approved repositories for data archiving. The repositories listed in the link provide domain-specific data curation, long-term archiving, and metadata and quality standards that ensure data are reusable and understandable.

Second, please educate your team on Documenting and Archiving your Code

We encourage working groups to use GitHub to maintain version control over their data and code. If your working group would like a GitHub repo, please contact Katie Lotterhos.
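For groups new to version control, the core workflow is only a handful of commands. The repository name, file, and committer identity below are purely illustrative placeholders, not RCN conventions:

```shell
# Illustrative sketch: create a local git repository and record a first commit.
mkdir -p connectivity-analysis && cd connectivity-analysis
git init -q                                   # start a new local repository
echo "# Connectivity working group analysis" > README.md
git add README.md                             # stage the file for the next commit
git -c user.name="RCN Member" -c user.email="member@example.org" \
    commit -q -m "Initial commit"             # record a snapshot with a message
# Once a GitHub repo exists for your working group, connect and push to it:
# git remote add origin https://github.com/<org>/<repo>.git
# git push -u origin main
```

Committing early and often gives your group a full history of the data and code, which also simplifies the archiving steps described below.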

Keep in mind that GitHub itself is not an archive, and anything posted there can be taken down at any time. Therefore, documenting and archiving code is an extra step that must be taken to remain compliant with our NSF grant.

You can refer to BCO-DMO’s code, models, and software page for more information about either obtaining a DOI for your GitHub repository through a long-term archive (e.g., Zenodo) or contributing code for BCO-DMO to curate if it is related to data BCO-DMO will serve. That page also explains the level of documentation required, whether the code is hosted with BCO-DMO or Zenodo.
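As one concrete option when using the Zenodo route, Zenodo can read deposit metadata from a `.zenodo.json` file at the repository root when it archives a GitHub release. The field names below follow Zenodo's deposit metadata schema; all values are placeholders for illustration:

```json
{
  "title": "Evolving Seas RCN: example analysis code",
  "upload_type": "software",
  "description": "Simulation and meta-analysis code produced by an RCN working group (placeholder).",
  "creators": [
    {"name": "Lastname, Firstname", "affiliation": "Example University"}
  ],
  "license": "MIT",
  "keywords": ["Evolving Seas RCN", "synthesis", "simulation"]
}
```

With the repository linked to Zenodo, tagging a GitHub release triggers Zenodo to archive a snapshot of the repository and mint a DOI for it.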

How to submit data to BCO-DMO

  • For information about the data-serving process at BCO-DMO and the required metadata forms, please consult their How to page.
  • Related information associated with datasets being submitted to BCO-DMO, such as publications, code, or datasets served elsewhere, can be linked to as “Related Publications” and “Related Datasets” from a BCO-DMO Dataset Metadata Page.
  • Here is an example of what a Dataset Metadata Page looks like at BCO-DMO: “Pore water Geochemistry”:
  • BCO-DMO typically serves data in the order they are received and the submission queue can be weeks to months long. Please take journal publication timelines into account when planning your dataset submissions.
  • You can email them if you have questions about what to submit to BCO-DMO or for help preparing your data or metadata for submission.

Link to Evolving Seas on BCO-DMO