Going Virtual! What we learned from the EFI-RCN Virtual Workshop

Date: May 21, 2020

Post by: Jody Peters1 and Quinn Thomas2

1 University of Notre Dame, 2Virginia Tech

On May 12 and 13 our NSF-funded EFI Research Coordination Network (RCN) hosted a virtual workshop, “Ecological Forecasting Initiative 2020: Coordinating the NEON-enabled forecasting challenge”. This workshop replaced the three-day in-person workshop that was scheduled for the same time but was canceled due to COVID-19.  Going virtual allowed us to increase our participation and diversity. We were originally space-limited to 65 in-person participants, but with our virtual meeting we had a little over 200 people register to access the workshop materials, with 150 individuals consistently joining on Day 1 and 110 individuals consistently participating on Day 2.  We also welcomed participants from around the globe, with almost 10% of participants calling in from outside the U.S. And instead of being limited to 15 graduate student participants, we ended up with over 50 graduate students who participated in the meeting.  While EFI has been using Zoom from the beginning and the EFI-RCN leadership committee members are constantly on Zoom for calls and online courses, this was a much larger gathering than any of us had organized previously.  To help others as they move their workshops to a virtual format, we reflected on the key elements that allowed the workshop logistics and technology to flow smoothly. We hope you find our tips useful! If you have any additional questions feel free to reach out to us at eco4cast.initiative@gmail.com.

Thanks to Dave Klinges (University of Florida) who captured these screenshots of 6 screens of Zoom boxes.

Prepping for the Meeting

  1. Get input from many perspectives. There are a number of great suggestions online about hosting virtual workshops.  To prepare for the virtual format, multiple leadership committee members took a free 1 hr class on running virtual scientific meetings. You can find the video and slides from the class here https://knowinnovation.com/2020/03/you-too-can-go-virtual/. Alycia Crall from NEON was hugely helpful with ideas like QUBES and Poll Everywhere. Lauren Swanson from Poll Everywhere provided a tutorial on how to use the different features of Poll Everywhere and helped us to test the polls before the workshop.  Julie Vecchio, from the Navari Family Center for Digital Scholarship for the Hesburgh Libraries at the University of Notre Dame, shared an example slide deck and script for sharing virtual logistics at the beginning of a workshop.  And Google was a great resource for finding additional input along the way.
  2. Scale your goals to the format and your objectives.  Our goal for the original in-person meeting was to finalize rules for the NEON Forecasting Challenge, but we knew that this was not possible virtually.  However, the virtual meeting allowed us to have more people and more perspectives for idea generation.  Therefore, our goals shifted to brainstorming so that we could leverage perspectives from the diverse attendees.  We now have a ton of work to do synthesizing the input, but we have a better pulse on what the community is interested in.  Recognizing the challenge of engaging attendees virtually over long periods of time, we reduced the original 3-day in-person meeting to a 2-day meeting with a schedule that was conducive to participants from the east and west coasts of the U.S.
  3. Virtual meetings require as much or more prep than in-person meetings.  Be prepared for a lot of planning before the meeting.  

General Meeting Set-up

  1. Don’t go all day.  Our first day was 6 hours and the second day was only 4 hours and the hours were set to accommodate people from the U.S. east and west coast time zones. Unfortunately, there is no good time for all global participants, but we were thrilled to see so many participants who woke up early or stayed up late to join us from outside the U.S.
  2. Incorporate plenty of breaks.  Virtual meetings are more tiring than in-person meetings. We had two longer 30-minute breaks that corresponded to lunch-times on the U.S. east and west coasts as well as shorter 15-minute breaks spread throughout both days.
  3. Have a production manager for the meeting. This person focuses on set up and running the technical logistics. For example, this person stays in the main room during breakouts to provide assistance and oversee the timing of activities. Having a production manager allows the meeting lead (i.e., project Principal Investigator) to be the M.C. of the meeting and do real time synthesis of the ideas without having to worry about meeting logistics.
  4. Create a minute-by-minute script for the entire meeting.  This includes both the public Agenda and the behind the scenes tasks.  For example, we wrote out the messages that would be sent through Zoom Chat/Breakout messaging with the time that each message would be sent. You should be able to articulate in writing what is going to happen at every moment of the meeting before the meeting starts and assign who is going to do each task.
  5. Pre-record talks and add edited closed captioning.  This prevents issues that come with live talks like bad mics or bad connections.  This also keeps the meeting on schedule and avoids the awkward need to cut someone off.  We felt the talks were better because they were pre-recorded and, for the talks that presenters agreed to share, we now have an excellent resource for folks that missed the meeting.  The pre-recorded talks may require editing, so find someone with resources and time to make edits prior to the meeting. We made playlists for each plenary session available as unlisted videos on YouTube for any workshop participant that had connection issues while the videos were being played.
  6. Be prepared to pay for a closed captioning service so that the meeting is accessible.  In the registration form for the meeting, ask if anyone needs CC and if they do, hire a service. We were able to find a service through our university (Virginia Tech) vendor system that worked well (www.ACSCaptions.com). The production manager moved the captioner to be in the same Breakout as those that requested the service.  CC is also nice, because you get the full record of text right after the meeting, instead of waiting for the Zoom transcript to come through, plus the captioner’s transcription is better than the automatic Zoom transcript.  
  7. Use hardwired internet.  Our production manager/meeting host used a computer that was connected to the internet via a wire – this will reduce the chance that the central person loses connection.
  8. Plan for leadership team meetings during the workshop. The leadership committee met for 1 hour before and 30 minutes after the meeting each day to go over last minute logistics and any adjustments that were needed. Set up a separate Zoom meeting for these calls to avoid participants joining at times when you are not prepared for them.
Welcome to Zoom
  1. Zoom worked great.  While we know that there are other conferencing platforms, we used Zoom Meeting with a 300 person limit, hosted through the University of Notre Dame.  We chose Zoom Meeting over Zoom Webinar, because we wanted the ability for workshop participants to interact during breakouts. Plus it was provided by the University, and did not require the additional set-up or payment that Zoom Webinar required. It worked very well. There were some individuals that could not access Zoom. Therefore, we also streamed the workshop from Zoom to YouTube and shared the YouTube live link with individuals in our group who had registered for the workshop materials. 
  2. But Zoom can break communication lines among the host and leadership committee. Have an off-Zoom, off-computer way for the leadership team to communicate throughout the meeting (like text messaging to phones).  It is important to turn off notifications on the host’s and co-hosts’ computers due to screen sharing and sounds, but that can leave the production manager or leadership team flying blind unless there is an alternative way to communicate.  Leadership committee members who are in Breakout Rooms are unable to message the host in Zoom.
  3. Assign leadership committee members as co-hosts. Assign all leadership members as co-hosts and have them mute people who are not talking but have background sounds. Leadership members can also help with spotlighting the speakers and can also move from breakout room to breakout room if needed to check on how things are going.
  4. Give a brief Zoom training at the beginning.  At the beginning of the workshop, use a slide deck (and a written out script to go with it) to introduce all the features of Zoom you want people to use. While many of us use Zoom regularly, not everyone is on Zoom all the time, and it is important that these folks feel comfortable so they can fully participate.  
  5. Play videos directly from the production manager/meeting host’s computer. Make sure the videos are downloaded onto your computer hard drive and play them from there. We used a playlist that automatically advances to the next video.  In Zoom’s screen share settings, make sure to click both the “Share computer sound” and “Optimize Screen Sharing for Video Clip” options.  Do not play videos in Zoom from YouTube to avoid the video having to be played over multiple web services.

Zoom Breakout Rooms

  1. Keep Breakout Rooms small. To make the meeting feel smaller, we had a maximum of nine people per Breakout Room.  Using random sorting, as we did on Day 1, was a great way to meet different people throughout the group.  We built in time for introductions during the breakouts because one of our goals was community building.
  2. Provide clear and easy-to-find Breakout instructions. Have specific and easy-to-find instructions for each Breakout session.  If possible, try to spread the leadership team among different Breakout Rooms. In practice, this is hard because the production manager has to find them in the list of random Breakout Rooms and reassign them (but see point 4 below and have the leadership members rename themselves).  In reality, specific instructions that aren’t too complex will allow Breakout Rooms to work fine without a member of the leadership team.  Our instructions were located in an easy-to-find place on the meeting website.
  3. Prepare for providing assistance getting into Breakout Rooms. Some people may not see their notification to join a Breakout Room pop up, so the production manager may need to walk them through that. Include a screenshot in the introductory Zoom instructions of what the Breakout Room assignment notification looks like so people know where to look. 
  4. Give extra time if using manually assigned (non-random) Breakout Rooms.  Manually sorting Breakout Rooms takes longer to organize so be sure to include the sorting time in your plans. The assigning and sorting can be done at any time during the plenary, it does not need to happen right before the Breakout Rooms open.  Use the renaming feature of Zoom to ease the sorting.  If everyone changes their Zoom name (under the Participants tab) to start with their group name or number (e.g., A1 Jody) it is much much easier for the production manager to sort. 
  5. Create extra Breakout Rooms.  When setting up the manually sorted Breakout Rooms, create additional rooms that may stay empty. There may be groups that want to breakout further and if you did not create an extra room when you set up the manually sorted rooms, these additional rooms cannot be added after the rooms are opened.
  6. Character limits for messages to Breakout Rooms.  There is a character limit for the messages that can be sent to the Breakout Rooms, so keep them short.  
  7. Breakout Rooms are unable to communicate with the production manager. When individuals are in the Breakout Rooms, they cannot use the Zoom Chat to communicate with the production manager in the main room or anyone else in the workshop who is in another Breakout Room. This is important to mention in the introductory Zoom instructions. 

Communication throughout the Workshop: Poll Everywhere and QUBES

  1. Create a means for engagement.  It is important for attendees to feel like they are involved so that the workshop isn’t a one-way delivery of information.  We used an educational account of Poll Everywhere through the University of Notre Dame to help promote participation throughout the workshop in multiple ways, including brainstorming ideas with word clouds, submitting questions and voting on priority questions for panel members, and brainstorming priorities that could then be voted on.
  2. Define use of communication tools.  Use the Zoom Chat for logistics and supplemental information and Poll Everywhere for Q&A.  That way questions on science do not get lost in questions about links or timing etc. We also were able to download all the questions for the panelists to get their feedback on any questions that we did not have time for during the Q&A sessions. We will share this feedback with the workshop participants in the next month.
  3. Centralize meeting materials.  We used QUBES as a platform to organize and share materials easily in a centralized location.  The EFI-RCN QUBES site worked well because it was free (thanks NSF and Hewlett Foundation), easy to set up, and we were able to include links to videos, surveys, Zoom login, google documents, papers, etc. all in one place.  

Introducing EFI Task Views!

Date: April 20, 2020, updated June 29, 2020 and December 16, 2022

For individuals new to the field of ecological forecasting it can feel like there are an overwhelming number of methods and tools to learn and implement. On the other hand, individuals who have been forecasting for some time may want to know if there are any additional tools that others have found useful.  In a series of 4 blog posts, the Methods and Cyberinfrastructure EFI Working Groups will highlight common tasks in ecological forecasting and methods and tools to help with those tasks.  

Today’s post will cover Reproducible Forecasting Workflows.  Other Task Views focus on 

  • Modeling & Statistical resources, including Uncertainty Quantification & Propagation 
  • Data Ingest, Cleaning, and Management
  • Visualization, Decision Support, and User Interfaces.  

The tasks and associated tools will be included in each of the four blog posts as well as kept on easily accessible and periodically updated web pages linked off the EFI Resources,  Methods & Tools, and Cyberinfrastructure pages.

Resources and tools listed in the four categories of tasks are meant to be living documents.  This list is not meant to be a comprehensive overview of all possible resources, as there are some tasks where there are hundreds of different tools available. Instead we focus on commonly used tools.  However, if there are often used tools and resources we are missing,  we welcome input from anyone — suggestions can be shared using this Google Form.  

On the short-term scale, our goal is to provide the Task Views as resources to the ecological forecasting community.  In the long-term, we want to supplement these resources with gap analyses to determine where there are unmet needs for generalizable tools (i.e. methods are known but tools don’t exist) versus where methods don’t exist and there’s a need for new research on statistical methods or cyberinfrastructure. 

Reproducible Forecasting Workflows

This material can also be found on the Reproducible Forecasting Workflows Task View Page.

Curators: Jacob Zwart1, Alexey Shiklomanov2, Kenton McHenry3, Daniel S. Katz3, Rob Kooper3, Carl Boettiger4, Bryce Mecum5, Michael Dietze6, Quinn Thomas7

1USGS, 2NASA, 3National Center for Supercomputing Applications at the University of Illinois Urbana-Champaign, 4University of California, Berkeley, 5National Center for Ecological Analysis and Synthesis, 6Boston University, 7Virginia Tech

Contents

Overview
Scripted Analyses
Project Structures
Version Control
Literate Programming
Workflows and Dependency Management
Unit Testing
Continuous Integration and Automation
Containerization
Metadata
Data and Code Release

Overview

Reproducibility8 of scientific output is of utmost importance because it builds trust between stakeholders and scientists, increases scientific transparency, and often makes it easier to build upon methods and/or add new data to an analysis. Reproducibility is particularly critical when forecasting, because the same analyses need to be repeated again and again as new data become available, often in an automated workflow. Furthermore, reproducible workflows are essential to benchmark forecasting skill, improve ecological models, and analyze trends in ecological systems over time. All forecasting projects, from single location forecasts disseminated to only a few stakeholders to daily-updating continental-scale forecasts for public consumption, benefit from using tools and techniques that enable reproducible workflows. However, the project size often dictates which tools are most appropriate, and here we give a brief introductory overview of some of the tools available to help make your ecological forecast reproducible. There are many more tools available than what we describe here, and we primarily focus on open-source software to facilitate reproducibility independent of software access.


8Reproducibility is the degree of agreement among results generated by at least two independent groups using the same suite of methods. Many of the tools highlighted here facilitate repeatability, which is a measure of agreement among results from a single task that is performed by the same person or tool. Repeatability is necessary for reproducible forecasting workflows. 

Scripted Analyses

Forecasts produced without scripted analyses are often inefficient and prone to non-reproducible output. Therefore, it is best to perform data ingest and cleaning, modeling, data assimilation, and forecast visualization using a scripted computing language that performs these tasks automatically once properly configured.

  • Interpreted languages allow the user to execute commands line-by-line, interactively, in real time. This makes debugging and exploratory analysis much easier, and significantly reduces programmer time for performing analyses. Analyses using interpreted languages are also usually easier to reproduce because of fewer installation/configuration steps (and more standardized, centralized installation mechanisms). This convenience generally comes at the expense of computational speed, but many times the tradeoff is worth it.
    • R – Originally developed for statistical computing, and still primarily used for data science and other scientific computing tasks. Many important data science tools, including statistical distributions, plotting, and tabular data analysis, are included in the core language. Tens of thousands of add-on packages for just about any task imaginable exist.
    • Python – General purpose programming language with a much more limited set of core features than R. Many data-science features are accessible through add-on packages and are curated through repositories such as PyPi, Anaconda, and Enthought.
    • Julia – Very recent language. Claims to combine the ease-of-use of interpreted languages like R and Python with the performance of compiled languages like C and Fortran. Specifically relevant to forecasting and uncertainty propagation, Julia has extremely powerful probabilistic programming tools (e.g. Turing for Bayesian inference, Flux for machine learning).
  • Compiled languages generally perform computationally intensive tasks much faster (up to 80x or more) than interpreted languages. However, their syntax is generally stricter / less forgiving, and analyses have to be written as complete programs that are compiled in operating system-specific (and even machine-specific) ways. In general, these languages should be avoided in favor of easier-to-use interpreted languages unless you are addressing specific computational bottlenecks. Note that all of the interpreted languages above provide ways to call specific functions/subroutines written in these compiled languages, so you have the option to only use these routines for specific, computationally-limiting steps in your analysis. Commonly used compiled languages include C, C++, and Fortran.
  • A good standard is to develop an analysis using an interpreted language first and assess if it is fast enough for your needs. If it is fast enough, then you are done! If not, do some basic profiling to identify performance bottlenecks (see the sketch after this list). See if there are existing tools or techniques in the language you are using that can help address the bottlenecks. Only fall back on compiled languages if you’ve reasonably exhausted possibilities using your current language.
  • List of programming languages
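As a minimal sketch of that profiling step, here is an example using base R’s sampling profiler; slow_step() is a hypothetical stand-in for an expensive step in a forecasting workflow:

```r
# Minimal profiling sketch in base R; slow_step() is a hypothetical
# stand-in for a computationally heavy step in a forecasting workflow.
slow_step <- function(n) {
  total <- 0
  for (i in seq_len(n)) total <- total + sqrt(i)  # deliberately loop-heavy
  total
}

Rprof("profile.out")                  # start the sampling profiler
for (k in 1:200) slow_step(1e4)       # run the workload to be profiled
Rprof(NULL)                           # stop profiling
summaryRprof("profile.out")$by.self   # see which calls dominate run time
```

Only if a step like this dominates total run time is it worth considering a compiled-language rewrite of that specific step.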

Back To Contents

Project Structure

Organized project structures help the scientist and collaborators navigate the workflow of an ecological forecasting project from data input to dissemination of results. Subfolders should be used to break up the project into conceptually distinct steps of the forecast, and sequential numbering of scripts and subfolders helps with readability, for example, “10_data”, “20_clean”, “40_forecast”, “60_visualize”, “95_report” (see the more detailed example below).  The number prefixes should represent a conceptual workflow for each forecasting project, and subdirectories within each phase of the project should describe the inputs, outputs, and functions for each step. Generally, unnumbered directories should contain supporting files that apply to the overall project, for example a configuration file that is used in multiple phases of the forecasting project.

Example of organized folder and file structure for a forecasting project. 
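As a minimal sketch (the phase and subdirectory names are illustrative, not prescriptive), a numbered skeleton like the one in the example above can be created programmatically from R:

```r
# Create a numbered project skeleton; phase names are hypothetical examples.
phases <- c("10_data", "20_clean", "40_forecast", "60_visualize", "95_report")
for (phase in phases) {
  dir.create(file.path(phase, "src"), recursive = TRUE, showWarnings = FALSE)  # functions/scripts for this phase
  dir.create(file.path(phase, "out"), recursive = TRUE, showWarnings = FALSE)  # outputs passed to later phases
}
dir.create("config", showWarnings = FALSE)  # unnumbered: settings shared across phases
```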

Tools for organized project structures

  • Project-oriented workflows are self-contained workflows enabling reproducibility and navigability when used in conjunction with organized project structures for ecological forecasting projects. Ideally, a collaborator should be able to run the entire project without changing any code or files (e.g. file paths should be workstation-independent; see the sketch after this list). R and Python both have options for enabling self-contained workflows in their coding environments.
    • R – RStudio projects – R projects allow for analyses to be contained in a single working directory that can be given to a collaborator and run without changing file directory paths.
    • Python – Spyder projects – Python projects also allow for self-contained analyses and integration with Git version control (see Version Control below).
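As a hedged example of workstation-independent paths in R, the here package builds paths relative to the project root, so the same script runs unchanged on any collaborator’s machine (the data file name below is hypothetical):

```r
# Build paths relative to the project root rather than hard-coding an
# absolute path such as "C:/Users/me/project/10_data/...".
library(here)
raw <- read.csv(here("10_data", "raw_observations.csv"))  # hypothetical data file
```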

Back To Contents

Version Control

Version control is the process of managing changes in code and workflows; it improves transparency in the code development process and facilitates open science and reproducibility. Code versioning also enables experimentation and development of code on different “branches” while retaining canonical files that can be used in operations, for example. Modern version control systems make it easy to create and switch between branches within a code base, encouraging developers to experiment with potentially breaking changes without worrying about losing stable code. This is especially useful for forecasting projects that need to make forecasts on a regular schedule (e.g. daily), while researchers can also make alterations to the code base on experimental branches.  Finally, version control facilitates collaboration by formalizing the process for introducing changes and keeping a record of who introduced which changes, when, and why. Additionally, version control allows contributions in an open way, even from unknown contributors, with the opportunity for the main authors to control which contributions are accepted. Software development is a trillion-dollar industry and it is well worth the time learning the basics of industry-standard tools like version control, rather than relying on ad hoc and error-prone approaches such as file naming (e.g. script.v2.R, python_script_final_FINAL.py), Dropbox/Google Drive, or emailing files to collaborators.

Tools for version control 

In the distributed model of version control, developers work from local repositories that are linked to a central repository. This enables automatic branching and merging, improves the ability to work offline, and doesn’t rely on a single repository for backup.

  • Git is the most popular open-source version control system among both ecologists and professional software developers. This popularity makes it easier to attract contributions from many collaborators, since potential contributors are likely already used to using Git and web interfaces like GitHub.
Example of version control workflow using Git. Figure from here.
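Command-line git is the most common interface, but since the examples in this post use R, here is a hedged sketch of the same basic workflow using the gert R package (the repository and file names are hypothetical):

```r
# Initialize a repository, record a first commit, and branch for experiments.
library(gert)

repo <- git_init("forecast-project")                    # create a new local repository
writeLines("# NEON forecast", file.path(repo, "README.md"))
git_add("README.md", repo = repo)                       # stage the file
git_commit("Add README", repo = repo)                   # record a snapshot with a message
git_branch_create("experimental-model", repo = repo)    # experiment without touching the stable branch
```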

List of other version control programs

Back To Contents

Literate Programming

Traditionally, scientific writing and coding are separate activities—for example, a researcher who wants to use code to generate a figure for her paper will have the code for generating that figure in one file and the document itself in another. This is a challenge for reproducibility and provenance tracking because both criteria have to be maintained for multiple files simultaneously. “Literate programming” provides an alternative approach, whereby code and text are interleaved within a single file; these files can be processed by special literate programming software to produce documents with the output of the code (e.g. figures, tables, and summary statistics) automatically interspersed with the document’s body text. This approach has several advantages. For one, the code output of a literate programming document is by definition guaranteed to be consistent with the code in the document’s source. At the same time, literate programming can make it easier to develop analyses by reducing the separation between writing and coding; for instance, interactive literate programming software can be used to keep “digital lab notebooks” where analyses are developed and described in the same file. In the context of ecological forecasting, literate programming techniques can be particularly useful for writing forecast software documentation, and can even be used for creating automatically-updating documents and reports describing forecast output.

Tools for literate programming 

Two effective and common tools for literate programming are:

  • R Markdown — Allows code from multiple different languages including R, Python, SQL, C, and sh to be embedded within a common markup language (Markdown). Multiple different languages can be embedded within different blocks in the same document. Documents can be exported to a wide range of formats, including PDF, HTML, and DOCX. By default, R Markdown documents are static (i.e. the entire document is rendered all at once with a command); however, recent versions of RStudio allow them to be used interactively by rendering specific code blocks directly in the code editor window. R Markdown documents compiled to HTML format can easily embed interactive elements ranging from clickable plots and subsettable tables (e.g. htmlwidgets) to full applications with user-defined inputs (via RShiny); for more information, stay tuned for our follow up task view on Visualization.
  • Jupyter — Unlike R Markdown, Jupyter notebooks were designed from the start to be used interactively. Documents are stored in a format that makes them difficult to edit with a plain-text editor; rather, they are typically edited using a special browser-based editor that runs a language “kernel” in the background. The results of any particular code block are stored across sessions, so code blocks do not need to be re-evaluated when exporting to other formats. A document can only use a single language, with Julia, Python, and R supported.
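As a minimal sketch of the R Markdown approach described above, the source file below interleaves narrative text with an executable R chunk; the plot is regenerated from the named data file (a hypothetical path) every time the document is rendered:

````markdown
---
title: "Weekly forecast report"
output: html_document
---

Observed and forecasted water temperature for the past week.

```{r temperature-plot, echo=FALSE}
fc <- read.csv("60_visualize/out/latest_forecast.csv")   # hypothetical forecast output
fc$date <- as.Date(fc$date)
plot(fc$date, fc$temp_mean, type = "l",
     xlab = "Date", ylab = "Water temperature (C)")
```
````

Rendering the file (e.g. with rmarkdown::render("report.Rmd")) produces an HTML report in which the figure is, by construction, consistent with the code that generated it.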

Back To Contents

Workflows and Dependency Management

Workflows are typically high-level descriptions of sets of tasks to be performed as part of an overall scientific application, at least in the context of this blog. There are a wide variety of methods and formats for expressing such descriptions. Workflows must include information about the tasks themselves, as well as their inputs and outputs, which either implicitly define or explicitly state dependencies among the tasks. This information, including the dependencies, is used by a Workflow Management System (WMS) to execute the tasks, potentially 1) on a local computer or one or more remote computers, including clouds and HPC or HTC systems; 2) serially or in parallel; 3) from the start or from a previous partially complete state. These dependencies can be static (fully defined before the application is run) or dynamic (e.g. partially defined based on data, execution, or other resources).

Workflow management systems help efficiently reproduce portions of or entire scientific workflows. These tools analyze workflows, skip phases of the workflow that are up-to-date (if the exact inputs and tasks have been run previously, the previous outputs can be returned; this technique is sometimes called memoization), and execute tasks that are out-of-date, tasks downstream of out-of-date tasks, or tasks required to execute based on scheduled run times (e.g., a daily-updating forecast). These tools are especially useful for large projects that bring multiple streams of data together in an analysis, since they relieve the analyst of the duty of keeping track of workflow order and which tasks need to be rerun. For example, when new data about a model parameter are included in the forecasting workflow, only the portion of the workflow dependent on those new data will be executed.

Example of a simple dependency graph and which tasks will be executed using a workflow management system (from Drake workflow example).   
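As a minimal sketch of this idea using the drake R package (listed below), a plan declares targets and their dependencies; clean_data() and run_forecast() are hypothetical user-defined functions, and the file paths are illustrative:

```r
# A drake plan: targets are rebuilt only when their code or upstream data change.
library(drake)

plan <- drake_plan(
  raw      = read.csv(file_in("10_data/raw_observations.csv")),   # hypothetical input file
  clean    = clean_data(raw),                                     # hypothetical user function
  forecast = run_forecast(clean),                                 # hypothetical user function
  report   = rmarkdown::render(knitr_in("95_report/report.Rmd"),
                               output_file = file_out("95_report/report.html"))
)

make(plan)  # skips up-to-date targets; reruns only out-of-date ones and their downstream targets
```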

Below we list a few tools for workflows and dependency management.  There are, however, many other workflow and dependency management tools; a larger list can be found here.

  • Drake is an R-based ‘make’ like toolkit that tracks dependencies among phases of your workflow and executes work that is out-of-date. Drake builds upon previous R dependency managers such as remake, and can deal with high-performance or -throughput computing (HPC / HTC) within the WMS framework. This includes automated detection and retries for model failures, and launching Slurm (or other job schedulers for HTC) jobs directly from a drake plan.
  • Snakemake is a Python-based workflow management tool that includes a lot of the same functionality as Drake for R, including being compatible with HPC / HTC or cloud computing environments. The rules defined in a Snakemake target can use shell or Python commands or run external Python or R scripts, as well as utilize various remote storage environments such as Amazon S3, Dropbox, or Google Storage.
  • Parsl is a Python library that lets users define a workflow through a Python program, or parallelize a Python program. They do this by ‘decorating’ the definition of Python functions and calls to external applications to indicate that they are potentially parallelizable and asynchronous tasks. When such a task is called, Parsl intercepts it and adds it to an internal dynamic directed acyclic graph that captures the overall application dependencies. If both the inputs for the task and execution resources are available, the task is run, and if not, it waits until these conditions are satisfied. In either case, Parsl immediately returns a ‘future’, a placeholder for the eventual return value, so that the overall application can proceed, which allows multiple tasks to run in parallel. Parsl is an open source project led by U Chicago & Illinois, which supports a wide variety of execution resources (e.g., local, CPUs, GPUs, HPC, HTC, cloud) and schedulers.
  • Pegasus is another scientific workflow system with a long history of development and use in the science world (e.g., it’s the workflow system used by LIGO)
  • Argo is a more recent kubernetes-based workflow system, convenient when much of the workflow is within docker already (see Containerization below).
  • Airflow is another workflow system, developed and used by AirBnB and others, mostly in industry. Airflow is now a project within the Apache Software Foundation. It allows a user to author workflows as Directed Acyclic Graphs (DAGs) of tasks. The Airflow scheduler executes the tasks on an array of workers while following the specified dependencies. It also has a user interface to allow the user to visualize pipelines running in production, monitor progress, and troubleshoot issues.

Back To Contents

Unit Testing

Ecological forecasting workflows can be complex, involving many steps from data ingest and cleaning to producing forecasts and visualizing output. Often these workflows need to produce output on a regular schedule, and ensuring that each part of the workflow performs appropriately, whether operational or not, is crucial for making forecasts and identifying failure points. Unit testing consists of automated tests of small units within a larger workflow to ensure that the different sections behave as intended (e.g. testing that individual functions return the expected outputs for valid inputs and the expected errors for invalid inputs). Frequently, unit tests are also used for regression testing, where a test is created for a previous bug or problem that has been fixed; the regression test prevents that bug from being reintroduced. In combination with continuous integration (see below), these tests ensure that modifications to a code base run as expected after the modifications have been integrated.

For complex workflows or systems, unit tests only ensure that each individual component works as intended. Additionally, an integration or system test will need to be performed at certain points to test all the components interacting with each other; for example, does each component still produce the outputs expected by the next step in the workflow?

Tools for unit testing 

Most programming languages have a testing framework that will help with unit tests; a list of tools can be found here. Commonly used frameworks for languages used in forecasting include testthat for R and pytest for Python.
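For instance, a minimal sketch with the testthat R package; celsius_to_kelvin() is a hypothetical helper from a forecasting workflow:

```r
# Unit tests check that a small piece of the workflow behaves as intended.
library(testthat)

celsius_to_kelvin <- function(x) {
  stopifnot(is.numeric(x))  # invalid inputs should fail loudly
  x + 273.15
}

test_that("temperature conversion behaves as expected", {
  expect_equal(celsius_to_kelvin(0), 273.15)   # valid input returns the expected value
  expect_error(celsius_to_kelvin("zero"))      # invalid input raises an error
})
```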

Back To Contents

Continuous Integration and Automation

Both the models we use to make predictions and the forecasting workflows we build around them are, in some sense, always a work in progress. Any time we make changes to our models and workflows, whether it’s updating a library or adding a data source, there’s a chance that we’ll break our workflow. Tools for continuous integration enable researchers to update their forecasts and run tests on their code in an automated and robust manner (e.g. with system tests in place to catch changes that would otherwise break a deployment).  Continuous Integration (CI) tools automatically build and deploy software ecosystems and test new versions of code to ensure that model development continues to work. This is especially important for iterative forecasts that need to be deployed at regular intervals (e.g. daily forecasts). As CI tools continue to become more powerful, flexible, and generous with their service offerings, they can expand from supporting development workflows to even being used as the primary platforms for application workflows, such as iterative, real-time forecasting.  Below we list a few of these tools; a larger list can be found here or here:

  • Travis CI, probably the most popular automated testing tool on GitHub, at least in the recent past.  This service is designed to test builds and run unit tests (and other, short-lived scripts) on a variety of different virtual platforms with different configurations.  Travis CI runs for free on its own servers, but has time and CPU limits, at least for the free version (though a user can request that these limits be increased).  Some features include the ability to run actions in parallel (configured via a YAML file) and an ability to be accessed via an API.
  • GitHub Actions, similar to Travis CI, but hosted natively by GitHub and with more generous time, memory, and CPU allowances for open-source (public) projects on GitHub.  GitHub Actions is quickly increasing in popularity.
  • GitLab CI, similar to Travis and GitHub Actions but hosted by GitLab.
  • Circle CI, similar to Travis and GitHub Actions.
  • Jenkins, a locally run alternative that you can deploy on your own servers.

Back To Contents

Containerization

Complex scientific workflows often involve combining multiple different tools written in different programming languages and/or possessing different software dependencies. Even a simple R script may depend on multiple R packages, and may only work as expected if specific versions of those packages are used. Managing these different tools and their dependencies can be a complex task, especially when tools conflict with each other (e.g. one tool may only work with an older version of a library, while another tool may only work with a newer version of the same library). As the number of tools and their dependencies in a workflow grows, managing these dependencies becomes challenging, and reproducing this workflow on a different machine (potentially with a different operating system) is even more challenging. Containers resolve these issues by providing a way to create isolated packages for each software element and its dependencies. These containers can then run on any computing environment (as long as it has the requisite container software itself installed). Moreover, containerization software sometimes allows for the creation of container stacks (a.k.a “orchestration”)— collections of multiple containers that communicate with each other (including sharing data) and with the host system in precise, user-defined ways (see Workflow and Dependency Management above). In some cases, these container stacks can be deployed across multiple physical or virtual computers, which greatly facilitates the process of scaling computationally intensive analyses.

Tools for containerization

By far the most common tool for containerization — indeed, the emerging standard across the software development industry — is Docker. Docker containers are typically created from a definition file, basically just a starting container (e.g. a specific version of a Linux operating system) followed by a list of shell commands describing the installation and configuration of the specified software and its dependencies. Thousands of existing containers (any of which can be used as a starting point for a custom container) for a wide range of software are available on Docker Hub, a publicly available registry. Software stacks and workflows using multiple containers can be created via Docker Compose, which automatically configures and runs multiple interrelated Docker containers from a human-readable (YAML) specification file. Several tools for orchestration of Docker containers exist — Docker Swarm is distributed as part of Docker (i.e. no additional installation) and allows for rapid deployment with minimal configuration, while Kubernetes is a much more complex but feature-rich solution. Another quickly maturing tool leveraging Docker is The Binder Project, which is a relatively easy to use tool that turns a Git repository into a Docker image for deploying a reproducible computing environment in the cloud. 

Unfortunately, Docker’s design precludes its use on high-performance computing clusters and other enterprise-managed machines often encountered in the sciences. In particular, running Docker containers requires running a persistent background process with administrative (“root”) privileges on the host machine. This is not an issue on self-managed, isolated physical (e.g. your personal laptop) and virtual (e.g. Amazon Web Services) machines, but it poses a major security concern on shared, centrally managed systems. Singularity is an alternative that was designed specifically to address these concerns. Unlike Docker, Singularity does not require a persistent background process to run — rather, its design involves creating containers that are fully self-contained executable files. These files can then be distributed just like any other files, and executed on any machine (as long as that machine has a compatible version of Singularity installed). The initial install of Singularity, as well as the creation of containers, does require root permissions, but unlike Docker, the containers themselves run as a single process with only user permissions. Besides the security implications, this design also makes Singularity containers more amenable to HPC queue submission systems (running the containers is effectively the same as running any other executable). Like Docker, Singularity containers can be created via a definition file, and can be stored on a free, publicly available registry (Singularity Hub). The major downside of Singularity is that it has a much smaller user base (largely limited to a subset of the scientific community, compared to Docker’s widespread use in both science and industry) and is much less mature. For example, while Singularity does provide a “Compose” interface, as of this writing it is still in early development and highly experimental. Singularity also works with Kubernetes.

Back To Contents

Metadata

Metadata provide crucial information about ecological forecasting data, including model input, output, and parameters, among others. Metadata tell the user how to interpret model output and what conditions are needed to reproduce it. Metadata are also used to describe the size and dimensions of the dataset, the quality of the data, the author of the data, keywords of the project used to produce the data, and details on how the data were produced. Appropriately documenting ecological forecasting output helps other researchers find relevant datasets and reuse output for other applications, such as input to other models or cross-model comparisons like a forecasting challenge.

Tools for metadata 

  • Ecological Metadata Language (EML) is a community-maintained project for documenting research data with a readable XML markup syntax. EML serves the needs of the research community and is modularly designed to enable growth in the language as the needs of the earth and environmental sciences evolve. The Ecological Forecasting Initiative has developed additional forecast-specific standards using EML as the base metadata standard. The EML R package facilitates generating an EML document; however, these documents can also be created using a text editor or other scripting languages such as Python (see the sketch after this list).
  • EFI is in the process of drafting an ecological forecasting metadata standard that extends EML. Current info is located in our forecast-standards repo.
  • Many other metadata standards can be found here.
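As a hedged sketch (all dataset details below are hypothetical), the EML R package builds an EML document from plain R lists and can check it against the schema:

```r
# Build, write, and validate a minimal EML document; contents are hypothetical.
library(EML)

my_eml <- list(
  packageId = "forecast-example-2020-04-20",  # hypothetical identifier
  system    = "uuid",
  dataset   = list(
    title    = "Daily water temperature forecasts for a NEON lake site",
    creator  = list(individualName = list(givenName = "Jane", surName = "Doe")),
    contact  = list(individualName = list(givenName = "Jane", surName = "Doe")),
    pubDate  = "2020",
    abstract = "Forecast output plus the information needed to reproduce it."
  )
)

write_eml(my_eml, "forecast_metadata.xml")   # serialize the list to EML XML
eml_validate("forecast_metadata.xml")        # check the document against the EML schema
```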

Back To Contents

Data and Code Release

A core principle of creating reproducible scientific workflows is making the code and data used in the analyses available to the public through data and code publication or releases. It is now often required by journals or institutions to publish the data used in scientific publications and, to a lesser extent, the code used in the analyses. Many of the other reproducibility principles described above enable efficient data and code release and publication. For example, remote version control repositories, such as GitHub, display developmental and stable code bases and can tag versions of code to be released along with details on what the version was used for (e.g. “v1.2.1 used in analyses described by Dasari et al. 2019”). These code releases can also become citable with a digital object identifier (DOI) by connecting with other archiving tools. Data releases should also be relatively painless if the previous principles of reproducible workflows are followed. Key to data releases and publishing in repositories are descriptive metadata that describe important characteristics of the dataset that is to be published (see the Metadata section above). Additionally, embedding data publishing tasks (e.g. metadata descriptions, pushing data to a remote repository) in a dependency management system (see above) can make updating data in a public repository as easy as executing one line of code.

Tools for data and code release 

Back To Contents

Social Science in Ecological Forecasting: People refining forecast visualization

Date: March 23, 2020

Post by: Kathy Gerst1, Kira Sullivan-Wiley2, and Jaime Ashander3

Series contributors: Mike Gerst4, Kailin Kroetz5, Yusuke Kuwayama6, and Melissa Kenney7

1 USA National Phenology Network, 2Boston University, 3Resources for the Future, 4University of Maryland, 5University of Arizona, 6Arizona State University, 7University of Minnesota

In this series, we are asking: How might ideas from the social sciences improve ecological forecasting? What new opportunities and questions does the emerging interdisciplinary field of ecological forecasting raise for the social sciences? This installment engages with the relationship between the people producing forecast outputs and the people interpreting those outputs.

https://smartreservoir.org/

First, it is important to note that some forecast products have a specific and known user (e.g., the EFI-affiliated Smart Reservoir project producing forecasts for the Western Virginia Water Authority). Others are produced for public use (e.g., weather forecasts) or for a wide range of potential users, some of whom may be known, but others not. This is important because there is variability in how people perceive, engage with, and understand visual products. This variability was seen in popular media in 2015 when the picture of a dress went viral and people could not agree on whether the dress was gold and white or black and blue. What people “see” can vary based on differences in not only neurology, but also personal experiences and perspectives. For forecasters, this means that we should never assume that a visual product that means one thing to us will automatically have the same meaning for the people using it.

Using social science approaches to gain insights on how stakeholders perceive and use forecast outputs can improve the ways in which model outputs are visualized and shared.

One example where people contribute to the design and refinement of forecasts comes from the USA National Phenology Network (USA-NPN; www.usanpn.org). The USA-NPN collects, stores, and shares phenology data and information to advance science and inform decisions. To do so, USA-NPN engages stakeholders, including natural resource managers and decision-makers, to guide the prioritization, selection, and development of data products and tools. This happens through a deliberate effort to engage and seek input from existing and new stakeholder audiences by cultivating relationships that can lead to collaborative teams and co-production of products. In this way, data users are actively involved throughout the process of scoping and developing projects that meet their needs.

An example of a USA-NPN product designed with stakeholder input is the suite of Pheno Forecast maps (www.usanpn.org/data/forecasts); these maps show, up to 6 days in advance, when insect pests and invasive species are going to be in life stages that are susceptible to treatment. Interactions with end users after pilot maps were released revealed several opportunities for improvement.

Figure 1. The look and message of the Emerald Ash Borer Phenological Forecast produced by the USA National Phenology Network (USA-NPN) shifted substantially between 2018 (top) and 2019 (bottom) due to stakeholder feedback.


The original maps released in 2018 were focused on the “timing of treatment” – that is, map categories portrayed locations based on whether they were occurring before, during, or after particular treatment windows. Stakeholders requested the visualized information reflect life stages (e.g. eggs hatching or adults emerging) rather than treatment window status (e.g. “approaching treatment window”), giving the end user the opportunity to determine when (or if) to implement treatment (Figure 1).  This visualization style also increased the potential for the forecasts to be used by a broader community of people, especially those with interests other than treatment.

In addition, some end users indicated that the original legend categories reflecting treatment window were too broad, and advocated for the use of narrower legend categories that allowed for more spatial differentiation among categories and more precise lead time before life stage transitions. Through consultation and the inclusion of stakeholders in the process, the NPN was able to harness the collective knowledge of the community to ultimately provide users with more nuanced and actionable information.

This case highlights both the importance of visualizations in communicating forecast information and the importance of recognizing that visualizations are more successful when they reflect a range of potential users and uses. Approaches such as stakeholder participation and forecast co-production are likely to increase the usability of forecast products and their associated decision tools.

Further reading
Additional resources will be updated in a living document that EFI is working to put together. We will add a link here once that goes live.

Academic Books and Articles
Books by Denis Wood and co-authors on the ideologies, power dynamics, and histories embedded in maps. These accessible books may be particularly interesting for those interested in how visualizations (maps) can be harnessed by different people to establish or reclaim power, shape the human-environment relationship, or project a particular vision of reality.

  1. Weaponizing Maps (2015): This book explores the “tension between military applications of participatory mapping and its use for political mobilization and advocacy”
  2. Rethinking the Power of Maps (2010): This book updates the 1992 “Power of Maps” and expands it by exploring “the promises and limitations of diverse counter-mapping practices today.”
  3. The Natures of Maps (2008): This book draws examples from maps of nature (or environmental phenomena) in order to reveal “the way that each piece of information [visualized in the map] collaborates in a disguised effort to mount an argument about reality” in ways that can shape our relationship to the natural world.
  4. The Power of Maps (1992): This book shows “how maps are not impartial reference objects, but rather instruments of communication, persuasion, and power…[that] embody and project the interests of their creators.”

Masuda (2009). Cultural Effects on Visual Perception. In E. B. Goldstein (Ed.), Sage Encyclopedia of Perception (Vol. 1, pp. 339-343). Sage Publications. link
This chapter is a good overview of psychology research on how culture can influence what and how we “see.” It is full of research examples and is written in an accessible way.

Burkhard (2004). “Learning from architects: the difference between knowledge visualization and information visualization.” Proceedings of the Eighth International Conference on Information Visualisation (IV 2004), London, UK, pp. 519-524. IEEE link
This paper is a quick and helpful overview of how a complement of tools can help us communicate either information or knowledge, drawing on tools used by architects (i.e. sketches, models, images/visions)

Chen et al. (2009) “Data, Information, and Knowledge in Visualization” IEEE. link
This paper is a good reference for the distinctions among data, information, and knowledge, and how these can and should be treated differently in visualization products. Especially useful for those interested in distinctions in how these topics are treated in the cognitive and perceptual space as opposed to the computational space.

Blog posts on “best practices in data/information visualization” from data science companies
Note: These practices underlie many of the visualizations produced and shared from the private sector and appear in advertising and media outlets—visualizations and visualization styles that will be familiar to most users of ecological forecasts

Social Sciences in Ecological Forecasting

Date: December 13, 2019

Post by: Kira Sullivan-Wiley1 and Jaime Ashander2

Series contributors: Mike Gerst3, Kathy Gerst4,5, Kailin Kroetz6, Yusuke Kuwayama2, and Melissa Kenney7

1Boston University, 2Resources for the Future, 3University of Maryland, 4USA National Phenology Network, 5University of Arizona,  6Arizona State University,  7University of Minnesota

Ecological forecasting involves models and ecology but is also a fundamentally people-centered endeavor. In their 2018 paper, Mike Dietze and colleagues outlined the ecological forecasting cycle (Figure 1 is a simplified version of that cycle), in which forecasts are designed, implemented, disseminated, and iteratively reassessed and improved through redesign.  This cycle is about a process, and in each part of this process there are people. Wherever there are people, there are social scientists asking questions about them, their actions, and how insights about both can improve decisions.

So we ask the questions: How might ideas from the social sciences improve ecological forecasting? What new opportunities and questions does the emerging interdisciplinary field of ecological forecasting raise for the social sciences?

This post introduces a series of posts that address these questions, discussing opportunities for the social sciences in Ecological Forecasting Initiative (EFI) and gains from considering humans in forecasting research. Thus, our aim is to better describe the role of social scientists in the ecological forecasting cycle and the opportunities for them to contribute to EFI.

Figure 1. The process of producing a forecast, which may begin at any step depending on which stakeholder group initiates it.

So where are the people?

At EFI, we’re interested in reducing uncertainty of forecasts, but also improving the processes by which we make forecasts that are useful, useable, and used. This means improving forecasts, their design, use, and impact, but it also results in a range of opportunities to advance basic social science.

To do this we need to know: Where are the people in this process? Where among the boxes and arrows of Figure 1 are people involved, where do their beliefs, perceptions, decisions, and behavior make a difference? Are there people outside this figure who matter to the processes within it? Figure 2 highlights critical groups of people involved, and some of the actions they take, that are integral to the ecological forecasting process.

Figure 2. Critical actors in the ecological forecasting process, linked with important actions.

Figure 2 moves beyond the three-phase process described in Figure 1 because the process of ecological forecasting is predicated on funding sources and data, often provided by people and processes outside the forecasting process. So when we think about where social science belongs in ecological forecasting, we have to look beyond the forecasting process alone.

Making forecasts better

If EFI wants to make this process work and produce better forecasts, which of these actors matters and how? How might different stakeholders even define a “better” forecast? One might think of “better” as synonymous with a lower degree of uncertainty, while another might measure quality by a forecast’s impact on societal welfare. Ease of use, ease of access, and spatial or temporal coverage are other metrics by which to measure quality, and the relative importance of each is likely to vary among stakeholders. Social science can help us to answer questions like: Which attributes of a forecast matter most to which stakeholders, under what conditions, and why?

While a natural scientist might assess the quality of a forecast based on its level of uncertainty, a social scientist might assess the value of a forecast by asking:

  • What individuals are likely to use the forecast?
  • How might the actions of these individuals change based on the forecast?
  • How will this change in actions affect the well-being of these or other individuals?

Building “better forecasts” will require a better understanding of the variety of ways that stakeholders engage with forecasts. The posts in this blog series will shine a spotlight on some of these stakeholder groups and the social sciences that can provide insights for making forecasts better. Posts in the series will discuss issues ranging from how stakeholders interact with forecast visualizations to the use of expert judgements in models to when forecasts should jointly model human behavior and ecological conditions. Considering these questions will help forecasters design forecasts that are more likely to increase our understanding of these socio-environmental systems and enhance societal well-being.

These examples hint at the range of potential interactions between the social sciences and ecological forecasting. There is a wealth of opportunity for social scientists to use the nascent field of ecological forecasting to ask new and interesting questions in their own fields. In turn, theories developed in the social sciences have much to contribute to the emerging interdisciplinary practice of ecological forecasting in socio-environmental systems. As we can see, better ecological forecasting may require us to think beyond ecological systems.

EFI at AGU 2019

Date: December 6, 2019

EFI’s oral and poster sessions on “Ecological Forecasting in the Earth System” have been scheduled for Wednesday, December 11, 2019: the oral session runs from 4:00 to 6:00 pm in Moscone 3001, and the poster session runs Wednesday morning from 8:00 am to 12:20 pm in the Moscone South Poster Hall. We’re excited to have a great set of speakers that span the full gradient from terrestrial to freshwater to marine. Come check out the following talks!

Wednesday EFI Oral Session (4-6pm, Moscone 3001)

16:00 Nicole Lovenduski – High predictability of terrestrial carbon fluxes from an initialized decadal prediction system
16:15 Ben Bond-Lamberty – Linking field, model, and remote sensing methods to understand when tree mortality breaks the forest carbon cycle
16:30 Zoey Werbin – Forecasting the Soil Microbiome
16:45 Brian Enquist – Forecasting future global biodiversity: Predicting current and future global plant distributions, community structure, and ecosystem function
17:00 Heather Welch – Managing the ocean in real-time: Ecological forecasts for dynamic management
17:15 Clarissa Anderson – Bringing new life to harmful algal bloom prediction after crossing the valley of death
17:30 Ryan McClure – Successful real-time prediction of methane ebullition rates in a eutrophic reservoir using temperature via iterative near-term forecasts
17:45 Carl Boettiger – Theoretical Limits to Forecasting in Ecological Systems (And What to Do About It)

Wednesday EFI Poster Session (8am-12:20pm, Moscone South Poster Hall)

Christopher Trisos – B31J-2509 The Projected Timing of Abrupt Ecological Disruption from Climate Change
Gleyce K. D. Araujo Figueiredo – B31J-2510 Spatial and temporal relationship between aboveground biomass and the enhanced vegetation index for a mixed pasture in a Brazilian integrated crop livestock system
Rafael Vasconcelos Valadares – B31J-2511 Modeling Brazilian Integrated Crop-Livestock Systems
Zhao Qian – B31J-2512 An optimal projection of the changes in global leaf area index in the 21st century
Takeshi Ise – B31J-2513 Causal relationships in mesoscale teleconnections between land and sea: a study with satellite data
Hisashi Sato – B31J-2514 Reconstructing and predicting global potential natural vegetation with a deep neural network model
Masanori Onishi – B31J-2515 The combination of UAVs and deep neural networks has a potential as a new framework of vegetation monitoring
Yurika Oba – B31J-2516 VARENN: Graphical representation of spatiotemporal data and application to climate studies
Stephan Pietsch – B31J-2517 A Fast and Easy to use Method to Forecast the Risks of Loss of Ecosystem Stability: The Epsilon Section of Correlation Sums
Jake F Weltzin – B31J-2518 Developing capacity for applied ecological forecasting across the federal research and natural resource management community
Theresa M Crimmins – B31J-2519 What have we learned from two seasons of forecasting phenology? The USA National Phenology Network’s experience operationalizing Pheno Forecasts
Tim Sheehan – B31J-2520 Sharp Turn Ahead: Modeling the Risk of Sudden Forest Change in the Western Conterminous United States
Margaret Evans – B31J-2521 Continental-scale Projection of Future Douglas-fir Growth from Tree Rings: Testing the Limits of Space-for-Time Substitution
Ann Raiho – B31J-2522 Improving forecasting of biome shifts with data assimilation of paleoecological data
Quinn Thomas – B31J-2523 Near-term iterative forecasting of water quality in a reservoir reveals relative forecastability of physical, chemical, and biological dynamics
Alexey N Shiklomanov – B31J-2524 Structural and parameter uncertainty in centennial-scale simulations of community succession in Upper Midwest temperate forests
Peter Kalmus – B31J-2525 Identifying coral refugia from observationally weighted climate model ensembles
Jessica L O’Connell – B31J-2526 Spatiotemporal variation in site-wide Spartina alterniflora belowground biomass may provide an early warning of tidal marsh vulnerability to sea level rise
Rafael J. P. Schmitt – B31J-2527 Assessing existing and future dam impacts on the connectivity of freshwater fish ranges worldwide
Teng Keng Vang – B31J-2528 Site characteristics of beaver dams in southwest Ohio

Other Forecasting Presentations

Mon 13:40-15:40, Moscone South e-Lightning Theater: Alexandria Hounshell, ED13B-07 Macrosystems EDDIE: Using hands-on teaching modules to build computational literacy and water resources concepts in undergraduate curricula (Alex’s presentation will be at ~2pm)
Mon 13:40-18:00, Poster Hall: Hamze Dokoohaki B13F-2442 – A model–data fusion approach to estimating terrestrial carbon budgets across the contiguous U.S.
Mon 14:25, Moscone 3005: Michael Dietze B13A-04 – Near real-time forecasting of terrestrial carbon and water pools and fluxes
Mon 17:40, Moscone 3003: Michael Dietze B14B-11 – Near real-time forecasting in the biogeosciences: toward a more predictive and societally-relevant science
Tues 13:40-18:00, Poster Hall: Erin Conlisk B23F-2598 – Forecasting Wetland Habitat to Support Multi-Species Management Decisions in the Central Valley of California
Wed 08:00-12:20, Poster Hall: B31H Building Resilient Agricultural Systems Supported by Near-Term Climate and Yield Forecasts II [Poster Session]
Wed 13:55, Moscone 3005: Inez Fung B33A-02 – Towards verifying national CO2 emissions
Thurs 09:15, Moscone 3012: John Reager B41A-06 – Hydrological predictors of fire danger: using satellite observations for monthly to seasonal forecasting 
Fri 10:20-12:20, Moscone 3007: B52A Building Resilient Agricultural Systems Supported by Near-Term Climate and Yield Forecasts I [Oral Session]

EFI Social

If you are available to meet up after the forecasting session on Wednesday, a group will be getting together at Tempest starting around 6:30 pm. It’s an 8-minute walk from Moscone; find directions here.

Seeking Judges for Outstanding Student Presentations

We would like to recruit judges for the student presentations in our forecasting sessions at AGU this year. We have one candidate for Outstanding Student Presentation in our poster session on Wednesday morning (B31J) and two candidates in our oral session Wednesday afternoon (B34C). If you plan to attend either of these sessions, please consider helping to mentor a young researcher with some constructive feedback.
You can sign up to judge at https://ospa.agu.org/2019/ospa/judges/ by selecting “Register to Judge” to register and agree to the honor code. Once registered:

  • Sign up for the student presentations you wish to evaluate. Every judge must sign up for at least three presentations to ensure that all students receive feedback. Select “Find Presentations” and search for B31J or B34C in the lower of the two “quick search” boxes.
  • When you arrive at Fall Meeting, confirm the time and location of the presentations you are evaluating. You can sync your judging schedule to your personal calendar to ensure you don’t accidentally miss any presentations: go to your OSPA schedule and click “Add to Calendar” on the task bar, and your judging schedule will be added to your Google Calendar, Outlook, or iCalendar.
  • Evaluate all presentations you volunteered to judge. Students depend on your feedback to assess their presentation skills and to identify the areas in which they are performing well and the areas that need improvement.
  • Either submit scores in real time on a tablet or mobile device, or take notes while you evaluate students and enter the scores later. Do not rely on memory alone to submit complete scores at a later time. Students participate in OSPA to improve their presentation skills, so please provide them with thorough feedback. This year, comments are required in addition to the numerical scores.
  • All reviews must be entered into the OSPA site no later than Friday, 20 December 2019, at 11:59 p.m. EDT.
  • Finally, be constructive! OSPA presenters range in education level from undergraduate to Ph.D. students, and for many presenters English is not their first language. Keep these things in mind when providing feedback; judges are asked to evaluate students at their current education and language proficiency levels.

EFI Status Update: Accomplishments over the Past 6 Months

Date: December 1, 2019

Post by Michael Dietze, Boston University

We have had a busy 6 months with lots of progress and community building for the Ecological Forecasting Initiative. Here is a summary of what the group has been up to since the EFI meeting in DC in May.

Participants at the May 2019 EFI Meeting in Washington, DC

The inaugural meeting of the Ecological Forecasting Initiative took place at AAAS Headquarters in Washington, DC on May 13-15, 2019. The meeting brought together >100 participants from a broad array of biological, social, and physical environmental sciences, spanning academia, government agencies, and non-profits internationally. Overall, it was a highly productive meeting that generated a lot of excitement about our growing community of practice. The meeting was organized around EFI’s seven themes (Theory, Decision Science, Education, Inclusion, Methods, Cyberinfrastructure, Partners) with a mix of keynotes, lightning talks, and panel discussions on each area. The panel discussions were particularly valued by participants, as they generated dynamic community discussions and often highlighted the perspectives of early-career participants. The meeting also included time for breakout discussions, starting with a series of sessions (with participants randomly intermixed) addressing high-level questions about the opportunities for advancing science and decision making, and the challenges and bottlenecks facing our community. These breakouts then fed into a later set of sessions, organized by theme, where individuals self-organized by interest to synthesize what we learned and to start discussing next steps. Finally, there was a healthy amount of unstructured break time, as well as a conference dinner on Monday night and a poster session early Tuesday evening, that provided attendees with time for informal discussions and networking. A post-meeting survey showed overall satisfaction with the meeting was very high (4.8 of 5), as was the likelihood of attending another EFI meeting (4.6 of 5).

The original conference plan was for the breakout groups organized around the EFI cross-cutting themes to be the kick-off of the theme working groups. In practice, this was delayed slightly by the NSF Science and Technology Center preproposal deadline (June 25), which occupied much of the organizing committee’s time in the ~6 weeks post-conference. However, working group telecons kicked off in July, and all eight working groups have continued to meet virtually on Zoom approximately monthly. Based on group discussions at the conference and our post-meeting survey, a number of key ideas emerged for the working groups to focus on. A top priority was the establishment of community standards for forecast archiving, metadata, and application readiness levels. Standards will allow our growing community to more easily perform higher-level synthesis, disseminate predictions, develop shared tools, and allow third-party validation. The bulk of the work on developing a draft set of forecast standards has been taken on by the Theory working group, which is focused on making sure forecast outputs and metadata will be able to support larger, synthetic analyses. Theory has also held joint meetings about standards with Cyberinfrastructure, which has focused on the CI needs of archives (blog post in prep), repeatability/replication, and the standardization of model inputs. Application Readiness Levels (ARLs) have also been discussed by the Decision team, which wanted to evaluate whether existing NASA and NOAA ARLs reflect decision readiness.
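
To make the idea of standardized forecast outputs and metadata concrete, the sketch below shows one hypothetical shape such files could take: a long-format table with one row per time, ensemble member, and variable, plus a small companion metadata record describing how the forecast was made. The column names, metadata fields, and file names here are illustrative assumptions for discussion, not the draft EFI standard itself.

```python
# Minimal, hypothetical sketch of a "standardized" forecast file: a long-format
# table (one row per time x ensemble member x variable) plus a small metadata
# record. Field names are illustrative assumptions, not the EFI standard.
import json
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
dates = pd.date_range("2020-01-01", periods=7, freq="D")
n_ensemble = 10

rows = []
for member in range(1, n_ensemble + 1):
    # Toy water-temperature forecast with a random walk to represent spread
    values = 10 + np.cumsum(rng.normal(0, 0.3, size=len(dates)))
    for t, v in zip(dates, values):
        rows.append(
            {"datetime": t, "ensemble": member,
             "variable": "water_temperature", "prediction": round(float(v), 2)}
        )

forecast = pd.DataFrame(rows)
forecast.to_csv("forecast-2020-01-01.csv", index=False)

# Companion metadata describing how the forecast was produced (illustrative)
metadata = {
    "forecast_issue_time": "2020-01-01T00:00:00Z",
    "model_name": "example_ar1_model",
    "uncertainty_sources": ["initial_conditions", "process"],
    "contact": "eco4cast.initiative@gmail.com",
}
with open("forecast-2020-01-01-metadata.json", "w") as f:
    json.dump(metadata, f, indent=2)
```

A shared layout like this, whatever its final form, is what would let third parties score many different forecasts with the same code and support the larger synthetic analyses the Theory group has in mind.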

Second, there was considerable enthusiasm for discussing and documenting best practices, both around the technical aspects of forecasting and around decision science and interacting with stakeholders. On the technical side, the Methods and Tools team is working on a document summarizing the tools being used by the community in seven key areas: Visualization & Decision Support tools; Uncertainty quantification; Data ingest; Data cleaning & harmonization; User interfaces; Workflows & Reproducibility; and Modeling & Statistics. The primary goal of this exercise is to produce a set of EFI webpages that inform forecast developers, especially newer members of the community, about the tools available. The secondary goal is to enable a gap analysis that will help the Methods and Tools team prioritize areas where needed tools are missing or not meeting the needs of the community. At the same time, the Decision team has been discussing the stakeholder side of best practices: it has already produced two blogs about lessons learned by NOAA in translating from Research to Operations (R2O), and a third blog is being drafted that describes areas in the ecological forecasting process where social science can provide valuable input. Similarly, the Partners team has been thinking about how to improve the ‘matchmaking’ process between stakeholders and forecasters. It is working on a survey to reach out to potential EFI partners to let them know what EFI is and what we are doing, and to learn how organizations are currently using data, models, and forecasts and where there is potential for synergy with EFI.

Third, the community is interested in the expansion of educational materials and open courseware. The Education and Diversity teams have mostly been meeting together. They have discussed key forecasting vocabulary and are working with EFI’s Cayelan Carey, who has a new NSF Macrosystems grant to develop undergraduate forecasting modules, on a survey of forecast instructors that will provide information on (and a compilation of) syllabi, code, problem sets, topics currently being taught, prerequisites, and input on what new forecasting teaching material would be most useful. The Diversity team is also drafting a Strategic Plan for increasing diversity and inclusion in EFI and ecological forecasting more generally. Steps in this plan include: 1) identifying the current diversity status, 2) identifying the barriers, 3) identifying solutions and which solutions make sense to work on given the participants and networks currently in EFI, 4) identifying who else needs to be involved and making a plan to bring them in, and 5) forming collaborations and seeking funding to carry out the plan.

Fourth, there was interest at the EFI conference in supporting the development of an EFI student community. The EFI student group was launched in August and is working on developing a charter, forming a steering committee, and running a journal discussion group.

Working Groups are always open for new people to join. There are three more calls scheduled before the end of the year: Education on Dec 4, Social Science on Dec 16, and Partners on Dec 17, all at 1 pm US Eastern time. Polls will be sent out in mid-December to set recurring times for working group calls in Jan-May 2020. If you would like to join a working group and be included on any of the remaining calls, or if you wish to participate in the polls to set times for next year’s calls, email eco4cast.initiative@gmail.com.

In addition to responding to the ideas discussed at the EFI2019 conference, the EFI working groups are also involved in the planning process for the EFI Research Coordination Network (RCN). This NSF RCN funding was awarded after the EFI2019 meeting and ensures that EFI will continue to meet and grow over the next five years. The EFI RCN is also launching an open forecasting challenge using NEON data, the protocols for which will be finalized at the first RCN meeting, May 12-14, 2020, in Boulder, CO at NEON headquarters.

Other key products of the EFI2019 meeting are the meeting slides and videos. The overall meeting was recorded, and the individual keynote and lightning talks have been edited down and released on YouTube, the EFI webpage, and Twitter. In addition, EFI2019 participants suggested dropping EFI’s existing discussion board (which participants were encouraged to use as part of meeting prep) and replacing it with a Slack channel, which has seen substantially greater use. The EFI organizing committee is also close to finalizing an Organizing Principles and Procedures (OPP) document, which establishes the obligations and benefits of EFI membership and lays out the operations of the EFI Steering Committee and committee chair. The OPP is currently being reviewed by legal counsel, and we anticipate holding our first elections shortly after the new year.

Finally, we are happy to pass on that the NSF Science and Technology Center preproposal that was submitted shortly after the EFI2019 meeting has been selected for submission of a full center proposal in January.

Making Ecological Forecasts Operational: The Process Used by NOAA’s Satellite & Information Service

Date: November 18, 2019

Post by Christopher Brown; Oceanographer – NOAA

My last blog briefly described the general process whereby new technologies and products are identified from the multitude available, culled, and eventually transitioned to operations to meet NOAA’s and its users’ needs, and it offered some lessons learned when transitioning ecological forecasting products to operations, applications, and commercialization (R2X). In this blog, I introduce and briefly describe the steps in the R2X process used by NOAA’s Satellite & Information Service (NESDIS). NESDIS develops, generates, and distributes environmental satellite data and products for all NOAA line offices as well as for a wide range of federal government agencies, international users, state and local governments, and the general public. A considerable amount of planning and resources is required to develop and operationalize a product or service, and an orderly and well-defined review and approval process is required to manage the transition. The R2X process at NESDIS, managed by the Satellite Products and Services Review Board (SPSRB), is formal and is implemented to identify funds and effectively manage the life cycle of a satellite product or service from development to termination. It is a real-life example of how a science-based, operational agency transitions research to operations. A ‘broad brush’ description of the process is given here, yet it will hopefully provide insight into the major components of an R2X process that can be applied generally to the ecological forecasting (and other) communities. Details can be found in this SPSRB Process Paper.

The first step in the R2X process is acquiring a request for a new or improved product or service from an operational NOAA “user”. NESDIS considers requests from three sources: individual users, program or project managers, and scientific agencies. Individual users must be NOAA employees, so a relationship between a federal employee and other users, such as those from the public and private sectors, including academia and local, state, and tribal governments, must first be established. The request, submitted via a User Request Form similar to this one, must identify the need for and benefits of the new or improved product(s) and include the requirements, specifications, and other information needed to adequately describe the product and service. As an example, satellite-derived sea-surface temperature (SST), an operational product generated from several NOAA sensors, such as the heritage Advanced Very High Resolution Radiometer (AVHRR) and the current Visible Infrared Imaging Radiometer Suite (VIIRS), was requested by representatives from several NOAA offices.

If the SPSRB deems the request and its requirements valid and complete, the following six key steps are sequentially taken:

  1. Perform Technical Assessment
  2. Conduct Analysis of Alternatives
  3. Develop Project Plan
  4. Execute Product Lifecycle
  5. Approve for Operations, and
  6. Retire or Divest

These steps are depicted in Figure 1.

Figure 1. Key SPSRB process steps.  Credit: Process Paper, Satellite Products and Services Review Board, 2018, SPSRB Improvement Working Group, Ver. 17, Department of Commerce. NOAA/NESDIS, 23 July 2018, 29pp.

1. Perform Technical Assessment and Requirements Validation

A technical assessment is performed to determine whether the request is technically feasible and aligns with NOAA’s mission, and it provides management the opportunity to decide the best way to process the user request. For instance, suppose a user requests estimates of satellite-derived SST with a horizontal resolution of 1 meter every hour throughout the day for waters off the U.S. East Coast, to monitor the position of the Gulf Stream. Though the request does match a NOAA mandate, i.e. to provide information critical to transportation, the specifications of the request are currently not feasible from space-borne sensors, and the request would be rejected. On the other hand, a request for 1 km resolution twice a day for the same geographic coverage would be accepted, and the next step in the R2X process, the Analysis of Alternatives, would be initiated.
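
As a toy illustration of this kind of feasibility screen (not the SPSRB’s actual criteria), the sketch below encodes the SST example as a simple check of requested resolution and revisit frequency against what current sensors can plausibly deliver. The threshold values and names are assumptions chosen to match the narrative above.

```python
# Minimal, hypothetical sketch of a technical-assessment feasibility screen.
# Thresholds are illustrative assumptions, not actual SPSRB criteria.
from dataclasses import dataclass

@dataclass
class SSTRequest:
    spatial_resolution_km: float   # requested horizontal resolution
    revisits_per_day: float        # requested observation frequency

# Roughly what a current polar-orbiting sensor such as VIIRS can support
FEASIBLE_RESOLUTION_KM = 0.75      # ~750 m at nadir
FEASIBLE_REVISITS_PER_DAY = 2      # day/night overpasses

def technically_feasible(req: SSTRequest) -> bool:
    """Return True if the request is within assumed sensor capabilities."""
    return (req.spatial_resolution_km >= FEASIBLE_RESOLUTION_KM
            and req.revisits_per_day <= FEASIBLE_REVISITS_PER_DAY)

# The 1 m / hourly request fails the screen; the 1 km / twice-daily request
# passes and would proceed to the Analysis of Alternatives step.
print(technically_feasible(SSTRequest(0.001, 24)))  # False
print(technically_feasible(SSTRequest(1.0, 2)))     # True
```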

2. Conduct Analysis of Alternatives

An analysis of alternatives is performed to identify viable technical solutions and to select the most cost-effective approach for developing and implementing the requested product or service that satisfies the operational need. An Integrated Product Team (IPT), consisting of applied researchers, operational personnel, and users, is formed to complete this step. In the case of SST, this may involve weighing the use of data from one or more sensors to meet the user’s required frequency of estimates.

3. Develop Project Plan

The Project Plan describes specifically how the product will transition from research to operations to meet the user requirements, following an existing template. Project plans are updated annually. The plan consists of several important “interface processes”, which include:

  • Identifying resources to determine how the project will be funded. Various components of the product or service life cycle, from beginning to end, are defined and priced, e.g. product development, long-term maintenance, and archiving. Though the SPSRB has no funding authority, it typically recommends the appropriate internal NOAA source for funding, e.g. the Joint Polar Satellite System Program;
  • Inserting the requirements of the product and service into an observational requirements list database for monitoring and record keeping;
  • Adding the product and service into an observing systems architecture database to assess whether observations are available to validate products or services, as all operational products and services must be validated to ensure that required thresholds of error and uncertainty are met; and,
  • Establishing an archiving capability to robustly store (including data stewardship) and to enable data discovery and retrieval of the requested products and services.

4. Execute Product Lifecycle

Product development implements the approved technical solution in accordance with the defined product or service capability, requirements, cost, schedule, and performance parameters. Product development consists of three phases: development, pre-operational, and operational. In the development phase, the IPT uses the Project Plan as the basis for directing and tracking progress through several project stages. In the pre-operational phase, the IPT begins routine processing to test and validate the product, including limited beta testing by selected users. Importantly, user feedback is included in the process to help refine the product and ensure sufficient documentation and compatibility with requirements.

5. Approve for Operations

The NESDIS office responsible for operational generation of the product or service decides whether to transition it to operations. After approval by the office, the IPT prepares and presents a decision brief to the SPSRB for it to assess whether the project has met the user’s needs, the user is prepared to use the product, and the product can be supported operationally, e.g. whether the infrastructure and sufficient funding exist for monitoring, maintenance, and operational product or service generation and distribution. The project enters the operations stage once the SPSRB approves the product or service. If the user identifies a significant new requirement or a desired enhancement to an existing product, the user must submit a new user request.

6. Retire or Divest

If a product or service is no longer needed and can be terminated, or the responsibility for production can be divested or transferred to another organization, it enters the divestiture or retirement phase.

Each of NOAA’s five line offices, e.g. the Ocean Service, Weather Service, and Fisheries Service, has its own R2X process that differs in one way or another from that of NESDIS. Even within NESDIS, if a project has external funding, it may not engage the SPSRB. Furthermore, the process may be updated if conditions justify it, such as when additional criteria are introduced by the administration. The process will, however, generally follow the major steps involved in the R2X process: user request, project plan, product/service development, implementation, product testing and evaluation, operationalization, and finally termination.

Acknowledgment: I thank John Sapper, David Donahue, and Michael Dietze for offering valuable suggestions that substantially improved an earlier version of this blog.