EFI Book Club Suggestions & Action Items

November 27, 2023

The following document was created by the DEI Working Group in October 2023 in response to the final discussion of the book club group that read “97 Things About Ethics Everyone in Data Science Should Know”.

Goals of this document:

  1. Provide a summary of action items suggested during book clubs. The summary is a living document kept on the EFI Book Club webpage, and action items will be added from future book clubs. This document is to be used as a reference to support and inspire activities for EFI that can be developed or led as individuals have the time, availability, and interest to do so.
  2. Provide an opportunity to collate and celebrate activities, past and planned, that have been inspired by or align with suggestions from the book clubs.

Books Read

Note: There was a group within EFI that read Spatio-Temporal Statistics with R. That book group was not led by the DEI working group and was not included in this assessment.

Have we done any of the stuff we planned to do?

  1. From 97 Things About Ethics
    1. Summarize ideas we’ve had in previous book clubs and do self-assessment
      1. This document!
  2. From Braiding Sweetgrass
    1. Gratitude statement regarding data as a gift from the community, posted in:
      1. Newsletter
      2. Book club page
    2. The book club included student participants from the Sustainable Land and Water Resources (SLAWR) REU which provided opportunities for discussions across career stages and sectors
  3. From Weapons of Math Destruction
    1. The JEDI (justice, equity, diversity, inclusion) database assessing who is in EFI is a good place to start contributing to the self-assessment
      1. Dave Klinges (University of Florida) has worked with EFI Community Manager, Jody Peters (University of Notre Dame) to develop a workflow to assess the demographics of EFI members and how that changes through time
  4. From The Years That Matter Most and from Caste
    1. DEI Workshop ideas – webinar series leading up to workshops. Whose voices do we want to hear ahead of time?
    2. Culturally competent training
      1. In late 2020/early 2021 the EFI DEI working group hosted 4 panels for the “Conversations on Diversity and Data Seminar Series.” See the links below for the recordings for each panel and the resources shared.
      2. What do diverse students need to thrive in data- and computer-heavy settings? November 19, 2020
      3. What does culturally competent data science training look like? January 20, 2021
      4. How can 4-year data-driven institutions and programs support students transferring from community colleges? February 19, 2021 
      5. What kind of online tools are useful and work well for remote learning and teaching? March 25, 2021 
  5. From Caste:
    1. Mentorship networks and having a series of mentors
      1. EFISA co-working session
      2. Developing a mentoring network for minority students is one of the goals of the Alfred P. Sloan Foundation Equitable Pathways funding from the project “A Generation of Native American Earth and Environmental Data Scientists” that EFI is involved with
        1. Still working on developing this, but the idea is for BIPOC students in research to have mentors from their own institution as well as mentors from the institutions they are doing research with, and for those mentors to continue supporting the students beyond the original activity
  6. From the Years That Matter Most
    1. Train the trainers
      1. This is something that continues to be included in EFI-led proposals. Freya Olsson is also thinking about updating her NEON Forecasting Challenge workshop material, which is currently geared towards participants who want to learn how to submit forecasts to the Challenge, so that it is applicable for people who want to teach others how to submit to the Forecasting Challenge
      2. On the October 2023 Education working group call, the group discussed the following two papers. The first is about the need for instructor training for people teaching data science at the collegiate level. The second provides papers, discussion prompts, and R code materials that can be used or modified for teaching ecological forecasting at the upper undergraduate/early graduate student level
        1. Emery et al 2021: Instructor Training Needs for Data Science
        2. Ernest et al 2023: Ecological Forecasting & Dynamics Course
    2. Highlight the #jobs and #funding Slack channels in the newsletter and in the welcome letter for new EFI members.

Do any of the ideas overlap between books? 

  1. The Years That Matter Most, Caste, and 97 Things: mentorship, supporting involvement, inclusive trainings
    1. Ongoing EFI efforts with potential overlap:
      1. EFISA efforts to share grant applications
  2. Braiding Sweetgrass, Weapons of Math Destruction, and 97 Things: addressing unintended consequences of forecasts and taking time for self-reflection
    1. Ongoing EFI efforts with potential overlap:
      1. Examining EFI efforts from the lens of Design Justice Principles
      2. Social scientist-ecologist matchmaking

A full list of the notes and recommendations compiled from the book clubs can be found in the living document on the EFI Book Club webpage.

Reenvisioning EFI-RCN NEON Forecast Challenge Dashboard Visualization

August 22, 2023

Melissa Kenney1, Michael Gerst2, Toni Viskari3, Austin Delaney4, Freya Olsson4, Carl Boettiger5, Quinn Thomas4

1University of Minnesota, 2University of Maryland, 3Finnish Meteorological Institute, 4Virginia Tech, 5University of California, Berkeley

With the growth of the EFI NEON Ecological Forecasting Challenge, we have outgrown the current Challenge Dashboard, which was designed to accommodate a smaller set of forecasts and synthesis questions. Thus, we have reenvisioned the next stage of the EFI-RCN NEON Forecast Challenge Dashboard in order to facilitate the ability to answer a wider range of questions that forecast challenge users would be interested in exploring. 

The main audiences for this dashboard are NEON forecasters, EFI, Forecast Synthesizers, and students in classes or teams participating in the Forecast Challenge. Given this audience, we have identified 3 different dashboard elements that will be important to include: 

  1. forecast synthesis overview, 
  2. summary metrics about the Forecast challenge, and 
  3. self-diagnostic platform.

During the June 2023 Unconference in Boulder, our team focused on scoping all three dashboard elements and prototyping the forecast synthesis overview. The objective of the synthesis overview visual platform is to support community learning and emergent theory development. Thus, the synthesis visualizations aim to create a low barrier to entry for multi-model exploration: understanding model performance, identifying model characteristics that lead to stronger performance, identifying which locations or ecosystems are more predictable, and assessing how far into the future forecasts remain valid. 

You can view the prototype developed during the meeting HERE and in Figures 1 and 2. 

Figure 1. Static image of an interactive map of aggregate forecast skill relative to climatology at each forecasted site, here showing the water temperature forecasts for the aquatics theme. Bubble colour represents the continuous ranked probability score (CRPS) skill relative to climatology, with positive values (blues) showing that submitted models on average perform better than climatology and negative values (reds) showing that submitted models perform worse. The size of the bubble represents the percentage of submitted models that outperformed the climatology null (i.e., larger bubbles have a higher percentage of skilled models). When hovered over, the bubbles show this percentage (perc_skilled), the site type (field_site_subtype), and the total number of models forecasting at that site (n_mod). 

Figure 2. a) Percentage of submitted models classed as ‘skillful’ (outperforming the null climatology forecast based on the continuous ranked probability score metric) at the river (n=27) and lake (n=6) sites for water temperature forecasts at each horizon from 1 to 30 days ahead. b) Percentage of submitted models classed as ‘skillful’ for water temperature forecasts at six of the lake sites (https://www.neonscience.org/field-sites/explore-field-sites). 

Developing these graphics requires aggregation of skill scores. There is a multitude of metrics that can be used to calculate a skill score, each with its own benefits and flaws. Thus, there should be multiple skill scores for different metrics, with clear presentation of which metric is used in a given visualization. Additionally, in order to isolate which sites are more interesting from a model development perspective, there needs to be a comparison of how many models meet a baseline skill score at a given site over a chosen time frame. That allows challenge areas to be isolated and also makes it easy to see which models succeed in situations where others struggle. For better future analysis of how models perform at certain sites, we also envisage the visualization including the skill scores for the relevant drivers (NOAA weather) for comparison. For example, if we see a drop in skill across models in water temperature projections after some time, there should be a direct method to assess whether this reflects flawed model dynamics or whether the weather forecast driving the water temperature loses its reliability. This also allows the user to approximate the maximum horizon over which the model performance analysis is useful at all.
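
As a rough illustration of the kind of aggregation behind Figure 1 (a sketch only, with a made-up scores table and hypothetical column names, not the dashboard's actual code), the per-site relative skill and percentage of skilled models could be computed along these lines:

```r
library(dplyr)

# Toy stand-in for a table of scored forecasts; the real scores table differs.
set.seed(1)
scores <- data.frame(
  site_id          = rep(c("BARC", "CRAM"), each = 6),
  model_id         = rep(c("modA", "modB", "modC"), times = 4),
  crps             = runif(12, 0.2, 1.0),   # model CRPS for each forecast
  crps_climatology = runif(12, 0.4, 1.0)    # CRPS of the climatology null
)

site_summary <- scores %>%
  group_by(site_id, model_id) %>%
  summarise(mean_crps = mean(crps),
            mean_null = mean(crps_climatology), .groups = "drop") %>%
  mutate(skill = mean_null - mean_crps) %>%        # > 0: model beats climatology
  group_by(site_id) %>%
  summarise(n_mod        = n(),                    # models forecasting the site
            mean_skill   = mean(skill),            # bubble colour in Figure 1
            perc_skilled = 100 * mean(skill > 0))  # bubble size in Figure 1

site_summary
```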

In addition to the main synthesis overview, the goal of this platform is to support exploration of synthesis data. For all themes, there was general agreement that it would be useful to pull up, at a glance, site characteristics, a photo, and basic summary statistics about the number of models and model performance. 

During the meeting, we worked with the Aquatics and Beetles Challenge teams to identify some of the key data aggregation groupings that will be important to facilitate exploration. One important distinction arose during the conversations: the baseline model, time scale, and data latency differ between themes. For Aquatics, there is a long time series of data from which to create a climatology, and data are provided relatively quickly via data loggers. For Beetles, there is a different null baseline model because the length of the historical record differs at each site, and it takes a year to produce the beetle abundance and richness assessments. There was also a desire for specific types of synthesis visualizations, including the species accumulation curve over years, the 3-year running average, and an indication of the lower and upper bounds of a particular variable (use in scale). Thus, for both Beetles and Aquatics there are similarities and differences in the types of groupings that would be most useful to support synthesis exploration. 

Table 1. Different data groupings that would be useful to facilitate easy-to-develop synthesis visualizations of the EFI-NEON Forecast Challenge models to facilitate learning and community theory development.

Groupings (columns: All Themes; Aquatics; Beetles)

Team / Challenge. All Themes: theme, site, model ID, customized classroom or team groupings. Aquatics: particular variables (e.g., DO) within a theme.
Spatial / Ecosystems. All Themes: sites, NEON domains, site type (river, stream, lake…), altitude (high vs lowlands). Theme-specific: sites by distance, dominant NLCD classification.
Temporal Scale. All Themes: average for past year, seasonal groupings. Aquatics: 1 day, 5 days, 7 days, 15 days, 30 days. Beetles: 14 days, growing season, multi-year (up to 5 year) forecasts.
Models. All Themes: best model at each site, model inputs, model structure, functional type, output uncertainty representation. Theme-specific: model run time, model computational requirements.
Skill Scoring. All Themes: current skill forecast approaches, better than climatology/null baseline, comparison of your model to the best forecast.
Other Features. All Themes: environmental variables and weather forecast observations. Aquatics: comparison with weather/climate forecast skill. Beetles: disturbance events (e.g., wildfire), growing season dates at each site, site disturbance characteristics (e.g., mowing, fencing).

In addition to the synthesis overview, there are two complementary and linked platforms that will make up the rest of the dashboard. First, the objective of the forecast challenge overview is to provide a basic summary of metrics related to the overall EFI NEON Ecological Forecasting Challenge. Specifically, the metrics that would be included are: number of forecasts submitted, number of unique teams, percentage (or median of all) of models that are better than climatology or a null model per theme, and total forecast and observation pairs.

Second, the objective of the self-diagnostic platform is to provide an overview of individual or team forecast contributions and performance. The types of summaries that will be provided for forecasters are: confirmation of forecast submission, date of the most recent forecast submitted for a model, model performance relative to climatology or a null model, model predictions versus observations, model performance versus other selected models, and model skill over a specific time horizon (to assess whether it performs better over time).

Overall, the goal of the re-envisioned visual dashboard is to create platforms that will allow us to track challenge engagement, to diagnose, individually or as a team, any model submission problems and performance improvement opportunities, and to support community theory development through synthesis across the range of models submitted through the EFI NEON Ecological Forecasting Challenge. Long-term, if this platform structure proves useful and robust, it could be applied to other systems where there are multi-model predictions and a desire to collaboratively learn together to improve our theoretical understanding and forecasts to support decision-making.

We are looking for input from the EFI community on the synthesis dashboard for other themes, and we would like to discuss with individuals what synthesis would be most relevant to phenology, terrestrial, and ticks forecasters. Reach out to info@ecoforecast.org to share your thoughts or to let us know you would like to join future conversations about updating the dashboard. 

Collaborative Innovation and Skill-building at the 2023 Unconference: Empowering Ecological Forecasting with NEON Data

The 2023 EFI Unconference, hosted by the Ecological Forecasting Initiative Research Coordination Network (EFI RCN) and supported by the National Science Foundation, brought together 45 passionate individuals at the National Ecological Observatory Network (NEON) headquarters in Boulder, Colorado on June 21-23, 2023 to work on a diverse range of projects that were nominated and selected by the participants. With a focus on collaborative problem-solving, the Unconference fostered a unique environment for participants to exchange knowledge, generate new approaches, and advance the field of ecological forecasting.

In addition to project development, activities included a warm welcome from Kate Thibault, NEON Science Lead; icebreaker activities, expertly facilitated by Cayelan Carey from Virginia Tech, that helped participants connect and form meaningful relationships; a tour of NEON facilities; and a poster session and social hour where participants showcased their research and projects. Through these activities, Unconference participants and NEON staff were able to engage with one another, exchange feedback, and forge new collaborations.

To ensure a productive and focused Unconference, participants engaged in a review of project ideas and subsequent project selection. This process allowed attendees to propose projects aligned with their interests and expertise and fostered a sense of ownership and investment in the outcomes. Ten project groups developed out of the 24 that were initially proposed as part of the pre-meeting preparation.

Summaries provided by each project working group are listed below. Some groups will provide additional details in forthcoming blog posts, so be sure to watch for those future posts.

This was the first in-person EFI event since 2019 and it was absolutely lovely to be in the same room to meet new people and to see in-person people we had only seen on Zoom before.  We appreciate the Unconference participants’ willingness to share their time, talents, and perspectives.  As you will read below, there were a number of accomplishments over the three days of the meeting and we look forward to seeing future outcomes from what was developed at the Unconference!

Unconference participants. Photo courtesy Quinn Thomas

List of Projects

Spatially Explicit Forecasting

Participants: John Smith, David Durden, Emma Mendelsohn, Carl Boettiger

To date, the NEON Ecological Forecasting Challenge has been focused on generating near-term forecasts for specific sites. However, many interesting ecological phenomena occur across both time and space. At the EFI 2023 Unconference, our group prototyped a forecasting challenge that is also spatially explicit. For our prototype forecasting challenge, we focused on Leaf Area Index (LAI) recovery in post-burn areas. Our focal sites so far include the California August Complex fire and the Colorado East Troublesome fire. Our work at the Unconference focused on building cyberinfrastructure to ingest and aggregate data, build target files, assess models using proper scoring rules, and build baseline climatological forecasts. Current progress, including an example notebook and a detailed workflow diagram, is available on GitHub: https://github.com/eco4cast/modis-lai-forecast/. Current and future work includes building additional baseline models, setting up a submission portal using GitHub Actions, and integrating additional sites to cover a variety of ecoclimatic domains.
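
As a rough sketch of what a climatological baseline looks like in this setting (made-up numbers and simplifications; the actual workflow in the GitHub repository differs), each 8-day LAI composite can be forecast as the mean and standard deviation of the same period in prior years and scored with the closed-form CRPS for a normal distribution:

```r
# Toy historical LAI for one pixel: year, 8-day composite start day, and LAI.
set.seed(1)
hist_lai <- data.frame(
  year = rep(2018:2022, each = 46),
  doy  = rep(seq(1, 361, by = 8), times = 5),
  lai  = pmax(0, rnorm(230, mean = 2, sd = 0.5))
)

# Climatology baseline: mean and sd of LAI for each composite period
climatology <- aggregate(lai ~ doy, data = hist_lai,
                         FUN = function(x) c(mu = mean(x), sd = sd(x)))
climatology <- do.call(data.frame, climatology)
names(climatology) <- c("doy", "mu", "sd")

# Closed-form CRPS for a Gaussian forecast (a proper scoring rule)
crps_gaussian <- function(y, mu, sd) {
  z <- (y - mu) / sd
  sd * (z * (2 * pnorm(z) - 1) + 2 * dnorm(z) - 1 / sqrt(pi))
}

# Score the baseline against a new year's observations (toy values here)
obs    <- data.frame(doy = climatology$doy,
                     lai = pmax(0, rnorm(nrow(climatology), 2, 0.5)))
scored <- merge(obs, climatology, by = "doy")
mean(crps_gaussian(scored$lai, scored$mu, scored$sd))
```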

Go back to the list of projects.

Forecast uncertainty

Participants: Noam Ross, Eli Horner, Ashley Bonner, Mike Dietze, Chris Jones

Interest in and use of ecological forecasting have increased in recent years, due in large part to the efforts of EFI, including the NEON Ecological Forecasting Challenge. However, only a small percentage of published ecological forecasts have fully quantified and partitioned their forecast uncertainties. Quantifying and validating model predictions and uncertainties allows us to understand the degree of uncertainty in forecasts and how well we understand the underlying ecological systems (our ability to predict them). Partitioning forecast uncertainties allows data collection efforts to be focused where they could lead to improved model performance and reduced uncertainty. Our group worked toward creating a tutorial on how to quantify and partition forecast uncertainties and validate model predictions with uncertainty, using the NEON Phenology Forecasting Challenge. We are using an ARIMA model and a random forest model as examples. During the Unconference we were able to get both models working and partition their uncertainties. We are finishing up the code base and tutorial, and discussing the challenges each type of model poses for uncertainty quantification and partitioning.
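
As a toy illustration of the general idea (this is not the tutorial code; the model and numbers below are invented for clarity), one common approach is to run a Monte Carlo forecast with one source of uncertainty switched on at a time and compare the forecast variance at each horizon:

```r
# Simple AR(1)-style forecast with four uncertainty sources that can be toggled
forecast_ensemble <- function(n_ens = 500, horizon = 30,
                              ic = TRUE, param = TRUE, driver = TRUE, process = TRUE) {
  out <- matrix(NA_real_, n_ens, horizon)
  for (i in seq_len(n_ens)) {
    x    <- 15  + if (ic)    rnorm(1, 0, 0.5)  else 0   # initial condition
    beta <- 0.8 + if (param) rnorm(1, 0, 0.05) else 0   # model parameter
    for (t in seq_len(horizon)) {
      drv <- 3 + if (driver) rnorm(1, 0, 1) else 0      # e.g., weather driver
      x   <- beta * x + 0.2 * drv + if (process) rnorm(1, 0, 0.3) else 0
      out[i, t] <- x
    }
  }
  apply(out, 2, var)   # forecast variance at each horizon
}

set.seed(1)
v_ic      <- forecast_ensemble(param = FALSE, driver = FALSE, process = FALSE)
v_param   <- forecast_ensemble(ic = FALSE, driver = FALSE, process = FALSE)
v_driver  <- forecast_ensemble(ic = FALSE, param = FALSE, process = FALSE)
v_process <- forecast_ensemble(ic = FALSE, param = FALSE, driver = FALSE)

# Approximate proportion of total forecast variance from each source by horizon
prop <- rbind(ic = v_ic, param = v_param, driver = v_driver, process = v_process)
prop <- sweep(prop, 2, colSums(prop), "/")
round(prop[, c(1, 10, 30)], 2)   # e.g., 1, 10, and 30 days ahead
```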

Go back to the list of projects.

Forecasting Impacts: Measuring the Current and Future Impacts of EFI

Participants: Rebecca Finger-Higgens, Jessica Burnett, Alexis O’Callahan, Ayanna St. Rose

It turns out getting-to-know-you style icebreakers can provide more than just a few new friends; they can also reveal group priorities and motivations for coming together in the first place. On a sunny morning at the NEON Headquarters in Boulder, CO, Cayelan Carey (Virginia Tech) asked the group of EFI Unconference participants to organize themselves based on whether they individually felt that the goal of forecasts was understanding or decision making. As the participants shuffled around and considered the question before them, the final arrangement revealed that the group skewed toward wanting forecasts to inform decision making rather than to broaden the understanding of ecological systems. However, the ability of ecological forecasts to effectively inform decision making has not been clearly measured. Besides directly impacting decision-making processes, how do we, as a grassroots organization, recognize and measure the other societal impacts that EFI might be creating, or be capable of creating?

This led our group to think through ways that EFI could measure impacts, and to ask: what are the impact goals and achievements of EFI, what does the community want out of EFI, and what is the best way to measure these often hard-to-measure metrics? Using five categories of societal impacts (instrumental applications, connectivity impacts, conceptual impacts, capacity building, and socio-ecological impacts), we developed a poll for Unconference participants to assess the priorities and current thoughts of this representative group. The poll results suggest that EFI community goals emphasize conceptual impacts (i.e., improving ecological understanding), connectivity impacts (i.e., maintaining and developing community and partnerships), and instrumental applications (i.e., applications for decision making). We also found that EFI has made the greatest advancements in capacity building (i.e., curriculum development, short courses), conceptual impacts (i.e., working groups), and connectivity impacts (i.e., newsletters and conference sessions). These findings have allowed us to identify a space for creating a concrete link between the connectivity of forecasting and the desired application outcomes of the group, and to develop a number of recommendations for the steering committee and the EFI community. Some of these recommendations include focusing on whether, how, and why a forecast product achieves one of the five predefined societal benefits. Together, we hope to continue to build on the vision statement of EFI to build forecasts to understand, manage, and conserve ecosystems in a measurable and remarkable way.

Figure 1: Word cloud generated from Unconference participant responses to the question “describe the potential, importance, or value of the community of EFI”.

Go back to the list of projects.

Reenvisioning the NEON Ecological Forecasting Challenge Dashboard Visualization

Participants: Melissa Kenney, Michael Gerst, Toni Viskari, Austin Delaney, Freya Olsson, Carl Boettiger, Quinn Thomas

With the growth of the NEON Ecological Forecasting Challenge, we have outgrown the current Challenge Dashboard, which was designed to accommodate a smaller set of forecasts and synthesis questions. Thus, we have reenvisioned the next stage of the NEON Forecast Challenge Dashboard in order to facilitate the ability to answer a wider range of questions that forecast challenge users would be interested in exploring. The main audiences for this dashboard are NEON forecasters, the EFI community, Forecast Synthesizers, and students in classes or teams participating in the NEON Ecological Forecasting Challenge. Given this audience, we have identified 3 different dashboard elements that will be important to include: 

  1. forecast synthesis overview, 
  2. summary metrics about the Forecast Challenge, and 
  3. self-diagnostic platform.

To learn more about the dashboard redesign approach, see the prototypes here. Find more details about this project in this blog post.

Go back to the list of projects.

Transporting Models Between NEON and non-NEON Systems

Participants: Brendan Allison, Olufemi Fatunsin, & Jeff Mintz

A community of practice is increasingly active in developing models and forecasts for NEON sites. We asked: how can we take models trained on NEON data and refine them for use in another context? Similarly, how can we take models trained on non-NEON data and refine them on NEON data? This goal of transplanting models can empower a range of applications, including local field studies, adaptive management, and data fusion from multiple monitoring networks, enabling greater statistical power for big ecological questions. Whether transporting a model to or from NEON, the challenges are effectively the same. These include unbalanced data, different monitoring protocols, different predictors, and different site selection criteria. To focus our efforts, we picked the particular case study of bringing together NEON vegetation survey data with similar datasets generated under the Forest Inventory and Analysis (FIA) program. Our first product was a Bayesian multilevel model with the capacity to scale up to the integration of multiple continental or global-scale monitoring networks, or to shrink to the job of predicting outcomes at a single site, while remaining informed by a shared global layer. With this case study in mind, we have been building a codebase for processing the relevant NEON and FIA forestry data and for jointly modeling residual error across monitoring systems in Stan, a popular probabilistic programming language. Find more details about this project in this blog post.
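
Our actual implementation is in Stan; purely as a sketch of the model structure (placeholder variable names and simulated data, not the real NEON or FIA columns), something analogous can be written with nlme, where a shared fixed effect plays the role of the global layer, sites get their own intercepts, and each monitoring network gets its own residual variance:

```r
library(nlme)

# Toy combined plot table standing in for harmonized NEON + FIA data
set.seed(1)
plots <- data.frame(
  network   = rep(c("NEON", "FIA"), times = c(60, 140)),
  site      = c(sample(paste0("neon_", 1:6), 60, replace = TRUE),
                sample(paste0("fia_", 1:20), 140, replace = TRUE)),
  stand_age = runif(200, 5, 150)
)
plots$basal_area <- 5 + 0.3 * plots$stand_age +
  rnorm(200, 0, ifelse(plots$network == "NEON", 6, 10))

fit <- lme(
  fixed   = basal_area ~ stand_age + network,   # shared "global" relationship
  random  = ~ 1 | site,                         # site-level intercepts
  weights = varIdent(form = ~ 1 | network),     # separate residual variance per network
  data    = plots
)
summary(fit)
```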

Go back to the list of projects.

ML-based Uncertainty in the NEON Ecological Forecasting Challenge

Participants: Marcus Lapeyrolerie, Caleb Robbins

How can machine learning (ML) provide a solution to estimating forecast uncertainty across the NEON Ecological Forecasting Challenge? We generated a proof-of-concept workflow combining two machine learning approaches to make probabilistic forecasts. Random forests were used to learn relationships between forecast challenge variables and past NOAA weather data and to make predictions. While these models were able to make forecasts that approximate the future target time series well, they were not implemented to provide estimates of uncertainty. We explored how we could use past data along with these deterministic forecasts to generate probabilistic forecasts. Our approach was to train another machine learning model to make probabilistic forecasts of the residual errors from the previous random forest models. We then used these predicted residual errors to modify the random forest-based forecasts. This combined approach holds potential because it could be used in a plug-and-play manner, where this method could correct the deterministic (or even probabilistic) forecasts from any model to account for temporal trends in the residual error and provide uncertainty estimates. In our next steps, we will work on creating an automated workflow to generate residual error forecasts for the Eco4Cast challenge.  
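
A stripped-down sketch of this two-stage idea (toy data and hypothetical variable names; our actual workflow differs in its details) might look like the following:

```r
library(randomForest)

# Toy data with invented drivers; the workflow above uses NOAA weather
# forecasts and the Challenge target variables instead.
set.seed(42)
n   <- 500
dat <- data.frame(air_temp = rnorm(n, 15, 5),
                  doy      = sample(1:365, n, replace = TRUE))
dat$water_temp <- 5 + 0.7 * dat$air_temp +
  3 * sin(2 * pi * dat$doy / 365) + rnorm(n, 0, 1.5)
train <- dat[1:400, ]
test  <- dat[401:500, ]

# Stage 1: deterministic random forest forecast of the target variable
rf_mean     <- randomForest(water_temp ~ air_temp + doy, data = train)
train$resid <- train$water_temp - predict(rf_mean)   # out-of-bag residuals

# Stage 2: a second model learns how large the stage-1 errors tend to be under
# similar conditions; its square root is used as a forecast standard deviation
# (a Gaussian assumption made only for this sketch)
rf_var <- randomForest(I(resid^2) ~ air_temp + doy, data = train)

mu        <- predict(rf_mean, newdata = test)
sigma_hat <- sqrt(pmax(predict(rf_var, newdata = test), 1e-6))

# Probabilistic forecast: 95% predictive interval and its empirical coverage
covered <- test$water_temp > mu - 1.96 * sigma_hat &
           test$water_temp < mu + 1.96 * sigma_hat
mean(covered)
```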

Go back to the list of projects.

Forecasting Ground Beetles: Avoiding Pitfalls

Participants: Eric Sokol, Glenda Wardle, Vihanga Gunadasa, Juniper Simonis, Alison Gerken, Meghan Beatty, Khum Thapa-Magar

Ground beetles are a versatile group with which to measure biodiversity, yet they lag behind other EFI NEON Ecological Forecasting Challenge themes in terms of forecasts and models. Our group at the Unconference wanted to figure out why forecasters were not submitting to the NEON Ecological Forecasting Challenge Beetle Communities Theme and how we could remove barriers to increase forecast submissions. We created a tutorial (in progress) that describes general goals for forecasting ecological communities, a how-to on submitting a forecast, some of the challenges in forecasting ecological community data, and examples of forecasts people might submit to begin to address those challenges. We first reviewed the underlying data structure of the pre-made targets file that had been developed for the forecasting challenge. We then combined currently available code for a null model, an ARIMA model, and an available tutorial for working with data from the Aquatics Challenge into a workable tutorial to prepare and submit forecasts to the Beetle Challenge. Our goal is to finalize the tutorial by adding a random walk model and more detail on how to add additional covariates to the model, including climate variables. We are also designing a new targets file that has different variables of interest at finer spatial scales within a given NEON site (e.g., plot or habitat information, survey effort). The beetle community data provide an example of when patterns in non-continuous or seasonal data may be poorly captured by a simple model (e.g., ARIMA). When there is latency or there are gaps in the data, more data processing is often required than when using continuous sensor-captured data. Knowing the experimental design is also critical for designing a model that builds understanding. We hope that this tutorial increases overall interest in submitting forecasts to the beetle forecasting challenge and removes barriers that may prevent forecasters at all levels from submitting. Further information and development on community ecology and biodiversity data is critical for understanding many different biological systems, can help researchers broaden their understanding of how and why communities change over time, and can better provide decision-making tools for ecosystem monitoring.
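
To make the flavor of those baselines concrete, here is a minimal sketch of a climatological null and a random walk forecast for annual richness data (toy numbers and illustrative column names, not the actual targets file):

```r
# Toy annual richness observations at two sites
targets <- data.frame(
  site_id  = rep(c("SiteA", "SiteB"), each = 5),
  year     = rep(2018:2022, times = 2),
  richness = c(12, 15, 11, 14, 13, 22, 19, 25, 21, 23)
)

# Null (climatology-style) model: forecast next year's richness at each site as
# the historical mean, with uncertainty from the historical standard deviation
null_fc <- aggregate(richness ~ site_id, data = targets,
                     FUN = function(x) c(mu = mean(x), sd = sd(x)))
null_fc <- do.call(data.frame, null_fc)
names(null_fc) <- c("site_id", "mu", "sd")

# Random walk model: carry the most recent observation forward, with
# uncertainty from the year-to-year changes observed at the site
rw_fc <- do.call(rbind, lapply(split(targets, targets$site_id), function(d) {
  d <- d[order(d$year), ]
  data.frame(site_id = d$site_id[1],
             mu      = tail(d$richness, 1),
             sd      = sd(diff(d$richness)))
}))

null_fc
rw_fc
```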

Go back to the list of projects.

Towards Principles for Designing Inclusive Ecological Forecasts

Participants: Anna Sjodin, Mary Lofton, Sean Dorr, Jody Peters, Jason McLachlan, Cazimir Kowalski, Melissa Kenney, Katie Jones

Our group is interested in exploring opportunities for improving inclusivity in ecological forecasting. Through discussion, we identified the ten principles of Design Justice (Box. 1) as a potential mechanism for evaluating the inclusivity of forecast products, services, and systems.

Box 1: Design Justice Network Principles, reproduced from https://designjustice.org/read-the-principles, license CC BY-ND 4.0.
To learn more, please visit the Design Justice Network website (https://designjustice.org) or see Design Justice: Community-Led Practices to Build the Worlds We Need by Sasha Costanza-Chock (https://designjustice.mitpress.mit.edu/; an open access pdf version is available here: https://library.oapen.org/bitstream/handle/20.500.12657/43542/1/external_content.pdf).

1. We use design to sustain, heal, and empower our communities, as well as to seek liberation from exploitative and oppressive systems.
2. We center the voices of those who are directly impacted by the outcomes of the design process.
3. We prioritize design’s impact on the community over the intentions of the designer.
4. We view change as emergent from an accountable, accessible, and collaborative process, rather than as a point at the end of a process.
5. We see the role of the designer as a facilitator rather than an expert.
6. We believe that everyone is an expert based on their own lived experience, and that we all have unique and brilliant contributions to bring to a design process.
7. We share design knowledge and tools with our communities.
8. We work towards sustainable, community-led and -controlled outcomes.
9. We work towards non-exploitative solutions that reconnect us to the earth and to each other.
10. Before seeking new design solutions, we look for what is already working at the community level. We honor and uplift traditional, indigenous, and local knowledge and practices.

As a first step towards applying the Design Justice principles to EFI-created products, services, and systems, our team evaluated to what degree the ten principles were evident in the design of the NEON Ecological Forecasting Challenge. We identified several ways in which the design of the Challenge was well-aligned with Design Justice Principles (e.g., Principle 4: We view change as emergent from an accountable, accessible, and collaborative process, rather than as a point at the end of a process.), as well as areas in which we thought we could improve (e.g., Principle 3: We prioritize design’s impact on the community over the intentions of the designer). 

Moving forward, we are soliciting broader participation from all EFI community members in small focus groups to continue our internal evaluation of the inclusivity of current EFI products, with the ultimate goal of furthering the inclusivity of ecological forecasting by developing recommendations towards a more complete alignment of EFI-designed products with design justice principles. If you are interested in participating in such a focus group, please provide your contact information in the Google Form linked here.

Go back to the list of projects.

A proactive step toward decision-ready forecasts: Fusing iterative, near-term ecological forecasting and adaptive management

Participants: Jaime Ashander, LM Bradley, Mark Buckner, Nathan Byer, Cayelan Carey, Michael Gerst

This group aimed to improve the conceptual tools for co-production of ecological forecasts that aid in decision making. We identified that there is a need for tighter conceptual integration of the iterative, near-term ecological forecasting cycle (as practiced by the EFI community) with the adaptive management cycle (as practiced by communities of natural resource managers) and the broader context for management decisions. While prior frameworks have treated the iterative, near-term forecasting and adaptive management cycles as independent, with limited points of contact, a careful fusion of these processes may increase conceptual utility for co-production. As a first step towards a more useful framework, we then located iterative, near-term forecasting activities within the management decision making process, using the PrOACT (Problem, Objectives, Alternative Actions, Consequences, and Tradeoffs) tool from structured decision making. After creating this draft version of a framework, we explored several targeted case studies in ecological forecasting and adaptive management to evaluate its efficacy as a tool for fusing forecasting and adaptive management efforts. We will continue meeting to develop these ideas and work towards a manuscript.

Go back to the list of projects.

Disease Forecasting

Participants: Janet Agbaje, Kayode Oshinubi, Ethan Deyle (and thanks Ayanna St. Rose!)

Developing models to understand the transmission of pathogens in disease ecology is critical to understanding the spread of diseases and how to prevent them. Model studies are relied on to simulate the spread of disease and to predict the effectiveness of different control strategies. Model forecasting is also critical, both for planning and enacting public health interventions and for building our understanding of the sometimes complex drivers of disease dynamics across space and time. Vector-transmitted diseases (e.g., mosquito- or tick-borne) represent an exceptionally difficult case, since key processes affecting spread and transmission are not directly reflected in typical public health monitoring; for example, the presence and behavior of the vector species themselves are not captured, and often there are infection reservoirs in wildlife populations as well. In this way, connecting ecological forecasts to human epidemiological forecasts is an important challenge to tackle.

The NEON Ecological Forecasting Challenge has already included a tick forecast challenge, although it has not yet been tied directly to tick pathogen status monitoring or human health. In this project, we worked on West Nile Virus (WNV), a mosquito-borne disease in the family of flaviviruses. The primary hosts are birds (across a wide range of species), while humans are a dead-end host. WNV is commonly spread, especially in the summer, through mosquito bites. Our goal at the EFI Unconference was to examine the opportunities that NEON data could provide to create impactful forecasts for the public’s health from vector-borne diseases, focusing on WNV. Especially since humans are a dead-end host, understanding and forecasting the disease dynamics demands both ecological and human data. We intend to forecast mosquito abundance as well as the infection rate in humans over time, incorporating mosquito abundance, seasonality, drivers, and co-occurring bird abundances.

We built a preliminary bridge that connects National Ecological Observatory Network (NEON) and Centers for Disease Control and Prevention (CDC) data and, through preliminary visualization, demonstrated the potential to match the NEON mosquito data (abundance and pathogen status) with CDC-reported human cases on a year-by-year and county-by-county level for 14 NEON sites located in counties with reported cases of WNV. A first look at the collected data set showed a relationship between NEON bird and mosquito abundance that suggests large bird presence is one driving condition of large mosquito abundance in a summer sampling season. Given the relative rarity of WNV compared to some other vector-borne illnesses like Lyme disease, there are definitely some challenges to setting up a forecasting challenge for the full disease dynamics, although we may be able to cast a wider net for human cases in counties adjacent to NEON site counties. We’re excited to build this preliminary effort into a new neon4cast theme, and we’re also eager to dive into the lessons learned from one of the other Unconference projects, which examined pitfalls in recruiting broad engagement in the beetle forecasting challenge. 

Go back to the list of projects.

Translation Needs for the EFI Community

February 2, 2023
By the Translation Working Group

Translational ecology aims to increase the usability of knowledge and products beyond scientific communities by integrating socio-environmental knowledge and stakeholders or end users as partners in the process and projects. For the past few months, the Translation and Actionable Science Working Group has been working to gauge the translational needs of the EFI community and to identify where modelers and physical scientists see gaps in connecting to stakeholders and end users to conduct translational research.  The goal of this post is to share what the working group has identified as priorities for translational needs and share where there are connections to what other working groups and organizations are doing.

Michael Gerst (University of Maryland) led interviews in October-November 2022 with nine individuals across EFI that represent a range of early to late career stages, institutional type (academic, NGO, U.S. government agency), and gender to learn about interviewees’ experiences with forecasting projects that required stakeholder interaction, what went well, what didn’t, and what would have been useful to improve stakeholder engagement. The EFI Translation Working Group is using the results from these interviews to prioritize and develop activities that can help to fill those gaps.

The following seven topics were identified from the interviews and Working Group discussions. They could be developed into self-contained seminars, manuals, or guidance documents (short-term activities) or into larger workshops or proposal writing opportunities (longer-term activities). Cases where topics overlap with other EFI working group discussions and activities are also highlighted. 

  1. How can EFI serve as an advocate for ecological forecasting to stakeholder groups, especially the public sector?
    In some cases, the individuals interviewed found that the stakeholders they work with were not interested as much in the forecasts provided as they were in the real-time data that was provided during the data collection and forecast process.  These real-time data allowed the stakeholders to use their expert knowledge to make informed decisions that may or may not have been related to the forecasts that the teams provided. Within academia, there has been quite a bit of work to raise the profile of ecological forecasting, but there is now the opportunity to bring this awareness outside academia to private and government sectors to both promote the benefits of ecological forecasts in settings outside academia and listen to the needs of stakeholders. 
  2. What’s the appropriate level of stakeholder engagement? Is co-production always the right answer?
    Co-production is increasingly seen as a method for improving the fit between science and stakeholder needs. However, it can be time-consuming and expensive, necessitating assessment of whether its potential benefits are a good match for a project. While understanding of the contexts in which co-production can be fruitful has improved, there is still a gap in distilling guidelines for scientists on when it is appropriate, and if not, what other options exist.
  3. How can we help colleagues find collaborators across disciplines (i.e., matchmaking), and provide guidance on what makes a fruitful multi-disciplinary collaboration?
    This topic, as well as the previous point about the appropriate level of stakeholder engagement, connects with conversations the EFI Diversity, Equity, and Inclusion (DEI) working group is having and a blog post the group is developing, which highlight the need to bring in collaborators at the beginning of a project to learn what is of most interest to them. The DEI group has focused on individuals underrepresented in terms of race and gender, but this is also applicable to disciplines underrepresented in ecological forecasting, such as the social sciences.
    This topic also connects with previous discussions in the former Partners and Knowledge Translation working group (which merged with the former Social Science group to become the current Translation working group). Previous discussions have revolved around how to keep a database of individuals and groups that support matchmaking connections. The group has also discussed the need for forums or meetings to allow groups to pitch ideas and socialize early ideas that can then be followed up to develop proposals to fund those ideas.  Clearly, this is something that resonates both within the working groups and across the broader EFI community.
  4. How to initiate, foster, and maintain stakeholder relationships?
    In 2021 and early 2022 the Partners and Knowledge Transfer and Social Science working groups hosted a number of seminars on science communication and co-production.  In particular, the May 4, 2021 seminar on co-production provides some initial resources that could be built out further for this topic.  The discussions and the upcoming blog post the EFI DEI working group is developing that highlight ways to connect to underrepresented groups will also provide useful resources related to fostering relationships.
  5. How to understand stakeholder decision-making processes?
    In the interviews, a few anecdotes were shared that ultimately can be summarized as: we thought we knew how stakeholders make decisions (with forecasts!) but experience eventually proved otherwise. In addition to learning the process of stakeholder engagement, interviewees thought there would be utility in helping modelers understand the universe of ways stakeholders might make decisions and where forecasts fit in (or don’t!).
  6. How to set up extended project planning to ensure continued operationalization?
    It is important to have a plan in place for how ecological forecasts will be operationalized after the initial set of funding expires. Stakeholders are frustrated if they start to use a forecast and then it is no longer available when the funding is over. NASA provides one example of how to avoid this issue. In NASA’s Ecological Conservation Applications, solicitations (e.g., A.40 Earth Science Applications: Ecological Conservation) often require proposal teams to include partners/end users who will also be responsible for maintaining the ecological forecasting products (e.g., web apps) beyond the NASA funding period.
  7. How to make data, models, and systems that are documented and reusable (FAIR data and models)?
    This is a topic that is of interest across multiple working groups in the EFI community. The Forecasting Standards working group has recently submitted a manuscript for publication titled “A Community Convention for Ecological Forecasting: Output Files and Metadata.” The preprint is available here: https://ecoevorxiv.org/9dgtq/. The manuscript focuses on suggestions for documenting ecological forecasts. The Cyberinfrastructure and Methods working group has also been thinking about the issue of groups creating ecological forecasts continuing to develop one-off or boutique workflows. That working group is writing a workshop proposal to bring together people from government agencies, industry, NGOs, and academia to develop a way to share forecasts and workflows so people don’t need to reinvent a forecast workflow. Instead, new forecasts can borrow strength from resources already developed for similar models or workflows, letting forecasters focus on the details and nuances of applying a forecast in their own study system or domain. This also resonates with what the EFI NEON Ecological Forecasting Challenge is working on with the cyberinfrastructure that is set up for accessing target and meteorological data and for accepting, scoring, and visualizing forecasts across multiple NEON data streams.

EFI at AGU 2022

Date: December 4, 2022

Below is the list of poster and oral presentations for EFI’s hosted session at the American Geophysical Union (AGU) 2022 Conference in Chicago, as well as other ecological forecasting-related talks that may be of interest to the community. All times are listed in US Central Time.

Thursday EFI Social – For anyone who is available to meet up on Thursday evening, December 15, a group will be getting together at Kroll’s South Loop from about 6:30 to 8:30pm. It’s an 18-minute walk from the Convention Center. Find directions here.

Friday Poster and Oral Sessions – EFI’s oral and poster sessions on “Ecological Forecasting in the Earth System” will be held on Friday, December 16, 2022. The in-person Poster Session is from 9am-12:30pm in Poster Hall A (South, Level 3). The Online Poster Session is from 1:45-2:45pm. The Oral session is from 4:45-6:15pm in S501bcd (South, Level 5). We’re excited to have a great set of speakers that really span the full gradient from terrestrial to freshwater to marine. Come check out the following talks!

Friday EFI In-Person Poster Session (9:00-12:30, Poster Hall A)

Friday EFI Online Poster Session (1:45-2:45pm, Online)

Friday EFI Oral Session (4:45-6:15pm, S501bcd – South, Level 5)

Other Forecasting Presentations

If you are presenting an ecological forecasting-related talk or poster that you don’t see on the list, reach out so we can get it added!

Serious Fun: Reimagining Virtual Events Recap

December 2, 2021

Post by Jody Peters

The EFI Partners Working Group hosted Pablo Suarez on November 19 for an intensely participatory Zoom session titled “Serious Fun: Reimagining Virtual Events on Ecological Forecasting”. This was the first of a 2-part series on science communication. The second session, “Sharing is Caring: Communicating Science Beyond Academic Publications” will be held on January 19. You can find details and register for the January session here.

The first session on November 19 was not recorded due to the intensely participatory nature of the call. However, below we share an overview of the call and resources shared by Pablo.

Pablo Suarez is innovation lead at the Red Cross Red Crescent Climate Centre, a visiting fellow at Boston University, and an artist in residence at the National University of Singapore. He directs the Climate Centre’s initiatives linking applied knowledge with humanitarian work, as well as new approaches to climate risk management. Examples of his work include participatory games for learning about climate change and dialogue, and forecast-based financing pilots in Togo and Uganda.

Pablo started with the premise that disasters are collaborating better than humans are – for example, things like the pandemic, climate issues, and conflicts are exponentially causing harm, and we need better collaborations between those who know and those who do. If we want to effectively work with other scientists, people impacted by what we study, or policymakers, we need to let people know how much we care about what they care about. Often we convey information and work to develop collaborations through meetings and presentations, typically with a keynote or series of talks followed by a short Q&A session. Pablo advocated that there are better ways to communicate and develop collaborations during meetings and conferences: using games and including humor and spontaneity to broaden engagement and make connections.

As I worked with Pablo to coordinate the logistics for the call and compile the advertising material, I didn’t grasp what the “Serious Fun” part of his talk title meant. But after participating on the call, I am inspired to think about ways to incorporate his suggestions for reimagining virtual events which are low-carbon and low-budget compared to in-person events. We know Zoom fatigue is real, but by including spontaneity, humor, and games, virtual events can be more engaging and can lead to deepened networks and collective action. By having fun, we can be more effective at conveying serious ideas about science, ecological forecasting, and making decisions.

Below I describe the two major activities held during the call and end with a list of the resources Pablo shared.

1. Rant

One group activity was called Rant. Early on in the call, we had a short amount of time to write out all our frustrations with Zoom calls. It was fast-paced and Pablo really encouraged us to get into the Rant by expressing our frustrations through things like capitalization, exclamation marks, etc. Later after other activities in the call, we had the opportunity to go back to rants submitted by ourselves and by others on the call to think about changes that could be made to alleviate issues with Zoom calls. You can see a few example rants and changes below and you can find all the rants here. I particularly appreciated the second example below, since that was my rant.

2. Decisions for Seasons

The other major activity Pablo facilitated was a version of Decisions for Seasons, a game designed to support learning and dialogue about key aspects of planning for extremes, experiencing climate change impacts, and managing risk. In this game, everyone had 10 forecasts about the climate conditions for the next year. We could choose whether the conditions would be regular/normal (thumbs up) or whether we should plan for drought insurance (buckets) or flood insurance (umbrellas). This game was also fast-paced, with only about a minute to make these decisions. After everyone had made their predictions about the future conditions, Pablo rolled a die where a 1 meant there was a drought and a 6 meant there was a flood, and we all hoped we had picked enough drought or flood insurance to handle those rolls! The fun part was then seeing whose predictions were correct and who ended up getting broken hearts for incorrect predictions (see the second image below). Although we didn’t have time during our call, an extension to this game is to incorporate climate change by using an 8-sided die where floods are represented by a 6, 7, or 8, increasing the chance of an extreme event. You can find more details about how this game can be played here.
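
For anyone curious how much the 8-sided die changes the odds, a couple of lines of R (just an illustration of the dice rules described above) make the comparison concrete:

```r
set.seed(1)
rolls_d6 <- sample(1:6, 10000, replace = TRUE)  # original game: flood only on a 6
rolls_d8 <- sample(1:8, 10000, replace = TRUE)  # climate-change variant: flood on 6, 7, or 8

mean(rolls_d6 == 6)      # ~1/6  (about 0.17)
mean(rolls_d8 %in% 6:8)  # ~3/8  (about 0.38)
```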

Screenshot from GoodGames “Decisions for Seasons” game showing options for investing to plan for next year’s conditions.
Screenshot example of the results from the game after 10 rolls of a die

Throughout the call we had fun, there were moments of heart-racing tension to get our predictions and answers in, there were opportunities for competition and collaboration, and plenty of joy. But there was also a seriousness moving forward to think about what we are doing with forecasts, how we can improve forecasts, how we can improve people’s access, understanding, and opportunity to act, and how we can enable people to make decisions with forecasts.

While this seminar focused on how to make virtual events more engaging, Pablo’s suggestions and resources can also be applied to in-person conferences and meetings, classes, and lab meetings.  Let’s have more fun, let’s be more engaged!

Resources

For those interested in exploring how to incorporate games into your presentations and to find ways to make your audiences know you care about what they care about, the following are resources Pablo shared during the call.

  1. The Climate Centre’s website has lots of resources (climatecentre.org).  In particular, there is a section about humour and a number of climate games.  
    1. There are almost 30 climate games applicable to small to large groups (n=6-100) that cover a range of topics including climate change adaptation and mitigation, collaboration, disaster risk reduction, health, resilience, etc. Each game includes learning objectives, intended audiences, required materials, and pdf and video tutorials. A number of the games specifically connect to forecasting and decision making, including the Decisions for Seasons game we played during the call. Two other examples include
      • Before the Storm – a decision-making game designed to introduce weather forecasts and possible actions to take against natural disasters through different roles.
      • Paying for Predictions – a participatory activity to experience the impacts of climate change, to understand the value of forecasts and to enhance the understanding of climate smart disaster risk reduction.
  2. Pablo used the GoodGames platform during our call for the Rant and Decisions for Seasons activities. Decisions for Seasons can be played in person with the instructions linked on the Climate Centre’s website, but GoodGames has also made an online version here. You can learn more about GoodGames here: https://gg.goodfocus.net/about/. The games do cost money, but if this is something you or your organization are interested in, it would be worth checking in with GoodGames. I believe there are discounts for non-profits.
  3. Pablo shared a 2016 paper he is a co-author on, “Using a Game to Engage Stakeholders in Extreme Event Attribution Science“. You can find his other publications about climate games here.

Diversity with EFI and How You Can Get Involved

Date: June 18, 2020

Post by: The EFI Diversity, Equity, and Inclusion Working Group

The Ecological Forecasting Initiative, like many other organizations, calls for justice for George Floyd and countless other Black individuals and persons of color, and we stand in solidarity with our Black colleagues and friends saying #BlackLivesMatter. Our EFI Diversity, Equity, and Inclusion (DEI) Working Group is committed to listening, learning, and exploring ways to promote anti-racism and to make EFI, and STEM fields more broadly, a welcoming environment. With regard to ecological forecasting, as a first step, we need input and experience from people of all backgrounds at all stages of the ecological forecasting process, from forecast development and implementation to stakeholder decisions. Ecological forecasting as a field is relatively new; creating an inclusive, anti-racist field starts with understanding the lived experiences of all types of forecasters, end users, and community stakeholders. We have more in-depth initiatives and tasks associated with the current NSF-funded EFI-RCN grant and other proposals submitted for review, but for now, the following are four ways we invite you to get involved: joining the group, using or adding to our Bibliography of resources, filling out a short 5-minute survey, or joining our book club.

ONE
Of all the EFI Working Groups, our DEI Group has the smallest number of participants. We welcome anyone who is interested in participating to learn more about ways to expand diversity and inclusion as well as brainstorming ways to increase diversity within the ecological forecasting field. Our next call is June 30 and upcoming monthly meetings are posted on the EFI’s DEI webpage, as is our Strategic Plan, which is a living document that provides an overview of the steps the DEI Working Group is taking to promote diversity, accessibility, and inclusion in EFI. Email eco4cast.initiative@gmail.com to be added to the mailing list for this group.

TWO
If you are not able to join the Working Group calls at this time, there are additional ways to get involved. We are compiling a Bibliography that provides resources for learning more about anti-racism and the diversity status of fields relevant to ecological forecasting. These resources include lists of minority supporting associations, links to diversity and inclusion plans from professional societies, blog posts, publications, and compiled lists of resources from other organizations. This is also a living document, to which we will add additional documents moving forward. If there are additional resources you have found useful, they can be submitted through this Google form.

THREE
As part of Step 1 of our Strategic Plan, “Identify and clarify the problem”, we are working to identify the current status of diversity within fields relevant to ecological forecasting as a baseline against which to assess the diversity of the Ecological Forecasting Initiative specifically. Once we have assessed EFI’s current diversity, our next goal is to provide suggestions to ecological forecasting labs about ways to recruit more diverse students into our undergraduate and graduate programs.
To assess the current status of diversity within fields relevant to ecological forecasting, we are using the NSF-funded NCSES Interactive Data Tool’s IPEDS database of the racial and ethnic backgrounds of students who have graduated from US institutions in over 400 academic programs. We have narrowed the list to 29 academic degrees and are asking for your help to rank the relevance of these degrees to ecological forecasting in this short survey (https://nd.qualtrics.com/jfe/form/SV_3Pdyo1bh5OG8R93). Once we know which academic degrees are most relevant to ecological forecasting, we can assess the current diversity of those degrees relative to EFI. We will then work on Step 2 of our Strategic Plan, “Identify barriers that may prevent students from underrepresented groups from participating in ecological forecasting.”
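
As a rough illustration of the kind of calculation this step involves (not the working group’s actual workflow), the sketch below combines hypothetical survey relevance scores with IPEDS-style completion counts to produce a relevance-weighted demographic baseline. The file names, column names, and the relevance cutoff are assumptions for illustration only.

# Illustrative Python sketch (hypothetical inputs; not the working group's actual code)
import pandas as pd

# Assumed inputs:
#   ipeds_completions.csv -> one row per (degree, race_ethnicity) with graduate counts
#   survey_relevance.csv  -> one row per degree with a mean relevance score (e.g., 1-5)
completions = pd.read_csv("ipeds_completions.csv")   # columns: degree, race_ethnicity, count
relevance = pd.read_csv("survey_relevance.csv")      # columns: degree, relevance_score

# Keep only the degrees the survey rates as relevant to ecological forecasting
relevant = relevance[relevance["relevance_score"] >= 3.0]

# Weight each degree's graduate counts by its relevance score
merged = completions.merge(relevant, on="degree", how="inner")
merged["weighted_count"] = merged["count"] * merged["relevance_score"]

# Relevance-weighted share of graduates by race/ethnicity across the relevant degrees
baseline = (
    merged.groupby("race_ethnicity")["weighted_count"].sum()
    / merged["weighted_count"].sum()
)
print(baseline.sort_values(ascending=False))

The resulting shares could then be compared against the demographics of EFI membership to see where the largest gaps are.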

FOUR
To encourage open, honest conversation and anti-racist thinking, EFI will host its first virtual book club. We will begin with The Years That Matter Most: How College Makes or Breaks Us by Paul Tough. Tough’s book explores privilege in higher education, from the application process to the classroom. As many forecasters are educators and participants in higher education, we believe this book will serve the interests of EFI’s mission while helping participants grow in anti-racist values. The book club is open to all participants, regardless of EFI membership, race, ethnicity, gender, religion, or any other personal identity – we ask only that you participate with an open mind and a willingness to be vulnerable. For those who would like to participate but need help acquiring the book, we have a limited amount of financial assistance available. Email eco4cast.initiative@gmail.com for more info.
Logistics: The book club will meet weekly, in the evenings, starting the week of July 13th, with about 40-70 pages of reading per meeting (although meeting frequency and page counts can be adjusted to meet the needs of the group). If you are interested in participating, email eco4cast.initiative@gmail.com so we can send you the doodle poll to find a day/time for the group to meet.

EFI Guest Post on Dynamic Ecology

Date: June 8, 2020

EFI Member Nick Record (Bigelow Laboratory for Ocean Sciences) led an effort with Jaime Ashander (Resources for the Future), Peter Adler (Utah State University), and Michael Dietze (Boston University) to write a guest post titled “Ecological forecasting ethics: lessons for COVID-19” for Dynamic Ecology | Multa novit vulpes.

You can find the post here:

https://dynamicecology.wordpress.com/2020/06/08/ecological-forecasting-ethics-lessons-for-covid-19/

EFI at AGU 2019

Date: December 6, 2019

EFI’s oral and poster sessions on “Ecological Forecasting in the Earth System” have been scheduled for Wednesday, December 11, 2019 from 4-6pm in Moscone 3001 (oral session) and on Wednesday morning from 8am-12:20pm (posters). We’re excited to have a great set of speakers that spans the full gradient from terrestrial to freshwater to marine. Come check out the following talks!

Wednesday EFI Oral Session (4-6pm, Moscone 3001)

16:00 Nicole Lovenduski – High predictability of terrestrial carbon fluxes from an initialized decadal prediction system
16:15 Ben Bond-Lamberty – Linking field, model, and remote sensing methods to understand when tree mortality breaks the forest carbon cycle
16:30 Zoey Werbin – Forecasting the Soil Microbiome
16:45 Brian Enquist – Forecasting future global biodiversity: Predicting current and future global plant distributions, community structure, and ecosystem function
17:00 Heather Welch – Managing the ocean in real-time: Ecological forecasts for dynamic management
17:15 Clarissa Anderson – Bringing new life to harmful algal bloom prediction after crossing the valley of death
17:30 Ryan McClure – Successful real-time prediction of methane ebullition rates in a eutrophic reservoir using temperature via iterative near-term forecasts
17:45 Carl Boettiger – Theoretical Limits to Forecasting in Ecological Systems (And What to Do About It)

Wednesday EFI Poster Session (8am-12:20pm, Moscone South Poster Hall)

Christopher Trisos – B31J-2509 The Projected Timing of Abrupt Ecological Disruption from Climate Change
Gleyce K. D. Araujo Figueiredo – B31J-2510 Spatial and temporal relationship between aboveground biomass and the enhanced vegetation index for a mixed pasture in a Brazilian integrated crop livestock system
Rafael Vasconcelos Valadares – B31J-2511 Modeling Brazilian Integrated Crop-Livestock Systems
Zhao Qian – B31J-2512 An optimal projection of the changes in global leaf area index in the 21st century
Takeshi Ise – B31J-2513 Causal relationships in mesoscale teleconnections between land and sea: a study with satellite data
Hisashi Sato – B31J-2514 Reconstructing and predicting global potential natural vegetation with a deep neural network model
Masanori Onishi – B31J-2515 The combination of UAVs and deep neural networks has a potential as a new framework of vegetation monitoring
Yurika Oba – B31J-2516 VARENN: Graphical representation of spatiotemporal data and application to climate studies
Stephan Pietsch – B31J-2517 A Fast and Easy to use Method to Forecast the Risks of Loss of Ecosystem Stability: The Epsilon Section of Correlation Sums
Jake F Weltzin – B31J-2518 Developing capacity for applied ecological forecasting across the federal research and natural resource management community
Theresa M Crimmins – B31J-2519 What have we learned from two seasons of forecasting phenology? The USA National Phenology Network’s experience operationalizing Pheno Forecasts
Tim Sheehan – B31J-2520 Sharp Turn Ahead: Modeling the Risk of Sudden Forest Change in the Western Conterminous United States
Margaret Evans – B31J-2521 Continental-scale Projection of Future Douglas-fir Growth from Tree Rings: Testing the Limits of Space-for-Time Substitution
Ann Raiho – B31J-2522 Improving forecasting of biome shifts with data assimilation of paleoecological data
Quinn Thomas – B31J-2523 Near-term iterative forecasting of water quality in a reservoir reveals relative forecastability of physical, chemical, and biological dynamics
Alexey N Shiklomanov – B31J-2524 Structural and parameter uncertainty in centennial-scale simulations of community succession in Upper Midwest temperate forests
Peter Kalmus – B31J-2525 Identifying coral refugia from observationally weighted climate model ensembles
Jessica L O’Connell – B31J-2526 Spatiotemporal variation in site-wide Spartina alterniflora belowground biomass may provide an early warning of tidal marsh vulnerability to sea level rise
Rafael J. P. Schmitt – B31J-2527 Assessing existing and future dam impacts on the connectivity of freshwater fish ranges worldwide
Teng Keng Vang – B31J-2528 Site characteristics of beaver dams in southwest Ohio

Other Forecasting Presentations

Mon 13:40-15:40, Moscone South e-Lightning Theater: Alexandria Hounshell, ED13B-07 Macrosystems EDDIE: Using hands-on teaching modules to build computational literacy and water resources concepts in undergraduate curricula (Alex’s presentation will be at ~2pm)
Mon 13:40-18:00, Poster Hall: Hamze Dokoohaki B13F-2442 – A model–data fusion approach to estimating terrestrial carbon budgets across the contiguous U.S.
Mon 14:25, Moscone 3005: Michael Dietze B13A-04 – Near real-time forecasting of terrestrial carbon and water pools and fluxes
Mon 17:40, Moscone 3003: Michael Dietze B14B-11 – Near real-time forecasting in the biogeosciences: toward a more predictive and societally-relevant science
Tues 13:40-18:00, Poster Hall: Erin Conlisk B23F-2598 – Forecasting Wetland Habitat to Support Multi-Species Management Decisions in the Central Valley of California
Wed 08:00-12:20, Poster Hall: B31H Building Resilient Agricultural Systems Supported by Near-Term Climate and Yield Forecasts II [Poster Session]
Wed 13:55, Moscone 3005: Inez Fung B33A-02 – Towards verifying national CO2 emissions
Thurs 09:15, Moscone 3012: John Reager B41A-06 – Hydrological predictors of fire danger: using satellite observations for monthly to seasonal forecasting 
Fri 10:20-12:20, Moscone 3007: B52A Building Resilient Agricultural Systems Supported by Near-Term Climate and Yield Forecasts I [Oral Session]

EFI Social

For anyone available to meet up after the Forecasting Session on Wednesday, a group will be getting together at Tempest starting around 6:30 pm. It’s an 8-minute walk. Find directions here.

Seeking Judges for Outstanding Student Presentations

We would like to recruit judges for the student presentations in our forecasting sessions at AGU this year. We have one candidate for Outstanding Student Presentation in our poster session on Wednesday morning (B31J) and two candidates in our oral session Wednesday afternoon (B34C). If you plan to attend either of these sessions, please consider helping to mentor a young researcher with some constructive feedback.
You can sign up to judge at https://ospa.agu.org/2019/ospa/judges/:

  1. Register and agree to the honor code by selecting “Register to Judge”. Once there, sign up for the student presentations you wish to evaluate; every judge must sign up for at least three presentations to ensure that all students are provided with feedback. Select “Find Presentations” and search for presentations by B31J or B34C in the lower of the two “quick search” boxes.
  2. When you arrive for Fall Meeting, confirm the time and location of the presentations you are evaluating. You can sync your judging schedule to your personal calendar to ensure you don’t accidentally miss any presentations: go to your OSPA schedule and click ‘Add to Calendar’ on the task bar, and your judging schedule will be added to your Google Calendar, Outlook, or iCalendar.
  3. You will need to evaluate all presentations you volunteered to judge. Students depend on your feedback to assess their presentation skills, identify the areas in which they are performing well, and identify areas that need improvement.
  4. Either submit scores in real time on a tablet or mobile device, or take notes while you evaluate students and enter the scores later. Do not rely on your memory alone to submit complete scores at a later time. Students participate in OSPA to improve their presentation skills, so please provide them with thorough feedback. This year, comments are required in addition to the numerical scores. All reviews must be entered into the OSPA site no later than Friday, 20 December 2019, at 11:59 p.m. EDT.
  5. Finally, be constructive! OSPA presenters range in education levels from undergraduate to Ph.D. students, and there are also many presenters for whom English is not their first language. Keep these things in mind when providing feedback; judges are asked to evaluate students at their current education and language proficiency levels.