Melissa Kenney1, Michael Gerst2, Toni Viskari3, Austin Delaney4, Freya Olsson4, Carl Boettiger5, Quinn Thomas4
1University of Minnesota, 2University of Maryland, 3Finnish Meteorological Institute, 4Virginia Tech, 5University of California, Berkeley
With the growth of the EFI NEON Ecological Forecasting Challenge, we have outgrown the current Challenge Dashboard, which was designed to accommodate a smaller set of forecasts and synthesis questions. Thus, we have re-envisioned the next stage of the EFI-RCN NEON Forecast Challenge Dashboard to support a wider range of questions that Forecast Challenge users would be interested in exploring.
The main audience for this dashboard is NEON forecasters, EFI, Forecast Synthesizers, and students in classes or teams participating in the Forecast Challenge. Given this audience, we have identified three dashboard elements that will be important to include:
a forecast synthesis overview,
summary metrics about the Forecast Challenge, and
a self-diagnostic platform.
During the June 2023 Unconference in Boulder, our team focused on scoping all three dashboard elements and prototyping the forecast synthesis overview. The objective of the synthesis overview visual platform is to support community learning and emergent theory development. Thus, the synthesis visualizations aim to create a low barrier to entry for multi-model exploration: understanding model performance, identifying model characteristics that lead to stronger performance, determining which spatial domains or ecosystems are more predictable, and assessing how far into the future forecasts remain valid.
You can view the prototype developed during the meeting HERE and in Figures 1 and 2.
Figure 1. Static image of an interactive map of aggregate forecast skill relative to climatology at each forecast site, here showing the water temperature forecasts for the aquatics theme. Bubble colour represents the continuous ranked probability score (CRPS) skill relative to climatology, with positive values (blues) showing that submitted models on average perform better than climatology and negative values (reds) showing that they perform worse. The size of the bubble represents the percentage of submitted models that outperformed the climatology null (i.e., larger bubbles indicate a higher percentage of skilled models). When hovered over, the bubbles show this percentage (perc_skilled), the site type (field_site_subtype), and the total number of models forecasting at that site (n_mod).
Figure 2. a) Percentage of submitted models classed as ‘skillful’ (outperforming the null climatology forecast based on the continuous ranked probability score metric) at the river (n=27) and lake (n=6) sites for water temperature forecasts at each horizon from 1 to 30 days ahead. b) Percentage of submitted models classed as ‘skillful’ for water temperature forecasts at six of the lake sites (https://www.neonscience.org/field-sites/explore-field-sites).
Developing these graphics requires aggregation of skill scores. There are a multitude of metrics that can be used to calculate a skill score, each with its own benefits and flaws. Thus, the dashboard should provide skill scores for multiple metrics, with clear presentation of which metric a given visualization uses. Additionally, in order to isolate which sites are most interesting from a model development perspective, there needs to be a comparison of how many models meet a baseline skill score at a given site over a chosen time frame. That allows challenge areas to be isolated and readily shows which models succeed in situations where others struggle. To support future analysis of how models perform at certain sites, we also envisage the visualization including skill scores for the relevant drivers (NOAA weather forecasts) for comparison. For example, if we see a drop in skill across models in water temperature projections after some lead time, there should be a direct way to assess whether this reflects flawed model dynamics or whether the weather forecast driving the water temperature loses its reliability. This also lets the user approximate the maximum horizon over which model performance analysis is useful at all.
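As a concrete illustration, the aggregate skill shown in Figure 1 could be computed along these lines. This is a minimal sketch that assumes Gaussian forecast distributions (for which the CRPS has a closed form); the actual Challenge scoring operates on submitted ensemble or distributional forecasts, and all numbers below are hypothetical.

```python
import math

def crps_gaussian(y, mu, sigma):
    """Closed-form CRPS of a Gaussian forecast N(mu, sigma^2) for observation y."""
    z = (y - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return sigma * (z * (2 * cdf - 1) + 2 * pdf - 1 / math.sqrt(math.pi))

def skill_vs_climatology(obs, model, clim):
    """Mean CRPS skill of a model relative to a climatology baseline.

    Positive values mean the model outperforms climatology, matching the
    colour convention described for Figure 1.
    """
    crps_m = sum(crps_gaussian(y, mu, sd) for y, (mu, sd) in zip(obs, model)) / len(obs)
    crps_c = sum(crps_gaussian(y, mu, sd) for y, (mu, sd) in zip(obs, clim)) / len(obs)
    return (crps_c - crps_m) / crps_c

# Hypothetical water-temperature forecasts at one site: (mean, sd) per day
obs = [18.2, 18.9, 19.4]
model = [(18.0, 0.5), (18.7, 0.6), (19.0, 0.7)]  # sharp, close to observations
clim = [(17.0, 2.0), (17.0, 2.0), (17.0, 2.0)]   # wide climatological spread
print(skill_vs_climatology(obs, model, clim) > 0)  # → True (model beats climatology)
```

A per-site version of this calculation, repeated over all submitted models, would yield the percentage of skilled models used to size the bubbles in Figure 1.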
In addition to the main synthesis overview, the goal of this platform is to support exploration of synthesis data. For all themes, there was general agreement that it would be useful to pull up, at a glance, site characteristics, a photo, and basic summary statistics about the number of models and model performance.
During the meeting, we worked with the Aquatics and Beetles Challenge teams to identify some of the key data aggregation groupings that will be important to facilitate exploration. Three important distinctions arose during the conversations: the baseline model, the time scale, and data latency. For Aquatics, there is a long time series of data from which to create a climatology, and data are provided relatively quickly via data loggers. For Beetles, the null baseline model differs because the length of the historic record varies by site, and beetle abundance and richness assessments take a year to become available. There was also a desire for specific types of synthesis visualizations, including species accumulation curves over years, 3-year running averages, and indications of the lower and upper bounds of a particular variable (for use in scaling). Thus, for both Beetles and Aquatics there are similarities and differences in the types of groupings that would be most useful to support synthesis exploration.
Table 1. Different data groupings that would be useful to facilitate easy-to-develop synthesis visualizations of the EFI-NEON Forecast Challenge models to facilitate learning and community theory development.
Team / Challenge: theme, site, model ID, customized classroom or team groupings; particular variables (e.g., DO) within a theme
Spatial / Ecosystems: sites, NEON domains, site type (river, stream, lake…), altitude (high vs. lowlands); sites by distance; dominant NLCD classification
Temporal: average for past year; seasonal groupings; 1-, 5-, 7-, 15-, and 30-day horizons; 14 days, growing season, and multi-year (up to 5-year) forecasts
Models: best model at each site; model inputs; model structure; functional type; output uncertainty representation; model run time; model computational requirements
Skill: skill of current forecast approaches; performance relative to the climatology/null baseline; comparison of your model to the best forecast
Drivers: environmental variables and weather forecast observations; comparison with weather/climate forecast skill
Site context: disturbance events (e.g., wildfire); growing season dates at each site; site disturbance characteristics (e.g., mowing, fencing)
In addition to the synthesis overview, there were two complementary and linked platforms that will create the dashboard. First, the objective of the forecast challenge overview is to provide a basic summary of metrics related to the overall EFI NEON Ecological Forecasting Challenge. Specifically, the metrics that would be included are: number of forecasts submitted, number of unique teams, percentage (or median of all) of models that are better than climatology or a null model per theme, and total forecast and observation pairs.
Second, the objective of the self-diagnostic platform is to provide an overview of individual or team forecast contributions and performance. The types of summaries that will be provided for forecasters are: confirmation of forecast submission, date of the most recent forecast submitted for a model, model performance relative to climatology or a null model, model predictions versus observations, model performance versus other selected models, and model skill over a specific time horizon (to assess whether the model performs better over time).
Overall, the goal of the re-envisioned visual dashboard is to create platforms that allow us to track Challenge engagement, let individuals and teams diagnose model submission problems and identify performance improvement opportunities, and support community theory development through synthesis across the range of models submitted to the EFI NEON Ecological Forecasting Challenge. Long term, if this platform structure proves useful and robust, it could be applied to other systems with multi-model predictions where there is a desire to learn collaboratively and improve our theoretical understanding and forecasts to support decision-making.
We are looking for input from the EFI community on the synthesis dashboard for other themes and would like to discuss what syntheses would be most relevant to phenology, terrestrial, and ticks forecasters. Reach out to firstname.lastname@example.org to share your thoughts or to let us know you would like to join future conversations about updating the dashboard.
The 2023 EFI Unconference, hosted by the Ecological Forecasting Initiative Research Coordination Network (EFI RCN) and supported by the National Science Foundation, brought together 45 passionate individuals at the National Ecological Observatory Network (NEON) headquarters in Boulder, Colorado on June 21-23, 2023 to work on a diverse range of projects that were nominated and selected by the participants. With a focus on collaborative problem-solving, the Unconference fostered a unique environment for participants to exchange knowledge, generate new approaches, and advance the field of ecological forecasting.
In addition to project development, activities included a warm welcome from Kate Thibault, NEON Science Lead; icebreaker activities, expertly facilitated by Cayelan Carey from Virginia Tech, that helped participants connect and form meaningful relationships; a tour of NEON facilities; and a poster session and social hour where participants showcased their research and projects. Through these activities, Unconference participants and NEON staff were able to engage with one another, exchange feedback, and forge new collaborations.
To ensure a productive and focused Unconference, participants engaged in a review of project ideas and subsequent project selection. This process allowed attendees to propose projects aligned with their interests and expertise and fostered a sense of ownership and investment in the outcomes. Ten project groups developed out of the 24 that were initially proposed as part of the pre-meeting preparation.
Summaries provided by each project working group are listed below. Some groups will provide additional details in forthcoming blog posts, so be sure to watch for those future posts.
This was the first in-person EFI event since 2019 and it was absolutely lovely to be in the same room to meet new people and to see in-person people we had only seen on Zoom before. We appreciate the Unconference participants’ willingness to share their time, talents, and perspectives. As you will read below, there were a number of accomplishments over the three days of the meeting and we look forward to seeing future outcomes from what was developed at the Unconference!
Participants: John Smith, David Durden, Emma Mendelsohn, Carl Boettiger
To date, the NEON Ecological Forecasting Challenge has focused on generating near-term forecasts for specific sites. However, many interesting ecological phenomena occur across both time and space. At the EFI 2023 Unconference, our group prototyped a forecasting challenge that is also spatially explicit. For our prototype forecasting challenge, we focused on Leaf Area Index (LAI) recovery in post-burn areas. Our focal sites so far include the California August Complex fire and the Colorado East Troublesome fire. Our work at the Unconference focused on building cyber-infrastructure to ingest and aggregate data, build target files, assess models using proper scoring rules, and build baseline climatological forecasts. Current progress, including an example notebook and a detailed workflow diagram, is available on GitHub: https://github.com/eco4cast/modis-lai-forecast/. Current and future work includes building additional baseline models, setting up a submission portal using GitHub Actions, and integrating additional sites spanning a variety of ecoclimatic domains.
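A climatological baseline of the kind mentioned above can be sketched as a day-of-year climatology: the historical mean for each day of year, with the historical spread as its uncertainty. The function and data below are illustrative assumptions, not the project's actual code (see the GitHub repository for that), and real targets would be gridded LAI values per pixel.

```python
import statistics
from collections import defaultdict

def climatology_baseline(history):
    """Build a simple day-of-year climatology from historical observations.

    `history` is a list of (day_of_year, value) pairs. The baseline forecast
    for a day of year is the historical mean, with the historical standard
    deviation as its uncertainty (0.0 when only one observation exists).
    """
    by_doy = defaultdict(list)
    for doy, value in history:
        by_doy[doy].append(value)
    return {doy: (statistics.mean(v),
                  statistics.stdev(v) if len(v) > 1 else 0.0)
            for doy, v in by_doy.items()}

# Hypothetical LAI observations for one pixel across three years
history = [(180, 2.1), (180, 2.4), (180, 2.7), (212, 3.0), (212, 3.2)]
baseline = climatology_baseline(history)
mean_180, sd_180 = baseline[180]  # mean 2.4, sd 0.3
```

Scored with a proper scoring rule such as the CRPS, a baseline like this gives submitted models a null forecast to beat, mirroring the climatology nulls used elsewhere in the Challenge.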
Participants: Noam Ross, Eli Horner, Ashley Bonner, Mike Dietze, Chris Jones
Interest in and use of ecological forecasting have increased in recent years, due in large part to the efforts of EFI, including the NEON Ecological Forecasting Challenge. However, only a small percentage of published ecological forecasts have fully quantified and partitioned their forecast uncertainties. Quantifying and validating model predictions and uncertainties reveals the degree of uncertainty in forecasts and how well we understand the underlying ecological systems (our ability to predict them). Partitioning forecast uncertainties allows data collection efforts to be focused where they could most improve model performance and reduce uncertainty. Our group worked toward creating a tutorial on how to quantify and partition forecast uncertainties and validate model predictions with uncertainty, using the NEON Phenology Forecasting Challenge. We are using an ARIMA model and a random forest model as examples. During the Unconference we got both models working and partitioned their uncertainties. We are finishing up the code base and tutorial, and discussing the challenges each type of model presents for uncertainty quantification and partitioning.
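The partitioning idea can be illustrated with a toy simulation: run the forecast ensemble with one uncertainty source switched on at a time and compare the resulting forecast variances. This sketch uses a simple AR(1) process with made-up parameters, standing in for the group's ARIMA and random forest models.

```python
import random

def forecast_ensemble(y0, phi, sigma_proc, sigma_ic, horizon, n=1000,
                      include_ic=True, include_proc=True, seed=0):
    """Ensemble of final-horizon values from a toy AR(1) forecast,
    y[t+1] = phi * y[t] + noise, with initial-condition and process
    uncertainty independently switched on or off."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n):
        y = y0 + (rng.gauss(0, sigma_ic) if include_ic else 0.0)
        for _ in range(horizon):
            y = phi * y + (rng.gauss(0, sigma_proc) if include_proc else 0.0)
        finals.append(y)
    return finals

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Partition: one uncertainty source at a time, then everything together
ic_only   = variance(forecast_ensemble(5.0, 0.9, 0.3, 0.5, 10, include_proc=False))
proc_only = variance(forecast_ensemble(5.0, 0.9, 0.3, 0.5, 10, include_ic=False))
total     = variance(forecast_ensemble(5.0, 0.9, 0.3, 0.5, 10))
# In this toy, process error dominates at a 10-step horizon because the
# initial-condition perturbation decays by a factor of phi each step.
```

Comparing these variances shows which source of uncertainty most limits the forecast, and therefore where new data collection would help most.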
Forecasting Impacts: Measuring the Current and Future Impacts of EFI
Participants: Rebecca Finger-Higgens, Jessica Burnett, Alexis O’Callahan, Ayanna St. Rose
It turns out getting-to-know-you style icebreakers can provide more than just a few new friends; they can also reveal group priorities and motivations for coming together in the first place. On a sunny morning at the NEON headquarters in Boulder, CO, Cayelan Carey (Virginia Tech) asked the group of EFI Unconference participants to organize themselves based on whether they individually felt that the goal of forecasts was understanding or decision making. As the participants shuffled around and considered the question before them, the final arrangement revealed that the group leaned toward forecasts informing decision making rather than broadening the understanding of ecological systems. However, the ability of ecological forecasts to effectively inform decision making has not been clearly measured. Beyond directly impacting decision-making processes, how do we, as a grassroots organization, recognize and measure the other societal impacts that EFI might be, or be capable of, creating?
This led our group to think through ways that EFI could measure impacts, asking: what are the impact goals and achievements of EFI, what does the community want out of EFI, and what is the best way to measure these often hard-to-measure metrics? Using five categories of societal impacts (instrumental applications, connectivity impacts, conceptual impacts, capacity building, and socio-ecological impacts), we developed a poll for Unconference participants to assess the priorities and current thinking of this representative group. The poll results suggest that EFI community goals emphasize conceptual impacts (i.e., improving ecological understanding), connectivity impacts (i.e., maintaining and developing community and partnerships), and instrumental applications (i.e., applications for decision making). We also found that EFI has made the greatest advancements in capacity building (i.e., curriculum development, short courses), conceptual impacts (i.e., working groups), and connectivity impacts (i.e., newsletters and conference sessions). These discoveries allowed us to identify a space for creating a concrete link between the connectivity of forecasting and the desired application outcomes of the group, and to develop a number of recommendations for the steering committee and the EFI community. Some of these recommendations include focusing on whether, how, and why a forecast product achieves one of the five predefined societal benefits. Together, we hope to continue to build on the vision statement of EFI to build forecasts to understand, manage, and conserve ecosystems in a measurable and remarkable way.
Figure 1: Word cloud generated from Unconference participant responses to the question “describe the potential, importance, or value of the community of EFI”.
Reenvisioning the NEON Ecological Forecasting Challenge Dashboard Visualization
Participants: Melissa Kenney, Michael Gerst, Toni Viskari, Austin Delaney, Freya Olsson, Carl Boettiger, Quinn Thomas
With the growth of the NEON Ecological Forecasting Challenge, we have outgrown the current Challenge Dashboard, which was designed to accommodate a smaller set of forecasts and synthesis questions. Thus, we have re-envisioned the next-stage NEON Forecast Challenge Dashboard to support a wider range of questions that Forecast Challenge users would be interested in exploring. The main audience for this dashboard is NEON forecasters, the EFI community, Forecast Synthesizers, and students in classes or teams participating in the NEON Ecological Forecasting Challenge. Given this audience, we identified three dashboard elements that will be important to include: a forecast synthesis overview, summary metrics about the Forecast Challenge, and a self-diagnostic platform.
Transporting Models Between NEON and non-NEON Systems
Participants: Brendan Allison, Olufemi Fatunsin, & Jeff Mintz
A community of practice is increasingly active in developing models and forecasts for NEON sites. We asked: how can we take models trained on NEON data and refine them for use in another context? Similarly, how can we take models trained on non-NEON data and refine them on NEON data? Transplanting models in this way can empower a range of applications, including local field studies, adaptive management, and fusion of data from multiple monitoring networks, enabling greater statistical power for big ecological questions. Whether transporting a model to or from NEON, the challenges are effectively the same: unbalanced data, different monitoring protocols, different predictors, and different site selection criteria. To focus our efforts, we picked the particular case study of bringing together NEON vegetation survey data with similar datasets generated under the Forest Inventory and Analysis (FIA) program. Our first product was a Bayesian multilevel model with the capacity to scale up to the integration of multiple continental or global-scale monitoring networks, or to shrink to the job of predicting outcomes at a single site informed by a shared global layer. With this case study in mind, we have been building a codebase for processing the relevant NEON and FIA forestry data and for jointly modeling residual error across monitoring systems in Stan, a popular probabilistic programming language. Find more details about this project in this blog post.
ML-based Uncertainty in the NEON Ecological Forecasting Challenge
Participants: Marcus Lapeyrolerie, Caleb Robbins
How can machine learning (ML) help estimate forecast uncertainty across the NEON Ecological Forecasting Challenge? We generated a proof-of-concept workflow combining two machine learning approaches to make probabilistic forecasts. Random forests were used to learn relationships between Forecast Challenge variables and past NOAA weather data and to make predictions. While these models made forecasts that approximated the future target time series well, they were not implemented to provide estimates of uncertainty. We explored how we could use past data along with these deterministic forecasts to generate probabilistic forecasts. Our approach was to train another machine learning model to make probabilistic forecasts of the residual errors of the Random Forest models, and then use these predicted residual errors to adjust the Random Forest forecasts. This combined approach holds potential because it could be used in a plug-and-play manner: the method could correct the deterministic (or even probabilistic) forecasts from any model to account for temporal trends in the residual error and provide uncertainty estimates. In our next steps, we will work on creating an automated workflow to generate residual error forecasts for the Eco4Cast challenge.
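A stripped-down version of the residual-correction idea looks like the following. Here the "second model" is just the empirical mean and standard deviation of past residuals rather than a trained ML model, and all numbers are hypothetical; the group's actual workflow fits a machine learning model to the residual series.

```python
import statistics

def probabilistic_from_residuals(point_forecasts, past_residuals):
    """Wrap a deterministic forecast with uncertainty learned from past
    residual errors (obs - forecast). Returns (mean, sd) pairs per step:
    the mean is bias-corrected, the sd comes from the residual spread."""
    bias = statistics.mean(past_residuals)
    sd = statistics.stdev(past_residuals)
    return [(f + bias, sd) for f in point_forecasts]

# Hypothetical: a deterministic model has historically run ~0.2 units low
past_residuals = [0.1, 0.3, 0.2, 0.25, 0.15]
point_forecasts = [19.0, 19.5]
prob = probabilistic_from_residuals(point_forecasts, past_residuals)
# First forecast becomes (19.2, ~0.08): shifted up by the bias, with an sd
```

Replacing the mean/sd summary with a model that predicts residuals as a function of horizon or covariates gives the plug-and-play correction described above.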
Ground beetles are a versatile group with which to measure biodiversity, yet they lag behind other EFI NEON Ecological Forecasting Challenge themes in terms of forecasts and models. Our group at the Unconference wanted to figure out why forecasters were not submitting to the NEON Ecological Forecasting Challenge Beetle Communities Theme and how we could remove barriers to increase forecast submissions. We created a tutorial (in progress) that describes general goals for forecasting ecological communities, a how-to on submitting a forecast, some of the challenges in forecasting ecological community data, and examples of forecasts people might submit to begin to address those challenges. We first reviewed the underlying data structure of the pre-made targets file that had been developed for the forecasting challenge. We then combined currently available code for a null model, an ARIMA model, and an existing tutorial for working with data from the Aquatics Challenge into a workable tutorial for preparing and submitting forecasts to the Beetle Challenge. Our goal is to finalize the tutorial by adding a random walk model and more detail on how to add covariates, including climate variables, to the model. We are also designing a new targets file with different variables of interest at finer spatial scales within a given NEON site (e.g., plot or habitat information, survey effort). The beetle community data provide an example of how patterns in non-continuous or seasonal data may be poorly captured by a simple model (e.g., ARIMA). When there is latency or there are gaps in the data, more data processing is often required than with continuous sensor-captured data. Knowing the experimental design is also critical for designing a model that builds understanding. We hope that this tutorial increases overall interest in submitting forecasts to the beetle forecasting challenge and removes barriers that may prevent forecasters at all levels from submitting.
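To give a sense of the planned random walk addition, a naive random-walk forecast keeps the last observation as the mean and lets the uncertainty grow with the square root of the horizon. This is a generic sketch with made-up numbers, not the tutorial's code.

```python
def random_walk_forecast(last_obs, residual_sd, horizons):
    """Naive random-walk forecast: the mean stays at the last observation,
    and the sd grows as residual_sd * sqrt(horizon)."""
    return [(last_obs, residual_sd * h ** 0.5) for h in horizons]

# Hypothetical beetle richness: 12 species at the last survey, forecast
# 1, 2, and 4 survey intervals ahead
fc = random_walk_forecast(12.0, 1.5, [1, 2, 4])
# Means stay at 12.0; sds are 1.5, ~2.12, and 3.0
```

Baselines like this are easy to beat in principle, which makes them useful nulls for judging whether a submitted model has learned anything about the system.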
Community ecology and biodiversity data are critical for understanding many different biological systems; further work with these data can help researchers broaden their understanding of how and why communities change over time and can provide better decision-making tools for ecosystem monitoring.
Towards Principles for Designing Inclusive Ecological Forecasts
Participants: Anna Sjodin, Mary Lofton, Sean Dorr, Jody Peters, Jason McLachlan, Cazimir Kowalski, Melissa Kenney, Katie Jones
Our group is interested in exploring opportunities for improving inclusivity in ecological forecasting. Through discussion, we identified the ten principles of Design Justice (Box 1) as a potential mechanism for evaluating the inclusivity of forecast products, services, and systems.
We use design to sustain, heal, and empower our communities, as well as to seek liberation from exploitative and oppressive systems.
We center the voices of those who are directly impacted by the outcomes of the design process.
We prioritize design’s impact on the community over the intentions of the designer.
We view change as emergent from an accountable, accessible, and collaborative process, rather than as a point at the end of a process.
We see the role of the designer as a facilitator rather than an expert.
We believe that everyone is an expert based on their own lived experience, and that we all have unique and brilliant contributions to bring to a design process.
We share design knowledge and tools with our communities.
We work towards sustainable, community-led and -controlled outcomes.
We work towards non-exploitative solutions that reconnect us to the earth and to each other.
Before seeking new design solutions, we look for what is already working at the community level. We honor and uplift traditional, indigenous, and local knowledge and practices.
As a first step towards applying the Design Justice principles to EFI-created products, services, and systems, our team evaluated to what degree the ten principles were evident in the design of the NEON Ecological Forecasting Challenge. We identified several ways in which the design of the Challenge was well-aligned with Design Justice Principles (e.g., Principle 4: We view change as emergent from an accountable, accessible, and collaborative process, rather than as a point at the end of a process.), as well as areas in which we thought we could improve (e.g., Principle 3: We prioritize design’s impact on the community over the intentions of the designer).
Moving forward, we are soliciting broader participation from all EFI community members in small focus groups to continue our internal evaluation of the inclusivity of current EFI products, with the ultimate goal of furthering the inclusivity of ecological forecasting by developing recommendations towards a more complete alignment of EFI-designed products with design justice principles. If you are interested in participating in such a focus group, please provide your contact information in the Google Form linked here.
A proactive step toward decision-ready forecasts: Fusing iterative, near-term ecological forecasting and adaptive management
Participants: Jaime Ashander, LM Bradley, Mark Buckner, Nathan Byer, Cayelan Carey, Michael Gerst
This group aimed to improve the conceptual tools for co-production of ecological forecasts that aid in decision making. We identified that there is a need for tighter conceptual integration of the iterative, near-term ecological forecasting cycle (as practiced by the EFI community) with the adaptive management cycle (as practiced by communities of natural resource managers) and the broader context for management decisions. While prior frameworks have treated the iterative, near-term forecasting and adaptive management cycles as independent, with limited points of contact, a careful fusion of these processes may increase conceptual utility for co-production. As a first step towards a more useful framework, we then located iterative, near-term forecasting activities within the management decision making process, using the PrOACT (Problem, Objectives, Alternative Actions, Consequences, and Tradeoffs) tool from structured decision making. After creating this draft version of a framework, we explored several targeted case studies in ecological forecasting and adaptive management to evaluate its efficacy as a tool for fusing forecasting and adaptive management efforts. We will continue meeting to develop these ideas and work towards a manuscript.
Developing models of pathogen transmission is critical in disease ecology for understanding the spread of diseases and how to prevent them. Models are relied on to simulate the spread of disease and predict the effectiveness of different control strategies. Forecasting is also critical, both for planning and enacting public health interventions and for building our understanding of the sometimes complex drivers of disease dynamics across space and time. Vector-transmitted diseases (e.g., mosquito- or tick-borne) represent an exceptionally difficult case, since key processes affecting spread and transmission are not directly reflected in typical public health monitoring: for example, the presence and behavior of the vector species themselves, and often infection reservoirs in wildlife populations as well. Connecting ecological forecasts to human epidemiological forecasts is thus an important challenge to tackle.
The NEON Ecological Forecasting Challenge has already included a tick forecasting theme, although it has not yet been tied directly to tick pathogen status monitoring or human health. In this project, we worked on West Nile Virus (WNV), a mosquito-borne disease in the flavivirus family. The primary hosts are birds (across a wide range of species), while humans are dead-end hosts. WNV is commonly spread, especially in the summer, through mosquito bites. Our goal at the EFI Unconference was to examine the opportunities that NEON data could provide to create impactful public health forecasts for vector-borne diseases, focusing on WNV. Especially since humans are dead-end hosts, understanding and forecasting the disease dynamics demands both ecological and human data. We intend to forecast mosquito abundance as well as the infection rate in humans over time, incorporating mosquito abundance, seasonality, environmental drivers, and co-occurring bird abundances.
We built a preliminary bridge connecting National Ecological Observatory Network (NEON) and Centers for Disease Control and Prevention (CDC) data and, through preliminary visualization, demonstrated the potential to match the NEON mosquito data (abundance and pathogen status) with CDC-reported human cases on a year-by-year and county-by-county level for 14 NEON sites located in counties with reported cases of WNV. A first look at the collected data set showed a relationship between NEON bird and mosquito abundance, suggesting that large bird presence is one driver of large mosquito abundance in a summer sampling season. Given the relative rarity of WNV compared to some other vector-borne illnesses such as Lyme disease, there are definite challenges to setting up a forecasting challenge for the full disease dynamics, although we may be able to cast a wider net for human cases in counties adjacent to NEON site counties. We’re excited to build this preliminary effort into a new neon4cast theme, and we’re also eager to dive into the lessons learned from another Unconference project that examined pitfalls in recruiting broad engagement in the beetle forecasting challenge.
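The county-by-county, year-by-year matching can be sketched as a simple keyed join on county FIPS code and year. The field names and values below are illustrative assumptions about the data layout, not the actual NEON or CDC schemas.

```python
def join_by_county_year(neon_rows, cdc_rows):
    """Match NEON mosquito records to CDC-reported human WNV cases on
    (county FIPS, year). Counties with no reported cases default to 0."""
    cdc_index = {(r["fips"], r["year"]): r["cases"] for r in cdc_rows}
    return [dict(r, human_cases=cdc_index.get((r["fips"], r["year"]), 0))
            for r in neon_rows]

# Hypothetical rows: one NEON site-year of mosquito sampling, one CDC record
neon = [{"site": "CPER", "fips": "08123", "year": 2019, "mosquito_count": 532}]
cdc = [{"fips": "08123", "year": 2019, "cases": 3}]
joined = join_by_county_year(neon, cdc)
# joined[0] carries both mosquito_count and human_cases for modeling
```

Extending the CDC index to adjacent-county FIPS codes would implement the "wider net" for rare WNV cases mentioned above.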
Translational ecology aims to increase the usability of knowledge and products beyond scientific communities by integrating socio-environmental knowledge and stakeholders or end users as partners in the process and projects. For the past few months, the Translation and Actionable Science Working Group has been working to gauge the translational needs of the EFI community and to identify where modelers and physical scientists see gaps in connecting to stakeholders and end users to conduct translational research. The goal of this post is to share what the working group has identified as priorities for translational needs and share where there are connections to what other working groups and organizations are doing.
Michael Gerst (University of Maryland) led interviews in October-November 2022 with nine individuals across EFI that represent a range of early to late career stages, institutional type (academic, NGO, U.S. government agency), and gender to learn about interviewees’ experiences with forecasting projects that required stakeholder interaction, what went well, what didn’t, and what would have been useful to improve stakeholder engagement. The EFI Translation Working Group is using the results from these interviews to prioritize and develop activities that can help to fill those gaps.
The following seven topics were identified from the interviews and Working Group discussions. They could be developed into self-contained seminars, manuals, or guidance documents (as short-term activities) or into larger workshops or proposal-writing opportunities (as longer-term activities). Cases where topics overlap with other EFI working group discussions and activities are also highlighted.
How can EFI serve as an advocate for ecological forecasting to stakeholder groups, especially the public sector? In some cases, the individuals interviewed found that the stakeholders they work with were not as interested in the forecasts provided as in the real-time data delivered during the data collection and forecasting process. These real-time data allowed the stakeholders to use their expert knowledge to make informed decisions that may or may not have been related to the forecasts the teams provided. Within academia, there has been quite a bit of work to raise the profile of ecological forecasting, but there is now an opportunity to bring this awareness to the private and government sectors, both to promote the benefits of ecological forecasts outside academia and to listen to the needs of stakeholders.
What’s the appropriate level of stakeholder engagement? Is co-production always the right answer? Co-production is increasingly seen as a method for improving the fit between science and stakeholder needs. However, it can be time-consuming and expensive, necessitating assessment of whether its potential benefits are a good match for a project. While understanding of the contexts in which co-production can be fruitful has improved, there is still a gap in distilling guidelines for scientists on when it is appropriate, and if not, what other options exist.
How to help colleagues find collaborators across disciplines (i.e., matchmaking), and what makes a fruitful multi-disciplinary collaboration? This topic, as well as the previous point about the appropriate level of stakeholder engagement, connects with conversations the EFI Diversity, Equity, and Inclusion (DEI) working group is having, and a blog post it is developing, on the need to bring in collaborators at the beginning of a project to learn what is of most interest to them. The DEI group has focused on individuals underrepresented in terms of race and gender, but the same ideas apply to disciplines underrepresented in ecological forecasting, like social science. This topic also connects with previous discussions in the former Partners and Knowledge Transfer working group (which merged with the former Social Science group to become the current Translation working group). Those discussions revolved around how to maintain a database of individuals and groups to support matchmaking connections. The group has also discussed the need for forums or meetings where groups can pitch and socialize early ideas that can then be followed up with proposals to fund them. Clearly, this is something that resonates both within the working groups and across the broader EFI community.
How to initiate, foster, and maintain stakeholder relationships? In 2021 and early 2022 the Partners and Knowledge Transfer and Social Science working groups hosted a number of seminars on science communication and co-production. In particular, the May 4, 2021 seminar on co-production provides some initial resources that could be built out further for this topic. The discussions and the upcoming blog post the EFI DEI working group is developing that highlight ways to connect to underrepresented groups will also provide useful resources related to fostering relationships.
How to understand stakeholder decision-making processes? In the interviews, a few anecdotes were shared that ultimately can be summarized as: we thought we knew how stakeholders make decisions (with forecasts!) but experience eventually proved otherwise. In addition to learning the process of stakeholder engagement, interviewees thought there would be utility in helping modelers understand the universe of ways stakeholders might make decisions and where forecasts fit in (or don’t!).
How to set up extended project planning to ensure continued operationalization? It is important to have a plan in place for how ecological forecasts will be operationalized after the initial set of funding expires. Stakeholders are frustrated if they start to use a forecast and it is then no longer available when the funding runs out. NASA provides one example of how to avoid this issue. In NASA's Ecological Conservation Applications area, solicitations (e.g., A.40 Earth Science Applications: Ecological Conservation) often require proposal teams to include partners/end users who will also be responsible for maintaining the ecological forecasting products (e.g., web apps) beyond the NASA funding period.
How to make data, models, and systems that are documented and reusable (FAIR data and models)? This is a topic of interest across multiple working groups in the EFI community. The Forecasting Standards working group has recently submitted a manuscript for publication titled "A Community Convention for Ecological Forecasting: Output Files and Metadata." The preprint is available here: https://ecoevorxiv.org/9dgtq/. The manuscript focuses on suggestions for documenting ecological forecasts. The Cyberinfrastructure and Methods working group has also been thinking about the problem of groups continuing to develop one-off or boutique forecasting workflows. That working group is writing a workshop proposal to bring together people from government agencies, industry, NGOs, and academia to develop a way to share forecasts and workflows so people don't need to reinvent a forecast workflow. Instead, new forecasts can borrow strength from resources already developed for similar models or workflows, letting teams focus on the details and nuances of applying a forecast in their own study system or domain. This also resonates with what the EFI NEON Ecological Forecasting Challenge is doing with the cyberinfrastructure it has set up for accessing target and meteorological data and for accepting, scoring, and visualizing forecasts across multiple NEON data streams.
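To make the payoff of a shared convention concrete, here is a minimal sketch of a long-format forecast table. The column names below are illustrative assumptions for this example only, not the convention published in the Standards manuscript:

```python
import pandas as pd

# Hypothetical long-format forecast table: one row per ensemble member,
# per site, per time. Column names are illustrative, NOT the published
# EFI convention.
forecast = pd.DataFrame({
    "datetime":   pd.to_datetime(["2022-06-01"] * 4),
    "site_id":    ["BARC", "BARC", "SUGG", "SUGG"],
    "variable":   ["temperature"] * 4,
    "parameter":  [1, 2, 1, 2],            # ensemble member index
    "prediction": [24.1, 24.6, 25.0, 25.3],
})

# Because every submission shares one schema, cross-model synthesis
# reduces to ordinary grouped operations:
site_means = forecast.groupby("site_id")["prediction"].mean()
print(site_means)
```

Any forecast that follows the same schema can be concatenated and summarized identically, which is what makes higher-level synthesis and third-party validation cheap.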
Below is the list of poster and oral presentations for EFI’s hosted session at the American Geophysical Union (AGU) 2022 Conference in Chicago, as well as other ecological forecasting-related talks that may be of interest to the community. All times are listed in US Central Time.
Thursday EFI Social – For anyone available on Thursday evening, December 15, a group will be getting together at Kroll's South Loop from about 6:30-8:30pm. It's an 18-minute walk from the Convention Center. Find directions here.
Friday Poster and Oral Sessions – EFI's oral and poster sessions on "Ecological Forecasting in the Earth System" will be held on Friday, December 16, 2022. The in-person Poster Session is from 9am-12:30pm in Poster Hall A (South, Level 3). The Online Poster Session is from 1:45-2:45pm. The Oral Session is from 4:45-6:15pm in S501bcd (South, Level 5). We're excited to have a great set of speakers that span the full gradient from terrestrial to freshwater to marine. Come check out the following talks!
The EFI Partners Working Group hosted Pablo Suarez on November 19 for an intensely participatory Zoom session titled "Serious Fun: Reimagining Virtual Events on Ecological Forecasting". This was the first of a two-part series on science communication. The second session, "Sharing is Caring: Communicating Science Beyond Academic Publications", will be held on January 19. You can find details and register for the January session here.
The first session on November 19 was not recorded due to the intensely participatory nature of the call. However, below we share an overview of the call and resources shared by Pablo.
Pablo Suarez is innovation lead at the Red Cross Red Crescent Climate Centre, a visiting fellow at Boston University, and an artist in residence at the National University of Singapore. He directs the Climate Centre’s initiatives linking applied knowledge with humanitarian work, as well as new approaches to climate risk management. Examples of his work include participatory games for learning about climate change and dialogue, and forecast-based financing pilots in Togo and Uganda.
Pablo started with the premise that disasters are collaborating better than humans are: crises like the pandemic, climate change, and conflict are compounding harm exponentially, and we need better collaboration between those who know and those who do. If we want to work effectively with other scientists, with people impacted by what we study, or with policymakers, we need to let people know how much we care about what they care about. Often we convey information and build collaborations through meetings and presentations, typically a keynote or series of talks followed by a short Q&A session. Pablo advocated for better ways to communicate and develop collaborations at meetings and conferences: using games, humor, and spontaneity to broaden engagement and make connections.
As I worked with Pablo to coordinate the logistics for the call and compile the advertising material, I didn’t grasp what the “Serious Fun” part of his talk title meant. But after participating on the call, I am inspired to think about ways to incorporate his suggestions for reimagining virtual events which are low-carbon and low-budget compared to in-person events. We know Zoom fatigue is real, but by including spontaneity, humor, and games, virtual events can be more engaging and can lead to deepened networks and collective action. By having fun, we can be more effective at conveying serious ideas about science, ecological forecasting, and making decisions.
One group activity was called Rant. Early in the call, we had a short amount of time to write out all our frustrations with Zoom calls. It was fast-paced, and Pablo really encouraged us to get into the Rant by expressing our frustrations through things like capitalization, exclamation marks, etc. Later in the call, after other activities, we had the opportunity to go back to rants submitted by ourselves and by others to think about changes that could alleviate issues with Zoom calls. You can see a few example rants and changes below, and you can find all the rants here. I particularly appreciated the second example below, since that was my rant.
Decisions for Seasons
The other major activity Pablo facilitated was a version of Decisions for Seasons, a game designed to support learning and dialogue about planning for extremes, experiencing climate change impacts, and managing risk. In this game, everyone had 10 forecasts about the climate conditions for the next year. We could choose whether the conditions would be regular/normal (thumbs up) or whether we should plan for drought insurance (buckets) or flood insurance (umbrellas). This game was also fast-paced, with only about a minute to make these decisions. After everyone had made their predictions about the future conditions, Pablo rolled a die where a 1 meant drought and a 6 meant flood, and we all hoped we had picked enough drought or flood insurance to handle those rolls! The fun part was then seeing whose predictions were correct and who ended up getting broken hearts for incorrect predictions (see the second image below). Although we didn't have time during our call, an extension of this game incorporates climate change by using an 8-sided die, where a 6, 7, or 8 represents a flood, increasing the chance of an extreme event. You can find more details about how this game can be played here.
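The arithmetic behind the climate-change variant is easy to check: with a six-sided die a flood has probability 1/6 (about 17%), while counting 6, 7, or 8 on an eight-sided die raises it to 3/8 (about 38%). A quick simulation (a sketch for illustration, not part of the official game materials) confirms this:

```python
import random

def flood_probability(sides, flood_faces, trials=100_000, seed=1):
    """Estimate the chance of rolling a flood on a fair die with
    `sides` faces, where `flood_faces` are the flood outcomes."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials) if rng.randint(1, sides) in flood_faces)
    return hits / trials

# Base game: six-sided die, where a 6 means flood (and a 1 means drought).
p_d6 = flood_probability(6, {6})
# Climate-change variant: eight-sided die, where 6, 7, or 8 all mean flood.
p_d8 = flood_probability(8, {6, 7, 8})

print(f"d6 flood: {p_d6:.3f} (exact 1/6 = {1/6:.3f})")
print(f"d8 flood: {p_d8:.3f} (exact 3/8 = {3/8:.3f})")
```

The jump from roughly one flood in six seasons to more than one in three is what makes the extreme-event lesson land viscerally for players.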
Throughout the call we had fun: there were moments of heart-racing tension as we rushed to get our predictions and answers in, opportunities for competition and collaboration, and plenty of joy. But there was also a serious thread about what we are doing with forecasts, how we can improve them, how we can improve people's access, understanding, and opportunity to act, and how we can enable people to make decisions with forecasts.
While this seminar focused on how to make virtual events more engaging, Pablo’s suggestions and resources can also be applied to in-person conferences and meetings, classes, and lab meetings. Let’s have more fun, let’s be more engaged!
For those interested in exploring how to incorporate games into your presentations and in finding ways to let your audiences know you care about what they care about, the following are resources Pablo shared during the call.
There are almost 30 climate games applicable to small and large groups (n = 6-100), covering a range of topics including climate change adaptation and mitigation, collaboration, disaster risk reduction, health, and resilience. Each game includes learning objectives, intended audiences, required materials, and pdf and video tutorials. A number of the games specifically connect to forecasting and decision making, including the Decisions for Seasons game we played during the call. Two other examples include:
Before the Storm – a decision-making game designed to introduce weather forecasts and possible actions to take against natural disasters through different roles.
Paying for Predictions – a participatory activity to experience the impacts of climate change, to understand the value of forecasts and to enhance the understanding of climate smart disaster risk reduction.
Pablo used the GoodGames platform during our call for the Rant and Decisions for Seasons activities. Decisions for Seasons can be played in person with the instructions linked on the Climate Centre's website, but GoodGames has also made an online version here. You can learn more about GoodGames here: https://gg.goodfocus.net/about/. The games do cost money, but if this is something you or your organization are interested in, it would be worth checking with GoodGames. I believe there are discounts for non-profits.
Post by: The EFI Diversity, Equity, and Inclusion Working Group
The Ecological Forecasting Initiative, like many other organizations, calls for justice for George Floyd and countless other Black individuals and persons of color, and we stand in solidarity with our Black colleagues and friends saying #BlackLivesMatter. Our EFI Diversity, Equity, and Inclusion (DEI) Working Group is committed to listening, learning, and exploring ways to promote anti-racism and to make EFI, and STEM fields more broadly, a welcoming environment. With regard to ecological forecasting, as a first step we need the input and experience of people of all backgrounds at all stages of the ecological forecasting process, from forecast development and implementation to stakeholder decisions. Ecological forecasting as a field is relatively new; creating an inclusive, anti-racist field starts with understanding the lived experiences of all types of forecasters, end users, and community stakeholders. We have more in-depth initiatives and tasks associated with the current NSF-funded EFI-RCN grant and other proposals submitted for review, but for now, here are four ways we invite you to get involved: joining the group, using or adding to our Bibliography of resources, filling out a short 5-minute survey, or joining our book club.
ONE Of all the EFI Working Groups, our DEI Group has the smallest number of participants. We welcome anyone interested in learning more about ways to expand diversity and inclusion and in brainstorming ways to increase diversity within the ecological forecasting field. Our next call is June 30, and upcoming monthly meetings are posted on EFI's DEI webpage, as is our Strategic Plan, a living document that provides an overview of the steps the DEI Working Group is taking to promote diversity, accessibility, and inclusion in EFI. Email email@example.com to be added to the mailing list for this group.
TWO If you are not able to join the Working Group calls at this time, there are additional ways to get involved. We are compiling a Bibliography that provides resources for learning more about anti-racism and the diversity status of fields relevant to ecological forecasting. These resources include lists of minority supporting associations, links to diversity and inclusion plans from professional societies, blog posts, publications, and compiled lists of resources from other organizations. This is also a living document, to which we will add additional documents moving forward. If there are additional resources you have found useful, they can be submitted through this Google form.
THREE As part of Step 1 of our Strategic Plan, "Identify and clarify the problem", we are working to identify the current status of diversity within fields relevant to ecological forecasting, as a baseline for assessing the diversity of the Ecological Forecasting Initiative specifically. Once we have that assessment, our next goal is to provide suggestions to ecological forecasting labs about ways to recruit more diverse students into our undergraduate and graduate programs. To assess the current status of diversity within relevant fields, we are using the NSF-funded NCSES Interactive Data Tool IPEDS database of the racial and ethnic backgrounds of students who have graduated from US institutions in over 400 academic programs. We have narrowed the list to 29 academic degrees and are asking for your help to rank the relevance of these degrees to ecological forecasting in this short survey (https://nd.qualtrics.com/jfe/form/SV_3Pdyo1bh5OG8R93). Once we know which academic degrees are most relevant to ecological forecasting, we can assess the current diversity of those degrees relative to EFI. We will then work on Step 2 of our Strategic Plan, "Identify barriers that may prevent students from underrepresented groups from participating in ecological forecasting."
FOUR To encourage open, honest conversation and anti-racist thinking, EFI will host its first virtual book club. We will begin with The Years That Matter Most: How College Makes or Breaks Us by Paul Tough. Tough’s book explores privilege in higher education, from the application process to the classroom. As many forecasters are educators and participants in higher education, we believe this book will serve the interests of EFI’s mission while helping participants grow in anti-racist values. The book club is open to all participants, regardless of EFI membership, race, ethnicity, gender, religion, or any other personal identity – we ask only that you participate with an open mind and a willingness for vulnerability. For those who would like to participate but need help acquiring the book, we have a limited amount of financial assistance available. Email firstname.lastname@example.org for more info. Logistics: The book club will meet weekly, in the evenings, starting the week of July 13th, with about 40-70 pages of reading per meeting (although meeting frequency and page counts can be adjusted to meet the needs of the group). If you are interested in participating, email email@example.com so we can send you the doodle poll to find a day/time for the group to meet.
EFI Member Nick Record (Bigelow Laboratory for Ocean Sciences) led an effort with Jaime Ashander (Resources for the Future), Peter Adler (Utah State University), and Michael Dietze (Boston University) to write a guest post titled “Ecological forecasting ethics: lessons for COVID-19” for Dynamic Ecology|Multa novit vulpes.
We have had a busy 6 months with lots of progress and community building for the Ecological Forecasting Initiative. Here is a summary of what the group has been up to since the EFI meeting in DC in May.
The inaugural meeting of the Ecological Forecasting Initiative took place at AAAS Headquarters in Washington, DC on May 13-15, 2019. The meeting brought together >100 participants from a broad array of biological, social, and physical environmental sciences, spanning academia, government agencies, and non-profits internationally. Overall, it was a highly productive meeting that generated a lot of excitement about our growing community of practice. The meeting was organized around EFI's seven themes (Theory, Decision Science, Education, Inclusion, Methods, Cyberinfrastructure, Partners) with a mix of keynotes, lightning talks, and panel discussions on each area. The panel discussions were particularly valued by participants, as they generated dynamic community discussions and often highlighted the perspectives of early-career participants. The meeting also included time for breakout discussions, starting with a series of sessions (with participants randomly intermixed) addressing high-level questions about the opportunities for advancing science and decision making, and the challenges and bottlenecks facing our community. These breakouts then fed into a later set of sessions, organized by theme, where individuals self-organized by interest to synthesize what we learned and to start discussing next steps. Finally, there was a healthy amount of unstructured break time, as well as a conference dinner on Monday night and a poster session early Tuesday evening, that provided attendees with time for informal discussions and networking. A post-meeting survey showed that overall satisfaction with the meeting was very high (4.8 of 5), as was the likelihood of attending another EFI meeting (4.6 of 5).
The original conference plan was for the breakout groups organized around the EFI cross-cutting themes to be the kick-off of the theme working groups. In practice, this was delayed slightly by the NSF Science Technology Center preproposal deadline (June 25), which occupied much of the organizing committee's time in the ~6 weeks post-conference. However, working group telecons kicked off in July, and all eight working groups have continued to meet virtually on Zoom approximately monthly. Based on group discussions at the conference and our post-meeting survey, a number of key ideas emerged for the working groups to focus on. A top priority was the establishment of community standards for forecast archiving, metadata, and application readiness levels. Standards will allow our growing community to more easily perform higher-level synthesis, disseminate predictions, develop shared tools, and allow third-party validation. The bulk of the work on developing a draft set of forecast standards has been taken on by the Theory working group, which is focused on making sure forecast outputs and metadata will be able to support larger, synthetic analyses. Theory has also held joint meetings about Standards with Cyberinfrastructure, which has focused on the CI needs of archives (blog post in prep), repeatability/replication, and the standardization of model inputs. Application Readiness Levels (ARLs) have also been discussed by the Decision team, which wanted to evaluate whether existing NASA and NOAA ARLs reflect decision readiness.
Second, there was considerable enthusiasm for discussing and documenting best practices, both around the technical aspects of forecasting and around decision science and interacting with stakeholders. On the technical side, the Methods and Tools team is working on a document summarizing the tools being used by the community in seven key areas: visualization & decision support tools; uncertainty quantification; data ingest; data cleaning & harmonization; user interfaces; workflows & reproducibility; and modeling & statistics. The primary goal of this exercise is to produce a set of EFI webpages that inform forecast developers, especially newer members of the community, about the tools available. The secondary goal is to enable a gap analysis that will help the Methods and Tools team prioritize areas where needed tools are missing or not meeting the needs of the community. At the same time, the Decision team has been discussing the stakeholder side of best practices and has already produced two blogs about lessons learned by NOAA in translating from Research to Operations (R2O); a third blog is being drafted that describes areas in the ecological forecasting process where social science can provide valuable input. Similarly, the Partners team has been thinking about how to improve the 'matchmaking' process between stakeholders and forecasters and is working on a survey to reach out to potential EFI partners to let them know what EFI is and what we are doing, and to learn how organizations are currently using data, models, and forecasts and where there are potential synergies with EFI.
Third, the community is interested in the expansion of educational materials and open courseware. The Education and Diversity teams have mostly been meeting together and have discussed key forecasting vocabulary. They are working with EFI's Cayelan Carey, who has a new NSF Macrosystems grant to develop undergraduate forecasting modules, on a survey of forecast instructors that will provide information on (and a compilation of) syllabi, code, problem sets, and topics currently being taught, pre-requisites, and input on what new forecasting teaching material would be most useful. The Diversity team is also drafting a Strategic Plan to work on increasing diversity and inclusion in EFI and ecological forecasting more generally. Steps in this plan include: 1) identifying the current diversity status, 2) identifying the barriers, 3) identifying solutions and which solutions make sense to work on given the participants and networks currently in EFI, 4) identifying who else needs to be involved and making a plan to bring them in, and 5) forming collaborations and seeking funding to carry out the plan.
Fourth, there was interest at the EFI conference in supporting the development of an EFI student community. The EFI student group was launched in August and is working on developing a charter, forming a steering committee, and running a journal discussion group.
Working Groups are always open for new people to join. There are 3 more calls scheduled before the end of the year: Education on Dec 4, Social Science on Dec 16, and Partners on Dec 17 all at 1pm US eastern time. Polls will be sent out in mid-December to set recurring times for working group calls in Jan-May 2020. If you would like to join a working group and be included on any of the remaining calls or if you wish to participate in the polls to set times for next year’s calls, email firstname.lastname@example.org.
In addition to responding to the ideas discussed at the EFI2019 conference, the EFI working groups are also involved in the planning process for the EFI Research Coordination Network. This NSF RCN funding was awarded after the EFI2019 meeting and ensures that EFI will continue to meet and grow over the next five years. The EFI RCN is also launching an open forecasting challenge using NEON data, the protocols for which will be finalized at the first RCN meeting, May 12-14, 2020, in Boulder, CO at NEON headquarters.
Other key products of the EFI2019 meeting are the meeting slides and videos. The overall meeting was recorded and the individual keynote and lightning talks have been edited down and released on YouTube, the EFI webpage, and Twitter. In addition, EFI2019 participants suggested dropping EFI’s existing discussion board (which participants were encouraged to use as part of meeting prep) and replacing it with a Slack channel, which has seen substantially greater use. The EFI organizing committee is also close to finalizing an Organizing Principles and Procedures document which establishes the obligations and benefits of EFI membership and lays out the operations of the EFI Steering Committee and committee chair. The OPP is currently being reviewed by legal counsel and we anticipate holding our first elections shortly after the new year.
Finally, we are happy to pass on that the NSF Science Technology Center preproposal that was submitted shortly after the EFI2019 meeting has been selected to submit a full center proposal in January.