Advancing Ecological Forecasting, Skill-Building, and Community Connections at the 2025 Ecological Forecasting Initiative Conference

June 2, 2025

The Ecological Forecasting Initiative (EFI) convened its annual conference at Virginia Tech in Blacksburg, Virginia from May 19-22, 2025. Hosted by the NSF-funded EFI Research Coordination Network and the Virginia Tech Center for Ecosystem Forecasting, the event brought together over 100 scientists, practitioners, and decision-makers from academia, government, industry, and non-profit sectors to advance the field of ecological forecasting. Many attendees were at an EFI conference for the first time.


🌱 Advancing Ecological Forecasting 

The conference featured a dynamic program that included:

  • Keynote addresses by Mark Urban (University of Connecticut), Antoinette Abeyta (University of New Mexico Gallup), and Kate Thibault (National Ecological Observatory Network; NEON), who provided insights into the future of ecological forecasting, the need for and ways to build accessible pathways into data science and forecasting, and the reciprocal influence of NEON and EFI in empowering ecological forecasting and improving the usability of NEON resources.
  • Oral sessions and poster presentations covering a wide range of topics, including decision-making processes and forecasts for terrestrial, freshwater, and marine ecosystems and for biodiversity conservation. Statistical, artificial intelligence, and computational methodologies were highlighted, and the forecasts presented ranged from early in development to operational.
  • Workshops and Working Groups that facilitated knowledge exchange, skill-building, and project development among participants.
  • A field trip and cultural event in the mountains of Virginia. Participants visited Mountain Lake Biological Station (MLBS), gaining firsthand experience of the MLBS NEON site. Multiple hikes near the iconic Mountain Lake Lodge, where Dirty Dancing was filmed, provided time for networking. The field trip wrapped up on the dance floor, where the group learned Appalachian square dancing, two-step, and waltz, complete with a caller and live band.

🛠️ Building Capacity for the Future

A highlight of the conference was the emphasis on capacity-building through:

  • The EFI Futures Outstanding Student Presentation Award, which recognizes exceptional student contributions and fosters the next generation of ecological forecasters. Congratulations to Charlotte Malmborg (Boston University), whose talk on “Towards Forecasting Recovery After Disturbance: A Case Study and Potential Directions for Forest Management” won best oral presentation, and Parul Vijay Patil (Virginia Tech), who won best poster for work on “Gaussian Process Forecasting of Tick Population Dynamics.” Find more information about their presentations in the award announcement below.
  • Working group activities that allowed subsets of participants to dive into four topics and brainstorm opportunities for further EFI activities. Here are short summaries from each working group.

1. Predictability of Nature developed the foundations for a conceptual synthesis manuscript about how forecasts build on and contribute to an interdisciplinary conceptualization of ecological predictability. The group brainstormed hypotheses about predictability, then identified opportunities, challenges, and data gaps to assess those hypotheses across multiple scales.

2. Topical Discussion of NEON shared information about the tiers of code resources openly available on NEON’s Code Hub (https://www.neonscience.org/resources/code-hub) and the NEON data tutorials available for classroom and self-paced online learning (https://www.neonscience.org/resources/learning-hub/tutorials). NEON also provides research support services to discuss what resources are available and what is feasible for a proposed research project, and to help put together a budget for using NEON resources. Find more details and how to connect at: https://www.neonscience.org/resources/research-support.

3. EFI University for Everyone collaboratively redefined and reimagined what an inclusive educational community in ecological forecasting might look like, including the different components of a data science and ecology curriculum and community-based efforts to create resources that can be shared with the EFI community. The group identified three activities participants were most interested in developing: 1) agreeing on standards for open educational resources that take into account limited internet and software access, and inventorying existing resources; 2) developing a mentorship model that could be incorporated into future EFI activities; and 3) assessing current resources that could be used or modified for use by EFI members in their existing connections with community activities (e.g., summer camps or programs) targeting the K-12 level.

4. Research to Operations, Applications, & Commercialization (R2X) fostered lively and thoughtful discussion among participants to address challenges and opportunities in transitioning ecological forecasts from research to products and projects beneficial to society as a whole.  Participants included representatives from numerous groups who provided a variety of perspectives.  Government agency personnel who could not travel to attend the meeting in person joined remotely to share examples of operational forecasts and barriers to operationalizing forecasts.  The perspectives will form the foundation for a future workshop focused on advancing R2X pathways for the EFI community. The word cloud below is an example of what “operational” meant to the participants of the Thursday session.

  • Training workshops enhanced participants’ skills in model development, data analysis, and stakeholder engagement.  Here are links to resources from the eight workshops.

1. Create and automate real-time water quality forecasts 

Leads: Mary Lofton, Freya Olsson, Austin Delany, Adrienne Breef-Pilz, Rohit Shukla, Quinn Thomas, Cayelan Carey
Virginia Tech

Participants in this workshop created, submitted, and automated real-time forecasts for up to 40 freshwater physical, chemical, and biological variables in the Virginia Ecoforecast Reservoir Analysis (VERA) forecasting challenge.

  • VERA forecast challenge website: https://www.ltreb-reservoirs.org/vera4cast/
  • VERA forecast tutorial in R: https://github.com/LTREB-reservoirs/vera4cast-example
  • VERA forecast tutorial in Python: https://github.com/LTREB-reservoirs/PY-VERA_EXAMPLE
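
For reference, here is a minimal sketch (in R) of what building a VERA forecast file might look like. The column names follow the EFI/VERA standard described in the tutorials above and may need adjusting; the model name, variable, and values are hypothetical stand-ins, not real model output.

library(dplyr)

# Build a 7-day-ahead, 31-member ensemble forecast in the standard format
forecast <- expand.grid(
  datetime  = seq(Sys.Date() + 1, by = "1 day", length.out = 7),
  site_id   = "fcre",                  # Falling Creek Reservoir
  parameter = 1:31                     # ensemble member index
) |>
  mutate(
    reference_datetime = Sys.Date(),   # when the forecast was issued
    family     = "ensemble",
    variable   = "Temp_C_mean",        # check the tutorial for exact variable names
    prediction = rnorm(n(), mean = 20, sd = 2),  # stand-in for real model output
    model_id   = "example_model"       # hypothetical model name
  )

write.csv(forecast, "Temp_C_mean-example_model.csv", row.names = FALSE)

Submission and automation of files like this are handled by the helper functions documented in the tutorials above.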

2. Spatial forecast of post fire recovery using MODIS LAI 

Leads: Steven Hammond1, David Durden2, Chris Jones3, John Smith1
1Montana State University, 2NEON, 3NC State University Center for Geospatial Analytics

Participants learned how to create a spatial forecast and how it differs from the non-spatial NEON forecast challenges, using a forecast of post-fire recovery from MODIS LAI as the example.

  • Find the tutorial material at: https://github.com/eco4cast/modis-lai-forecast/tree/main/tutorials
  • See the rendered html at: https://htmlpreview.github.io/?https://github.com/eco4cast/modis-lai-forecast/blob/main/tutorials/efi_2025_workshop.html
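
To give a flavor of what a pixel-wise spatial forecast involves, here is a minimal sketch in R with the terra package. The input file name and the simple per-pixel persistence regression are illustrative stand-ins for the tutorial's actual data and models.

library(terra)

lai <- rast("lai_stack.tif")           # hypothetical multi-layer raster: LAI through time
v   <- values(lai)                     # matrix of cells (rows) x time steps (columns)
n   <- ncol(v)

# Fit a simple AR(1)-style regression independently in each pixel and
# predict the next time step from the most recent observation
pred <- apply(v, 1, function(ts) {
  if (sum(is.finite(ts)) < 4) return(NA)      # skip mostly-missing pixels
  fit <- lm(ts[-1] ~ ts[-n])
  unname(coef(fit)[1] + coef(fit)[2] * ts[n])
})

next_lai <- rast(lai, nlyrs = 1)       # single-layer template on the same grid
values(next_lai) <- pred
writeRaster(next_lai, "lai_forecast.tif", overwrite = TRUE)

Unlike the site-based challenges, every grid cell gets its own forecast, which is why spatial forecasts lean on raster tooling and careful memory handling.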

3. Hands-on introduction to the Beetle Communities NEON Ecological Forecasting Challenge 

Leads: Eric Sokol, Vicky Louangaphay
NEON

This workshop provided code-along instructions for submitting forecasts of ground beetle abundance and richness across NEON’s terrestrial sites, offering a hands-on demonstration of how to participate in the NEON Forecasting Challenge.

  • https://www.neonscience.org/resources/learning-hub/tutorials/neon-beetle-forecasting
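
For those curious what the submission step looks like, here is a minimal hedged sketch using the neon4cast R package featured in EFI Challenge tutorials; the file and model names are hypothetical.

library(neon4cast)

forecast_file <- "beetles-2025-05-19-example_model.csv"  # forecast in the EFI standard format

forecast_output_validator(forecast_file)   # check the file against the standard
submit(forecast_file = forecast_file)      # upload to the Challenge submission bucket

The validator is worth running first, since a forecast that fails the format check will not be scored.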

4. Introduction to Gaussian Process Modeling of time dependent ecological data in R 

Leads: Leah Johnson, Robert Gramacy, Parul Patil
Virginia Tech

This workshop introduced Gaussian Process (GP) modeling for forecasting time-dependent ecological data and demonstrated applications to tick abundance in the NEON Forecasting Challenge.

  • Find workshop details, slides, and a tutorial at: https://lrjohnson0.github.io/QEDLab/training/EFI2025.html
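
To give a feel for the method, here is a minimal, self-contained GP regression sketch in base R using a squared-exponential kernel and simulated seasonal data; the workshop materials linked above cover the real tick application and proper hyperparameter fitting.

set.seed(1)
x  <- seq(0, 4 * pi, length.out = 40)            # observation times
y  <- sin(x) + rnorm(40, sd = 0.2)               # noisy seasonal signal
xx <- seq(0, 5 * pi, length.out = 200)           # prediction grid (extends beyond data)

# Squared-exponential covariance with lengthscale ell
sqexp <- function(a, b, ell = 1) exp(-outer(a, b, "-")^2 / (2 * ell^2))

K    <- sqexp(x, x) + diag(0.2^2, length(x))     # training covariance + noise
Kinv <- solve(K)
Ks   <- sqexp(xx, x)

mu <- Ks %*% Kinv %*% y                          # posterior mean
S  <- sqexp(xx, xx) - Ks %*% Kinv %*% t(Ks)      # posterior covariance
s  <- sqrt(pmax(diag(S), 0))

plot(x, y, ylim = range(c(y, mu - 2 * s, mu + 2 * s)))
lines(xx, mu)
lines(xx, mu + 2 * s, lty = 2)                   # ~95% credible band
lines(xx, mu - 2 * s, lty = 2)

The uncertainty band widening beyond the data is exactly the behavior that makes GPs attractive for forecasting sparse, irregularly sampled series.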

5. Water Quality Modeling: Building aquatic ecosystem models with the modular AED framework 

Lead: Matthew Hipsey
The University of Western Australia

This workshop provided hands-on training in the simulation of aquatic ecosystem processes using the open-source Aquatic Ecosystem Dynamics (AED) platform. 

  • Find workshop materials at: https://github.com/AquaticEcoDynamics/efi-workshop

6. Accessing and Using NEON Data 

Leads: Eric Sokol, David Durden, Vicky Louangaphay
NEON

This workshop included an overview of NEON data and how to use the Data Portal and the neonUtilities R package to access and work with selected datasets.
  • Slides and notes from the workshop
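
A minimal sketch of the kind of call covered, using the neonUtilities loadByProduct function; the data product, site, and dates are just examples to adapt.

library(neonUtilities)

# Download and stack one year of a NEON data product for one site
beetles <- loadByProduct(
  dpID       = "DP1.10022.001",   # ground beetles sampled from pitfall traps
  site       = "HARV",            # Harvard Forest
  startdate  = "2019-01",
  enddate    = "2019-12",
  check.size = FALSE              # skip the interactive download-size prompt
)

names(beetles)                    # the returned list of data tables and metadata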

7. Hands-on Introduction to Cloud-native, Event-driven Computing in R with FaaSr 

Leads: Renato Figueiredo, Ashish Ramrakhiani
Oregon State University

Participants in this workshop gained hands-on experience installing and running an example ecological forecasting workflow using FaaSr, an R package with cloud-native functions and workflows that execute on-demand. 

  • Find workshop materials at: https://github.com/Ashish-Ramrakhiani/FaaSr_workshop

8. Evaluation, scoring, and synthesis of ecological forecasts using the NEON Forecasting Challenge Catalogue 

Leads: Freya Olsson1, Caleb Robbins2, Quinn Thomas1
1Virginia Tech, 2Baylor University

This workshop introduced concepts and tools for forecast evaluation, scoring, and synthesis. Participants applied the tools for forecast evaluation and comparison to the vast, open catalogue of forecasts submitted to the EFI-NEON Forecasting Challenge.

  • Find workshop materials at: https://github.com/OlssonF/Forecast-evaluation-EFI25
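
As a taste of the scoring concepts, here is a minimal sketch using the continuous ranked probability score (CRPS), the primary metric in Challenge scoring, via the scoringRules package with simulated values.

library(scoringRules)

obs <- c(18.2, 19.1, 20.4)                        # three observed values
ens <- matrix(rnorm(3 * 200, mean = 19, sd = 1),  # 200 ensemble members each
              nrow = 3)

crps_sample(y = obs, dat = ens)                   # one score per observation; lower is better

CRPS rewards forecasts that are both accurate and honestly calibrated in their uncertainty, which is why it is preferred over point-forecast metrics when evaluating ensembles.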


🌍 A Global Community Committed to Ecological Forecasting

With over 100 attendees from 5 countries, almost half of them graduate students or postdocs and a third at mid- to late-career stages, the EFI 2025 Conference underscored the importance of collaboration across disciplines and sectors to address complex ecological challenges. By fostering an inclusive and innovative environment, the conference contributed to the advancement of ecological forecasting as a vital tool for understanding, managing, and conserving ecosystems.

See more information about the conference at: https://bit.ly/efi2025


EFI Futures Outstanding Student Presentation Award 2025 Results

June 2, 2025

The EFI2025 Conference provided the third opportunity for EFI to give out the EFI Futures Outstanding Student Presentation Award. This award is given to promote, recognize, and reward outstanding student presenters and provides valuable feedback to students on their research and presentation skills. Awards were given in both the poster and oral presentation categories. Each presentation was anonymously reviewed by two to three volunteer reviewers with no conflicts of interest with the presenters. In addition to being recognized for their outstanding work, award winners received an item of their choice from the EFI shop. We thank all the students who presented and the volunteers who reviewed the presentations!

Congratulations to this year’s Outstanding Presentation Award recipients!

Oral Presentation Award Winner:
Charlotte Malmborg (Boston University)
“Towards Forecasting Recovery After Disturbance: A Case Study and Potential Directions for Forest Management”

Poster Presentation Award Winner:
Parul Vijay Patil (Virginia Tech)
“Gaussian Process Forecasting of Tick Population Dynamics”

See Charlotte and Parul’s abstracts below.

Towards Forecasting Recovery After Disturbance: A Case Study and Potential Directions for Forest Management 

Charlotte Malmborg1, Michael Dietze1, Audrey Barker-Plotkin2 

1Boston University, Boston, Massachusetts, USA. 2Harvard Forest, Petersham, Massachusetts, USA

Disturbance events are ubiquitous in all ecosystems, playing key roles in nutrient cycling, maintaining biodiversity, and driving community composition over long timescales. As a result, disturbance events receive ample attention from ecologists across sectors, from research to management and policy. While many efforts focus on how disturbance events arise and predicting impacts disturbances will have, there is decidedly less attention on forecasting the recovery process following disturbance, including predicting how ecosystems reorganize, whether an ecosystem’s state will reset or change, and which recovery trajectory will be established. In forests, where trees are the dominant community members, the reorganization phase just after disturbance can influence composition and function for centuries and thus has an outsized impact on recovery outcomes. Being able to forecast which recovery trajectory a system will experience, and which factors determine recovery rates, is vital for understanding how ecosystems will respond to more frequent and more severe disturbance events occurring under more variable climate regimes. In this presentation I will discuss how disparate recovery trajectories arise following an invasive pest outbreak, focusing on the differences between sites experiencing tree mortality and sites where trees re-leaf following defoliation. I will show results from a proof-of-concept model that predicts recovery rates as a response to disturbance magnitudes, environmental conditions, and mortality, and introduce the concept of “assisted recovery”, a way that forecasting recovery trajectories can intersect with forest management. 

Gaussian Process Forecasting of Tick Population Dynamics 

Parul Vijay Patil, Leah R. Johnson, Robert B. Gramacy 

Virginia Tech, Blacksburg, USA 

The Ecological Forecasting Initiative Research Coordination Network (EFI-RCN) is currently hosting a NEON Forecasting Challenge in which the aim is to forecast ecological patterns across five themes. Here we focus on forecasting within the Tick Population theme. The goal of this theme is to be able to forecast the abundance of Amblyomma americanum, more commonly known as the lone-star tick. This tick is native to eastern parts of the United States and is a vector of several diseases. Since incidence of tick-borne diseases is assumed to be correlated with tick populations, it is important to predict tick abundance to develop strategies to control and prevent the spread of these diseases. We sought to build a forecasting model that is able to predict abundance and that can handle the often sparse and irregularly sampled observations. Although tick populations are known to vary with temperature and relative humidity, it is challenging to predict weather accurately, which would be necessary to build weather-driven abundance models. Instead, we use a flexible, nonparametric Gaussian Process (GP) model which attempts to learn seasonal patterns of tick abundance across different sites over the past decade. We are able to use the fitted GP to make forecasts into the future with uncertainty quantification. We benchmark our GP forecasts out-of-sample against simple linear time series models that include temperature and other seasonal covariates and observe that the GP outperforms the linear models, overcoming issues such as sparse training data which are unequally spaced in time.

New Funding for Ecological Forecasting Initiative Activities

March 31, 2025

Thanks to generous funding from the Alfred P. Sloan Foundation, the Ecological Forecasting Initiative (EFI) will be able to continue work to ensure equitable pathways to earth and environmental data science graduate programs through collaborations with Tribal Colleges & Universities, Minority Serving Institutions, research universities, and professional organizations. This new funding will help EFI expand collaborations started with previous seed funding.  

This initiative will develop and pilot three new environmental data science modules to enable a culturally relevant introduction to data, computing, and ecological forecasting; translate seven developed modules from the classroom to permanently archived online repositories for wider access; provide a new environmental data science microcredentialing opportunity for individuals who serve as tribal liaisons; and develop and deliver at least six in-person environmental data science workshops, among other goals.

Grant collaborators

Cal Poly Humboldt: Nievita Bueno Watts, Rachel Torres

Salish Kootenai College: Georgia Smies

University of Colorado, Denver: Timberley Roane

University of Minnesota: Melissa Kenney, Diana Dalbotten, Dan Keefe, Sean Dorr

University of New Mexico, Gallup: Antoinette Abeyta, Chad Smith

University of Notre Dame: Jason McLachlan, Jody Peters

Resources for Reviewing Code

February 27, 2025

Co-authors are Education and Theory Working Group Participants, Resource Developers, and Testers of the Review Materials:
Jody Peters1, Abby Lewis2, Alyssa Willson1, Cazimir Kowalski1, Cole Brookson3, Gerbrand Koren4, Hassan Moustahfid5, Hannah O’Grady1, Jason McLachlan1, John Zobitz6, Mary Lofton7, Ruby Krasnow8, Saeed Shafiei Sabet9

1University of Notre Dame, 2Smithsonian Environmental Research Center, 3Yale University, 4Utrecht University, 5NOAA, 6Augsburg University, 7Virginia Tech, 8University of Maine, 9University of Guilan

The goal of this blog post is to share resources that individuals in the EFI community have developed and have found useful when reviewing code.  

Specifically, this blog post covers why to review code, the resources the working groups developed, pain points to be aware of and how to manage them, and other helpful resources from SORTEE.

Why to review code or have your code reviewed

Just as text review, such as peer review or co-author review, improves published manuscripts, code review can be critical for reliability, reusability, reproducibility, and knowledge sharing. Code review can take many forms, including as an individual or team activity, in research or classroom settings. Ultimately, reviewing code provides an opportunity to:

  • learn from more experienced coders about how to code or code more efficiently, either as the code reviewer or as the one requesting code review
  • provide another set of eyes to reduce errors and the potential of reporting faulty results, which can slow down scientific progress and may lead to retraction of a publication
  • increase the reliability and reusability of the code to help with the repeatability of studies and the application of previously developed code in new contexts. This is increasingly recognized as an important characteristic of research software (Barker et al., 2022)
  • carefully check any code that has been drafted with AI tools (e.g., ChatGPT, Copilot, etc.). AI tools may save time when first drafting code, but code created with an AI tool should not be blindly trusted to work. Ben Weinstein discusses this at 13:49 in the January 2025 Statistical Methods Seminar Series presentation on the DeepForest package: https://youtu.be/fhlC0W2kDMQ?si=KZYObPIlt2512T1Y

Open code reviews coordinated through a third party like rOpenSci also provide opportunities to network and meet colleagues and collaborators from other scientific domains. For R package developers, submitting your code to rOpenSci for peer review has many additional benefits, including assistance with package maintenance and social media promotion. 

While there are many benefits to having your code reviewed, few resources and standards exist for code review in ecology, and the specific methods for code review will likely differ across career stages, manuscript development stages, and so on.

Background

Over the past year, the EFI Theory and Education working groups have discussed and developed resources for reviewing code that we wanted to share with others who are thinking about or are in the process of having their code reviewed or reviewing code for others. 

The working group discussions and subsequent resources were framed around the Ivimey-Cook et al 2023 paper “Implementing code review in the scientific workflow: Insights from ecology and evolutionary biology” (https://doi.org/10.1111/jeb.14230) and materials shared by the SORTEE community (Society for Open, Reliable, and Transparent Ecology and Evolutionary Biology; https://www.sortee.org/).  

The Ivimey-Cook et al 2023 paper provides commentary on  

  1. How to effectively review code
  2. How to set up projects to enable this form of review
  3. How to implement code review at several stages throughout the research process

In the paper, the authors highlight “The 4 Rs” that code review should evaluate:

  1. Is the code as Reported?
    1. Methods and code must match
  2. Does the code Run?
    1. Code must be executable
  3. Is the code Reliable?
    1. Code runs and completes as intended
  4. Are the results Reproducible?
    1. Results must be able to be reproduced

They describe a basic workflow of questions to answer when reviewing code, as summarized in the figure below. 

(Image source: Ivimey-Cook et al., 2023)


Resources developed by working group members to share with the EFI community

Based upon the work by Ivimey-Cook et al., the EFI Education and Theory working groups put together two documents: a project overview document and a code review checklist. These documents assume that the code review is being done by an internal reviewer (e.g., a lab mate) rather than an external reviewer (e.g., a reviewer for a journal).

The project overview template is filled out by the person who wrote the code. This document helps clarify the purpose of the analysis and where feedback would be useful. 

Conversely, the checklist template is filled out by the person who is reviewing the code. It identifies the key points to check during the review.

  1. Project Overview to Prep for Code Review Template – This Project Overview template helps authors describe their project for individuals who will be reviewing their code.
  2. Code Review Checklist Template (spreadsheet version, pdf version) – This Checklist template is based on the material in Ivimey-Cook et al. 2023. There are checklists related to project organization, project and input metadata, code readability, and output readability that both authors and reviewers can check and add notes about. This Checklist has been implemented by working group members based at the University of Notre Dame.

While the checklist is a good way for a reviewer to check off what has been reviewed, we recommend creating a separate document to note any issues the reviewer finds during the code review that need to be addressed by the code author. The code review document can be a Word or Google doc, or it can be an RMarkdown (.Rmd) file in the GitHub repo with the code. The benefit of the .Rmd file versus a pull request is that updates to the .Rmd file allow for versioning and transparency without requiring the code reviewer to make the actual code fixes, instead leaving those to the code author.
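
If you go the .Rmd route, a minimal skeleton for the review-notes file might look like the following; the section names are only suggestions.

---
title: "Code review notes: <project name>"
author: "<reviewer name>"
date: "`r Sys.Date()`"
output: html_document
---

## Scope of this review
<What the author asked to have checked, per the Project Overview document.>

## Issues for the author to address
1. <file / location: what the problem is and why it matters>

## Questions
1. <anything unclear that blocked parts of the review>

```{r demonstrate-issue, eval=FALSE}
# optional: code that reproduces an issue, without fixing it for the author
```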


Pain points to be aware of & suggestions for how to manage them

Pain point 1: Code that takes a long time to run or creates a large amount of output

How to Manage: Code authors can provide aggregated output or a small example data set that can be run locally

Often in ecological forecasting we develop code workflows that take hours, days, or weeks to run. To avoid placing this computational burden on a code reviewer, authors can either provide the aggregated analysis output or a small example data set for review. Choosing between these two options likely depends on the goals of the code review. If the review is happening at a more mature stage of the project and the primary goal is to reproduce manuscript figures, providing the reviewer with aggregated output may suffice. The disadvantage of this approach is that the reviewer will likely not be running all the steps in the analysis, and therefore may miss errors that occur “upstream” of the creation of the aggregated output. On the other hand, if feedback is needed on the scientific merit and correctness of the analysis from start to finish, it may be better to provide a small example data set to allow the reviewer to run the entire workflow. The disadvantage of this approach is that the results obtained with the example data set will not match the results reported in the final manuscript. 

If authors choose to provide an example subset of data for code review, tools such as RMarkdown, Quarto, or Jupyter Notebooks can be useful to walk reviewers through the analysis. These file types allow text interspersed with code and visualizations in an interactive format, which may help a reviewer navigate the steps of a complex coding workflow. The downside is that file paths and similar details will need to be updated to apply to the subset of data.
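
For example, creating the small review data set can be as simple as the following sketch in R; the paths and sample size are hypothetical.

library(dplyr)

full <- read.csv("data/large_input.csv")       # the full input data
set.seed(42)                                   # make the subset reproducible
small <- slice_sample(full, n = 1000)          # random sample of rows
write.csv(small, "review/example_input.csv", row.names = FALSE)

A fixed seed lets the reviewer regenerate exactly the same subset if needed.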

Each project will need to decide what specific approach works best for them.

Pain point 2: Large data sets used in analyses that are not yet publicly archived. Sharing the data can be logistically challenging, and unarchived data make checking paths, folder structure, or data intermediates difficult.

How to Manage: One approach taken by some blog co-authors is to use the staging environment in the Environmental Data Initiative data portal to make data available online so they can be sourced by a script before being assigned a DOI. The benefit of this approach is that the data are then ready to be archived and added to the manuscript once the checks on the code are finalized.
Another approach is to share the data with the code reviewer using an external hard drive or Google Drive with zipped folders. If this approach is taken, be sure to include notes on where file paths need to be changed during and after the review.

Pain point 3: Different versions of R (or another coding language), of packages and their dependencies, or of compiled languages (e.g., C++)

How to Manage: Use a Docker container.

If using a container, code authors should be sure to provide clear instructions to peer reviewers about how to set up and run a container on their machine, as well as how to delete/uninstall the container software afterward. Because using a container adds an extra step to the review process (particularly for those who have not previously used containers), it may be best to reserve this option for analyses with a high number of software and package dependencies, where setting up a container becomes easier than installing all of those dependencies separately.
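
For readers new to containers, here is a minimal Dockerfile sketch built on the rocker project's R images; the R version, package list, and script path are illustrative, not a prescribed setup.

FROM rocker/r-ver:4.3.2

# Install the R packages the analysis depends on
RUN R -e "install.packages(c('dplyr', 'ggplot2'))"

# Copy the project into the image and make it the working directory
COPY . /home/project
WORKDIR /home/project

# Run the analysis entry-point script (hypothetical path)
CMD ["Rscript", "analysis/main.R"]

A reviewer would then run roughly docker build -t project-review . followed by docker run --rm project-review, and can remove everything afterward with docker image rm project-review.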

Pain point 4: Reviewing code can take a substantial amount of time, anywhere from a couple of hours to a couple of days, depending on the scope of the review. Blog co-authors have found that completing a code review for a co-author often takes longer than completing a peer review of the manuscript. 

How to Manage: Be cognizant that reviewing code can take a substantial amount of time and plan accordingly to give the reviewer enough time before major milestones, such as submitting a manuscript. Alternatively, depending on the situation, it may be best to plan for continuous code review as part of the manuscript writing process.
No matter the approach, we highly recommend including code reviewers as co-authors on publications that use output from the reviewed code. This appropriately recognizes the effort and intellectual contribution involved in code review, and it is in line with increasing recognition of co-authorship for different roles in the development and testing process (Leem et al. 2023).

Pain point 5: Getting too much or too little input from a reviewer based on your publication needs.

How to Manage: Before you ask for a review, determine how in-depth you want the review to be. This may reflect what stage of the manuscript writing or analysis process you are in. If you are early in the process and want help making your code more efficient, code review feedback may work best as GitHub pull requests. If you feel your code is finalized and ready for one final review before publication, it may work best to have a more in-depth review to confirm that the output for the publication can be recreated with the code that will be shared. The “Project Overview” template (described above) is intended to help communicate these needs when asking for code review.

In practice, you may wish to seek code review at multiple stages in the analysis and writing process. For example, you might ask a co-author to review key components of the analysis for code correctness (e.g., the “science” is correct; units are converted properly; statistical analyses are applied appropriately; and so on) as you explore your preliminary results. Later, while developing and preparing to submit your manuscript, you may ask a co-author to review more superficial aspects of the code base (e.g., files are organized in a logical way; all filepaths are relative; and so on). We re-emphasize that even senior scientists and experienced coders make mistakes! It is always better to find them before publication than afterward.


Other resources from SORTEE

SORTEE is the Society for Open, Reliable, and Transparent Ecology and Evolutionary Biology. The SORTEE community led the Ivimey-Cook et al 2023 paper and, in addition to the paper, has shared other resources.

  1. SORTEE: https://www.sortee.org/
  2. SORTEE Slack channel – here is the link to join https://join.slack.com/t/sortee/shared_invite/zt-2fnqytett-AND1mTuXBKQWYyWUXKn6YA
  3. Library of code mistakes: https://docs.google.com/presentation/d/12QN3WUc5v1Df7OArEox2U7l_N_qnHHuwzjCYiI4idC8/edit#slide=id.p
    1. Issues that people have found when their code has been reviewed can be anonymously added to this file. It is structured with the same headings used in the 4 Rs paper on code review.

Other papers or resources the EFI working groups found helpful


Best wishes for your code review

We wish you all the best as you create and have your code reviewed or review code for others.  

This XKCD comic (https://xkcd.com/1833/) was shared in one of the EFI working group calls. Hopefully it brings you a smile and inspires you to avoid this in your own code, or at least you won’t encounter it in the code you are reviewing!

EFI at AGU 2024

November 26, 2024

Below is the list of poster and oral presentations for EFI’s hosted session at the American Geophysical Union (AGU) 2024 Conference in Washington, D.C., as well as other ecological forecasting-related talks, and talks by EFI community members that may be of interest. All times are listed in US Eastern Time.

EFI has name badges! EFI community members can find Mike Dietze at the Conference, during the EFI-hosted sessions, or at the Social to get a badge.

Tuesday EFI Social – Meet up with others in the EFI community on Tuesday evening, December 10 from 6:30-8:00 pm at The Delegate front bar in the Courtyard hotel just across the street from the convention center.

EFI’s Tuesday Poster and Oral Sessions – EFI’s oral and poster sessions on “Model-Data Integration and Novel Paradigms in Ecosystem Forecasting” will be held on Tuesday, December 10. The Poster Session is from 8:30 am-12:20 pm in Poster Hall B-C (Convention Center). The Oral Session is from 14:10-15:40 in 149A-B (Convention Center). We’re excited to have a great set of speakers that span cyberinfrastructure, decision making, and forecasts for coastal and terrestrial ecosystems and organisms. Come check out the following talks!

Tuesday Poster Session (8:30-12:20, Poster Hall B-C)

Tuesday Oral Session (14:10-15:40, 149 A-B (Convention Center))

Other Forecasting Presentations & Presentations by the EFI Community

If you are presenting an ecological forecasting-related talk or poster that you don’t see on the list, email EFI so we can get it added!

Monday

Tuesday

Wednesday

Thursday

Friday

Ecological forecasting is at a critical growth point – scientists call for new vision of data-driven environmental decision making

November 8, 2024

Summary: Ecological forecasts can be used to predict changes in ecosystems and subsequent impacts on communities. In addition to long-term projections, there is an important need for forecasts in shorter-term decision-making time periods of weeks and months. Scientists at the Ecological Forecasting Initiative are working to advance the field through a process that enables them to continually update model predictions with observed data in order to improve our ability to foresee what may happen in the future. These scientists are calling on greater investment in these efforts, asking the scientific community to collectively commit to building the capacity to improve the field while calling on world nations, major corporations, and NGOs to integrate ecological forecasting into their climate adaptation and mitigation strategies.

Climate and biodiversity crises threaten our ability to manage and conserve natural resources, putting many ecosystems at risk of collapse. Ecological forecasts are a tool used to predict changes in ecosystems and how communities may be impacted. These forecasts can then be used to make decisions to mitigate environmental impacts and build a future that is climate resilient. But while many ecological forecasts have focused on predictions for 2100 and beyond, climate change is happening now and there is an urgent need for forecasts in shorter-term decision-making time periods of weeks and months.

This urgency has led scientists organized through the Ecological Forecasting Initiative (EFI) to call for increased infrastructure for predicting nature and environmental events. This call to action was recently published in Nature Climate Change. EFI is also urging world nations and UN bodies, major international corporations, and NGOs to integrate ecological forecasting into their climate adaptation and mitigation strategies while encouraging the scientific community to commit to building the technological and institutional capacity to respond to this urgent need.

Ecological forecasting is at a critical point for future growth according to Michael Dietze, EFI’s chair and lead of the Ecological Forecasting Laboratory at Boston University. “We have a lot of know-how, and advances in sensing technologies, AI, and computing have opened new doors, but we now need to scale up the overall forecasting enterprise considerably to help society mitigate and adapt to widespread change in ecosystems in the face of climate change.”

Specifically, EFI scientists are talking about advancing the field of near-term iterative ecological forecasting, through which forecasts are regularly updated given new data. The forecasts can be compared to what actually happened, and the models can then be improved given what is learned to make more accurate predictions.

“Because you are constantly updating the forecasts as new information becomes available, near-term iterative ecological forecasting establishes a learning loop that is a win-win situation for ecology and decision making,” added Dietze. “The same forecasts that improve decision making on actionable timeframes also accelerate scientific understanding and help us probe new frontiers of discovery about how nature works.”

This iterative approach has been used by atmospheric scientists for decades; helped by large-scale investments in weather monitoring, modeling, and data assimilation, they have been able to continuously improve weather forecasts. In the early days of numerical weather prediction, forecasters had the choice between stepping away from forecasting until the mechanics of the atmosphere were better understood or stepping forward into an iterative forecast cycle of learning by doing. By choosing the latter, they achieved a critical win-win of relentless improvements.

Ecological forecasting is at a similar crossroads, though the challenge is more complicated because of the complexity of biodiversity and environmental systems. Scientists are seeing dramatic improvements in the field of ecological forecasting that were unimaginable even a decade ago, much of it fueled by advances in sensor technologies, satellites, computation methods, and machine learning. And, due to shifts towards large-scale networked science, data access has become more standardized and equitable. Thus, the time is right to accelerate our investment.

One of the ways that EFI has supported these advancements is through the ongoing National Ecological Observatory Network (NEON) Forecasting Challenge, which has the goal of predicting NEON data before it is collected. This challenge has involved over 200 teams and developed new educational resources and community cyberinfrastructure. “Ecological forecasting needs to build communities of practice to support its growth as a discipline and to leverage these technical advances,” said Quinn Thomas, who co-directs the Virginia Tech Center for Ecosystem Forecasting and leads the EFI NEON Forecasting Challenge.

One of the areas where there has been traction is exploring opportunities for cross-government agency collaborations on cyberinfrastructure. Cyberinfrastructure – such as models, community standards, consistent server time, and workflows – has community-scale benefits by reducing the costs, time, and learning curve involved in launching and maintaining forecasts. As an international grassroots consortium, EFI has taken a leadership role in piloting and hosting conversations about scaling cyberinfrastructure for the benefit of the larger community of practice.

“The field of ecological forecasting has included partnerships across many sectors –  academia, governmental agencies, industry, and NGOs,” stated Melissa Kenney, Director of Research at the University of Minnesota’s Institute on the Environment and a founding leader of EFI. “Such partnerships are critical to support the design and improvement of regularly updated forecasts that both improve the science and support decisions.”

“This is a new field of science that is at a transition point,” says Dietze. “Key investments in education, training, community building, sensing and computational infrastructure, tool and model development, and research to operations are critical to rapidly scale forecast innovations in ways that will help address climate and biodiversity crises.”