About Jody Peters

Ecological forecasting is going to transform our understanding of ecology. I am thrilled to have the opportunity to help coordinate efforts to improve and move the field forward.

Social Sciences in Ecological Forecasting

Date: December 13, 2019

Post by: Kira Sullivan-Wiley1 and Jaime Ashander2

Series contributors: Mike Gerst3, Kathy Gerst4,5, Kailin Kroetz6, Yusuke Kuwayama2, and Melissa Kenney7

1Boston University, 2Resources for the Future, 3University of Maryland, 4USA National Phenology Network, 5University of Arizona, 6Arizona State University, 7University of Minnesota

Ecological forecasting involves models and ecology, but it is also a fundamentally people-centered endeavor. In their 2018 paper, Mike Dietze and colleagues outlined the ecological forecasting cycle (Figure 1 is a simplified version of that cycle), in which forecasts are designed, implemented, disseminated, and then iteratively reassessed and redesigned. This cycle is a process, and in each part of this process there are people. Wherever there are people, there are social scientists asking questions about them and their actions, and about how insights into both can improve decisions.
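
To make the iterative part of the cycle concrete, here is a minimal, runnable sketch (in Python) of one toy version of that loop: a forecaster projects a population one step ahead, compares the forecast against a new observation, and updates a model parameter before forecasting again. The logistic model, the noise levels, and the update rule are all invented for illustration; they are not from Dietze et al.

```python
import numpy as np

# Toy sketch of the iterative forecasting cycle: forecast, disseminate,
# observe, assess, update, repeat. The "truth" model, noise, and update
# rule are invented for illustration.
rng = np.random.default_rng(42)

r_true, K = 0.3, 100.0   # true growth rate and carrying capacity
state = 10.0             # true population size
r_est = 0.1              # forecaster's (initially wrong) growth rate
last_obs = state

for cycle in range(15):
    # 1. Design/implement: project one step ahead with the current model.
    forecast = last_obs + r_est * last_obs * (1 - last_obs / K)
    # 2. Disseminate (here: the print statement below).
    # 3. A new observation arrives as reality unfolds.
    state = state + r_true * state * (1 - state / K) + rng.normal(0, 1.0)
    # 4. Assess: how far off was the forecast?
    error = state - forecast
    # 5. Update: nudge the estimated growth rate along its gradient.
    r_est += 0.001 * error * last_obs * (1 - last_obs / K)
    print(f"cycle {cycle:2d}: forecast={forecast:6.1f} "
          f"obs={state:6.1f} r_est={r_est:.3f}")
    last_obs = state
```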

So we ask the questions: How might ideas from the social sciences improve ecological forecasting? What new opportunities and questions does the emerging interdisciplinary field of ecological forecasting raise for the social sciences?

This post introduces a series of posts that address these questions, discussing opportunities for the social sciences in the Ecological Forecasting Initiative (EFI) and the gains to be had from considering humans in forecasting research. Our aim is to better describe the role of social scientists in the ecological forecasting cycle and the opportunities for them to contribute to EFI.

Figure 1. The process of producing a forecast, which may begin at any step depending on which stakeholder group initiates it.

So where are the people?

At EFI, we’re interested in reducing the uncertainty of forecasts, but also in improving the processes by which we make forecasts that are useful, useable, and used. This means improving forecasts and their design, use, and impact, and it also opens up a range of opportunities to advance basic social science.

To do this we need to know: Where are the people in this process? Where among the boxes and arrows of Figure 1 are people involved, and where do their beliefs, perceptions, decisions, and behavior make a difference? Are there people outside this figure who matter to the processes within it? Figure 2 highlights critical groups of people involved, and some of the actions they take, that are integral to the ecological forecasting process.

Figure 2. Critical actors in the ecological forecasting process, linked with important actions.

Figure 2 moves beyond the three-phase process described in Figure 1 because ecological forecasting is predicated on funding and data, often provided by people and institutions outside the forecasting cycle itself. So when we think about where social science belongs in ecological forecasting, we have to look beyond the forecasting process alone.

Making forecasts better

If EFI wants to make this process work and produce better forecasts, which of these actors matter, and how? How might different stakeholders even define a “better” forecast? One might think of “better” as synonymous with a lower degree of uncertainty, while another might measure quality by a forecast’s impact on societal welfare. Ease of use, ease of access, and spatial or temporal coverage are other metrics by which to measure quality, and the relative importance of each is likely to vary among stakeholders. Social science can help us answer questions like: Which attributes of a forecast matter most to which stakeholders, under what conditions, and why?

While a natural scientist might assess the quality of a forecast based on its level of uncertainty, a social scientist might assess the value of a forecast by asking (a toy numerical illustration follows the list):

  • What individuals are likely to use the forecast?
  • How might the actions of these individuals change based on the forecast?
  • How will this change in actions affect the well-being of these or other individuals?
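
To make the quality-versus-value distinction concrete, here is a small sketch contrasting a statistical skill metric (RMSE) with a decision-value metric (expected losses avoided by acting on the forecast). The beach-closure scenario, the threshold, and the loss values are entirely invented for illustration:

```python
import numpy as np

# Invented example: a manager closes a beach when a harmful-algal-bloom
# forecast exceeds a threshold. "Quality" = forecast error; "value" =
# losses avoided by acting on the forecast rather than never closing.
rng = np.random.default_rng(0)

truth = rng.uniform(0, 1, 1000)                          # true bloom probability
forecast = np.clip(truth + rng.normal(0, 0.15, 1000), 0, 1)

rmse = np.sqrt(np.mean((forecast - truth) ** 2))         # statistical quality

loss_if_open_during_bloom = 100.0   # health costs (invented units)
loss_if_closed = 20.0               # lost recreation value (invented units)
bloom = rng.uniform(0, 1, 1000) < truth                  # bloom occurs or not

close = forecast > 0.5                                   # forecast-informed action
loss_with_forecast = np.where(close, loss_if_closed,
                              np.where(bloom, loss_if_open_during_bloom, 0.0))
loss_without = np.where(bloom, loss_if_open_during_bloom, 0.0)  # never close

print(f"RMSE (quality metric):        {rmse:.3f}")
print(f"Mean loss without forecast:   {loss_without.mean():.1f}")
print(f"Mean loss acting on forecast: {loss_with_forecast.mean():.1f}")
```

The same forecast can score well on one metric and poorly on the other, which is why different stakeholders can reasonably disagree about whether a forecast is “better.”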

Building “better forecasts” will require a better understanding of the variety of ways that stakeholders engage with forecasts. The posts in this blog series will shine a spotlight on some of these stakeholder groups and the social sciences that can provide insights for making forecasts better. Posts in the series will discuss issues ranging from how stakeholders interact with forecast visualizations to the use of expert judgements in models to when forecasts should jointly model human behavior and ecological conditions. Considering these questions will help forecasters design forecasts that are more likely to increase our understanding of these socio-environmental systems and enhance societal well-being.
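
One of the topics above, jointly modeling human behavior and ecological conditions, can be illustrated with a toy coupled system in which harvest effort responds to a published stock forecast and then feeds back on the stock itself. This is an invented sketch, not a model from the series; its point is only that forecasting the ecology alone would miss the behavioral feedback:

```python
# Toy coupled socio-ecological system (invented for illustration):
# a fish stock grows logistically, harvest effort rises when the
# published forecast says the stock is high, and that effort feeds
# back on next year's stock.
stock, r, K = 50.0, 0.4, 100.0

for year in range(10):
    forecast = stock * (1 + r * (1 - stock / K))   # naive ecological forecast
    effort = 0.05 + 0.002 * forecast               # behavior responds to forecast
    harvest = effort * stock                       # behavior affects the ecology
    stock = max(stock + r * stock * (1 - stock / K) - harvest, 0.0)
    print(f"year {year}: forecast={forecast:5.1f} effort={effort:.3f} "
          f"harvest={harvest:5.1f} stock={stock:5.1f}")
```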

These examples hint at the range of potential interactions between the social sciences and ecological forecasting. There is a wealth of opportunity for social scientists to use the nascent field of ecological forecasting to ask new and interesting questions in their own fields. In turn, theories developed in the social sciences have much to contribute to the emerging interdisciplinary practice of ecological forecasting in socio-environmental systems. As we can see, better ecological forecasting may require us to think beyond ecological systems.

EFI at AGU 2019

Date: December 6, 2019

EFI’s oral and poster sessions on “Ecological Forecasting in the Earth System” have been scheduled for Wednesday, December 11, 2019: the oral session runs from 4-6pm in Moscone 3001, and the poster session runs Wednesday morning from 8am-12:20pm. We’re excited to have a great set of speakers who span the full gradient from terrestrial to freshwater to marine. Come check out the following talks!

Wednesday EFI Oral Session (4-6pm, Moscone 3001)

16:00 Nicole Lovenduski – High predictability of terrestrial carbon fluxes from an initialized decadal prediction system
16:15 Ben Bond-Lamberty – Linking field, model, and remote sensing methods to understand when tree mortality breaks the forest carbon cycle
16:30 Zoey Werbin – Forecasting the Soil Microbiome
16:45 Brian Enquist – Forecasting future global biodiversity: Predicting current and future global plant distributions, community structure, and ecosystem function
17:00 Heather Welch – Managing the ocean in real-time: Ecological forecasts for dynamic management
17:15 Clarissa Anderson – Bringing new life to harmful algal bloom prediction after crossing the valley of death
17:30 Ryan McClure – Successful real-time prediction of methane ebullition rates in a eutrophic reservoir using temperature via iterative near-term forecasts
17:45 Carl Boettiger – Theoretical Limits to Forecasting in Ecological Systems (And What to Do About It)

Wednesday EFI Poster Session (8am-12:20pm, Moscone South Poster Hall)

Christopher Trisos – B31J-2509 The Projected Timing of Abrupt Ecological Disruption from Climate Change
Gleyce K. D. Araujo Figueiredo – B31J-2510 Spatial and temporal relationship between aboveground biomass and the enhanced vegetation index for a mixed pasture in a Brazilian integrated crop livestock system
Rafael Vasconcelos Valadares – B31J-2511 Modeling Brazilian Integrated Crop-Livestock Systems
Zhao Qian – B31J-2512 An optimal projection of the changes in global leaf area index in the 21st century
Takeshi Ise – B31J-2513 Causal relationships in mesoscale teleconnections between land and sea: a study with satellite data
Hisashi Sato – B31J-2514 Reconstructing and predicting global potential natural vegetation with a deep neural network model
Masanori Onishi – B31J-2515 The combination of UAVs and deep neural networks has a potential as a new framework of vegetation monitoring
Yurika Oba – B31J-2516 VARENN: Graphical representation of spatiotemporal data and application to climate studies
Stephan Pietsch – B31J-2517 A Fast and Easy to use Method to Forecast the Risks of Loss of Ecosystem Stability: The Epsilon Section of Correlation Sums
Jake F Weltzin – B31J-2518 Developing capacity for applied ecological forecasting across the federal research and natural resource management community
Theresa M Crimmins – B31J-2519 What have we learned from two seasons of forecasting phenology? The USA National Phenology Network’s experience operationalizing Pheno Forecasts
Tim Sheehan – B31J-2520 Sharp Turn Ahead: Modeling the Risk of Sudden Forest Change in the Western Conterminous United States
Margaret Evans – B31J-2521 Continental-scale Projection of Future Douglas-fir Growth from Tree Rings: Testing the Limits of Space-for-Time Substitution
Ann Raiho – B31J-2522 Improving forecasting of biome shifts with data assimilation of paleoecological data
Quinn Thomas – B31J-2523 Near-term iterative forecasting of water quality in a reservoir reveals relative forecastability of physical, chemical, and biological dynamics
Alexey N Shiklomanov – B31J-2524 Structural and parameter uncertainty in centennial-scale simulations of community succession in Upper Midwest temperate forests
Peter Kalmus – B31J-2525 Identifying coral refugia from observationally weighted climate model ensembles
Jessica L O’Connell – B31J-2526 Spatiotemporal variation in site-wide Spartina alterniflora belowground biomass may provide an early warning of tidal marsh vulnerability to sea level rise
Rafael J. P. Schmitt – B31J-2527 Assessing existing and future dam impacts on the connectivity of freshwater fish ranges worldwide
Teng Keng Vang – B31J-2528 Site characteristics of beaver dams in southwest Ohio

Other Forecasting Presentations

Mon 13:40-15:40, Moscone South e-Lightning Theater: Alexandria Hounshell, ED13B-07 Macrosystems EDDIE: Using hands-on teaching modules to build computational literacy and water resources concepts in undergraduate curricula (Alex’s presentation will be at ~2pm)
Mon 13:40-18:00, Poster Hall: Hamze Dokoohaki B13F-2442 – A model–data fusion approach to estimating terrestrial carbon budgets across the contiguous U.S.
Mon 14:25, Moscone 3005: Michael Dietze B13A-04 – Near real-time forecasting of terrestrial carbon and water pools and fluxes
Mon 17:40, Moscone 3003: Michael Dietze B14B-11 – Near real-time forecasting in the biogeosciences: toward a more predictive and societally-relevant science
Tues 13:40-18:00, Poster Hall: Erin Conlisk B23F-2598 – Forecasting Wetland Habitat to Support Multi-Species Management Decisions in the Central Valley of California
Wed 08:00-12:20, Poster Hall: B31H Building Resilient Agricultural Systems Supported by Near-Term Climate and Yield Forecasts II [Poster Session]
Wed 13:55, Moscone 3005: Inez Fung B33A-02 – Towards verifying national CO2 emissions
Thurs 09:15, Moscone 3012: John Reager B41A-06 – Hydrological predictors of fire danger: using satellite observations for monthly to seasonal forecasting 
Fri 10:20-12:20, Moscone 3007: B52A Building Resilient Agricultural Systems Supported by Near-Term Climate and Yield Forecasts I [Oral Session]

EFI Social

If you are available to meet up after the forecasting session on Wednesday, we’ll have a group getting together at Tempest starting around 6:30 pm. It’s an 8-minute walk. Find directions here.

Seeking Judges for Outstanding Student Presentations

We would like to recruit judges for the student presentations in our forecasting sessions at AGU this year. We have one candidate for Outstanding Student Presentation in our poster session on Wednesday morning (B31J) and two candidates in our oral session Wednesday afternoon (B34C). If you plan to attend either of these sessions, please consider helping to mentor a young researcher with some constructive feedback.
You can sign up to judge at https://ospa.agu.org/2019/ospa/judges/ by selecting “Register to Judge” to register and agree to the honor code. Once there:

  1. Sign up for the student presentations you wish to evaluate by selecting “Find Presentations”. You can search for presentations by B31J or B34C in the lower of the two “quick search” boxes. Every judge must sign up for at least three presentations to ensure that all students are provided with feedback.
  2. When you arrive for Fall Meeting, confirm the time and location of the presentations you are evaluating. You can sync your judging schedule to your personal calendar to ensure you don’t accidentally miss any presentations: go to your OSPA schedule and click “Add to Calendar” on the task bar, and your judging schedule will be added to your Google Calendar, Outlook, or iCalendar.
  3. Evaluate all presentations you volunteered to judge. Students depend on your feedback to assess their presentation skills and to identify the areas in which they are performing well and the areas that need improvement.
  4. Either submit scores in real time on a tablet or mobile device, or take notes while you evaluate students and enter the scores later. Do not rely on your memory alone to submit complete scores at a later time. Students participate in OSPA to improve their presentation skills, so please provide them with thorough feedback. This year, comments are required in addition to the numerical scores. All reviews must be entered into the OSPA site no later than Friday, 20 December 2019, at 11:59 p.m. EDT.
  5. Finally, be constructive! OSPA presenters range in education level from undergraduate to Ph.D. students, and there are many presenters for whom English is not their first language. Keep these things in mind when providing feedback. Judges are asked to evaluate students at their current education and language proficiency levels.

EFI Status Update: Accomplishments over the Past 6 Months

Date: December 1, 2019

Post by Michael Dietze, Boston University

We have had a busy 6 months with lots of progress and community building for the Ecological Forecasting Initiative. Here is a summary of what the group has been up to since the EFI meeting in DC in May.

Participants at the May 2019 EFI Meeting in Washington, DC

The inaugural meeting of the Ecological Forecasting Initiative took place at AAAS Headquarters in Washington, DC on May 13-15, 2019. The meeting brought together >100 participants from a broad array of biological, social, and physical environmental sciences, spanning academia, government agencies, and non-profits internationally. Overall, it was a highly productive meeting that generated a lot of excitement about our growing community of practice. The meeting was organized around EFI’s seven themes (Theory, Decision Science, Education, Inclusion, Methods, Cyberinfrastructure, Partners) with a mix of keynotes, lightning talks, and panel discussions on each area. The panel discussions were particularly valued by participants, as they generated dynamic community discussions and often highlighted the perspectives of early-career participants. The meeting also included time for breakout discussions, starting with a series of sessions (with participants randomly intermixed) addressing high-level questions about the opportunities for advancing science and decision making and the challenges and bottlenecks facing our community. These breakouts then fed into a later set of sessions, organized by theme, where individuals self-organized by interest to synthesize what we learned and to start discussing next steps. Finally, there was a healthy amount of unstructured break time, as well as a conference dinner on Monday night and a poster session early Tuesday evening, that provided attendees with time for informal discussions and networking. A post-meeting survey showed overall satisfaction with the meeting was very high (4.8 of 5), as was the likelihood of attending another EFI meeting (4.6 of 5).

The original conference plan was for the breakout groups organized around the EFI cross-cutting themes to be the kick-off of the theme working groups. In practice, this was delayed slightly by the NSF Science and Technology Center preproposal deadline (June 25), which occupied much of the organizing committee’s time in the ~6 weeks post-conference. However, working group telecons kicked off in July, and all eight working groups have continued to meet virtually on Zoom approximately monthly. Based on group discussions at the conference and our post-meeting survey, a number of key ideas emerged for the working groups to focus on. A top priority was the establishment of community standards for forecast archiving, metadata, and application readiness levels. Standards will allow our growing community to more easily perform higher-level synthesis, disseminate predictions, develop shared tools, and allow third-party validation. The bulk of the work on developing a draft set of forecast standards has been taken on by the Theory working group, which is focused on making sure forecast outputs and metadata will be able to support larger, synthetic analyses. Theory has also held joint meetings about standards with Cyberinfrastructure, which has focused on the CI needs of archives (blog post in prep), repeatability/replication, and the standardization of model inputs. Application Readiness Levels (ARLs) have also been discussed by the Decision team, which wanted to evaluate whether existing NASA and NOAA ARLs reflect decision readiness.
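
Since the forecast standards were still in draft at the time of this post, the snippet below is only a hypothetical sketch of the kind of fields an archiving standard might ask forecasters to record. Every field name here is invented for illustration and does not reflect the actual draft:

```python
# Hypothetical forecast metadata record. All field names are invented
# illustrations of what an archiving standard might require; they do
# not reflect the actual draft EFI standard.
forecast_metadata = {
    "forecast_id": "example-lake-chla-2019-12-01",
    "variables": ["chlorophyll_a"],
    "issue_time": "2019-12-01T00:00:00Z",
    "horizon_days": 16,
    "timestep": "1 day",
    "uncertainty": {
        "representation": "ensemble",  # vs. a parametric distribution
        "n_ensemble_members": 200,
        "sources_propagated": ["initial_conditions", "parameters", "drivers"],
    },
    "model": {"name": "example_process_model", "version": "0.1.0"},
    "application_readiness_level": 3,  # the ARL concept discussed above
}
```

Fields like these would let third parties score archived forecasts against observations without contacting the original team, which is exactly the kind of higher-level synthesis the standards aim to enable.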

Second, there was considerable enthusiasm for discussing and documenting best practices, both around the technical aspects of forecasting and around decision science and interacting with stakeholders. On the technical side, the Methods and Tools team is working on a document summarizing the tools being used by the community in seven key areas: Visualization & Decision Support tools; Uncertainty quantification; Data ingest; Data cleaning & harmonization; User interfaces; Workflows & Reproducibility; and Modeling & Statistics. The primary goal of this exercise is to produce a set of EFI webpages that inform forecast developers, especially newer members of the community, about the tools available. The secondary goal is to enable a gap analysis that will help the Methods and Tools team prioritize areas where needed tools are missing or not meeting the needs of the community. At the same time, the Decision team has been discussing the stakeholder side of best practices: it has already produced two blogs about lessons learned by NOAA in translating from Research to Operations (R2O), and a third blog is being drafted that describes areas in the ecological forecasting process where social science can provide valuable input. Similarly, the Partners team has been thinking about how to improve the ‘matchmaking’ process between stakeholders and forecasters. It is working on a survey to reach out to potential EFI partners to let them know what EFI is and what we are doing, and to learn how organizations are currently using data, models, and forecasts and where there is potential for synergy with EFI.

Third, the community is interested in the expansion of educational materials and open courseware. The Education and Diversity teams have mostly been meeting together; they have discussed key forecasting vocabulary and are working with EFI’s Cayelan Carey, who has a new NSF Macrosystems grant to develop undergraduate forecasting modules, on a survey of forecast instructors. The survey will provide information on (and a compilation of) syllabi, code, problem sets, topics currently being taught, and pre-requisites, as well as input on what new forecasting teaching material would be most useful. The Diversity team is also drafting a Strategic Plan for increasing diversity and inclusion in EFI and in ecological forecasting more generally. Steps in this plan include: 1) identifying the current diversity status, 2) identifying the barriers, 3) identifying solutions and which solutions make sense to work on given the participants and networks currently in EFI, 4) identifying who else needs to be involved and making a plan to bring them in, and 5) forming collaborations and seeking funding to carry out the plan.

Fourth, there was interest at the EFI conference in supporting the development of an EFI student community. The EFI student group was launched in August and is working on developing a charter, forming a steering committee, and running a journal discussion group.

Working Groups are always open for new people to join. There are 3 more calls scheduled before the end of the year: Education on Dec 4, Social Science on Dec 16, and Partners on Dec 17, all at 1pm US Eastern time. Polls will be sent out in mid-December to set recurring times for working group calls in Jan-May 2020. If you would like to join a working group and be included on any of the remaining calls, or if you wish to participate in the polls to set times for next year’s calls, email eco4cast.initiative@gmail.com.

In addition to responding to the ideas discussed at the EFI2019 conference, the EFI working groups are also involved in the planning process for the EFI Research Coordination Network (RCN). This NSF RCN funding was awarded after the EFI2019 meeting and ensures that EFI will continue to meet and grow over the next five years. The EFI RCN is also launching an open forecasting challenge using NEON data, the protocols for which will be finalized at the first RCN meeting, May 12-14, 2020, at NEON headquarters in Boulder, CO.

Other key products of the EFI2019 meeting are the meeting slides and videos. The overall meeting was recorded, and the individual keynote and lightning talks have been edited down and released on YouTube, the EFI webpage, and Twitter. In addition, EFI2019 participants suggested dropping EFI’s existing discussion board (which participants were encouraged to use as part of meeting prep) and replacing it with a Slack channel, which has seen substantially greater use. The EFI organizing committee is also close to finalizing an Organizing Principles and Procedures (OPP) document, which establishes the obligations and benefits of EFI membership and lays out the operations of the EFI Steering Committee and committee chair. The OPP is currently being reviewed by legal counsel, and we anticipate holding our first elections shortly after the new year.

Finally, we are happy to pass on that the NSF Science and Technology Center preproposal submitted shortly after the EFI2019 meeting has been selected to advance: we have been invited to submit a full center proposal in January.

Making Ecological Forecasts Operational: The Process Used by NOAA’s Satellite & Information Service

Date: November 18, 2019

Post by Christopher Brown; Oceanographer – NOAA

My last blog briefly described the general process whereby new technologies and products are identified from the multitude available, culled, and eventually transitioned to operations to meet NOAA’s and its users’ needs, and it offered some lessons learned in transitioning ecological forecasting products to operations, applications, and commercialization (R2X). In this blog, I introduce and briefly describe the steps in the R2X process used by NOAA’s Satellite & Information Service (NESDIS). NESDIS develops, generates, and distributes environmental satellite data and products for all NOAA line offices, as well as for a wide range of Federal Government agencies, international users, state and local governments, and the general public. A considerable amount of planning and resources is required to develop and operationalize a product or service, and an orderly and well-defined review and approval process is required to manage the transition. The R2X process at NESDIS, managed by the Satellite Products and Services Review Board (SPSRB), is formal and is implemented to identify funds and effectively manage the life cycle of a satellite product and service from development to termination. It is a real-life example of how a science-based, operational agency transitions research to operations. A ‘broad brush’ description of the process is given here, which will hopefully provide insight into the major components of an R2X process that can be applied generally to the ecological forecasting (and other) communities. Details can be found in this SPSRB Process Paper.

The first step in the R2X process is acquiring a request for a new or improved product or service from an operational NOAA “user”. NESDIS considers requests from three sources: individual users, program or project managers, and scientific agencies. Individual users must be NOAA employees, so a relationship between a federal employee and other users, such as those from the public and private sectors, including academia and local, state, and tribal governments, must first be established. The request, submitted via a User Request Form similar to this one, must identify the need for and benefits of the new or improved product(s) and include requirements, specifications, and other information that adequately describe the product and service. As an example, satellite-derived sea-surface temperature (SST), an operational product generated from several NOAA sensors, such as the heritage Advanced Very High Resolution Radiometer (AVHRR) and the current Visible Infrared Imaging Radiometer Suite (VIIRS), was requested by representatives from several NOAA Offices.

If the SPSRB deems the request and its requirements valid and complete, the following six key steps are sequentially taken:

  1. Perform Technical Assessment
  2. Conduct Analysis of Alternatives
  3. Develop Project Plan
  4. Execute Product Lifecycle
  5. Approve for Operations, and
  6. Retire or Divest

These steps are depicted in Figure 1.

Figure 1. Key SPSRB process steps.  Credit: Process Paper, Satellite Products and Services Review Board, 2018, SPSRB Improvement Working Group, Ver. 17, Department of Commerce. NOAA/NESDIS, 23 July 2018, 29pp.

1. Perform Technical Assessment and Requirements Validation

A technical assessment is performed to determine whether the request is technically feasible and aligns with NOAA’s mission, and it provides management the opportunity to decide the best way to process the user request. For instance, suppose a user requests estimates of satellite-derived SST with a horizontal resolution of 1 meter every hour throughout the day for waters off the U.S. East Coast to monitor the position of the Gulf Stream. Though the request does match a NOAA mandate, i.e. to provide information critical to transportation, those specifications are currently not feasible from space-borne sensors, and the request would be rejected. On the other hand, a request for 1 km resolution twice a day for the same geographic coverage would be accepted, and the next step in the R2X process, the Analysis of Alternatives, would be initiated.
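
In spirit, this screening step amounts to checking requested specifications against what candidate sensors can actually deliver. The sketch below is purely illustrative: the capability numbers are approximate, the function is invented, and the real SPSRB technical assessment is an expert review, not a script.

```python
# Illustrative screen of a user request against approximate sensor
# capabilities. Invented for illustration; the actual SPSRB technical
# assessment is an expert review process, not a script.
SENSORS = {
    "AVHRR": {"resolution_km": 1.1, "revisits_per_day": 2},
    "VIIRS": {"resolution_km": 0.75, "revisits_per_day": 2},
}

def feasible(resolution_km, revisits_per_day):
    """Return the sensors that meet or exceed the requested specs."""
    return [name for name, cap in SENSORS.items()
            if cap["resolution_km"] <= resolution_km
            and cap["revisits_per_day"] >= revisits_per_day]

print(feasible(0.001, 24))  # 1 m, hourly -> [] (request rejected)
print(feasible(1.0, 2))     # 1 km, twice daily -> ['VIIRS'] (accepted)
```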

2. Conduct Analysis of Alternatives

An analysis of alternatives is performed to identify viable technical solutions and to select the most cost-effective approach to developing and implementing the requested product or service that satisfies the operational need. An Integrated Product Team (IPT), consisting of applied researchers, operational personnel, and users, is formed to complete this step. In the case of SST, this may involve weighing data from one or more sensors against the required frequency of estimates.

3. Develop Project Plan

The Project Plan, which follows an existing template, describes specifically how the product will transition from research to operations to meet the user requirements. Project plans are updated annually. The plan includes several important “interface processes”:

  • Identifying resources to determine how the project will be funded.  Various components of the product or services life cycle, from beginning to end, are defined and priced, e.g. support product development, long-term maintenance and archive. Though the SPSRB has no funding authority, it typically recommends the appropriate internal NOAA source for funding, e.g. the Joint Polar Satellite System Program;
  • Inserting the requirements of the product and service into an observational requirements list database for monitoring and record keeping;
  • Adding the product and service into an observing systems architecture database to assess whether observations are available to validate products or services, as all operational products and services must be validated to ensure that required thresholds of error and uncertainty are met; and,
  • Establishing an archiving capability to robustly store (including data stewardship) and to enable data discovery and retrieval of the requested products and services.

4. Execute Product Lifecycle

Product development implements the approved technical solution in accordance with the defined product or service capability, requirements, cost, schedule, and performance parameters. Product development consists of three stages: development, pre-operational, and operational. In the development stage, the IPT uses the Project Plan as the basis for directing and tracking progress. In the pre-operational stage, the IPT begins routine processing to test and validate the product, including limited beta testing of the product by selected users. Importantly, user feedback is included in the process to help refine the product and ensure sufficient documentation and compatibility with requirements.

5. Approve for Operations

The NESDIS office responsible for operational generation of the product or service decides whether to transition it to operations. After approval by the office, the IPT prepares and presents a decision brief to the SPSRB, which assesses whether the project has met the user’s needs, whether the user is prepared to use the product, and whether the product can be supported operationally, e.g. whether the infrastructure and sufficient funding for monitoring, maintenance, and operational product or service generation and distribution exist. The project enters the operations stage once the SPSRB approves the product or service. If the user identifies a significant new requirement or a desired enhancement to an existing product, the user must submit a new user request.

6. Retire or Divest

If a product or service is no longer needed and can be terminated, or the responsibility for production can be divested or transferred to another organization, it enters the divestiture or retirement phase.

Each of NOAA’s five Line Offices, e.g. the Ocean Service, Weather Service, and Fisheries Service, has its own R2X process, which differs in one way or another from that of NESDIS. Even within NESDIS, a project with external funding may not engage the SPSRB. Furthermore, the process may be updated when conditions justify it, such as when additional criteria are introduced by the administration. The process will, however, generally follow the major steps involved in any R2X process: user request, project plan, product/service development, implementation, product testing and evaluation, operationalization, and finally termination.

Acknowledgment: I thank John Sapper, David Donahue, and Michael Dietze for offering valuable suggestions that substantially improved an earlier version of this blog.

Predicting Nature to improve environmental management: How close are we and how do we get there?

Original Date: October 17, 2019; Updated: October 22, 2019

Melissa Kenney from the University of Minnesota presented in the John Wesley Powell Center for Analysis and Synthesis 2019 Seminar Series.

A recording of Melissa’s presentation can be found here: https://my.usgs.gov/confluence/display/PowellCenterAdmin/Powell+Center+Seminar+Series+2019

2019 Seminar Series

All seminars are presented online at: zoom.us/j/663855534

Predicting Nature to improve environmental management: How close are we and how do we get there?

When: Monday, October 21st, 11am MT/1pm ET

Presented by: Melissa Kenney – University of Minnesota

Dr. Melissa A. Kenney is an environmental decision scientist with expertise in multidisciplinary, team-based science approaches to solving sustainability challenges. Her research program broadly addresses how to integrate both scientific knowledge and societal values into policy decision-making under uncertainty. Dr. Kenney is also the Associate Director of Knowledge Initiatives at the University of Minnesota’s Institute on the Environment (IonE), where she directs efforts to build synergy across IonE’s broad scientific research portfolio. She earned a Ph.D. from Duke University, focused on integrating water quality and decision models.

Powell Center Working Group: Operationalizing Ecological Forecasts