Forecast Spotlights – Elliott Hazen and Heather Welch

November 23, 2020

In this third installment of our ongoing series, “Forecast Spotlights”, we highlight the EcoCast nowcast and forecasts developed by Elliott Hazen, Heather Welch, and colleagues at the National Oceanic and Atmospheric Administration (NOAA). EcoCast is a fisheries sustainability tool that helps fishers and managers evaluate how to allocate fishing efforts to optimize sustainable harvest of target fish while minimizing bycatch of protected or threatened animals. 

The goal of the Forecast Spotlights blog series is to highlight operational forecasts being conducted by our EFI members, how they got into forecasting, and lessons learned. You can see all the ecological forecast project examples shared on the EFI Projects webpage. If you have an iterative ecological forecast project that you’d like added to this list, you can create a profile for the project using this form.

1. How did you get interested in ecological forecasting?

Elliott: A lot of my interest in ecological forecasting came from my graduate research at Duke University and learning about models forecasting seed dispersal. If a process as seemingly random as seed dispersal could be modeled successfully, it made me wonder what else could be modeled. I ended up using a lot of statistical models in my PhD to deal with the complexities of top predator datasets. In reading about these models, I realized that they could be used not just to understand ecological drivers of animal distribution but also for nowcasting and forecasting distributions moving forward. A paper by Drew Purves titled “Time to model all life on earth” also highlighted the fact that our computing capability has finally caught up to some of the questions that we have been trying to ask ecologically.

Heather: During my masters at James Cook University I worked with Bob Pressey, who was concerned that our global network of MPAs (marine protected areas) was designed to protect static representations of biodiversity, despite common knowledge that many species of management concern have dynamic distributions. We started thinking about how to design management strategies to explicitly accommodate this dynamism. This was a really interesting challenge. At the time there wasn’t much in the literature about how to manage highly mobile species, so there was a lot of room for creativity. It quickly became clear that, in order to manage species that move around, we need to know where species are in real time, or better yet, ahead of time, which brought me to ecological forecasting.

2. What are you trying to forecast?

We try to produce nowcasts and forecasts of the distributions of top predators and human activities to understand and mitigate their interactions, specifically interactions like fisheries bycatch and vessel collisions that put these species at risk. We, specifically Heather Welch in our group, have been predicting fishing behavior to try to identify where and why illegal fishing activities are most likely to occur. The models that have predictive skill can be used to direct management and enforcement in the future.

Photo credit: Elliott Hazen

3. Who are the potential users or stakeholders for the forecasts you create?

We target our predictions for use by fishermen and fishery managers in EcoCast, the shipping industry and protected species managers for WhaleWatch, and most recently NOAA’s Office of Law Enforcement and the US Coast Guard for our Illegal, Unreported, and Unregulated (IUU) fishing forecasting efforts. We also hope our predictions are interesting, if not useful, for the broader public.

Real-time predictions for top predators are integrated to produce a surface that indicates areas that are better to fish (blue), and poorer to fish (red) to improve fisheries sustainability. These maps are for the area off the west coast of the US.

4. What are the key lessons you have learned from your forecasts?

  1. We often learn as much from wrong forecasts as from right ones, because they tell us where physical processes may be driving our predictions differently than expected. These outliers can be really useful for understanding ecological processes and patterning.
  2. Engaging with stakeholders from the get-go, even before the model is built, is really important to ensure that you’re producing forecasts that are as useful as possible.
  3. Maintaining and testing ongoing forecasts is often as much work as building the models, if not more (Welch et al. 2018). Creating accessible tools collaboratively with stakeholders can ensure the output is as applicable as possible.
  4. Also, Elliott remains interested in comparing the issues and successes in terrestrial vs. marine ecological forecasting systems. While some of the questions being asked are comparable, the processes often change at different scales because of the oceanic medium compared to land and air, which keeps him wondering how the fundamental processes of forecasting may vary across these systems.

5. What was the biggest or most unexpected challenge you faced while operationalizing your forecast?

The biggest challenge in our forecasting process has largely been finding funding and ongoing support for operationalization. Specifically, funding the development of the tools has been manageable, but keeping tools working and ensuring predictions remain skillful has been incredibly difficult to fund. This need for ongoing maintenance, often termed the research-to-operations transition, is a fundamental gap we face in the forecasting process.

6. Is there anything else you want to share about your forecast?

We have also been moving more and more towards public code libraries to ensure that the lessons we have learned are available to help other forecasting projects get off the ground and remain operational. Reproducibility in science has changed as we’ve moved from bench and field experiments towards modeling efforts. This field, often termed “data carpentry”, is going to keep growing in the near future to ensure that our coding efforts are done in a publicly available and reproducible manner.

Photo credit: Elliott Hazen

Forecast Spotlights – Elvira de Eyto and team

September 22, 2020

This is the second installment in our ongoing series, “Forecast Spotlights”. The goal of this series is to highlight operational forecasts being conducted by our EFI members, how they got into forecasting, and lessons learned. You can see all the ecological forecast project examples shared on the EFI Projects webpage. If you have an iterative ecological forecast project that you’d like added to this list, you can create a profile for the project using this form.

This post highlights the forecasts developed by Elvira de Eyto and her collaborators, Andrew French, Eleanor Jennings, and Tadhg Moore.

1. How did you get interested in ecological forecasting?

It started with the opportunity to do some work on short-term (i.e. 5-7 day) forecasting of lake water quality through the PROGNOS project. We have had a high-frequency monitoring station on a study lake (Feeagh) for close to 20 years, and we always knew that, theoretically, the live data stream could be used to fine-tune or calibrate lake models at sub-daily time scales. The PROGNOS project funded a PhD studentship (now completed), held by Tadhg Moore, to explore this. Building from our experience with PROGNOS, we then moved into the seasonal forecasting domain and started to adapt workflows for seasonal time scales.

Lough Feeagh AWQMS

2. What are you trying to forecast?

Dr. Andrew French is working on the WATExR project with us, and is building a forecasting workflow coupling seasonal forecasts with fish phenology models. We have a multi-decadal time series of fish movements in and out of the Burrishoole catchment, which Andrew is using to build a set of predictive models driven by meteorological data. The Burrishoole traps capture all seaward-migrating Atlantic salmon, sea trout, and European eel. Understanding and being able to predict the seasonal drivers of these movements is really interesting from a biological and fishery management point of view. It is also a very good test of the current usefulness of seasonal forecasts for phenological models. Andrew and Tadhg are also using the seasonal forecast workflows to predict water levels and availability in the Mount Bold Reservoir, the largest reservoir in South Australia, which supplies Adelaide. More info about both these case studies can be found here: https://watexr.eu/case-studies/

European eel

3. Who are the potential users or stakeholders for the forecasts you create?

The water quality and quantity forecasts are targeted at water managers, although they aren’t fully operational – a lot of what we were doing in PROGNOS and subsequently in WATExR was developing workflows that showed “proof-of-concept” and highlighted some of the potential pitfalls and technical difficulties that need to be overcome. The fish phenology models are targeted at fisheries managers, who may have it within their power to mitigate impacts of e.g. predation, water abstraction, and fishing pressures at key migration times of diadromous fish (i.e. fish that move between freshwater and marine habitats). Knowing in advance when to expect these peak migration periods may help to minimise these impacts, and ensure maximum survival at key life stages.

4. What are the key lessons you have learned from your forecasts?

That things get complex really quickly! Also, unless we think very specifically about uncertainties all along the workflow, the results may be meaningless, so making those uncertainties visible throughout the process is really important when communicating the end results. This is particularly true for seasonal forecasting, where end users often have very high expectations for the model outputs they will receive. The reality is that the usefulness of a seasonal forecast depends on the variable, geographic location, and season of interest.

5. What was the biggest or most unexpected challenge you faced while operationalizing your forecast?

For the lake water quality forecasts, we realised very early on that we hadn’t put enough thought into incorporating catchment processes in our initial PROGNOS project design, and this required some attention. For the fish phenology models, one of the initial challenges was matching historical fish time series with archived seasonal forecasts (or “re-forecasts”) so that the predictive performance of models could be evaluated – although we had almost 50 years of fish data, the seasonal forecasts didn’t reach back that far. With the release of ECMWF’s ERA5 reanalysis and the SEAS5 seasonal forecasts, we were able to evaluate re-forecasts back to 1993. Fortunately for us, our project partners at Universidad de Cantabria have developed tools for evaluating seasonal forecasts in their climate4R bundle of R packages, which made various technical challenges, such as statistical downscaling, a lot easier than they might have been.

6. Is there anything else you want to share about your forecast?

The fish phenology forecast, which will be submitted for publication shortly, has been developed with data from one small catchment in the west of Ireland. The next step would be to apply the workflow and methods across the native range of the species involved (the North Atlantic) to understand whether seasonal forecasts have a role to play in fisheries management over a large geographical range. Incorporating the lessons we have learned into scenarios driven by future climate projections will also be a really exciting prospect, as we seek to mitigate the impacts of climate change on diadromous fish.

PROGNOS was financed under the ERA-NET WaterWorks2014 Co-funded Call, Water JPI: (IE) EPA (Grant number: 2016-W-MS-22); (SE) FORMAS; (DK) IFD; (Israel) MoE-IL; (RO) RCN, with co-funding from the EU Commission. The WATExR project is part of ERA4CS, an ERA-NET initiated by JPI Climate, and funded by MINECO (ES), FORMAS (SE), BMBF (DE), EPA (IE), RCN (NO), and IFD (DK), with co-funding by the European Union (Grant number: 690462).

Forecast Spotlights – Nicholas Record

July 28, 2020

We will be highlighting operational forecasts in conjunction with the release of newsletters for a new series called “Forecast Spotlights”. The goal is to highlight operational forecasts being conducted by our EFI members, how they got into forecasting, and lessons learned.

The inaugural forecast is ecocaster by Nicholas Record at Bigelow Laboratory for Ocean Sciences. You can also see Nicholas’ ESA 2020 presentation, Using citizen observations to forecast ecosystems from jellyfish to moose to whales, at the EFI watch party on August 4 starting at 1:45pm Eastern Time here: https://youtu.be/42nZ4yAwG1o

1. How did you get interested in ecological forecasting?

I joined a NASA project in the early aughts, funded by their Ecosystem Forecasting program. The goal of the project was to forecast right whale movements to help manage this endangered species. In addition to the value of the project, I was drawn to the interdisciplinary nature of forecasting, including math, biology, and geoscience, as well as the social sciences and visual arts. 

2. What are you trying to forecast?

In my Ocean Forecasting Center, we work on everything from viruses to whales. We try to experiment with new methods as often as we can, embracing the “Cambrian explosion” of forecasting approaches that Payne et al. (2017)1 described. At the moment, I’m really interested in using crowd-science approaches to build ecosystem forecasts for human-wildlife interactions. For example, we have real-time forecasts for jellyfish sightings, moose-car interactions, and tick encounters.

3. Who are the potential users or stakeholders for the forecasts you create?

For the crowd-science forecasts, like the jellyfish forecast, the audience is the general public. I need to engage hundreds of people for the forecasts to work, and giving people a daily forecast to look at is a fun way to engage them. For other projects, like our harmful algal bloom forecasts, stakeholders are more specific, including industry members and management agencies. Some of those forecasts are viewable by the public.

4. What are the key lessons you have learned from your forecasts?

  1. If the general public is involved, any barrier to entry (e.g. a webform or login) can result in a major drop in the amount of data that comes in.
  2. The human dimension is just as important a part of the system as the wildlife, environment, etc.
  3. If you want your forecast to be perfect, it might take forever before anyone outside your group ever sees it. Failure is the greatest teacher (I also learned this from Yoda).

5. What was the biggest or most unexpected challenge you faced while operationalizing your forecast?

Data repositories periodically change their data formats or data pipelines. Keeping things automated is constant work.

6. Is there anything else you want to share about your forecast?

Check out some of the experimental crowd-science forecasts at eco.bigelow.org.

…And if you have a crowd-sourced forecast idea that you’d like to tinker with, I’d be happy to collaborate.

1 Payne, M.R., A.J. Hobday, B.R. MacKenzie, D. Tommasi, D.P. Dempsey, S.M.M. Fassler, A.C. Haynie, R. Ji, G. Liu, P.D. Lynch, D. Matei, A.K. Miesner, K.E. Mills, K.O. Strand, E. Villarino. 2017. Lessons from the First Generation of Marine Ecological Forecast Products. Frontiers in Marine Science. https://doi.org/10.3389/fmars.2017.00289

Diversity with EFI and How You Can Get Involved

Date: June 18, 2020

Post by: The EFI Diversity, Equity, and Inclusion Working Group

The Ecological Forecasting Initiative, like many other organizations, calls for justice for George Floyd and countless other Black individuals and persons of color, and we stand in solidarity with our Black colleagues and friends saying #BlackLivesMatter. Our EFI Diversity, Equity, and Inclusion (DEI) Working Group is committed to listening, learning, and exploring ways to promote anti-racism and to make EFI, and STEM fields more broadly, a welcoming environment. With regard to ecological forecasting, as a first step, we need input and experience from people of all backgrounds at all stages of the ecological forecasting process, from forecast development and implementation to stakeholder decisions. Ecological forecasting as a field is relatively new; creating an inclusive, anti-racist field starts with understanding the lived experiences of all types of forecasters, end-users, and community stakeholders. We have more in-depth initiatives and tasks associated with the current NSF-funded EFI-RCN grant and other proposals submitted for review, but for now, we invite you to get involved in four ways: joining the group, using or adding to our Bibliography of resources, filling out a short 5-minute survey, or joining our book club.

ONE
Of all the EFI Working Groups, our DEI Group has the smallest number of participants. We welcome anyone who is interested in participating to learn more about ways to expand diversity and inclusion as well as brainstorming ways to increase diversity within the ecological forecasting field. Our next call is June 30 and upcoming monthly meetings are posted on the EFI’s DEI webpage, as is our Strategic Plan, which is a living document that provides an overview of the steps the DEI Working Group is taking to promote diversity, accessibility, and inclusion in EFI. Email eco4cast.initiative@gmail.com to be added to the mailing list for this group.

TWO
If you are not able to join the Working Group calls at this time, there are additional ways to get involved. We are compiling a Bibliography that provides resources for learning more about anti-racism and the diversity status of fields relevant to ecological forecasting. These resources include lists of minority supporting associations, links to diversity and inclusion plans from professional societies, blog posts, publications, and compiled lists of resources from other organizations. This is also a living document, to which we will add additional documents moving forward. If there are additional resources you have found useful, they can be submitted through this Google form.

THREE
As part of Step 1 of our Strategic Plan, “Identify and clarify the problem”, we are working to identify the current status of diversity within fields relevant to ecological forecasting, as a baseline against which to assess diversity within the Ecological Forecasting Initiative specifically. Once we have assessed the current diversity status of EFI, our next goal is to provide suggestions to ecological forecasting labs about ways to recruit more diverse students into our undergraduate and graduate programs.
To assess the current status of diversity within fields relevant to ecological forecasting, we are using the NSF-funded NCSES Interactive Data Tool IPEDS database of the racial and ethnic backgrounds of students who have graduated from US institutions in over 400 academic programs. We have narrowed the list down to 29 academic degrees and are asking for your help to rank the relevance of these degrees to ecological forecasting in this short survey (https://nd.qualtrics.com/jfe/form/SV_3Pdyo1bh5OG8R93). Once we know which academic degrees are most relevant to ecological forecasting, we can assess the current diversity of those degrees relative to EFI. We will then work on Step 2 of our Strategic Plan, “Identify barriers that may prevent students from underrepresented groups from participating in ecological forecasting.”

FOUR
To encourage open, honest conversation and anti-racist thinking, EFI will host its first virtual book club. We will begin with The Years That Matter Most: How College Makes or Breaks Us by Paul Tough. Tough’s book explores privilege in higher education, from the application process to the classroom. As many forecasters are educators and participants in higher education, we believe this book will serve the interests of EFI’s mission while helping participants grow in anti-racist values. The book club is open to all participants, regardless of EFI membership, race, ethnicity, gender, religion, or any other personal identity – we ask only that you participate with an open mind and a willingness for vulnerability. For those who would like to participate but need help acquiring the book, we have a limited amount of financial assistance available. Email eco4cast.initiative@gmail.com for more info.
Logistics: The book club will meet weekly, in the evenings, starting the week of July 13th, with about 40-70 pages of reading per meeting (although meeting frequency and page counts can be adjusted to meet the needs of the group). If you are interested in participating, email eco4cast.initiative@gmail.com so we can send you the doodle poll to find a day/time for the group to meet.

EFI Guest Post on Dynamic Ecology

Date: June 8, 2020

EFI Member Nick Record (Bigelow Laboratory for Ocean Sciences) led an effort with Jaime Ashander (Resources for the Future), Peter Adler (Utah State University), and Michael Dietze (Boston University) to write a guest post titled “Ecological forecasting ethics: lessons for COVID-19” for Dynamic Ecology|Multa novit vulpes.

You can find the post here:

https://dynamicecology.wordpress.com/2020/06/08/ecological-forecasting-ethics-lessons-for-covid-19/

NEON Biorepository Seeks Collaborative Opportunities in Ecological Monitoring & Forecasting Research

Date: June 4, 2020

Post by: Kelsey Yule; Project Manager, NEON Biorepository and Nico Franz; Principal Investigator, NEON Biorepository

Background. The National Ecological Observatory Network (NEON; https://www.neonscience.org/) is known for producing and publishing 180 (and counting) data products that are openly available to both researchers and the greater public. These data products span scales: individual organisms to whole ecosystems, seconds to decades, and meters to across the continent. They are proving to be a central resource for addressing ecological forecasting challenges. Less well known, however, is that these data products are all either directly the result of, or spatially and temporally linked to, NEON collection of physical samples, both biological (e.g. microbial, plant, animal) and environmental (e.g. soil, atmospheric deposition), at all 81 NEON sites.

The NEON Biorepository at Arizona State University (Tempe, AZ) curates and makes available for research the vast majority of these samples, which consist of over 60 types and number over 100,000 per year. Part of the ASU Biodiversity Knowledge Integration Center and located at the ASU Biocollections, the NEON Biorepository was initiated in late 2018 and has received nearly 200,000 samples to date (corresponding to some 850 identified taxa in our reference classification). Sampling strategies and preservation methods that have resulted in the catalog of NEON Biorepository samples have been designed to facilitate their use in large-scale studies of the ecological and evolutionary responses of organisms to change. While many of these samples, such as pinned insects and herbarium vouchers, are characteristic of biocollections, others are atypical and meant to serve researchers who may not have previously considered using natural history collections. These unconventional samples include environmental samples (e.g. ground belowground biomass and litterfall, particulate mass filters); tissue, blood, hair, and fecal samples; DNA extractions; and bulk, unidentified community-level samples (e.g. bycatch from sampling for focal taxa, aquatic and terrestrial microbes). Within the overarching NEON program, examination of these freely available NEON Biorepository samples is the path to forecasting some phenomena, such as the spread of disease and invasive species in non-focal taxonomic groups.

NEON Biorepository samples include: pinned, identified insects; dry soils; bulk, unidentified, ground-dwelling invertebrate community samples; frozen small mammal tissue samples

Sample Use. Critically, the NEON Biorepository can be contrasted with many other biocollections in the allowable and encouraged range of sample uses. For example, some sample types are collected for the express purpose of generating important datasets through analyses that necessitate consumption and even occasionally full destruction. Those of us at the NEON Biorepository are working to expedite sample uptake as early and often as possible. While we hope to maintain a decadal sample time series, we also recognize that the data potential inherent within these samples needs to be unlocked quickly to be maximally useful for ecological forecasting and, therefore, to decision making. 

Data portal. In addition to providing access to NEON samples, the NEON Biorepository publishes biodiversity data in several forms on the NEON Biorepository data portal (https://biorepo.neonscience.org/portal/index.php). Users can interact with this portal in several ways: learn more about NEON sample types and collection and preservation methods; search and map available samples; download sample data in the form of Darwin Core records; find sample-associated data collected by other researchers; explore other natural history collections’ data collected from NEON sites; initiate sample loan requests; read sample and data use policies; and contribute and publish their own value-added sample-associated data. While more rapidly publishable NEON field data will likely be a first stop for forecasting needs, the NEON Biorepository data portal will be the only source for data products arising from additional analyses of samples collated across different research groups.

Map results for the spatial and taxonomic distribution of NEON mosquito (Culicidae) specimens currently available for use

Exploration of feasible forecasting collaborations. The NEON Biorepository faces both opportunities and challenges as it navigates its role in the ecological forecasting community. As unforeseen data needs arise, the NEON Biorepository will provide the only remaining physical records allowing us to measure relevant prior conditions. Yet, we are especially keen to collaboratively explore what kinds of forecasting challenges are possible to address now, particularly with regards to biodiversity and community level forecasts. And for those that are not possible now, what is missing and how can we collaborate to fill gaps in raw data and analytical methods? Responses to future forecasting challenges will be strengthened by understanding these parameters as soon as possible. We at the NEON Biorepository actively solicit inquiries by researchers motivated to tackle these opportunities, and our special relationship to NEON Biorepository data can facilitate these efforts. Please contact us with questions, suggestions, and ideas at biorepo@asu.edu.

Going Virtual! What we learned from the EFI-RCN Virtual Workshop

Date: May 21, 2020

Post by: Jody Peters1 and Quinn Thomas2

1 University of Notre Dame, 2Virginia Tech

On May 12 and 13 our NSF-funded EFI Research Coordination Network (RCN) hosted a virtual workshop, “Ecological Forecasting Initiative 2020: Coordinating the NEON-enabled forecasting challenge”. This workshop replaced the three-day in-person workshop that was scheduled at the same time, but which was canceled due to COVID-19. Going virtual allowed us to increase our participation and diversity. We were originally space-limited to 65 in-person participants, but with our virtual meeting, we had a little over 200 people register to access the workshop materials, with 150 individuals consistently joining on Day 1 and 110 on Day 2. We also welcomed participants from around the globe, with almost 10% of participants calling in from outside the U.S. And instead of being limited to 15 graduate student participants, we ended up with over 50 graduate students who participated in the meeting. While EFI has been using Zoom from the beginning and the EFI-RCN leadership committee members are constantly on Zoom for calls and online courses, this was a much larger gathering than any of us had organized previously. To help others as their workshops embrace the virtual format, we reflected on the key elements that allowed the workshop logistics and technology to flow smoothly. We hope you find our tips useful! If you have any additional questions, feel free to reach out to us at eco4cast.initiative@gmail.com.

Thanks to Dave Klinges (University of Florida) who captured these screenshots of 6 screens of Zoom boxes.

Prepping for the Meeting

  1. Get input from many perspectives. There are a number of great suggestions online about hosting virtual workshops.  To prepare for the virtual format, multiple leadership committee members took a free 1 hr class on running virtual scientific meetings. You can find the video and slides from the class here https://knowinnovation.com/2020/03/you-too-can-go-virtual/. Alycia Crall from NEON was hugely helpful with ideas like QUBES and Poll Everywhere. Lauren Swanson from Poll Everywhere provided a tutorial on how to use the different features of Poll Everywhere and helped us to test the polls before the workshop.  Julie Vecchio, from the Navari Family Center for Digital Scholarship for the Hesburgh Libraries at the University of Notre Dame, shared an example slide deck and script for sharing virtual logistics at the beginning of a workshop.  And Google was a great resource for finding additional input along the way.
  2. Scale your goals to the format and your objectives. Our goal for the original in-person meeting was to finalize rules for the NEON Forecasting Challenge, but we knew that this was not possible virtually. However, the virtual meeting allowed us to have more people and more perspectives for idea generation. Therefore, our goals shifted to brainstorming so that we could leverage perspectives from the diverse attendees. We now have a ton of work to do synthesizing the input, but we have a better pulse of what the community is interested in. Recognizing the challenge of engaging attendees virtually over long periods of time, we reduced the original three-day in-person meeting to a two-day meeting with a schedule that was conducive to participants from the east and west coasts of the U.S.
  3. Virtual meetings require as much or more prep than in-person meetings.  Be prepared for a lot of planning before the meeting.  

General Meeting Set-up

  1. Don’t go all day.  Our first day was 6 hours and the second day was only 4 hours and the hours were set to accommodate people from the U.S. east and west coast time zones. Unfortunately, there is no good time for all global participants, but we were thrilled to see so many participants who woke up early or stayed up late to join us from outside the U.S.
  2. Incorporate plenty of breaks.  Virtual meetings are more tiring than in-person meetings. We had two longer 30-minute breaks that corresponded to lunch-times on the U.S. east and west coasts as well as shorter 15-minute breaks spread throughout both days.
  3. Have a production manager for the meeting. This person focuses on set up and running the technical logistics. For example, this person stays in the main room during breakouts to provide assistance and oversee the timing of activities. Having a production manager allows the meeting lead (i.e., project Principal Investigator) to be the M.C. of the meeting and do real time synthesis of the ideas without having to worry about meeting logistics.
  4. Create a minute-by-minute script for the entire meeting.  This includes both the public Agenda and the behind the scenes tasks.  For example, we wrote out the messages that would be sent through Zoom Chat/Breakout messaging with the time that each message would be sent. You should be able to articulate in writing what is going to happen at every moment of the meeting before the meeting starts and assign who is going to do each task.
  5. Pre-record talks and add edited closed captioning.  This prevents issues that come with live talks like bad mics or bad connections.  This also keeps the meeting on schedule and avoids the awkward need to cut someone off.  We felt the talks were better because they were pre-recorded and, for the talks that presenters agreed to share, we now have an excellent resource for folks that missed the meeting.  The pre-recorded talks may require editing, so find someone with resources and time to make edits prior to the meeting. We made playlists for each plenary session available as unlisted videos on YouTube for any workshop participant that had connection issues while the videos were being played.
  6. Be prepared to pay for a closed captioning service so that the meeting is accessible.  In the registration form for the meeting, ask if anyone needs CC and if they do, hire a service. We were able to find a service through our university (Virginia Tech) vendor system that worked well (www.ACSCaptions.com). The production manager moved the captioner to be in the same Breakout as those that requested the service.  CC is also nice, because you get the full record of text right after the meeting, instead of waiting for the Zoom transcript to come through, plus the captioner’s transcription is better than the automatic Zoom transcript.  
  7. Use hardwired internet.  Our production manager/meeting host used a computer that was connected to the internet via a wire – this will reduce the chance that the central person loses connection.
  8. Plan for leadership team meetings during the workshop. The leadership committee met for 1 hour before and 30 minutes after the meeting each day to go over last minute logistics and any adjustments that were needed. Set up a separate Zoom meeting for these calls to avoid participants joining at times when you are not prepared for them.

Welcome to Zoom

  1. Zoom worked great.  While we know that there are other conferencing platforms, we used Zoom Meeting with a 300-person limit, hosted through the University of Notre Dame.  We chose Zoom Meeting over Zoom Webinar because we wanted workshop participants to be able to interact during breakouts. Plus, it was provided by the University and did not require the additional set-up or payment that Zoom Webinar would have. It worked very well. Some individuals could not access Zoom, so we also streamed the workshop from Zoom to YouTube and shared the YouTube live link with individuals in our group who had registered for the workshop materials.
  2. But Zoom can break communication lines between the host and the leadership committee. Have an off-Zoom, off-computer way for the leadership team to communicate throughout the meeting (such as text messages to phones).  It is important to turn off notifications on the host’s and co-hosts’ computers because of screen sharing and sounds, but that can leave the production manager or leadership team flying blind unless there is an alternative channel.  Leadership committee members who are in Breakout Rooms are unable to message the host in Zoom.
  3. Assign leadership committee members as co-hosts. Assign all leadership members as co-hosts and have them mute people who are not talking but have background sounds. Leadership members can also help with spotlighting the speakers and can also move from breakout room to breakout room if needed to check on how things are going.
  4. Give a brief Zoom training at the beginning.  At the beginning of the workshop, use a slide deck (and a written out script to go with it) to introduce all the features of Zoom you want people to use. While many of us use Zoom regularly, not everyone is on Zoom all the time, and it is important that these folks feel comfortable so they can fully participate.  
  5. Play videos directly from the production manager/meeting host’s computer. Make sure the videos are downloaded onto your computer hard drive and play them from there. We used a playlist that automatically advances to the next video.  In Zoom’s screen share settings, make sure to click both the “Share computer sound” and “Optimize Screen Sharing for Video Clip” options.  Do not play videos in Zoom from YouTube to avoid the video having to be played over multiple web services.

Zoom Breakout Rooms

  1. Keep Breakout Rooms small. To make the meeting feel smaller, we capped each breakout room at 9 people.  Using random sorting, as we did on Day 1, was a great way for attendees to meet different people from across the group.  We built in time for introductions during the breakouts because one of our goals was community building.
  2. Provide clear, easy-to-find Breakout instructions.  Write specific instructions for each Breakout session and post them somewhere easy to find; ours were in a prominent place on the meeting website.  If possible, try to spread the leadership team among different Breakout Rooms. In practice this is hard because the production manager has to find them in the list of random Breakout Rooms and reassign them (but see point 4 below and have the leadership members rename themselves).  In reality, specific instructions that aren’t too complex will allow Breakout Rooms to work fine without a member of the leadership team.
  3. Prepare for providing assistance getting into Breakout Rooms. Some people may not see their notification to join a Breakout Room pop up, so the production manager may need to walk them through that. Include a screenshot in the introductory Zoom instructions of what the Breakout Room assignment notification looks like so people know where to look. 
  4. Give extra time if using manually assigned (non-random) Breakout Rooms.  Manually sorted Breakout Rooms take longer to organize, so be sure to include the sorting time in your plans. The assigning and sorting can be done at any time during the plenary; it does not need to happen right before the Breakout Rooms open.  Use Zoom’s renaming feature to ease the sorting: if everyone changes their Zoom name (under the Participants tab) to start with their group name or number (e.g., A1 Jody), it is much easier for the production manager to sort.
  5. Create extra Breakout Rooms.  When setting up the manually sorted Breakout Rooms, create additional rooms that may stay empty. Some groups may want to break out further, and rooms cannot be added after the Breakout Rooms are opened.
  6. Character limits for messages to Breakout Rooms.  There is a character limit for the messages that can be sent to the Breakout Rooms, so keep them short.  
  7. Breakout Rooms are unable to communicate with the production manager. When individuals are in the Breakout Rooms, they cannot use the Zoom Chat to communicate with the production manager in the main room or anyone else in the workshop who is in another Breakout Room. This is important to mention in the introductory Zoom instructions. 

Communication throughout the Workshop: Poll Everywhere and QUBES

  1. Create a means for engagement.  It is important for attendees to feel involved so that the workshop isn’t a one-way delivery of information.  We used an educational account of Poll Everywhere through the University of Notre Dame to promote participation throughout the workshop in multiple ways, including brainstorming ideas with word clouds, submitting questions and voting on priority questions for panel members, and collecting brainstormed priorities that could then be voted on.
  2. Define the use of communication tools.  Use the Zoom Chat for logistics and supplemental information and Poll Everywhere for Q&A, so that questions about the science do not get lost among questions about links, timing, etc. We were also able to download all the questions for the panelists to get their feedback on any questions that we did not have time for during the Q&A sessions. We will share this feedback with the workshop participants in the next month.
  3. Centralize meeting materials.  We used QUBES as a platform to organize and share materials easily in a centralized location.  The EFI-RCN QUBES site worked well because it was free (thanks NSF and Hewlett Foundation), easy to set up, and we were able to include links to videos, surveys, Zoom login, google documents, papers, etc. all in one place.