Tracking Impact and Measuring Success in Data Education Events

By Yanina Bellini Saibene

June 22, 2021

With the increase in computing power and in technologies available to store and analyze data, the demand for data science skills has grown. To address this need, numerous entities are creating training via institutional and community-led events. Data education events upskill and train anyone who works with data, from new researchers to experienced data science practitioners. Organizers of such events strive to measure immediate and long-term impact so that they can improve training efficacy, recruit new partners and event participants, acquire funding, and fulfill funding requirements. Measuring the impact of these events can be challenging because the impact often only becomes visible after the event has ended, as skills are applied in real-world scenarios. Impact is also multi-dimensional and not always quantifiable. This csv,conf,v6 Birds of a Feather session, co-developed by Emily Lescak, Beth Duckles, Yo Yehudi, Yanina Bellini Saibene, Ciera Martinez, Leslie Alanis, and Reshama Shaikh, facilitated discussion for newcomers and experienced evaluators of event outcomes alike, covering topics such as the motivation for measuring impact, determining what we can measure and how we can measure it, challenges to measuring impact, and designing impact strategies for different stakeholders (e.g., funders, co-organizers, and the larger community). This post summarizes the conversations surrounding these topics and provides resources to help you develop an impact strategy for your next event.

Why do we measure impact?

“If you can’t measure it, you can’t improve it.”

— Peter Drucker

The reasons for measuring impact are plentiful, ranging from a general understanding of the process to practical necessities such as securing funding. Measuring impact matters for business reasons as well as for the engagement and satisfaction of all stakeholders.

For event organizers and educators, the reasons for measuring impact are deeply personal, and assessments provide answers to our questions:

  • What are our goals and are we achieving them?
  • Are participants learning what we are teaching? Do our curriculum and methods offer utility to the participants and the community at large?
  • Are we teaching the needed skills?
  • Are participants satisfied with the topics, are these teachings meaningful, and do they positively impact participants’ work and career trajectory?
  • Where can we improve as instructors and organizers?
  • How long does it take for the skills that we are teaching to be absorbed by learners?
  • Are we reaching all the participants who could benefit from this education event?
  • How can we measure impact and prioritize desired learning outcomes?

Institutions and funders have financial and reputational stakes in education programs and want to ascertain:

  • Is the funding impactful?
  • What is the measure of “success” or “impact”, and what is the cost per person?
  • How can we have maximum impact for our funding dollars? Is this program that we are funding effective?
  • How do the events we are funding compare to other events? How do they meet our benchmark of “success”?

For event participants, who are investing their time, resources, and aspirations:

  • What are the goals of the event?
  • Do these education events align with the skill gaps I would like to fill?

For the general community:

  • What skills is this education event teaching?
  • What are the event’s goals, and are the organizers achieving them?
  • What can I learn from this data education event and how can I apply what I learn to other events?

What are the challenges?

Data education event organizers face a number of challenges in measuring impact:

  • Finding a starting point for how to collect data
  • Measuring impact through both quantitative and qualitative methods
  • Measuring long-term impact of data education events
  • Building trust with learners and other stakeholders
  • Ethics of data collection; balancing information collection with privacy
  • Differences in the priorities of funders and grantees

Many of the assessment challenges present in data education events are the same challenges any field faces when training learners, and can be approached from an understanding of learning theory. In addition, as with any training event, there must be a clear definition of the motivations and goals for assessment, which are often difficult to articulate in a useful way. This brings us to one of the main challenges of assessment: balancing the various interests of stakeholders in defining success. Often this first step of getting clear, well-articulated goals from event stakeholders is missing. Even when the goals are well defined, the organizers must balance the motivations of the organizers, instructors, learners, and funders, all of whom have unique, and possibly conflicting, goals for the event.

Another set of assessment challenges we discussed is that organizers of such events are often not trained in data skills or learning assessment, and are lost on where to start. The skills needed for proper assessment are many: constructing a survey well, handling the ethics of participant data collection (balancing the information collected against participant privacy), applying qualitative and quantitative methodology, and communicating assessment results to stakeholders. Even when these concerns are addressed, there is the added challenge of collecting the information and engaging participants to provide meaningful feedback (i.e., increasing survey responses). Because the skills needed to assess events effectively are so varied, having an organizing team with a range of expertise is essential.

The above concerns overlap with many of the challenges facing instructors of any event, but data-specific events bring additional ones. Data science is a relatively new field that draws knowledge and skills from many disciplines and is rapidly evolving with new tools. Data education events often aim to give attendees skills that are not taught in traditional education settings. These events are usually short in duration, making it difficult to assess which learning objectives were achieved, because attendees often consolidate their knowledge after the event has taken place. The rapid evolution of data skills and tools also makes it difficult to reuse a curriculum and its evaluation strategies. And many data events go beyond the traditional classroom setting, such as sprints and hackathons, which makes traditional classroom assessment strategies difficult to apply.

What can we measure?

We agreed that before deciding what we can measure, we must set the goals of our event and the goals of the measurement itself. Much of the discussion focused on measuring whether our students were learning at our events, and on what and how to measure in synchronous or asynchronous events.

When to measure

  • Before the event: assessing where participants are, to determine whether they have the prerequisite skills to participate in and benefit from the event.
  • During the event: for example, using formative assessment to know whether students are following and we can move forward, or whether we need to review some concepts.
  • After the event: not only in the “traditional” exam way, but also through a product that helps students build a portfolio, which can be useful for self-promotion and for demonstrating skills to potential employers.
  • Short and long term: we can run a survey immediately after the event to learn, for example, whether participants learned something new. We can also survey them several months later to measure changes in practice, asking, for example, whether they are using what they learned during the event. If yes, what are they using and why? If no, why not? (See the sketch after this list for one way to compare pre- and post-event responses.)
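
A minimal sketch of that pre/post comparison, assuming self-rated confidence questions on a 1-5 scale and pandas as the analysis tool; the column names and data below are hypothetical:

```python
import pandas as pd

# Hypothetical pre- and post-event self-ratings (1 = not confident, 5 = very confident).
pre = pd.DataFrame({
    "participant": ["a", "b", "c"],
    "confidence_data_cleaning": [1, 2, 2],
    "confidence_visualization": [2, 3, 1],
})
post = pd.DataFrame({
    "participant": ["a", "b", "c"],
    "confidence_data_cleaning": [3, 4, 3],
    "confidence_visualization": [4, 4, 2],
})

# Join on participant and report the average self-rated gain per topic.
merged = pre.merge(post, on="participant", suffixes=("_pre", "_post"))
for topic in ["confidence_data_cleaning", "confidence_visualization"]:
    gain = (merged[f"{topic}_post"] - merged[f"{topic}_pre"]).mean()
    print(f"{topic}: mean gain of {gain:.1f} points")
```

Self-ratings are only a proxy for learning, but asking the same questions before, immediately after, and months after an event gives a consistent baseline for the short- and long-term comparisons described above.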

Characteristics to measure

We can also measure event characteristics such as:

  • Number of individuals reached
  • Diversity of participants, organizing and teaching teams in terms of different dimensions of interest, such as age, gender identity/expression, body ability, career stage, geographic origin, region, language, neurodiversity, race, socioeconomic background, sector (academic-industry-student-retired-unemployed-volunteer), etc.
  • Outreach and engagement: How did you learn about us? How do you keep informed about us? How do you interact with us and our community? Who is helping us? Who are we helping?
  • Topics: what people want to learn, what they attend, and what they finally use or apply.

The article Measuring Impact to Craft Your Story summarizes four aspects of impact that can be measured (Reactions, Learnings, Behavior, and Results) across two time frames (short and long term), with concrete examples of how The Carpentries measures its impact.

One example of measuring in order to change an event comes from the analysis of 10 years of data from the agroinformatics congress in Argentina. The analysis determined participation by gender, country, and province for authors, speakers, committee members, and other roles. It also measured the topics addressed and the collaborations between institutions and authors (using social network analysis). With this information, the organizers took action to: increase the participation of women, increase participation from other countries in the region, increase participation from Argentine regions other than the Pampa Húmeda, link R&D groups with similar interests that are not yet working together, increase participation from the private sector, and encourage work on topics of interest.
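
A minimal sketch, assuming Python with networkx, of the kind of co-authorship analysis described above; the author lists are hypothetical:

```python
import itertools
import networkx as nx

# Hypothetical records: one author list per accepted paper.
papers = [
    ["Alice", "Bruno"],
    ["Bruno", "Carla"],
    ["Diego", "Elena"],
]

# Build the collaboration network: every pair of co-authors gets an edge,
# weighted by the number of joint papers.
G = nx.Graph()
for authors in papers:
    for a, b in itertools.combinations(authors, 2):
        weight = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=weight + 1)

# Disconnected components point to groups that never collaborate, one signal
# organizers can use to link R&D groups with similar interests.
for component in nx.connected_components(G):
    print(sorted(component))
```

The real analysis would start from the congress proceedings and attach attributes such as institution, gender, and province to each node, but the same network structure supports all of those breakdowns.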

The actions to increase female participation, for example, produced quick, positive results for the roles the organization controls directly (such as scientific committees, organizers, and speakers), but the same did not hold for lead roles (e.g., first author).

These examples show the importance of “measuring to improve” and of “measuring to determine whether the actions and decisions taken were useful”; both bring us closer to our objectives.

How can we measure impact and success?

Impact can be defined as: has the action taken resulted in a change? There are various ways to measure it. Traditionally, impact has been measured with a single survey containing a varying number of questions, ranging from closed questions (with a limited set of options to select) to open questions with space to add free text. These surveys are typically administered after the event. This option is popular because it is familiar and can be easily replicated and analyzed. However, response rates to surveys are often low, and response rates for open-ended questions can be even lower. In addition, these kinds of surveys only measure the immediate impact of the event.
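
As a minimal sketch of why it helps to look at response rates per question type, assuming pandas and a hypothetical post-event survey:

```python
import pandas as pd

# Hypothetical responses; None means the question was left blank.
survey = pd.DataFrame({
    "would_recommend": ["yes", "yes", "no", "yes"],     # closed question
    "open_feedback": ["great pace", None, None, None],  # open question
})

invited = 10  # participants who received the survey link

# Report how many of the invited participants answered each question.
for question in survey.columns:
    answered = int(survey[question].notna().sum())
    print(f"{question}: {answered}/{invited} responses "
          f"({answered / invited:.0%} of invited participants)")
```

Reporting against the full denominator (everyone invited, not just everyone who opened the survey) keeps low response rates visible rather than hidden.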

Given that data education events have impact beyond the limited timeframe of the event itself, measuring impact over time is more representative of their actual impact.

There are other options, with various benefits as well as resource constraints:

  • 1-on-1 meetings with participants: the benefit is a casual, relaxed conversation with free-flowing ideas. The constraint is time, particularly for scheduling, meeting, and writing up and sharing notes.
  • Follow-up surveys: in addition to evaluations immediately after an event, surveys can be conducted at intervals after the event. This method requires more resources to administer and analyze, and determining what information to collect takes research. Another challenge is that engagement from past participants may be limited.

In this age of technology, social media can also be used to measure impact. Participants often share their experience through LinkedIn, Twitter, or Facebook posts, GitHub activity, blog articles, etc. This feedback is valuable, but it is often scattered and requires additional resources to collect.

We often measure what we can count, but that isn’t always what we want to measure.

Sometimes what is easy to measure does not line up with what we want to measure. Our group identified a variety of challenges with regard to finding ways to measure what we’re looking for. For instance:

  • How do we measure the long-term impact of workshops?
  • How do we understand scientific impact and influence?
  • What are the ways we can understand people’s belonging and inclusion in groups and workshops?
  • Can we understand how people pass forward information and learning beyond the class or workshop?
  • For student events, how can we expand beyond attendance count to student success and engagement?
  • How do we measure long-term application of skills instead of relying on just the end-of-seminar survey of what a student learned?

In our group discussion, we went over some ideas for how we might approach these issues, while also recognizing that people are often tired of taking surveys and that more in-depth research is challenging and time-consuming.

Our first approach is to look at mixed-methods solutions that integrate both qualitative and quantitative research. One idea is retrospective evaluation of workshops to get at some of their long-term impacts. This means interviewing or surveying people two to three years after an event to see how it affected them. There are challenges to this kind of data collection, such as making sure that people are tracked ethically and are free to decline, but these projects are helpful for determining long-term impact.

Second, we talked about borrowing quicker data collection processes from the design research world, folding short questions into work we are already doing. Examples include short interviews (2-3 minutes) at an event where we ask folks what has been most meaningful so far. Another is analyzing feedback that already exists in shared agendas or notes documents using qualitative tools.
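
One lightweight way to start on that last idea, assuming the feedback already lives in plain text notes (the notes below are hypothetical), is a simple keyword tally before reaching for a full qualitative-coding tool:

```python
from collections import Counter

# Hypothetical feedback lines copied from a shared agenda or notes document.
notes = [
    "The pace was too fast but the exercises were useful",
    "Loved the exercises, would like more time for questions",
    "Pace was fine, examples were useful",
]

# Count word frequencies, ignoring punctuation and very common filler words.
stopwords = {"the", "was", "but", "were", "for", "would", "like", "more", "too"}
words = Counter(
    word.strip(",.").lower()
    for note in notes
    for word in note.split()
    if word.strip(",.").lower() not in stopwords
)
print(words.most_common(5))
```

A tally like this will never replace careful qualitative coding, but it can surface recurring themes (pace, exercises) quickly enough to fold into the short, existing feedback loops mentioned above.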

Next we talked about capturing stories. One person may tell another that a workshop was really impactful and useful (or that there were challenges that they faced), but if that story is never heard by the organizers, it’s unlikely that information will be of use. How can we gather those stories in such a way that helps the organizers?

Finally, we talked about the need to bring together a group of people who are doing similar work and need similar solutions, to collaborate on these issues. What would it be like if we could talk about and share resources on these topics? If that’s of interest to you, feel free to dash a note to the sub-session organizer Beth Duckles (bduckles@gmail.com).

It all comes down to people

How we assess data education events is a balance between the expectations and motivations of the people involved. However one chooses to measure the success and impact of an event, one must take the time to define and balance the goals of all the stakeholders: the attendees, the organizers, the funders, and the general public. While this balancing act is challenging, there was general optimism and excitement in all the discussions about sharing how each community approaches the many challenges. We look forward to more resources, strategies, and conversations, in the hope of creating a community of practice around assessing data education events.

References / resources

Acknowledgments

  • The csv,conf,v6 conference: for the opportunity to run this Birds of a Feather session (https://csvconf.com)
  • All the attendees of this Birds of a Feather session
  • The Gordon and Betty Moore Foundation for supporting the CS&S Event Fund through grant GBMF 8449 (https://doi.org/10.37807/GBMF8449)

About the Authors

  • Emily Lescak: is an educator and data scientist. She developed Code for Science & Society’s Event Fund and is now the Senior Research Community Officer at The Wikimedia Foundation.
  • Reshama Shaikh: is a statistician and runs the community Data Umbrella. Data Umbrella is a CS&S Event Fund grantee for scikit-learn open source sprints.
  • Yanina Bellini Saibene: is a researcher and data scientist, and co-founder of MetaDocencia (“MetaTeaching” in Spanish). MetaDocencia is a CS&S Event Fund grantee for teaching Spanish-speaking educators how to teach technology, coding, and data science skills.
  • Ciera Martinez: is a research lead at the Berkeley Institute for Data Science. Her work focuses on how to integrate large data sets and manage multidisciplinary research teams to perform biological and environmental research. She also works to make data science a more inclusive, diverse, and fun field through founding projects like Data Science by Design (datasciencebydesign.org).
  • Beth M. Duckles: is a research consultant and organizational sociologist who helps folks in science and tech collect and analyze human-centered data. She is also the founder of Open Post Academics, an international peer mentor group for people with a PhD, which was a CS&S Event Fund grantee for Open Problem Workshops.

Cross-posted on the CS&S Blog: https://eventfund.codeforscience.org/tracking-impact-and-measuring-success-in-data-education-events/
