By: Sarah Stawiski and John Little

Introduction

We all love to celebrate a great success story, and that holds true when reporting on the results of a leadership development program. Who doesn’t love to hear about a leader who became more compassionate or more strategic, or who got that big promotion after completing a development opportunity?

The reality, though, is that leaders have a range of experiences during and after a development program[i], and not everyone gets the same results from leadership development. We’ve heard leaders describe a program as “life-changing” or “transformational.” A particularly compelling example is that of the late Dr. Margaret Waddington, who attended CCL’s flagship Leadership Development Program (LDP)®. Her experience was so positive and impactful that, upon her passing, she entrusted CCL with a multimillion-dollar estate gift to ensure ongoing access to leadership development for Vermont citizens[ii]. Certainly not every leader has been quite so inspired by a leadership development program, but plenty of leaders describe certain development experiences as deeply and personally impactful.

It is naïve (and perhaps irresponsible), however, to assume that all leaders benefit equally and positively from a development program. Leaders need the opportunity to provide feedback and report whether they made any changes to their leadership practices and mindsets. And, while quite uncommon, leaders may occasionally have negative experiences in a development program[iii]. Leadership development programs affect leaders in a variety of ways, and the people responsible for these programs should monitor experiences and results to ensure no harm is done.

In short, leaders have a full range of experiences, both in the program itself and after a program ends, when they actually apply their learnings to improve their overall effectiveness as leaders. In this blog, we learn from leaders who reported significant improvement after a leadership program as well as from those who had not yet noticed improvement in overall effectiveness. The intent is to encourage leadership development practitioners not to focus only on the most positive stories of transformation, because to design and deliver the highest-quality programs, we need to understand what drives the full range of outcomes.

Our Approach

The insights presented here do not come from a single research study, but rather from the ongoing work of CCL’s Insights and Impact team. We are responsible for collecting data from leaders who attend our programs, engage in coaching, and use the other consulting services CCL offers.

Our findings are based on datasets of survey responses from thousands of leaders who attended a CCL leadership development program between 2020 and 2023. Programs vary in length, modality, global region, organizational level (ranging from individual contributors to senior leaders), and content focus. In this exploration, we focused primarily on 2 datasets:

  1. The first dataset is based on surveys that captured leaders’ feedback about program experiences using metrics such as overall satisfaction or likelihood to recommend, and measured initial indicators of impact such as how well learning objectives were met and how likely leaders felt they would apply program learnings.
  2. The second dataset, based on surveys sent to leaders two months after program completion, included more impact-focused data and measured actual application of learning and improvement in leader effectiveness.

We parsed the data in 2 main ways. First, we used Net Promoter Score (NPS) categories to identify Promoters (people who rate Likelihood to Recommend a 9 or 10 on the 0-10 scale) and Detractors (people who rate Likelihood to Recommend 0-6). Second, we separated leaders who reported “no change” in leadership effectiveness from those who reported “significant change.”
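For readers who run similar analyses on their own survey exports, here is a minimal sketch of this kind of segmentation in Python. The standard NPS cut points (9-10 Promoters, 7-8 Passives, 0-6 Detractors) are the only part taken from the text above; the column names, the sample data, and the use of pandas are our own illustrative assumptions, not CCL’s actual survey schema or analysis pipeline.

```python
# Illustrative sketch only: column names and example data are hypothetical,
# not CCL's actual survey schema or analysis code.
import pandas as pd

def nps_category(score: int) -> str:
    """Map a 0-10 Likelihood-to-Recommend rating to its standard NPS category."""
    if score >= 9:
        return "Promoter"
    if score >= 7:
        return "Passive"
    return "Detractor"

# Hypothetical survey export, one row per respondent
responses = pd.DataFrame({
    "leader_id": [1, 2, 3, 4],
    "likelihood_to_recommend": [10, 6, 9, 7],  # 0-10 scale
    "effectiveness_change": ["significant change", "no change",
                             "significant change", "no change"],
})

# Segmentation 1: NPS categories from the Likelihood-to-Recommend rating
responses["nps_category"] = responses["likelihood_to_recommend"].map(nps_category)
promoters = responses[responses["nps_category"] == "Promoter"]
detractors = responses[responses["nps_category"] == "Detractor"]

# Segmentation 2: self-reported change in leadership effectiveness
no_change = responses[responses["effectiveness_change"] == "no change"]
significant_change = responses[responses["effectiveness_change"] == "significant change"]

print(responses["nps_category"].value_counts())
```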

Our observations come from a sample of leaders who indicated they hadn’t yet improved their leadership effectiveness 8 weeks post-program. In some cases, we make comparisons to leaders who said they experienced significant improvement 8 weeks post-program. While we often have data from other raters (e.g., managers or direct reports) about the changes they observed in leaders, in this analysis we focused on self-report data only.

What We Are Learning 

Overall, the vast majority of leaders report positive experiences and positive impact on their leadership effectiveness. There is no evidence of harmful or detrimental program effects. We also see that, in a subset of programs, about 80-90% of leaders reported greater effectiveness 8 weeks post-program. Our curiosity led us to ask, “What about the rest of them?”

We are well aware that measuring behavior change and impact just 8 weeks post-program provides a mere snapshot in time. It can take months (or even years) to hone newly learned leadership skills and see results[iv]. However, looking at these data allows us to understand the experiences of a lesser-studied group of leaders, and it actually left us feeling optimistic.

1. The program experience was valuable, even for leaders who reported they did not improve their 8-week post-program leadership effectiveness.

For those who reported “no change” in 8-week post-program leadership effectiveness, we examined the data they provided on the last day or session of their program to understand whether there was anything distinct about their experience. Overall, they reported positive program experiences. Yes, their ratings on end-of-program surveys were significantly lower than those of leaders who reported improvements 8 weeks later. However, their scores, on average, were still quite high. For example, overall satisfaction was 4.2 out of 5 points, their ability to make meaningful connections with others was 4.5 out of 5 points, and they indicated the course had achieved the learning objectives, on average, “to a great extent.” This indicates that even these “no improvement” leaders had a valuable experience learning and connecting with others.

Although many factors may explain differences in post-program outcomes, one factor that may have contributed is that leaders who reported making significant improvement were more likely to have received coaching in their program (either 1:1 or small-group coaching) than leaders who did not yet report improvements. Coaching may personalize the experience and provide opportunities for sensemaking and reflection in a way that helps leaders make changes more quickly post-program. Knowing how to apply program learnings is critical to leaders’ development experiences, whether that is achieved through coaching, ample practice opportunities, or time to reflect on skill application in one’s own organizational context[v].

2. The leaders who have not yet improved are taking steps to apply what they learned.

Even though some leaders in our sample did not yet report improvement, they too are trying to apply what they learned. Fewer than 2% of leaders (3 leaders total) who did not yet report improvement said they had not been able to apply what they learned at all. Almost all participants (98%) indicated they applied what they learned at least “to a little extent,” with the majority (56%) indicating they applied what they learned “to some extent” (3 on a 5-point scale).

Also, about half of these leaders are reporting positive improvements, even if they do not attribute those changes to an improvement in their own effectiveness. Among the leaders who reported they had not yet seen improvements in overall effectiveness, responses to other survey items indicate that roughly half are making important changes in other areas. For example, 46% of those who did not report improvements in their own leadership effectiveness indicated they improved their ability to enhance their team’s engagement, which itself suggests improved leadership effectiveness.

3. The (cognitive) seeds that were planted may take a little longer to bloom for some leaders.

When we analyzed qualitative data from comments about the most significant change leaders had made post-program, we found that the majority of participants listed at least one change. The leaders who had not yet improved their leadership effectiveness were more likely to respond that they had not yet made changes (e.g., “too soon to say”) than leaders who had improved significantly (9% vs. 0%).

In contrast, the leaders who indicated they had significantly improved their leadership effectiveness were more likely to report specific behavioral changes (e.g., “I give feedback more regularly”) or concrete examples of impact (e.g., a promotion) than the leaders who indicated they had not yet improved. However, both groups reported about the same proportion of cognitive shifts (i.e., a change in mindset, increased self-awareness, a new perspective). This is promising because cognitive changes are likely to lead to eventual behavioral changes.

What You Can Do to Increase Impact

We focused on a small group of leaders who didn’t report an 8-week post-program improvement in leadership effectiveness. Here are a few takeaways about why this research matters, along with actionable steps you can take to enhance the impact of leadership development programs.

“Keep it real” to continue learning. If we only tell the most compelling stories of transformation, or try to make it seem like (close to) 100% of leaders have the same, amazingly positive experience, we miss out on rich learning opportunities. Understanding each participant’s experience, whether it is described as transformational, mediocre, or poor, is an opportunity to learn how to design and deliver development experiences that will be even more effective[vi].

Promote equity via continuous measurement and learning from full-spectrum experiences. Exploring the full range of program experiences and outcomes supports more equitable development because it may illuminate subgroup differences. People have different preferences and learning styles, and we can’t assume that all aspects of program design and facilitation are equally effective for all participants. As one of many examples, it would be beneficial to understand how neurodivergent leaders experience and benefit from leadership development programs and how such programs can be tailored to improve effectiveness.

Gain new insights by examining the factors that determine leadership program results. In our ongoing data analysis, we explore why we see each result by applying our Leadership Development Impact Framework[vii]. When trying to explain results (positive or negative), we use this framework to help pinpoint the possible factors at play. The 3 major factors that may contribute to the effectiveness of leadership development programs are:

  1. Leader characteristics (e.g., readiness, willingness, and other individual differences relevant to development).
  2. Leadership solution (e.g., content, design characteristics and delivery elements, cohesiveness and flow).
  3. Context (e.g., internal organizational factors such as culture, support, leadership, or other organizational changes, as well as external factors such as shifts in the industry and marketplace or significant economic and social issues or changes).

In summary, through our research and evaluation work, we learn as much as we can about the factors that influence the effectiveness of our leadership development programs so that CCL can keep getting better at designing and delivering impactful programs. Our ongoing analyses help us better understand what drives overall experience and longer-term outcomes. By telling these two tales of leadership outcomes, we celebrate the in-progress stories along with the success stories.

 

John Little is a Research Evaluation Analyst on the Insights & Impact team at CCL. He specializes in advanced statistics, assessments, and research methodologies, including scale development and validation, and meta-analysis. In his role, John develops innovative solutions to automate data collection processes, assists in generating client reports, and provides technical support to other team members across CCL.

References 

[i] Clerkin, C., Bergeron, D. M., & Wilson, M. S. (2024). Gender differences in developmental experiences. In S. R. Madsen (Ed.), Handbook of Research on Gender and Leadership (pp. 392–409). Edward Elgar Publishing. https://doi.org/10.4337/9781035306893.00037

[ii] Vermont Principals Association. (n.d.). Waddington Leadership Initiative. Retrieved April 17, 2024, from
https://vpaonline.org/professional-learning-support/waddington-leadership-initiative-2/

[iii] Arnulf, J. K., Glasø, L., Andreassen, A. K. B., & Martinsen, Ø. L. (2016). The dark side of leadership development: An exploration of the possible downsides of leadership development. Scandinavian Psychologist, 3. https://doi.org/10.15714/scandpsychol.3.e18

[iv] Lord, R. G., & Hall, R. J. (2005). Identity, deep structure and the development of leadership skill. The Leadership Quarterly, 16(4), 591–615. https://doi.org/10.1016/j.leaqua.2005.06.003

[v] Kosovich, J., & Wormington, S. (n.d.). Behind the Magic: What Really Drives Participant Satisfaction? Center for Creative Leadership Innovation. Retrieved April 30, 2024, from
https://cclinnovation.org/news-posts/behind-the-magic-confidence-is-the-first-step-to-success/

[vi] Schneider, M., & Hamill, J. (n.d.). Practicing What We Teach: Early Lessons in Listening to and Learning From Participants Leads to Program Effectiveness. Center for Creative Leadership Innovation. Retrieved April 30, 2024, from https://cclinnovation.org/news-posts/practicing-what-we-teach-early-lessons-in-listening-to-and-learning-from-participants-leads-to-program-effectiveness/

[vii] Stawiski, S., Jeong, S., Center for Creative Leadership, & Champion, H. (2020). Leadership Development Impact (LDI) Framework. Center for Creative Leadership. https://doi.org/10.35613/ccl.2020.2040

  • Sarah Stawiski, Vice President, Leadership Research and Analytics
  • John Little, Research Evaluation Analyst