Today’s Revolutionary:
Kathrine Switzer


Kathrine Switzer (b. January 5, 1947) was the first woman to register (as “K.V. Switzer”) and run in the Boston Marathon, in 1967. (Other women had jumped into previous marathons and completed them, but without registering and without numbers on their jerseys.) Most of the other runners in the 1967 race were happy to run with a woman, and the race organizers did nothing until about mile 4, when officials, led by Jock Semple, tried to stop her. “Get the hell out of my race and give me those numbers,” cried Mr. Semple. Kathrine’s boyfriend, also running the race, shielded her, and she continued and finished.

Switzer has since pointed out that nowhere in the rules was there any provision that runners had to be men. It was just assumed. In any case, the rules were revised five years later, in 1972, to explicitly allow women, and Mr. Semple, who had tried to stop her before, was instrumental in having the rules changed.

 

  

Check out over 300 other Revolutionaries here.

 



Savings Groups are catching on in Europe and North America.

Follow this movement, and maybe get involved yourself.

Start by reading the Northern Lights page of Savings Revolution.

Then, if you like, contact us below, and we can talk about how you can form your own groups. We’ll put you in touch with someone who can help you do that!

    Favorite Sites

    Here are some other sites that Kim and Paul read, that we think you might enjoy.


     

    Winkomun: This is a site of the ACAF network, mostly in Europe. They are doing great work and are Northern Lights leaders. Nice video where various members answer the question, “What is a Group?” Also available in español, català, and français. Where else can you get news about Savings Groups in Catalan?

    The SEEP Savings Led Working Group site. Congratulations to SEEP for putting together this comprehensive, easily accessible go-to site on savings groups. Check out their library, their report on outreach by country, and lots of other goodies.

    Village Finance Blog. Brett Hudson Matthews’s thoughtful posts are grounded in an understanding of oral cultures, history, and social dynamics. Recommended for anyone trying to understand what’s really happening in savings groups.

    Institute for Money, Technology and Financial Inclusion at UC Irvine. “Its mission is to support research on money and technology among the world’s poorest people. We seek to create a community of practice and inquiry into the everyday uses and meanings of money, as well as … technological infrastructures”. ‘Nuff said.

    David Roodman’s Microfinance Open Book Blog. David Roodman combines intelligence, honesty, and a sense of humor. He attempts to bring intellectual rigor to the analysis of the impact of financial services, and isn’t afraid to ruffle a few feathers in the process.

    Clean Air, Bright Light. This site by Savings Revolution co-founder Paul Rippey contains useful information about lessons learned in using savings groups to promote clean lighting. Still in development but check it out anyway!

    Center for Financial Inclusion. CFI supports traditional microfinance to become more client friendly, more inclusive, and generally smarter. They have a long-term vision for the sector, and the blog attracts many good writers and thoughtful comments.

    Nanci Lee’s blog. Nanci Lee’s eclectic site includes Savings Groups, and also poetry, travel, links to interesting successes around the world, nature, art, women’s rights, and transformation. A very personal blog, and worth reading.


    Financial Promise for the Poor 

    Financial Promise for the Poor: How Groups Build Microsavings is your go-to book on savings groups. Its contributors are authors you often read on this blog. It covers current innovations in microsavings happening around the world.

    Also, don’t miss…

    Savings Groups at the Frontier, the book inspired by the 2011 Savings Group Summit!

    Buy in UK or US.

    Over the last twenty years, many people have become interested in helping poor people around the world get good financial services. Muhammad Yunus and the institution he founded, the Grameen Bank in Bangladesh, won the Nobel Peace Prize in 2006 for helping start a movement that has brought financial services to millions around the world.

    Banks and microfinance institutions are one way to bring financial services to the poor. Savings Groups, managed by the members and based on savings rather than debt, are another. In fact, we think they are such a good solution that they really are revolutionary.

    Savings Groups are self-selected groups of 15 to 30 women and men who get together to save and borrow. Rather than go into debt to an external institution, they manage their own savings through transparent procedures, and all the money they earn through interest on loans stays in their village and in their group.
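    To make the arithmetic concrete, here is a minimal sketch of a cycle-end share-out, where the interest paid by borrowers is returned to members in proportion to their savings. The names, amounts, and the proportional rule are all hypothetical; real groups set their own procedures.

```python
# Hypothetical share-out: interest earned by the group fund is split
# among members in proportion to what each one saved during the cycle.

savings = {"Awa": 120, "Binta": 80, "Coumba": 200}  # each member's savings (USD)
interest_earned = 60                                # interest paid in by borrowers

total = sum(savings.values())
payout = {
    name: amount + interest_earned * amount / total  # savings back + proportional share
    for name, amount in savings.items()
}
print(payout)  # every dollar of interest stays with the members
```

The point of the sketch is simply that nothing leaves the group: the fund's earnings are redistributed to the same people who saved and borrowed.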

    This seven-minute video is a great short introduction to savings groups:

    A number of international non-profit organizations work with local partners to train people in villages and cities in how to manage their own savings groups. There are now over five million savings group members in Africa alone, and the movement is also growing in Asia and Latin America. (There are even a few groups in Europe and North America).

    Savings Revolution is designed to help you learn more about Savings Groups, and to get involved with the most exciting new approach to bringing safe financial services to people around the world.

    Tuesday, June 28, 2011

    Jenny Aker on Rigor for the Rest of Us

    In her post of June 21st, Kim highlighted the (sometimes) complex world of impact evaluations and the debate over using randomized controlled trials (RCTs) as a way to conduct such evaluations.  She concluded by giving us three options:  to abandon RCTs, to use them (if we have the time and money) or to incorporate their principles into “less expensive” forms of evaluation.

    Yet the focus on RCTs is somewhat of a red herring.  Those who advocate RCTs aren’t advocating for randomization per se – they are (usually) advocating for impact evaluations of development programs (or evaluations that measure the change in a development outcome that can be attributed to the specific intervention or program).  So why do we spend so much time talking about RCTs?

    RCTs are often at the center of the debate on impact evaluations for a simple reason:  they can be a potentially powerful tool for measuring program impact.  Why?  Quite simply, they minimize bias – in other words, by using chance to select participants and non-participants, they increase the likelihood that program participants are as similar as possible to non-participants.  This means that, if we observe differences in outcomes between the two groups, then it is (probably) due to the program, and not to something else (which is the point of impact evaluations).  Yet RCTs are one tool among many for measuring impact, and they aren’t always feasible or appropriate.  

    What do you do if you want to conduct an impact evaluation, but you can’t or don’t want to randomize?  There are plenty of options.  Here are a few key principles for those interested in impact evaluations – many of which NGOs are probably doing already.

    • Principle #1:  Collect data on both program participants and non-participants before and after the program. 

    Suppose your organization collects data on program participants’ corn yields before and after an agricultural program that sought to increase yields by 20 percent.  Corn yields were 100 kg/ha before the program, but dropped to 75 kg/ha after the program.  Did the program fail?  Maybe, maybe not.  Maybe there was a drought during this period, and participants would have been even worse off without the program.  The point is, we don’t know, because we didn’t observe what happened to non-participants. 

    Now suppose you collect data on corn yields for participants and non-participants after the program, and find that yields are higher for participants.  Did the program succeed? Maybe, maybe not. It’s possible that the participant farmers were the most motivated or the richest – and so the higher yields among participants are due to those factors and not to the program. We don’t know where each group started, so we don’t know if the participant farmers were better to start off with.

    By collecting data on participants and non-participants before and after the program, we can control for two important issues in impact evaluations:  1) different starting points (levels) for each group; and 2) general trends over time (which tell us what might have happened without the program), which are captured by information from the comparison group.  
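    The before/after, participant/non-participant logic above is a difference-in-differences calculation. A minimal sketch, reusing the hypothetical corn-yield figures from the example (the non-participant numbers are invented for illustration):

```python
def diff_in_diff(part_before, part_after, comp_before, comp_after):
    # Program effect = change among participants minus change among
    # non-participants. The comparison group's change captures the
    # general trend (e.g. a drought that hurt everyone).
    return (part_after - part_before) - (comp_after - comp_before)

# Participants' yields: 100 -> 75 kg/ha.
# Suppose non-participants' yields fell further, 100 -> 60 kg/ha.
effect = diff_in_diff(100, 75, 100, 60)
print(effect)  # 15: participants ended 15 kg/ha above the no-program trend
```

With only the participants' numbers (100 down to 75), the program looks like a failure; once the comparison group's steeper decline is subtracted out, the estimated effect is positive.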

    •  Principle #2:  Select the program and non-program villages *before* the baseline. 

    Seems simple, right? If you want to follow program participants and non-participants over time, you need to know who the participants are.  In practice, though, it isn’t so simple.  Sometimes NGOs want to do a baseline first to decide who to target.  Or, perhaps the NGO will offer the program to beneficiaries, but can’t be sure that someone will accept the offer (a common issue in microfinance or savings programs).  In these cases, try to identify the treatment group at a “higher” geographic level first – such as the village or neighborhood – and collect data from individuals or households within participating and non-participating villages.

    • Principle #3:  Use clear-cut targeting criteria to choose the program participants.

    At first glance, this principle seems to contradict the whole point of RCTs – where we randomly assign villages, households or individuals to treatment and comparison groups, increasing the likelihood that the two groups will be as similar as possible before the program. 

    In the absence of an RCT, how can these criteria help us?  Suppose that your organization decides to offer savings accounts to individuals with a per capita income below $50.  If an individual earns $50 or less, they are a program participant; if they earn $51 or more, they aren’t.  But how different is someone earning $51 (a non-participant) from someone earning $49 (a participant)?  Not very.  From an evaluation perspective, we could compare those individuals right below the threshold (the treatment group) with those right above it (the comparison group), assuming that they aren’t too different.
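    That threshold comparison can be sketched in a few lines. The incomes, savings outcomes, and the $5 bandwidth below are all made up for illustration; a real analysis (a regression discontinuity design) would use regression and many more observations.

```python
CUTOFF = 50     # program eligibility: per capita income <= $50
BANDWIDTH = 5   # only compare people within $5 of the cutoff

households = [
    # (per capita income in USD, savings after one year in USD)
    (46, 30), (48, 34), (49, 31),  # participants, just below the cutoff
    (51, 22), (52, 25), (54, 21),  # non-participants, just above the cutoff
    (30, 40), (80, 10),            # far from the cutoff: excluded from comparison
]

treated = [s for inc, s in households if CUTOFF - BANDWIDTH <= inc <= CUTOFF]
control = [s for inc, s in households if CUTOFF < inc <= CUTOFF + BANDWIDTH]

# Difference in mean outcomes near the threshold, where the two
# groups should be nearly identical apart from program eligibility.
effect = sum(treated) / len(treated) - sum(control) / len(control)
print(round(effect, 2))
```

The key design choice is the bandwidth: narrow it and the groups become more comparable but smaller; widen it and you gain observations at the cost of comparing people who differ in more than just eligibility.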

    • Principle #4.  Collect data on the what, how and why.  

    One of the main criticisms levied against impact evaluations – and RCTs in particular – is that they provide us with the “what” (did the program have an impact?) but not the “why” (if it did have an impact, through what channels?).  Yet there is nothing inherent in impact evaluations that prevents us from learning about the channels of impact or from using qualitative techniques.  At the end of the day, impact evaluations should not only tell us whether the program worked, but also why it worked (or didn’t).

    Suppose you want to pilot a new savings group model in Mali where group members receive SMS reminders to save, as compared with groups that don’t receive reminders.  You think that the groups that receive reminders will remember to save and save more, hopefully allowing them to invest or build their assets.  So we would like to collect data on household (or individual) investments and assets (the “what”), as well as their savings and whether they used the SMS reminders (the “why”).  We could also ask individuals whether they liked the reminders, or why they were unable to save.  Combining data on outcomes from multiple levels, along with a mix of qualitative and quantitative techniques, can help us better understand both the impact of the pilot program and the reasons behind it.

    • Principle #5. Share successes and failures. 

    It’s human nature: We want to share our successes and perhaps hide our failures.  But by only sharing our success stories (programs that worked) and hiding our failures, we are losing an opportunity to learn.  At best, this means that another NGO repeats the same program somewhere else, wasting time and resources.  At worst, this “waste” prevents scarce resources from being used in another context or program that deserved it more, or encourages clients or poor households to waste their scarce time or resources on something that doesn’t work.

    Bottom line:  If we’re going to do impact evaluations, we all need to do a better job of sharing our results – with clients, communities, NGOs, donors and governments, successes and failures. Of course this might be easier said than done – but it should be a principle nonetheless. 

    Jenny Aker is an Assistant Professor of Development Economics at the Fletcher School, Tufts University. She was previously Deputy Regional Director, Programming, for CRS in West Africa where she oversaw CRS’s microfinance programming.


    Reader Comments (4)

    Thanks for that. I love the idea of sending SMS reminders to save. (I know that wasn't the point of your post, but what a cool idea!)

    Many years ago - it seems like another lifetime - Beth Rhyne, when she was with USAID, said something like, "We don't think it is incumbent on every MFI to prove the positive impact of microcredit." I thought, "Great! Now we can just get out there and lend!"

    The problem was, NO ONE really proved the positive impact of microcredit. There WERE lots of studies - I worked on a couple myself - but they were flawed in various ways. I don't think that the present round of RCTs will answer all questions, and I am aware of a lot of their potential shortcomings - but I am very glad to see the standards of proof getting so much higher.

    Better (I think) one very good study than two pretty good studies, or four not-too-bad studies, or eight quick-and-dirty studies, and so on.

    Tue, June 28, 2011 | Registered CommenterPaul Rippey

    Dear Jenny -

    This is a very thoughtful post. I am always struck by how even simple principles are not followed. I would underscore principle #1 and expand by this: Do the baseline before starting implementation. Seems duh but I often see programs where the baseline is done well into the program, hardly a baseline, which then leaves the evaluators in the difficult position of pretending the mid-term evaluation is a baseline.

    Thu, June 30, 2011 | Unregistered CommenterKim Wilson

    Great summary - I think it's really important to emphasize that RCTs and qualitative methods are not mutually exclusive. I would love to see more links to RCT studies which have a strong qualitative component.

    Fri, July 8, 2011 | Unregistered CommenterHelen Lindley

    I like the last principle of sharing successes and failures. Maybe the next question, in the same spirit of seeking answers to phenomena, is getting an answer to why there is a tendency among players to hide these learning experiences. What incentives exist for organizations that value learning from experience in project implementation?

    Wed, July 13, 2011 | Unregistered Commenterwj
