Frequently, analysts or evaluators gather information about workers, children, programs, participants, and so on, and subsequently realize that information is missing on some important variables for several respondents in the sample. For example, in a survey we administered to artists in communities around the country, some artists chose not to report their interest in specific services or their income level. Missing data is an issue in nearly every study, and the evaluator has to decide which methods are most appropriate for dealing with this complex issue.
In this context, it is essential to first understand the nature of the data in order to identify potential problems such as attrition, skip patterns, or random data collection issues. Once the overall data set is understood, the next step is to check the missing data patterns to see whether certain groups or certain responses are more likely to have missing values. These checks help the evaluator identify whether the data is missing completely at random (MCAR), missing at random (MAR), or missing not at random (MNAR). If the data is MCAR, we can simply delete these observations: the estimation process will not be biased or inconsistent, though there may be some loss of precision due to the smaller sample. Missing data is more problematic, however, when it occurs in a nonrandom pattern. In these situations, the only way to obtain unbiased estimates is to use a procedure that accounts for the missing data. Thus, it is important to acknowledge that the consequences of missing observations are contingent on the assumptions about the mechanism behind the missing information. The following table gives a brief description of some methods for dealing with missing values, along with the advantages and disadvantages of each approach.
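The pattern check described above can be sketched in a few lines of Python with pandas. This is only an illustrative toy example; the survey variables here (region, age, income) and their values are hypothetical:

```python
import numpy as np
import pandas as pd

# Hypothetical artist-survey data; "income" is missing for some respondents.
df = pd.DataFrame({
    "region": ["north", "north", "south", "south", "south", "north"],
    "age":    [34, 51, 29, 42, 38, 60],
    "income": [42000, np.nan, 39000, np.nan, np.nan, 55000],
})

# Step 1: how much is missing, per variable?
print(df.isna().mean())  # share of missing values in each column

# Step 2: is missingness related to observed groups? (a rough check of
# whether MCAR is plausible; systematic differences suggest MAR or MNAR)
print(df.assign(income_missing=df["income"].isna())
        .groupby("region")["income_missing"].mean())
```

If the missingness rate differs sharply across observed groups, the MCAR assumption is hard to defend and simple deletion becomes risky.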
Analysts and evaluators must treat missing data properly, because erroneous strategies for dealing with this issue can produce estimates that are biased and inaccurate, leading to invalid conclusions. There is no single recipe for dealing effectively with this problem; however, many researchers and practitioners recommend starting by avoiding the problem as much as possible, minimizing missing values during the data collection process. It is also important to carefully inspect patterns of missing values and keep track of why a value is missing. Additionally, it is necessary to report the number of cases dropped from the analysis and the reasons for dropping them. Finally, it is important to determine whether the missing values are likely to bias the findings in order to select the appropriate method. Approaches to dealing with missing values are not an end in themselves; rather, they are among the many tools that help analysts and evaluators report results and methods clearly and honestly, so the audience can draw accurate conclusions.
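As a concrete illustration of two of the simpler approaches, listwise deletion and mean substitution, here is a minimal Python sketch. The income values are made up for the example:

```python
import numpy as np
import pandas as pd

# Hypothetical income responses; two respondents did not report income.
df = pd.DataFrame({"income": [42000.0, np.nan, 39000.0, np.nan, 55000.0]})

# Listwise deletion: drop incomplete rows. Unbiased only under MCAR,
# and the smaller sample reduces precision.
complete_cases = df.dropna()

# Mean substitution: fill with the observed mean. Keeps the full sample,
# but artificially shrinks the variance of the imputed variable.
mean_filled = df.fillna(df["income"].mean())

print(len(complete_cases))                               # 3 complete cases remain
print(mean_filled["income"].var() < df["income"].var())  # variance shrinks: True
```

More principled approaches such as multiple imputation or maximum likelihood generally perform better under MAR, but they require specialized routines beyond this sketch.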
Have you had to deal with missing values in the past? What approaches have you taken to solve this problem? Which approaches work better than others for you? Please feel free to share your thoughts on this issue!
Strong introductory readings for this topic include:
Afifi, A. A., & Elashoff, R. M. (1966). Missing observations in multivariate statistics I. Review of the literature. Journal of the American Statistical Association, 61(315), 595-604.
Acock, A. (2005). Working with missing values. Journal of Marriage and the Family, 67(4), 1012-1028.
Graham, J. W. (2009). Missing data analysis: Making it work in the real world. Annual Review of Psychology, 60, 549-576.
Pigott, T. (2001). A review of methods for missing data. Educational Research and Evaluation, 7(4), 353-383.
Schafer, J. L., & Graham, J. W. (2002). Missing data: Our view of the state of the art. Psychological Methods, 7(2), 147-177.
Scheffer, J. (2002). Dealing with missing data. Research Letters in the Information and Mathematical Sciences, 3(1), 153-160.
Posted: June 21st, 2013 | Author: igmain | Filed under: About evaluation, Knowledge exchange | Tags: Data analysis, Data Collection, deletion methods, dummy variable method, evaluation, Improve Group, Jose Casco, listwise deletion, maximum likelihood methods, mean substitution, Missing Data, missing information, missing not at random, Missing Values, mode substitution, Model based approach, multiple imputation, pairwise deletion, research, single imputation, single regression
The National Council on Teacher Quality (NCTQ) recently released its Teacher Prep Review 2013 Report, a two-year research study evaluating more than 1,100 colleges and universities that prepare elementary and secondary teachers. The ultimate goal of the study is to allow teachers, parents, and school districts to compare teacher preparation programs and determine which are doing the best job of training new teachers, as well as to provide a basis on which these programs can determine where they need improvement.
The NCTQ graded program effectiveness on up to 18 standards on a scale of zero to four stars. Examples of standards used include student-teaching and content knowledge. The 18 standards fall into four buckets:
- Selection: The program screens for measurable attributes candidates bring to programs, principally academic aptitude
- Content Preparation: Content preparation in the subject(s) the candidate intends to teach
- Professional Skills: Acquisition and practice of skills in how to teach
- Outcomes: The program’s attention to outcomes and evidence of impact (p. 37 of the report)
In total, only four programs received a four-star overall rating, all of them in secondary teaching preparation. Results of the study indicated a large disparity in what programs expect teachers to know and demonstrate, as different programs scored high and low on varying standards.
Both the methodology and findings have stirred up some controversy. For example, the method does not highlight specific areas in which a teacher preparation program excels: certain programs earned four stars on several standards, but low scores on other standards dragged down their overall score, deeming the programs weak. Another criticism is that the review grades programs only by their methods, not their actual results (i.e., teacher effectiveness in the field after graduation).
 Disputed Review Finds Disparities in Teacher Prep by Stephen Sawchuck of Education Week
Posted: June 19th, 2013 | Author: igmain | Filed under: Education, Knowledge exchange | Tags: content preparation, Dan Goldstein, education, educational institutions, elementary preparation, elementary teachers, evaluation, Improve Group, k-12 teachers, National Council on Teacher Quality, outcomes, professional skills, report, review, schools, secondary preparation, secondary teachers, selection, Teacher Prep Review 2013 Report, teacher preparation programs, teachers, universities, youth
It’s possible that highly effective teachers make the greatest difference in student academic performance. Research has shown that some highly effective teachers bring about above-average gains in their students, year after year. Once large data sets allowed researchers to identify highly effective teachers, studies moved on to determining what characteristics made them so effective.
Research about teacher effectiveness has become more and more important as our educational system becomes more sophisticated and complex and the stakes of student success (and failure) grow. There is a growing demand for teachers with the abilities to have a large impact on the education of their students.
There are two critical challenges with finding and/or developing effective teachers, however. The first challenge is uncertainty about what characteristics effective teachers share. Recent research has identified effective teachers (usually using test scores) and then searched for shared characteristics, or has taken general qualities associated with effectiveness and applied them to teaching. For example, Stanford University describes effective teachers as organized, analytic, dynamic, and interactive. Another study found students prefer teachers who are respectful, responsive, and knowledgeable. A recent blog post, Defining Teaching Effectiveness, synthesized some of the literature and found common themes.
The second challenge is identifying the best way to measure teacher effectiveness. While test score data is a wonderful resource for identifying outliers (teachers who consistently deliver exceptional results, year after year), its high-stakes, single-point-in-time nature also makes it susceptible to fraud or anomalies, such as the recent cheating scandal in Atlanta. More comprehensive and nuanced evaluations can minimize these risks. For example, student achievement can be combined with classroom observations, teacher content knowledge, and perceptions held by students, parents, and peers, as in the system developed in Memphis schools (Define and Measure Effective Teaching). In the early 2000s, Cincinnati Public Schools developed a Teacher Evaluation System (TES) in which individual teachers are observed in their classrooms every fifth year by trained peer evaluators who have themselves scored high on TES performance. Teachers are evaluated on their instructional strategies and content knowledge. In 2011, a study examined the effect that top-scoring teachers had compared to their lowest-scoring peers (Evaluating Teacher Effectiveness): students assigned to high-performing teachers did better than their peers on standardized tests.
What methods are your local schools taking towards improving teacher effectiveness? Do you have any insightful methods of effectiveness evaluation? Please comment or share any ideas you may have!
William L. Sanders and Sandra P. Horn from the University of Tennessee’s Value-Added Research Center have been leading researchers in the field. A synopsis of their work can be found at http://www.sas.com/govedu/edu/ed_eval.pdf
Stanford Teaching Commons, https://teachingcommons.stanford.edu/resources/teaching-resources/characteristics-effective-teachers
Smyth, Ellen, What Students Want: Characteristics of Effective Teachers from the Students’ Perspective. http://www.facultyfocus.com/articles/philosophy-of-teaching/what-students-want-characteristics-of-effective-teachers-from-the-students-perspective/
Posted: June 3rd, 2013 | Author: igmain | Filed under: About evaluation, Knowledge exchange | Tags: Dan Goldstein, Defining Teacher Effectiveness, education, evaluation, Improve Group, research, Stanford University, teacher effectiveness, Teacher Evaluation System
Every year, when the clock strikes midnight on January 1st, millions of people begin their New Year’s resolutions. But how often do people really follow through on their goals? It is now May, and it is probably safe to say that a majority of people have given up on their resolutions. This is not just the case for New Year’s pledges; it occurs with any goal we set that truly challenges us.
Take comfort: your goals are not unattainable, but they do take a lot of planning. Simply stating your goal is not enough to achieve it. The following is a practical approach I learned for achieving goals. It can be applied whether your goal is something small, like losing a few pounds, or something big, like starting your own business. Try not to focus on the time it takes to achieve the end result, but on the satisfaction you will get at each small milestone along the way.
The first step in this approach is to outwardly state your goal. You need to know exactly what it is that you want and when you want to achieve it by. Think realistically. If you want to lose a few pounds, set your goal for a pound or two a week. If your goal is to become the next CEO of your company, set the goal for five to ten years. It is your dedication to the goal that will allow it to pay off. And be sure to write this plan down; specifically writing down what you want to achieve increases success. A Dominican University study on writing down goals concluded that accountability, commitment, and written goals led to significantly greater goal achievement.
Now that you know what your goal is and when you want to achieve it, you need to visualize it. Think: what do I need to do in order to achieve this goal? This step takes the most planning. Make a list of all the things that will contribute toward achieving your goal. Then, set times for when and where you are going to implement the activities on your list. Do not stop after your first try; continue to refine and update this list. Perseverance and continuity with these tasks will set you above the rest.
After mental visualization it is time to start physically visualizing. What do I mean by that? I mean that you literally need to have tangible, visual cues for your goal and the tasks needed to accomplish it. Set reminders on your phone, and place Post-it note reminders throughout your house. Make accomplishing this goal your priority by keeping its tasks ingrained in your mind every single day.
The final and most important step: Reward yourself for your progress. Set a reward that you will occasionally give yourself as you see progress. Have you been properly following your diet without cheating? Then reward yourself at the end of the month by getting a piece of clothing in a new size. If you have not been following your plan, forgive yourself, review what may have gotten you off course, and refocus your efforts. If you plan these rewards in advance it will help motivate you towards achievement. Think of each of these occasional rewards as small incentives towards attaining your overall goal.
Goals are often on our mind at the Improve Group, as many of our clients ask us to help them develop and measure goals. Liz Radel Freeman previously blogged about goals and New Year’s resolutions here.
Do you have any other promising practices towards achieving your goals that I have not outlined? Have your previously tried this method? Please share your thoughts!
Posted: May 2nd, 2013 | Author: igmain | Filed under: Knowledge exchange | Tags: achieving goals, Dan Goldstein, Dominican University study, IG blog, Improve Group, Liz Radel Freeman, New Year's resolutions, visualizing
The Farm to School program, funded by the U.S. Department of Agriculture, provides planning grants for schools just beginning Farm to School activities, along with funds for schools hoping to expand their existing work. Additionally, eligible nonprofits, Indian tribal organizations, state and local agencies, and agricultural producers or groups of producers may apply for support service grants to conduct trainings, create complementary curriculum, or further develop supply chains, among other activities. The deadline to apply is April 24, 2013. For more information, please click here. To access the 2014 grant application, click here.
Posted: April 17th, 2013 | Author: igmain | Filed under: Grant Gazing, Knowledge exchange | Tags: curriculum, Farm to School, funding, grants, Improve Group, nutrition, schools, Susan Murphy, U S Dept of Agriculture
The Improve Group is always looking for creative ways to measure and evaluate program outcomes and their long-term impacts. Ripple effect mapping brings something new to the table by framing analysis around the initial program outcomes and how they connect to and interact with the larger service area, community, etc. It is a participatory strategy for measuring program outcomes, particularly those requiring collaboration among stakeholders or sectors.
Ripple effect mapping is usually utilized 12 months post-program completion and aims to capture socially complex interactions, social capital outcomes, and multi-causality. Steps in conducting ripple effect mapping include:
- Identification of the program intervention
- Scheduling a group mapping event (about 2 hours) and inviting participants (a mix of stakeholders)
  - Participants must have a clear understanding of what the program is and why it exists
  - Moderately sized group, usually 12-20 participants, plus 2 moderators: 1 facilitator and 1 mapper
  - Utilize appreciative inquiry interviewing
- Holding the group mapping event
  - Map live during the discussion
  - Recommendation: probe using the Community Capitals Framework (Cornelia and Jan Flora, 2008, http://www.soc.iastate.edu/staff/cflora/ncrcrd/capitals.html)
- Follow-up interviews
- Cleaning, coding, and analysis
  - Can add to the original map 1 year later or on an ongoing basis to continually capture impacts; a developmental evaluation can emerge from this
Before choosing this method for gathering information about program impacts, carefully consider the following benefits and challenges of implementing this strategy.
Benefits:
- The ripple effect mapping is participatory and engages a mix of stakeholders or sectors. The appreciative inquiry activity, in particular, motivates participants to think about successes of the intervention and continue to collaborate and build connections into the future.
- Including multiple stakeholders allows for cross-validation from members of the group. The live activity encourages people to comment as topics or outcomes arise in the discussion.
- The discussion that results from the mapping activity captures both intended and unintended impacts of an intervention. The results can help a client or an organization think about outcomes that they may not have identified when designing the intervention.
- Ripple effect mapping is a low cost option for collecting data. The group session is more cost-effective than conducting many separate interviews. Mapping software is available online for free! Some examples of free mind mapping software include XMind, Freemind, and MindMeister.
- The final map is a useful graphic to help clients and organizations understand and communicate program impacts to their stakeholders. In addition, the mapping results can be part of an ongoing evaluation process that can be used to track changes and new developments.
Limitations and Challenges:
- It is important to have a skilled facilitator to lead the ripple effect mapping activity. The facilitator should understand what information is most important to collect, and be clear about the types of “probes” or follow-up questions to ask in order to gather this information from participants. It is also ideal to have an external facilitator rather than program staff to ensure participants feel comfortable sharing both positive and negative impacts.
- There is potential for inconsistent implementation. The facilitator as well as others who assist in this process (moderators and mappers) should remain the same through all iterations of the mapping activity.
- There is a risk of bias as a result of participant selection. Not all participants may have information about all of the outcomes experienced by the group they represent. This can be avoided by carefully selecting participants and conducting supplementary interviews with additional stakeholders from that group.
The snapshot below is an example of a segment of a ripple effect map for a fictional park clean-up project created using XMind:
To sum up, ripple effect mapping is a unique data collection approach that has the potential to be a powerful evaluation tool. When implemented carefully, the results can greatly benefit a program, its surrounding community, and inform future decisions across stakeholder priorities or sectors.
Powerpoint and Sample Agenda: http://comm.eval.org/eval/resources/viewdocument?DocumentKey=a04a9a28-6c0b-4953-91f9-ed753f120f3f
Posted: March 20th, 2013 | Author: igmain | Filed under: About evaluation, Improve Groove Newsletter, Knowledge exchange | Tags: AEA, American Evaluation Association, Cami Connell, Danielle Hegseth, evaluation, Freemind, Improve Groove, Improve Group, Leah Goldstein Moses, MindMeister, newsletter, Ripple effect mapping, Xmind