Here is a tale for anyone who has been surprised by evaluation results: As an undergraduate student at American University, I got my first taste of “formal” evaluation by conducting an evaluation of a peer health program on campus. In its “natural” setting, the program was presented to a small group of college students; over milk and cookies, presenters would provide educational information about health issues the students faced. In the name of science, I recruited participants in the residence halls (bribing them with candy) and randomly assigned them to “treatment” (program) or “comparison” (no program) groups. After an arduous semester of working with the institutional review board, coordinating program showings, and tracking down college students to fill out my confidential surveys, I had finally finished. I analyzed my results and was amazed to find that NOTHING HAPPENED! The students showed no change in attitudes or behaviors. They didn’t even show an increase in knowledge! It was hard for me to believe that a program I cared so greatly about did not work.

I have since learned that my findings were more complex than I first thought. There are two main explanations for null results in evaluation: 1) the program really did not have any impact on the intended outcomes, or 2) the program did have an impact on the intended outcomes, but my evaluation design and instruments did not capture it. It is quite possible that my program did not affect students’ knowledge, attitudes, or behaviors around health issues; it was one hour-long program in a busy semester. Another possibility is that the program did have an impact on the intended outcomes, but the effects faded over time. Perhaps if I had administered the surveys immediately after the program, I would have seen improvements in attitudes or knowledge.

The goal of my project was to learn about research design, and in that sense, the project was a success. I also learned to anticipate when an evaluation may have the most impact and to design the evaluation accordingly. Please watch my Research Tidbits column for more information about evaluation timing. If you have any similar evaluation stories to share, I’d love to hear them!