Anne Nordstrom, PhD

Idea Exchange

Evaluating Your Outdoor Education Program

Introduction

All of us know that getting students out of the classroom works. Call it what you will (place-based education (PBE), environmental education, outdoor learning, meaningful watershed education experiences), this method teaches students to:

• Connect the dots

• Actively learn about their place in communities, both human and ecological

• Deploy tools that allow them to understand and make tangible improvements

• Develop a fine-tuned understanding of the systems in which they live

All this provides them with experiences that inspire, influence, and have a life-changing impact. As Lauren, a GOMI alumna, writes elsewhere in this edition: “...GOMI has inspired me to be a lifelong activist. Through GOMI I recognized that environmental change does not come through watching documentaries, but rather through attending conservation commission meetings, educating the public, and performing citizen science.” How can you argue with that? Lauren, and numerous students along with her, have developed a set of personal values that stand for social and environmental justice and are rooted in a scientific, experiential approach to problem-solving. They’ve learned to see the big picture and the intersecting, not insubstantial, role that humans play in stewarding the earth.

“So what?” (Why Evaluate?)

You understand and have seen first-hand that PBE makes a difference in the lives of students, but how do you communicate that knowledge to your community partners, administrators, funders, and parents? When you want to convince others that this educational method is a worthy endeavor they should support, what case can you make, and what tools do you have? How do you answer the question “so what?” when there are so many demands on so few resources and you want to prove what you know: that students who have these experiences are more likely to go out and bring positive change to the world? As Lauren writes, “...and so I found myself protesting the Dakota Access Pipeline, marching in the Climate March in Washington, D.C., and teaching school-age children about climate change.”

Evaluation research is uniquely positioned to answer two questions: “How is it going?” and “What good did it do?” The answers to these questions can and should be used to:

• Justify funding.

• Advocate for program continuation.

• Identify which aspects of the program are most and least effective.

• Make mid-course adjustments.

What evaluation cannot do is everything at once (just like the rest of us). Evaluators must frame the evaluation according to what is relevant to the program's goals and objectives and:

• Develop relevant questions.

• Choose appropriate methods.

• Communicate findings promptly to the people who can best use them.

Evaluation vs. Educational Assessment

Evaluation, as classically defined, is a systematic method for obtaining and assessing information about human activity that can be used by planners, implementers, funders, participants, policy-makers, and other stakeholders to make decisions about the effectiveness of the activity. As educators, you are no strangers to planning and evaluation. You expect that, if you present subject matter correctly, students will learn what is taught, so you plan your curricula or projects with specific goals, objectives, and outcomes in mind. These can be social or academic, or a combination of the two, as they are in place-based education. In any event, you judge the effectiveness of your approach through some form of measurement, often tests. In formal education settings, success is often narrowly defined and measured by standardized tests that yield a numeric assessment of students’ abilities. Assessments like these have their place, but they are often given too much weight in determining resource allocation and the success or failure of a curriculum, program, school, student, or teacher. Moreover, these metrics cannot capture many of the contextual and qualitative factors that nurture students’ performance and growth, and these are often key outcomes for place-based education.

To get at what works best in improving performance and outcomes (qualitative and quantitative) and to share results meaningfully, evaluation can take a broader approach. This requires four simple, but not easy, steps, each with many components:

1. Ask questions of substance.

2. Gather relevant data.

3. Report answers.

4. Provide opportunities for mid-course adjustment.

When it comes to place-based education, which is a community-based, holistic, and systemic approach to learning, having a strong evaluation component is critical to showing its benefits to others. Research has shown that place-based education can have significant effects on students, including:

1. Better academic performance and graduation rates.

2. Better engagement with teachers.

3. Greater student independence, self-reliance, and social skills.

4. Increased likelihood of entering a STEM field.

5. Improved connection with community.

6. Greater knowledge of nature and of their own relationship to it.

And this type of learning is FUN! (For more on this, see Resources and References at the end of this article.)

Start with a Plan

An evaluation plan can be formal or informal, but is useful in either case as it guides you through the basic steps mentioned above. Filling out a simple matrix will help you proceed and keep you on track (Herman, Morris and Fitz-Gibbon, 1987).

Simple Planning Matrix

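A minimal sketch of such a matrix (the columns here are illustrative, not the exact Herman, Morris and Fitz-Gibbon layout) gives each evaluation question its own row, with columns such as:

• Evaluation question

• Information needed, and its source

• Collection method

• Who collects it, and when

• How the findings will be reported and used

For example, the question “How have participants’ attitudes toward their watershed changed?” might be paired with pre- and post-program student responses, a short survey plus a focus group, the program educator collecting at the first and last sessions, and a year-end report to funders and partners as the use.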

Asking Questions

What do you want to know? Do you have to collect information for funders, community partners, your administration, or other educators? Your target audiences, or stakeholders, often determine the focus of the questions you ask.

What are the program goals and objectives? What do you want students to be aware of, know, and do? Specifying these outcomes as clearly as possible is an important first step in focusing the evaluation. Evaluators like to start by articulating a theory of change, laying out logically all the resources, processes, and activities that lead to the desired changes in attitudes, knowledge, and behavior. Graphically portraying these relationships in a logic model is an effective exercise in clarifying how the program is supposed to work. Logic models can be complex or simple, but they always contain the same basic pieces.
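A generic skeleton, with purely illustrative examples in parentheses, might read:

Inputs (staff, funding, community partners) → Activities (field studies, monitoring projects, public presentations) → Outputs (sessions delivered, students and community members reached) → Short-term outcomes (changes in awareness, knowledge, and attitudes) → Long-term outcomes (changes in behavior and in community and environmental conditions)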

Use your model to help you zero in on evaluation questions to be answered, for example:

  • Who is participating in your program?

  • What materials and activities are most effective?

  • What do the participants think and feel about being in the program?

  • How have they changed as a result of participating?

  • What were the greatest challenges in carrying out the program goals?

  • What could be done better?

Gathering Data

How will you get the information? Who is responsible, when will they do it, and whom or what will they get it from? Evaluators have a variety of tools for answering evaluation questions, and we love to use a combination of storytelling, images, and statistics. Our goal is to support quality improvement and to give you timely information that will help you make your program even better. Data collection tools include interviews, surveys, focus groups, videos, participant observation, case studies, and documents and records; they are best used in combination to tell a complete story in both breadth and depth. You will find that different methods lend themselves better to different questions. For instance, a video interview with participants in situ gives a clear perspective on how they think and feel, whereas a survey is better suited to discovering what students know. Several key conditions must be met when collecting data: it should be collected rigorously, systematically, and with the evaluation question(s) in mind. Nothing is more cringe-worthy than when a well-meaning planner chirps, “Let’s do a survey!” without having first described the purpose and use of the results.

Reporting and Using the Findings

Our ultimate goal is to report findings in ways that are meaningful to planners and educators, that help you see where course adjustments are needed and make them, and that provide compelling examples of how the program is working. This means providing an attractive, easy-to-digest, relatively short report delivered at just the right time. Just the right time includes during program planning and implementation, periodically at strategically selected milestones in the program’s development (formative evaluation), and at the program’s end (summative evaluation).

By the time we reach the conclusion, we should have a good idea of how answering your evaluation questions will help you improve, justify, promote, or celebrate your program, and who the right audience(s) will be.

When we planned your evaluation, we strove to ask thoughtful questions that would inform and move the program forward. The job now is to help your audience understand the results and take appropriate action based on the facts. This includes helping them think of the lessons learned from the evaluation as a feedback loop: program success is not achieved all at once, and incremental, iterative information can guide progress and keep it on track.

Conclusion

Evaluation and place-based education are not dissimilar. The best evaluations are dynamic, taking context and participants into account and helping programs evolve to achieve their ideal outcomes. Place-based education helps students learn about the dynamics of the systems in which they live, become aware of the role of relationships, and act to make positive changes for the whole.

Resources

Resources for Environmental Education Teachers

www.meera.snre.umich.edu

Best Practices Guide to Program Evaluation for Aquatic Educators

Recreational Boating and Fishing Foundation

www.takemefishing.org

Does Your Project Make a Difference?

NSW Office of Environment and Heritage

www.environment.nsw.gov.au

Designing Evaluation for Education Projects

National Extension Water Outreach Education

www.fyi.uwex.edu

Evaluating EE in schools: A Practical Guide for Teachers (Oldie but goodie)

UNESCO-UNEP International Environmental Education Programme

www.unesdoc.unesco.org

References

Fly, M.J. (2005). “A Place-Based Model for K-12 Education in Tennessee.” www.web.utk.edu

Herman, J.L., Morris, L.L., & Fitz-Gibbon, C.T. (1987). Evaluator’s Handbook. Newbury Park, CA: Sage Publications.

Friedman, M. (2014). Results Accountability Workshop. Fiscal Policy Studies Institute.

 

Anne Nordstrom

Anne is a sociologist who specializes in solving research design, statistical analysis, and program evaluation problems. She brings more than 25 years of experience to the challenge of measuring success and progress in change efforts in the public and non-profit sectors, especially in higher education, community health, and the environment.

She takes great pleasure in helping clients develop and refine their questions and then choosing the best analytic tools to answer them, providing comprehensive interpretation of the data to inform decision-making about legislation and policy, curriculum choices, and program improvement.

She has extensive experience working qualitatively with all types of stakeholders, including community groups, students, faculty, researchers, CEOs, councils, and committees, to solicit input, foster engagement, and feed findings back for real change.

Anne could be considered a lifelong learner: she received her Ph.D. from Boston College, an M.A. in Community Social Psychology from UMass Lowell, and a B.A. from Boston College, and she graduated with the first Green MBA cohort from Antioch University New England in May 2009.

She also has a life, enjoying flying, water sports, gardening, and adventures.
