(no subject)

jmaxwell@ccmail.edc.org
Wed, 03 Jul 96 12:02:12 EST


I want to comment on Paul Gammill's distinction between "soft" and "hard" data
in evaluation. I think this is an oversimplified and usually misleading
distinction, because it contains a number of unstated assumptions about what you
are trying to evaluate and the most valid ways of doing this.

I'll illustrate this with Paul's example of surveys as "soft" and enrollment and
performance as "hard" data. Certainly if you are trying to determine enrollment
and behavioral performance, you want to use the most direct measures you can,
and surveys are a very indirect way of getting at these. But the example raises
the question of what the actual "hard" data consist of. Are they enrollment
statistics that were deliberately inflated by administrators to make the program
look good? Do they include people who were registered for a program but never
actually attended? (I've seen this happen.) Are the "performance" measures
based on people's actual behavior in their subsequent work, or are they pencil-
and-paper test scores that bear little relationship to what they actually will
do? Quantitative data on enrollment and performance can be either "hard" or
"soft", depending on the rigor and validity with which they were collected.

In addition, you may not be interested in enrollment and performance so much as
in a program's impact on participants' values, attitudes, and self-image. In
this case, open-ended interviews or surveys may be the most direct and valid way
of assessing this impact. The point is that ANY type of data has its own
validity threats, and you need to both understand and try to control for these,
as well as decide what you really want to assess and the most direct and valid
way of doing this.

Finally, the distinction between "soft" and "hard" data is usually equated with
the distinction between qualitative and quantitative data in a way that implies
that qualitative data are inherently less valid than quantitative data. I don't
believe this, nor do I think that quantitative data are necessarily more useful
or persuasive than qualitative. It depends on your purposes and audience.

I've simply touched the tip of a very large iceberg here, and there is a
substantial literature on these issues. For further discussion, see:

TD Cook and WR Shadish, Program Evaluation: The Worldly Science. Annual Review
of Psychology, 37, pp. 193-232 (1986).

MQ Patton, Qualitative Evaluation and Research Methods (2nd edition). Sage
Publications (1990).

JA Maxwell, Qualitative Research Design. Sage Publications (1996).

If you have questions about this or would like more information, you can contact
me directly at <jmaxwell@edc.org> as well as reply on edequity.


_______________________________________________________________________________
Subject:
From: edequity@tristram.edc.org at internet
Date: 6/24/96 10:24 AM

Subject: Re: Introduction

Anita,
Good point. In measuring equity there are at least two types of
data and two methods (actually this works for almost any type of
evaluation). The two types of data are "soft" data such as surveys and
"hard" data such as enrollment and performance. To press a point, I have
always found "hard" data to be more useful. "Hard" data provide very
little wiggle room for decision makers, and they also set a definite
performance goal for measuring any improvement. The two methods are
longitudinal and single-event comparisons. Longitudinal comparison is the
most powerful and can even add power to "soft" data like survey data, as
you can measure a trend (or not) in the same student.
This topic really deserves to be a white paper, but the exchange
of ideas on the internet is also useful.
Paul Gammill

On Thu, 20 Jun 1996, Anita P. George wrote:

>
> In response to Paul Gammill's query about measuring equity, I have not had
> direct experience *measuring* equity but I did participate in an AAUW study
> a few years ago that measured the *chilly climate* for women on university
> campuses. As I remember, it is an excellent questionnaire.
>
>
>

