slideshare ppt on research

Tuesday 20 August 2013

Let's look at two fundamental criteria of research measurement: reliability and validity.

Sociologist James A. Quinn states that the tasks of scientific method are related directly or indirectly to the study of similarities of various kinds of objects or events. One of the tasks of scientific method is that of classifying objects or events into categories and of describing the similar characteristics of members of each type. A second task is that of comparing variations in two or more characteristics of the members of a category. Indeed, it is the discovery, formulation, and testing of generalizations about the relations among selected variables that constitute the central task of scientific method.
 
Fundamental to the performance of these tasks is a system of measurement. S.S. Stevens defines measurement as "the assignment of numerals to objects or events according to rules." This definition incorporates a number of important distinctions. It implies that if rules can be set up, it is theoretically possible to measure anything.
 
Further, measurement is only as good as the rules that direct its application. The "goodness" of the rules bears on the reliability and validity of the measurement, two concepts which we will discuss further later in this lab. Another aspect of Stevens' definition is the use of the term numeral rather than number. A numeral is a symbol and has no quantitative meaning unless the researcher supplies it through the use of rules. The researcher sets up the criteria by which objects or events are distinguished from one another, and also the weights, if any, which are to be assigned to these distinctions. The result is a scale. We will save the discussion of the various scales and levels of measurement for next week. In this lab, our discussion will focus on the two fundamental criteria of measurement, i.e., reliability and validity.
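Stevens' point that a numeral has no quantitative meaning until the researcher supplies a rule can be illustrated with a small sketch. The coding rule and answer categories below are hypothetical, invented for demonstration:

```python
# A minimal, hypothetical illustration of Stevens' definition: measurement
# assigns numerals to objects or events according to rules. Here the rule
# maps survey answer categories to numerals; the numerals mean nothing
# apart from this rule.

CODING_RULE = {
    "never": 0,
    "rarely": 1,
    "sometimes": 2,
    "often": 3,
    "always": 4,
}

def assign_numeral(answer):
    """Apply the coding rule; answers not covered by the rule are missing."""
    return CODING_RULE.get(answer.strip().lower())

print(assign_numeral("Often"))  # -> 3
print(assign_numeral("maybe"))  # -> None: no rule covers this answer
```

The dictionary is the "rule"; changing it changes the scale, which is exactly why the goodness of the rules determines the quality of the measurement.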
The basic difference between these two criteria is that they deal with different aspects of measurement. This difference can be summarized by two different sets of questions asked when applying the two criteria:
Reliability:
a. Will the measure, employed repeatedly on the same individuals, yield similar results? (stability)
b. Will the measure, employed by different investigators, yield similar results? (equivalence)
c. Will a set of different operational definitions of the same concept, employed on the same individuals using the same data-collecting technique, yield highly correlated results? Or, will all items of the measure be internally consistent? (homogeneity)
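Two of these reliability questions can be answered empirically with standard statistics: stability via a test-retest correlation, and homogeneity via Cronbach's alpha. The sketch below uses invented respondent data purely for illustration:

```python
# Hypothetical illustration of two reliability checks:
# stability (test-retest Pearson correlation) and
# homogeneity (Cronbach's alpha for internal consistency).
# All data below are invented for demonstration.

def pearson_r(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: one score list per item, same respondents in each.
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(items)
    n = len(items[0])
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(variance(it) for it in items) / variance(totals))

# Stability: the same measure applied twice to the same five respondents.
time1 = [4, 7, 2, 9, 5]
time2 = [5, 6, 2, 9, 4]
print(round(pearson_r(time1, time2), 3))  # a high r suggests stability

# Homogeneity: four items intended to tap the same concept.
items = [[3, 5, 1, 8, 4], [4, 6, 2, 9, 5], [2, 5, 1, 7, 4], [4, 7, 2, 8, 5]]
print(round(cronbach_alpha(items), 3))  # alpha near 1 suggests consistency
```

Equivalence would be checked the same way as stability, except the two score lists come from different investigators applying the measure rather than from two points in time.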
Validity:
a. Does the measure employed really measure the theoretical concept (variable)?
 
EXAMPLE: GENERAL APPROACHES TO RELIABILITY/VALIDITY OF MEASURES

1. Concept: "Exposure to Televised News"
 
2. Definition: the amount of time spent watching televised news programs
 
3. Indicators:
a. frequency of watching morning news
b. frequency of watching national news at 5:30 p.m.
c. frequency of watching local news
d. frequency of watching television news magazine & interview programs
 
4. Index:
Design an eleven-point scale, where zero means "never watch at all," one means "rarely watch," and ten means "watch all the time." Apply the eleven-point scale to each of the four indicators by asking people to indicate how often they watch each of the above TV news programs. Combining responses to the four indicators/survey questions according to certain rules, we obtain an index of "exposure to televised news programs," because we think it measures TV news exposure as we defined it above. A sum score of the index or scale is calculated for each subject, ranging from 0 (never watch any TV news programs) to 40 (watch all types of TV news programs all the time). Now, based on the empirical data, we can assess the reliability and validity of our scale.
 
