Saturday, March 30, 2019

MediaResearch: Validity, Reliability, Etc.: Internal and External Validity + Sampling (W12-P2) Sp19


Sampling

Sampling is the process of selecting subjects for a study.  Generally, the subjects are the specific people studied in an experiment or surveyed.  The sample is chosen from a larger population.

Why sample?  The population you are interested in is usually too large to study in its entirety, so you have to study just a part of that population (a sample).

What could be some problems with sampling (examples of poor sampling)?  A biased sample?

To reduce the problems of poor sampling, you want to use random sampling when you can.  In random sampling, all members of the population have an equal chance of being selected for the sample.
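The idea of random sampling can be sketched in a few lines of Python. The population list and sample size below are hypothetical, purely for illustration:

```python
import random

# Hypothetical population: 1,000 numbered students
population = [f"student_{i}" for i in range(1000)]

# random.sample gives every member an equal chance of selection
# and never picks the same member twice.
sample = random.sample(population, k=50)

print(len(sample))  # 50 subjects drawn for the study
```

Because every member has the same chance of selection, a sample drawn this way avoids the systematic bias of, say, surveying only the people who happen to walk by.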

------------


What are internal and external validity?

With this type of validity we are looking at the validity of the overall study, not just the validity of the instruments used to measure the variables.

You are asking the question: Is it a valid study? Not: Is it a valid instrument?



Internal validity: Are the conclusions to be trusted for the particular study?  Or, are the results valid for the subjects in your sample?   For a visual representation, look at the orange circle at the top. The black dots inside the orange circle are the subjects in the sample.


External validity: To whom do the conclusions apply? This is the generalizability of the findings: can the results be generalized to the larger population?   For a visual representation, see the orange circle within the pinkish-purple circle.  The orange circle represents the sample and the pinkish-purple circle represents the larger population.




Question: Could you have very poor internal validity, but good external validity?

--------

If something goes wrong in a study, who can you blame it on?   That is, if the study is not getting valid results, who can you blame it on? And you can't blame it on the alcohol.  :)

What are some threats to a study’s internal validity?   Or, put another way, where can you put the blame?
  • Threats due to the researcher (e.g., the researcher influences the results)
  • Threats due to how the research is conducted (e.g., inaccurate or inconsistent procedures, a poorly designed survey)
  • Threats due to the research subjects
    • Hawthorne effect
    • mortality - losing people from a study (due to death, etc.)
    • maturation - internal change explains behavior.  In studies done over a period of time the subjects may change.
Example: A 4-year study of film viewing and levels of prejudice. Subjects = college students.
See any possible threats to internal validity?


What are some threats to a study’s external validity?
  • Research procedures don’t reflect everyday life
    • ecological validity
  • Different findings with the same sample
    • replication is important
  • Poor sampling


Any problems with studies done at universities?
Generalizability problem?


Share this post with others. See the Twitter, Facebook and other buttons below.
Please follow, add, friend or subscribe to help support this blog.
See more about me at my web site WilliamHartPhD.com.






MediaResearch: Validity, Reliability, Etc.: Definitions (Written and Visual) (W12-P1) Sp19


You operationalize your variables in order to measure them.
So, now let's talk about measurement and related concepts.

When measuring your variables you may ask yourself...
Is my measure “on target”?   That is, are my measurements accurate?
Do my measures “cluster together”?  That is, am I getting consistent results?

But what does that mean?

What we are talking about is validity and reliability.

Let's start by thinking about how to measure prejudice in people. How would you do that? A survey? What would the questions be on the survey?  Your measure of prejudice needs to be valid and reliable.  Are you sure it is valid and reliable?   Can you accurately measure the level of prejudice in a person with your survey?  Does your survey get the same results with the same person each time?


Validity: “the extent that scales or questions do measure what they are thought to measure” (Stacks & Hocking).

You can think of validity using a target metaphor.  Is it on target (i.e., near the bull's-eye)?
Each "shot" on the target represents a measurement.




Or think of a bathroom scale.  What does it mean to say a bathroom scale is valid or not?


Photo by elcamino73. Used under Creative Commons.

If you get on your bathroom scale and it says 3 pounds or 1723 pounds, then your scale is broken. It is not right.  It is not valid.  Not only is your scale broken, the results (3, 1723) are not valid measures of your weight.

------------------

A related concept to validity is reliability.

Before looking at a formal definition of reliability, just think of the everyday use of that word.  If you say your friend is reliable, what does that mean?   It means you can count on your friend. Every time that you call on that friend they are there.  Not sometimes.  All the time.  They are consistent.  The formal definition of reliability is similar.

Reliability: “the extent to which measurement yields numbers (data) [that] are consistent, stable, and dependable” (Stacks & Hocking).

Again, let's use some metaphors to see the concept.

What about a bathroom scale and reliability?  What does it mean to say that a bathroom scale is reliable?


Photo by elcamino73. Used under Creative Commons.




Can an instrument be reliable, but not valid? That is, can the measurements cluster together, but not be on target?

If you had a bathroom scale that was reliable, but not valid, what results would you get if you weighed yourself several times?




Example

Let's say we are interested in the topic of communication apprehension.  More specifically, we are interested in the relationship between gender and communication apprehension.  Do men or women have higher levels of communication apprehension?  How would we go about answering that question?

How would we measure communication apprehension in our subjects (the people we are studying)?  We could observe.  What about a survey?  Yeah, let's do a survey.  Something like below.

-------------------------------------
Conversation Apprehension Scale

1. While participating in a conversation with a new acquaintance, I feel very nervous.
Strongly Agree --- Moderately Agree --- Neutral --- Moderately Disagree --- Strongly Disagree

2. I have no fear of speaking up in conversations.
Strongly Agree --- Moderately Agree --- Neutral --- Moderately Disagree --- Strongly Disagree

3. Ordinarily I am very tense and nervous in conversations.
Strongly Agree --- Moderately Agree --- Neutral --- Moderately Disagree --- Strongly Disagree

4. Ordinarily I am very calm and relaxed in conversations.
Strongly Agree --- Moderately Agree --- Neutral --- Moderately Disagree --- Strongly Disagree

------------------------------------
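One way to turn a survey like this into a single number is to score each response from 1 to 5 and reverse-score the positively worded items (items 2 and 4 here), so a higher total always means more apprehension. This is a minimal sketch; the scoring convention and the respondent's answers are hypothetical:

```python
# Map each answer label to a number: Strongly Agree = 5 ... Strongly Disagree = 1
SCALE = {"Strongly Agree": 5, "Moderately Agree": 4, "Neutral": 3,
         "Moderately Disagree": 2, "Strongly Disagree": 1}

# Items 2 and 4 are worded positively ("no fear", "calm and relaxed"),
# so they are reverse-scored: 5 becomes 1, 4 becomes 2, and so on.
REVERSED = {2, 4}

def apprehension_score(answers):
    """answers: dict mapping item number (1-4) to a response label."""
    total = 0
    for item, label in answers.items():
        value = SCALE[label]
        if item in REVERSED:
            value = 6 - value  # flip the scale for positively worded items
        total += value
    return total  # ranges from 4 (very calm) to 20 (very apprehensive)

# Hypothetical respondent who reports high apprehension on every item
answers = {1: "Strongly Agree", 2: "Strongly Disagree",
           3: "Strongly Agree", 4: "Strongly Disagree"}
print(apprehension_score(answers))  # 20
```

Without the reverse-scoring step, agreeing with "I am very nervous" and agreeing with "I am very calm" would cancel each other out instead of both counting toward the construct.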


Think of this survey as a measuring instrument, just like a bathroom scale. The bathroom scale measures your weight and this survey would measure your communication apprehension.

Does our instrument (the above survey) have good measurement validity and measurement reliability? How would you determine that?

Measurement validity:
“the extent to which researchers are actually measuring the concepts they intend to measure” (FBFK)
Do the instruments give accurate/true readings?

Measurement reliability:
“the extent to which measurements of a variable are consistent and trustworthy” (FBFK)
Do the instruments continue to give the same readings every time they are used?


What are the procedures for checking an instrument’s reliability?

Similar results every time?
Reliability is a continuum: from 0% (not reliable) to 100% (highly reliable).

Three Ways to Check an Instrument's Reliability
1. Test and retest it (test-retest).
2. Test, change the wording slightly, retest.
3. Compare one half of the items to the other half (split-half).

These are three alternative options, not sequential steps.
Which option is best?  What are the costs and benefits of each?
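Option 3 (split-half) can be sketched numerically: correlate each subject's score on one half of the items with their score on the other half, then adjust with the Spearman-Brown formula to estimate the reliability of the full instrument. The half-scores below are made up for illustration:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: each subject's summed score on the odd-numbered
# items and on the even-numbered items of the same survey.
odd_half  = [8, 5, 9, 3, 7, 6]
even_half = [7, 6, 9, 4, 8, 5]

r = pearson(odd_half, even_half)

# Spearman-Brown correction: estimates the reliability of the full-length
# test from the correlation between its two halves.
reliability = 2 * r / (1 + r)
print(round(reliability, 2))  # 0.94 for this made-up data
```

A value near 1.0 means the two halves rank subjects almost identically, i.e., the items "cluster together."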

--------------








Monday, March 18, 2019

MediaResearch: Operationalization: Levels of Measurement (W11-P2) Sp19


As you are determining what your variables are and how you are going to measure them, it is also helpful to have clearly in mind what type of data (or level of measurement) you will be using.  This is especially helpful when you are doing statistical analysis on the data later in the research process.

Recall the earlier discussion of types of variables?  Nominal variables and ordered variables, right?
Now, let's expand that "ordered" type to get a total of four types of variables, or levels of measurement.



The above video covers nominal, ordinal and interval.  Note the addition of ratio below.  What's the difference between interval and ratio?

Level    | Can be Ranked? | Equal Distance? | Zero-Point             | Example Variables
---------|----------------|-----------------|------------------------|--------------------------------
Nominal  | No             | N/A             | N/A                    | Gender
Ordinal  | Yes            | No              | N/A                    | List of most preferred TV shows
Interval | Yes            | Yes             | Arbitrary (has + & -)  | Agreement on Likert scale
Ratio    | Yes            | Yes             | Absolute (0 = absence) | Amount of time talking


Nominal level:
  • nominal variables are classified into categories (names)
  • They are not arranged in any particular order
  • e.g., frequency counts, percentages.
    • 48% male and 52% female
    • 32% Catholic, 20% Baptist, etc.
Ordinal level:
  • categories are ordered from highest to lowest
  • intervals between categories are not standardized
    • e.g., frequency counts, percentages
Interval level:
  • categories are ranked
  • assumed equal distances between ranks
  • arbitrary zero-point
    • e.g., temperature - 0 degrees doesn't mean the absence of temperature; the scale has + & - values.
  • Another example: Likert scale
Ratio Level:
  • categories are ranked
  • Equal distances between rank
  • Absolute Zero point.   Zero means the absence of the thing you are measuring and there is no negative value.
  • e.g.,  age, weight, number of words in a sentence, etc.
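The practical payoff of knowing the level is that it dictates which summary statistics make sense. A small sketch with hypothetical data: frequency counts for a nominal variable, a median for ordinal ranks, and a mean only for ratio (or interval) data:

```python
from collections import Counter
from statistics import mean, median

# Nominal: religion labels - only counts and percentages make sense
religions = ["Catholic", "Baptist", "Catholic", "Other", "Baptist"]
counts = Counter(religions)
print(counts["Catholic"])  # 2 (a frequency count)

# Ordinal: preference ranks - a median rank is defensible, a mean is not,
# because the distances between ranks are not standardized
ranks = [1, 2, 2, 3, 5]
print(median(ranks))  # 2

# Ratio: seconds spent talking - zero means no talking at all,
# so a mean is meaningful
talk_time = [120.0, 95.5, 0.0, 210.0]
print(mean(talk_time))  # 106.375
```

Averaging the religion labels is obviously nonsense; the same logic (less obviously) makes a mean of ordinal ranks suspect too.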


What is the connection between a horse race and levels of measurement?
Horse race
Photo used under Creative Commons.


How would the MythBusters research (viewed earlier) fit in here?  Did they operationalize their variables?  How? At what level?

Busting Myths: Asking Questions, Finding Answers




Note: The level of measurement (or kind/type of data) you have will determine what statistics you use.  More on this later.









MediaResearch: Operationalizing Your Variables (W11-P1) [VID] Sp19


Once your variables have been identified, then they will need to be measured, but how?   And, what does an operational definition have to do with it?

What is an operational definition?  What does it mean to operationalize a variable?

"Operational definition" is "a statement that describes the observable characteristics of a concept being investigated…”(Frey, et. al).  Or, put differently, an operational definition “specifies the procedures [or operations] the researcher uses to observe the variables” (Stacks, et.al).  Notice how the second definition indicates why it is called "operational."

Both I.V.s & D.V.s need O.D.s.   Operational definitions allow you to measure a variable.

----
Operationalization Examples:

1. Let's say you are going to do some research on prejudice. How would you operationalize prejudice?

  • Start with the conceptual definition or dictionary definition:
    • “the irrational hatred or suspicion of a particular group, race, religion, or sexual orientation”(Jandt).
  • What would the operational definition be?  How would you measure prejudice?

What are the basic “operational procedures” or ways of measuring variables?

Operational procedures:
  1. Self-report
    • the researcher asks subjects to report about themselves
  2. Observer's ratings
    • the researcher asks a subject to observe and rate another person
  3. Observe behavior
    • the researcher observes the subjects
Which method would you trust more?  Which would give a more valid measure?  Why?

How would you use these procedures with prejudice or violence?  Which would "work" better?



2. Let's say you are going to do some research on violence and video games. How would you operationalize violence?

  • Conceptual/dictionary definition of violence: "exertion of physical force so as to injure or abuse" (Merriam-Webster)
  • 2/12/13 NYT news article about recent research on video games and violence
  • See an example of recent video game and violence research:

3. Let's say you are going to do some research on the effects of television on children. What would be the variables you'd study, and how would you operationalize them?











Monday, March 11, 2019

MediaResearch: Library Research & APA Style: Citing Sources (W10-P6) Sp19


You are working on some research and you want to mention or cite a book in the research paper you are writing.

How do you cite a book using APA-style?


Two Book Examples:

Jewell, T. E., & Hart, W. B. (1996). Interpersonal communication: Student workbook. New York: McGraw-Hill.

Frey, L. R., Botan, C. H., & Kreps, G. L. (2000). Investigating communication: An introduction to research methods. Needham Heights, MA: Allyn & Bacon.



What about an edited book (APA-style)?

Iyengar, S., & Reeves, R. (Eds.). (1997). Do the media govern? Politicians, voters, and reporters in America. Thousand Oaks, CA: Sage.




What about a chapter from an edited book (APA-style)?

Rogers, E. M., & Hart, W. B. (1997). A paradigmatic history of agenda-setting research. In S. Iyengar & R. Reeves (Eds.), Do the media govern? Politicians, voters, and reporters in America (pp. 225-236). Thousand Oaks, CA: Sage.



What about an article in an academic journal (Gangnam-style, I mean APA-style)?

Hart, W. B. (1999). Interdisciplinary influences in the study of intercultural relations: A citation analysis of the International Journal of Intercultural Relations. International Journal of Intercultural Relations, 23, 575-589.

Examples of academic or scholarly journals. Public domain photo.



One of the best online sources for how to cite books, articles, etc. is Purdue University's Research and Citation Resources website.  This site covers APA and other methods.


Note: The above is based on the 6th edition of the APA manual.








