

Chapter 6: Experimentation


Chapter Objectives
Structure Of The Chapter
A definition of experiments
Basic concepts in experimentation
Inferring causal relationships
Impediments to valid results from experiments
Internal validity
External validity
Experimental designs
The "After-only with control group" experimental design
Ex post facto design
Chapter Summary
Key Terms
Review Questions
Chapter References

The popularity of experimentation in marketing research has much to do with the possibilities of establishing cause and effect. Experiments can be configured in such a way as to allow the variable causing a particular effect to be isolated. Other methods commonly used in marketing research, like surveys, provide much more ambiguous findings. In fact, experimentation is the most scientific method employed in marketing research.

Chapter Objectives

Having read this chapter the reader should:

· Understand the basic concepts of experimentation: experimental design, treatments, confounding factors and extraneous causal factors

· Be familiar with the different bases for inferring causal relationships

· Recognise the principal impediments to valid experimental results, and

· Be familiar with the main forms which experimental designs can take.

Structure Of The Chapter

A definition of experiments is given at the outset of the chapter and then there follows a brief outline of the basic concepts of experimentation: dependency, causality and inference. The discussion then moves to the impediments to valid results in experimentation. A distinction is drawn between internal and external validity. The final section of the chapter gives an account of the main experimental designs used in marketing research.

A definition of experiments

An experiment involves the creation of a contrived situation in order that the researcher can manipulate one or more variables whilst controlling all of the others and measuring the resultant effects. For instance, when United Fruit were considering replacing their Gros Michel variety of banana with the Valery variety, a simple experiment was first carried out. In selected retail outlets, the two varieties were switched on different days of the week and sales data examined to determine what effect the variety had on sales volumes. That is, the variety was being manipulated whilst all other variables were held constant. United Fruit found that the switch back and forth between Gros Michel and Valery had no effect upon sales. United Fruit were therefore able to replace Gros Michel with Valery.

Boyd and Westfall1 have defined experimentation as:

"...that research process in which one or more variables are manipulated under conditions which permit the collection of data which show the effects, if any, in unconfused fashion."

Experiments can be conducted either in the field or in a laboratory setting. When operating within a laboratory environment, the researcher has direct control over most, if not all, of the variables that could impact upon the outcome of the experiment. For example, an agricultural research station may wish to compare the acceptability of a new variety of maize with that of existing varieties. Since the taste characteristics are likely to have a major influence on the level of acceptance, a blind taste panel might be set up where volunteers are given small portions of maize porridge in unmarked bowls. The participants would perhaps be given two porridge samples and the researcher would observe whether they were able to distinguish between the maize varieties and which they preferred. In addition to taste testing, laboratory experiments are widely used by marketing researchers in concept testing, package testing, advertising research and test marketing.

Figure 6.1 Types of experiment used in marketing research

When experiments are conducted within a natural setting then they are termed field experiments. The variety test carried out by United Fruit on their Gros Michel and Valery bananas is an example of a field experiment. The researcher obviously has less control over extraneous variables likely to have an effect upon the outcome of the experiment, but will strive to exert whatever control is possible.

Basic concepts in experimentation

Dependency: Experiments allow marketing researchers to study the effects of an independent variable on a dependent variable. The researcher is able to manipulate the independent variable (i.e. he/she is able to change the value of the independent variable) and observe what effect, if any, this has upon the value of the dependent variable. Put another way, an independent variable is one which can be manipulated independently of other variables. Independent variables are selected for inclusion in an experiment on the basis of an assumption that they are in some way related to the dependent variable being studied. It is for this reason that independent variables are on occasion referred to as explanatory variables. The dependent variable is the one under study. The researcher begins from the premise that changes in the value of the dependent variable are at least in part caused by changes in the independent variable. The experiment is designed to determine whether or not this cause and effect relationship actually exists.

Causality: A causal relationship is said to exist where the value of one variable is known to determine or influence the value of another. Green et al.3 draw a distinction between two types of causation: deterministic and probabilistic.

Where the independent variable (X) wholly explains changes in the value of the dependent variable (Y) and the researcher is able to establish the functional relationship between the two variables then this can be expressed as follows:

Y = f(X)

In this case, it is said that X is both a necessary and a sufficient condition for Y to occur. The value of Y is determined by X, and X alone. Thus it can be said, in these circumstances, that X is a deterministic cause of Y. An illustrative example would be where the demand for agricultural commodities, say sugar, is dependent upon the world price. Further suppose that the functional relationship between sugar demand and world prices is known, then the formula becomes:

Changes in demand for sugar (grade No. 6) = f(World Price)

Whilst this example serves to illustrate the point it is rare to find such relationships when studying marketing problems. In most instances, the value of the dependent variable will be a function of several variables. For instance, only in exceptional cases would the demand for a product, even a commodity, depend solely upon price movements. Factors such as the reputation of the supplier, terms of sale, promotional activities, packaging etc., are likely to have an impact on demand as well. A more common causal model is one where the value of the dependent variable is a function of several independent variables.

Marketing problems are more often multivariate than univariate and so the relationship between dependent and independent variables is more often probabilistic than deterministic. A probabilistic relationship could be expressed as:

Y = f(X1, X2, ... Xn)

What is depicted here is a situation where the dependent variable (Y) is a function of several independent variables (X1, X2, ... Xn). If marketing research can establish the form of the relationship (f) between the independent variables and also between the independent and dependent variables then the value of Y can be predicted. In this instance X1, for example, is a necessary but not sufficient condition for Y to occur. The same is true of each of the other independent variables; rather than being a deterministic cause, each individual independent variable is said to be a probabilistic cause of the value of Y.
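
To make the distinction concrete, a minimal sketch follows (in Python, with hypothetical variable names and figures rather than data from the chapter): demand is simulated as a function of several independent variables plus a random disturbance, and the form of f is then estimated by ordinary least squares.

    import numpy as np

    # Hypothetical weekly data: price, promotional spend and a competitor's price
    # serve as the independent variables (X1, X2, X3); demand is the dependent Y.
    rng = np.random.default_rng(0)
    n = 52
    price = rng.uniform(8, 12, n)
    promotion = rng.uniform(0, 5, n)
    competitor_price = rng.uniform(8, 12, n)

    # The "true" relationship includes a random disturbance, so Y is only
    # probabilistically, not deterministically, determined by the X's.
    demand = 200 - 10 * price + 6 * promotion + 4 * competitor_price + rng.normal(0, 5, n)

    # Estimate the form of f by ordinary least squares: Y = b0 + b1*X1 + b2*X2 + b3*X3
    X = np.column_stack([np.ones(n), price, promotion, competitor_price])
    coeffs, *_ = np.linalg.lstsq(X, demand, rcond=None)
    print("Estimated coefficients (b0, b1, b2, b3):", np.round(coeffs, 2))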

Inferring causal relationships

The evidence for drawing inferences about causal relationships can take three forms: associative variation, consistent ordering of events and the absence of alternative causes.

Associative variation

Causality cannot be established unless there is associative, or concomitant, variation. That is, the data must show that a change in one variable is almost always accompanied by a change in the other.
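
As a minimal illustration (in Python, with hypothetical figures), the degree of concomitant variation between two variables can be checked with a correlation coefficient; a strong correlation is evidence of associative variation, though not, by itself, of causation.

    import numpy as np

    # Hypothetical monthly observations: promotional spend ($'000) and sales (tonnes).
    spend = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5])
    sales = np.array([12, 15, 14, 19, 22, 21, 26, 27])

    # The Pearson correlation coefficient measures the degree of concomitant variation.
    r = np.corrcoef(spend, sales)[0, 1]
    print(f"Correlation between spend and sales: {r:.2f}")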



Consistent order of events

If variable A causes variable B, then variable A must occur before or simultaneously with B, and not after it. It can also happen that two variables are each both cause and effect of the other. For example, the uptake of marketing innovations among farmers may show a high correlation with the number of visits by extension personnel. Closer study could reveal that extension personnel visit farmers who are immediately responsive to them more frequently than they do other farmers.



Absence of other causes

Before inferring causation researchers should check for equally plausible alternative explanations for the phenomenon under study. A vegetable trader might, for example, assume that the increase in her sales is due to improved grading procedures which she has introduced. However, there may be several other factors that act individually, or in combination, to bring about the sales increase. The trader might find her competitors are experiencing similar sales increases and that this is actually due to upward shifts in disposable incomes.

It should be noted that none of these forms of evidence, nor all three in combination, can unequivocally prove that a relationship exists. Rather, they help put the notion that a relationship exists beyond reasonable doubt. If all evidence points towards the same conclusion then the conclusion that a relationship exists is all the more compelling.

Impediments to valid results from experiments

The validity of experimental results, i.e. the extent to which results reflect the truth, is obviously a matter of importance. There are two distinct forms of validity which marketing researchers are concerned about when using experimentation: internal and external validity.

Internal validity

The question being asked is whether the experimental treatment is actually responsible for changes in the value of the dependent variable or if confounding factors have been in operation. Since laboratory experiments afford greater opportunities for controlling extraneous or confounding variables than do field experiments, internal validity is a bigger problem in the case of the latter.



External validity

External validity has to do with the extent to which experimental findings can be generalised to the population from which the participants in the experiment were drawn. In other words, the issue is the degree to which the sample represents the population. Given their naturalistic setting, field experiments generally provide greater external validity than experiments conducted within a laboratory environment.

In some cases the marketing researcher seeks to exclude extraneous factors that can confound the results of an experiment. However, this is not always possible since it is difficult to determine when certain types of extraneous variable are in operation and even more difficult to measure them. In these circumstances, the researcher will seek to control confounding variables in a different way. Examples of confounding factors are:

Internal validity

History: events taking place at the same time as the experiment is underway

Pre-testing: errors arising from taking "before" measures from the same sample as that providing the "after" measures

Maturation: biological and/or psychological changes in participants

Instrumentation: changes in the calibration of measurement instruments, questionnaires, interviewers or interviewing technique

Sampling bias: assignment of participants to experimental groups in a way likely to prejudice outcomes

Mortality: differential loss of participants from experimental groups

External validity

Interactive effects of testing: pre-exposure measurements giving rise to heightened awareness

Interactive effects of sampling bias: non-random assignment of participants to experimental groups leading to differing responses to the experimental treatment

Contrived situations: the experimental setting elicits responses that differ from those which would be obtained in the real world.

Internal validity

History: The term 'history' has been used to describe events that happen whilst the experiment is underway and serve to distort experimental results. A common occurrence is when a commercial organisation is testing a new product within a small geographical area, prior to launching the product nationwide, and competitors intentionally set out to distort test results by giving additional promotional support to their own competing product and/or by cutting the price of their product.

Pretest effect: It is sometimes considered necessary to take some preliminary measures before the main experiment is carried out. For instance, a company wishing to promote monogerm sugar beet seed in Pakistan wanted to first establish how much farmers already knew about the different types of seed available. A particular district was chosen as a test area and a pretest was undertaken where a sample of farmers from that area were asked to list the types of seed of which they were aware. The farmers were also asked to list the brands of sugar beet seed with which they were familiar. This constituted the 'before' measure. A little later a promotional campaign was launched within the test area and after a period of time the sample of farmers were again visited and asked to identify the brands of seed with which they were familiar. It is likely that any increase in awareness of the company's brand was due, in part at least, to the heightened awareness of issues relating to seeds caused by the pretest activity. That is, the pretest is likely to increase interest in matters relating to seeds and therefore make farmers more attentive to the brand promotion than they otherwise might be.

Maturation: Maturation refers to biological and/or psychological changes to respondents that occur in the period between the 'before' and 'after' measurements and consequently affect the information which they provide. Experiments requiring the cooperation of respondents over a substantial period of time are most likely to suffer from maturation effects. Consumer and farmer panels are examples of experimental instruments that demand longer term participation by panel members. Suppose that a farmer panel were established to measure the level of adoption of new marketing practices or technologies promoted by agricultural extension officers. As the years pass, the marketing extension officer notes that farmers on the panel appear to be adopting fewer of the innovations being proposed by the extension service. However, the lower rates of adoption may be explained neither by the marketing extension service becoming less effective in communicating the benefits of innovative marketing practices and technologies, nor by current innovations being somehow less appropriate or offering more marginal benefits. Rather, the explanation may be that the panel itself is aging and, as farmers get older, they may become more resistant to change. Certainly as people get older their needs and attitudes are subject to change. In these circumstances the data drawn from the panel are a function of the maturation of the panel rather than of the experimental variables (i.e. the efforts of the marketing extension officers and the characteristics of the marketing innovations).

Whilst it is not always possible to adjust the experimental design so as to eliminate each of these potential threats to the validity of results, it is always possible to measure their impact upon results. The chief device for doing so is to include a 'control group'.

Instrumentation: From time to time, measurement instruments have to be recalibrated or their readings become suspect. Marketing research does make use of a wide range of mechanical, electrical and electronic instruments in experiments that clearly require periodic readjustment (e.g. tachistoscopes, pupilometers, audiometers), but there are other, more commonly used, marketing research test instruments that also need to be checked for consistency, such as questionnaires, interviewers and interviewing procedures.

Questionnaires may contain standardised questions with the challenge to consistency coming from the interpretation of the meaning of the question. Consider the apparently straightforward question, "How big is your farm?" There are several equally valid responses to this question that could combine to give a totally misleading set of data. The variation is due to farmers' interpretations of what the researcher really wants to know. Some farmers will include only the land area that they had under crop in the year of the survey whilst others will include both productive and nonproductive land. In other instances, farmers may understand the question to mean the area of land they actually own, whilst others may take it to mean the farmland that they own and/or rent.

Another aspect is that of consistency in the conduct of interviews. There can be variation in the data collected during an experiment if different interviewers are used to collect data after the experiment from those who conducted interviews before it, or if interviewers change the way questions are put to participants as they become more familiar with the content of the questionnaire.

Mortality: Over time there is a danger that some participants will drop out of an experiment. This can happen when people literally die or decide to withdraw from an experimental group for one reason or another. This obviously changes the composition of the experimental group. Where the effects of a marketing variable are being studied by comparing data drawn either from two groups that have been matched to ensure that their composition is identical, or from the same group at different points in time, mortality can confound the results.

Sampling bias: Sampling bias occurs when the method of assigning participants to experimental groups results in groups whose behaviour cannot be compared to one another because they differ in some important respect(s). Consider the task of evaluating the implementation of new weighing and grading practices within a municipal grain market. It could be that it is easier for larger grain traders to adopt the new practices since they are better able to afford the grading and weighing equipment required. If, during a field experiment conducted to study the rate of adoption, two groups are established with a view to comparing the rate of adoption within them, and one of those groups is comprised predominantly of larger (or smaller) traders, then this is likely to distort the results.

External validity

Interactive effects of testing: The design of the experiment itself may give rise to measurement variations between the "before" and "after" phases of the research. Consider a test of consumer acceptance involving two exotic rice varieties being evaluated as possible replacements for a popular indigenous variety which is suffering from a disease and is therefore in short supply. The experimental design involves leaving a trial pack of rice A with a sample of households and returning a few weeks later to interview members of the household about rice A and to deliver a second trial pack containing rice B. A third visit is subsequently made during which household members are asked questions about rice B. Respondents' assessment of rice B is not made under the same conditions as their assessment of rice A. When trying rice A the respondents are likely to have made comparisons, perhaps only subconsciously, with existing rice varieties that they already use. However, when evaluating rice B the respondents will also be making comparisons with rice A. This problem can be overcome, to some extent, by splitting the sample so that half are given the trial varieties in the order rice A then rice B; the remaining half are given the two varieties in the reverse order of rice B then rice A. A more difficult problem to overcome is that whatever the sequence of presentation, by the time household members are asked about the second trial variety, they have become more 'experienced' interviewees and respond differently simply because they feel they better understand what the interviewer wants and how to answer the questions. By the same measure, the interviewer becomes more experienced the second time around, having become more familiar with the product, the interviewing process, and the questionnaire (or interview schedule), and may pose the questions in a different way. As a result, the interviewer may elicit different information on the third call from that which was obtained on the second visit.

Interactive effects of sampling bias: It can happen that participants are assigned to an experimental group without due concern for possible bias and this then interacts with the experimental treatment producing a spurious outcome. Such an interactive sampling bias would result from unknowingly assigning heavy users of a particular product category to one experimental group and using favourable responses to a new formulation within the category as the basis for projecting national demand.

Contrived situations: Any laboratory experiment is, by definition, unlike the real world. Typically, the researcher manipulates the situation so that only those variables he/she is immediately interested in studying are allowed to operate as they would in the real world. On occasion this leads to experimental results which are not replicated in the real world. An outstanding example of this set of circumstances is that of Coca Cola's infamous blind taste panels. Coca Cola was concerned at the creeping increases in the market share of its main competitor, Pepsi. Coca Cola decided to conduct sensory analysis tests where participants were asked to score two colas on taste preference. The participants were given the colas in unmarked cups (i.e. a 'blind' tasting) before being asked which they preferred. On balance, the preference was for Pepsi's slightly sweeter cola. Coca Cola reacted in a way seldom seen anywhere in the world. The brand leader was removed from the market and a new, slightly sweeter formulation was launched under the Coca Cola brand name. It was to prove a costly mistake. Coca Cola was inundated by calls from consumers who were irate over the company's tampering with a product that had almost become a national institution. Most Americans had grown up with Coca Cola and could not accept that it could be changed. The company was forced to reintroduce the original formulation under the title of Coke Classic.

Coca Cola's taste panels were conducted in an artificial environment in which such variables as the brand name, the packaging and all the associations which go along with these were not allowed to operate. The research focused only on the taste characteristics of the product and a particular result was obtained. However, in the real world people consume Coca Cola for many reasons, many of them having little to do with the taste.

Experimental designs

The process of experimentation is one of subjecting participants (e.g. target consumers, farmers, distributors etc.) to an independent variable such as an advertisement, a packaging design or a new product, and measuring the effect on a dependent variable (e.g. level of recall, sales or attitude scores).

"After-only" designs

As the name suggests, with after-only experimental designs, measures of the dependent variable are taken only after the experimental subjects have been exposed to the independent variable. This is a common approach in advertising research where a sample of target customers are interviewed following exposure to an advertisement and their recall of the product, brand, or sales features is measured. The advertisement could be one appearing on national television and/or radio or may appear in magazines, newspapers or some other publication. The amount of information recalled by the sample is taken as an indication of the effectiveness of the advertisement.

Figure 6.2 An example of an after-only design

The chief problem with after-only designs is that they do not afford any control over extraneous factors that could have influenced the post-exposure measurements. For example, marketing extension personnel might have completed a trial campaign to persuade small-scale poultry producers, in a localised area, to make use of better quality feeds in order to improve the marketability and price of the end product. The decision to extend the campaign to other districts will depend on the results of this trial. After-only measures are taken, following the campaign, by checking poultry feed sales with merchants operating within the area. Suppose a rise in sales of good quality poultry feed mixes occurs four weeks after the campaign ends. It would be dangerous to assume that this sales increase is wholly due to the work of the marketing extension officers. A large part of the increase may be due to other factors such as promotional activity on the part of feed manufacturers and merchants who took advantage of the campaign, of which they were forewarned, and timed their marketing programme to coincide with the extension campaign. If the extension service erroneously drew the conclusion that the sales increase was entirely due to their own promotional activity, then they might be misled into repeating the same campaign in other areas where there would not necessarily be the same response from feed manufacturers and merchants.

After-only designs are not true experiments since little or no control is exercised over any of the variables by the researcher. However, their inclusion here serves to underline the need for more complex designs.

"Before-after" designs

A before-after design involves the researcher measuring the dependent variable both before and after the participants have been exposed to the independent variable.

The before-after design is an improvement upon the after-only design, in that the effect of the independent variable, if any, is established by observing differences between the value of the dependent variable before and after the experiment. Nonetheless, before-after designs still have a number of weaknesses.

Consider the case of the vegetable packer who is thinking about sending his/her produce to the wholesale market in more expensive, but more protective, plastic crates, instead of cardboard boxes. The packer is considering doing so in response to complaints from commissioning agents that the present packaging affords little protection to produce from handling damage. The packer wants to be sure that the economics of switching to plastic crates makes sense. Therefore, the packer introduces the plastic crates for a trial period. Before introducing these crates, the packer records the prices received for his/her top grade produce. Unless prices increase by more than the additional cost of the plastic crates, there is no economic advantage in using the more expensive packaging.

Figure 6.3 Before-after designs

Suppose, for instance, that the packer was receiving $15 per crate, when these were of the cardboard type, but that the price after the introduction of plastic crates had risen to $17 per crate. The $2 difference would be attributed to better quality produce reaching the market as a result of the protection afforded by the plastic crates. However, there are several equally plausible explanations for the upward drift in produce prices including a shortfall in supply, a fall in the quality of produce supplied by competitors who operate in areas suffering adverse weather conditions, random fluctuation in prices, etc.

"Before-after with control group" design

This design involves establishing two samples or groups of respondents: an experimental group that would be exposed to the marketing variable and a control group which would not be subjected to the marketing variable under study. The two groups would be matched. That is, the two samples would be identical in all important respects. The idea is that any confounding factors would impact equally on both groups and therefore any differences in the data drawn from the two groups can be attributed to the experimental variable.

Study figure 6.4 which depicts how an experiment involving the measurement of the impact of a sugar beet seed promotional campaign on brand awareness might be configured with a control group.

Figure 6.4 An example of a before-after with control group design


                                                        Experimental Group    Control Group
'Before' measure: % recalling Brand X sugarbeet seed          25.5%                25.5%
Exposed to promotional campaign                                Yes                  No
'After' measure: % recalling Brand X sugarbeet seed           34.5%                24.5%

First, the two groups would be matched: attributes such as age distribution of group members, spread of sizes of farms operated, types of farms operated, ratio of dependence on hand tools, animal drawn tools and tractor mounted equipment, etc. would be matched within each group so that the groups are interchangeable for the purposes of the test. As figure 6.4 conveys, the initial level of awareness of the sugar beet brand would be recorded within each group. Only the experimental group would see the test promotional campaign. After the campaign, a second measure of brand awareness would be taken from each group. Any difference between the 'after' and 'before' measurements of the control group (C2 - C1) would be due to uncontrolled variables. Differences between the 'after' and 'before' measurements in the experimental group (E2 - E1) would be the result of the experimental variable plus the same uncontrolled variables affecting the control group. Isolating the effect of the experimental variable is simply a matter of subtracting the difference in the two measurements of the control group from the difference in the two measures taken from the experimental group. To illustrate the computation consider the following hypothetical figures.

Awareness of the brand within the experimental group has increased by 9 percent. At the same time, the awareness level, within the control group, appears to have fallen by 1 percent. This could be due to random fluctuations or a real lowering of awareness due to some respondents forgetting the brand in the absence of any supporting advertisements/promotions. Thus the effects of the test campaign would seem to have been:

Effect of experimental variable = (34.5 - 25.5) - (24.5 - 25.5)
                                = (9%) - (-1%)
                                = 10%

If a "before and after with control group" experiment is properly designed and executed then the effects of maturation, pretesting and measurement variability should be the same for the experimental group as for the control group. In this case. these factors appear to have had a negative effect on awareness of one percent. Had it not been for the experimental variable, the experimental group would have shown a similar fall in awareness over the period of the test. Instead of recording a fall in the level of awareness of the sugar beet brand, the experimental group actually showed a nine percent increase in brand awareness. However, the design is not guaranteed to be unflawed. The accurate matching of the two groups is a difficult, some would say impossible, task. Moreover, over time the rate and extent of mortality, or drop out, is likely to vary between the groups and create additional problems in maintaining a close match between groups.

The "After-only with control group" experimental design

Again, this design involves establishing two matched samples or groups of respondents. No measurement is taken from either group before the experimental variable is introduced, and the control group is not subsequently subjected to the experimental variable. 'After' measures are then taken from both groups and the effect of the experimental variable is established by deducting the control group measure from the experimental group measure. An illustrative example will help clarify the procedures followed.

A Sri Lankan food technology research institute was trying to convince small-scale food processors to adopt solar dryers to produce dried plantain and other dehydrated vegetables. Much of the initial resistance to the adoption of this technology was due to the belief that the taste characteristics of this snack food would be altered from those of traditional sun-dried plantain. The research institute was able to convince the food manufacturers that there would be no perceptible changes in the taste characteristics by carrying out an "after-only with control group" experiment. Sensory analysis experiments conclusively showed that almost none of the participants was able to discriminate between plantain dehydrated by means of the solar powered dryer and that which was sun-dried.

Many product tests are of the "after-only with control group" type. This design escapes the problems of pretesting, history and maturation. However, this form of "after-only design" does not facilitate an analysis of the process of change, whereas a comparable "before-after design" would. In a before-after design, the attitudes, opinions and/or behaviour of individual participants can be recorded both before and afterwards and changes noted. For instance, the responses of participants who held unfavourable attitudes in the "before" measurement can be compared with their responses after exposure to the experimental variable, and the same assessment can be made for those who initially held favourable attitudes.

Ex post facto design

The ex post facto design is a variation of the "after-only with control group" experimental design. The chief difference is that both the experimental and control groups are selected after the experimental variable is introduced rather than before. This approach eliminates the possibility that participants will be influenced by an awareness that they are being tested.

Following market liberalisation in Zimbabwe a number of maize meal producers, using hammer mill technology, came into the industry to compete against millers using roller mill technology. The hammer milled product was much coarser than the highly refined roller milled maize meal to which most urban consumers had grown accustomed. The hammer milled product, however, had superior nutritional benefits since meal produced in this way retained a much larger amount of the germ, bran and endosperm. One such miller sought to communicate the nutritional advantages of hammer milled meal through point-of-sale material in stores and provisions merchants. A sample of consumers who claimed to have seen the point-of-sale material was subsequently assigned to an experimental group and a matching selection of consumers who denied having seen the point-of-sale material comprised the control group. It was hypothesised that those who had seen the point-of-sale material would suggest that hammer milled maize meal had superior nutritional properties to that of roller meal to a far greater extent than would those who had not seen the point-of-sale aids.

The results supported the hypothesis inasmuch as 68 percent of those recalling having seen the point-of-sale promotional aids reported hammer milled meal as nutritionally superior whilst only 43 percent of those unaware of the point-of-sale aids said that hammer milled meal was more nutritious than roller meal. However, some care has to be taken in drawing the conclusion that the point-of-sale campaign was an unqualified success. It is to be remembered that participants were assigned to the two groups on the basis of self-selection. Those reporting having seen the promotional material were probably those on whom the campaign had made most impression. It is quite likely that some of those in the control group also saw the material but do not recall having done so.

Where exposure to the experimental variable can be determined objectively, on an ex post facto basis, the bias introduced by self-selection can be eliminated and the design, in essence, becomes identical to the "after-only with control group" design. In these circumstances, the ex post facto design is an improvement upon the "after-only with control group" design since the experimental variable would have its impact in a natural situation. Suppose, for example, that government has been using radio to communicate the benefits of giving vitamin supplements to children under two years of age and that these are available in tablet form, free-of-charge, in local clinics. Ownership of, and access to, a radio can be established objectively.

Chapter Summary

Experimentation offers the possibility of establishing a cause and effect relationship between variables and this makes it an attractive methodology to marketing researchers. An experiment is a contrived situation that allows a researcher to manipulate one or more variables whilst controlling all of the others and measuring the resultant effects on some dependent variable.

Experiments are of two types: those conducted in a laboratory setting and those which are executed in natural settings; these are referred to as field experiments. Laboratory experiments give the researcher direct control over most, if not all, of the variables that could affect the outcome of the experiment. The evidence for drawing inferences about causal relationships takes three forms: associative variation, consistent ordering of events and the absence of alternative causes.

There are a number of potential impediments to obtaining valid results from experiments. These may be categorised according to whether a given confounding factor threatens internal validity, external validity, or both. Internal validity is called into question when there is doubt that the experimental treatment is actually responsible for changes in the value of the dependent variable. External validity becomes an issue when there is uncertainty as to whether experimental findings can be generalised to a defined population. The impediments to internal validity are history, pre-testing, maturation, instrumentation, sampling bias and mortality. Impediments to external validity are: the interactive effects of testing, the interactive effects of sampling bias and errors arising from making use of contrived situations.

The main forms of experimental design differ according to whether or not a measure is taken both before and after the introduction of the experimental variable or treatment, and whether or not a control group is used alongside the experimental group. The designs are: after-only, before-after, before-after with control group, after-only with control group and ex post facto designs.

Key Terms

Causality
Confounding factors
Control groups
Dependent variables
Experimental design
Ex post facto measures
External validity
Extraneous factors
Independent variables
Internal validity
Treatments

Review Questions

1. Give the alternative name for 'the independent variable'.

2. Name 4 threats to the internal validity of experimental results.

3. What is the main device for controlling the effects of maturation in experimental groups?

4. In what way does the ex post facto experimental design differ from the after-only with control group design?

5. Define the term 'deterministic causation'.

6. What is meant by the term 'external validity'?

7. What are the 3 conditions necessary in order to be able to infer causation?

8. Why is it said that after-only designs are not true experiments?

Chapter References

1. Boyd, H.W. Jr. and Westfall, R. (1972) Marketing Research: Text and Cases, Irwin, p. 80.

2. Dillon, W.R., Madden, T.J. and Firtle, N.H. (1994), Marketing Research In A Marketing Environment, 3rd edition, Irwin, p. 175.

3. Green, P.E., Tull, D.S. and Albaum, G. (1993), Research For Marketing Decisions, 5th edition, Prentice-Hall, pp. 105-107.

