Some comments on this project by Prof. Sunil Gupta
Usually, student projects get done, graded, and forgotten. With the Web, they sometimes get posted, people might trip over them, and some exchanges take place. In the case of this particular study, the level of attention has been somewhat greater. Several people, in several fora, have commented on it. So, to try to put things in perspective, I felt it useful to write a brief comment.
Those who read this study should understand that it is a student project, done by a group of MBA students (rather bright ones, I might add), as one part of the final grade, for one of the courses they took during the past semester. So, if it does not resolve all of the many uncertainties and unknowns about banner advertising, the reader ought not to be surprised.
In commenting on this study, some have pointed out that it would not be considered good enough to be published in an academic journal. I agree. However, I feel that studies such as this are not meant for academic journals. As long as the authors make their research qualifications, methods, and results sufficiently transparent, each reader can rely on his/her own judgment to decide what is worth paying attention to and what to discard. I think the students have done a good job of describing their methods and the study's shortcomings. The study is straightforward enough that it does not need expert peer-review analysis to judge its worth. Further, even if a reader misjudged the worth of the study, the consequences of that erroneous decision would not be so adverse as to justify the expense and time of going through a peer-review process.
For myself, I think the study basically showed that some rather innocuous manipulations (changing the banner's position/shape on the page) created measurable changes. The study DOES NOT provide any reasons for why this happened. Several people have suggested such possibilities as mistaken clicks, laziness, attention grabbing, change in size, and downloading speed to explain the effect. Some, at least, of these are reasonable possibilities. My own takeaway (when looked at in conjunction with other ad banner studies that have been reported by others online) is that doing something different gets people's notice, which then produces a measurable impact on clickthroughs. But this is mere conjecture. Systematic exploration of the many possibilities that have been mentioned, guided by a theoretically defensible framework, is clearly needed.
About sample sizes. At least a few people have commented on the "small cell sizes" (the number of impressions measured for each experimental condition). Some have suggested a cell size of 200,000 as the minimum needed. I think this issue is misunderstood. 200,000 impressions on a very high traffic site might accumulate in a few hours, and if those hours are not chosen carefully, the results could be seriously biased. Sheer cell size does not alleviate this problem. On the other hand, well-designed experiments, guided by appropriate theory, where the expected effect size is sufficiently large, can be done with much smaller cell sizes (see medical journals). In general, the power of an experiment (the probability of getting a significant result) is a function of the sample size, the effect size, the standard deviation of the error, and the desired significance level.
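To make the sample-size point concrete, here is a rough sketch of the standard normal-approximation calculation for comparing two proportions (such as two clickthrough rates). The clickthrough rates used below (2% vs. 2.5%, and 2% vs. 2.1%) are purely illustrative assumptions, not figures from the study; the point is only that the required cell size depends heavily on the effect size.

```python
import math

def cells_per_condition(p1, p2):
    """Approximate impressions needed per experimental condition to
    detect a difference between clickthrough rates p1 and p2, using
    the normal-approximation formula for a two-sided two-proportion
    test at the 5% significance level with 80% power."""
    z_alpha = 1.959964  # two-sided z-value for alpha = 0.05
    z_beta = 0.841621   # z-value for 80% power
    p_bar = (p1 + p2) / 2.0
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Illustrative (assumed) rates: a fairly large effect needs far fewer
# than 200,000 impressions per cell ...
print(cells_per_condition(0.02, 0.025))  # roughly 14,000 per cell
# ... while a tiny effect needs far more than 200,000.
print(cells_per_condition(0.02, 0.021))
```

The contrast illustrates the paragraph's point: no single cell-size threshold like 200,000 is universally right, because the number needed swings by an order of magnitude or more with the size of the effect one expects to detect.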
Finally, I thank Athenia Associates for allowing the students to muck around with advertising on their site. Such cooperation is not easy to come by. However, I am not entirely comfortable with the press release they put out. This study does not "explode" anything. It only raises questions and possibilities. And, I am convinced that not every study should be released to the press.
Comments are welcome.
Created: May 3, 1997
Revised: May 5, 1997