No business can succeed without understanding its customers, its products and services, and the market in general. Competition is often fierce, and operating without conducting research may give your competitors an advantage over you. Market research is the process of collecting valuable information to help you find out whether there is a market for your proposed product or service. The information gathered from market research helps budding entrepreneurs make wise and profitable business decisions. There are two categories of data collection: quantitative and qualitative.
Marketing academics and practitioners are on an ongoing quest to discover what guides consumers’ preferences and choices and how consumers evaluate products and experiences. Understanding consumer behavior allows marketers to be more effective and profitable and improves researchers’ understanding of the psychological factors and processes that underlie human behavior. Our ability to accurately predict future behavior and effectively design marketing strategies and tactics depends to a great extent on the validity of the model we are using.
Making the Case for Field Experiments
Keeping this objective of validity in mind, it is somewhat ironic that a substantial fraction of research in marketing does not investigate the real behavior of actual consumers, relying instead on methods such as surveys, simulations, and hypothetical scenarios. In field experiments, however, participants are typically unaware that the researcher is manipulating factors in the environment and measuring behaviors and outcomes. As a result, findings from field experiments can be taken at face value: they can tell us the size of the effect, who is most affected and under what conditions, and the short- and long-term implications.
Consider, for example, the possible methods for setting a profit-maximizing price. A firm could arguably estimate the demand function by observing sales at different price levels and subsequently choose a price based on data from these observations (e.g., Simon 1955). Alternatively, the firm could estimate demand elasticity by employing econometric techniques that use historical price and sales data (Nevo 2001; Rosse 1970), which can then be used to derive a demand function. These methods hinge on the critical assumption that the firm has been setting and testing profit-maximizing prices all along—an assumption that, even if true, restricts price variation and thereby handicaps the econometric approach. Finally, historical demand is often nondiagnostic of future demand (e.g., consider the demand, or lack thereof, for VCRs in 2016). Field experiments offer a clean, straightforward, and accurate alternative (or additional) approach to derive optimal prices. Simply put, the firm can vary prices and observe demand, along with spillover, cannibalization effects, and profitability (Anderson and Simester 2003; Gneezy, Gneezy, and Lauga 2014).
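As a rough illustration of this experimental approach (a minimal sketch with made-up numbers, not code from any of the cited papers), a firm with a digital storefront could randomly assign each visitor to a price condition and then choose the price with the highest observed profit per visitor:

```python
import random

# A hypothetical randomized price test: each visitor sees one of several
# candidate prices, and we observe whether they buy. All numbers are made up.
unit_cost = 4.00                   # assumed marginal cost per unit
test_prices = [7.99, 9.99, 11.99]  # hypothetical price conditions

random.seed(1)
results = {p: {"shown": 0, "sold": 0} for p in test_prices}
for _ in range(30_000):
    price = random.choice(test_prices)  # random assignment
    results[price]["shown"] += 1
    # Stand-in demand process; in a real field experiment this line is
    # replaced by the visitor's actual purchase decision.
    if random.random() < max(0.0, 0.9 - 0.06 * price):
        results[price]["sold"] += 1

# The experiment, rather than a model fit to historical data, identifies
# the profit-maximizing price among the tested conditions.
for price, r in results.items():
    conversion = r["sold"] / r["shown"]
    profit = conversion * (price - unit_cost)
    print(f"price {price:6.2f}: conversion {conversion:.3f}, "
          f"profit per visitor {profit:.3f}")
```

In practice, one would also track the spillover, cannibalization, and profitability effects mentioned above, and run the test long enough to capture them.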
Notably, field experiments have proved equally instrumental and influential outside the marketing discipline and have been used to study many important questions, such as voting behavior (Bryan et al. 2011), human cooperation (Frey and Meier 2004), helping behavior (e.g., Ariely, Bracha, and Meier 2009), and ways to improve people’s health (Charness and Gneezy 2009; Schwartz et al. 2014), to name but a few. The objective of this curation is to shine a spotlight on the strengths and opportunities inherent in field experiments and to offer researchers, both practitioners and academics, a taste of the possibilities available to anyone wishing to expand their research methods toolkit. In doing so, I draw from research in marketing in general, with a focus on some articles recently published in the Journal of Marketing Research and studies that use field experiments as their main research method.
Three Ways to Use Field Experiments
Although the definition of field experiments can depend on who you ask, there is a basic and necessary condition—either participants must be unaware that they are participating in a study, or if they are aware, they should be engaging in activities as they would regardless of the experiment (Gneezy 2017; for a detailed discussion, see Charness, Gneezy, and Kuhn 2013; Harrison and List 2004).
1. Testing Theory Against Reality
Given this fundamental understanding, field experiments can be instrumental in three ways. First, one could design an experiment to test whether a previously proposed theory survives the reality test (e.g., VanEpps, Downs, and Loewenstein 2016). The results of such experiments could further offer novel insights into effect size, second-order effects, long-term effects, etc.
For example, Putnam-Farr and Riis (2016) conducted field experiments that tested whether the previously documented effectiveness of “yes/no” response formats in forced-choice settings applies to non-forced-choice settings. In addition to measuring participants’ choices, using a field experiment allowed the researchers to observe participants’ actual enrollment in the proposed wellness program.
2. Testing a Hunch
Second, a researcher might choose to design and run a field experiment to test a prediction or hypothesis that is based on a hunch or an anecdote (e.g., Bone et al. 2017; Fong et al. 2015). For example, in 2007, Radiohead, the English rock band, made its new album In Rainbows available for download online, allowing consumers to choose how much to pay for the download, including $0 (“pay-what-you-want”). About 1 million consumers downloaded the album, approximately 40% of whom paid something. This anecdotal observation of real-life practice led to a series of field experiments that tested whether people would indeed pay under this pricing scheme, what influences purchase likelihood, and the average amount paid (Gneezy et al. 2010; Gneezy et al. 2012; Jung et al. 2014; Jung et al. 2017).
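To make the outcome measures in these studies concrete, here is a minimal sketch, with entirely hypothetical numbers rather than data from the cited papers, of how one might summarize a pay-what-you-want condition against a fixed-price control:

```python
# Hypothetical results from a pay-what-you-want (PWYW) field experiment.
# Each entry is the amount one person paid; 0.0 means they paid nothing.
pwyw_payments = [0.0, 0.0, 3.0, 0.0, 5.0, 1.0, 0.0, 2.0, 0.0, 4.0]

# Fixed-price control arm: price, number of buyers, number of people offered.
fixed_price, fixed_buyers, fixed_offers = 5.0, 3, 10

payers = sum(p > 0 for p in pwyw_payments)
pwyw_rate = payers / len(pwyw_payments)                # purchase likelihood
pwyw_avg_paid = sum(pwyw_payments) / payers            # average amount paid
pwyw_rev = sum(pwyw_payments) / len(pwyw_payments)     # revenue per offer
fixed_rev = fixed_price * fixed_buyers / fixed_offers  # revenue per offer

print(f"PWYW: {pwyw_rate:.0%} paid; average payment ${pwyw_avg_paid:.2f}; "
      f"revenue per offer ${pwyw_rev:.2f}")
print(f"Fixed price: revenue per offer ${fixed_rev:.2f}")
```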
Another example is found in a recent paper by John et al. (2017), who designed a field experiment to identify whether “liking” a brand on Facebook increases preference for the brand, or vice versa. Surprisingly, they find evidence for the latter. The authors were then able to use the data to further explore possible second-order effects, which revealed that when consumers see that their Facebook friend “liked” a brand, they are less likely to buy the brand relative to when they learn of their friend’s preference in an offline context.
3. Testing Competing Theories
Third, field experiments can be used to pit several theories against one another, with the objective of observing the relative impact of each, thereby providing a more complete account of behavior within a given domain (e.g., Grinstein and Kronrod 2016).
A good example of this is provided by Sudhir, Roy, and Cherian (2016), who, motivated by findings from research in psychology (e.g., Bagozzi and Moore 1994; Small and Loewenstein 2003), ran a series of field experiments that tested the relative impact of sympathy and framing effects (e.g., Gourville 1998) on donation behavior. The effect sizes they found in their data can help practitioners better understand which behavioral theories are most relevant when developing charitable solicitation appeals.
The Best of Both Worlds
A final word of caution: Conducting field experiments has never been easier, particularly for companies that have a digital platform such as a website or an app. The good news is that one can design an experiment, implement it on a website, and observe the results, all within a very short period of time and for very little, if any, cost. The bad news is that this ease and perceived cost savings might misguide and ultimately undermine such efforts for a variety of reasons.
Designing good field experiments is different from simply conducting A/B testing to determine what “works.” A good field experiment starts with a deep knowledge of consumer psychology. This foundational understanding is necessary for the formulation of behavioral hypotheses, around which the experiment can then be effectively designed. Marketing practitioners interested in conducting field experiments should reach out to relevant marketing academics and invite them to collaborate. My own experience, as well as that of many of my colleagues, shows that such collaborations offer great value to all parties involved.
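To be clear about what the “easy” part looks like, here is a minimal sketch of analyzing a simple website A/B test with a standard two-proportion z-test; all counts are hypothetical, and the function name is my own. As argued above, the hard and valuable part is the behavioral hypothesis that determines what the two conditions should be in the first place.

```python
from math import erf, sqrt

def two_proportion_z(conversions_a, n_a, conversions_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conversions_a / n_a, conversions_b / n_b
    pooled = (conversions_a + conversions_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approx.
    return z, p_value

# Hypothetical traffic: 10,000 visitors randomly assigned to each arm.
z, p = two_proportion_z(conversions_a=420, n_a=10_000,
                        conversions_b=465, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.3f}")
```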
Source:
A Curation Based on Research from the Journal of Marketing Research
By Ayelet Gneezy
Books, Articles, and References:
Anderson, Eric, and Duncan Simester (2003), “Effects of $9 Price Endings on Retail Sales: Evidence from Field Experiments,” Quantitative Marketing and Economics, 1 (1), 93–110.
Ariely, Dan, Anat Bracha, and Stephan Meier (2009), “Doing Good or Doing Well? Image Motivation and Monetary Incentives in Behaving Prosocially,” American Economic Review, 99 (1), 544–55.
Bagozzi, Richard P., and David J. Moore (1994), “Public Service Advertisements: Emotions and Empathy Guide Prosocial Behavior,” Journal of Marketing, 58 (January), 56–70.
Bone, Sterling A., Katherine N. Lemon, Clay M. Voorhees, Katie A. Liljenquist, Paul W. Fombelle, Kristen Bell Detienne, and R. Bruce Money (2017), “’Mere Measurement Plus’: How Solicitation of Open-Ended Positive Feedback Influences Customer Purchase Behavior,” Journal of Marketing Research, 54 (February), 156–70.
Bryan, Christopher J., Gregory M. Walton, Todd Rogers, and Carol S. Dweck (2011), “Motivating Voter Turnout by Invoking the Self,” Proceedings of the National Academy of Sciences, 108 (31), 12653–56.
Charness, Gary, and Uri Gneezy (2009), “Incentives to Exercise,” Econometrica, 77 (3), 909–31.
Charness, Gary, Uri Gneezy, and Michael A. Kuhn (2013), “Experimental Methods: Extra-Laboratory Experiments-Extending the Reach of Experimental Economics,” Journal of Economic Behavior & Organization, 91, 93–100.
Fong, Nathan M., Zheng Fang, and Xueming Luo (2015), “Geo-Conquesting: Competitive Locational Targeting of Mobile Promotions,” Journal of Marketing Research, 52 (October), 726–35.
Frey, Bruno S., and Stephan Meier (2004), “Social Comparisons and Pro-social Behavior: Testing ‘Conditional Cooperation’ in a Field Experiment,” American Economic Review, 94 (5), 1717–22.
Gneezy, Ayelet (2017), “Field Experimentation in Marketing Research,” Journal of Marketing Research, 54 (February), 140–43.
Gneezy, Ayelet, Uri Gneezy, and Dominique Lauga (2014), “Reference-Dependent Model of the Price-Quality Heuristic,” Journal of Marketing Research, 51 (February), 153–64.
Gneezy, Ayelet, Uri Gneezy, Leif D. Nelson, and Amber Brown (2010), “Shared Social Responsibility: A Field Experiment in Pay-What-You-Want Pricing and Charitable Giving,” Science, 329 (5989), 325–27.
Gneezy, Ayelet, Uri Gneezy, Gerhard Riener, and Leif D. Nelson (2012), “Pay-What-You-Want, Identity, and Self-Signaling in Markets,” Proceedings of the National Academy of Sciences, 109 (19), 7236–40.
Gourville, John T. (1998), “Pennies-a-Day: The Effect of Temporal Reframing on Transaction Evaluation,” Journal of Consumer Research, 24 (4), 395–408.
Grinstein, Amir, and Ann Kronrod (2016), “Does Sparing the Rod Spoil the Child? How Praising, Scolding, and an Assertive Tone Can Encourage Desired Behaviors,” Journal of Marketing Research, 53 (June), 433–41.
Harrison, Glen W., and John A. List (2004), “Field Experiments,” Journal of Economic Literature, 42 (4), 1009–55.
John, Leslie K., Oliver Emrich, Sunil Gupta, and Michael I. Norton (2017), “Does ‘Liking’ Lead to Loving? The Impact of Joining a Brand’s Social Network on Marketing Outcomes,” Journal of Marketing Research, 54 (February), 144–55.
Jung, Minah H., Leif D. Nelson, Ayelet Gneezy, and Uri Gneezy (2014), “Paying More When Paying for Others,” Journal of Personality and Social Psychology, 107 (3), 414–31.
Jung, Minah H., Leif D. Nelson, Uri Gneezy, and Ayelet Gneezy (2017), “Signaling Virtue: Charitable Behavior Under Consumer Elective Pricing,” Marketing Science, in press.
Nevo, Aviv (2001), “Measuring Market Power in the Ready-to-Eat Cereal Industry,” Econometrica, 69 (2), 307–42.
Putnam-Farr, Eleanor, and Jason Riis (2016), “‘Yes/No/Not Right Now’: Yes/No Response Formats Can Increase Response Rates Even in Non-Forced-Choice Settings,” Journal of Marketing Research, 53 (June), 424–32.
Rosse, James (1970), “Estimating Cost Function Parameters Without Using Cost Data: Illustrated Methodology,” Econometrica, 38 (2), 256–75.
Schwartz, Janet, Daniel Mochon, Lauren Wyper, Josiase Maroba, Deepak Patel, and Dan Ariely (2014), “Healthier by Precommitment,” Psychological Science, 25 (2), 538–46.
Simon, Herbert (1955), “A Behavioral Model of Rational Choice,” Quarterly Journal of Economics, 69 (1), 99–118.
Small, Deborah A., and George Loewenstein (2003), “Helping the Victim or Helping a Victim: Altruism and Identifiability,” Journal of Risk and Uncertainty, 26 (1), 5–16.
Sudhir, K., Subroto Roy, and Mathew Cherian (2016), “Do Sympathy Biases Induce Charitable Giving? The Effects of Advertising Content,” Marketing Science, 35 (6), 849–69.
VanEpps, Eric M., Julie S. Downs, and George Loewenstein (2016), “Advance Ordering for Healthier Eating? Field Experiments on the Relationship Between the Meal Order–Consumption Time Delay and Meal Content,” Journal of Marketing Research, 53 (June), 369–80.