“Where’s the Yelp of B2B? How influential is such a system in B2B decisions?”
We have been exploring these questions and investigating how online B2B reviews and online vendor scorecards influence decision-making. On Thursday, August 4th from 11am-12:30pm at the ISBM conference at Emory University, we will present our research and have special corporate guests share their experiences on the topic. Our session title is “The Impact of Online, Peer-to-Peer Professional Reviews on B2B Buying Behavior.” One of our special guests is Andy Kohm, CEO of VendOp, who will demonstrate his company’s innovative approach to B2B reviews. Jim and Michelle were interviewed recently about the findings of their research.
Kelly Barner of Buyers Meeting Point interviewed them on Talk Radio on the topic “The Analytical Experience of Reconciling Positive and Negative Supplier Reviews”: http://www.blogtalkradio.com/buyersmeetpoint/2016/07/08/the-analytical-experience-of-reconciling-positive-and-negative-supplier-reviews (A short audio commercial plays before the Play button appears; the Pause button is at the top of the screen.)

Customer expectations are the foundation on which satisfaction develops. If a company’s performance exceeds customer expectations, the client is satisfied; if performance falls below expectations, the client is dissatisfied. How can a company begin to truly understand customer expectations? This topic was the focus of the Kaiser Sotheby’s International Realty session in Gulf Shores, Alabama. Beautiful setting!
Jim and Michelle presented their research findings at the ISM-CV Annual Conference today in Winston-Salem, NC.
After isolating the effects of different types of reviews in the three field experiments, we decided to expand the portfolio to include two reviews from different sources (an external review and an internal review) considered together, but with conflicting valence. We also compared these results to settings in which two reviews were again considered simultaneously, but shared the same valence (both positive or both negative).
Experiment 3 included 97 purchasing professionals, each of whom read one of three scenarios presenting a negative review of a supplier: an internal review, an external review from a company similar to the purchasing professional’s company, or an external review from a company different from the purchasing professional’s company. In this field experiment, we specifically asked how likely it was that the buyer was to blame for the supplier’s poor performance.
These results underscore the importance of suppliers identifying relevant characteristics of their customers so they can explain to prospects the similarities and differences in previous partnerships.

We recognized that purchasing professionals have neither unlimited choices nor unlimited time in the decision-making process. We also knew from our earlier interviews that purchasing professionals see reviews, and any element speaking to a supplier’s performance, as one part of a larger decision-making matrix. So we were interested in understanding the conditions under which purchasing professionals may be more or less engaged when presented with a negative supplier review. Similarity of the source of a review has been examined in B2C research in terms of a reviewer’s geographic proximity to a potential customer and shared language and interests between the two. The findings suggest that similarity breeds greater interest in a product. However, in that previous B2C research, the reviews were neutral or positive. What might happen if the review was negative? Where might the attribution for failure fall: with the reviewer or with the supplier? In the second field experiment, with 100 purchasing professionals, we compared the engagement that resulted from reading a negative review from a reviewer similar or dissimilar to the purchasing professional’s company, and we compared those results to those from purchasing professionals who read a review from an internal stakeholder. We suspected the internal review would produce results identical to those of the review from the reviewer deemed similar to the purchasing professional’s company.
We found it intriguing that purchasing professionals seemed to attribute the fault for the supplier’s poor performance to the buyer when that buyer was described as “very different” from their company, and thus purchasing professionals were more engaged in those cases. Suppliers have little to no control over the way others speak about their performance, so understanding how purchasing professionals mentally process negative reviews and determine who is to blame for a supplier’s poor performance rating is important. We decided to conduct a third experiment to dig deeper into this topic.

The purpose of Experiment 1 was to determine how external and internal reviews may differentially influence engagement across two stages of the B2B customer journey. Further, we were interested in how reviews of different valence originating from these different sources might further influence the level of engagement.
Holistically, then, engagement is the degree of participation, involvement and positive affect that a buyer has for a supplier.
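To make the construct concrete, here is a minimal sketch of how a composite engagement score might be computed from survey responses. The three item groupings mirror the definition above; the specific items, values, and 7-point scale are illustrative assumptions, not the instrument used in the study.

```python
# Minimal sketch: composite engagement score from hypothetical
# 7-point Likert items, grouped by the three dimensions named above.
from statistics import mean

# Hypothetical responses from one buyer (1 = low, 7 = high).
responses = {
    "participation": [6, 5, 6],
    "involvement": [5, 5, 4],
    "positive_affect": [6, 6, 5],
}

# Average within each dimension first, then across dimensions, so
# each facet contributes equally to the composite score.
dimension_scores = {dim: mean(items) for dim, items in responses.items()}
engagement = mean(dimension_scores.values())

print(dimension_scores)      # per-dimension means
print(round(engagement, 2))  # composite engagement score
```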
To tease out the hypothesized differences across stage, source, and valence, we conducted a field experiment with 265 purchasing professionals across industries. Each participant read one of eight scenarios at random, with each scenario representing one combination of the variables of interest.
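As a hedged illustration of the design this implies (a 2 × 2 × 2 between-subjects experiment: stage × source × valence = eight scenarios), the sketch below enumerates the cells and randomly assigns participants. The factor labels are our assumptions based on the description, not the actual study materials.

```python
# Sketch of the implied 2 x 2 x 2 between-subjects design: the eight
# scenarios are the cells of stage x source x valence, and each of the
# 265 participants is assigned one scenario at random.
import itertools
import random

stages = ["vendor_selection", "purchase_decision"]   # assumed labels
sources = ["external_review", "internal_scorecard"]  # assumed labels
valences = ["positive", "negative"]

# Enumerate all eight scenario combinations.
scenarios = list(itertools.product(stages, sources, valences))
assert len(scenarios) == 8

random.seed(42)  # reproducible assignment, for illustration only
assignments = [random.choice(scenarios) for _ in range(265)]

print(assignments[0])  # e.g. ('vendor_selection', 'internal_scorecard', 'negative')
```

In practice, assignment is usually balanced across cells (for example, by shuffling a list containing each scenario roughly 265/8 times) rather than drawn independently; the sketch only shows the structure of the design.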
Given suppliers’ general fear of receiving negative reviews, yet purchasing professionals’ willingness to explore the conditions surrounding a supplier’s negative review, we decided to investigate further with a second field experiment.

We gathered considerable insights on the B2B buying process and purchasing managers’ usage of digital tools during the previous stages of the study. At this point, we wanted to subject a set of promising hypotheses to rigorous academic data gathering and testing. Naturally, we could not test all potential hypotheses or use all of the myriad large-sample data-gathering and statistical-analysis tools. Instead, we narrowed our focus as follows.
• Rather than test all of our hypotheses across all eight steps of the Robinson, Faris, and Wind (1967) BuyGrid Model, we decided to focus on two: selection of alternative vendors and the purchase decision. We selected these two stages because we believe them to be the most critical in the process and to capture pivotal “decisions.”

• Testing the applicability of all available digital tools would be too cumbersome for one study. For this reason, we focused primarily on two digital tools: peer-to-peer online professional reviews and comments, and internally generated Vendor Scorecards. This enabled us to assess the impact of both internally and externally generated digital information.

• Instead of using techniques such as descriptive data collection and analysis or structural equation modeling, which are popular in current academic marketing research studies, we decided to conduct and analyze a series of field experiments (see the analysis sketch after this list). We did so in order to gain insights into the actual decision-making process. Furthermore, we found very few examples of experiments reported in B2B academic research, and those that do exist relied primarily on MBA students as subjects. We planned to use actual B2B purchasing managers.

Before formulating our research hypotheses and crafting our narratives for each experiment, we conducted an extensive review of the extant digital marketing literature. Among our observations were the following; we shaped our experiments around some of these insights.

• By far, most of the research conducted and reported relates to consumer products. There are very few studies that address the concerns of B2B purchasing professionals and their use of digital tools in the buying process.

• Most of the consumer product studies relied on traditional marketing performance measures (sales, brand image, and ROI) as dependent variables. Instead, we chose to develop a new latent construct, engagement. Not only is this construct consonant with the extant literature, it contributes an operational definition for use in the B2B context.

• Most consumer studies exclusively address “external reviews,” often from non-professional and anonymous consumers. We chose to contrast the impact of both external and internal reviews (i.e., in the form of Vendor Scorecard results).

• Many of the consumer studies examine the impact of negative and positive reviews. We too chose to include valence in our experiments.

We conducted a series of field experiments and will share the results in upcoming posts.
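For readers who want a concrete picture of how such a factorial field experiment is commonly analyzed, here is a hedged sketch of a three-way ANOVA on the engagement scores. The file name and column names are placeholders we invented; this is not the authors’ actual analysis code.

```python
# Sketch: three-way ANOVA testing main effects and interactions of
# stage, source, and valence on the measured engagement score.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# One row per participant: assigned condition plus engagement score.
# "experiment1_responses.csv" is a hypothetical export of the survey.
df = pd.read_csv("experiment1_responses.csv")

# Linear model with all main effects and all interaction terms.
model = ols("engagement ~ C(stage) * C(source) * C(valence)", data=df).fit()

# Type II ANOVA table: an F-test for each effect and interaction.
print(sm.stats.anova_lm(model, typ=2))
```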