Tuesday, July 18, 2006

Is virtual reference successful? Part II (Hint: yes it is)

In Part II I'd like to respond specifically to two comments. The first is from Morgan Fielman; the second from Pascal Lupien, author of the article I'm discussing.

RESPONDING TO FIELMAN

Commenter Morgan Fielman wrote, "The original poster seems to have missed the point of this article, which is primarily about software."

No, I get that the point of the article is primarily about the software and not the customer experience. But the article is so broadly written and lacking in detail that it ends up saying nothing more specific than "VR software has problems."

My questions, unanswered by the article, are: What software? What problems? Two of the products Lupien writes about (Tutor.com's and QuestionPoint's) recently underwent complete overhauls, in effect becoming completely new products. It is unclear from Lupien's article which versions he's writing about, but my sense is he's writing about the older versions. If that's correct, then most of the article is, at best, moot.

Granted, the larger issue of whether or not the software is effective is valid and warrants exploration and discussion. Fielman goes on to ask, "but how can customers be satisfied when the software we use is so poor?" I say that's the wrong question. The question is "Are customers satisfied?" The answer, in our customers' experience, is yes, they are satisfied. We didn't find this out by polling 20 libraries. We found this out by asking the customers.

Another good question might be, "Do the problems with VR software affect the quality of the customer experience, and if so, how and to what extent?" There are many people at collaborative VR services looking at a lot of data to answer that question. Lupien's article suggests that problems with software affect the customer experience but offers no actual data to back it up. He mentions problems with popup windows, problems with Windows Service Pack 2, and problems with serving customers who use Macs, but he is not specific about which software products exhibit which problems and to what extent. And again, Lupien is not clear which versions of Tutor and QuestionPoint he's talking about. The newer versions of both products work for Mac users, and have no Service Pack 2 issues that I'm aware of.


Fielman concludes his comment by saying "original VR supporters have realized that this service just isn't cutting it." The fact is our service has been cutting it for almost 5 years, and we have the hard data and glowing customer comments to prove it. If your VR service isn't cutting it, you need to ask why. Are your staff trained on the software? Are they enthusiastic? What are your customer service standards? Do your librarians give kick-ass customer service in f2f encounters? What quality control mechanisms do you have in place? Do you examine your transcripts for quality? Do you have regular and convenient service hours? Are you available 24/7? (Going 24/7 made a huge difference in our usage, even though usage mostly grew during hours we were already open -- go figure.) And finally, but certainly not least, do you consistently and effectively market your service to your customers? Do they know you exist???

If your service ain't cutting it, maybe you need to answer these questions before blaming the software, which is an easy way out. Consider that here in New Jersey, using standard VR software (currently QP, formerly Tutor/LSSI's eGain-based software), we're cutting it and then some. Other statewide collaboratives are doing quite well too. And we're all working diligently with our respective vendors to ensure that our VR platforms are stable and highly functional. While the occasional glitch can be a real and undeniable pain in the ass, it hasn't prevented us from delivering a high quality and slightly mind-blowing experience to our customers.

RESPONDING TO LUPIEN

First, I'd like to thank Pascal Lupien for taking the time to offer an extremely well-written and thoughtful comment in response to my first post. I'd like to assure him that, contrary to his assertion, I've read his article thoroughly, a few times. I have no problem with bad news about VR. I just want accurate and somewhat substantiated news. I'm offering up the reality of my experience at QandANJ to counter the broad statements that Lupien makes. Now to some of his specific comments.

He writes, "Perhaps these results aren't what proponents of VR would prefer to hear, but they do represent a problem that needs to be discussed, for the sake of our users."

I do not consider myself a proponent of VR; I consider myself a proponent of libraries. It is my desire that libraries remain relevant to our customers by offering a suite of high quality services. Collaborative VR is one such service, offering our customers 24/7 access where and when they want it. I want to see libraries changing their customers' perceptions about what libraries can offer them. I want libraries to blow customer expectations out of the water. I want libraries to be around in 50 years. It is not that I don't want to hear bad news about VR software. I'm perfectly open to hearing about the problems with the current stable of VR software offerings. It's just that I want to hear facts, not conjecture. And I want those facts to be couched in some meaningful context and always tied back, to whatever extent possible, to the impact on our customers. I didn't get this from Lupien's article.

Lupien writes, "To respond to the person who claimed that software is the last thing that matters about VR, I say tell that to the user who is unable to log in because she uses a Mac, or because her computer has pop-up blockers. Tell that to the user who is 'kicked off' in the middle of a session because the VR software does not function properly with the library's licensed databases. These things happen regularly, and this article makes an attempt to discuss them."

I'm pleased to see Lupien talking directly about the impact on customers. Clearly we agree that it would be optimal if VR software worked across all platforms, had no problems with pop-up blockers, and worked 100% of the time so no user was ever "kicked off." I am not suggesting that these problems don't exist; I am asking to what extent they exist, and to what extent they affect the customer's experience and satisfaction with VR service. Because Lupien fails to identify which versions of the various VR products he tested, and is repeatedly non-specific regarding his data, the article fails to answer these questions.

Lupien grants that "many regular VR users appreciate the service," and says he wasn't contesting that fact. Our experience suggests that it is not "many" but most.

Lupien writes, "Shouldn’t we be thinking about these potential users as well, rather than focusing on those who already use and appreciate the service? Shouldn’t we be trying to determine if one software product could help us to improve the experience for all users, not merely the satisfied ones? Perhaps some would fear doing this, as it would reveal that their VR service isn’t as successful and user-friendly as they like to claim?

Yes, we should absolutely be thinking about our potential users, and we should always be shooting for a platform that will provide high quality service to everyone. Again, it's a matter of facts and context. Lupien's article disappoints me on both counts.

Lupien writes, "The point of this article is to focus on users who are unable to log in to begin with, who encounter technical problems during a transaction, or who choose not to use the service because they would be required to disable pop-up blockers or use a particular browser, etc. We’ll never know how these users feel about VR, because they don’t get far enough into a VR transaction to make…comments.

Actually, we have some way of knowing. We ask. Yes sir, right there on the front page of QandANJ we say, "Click here to give us feedback on how our new software is working for you." Here's a sample of what we find: since May 1st (79 days), we have received 23 comments. Sixteen of them were specifically technical (some were positive, some were of the nature "it wasn't fast enough"). One comment came from a Mac user, and three came from customers accessing us through the AOL interface and browser. So, Mr. Lupien, we do make an effort to compile and monitor such information, looking for problematic trends with an eye on improving the service.

Finally, Lupien suggests that I have not been keeping up with the VR literature, and that if I had "taken the time to consider some of the issues discussed in this article before jumping on that user-centric high horse" I would have "come away with a better understanding of what is happening beyond QandANJ."

I can assure Mr. Lupien that I keep up quite well with the VR literature, thank you, and I'm familiar with Coffman and Arret's article, which you can read here (right at the bottom of the page, after Brenda Bailey-Hainer's reasoned response). And if speaking from a place of fact and experience instead of conjecture and generality puts me on a high horse, then what can I say? Giddyup.

In Part III (much shorter, I promise) I'll address the VR software versus IM question.

Epilogue: Customer comment from today: "I am exceedingly impressed. First time in ages I felt like I was getting something positive for my tax dollars." (Our funders sure hate to see this... Ha ha.)


2 Comments:

At July 20, 2006 7:56 AM, Anonymous said...

I appreciate Peter’s comments, but feel compelled to correct some statements which I feel are inaccurate. The assertions made in this article are not based on conjecture. They are based on personal experience, documented cases in the literature and the experience of other libraries.

In terms of experience, I have been involved in VR services for over 5 years and have contributed to developing, coordinating and/or managing VR in three different library environments (one public and two academic). At all of these institutions, we determined that the types of technical problems discussed in the article were having a significant negative impact on usage and on our users' experience. I am currently involved in a VR project which involves librarians from all library sectors. All of the individuals who have tried VR feel that they are not able to serve their patrons well, for the most part due to problems with software. A review of the literature and discussions with other libraries reveal that these cases are not unique.

Coffman and Arret's articles are cited, but there are many more articles in the LIS literature which discuss and document the failure of VR as a customer service (Online is not an academic publication and does not favour lengthy literature reviews). There are many well-written pieces which challenge the usefulness of VR. As more and more libraries either shut down their VR service or move to IM reference, I'm sure we'll see more of this in the literature. Furthermore, as stated in the article, libraries across North America were contacted and the vast majority felt that technical problems associated with their VR software prevented them from offering good service to their patrons. Again, not conjecture but the experience of close to 60 libraries in the U.S. and Canada. Peter seems to be basing his claims on his experience with one project (QandANJ). I am talking about a far larger number of cases. Also, posting a survey is not an efficient method for gathering data about technical problems users are experiencing. Rates of online survey completion are generally low, and this gives us no idea how many users are going away frustrated and not bothering to fill out a survey (or to ever come back).

Peter claims that a weakness of the article is that it does not state what versions of the software were tested. I'm sure he will appreciate that writing an article about VR technology is like hitting a moving target, and that by the time a paper is written and published, technology may have evolved. I would suggest, however, that this is irrelevant in that the problems discussed in the article will continue to prevent libraries from offering good service to many of their patrons. Regardless of which version of Tutor.com, Docutek or QP we are talking about, the fact remains that as security features change and evolve, new problems will arise which create problems for our users. When SP2 was introduced, many libraries experienced a significant drop in usage. Many of our users couldn't access the service, and other libraries have reported a drop of over 30%. That's hardly providing a user-friendly service. Vendors have responded to some of the SP2 problems (some vendors took months to respond), but what happens when the next security patch is introduced? Will VR users have to go through this all over again?

Luke Rosenberger, who also comments on this article (http://lbr.library-blogs.net/objects_in_mirror_are_closer_than_they_appear.htm), has addressed the issue more eloquently than I could. He states that it would be very easy to dismiss the article as out-of-date, but claims that this "would be a huge mistake" and goes on to point out that this "is a sobering reminder that no matter how much we perfect our interviewing and researching skills, it does no good for the patron who's unable to connect to our service, or gives up on our service, because of technical problems." Stating that the article is, in fact, very timely, he draws our attention to a recent study which shows that a whopping 39% of VR sessions were unsuccessful and goes on to point out that the "39% figure only reflects a subset of the patrons we've missed out on -- because it counts only patrons who actually attempted to connect. That means there's another, unknown percentage looming behind that -- patrons who find themselves face-to-face with an arcane error message, or instructions or requirements and simply abandon the service before even sending in a question." This is exactly the point of the Online article. Bill Drew, who commented on this post, correctly states that librarians are the only ones who want "a complicated and bloated VR system" and wonders why libraries don't move to IM.

As suggested in the article, IM allows us to deliver online service to patrons without forcing them to jump through hoops. IM is far more reliable and stable than most VR packages, isn't plagued by the technical problems discussed in the article, and patrons already use it. Even if we take the technical problems out of the equation, IM is still a more user-friendly option in that it puts us where the patron is, rather than expecting the patron to come to us (via "a complicated and bloated VR system"). This is what the article urges librarians to consider.

 
At August 14, 2006 6:01 PM, Anonymous said...

So, um, are we gonna see Part III?

 
