Questionnaire Length: The Long and Short of Participant Engagement

Survey length in online interviews continues to be a bone of contention across the marketing research industry. While some within the industry recommend keeping questionnaires short to promote participant engagement, others dismiss this advice, asserting that short questionnaires won’t unearth the depth and breadth of information that is needed.

As a sample provider offering guidance based on research, SSI suggests keeping online interview length at 20 minutes or less. Generally, as interview length increases, fatigue increases and attention span decreases, potentially damaging data integrity. In effect, researchers who insist on longer questionnaires, sincerely believing they’ll get more information, may actually be sabotaging their efforts.

Research Design

To more fully understand the possible effects of survey length on fatigue and response quality, SSI recently fielded two surveys: one long and one short. This study replicated a ground-breaking study conducted in 2004 by Sandra Rathod and Andrea la Bruna, which concluded, among other things, that data quality suffers as interview length increases.

The surveys, both then and now, used a block design divided into four blocks of questions, each covering a different subject. The blocks were randomized for each respondent, so the effect of survey length on response quality could be investigated by comparing whether responses to the same block changed as its position in the survey varied.
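
A per-respondent block rotation of this kind takes only a few lines to implement. Here is a minimal sketch in Python, with hypothetical block labels and respondent IDs, assuming a simple random shuffle rather than the study’s actual rotation scheme:

```python
import random

BLOCKS = ["A", "B", "C", "D"]  # four subject-matter blocks (hypothetical labels)

def block_order(respondent_id: int, seed: int = 2004) -> list[str]:
    """Return a randomized block order for one respondent.

    Seeding with the respondent ID makes each order reproducible,
    while across the sample every block still lands in every position.
    """
    rng = random.Random(f"{seed}-{respondent_id}")
    order = BLOCKS[:]
    rng.shuffle(order)
    return order

# Each respondent sees the same four blocks, just in a different order.
orders = [block_order(r) for r in range(1000)]
```

Because each respondent’s order is reproducible from their ID, block position can later be joined back onto the response data when analyzing order effects.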

Fatigue Effects

One hypothesis of the 2004 study was that, due to fatigue, respondents would take less time and exert less effort later in the questionnaire than earlier. This hypothesis was confirmed in 2004: as the same block of questions was moved further back in the study, the time taken to complete it gradually decreased.

It could be argued that the decrease in block completion time was due to increased familiarity with the question set. It is true that the question blocks were similar in their construction and contained somewhat similar questions. However, additional evidence on panelist fatigue shows that at least some of the increased speed was due to fatigue.

Panelist Fatigue and Satisficing

One of the behavioral outcomes of cognitive fatigue is satisficing – doing just enough work to satisfy the task. To see whether this behavior was present in the 2004 study, researchers looked at a question in each block that could be skipped. This question offered a set of scales presented as sliders. Each slider bar started at the mid-point, so it was possible to click “next” without moving the slider and still leave some data behind. The likelihood of skipping the question rose the further into the questionnaire it was encountered.

In 2009, SSI found precisely the same pattern. The first time the skippable question was encountered, it was more likely to be completed than on subsequent occasions. This was particularly true for the long survey. A reduction in elapsed survey time did not mitigate the effect: the long survey, at nearly 25 minutes, was still too long.
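
Flagging this kind of slider satisficing in the data is straightforward: count, at each block position, how many respondents left the slider exactly at its starting mid-point. A minimal sketch with made-up response data (the 0–100 scale and mid-point of 50 are assumptions, not the study’s actual instrument):

```python
MIDPOINT = 50  # assumed slider starting value

# Hypothetical data: each respondent's slider value for the skippable
# question at block positions 1-4. A value still at the midpoint is
# treated as "skipped", mirroring the study's skippable slider question.
responses = {
    "r1": [63, 50, 50, 50],
    "r2": [40, 55, 50, 50],
    "r3": [70, 65, 58, 50],
}

def skip_rate_by_position(responses, midpoint=MIDPOINT):
    """Share of respondents leaving the slider untouched at each position."""
    n = len(responses)
    positions = len(next(iter(responses.values())))
    return [
        sum(vals[p] == midpoint for vals in responses.values()) / n
        for p in range(positions)
    ]

rates = skip_rate_by_position(responses)
# Fatigue shows up as a skip rate that rises toward later positions.
```

A rising curve across positions is the satisficing signature described above; a flat curve would suggest the mid-point answers are genuine.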

Conclusions

In both 2004 and 2009, the long survey proved itself too long. It fatigued the respondent and led to satisficing behavior. When questions could legitimately be skipped, they were. Perhaps the most unsettling finding was that the instances of cheating, deliberately telling a falsehood in order to skip an entire section, also increased as the survey progressed.

Following the 2004 study, researchers indicated that there is a “critical point in online survey response when the fatigue effects become significantly more pronounced. That critical time is around the 20 minute mark. If researchers work to keep surveys shorter, it will not only help ensure response quality, but it will also make for more motivated and responsive respondents.”

Today’s research confirms that well-designed interviews of 20 minutes or less can produce rich, engaged responses; the marked reduction in satisficing and cheating in the short survey attests to this.

Note: For a copy of the white paper Questionnaire Length, Fatigue Effects and Response Quality Revisited, with the results of the studies cited in this article and to share your comments, go to www.research-voice.com.

About the Author: Pete Cape is Global Knowledge Director for Survey Sampling International. SSI provides access to more than 6 million research respondents in 72 countries. Sources include SSI proprietary panel communities in 27 countries and a portfolio of managed affiliates. SSI can potentially access anyone online to give their opinions via a network of relationships with websites, panels, communities and social media groups.

Optimized Copywriting: Help the Search Engines Recommend Your Company

We don’t often think about Search Engine Optimization as “research” or “market research.” Yet the behavioral data that comes out of a good search engine strategy can make your site more appealing to your target audience and more profitable for you.

This is another installment of our monthly series on search engine strategies from The Search Guru.  Enjoy.

Last month’s blog post focused on creating compelling benefit statements in your copywriting. Now we’ll offer tips on positioning your site so that more people can read about those benefits – and then purchase your products and/or services.

There is a simple, effective way to help the search engines “understand” your copy so that, when people search on Google, Yahoo!, Bing and the like, YOUR company can appear as a result.

We’re talking about using an effective internal linking strategy to help boost the visibility of your company’s website.

What is Internal Linking?

A “link” is hyperlinked text that leads a site visitor from the page he or she is viewing to another page on the Internet.

An “internal link” is hyperlinked text that links from one page on a website to another page within the same website.

Take a look at your website. If someone arrives at your site, will each of your top-level pages include links that would help the site visitor quickly and easily find the information he or she needs about your products and/or services? If not, it’s vital that you fix that as soon as possible by strengthening your internal linking.

Once the top-level pages contain a logical, helpful internal linking structure, move on to the next level of pages – and so on. This is extremely helpful to site visitors, but it’s also important for another reason.

Benefits of Strong Internal Linking

Search engines send out “spiders” to “crawl” your site and determine the focus of your content. These spiders rely heavily on hyperlinked text to locate web pages and understand their themes so that the pages can be properly indexed. Once indexed, these pages can be returned when people search for products and services online.

If one of your web pages is NOT indexed by the search engines, then it cannot be presented in response to a relevant search query. Simple as that.
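
One practical way to audit this is to extract the internal links (and their anchor text) from each page; pages that no internal link points to are effectively invisible to the spiders. A minimal sketch using only the Python standard library (the URLs in the usage note are hypothetical):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkCollector(HTMLParser):
    """Collect (href, anchor text) pairs from an HTML page."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

def internal_links(base_url, html):
    """Return resolved links that stay on the same host as base_url."""
    parser = LinkCollector()
    parser.feed(html)
    host = urlparse(base_url).netloc
    return [
        (urljoin(base_url, href), text)
        for href, text in parser.links
        if urlparse(urljoin(base_url, href)).netloc == host
    ]
```

Running this over every page of a site and collecting the targets gives a quick map of which pages have no inbound internal links – the pages most at risk of never being indexed.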

Three Powerful Internal Linking Strategies

1)   Add relevant keyphrases to your site’s navigational links. When choosing keyphrases to use, ensure that they are:

  1. Searched upon
  2. Relevant to your website
  3. Achievable (not too competitive; we often recommend that clients choose keyphrases with competition under two million web sites)

If you need to review how to create a supercharged keyphrase research strategy, check out these two posts:

  1. Secret 2 Revealed: Use the Keyphrases Your Customers Use
  2. Attract Prospects to Your Site Through the Words You Use

2)   Add keyphrases to internal links that already exist in the body of your text.

3)   Look for opportunities where you can naturally add more optimized internal links within the body text of your site.

Whenever you’re linking from one internal page to another internal page, make sure that the link goes to the most relevant page on your site.

What’s Next?

Take a look at your own website. How far along are you in your internal linking strategy? Do you need to add more internal links? Or do your link placements make sense – and just need to be optimized? Determine where you are in your overall strategy; prioritize areas of your site to strengthen; and then make a commitment to the tool – internal linking – that can help search engines recommend YOUR site.

Next month, we’ll share part 4 of our successful copywriting series – giving you more information to help attract targeted traffic (sales leads!) to your website.

This month’s opportunities:

Knowledge is power – this month, you’ll be filling any gaps in your understanding of SEO copywriting.

  1. Read through the Search Marketing Terms Glossary
  2. For more information about internal linking strategies: How to Create a Landing Page
  3. Discover what questions your prospects are asking – and then answer them on your site.
  4. Bonus: read back issues of the newsletter here: http://www.thesearchguru.com/email-archive.asp to learn more.
  5. Burning question or comment? Email me at Results@TheSearchGuru.com.

About the Author: Leslie Carruthers is President of The Search Guru, a best-practices, full-service Search Marketing firm creating breakthrough results for its clients since 2004. Leslie can be reached at 440-306-2418 or Results@TheSearchGuru.com.

How to Characterize Users and Usage to Design Better Products and Services

At Blink we create behavioral profiles, along with key scenarios, to characterize users and usage.

If you have been around system design in the past several years, you have no doubt encountered personas: bright, whole, wholesome (and entirely fictional) users complete with family members, college degrees, cars, and recreational interests.

Personas are created to help project team members understand and empathize with users. This, in turn, should help drive better design decisions—creating features that will do the best job possible in meeting user needs. Unfortunately, there is a temptation with personas to focus on the personalizing details, giving less emphasis to the behavioral characteristics and motivations that should drive system design. Where this happens, personas have little value.

At Blink we prefer to characterize users in terms of behavioral profiles, devoid of personalizing details. But we aren’t opposed to telling a story to get project stakeholders to empathize with users—the difference is we accomplish this by using contextual details in key scenarios.

User Research is Key

The starting point for our process is user research. This involves going out and observing what users are doing and why. The particular methodologies chosen depend on several factors such as the specific research questions and the available budget.

Through user research we gain an understanding of how tasks are currently performed, what the barriers are, and where the opportunities lie. These data go directly into creating the behavioral profiles and key scenarios.

Example Behavioral Profiles

To create behavioral profiles, we look at patterns of motivations, goals, and usage. For example, let’s say we are creating a photo management and sharing system. We might have a behavioral profile for “Family Photographer” and one for “Photography Enthusiast.” Behavioral profiles can be captured in table form, with cells for goals, motivations, and usage patterns.

Family Photographer

  • Goals: Share family moments and events
  • Motivations: Give joy to friends and family members; tell a story in pictures

Usage Patterns

  • Irregular frequency: works with images in response to an outing or event
  • Spends large amount of time crafting image captions; may collaborate with another household member to “tell the story” to friends and family—humor often an important element
  • Enjoys adding thematic elements to photo pages—travel, parties, sports, etc.
  • Creative unit = the page or pages containing images
  • Time-sensitive: photo quality less important than timeliness; shares an event as soon as possible
  • Long-distance photo sharing may include talking on the phone while both parties view the photos online

Photography Enthusiast

  • Goals: Create a compelling photography portfolio
  • Motivations: Showcase photographic talent; have talent recognized by others

Usage Patterns

  • More regular frequency: tends to work with images during a particular time devoted to the hobby
  • Highly selective: spends significant time comparing images of the same subject against each other to select the highest quality image
  • Photo editing highly important: uses Photoshop or other sophisticated editor to improve image quality or make creative composites
  • Minimal or no captions
  • Minimal or no thematic elements to photo pages
  • Creative unit = image
  • Public sharing of images
  • Watermarking of images highly important to protect copyright
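
Since profiles can be captured in table form, one lightweight way to keep them alongside other design artifacts is as structured data. A minimal sketch (the field names and class are our own illustration, not Blink’s actual format):

```python
from dataclasses import dataclass, field

@dataclass
class BehavioralProfile:
    """One row of the profile table: goals, motivations, usage patterns."""
    name: str
    goals: list[str]
    motivations: list[str]
    usage_patterns: list[str] = field(default_factory=list)

family_photographer = BehavioralProfile(
    name="Family Photographer",
    goals=["Share family moments and events"],
    motivations=["Give joy to friends and family members",
                 "Tell a story in pictures"],
    usage_patterns=["Irregular frequency: works with images after an outing or event",
                    "Time-sensitive: shares an event as soon as possible"],
)
```

Keeping profiles in a structured form like this makes it easy to diff them across project phases or render them into the table format the team reviews.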

Adding Contextual Details: Key Scenarios

Where we do add contextual details is in the key scenarios. Key scenarios focus on the most frequent and important “chain of events.” For example, a key scenario for the Family Photographer might be “Share Recent Images with Friends and Family.” The scenario would be written as a particular instance of the scenario—for example Mary, who has just had a birthday party for her four-year-old son. Tasks included in the scenario might be import images, create an album, add images to album, decorate album, share the album—but these would be put in the context of Mary working with the birthday party images to share with friends and family.

Yesterday, Mary and Tim had a birthday party for their four-year-old son Jacob. It was an exhausting undertaking, but everyone had a wonderful time and Mary got some great photos. Mary wants to create a web-based photo album to share not only with the people who attended the party, but with those who weren’t able to attend, including her parents who live on the opposite coast.

  1. Mary plugs her camera into her computer and transfers the photographs to the Pictures folder.
  2. As soon as the transfer is complete, her computer automatically launches the application and displays the just-transferred images in chronological order.
  3. Mary took a huge number of photos with the idea that she could always weed them out later. She wants to put together a collection (album) of only the very best photos.
  4. Mary creates a new album called “Jacob’s 4th Birthday.”
  5. Mary would like to add a visual theme to her photo album. Because Jacob’s party had a “cars” theme, she browses for a matching photo album theme. She finds the perfect theme and applies it to the album. However, since she hasn’t yet added any photos, she just sees a car-themed “cover” for the album, which contains the album name.
  6. Mary carefully selects the photos for the album. She starts by selecting the absolute best photos first, regardless of where they fit in the chronology. As she adds photos, they are placed in the album in chronological order by default (but she can change the order if she desires).

The details help “make it real,” but by using them to describe actions, they are kept relevant to the tasks the system is designed to support. We have found that focusing contextual details on key scenarios gets us into the design process sooner and with clearer focus.

If You Choose to Use Personas

In many organizations, personas do have a place—particularly where there is a tradition of focusing on technology rather than users. As humans, we are naturally attracted to other people: personifying users can be a powerful agent of change. The key is to not let the personality of your personas overshadow (or replace) the relevant behaviors and attitudes. If you do use traditional personas, keep the following factors in mind:

Root your personas in data about actual user behavior. If conducting field research isn’t feasible, interview people in your organization that have direct and frequent customer contact: customer service personnel or sales people, for example.

When analyzing your data, focus in on key goals and behavior patterns—you want to create a few, distinct personas that will be easy for project stakeholders to work with.

Capture and describe relevant characteristics—skills, attitudes, motivations, goals—before adding any personalizing details.

Beyond Behavioral Profiles and Key Scenarios

At Blink, behavioral profiles and key scenarios are important inputs to design, but they characterize typical system use—not all system use. Even though we use key scenarios as the starting point for design for the most important functions, we also obviously need to consider all functions. For this we do more detailed task analysis using objects and actions. We’ll talk about objects and actions in a future essay.

About the Author: Heidi Adkisson is the Director of Interaction Design for Blink Interactive. Heidi joined Blink in 2001 and has over 15 years of software development experience working with large-scale system implementations. Prior to Blink, her experience included requirements analysis and interaction design, working with established companies such as AT&T Wireless and Microsoft.

5 Lessons from Usability Testing: Designing for the Real World

When designing a new system (or redesigning an existing one), it’s important to keep the user’s real-world context in mind. A lot of thought and effort will hopefully go into making sure the product delivers the right set of features, has the right look and feel, and abides by standard UI conventions. But designs that seem solid conceptually can still fail if they do not take into account how real users will interact with them in the real world. So we need to ask:

  • Where, when, and how will users engage with the system? How does this constrain the type of interaction that is possible?
  • What do users need to do physically and cognitively to use the system effectively? Is this realistic for the target users?
  • How will the interaction unfold over real time and in real space? Does the flow work logistically, as well as conceptually?
  • Besides the expected user, who else will participate in the interaction (directly or indirectly)? Does this change anything?

To illustrate how real-world logistics can affect the user experience, here are a few examples from some of our recent usability projects:

1. Mobile Application. We observed users downloading and installing banking software designed for use on smartphones. During installation, users were given a long confirmation number and were told to write it down, as they would need it again later in order to launch the software. Our study participants balked: One noted that if he was on his BlackBerry, that meant he was away from his desk, with no pen and paper in sight.

Lesson: On a mobile device, it’s important to make sure that tasks can be performed with a minimum of additional resources.

2. In-Store Kiosk. Employees at an electronics store were asked to walk through a buy-flow scenario in which they were helping customers subscribe to internet service at an in-store kiosk. Employees’ concerns focused less on the UI itself and more on how they would manage customers at the kiosk. For example, how would they minimize congestion and wait time? Which screens should the customer fill out and which screens should the employee fill out? Would they be able to print from the kiosk?

Lesson: The real-world environment of an in-store kiosk requires complex user scenarios. In spaces where sales associates and customers work together to complete a task, the design should help facilitate this interaction.

3. Web Activation. To activate a new hardware device, users had to complete a web activation that involved entering their device’s 12-character identification code. Success depended on users’ ability to coordinate action and attention between keyboard, computer monitor, and device. This was a real challenge for many users because they were not touch typists. Many made simple – but extremely frustrating – errors such as failing to click into the field before typing the code, missing a character, or mistaking zeros for ohs.

Lesson: Whenever tasks simultaneously burden cognitive and motor skills, user errors (and frustration) are likely. In such cases, preventing errors is important but not always possible. Helping users recover quickly and gracefully from errors – e.g., precise error messages, auto-correcting typos – can be vital for a positive user experience.
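
For example, zero/oh confusion in an identification code can be auto-corrected before validation. Here is a minimal sketch of such normalization (the character map and separator handling are our assumptions for illustration, not the tested product’s behavior):

```python
# Map characters users commonly confuse when transcribing a device code:
# letter O -> digit 0, letters I and l -> digit 1 (assumed confusion set).
AMBIGUOUS = str.maketrans({"O": "0", "o": "0", "I": "1", "l": "1"})

def normalize_code(raw: str) -> str:
    """Strip separators and fix easy-to-confuse characters in a code."""
    cleaned = raw.replace("-", "").replace(" ", "")
    return cleaned.translate(AMBIGUOUS).upper()
```

Accepting and silently fixing the most common confusions turns a hard validation failure into a non-event for the user.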

4. Online Quotes. When requesting an insurance quote online, users were asked detailed questions about their current deductibles and levels of coverage. Most people in the study said they would want to complete the quote form at home, where they would be able to look up their current policy.

Lesson: When information required to complete a task is not likely to be top-of-mind, tell users up front what will be required and/or allow them to save their work and return to finish it later. This will prevent wasted time, task abandonment, and entry of inaccurate information.

5. Time-Tracking Software. Time-tracking was a component of a larger personal information management suite we tested in the lab and in the field. While very impressed by the program’s ability to automatically associate time spent on the computer with a given project, most study participants also spent significant time offline – in meetings, phone calls, or out of the office – and they wanted to be able to allocate this time to projects as well.

Lesson: When a program cannot automatically account for or predict all real-world behavior, allowing simple manual editing or the ability to insert events after the fact is a must-have feature.

These examples give a taste of some of the real-world issues users confront… and designers have to plan for. They also underscore the value of field testing systems in users’ natural environments, where these types of issues surface naturally. For lab-based usability work, the goal should be to create tasks and scenarios that evoke as much of the richness of the real-life user experience as possible.

About the Author: Jen Amsterlaw is a Usability Specialist with Blink Interactive.  Jen joined Blink in 2007 with a background in experimental psychology and education. She has a Ph.D. in Psychology from the University of Michigan and over 10 years of experience conducting laboratory and field research.

Eye tracking usability studies: what are users really looking at?

To determine what usability study participants look at and take in while viewing online media, we used to watch their mouse cursors, interactions with links and controls, and body language. We also listened carefully to their think-aloud narratives and comments. These traditional testing techniques, however, could never tell us definitively what users notice and what they don’t. Eye tracking usability studies open up a new frontier.

Incorporating an eye tracker in a usability test gives us more precise information about how discoverable or attention-grabbing visual elements such as navigation structures, screen graphics, links, text, multimedia content, or promotions are to study participants.

Eye-tracking benefits

Eye tracking data can help clients improve and streamline designs. By identifying and understanding individual and common user gaze patterns and eye movements when viewing online content, we can address research questions such as:

  • What do users look at first on our home page (or any page, for that matter)?
  • Do the calls to action on this page stand out immediately?
  • Are users reading this content?
  • Are users noticing this interface feature and if so, how long does it take before they look at it?
  • Which of these navigation systems is the most discoverable?
  • What page elements are distracting users from easily accomplishing this task?
  • Will our new design be more effective than the current design?

Eye tracking gives us valuable insights into how users perceive online content. Data generated from eye tracking, when combined with findings from traditional usability methods, can help teams optimize layout and visual design, leading to better user experiences and higher conversion rates. Eye tracking studies can also be a cost-effective way for clients to ensure that they are getting a good design and usability ROI.

How does eye tracking work?

We use an eye tracking system developed in Sweden by Tobii Technology. The Tobii eye tracker looks like a computer monitor (see Figure 1), but sensors are built into the monitor’s casing that send and receive reflections of infrared light from study participants’ eyes. It is quick and easy to train or calibrate the eye tracker to work with an individual at the start of a usability session, and the technology is completely safe.

Figure 1: Eye tracker built by Tobii Technology.

When users view screen content—a web site, application, image, video, marketing piece, etc.—the eye tracking system precisely tracks and records where their gaze pauses, or fixates, even if only for a tenth of a second. The system also tracks and records the eye movements, or saccades, between the fixation points.
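
Separating fixations from saccades in raw gaze samples is commonly done with a dispersion-threshold algorithm (I-DT). The sketch below is a simplified illustration of that general technique, not Tobii’s actual implementation; the pixel threshold and sample counts are assumptions:

```python
def _dispersion(points):
    """Bounding-box dispersion: width plus height of the point cloud."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, max_dispersion=25, min_samples=6):
    """Simplified dispersion-threshold (I-DT) fixation detection.

    samples: list of (x, y) gaze points at a fixed sampling rate.
    A fixation is a run of at least min_samples points whose bounding
    box stays under max_dispersion pixels. Returns (cx, cy, count)
    centroids; the gaps between fixations are the saccades.
    """
    fixations = []
    i = 0
    while i + min_samples <= len(samples):
        j = i + min_samples
        if _dispersion(samples[i:j]) <= max_dispersion:
            # grow the window while dispersion stays low
            while j < len(samples) and _dispersion(samples[i:j + 1]) <= max_dispersion:
                j += 1
            window = samples[i:j]
            cx = sum(x for x, _ in window) / len(window)
            cy = sum(y for _, y in window) / len(window)
            fixations.append((cx, cy, len(window)))
            i = j
        else:
            i += 1
    return fixations
```

At a 60 Hz sampling rate, min_samples=6 corresponds to roughly the tenth-of-a-second fixations mentioned above.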

A brief example

For illustrative purposes, we ran a short eye tracking test with a small sample of five users on the web site of one of our favorite charities, Oxfam America. Participants, all unfamiliar with the site, were given the task of finding a way to donate to Oxfam. Figure 2 shows a “heat map” of what our sample of users looked at during their first five seconds on the home page. The bright red-orange spots are the parts of the page users fixated on most frequently. We outlined the two pathways to donate, “What You Can Do” and “Donate now,” in red.

Figure 2: An eye tracking “heat map” of the Oxfam America home page showing what test participants viewed most frequently during their first 5 seconds on the site.

Unfortunately, both pathways to donate on the home page received little initial attention. All testers found and clicked one of the links within 16 seconds, so task success was 100%, but if a primary purpose of the Oxfam America site is to collect donations, the call to action on the home page may not be clear enough. It’s also possible that a more subtle approach to soliciting donations is more effective for Oxfam’s audience—we don’t know, and Oxfam is not one of our clients.

While heat maps show how different page elements command visual attention relative to each other and can be generated for individuals or a group of users, gaze plots and gaze replays show the visual path that individual users take on a page. The numbered circles in Figure 3 reflect what one user in our mock study fixated upon first, second, third, etc. during her first two seconds on the Oxfam site.


Figure 3: A gaze plot showing one user’s initial eye movements and pauses (or fixations) across the Oxfam America home page.

By analyzing individual gaze plots, we can identify patterns about the order in which study participants view a page or application screen. These patterns can reveal mismatches between where users expect to find links, controls, or content and where they are actually placed on the page, and the patterns help us to recommend changes in the way content or navigational elements are spatially arranged or aligned. For example, a gaze pattern that involves a lot of back and forth movement may suggest a need to place certain items closer together.

One useful feature of the eye tracking system is its ability to track views or fixations in specific areas of interest (AOIs). Once defined in web page or other on-screen content, the eye tracking analysis software can then generate quantitative data such as:

  • the percentage of users whose eyes fixate on the AOI
  • their gaze duration time within the AOI
  • the number of fixations on other page elements prior to viewing the AOI
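
AOI statistics like these are straightforward to compute from fixation logs. A minimal sketch with a hypothetical data layout – ordered (x, y, duration) fixations per user and a rectangular AOI – rather than the Tobii software’s actual export format:

```python
def in_aoi(fix, aoi):
    """True if a fixation (x, y, duration) falls inside an (x0, y0, x1, y1) box."""
    x, y, _ = fix
    x0, y0, x1, y1 = aoi
    return x0 <= x <= x1 and y0 <= y <= y1

def aoi_metrics(users, aoi):
    """Per-user AOI stats plus the share of users who fixated the AOI at all."""
    stats = {}
    for user, fixations in users.items():
        hits = [f for f in fixations if in_aoi(f, aoi)]
        first = next((i for i, f in enumerate(fixations) if in_aoi(f, aoi)), None)
        stats[user] = {
            "fixated": bool(hits),
            "gaze_duration": sum(d for _, _, d in hits),
            "fixations_before_first_hit": first,  # None if the AOI was never seen
        }
    pct = sum(s["fixated"] for s in stats.values()) / len(stats)
    return pct, stats
```

The same per-user records also feed “time to first fixation” charts like the one in Figure 4, once fixation timestamps are included.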

Figure 4 shows data from an AOI we defined around Oxfam America’s “Donate now” box. This chart reveals that 3 users noticed the “Donate now” box, and it took them between 2 and 10 seconds to first fixate on it. Putting on our design hats momentarily, the brown “Donate now” box in Figure 3 looks a lot like a heading and less like a button, which may be why two of our testers did not notice it at all.

Figure 4: “Time to First Fixation” graphic based on the “Donate now” area of interest.

It can be telling how many people simply do not notice an AOI and thus are missing out on an important site function or brand message, echoing the old usability adage “If the user can’t find it, the function’s not there.”

How does eye tracking change how we conduct usability studies?

We do not view eye tracking as a replacement of traditional usability testing methods. With some minor modifications to introduce the eye tracker and fully take advantage of what eye tracking does best, we typically run studies very much as we always have. The data generated from an eye tracker complements other usability findings to give us a more comprehensive and sometimes more quantitative view of usability problems. Eye tracking data can help us pinpoint barriers and distractions that prevent users from finding things quickly or otherwise degrade their online experiences, and it can reveal interesting viewing patterns that lead to better, actionable design recommendations that meet both user needs and business goals—and those are the things we think help our clients the most.

Thanks to Laura Barboza and Jen Amsterlaw for their research assistance.

References

“Eye tracking in human-computer interaction and usability research: Ready to deliver the promises,” Jacob, Robert J.K. and Keith S. Karn. Published in “In the Mind’s Eye: Cognitive and Applied Aspects of Eye Movement Research,” Elsevier Science, Amsterdam (2003)
http://www.cs.tufts.edu/~jacob/papers/ecem.pdf

“A Comparison of Eye Tracking Tools in Usability Testing,” DeSantis, Rich, Quan Zhou and Judith A. Ramey. Society of Technical Communication Proceedings (2005).
http://tc.eserver.org/dir/Eye-Tracking

“Tobii Eye Tracking: See through the eyes of the user.”
Usability brochure available from http://www.tobii.com

Oxfam America Web Site
http://www.oxfamamerica.org

About the Author: John Dirks is the Director of Usability Engineering for Blink Interactive. He has spent over 10 years helping companies improve the usability of their software, web sites, and hardware products. John joined Blink Interactive in the year 2000 following six years working as a Senior Usability Engineer at WRQ Software, Attachmate Corporation, and Microsoft.

10-Point Checklist for Questionnaire Design

About the Webinar: Ryma’s April 7th webinar will be presented at noon ET by Esther LaVielle and Kathryn Korostoff. Whether you’re planning your first market research questionnaire or your 10,000th, this webinar is for you if you ever see yourself considering a customer survey again.

The 10 steps for a stress-free customer survey process distill insights drawn from decades of practice. Esther and Kathryn will discuss a simple process for writing a successful survey, along with basic tips for using an online survey tool to ensure data reliability.

Register for the 10-Point Checklist Seminar or Sign up to receive announcements about upcoming seminars.

About the Presenters: Kathryn Korostoff is a market research professional with a special interest in how organizations acquire, manage, and apply market research. Over the past 20 years, she has personally directed more than 600 primary market research projects and published over 100 bylined articles in trade magazines. Currently, Kathryn spends her time assisting companies as they create market research departments, develop market research strategies, or otherwise optimize their use of market research. Prior to Research Rockstar, Kathryn completed the transition of Sage Research (an agency she founded and led for 13 years) to its new parent company.

She is the author of “How to Hire & Manage Market Research Agencies.”

Esther LaVielle is currently an Account Manager at QuestionPro and Survey Analytics, which was started in 2002 in Seattle and is now one of the fastest growing private companies in the US. Prior to her adventure at QuestionPro she spent 3 years as a Qualitative Project Manager at the Gilmore Research Group.

The Eyes Have It: What You Need to Know About Eye Contact

There’s a lot you can learn from considering the phenomenon of eye contact.  Just a fraction of a second’s eye contact yields a huge amount of information that you can, and do, use as you communicate with your interlocutor.  Thinking about how this works, and why we’ve evolved to do it, can pay big dividends. Take a look at this picture of three women and consider how much information you get almost instantly just by looking at their eyes.  For just a little time invested, you know whether each person is happy or sad, anxious or at peace.
Eye contact is a big part of any conversation.  And as you absorb the information – the feedback – you get from eye contact while having that conversation, you’ll find that you make subtle course corrections in what you’re saying and how you’re saying it.
This is a perfect, beautiful example of a feedback loop.  What is it that makes this feedback loop so successful?  In thinking about this, two things jump out at us.  First, it’s simple.  (On its surface, anyway.)  Your brain filters out extraneous information and focuses on certain vital cues which you’ve learned to watch for.  You’re not overloaded with feedback.  You’re fully focused on the other person’s eyes and what they’re doing with them. Second, it’s fast and it’s repetitive.
We don’t make eye contact once.  Rather, we maintain this feedback loop during our conversations. Why have we evolved this feedback loop?  Communication is among our most important human characteristics, and the ability to understand nonverbal cues is a big advantage.  What’s even more interesting to consider is how this feedback system’s simplicity and repetitiveness have allowed it to evolve to become so important to us.
What’s evolving in your organization?  Customer feedback initiatives are like anything else in corporate life.  If we’re not careful, these programs can become bloated and ineffective.  We suggest taking a page out of nature’s playbook and examining how you can use your customer feedback to give your organization “virtual eye contact” with lots of customers.
The big take-aways we see are those that have allowed eye contact to evolve into such an important part of who we are and how we communicate.
  • Keep it focused by concentrating only on those vital cues that drive results.  (For more ideas on this, read Choose One Thing.)
  • Find a system for streamlining the results so your employees don’t have information overload.  There are a variety of ways, including our software, to do this.
  • Make it repetitive.  Ask for feedback and share it with your customer-facing employees regularly.
How we use eye contact to help us communicate is one of those great examples of nature accomplishing something very powerful with simple elegance.  We’d do well to emulate it.
About the Author: Max Israel is the founder of Customerville, a Customer Satisfaction Measurement Solution for Multi-unit Operators that can help you create happier customers and drive sales.

Benchmarking – A Live Experiment in Co-Creation – Are you ready for this?

Last week, as we were batting around our “Web Analytics” integration strategy (Omniture, Google Analytics), we thought up an interesting concept that I’d like to share with all our readers and, more importantly, get you to comment on and discuss.

Problem Statement:
We were talking to WebTrends about integrating surveys (stated-choice data) with behavioral data. We already do this with Google Analytics and Omniture SiteCatalyst. The folks from WebTrends asked us an obvious question: how many of our customers/users have WebTrends deployed? Good question. So we thought we could simply post a survey on the blog and get at least an idea (the blog generally represents our early adopters and more engaged customer base) of how the Web Analytics solutions are distributed: Google Analytics vs. Omniture vs. WebTrends vs. Clicky vs. Hitwise.

Easy enough; we can get this done. Hell, we don’t even have to pay for survey software!

Then something dawned on us:

As I was getting ready to put the survey together, we thought: “I wonder what some of my other blogger friends have to say about this? Would they want to find out the same information about their own readers?” I knew Ivana, on her DIYMarketers blog, would find it interesting to learn which Web Analytics solutions her readers are using. I also contacted Paul Dunay, one of the most prolific B2B and integrated marketing bloggers out there, to get his take; he seemed intrigued.

The brain expansion continues…
We then started thinking: how about we let anyone syndicate/republish the survey? In theory, if Ivana or Paul wants to collect the _exact_ same data, they should be able to use my survey. Instead of copying it, creating a new survey, and ending up with a disparate data set, they (or for that matter anyone) should be able to syndicate/republish the survey that I am running. Not only that: anyone (including you) could take the survey and republish it with a new encoded URL for your own website, a sort of benchmarking profile. Now, this introduces some challenges and some advantages:
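One way the “new encoded URL” republishing idea could work is to encode the republisher’s identity into the survey link, so every response flows into the one shared dataset but stays attributable to its source blog. The sketch below is purely illustrative; the URL scheme and function names are our assumptions, not an actual QuestionPro API:

```python
import base64
import json

# Hypothetical base URL for the shared benchmarking survey
BASE_SURVEY_URL = "https://example-survey.com/s/web-analytics"

def make_republish_url(publisher_id: str) -> str:
    """Encode a republisher's identity into the survey link, so responses
    collected from that blog are tagged while still landing in one dataset."""
    token = base64.urlsafe_b64encode(
        json.dumps({"publisher": publisher_id}).encode()
    ).decode()
    return f"{BASE_SURVEY_URL}?src={token}"

def decode_publisher(url: str) -> str:
    """Recover the publisher tag from an incoming response URL."""
    token = url.split("?src=", 1)[1]
    return json.loads(base64.urlsafe_b64decode(token))["publisher"]

url = make_republish_url("diymarketers")
assert decode_publisher(url) == "diymarketers"
```

Because the tag rides along in the URL rather than in a separate survey copy, every republisher contributes to the same question set and the same data set, which is what makes the later comparative analysis possible.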

Challenges:

  • I am probably not interested in intermingling my data with others’. After all, the reason I am doing this exercise is to find the answer to a specific question: how many of my users could be using WebTrends, Omniture, or Google Analytics?
  • If I consider the survey to be a competitive advantage (the structure, the questions, etc.), I would not want to give it away to anyone else without fair compensation.

Advantages:

  • I can do “comparative analysis”: my reader profile vs. Ivana’s reader profile vs. anyone else who has chosen to be a part of this benchmarking/syndication experiment.
  • By letting anyone syndicate/republish the survey, the data I collect from other blogs can actually be interesting; maybe there are patterns to be discerned that group B2B blogs vs. B2C blogs vs. product blogs vs. marketing blogs, etc.
  • Ivana/Other bloggers can promote the survey and extend its reach (far beyond the QuestionPro blog)
  • The data that is collected can actually be VERY interesting to Omniture, WebTrends and Google Analytics.
  • If there is enough data, it could then be sold (and revenue could be shared) amongst the participants of the experiment.
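The “comparative analysis” advantage above amounts to cross-tabulating one pooled response set by its source blog. A rough sketch of what that looks like, assuming each pooled response records the publisher tag and the analytics tool the reader chose (the data here is made up for illustration):

```python
from collections import Counter, defaultdict

# Hypothetical pooled responses: (publisher tag, analytics tool reported)
responses = [
    ("questionpro", "Google Analytics"),
    ("questionpro", "Omniture"),
    ("diymarketers", "Google Analytics"),
    ("diymarketers", "Google Analytics"),
    ("diymarketers", "WebTrends"),
]

def profile_by_publisher(rows):
    """Build a per-blog distribution of analytics tools, so each
    participating blog's reader profile can be compared side by side."""
    profiles = defaultdict(Counter)
    for publisher, tool in rows:
        profiles[publisher][tool] += 1
    return profiles

profiles = profile_by_publisher(responses)
assert profiles["diymarketers"]["Google Analytics"] == 2
```

Because every republisher feeds the same schema, the same few lines answer both the original question (how many of my readers use WebTrends?) and the comparative one (how does my readership differ from Ivana’s?).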

Co-Creation:
Here is where you come in. Now we know (based on the commentary above) what we need to do. But like all good things, “we don’t know what we don’t know,” so I’ve decided to blog about this benchmarking tool/offering and make it as open as possible. As part of this benchmarking experiment, I have also decided to follow another model for development. I am going to do two things:

a) Blog in detail about the co-creation experiment.
b) Update and modify the software based on recommendations and consensus opinions (yes, I know this is subjective, but this is not a democracy; it’s a meritocracy).

Please use the comments (below) on this blog to weigh in on:
a) What do you think about this benchmarking/republishing concept?
b) Would you use it? Would you be part of the pilot? If not, why not?
c) What are the pitfalls?
d) What updates/enhancements can we make to the model to make it more attractive?

I’ll be posting a series of blog posts on our benchmarking solution as it evolves…

Next Post : The Web Analytics Benchmarking Survey itself.

How to Find Out if Your Brand is Bland

I subscribe to “Trendwatching.”  If you don’t, you should.  It is a terrific resource for marketers, business owners, and executives at all levels and in all industries.  Trendwatching is an organization dedicated to doing exactly what the name implies: watching trends, naming trends, and reporting on them to those who don’t have the time or expertise to synthesize all the information.

They use a variety of ways to collect this information, mostly by having feet on the ground all over the world who self-report.  But this is the first time I’ve seen this kind of information collection, and I’m going to show you a series of their video presentations over the next week so that you can not only see these powerful presentations but also get some ideas about how you can use this methodology yourself.

The key in each of these video presentations is the question they ask and their collection methodology.  With a few basic clicks of a video editor, they’ve pulled together a powerful case for trends and brands.

Would Customers Say These Things About YOUR Brand?

In this video, consumers are asked the simple question, “What phone do you use?”  For several minutes, people from all over the world respond.

  • What pattern do you see in phones?
  • What phone brands are conspicuously absent from the video and the responses?
  • If you were to create one of these for your own brand – what question would you ask?  What would you expect to hear or see?

Leave a comment and tell me what you think.
