The application of artificial intelligence and machine learning to surveys

“Alexa, what’s the future of market research?”

AI and machine learning are playing an ever greater part in our lives, and survey research is no exception. Developers are continually searching for ways to automate processes and reduce costs. Early automation looked to remove simple repetitive tasks, but systems such as IBM’s Watson now have knowledge workers in their sights. This conference aims to look at how AI and machine learning are affecting survey research.

The conference will include presentations on:

  • How market researchers are actually using AI in the survey process now.
  • Tagging and categorization of text, images and video, with analysis of techniques and available tools.
  • Validation of machine learning systems.
  • The overlap/differences between statistical analysis and machine learning.

Event Schedule

Ray and Rosie’s presentation will highlight which “AI” approaches are being used, how they are being used, and why they are being used. And by “used” they mean being used in significant quantities, and being used for projects where clients are paying the full price (as opposed to conducting pilots, tests, or evaluation studies).

Location: Hall 1, Building A

Classification, “the mother” of all tasks that can be solved via machine learning, is ubiquitous in market research, where automated coding of textual comments can be implemented as a classification task.

In many MR applications of classification, the real goal is not obtaining the code(s) for individual answers, but estimating the relative frequencies (or percentages) of the different codes in the dataset. In machine learning this task is called “quantification”. In this talk we experimentally show that classifiers obtained via state-of-the-art machine learning technology may deliver very poor quantification accuracy, especially when code frequencies in the data to be coded differ significantly from the corresponding frequencies in the training data. In the same experiments, a “deep learning” system (based on an LSTM recurrent neural network) that we have explicitly devised for quantification purposes shows vastly better quantification accuracy.
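
The gap between classification accuracy and quantification accuracy can be seen with a few lines of arithmetic. The sketch below is illustrative only (the numbers and function names are not from the talk): it shows how naive “classify and count” misestimates code frequencies when the new data’s prevalence differs from the training data, and how the standard “adjusted count” correction, using the classifier’s known error rates, recovers the true figure.

```python
def classify_and_count(pred_pos_rate):
    """Naive prevalence estimate: the fraction of items classified positive."""
    return pred_pos_rate

def adjusted_count(pred_pos_rate, tpr, fpr):
    """Adjusted classify-and-count: corrects the raw rate using the
    classifier's true/false positive rates, estimated on held-out data."""
    est = (pred_pos_rate - fpr) / (tpr - fpr)
    return min(1.0, max(0.0, est))  # clip to a valid proportion

# A classifier with a 90% true-positive rate and a 20% false-positive
# rate, applied to data where the true code prevalence is 10% -- far
# from the balanced mix it may have been trained on.
tpr, fpr, true_prev = 0.9, 0.2, 0.1
observed = true_prev * tpr + (1 - true_prev) * fpr  # expected positive-classified rate

print(round(classify_and_count(observed), 3))       # 0.27 -- badly overestimates 0.10
print(round(adjusted_count(observed, tpr, fpr), 3)) # 0.1  -- recovers the truth
```

The correction only needs the classifier’s error rates, which is why quantification can fail badly even when per-item classification accuracy looks respectable.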

Location: Hall 1, Building A

Digital Taxonomy has been working on AI-driven coding of survey verbatims for some 20 months.

This paper will explore the reality of making machine learning work reliably for survey verbatims and will explain why, after much experimentation, we have concluded that a blend of techniques is needed to solve the problem. There is no silver bullet.

Location: Hall 1, Building A

We all know Machine Learning is hot. One of the universities we work with is struggling to get students to sign up for its statistics courses, while its data science programmes are over-subscribed.

But is Machine Learning (ML) really Statistics by another name? Or is it a more contemporary alternative?

Location: Hall 1, Building A

Artificial intelligence may lure people into thinking it’s impartial, but it can’t exist without human intervention. Bethan Turner explains why AI should be treated with the same caution as other forms of data and analysis.

Location: Hall 1, Building A, Golden Street, South Africa

Dale explains how he applies machine learning/AI to validate predictions for verbatim sentiment. The software employed is Microsoft’s new ML.NET, which he argues is a much better fit for most research agencies.

Neural nets are the ultimate ‘black box’. Beyond the trivial, it is not possible to follow how a network assigns weights across the hidden layers to determine a result. So how to validate? Dale’s paper takes an empirical approach by comparing neural net predictions for verbatim sentiment against both respondent-supplied sentiment scores (such as an NPS 0 to 10 rating question followed by “Why that rating?”) and the Syuzhet R package for text analysis. Unlike neural nets, a Syuzhet sentiment score can be easily traced from input to output. Therefore, if a neural net does at least as well as Syuzhet (and self-ratings where available), for all practical purposes it could be considered ‘valid’.

Location: Hall 1, Building A

As CEO of the Market Research Society, Jane Frost CBE is championing diversity in our sector and leading radical change at MRS to improve its profile and expand its membership. We’re delighted to have Jane speak and offer a valuable Q&A for all.

Jane has 30 years’ experience in board-level marketing and strategy positions at major blue-chip companies and public bodies. She specialises in transforming organisations through the creation of strong brands and value-driven customer relations, and holds over 150 awards for advertising, branding, and design, as well as being executive producer of a double-platinum record. She has extensive experience as a non-executive director on PLC and smaller boards, with the added benefit of experience on audit and remuneration committees, as well as chairing charity boards and government consultancy panels.

Location: Hall 1, Building A, Golden Street, South Africa

Today corporate organizations possess large amounts of customer feedback data, much of it in free-text format. The data can be survey responses obtained via dedicated websites or via email, call centre transcriptions, or logs of customers’ chat sessions with technical support. The data can also be “unsolicited feedback”, such as mentions of brands in the news, blogs, and social media. The volumes of textual data are such that it is infeasible for human analysts to read and interpret them, so there is a clear demand for technologies to automate this process, extracting relevant information from text and making it amenable to further analysis.

In this talk we are going to present key points from our experience developing a commercial system for automatic analysis of open-text customer feedback, discuss considerations for the design of the system, and present results of its evaluation.

Location: Hall 1, Building A

Declining response rates and low engagement are all too common features of online panel surveying nowadays. To combat this, we wanted to assess how AI, and particularly Machine Learning, can assist in improving the respondent experience. We therefore tested two innovative Artificial Intelligence technologies and looked at how they could be applied to enhance survey engagement:

  1. The Google Vision API for classifying images.
  2. A Cloud Speech voice recognition tool to convert voice input to text.
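
As a rough illustration of the second technology, the sketch below builds the JSON body for a synchronous `speech:recognize` call to the Cloud Speech-to-Text v1 REST API, which converts a short voice answer into text. The audio bytes, encoding settings, and endpoint shown are assumptions based on the public v1 API, not details from the talk; the network call itself is left as a comment since it requires credentials.

```python
import base64
import json

fake_audio_bytes = b"\x00\x01"  # stand-in for a recorded answer, e.g. LINEAR16 PCM

payload = {
    "config": {
        "encoding": "LINEAR16",   # raw 16-bit PCM
        "sampleRateHertz": 16000,
        "languageCode": "en-US",
    },
    "audio": {
        # Audio is sent base64-encoded inside the request body.
        "content": base64.b64encode(fake_audio_bytes).decode("ascii"),
    },
}

body = json.dumps(payload)
# With credentials, POST `body` to
#   https://speech.googleapis.com/v1/speech:recognize
# and read the transcript from results[].alternatives[].transcript.
print(sorted(payload["config"]))  # ['encoding', 'languageCode', 'sampleRateHertz']
```

In a survey context, the returned transcript would be stored as the open-ended response in place of typed text.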

Location: Hall 1, Building A

Surveys work best when they ask the right questions. To come up with these questions you need to do some preliminary research and talk to your target audience. But how do you pull together several hours of discussion into a short list of survey questions?

Audio and video transcription has always been expensive, but with Google, Microsoft, Amazon, and IBM all competing to have the best voice-control products, we now have access to AI transcription services for pennies.

In this presentation we will describe how easy it now is to upload video to these online services and get text back; the differences between them; the security aspects of uploading videos that may contain personally identifiable data without violating GDPR; and the extra information you would not get from traditional human transcription, such as the speaker’s emotional state, age and gender estimation, and automatic translation.

Location: Hall 1, Building A