The Future Is Now! A round-up of all the latest technology in survey research.

The aim of this conference is to provide a wide-ranging overview of the current impact of technology on research techniques. We have an excellent line-up of expert speakers covering topics that include panel hackers, data visualisation, communities, qualitative research, AI and machine learning.

Select a session below to view downloadable material:

Event Schedule

In this presentation, we share new findings on the behaviours and sentiments that affect the survey data we focus on. There is a principle in psychology of the figure-ground relationship. As humans we tend to focus on the figure, while the art of interpreting consumer trends is to focus on the background and use the trends to provide better context to the statistics. Examples include people’s device and web behaviours, their media “diet”, and their attitudes in areas including trust, privacy, optimism and economic and financial sentiment.  These factors not only provide context to our data, but also impact the way people take surveys and when, where and how they are comfortable sharing their lives with researchers.

The results presented here are from a new study of over 9,000 consumers in nine countries (UK, France, Germany, Netherlands, US, Canada, Australia, China and Japan), conducted in November 2018 and February 2019. The presentation will highlight both generational and regional trends.

Location: Hall 1, Building A, Golden Street, South Africa

Bricks-and-mortar retail is transforming from transactional to experiential. How can we measure the payback on investment in this important touchpoint?

This paper will demonstrate how new technology, incorporating cameras and algorithms (but not video, so no privacy issues), can shed new light on how display material is working.

Using case studies from Brazil and the US, we will demonstrate how the data generated fills an important gap in the measurement ecosystem.  Beyond cost and sales, we can pick up the number of people reached by the display, the engagement they have with it and the emotion it elicits.

When combined with other data sources, including sales and footfall as well as survey data from exit interviews, we can create a rich picture of the effectiveness of display, from location to messaging, to help clients take better decisions about their investment in retail.

Location: Hall 1, Building A, Golden Street, South Africa

Is technology a barrier to human connection, or is it a facilitator? What do we lose, and what do we gain, when virtual interaction replaces the physical reality of eye contact, touch and smell? Does technology open us up to ways of connecting that we never thought possible, or does it shut us off from authentic experience?

Qualitative research today is big, technology-powered, science-based and method agnostic. From image recognition to video analytics, facial coding to passive data capture, we’ve seen a wealth of examples of innovative uses of technology to delve into people’s lives, to add scale and efficiency, and to widen the playing field.

But there is certainly another side to the story. As much as technology connects and brings together, it also has the ability to fragment and objectify. Human insight often springs from attention and sensitivity to small moments and magnifying the meaning in small details. Data plenitude can make us lazy thinkers and desensitize us to its value.

I believe that this is a challenge to do with the mindset with which we approach the use of technology. The appeal of technology has so far been driven by scale and efficiency – how quickly and cheaply we gain access to people’s lives, thoughts, and behaviours. But without context, what surfaces is merely a sliver, one view of what’s happening in the moment. This compromises our ability as marketers or creators to use data strategically, and can severely impact decision-making.

The answer lies in our ability to zoom out, infuse tech-generated data with context (social, scientific, cultural insight) to generate human insight. This approach is what we call OpenThinking. New tech is a vehicle, a means to open up access to data and the OpenThinking approach is about imbuing this data with meaning.

Location: Hall 1, Building A, Golden Street, South Africa

Digitalization of research creates a new challenge for us: how do we create empathy with participants, so that they are able to contribute in the most genuine way via remote research channels? And how do we encourage and bring this empathy into the client organization, so that our clients are able to see beyond the dataset, connect and understand consumers as people?

We conduct our market research communities on a sophisticated proprietary community platform which leverages a broad activity toolkit, powered by real-time analysis tools, text analytics, video transcription and chat bots to name a few.

But as these tools develop and we add new technologies to the digital bandwagon, we must constantly ask ourselves: where is the thin line between data-fying our participants and humanising the insight we seek? In the race for quick and agile research, how do we as researchers stay human and really connect with our community members?

In this presentation, we will look at communities from two angles: snorkelling and scuba diving. The research frameworks we use to design activities (Censydiam Metaphors, Implicit Reaction Time) are guided by behavioural science principles and enable us to surface System 1 thinking, making sure the research is tapping into the spontaneous and the nonconscious. Think of it as snorkelling. But to build a powerful base, so that participants will actually want to share their life stories and worries with us, we need to make sure member engagement stays high over long periods of time. This is where our 10 golden rules of engagement come into play. Think of it as scuba diving. We will talk about our engagement philosophy, the key principle being creating and nurturing a clear value proposition.

Location: Hall 1, Building A, Golden Street, South Africa

Survey chatbots are an increasingly popular method to gather feedback in an engaging way.

In this presentation we’ll explore the current state of the wider chatbot market and see how the latest trends in chatbots are being applied to the survey industry.

We’ll use findings from our latest case study that compared the results of chatbot surveys with form-based surveys to help answer common questions such as:

  • Why use a chatbot?
  • Where do chatbots work best, and where do they struggle?
  • What types of chatbot survey are currently available?
  • How intelligent are they?
  • What are the challenges of chatbot surveys?
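To illustrate the conversational pattern that distinguishes chatbot surveys from form-based ones, here is a minimal sketch in Python. It is a toy flow, not any vendor's product: questions arrive one at a time, and the routing to the next question depends on the previous answer.

```python
# Toy chatbot survey flow (illustrative only): each node holds a question
# and a routing function that picks the next node from the answer given.

QUESTIONS = {
    "start": {"text": "How satisfied are you with our service? (1-5)",
              "next": lambda a: "why_low" if int(a) <= 2 else "why_high"},
    "why_low": {"text": "Sorry to hear that. What went wrong?",
                "next": lambda a: None},
    "why_high": {"text": "Great! What did you like most?",
                 "next": lambda a: None},
}

def run_survey(answers):
    """Feed scripted answers through the flow; return (question, answer) pairs."""
    transcript, node = [], "start"
    for answer in answers:
        q = QUESTIONS[node]
        transcript.append((q["text"], answer))
        node = q["next"](answer)
        if node is None:
            break
    return transcript

transcript = run_survey(["2", "Delivery was late"])
```

A form-based survey would present all questions at once; here the low satisfaction score routes the respondent to a tailored follow-up, which is the engagement mechanic chatbot surveys trade on.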

Finally, we’ll present our vision on how model-first surveys could impact the future direction of survey chatbots.

Location: Hall 1, Building A, Golden Street, South Africa

Trending topics come and go. The Skripal case may be all over the news for some time, and then the public will lose interest, until some new development brings it back into the light, or not. It is not so difficult to get to know the trending topics of the day; they are all over the internet for us to see. Web-listening solutions will provide us with detailed quantitative insights, or we may just have a look at Google Trends or the “Most Read” columns of news outlets. But is it possible to predict the trending topics of tomorrow?

Of course, this is partly an elusive grail. What will be hot tomorrow depends on events in the world that are beyond our predictive power. That an old-fashioned French perfume bottle was used as a poison phial by Russian spies, with lethal consequences, was an unpredictable trigger of interest. But among all the sparks, we can try to work out which ones are most likely to light a fire. Indeed, this may be what marketing in general, and online marketing in particular, is all about: understanding what is the right spark in the right place.
This research will present predictive models of online news trends, assessing the contribution of various data sources and simulating the effect of news campaigns by media type.

We shall compare what bots can do (models based on data anyone could harvest on the net), what insiders can do (models based on the previous data, plus up-to-date information about topic popularity), and what panels can do (models based on the previous data, plus information about who is reading what). We will show that panel data provides a significant increase in predictive power, and we will show how these predictive models may be used to simulate the effect of news campaigns.

This research is based on online browsing data from 2,000 panel members in the UK. News content is analysed and predicted through a mix of proprietary machine learning algorithms and IBM Watson’s content analysis capabilities.
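The bots/insiders/panels comparison can be pictured as three nested feature sets feeding the same model. The sketch below is deliberately simplified; the feature names, weights and scores are invented for illustration and are not the authors' actual models.

```python
# Illustrative tiers of predictive features (invented names and weights):
# each richer tier sees everything the previous one sees, plus more.

FEATURE_TIERS = {
    "bots":     ["public_mentions", "google_trend_slope"],
    "insiders": ["public_mentions", "google_trend_slope", "topic_popularity"],
    "panels":   ["public_mentions", "google_trend_slope", "topic_popularity",
                 "reader_profile_match"],
}

WEIGHTS = {"public_mentions": 0.3, "google_trend_slope": 0.2,
           "topic_popularity": 0.25, "reader_profile_match": 0.25}

def trend_score(topic_features, tier):
    """Toy linear score using only the features visible to the given tier."""
    return sum(WEIGHTS[f] * topic_features.get(f, 0.0)
               for f in FEATURE_TIERS[tier])

topic = {"public_mentions": 0.8, "google_trend_slope": 0.5,
         "topic_popularity": 0.9, "reader_profile_match": 0.7}

# With non-negative features, each richer tier can only raise the score,
# mirroring the claim that panel data adds predictive power on top of
# publicly harvestable data.
scores = {tier: trend_score(topic, tier) for tier in FEATURE_TIERS}
```

The real models are of course far richer than a fixed-weight linear score; the point of the sketch is only the nesting of the data sources.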

Location: Hall 1, Building A, Golden Street, South Africa

Automated analysis of open-ended text survey data is an appealing prospect. It eliminates human error and human variability, and can be used to create models that are easier to update over time than manual coding typically allows. Today, text analytics is a huge business and is among the most popular innovations in the current research landscape. However, within the research industry there has been little change in usage in recent years, and awareness of the options available appears to be limited. We wished to look more closely at the true strengths of different approaches, the main barriers to their adoption, and how these might be overcome.

Using text responses from a short survey about work and play in two markets, we contrasted two tools in analyzing the output: Q’s text analysis component and Google Cloud Natural Language. We chose these tools as they can each be easily applied to survey data but are based on different analytic principles.
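The two analytic principles being contrasted can be caricatured as rule-based category coding versus lexicon-based sentiment scoring. The sketch below is purely illustrative: the code frame and word lists are invented, and neither snippet reflects the actual Q or Google Cloud Natural Language implementations.

```python
# Toy contrast of two text-analytics principles (invented vocabularies).

CODE_FRAME = {              # rule-based coding, in the spirit of a manual code frame
    "work": {"job", "office", "meeting", "deadline"},
    "play": {"game", "holiday", "friends", "music"},
}

POSITIVE = {"love", "great", "enjoy", "fun"}
NEGATIVE = {"hate", "boring", "stress", "tired"}

def code_response(text):
    """Assign every code whose keyword set intersects the response tokens."""
    tokens = set(text.lower().split())
    return sorted(code for code, kws in CODE_FRAME.items() if tokens & kws)

def sentiment(text):
    """Crude polarity in [-1, 1]: (pos - neg) / matched sentiment tokens."""
    tokens = text.lower().split()
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

resp = "I love music with friends but hate the office stress"
codes = code_response(resp)
score = sentiment(resp)
```

The interesting differences in the study arise precisely because the two principles answer different questions: the coder says *what* a response is about, the sentiment scorer says *how* the respondent feels about it.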

We found some surprising differences between the output of the two tools and between the text analysis metrics and scalar data.
We conclude by discussing some of the key contemporary themes in text analytics and the likely future role of this method within market research and insight. We’ll also touch on how we’ve used text analytics in practical applications for our clients: from providing further insight in brand tracking studies, to use of online reviews to understand reasons for customer dissatisfaction.

Location: Hall 1, Building A, Golden Street, South Africa

Be a storyteller. Bring the data to life. Ignite action. These are all part of the current refrain imploring researchers to engage their audience when presenting research results. This can be challenging when we are delivering survey findings, largely explained by charts and graphs.
Video open-ends offer the potential to do exactly what we are asked to do: bring the data to life by bringing the customer into the room through their recorded response. The ability to see and hear the respondent provides a memorable and irrefutable view of the data.
However, while this new(er) question type offers much promise, to date the reality has revealed clear hurdles, ranging from poor response rates to time-consuming data analysis.
This presentation will discuss the practicalities of video open-ends. Firstly, it will draw upon existing and currently in-field research-on-research to discuss response rates, data quality and respondent concerns, and outline approaches for maximizing the effectiveness of the question type. Secondly, it will cover the analytical challenges of wading through large numbers of short videos, and advancements in research technology that facilitate analysis through AI-powered capabilities.

Location: Hall 1, Building A, Golden Street, South Africa

Data visualisation software has seen huge investment by the likes of Tableau, Microsoft and Qlik, as well as in purpose-built tools for the market research industry. The capabilities available appear to be limitless. It is now easier than ever to create something that looks good, but creating something that drives insight is much harder.
In 2018 the ASC/MRS Award for Best Technology Innovation went to MyKynetec, our online content delivery platform. As well as making it a complete end-to-end data delivery platform for our clients, we gave significant attention to how we can make our visualisations not just beautiful but insightful.
Our approach uses leading data visualisation software, yet the strength of our implementation comes from how we apply tried-and-tested data visualisation techniques that are often forgotten.
Our presentation will cover the three key techniques that we apply in all our data visualisations. The approach is agnostic of data visualisation software and can be applied to any implementation. The three techniques presented are:

  1. Tapping into how our brain perceives images and then exploiting this in our visualisations.
  2. Using a guided-analytical approach to take the reader on a journey of discovery.
  3. Adopting a practical less-is-more design technique to focus the reader’s attention on the things that matter.

Location: Hall 1, Building A, Golden Street, South Africa

The value of machine learning in privacy: a results-oriented machine-learning solution for secure PII data anonymisation

Online behavioural data is a valuable source of insights for researchers. However, data collected passively via tracking meters contains Personally Identifiable Information (PII). With the GDPR in force, the value of online behavioural data is constrained by the risk of disclosing PII. We present a machine-learning solution that significantly reduces the risk of revealing PII when sharing browsing data.

Behavioural data provides extensive information about the actions consumers perform online. We can list the apps they use, the websites they visit and in what order, along with the search terms used in browsers. Behavioural data is continuously evolving to help the market research industry tackle the limitations we currently struggle with when observing online consumers.

It’s time to fully understand the consumer and to make the most of behavioural data. Unfortunately, however, behavioural data brings big challenges in terms of privacy.

It’s not exactly clear what constitutes PII online, as there are so many interested parties. For example, a unique Amazon code isn’t PII to anyone else, but to Amazon it is completely identifiable. Data sets are huge, and manually screening out PII is simply not viable, so some form of automation has to be incorporated into the process.

Here we explain how, through the use of machine learning, we devised our algorithmic PII-exclusion solution.
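To give a flavour of what automated PII screening involves, here is a minimal rule-based sketch. The solution described in the talk is machine-learning based; these regular expressions are illustrative assumptions only, showing the simpler baseline such a classifier must beat.

```python
import re

# Illustrative rule-based first pass for flagging likely PII fragments in
# browsing URLs. The patterns below are assumptions for this sketch, not
# the presenters' actual screening rules.

PII_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),               # email addresses
    re.compile(r"(?:phone|tel)=\+?\d{7,}"),               # phone numbers in query strings
    re.compile(r"(?:user|account|customer)[_-]?id=\w+"),  # account identifiers
]

def redact_url(url):
    """Replace any matched PII fragment with a placeholder token."""
    for pattern in PII_PATTERNS:
        url = pattern.sub("[PII]", url)
    return url

clean = redact_url("https://shop.example/checkout?user_id=ab12&email=jo@mail.com")
```

The limitation of this baseline is exactly the Amazon-code problem above: a bare opaque token carries no textual signal a regex can key on, which is what motivates a learned classifier over hand-written rules.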

Location: Hall 1, Building A, Golden Street, South Africa

Come and see Jon Puleston explore the growing issue of panel hacking and of bots taking surveys. He’ll share ideas on how we might tackle this problem as an industry, and examine what’s just not working. A very interesting area of MR that is sure to inspire discussion!



Location: Hall 1, Building A, Golden Street, South Africa