Building Bridges: Connecting survey data to the wider world

Today’s world is increasingly defined by interconnectivity across all platforms and devices. To explore this theme, our conference will be split into a game of two halves.

The first half will show how thought leaders are connecting survey platforms and data with the wider world. Then, after a delicious lunch, we’ll look at how to provide greater insight by using non-survey data to contextualise the information captured in surveys, and vice versa.

As always, we are extremely grateful to all the wonderful speakers who volunteer their time to bring you a fantastic conference at an incredibly low price (£159/£189).  So please come along and join in the discussion.

ASC conferences are part of the Market Research Society’s CPD programme. All ASC conference attendees are eligible for 6 MRS CPD hours.

If you have any queries, please contact admin@asc.org.uk.

Please note that this conference has been postponed to the 12th of November due to international events.

Event Schedule

Location: Hall 1, Building A, Golden Street, South Africa

Technology is driving business transformation. This is also true in the world of Consumer & Market Intelligence (CMI), where new technological developments promise to deliver insights better, faster and cheaper.

Yet, are we blinded by technology? Isn’t there more to having a competitive edge as a CMI team? What about making the most of the data and insights you already have, alongside future-proofing the business and its offering to customers? Are you focusing enough on how close the marketing and innovation community is to consumers? Is the company ready for a mindset of experimentation in a rapidly changing world?

Location: Hall 1, Building A, Golden Street, South Africa

The youngest children of the AI revolution are language models. They go by fancy muppet names such as ELMo (2018, by the Allen Institute for AI), BERT (2018, by Google) or RoBERTa (2019, by Facebook AI). These language models are deep neural networks trained on simple linguistic tasks over vast amounts of data (usually nothing less than all published novels plus all of Wikipedia). What they provide are representations of sentence meaning that can be used for pretty much any semantic task, ranging from semantic search to text classification or information retrieval.
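
To make the "representations usable for any semantic task" idea concrete, here is a minimal semantic-search sketch with sentence embeddings. The speakers do not name their tooling; the open-source sentence-transformers package, the model name and the toy texts below are all illustrative assumptions.

```python
# Minimal semantic-search sketch with sentence embeddings.
# Assumption: the `sentence-transformers` package and the small
# BERT-family model named below; purely illustrative.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

corpus = [
    "Rising sea levels threaten coastal cities.",
    "The royal family faces another scandal.",
    "New study links emissions to extreme weather.",
]
query = "climate change"

# Encode query and corpus into dense vectors; cosine similarity then
# acts as a semantic relevance score.
corpus_emb = model.encode(corpus, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)
scores = util.cos_sim(query_emb, corpus_emb)[0]

for text, score in zip(corpus, scores):
    print(f"{score.item():.2f}  {text}")
```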

We would like to introduce a market research application of those language models, which allows us to identify hot topics and matters of interest among any pre-defined target. More precisely, we will present an online tool that lets the user query any topic she is curious about (say, “climate change” or “shameful Harry”) and get the reach of that topic among various audiences defined in terms of socio-demographics as well as political leanings (are conservative voters really indifferent to climate change? Do younger folks really care about Harry?).

Our tool works on the basis of a content-driven analysis of the internet usage of respondi’s representative online panel (people who have installed a tracking device on their laptops and mobiles). Contents and queries are matched using fine-tuned neural networks on top of Google’s aforementioned muppet magic for natural language processing, BERT.

We think this approach shows one way forward for opinion analysis: using AI and language models to research people’s interests and concerns directly, as inferred from their web navigation. This approach is (almost) real-time: queries dig into the most recent web behaviour of our panel members (D-1 and earlier). It is (completely) open-ended: topics and issues are not a closed list; one can ask about any topic. And it relies on a simple but crucial crossing of data sources: traditional survey data such as age, gender or political preference defines the target groups whose specific interests one cares about.
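
A minimal sketch of that crossing step, with invented column names and toy data: survey variables define the target groups, while the tracking data (already matched to the queried topic by the language model) supplies per-panelist topic exposure.

```python
# Sketch of the data-source crossing. All fields are hypothetical.
import pandas as pd

panel = pd.DataFrame({
    "panelist_id": [1, 2, 3, 4, 5, 6],
    "age_group":   ["18-34", "18-34", "35-54", "35-54", "55+", "55+"],
    "vote":        ["left", "right", "left", "right", "left", "right"],
})

# 1 = panelist visited content matching the topic (D-1 or earlier)
matched = pd.DataFrame({"panelist_id": [1, 3, 4, 5], "topic_hit": 1})

df = panel.merge(matched, on="panelist_id", how="left")
df["topic_hit"] = df["topic_hit"].fillna(0)

# Reach of the topic within each target group
print(df.groupby("age_group")["topic_hit"].mean())
print(df.groupby("vote")["topic_hit"].mean())
```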

Location: Hall 1, Building A, Golden Street, South Africa

Blockchain-powered technology is disrupting and revolutionising a broad range of industries, solving common problems and meeting consumer demands. In market research, this technology and its associated cryptographic techniques hold great promise for addressing persistent concerns. Trust, privacy and transparency are key ingredients in building a data collection ecosystem where everyone strives for greater participation and data quality, and blockchain can intrinsically deliver on these components.
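
To see why a blockchain delivers tamper-evidence, consider a toy hash chain: each block commits to the hash of its predecessor, so altering any earlier record invalidates every later hash. This is a generic illustration only, not a description of Measure’s actual implementation.

```python
# Toy hash chain: generic illustration of blockchain tamper-evidence.
import hashlib
import json

def block_hash(block: dict) -> str:
    payload = json.dumps({"data": block["data"], "prev": block["prev"]},
                         sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def make_block(data: dict, prev_hash: str) -> dict:
    block = {"data": data, "prev": prev_hash}
    block["hash"] = block_hash(block)
    return block

chain = [make_block({"respondent": "r1", "survey": "s1"}, "0" * 64)]
chain.append(make_block({"respondent": "r2", "survey": "s1"},
                        chain[-1]["hash"]))

def verify(chain: list) -> bool:
    for i, block in enumerate(chain):
        if block_hash(block) != block["hash"]:
            return False  # block contents were altered after hashing
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False  # linkage to the previous block is broken
    return True

print(verify(chain))                   # True
chain[0]["data"]["survey"] = "sX"      # tamper with an earlier record
print(verify(chain))                   # False: the chain exposes the edit
```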

This session will cover insights from a programme with eight pilot partners (research agencies, brands and panels) designed to help demystify the use of blockchain in market research. A select group of diverse consumers participated using the Measure MSR iOS app. Data jobs included participating in surveys, completing profile requests, and enabling passive data sources such as location and purchase history. The session will examine the results of the study and give attendees a step-by-step look at this technology in action, building a deep understanding of how blockchain works in our unique data-driven ecosystem.

The session will also provide an overview of how this approach is being used to address declining participation rates in data collection caused by a lack of trust or of perceived anonymity among hard-to-research, and thus underrepresented, demographics in sensitive applications such as student evaluations and cannabis-related research.

Location: Hall 1, Building A, Golden Street, South Africa

The seamless interconnectivity of systems in the current age is driving many significant changes within business. As companies move towards Digital Transformation, they expect data covering all aspects of the business to flow seamlessly and effortlessly through the organisation. They expect that data to be available in a convenient, usable form to those who need it in near real-time.

Similarly, consumer expectations of companies, technology and services are continuously rising as today’s hi-tech environment makes people’s lives more connected, more convenient and faster paced.

The survey world and survey software should be no exception to this. Questionnaires should be dynamic, interactive and smart so that we offer respondents a more relevant experience and ask the right questions at the right time.

One of the best ways to achieve this is through interconnectivity with specialist external services. For example, instead of asking closed questions we should allow the respondent to tell us their opinions, in their own words, using speech. Third party services can then allow us to interpret this speech in real-time and guide the survey accordingly.

In this presentation, we will look at some examples where external services allow us to supercharge a survey. We will outline the advantages for both respondent and researcher, as well as some of the challenges and pitfalls to consider.

For example:

– Applying voice-to-text recognition technology to open question responses

– Using machine learning to recode and codify open answers on-the-fly in the most efficient way possible

– Connecting to external tools like a CRM system to dynamically pull data to feed a questionnaire’s content

– Pulling secondary data (e.g. weather data) and augmenting survey data to enhance analysis (see the sketch after this list)

– Dynamically routing questionnaires using AI to increase relevance.
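
As a hedged illustration of the secondary-data example above, the sketch below augments an interview record with current weather from the public Open-Meteo API. The endpoint parameters and response fields shown are assumptions to check against the service’s documentation, and the record fields are invented.

```python
# Sketch: augmenting a survey record with secondary weather data.
# Endpoint/fields are assumptions; verify against Open-Meteo's docs.
import requests

def augment_with_weather(record: dict) -> dict:
    resp = requests.get(
        "https://api.open-meteo.com/v1/forecast",
        params={
            "latitude": record["lat"],
            "longitude": record["lon"],
            "current_weather": True,
        },
        timeout=10,
    )
    resp.raise_for_status()
    weather = resp.json().get("current_weather", {})
    # Attach the temperature at interview time for later analysis
    record["temperature_c"] = weather.get("temperature")
    return record

interview = {"respondent_id": "r42", "lat": 51.5, "lon": -0.12,
             "q1_mood": "cheerful"}
print(augment_with_weather(interview))
```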

Location: Hall 1, Building A, Golden Street, South Africa

Problem

Market Research data analysis has always taken place in the industry’s private ghetto of terminology and software.

Until recently all that was required was the delivery of cross-tab reports, and the associated dataset had no life outside the report that it informed.

The twenty-first century brought glitz to the presentation and delivery of reports, such as charts, and new report media like Excel, PowerPoint and PDF, but little other change.

In recent years, however, data analysis has acquired a higher profile in the corporate world generally, with transactional data of all kinds (big data) becoming amenable to ad-hoc analysis. These data, and the online tools to analyse them, are becoming more familiar to individual managers. This leads to new demands from corporate survey clients: to use survey data to enhance insight from their enterprise data, and to apply their enterprise tools to their survey data, rather than any secondary analysis tools from the survey industry that their supplier may recommend.

A generation ago there was a software genre of Management Information System tools – many may recall Oracle Express. These dealt with aggregated rather than transactional data because the computers of the time could not provide fast responses to individual-level queries. The main legacy of this OnLine Analytical Processing (OLAP) era is terminology such as cube, measure and dimension. The modern enterprise tools mostly rely on the relational data model, with SQL as a lingua franca for building aggregations.

The biggest problems in moving a survey out of the survey research space have therefore long been familiar; they are the same ones that make populating a relational database from a survey difficult:

  • Representing multiple-response classification variables and grids (see the sketch after this list)
  • Using value codes as well as labels, e.g. to control the order in which categories are presented
  • The shape of the data: relational databases are optimised for millions of rows and few columns, while surveys have many columns (questions) and not so many rows (informants)
  • The rate of change of metadata: big data databases are typically built once and used for years, whereas with ad hoc research it’s another day, another database, and the big data tools are not optimised for this use case
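
A minimal sketch of the standard relational answer to the first bullet: store each selected category of a multiple-response question as its own row (long format), keep the value codes alongside the labels to control category order, and let SQL (the enterprise lingua franca mentioned above) rebuild the crosstab. The schema and data are hypothetical.

```python
# Multiple-response question in long format, crosstabbed with SQL.
# Hypothetical schema and toy data.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE respondent (id INTEGER PRIMARY KEY, age_band TEXT);
CREATE TABLE q_brands_used (
    resp_id     INTEGER REFERENCES respondent(id),
    brand_code  INTEGER,   -- value code: controls category order
    brand_label TEXT       -- display label
);
INSERT INTO respondent VALUES (1, '18-34'), (2, '18-34'), (3, '35+');
INSERT INTO q_brands_used VALUES
    (1, 1, 'Acme'), (1, 2, 'Bright'), (2, 1, 'Acme'), (3, 2, 'Bright');
""")

# Crosstab: brand mentions by age band, ordered by code rather than label
for row in con.execute("""
    SELECT r.age_band, q.brand_label, COUNT(*) AS mentions
    FROM respondent r JOIN q_brands_used q ON q.resp_id = r.id
    GROUP BY r.age_band, q.brand_code
    ORDER BY r.age_band, q.brand_code
"""):
    print(row)
```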

There is also a communication problem – different terminology for the same concepts in the different domains.

Solution

We will begin by providing a phrase book to help those travelling between the survey and corporate data communities.

We go on to demonstrate deploying a substantial consumer survey from survey format into an enterprise query tool, via a relational database.

We explain the problems inherent in the process, and their solutions. Although the demonstration is practical and specific, we will expose the methodology in a way that is actionable and vendor neutral.

Location: Hall 1, Building A, Golden Street, South Africa

In this talk, Glow’s CEO, Tim Clover, discusses the mindset needed to unlock value from synergistic datasets, the value of 80:20 thinking, and the importance of an agile approach that gathers iterative learnings to improve outcomes throughout the process.

Tim asserts that more potential value can be unlocked in data when the barriers to its creation, capture, analysis and presentation are removed. For over six years the team at Glow have been building technology to bring this to life, with leading businesses in Australia and the UK pioneering the use of new data, in different ways, to support decision making. Glow has hand-picked several case studies that spotlight the value-add of integrated data in its projects. These include:

  • An international retailer trying to unlock value from customer surveys at the point of sale, merging them with finance data to improve working capital by over $250m in six weeks
  • A global FMCG brand struggling with its new product development process and using consumer survey data to improve its return on capital through a repeatable business case framework

Location: Hall 1, Building A, Golden Street, South Africa

No matter how robust your methodology, what really matters in the end is that clients are able to relate to your findings on a personal level and connect with what they mean for future strategy.

In this presentation, Dr Matilda Andersson, Managing Director at Crowd DNA London, will share how to authentically connect with, empathise with and represent people through immersive methodologies. Standard demographics and cookie-cutter segmentations no longer work; categories are dissolving as audiences refuse to be boxed in; and people seek to be in control of their own image, irrespective of gender, sexuality, race, size or age. This means we need new methodologies, but we also need to communicate research findings in a compelling yet commercial way.

Dr Andersson will explain how we achieve this via techniques such as IRL immersions, social media ethnos, multi-generational in-depth interviews, and putting control back in the hands of participants (to turn them into partners, not respondents). She’ll also explore the need to produce narratives that move beyond traditional formats – think curated Insta feeds, Snap stories, child’s-view GoPro footage, street photography – and how, in order to hold onto the grit of the core story, we need creative planning, careful execution and organic formats to help brands truly empathise with their audiences beyond survey results.

Location: Hall 1, Building A, Golden Street, South Africa

The data processing and analysis software ecosystem grows richer by the day, but too often, where we want a bridge, we instead find a railway track that mandates one-way procedures and a fixed destination.

To address this issue, we propose (and are developing) a stand-alone, cross-platform desktop/server/cloud survey tabulation and analysis tool (implemented as a zero-GUI DLL) for bridging between any data set that can be represented as Cases by Variables and downstream interactive visualisation tools. As a proof of concept, we collected tweets over the last five days of the recent UK general election using the public Twitter API. The collection can be automated (at N tweets per period P), with reporting updated by the DLL at every Pth period. The case data was then augmented with sentiment scores bridged in from other islands such as Microsoft’s ML.NET. Further examples include UK and Australian electricity grid demand/supply from public sources, the First Fleet database and plain text corpora. By generalising the notion of a respondent to include such things as a tweet or a set of periodic readings, data sets which are the traditional preserve of BI and SQL can instead be visualised and analysed using familiar concepts such as bases, row/column percentages, filters, weighted/unweighted, multi-response, grids, coded increments, standard crosstab statistics, moving averages, etc.
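
A toy illustration of the generalised-respondent idea, with invented fields: once each tweet is a case with variables such as day and (bridged-in) sentiment, familiar survey-style tabulations follow directly. pandas stands in here for the paper’s DLL, whose API is not public.

```python
# Tweets as "cases": survey-style crosstabs over non-survey data.
# Fields and data are invented for illustration.
import pandas as pd

tweets = pd.DataFrame({
    "day":       ["Mon", "Mon", "Tue", "Tue", "Tue", "Wed"],
    "sentiment": ["pos", "neg", "neg", "neu", "pos", "neg"],
})

# Base counts, then row percentages: standard crosstab outputs
counts = pd.crosstab(tweets["day"], tweets["sentiment"])
row_pct = pd.crosstab(tweets["day"], tweets["sentiment"],
                      normalize="index") * 100

print(counts)
print(row_pct.round(1))
```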

With such a DLL in hand, it should be possible to bridge from multiple row-oriented data sources to a survey-esque reporting regime, which, as a practical matter, can in turn facilitate contextualisation against traditional survey results by consolidating both onto a single platform and application.

This paper describes the required DLL functionality to fulfil the role of a practical and effective bridge, and details the data transforms necessary to effect cross tabulation.

Location: Hall 1, Building A, Golden Street, South Africa

Through our horizon scanning work, Direct Line Group’s Insight team showed the business how insurance customers are changing and persuaded stakeholders to recognise that we need to unite to deliver a plan for the future.

Horizon scanning is part of a programme of understanding changing consumer behaviour to answer the following questions:

  • What’s happening out there now?
  • What does this mean for my business (qualitatively and quantitatively)?
  • How do I turn this into an opportunity?

We applied a variety of techniques to collect and analyse data, including desk research, consumer research, field trips, video clips, presentations and workshops for over 200 stakeholders.

We created a unique approach to distilling the key themes that will impact insurance customers within the next 5 to 10 years. We backed up our theories on several opportunity areas through a market sizing activity, including showing the proportion of our customer base that would take up these offers.

Location: Hall 1, Building A, Golden Street, South Africa

A case study in combining open source data with qualitative thinking to inspire the flavour innovation pipeline for the F&B category.

Can any of us cross our hearts and say that we have not used the words “transformative”, “disruptive”, “enabling” or “empowering” at some point in the last few years to talk about the potential of open source data? In the last two decades, the amount of open source data generated via use of the internet has been mind-boggling. Google alone has released an images dataset (36.5 million images containing ~20,000 categories of human-labelled objects), the Natural Questions dataset (307,373 human-generated questions and answers) and Google Trends (aggregate search activity since 2004). Combined with the datasets available from other internet companies, governments, and other public and private bodies, this opens up a whole new world of information and insight about how people live, use, think and feel. Open data was meant to revolutionise access to new (and hitherto inaccessible) data points at scale and change the economics and innovation potential of traditional market research. We talked about integrating this data into our primary data, mining it for new insight, and even substituting traditional data sources with this open source goodness.
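
As one hedged example of how accessible such aggregate data has become, the snippet below pulls Google Trends search interest with the unofficial pytrends package. The author names no tool, and the library’s API should be verified before use; the search terms are invented.

```python
# Illustrative only: aggregate search interest via the unofficial
# `pytrends` package. Verify the API before relying on it.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-GB", tz=0)
pytrends.build_payload(["oat milk", "almond milk"],
                       timeframe="today 5-y", geo="GB")
interest = pytrends.interest_over_time()  # weekly 0-100 interest index
print(interest.tail())
```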

But while the excitement about the potential of open data is understandable, how many successful use cases have we seen? The myth of open data grows with each new data set added, even as the potential remains elusive. There are no clear answers on where and how to use open source data, or on where integrating it with the regular work we do in MR has failed or succeeded. The more use cases we can see, the more value we can bring, in terms of efficiencies and impact, to our work and clients.

Some of the questions I would like to tackle in this session are:
  • How to use open source data for strategic business questions, such as ideas for product innovation?
  • How to extract meaning from open source data, and improve its actionability, by adding socio-cultural context?
  • What are the limitations of working with open data sets?

To do this, I will share a recent case where we combined open source data with qualitative thinking to inspire the flavour innovation pipeline for the F&B category. Working with open data can be hugely beneficial if we drop the rigidity with which we approach traditional research and adopt a more fluid style. This is natural in qualitative research which is exploratory and iterative but also constrained by the over-reliance on verbal and primary data. Once we are able to look beyond the places we have always looked for answers, and adopt an explorer’s mindset with open data, we will find that the questions we have asked thus far are not the limit. Exploring new datasets will allow us to ask new questions and find new answers.

In the words of J.R.R. Tolkien: “Not all those who wander are lost.”

Location: Hall 1, Building A, Golden Street, South Africa

This will be an ASC-led update on developments in the creation of a new standard for survey data transfer.  The aim is to create a standard relevant to a world which is increasingly defined by interconnectivity across all platforms and devices.

The overall objectives are:

  • Help preserve & develop MR’s position at the heart of business intelligence.
  • Help improve connectivity within the MR industry.
  • Help facilitate innovation within the MR industry.
  • Help ease the connection of MR insights & value to other sectors.

We’ll be inviting critique and collaboration to help achieve these goals.

More info coming soon…

Location: Hall 1, Building A, Golden Street, South Africa