Many conference organizers are familiar with the saying "location, location, location!"
They know that their conference location has to be attractive to their prospective customers or they won’t attend.
Some organizers know the 4Ps of marketing: product, promotion, place and price.
But how many meeting professionals are familiar with the 3Vs of big data?
Defining Big Data
Big data is traditionally defined as:
Data sets whose size is beyond the ability of commonly used tools to process them within a tolerable time.
Part of using data appropriately is asking the right data questions in order to create business competitiveness.
There’s no question that an organization’s volume of data is growing dramatically.
Most conferences’ data is also growing. Social media, RFID, and the Internet of Things are just a few of the data sources that conference organizers rarely consider. Unfortunately, even most traditional conference data is overlooked, forgotten, or never touched as organizers rush to plan the next big meeting.
Traditional 3Vs Of Big Data
The 3Vs of big data are variety, velocity and volume.
Let’s take a look at each of these Vs as applied to conferences and meetings.
Data variety is exactly what it sounds like. It comes from registration reports, evaluations, purchasing habits, exhibit and sponsor sales, VIP upgrades, lodging reports, food and beverage spend, session attendance numbers, pre- and post-conference events, special event parties and more. It also comes from social technology, including community posts, check-ins, Twitter feeds, Facebook posts, conference mobile apps, photos, audio, video, web analytics, GPS data, sensor data, relational databases, documents, SMS, PDFs, Flash and more.
Data velocity is the speed at which data arrives and must be processed. Traditionally, conferences have relied on batch processing: organizers take a chunk of data, send it to a server, and wait for the results. This works when the incoming data rate is slower than the processing rate and when the result is still useful despite the delay. With new sources of data from social and mobile apps, the traditional batch process breaks down. Real time, near real time, periodic and batch are all velocity rates conference organizers need to consider.
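The batch-versus-streaming distinction above can be sketched in a few lines of Python. This is an illustrative sketch using made-up social posts, not any particular conference platform's API:

```python
from collections import Counter

# Hypothetical social posts collected during a conference session
posts = [
    {"session": "keynote", "sentiment": "positive"},
    {"session": "keynote", "sentiment": "negative"},
    {"session": "workshop", "sentiment": "positive"},
]

def batch_summary(posts):
    """Batch velocity: wait for all the data, then summarize once."""
    return dict(Counter(p["sentiment"] for p in posts))

def stream_summary(posts):
    """Near-real-time velocity: update a running summary per arriving post."""
    counts = Counter()
    for post in posts:            # imagine these arriving one at a time
        counts[post["sentiment"]] += 1
        yield dict(counts)        # a fresh snapshot after every event

print(batch_summary(posts))       # one result, only at the end
for snapshot in stream_summary(posts):
    print(snapshot)               # a result after each event
```

The trade-off is exactly the one described above: the batch version is simpler but delivers nothing until all the data is in, while the streaming version gives organizers an up-to-date picture as posts arrive.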
Data volume is fairly obvious: it is the size of the data, from kilobytes to petabytes (one million gigabytes). A text file is measured in kilobytes, a sound file usually in megabytes, and a video in gigabytes. Data is generated by employees, customers, potential customers, suppliers, vendors and partners. It is also generated by machines themselves; for example, mobile devices send a variety of information to the company's network infrastructure through websites, SMS and phone calls.
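As a quick sanity check on these units (using decimal prefixes, where 1 KB = 1,000 bytes), the arithmetic looks like this; the individual file sizes are illustrative assumptions, not measurements:

```python
# Decimal byte units: each step is a factor of 1,000
KB, MB, GB, TB, PB = 10**3, 10**6, 10**9, 10**12, 10**15

# Assumed, order-of-magnitude sizes for typical conference artifacts
text_file = 50 * KB    # e.g. an evaluation export
sound_file = 5 * MB    # e.g. a recorded session clip
video_file = 2 * GB    # e.g. a full keynote video

# A petabyte really is one million gigabytes
print(PB // GB)  # → 1000000
```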
What’s clear today is that the traditional association management system database is not enough. Many are not built with off-the-shelf features that can collect and easily report the data conference organizers need to make better decisions.
However, there are other data collection and mining systems that organizations can use. Marketing automation and demand generation systems can help conference organizers know when specific customers are ready to register. Social technology aggregators and dashboards can help organizers collect sentiment and identify potential customers’ pain points for programming.
So what? Here’s what: the successful conferences of the future will collect and analyze better data. Successful organizers will ask better questions and use better predictive analytics tools to plan programs that meet the needs of their customers. Volunteer committees will serve as advisors instead of actually picking conference locations, food and beverage, speakers and topics.
Until conference organizers get serious about using big data, today’s conference will continue to be hit or miss.
What are some of the strategic questions organizers should ask of their data? What are some tools that you use, or have seen other meeting professionals use, to collect and mine data? (Note: I’m looking for personal recommendations, not solicitations from companies!)