
Embracing Dangerous Ideas 

You already love AI. 

It seems as though every article or post I’ve read lately references artificial intelligence. Of course, AI has been around for decades, but the speed and memory capacity of modern computing systems have made AI applications ubiquitous. And so people race to jump on the bandwagon, crediting or blaming artificial intelligence for everything from miracle cures to the Robot Armageddon. Personally, I think AI is the most exciting thing to happen since the early days of my career, when I was part of a movement to bring tech-heads together at such seminal events as Digital World and Interop. It was scary, because no one was sure what might happen. But once we realized that digital interaction was commercially viable, there was no going back. And yes, there’s been plenty of fallout. We are still battling hackers, scammers, spammers, espionage, and a full spectrum of misappropriation. But would any of us go back? I doubt it. Because the internet has made everyone’s life better in incalculable ways.

That’s how I see AI. I don’t pretend it can’t be intrusive or even weaponized. But as Oscar Wilde once observed, “All great ideas are dangerous.” Innovation involves risky ideas — and scary first steps — because that’s the nature of progress. People sought to ban the use of electric lights because they feared electrocution. The medical community at large condemned the first use of antiseptics to reduce post-surgical mortality. And people around the world continue to demand that books they will never read be banned from public libraries. Fear and ignorance make for bad policy.

In general, I can be found in the camp that says we should embrace risk when it offers sufficient benefit. This includes an obligation to be intentional about selecting applications and mitigating harm. In terms of AI, I believe that, since we’ve already opened that Pandora’s box (if you use email, social media, or any internet search engine, you are using AI), we will be well served to make the best use of it. What’s important is that we deploy any new idea (whether in technology, design, healthcare, education, manufacturing, or elsewhere) by acting with intent, facilitating and respecting collaboration among informed parties, and always maintaining transparency.

Bill Gates has a thoughtful blog about how AI can be deployed and managed in life-affirming ways; you can read it here. Ironically, because AI can crunch so much data so quickly, it can help us personalize (humanize) our approach to such critical services as healthcare and education. At Freeman, we are poised to bring greater personalization to the world of live events by improving our ROI metrics and strategic insights.  

We recently announced a partnership with Zenus to introduce ethical AI-based attendee behavior mapping for trade show and conference organizers. This will provide our customers with the aggregated, anonymized data they’ve been seeking to improve the effectiveness and value of their events.

Our core purpose at Freeman is to deliver moments that matter — and now we can quantify those moments by analyzing data around dwell time, sentiment, and attendee activation. We can measure which moments matter most. This benefits everyone — event organizers, exhibitors, sponsors, vendors, venues, and, most of all, attendees. This is what AI does best — it handles the data in ways that free humans to focus on what humans do best. That lies at the heart of life-centered design. And those of us in the driver’s seat need to mind the guardrails, watch the signs, check our GPS, and accelerate forward.

Follow me on LinkedIn.