The new world of generative AI can feel overwhelming. In an earlier article, we already mentioned that developments are progressing at exponential speed — so quickly, in fact, that McKinsey estimates generative AI could add up to $4.4 trillion in annual value to the global economy.
As an event professional, keeping up with these technological advancements is crucial, but it demands time you likely don't have. Together with Xander, I'm here to help you make sense of it all. Developments in generative AI are unfolding across five domains: text, images, audio, code, and video. Today, we'll focus on text—the area most relevant to our work and the driving force behind innovation in all other domains.
ChatGPT’s big breakthrough
OpenAI made history at the end of 2022 with the launch of ChatGPT, suddenly giving the public access to one of the best language models available at the time: GPT-3.5. This model was unique — not because it was the first, but because it could generate text of such high quality that it appeared to be written by a human.
Language models have existed since the 1980s, though the truly large variants (Large Language Models, or LLMs) are far more recent. There were three key reasons their results were disappointing before OpenAI’s breakthrough:
- Data: To become truly good, language models need enormous amounts of training data (articles, blog posts, books), which simply wasn’t digitally available 40 years ago.
- Computing power: Capturing the nuances of language requires immense computational power (GPUs), which is expensive.
- Architecture: In 2017, Google introduced the Transformer architecture that today's models rely on. Now you also know where the 'T' in GPT comes from.
Simply put, language models like GPT-3 are created by vacuuming up huge quantities of text from the internet and using powerful computers to learn the underlying patterns and structures of language.
The competition has awakened
OpenAI, the company behind ChatGPT, has led the pack so far. But other tech giants have now awakened and are investing — either directly or indirectly — in their own proprietary or public (open-source) models of varying complexity and quality.
This competition is beneficial for us as end-users. While the paid version of ChatGPT remains my preferred model, sometimes it’s worthwhile experimenting with Claude (Anthropic), Gemini (Google), Grok (xAI), or Llama (Meta). Each has its strengths and weaknesses, and roughly every six months, improved models emerge, each aiming to surpass the previous generation through more sophisticated training.
Fortunately, the skills required to effectively communicate with a language model are universally transferable. This means prompts you create for ChatGPT can usually be used with other models without modification.
How you, as an event professional, can get started
It seems so easy. ChatGPT's minimalist interface warmly invites you to offload all your routine writing tasks, leaving you time for coffee.
But you wouldn't be the first to walk away disappointed. You can avoid frustration by keeping these LLM-life lessons in mind:
- There's no such thing as a free lunch
Free users of ChatGPT get fewer features and weaker models (such as GPT-3.5). Paying users get GPT-4o, a substantially larger model that better captures the nuances of Dutch and English.
Additionally, context windows are crucial in language models—they indicate how much short-term memory a model has and when it begins to forget the first information you've provided. Predictably, GPT-3.5 is limited here, quickly making conversations feel repetitive.
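To make the idea of a context window concrete, here is a toy sketch (my own illustration, not how ChatGPT actually works internally): once a conversation exceeds the model's token budget, the oldest messages effectively fall out of its short-term memory. Real models use proper tokenizers; counting words is a crude stand-in.

```python
# Toy illustration of a context window: when the conversation exceeds
# the budget, the oldest messages are dropped and the model "forgets" them.
# Counting words is a rough stand-in for a real tokenizer.

def trim_to_context_window(messages, max_tokens):
    """Keep only the most recent messages that fit within max_tokens."""
    kept = []
    total = 0
    for msg in reversed(messages):      # walk from newest to oldest
        cost = len(msg.split())         # crude token estimate
        if total + cost > max_tokens:
            break                       # older messages fall out of memory
        kept.append(msg)
        total += cost
    return list(reversed(kept))         # restore chronological order

chat = [
    "My event is a 500-person gala in Rotterdam.",
    "The budget is 80,000 euros.",
    "Draft a catering brief.",
]
# With a tiny 12-token window, the first message (the gala details) is lost:
print(trim_to_context_window(chat, max_tokens=12))
```

This is why, in a long chat with a small-window model, you may suddenly need to repeat details you gave at the start.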
More importantly, the most powerful features remain hidden behind paid versions: uploading documents, analyzing spreadsheets, training your own AI bots, generating images, and accessing image recognition (GPT-4 with vision). All of these have relevant applications in our industry.
- Communicate like you're talking to a child
This child is smart—even gifted—but it has (almost) no context about who you are, what you do, or why you're suddenly asking it to help draft an event plan.
If I ask my 4-year-old son to clean his room without extra context, he might sincerely believe putting dirty laundry back into the closet qualifies as “clean.” Just as I need to clarify what “clean” means for him, you'll need to provide similar context for your language model.
Thankfully, there are practical frameworks for this, such as the RACEF model, developed by Pete Huang from The Neuron. If you structure your prompts according to RACEF, you'll receive output that's more closely aligned with your expectations.

- A large language model doesn't actually understand language
Did you read that correctly? Yes, you did. The confusing reality is we're interacting with a computer program that's broken down the nuances of language into a mathematical model. Its only real function is predicting which word should come next in a sentence.
The “secret sauce” of a language model lies in a touch of controlled randomness when it picks the next word, which means the same question rarely yields identical answers. This variability — it's also why hitting “regenerate” produces a different response — offers a creative advantage, enabling varied and surprising text generation.
But it also underscores the necessity for human critical evaluation of AI-generated output. Our professional knowledge and experience are essential in determining whether the convincingly written texts actually align with our intended content.
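The "next word prediction" idea above can be sketched with a deliberately tiny bigram model (my own toy example; real LLMs use the Transformer architecture and billions of parameters, but the principle of sampling a likely next word is the same):

```python
import random
from collections import defaultdict, Counter

# Toy next-word predictor: count which word follows which in a tiny corpus,
# then sample the next word in proportion to those counts. This is the same
# principle an LLM uses, just with vastly richer statistics.

corpus = ("the event was a success the event was a disaster "
          "the venue was a success").split()

# Build word -> next-word frequency table
follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def predict_next(word, rng):
    """Sample a next word weighted by how often it followed `word`."""
    counter = follows[word]
    words, weights = zip(*counter.items())
    return rng.choices(words, weights=weights)[0]

rng = random.Random(0)
# Sampling is random, so the same prompt can continue differently:
print([predict_next("a", rng) for _ in range(5)])
```

In this corpus, "a" is followed by "success" twice and "disaster" once, so the model usually (but not always) continues with "success" — a miniature version of why an LLM's output is plausible yet never guaranteed to be correct.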
From theory to practice
In the coming period, we'll share our favorite AI use cases for event professionals. This includes situations where you can leverage a language model like ChatGPT (GPT-4o) to draft event plans, handle insurance claims, or even assist you as a negotiation partner with suppliers.
For now, the advice is simple: invest approximately €20 per month for access to a premium LLM (such as GPT-4o) and start experimenting with the RACEF model.