It’s Data, Habibi – Episode 3 With Vitalii Duk and Luke Cann

In Episode 3 of our podcast, “It’s Data, Habibi,” we dive into what agentic AI actually means for enterprise leaders and where autonomy truly belongs. Our guests, Vitalii Duk, Founder & CEO of Dynamiq, and Luke Cann, Co-founder & CEO of HEMOdata, break down the difference between deterministic workflows and non-deterministic agents, unpack the “AI Iceberg,” and explore how organisations can drive real ROI without scaling risk.

Inside Episode 3 of “It’s Data, Habibi” with Vitalii Duk & Luke Cann

Most people hear “AI” and still picture chatbots. Useful, sure. But that’s not where the shift is happening.

In Episode 3 of It’s Data, Habibi, host Ivan Leon Gee (Senior Data Scientist, HEMOdata) sits down with Vitalii Duk (Founder & CEO, Dynamiq) and Luke Cann (Co-founder & CEO, HEMOdata) to unpack the thing leaders are quietly being asked to approve without fully understanding: Agentic AI.

#ICYMI: Here are the biggest insights from the episode:

1. Agentic AI Isn’t New

Vitalii makes a point leaders need to hear: agents aren’t a new idea. The concept has been around since the 1960s and ’70s. What changed is that we finally have an engine capable of reasoning + planning + tool use at a level that makes autonomy practical.

Agentic AI, in its simplest form, is a loop:

  • Reason about the goal
  • Plan a sequence of steps
  • Act by calling tools or systems
  • Observe the outcome and repeat

That “repeat” part matters. It’s what separates a one-off answer from a system that can push work forward.
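The loop above can be sketched in a few lines of Python. This is purely illustrative: `plan_next_step`, the `TOOLS` registry, and the canned plan are hypothetical stand-ins for a real LLM call and real integrations, not any specific framework’s API.

```python
# Minimal sketch of the agentic loop: reason -> plan -> act -> observe -> repeat.
# All names here are invented for illustration.

def plan_next_step(goal, history):
    """Stand-in for the LLM's reasoning/planning call.
    Here it just walks a canned plan so the sketch is runnable."""
    canned_plan = ["search", "summarise", "done"]
    return canned_plan[len(history)] if len(history) < len(canned_plan) else "done"

# Tool registry: in a real system these would call search APIs, databases, etc.
TOOLS = {
    "search": lambda goal: f"results for '{goal}'",
    "summarise": lambda goal: f"summary of results for '{goal}'",
}

def run_agent(goal, max_steps=10):
    history = []                      # observations the agent accumulates
    for _ in range(max_steps):        # hard cap: autonomy still needs guardrails
        action = plan_next_step(goal, history)   # reason + plan
        if action == "done":
            return history
        observation = TOOLS[action](goal)        # act: call a tool
        history.append((action, observation))    # observe, then repeat
    return history

steps = run_agent("agentic AI market overview")
```

Note the `max_steps` cap: it is the “repeat” part that gives the system leverage, and also the part that needs a leash.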

2. The Real Decision: To Use AI or Not to Use AI

This is the most important part of the episode, because it’s where organisations usually miss the mark.

“What does AI mean for your company? Where are the places you should be implementing AI?”

On Agentic AI, Vitalii draws a hard line:

Use a workflow when the process is predictable

If your process has low variability, is standardised, and doesn’t require judgement, don’t add an agent.

Why? Because agentic AI is non-deterministic. It’s powered by LLMs. It can hallucinate. It can choose different paths at different times.

Use an agent when variability is the problem

Agentic AI becomes valuable when:

  • the process has too many edge cases
  • it’s hard to code every “if/else” condition
  • judgement is required
  • the task is complex and multi-step
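The contrast with a deterministic workflow is worth making concrete. The sketch below shows a low-variability process as a fixed pipeline: same input, same path, same output, every time. The invoice-processing step names are invented for illustration; the point is that nothing here requires judgement, so nothing here needs an agent.

```python
# A deterministic workflow: a fixed sequence of steps with no branching
# decided at runtime by a model. Step names are hypothetical.

def extract_fields(invoice):
    return {"vendor": invoice["vendor"], "amount": invoice["amount"]}

def validate(data):
    if data["amount"] <= 0:
        raise ValueError("amount must be positive")
    return data

def post_to_ledger(data):
    return f"posted {data['amount']} from {data['vendor']}"

def invoice_workflow(invoice):
    # Same input always takes the same path -- predictable, auditable, cheap.
    return post_to_ledger(validate(extract_fields(invoice)))

result = invoice_workflow({"vendor": "Acme", "amount": 120})
```

When the “if/else” conditions multiply beyond what you can enumerate, that is the signal to consider the agentic loop instead.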

3. The “AI Employee” Narrative Might Not Be Fully Accurate (But the Metaphor Helps)

The market keeps pitching agentic AI as “AI employees.” Vitalii doesn’t love the framing, but he admits it’s a useful metaphor if you treat it realistically:

Agents are closer to junior autonomous employees.

Which means they need:

  • clear responsibilities
  • structured processes
  • supervision and escalation
  • constraints and permissions

Luke reinforces the governance angle: whether it’s a human employee or an AI agent, the organisation is still responsible for outcomes.

And then Vitalii brings in a constraint that matters hugely in this region: data residency and regulation.

“If you’re a UAE company that needs to comply with local regulations and your AI employee is sitting in the US, that might not be something everyone will subscribe to.”

4. AI Readiness: The Foundations Haven’t Changed

Luke brings the conversation back to what HEMOdata sees daily: organisations want “AI value,” but skip the readiness work.

The basics still decide success:

  • Do you understand your data assets?
  • Are they clean, structured, and validated?
  • Are your data flows mapped?
  • Are processes documented?
  • Do you have governance and controls?

Vitalii agrees: “garbage in, garbage out” didn’t disappear in the LLM era.

But there’s a critical nuance: some agentic use cases don’t require a mature data foundation. Deep research agents and coding copilots, for example, improve individual productivity without needing a full data warehouse.

The moment you automate business processes using corporate data, the foundation becomes non-negotiable.

“You still need to have all your processes documented. It doesn’t matter if you’re hiring a human and they need to join today and start performing some tasks – you need to tell them what they need to do, how you do work here in this company. Same for agentic workflows.”

5. Where Leaders Will See ROI

When Ivan asks “why adopt this?”, Vitalii talks about operational leverage:

  1. In customer support, agentic systems allow you to “scale much faster without a need to scale your customer support rapidly.” That’s margin protection while improving responsiveness.
  2. In engineering, velocity compounds: “Instead of doing six-month roadmaps, now you can probably squeeze that into two months.” Faster release cycles mean competitive advantage and earlier revenue realisation.
  3. And in document-heavy operations like KYC or compliance, processes that “previously… could have taken days” can now “take a couple minutes”, compressing onboarding timelines and reducing cost per transaction.

The AI Iceberg

Luke introduces a concept every enterprise leader will recognise: the AI iceberg.

On the surface, you see the visible success story, a competitor launching faster, automating more, showcasing AI wins. What you don’t see is the infrastructure beneath it: documented processes, clean data layers, governance frameworks, access controls, integration work, and rigorous business cases.

Too often, organisations attempt to replicate the top layer without building the foundation underneath. The result?

“You’ve deployed what can be a potentially expensive AI solution, and then you’re not getting the ROI or the expected returns.”

The iceberg isn’t about whether AI works. It’s about whether your foundations are strong enough to support it. Without that groundwork, autonomy will amplify inefficiency.

There’s a lot more to unpack in this episode…

🎧 Watch or listen to the full episode to hear Ivan, Vitalii, and Luke break down agentic AI without the hype and with the realism enterprise leaders actually need.
