Versantus
| AI Guide

Cut through
the AI noise.

AI is moving fast. The advice is everywhere. This guide is different - practical, honest, and built for people running real businesses. Pick a topic and start.

Last updated: March 2026

What does your team need to do?

These are the tools we use to run our own businesses. They were best-in-class when this guide was created - new tools and capabilities appear frequently. Pick a task and we'll point your team to the right tool.

Making a website or app
Best tools for this

Describe what you want in plain English - the AI builds the code, design, and hosting.

Excel or Google Sheets
Best tools for this

Built into your sheets. Ask it to write formulas, clean data, or find trends.

Summarising or researching
Best tools for this

ChatGPT for fast summaries; NotebookLM for turning many files into audio or presentations.

Writing docs or emails
Best tools for this

ChatGPT for workhorse drafting; Claude for nuanced, professional, human-like tone.

Marketing visuals & video
Best tools for this

Generate photorealistic brand assets or cinematic social clips from a text prompt.

Coding or technical building
Best tools for this

The specialist for explaining code, writing scripts, and debugging in real time - at any skill level.

What is AI?

AI is a genuinely powerful technology - but it's more straightforward than the hype suggests. Here's what you actually need to know.

A very clever autocomplete

Modern AI - specifically large language models like ChatGPT and Claude - works by predicting the most likely next word, or "token", based on patterns learned from enormous amounts of text. It doesn't think the way humans do. It has no opinions, feelings, or understanding in the human sense. What it has is an extraordinary ability to generate text that sounds coherent, relevant, and helpful.

Why it makes things up

AI is trained to produce confident, helpful responses - even when it doesn't know the answer. Rather than saying "I don't know", it fills the gap with what seems most plausible. This is called "hallucination", and it's a fundamental characteristic of how these systems work, not a bug that will simply be fixed. Human review of AI outputs is always essential.

Knowledge frozen in time

Most AI models are trained on data up to a specific cutoff date. After that point, they have no awareness of what has happened in the world - unless they are connected to live tools like web search or a database. This matters when asking about current events, recent regulation changes, or anything time-sensitive.

How AI works, in plain language

You don't need to understand the engineering. But a few simple concepts will help you get more out of AI - and stay out of trouble.

GPT

Generative Pre-trained Transformer

This is the architecture behind many of today's leading AI models. Generative means it creates new content. Pre-trained means it learned from a massive dataset before you ever used it. Transformer refers to the technical mechanism that allows it to weigh and relate different parts of text to each other. You don't need to know the engineering - but knowing what the letters stand for helps demystify what you're working with.

Tokens

The currency of AI

Every time you interact with an AI, it processes your words in small chunks called tokens. A token is roughly three to four characters - so a typical word might be one or two tokens. AI services are often priced by token usage, but in practice this rarely matters for everyday use. A paid monthly subscription to a leading model typically gives you access to millions of tokens for a fixed, predictable fee.
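The rule of thumb above - roughly three to four characters per token - can be turned into a quick back-of-envelope estimate. This is only a sketch: real tokenisers split text differently, so treat the numbers as approximations, not exact counts.

```python
# Rough token estimate using the "a token is ~3-4 characters" rule of thumb.
# Real tokenisers split text differently; this is only a back-of-envelope
# sketch, not an exact count.

def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Approximate how many tokens a piece of text will use."""
    return max(1, round(len(text) / chars_per_token))

prompt = "Summarise this quarterly report in three bullet points."
print(estimate_tokens(prompt))  # roughly 14 tokens for this 55-character prompt
```

A quick estimate like this is mainly useful for sanity-checking costs when you move from a flat subscription to pay-per-token API pricing.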

Recommendation: Get a paid subscription rather than relying on free tiers. It gives you better privacy protections, access to the most capable models, and no unexpected costs.

Non-determinism

Why the same question gives different answers

AI responses are randomised by design. Even with exactly the same prompt, you can get different answers each time. This is intentional - it makes AI more creative and less robotic. It also means you should never treat a single AI response as definitive. Ask again, rephrase, and test outputs in different ways to build confidence before acting on a result.
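The randomness above comes from how models choose each next token: they sample from a probability distribution rather than always taking the single most likely option. The toy sketch below illustrates the idea - the candidate words and their probabilities are made up for illustration, not taken from any real model.

```python
import random

# Toy illustration of non-determinism: instead of always picking the most
# likely next word, the model samples one at random, weighted by probability.
# The words and probabilities here are invented for illustration.

next_token_probs = {
    "profitable": 0.5,
    "promising": 0.3,
    "risky": 0.2,
}

def sample_next_token(probs: dict, rng: random.Random) -> str:
    """Pick one candidate word at random, weighted by its probability."""
    words = list(probs)
    weights = [probs[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

# Two runs with different random states can produce different words,
# even though the "prompt" (the distribution) is identical.
print(sample_next_token(next_token_probs, random.Random(1)))
print(sample_next_token(next_token_probs, random.Random(7)))
```

This is why asking the same question twice can give two different answers - and why re-asking and rephrasing is a sensible way to test an output before relying on it.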

The key AI players

A handful of companies dominate the AI landscape. Each has its own approach, strengths, and ecosystem of tools. Here is a practical overview.

Anthropic

Claude - Haiku · Sonnet · Opus · Claude Code

Founded by former OpenAI researchers with a strong focus on AI safety and reliability. Claude is available in different sizes: Haiku for fast, lightweight tasks; Sonnet for the best balance of capability and speed; Opus for the most complex reasoning. Claude Code is a developer-focused coding tool.

OpenAI

ChatGPT · Codex · Sora

The company that brought AI to mainstream attention with ChatGPT. OpenAI's models power a vast ecosystem of products. Codex is focused on coding tasks; Sora is their video generation model. If your team has experimented with AI at all, they have probably started here.

Google

Gemini

Google's Gemini family of models is tightly integrated into Google Workspace, Search, and other Google products. If your organisation already runs on Google, Gemini's ecosystem can make it a natural and practical starting point.

DeepSeek

Open-source model

A Chinese-developed open-source model that attracted significant industry attention for its strong performance relative to its development cost. As an open-source model it can be deployed and customised - but using it introduces additional considerations around data residency and governance worth discussing with your technical team.

Meta

Llama - open source

Meta's Llama models are open-source, meaning anyone can download, run, and fine-tune them on their own infrastructure. This opens up possibilities for bespoke, private AI applications where you need full control over your data and the ability to train the model on your own content.

Microsoft

Copilot

Microsoft has integrated AI throughout its Microsoft 365 suite via Copilot. If your team uses Word, Excel, Teams, or Outlook, AI capabilities are likely already available to you. Microsoft draws on multiple AI models to power its products and services.

Beyond the big names: There are thousands of AI products and models built on top of these foundations - including open-source alternatives you can host yourself and fine-tune on your own data. If privacy, customisation, or compliance are priorities, an open-source approach is well worth exploring with technical advice.

How to get your team started with AI

The most common mistake organisations make is waiting for a perfect plan before doing anything. The right approach is structured, confident experimentation.

Make it genuinely OK to experiment

The most important thing any organisation can do right now is create a culture where experimenting with AI is not only acceptable - it is encouraged. Leaders need to be visibly curious and engaged. If your team feels they need permission to try something, most of them won't. Make it safe to learn and safe to fail.

Get everyone a paid licence

Give your whole team a paid subscription to a leading AI model - ChatGPT or Claude are both strong choices. A paid account provides significantly better data privacy protections, access to the most capable models, and a predictable monthly cost with no surprises. Free tiers have limitations that make productive, day-to-day use frustrating and introduce unnecessary data risks.

Find your AI ambassadors

Identify a small number of curious, capable people across the business who are willing to learn more deeply. Give them space - and perhaps slightly more latitude - to explore what is possible. Regular internal show-and-tells and shared discoveries help spread capability organically and build confidence across the wider organisation.

Building with AI: what's safe, what needs care

AI tools give your team the power to build things that previously needed a developer. That's exciting - but with it comes responsibility. Developers have spent years learning what's safe to do with code, APIs, and data. Your team may now have that same power without that same experience. Here's what you need to know before you start.

Generally safe to build

  • A simple landing or marketing page where no sensitive user data is processed or stored
  • Internal read-only tools - for example, a dashboard that pulls together reports from multiple sources
  • Using AI to generate copy, images, or email drafts for human review before publishing
  • Summarising internal documents, meeting notes, or research for your own use
  • Prototyping and exploring ideas before committing to a direction or build

Proceed with care

  • Sending customer or business data to an AI system - be very clear about what is shared and with whom
  • Building tools that can create, update, or delete records - AI makes mistakes with real consequences
  • Handling API keys - treat them like passwords and never expose them in code others can see or in public repositories
  • Using AI in recruitment, assessment, or any decision-making that affects people - bias risks are real, and the EU AI Act applies here
  • Releasing AI-generated content directly to customers without review - quality, accuracy, and brand impact all matter
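The API-key point above has a simple practical pattern: keep the key out of your code entirely and read it from the environment at runtime. The sketch below shows the idea - the variable name MY_AI_API_KEY is a placeholder, not any real service's convention.

```python
import os

# Treat API keys like passwords: read them from an environment variable at
# runtime rather than writing them into code that might end up in a shared
# or public repository. MY_AI_API_KEY is a placeholder name.

def load_api_key(var_name: str = "MY_AI_API_KEY") -> str:
    """Fetch an API key from the environment, failing loudly if missing."""
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(
            f"{var_name} is not set - export it in your shell or store it "
            "in a secrets manager instead of hardcoding it."
        )
    return key
```

If a key does leak - pasted into a chat, committed to a repository - revoke and rotate it immediately, just as you would a compromised password.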

Core principles when building with AI

Always keep a human in the loop

AI should support human decisions, not replace them. Build in review steps for anything with real-world consequences. Never automate accountability away.

Ask what the AI isn't doing

AI does what you ask - it won't flag what you forgot to ask. If you are building a website, have you addressed security, accessibility, GDPR, SEO, and performance? If not, that is on you.

You build it, you own it

Using AI to help create something does not transfer your responsibility for it. You remain fully accountable for what you build and how it behaves. Delegation to AI is not a defence.

Run security checks

AI can introduce security vulnerabilities - especially in generated code. Run security scans over any systems you build, or ask an expert to review before going live.

A word on "AI slop": AI can produce content that looks convincing but is mediocre, generic, or just wrong. Before releasing anything AI-generated to customers, prospects, or your team, ask honestly whether it meets the standard you would hold a human to. If not, it needs more work.
Advanced option

Running AI on your own machine

Most people use AI via cloud services - you send a prompt, a remote server processes it, and the response comes back. That works well for most tasks. But there is an alternative: running models locally, entirely on your own hardware.

Local AI means your data never leaves your device. No cloud. No external server. No data agreement to worry about. For teams handling sensitive information - legal, healthcare, confidential client work - or for developers who want full control, this is worth understanding.

This is not where most organisations should start. But it is a real option - and one that is becoming increasingly accessible.

Why it matters

Privacy: Data stays on your machine - not sent to any external service.

Control: Choose exactly which model runs, with no usage policies applied by a third party.

Independence: Works offline. No subscription. No outages.

Trade-offs

Local models are typically less capable than frontier cloud models. They require meaningful hardware (a modern GPU helps significantly). Setup requires technical confidence - not something to hand to a non-technical team without support.

Tools to know

Ollama - simplest local runner, command-line based.
LM Studio - desktop GUI, good for non-developers.
llama.cpp - optimised for lower-end hardware, technical setup.

Your AI policy template

A clear AI policy gives your team the confidence to use AI well - enough freedom to learn, with enough structure to stay out of trouble. The following is a practical template based on Versantus's own policy. Adapt it, print it, and share it with your team.

Establish what is permitted

Be explicit about which tools are approved and in which contexts. Avoid a blanket ban or a free-for-all - both create problems. A reasonable starting point: approved consumer AI tools for individual productivity, with stricter controls for anything touching client data or business systems. Let your AI ambassadors explore more freely.

Use with care

AI tools can make work faster and smarter, but they are not private. Avoid sharing any sensitive information - passwords, API keys, private client data, or proprietary code. Never assume AI-generated content is 100% accurate. All output should be critically reviewed for accuracy, context, bias, and appropriateness before use.

Confidentiality first

If you are unsure whether something is safe to share, do not share it. Treat all external AI tools as public spaces - even if the tool offers a private mode or paid subscription. No external system is as secure as simply not sharing sensitive information. Be specific about what must never be entered: passwords, API keys, client data, proprietary code, anything subject to legal or regulatory restriction.

Beware of integrations

It is easy to connect applications together - but integrations can expose data in ways that are not immediately obvious. APIs may share more than expected. Third-party tools can create vulnerabilities. Always review what data is being shared when setting up integrations, and seek approval from your team leader if client or sensitive data is involved.

Do

  • Use dummy or anonymised data when testing AI tools
  • Discuss ideas and concepts - not specific confidential details
  • Use secure, internal tools for passwords, API keys, and sensitive data
  • Ask your team leader before using AI for client or proprietary work
  • Keep a human review step for any AI output before it is published or sent

Don't

  • Paste code, API keys, or private data into AI chatbots or cloud tools
  • Upload client materials or datasets without explicit permission
  • Assume a paid or private account guarantees full data privacy
  • Enable tool integrations without understanding their data reach
  • Assume AI output is correct without checking from a primary source

Set a review process - and plan for when things go wrong

Decide who needs to approve AI use for work involving client data, sensitive decisions, or public-facing content. Include a clear incident process: what to do if sensitive data may have been shared, who to contact, and how quickly. Acting fast limits damage. Non-compliance with this policy, particularly where confidential data is involved, may be addressed under your disciplinary policy.


Ready to go further with AI?

AI is not something to fear - but it does require thoughtfulness. The organisations that benefit most experiment early, build carefully, and keep people at the centre of every decision. That's where Glo and Versantus come in.

We build clever AI tools and train teams to develop their own skills. If you'd like to explore how we can help your organisation work with AI more effectively, we'd love to talk.
