Versantus
| AI Guide

Cut through
the AI noise.

AI is moving fast. The advice is everywhere. This guide is different - practical, honest, and built for people running real businesses. Pick a topic and start.

Last updated: March 2026

What tool for what task?

Not sure where to start? Here's a practical shortcut.

Writing, thinking & general tasks

Claude ChatGPT Gemini

Excel & spreadsheets

Microsoft Copilot

Email & presentations

Microsoft Copilot Claude

Document analysis & research

NotebookLM Claude Gemini

Web research & fact-finding

Perplexity ChatGPT

Coding & development

Claude Code GitHub Copilot Cursor

Images & creative assets

Midjourney DALL-E Adobe Firefly

Internal knowledge & search

NotebookLM Microsoft Copilot

Tools change fast - these reflect current best-practice thinking, not permanent endorsements.

What is AI?

AI is a genuinely powerful technology - but it's more straightforward than the hype suggests. Here's what you actually need to know.

A very clever autocomplete

Modern AI - specifically large language models like ChatGPT and Claude - works by predicting the most likely next word, or "token", based on patterns learned from enormous amounts of text. It doesn't think the way humans do. It has no opinions, feelings, or understanding in the human sense. What it has is an extraordinary ability to generate text that sounds coherent, relevant, and helpful.

Why it makes things up

AI is trained to produce confident, helpful responses - even when it doesn't know the answer. Rather than saying "I don't know", it fills the gap with what seems most plausible. This is called "hallucination", and it's a fundamental characteristic of how these systems work, not a bug that will simply be fixed. Human review of AI outputs is always essential.

Knowledge frozen in time

Most AI models are trained on data up to a specific cutoff date. After that point, they have no awareness of what has happened in the world - unless they are connected to live tools like web search or a database. This matters when asking about current events, recent regulation changes, or anything time-sensitive.

How AI works, in plain language

You don't need to understand the engineering. But a few simple concepts will help you get more out of AI - and stay out of trouble.

GPT

Generative Pre-trained Transformer

This is the architecture behind many of today's leading AI models. Generative means it creates new content. Pre-trained means it learned from a massive dataset before you ever used it. Transformer refers to the technical mechanism that allows it to weigh and relate different parts of text to each other. Knowing what the letters stand for helps demystify what you're working with.

Tokens

The currency of AI

Every time you interact with an AI, it processes your words in small chunks called tokens. A token is roughly three to four characters - so a typical word might be one or two tokens. AI services are often priced by token usage, but in practice this rarely matters for everyday use. A paid monthly subscription to a leading model typically gives you access to millions of tokens for a fixed, predictable fee.
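As a rough illustration, the "three to four characters per token" rule of thumb can be sketched in a few lines of Python. The exact count depends on each model's own tokenizer, so treat this as a ballpark estimate only:

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate using the ~4-characters-per-token rule of thumb.

    Real tokenizers split text differently per model, so this is only
    a ballpark figure for budgeting, not an exact count.
    """
    return max(1, round(len(text) / chars_per_token))

prompt = "Summarise this quarterly report in three bullet points."
print(estimate_tokens(prompt))  # 14 with this heuristic
```

For precise counts you would use the tokenizer published by the model provider, but for everyday budgeting this kind of estimate is usually close enough.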

Recommendation: Get a paid subscription rather than relying on free tiers. It gives you better privacy protections, access to the most capable models, and no unexpected costs.

Non-determinism

Why the same question gives different answers

AI responses are randomised by design. Even with exactly the same prompt, you can get different answers each time. This is intentional - it makes AI more creative and less robotic. It also means you should never treat a single AI response as definitive. Ask again, rephrase, and test outputs in different ways to build confidence before acting on a result.
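The knob behind this randomness is usually called "temperature". A simplified sketch of the idea - not how any specific vendor implements it - with hypothetical scores for four candidate next words:

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Convert raw scores into probabilities; lower temperature sharpens them."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for four candidate next words.
words = ["the", "a", "purple", "quantum"]
logits = [3.0, 2.0, 0.5, 0.1]

cold = softmax(logits, temperature=0.5)  # near-deterministic: top word dominates
hot = softmax(logits, temperature=2.0)   # flatter: unlikely words get real chances

# Sampling from these probabilities is why repeated prompts give different text.
print(random.choices(words, weights=hot, k=1)[0])
```

Because the model samples from a probability distribution rather than always picking the single most likely word, the same prompt can legitimately produce different outputs on each run.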

The key AI players

A handful of companies dominate the AI landscape. Each has its own approach, strengths, and ecosystem of tools. Here is a practical overview.

Anthropic

Claude - Haiku · Sonnet · Opus · Claude Code

Founded by former OpenAI researchers with a strong focus on AI safety and reliability. Claude is available in different sizes: Haiku for fast, lightweight tasks; Sonnet for the best balance of capability and speed; Opus for the most complex reasoning. Claude Code is a developer-focused coding tool.

OpenAI

ChatGPT · Codex · Sora

The company that brought AI to mainstream attention with ChatGPT. OpenAI's models power a vast ecosystem of products. Codex is focused on coding tasks; Sora is their video generation model. If your team has experimented with AI at all, they have probably started here.

Google

Gemini

Google's Gemini family of models is tightly integrated into Google Workspace, Search, and other Google products. If your organisation already runs on Google, Gemini's ecosystem can make it a natural and practical starting point.

DeepSeek

Open-source model

A Chinese-developed open-source model that attracted significant industry attention for its strong performance relative to its development cost. As an open-source model it can be deployed and customised - but using it introduces additional considerations around data residency and governance worth discussing with your technical team.

Meta

Llama - open source

Meta's Llama models are open-source, meaning anyone can download, run, and fine-tune them on their own infrastructure. This opens up possibilities for bespoke, private AI applications where you need full control over your data and the ability to train the model on your own content.

Microsoft

Copilot

Microsoft has integrated AI throughout its Microsoft 365 suite via Copilot. If your team uses Word, Excel, Teams, or Outlook, AI capabilities are likely already available to you. Microsoft draws on multiple AI models to power its products and services.

Beyond the big names: There are thousands of AI products and models built on top of these foundations - including open-source alternatives you can host yourself and fine-tune on your own data. If privacy, customisation, or compliance are priorities, an open-source approach is well worth exploring with technical advice.

Getting started with AI

The most common mistake organisations make is waiting for a perfect plan before doing anything. The right approach is structured, confident experimentation.

Make it genuinely OK to experiment

The most important thing any organisation can do right now is create a culture where experimenting with AI is not only acceptable - it is encouraged. Leaders need to be visibly curious and engaged. If your team feels they need permission to try something, most of them won't. Make it safe to learn and safe to fail.

Get everyone a paid licence

Give your whole team a paid subscription to a leading AI model - ChatGPT or Claude are both strong choices. A paid account provides significantly better data privacy protections, access to the most capable models, and a predictable monthly cost with no surprises. Free tiers have limitations that make productive, day-to-day use frustrating and introduce unnecessary data risks.

Find your AI ambassadors

Identify a small number of curious, capable people across the business who are willing to learn more deeply. Give them space - and perhaps slightly more latitude - to explore what is possible. Regular internal show-and-tells and shared discoveries help spread capability organically and build confidence across the wider organisation.

Building with AI: where it helps and where to be careful

Using AI to build tools and automate processes can be transformative - but some applications are straightforward while others require careful consideration. Here is a practical guide to the difference.

Generally safe to build

  • A simple landing or marketing page where no sensitive user data is processed or stored
  • Internal read-only tools - for example, a dashboard that pulls together reports from multiple sources
  • Using AI to generate copy, images, or email drafts for human review before publishing
  • Summarising internal documents, meeting notes, or research for your own use
  • Prototyping and exploring ideas before committing to a direction or build

Proceed with care

  • Sending customer or business data to an AI system - be very clear about what is shared and with whom
  • Building tools that can create, update, or delete records - AI makes mistakes with real consequences
  • Handling API keys - treat them like passwords and never expose them in code others can see or in public repositories
  • Using AI in recruitment, assessment, or any decision-making that affects people - bias risks are real, and the EU AI Act applies here
  • Releasing AI-generated content directly to customers without review - quality, accuracy, and brand impact all matter
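One concrete habit that prevents the most common API-key mistake above: load keys from the environment (or a secrets manager) rather than writing them into source code. A minimal Python sketch - the variable name EXAMPLE_API_KEY is illustrative, not a real service's convention:

```python
import os

def get_api_key(env_var: str = "EXAMPLE_API_KEY") -> str:
    """Read a secret from the environment instead of hard-coding it.

    Keys kept in environment variables (or a secrets manager) never
    end up in source control, shared chat logs, or AI coding tools.
    """
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(
            f"{env_var} is not set - export it in your shell or CI "
            "secrets store rather than pasting it into code."
        )
    return key
```

Set the variable in your shell (for example `export EXAMPLE_API_KEY=...`) and, if you keep keys in a local `.env` file, make sure that file is listed in `.gitignore`.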

Core principles when building with AI

Always keep a human in the loop

AI should support human decisions, not replace them. Build in review steps for anything with real-world consequences. Never automate accountability away.

Ask what the AI isn't doing

AI does what you ask - it won't flag what you forgot to ask. If you are building a website, have you addressed security, accessibility, GDPR, SEO, and performance? If not, that is on you.

You build it, you own it

Using AI to help create something does not transfer your responsibility for it. You remain fully accountable for what you build and how it behaves. Delegation to AI is not a defence.

Run security checks

AI can introduce security vulnerabilities - especially in generated code. Run security scans over any systems you build, or ask an expert to review before going live.

A word on "AI slop": AI can produce content that looks convincing but is mediocre, generic, or just wrong. Before releasing anything AI-generated to customers, prospects, or your team, ask honestly whether it meets the standard you would hold a human to. If not, it needs more work.

Advanced option

Running AI on your own machine

Most people use AI via cloud services - you send a prompt, a remote server processes it, and the response comes back. That works well for most tasks. But there is an alternative: running models locally, entirely on your own hardware.

Local AI means your data never leaves your device. No cloud. No external server. No data agreement to worry about. For teams handling sensitive information - legal, healthcare, confidential client work - or for developers who want full control, this is worth understanding.

This is not where most organisations should start. But it is a real option - and one that is becoming increasingly accessible.

Why it matters

Privacy: Data stays on your machine - not sent to any external service.

Control: Choose exactly which model runs, with no usage policies applied by a third party.

Independence: Works offline. No subscription. No outages.

Trade-offs

Local models are typically less capable than frontier cloud models. They require capable hardware (a modern GPU helps significantly). Setup requires technical confidence - not something to hand to a non-technical team without support.

Tools to know

Ollama - simplest local runner, command-line based.
LM Studio - desktop GUI, good for non-developers.
llama.cpp - optimised for lower-end hardware, technical setup.

AI policy for your business

A clear AI policy is not about restricting use - it is about giving your team the confidence to use AI well. The goal is enough freedom to learn, with enough structure to stay out of trouble.

Establish what is permitted

Be explicit about which tools are approved, and in which contexts. Avoid a blanket ban on everything or a free-for-all. Both extremes create problems. A reasonable starting position: approved consumer AI tools for individual productivity, with stricter controls for anything touching client data or business systems.

Define what must never be shared

Be specific about the categories of information that should never be entered into an AI system: passwords, API keys, private client data, proprietary code, and anything subject to legal privilege or regulatory restriction. Specificity here prevents avoidable mistakes.

Set a review and approval process

Determine who needs to approve AI use for work involving client data, sensitive decisions, or public-facing content. Consider giving your AI ambassadors or a research group slightly more latitude to explore - they are your learning edge.

Plan for when things go wrong

Include a clear incident process: what to do if sensitive data may have been shared with an AI tool, who to contact, and how quickly. Acting fast limits damage. Compliance and monitoring tools exist to help larger teams manage this at scale - worth exploring as AI use in your organisation matures.

Versantus AI policy

The following is Versantus's own policy for the safe and responsible use of AI tools. We share it here as a practical example you can adapt for your own organisation.

Use with care

AI tools can make our work faster and smarter, but they are not private. Avoid sharing any sensitive information - including passwords, API keys, private client data, or proprietary code. Never assume that AI-generated content (code, text, data, or images) is 100% accurate. All AI output should be critically reviewed for accuracy, context, bias, and appropriateness before use.

Confidentiality first

If you are unsure whether something is safe to share, do not share it. Treat all external AI tools as public spaces - even if the tool offers a "private" mode or a paid subscription for added privacy. No external system is as secure as simply not sharing sensitive information in the first place.

Code and technical work

When using AI to assist with coding or technical work, do not upload or expose confidential solutions - ours or our clients'. Use sanitised or dummy data wherever possible. Assume that anything entered into an AI coding tool could be visible beyond that session.

Integrations and connections

It is easy to connect applications and tools together - but integrations can expose data in ways that are not immediately obvious. APIs may share more than expected. Third-party tools can create vulnerabilities. Automatic syncing can lead to unintended leaks. Always review what data is being shared when setting up integrations, and seek approval from your team leader if client or sensitive data is involved.

Do

  • Use dummy or anonymised data when testing AI tools
  • Discuss ideas and concepts in chatbots - not specific confidential details
  • Use secure, internal tools for passwords, API keys, and sensitive information
  • Ask your team leader before using AI for client or proprietary work

Don't

  • Paste code, API keys, or private data into AI chatbots or cloud tools
  • Upload client materials or datasets without explicit permission
  • Assume a paid or private account guarantees full data privacy
  • Enable tool integrations without understanding their impact on data security

If something goes wrong

If you are unsure about using an AI tool or setting up an integration, speak to your team leader first - they can help you make the right call. If you suspect that sensitive data has been exposed, report it immediately. Acting quickly protects our work and our clients. Non-compliance with this policy, particularly where private or confidential data is involved, may be addressed under our disciplinary policy.

Frequently asked questions about AI in business

Practical answers to the questions we hear most often from business teams getting started with AI.

Do we need an AI policy if we are only experimenting?

Yes. Even if your team is only experimenting with AI tools, a policy helps set clear expectations. It ensures people understand how to use AI safely, what data should not be shared, and where human oversight is required.

Is it safe to paste business information into AI tools?

Not without caution. Many AI services process information externally. Sensitive information such as client data, internal documents, passwords, API keys, or proprietary code should never be pasted into AI tools unless the tool is specifically approved and secure.

Are paid AI tools more private than free ones?

Paid tools often offer better privacy controls, access to stronger models, and predictable pricing. However, paid does not automatically mean private. Teams should still treat external AI systems as public environments unless they have been specifically approved for confidential data.

Can we trust what AI tells us?

AI can be very helpful, but it is not always correct. Outputs can contain errors, bias, or fabricated information. All AI-generated content should be reviewed by a human before it is used in business decisions, customer communications, or production systems.

Who is responsible when AI gets something wrong?

The person using the AI remains responsible. Delegating a task to an AI tool does not remove accountability for the final result.

What are the most common AI risks for businesses?

The most common risks include sharing sensitive data, relying on incorrect outputs, introducing bias, exposing API keys or credentials, and unintentionally creating insecure integrations.

Can our developers use AI to write code?

Yes, but with safeguards. Generated code should be reviewed carefully, security scans should be run, and API keys must be stored securely. AI can speed up development but should not replace proper engineering practices.

Should staff be allowed to experiment with AI?

Yes, experimentation should be encouraged. However, teams should follow basic guardrails and have clear guidance on what data can and cannot be used.

What should we do if sensitive data is shared with an AI tool?

Report it immediately to a team leader or IT contact. Acting quickly helps reduce risk and allows the issue to be assessed properly.

Where should a business start with AI?

Start with low-risk use cases such as brainstorming, summarising information, drafting documents, or generating ideas. Avoid any use that involves confidential data until clear policies and safeguards are in place.

Glossary of AI terms

A plain-language reference guide to the terminology you will encounter when working with AI.

AI agent
A system that performs tasks using AI models and tools with limited human intervention.
AI governance
Policies and controls used to manage AI responsibly within organisations.
AI safety
Practices that ensure AI systems behave responsibly and predictably.
API (Application Programming Interface)
A method for software systems to communicate with each other.
API key
A secret credential used to authenticate access to an API. Treat it like a password and never share it.
Artificial intelligence (AI)
Computer systems designed to perform tasks that normally require human intelligence, such as analysing data, writing text, generating images, or answering questions.
Automation
Using software or AI to perform tasks automatically.
Bias
AI systems can reflect biases present in their training data, producing outputs that favour certain groups or perspectives unfairly.
Context window
The amount of information an AI model can consider at one time. Longer conversations or documents may exceed the window, causing the model to lose earlier context.
Data privacy
Protection of sensitive or personal information from unauthorised access or misuse.
Embedding
A numerical representation of text or data used for similarity comparison. Embeddings allow AI systems to understand which pieces of content are semantically related.
EU AI Act
European legislation regulating the development and use of certain AI systems, based on risk level. High-risk applications such as AI used in recruitment or decision-making face strict requirements.
Fine-tuning
Additional training applied to an existing AI model to improve its performance for specific tasks or domains.
Foundation model
A large general purpose AI model trained on broad datasets that can be adapted for many different applications.
Generative AI
AI systems that create new content such as text, images, video, audio, or code, rather than just analysing existing content.
Guardrails
Controls that limit how AI systems behave or what data they can access, helping to keep outputs within safe and appropriate boundaries.
Hallucination
When an AI system produces information that sounds convincing but is incorrect or fabricated. This is a fundamental characteristic of how language models work, not a bug that will simply be fixed.
Human in the loop
A process where humans review or approve AI outputs before they are acted on, maintaining accountability and catching errors.
Inference
The process of generating output from a trained AI model in response to an input. When you send a prompt to an AI, the model runs inference to produce a response.
Large language model (LLM)
A type of AI model trained on very large amounts of text data. It learns patterns in language and can generate text, answer questions, write code, and summarise information.
Model
The trained AI system that produces outputs based on input prompts. Different models have different capabilities, sizes, and costs.
Multimodal AI
AI systems that can process multiple types of input such as text, images, audio, and video within a single model.
Open-source model
An AI model whose code and weights are publicly available and can be run locally, customised, or fine-tuned without relying on a commercial provider.
Prompt
The instruction or question given to an AI system. The quality and structure of your prompt significantly affects the quality of the response.
Prompt engineering
The practice of writing structured prompts to guide AI tools to produce better results. Good prompts are specific, contextual, and clear about the desired output format.
Retrieval augmented generation (RAG)
A technique where an AI retrieves relevant information from documents or databases before generating a response, improving accuracy and grounding answers in real data.
Token
A small unit of text processed by AI models. Tokens may be whole words, parts of words, or punctuation. Many AI services charge based on token usage.
Training data
The information used to train an AI model. The quality, diversity, and recency of training data significantly affects what the model knows and how it behaves.
Vector database
A database used to store embeddings for fast semantic search. Used in RAG systems to retrieve relevant documents based on meaning rather than exact keyword matches.

Start your AI journey with confidence

AI is not something to fear - but it does require thoughtfulness. The organisations that will benefit most are those that experiment early, build carefully, and keep people at the centre of every decision.

If you would like to explore how Glo and Versantus can help your organisation work with AI more effectively, we would love to talk.
