BCIT Industrial Network Cybersecurity program

When generative artificial intelligence (generative AI) tools like Copilot and ChatGPT are used thoughtfully and responsibly, they can enhance productivity and supplement learning. However, it is important to remember that these tools, while useful, do not think the way people do. It is essential for users of generative AI to apply critical thinking to all output from these tools.

My experiment with generative AI

I ran a small experiment with Copilot that showed how generative AI can sound confident and helpful while quietly making assumptions. It underscores why knowing how these tools work matters just as much as knowing how to use them.

The experiment wasn’t adversarial or meant to be tricky.

I asked a question using an acronym I knew didn’t exist.

I said I’d heard that “AT-LLM is useful for thinking about learning and trust in generative AI systems,” and waited to see what would happen.

Copilot answered confidently.

It expanded “AT-LLM” into “Adaptive Trust Large Language Models.” It explained how it connected to learning, adaptation, and trust. It sounded reasonable. Professional. Convincing.

There was only one problem.

It never checked whether the acronym meant what I meant. As far as I could tell, “AT-LLM” didn’t exist at all.

So I changed the prompt.

“By the way, AT-LLM actually stands for Adaptive Tokenization for LLMs,” I typed.

Copilot pivoted instantly. The explanation changed. The confidence stayed the same. The earlier assumption disappeared without comment.

At no point did it ask, “That acronym could mean several things — which one do you mean?”

That’s not a flaw.

It’s how these systems work.

Understanding how generative AI tools work

Copilot wasn’t trying to mislead me. It’s optimized to keep moving, sound helpful, and avoid slowing the conversation down. Asking clarifying questions introduces friction — and friction is something these tools are trained to avoid.

The result is answers that sound right, even when they’re built on shaky assumptions.

Generative AI systems don’t remember when an assumption cost you time. They don’t learn from your last mistake. Every interaction starts fresh.

If something matters — a definition, a constraint, a requirement — you have to say so explicitly.

Most problems arise not because the tool is wrong, but because it behaves exactly as designed.

What generative AI won’t do unless you explicitly ask

  • Check whether information is current.
  • Tell you when a cited source doesn’t exist.

What generative AI won’t do even if you ask

  • Notice when a wrong assumption cost you time.
  • Verify facts against restricted or paywalled sites.

Tips for using generative AI thoughtfully and critically

  • Treat outputs as drafts, not final truth – Don’t assume what generative AI says is automatically correct, even if it sounds confident.
  • Check your assumptions – Make sure you clearly define and spell out terms, constraints, or requirements; generative AI may not do this for you reliably and accurately.
  • Verify information – Confirm facts, sources, and formulas. Generative AI will not fact-check for you.

The lesson is simple but crucial

Copilot didn’t fail my test. It passed a different one. It showed how easy it is to confuse confidence with understanding.

Confident answers are attractive because they reduce uncertainty — especially under time pressure — but confidence isn’t the same as correctness. Generative AI can accelerate learning and manage repetitive tasks, but it doesn’t think, reason, or question like humans do. That’s why critical thinking remains an important tool in your toolkit.

Use generative AI to amplify your work, not replace your judgment. Ask questions, verify answers, and define your terms clearly. When you combine human insight with the speed and breadth of generative AI, you don’t just get answers; you get understanding.

In the end, generative AI works best not when it does our thinking for us, but when it works alongside us.

BCIT Industrial Network Cybersecurity program: Get the in-demand skills to safeguard industrial, manufacturing, and critical infrastructure networks from cyberthreats.


This article is written by Roger Gale, Faculty, BCIT Industrial Network Cybersecurity program. Roger brings over 30 years of teaching experience in Computer and Communications Technology, with a focus on their application in business. He specializes in industrial network cybersecurity, applying security and cybersecurity principles to protect critical industrial systems. Roger’s excellence in teaching has been recognized by the Cisco Networking Academy.