Artificial Intelligence

Post 1: Fears of AI

In our first class of COMP813, we explored what people fear about Artificial Intelligence. The fears are real and widespread, but are they all justified?

The most common fear is job displacement. People worry AI will replace their roles entirely. While AI is automating repetitive tasks, it is also creating new roles that didn’t exist before: AI trainers, prompt engineers, ethics auditors. The shift is real, but it’s more of a transformation than an elimination.

The second fear is bias. AI systems learn from data, and if that data is biased, the AI reproduces and amplifies that bias. This has already caused real harm — biased hiring algorithms, discriminatory loan approvals, facial recognition that fails on darker skin tones. The problem isn’t the AI. It’s the data and the people who chose that data.

The third fear is privacy. AI systems collect vast amounts of personal information — location, browsing history, health data, conversations. People fear they don’t know what’s being collected, who has access, or how it’s being used. This fear is, in my opinion, the most justified, especially after real-world events like the Manage My Health breach in New Zealand.

The fourth fear is loss of control. Will AI become so advanced that humans can no longer understand or manage it? This is the science fiction fear — but researchers are genuinely working on AI alignment and explainability to ensure humans remain in control.

What I took from this class is that the fears aren’t really about AI. They’re about trust. Do we trust the people building these systems to do it responsibly? That’s exactly what my research is about.

My research sits at the intersection of these technologies and the people who use them. As AI systems become more capable, evolving from simple rule-based chatbots to large language models like ChatGPT and Claude, the question of trust becomes more urgent, not less. The smarter the system, the harder it is for users to understand what’s happening behind the interface. And that’s where design and ethics come in.

How AI Reduces Costs Through Search

This week in COMP813 we looked at how AI is being used to reduce costs through smarter search systems. Traditional search is manual, slow, and expensive. AI changes that fundamentally.

Before AI, searching through large datasets meant hiring people to sort, categorise, and retrieve information. A legal firm reviewing thousands of documents for a case might need a team working for weeks. A hospital searching patient records relied on keyword matching that missed context. A business analysing customer feedback had to read every response manually.

AI-powered search changes this in several ways. Natural Language Processing (NLP) allows systems to understand the meaning behind a query, not just match keywords. If you search “heart problems in elderly patients,” AI understands you want cardiac conditions in older adults — not just pages containing those exact words.
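The idea behind this kind of semantic matching can be sketched in a few lines. Below is a minimal toy illustration, not a real NLP system: the embedding vectors are hand-assigned stand-ins for what a language model would actually produce, and the phrases are hypothetical. The point is that ranking happens by vector similarity, so a document can match a query it shares no keywords with.

```python
from math import sqrt

# Toy, hand-assigned "embeddings" standing in for vectors a real
# language model would compute. Semantically related phrases get
# nearby vectors even when they share no keywords.
EMBEDDINGS = {
    "heart problems in elderly patients": [0.90, 0.80, 0.10],
    "cardiac conditions in older adults": [0.88, 0.82, 0.12],
    "knee injuries in young athletes":    [0.10, 0.20, 0.90],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm

def semantic_search(query, documents):
    """Rank documents by embedding similarity, not keyword overlap."""
    q = EMBEDDINGS[query]
    return sorted(documents, key=lambda d: cosine(q, EMBEDDINGS[d]), reverse=True)

docs = ["knee injuries in young athletes", "cardiac conditions in older adults"]
top = semantic_search("heart problems in elderly patients", docs)[0]
print(top)  # the cardiac document ranks first despite sharing no keywords
```

A production system would compute these vectors with a trained model and search millions of documents with an approximate nearest-neighbour index, but the ranking principle is the same.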

Machine learning improves search over time. The system learns from what users click, what they ignore, and what they search for next. Every interaction makes the search smarter without any additional human effort.
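That feedback loop can be sketched very simply. The example below is a hypothetical illustration, not any particular product's algorithm: it just counts clicks per query-document pair and re-ranks results so frequently clicked documents rise, which is the simplest form of learning from user interaction.

```python
from collections import defaultdict

# Hypothetical click log: each interaction nudges a document's score.
click_counts = defaultdict(int)

def record_click(query, doc):
    """Log that a user clicked `doc` after searching `query`."""
    click_counts[(query, doc)] += 1

def rerank(query, docs):
    """Order results by accumulated click feedback, most clicked first."""
    return sorted(docs, key=lambda d: click_counts[(query, d)], reverse=True)

docs = ["doc_a", "doc_b", "doc_c"]
for _ in range(3):
    record_click("annual leave policy", "doc_b")  # users keep picking doc_b
record_click("annual leave policy", "doc_c")

print(rerank("annual leave policy", docs))  # doc_b rises to the top
```

Real systems use far richer signals (dwell time, skips, follow-up queries) and guard against feedback loops, but every click still reshapes the ranking without any extra human curation effort.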

For businesses, this translates directly to cost reduction. Fewer staff hours spent searching. Faster decision-making. Less duplication of work. McKinsey estimates that AI-powered knowledge management can reduce information search time by up to 35% in large organisations.

But there’s a trade-off. To make search smarter, AI needs data — your data. Your search history, your preferences, your behaviour patterns. The more it knows about you, the better it performs. This raises the same question I keep returning to in my research: where is the line between helpful and invasive? When does a smart search become surveillance?
