About Me: My Aims, Curiosities and Motivations

I’m Hannah Kothapally. I have worked as an AI Conversation & Interaction Designer at the University of Auckland, where I designed the University’s staff-facing generative AI chatbot from end to end.

During this project, I was also introduced to AI governance and ethics, and learned that AI systems can hallucinate, introduce bias and spread misinformation if not actively monitored.

Since the release of ChatGPT in November 2022, the AI market has fundamentally changed: nearly one-third of Fortune 500 companies have since deployed enterprise AI. There has also been a massive surge in cyber scams and data leaks, which made me ask: are these AI chatbots and digital systems actively taking measures to protect their users?

Aims: Rachel Botsman said, “Trust is a confident relationship with the unknown” (Who Can You Trust?, 2017).1 My research asks: where in digital and interaction design does trust live? I propose that trust operates in layers – surface, functional and relational. My research is neither against AI nor for AI; what I want to achieve is a safe, trustworthy and ethical practical toolkit, with prototypes that embody all three trust layers. I believe that if humans are intelligent enough to create artificial intelligence, we are capable of creating safer AI and digital systems. This happens when there is empathy for the vulnerable people behind the screens, treating them as humans rather than as users or data points. Ethics must be implemented actively rather than passively, not just as a checklist.

Curiosities: Exploring the trust layers.

Surface trust: the first impression a person forms on seeing something, be it an object or a person. In design terms, this is the first layer of trust a person experiences with a digital or interactive system: does it look trustworthy? Visual appearance is often the deciding factor in whether a person wants to use a design at all.

Functional trust: this experience makes us ask, does it work as I expect it to? Is it doing what I need it to do? Does it work reliably or break? Can I depend on it to complete a task for me, or ask it to search for something on my behalf?

Relational trust: most of the time we do not think about this layer, because we do not know whether our information is being leaked or used for something else. Implementing it is the designer’s responsibility.

“Technology is a useful servant but a dangerous master.” — Christian Lous Lange, 1921 Nobel Peace Prize laureate2

Motivations: I have a few real-world motivations. I read about the Manage My Health data breach, in which approximately 125,000 users’ data was exposed. This is not a small number, and the app held a great deal of personal and sensitive information about the people using it. Many GPs who use it are part of the public system, and in some regions, such as Northland, Health New Zealand (Te Whatu Ora) has officially partnered with the app. This made me think: if a government-partnered public health app is not actively maintaining a trust relationship, what is the situation with privately owned apps? I have read that Manage My Health did not make 2FA or biometrics mandatory before the breach; biometric integration and stronger login protection measures were added only later, after the breach. This is why I believe trust operates in layers. The app looked credible and functioned well for tasks like booking appointments and accessing test results, but it failed at the relational layer. Designs like this implement ethics passively, as a checklist, without actively considering it; incidents like this show that trust and ethics are a continuous process in design, because threats to security evolve constantly.

“It is the duty of machines and those who design them to understand people. It is not our duty to understand the arbitrary, meaningless dictates of machines.”3 — Don Norman