Our Research
The NAOInstitute researches natural, artificial, and organisational intelligence and its impact on humanity's future. The Centre is transdisciplinary: it collaborates with global institutions, follows a collaborative-innovation approach, and incorporates tikanga Māori, aiming to build strong relationships with hapū for intergenerational well-being.
A mental models approach for defining explainable artificial intelligence
Published in BMC Medical Informatics and Decision Making
Read the full paper here.
Exploring Explainable AI: Insights from our Experts
The integration of artificial intelligence (AI) in areas like healthcare faces a significant hurdle: the opacity of “black-box” models. Despite their potential, these complex systems often lack the transparency needed for wider acceptance. Enter explainable AI (XAI), a concept aimed at demystifying AI operations. Yet, a solid foundation for XAI remains elusive due to the absence of a clear, universal definition of “explainable.”
Our team critically reviewed literature on AI’s role in teams, mental models in healthcare, and the evolving definitions of explainability from the perspective of AI researchers. This effort led to the formulation of a new definition of explainability, centered on the model’s context—its purpose, audience, and the language of explanation. We applied this definition to various models, illustrating its relevance and offering a framework for future research.
We found that traditional explanations fail to consider the audience’s understanding. By contextualizing explainability, we ensure evaluations align with practical objectives, facilitating clearer distinctions between technical and lay explanations. This approach promises to enhance the practical application of AI, making it more accessible and effective in real-world settings.
Is it Possible to Preserve a Language using only Data?
Joshua Bensemann, Jason Brown, Michael Witbrock & Vithya Yogarajan
Published in Cognitive Science: A Multidisciplinary Journal
Read the full paper here.
Preserving Endangered Languages with AI: A Delicate Balance Between Data and Knowledge
The race to save endangered languages from extinction has led to innovative strategies combining data collection with artificial intelligence (AI). However, this approach encounters a critical hurdle: static data may not fully capture the dynamic essence of a language, putting its comprehensive functionality at risk of being lost.
Recent interdisciplinary efforts, including specialized workshops, have spotlighted the use of AI in language preservation. Yet, the effectiveness of these AI language models, such as BERT, hinges on the quality of data, which often only offers a snapshot of linguistic complexity. This raises the question: Can AI truly grasp and preserve the depth of endangered languages?
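The "snapshot" problem can be illustrated with a toy sketch (a deliberately simple word-bigram model, not the models discussed in the paper): a model trained on a fixed corpus can only reproduce the combinations attested in that corpus, and has no way to recover valid constructions the data happens to omit.

```python
from collections import defaultdict

def train_bigrams(corpus):
    """Record which word can follow which, based only on the corpus."""
    model = defaultdict(set)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            model[a].add(b)
    return model

# A tiny "snapshot" of a language: two attested sentences.
corpus = ["the bird sings", "the child sings"]
model = train_bigrams(corpus)

# The model knows only what the snapshot contains ...
print("sings" in model["bird"])   # True: attested in the data
# ... but valid, unattested combinations are simply absent.
print("flies" in model["bird"])   # False: the data never showed it
```

Modern models such as BERT generalise far better than this sketch, but the underlying limitation is the same in kind: whatever the data does not encode, the model cannot preserve.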
The distinction between collecting data and preserving linguistic knowledge is crucial. Languages carry unique cultural and historical significance beyond mere words. As AI models learn from data, the challenge remains whether they can internalize and reproduce the nuanced rules and meanings inherent in languages, especially without direct human insight or the rich context of cultural knowledge.
This debate underscores a vital point: preserving a language goes beyond mere data collection to encompassing the deeper knowledge that defines it. Without this holistic approach, the essence of endangered languages may still slip away, despite our best technological efforts.