Artificial intelligence (AI) and machine learning (ML) are now ubiquitous. They dominate current research in computer science, with 30% of all research papers currently relating to AI/ML, and are driving job growth. Associate Professor Roopak Sinha, head of AUT’s Computer Science and IT & Software Engineering Departments, says potential future employment is a huge factor in increased enrolments in computer science and software engineering at AUT.
“We have seen massive year on year growth in jobs looking for expertise in machine learning – more than 300% growth each year for the last five years,” says Associate Professor Sinha. “At AUT we are focused on equipping our students for that future, and on promoting research in AI in a variety of fields.”
Four of the 12 technology trends identified by the Institute of Electrical and Electronics Engineers (IEEE) for 2021 are directly related to AI and machine learning.
One area where AI can have huge implications is in mental health, helping with earlier prognosis and diagnosis. Researchers at AUT recently secured NZ$2.1 million to investigate mental health diagnosis under the 2020 New Zealand-Singapore Data Science Research Programme, funded by the New Zealand Ministry of Business, Innovation and Employment (MBIE) and the Singapore Data Science Consortium (SDSC). The project involves researchers from AUT, the University of Auckland, the National University of Singapore (NUS), Nanyang Technological University (NTU) and the Singapore Institute for Mental Health (IMH).
The research, Computational neuro-genetic modelling for diagnosis and prognosis in mental health, will lead to the development of a new machine-learning/AI platform for multimodal data modelling, enabling better clinical intervention through early prognosis and diagnosis of mental health issues, including schizophrenia, in at-risk youth. It also includes the development of personalised modelling for a better understanding of the individual factors that trigger mental illnesses.
The project is led by AUT’s Dr Maryam Doborjeh, a young, emerging researcher and lecturer at AUT, with Professor Nikola Kasabov, who was her PhD supervisor, as the Science Leader. Professor Kasabov says bringing young researchers into lead roles helps to build a new generation of researchers who will lead and bring new perspectives to the field of data science in the future. The project also involves Professor Edmund Lai from AUT’s SECMS, Dr Margaret Hinepo Williams (Public and Māori Health Research Lead at AUT) and Dr Zohreh Doborjeh, a PhD graduate from AUT and now a postdoc at the University of Auckland.
“Mental illness, depression and depression-linked suicide are huge problems in both New Zealand and Singapore. Late diagnosis is the thing we can avoid with intelligent predictive computational models. The hospital and the cemetery are full of people who could have been helped earlier,” says Professor Kasabov. “We are hoping the neuro-genetic modelling research will lead to the development of new AI-based predictive analytics for early diagnosis of mental health issues in at-risk youth that can ultimately support psychological wellbeing practitioners to plan better clinical interventions,” says Dr Doborjeh.
Creating ethical autonomous systems is one of the key challenges of AI, working to ensure these powerful systems are created using unbiased data, to operate in an ethical way. Can autonomous systems develop their own ethical systems even if their purpose is warfare?
“Unfortunately for many lay people, the idea of autonomous systems and AI brings to mind science fiction horrors like Skynet [from the Terminator movies],” laughs Associate Professor Sinha. “Ethics in AI is one of the key areas of research in AI, with Professor Ajit Narayanan in particular doing significant work in the space of autonomous robots and ethics in a variety of settings.”
Professor Narayanan’s recent research looks at the ethics of autonomous systems, from self-driving vehicles to unmanned ground and air vehicles in a military setting.
“Lethal autonomous robots (LARs) present an unusual case for machine ethics, as their main aim is not necessarily to ensure they treat us well when it comes to a warfare setting,” says Professor Narayanan.
“LARs are designed to be used in warfare – their purpose is to perform lethal actions and therefore harm humans. A rigid, rule-based response can over-constrain them in dynamic situations that require flexible responses. The best approach is a fuzzy logic response within the bounds of the Laws of War and Rules of Engagement.”
Professor Narayanan’s research shows that using two ethics learning modules, one producing deontological output and one producing consequentialist output, within the framework of the four war principles of military necessity, humanity, proportionality and discrimination, leads to ethical learning output for ‘moral machines’.
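To make the idea concrete, the combination of a hard deontological constraint with a graded, fuzzy consequentialist score could be sketched as below. This is a minimal illustration only: the function names, the minimum t-norm aggregation, the 0.5 threshold and the score values are all assumptions for the sake of the example, not Professor Narayanan’s actual model.

```python
# Hypothetical sketch of a fuzzy-logic ethics gate for an action proposed
# by an autonomous system, scored against the four war principles.
# All names and thresholds are illustrative assumptions.

def permissibility(necessity, humanity, proportionality, discrimination):
    """Fuzzy AND (minimum t-norm) over the four principle scores, each in [0, 1]."""
    return min(necessity, humanity, proportionality, discrimination)

def evaluate_action(scores, threshold=0.5):
    """Deontological hard constraint, then a consequentialist fuzzy degree.

    scores: dict mapping each principle to a compliance degree in [0, 1].
    Returns a (verdict, degree) pair.
    """
    # Deontological module: any principle scored at zero forbids the action outright.
    if min(scores.values()) == 0.0:
        return "forbidden", 0.0
    # Consequentialist module: aggregate the graded degrees of compliance.
    degree = permissibility(scores["necessity"], scores["humanity"],
                            scores["proportionality"], scores["discrimination"])
    verdict = "permitted" if degree >= threshold else "refer_to_human"
    return verdict, degree

# Example: high military necessity, but poor discrimination near civilians.
scores = {"necessity": 0.9, "humanity": 0.7,
          "proportionality": 0.6, "discrimination": 0.4}
print(evaluate_action(scores))  # ('refer_to_human', 0.4)
```

The point of the fuzzy aggregation is that, unlike a crisp rule set, it yields a degree of permissibility, so borderline cases can be escalated to a human operator rather than forced into a yes/no decision.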
“Where there is capacity for harm to humans – from driverless vehicles to warfare – the ethical considerations of AI must be central in their development,” says Professor Narayanan.
“Access to and opportunities to work with outstanding researchers like Professors Kasabov and Narayanan and Dr Doborjeh are a central aspect of what our Department offers,” says Associate Professor Sinha. “I look forward to seeing where our staff and students take AI and machine learning.”