Visit of Professor Saman K. Halgamuge

Prof. Saman Halgamuge, Fellow of IEEE, IET, AAIA and NASSL will visit AI-MAS Laboratory on Monday 30 June. He will also deliver a talk on “Impact of Explainable AI in Sciences, Energy and Medicine: Large Language Models, Graph Neural Networks and Physics Informed Neural Networks”.
Brief Bio of Prof. Saman Halgamuge
Prof. Saman Halgamuge, Fellow of IEEE, IET, AAIA and NASSL, received the B.Sc. Engineering degree in Electronics and Telecommunication from the University of Moratuwa, Sri Lanka, and the Dipl.-Ing. and Ph.D. degrees in data engineering from the Technical University of Darmstadt, Germany. He is currently a Professor in the Department of Mechanical Engineering of the School of Electrical, Mechanical and Infrastructure Engineering at The University of Melbourne. He is listed among the top 2% most cited researchers for AI and Image Processing in the Stanford database. He is a Distinguished Visitor of the IEEE Computer Society (2024-26) and was a Distinguished Lecturer of the IEEE Computational Intelligence Society (2018-21). He has supervised 50 PhD students and 16 postdocs in AI and its applications in Australia to completion. His publications can be viewed at https://scholar.google.com.au/citations?hl=en&user=9cafqywAAAAJ&pagesize=80&view_op=list_works&sortby=pubdate
Impact of Explainable AI in Sciences, Energy and Medicine: Large Language Models, Graph Neural Networks and Physics Informed Neural Networks
Artificial intelligence (AI) has seen explosive growth over the last 20 years, largely through advances in data-driven machine learning (ML). ML research now contributes to almost every discipline thanks to the advent of new directions including Large Language Models (LLMs), Graph Neural Networks and Physics Informed Neural Networks (PINNs). However, AI/ML models learn from training data in ways that can be unfathomable even to experts, making their decisions difficult to interpret and forfeiting the opportunity for effective human-AI collaboration. When such models learn from training data that may harbor dangerous prejudices (an inevitable but often concealed presence in our lives), intelligence hidden in a maze of mathematical constructs poses serious ethical, safety and security challenges for stakeholders from government to industry. Explainable AI (XAI) helps reduce these challenges by promising transparency and interpretability, enabling stakeholders to gain a greater understanding of an AI model's decisions. Such explanations may use one or more modalities, e.g., textual or mathematical.
I will motivate my talk using multiple examples of AI- and XAI-mediated progress across multiple disciplines. I will then introduce examples of ongoing research from my AI group that can potentially transform some of these disciplines.
Recent Open Access Publications Relevant to this Talk
Our recent open access publications relevant to this talk include:
- Senanayake, Damith and Wang, W. and Naik, S.H. and Halgamuge, Saman, "Self Organizing Nebulous Growths for Robust and Incremental Data Visualization", IEEE Transactions on Neural Networks and Learning Systems, DOI: 10.1109/TNNLS.2020.3023941, IEEE, 2021.
- Perera, Maneesha and De Hoog, Julian and Bandara, Kasun and Senanayake, Damith and Halgamuge, Saman, "Day-ahead regional solar power forecasting with hierarchical temporal convolutional neural networks using historical power generation and weather data", Applied Energy, DOI: 10.1016/j.apenergy.2024.122971, Elsevier, 2024.
- Malepathirana, Tamasha A and Senanayake, Damith and Gautam, Vini and Engel, Martin … Halgamuge, Saman, "Visualization of Incrementally Learned Projection Trajectories for Longitudinal Data", Scientific Reports, DOI: 10.1038/s41598-024-63511-z, Nature Group, 2024.
- Perera, Rashindrie and Savas, Peter and Senanayake, Damith and Salgado, Roberto and Joensuu, Heikki and O'Toole, Sandra and Li, Jason and Loi, Sherene and Halgamuge, Saman, "Annotation-efficient deep learning for breast cancer whole-slide image classification using Tumor Infiltrating Lymphocytes and slide-level labels", Communications Engineering, DOI: 10.1038/s44172-024-00246-9, Nature Group, 2024.
- Ranasinghe, Nisal and Senanayake, Damith and Seneviratne, Sachith and Premaratne, Malin and Halgamuge, Saman, "GINN-LP: A Growing Interpretable Neural Network for Discovering Multivariate Laurent Polynomial Equations", The 38th Annual AAAI Conference on Artificial Intelligence, arXiv preprint arXiv:2312.10913, 2024.
- Ranasinghe, Nisal and Xia, Yu and Seneviratne, Sachith and Halgamuge, Saman, "GINN-KAN: Interpretability pipelining with applications in physics informed neural networks", arXiv preprint arXiv:2408.14780, 2024.