IIT Mandi Develops AI Algorithm to Improve Accuracy of Landslide Prediction
The algorithm has been tested on landslides and can also be applied to other natural phenomena such as floods, avalanches, extreme weather events, rock glaciers and permafrost
Indian Institute of Technology (IIT) Mandi researchers have developed a new algorithm using Artificial Intelligence and Machine Learning (AI/ML) that could improve the accuracy of natural-hazard prediction. The algorithm, developed by Dr. Dericks Praise Shukla, Associate Professor, School of Civil and Environmental Engineering, IIT Mandi, and Dr. Sharad Kumar Gupta, a former research scholar at IIT Mandi now working at Tel Aviv University (Israel), tackles the challenge of data imbalance in landslide susceptibility mapping, which represents the likelihood of landslide occurrence in a given area. The results of their work were recently published in the journal Landslides.
Landslide Susceptibility Mapping (LSM) indicates the likelihood of a landslide occurring in a specific area based on causative factors such as slope, elevation, geology, soil type, distance from faults and rivers, and historical landslide data.
The use of AI is becoming increasingly vital for the prediction of natural disasters such as landslides. AI systems can potentially predict extreme events, create hazard maps, detect events in real time, provide situational awareness, and support decision-making. ML is a subfield of AI that enables computers to learn and improve from experience without being explicitly programmed. It is based on algorithms that can analyse data, identify patterns, and make predictions or decisions, much like human intelligence.
ML algorithms, however, require large amounts of training data for accurate prediction. In the case of LSM, this data consists of the causative factors of landslides mentioned earlier, together with historical landslide data. But landslides are rare occurrences in any given area, so extensive training data is often unavailable, which hinders the performance of ML algorithms. For a given area, landslide points (treated as positives) are far fewer than non-landslide points (treated as negatives), creating an imbalance between the two classes that degrades prediction.
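To see why this imbalance matters, consider a minimal sketch (the numbers below are hypothetical, not from the study): when only a tiny fraction of map cells are landslide points, a model that simply predicts "no landslide" everywhere scores very high accuracy while detecting nothing.

```python
import numpy as np

# Hypothetical illustration: in a mapped area, landslide (positive) cells
# are vastly outnumbered by non-landslide (negative) cells.
rng = np.random.default_rng(0)
n_cells = 10_000
y = np.zeros(n_cells, dtype=int)
y[rng.choice(n_cells, size=200, replace=False)] = 1  # 2% positives

# A trivial model that always predicts "no landslide" still scores 98%
# accuracy, yet it never flags a single landslide-prone cell.
naive_pred = np.zeros(n_cells, dtype=int)
accuracy = (naive_pred == y).mean()
recall = naive_pred[y == 1].mean()  # fraction of true landslides detected
print(f"accuracy = {accuracy:.2%}, landslide recall = {recall:.0%}")
# → accuracy = 98.00%, landslide recall = 0%
```

This is why raw accuracy is misleading on imbalanced hazard data, and why the training set itself must be rebalanced.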
Dr Shukla’s team has developed a new ML algorithm that overcomes this data imbalance during training. It uses two under-sampling techniques, EasyEnsemble and BalanceCascade, to address imbalanced data in landslide mapping: EasyEnsemble trains several learners on independently under-sampled, balanced subsets of the data, while BalanceCascade removes correctly classified majority-class samples between successive rounds.
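The EasyEnsemble idea can be sketched in a few lines. This is an illustrative simplification, not the authors' published code: each round draws a balanced subset by randomly under-sampling the abundant non-landslide class to match the scarce landslide class, trains one learner on it (a plain logistic regression here, standing in for whatever base learner is used), and the ensemble averages the learners' predicted probabilities.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def easy_ensemble_fit(X, y, n_subsets=10, seed=0):
    """Train one learner per balanced, under-sampled subset (EasyEnsemble sketch)."""
    rng = np.random.default_rng(seed)
    pos_idx = np.flatnonzero(y == 1)   # scarce landslide points
    neg_idx = np.flatnonzero(y == 0)   # abundant non-landslide points
    models = []
    for _ in range(n_subsets):
        # Under-sample negatives to the size of the positive class.
        sampled_neg = rng.choice(neg_idx, size=len(pos_idx), replace=False)
        idx = np.concatenate([pos_idx, sampled_neg])
        models.append(LogisticRegression().fit(X[idx], y[idx]))
    return models

def easy_ensemble_predict_proba(models, X):
    """Average the landslide probability over all balanced learners."""
    return np.mean([m.predict_proba(X)[:, 1] for m in models], axis=0)
```

Because every subset is balanced, no single learner can win by always predicting "no landslide", yet averaging many subsets still lets the ensemble see most of the majority-class data.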
Data about the landslides that occurred in the Mandakini River Basin in northwest Himalaya, Uttarakhand, India, between 2004 and 2017 were used to train and validate the model. The results showed that the algorithm significantly improved the accuracy of the LSM, particularly when compared to traditional machine learning techniques such as Support Vector Machine and Artificial Neural Network.
Dr. D.P. Shukla, Associate Professor, School of Civil and Environmental Engineering, said, “This new ML algorithm highlights the importance of data balancing in ML models and demonstrates the potential for new technologies to drive significant advancements in the field. It also underscores the critical need for large amounts of data to accurately train ML models, particularly in the case of geohazards and natural disasters where the stakes are high and human safety is at risk.”
Dr Shukla believes that this study opens up new avenues in the field of LSM and other geohazard mapping and management. It can be applied to other phenomena such as floods, avalanches, extreme weather events, rock glaciers, and permafrost, helping to minimize the risks posed to human safety and property.