Is AI necessary for the prevention of plane collisions? Photo credit: Richard R. Schünemann via Unsplash
In modern times, airplanes are the easiest way to travel long distances, making it convenient to study abroad, visit family, and explore new countries. Although only around 0.0001% of flights end in a crash, crashes do still occur, and one possible cause is a mid-air collision. Currently, humans prevent collisions by manually detecting and correcting for them. But what if Artificial Intelligence could supplement the role of human controllers by preventing plane collisions? And, if such a system existed, would it be effective, or would the ethical drawbacks be sufficient to negate its use?
Firstly, it is essential to establish how plane collisions could occur. Air Traffic Control (ATC), staffed by Air Traffic Control Officers (ATCOs), determines plane trajectories to ensure smooth traffic flow in the skies. According to the Federal Aviation Administration, an ATCO’s priority is keeping aircraft a safe distance apart whilst ensuring that routes and airspace use are optimised.
To do so, ATCOs must prevent any situation involving a potential conflict. Aircraft are required to be a specific distance apart, both vertically and horizontally. By ICAO standards, the vertical separation minimum is 1,000 or 2,000 feet, depending on the altitude of the aircraft, and the horizontal minimum is 5 nautical miles, or 9.26 kilometres. When this minimum distance is breached, a situation known as “loss of separation” occurs, and ATCOs must correct it by adjusting the level, speed, or direction of one of the planes. ATCOs are, however, human, and thus inevitably present the potential for human error. A lapse in concentration, a high number of aircraft in the vicinity, or understaffing can all increase that potential, and such situations are only set to become more common as our skies get busier.
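As a rough illustration of the separation rule described above, the check can be sketched in a few lines of Python. This is a toy sketch under simplifying assumptions (a single fixed vertical minimum, straight-line horizontal distance already computed); real ATC separation logic depends on airspace, altitude band, and surveillance capability.

```python
# Toy sketch of the ICAO separation minima described above.
# Assumed simplification: one fixed vertical minimum; in reality the
# minimum is 1,000 or 2,000 ft depending on altitude and RVSM status.

VERTICAL_MIN_FT = 1000
HORIZONTAL_MIN_NM = 5.0   # roughly 9.26 km

def loss_of_separation(alt1_ft, alt2_ft, horizontal_nm):
    """Return True if neither the vertical nor the horizontal minimum holds."""
    vertically_separated = abs(alt1_ft - alt2_ft) >= VERTICAL_MIN_FT
    horizontally_separated = horizontal_nm >= HORIZONTAL_MIN_NM
    # Separation is lost only when BOTH minima are breached at once:
    # aircraft may legally be close laterally if they are stacked vertically.
    return not (vertically_separated or horizontally_separated)

print(loss_of_separation(34000, 34500, 3.0))   # True: 500 ft and 3 NM apart
print(loss_of_separation(34000, 36000, 3.0))   # False: vertically separated
```

Note the design point the rule encodes: either form of separation alone is sufficient, which is why two aircraft crossing directly above one another is routine rather than a conflict.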
This is where AI steps in. Researchers within the Single European Sky ATM Research (SESAR) programme proposed an automated conflict resolution strategy for ATC, built on a mathematical model for machine learning. It uses two AI techniques: decision trees and random forests. A decision tree takes an input and finds an output by following branches chosen according to the input's values.
Random forests then aggregate the outputs of many decision trees to reduce the likelihood of overfitting, a situation where the algorithm fits its training data too closely and fails to generalise to new scenarios.
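To make the two techniques concrete, here is a deliberately tiny, hand-written illustration (not the SESAR model): each "tree" is a chain of branching tests on two assumed features, separation in nautical miles and closing speed in knots, and the "forest" simply takes a majority vote over the trees. A real random forest learns hundreds of such trees automatically from randomised subsets of the training data.

```python
from collections import Counter

# Three hand-written toy "decision trees". Each branches on the input
# values and votes on whether a conflict needs a resolution manoeuvre.
def tree_a(sep_nm, closing_kts):
    return "resolve" if sep_nm < 6 else "monitor"

def tree_b(sep_nm, closing_kts):
    if closing_kts > 400:          # fast closure: act even at wider spacing
        return "resolve"
    return "resolve" if sep_nm < 5 else "monitor"

def tree_c(sep_nm, closing_kts):
    return "resolve" if sep_nm < 7 and closing_kts > 100 else "monitor"

def forest_predict(sep_nm, closing_kts):
    # A random forest aggregates its trees: here, a simple majority vote.
    votes = Counter(t(sep_nm, closing_kts) for t in (tree_a, tree_b, tree_c))
    return votes.most_common(1)[0][0]

print(forest_predict(5.5, 450))   # "resolve": all three trees agree
print(forest_predict(8.0, 50))    # "monitor": no tree sees a conflict
```

The aggregation is the point: any single hand-tuned tree is brittle, but disagreements between trees tend to cancel out in the vote, which is what tempers overfitting.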
Various features of the conflict scenario are used as input in a tuple, an ordered list of elements. These include the initial heading of both planes, the conflict angle, the number of neighbouring aircraft, and the distance of each plane to the expected collision point. These conflict features are then used to produce an output, also a tuple: which aircraft to manoeuvre, when to initiate the manoeuvre, which direction and how far to deviate, and where to merge the plane back onto its original path. The model was trained by presenting two separate controllers with conflicts in a simulated environment and recording their chosen trajectories, allowing it to learn the patterns behind each manoeuvre.
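The input/output structure above can be sketched as typed tuples. The field names below are hypothetical labels for the features the article lists; the actual SESAR feature set and units are richer than shown here.

```python
from typing import NamedTuple

# Hypothetical field names sketching the model's input and output tuples.
class ConflictFeatures(NamedTuple):
    heading_a_deg: float     # initial heading of each aircraft
    heading_b_deg: float
    conflict_angle_deg: float
    neighbours: int          # number of neighbouring aircraft
    dist_a_nm: float         # distance of each aircraft to the
    dist_b_nm: float         # expected collision point

class Resolution(NamedTuple):
    aircraft: str            # which aircraft to manoeuvre
    start_s: float           # when to initiate the manoeuvre
    turn_deg: float          # direction and size of the deviation
    merge_nm: float          # where to rejoin the original path

features = ConflictFeatures(90.0, 180.0, 90.0, 2, 12.0, 14.0)
plan = Resolution(aircraft="A", start_s=30.0, turn_deg=-15.0, merge_nm=20.0)
print(plan.aircraft, plan.turn_deg)
```

Framing the problem as a fixed-shape tuple-to-tuple mapping is what lets standard supervised learners such as random forests be applied at all: every recorded controller decision becomes one (features, resolution) training pair.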
This model would help reduce stress on controllers by assisting them in conflict resolution decisions, but will it actively replace them? Based on research by the International Monetary Fund, Air Traffic Control falls into a job category with “High Exposure, High Complementarity”. In other words, controllers can be augmented by AI but not replaced by it. This is because of the high stakes of an ATC job, where mistakes could cost hundreds of lives, meaning controllers must actively monitor the AI in case of equipment or software failure. Thus, the role of the controller would not become obsolete.
Additionally, many doubt AI’s ability to make decisions aligned with human morals, a sentiment previously raised in discussions of self-driving cars. How can we get software to act morally? For this, it is essential to draw on thought experiments, which allow us to understand the motivations behind human morality. One such experiment is the “Moral Machine”, created by researchers at MIT. It presented users with scenarios in which a self-driving car's crash was unavoidable and prompted them to decide which of two groups the car should crash into. From the responses collected, a picture of human moral preferences was established for the self-driving car. When developing AI for ATC, a similar method could be used, allowing ATCOs to choose which of two planes to save in a collision scenario. By aligning its operation with human morality in this way, a moral framework can be created for the AI.
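One crude way to turn such collected choices into a preference for the system is a simple tally over responses, sketched below. The scenario labels and response format are entirely hypothetical, and a real moral framework would need far more than majority voting.

```python
from collections import Counter

# Hypothetical Moral-Machine-style responses: for each presented dilemma,
# an ATCO sees two aircraft (first two fields) and picks which to save
# (third field). The labels are illustrative only.
responses = [
    ("passenger_jet", "cargo_plane", "passenger_jet"),
    ("passenger_jet", "cargo_plane", "passenger_jet"),
    ("passenger_jet", "cargo_plane", "cargo_plane"),
]

# Tally the choices to estimate an aggregate preference.
tally = Counter(choice for _, _, choice in responses)
preferred = tally.most_common(1)[0][0]
print(preferred, dict(tally))   # passenger_jet preferred, 2 votes to 1
```

Even this toy version exposes the hard part: majority preference is not the same thing as a defensible ethical rule, which is why the article treats the moral framework as an open barrier rather than a solved step.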
Another issue whenever AI is discussed is human trust: do humans trust AI to make such crucial decisions where human lives are at stake? Research conducted by the Pew Research Center in the USA shows that 52% of survey respondents feel more concerned about AI than excited, a pattern mirrored across the globe and in all major demographic groups. Building trust around any technology is vital, but the public is often reluctant; the modern elevator once faced similar distrust. Taking a leaf out of the elevator’s book, to ensure the general public trusts AI, it is essential to market it positively and remain transparent about both how the AI operates and the benefits it brings to the consumer. To ensure that pilots and controllers trust the technology, courses and educational training should be held to teach them how the AI works, helping to dispel any mystery around it.
So, is AI collision prevention a viable idea? The system would bring many benefits: improved robustness and safety in the aviation industry through the reduction of human error, lower stress levels for controllers, and increased job satisfaction. Nonetheless, many barriers must be overcome before implementation, including the design of ethical and moral frameworks for the system and the building of trust in it among both aviation professionals and the public. Once these barriers are overcome, artificial intelligence can be implemented to make the skies a safer place.