21 October 2019 | 15:13 Asia/Singapore

In pursuit of robust AI technologies

Machine learning algorithms for image recognition often have trouble distinguishing between the targeted object and adversarial examples.

By Assistant Professor Jonathan Scarlett

The integration of artificial intelligence (AI) into everyday technologies has become a trend that increasingly defines how consumers view and interact with the latest devices. Today, consumers eagerly anticipate devices that can perform tasks for them in an automated manner. Examples include camera phones that can accurately detect the subject of a photo and set the optimal camera settings, ‘smart home’ devices that control lighting or temperature depending on the resident’s location or daily routine, and even more ambitious technologies such as self-driving vehicles. Such technologies not only feature in our day-to-day lives, but are now also deployed throughout industries. As such, societies place their trust in the ability of these AI-based technologies to make the right decisions in what are often highly dynamic situations and environments.

At the heart of these AI-based technologies are ‘machine learning’ algorithms that effectively learn to perform a task, be it a prediction or a decision, based on data acquired over time, or provided by developers. In these instances, the devices are not explicitly programmed to carry out the task per se, or to meet a defined outcome, but rather the task is learned automatically from data. For instance, machine learning algorithms have demonstrated super-human performance in the detection of objects within images, and can automatically generate photo-realistic images and natural-sounding speech. It is no exaggeration to say that recent advances in machine learning have been extraordinary, and have led to a renaissance in the field with seemingly endless applications. 
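As a toy illustration of what ‘learning from data’ rather than explicit programming means, consider the minimal sketch below. The data, model, and numbers are made up purely for illustration; real systems use far richer models, but the principle is the same: the model’s parameters are fitted automatically from examples rather than hand-coded.

    import numpy as np

    # Made-up "training data": example inputs and the outputs we would like
    # to predict. No rule relating them is ever written down by a programmer.
    X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])
    y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

    # "Learning" here is just least-squares fitting: the parameters are chosen
    # automatically so that the model best explains the examples it has seen.
    X_aug = np.hstack([X, np.ones((len(X), 1))])      # add an intercept column
    params, *_ = np.linalg.lstsq(X_aug, y, rcond=None)

    # The fitted model can now make predictions for inputs it has never seen.
    x_new = np.array([6.0, 1.0])
    print("predicted output for input 6.0:", x_new @ params)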

Alongside these advances, however, many new challenges have emerged. In many applications, a key roadblock to unlocking the full potential of modern machine learning is the concept of robustness. Learning algorithms are often in fact quite brittle, and are prone to failure when faced with non-idealised settings. For instance, the data available to a learning algorithm may be corrupted, out-of-date, or simply too scarce for modern techniques that rely on the availability of large data sets. Furthermore, in areas such as robotics, learning is often done in an artificial, simulated environment, and the performance may degrade considerably when transferred to a real environment. 

Importantly, these limitations are not merely theoretical; they have direct consequences for the application of machine learning algorithms. Consider, for example, an unusual yet intriguing phenomenon that is particularly pertinent to computer vision applications, known as adversarial examples. Here, carefully crafted perturbations of an image, which are innocuous or even imperceptible to a human, can fool a machine learning algorithm into thinking that the image contains something completely different to what is actually there. To appreciate the implications of this phenomenon, one only needs to consider the impact of a self-driving car mistaking a stop sign for a speed limit sign.
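To make this concrete, here is a minimal sketch of the classic ‘fast gradient sign’ recipe for crafting such a perturbation, using a toy linear classifier in place of a real image model; the names, numbers, and the choice of attack are illustrative assumptions rather than anything taken from this research. The key point is that, in high dimensions, thousands of individually imperceptible pixel changes can add up to flip a prediction.

    import numpy as np

    # A toy linear classifier standing in for an image classifier:
    # predicted label = sign(w . x + b). The same recipe applies to deep
    # networks, with the gradient obtained by back-propagation instead.
    rng = np.random.default_rng(0)
    d = 100 * 100                      # think of a 100x100 "image"
    w = rng.normal(size=d)             # illustrative trained weights
    b = 0.0
    x = rng.normal(size=d)             # a clean input
    y = np.sign(w @ x + b)             # its (correctly predicted) label

    # Fast-gradient-sign-style attack: for the loss -y * (w . x + b), the
    # gradient with respect to the input is simply -y * w. Nudging every
    # pixel by a tiny amount eps in the sign of that gradient maximally
    # increases the loss within a small per-pixel budget.
    eps = 0.05
    x_adv = x + eps * np.sign(-y * w)

    print("clean prediction:      ", np.sign(w @ x + b))
    print("adversarial prediction:", np.sign(w @ x_adv + b))
    print("largest pixel change:  ", np.max(np.abs(x_adv - x)))

Each individual pixel moves by at most 0.05, yet the prediction flips, because the many tiny changes all push the classifier’s score in the same direction.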

Public concerns surrounding AI technologies are often related to the consequences of machine learning algorithms failing when encountering perturbations to the system. These may come in the form of adversarial inputs, or cases where the AI system encounters a situation it has not yet learned from. As such, new approaches to identify, understand, and circumvent the key robustness issues faced by modern machine learning algorithms must be explored.

Such efforts are now underway as part of the research I conduct at the NUS School of Computing. One over-arching goal is to develop methods for mathematically determining when robustness can or cannot be achieved. Here, fundamental limits are established that indicate when a given robust learning goal is mathematically possible and when it is impossible. For example, although image classifiers that are robust against adversarial inputs do exist, results on fundamental limits show that learning to classify images both correctly and robustly can require significantly more data than learning to classify them correctly alone. These fundamental limits for robust learning provide crucial guidance in the development of practical algorithms. Without them, researchers and developers run the risk of placing considerable effort into trying to achieve something that is simply not possible.
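As a rough illustration of what a ‘robust learning goal’ can look like mathematically (the notation below is a standard formulation from this line of work, not an equation taken from this article), the usual aim of minimising the average loss is replaced by a worst-case version over a small perturbation budget:

    \min_{f}\; \mathbb{E}_{(x,y)}\big[\ell(f(x),\,y)\big]
    \qquad\text{versus}\qquad
    \min_{f}\; \mathbb{E}_{(x,y)}\Big[\max_{\|\delta\|\le\epsilon} \ell\big(f(x+\delta),\,y\big)\Big].

Fundamental limits then quantify how much data is needed to make each of these objectives small.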

Since fundamental limits generally dictate what can be achieved without necessarily revealing how to achieve it in practice, we are also working to develop mathematically rigorous, practical machine learning algorithms, and to certify their near-optimality via comparisons to these fundamental limits. Typical modern machine learning algorithms are based on optimisation, in which the algorithm tweaks its parameters based on the errors it makes on the training data. However, if robustness to perturbations is required, it is not enough simply to check which data examples are classified correctly; instead, the algorithm needs to anticipate the possibility of worst-case perturbations and explicitly account for their effect. Accordingly, we are also developing robust optimisation methods, and mathematically proving that they achieve the underlying learning goal even in the presence of perturbations.
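The following is a minimal sketch of that idea, often called adversarial training or robust optimisation, again with a toy linear model and made-up data; the specific loss, budget, and update rule are illustrative assumptions rather than the actual algorithms developed in this research. Before each parameter update, every training input is replaced by its worst-case perturbation within a small budget, so the model is optimised against the perturbed data rather than the clean data.

    import numpy as np

    # Made-up binary classification data standing in for a real training set.
    rng = np.random.default_rng(1)
    n, d = 200, 20
    X = rng.normal(size=(n, d))
    y = np.sign(X @ rng.normal(size=d))     # labels in {-1, +1}

    eps = 0.1                                # perturbation budget (L-infinity)
    lr = 0.01                                # learning rate
    w, b = np.zeros(d), 0.0                  # model parameters being learned

    def hinge_grad(Xb, yb, w, b):
        # Sub-gradient of the average hinge loss max(0, 1 - y * (w.x + b)).
        active = yb * (Xb @ w + b) < 1.0
        gw = -(yb[active, None] * Xb[active]).sum(axis=0) / len(yb)
        gb = -yb[active].sum() / len(yb)
        return gw, gb

    for step in range(500):
        # Inner maximisation: for a linear model under an L-infinity budget,
        # the worst-case perturbation pushes every coordinate of each input
        # by eps against its label, i.e. delta = -eps * y * sign(w).
        X_adv = X - eps * y[:, None] * np.sign(w)[None, :]

        # Outer minimisation: an ordinary gradient step, but taken on the
        # perturbed data, so the model learns to withstand the worst case.
        gw, gb = hinge_grad(X_adv, y, w, b)
        w -= lr * gw
        b -= lr * gb

    X_test_adv = X - eps * y[:, None] * np.sign(w)[None, :]
    print("accuracy on clean data:       ", np.mean(np.sign(X @ w + b) == y))
    print("accuracy under perturbations: ", np.mean(np.sign(X_test_adv @ w + b) == y))

The inner maximisation is what distinguishes this from ordinary training: the model is never allowed to rely on predictions that a small perturbation could overturn.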

In many cases, robustness can be equally important in problems that are not adversarial in nature. Similar issues can arise as a result of inaccurate modelling assumptions, data containing corrupted or unreliable entries, changes in the learning environment, and more. While one typically cannot expect to resolve all such issues at once in a unified manner, our work seeks to understand the connections and interactions between them.

Robustness issues will continue to be prevalent in an extensive range of practical machine learning systems. Overcoming these issues is therefore crucial for the development and deployment of AI technologies in industry, in business, and in the wider community. Overall, the work we are conducting at NUS seeks to address these issues, and to ensure that the machine learning algorithms used in AI technologies are indeed robust, so that they can be deployed to solve problems with a reliability that people can depend on.


About the author


Dr Jonathan Scarlett is an Assistant Professor at NUS Computer Science and NUS Mathematics, and an affiliate with the NUS Institute of Data Science. He is also a recipient of the 2019 National Research Foundation Fellowship.