Concordia researcher explores how to make AI ‘smarter’

Artificial intelligence (AI) has become a ubiquitous part of our daily lives, and even more so through the COVID-19 pandemic.

While many people think little of trading aspects of their privacy to AI systems in exchange for convenience, what happens when those systems get it wrong?

Simone Brugiapaglia, assistant professor of mathematics and statistics in the Faculty of Arts and Science, recently co-authored an article on this very question. “Deep Neural Networks Are Effective at Learning High-Dimensional Hilbert-Valued Functions from Limited Data,” to be published in the Proceedings of Machine Learning Research, examines how to make AI “smarter.”

“Many important research questions await an answer”

Many people associate deep learning with high-level scientific work, but may not realize how much they use it in their daily lives. In what ways do most people use deep learning technology?

Simone Brugiapaglia: One of the most impressive features of deep learning is its extreme versatility. For example, deep learning is used to perform speech synthesis in Apple’s Siri and speech recognition in Amazon’s Alexa conversational engine. Another popular deep learning application that many of us use often (depending on how much TV we watch) is the recommendation system Netflix uses to suggest shows we might like. Deep learning is also an essential component of the computer vision system behind Tesla’s Autopilot.

Tell us about your study

SB: Many mathematical results on deep learning take the form of existence theorems: they affirm the existence of neural networks capable of approximating a given class of functions to a desired precision. However, most of these results do not address the ability to train such networks or quantify the amount of data needed to do so reliably.
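
In rough schematic terms, a classical existence theorem has the form (an illustrative template, not the precise statement of any particular result):

\[
\forall f \in \mathcal{F}, \; \forall \varepsilon > 0, \; \exists \text{ a neural network } \Phi \text{ such that } \|f - \Phi\| \le \varepsilon,
\]

where \(\mathcal{F}\) is the class of functions of interest. Nothing in this statement guarantees that a training algorithm, given only finitely many samples of \(f\), can actually find such a \(\Phi\).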

In our article, Ben Adcock, Nick Dexter, Sebastian Moraga (all based at Simon Fraser University) and I address these questions by proving so-called practical existence theorems. In addition to showing the existence of neural networks with certain desirable approximation properties, our results provide conditions on the training procedure and the amount of data sufficient to achieve a certain accuracy.
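
Schematically, a practical existence theorem upgrades the template above to something like (again an illustrative form, not the paper’s exact bound):

\[
\text{given } m \ge m_0(\varepsilon) \text{ suitable samples, a network } \hat{\Phi} \text{ produced by a prescribed training procedure satisfies } \|f - \hat{\Phi}\| \le \varepsilon.
\]

The guarantee now covers the training procedure and the amount of data, not just the existence of a good network.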

What draws you to this subject?

SB: Despite the huge success of deep learning in countless applications, its mathematics is still in its infancy. From an applied mathematician’s perspective, this is exciting because there are many important research questions waiting to be answered.

Another fascinating aspect of the mathematics of deep learning is its high level of interdisciplinarity. For example, to obtain the practical existence theorems in our paper, my collaborators and I combined elements of approximation theory, high-dimensional probability, and compressive sensing.

Finally, I’m highly motivated by thinking about how new theoretical knowledge can inform deep learning practitioners, leading them to deploy more reliable algorithms in the real world.

Finally, what’s next?

SB: A huge project I just completed in collaboration with Adcock and Clayton Webster (University of Texas) is the book Sparse Polynomial Approximation of High-Dimensional Functions. It has just been published by the Society for Industrial and Applied Mathematics (SIAM).

Our book lays out the theoretical foundations of sparse high-dimensional approximation that made our practical existence theorems possible. Its final chapter is devoted entirely to open problems in the field, which I hope will form the basis for exciting new research over the next few years. I will also be teaching a mini-course based on the book at the upcoming Canadian Mathematical Society summer meeting. I look forward to seeing how the Canadian mathematical community receives our work.


Read the cited article: “Deep Neural Networks Are Effective at Learning High-Dimensional Hilbert-Valued Functions from Limited Data.”

Learn more about Concordia’s Department of Mathematics and Statistics.

