NeuroMath

With world-class speakers from Harvard, Johns Hopkins, Technion, Leeds, TU/e, Oxford, Siemens, Twitter, Google, Microsoft, and more. Free accommodation for the first 90 students.

Full program and registration: WWW.SSIMA.EU

Abstract

Why does deep learning work so well? Why are image-processing people everywhere turning from classical computer vision to deep neural nets? Because it works: image segmentation, image super-resolution, image denoising, image registration, extracting depth from images, generating images in a given style, generating CT from MRI images and vice versa: it all works very well with deep neural nets.

But we still don't know how they work. 'Explainable AI' has been approached from many viewpoints, but it still remains largely a mystery.

Deep learning is for the most part approached with heuristic optimization, and its internal workings are still largely a black box. Explainability of DL, needed to build trust, is a major research topic today, approached from many directions, such as layer and kernel visualization, layer-wise relevance propagation, perturbation analysis, and local approximation. But there are no good answers yet …
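
To make the perturbation direction concrete, here is a minimal occlusion-sensitivity sketch in the Wolfram Language. It is a toy illustration of my own, not course code: the model name is just an example from the Wolfram Neural Net Repository, and any trained image classifier would do.

  net = NetModel["LeNet Trained on MNIST Data"];       (* example pretrained classifier *)
  img = First@Keys@ResourceData["MNIST", "TestData"];  (* a sample digit *)
  cls = net[img];                                      (* the predicted class *)
  prob[i_] := net[i, "Probabilities"][cls];            (* probability assigned to that class *)
  occlude[i_, {x_, y_}] := ImageCompose[i, ConstantImage[0.5, {7, 7}], {x, y}];
  p0 = prob[img];
  (* how much the class probability drops when a gray patch covers each position *)
  heat = Table[p0 - prob[occlude[img, {x, y}]], {y, 25, 4, -3}, {x, 4, 25, 3}];
  ArrayPlot[heat]   (* bright cells mark image regions the network relied on *)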

In this extended tutorial workshop I will focus on a geometric approach inspired by findings in the visual system. One of the key strategies of the brain is efficiency, so we will discuss several methods for accomplishing this in depth, such as approximate representation learning for patterns, updating by differences only, intrinsic hierarchical decomposition, and attention.

In particular, we will study how perceptual grouping might work ('visual binding'), i.e. contextual processing. Contrary to the usual data/dimension reduction, we see an explosion of data in the visual system. The retina measures with 12 different mosaics simultaneously (and sends the same number of information channels to the brain). Just as in deep neural nets, the local descriptions at higher levels become far richer, though at lower resolution, i.e. deeper in the graph hierarchy. We start with a single pixel, study the self-emergence of differential operators from sets of neighborhoods, and study affinities ('groupings') between the geometric tensors at higher levels. Current deep learning approaches are primarily static, but motion of the scene or the observer is a crucial binding factor.

There is a lot to learn from the retinal connectome and the visual system. Brain research has taken steps just as big as AI's, but the interchange between these worlds is still marginal. I will highlight the most recent findings in visual neurophysiology.

This is a highly interactive tutorial

This series of lectures will be presented almost entirely with live coding examples in Mathematica (Wolfram Inc.). The course notes are all computational essays. Attendees will discover that this is an ideal environment for studying deep learning, combining a deep understanding of the underlying mathematics with the ability to play with it interactively.
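
To give a flavor of this style: a small convolutional classifier can be defined and trained in a handful of lines. This is a toy sketch of my own (using the built-in MNIST resource and an arbitrary architecture), not one of the course notebooks:

  train = ResourceData["MNIST", "TrainingData"];   (* image -> label rules *)
  net = NetChain[
    {ConvolutionLayer[16, 5], Ramp, PoolingLayer[2, 2], FlattenLayer[],
     LinearLayer[10], SoftmaxLayer[]},
    "Input" -> NetEncoder[{"Image", {28, 28}, "Grayscale"}],
    "Output" -> NetDecoder[{"Class", Range[0, 9]}]];
  trained = NetTrain[net, RandomSample[train, 10000], MaxTrainingRounds -> 2];
  trained[First@Keys@train]   (* classify a digit *)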

And yes, we will do mathematics, and physics, and all of it will be explained visually and intuitively. The approach focuses on geometric deep learning and on deriving solid insights from first principles.

The final hour will be devoted to modern insights in visual perception, the retinal connectome, and what seems to happen in the many layers of the visual system in the cortex.

Part of this course material is scheduled to appear in a forthcoming book by the tutor (Q4 2022).

All the Mathematica notebooks will be made available to the attendees, so everything explained during the lectures can later be studied at leisure by doing it yourself.

The course is suitable for both beginners and experts in Deep Learning programming.

Lecture 01 - 5 Sept 2022 - 10:00-11:00

Network construction and surgery
Explainable AI (XAI) introduction - PPT
Introduction to Convolutional Neural Networks
Network surgery
Transfer learning
Learning to program in Mathematica in 15 minutes
How does gradient descent really work?
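
To whet the appetite for that last question, a toy sketch of my own (an arbitrary test function and learning rate, not course material): Mathematica computes the symbolic gradient, and a few lines trace the descent path over the loss landscape.

  f[x_, y_] := (1 - x)^2 + 10 (y - x^2)^2;      (* a curved-valley test function *)
  grad[x_, y_] = D[f[x, y], {{x, y}}];          (* symbolic gradient, computed once *)
  step[{x_, y_}] := {x, y} - 0.015 grad[x, y];  (* one update with learning rate 0.015 *)
  path = NestList[step, {-1.0, 1.0}, 300];      (* 300 iterations from a start point *)
  Show[ContourPlot[f[x, y], {x, -1.5, 1.5}, {y, -0.5, 1.5}, Contours -> 30],
   ListLinePlot[path, PlotStyle -> Red]]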

Applications and visualizations
Some application examples (a few lines of program each)
- News aggregator and topic classification
- COVID data analysis
- A camera-driven self-driving car
- Auto-encoder
Data visualization (feature space plots) in 2D and 3D
The Wolfram Neural Network Repository
Dynamic inner layer visualization (camera)
Inner layer visualization for all deeper layers
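
A first taste of the network surgery and inner-layer visualization above, as a minimal sketch; the model name is an illustrative pick from the Wolfram Neural Network Repository, and any trained CNN would do:

  lenet = NetModel["LeNet Trained on MNIST Data"];
  img = First@Keys@ResourceData["MNIST", "TestData"];
  firstConv = NetTake[lenet, 1];   (* surgery: keep only the first convolution *)
  maps = Normal@firstConv[img];    (* its response: one feature map per kernel *)
  ImageAdjust[Image[#]] & /@ maps  (* visualize the learned filters' outputs *)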

Lecture 02 - 5 Sept 2022 - 11:00-12:00

Backpropagation
Layer differentiation in neural networks
Symbolic and numeric differentiation
How does backpropagation really work?
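
As a preview of these questions, a toy one-neuron example of my own: Mathematica differentiates the loss symbolically, and a finite-difference quotient confirms the result. (Inside the neural-network framework itself, the same backpropagated gradients are available via NetPortGradient.)

  x0 = 0.7; t0 = 1.0;                                  (* one training sample *)
  loss[w_, b_] := (LogisticSigmoid[w x0 + b] - t0)^2;  (* squared loss of a sigmoid neuron *)
  dw[w_, b_] = D[loss[w, b], w];                       (* symbolic ∂loss/∂w *)
  (* the exact derivative matches the numerical difference quotient *)
  {dw[0.3, -0.1], (loss[0.3 + 1.*^-6, -0.1] - loss[0.3, -0.1])/1.*^-6}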

Self organization of the first layers
The optimal pixel shape from first principles
Constrained optimization
The first contextual neighborhood of a pixel
Proper derivative kernels for discrete data
Template matching
Convolution and the Discrete Fourier Transform
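
That last link can be verified numerically in a few lines (a generic identity, shown here with random data, not course code): with FourierParameters -> {1, -1}, the DFT of a cyclic convolution equals the product of the DFTs.

  n = 8; a = RandomReal[1, n]; b = RandomReal[1, n];
  (* cyclic convolution by direct summation *)
  c = Table[Sum[a[[Mod[j - k, n] + 1]] b[[k + 1]], {k, 0, n - 1}], {j, 0, n - 1}];
  (* the same result via the convolution theorem *)
  fp = FourierParameters -> {1, -1};
  Max@Abs[c - InverseFourier[Fourier[a, fp] Fourier[b, fp], fp]]  (* ~ 10^-15 *)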

Lecture 03 - 5 Sept 2022 - 12:15-13:15

Geometric deep learning
Learning the first filters with optimally sparse representations
Differential structure & geometric invariants in neural networks
Some famous invariants from computer vision
Vesselness, corner detection
How to properly describe shape in 2D and 3D?
Regularization: how does it really work?
Steerability of first level CNN kernels
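
A foretaste of this geometric machinery, with an arbitrary test image and scale of my own choosing: Gaussian derivative kernels are built into the Wolfram Language, the gradient magnitude is a rotation-invariant edge detector, and first-order steerability is just a two-term linear combination.

  img = ColorConvert[ExampleData[{"TestImage", "House"}], "Grayscale"];
  lx = GaussianFilter[img, 3, {0, 1}];   (* Gaussian derivative ∂L/∂x *)
  ly = GaussianFilter[img, 3, {1, 0}];   (* Gaussian derivative ∂L/∂y *)
  edges = ImageAdjust@ImageApply[Sqrt,
     ImageAdd[ImageMultiply[lx, lx], ImageMultiply[ly, ly]]];  (* gradient magnitude *)
  steer[t_] := ImageAdjust@ImageAdd[ImageMultiply[lx, Cos[t]], ImageMultiply[ly, Sin[t]]];
  GraphicsRow[{edges, steer[Pi/4]}]   (* invariant edges; derivative steered to 45° *)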

Lessons from the visual system
Brain imaging methods
Modern insights into the anatomy and function of the retina
Modern insights into the anatomy and function of the visual cortex
Perceptual grouping
Why is the brain so incredibly energy-efficient?
Final remarks and wrap-up

Mathematica

It is highly recommended to acquire the Mathematica 13 desktop version
and have it working during the Summer School classes.

Recommended reading: 

On neural networks and deep learning:
·        Bart M. ter Haar Romeny: Introduction to Artificial Intelligence in Medicine.
Chapter in the Springer Handbook: Artificial Intelligence in Medicine.
·        Etienne Bernard: Introduction to Machine Learning (written in Mathematica)
Book (Amazon / Kindle) and online full text + free Mathematica code.

On the visual system:
·        David Hubel: Eye, Brain, and Vision, Scientific American Library.
Free download.
·        Richard Masland: The Neuronal Organization of the Retina.
Free download.
·        Eric Kandel: Principles of Neural Science, 5th ed., chapters 25–28.
Free download.

On Mathematica (= the Wolfram Language):
·        Stephen Wolfram: An Elementary Introduction to the Wolfram Language.
Free download.
·        The Wolfram Language: Fast Introduction for Programmers
·        William J. Turkel: Digital Research Methods with Mathematica