Date: January 10, 2021

Time: 3 - 6 PM (CET)
      9 AM - 12 PM (EST)
      6 - 9 AM (PST)


Computer graphics and computer vision problems, also known as forward and inverse imaging problems, can be cast as causal inference questions consistent with Donald Rubin’s quantitative definition of causality, in which “A causes B” means “the effect of A is B,” a measurable and experimentally repeatable quantity. Computer graphics addresses questions analogous to forward causal inference: the “what if” question, estimating the change in effects given a change in a causal factor. Computer vision addresses questions analogous to inverse causal inference: the “why” question, where we define inverse causal inference as the estimation of causes given an estimated forward causal model and a set of observations that constrain the solution set.

There are two main classes of tensor factorizations that generalize two different properties of the matrix SVD:

  • rank-K decomposition, which represents a tensor as a sum of K rank-1 terms, and

  • rank-(R1, R2, ..., RM) decomposition, which computes orthonormal mode matrices and a core tensor that plays a role similar to the matrix of singular values.

In the first part of the tutorial, we will define the meaning of causality, the linear tensor rank-K, and the multilinear tensor rank, rank-(R1, R2, ..., RM). The forward causal models discussed employ Multilinear PCA (MPCA), Multilinear ICA (MICA), kernel-MPCA, kernel-MICA, and Generalized Block Tensor Factor Analysis. The discussion of inverse causal inference will introduce the multilinear projection algorithm, the mode-m tensor pseudo-inverse, and the mode-m identity tensor, which are important for performing recognition in a tensor framework. Furthermore, we will discuss the advantages and disadvantages of treating images as vectors, matrices, or higher-order objects in the context of a tensor framework.
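To make the two decomposition classes concrete, here is a minimal NumPy sketch (an illustration under our own naming conventions, not the tutorial's code) of the mode-m unfolding, the mode-m product, a rank-K (CP) reconstruction as a sum of K rank-1 terms, and a rank-(R1, R2, R3) decomposition computed by truncated higher-order SVD:

```python
import numpy as np

def unfold(T, mode):
    """Mode-m unfolding: the mode-m fibers of T become the columns of a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_dot(T, M, mode):
    """Mode-m product T x_m M: multiply every mode-m fiber of T by the matrix M."""
    out_shape = [M.shape[0]] + [s for i, s in enumerate(T.shape) if i != mode]
    return np.moveaxis((M @ unfold(T, mode)).reshape(out_shape), 0, mode)

def cp_reconstruct(A, B, C):
    """Rank-K (CP) form of a 3-way tensor: a sum of K rank-1 outer-product terms,
    with the K columns of A, B, C holding the factors of each term."""
    return np.einsum('ik,jk,lk->ijl', A, B, C)

def hosvd(T, ranks):
    """Rank-(R1,...,RM) (Tucker) decomposition via truncated HOSVD:
    orthonormal mode matrices U[m] and a core tensor G."""
    U = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
         for m, r in enumerate(ranks)]
    G = T
    for m, Um in enumerate(U):
        G = mode_dot(G, Um.T, m)   # project T onto the mode matrices
    return G, U
```

A tensor whose multilinear rank is exactly (R1, R2, R3) is recovered exactly by `hosvd` followed by mode products with the mode matrices, i.e., T = G x_1 U1 x_2 U2 x_3 U3.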


In the second part, we discuss how neural network architectures can be tensorized, and how tensor factorizations such as TensorTrain and the Hierarchical Tucker computation can yield state-of-the-art performance with large parameter savings and computational speedups on a wide range of applications.
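As a hedged illustration of where the parameter savings come from, the sketch below implements the basic TT-SVD procedure in NumPy (a toy version for small dense tensors, not a reference implementation): a d-way tensor is factored into a train of 3-way cores by successive SVDs, and a tensor with low TT-ranks is stored with far fewer parameters than its full form.

```python
import numpy as np

def tt_svd(T, max_rank):
    """TT-SVD: factor a d-way tensor into a 'train' of 3-way cores
    G1, ..., Gd, where core k has shape (r_{k-1}, n_k, r_k)."""
    shape, cores, r_prev = T.shape, [], 1
    C = T.reshape(shape[0], -1)
    for k in range(T.ndim - 1):
        C = C.reshape(r_prev * shape[k], -1)
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        r = min(max_rank, int(np.sum(s > 1e-12)))   # truncate to the TT-rank
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        C = s[:r, None] * Vt[:r]                    # carry the remainder forward
        r_prev = r
    cores.append(C.reshape(r_prev, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the train of cores back into a full tensor."""
    T = cores[0]
    for core in cores[1:]:
        T = np.tensordot(T, core, axes=([-1], [0]))
    return T.reshape([c.shape[1] for c in cores])
```

For example, a 4×4×4×4 rank-1 tensor (256 entries) is represented exactly by four cores with 16 parameters in total; the same reshaping idea applied to a CNN weight matrix is what produces the parameter savings discussed above.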

ICPR, January 10, 2021

[Figure: Taxonomy of tensor models. (model_taxonomy3.png)]

[Figure: Rewriting the data tensor D as a hierarchical data tensor DH. (3layers_matrix_grey_icons5.png)]


Developing causal explanations for correct results, or for failures, from mathematical equations and data is important for developing trustworthy artificial intelligence and for retaining public trust. Causal explanations are germane to the “right to an explanation” statute, i.e., to data-driven decisions such as those that rely on images.

 

Tensor algebra is a suitable and transparent framework for representing the multi-causal structure of data formation. Tensor-based data analysis, also known in the literature as structural equation modeling with multimode latent variables, has been successfully employed to represent the causal factor structure of data formation in econometrics, psychometrics, and chemometrics since the 1960s. More recently, the tensor factorization approach has been successfully employed in computer vision and computer graphics to represent the cause-and-effect relationship of image formation, but the approach is data agnostic. In machine learning, tensor dimensionality reduction has been employed to reduce the parameters of convolutional neural networks (CNNs) and to ease the deployment of CNNs on devices with limited computational resources.

Cause-and-Effect in a Tensor Framework

 

Tutorial Schedule:


Basic Concepts (2 hours, M. Alex O. Vasilescu)

  1. Causality

    • Forward Causal Inference

    • Inverse Causal Inference

  2. Tensor factorizations:

  • Canonical Polyadic Decomposition (low-rank tensor approximation)

  • Tucker Decomposition and Multilinear Singular Value Decomposition 

Forward Causal Inference and Multilinear Factorizations

  1. Representing Global Causal Factors with

  • Multilinear PCA and Multilinear ICA (M-mode ICA) (pdf)

  • Kernel MPCA, Kernel MICA, etc. 

 

  2. Representing Hierarchical Intrinsic and Extrinsic Causal Factors

  • Definitions: intrinsic vs. extrinsic causality, and local vs. global causality

  • Compositional Hierarchical Block Tensor Factorization (pdf)

Inverse Causal Inference: Multilinear Projection (2011 / 2007)

[Figure: Multilinear Projection. (pixel_space_mapped_to_factor_spaces2.png)]
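As a toy illustration of the multilinear projection idea (our own simplified construction, not the 2007/2011 algorithm), the sketch below maps an observation from pixel space back to its causal-factor coefficient spaces: for a model d = G x_1 a x_2 b x_3 U_pix, one can invert the pixel mode with a pseudo-inverse and then recover the factor coefficients a and b from the rank-1 structure of the result.

```python
import numpy as np

# Toy forward model (assumed for illustration): an observation d is generated
# from two causal-factor coefficient vectors a, b via a core tensor G and a
# pixel-mode matrix U_pix:  d = G x1 a x2 b x3 U_pix.
rng = np.random.default_rng(0)
ra, rb, npix = 3, 4, 50
G = rng.standard_normal((ra, rb, npix))                      # core tensor
U_pix = np.linalg.qr(rng.standard_normal((npix, npix)))[0]   # pixel mode matrix

a_true = rng.standard_normal(ra)
b_true = rng.standard_normal(rb)
d = np.einsum('ijk,i,j,pk->p', G, a_true, b_true, U_pix)     # synthesize observation

# Multilinear projection (sketch): flatten the model so that d = M (a kron b),
# invert with the pseudo-inverse, then split the resulting rank-1 matrix
# a b^T with an SVD to recover the per-factor coefficients.
M = U_pix @ np.moveaxis(G, 2, 0).reshape(npix, ra * rb)
c = np.linalg.pinv(M) @ d              # estimate of kron(a, b)
C = c.reshape(ra, rb)                  # rank-1 matrix a b^T
u, s, vt = np.linalg.svd(C)
a_hat, b_hat = u[:, 0] * s[0], vt[0]   # recovered up to a scale/sign exchange
```

The recovered pair (a_hat, b_hat) matches (a_true, b_true) up to the inherent scale/sign ambiguity of a rank-1 factorization, i.e., their outer products agree.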

Tensor Factorizations for Neural Networks (2 hours, Ivan Oseledets)
 

  1. Parameterizing neural networks with tensor decomposition 

  2. Higher order operations as deep net layers

  3. TensorTrain (pdf)

  4. Hierarchical Tucker Computation (pdf)

 

Speakers/Organizers:

M. Alex O. Vasilescu (https://web.cs.ucla.edu/~maov) received her education at the Massachusetts Institute of Technology and the University of Toronto. She was a research scientist at the MIT Media Lab from 2005–07 and at New York University’s Courant Institute of Mathematical Sciences from 2001–05, and is currently a senior fellow at UCLA’s Institute for Pure and Applied Mathematics. In the early 2000s, Vasilescu introduced the tensor algebraic framework for computer vision, computer graphics, and machine learning. She addressed causal inferencing questions by framing computer graphics and computer vision as multilinear problems, and demonstratively disentangled the causal factors of data formation. Causal inferencing in a tensor framework facilitates the analysis, recognition, synthesis, and interpretability of sensory data. The development of the tensor framework was spearheaded by premier papers such as Human Motion Signatures (2001), TensorFaces (2002), TensorTextures (2004), Multilinear Independent Component Analysis (2005), and Multilinear Projection for Recognition (2007, 2011). Vasilescu’s face recognition research, known as TensorFaces, has been funded by the TSWG, the Department of Defense’s Combating Terrorism Support Program, and by IARPA, the Intelligence Advanced Research Projects Activity. Her work was featured on the cover of Computer World and in articles in the New York Times, the Washington Times, etc. MIT’s Technology Review magazine named her to its TR100 list of Top 100 Young Innovators, and she was a co-awardee of a National Academies Keck Futures Initiative grant.

Ivan Oseledets (https://faculty.skoltech.ru/people/ivanoseledets) graduated from the Moscow Institute of Physics and Technology in 2006, and received his Candidate of Sciences degree in 2007 and his Doctor of Sciences degree in 2012, both from the Marchuk Institute of Numerical Mathematics of the Russian Academy of Sciences. He joined Skoltech CDISE in 2013. Ivan’s research covers a broad range of topics. He proposed a new decomposition of high-dimensional arrays (tensors), the tensor-train decomposition, and has developed many efficient algorithms for solving high-dimensional problems. These algorithms are used in various areas of chemistry, biology, data analysis, and machine learning. His current research focuses on the development of new algorithms in machine learning and artificial intelligence, such as the construction of adversarial examples, the theory of generative adversarial networks, and the compression of neural networks. This work has resulted in publications at top computer science conferences such as ICML, NIPS, ICLR, CVPR, RecSys, ACL, and ICDM. Professor Oseledets is an associate editor of the SIAM Journal on Mathematics of Data Science, the SIAM Journal on Scientific Computing, and Advances in Computational Mathematics (Springer). He also served as an area chair for the ICLR 2020 conference. Ivan Oseledets has received several awards for his research and industrial cooperation, including two gold medals of the Russian Academy of Sciences (for students in 2005 and for young researchers in 2009), a Dynasty Foundation award (2012), the SIAM Outstanding Paper Prize (2018), the Russian President Award for young researchers in science and innovation (2018), and the Ilya Segalovich award for Best PhD Thesis Supervisor (2019).