The seminar is open to all regular and occasional attendees.
27. 4. 2023 (Thursday): Jan Haskovec (KAUST, Saudi Arabia)
Title: Functional Differential Equations in Models of Collective Behavior
Abstract: I will give an overview of recent results for models of
collective behavior governed by functional differential equations. The
talk will focus on models of interacting agents with applications in
biology (flocking, swarming), social sciences (opinion formation) and
engineering (swarm robotics), where latency (delay) plays a significant
role. I will explain that there are two main sources of delay -
inter-agent communications and information processing - and show that
they have qualitatively different impacts on the group dynamics. I will
give an overview of analytical methods for studying the asymptotic
behavior of the models and their mean-field limits. Finally, motivated
by situations where finite speed of information propagation is
significant, I will introduce an interesting class of problems where the
delay depends nontrivially and nonlinearly on the state of the system,
and discuss the available analytical results and open problems here.
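A minimal sketch of the delay effect discussed above, as a toy Python simulation (the linear all-to-all model, the parameters, and the Euler scheme are illustrative assumptions, not taken from the talk):

    import numpy as np

    # Toy linear consensus model with a constant communication delay tau:
    #   x_i'(t) = (1/N) * sum_j x_j(t - tau) - x_i(t),
    # integrated with the explicit Euler method and a history buffer.
    rng = np.random.default_rng(0)
    N, tau, dt, T = 20, 0.5, 0.01, 20.0
    lag = int(tau / dt)                              # delay in time steps
    hist = [rng.uniform(-1.0, 1.0, N)] * (lag + 1)   # constant initial history

    for _ in range(int(T / dt)):
        x_now, x_del = hist[-1], hist[0]             # states at t and t - tau
        hist.append(x_now + dt * (x_del.mean() - x_now))
        hist.pop(0)

    print("state spread at time T:", np.ptp(hist[-1]))  # shrinks toward consensus

For small tau the group still reaches consensus; increasing tau feeds the coupling stale information, illustrating why delays can qualitatively change the group dynamics.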
24. 4. 2023: Martin Kunz (FJFI ČVUT)
Title: The Tale of a Boundary-Layer Problem
Abstract: I will walk through the solution, with all its steps (and missteps),
of the boundary-layer problem eps y'' = y y' - y with y(0) = 1, y(1) = -1,
where eps << 1. The journey towards the solution will take us on an excursion
through some basics of perturbation theory and matched asymptotic
approximations. I will show visualizations that give insight into the
reasoning behind each step, showcasing the leverage in problem solving that
today's computational power puts at our fingertips.
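A quick numerical cross-check of the problem above (a sketch assuming SciPy's collocation solver; the value of eps, the mesh, and the initial guess are illustrative choices):

    import numpy as np
    from scipy.integrate import solve_bvp

    # Solve eps*y'' = y*y' - y, y(0) = 1, y(1) = -1 numerically.
    eps = 0.05

    def rhs(x, y):
        # First-order system: y[0] = y, y[1] = y'
        return np.vstack([y[1], (y[0] * y[1] - y[0]) / eps])

    def bc(ya, yb):
        # Residuals of the boundary conditions y(0) - 1 and y(1) + 1
        return np.array([ya[0] - 1.0, yb[0] + 1.0])

    x = np.linspace(0.0, 1.0, 400)
    y_guess = np.vstack([1.0 - 2.0 * x, -2.0 * np.ones_like(x)])  # straight line
    sol = solve_bvp(rhs, bc, x, y_guess, max_nodes=20000)
    print(sol.status, sol.message)
    # For smaller eps, continuation (re-using sol.sol(x) as the next guess) helps.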
17. 4. 2023: David Sychrovský (MFF UK)
Title: t.b.a.
Abstract: t.b.a.
3. 4. 2023: Aleksej Gaj (FJFI ČVUT)
Title: Quantum decision making: an introduction
Abstract: Decision making (DM) is one of the basic disciplines in contemporary AI and ML. DM is a purposeful choice among several alternatives based on the available information. Classical subjective expected utility theory (Savage, 1954) serves as the main paradigm currently in use.
Confrontation with a human-like style of thinking (see the Ellsberg and Allais paradoxes) has stimulated a new line of research towards a quantum version of DM (see Yukalov and Sornette, 2010).
In the talk we will discuss how the apparatus of quantum mechanics can be used to modify and improve classical decision theory.
27. 3. 2023: Erin Carson (MFF UK)
Title: Using Mixed Precision in Numerical Linear Algebra
Abstract: Support for floating point arithmetic in multiple precisions is becoming
increasingly common in emerging architectures. Mixed precision
capabilities are already included in many machines on the TOP500 list
and will be a crucial hardware feature in exascale machines. From a
computational scientist's perspective, our goal is to determine how and
where we can exploit mixed precision computation in our codes. This
requires an understanding both of performance characteristics and of the
numerical behavior of algorithms in finite precision arithmetic.
After giving an introduction to floating point computation, mixed precision hardware, and current work in mixed precision numerical linear algebra, we present examples that demonstrate what can go wrong if we use low precision blindly. This motivates the need for rigorous rounding error analysis in algorithms used in scientific computing and data science applications.
Understanding the behavior of algorithms in finite precision is necessary not only for illuminating potential dangers, but also for revealing opportunities. As an example of where rounding error analysis can lead to new insights and improved algorithms, we present a general algorithm for solving linear systems based on mixed-precision iterative refinement. From this, we develop a mixed-precision GMRES-based iterative refinement scheme that works for even ill-conditioned systems. We then present recent extensions of this theoretical analysis to least squares problems and practical settings in which approximate and randomized preconditioners are used.
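A minimal two-precision sketch of the iterative refinement idea (a textbook variant with my choice of matrix and of SciPy routines; the talk's scheme replaces the correction solve with preconditioned GMRES and analyzes up to three precisions):

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    rng = np.random.default_rng(1)
    n = 200
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)

    # Factorize once in low precision (float32), refine in high (float64).
    lu, piv = lu_factor(A.astype(np.float32))
    x = lu_solve((lu, piv), b.astype(np.float32)).astype(np.float64)

    for _ in range(5):
        r = b - A @ x                                 # residual in float64
        d = lu_solve((lu, piv), r.astype(np.float32)).astype(np.float64)
        x += d                                        # corrected solution
        print(np.linalg.norm(r) / np.linalg.norm(b))  # relative residual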
20. 3. 2023: Pavel Jaks (FJFI ČVUT)
Title: Hypercubes and the Sensitivity Conjecture
13. 3. 2023: Magdalena Prorok (AGH University of Science and Technology)
Title: Directed graphs without rainbow triangles
Abstract: One of the most fundamental questions in graph theory is Mantel's
theorem, which determines the maximum number of edges in a triangle-free
graph of order n. Recently a colourful variant of this problem has been
solved. In this variant we consider k graphs on a common vertex set, thinking
of each graph as edges in a distinct colour, and want to determine the
smallest number of edges in each colour which guarantees the existence of a
rainbow triangle. In this talk we solve the analogous problem for directed
graphs without rainbow triangles, either directed or transitive, for any
number of colours. The
constructions and proofs essentially differ for k = 3 and k >= 4 and the type
of the forbidden triangle.
This is joint work with Sebastian Babinski and Andrzej Grzesik.
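For reference, the Mantel bound mentioned above is exact: a triangle-free graph on n vertices has at most

    \mathrm{ex}(n, K_3) = \left\lfloor n^2 / 4 \right\rfloor

edges, attained by the complete bipartite graph with parts of sizes \lceil n/2 \rceil and \lfloor n/2 \rfloor.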
6. 3. 2023: Adam Janich (FJFI ČVUT)
Title: word2vec
Abstract: The word2vec method will be presented.
27. 2. 2023: RNDr. Věra Kůrková, DrSc. (ÚI AV)
Title: Some implications of high-dimensional geometry for classification by neural networks
Abstract: Computational difficulties of multidimensional tasks, called the
"curse of dimensionality", have long been known. On the other hand, almost
deterministic behavior of some randomized models and algorithms
depending on large numbers of variables can be attributed to the
"blessing of dimensionality". These phenomena can be explained by
rather counter-intuitive properties of geometry of high-dimensional
spaces. They imply concentration of values of sufficiently smooth
functions of many variables around their mean values and possibilities
of reduction of dimensionality of data by random projections. In the
lecture, it will be shown how these properties of high-dimensional
geometry can be employed to obtain some insights into suitability of
various types of neural networks for classification of large data sets.
Probabilistic bounds on network complexity will be derived using
concentration properties of approximation errors based on Azuma and
McDiarmid inequalities. Consequences for choice of network architectures
will be analyzed in terms of growth functions and VC dimensions of sets
of network input-output functions. General results will be illustrated
by examples of deep perceptron networks with various piecewise
polynomial activation functions (ReLU, RePU).
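A toy numerical illustration of the concentration phenomenon described above (the function and the distribution are my choices, not from the lecture):

    import numpy as np

    # A smooth function of many variables, f(x) = mean(x_i^2), concentrates
    # around its mean value 1/3 as the dimension d grows (std ~ 1/sqrt(d)).
    rng = np.random.default_rng(2)
    for d in (10, 100, 1000):
        x = rng.uniform(-1.0, 1.0, size=(2000, d))   # 2000 sample points
        f = (x ** 2).mean(axis=1)
        print(d, round(f.mean(), 4), round(f.std(), 4))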
20. 2. 2023: Jan Vybíral (FJFI ČVUT)
Title: A multivariate Riesz basis of ReLU neural networks
Abstract: We consider the trigonometric-like system of piecewise linear functions introduced recently by Daubechies, DeVore, Foucart, Hanin, and Petrova.
We provide an alternative proof that this system forms a Riesz basis of L2([0,1]) based on the Gershgorin theorem. We also generalize
this system to higher dimensions d>1 by a construction, which avoids using (tensor) products. As a consequence, the functions from the new Riesz basis
of L2([0,1]^d) can be easily represented by neural networks. Moreover, the Riesz constants of this system are independent of d, making it an attractive
building block regarding future multivariate analysis of neural networks.
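As background for the last claim, a standard identity (an illustration, not the paper's construction) shows how a piecewise linear hat function is realized by a one-hidden-layer ReLU network:

    import numpy as np

    relu = lambda t: np.maximum(t, 0.0)

    # The tent function on [0, 1] peaking at 1/2, written with three ReLUs.
    def hat(x):
        return 2.0 * (relu(x) - 2.0 * relu(x - 0.5) + relu(x - 1.0))

    print(hat(np.linspace(0.0, 1.0, 5)))  # [0.  0.5 1.  0.5 0. ]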