Computer Science Summer in Russia (CSSR)

June 24 - July 5, 2019

List of Summer School Courses and Tutorials
Introduction to Formal Program Semantics
Course Description
Topics
Prerequisites
Lecturer
Language
Over 50 years have passed since Robert W. Floyd published the first research that explicitly discussed how to assign meaning to programs formally, i.e., formal program semantics. Two decades have already passed since David A. Schmidt published an appeal "On the Need for a Popular Formal Semantics". These years have generated plenty of individual formal semantics and theories about them, and more experts, but fewer general users (D.A. Schmidt). Meanwhile, the last decade has seen a great deal of activity (of different kinds) in the field of formal semantics.

This course is designed to help rethink formal semantics. We start with an accessible approach to popularizing formal semantics for software engineers, based upon a toy language with esoteric operational, denotational, and logical (axiomatic) semantics. Then we present realistic operational, denotational, and axiomatic semantics for a simple programming language. Finally, we discuss a problem-oriented formal semantics designed for reasoning about programs with pointers – a denotational Alias Calculus proposed by Bertrand Meyer.
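
To make the contrast concrete, here is a minimal, purely illustrative sketch (in Python, over a made-up two-constructor expression language; it is not the course's toy or esoteric language) of operational versus denotational semantics:

    # Illustrative sketch only: two semantics for a tiny language of integer literals and addition.
    # Expressions are nested tuples: ("lit", n) or ("add", e1, e2).

    def step(e):
        """Small-step operational semantics: perform one reduction step."""
        if e[0] == "add":
            _, e1, e2 = e
            if e1[0] != "lit":
                return ("add", step(e1), e2)      # reduce the left operand first
            if e2[0] != "lit":
                return ("add", e1, step(e2))      # then the right operand
            return ("lit", e1[1] + e2[1])         # both operands are values: add them
        raise ValueError("literals do not reduce")

    def denote(e):
        """Denotational semantics: map an expression directly to the integer it denotes."""
        return e[1] if e[0] == "lit" else denote(e[1]) + denote(e[2])

    expr = ("add", ("lit", 1), ("add", ("lit", 2), ("lit", 3)))
    result = expr
    while result[0] != "lit":                     # run the operational semantics to a value
        result = step(result)
    assert result[1] == denote(expr) == 6         # the two semantics agree on this expression

An axiomatic semantics would instead characterize programs by the assertions they satisfy; the course develops all three styles systematically.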

This course will help students to achieve the following objectives.

  • Learn why we need formal semantics for programming languages.
  • Understand the principles of operational, denotational, and axiomatic approaches to formal semantics.
  • Become familiar with the basic operational, denotational, and axiomatic semantics for a simple programming language.
  • Understand problems with developing formal semantics for industrial programming languages.
  • Become familiar with problem-oriented formal semantics for reasoning about programs with pointers.
  • Learn about the use of formal semantics for static analysis and formal verification.
  • Discuss applications of formal semantics in software engineering, in particular, for the space industry.
1. Introduction to Program Semantics: Motivation

1.1. What is Semantics?

1.2. Why Formal Program Semantics?

1.3. Operational, Denotational, and Axiomatic Semantics for an Esoteric Language

2. Formal semantics for a Toy Programming Language

2.1. Data Types and Their Semantics

2.2. The main ingredient: Implementation Semantics

2.3. Structural Operational Semantics

2.4. Relational Denotational Semantics

2.5. Axiomatic Semantics

3. Second-Order Semantics

3.1. Introduction to Higher-Order Logic

3.2. Second-Order Semantics for an Esoteric Language

3.3. The Weakest Precondition Semantics for a Toy Programming Language

4. Elements of λ-Calculus and Classical Denotational Semantics

4.1. Syntax, Semantics and Main Properties of λ-Calculus

4.2. Denotational Semantics of a Toy Programming Language

5. Conclusion: Formal Semantics – a Gateway to Formal Verification

5.1. Example of semantics for advanced features – Calculus of Aliasing

5.2. Example of formally verified software – Mars Code

6. Lab topics (optional home tasks). Working with an esoteric programming language:

6.1. Implementation and visualization of the operational semantics

6.2. Implementation and visualization of the denotational semantics

6.3. Implementation and visualization of the second-order semantics

6.4. Implementation of the axiomatic semantics

6.5. Visualization of the axiomatic semantics
  • Basic programming skills
  • Fundamentals of set theory
  • Propositional logic (syntax and truth tables)
  • First-order logic (syntax and the notion of a model)
Nikolay Shilov
Assistant Professor at Innopolis University.
Specialist in the theory of programming and applied logic.

He has held lecturing and research positions at the University of Technology Sydney (Australia), Nazarbayev University (Kazakhstan), the University of Canterbury in Christchurch (New Zealand), the Korea Advanced Institute of Science and Technology, Chung-Ang University, and the Samsung Advanced Technology Training Institute (South Korea).

Currently his research interests are related to the foundations and applications of Formal Methods for the analysis of program, information, distributed, and multiagent systems.

His current research is concerned with the project "Platform Independent Specification and Verification of the Standard Mathematical Functions".

English or Russian depending on the audience preferences
Introduction to the Theory of Programming Languages
Course Description
Topics
Prerequisites
Lecturer
Language
The goal of this course is to present topics in the theory of programming languages as an active branch of Computer Science. We will start with a basic introduction to the main ideas, definitions, facts, and tools of the theory and then move on to ongoing research topics, as applied to the Haskell programming language, that are regularly presented at the main conferences in this area, namely POPL (Principles of Programming Languages) and ICFP (International Conference on Functional Programming).

The second part of the course will necessarily be presented as an overview without going into depth, but students are expected to gain an understanding of the methods used in this theory. The course will be accompanied by detailed lecture notes and a reading list for studying the presented topics in much greater depth.
  • Lambda calculus as a language of the theory of programming languages. Evaluation rules. Programming in the lambda calculus (a short sketch follows this list).
  • Simply typed lambda calculus, type judgments, and typing rules. Classical and constructive logic, the Curry-Howard correspondence. Extending the simply typed lambda calculus along the axes of Barendregt's lambda cube.
  • Coq proof assistant as a tool for doing research in the theory of programming languages: a quick introduction.
  • Use case, part 1: Haskell programming language and System FC (Girard's System F with algebraic datatypes and coercions).
  • Use case, part 2: extending Haskell with (a) linear arrows and (b) dependent types as examples of ongoing research in the theory of programming languages.
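
The following purely illustrative sketch (Python lambdas standing in for lambda terms; it is not course material) gives the flavour of programming in the untyped lambda calculus with Church-encoded booleans and numerals:

    # Illustrative sketch only: Church encodings written with Python lambdas.
    TRUE = lambda t: lambda f: t                        # selects its first argument
    FALSE = lambda t: lambda f: f                       # selects its second argument
    IF = lambda b: lambda t: lambda f: b(t)(f)

    ZERO = lambda s: lambda z: z                        # apply s zero times
    SUCC = lambda n: lambda s: lambda z: s(n(s)(z))     # apply s one more time
    ADD = lambda m: lambda n: lambda s: lambda z: m(s)(n(s)(z))

    to_int = lambda n: n(lambda k: k + 1)(0)            # read a Church numeral back as an int
    assert to_int(ADD(SUCC(ZERO))(SUCC(SUCC(ZERO)))) == 3   # 1 + 2 = 3
    assert IF(TRUE)("then")("else") == "then"           # note: Python evaluates both branches eagerly
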
Students are not expected to know anything about the area in advance. Knowledge of discrete mathematics (sets, relations, proofs by induction, Boolean algebra, logical predicates) is required.

Knowledge of any functional programming language (Haskell, OCaml, F#, Clojure) is a plus but is not required.
Vitaly Bragilevsky
Senior Lecturer at the Southern Federal University in Rostov-on-Don and software developer at JetBrains, Russia. He teaches functional programming, the theory of computation, and the theory of programming languages at the undergraduate level.

Vitaly Bragilevsky is a member of the Haskell 2020 Language Committee and the GHC Steering Committee. He has translated into Russian and edited translations of several books on Haskell and the theory of programming languages.

He is the author of the book 'Haskell in Depth' (Manning Publications, available via Manning's early access program).
Russian
Modern Testing and Verification Techniques
Course Description
Topics
Prerequisites
Lecturer
Language
In this course, you will get acquainted with several modern industrial and academic tools for testing and verification.

The course covers the following testing techniques:
  • Test-Driven Development (TDD)
  • Behavior-Driven Development (BDD)
  • Automatic generation of Unit tests with MS Spec Explorer
  • MS Code Contracts approach

as well as the following verification techniques:

  • Model checking with Spin in the iSpin environment
  • Contract verification of C programs with the Weakest Precondition method
  • Verification of Cyber-physical Systems and browsing samples in the KeYmaera theorem prover.
- Modern Testing Techniques
  • Triple-V model of testing and verification;
  • Developing a sample system with the TDD approach;
  • Developing a sample system with the BDD approach;
  • Spec Explorer in action: generation of unit tests from a given behavior automaton.

- Formal Verification in Action
  • Purpose of verification;
  • Verification of actor-based systems and protocols with Spin;
  • An example of solving search problems in Spin: the Towers of Hanoi;
  • Contract-based programming with the MS Code Contracts approach.

- Formal Verification in Action (2)
  • The Weakest Precondition approach and verification of C programs with Frama-C (a short sketch follows this list);
  • Cyber-Physical Systems: issues of definition and verification;
  • Introduction to CPS verification: verification systems from various domains.
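
To give a flavour of the weakest-precondition idea mentioned above, here is a minimal, hypothetical sketch (the textbook assignment rule only, applied by textual substitution; it is not Frama-C's machinery):

    # Illustrative sketch only: wp(x := e, Q) is Q with x replaced by e.
    def wp_assign(var, expr, postcondition):
        return postcondition.replace(var, "(" + expr + ")")

    # wp(x := x + 1, x > 0) is (x + 1) > 0, i.e. x >= 0 over the integers
    print(wp_assign("x", "x + 1", "x > 0"))       # prints: (x + 1) > 0
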
  • Basic C and Java programming skills
  • Fundamentals of Automata Theory
  • Basic understanding of unit-testing
Sergey Staroletov
Lecturer at Altai State Technical University. Sergey is a specialist in modeling and testing distributed software systems based on multi-agent automata models and has extensive industrial experience.

He is the author of the book «Rudiments of software testing and verification», published in 2018 (in Russian).

Russian
Introduction to Model Checking
Course Description
Topics
Prerequisites
Lecturer
Language
Model Checking is a method for automatically checking whether a model of a system satisfies its formal specification. It is used to verify the correctness of software, hardware, and other concurrent systems with respect to properties such as deadlock freedom, invariants, request-response properties, etc. Model checking usually compares a finite-state model of a system against its specification, given in some formal notation, which is usually a temporal logic.

In this introductory course, you will learn how to apply the basic logics and algorithms for model checking. We will consider probabilistic and real-time variants of the model checking technique. The methods will be illustrated on a variety of case studies and we will provide an overview of common model checking software, including SPIN and PRISM.
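
To give a flavour of the core idea, the following purely illustrative sketch (a made-up two-counter toy system, not one of the course case studies) explores all reachable states of a finite model and checks a simple invariant:

    # Illustrative sketch only: breadth-first exploration of a finite-state model, checking the
    # invariant "the two counters never both equal 2 at the same time".
    from collections import deque

    def successors(state):
        x, y = state
        return [((x + 1) % 3, y), (x, (y + 1) % 3)]      # either counter may advance modulo 3

    def invariant(state):
        return state != (2, 2)

    def check(initial):
        seen, queue = {initial}, deque([initial])
        while queue:
            state = queue.popleft()
            if not invariant(state):
                return state                              # a reachable counterexample state
            for nxt in successors(state):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return None                                       # the invariant holds in every reachable state

    print(check((0, 0)))                                  # prints (2, 2): the invariant is violated

Temporal logics such as CTL and LTL, covered in the course, generalize such invariant checks to properties of the paths through the state space.
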
  • The reliability and correctness of programs and systems. Foundations of Model Checking. System modeling: Kripke structures, logical representation of concurrent systems.
  • Temporal logics CTL and LTL for the specification of systems. Typical properties of systems and their specifications.
  • Model checking algorithms for temporal logics CTL and LTL. Fairness. Probabilistic model checking. Real-time systems.
  • Examples of model checking tasks: business process analysis, planning, communication, and cryptographic protocols.
  • Overview of model checking tools. SPIN and PRISM.
Basic knowledge of algorithms and data structures, and mathematical logic at the bachelor level.
Natalya Garanina
Senior Research Associate at the Ershov Institute of Informatics Systems. Natalya is a specialist in logic and its applications to the verification of multi-agent systems and distributed algorithms.

Natalya has worked at Chung-Ang University in South Korea and has participated in the EU FP7 project Computal on computable analysis. She teaches courses on software verification and formal methods in software engineering at Novosibirsk State University.

Russian
Finite Countermodel Finding for Infinite State and Parametrized Verification
Course Description
Topics
Prerequisites
Lecturer
Language
In this course, we introduce a conceptually simple but powerful and efficient method for the automated safety verification of infinite-state and parameterized systems. The method models reachability between the states of a system as derivability in classical first-order (FO) logic. Within this modelling approach, to establish the safety of a system, that is, the non-reachability of unsafe states, it is sufficient to show that a particular first-order formula is not provable. In the FCM (finite countermodel) method, this task is delegated to available automated finite (counter)model finding procedures.
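
As a purely hypothetical toy example (not one of the course case studies): take a counter that starts at zero and may only add two at each step, and let the value one be the unsafe state. The FO encoding and a two-element countermodel can then be checked mechanically:

    # Illustrative sketch only.  FO encoding of reachability:
    #   R(z)   and   forall x. R(x) -> R(s(s(x)))
    # Safety holds iff R(s(z)) is NOT derivable; a finite model that satisfies the axioms
    # but falsifies R(s(z)) witnesses this non-derivability.
    domain = [0, 1]
    z = 0
    s = {0: 1, 1: 0}      # interpretation of the successor symbol over a two-element domain
    R = {0}               # interpretation of the reachability predicate

    assert z in R                                               # axiom R(z) holds
    assert all((x not in R) or (s[s[x]] in R) for x in domain)  # axiom forall x. R(x) -> R(s(s(x))) holds
    assert s[z] not in R                                        # R(s(z)) is false in this model
    print("countermodel found: the unsafe state is unreachable")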

We present the theoretical foundations and illustrate the method with numerous applications to infinite-state and parameterized verification tasks selected from the areas of

  • lossy channel systems
  • cache coherence protocols
  • parameterized arrays of finite state automata, weak memory models
  • recreational mathematics, etc.

We further prove the relative completeness of the method with respect to other safety verification methods based on regular invariants, including regular model checking, regular tree model checking, and tree automata completion techniques.
- Introduction

  • The infinite-state problems from recreational mathematics tackled by FCM: Fibonacci words and the MU puzzle
  • Origins of the method and early work in security analysis
  • Generic requirements for FO logic encodings used in the method

- FCM for the classes of infinite state and parameterized problems

  • decision procedure for the safety of lossy channel systems
  • verification of parameterized cache coherence protocols
  • verification of string and term rewriting systems
  • linear systems of automata and mutual exclusion protocols

- Relative completeness

For proving safety, the FCM method is as powerful as any method using regular invariants, e.g.:
  • regular model checking
  • regular tree model checking
  • tree automata completion

- Limitations and challenges

Restrictions caused by regular invariants.
  • Simple tasks for which FCM does not work because they require non-regular invariants.
  • Tasks for which FCM is good in theory but slow in practice.
  • Can we improve FCM? Open problems to explore.

  • Basics of classical first-order logic (formulas and models)
  • Basics of formal languages (regular languages)
Alexei Lisitsa
Head of the Verification Group at the University of Liverpool. His research is concerned with three interrelated areas: methods for automated verification (specifically, for infinite state and parameterized systems), automated reasoning with applications to verification and mathematics, and computer security.

He has published more than 100 research papers. Alexei is a founding chair and a member of the Steering Committee of the VPT workshop series on Verification and Program Transformation, six editions of which have been held annually, affiliated with either the CAV or ETAPS conferences.

Alexei has served as an Invited Expert for the Horizon 2020 project SC^2 «Satisfiability Checking and Symbolic Computation» and is a Principal Investigator for the KTP (Knowledge Transfer Partnership) project on secure data processing and blockchain-based technologies.

Russian or English depending on the audience preferences
Verification Support for Control-Oriented Models with Domain-Specific Languages
Course Description
Topics
Prerequisites
Lecturers
Language
Many application domains still lack logic-based verification support. For example, the development of control software using Matlab-Simulink(R) relies on simulation as the prevailing testing technique. There is currently no widely adopted technique able to certify that control software behaves as expected under all circumstances.

On the other hand, we have witnessed substantial progress over the last 20 years in the development of specialized formalisms, in particular modal logics, that are tailored to a certain application domain. In addition, automatic or interactive theorem provers are becoming more and more available, even for non-standard logics. However, we still have to bridge the gap from the application domain to the world of logic manually.

In this tutorial, you will learn about a tool architecture that supports such bridging and is based on Domain-Specific Languages (DSLs). DSLs are powerful enough to imitate the input format of established engineering tools such as Matlab-Simulink(R) and have turned out to be very flexible for generating proof obligations for logics of your choice.

The tutorial is based on a running example from the control domain, which is first described in Matlab-Simulink(R) and then refined by user-defined DSLs. As the verification backend, we use the theorem prover KeYmaera.
- Repetition of logical foundations
  • propositional, first-order, and dynamic logic

- Control theory based on examples
  • plant model, controller, and implementation as formulated in Matlab-Simulink(R)

- Current quality assurance techniques for control software
  • simulation

- Hybrid automata as realized by the theorem prover KeYmaera
  • tiny examples
  • calculus rules (also for handling ordinary differential equations (ODEs))

- Using Domain-Specific Languages to bridge Matlab-Simulink(R) to hybrid automata
  • defining a DSL syntax with Xtext
  • implementing a generator for proof obligations
  • visualizing results

- The one-tank model as running example

Basic knowledge of (some) logics (e.g., propositional, first-order, or modal):
syntax, semantics, sequent calculus
Thomas Baar and Heide Brandtstädter
Thomas Baar holds a diploma degree in computer science from Humboldt-University Berlin, Germany, and obtained his PhD from the University of Karlsruhe (today: Karlsruhe Institute of Technology - KIT), Germany. Since 2011, Thomas has been a Professor for Software Engineering and Databases at Hochschule für Technik und Wirtschaft (HTW) Berlin.

His research interests include the design of programming and modeling languages, the application of formal methods using tailored specification languages, interactive and automated theorem proving, and safety analysis of hybrid systems.


Heide Brandtstädter received her diploma degree in Electrical Engineering from Technical University Berlin and her PhD degree in Control Engineering from Technical University of Munich. She joined Siemens Large Drives facility in Berlin in 2008 and worked as an expert for rotor dynamics of large electrical machines and torsional analyses of drive trains. Since April 2018 she has been a Professor at Hochschule für Technik und Wirtschaft (HTW) Berlin.

Her research focuses on mathematical modeling of dynamic systems and control theory, especially sliding mode control theory.
English
Modern Construction of Industrial Strength Compilers
Course Description
Topics
Prerequisites
Lecturer
Language
Compiler construction is one of the oldest and most mature fields of computer science. Despite existing since the 1950s, it has been continuously evolving, advancing, and changing. Old classic textbooks are no longer enough to prepare students and practitioners for either the research world or the industrial reality.

This course is meant to strengthen the classic compiler construction / software evolution courses given at the Bachelor and Master levels by highlighting problems encountered in the compilation of fourth-generation and domain-specific languages, legacy modernization, and software reverse engineering and re-engineering, and by revisiting the standard points of interest from a modern standpoint.

There will be examples from modern toolkits and recent academic articles, as well as stories taken from real industrial projects in language development, support, and migration.

During the first lecture, we will have a look at the world around us and pinpoint the places where software languages and compilers exist, emerge, and operate, shifting the focus from traditional compilers to domain-specific languages and language workbenches. We will also give a brief overview of tools that people who are not necessarily system programmers can use to automate tedious and error-prone language processing tasks, like handling configuration files or dealing with XML and JSON.

The second lecture will be rather technical and will go deeper into the actual techniques that have either emerged or have been improved during the last decade, including grammar extraction, convergence, adaptation, bidirectional transformation, testing, deployment, etc.

The last lecture will highlight a few areas that still deserve much attention, and will be mostly interesting for BSc/MSc/PhD students searching for an inspiring research topic.

Outline

• Grammarware as the future of compilers

• Language workbenches and other advanced tools

• Domain-specific and fourth generation languages

• Software language engineering

• Language design with intent

• Modern parsing algorithms: GLL, ALL(*), SGLR, Packrat

• Semi-parsing, error recovery, graceful degradation

• Parsing in a broad sense

• Grammar extraction and recovery

• Grammar quality and grammar smells

• Grammar equivalence and convergence

• Code generation with frameworks

• Bidirectional and coupled transformations

• Semantics and interpretation

• Meta-programming, analysis, and comprehension

• Between interpretation and compilation

• Compiler testing and fuzzing

• Language documentation

• Compiler deployment and integration

• Machine learning in compiler construction

• Open problems of the future


Either a finished BSc in Computer Science or Software Engineering,
or a completed compiler construction / formal language course.
Vadim Zaytsev
Chief Science Officer of Raincode Labs, a Belgian company providing compiler services and specialising in the modernisation of legacy software systems.

Vadim obtained his PhD from the Vrije Universiteit Amsterdam in the field of software language engineering, with a focus on grammar(ware) technology. He has done research in that direction at the Universität Koblenz-Landau in Germany, the Centrum Wiskunde & Informatica in the Netherlands, and the University of Amsterdam in the Netherlands, where he was also extensively involved in teaching many graduate-level courses, supervising student projects, and maintaining and expanding the university's network of industry relations.

Besides hardcore software language engineering with grammar(ware) technology, his interests and research activities extend to topics such as software quality assessment, source code analysis and transformation, modelling, metamodelling and megamodelling, programming paradigms, declarative and functional programming, maintenance and renovation of legacy systems, and others.

He is also actively practicing open science and open research, contributing to a range of open data and open source projects, co-organising and presenting at (mostly academic) events.
English or Russian depending on the audience preferences
Data Science and Solution Development
Course Description
Topics
Prerequisites
Lecturer
Language
In the lecture, we will focus on the most interesting and currently relevant topics:

- The background of the Uma.Tech company;

- Solution Development and how Data Science helps in creating new solutions;

- The popular CRISP-DM methodology;

- We will describe a solution to a real-world problem of predicting user drop-off based on a proposed process model.

At the end of the lecture, you will be given a "home task".
No special prerequisites.
Stanislav Gafarov
Head of the Department for Data Analytics. Stanislav has over 3.5 years of experience in data analytics and the implementation of Machine Learning. He leads a group of specialists at Uma.Tech, a company which focuses on the implementation of Machine Learning models, the development of recommender systems, and Computer Vision.
Russian
Logic Tools for Ontology Engineering
Course Description
Topics
Prerequisites
Lecturer
Language
Every complex subject domain involves a broad range of concepts, which are used by specialists, both formally and informally, to document the domain knowledge. This, in turn, leads to the necessity of documenting the concepts themselves. In this way, the classifications of Biology and Chemistry have been developed, as well as special concept systems in the form of glossaries, taxonomies, and ontologies, which specify definitions of concepts and their relationships.

Technical documentation and device specifications typically come with a terminology, which is included as a glossary. Some concept systems are recognized as reference information sources at the state and professional level, for example, the International Classification of Diseases (ICD).

Formalized concept systems are employed in information systems for data annotation, search, integration, and natural language processing. Currently, there are hundreds of mathematically formalized concept systems (ontologies) available on the Web, which are used for solving information processing tasks in the natural sciences and technical subject domains.

In this course, you will learn about logic methods for ontology engineering. We will introduce the fundamentals of Description Logics, which have been used for over 20 years as a mathematical tool for formulation and analysis of ontologies. In particular, Description Logics underpin the computer languages and standards for publishing ontologies on the Web. In the course, you will learn how to use logic tools for developing ontologies and how to employ automated reasoning for structuring, analysis, and validation of ontologies.

1. Logic Tools of Ontology Modeling
  • Justification of logic as a formalization tool for ontologies. Basics of Description Logics EL and ALC: syntax and semantics. Basic reasoning problems: ontology consistency, concept satisfiability, concept disjointness. The problem of concept classification. Terminologies and expansions.

2. Automated Reasoning in Description Logics.
  • The complexity of reasoning in the logic EL. The complexity of concept satisfiability in ALC with respect to the empty and a non-empty ontology. The relationship between the expressiveness and complexity of a logic. A consequence-based calculus for EL. Tableaux for ALC.
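
As a purely hypothetical taste of consequence-based reasoning (far simpler than the actual EL calculus, since it handles told atomic subsumptions only), the following sketch saturates an axiom set to a fixpoint and then answers subsumption queries:

    # Illustrative sketch only: full EL also has conjunctions and existential restrictions,
    # which are handled by dedicated inference rules.
    axioms = {("Cat", "Mammal"), ("Mammal", "Animal"), ("Dog", "Mammal")}

    def subsumers(concept):
        found = {concept}
        changed = True
        while changed:                       # saturate: apply axioms until nothing new is derived
            changed = False
            for sub, sup in axioms:
                if sub in found and sup not in found:
                    found.add(sup)
                    changed = True
        return found

    assert "Animal" in subsumers("Cat")      # Cat is (transitively) subsumed by Animal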

3. Automated Reasoning for Explaining Ontology Properties
  • The problem of inconsistency explanation. Minimal justifications and maximal counterexamples for an inference. The number of minimal justifications for an inference in the logic EL, the complexity of finding a justification of a given size.
  • Glass-box approach to computing justifications by modifying the reasoning procedure. Reducing computation of minimal justifications in EL to SAT (the satisfiability problem for boolean formulas). The search for minimal justifications with a modification of tableaux for ALC.
  • Black-box approach to computing justifications without modifying the reasoning procedure.

4. Computing concept definitions in ontologies
  • Explicit and implicit concept definitions. The number and the size of pairwise incomparable shortest concept definitions in the logics EL and ALC. Relationship between definitions and minimal justifications.
  • Extracting concept definitions from proofs. Computing concept definitions with a modified consequence-based reasoning procedure for EL. Rewriting ontologies with explicit definitions.
Basic knowledge of any formal logic (syntax and semantics).
Denis Ponomaryov
Senior Researcher at the Ershov Institute of Informatics Systems and Senior Lecturer at Novosibirsk State University. He has worked as a Visiting Professor and Senior Research Associate at the Department of Computer Science at Ryerson University, Toronto, and at the Institute of Artificial Intelligence at the University of Ulm, Germany.

Denis serves as a reviewer for the ACM journal «Transactions on Computational Logic» and as a program committee member for top-level conferences on Artificial Intelligence, including AAAI and IJCAI, as well as for the International Description Logic Workshop and related venues. His primary research interests include Knowledge Representation, Automated Reasoning, Ontology Engineering, and Reasoning About Actions.

English or Russian depending on the audience preferences
Ontology-Based Data Access
Course Description
Topics
Prerequisites
Lecturer
Language
Ontology-based data access (OBDA) is regarded as a key ingredient of the new generation of information systems. In the OBDA paradigm, an ontology defines a high-level global schema over (already existing) data sources and provides a vocabulary for user queries. An OBDA system rewrites such queries, using the ontology, into the vocabulary of the data sources and then delegates the actual query evaluation to a suitable query answering system, such as a relational database management system.
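
As a purely hypothetical miniature of the rewriting step (not the Ontop system or the OWL 2 QL algorithm covered in the course), a single subclass axiom already shows how a user query is expanded and then delegated to SQL:

    # Illustrative sketch only: one toy axiom, "Professor is a subclass of Teacher".
    subclass_of = {"Professor": "Teacher"}

    def rewrite(concept):
        """All concepts whose instances answer a query for `concept` under the toy ontology."""
        return [concept] + [sub for sub, sup in subclass_of.items() if sup == concept]

    def to_sql(concepts):
        # assume each concept is mapped to a one-column table of individual names
        return " UNION ".join("SELECT name FROM " + c for c in concepts)

    print(to_sql(rewrite("Teacher")))
    # prints: SELECT name FROM Teacher UNION SELECT name FROM Professor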

This course will start with an overview of the ideas and technologies that led to OBDA, such as RDF graphs, SPARQL queries, linked data, knowledge representation, and description logics. Then we will focus on OBDA with OWL 2 QL, one of the three profiles of the W3C-standard Web Ontology Language OWL 2, and relational databases. We will finish with a hands-on session with the OBDA system Ontop.
  • Graph databases, RDF graphs and SPARQL queries.
  • Web ontology language and knowledge bases
  • Relational Databases as virtual RDF graphs
  • Ontology-based data access via query reformulation in OWL 2 QL
  • Two-dimensional classification of ontology-mediated queries
  • Complexity of answering conjunctive queries with respect to ontologies
  • Succinctness of conjunctive query rewritings in OWL 2 QL
  • Ontology-based data access with Ontop
There are no hard prerequisites for the course. However, students with a background in formal logic will feel more comfortable than those without.

Also, basic experience with SQL is desirable for the hands-on session, where it will be used for mapping a relational database onto the ontology vocabulary.
Stanislav Kikot
Stanislav Kikot is a specialist in logic and its applications to Data and Knowledge Engineering. He graduated from Moscow State University and defended his PhD thesis on correspondence theory in modal logic.

After an internship at Yandex, he worked in London on fundamental problems of Ontology-Based Data Access. Currently, he is a Research Associate in the Database Theory group at the University of Oxford.

English or Russian depending on the audience preferences
Course Description
Topics
Prerequisites
Lecturer
Language
In this course, you will learn about the principles, methods, and basic problems of automated reasoning. The development of logic methods and automated reasoning tools began at the very start of the computer era. For instance, Lisp, one of the first high-level programming languages, can be viewed as an implementation of the lambda calculus. Later history witnessed the rise and fall of applications of logic in Information Technologies. Nowadays, the topic is at the top again, due to applications in program, protocol, and system verification, knowledge analytics, and data processing. Numerous automated reasoning tools for solving these tasks have appeared recently.

  • An example of a logic calculus: syntax and semantics of predicate logic. The proof search problem (a toy sketch follows this list).
  • Application of automated reasoning: the case of software verification.
  • Type Theory for proof search.
  • Outlook: applications of Machine Learning for efficient proof search.
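
To make the proof search problem concrete, here is a purely illustrative sketch (not course material) of backtracking satisfiability search for propositional clauses, the core of DPLL-style SAT solving without unit propagation or other refinements:

    # Illustrative sketch only.  Clauses are tuples of non-zero integers:
    # k means "variable k is true", -k means "variable k is false".
    def sat(clauses, assignment=()):
        clauses = [c for c in clauses if not any(l in assignment for l in c)]      # drop satisfied clauses
        clauses = [tuple(l for l in c if -l not in assignment) for c in clauses]   # remove falsified literals
        if not clauses:
            return assignment                    # every clause is satisfied: a model is found
        if any(len(c) == 0 for c in clauses):
            return None                          # an empty clause: this branch is contradictory
        lit = clauses[0][0]                      # branch on some yet unassigned literal
        return sat(clauses, assignment + (lit,)) or sat(clauses, assignment + (-lit,))

    # (p or q) and (not p or q) and (not q or r)
    print(sat([(1, 2), (-1, 2), (-2, 3)]))       # prints a satisfying assignment, e.g. (1, 2, 3)
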
Basics of discrete mathematics: sets, relations, proof by induction, Boolean algebra, logic predicates.
Understanding of the concept of an imperative program.
Dmitry Vlasov
Senior lecturer at Novosibirsk State University. Dmitry teaches Discrete Mathematics, Mathematical Logic, and the Theory of Programming.

Dmitry is the inventor and developer of the Russel language, which provides reasoning support for a broad class of formal deductive systems.
Russian, with slides in English
Course Description
Topics
Prerequisites
Lecturer
Language
This course is on Applied Automated Reasoning. We will present recent developments in the application of automated reasoning (AR) methods and techniques, such as automated theorem (dis)proving and SAT solving, in Experimental Mathematics.

We will consider applications of AR in computational topology (knot theory), combinatorial number theory, and combinatorial group theory.
1. In the first part of the course we will present a recent application of automated reasoning to (un)knot detection, which is a classical problem in computational topology.


- We will show that the problem can be equivalently reformulated as a task of proving/disproving a first-order formula, which specifies the properties of a quandle, an algebraic invariant of the knot.

- We will adopt a two-pronged experimental approach, using a theorem prover to try to establish a positive result (i.e., that a knot is the unknot), whilst simultaneously using a model finder to try to establish a negative result (i.e., that the knot is not the unknot). The theorem proving approach utilises equational reasoning, whilst the model finder searches for a minimal-size counter-model.

- We will further show that by using explicit knowledge of existing quandle families one can reformulate the disproving task in terms of quandle colourings and reduce it to a SAT solving problem, which in turn can be tackled very efficiently by off-the-shelf SAT solvers. This leads to the empirically most efficient non-trivial knot recognition procedure.

We will further discuss the use of different algebraic structures (such as knot semigroups) or topological objects (tangles) in automated reasoning about knots, trading the efficiency of the proofs for their transparency.


2. In the second part of the course, we will give an overview of results related to the Erdős Discrepancy Problem (EDP).

In the 1930s, Paul Erdős conjectured that, for any positive integer C, in any infinite sequence (x_n) consisting of the numbers 1 and -1 there exists a subsequence x_d, x_{2d}, x_{3d}, ..., x_{kd}, for some positive integers k and d, such that the absolute value of the sum of the elements of the subsequence is greater than C. The conjecture has been referred to as one of the major open problems in combinatorial number theory and discrepancy theory.

For the particular case of C=1, a human proof of the conjecture existed; for C=2, the status of the conjecture remained open until March 2014. We show that by encoding the problem into Boolean satisfiability and applying a state-of-the-art SAT solver, one can prove the conjecture for C=2 using a subsequence of length 1160. These results appeared in 2014-2015. Later, in autumn 2015, the general case of the EDP was solved by Terence Tao, without using automated reasoning or any other computational approach.
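
As a purely illustrative aid (not the actual 2014 encoding), the quantity constrained by the SAT encoding can be computed directly for any finite +/-1 sequence: for every d and k, the partial sum x_d + x_{2d} + ... + x_{kd} must stay within [-C, C].

    # Illustrative sketch only: the discrepancy of a finite +/-1 sequence.
    def discrepancy(xs):
        n, worst = len(xs), 0
        for d in range(1, n + 1):
            s = 0
            for k in range(d, n + 1, d):         # positions d, 2d, 3d, ...
                s += xs[k - 1]
                worst = max(worst, abs(s))
        return worst

    print(discrepancy([+1, -1, +1, -1, -1, +1]))   # prints 2 for this hand-made sequence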

3. In the third part of the course, we will present applications of automated theorem proving to the exploration of the open Andrews-Curtis conjecture in combinatorial group theory. The approach employs generic proof search to look for the simplifications implied by the conjecture in a very efficient way, outperforming all known specialized algorithms.
The course is suitable for people with some mathematical background, including familiarity with abstract algebra (at least the definitions of groups and semigroups).

Less classical concepts (quandles, tangles, etc.) will be defined during the course and should be accessible to a mathematically curious audience.

Familiarity with automated theorem proving and SAT solving is useful but not crucial.
Alexei Lisitsa
Head of the Verification Group at the University of Liverpool. His research is concerned with three interrelated areas: methods for automated verification (specifically, for infinite state and parameterized systems), automated reasoning with applications to verification and mathematics, and computer security.

He has published more than 100 research papers. Alexei is a founding chair and a member of the Steering Committee of the VPT workshop series on Verification and Program Transformation, six editions of which have been held annually, affiliated with either the CAV or ETAPS conferences.

Alexei has served as an Invited Expert for the Horizon 2020 project SC^2 «Satisfiability Checking and Symbolic Computation» and is a Principal Investigator for the KTP (Knowledge Transfer Partnership) project on secure data processing and blockchain-based technologies.

English or Russian depending on the audience preferences
Course Description
Topics
Prerequisites
Lecturer
Language
In this course, you will learn to apply Deep Learning techniques to a range of Computer Vision tasks through a series of hands-on exercises. You will work with widely used Deep Learning tools, frameworks, and workflows to train and deploy neural network models on a fully configured, GPU-accelerated workstation in the cloud.

After a quick introduction to Deep Learning, you will learn how to build and deploy Deep Learning applications for image classification and object detection, and how to tune neural networks to improve their accuracy and performance. Finally, you will implement this workflow in a project which combines Machine Learning and conventional Software Development techniques. After the course, you will be provided with access to additional resources for creating Deep Learning applications on your own.

Upon successful completion of this course, you will obtain an NVIDIA DLI Certification.

  • Implementation of common Deep Learning workflows, such as image classification and object detection
  • Experiments with data, training parameters, network structure, and other strategies to increase performance and capability of neural networks
  • Integration and deployment of neural networks in your applications to start solving complex real-world problems

Familiarity with programming basics such as functions and variables
Nick Tolstokulakov
A specialist in highly loaded distributed systems and Deep Learning. He is a Senior Lecturer at Novosibirsk State University and an ambassador of the NVIDIA Deep Learning Institute at NSU.

Nick has extensive experience working on industrial projects using these technologies, for example, the smart parking system http://green-pay.ru
English or Russian depending on the audience preferences
Course Description
Topics
Prerequisites
Lecturer
Language
This course presents a unique Machine Learning method, which has been developed in the Novosibirsk Scientific Center and has proved useful in solving numerous practical tasks. In the first part of the course, we will introduce the basics of the logic-and-probabilistic method of Knowledge Discovery and compare it with other methods of Data Analysis. We will consider the Semantic Probabilistic Inference algorithm, which is at the core of the method. Finally, we will give examples of real-world problems that are solved with this approach.

The second part will be devoted to the application of the method to Adaptive Control. We will first introduce you to the fundamentals of Reinforcement Learning and then give examples of logic-and-probabilistic models of self-learning control systems based on neurophysiological theories of brain activity. In the concluding part, we will address the control problem for hyper-redundant and modular robots.
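
As a generic, hypothetical illustration of the Reinforcement Learning fundamentals mentioned above (tabular Q-learning on a made-up five-state corridor; it is unrelated to the lecturer's logic-and-probabilistic models):

    # Illustrative sketch only: states 0..4 form a corridor; action 0 moves left, action 1 moves right;
    # reaching state 4 yields reward 1 and ends the episode.
    import random

    n_states, n_actions = 5, 2
    Q = [[0.0] * n_actions for _ in range(n_states)]
    alpha, gamma, epsilon = 0.5, 0.9, 0.3

    def step(state, action):
        nxt = max(0, min(n_states - 1, state + (1 if action == 1 else -1)))
        return nxt, (1.0 if nxt == n_states - 1 else 0.0), nxt == n_states - 1

    for _ in range(500):                                  # training episodes
        state, done = 0, False
        while not done:
            if random.random() < epsilon:                 # explore
                action = random.randrange(n_actions)
            else:                                         # exploit the current value estimates
                action = max(range(n_actions), key=lambda a: Q[state][a])
            nxt, reward, done = step(state, action)
            Q[state][action] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][action])
            state = nxt

    print([max(range(n_actions), key=lambda a: Q[s][a]) for s in range(n_states - 1)])
    # learned greedy policy; typically [1, 1, 1, 1], i.e. always move right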

  • Introduction to the logic-and-probabilistic method of Data and Knowledge Mining
  • Use cases for Knowledge Mining and prediction for various subject domains
  • Hierarchical control systems based on the modern neurophysiological theories of brain activity
  • The control problem in hyper-redundant and modular robotics

Basic knowledge of logic and probability (logic predicates, boolean operators, the notion of probability).
Alexander Demin
Alexander is a specialist in Data Analytics and Adaptive Control. He is a Senior Researcher at Ershov Institute of Informatics Systems and has close collaboration with the Laboratory of Computation Theory and Applied Logic at Sobolev Institute of Mathematics.

Alexander has an extensive experience in the development and application of Data Mining and predictive systems for various domains including Medicine, Finance, and Bioinformatics. His current research is concerned with the development of intelligent self-learning robotic systems.

Russian