PhD Fest
19 May 2026
14:30
A217
Jan Jedelský: Transductions between sparse graph classes
The first-order (FO) model-checking problem asks, given a graph G and an FO formula 𝜙, whether G is a model of 𝜙. In FO logic one can express, for example, whether G contains a clique of size k, a distance-independent set of size k, or a subgraph isomorphic to a fixed graph H; and even whether the tree-depth of G is at most k.
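As a concrete illustration (not part of the abstract itself), the smallest case of the clique property, the existence of a triangle (a clique of size k = 3), is expressed by the FO sentence

```latex
\exists x_1 \, \exists x_2 \, \exists x_3 \;
\bigl( x_1 \neq x_2 \wedge x_1 \neq x_3 \wedge x_2 \neq x_3
\wedge E(x_1, x_2) \wedge E(x_1, x_3) \wedge E(x_2, x_3) \bigr),
```

where E(x, y) denotes the edge relation of G. A clique of size k is expressed analogously with k quantified variables.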
An FO transduction is a notion of non-deterministic reduction for this problem. Informally speaking, a transduction from a graph class 𝒞 to a graph class 𝒟 witnesses that every graph G∈𝒟 can be obtained from some coloring of some graph H∈𝒞 using FO logic. Hence, solving the FO model-checking problem restricted to inputs from 𝒞 is not easier than solving the same problem restricted to inputs from 𝒟.
In this talk, we characterize transductions between sparse graph classes in purely combinatorial terms. As a corollary of our characterization, we prove that for every k∈ℕ, the class of all graphs of tree-width ≤k+1 is not transducible from the class of all graphs of tree-width ≤k.
Anna Řechtáčková: Code Quality Defects and How to Find Them Automatically
The ability to write code is not the same as the ability to write it well. I study code quality defects found in code written by novice programmers and explore ways to detect them automatically, whether via static analysis or large language models.
In the talk, I will first define code quality defects and present our catalog of over 100 specific defects. I will then share results from a survey of teachers at FI and abroad on how they perceive the severity of these defects. Next, I will introduce EduLint, an educational linter I have developed, and describe its integration into the CS1 course IB111 taught at FI MU. I will outline our research into detecting poor identifiers using large language models and compare their performance with that of human evaluators. Finally, I will describe our efforts to detect duplicate code using both static analysis and LLM-based methods.
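To give a flavor of what a code quality defect looks like, here is a hypothetical Python example of the kind a linter might flag in novice code (it is not taken from the catalog or from EduLint's actual checks): an uninformative identifier combined with a redundant boolean construction, followed by a clean rewrite.

```python
# Hypothetical novice code with two common defects:
# - 'f' and 'a' are uninformative identifiers
# - comparing a boolean expression to True, then returning
#   True/False from an if/else, is redundant
def f(a):
    if (a % 2 == 0) == True:
        return True
    else:
        return False


# The same logic with a descriptive name and the redundancy removed:
def is_even(number):
    return number % 2 == 0
```

Both functions behave identically; the difference lies purely in readability, which is exactly the kind of issue code quality tooling aims to surface.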