Randomness is a widely applicable tool in theoretical computer science. One area where it has been immensely useful is information theory, both classical and quantum. In this talk, we will first see a randomness reduction method in quantum information processing (also known as derandomization). Quantum logic gates (and hence circuits) are unitary operators, and to prove existential results via the probabilistic method one needs to sample these unitary operators at random. The distribution of a unitary operator chosen uniformly at random from the unitary group is described by the Haar measure, and we call such operators Haar random unitaries. Sampling and implementing Haar random unitaries is provably exponentially hard. Hence, we consider a finite (pseudorandom) ensemble of unitaries that mimics Haar random unitaries approximately, up to the first t moments (of polynomials in the entries of the unitary). Such a pseudorandom ensemble is called an approximate unitary t-design. Brandao et al. (2012) and Sen (2019) have shown that approximate unitary t-designs can be constructed efficiently when t = O(poly(log n)). We first describe a known method of Low et al. (2009) that uses these designs analogously to t-wise independent random variables and yields large deviation bounds for polynomials in the entries of such unitaries. This allows Haar random unitaries to be replaced by designs, a process we refer to as derandomization.
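As an illustrative aside (not part of the abstract itself), the standard numerical recipe for sampling a Haar random unitary is to QR-decompose a complex Ginibre matrix and fix the phases of the diagonal of R. The sketch below also checks the first-moment condition E[|U_{ij}|^2] = 1/n, which any unitary 1-design must reproduce:

```python
import numpy as np

def haar_unitary(n, rng):
    """Sample an n x n unitary from the Haar measure on U(n).

    Standard recipe: QR-decompose a complex Ginibre matrix and
    correct the phases of R's diagonal so the result is Haar.
    """
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))  # multiply column j of q by the phase of r[j, j]

rng = np.random.default_rng(0)
U = haar_unitary(4, rng)
assert np.allclose(U @ U.conj().T, np.eye(4))  # unitarity

# First-moment check: E[|U_00|^2] = 1/n under Haar (n = 4 here),
# a moment that any approximate unitary 1-design must mimic.
samples = [abs(haar_unitary(4, rng)[0, 0]) ** 2 for _ in range(20000)]
print(np.mean(samples))  # close to 0.25
```

The phase correction on R's diagonal is what makes the QR output exactly Haar-distributed rather than merely unitary.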
We consider the task of sending classical information over quantum channels. The maximum amount of classical information that can be reliably transmitted over a quantum channel is known as the classical (Holevo) capacity of the channel. This is the first generalization of classical information theory to the quantum setting. In a major breakthrough, Hastings (2008) proved for the first time that there exist quantum channels exhibiting superadditive Holevo capacity; the opposite had been conjectured (Holevo, 2000), and the conjecture had in fact been verified for a wide variety of channels. Superadditivity of the Holevo capacity implies that quantum channels may not always admit a single-letter capacity formula, making it extremely difficult to compute the capacity and to construct "optimal" error correcting codes. Hastings used channels represented by Haar random unitaries (U_{n \times n}). Constructing these channels (even approximately) requires resources exponential in the number of qubits (n^2 \log n random bits, where n = 2^{\# qubits}). Hence, the hunt for efficient channels with superadditive Holevo capacity has been a long-standing open problem. We take a step in this direction by aiming to derandomize Hastings' counterexample using the aforementioned technique of Low, first developing a novel stratified analysis of the n-dimensional unit sphere. In particular, we consider channels sampled uniformly from approximate unitary n^{2/3}-designs and show that these channels have superadditive Holevo capacity with exponentially high probability. Although the value of the design parameter t in our counterexample (to additivity of the Holevo capacity) is n^{2/3}, we still save substantially in random bits (O(n^{2/3} \log n) for implementation), leading to a partial derandomization. We use tools from geometric functional analysis to carry out a novel stratified analysis of the unit n-sphere (\mathcal{S}^n) and prove a measure concentration theorem for random subspaces of \mathcal{S}^n. This yields a discretized version of the so-called Dvoretzky theorem (not to be discussed in the talk). An advantage of our main result is that it can be used directly to unify previous analyses of this problem.
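A minimal numerical illustration of the measure concentration phenomenon on the unit sphere that underlies such stratified analyses (a generic sketch, not the speaker's construction): a uniformly random point on the unit sphere in R^n has each coordinate concentrated around 0 at scale 1/sqrt(n), so any fixed linear functional is nearly constant for large n.

```python
import numpy as np

def uniform_sphere(n, num, rng):
    """num uniform samples on the unit sphere in R^n,
    obtained by normalizing standard Gaussian vectors."""
    x = rng.standard_normal((num, n))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

rng = np.random.default_rng(1)
for n in (10, 100, 1000):
    pts = uniform_sphere(n, 5000, rng)
    # The first coordinate (a 1-Lipschitz function on the sphere)
    # concentrates around 0 at scale 1/sqrt(n) as n grows.
    print(n, float(np.std(pts[:, 0])))
```

The printed standard deviations shrink like 1/sqrt(n), the simplest instance of the concentration-of-measure behavior on high-dimensional spheres.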
We finally discuss the problem of universally simulating a classical Multiple Access Channel (MAC) in the one-shot setting with independent inputs and unlimited shared randomness. This is the second information theoretic problem where randomness helps to save communication cost in a network setting. We obtain an almost tight one-shot universal rate region for the MAC, which we prove is tight asymptotically. The achievability is based on an elegant rejection sampling technique, and the converse is shown using a novel information-spectrum inspired technique. The one-shot rate region is characterized in terms of the smoothed max-mutual information.
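For readers unfamiliar with the primitive, basic rejection sampling (shown here on a toy classical distribution; the protocol in the talk is more refined) simulates draws from a target distribution P using proposals from a distribution Q, accepting a proposal x with probability p(x)/(M q(x)) whenever p(x) <= M q(x) for all x:

```python
import random

def rejection_sample(p, q_sample, q, M, rng):
    """Draw one sample from the target pmf p using proposal pmf q,
    assuming p(x) <= M * q(x) for every x."""
    while True:
        x = q_sample(rng)
        if rng.random() < p(x) / (M * q(x)):
            return x  # accept
        # otherwise reject and redraw

# Toy target: biased bit P = {0: 0.8, 1: 0.2}; proposal: a fair bit.
p = {0: 0.8, 1: 0.2}.get
q = lambda x: 0.5
q_sample = lambda rng: rng.randrange(2)
M = 1.6  # max_x p(x) / q(x) = 0.8 / 0.5

rng = random.Random(0)
draws = [rejection_sample(p, q_sample, q, M, rng) for _ in range(20000)]
print(sum(draws) / len(draws))  # empirical P(1), close to 0.2
```

The expected number of proposals per accepted sample is M, which is why simulation schemes aim to keep M small; in channel-simulation settings this cost translates into communication.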
Brief bio:
Speaker: Aditya Nema, Assistant Prof. (on Contract), IIT Gandhinagar.
About: I did my Master's and Ph.D. at STCS, TIFR, Mumbai in quantum information theory, on derandomizing Haar random unitaries via approximate unitary t-designs in search of computationally efficient protocols. This involved derandomizing the seminal result of Hastings that there exist quantum channels with superadditive classical capacity, proving a measure concentration result for the decoupling theorem, and constructing efficient benchmarking protocols for a quantum logic gate.
After my PhD, I was a Specially Appointed Assistant Prof. at Nagoya University, Japan, continuing research in quantum information (Shannon) theory and resource theory. I was then a postdoc at the Institute of Quantum Information, Department of Physics, RWTH Aachen University.
My research interests are in classical and quantum information theory, quantum computing and applied probability.
We will talk about plethysm and its relation to the restriction problem in algebraic combinatorics. The symmetric group S_n embeds naturally inside GL_n(C). One can take an irreducible polynomial representation of degree d of GL_n(C), restrict it to S_n, and ask for a positive combinatorial formula for the restriction coefficients. This is an old problem in algebraic combinatorics. We will discuss its history and recent developments towards its solution.
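A worked example of computing restriction coefficients by brute-force character theory in the smallest nontrivial case (illustration only; the open problem asks for a positive combinatorial formula, not this exhaustive computation): restrict the GL_3(C) irrep Sym^2(C^3), whose character at a matrix A is (tr(A)^2 + tr(A^2))/2, to S_3 embedded as permutation matrices.

```python
import itertools
import numpy as np

def perm_matrix(p):
    """Permutation matrix sending basis vector e_i to e_{p[i]}."""
    m = np.zeros((len(p), len(p)))
    for i, j in enumerate(p):
        m[j, i] = 1.0
    return m

perms = [perm_matrix(p) for p in itertools.permutations(range(3))]

def chi_sym2(a):
    # Character of the GL_3 irrep Sym^2(C^3) at the matrix a.
    return (np.trace(a) ** 2 + np.trace(a @ a)) / 2

# Irreducible characters of S_3, evaluated on permutation matrices.
chars = {
    "trivial": lambda P: 1.0,
    "sign": lambda P: np.linalg.det(P),       # det of a permutation matrix is its sign
    "standard": lambda P: np.trace(P) - 1.0,  # 2-dimensional standard rep
}

# Restriction coefficient = inner product of chi_{Sym^2}|_{S_3} with chi_irr.
mults = {}
for name, ch in chars.items():
    mults[name] = round(sum(chi_sym2(P) * ch(P) for P in perms) / len(perms))
    print(name, mults[name])
# Sym^2(C^3)|_{S_3} = 2*trivial + 2*standard (dimension check: 2*1 + 2*2 = 6).
```

Averaging the product of characters over the group is exactly the character inner product, so the printed integers are the restriction coefficients for this representation.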
Mathematics Seminar | Room 326
Nov 03 14:00-15:00
Surajit Kalita | Astronomical Observatory, University of Warsaw, Poland
Fast Radio Bursts (FRBs) are millisecond radio transients with high dispersion measures, making them powerful tracers of ionized matter across cosmological distances. In this talk, I present two complementary approaches, Bayesian analysis and machine learning, applied to a set of localized FRBs to rigorously test the consistency of the $\Lambda$CDM model at late cosmic epochs. Our results reveal a redshift-dependent variation in the inferred Hubble constant, a behavior that contradicts a core postulate of standard cosmology. I will further show that this discrepancy can be resolved within alternative cosmological models. These findings suggest a fundamental inadequacy in the standard cosmological framework and necessitate a deeper revision of the theoretical underpinnings of cosmology to resolve the Hubble tension.
Scattering amplitudes play a central role in understanding fundamental interactions, offering a unifying framework for both quantum field theory and quantum gravity. In this seminar, I will present my research on perturbative and non-perturbative aspects of scattering amplitudes, focusing on two main directions. On the perturbative side, I will show how soft graviton theorems, which capture universal features of the gravitational S-matrix, determine the low-frequency behavior of gravitational radiation in the classical limit. I will outline how these results enable the derivation of waveforms for general classical scattering processes and explore their implications for gravitational tail memory effects, as well as their potential observational signatures in current gravitational-wave experiments. On the non-perturbative side, I will describe how S-matrix, form factor, and spectral density bootstrap techniques can be used to constrain renormalization group flows, providing new tools to study strongly coupled quantum field theories through principles such as unitarity and analyticity. This framework offers insights into the structure of renormalization group flows in both Lorentz-invariant and Lorentz-violating systems, particularly those connecting conformal field theories in the ultraviolet to gapped phases in the infrared.
Physics Seminar | Alladi Ramakrishnan Hall
Nov 05 14:00-15:00
Supratim Das Bakshi | Argonne National Laboratory, Chicago, IL, USA
Many of today's significant challenges, from organizing massive datasets to identifying similar points in large datasets or calculating the similarity between DNA sequences, rely on solving fundamental computational problems. As datasets grow in size, a common strategy is to relax the requirement for an exact solution and instead use an approximation algorithm to gain a significant speedup. My research addresses this from a theoretical standpoint. By studying the hardness of approximation, my work helps establish the fundamental limits of this speed–accuracy trade-off, clarifying for key problems when we cannot significantly improve upon the performance of known algorithms, even when allowing approximate solutions.
In this talk, I will present my contributions to this area through two key research programs.
First, I will address problems traditionally considered solvable in polynomial time, such as computing the closest pair of points in large-scale data or finding a set cover of fixed size, a task that routinely arises in computational proteomics and peptide identification. My work in fine-grained and fixed-parameter complexity shows that, for some of these core problems, no approximation algorithm can offer a significant asymptotic speedup over exact methods.
Second, I will turn to NP-hard geometric optimization problems such as clustering and Steiner tree, which are central to machine learning, logistics, and network design. For these problems, approximation algorithms are unavoidable. My work advances the understanding of the ultimate limits of approximation of these problems by proving strong inapproximability results.
Zoom Link:
Join Zoom Meeting
https://zoom.us/j/99598370034
Meeting ID: 995 9837 0034
Passcode: 941422
Let $A$ and $B$ be two Heisenberg translations in $\mathrm{Sp}(2,1)$ or $\mathrm{SU}(2,1)$ with distinct fixed points. Here, $\mathrm{Sp}(2,1)$ and $\mathrm{SU}(2,1)$ act isometrically on the quaternionic and complex hyperbolic spaces, respectively. We provide sufficient conditions that guarantee that the subgroup $\langle A, B \rangle$ is discrete and free, using Klein's combination theorem. This is joint work with Prof. I. D. Platis, University of Patras, Greece.
Motivic cohomology is a cohomology theory that can be defined internally within Grothendieck's category of motives. Voevodsky developed this theory for smooth varieties, demonstrating its profound connections to algebraic cycles and algebraic K-theory. However, its behaviour in mixed characteristic remains less well understood. Building on recent advancements by Bachmann, Elmanto, Morrow, and Bouis, in joint work with Bouis, we demonstrate a purity result over deeply ramified bases. I will also discuss an application of this result in p-adic Hodge theory. The talk is intended to be accessible to non-experts.
Mathematics Colloquium | Alladi Ramakrishnan Hall
Nov 07 10:00-11:30
Saket+
Graph Theory Seminar
TCS Seminar | Alladi Ramakrishnan Hall
Nov 07 14:00-15:00
Satyajit Seth | Physical Research Laboratory, Ahmedabad
Classical Lp spaces, which form the foundation of modern analysis, play a central role in harmonic analysis, probability theory, and partial differential equations (PDEs). In recent decades, their non-commutative analogues arising from operator algebras have gained increasing prominence due to their applications in quantum probability, non-commutative harmonic analysis, and quantum information theory.
Gravitational waves are a unique probe of the early Universe, as the Universe is transparent to gravitational radiation right back to the beginning. Here, I will summarise some of the different primordial events that could lead to a detectable stochastic gravitational wave background. Any such background would shed light on what (if anything) lies beyond the Standard Model, sometimes at remarkably high scales. I will also briefly overview the physics behind stochastic gravitational wave backgrounds before delving deep into some of the major primordial events that can source such a background.
Many new hadronic states have been discovered over the past two decades that do not fit into the traditional quark model. A prominent example of such an exotic hadron is the doubly charmed tetraquark $T_{cc}(3875)^+$. These discoveries call for first-principles calculations of their masses, and the only known method to achieve this is lattice QCD. In this talk, I will provide an overview of how a state-of-the-art spectroscopy calculation in lattice QCD works, and discuss some difficulties that arise and how we overcome them. An important and numerically challenging part of such a calculation is computing two-point functions of operators that carry the relevant quantum numbers. From these two-point functions we extract the low-lying spectrum. It is important to use a variety of operators to obtain an accurate spectrum: bilocal scattering operators that resemble weakly bound molecular states and local operators for more deeply bound states. Combining these two types of operators is computationally expensive; therefore, we developed a position-space sampling method that addresses this issue.
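As a schematic illustration of the last point (synthetic data only, not lattice results): the low-lying spectrum is read off from the large-time decay of the two-point function, commonly via the effective mass m_eff(t) = log(C(t)/C(t+1)), which plateaus at the ground-state mass once excited-state contamination has decayed.

```python
import numpy as np

# Synthetic two-point function: a ground state of mass m0 plus a
# heavier excited-state contribution (toy numbers, lattice units).
m0, m1 = 0.5, 1.2
t = np.arange(0, 20)
C = 1.0 * np.exp(-m0 * t) + 0.3 * np.exp(-m1 * t)

# Effective mass: m_eff(t) = log(C(t) / C(t+1)). At small t it is
# biased upward by the excited state; at large t it plateaus at m0.
m_eff = np.log(C[:-1] / C[1:])
print(m_eff[0], m_eff[-1])  # starts above 0.5, approaches m0 = 0.5
```

In an actual lattice calculation C(t) is a Monte Carlo estimate from a matrix of interpolating operators, and a variational (GEVP) analysis replaces this single-exponential picture, but the plateau logic is the same.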