# Institute Seminar Days 2020-2021

##### SCHEDULE

Two crucial aspects of modern physical theories are symmetries and the presence of an infinite number of degrees of freedom. While the former helps us understand theories at a deeper level, the latter gives rise to difficulties such as divergences. An important way to deal with these is to define the theories on a lattice instead of the space-time continuum. However, in traditional lattice theories certain important symmetries are lost and are recovered only in the continuum limit. We explore a new covariant (i.e. manifestly symmetric) approach in which all such symmetries can be preserved. As a result, a much simpler description of lattice theories becomes possible. The construction relies on the existence of a logarithmic discrete derivative that satisfies the Leibniz rule. The resulting discrete calculus behaves exactly like ordinary calculus, with which it merges in the continuum limit.

Neutron stars are the extremely dense remnants of dead stars. These objects emit electromagnetic waves due to their strong magnetic fields and fast rotation, and are sometimes observed as "Pulsars". Due to their extreme density, the gravitational fields around neutron stars are so strong that general relativistic effects become significant; hence pulsars can be used as laboratories to test various theories of gravity. In this talk, I will describe two such uses of pulsars. First, I will explain how a pulsar in a binary system with a black hole can help establish or rule out some alternative theories of gravity. Second, I will describe how a number of pulsars can be used to detect low-frequency gravitational waves through "Pulsar Timing Array" experiments. I will also mention our group's contribution to this international experiment, and its future.

Let $\lambda$, $\mu$ and $\nu$ be integer partitions with at most $n$ parts each. The Littlewood-Richardson (LR) coefficient $c_{\lambda,\mu}^{\nu}$ is the multiplicity of the irreducible representation $V(\nu)$ in the decomposition of the tensor product $V(\lambda)\otimes V(\mu)$ of irreducible polynomial representations of $GL_n$. For each permutation $w$ in $S_n$, the $w$-refined LR coefficient $c_{\lambda,\mu}^{\nu}(w)$ is the multiplicity of $V(\nu)$ in the decomposition of the so-called Kostant-Kumar submodule $K(\lambda,w,\mu)$ of the tensor product. The saturation problem asks whether $c_{\lambda,\mu}^{\nu}(w) >0$ given that $c_{k\lambda,k\mu}^{k\nu}(w) >0$ for some $k \geq 2$. We show that this is true when the permutation $w$ is $312$-avoiding or $231$-avoiding, by adapting the beautiful combinatorial proof of the LR-saturation conjecture due to Knutson and Tao.
This is joint work with K.N. Raghavan and Sankaran Viswanath.
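The notion of pattern avoidance used above is easy to make concrete. As an illustration (the function and the brute-force triple check below are ours, not from the talk), a permutation avoids a pattern when no subsequence is order-isomorphic to it:

```python
from itertools import combinations

def avoids_pattern(w, pattern):
    """Return True if permutation w (a list of distinct integers) contains
    no subsequence whose entries are in the same relative order as `pattern`."""
    k = len(pattern)
    # Rank the pattern so that only relative order matters.
    target = [sorted(pattern).index(x) for x in pattern]
    for idxs in combinations(range(len(w)), k):
        sub = [w[i] for i in idxs]
        if [sorted(sub).index(x) for x in sub] == target:
            return False
    return True

# w = 3142 contains 312 (take the subsequence 3, 1, 2), so it is not
# 312-avoiding, while the identity permutation avoids every pattern
# other than increasing ones.
```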

Suppose $V(n)$ is a representation of the $n^{th}$ symmetric group for each $n$. There are naturally occurring families of this kind whose combinatorial properties stabilize for large values of $n$. For example, if $V(n)$ is the space of homogeneous polynomials of a fixed degree $d$ in $n$ variables, then the dimension of the subspace of invariant polynomials stabilizes at $p(d)$, the number of partitions of $d$, for large $n$. I will outline a simple approach to studying such families using character polynomials.
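The stabilization in this example can be checked directly: the invariants are the symmetric polynomials of degree $d$, whose dimension is the number of partitions of $d$ into at most $n$ parts, and this count is constant once $n \geq d$. A small sketch (the function name and recurrence are ours, for illustration):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def partitions_at_most(d, n):
    """Number of partitions of d into at most n parts.  This is the number
    of monomial symmetric polynomials of degree d in n variables, i.e. the
    dimension of the space of S_n-invariant homogeneous polynomials of
    degree d."""
    if d == 0:
        return 1
    if n == 0:
        return 0
    # Standard recurrence: either fewer than n parts, or exactly n parts
    # (subtract 1 from each part): p(d, n) = p(d, n-1) + p(d-n, n).
    total = partitions_at_most(d, n - 1)
    if d >= n:
        total += partitions_at_most(d - n, n)
    return total

# For fixed degree d = 5 the dimension stabilizes at p(5) = 7 once n >= 5:
# [partitions_at_most(5, n) for n in range(1, 8)] gives [1, 3, 5, 6, 7, 7, 7].
```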

Life on Earth is possible because of sunlight, and the source of this sunlight is the nuclear fusion processes that occur in the core of the Sun. This talk reviews the story of solar neutrinos and how they led to a deeper understanding of neutrinos. It also forms the motivation for the proposed INO laboratory.

The fractional quantum Hall effect (FQHE) forms a paradigm in our understanding of strongly correlated systems. FQHE in the lowest Landau level (LLL) is understood in a unified manner in terms of composite fermions, which are bound states of electrons and vortices. The strongest states in the LLL are understood as integer quantum Hall states of composite fermions, and the compressible $\frac{1}{2}$ state as a Fermi liquid of composite fermions. For the FQHE in the second LL, such a unified description does not exist: the experimentally observed states are described by different physical mechanisms. In this talk, I will discuss our first steps towards a unified understanding of states in the second LL using the "parton" theory. I will elucidate in detail our recent work on the parton construction of wave functions to describe many of the FQH states observed in the second LL.

A string or a text is a sequence of characters from a finite alphabet. Given a text $T$ of length $n$ and a pattern $P$ of length $m$, the string matching problem asks for all occurrences of $P$ in $T$. While a naive algorithm takes $\mathcal{O}(nm)$ time, a classical algorithm due to Knuth, Morris and Pratt does this in $\mathcal{O}(m+n)$ time. When the text $T$ is given in advance, one can pre-process it in $\mathcal{O}(n)$ time to support string matching queries for patterns $P$ in $\mathcal{O}(m)$ time. In this talk, we consider the string matching problem when the text or pattern contains "don't care" symbols, which can match any character, and solve it using the Discrete Fourier Transform.
The Discrete Fourier Transform (DFT) converts a complex $n$-dimensional vector into another, using properties of the complex $n^{th}$ roots of unity. What makes it useful for algorithmic applications is that it can be computed in $\mathcal{O}(n\log{}n)$ time. We will start by describing the DFT and then explain how it can be used to solve string matching with "don't care" symbols.
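One standard criterion behind such algorithms encodes a don't-care as $0$ and each character as a positive integer; then $P$ matches at position $i$ exactly when $\sum_j p_j\, t_{i+j}\,(p_j - t_{i+j})^2 = 0$, since each term vanishes iff the symbols agree or one of them is a don't-care. The sketch below (our illustration, not the talk's implementation) evaluates this sum directly in $\mathcal{O}(nm)$; expanding the product gives three cross-correlations, each computable with an FFT in $\mathcal{O}(n\log n)$ time:

```python
def dont_care_matches(text, pattern, wildcard='?'):
    """All positions i where `pattern` matches text[i:i+m], with `wildcard`
    (in the pattern) matching any character."""
    t = [ord(c) for c in text]                              # characters -> positive ints
    p = [0 if c == wildcard else ord(c) for c in pattern]   # wildcard -> 0
    n, m = len(t), len(p)
    matches = []
    for i in range(n - m + 1):
        # A_i = sum_j p_j * t_{i+j} * (p_j - t_{i+j})^2; zero iff match at i.
        a = sum(pj * t[i + j] * (pj - t[i + j]) ** 2
                for j, pj in enumerate(p))
        if a == 0:
            matches.append(i)
    return matches

# dont_care_matches("aabcab", "a?b") finds the single match at position 0.
```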

The celebrated Lagrange's theorem (1770) asserts that every natural number is the sum of four squares. Recently (in 2018), Madhusudan, Nowotka, Rajasekaran and Shallit showed a binary version: every natural number larger than 686 is the sum of at most 4 binary squares. A number is a binary square if its binary representation is of the form ww, where w is a finite bit sequence. (For instance, 45 is a binary square since it is 101101 in binary.) Last year, Kane, Sanna and Shallit showed a version of Waring's theorem for binary powers. (There are natural analogues of these theorems for numbers in any base.)
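The definition is easy to test by splitting the binary representation in half; a minimal sketch (the function names are ours, for illustration):

```python
def is_binary_square(n):
    """True if the binary expansion of n (without leading zeros) has the
    form ww for some nonempty bit string w."""
    b = bin(n)[2:]          # e.g. 45 -> "101101"
    half = len(b) // 2
    return len(b) % 2 == 0 and b[:half] == b[half:]

def binary_squares_up_to(limit):
    """All binary squares between 1 and limit, inclusive."""
    return [n for n in range(1, limit + 1) if is_binary_square(n)]

# 45 = 0b101101 = "101" + "101", so is_binary_square(45) is True;
# the binary squares up to 10 are 3 (= 0b11) and 10 (= 0b1010).
```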
These are instances of a spate of recent results made possible by decision procedures for logical theories (extensions of Presburger arithmetic) that use finite-state automata representations. Many of these proofs involve automata with thousands of states, and hence the analysis could not have been done without these decision procedures. The talk will be an attempt to highlight this automata-based approach to additive number theory.

Amorphous solids are all around us, in various shapes and forms, ranging from things that we use in our daily lives to large-scale industrial applications and even geophysical phenomena. Unlike crystalline solids which have ordered structures, amorphous materials are disordered. They also range from soft (e.g. gels, emulsions, foams, pastes, granular assemblies etc.) to hard (e.g. window glass, metallic glass etc.). Understanding the formation of these materials and thereby tuning their properties for targeted applications has been one of the challenging research areas over many decades. In this talk, I will discuss some of our recent work in this context, viz. probing equilibrium and non-equilibrium behaviour of different kinds of model glassy systems.

In a (not-so-)imaginary world, imagine a district named FRAGILE that needs to be protected from a district named SPREADER that spreads a deadly virus. The virus spreads by contamination: when a person from a district A that has the virus travels to a district B, we assume that district B is infected with the virus. Travel is not necessarily allowed between every pair of districts, and the information about which pairs of districts allow travel is given to us in advance. Initially only the SPREADER district has the virus. With this information at hand, all the districts decide to save the FRAGILE district from the virus by shutting down the borders of some districts, so that once these borders are closed the virus cannot reach the FRAGILE district from the SPREADER district. An obvious aim here is to close the borders of as small a number of districts as possible while still preventing the virus from reaching the FRAGILE district. An algorithm designer is approached for this task, and after tons of hard work he/she finds a smallest set of districts whose borders need to be closed to achieve the desired goal.
Soon after this problem is resolved, one faces the issue of synchronisation amongst the closed districts. In particular, suppose now that the districts which need to close their borders have to collectively decide on the protocol for doing so. In order to allow for a smooth conduct of events, one now desires that the chosen districts be such that none of them has a "conflict" with any of the others. The same algorithm designer is consulted again to add this feature to the output of the algorithm. Sooner rather than later, the designer realizes that the previous approach to finding a solution fails fundamentally, even though the problem being addressed is essentially the same with one added constraint. To the surprise of many (but maybe not of the mathematicians and computer scientists), one never hears back from the designer!
The above example depicts the reality of algorithm designers. Often algorithms designed after years/decades of hard work become obsolete when asked to perform the same job with an additional constraint. In this talk we see a combinatorial tool that allows the reuse of algorithms when the additional constraint is that of conflict-freeness.

The Mpemba effect refers to a counterintuitive phenomenon wherein an initially hotter system, when quenched to a lower temperature, equilibrates faster than one starting at an intermediate temperature. In this talk, we review some of the known results on the existence of the Mpemba effect in various physical systems. We then describe the existence of such an effect in driven inelastic gases. An exact analysis determining the conditions for the Mpemba effect will be presented for a simplified Maxwell model, followed by a more realistic collision model for such systems. We also show the existence of the strong Mpemba effect, where the system at the higher temperature relaxes to the final steady state at an exponentially faster rate, leading to a smaller equilibration time.

Given the rise of antimicrobial resistance to many drugs, there is a need for a paradigm shift in thinking about and designing a new class of antimicrobial agents. In the quest for the same, there has also been a lot of focus on understanding the bacterial cell membrane and its structure in order to exploit any aspects in favor of antimicrobial mechanism. In this talk, I will discuss efforts in this direction and in particular on designing polymers that mimic naturally occurring antimicrobial peptides and their interactions with model bacterial membranes using computationally intensive atomistic-level simulations. Of crucial interest is the ability of these smart polymers to have shape-shifting properties that can sense the environment they are in and adopt functionally relevant forms.

After a brief introduction to the world of elementary particles, we point out how physics beyond the Standard Model can be probed using rare, so-called loop processes. We show how such decays of mesons allow us to probe energies far beyond the reach of the Large Hadron Collider.

We shall introduce the mapping class groups of surfaces of infinite type, which are known as 'big' mapping class groups, and the associated Teichmüller spaces. Towards the end, we shall briefly discuss ongoing work with Dr. Gianluca Faraco.

The Standard Model (SM) has been very successful so far in describing the physics of elementary particles. The most successful methodology for performing theoretical calculations within the SM is based on perturbation theory, owing to our inability to solve the theory exactly. In the framework of perturbation theory, all observables are expanded in powers of the coupling constants present in the underlying Lagrangian. The result obtained from the first term of the perturbative series is called the leading order (LO), the next is called the next-to-leading order (NLO), and so on. In most cases the LO result fails to provide a reliable theoretical prediction of the associated observable, and one must go beyond LO to achieve higher accuracy. Perturbative computations can be performed with respect to the coupling constants associated with the three fundamental forces within the SM, namely the electromagnetic ($\alpha_{em}$), weak ($\alpha_{ew}$) and strong ($\alpha_s$) ones. However, at the typical energy scales at which hadron colliders operate, the contributions arising from the $\alpha_s$ expansion dominate over the others due to the comparatively large value of $\alpha_s$. Hence, to capture the dominant contributions to any observable, we must concentrate on the $\alpha_s$ expansion and evaluate the terms beyond LO. These are called Quantum Chromodynamics (QCD) radiative or perturbative QCD (pQCD) corrections. In this talk, I will discuss a formalism called the soft-virtual (SV) approximation for calculating the QCD radiative corrections to inclusive cross-sections at hadron colliders. I will also highlight some of the recent work done in our group to extend this formalism to the next-to-soft-virtual (NSV) approximation.
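The structure of the expansion described above can be summarized schematically (a generic sketch; normalization conventions for the expansion parameter vary between groups):

```latex
% Perturbative expansion of an observable \sigma in the strong coupling,
% with expansion parameter a_s \equiv \alpha_s(\mu_R^2)/(4\pi) at
% renormalization scale \mu_R:
\sigma \;=\; \sigma^{(0)}
       \;+\; a_s\,\sigma^{(1)}
       \;+\; a_s^2\,\sigma^{(2)}
       \;+\; \mathcal{O}(a_s^3),
% where \sigma^{(0)} is the LO term, a_s\,\sigma^{(1)} is the NLO
% correction, and a_s^2\,\sigma^{(2)} is the NNLO correction.
```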

We present a formalism that resums both the soft-virtual (SV) and next-to-SV (NSV) contributions to all orders in perturbative QCD for the rapidity distribution of any colorless particle produced at hadron colliders. Using state-of-the-art results, we determine the complete NSV contributions to third order in the strong coupling constant for the rapidity distributions of the Drell-Yan process and of Higgs boson production in gluon fusion as well as in bottom-quark annihilation. Using our all-order $z$-space result, we show how the NSV contributions can be resummed in two-dimensional Mellin space.

Since the early work of Stuart Kauffman, Boolean networks have become a widely used framework for modelling cellular decision-making processes. Logical update rules (or Boolean functions), which form the logical edifice of Boolean networks, govern the local dynamics of the system. We critically assess the preponderance of the known classes of Boolean functions in more than 80 models of biological systems and relate our empirical observations to biologically meaningful properties. We employ an existing notion of complexity that has not previously been explored in biological systems, and examine the representation of low-complexity Boolean functions in the biological models. Our work naturally leads to a novel class of Boolean functions which satisfy biologically meaningful constraints and have low complexity. This work has been done in collaboration with Olivier C. Martin (INRAE, France) and Areejit Samal.
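To fix ideas, a Boolean network assigns each node an update rule over the current global state; iterating the synchronous update eventually lands in a cycle (an attractor). A toy sketch (the three-node network and rules below are illustrative, not a model from the talk):

```python
def step(state, rules):
    """One synchronous update: every node applies its Boolean rule to the
    current global state simultaneously."""
    return tuple(rule(state) for rule in rules)

def find_attractor(state, rules, max_steps=100):
    """Iterate the synchronous dynamics until a state repeats; return the
    repeating cycle of states (the attractor), or None if not found."""
    seen = {}
    trajectory = []
    for t in range(max_steps):
        if state in seen:
            return trajectory[seen[state]:]   # the cycle starts where the state first appeared
        seen[state] = t
        trajectory.append(state)
        state = step(state, rules)
    return None

# Toy 3-node network: x0' = x1 AND x2, x1' = NOT x0, x2' = x0 OR x1.
rules = (
    lambda s: s[1] and s[2],
    lambda s: not s[0],
    lambda s: s[0] or s[1],
)
```

Starting from the state (True, False, False), these rules settle into a cycle of five states, which `find_attractor` recovers by detecting the first repeated state.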

Fix a partition $\mu=(\mu_1,\mu_2,\ldots, \mu_m)$ of an integer $k$ and a positive integer $d$. For a partition $\lambda$ of an integer $n\geq k$, let $\chi_{\mu}^{\lambda}$ denote the value of the irreducible character of $S_n$ corresponding to $\lambda$ at an element with cycle type $(\mu_1,\mu_2,\ldots, \mu_m, 1^{n-k})$. We show that the proportion of partitions $\lambda$ of $n$ such that $\chi_{\mu}^{\lambda}$ is divisible by $d$ approaches $1$ as $n$ approaches infinity. This is joint work with Amritanshu Prasad and Steven Spallone.