1630 - 1700
#### A covariant approach to lattice

##### Partha Mukhopadhyay

Two crucial aspects of modern physical theories are symmetries and the presence of an infinite number of degrees of freedom. While the former helps us understand the theories at a deeper level, the latter gives rise to difficulties such as divergences. An important approach to dealing with this is to define the theories on a lattice instead of the space-time continuum. However, in traditional lattice theories certain important symmetries are lost and are recovered only in the continuum limit. We explore a new covariant (i.e. manifestly symmetric) approach in which all such symmetries can be preserved. As a result, there arises the possibility of a much simpler description of lattice theories. The construction relies on the existence of a logarithmic discrete derivative that satisfies the Leibniz rule. The resulting discrete calculus behaves exactly like the ordinary calculus, into which it merges in the continuum limit.

Youtube Link:

1600 - 1630
#### Neutron Stars as tools to probe gravitational physics

##### Manjari Bagchi

Neutron stars are extremely dense remnants of dead stars. These objects emit electromagnetic waves due to their strong magnetic fields and fast rotation, and are sometimes observed as "pulsars". Because of their extreme density, the gravitational fields around neutron stars are so strong that general relativistic effects become significant; hence pulsars can be used as laboratories to test various theories of gravity. In this talk, I will describe two such uses of pulsars. First, I will discuss how a pulsar in a binary system with a black hole can help establish or rule out some alternative theories of gravity. Second, I will describe how a number of pulsars can be used to detect low-frequency gravitational waves through the "Pulsar Timing Array" experiment. I will also mention our group's contribution to this international experiment, and its future.

Youtube Link:

1630 - 1700
#### Saturation of refined Littlewood-Richardson coefficients

##### Mrigendra Singh Kushwaha

Let $\lambda$, $\mu$ and $\nu$ be integer partitions with at most $n$
parts each. The Littlewood-Richardson (LR) coefficient $c_{\lambda,\mu}^{\nu}$
is the multiplicity of the irreducible representation $V(\nu)$ in the
decomposition of the tensor product $V(\lambda)\otimes V(\mu)$ of irreducible
polynomial representations of $GL_n$. For each permutation $w$ in $S_n$, the
$w$-refined LR coefficient $c_{\lambda,\mu}^{\nu}(w)$ is the multiplicity of
$V(\nu)$ in the decomposition of the so-called Kostant-Kumar submodule
$K(\lambda,w,\mu)$ of the tensor product. The saturation problem asks whether
$c_{\lambda,\mu}^{\nu}(w) >0$ given that $c_{k\lambda,k\mu}^{k\nu}(w) >0$ for
some $k \geq 2$. We show that this is true when the permutation $w$ is
$312$-avoiding or $231$-avoiding, by adapting the beautiful combinatorial proof
of the LR-saturation conjecture due to Knutson and Tao.
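
For concreteness, a standard unrefined example (not from the abstract; valid for $GL_n$ with $n \geq 4$): with $\lambda = \mu = (2,1)$, the Littlewood-Richardson rule gives

$$s_{(2,1)} \cdot s_{(2,1)} = s_{(4,2)} + s_{(4,1,1)} + s_{(3,3)} + 2\,s_{(3,2,1)} + s_{(3,1,1,1)} + s_{(2,2,2)} + s_{(2,2,1,1)},$$

so $c_{(2,1),(2,1)}^{(3,2,1)} = 2$. The Knutson-Tao saturation theorem asserts that $c_{k\lambda,k\mu}^{k\nu} > 0$ for some $k \geq 1$ already forces $c_{\lambda,\mu}^{\nu} > 0$; the problem above asks for the analogous statement with a fixed permutation $w$.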

This is joint work with K.N. Raghavan and Sankaran Viswanath.

Youtube Link:

1600 - 1630
#### Representation Stability for Symmetric Groups

##### Amritanshu Prasad

Suppose $V(n)$ is a representation of the $n^{th}$ symmetric group for each $n$. There are naturally occurring families of this kind whose combinatorial properties stabilize for large values of $n$. For example, if $V(n)$ is the space of homogeneous polynomials of a fixed degree $d$ in $n$ variables, then the dimension of the subspace of invariant polynomials stabilizes at $p(d)$, the number of partitions of $d$, for large $n$. I will outline a simple approach to studying such families using character polynomials.
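
As an illustration of this stabilization (a small combinatorial sketch, not the character-polynomial method of the talk): the $S_n$-invariant homogeneous polynomials of degree $d$ are spanned by the monomial symmetric polynomials $m_\lambda$, one for each partition $\lambda$ of $d$ with at most $n$ parts, so the invariant dimension is a partition count that stops changing once $n \geq d$.

```python
from functools import lru_cache

def invariant_dim(d, n):
    """Dimension of the space of S_n-invariant homogeneous polynomials of
    degree d in n variables: the number of partitions of d into at most
    n parts (one monomial symmetric polynomial per partition)."""
    @lru_cache(maxsize=None)
    def count(remaining, max_part, slots):
        # partitions of `remaining` into at most `slots` parts, each <= max_part
        if remaining == 0:
            return 1
        if slots == 0 or max_part == 0:
            return 0
        return sum(count(remaining - first, first, slots - 1)
                   for first in range(min(remaining, max_part), 0, -1))
    return count(d, d, n)

# For d = 4 the dimension grows 1, 3, 4, 5 and then stays at p(4) = 5:
print([invariant_dim(4, n) for n in range(1, 7)])  # [1, 3, 4, 5, 5, 5]
```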

Youtube Link:

1630 - 1700
#### The Sun, the Earth, and Neutrinos: The India-based Neutrino Observatory (INO)

##### D. Indumathi

Life on Earth is possible because of sunlight, and the source of this sunlight is the nuclear fusion processes that occur in the core of the Sun. This talk reviews the story of solar neutrinos and how they led to a deeper understanding of neutrinos. It also forms the motivation for the proposed INO laboratory.

Youtube Link:

1600 - 1630
#### Parton paradigm for the quantum Hall effect

##### Ajit C. Balram

The fractional quantum Hall effect (FQHE) forms a paradigm in our understanding of strongly correlated systems. FQHE in the lowest Landau level (LLL) is understood in a unified manner in terms of composite fermions, which are bound states of electrons and vortices. The strongest states in the LLL are understood as integer quantum Hall states of composite fermions, and the compressible state at filling factor $\frac{1}{2}$ as a Fermi liquid of composite fermions. For the FQHE in the second LL, no such unified description exists: the experimentally observed states are described by different physical mechanisms. In this talk, I will discuss our first steps towards a unified understanding of states in the second LL using the "parton" theory. I will elucidate in detail our recent work on the parton construction of wave functions to describe many of the FQH states observed in the second LL.

Youtube Link:

1630 - 1700
#### String Matching using Discrete Fourier Transform

##### Venkatesh Raman

A string or a text is a sequence of characters from a finite alphabet. Given a text $T$ of
length $n$ and a pattern $P$ of length $m$, the string matching problem asks for all occurrences
of $P$ in $T$. While a naive algorithm takes $\mathcal{O}(nm)$ time, a classical algorithm due
to Knuth, Morris and Pratt does this in $\mathcal{O}(m+n)$ time. When the text $T$ is given in
advance, one can pre-process in $\mathcal{O}(n)$ time to support string matching queries for
patterns $P$ in $\mathcal{O}(m)$ time. In this talk, we consider the string matching problem
when the text or pattern contains "don't care" symbols which can match any character, using
Discrete Fourier Transform.

Discrete Fourier Transform (DFT) is a transform that converts a complex $n$-dimensional vector
into another, and uses properties of complex $n^{th}$ roots of unity. What makes it useful for
algorithmic applications is that it can be computed in $\mathcal{O}(n\log{}n)$ time. We will
start by describing the DFT and explain how it can be used to solve string matching with
"don't care" symbols.
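
The don't-care matching described above can be sketched as follows (a minimal Python/NumPy illustration of the standard $\sum_j p_j\,t_{i+j}\,(p_j - t_{i+j})^2$ trick, with the wildcard encoded as 0; the function names are mine, not from the talk):

```python
import numpy as np

def fft_convolve(a, b):
    """Linear convolution of two real sequences via the FFT, O(n log n)."""
    n = len(a) + len(b) - 1
    return np.fft.irfft(np.fft.rfft(a, n) * np.fft.rfft(b, n), n)

def match_with_wildcards(text, pattern, wildcard='?'):
    """All shifts i where pattern matches text[i:i+m], '?' matching anything.
    Encode the wildcard as 0 and other characters as small positive integers;
    shift i is then a match iff sum_j p_j * t_{i+j} * (p_j - t_{i+j})^2 == 0,
    and expanding gives three cross-correlations p^3*t - 2 p^2*t^2 + p*t^3,
    each computable as one FFT convolution with the reversed pattern."""
    codes = {c: k + 1 for k, c in
             enumerate(sorted(set(text + pattern) - {wildcard}))}
    t = np.array([codes.get(c, 0) for c in text], dtype=float)
    p = np.array([codes.get(c, 0) for c in pattern], dtype=float)
    m, n = len(p), len(t)
    pr = p[::-1]  # reversing the pattern turns correlation into convolution
    s = (fft_convolve(t, pr**3)
         - 2.0 * fft_convolve(t**2, pr**2)
         + fft_convolve(t**3, pr))
    # valid shifts sit at indices m-1 .. n-1 of the full convolution
    return [i for i in range(n - m + 1) if abs(s[i + m - 1]) < 1e-6]

print(match_with_wildcards("abcabcab", "a?c"))  # [0, 3]
```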

Youtube Link:

1600 - 1630
#### To number theory via automata

##### R Ramanujam

The celebrated Lagrange's theorem (1770) asserts that every natural number is the
sum of four squares. Recently (in 2018), Madhusudan, Nowotka, Rajasekaran and Shallit
showed a binary version: every natural number larger than 686 is the sum of at most 4
binary squares. A number is a binary square if its binary representation is of the form
$ww$, where $w$ is a finite bit sequence. (For instance, 45 is a binary square since it
is 101101 in binary.) Last year, Kane, Sanna and Shallit showed a version of Waring's
theorem for binary powers. (There are natural analogs of these theorems for numbers in
any base.)
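
These statements are easy to experiment with. The following sketch (hypothetical helper names, and a brute-force check rather than the automata-based proofs) verifies that 45 is a binary square and that numbers just above 686 are sums of at most 4 binary squares:

```python
def is_binary_square(n):
    """True if the binary expansion of n is ww for a nonempty bit string w."""
    b = bin(n)[2:]
    half = len(b) // 2
    return len(b) % 2 == 0 and b[:half] == b[half:]

def binary_squares_up_to(limit):
    """All binary squares <= limit.  If w has k bits (leading bit 1) and
    value v, then ww has value v * (2**k + 1)."""
    out, k = [], 1
    while (1 << (k - 1)) * ((1 << k) + 1) <= limit:
        for v in range(1 << (k - 1), 1 << k):
            s = v * ((1 << k) + 1)
            if s <= limit:
                out.append(s)
        k += 1
    return sorted(out)

def is_sum_of_binary_squares(n, count=4):
    """Brute-force check that n is a sum of at most `count` binary squares."""
    squares = binary_squares_up_to(n)
    sums = {0}
    for _ in range(count):
        sums |= {s + q for s in sums for q in squares if s + q <= n}
    return n in sums

print(is_binary_square(45), is_sum_of_binary_squares(687))  # True True
```

For instance 687 = 627 + 45 + 15, a sum of three binary squares (627 is 10011·10011 in binary).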

These are instances of a spate of recent results made possible by decision procedures
for logical theories (extensions of Presburger arithmetic) using finite state automata
representations. Many of these proofs involve automata with thousands of states and hence
the analysis could not have been done without the use of these decision procedures. The
talk will be an attempt to highlight this automata-based approach to additive number
theory.

Youtube Link:

1630 - 1700
#### Exploring the world of amorphous solids

##### Pinaki Chaudhuri

Amorphous solids are all around us, in various shapes and forms, ranging from things that we use in our daily lives to large-scale industrial applications and even geophysical phenomena. Unlike crystalline solids, which have ordered structures, amorphous materials are disordered. They also range from soft (e.g. gels, emulsions, foams, pastes, granular assemblies) to hard (e.g. window glass, metallic glass). Understanding the formation of these materials, and thereby tuning their properties for targeted applications, has been a challenging research area for many decades. In this talk, I will discuss some of our recent work in this context, viz. probing the equilibrium and non-equilibrium behaviour of different kinds of model glassy systems.

Youtube Link:

1600 - 1630
#### Reusing algorithms under conflict-free constraints

##### Roohani Sharma

In a (not-so-)imaginary world, imagine a district named FRAGILE that needs to be
protected from a district named SPREADER that spreads a deadly virus. The virus can
be spread by contamination: when a person from district A which has the virus goes
to district B, we assume that district B is infected with the virus. Travel is not
necessarily allowed between every pair of districts and the information about which
pairs of districts allow travel is already given to us. Initially only the SPREADER
district has the virus. With this information at hand, all the districts decide to
save the FRAGILE district from the virus by possibly shutting down the borders of
some districts such that when these borders are closed the virus can’t reach the
FRAGILE district from the SPREADER district. An obvious intention here is to close the
borders of as few districts as possible while still preventing the virus from reaching
the FRAGILE district. An algorithm designer is approached for this task, and after tons
of hard work he/she finds a smallest set of districts whose borders need to be closed to
achieve the desired goal.
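
The designer's first task above is a minimum vertex cut: the fewest districts whose closure disconnects SPREADER from FRAGILE in the travel graph. A small sketch (the district names follow the story; the graph and the brute-force strategy are illustrative assumptions, not the designer's actual algorithm):

```python
from itertools import combinations

def min_closed_districts(edges, spreader, fragile):
    """Smallest set of districts (other than spreader/fragile) whose closure
    leaves no open travel path from spreader to fragile.  Brute force over
    subsets, smallest first; a real algorithm would use max-flow duality."""
    nodes = sorted({u for e in edges for u in e} - {spreader, fragile})

    def still_reachable(closed):
        adj = {}
        for u, v in edges:
            if u not in closed and v not in closed:
                adj.setdefault(u, []).append(v)
                adj.setdefault(v, []).append(u)
        seen, stack = {spreader}, [spreader]
        while stack:
            for w in adj.get(stack.pop(), []):
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return fragile in seen

    for k in range(len(nodes) + 1):
        for closed in combinations(nodes, k):
            if not still_reachable(set(closed)):
                return set(closed)

edges = [("SPREADER", "A"), ("SPREADER", "B"),
         ("A", "C"), ("B", "C"), ("C", "FRAGILE")]
print(min_closed_districts(edges, "SPREADER", "FRAGILE"))  # {'C'}
```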

Soon after this problem is resolved, one faces the issue of synchronisation amongst
the closed districts. In particular, suppose now that the districts which need to
close their borders have to collectively decide on the protocol for doing so. In order
to allow for a smooth conduct of events, one now desires that the chosen districts
be such that none of them has a "conflict" with any of the others. The same algorithm
designer is again consulted to add this additional feature to the output of the algorithm.
Sooner rather than later, the designer realizes that the previous approach to finding a
solution fails fundamentally, even though the problem being addressed is essentially the
same with one added constraint. To the surprise of many (but maybe not of the
mathematicians and computer scientists), one never hears back from the designer!

The above example depicts the reality of algorithm designers. Often algorithms designed
after years/decades of hard work become obsolete when asked to perform the same job with
an additional constraint. In this talk we see a combinatorial tool that allows the reuse
of algorithms when the additional constraint is that of conflict-freeness.

Youtube Link:

1630 - 1700
#### Mpemba effect in driven granular gases

##### Apurba Biswas

The Mpemba effect refers to a counterintuitive phenomenon wherein an initially hotter system, when quenched to a lower temperature, equilibrates faster than one starting at an intermediate temperature. In this talk, we review some of the known results on the existence of the Mpemba effect in various physical systems. We then describe the existence of such an effect in driven inelastic gases. An exact analysis determining the conditions for the Mpemba effect will be presented for a simplified Maxwell model, followed by a more realistic collision model for such systems. We also show the existence of the strong Mpemba effect, where the system at a higher temperature relaxes to the final steady state at an exponentially faster rate, leading to a shorter equilibration time.

Youtube Link:

1600 - 1630
#### Designing Biomimetic Polymers: antimicrobial agents

##### Satyavani Vemparala

Given the rise of antimicrobial resistance to many drugs, there is a need for a paradigm shift in thinking about and designing a new class of antimicrobial agents. In this quest, there has also been much focus on understanding the bacterial cell membrane and its structure, in order to exploit any of its aspects in favor of an antimicrobial mechanism. In this talk, I will discuss efforts in this direction, in particular on designing polymers that mimic naturally occurring antimicrobial peptides, and their interactions with model bacterial membranes using computationally intensive atomistic-level simulations. Of crucial interest is the ability of these smart polymers to shape-shift: to sense the environment they are in and adopt functionally relevant forms.

Google Meet Link:

voe-feeg-uxf


1630 - 1700
#### Exploring the unknown using uncertainty principle

##### Rahul Sinha

After a brief introduction to the world of elementary particles, we point out how physics beyond the Standard Model can be probed using rare, so-called loop processes. We show how such decays of mesons allow us to probe energies far beyond the reach of the Large Hadron Collider.

Google Meet Link:

voe-feeg-uxf


1600 - 1630
#### TBA

##### Soumya Dey

TBA

Google Meet Link:

voe-feeg-uxf


1630 - 1700
#### QCD radiative corrections to the observables at the hadron colliders

##### Aparna Shankar

The Standard Model (SM) has been very successful so far in describing the physics of elementary particles. The most successful methodology for performing theoretical calculations within the SM is based on perturbation theory, owing to our inability to solve the theory exactly. In the framework of perturbation theory, all observables are expanded in powers of the coupling constants present in the underlying Lagrangian. The result obtained from the first term of the perturbative series is called the leading order (LO), the next one is called the next-to-leading order (NLO), and so on. In most cases the LO result fails to provide a reliable theoretical prediction of the associated observable, and one must go beyond LO to achieve higher accuracy. Perturbative computations can be performed with respect to the coupling constants associated with the three fundamental forces within the SM, namely the electromagnetic ($\alpha_{em}$), weak ($\alpha_{ew}$) and strong ($\alpha_s$) ones. However, at the typical energy scales at which hadron colliders operate, the contributions arising from the $\alpha_s$ expansion dominate over the others due to the comparatively large value of $\alpha_s$. Hence, to capture the dominant contributions to any observable, we must concentrate on the $\alpha_s$ expansion and evaluate the terms beyond LO. These are called Quantum Chromodynamics (QCD) radiative, or perturbative QCD (pQCD), corrections. In this talk, I will discuss a formalism called the soft-virtual (SV) approximation to calculate the QCD radiative corrections to inclusive cross-sections at hadron colliders. I will also highlight some recent work done in our group to extend this formalism to the next-to-soft-virtual (NSV) approximation.
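
Schematically, the perturbative series described above for an observable $\sigma$ reads (a generic illustration of the structure, not the specific normalization used in the talk)

$$\sigma \,=\, \sigma^{(0)} \,+\, \alpha_s\,\sigma^{(1)} \,+\, \alpha_s^2\,\sigma^{(2)} \,+\, \cdots,$$

where $\sigma^{(0)}$ is the LO result, $\alpha_s\,\sigma^{(1)}$ the NLO correction, and so on; the SV approximation retains, at each order, the soft-gluon and virtual contributions that dominate near the production threshold.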

Google Meet Link:

voe-feeg-uxf

