The mathematical game player

A tribute to John Horton Conway (26 December 1937 – 11 April 2020)

Many lives have been lost to COVID-19 in the last few months, but for the world of mathematics one loss was especially sad: that of John Conway, a legend in his own lifetime.

A distinct characteristic of Conway was that he loved to play games, preferably silly children's games, all day long. Born in Liverpool, England, he would spend whole summers going from one mathematics camp to another, one for middle school children here, another for teenage students there, and in all of them he would play games with the children, posing and solving puzzles. He would carry all sorts of things with him: decks of cards, dice, ropes, coins, coat hangers, sometimes a Slinky, even a miniature toy bicycle. These were all props he would use for explaining ideas, though Conway insisted that they were more for his own amusement. He was educated at Cambridge on a scholarship but spent many years at Princeton University in the USA.

Unlike most other mathematicians, Conway was always obsessed with seemingly trivial concerns. He would constantly factor large numbers in his head. He could recite more than a thousand digits of pi from memory. He developed algorithms you could use to calculate the day of the week for any given date (in your head!), to count the number of steps while you climb stairs without actually counting, and so on. But importantly, Conway claimed that it was such thinking that led to his mathematical research, and his colleagues at Princeton agree. If anyone has ever approached all of mathematics in a spirit of play, it is Conway.

He also led the world in the mathematics of playing games. Take any two-player board game like Chess. We can ask: does one of the players have a *winning strategy*, a way of playing such that, no matter what moves the other player makes, she is assured of a win? If I have a strategy in game g and another in game h, how do I combine them in a game that has g and h as subgames? Such considerations lead to a beautiful algebra of games, and Conway made pioneering contributions to it. (A short program sketch below makes the idea of a winning strategy concrete.) Conway is credited with founding the area of Combinatorial Game Theory, a rich and beautiful subject of mathematical study.

Conway also created many games for people to play. In fact, he is best known to the public for designing the Game of Life, a great boon to computer screen savers.

Conway worked on diverse topics such as how to pack spheres in spaces of arbitrary dimension. He worked in knot theory, a branch of topology; knots can be thought of as closed loops of string, and a fundamental problem in the area is knot equivalence: how different are two knots? Can one be obtained from the other by finitely many allowed operations? Conway also worked in number theory, combinatorial game theory and coding theory, as well as in geometry, geometric topology, algebra, analysis, algorithmics and theoretical physics. The vast range attests to Conway's ability to work on pretty much any mathematically posed question.

Conway was at Princeton University for the last quarter of a century, interacting intensively with students and colleagues. His biography by Siobhan Roberts, Genius at Play, is a deeply inspiring story. Conway was known for his obsession with reducing proofs to their simplest terms. He gave seven different proofs that the square root of 2 is irrational, analysing them to bring the assumptions needed down to the least possible.
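To give a flavour of the statement itself, here is one classical argument that the square root of 2 is irrational; this is the standard textbook proof, not claimed to be any of Conway's seven.

```latex
% One classical proof that sqrt(2) is irrational.
Suppose $\sqrt{2} = p/q$ with $p, q$ positive integers and the fraction in lowest terms.
Squaring gives
\[
  p^2 = 2q^2 ,
\]
so $p^2$ is even, and hence $p$ itself is even; write $p = 2r$. Substituting,
\[
  4r^2 = 2q^2 \quad\Longrightarrow\quad q^2 = 2r^2 ,
\]
so $q$ is even as well. But then $p$ and $q$ share the factor $2$, contradicting
``lowest terms''. Hence $\sqrt{2}$ cannot be written as a ratio of two integers.
```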
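The notion of a winning strategy mentioned above can also be made concrete on a toy game. The sketch below is purely illustrative, not anything of Conway's: the game (a simple subtraction game in which players alternately remove 1, 2 or 3 counters from a pile, and whoever takes the last counter wins) and all the names in the code are my own choices. The function decides, by examining every possible continuation, whether the player about to move can force a win.

```python
from functools import lru_cache

MOVES = (1, 2, 3)  # on each turn a player removes 1, 2 or 3 counters


@lru_cache(maxsize=None)
def player_to_move_wins(counters: int) -> bool:
    """True if the player about to move can force a win from this position."""
    if counters == 0:
        return False  # no counters left: the previous player took the last one and won
    # The mover wins if SOME legal move leads to a position that is
    # losing for the opponent (who then becomes the player to move).
    return any(not player_to_move_wins(counters - m)
               for m in MOVES if m <= counters)


if __name__ == "__main__":
    for n in range(1, 11):
        winner = "first player wins" if player_to_move_wins(n) else "second player wins"
        print(n, winner)
```

Running it shows that the player to move loses exactly when the pile size is a multiple of 4; Conway's theory of games builds a whole algebra out of analyses of this kind.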
For John Conway, doing mathematics was always in a spirit of play; occasionally some theorems might come up for publication, but that was never the main aim.

BOX The Game of Life

This is a very simple game. Take a sheet of grid paper (graph paper or any paper with squares), and keep a pencil and an eraser with you. Each square is called a cell, and you can see that each cell has 8 neighbours. In this game, every cell can be alive or dead at any instant. There are (only!) two rules: (1) a dead cell having exactly three live neighbours comes alive at the next instant (otherwise it stays dead); (2) a live cell that has two or three live neighbours stays alive (otherwise it dies). All the cells are updated together at each step, based on the current arrangement.

Let us look at the example shown. The three grids show three steps in the game. In the beginning, there are only three "living" cells, marked 1, 2, 3. Cell 1 has only one living neighbour (cell 2), and so in the next play of the game it "dies": the cell goes back to being uncoloured. Cell 2 has two living neighbours (cells 1 and 3) and so it continues to live, while cell 3, like cell 1, dies. The interesting move is shown in the next step. Since cells 1 and 3 have died, we expect to see only cell 2 alive (shaded), but there is a new cell that has "come alive": cell 4. In the previous arrangement, the "dead" cell 4 had exactly three living neighbours, cells 1, 2 and 3, and that is what caused it to come alive! In the next step (last grid), both cells 2 and 4 die because they do not have enough living neighbours.

You can see why this is called the Game of Life. Our little example ended here because it was too simple. If you have a bigger grid and start with more living cells, the game can effectively go on forever. This seemingly simple Game of Life is, in a theoretical sense, exactly as powerful as a digital computer, and there are a variety of questions linking the game to formal aspects of computation theory and complexity theory.

Try out the evolution of the game starting from some of the initial configurations suggested in the figure. A more complex game is also shown.

END OF BOX
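If you would rather experiment on a computer than on grid paper, here is a minimal sketch of the two rules in Python. The starting pattern, the coordinate scheme and the function names are illustrative choices made for this sketch, not part of the box above.

```python
# A minimal Game of Life sketch: a set of (row, column) pairs records the live cells.

def neighbours(cell):
    """The 8 cells surrounding a given cell."""
    r, c = cell
    return {(r + dr, c + dc)
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)}


def step(live):
    """Apply the two rules once, to every cell, simultaneously."""
    # Candidates: currently live cells plus every neighbour of a live cell.
    candidates = set(live)
    for cell in live:
        candidates |= neighbours(cell)
    new_live = set()
    for cell in candidates:
        n = len(neighbours(cell) & live)
        if cell in live and n in (2, 3):    # a live cell with 2 or 3 live neighbours survives
            new_live.add(cell)
        elif cell not in live and n == 3:   # a dead cell with exactly 3 live neighbours is born
            new_live.add(cell)
    return new_live


if __name__ == "__main__":
    # An illustrative starting pattern: three live cells in a row.
    live = {(1, 0), (1, 1), (1, 2)}
    for generation in range(4):
        print(f"generation {generation}: {sorted(live)}")
        live = step(live)
```

Starting from three cells in a row, the pattern flips between a horizontal and a vertical row of three forever, one of the classic small oscillators in the game; changing the starting set lets you try the configurations suggested in the figure.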