September 1, 2015 - January 31, 2016
Jack Copeland (University of Canterbury)
Eli Dresner (Tel Aviv University)
The theory of computability was launched in the 1930s by a group of logicians who proposed new characterizations of the ancient idea of an algorithmic process. The theoretical and philosophical work that these thinkers carried out laid the foundations for the computer revolution, and this revolution in turn fuelled the fantastic expansion of scientific knowledge in the late twentieth and early twenty-first centuries.
The 1930s revolution was a critical moment in the history of science: ideas conceived at that time have become cornerstones of current science and technology. Since then, many diverse computational paradigms have blossomed, and still others are the object of current theoretical enquiry: massively parallel and distributed computing, quantum computing, real-time interactive asynchronous computing, relativistic computing, hypercomputing, nano-computing, DNA computing, neuron-like computing, computing over the reals, and computing involving quantum random-number generators. The list goes on; few of these forms of computation were even envisaged in the 1930s analysis of computability.
The fundamental question tackled by the group is this: do the concepts introduced by the early pioneers provide the logico-mathematical foundation for what we call computing today, or must the foundations of computing be overhauled to fit the twenty-first century?