Keynote Speakers

Prof Brian Anderson Wednesday July 13, 08:40-09:40, Room 1019-1020


Brian Anderson is a Distinguished Professor at the ANU College of Engineering and Computer Science. Professor Anderson has served on a number of government bodies, including the Australian Science and Technology Council and the Prime Minister’s Science, Engineering and Innovation Council. He was a member of the Board of Cochlear Limited, the world’s major supplier of cochlear implants, from 1995 to 2005. He is a Fellow of the Australian Academy of Science, the Academy of Technological Sciences and Engineering, and the Institute of Electrical and Electronics Engineers, and an Honorary Fellow of the Institution of Engineers, Australia. In 1989 he became a Fellow of the Royal Society, London, and in 2002 a Foreign Associate of the US National Academy of Engineering. He holds honorary doctorates from the Catholic University of Louvain in Belgium, the Swiss Federal Institute of Technology, the Universities of Sydney, Melbourne, New South Wales and Newcastle, and the University of Technology, Sydney.

Plenary: 75 Years from the Wiener Filter
Extracting useful signals from noise-contaminated versions of those signals is a signal processing problem that goes back many decades. There were three distinct bursts of activity, each producing advances that reflected the probabilistic aspects of the problem: the first due to Wiener and Kolmogorov in the 1940s, the second due to Kalman in the 1960s, and the third associated with developments in Hidden Markov Models over the last 25 years or so. At the same time, the original application domains for Wiener filtering broadened unimaginably, to include today such diverse areas as EEG processing, modelling of national economies, localization of GPS-denied drones, evaluation of the efficacy of regimes for restricting domestic water usage, and estimation of the shape of an underwater towed array. During the past several decades, the potential benefits and countervailing disadvantages of what is known as smoothing were gradually uncovered. This talk will survey this progress and highlight common features of Wiener, Kalman and Hidden Markov Model approaches, with and without smoothing.


Prof Mathukumalli Vidyasagar Wednesday July 13, 13:00-14:00, Room 1019-1020


Mathukumalli Vidyasagar is a leading control theorist and a Fellow of the Royal Society. He is currently the Cecil & Ida Green (II) Professor of Systems Biology Science at the University of Texas at Dallas. Before that he was an executive vice-president at Tata Consultancy Services (TCS), where he headed the Advanced Technology Center. Earlier, he was the director of the Centre for Artificial Intelligence and Robotics (CAIR), a DRDO defence laboratory in Bangalore.

Plenary: Machine Learning Methods in Computational Cancer Biology
Molecular data from cancer tumours are characterized by the fact that the number of measured features is in the tens of thousands, while the number of samples is a few hundred at best. This mismatch necessitates the development of new algorithms for sparse regression and sparse classification with an eye towards cancer applications. Another aspect of biological data, which has no analogue in engineering, is that such data need to be “normalized” for platform variations. In this talk, these problems are first stated formally as machine learning problems; then specific algorithms invented by our research group are presented, and the results of applying these algorithms to data from endometrial, breast, and lung cancer are discussed. The talk will conclude with some open problems in transfer learning thrown up by the advent of “next generation” sequencing in biology.



Dr James Hughes Thursday July 14, 08:30-09:30, Room 1019-1020


James Hughes, the Executive Director of the Institute for Ethics and Emerging Technologies, is a bioethicist and sociologist at Trinity College in Hartford, Connecticut, where he teaches health policy and serves as Director of Institutional Research and Planning. Dr. Hughes is a Fellow of the World Academy of Arts and Sciences, and a member of Humanity+, the Neuroethics Society, the American Society of Bioethics and Humanities, and the Working Group on Ethics and Technology at Yale University. He serves on the State of Connecticut Regenerative Medicine Research Advisory Committee (formerly known as the Stem Cell Research Advisory Board).

Plenary: Cybernetics, Algocracy and Democracy: The Promises of Cyborg Governance
Continuing a Counter-Enlightenment complaint that has arisen repeatedly over the last two hundred years, some contemporary cyber-critics point to the emergence of “algocracy”: the spread of algorithms into every sphere of life, hiding and institutionalizing undemocratic decision-making. The critics suggest that participatory democratic resistance to algocracy is possible and desirable, but they are as wrong-headed as the misanthropic advocates of an AI governance free of human failings. The critics of, and the advocates for, AI technocracy are perpetuating a false dichotomy between cybernetics and human institutions, ignoring that every human institution has been partly, and inescapably, built on cybernetic principles. Participatory democracy, on the other hand, is at most a useful ritual with benefits for character development, but a practical impossibility given the number and complexity of decisions. Even attempts at participatory democracy devolve into endless meetings and hundred-page ballots, as unhelpful for human flourishing as endless work. In the future, democratically accountable algocracy, or cyborg democracy, enabled by artificial intelligence and human-computer co-evolution, can optimally inform debate, aggregate popular desires without the biases of current institutions, and ensure the efficient workings of a gradually withering state. Publicly accountable algocracy can usher in equitably distributed post-capitalist abundance, free from the necessity of work for wages. Indeed, only the embrace of the possibilities of algocratic governance can secure our future against the threats of super-empowered individuals and groups, systemic fragility, and the emergence of catastrophic forms of self-willed cybernetic life.


Dr. Juerg von Kaenel, IBM Research Friday July 15, 08:30-09:30, Room 1019-1020


Dr. Jürg von Känel is the associate director of the IBM Research – Australia lab in Melbourne. He studied mathematics and computer science at ETH Zürich and holds a Ph.D. in Computer Science (1991). Jürg joined IBM in 1985 in Zürich, Switzerland.
In 1991 he moved to the T.J. Watson Research Center in the US, where he most recently managed the relationship between Research and the financial services industries. In 2004 he initiated an Enterprise Risk & Compliance Framework focused primarily on the financial industry, which led Treasury & Risk magazine to list him as one of the 100 most influential people in finance in 2006. In June 2011 he moved to Melbourne, Australia, to build up the new Research lab there.
In his scarce spare time he and his wife invent, design and make mechanical puzzles.
Plenary: Cognitive Computing – the Dawn of the 3rd Era of Computing
The first era of computing may be characterised as the era of the tabulating machines in the first half of the 20th century. These early computers were good at counting and sorting, and were essentially single-purpose machines.
The second era, of generally programmable computers, started in the middle of the 20th century. In these machines, software and hardware became distinct objects of design. Through their software, the machines are instructed to sort, count and logically process data in incredibly fast and complex ways. These computers are instructed in a logical manner based on our left brain’s thinking patterns.
As an increasing number of skilled programmers were needed to instruct these programmable computers, the question arose: “Why can’t computers learn for themselves?” This gave rise to the third era of computing: cognitive computing, with computers based on our right brain’s capability to recognise patterns and learn from them. The first of these machines was the Watson computer, which beat two human Jeopardy! champions in 2011.

The previous eras of computing each lasted 50 years or more. So, five years in, we are truly just at the dawn of the era of cognitive computing. In this talk I will outline the history of this new era and project the road ahead, discussing its implications for business, education and research as it opens up an entirely new space alongside existing computing capabilities.



Prof Judy Wajcman Friday July 15, 09:40-10:40, Room 1019-1020


Judy Wajcman is the Anthony Giddens Professor of Sociology. She joined the LSE as Head of the Sociology Department in 2009. She was previously Professor of Sociology in the Research School of Social Sciences at the Australian National University. She has held posts in Cambridge, Edinburgh, Manchester, Sydney, Tokyo, Vienna, Warwick and Zurich. She was formerly a Centennial Professor at the LSE, a Visiting Fellow at All Souls College, Oxford, and a Visiting Professor at the Centre for Women in Business at London Business School. She was President of the Society for Social Studies of Science (2009-2011) and is currently a Visiting Professor at the Oxford Internet Institute. Her work has been translated into French, German, Greek, Korean, Japanese, Portuguese and Spanish.
Professor Wajcman’s scholarly interests encompass the sociology of work and employment, science and technology studies, gender theory, and organizational analysis. Her current research engages with theories about the impact of digital technologies on time poverty and the speeding up of everyday life. She is the 2013 recipient of the William F. Ogburn Career Achievement Award of the American Sociological Association. This award recognizes a sustained body of research that has made an outstanding contribution to the advancement of knowledge in the area of the sociology of communications or information technology.
Plenary: Automation, Robotics and the Promise of an Easier Life
Technologies are not neutral tools that emerge independently of the society that invents them. Rather, their design and use reflect as much as shape society. So what does the contemporary fascination with humanoid robots and automation more generally tell us about how our culture envisages the relationship between humans and machines?

This lecture will examine the ways in which robotics embodies the desire to save valuable time by enabling us to complete tasks ever faster and more efficiently. Robots are supposed to make our lives easier. Yet we hear constant laments that we are pressed for time, and that the pace of everyday life is accelerating. How do we explain this conundrum? And why is it that machines designed for today’s service economy often resemble gender stereotypes? Perhaps we need a female Doctor Who to provoke a feminist reimagining of robotics, one that challenges the future on offer from the evangelists of Silicon Valley.
