Conference Abstracts 2021

Virtual Conference

Opening Keynote Speaker

Kumpati S. Narendra,
Harold W. Cheel Professor of Electrical Engineering,
Director, Center for Systems Science,
Yale University, USA.

Brains or Machines – Who Will Be in Control?
A Look Into The Crystal Ball

Abstract: Ever since 1997, when IBM’s computer Deep Blue defeated the world chess champion, Garry Kasparov, machines have been steadily taking over tasks that we thought were our exclusive preserve. In the past ten years, the field of artificial intelligence has enjoyed rapid progress, and at present it is suffused with enormous optimism. Impressive demonstrations are to be found in IBM’s Watson, Apple’s Siri and Google’s self-driving car.
This optimism, however, is not shared by a number of eminent scientists and technologists. The philosopher Nick Bostrom considers artificial intelligence an “existential risk”; the entrepreneur Elon Musk recently described it as “summoning the demon”; the late Stephen Hawking of Cambridge University warned that the development of AI could spell the end of mankind; and Steve Wozniak is quoted as saying “Will we be gods? Will we be family pets? Or will we be ants that get stepped on? … I don’t know.”
All this implies that autonomous technology equipped with artificial intelligence will, in the next few decades, bring about a fundamental transformation in the structure of society and our role in it. People around the world will have to decide, gradually, whether they prefer increasing power over nature or moral responsibility and emotional satisfaction. What they collectively decide about the course of action the nations should take will determine the future of humanity.

Valedictory Keynote Address

Dr. Moshe Y. Vardi,
Professor of Computer Science,
Rice University,

Ethics Washing in AI

Abstract: Over the past decade Artificial Intelligence in general, and Machine Learning in particular, have made impressive advances in image recognition, game playing, natural-language understanding and more. But there have also been several instances where we saw the harm these technologies can cause when they are deployed too hastily. A Tesla crashed on Autopilot, killing the driver; a self-driving Uber crashed, killing a pedestrian; and commercial face-recognition systems performed terribly in audits on dark-skinned people. In response, there has been much recent talk of AI ethics. Many organizations have produced AI-ethics guidelines, and companies publicize their newly established responsible-AI teams.
But talk is cheap. “Ethics washing” — also called “ethics theater” — is the practice of fabricating or exaggerating a company’s interest in equitable AI systems that work for everyone. An example is a company that promotes “AI for good” initiatives with one hand while selling surveillance tech to governments and corporate customers with the other. I will argue that the ethical lens is too narrow. The real issue is how to deal with technology’s impact on society. Technology is driving the future, but who is doing the steering?


Prof Ashok Jhunjhunwala,
Department of Electrical Engineering,
IIT Madras.

Some steps for India to move towards 100% Renewable Energy

Abstract: India is highly dependent on coal for producing its electricity. Even though India’s per-capita GHG emissions are lower than those of most advanced countries, reflecting its lower per-capita energy consumption, its total GHG emissions are close to those of some of the big polluters. No wonder there is increasing pressure on India to move towards Renewable Energy (RE). India now produces solar and wind-based electricity at a cost lower than or comparable to that of coal-based electricity. However, while coal-based generation can be increased or decreased at will to match demand, solar and wind-based generation cannot, as the quantity produced depends entirely on nature: the sun-hours and the wind-hours. The only way it can match demand is with massive energy storage. This paper examines how office and commercial complexes in India can take the lead in becoming 100% RE users by installing storage and carrying out energy management. It shows that this can be done while reducing the cost of electricity for such complexes, even as they become near-100% RE users. Industrial and housing complexes may use a similar strategy to become RE users, which would move India substantially towards 100% RE. The paper examines the technological and economic challenges that must be overcome to get there.
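The storage-plus-energy-management idea above can be made concrete with a deliberately simplified sizing exercise. All numbers below are illustrative assumptions, not figures from the talk: a flat office load, a block of daytime solar output, and a battery sized to absorb the midday surplus and ride through the non-solar hours.

```python
import numpy as np

# Toy sizing sketch for an office complex aiming at near-100% RE use.
# Illustrative numbers only (assumed, not from the paper).
hours = np.arange(24)
load = np.full(24, 100.0)                        # kW, flat office load (assumed)
solar = np.where((hours >= 8) & (hours < 17),    # 9 sun-hours (assumed)
                 300.0, 0.0)                     # kW solar output while the sun is up

net = solar - load                               # hourly surplus (+) or deficit (-)
soc = np.cumsum(net)                             # cumulative energy balance, kWh
storage_kwh = soc.max() - soc.min()              # battery capacity needed to shift surplus
daily_surplus = net.sum()                        # >= 0 means generation covers the load

print(storage_kwh, daily_surplus)
```

With these assumed profiles, daily generation (2700 kWh) slightly exceeds the load (2400 kWh), and the required battery capacity is the full swing of the cumulative balance; the same bookkeeping applies to measured hourly profiles.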

Prof. Greg Adamson,
Associate Professor (Enterprise),
School of Computing and Information Systems,
University of Melbourne,
Parkville, VIC, Australia.

The research method of Norbert Wiener

Abstract: Norbert Wiener (1894-1964) made fundamental contributions to several areas of knowledge including mathematics, technology, and philosophy. This paper examines his writings to identify common threads which together define his research method. These include the interconnectedness of society and research, the social responsibility of scientists, and the limits to what can be known in the world. These together help to explain the extent of his multi-disciplinary discoveries, including his cybernetics project.

Dr. Junzo Watada,
Professor Emeritus,
Waseda University,

Solving Problems by Artificial Neural Network

Abstract: We first worked on artificial neural networks (ANNs) to understand their structures. In 2001 we proposed double-layered neural networks to solve mean-variance problems, that is, quadratic programming problems such as portfolio problems in financial engineering. The double-layered networks consist of a Hopfield machine and a Boltzmann machine; the two networks collaborate to solve the quadratic mean-variance problem, with the upper-level network selecting the optimal neurons and the lower-level network deciding the optimal weights.
Bi-level programming problems are more complicated; even the bi-level linear programming problem is NP-hard. We found that several incorrect optima had been reported in journal papers. In 2014 we built a hybrid recurrent neural network to solve bi-level quadratic programming, and we have also applied the system to real applications. We explain these research directions.
First, we explain the research directions we have pursued so far. Second, we explain the double-layered neural network for solving mean-variance problems. Third, we explain bi-level programming problems, how to solve these NP-hard problems, and provide a comparison. Fourth, we show an application to a real problem, a winery production model. Fifth, we discuss deep learning.
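For orientation, the mean-variance problem the abstract refers to is a quadratic program; a minimal sketch is given below. The data, the trade-off parameter, and the closed-form solution (via the budget constraint's Lagrange multiplier) are illustrative assumptions, not the speaker's neural-network method, which is described in the talk itself.

```python
import numpy as np

# Toy mean-variance portfolio problem (the class of quadratic programs the
# talk's double-layered networks target), solved here in closed form:
#   minimize  0.5 * w' Sigma w - lam * mu' w   subject to  sum(w) == 1.
# All numbers are illustrative.
mu = np.array([0.08, 0.12, 0.10])              # expected asset returns (assumed)
Sigma = np.array([[0.10, 0.02, 0.01],
                  [0.02, 0.08, 0.03],
                  [0.01, 0.03, 0.09]])          # return covariance (assumed)
lam = 1.0                                       # risk/return trade-off (assumed)

ones = np.ones(3)
Sinv_mu = np.linalg.solve(Sigma, mu)
Sinv_1 = np.linalg.solve(Sigma, ones)
# Lagrange multiplier enforcing the budget constraint sum(w) == 1:
nu = (1.0 - lam * ones @ Sinv_mu) / (ones @ Sinv_1)
w = lam * Sinv_mu + nu * Sinv_1                 # optimal weights

print(w, w.sum())
```

Stationarity here means Sigma @ w - lam * mu equals nu on every coordinate, which is a quick way to check any solver (neural or otherwise) against the KKT conditions.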

Dr Laurie Lau,
Hong Kong.

Cybercrime in Asia

Abstract: In this presentation we briefly examine some contentious issues relating to cybercrime that countries in Asia face today, viewing them through a ‘socio-legal’ lens to ask how these countries can best tackle them amid ever-evolving technological advancement. Some nation blocs in Asia are not up to the task, and we consider why this is so; others, by contrast, are regarded as very advanced in their dealings with cybercrime.
The talk therefore provides a snapshot of how, where, when and why certain nation blocs in Asia are heading in relatively the right direction, and why others are not.

Dr Michael Rigby,
Faculty of Architecture, Building and Planning,
Acting Deputy Director at the Australian Urban Research Infrastructure Network (AURIN),
University of Melbourne, Australia.

Cybernetics of cities and towns: Digital twins and research infrastructures in Australia

Abstract: Analogies of the human body have been used for centuries to describe urban forms and are now being used to describe flows of resources and social interactions in cities and towns captured digitally through data. As the body has multiple systems, cities and towns can also be understood from this perspective with interdependent energy, transport and health sub-systems for example connecting within a complex ecosystem. Digital twins are rapidly emerging as a way of representing this ecosystem, providing researchers with the capability to model at new levels and make sense of new and large volumes of data to deliver improved decisions that can help tackle some of the greatest challenges affecting humanity today. The resulting research not only impacts social, economic and environmental dimensions, but also provides valuable feedback in the form of data, analytics and models for others to draw upon. It is here that the Gemini Principles were created to guide development of digital twins that are ethical by design with purpose, trust and function to deliver better outcomes for the public good. This presentation will introduce these principles and discuss several examples of digital twins to examine ideas around the social impacts of both digital twins and the feedback loops that are created via an ecosystem of connected sub-systems. These aspects will be presented in the context of Wiener’s works on cybernetics, Batty’s comments on closing the loop (2007) and others that provide important directions to follow when seeking to connect the physical, social and digital worlds. As we seek to better understand our cities and towns, the body analogy provides us with a mirror that reflects our own humanity and encourages us to examine the impact of our contributions.

Dr Luis Homem,
Center for Philosophy of Sciences of the University of Lisbon (CFCUL),

Prolegomena to a Critique of Cybernetic Reason

Abstract: Norbert Wiener’s neologism Cybernetics, meaning “navigation” or “steering”, immediately resonates with a question from the Kantian period of the three Critiques (1781-1790): What does it mean to orient oneself in thinking? (1786). Besides, cognition (cognitio; Erkenntnis) and knowledge (scientia; Wissen) are pivotal concepts in regulatory systems and self-organization. Indeed, the regulative principle of teleology in the “Dialectic of Teleological Judgement” in the Critique of Judgment (1790) theoretically expanded but forwent the mechanistic view of nature, later scientifically redeemed by Darwin, nevertheless foreshadowing feedback teleology under subjective autonomy and control. Control theory is rooted in A Dynamical Theory of the Electromagnetic Field (1865) by J. C. Maxwell, with its synthesis of electric and magnetic waves, thus predicting the pervasiveness of radiation energy in physics. Yet even before, it was Young’s double-slit interference experiment (1801), at about the time of Kant’s death (1804) and prior to the commissioning of the Opus Postumum (ca. 1800), that, while asserting the wave theory of light in anticipation of new interpretations of quantum mechanics, first dismantled the Aufklärung edifice and architectonics. It is here suggested not only that the background of natural selection and the new physics inflicted a fatal blow on transcendental idealism, but also, and decisively, that a coup de grâce came upon it through a series of diagonalisation arguments ever since Cantor (1891): the negative limits and aftermath results in logic (Frege, Gödel, and Tarski), computation and information (Turing, Shannon) and quantum physics (Heisenberg and Bohr). Cybernetics as a science shall arise as an inversion of the Kantian architectonics.

Prof. Paul Pangaro,
Professor of Practice,

Human Computer Interaction Institute,
Carnegie Mellon University.

Wiener + Macy Meetings + AI = Piloting a New Course

Abstract: The title of Norbert Wiener’s second book is widely misquoted. As was clear in the typesetting of his original title page, he was concerned about “The Human Use of Human Beings” — Wiener’s emphasis on the first “Human” is generally forgotten. His purpose is reinforced by his subtitle, “Cybernetics and Society”, for social contexts are where human beings can conserve or devalue their own human-ness.
This presentation explores specific ways in which humanity is not treating itself so well in our 21st-century era, especially in regard to digital technology, much as Wiener feared when writing in 1950. His warnings of de-humanizing automation at the hands of digital machines were prescient, as we now live in the world he feared. We have seen that our “know-how” about digital machines has supplanted the “know-what” — again in Wiener’s phrasing — because as a society we bolt ahead before considering “what our purpose is to be.”
Wiener was joined by others in initiating the field of Cybernetics. McCulloch and Mead and von Foerster, gathering a wide array of experts across all the academic disciplines, cut a wide and influential swath with their Macy Conferences. By combining this legacy of Cybernetics with the inspiration that a revival and revision of the Macy Meetings can bring, we chart a fresh course for addressing the tyrannies of today’s digital tech. Cybernetic concepts give foundation and cybernetic practice guides our actions toward a future more coherent and humane than the one we have today.
This presentation offers actionable paths forward that are restorative of Wiener’s values. As he wrote, “The hour is very late, and the choice of good or evil knocks at our door.”

Dr Russell Andrews,
Nanotechnology & Smart Systems,
NASA Ames Research Center,
Moffett Field, CA, USA

The Brain-Machine Interface: Nanotechnology and Cybernetics 60 Years After Norbert Wiener

Abstract: The heart of Norbert Wiener’s cybernetics is the concept of feedback control. The brain-machine interface (BMI) at the time of his death (1964) consisted of metal electrodes that delivered current into brain tissue without feedback. It was in Wiener’s time (1959) that Richard Feynman delivered his famed introduction to the nanorealm, “There’s Plenty of Room at the Bottom”. Their research paths crossed in that Wiener’s work on integrals predated Feynman’s. Cybernetics and nanolevel techniques for the brain have matured over the ensuing 60 years.
Despite the effectiveness of deep brain stimulation (DBS) for movement disorders such as Parkinson’s disease before the end of the 20th century, DBS electrodes have remained essentially unchanged until quite recently.
The first clinically relevant BMI system incorporating feedback, established over the past decade, was for drug-resistant epilepsy. The device incorporates both stimulating and recording electrodes so that stimulation to abort an impending seizure can be guided by the brain’s focal electrical activity.
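The feedback principle behind such a responsive device can be caricatured in a few lines: monitor a signal, detect when activity crosses a threshold, and log a stimulation event. Everything below is a schematic illustration with synthetic data and an arbitrary threshold; it has no clinical parameters and is not the device's algorithm.

```python
import numpy as np

# Schematic closed-loop (responsive) stimulation sketch: watch an
# electrographic signal and trigger stimulation on a threshold crossing.
# Signal, burst location, and threshold are all illustrative assumptions.
rng = np.random.default_rng(1)
signal = rng.normal(0.0, 1.0, 200)   # synthetic baseline activity
signal[120:130] += 6.0               # injected burst standing in for an onset

THRESHOLD = 4.0                      # arbitrary detection threshold
stim_log = [t for t, v in enumerate(signal) if v > THRESHOLD]

print(stim_log)                      # sample indices where stimulation fired
```

The open-loop electrodes of Wiener's era correspond to stimulating on a fixed schedule regardless of `signal`; the closed loop replaces the schedule with the detector.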
Research in both animal models and humans is currently investigating feedback-guided (or closed-loop) DBS for Parkinson’s disease and other movement disorders. It is hoped that such closed-loop techniques might expand the efficacy of DBS beyond movement disorders not only to epilepsy but also to mood disorders such as severe depression and obsessive-compulsive disorder.
Cybernetics in the BMI is not limited to feedback involving brain electrical activity. The brain communicates both electrically and chemically; to date feedback has been electrical because techniques to monitor chemical (neurotransmitter) feedback have been lacking. However, a biologically inspired synapse with neurotransmitter-mediated feedback is being developed.
Such devices will increasingly blur the distinction between the brain and the machine. They will provide insights into the realm of brain plasticity that are crucial for addressing neurorepair for trauma and stroke, as well as neurodegenerative disorders such as Alzheimer’s disease.
Norbert Wiener’s contributions to cybernetics – many decades later – are just beginning to bear fruit for the BMI!

Dr. Simone Garatti,
Associate Professor,
Politecnico di Milano, Italy.

Data-driven decision making via the scenario approach

Abstract: The increased complexity of the problems that modern science and engineering face, together with the increasing availability of data, has driven a paradigm shift in decision-making: ever more often, model-based approaches fall short of providing adequate solutions and are replaced by data-driven methods. The latter use the a-posteriori knowledge derived from observations directly for design purposes, without reconstructing the mechanism through which the data are generated. A reliable use of the solutions obtained from data-driven approaches, however, demands new theoretical results capable of underpinning the inherent empiricism with solid guarantees. This talk introduces the scenario approach, a relatively new, yet well-consolidated, framework for data-driven decision making. Interestingly, within this framework, a recent theory has unveiled a profound and extremely general link between the risk, defined as the probability of underperforming on new out-of-sample data, and an observable quantity called the “complexity of the solution”. This has enabled the introduction of universal estimators of the risk, which is a fundamental indicator of solution quality, and has paved the way for a reliable use of data-driven methods in automated, human-free decision-making.
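A minimal sketch of the flavor of the scenario approach, under assumptions of my own choosing (a one-dimensional threshold design and a Gaussian scenario distribution, neither from the talk): sample N scenarios, take the decision that satisfies all of them, and then observe that the risk on fresh data is small, driven by N and by the low complexity of the solution (here a single support scenario, the maximum).

```python
import numpy as np

# Scenario-approach sketch (illustrative, not the speaker's formulation).
# Decision problem:  min x  subject to  x >= delta_i  for every sampled
# scenario delta_i.  The solution is the sample maximum, and its
# "complexity" (number of support scenarios) is 1.
rng = np.random.default_rng(0)
N = 1000                               # number of sampled scenarios
delta = rng.normal(size=N)             # scenarios (assumed distribution)
x_star = delta.max()                   # scenario solution

# Empirical out-of-sample risk: probability that a fresh scenario
# violates the design.  Scenario theory bounds this quantity using only
# N and the observed complexity, without knowing the distribution.
fresh = rng.normal(size=100_000)
risk = np.mean(fresh > x_star)
print(x_star, risk)
```

The point of the theory is that the small value of `risk` could have been certified in advance from N and the complexity alone, which is what makes the empiricism reliable.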

Dr. Witold Pedrycz,
Canada Research Chair,
IEEE Fellow, University of Alberta,

Design, Interpretability, and Explainability of Models in the Environment of Granular Computing and Federated Learning

Abstract: In data analytics, system modeling, and decision-making, the aspects of interpretability and explainability are of paramount relevance, just to refer here to explainable Artificial Intelligence (XAI). They are especially timely in light of the increasing complexity of systems one has to cope with and ultimate concerns about privacy and security of data and models. With the omnipresence of mobile devices, distributed data, and security and privacy restrictions, federated learning becomes a feasible development alternative.

We advocate that two factors immensely contribute to the realization of the above requirements, namely (i) a suitable level of abstraction, along with its hierarchical aspects, in describing the problem, and (ii) a logic fabric of the resultant constructs. It is demonstrated that their conceptualization and subsequent realization can be conveniently carried out with the use of information granules (for example, fuzzy sets, sets, rough sets, and the like).

Information granules are building blocks forming the interpretable environment, capturing the essence of data and revealing the key relationships existing there. Their emergence is supported by a systematic and focused analysis of data. At the same time, their initialization is specified by stakeholders and/or the owners and users of data. We present a comprehensive discussion of the design of information granules and their description, engaging an innovative mechanism of federated unsupervised learning in which information granules are constructed and refined with the use of collaborative schemes of clustering.

We offer a detailed study of the quantification of interpretability of functional rule-based models with rules of the form “if x is A then y = f(x)”, with the condition parts described by information granules. The interpretability mechanisms focus on a systematic elevation of the interpretability of the conditions and conclusions of the rules. It is shown that augmenting the interpretability of conditions is achieved by (i) decomposing a multivariable information granule into its one-dimensional components, (ii) delivering their symbolic characterization, and (iii) carrying out a process of linguistic approximation. A hierarchy of interpretation mechanisms is systematically established. We also discuss how this increased interpretability is associated with reduced accuracy of the rules, and how sound trade-offs between these features are formed.
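A rule base of the form “if x is A then y = f(x)” can be sketched concretely as follows. The triangular membership functions standing in for the information granules, the two rules, and their local models are all illustrative assumptions, not the models studied in the talk.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b on support [a, c]."""
    return np.maximum(0.0, np.minimum((x - a) / (b - a), (c - x) / (c - b)))

# Two illustrative rules "if x is A_k then y = f_k(x)":
#   granule "low"  (peak at 1) -> y = 2x + 1
#   granule "high" (peak at 2) -> y = -x + 5
rules = [
    (lambda x: tri(x, 0.0, 1.0, 2.0), lambda x: 2.0 * x + 1.0),
    (lambda x: tri(x, 1.0, 2.0, 3.0), lambda x: -x + 5.0),
]

def infer(x):
    """Activation-weighted average of rule conclusions.
    Defined only where at least one granule fires (w.sum() > 0)."""
    w = np.array([A(x) for A, _ in rules])
    y = np.array([f(x) for _, f in rules])
    return float(w @ y / w.sum())

print(infer(1.0), infer(1.5))
```

At x = 1.0 only the first granule fires, so the output is the first rule’s local model; at x = 1.5 both fire equally and the output is the average of the two conclusions. Interpretability lives in the granules A_k; accuracy lives in the local models f_k, which is the trade-off the abstract discusses.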
