The recent emergence of COVID, with its unpredictable daily surprises and reversals, has been administering a harsh medicine to our literate and numerate community of modellers and forecasters. The Cartesian narrative whereby the new tools offered by big data, algorithms, cognitive psychology and the like finally endow humanity with the means to achieve progress is under strain. At a time when both statistical and mathematical modelling are passing through a period of soul searching, it is perhaps useful to revisit the opportunities and the limits of our use of mathematics to govern human affairs. This does not mean abandoning our tools, but reconsidering our relation with risk, uncertainty and ignorance, in the context of a new paradigm of responsible modelling for sustainability.
This session aims to provide an overview of the definitions, mathematical tools and innovation practices related to the Sustainable Development Goals. The session is designed to foster discussion among policy makers, data scientists and stakeholders.
The aim of this session is twofold. On the one hand, it will provide examples of tools that have made their way into adoption by policy-makers, and of the often troubled route they took in doing so. The focus will be on themes close to sustainability, innovation and forecasting. On the other hand, we aim to create an intellectual bridge between academic and institutional voices, in order to discuss better ways to smooth the spillover from the former to the latter and to improve communication between the two.
Conventional, neoclassical economics assumes perfectly rational agents (firms, consumers, investors) who face well-defined problems and arrive at optimal behaviour consistent with — in equilibrium with — the overall outcome caused by this behaviour. This rational, equilibrium system produces an elegant economics, but is restrictive and often unrealistic. Complexity economics relaxes these assumptions. It assumes that agents differ, that they have imperfect information about other agents and must, therefore, try to make sense of the situation they face. Agents explore, react and constantly change their actions and strategies in response to the outcome they mutually create. The resulting outcome may not be in equilibrium and may display patterns and emergent phenomena not visible to equilibrium analysis. The economy becomes something not given and existing but constantly forming from a developing set of actions, strategies and beliefs — something not mechanistic, static, timeless and perfect but organic, always creating itself, alive and full of messy vitality.
What is an emerging technology? The increased availability of computational power and data has paved the way for innovative quantitative approaches to detecting emerging technologies, especially in the scientometric field. However, there is little consensus on the very definition of emerging technology, and different bibliometric tools may lead to very diverse results. This session will explore the potential and limitations of such approaches, also from a research-design perspective.
Patents are not only a legal instrument for the protection of IP rights: from a management and economic perspective, they represent an unrivalled source of data on the evolution of different technological fields, their geographical diffusion, citation and collaboration networks, and the most active inventors and grantees. Hence, patents can be used as a proxy for analysing the state of the art of a given technology and, based on its evolution, they may provide grounds for market prediction analysis, enhancing the sustainability potential of the technologies considered. There are, however, limitations that need to be taken into consideration, and these will be discussed in this session.
With the widespread use of machine learning, there have been serious societal consequences from using black box models for high-stakes decisions, including flawed bail and parole decisions in criminal justice. Explanations for black box models are not reliable and can be misleading. If we use interpretable machine learning models instead, they come with their own explanations, which are faithful to what the model actually computes. I will give several reasons why we should use interpretable models, the most compelling of which is that, for high-stakes decisions, interpretable models do not seem to lose accuracy relative to black boxes. In fact, the opposite is often true: when we understand what a model is doing, we can troubleshoot it and ultimately gain accuracy.
Financial investments in sustainability have been guided by an increasing interest in ESG indicators. These are used to evaluate the impact of companies on society under three main components: Environmental, Social and Governance. Research on ESG has often glorified this investment strategy, showing that it improves financial performance over time. However, there is little agreement on how ESG scores should be calculated and modelled, and it is often unclear how this information is then used by investors.
Building on the knowledge and models available from both academia and industry, this session aims to bring transparency to the ESG topic from a modelling and methodological perspective, and to discuss its application in both the public and private domains.
Nowadays, scientific/expert advisory bodies are a well-established and increasingly pervasive feature of decision-making processes within a growing number of national governments. However, it is widely acknowledged that public policy is a field in which technical analysis must deal with uncertain facts, spurious relationships between variables, and trans-scientific dilemmas that put values in dispute, while the stakes are high and decisions are ever more frequently urgent as well as constrained by political boundaries.
In light of this context, the aim of this session is to examine the complex relationship between scientific advice and government policy-making, framing both (i) the risks that may arise in the shift from “evidence-based policy-making” to “policy-based evidence-making” and (ii) ways and means to find a proper balance between ever-changing political needs and the autonomy of scientific/expert advisory bodies.
The final roundtable will be devoted to discussing the institutional-level policy implications of the topics raised during the conference. The session will involve representatives of public and private funding agencies, policy makers and academics, who will discuss which tools can be implemented to address policy objectives by means of forecasting techniques, paying particular attention to the Sustainable Development Goals and their assessment through scientific advice. Fostering a responsible approach to decision-making processes in both the public and private sectors is indeed a core institutional challenge for the stakeholders (research councils, advisory bodies, international organisations, public and private consultancies) involved in science advice for governments and societies at large.