Different Minds: why we need human capabilities and machine intelligence

“There have been great societies that did not use the wheel, but there have been no societies that did not tell stories.” —Ursula K. Le Guin

To listen to some scholars of decision-making, it is a great wonder that humans have survived as long as they have.

We are constantly prone to biases, rely on simplistic and flawed heuristics and, even when trained in statistics, fail to apply the most basic rules of probability effectively. Despite this, humans have been a remarkable evolutionary success story, adapting to many different environments. One of the keys to that success is not so much our ability to adapt physically as our skill in adaptive reasoning.

To better understand this apparent paradox, it is worth exploring recent developments within the psychology discipline. They reveal that the human brain is capable of navigating highly complex and uncertain environments with a sophistication beyond that of artificial forms of intelligence. This raises interesting and timely questions about the optimal balance between man and machine. The respective strengths of each may mean that, in complex fields such as fund management, superior decision-making performance is best achieved by combining human and machine intelligence, rather than choosing between them.

Is the brain really a computer?

Back in the mid-20th century, cognition re-emerged as a key field of study in psychology. In a reaction against the then dominant behaviourist perspective, psychologists once again began to speculate about, and research, processes going on within the person, rather than simply relying on observation of their external behaviour.

This development in psychological thinking was strongly influenced by the simultaneously emerging fields of information and computer science. As a result, a dominant metaphor in cognitive science became the brain-as-computer.  This analogy left little room for the role of emotion, except as a disturbance of optimal cognitive function, or, at best, as a signalling system to indicate the gap between goals and outcomes.  Given this history, it is therefore unsurprising that many efforts to improve the quality of financial decision-making have focused on replacing human with machine intelligence.

However, this approach may turn out to have significant limitations – given the growing evidence that humans and computers think in very different ways, and have very different strengths.

The idealised rational approach falls short

There are many accounts of what the ideal, rational approach to decision-making should look like. Most have a good deal in common with the six steps described by the psychologist Max Bazerman:

(1) perfectly define the problem, (2) identify all criteria, (3) accurately weigh all of the criteria according to your preferences, (4) know all relevant alternatives, (5) accurately assess each alternative based on each criterion, and (6) accurately calculate and choose the alternative with the highest perceived value [1].

This seems a plausible, even sensible approach. After all, it is very close to what we were taught at school, where typically we worked with well-defined problems in contexts where there was a right answer and a known set of potential outcomes. However, it turns out to be an idealised approach, giving a poor account of much human decision-making and best suited to what have been described as ‘small world problems’.
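To see how narrow this idealised procedure is, here is a minimal Python sketch (all criteria, weights and figures are invented for illustration): once every one of Bazerman’s six inputs is known, the ‘decision’ reduces to a trivial weighted-sum calculation. The hard part of real decisions is supplying the inputs this sketch assumes away.

```python
# Illustrative only: with the problem defined, all criteria weighted,
# and all alternatives scored (steps 1-5), step 6 of the idealised
# procedure is a simple weighted-sum maximisation.

def best_alternative(alternatives, weights):
    """Return the alternative with the highest weighted score."""
    def weighted_score(scores):
        return sum(weights[c] * s for c, s in scores.items())
    return max(alternatives, key=lambda name: weighted_score(alternatives[name]))

# A toy, fully specified 'small world' decision (invented numbers;
# every criterion is scored so that higher is better).
weights = {"expected_return": 0.5, "liquidity": 0.3, "low_cost": 0.2}
alternatives = {
    "fund_a": {"expected_return": 0.7, "liquidity": 0.9, "low_cost": 0.4},
    "fund_b": {"expected_return": 0.8, "liquidity": 0.5, "low_cost": 0.6},
}
print(best_alternative(alternatives, weights))  # -> fund_a (0.70 vs 0.67)
```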

‘Small world’ vs. ‘large world’ problems

Small world problems can be characterized as having the following attributes:

  1. A well-defined task and goal
  2. A known set of choices and potential outcomes
  3. Highly replicable processes
  4. Known (or at the very least knowable) probability distributions associated with outcomes given any choice

Because small world problems are very tractable to study in the laboratory, they have dominated judgement and decision-making research. However, other fields of study have been more interested in what can be described as ‘large world problems’. These are characterised by:

  1. Ill-structured problems.
  2. Deep uncertainty (unknown and sometimes unknowable probability distributions).
  3. Complex and dynamic environments.
  4. Little replicability.

In many cases, large world problems are also reflexive – ie they depend on estimating the likely actions of other people who are themselves basing their actions on the likely actions of others. They may also be characterised by time stress, shifting goals, and multiple stakeholders with differing perspectives.  Large world problems are much less tractable to laboratory study, and have been much more studied in the field, notably by naturalistic decision-making researchers [2].

As technology and algorithms improve, computers are becoming highly effective tools for tackling small world problems, since they are less subject to failures of probabilistic reasoning. However, computers are poorly suited to solving large world problems. This is a domain where humans seem to have a distinct advantage.

Storytelling as a solution to uncertainty

Humans are storytellers. One of our key evolutionary adaptations to a complex, uncertain world is the ability to weave stories, which we use as a device for making and sharing meaning. These narratives give us a basis for action in the face of uncertainty, as well as a tool for persuading others to work with us. Stories are inextricably connected with our emotions: they enable us to draw on the past and to project ourselves forward into the future. Through stories we experience possibilities as if we were there, feeling the emotions evoked by possible outcomes. Without stories and emotions to guide us, we have no way to decide what to pay attention to in a world in which available sensory information always massively exceeds our capacity to process it. In this way stories and emotions are important tools of rational decision-making. However, just as we can make mistakes of calculation, we may make mistakes of narrative and emotion.

Like the changing world in which we live, these stories have consistencies over time, but we also update them to reflect our perceptions of change around us. Karl Weick [3] describes this process as ‘sensemaking’: the ongoing process through which we make meaning of our lived experience. Sensemaking is an emotional process because emotions are an inescapable (and often useful) element of human decision-making.

Using in-depth research into fund manager decision-making, David Tuckett and colleagues [6, 7] have developed a new theory of human decision-making that places what we know about the role of emotions and story-making at its heart: Conviction Narrative Theory.

Conviction Narrative Theory

Conviction Narrative Theory proposes that, faced with uncertainty, humans construct narratives about the future outcomes of their actions. They develop these to the point where they have a subjective sense of conviction about a course of action.

These narratives both invoke and manage emotions. These may be pleasurable emotions about future gain (‘approach emotions’) or anxious and fearful emotions about future loss (‘avoidance emotions’).

Narrative resources brought to bear in this process include the full panoply of statistical and probabilistic techniques that humans have devised. However, in this way of understanding human thinking, such techniques are just one of many narrative resources available. Others include subjective judgements about the trustworthiness of information, an unfolding sense of ways in which the world may be changing, and so on.

Placing narrative and meaning-making at the heart of understanding human action gives us a route to understanding the particular advantages that humans have over machines – especially in the conditions of deep uncertainty (ie ‘large world problems’) which humans have evolved to confront.

A particular insight of Conviction Narrative Theory is to distinguish between the different narrative and emotion configurations employed in resilient decision-making and decision-making which is driven by the avoidance of anxiety:

  • Resilient decision-making is characterised by a mixture of ‘approach’ and ‘avoidance’ emotions and is open to new information and the possibility of being wrong.
  • Avoidant decision-making is characterised by a polarisation towards either ‘approach emotions’ (anxiety is avoided by discounting all information which does not support a preferred action) or ‘avoidance emotions’ (anxiety is avoided through the relief of avoiding an action and by discounting information relevant to its benefits). Thus avoidant decision-making is insensitive to information that conflicts with a preferred course of action.

The efficacy of Conviction Narrative Theory is beginning to be demonstrated through studies in which the balance of ‘approach’ and ‘avoidance’ emotions in financial news sources is used to predict market crises [8].
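By way of illustration only (the published work [8] uses its own lexicons and a more careful methodology), the core idea of scoring the balance of ‘approach’ and ‘avoidance’ emotion in text can be sketched in a few lines. The word lists below are invented:

```python
# Illustrative sketch only: score text by the relative balance of
# 'approach' and 'avoidance' emotion words. The word lists here are
# invented; the research cited at [8] uses its own lexicons and a
# more sophisticated methodology.

import re

APPROACH_WORDS = {"optimism", "confident", "opportunity", "gain", "excitement"}
AVOIDANCE_WORDS = {"fear", "anxiety", "worry", "risk", "loss"}

def relative_sentiment(text):
    """(approach count - avoidance count) / total words, in [-1, 1]."""
    words = re.findall(r"[a-z']+", text.lower())
    approach = sum(w in APPROACH_WORDS for w in words)
    avoidance = sum(w in AVOIDANCE_WORDS for w in words)
    return (approach - avoidance) / max(len(words), 1)

headline = "Investors voice fear and worry over mounting loss"
print(relative_sentiment(headline))  # negative: avoidance emotions dominate
```

In the spirit of the research, it is the shift in this balance over time, aggregated across many news sources, that would serve as the warning signal, rather than any single score.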

Joining forces

So where does this leave the role of computers and big data in improving financial decision-making?

It suggests that the role of machine intelligence should not be to replace humans but to complement them, recognising the particular strengths and weaknesses of each.

This would effectively be a decision-support approach, but not one in which technology is used to try to make people think like computers.

Instead, the role of machine intelligence should be twofold:

  1. Computers should be used to ensure consistent, rapid, accurate and bias-free comparison of different action-options in domains which approximate to small world problems.
  2. For large world problems, computers can play a role in supporting and enhancing the human capacity to engage in resilient approaches to decision-making (which we know are well-suited to managing complexity, ambiguity, and rapidly changing conditions).

Those working with financial markets often use computers effectively to support decisions about small world problems, such as calculating the value of an asset given certain assumptions. However, these problems are embedded in a context of deep uncertainty and complexity. An important potential role of technology in this large world context is to help monitor and support human conviction narratives. A very valuable part of this will be to signal when the human side of the partnership is falling prey to anxiety-driven avoidance of relevant information.
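To make the first point concrete, the asset valuation mentioned above is exactly the kind of small world calculation a computer handles well: once the assumptions are fixed, it is pure arithmetic, performed quickly and without bias. A minimal sketch with invented figures:

```python
# A 'small world' calculation: the arithmetic is mechanical, but the
# assumptions (which cash flows? which discount rate?) belong to the
# deeply uncertain 'large world', where human judgement enters.

def present_value(cash_flows, discount_rate):
    """Discounted value of cash_flows[t], received at the end of year t + 1."""
    return sum(cf / (1 + discount_rate) ** (t + 1)
               for t, cf in enumerate(cash_flows))

# Invented figures: five annual cash flows of 100, discounted at 8%.
print(round(present_value([100] * 5, 0.08), 2))  # about 399.27
```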

The question is not whether human or machine intelligence is better, but rather how these very different forms of intelligence may best be used to complement each other.

* This post was originally written for Essentia Analytics; a version first appeared on their site.

1. Bazerman, M.H., Judgment in Managerial Decision Making. 2002, New York: Wiley.

2. Klein, G., Streetlights and Shadows: Searching for the Keys to Adaptive Decision-Making. 2009, Cambridge, MA: MIT Press.

3. Weick, K.E., Sensemaking in Organizations (Foundations for Organizational Science). 1995, Thousand Oaks: Sage Publications.

4. Vohra, S. and M. Fenton-O’Creevy, Intuition, expertise and emotion in the decision making of investment bank traders, in Handbook of Research Methods on Intuition, M. Sinclair, Editor. 2014, Edward Elgar: Cheltenham, UK. p. 88-100.

5. Fenton-O’Creevy, M., et al., Emotion regulation and trader expertise: heart rate variability on the trading floor. Journal of Neuroscience, Psychology and Economics, 2012. 5(4): p. 227-237.

6. Chong, K. and D. Tuckett, Constructing Conviction through Action and Narrative: How Money Managers Manage Uncertainty and the Consequences for Financial Market Functioning. Socio-Economic Review, 2014: p. 1-26.

7. Tuckett, D., Minding the Markets: An Emotional Finance View of Financial Instability. 2011, London: Palgrave Macmillan.

8. Nyman, R., D. Tuckett, et al., News and narratives in financial systems: Exploiting big data for systemic risk assessment. 2015, Bank of England Working Paper series.
