The power of an astute brief
Why mastery of Framing problems and setting clear project Goals will remain a key skill for innovation leaders
‘A problem well-defined is a problem half solved’
Charles Kettering (1876-1958), father of seven major American inventions
'The greatest challenge to any thinker is stating the problem in a way that will allow a solution.'
Bertrand Russell (1872-1970), British philosopher
'If I had an hour to solve a problem and my life depended on the solution, I would spend the first 55 minutes determining the proper question to ask, for once I know the proper question, I could solve the problem in less than 5 minutes.'
Albert Einstein (1879-1955), scientist
Every design and innovation leader I know is talking about how AI might raise productivity. The preoccupation is understandable when so many projects misfire, and even those that make it to market often involve much wasted effort. The good news is that AI will accelerate many parts of the process, from project research to concept visualisation. However, one of the most significant efficiency gains leaders can make comes from the very human job of ensuring their teams focus on the right problems and tackle them in the right way.
A Framing of the problem and a set of Goals for the project form the core of any good innovation brief. Done well, these two things focus each project team member on the right challenge, provide a clear understanding of the problem, and offer a perspective that can guide the whole team to the solution. Done badly, they waste time and resources, fuel frustration and drain energy.
As well as providing a touchstone for the project team, a well-defined brief also helps other interested parties to understand what you aim to achieve and why – building support for the project. Good Framing and Goals are especially critical in new, ambiguous and multifaceted problem areas, as well as early in the process when the cognitive ‘fog’ can be especially dense.
Framing
The importance of a well-diagnosed problem has long been established, but it was only during the 2010s craze for Design Thinking that the term 'problem framing' really gained prominence. It made designers sound more strategic, sure, but what does it actually mean? And what makes me so certain that AI will never be able to take over the framing done by human beings?
Like using a camera, Framing provides both perspective and focus. With a camera, we decide the angle from which to approach the subject and choose what to exclude from the shot. Similarly, a helpful Frame provides a meaningful and action-orientated point of view on the problem at hand and, at the same time, crystallises, clarifies and organises its critical aspects. And, while always guided by insights gained in the past, good Framing ideally provides the team with a fresh lens on the problem.
There are both hard and soft sides to Framing. The rational side often consists of distilling a complex set of issues down to a few critical variables or dimensions, perhaps captured in a 2×2 matrix or a mental model. The emotive side can be trickier, as it often involves articulating critical but uncomfortable personal or political issues. So, when conversations begin to get near the 'elephant in the room', it's best to name and claim the animal with care. Opening sentences such as 'It's probably just me, but…' or 'You've probably thought of this already, but…' can help here.
Framing is an innate human ability. It relies heavily on our experience, affinity, judgement and perspective-taking – capabilities that computers are inherently weak at and that AI will never match.
AI is getting better and better at answering our questions, but it will remain our job to pose the right questions.
That said, the resources we can marshal to tackle a problem influence how we Frame it. As I've written elsewhere, AI will take on more tasks in the innovation process as it develops, and this will shape how we approach problems.
‘Most work, after all, is comprised of a mix of tasks: some of which are better suited to us and some of which could one day be done better by machines. As the capabilities of these grow, managers will redesign work to take advantage of the strengths of both their human workers and their automated assistants.’
Kevin McCullagh, 'Human machine interlace', Perspective 05, July 2018
Three dimensions of Framing
Framing is a high-level and abstract topic, the nature of which is often hard to articulate. The book Framers: Human Advantage in an Age of Technology and Turmoil does a good job of explaining the process in more depth. I found the authors’ three dimensions of Causality, Counterfactuals, and Constraints particularly insightful.
Causality – thinking about cause and effect
This first dimension of Framing is the most unconscious. We understand the world through cause and effect, and some of those understandings are better than others. For example, some might put a product's success down to its being launched on a lucky date, while others might locate its fortune in how it anticipated a crucial shift in consumer expectations. Generalising across different situations, we use this causal thinking to make sense of the world and to predict the consequences of actions, whether performed by humans or by technology.
Good leaders have a firm grasp of causality and how things work. They can identify the fundamental forces acting on the problem at hand.
So, how we approach any project challenge is strongly shaped by our understanding of the forces acting on it. Leaders often express their perspective on a problem with a mental model, metaphor or analogy that makes its critical aspects more relatable. For example, 'We should try to become the Nespresso of our category'.
Coupled with human agency, causal thinking separates us from AI. We act on the world and experience the effects of those actions – for ourselves, our team, our company, our professional peer group or wider society. So, when we select a Frame, we choose with a sense of responsibility for how we want to reshape things. By contrast, AI has no sense of responsibility and therefore no skin in the project game.
Counterfactuals – alternative solutions
When planning a project, we start to run through a range of possibilities and alternative futures, judging each in terms of its potential. While project Goals should not suggest a particular solution, consciously framing out – excluding – certain solutions may prove sensible. Considering which directions to Frame in and out helps to bound the project space.
Imagining a range of counterfactuals is a good way to tap into our tacit knowledge of how our part of the world works, surfacing extra insights from experience. By integrating alternative routes into our Frame, we encourage the team to remain open-minded and to feel a greater sense of agency as the range of possibilities becomes clearer.
Again, AI is little match for humans when the task is to envision scenarios that do not exist. Reliable training data, the essential basis for AI, is decidedly patchy when it comes to the future!
Constraints – creative guide rails
As every designer knows, a blank canvas with unlimited possibilities is the opposite of inspiring. Complete creative freedom is neither realistic nor desirable. Charles Eames once said, 'Design depends largely on constraints'. These cognitive kerbs (or curbs, for stateside readers) help guide and focus exploration. Some will be externally imposed, for example by the laws of physics or our manager's budget, while others might be self-imposed to avoid well-worn grooves and stimulate new thinking.
One way to make constraints more concrete is to identify key dimensions of the challenge – target markets, for example – and then spell out which options are in and which are out of scope. However you define your constraints, one check worth running is that they are internally consistent and do not conflict with one another.
Like causal thinking and generating counterfactuals, defining constraints is another very human skill – and one that is far more art than science. Doing it well requires a mix of rigorous and creative thinking, plus experience in framing and solving related problems. Algorithms will not be able to impose discerning constraints any time soon. 'Computers calculate, but minds imagine'.