2. Managing Complexity
by Marvin Cheung, Head of Research and Strategy
About the Next Series
We advise a wide range of clients at the Venture Strategy Group, from first-time entrepreneurs to Fortune 500 executives. As part of Unbuilt Labs' Think Tank Ecosystem, we specialize in translating academic and technical research into business insights. We take a research-driven approach in the Next Series to demonstrate the enduring principles behind best practices and identify new ways to think about business challenges. Feel free to sign up for an advising session if you want to explore a topic further.
Schedule a Free Introductory Advising Session
An Introduction to Complexity
In a conversation, when something takes too long to explain, requires too much contextual information, or has uncertain components, we might say it is complicated. The idea of complexity goes a little further. While we use the word complicated to describe a challenging and messy situation, we use the word complexity to describe a complicated situation we cannot fully comprehend.
Fixing a classic car is complicated. It may require parts to be shipped from abroad, or niche expertise to fix the particular engine. Fixing an oil spill is complex. It is not just a matter of scooping up the oil. There are second and third order effects that are difficult to predict, and you have no way of simulating an oil spill of the same magnitude at the same time and location, nor can you unspill it.
Many strategy challenges can be considered complex, especially novel strategy problems that fall under the umbrella of innovation. Say you are considering launching a new feature you do not have the capacity to build in-house. Regardless of your hiring plan, you will have to adjust your pricing strategy to reflect the increase in costs. How will this affect customers in different price tiers, and will the aggregate effect increase or decrease profits?
Complexity defies the many efforts to study it. In many ways, that is the nature of complexity. It lies beyond the boundaries of human knowledge, and it confounds even the most sophisticated minds and technology. While there is no metric to measure complexity, we can still see why the world is increasing in complexity.
As we become increasingly connected, traditional boundaries of markets start to break down. Long gone are the insular markets with homogeneous demographics. Companies small and large, with a local or global footprint, find themselves subjected to rapidly changing macro and micro dynamics from international influences. Companies "cannot develop models of the increasingly complex environment in which they operate", notes John C. Camillus in an article published by the Harvard Business Review.
While we are still far from being able to understand complexity itself, there is general consensus that the number of stakeholders involved contributes greatly to the complexity of a challenge. One reason is that each stakeholder has a different set of perspectives and needs that we must account for. Another, less appreciated, reason is that the reliability of the information on individual stakeholders decreases as the number of stakeholders increases.
In the next two sections, we will first investigate strategies to manage complexity, and then establish basic information standards for evidence-based decision making in high-uncertainty environments.
Contents:
Innovating systematically in complex conditions through guided trial and error
Trial and error in complex conditions
Existing innovation best practices
Innovating systematically in complex conditions
Basic Research Standards for Evidence-Based Decision-Making in Business Environments
Research in Business Environments
Part I: Problem formulation
Part II: Solution Generation
Part IIA: Ethics
Part IIB: Comprehensiveness
Part IIC: Managing Uncertainties
Part IID: Errors
Part III: Effective Communications
1. Innovating systematically in complex conditions through guided trial and error
Trial and error in complex conditions
Architecture professors Horst Rittel and Melvin Webber from UC Berkeley observed some of the properties of complexity in 1973 and called the type of challenge that involves complexity "wicked problems". At a basic level, a challenge is wicked when its factors are deeply interwoven and cannot be understood in isolation. There is also no way to fully account for a solution's knock-on effects until it is executed.
Common innovation challenges, such as finding product-market fit or scaling, can be considered wicked problems. Even with the most sophisticated technologies, there is no way to know for certain whether a new product will succeed without testing it. Its success will depend on many factors, ranging from the quality of the product to its marketing and final design.
We can use variations of a puzzle as a metaphor for wicked problems and their inverse, "tame" problems. A tame problem is like a thousand-piece puzzle from a box: you can follow the picture, start with the edge pieces, and work towards the middle; it is not easy, but there is a definitive goal and the relationship between the parts is clear.
We can model this through a tree diagram. There are one thousand pieces originally. You can start with any piece. Each time you find a match, the number of possible pieces decreases by one. As with many other games, there are established best practices that can facilitate the process. For example, you can start with the corner pieces and work your way into the center.
A wicked problem is like a hundred ten-piece puzzles mixed into the same box. Some of the ten-piece puzzles will produce a more desirable picture than others, but you only have time to put one or two complete puzzles together. Immediately, we begin to see some characteristics of real-world challenges. First, there is very little indication of what a successful end result looks like. Second, there are resource constraints.
By definition, we know that some form of trial and error is required. There are, however, different kinds of trial and error, as outlined in Donald T. Piele and Larry E. Wood's essay "Thinking Strategies with the Computer" as part of the anthology The Best of Creative Computing Volume 3 published in 1980 by Creative Computing Press. The three, with varying degrees of effectiveness, are random, systematic, and guided trial and error.
The most basic strategy is random trial and error. This would be equivalent to moving the algebraic symbols around randomly when you are stuck on a math question. You pick up a random piece, build the first puzzle, then pick up another random piece and build the second puzzle. At the end of this process, you have two complete puzzles. The chances of you liking both puzzles are the same as the chances of you hating both of them.
A better strategy is systematic trial and error. Instead of picking pieces at random, we set a parameter of trying to build a puzzle with at least one blue piece, and list the rest of the colours in order of preference in case there are no blue pieces. In this scenario, we decrease the likelihood of ending up with a puzzle we really hate, but we still have very little control over the outcomes.
The most effective strategy is guided trial and error. Say we start by trying to build a blue race car puzzle but fail to find any blue pieces. We then decide that orange is our next favourite colour and consider building a monotone puzzle. We discover fifteen orange pieces - enough for one orange monotone ten-piece puzzle, fifteen puzzles with one orange piece each, or anything in between. Unfortunately, none of the fifteen orange pieces work together. We see the opportunity to build a puzzle with an orange motorcycle and complete it. In this scenario, though the top choice was not available, we manage to find a close second.
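To make the contrast between the three strategies concrete, here is a minimal, purely illustrative sketch in Python. The box contents, colour list, preference order, and scoring are invented for this example (they are not drawn from Piele and Wood's essay); the point is only to show how each strategy narrows the search differently.

```python
import random

# Hypothetical box: 100 ten-piece puzzles, each piece tagged with a puzzle id and a colour.
COLOURS = ["blue", "orange", "green", "red", "grey"]
random.seed(0)
box = [{"puzzle": p, "colour": random.choice(COLOURS)} for p in range(100) for _ in range(10)]

def random_trial(box):
    """Random trial and error: assemble whichever puzzle the first random piece belongs to."""
    return random.choice(box)["puzzle"]

def systematic_trial(box, preferences):
    """Systematic trial and error: walk a fixed preference list and take the first colour found."""
    for colour in preferences:
        matches = [piece for piece in box if piece["colour"] == colour]
        if matches:
            return matches[0]["puzzle"]
    return random_trial(box)  # fall back if none of the preferred colours exist

def guided_trial(box, preferences):
    """Guided trial and error: test a conjecture and use what we learn to pick the next one.
    Here the 'lesson' is which candidate puzzle lets us make the most progress in a liked colour."""
    for colour in preferences:
        matches = [piece for piece in box if piece["colour"] == colour]
        if not matches:
            continue  # conjecture failed (e.g. no blue pieces); move to the next one
        counts = {}
        for piece in matches:
            counts[piece["puzzle"]] = counts.get(piece["puzzle"], 0) + 1
        return max(counts, key=counts.get)  # puzzle with the most pieces in this colour
    return random_trial(box)

preferences = ["blue", "orange"]
print(random_trial(box), systematic_trial(box, preferences), guided_trial(box, preferences))
```

The difference is not the amount of work but what happens after a failed attempt: in the guided version, the failed conjecture tells us which conjecture to test next, which is exactly what the race car and motorcycle example above describes.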
The idea of guided trial and error was also observed in 2006 by mathematician Keith Devlin of Stanford's Center for the Study of Language and Information, with reference to how gamers are reshaping the business world, and by the Headwaters Science Institute on the process of scientific research.
While guided trial and error may seem like an intuitive strategy, we are only just scratching the surface of the challenge. The bigger question remains: How do we apply a guided trial and error process to innovation, and even more importantly, is there a repeatable process we can use to help us innovate systematically? The answer, as it turns out, is not quite so simple.
Existing innovation best practices
We will start with common best practices and build up to our new recommendations. For example, in the guided trial and error scenario, we have already introduced a popular idea in lean startup: iterations. Rapid trial and error will increase the chances of discovering a desirable outcome. We have also introduced the idea of Minimum Viable Product: build only as much as you need to get a sense of the prototype's desirability.
To introduce the next best practice, we need to develop the metaphor further to better reflect the challenges of innovating in the real world: imagine trying to tackle the hundred ten-piece puzzles challenge in a team of five, where not everyone is allowed to see the puzzle pieces. Your colleagues, investors, advisors, and customers all have different motivations and viewpoints, yet the ultimate success of the project requires some alignment among your stakeholders.
As a response to this aspect of complexity (intertwined stakeholder needs), Human-Centered Design (HCD), commonly associated with IDEO and the Stanford Design School, gained recognition. One of the hallmarks of HCD is the use of post-it notes and mind maps to facilitate communications and help secure stakeholder buy-in.
If we return to the original guided trial and error scenario, we would still see two missing pieces in our current understanding of innovation: (1) how we choose our parameters, and (2) how we evaluate solutions within the parameters. We will address (2) in the next section of this Coursebook.
To understand the knowledge gap, we need to recognize the limitations of the hundred ten-piece puzzles metaphor in representing innovation challenges. For one, constraints are rarely as convenient as "puzzle pieces with only orange (monotone)"; even "puzzle pieces with some orange" would have increased the level of difficulty significantly. We have also taken perfect eyesight and perfect information for granted here. In real life, we have to work with imperfect instruments, as well as incomplete, inaccurate, and even incorrect information.
At its core, (1) is the more abstract part of how we think about problems: how should we guide the thinking process in a way that enables innovation? From a practical point of view, how do we connect the many frameworks available and choose the right parameter at the right time?
We found the answer in the field of design. Design is of interest to the field of innovation because designers produce creative outcomes in every project. We can observe the properties of wicked problems within a project: there are clients and stakeholders with different needs, and at each point there is an infinite number of possibilities. A work of architecture can take many forms, and an empty canvas can carry whatever image you put on it. This is elaborated on in Richard Buchanan's 1992 essay, "Wicked Problems in Design Thinking", published by MIT Press.
Design thinking in this context is not the same as HCD. It comes as a surprise to many people, including designers, that a lot of the literature on design thinking predates IDEO. In fact, while the design thinking literature was being canonized in 1991 at the first symposium on Research in Design Thinking held at TU Delft, nowhere does David Kelley, co-founder of IDEO, mention the phrase "design thinking" in his 2002 TED Talk titled "Human-centered Design". Stanford's d.school, also co-founded by David Kelley, only began teaching "design thinking" in 2005, according to Design Thinking: Understand - Improve - Apply, published by Springer in 2011. As a further demonstration of the difference between pre-HCD design thinking and the current understanding of design thinking, none of the literature quoted below is included in IDEO's article on the history and evolution of design thinking.
It seems unavoidable that we discuss, however briefly, the term "design thinking": to think like a designer. As you may have noticed, design thinking is not included in the title of this essay. This is not only because of the semantic shift from the original meaning to HCD, but also because the term design thinking itself misses the point. All of the previous attempts to "scientise" design, as observed by Nigel Cross in his 2001 essay "Design Discipline versus Design Science", published by MIT Press, have inevitably failed. Attempts to codify design ignore a fundamental truth of the discipline: art and design challenge the boundaries of norms. The ways of thinking are not static. Here, we are not so much interested in the subject of design itself, or in how we can define a way of thinking exclusive to designers, but in the subject of innovation and what we can learn from the pre-HCD design thinking literature.
We can begin by reformulating (1) in the language of design. "In design, 'the solution' does not arise directly from 'the problem'; the designer's attention oscillates, or commutes, between the two, and an understanding of both gradually develops, as Archer (1979) has suggested [...] Designers use alternative solution conjectures as a means of developing their understanding of the problem," notes Nigel Cross in the proceedings of Research in Design Thinking published in 1992.
While it may seem at first glance that the creative process is like a pendulum swinging between two distinct ends with no apparent start or end, we know this to be untrue. As Cross observes, "a design solution is not an arbitrary construct - it usually bears some relationship to the problem, as given, which is, after all, the starting condition for considering solution possibilities."
We can cut through the pendulum swing and frame the creative process instead as a starting condition followed by a series of problem-solution pairs. We have seen this at work in the guided trial and error scenario when we started with the parameter "blue race car", moved to the solution "no blue pieces", and then to the second parameter "orange monotone puzzle" with the second solution "fifteen orange pieces". We can also see how "blue race car", for example, serves as a parameter, a solution conjecture, and a hypothesis of a possible and highly desirable outcome.
We can map our starting condition as well as the problem-solution pairs in a tree diagram too. At the top layer, we have one starting condition; then we branch off to our second layer with the first problem-solution pair "blue race car" and "no blue pieces". Also on the second layer is the problem-solution pair "orange monotone puzzle" and "fifteen orange pieces", which branches off into the third layer with the problem-solution pair "none of the fifteen orange pieces work together" and "build the puzzle with an orange motorcycle".
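For readers who prefer to see this structure written out, the tree of a starting condition and problem-solution pairs can be captured with a very small data structure. The sketch below is illustrative only; the class and labels simply restate the example above and are not drawn from any of the cited literature.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    """A starting condition or a problem-solution pair in the thinking tree."""
    problem: str
    solution: str = ""
    children: List["Node"] = field(default_factory=list)

    def add(self, problem: str, solution: str = "") -> "Node":
        child = Node(problem, solution)
        self.children.append(child)
        return child

# The example above, written out as a tree.
start = Node("Starting condition: build one or two desirable ten-piece puzzles")
start.add("blue race car", "no blue pieces")
orange = start.add("orange monotone puzzle", "fifteen orange pieces")
orange.add("none of the fifteen orange pieces work together",
           "build the puzzle with an orange motorcycle")

def print_tree(node: Node, depth: int = 0) -> None:
    """Print each node with indentation matching its layer in the tree."""
    label = node.problem + (f" -> {node.solution}" if node.solution else "")
    print("  " * depth + label)
    for child in node.children:
        print_tree(child, depth + 1)

print_tree(start)
```

Unlike an execution plan, nothing here commits us to a branch: nodes can be added, abandoned, or revisited at any layer, which is the multi-directional navigation discussed later in this section.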
Much of what we described above coincides with existing thinking in business. DMAIC in Lean Six Sigma advocates for a continuous effort to Define, Measure, Analyze, Improve, and Control. The McKinsey Mind, published by McGraw-Hill Education in 2001, discusses in detail the importance of a hypothesis-led investigation, so we do not have to "boil the ocean" looking for solutions, and the use of logic trees with issues and sub-issues. Where we see a clear point of divergence is the MECE principle (Mutually Exclusive, Collectively Exhaustive) in the making of the logic tree. By definition, complexity involves non-mutually exclusive parts that are closely interconnected.
Innovating systematically in complex conditions
This is when we need to leave the metaphor behind and instead consider the real-world example described in Richard Buchanan's 1992 essay: "Managers of a large retail chain were puzzled that customers had difficulty navigating through their stores to find merchandise. Traditional graphic design yielded larger signs but no apparent improvement in navigation - the larger the sign, the more likely people were to ignore it. Finally a design consultant suggested that the problem should be studied from the perspective of the flow of customer experience. After a period of observing shoppers walk through stores, the consultant concluded that people often navigate among different sections of a store by looking for the most familiar and representative examples of a particular type of product. This led to a change in display strategy, placing those products that people are most likely to identify in prominent positions."
We can see many of the prior principles apply here:
they have iterated and created small tests
there is a starting condition (customers have difficulty navigating the client's stores) and two problem-solution pairs
they have taken a hypothesis-led approach
The larger question still remains: how did they move from signage and graphic design to product placement and customer experience? Is that just a coincidence? Pre-HCD design thinking literature says otherwise. Specifically, we observed three types of flexibility that enable innovation. First, flexibility with the starting condition. Second, flexibility with exploring different layers of abstraction. Third, flexibility with the frameworks used. We also found that designers add information and perspectives to manage the immense flexibility of the process.
The first strategy we found was having flexibility with the starting condition. Quoted from Cross's 1992 essay: "Thomas and Carroll (1979) concluded that 'Design is a type of problem solving in which the problem solver views the problem or acts as though there is some ill-definedness in the goals, initial conditions or allowable transformations'." The literature makes explicit that there is not just flexibility in how we ideate hypotheses, but that we should be ready to accept that the original starting condition is renegotiable. A starting condition is not written in stone, and we can pivot.
The second strategy we found was allowing flexibility with exploring different layers of abstraction. Because the parts and the wholes are interconnected in a complex challenge, and the strict boundaries between the parts and the whole are unclear, we can and should be able to move between different levels of abstraction. In the Doctrine of Placements, elaborated in his 1992 essay, Buchanan observes how many designers work across the four broad areas he identified (symbolic and visual communications; material objects; activities and organized services; and complex systems or environments) to deliver creative outcomes.
Specifically, Buchanan notices the way the designers treat the areas as interconnected "with no priority given to any single one". While the "sequence of signs, things, actions, and thought could be regarded as an ascent from confusing parts to orderly wholes", "there is no reason to believe that parts and wholes must be treated in ascending rather than descending order."
One of the implications of this is that when we structure our thinking, we can, at any point, move up and across branches in the tree. When we are executing a project, going down a certain path requires us to commit significant amounts of time and resources, creating a unidirectional navigation pattern down a decision tree. In the thinking process, by contrast, we can change paths with far fewer commitments, creating a multi-directional navigation pattern in the tree.
The third strategy, flexibility with the frameworks used, was also identified in Buchanan's 1992 publication: "Although the [retail chain navigation challenge] is a minor example, it does illustrate a double repositioning of the design problem [...] There are so many examples of conceptual repositioning in design that it is surprising that no one has recognized the systematic pattern of invention that lies behind design thinking in the twentieth century [...] Understanding the difference between a category and a placement is essential if design thinking is to be regarded as more than a series of creative accidents. Categories have fixed meanings that are accepted within the framework of a theory or a philosophy, and serve as the basis for analyzing what already exists. Placements have boundaries to shape and constrain meaning, but are not rigidly fixed and determinate."
Creatively switching between disciplines, frameworks, and layers of abstraction helps illuminate the relationship between the parts and the wholes, and enables us to deliver innovation in complex situations. This applies to both the problem formulation process and the solution generation process. Emerging literature in cross-domain deterrence reveals the efficiency of leveraging capabilities in one domain to compensate for and strengthen the capabilities of another. Quite simply, formulating solution combinations across departments gives us more flexibility, options, and control.
Having too many options can feel overwhelming. Indeed, this seems to add to the inherent ambiguity of the innovation process. Pre-HCD design thinking literature has in fact observed ways in which designers manage the immense flexibility. Cross, in the 1992 publication, notes, "In early observational studies of urban designers and planners, Levin (1965) realized that they 'added information' to the problem as given, simply in order to make a resolution of the problem possible [...] Darke (1979) from her interviews with successful architects [...] also concluded that the architects had all found, generated or imposed particular strong constraints, or a narrow set of objectives, upon the problem, in order to help generate the early solution concept." In innovation, these can include your values (what you will and will not do), a vision (how your venture will align with a well-researched projection of the future), or anything else you discover using different frameworks.
2. Basic Research Standards for Evidence-Based Decision-Making in Business Environments
Research in Business Environments
This section builds on the previous discussion of the relationship between frameworks and focuses on working within a specific framework. In other words, instead of looking at the relationship between problem-solution pairs, we will examine general best practices for resolving a single problem-solution pair. More specifically, we will explore the ways in which we can evaluate a solution.
There are two key challenges to resolving a problem-solution pair regardless of which framework you use. First, there is so much flexibility in problem formulation that it can be quite daunting. Second, we have to navigate incomplete, inaccurate, and even incorrect information in the real world.
To address these two challenges, we will formulate guidelines based on academic research methods. It is important here to understand the differences between academic research and research in a business environment. While the quality of data and the amount of resources available are obvious differences, there are more nuanced ones.
Research in business environments does not have to be generalizable. For example, you can use existing theories to understand a phenomenon you are observing, e.g. decreasing customer satisfaction, or you can confirm whether a piece of information you found online applies to your company. It does not matter whether or not your findings can be applied to a larger population.
We tend to avoid conducting generalizable research in business environments because it can be expensive, but there are times when generalizable research is necessary: for example, in R&D functions where a new technology or theory can advance business goals, or when generalizable research is a prerequisite for operating in the industry, such as when clinical trials are needed. These research projects need to conform to strict industry standards. We recommend involving domain experts in these scenarios.
We can, however, develop general best practices for everyday research. How we conduct research and the standards we adopt for our insights significantly influences our ability to make sound decisions. As you will see, non-generalizable research is not necessarily easier. People can be very scrupulous when several million dollars is on the line.
There are several stages common across all problem-solution pairs. We will elaborate on each in the subsequent parts:
Problem formulation: Are you asking the right question?
Solution generation: Are your findings useful?
Communication: Are you delivering your findings intentionally?
Part I: Problem formulation
While in earlier sections we have spoken about a starting condition and a series of problem-solution pairs in abstract through the pre-HCD Design Thinking framework, we want to now apply the innovation process to help us structure our inquiry.
In the first layer: we have the overarching research question. Oftentimes, you can get a solid research question just by adding the question word "how" in front of the relevant goal or metric. For example, "how might we find product-market fit", "how might we increase revenue", or "how might we decrease churn".
In the second layer: we want to specify the framework we will use to break down the research question. What is the angle? A well formulated problem needs to have a clear subject and should say something about where you believe the bottleneck is. It also needs to be answerable, i.e. testable and falsifiable to an acceptable degree of certainty within resource constraints.
A problem in the second layer tends to take one of these forms:
Exploratory research: what do we think about the subject?
Descriptive research: what are the characteristics of the subject?
Evaluative research: how effective is the subject?
To formulate a problem well, you want to ask:
What do we know to be true with a high degree of certainty?
What can we infer right away based on past research or experiences?
What are the areas that require further research?
In the third layer: we identify several interconnected variables that require further investigation and delineate their relationships. Resolving these relationships should provide specific, actionable outcomes. For example, a sentence in the copy needs to be replaced, or a new image is needed for the website. As you resolve problem-solution pairs, you will iterate and move between layers of abstraction. Changing frameworks, the variables you study, and so on is both common and expected.
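As a minimal illustration of the three layers, the sketch below writes out one possible inquiry for a churn question. The framework, problem statement, and variables are hypothetical placeholders; a real project would substitute its own.

```python
# Hypothetical example of the three layers for a single inquiry.
inquiry = {
    "layer_1_research_question": "How might we decrease churn?",
    "layer_2_problem": {
        "framework": "customer journey",
        "form": "descriptive",
        "statement": "What are the characteristics of customers who cancel within 90 days?",
    },
    "layer_3_variables": {
        "onboarding_completion": "Does finishing onboarding relate to cancellation?",
        "support_tickets": "Do unresolved tickets precede cancellation?",
        "pricing_tier": "Is churn concentrated in a specific tier?",
    },
}

for layer, content in inquiry.items():
    print(layer, "->", content)
```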
This is a very broad description of how we formulate problems. For people unfamiliar with managing complexity, it can be easier to start by getting a sense of what a streamlined process looks like. Problem formulation in the abstract can be difficult to grasp. We have included Bain and Co's Case Interview Preparation page in the list of recommended readings. They have case studies available with video walkthroughs. These case studies take out the complexity and nuances of a situation, for example stakeholder disagreements and other uncertainties outlined in Part IIC: Managing Uncertainties, but they nevertheless offer guidance on how to begin investigating problem formulation.
Part II: Solution Generation
The solution generation process is in some ways more straightforward than the problem formulation process. The steps are fairly similar across most problem types:
Examine easy-to-access existing literature. This can include news articles, academic essays, blog stories, government reports, corporate publications etc.
Connect available literature to operational data. Do your best to understand the problem you have with the data available. (We will elaborate on how to work with operational data in later Coursebooks.)
Create custom solutions to resolve the problem, if necessary. This can include integrating new monitoring solutions, building data pipelines, custom dashboards, and so on. It is important to weigh resource considerations with the associated risks. Sometimes it is better to accept the risk than to build a custom solution.
When we evaluate the credibility and usefulness of a solution, we examine it across several factors:
Is the research ethical?
Is the research comprehensive?
Has the researcher accounted for different uncertainties?
Are there errors in the data or analysis?
Part IIA: Ethics
Unethical methods damage the credibility of the researcher, the institution, and the findings. Ethical best practices are established to help prevent behaviours that might harm organizations, researchers, research subjects, and the public.
If you are in a leadership role, you will be responsible for your organization's ethical standards. Even if you are not in a leadership role, you should always voice your concerns through proper channels and in accordance with your employee handbook if you believe that your work will intentionally or unintentionally promote an unethical agenda. This can include promoting unhealthy behaviours or creating detrimental financial, physical, or mental health impacts on children, teenagers, and even adults.
Within a research project, there are two overarching questions:
The ends: Will the research be used to promote unethical or illegal behaviours?
The means: Will the research put anyone in harm's way?
"Principlism" or the "Four Principles Approach" by Tom Beauchamp, Ruth Faden, and James Childress from the 1970s continues to provide guidance to researchers:
Respect for autonomy: we should not interfere with the subject's intended course in life and should avoid violations ranging from manipulative under-disclosure of relevant information to disregarding a subject's refusal to participate as a research subject.
Nonmaleficence: we should avoid causing harm.
Beneficence: the risk of harm presented by interventions must constantly be weighed against possible benefits for subjects and the general public.
Justice: benefits, risks, and costs should be fairly distributed - we should not recruit subjects unable to give informed consent or subjects who do not have the option to refuse participation.
To be clear, there is no circumstance in everyday research where you should prioritize your research over the participant's safety. For example, if you are conducting an ethnographic study examining people's behaviour in supermarkets and you see a tin can about to fall on your participant's head - please intervene.
If your research requires you to put participants at risk in any way, you should stop and seek legal advice. Some product tests are regulated by government agencies including the FDA and its European counterparts the EFSA, EMA, and ECHA. This includes but is not limited to: human foods, human drugs, vaccines, blood, biologics, medical devices, radiation-emitting electronic products, cosmetics, as well as animal and veterinary products.
There are a few additional best practices we have adapted from CITI's Responsible Conduct of Research (RCR) program, originally designed for academic researchers:
Authorship: The general consensus is that authorship is based on intellectual rather than material contribution, e.g. data or funding. The research team should collectively decide who qualifies as an author, ideally before the project begins. Each author is responsible for reviewing the manuscript and can be held responsible for the work that is published. Those who do not qualify for author status can be recognized in the acknowledgements.
Plagiarism: Although it may seem obvious, it is important to avoid plagiarism. Always put quotation marks around direct quotations, and attribute an idea you referenced to the original source. Missing citations make it difficult for others, including people who need to sign off on a project, to check the work. You should also be prepared to cite the source of a piece of information when giving a presentation.
Conflicts of interest: While there will always be a financial conflict of interest when you are conducting a study as an employee of an organization, you should still be wary of personal biases. For example, if you are a strong advocate for an idea, are you asking leading questions or bullying the interviewee into agreeing with you? As organizations mature, dedicated researchers can help maintain objectivity.
Data management: Ask for and record as little Personally Identifiable Information (PII) as possible. In most circumstances, an anonymous transcript of a user interview is sufficient. A screen recording with voice of how users interact with your product can be helpful, but very rarely will recording the face of the interviewee add value to your research. You should clearly communicate what data will be collected, as well as how it will be stored and used. The tendency here is to over-collect, but the amount of PII needs to be balanced with the accuracy of the answers: social-desirability bias can lead to an over-reporting of desirable behaviour and an under-reporting of undesirable behaviour. Please consult your legal team for details of the appropriate data privacy practices.
Part IIB: Comprehensiveness
One of the most common questions we receive is "When do I know I have enough research?" The simple answer is that you should exhaust all resources available to you within the resource constraints. There are some signs, however, that may indicate comprehensiveness before you reach that point:
If your new sources are beginning to repeat information you already know, that is the first sign that your research process is close to completion. There are often three to five key reports and authors on the subject that everyone references. Can you identify them and discuss their relationship?
If you can identify errors in your sources' reasoning and begin to develop your own perspective, this is the second sign that your research process is close to completion. At this point, the Socratic method can be helpful. Depending on the size of the project and your own workflow, you can either have an informal discussion of your ideas before you start writing, or you can have a discussion after your first draft.
The final sign is when you finish writing. Depending on the context, you might need a final project sign-off from your stakeholders. If they sign off, brilliant. You can also continue to validate your ideas through presentations and roundtable discussions. Publication is rare, since most works are either confidential or not up to publication standards due to resource constraints.
Part IIC: Managing Uncertainties
Uncertainties arise when we work with incomplete, inaccurate, and even incorrect information. To craft a credible and useful solution, we need to account for known, unknown, and unknowable uncertainties, a framework by Clare Chua Chow and Rakesh K. Sarin first published in 2002 in the journal Theory and Decision. There is no simple metric or combined uncertainty score that can tell us when we need to eliminate a piece of information entirely. We can, however, still identify common uncertainties. Managing them well will require experience and good judgement.
Known uncertainties are the easiest to manage. Their presence is easily detectable, and they skew findings in a predictable direction:
Conflicts of interest: corporate reports and research funded by corporations tend to advocate for specific private interests. Some are helpful but it is important to be critical of any omissions, research methodologies, and gaps in reasoning.
Missing research methodologies: reports, especially those by corporations, have in the past included very narrow and bizarre studies with odd metrics to prove a point. Sample selection bias, social desirability bias, and the Hawthorne effect are examples of threats to a study's internal validity. Review a study's methodology or the legal fine print on marketing materials whenever possible.
No acknowledgement of the limitations of the study: some studies make overly generalized and unsubstantiated claims. This calls into question the research's external validity. A closer look at the relationship between the study's research methodology and the conclusions will often reveal any gaps in the author's reasoning.
Fuzzy language and buzzwords: what does it mean when a company says they use artificial intelligence or make sustainability claims? Be wary of ambiguous or poorly defined terms.
Social impact claims: we are incredibly careful when an organization makes social impact claims. Social problems are wicked problems where accounting for the second and third order impacts of an intervention is both difficult and expensive. We typically expect to see results from an ethnographic study to understand the potential impacts of an intervention on a specific community, and a randomized controlled trial (RCT) to understand the efficacy of an intervention.
Unknown uncertainties are more difficult to manage. Though their presence can be detected, they skew findings in an unpredictable direction:
Incomplete research: with resource constraints or limited expertise, some research may simply not meet the comprehensiveness criteria. Effects of incomplete research can include failing to take into account a confounding variable, i.e. a variable that affects both the dependent and independent variables, creating a spurious correlation where there is no clear causal relationship. There are also projects we consider to be unrealistic, when certain aspects clearly go against the known logic of the industry. Formulating an answerable problem, acknowledging the limitations of a study, and consulting experts are key.
Misinformation and disinformation: this is particularly problematic when working with pop culture or news sources. We have explored this in further detail in the recommended reading "Media – To sell a crisis: Understanding the incentives and control system behind sensationalist news and misinformation".
Uncalibrated tools: people often assume that digital tools, such as Google Analytics, are perfect. You can get a sense of how accurate your tools are if you do a few pre-tests, e.g. How often does the tool fail to track a click? When does it fail? What is the margin of error? (A short sketch of this kind of pre-test follows this list.)
Presence of systemic corruption: corruption muddies data, reports, and findings. We recommend being extra cautious when referencing a report on countries that are highly corrupt. Transparency International's Corruption Perceptions Index (CPI) is a good reference.
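As a rough illustration of the pre-test idea mentioned above, the sketch below estimates a tool's failure rate from a small manual check and attaches a margin of error. The counts are invented, and the normal-approximation interval is only one simple way to do this; it is a gut check, not a calibration procedure.

```python
import math

# Hypothetical pre-test: we triggered 200 clicks manually and the analytics tool recorded 188.
trials = 200
recorded = 188
failure_rate = (trials - recorded) / trials  # 0.06

# Normal-approximation 95% confidence interval for the failure rate.
z = 1.96
margin = z * math.sqrt(failure_rate * (1 - failure_rate) / trials)

print(f"Estimated failure rate: {failure_rate:.1%} +/- {margin:.1%}")
# Roughly 6.0% +/- 3.3%: the tool misses a small but non-trivial share of events.
```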
Unknowable uncertainties are incredibly difficult to identify. Their presence is difficult to detect, and they skew findings in an unpredictable direction:
Unofficial narratives: these are the details left out of official reports. Stakeholder disagreements, details under confidentiality agreements etc. can be the root cause of an action without ever showing up on reports. This requires an insider's perspective.
Errors: mistakes and errors during the research process at a reputable organization are rare but can happen. The most common error is miscommunication when information flows up the chain of command. It is important to do a gut check when you read reports.
Part IID: Errors
We want to describe some of the errors we commonly observe, with reference to the framework by Andrew W. Brown, Kathryn A. Kaiser, and David B. Allison in the article "Issues with data and analyses" published in the Proceedings of the National Academy of Sciences of the USA in 2018. Some errors are minor and do not impact the findings significantly, while others can invalidate the entire project.
Errors in design: poor data collection methods, research design, or sampling techniques can produce bad data. The most common mistake is when there is a mismatch between the concept being studied and how it is operationalized or measured. For example, we have seen papers that use residential real estate water usage figures to estimate commercial real estate water usage. Any conclusions from then onwards are questionable.
Errors in data management: this can be as simple as having one or two typos in the code you use to analyze the data. The bigger challenge, however, is when people fail to recognize the expiration date of their data. You need to review the validity of your data whenever there are big changes. This can include drastic changes in the macroenvironment, e.g. the pandemic, or in the product itself, e.g. a rebrand.
Errors in statistical analysis: it is true that if you torture the numbers long enough, they will say anything. We are especially cautious when we read papers that look at the statistical correlation between two or more macroeconomic indices without fully considering the nuances of the content, the limitations of individual indices, and the limitations of the statistical methods applied.
Errors in logic: at a basic level, you can either disagree with the premises or the conclusion. The most common problem is when researchers make an unjustified generalization, e.g. because a certain demographic responds well to a product in North America, the product will perform well in Asia as well. Confusing correlation and causation is also common and problematic; the sketch after this list shows how easily two unrelated trends can appear strongly correlated. The Stanford Encyclopedia of Philosophy describes other common logical fallacies in detail.
Errors in communication: this generally happens towards the end of a paper, when there is a mismatch between the conclusions of a study and the ambitions of the author. Overzealous authors or bloggers can extrapolate and exaggerate the impacts of a study. Sensationalized language and the overuse of hedging, e.g. might, could, etc., are two of the signs we pay attention to.
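To show how easily a strong correlation can appear between unrelated quantities, the sketch below correlates two independently generated upward-trending series. Any shared trend, such as overall economic growth, can do the same to real indices; the numbers here are synthetic and purely illustrative.

```python
import random
import statistics  # statistics.correlation requires Python 3.10+

random.seed(1)

def trending_series(n=60, drift=1.0, noise=1.0):
    """An upward-trending series with random noise, generated independently of any other series."""
    values, level = [], 0.0
    for _ in range(n):
        level += drift + random.gauss(0, noise)
        values.append(level)
    return values

a = trending_series()
b = trending_series()  # generated independently of a

r = statistics.correlation(a, b)
print(f"Correlation between two unrelated trending series: {r:.2f}")
# Usually strongly positive despite there being no causal link: the shared upward trend does the work.
```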
Part III: Effective Communications
Writing is a great way to think through a problem and clarify the relationship between the variables you have identified. However, there are times when a full paper is not needed. Think carefully about what you want to spend time and resources on. There are many faster alternatives:
Share a quote: sometimes all you need to do is share a snippet from an article you have read to a colleague.
Write notes on a presentation slide: get to the point; keep it short and simple
Others: notes on the company whiteboard, group Slack messages, etc. are all great options.
In the event a more formal document is needed, it should be as short as possible. By the time you reach two to three pages, you should include a one-paragraph executive summary. Longer papers may benefit from a one-page memo. These summaries should provide an overview of the topic and details of the recommendations. You should also assume that unless there are specific questions related to the methodology or the reasoning of the report, no one except your manager or an investor in a due diligence process will read the paper in full. Unfortunately, your colleagues are busy people too.
Getting the summary right is critical and we generally look for these components:
Context: this should include any relevant information surrounding the problem. For example, people or partnering organizations who have been involved in the research, the inspiration behind the project etc.
Research question: this is the overarching question described in the first layer from Part I. It is the high level question with reference to the business goal.
Method: this should include the sources, data, analytical methods you used to explore the question.
Recommendations: a specific course of action you recommend based on your research, which may include further research if necessary, as well as the limitations of the study.
There are a few tips we recommend when it comes to business writing, which apply to both short and long-form essays:
Keep it simple. Write simple sentences in an active voice with a clear subject, verb, and object. As a researcher, it is your responsibility to communicate your findings to the readers. A report that is easier to read is more likely to be read.
Structure it well. A generic structure is entirely okay and even encouraged. Start with an introduction explaining the context, the significance of the research, definitions for key terms, and details of the framework you will use. Then write a few paragraphs explaining your findings and end with a paragraph with your recommendations. Keeping it simple is key.
Write good topic sentences. Each paragraph should have a clear, self-contained idea. The topic sentence should identify or introduce the idea, with sentences behind it for support or clarification.
Avoid overly long sentences. Consider separating a sentence into two separate sentences if it is longer than two lines.
Always define the technical terms you use. Write as though you are talking to somebody who is not familiar with the topic - we would not need the research project if we already knew everything about it. A successful report will be read by many executives across departments. Clearly defining technical terms will help you communicate with your investors and stakeholders too.
Recommended readings:
Bain and Co. "Case Interview Preparation". Bain and Co, n.d. https://www.bain.com/careers/interview-prep/case-interview/
Cheung, Marvin. "Media – To sell a crisis: Understanding the incentives and control system behind sensationalist news and misinformation". Unbuilt Labs, 5 January 2021. https://unbuiltlabs.com/2021/01/05/media-to-sell-a-crisis/
IDEO. "The History of Design Thinking". IDEO, n.d. https://designthinking.ideo.com/history [https://perma.cc/RB7D-HA7H]
European Commission. "Chapter 1 Locating ethics in research" and "Chapter 2 Consent". European Textbook on Ethics in Research. Research*eu, 2010. http://materiais.dbio.uevora.pt/MA/Modulo2/Artigos/TextbookSyllabus.pdf?download