Wednesday, January 7, 2015

Data-driven management by Davenport (& a generic capability maturity model!)

During the Christmas break I read "Competing on Analytics" by Davenport & Harris. It is a really engaging and well-written book on the long-standing trend towards greater use of data in driving operations and decision making in organisations. In terms of the blog's theme, there is a shift from intuitive and narrative ways of reasoning towards more mathematically and statistically oriented, data-based forms of reasoning. This is coincidentally a topic I have recently started exploring in two externally funded projects, with three PhD students involved.

Among their key arguments is the idea that firms evolve from intuitive towards analytical ways of operating and even competing as they grow top management commitment, technological assets and organizational capabilities. This is summarised in a figure presented here:


This graph is representative of typical tools offered for managerial reasoning by management scholars and consultants. Its benefits are clear: it helps the management "see where they are" and figure out "where they want to go". It is significantly better than most models offered by academic articles. So should we learn from it?


Well, it is a bit generic...

The model is actually so good that you can replace analytics and analytics capability with almost anything and it still works! Try, for example, quality and quality management capability. Or social media and social media capability. Works, eh? In Stage 4, top management considers social media capabilities as a corporate priority, whereas in Stage 5 social media capability has become a major competitive strength.

Replacing analytics with quality seems to work pretty well...

 

In academia, people tend to get worried about things like falsifiability: the idea that claims which cannot be proven wrong are not very good claims to base your decisions on. This generality is perhaps not a flaw as such, but a framework so generic that it could be about almost any capability would be a difficult sell in peer review.

No Solid Empirical or Logical Grounding

Given that this is a management book rather than a scientific article, it is hardly a surprise that no methods are explicated nor data described. However, the model as elaborated in various tables implies relationships between human capital, top management commitment, available data, and competitive advantage that are plausible but not proven. Implicit in such a stage model is that human capital, technology, and available data evolve in tandem with (and in part driven by) top management commitment. There is no reason to believe this, nor is it clear that most companies could get world-class talent even if that were the number one priority of the top managers.

The obvious response to this critique is to suggest that the stages are just idealised heuristics. It is quite possible, in other words, that an organisation's data is on "Level 4", while its human capital is on "Level 2" and top management commitment on "Level 3". If the model is merely a heuristic, then it says really nothing but merely provides a vocabulary for managers to think about analytics. It is not just generic but completely unfalsifiable. It is not a theory but an enumeration of potential things.

From the academic perspective, a heuristic feels like a cop-out, but from the perspective of a good and useful business book that is not necessarily the case. What if it is just a vocabulary for managers to think about their firms and competitors, now and in the future? Isn't that useful?


Conclusion & Bonus

In conclusion, I liked reading Davenport & Harris's book. It is already a bit old, but it certainly includes many interesting stories and portrays a shift in management thinking (& organizational reasoning more broadly!) that is only now reaching some industries (like education and manufacturing). It is non-technical to the extent that if you have any background in engineering you will not learn much besides novel business applications of pretty rudimentary mathematical tools. In that respect, the book delivers what it promises.

Bonus!: I prepared an Excel sheet which allows you to replace analytics with social media, quality, customer service, human resource management, internationalization, or whatever capability development or maturity in an organization you may want to advance. With a change of a single Excel cell you can produce a figure like the one above! It's here: http://goo.gl/uL48ia (note: you have to click the download icon on the top to download the Excel file to your computer).
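For those without Excel at hand, the same substitution trick takes only a few lines of code. A minimal sketch in Python; the stage descriptions are hypothetical paraphrases for illustration, not the book's exact wording:

```python
# Generic capability maturity "model" generator:
# swap one word, get a whole new management framework.
STAGES = [
    "Stage 1: {cap} is ignored or handled ad hoc.",
    "Stage 2: isolated pockets of {cap} activity exist.",
    "Stage 3: the firm aspires to coordinate its {cap} efforts.",
    "Stage 4: top management considers {cap} capability a corporate priority.",
    "Stage 5: {cap} capability has become a major competitive strength.",
]

def maturity_model(capability: str) -> list[str]:
    """Return the five-stage model for any capability you like."""
    return [stage.format(cap=capability) for stage in STAGES]

# Try "quality", "social media", "customer service", ...
for line in maturity_model("social media"):
    print(line)
```

The point of the exercise is the same as with the Excel sheet: if one cell (or one string) is the only thing that changes, the "model" carries no information specific to analytics.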

Thursday, October 2, 2014

Reasoning about the future

In the long run we are all dead. (John Maynard Keynes)
That statement is indicative of the kind of wisdom academics are willing to offer concerning the future. Given that strategy as a discipline would appear, to a lay person, to be inherently about shaping the futures of individuals, organisations, industries, and markets, there have been hardly any systematic theories that discuss the future in any meaningful way. The only widely used concept relating to the future is that of vision, which has been at best a marginal topic studied by a few fringe researchers (not a serious strategy topic). The discipline of entrepreneurship fares hardly better, and may actually be worse. Discussions on whether entrepreneurs discover or create "opportunities" seem to miss the point entirely. Rather than musing on the "creation of opportunities", we should try to explain how new firms and new products are conceived as potential futures prior to their existence and how such projections are evaluated.

The problems with the future

This is hardly surprising. The future is a very difficult thing to talk about. Most academics and theories rest on firm realist grounds: we make and test our theories based on the real data we have observed, while our decision-making theories posit that actors make choices based on the information available to them. In a social context, the future does not exist and arguably cannot be known. One of many problems is that a realist account of the future is not compatible with the free will of humans. The problem of induction is an extreme statement of the indeterminacy of the future. Even if someone predicts the future accurately, they might just be lucky. These difficulties have led most management scholars to simply ignore the future altogether. All information is from the past, and thus the future is simply not relevant for explaining managerial decision making.

Finance provides an elegant account of the future, yet one that has limited usefulness for managers. The account finance gives of the future is that of statistics. It assumes that there is a large number of relatively stable processes creating the outcomes of interest. Such second-order stability leads to a predictable distribution of outcomes, so that a satisfactory probability can be calculated for any range of outcomes. Although this is a somewhat simplifying account, most of finance theory does not want to deal with causal relationships (or "stories" as Nassim Taleb calls them), thereby proudly ignoring issues such as wars or the invention of electric cars in predicting the future price of oil.
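The statistical account can be made concrete in a few lines of code. A minimal sketch, assuming (purely for illustration, not as a claim about actual oil markets) that annual oil-price changes follow a normal distribution with zero mean and a 20% standard deviation:

```python
import math

def prob_in_range(lo: float, hi: float, mean: float, sd: float) -> float:
    """Probability that a normally distributed outcome falls in [lo, hi]."""
    # Normal CDF expressed via the error function (no external libraries needed).
    cdf = lambda x: 0.5 * (1.0 + math.erf((x - mean) / (sd * math.sqrt(2.0))))
    return cdf(hi) - cdf(lo)

# Under the illustrative N(0%, 20%) assumption, the probability of a
# year-on-year price move between -10% and +10%:
p = prob_in_range(-0.10, 0.10, 0.0, 0.20)
print(f"P(move within +/-10%) = {p:.3f}")
```

This is the whole trick: given second-order stability, any range of outcomes gets a probability. What the calculation cannot accommodate is precisely a war or a disruptive technology that changes the distribution itself.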

Outside the academic theories, there is also an engineering approach to futures: mining data for significant correlations that can predict future outcomes. Apparently, weather not only predicts traffic jams but also how much will be sold online in the immediate future. These approaches embrace "The End of Theory", as Chris Anderson suggested in Wired: the future is predicted but not explained. Despite the inevitable techno-optimism, such approaches have very limited predictive power and seldom provide meaningful implications for strategic decisions. Predictions without explanation are also unreliable, as past patterns can change in unpredictable ways (as has been demonstrated in relation to Google Flu Trends).
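The "predicted but not explained" logic can be illustrated with the simplest possible predictor: a least-squares line fitted to past observations and extrapolated forward, with no causal story about why the input should predict the output. The numbers below are made up for illustration:

```python
# An atheoretical predictor: fit y = a*x + b by ordinary least squares
# and extrapolate. No story about WHY rainfall would drive online sales.

def fit_line(xs: list[float], ys: list[float]) -> tuple[float, float]:
    """Ordinary least-squares fit; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

rain = [0.0, 2.0, 5.0, 8.0, 10.0]            # mm of rain (fabricated)
sales = [100.0, 108.0, 121.0, 131.0, 140.0]  # orders that day (fabricated)

a, b = fit_line(rain, sales)
print(f"forecast for 6 mm of rain: {a * 6 + b:.0f} orders")
```

The fit may be excellent in-sample, yet the moment the underlying pattern changes (the Google Flu Trends problem), the extrapolation fails silently, because there was never an explanation to re-examine.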

Reasoning about the future through stories

Some academic work has of course taken seriously the idea that decision making is not just about past information but depends on the active projection of futures. Merton's old idea of self-fulfilling prophecies highlights how expectations concerning the future shape the future. This idea is more broadly known as performativity in sociology, suggesting that beliefs held by actors shape the subsequent reality. For example: if all investors believe the price of gold will increase, it will most definitely increase. In technology studies, there is a small but interesting stream on the sociology of expectations.

We cannot know for sure the future price of oil, yet we cannot claim all predictions to be equally plausible or foolish. There are no reliable ways to predict the future, yet not all predictions have the same credibility, nor is credibility itself an arbitrary attribute of predictions (although you would probably find some social scientists taking this extreme position). Predictions about the future are problematic precisely because they are neither arbitrary nor definite. We reason about the future by constructing stories of how things will be, and the credibility of those stories depends on the analogies we can provide as evidence, the status of the actors who tell the stories, and a broad variety of other justifications included to bolster them. Experts provide credible predictions because the stories they tell (of statistical analysis and personal expertise, for example) are convincing. Unfortunately, it seems that many experts are really terrible at predictions (as Tetlock has famously documented).

The future is something that does not exist, but humans construct futures through stories: hypothetical worlds that may come to be, on which they can base their actions. Whether such futures are accurate (i.e. come to be the future) cannot be known ex ante. Despite such uncertainty, futures matter for organizations.

Reasoning about the futures in organizations

The futures matter for organizations and management for many reasons. Futures are the basis of coordination, identity, and credibility. Three quite interesting articles have recently appeared on the topic, all of them coincidentally in Organization Science.

I have myself written a recent article that examines how new ventures must create plausible and exciting futures in order to convince investors, prospective employees, and even pilot customers that they are worthy of support. Relationships are often based on expected future transactions, and the management of relationships requires constant management of expectations. Since the future cannot inherently be known in a business context, some of the expectations are likely to remain unmet now and then, creating the need to revise the future projections in relationships. We also note how the plausibility of the stories an individual organization tells about the future is intrinsically linked to the stories told by other organizations and the media.

In another recent article, the authors argue that just as individual identities are largely about possible future selves, organizational identities also tend to incorporate a "possible collective self": a vision (often unrealistic) of what the organization will be in the future. When the collective portrayed by a strategic vision is appealing to an individual and conforms with the possible selves of that individual, they are more likely to exert effort to pursue the vision. Strategic visions thus serve the purpose of allowing members of the organization to imagine an alluring future where they have a role to play. As we know, most individuals want to have dreams even if they cannot be fulfilled. The number of people joining gyms is greater than the number of people who actually get fit. Western societies are societies of hope, as Nils Brunsson has noted. We want to see ourselves as members of a "leading organization" that "makes a difference", even if only a small portion of all organizations aspiring for a leading position actually attain one.

The last article I want to mention focuses on the role of expected futures in making credible and broadly acceptable decisions. Reasoning about what to do now must inevitably provide coherence across acceptable accounts of the past, the present, and the future. The one who controls the present controls the past (said Orwell), and since much of history is open to multiple interpretations, the acceptability of the futures we form depends on the accounts of the past we have. There is a certain reasonability across time, and the reasons we settle on to explain past outcomes shape which futures we can justify as reasonable. (As a side note, the unsurprising finding by Kaplan and Orlikowski that accounts must be "coherent, plausible, and acceptable" for a social group to embrace them seems to capture the key of human reasoning in general.)


Concluding words

Irrespective of how "factual" (or realist) we want to be about our views of the world as it is today, it seems inevitable that the accounts we create of the future matter. Futures are non-arbitrary constructions formed by more or less plausible stories. To be successful leaders, managers must learn to construct compelling futures that can form self-fulfilling prophecies, stabilise relationships, resonate with possible selves of followers, and create seeming coherence across the past, the present, and the expected future.  

Futures, or what tends to be called "strategic vision", thus matter, yet there is a disappointing lack of research and theory on the phenomenon. Their uneasy position as neither arbitrary nor determined by an external reality makes them particularly resistant to well-structured logical analysis by academics. Even if the avoidance of the topic has been well justified, I would hope there will soon be some breakthroughs in the ways we think about projections of the future, how we study them, and how we teach practitioners to create and evaluate alternative futures.

Friday, May 10, 2013

Organization at the age of individualism

 
Charles Taylor
I recently finished reading Charles Taylor's The Ethics of Authenticity. It is a wonderful book, providing a review of recent debates around individualism together with a very interesting argument concerning the basis of modern ethics and its political implications -- all in 121 pages. Despite a broad interest in philosophy, I have previously steered clear of ethics as a subfield that I did not find particularly interesting. I used to joke that "I am not particularly into ethics" when the discussion turned to the topic. After reading Taylor's book I think I will need to reconsider.

In this blog post I will briefly comment on the relationship of reasoning and ethics in organizations and then proceed to discuss some of the potential implications that individualism and 'ethics of authenticity' have on leadership and management in organizations.

Ethics and reasoning

The field of ethics concerns a subset of the questions that humans reason about. Ethical reasoning represents the application of principles and moral norms in passing judgment on past actions and future choices. The orthodox view among organizational scholars seems to be that most accounts of ethical reasoning are post-hoc rationalizations of intuitive judgments. When we observe an action that we judge to be moral or immoral, we pass judgment well before we consider explicit reasons for its morality or immorality. Explicit and conscious reasoning rarely changes our intuitive moral judgments (this has been discussed e.g. by Sonenshein).

Sonenshein contrasts intuition with reasoning. I think he is mistaken. Intuitive reasoning is reasoning. Vaisey has shown quite convincingly that even when individuals cannot articulate why they make the moral judgments they make (i.e. rely on 'intuition'), the judgments tend to be aligned with the broader ethical system they are committed to. We reason about the goodness and appropriateness of actions and choices based on the broader knowledge and values we hold, even if we are not consciously aware of it. Vaisey's results are exactly what one would expect if one approaches ethical reasoning as, well, reasoning more generally! Of course there are some evolutionary and some societal regularities in reasoning that pertain particularly to ethical or moral topics. In this vein, quite a bit of interesting research has been done in experimental philosophy. There is even a society devoted specifically to empirical ethics. These approaches will provide plenty more research opportunities in the domain of organizations and leadership.

While some seek to establish a distinction between ethical reasoning and reasoning in general within organizations, I would posit that such a distinction is difficult if not impossible to make. To quote a rather poetic passage of G. H. Mead's:
The order of the universe we live in is the moral order. It has become the moral order by becoming the self-conscious method of the members of a human society. We are not pilgrims and strangers. We are at home in our own world, but it is not ours by inheritance but by conquest. The world that comes to us from the past possesses and controls us. We possess and control the world that we discover and invent. And this is the world of the moral order. It is a splendid adventure if we can rise to it.
Organizations are ethical in many ways. Even efficiency is a norm, and for many a highly ethical norm. It can be a moral imperative to be productive. I am currently sitting on a hiring committee, and the choice (conscious or not) to follow the bureaucratic rules to the letter is a moral choice. To create a sharp divide between instrumental rationality and moral reasoning seems a cop-out. Everything individuals do in organizations can be judged as acceptable or unacceptable and as valuable or worthless according to a broad range of norms (i.e. evaluation criteria). To decide which norms should be called "moral" or "ethical" is itself a moral question, leading me to conclude that we might as well accept all norms as somehow ethical.

Ethics of Authenticity

Taylor argues we have moved from the days of traditional society, where ethical rules were forced upon the individual by history, peer pressure, or religion, to a modern era of individualism. Taylor begins by drawing on cultural criticism of individualism, and the rather uncontroversial suggestion that (p. 4):
[...] the dark side of individualism is the centering on the self, which both flattens and narrows our lives, makes them poorer in meaning, and less concerned with others or society.
His great insight is to reject the dichotomy between individualism and morality (although I have not read enough to judge the novelty of his proposition). He suggests that individualism itself is built on the moral principle of authenticity. On page 16:
What we need to understand here is the moral force behind notions like self-fulfilment. [...] What we need to explain is what is peculiar to our time. It is not just that people sacrifice their love relationships, and the care of their children, to pursue their careers. Something like this has perhaps always existed. The point is that today many people feel called to do this, feel they ought to do this, feel their lives would be somehow wasted or unfulfilled if they didn't do it.
According to my interpretation of Taylor, a modern (individualistic) environmentalist does not want to save the environment (just) because of moral norms given to her by her social group. The modern environmentalist needs to try to save the environment because that is the person she is, because only by doing so can she be the person she can and ought to be. Page 26:
[Authenticity as a principle emerges from 18th century view that] Morality has, in a sense, a voice within. The notion of authenticity develops out of a displacement of the moral accent in this idea. On the original view, the inner voice is important because it tells us what is the right thing to do. Being in touch with our moral feelings would matter here, as a means to the end of acting rightly. What I'm calling the displacement of the moral accent comes about when being in touch takes on independent and crucial moral significance. It comes to be something we have to attain to be true and full human beings.
People no longer act out moral codes that are external to them, as (we may assume) they did in the golden age of religious and traditional conformity and hierarchy. People act ethically as individuals because of themselves, because they are and want to be whatever they conceive themselves to be and want others to recognize them as. Individualism is not amoral, but the basis of morality. Yet, in modernity individuals strive to be unique and seldom accommodate externally imposed 'identities' or 'social roles' (an old-fashioned idea that many organizational sociologists still seem to cling to). Life and identity are 'projects' that we fashion in interaction with others and in relationship to books, stories, ideologies, and available social roles (say Giddens and Rorty).

Authenticity and Organizations

To return to the topic of the blog, it seems to me that in the modern era of individualism, ethics and morality in an organization are crucially shaped by the conceptions of the self (identities) that the organization nurtures in its members. An organization that recognizes individuals only for the profits their departments bring in or the sales they can make facilitates conceptions of self that are focused on such goals. It is easy to see how organizations select and nurture identities for which "realizing one's full potential" means selling more.

We are social animals and the era of individualism does not need to be an era of social atomism. As Taylor also points out (page 49):
On the intimate level, we can see how much an original identity needs and is vulnerable to the recognition given or withheld by significant others. It is not surprising that in the culture of authenticity, relationships are seen as the key loci of self-discovery and self-confirmation. Love relationships are not important just because of the general emphasis in modern culture on the fulfilments of ordinary life. They are also crucial because they are the crucibles of inwardly generated identity.
Organizational discourse and relationships across individuals generate shared beliefs concerning the range of identities individuals may fashion. Leadership in the age of individualism is not about imposing shared norms upon people but about enabling individuals to fashion authentic views of themselves as ethical actors. While a manager may force subordinates to conform to 'green values', Taylor's book suggests that it is fundamentally more realistic in this era of authenticity to direct the individual to pursue a conception of the self that cares about nature.

The recent crises make banking a soft target for discussing ethics and morality. Did the large banks foster among their key employees conceptions of self as responsible professionals who provide valuable services to their customers and society? I doubt it. Some email evidence surfaced of bankers calling their customers "suckers". When employees set their personal goal to be making the most money out of the "suckers" by any means necessary, we shouldn't necessarily say that they have no ethics or lament that they no longer follow commonly accepted societal norms. Taylor suggests, I believe, that we should rather posit them to have undesirable morals: the kind of ethics of self-fulfilment that we as a society should not accept.

I think it will make little sense to try and impose normative conformity on modern individualistic employees. When unethical behavior cannot be controlled by laws (as was the case largely in the events leading to the banking crisis), external control is difficult. To be ethical, organizations must facilitate identity projects and conceptions of self that make employees intrinsically want to be the kind of unique individual persons that we as a society can approve of.

Saturday, September 15, 2012

Could management scholars produce visionary public policy documents?


Should the government commission broad policy reports from social scientists (e.g. management scholars), and should we write such reports? That is the topic of this slightly off-topic blog post, bearing only a vague connection to reasoning and no actual relationship to organizational reasoning.

Several funding bodies in Finland (including the Academy of Finland) recently funded a 700,000 euro report on "The model for sustainable growth" by the well-known (celebrity) philosopher Pekka Himanen (of The Hacker Ethic fame). To be fair, he is not going to pocket all the money, since much of it goes to flying celebrity intellectuals and experts to Finland and organizing a series of seminars on the topic of the report. The report does not seem to have much to do with universities or academia. Indeed, one might question why the Academy of Finland (which is supposed to fund scientific research) is putting money behind the production of this kind of political pamphlet.

What if we wrote the report?

Today, the main Finnish daily (Helsingin Sanomat) had an interesting editorial pondering why such a report was not commissioned from sociologists, offering also some reasons (in Finnish here). This raises some intriguing questions. Should the government commission a vision paper on "sustainable growth" from a business school? What if the faculty at Aalto University School of Business (where I work) decided to produce a competing report on the topic for free, just to show some return for the taxpayer money put into the university system? And how different would these two reports be?

I imagine they would be vastly different. Universities cannot produce political essays or opinion pieces without seriously undermining their legitimacy (professors may be able to do that as individuals). Although science can hardly be totally apolitical, the genre of social science is distinct from political texts (at least now that the Marxists have been silenced). Academia has a certain way of thinking. To stay in the vocabulary of this blog, we could say that academia has strong norms concerning acceptable ways to reason and make arguments. Academics tend to make reserved claims, commonly supported with references to earlier studies, a compelling line of argumentation, and even some analysis of carefully collected empirical evidence.

If we were to produce a report on "sustainable growth", it would need to be grounded in empirical evidence, either a synthesis of published research or our own data collection efforts. In contrast, I do not expect Himanen to produce or refer to much empirical evidence to back the vision he and his team will produce. Also, reports of this type are only seen to succeed when they contain what the client (the government) wants. Universities can better maintain their independence and legitimacy when they are not required to market their services to the government and are not paid to produce 'results' desired by it.

What is a 'model of sustainable growth' anyhow? 

I expect my fellow business school professors would question the whole idea of having a 'model' for sustainable growth. Surely the government can make some interventions that could improve the prospects of growth in the future. But I believe such interventions would add up to a coherent "model" only on the level of rhetoric. Why would 'sustainable growth' be a single problem that can be solved with a single solution? It seems it is rather a series of disconnected challenges with equally disconnected potential solutions. The whole topic smells slightly fishy, as it implies that economic and ecological sustainability are unproblematically aligned elements of the broader rhetorical category of 'sustainability'.

More damningly, an evidence-driven report produced by university professors on sustainable growth would most likely be boring in the extreme. It is not like governments have not thought about this before: higher education, competitive R&D, capabilities in marketing, industry clusters, ecosystem of competitive business services, attracting foreign investments, and so on. Visionary reports politicians want are more akin to journalism and marketing pitches than studies of anything. Although many social scientists are capable marketers and would make great journalists, I am not sure we should expect universities to be good in these domains.

Luckily, most countries are blessed with a few celebrity philosophers.

Thursday, April 26, 2012

Sensemaking is a mess. Why?


Sensemaking is perhaps the most ambitious concept in organization theory. It purports to capture the essential features of the individual and social aspects of consciousness and agency. Perhaps due to this ambition in scope, it seems almost impossible to say what sensemaking exactly is (see the table below for evidence!). Weick's work often defines sensemaking indirectly as merely 'involving', 'being associated with', or 'being about' something. Such claims provide a rich and open-ended, but unstructured, understanding of sensemaking that allows different readings. Sensemaking seems to suffer from an almost complete lack of analytical clarity. In this sense it is a truly unique perspective, even in management theory.
Selected claims regarding sensemaking in Weick, Sutcliffe & Obstfeld (2005, Organization Science)
# Claim: Sensemaking… Page
1 ...involves turning circumstances into a situation that is comprehended explicitly in words and that serves as a springboard for action. Abstr.
2 ...[has] central role in the determination of human behavior. Abstr.
3 ...[is] primary site where meanings materialize that inform and constrain identity and action[.] Abstr. & 409
4 ...[should become] more future oriented, more action oriented, more macro, more closely tied to organizing, meshed more boldly with identity, more visible, more behaviorally defined, less sedentary and backward looking, more infused with emotion and with issues of sensegiving and persuasion. Abstr.
5 ...involves the ongoing retrospective development of plausible images that rationalize what people are doing. 409
6 ...unfolds as a sequence[.] 409
7 ...[involves actors who] engage ongoing circumstances from which they extract cues and make plausible sense retrospectively[.] 409
8 ...[involves] enacting more or less order into those ongoing circumstances. 409
9 ...is a way station on the road to a consensually constructed, coordinated system of action (Taylor and Van Every 2000, p. 275). 409
10 ...occurs when a flow of organizational circumstances is turned into words and salient categories. 409
11 ...[is a] process that is ongoing, instrumental, subtle, swift, social, and easily taken for granted. 409
12 ...is an issue of language, talk, and communication. 409
13 ...[tends] to occur when the current state of the world is perceived to be different from the expected state of the world, or when there is no obvious way to engage the world. 409
14 ...is about the interplay of action and interpretation rather than the influence of evaluation on choice. 409
15 ...[leads researcher to portray] organizing as the experience of being thrown into an ongoing, unknowable, unpredictable streaming of experience in search of answers to the question, "what's the story?" 410
16 ...[as a theoretical language] captures the realities of agency, flow, equivocality, transience, reaccomplishment, unfolding, and emergence, realities that are often obscured by the language of variables, nouns, quantities, and structures. 410
17 ...is first and foremost about the question: How does something come to be an event for organizational members? 410
18 …starts with chaos. 411
19 …starts with noticing and bracketing. 411
20 ...means [in the context of noticing and bracketing] "inventing a new meaning (interpretation) for something that has already occurred during the organizing process, but does not yet have a name, has never been recognized as a separate autonomous process, object, event" (Magala 1997, p. 324). 411
21 …is about labeling and categorizing to stabilize the streaming of experience. 411
22 ...is retrospective. 411
23 ...is about presumption. 412
24 …is to connect the abstract with the concrete. 412
25 …is social and systemic. 412
26 …is about action. 412
27 …is as much a matter of thinking that is acted out conversationally in the world as it is a matter of knowledge and technique applied to the world. 412

Why has so little order emerged over the decades?
Despite widespread attention to sensemaking over the last 30 years, there has been remarkably little progress, either in the analytical clarity of its constitutive elements and their relationships or in accumulated, coherent empirical findings. I suggest some reasons for this. First, sensemaking means two different things. Sensemaking is foremost a perspective, a set of prescriptive methodological commitments addressing how researchers might or should design and conduct research. At the same time, sensemaking is a process, an observable phenomenon captured by a set of descriptive empirical observations and falsifiable theoretical claims that put forth generalized predictions concerning those observations. This duality of meanings would not necessarily hinder knowledge creation, but the sensemaking perspective is ridiculously synthetic, with little encouragement for analytical work.
When I say synthetic, I mean that the sensemaking literature puts everything together. Weick (1995) tells us that cognition is driven by actions, bracketing, noticing, editing, interaction, and identity. And a few other things. The sensemaking literature then asks researchers to take this synthetic soup of concepts and phenomena and use it to explain real-life cases and narratives. Although sensemaking is about identity, the synthetic ethos calls on researchers to ignore minute details such as the alternative definitions of 'identity' and the ways in which 'identities' relate to actors' motives or judgment.
A prime example of the synthetic tendencies is the passage in Weick's 1995 book where he elaborates the differences between sensemaking and interpretation:
The process of sensemaking is intended to include the construction and bracketing of the textlike cues that are interpreted, as well as the revision of those interpretations based on action and its consequences.(p. 8)
From an analytical perspective it seems dubious to define concepts that are just like other concepts plus some stuff that influences them. I cannot see physicists getting worked up about studying "dynamism" that is like acceleration but also intended to incorporate the interactions from which acceleration originates and the subsequent changes in acceleration due to the collisions the object undergoes because of its past acceleration. Sensemaking purports to differ from other, more elementary concepts simply by synthesizing them into a big fuzzy mess.

Analytically oriented organizational scientists might simply be compelled to ignore the sensemaking perspective as a genre of writing. However, since sensemaking is also a phenomenon (closely related to interpretation), there is a whole range of phenomena that people have a hard time studying without accommodating the synthetic perspective of Weick and his fellow sensemaking scholars. Does anyone think an organizational scholar could publish a paper on how actors in organizations interpret events and construct meaning without using the concept of sensemaking?

My war against disorder
I will later blog about a paper of mine on sensemaking, in which I tried to offer a critical look at the sensemaking process and to elaborate a more analytical account of its cognitive core processes based on the various literatures on reasoning. Given the synthetic rather than analytical focus of the literature, it was unsurprising that the ASQ reviewers were incensed because I had misrepresented Weick's work by leaving out some of its key aspects (all three reviewers found different critical aspects I had ignored). For some reason, the sensemaking scholars also did not appreciate my characterization of the literature as a confused mess (who would have thought; they even wondered why I attached the above table!).
If we cannot rephrase any aspect of sensemaking without incorporating every aspect mentioned by Weick, we will never make any analytical advance on the phenomenon. If we need to fully theorize action (human agency) before we can even discuss sensemaking, no theory will ever evolve. And let's face it: sensemaking itself has never been a theory. What sensemaking offers is a convenient term for the black box between inputs and outcomes during periods of interpretation and revision of beliefs (e.g. Maitlis, 2005). In numerous publications it is difficult to tell whether 'sensemaking' is anything but a synonym for interpretation, thinking, appraisal, or holding conversations.
References

Maitlis, S. 2005. 'The social processes of organizational sensemaking'. Academy of Management Journal 48/1: 21-49.

Weick, K. E. 1995. Sensemaking in Organizations. Thousand Oaks, CA: Sage Publications.
Weick, K. E., Sutcliffe, K. M. and Obstfeld, D. 2005. 'Organizing and the Process of Sensemaking'. Organization Science 16/4: 409-421.

Monday, December 12, 2011

Analogical reasoning

Analogies represent an important yet confusing domain of reasoning. There is no intuitively simple way to understand the information processing involved in analogies. Mathematics and computer programming do not deal with analogy, and it is extremely difficult to program computers to recognize and process analogies. This also means that the processes are very difficult to theorize.

Yet we know that managers constantly need to engage in analogical reasoning. Entrepreneurs use analogies to evaluate, examine, and sell novel ideas (e.g. Cornelissen & Clarke, 2010 in AMR). The Harvard case method, popular in almost all business schools, is based on the assumed ability of analogies to prepare students to manage firms and to solve problems.

There is a very interesting forthcoming laboratory study on analogical reasoning to be published in Strategic Management Journal by Lovallo, Clarke & Camerer.

The lab experiment on analogical reasoning

To examine the processes of analogical reasoning in managerial judgments, the authors asked experts to judge the expected returns for a sample of new ventures. They initially instructed the participants to take an "inside view", without analogical comparisons to other similar cases:
Please describe the path along which you see the Project proceeding. Start from where the Project is now and construct the most probable future scenario for the Project. Please create a timeline that describes the key steps, milestones, and actions that need to be taken to reach the Project’s goal, using as much space as you need (This should take about 15–20 minutes). After you have finished, please answer the questions on the following pages.
They then asked the participants to use analogies to re-examine their estimates:
What two categories of investments or potential investments are most similar to the Project (e.g., founder-seller, early-stage, technical-risk, public company)? You can define/create whatever categories you think are the most relevant to the Project.
The authors found that the use of analogies led 82% of participants to lower their estimates (the initial estimates were far greater than industry averages). This, I thought, was not particularly interesting in itself, as it can simply be an example of anchoring bias (in this case, a useful one). The interesting observation is this:
One finding from the study is that people do not seem naturally inclined to form a broad reference class of projects even when encouraged to do so. [...] the vast majority of reference projects were described as successes. These results are disturbing, as they suggest that reference class forecasting is itself open to bias in the recollection of reference projects (Kahneman and Tversky, 1979).
Analogical reasoning should not be limited to successes

It is clearly alarming if managers mostly base their analogical reasoning on success cases. People justifiably criticize Good to Great for its lack of attention to failed firms. If we compare the situations we encounter to success cases, we will become overly optimistic about the characteristics of our situation that match successes, while ignoring the characteristics that would match failure cases. People love to read and hear good stories. Moreover, those who succeed are more willing to share their stories than those who fail. Journalists know this and tend to write about successful innovators and turnarounds. Business professors know this and tend to write and teach case studies of heroic success stories.

To counter these tendencies, we might need to instruct managers to engage in systematic analogical reasoning. When thinking about an investment or other decision, managers should ensure that their discussions and private reflections consider analogies not only to similar success cases but also to an equal number of failure cases. The forthcoming paper by Lovallo et al. reaches a similar conclusion, although they propose a much more systematic methodology.

Two possible theories of analogical reasoning

I have yet to read a definitive paper on the philosophy of analogical reasoning. Personally, I think there are two ways to go about it. First, we can assume analogies to be very complex and to follow some form of parallel processing in which numerous attributes are matched and conclusions are drawn from a complex body of tacit knowledge. The parallel processing can be assumed to be so complex that it is practically untheorizable at the micro level. The best we may be able to do is to observe broad tendencies of past experience or working-memory recall to influence outcomes.

Second, we can assume analogical reasoning to follow working theories that are commonly tacit but can be made explicit. That is, analogical reasoning may represent a systematic comparison of characteristics and the application of knowledge concerning the relationship of matching characteristics to outcomes of interest. We may never have heard of a "snow lion" (an imaginary animal), but through analogy we can figure out its likely characteristics: it would be analogous to the snow leopard in being white and furry, and it would likely eat other relatively large mammals such as goats, analogously to ordinary lions. These traits are predicted by a knowledge-based theory of categorization: they are traits that we know either to have functionality in a snowy environment or to be inherited across sub-species. Analogical reasoning here is not pattern-matching, but the application of causal knowledge to estimate likely similarities.

The first option mystifies analogical reasoning as something that arises from the complexities of the human brain. The second option suggests that analogical reasoning is really just the application of existing knowledge to match premises to likely outcomes, not qualitatively different from more formal reasoning tasks, except that the initial premises attended to arise from the specification of one or more analogous exemplars. The second option is the only account of analogical reasoning that allows discursive consideration of analogies. Classroom discussions of case studies are not about matching patterns, but about illustrating and memorizing knowledge concerning causal relationships.
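The second option can be illustrated with a toy sketch (all exemplars and rules below are my own hypothetical inventions, not taken from any cited paper): the likely traits of the unseen "snow lion" are derived by applying explicit causal rules to known exemplars, rather than by surface pattern-matching.

```python
# A toy sketch of knowledge-based analogical inference. Each trait of the
# imaginary "snow lion" is inherited from a known exemplar for a stated
# causal reason, not because of overall surface similarity.

EXEMPLARS = {
    "lion":         {"habitat": "savanna", "diet": "large mammals", "coat": "tawny"},
    "snow leopard": {"habitat": "snow",    "diet": "mountain goats", "coat": "white, furry"},
}

# Causal rules: which exemplar each trait should be inherited from, and why.
CAUSAL_RULES = {
    "coat": ("snow leopard", "coat colour and thickness are selected by the environment"),
    "diet": ("lion",         "diet follows body plan, inherited across sub-species"),
}

def infer_traits(target_habitat):
    """Predict traits of a novel category by applying causal rules to exemplars."""
    traits = {}
    for trait, (source, reason) in CAUSAL_RULES.items():
        traits[trait] = (EXEMPLARS[source][trait], reason)
    traits["habitat"] = (target_habitat, "given by the premise")
    return traits

for trait, (value, reason) in infer_traits("snow").items():
    print(f"{trait}: {value}  ({reason})")
```

The point of the sketch is only that each inferred trait comes with an explicit causal justification, which is what makes such reasoning discussable in a classroom.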

Conclusion

Analogy and metaphor are difficult topics that can easily become mystified. They are linked to powerful and complicated processes of reasoning that we need to understand better. The forthcoming paper by Lovallo et al. is a nice example of work that makes analogical reasoning more explicit, with simplicity and clarity.

I'll conclude with my favorite passage of James Joyce, an application of analogy (metaphor) to motivate selective reasoning about the properties of women.
What special affinities appeared to him to exist between the moon and woman?

Her antiquity in preceding and surviving successive tellurian generations: her nocturnal predominance: her satellitic dependence: her luminary reflection: her constancy under all her phases, rising and setting by her appointed times, waxing and waning: the forced invariability of her aspect: her indeterminate response to inaffirmative interrogation: her potency over effluent and refluent waters: her power to enamour, to mortify, to invest with beauty, to render insane, to incite to and aid delinquency: the tranquil inscrutability of her visage: the terribility of her isolated dominant implacable resplendent propinquity: her omens of tempest and calm: the stimulation of her light, her motion, and her presence: the admonition of her craters, her arid seas, her silence: her splendour, when visible: her attraction, when invisible.

James Joyce, Ulysses Vol. 2, p. 110

Saturday, October 22, 2011

Selection problem in reasoning

Employees who have chosen to join a labour union seem to make less money than their coworkers who have abstained from union membership. So why would anyone join a labour union to begin with?

J.J. Heckman
The above example illustrates sample selection bias, a typical fault in scientific reasoning that was first explicated in detail by D.B. Rubin (I am not 100% sure) and later addressed by J.J. Heckman (work worthy of the Nobel prize in 2000). The reasoning is faulty because people who are likely to benefit from union membership join a union, while those unlikely to benefit decide not to join. It so happens that, in general, those who earn more are less likely to benefit from union membership.
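The union example can be reproduced in a minimal simulation sketch (all wage numbers are invented for illustration): each worker joins only when membership raises his or her own wage, yet members still end up earning less on average than non-members.

```python
import random

random.seed(0)

def simulate_worker():
    # Hypothetical numbers for illustration only.
    base_wage = random.gauss(3000, 600)              # wage without union membership
    union_premium = max(0.0, 3200 - base_wage) * 0.5  # the union helps low earners most
    joins = union_premium > 100                       # join only if membership pays off
    observed_wage = base_wage + (union_premium if joins else 0.0)
    return joins, base_wage, observed_wage

workers = [simulate_worker() for _ in range(10_000)]
members = [w for w in workers if w[0]]
non_members = [w for w in workers if not w[0]]

def avg(xs):
    return sum(xs) / len(xs)

# Members look worse off, even though every single member gained from joining.
print("avg observed wage, members:    ", round(avg([w[2] for w in members])))
print("avg observed wage, non-members:", round(avg([w[2] for w in non_members])))
```

Comparing the two averages and concluding that unions lower wages would be exactly the faulty inference in the opening example: the comparison ignores who selects into membership.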

Selection bias in academic research
Selection bias (a specific case of the broader problem of 'endogeneity') is discussed in detail in every good PhD program. Much of advanced statistics (the material covered after basic and time-series regression models) relates to the problem of endogeneity. Yet selection bias remains an endemic weakness in strategic management, and probably in plenty of other social science studies. Some academics have joked that if a reviewer does not like an article and would like it to be rejected, (s)he can always complain about selection bias and endogeneity.

I recently read an article in the Academy of Management Journal that I really liked, which found, in effect, that the active involvement of managers and their tendency to draw external stakeholders into discussing problems increased both the quality of the resulting agreements and the resulting actions. It is a strong and enticing article. Yet the author neglects to discuss the possibility that managers are unlikely to engage with problems or call in external stakeholders when the problems are thorny: managers select which problems to attend to based on their likely ability to resolve the issues. Thus, the seemingly self-evident prescription that the more managers and stakeholders engage with issues the better may be false. Indeed, it is easy to see that when problems ultimately cannot be solved, managerial engagement and the involvement of stakeholders can carry a high cost for the managers themselves, if not for the organization.

Why are academics unable to reason correctly and attend to selection bias? I think there are three issues. First, it is very difficult to robustly correct for selection bias. We would have far less research done and published if we insisted on controlling for all potential selection problems. Second, PhD education, while attending to selection problems, also creates a lot of trust in the basic methodologies, both quantitative and qualitative. Researchers will always be overjoyed with any novel findings they make; it is not very enticing to go and try to decimate our own results. Management is such a shitty, practically oriented discipline that it is nearly impossible to publish studies that identify relationships and then prove them to be spurious. Finally, researchers often have a good qualitative understanding of their research subjects. When you know the managers and know how they think, you "immediately know" that selection bias is not an issue. And if you know it is not an issue, you may not think it worth doing a lot of extra work to prove it conclusively.

Selection bias of managers?
To the best of my knowledge (which is not saying much), nobody has really examined selection bias in managerial reasoning. The phenomenon falls under the broader umbrella of 'superficial learning', the idea that managers learn the wrong lessons from their experiences. In reality, we do not know the extent to which managers infer causality from mere correlation. The bias would seem likely: managers supposedly imitate the behaviors of their successful competitors, even though the only reason less successful companies do not behave in the same way may lie in their inability to benefit from those practices.

The question is pretty significant for two reasons. First, selection bias leads to false causal attributions and thereby to wrong decisions. Second, the problems resulting from selection bias can be influenced. By drawing attention to problematic causal attributions, managers can either correct their mistakes or at least approach their causal attributions and knowledge with the required scepticism.

How could we study selection bias in real life? I suppose we would need very intelligently devised large-scale surveys. The next step would be to design laboratory experiments to investigate potential ways to mitigate the biases. While neither form of research really appeals to me, I hope someone will investigate this.

Selection bias and network centrality
When doing my PhD I had data from a big telecoms firm for examining interpersonal networks within a large R&D unit. I found, in line with prior research, that engineers who had worked with other central engineers created inventions with greater impact within the firm. However, once I looked at the technological domains these people worked in, there was no longer any causal relationship: engineers were well connected if they worked on technologies that were crucial to the firm, and the inventions of these engineers had a big impact only because of the technological area they operated in. It turned out that social ties were selected based on the work task, which also explained the apparent 'productivity'.
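The confound can be reproduced with a toy simulation (the numbers are invented; this is not the actual telecoms data): the importance of a technology domain drives both an engineer's centrality and the impact of the resulting inventions, so centrality and impact correlate strongly in the pooled data but not within any single domain.

```python
import random

random.seed(1)

# Hypothetical setup: a domain's importance drives both how connected an
# engineer is and how much impact the resulting inventions have. Centrality
# has NO direct effect on impact.
engineers = []
for _ in range(5000):
    domain_importance = random.choice([1.0, 2.0, 3.0])       # crucial vs peripheral tech
    centrality = domain_importance + random.gauss(0, 0.5)
    impact = domain_importance * 2 + random.gauss(0, 0.5)
    engineers.append((domain_importance, centrality, impact))

def corr(xs, ys):
    """Pearson correlation, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Pooled across domains, centrality looks strongly "productive".
print("pooled corr:", round(corr([e[1] for e in engineers],
                                 [e[2] for e in engineers]), 2))

# Within each domain, the relationship disappears.
for d in (1.0, 2.0, 3.0):
    sub = [e for e in engineers if e[0] == d]
    print(f"within domain {d}: corr =",
          round(corr([e[1] for e in sub], [e[2] for e in sub]), 2))
```

A pooled regression on such data would "confirm" the centrality effect, while conditioning on the domain reveals it to be spurious, which is exactly the pattern described above.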

The data were not rich enough to "disprove" the importance of centrality in explaining 'innovative productivity', and I lost interest in the whole domain over time. Yet my own observations leave a nagging feeling that many effects reported in research are significantly weaker than claimed, and that our social sciences are terrible at self-correction.

Wednesday, October 5, 2011

Reflexivity and irony

Philosophers are a reflexive bunch. Not that they are very shiny (in my experience rather the contrary); rather, they tend to think a lot about their own thoughts. In management theory and education, reflexivity comes heavily recommended (see e.g. an article on reflexivity in research by my friend Nelson and his co-authors). Reflexivity is commonly associated with wisdom, something most of us would consider desirable.

In this blog post, I raise the question of whether reflexivity and irony -- key philosophical virtues -- are also potential problems for managers who must lead actual organizations.

Reflexivity and irony

To reflect is to ask why, to engage in reasoning where existing beliefs are used to justify or reject an action, a choice, a norm, a belief, or an assumption. Philosophy has been, and largely still is, about reflecting on basic issues: why we consider something to be good, why we accept something to be true, and so forth. An intelligent individual reflects; an ignorant one accepts the status quo without considering further reasons.

Richard Rorty, my favourite philosopher, has taken reflexivity to a point where most philosophers become uncomfortable. Namely, he convincingly argues that there is no reason for a philosophical system that we can simply accept outright. There is no escaping language, in the sense that no experience outside our claims can underwrite our premises. Whatever philosophical standpoint we take, we ought to hold it with irony. This is the sense of irony that Rorty associates foremost with Nietzsche: a detached amusement towards the beliefs one accepts oneself. We may accept a certain outlook on life, but we must accept that there is no final, unquestionable reason to do so (philosophy is in this sense no different from religion; it relies on belief).

Now, this view has not made Rorty well liked among many philosophers. Such is the social burden of irony. An ironic approach to management studies is likewise warranted: does what we do make any difference? Are our papers really insightful? Has everything mostly been said already (and forgotten)? Are most approaches not merely dogmatic continuations of commonsensical observations, mystified by charismatic old men? Whatever the answers, these are worthy considerations. But don't expect them to be crowd-pleasers at the cocktail event of a major conference.

Should managers be ironic?

Some authors, including Karl Weick, have called for more reflection and "mindfulness" on the part of managers. I am not sure this is always a good thing. Management and leadership benefit from confidence. Social action requires unity and permanence. How can hundreds of employees in an organization coordinate their work if there is no uniform and stable understanding of means and ends, of premises and values? In Blink, Gladwell argues that Gettysburg, one of the most famous battles in military history, was lost because of reflection and indecision. Psychologically, individuals need their lives to be predictable. Living with a partner who constantly questions and adjusts key life choices would probably be quite horrible.

A truly strong individual might be ironic privately, yet project utmost confidence externally. The self-control of an actor? The benefit of irony is the lack of fear. One who manages not to take one's own position seriously will not be fooled into respecting authority when it is not warranted. But without authority, even the authority of one's own knowledge, what is the basis for continued motivation and effort?

Entrepreneurship is a particular area where scholars have identified passion and persistence as advantageous. Irony and reflexivity, taken to an extreme, seem antithetical to passion. Indeed, the stereotypical philosopher is a miserable being mired in fundamental doubt, best exemplified in Sartre's existentialist novels (and perhaps even better by Camus). The entrepreneur is, in Lampel's words, "an optimistic martyr": an individual who chooses not to reflect on potential problems, knowing that the battle is more important than the victory. Because of the complexity of our existing beliefs, reasoning will seldom lead to closure. A manager knows that analysis leads to paralysis, because there are infinite facts and choices to reflect on.

Conclusions? I'll reflect on that...

Ironic reflection is the reasonable conclusion of 20th-century philosophy, a conclusion that dethrones philosophy from its position as the meta-science, casting what used to be philosophy into the history of philosophy: a humanistic curiosity and a source of inspiration devoid of authority. To reflect is to reason more, to be wiser. But managers might need a sort of meta-wisdom that tells them when not to reflect. We all need that, actually, lest we become the antiheroes of Camus and Sartre. Maybe someone should do a study on the downsides of wisdom and reflexivity in management? I'm not holding my breath to see it published and taught in MBA programs...

Monday, September 12, 2011

Identity mystique

Organization theory seems to be undergoing some type of identity renaissance. Identity work is very popular, and plenty of cultural explanations use the concept of identity as part of the explanation. In popular management thinking, the concept of 'organizational identity' is now used in parallel with 'organizational culture'. This blog post, however, is of interest only to academics.

Identity in Jaco Lok's (2010) paper in AMJ

It is hardly controversial to think that actors have identities (self-understandings), a set of beliefs concerning themselves (identity beliefs). However, there is often a temptation to treat identities as roles, singular templates that somehow define what actors are. In such talk, a priest is bound to have the identity of a priest and a professor has the identity of a professor.
[I]dentity is thought to form an important link between institutional logics and the behavior of individuals and organizations
The construction of resonant identities in their legitimating accounts has been shown to be an important mechanism by which institutional entrepreneurs are able to effect particular logics
Identity is seen as central to entrepreneurial attempts to theorize need for change, as it is through subsequent identification by individual and/or collective actors that new logics can become institutionalized ['identification' here remains unclear to me]
This study deepens understanding of the relations between identity construction and institutional logic reproduction and translation by demonstrating how identity can be contested and reconstructed, or “worked.”
The above quotes are from Lok's introduction. As is typical among institutional theorists, identity is offered as some sort of explanation for behavior. Priests behave in a certain way because they have a priest identity. This is a very neat sociological explanation, but for one thing: it explains nothing.

I am not going to go into the details of Lok's article. It is a nice qualitative study, and it is perhaps unfair to take it here as a representative of the broader malaise. The problem I have is with its framing, and the article is chosen simply because it is new and published in the management journal with the highest impact factor.

The identity-based non-explanation can be summarized as follows:
Can a better explanation be devised? Would it have anything to do with reasoning? YOU BET.

Now again, but without holistic identities

A much clearer story can be told if we accept the rather mundane argument that in reality actors never assume a single stereotypical identity and that all identities actually consist of multiple identity beliefs, beliefs concerning the self.

It then happens that the beliefs actors have about themselves are quite likely to be connected to the beliefs they hold more broadly. If one believes that animals have souls and that it is quite cruel to eat them, then the individual may also hold the self-belief that by abstaining from eating meat he or she is an ethical and good human being. Does the vegetarian identity here explain the behavior (not eating meat)? Or might we devise a more elegant explanation by saying that eating meat just does not seem like a very rational thing to do given the beliefs of the individual?

We might have an explanation like this:
Let's reconsider:
The construction of resonant identities in their legitimating accounts has been shown to be an important mechanism by which institutional entrepreneurs are able to effect particular logics
One can say that plausible identity beliefs (e.g. being a vegetarian is a good thing) do influence how people think more broadly (farming meat is unethical). But it seems more apt to state that identity beliefs are only plausible once they conform to the broader beliefs accepted by actors.
Identity is seen as central to entrepreneurial attempts to theorize need for change, as it is through subsequent identification by individual and/or collective actors that new logics can become institutionalized ['identification' here remains unclear to me]

This may also get it the wrong way around: is it that actors build their identity beliefs based on broader beliefs, and then those broader beliefs become accepted? It does seem that if actors are reasonable and accept the broader cultural beliefs, then their identity beliefs will also change over time.


Durability of identities matters

Even if the explanations in institutional theory amount to unnecessary 'identity mystique' and/or get causality the wrong way around, there is at least one clear way in which identity beliefs matter. Intuitively, identity beliefs seem to be stable. Actors appear much less likely to reject beliefs that strongly relate to their understanding of themselves than other beliefs (there must be research on this as well, but I am too busy to look it up at the moment). The claims concerning the centrality of identity are therefore probably correct, but the role identity plays in the initial changes in industries is likely exaggerated by accounts that treat identities as holistic entities.

Me against mystification

I am thinking about writing a paper on the aggregation mystification that results from theorizing at the level of aggregate concepts (identities) rather than their constitutive parts (identity beliefs). Aggregate concepts are seldom susceptible to reasoning-based explanations, whereas the effects of constitutive parts can be largely explained simply by positing actors to be reasonable. Aggregation mystification produces exotic theoretical arguments that result from interpreting data at an unwieldy level of analysis.

Monday, August 29, 2011

Reasoning about ourselves and our organizations

In this blog post: The tendency for individuals to reason about themselves has arguably increased. Is the same true for organizations? Top managers are told to ask themselves 'what is this firm about', to become 'paranoid' about what their firm is and could be. Has such anxiety increased? Was there a time when firms had a fixed organizational identity that has now passed?

The modern question: Who am I?
Anthony Giddens has suggested that the 'late modern age' we live in is distinct from prior times in how we think of ourselves. Whereas previously our identities (who we consider ourselves to be) were defined by our family and our role in society, we now constantly 'try on' different identities, burdened by the knowledge that whatever we are is just one choice among many possible ones. Identity is a key topic we reason about: we consider whether our observations and 'facts' justify the self-conception we have, and we attend to the reasonable implications 'justified' by the self-conception we have chosen. We use our identity to reason about what to do.
A person may take refuge in a traditional or pre-established style of life as a means of cutting back on anxieties that might otherwise beset her. But, for reasons already given, the security such a strategy offers is likely to be limited, because the individual cannot but be conscious that any such option is only one among plural possibilities. (Giddens, 1991: 182).
In late modernity, there is no choice but to reason about who we are, a source of burden and anxiety (and freedom, one may say). Does this apply to organizations?

Reasoning about Organizational Identity
Just as our conception of ourselves is our identity, the conception an organization has of itself is its organizational identity. Organizational identity is pretty close to what many would understand as strategy, but I'll use the former concept to stay consistent with Giddens's argument discussed above. Even though Giddens has no interest in organizations, we could make an analogous argument: in early modernity, organizations were not much concerned with reasoning about themselves; in late modernity, organizations are anxious to consider and reconsider what they are about.

In terms of reasoning, this thesis would mean that the range of beliefs subject to reasoning is broadening. In earlier 'simple times', managerial reasoning rested on simple premises: how can we do whatever we are about in a way that creates growth or profits? Changes in identity emerged from 'diversification', which added new elements but did not raise thorny questions about the legacy. In late 'complex times', the complexity of reasoning increases dramatically because the very premises of those prior questions are themselves subject to reasoning: Should we be this or that? Is there an identity we could assume that we are not yet aware of? Such complexity can make anyone nervous. Only companies with large irreversible investments are safe from questions concerning the optimality of their current business.

In conclusion
This post is just a thought experiment, leading to the rather dull-sounding proposition that 'things have gotten more complex'. But this line of thought may also suggest that managers are becoming increasingly anxious and paranoid. More generally, it implies that the range of topics managers reason about is defined by the broader societal context.

P.S.
My colleague Saku told me the story behind the book. Giddens remarried, and his new wife read self-help books. While spending time on the toilet, Anthony started browsing these books. They got him thinking about the constant quest to conceive and reconceive the understanding of self, which he associated with late modernity.

Reference
Giddens, Anthony. 1991. Modernity & Self-Identity: Self and Society in the Late Modern Age. Polity Press.

Sunday, August 21, 2011

Homes for Africans: How reasonability helps understand social institutions

I'm going to tell a story about David, a smart university graduate who volunteered to help build homes in Africa (and is now the CEO of his own firm). The story helps elaborate my own work on the role of 'discursive institutional work' in creating, maintaining, and disrupting the prevailing social order.

Warning: This post marks the first instance of explicit promotion of my own work on this blog! 

Selling Houses in Africa
While working at Imperial College London, I met an entrepreneur running an IT company, to whom I had to give a guest lecture. While we were chatting, he told me he had gone to Africa immediately after graduating (from Cambridge) to work as a volunteer. He got involved in a big charity building proper houses for the poor in developing countries, using money and volunteer workers from the West. In really poor countries, 40% of energy is consumed by households; a lot of it goes into heating during winter nights because insulation is terrible. The guy was pretty smart, and he soon figured out that the scale of the charity's activities was negligible. If he could delegate the house-building to entrepreneurial Africans who would do it for profit, the charity could leverage the donated money and accomplish a substantive improvement. Moreover, selling subsidized houses would allocate the resources effectively to those families most willing to invest (with safeguards in place to make sure the new tenants were not wealthy). Because a continued flow of Western money was not guaranteed, a for-profit-philanthropy hybrid would lead to a more sustained impact. Great idea, so let's do it?

Not so fast. In the 90s most philanthropists were not overly excited about turning volunteer activities into a franchised business. They hated the idea. The reasoning was pretty straightforward: entrepreneurs making money out of poor Africans was wrong. Moreover, because nobody else was doing it, it had to be a bad idea (a common element in practical reasoning, and probably a good rule of thumb most of the time). Today, of course, the salient reasoning would be quite different: creating entrepreneurship in an African country is a great way to boost its economy and the well-being of its people. The idea of subsidizing entrepreneurship as a form of philanthropy had not yet been institutionalized; the business plan was "rationally speaking" as good in the 90s as it is today, but it did not correspond to an existing social institution. Anyhow, during his three-year stay in Malawi, the charity became the country's largest home builder.

After Africa and some other charity work, David was dead set on becoming an entrepreneur. He went to do an MBA at Imperial and got involved in a venture while there. He is now the CEO of his IT firm, a portfolio company of Imperial Innovations.

Social Institutions
Organization theory is hugely occupied with social institutions -- basically, a term usable for any durable, widespread, observable regularity in behavior. Institutions can be anything: the ubiquitous quality management systems, the tenure system in universities, sales commissions, whatever. Why such preoccupation? Because much of what takes place in companies (and more broadly in industries and countries) does not seem to result solely from rational decision making or technical concerns. If we can explain both the rational, purposeful decisions in organizations and the social institutions around those decisions, we have a pretty good understanding of what is going on.

Reasonability and Institutions
The notion of 'institutional work' captures the work done by actors, such as David, to create, maintain, or disrupt social institutions. This theoretical approach is founded on the observation that practices, entities, and social arrangements do not 'sell themselves' to customers, potential employees, government regulators, or the media. For example, when someone seeks to shape the attitudes or regulations concerning immigration or social welfare, they are engaging in 'institutional work'. Institutional work is distinct from 'selling' because the acceptability ('legitimacy') gained through it tends to benefit everyone equally, which creates a free-riding problem.

This is the topic of a paper (see below) I wrote with Saku Mantere and Eero Vaara. Our commentary suggests that the theory on institutional work -- by focusing on what is done or said rather than the cultural context in which things are said -- has tended to ignore the discursively articulated reasoning around the social institutions in question. Thus:
We argue that reasonability plays at least three crucial roles in institutional work: It provides the main contextual constraint of institutional work, its major outcome as well as the key trigger for actors to engage in it.
These are very basic observations, but they are things we should at least control for when explaining why some actors manage to promote a novel social order while others fail. They also help explain why discourse matters a lot when firms try to shape their industries, or when managers try to change the way things are done inside their own firm. Institutional work involves the creation and circulation of credible justifications for social institutions. On a meta level, the broader background assumptions (propagated by the media) define how things can be justified in the first place -- for example, what arguments can be used to defend or attack immigration (an unfortunately hot topic in our increasingly xenophobic Finland).
1. Reasoning as a constraint: Any claims put forth by actors need to be justifiable with acceptable reasons, must have implications that recipients can comprehend, and must not invite clearly acceptable reasons for their refutation.
For David, the salient beliefs donors and volunteers used to reason about philanthropy mattered a lot. The evaluation criteria and beliefs African regulators used to reason about foreign organizations would also define the conclusions those regulators would draw should the organization experiment with alternative approaches. What matters is how premises (the organization is now doing X) lead to conclusions (the organization is "not philanthropic") in reasoning, not just whether the actual or expected measurable outcomes of X are desirable.
2. Reasoning as an outcome: When actors engage in discursive work, they define and refine the generally accepted conditions for beliefs—the reasons why actors ought to believe one thing or another.

If David engaged in successful institutional work and shaped how philanthropists perceive the franchising organization, it would likely lead to a broader change in how they reason -- in the type of conclusions they draw from any philanthropic 'business model' that involves local entrepreneurship. Institutional work would make employment and economic growth more salient topics to reason about.

3. Reasoning as a trigger: When new events and outcomes contradict the existing, accepted linkages between premises and conclusions (the established way of reasoning), institutions can remain reasonable and legitimate only if 'maintenance work' restores their reasonability. We gave the example of the financial crisis of 2007, which questioned the very reasonability of the existing financial order and the premises on which it had been considered legitimate. As always, various experts were ready to provide reasoning to support the system.

The current story is the euro crisis. Although the common currency has had many reasonable justifications going for it, the near-default of Greece and the continuing crises in Ireland, Portugal, Spain, and now even Italy and France suggest that the reasoning behind the euro may have been flawed. To retain its legitimacy, the proponents of the euro must engage in what we call 'discursive maintenance work' that diminishes the criticism and fears raised by recent events and provides a solid line of reasoning to support the euro, a set of reasons that must be immune to the recent evidence against the currency's viability.
In the rest of our brief paper, we also pose some research questions relating to the role of various actors (such as professions) in creating and maintaining the reasonability of practices and arrangements.

Reasonability
To say some behavior has reasonability can mean two different things, which I here call weak and strong reasonability (terms I conveniently invented tonight). Weak reasonability means we are able to provide reasons for the behavior; we can comprehend and evaluate it because there are some reasons for it. A merger lacks reasonability if we cannot understand what its purpose is, or how it will change things (for better or worse). Strong reasonability means that the justifications are broadly acceptable and aligned with the broader beliefs and interests of the actors involved. A merger may be weakly reasonable if those who decided on it had clear goals, but still lack strong reasonability (or be 'unreasonable') if reasons exist to discredit the justifications. For example, the costs of the merger may exceed the benefits provided by its goals, or there may be reasons to assume that the goals of the merger cannot be accomplished.

In our paper we say that reasonability is "the existence of acceptable justifying reasons for beliefs and practices". It is not clear that we (or at least I) had in mind the first, weak, sense of reasonability. Social institutions may be sustained to the extent that they are weakly reasonable, that is, they can be justified in some comprehensible manner by someone. That does not mean they are strongly reasonable, that there is no plausible way to discredit the social institution. After all, the U.S. has a law about the debt ceiling that is justified by some pretty good arguments. Yet in a broader sense the law is really bad, and even the justifications do not stand up to closer scrutiny. The debt ceiling is a weakly reasonable arrangement (social institution, if you will), but it does not seem to be strongly reasonable.

The story of our paper
The piece we got published on reasonability and institutional work is not really an article; it is an invited commentary in a special issue. The Journal of Management Inquiry was publishing a special issue on institutional work -- a new research program that seeks to understand how actors create, maintain, and disrupt social institutions (mainly, how they shape the acceptability of things such as the franchising model for a philanthropic construction company). The special issue had a big 'agenda-setting' paper by Lawrence, Suddaby & Leca, and some invited pieces by various academics. Because the comments were mainly from Americans, the editor thought that Saku, Eero, and I could do a more European type of thing.

I knew immediately that I wanted to write on reasoning. Luckily, Saku was keen on this as well, bringing in the notion of the linguistic division of labor (Saku is more knowledgeable on Hilary Putnam, whom I have only read a little recently). While Eero had brilliant ideas of his own, we needed to retain some focus, so he concentrated on making our initial ideas much better. We didn't have much time to fine-tune the commentary, but I am pretty pleased with how it turned out. Now I just need to finish writing the proper article on reasoning and social institutions.

The most important thing: Reference
Schildt, H.A., Mantere, S., Vaara, E. 2011. Reasonability and the Linguistic Division of Labor in Institutional Work. Journal of Management Inquiry. March 2011, Vol. 20: 82-86.
doi:10.1177/1056492610387226