Complexity Theory for Software Developers
Jurgen Appelo, https://management30.com
Many agile software development experts agree that a software development team is a complex adaptive system, because it is made up of multiple interacting parts within a boundary, with the capacity to change and learn from experience. [Highsmith 1999:8] [Schwaber 2002:90] [Larman 2004:34] [Anderson 2004:11] [Augustine 2005:24]. And who am I to claim otherwise?
The magazine Emergence: Complexity & Organization once conducted an extensive study of management books referencing complexity, with experts from various sciences, including the hard ones like physics and mathematics. It turned out that the reviewers agreed on the usefulness of complexity theory when applied to organizations and management:
One finds widespread agreement [among reviewers] on the existence of a significant potential for the study of complex systems to inform and illuminate the science and management of organizations. [Maguire, McKelvey, 1999]
But, as we will see later, the real debate among experts is about which scientific terms can be applied where.
This article is an introduction to complexity theory for software developers and their managers. Or perhaps I should make that plural (complexity sciences), because you will notice that ideas about systems have grown into a body of knowledge comprising multiple theories over a period of more than a hundred years.
It is good to know a little context and history. And it's nice to look smart next time you're at a party, when you can recite the difference between general systems theory and dynamical systems theory.
I have just one word of warning for you. This overview is necessarily incomplete, oversimplified, and at times subjective. Though I'm sure those are exactly the reasons why it will be understandable.
Cross-Functional Science
Agile software development often addresses the problem of organizational silos, the practice of separating people who do different kinds of work, claiming that this separation often hurts an organization's performance. Interestingly enough, a similar situation has existed in science for many decades.
Most universities and research institutes are organized in scientific silos. Physicists work with physicists, biologists with biologists, and mathematicians with mathematicians. This has led to scientific fragmentation and tunnel vision among scientists and researchers. The different scientific disciplines are so isolated from each other that they usually don't know what the others are doing [Waldrop 1992:61].
Scientific silos can be a problem, because many phenomena in the world, across different scientific disciplines, are very similar to each other. For example, economists were baffled in the past by a phenomenon known as "local equilibriums," which happened to be something that physicists were already very familiar with at the time [Waldrop 1992:139]. And phase transitions in physics look suspiciously similar to punctuated equilibriums in biology. And biologists have noticed that mathematics can help them analyze ecologies of species [Gleick 1987:59]. And "discoveries" made by mathematicians turned out to have been discovered years earlier by meteorologists. [Gleick 1987:31].
For many decades, scientists in different disciplines have struggled with complex phenomena that they could not explain. But when the dots were connected between the sciences, and systems across all disciplines were understood to be complex systems, suddenly things began to make more sense. In fact, I once read the suggestion that the biggest leaps in science happened when scientists worked in fields they were unfamiliar with, because they brought with them the knowledge and experience (and fights and failures) of another field that they were familiar with!
Like agile software development, complex systems theory favors a cross-disciplinary approach to problem solving. Complexity thinking is the antidote to specialization in science. It recognizes patterns in systems across all scientific disciplines, and promotes problem-solving involving concepts from different fields. But complexity theory has not been the first attempt at cross-breeding the sciences. Let's have a brief look at history to see what happened before.
General Systems Theory
In the late 1940s, a number of scientists and researchers, led by biologist Ludwig von Bertalanffy, created an area of study called general systems theory (sometimes simply called systems theory). Their studies were based on the idea that most phenomena in the universe can be viewed as webs of relationships among elements. And no matter whether their nature is biological, chemical, or social, these systems have common patterns and behaviors that can be studied to develop greater insight into systems in general. The grand goal of systems theory was to form a unity of science that was interdisciplinary: a common language of systems across all sciences.
One of the achievements of systems theory, which continued to be studied and expanded until at least the 1970s, was shifting the focus from elements in a system to the organization of elements, thereby recognizing that relationships among elements are dynamic, not static. Scientists studied concepts like autopoiesis (how a system constructs itself), identity (how a system is identifiable), homeostasis (how a system remains stable), and permeability (how a system interacts with its environment). [Mitchell 2009:297].
The recognition that a software development team can construct itself, that it can define its own identity, that it needs to interact with its environment, and that interactions among team members are just as important as the team members themselves (or even more so) can all be attributed to general systems theory.
Regrettably, the unification was never fully achieved, which should come as no surprise to software developers with experience in attempts at unification. But the legacy of general systems theory is significant. Almost all laws of systems theory also turn out to be valid for complex systems [Richardson 2004a:75], which is more than various unification frameworks in software engineering have achieved.
Cybernetics
Around the time when general systems theory was conceptualized by biologists, psychologists, economists, and other researchers, a similar area of study called cybernetics was created by a similarly diverse group of neurophysiologists, psychiatrists, anthropologists, and engineers, with mathematician Norbert Wiener as a leading figure.
Cybernetics is the study of regulatory systems that have goals and interact with their environment through feedback mechanisms. The goal of cybernetics itself is to understand the processes in such regulatory systems, which include iterations of acting (having an effect on the environment), sensing (checking the response of the environment), evaluating (comparing the current state with the system's goal), and back again to acting. This circular process is a fundamental concept in the study of cybernetics.
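As a sketch, the circular act-sense-evaluate process can be written as a simple control loop. The thermostat scenario, the proportional correction, and all names and numbers below are my own illustrative assumptions, not from the article:

```python
# Minimal sketch of a cybernetic regulator: a hypothetical thermostat
# that repeatedly acts on its environment, senses the result, and
# evaluates the difference from its goal. All values are illustrative.

def regulate(temperature: float, target: float, steps: int = 20) -> float:
    for _ in range(steps):
        error = target - temperature   # evaluate: compare state with the goal
        action = 0.5 * error           # decide on a proportional correction
        temperature += action          # act, then sense the new state
    return temperature

print(round(regulate(temperature=15.0, target=21.0), 2))  # → 21.0
```

The point is not the arithmetic but the shape of the loop: the system reaches its goal only through repeated feedback, never in a single calculated step.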
From cybernetics, we have adopted the view that a software team is a goal-directed system that regulates itself using various feedback cycles. We have learned that in a self-regulating system like a software team, rather than energy and force, it is information, communication, and purpose that are the most important factors. And cybernetics helped us understand that feedback plays a crucial role in the development of complex behavior [Mitchell 2009:296].
General systems theory and cybernetics are often confused. This is not surprising because they both influenced each other; they both have difficult names; they both tried to work toward a unified science for systems; and they both proved unable to live up to their original goals. Nevertheless, each is responsible for carrying the body of knowledge of systems, which later theories could benefit from and build upon.
Dynamical Systems Theory
When we see systems theory and cybernetics as the two legs of the body of knowledge of systems, then one of its arms is certainly dynamical systems theory.
Grown out of applied mathematics in the 1960s, dynamical systems theory explains that dynamic systems have many states, some of which are stable and some of which are not. When parts of a system never change over time, or when they always settle back to their original values after having been disturbed, we say that these stable states act as attractors.
The relevance of dynamical systems theory to software development is that it helps explain why some projects are stable and why others are not. And why sometimes it seems impossible to change an organization, because it always reverts back to its original behavior.
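A tiny numerical sketch can make the attractor idea concrete. The logistic map used here is a standard textbook example, my assumption rather than anything the article relies on:

```python
# Sketch of an attractor: for the logistic map x -> r*x*(1-x) with
# r = 2.5, every starting point in (0, 1) settles to the same stable
# state, x = 0.6. Disturb the system and it reverts to that behavior.

def settle(x: float, r: float = 2.5, steps: int = 100) -> float:
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

for start in (0.1, 0.5, 0.9):
    print(round(settle(start), 6))  # → 0.6 for every starting value
```

Wherever the system starts, it is pulled to the same state, which is exactly the "organization that always reverts to its original behavior" described above.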
Dynamical systems theory played a pivotal role in later theories by offering mathematics as a helping hand when dealing with hard-to-measure concepts from systems theory and cybernetics. (And it is a comforting thought that part of what was to become complexity theory was not vague speculation, but grounded in hard mathematics.)
Game Theory
If we consider dynamical systems theory as one arm of the body of knowledge of systems, then game theory must certainly be the other one. Multiple systems often compete for the same resources, or try to have each other for lunch. Game theory indicates that, in such cases, systems may develop competing strategies.
As another branch of applied mathematics, game theory attempts to capture behavior of systems in strategic situations, where the success of one depends in part on the choices made by others. Game theory was developed in the 1930s, and introduced to biology and evolutionary theory in the 1970s when it was recognized that it applied to the strategies of organisms for catching prey, evading predators, protecting territories, and dating the other sex.
Game theory has turned out to be an important tool in many fields, including economics, philosophy, anthropology, and political science. And of course software development, where it not only helps software developers to build games, electronic markets, and peer-to-peer systems, but also explains the behavior of people in teams, and the behavior of teams in organizations.
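To illustrate the core idea that one system's best strategy depends on another's choice, here is a best-response calculation. The payoff numbers are the classic prisoner's dilemma values, assumed for illustration and not taken from the article:

```python
# Sketch: strategic interdependence in game theory. Each player's payoff
# depends on both choices; the payoff matrix is the classic (assumed)
# prisoner's dilemma.

PAYOFF = {  # (my_move, their_move) -> my payoff
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 5,
    ("defect", "defect"): 1,
}

def best_response(their_move: str) -> str:
    """Return the move that maximizes my payoff against a given move."""
    return max(("cooperate", "defect"), key=lambda m: PAYOFF[(m, their_move)])

# Defection dominates: it is the best response to either choice, even
# though mutual cooperation would pay both players more.
print(best_response("cooperate"), best_response("defect"))  # → defect defect
```

The same tension between individually rational and collectively good choices shows up when teams compete for shared resources inside one organization.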
Evolutionary Theory
It is hard to imagine anyone not being familiar with evolutionary theory, which has been well known ever since Charles Darwin published On the Origin of Species, one of the most famous books ever, in 1859. What virtually all biologists agree on are the basic concepts of evolution: gradual genetic changes in species, and survival of the fittest by natural selection.
Of course, agreement on the basics doesn't prevent biologists from bickering endlessly about the details. The importance of random genetic drift (species changing for no reason), punctuated equilibriums (sudden drastic changes instead of gradual change), selfish genes (selection at the gene level instead of organisms or groups), and horizontal gene transfer (species exchanging genes with each other) have all been discussed, embraced, and disputed vigorously [Mitchell 2009:81-87]. (But confront them with Intelligent Design and suddenly biologists are united in their rejection of such unscientific nonsense.)
Evolutionary theory has contributed significantly to the study of all kinds of systems, whether they are biological, digital, economical, or sociological. It is said that teams, projects and products evolve, while adapting to their changing environments. And even though the kind of "evolution" in software systems is not the same as Darwin described, evolutionary thinking has helped in understanding growth, survival, and adaptation of systems over time. And this is why I consider evolutionary theory to be the brains of the body of knowledge of systems.
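The evolutionary loop of variation plus selection can be sketched in a few lines of code. The bit-string fitness target, population sizes, and mutation rate below are all assumptions chosen purely for illustration:

```python
# Toy sketch of variation + selection: a population of bit strings
# evolves toward an assumed target. Keeping the fittest survivors
# (elitism) guarantees the best fitness never decreases.
import random

random.seed(1)
LENGTH = 20
TARGET = [1] * LENGTH  # an arbitrary, assumed "perfectly adapted" genome

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(30)]
initial_best = max(map(fitness, population))
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                                 # selection
    offspring = [mutate(random.choice(survivors)) for _ in range(20)]
    population = survivors + offspring                          # variation
final_best = max(map(fitness, population))
print(initial_best, "->", final_best)
```

Nobody designs the winning genome; it grows out of repeated imperfect copying and selection, which is the point the paragraph above makes about teams, projects, and products adapting over time.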
Chaos Theory
Though a number of discoveries about chaos were made earlier, the real breakthrough of chaos theory happened in the 1970s and 80s, with Edward Lorenz and Benoit Mandelbrot being the leading figures at the time.
Chaos theory taught us that even the smallest changes in a dynamic system can have tremendous consequences at a later time. This means that the behavior of many systems is ultimately unpredictable, because minor issues can turn into big problems, as any software team is eager to acknowledge. This innate unpredictability of dynamic systems has far-reaching consequences for estimation, planning and control, which is a well-known concern among climate scientists and traffic experts, but less readily accepted among project managers and functional managers.
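This sensitivity to small changes can be demonstrated numerically. The chaotic logistic map at r = 4 is a standard example, my assumption rather than the article's:

```python
# Sketch of chaos: two trajectories of the logistic map x -> 4x(1-x)
# start one billionth apart, yet the gap between them grows until it
# is as large as the system itself.

def max_divergence(x0: float, eps: float = 1e-9, steps: int = 60) -> float:
    a, b = x0, x0 + eps
    worst = 0.0
    for _ in range(steps):
        a = 4.0 * a * (1.0 - a)
        b = 4.0 * b * (1.0 - b)
        worst = max(worst, abs(a - b))
    return worst

# The tiny initial difference is amplified to macroscopic size.
print(max_divergence(0.2) > 0.1)  # → True
```

No measurement of the starting state, however precise, lets you predict the long-term outcome, which is exactly the estimation problem described above.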
Another topic addressed by chaos theory was the discovery of fractals and scale invariance, which is the concept that the behavior of a system, when plotted in a graph, looks similar on all scales.
Chaos theory is seen by some as the predecessor to complexity theory, and shares with it an appreciation for uncertainty and change, which is why I like to see it as the heart of the body of knowledge of systems.
The Body of Knowledge of Systems
There is not a single definition of complexity, and there is not a single theory covering all complex systems [Lewin 1999:x]. Scientists have been looking for fundamental laws that are true for all systems for ages, but so far they have been unsuccessful.
It seems reasonable to ask - exactly what is this thing called "complexity theory?" For although there are many definitions of CT [complexity theory], it has been suggested that there is no unified description. [Wallis 2009:26]
Each system is different, and lessons learned from past results are no guarantee of future performance. And so it appears that what we have is a collection of theories that are sometimes complementary, sometimes overlapping, and sometimes contradictory.
Furthermore, there are plenty of smaller studies that, each in their own right, have brought significant contributions to the field of complex systems. We could call them the eyes, ears, fingers and toes of the body of knowledge. For example, the work on dissipative systems gave us insight into spontaneous pattern-forming, and how systems can self-organize within boundaries. The work on cellular automata taught us how complex behavior can result from simple rules. From the study of artificial life we learned how information processing works in agent-based systems. Thanks to learning classifier systems we came to understand how genetic algorithms enable living systems to be capable of adaptive learning. And thanks to developments in social network analysis we now understand how information propagates among people in a network.
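To make the cellular-automaton point concrete, here is a sketch using Wolfram's elementary Rule 110, a standard example of complex behavior from simple rules; the grid size and rendering are my assumptions:

```python
# Sketch: complex behavior from simple rules. Each cell looks only at
# itself and its two neighbors, yet Rule 110 produces intricate,
# non-repeating patterns (it is even known to be Turing-complete).

RULE = 110  # the rule number encodes the output for all 8 neighborhoods

def step(cells):
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (center << 1) | right  # 0..7
        out.append((RULE >> pattern) & 1)
    return out

cells = [0] * 30 + [1] + [0] * 30  # a single live cell in the middle
for _ in range(10):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

A three-cell lookup rule is about as simple as a system can get, yet its behavior resists any shortcut prediction: you have to run it to see what it does.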
Despite the problem that the body parts don't match properly in some places, and that the figure looks uglier than Freddy Krueger in a tutu, the body of knowledge of systems is alive and kicking (see Figure 1). And, when applied to complex systems, we call it complex systems theory.
Figure 1 - The Body of Knowledge of Systems.
Are We Abusing Science?
In agile software development, we regularly hear references to scientific terms such as self-organization and emergence.
At the heart of complex adaptive systems theory's relevance to software development is the concept of emergence, and the factors leading to emergent results. [Highsmith 1999]
For example, an ant colony, the brain, the immune system, a Scrum team, and New York City, are self-organizing systems. [Schwaber, Beedle 2002]
Scrum is not a methodology, a defined process or set of procedures. It's an open development framework. The rules are constraints on behavior that cause a complex adaptive system to self-organize into an intelligent state. This is taken from Tom Hume's blog entry about Jeff Sutherland's presentation.
Is it justified to apply complex systems theory to software development? Do the complexity scientists themselves agree that words like self-organization and emergence not only apply to ant hills, the brain, and the immune system, but also to agile teams?
Some scientists have not-so-nice things to say about people like us borrowing their scientific terms. They say we use scientific terminology without bothering about what the words really mean. They say we import scientific concepts without any conceptual justification. And they say some of us are intoxicated with words, indifferent to what they actually mean [Sokal 1998:4].
OK, I cheated a little. Sokal's rant was not directed at agilists using (or abusing) complexity science, but at people in general. Still, the signal here is clear. To really hammer it in, here's another quote that hits closer to home:
Not unexpectedly, the complexity gurus are most upset with how complexity science terms are loosely, if not metaphorically, defined and tossed into our managerial discourse - one [guru] goes as far as to suggest that the book[s] offer many insights for managers, but one should simply black out all references to complexity science. [Maguire, McKelvey 1999:55]
Ouch!
Alright, I cheated again. This rant was directed at management literature abusing terms from complexity science, not agile literature. But... we are warned.
We have to be careful when carrying over terms from complexity science to other disciplines, including management and software development. For example, when a small issue in a software project unexpectedly turns out to have big consequences, it is all too easy to say that this is typical "chaotic" behavior of the system. But, without really understanding what chaos actually means from a scientific viewpoint, we might be making ourselves the laughing stock among complexity scientists around the world...
So, is the term self-organizing team an example of abuse of science?
And what about emergent design? Is that abuse of science as well?
Personally, I don't think so. But it may be wise to remain critical and skeptical at all times.
A New Era: Complexity Thinking
When you apply complex systems theory to software development and management, you are treating your organization as a system.
This is not new. System dynamics, originally developed in the 1950s (and not to be confused with dynamical systems theory), is a technique developed to help managers understand and improve their industrial processes. System dynamics was one of the first techniques able to show how even seemingly simple organizations can have unexpected nonlinear behaviors [Stacey 2000a:64]. System dynamics recognized that the structure of an organization, with its many circular, interlocking, and sometimes time-delayed relationships between organizational parts, is often a more important contributor to an organization's behavior than the individual parts themselves. System dynamics has helped managers to improve their understanding of business processes, while at the same time pointing out that the properties of an organization are often a result of the entire system, and cannot be traced back to individuals in the organization. System dynamics is not part of the body of knowledge of systems. Instead it is a tool, like a 60-year-old calculator, to make the body of knowledge interesting for managers who like using numbers.
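As an illustration of the kind of simulation system dynamics performs, here is a toy stock-and-flow model. The bug-backlog scenario, the four-week information delay, and all numbers are my assumptions, chosen only to show how a time delay in a feedback loop produces nonlinear, oscillating behavior:

```python
# Toy system-dynamics sketch: one stock (open bugs), a constant inflow
# of new defects, and an outflow that reacts to *delayed* information.
# The delay alone makes this simple system overshoot and oscillate
# around its equilibrium of INFLOW / GAIN = 20 open bugs.

INFLOW = 10.0   # new defects per week (assumed)
GAIN = 0.5      # fraction of the observed backlog fixed per week (assumed)
DELAY = 4       # weeks before managers see the real backlog (assumed)

backlog = [0.0] * DELAY  # assumed initial history
for week in range(20):
    observed = backlog[-DELAY]              # decisions use old information
    fixes = GAIN * observed                 # outflow reacts to what is seen
    backlog.append(max(0.0, backlog[-1] + INFLOW - fixes))

print([round(b, 1) for b in backlog[DELAY:]])
```

Nothing in the model is complicated, yet the backlog swings far above and below its equilibrium, which is precisely the "unexpected nonlinear behavior in seemingly simple organizations" that system dynamics was built to reveal.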
A newer but similar technique is called systems thinking, developed in the 1980s and popularized by Peter Senge's book The Fifth Discipline [Senge 2006]. It is about understanding how things influence each other within a whole. Systems thinking is a problem-solving mindset that views "problems" as parts of an overall system. Instead of isolating individual parts, thereby potentially contributing to unintended consequences, it focuses on cyclical relationships and non-linear cause and effect within an organization. Systems thinking is very similar to system dynamics, though the latter typically uses actual simulations and calculations in an attempt to analyze the impact of alternative policies objectively. Systems thinking is said to be more subjective in its evaluation of complex structures, because it has no clear definition of usage [Forrester 1992]. Its main contribution is getting people to concentrate on problematic systems instead of problematic people. I would say that systems thinking is like a 30-year-old camera that is able to give managers a more complete picture of their organization, from various interesting but subjective angles.
The study of complexity in social systems is called social complexity. Unfortunately, neither system dynamics nor systems thinking recognizes that social complexity cannot realistically be analyzed and adapted in a top-down fashion [Snowden 2005]. Simulating organizations with simplistic models, or drawing teams and people with bubbles and arrows, falsely suggests that managers can analyze their organization, modify it, and then steer it in the right direction. System dynamics and systems thinking recognize non-linearity, but they are still grounded in the idea that top management can somehow construct a "right" kind of organization that is able to produce the "right" kind of results. In their approach to applying the body of knowledge of systems to organizations, they are little more than 19th-century deterministic thinking in a 20th-century jacket [Stacey 2000a]. The 21st century is the age of complexity. It is the century in which managers realize that, in order to manage social complexity, they need to understand how things grow. Not how they are built.
I wrote a book, called Management 3.0, which applies complex systems theory in a way that does not contradict its own message of non-linearity, non-determinism and uncertainty. My Management 3.0 model applies complexity thinking. It assumes that managers cannot construct and steer a self-organizing team. Instead, such a team must be grown and nurtured. It acknowledges that a productive organization is not managed with models and plans. Instead, it must emerge through the power of self-organization and evolution. I like to see complexity thinking as the light which feeds all that grows. It is the energy source from which everything is derived and produced. Calculators and cameras are interesting. But they are useless without light.
Summary
Complexity science is a multi-disciplinary approach to research into systems, which builds on earlier achievements in the fields of general systems theory, cybernetics, dynamical systems theory, game theory, evolutionary theory, and chaos theory. Social complexity is the study of social groups as complex adaptive systems. And complexity thinking is about treating social groups as complex adaptive systems.
It is widely acknowledged that findings in complexity science can be applied to social systems, like software development teams and management, though it is still unclear how far we can go in copying system concepts from one discipline to another. But at the very least, software teams, team leaders, and development managers can be inspired to solve their problems by looking at other kinds of complex systems. Because history proves that the greatest advancements are made when ideas from one field are adopted and adapted in another field.
This article is an adaptation from a text out of the book "Management 3.0: Leading Agile Developers, Developing Agile Leaders," by Jurgen Appelo. The book is published by Addison-Wesley, in Mike Cohn's Signature Series, and is available from January 2011.
References
Anderson, David. Agile Management for Software Engineering. Upper Saddle River: Prentice Hall Professional Technical Reference, 2004.
Augustine, Sanjiv. Managing Agile Projects. Upper Saddle River: Prentice Hall Professional Technical Reference, 2005.
Forrester, Jay W. "System Dynamics, Systems Thinking, and Soft OR" Massachusetts Institute of Technology, August 18, 1992
Gleick, James. Chaos. Harmondsworth Eng.: Penguin, 1987.
Highsmith, Jim. Adaptive Software Development. New York: Dorset House Pub, 1999.
Larman, Craig. Agile and Iterative Development. Boston: Addison-Wesley, 2004.
Lewin, Roger. Complexity. Chicago: University of Chicago Press, 1999.
Maguire, Steve, and Bill McKelvey. "Complexity and Management: Moving from Fad to Firm Foundations". Emergence, Vol. 1, Issue 2, 1999.
Mitchell, Melanie. Complexity: A Guided Tour. New York: Oxford University Press, 2009.
Richardson, K.A. "Systems theory and complexity: Part 1" E:CO Vol. 6 No. 3 2004 (a)
Schwaber, Ken and Mike Beedle. Agile Software Development with Scrum. Englewood Cliffs: Prentice Hall, 2002.
Senge, Peter. The Fifth Discipline. New York: Doubleday/Currency, 2006.
Snowden, David. "Multi-ontology sense making: a new simplicity in decision making" Management Today. Yearbook 2005, Vol 20
Sokal, Alan and Jean Bricmont. Intellectual Impostures: Postmodern Philosophers' Abuse of Science. Economist Books, 1998
Stacey, Ralph D., et al. Complexity and Management. New York: Routledge, 2000 (a).
Waldrop, M. Complexity. New York: Simon & Schuster, 1992.
Wallis, Steven E. "The Complexity of Complexity Theory: An Innovative Analysis" E:CO Vol. 11, Issue 4, 2009
© 2011 Jurgen Appelo
This article was originally published in the Spring 2011 issue of Methods & Tools