Aug 4, 2019
My guest today is IT consultant and author Jeff Sussna. Jeff’s liberal arts background has given him a unique perspective on digital transformation. In this episode, we explore the relevance of cybernetics to today’s complex design and DevOps challenges.
Jorge: So Jeff, welcome to the show.
Jeff: Thanks for having me. It’s great to be here.
Jorge: For folks who are not familiar with you and your work, how do you describe what you do?
Jeff: Well on a basic mechanical level, I founded and lead a consulting agency in Minneapolis. Our focus is helping organizations learn how to move fast without breaking things, and we do that through the entire digital product lifecycle from design through product management all the way to development and operations. It’s about bringing together agile and devops and design thinking. We do coaching. We do workshops. A lot of it is about helping people understand: what are we really trying to accomplish when we do things like agile?
I go into a lot of organizations where they do Scrum, for example, and they may do it reasonably well, and they have a bunch of agile activities going on, but they're not necessarily really getting where they want to go. And typically that's because they don't fully understand where they're supposed to be going and how agile is supposed to actually help them get there. And it's interesting, because the way I got there was a somewhat unusual path, and I think that path really informs my work and my approach to it.
Jorge: I am not familiar with your backstory, so you’ve said that and now I’m completely curious.
Jeff: That was intentional. I figured I’d give you a chance rather than just rambling on for 20 minutes give you a chance to say, “Now would you like to ramble on for 20 minutes and tell us about your background?”
Sure, so my background is actually liberal arts. I studied visual arts, anthropology, and political science in college, and one day my advisor suggested that I should take a class that had absolutely nothing to do with what I was studying. And so I took an artificial intelligence programming class. This was back in the early '80s, the first or maybe second golden age of AI. And I became captivated, both by that and even more so by programming and systems programming. And I graduated from college and managed to worm my way into the software industry.
I’ve been there for the last 30 years. I’ve built systems. I’ve led organizations across the entire development, QA, and operation spectrum. But I always had this background of being in artsy liberal arts person. And I found myself thinking about things a little bit differently and I can never put my finger on it until I read Tim Brown’s Change by Design book, which is a popular introduction to design thinking and went, “Oh, that’s it! That’s me. That’s how I naturally think about things.” At the same time, cloud computing was starting to come about. And when I put those two things together what I realized is that service — because we were talking about it as a service and software as a service and things like that — service and the human-centered design that’s part of service is really at the heart of everything we do.
And so that’s really been behind my approach to my work ever since then. And one of the things that happened when I was in college and I was reading about artificial intelligence and playing with it was I read someone, some little paragraph somewhere, about some guy named Norbert Wiener who back in the ’40s had invented something called control theory that was peripherally related to AI. And I kind of went okay, that’s interesting and didn’t make much of it and went about my business.
And then about five or six years ago, I happened to be in the library one day and I saw a book called Dark Hero of the Information Age, which was a biography of Norbert Wiener. I thought, “Oh, I remember that guy, you know, I should check that out. I should find out more about him.” So I read it and I was instantly captivated. And I was introduced to this world of cybernetics, which was the thing that he was really responsible for.
And cybernetics is really interesting because it was a very big deal in the '40s, '50s, and into the '60s. Wiener wrote a book about cybernetics which is about 50% post-grad level math; you literally can't read it unless you're a math major. But from what I've read, apparently in the '50s every college student in America was walking around with a copy of that book under their arm. So it was a very big deal, and then for a variety of reasons it fell out of favor and disappeared and was completely forgotten.
The irony is that anytime you say a word that begins with cyber — you know, cyber-terrorism, cyber-security, cyberspace — the cyber comes from cybernetics. And cybernetics is really at the heart and the origin of computing, and actually at the heart and the origin of information. When we talk about information architecture and information theory, we have to talk about cybernetics. And it gives kind of a different flavor to what information is and how we work with it.
So what is cybernetics? Cybernetics is the idea that in complex systems — particularly the kinds of systems we find in the natural world and in the social world, whether it be cities or economies or companies or markets — that control has to be adaptive. It has to be based on listening and responding as well as just telling people or things what to do.
If you think about the most basic cybernetic device, it's a thermostat. A thermostat doesn't actually control the temperature of the air in the room directly. What it does is, you could say, listen: it detects what the temperature is, it detects the fact that the temperature isn't what it's supposed to be, and then it tells the furnace to do something about it. And so the furnace pumps warm air into the room, the room warms up, the second law of thermodynamics kicks in, the room starts to cool down again. The thermostat goes, "Uh oh, things aren't as they're supposed to be! We better do something about it."
So you could say that the thermostat and the furnace actually have no control whatsoever over the temperature of the air in the room. Only the second law of thermodynamics does that. But they're continually having a relationship where they're adjusting things. And the way that works is based on a principle that Wiener developed, or identified, called feedback. And we use that word all the time. But feedback has a very specific meaning, which is information about the gap between actual and expected. So if the thermostat is set to 72 degrees and the temperature in the room is actually 71 degrees, the thermostat gets some information that says, well, it's one degree colder than it's supposed to be.
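That loop is simple enough to sketch in a few lines of Python. This is a toy model, not anyone's real control code; the setpoint, furnace output, and cooling rate are invented for illustration:

```python
def thermostat_step(temp, setpoint=72.0, furnace_heat=0.5, heat_loss=0.2):
    """One tick of a toy thermostat loop.

    Feedback, in Wiener's sense, is information about the gap
    between actual and expected: here, setpoint - temp.
    """
    error = setpoint - temp      # the feedback signal
    if error > 0:                # room is colder than it should be
        temp += furnace_heat     # thermostat tells the furnace to act
    temp -= heat_loss            # second law: the room always cools
    return temp

temp = 68.0                      # start with a cold room
for _ in range(30):
    temp = thermostat_step(temp)
# temp now hovers within a fraction of a degree of the 72-degree setpoint
```

Note that nothing in the loop sets the temperature directly; the room only ever drifts toward the setpoint because the gap keeps getting measured and acted on.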
So we have a tendency to think about information as the thing, right? We architect it, we store it in databases, we pass it back and forth. But from a cybernetic perspective, information doesn't have any real meaning aside from the context in which it's happening. And its purpose is not just to be a thing; its purpose is to help you understand what it is you need to do.
One of the things that I learned from studying art, and also to some degree just from my own life, is that mistakes happen. Things don't always go the way we expect them to. And that's perfectly fine. That doesn't prevent us from getting to a good place if we can have kind of a dance, and to some degree give up the idea that we're fully in control, and instead have a relationship with our world of, "What is it that you're telling me, and what do I need to do based on that?"
So it’s much more relational. I think that companies are beginning to discover that. The reason they’re reaching out for things like agile is that they’re realizing that they can’t control the markets anymore the way they used to, so they have to have the ability to understand and respond to situations — environments — over which they have less and less control and ability to predict.
Jorge: You talked about Wiener's book and how fifty percent of it is post-grad level math, and that brings to mind the idea that some of this stuff can be complicated for folks. And hearing you describe it this way, it sounds more accessible than other introductions I've heard before to the subject — and more relevant.
I’m hearing you say that and thinking, “Yeah, definitely.” I mean that maps to my experience of reality; the fact that if you’re going to act you have to get a read on your surroundings and then you must have some kind of model where there’s an objective that you’re going towards and you need to somehow compute at some level the difference between where you are and where you want to be and adjust your direction.
So the question is, when presented at that level it is kind of obvious. Why do you think it fell out of favor?
Jeff: Well, part of it was Wiener's fault. He was a very eccentric person. But let me take a step back first and say that Wiener's first book was called Cybernetics: Or Control and Communication in the Animal and the Machine. That's the one that's full of math. He wrote another book called The Human Use of Human Beings, which is much more accessible and presents the concepts of cybernetics in a much less technical way. The amazing thing about it is that it also predicts many of the trends that we're seeing now in terms of the dangers of a computer-centric society. It's quite an amazing book given that it was written something like 60 years ago.
But aside from that, I think the reason it fell out of favor was to some degree because it's too simple and too elegant. Basically, what it's talking about is circular causality. If you really go beyond the surface of "we have a thermostat and we want to control the air," and take your example of "I have an objective and I need to make sure I get to my objective," the real implication of cybernetics is that you're also adjusting your objective. Right?
If you look at things like Lean Startup and the whole idea of a pivot, right? Step one of Lean Startup is: let's make sure that we're accurately getting where we want to go. But step two of Lean Startup is: let's make sure we're trying to get to the right place. That is not exactly a 20th-century Western approach to thinking about things. There have been interesting things written about the relationship between cybernetics and systems thinking and more Eastern philosophical approaches. So I think, to be honest, to some degree it just blew people's minds and the world wasn't ready for it.
And what I’m seeing now is that maybe the world is starting to get ready for it. It is beginning to be sort of culturally resuscitated again and people are starting to become interested in it again and going, “Oh, maybe there’s actually something here.”
Jorge: I am very intrigued by this notion of the relationship between systems thinking and Eastern philosophy. You have written very compellingly about this, and I'm wondering if you can delve a bit more into that connection.
Jeff: Well, that’s a big one. Well, there’s… I’ll actually refer to a very interesting book by Joanna Macy called General Systems Theory. Now I’m not remembering the name; it’s something like Buddhism and General Systems Theory. And she is a system thinking practitioner. She’s also a Buddhist teacher and practitioner. And she talks a lot about the Buddhist view of interdependence, which on one level means that the reason that you and I are here now is because of a whole set of things that happen that brought us to this place.
You know, if Wiener hadn't thought about feedback systems, and if Claude Shannon hadn't figured out how to transmit information reliably over a noisy channel, there would be no such thing as information theory, there would be no such thing as computers, there would be no such thing as binary logic, there would be no such thing as Zoom, and you and I wouldn't be sitting in different cities talking to each other.
So on one level it means that the causality behind what you and I are doing right now is much richer and much larger and much more complex and intertwined and tangled. So to some degree your karma and my karma and Norbert Wiener’s and Claude Shannon’s karma are all intertwined with each other.
On another level, a deeper level, what it means is that when I think about myself and who I am, that is defined as much by my relationship to you, and my relationship to Apple Computer, who made the computer that I'm using, as it is by my idea of who I am internally, separate from the world. This whole idea of you and I and the other things that we see around us being fundamentally separate from each other is, according to Buddhism, A) an illusion and B) the cause of suffering — because it is an illusion. So you could say that you caused my experience and I caused your experience as much as each of us causing our own experience.
So there is a circularity to how and why things happen, which is a very Buddhist view, one that comes very much out of an Indian tradition and is very compatible with a cybernetic view. Particularly when you go beyond the idea that cybernetics is just about, "How do I control things out there?" to the notion that what it's really about, more fundamentally, is, "How is it that I dance with this relationship I have with the world, where myself and my environment are creating and driving each other?"
Jorge: The image that comes to my mind hearing you describe this, which is an image that I believe comes from the Buddhist tradition, is this notion of the Net of Indra, where there are jewels that are all interconnected and all the jewels reflect the other jewels and you can’t intervene in one of them without impacting the others.
And in this notion of systems thinking, one of the distinctions I make between that worldview and other approaches is that you're taking in as holistic a perspective of the situation as possible, whereas a more reductionist approach tries to divide so that we can control.
It is a completely different approach, and I’m wondering — just because this subject can get fairly abstract fairly quickly — if there are ways that that impacts your approach both to how you do your work and perhaps the work that you do for clients?
Jeff: Very much so. And it actually has very practical down-to-earth ramifications both for IT and also I think for design. One of the things that we’ve begun to learn in IT is that as we go to the cloud, the systems we manage become more complex. Which means that the parts become more and more intertwined with each other and I think actually the Net of Indra is a wonderful metaphor for that, where it’s no longer possible to say, “Well there’s a database over here, and there’s a network over here, and we have an ERP application over here, and a website over there, and they’re all independent from each other, and we can manage them separately.” It doesn’t really work anymore; they all impact each other.
And one of the practical ramifications of that is that when things break, what we typically try to do is find the "root cause." What is the one thing that was the original source of the problem? And in complex systems, you can't actually do that. What you find are contributing causes: the problem happened because of A and B and C and D coming together. And if any one of them had not happened, or had happened differently, or a little more slowly, or at a different time of day, either the outage wouldn't have happened or it would have been more or less severe.
And this perspective is actually really influenced by work that people have done in industrial safety, people who work on things like airplane accidents, nuclear power plant meltdowns, that kind of thing. And they've come to the realization that this whole idea of identifying human error (the train driver fell asleep, that's why the train crashed, therefore we need to automate the train and get rid of the drivers) doesn't actually work. You need to take a much more holistic perspective on how all of the pieces fit together.
Why did the train driver fall asleep? Well, there are lots of technical reasons, there are political reasons, there are bureaucratic and financial reasons, and so on and so forth. And you have to look at them as a whole, and you have to understand that when you fix one thing, you cannot fully predict what changes will ripple through the system. So you might fix one thing and break another, and there is no way to guarantee that you won't do that.
I also think that has very profound implications for design right now, because design is going through this process of grappling with ethics. That we thought Facebook and Twitter would be the most wonderful thing in the world, and what’s happening instead or in addition, perhaps, is that they are enabling manipulation of democratic processes and online hate and bullying and so on and so forth.
And there’s an idea that as designers, you have a responsibility to design systems that don’t cause harm. The problem is that what you’re trying to design are very, very complex systems and on some level, while it’s important to think in terms of doing good and not doing harm, I think you also need to confront the inevitability that you will do harm on some level that there will be unintended consequences.
And what’s more interesting — and to me where the cybernetic approach comes in — is you could say that doing harm is is a very compelling version of there being a gap between actual and desired, right? We wanted to build a system that would help people collaborate better and instead we built a system that’s starting to help people dislike each other more.Let’s assume that’s going to happen and let’s look for it and let’s design for it in a much more continuous way.
Jorge: We’re recording this in the second week of July. And I bring up that that time stamp because next week, we will be celebrating the 50th anniversary of the lunar landing, which in my mind is kind of the apex of big pre-planned projects. People refer to things that are hard to do as moonshots. Hard to do but achievable, right?
And because this is happening next week, I've been watching documentaries and reading books and listening to podcasts on the subject — I'm just fascinated by it. And one of the things that's come up over and over again is that many of the people who were part of that project have expressed the belief that they would not have been able to successfully land people on the moon if Apollo 1 hadn't catastrophically burned on the launch pad.
That accident was kind of a jolt that the program needed to bring up all these flaws that they had not accounted for. And they completely redesigned the command module as a result. And unfortunately, if three astronauts hadn't died in that accident, they probably wouldn't have had the shock that the program needed in order to, pardon my French, get their asses in gear, basically.
Jeff: Well, that’s a pretty provocative statement.
Jorge: It’s not mine. It’s… I was very surprised to hear that, but it’s a feeling that I’ve heard expressed several times by these folks that the accident is what actually got them to the Moon.
Jeff: So that’s a very interesting… It is a provocative statement, whether it’s yours or not, and I’ll give you a couple of responses to it.
The first one is that one of the things I think is mostly healthy — though there is a little misunderstanding in it — is that the whole idea of moving fast and breaking things is being met with new skepticism. Right? My business is predicated on the idea that it's possible to move fast without breaking things. And I teach people how to do that.
I think the way that you do that is that you break things in much smaller units and much earlier in the process, when it's safer to do it. I am certainly not recommending accidents where people die as a good learning mechanism. However, I will go out on a limb. I had some things happen in my life when I was younger that felt like potentially large failures at the time. And when I look at what happened as a result, my life got tremendously better. One of them was that I had cancer when I was 20 years old.
I was at college; my first year had been sort of a mess, but I'd gotten my act together and I was doing very well. I was very happy. Then I got very sick. I left college. I went home. I went through chemotherapy treatments — this was back in the early '80s, when chemotherapy was really awful — I spent time in the oncology wing of the University of Pennsylvania hospital, watched a lot of people die from leukemia, faced the prospect of my own death.
It was the best thing that ever happened to me. It gave me a level of resiliency in my life. And it's funny, because at some point they said, "Okay, you're done with your treatment, you're in remission. Now you can go on with your life." Two weeks later, I was back in school. People were kind of freaked out, like, "Who is this guy? He was gone and now he's back. What does that mean?"
But it was a very positive thing for me, and one of the things I've learned is that we have this idea of, well, "Fail early, fail fast." Is that really good? What we really want to do is learn. And it is true that what we really want to do is learn, but I think we have to deal with the fact that one of the ways we learn is by failing. By getting it wrong.
Jorge: I hope my comment about the astronauts didn't come across as callous. From what I've heard them say, I don't think anyone involved in that program wished for that to happen, much as what you are relating is an experience that, fortunately, I've not had myself, but I've read of folks saying, "You know, I almost died and it was the best thing that happened to me." Because somehow it forces the… It's like a focusing force, right?
And you’re talking about learning, which as I understand it, an important part of systems thinking and cybernetics, right? This idea that you’re adjusting based on feedback. There is implicit in that the idea that the system is somehow modified as a result of of the adjustment. And I’m wondering, just to bring this home to folks, if there are any mechanisms that you yourself use to either formalize that learning or to capture it or to integrate it into your life in a kind of a structured way?
Jeff: I think it’s a couple of things. One is I like to joke that I should offer a fixed fee consulting service where all I do is walk around your organization and say the same sentence over and over again, which is, “Make your work smaller.” Give yourself more opportunities to get feedback, to learn, to find out that you’re wrong in smaller and safer ways.
I think the other part of it — and this is one that I think that organizations that are trying to adopt agile and design thinking and DevOps and Lean Startup and so on and so forth really struggle with — is it requires a certain level of trust.
Ranulph Glanville, who was a designer and a cyberneticist, made a really fascinating comment when he said that the controller is controlled by the control. In other words, if you think you're in charge, if you think you're in control at whatever level, you're really not. And I think that the more we can let go of thinking that we are, and also of thinking that we need to be, the more we can discover that we can actually get where we want to go in a way that feels sloppy but can be very efficient.
I’ll give you a straightforward example from my experience. The first time I worked with an offshore and group doing development, I was told by the US representative as I started the project, he said you have to give them a really good requirements. I said yeah, I’m good at that. I know I’m a good writer blah blah blah.
And of course, I was way too busy, so I gave them really lousy requirements. Kind of poetic. And the initial version of the software they gave me was about 70 degrees off from what I wanted. I got really annoyed for about five minutes, and then I realized, you know, it's my own fault. If you looked at the requirements I gave them, you could imagine how they would get that result. So I sent them this long laundry list of everything that was wrong.
And 48 hours later, they came back with something that was 20 degrees off from what I wanted. So I sent another laundry list, and 24 hours later it was about 3 degrees off. In other words, it was really good. And afterwards, I sat back and I thought, "Okay, well, how long did the process take? How much work did it take? And how good was the output?" And I realized it was really good and it was really fast and it was really efficient.
It felt very sloppy at the time, but it actually was very precise. And I realized that this was a very powerful way of working, and that it was really at the heart of what agile is actually about: that you can get where you want to go, even under uncertainty, in a way that feels very bumpy; but if you can relax into it, it can be extremely effective. I think the relaxing into it is really hard for all of us.
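Jeff's story is the thermostat loop again: each laundry list is a feedback signal that shrinks the gap between actual and desired. A toy sketch in Python makes the geometric convergence visible (the 70% fix rate and the 3-degree "good enough" threshold are invented for illustration, not anything from the anecdote):

```python
def feedback_round(gap, fix_rate=0.7):
    """One laundry-list cycle: each round of feedback closes
    roughly 70% of the remaining gap (a made-up rate)."""
    return gap * (1 - fix_rate)

gap = 70.0            # "70 degrees off" on the first delivery
history = [gap]
while gap > 3.0:      # stop once it's "really good"
    gap = feedback_round(gap)
    history.append(round(gap, 1))
```

With these made-up numbers the gap goes 70, then 21, then 6.3, then 1.9; the exact figures don't matter, only that each cheap round multiplies the remaining error down instead of trying to specify everything perfectly up front.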
Jorge: I agree …
Jeff: So if you wanted to say in a nutshell what cybernetics is about at its deepest heart, I think it's about working in smaller units and relaxing into it.
Jorge: I love that, Jeff. That's actually a great place to wrap it up. We didn't get to your book, but I do want to call it out: you wrote a fantastic book for O'Reilly called Designing Delivery, which is about these subjects, and I'm going to link it in the show notes. Where can folks follow up with you? What's the best place to send them to?
Jeff: They can find me on Twitter at Jeff Sussna, or they can find me on my company website at sussna-associates.com.
Jorge: Fantastic, so I'm going to include those in the show notes as well. I am thrilled that we had the opportunity to have this conversation. I think it's a very important subject, and I hope it's not the last time that you and I get to catch up on this.
Jeff: Agreed, this has been great. It’s been really enjoyable. Thanks a lot. Appreciate it.