This is Floyd Marinescu
I am here with Jim Coplien and Bob Martin at the JAOO Conference, and here we have 2 very interesting, divergent opinions on the value of TDD. So let's open up the floor and let each one have a couple of minutes to make his case. Let's hear you guys talk about it!
Bob Martin: First thing I need to say is: I am sitting here next to one of my heroes. I read Jim's book in 1991-1992; it changed the way I thought about software, and changed the way I thought about C++ in particular, so it's a great honor for me to be here. I think we have a disagreement - I'm not sure, possibly it may be a difference in perspective - but my thesis is that it has become infeasible, in light of what's happened over the last 6 years, for a software developer to consider himself "professional" if he does not practice test driven development.
Jim Coplien: Well, it may be good, because you did this at your keynote yesterday: you said what you mean by "test driven development". I have adopted a very strong position against what the XP community in particular is calling test driven development. And I have audited this against a lot of tutorials at about 4 conferences in the past 6 months, and they give a very consistent story on what they mean. Here, it was a little bit different yesterday, so maybe, for the sake of making this a meaningful conversation, you can quickly reiterate so we are on the same page.
B: So I have 3 laws of test driven development. The first one is: you are not allowed to write a line of production code until you have first written a failing unit test. The second is: you are not allowed to write more of a unit test than is sufficient to fail. And the third is: you are not allowed to write more production code than is sufficient to pass the currently failing test.
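A minimal red/green sketch of the cycle Bob describes, in Java; the `WageCalculator` example and its numbers are invented for illustration, and a plain exception stands in for a test framework such as JUnit:

```java
// Step 1 (red): the test is written first, before WageCalculator
// exists; at that point the code does not even compile, which
// counts as a failing test.
class WageCalculatorTest {
    public static void main(String[] args) {
        WageCalculator calc = new WageCalculator(20.0); // $20 per hour
        if (calc.payFor(8) != 160.0)
            throw new AssertionError("8 hours at $20/h should pay $160");
        System.out.println("test passed");
    }
}

// Step 2 (green): only enough production code to make the test pass.
class WageCalculator {
    private final double hourlyRate;

    WageCalculator(double hourlyRate) {
        this.hourlyRate = hourlyRate;
    }

    double payFor(int hours) {
        return hourlyRate * hours;
    }
}
```

The point of the discipline is the ordering: the test exists and fails before `WageCalculator` is ever written, and the production code stays as small as the test demands.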
J: Per se, the main concerns I have about TDD are not problematic with respect to what you've just said in isolation, so if it's no more and no less than that, we may not have a big disagreement. My concern, then, comes out of doing broad work with a lot of clients, and a little bit of interaction with other consultants and other scrum masters who have seen these things happening in their projects. And we've seen 2 major problems: one is that use of TDD without some kind of architecture or framework into which you're working - which was very strongly Kent's original position: you use TDD to drive your architecture - leads to a procedural, bottom-up architecture, because the things you are testing are units.
We just had a discussion upstairs about "is TDD the same as unit testing?" Well, no, it's a little more, but unit testing was a great idea in Fortran, when you could build these layers of APIs and the units of organization of the software were the same as the units of testing; but today the units of organization of the software are objects, and we're testing procedures, so there is a little bit of a mismatch. Now, if you are using the procedures to drive your architecture, I think you are trying to build a 3-dimensional structure from 2-dimensional data, and you end up going awry. And one of the things we see in a lot of projects is that projects go south on about their 3rd sprint and crash and burn because they cannot go any further; they have cornered themselves architecturally. And you can't refactor your way out of this, because the refactoring has to be across class categories, across class hierarchies, and you no longer have any assurances about having the same functionality.
The other problem we've seen is that this destroys the GUI, and this is what Trygve [Reenskaug] and I talk a lot about, because you have this procedural architecture kind of in a Java class wrapper; you are no longer driving the structure according to domain knowledge and the things that are in the user's conceptual model of the world, which is where object orientation came from. I mean, even Kent has very often said: "you can't hide a bad architecture with a good GUI." The architecture will always shine through to the interface, and I strongly believe that, and that is why I believe we need something in the infrastructure that gives you a picture of what the domain model is, out at the interface. Then, if I want to apply Uncle Bob's 3 rules, I probably don't have a problem with that, but I want a starting place that captures this other dimension, which is the structural dimension.
B: Alright. But you do not accept the thesis that the practice of test driven development is a prerequisite to professional behavior in 2007.
J: I absolutely do not accept that.
B: OK. So we can come back to that one, because I think professionalism is an interesting topic, but before we do that: there has been a feeling in the Agile community since about '99 that architecture is irrelevant, that we don't need to do architecture, that all we need to do is write lots of tests, do lots of stories and quick iterations, and the code will assemble itself magically, and this has always been horse shit. I think even most of the original Agile proponents would agree that was silliness. I think if you went and talked to Kent now, he would be talking about what he always talked about: metaphor, whatever the heck that was.
J: And in fact he says this in "XP Explained": on page 131 of his book he says: "Yes, do some up-front architecture, but don't knock yourselves out".
B: Sure. OK. But now let me come back and throw a different light on this. I think architecture is very important; I've written lots of articles and books about architecture, I am the big architecture freak. On the other hand, I don't believe architecture is formed out of whole cloth. I believe that you assemble it one bit at a time, by using good design skills and good architectural skills, over the weeks and months of many iterations. And I think that some of the architectural elements you create, you will destroy; you will experiment in a few iterations with different forms of architecture. Within 2 or 3 iterations you will have settled into the architecture you think is right, and will then be entering a phase of tuning. So my view is that the architecture evolves; it is informed by code that executes, and it is informed by the tests that you write.
J: I do agree that architecture evolves, and I do believe it's informed both by the code that you write and, maybe even earlier, by use cases that come in, which inform you about things relating to scope and other relationships; but if you try to do things incrementally, and do them literally incrementally, driven by your interaction with the customer, without domain knowledge up front, you run the risk of doing it completely wrong.
I remember talking with Kent once, back in the early days when he was proposing TDD - this was in the spirit of YAGNI and doing the simplest thing that could possibly work - and he says: "OK, let's make a bank account, a savings account." What's a savings account? It's a number, and you can add to the number and you can subtract from the number. So what a savings account is, is a calculator. Let's make a calculator, and we can show that you can add to the balance and subtract from the balance. That's the simplest thing that could possibly work; everything else is an evolution of that.
But if you do a real banking system, a savings account is not even an object, and you are not going to refactor your way to the right architecture from that one. What a savings account is, is a process that does an iteration over an audit trail of database transactions - deposits, interest gatherings, and other shifts of the money. It's not as if the savings account is some money sitting on a shelf in a bank somewhere, even though that is the user perspective, and you've just got to know that there are these relatively intricate structures in the foundations of a banking system, to support the tax people and the actuaries and all these other folks, that you can't get to in an incremental way. Well, you can, because of course the banking industry has come to this after 40 years. You want to give yourself 40 years? It's not agile.
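Jim's picture of a real savings account might be sketched in Java as follows; all names and transaction kinds below are invented. The balance is not a number stored in an object - it is derived by a process that iterates over the audit trail of posted transactions:

```java
import java.util.List;

class AuditTrailSketch {
    // The "savings account" is a process, not a stored number: the
    // balance is derived by folding over the audit trail.
    static long balanceCents(List<Transaction> trail) {
        long balance = 0;
        for (Transaction t : trail)
            balance += t.amountCents;
        return balance;
    }

    public static void main(String[] args) {
        List<Transaction> trail = List.of(
                new Transaction("deposit", 10000),
                new Transaction("interest", 150),
                new Transaction("withdrawal", -4000));
        System.out.println("balance = " + balanceCents(trail) + " cents");
    }
}

// One immutable entry in the audit trail; the amount is signed
// (deposits and interest positive, withdrawals negative).
class Transaction {
    final String kind;
    final long amountCents;

    Transaction(String kind, long amountCents) {
        this.kind = kind;
        this.amountCents = amountCents;
    }
}
```

Starting from a mutable "calculator" balance field and refactoring toward this log-derived structure is exactly the cross-cutting change Jim argues you cannot reach incrementally.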
So you want to capitalize on what you know up front, and take some hard decisions up front, because that will make the rest of the decisions easier later. Yes, things change; yes, architecture evolves; and I don't think you'd find anyone who would say "put the architecture in concrete". I also do not believe in putting the code in place up front - that is, the actual member functions. You put the skin, you put the roles, you put the interfaces that document the structure of the domain knowledge. You only fill them out when you get a client who is willing to pay for that code, because otherwise you are violating Lean. So you do things "just in time", but you want to get the structure up front; otherwise you risk driving yourself into a corner.
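What "putting the skin and the roles in place" could look like in Java - every name below is invented for illustration. The role interfaces record the structure of the domain up front; the method bodies are the part that gets filled in just in time:

```java
// A driver exercising the roles once they have been filled in.
class RolesSketch {
    public static void main(String[] args) {
        CheckingAccount account = new CheckingAccount();
        account.credit(5000);
        account.debit(2000);
        System.out.println("balance = " + account.balanceCents() + " cents");
    }
}

// Role: money can leave this thing.
interface MoneySource {
    void debit(long amountCents);
}

// Role: money can enter this thing.
interface MoneySink {
    void credit(long amountCents);
}

// The concrete class claims its roles up front; the bodies are
// written only when a client pays for that behavior.
class CheckingAccount implements MoneySource, MoneySink {
    private long balanceCents;

    public void debit(long amountCents)  { balanceCents -= amountCents; }
    public void credit(long amountCents) { balanceCents += amountCents; }

    long balanceCents() { return balanceCents; }
}
```

The interfaces alone already document which roles exist in the domain and how they relate, which is the structural dimension Jim wants in place before test-driving begins.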
B: So I would say that a little differently, and take exception to some of it. I would not very likely fill in the interfaces with abstract member functions or defunct member functions. I might create objects that fill the place of interfaces. So, in Java terms, I might have an interface Something with nothing in it, but I am not going to load it with a lot of methods that I think might be implemented one day. That is something I am going to let my tests drive, the requirements drive, and I am going to be watching it like a hawk to see if there is any kind of architectural friction that would cause me to split that interface.
J: But the problem is: that's like saying that words have meaning apart from any definition. The fact that I call something a mule, without saying what a mule is, doesn't make it a mule. Like Abraham Lincoln said, "Calling a mule an ass doesn't make it one." So the thing that gives meaning to stuff is the member functions, the semantics. You don't want to go crazy and you don't want to be guessing, and here is where I agree with Kent. He says in the "XP Explained" book: "You don't want to be guessing", and that is true. But I do want to assert what I know, and there are some things you just know about the structure of a telecom system or a banking system. You know that you don't build a recovery object. I was on a restructuring project at a large telecom company once, where they were redoing a toll switch using object oriented techniques and modern computer science techniques, and I got assigned to work with the guy who was making the recovery object. Well, this is ludicrous - recovery isn't an object - yet his superficial knowledge of the domain let him do that. If you get down to understanding what its member functions are, then you will see this isn't even an object. So you ask: "How do I know it's not an object? What are its member functions?" "Uh... to recover." "Great. That is a lot of help." Actually, I think there are people who have capitalized on this, and it's now called SOA. That is the danger.
You want to have something there to give the object meaning.
B: OK. And I would even agree with that. You need something there to give the object meaning. I am going to be really minimal about that.
J: Me too. No disagreement.
B: And then I am going to let executing code inform my future decisions, so I am not going to create a massive architecture, or a huge system based on speculation.
J: That is right. No disagreement.
B: So back to the beginning: how long would you spend before you started writing executable code? Let's say, on a system that will eventually wind up being two million lines of code?
J: So, 2 million is, in my experience, pretty small; I am working with hundreds of millions. Before the first executing code... it depends a lot on the individual system, but let's say I were building a simple telecom system. What I would probably do, say I am doing it in C++, is have at least constructors and destructors in place and be able to start to wire up the important relationships between the objects and...
B: Would you have tests, testing those wirings?
J: I would have tests for those wirings. An obvious test is to make sure, for example, that when the system comes up and goes down, the memory is clean. Half an hour.
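Jim's wiring test is a C++ idea (constructors, destructors, clean memory), but its shape can be sketched in Java as a smoke test that brings the system up, checks that the important relationships between objects are actually connected, and tears it down again; the switch and router names are invented:

```java
class WiringSmokeTest {
    public static void main(String[] args) {
        TollSwitch sw = new TollSwitch();
        sw.start();
        // The wiring assertions: the router exists and points back
        // at the switch that owns it.
        if (sw.router() == null || sw.router().owner() != sw)
            throw new AssertionError("wiring broken");
        sw.stop();
        if (sw.router() != null)
            throw new AssertionError("teardown left wiring in place");
        System.out.println("wiring ok");
    }
}

class TollSwitch {
    private CallRouter router;

    void start() { router = new CallRouter(this); }
    void stop()  { router = null; }

    CallRouter router() { return router; }
}

class CallRouter {
    private final TollSwitch owner;

    CallRouter(TollSwitch owner) { this.owner = owner; }

    TollSwitch owner() { return owner; }
}
```

Nothing here tests behavior yet; it only verifies that the structural relationships come up and come down cleanly, which is the half-hour test Jim describes.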
B: Excellent. So where is our disagreement? Perhaps our disagreement is on the notion of TDD and professionalism. That was the second part.
J: I think that is a separate disagreement, but maybe we can put this one to rest; this is nice. The thing I want to make clear for the audience, though, is that when I run into people who are doing things right, who avoid the kinds of problems we talked about earlier, it's not TDD out-of-the-book or TDD out-of-the-box. People have found a way to move to what Dan North now calls BDD, for example, which I think is really cool (if you ignore the RSpec part and all the stuff that is kind of dragging it back to too low a level).
So there are a lot of people doing the right thing, and my concern is that they are calling this good thing TDD, and then people are going to buy books, look up TDD, and find this old thing which says "architecture only comes from tests" - which I have heard four times in tutorials in the past 6 months - and that is just, like you say, horse shit. But now, on to the professionalism issue: how would you know a professional if you saw one?
B: They practice TDD? (smiling)
J: "Professional," to me, is just someone who makes money for doing a job in that area.
B: Yeah, I know, and I am going to push on that one, because I think that something our industry has lacked is a standard of professionalism.
J: We'll take your definition as a starting point and then we can talk about that.
B: But that is not actually my definition, I was joking. I think that nowadays it is irresponsible for a developer to ship a line of code that he has not executed in a unit test, and one of the best ways to make sure that you have not shipped a line of code that you have not tested is to practice TDD.
J: I do disagree with that. Now, I think there is something deeper that is important, and let me attack this by example, as an example of something I could do as an alternative. I could wave my hands and say a lot of things about code inspections or pair programming - those are good and probably have more value, but that's kind of an independent discussion. Let me give you something that I think hits the nail on the head even more importantly. Let's look at what a unit test is. What a unit test does is look at the API of a procedure and go and hit the state space of the arguments - maybe half a dozen of them, or a hundred, or a few million out of the 2 to the 32nd power possible values, or whatever - so you're just doing hit and miss. That is really heuristic; you've got to be really lucky to find bugs doing that.
What I think is more powerful is "design by contract." You have preconditions, postconditions and invariants. Now, the technology isn't there in most languages; they haven't matured to the point Eiffel has, where you can statically check these things, but you can build additional infrastructure to do that kind of thing. I think it has all the advantages of TDD - the supposed advantages: I am going to think hard about the code, I am going to focus on the external view, and so forth. And I have found, at least for me, that contracts do that more effectively than tests do. Furthermore, they actually give you broader coverage, because you are covering the entire range of the arguments rather than just randomly scattering some values in there.
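Java has no built-in contract clauses like Eiffel's `require`, `ensure` and `invariant`, but the shape of what Jim describes can be sketched with hand-written checks; the account example is invented for illustration:

```java
class ContractedAccount {
    public static void main(String[] args) {
        ContractedAccount account = new ContractedAccount();
        account.deposit(10000);
        account.withdraw(4000);
        System.out.println("balance = " + account.balanceCents() + " cents");
    }

    private long balanceCents;

    // Invariant: the balance never goes negative.
    private void checkInvariant() {
        if (balanceCents < 0)
            throw new IllegalStateException("invariant violated: balance < 0");
    }

    void deposit(long amountCents) {
        // Precondition: a deposit must be a positive amount.
        if (amountCents <= 0)
            throw new IllegalArgumentException("precondition: amount > 0");
        long before = balanceCents;
        balanceCents += amountCents;
        // Postcondition: the balance grew by exactly the amount.
        if (balanceCents != before + amountCents)
            throw new IllegalStateException("postcondition violated");
        checkInvariant();
    }

    void withdraw(long amountCents) {
        // Preconditions: positive amount, sufficient funds.
        if (amountCents <= 0)
            throw new IllegalArgumentException("precondition: amount > 0");
        if (amountCents > balanceCents)
            throw new IllegalArgumentException("precondition: amount <= balance");
        long before = balanceCents;
        balanceCents -= amountCents;
        // Postcondition: the balance shrank by exactly the amount.
        if (balanceCents != before - amountCents)
            throw new IllegalStateException("postcondition violated");
        checkInvariant();
    }

    long balanceCents() { return balanceCents; }
}
```

Unlike a handful of sampled test inputs, these checks constrain every call across the whole range of arguments, which is the broader coverage Jim is pointing at.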
Now, Bertrand Meyer has actually taken this further, and he has something called CDD, Contract Driven Development, where what he does is take contracts and kind of feed random values at them; if the values don't meet the preconditions you don't run the test, because you know it will fail, but you check whether the postconditions hold after you run it, and if they don't, it's a bug. And they have actually done this: they have a tool that automatically runs tests. They ran it on the Eiffel library for about a week and found 7 bugs in that 20-year-old library - which is kind of interesting. It comes from a part of the code where you are expressing intentionality in a way that has some hope of being traced back to something of business importance, and the problem with TDD, as most people practice it down at the class level, is that it's really, really difficult to trace those class-level APIs all the way up to business significance.
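The random-testing scheme Jim attributes to Meyer can be illustrated with a small sketch (this illustrates the idea only, not Meyer's actual tool): generate random arguments, skip the ones that violate the precondition, and check the postcondition on the rest. The integer-square-root contract below is an invented example:

```java
import java.util.Random;

class ContractDrivenSketch {
    public static void main(String[] args) {
        Random rng = new Random(42); // fixed seed so the run is repeatable
        int checked = 0;
        for (int i = 0; i < 100_000; i++) {
            int n = rng.nextInt();
            if (n < 0) continue; // precondition n >= 0 violated: skip
            long r = isqrt(n);
            // Postcondition: r is the integer square root of n.
            if (!(r * r <= n && n < (r + 1) * (r + 1)))
                throw new AssertionError("contract violated at n = " + n);
            checked++;
        }
        System.out.println("postcondition held on " + checked + " random inputs");
    }

    // Routine under test. Contract: requires n >= 0;
    // ensures r*r <= n < (r+1)*(r+1).
    static int isqrt(int n) {
        return (int) Math.sqrt(n);
    }
}
```

The contract does double duty here: it filters the generated inputs and it serves as the oracle, so no hand-written expected values are needed.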
B: So I am having trouble with that. As I remember Eiffel and "design by contract" - and I actually thought this discussion was put to bed a long time ago - you specify preconditions, postconditions and invariants around every method, and the invariants of your class. Test driven development, or a suite of unit tests, does virtually the same thing: it specifies a set of incoming checks on the arguments and outgoing checks on the returned values, and it explores the state space, as you said, of the methods. So I always thought they were one-to-one: you could always transform contracts into unit tests, or transform unit tests into contracts, with the exception that the direction of the dependencies is different, and you know that I am a big dependency freak. Unit tests depend on production code, which I think is good, and production code doesn't depend on unit tests; whereas contracts are smeared through the code, which bothers me.
J: I think you are creating a dualism that needn't be created, in that there is one thing, which is the code; the code is the design, it's what's delivered, and anything else is not Lean. In typical projects that use unit testing, the test mass is about the same as the code mass, and where there is code, there are bugs. You cut your velocity in half. There are well-known examples; the most famous is the Ada compiler, where use of test driven development actually increased the number of bugs in the code, because your code mass increases, because you have more tests. If you are using assertions, you have this nice coupling - essential coupling - between the semantics of the interface and the code itself, whereas with tests the coupling is a lot messier and harder to manage. There was another point you made that I was going to react to...
B: I am surprised that you think the code mass is different.
J: In my experience, it is. If I look at how people actually use this: I like it when I see a JUnit spec that reads like assertions, but a lot of the time it doesn't.
B: I agree with that: there are messy tests, but there is messy code too. I don't like the argument that "the tool is easy to abuse, therefore you shouldn't use it"; that would invalidate almost everything...
J: That isn't my argument. My argument is about how I am seeing this being used in broad practice - and they are not getting it.
B: Well, OK. And do you see contracts being abused in broad practice?
J: First of all, they are not being used enough...
B: Right! By the way, since we've just got a couple of minutes left, a trivia question - and I don't know the answer. Who first used "DD" with some letter in front of it? We've got CDD now, we've got BDD, TDD and I don't know what else, and the earliest one I can remember is Rebecca Wirfs-Brock's Responsibility Driven Design. Was there an earlier one?
J: DD... was a UNIX command to do disk dump... but that probably doesn't count. Thank you, Bob, good seeing you again.