Apparently, Bertrand Meyer says that direct assignment of object fields completely messes up the architecture. So it's a small problem that becomes a huge one.
The problem is small in the sense that fixing it is very easy in the source. You just prohibit, as in Eiffel, any direct access to fields and require that these things be encapsulated in simple procedures that perform the job: procedures which, of course, may then have contracts. So it's really a problem that is quite easy to kill in the bud.
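To make that prescription concrete, here is a rough Python sketch of the style Meyer is describing (the Account class and its methods are my own invention, purely for illustration): clients never touch the field directly, and the mutating method carries its contract as explicit pre- and postcondition checks.

```python
class Account:
    def __init__(self, balance=0):
        self._balance = balance   # never assigned directly by client code

    def balance(self):
        return self._balance

    def deposit(self, amount):
        # Precondition: deposits must be positive.
        assert amount > 0, "deposit amount must be positive"
        old = self._balance
        self._balance += amount
        # Postcondition: the balance grew by exactly the deposited amount.
        assert self._balance == old + amount

# Clients write account.deposit(50), not account._balance += 50,
# so every mutation passes through the contract.
```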
I think that this passage points out an ironic flaw in the way programmers - in particular language and framework designers - perceive programming. The general form of this thinking is: we have problem X, which is indicated (let's say) by the proliferation of direct attribute assignments. So we say, okay, we know how to handle these kinds of problems in code: we just declare that the symptom (direct attribute access) is illegal, an error condition, and so another component of the software-generation system (the programmer) will react to that event properly and stop creating problem X.
This is consistent with the way that software is written: if we have a problem X whose symptom is a buffer overrun, it's too difficult to specify all the myriad ways that problem X might actually manifest, so we fix the symptom and prevent it from manifesting at all. We don't sit down and classify every bizarre way that a buffer overrun could affect a running program: there are too many different things that could happen, and it's arguably not even possible to specify what problem X might be when it manifests, so we change the rules so that the whole class of problem is impossible.
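As a trivial illustration of changing the rules (a made-up snippet, not drawn from any particular program): in a bounds-checked language like Python, an out-of-range write isn't one of a thousand possible corruptions to classify; it is simply rejected, so the entire family of downstream misbehaviors never has to be enumerated.

```python
buf = bytearray(8)        # valid indices are 0 through 7

try:
    buf[8] = 0xFF         # one byte past the end
except IndexError as exc:
    print("rejected:", exc)   # the rule makes the overrun unrepresentable
```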
I think that this may be why Guido van Rossum is such a genius - his perception is largely untainted by this misconception.
The misconception is that you can change a programmer's behavior in the same way you can change a program's behavior. In the case of the programmer, you need to look deeper. You need to look at X and figure out what it actually is. In some cases, direct attribute access is actually not a problem at all - it can cause problems, but it doesn't necessarily. In the cases where it does cause problems, what's really going on? The programmer isn't developing a sufficiently strong encapsulation "contract" between the class and its customers. So we want to encourage the programmer to really think about that because that issue becomes more important as the problem scales up.
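This is roughly the Python counterpoint: a plain attribute can be promoted to a property later, after the programmer has discovered that a real invariant exists, without changing a single line of client code. A minimal sketch (the Temperature example and its names are mine, just to show the mechanism):

```python
class Temperature:
    def __init__(self, celsius):
        self.celsius = celsius      # version 1: just a public attribute


class CheckedTemperature:
    """Version 2: the same interface, but now with an enforced invariant."""

    def __init__(self, celsius):
        self.celsius = celsius      # goes through the property setter below

    @property
    def celsius(self):
        return self._celsius

    @celsius.setter
    def celsius(self, value):
        if value < -273.15:
            raise ValueError("below absolute zero")
        self._celsius = value


# Client code looks identical against either class:
t = CheckedTemperature(20)
t.celsius = 25       # still reads like direct attribute access
print(t.celsius)     # 25
```

The encapsulation "contract" arrives exactly when it turns out to matter, rather than being imposed up front on every field.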
In this case, it turns out that while it sounds like a reasonable request to make of the programmer - devote more time to abstract design so that the interfaces will be cleaner and we can manage the project as it scales - it might not actually be possible. The programmer needs to see a working prototype of their system in order to judge how it's going to work in the end, and in order to do that they are going to need to make some sub-optimal choices about the initial abstractions they employ. They won't know which code is simply an implementation detail and which code is really externally published until they find themselves saying "Hey, I just wrote something like N yesterday, and this O looks a lot like an N, why don't I try to use that code I wrote before."
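To put an (entirely invented) face on that moment of recognition: the two helpers below were written a day apart as private implementation details, and only once the second one exists does the genuinely publishable interface become visible.

```python
# Yesterday, buried in one module as an implementation detail:
def format_user(user):
    return "%s <%s>" % (user.name, user.email)

# Today, in another module, just before noticing the resemblance:
def format_sender(message):
    return "%s <%s>" % (message.sender_name, message.sender_address)

# The abstraction that was "really externally published" all along:
def format_address(display_name, address):
    return "%s <%s>" % (display_name, address)
```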
Here's where it starts to bump into a problem I've been thinking about a lot lately. I've been looking at the schedule I set out for myself towards the beginning of last week. A lot of things have happened and a lot of work has gotten done, but almost none of it has gone as planned. We've all been trying to talk about the schedule and update it as much as we can, but a lot of the discussion has either been "Hmm, this is just much harder than I thought it would be" or "I know it's taken 3 days so far but it still looks like a 1-day task." That's setting aside getting sick and other vagaries which affect the rate of progress. Even when work is going ahead full steam, and even when there is nobody "in the loop" but myself, it's incredibly difficult to figure out what is going to need to be done in advance of actually doing it. It's also interesting to observe that even the level of certainty about an estimate needs to be explored - there is a layer of meta-certainty there which is very difficult to express. For example, what I'm working on now, a MIME-generation API, was something that I thought would take a few minutes. Having written the inverse of this code and more than "half" of the generation step, I thought it would be completely trivial. It's not very hard, but it turns out to be more than a few hours of work, because there were problems I had not considered in my earlier, super-simplified implementation which completely break the model I was working with.
The Agile Manifesto says that we should value "responding to change over following a plan". In general I tend to agree with the Agile way of thinking, but I don't quite get this one. I don't think it's wrong, really, but it's reactionary. What it's reacting against is the established convention of following an imposed plan which development and/or customer communication have shown to be incorrect. They're saying: throw out the plan and keep the customer happy; don't set a schedule up front before it's clear what the priorities should really be.
I can't wholeheartedly agree, though, because I've seen the very positive effect that a plan can have on one's progress. When the plan is actually working, and it's possible to execute to it, it's very personally satisfying and fulfilling. Also, it provides a spur to keep working to achieve a goal that you (arguably) know is attainable.
It's hard to look at a plan and say that it's wrong, though, especially when one isn't really sure. When I'm looking at a 1-day task that has unexpectedly turned into a 2-day task, I hope that it will stay a 2-day task. I have a hard time understanding why things slip and what makes a programming task more or less work, despite a decade of programming experience and 5 years of desperately attempting to estimate tasks accurately before performing them.
To get back to the original point, my emotional reaction as a programmer is to reinforce the existing assertions about deadlines, in the hopes that the wetware in my brain will react to these invariants sensibly, as a component of a software system would - just as Mr. Meyer sets forth certain assertions about typing and software contracts and assumes they will have some bearing on unrelated problems.
On a small, distributed, agile, open-source team using Python, I find my personal time-management strategies reverting to the same comfortable shared hallucinations that make static typing, const correctness, and waterfall product planning all seem sensible.
Obviously I need to respond more directly and actively to change: to prototype my time and schedule management the same way I'd prototype a program, trying some strategies quickly to see whether they work without investing too much effort, and, when they do work, carrying them forward as far as they'll go.
The main question that I'm wrestling with at the moment is: how do I reconcile this with the slow-moving, context-heavy kind of thinking that programming requires? Each failed prototype is a big, expensive task switch which throws nasty Heisenbergian effects into scheduling. Each time I try to measure the effectiveness of the schedule, it slips some more, because by even thinking about measuring the schedule, I'm not thinking about programming.
I think that the answer has something to do with regimentation, i.e. spending an hour, the same hour, every day thinking about planning and absolutely no time outside of that hour.