Before moving to the product engineering department in my company, I had spent my professional career in services organizations supporting, implementing, and, eventually, designing custom software systems for use by a single customer. In that time, with few exceptions, that meant following the Waterfall methodology: trying to assess all of a customer's requirements in advance so that we could make a (futile) attempt at an accurate time and cost estimate.
Waterfall
For the fortunate uninitiated, Waterfall is a design process borrowed from (and better suited to) non-software engineering projects. In Waterfall, the project team follows a clear sequence: interviewing stakeholders, documenting project requirements, designing a comprehensive implementation, executing it, performing internal and stakeholder QA, and, finally, deploying. There are many variations, but the intent of each is to avoid late changes (e.g. after the satellite is already in orbit) that could be prohibitively expensive.
As for the name, I think it is meant to connote the process's steady progression. In my experience, it also does very well at capturing the sense of knowing it will all end by plunging headlong into free fall toward a rocky conclusion.
My part in this process was most recently as an architect. I took the requirements for a project and outlined the pages, components, services, code libraries, and integration points with third-party software and services that would fulfill those requirements.
UML
Toward the end of my stint as an architect, I decided to revisit UML for the first time since college, as part of a constant effort to make Waterfall work for me. Without spelling out every detail, UML is essentially a common set of diagramming stencils that provides a consistent model for defining a system's components, interactions, and behaviors. What makes this more impressive than a run-of-the-mill Visio or whiteboard diagram is that, as a standard modeling language, it is defined precisely enough to be machine-readable, so tools can generate "skeleton" code from the diagrams and validate programmers' code against them.
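To make that concrete, here is a minimal sketch of the kind of "skeleton" a tool might generate from a UML class diagram. The names in it (PaymentGateway, OrderService, place_order) are hypothetical examples of mine, not the output of any particular tool.

```python
# Hypothetical skeleton of the sort a UML tool might generate from a class
# diagram: the signatures and relationships come from the model, and the
# bodies are left for the programmer to fill in.
from abc import ABC, abstractmethod


class PaymentGateway(ABC):
    """Corresponds to a UML interface element."""

    @abstractmethod
    def charge(self, account_id: str, amount_cents: int) -> bool:
        ...


class OrderService:
    """Corresponds to a UML class with an association to PaymentGateway."""

    def __init__(self, gateway: PaymentGateway) -> None:
        self.gateway = gateway

    def place_order(self, account_id: str, amount_cents: int) -> bool:
        # The diagram dictates this signature; the implementation is still a
        # person's job at this point in the story.
        raise NotImplementedError
```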
The reason to seek out a standard modeling language was to avoid the needless problems of inventing my own design notation -- which, by this time, had become embarrassingly mature. I was returning to UML specifically because a diagramming "language" can represent a lot of detail concisely. That matters because Waterfall has a way of requiring more and more detail as you adapt over time to avoid repeating the shortcomings of past projects.
However, before I could incorporate this into my documentation (and certainly before I could get budget approval for a $600 piece of software), I moved into product engineering and away from Waterfall. Good riddance!
TDD + AI
In engineering I have been inculcated in TDD (see my previous post for a demonstration of it and my newfound zeal for it). As my professional fixation shifted from UML to TDD, the two concepts collided in the transition and gave me a new idea: if UML defines a system down to the unit, and TDD defines how a unit will operate successfully before it's written, wouldn't it be a relatively small step to automate writing the code that passes the tests and thereby implement the whole system?
This isn't getting a roomful of monkeys to type out all of Shakespeare's 38 plays in order, just a collection of his sonnets individually.
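To illustrate the TDD half of that idea, here is a minimal sketch of a test suite that fully specifies a small unit before it exists. The function name (parse_money) and its contract are invented for illustration; in the scheme above, the assertions are what an automated tool would have to satisfy, and the implementation shown is just one a person might write today.

```python
import unittest


# Hypothetical unit: in the idea above, this implementation is what an
# automated tool would write; the tests below are its specification.
def parse_money(text: str) -> int:
    """Convert a string like '$12.34' into an integer number of cents."""
    return int(round(float(text.lstrip("$")) * 100))


class ParseMoneyTests(unittest.TestCase):
    def test_parses_dollars_and_cents(self):
        self.assertEqual(parse_money("$12.34"), 1234)

    def test_parses_whole_dollars(self):
        self.assertEqual(parse_money("$5"), 500)

    def test_accepts_a_missing_currency_symbol(self):
        self.assertEqual(parse_money("0.99"), 99)


if __name__ == "__main__":
    unittest.main()
```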
As ridiculous as I make that sound, I don't consider it at all outside the realm of possibility, just outside the realm of reasonableness for all but the largest projects -- and even then, that's only because this technology doesn't exist yet. To my knowledge, we're missing a standard for tightly associating tests with UML diagrams, and we're certainly without a tool that can process such a TD-UML diagram to produce a full implementation.
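Since no such standard exists, what follows is pure speculation: a sketch of one way tests might be bound to UML model elements, here by tagging each test with the identifier of the operation it specifies. Every name and identifier in it is made up.

```python
import unittest


def specifies(uml_element_id: str):
    """Hypothetical decorator binding a test to a UML model element, so a tool
    could pair each diagrammed operation with the tests its generated
    implementation must pass."""
    def decorator(test_func):
        test_func.uml_element_id = uml_element_id
        return test_func
    return decorator


class OrderServiceTests(unittest.TestCase):
    @specifies("OrderService.place_order")  # made-up element id for illustration
    def test_declined_charge_means_no_order(self):
        # In a TD-UML scheme, a tool would read the element id above, find the
        # matching operation in the diagram, and generate code until this test
        # passes. The body is omitted in this sketch.
        self.skipTest("specification placeholder")


if __name__ == "__main__":
    unittest.main()
```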
As you consider this, let me reply, "yes, this would produce less efficient code ... at first."
Today, a unit test can be written to fail when code doesn't meet certain non-functional standards (e.g. performance), and a hypothetical automation tool could keep iterating over different implementations even after a test passes in order to discover faster variations. Ultimately, just as it's rare anymore to find a developer who insists on writing their own assembly for performance reasons, I think we could expect this to eventually produce code that runs as fast as, or faster than, what a person could write.
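For example, here is a minimal sketch of a test that enforces a time budget alongside a functional check. The 50 ms threshold and the sort_records function are invented for illustration, and a real non-functional test would need a less noisy measurement than a single wall-clock timing.

```python
import time
import unittest


def sort_records(records):
    """Hypothetical unit under test; a generating tool could keep trying faster
    implementations even after the functional assertion passes."""
    return sorted(records)


class SortRecordsPerformanceTest(unittest.TestCase):
    def test_sorts_100k_records_within_budget(self):
        records = list(range(100_000, 0, -1))
        start = time.perf_counter()
        result = sort_records(records)
        elapsed = time.perf_counter() - start
        self.assertEqual(result[0], 1)   # functional standard
        self.assertLess(elapsed, 0.05)   # non-functional standard: a 50 ms budget


if __name__ == "__main__":
    unittest.main()
```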
As for whether this would produce less elegant code, that plays into why I'm excited about this idea instead of being worried. But that's a topic for another day.