A short description of what we do with respect to professional development environments at HDM

Professional large scale development

The big picture: Generative Computing

We see a lot of evidence that large-scale development nowadays is based on generative computing approaches. Enterprise JavaBeans, for example, cannot be used productively without a massive amount of tooling to support developers. In many projects a combination of XML meta-data and frameworks is used to create extensible software. Frame processors are used to generate template processors (Struts tiles etc.). Model-driven architecture is already a buzzword and is beginning to spread beyond the OMG as well. And last but not least, Aspect-Oriented Programming is coming into focus for many developers (AspectJ, AspectWerkz).
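The generative idea above can be illustrated with a minimal sketch: a toy "frame processor" that turns declarative metadata (a field-to-type map, standing in for the XML meta-data such frameworks consume) into Java source for a simple bean. The class and method names here are hypothetical, chosen only for the example.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy illustration of generative computing: declarative metadata in,
// generated Java source out. Real frame processors work from templates
// and XML descriptors; this sketch hard-codes the bean "frame".
public class BeanGenerator {

    static String generate(String className, Map<String, String> fields) {
        StringBuilder src = new StringBuilder();
        src.append("public class ").append(className).append(" {\n");
        for (Map.Entry<String, String> f : fields.entrySet()) {
            String name = f.getKey(), type = f.getValue();
            String cap = Character.toUpperCase(name.charAt(0)) + name.substring(1);
            // one private field plus getter/setter per metadata entry
            src.append("    private ").append(type).append(' ').append(name).append(";\n");
            src.append("    public ").append(type).append(" get").append(cap)
               .append("() { return ").append(name).append("; }\n");
            src.append("    public void set").append(cap).append('(')
               .append(type).append(" v) { ").append(name).append(" = v; }\n");
        }
        src.append("}\n");
        return src.toString();
    }

    public static void main(String[] args) {
        Map<String, String> fields = new LinkedHashMap<>();
        fields.put("name", "String");
        fields.put("age", "int");
        System.out.println(generate("Customer", fields));
    }
}
```

The point is not the generator itself but the shift it stands for: developers maintain the metadata, and tooling keeps the boilerplate consistent across hundreds of classes.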

Flexibility is a big problem for standard software. Many products suffer from feature bloat and still lack vital features for some customers. Creating flexible frameworks that can cover specific domains and allow specialized applications to be built is top-notch developer know-how. The lecture will also cover the concepts of domain analysis and product-line engineering, because both technologies are intimately tied to generative approaches.
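A hedged sketch of the product-line idea, under assumed names: instead of one bloated application with every feature switched on, a small framework core is specialized per customer by composing only the features that customer needs. The `Feature` interface and the concrete features below are invented for illustration.

```java
import java.util.List;

// Product-line style variability: the framework core (DocumentPipeline)
// stays fixed; each "product" of the line is a different composition
// of optional features.
interface Feature {
    String apply(String document);
}

class Watermark implements Feature {
    public String apply(String document) { return document + " [watermarked]"; }
}

class AuditLog implements Feature {
    public String apply(String document) {
        System.out.println("audit: processed document");
        return document;
    }
}

public class DocumentPipeline {
    private final List<Feature> features;

    DocumentPipeline(List<Feature> features) { this.features = features; }

    String process(String document) {
        for (Feature f : features) document = f.apply(document);
        return document;
    }

    public static void main(String[] args) {
        // One product of the line: watermarking only, no auditing.
        DocumentPipeline p = new DocumentPipeline(List.of(new Watermark()));
        System.out.println(p.process("report"));  // report [watermarked]
    }
}
```

Domain analysis is what tells you which variation points (the `Feature` seams) the framework must expose; generative tooling can then assemble each product from a declarative feature selection.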

But before we tackle these topics it is good to start with the basics: a professional large-scale development process. Not high-end generative computing yet, but still necessary to achieve the higher goals later.


Professional, to me, means automated as much as possible, and convenient even for large teams. Professional also means that the whole lifecycle of a project is covered, which ties the development environment back to the software architecture: How do we split up development? How do we package source code and deliverables? How do we secure our artifacts?

But there is more to large-scale development than just tooling. How do we structure our development process? When do we create heartbeats? Do we use rolling baselines or fixed baselines?

And last but not least: do we use some form of Extreme Programming/Scrum, or do we follow a conventional top-down method like the Rational Unified Process (yes, I call it top-down no matter how much Rational tries to make it look "Xtreme")?

And I would like to pass on 16+ years of experience in large software projects - from Unix kernels and embedded controls to frameworks and web portals.


I am not so sure about the format yet. I'd like to tie in students with specific experience (Eclipse, CVS, Ant etc.). I'd also like to run it as close as possible to real development, which means we need a project to work on. Team work as always, with each team focusing on one tool but participating in and using the other tools.

An old idea: Distributed System Development Environment

This talks about the requirements of a large-scale distributed development environment and how an XML information bus could tie together the meta-data of all the different tools and runtime environments. The goal is to have much more impact control and traceability than we have now. Ironically, while our object systems become more and more distributed, the tooling behind them stays local and file-based.