Welcome to the kriha.org weblog

What's New

Karl Klink on High-Quality Software on Mainframes

As many large companies are currently considering (or are already in the process of implementing) a move back to mainframe systems, this presents an opportunity for our students to learn more about a promising technical area - something they cannot learn from the current staff at most universities, as these professors have not worked on those machines for decades.

Mainframes are also a very interesting way to run a university's computing tasks with low overhead and low system administration costs. Just think of hundreds of virtual Linux instances which can be created on the fly. Students would then use their notebooks to connect.

But mainframes frequently require a higher code quality than what is usually achieved in "office-like" products. Karl Klink is a well-known expert in the area of high-quality software development and also the father of Linux on the IBM mainframes. This "skunkworks" project has turned into a huge success over the last few years and at the same time makes the mainframes accessible to a new generation of software engineers.

New technologies for secure software

One of the most exciting areas of security seems to be grid computing. The Globus Toolkit uses so-called proxy credentials to achieve a secure way of delegating user rights across different services. This is the first time I have seen a delegation mechanism that does not require the client to offer some form of token that allows the uncontrolled creation of further access tokens on the server side. Bad examples are delegating user IDs and passwords, or the Kerberos way of shipping a Ticket Granting Ticket (TGT) from a client to other servers. True, it is only a TGT, but it can be used in unintended ways on the server side.

Security-Enhanced Linux (SELinux) is a very interesting way to restrain the almighty user/program alliance from causing too much damage. In other words: the typical access control lists of operating systems allow a program to use all the rights of a user to achieve whatever it wants. A user cannot put restrictions on her rights. The objects protected by ACLs check the entry to their services but then take over a user's rights completely. A nice example of this problem is shown in "how many rights does the cp program need", or in the discussion of the Confused Deputy Problem. SELinux does not use the usual answer to this problem - capabilities. It uses the sandbox model instead, by putting programs and objects into domains and restricting the actions which can be performed in a domain. I will discuss the new book on SELinux shortly.

Another interesting software architecture is provided by OSGi - the Open Services Gateway initiative. It uses a sandbox model to restrict services downloaded to embedded platforms. This technology is used e.g. in the automotive industry. The Java-based security architecture follows a concept similar to the one used in smartcards (see the Finread architecture below), relying on isolation. But unlike Finread, the downloaded services do not get their permissions through code signing. Instead, separately configured permissions need to be defined, as specified by the Java 2 security architecture. See the interesting article from engineers at BMW. The reuse of packages between services seems to be an unsolved problem here. Expect more to come on OSGi security.
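To make the isolation idea a bit more concrete, here is a minimal Java 2 security sketch of what such a sandbox boils down to. All names (the DownloadedService class, the file path, the granted permission) are invented for illustration and are not taken from the OSGi specification or the BMW article:

    import java.io.FilePermission;
    import java.security.*;

    public class SandboxSketch {

        // stands in for code downloaded onto the gateway
        static class DownloadedService {
            void execute() {
                System.out.println("running with restricted permissions");
            }
        }

        public static void main(String[] args) {
            // permission checks are only enforced when a security manager is installed
            System.setSecurityManager(new SecurityManager());

            // the only permission the downloaded service gets
            Permissions perms = new Permissions();
            perms.add(new FilePermission("/data/serviceA/-", "read"));

            ProtectionDomain restricted = new ProtectionDomain(
                    new CodeSource(null, (java.security.cert.Certificate[]) null), perms);
            AccessControlContext sandbox =
                    new AccessControlContext(new ProtectionDomain[] { restricted });

            // effective permissions inside run() are the intersection of the caller's
            // permissions and those of the sandbox context
            AccessController.doPrivileged(new PrivilegedAction() {
                public Object run() {
                    new DownloadedService().execute();
                    return null;
                }
            }, sandbox);
        }
    }

The same mechanism underlies the separately configured permissions mentioned above: the service code itself stays unchanged, only the permission set attached to its protection domain decides what it may do.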

The mainframe is back, alive and kicking. I will make an announcement for a very interesting talk on mainframe software shortly, but here I'd like to mention an excellent new redbook from IBM on Websphere security on z/OS. This redbook covers the complete application server security and its integration with mainframe-internal components and external systems. At a handy size of merely 786 pages it is a quick read (;-).

The grid computing groups suggest a system for inter-domain security assertions. The problem of mapping users, rights and attributes between virtual organisations and existing companies can be solved in different ways, but the typical approaches based on setting a complete federation system on top of the existing infrastructure seem hardly workable. Other solutions describe a SAML-like assertion language (or abstract syntax) which allows a user to be mapped to another identity on a receiving system. The system uses semantics and interacting services to create those assertions. The assertions themselves come partly from the sending and partly from the receiving organisation, but no super-structure is required.

Developments driving security today

In a recent report on future security trends the Gartner Group notes that the perimeter model of security borders is no longer sufficient. This has been noticed by others like Dan Geer in his remark on where a company's borders are today (they are exactly where the company no longer has the authority to enforce a key). Gartner created the "airport security" model, which distinguishes different security zones with different QoS, trust and technical relations.

But I believe the future of security technology will be even more different than that. The concept of end-to-end security based on trust coming from central repositories will have to go. Channel-based security is less effective and will be replaced by object-based security - as is proposed e.g. by the web services WS-Security standard.

One driver for security is the demand for ad-hoc formation of virtual organisations - across different existing organisations with their respective infrastructures. This is not the first time that this demand has been noticed; e.g. 10 years ago at UCLA a project called "Infospheres" created a framework for ad-hoc collaboration between organisations (e.g. in case of a catastrophic event). Today grid computing is covering this area and creates new technologies (see above). In general the lack of central security repositories (which is almost equal to losing the "domain" or "realm" concept) raises big problems for current security technology.

Another driver is mobility with its associated host of small devices running some form of embedded software. These can be smartphones, car entertainment systems and so on. Most of these devices either forbid dynamic updates of the software or pose a big security risk. The fact is that most operating systems using the typical ACL model of access control are unable to deal with downloaded code. It takes either sandboxing or a capability-based system to allow safe download of code.

Take a look at the video on POLA (the principle of least authority) by Mark Miller and Mark Stiegler (get it via the erights.org homepage). It uses the SkyNet virus from Terminator III to explain the principles of POLA and also covers ideas for a secure browser.

And after watching the video, look at the latest threats like credential stealing via popups or the good old session stealing through web trojans while you have a session running in a different browser window. I am not a big expert in browser design, but the way browsers combine a global object model (DOM) with an interpreter engine looks like a barely controlled disaster waiting to happen. The web-trojan/session-stealing problem is not treated as a bug at all. Instead, web authors need to prepare their pages by tagging them with random numbers. When a page without a valid random number is requested, the web site can be sure that the link was not delivered through itself. And the popup-stealing trick is the same one that has been used with frames: code from site A is effective in a connection to site B. And the fix will only fix this particular problem. A real solution could be to create a "closure" of a specific DOM part, an interpreter instance and a connection - thereby restricting the browser from mixing sites. Look at the DARPA browser analysis in this context.
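For illustration, here is a minimal server-side Java sketch of the random-number tagging described above. The class, attribute and parameter names are mine, not taken from any particular framework:

    import java.security.SecureRandom;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpSession;

    public class RequestToken {

        private static final SecureRandom random = new SecureRandom();

        // called when a page with a sensitive form or link is generated
        public static String issueToken(HttpSession session) {
            String token = Long.toHexString(random.nextLong());
            session.setAttribute("request.token", token);
            return token;   // embed this value in the generated links/forms
        }

        // called when the follow-up request comes in
        public static boolean isValid(HttpServletRequest request) {
            HttpSession session = request.getSession(false);
            if (session == null) return false;
            String expected = (String) session.getAttribute("request.token");
            String actual = request.getParameter("token");
            // reject any request carrying a token the site did not issue itself
            return expected != null && expected.equals(actual);
        }
    }

A request arriving without the expected token was not produced by a page the site delivered to this session - which is exactly the property the random number is supposed to guarantee.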

Java vs. .NET - impersonation and delegation on different platforms

Denis Piliptchouk's e-book from O'Reilly is not only a good comparison of the security mechanisms used in both platforms. It is also a good introduction to the problems of impersonation and delegation and how these mechanisms are interwoven with the platform technology. Piliptchouk distinguishes several layers of platform security - user, machine security, enterprise, machine configuration and application configuration - which is a good conceptual framework.

Here is what I have learnt:

  1. Both platforms get more and more similar through the use of Kerberos, GSS-API, provider-based security mechanisms and PKI support.

  2. Only .NET additions and extensions are able to bring more security to the Windows platforms.

  3. Web services security is still a moving target.

  4. .NET supports impersonation only in Windows-only networks and has no support for delegation across the Internet at all.

  5. What surprised me were the many facets of security available on MS platforms and the huge role of IIS in the context of authentication and impersonation.

There is too much in this PDF file to list here, but the content is definitely worth reading. Btw: the whole series of articles is available on the web, but if you are busy I'd recommend buying the complete PDF at a low price of 5 dollars.

J2EE security (even with EJB 3.0) is far from perfect, as this little piece from Michael Nascimento Santos's blog shows (the problem of JAAS credentials not being available, the problem of dynamic instance-based security etc.).
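For the Java side, a minimal JAAS sketch of impersonation - running code "as" an authenticated subject - could look like the following. The login configuration name "MyLogin" is an assumption; a real setup would have Kerberos or an application server registry behind it:

    import java.security.AccessController;
    import java.security.PrivilegedAction;
    import javax.security.auth.Subject;
    import javax.security.auth.login.LoginContext;
    import javax.security.auth.login.LoginException;

    public class RunAsSketch {
        public static void main(String[] args) throws LoginException {
            // "MyLogin" refers to an entry in the JAAS login configuration file
            LoginContext lc = new LoginContext("MyLogin");
            lc.login();                      // authenticate against whatever the login module uses
            Subject user = lc.getSubject();

            // everything inside run() executes "as" the authenticated subject;
            // downstream code can retrieve it from the access control context
            Subject.doAs(user, new PrivilegedAction() {
                public Object run() {
                    Subject current = Subject.getSubject(AccessController.getContext());
                    System.out.println("running as: " + current.getPrincipals());
                    return null;
                }
            });
        }
    }

This is also where the complaint above bites: whether the container actually makes the JAAS Subject (and its credentials) available to application code at this point differs between servers.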

Blog to find a job - an effective reputation system

Blogging has certainly gotten quite popular. But is it simply a public diary for people with a slightly exhibitionistic nature, or can it provide real value for others? One talk at the Open Source Content Management event number 4 in Zurich (more details on this event will follow) offers an interesting idea: use your blog to find a job.

Sounds funny at first glance, but when you look at the properties of a blog you can see that employers might get some real information from a person's blog:

  1. A blog which is current shows a certain level of activity and engagement on the side of the prospective employee

  2. A blog which is current shows that the author is up to date with respect to latest technology

  3. A blog - like most diaries - contains information on personal habits, emotional skills, goals etc. which are otherwise hard to evaluate during a short job interview.

  4. Due to the cross-referencing inherent in blogging, the blogger's social and communicative involvement with a community becomes visible.

  5. The cross-referencing of blogs also serves as a reputation system that is rather hard to fake and which gives credibility to the other information extracted from the blog.

Taken together these things should cause students to become active bloggers if they want a decent job afterwards. But there are also critical aspects of blogging which came up in our discussion: the granularity of the entries is a problem. Once you start discussing something, the entries frequently turn into some kind of "micro article" which is at least 15-20 lines long but can be much longer. This disrupts the flow of a blog.

Blogging seems to be most effective when it is based on a real-time event (blogging from Baghdad while the US bombs are falling). Reporting on one's daily routines, on the other hand, can be quite boring.

Finally: a blog is a personal store for information and the social connectivity through cross-references is just a side-effect. But that is OK I guess.

Support for collaboration at the workplace and privately has improved a lot - at least the technical means are now much better. Most good projects run some kind of wikiwiki for everyday communication and storage (or simply as a way to survive in restricted intranets which outlaw FTP etc.). Chat is an invaluable tool for development groups and should be integrated as a plug-in in Eclipse. But what is still lacking in many projects is a culture of communication, and that is quite hard to establish.

Kerberos - network authentication middleware

Authentication across networked machines is still a hot topic. The MIT Kerberos system has become a standard way to authenticate users across networked machines and to achieve single sign-on. Jason Garman's book "Kerberos: The Definitive Guide" is very helpful in explaining the principles behind Kerberos and how to use it as a security tool. I found the book to be extremely readable. The author explains the shift in threat models from host-based to network-based very well. His explanation of the security protocol underlying a Kerberos implementation is easy to understand and still precise. Readers learn how Kerberos protects their secrets by generating and using session keys.

The chapter on attacks against Kerberos (e.g. a man-in-the-middle attack made possible if authenticating servers do not use the full Kerberos protocol to validate client credentials) was very helpful.

The book is very helpful if you plan a cross-platform installation of Kerberos on Windows and Unix machines. But it also shows scalability problems in the area of system management with Kerberos, especially if cross-domain trust is needed. The fact that Kerberos does not deal with authorization is one of the reasons why Windows/Unix integration via Kerberos is still a cumbersome process. The problems of using passwords as credentials are also explained.

Many companies (both software providers and users) are focusing on Kerberos as the protocol for single sign-on. Public-key infrastructures look much better initially, but the key handling problems behind PKI still prevent large-scale use in many cases. To many companies the concept of central key distribution centers is quite natural.

Kerberos integration via other protocols and interfaces (PAM, GSS-API etc.) is an important topic, but I had the feeling that this would almost warrant another book, as the problems behind them are very complicated.
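Just to give a taste of the GSS-API side, here is a minimal Java sketch of a client establishing a Kerberos-protected context. It assumes that a valid TGT (or a JAAS Kerberos login) is already in place, and the service principal name is made up:

    import org.ietf.jgss.*;

    public class GssClientSketch {
        public static void main(String[] args) throws GSSException {
            GSSManager manager = GSSManager.getInstance();

            // hypothetical target service; the Kerberos ticket is requested for this principal
            GSSName serverName = manager.createName("host@server.example.com",
                                                     GSSName.NT_HOSTBASED_SERVICE);
            Oid krb5Mechanism = new Oid("1.2.840.113554.1.2.2");   // the Kerberos V5 mechanism OID

            GSSContext context = manager.createContext(serverName, krb5Mechanism,
                                                        null, GSSContext.DEFAULT_LIFETIME);
            context.requestMutualAuth(true);     // both sides prove their identity
            context.requestConf(true);           // ask for confidentiality (session-key encryption)

            // this token would be sent to the server over the application protocol
            byte[] token = context.initSecContext(new byte[0], 0, 0);
            System.out.println("first context token: " + token.length + " bytes");
        }
    }

The nice thing about GSS-API is that the application only shuffles opaque tokens back and forth; the Kerberos details (and the session keys the book explains so well) stay hidden behind the interface.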

All in all I can only recommend Kerberos and Jason Garman's book on it.

Web Application Performance - no silver bullet

I spent a couple of days trying to speed up our new internet application, which uses a lot of XSLT processing (a model 2+ architecture). While this is generally not a problem because the resulting pages will be cached, it is a problem for those pages which cannot be cached. The result I have been seeing was that every additional request to uncached pages triggered a new generation run, which extended the response times for ALL those requests.

In other words: the relation between the number of parallel requests and the average response time is linear and leads to unacceptable response times after around 5 parallel requests. First we thought that this behavior must be the result of a bug, an undetected synchronization point, a wrong servlet model etc. The general hope - shortly before deployment - was for a SILVER BULLET: to find ONE CAUSE for the performance problem. And for a short moment it looked like I had found one: a method with an unusually long runtime value in the profiler - not counting the sub-methods. But a friend soon pointed out that this might only be the effect of the garbage collector. Sadly, this was quite to the point.

Unfortunately, the silver bullet case is not typical, especially not with web applications. Some short considerations made us realize that the behavior described above is simply typical for CPU-bound processes/threads - and XSLT transformations are very CPU intensive. But until we realized this we also learned a lot about other areas which contribute to bad performance: application server settings, garbage collection choices, transformation architecture. And we started evaluating other XSL compilers, even though e.g. Gregor crashes with our style sheets.

What we don't want to do is compromise our architecture, but we might be forced to split some stylesheets into a pipeline of smaller ones.
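If we do end up splitting them, the standard JAXP way to keep the overhead down is to compile each stylesheet once into a Templates object (thread-safe and reusable across requests) and chain the steps as SAX filters so no intermediate result tree has to be serialized. A minimal sketch - the stylesheet and input file names are placeholders:

    import javax.xml.transform.*;
    import javax.xml.transform.sax.*;
    import javax.xml.transform.stream.*;
    import org.xml.sax.InputSource;
    import org.xml.sax.XMLFilter;
    import org.xml.sax.helpers.XMLReaderFactory;

    public class XsltPipelineSketch {
        public static void main(String[] args) throws Exception {
            SAXTransformerFactory stf =
                    (SAXTransformerFactory) TransformerFactory.newInstance();

            // compile the stylesheets ONCE and keep the Templates objects around;
            // they are thread-safe and can be shared across requests
            Templates step1 = stf.newTemplates(new StreamSource("step1.xsl"));
            Templates step2 = stf.newTemplates(new StreamSource("step2.xsl"));

            // chain the two transformations as SAX filters: step1 feeds step2
            XMLFilter f1 = stf.newXMLFilter(step1);
            XMLFilter f2 = stf.newXMLFilter(step2);
            f1.setParent(XMLReaderFactory.createXMLReader());
            f2.setParent(f1);

            Transformer serializer = stf.newTransformer();   // identity transform as serializer
            serializer.transform(new SAXSource(f2, new InputSource("input.xml")),
                                 new StreamResult(System.out));
        }
    }

None of this removes the fundamental CPU-bound behavior, of course - it only avoids paying the stylesheet compilation and intermediate-tree costs on every request.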

Performance measurement tools for our application server are still far from optimal: with the WSAD profiler (used remotely against a small Unix box) I have to restart the application server, the profiler agent and my profiling client after every short run. (At least the kernel settings are now correct, so a restart is a bit faster.) Still, I could not get the resource advisors going.

What I learned from this case again was that performance testing needs to start early on the real hardware and that it is a tedious process which takes much longer than expected.

Starting Monday I will go through our memory allocations and check how the GC works. Perhaps I can squeeze out some more performance...

The things we don't tell our wives...

are usually not what we do at the dentist. That's why this headline on a full-page ad by the German dentists (actually - which organisation is behind this?) caught my attention. It turns out it is an ad against the planned health card, which is supposed to also contain medical statements and history in electronic form, protected by signatures from medical professionals.

The argument of the dentist organisation goes like this: there are things we don't want to expose to medical professionals (or our wives). To only partially expose medical details makes the whole electronic health file quite useless. But the dangers for our privacy are enormous. Doctors and patients should not be "transparent" (in German it says "gläsern", meaning: like glass).

Let's look at the arguments line by line.

  1. Since when does a German medical organisation care about the privacy of their patients? As far as I know no doctor or dentist would earn one penny by protecting that privacy - which makes the whole effort already quite suspicious.

  2. What could be so secret about going to a dentist that I would not tell my wife about it? I tried my best, but except for some fantasies about big-busted assistants bending over the dentist chair... I've come up quite empty-handed. If you have any idea please let me know. So why is the DENTIST organisation fighting health records in the hands of patients so much? Only a really mean-spirited security guy would guess that this could have something to do with result tracking, avoidance of double and triple X-rays etc.

  3. And: my wife could not read the data from my health card anyway: this would require either my PIN or the certificate of a member of a medical profession.

  4. But for sure the most interesting argument is the one where the dentists claim that neither patients nor doctors should become transparent. In arguments around security topics it frequently helps to change or substitute roles to get a fresh view on things. In this case we need to replace the doctor with e.g. our auto mechanic. What would you say if he told you that your relationship is based on "trust" and that all data are best kept by him? With respect to the bill and the repair work done you just have to trust him. He knows best.

    This example makes us realize how the dentist argument works: it tries to blur the distinction between a patient's right to privacy (and if a doctor becomes a patient herself she can claim the same right to her privacy) and the rights of a doctor WHEN ACTING AS A MEMBER OF A MEDICAL PROFESSION. There is a need to control those actions (e.g. to allow legal disputes over the quality of service or potential mistakes during treatment). Of course the one person to control these actions is the patient himself - if he gets the data. And exactly this data should not leave the medical offices if the dentists have their way.

Clearly, defining the authorization and access control rules for medical data on the health card won't be easy. But the dentists' arguments against medical records under patient control are simply a way to fend off requests for result tracking, quality control etc. If you go to prozahn - datenschutz you will find more suspicious arguments: according to the dentist organisation the patient data are best kept at the doctor's and not on a card. "Who else would be interested in them?" They try to discredit the ministry of health as having an ill-advised interest in patient data. Of course, as long as the doctors have exclusive control over the patient data, real control and tracking is impossible.

Again, there are risks behind those data, e.g. if they got into the hands of insurers or employers. But while the ministry of health is certainly a candidate for suspicion, the patients should not believe for one second that an organisation of medical professionals has intentions which have anything to do with what the patients need...

So who really is behind the "dentists' ad"? It is of course the "Kassenzahnärztliche Vereinigung", a lobby organisation of German dentists - and at the same time an organisation which could be disintermediated by the health card: doctors and health insurers could directly perform the accounting without involving the KZBV.

The health card is surely a political minefield.

How to use the Websphere Profiler

The following is only a start into the area of profiling. Here the Websphere Profiler is shown, which is based on the JVM profiling interface. Websphere itself has more profiling interfaces, but that is another topic.

WSAD and Websphere include support for profiling. This includes an agent that needs to run on the target side and a Profiling perspective in WSAD that controls this agent. Profiling can be done in the WSAD development environment or against a live application running in a real Websphere application server. Only in the case of a real application server will the data on concurrency etc. be meaningful. Literature on application profiling can be found here: J2EE Application Profiling in Websphere Studio. And here is a tool report on the WSAD Profiler. The best introduction can be found in Chapter 17 of the IBM redbook on IBM Websphere V5.1 Performance, Scalability and High Availability. On more than 1000 pages it covers everything about load balancing, availability, caching etc.

How to get the software:

  1. Eclipse/WSAD

    Even if you only want to profile within Eclipse you need the IBM Agent Controller up and running. It comes with your WSAD. Go to the bin subdirectory and execute SetConfig.bat to register the agent. Then go to your Services view on Windows and check whether the agent is up and running.

  2. Solaris

    Get the Agent Controller for your platform and follow the installation instructions. If you need to do this by hand: the software comes as a zipped rac.tar file. It can be found on the IBM Websphere site. Unpack it in a subdirectory of the application server, e.g. /export/opt/5.0.1.7/. In the bin subdirectory open the file RAStart.sh and adjust the home directory path to where rac is installed. Execute RAStart.sh to get the daemon going. (Please note: if you kill the daemon it will take up to 4 minutes before you can run a new one.)

Install the EAR file: the EAR that you want to profile needs to be installed on the application server. It MUST run properly or you will not get reasonable output from profiling (or you will not see the agent at all). If you only want to test remote profiling you can use the EAR that comes with the article from above (see attached file). It is best to unpack the EAR and install the .war file only. In this case it is necessary to specify a doc root entry "MyEnterprise" when uploading the war module with the websphere admin client ("install application" item).

Change application server configuration: Use the WAS admin console to change several settings of the server which should be profiled. You will need to follow the instructions from Chapter 17 of the performance redbook (see above). The relevant excerpt:

  See "Profiler configuration for WebSphere Studio test environment" on page 765 for information on how to profile an application in the WebSphere Studio test environment.

  Profiler configuration for a remote process: the remote server should have been started with profiling enabled. To configure your application server for profiling, do the following:

  * In the Administrative Console select Servers -> Application Servers -> appserver -> Process Definition. Enter -XrunpiAgent into the Executable arguments field and click OK.
  * Go to Servers -> Application Servers -> appserver -> Process Definition. Select Java Virtual Machine from the Additional Properties pane. Enter -XrunpiAgent -DPD_DT_ENABLED=true into the Generic JVM arguments field and click OK.
  * Save the configuration and restart the server(s).
  * The next step is to attach to a remote Java process for the server instance:
    a. In WebSphere Studio Application Developer, switch to the Profiling and Logging perspective (how to switch to this perspective is shown in Figure 17-2 on page 765).
    b. Select Profile -> Attach -> Remote Process. This option can also be obtained by right-clicking in the profiling monitor.
  * Go to "Generic Profiler configuration (local and remote environments)" on page 767.

Specify the library path for the application server: WAS will try to find the library libpiAgent.so when you specify the above JVM values. Therefore it is required that you extend the LD_LIBRARY_PATH in the websphere configuration to include the profiling libraries. Use the admin console and go to Application Server -> [your server] -> Process Definition -> Environment Entries -> LD_LIBRARY_PATH and add the path to the installed lib directory of the profiler (e.g. /opt/rac/lib). Watch out for different separators on Unix (:) and Windows (;). On a newly installed machine it could be the case that you have to create the LD_LIBRARY_PATH variable by selecting "new".

Prepare Eclipse/WSAD: under Windows -> Preferences -> Logging and Profiling -> Hosts add the name of the remote host. Click on connection test to check the connection. Always use fully qualified host names. Open the "profile" menu and select Attach -> Remote Server. Select the one you want to profile. Switch to the profiling perspective and follow the instructions from Chapter 17 or the other links from above.

Fixing problems: Sometimes it may happen that no agents are shown when you want to attach to a remote or local agent. This is usually a sign that your application server has problems running the application. In those cases the agents seem to have a problem telling the profiler about this condition; they simply disappear from the profiling view even though they are still running and answer correctly to the connection test request. Here is what seems to work:

  * Stop the application server you are profiling.
  * Stop the controller agent (using ./RAStop.sh in the rac/bin directory).
  * Wait 3-4 minutes and start the controller agent again (using ./RAStart.sh). If the start fails, redo this step until it works (;-)
  * Start your application server.
  * Go to the workbench/profiling perspective and attach again to your controller agent.

If you want to know whether the agent is still running on the remote site, or whether your local Eclipse has an open connection to the remote agent, there are two useful tools available. On a Unix machine use lsof | grep 10002 to see all processes having an open connection on port 10002 (the agent default port); on Windows download the Windows TCP tracker which will do the same. Tip: if you have problems with a large application, try to profile the demo program from [http://www-106.ibm.com/developerworks/websphere/library/techarticles/0311_manji/manji1.html] first. It comes as a small EAR file and provides a showcase for profiling.

Performance-related resources: news about garbage collection and performance in new JDKs, and Design for Scalability - an update.

OBSOC - Reflections on the software and social architecture behind the T-COM/Microsoft scandal

I have to admit that without Tagesschau.de and Heise.de confirming the story of the Chaos Computer Club I wouldn't have believed this for one second. Reading the Datenschleuder article on the T-COM hack - "hack" is actually much too strong a label for how the security leaks were discovered - stressed the muscles in my neck to their limits from shaking my head. First some facts for the benefit of our international friends, as most of the documentation provided is unfortunately in German.

Dirk Heringhaus published how he detected lots of severe bugs in the OBSOC web portal of T-COM. This portal is used by T-COM customers and personnel to administer contracts, buy services etc. It also interfaces with many other systems within Telecom. Heringhaus discovered the following security failures:

  1. Changing URLs made it possible to work around authentication and authorization

  2. Password handling for customers and admins was way below standard and allowed easy password guessing

  3. Customers and Admins used the same entrance (with the same low level security)

  4. Implementors of OBSOC kept critical data on private domains (e.g. SQL Server backups, including domain passwords)

  5. It was possible to break through from web application space into Windows domain space.

Those are the most critical TECHNICAL failures. The whole story gets much worse when one adds the way the companies involved dealt with the problems:

  1. No or delayed reaction

  2. Even in critical situations important messages need to go through the call center.

  3. Only private contacts help to solve those contact problems

  4. Solutions are provided using the "patchwork pattern" - fix a little bit every time.

This is the point where one could start pondering the unhealthy alliance of two monopolies (Telecom and Microsoft) and the role and value customers have under such conditions. And the role politics plays by not punishing those companies or by not making current EU laws valid in Germany as well. Historically there are many sweet connections between the ruling Social Democrats and the formerly state-owned Telecom.

And if you want to have some fun - read the Microsoft brochure on web services where they brag about the way OBSOC was implemented using .NET. I think the author would now rather bite his tongue off...

In any case: go to the download page of the T-COM hack to get the full details like role levels, network info and lots of T-COM and MS bragging.

But this would be a cheap shot (why be hard on a company whose latest IE fixes lasted exactly ONE DAY - from Friday 7/31/04 to Saturday 8/1/04 - before they had to be revoked and replaced?)

Though tempting I will not go down this road and instead speculate a bit on the software architectures behind the security failures.

  1. No central entry to OBSOC software?

    Heringhaus told T-COM about a problem with a variable X in some URLs and it was sometimes fixed the next day. But other uses of X in different requests were not affected. This points to a specific software pattern, in this case a model 1 architecture that uses templates. To be secure, those templates need to run the same security checks every time they are called. A familiar problem with this architecture is that programmers tend to forget those checks as they are not enforced architecturally.

    The use of the well-known Microsoft patchwork approach to security points in the same direction: fix only what you absolutely have to fix. No architectural considerations needed.

  2. No context sensitive access control checks?

    Changing an account number resulted in seeing data related to this - foreign - account. This can only happen if no context-sensitive access control is implemented. As a customer with an account you will need the right "can read or write account information". This is quite normal method-level access control. But it is valid ONLY in context, with the restriction: ONLY YOUR ACCOUNT (see the sketch after this list). How the implementors at T-COM and Microsoft could miss this one totally escapes me.

  3. From Web to domain?

    This is something I am not really familiar with because I always teach to NEVER EVER expose Windows domains/shares etc. to the internet, as we all know that the protocols used here are not able to do this securely. Why the T-COM infrastructure allowed this again totally escapes me.

  4. Authentication

    T-COM customers use weak authentication. But should this be true for admins as well? Only now are entries for admins finally separated from the global internet access of customers. Again - unbelievable that this was not the case until now.
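The sketch promised under point 2: a minimal Java example of the missing context-sensitive check, with all class and method names invented for illustration. The point is only that ownership of the requested account has to be verified on the server for every single request:

    import javax.servlet.http.HttpServletRequest;

    public class AccountGuard {

        // hypothetical lookup interface backed by the customer database
        public interface AccountDirectory {
            boolean isOwner(String userId, String accountId);
        }

        // method-level control says "may read accounts";
        // this check adds the context: "but only YOUR account"
        public void checkAccess(HttpServletRequest request, String requestedAccountId,
                                AccountDirectory directory) {
            if (request.getUserPrincipal() == null) {
                throw new SecurityException("not authenticated");
            }
            String user = request.getUserPrincipal().getName();

            // never trust the account number coming in with the URL -
            // verify ownership on the server side for every single request
            if (!directory.isOwner(user, requestedAccountId)) {
                throw new SecurityException("account " + requestedAccountId
                        + " does not belong to user " + user);
            }
        }
    }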

Again, I can only recommend reading the information available at Datenschleuder. But a last statement on the costs of security as depicted by Wilfried Schulze in his presentation on IT-Security - Risks and Measures (sorry, in German). Schulze says that the price of security compared to the losses is small. Yes, if it is done the T-COM and Microsoft way: focus on convenience and drown everything in techno-speak soup a la web services.

But fixing the problems of OBSOC will NOT BE CHEAP, because obviously there is no security framework in place at T-COM. Otherwise this application would never have been deployed and the reactions to problems would have been very different. Just think about the way user IDs and passwords were handled. And this is actually the part that is most worrying: a huge company that has no concept of security for its clients. No wonder T-COM and Microsoft consider themselves a perfect match (sorry, could not resist (;-))

To be fair: IT security personnel are only now learning to ask questions about the software deployed. What requests does it expect? Parameters? Actions performed? This means that IT security people will have to stock up on software know-how, and software programmers will have to realize that just programming a solution is not enough anymore. Unfortunately the two areas - network IT security and secure software development - are rather separate, and only few people can live in both worlds.

Let's assume your company has a security framework deployed that requires e.g. security sign-offs during various stages of software planning and implementation. Would your IT security experts have noticed the problems with state kept on the client side without further checks? Don't be too sure! IT security in many cases today is still dominated by network security.

And finally: I don't think that the problems described result from using the .NET framework. But there is an astonishing gap between the high-tech concepts described in .NET and the basic security problems found in OBSOC. This raises the question: did the programmers even understand what they were using and doing?

Naming Service and Deployment in J2EE

Naming services have a profound impact on application deployment and migration in J2EE - as I learnt recently the hard way. In 1997 I was involved in one of the first Component Broker installations and I still remember how we tried to separate development, test and production areas, e.g. through the use of different name spaces (based on DCE cells at that time). Looks like the concepts are still pretty much the same. Clustering adds some more complexity to this, with different name space standards for path information etc.

When today our backend connector failed to deliver data - something that had worked flawlessly yesterday - I got the suspicion that the naming service involved might not be working properly. The exception thrown indicated a failing name space resolution of our session facade.

That's when I started looking for some recent info on naming in J2EE and for a JNDI browser that would let me see what was really deployed. (I must say that our deployment is really modeled after the J2EE roles and I have little or no influence on deployed code once it has left the development phase.) Here is what I found:

The first surprise was that there are now three different levels of naming services: cell, node and server. Cell and node were actually quite familiar from DCE, but now every application server is also a naming service.

Then I discovered that e.g. Websphere already comes with a namespace browser - actually a dumper - called dumpNameSpace. Search for it in the help or go there: dumpNameSpace. You will find the tool in wasd/runtimes/basexx/bin, together with a shell script to run it. It prints out whatever it finds in a given naming service (specified through its port).

In our case, when I ran the tool I noticed that some errors were reported when the dumper tried to follow URLs from the node naming service to the application server naming service. After a complete restart of the whole cluster the errors disappeared and our objects showed up in the application server naming service.

If you don't like command-line batch tools, get a visual namespace browser.
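In essence dumpNameSpace just walks the JNDI tree, so a small hand-rolled version is easy to write. A minimal sketch - the provider URL points at a hypothetical host and bootstrap port, and you need the vendor's initial context factory (e.g. the Websphere client libraries) on the classpath:

    import java.util.Properties;
    import javax.naming.Binding;
    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.naming.NamingEnumeration;
    import javax.naming.NamingException;

    public class MiniNameSpaceDump {
        public static void main(String[] args) throws NamingException {
            Properties env = new Properties();
            // hypothetical host and bootstrap port of the naming service to inspect
            env.put(Context.PROVIDER_URL, "corbaloc:iiop:apphost:2809");
            dump(new InitialContext(env), "");
        }

        private static void dump(Context ctx, String indent) throws NamingException {
            NamingEnumeration bindings = ctx.listBindings("");
            while (bindings.hasMore()) {
                Binding b = (Binding) bindings.next();
                System.out.println(indent + b.getName() + "  (" + b.getClassName() + ")");
                if (b.getObject() instanceof Context) {          // recurse into sub-contexts
                    dump((Context) b.getObject(), indent + "    ");
                }
            }
        }
    }

Pointing this at the node port versus the server port makes the three naming levels mentioned above quite tangible.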

What certainly pays off is to get familiar with the concepts of clustering and name spaces, e.g. in "Effects of naming on migration". If you are just interested in how the implementation of the naming service changed in Websphere you can find the latest information here: What's new in WAS5 - Naming. The diagram above is also from Websphere world and it made things a lot clearer (e.g. that it is not a good sign if an application server does not answer naming requests...). And there is a good paper on Websphere clustering on redbooks.ibm.com, together with more information on availability and reliability.

Highlights of last terms software projects

If you thought your private notes on your PDA or smartphone were hidden from prying eyes: take a look at Bluetooth security hacks. For a nice example of model-driven software development look at Strutsbox, and for an overview of all projects (in German) go to the treatment list.

Character sets and encodings - always a dreadful thing?

Get some good advice on character sets and encodings from my friend and colleague Guillaume Robert:

The main reason why character encoding is a mess is the confusion people make between "what" they want to define and "how" they define it.

Example 1:
what: a list of integers in the [0;255] range
how: with 0, 1 and comma in a binary encoding
sample: 01010101,00001111,11110000

Example 2: the case of HTML pages
what: an HTML page with characters from ISO-10646
how: with characters from ISO-8859-1
sample: abcéö &#1106;
Contrary to what people think, browsers only render ISO-10646 characters (see: Browser rendering). With an HTML header such as <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1"> you just indicate that the HTML source is written with ISO-8859-1 characters (see: Charsets). Since ISO-8859-1 is a subset of ISO-10646 and contains most of the characters you'd need, the source can usually be written without any particular care.
what: c'est l'été ! (in ISO-10646)
how: c'est l'été ! (in ISO-8859-1)
Things turn tricky when you need to display a character which does not exist in your source encoding (ISO-8859-1 is an old, small set of only 256 characters). Then you use the escaping mechanism and reference the ISO-10646 character by its code.

Example 3:
what: é, á, {EURO}, {SERBIAN CYRILLIC DJE} (in ISO-10646)
how: é, á, &#x20AC;, &#x0452; (in ISO-8859-1)
how: &#x00E9;, &#x00E1;, &#x20AC;, &#x0452; (in 7-bit ASCII)

Example 4: HTML entities
For convenience in HTML, the most common escapes have a user-friendly alternate syntax: &#x00E9; <=> &eacute;, &#x20AC; <=> &euro;.
Extended ISO-8859-1: some widely used characters are not part of ISO-8859-1 because they appeared later (the Euro sign) or were forgotten (the French "oe"). The [128-159] range in ISO-8859-1 is a reserved, unused range. For convenience, web browsers map these characters into this range. Be aware that some nodes in your source generation chain may still ignore this range (SAX for example). &#128; <=> €.

See also: chars.html and internat.html. Hope it helps. Cheers, Guillaume
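The "what" versus "how" distinction is easy to try out in a few lines of Java. The following little sketch (my own, not Guillaume's) encodes the same string in two different ways and prints the escape for a character that ISO-8859-1 cannot hold:

    public class EncodingDemo {
        public static void main(String[] args) throws Exception {
            String what = "c'est l'\u00e9t\u00e9 !";          // the "what": ISO-10646 characters

            byte[] latin1 = what.getBytes("ISO-8859-1");      // one "how": 1 byte per character
            byte[] utf8   = what.getBytes("UTF-8");           // another "how": é becomes 2 bytes
            System.out.println("ISO-8859-1: " + latin1.length
                    + " bytes, UTF-8: " + utf8.length + " bytes");

            // a character outside ISO-8859-1 (the Euro sign) has to be escaped
            // in an ISO-8859-1 encoded HTML page using its ISO-10646 code
            char euro = '\u20ac';
            System.out.println("escape for HTML: &#x"
                    + Integer.toHexString(euro).toUpperCase() + ";");
        }
    }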

Service Oriented Architecture (SOA)

Good articles on SOA can be found here: Developerworks on SOA. I am not sure about the role and effects of SOA, but the hype is rather big right now. I guess once web services are REALLY implemented everywhere (and perform like RPC but without the smell of RPC) SOA will be the next big thing. Or not (;-).

Granular access control - what we can learn from multi-level databases

Multi-function cards like the proposed German health card offer a lot of information about patients, medication, treatment etc. A big problem with respect to privacy and user control is the granularity of the information: "file" is not really the appropriate unit here, as some information in a medical statement might be public while other parts need to be controlled by the patient. While reading through Charles and Shari Pfleeger's book "Security in Computing" I stumbled over the chapter on database security. I am not a database expert but I have some statistics background, so this chapter was really quite interesting for me. Besides inference and aggregation problems, where users can extract more information from database tables than they are entitled to, multilevel database technology caught my attention.

The book lists the following forces:

  1. "The security of a single element may be different from the security of other elements of the same record or from other values of the same attribute"

  2. "Two levels - sensitive and nonsensitive - are inadequate to represent some security situations.

  3. "The security of an aggregate - a sum, a count or a group of values in a database - may be different from the security of the individual elements."

The quotes are all from page 343ff.

This is exactly the same situation as with health information on smartcards. It is a problem of granularity, integrity and confidentiality. The book lists a number of techniques to solve those problems, e.g. partitioning (does not really solve the problem of distinct views on the same data and introduces redundancy problems), encryption (integrity locks, which blow up data storage needs considerably) and trusted front ends (an intermediate which performs additional access controls like a filter but needs to throw away a lot of information). Other alternatives are commutative filters (like method control by containers in J2EE, which tries to use the backend functionality) or distributed databases (which in the case of DB frontends tend to develop into a DB themselves) (from pages 346ff.).
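To make the "trusted front end" idea a bit more concrete, here is a toy Java sketch of element-level filtering. Field names and sensitivity levels are invented, and a real front end would of course sit between application and database rather than operate on an in-memory map:

    import java.util.HashMap;
    import java.util.Iterator;
    import java.util.Map;

    public class TrustedFrontEnd {

        // invented sensitivity levels per field: 0 = public ... 3 = highly sensitive
        private static final Map FIELD_LEVEL = new HashMap();
        static {
            FIELD_LEVEL.put("name",      new Integer(0));
            FIELD_LEVEL.put("diagnosis", new Integer(2));
            FIELD_LEVEL.put("hivStatus", new Integer(3));
        }

        // returns only the elements of a record the caller's clearance allows -
        // access control per element, not per file
        public static Map filter(Map record, int clearance) {
            Map visible = new HashMap();
            for (Iterator it = record.entrySet().iterator(); it.hasNext(); ) {
                Map.Entry e = (Map.Entry) it.next();
                Integer level = (Integer) FIELD_LEVEL.get(e.getKey());
                if (level != null && level.intValue() <= clearance) {
                    visible.put(e.getKey(), e.getValue());
                }
            }
            return visible;
        }
    }

The price the book mentions is visible even in this toy: everything above the clearance is simply thrown away, so the front end has to know a lot and still loses information.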

In the case of Finread smartcards we have intelligent front ends which protect partitioned data. The combination of smartcard data is still an open issue.

The book itself is a sound and well-written handbook of computer security. It covers cryptography, operating system security and administration (with risk analysis). Software issues like virus technology are described as well. But it is not a book on software security. SSL and Kerberos are not explained in depth, but the explanations of security problems are well worth the money. It gives you the basic know-how to then read something like the Websphere or WebLogic security manuals. If you need a handbook on security, this is a good one.

Richard Gabriel, Patterns of Software

I didn't know that this wonderful book is now freely available (I took the reference from Jorn Bettin's article on MDSD). Richard Gabriel is one of the fathers of LISP and tells the story of how an incredibly powerful technology did not make it in the end. It covers famous concepts like "worse is better" but also some sad stories about today's management (especially moving when he tells how the newly hired CEO always called in sick when layoffs had to be announced). If you need something to read at the beach - get it.

Workshop on generative computing and model-driven-architecture

The computer science and media department at HDM offers another workshop on generative computing and MDA. This time we will see:

  1. a practical use of MDA at large companies (by Markus Reiter, Joachim Hengge - Softlab/HDM)

  2. Automatic composition of business processes between companies - using semantic technologies and SOA. (by Christoph Diefenthal - Fraunhofer IAO/HDM)

  3. Practical metamodelling techniques using Smalltalk (by Claus Gittinger, exept AG)

  4. MDA used for a large scale enterprise application integration framework (by Marcel Rassinger, e2e Basel)

I will give a short introduction to what we did in generative computing this term and what we learned. Practical work included building a generative support package for Struts using the Eclipse Modeling Framework and JET, AspectJ applications, bytecode generation etc.

Like last time I expect some lively discussions around questions like:

Do you need generative approaches if you have a truly powerful programming language? (Are domain-specific languages necessarily different from implementation languages? Do you always have to generate code or is intelligent interpretation of models a more flexible alternative?)
Service-oriented architecture puts a lot of emphasis on semantic technologies lately (RDF(S), OWL). Is this competition for the UML/MOF concepts of the OMG? Is XML more than a dump format?
What are the current limits of MDA use in industry?
How does one define custom semantics in modelling languages and which tools provide support for this in UML?
What do employees need to learn to successfully use generative technologies? (Is there really some value in theoretical computer science? (;-))
Generation without models - useful? What are the options (frame processing, templates etc.)? What are the limits?

If you can read German I'd suggest reading the article by Markus Voelter on MDA in the latest ObjektSpektrum magazine. It provides an overview of current issues with MDA. Download it from Voelter's homepage, or read a short introduction to MDSD by Dave Frankel. The MDSD homepage has more resources on generative approaches. Softmetaware has a nice collection of MDA/MDSD tools, e.g. OpenArchitectureWare, an open source framework for MDA/MDSD generation purposes. Or go for Jorn Bettin's "Model-Driven Software Development", which covers MDSD quite extensively.

The german health card - security architecture

This term, card systems were a major topic in my lecture on internet security. With the help of my students I tried to understand the large-scale architecture of bit4health. First I'd like to show you which resources I used and then what we learned.

Most of the resources around the health card can be downloaded from DIMDI, a large medical database. But you should start with an article by Christiane Schulzki-Haddouti, "Signaturbuendnis macht Dampf" (thanks to Ralf Schmauder for the tip). She discusses the health card in the context of two other big projects: the jobcard (see below) and the signature card. The plans for the health card are ambitious, to say the least. It should be introduced in 2005 and be available everywhere in 2006. But a lot of questions are still open:

Can an existing infrastructure e.g. from financial organisation be re-used?
Which card-readers will be selected? Where will patients use them?
What functionality will be provided on the card and which one on the network?
And last but not least: who will issue, authorize, maintain, store etc. vital information?

But more important than the question when the card will be introduced is its architecture, especially the security architecture.

I started the analysis by downloading two PDF files from DIMDI. The papers provide a general overview and a more specific view of the security architecture. Interestingly, they use the terminology of the IBM Global Services Method, which we have covered this term.

The papers are not bad, but it is quite easy to get lost in high-level IT terminology and diagrams. The security architecture is based on RM-ODP - the reference model of open distributed processing. This is an architecture that cleanly separates security services and mechanisms. The health card project defines extensibility and interoperability as general goals and needs this separation. Security requirements are defined as well, but again on a fairly high level.

The whole health card security design was still quite unclear after going through these papers. Then I found the "Telematik-Buch", written by two medical professionals. This book in turn gives you all the background information needed to understand the security problems behind the health card.

Again it became obvious that with large-scale projects like the health card, a top-down analysis - starting with a security context diagram of the involved entities - is key to a sensible use of security technology. You just have to understand the problem domain to get the security right.

A small example: Between the doctors and the insurance organisations there exists an organization which collects the invoices from the doctors, does the invoice processing with the insurances and then hands back the money to the doctors. Of course there are security technologies which could establish a secure direct connection between the doctors and the insurances but this proposal would probably require changes in law.

Another example, also from the book: in Germany patients can pick the doctor or hospital of their choice. A doctor can state the results of an examination and e.g. propose a treatment in a hospital, but the receiver of this statement is unclear at the time of issuing. One consequence of this is that public-key based encryption (using the public key of the receiver) is not possible, because the receiver is not known in advance.

Several technologies can mitigate this problem - ranging from storing all data on the patient's health card to keeping all data on a network and storing only access keys on the smartcard.

The book also covers the financial background of the health card. It analyses the current, mostly paper-based procedures, e.g. pharmaceutical prescriptions, and calculates the costs involved. This gives a base for financial calculations of the possible expenses for a new solution. Doctors, pharmacies, hospitals and emergency vehicles all need to be equipped with the new technology, but it is unclear where the benefits really are. And without benefits no investments will be made.

I will discuss two rather hot topics around the proposed smartcard-based health card: control of medical actions and decisions, and the question of internet pharmacies. Once the medical professionals are all equipped with a "medical profession card" - which includes a digital signature - they can sign their reports and statements and put those signatures on the patient's health card. Right now most of this information is kept in the doctor's storage and is not protected against changes or loss. Stored on the patient's card, this information allows e.g. experts (or expert systems) to check the correctness of a treatment, and doctors will be obliged to study the information put there by colleagues. Many doctors tend to put forward concerns about the patients' privacy when in reality they are concerned about becoming transparent with respect to the correctness of treatment.

The proposed health card system will require clear standards for medical data exchange. Repositories and schemas need to be defined and maintained. Once those are in place, the medical professional can confirm actions (like accepting a prescription and handing out a specific drug) with their signature. A pharmacist can put this information right on the patient's card and the patient can use it later for reimbursement by the insurance. The pharmacy needs to install a smartcard reader/writer station. But how would this work on the internet? Besides legal issues, the patient needs a written statement by the pharmacy, signed and placed on the card. How would e.g. DocMorris put this onto a patient's card? If a patient owned a reader/writer - perhaps connected to a PC which in turn is connected to the internet - this would not be a problem (see the Finread article below). But exactly this point has been left unspecified so far: who will own/control the readers/writers? A smartcard system without them will put the patients in a rather helpless position. And I doubt that the local pharmacy will allow their readers to be used for internet orders (;-)

Finread Card Reader

The bit on the proposed health card above makes it rather clear that the reader/writer of smartcards plays a central role in every security architecture. The financial industry is very much aware of the current problems with PIN/TAN devices and the vulnerability of PCs as e-business devices. Finread is a proposed standard for a class 5 smartcard reader/writer which can run embedded software used to protect the smartcards. PC applications can still use card information, but all access (in secure mode) is controlled by so-called Finread Card Reader Applications running within the reader.

Mobile Communication - how is it different from fixed, static networks?

Distributed systems have long suffered from an exaggerated quest for transparency. Hiding concurrency and remoteness from application programmers was both necessary and dangerous at the same time. It seduced programmers into a programming style which disregarded the fact of distribution and ended with slow or bug-ridden software (best explained in the famous Waldo paper).

Mobile communication has the potential for the same misunderstandings and mistakes. Let me explain why - and at the same time tell you what I've learned from Jochen Schiller's wonderful book on mobile communication.

Wireless networks have some important characteristics which make them very different from wired networks and which require a different way of thinking and programming. Good examples are the problems of hidden and exposed terminals, where some participants in wireless communication can reach some others but not all within a group. Even though a node or terminal cannot communicate with another node, it can still be hindered by this node, e.g. because both nodes try to send messages at the same time, which can make reception at a third node impossible. Schiller explains all those complications in the first chapters of the book.

Cell design of wireless networks can only be understood if one knows the many ways the wireless spectrum can be divided between participants: space, time, frequency and code separation are important concepts, and Schiller excels at explaining them in a way that even more software-oriented engineers can grasp. No need to be an electrical engineer here. These chapters of the book provide a solid base for the introduction of several wireless technologies, ranging from satellite communication via GSM/UMTS to WLAN and Bluetooth. I was able to recognize the importance of home and visitor registries as the main pattern for roaming in wireless networks and ended up with a good understanding of the GSM architecture.

Then the fun really starts in the chapter on mobile communication at the network layer. Here Schiller shows how basic assumptions from fixed networks about availability, speed and routing break down completely. Communication in wireless networks can be possible in one direction but not the other. Counting transmit time using hops can be misleading, and so on. The biggest problems show up in the area of routing. Static routing schemes require an always-on state which will quickly drain the batteries of small mobile devices. Dynamic routing schemes are needed here.

The part on mobile IP is one of the best in the book. Schiller explains the problems mobile nodes face (note that this has NOTHING to do with wireless - just move your laptop around the world). Solutions require agents in the home and target networks, and Schiller shows that tunneling in both directions is needed for transparent mobility. The security problems related to mobile IP are largely unsolved. They require manipulation of routing tables, which is clearly a security problem if done without authorization. IPv6 will help here only to a certain degree.

Another highlight of the book is the part on TCP/IP for mobile devices. Did you know that some of the best algorithms used by TCP break down once they are used in wireless networks? Modern TCP assumes a congested network when packets are lost: receivers throw away packets when they cannot keep up with the traffic, and consequently TCP quickly makes the senders fall back into a much slower sending mode. But this is exactly the wrong behavior in wireless networks. Here lost packets are NOT a sign of a congested network. Instead, a nearby source of noise might have wiped out some packets, and reducing the sending rate will simply decrease throughput. Schiller shows several improvements to TCP using proxies or different algorithms.

In the chapter on support infrastructures for mobile services Schiller discusses the big problems of mobile clients: caching, disconnected operation with synchronization and replication issues, push technologies (like OTA), adaptation to device capabilities etc. He also explains why WAP over GSM had to fail and why iMode is a big success: interactive applications (e.g. browsing) over a circuit-switched connection just do not work. It is in this chapter that actually a new book starts - one on WAP 2.0, mobile multimedia applications, mobile development environments and operating systems and so on. And not to forget security, which seems to be quite critical with all the push technology used.

But what I liked best was all the information on ad-hoc networks. Self-organization using wireless connectivity could become a very important feature, e.g. for third-world countries. It is comparable to the current drive toward peer-to-peer architectures in distributed systems and has the potential to be truly disruptive. If you want to learn more, grep for "mesh networks" on Google or go to Schiller's homepage.

Automatic composition of web services into business processes using intermediates

Together with the Fraunhofer IAO, Christoph Diefenthal, a student of mine, developed a system which automatically creates and runs a business process (in this case a procurement process) between two companies. The companies involved need not adjust their web service interfaces. Transformation and conversion of services and parameters is handled by an intermediate service which can run anywhere. But the companies need to create both a semantically rich description of their own service and of the kind of services they expect from their partners. Those descriptions are input to the intermediate (flow) service which creates a combined process description and then translates it into the BPEL4WS format. A new web service is generated which implements and runs the process description.

The work uses several description schemas (RDF, RDFS, OWL, BPEL4WS) to achieve its goal. Some problems still exist though, especially in the area of security, where the concepts of end-to-end security and legal responsibility need to be reconciled with web intermediaries which act in place of the original requestors.

The (meta-)modelling approach will be presented by Christoph Diefenthal at our workshop on generative computing and MDA on 1 July.

The German Jobcard architecture - technical and political aspects of its security design

Ralf Schmauder sent me this link to a c't article on the planned Jobcard. It is a very good article by Christiane Schulzki-Haddouti on the technical and political aspects of the jobcard proposal. The jobcard is basically a smartcard system that allows access to an employee's data, controlled by the employee herself. The data are all kept in a central location. (That's why access control is such a hot issue - to get the "Datenschuetzer" (privacy advocates) off your back.)

What I'd like to show here is that the whole jobcard architecture consists of two different parts. One part is the central storage service which technically controls all the employee's data. Data creators like employers send their data to this place; no permission from the employee is needed. But public organizations which want access to those data need an electronically signed statement from the employee - that's what the smartcard is for. This statement, together with the request, is verified by another organization and access to the data is either granted or denied.

What is really important to note is that data storage and authorized access are two different, technically independent systems. It takes one legal change and the whole authorization system can be discarded - and the data in the central storage made available to whichever organization is now entitled to them.

The problem is that the employee's key does not really protect her data. It is used only to give permission for access. If you look at the central role of the storage service you can see why privacy advocates have every reason to get worried. Again, like in the case of the health card, the difference is in how the smartcard is used. But without intelligent readers like finread card readers in the hands of the general public, all smartcard solutions are rather a joke.

But the difference between the existing (paper-based) architecture and the future jobcard system is worth comparing. I am using two diagrams from the article here. The first one shows the current architecture with the employer maintaining one-to-one relations with the government organizations. Those one-to-one relations could easily be turned into e-services. A standard data format and a secure way of transport is all that is needed. But then every data sink would need a relation to the single data source - or get the data from another sink.

The current situation turned into e-services does not change much, but it makes the privacy problems more visible, which could cause political problems. A new system is needed that ensures data exchange but also satisfies privacy advocates.

The jobcard architecture as shown in this diagram is a publish-subscribe pattern. Watch how the employer has moved from the center of the system (the data source) to the side. A publish-subscribe pattern decouples consumers and producers from each other. Specifically, it allows new consumers to be added without a need to bother the producers. Clearly a big difference to the one-to-one relations of today.

This makes data interchange much easier than before and raises the question of who controls the data. The jobcard architecture therefore adds an authorization part to the system. Draw a line below the top three objects (employer, central storage and one consumer): everything below that line belongs to the authorization system.
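To make the publish-subscribe point concrete, here is a tiny sketch of the decoupling (all class and method names are made up for illustration - this is of course not the actual jobcard software):

import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Minimal publish-subscribe: the producer (employer) publishes employment data
// once; consumers (government agencies) register themselves without the
// producer ever knowing about them - exactly the decoupling described above.
class EmploymentDataHub {
    private final List<Consumer<String>> subscribers = new ArrayList<>();

    void subscribe(Consumer<String> consumer) {   // new consumers can be added any time
        subscribers.add(consumer);
    }

    void publish(String record) {                 // the employer only talks to the hub
        for (Consumer<String> c : subscribers) {
            c.accept(record);
        }
    }
}

public class JobcardPubSubDemo {
    public static void main(String[] args) {
        EmploymentDataHub hub = new EmploymentDataHub();
        hub.subscribe(r -> System.out.println("Pension agency got: " + r));
        hub.subscribe(r -> System.out.println("Unemployment office got: " + r));
        hub.publish("employment record of employee 4711");
    }
}

The technical convenience is obvious - and so is the political one: adding a new consumer is a one-liner.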

The important question is now simply: for which purpose is the key on the employee's smartcard used? The current proposal uses it to sign authorizations for data access. These authorizations are checked by the authorization system and access is granted.

But the key could also be used to encrypt and protect the data. This would really put the employee in control of her data. She could verify a request and, if granted, send a version of her data to the receiver. But that would of course substantially reduce the political flexibility with respect to changing the rules for data access.
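The difference between the two uses of the key is easy to show in code. A small illustrative sketch using the standard Java crypto APIs - the real system would of course look quite different, and real data would be protected with a wrapped symmetric key rather than raw RSA:

import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;
import javax.crypto.Cipher;

// Two very different uses of the same smartcard key pair. Variant A (the current
// proposal) only signs an authorization - the data itself stays readable at the
// central store. Variant B encrypts the data, so whoever holds the store cannot
// read it without the employee's key.
public class KeyUsageDemo {
    public static void main(String[] args) throws Exception {
        KeyPair employeeKey = KeyPairGenerator.getInstance("RSA").generateKeyPair();
        byte[] data = "employment record".getBytes(StandardCharsets.UTF_8);

        // Variant A: sign an access authorization (the record remains plaintext elsewhere)
        Signature sig = Signature.getInstance("SHA256withRSA");
        sig.initSign(employeeKey.getPrivate());
        sig.update("I allow agency X to read my record".getBytes(StandardCharsets.UTF_8));
        byte[] authorization = sig.sign();

        // Variant B: encrypt the data itself (only works for tiny payloads this way;
        // a real system would encrypt a symmetric key instead)
        Cipher rsa = Cipher.getInstance("RSA/ECB/PKCS1Padding");
        rsa.init(Cipher.ENCRYPT_MODE, employeeKey.getPublic());
        byte[] protectedData = rsa.doFinal(data);

        System.out.println("authorization: " + authorization.length + " bytes, "
                + "encrypted record: " + protectedData.length + " bytes");
    }
}

Variant A leaves the record readable by whoever runs the central store; variant B does not. That is the whole political point.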

In one sentence: just cut away everything below the central storage service and you will have no technical problem accessing the private data - because the privacy protection is not in the data itself.

New free computer science books - checked IBM Redbooks lately?

The last two newsletters from the redbooks site were chock-full of very interesting books and papers on current IT topics.

Enterprise infrastructures and architectures are quite difficult to grasp if you are new to the business. Lots of layers and many functionalities rest on different machines. Topics like clustering, single sign-on and identity management are complicated but necessary. The following redbooks cover those topics in detail and are not only applicable in the context of IBM products. Typical examples: how do you configure a web server for SSL mode? How do you secure web infrastructures internally? Where do you store credentials? How do you solve delegation problems? How do you implement RBAC in a web application server context? Can you get your application server to use an LDAP directory? How does your server handle JAAS? If those terms are new to you - get those redbooks.

But before I start with the redbooks I'd like to point you to two papers not from IBM: if you are interested in security, the Security Introduction from BEA is very much worth reading because the core terminology and principles behind distributed and application server security are introduced in a very readable way.

If you are interested in security services in distributed systems, get the Security Service Specification from OMG. It clarifies things like delegation etc. Not such an easy read but fundamental.

On security and identity management

  1. Develop and Deploy a Secure Portal Solution. Very good. Covers concepts like MASS, authentication and authorization infrastructures and SSO.

  2. Identity Management Design Guide with IBM Tivoli Identity Manager

  3. Enterprise Security Architecture Using IBM Tivoli Security Solutions

  4. IBM WebSphere V5.0 Security WebSphere Handbook Series. A classic. Covers almost EVERYTHING important in web application security in the context of application servers. SSO, RBAC, JAAS, X.509 certs etc.

On enterprise infrastructure: LDAP, caching

  1. Understanding LDAP - Design and Implementation - Update. No need to buy an LDAP book. This one covers setting up LDAP directories as well as securing them.

  2. Architecting High-End WebSphere Environments from Edge Server to EIS. Bring content as far as possible to the edge of your network or to your customers. Dynamically update edge caches from dynacache in your application server. Get information from your EIS systems.

On application server technology: system management, monitoring, clustering

  1. Overview of WebSphere Studio Application Monitor and Workload Simulator. Always something that is considered too late in a project. Remember Lenin: trust is good but control is better.

  2. IBM WebSphere Application Server V5.1 System Management and Configuration WebSphere Handbook Series. Distributed system management is essential if you want to run your web applications in a large scale environment.

  3. IBM WebSphere V5.1 Performance, Scalability, and High Availability WebSphere Handbook Series. A monster book on clustering, availability and reliability.

  4. Enterprise Integration with IBM Connectors. Very good and easy introduction to the J2EE connector architecture - a real core piece of J2EE.

On business integration with web services

  1. Using Web Services for Business Integration

  2. Service Oriented Architecture and Web Services: Enterprise Service Bus. SOA is the new buzzword on the block.

  3. Patterns: Implementing an SOA using an Enterprise Service Bus. The Enterprise Service Bus could well be the future architecture for all EAI systems.

E-voting at the euro and community elections in Germany - Koblenz rulez?

Dennis Voelker sent me a link to a Heise newsticker article on e-voting in Germany. According to this report the city of Koblenz used an e-voting machine for the euro and community elections last Sunday. "And it worked flawlessly". At the small price of half a million Euros.

Just a few questions to the head of election central:

  1. What do you mean by "worked flawlessly"? Do you mean that the machines did not crash during the voting office hours? Or do you claim a correct election based on the fact that there were no major delays or queues in front of the booths? According to the report the queues were not longer than in previous years. Is that good? Is that still good enough for 500.000 Euros spent?

  2. How do you get proof of a correct election? Do you base your statements on the correctness of software and hardware which you derive in turn from testing and certification principles during manufacturing? Or did you install a system that allows end-to-end verification by voters and which is independent of the voting machines?

  3. The head of election central is quoted in this article saying that the major reason for implementing the voting machines is to save time - in other words to get the results earlier. In this case a couple of hours after the voting booths close. Has waiting some time for the result of a vote been a problem lately? Has somebody been complaining? How come that in times like these, when public money is scarce, a city spends 500.000 Euros on a system that will be used EVERY FOUR YEARS (OK, let's say every two years) and which will give us a couple of hours less waiting time for the results?

  4. I don't want to beat a dead horse here but I just can't make a case for e-voting. Does it save money? Most helpers during elections in Germany are unpaid. People can easily wait a bit longer for results - remember: the machines do not promise A BETTER RESULT (;-) - if they did, the money might be well spent. All in all this whole issue looks like a solution looking for a problem.

Note

Could an e-voting-savvy person perhaps tell me the rationale behind the push for e-voting? And perhaps answer those questions above as well?

Beyond Fear Tour 2004

On 11-12 May 2004 I was touring with students of the HDM through the south-west corner of Germany and Switzerland. This was part of our yearly week reserved for field trips. Our field trip had the topic security/safety and we did it on our motorbikes. Originally we had planned to take part in a bike safety training one afternoon but this did not work out: our group was not big enough. Perhaps next year. Our service team (;-) did a great job in supporting us bikers - only every once in a while they got lost and ended up on a smaller mountain (;-)

The pictures were taken by Marco Zugelder and Dietmar Tochtermann and you can find more at Marco's homepage. How inventive computer science students can be if needed: Dietmar's custom-made rear light....

It was a wonderful trip and next year we will take at least 3 days for our excursion.

On Motorbikes and Security, more?

What do motorbike safety and IT security have in common? The answer is simply the attitude towards risk. In both areas you can watch the full range of behavior towards risk: from denying it (nothing's gonna happen) to strict risk avoidance (you must be nuts to ride a bike). From doing frequent exercises to riding only rarely. And tons of myths about security: "If you drive only a bit on Sundays it is very unlikely that you get killed - the risk increases with the miles you drive". In IT terms this is equivalent to statements like: "Linux is safer than xx". First: the so-called Sunday driver is not at all safer from being killed, because if he gets into a critical situation he very likely has no practised response to it. And blunt statements about the security of operating systems have too many hidden caveats to be of real value. Linux may be safer than xx IFF you know it well AND keep it up to date AND have good system management practices etc.

Fear is an important element of riding a bike. Too strong, fear turns into panic and kills you easily. In fact, many serious accidents with bikes are the result of panic reactions - a very common and deadly one being the inability to increase the lean angle in curves beyond practised limits in case of an emergency. Riders seemingly glued to their bikes then tend to crash straight into the opposite traffic. So for a biker it is a necessary survival skill to go "beyond fear". If you want to learn more about this read Die obere Haelfte des Motorrads (The Upper Half of the Motorcycle) by Bernd Spiegel, a behavioral scientist and seasoned bike trainer.

Is that all there is about bikes and security? No. Motorbikes have one quality that makes them very special: a notion of freedom which shows e.g. in wildly different bikes - many of them modified by their owners - and in biker habits. Bikers insist on their freedom to ride and live in a certain way - even if this puts them at risk sometimes. They tolerate a certain amount of safety features like helmets. But they would reject full body armor because it would kill the fun in riding a bike. And that's where the whole thing becomes political as well: for years government and industry have tried to establish new rules and standards, e.g. euro-helmets, standards for biker clothes etc. Some of it was quite useful, some only served the interests of the industry. And some rules, like closing certain roads for bikers, try to influence the trade-offs between safety/risk/fun/common interests etc. and get dangerously close to taking away the freedom to move. But it shows clearly that safety or security is a TRADE-OFF between different values.

The idea to offer an IT security related tour on motorbikes for students and colleagues is about a year old. And it got its name when Bruce Schneier's latest book Beyond Fear was published. In this book Schneier reminds us that security is a TRADE-OFF between risk and gain. It does not mean complete risk avoidance. And he also shows that freedom (individually and on the scale of a society) is tied to taking risks.

Like with bikes, fear is a central concept in security - why would one think about security if not for fear of something? And like in the case of bikes, fear can be counterproductive. Fear is a basic instinct, and neither bikes nor security questions (e.g. risks of terror attacks) are well suited to reactions based on basic instincts.

He tries to put security back into perspective by asking 5 crucial questions for each proposed solution:

What are the assets that you want to protect?
What are the risks to those assets?
How well does the security solution mitigate those risks?
What other risks are caused by the security solution?
What costs and trade-offs does the security solution impose?

(Taken from Beyond Fear, page 14). Schneier puts security back into its social context and makes its political and social side visible. He demonstrates his approach using many security measures put in place after 9/11 by the US government which turn out to be mostly useless or even damaging to individuals or society. (It was only yesterday that the FBI announced for the Xth time vague warnings of terrorist threats - nothing specific (sic) for the summer. For an unknown reason the color-based risk levels currently used in the USA have not been adjusted to this. Perhaps somebody simply could not figure out what "orange" would really mean compared to "green" or "red", and that the risks are always there but life goes on?) "Nothing specific" - yes, just enough to scare some percent of the population into ridiculous safety measures (duct tape etc....)

But I'd like to ask those questions also on a much smaller scale: within companies, for example. What is the right kind of IT security for a campus? For a public broadcast station (see below)? What are security processes really worth and what do they really cost? OK, so sharing a user ID and password between colleagues is a risk because there is trust involved, e.g. somebody going on vacation giving her credentials to a colleague so that work can go on uninterrupted. Yes, there is a risk that this trust can be abused. But how many times is this really the case? And what does this obvious sign of trust mean for teams and their informal relations, motivation etc.? It is true - security problems start with trust. But what kind of problems do you experience without trust?

And Schneier's questions even work on the very small - individual - level, e.g. with motorbikes. Is the only proper response to a risk assessment for motorbikes really to no longer ride a bike? Or are the trade-offs of this security measure simply too high? This leads back to the important statement that security is not about risk avoidance. It is about risk management. Here is a place to learn more about bike safety: Stehlin bike training. Which means there is a lot you can do to improve safety and security on a bike (I do not really distinguish between these terms here for two reasons: a) we don't have two different words for this in German - it's all "Sicherheit" - and b) a lot of things happening on a bike are intentional and therefore belong to the security category, even if it's your own intentions (;-)).

In the next weeks I will try to use Schneier's catalog within my lecture on internet security, where we are going to discuss and evaluate the planned German health card. Remember: what are we trying to protect? This seems to be the crucial question as it implies the others already. I noticed that sometimes the answer seems to cover almost everything: "the nation's safety" being a very popular one right now. The more vague the assets are, the harder it is to evaluate whether the proposed measures really improve the situation. Such cases should already raise an alarm. But the opposite can be true as well: a simple asset needs to be protected and the mechanism used is completely out of proportion. Such cases usually hide the real target of the security mechanism deployed. This was the case with the recently revoked law that allowed law enforcement to spy extensively on citizens without proper cause.

Schneier's book is a political book as well. It tries to restore reason in security discussions - which has obviously been lost after 9/11. He also starts discussing security itself as a means to weaken democracy. On 24 May 2004 on the German TV station 3sat, a group including former German minister of the interior Gerhart Baum discussed the same topic with respect to Germany: terrorists try to raise FUD. The resulting hysteria - nurtured by the media - is used by different groups to change laws and raise money. Did you know that we still have many laws made at the time of the RAF terrorism - even though this threat is long gone? The same mechanisms as in the US work in Germany too: something happens, the media cry wolf and politicians feel the need for blind activism. And our liberties go down the drain.

Is there anything I did not like about Schneier's book? He makes it look like the instrumentalization of security by the US government is only a side effect of misguided political activism, a mistake only. But one could argue that the way security measures are currently used in the US and other countries is far from accidental and that the side effects are the real goals: redirect money and cut down on civil liberties. Governments, military industry and media are in perfect sync playing the FUD tune. And to this effect they march in lockstep with terrorist forces and create the same symbiotic relation as between right-wing Israelis and radical Palestinians: they need each other, and whenever the danger of peace increases, one of them will strike. But that is probably a different book then and the author would be Michael Moore. And Schneier always mentions the Iraq war together with the "fight against terrorism" - is this really the same to US eyes? Has the CNN propaganda really done such a good job? Somebody said on TV that this mingling of causes was one of the main reasons for the Iraqi tortures by US soldiers. Again wrong of course: the US have long left the ground of the Geneva convention through a series of top-level political decisions, the tortures in Iraq are no accidents at all (remember: the job was to find weapons of mass destruction), and why else are the US fighting the UN court for human rights so much? But the belief that their fight in Iraq is somehow related to 9/11 makes it much easier for the lower ranks...

But enough of this. The question is: is Schneier's catalog really useful for practical security evaluation? I believe it is and will show this with a different topic: the planned German job card - a centralized system for all data related to your employment, with a card system that supposedly protects the employee's data. The assets are clear: your data - but does the proposed security solution really protect them (and from whom)? We will take a look at the system design of the job card and it will become evident that the card-related layer is really only an add-on: all the privacy protection of this system is provided by the laws behind it and - unfortunately - not by the card system itself. In other words: change the law and the card-based privacy control is gone with the wind, because your data are NOT really protected by your private key. We will discuss this in an article on card systems in the near future.

Store and forward communication with bluetooth - helping the mobile biker, more?

This idea came up during our bike trip: one thing a group of bikers immediately notices is that communication becomes a problem once you are on your way. The systems currently in use, e.g. from Baer, are both expensive and non-standard, which means that after spending a lot of money on such a system you will frequently still be unable to communicate with a new group because they use a different system. Also, cable-based solutions are not ideal in combination with helmets.

But with the latest developments in smartphones - especially their Bluetooth features - this situation could be improved in a rather cheap and easy way: why not build an ad-hoc network between those phones?

The idea is roughly this: all riders are equipped with Bluetooth devices. At the start they synchronize their devices (make them know each other). While riding, each smartphone works as a store-and-forward server for messages. In case of larger distances the participating devices will use e.g. GPRS to deliver messages.

There are a number of open questions tied to this idea, but some features of a motorbike group might help here as well: the usual distance between riders can exceed 10 meters, but frequently riders are within this distance as well. Front and back rider are usually the same riders. A reasonable group will insist on riders keeping their place within the group. Speed is usually not very high.

Let's discuss some properties of the idea (a rough sketch of the store-and-forward part follows the list):

  1. How many devices can be connected? Bluetooth has the limit of 8 active devices per piconet. We would probably need a scatternet design with every phone being a master and slave in different nets.

  2. When would the devices use GPRS as a fallback? How could the algorithm work?

  3. Power: do the Bluetooth targets need to be active most of the time? Do they have to be plugged into onboard power?

  4. User interface: can we create a generic user interface box which is mounted on the handlebar? Should it be a Bluetooth target as well? This would get rid of cabling. There are already helmets with built-in Bluetooth receivers available. Our Bluetooth user interface box would have a simple display, some very simple manual controls (they need to work with gloves), and targets could download user interface (meta)data through Bluetooth.

  5. How would the distributed algorithms for mobile systems work? I've read Joachim Schiller's excellent book on mobile communication and found the chapters on ad-hoc nets very interesting. In our case GPRS could provide part of a fixed infrastructure if one is needed. Need to look some more into wireless mesh networks as well.

  6. Up to which speed do Bluetooth networks really work? This could be a killer.
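To make the store-and-forward idea a bit more concrete, here is a very rough sketch (all names made up, no real Bluetooth or GPRS code - just the queueing and duplicate-suppression logic I have in mind):

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

// Each phone queues outgoing messages and flushes them to whatever neighbours
// are currently in radio range; what cannot be delivered stays queued or
// eventually falls back to the wide-area network.
class RiderNode {
    final String riderId;
    private final Deque<String> outbox = new ArrayDeque<>();
    private final Set<String> seenMessages = new HashSet<>();

    RiderNode(String riderId) { this.riderId = riderId; }

    void send(String message) {
        seenMessages.add(message);             // don't re-deliver our own message to ourselves
        outbox.add(message);
    }

    // Called whenever a neighbour is reachable via Bluetooth.
    void exchangeWith(RiderNode neighbour) {
        for (String msg : outbox) {
            neighbour.receive(msg);
        }
    }

    void receive(String message) {
        if (seenMessages.add(message)) {       // ignore duplicates
            System.out.println(riderId + " got: " + message);
            outbox.add(message);               // store and forward to the next hop
        }
    }

    // Fallback when no neighbour has been seen for too long (GPRS, SMS, ...).
    void flushViaWideArea() {
        while (!outbox.isEmpty()) {
            System.out.println(riderId + " sends via GPRS: " + outbox.poll());
        }
    }
}

The interesting open questions from the list above - scatternet topology, when to fall back to GPRS, power budget - are exactly the parts this sketch leaves out.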

But even if this idea turns into nothing: who could now claim that an excursion with bikes is not a very inspiring event?

Security and system management in heterogeneous environments, more?

This is about the problems of virus control etc. in relatively free and fast-moving organisations like broadcast companies, universities and so on. It is also a result of our bike trip, where we visited a large public broadcast company which was severely hit by the "Sasser" worm at that time. A student of mine had done an excellent thesis on a security issue and during the discussion we learnt that our campus network is not so different from such a company: lots of different systems, and personnel mostly concerned with their creative tasks. This makes a tough approach using standardization and rules very hard.

It also became clear that companies are not prepared for worms like Sasser: the organization of defensive measures during an attack still needs improvement. And system management plays a major role in all these activities. But how do you do distributed system management across heterogeneous platforms? At our university we will start a project in fall - together with a company specializing in virus defense - to investigate security solutions in heterogeneous and open environments, and we will then extend the experiences to our friends in the broadcast business.

Another beautiful thing about this visit during our bike trip was that our students saw how deeply organizational, financial and technical problems are intertwined when it comes to security infrastructure. Many thanks to Mr. Haensch and Reno Pankau for the invitation and their hospitality.

Building automation and security, more?

One of the biggest surprises during our bike trip happened when we visited Sauter AG in Basel, Switzerland. None of my students had expected so much computer science and IT behind building technology. My friend Thomas Krieg showed us current products and developments in building technology (we also have a current student project in this area, see Usability and Building Automation) and we were all impressed by the level of complexity in this area.

But there are also a number of problems: what happens if Windows-based control machines crash due to a virus or worm? Cannot happen? Well, not easily. But what if somebody makes a mistake and connects to the wrong network? And there is market pressure for unified networks (regular data and control combined), remote control etc.

Imagine somebody breaking into the building network of a large skyscraper: elevators, heating and air conditioning, location security, fire control etc. We are talking serious risks here.

Do you remember the discussion about the power outages in the US last summer? There is a strong assumption that a major part of the disaster was caused by Windows PCs used to control the network, which were down due to a virus or worm going around at that time. We clearly lack an operating system that is protected from malicious downloads and other attacks. And I am not sure if the answer really is Linux: the ACL protection mechanism is pretty much the same as in Windows. Perhaps we need something more radical along the lines of EROS/E.

And just think about what a person could learn about an apartment from sensors etc. What if I know that room temperatures in a building or flat are below 15 degrees Celsius? Nobody home - time for a visit with the heavy tools...

Perfect profiles could be created about the living habits of the inhabitants.

Again, a wonderful visit and many thanks to Thomas and Sauter AG.

Opening up smart cards - an open architecture, more?

Our last stop was at UBS Card Systems in Zurich where Rene Müller gave an excellent introduction to the problems and possibilities of intelligent smartcard solutions. The business complexity behind card systems in the financial community is staggering.

But it gets much worse when we talk about multi-function cards where different companies can download data through intelligent card readers. Bruce Schneier did a rather critical security analysis of such cards a while ago and concluded the study with the remark that the coexistence of different software and data makes the cards vulnerable to attacks. But again, how many cards do you want or need? Having many keys etc. on one card would be very nice from a usability point of view.

The topic of financial card systems fit perfectly with what I am currently doing in my internet security course: health card, job card and the security problems related to them. BTW: the health card project in Germany has still not decided on a card reader standard. This e.g. makes it almost impossible for a patient to use an internet pharmacy with her health card. How would the internet pharmacy record the medication on the card to allow reimbursement?

Again, a very interesting visit and many thanks to Rene Müller for his excellent presentation and open discussion.

Mediaspace - a next generation augmented room

I got this very interesting paper from Prof. Jan Borchers at RWTH Aachen. The concept of mediaspace is something that could make a real difference in the way we study e.g. source code in groups. The current room situation at universities does not foster creative, interactive groups.

Interactive rooms for interacting people

It rarely happens that you see a solution for a problem on the very next day. Wednesday evening I was right in the middle of the third session on generative computing when we started taking a close look at the XDoclet source code. We did not really intend to do this, it just seemed to be a good idea at that point in time. Nicolas Schmid, a student in my class, had just finished his talk on XDoclet when I thought that taking a look at the implementation might teach us something about how to parse Java code etc. I also wanted to see how XDoclet uses templates to generate the output. XDoclet, by the way, is a code generator which generates Enterprise Java Beans artifacts from an annotated Java source file. It needs to understand both Java and the JavaDoc syntax used within comment elements to express XDoclet meta-data. The meta-data basically describes EJB options needed for the generation of homes (factories), remote interfaces, relationships etc.
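For readers who have never seen XDoclet meta-data: an annotated source file looks roughly like this (written from memory, so the exact tag names may differ from the current XDoclet release):

// Roughly what an XDoclet-annotated bean looks like: the meta-data lives in
// JavaDoc comments and drives the generation of home/remote interfaces and
// deployment descriptors.

/**
 * A simple session bean.
 *
 * @ejb.bean name="Procurement"
 *           type="Stateless"
 *           jndi-name="ejb/Procurement"
 */
public abstract class ProcurementBean implements javax.ejb.SessionBean {

    /**
     * Business method exposed on the generated remote interface.
     *
     * @ejb.interface-method view-type="remote"
     */
    public String orderItem(String itemId) {
        return "ordered " + itemId;
    }
}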

The source code was quickly downloaded because the university is equipped with lots of wireless access points. A couple of students had their laptops with them as well and started downloading additional components. Quickly we got lost in the source code: we could not find the parser used by XDoclet in its source code. One laptop was attached to the central beamer and was heavily used to browse documentation and class files, with us doing interpretations on the fly. The atmosphere was quite productive and fun but some things were clearly missing:

  1. Only a small number of laptops was available, which limited the number of students doing parallel (re-)search. It took us much longer than necessary to find the solution to the missing parser.

  2. The students found interesting bits and pieces but could not project their findings to the group because only one beamer was available. This makes interactive work really cumbersome and needs to be fixed.

  3. The room's architecture was "classic lecture" style with seats arranged in rows - each row a bit higher than the one before. This may be a good architecture for cinemas - giving everybody a clear view of the movie - but it is clearly big s**t for interactive groups. Nobody can easily get to drawing boards. No additional screens or projectors at the walls etc.

At that moment I noticed that our overall productivity was clearly handicapped by the (lacking) equipment and room architecture. Why did I not take the whole group into one of our lab rooms? I had decided against it: these rooms are equipped with standard PCs (towers) and big screens. Good perhaps for clearly organized exercises, but really bad for experimental, research-type work, for several reasons: the seats are again arranged in rows. Big screens hide the individual group members from each other. No sub-groups can form. No group discussion is possible. The machine noise is rather loud. The arrangement and noise invite students to just browse instead of focussing on the problem. Individual work results cannot be shown easily to the whole group. (Does VNC help here? Need to check on this.)

So what kind of room architecture would fit an interactive research environment? Before I dive into something I saw today during a thesis presentation I have to say something about current university lecture styles. Coming from industry - which is by no means a perfect example of good interactive learning environments - the classical lecture style of teaching looks ancient to me. It is not really productive either, because one can easily read a book if a lecture consists of a one-way stream of knowledge. So I usually create rather interactive lectures where I develop something together with the students - mixed with some more top-down know-how transfer in between. But I have not found a solution for classes which I run in a more seminar-like style, except that we do not do only theoretical work there but would also like to take a look at code etc. And this leads back to the room problem: the way we use our computers currently does not support such a style.

A side effect of this style is that frequently I pick examples which are new to me as well and require an investigative, interactive approach to come to a solution. This style depends heavily on student participation. The downside is that in many cases the students have taken some good notes from our work but I am left with nothing in my hands. Sometimes I think about using our copying whiteboards for this but many times I just forget about this option and have to ask a student later for her notes to make a copy. The new tablet PCs could be a solution for this problem but I also found the interactive effects of a big copying whiteboard quite productive.

Back to the possible solution for interacting groups: today Stefan Werner - my student who created an implementation of an interactive audio space (to become part of an interactive room at RWTH Aachen) - presented his thesis to a couple of students. During his talk he showed a video of an interactive room at Stanford University which hit me like lightning: the room had many large flat displays (actually back-projected planes) at the sides. The students could directly manipulate the contents of the large displays, e.g. sending pieces from one display to another (enlarged). The seats were arranged around an oval table and every student could reach a display easily. A lot of interaction and discussion was going on. The table itself had a large screen built into it.

I do not really believe too much in technical solutions for social things like interacting groups, but in this case I believe I could really see the benefits of such an interactive room.

So how does this relate to other technical approaches, e.g. to support e-learning? I am afraid my style is the absolute opposite of e-learning: it is extremely interactive and depends on group communication, individual participation etc. Technology supporting my style needs to enable LIVE interaction and get the technical obstacles (small screens for individual viewers, noise, isolated machines etc.) out of the way.

Before I forget: we found the parser. It is generated (!) using JavaCC and therefore not included in the source download (;-). Browsing through the JavaCC template used by XDoclet I decided to start my class on generative computing next time with an intro on building scanners and parsers (probably stealing shamelessly from Terence Parr's excellent fragments found at the ANTLR homepage). The PDF files cover building recognizers by hand and basics on languages and parsing.
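And to give a first taste of what "building recognizers by hand" means, here is the kind of tiny recursive-descent recognizer I plan to use as a warm-up example (my own toy code, not taken from Parr's material):

// A tiny hand-written recursive-descent recognizer for expressions like
// "1+2*(3+4)" - a warm-up before moving on to generated parsers a la JavaCC or ANTLR.
public class ExprRecognizer {
    private final String input;
    private int pos = 0;

    ExprRecognizer(String input) { this.input = input.replaceAll("\\s", ""); }

    public static boolean accepts(String s) {
        ExprRecognizer r = new ExprRecognizer(s);
        return r.expr() && r.pos == r.input.length();
    }

    // expr   : term (('+'|'-') term)*
    private boolean expr() {
        if (!term()) return false;
        while (peek('+') || peek('-')) { pos++; if (!term()) return false; }
        return true;
    }

    // term   : factor (('*'|'/') factor)*
    private boolean term() {
        if (!factor()) return false;
        while (peek('*') || peek('/')) { pos++; if (!factor()) return false; }
        return true;
    }

    // factor : NUMBER | '(' expr ')'
    private boolean factor() {
        if (peek('(')) { pos++; if (!expr() || !peek(')')) return false; pos++; return true; }
        int start = pos;
        while (pos < input.length() && Character.isDigit(input.charAt(pos))) pos++;
        return pos > start;
    }

    private boolean peek(char c) { return pos < input.length() && input.charAt(pos) == c; }

    public static void main(String[] args) {
        System.out.println(accepts("1+2*(3+4)"));   // true
        System.out.println(accepts("1+*2"));        // false
    }
}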

Secure Coding (O'Reilly) and more literature

This title on secure coding by Mark Graff and Kenneth R. Van Wyk (two security old-timers) comes from O'Reilly and is written for developers who need an entry-level text on secure programming. Written in a conversational style, the book introduces abstract concepts like security principles (e.g. defense in depth) and goes through a number of security flaws found throughout the years in several areas - mostly Unix-like operating systems. Like most books on security it faces the awareness problem: without security concepts one does not even see security problems. This is especially true for software developers, who have to grapple with lots of other technical problems at the same time. With many real-life examples the authors create awareness for the many faces of security problems. A typical case is e.g. when one developer changes source code after a couple of months and creates a huge security hole because he did not realize certain side effects.

The focus of the book is also more on host-based security issues, and in this case the complete absence of THE platform Windows and its derivatives is a bit of a surprise. The development of Internet Explorer e.g. would have made a wonderful case of feature creep causing an endless stream of security problems. According to an article I read recently (sorry, forgot the URL), most IE security bugs were caused by later additions and plug-ins. The core itself had some bugs as well - like probably every other browser - but became stable after those were fixed.

The commented bibliography at the end of the book is quite useful. For code examples from the book there is a special site at securecoding.

What you should not expect from the book: it does not go really deep into the details of secure programming, e.g. web application security, use of cryptographic protocols, JAAS and Kerberos etc. This is not a problem because it leaves room for the abstract principles, creates an initial understanding of security in software and keeps the book small. But the book also wastes quite a bit of room on VERY generic advice and is sometimes just a bit too conversational (many repetitions of the same argument). And the central use case - SYN flood attacks on TCP/IP - is not really that productive for a software developer: after all it's only a DoS attack. And the basic problems of DoS attacks - especially if they come in their distributed form, DDoS - are not explained the way it is done in the article from GRC.COM where a full-blown DDoS attack is analyzed both technically and with its social background. The basic problem with DDoS attacks is to fight them upstream if possible; once they reach the app it is quite late. And no tricks are mentioned on how to detect e.g. a DoS attack on the servlet level (when an HTTP client continuously sends requests and closes the channel without waiting for responses). So what else could you read afterwards? The following list is of course only a partial list.
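One small aside before the reading list: the servlet-level scenario just mentioned can at least be made visible with a crude request counter. This is my own sketch, not something from the book, and the limits and the 503 response are arbitrary choices - real detection of "request sent, connection closed before the response" needs support from the container or the connector:

import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

// Crude per-client rate limiter: it does not stop a distributed attack (that has
// to be fought upstream), but it makes an application-level flood from a single
// source visible and survivable.
public class CrudeRateLimitFilter implements Filter {
    private static final int MAX_REQUESTS_PER_WINDOW = 100;
    private final Map<String, AtomicInteger> counters = new ConcurrentHashMap<>();
    private volatile long windowStart = System.currentTimeMillis();

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        if (System.currentTimeMillis() - windowStart > 60_000) {   // new one-minute window
            counters.clear();
            windowStart = System.currentTimeMillis();
        }
        String client = req.getRemoteAddr();
        int count = counters.computeIfAbsent(client, k -> new AtomicInteger()).incrementAndGet();
        if (count > MAX_REQUESTS_PER_WINDOW) {
            ((HttpServletResponse) res).sendError(503);             // or just log and alert upstream
            return;
        }
        chain.doFilter(req, res);
    }

    @Override public void init(FilterConfig cfg) { }
    @Override public void destroy() { }
}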

  1. The OMG's security specification has helped me quite a couple of times. It is very clear with respect to terminology and explains complicated things like delegation. See the OMG homepage for this specification and related papers. Especially useful for distributed application security - and which apps aren't distributed nowadays?

  2. Web application security is a hot topic and I found help at the Open Web Application Security Project (OWASP) homepage. They have a nice paper on all the important topics. Very much recommended.

  3. From OWASP you will find pointers to David Endler's homepage with excellent white papers on cross-site scripting attacks and session ID hijacking. Absolutely recommended.

  4. Razvan Peteanu wrote "Best Practices for Secure Development" - probably the best compilation of software frameworks and technologies for secure programming I have seen yet. The white paper is free. You will also find lots of good resources (e.g. how to use Kerberos) and explanations of the major security frameworks (JAAS) and the role they play in development.

  5. Web application infrastructure security is complicated: single sign-on, https, SSL, certificates, role-based access control and many different ways to authenticate people. Internal users, external users etc. Certificates and CAs and so on. I found the WebSphere Security Redbook(s) really very helpful. They explain the whole chain from incoming requests and their authentication, through how directory information is used to authorize requests, and even explain servlet engine and EJB declarative security. For single sign-on etc. get the redbooks on Tivoli security from the same site. Good and free information, not only useful for WebSphere addicts.

  6. From the IBM Systems Journal I took an article from Margarete Hondo on e-business security. Some of the best concept level security I found on this topic.

  7. Tools play a major role and I found some useful proxy-type loggers in the book "Hacking Web Applications" (part of the famous "hacking xxxx" series from WROX press). The site which accompanies the book has a good list of tools.

  8. SAML - the Security Assertion Markup Language from OASIS - is an XML-based standard for communicating security information, but it also discusses the problems with token-based security and attacks on web services. You will also find good information in the documentation from the Liberty Alliance for their single-sign-on technology.

  9. Separating security information from business logic should be an obvious choice nowadays. The Java Authentication and Authorization Service (JAAS) is one possible approach (see the small sketch after this list). Find lots of good explanations and examples for JAAS on Developerworks and Javaworld. I remember a good article on JBoss security which tried to further separate authorization from code. Instance level authorization is a good search term as well.

  10. SSL - countless possibilities for errors here, and still a good approach if channel-based security is enough for your case. "Securing your applications via JSSE" was a nice article which I believe I got from OnJava. It uses a dummy HTTP client (browser) and server to show how you secure communications with JSSE. The problem is mostly in understanding what kind of certificates you need to install in which key-ring.

    If you REALLY need to understand https, SSL, secure messages etc. get "SSL and TLS" by Eric Rescorla. Well written, with SSL code. Shows protocol problems as well. Best thing I ever found on SSL.

  11. Need to write a security document for your application? Eric Rescorla wrote a very good RFC on what information MUST be included with every program. Extremely useful and free.
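And since point 9 promised it, here is a minimal JAAS sketch: the application only talks to a LoginContext, and which login modules actually run comes from an external configuration ("SampleLogin" is a made-up entry name). That is exactly the separation of security plumbing from business logic:

import javax.security.auth.Subject;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.auth.login.LoginContext;
import javax.security.auth.login.LoginException;

public class JaasLoginSketch {
    public static void main(String[] args) {
        // Supplies name and password when a configured login module asks for them.
        CallbackHandler handler = callbacks -> {
            for (Callback cb : callbacks) {
                if (cb instanceof NameCallback) ((NameCallback) cb).setName("alice");
                if (cb instanceof PasswordCallback) ((PasswordCallback) cb).setPassword("secret".toCharArray());
            }
        };
        try {
            LoginContext lc = new LoginContext("SampleLogin", handler);
            lc.login();                       // runs whatever modules the configuration names
            Subject subject = lc.getSubject();
            System.out.println("Authenticated principals: " + subject.getPrincipals());
            lc.logout();
        } catch (LoginException e) {
            System.err.println("Login failed: " + e.getMessage());
        }
    }
}

Swap the login configuration and the same code authenticates against a different backend - no business code changes.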

Security Analysis Methods (under construction - will get more diagrams etc.)

Secure coding leans heavily on advanced programming technologies like frameworks and design patterns on one side, to allow changes in business models to be separated from changes in security infrastructure. But it is also quite closely related to security analysis. I am currently compiling a new lecture on security analysis and it will encompass the following things:

Security Context Diagram

This type of overview diagram comes in two flavors. The external version collects all external interfaces of the system or application in question and tags those connections with protocol type, authentication and authorization information, delegation forms, user input checks etc. The internal version tracks connections and interfaces between internal components of the application (e.g. to databases), configuration information, credential handling and mutual identification of the components. The style of the security context diagram is similar to context diagrams used in architecture methodologies from major companies.

Homologation Questionnaire

Once you have created those diagrams a couple of times you will probably recognize that certain types of architectures always raise the same questions and problems. This is when you should turn your diagram tags and comments, and the architectural types themselves, into questions within a homologation questionnaire: ask questions about the architecture used and, if you want, add the security consequences beforehand. A typical example would be to ask about the tiers used in an application, with a comment that two-tier apps are not allowed to directly access corporate databases. Such a questionnaire is extremely helpful to software developers BEFOREHAND, as I learned when working for a large corporation. Just remember that most developers don't think about security because they have other problems.

Organizational Security Model

Draw a diagram of how much security is already provided through infrastructure. Are there areas where application developers do not find help within the company? Can they use a central CA easily or do they have to invent security infrastructure for each application? These are the points where security problems creep in. Draw another diagram of how security is organized in the respective organization. Is the security department part of development? Do security engineers jointly work with developers on software or infrastructure projects - sharing responsibility for delivery as well as security? Or is security organized as a controlling entity only? Like Quality Assurance used to be - a separate organization which in reality often consists of a bunch of wise guys who provide no real help at all to developers. Nowadays, especially in the context of agile methodologies, QA has become an integrated part of the development process, e.g. through the use of unit tests. This has proven much more effective than later tests through QA specialists.

Trust Model

Create a copy of the security context diagram(s) but remove all technical stuff. Just keep the connections. At every node, add the trust that this node puts into its partner at every connection. Is this trust somehow checked? Make all assumptions about the quality of a partner or connection explicit. Trust is where you will experience security problems.

Mental Model Diagram

What is a browser? Is it seen as a viewing engine or more as an execution engine or interpreter? Most people view browsers as viewers and do not see the real capabilities of those engines. The plug-in mechanism turns a browser into a generic execution engine - and the mime type declarations select the respective sub-execution engines. If a company strictly differentiates between internet and intranet security (organizationally) and the intranet group introduces a new type of plug-in for internal purposes, chances are that the internet security group does not know about this. The result is that the firewalls do not block the corresponding mime type(s) and a huge security hole is created. A security context diagram could catch this as well. I don't know yet how to represent the mental model diagram.

Risk Analysis and Risk Integral

Risk analysis is when you put both probability and consequences into one diagram. A typical example is a first-cut risk analysis where every row contains one type of possible risk. First all probabilities for the given risks are determined, and then - for the same set of risks - the consequences are estimated. The overall risk level is then the combination of both values. A combination of two dimensions with two levels each (low and high) gives a matrix with four sections, with the lower left corner containing the easy stuff: low odds, low effect. The upper right corner contains the critical candidates: high odds, high effect. But watch out for the dreaded risk integral. This concerns mostly the area where severe consequences are paired with low probability. Think about nuclear power plant types of risk: not very likely to blow up but IFF... Or better: a sharp left turn with no way to see oncoming traffic when you want to pass a slower vehicle - but at the same time a low probability that traffic in the opposite direction will be there at all because it is a lonely country road. Some people build a risk integral over both risk dimensions and conclude: pass other cars SOMETIMES. Instead of recognizing that one car coming in the other direction will create a complete catastrophe. This example is taken from the book "The Upper Half of the Motorcycle" by the behavioral scientist and motorcycle trainer Bernd Spiegel (in German).
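A toy version of such a first-cut analysis in code (my own names and classifications, purely illustrative):

// Classify each risk by the two dimensions, and beware of averaging them into a
// single number - the dreaded "risk integral" that makes a rare catastrophe look harmless.
public class FirstCutRiskAnalysis {
    enum Level { LOW, HIGH }

    static String classify(String name, Level probability, Level consequence) {
        if (probability == Level.HIGH && consequence == Level.HIGH) return name + ": critical";
        if (probability == Level.LOW && consequence == Level.LOW) return name + ": ignore";
        if (consequence == Level.HIGH) return name + ": rare but catastrophic - do NOT average this away";
        return name + ": frequent nuisance";
    }

    public static void main(String[] args) {
        System.out.println(classify("buffer overflow in public web app", Level.HIGH, Level.HIGH));
        System.out.println(classify("overtaking before a blind curve", Level.LOW, Level.HIGH));
        System.out.println(classify("typo on the intranet homepage", Level.HIGH, Level.LOW));
    }
}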

Threat Model

A more elaborate version of the risk analysis is the so-called threat model, which extends the table of threats with more information: how difficult is the attack for which attacker? What is the attacker's gain? What is the risk for the attacker of being caught? What are the consequences and probabilities (as before)?

Attack Trees and Forests

A very useful risk analysis instrument are the so-called attack trees (or forests), which use the two elements "goal" (with sub-goals) and "penetration point" to create a lifecycle analysis of a system under investigation. The fact that the method can cover the whole lifecycle is immensely valuable. The diagrams can be detailed to any level necessary. I learned about attack trees in Bruce Schneier's "Secrets and Lies" but it was not until I read an article in the WebSphere magazine (must have been February or March 04) that I started understanding and using them.
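Here is a bare-bones sketch of how an attack tree could be represented and evaluated (my own toy code with made-up goals and cost figures, not Schneier's notation):

import java.util.ArrayList;
import java.util.List;

// Goals with sub-goals, leaves being concrete penetration points; the cheapest
// path to the root tells you where to spend your defensive budget first.
public class AttackTreeSketch {
    static class Node {
        final String goal;
        final int cost;                       // only meaningful for leaf nodes here
        final List<Node> subGoals = new ArrayList<>();
        final boolean anyOf;                  // true = OR node, false = AND node

        Node(String goal, int cost, boolean anyOf) {
            this.goal = goal; this.cost = cost; this.anyOf = anyOf;
        }

        Node add(Node child) { subGoals.add(child); return this; }

        int cheapestAttack() {
            if (subGoals.isEmpty()) return cost;
            int result = anyOf ? Integer.MAX_VALUE : 0;
            for (Node n : subGoals) {
                int c = n.cheapestAttack();
                result = anyOf ? Math.min(result, c) : result + c;
            }
            return result;
        }
    }

    public static void main(String[] args) {
        Node root = new Node("read employee data", 0, true)     // OR node: any sub-goal suffices
            .add(new Node("steal admin password", 100, true))
            .add(new Node("bribe an operator", 50, true))
            .add(new Node("exploit unpatched server", 20, true));
        System.out.println("cheapest attack costs: " + root.cheapestAttack());
    }
}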

Project Management Survival Tips

After talking about the good old days with a friend of mine I realized that we had been mentioning a couple of tips and tricks which helped us (sometimes) to achieve our goals. I am not talking about stuff like "Death March" by Ed Yourdon. That book is THE survival guide for IT projects. Like the good old Kenny Rogers song: know when to hold 'em, know when to fold 'em.

But there are also a lot of simple things which can make life easier. Here I'd like to discuss the following topics:

How to avoid endless review/sign-off cycles
How to turn a vague specification into a structured delivery mechanism
What to say when somebody in a planning session says he or she can't deliver the functionality.
What kind of language to use for technical documentation in large projects
What to say to your bosses boss when he asks "How are things going?"

The safest way to never exit from a review or sign-off cycle is presenting the whole document every time to the person or group which needs to sign it off. Let's say you change and adjust your document after a meeting to comply with the change requests and present it again at the next meeting. The result will be that the reviewers go through the whole document again and find some new things. Big mistake. The solution is to go into a first meeting with your complete document (which was of course sent out earlier to give the reviewers time to read it). In the meeting you collect all critical items. Later at your office you create a table with one row for every critical point. The rows get unique numbers. After going through the items you tag each of them with either "accepted" or "rejected for reason...". After some time you call in a new meeting and the reviewers will receive THE TABLE ONLY. Your document remains unchanged and will not be used in the meeting. All there is to discuss is the list of critical items. Going through the list item by item will make the list of critical things shorter and shorter. At the end of the meeting you create a new list with fewer items and this will be the basis for the next meeting. This way the meetings will finally converge on a signed document.

What if you can't come to an agreement on some topics? The possibilities are as follows: wait till the deadline approaches and the business pressure gets so high that almost everything will be signed off. This will not make you new friends and is a dangerous method as well - don't abuse it. Another and better choice is to invite your upper management to the final meeting. Be well prepared with respect to the critical items, e.g. you should have mentally changed sides by performing the same analysis as your reviewers.

I learned how to deal with vague specifications from my former boss Andrew. When we received an extremely vague spec from a supplier of mission-critical technology we did not enter a long discussion cycle about this paper. Instead, he had me sit down and WRITE THE SPECIFICATION myself. This "spec" was very detailed - following the table pattern from above, with each row having an ID. We sent the spec to our supplier and for a while we were the owners of the document. After some time the supplier had written his own - detailed - spec and we agreed that he'd take ownership over again. If we had started discussing the vague base document we'd probably still be discussing it years later (;-)

What do you say when somebody in a meeting on a new project says he or she can't do x until z? The most important thing to do right then is TO MAKE THEM SAY A DATE. Do not let them go without a commitment. If they don't meet their own deadline it will look extremely bad. If they don't want to commit - escalate. Not being able to name ANY delivery date looks even worse. If you don't do it that way YOU will look bad in the end.

Academic wording has no place in an IT specification. Keep your sentences to statements only. No wondering, guessing etc. Just matter-of-fact statements. Especially in large companies with lots of overhead, an academic-style paper will only create endless meetings and discussions. It's like in the US movies: everything you say can be used against you.

Of course you always try to get into your room at the office without taking a glance into your boss's boss' room. You don't want to answer his or her questions. But let's say you made that mistake or your boss's boss is just coming out of her room anyway. The unavoidable question will be: how are things going? There is only one possible answer: "Couldn't be any better". Your boss's boss will then enter into some short smalltalk on relatively safe topics like last weekend's football game and after a while you will be able to take refuge in your room again. But say anything like "yeah, everything is OK. There is this little issue on YY but nothing big" and you will not get into your room for at least an hour. Your boss's boss will slowly start tearing more and more information from you - which he or she can't really evaluate anyway because you talk tech lingo and she is a top manager. In the end your boss's boss will get the feeling that there really is something wrong, just because she or he doesn't understand your tech lingo. And will talk to your direct boss, who will not be amused because now it's his turn to clean things up again.

So keep your big techie mouth shut and say "C'est la fete" if you absolutely have to be funny, but never anything else. If there REALLY is a problem, your colleagues and your direct boss are the only ones who need to know about it. The problem will be filtered, cleansed, made digestible and finally reported to your boss's boss by your direct boss. That's how it works in large, mid-sized and small companies.

Patterns for performance and extensibility

If, over the years, you have gotten used to leveraging design patterns as an analytical tool for large software packages, start reading with cycle three of this book. Here the authors list the most important patterns used in Eclipse and give the necessary background information on the architecture. The patterns used in Eclipse are surprisingly simple: handle/body, proxy, adapter, visitor. Nothing fancy, and far less reflective than one would expect given the extreme flexibility achieved in Eclipse.

What makes the use of design patterns special in Eclipse is the care that has been taken to achieve performance: the observer pattern delivers high-value events. Virtual proxies prevent loading too many things up front. A manifest allows the workspace to offer services without pre-loading the necessary plug-ins. At runtime, delegates are used when the functions are finally needed. Examples also show how expensive tasks can be batched by putting them between brackets. The same pattern can, e.g., also be used in Struts to add access control at a central location (basically a template/hook variant). As in distributed systems, one notices that many architectural features go beyond language capabilities and need to be enforced through rules. E.g. many Eclipse interfaces have public methods but are nevertheless not intended for subclassing.
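To make the lazy-loading and bracketing ideas a bit more concrete, here is a minimal sketch in plain Java. The names (SpellChecker, LazySpellChecker, runBatched) are made up for illustration - this is not Eclipse's actual API, just the general shape of a virtual proxy combined with a begin/end bracket around batched work.

  // Illustrative sketch only - made-up names, not Eclipse's actual API.
  import java.util.function.Supplier;

  // The service interface that clients program against.
  interface SpellChecker {
      boolean check(String word);
  }

  // The "real" plug-in: expensive to create, so we want to defer loading it.
  class HeavySpellChecker implements SpellChecker {
      HeavySpellChecker() {
          // Imagine dictionary files being parsed here - the costly part.
          System.out.println("loading dictionaries...");
      }
      public boolean check(String word) { return word != null && !word.isEmpty(); }
  }

  // Virtual proxy: can be registered everywhere up front, but the heavy
  // object is only instantiated on the first real call.
  class LazySpellChecker implements SpellChecker {
      private final Supplier<SpellChecker> factory;
      private SpellChecker delegate; // stays null until first use

      LazySpellChecker(Supplier<SpellChecker> factory) { this.factory = factory; }

      public boolean check(String word) {
          if (delegate == null) {
              delegate = factory.get(); // load the expensive part lazily
          }
          return delegate.check(word);
      }
  }

  public class ProxyDemo {
      // "Bracket" pattern: expensive begin/end work wraps a whole batch of
      // operations instead of being paid once per operation.
      static void runBatched(Runnable work) {
          System.out.println("begin batch"); // e.g. suspend change notifications
          try {
              work.run();
          } finally {
              System.out.println("end batch"); // e.g. fire one summary event
          }
      }

      public static void main(String[] args) {
          SpellChecker checker = new LazySpellChecker(HeavySpellChecker::new);
          System.out.println("proxy created - nothing loaded yet");
          runBatched(() -> {
              System.out.println(checker.check("Eclipse")); // first call loads the delegate
              System.out.println(checker.check("plug-in")); // second call reuses it
          });
      }
  }

The point is the same as in Eclipse: the cheap proxy can be advertised everywhere up front, while the expensive part is only paid for when somebody actually uses the service.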

I consider the book an ideal candidate for advanced design pattern classes. The participants should of course already know the basic patterns. What seems a bit questionable to me is the use of JUnit as the guiding example throughout the book. It forces us to learn Eclipse and understand the unit-test approach at the same time. While unit tests are without doubt extremely valuable during product development, it makes a difference whether you are developing a product using JUnit or learning a complex framework and JUnit at the same time.
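For readers who only know JUnit by name: a test case in the classic JUnit style looks roughly like the following sketch. The Calculator class is made up; the point is just that every method whose name starts with "test" is picked up and run by the framework.

  import junit.framework.TestCase;

  // Hypothetical class under test - just here to have something to exercise.
  class Calculator {
      int add(int a, int b) { return a + b; }
  }

  // Classic JUnit-style test case: the framework runs every public method
  // whose name starts with "test" and reports each one as passed or failed.
  public class CalculatorTest extends TestCase {

      public void testAddPositiveNumbers() {
          Calculator calc = new Calculator();
          assertEquals(5, calc.add(2, 3));
      }

      public void testAddWithZero() {
          Calculator calc = new Calculator();
          assertEquals(7, calc.add(7, 0));
      }
  }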

Finally I can say that after going through most of the examples I now have a much better understanding of the way an IDE works. For my generative computing class I've also downloaded the Eclipse Modeling Framework documentation and will go through it next.

Bernard Clark (IBM), Methodology in Architecture Development

The architecture and management of large projects is something that is missing from most computer science curricula. The reasons are easily explained: many research-oriented professors never participated in real-life large-scale projects, and the consulting know-how needed in those projects is valuable property of companies like IBM. Against this background I am even more proud to announce that my friend Bernard Clark - a senior consultant with IBM - will conduct a series of half-day workshops on the methodology behind architecture development. The know-how he will present has been acquired by him and many colleagues working in the services and consulting areas at IBM, and it is kept current because it reflects daily practice. Students will have a chance to learn what it means to design the architecture of a large system. They will learn the importance of "soft factors" like architecture documents, project definitions etc., and they will have to prepare artifacts themselves.

We will choose a project from the financial sector as a guinea pig and create the central architecture documents.

A business model for online music

In previous newsletters Clay Shirky has already shown that the music industry, especially the RIAA, will just drive private users to darknets - cryptographically closed small networks of private persons where music and video swapping happens. He also gave social reasons why the RIAA will fail in its attempt to enforce the "material" characteristics that media had in former times: books, records, CDs, DVDs etc. Nowadays everything is "being digital" and the old copy protection mechanisms don't work anymore.

In his latest newsletter he shows that the RIAA is basically repeating the mistakes that were made during the American Prohibition era, when millions of US citizens were criminalized and techniques were developed by citizens and the mob to avoid detection. He says the same is now happening in the peer-to-peer world, and encryption is developing rapidly. Most users never had a use for encryption - some privacy fanatics excluded. But now - after being criminalized - the users WILL use encryption. Sounds convincing, and it is just more bad news for the music industry.

But let's just for a moment forget about RIAA and so on and focus on something I'd like to have as a service from the music industry.

Who am I? I mean as a potential customer and music listener (;-). I'm not rich, and because of that I have much more money than time - I have to work, after all. So with respect to media my limiting resource clearly is time, and also opportunity (the chance to listen WHEN I have time, wherever I am).

More and more media are competing fiercely for my time because that is something I cannot increase and I guess media time is pretty much fixed for most people nowadays.

A consequence of this is that I rarely listen to music anymore. A couple of minutes while driving the car every week - that's it. I don't buy CDs anymore - I don't have the time to buy them, and if I have them at home I don't listen to them because, again, I have no time.

So I was basically a lost case for the music industry until some time ago, when a colleague of mine talked about an old Deep Purple song (child in time) and how he had spent the weekend trying to reproduce the piano solo. I could not remember how this solo went but somehow really wanted to hear that song. I sat down and for the first time used a file sharing tool to download the song, and a couple of minutes later I was listening to "child in time" and afterwards continued the discussion with my friend.

But the immediacy of the whole thing made a deep impression on me. I remembered that I used to like music and that I had missed it for many years - simply because of lack of time and opportunity.

A couple of weeks later I had learned some more things about this new way of getting music. First: downloading is tedious because there are a lot of fakes out there, or badly ripped pieces which are just a waste of time. So this was a negative experience. Second: the internet is like an endless jukebox, and I loved to browse and listen to songs I did not know - or had on vinyl records at home, or somewhere on CD, but would never be able to find exactly when I wanted to listen to that piece. So this was again quite remarkable. And third: downloading sucks, even if the song is not a fake. What if I don't like it? Why download a whole album just to throw it away after listening to a bit of it? Why can't I browse quickly through the music on the internet? You could say there is a preview of files currently downloading, but that's not it. The only thing that comes close is streaming media.

Things started to get clear for me. But before I tell you my dream I have to add number four: I don't want an archive. I have a vinyl-based and a CD-based archive at home and I hate them both. Too tedious. No new stuff.

But aren't mp3 players with 20 Gigs a wonderful thing? The wonderful iPod?

OK, now it is time to wrap it up: mp3 archives are only great because we are not connected. In a couple of years we WILL BE PERMANENTLY CONNECTED. A smartphone-like device will be my mp3 player. The songs will come from a music subscription service where I pay a monthly fee. The music will be streamed, not downloaded. The smartphone will receive the songs via bluetooth and play them through my bluetooth headset. The search interface of my service will be excellent; I can browse the whole internet for information on songs and my service will be able to stream them to me instantaneously. I don't have an mp3 archive and I still don't need one. Archives suck because I have to maintain them, and as we know by now I don't have the time... I may have some playlists stored either in the smartphone or at the service. And I will be able to listen to music and radio wherever and whenever I want. We have a family account with the service so that my kids can do the same.

And sorry Clay, encryption will not be used to hide music swapping because nobody would be interested in doing so anymore (;-).

O'Reilly Emerging Technology Conference

If you have a chance to go I'd recommend the talk on mobile peer-to-peer mesh networks (for 3rd world countries but certainly not exclusively for them). It is part of the untethered track. The session on digital democracy is also a hot one, especially if combined with the e-voting discussions currently happening in the US. The collaborative technologies workshop will probably cover many things that will become reality over the next years - I hope they thought about the social acceptance issues behind the new collaboration tools as well. We don't need another disaster like with the current crop of workflow tools.

It pleases me that the conference puts a lot of emphasis on the social-technological aspects of emerging technologies like peer-to-peer. I think one cannot understand the impact of peer-to-peer systems and mobile communications without understanding their social side as well. Is this one of the reasons that the RIAA has such a hard time coming up with a business model for BITS of music?

And last but not least: the session on how to program the new Microsoft smartphone could prove very interesting. The abstract said something about realizing the limits of small devices.

"Security Experts" - where do they hatch?

Perhaps you are unemployed. You lost your programming job to some guy in India or Slovenia and you are now looking for new opportunities. Perhaps you've seen what happened in Hamburg lately: police in large numbers stationed around a military hospital you've never heard of. A whole city gone crazy with fear of a terrorist attack against the hospital. Ominous warnings from the CIA (or was it the FBI?) were mentioned. And when you think about what happened you see the term "security expert" popping up again and again in newspapers and magazines and very prominently on all TV stations.

It is the security experts who put out the warnings about possible terrorist attacks. They explain the threats in detail and are always shown in this special way - you know, like in the mysteries: no titles, no information on current employment, education etc. Not looking directly into the camera (;-). Just the label "security expert". Now, doesn't this look like a real opportunity for you to end your unemployment? Become a security expert.

Surely you have done some time in the military - this counts as "former security officer at the military", doesn't it? You don't have to tell anybody that you were just the usual private who got drunk every night at the canteen - because nobody will ask you anything concrete about your qualifications or where you got the terrible threats you are informing us about. In times like these the good old FUD (fear, uncertainty and disinformation) principle works like a charm with the media. If only you could figure out how to make real money with your newfound talent. Perhaps Hamburg needs another "security expert"?

So the next time you see a "security expert" on TV, ask yourself the following questions:

Who IS that person? (professional career etc.)
Where is this person employed?
Who pays for what this person says?
Who benefits from what this person says? (e.g. who gets more funding for equipment, personnel)
How could this person have learned the information he or she is telling us?
What makes somebody a "security expert", and could this be a former bum who usually sleeps under some bridge when not acting in security matters?
When a "security expert" announces that the state is in [red, orange, green etc.] alarm state - ask yourself what good this "information" really does. Or does it only raise FUD in the general population?

A frequent flyer program for terrorists

Just in case you thought that security madness is solely an American phenomenon - think again after reading the story about iris scanning at the Frankfurt airport (c't magazine, January 04). There it says that the airport will install iris scanning technology and some frequent flyers will get a chance to register themselves for a scan. After they are found trustworthy and have their iris scan in the database, they will be considered privileged flyers and can board planes without further checks.

Sounds reasonable - no doubt. Until you realize that the terrorists of 11 September could have passed the test without problems if they had been frequent flyers. And then you realize that knowing their identity through an iris scan would have done you no good at all: the identities were all true and good. And then you finally realize that preventing this kind of terrorism has nothing to do with identity checking at all, because the terrorists were so-called sleepers and their records were clean. States can use identity checks as a security measure against people who a) hide their identity because they are known to be doing bad things - they have a criminal record - and b) fear getting caught and punished. Neither precondition holds in this case: the identity of the terrorists WAS KNOWN. And they did not fear state measures because they were ready to die anyway.

The logical conclusion is that security measures like the one in Frankfurt only target regular people (some good, some with a criminal record) but they won't help a bit against true terrorists ready to die.

It has been said that state organizations tend to act proactively in such cases simply to avoid being accused of doing nothing to protect the good citizens. This is certainly the case with many security measures that have been installed after 11 September, some of them being really ridiculous as the regular reader of the WIRED newsletter knows.

But in this case what Frankfurt Airport is doing is much worse than just an annoyance: it damages overall security in an extreme way. As Bruce Schneier explains in his Crypto-Gram newsletters, this measure separates the population into three groups: trusted and good, untrusted, and trusted and bad. The untrusted are checked, e.g. for weapons. The trusted and good are not checked, but this is no problem as they are not up to something evil. But the trusted and bad are not checked either - and that causes the real damage. And no identity check will ever separate the trusted and bad ones from the trusted and good ones. Thanks Bruce, I guess I finally got the message...

So if Mr. Bin Laden comes to you and asks how to get his terrorists safely on board, you know what you have to tell him: get your people into a frequent flyer program and then, with a blink of an eye...

But I haven't told you everything yet: when the government requested a certified technique for checking the "liveness" of an iris scan, no company was able to put something on the table. Some claimed they would not publish their "trade secrets" because their technology is SOOOOO secret and good. Does this sound familiar? Go to Schneier's site and search for the hilarious story of how the Japanese mathematician tricked each and every fingerprint system last year - with a 10 dollar budget and some gelatine. And the government's reaction also sounds familiar: they withdrew the requirement for certified liveness checking. It's like when baby food can't meet a certain low limit for heavy metals - just raise the legal limits.

Security that isn't - when so-called security makes it worse for you

A family member loses a key to the main entrance door. No big deal - a Mister Minute shop can't be far and you will be helped. But your wife didn't buy a regular lock with regular keys back then. No, it is from WILKA - a well-known maker of locks. And with it came a small plastic card with a number imprinted on it. At the Mister Minute shop they tell you that you have to go to a special lock service, and they will have to order the key from WILKA. It will take around 14 days and a whopping 25 euros per key including shipping. OK, it is a rip-off, but it gets worse.

You go to the lock service, equipped with the WILKA plastic card and the number on it. The guy behind the counter takes the card and pulls out one of those ancient imprint machines that were used to make copies of your Visa or MasterCard. He puts your card and a paper slip into the device and moves the upper part over the lower part to make a copy. Darn it - did not work! He throws the broken paper slip into a trash can (note: now the shop has a copy of your number on an only slightly damaged slip and could send it later to WILKA to order more keys for your lock). He repeats the procedure and this time it works.

So far your security is not compromised, because the shop does not know yet who you are - or better: where you (and your lock) live. Now the guy asks you for your name and phone number - don't you want to get notified when the keys have arrived at the shop? Most people will hand over this information, and now the shop knows where the keys will fit.

Your security, on the other hand, has now gone down the drain completely: if the shop owner decides to abuse the information and makes additional keys (e.g. by creating a copy when the ordered keys come back from WILKA), somebody will be able to break into your house without damaging your lock the least bit.

So let's have a look at WILKA's original argument: these locks are so much more expensive because they are much safer - nobody can make copies of the original keys, only WILKA can. But the way they use the authorizing plastic card completely compromises this. Two lock services which I had asked about the quality of the lock itself confirmed that the WILKA copy-protected lock is no safer against break-ins than regular locks. But at least a burglar would have to break the cheap lock, while a corrupt lock service can break into your house without a trace using the WILKA system.

I like this example because it shows nicely what I call "security for dummies". Encoded car keys are in the same league: stuff that does not really present a problem for the professional crook but which costs you a lot of money and sometimes even makes things less secure than without this "security for dummies".