What's New

Intelligent Web Day at HDM

Do we need to destroy Google? Is Google getting too dangerous? And why?

Questions like these show an increasing awareness of the semantic power behind the web. The web can be used to predict share prices, the development of companies and topics, and more. But how do you tap into this ocean of information?

Our first intelligent web day tries to give some answers to these questions.

The use of SPARQL & Ontologies will be explained and demonstrated by Stefan Göhring, Fraunhofer IAO. Learn how to organize and retrieve knowledge automatically and perhaps even use automated reasoners to draw conclusions. Innovative interaction design and how to use modern AJAX technologies to really improve GUIs will be demonstrated by Andreas Selter, User Interface Design GmbH. Many thanks to our friend Michael Burmester for getting us in touch with Mr. Selter.
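For readers new to the topic, the idea behind triple-based knowledge and SPARQL-style queries can be sketched in a few lines of plain Python. All data, names and the toy reasoner below are invented for illustration; real systems use RDF stores and full SPARQL engines like the ones Stefan Göhring will demonstrate.

```python
# Toy illustration of triples, pattern queries and a tiny reasoner.
# All data and function names here are invented for this sketch;
# real systems use RDF stores and SPARQL engines.

triples = [
    ("hdm", "locatedIn", "stuttgart"),
    ("stuttgart", "locatedIn", "germany"),
    ("hdm", "type", "university"),
]

def query(s=None, p=None, o=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

def infer_locations():
    """Derive transitive 'locatedIn' facts, a minimal form of reasoning."""
    derived = set(triples)
    changed = True
    while changed:
        changed = False
        for (a, p1, b) in list(derived):
            for (c, p2, d) in list(derived):
                if p1 == p2 == "locatedIn" and b == c:
                    fact = (a, "locatedIn", d)
                    if fact not in derived:
                        derived.add(fact)
                        changed = True
    return derived

print(("hdm", "locatedIn", "germany") in infer_locations())  # True
```

The derived fact was never stored explicitly; an automated reasoner draws exactly this kind of conclusion from an ontology.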

The afternoon will show you ways to dramatically improve the usability of pages through semantic net and search technologies. Annotations with dynamically added behavior can do wonders for your GUI. Michael Junginger and Andreas Prokoph of IBM will explain how it works.

And finally, Sebastian Blom, Uni Karlsruhe, will show the extraction of semantic data from the web and Wikipedia.

Note

9.00, Friday 14.12.2007 at HDM Stuttgart, Nobelstrasse 10. See HDM homepage for detailed information. Live stream and chat are available.

Security in Online Worlds

Found on the capability mailing list: Martin Scheffler, "Object-Capability Security in Virtual Environments".

This excellent thesis uses object capabilities to solve security problems in virtual worlds like non-centralized access control, user-controlled access rights, anonymous access and context dependent access rights (distance, time, properties). It also provides a perfect introduction to modern capability theory.

On top of this the author wrote a prototypical client-server virtual environment called EMonkey. It uses the secure language E and the JMonkey Java game engine (sources for JMonkey are available). Definitely somebody for one of our next games or security days. And don't forget that Anja Beyer will give a talk on security in virtual worlds on our games day on 7.12.2007.

4th Gamesday at HDM - Designing virtual worlds

This gamesday focuses clearly on design issues and game content. It is not as technical as the previous days sometimes were. Rene Schneider will start with an introduction to game patterns, i.e. how to design game elements in a way that makes games both understandable AND enjoyable. These patterns are non-technical and geared towards the discursive structure of games. Stefan Baier will explain modern game design from the perspective of the owner of a game content company: Streamline Studios in Amsterdam. He knows the industry and its problems really well.

Security will play an ever larger role in virtual worlds. Virtual goods are traded in large quantities, and some worlds like Second Life have a decidedly business-oriented plan. Anja Beyer of the TU Ilmenau specializes in security in online games, and we are looking forward to meeting a colleague from the security field.

And this gamesday is also very much oriented towards beginners. The next three sessions are all perfectly suited for students from other areas or simply people interested in how games work. Armin Bauer from the University of Stuttgart will explain the basics of computer graphics for games. During the lunch break several live game demonstrations will be presented: Crysis, Unreal Tournament 2004, Nintendo Wii, Die Sims 2, World of Warcraft, FIFA 2007, Need for Speed Carbon, Portal. And after lunch Jörg Scheurich of the Linux User Group Stuttgart will give an introduction to 3D design, again oriented towards beginners.

The gamesday comes right after the two-day symposium on media ethics in computer games and virtual worlds, and it is only fitting to have a discussion on game ethics on our day as well. Rupert Jung of HDM will give an interesting talk on this topic.

And last but not least, one of the fathers of gaming and game development at HDM - Thomas Fuchsmann, well known for his work on "Die Stadt NOAH" - will give a talk on game balancing and game play. Learn what makes a good game, what kind of tools are needed and how the process works from somebody who has been there and done that.

Note

9.00, Friday 7.12.2007 at HDM Stuttgart, Nobelstrasse 10. See HDM homepage for detailed information. Live stream and chat are available.

Developer Day - why agile people need agile methods and agile languages

To me the most important result of our developer day was recognizing the need for a new lecture. It needs to cover build and test theories and procedures and their connection to architecture. Being agile means doing test-driven development that - depending on the size of the project - will require different procedures. There is no doubt that continuous integration speeds up the detection of bugs. But what if your product takes eight hours to build? Techniques for modular and incremental builds are needed then. Isabell Schwertle and Ralf Schmauder - CS&M alumni now working at the IBM development and research lab in Böblingen - did an excellent job explaining their procedures in this area.

Testing was and is a hot topic in software development. Surprisingly, its connection to architecture is not fully seen or appreciated everywhere. Software architecture itself can make testing hard or easy, e.g. by supplying necessary meta-data for test procedures as part of the development. Incremental testing, starting with unit tests, is clearly the right way to go. A few theoretical thoughts on complexity, computability etc. should make it clear that only rather small modules can be tested completely and that we should use this opportunity in any case.
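To make the unit-test level concrete, here is a minimal sketch using Python's built-in unittest framework. The function and its tests are hypothetical, invented purely for illustration; the point is that a module this small can be tested almost exhaustively.

```python
# A hypothetical sketch of testing at the unit level with Python's
# built-in unittest framework. Function and test names are invented.
import unittest

def parse_version(text):
    """Parse a 'major.minor' string into a tuple of ints."""
    major, minor = text.split(".")
    return int(major), int(minor)

class ParseVersionTest(unittest.TestCase):
    def test_parses_simple_version(self):
        self.assertEqual(parse_version("1.2"), (1, 2))

    def test_rejects_garbage(self):
        # Small modules like this one can be tested almost completely.
        with self.assertRaises(ValueError):
            parse_version("not-a-version")

# Run with: python -m unittest <module name>
```

In continuous integration such suites run on every commit, so a broken build is detected within minutes instead of at final-testing time.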

Build and test require your most experienced developers. Organizational separations into different teams are sometimes needed due to the size of projects but in general are a problem for the quality of a product.

People and quality - some core topics from the agile manifesto. Markus Leutner from T-Systems is a long-time user and evangelist of agile methods - especially extreme programming (XP) in large projects. At first glance, project management and agile methods seem to be contradictory. Agile methods focus on people and quality, project management traditionally on budget and time.

Leutner explained that it makes sense to split a large project into two different levels: the lower level applying agile methods and the higher level doing overall planning and cost control. He pointed out several areas which can cause conflicts when methods from the agile manifesto collide with traditional project planning: time and human resource issues leading the field here.

Agile methods really are a totally different way to attack software projects. The focus is very much on the individual developer. They are very productive but can cause fear among traditional managers due to their more coarse-grained time planning. And they are not for every development group, nor for every customer. The team and the customer need to establish a relationship of trust - something that is not possible with everybody.

After years of waterfall methodologies and project planning myths it is very refreshing to see agile methods in action. Finally we recognize that in the end, software needs to be built. Operating systems do not execute Word documents yet.

Gaylord Aulke from Zend is a well known PHP evangelist. He demonstrated the agility of PHP, especially with release 5 and in connection with the Zend framework. PHP has left the small web-project niche and entered mainstream business projects of a much larger scale. It fits well with agile methods: agile people need agile tools. PHP now offers things we know from J2EE or .NET, like relational mapping and security frameworks, and Gaylord Aulke had no problem admitting that Ruby on Rails was a major driver for some of the new additions.

He also explained areas of PHP that a developer coming from a C++ or Java background will inevitably get wrong, due to the fact that PHP is an interpreted language.

The developer day ended with Dierk König from Canoo in Basle - author of the Groovy in Action book - demonstrating more agile language features. Groovy is special with respect to Java: it supports static typing but allows much more dynamic mechanisms as well. Like most users of agile, dynamic languages he did live programming during the talk to demonstrate how easily programs can be built with a dynamic language. Ruby on Rails was also a driver for Groovy, but its tight connection with Java makes it a perfect fit for Java fans.

During the day I had some rather philosophical insights and ideas. When e.g. did we start to fear code? Agile people seem to like code, not fear it. I believe that the answer to this question is tied to another question: when did programming stop being fun?

I do believe that e.g. the start of the shrink-wrapped software world - driven by Bill Gates and Microsoft - was also the moment where code fears were born. Especially among managers who believed in the binary, componentized world of resources. That is quite easy to understand: management gets much easier with fixed-size components. Code requires understanding, and that in turn requires people. And people are not as easy to handle as resources.

And with the birth of C++ the fun was gone. The resulting tools and environments were targeted at large companies with many people - not necessarily brilliant coders like Kent Beck or Erich Gamma. Large companies started to make software and tools for large companies: bug-ridden bloatware. Java looked promising for a little while - until we drowned it in J2EE and enterprise environments and never got around to changing it into a more dynamic language.

Generative computing seemed to rescue us from the disaster. But we had to realize that while generative computing is surely a good thing in certain cases, it has a steep learning curve of its own. And it is actually abused when only used as the glue to make ugly environments work. The rift between XML and code in .NET and Java should not be healed with generative technologies. The real question is: why do we need so much configuration stuff in XML? Is it because our static languages do not allow us to survive without pulling things out of the code? And what happens then? Who validates the XML against the code?

But things are changing: Ruby, Groovy and others are forcing us to think again about the "last programming language ever", as C++ was once called. Agile people need intelligent, dynamic languages that use higher-level abstractions.

That leaves us with the last philosophical question of the day - one I forgot to ask Dierk and Gaylord: what is there in Groovy, Ruby or PHP that Smalltalk did not have? I suspect we are now coming full circle, back to language features that we already had long ago. And it is a sign that computer languages are much more than just a technical thing: they are deeply social, and sometimes our reactions towards them are less than rational or logical. But the approach of good old Smalltalk seems to be alive and kicking in Groovy and Ruby: programming is fun. Even children should be able to do it. If a program is ugly and works badly, it is probably not the user's fault but a result of bad language design.

Developer Day November 2007 - Agile Development

To the outsider it may seem as if, with the start of the new millennium, the development community has been split into two hostile parties: aficionados of project management, the Rational Unified Process or the V-Model on one side - agile developers following extreme programming or Scrum guidelines on the other. And now the rift seems to have reached even the languages: C++, Java, C# and J2EE/.NET on the traditional side, Ruby, Python, Groovy etc. on the agile side.

But this impression is wrong. There was ALWAYS the difference between those who know how to talk and plan and those who know how to build software and systems. Between those who pray to the god of processes and talk about resources - and those who know how to DO things with agile and quick tools and languages. Between those who believe in final testing and those who know that only modular testing at the unit level will improve performance. Between those who believe in tools and those who develop tools to help their team in an agile development.

In short: the difference between those groups has been described as the difference between "packers" and "mappers". Packers talk formal and process bullshit. Mappers build. The difference has always been there, and people in the know like Fred Brooks, Tom DeMarco, Richard Gabriel and Alistair Cockburn have expressed it in excellent books and papers.

What causes the wrong impression for the outsider is the fact that the agile group has been getting MUCH STRONGER lately. People are starting to understand what test-driven development, continuous integration etc. really mean: that these are development practices geared toward people - not resources - and that they really help developers achieve better quality.

The new languages support this as well. They are light-weight, dynamic (yes, runtime type checking) and sometimes interpreted. They offer higher abstractions, but most importantly: they are fun to use. Ever heard a .NET or J2EE developer mention the word "fun" during their work?

It is time to take a closer look at all these developments. What does a young developer need to know about professional development? Isabel Schwertle and Ralf Schmauder - both MI alumni and now with IBM - will demonstrate and explain the basics of successful software development.

Is it possible to use agile and extreme programming methods even in large projects? Markus Leutner and Gaylord Aulke are going to prove that this is really the case.

What is behind all the hype on SOA and Business Process Management? Tom Wolters of IBM will explain it to you.

And what is the magic behind Groovy? Dierk König - well-known author of the Groovy book and member of Canoo in Basle - will give an introduction and answer your questions.

Both beginners and experts will be able to learn something during the event and I am sure some lively discussions will be held. Even non-developers can get an impression of where we are currently with the art of developing software.

Note

Friday 23.11.2007 at Hochschule der Medien, Nobelstrasse 10, Stuttgart. See www.hdm-stuttgart.de for more information. As always the event is free of charge. A live stream and chat service are provided.

What makes games tick? Thomas Fuchsmann on Game Play and Game Balancing

Hot off the press: Thomas Fuchsmann's thesis on game play and game balancing. It is not an easy thing for a software developer to tackle a "soft" topic like the question of how games become "playable" - and on top of that, to target not just one computer game genre but give an overview of almost every genre that exists in computer games. Thomas Fuchsmann has done exactly that, and he has topped it off with the development of a game play editor for "Die Stadt Noah" - our own large-scale game project. He was able to show the difficulty in large-scale role-playing games of letting users influence the game on the one hand while still keeping the game consistent on the other. No character is allowed to become too strong, chance is important but only in very limited quantities, etc.
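The two balancing constraints just mentioned - limited luck and no character becoming too strong - can be sketched in a few lines of Python. The numbers and names below are purely illustrative and not taken from the thesis:

```python
import random

# Illustrative sketch, not taken from the thesis: chance is limited to a
# narrow band around a base value, and strength gains are capped.

def damage_roll(base, spread=0.1, rng=random):
    """Return base damage varied by at most +/- spread (10% by default)."""
    return base * (1.0 + rng.uniform(-spread, spread))

def apply_level_up(strength, gain, cap=100):
    """Raise strength, but never beyond the cap - nobody gets too strong."""
    return min(strength + gain, cap)
```

A game play editor then exposes parameters like `spread` and `cap` to the designer instead of burying them in code.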

Readers will get an understanding of game design and what makes good games. The focus is mostly on adventure games, but ego shooters, MMORPGs, strategy games etc. are all covered as well. Our next games day on the 7th of December will focus on game design, game patterns etc., and game play and balancing will be important topics there.

The thesis will be available for download on this site shortly.

Web Application Firewalls (WAFs) - possibilities and limitations

Sebastian Roth has now finished his thesis on WAF technologies, including an evaluation of current products. Thanks to Mr. Haecker of Thinking Objects, a company which specializes in security and consulting, it is generally available for interested parties and can be downloaded here. The thesis contains three parts. The first is a very thorough investigation of attack vectors and gives a deep understanding of web and internet related technologies and their weak spots. The second part investigates how WAF technology works, what it can do and where its limits presumably lie. A testbed and test cases are defined for the final part of the thesis: a test of several products, both commercial and open source.

The results are quite encouraging. But there is one caveat that Sebastian Roth put a lot of emphasis on: don't use a WAF unless you are ready to go through involved configuration and integration work with your current applications and infrastructure. It is not something that works out of the box.

True, some products understand and learn from the traffic they see. They really are pattern matching engines for http and html, and soon also for xml/soap protocols. They are even stateful and remember fields in html forms that went out. If e.g. a "hidden" field changed during transport, the WAF will block the reception of the form. The same goes for cookie tampering. But again, some involved configuration is needed in most cases.
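The stateful hidden-field check just described can be sketched like this. The names and data are illustrative only; real WAFs implement this inside their http parsing engines:

```python
# Sketch of the stateful form-field check described above. The WAF keeps
# the hidden fields it saw in outgoing forms per session and rejects the
# form if any of them come back changed. All names are illustrative.

outgoing_forms = {}  # session id -> {hidden field name: value sent out}

def record_outgoing_form(session_id, hidden_fields):
    outgoing_forms[session_id] = dict(hidden_fields)

def check_incoming_form(session_id, submitted_fields):
    """True only if every remembered hidden field came back unchanged."""
    expected = outgoing_forms.get(session_id, {})
    return all(submitted_fields.get(name) == value
               for name, value in expected.items())

record_outgoing_form("s1", {"price": "9.99"})
print(check_incoming_form("s1", {"price": "9.99"}))  # True
print(check_incoming_form("s1", {"price": "0.01"}))  # False: tampered
```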

What are the advantages? To me the core advantage is not just defense in depth within firewall zones. The biggest advantage is that a WAF will buy you time in case of a vulnerability in an important business service. In such a case you cannot simply block the service - it is vital for your business. But a WAF allows fine-grained stateful filtering without requiring immediate code changes in your application. This gives your developers a chance to come up with a solution for the vulnerabilities that might last longer than just a couple of hours.

But as always there are some weak points. Dynamic technologies like AJAX/JSON and JavaScript in general are bad for filtering, as everybody who has ever tried to debug or test AJAX applications has probably noticed. WAFs prefer declarative protocols like http and html/xml, not dynamic constructions. And some "interesting" programming techniques, like constructing URLs dynamically on the client side, might run into problems: the WAF learns all the URLs within a page that went to a client and - if configured to do so - blocks all URLs that come from this client but were not embedded in the page that went out. So watch out for interference between a company WAF and your application.
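The URL-learning behaviour just described can be sketched as follows; all names are illustrative only:

```python
# Sketch of the URL-learning behaviour described above: the WAF records
# which links were embedded in pages it served to a client and, when
# configured strictly, blocks any URL it never handed out.

served_urls = {}  # session id -> set of URLs embedded in served pages

def learn_page(session_id, embedded_urls):
    served_urls.setdefault(session_id, set()).update(embedded_urls)

def is_request_allowed(session_id, url, strict=True):
    if not strict:
        return True
    return url in served_urls.get(session_id, set())

learn_page("s1", {"/home", "/shop?item=1"})
print(is_request_allowed("s1", "/shop?item=1"))  # True
print(is_request_allowed("s1", "/admin"))        # False: never served
```

A client-side script that assembles a URL like "/shop?item=2" on the fly would be blocked in strict mode - exactly the AJAX interference mentioned above.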

If you are planning to do some penetration tests you might want to take a look at the test approach and test tools described in the thesis. They will help you get reproducible results.

The future of computer science and its curriculum

Last week I spent two days attending the IBM Academic Days at the development lab in Böblingen. A group of professors, together with specialists from IBM, tried to take a look at the future of computer science. I won't go into technical details and will instead look at the bigger picture here. To me, the main undercurrent behind most presentations was that computer science is hitting some severe barriers right now. At the very moment our technology mingles with society in intricate ways, we have to admit that we do not really know why and how. We are in the cargo area, not in the driver's seat.

Two examples. The first one is about knowledge management. Knowledge is an important production factor, as most people seem to acknowledge (notwithstanding the development towards "zero-brain management" - "Erfolg durch Null-Hirn" according to Gunther Dueck). So knowledge as a "resource" (like canned goods) seems logical. But knowledge is NOT a resource that can be externalized. A book does NOT contain knowledge. It is the interaction of humans with media that CAN create knowledge - if we are able to understand the author, if we want to learn something new etc.

Of course it is nice to learn that socializing is the most important and effective way to spread knowledge across people. This is good for our consultants, who can now tell the boss of a company that his people need to "socialize" more to make certain processes work better. But where is the "socialize" button in our technology?

The second example is from social software. We know that wikis, blogs etc. can be effective. We understand the technology (it is actually quite easy stuff). So we should be able to predict where social software will be accepted and which groups, departments and companies will have problems embracing it. On top of that, we should be able to explain exactly WHY a group does not accept a wiki as a means to improve communication. The answers lie in the communication and authority structures within the group - and perhaps between the group and the rest of the company - and in the specific ways social software changes, enables or modifies those structures and affects the people behind them.

The reality of computer science with respect to the new challenges is quite sobering and consists of several response types.

  1. There is nothing new for computer science to deal with. Business as usual. We build it and they use it.

  2. The soft factors are important. Let's re-invent important concepts from philosophy, sociology and psychology which have long been dumped, and embarrass ourselves using them.

  3. I would have to explain the interdependencies between social and technical structures. Unfortunately I am only a specialist for technical things. Therefore I will have to be content with crude observations instead of detailed explanations and predictions.

  4. I am a computer science person, but I understand that this is not enough to explain the future use of technology in society. I will work together with specialists from sociology, philosophy, psychology and the arts to understand the interdependencies of these systems. Perhaps it will be possible to use concepts from cybernetics as a common framework for explanations. I think that a relatively broad education covering many different areas is a valuable asset here. Employees should understand both technology and content/media and not restrict themselves unnecessarily.

I guess it is clear that personally I consider only the last response type sufficient. But is it a successful approach? Let's take a look at what the industry says about our students. The following list consists of recent statements and observations about our students and their success in the industry.

  1. Our students show success in small as well as in global companies.

  2. Our students do not need a lot of hand-holding and mentoring during their thesis in companies.

  3. Our students start quickly on a new project and are able to show a first prototype after a rather short time. This and the next topic are mentioned frequently by companies.

  4. Our students get a rather broad education in computer science, and most companies mention this as a very important success factor.

  5. Despite the broad education our students get into very special areas like z/OS or VSE development, and sometimes even down into firmware and embedded control. It looks like a broad education is no obstacle here. Special knowledge can be acquired quickly.

  6. Our students learn to USE technology successfully. In other words: they can quickly start doing successful work in their job. Most companies mention this as another success factor. Why is the ability to "do things" an important asset? Because it is a trust- and confidence-building measure. The company and the student don't know each other. The student takes self-confidence from the fact that she can really DO things. The company gains confidence when the student shows real abilities early on.

  7. Our students learn concepts (not enough yet, but we are getting better). As Stefan Bungart of IBM pointed out: knowledge gets outdated faster than ever, but concepts last much longer.

  8. Our students show social competence and skills. This goes much further than simply being able to present one's ideas. It means being a successful team member and a successful communicator towards other groups, business, customers etc.

To me it looks like computer science is about to come full circle. It started as the broad science of cybernetics, whose representatives were able to discover laws and techniques across many areas (Bateson, von Foerster, Ashby etc.). Then it went into the mindless depths of x86 chips, Windows, applications and .com stock quotes. And now, with wireless ubiquitous computing, autonomic computing, social software, virtual worlds, collective intelligence and IT-based security laws, it is coming back right into the middle of society.

The big question for universities nowadays must be: does the curriculum for computer science really reflect the need for broad and concept based knowledge? Or is much of the curriculum simply a waste of time?

Findability and collective intelligence

What a curious term. Being able to find things is a core human skill. And the book on findability explains many wayfinding techniques, old ones and new ones. Its author manages to explain the inherently social and personal qualities behind the relevance of search results. And he also explains why relevance (consisting of recall and precision) has to suffer once document numbers increase: scale-free network effects in language are one of the reasons.
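The two components of relevance are easy to make precise. A sketch with made-up document sets:

```python
# Precision and recall, the two components of relevance mentioned above,
# computed for an invented query result.

def precision_recall(retrieved, relevant):
    """Precision: share of retrieved documents that are relevant.
    Recall: share of relevant documents that were retrieved."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

# 4 documents retrieved, 3 are actually relevant, 2 of those were found:
p, r = precision_recall(retrieved=["d1", "d2", "d3", "d4"],
                        relevant=["d1", "d2", "d5"])
print(p)  # 0.5
```

As the document collection grows, keeping both numbers high at the same time becomes ever harder - that is the suffering the author describes.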

The internet of things becomes new terrain for Google once everything physical has a number or name and can be found via wireless techniques. And marketing efforts are getting desperate because of an abundance of push information and a lack of pull information everywhere. How much pull vs. push is used on your site? Do you let users pull information according to their needs?

Search is never far from collaborative or collective methods. Take a look at the recent special issue of computer.org on search technology. Attentional meta-data is a term that easily scares people, but it simply means all the data that can be gathered from watching what catches people's attention: where do they look in Google results?

And it gets better: the personal and collective (e.g. as a workgroup) search history can be used to improve the relevance of new queries. But then, are we going to constrain ourselves to the ever-similar search behavior of our groups? How fuzzy should search be?

Machine learning, once part of the shunned AI movement, is rapidly becoming a core technology: automatic clustering, recommendation generation, behavioral analysis and predictions based on Bayesian networks. The new book Programming Collective Intelligence uses Python code to explain advanced machine learning techniques, e.g. Google's PageRank. Algorithms to express similarity are shown as well.
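A similarity algorithm of the kind the book discusses can be sketched in a few lines. This is a Euclidean-distance-based score between two users' ratings, mapped into (0, 1]; the rating data is invented for illustration:

```python
from math import sqrt

# A similarity score in the spirit of "Programming Collective Intelligence":
# Euclidean-distance-based similarity between two users' ratings, mapped
# into (0, 1]. The rating data is invented for illustration.

def similarity(prefs_a, prefs_b):
    shared = set(prefs_a) & set(prefs_b)
    if not shared:
        return 0.0  # nothing in common, no basis for comparison
    dist_sq = sum((prefs_a[k] - prefs_b[k]) ** 2 for k in shared)
    return 1.0 / (1.0 + sqrt(dist_sq))

alice = {"Portal": 5, "Crysis": 3}
bob = {"Portal": 5, "Crysis": 3, "FIFA": 4}
print(similarity(alice, bob))  # 1.0: identical on all shared items
```

A recommender then suggests to Alice the items her most similar neighbours rated highly - collective intelligence in miniature.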

It cannot be overlooked - the "soft" sciences of psychology and sociology are getting more and more into computer science.

Why objects have failed, and what turns a PC (back) into an enabling technology

Just a few comments on two important papers I have recently read. The first one is the paper Alan Kay wrote for his NSF grant, and it deals with technologies to empower people. Its title is rather ambitious: Steps Toward the Reinvention of Programming: A Compact and Practical Model of Personal Computing as a Self-Exploratorium. The paper follows the old Smalltalk philosophy that programming should be easy - it is an important tool after all. What has made programming so hard nowadays? What made the fun go away? The paper suggests that a lack of message passing and too much emphasis on the static interfaces of objects was one of the reasons.

The suggestions go to the core of how we build systems. Why do we not understand the systems we build? Because the systems have no ability to express themselves. Kay envisions systems that are their own model while at the same time being useful to users. Compactness is a core quality of such a system, and the bootstrapping process becomes a piece of art.

Operating systems are questioned in this paper (not uncommon for Smalltalk people). They do not provide "real objects" and they mix things into a kernel that should not go there (like drivers). Some ideas seem to go in the same direction as Microsoft's Singularity OS. Others emphasize de-coupling through message passing.

As the PC is an end-user device, a new level of abstraction is needed. Important concepts need to be represented e.g. through intelligent actors and high-level objects instead of low-level calls. This sounds similar to some of my ideas in the area of usability and computing: without better abstractions we won't be able to control the chaos of low-level functions anymore.

The objects in question should be composable, scriptable and software development should be like authoring a presentation. Applications are not really useful in such a context. And presentations are not presentations anymore - they are the real thing and can be modified, scripted and extended as they are the living system itself.

Particles and fields metaphor: loosely coupled coordination of massive numbers of parallel agents. Kay mentions the way ants communicate the location of food via scents on the ground. The ground can be simulated through particles in a computer. Many problems of finding and coordinating can be solved easily this way. Some of these ideas remind me of Peter Wegner's "Interactions vs. Algorithms", and not surprisingly Kay again emphasises the importance of message passing compared to object networks.
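The ant-scent idea can be sketched in a few lines of Python. Agents deposit scent on cells of a simulated ground, the scent evaporates over time, and other agents simply follow the strongest scent; all values and names below are illustrative, not from Kay's paper:

```python
# Sketch of the particles-and-fields idea: agents deposit scent on cells
# of a simulated ground, the scent evaporates over time, and other agents
# follow the strongest scent. All values are illustrative.

def deposit(field, cell, amount=1.0):
    field[cell] = field.get(cell, 0.0) + amount

def evaporate(field, rate=0.5):
    for cell in field:
        field[cell] *= (1.0 - rate)

def best_neighbour(field, cells):
    """Pick the candidate cell with the strongest scent."""
    return max(cells, key=lambda c: field.get(c, 0.0))

ground = {}
deposit(ground, (1, 1))
deposit(ground, (1, 1))   # a second ant reinforces the trail
evaporate(ground)
print(ground[(1, 1)])                            # 1.0
print(best_neighbour(ground, [(0, 0), (1, 1)]))  # (1, 1)
```

No agent ever talks to another directly - coordination emerges entirely from the field, which is what makes the approach so loosely coupled.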

Other important topics discussed include the use of pseudo time instead of realtime in systems, replication and transactions on massively parallel objects and more. A fascinating aspect is "recursively embedded bootstrapping to quickly bring up and/or port a system with its own assistance". I would love to try this approach. And I'd love to find somebody who could explain the following three paragraphs from the paper:

"The kernel of our bootstrapper is a simple self-describing dynamic object engine. Mechanisms in the kernel are limited to those that are necessary and sufficient for it to describe its own structure and behavior. Because this description is complete (every detail of the kernel is fully described) the whole system becomes pervasively late-bound, able to modify dynamically any part of itself from within.

Key to the tractability of this approach is the separation of the kernel into two complementary facets: representation of executable specifications (structures of message-passing objects) forming symbolic expressions and the meaning of those specifications (interpretation of their structure) that yields concrete behavior. Concrete behavior describes the messaging operation itself (the unit of communication between objects) and the sequencing of messaging operations within an object's response to a given message (method implementations, reified as first-class functions). Representation and meaning are thus mutually supporting.

Bootstrapping them into existence requires artificially (and temporarily) introducing a 'fixed point' in their interdependence. We accomplish this by writing a static approximation to the object and messaging implementation in which object structures can be created and interpreted as symbolic expressions to yield concrete behavior. Once complete, the resulting system can generate behavior that completely replaces its own original approximations with a fully-fledged dynamic implementation of messaging and methods. This event is a singularity: once reached nothing can be known about the original approximations (they are, and should be, irrelevant); the resulting kernel is both self-sustaining and amenable to expansion from within towards any form of end-user computing environment."

There are ideas similar to the kernel language approach in Mozart/Oz.

What drives people like Kay et al.? It becomes visible in statements like the following: one of their favorite inspirations is John McCarthy's "Recursive Functions of Symbolic Expressions", which outlined a theory of programming that could also be thought about mathematically. The argument led to a half-page eval for the system, expressed in the system. But I guess now is the time for you to take a look at the paper yourself.
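The flavor of McCarthy's half-page eval is easy to convey in miniature. The sketch below is my own illustration, not code from the paper: a metacircular-style evaluator for a tiny Lisp-like language, small enough to fit on half a page.

```python
# A miniature version of McCarthy's eval: an interpreter for a tiny
# Lisp subset, with programs expressed as plain data (nested tuples).

def evaluate(expr, env):
    """Evaluate a tiny-Lisp expression in an environment (a dict)."""
    if isinstance(expr, str):              # variable reference
        return env[expr]
    if not isinstance(expr, tuple):        # self-evaluating literal
        return expr
    op, *args = expr
    if op == "quote":                      # ('quote', x) -> x unevaluated
        return args[0]
    if op == "if":                         # ('if', cond, then, else)
        cond, then, alt = args
        return evaluate(then if evaluate(cond, env) else alt, env)
    if op == "lambda":                     # ('lambda', params, body)
        params, body = args
        return lambda *vals: evaluate(body, {**env, **dict(zip(params, vals))})
    # function application: evaluate operator and operands, then apply
    fn = evaluate(op, env)
    return fn(*(evaluate(a, env) for a in args))

# Usage: ((lambda (x) (+ x 1)) 41) -> 42
env = {"+": lambda a, b: a + b}
result = evaluate((("lambda", ("x",), ("+", "x", 1)), 41), env)
print(result)  # 42
```

The point is not the Python: the same handful of cases, written in the language being defined, is what made McCarthy's eval fit on half a page.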

The second paper I recently read is Richard Gabriel's famous "Why Objects Have Failed - Notes on a Debate". The paper was presented at OOPSLA, as far as I know, and it contains lots of useful insights into what went wrong with our object systems and languages. It includes some rather old accusations, like static thinking in a world of failure (instead of building resilient, self-healing systems, we try to avoid every failure through static typing). Gabriel also sees a severe lack of understanding of different computing concepts (functional, logical etc.) in the OO community.

The list is rather long: failure to understand re-use, failure in encapsulation and so on. I suggest you take a look at the paper yourself - it is very inspiring. By a happy coincidence I visited today one of my students who is doing a thesis on advanced modeling at SAP in Walldorf. Thilo Espenlaub works on context-dependent attributes to improve B2B communications through enhanced semantics. Classes can have attributes in one context and lack them in other contexts - not something that is easily represented in our OO languages, but it seems to be a quite natural concept for real-world modeling.
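The idea of context-dependent attributes can be sketched quickly; everything below (class and attribute names) is invented for illustration and not taken from the thesis. An object exposes an attribute in one context and simply lacks it in another:

```python
# Illustrative sketch: attributes that exist only in certain contexts.
# Outside the right context the attribute is simply absent - something
# a classic static class definition cannot express directly.

class ContextualObject:
    def __init__(self, **common):
        self._common = common
        self._by_context = {}          # context name -> extra attributes
        self.context = None

    def in_context(self, name, **attrs):
        self._by_context[name] = attrs
        return self

    def __getattr__(self, attr):
        # called only when normal attribute lookup fails
        ctx_attrs = self._by_context.get(self.context, {})
        if attr in ctx_attrs:
            return ctx_attrs[attr]
        if attr in self._common:
            return self._common[attr]
        raise AttributeError(f"{attr!r} not available in context {self.context!r}")

product = (ContextualObject(name="widget")
           .in_context("b2b", bulk_price=9.50)
           .in_context("retail", list_price=14.99))

product.context = "b2b"
print(product.bulk_price)   # 9.5
product.context = "retail"
print(product.list_price)   # 14.99
# accessing product.bulk_price would now raise AttributeError
```

A dynamic language makes this easy; the modeling question is how to express the same thing in a static class diagram.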

Application Security - Reflections on our Security Day

So you are famous now! Heise Security does a feature on your application almost every day. Your developers are frantic to fix one more cross-site-scripting or web-trojan (XSRF) bug, or simply a nice SQL-injection problem. Patch after patch gets released, only to be found incomplete or buggy. Upper-level management knows your name by now. What can you do?

Basics first. Today we are dealing with many applications that were built BEFORE injection attacks became common knowledge. What is worse - and our first speaker Florian Grassl made it perfectly clear - is that our core interfaces (like the HTTP request objects used in JSPs) are very old and have NO CLUE whatsoever about input validation. To me this was one of the core ideas taken from the security day: interfaces need to be far more abstract and insulate developers from validation problems. Ruby on Rails, e.g., does have such interfaces. Offering a bare-bones HTTP request object to JSP users (you know, JSPs were sold as something designers were supposed to handle...) is simply asking for problems. It is simply too easy to hack a short request.refer or some other variable into your JSP.

So what are we going to do with those old applications? Forget about re-designs. Nobody has the time or the money for those. From patch to patch - that is what is really happening out there. And we all know the old hacker rule: look for vulnerabilities where vulnerabilities have already been found once. What about getting complete lists of input/output requests and their parameters? Forget about it. If they existed, the application would probably not have any input validation problems, because the architects would have thought about it beforehand.

It is clear that input validation is an architecture problem. Every new application should come with a description of input and output, including all parameters, types and formats. But there is more: the application UI should be designed to allow white-listing, which means it should use fixed tokens as much as possible and outlaw free text fields wherever it can.

But for older applications a different approach is needed. Florian Grassl used an "aspect" approach: servlet filters are a good candidate to externalize input validation. One problem: the request object is read-only. This means one has to put it into a wrapper object which has a writeable buffer. All unwanted request parameters (including headers) should be suppressed. The filter rules should be given declaratively, but one could also use programmatic filter commands.
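The filter idea can be sketched in a language-neutral way. The following Python sketch is purely illustrative (the thesis work itself used Java Servlet Filters; all names and rules here are invented): keep only white-listed, well-formed parameters and hand the sanitized copy downstream.

```python
import re

# Declarative whitelist: parameter name -> regex its value must match.
# Everything not listed is suppressed, mirroring the servlet-filter
# approach where the immutable request is wrapped in a writable copy.
RULES = {
    "user_id": re.compile(r"^\d{1,10}$"),
    "lang":    re.compile(r"^(de|en|fr)$"),
}

def filter_request(params):
    """Return a sanitized copy containing only parameters that are
    both white-listed and well-formed; drop everything else."""
    clean = {}
    for name, value in params.items():
        rule = RULES.get(name)
        if rule and rule.fullmatch(value):
            clean[name] = value
    return clean

raw = {"user_id": "42", "lang": "en", "refer": "<script>alert(1)</script>"}
print(filter_request(raw))  # {'user_id': '42', 'lang': 'en'}
```

The key property is that the application behind the filter never sees the rejected parameters at all, so legacy code needs no changes.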

Florian Grassl's talk prepared the ground for the following discussions on application security and vulnerability management in general. In his thesis he used the OWASP approach intensively, as well as the tools from owasp.org; WebGoat, e.g., proved to be especially helpful.

The next topic presented was an evaluation of web application firewalls (WAFs), including a very thorough investigation of attacks, tools and the test setup necessary to achieve real test results. Sebastian Roth's thesis gave some more answers to our legacy web application problem. He pointed out two major reasons to use a WAF. The first is simply defense-in-depth: it is always better to have several lines of defense, and a WAF located in the DMZ does provide more security. The price to pay is configuration and rules. But more and more WAFs seem to be able to take over existing definitions, e.g. WSDL-based ones, from the application interfaces themselves.

But the other reason may be even more important in case of a public exposure of your application. Frequently you cannot just shut down a certain service or request, for business reasons. It is no problem to shut down a "rate-this-page" button for a couple of days - nobody is using it anyway. But you can't just turn off vital business interfaces in your internet applications. WAFs allow you to manipulate the request without source code changes, instead of turning the service off. No need to get into the frantic patch, exposure, new patch, new exposure cycle that we know from many internet applications. It gives your developers time to REALLY come up with a patch that helps and does not miss the problem right next to it. And don't forget: once a vulnerability has been posted in public the clock is ticking, but it is usually NOT possible to fix software within minutes. A WAF can save your butt!

Sebastian Roth concentrated his tests on WAFs with only a few elements of self-learning (or none at all, like mod_security). The results are very promising and I cannot wait to post the URL to his thesis (courtesy of Thinking Objects GmbH, the sponsors) here once it is done. Take a special look at his test approach and test tools. In one of our software projects we are currently attacking a Ruby on Rails application and we intend to use the testbed for this work.

The third talk, by Mr. Haas of McAfee, focused on vulnerability management. He made no secret of the fact that the tools and processes he showed were targeted at larger companies - not so much because of the price of the tools but because of the complexity and weight of the processes involved. He showed the development of a security strategy, starting with policies, assets etc. and ending with tests and evaluations of the defined processes for compliance reasons.

He also did not keep it a secret that new regulations like SOX and Basel II are currently driving IT and software projects in the security area.

Finding the company assets seems to be more complicated than one might think. It should be a simple case of a database lookup to find servers and get their configuration, software versions, purpose and location. This is not really the case, as everybody working in larger companies knows: nothing is usually more outdated than the hardware/software configuration database - due to increased mobility, tighter deadlines etc.

A good solution for this problem is to use network scanning techniques to find new hardware or detect changes in hardware and software configuration. This can even be done without agents running on the boxes, as Mr. Haas showed.

The afternoon started with an introduction to strong authentication in enterprises by Dr. Ben Fehrensen of UBS AG. Ben is a colleague of mine and excels in security topics. He explained the use of smartcard technology for strong authentication, e.g. in a Kerberos-based Windows network. Based on PKI, he showed the use of several different keys for authentication, encryption and signatures, and why a safe copy of the encryption key is needed.

He even went into the problems of mobile devices. What happens if a user loses a smartcard while on the road? Is it necessary to first establish a connection to a domain controller before a replacement card can be used?

At the end of Dr. Fehrensen's talk the participants had an understanding of the first step to enterprise security: strong authentication. And we all agreed that on the next security day we should take the next step and explain the different ways to do secure delegation between nodes.

Up to that point we had mostly talked about systems with lower EAL ratings (e.g. EAL2), which means they are reasonably safe to use but one can do better. Dr. Gnirss of IBM Development GmbH in Böblingen showed us a system with an EAL5+ rating for the hardware and an EAL3+ rating for the software: IBM's System z series of mainframes.

He gave a general introduction of the security concepts behind this type of machine and two things became very clear:

Availability and reliability are core security features in today's transactions, and hot swap needs to cover basically all parts of the hardware. In most cases users should not even notice that there was a problem. Processing units, e.g., need to check themselves using dual cores; in case of an error they are automatically turned off and a spare unit takes over.

Security people today still focus almost exclusively on the correctness part of business processes: do not allow unauthorized access to resources etc. But that is simply not enough. Today's services absolutely have to be available.

The other key feature he pointed out was the high level of isolation between operating systems due to the LPAR approach. Logical partitioning allows almost complete virtualization of processors and environments. And it avoids the current problems of covert channels on PC-type dual-core CPUs, e.g. through cache manipulations.

Isolation is a core feature for security because it allows security analysis to take place in reasonable time and with a high degree of correctness. Without isolation everything is possible and security problems cannot be calculated.

Unfortunately I had to attend an examination during the rest of the talk, which is why I could not raise the point Keys Botzum made in his "Hardening Application Servers" paper: given the networked characteristics of application server clusters, isolation on lower levels is helpful but far from sufficient to avoid security problems. His example: if an application server cluster uses the same symmetric key on many machines - including front-end servers like web servers on different machines - then a compromised key will compromise the whole cluster, no matter where it is running. Such a key allows e.g. application server administration, which in turn exposes passwords for databases etc.

Then finally it was show time. By sheer coincidence, Wednesday night at my hotel I had turned on the TV to watch some news. I landed in Günther Jauch's Stern-TV infotainment show, and one guest was described as a "security specialist". His name was Sebastian Schreiber - and I knew that at our security day two days later a Sebastian Schreiber from SySS was supposed to talk on penetration testing. But as I had not met him before, I did not know whether he was the same person.

The question was answered Friday afternoon at 17.00: he was (;-)

Sebastian Schreiber first took us on a tour of live hacking. Using Google features like inurl:price etc. he located web shops and services offering pizzas and the like. A simple look at the HTML code already suggested changing some parameters to get a cheap piece of pizza...
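The underlying bug is worth spelling out: the shop trusts a price value that round-trips through the client, e.g. in a hidden form field. A minimal sketch (shop and item names invented) of the vulnerable pattern and the fix:

```python
# Vulnerable pattern: the order form posts the price back to the server,
# so an attacker simply edits the hidden field before submitting.
def checkout_vulnerable(item, client_price):
    return float(client_price)          # trusts the client - exploitable

# Safe pattern: the client only names the item; the server looks up
# the price in its own data and rejects unknown items.
PRICES = {"margherita": 7.50, "salami": 8.50}

def checkout_safe(item):
    if item not in PRICES:
        raise ValueError(f"unknown item {item!r}")
    return PRICES[item]

print(checkout_vulnerable("margherita", "0.01"))  # 0.01 - cheap pizza
print(checkout_safe("margherita"))                # 7.5
```

The rule generalizes far beyond pizza: any value that determines money, rights or identity must be re-derived server-side, never accepted from the request.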

He showed us how to collect live images from cams throughout the city, how to remote-control other people's mobile phones, and how to do a man-in-the-middle attack on e-banking. It became very clear that everything below transaction signatures in e-banking is simply asking for man-in-the-middle (MITM) attacks.
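Why do transaction signatures stop a MITM where passwords and simple TANs do not? Because the code is computed over the transaction data itself, so a rewritten payee invalidates it. A hedged sketch using Python's standard hmac module (key handling grossly simplified for illustration; in a real system the key lives in the user's hardware token, not in the banking application):

```python
import hmac, hashlib

KEY = b"shared-secret-on-users-token"   # simplification: really in a token

def sign_transaction(payee, amount):
    """MAC over the transaction data the user actually confirmed."""
    msg = f"{payee}|{amount}".encode()
    return hmac.new(KEY, msg, hashlib.sha256).hexdigest()

def verify(payee, amount, code):
    return hmac.compare_digest(code, sign_transaction(payee, amount))

# The user signs the transaction shown on their trusted device...
code = sign_transaction("legit-account", "100.00")
print(verify("legit-account", "100.00", code))     # True

# ...a man in the middle rewrites the payee: the code no longer fits.
print(verify("attacker-account", "100.00", code))  # False
```

A password or session cookie, by contrast, authorizes whatever the attacker relays; it says nothing about the transaction content.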

He also mentioned concerns about the new law in Germany that forbids certain hacking tools and practices. It was easy to see that several of the well-known vulnerabilities of large applications found in the past could no longer be posted today. The new law really helps companies hide behind weak and dangerous applications, because posting the vulnerabilities puts the white hat in danger. Again a case where lobbying has made life easier for companies at the price of worse security for the public.

The good news: Sebastian Schreiber offered us documents and contract templates in case we do penetration testing in our university classes.

After the live-hacking session he showed how penetration testing is done at SySS: he explained the different kinds of testing (black-box, white-box), the use of tools, and the process of documentation - which is in itself quite critical, as dangerous findings should not find their way outside of SySS or the companies in question. This e.g. forces SySS to run their own print shop.

Penetration tests are another tool in the fight against vulnerable applications and every team should use them. Unfortunately they are not really cheap, as a lot of manual work is involved. Of equal importance for a development team is the knowledge of how those attacks work. But don't overestimate the effects of live hacking: developers tend to ignore the risks once they are back at their desks...

The security day ended around 18.30 and we got a lot of feedback, mainly saying that people can't wait for the next one. The next security day might have two main streams. One could be "the people problem", in other words the psychology behind security. Up till now psychology has played a minor role in security; only the attacker has received some attention (e.g. Kevin Mitnick). The truth is that risk awareness and risk estimates are all based on psychology, and that we do a really bad job there (see Bruce Schneier on security and psychology). Dr. Scheidemann of ApSec might want to add a talk on risk awareness and cryptography, which comes to some surprising conclusions about developers, users and their use of cryptographic tools.

The second stream could be enterprise security. Identity management, secure delegation, federation of identities etc. are core topics, and e.g. Michael Watzl of Tesis has already committed to a talk on identity management. Looks like this will be another interesting day.

Security Day October 2007 - Application Security

The second security day at HDM has its focus on application security. Florian Grassl (CS&M) starts with the development process necessary to achieve better application security. Sebastian Roth (CS&M) covers web application firewalls and their limits, Mr. Haas of McAfee explains vulnerability management, and Dr. Benjamin Fehrensen of UBS AG explains enterprise-wide strong authentication. What does a mainframe environment offer applications with respect to security? Dr. Manfred Gnirss of IBM Germany - Development GmbH gives an overview of technologies and procedures. And last but not least, Sebastian Schreiber of SySS will explain penetration testing and give a live hacking demo.

Note

Friday 19.10.2007 at Hochschule der Medien, Nobelstrasse 10, Stuttgart. See www.hdm-stuttgart.de for more information. A live stream is provided.

Software Quality vs. Time spent - some unnerving statistics

As a regular recipient of Jeff Sutherland's Object Technology newsletter I found the link to Joel Spolsky's "Hitting the High Notes". The article contains many surprising facts, e.g. why you should hire the best programmers for your software company and not the cheapest: because software is digital, copying does not cost a penny. This means you can invest in quality software, because duplication (and therefore more profit) is free - unlike in the physical, analog world. From Joel: "Essentially, design adds value faster than it adds cost."

The article also has lots of data from empirical studies on productivity and quality, and the variation is staggering - the standard deviation is very high even within the top 25 percent.

But there is more: in his plea for top-notch people, Joel says you can't win with mediocre people because today's software market is a "winner-takes-all" game (think iPod, Office etc.). There is little room for second place.

Joel's observations and data contradict today's "project management" belief system, which is based on exchangeable resources instead of the best people. And adding more people does not help - something we learnt from Fred Brooks' "Mythical Man-Month" - because of communication overhead costs etc.

Sadly, the skill to run a highly productive software department - an excellent environment for excellent people - seems to have been lost. If you want to take a glance at excellence, get "Beautiful Code" by Andy Oram (yes, the one from the p2p book) and Greg Wilson. That is, if your methodology still allows you to believe that a good product is based on good software made by good developers, and not only on good project management...

Google Tech Talks - an excellent source of information on mashups, scalability etc.

Several people pointed me to Google Tech Talks. These videos are an excellent source of information. I recommend taking a look at some samples, like the one on Model-Based Testing. According to Richard Liu it explains what you can do once you have a model. You might want to take a look at "Practical Model-Based Testing" as well.
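The core idea of model-based testing fits into a few lines: once the system is described as a model (here a toy login state machine, invented for illustration), test sequences can be generated from the model instead of being written by hand.

```python
# Model: a login dialog as a finite state machine.
# state -> {action: next_state}
MODEL = {
    "logged_out": {"login_ok": "logged_in", "login_fail": "logged_out"},
    "logged_in":  {"logout": "logged_out"},
}

def generate_tests(start, depth):
    """Enumerate all action sequences up to `depth` that the model allows.
    Each sequence is a test case to replay against the real system."""
    tests = []
    def walk(state, path):
        if path:
            tests.append(tuple(path))
        if len(path) == depth:
            return
        for action, nxt in MODEL[state].items():
            walk(nxt, path + [action])
    walk("logged_out", [])
    return tests

cases = generate_tests("logged_out", 2)
print(len(cases))  # 5 allowed sequences of length 1 or 2
```

Real model-based testing tools add coverage criteria and oracles on top, but the generation-from-a-model step is exactly this.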

Mashups and Gears seem to be something really important in web development. Unfortunately the browser security model does not really support mashups (same-origin policy...). Learn more about mashups and Google Gears from Douglas Crockford's talk (thanks to Mark Miller/Captalk).

Marc Seeger pointed me to 13 tech talks from the Seattle Conference on Scalability, where architects explain scalability and how it is achieved at Google. Watch all the talks on "Scalable Systems" and get the whole list of tech talks.

The thing from the internet - a good way to create awareness?

In the captalk mailing list I found this little gem: "fifties" posters on security themes. The "thing from the internet" sounds like "the monsters from Mars" and looks actually quite funny. You can use the posters for free in an educational environment (Creative Commons license with attribution, from Indiana University). I am just not sure whether this way of creating awareness is really OK. It follows the typical way media "inform" people nowadays: all surface and no real reasons - and always for the benefit of commercial interests. Why is it that people need to fear "the thing from the internet"? Is this REALLY something that people working with computers need to be concerned about, or is it a sure sign of bad software architecture? Again, the poster suggests that worries about malware are in order and really important - but they are not.

During the cold war the US government ran the "duck and cover" features which showed how to behave during a nuclear attack. Instead of really informing people about the devastating effects of a nuclear bomb exploding in one's environment, the features downplayed the effects. The short videos look extremely ridiculous today but their political effect was quite useful for the US government: instead of questioning the aggressive politics of the US, they gave the impression that people could protect themselves in case of an attack.

Usability and Security

In June my friend Roland Schmitz and I wrote an article on the relation between usability and security for <KES> magazine. It speculates on user interface design in a world of reduced authority, compared to the typical Windows style of ambient authority. "Usability and Security" (in German) appeared as part of the BSI forum.

Semantics, Data-Mining, Search, Web3.0 and modeling - a new "Day" or two?

Looks like we are going to get a Web3.0 day in the coming term. The semantic web seems to be changing - at least where explicit tagging is concerned. Automatic capturing of semantics is key, e.g. in the "stealth information architecture" approach by my colleague Richard Liu. He tries to capture meta-data indirectly through XML elements used in pages. Another approach is purely statistical: Autonomy's software uses Bayesian networks etc. to create semantics. Others use natural language processing, like the Norwegian company FAST. How do we compare the results of searches? When do we use explicit tagging versus automated procedures? How do we use meta-data intelligently? What is the role of collaboration in creating semantic information?
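To illustrate what "purely statistical" semantics means (this toy is in no way Autonomy's actual technology, and the training data is invented): even a naive Bayes classifier over word counts can attach a topic to a snippet without any explicit tagging.

```python
from collections import Counter
import math

# Toy illustration of statistical topic tagging: classify a snippet by
# which topic's word statistics it fits best. Training data invented.
TRAIN = {
    "security": "firewall exploit patch vulnerability firewall attack",
    "semantics": "ontology tagging metadata reasoning ontology rdf",
}
counts = {t: Counter(words.split()) for t, words in TRAIN.items()}

def classify(text):
    def score(topic):
        c, total = counts[topic], sum(counts[topic].values())
        # log-probability with add-one smoothing for unseen words
        return sum(math.log((c[w] + 1) / (total + len(c)))
                   for w in text.split())
    return max(counts, key=score)

print(classify("new firewall patch released"))  # security
print(classify("rdf ontology for metadata"))    # semantics
```

The interesting research questions start exactly where this toy stops: how such automatically derived semantics compare to explicit tagging, and how to combine the two.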

Frank Falkenberg is currently using the UIMA framework to enhance pages automatically and drive client-side JavaScript with those meta-data tags - a combination of improved semantic technology and AJAX.

And it looks like it might be well worth the effort to take a look at the future of the web from a computer-linguistics point of view - represented by our new lecturer Mrs. Zinsmeister from the University of Tübingen.

For a moment I was tempted to bring modeling in UML vs. Ontologies into this mix but decided against it: it is better to plan a separate "model day" where we can reason about the two approaches and whether there is a common ground between both. Stefan Göhring is building an ontology for the medical area but the way he does it and the tools he uses are kept generic and independent of the problem domain. Timo Kehrer is doing research on advanced modeling with UML and we should be able to compare both approaches.

I think the "model day" should really concentrate on modeling, transformations etc. and not go in the MDA direction. Modeling is becoming more and more independent of crude mapping approaches to code, and it is important to understand this new role of modeling.

How does modeling interact with the recent drive towards more dynamic features in programming languages (C# and Ruby being two examples)?

An integrated view on web application security, testing and web application firewalls (WAFs)

The next security day at HDM is already taking form: web application security is going to be one of the key topics there. We will take a look at the whole chain of tools, software and problems, starting with application architecture. The main question there is: what can the application architecture do to improve security? Florian Grassl is currently working on a thesis in this area, and he is looking at servlet filters etc. to establish control. But are these filters independent of the application? Probably not. They could use meta-data that was needed during the construction of the application.

This brings us to the next part of the chain: web application firewalls. What are the possibilities and limitations of WAFs? Sebastian Roth is currently trying to find an answer to this question in his thesis. After establishing a sound conceptual framework he will test actual WAFs using his approach. For his attacks he is extending WebGoat on the server side. For the client side he uses a text-based browser in Python with a small Domain Specific Language (DSL) to describe the attacks.

The final part of the chain is the testing software, e.g. eXpecco by eXept AG. How do we interface such a tool (based on a workflow engine and activity diagrams) with the system under test?

The core question behind all this is: do we re-create or duplicate information needed during the construction of the application, or can we use meta-data established by the application to drive testing and the WAF? In a model-driven development environment the requirement and development models should be able to drive the other tools, e.g. by establishing unique identifiers for all elements of the GUI and by describing the syntax of values and communications.

Another important topic is platform security. Sven Höckl just finished his thesis on Linux security. He investigated existing Linux security frameworks and the concepts behind them. On top of this he did his own implementation, "SecGuard", which uses a clean kernel-module-based approach to integrate with the Linux kernel.

As a further benefit he investigated several virtual machine technologies during his development work, and I intend to use those for my operating systems lecture and exercises.

The 10th BSI Security Conference - a short recap of events and topics

(Pictures courtesy of Manfred Rueter, City of Bottrop). It was a very successful conference for the computer science and media faculty at HDM. Thomas Müller was one of only four students invited to present their thesis work at the conference. He talked on the security of the Vista operating system (BitLocker etc.) and its shortcomings. He is currently writing an article on his results for <KES> - the security magazine of the BSI. Benjamin Mack used the poster session to present the current work on WEF, the Web Exploit Finder, which raised interest even from the Dutch government. You can read a whitepaper on the WEF poster session.

Note

The authors of WEF are looking for students to further develop the tools. If you are interested, please contact the authors via their homepage xnos.org.

As for myself, I participated in the panel session on the last day, next to representatives of Microsoft, eBay, Heise Verlag, Bitkom and the BSI - moderated by Ranga Yogeshwar.

What did we learn at the conference? First a bit about the BSI itself. It is really the place government organizations look to for help with respect to IT security, and a lot of government officials were present. The atmosphere was rather relaxed and not very critical towards companies or Minister Schäuble. Only the "Bundestrojaner" - the trojan-horse-like software supposed to help the BKA spy on us - raised some concerns, and the BSI was obviously not happy about its publicity. That's why it got re-labelled a "remote forensic tool"...

Malware is still a BIG topic for organizations and companies alike. Surprisingly, nobody wants to put their foot down with the software makers about the sorry state of their software. Instead, a new topic has been pushed: user awareness and education. In other words: let's put the responsibility on the user and get around building safe systems.

In his keynote Minister Schäuble again mentioned all the measures planned towards total control of the population. The general theme was: more and better-integrated data collection, using federated systems where the law does not allow central systems. The talk reflected the current approach of law enforcement and politics: just ask for more data, and if questioned mention the t-word (terror). The argument for more detailed and available data on citizens, presented by another speaker, goes like this: in case of an emergency, rescue forces need to be able to get data on elderly and handicapped people and where they live. Sounds convincing, and might surely save some lives over a longer period of time. But what if those data get into the wrong hands? The mob would pay a lot for them.

I visited a number of sessions on the German health card. Web services and web services security played a large role there - for reasons of interoperability only. One thing puzzled me a little: the so-called "comfort signature". It basically splits the signing process into two parts: in the morning a signature card gets "unlocked" by its owner; later, another member of the health profession can use a different authentication mechanism to actually create a signature with the signature card. I was wondering whether the signature created would reflect the fact that it was created using a "comfort signature". If so, it would provide a way out of liability issues for the health profession (like "sure, your honor, this is my signature. But on this day the secretary's dog had swallowed her RFID authenticator, and every time he walked past the PC...").

I have also seen a very promising model-driven approach toward security in the Austrian health card development, driven by the University of Innsbruck. We need to get in touch with them. The approach could result in a new UML profile.

Federated security mechanisms played a major role in the implementation of a virtual electronic file.

What else was interesting? 3-D face recognition works very well and is currently being tested at airports. They use some funny names for it (unattended border control etc.) and I was wondering whether they would use it as a substitute for proper authorization and control before entering a plane. The future here is frightening. I talked to the speaker afterwards (actually I did this a lot, because the BSI executed a tight regime with respect to questions and usually closed the floor after two questions) and he said there is nothing preventing its use in large stores, subways etc. An important technology with respect to biometric data is multi-modal fusion of manifest modalities to retrieve non-manifest modalities. Big brother is getting more tools...

Johannes Landvogt represented Minister Schaar of the BfDI and presented a list of 10 commandments for data protection.

"Why Johnny can't encrypt" - this question has been answered with a reference to usability problems: it is simply too hard to use encryption. Volker Scheidemann added another point: risk awareness. In many cases people don't own the data - it belongs to the company - and they therefore underestimate the risk. Basically Volker Scheidemann emphasizes the role of psychology, just like Bruce Schneier - or perhaps even before Schneier. Besides his interest in risk awareness he specializes in cryptography at his company. We will likely see him at our next security day.

Interesting people - we met a lot of them: representatives from universities, companies and government. It will take me some more time to contact everybody and make them aware of our faculty and programs - e.g. Michael Watzl of Tesis.

The law in times of rapid changes via IT

Note

The stream from our Digital Rights Day. Please note that the event starts some way into the recording.

The following is a short write-up of things learnt at the Digital Rights Day. The first speaker, Andreas Lober of SCHULTE RIESENKAMPFF, specializes in the law of virtual worlds. He gave some very interesting examples of how differently lawyers think compared to, e.g., computer science people. This is in part our problem, because we tend to think about problems as belonging to the yes/no category, with hopefully an algorithmic solution. The law works differently, and that became very apparent during the discussion of non-repudiation on the internet. There is basically NO form of non-repudiation used in virtual worlds, e-business applications etc. - at least not in our sense of a qualified digital signature being used to sign an electronic document. Does this mean that a judge will always refuse to accept a piece of e-mail or a screenshot (yes, a screenshot!!) as proof or evidence? Not at all. The law knows some workarounds: a screenshot together with an affidavit (a statement under oath) will have some legal value, even if technically (to IT people) it is just a bunch of bits.

But the law NEEDS a solution in case of a dispute and that's why it accepts the affidavit as a substitute for a digital signature. And the law knows much more than just correctness in the sense of an undisturbed hash value: it knows intentions, motives etc. and will use all this in its process of creating justice (in the sense of law).

But it gets crazier yet: what if I show up in front of a court equipped with a fully qualified digital signature of my opponent under a document? The law also knows LESS than IT: in this case it is very likely that the judge does not understand a digital signature, and we are right in the middle of a fight between external experts called in by the court (thanks to Andrea Taras for pointing this out).

Mr. Lober made it very clear that the virtual world - especially if closely modeled on its physical original - is full of rights violations. Try to model a city in enough detail and the brand signs on large buildings become a problem. Do not wear t-shirts in SecondLife that were not bought virtually.

Local laws will still rule the virtual world for the next couple of years - whatever that means in your case. There are two-year-old disputes between members and SecondLife just over WHICH court is in charge of the dispute...

Open Source - a way to get rid of responsibilities as a software vendor? Not by a long shot, as Dr. Gregor Zeifang and Dr. Axel Funk of CMS Hasche Sigle pointed out. After explaining the existing forms of open source licensing (GPL etc.) and their sometimes viral attributes it was clear that a) those licenses ARE valid in Germany and b) they will NOT serve as a means to get rid of liabilities by way of saying: it is free, so I have nothing to do with bugs and the damages resulting from them.

The talk was very detailed and ended with some good advice on what to put at the head of a software file to ensure your rights and to keep you from being sued. The clear rule was: always put some EULA statements in addition to the open source statements into your products.

Next came Thomas Hochstein - prosecutor in Stuttgart - who gave an introduction to internet law. I have rarely seen such a precise and to-the-point presentation of such a large and vague topic. He explained the three core relations - member-member, member-provider, all-state - and gave excellent examples in each area.

The law concerning providers/intermediaries is especially critical. If you can manage to present yourself as just a platform provider (like SecondLife supposedly does) your responsibilities are vastly reduced. It gets more difficult once you are a real intermediary with a publishing business (like a forum). While the law protects you from getting sued immediately once one of your members violates somebody else's rights, you are responsible for taking action. And because many collaborative sites do not enforce authentication strictly, the prevention of repeated violations is a hard thing to do. And when you fail at that, you can be sued for repeatedly failing to ensure a third party's rights.

Another way to reduce your responsibilities is to use the special laws covering the broadcast of live events. Those cannot be controlled as tightly as recorded features and therefore the responsibility of publishers is reduced. (In the US, TV stations now delay broadcasting of live events to allow censoring the content in case of another "Janet Jackson Nipple Event".) A live upload and redistribution through YouTube should be considered a live broadcast.

The liability threat to providers and publishers of third-party content should not be underestimated. Flickr e.g. currently uses a self-categorization system as a means to prove "responsibility" on their side - even if this means that the German flickr.de will only show one third of all pictures uploaded - much to the dismay of the German Flickr users.

Hochstein's statement that "the first time is free" was challenged by students claiming that they got sued the very first time.

A very important part of Thomas Hochstein's talk was a clear statement on the rights of third parties even on the internet. Distributing naked pictures of a celebrity is a violation of her rights and cannot simply be tolerated. I am very glad that he refrained from using the well-abused topics of child pornography and terrorism for his arguments, which were nevertheless convincing (or perhaps convincing precisely because he only talked about the everyday violations).

There is no doubt that people's rights are getting violated on a daily basis on the internet. But do we have to demand full and strict authentication before access to the internet is allowed? And what kind of measures can we demand from providers? When do legal measures create a climate of uncertainty - and finally censorship - among site owners?

These questions are far from easy to answer. And the ever-growing abilities of IT add another problem: Mr. Hochstein mentioned car license plates as an undisputed means to identify drivers. But what happens once there is a scanner system installed that lets law enforcement track all movements continuously? What about networks of video cameras with roaming ability that can track you over a long distance? What happens when federation technologies turn previously separated databases (separated precisely due to security concerns) into one big virtual DB with realtime query capability?

I think that lawyers have not yet realized the full impact of IT technology as it is currently rolled out. It really changes the quality of many tools and mechanisms used by the law and law enforcement. And what about our right to anonymity?

The afternoon started with a talk by two representatives of the Chaos Computer Club Stuttgart. Hanno Wagner and Torsten Jerzembeck gave an overview of projects, mechanisms and tools used by authorities and the state to gather and combine data on citizens. And according to them, private businesses are far worse than the state. The recommendation was clear: be careful with your data and don't hesitate to challenge collectors. The sheer number of data pools created is really staggering and of questionable value, as Bruce Schneier also points out: too much data easily leads to increasing numbers of false positives. And another problem becomes evident: law enforcement has a justified interest in detailed data on citizens, e.g. to rescue them in case of an accident or catastrophe. But what happens once those data get into the wrong hands? We will have to come to a decision on whether we will try to achieve maximum safety and security, or whether we will accept not saving some people because we don't want all data to be accessible to law enforcement. This goes deep into a discussion of freedom and its costs.

And that was exactly the topic of our last speaker, Kurt Jaeger of the "Arbeitskreis Vorratsdatenspeicherung". He gave an overview of the current plans and laws on telecommunication data retention. In a way this makes it impossible to access the internet anonymously. There is no technical necessity for it - but there was none in the case of pre-paid mobile phones either...

Jaeger pointed out that there is a huge asymmetry between the data collectors and the citizens with respect to the data collected and that this will put the citizens at a clear disadvantage.

One of our guests, Dr. Lutz (a patent lawyer), asked for a more detailed, case-based definition of freedom, and it turned out that this is not an easy task.

The talk of our "Freiheitsredner" ended with a nice discussion about how to make people sensitive to the right to privacy and its enemies. Looks like our Digital Rights Day did a good job there.

At the end it was clear that we will have more days on social and political topics in the context of IT and computer science. The next Digital Rights Day should cover software patents and manage to get somebody responsible for data protection and civil rights to join us. I'd like to add a technical session on profiling people as well (see below). And last but not least, the concept of anonymity in a digital world needs clarification.

Note

And last but not least: Andrea Taras pointed me to a Spiegel article discussing a nice flash animation on total control by Johannes Widmer (panopti.com). For the MTV generation...

And thanks to Sebastian Stadtrecher for a very good video, "Sie haben das Recht zu Schweigen" (you have the right to remain silent), recorded at the 23rd Chaos Communication Congress (December 2006). Definitely worth watching.

5. IBM Day - Data Warehousing, Data Mining etc.

Data has become the oil of business processes and operations. Most companies rely on fast access to data for their daily operations. But it is not only fast access that is needed. Data analytics is playing an ever-increasing role in decision making, and companies spend millions to create data warehouses with intelligent software doing on-demand analytics.

There is a social and political side as well as we have seen during our digital rights day last week: Mass-data being collected in advance, the use of federation to combine physically (and legally) separated databases and the reality of total tracking by permanent data collection (video, car license plate scanning, 3-D face recognition etc.).

Specialists from IBM GBS will talk about on demand information and how it is created, how it works in large data-warehouses and the latest changes in technology and business.

The agenda:

  1. Christian Brabandt, IT-Consultant with IBM GBS, Information On Demand - an Introduction. What is Information On Demand and how does it support business processes within corporations?

  2. Otto Görlich, IBM Central Region Technical Sales SWG Sales Public & Commercial. Accelerate information on demand with dynamic warehousing from IBM - The changing role of the data warehouse.

Note

Friday 22.06.07 at HDM Nobelstrasse 10. Room 148. 9.30 - 12.15. The talks are open to the public and free of charge.

Digital Rights Day at HDM

Computer science is at the core of social and political developments in many areas. Digital content and its protection are one aspect of this. But computer science is also being instrumentalized for purposes that threaten our civil rights in a massive way. There is hardly a day without announcements of state officials telling us about new ways to track our every move. These constant amendments seem to bother nobody within the political and security organizations anymore.

A short example: in February I read in our weekly village magazine (the one that tells you when to put out your garbage cans) that my personal registration data held by the state are now available online through a portal. The service started on January 1st, and I could "opt out" if I wanted (the article didn't mention the slight problem of going back in time to do so). I had a hard time locating the portal, but I found a lot of marketing information by the company that runs it on how they would perfectly support "large companies interested" in the data - for a fee, of course. There was no mention of MY rights, of course.

The Digital Rights Day will cover a lot of ground here and provide ample grounds for discussion: open source, privacy protection and internet law (even in virtual worlds like SecondLife). Kurt Jaeger - one of the "Freiheitsredner" - will give an introduction to what freedom really means. Lawyers, the Chaos Computer Club and other specialists will be present or give talks. You can find the program as usual on the HDM homepage.

Third Games Day at HDM

With the BSI conference and the security book getting ready I didn't have a chance to properly announce our third Games Day on my own homepage! Let me at least say some words about the topics that made this day our most successful Games Day ever.

The agenda can still be found at the HDM event list and the stream (thanks to Michael Gerlinger) is still available and well worth watching.

The event was divided into two major sections: game engine design and virtual worlds. Let me start with the game engines. Kai Jäger and Clemens Kern demonstrated their own game engine project and its amazing quality. The focus was on "how do I create my own game engine" and both MI students gave a perfect introduction. It is doable, but it takes quite a lot of software and computer graphics skills. Besides the computer graphics stuff there were two things of general import for software developers. Both authors emphasized the importance of agile methods in developing such an engine. This does not mean chaos rulez. Quite the contrary. But it means that mistakes will be called mistakes and refactored asap, without a chance to let them fester into "the big ball of mud".

The other topic was the way the "world" of such an engine is captured. The authors used octrees for this purpose: just imagine a cube that is cut into eight smaller cubes, and those are again split into eight smaller cubes each. This goes down until every cube contains only a certain number of polygons. The trick lies in the fact that a sub-cube will ONLY be split further if there are polygons within it. This means that empty space within a scene is represented cheaply by a single cube within the tree - a perfect example of an algorithm that scales well with the complexity of the scene data.

Oh, I almost forgot: adding some dirt to technical scenes makes them look much more realistic. This is absolutely true; I still remember how run-down the cargo space-ship in Alien I looked in certain places. Of course there are limits to what two developers can achieve in eight months. The sunlight in certain scenes was pre-generated as part of the texture and did not change during the virtual day, and the same naturally goes for the shadows. I will later talk about an engine that can do those dynamically in realtime.
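The octree idea can be sketched in a few lines of C++. This is my own illustrative reconstruction, not code from the students' engine - the types (`AABB`, `Polygon`), the centroid-based classification and the leaf threshold are all assumptions:

```cpp
#include <array>
#include <memory>
#include <utility>
#include <vector>

// Hypothetical stand-ins for the engine's real types: an axis-aligned
// bounding box and a polygon reduced to its centroid.
struct AABB {
    float min[3], max[3];
    // Octant i (0..7): bit a of i selects the upper half along axis a.
    AABB octant(int i) const {
        AABB o{};
        for (int a = 0; a < 3; ++a) {
            float mid = 0.5f * (min[a] + max[a]);
            bool high = (i >> a) & 1;
            o.min[a] = high ? mid : min[a];
            o.max[a] = high ? max[a] : mid;
        }
        return o;
    }
};

struct Polygon { float centroid[3]; };

// Half-open box test: a centroid exactly on an upper face belongs
// to the neighboring octant.
inline bool contains(const AABB& b, const Polygon& p) {
    for (int a = 0; a < 3; ++a)
        if (p.centroid[a] < b.min[a] || p.centroid[a] >= b.max[a])
            return false;
    return true;
}

struct OctreeNode {
    AABB bounds{};
    std::vector<Polygon> polys;                       // filled only in leaves
    std::array<std::unique_ptr<OctreeNode>, 8> child; // empty octants stay null

    void build(std::vector<Polygon> input, int maxPolys, int depth = 0) {
        // Stop splitting once the cube is sparse enough (the depth cap
        // guards against coincident polygons that never separate).
        if ((int)input.size() <= maxPolys || depth >= 16) {
            polys = std::move(input);
            return;
        }
        for (int i = 0; i < 8; ++i) {
            std::vector<Polygon> sub;
            for (const auto& p : input)
                if (contains(bounds.octant(i), p)) sub.push_back(p);
            if (sub.empty()) continue; // empty space costs one null pointer
            child[i] = std::make_unique<OctreeNode>();
            child[i]->bounds = bounds.octant(i);
            child[i]->build(std::move(sub), maxPolys, depth + 1);
        }
    }
};
```

The key property is visible in `build`: a child node is only allocated when its octant actually contains polygons, so large empty regions cost a single null pointer.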

The second talk - Florian Born of Trinigy - showed something that should not be possible in game engine design at all: multi-platform development, i.e. supporting several different hardware platforms with the same software framework. Mr. Born started with a comparison of the PC platform (DirectX 9 and 10), the XBox and the Sony Playstation. After going through all the differences - and he really went down to the bottom of the chip design by talking about out-of-order execution, RAM access times etc. - it seemed unlikely that there could be a common platform hiding the differences. Mr. Born then went on to explain core principles of multi-platform design in a world that counts CPU cycles with gusto and tries to push the specific hardware to its limits. This seems to be the antithesis of multi-platform development, which always has a touch of the "least-common-denominator" approach.

Polymorphism is bad and should not be used to hide implementation differences! What??? This runs counter to most software engineering classes which deal with OO development. The explanation is that in C++ this feature is implemented with virtual methods, and a virtual call costs an indirection at run time. Instead, macros are used to implement the different hardware mappings directly. The top-level framework showed surprisingly few platform-specific functions (some cannot be hidden because they are visible all the way up to the game logic level, e.g. special devices).
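To make the principle concrete, here is a tiny hypothetical sketch (all names invented, nothing from the Trinigy codebase): the platform implementation is chosen by a preprocessor macro at compile time, so every call resolves statically and can be inlined, whereas a `virtual` call always pays a vtable indirection:

```cpp
// All names here are invented for illustration; this is not Trinigy code.

// The textbook OO approach: an interface with virtual methods. Every
// call through an IRenderer* pays a vtable lookup and cannot be inlined.
struct IRenderer {
    virtual void drawTriangles(int count) = 0;
    virtual ~IRenderer() = default;
};

// The macro approach: the platform implementation is selected once,
// at compile time, so calls resolve statically.
#if defined(PLATFORM_XBOX)
    #define NATIVE_RENDERER XboxRenderer
#elif defined(PLATFORM_PS3)
    #define NATIVE_RENDERER Ps3Renderer
#else
    #define NATIVE_RENDERER PcRenderer
#endif

// Non-virtual implementation for the default (PC) build.
struct PcRenderer {
    int trianglesDrawn = 0;
    void drawTriangles(int count) { trianglesDrawn += count; }
};

// Game code only ever names NATIVE_RENDERER; no run-time dispatch is
// left in the hot path to hide the platform differences.
using Renderer = NATIVE_RENDERER;
```

The price, of course, is that the platform is fixed per build - which is exactly what a console title wants, and exactly what the OO textbook solution would trade away for run-time flexibility nobody needs here.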

Multi-platform development of games is really hard but Mr. Born explained that there are considerable economic forces behind it. It is just too expensive to develop a game for each platform from scratch. He mentioned many other technical differences about these platforms and how they should be treated so go and watch the stream for more information.

There cannot be a Games Day without our own game development: "Die Stadt NOAH" - a funny adventure developed by a group of 30 students from different faculties of HDM, Uni Stuttgart, Musikhochschule Stuttgart etc. Thomas Fuchsmann gave a short presentation on where the project is heading and how it is managed and developed. He and Stefan Radicke started the project about a year ago and both can rightly be called the fathers of game-related development at HDM. You can download the presentation.

The game engine block ended with a demonstration of Crytek's CryENGINE by Valentin Schwind, our resident CG artist and MI student. Vali showed amazing pictures generated with the new engine - pictures which were virtually indistinguishable from real photos. Used to program an ego-shooter set in a jungle scenario, the vegetation rendering was just unbelievable. But so were many other features like realtime daylight. As a general trend one can say that whatever used to be pre-generated for performance reasons is now done in realtime at high resolutions.

Asked about the hardware needed for this engine, the answer simply was: the latest! (1 GB of VRAM etc.). The engine gives a glimpse of what future games will have to offer: completely realistic scenes rendered in realtime. This goes for the sound as well, of course. The features demonstrated by Vali were too many to repeat here. Go and watch the stream!

The afternoon block on virtual worlds began with a furious and very entertaining talk by Thomas Kasemir of IBM. He showed what IBM is doing in SecondLife. There is still a lot of learning involved. Most of the IBM efforts are done in a collaborative manner where developers and others interested in building virtual worlds got together and started building the IBM representation there. Even the fully automated house from our friend Jochen Burkhard had been turned into a virtual model.

SecondLife has gotten a lot of bad press recently (child pornography, PR dollars wasted on empty spaces etc.). But I believe that besides all this there are amazing opportunities in virtual worlds. And many, many open questions. One e.g. was "why are there chairs in a virtual conference room?". This can be generalized into the question: what will be the role of physics in a virtual world? Where will physical metaphors work and where do they simply get in our way? Do virtual houses need a roof if it does not rain? But what if a roof is a metaphor for security? How about navigation in virtual worlds? Stairs are stupid - but they work in games. Why is that so? What do games and virtual worlds have in common? How are media presented in virtual worlds?

After Thomas' talk I realized that there is work for all faculties of HDM when it comes to virtual worlds: content questions, usability problems, technical issues etc.

Next came Claus Gittinger of eXept AG, famous Smalltalk guru and now a lecturer in our faculty as well. He demonstrated OpenCroquet - the collaborative 3D environment on top of Squeak. Claus showed several core principles of Smalltalk in action. One was "model and view are one", which goes directly against the well-known MVC pattern that separates a model from its views - at the high price of bad usability. The other one was "everything is an object", meaning that there need not be a distinction between objects and their presentations: objects ARE their presentations. Croquet lets you collaboratively manipulate 3D objects using a replication mechanism. Several ways to implement such a thing were discussed with the audience. But the core message was: a 3D environment should be part of your development environment, and its manipulation should be possible for kids. There was a lot to learn from this principle for the designers of SecondLife etc.

Finally Stefan Radicke - who is currently doing his diploma thesis at a games company - demonstrated game development for Nintendo's Wii console and its remote controllers (Nunchuk etc.). First he showed the new features of the Wii Remote and the Nunchuk with respect to motion detection etc. Then he demonstrated his new framework that allows you to connect Wii controllers to PCs. His talk contained many important messages for Wii developers and described the interfaces to Nintendo and the toolkits provided. The talk ended with a lively discussion of the possibilities for absolute position tracking (probably not possible because the Wii controls cut off sensor data at certain values).

By then it was 17.30 and there were still lots of interested people, both in the room and watching via internet. The Games Day was again a huge success - many thanks to the organization team and their tremendous effort. In the winter term the Games Day will be held together with the Media Ethics Award and we will discuss the social relevance of games and virtual worlds with many well-known speakers. For a games presentation during this event I am looking for active gamers willing to demonstrate their favorite game to the public. Adventures, shooters, strategy games, MMOGs etc. are welcome! Please send me a short note if you are interested (WOW level > 50 required (;-))

Beyond Fear Tour II

Elke Kniesel and Sebastian Friling took some pictures of our tour.

This year was supposed to bring something new for me: a new bike. The necessary excuse to buy one was quickly found - the yearly field trip week at HDM. After a lot of searching (within me and on mobile.de) I ended up buying an almost new Blade a few days before the planned field trip - and ended the dry period in southern Germany by doing so (in other words: the moment I signed the contract it started raining...).

The rain did not stop us from going on our field trip which took us first to the University of Furtwangen where my friend Friedbert Kaspar was waiting for us. We wanted to talk about security initiatives at Furtwangen and also compare our bachelor and master programs a little. Alas - the visit was cut short because we were almost 3 hours late due to excessive rainfall between Stuttgart and Furtwangen. Still we learnt some interesting things about the computer science master at Furtwangen (which is only 3 semesters and gives diploma students a one semester break). And we heard some disturbing news on the acceptance of bachelor programs consisting of only 6 semesters by other countries. We will surely stay in touch with Friedbert on those topics.

We then got back into our wet overalls and gloves and started on our way to Ballrechten-Dottingen in the area called "Markgräflerland". The warm wind coming up the Höllental was welcome. The weather got much better, and after dodging some enemy fire (aka radar guns) we made it to our pension - the Gästehaus Schwab in Ballrechten-Dottingen. The day ended with a short walk to a local restaurant run by a farmer - a so-called "Strausswirtschaft" - where we had Schnitzel and asparagus.

After a wonderful breakfast we started towards Zurich and UBS, where the section of Christian Kunth was expecting us. We were supposed to hear a talk on defensive measures taken by UBS to protect the banking infrastructure from external attacks (DDOS etc.). Such attacks can have serious consequences for the bank due to its responsibilities towards international customers. The presentations included some information on intrusion detection as well. Christian Kunth, Victor Fieldhouse and our own MI alumnus Mathias Schmidt did a very good job in explaining attacks and countermeasures.

After some coffee we went up one floor to the section of Andreas Hofmann, where intranet and internet platforms for the bank are being developed and maintained. We got several short presentations on content management features and ended up with a longer discussion on testing, test environments etc.

A couple of hours later it was time to say goodbye and we headed back to Ballrechten-Dottingen. The plan was to visit Freiburg in the evening, but alas - once we had dinner at a nice restaurant at the end of the "Albtal" - a wonderful route for bikers - we were all suddenly getting really tired and came up with an alternative: let's get some beer and stay at the pension. We did exactly that and had a nice evening together.

In the morning some had to head back to Stuttgart right away and the rest went on a short tour into the French Vosges. Our goal was the Hartmannsweilerkopf - a famous battlefield of World War One, where the French and German generals killed thousands of young people trying to gain control of a small mountain top. The battlefield installations are kept in a rather original state and the way to the center is crowded with craters. A rather sobering visit, and some good lessons on security and safety can be learned from it: security at the state level is always something that touches several others. There is no security for one state only, because it means insecurity for the others. And like today's race between hackers and IT security, there was a race between those who tried to dig themselves in and those who came up with ever stronger bombs to crack the bunkers. The battlefield shows the usual madness of WWI battles: enemy trenches only a few meters apart, with the occasional visit to play cards - interrupted by fierce attacks once a lieutenant wanted to do something for his career.

After the battlefield we had lunch at Munster (tarte flambée) and afterwards we split into a fast group heading back to Stuttgart via the highway and a slower group heading back via country roads.

One of the side-effects of our tour was a test of several gizmos and gadgets like GPS, Navi etc. Sebastian Friling served as our tour guide most of the time and did a wonderful job - many thanks to him. We did make some plans for next year which included e.g. Tuscany. Does anybody know some artists at the HDM who also ride a bike?

The "people" problem

Computer science seems to have discovered what Peter Morville calls the "people" problem in his latest book on "ambient findability" - which is quite nice actually and was brought to my attention by my friend Sam Anderegg. The author clearly states that computer science has long envisioned a user that does not really exist: a rational being that makes rational choices and e.g. performs rational searches. But since the problem of relevance seems so hard to solve in information retrieval, we finally had to face the human factors: what's relevant to you does not have to be relevant to me and vice versa.

More authors are discovering the "people" problem. Bruce Schneier e.g. recently posted a draft on psychology and security where he puts a lot of emphasis on pointing out that we tend to deal with risks in the same way as our ancestors did. Unfortunately the world has changed a bit since then, and we now fall prey to politicians who use our weaknesses in judging risks for their own purposes.

Typical examples where we judge risks in an unreasonable way are: rare but horrible risks are overestimated, the common risk (like driving a car) is vastly underestimated. As soon as our emotions find something good in a bad thing we tend to downgrade the risk. Even the order of words describing a risk seems to affect our judgement.

But finally I guess we will have to concede that risk judgements are inevitably relative to a person's beliefs and other emotional systems, and sometimes defy rational classification. The lower right corner of the usual risk-consequences diagram - cases with low probability but horrible consequences - is just one example of this.

Usability and security is certainly an area where the usual people concept of computer science seems a little too thin. It starts with wrong abstractions: our operating systems offer users things like TCP/IP but fail to offer a system-wide concept of e-mail addresses. The e-mail address is of quite some importance for users, but the systems treat it as a mere string. We also need to stop calling something a security issue when we actually mean safety.

Another example is "slanty" design, whose credo is to sometimes take usability away. This may sound like a sacrilege to usability specialists firmly rooted in "user centered design", which claims that the user is always right. He is not! Slanty design tries to correct this by building systems that make the bad way harder to use. The proponents give a nice example: the carousel in airports where people pick up their luggage is usually very crowded. People who already got one piece of their luggage - usually some kind of trolley - are now waiting for their other pieces. Unfortunately they do not step back while waiting, but block the other people from getting to their luggage. Airports have tried many things: signs asking people to step back, courtesy lines, and bands which last only until two brave souls decide to storm them. Slanty design goes down a different path: give the immediate area surrounding the pick-up zone a slight slope so that trolleys tend to roll away, and cover the floor with rubber bristles which make standing there kind of uncomfortable. People will automatically step back without signs or lines.

It is also "slanty" design to require users who want to install an application to put it into a specific place via drag and drop. This prevents the accidental "double click" resulting in the automatic installation of a trojan.

Security-Industrial Complex

I've had some rather frightening revelations lately, one of them being about the SIC, the Security-Industrial Complex. I don't know exactly what caused this: was it the OmniNerd paper on operating system vulnerabilities in 2006, which concluded that neither Windows nor Mac OS are safe "out of the box" - exactly those systems which are targeted at a large population of rather amateur users? Or was it the permanent attacks of politicians on human rights using IT technology? Or perhaps the title of the BSI IT-Security conference in May: can security increase business?

Anyway, something must have triggered the analogy to the military-industrial complex, which successfully manages to extract billions and billions from taxpayers worldwide - who get the offer of being killed in a war as their return on investment (ROI). There is now a large industry catering to the security-scared population of small and medium businesses and enterprises as well as the common home user. If the analogy to the military-industrial complex is true, then the SIC cannot really be interested in safe and secure systems. It would kill its business.

Let's just assume for a short moment that this is really the case. What kind of behavior would such a SIC show?

  • It would show no real interest in fundamentally safe and secure systems - just like the MIC does not have an interest in peace. Therefore it would actively try to prevent significant improvements, as the recent Symantec-MS discussion on kernel features showed: improvements to the kernel architecture would make certain add-ons unnecessary. It is extremely hard to kill a business that generates billions per year.

  • The SIC would focus on tools and technologies which do NOT solve the fundamental problems - like virus checkers and firewalls. Those tools and technologies are NOT a law of nature in IT but a sign of significant deficits in systems as built and delivered! The same goes, to a certain degree, for intrusion detection systems.

  • Like a good MIC, the SIC would run FUD campaigns trying to scare people into investing in its tools and technologies. Dire warnings are needed to achieve a situation where common business people, when traveling, talk about the firewalls and virus checkers on their machines. Not exactly the business level of abstraction, I would assume.

  • It won't work without political and legal support: Companies and organizations need to be convinced that there is no life without complete background checks of all employees and future employees being performed - at a nice rebate - by the new crop of background check specialists. It works best when political lobbyists manage to make buying tools and services mandatory.

  • It is beneficial for the SIC if there is no more mention of the SAFETY of a system as delivered to customers. Cars need to go through a technical approval process before they can be sold; IT products don't, and this is a major reason for the security problems: there is NO security without safety, and safety measures kill a lot of security problems. The damaging effects of viruses and trojans are a SAFETY problem because they are caused by a lack of POLA (the principle of least authority) - which is a general software problem in the areas of safety and quality.

  • The SIC will create a large supporting infrastructure consisting of university classes teaching attack prevention measures, small companies running security audits and penetration tests, certification processes for the security specialist and so on. This increases the number of people not at all interested in fundamental improvements.

The conclusion is rather scary: all the developments mentioned above are REAL. And if we apply the duck principle (if it quacks like a duck, waddles like a duck, flies like a duck and generally behaves like a duck it IS A DUCK) we can only conclude that there is a large SIC already in place and chances for fundamentally safe systems and applications are getting slim - just like the chances for peace in the presence of the military-industrial complex.

A few remarks on the relation between security and business. It is true that the third world economy suffers from a lack of trusted procedures with respect to money, investment etc. And this might hurt them more than technological deficits. A good case for the importance of trust and trusted systems is made in "The Digital Path" by the capability people (Miller, Stiegler etc.).

Does insecurity hurt the IT business? Not really. As long as the software industry manages to push the risk of their software towards the customers, the risks and damages do not show up on their balance sheets. Is this going to change? Not likely, as the popular "public-private partnerships" of late show: they are just another word for the economy finally taking over all public structures through their lobbyists. Security investments in actually unnecessary tools and technologies further the gross national product as well! We will have to find a way to reimburse efforts that save/conserve energy or resources without generating additional and useless business transactions.

MI-Stammtisch, Zürich Section

A few pictures from our Zurich branch.

Authority Reduction in Vista

I once said that Microsoft gave up on the idea of a safe home PC and that its focus is on tightly controlled company computers driven by central registries and SMS. Vista has some new features in the area of User Account Control (UAC) which are based on a label-based access control methodology. You might remember the famous *-properties of systems in organizations with classified data: no-read-up, no-write-down policies should prevent data exposure. What are the effects of those new Vista features on end-user security? Joanna Rutkowska gives a very interesting overview of the technology behind them and shows the specific way Microsoft uses label-based access control: it is used to protect the Microsoft core software and NOT to protect user data. The policies are actually the opposite of the classic *-properties. Please note that this does not really implement authority reduction, as it does not restrict the authority of higher levels and even exposes them to luring attacks through lower-level software. Rutkowska describes very interesting implementation quirks of integrity levels in Vista and gives an example of how to configure your system for several users with different integrity levels (all actually belonging to ONE real user). She runs her office applications at the highest level and her browser/mail apps at the lowest level. This really does make sense because it protects the sensitive office apps from lower-level applications which could try to use e.g. Windows messaging to control high-level shells (of course Vista seems to leave enough bugs that this is still possible, but that might actually be related to the inability to block certain calls due to accessibility features which would not work otherwise. Another example of how a design error leads to more and more critical decisions later on...). Anyway, go and read "Running Vista Every Day" by Joanna Rutkowska, which I found via Schneier's Crypto-Gram.
I need to find out how they separate user identity and roles without falling into the environment trap: one user using different accounts for security reasons gets a separate environment for each one (just try iTunes on a PC). The current solution using Integrity Levels (IL) seems to provide some security without creating too much of a usability nightmare for end-users. But due to the granularity problem the security is not really tight.
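The no-write-up policy that Vista's integrity levels roughly implement can be sketched in a few lines. This is a toy model for illustration only, not Vista's actual API; the level names and the check function are my own:

```python
from enum import IntEnum

class IntegrityLevel(IntEnum):
    """Illustrative integrity labels, ordered low to high."""
    LOW = 1      # e.g. browser and mail apps in Rutkowska's setup
    MEDIUM = 2   # default user level
    HIGH = 3     # e.g. the sensitive office apps

def may_write(subject: IntegrityLevel, obj: IntegrityLevel) -> bool:
    """No-write-up: a subject may only modify objects at or below
    its own integrity level (the integrity-protecting mirror image
    of the classic confidentiality *-properties)."""
    return subject >= obj

# A low-integrity browser must not touch a high-integrity document:
assert not may_write(IntegrityLevel.LOW, IntegrityLevel.HIGH)
# The high-integrity office app may write everywhere below it:
assert may_write(IntegrityLevel.HIGH, IntegrityLevel.LOW)
```

Note that, as the text says, this restricts only writes from below; it does nothing to reduce the authority of the higher levels themselves.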

A last comment on this: Vista automatically runs installation programs (detected by some heuristic) at administrator level. This is nonsense of course, because an OS that runs foreign code in admin mode to install stuff won't be able to control the installation. Microsoft knows this very well, as they propose the opposite (declarative installation) in their research OS Singularity, and OSGi does the same. Users are required to enter the admin password to change into admin mode: this is another nonsense, as all that is needed at this moment is a trusted path through which the OS can get proof that a) a user is really requesting the change (not a program) and b) this user has been authorized to do the change (which is a configuration setting done by some administrator before). There is NO reason at all that users need to know or enter the admin password (which is a functional ID btw., and bad for that reason too).

The computer as an appliance, the limits of testing - reflections on our Security Day

Like the Games Day and the Web2.0 Day, the presentations on our Security Day gave us lots of things to ponder. It started with my friend and colleague Roland Schmitz telling us about Digital Rights Management and its special form of mobile DRM. Current DRM approaches seem to suffer from a lot of problems, like interoperability, usability (in case of lost or changed hardware) and so on. There seem to be two current ways of thinking towards the protection of digital content. The first: turn the PC from a general computing device into an appliance that is tightly controlled by the industry, much like a TV today. In this environment the use of content at the end-user site can be tightly controlled as well. Unfortunately neither the hardware support needed (Trusted Computing Platform) nor the standards for interoperable use of such content exist yet.

The market is full of strange solutions like the Microsoft Zune player, which does not use Microsoft's own DRM solution for Windows but comes with its own. Much fuss is made around the topic of "downloading" and "copying" protected content. What happens in the case of songs given as gifts? What happens if somebody inherits a lot of protected content - will he be able to use it? Or is all content lost after the owner dies?

And last but not least there is the problem of privacy protection which suffers when vendors can trace the use of content.

Anyway, this first approach tries to overcome the problems Ed Felten mentioned in his talk on digital media, copying etc. that we saw last term in Current Topics. The PC is so powerful that it can be used for anything - like copying digital content. And it is hard to turn it into an appliance.

There is a lot of fear about this approach as well. Windows Vista is able not only to prevent the copying of protected digital content: given an end-to-end protection scheme within the hardware used for playback, it can even prevent protected content from being played through ANALOG channels.

Mobile DRM tries to restrict the use of protected content on smartphones, PDAs etc. Those machines typically come with only a small amount of permanent storage which makes them unfit for large archives. And it raises the question whether DRM in general might not be such a good idea anyway.

Let's assume a scenario where we are "always online" with our mobile devices. Do we really need and want an archive of "downloaded" songs? Or would we rather use music and video content in an "on demand" way through streaming? Let's further assume a flatrate for music use as well. Then why do we want an archive anyway? To hand it over as our heritage to our children? It is not very likely that they will have the same taste and preferences toward content as we do.

Given fast and broad connectivity, the existence of large archives in homes which replicate the same content over and over seems archaic. Why bother? Our time is getting more and more valuable and maintaining archives is not the best way to use it. And our archive does not work collaboratively: it does not tell us about preferences of friends or titles that fit well to our preferences.

But there is the issue of privacy. A web service might tell us about new songs that we might like - at the price of our privacy. In the future we will permanently be forced to choose between comfort and privacy - I guess this dualism will be a major force in marketing efforts of most companies (and perhaps the state as well): make the intended solution look like it is the most comfortable one and people will trade in their personal rights en masse.

After the talk on DRM, Juergen Butz showed us some of the results of his conceptual study on mobile security in the enterprise. The number of devices, systems, protocols and attack vectors in this field is mind-boggling. He briefly discussed approaches for general low-level authentication through 802.1X protocols (which are not really as safe as they look because they leave ports open after initial authentication: a hub connected at the client side would let other devices use the same port as well), and the use of TNC to validate a client's platform software and system status. If this works, a server can rely on the fact that the company rules have been enforced at the client side.

Again another approach to restrict the PC as a general computing platform through enforcement of global (or company wide) rules.

Usability and security are in serious conflict in the area of mobile security. To achieve some degree of safety, users must forget about their own USB sticks and accept a lot of rules. Actually, user training is considered essential to achieve security with mobile devices.

Juergen Butz mentioned an interesting idea: mobile technology can also be used to increase the safety of machines, e.g. by informing client machines (which might be disconnected from the main company net for a longer time) to update their software due to new exploits. In general, UMTS or GPRS seems to be the technology of choice for mobile systems: they let client machines connect through public channels instead of using wireless networks from other sources, e.g. partner sites.

This is all well, but I doubt that small and medium-sized companies will spend the money and time on implementing many of the required changes. They lack the experience and money. This again raises the question: why do companies use general-purpose computers like PCs just to worry permanently about their safety? We have seen other approaches in the history of IT: X-Terminals, 3270 terminals etc. Perhaps moving back to a tightly controlled mainframe and many rather dumb terminals is not such a bad idea in mission-critical environments.

Right after Juergen, Tobias Knecht from 1and1 - the well-known provider which is now part of United Internet - gave us a demonstration of the "underground economy": people living off the illegal use of the internet and the web. He showed us the unbelievable amount of data that is collected at 1and1 just through their mail server logs, and the huge number of daily spam attacks. An essential part of the work of the "abuse" team at 1and1, where Tobias Knecht is a member, is fast response to attacks: not within hours but within minutes of a spam attack.

He took us on a tour of protective measures, like maintaining large numbers of honeypots to study current attacks in order to react on time. And yes, root servers are a problem because the customers are responsible for their safety. But 1and1 is able to detect a spamming root server within minutes and can take it down in this case.

The most fascinating part of his talk was on the underground society and its participants, rules and business conduct. There is considerable professionalism within those groups, with a high degree of division of labour: the inventors of new attacks, the assemblers of attacks, the vendors, the cashiers, and last but not least the drop points where "stolen" goods are sent and distributed world-wide.

The amount of credit card data flowing through IRC channels is staggering. There are roughly 35-45 IRC servers dedicated to illegal activities, and each one serves thousands of channels. One channel was monitored for 24 hours, and the bank account balances of customers exposed through that data came close to 1.6 million euro. And that was just one channel.

I strongly recommend taking a look at his talk in the video stream that we captured.

The talks in the morning of our event were closed by Mr. Strobel from cirosec security. He explained how his company helps other companies evaluate and test their infrastructure and applications through penetration tests and scans.

He gave an overview of typical application-level programming errors that lead to attack vectors, and he strongly recommended the use of web application firewalls (WAFs). Those firewalls seem to be able to "learn" the right protocols and will take action once e.g. new parameters show up in a form. It requires careful training of the WAF though.
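The "learning" behaviour can be illustrated with a toy sketch. This is my own illustration, not any real WAF product; the class and method names are invented:

```python
from collections import defaultdict

class LearningWAF:
    """Toy 'learning' web application firewall: during a training
    phase it records which parameters each URL legitimately uses;
    in enforcement mode any previously unseen parameter is flagged."""

    def __init__(self):
        self.known = defaultdict(set)  # url -> set of parameter names
        self.learning = True

    def observe(self, url, params):
        """Returns the list of suspicious (unknown) parameters."""
        if self.learning:
            self.known[url] |= set(params)   # record as legitimate
            return []
        return [p for p in params if p not in self.known[url]]

waf = LearningWAF()
waf.observe("/login", {"user", "password"})  # training traffic
waf.learning = False                         # switch to enforcement
print(waf.observe("/login", {"user", "password", "debug"}))  # → ['debug']
```

A real WAF would of course also learn value formats, lengths and encodings, which is exactly why careful training matters: anything not seen during training gets blocked later.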

A very interesting point was his mention of model checking approaches, e.g. trying to check the correctness of a firewall configuration by tracing paths through it and the associated infrastructure. This method is close to confinement analysis in take/grant systems and is ultimately based on graph traversal.

Even checking whether a certain security policy fits to the implementation in firewalls seems to be possible using a model checker approach.
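The graph-traversal core of such checks can be sketched as a simple reachability test over allowed flows. The ruleset below is hypothetical and the code is my own minimal sketch, not cirosec's tool:

```python
from collections import deque

def reachable(edges, src, dst):
    """BFS over allowed flows: True if traffic from `src` can reach
    `dst` through any chain of firewall-permitted hops."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, []).append(b)
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Hypothetical ruleset: internet -> dmz, dmz -> app, app -> db
rules = [("internet", "dmz"), ("dmz", "app"), ("app", "db")]

# The policy "the internet must never reach the database" is
# violated, because a transitive path exists through dmz and app:
assert reachable(rules, "internet", "db")
```

Checking a policy then amounts to asserting that no forbidden (source, destination) pair is reachable; a model checker does essentially this over a much richer state space (ports, protocols, NAT).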

Why is model checking such an interesting technology? There are several reasons. The first one is obviously the complexity of rules and environments, which makes a complete empirical test rather cumbersome or even impossible. But another and more important reason is that Mr. Strobel reported that 75% of all activities in the evaluation of application-level security are done MANUALLY.

This has rather dire consequences. If a team that performs penetration testing takes about 10 days to come to a conclusion, this is a) rather costly and b) cannot be repeated as soon as the software changes. This means that for extended periods of time an application does not go through penetration testing, simply for money reasons.

While this may sound like a purely theoretical threat, I unfortunately was on the receiving end of such a problem only last Monday: our web application (which had been penetration-tested successfully) exposed an XSS vulnerability in a routine that had been added after the tests (in a quick and dirty manner, working around our usual mandatory schema definitions and checks).

The funny thing is: we do employ input checking technology at our front-end proxy servers - but perhaps not enough. This is also a strong argument for defense in depth through several places of validation within an infrastructure. The point where I did not quite agree with Mr. Strobel was that I still believe an application should define the grammar and semantics of its input/output language and not rely on WAFs "learning" the protocol. But I definitely need to look into this technology a bit more.

And this was surely not the last time that we had Mr. Strobel with us.

The afternoon started with Michael Schmidt reporting on his thesis work at UBS in the area of host intrusion detection (HIDS). He is looking more and more at both NIDS and HIDS from the same abstract point of view, and the differences seem to get smaller. Unfortunately I missed most of his talk due to a student exam, but when I came back he showed us how to create a HIDS in five minutes for free: by laying traps of all kinds (/dev/nit e.g. is turned into a device driver that simply shuts down the machine if somebody uses it). Many tricks use the fact that only the admins of a machine know what is real and what is a trap (like a faked PHP script with supposedly well-known vulnerabilities: calling it simply reports the fact to the HIDS). Really funny and quite clever. I will tell more about his work in the future.
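The trap idea can be sketched in a few lines. This is my own illustration, not Michael Schmidt's code; the decoy path is invented, and a real deployment would wire such a handler into the web server or kernel rather than call it directly:

```python
import logging

logging.basicConfig(level=logging.WARNING)

def make_trap(name, alert=logging.warning):
    """Build a decoy handler for a resource that no legitimate
    software uses. Since only the admins know `name` is fake,
    ANY invocation is by definition an intrusion signal."""
    def trap(*args, **kwargs):
        alert("HIDS trap %r triggered with args=%r", name, args)
        return "Internal Server Error"   # plausible-looking response
    return trap

# Decoy mimicking a well-known vulnerable script (hypothetical path):
fake_vuln_script = make_trap("/cgi-bin/phpMyAdmin/setup.php")
fake_vuln_script("attack-payload")       # fires the alert
```

The charm of this approach is its cost: no signatures, no baselining, essentially zero false positives - exactly the "HIDS in five minutes for free" spirit of the talk.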

Next came Thomas Müller with his preview on Vista security through trusted platform modules. He is currently working on a thesis in this area and has been invited to talk at the BSI conference on this topic - congratulations. He gave an overview of the (few) uses of TPM in Vista, which basically boil down to secure boot and disk encryption. He also showed quite a number of weak points in the implementation, and the use of virtualization raises lots of questions.

But protecting systems with TPM is a very complex topic. What happens if somebody uses removable discs? What happens if more than one OS is used on a platform? Thomas will give us a final report in a couple of months when he has finished his investigations.

Our security day ended with Daniel Baier explaining the use of qualified digital signatures. There are some surprises in store for everybody. What do you think about a PKI implementation that creates the private keys for its users? Those keys are then shipped to the customers. This is quite comfortable, but nevertheless it breaks everything PKI stands for: PKI is based on the fact that NOBODY except the owner of a private key knows that key. (I am not talking about key escrow for encryption purposes in companies - this is OK, as nobody can use such a key to impersonate somebody. But there is NO reason whatsoever to share a private key. Not with anybody. And certainly not with some state or private authority.)

But perhaps the reason for sharing of private keys is just the same kind of secret that is behind RFID chips in passports and ID cards? What could be the reasons except to spy on the citizens through remote reading of personalized and biometric data? A smartcard chip based solution would have been much safer and much cheaper. This is just another proof for the theory of the "citizen is the enemy" that seems so popular with our governments nowadays. Daniel also showed nice use cases for digital signatures and there is no doubt that the security of transactions could be increased considerably by using them.

Did you know that the issuer of qualified digital signatures can be made to pay for faked signatures? And that the law requires the use of QUALIFIED e-signatures - signatures created through certified authorities - for use in legal cases?

When the Security Day came to an end, everybody seemed to be quite content with its results. The talks were excellent and we seem to have attracted new visitors as well. Where we will have to improve is in our notifications to our friends in the industry. In other words: we need a decent newsletter informing them about our events.

Copyright and Copyleft

I have changed the copyright for my site to the Creative Commons Attribution License 2.5. To understand why, read Peter Gutmann's "The Cost of Vista Content Protection" and Lawrence Lessig's fight against copyright-based monopolies.