Plat_Forms 2012 Announcement

Plat_Forms 2012 in April canceled

The content below was written for a run of Plat_Forms planned for April 2012, which we had to cancel. Some of the content is therefore now obsolete. Please see Plat_Forms 2012 for details.

Abstract

"Plat_Forms" is a competition in which top-class teams of three programmers compete to implement the same requirements for a web-based system within two days, using different technology platforms (say, Java EE, .NET, PHP, Perl, Python or Ruby). The results will provide new insights into the real (rather than purported) pros, cons, and emergent properties of each platform. The evaluation will analyze many aspects of each solution, both external (usability, functionality, reliability, performance, etc.) and internal (structure, understandability, flexibility, etc.).

Plat_Forms was run successfully in 2007 with 9 teams and in 2011 with 16 teams. Plat_Forms 2012 will take place on April 3rd and 4th in Berlin, Germany. It will focus on scalability and cloud computing and will involve hardly any front-end programming. Information on how to apply can be found below.

Contents

  1. Overview
  2. How the contest will proceed
  3. Do we really need another contest?
    1. The platform decision situation
    2. What we need
    3. Why it is still missing
    4. What we've gotten so far
  4. How to apply
  5. Infrastructure
  6. The task
  7. Rules of behavior
    1. What is allowed
    2. What is not allowed
  8. Semi-public review
  9. Results hand-over
  10. Explaining what you did
  11. Evaluation and "winning"
  12. What's in it for the teams and their home organizations?
  13. References

1. Overview

Software development platforms for web applications (such as Java EE, .NET, PHP, Perl, Python, Ruby, etc.) are a critical factor in development productivity today. The pros and cons of the various platforms are by and large known in principle, but how the pros trade off against the cons within one platform, and how that trade-off compares to other platforms, is the subject of quasi-religious wars rather than objective analysis, because almost no data is available that allows such a direct comparison.

Plat_Forms is a contest that will change this. It will have top-class (and hence comparable) teams of 3 programmers implement the same specification of a web-based application under the same circumstances and thus create a basis for objectively comparing the characteristics that the various platforms produce.

In just 2 days, the teams will implement all of the requested functionality and optimize the usefulness of the resulting system (functionality, reliability, etc.), the understandability of the code, the modifiability of the system design, the efficiency, and, most importantly, scalability and availability. The teams will implement a small web service interface on Amazon Web Services that has to scale to thousands of simultaneous requests. See "The task" below for details.

The contest will be conducted on April 3-4, 2012. At the end of the 2 days, the teams hand over their source code, version archive, solution infrastructure, and documentation of what they did.

These items will then be subject to a thorough evaluation according to scientific standards with respect to the criteria mentioned above. As most of the results cannot be quantified and many cannot even be ranked in a neutral fashion, there will be no one-dimensional result ranking of the systems. Rather, there will be an extensive report describing the findings. Depending on the results, the organizers may or may not declare one or a few of the systems and teams winners with respect to one particular criterion (and do so for some or all of the criteria).

The contest is organized by Freie Universität Berlin, the Open Source Business Foundation, and iX magazine.

2. How the contest will proceed

  • Before Fri 2012-03-16 (deadline extended from Wed 2012-02-29): Teams apply for participation in the contest as described under "How to apply" below
  • Fri 2012-03-02: Teams are notified whether they will be admitted to the contest. At most four teams per platform will be admitted and at most six platforms; up to 24 teams overall. Accepted teams will then receive accounts for Amazon Web Services in order to prepare their infrastructure for the contest.
  • Mon 2012-04-02: Teams set up their development environments at the contest site. For details, see "Infrastructure" below.
  • 2012-04-03, 9:00: The contest starts. The organizers will explain (in a presentation format) the requirements of the system to be developed, will hand out a short document containing the details, and will answer any immediate questions that may arise. See "The task" below.
  • 2012-04-03, 10:00: The teams start developing the software using their favorite platform and tools. Reusing existing software and reading the Web is allowed; getting external help is not. For details, see "Rules of behavior" below. The teams are asked to make intermediate versions accessible for user feedback; see "Semi-public review" below.
  • 2012-04-04, 18:00: The teams stop developing software and hand over their results. For details, see "Results hand-over" below. Teams that believe they have reached the best cost-benefit ratio of their development before the allotted time is over are allowed to hand over their results earlier and will have a shorter work time recorded.
  • 2012-04-18, 17:00 (that is, 15:00 UTC): The teams submit post-hoc design documentation. For details, see "Explaining what you did" below.
  • 2012-04-18: Evaluation of the systems starts. It will investigate all categories of quality criteria, both internal and external. For details, see "Evaluation and winning" below.
  • End of 2012 / early 2013: Results of the contest will be presented. The details of when, where, and how are still to be determined. See "What's in it for the teams and their home organizations?" for what is known already.

3. Do we really need another contest?

Absolutely. But it is not "another"; it is the only one of its kind.

3.1. The platform decision situation

Every year, several hundred million dollars are spent on building web-based applications, yet nobody can be quite sure in which cases which platform or technology is the best choice. Quasi-religious wars prevail.

Some platforms are often claimed to yield better performance than others, but nobody can be quite sure how big the difference actually is.

Some platforms are often claimed to yield higher productivity in initial development than others, but nobody can be quite sure how big the difference actually is.

Some platforms are often claimed to yield better modifiability during maintenance than others, but nobody can be quite sure how big the difference actually is -- or if it really exists at all.

So as a program manager one can almost consider oneself lucky if a company standard prescribes one platform (or if expertise is available only for one), so that the difficult choice need not be made.

However, that means that many (if not most) projects may use a sub-optimal platform -- which is hardly acceptable for an industry that claims to be based on hard knowledge and provable facts.

3.2. What we need

What we need is a direct comparison of the platforms under realistic constraints: a task that is not trivial, constrained development time, and the need to balance all of the various quality attributes in a sensible way.

3.3. Why it is still missing

So if such a comparison is so important, why is nobody else doing it?

Because it is difficult. To do it, you need:

  • participant teams rather than individuals, or else the setting will not be realistic;
  • top-class participants, or else you will compare them rather than the platforms;
  • a development task that is reasonably typical, or else the result may not generalize;
  • a development task that is not too typical, or else you will merely measure which of the participants happened to have a well-fitting previous implementation at hand;
  • participant teams that accept the challenge of implementing, on the spot, something they do not know in advance;
  • the infrastructure and staff to accommodate and supervise a significant number of such teams at once;
  • an evaluation team that is capable of handling a heterogeneous set of technologies (which is a nightmare);
  • an evaluation team that dares to compare these fairly different technologies in a sensible yet neutral way.

3.4. What we've gotten so far

For these reasons, all previous platform comparisons have been rather limited. Several, such as the c't Database Contest or the SPECweb2005 benchmark, concentrate on a single quality dimension (typically performance) and give participants unlimited time to prepare their submission. Others, such as the language comparison study by Prechelt [3][4], are broader in what they look at and may even consider development time, but use tasks too small to be relevant.

In 2007 we carried out the first instance of the Plat_Forms contest, which was a great success. Details on the results, including the complete technical report, can be found on the Plat_Forms 2007 results page.

In 2011 we repeated the contest, this time with four platforms (Java, Perl, PHP and Ruby) and 16 teams. We were able to confirm some of the 2007 findings and observed some new ones. See the Plat_Forms 2011 results page for details.

The 2012 execution will move the focus away from the HTML user interface of a simple web application towards the high availability and strong scalability promised by cloud computing.

4. How to apply

At most four teams per platform will be admitted to the contest. It is not the purpose of the contest to compare the competence of the teams; we will therefore strive to get the best possible teams for each platform to make it more likely that significant differences observed in the final systems can be attributed to the technology rather than the people.

Teams interested in participating should apply by sending a Request for Admittance in the form of the 3-page application form at https://www.plat-forms.org/platforms-2012-application. Multiple teams from the same organization are allowed if (and only if) they apply for different platforms.

Teams must agree that the result of their work (but not frameworks etc. that they bring along) will be released under an open source license.

From among the applications, teams will be selected so as to maximize their expected performance. So please include in your application information that helps us judge what performance we can expect from you -- but do not exaggerate or you may publicly embarrass yourself. The selection is intended to be made with the help of a contest committee, for which we will invite an expert representative of each platform.

5. Infrastructure

The following information is preliminary. We will provide an update with possible modifications of some details two weeks before the contest.

At the contest, each team will be provided with roughly the following infrastructure:

  • Electrical power (230 V, German-style Schuko sockets)
  • chairs and tables
  • an internet connection via an RJ45 connector serving Ethernet. The available bandwidth is not yet known (we hope for a 1 Gbit/s link); bandwidth management will probably be on a best-effort basis.
  • sufficient food and drink

All teams will work in a few large conference rooms on the same hallway.

Things you need to bring yourself:

  • computers, monitors, keyboards, mice,
  • a server computer if required,
  • network cables, network switch/hub,
  • some medium to hand over your results (see Results hand-over below),
  • perhaps printer, printer paper,
  • pens, markers, scissors, adhesive tape as needed,
  • perhaps desk lamp, pillow, inflatable armchair etc.
  • coffee mug,
  • backup coffee mug.

6. The task

We will obviously not tell you right now in full detail what the development task will be. However, here are some considerations that guide our choice of task:

  • It will be a web-based application with a simple RESTful web service interface.
  • It will be some kind of messaging service, where users can send each other messages.
  • It will require persistent storage of data.
  • It will require scalability. This is to be achieved simply by running the service on multiple nodes (computers) and keeping the implementation stateless (i.e., if a single node fails, the system as a whole does not fail; only that node's current requests are lost, but no other data).
  • You can use the simple services provided by the Amazon Web Services infrastructure for scalable persistence (e.g. S3, DynamoDB) and load balancing. (These services are well-documented and can be learned in a day.) A minimal sketch of what such a stateless service might look like follows this list.
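The following sketch is purely illustrative and not part of the task specification: it shows one way a stateless RESTful messaging endpoint might be written so that any number of identical nodes can run behind a load balancer. Flask and boto3, the table name "Messages", and the URL scheme are our own assumptions for the example.

    # Illustrative sketch only: a stateless messaging endpoint backed by DynamoDB.
    # Assumptions: Flask for HTTP, boto3 for AWS access, a DynamoDB table "Messages"
    # with partition key "recipient" and sort key "sent_at" (all hypothetical).
    import time
    import uuid

    import boto3
    from boto3.dynamodb.conditions import Key
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    table = boto3.resource("dynamodb").Table("Messages")


    @app.route("/users/<recipient>/messages", methods=["POST"])
    def send_message(recipient):
        payload = request.get_json(force=True)
        item = {
            "recipient": recipient,
            "sent_at": str(time.time()),
            "id": str(uuid.uuid4()),
            "sender": payload["sender"],
            "text": payload["text"],
        }
        table.put_item(Item=item)  # all state lives in DynamoDB, not on the node
        return jsonify(item), 201


    @app.route("/users/<recipient>/messages", methods=["GET"])
    def list_messages(recipient):
        result = table.query(KeyConditionExpression=Key("recipient").eq(recipient))
        return jsonify(result["Items"])


    if __name__ == "__main__":
        # Any number of identical processes can run behind a load balancer because
        # no request depends on node-local state.
        app.run(host="0.0.0.0", port=8080)

The point of the sketch is the statelessness: every request can be served by any node, because all persistent data lives in the shared store.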

In your solution you should strive for a good balance of all quality attributes.

7. Rules of behavior

7.1. What is allowed

During the contest you may:

  • Use any language, tool, middleware, library, framework, and other software you find helpful (just please mention as many of these as you can foresee in your application).
  • Use any Amazon Web Service you need.
  • Reuse any piece of any pre-existing application or any other helpful information you have yourself or can find on the web yourself. Anything that already existed on the day before the contest starts is acceptable.
  • Use any development process you deem useful.
  • Ask the organizer (who is acting like a customer) any question you like regarding the requirements and priorities.

7.2. What is not allowed

During the contest you may not:

  • Disturb other teams in their work.
  • Send contest-related email to people not on your team or transfer the requirements description (or parts thereof) to people not on your team.
  • Have people from outside of your team help you or "reuse" work products from other teams. There are two exceptions to this rule: you may use answers from the customer and user-level feedback from the semi-public review as described below.
  • Run infrastructure services, such as load balancers, memory caches or database systems, on your own machines. You must use Amazon Web Services for those.

8. Semi-public review

During the contest, teams will be able to obtain feedback from the internet public if they wish to do so. For this purpose, a team should open its test system for public access.

The organizers will put up a blog where the teams can announce their release plan (if any), releases, and access URLs, and where anybody can comment on the prototype systems. The teams are allowed to use this user-level feedback for improving their system. They are not allowed to take or use code-level information.

9. Results hand-over

The technology used for building the systems in the contest will be very heterogeneous. It would therefore be impractical for the contest organizers to try to execute them from source code alone, let alone to obtain similar behavior in a performance test.

We thus require each team to deploy their solution as follows:

  • It runs solely on Amazon Web Services infrastructure.
  • If you have set up any special kind of authentication to any parts of your infrastructure, you are required to hand over the necessary credentials.
  • You must hand over documentation describing the architecture of your solution, the services involved, etc., from which we must be able to make changes to your solution, e.g. adding new nodes in order to support a higher number of concurrent users (a hypothetical illustration of such a change follows this list). See Explaining what you did below.
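Purely as an illustration of what "adding new nodes" might involve (not a prescription of how your solution must work), launching another instance from a machine image and registering it with a classic Elastic Load Balancer could look roughly like this; the AMI ID, instance type, and load balancer name are hypothetical placeholders that your documentation would have to supply.

    # Hypothetical sketch: add one more node and register it with a load balancer.
    import boto3

    ec2 = boto3.resource("ec2")
    elb = boto3.client("elb")

    # "ami-12345678" and "team-xyz-lb" are placeholders, not real identifiers.
    instance = ec2.create_instances(
        ImageId="ami-12345678",
        InstanceType="m1.small",
        MinCount=1,
        MaxCount=1,
    )[0]
    instance.wait_until_running()

    elb.register_instances_with_load_balancer(
        LoadBalancerName="team-xyz-lb",
        Instances=[{"InstanceId": instance.id}],
    )
    print("Added node", instance.id)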

In addition to the documentation you need to hand over a file that is an archive (zip or tar.gz) containing a snapshot of all source artifacts (source code, build files, database initialization scripts, configuration files, etc.) that are part of the solution. The contents of this archive must be sufficient in principle to recreate your solution from scratch, given the infrastructure software (such as operating system, build tools, application server etc.). Furthermore, a second file must contain your whole source code version archive so the organizers can analyze some aspects of the development process.

The files can be handed over on any medium that can be read on a Windows or Linux system. This can be as simple as giving us a download URL, a git pull request, a single DVD-R or a USB stick with a FAT or NTFS file system.

At the time of the result handover, the teams will also send a cryptographic fingerprint of the handed-over archive files to the organizers by email, so that a replacement medium can be accepted should the original medium fail to be readable (please keep your files around!).
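The contest rules do not prescribe a particular hash function; SHA-256 is one reasonable choice and can be computed with a few lines of Python, as in the sketch below. The file names used here are hypothetical placeholders for whatever you actually hand over.

    # Compute a SHA-256 fingerprint for each hand-over file (names are placeholders).
    import hashlib

    def fingerprint(path, chunk_size=1 << 20):
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    for name in ("solution-snapshot.tar.gz", "version-archive.tar.gz"):
        print(name, fingerprint(name))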

10. Explaining what you did

Both source code and build/configuration/deployment of your system are fixed at hand-over time. In addition you will have to prepare and submit a document afterwards that explains the following points:

  • the architecture of your system
  • your approach to development (priorities, implementation orders etc.)
  • the rationale of each important design decision you have identified
  • etc.

The documentation on the architecture of your system is especially important. Without it, we will have a hard time understanding what you did.

11. Evaluation and "winning"

We will attempt to evaluate all of the following aspects of the system and its development:

  • External product characteristics: functionality, ease-of-use, resource usage, scalability, reliability, availability, security, robustness/error checking, etc.
  • Internal product characteristics: structure, modularity, understandability, modifiability (against a number of fixed, pre-determined scenarios), etc.
  • Development process characteristics: Progress over time, order and nature of priority decisions, techniques used, etc.

The details of this evaluation will be determined once we get to see the systems that you built. The evaluation will be performed by the research group of Professor Lutz Prechelt, Freie Universität Berlin.

We will not compare all systems in one single ranking by some silly universal grading scheme. Rather, we will describe and compare the systems according to each aspect individually and also analyze how the aspects appear to influence each other.

Therefore, there may be "winners" of the contest with respect to individual aspects (or small groups of related aspects) where we find salient differences between the platforms or the teams. However, there will not be a single overall winner of the contest.

12. What's in it for the teams and their home organizations?

So why should you participate in the contest if you cannot win it?

Two reasons:

1. Category "riches and beauty": We will probably award (modest) monetary prices, just not across platforms. However, we will nominate a best solution among the solutions on each individual platform.

2. Category "eternal fame": The detailed evaluation will provide the organizations of the well-performing teams and platforms with some of the most impressive marketing material one can think of: concrete, detailed, neutral, and believable.

13. References

  1. c't Database Contest (German call for participation, English call for participation); results described in "Entscheidende Maßnahme", c't 13/06 (in German).
  2. SPECweb2005 benchmark.
  3. Lutz Prechelt. An empirical comparison of C, C++, Java, Perl, Python, Rexx, and Tcl for a search/string-processing program. Technical Report 2000-5, 34 pages, Universität Karlsruhe, Fakultät für Informatik, Germany, March 2000. (The detailed evaluation of the previous study mentioned above).
  4. Lutz Prechelt. An empirical comparison of seven programming languages. IEEE Computer 33(10):23-29, October 2000. (A short summary of [3]).