On June 28th and 29th, I had the opportunity to participate in judging the Open Source Software Development (OSSD) competition at the 2014 National Technology Student Association Conference.
This competition was quite exciting for me — it’s one I wrote in early 2009.
OSSD made its debut at the 2013 national conference, but this was the first year I was able to judge. To summarize, I’m very impressed by the quality of work entered. Several of the entries displayed a level of technical knowledge unmatched in many professionally developed open source projects.
In reviewing and scoring real entries, the other judges (Angela Roller and Rob Little), event coordinator (Brandon Frye), and I had the opportunity to review the competition guidelines and scoring system in context. I’d like to reflect a bit on some personal observations and lessons learned; in the spirit of open source, it seems an open blog post is most fitting (and potentially helpful to students, teachers, and TSA leadership).
Significant credit for information below is owed to the event coordinator and other judges; opinions expressed and any errors are, of course, mine alone.
Project Structure and Development Practices
It seems there’s never been a better time to get involved in open source software. The open source software world has exploded since this competition was written, GitHub and other services to facilitate open source have blossomed, and open source projects have evolved a much more consistent layout and structure. In my role at BitPay, I can attest that having written quality, valuable open source software is among the strongest resume components a prospective developer can possess.
Even though the event guidelines are somewhat flexible in project requirements, many of the projects submitted at this conference followed the same rigorous development, testing, and documentation standards present in today’s most prominent open source software projects. These projects clearly stand out among the entries, but the competition guidelines could probably better define and reward excellence in this area.
Components of a High Quality OSS Project
Before considering potential improvements to the guidelines, let’s nail down some of the components and practices we could expect from an exceptional open source project. Many excellent examples of the following can be seen in the trending repos on GitHub.
Readme
Nearly every type of open source project should have a well-written readme. Without concise documentation, even the best open source projects will be forgotten or recreated. At a minimum, a readme should contain a brief description and “getting started” instructions, and many project readmes go far beyond.
Web Presence
This is how OSS projects get found. A good web presence should succinctly describe the project, explain its usage, and otherwise onboard developers and users. This could be anything from a full-blown, multi-page, carefully-designed website, to a GitHub project page with a well-written readme.md (which, for some projects, would be more effective than a website). The important concept is end-user and developer discoverability.
Source Control & Versioning
All software projects, particularly OSS projects, should use some form of source control. Today, Git is typically the preferred choice for distributed software projects, though there are alternatives. A properly structured OSS project uses Git branching properly, follows some consistent pattern of development, and utilizes Semantic Versioning.
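To make the Semantic Versioning point concrete: release tags follow a MAJOR.MINOR.PATCH scheme, and their precedence can be sketched in a few lines of Python. This is a minimal illustration only; it ignores pre-release and build-metadata suffixes defined by the full specification.

```python
def parse_semver(tag: str) -> tuple[int, int, int]:
    """Parse a 'vMAJOR.MINOR.PATCH' release tag into a comparable tuple.

    A minimal sketch: pre-release and build-metadata suffixes are ignored.
    """
    major, minor, patch = tag.lstrip("v").split(".")
    return (int(major), int(minor), int(patch))

# Tuples compare element-by-element, matching semver precedence:
# a MAJOR bump outranks any MINOR bump, which outranks any PATCH bump.
assert parse_semver("v2.0.0") > parse_semver("v1.9.9")
assert parse_semver("v1.4.1") > parse_semver("v1.4.0")
print(max(["v1.4.1", "v2.0.0", "v1.9.9"], key=parse_semver))  # prints v2.0.0
```

The tuple comparison is the whole trick: because Python compares tuples left to right, the version ordering falls out of the data structure with no extra logic.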
Package Management
Almost all OSS projects should be using some sort of package management to handle dependencies. Most OSS projects geared toward developer consumption should also — themselves — be listed in package managers.
Documentation
Projects should include thorough documentation of all functionality provided by the software. An explanation of methodology is often useful for prospective developers evaluating the project. Where relevant, software performance benchmarks should be included. Projects with APIs should — at a minimum — provide basic documentation for all API endpoints. In many cases, this includes both auto-generated documentation and human-written overviews.
Automated Testing
Stable software is expected to incorporate automated testing so that code contributions can be reviewed quickly and with confidence. OSS projects without adequate tests are practically unusable in many business and otherwise mission-critical applications.
Projects should strive for full code coverage with both unit testing and integration testing. Code-coverage reporting with a solution like Coveralls is typically considered best practice. Continuous integration with a solution like Travis CI is a signature of well-developed OSS projects.
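For illustration, here is a minimal unit test suite using Python’s built-in unittest module (the `slugify` function and test names are invented for the example). CI services like Travis CI essentially run a suite like this on every push and report pass or fail on the contribution.

```python
import unittest

def slugify(title: str) -> str:
    """Example function under test (name is illustrative): title -> URL slug."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Open Source Rocks"), "open-source-rocks")

    def test_extra_whitespace(self):
        self.assertEqual(slugify("  Hello   World "), "hello-world")

# Run the suite programmatically; a CI service would run the equivalent
# command (e.g. `python -m unittest`) automatically on every push.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSlugify)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The value for reviewers is exactly the one described above: a contributor’s pull request either keeps this suite green or it doesn’t, which is far faster to verify than reading every changed line.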
Improving the Competition Guidelines and Scoring
Taking into account the discussion above, we can formulate some ideas for improvement.
In the open source world, projects have to make their case quickly and succinctly to attract users and developers; this is usually the job of the readme (and often a web presence which expands upon the information in the readme). Rather than expecting a separate (one-page, essay-format) project description, it may be more fitting to submit the OSS project’s readme. This rewards project documentation quality in precisely the same way that it’s rewarded in the software industry and open source world. It also incentivizes participants to focus less on eloquent prose and more on substantive documentation.
This may seem less intuitive for types of projects which will not normally have a highly-visible readme: a) projects not aimed at developers, and b) projects with little reason to be open source. Let’s examine these cases a little further.
A — Projects not aimed at developers are primarily marketed to their end users via a website. For these projects, the primary function of the readme is to onboard new project developers. In this case, too, the readme actually gives judges a better basis for assessing both the Documentation and Software Design portions of the event scoring.
B — Several of this year’s entries — by either misinterpretation or lack of project focus — were not open source projects, but rather, documentation of student experiments with implementing open source software (configuring software to follow designs made by others). By clarifying the event guidelines to expect a readme, these projects would be pushed in the right direction.
I’d love to see entries better rewarded for thorough documentation of the software itself. Perhaps the Documentation section of the scoring could include a Software Documentation criterion to account for this.
Development Practices & Automated Testing
I’d also love to see entries better rewarded for excellence in software development practices. The expectations above in Source Control & Versioning and Package Management could be integrated into the Software Development criteria. A new criterion for Automated Testing would help to reward participants for following best practices in that area.
Assumption of Graphical User Interface
Another minor area for improvement is the Aesthetics and Artisanship criterion. Many open source projects won’t have a need for a graphical user interface — terminal programs, code libraries, and other types of projects. Perhaps this particular criterion could be changed to a slightly more general User Experience category, and emphasis placed on attention to the user experience for the intended audience.
Market, Education, or Social Value
Finally, it would be helpful for the scoring guidelines to better define a method to reward projects that provide clearly unique value. In evaluating entries, it seemed there was an underwhelming score difference between truly exceptional, valuable, new software and less-functional remakes of existing software. I believe this was the intention of the new Complexity criterion, though it would be helpful to further clarify.
Some good objective measures of value could include user and developer community involvement — if there’s an active base of users and developers in the real open source world, it is likely that the project has staying power. Metrics like the number of pull requests, followers, and outside developer contributions could be taken into account for this purpose.
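As a rough sketch of how such signals might be combined (the metric names and weights below are invented for illustration, not part of any official rubric), a simple weighted score could distinguish active developer involvement from passive popularity:

```python
def community_score(metrics: dict) -> float:
    """Weighted community-involvement score; weights are illustrative only."""
    weights = {
        "stars": 1.0,              # passive interest (followers/stars)
        "pull_requests": 10.0,     # active outside contribution attempts
        "outside_contributors": 20.0,  # sustained outside development
    }
    return sum(weights[k] * metrics.get(k, 0) for k in weights)

repo_a = {"stars": 40, "pull_requests": 6, "outside_contributors": 3}
repo_b = {"stars": 120, "pull_requests": 0, "outside_contributors": 0}

# Repo A shows more real developer involvement despite fewer stars.
assert community_score(repo_a) > community_score(repo_b)
```

The specific weights matter less than the principle: outside contributions are a stronger signal of staying power than raw follower counts, so they should count for more.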
It seems to me that the greatest challenge with administering the OSSD competition is judging logistics. With OSSD entries submitted on-site (a physical binder with code on some digital media), judging must be done extremely quickly. Given the standard amount of time, each entry would only be given about six minutes per judge; with more entries, this time trends toward zero. Six minutes is barely enough time to skim the documentation and preview the presentation, much less judge project quality.
This year, we were able to “make do” by simply working harder and longer. We were unable to adequately test software or review code, and we were forced to rely almost entirely on the contents of the printed materials. Had the judges and event coordinator been less dedicated (or under stricter time constraints), projects with slightly superior aesthetics and documentation may have outperformed projects with vastly higher quality implementations, relevance, and social value.
I can see this problem being solved in several ways. Most simply, OSSD could adopt a submission format similar to that used for website design entries (submitted and judged prior to the conference). However, there seems to be significant value in on-site submission and judging: it allows for more accurate scoring and likely encourages more entrants to be physically present at the conference (improving the quality of submissions, since students can attend the conference exclusively for OSSD).
Assuming it is best to avoid fundamentally changing the structure of the competition in this way, several of the submission-content changes suggested above could drastically improve this situation.
Probably the most important potential improvement is in code submission. As it stands, code simply isn’t adequately reviewed.
One idea for consideration: entries could easily be submitted via pull request on GitHub, or even by pushing to private Git repos with access issued by the conference. Probably the simplest solution is to submit a GitHub (or similar) URL. This would allow code, project structure, development practices, and included documentation to be reviewed much more efficiently than with storage media. A simple web application could make reviewing these URLs much more efficient and less error-prone for judges. (Teams, are you listening? OSSD project idea.)
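A sketch of the first step such a judging tool would need: normalizing a submitted URL into an owner/repository pair. The helper below is hypothetical and assumes GitHub-style URLs; the repository used in the example is purely illustrative.

```python
from urllib.parse import urlparse

def parse_repo_url(url: str) -> tuple[str, str]:
    """Extract (owner, repo) from a GitHub-style URL.

    Hypothetical helper for a judging tool; raises ValueError when the
    URL does not look like an owner/repository path.
    """
    parts = [p for p in urlparse(url).path.split("/") if p]
    if len(parts) < 2:
        raise ValueError(f"not a recognizable repository URL: {url}")
    owner, repo = parts[0], parts[1]
    return owner, repo.removesuffix(".git")

# From here, the tool could fetch the repo's readme, release tags, and
# contributor list via the hosting service's API for judges to review.
assert parse_repo_url("https://github.com/example-team/entry.git") == ("example-team", "entry")
```

Accepting a URL rather than physical media also means judges could begin this review before the conference, while still keeping presentations on-site.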
I hope these thoughts are valuable to those involved with the Technology Student Association and other groups interested in providing competitive opportunities in open source software development. The growth of open source software is continuing to change the landscape of the tech world, opening up huge opportunities for young people interested in building the future.
Notes
Though most open source projects seem to be aimed at developers, there are a growing number of high-profile open source projects catering more toward users who are unlikely to dig into the projects’ code. These include projects like Bitcoin, WordPress, OpenOffice, and Blender, as well as projects requiring a degree of code verifiability, as many projects in the Bitcoin space do.
It seems that most open source projects are aimed at developers — this would be logical, as developers (and users who otherwise modify or extend the code) are the demographic most benefitted by a project being open source. Whereas open source software with a different end user base than developer base mainly provides utility by being free and openly licensed, open source software aimed at developers gains even further utility from the code being extensible and modifiable.
This competition is now titled simply Software Development in the national TSA high school competitions summary. I appreciate the shorter title; quite frankly, software that hasn’t been open sourced by 2014 isn’t software, it’s a liability.
Especially with the digital revolution happening in finance (with the explosion of Bitcoin and cryptocurrency), modern software companies and users need not and cannot afford to trust third parties with unchecked control of their devices.
This article was originally posted July 5, 2014 at jason.dreyzehner.com.