Software Engineering


Principles

Modelling

Each software system has an application domain; the system models the entities that constitute that application domain.

Practices

See also: teaching computing

  • Keep a record of the development process
    • Use a version control system such as CVS, Subversion or Git.
  • Establish a defined and automated build process
    • Use a tool such as make or ant.
    • Distinguish between original and dependent files.
    • Do not archive dependent files.
  • Build automated tests (a minimal sketch follows after this list)
    • Tests must run non-interactively
    • Tests must return a zero (success) exit code only if successful
  • Choose adequate names for classes, and for items presented to users.
    • A name should be suitable for all values that an object of the class can take, and unsuitable for all values that such an object cannot take. A name satisfying this condition is maximally expressive. However, expressiveness depends on the imprecision of natural language and so cannot always be optimised.
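
As a minimal sketch of the test requirements above (the SmokeTest class and the add function are made-up examples, not part of any particular project), a test program can run non-interactively and report its result solely through its exit code, so a build tool such as make or ant can abort the build when a test fails:

  // SmokeTest.java: minimal non-interactive test with a meaningful exit code.
  public class SmokeTest {

      // Hypothetical function under test.
      static int add(int a, int b) {
          return a + b;
      }

      public static void main(String[] args) {
          int failures = 0;

          if (add(2, 2) != 4) {
              System.err.println("FAIL: add(2, 2) != 4");
              failures++;
          }
          if (add(-1, 1) != 0) {
              System.err.println("FAIL: add(-1, 1) != 0");
              failures++;
          }

          // Exit code zero only if every check passed; any other code signals
          // failure to the calling build process.
          if (failures > 0) {
              System.exit(1);
          }
          System.out.println("OK: all checks passed");
      }
  }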

Building Software


Software Architecture

Web Technologies

Concepts, Languages, Technologies, ...

  • EJB
  • JavaSpaces
  • Hadoop Map / Reduce
  • Business Process Execution Language (BPEL) - "orchestration" of business processes
  • Representational State Transfer (REST)
  • OSGi
  • Scala, a functional and object-oriented language running on the JVM
  • Clojure, a functional language with immutable objects and reference types, running on the JVM
  • NoSQL databases
  • JetBrains Meta Programming System (MPS)
  • JavaScript Object Notation (JSON) aims to be a simple but universal notation for objects. While this addresses an important requirement, JSON unfortunately lacks a mechanism for stating the class that an object is an instance of (probably owing to its JavaScript origins), which severely limits its usefulness (see the sketch below). JSON also requires attribute names to be quoted; this is perhaps a minor quirk, but it creates unnecessary scope for attribute names that are awkward or even impossible to map to identifiers.
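
As a sketch of one common workaround (using the Jackson library; the Shape and Circle classes are made-up examples), the missing class information can be injected as an explicit type property. That a library has to add such a property only underlines that JSON itself has no place for it:

  import com.fasterxml.jackson.annotation.JsonTypeInfo;
  import com.fasterxml.jackson.databind.ObjectMapper;

  // Jackson writes a "@class" property because JSON offers no standard way
  // to state which class an "object" is an instance of.
  @JsonTypeInfo(use = JsonTypeInfo.Id.CLASS, property = "@class")
  abstract class Shape { }

  class Circle extends Shape {
      public double radius;
  }

  public class JsonClassDemo {
      public static void main(String[] args) throws Exception {
          ObjectMapper mapper = new ObjectMapper();
          Circle c = new Circle();
          c.radius = 2.0;

          String json = mapper.writeValueAsString(c);
          System.out.println(json);
          // e.g. {"@class":"Circle","radius":2.0} (fully qualified name in practice)

          // The type property is what allows the correct class to be restored.
          Shape restored = mapper.readValue(json, Shape.class);
          System.out.println(restored.getClass().getName());
      }
  }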


Authentication

  • Acegi (now Spring Security) is an authentication framework based on Spring. Acegi Security in one hour seems like a good introduction.


Notes

Agility really is a consequence of adequate modelling. For a system that provides behaviour without internally modelling the problem domain (aka Universe of Discourse etc.), accommodating a change of requirements will likely result in large scale changes in design and implementation. In contrast, if the system is based on a detailed and highly adequate model, the extent of changes in design and implementation caused by a requirements change will correspond closely to the extent of the requirement change itself. Thus, small changes in requirements can be accommodated in an agile manner.

Somewhat metaphorically, a system that provides required behaviour without adequate internal modelling is "brittle" like a hash: If one bit in the requirements (the input) changes, all bits in the system (the hash) are subject to change.

Microservices may be considered an inescapable consequence of structuring teams based on Scrum or other Agile concepts, by way of Conway's Law.

Reusability really reflects the adequacy of a model independently of the purpose (i.e. independently of the current set of requirements).

The real use of prototypes is that the software developers get to engage with the problem domain in a symbolic manner. It is an added benefit that prototypes provide an opportunity for clients (or other non-developer stakeholders) to see the developing system, but (contrary to a somewhat popular view) this is not the main benefit.

Rewriting from scratch may be a bad idea; some say it is a thing you should never do, or consider it harmful.

Requirements

Generally, requirements should not be considered as boxes that need ticking by whatever means. They are pieces of information for determining an adequate system design. If a design makes meeting the requirements easy, natural and "intuitive" from the implementors' perspective, that is a good sign that the design captures some of the essence of the application domain.

Predictions of Doom

JSON

JSON is getting increasingly popular because of the (perceived) ease of using it. However, this ease is to a substantial extent due to the lack of mechanisms for ensuring consistency. Therefore, databases using JSON (such as various "NoSQL" systems) are doomed to a mushrooming of heterogeneity. The structure and semantics of objects and documents will diverge, forcing developers to add more and more defensive checks (e.g. whether an expected field is indeed present). For any kind of long-term storage (i.e. databases), this will result in an untenable and unmaintainable state over time.

  • It is true that JSON is much easier to type than XML, but while this may be convenient during some stages of development and debugging, it is not relevant in production use.
  • The consistency problem may be addressed by introducing mechanisms analogous to DTDs and schemas for XML. In the long run, however, this will render JSON a redundant re-invention of XML.
  • JSON is not really an object notation as there is no mechanism whatsoever to describe the class which an "object" is supposed to be an instance of.
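
To illustrate the defensive checks mentioned above, here is a minimal sketch using Jackson's tree model (the document and the email field are made-up examples):

  import com.fasterxml.jackson.databind.JsonNode;
  import com.fasterxml.jackson.databind.ObjectMapper;

  public class DefensiveRead {
      public static void main(String[] args) throws Exception {
          // Older documents in the same store may simply lack the "email" field.
          String doc = "{\"name\": \"Alice\"}";
          JsonNode node = new ObjectMapper().readTree(doc);

          // Without a schema, every consumer has to guard against missing or
          // differently typed fields, and these guards accumulate over time.
          String email;
          if (node.has("email") && node.get("email").isTextual()) {
              email = node.get("email").asText();
          } else {
              email = "(unknown)";
          }
          System.out.println(email);
      }
  }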


HTML5 Web Storage

The web storage introduced as part of HTML5 is doomed to provide rich pickings for breaches of privacy and other attacks.


Wayland, Mir (and Other X11 Replacements)

These projects are in real danger of repeating the history that informed X11. During its evolution, X11 has not only accumulated lots of legacy stuff (a fact that some Wayland folks seem very eager to elaborate on), but it has also accumulated a lot of wisdom (in the sense used in "Rewrites Considered Harmful?"), and it comprises a solid core that has truly stood the test of time (a fact that X11 replacement evangelists seem rather less aware of).

Personally, I'm quite concerned that replacing X11 (with Mir, Wayland or whatever), rather than evolving it, will throw out many "babies" that are very valuable for everyday productive work, along with the "bathwater" that has undeniably accumulated over the decades. While I'm generally fine with letting people make their own experiences and go through cycles of re-inventing / re-discovering long-known concepts in the process, I don't want to be dragged along in such a cycle, and that is what I see coming my way if e.g. major Linux distributions dump X11 support. Here are some rather unsorted remarks to highlight some of the valuable features and how they may be lost:

  • All (reasonably well behaved) X11 clients process their command line arguments through a common function (XtOpenApplication or its predecessors), and therefore understand a common set of parameters (such as -geometry, -display etc.). Will the candidate replacements do the same? Will they provide compatibility to honour these parameters to the extent their alternative display model permits? (And if they can't provide such compatibility, do they have good reasons to break it?)
    • The Wayland server apparently does not provide any fonts at all, so the -fn parameter will break, and instead of using a consistent mechanism for selecting fonts (using e.g. xfontsel, xfd etc.), users may end up having to put up with many different, incompatible and inconsistent font concepts and systems used by different clients.
  • X11 provides a decent set of vector-based primitives for drawing lines etc. As a result, the bandwidth required for drawing "line art" is independent of the size of the target window. In contrast, Wayland clients have to provide the server with pixels to render, so bandwidth grows with window size (see the back-of-envelope sketch after this list).
    • The argument that clients nowadays all draw an entire window's pixmap anyway is a typical case of throwing out the baby with the bathwater. It may well be that the set of primitives needs extending or overhauling. However, leaving each client with the burden of providing the set of primitives it needs may result in much duplication (although some standard client-side library can surely mitigate that). And more fundamentally, the size-independent complexity afforded by vector-oriented primitives cannot be achieved with a pixel-oriented concept.
  • The X11 toolkits like Athena may look retro, but they enable a high level of productivity; e.g. their scrollbars are still more powerful than those provided by current GTK or Qt.
    • This fact disproves the claim that clients need to draw their windows pixel by pixel in order to provide a high quality GUI.
  • Avoiding flickering (e.g. caused by the server showing a window's background while waiting to receive the foreground content from the client) is nice, but it is overrated.
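
The claim about vector primitives above can be made concrete with a rough back-of-envelope comparison. The window size, pixel format and request overhead below are assumed round numbers for illustration, not exact protocol accounting:

  // Rough comparison: one "draw line" vector request versus shipping the
  // whole window as pixels. All numbers are assumptions for illustration.
  public class LineBandwidth {
      public static void main(String[] args) {
          int width = 1920, height = 1080;   // assumed window size
          int bytesPerPixel = 4;             // assumed 32-bit pixels

          // A vector request needs roughly two endpoints plus a small header,
          // i.e. a few dozen bytes regardless of window size.
          long vectorBytes = 32;             // generous estimate

          // A pixel-oriented client must hand the server the rendered window.
          long pixelBytes = (long) width * height * bytesPerPixel;

          System.out.printf("vector request: ~%d bytes%n", vectorBytes);
          System.out.printf("full pixmap:    ~%d bytes (~%.1f MiB)%n",
                  pixelBytes, pixelBytes / (1024.0 * 1024.0));
      }
  }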

As a final note, I find it intriguing to see that the APIs people draw up for "rich" web applications increasingly look like X11 APIs...