The following is a loose set of notes and thoughts triggered by Jentery Sayers's piece of the same title, which defines the concepts of Minimal Computing. This is far from scholarly; I'm not an academic, but I do find this discussion very interesting.
Minimal Definitions is focussed on academic applications and Alex Gil's question:
What do we need? - Alex Gil
My reflections are coloured by a career spent building software, predominantly for the web.
The emerging definitions of minimal computing and its frameworks fascinate me, in part because they engage various histories of technology at the intersection of aesthetics and politics. - Jentery Sayers
I'm reminded of the Law of Requisite Variety, aka Ashby's Law. To achieve minimalism, either the variety must be reduced or the complexity of the control system increased (hiding the complexity from the end user).
In its technical implementation, Jekyll only moves the complexity around; the overall reduction in the system's complexity is negligible. Does the ability to modify the software affect its minimalism?
I agree that reducing the design and UI functions is beneficial and in keeping with the principles of Minimal Computing, so far.
The notion of a "minimal" system is very subjective. In most cases, assuming that the complexity is essential, what remains can be hidden from the user (at the expense of understanding/learning and flexibility) or revealed at the surface (at the expense of first-time comprehension). See Bret Victor.
working through these entanglements of writing with programming.
Knuth's Literate Programming, while appealing, has never taken a strong hold in the code bases that I work on. Vikram Chandra would seem to agree on some degree of separation in the properties of each medium, in his book Geek Sublime.
An example that I prefer is the OpenBSD Operating System, with well-tended and groomed source code coupled with an extensive and well-thought-out manual. The implementation is kept separate from the operation of the program, yet both follow the same core principles and the UNIX Philosophy. One medium for the machine, another for the human.
Thinking on the division between the creation of tools and the creation of content (as I'm working on the tools side of the balance): the protocols and delivery methods of web content are far from minimal. We manage the raw material in Git, publish over FTP, serve HTML over HTTP, and then continue the conversation over mail (SMTP) and proprietary systems like Twitter and Facebook.
This could be consolidated into the use of Git alone (or another peer-to-peer versioning/branching system). In large development teams, a form of "conversation" is carried out over branches, pull requests and merges. Not only is the creation of the content collaborative, but a permanent record is kept of the development of the ideas within it.
Another advantage of the Git protocol is its peer-to-peer nature: it can use a centralised server, but one is not essential. Always-on servers consume massive amounts of energy, idling for the majority of the time; coordinated pulling of Git repositories could be a foundation for far more energy-efficient transfer, retrieval and storage of textual records (or any digitised media).
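Part of what makes Git suit peer-to-peer exchange is its content-addressed storage: every object is named by a hash of its own contents, so any two peers holding the same text can verify and deduplicate it without a central authority. A minimal sketch in Python of how Git derives a blob's identifier (this reimplements Git's published blob format; the function name is mine):

```python
import hashlib

def git_blob_id(content: bytes) -> str:
    # Git names a blob by the SHA-1 of a small header plus the raw bytes:
    # "blob <size>\0<content>". Identical content always yields the same
    # identifier, on any machine, which is what makes decentralised
    # syncing and deduplication cheap.
    header = b"blob %d\0" % len(content)
    return hashlib.sha1(header + content).hexdigest()

# A peer that already holds an object with this id never needs to
# fetch the bytes again, wherever they came from.
print(git_blob_id(b"hello\n"))
```

The same property is what lets a repository be pulled from any peer that has it, rather than from one always-on server.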
While HTML is a well-adopted format, browsers are hugely resource-hungry. Most modern sites are close to unusable in text-based tools like Lynx. Distributing content in simple formats like Markdown, over protocols similar to Git, would almost certainly reduce the computing power expended.
Computing power, it seems to me, is largely consumed by graphics. For example, the capabilities offered by the latest smartphones plateaued a few years back. The main driver of the upgrade cycle is higher-resolution cameras, which store larger image files that, in turn, require a more pixel-dense screen to display them in all their detail.
As image quality increases, so do the demands on the network, and so does the production of content to utilise it. The demand is generated by the capabilities of the devices. The complexity of the machines seems to be reducing the processor cycles used for creative activities and increasing those expended on consumption.
As a professional programmer, my requirements are minimal. I'm writing this on a machine from 2007 which is blazingly fast for most tasks.
Simple isn't the same as easy. Computer processors are very "simple": they are capable of doing simple things repetitiously and very quickly. For instance, nearly all arithmetic in a processor can be performed just by adding. This is simple, but not easy.
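As an illustration of "arithmetic just by adding": subtraction can be carried out as addition of the two's complement, which is how fixed-width hardware commonly implements it. A sketch in Python, modelling an 8-bit register (the mask constant and function name are mine):

```python
MASK = 0xFF  # an 8-bit register: values wrap modulo 256

def subtract_by_adding(a: int, b: int) -> int:
    # Two's complement: negate b by inverting its bits and adding one,
    # so a - b reduces to a single addition in the fixed-width register.
    neg_b = ((b ^ MASK) + 1) & MASK
    return (a + neg_b) & MASK

print(subtract_by_adding(9, 4))  # 5, with no subtraction circuit needed
```

Negative results simply wrap: `subtract_by_adding(4, 9)` yields 251, which is how -5 is represented in 8-bit two's complement. Simple machinery, but not easy to reason about at first sight.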
A problem I see in the projects that I work on is the drive to lower the barriers for users. In doing so, constraints are placed on functionality: there are fewer paths to follow, so fewer mistakes can be made. The trade-off appears when the user wishes to perform a creative (rather than prescribed) task and the system proves inflexible.
You can reduce system complexity to a degree, after which it can only be reordered.
There are alternative technologies that could be adopted to support these requirements, such as unikernel operating systems. By reducing the components of a server to only those needed for a specific task, the code can be tiny, the energy consumption minimal, and they can even be turned off between page requests.
Minimal Technical Language
This area is one of the toughest social issues in professional software development. Debates over tools and methods are unending and ever-present, especially as terminology is continually being coined and repurposed.
Movements like "microservice architecture" aim to reduce the need for common implementation methods, so long as the components of large systems describe to one another how they can interoperate.
More to follow...
- Minimal Definitions, Minimal Computing - Jentery Sayers
- The User, the Learner and the Machines We Make
- The Beauty of Code
- The Law of Requisite Variety
- Up and Down the Ladder of Abstraction
- Literate Programming
- Geek Sublime
- Git pull requests
- Lynx Browser
- Unikernels: Rise of the Virtual Library Operating System