About Me
I'm a father, husband, musician, and software architect with a focus on building large-scale distributed systems. I'm an empiricist who pursues minimalism, sustainability, and democratization.
This site will cover a range of topics that may be of interest to others (or serve as an external reference for myself). I'll be incrementally building out the site's content and design.
As part of my most recent reassessment of which Web solutions to adopt, this site is hosted on my home router.
The modern era is one in which technology has granted individuals significant freedom. Much of this is enabled by the Internet, but unfortunately, while the Internet was conceived as a democratizing force (and many, including TBL, are still trying to move it in that direction), it has instead become a channel through which power can be consolidated more than ever before.
While the underlying design of the Internet allows for decentralization, its use in practice has led to certain sites becoming entrenched brokers of information. This began with companies that provided better solutions and was subsequently fed by network effects, convenience, complacency, and a lack of technical knowledge. The end result amounts to data monopolization: something that falls outside the radar of what is typically monitored for market health.
The Internet itself allows for individual empowerment, manifested as individuals being able to more easily run their own businesses (gig work) and more easily distribute their work (content creation). This certainly shifts power into the hands of individuals, but given the centralized brokerage of that information, the shift is offset by the benefits afforded to those brokers. While having a more traditional job may seem more restrictive, it also represents a significant investment on the part of the employer. This is perhaps most glaringly obvious in content creation, where the work produced (and therefore the teams involved) ends up reflecting fairly directly on the company itself. This leads to a symbiotic relationship between employees and employers in which each is exposed to notable costs and benefits. The dynamics are certainly variable and there may be pathological imbalances, but such imbalances are far greater when companies are able to tap individual contributions without commitment, and when there is a readily available pool of such work and therefore the individuals themselves are fungible. This is even more pronounced given that the process itself tends to be indiscriminate and automated: the system amounts to the creation of controlled marketplaces where the vendors are often highly reliant on the specific marketplace but the marketplace is indifferent towards the vendor.
The underlying assertion here is not that the shift is fundamentally flawed, nor to suggest that the above concerns displace those of previous models (there will always be a spectrum). The concern is two-fold: first, that what is often packaged as personal empowerment may have the opposite effect, which feels like a pretty standard grift that has drifted into normalized deviance (most baldly packaged as something like firing someone with the message that you're allowing them to pursue something else). Second, that these drawbacks are a function not of the Internet itself but of how it is used, and that they can be counteracted by more conscious use: particularly given that technologies allow us to achieve equivalent results without centralization.
There are unlikely to be any particularly new ideas in the above, nor do I think that the majority of people will care enough to leave behind familiar conveniences, but it seems worthwhile to try to re-balance the scales as much as possible.
A few months ago I was in a conversation where someone stated that they couldn't believe how many people were ignoring the truth. While I certainly agreed with the sentiment, there's been plenty of reason lately to reassess our relationship with the truth. While I didn't want to pull the discussion off on an epistemological tangent, it left a strong impression that so many of us (including myself for most of my life) take the concept of truth for granted.
This line of thought was also bolstered by some of the essays in The Nineties. Throughout the late twentieth century and into the early twenty-first, we were presented with what could be seen as a monolithic sense of truth. While at varying points that could certainly be a cause for concern, given that such "truths" may be curated or controlled, there were controls in place that provided a sense of confidence. While it has never ...
Towards the end of 2025, a motivation to pursue more flexible and ubiquitous Web interactions had me fiddling around with some supporting Emacs Lisp code. The HTTP library within Emacs Lisp makes use of callbacks, which within the space of a minute or so had my mind tromping through the familiar question of "should I gravitate towards some kind of eventual object?" followed by a recurring "it could be cool if asynchronicity could be abstracted away." This is something that I've wrestled with over many years, invariably adopting whatever seemed most pragmatic for the environment in which I was developing (the only notable resulting practice being a preference for asynchronous constructs (Promises/Observables) in interface definitions). This time, likely due to some combination of additional knowledge and a lack of obviously available idioms within elisp, the train of thought went a bit further. To clarify a term for subsequent use: I use "eventual" as a noun for the general concept of a representation of a not-yet-resolved value; it's the term used in supporting material and hopefully avoids the misaligned assumptions that may accompany more specific implementations.
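To make the callback/eventual distinction concrete, here's a minimal sketch (written in Racket rather than elisp, since that's where this exploration is headed); fetch-async is a hypothetical stand-in for a callback-taking call like elisp's url-retrieve, and the eventual is deliberately little more than a mutable container for callbacks (thread-safety and error handling are ignored for brevity):

    #lang racket

    ;; Hypothetical callback-style fetch: the "response" arrives later, on
    ;; another thread, and is handed to the callback k (a stand-in for
    ;; something like elisp's url-retrieve).
    (define (fetch-async url k)
      (thread (lambda () (sleep 0.05) (k (string-append "response from " url)))))

    ;; An "eventual": a representation of a not-yet-resolved value, here
    ;; just a container into which callbacks can be registered.
    (struct eventual (callbacks value resolved?) #:mutable)
    (define (make-eventual) (eventual '() #f #f))

    (define (eventual-resolve! ev v)
      (set-eventual-value! ev v)
      (set-eventual-resolved?! ev #t)
      (for ([k (in-list (eventual-callbacks ev))]) (k v)))

    (define (eventual-then ev k)
      (if (eventual-resolved? ev)
          (k (eventual-value ev))
          (set-eventual-callbacks! ev (cons k (eventual-callbacks ev)))))

    ;; The same operation, but the not-yet-resolved value is the return value.
    (define (fetch url)
      (define ev (make-eventual))
      (fetch-async url (lambda (resp) (eventual-resolve! ev resp)))
      ev)

    (eventual-then (fetch "https://example.com")
                   (lambda (resp) (displayln resp)))
    (sleep 0.2) ;; crude wait so the toy worker thread gets to run before exit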
The distinction between an eventual and a callback can be reduced to the use of the former as a return value. Indeed, many implementations of eventuals may amount to little more than a container for callbacks that can be utilized through that alternative channel. The two approaches are therefore somewhat analogous to the distinction between the use of return values and output parameters (which are common in languages like C, though most higher-level languages have moved away from them). I've always toed the party line and fallen strongly in the anti-output-parameter camp, but suddenly I found myself wondering whether I'd been wrong. At the lowest levels the difference is largely syntactic sugar - "return values" are those that are exchanged using a location designated by a convention such as an ABI (like the accumulator register), which is arguably far more aligned with the use of an output parameter than a return value. While there are a lot of potential advantages attached to the use of return values, it suddenly became less clear how many of them were just incidentally stacked on top of each other. The specific issue I was working through was the result of debating how to work around some of the concepts and conventions that were putatively making things easier but were presently getting in the way. I either needed some additional design because the produced value wanted to pass through the magical return-value conventions of the language, or I leaned into callbacks and would likely end up with elisp that resembled ES5...and in both cases I wanted to stumble towards a reasonably consistent model.
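As a rough illustration of that analogy (the function names here are mine, purely for illustration), the same computation can hand its result back through the implicit return slot, through a caller-supplied location (a box standing in for a C out-parameter), or through a caller-supplied callback:

    #lang racket

    ;; Return-value style: the result goes to an implicitly agreed-upon place.
    (define (sum-of-squares xs)
      (for/sum ([x (in-list xs)]) (* x x)))

    ;; Output-parameter style: the caller supplies the destination explicitly,
    ;; here a mutable box standing in for a C out-parameter.
    (define (sum-of-squares! xs out)
      (set-box! out (for/sum ([x (in-list xs)]) (* x x))))

    ;; Callback style: the caller supplies the destination as code rather than
    ;; as a location; structurally it is the same inversion.
    (define (sum-of-squares/k xs k)
      (k (for/sum ([x (in-list xs)]) (* x x))))

    (displayln (sum-of-squares '(1 2 3)))   ; 14
    (define result (box #f))
    (sum-of-squares! '(1 2 3) result)
    (displayln (unbox result))              ; 14
    (sum-of-squares/k '(1 2 3) displayln)   ; 14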
This led me to rethink the benefit of return values. The reason they introduced additional hoops was ultimately a reflection of the fact that return values effectively presume an implicit binding environment. When you return a value, where does it go? There is a presumption that the value will be available to the calling code. This seems most likely to be a reflection of the low-level implementation and the von Neumann architecture, where that assumed environment is ultimately the current execution stack frame and usable locations are defined in terms of the available registers and memory offsets within the execution stack (typically relative to the address of the current frame - which can then also indirect to dynamic memory). That assumed environment seemed to speak directly to the challenge I was facing, in that if such environments were managed explicitly then shifting between the use of return values and something like callbacks would be straightforward. The use of an implicit binding environment carries further implications: while many practices strive towards making logic more self-contained, this creates a pernicious coupling between the logic being called and that which calls it. Such association also seems to be built upon the low-level use of memory, which fairly directly feeds some of the synchronous/asynchronous bifurcation that has been swirling around my head for years, and perhaps perversely leads to the idea that return values act as smugglers for the concerns around direct memory access that were a reason to move away from more raw use of output parameters.
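As a small sketch of what managing the environment explicitly might look like (using a mutable hash as a stand-in for the binding environment; the names are placeholders and this isn't a final design), the same callee can then serve both the synchronous and the asynchronous case without changing shape:

    #lang racket

    ;; An explicit binding environment: a mutable hash mapping names to values,
    ;; standing in for the implicit stack frame that return values assume.
    (define (make-env) (make-hasheq))
    (define (env-set! env name v) (hash-set! env name v))
    (define (env-ref env name) (hash-ref env name))

    ;; The callee binds its result into a caller-supplied environment under a
    ;; caller-chosen name, rather than into the implicit "return slot".
    (define (compute-area env out-name width height)
      (env-set! env out-name (* width height)))

    ;; Synchronous use: the caller reads the binding immediately afterwards.
    (define env (make-env))
    (compute-area env 'area 3 4)
    (displayln (env-ref env 'area))         ; 12

    ;; Asynchronous use: the same callee and the same environment, but the
    ;; binding is produced on another thread and awaited by name (a crude
    ;; polling wait, just for illustration).
    (define (env-ref/wait env name)
      (let loop ()
        (if (hash-has-key? env name)
            (hash-ref env name)
            (begin (sleep 0.01) (loop)))))

    (thread (lambda () (compute-area env 'area2 5 6)))
    (displayln (env-ref/wait env 'area2))   ; 30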
I therefore started to think about an alternative programming model that relies more heavily on such explicit environments. This will start as a model, with the perspective that it can be supported by existing languages, though I'll also explore an idiomatic syntax. As with everything, this is likely to overlap with existing ideas, and I will actively look to steal from prior art. I'll start by proving out the basic ergonomics of the model and then proceed to realizing further value from the approach. Conceptually the difference may be that typical practice is oriented towards the colocation of logic and data and the use of shared compute, whereas this model is instead oriented towards a boundless number of more powerful registers which can then be plugged into different logic (thereby also enabling a more complete separation of Church and state). Due to the prevalent use of the concept of registers, and since it seems snappy and sticky, the provisional name for this effort is "regi".
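As a provisional sketch of the ergonomics I have in mind (reg, reg-read, reg-write!, and adder are placeholder names rather than an actual API), registers are first-class cells and a unit of logic is written purely in terms of the registers it reads and writes, so the same logic can be plugged into different registers:

    #lang racket

    ;; A register: a first-class named cell that holds data outside any
    ;; particular piece of logic.
    (struct register (name [value #:mutable]) #:transparent)
    (define (reg name [initial (void)]) (register name initial))
    (define (reg-read r) (register-value r))
    (define (reg-write! r v) (set-register-value! r v))

    ;; Logic is a plain procedure over registers; it owns no data of its own.
    (define (adder in-a in-b out)
      (reg-write! out (+ (reg-read in-a) (reg-read in-b))))

    ;; Plug the logic into one set of registers...
    (define a (reg 'a 2))
    (define b (reg 'b 3))
    (define sum (reg 'sum))
    (adder a b sum)
    (displayln (reg-read sum))   ; 5

    ;; ...and then into a different wiring of the same registers.
    (define c (reg 'c 10))
    (adder sum c sum)
    (displayln (reg-read sum))   ; 15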
This shift has since led me to rethink how many things currently work in software, across a range of concerns, and whether regi may offer simpler options. Different topics will therefore be explored and information shared on this site. The first exploration will be done using Racket, as it is a language I'm fairly fond of and it provides a path towards also establishing syntax - implementations in other languages will be explored as they are used (particularly in support of prospective interoperability). A C implementation is planned in order to evaluate the impact on some lower-level concerns such as memory management and safety.