While there may be some lurking conceptual benefits
to the nascent ideas of Regi, the first exploratory step in
practice is very uninteresting. As mentioned in its origin, it
is effectively equivalent to leaning into the use of output
parameters. Beyond that it also introduces the concept of
explicitly defined "registers" rather than allowing direct
memory access, and as a result both the inputs and outputs of
any subroutine are implemented as invocations.
In addition to disallowing direct memory access,
register references will also have only a single direction
within a function body: either a read register for input, or a
write register for output. This is expected to be leveraged
for several purposes as the system matures (and informs some
of the envisioned syntax). There are ideas to provide further
categorization based on properties such as mutability which
may afford some benefits in terms of safety and convenience,
but without working through the details it's unclear whether
that would become part of the register itself, or a container
within.
Inputs
Input values will be passed within registers, the values of
which can be accessed by passing a callback to the register.
Example pseudocode would therefore resemble:
someRegister { registerValue -> /* do some stuff */ }
This offers semantics similar to
let in Kotlin (etc.) and similar constructs in
other languages. More saliently, this is very
similar to the use of constructs such as calling then
on ECMAScript promises, and is intended to serve a similar
role in enabling a less presumptuous asynchronous
programming model (and making it ubiquitous).
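To make that concrete, a minimal sketch in plain Racket (the language used for the first exploration below) could model a read register as a function that hands its value to a callback. None of this is prospective Regi syntax, and all names are purely illustrative:

(define (make-read-register value)
  (lambda (callback) (callback value)))

(define some-register (make-read-register 42))

;; Access the value by passing a callback, as in the pseudocode above.
(some-register (lambda (register-value)
                 (printf "got ~a\n" register-value)))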
Outputs
Within a function body, outputs are produced by passing a
value to a designated write register.
myfunc(output) {
    output(someValue) // write someValue to the designated write register
}
This is a fairly typical pattern of invoking callbacks,
which aligns with the origins of Regi, and should be
familiar to those who have spent time with runtimes such as
early NodeJS.
The client code is therefore responsible for passing
registers to the function (which is an essential aspect of
the explicit nature of this system), instantiating them as
necessary. This should be familiar to those who have spent
time in languages like C, but may feel clunky to
others. This is an area where some syntactical sugar may be
provided.
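As a rough sketch of what that client responsibility could look like in plain Racket (again with illustrative names, and a synchronous stand-in that ignores reads of not-yet-written values):

;; A register with a read end and a write end, backed by local state.
(define (make-register)
  (let ([value #f])
    (values
     (lambda (callback) (callback value)) ; read end: pass value to a callback
     (lambda (v) (set! value v)))))       ; write end: store a value

(define (myfunc output)
  (output 'some-value))

;; The client instantiates the register and passes the write end in.
(define-values (read-result write-result) (make-register))
(myfunc write-result)
(read-result (lambda (v) (printf "result: ~a\n" v)))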
Dependencies / Doing Stuff
The above pattern will be extended to handle all
dependencies, leaving defined function bodies to be pure
userspace logic. Behaviors such as interacting with the
outside world should be represented as additional
manifestations of inputs and outputs. This should provide
encapsulation benefits similar to those promised by projects
such as Project Dana, and enable comparable benefits to
other approaches for containing side effects (side effects
in general will likely be given more attention). For example,
a function that logs a message and produces a value might be
defined as:
myfunc(log, result) {
    log("returning foo")
    result(foo)
}
This promises several benefits including a powerful means to
contain side effects, but raises some additional questions
in terms of aspects such as referential transparency which
will need some deeper attention.
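Translated into plain Racket (illustrative only, with the dependencies supplied as plain callbacks), the same shape could be:

(define (myfunc log result)
  (log "returning foo") ; the side effect flows through an injected register
  (result 'foo))        ; the value flows through the result register

;; The client decides what logging and result delivery actually mean.
(myfunc (lambda (msg) (eprintf "log: ~a\n" msg))
        (lambda (v) (printf "result: ~a\n" v)))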
Performing Multiple Operations
Out of the box, the absence of return values lends itself to
multiple logical steps being implemented as a sequence of
invocations.
// Define registers above
handle(input, init)
// init is the output of handle and the input to filter.
filter(init, filtered)
processed(filtered, output)
display(output)
This offers a command-oriented flow which should be
familiar to those who have worked with assembly or some
higher-level languages like Tcl.
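A runnable approximation of that sequence in plain Racket, using boxes as stand-in registers (starred names simply avoid shadowing built-ins, and the stage bodies are arbitrary stubs, not anything Regi prescribes):

(define (handle in out)   (set-box! out (add1 (unbox in))))
(define (filter* in out)  (set-box! out (* 2 (unbox in))))
(define (process* in out) (set-box! out (number->string (unbox in))))
(define (display* out)    (printf "~a\n" (unbox out)))

;; Define registers above
(define input    (box 1))
(define init     (box #f))
(define filtered (box #f))
(define output   (box #f))

(handle input init)      ; init is the output of handle and the input to filter
(filter* init filtered)
(process* filtered output)
(display* output)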
At the moment I'm not entirely sure how much the above
idiom should be changed, but I would much prefer
support for some type of Elixir-y piping/threading, so that
is definitely some sugar I'll be chasing down in languages
where it seems appropriate. This could allow the above to
instead be represented as something like:
input | handle | filter | processed | display
This would build on top of something like partial
application to allow any additional inputs or outputs to be
bound outside of the pipeline.
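Building on the box-based stages sketched earlier, one possible shape for that sugar is a helper that allocates the intermediate registers and threads them through (purely illustrative, not a settled design):

(define (pipe input . stages)
  (for/fold ([in input]) ([stage (in-list stages)])
    (let ([out (box #f)]) ; fresh register between each pair of stages
      (stage in out)
      out)))

;; input | handle | filter | processed, then display the final register.
(display* (pipe (box 1) handle filter* process*))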
Initial Pseudo Syntax
As mentioned in the origin, the original idea is to treat
Regi as a model that can be used in multiple languages (with
some prospective benefits lurking in that idea) rather than
a language, but as I'm looking to validate some of the
ergonomics (and shareable declarations) I am also looking at
what may make sense from a syntax perspective.
Coming from a left-to-right background, the basics of
some canonical syntax would resemble the pattern
(inputs) function (outputs)
though I'd likely look at distinct lexical elements to
delimit the inputs versus outputs to keep parsing
simpler. This should hopefully provide relatively readable
code and provide a consistent path to the previously
mentioned ability to introduce piping as outputs on the
right of one invocation could connect as the inputs on the
left of the next (with implementation details to be worked
out).
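Purely as an illustration (with placeholder delimiters), the earlier sequence could then read as:

(input) handle (init)
(init) filter (filtered)
(filtered) processed (output)
(output) display ()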
Making use of some basic macros in Racket along with
the support it provides for using additional dots for some
infix notation allows code to be written along the lines of:
(define (string->credentials input result)
  (:= ([tokens (input . tokenize . tokens)])
      (tokens . <: . ((parse-netrc tokens) . :> . result))))
which comprises a mix of real and prospective code used to
parse credentials out of a netrc file, demonstrating how some
things may work. Regi-supporting
macros are declared with a : to allow for use of
slightly tweaked versions of standard symbols. :=
provides a let-style block which instantiates
the tokens register which is subsequently
populated by invoking tokenize. The call to
tokenize passes the registers themselves, whereas the
invocation of parse-netrc makes use of local
memory and therefore makes use of <:
and :> to retrieve values from and write
values to registers respectively.
This proves out some of the basic ideas of Regi. It
further demonstrates some of the practical details in terms
of the usage of registers; registers would be used at
modular boundaries, but values will ultimately need to be
worked with directly and at that point the behavior
documented above for reading from and writing to registers
will be used. Elsewhere the registers themselves may be
passed directly (such as the call to tokenize
above).
Regi: Origin
Towards the end of 2025, a motivation to pursue more flexible
and ubiquitous Web interactions had me fiddling around with some
supporting Emacs Lisp code. The HTTP library within Emacs Lisp
makes use of callbacks which within the space of a minute or
so had my mind tromping through the familiar question of
"should I gravitate towards some kind of eventual object?"
followed by a recurring "it could be cool if asynchronicity
could be abstracted away." This is something that I've
wrestled with over many years, invariably adopting whatever
seemed most pragmatic for the environment in which I was
developing (the only notable resulting practice being
preference for asynchronous constructs (Promises/Observables)
in interface definitions). This time, likely due to some
combination of additional knowledge and lack of obviously
available idioms within elisp, the train of thought went a bit
further. To clarify an earlier term for subsequent use: I use
"eventual" as a noun to refer to the general concept of
representing a not-yet-resolved value; the term appears in
supporting material and hopefully avoids misaligned
assumptions that may accompany more specific implementations.
The distinction between an eventual and a callback can be
reduced to the use of the former as a return value. Indeed,
many implementations of eventuals may reflect little more than
a container for callbacks that can be utilized through that
alternative channel. The two approaches are therefore somewhat
analogous to the distinction between the use of return values
and output parameters (which are common in languages like C,
but which most higher-level languages have moved away
from). I've always toed the party line and fallen strongly in
the anti-output parameter camp, but suddenly I found myself
wondering whether I'd been wrong. At the lowest levels the
difference is largely syntactic sugar - "return values" are
those that are exchanged using a location designated by a
convention such as an ABI (like the accumulator register)
which is arguably far more aligned with the use of an output
parameter than a return value. While there are a lot
of potential advantages that are attached to the use of return
values, it suddenly became less clear how many of them were
just incidentally stacked on top of each other. The specific
issue I was working through was the result of debating how to
work around some of the concepts and conventions that were
putatively making things easier but presently getting in the
way. I either needed some additional design because the
produced value wanted to pass through the magical return value
conventions of the language, or I could lean into callbacks and
likely end up with elisp that resembled ES5...and
in both cases I wanted to stumble towards a reasonably
consistent model.
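To make the reduction concrete, here is the same trivial computation in plain Racket, first with an implicit return and then with an explicitly passed destination (illustrative names only):

(define (square x) (* x x))               ; value lands in the implicit environment
(define (square/out x out) (out (* x x))) ; destination is passed explicitly

(displayln (square 3))   ; prints 9
(square/out 3 displayln) ; prints 9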
This led me to rethinking the benefit of return values. The
reason that they introduced additional hoops was ultimately a
reflection of the fact that return values effectively presume
an implicit binding environment. When you return a
value, where does it go? There is a presumption that the
values will be available to the calling code. This seems most
likely to be a reflection of the low level implementation and
Von Neumann architecture, where that assumed environment is
ultimately the current execution stack frame, with usable
locations defined in terms of the available registers and
memory offsets within the stack (typically relative to the
address of the current frame, which can then also indirect to
dynamic memory). That assumed environment seemed
to speak directly to the challenge I was facing, in that if
such environments were managed explicitly then shifting
between the use of return values and something like callbacks
would be straightforward. The use of an implicit binding
environment carries further implications; while many practices
seem to strive towards making logic more self-contained, this
creates a pernicious coupling between the logic that is being
called and that which calls it. Such association also seems to
be built upon the low level use of memory which fairly
directly feeds some of the synchronous/asynchronous
bifurcation that has been swirling around my head for years,
and perhaps perversely leads to the idea that return values
act as smugglers for concerns around direct memory access that
were a reason to move away from more raw use of output parameters.
I therefore started to think about an alternative programming
model that relied more heavily on such explicit environments.
This will start as a model, with the perspective that
it can be supported by existing languages, though I'll also
explore an idiomatic syntax. As with everything this is likely
to overlap with existing ideas, and I will actively look to
steal from prior art. I'll be starting with proving out the
basic ergonomics of the model, and then proceeding to realize
further value of using this approach. Conceptually the
difference may be that typical practice is oriented towards
the colocation of logic and data and the use of shared
compute, whereas this model instead is oriented towards a
boundless number of more powerful registers which can then be
plugged into different logic (thereby also enabling more
complete separation of Church and state). Due to the prevalent
use of the concept of registers, and since it seems snappy and sticky,
the provisional name for this effort is "Regi".
This shift has since led to rethinking how many
things currently work in software across a range of concerns
and whether Regi may offer simpler options. Different topics
will therefore be explored and information shared on this
site. The first exploration will be done using Racket, as it
is a language I'm fairly fond of and provides a path towards
also establishing syntax - implementations in other languages
will be explored as they are used (particularly in support of
prospective interoperability). A C implementation is expected
to evaluate impacts on some lower-level concerns such as
memory management and safety.
About Me
I'm a father, husband, musician, and software architect with a
focus on building large scale distributed systems. I'm an
empiricist who pursues minimalism, sustainability, and
democratization.
About This Site
This site will have a range of topics that seem like they may
be of interest to others (or for myself for external
reference). I'll be incrementally building out the site
content and design.
As part of my most recent revisiting of which Web solutions to
adopt, this Web site is hosted on my home router.
Power Consolidation Through Information Brokerage and Gig Work
The modern era is one in which technology has granted
individuals significant freedom. Much of this is enabled by
the Internet, but unfortunately while the Internet was
conceived as a democratizing force (and many are still trying
to move it in that direction including TBL) it has instead
become a channel through which power can be consolidated more
than ever before.
While the underlying design of the Internet allows for
decentralization, the use in practice has led to certain sites
becoming entrenched brokers of information. This started with
companies that provided better solutions, and was subsequently
fed by network effects, convenience, complacency, and lack of
technical knowledge. The end result amounts to data
monopolization: something which falls outside of what is
typically monitored for market health.
The Internet itself allows for individual empowerment which is
manifested as individuals being able to more easily run their
own businesses (gig work) and more easily distribute their
work (content creation). This is certainly shifting
power into the hands of individuals, but given the centralized
brokerage of that information, the shift is offset by the
benefits afforded to those brokers. While having a more traditional job
may seem more restrictive, it also represents a significant
investment on the part of the employer. This is perhaps most
glaringly obvious when it comes to content creation where
the work produced (and therefore the teams involved)
ends up reflecting fairly directly on the company itself.
This leads to a symbiotic relationship between the employees
and employers where each is exposed to notable costs and
benefits. The dynamics are certainly variable and there may be
pathological imbalances, but such imbalances are far greater
when companies are able to tap individual contribution without
commitment, and where there is a readily available pool of
such work and therefore the individuals themselves are
fungible. This is even more pronounced given that the process
itself tends to be indiscriminate and automated: the system
amounts to the creation of controlled marketplaces where often
the vendors are highly reliant on the specific marketplace but
the marketplace is indifferent towards the vendor.
The underlying assertion here is not that the shift is
fundamentally flawed, nor that some of the above concerns
displace previous models (there will always be a
spectrum). The concern is two-fold: the first is that what
is often packaged as personal empowerment may have the
opposite effect, which feels like a pretty standard grift that
has become normalized deviance (most baldly packaged as
something like firing someone with the message that you're
allowing them to pursue something else). The second is that
these drawbacks are a function not of the Internet itself but
of how it is used, and that they can be counteracted by more
conscious use: particularly given that technologies allow us
to achieve equivalent results without centralization.
There are unlikely to be any particularly new ideas in the
above, nor do I think that the majority of people will care
enough to leave behind familiar conveniences, but it seems
worthwhile to try to re-balance the scales as much as possible.
Taking Truth for Granted
A few months ago I was in a conversation where someone stated
that they couldn't believe how many people were ignoring the
truth. While I certainly agreed with the sentiment, there's
been plenty of reason lately to reassess our relationship with
the truth. While I didn't want to pull the discussion off on
an epistemological tangent, it left a strong impression that
so many of us (including myself for most of my life) take the
concept of truth for granted.
This line of thought was also bolstered by some of the essays
in The Nineties. Throughout the late twentieth century into
the early twenty-first, we were presented with what could be
seen as
a monolithic sense of truth. While at varying points that
could certainly be a cause for concern given that such
"truths" may be curated or controlled - there were controls in
place that provided a sense of confidence. While it has never
...to finish copying.