
Are you familiar with continuations? Continuations are essentially snapshots of the call stack. By calling (ccc ...), you take a snapshot. By calling the snapshot with a value, you jump back to that old stack -- right at the position of the (ccc ...) call -- and return a different return value than before.
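
For instance, here's a minimal sketch at the Arc REPL (written from memory, so the exact output may differ):

  arc> (= snapshot nil)
  nil
  arc> (+ 1 (ccc (fn (k) (= snapshot k) 1)))
  2
  arc> (snapshot 10)
  11
The second expression stashes its own continuation in a global before returning 1, and calling (snapshot 10) later jumps back into that old (+ 1 ...) call with 10 as the new result of the (ccc ...) expression.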

This is a pretty weird behavior, but it can be useful for avoiding the inversion of control dilemma in imperative code. That is, if code A and B need to communicate, should code A call code B, or should code B call code A? Sometimes the callee side becomes a mess of callbacks, rather than a direct imperative style. With continuations, both sides can be the caller, and neither side needs to be the callee: When A calls B with a value, we resume B's earlier call to A using that value as the result. This approach doesn't make it straightforward to design communication paths like this, but it's a building block.

Unfortunately, jumping around with continuations messes up another common imperative coding technique. If people write code to set up and tear down a resource, they often like to write all the code in the middle in a way that assumes that resource is already available. With continuations, that assumption can be incorrect: If you jump out of the code in the middle, you may have just skipped the tear-down code entirely. If you jump into the middle, you may have just skipped the setup code.

Dynamic-wind makes up for that shortcoming by letting you write a structured code block that always executes its setup and tear-down sections, even if it's entered or exited using a continuation jump. So when you do a jump, you might actually go through a few dynamic-wind handlers before you reach your destination.

If you're more familiar with exceptions, continuations and dynamic-wind can be seen as a companion of exceptions and "try { ... } finally { ... }" blocks. An exception throw is a jump to an outer level of the stack, but the jump may stop to execute a few "finally" sections before it reaches its destination.

In Arc (and Scheme), continuations and exceptions are supported in the same language, and they interoperate: They're both the same kind of jump, executing all the same handlers in between; the "finally" handlers and "tear-down" handlers are basically the same kind of handler. Unless you write custom Racket code to invoke Racket's dynamic-wind directly, Arc has no way to write a setup handler, but there is an (after ...) construct for writing a finally/tear-down handler.
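
As a quick sketch of how (after ...) interacts with a continuation jump (untested, but it should illustrate the idea):

  arc> (ccc (fn (k)
              (after (do (prn "body starts")
                         (k 'jumped)
                         (prn "never reached"))
                (prn "tear-down still runs"))))
  body starts
  tear-down still runs
  jumped
Jumping out of the (after ...) body skips the rest of the body, but the tear-down section still runs on the way out.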

I think what I've called "setup" and "tear-down" are more commonly called winding and unwinding handlers, or before and after sections. I picked the terms "setup" and "tear-down" just in the hopes of painting a more concrete picture of why they're useful.

This has still been a pretty quick explanation relative to the complexity of the topic, and the sketches above barely scratch the surface. If you still have unresolved questions, that's absolutely understandable. :)

-----

2 points by jsgrahamus 3577 days ago | link

Thank you.

-----

2 points by rocketnia 3581 days ago | link | parent | on: Using Arc at work

Judging by that error message, it looks like the variable "=" or one of its dependencies might have been reassigned somewhere along the line. The second argument in that error message indicates that = is getting hold of your read-in data somehow, so it might be something you've defined for processing this data.

The dependencies of = include expand=list, expand=, map, pair, and setforms (among others), so if any of these has been overwritten, it might do something like what you're seeing.
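
For example (a hypothetical sketch, not something from your code), clobbering one of those dependencies is enough to make a later, perfectly ordinary assignment fail:

  arc> (= x 1)
  1
  arc> (def pair (xs) xs)   ; oops, clobbers a function = relies on
  *** redefining pair
  #<procedure: pair>
  arc> (= x 2)
That last assignment now fails with a confusing error from deep inside ='s expansion, even though nothing looks wrong with the assignment itself. I'd look for a definition somewhere in your code that reuses one of those names.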

By the way, I think if you're not using Anarki, there's a known bug in (readline ...) where it will spuriously combine each empty line with the following line (https://sites.google.com/site/arclanguagewiki/arc-3_1/known-...). Maybe this could explain the extra \n you're getting.

-----

2 points by jsgrahamus 3580 days ago | link

Thank you.

-----


Store-passing style! I've been using it in my programs too. It even came up again on David Barbour's blog yesterday.[1] It's kinda funny how everything keeps coming back to the same continuation-passing styles and store-passing styles.

I think I've followed all these to a nice, general conclusion in Staccato. :)

Staccato's going to have at least two completely different (and not necessarily compatible) families of side effects: At compile time, macros will install definitions as a form of side effect. At run time, microservices will have continuous reactive side effects for communicating with each other. Nevertheless, I'm taking one consistent approach to side effects that should work in both cases. (It might be pretty inefficient when used for continuous reactive effects, but I'm optimistic.)

Staccato has no static type system (yet?), but even if it did, I would expect to have to think about run time errors anyway: Usually, a program can be written that bides its time until the Earth gets consumed by the sun, and then there's no way it'll successfully proceed to return a value. So, I accept dynamic errors, but I'll be mindful of where and when any given error could gum up the works, e.g. whether it happens on the browser side or the server side, and whether it happens before or after some other side effects occur.

So the kind of purity I'm going for involves quasi-determinism in the sense I've seen Lindsey Kuper use it when talking about LVars[2]: As long as a program isn't swallowed by the sun or otherwise interrupted, it will always return the same value. I'll still need to be mindful of where and when errors may occur in the language (e.g. server-side or client-side), so that I know which of the program's side effects should be aborted or reverted.

If a "side effect" only communicates with the language implementation itself (e.g. for debugging or profiling), that's fine. We already trust the language implementation to implement the language semantics in a single deterministic way, so we can trust it to respond to these communications in a single deterministic way as well!

If a "side effect" is tame enough that it can be removed by dead code elimination if the result value is never used, that's fine too. Arguably it has no side effects at all; the effects are all represented in its result. This means there can be some minimal support for operations that read some value from an opaque external reference (e.g. a file handle, a socket). In order to preserve quasi-determinism, the output may only vary if the input does, so these operations will tend to take an explicit parameter designating the time/world at which to do the reading. If a program makes pervasive use of these operations, it will take the shape of a sort of store-passing style, though it doesn't ever need to return a new version of the store. (For context: Whereas Haskell's State monad is used for store-passing style, its Reader monad is simplified for this special case.)

I'll do all other side effects using a commutative monad. By commutativity, any two commands in this monad can be reordered, which should guide me toward easy refactoring, extensibility, and concurrency. If I need to write any code that depends on the result of an effect, this can't usually be done in a commutative way, but a commutative effect could still set up an asynchronous callback, which can run a separate set of commutative effects in a future tick. If a computation spans more than a few of these ticks, it'll start to look like continuation-passing style. Staccato's syntax is actually pretty nice for continuation-passing style code, so this isn't a problem. CPS necessarily sequentializes the code, but when I need concurrency, I can synchronize between concurrent code the way JavaScript programmers often do these days, using promises.[3]

[1] https://awelonblue.wordpress.com/2016/01/04/yield-for-effect...

[2] http://composition.al/blog/2013/09/22/some-example-mvar-ivar...

[3] To preserve quasi-determinism and to ensure that no two promise allocations give the same promise as their result, the allocation of a promise will itself be an asynchronous operation. (This is sort of a note to myself, because I haven't written up designs for the promise primitives yet.)

-----

3 points by rocketnia 3616 days ago | link | parent | on: Threads

Going by http://docs.racket-lang.org/guide/parallelism.html#%28part._..., it looks like any operation in a future that might be expensive is a "blocking operation," which can only be resumed by touching the future. Even multiplying a floating-point number by a fixed-point integer is expensive enough to be blocking!

Without testing it myself, I'd guess there are a few things that might be blocking in your example:

* Converting a number to a string.

* Looking up the current value of stdout. This depends on the current parameterization, which is probably carried on the continuation in the form of continuation marks. According to http://docs.racket-lang.org/reference/futures.html, "work in a future is suspended if it depends in some way on the current continuation, such as raising an exception."

* Actually writing to the output stream.

Maybe "visualize-futures" would show you what's going on in particular.

-----

3 points by rocketnia 3614 days ago | link

I finally sat down to test it, and it looks like all three of those are blocking operations, just as I thought.

  arc> (= g 1)
  1
  arc> ($.future:fn () (= g 2))
  #<future>
  arc> g
  2
As a baseline, that future seems to work. It simply assigns a variable, which the documentation explicitly says is a supported operation, so there wasn't a lot that could go wrong.

  arc> (= f ($.future:fn () (= g $.number->string.3)))
  #<future>
  arc> g
  2
  arc> $.touch.f
  "3"
  arc> g
  "3"
That future had to call Racket's number->string, and it blocked until it was touched. The same thing happens with Arc's (= g string.3).

  arc> (= f ($.future:fn () (= g ($.current-output-port))))
  #<future>
  arc> g
  "3"
  arc> $.touch.f
  #<output-port:stdout>
  arc> g
  #<output-port:stdout>
That future blocked due to calling Racket's current-output-port. The same thing happens with Arc's (= g (stdout)).

  arc> (= sout (stdout))
  #<output-port:stdout>
  arc> (= f ($.future:fn () ($.display #\! sout) (= g 5)))
  #<future>
  arc> g
  #<output-port:stdout>
  arc> $.touch.f
  !5
  arc> g
  5
That future blocked on calling Racket's display operation. It finally output the #\! character when it was touched. The same thing happens with Arc's (disp #\! sout), and the same thing happens if I pass in a string instead of a single character.

I tried using visualize-futures from Arc, but I ran across some errors. Here's the first one:

  arc> ($:require future-visualizer)
  #<void>
  arc>
    (def visualize-futures-fn (body)
      (($:lambda (body) (visualize-futures (body))) body))
  #<procedure: visualize-futures-fn>
  arc>
    (mac visualize-futures body
      `(visualize-futures-fn:fn () ,@body))
  #(tagged mac #<procedure: visualize-futures>)
  arc> (def wrn (x) write.x (prn))
  #<procedure: wrn>
  arc>
    (visualize-futures:withs
        (g 5
         f ($.future:fn () (= g $.number->string.6)))
      wrn.g
      (wrn $.number->string.6)
      wrn.g
      (wrn $.touch.f)
      wrn.g)
  5
  "6"
  5
  "6"
  "6"
  inexact->exact: no exact representation
    number: +nan.0
    context...:
     C:\Program Files\Racket\share\pkgs\future-visualizer\future-visualizer\private\visualizer-drawing.rkt:344:4: for-loop
     C:\Program Files\Racket\share\pkgs\future-visualizer\future-visualizer\private\visualizer-drawing.rkt:387:0: calc-segments
     C:\Program Files\Racket\share\pkgs\future-visualizer\future-visualizer\private\visualizer-gui.rkt:106:0: show-visualizer3
     C:\mine\prog\repo\anarki\ac.scm:1234:4
It seems to be dividing 0.0 by 0.0 there, hence the +nan.0. I tried it in Racket, but I got the same error. This error can be fixed by tacking on (sleep 0.1) so that the total duration isn't close to zero:

    (visualize-futures:withs
        (g 5
         f ($.future:fn () (= g $.number->string.6)))
      (sleep 0.1)
      wrn.g
      (wrn $.number->string.6)
      wrn.g
      (wrn $.touch.f)
      wrn.g)
However, even that code gives me trouble in Anarki; the window that Racket pops up is unresponsive for some reason. So here's the same test in Racket, where the window actually works:

  Welcome to Racket v6.1.1.
  > (require future-visualizer)
  > (define (wrn x) (write x) (display "\n"))
  >
    (visualize-futures
      (let* ([g 5]
             [f (future (lambda () (set! g (number->string 6))))])
        (sleep 0.1)
        (wrn g)
        (wrn (number->string 6))
        (wrn g)
        (wrn (touch f))
        (wrn g)))
  5
  "6"
  5
  #<void>
  "6"
  >
In the pop-up, the panel at the left shows a summary of expensive operations:

  Blocks (1)
    number->string (1)
  Syncs (0)
  GC's (0 total, 0.0 ms)
If I look in the timeline and select the two red dots, this information comes up:

  Event: block
  Time: +0.0 ms
  Future ID: 1
  Process ID: 1
  Primitive: number->string

  Event: block
  Time: +109.744140625 ms
  Future ID: 1
  Process ID: 0
  Primitive: number->string
It looks like the first one is the number->string call inside the future, and the second one is the call that occurs outside the future. I guess it's still considered a blocking operation even if it happens in the main process, but fortunately it doesn't stop the whole program. :)

So number->string is a primitive that's considered complicated enough to put the future in a blocked state. To speculate, maybe the Racket project doesn't want to incur the performance cost of having the future's process load the code for every single Racket primitive, or maybe they just haven't implemented this one operation yet.

Going by this, futures can be useful, but they have a pretty limited set of operations. Still, mutation is pretty powerful: If needed, maybe it's possible to set up an execution harness where the future assigns the operation it wants to perform to a variable, and then some monitoring thread takes care of it, assigning the result back to a variable that the future can read.
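
Here's an untested sketch of what I mean, using number->string as the blocking operation again. (I haven't checked whether the busy-waiting and the cross-thread variable reads here are actually future-safe, so take it with a grain of salt.)

  (= request nil response nil)
  ; A monitoring thread that services requests on behalf of the future.
  (thread
    (while t
      (whenlet req request
        (= request nil)
        (= response ($.number->string req)))
      (sleep 0.01)))
  ; The future only assigns and reads variables, which futures support,
  ; and it spins until the helper thread has done the blocking work.
  (= f ($.future:fn ()
         (= request 3)
         (while (no response) nil)
         response))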

Meanwhile, I wonder why the pop-up doesn't seem to work from Anarki. I seem to remember other Racket GUI operations haven't worked for me either. If the GUI works for anyone else, it might be that I'm on Windows.

-----

3 points by highCs 3614 days ago | link

Oh, I get it, I think. Futures are for computing arithmetic in parallel.

-----


Arc implements its global variables in terms of Racket's global variables, using _ as a prefix to prevent name collisions. Racket's ffi/unsafe module actually exports a variable called _list, underscore and everything, so there's a name conflict after all.

One way to fix this might be to modify this code in ac.scm:

  (define (ac-global-name s)
    (string->symbol (string-append "_" (symbol->string s))))
If you change that underscore to something else, then Arc's global variables won't conflict with _list (but they might conflict with something else).

Another fix would be to change your require form so it doesn't create a variable called _list. For instance, I think this would work:

  arc> ($:require (prefix-in ffi/unsafe- ffi/unsafe))
Now the variable should be available (on the Racket side) as "ffi/unsafe-_list".

-----

2 points by highCs 3643 days ago | link

It works. Thank you very much. I should have found the solution by myself. I'll try to do better next time.

-----

3 points by zck 3642 days ago | link

> I should have found the solution by myself. I'll try to do better next time.

Screw that. Asking for help is just fine. You weren't sure what was going on, so you asked for help. (This is something I struggle with too). Forget what you "should" have done; you tried and when you needed help, you asked for it, with a very good, simple example that showed the problematic behavior.

Nothing wrong with that. At all.

-----

2 points by rocketnia 3642 days ago | link

Seconded! I'm sorry if I gave a "should have known" impression.

-----

1 point by highCs 3642 days ago | link

Oh, you didn't. Absolutely not. I was just thinking out loud here (I'm always trying to improve); nothing wrong with your answer. Thanks again for your help :)

-----

2 points by highCs 3642 days ago | link

Cool. I think you are right. Thanks.

-----


"Does Arc 3.1 run properly though on MzScheme 372 or does it need to run on Racket?"

The only problem I've had running on MzScheme 372 is that it can be awkward to track down documentation for that version. Newer versions sometimes have more features available.

Arc doesn't have all the I/O primitives that Racket has -- Racket has an almost ridiculous variety of primitives for file manipulation, sockets, garbage collection, delimited continuations, FFI, concurrency, UI, etc. -- so Arc programmers have often hacked on the language implementation to support their applications. This hacking happens in Racket (aka MzScheme), so having a nice version of Racket is helpful.

-----


Anarki's "stable" branch is Arc 3.1 plus 15 extra commits (so far). There's a list of all the commits here:

https://github.com/arclanguage/anarki/commits/stable

I actually thought there were more crucial bug fixes on this branch, like akkartik said, but it seems that the rest of the commits are various improvements to Arc's usability from editors and from the command line.

Here's a holistic summary of the changes, so you don't have to trudge through the commits one by one:

- Adds a CHANGES/ directory which is supposed to host a summary of changes like this one. Anarki's master branch makes use of this directory, but so far it's been neglected on the stable branch. (I should probably add this list to it!)

- Adds an extras/ directory containing Vim and Emacs extensions.

- Adds arc.sh, a nicer way to run Arc from the command line. (Many of the other changes were made to support this.)

- Doesn't display the REPL when stdin is not interactive. (https://github.com/arclanguage/anarki/commit/eb55979588bb01d...)

- Outputs error messages to stderr rather than stdout. (https://github.com/arclanguage/anarki/commit/e518e3b323a63bc...)

- Makes it possible to execute Arc at the command line from directories other than the Arc directory. (https://github.com/arclanguage/anarki/commit/4df89245bb49ae2...)

- Interprets command line arguments as filenames of Arc scripts to load. (https://github.com/arclanguage/anarki/commit/5ac5d567bce0800...)

- Fixes a bug where mutating a list would sometimes fail depending on the state of the garbage collector. (https://github.com/arclanguage/anarki/commit/b683a84a6831fd4...)

-----

2 points by akkartik 3653 days ago | link

Thanks for that list! Maybe we should point people to the stable branch rather than vanilla arc 3.1 at http://arclanguage.github.io?

Edit 1 hour later: Check out the updated frontpage. Feel free to revert or ask me to do so.

-----


Thanks for putting up with the installation instructions you found, and thanks for trying to make life easier for the next person. :)

The instructions at /install are actually more than six years out of date. As of August 2009 (http://arclanguage.org/item?id=10254), Arc officially stopped depending on MzScheme 372. Shortly after that, the official releases of the language ran dry, and the Arc website (this website) entered a more preservational mode of maintenance. It even preserves those obsolete instructions!

Development of Arc continues thanks to unofficial efforts like yours. The most up-to-date material for Arc newcomers is on a community-maintained website: http://arclanguage.github.io/

-----

2 points by urs2102 3655 days ago | link

Thanks rocketnia - sorry I was afk for a few days. I had no idea that installation details changed. Is it possible to update the instructions on /install? I may update my setup to install racket first. Thanks!

-----

1 point by akkartik 3653 days ago | link

Sadly nobody here has access to official arc or this forum :/

-----


Most of those errors are saying it can't find a global variable named "stack". That's because when you do (= stack (newSA)), you're creating a global variable called "stack" in Arc but it's called "_stack" in Racket. In your macros, you're generating Racket code that uses the Arc variable name, so it's looking for a "stack" Racket global that doesn't exist. You can potentially fix this in your macros... but why use macros when you already have functions that do what you want? :)

  (= newSA $.newStackArray)
  (= deleteSA $.delete)
  (= pushSA $.pushStackArray)
  (= popSA $.popStackArray)
  (= fullSA? $.fullStackArray)
  (= emptySA? $.emptyStackArray)
  (= displaySA $.displayStackArray)
If you absolutely need macros, then here's a fixed version of pushSA that embeds its arguments as Arc expressions rather than Racket expressions:

  (mac pushSA (stack x)
    `( ($:lambda (stack x)
         (pushStackArray stack x))
       ,stack ,x))
  ; or...
  (mac pushSA (stack x)
    `($.pushStackArray ,stack ,x))
Fixing the others would be similar, but I picked pushSA as an example because it has two arguments.

Finally, I think this line just has a simple typo:

  typo:  (emptytSA? stack)
  fix:   (emptySA? stack)
How far does this get you? I haven't tried your code, and I don't know if this will fix all your problems, but maybe it's a start!

-----

3 points by cthammett 3734 days ago | link

Hey thanks this works great. I just need to fix an easy bug in the C code for pop.

-----

3 points by rocketnia 3737 days ago | link | parent | on: Why parents?

Much of what you're saying sounds feasible and even familiar. Removing parens from Lisp and adding infix operators is a popular idea, popular enough that we've made a list of projects that explore the possibilities:

https://sites.google.com/site/arclanguagewiki/more/list-of-l...

---

As I've learned more about the syntax of the ML family of languages (e.g. SML, OCaml, Haskell, Elm, Agda, Idris), I've come to the conclusion that ML-style language designs treat their syntax as s-expressions anyway. Where Lisp function call expressions are nested lists, ML expressions are nested "spines." ML languages treat infix spines as sugar for prefix spines, just like we're talking about for Lisp.

  gcd (a + b) (c - square d)           (* ML *)
  (gcd (+ a b) (- c (square d)))       ; Lisp
  (gcd ((+) a b) ((-) c (square d)))   (* ML again *)
In terms of precedence, ML's prefix calls associate more tightly than infix ones. That's the other way around from Arc, where the infix syntaxes associate more tightly than the prefix ones:

  (rev:map testify!yes '(no no yes))  ; returns (t nil nil)
I think ML's choice tends to be more readable for infix operators in general. The alternative form of the gcd example would look like this:

  gcd a + b c - (square d)
In this example, the + and - look like punctuation separating the phrases "gcd a", "b c", and "(square d)", and it takes me a moment to realize "b c" isn't a meaningful expression.

So I think the ML syntax is a good start for Lisp-with-syntax projects. For the syntax I've been designing, I've stuck with Lisp so far, but I want to figure out a good way to integrate ML-style syntax into it.

---

That said, there's a reason I've stuck with Lisp for my syntax: We can't parse ML syntax into nested spines unless we know all the operators that it uses. If we want to support custom infix operators, then the parser becomes intertwined with the language of custom operator declarations, and we start wanting to control what operator declarations are created during pretty-printing. These extra moving parts seem like an invitation for extra complexity, so I want to be careful how I integrate ML-style infix into my syntax, if at all.

-----

2 points by zck 3734 days ago | link

> gcd a + b c - (square d)

> In this example, the + and - look like punctuation separating the phrases "gcd a", "b c", and "(square d)", and it takes me a moment to realize "b c" isn't a meaningful expression.

Agreed. This is possibly my main complaint with mixing infix syntax with prefix syntax.

> gcd (a + b) (c - square d) (* ML *) (gcd ((+) a b) ((-) c (square d))) (* ML again *)

This is the same in Haskell^1. But I really dislike how this is done. It conflates calling a function (by surrounding it in parentheses) with converting an infix function to prefix. And if the function is unfamiliar, you don't know which it is. I assume this is taken care of by the parser, so it falls into the arbitrariness of syntax, not the clear standard of function application^2.

[1] I assume Haskell got it from ML.

[2] This is unless you assume some insane return type that's basically "if called infix with two arguments, return a number; if called with zero arguments, return a prefix function of two arguments that returns a number".

-----

2 points by rocketnia 3734 days ago | link

"I assume Haskell got it from ML."

I think so. I'm primarily familiar with Haskell, but I hope those examples at least work in SML too.

A lot of the syntactic similarity starts to break down once the examples include lambdas, pattern matching, and top-level declarations, but I think that's similar to how the similarity between Lisp dialects breaks down when we look closely enough.

---

"It conflates calling a function (by surrounding it in parentheses) with converting an infix function to prefix."

I was making it look similar to Lisp syntax for the sake of argument, but parentheses aren't actually used like Lisp function calls there. In that example, parentheses are just used for two purposes:

- Grouping.

- Referring to infix variables (+) without applying them, which would otherwise be impossible to do.

The syntax for function application is juxtaposition with left associativity, so "gcd a b" is grouped as "(gcd a) b".

Agda[1] and some notes on Epigram 2[2] track "spines" of elimination contexts for the purposes of type inference and implicit argument elaboration. I think Haskell might use spines to resolve type class instances as well. In that case, the meaning of gcd in ((gcd a) b) can depend on the types known for a and b. With this kind of trick, Haskell's instance resolution can supposedly be used to write what are effectively varargs functions[3].

[1] http://www2.tcs.ifi.lmu.de/~abel/talkIHP14.pdf

[2] http://mazzo.li/epilogue/index.html%3Fp=1079.html

[3] https://wiki.haskell.org/Varargs

-----

3 points by akkartik 3737 days ago | link

"We can't parse ... unless we know all the operators that it uses."

Yes, that was my reaction as well. I thought https://en.wikipedia.org/wiki/Polish_notation requires either a clear separation between operators and values (so no higher-order functions) or all operators to be binary. Parentheses (whether of the Lisp or ML or other variety) permit multi-ary higher-order operations.

Also, doesn't https://en.wikipedia.org/wiki/Shunting-yard_algorithm require some sort of delimiter between operations? I thought it didn't work for nested operations without commas or parens, or the simplifying assumptions in the previous paragraph.

-----

2 points by rocketnia 3737 days ago | link

I think those simplifying assumptions are consistent with what dagfroberg was saying. I think "Polish notation" and "shunting-yard algorithm" make sense as terminology here, even if dagfroberg and I might each have slight variations in mind.

---

"so no higher-order functions"

I think we can get read-time arity information for local variables, but only if our read-time arity information for the lambda syntax is detailed enough to tell us how to obtain the arity information for its parameter. For instance...

  -- To parse 3: We've completed an expression.
  3 =:: Expr
  
  -- To parse +: Parse an expression. Parse an expression. We've
  -- completed an expression.
  + =:: Expr -> Expr -> Expr
  
  -- To parse fn: Parse a variable. Parse an expression where that
  -- variable is parsed by (We've completed an expression.). We've
  -- completed an expression.
  fn =:: (arg : Var) -> LetArity arg Expr Expr -> Expr
  
  -- To parse letArity: Parse a variable. Parse an arity specification.
  -- Parse an expression where that variable is parsed by that arity
  -- specification. We've completed an expression.
  letArity =:: (arg : Var) -> Arity n -> LetArity arg n Expr -> Expr
  
  -- To parse arityExpr: We've completed a specification of arity
  -- (Expr).
  arityExpr =:: Arity Expr
  
  -- To parse arityArrow: Parse a specification of some arity m. Parse a
  -- specification of some arity n. We've completed a specification of
  -- arity (Parse according to arity specification m. We've completed a
  -- parse according to arity specification n.).
  arityArrow =:: Arity m -> Arity n -> Arity (m -> n)
  
  -- ...likewise for many of the other syntaxes I'm using in these arity
  -- specifications, which isn't to say they're all *easy* or even
  -- *possible* to specify in this metacircular way...
This extensible parser is pretty obviously turning into a half parser, half static type system, and now we have two big design rabbit holes for the price of one. :)

Notably, Telegram's binary protocol actually takes an approach that seems related to this, using dependent type declarations as part of the protocol: https://core.telegram.org/mtproto/TL

-----
