I'm skeptical. Are you doing coercions that often? That seems like bad practice. But if you share a little program or app, that may persuade me otherwise.
For coercions, I usually use the `as` alias:
(as type expr)
rather than
(coerce expr 'type)
I find it a lot more convenient because `type` is usually atomic while `expr` can be arbitrarily complex. Having the short argument come first makes for a more readable result.
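For instance (a hypothetical comparison; `process` is a made-up function, but `as` and `coerce` behave as described):

```lisp
(as int "42")         ; type first, expression second
(coerce "42" 'int)    ; expression first, type second

; with a long expression, `as` keeps the type visible up front
; instead of trailing after the whole body:
(as string
    (accum acc
      (each x xs
        (acc (process x)))))
```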
In Anarki, `as` is more common than `coerce`, though we still do a lot of `coerce` that could probably stand to be cleaned up:
"There's one thing you can't do with functions that you can do with data types like symbols and strings: you can't print them out in a way that could be read back in. The reason is that the function could be a closure; displaying closures is a tricky problem."
I've sometimes wondered just what the connotation of 'tricky' is here. Is it hard in some theoretic sense, or just "Arc is a prototype so we don't have this yet"?
(Edit: Oops, I replied out of order and didn't read shawn's comment with the elisp examples before writing this.)
I suspect what pg means by it is primarily that it's tricky to do in Racket (though I'm not sure if it'd be because there are too few options or too many).
Essentially, I think it's easy to display a closure by displaying its source code and all the captured values of its source code's free variables. (Note that this might be cyclic since functions are often recursive.)
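To make that concrete, here's a hypothetical printed representation; no Arc implementation actually prints closures this way:

```lisp
(def adder (n)
  (fn (x) (+ x n)))

; printing (adder 5) could display the source of the inner fn
; together with the captured value of its free variable n:
#(closure (fn (x) (+ x n))
          (env (n 5)))
```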
But there is something tricky about it, which is: What language is the source code in? In my opinion, a language design is at its most customizable when even its built-in syntaxes are indistinguishable from user-defined ones. So the displayed format of a function would ideally involve some particular set of user-definable syntaxes. Since a language designer can't anticipate what user-defined syntaxes will exist, clearly this decision should ultimately be up to a user. But what mechanism does a user use to express this decision?
As a baseline, there's at least one straightforward choice the user can make: The one that expresses the code in terms of Arc special forms (`fn`, `assign`, `if`, `quote`, etc.) and Arc functions that aren't implemented in Arc (`car`, `apply`, etc.). In a language that isn't aiming for flawless customizability, this is enough.
Now suppose we try to generalize this so a user can choose any set of syntaxes to express things in terms of -- say, "I want the usual language, but with `=` treated as an additional built-in." If the code contains `(= x (rfn ...))`, then the macroexpander at some point needs to expand the `rfn` without expanding the `=`. That's not really viable in terms of Arc's macroexpander since we don't even know `(rfn ...)` is an expression in this context until we process `=`. So this isn't quite the right generalization; the right generalization is something trickier.
I suppose we can have every function printed in terms of its pre-macroexpansion source code along with printing all the macros and other macroexpansion-time state it happened to rely on as it macroexpanded the first time. That would narrow down the problem to how to print the built-in functions. And we could solve that problem in the language design by making it so nothing ever captures a built-in function as a first-class value, only as a late-bound reference to the global variable it's accessible from.
Or we could have the user specify their own macroexpander and make it so that whenever a function is printed, if the current macroexpander hasn't expanded that function yet, it does so now (just to determine how the function is printed, not how it behaves). This would let the user specify, for instance, that `assign` expands into `=` and `=` expands into itself, rather than the other way around.
These ideas are incomplete, and I think making them complete would be pretty tricky.
In Cene, I have a different take on this: If a function is printable (and not all are), then it's printable because it's a callable struct with a tag the printer knows about. It would be printed as a struct. The function implementation wouldn't be printed. (The user could look up the source code information based on the struct tag, but that's usually not printable.) There may be some exceptions at the REPL where information is printed that usually isn't available, because the REPL is essentially a debugging context and the debugger sees all. (Racket's struct inspectors express a similar debugger-sees-all principle, but I haven't seen the REPL take advantage of it.)
You're hitting on a problem I've been thinking about for years. There are a few reasons this is tricky, notably related to detecting whether something is a variable reference or a variable declaration.
(%language arc
  (let in (instring " foo")
    (%language scm
      (let-values (((a b c) (port-next-location in)))
        (%language el
          (with-current-buffer (generate-new-buffer "bar")
            (insert (prin1-to-string c))
            (current-buffer)))))))
To handle this example, you'll need to know whether each form is a function call, a variable definition, a list of definitions (`let-values`), and so on, as well as which target language each function call is being compiled for.
For example, an arc function call needs to expand into `(ar-apply foo ...)`
And due to syntax, you can't just handle all the cases by writing some hypothetical very-smart `ar-apply` function. If your arc compiler targets elisp, it's tempting to try something like this:
(ar-apply let (ar-apply (ar-apply a (list 1))) ...
which can collapse nicely back down to
(let ((a 1)) ...)
in other words, it's tempting to try to defer the "syntax concern" until after you've walked the code and expanded all the macros. Then you'd collapse the resulting expressions back down to the language-specific syntax.
But it quickly becomes apparent that this is a bad idea.
Another alternative is to have a "standard language" which all the nested languages must transpile to:
(%let in (%call instring " foo")
  (%let (a b c) (%call port-next-location in)
    (|with-current-buffer| (%call generate-new-buffer "bar")
      (%call insert (prin1-to-string c))
      (%call current-buffer))))
Now, that seems much better! You can take those expressions and easily spit out code for scheme, elisp, arc, or any other target. And from there it's just a matter of adding shims on each runtime.
The tricky case to note in the above example is with-current-buffer. It's an elisp macro, meaning it has to end up in functional position like (with-current-buffer ...) rather than something like (funcall #'with-current-buffer ...)
There are two ways to deal with this case. One is to hook into elisp's macroexpand function and expand the macros as you go. Emacs calls this eager macroexpansion, and there are some cases related to autoloading (I think?) that make this not necessarily a good idea.
The other way is to punt, and have the user indicate "this is an elisp form; do not mess with it."
The idea is that if the symbol in functional position is surrounded by pipe chars, then the compiler should leave its position alone but compile the arguments.
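For example (a sketch of the intended compilation; the exact shape of the output is guessed):

```lisp
; input: head symbol escaped with pipes, so it stays an elisp macro call
(|with-current-buffer| (generate-new-buffer "bar")
  (insert (prin1-to-string c)))

; output: the head is left alone, but the arguments are compiled
(with-current-buffer (ar-apply generate-new-buffer "bar")
  (ar-apply insert (ar-apply prin1-to-string c)))
```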
Then you'll be in for a nasty surprise: not only does it look visually awful and annoying to write, but it won't work at all, because it'll compile to something like this:
(let (ar-funcall2 (a 1) (b 2))
  (ar-funcall2 _+ a b))
I am not sure it's possible to escape the "syntax concern". Emacs itself had to deal with it for user macros. And the solution is unfortunately to specify the syntax of every form explicitly:
I can't speak to elisp, but the way macro systems work in Arc and Racket, the code inside a macro call could mean something completely different depending on the macro. Some macros could quote it, compile it in their own way, etc. So any code occurring in a macro call generally can't be transformed without changing the meaning of the program. Trying to detect and process other macro calls inside there is unreliable.
I have ideas in mind for how macro systems can express "Okay, this macro call is over; everything beyond this point in the s-expression is an expression." But that doesn't help with Arc or Racket, whose macro systems aren't designed for that.
So something like your situation, where you need to walk the code before knowing which macroexpander to subject each part of it to, can't reliably treat the code as code. It's better to treat the code as a meaningless soup of symbols and parentheses (or even as a string). You can walk through the data and find things like `(%language ...)` and treat those as escape sequences.
(What elisp is doing there looks like custom escape sequences, which I think is ultimately a more concise way of doing things if new macro definitions are rare. It gets into a middle ground between having s-expression soup and having a macro system that's designed for letting code be walked like this.)
Processing the scope of variables is a little difficult, so my escape sequences would be a bit more verbose than your example. It's not like we can't take a Racket expression and infer its free variables, but we can only do that if we're ready to call the Racket macroexpander, which isn't part of the approach I'm describing.
(I heard elisp is lexically scoped these days. Is that right?)
This is how I'd modify the escape sequences. This way it's clear what variables are passing between languages:
(%language arc ()
  (let in (instring " foo")
    (%language scm ((in in))
      (let-values (((a b c) (port-next-location in)))
        (%language el ((c c))
          (with-current-buffer (generate-new-buffer "bar")
            (insert (prin1-to-string c))
            (current-buffer)))))))
Actually, instead of just (in in), I might also specify a named strategy for how to convert that value from an Arc value to a Racket value.
Anyhow, once we walk over this and process the expressions, we can wind up with generated code like this:
We also collect enough metadata in the process that we can write harnesses to call these blocks at the right times with the right values.
This is a general-purpose technique that should help with any combination of languages. It doesn't matter if they run in the same address space or anything; that kind of detail only changes what options you have for value marshalling strategies.
I think there's a somewhat more convenient approach that might be possible between Arc and Racket, since their macroexpanders both run in the same process and can trade off with each other: We can have an Arc macro that expands its body as Racket code (essentially Anarki's `$`) and a Racket macro that expands its body as Arc code. But there are some difficulties designing the latter, particularly in terms of Racket's approach to hygiene and its local macros, two things the Arc macroexpander has zero concept of. When we switch from Racket to Arc and back to Racket, the hygiene information and local macro scopes will probably be obliterated.
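Anarki's `$` covers one of those two directions today; the other direction is the hard part (the `(arc ...)` form below is hypothetical):

```lisp
; Arc -> Racket: works in Anarki
($ (vector-ref (vector 10 20 30) 1))

; Racket -> Arc: a hypothetical inverse. Hygiene information and
; local macro scopes would likely be lost crossing this boundary.
(arc (map [* _ 2] '(1 2 3)))
```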
In your arcmacs project, I guess you might also be able to have an Arc macro that expands its body as elisp code, an elisp macro that expands its body as Racket code, etc. :-p So maybe that's the approach you really want to take with `%language` and I'm off on a tangent with this "escape sequence" interpretation.
That's really cool, thanks for making the case for Lumen. I now have it on my todo list to determine how much the standard library of Lumen matches the names Arc uses.
On a slight tangent, ycombinator.lol is not affiliated with Y Combinator, and https://github.com/lumen-language is not affiliated with Scott Bell the creator of Lumen. I clarify this because it took me a while to figure out. Am I characterizing it right, Shawn?
> I now have it on my todo list to determine how much the standard library of Lumen matches the names Arc uses.
Lumen looks awesome. I was looking for the docs to determine how much of arc actually exists within lumen, but I couldn't find anything. So if you do this, please let me know.
Also, if I can get some time down the road, I'd like to implement some basic DOM manipulation functions. Personally I see Lumen as the best means to do mobile app development in Arc (which is probably one of the best things that can happen for Arc IMHO). Arc on the server side, Lumen on the client side, and a code base that's usable by both would be really nice.
Just to recap my opinion that I gave you over chat: the advantage alists have is that they're simple to support. Along pretty much any other axis, particularly if they grew too long, you'd be better off using some other data structure. Which one would depend on the situation.
You've got me curious now about how this relates to Amacx :)
I find that having tests allows me to start out in a sort of engineering mindset, in your terms, where I just get individual cases working one by one. But at the same time they keep me from growing too attached to a single implementation and leave me loose to try to think up more axiomatic generalizations over time. You can kinda see that in http://akkartik.name/post/list-comprehensions-in-anarki if you squint a little; I don't show my various working copies there, but the case analysis I describe does faithfully show how I started out thinking about the problem, before the clean general solution suddenly fell out in a flash of insight.
(Having tests doesn't preclude more systematic thinking about the space, and proving to myself that a program is correct. But even if I've proved a program correct to myself I still want to retain the tests. My proofs have been shown up too often in the past ^_^)
> You've got me curious now about how this relates to Amacx :)
Why, everything! :-) E.g. I start with: what if top level variables were implemented by an Arc table, and top level variable references were implemented by an Arc macro? That is, what if top level variables were built out of lower level language axioms, instead of being built in?
We end up with something that's kind of like modules, but doesn't do everything that we'd typically expect modules to do (though perhaps we could implement modules on top of them if we wanted to), and also does some things that modules don't do (for example, we can load code into a different environment where the language primitives act differently).
To give a name to this thing that is kind of like modules but different, I called them "containers", because they're something you load code into.
Are containers useful? Well, I'm guessing it would depend on whether we'd ever want to load code into different environments in our program. If we only want to load code once, and all we want is a module system, I imagine it'd probably be more straightforward to just implement a module system directly.
On the other hand, suppose we have a runtime that gives us some nifty features, but is slower than plain Arc. Now it seems like containers could turn out to be a useful idea. Perhaps I have some code that I want to load in plain Arc where it'll run fast, and other code that I want to run in the enhanced runtime and I don't mind that it's slower.
> I find that having tests allows me to start out in a sort of engineering mindset, in your terms, where I just get individual cases working one by one. But at the same time they keep me from growing too attached to a single implementation and leave me loose to try to think up more axiomatic generalizations over time.
Exactly!
This is the classic test driven development refactoring cycle: add features with tests, then refactor e.g. to remove duplicate code, and/or otherwise refactor to make the code more axiomatic.
Since "Any sufficiently complicated C or Fortran program contains an ad-hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp", one could, in theory, start with such a C or Fortran program and refactor towards an axiomatic approach until you had reinvented Lisp, with the program written in Lisp :-)
But in practice I think going the other way is sometimes necessary: that is, starting with some axioms, and seeing what can be implemented out of them.
I'm not sure why that is (why doesn't anyone keep refactoring a large C program and end up with Lisp?) but I suppose it might be because it's too cognitively difficult, or you end up at some kind of local maximum in your design, or something.
In any case, I find tests absolutely essential for working on Amacx... not just "nice to have" or "saves me a lot of time", but "impossible to do without them"!
I recall a quote -- by Paul Graham, I think -- about how one can always recognize Lisp code at a glance because all the forms flow to the right on the page/screen. How Lisp looks a lot more fluid while other languages look like they've been chiseled out of stone. Something like that. Does this ring a bell for anyone?
"While.. indentation reduction is one of the most important.. that benefit is less important once Parendown is around."
One counter-argument that short-circuits this line of reasoning for me: the '#/' is incredibly ugly, far worse than the indentation saved :) Even if that's just my opinion, something to get used to, the goal with combining defining forms is to give the impression of a single unified block. Having to add such connectors destroys the illusion. (Though, hmm, I wouldn't care as much about a trailing ';' connector. Maybe this is an inconsistency in my thinking.)
"the '#/' is incredibly ugly, far worse than the indentation saved"
I'd love to hear lots of feedback about how well or poorly Parendown works for people. :)
I'm trying to think of what kind of feedback would actually get me to change it. :-p Basically, I added it because I found it personally helped me a lot with maintaining continuation-passing style code, and the reason I found continuation-passing style necessary was so I could implement programming languages with different kinds of side effects. There are certainly other techniques for managing continuation-passing style, side effects, and language implementation, and I have some in mind to implement in Racket that I just haven't gotten to yet. Maybe some combination of techniques will take Parendown's place.
---
If it's just the #/ you object to, yeah, my preferred syntax would be / without the # in front. I use / because it looks like a tilted paren, making the kind of zig-zag that resembles how the code would look if it used traditional parens:
(
 (
  (
/
 /
  /
This just isn't so seamless to do in Racket because Racket already uses / in many common identifiers.
Do you think I should switch Parendown to use / like I really want to do? :) Is there perhaps another character you think would be better?
---
What's that about "the impression of a single unified block"? I think I could see what you mean if the bindings were all mutually recursive like `letrec`, because then nesting them in a specific order would be meaningless; they should be in a "single unified block" without a particular order. I'm not recommending Parendown for that, only for aw's original example with nested `let`. Even in Parendown-style Racket, I use local definitions for the sake of mutual recursion, like this:
My objection is not to the precise characters you choose but rather the very idea of having to type some new character for every new expression. My personal taste (as you can see in Wart):
a) Use indentation.
b) If indentation isn't absolutely clear, just use parens.
(I have no objection to replacing parens with say square brackets or curlies or something. I just think introducing new delimiters isn't worthwhile unless we're getting something more for it than just minimizing typing or making indentation uniform. Syntactic uniformity is Lisp's strength.)
I can certainly see how I would feel differently when programming in certain domains (like CPS). But I'm not convinced ideas that work in a specific domain translate well to this thread.
"What's that about "the impression of a single unified block"? I think I could see what you mean if the bindings were all mutually recursive like `letrec`, because then nesting them in a specific order would be meaningless; they should be in a "single unified block" without a particular order. I'm not recommending Parendown for that, only for aw's original example with nested `let`."
Here's aw's original example, with indentation:
(def foo ()
  (var a (...))
  (do-something a)
  (var b (... a ...))
  (do-something-else b)
  (var c (... b ...))
  etc...)
The bindings aren't mutually recursive; there is definitely a specific order to them. `b` depends on `a` and `c` depends on `b`. And yet we want them in a flat list of expressions under `foo`. This was what I meant by "single unified block".
(I also like that aw is happy with parens ^_^)
Edit: Reflecting some more, another reason for my reaction is that you're obscuring containment relationships. That is inevitably a leaky abstraction; for example error messages may require knowing that `#/let` is starting a new scope. In which case I'd much rather make new scopes obvious with indentation.
What aw wants is a more semantic change where vars `a`, `b` and `c` have the same scope. At least that's how I interpreted OP.
"What aw wants is a more semantic change where vars `a`, `b` and `c` have the same scope. At least that's how I interpreted OP."
I realize you also just said the example's bindings aren't mutually recursive, but I think if `a` and `c` "have the same scope" then `b` could depend on `c` as easily as it depends on `a`, so it seems to me we're talking about a design for mutual recursion. So what you're saying is the example could have been mutually recursive but it just happened not to be in this case, right? :)
Yeah, I think the most useful notions of "local definitions" would allow mutual recursion, just like top-level definitions do. It was only the specific, non-mutually-recursive example that led me to bring up Parendown at all.
---
The rest of this is a response to your points about Parendown. Some of your points in favor of not using Parendown are syntactic regularity, the clarity of containment relationships, and the avoidance of syntaxes that have little benefit other than saving typing. I'm a little surprised by those because they're some of the same reasons I use Parendown to begin with.
Let's compare the / reader macro with the `withs` s-expression macro.
I figure Lisp's syntactic regularity has to do with how long it takes to describe how the syntax works. Anyone can write any macro they want, but the macros that are simplest to describe will be the simplest to learn and implement, helping them propagate across Lisp dialects and thereby become part of what makes Lisp so syntactically regular in the first place.
- The / macro consumes s-expressions until it peeks ) and then it stops with a list of those s-expressions.
- The `withs` macro expects an even-length binding list of alternating variables and expressions and another expression. It returns the last expression modified by successively wrapping it in lexical bindings of each variable-expression pair from that binding list, in reverse order.
Between these two, it seems to me `withs` takes more time and care to document, and hence puts more strain on a Lisp dialect's claim to syntactic regularity. (Not much more strain, but more strain than / does.)
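Side by side (a hypothetical comparison; the Parendown spelling here assumes a `let` that takes a single binding, as Arc's does):

```lisp
; withs: one macro call, several scopes with no parens of their own
(withs (a 1
        b (+ a 1))
  (+ a b))

; the / (here #/) version: each scope introduced by an ordinary let
(let a 1
#/let b (+ a 1)
#/+ a b)
```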
A certain quirk of `withs` is that it creates several lexical scopes that are not surrounded by any parentheses. In this way, it obscures the containment relationship of those lexical scopes.
If we start with / and `withs` in the same language, then I think the only reason to use `withs` is to save some typing.
So `withs` not only doesn't pull its weight in the language, but actively works against the language's syntactic regularity and containment relationship clarity.
For those reasons I prefer / over `withs`.
And if I don't bother to put `withs` in a language design, then whenever I need to write sequences of lexical bindings, I find / to be the clear choice for that.
Adding `withs` back into a language design isn't a big deal on its own; it's just slightly more work than not doing it, and I don't think its detriment to syntactic regularity and containment clarity are that bad. I could switch back from / to `withs` if it was just about this issue.
This is far from the only place I use / though. There are several macros like `withs` that I would need to bring back. The most essential use case -- the one I don't know an alternative for -- is CPS. If I learn of a good enough alternative to using / for CPS, and if I somehow end up preferring to maintain dozens of macros like `withs` instead of the one / macro, then I'll be out of reasons to recommend Parendown.
"A certain quirk of `withs` is that it creates several lexical scopes that are not surrounded by any parentheses. In this way, it obscures the containment relationship of those lexical scopes."
That's a good point that I hadn't considered, thanks. Yeah, I guess I'm not as opposed to obscuring containment as I'd hypothesized ^_^
Maybe my reaction to (foo /a b c /d e f) is akin to that of a more hardcore lisper when faced with indentation-based s-expressions :)
Isn't there a dialect out there that uses a different bracket to mean 'close all open parens'? So that the example above would become (foo (a b c (d e f]. I can't quite place the memory.
I do like far better the idea of replacing withs with a '/' ssyntax for let. So that this:
"Isn't there a dialect out there that uses a different bracket to mean 'close all open parens'? So that the example above would become (foo (a b c (d e f]. I can't quite place the memory."
I was thinking about that and wanted to find a link earlier, but I can't find it. I hope someone will; I want to add a credit to that idea in Parendown's readme.
I seem to remember it being one of the early ideas for Arc that didn't make it to release, but maybe that's not right.
---
"I do like far better the idea of replacing withs with a '/' ssyntax for let."
I like it pretty well!
It reminds me of a `lets` macro I made in Lathe for Arc:
(lets
  a (bind 1)
  (expr 1)
  b (bind 2)
  (expr 2)
  c (bind 3)
  (expr 3))
I kept trying to find a place this would come in handy, but the indentation always felt weird until I finally prefixed the non-binding lines as well with Parendown.
This right here
(expr 1)
b (bind 2)
looked like the second ( was nested inside the first, so I think at the time I tried to write it as
  (expr 1)
b (bind 2)
With my implementation of Penknife for JavaScript, I finally started putting /let, /if, etc. at the beginning of every line in the "block" and the indentation was easy.
With the syntax you're showing there, you at least have a punctuation character at the start of a line, which I think successfully breaks the illusion of nested parens for me.
And would you look at that, ylando's macro (scope foo @ a b c @ d e f) is a shallow version of Parendown's macro (pd @ foo @ a b c @ d e f). XD (The `pd` macro works with any symbol, and I would usually use `/`.) I'm gonna have to add a credit to that in Parendown's readme too.
So I suppose I have ylando to thank for illustrating the idea, as well as fallintothis for interpreting ylando's idea as an implemented macro (before ylando did, a couple of days later).
"Isn't there a dialect out there that uses a different bracket to mean 'close all open parens'? So that the example above would become (foo (a b c (d e f]. I can't quite place the memory."
Oh, apparently it's Interlisp! They call the ] a super-parenthesis.
The INTERLISP read program treats square brackets as 'super-parentheses': a
right square bracket automatically supplies enough right parentheses to match
back to the last left square bracket (in the expression being read), or if none
has appeared, to match the first left parentheses,
e.g., (A (B (C]=(A (B (C))),
(A [B (C (D] E)=(A (B (C (D))) E).
Here's a document which goes over a variety of different notations (although the fact they say "there is no opening super-parenthesis in Lisp" seems to be inaccurate considering the above):
They favor this approach, which is also the one that best matches the way I intend for Parendown to work:
"Krauwer and des Tombe (1981) proposed _condensed labelled bracketing_ that can be defined as follows. Special brackets (here we use angle brackets) mark those initial and final branches that allow an omission of a bracket on one side in their realized markup. The omission is possible on the side where a normal bracket (square bracket) indicates, as a side-effect, the boundary of the phrase covered by the branch. For example, bracketing "[[A B] [C [D]]]" can be replaced with "[A B〉 〈C 〈D]" using this approach."
That approach includes what I would call a weak closing paren, 〉, but I've consciously left this out of Parendown. It isn't nearly as useful in a Lispy language (where lists usually begin with operator symbols, not lists), and the easiest way to add it in a left-to-right reader macro system like Racket's would be to replace the existing open paren syntax to anticipate and process these weak closing parens, rather than non-invasively extending Racket's syntax with one more macro.
No, you can interleave bindings and body expressions however you like, but the catch is that you can't use destructuring bindings since they look like expressions. It works like this:
(lets) -> nil
(lets a) -> a
(lets a b . rest) ->
  If `a` is ssyntax or a non-symbol, we treat it as an expression:
    (do a (lets b . rest))
  Otherwise, we treat it as a variable to bind:
    (let a b (lets . rest))
The choice is almost forced in each case. It almost never makes sense to use an ssyntax symbol in a variable binding, and it almost never makes sense to discard the result of an expression that's just a variable name.
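Tracing those rules on a small example:

```lisp
(lets a 1
      (prn a)
      b (+ a 1)
      (+ a b))

; a is a plain symbol, so it binds; (prn a) is a non-symbol, so it
; sequences; b binds; the final form is left as the result:
(let a 1
  (do (prn a)
      (let b (+ a 1)
        (+ a b))))
```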
The implementation is here in Lathe's arc/utils.arc:
> What aw wants is a more semantic change where vars `a`, `b` and `c` have the same scope. At least that's how I interpreted OP.
Just to clarify, in my original design a `var` would expand into a `let` (with the body of the let extending down to the bottom of the enclosing form), and thus the definitions wouldn't have the same scope.
Which isn't to say we couldn't do something different of course :)
Huh, I thought I had fixed the formatting in the post, but apparently it didn't get saved. Too late to edit it now.
A container is an object which stores top level variables. When you use a top level variable "foo", that's a reference to "foo" in the container that you're eval'ing your code inside.
A container can be a simple Arc table, or some object like a Racket namespace which can be made to act like an Arc table. So suppose I have a container `c`, and I eval some code in c:
(eval '(def foo () 123) c)
Now `c!foo` is the function foo:
> (c!foo)
123
A function can always call a function in another container if it has the same runtime. A compiled function ends up having a reference to the container it was compiled in because the top level variables used in the function are compiled to have a reference to the container, but other than that it's just a function.
A function in one container can call a function in another container with a different runtime if the runtimes are compatible. Which they might not be. For example, a function compiled in the srcloc runtime can call a function in the mpair runtime because both runtimes compile Arc functions to Racket functions, but the mpair runtime wouldn't understand an Arc list passed to it from the srcloc runtime made up of Racket syntax objects. So you might need to do some translation between runtimes depending on how different they are.
A runtime is how we choose to implement a language once a program has been compiled and is now running.
For example, in Arc 3.2, `cons` creates an immutable Racket pair, `car` works with a Racket pair or a Racket symbol nil, and `scar` modifies an Arc cons with Racket's `unsafe-set-mcar!`.
These are all runtime decisions. We could create a different runtime. For example, Arc's `cons` could create a Racket mpair, and `scar` could use Racket's `set-mcar!`.
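A sketch of what that alternative runtime's choices might look like on the Racket side (the `arc-` names are made up; `mcons` and `set-mcar!` are real Racket operations):

```lisp
; mpair runtime: Arc conses are Racket mutable pairs
(define (arc-cons a d) (mcons a d))
(define (arc-scar p v) (set-mcar! p v) p)

; versus Arc 3.2, where cons builds an immutable Racket pair
; and scar mutates it with unsafe-set-mcar!
```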
For Amacx I've written two runtimes so far. One I called "mpair" because it implements Arc lists using Racket's mpairs. The other I called "srcloc" because it allows source location information to be attached to Arc forms (lists and atoms).
Currently, in srcloc, source location information is attached to forms using Racket's syntax objects. Thus, in srcloc, Arc's `car` can be applied to a Racket syntax object which wraps a list or nil, and it will unwrap the syntax object and return the underlying value.
It's a coincidence that I chose to call my runtime "srcloc" and Racket happens to also store source location information in a struct they call "srcloc" :-)
I don't think this has come up before. An f-expr based Lisp would automatically have this property (at much performance cost). But we haven't discussed lexical macros as a separate construct.
I feel like something kinda related has come up a couple of times before: allowing local variables to override macros. I think there was even a one-line patch to ac.scm at some point. But it must have had some issue since it's not in Anarki :)
Then, well, you could conceivably change the semantics of "env" as it's passed around in ac.scm. Currently it's just a list of the variables that are bound, and things test whether a variable is present in that list. You could change it to a list of (variable-name macro-it's-bound-to-if-any) pairs, and have the special form (let-macro name arglist bodexpr . body) insert `(,name (fn ,arglist ,bodexpr)) into env, while everything else puts in (variable nil). Then you'd change all the existing tests on "env" to search for "a list whose car is x" rather than "x", and lastly make ac-call call the macro function on the expression if it finds one in the lexenv.
In theory, one could put arbitrarily complicated information, such as about deduced types of variables, into this "env" mapping, and implement some amount of compiler optimization that way.
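To illustrate (the `let-macro` form and the env layout are the hypothetical ones described above):

```lisp
(let-macro twice (x) `(do ,x ,x)
  (let y 1
    (twice (prn y))))

; while compiling the inner body, env would look something like:
; ((y nil)
;  (twice (fn (x) `(do ,x ,x))))
```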
First-class macros, of course, are the semantically nicest approach, but more difficult to compile.
Thanks for that example. I see now that you mostly need to worry about multiple evaluation only for the `%test` branch; `%then` and `%else` should be exclusive anyway, and I'm not concerned about the growth in macroexpanded code when duplicating a few s-expressions.
You could still have repeated use within a '%then' or '%else' block:
(aif (test)
     (something)
     (do %then %then))
But it should suffice to perform one evaluation in each branch. Cool! That seems simpler than some of the alternatives I'd been thinking about. I'd been trying to evaluate everything ahead of time, until I realized that I shouldn't run `%else` if `%test` returns a truthy value.
Didn't consider a double `%then` or `%else`, thanks. Raises some interesting problems.
Generally the plan is to have a special character control whether it gets bound or inlined, probably `!`. So say `%test!` would get inlined like right now, while plain `%test` would bind (and doing both would also be possible). But for a `%then` you'd generally only want to bind if it's used two or more times, so I could count usages instead.
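A sketch of the difference (the template, the expander, and the gensym `g1` are all hypothetical):

```lisp
; suppose aif's template mentions %test twice:
;   (if %test (let it %test %then) %else)

; with %test bound, (aif (test) (something) (else-thing)) expands to:
(let g1 (test)
  (if g1 (let it g1 (something)) (else-thing)))

; with %test!, (test) is inlined and evaluated at each use site:
(if (test) (let it (test) (something)) (else-thing))
```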