Arc Forum | evanrmurphy's comments

I've been wondering the same thing! It seems like macros warrant being even more carefully factored and "functional" than functions do, since they're so error-prone.

Looking forward to hearing what others have to say about this.

Update: I've noticed rocketnia can be good at keeping his macros well-factored. Lots of one-liners in kernelish.arc [1]:

  (def fn-fx (func)
    (annotate 'fexpr func))

  (mac fx (parms env . body)
    `(fn-fx:fn ,(cons env parms) ,@body))

  (def fn-fexport (var val)
    (= fenv-table*.var val))

  (mac fexport (var . val)
    `(fn-fexport ',var ,val))

  (mac fdef (name parms . body)
    `(fexport ,name (fn ,parms ,@body)))

  (mac ffex (name parms env . body)
    `(fexport ,name (fx ,parms ,env ,@body)))
---

[1] https://gist.github.com/778492

-----

2 points by rocketnia 5356 days ago | link

Oops!

  -(mac fexport (var . val)
  +(mac fexport (var val)
     `(fn-fexport ',var ,val))
That would be the bug you found, I think. ^^;

Thanks for the compliment. >.> Anyway, that pattern started when I saw http://awwx.ws/parsecomb0:

  (def on-result (f parser)
    (fn (p)
      (iflet (p2 r) (parser p)
        (return p2 (f r)))))
  
  (mac with-result (vars parser . body)
    `(on-result (fn (,vars) ,@body)
                ,parser))
Since then, I've noticed that defining one-use auxiliary functions like 'fn-fx and 'fn-fexport for macros is nifty in a few ways:

- Less need for w/uniq.

- More straightforward expansions and fewer gensyms when debugging.

- More one-liners, ironically at the expense of brevity. (The argument list is usually written three times this way: Once in the macro signature, once in the function signature, and once in the macro body.) At least it's in bite-size pieces. ^_^;

I've also noticed that 'after and 'protect have the same pattern, and that at least one other person here (don't remember who) uses it as extensively as I do. So it's probably as good as advertised. :-p
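For reference, that pattern is visible right in arc.arc: 'after is a thin macro over the function 'protect, so it needs no gensyms at all (quoted here from memory of arc.arc, so double-check against the source):

  (def protect (during after)
    (dynamic-wind (fn () t) during after))

  (mac after (x . ys)
    `(protect (fn () ,x) (fn () ,@ys)))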

-----

3 points by aw 5355 days ago | link

Yes, I find I often write a macro whose only purpose is to allow a "body" to be written without having to enclose it in a function. For this specialized purpose, it'd be nice to have a macro-defining macro to do that... though I haven't thought about how to do it myself yet.

-----

1 point by akkartik 5356 days ago | link

As a concrete example, consider defmacro! at http://letoverlambda.com/index.cl/guest/chap3.html#sec_5 (there's a convenient version at http://letoverlambda.com/lol.lisp). It uses a helper macro and still has a nested backquote. Looks like the two are independent mechanisms.
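For readers following along, here's the definition in question, reproduced (possibly with transcription slips) from the Let Over Lambda code linked above — note the nested backquote even though defmacro/g! is a helper macro:

  (defmacro defmacro! (name args &rest body)
    (let* ((os (remove-if-not #'o!-symbol-p args))
           (gs (mapcar #'o!-symbol-to-g!-symbol os)))
      `(defmacro/g! ,name ,args
         `(let ,(mapcar #'list (list ,@gs) (list ,@os))
            ,(progn ,@body)))))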

-----

1 point by rocketnia 5356 days ago | link

It could technically use another helper: It's a macro that abstracts away an act of backquoting, but there could be a function that abstracts away that backquoting at some point (the macro generating a function call that uses its own backquote form). Since that's not actually "nested macro calls," maybe it doesn't count. ^^

Anyway, I think it's a perfectly legitimate use of nested backquotes.

You hear a lot about how nested backquotes and double commas are difficult.

Well, that link is possibly the first place I've heard it claimed in a general way (not counting individual people having trouble with them), and it was pretty surprising to me....

Do you think it's because of the generating-code-to-generate-code premise itself or just because of idiosyncrasies in the way layers of quoting interfere with each other in the forms of ,', and backslash-doubling?

-----

2 points by akkartik 5356 days ago | link

The couple of times I've tried it I find I get tangled up in how I want to interleave macroexpansion vs evaluation. I'm sure I don't know all the tools available to me (', / ,, / ,', / ,,@ / ...)

I should try to come up with a simple example. Perhaps the canonical one is Peter Norvig's once-only that that link refers to (which also says nested backquotes are hard). I believe it's come up multiple times here, e.g. http://www.arclanguage.org/item?id=9918

Hold on, lemme push my latest commit and break wart on github :) Now you can try to make sense of my struggles at https://github.com/akkartik/wart/blob/9bd437782d6c3e862ae388... if you're so moved. (There's a 'Desired' comment halfway down which shows what I'm trying to get, and the testcase is right at the bottom of the file.)

So far nested backquotes are the only thing I've found that remained hard after using unit tests.

-----

2 points by rocketnia 5356 days ago | link

I think your desired behavior is wrong. You'd like the inner body of a once-only definition to be like this:

  `(let* (($x o$x))
     (+ ,$x 1))
But the macro user would write that body as `(+ ,$x 1), with the backquote and everything, and you don't have that backquote in your hypothetical expansion. What you're probably looking for is:

  `(let* (($x o$x))
     ,`(+ ,$x 1))
Or, more accurately:

  `(let* (($x o$x))
     ,(progn `(+ ,$x 1)))  ; listing all the expressions in the body
Sorry, that's all the time I have to look at it right now. XD

Also, this isn't much of a response to the "generally hard" topic (except to prove this example takes more thought than I've given it so far :-p ).

-----

1 point by akkartik 5356 days ago | link

That is extremely helpful, thanks! I think it's awesome that you can read my ugly code so fast.

Update: Now when I look at it again I notice defmacro! also uses the ,(progn ,@body) trick. Thanks again.

-----


To avoid polluting the global scope, you could keep your variables in a table instead:

  ; Our scope, which we'll use instead of the global scope

  (= dynamic-env* (table))
  
  ; Initial value 

  (= (dynamic-env* 'x) 5)
  
  ; Like your some-fn, just a function that uses x.
  ; Except that this references x in the table instead
  ; of the global scope 

  (def foo () (prn "x = " (dynamic-env* 'x)))
  
  ; Like let, but manipulates keys in our table
  ; instead of variables in the global scope 

  (mac dynamic-env*-let (var val . body)
    (w/uniq g-old-val
      `(let ,g-old-val (dynamic-env* ',var)
         (= (dynamic-env* ',var) ,val)
         ,@body
         (= (dynamic-env* ',var) ,g-old-val))))
  
  arc> (dynamic-env*-let x 10 (foo))
  x = 10
  5
  arc> (dynamic-env* 'x)
  5
I'm not sure if this addresses your threading issue though. This probably looks a lot like your code except that dynamic-env*-let uses the table where your w/config uses the global scope.

And of course, if you were using this all the time you'd want to choose better names. Your fingers wouldn't be happy having to type "dynamic-env*-let" very often. :P
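(One caveat worth flagging: as written, dynamic-env*-let returns the restored old value rather than the body's value, and it won't restore the old value if the body throws. A sketch of a variant using 'after to address both, untested:)

  (mac dynamic-env*-let (var val . body)
    (w/uniq g-old-val
      `(let ,g-old-val (dynamic-env* ',var)
         (= (dynamic-env* ',var) ,val)
         (after (do ,@body)
           (= (dynamic-env* ',var) ,g-old-val)))))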

-----


Thanks to rocketnia for first getting me to notice the Kernel programming language [1] and then for sharing some interesting code about it [2]. (I'm checking out kernelish.arc [3] right now. :)

---

[1] http://arclanguage.org/item?id=11882

[2] http://arclanguage.org/item?id=13325

[3] https://gist.github.com/778492

-----

2 points by rocketnia 5356 days ago | link

I hope it's useful to you. The R-1RK is pretty well-reasoned in most regards, so it's some good inspiration. ^_^

...But I can't leave well enough alone. Here are a few key places I disagree with the philosophical choices:

"G3: Dangerous [things] should be impossible to program by accident" is pretty bogus, and even if I give it the benefit of the doubt, it's poorly phrased. IMO, things should be made purposefully difficult only when that makes the useful things easier overall. I think most uses of G3 in Kernel are actually efficiency concerns

"G2: The language primitives should be capable of implementing all the advanced features of R5RS" is a bit scary. I can't say I don't do the same sort of thing by making Penknife a similar experience to Arc, but I suspect heavy inspiration from a single source is usually more of a bias than a productive motivation. It'd be best to justify each "advanced feature" on its own merit and to be ready to take inspiration from lots of sources. But hey, if someone has enough time on their hands to do a point-by-point restoration of another good system, or if they don't have enough time to weigh the options, it's a reasonable pursuit. (Maybe. ^^; How much of that is me telling myself that? 'Cause somehow I feel I fall into both the "enough time" and "not enough time" cases, and it sounds a bit like I'm looking for excuses. :-p )

But most importantly, I think "G1a: [All] should be first-class objects" is better served by making every primitive utility's domain explicit and extensible. That is, reducing the number of official ways to contrast the first-class-ness of particular objects. To be fair, Kernel puts a strong emphasis on supporting cyclic data structures, and that's one big case which extensible utilities probably need to plan for in advance (so that extensions aren't limited to being naive recursive traversals). But on the other hand, Kernel's 'equal? doesn't allow a programmer-defined type (an encapsulation) to provide custom behavior, so it actually makes those types second-class. @_@

-----

1 point by evanrmurphy 5355 days ago | link

> "G3: Dangerous [things] should be impossible to program by accident" is pretty bogus, and even if I give it the benefit of the doubt, it's poorly phrased.

The paper actually says, "Dangerous things should be difficult to do by accident" [1]. I don't mean to nitpick, but in this context I think the difference between "difficult" and "impossible" is significant.

Update: I just realized that Shutt's dissertation and the R-1RK are distinct, but the quote is consistent across them.

---

[1] From page 88 of jshutt.pdf, downloadable at http://www.wpi.edu/Pubs/ETD/Available/etd-090110-124904/

-----

2 points by rocketnia 5355 days ago | link

That difference is significant too, but I'm specifically talking about the difference between trying to make something difficult and neglecting to make it easy. I pick a tool in the first place because it lets me do things more easily.

...Oh, I misquoted it. XD Yeah, thanks for catching that.

Another thing I missed was finishing the efficiency sentence. I was going to devote another paragraph to criticizing the efficiency-should-only-arise-naturally rule for being hypocritical--that an eye to efficiency influences and probably compromises other aspects of the design from the outset--but I determined I actually agree with that rule for Kernel. I think efficiency of fexprs is one of the most significant things Kernel stands to prove, so I don't mind an explicit and upfront goal of efficiency, but efficiency isn't in itself a goal motivating Kernel (I think), so it's good for the rule to be nuanced a bit. (Had I completed the sentence, it would have covered some part of that, but I'm not sure which. XD )

-----

1 point by rocketnia 5355 days ago | link

Oh, that dissertation is new(-ish)! Last time I looked into Kernel it was just the R-1RK. Gotta read that sometime. ^_^

-----

1 point by evanrmurphy 5356 days ago | link

  arc> (load "kernelish.arc")
  Error: "Function call on inappropriate object #(tagged fexpr #<procedure>) ()"
Might be an easy fix. Regardless, I've gleaned a lot from reading the code.

  (feval '(assign quote (fx (expr) env expr)))
I'm guessing this came from Kernel, but it's brilliant.

  (feval '(assign fn
            (fx (parms . body) env
              (wrap (eval (list* 'fx parms '!ignored body) env)))))
Hmm... I'm not sure I get wrap.

-----

2 points by rocketnia 5356 days ago | link

In Kernel, applicatives (procedures) have underlying operatives (fexprs), and 'wrap is the axiomatic way to make an applicative, kinda like Arc's use of 'annotate to make macros. I can't say I'm particularly attached to 'wrap; I kind of included it on a whim in order to support a nifty definition of 'fn.

As far as the bug goes, I still haven't tested the code, but here are a few fixes I just pushed, which may or may not be relevant:

   (def feval (expr (o env fenv*))
     (case type.expr
       cons  (fapply (feval car.expr env) cdr.expr env)
  -    sym   do.env.expr
  +    sym   (fenv-get env expr)
             expr))
  
   ; The 'applicative type is totally Kernel's idea.
   (def fapply (op args (o env fenv*))
     (case type.op
       fexpr        (rep.op env args)
  -    applicative  ((rep rep.op) env (map [feval _ env] args))
  +    applicative  (fapply unwrap.op (map [feval _ env] args) env)
                    (apply op (map [feval _ env] args))))
  
  ...
  
  +
  +; We use a singleton list so that an applicative can wrap an
  +; applicative. Using 'annotate does nothing to a value that's already
  +; of the given type.
  +
   (fdef wrap (fexpr)
  -  (annotate 'applicative fexpr))
  +  (annotate 'applicative list.fexpr))
  
   (fdef unwrap (applicative)
  -  rep.applicative)
  +  rep.applicative.0)
  +

-----

2 points by rocketnia 5352 days ago | link

Well, I've finally taken the time to debug kernelish.arc. It was really buggy. XD It seems to be working now, but let me know if you run across some more trouble. ^_^

Here's the link again: https://gist.github.com/778492

Another thing:

> I'm guessing [that definition of quote] came from Kernel, but it's brilliant.

Well, I didn't (consciously) take that from Kernel. It's just what fexprs do, right?

On the other hand, I sorta did take the definition of 'fn from Kernel's '$lambda. I peeked at the R-1RK in order to discover that it used 'list*, and the implementation was straightforward from there. The definition I have is probably exactly the same as the Kernel one.

-----

1 point by evanrmurphy 5351 days ago | link

It seems to be working in my first tests too. This is really neat, thanks for putting it together.

-----

1 point by evanrmurphy 5352 days ago | link

Cool thanks, I look forward to trying it. :)

-----

2 points by akkartik 5356 days ago | link

Dijkstra, in the introduction to "A discipline of programming":

  Needless to say, none of the examples in this book have been run on a computer.
:-p

-----

2 points by rocketnia 5356 days ago | link

Wow, what a quote. XD Is this a more accurate version? I can't find any search results for yours, but there aren't many for this one either. :-p

  None of the programs in this monograph, needless to say, has been tested on a machine.

-----

3 points by akkartik 5356 days ago | link

I was running off memory :) I'll check with my copy when I get home tonight. (Discipline is one of ~ten books I still own. http://www.reddit.com/r/programming/comments/6f0fz/do_you_bu...)

---

Update 15 hours later: yep, your version is right. A couple more gems in that Preface:

"..it hurts me to see the semantics of the repetitive construct

  while B do S
defined as that of the call of the recursive procedure

  whiledo(B, S)
Do you think the BS was chosen accidentally? :)

I don't like to crack an egg with a sledgehammer, no matter how effective the sledgehammer..

Contrast the Alan Kay quote that says you should find the most complex construct you possibly need, and build everything in terms of it. (again my google fu is weak)

For the absence of a bibliography I offer neither explanation nor apology.

---

My predominant impression after reading Discipline was that EWD was getting high off his programming. He was trying really hard, but failing to get me high as well.

-----

2 points by rocketnia 5355 days ago | link

I like your book management strategy, BTW. :)

> Contrast the Alan Kay quote that says you should find the most complex construct you possibly need, and build everything in terms of it. (again my google fu is weak)

If Alan Kay's the person to credit for OOP, I guess that doesn't surprise me. ^_^ A simple basis of complicated things is just fine. For instance, not all our lambdas actually need to be closures, but that doesn't stop us from reasoning about them as closures for the sake of reducing the number of concepts we're juggling.

The problem comes around when people forget how arbitrary any particular complicated basis is. XD I'm looking at you, "Arc needs OOP" threads.

So maybe the Dijkstra and Kay quotes are compatible, in a sense. Kay can be encouraging people to find appropriate foundations from which to implement concepts, and Dijkstra can be encouraging people to perceive the concept itself rather than taking its implementation for granted.

I guess I can't really say that without knowing more of the context of what Dijkstra and Kay believed. Still, a quote more opposite to Dijkstra's might be this one:

> Beyond the primitive operators, which by definition can't be written in the language, the whole of the Arc spec will be written in Arc. As Abelson and Sussman say, programs must be written for people to read, and only incidentally for machines to execute. So if a language is any good, source code ought to be a better way to convey ideas than English prose.

- Paul Graham

I think it's similar to foundational math and philosophy. An implementation (model) of a system in terms of another system can admit implementation-specific proofs of things that are false in other implementations. Just because one model or formalization has been found doesn't make it uninteresting to continue considering the motivating informal system on its own merit.

-----

2 points by evanrmurphy 5355 days ago | link

Found that Alan Kay quote in The Early History Of Smalltalk [1]:

"take the hardest and most profound thing you need to do, make it great, an [sic] then build every easier thing out of it."

---

[1] http://www.smalltalk.org/smalltalk/TheEarlyHistoryOfSmalltal...

-----

3 points by rocketnia 5355 days ago | link

Awesome. :)

> My next question was, why on earth call it a functional language? Why not just base everuything [sic] on FEXPRs and force evaluation on the receiving side when needed? I could never get a good answer, but the question was very helpful when it came time to invent Smalltalk, because this started a line of thought that said "take the hardest and most profound thing you need to do, make it great, an then build every easier thing out of it". That was the promise of LiSP and the lure of lambda--needed was a better "hardest and most profound" thing. Objects should be it.

This so closely parallels the way I'm thinking about Penknife that I suddenly find it spooky that Smalltalk uses square brackets. XD

-----

1 point by evanrmurphy 5357 days ago | link | parent | on: Prepare-cxrs

> Does anyone know how to rewrite (list 'quasiquote xs)? `(quasiquote ,xs) doesn't work; `(,'quasiquote ,xs) works but isn't much better.

Just quasiquoted around with it at the REPL for 10 minutes. I think your way is best!

-----

2 points by evanrmurphy 5357 days ago | link | parent | on: Implicit gensyms

Really cool and hilarious indeed. :)

-----

1 point by evanrmurphy 5357 days ago | link | parent | on: Implicit gensyms

> Common Lisp's wise design decision to separate the variable namespace from the function namespace eliminates an entire dimension of unwanted variable capture problems. [...] when programming sophisticated macros it can be hard enough to keep track of symbols in a single, isolated namespace. Having any cross-pollination of names to consider just makes macro writing more difficult than it needs to be.

I still haven't run into any problems re: Arc being a lisp-1 with unhygienic macros. Has anybody else?
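(For concreteness, the kind of capture at issue looks like this — a hypothetical snippet, not code from anyone's library:)

  (mac log-val (x)
    `(prn "value: " ,x))   ; the expansion calls the global prn

  (let prn 42              ; a local variable happens to be named prn
    (log-val 5))           ; error: the expansion now tries to call 42

In a lisp-2 the local variable prn wouldn't shadow the function prn, so only flet/labels-style bindings could capture it.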

-----

2 points by rocketnia 5357 days ago | link

I have here and there, but renaming something is easy enough in a small codebase.

Still, when I'm writing code I don't know how I'll use in practice, I try to plug up abstraction leaks whenever I can, in the "why write programs with small bugs when you can write programs with no bugs" spirit.

I've been discovering that my purer utilities are useful in unexpected places, where what once was future-proofing is now essential to their newfound role. I don't think this is a particular advantage of hygienic/minimalistic stuff over more off-the-cuff or bundle-of-features systems, but I do enjoy being able to justify the "shape" of a utility completely in terms of its original motive.

...Whoops, you weren't asking about hygiene in general. XD Pretend I'm starting this post from scratch! *whoosh, changes costume*

I'm not sure what being a lisp-1 has to do with it, really. Well, actually...

I suppose it's a little less likely for someone to need a local function than a local anything-else, but when they do need that, the local functions can still be captured by conventional gensym-savvy macros, right? So a lisp-2 can develop a culture of "don't use macros inside an 'flet unless you know what they expand to," whereas a lisp-1's corresponding "don't use macros except at global scope unless you know what they expand to" isn't quite as tractable.

When it comes to this kind of variable capture, I haven't encountered the problem or bothered writing code in a way that avoids it, even though I put up some possible techniques in a recent Semi-Arc thread. (This is exactly the issue Semi-Arc's hygiene should eliminate.) At this point, I almost consider my code to be better future-proofed if it's Penknife code that uses hygiene, and I'm just crossing my fingers that the Arc code will never capture in practice.

-----

1 point by akkartik 5357 days ago | link

Not me. Though I see 'rationally' that a single namespace adds more pressure on names, and I see that rocketnia is concerned about accidental lexical capture with his patterns like do.foo.

Perhaps dozens of programmers contributing code simultaneously makes collisions more likely. I doubt any of us have that sort of experience with lisp.

-----


> I'm actually writing a Python interpreter in Coffeescript

How intriguing! What got you interested in this project, if you don't mind my asking?

-----


Based on http://norvig.com/lispy.html, not http://norvig.com/lispy2.html. Some of the core operators use arc names rather than scheme, i.e. do, fn and = instead of begin, lambda and define/set!. Depends on underscore.js (http://documentcloud.github.com/underscore/).

There's some sloppiness in the code so don't look at it too closely: left-in console.log calls and window variables for debugging, a couple outdated comments. You can look to later commits for improvements if you're interested (note that the file gets renamed from lispy.coffee to sweet.coffee).

-----


Starting to really appreciate f-exprs, at least in theory.

-----

3 points by akkartik 5361 days ago | link

Wow, that's a crazy flame-y thread by LtU standards.

I find myself agreeing with Tom Lord. Which is perhaps a sign of how far arc has changed my views in the last 2 years.

The one place where he seems on shaky territory is in the discussion about what the 'true spirit of scheme' is. Perhaps scheme started out farther along the 'bondage and discipline' continuum than lisp to begin with. I don't know. This was, for me, the most telling quote:

> compiler writers economically dominate Scheme standardization and don't want to admit that there's a useful language larger than what they can usefully compile

I want to look harder at SCM after reading this thread. Thanks for bringing it up.

-----

3 points by akkartik 5360 days ago | link

Also, with great timing on HN, this post by Kent Pitman c 2007 seems relevant: http://news.ycombinator.com/item?id=2104420

-----

1 point by evanrmurphy 5363 days ago | link | parent | on: Noob question.

I heard pg & rtm tried first-class macros but the performance was unacceptable. I would really like to have tried them out, seen results from some performance tests... or something!

How difficult of a hack would it be to give Arc first-class macros again?

-----

3 points by aw 5362 days ago | link

If you want to try out first-class macros to see what they could do for you, that's easy enough: write an interpreter for Arc. It'd be slow of course, but usable enough that you could try out some different kinds of expressions and see if you liked what you could do with them.

-----

1 point by evanrmurphy 5362 days ago | link

Yes. I'm actually taking a similar approach with more instant gratification: try using a lisp that already has them. http://picolisp.com/5000/-2.html

-----

2 points by rocketnia 5362 days ago | link

I was a fan of fexprs not too long ago, and I still kinda am, but they lost their luster for me at about this point: http://arclanguage.org/item?id=11684

Quoting from myself,

> Quote syntax (as well as fexprs in general) lets you take code you've written and use it as data, but it does little to assure you that any of that computation will happen at compile time. If you want some intensive calculation to happen at compile time, you have to do it in a way you know the compiler (as well as any compiler-like functionality you've defined) will be nice enough to constant-propagate and inline for you.

I've realized "compiler-like functionality you've defined" is much easier to create in a compiled language where the code-walking framework already exists than in an interpreted language where you have to make your own.

If part of a language's goal is to be great at syntax, it has a conflict of interest when it comes to fexprs. They're extremely elegant, but user libraries can't get very far beyond them (at least, without making isolated sublanguages). On the other hand, the dilemma can be resolved by seeing that an fexpr call can compile into a call to the fexpr interpreter. The compiler at the core may be less elegant, but the language code can have the best of both worlds.
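(A sketch of that resolution — every name below is invented, and none of it is actual Penknife code:)

  ; Hypothetical compiler step: a call to a known fexpr compiles into
  ; a runtime call to the fexpr interpreter; anything else compiles
  ; as an ordinary expression.

  (def compile-expr (expr env)
    (if (and acons.expr (known-fexpr env car.expr))
          `(feval ',expr (runtime-env))
        acons.expr
          (map [compile-expr _ env] expr)
        expr))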

This is an approach I hope will work for Penknife. In a way, Penknife's a compiled language in order to support fexpr libraries. I don't actually expect to support fexprs in the core, but I may write a library. Kernel-style fexprs really are elegant. ^_^

Speaking of such Kernel-like libraries, I've thrown together a sketch of a Kernel-like interpreter written in Arc. It's totally untested, but if the stars have aligned, it may only have a few crippling typos and omissions. :-p https://gist.github.com/778492

Can't say I'm a fan of PicoLisp yet, though. No local variables at all? Come on! ^_^

-----

1 point by evanrmurphy 5362 days ago | link

> Can't say I'm a fan of PicoLisp yet, though. No local variables at all? Come on! ^_^

Hmm... not sure what you mean by this. I can define a local variable at the PicoLisp REPL using let just as I would in Arc:

  : x   
  -> NIL
  : (let x 5
      x)  
  -> 5
  : x     
  -> NIL
The rest of your comment was really interesting. Thanks for the links!

-----

1 point by rocketnia 5362 days ago | link

Sorry, I spoke too soon. I could tell 'let and function arguments would be possible, but I was a bit put off by http://software-lab.de/doc/ref.html#conv. From http://www.prodevtips.com/2008/08/13/explicit-scope-resoluti..., it sounds like dynamic scope. Is that right? (I'm not in a position to try it out at the moment. >.> )

Speaking of speaking too soon, I may have said "user libraries can't get very far beyond [an fexpr language's core syntax]," but I want to add the disclaimer that there's no way I actually know that.

In fact, I was noticing that Penknife's parse/compile phase is a lot like fexpr evaluation. The operator's behavior is called with the form body, and that operator takes care of parsing the rest, just like an fexpr takes care of evaluating the rest. So I think a natural fexpr take on compiler techniques is just to eval code in an environment full of fexprs that calculate compiled expressions or static types. That approach sounds really familiar to me, so it probably isn't my idea. :-p

-----

1 point by evanrmurphy 5362 days ago | link

No harm done. :) PicoLisp appears to have lexical scoping but dynamic binding, although my PLT is too weak to understand all the implications of that. From the FAQ:

> This is a form of lexical scoping - though we still have dynamic binding - of symbols, similar to the static keyword in C. [1]

> "But with dynamic binding I cannot implement closures!" This is not true. Closures are a matter of scope, not of binding. [2]

---

[1] http://software-lab.de/doc/faq.html#problems

[2] http://software-lab.de/doc/faq.html#closures

-----

2 points by rocketnia 5362 days ago | link

Sounds like transient symbols are essentially in a file-local namespace, which makes them lexically scoped (the lexical context being the file!), and that transient symbols are bound in the dynamic environment just like internal symbols are. So whenever lexical scope is needed, another file is used. Meanwhile, (====) can simulate a file break, making it a little less troublesome.

-----

1 point by evanrmurphy 5362 days ago | link

But the let example I gave a few comments ago didn't use a transient symbol. Why does it work?

I chatted with PicoLisp's author, Alexander Burger, yesterday on IRC. If I catch him again, I can ask for clarification about the scoping/binding quirks.

-----

2 points by rocketnia 5361 days ago | link

I think it works because while you're inside the let, you don't call anything that depends on a global function named x. :) That's in the FAQ too:

-

What happens when I locally bind a symbol which has a function definition?

That's not a good idea. The next time that function gets executed within the dynamic context the system may crash. Therefore we have a convention to use an upper case first letter for locally bound symbols:

  (de findCar (Car List)
     (when (member Car (cdr List))
        (list Car (car List)) ) )
;-)

http://software-lab.de/doc/faq.html#bind

-----
