Arc Forum | nlavine's comments

I've actually been thinking about this a lot lately. I believe Psyco is just a special case of partial evaluation.

nfib is faster than fib because it's compiled with the knowledge that its argument is an integer. But you could make it even faster if you knew that its argument was an integer that fit into a machine word. (That might not be what you want to do with nfib, but I imagine a lot of numerical functions are going to be used almost always with numbers that fit in machine words.)

My thought is that you should be able to specialize functions for arbitrary predicates. For instance,

  (def fib (n)  ; yeah, it's a bad definition
    (specialize (and (number n) (< n (expt 2 32)) (> n 0)))
    (if (< n 2)
        n
        (* n (fib (- n 1)))))
Or maybe you could even declare specializations outside of the function. For instance, imagine writing

  (specialize (fib n) (and (number n) (> n 0) (< n (expt 2 32))))
right next to some code that really needed a fast fibonacci function. And of course, if you think about it like that, there's no need to restrict it to function calls - that's just an arbitrary limitation. Let's say you need really fast mapping of a particular function.

  (specialize [map f _])
Or even,

  (= fmapper (specialize [map f _]))
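If it helps to see what that might mean operationally, a specialize form could simply expand into a predicate-guarded dispatch at the call boundary. A rough sketch of the idea, where fib-fixnum stands in for the fast version a real specializer would generate:

  (let slow fib
    (= fib
       (fn (n)
         (if (and (number n) (> n 0) (< n (expt 2 32)))
             (fib-fixnum n) ; hypothetical specialized version
             (slow n)))))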

-----

1 point by sacado 6233 days ago | link

Yep, that's probably the next step after Psyco.

-----

1 point by nlavine 6239 days ago | link | parent | on: Arc continuation puzzle

Thank you very much for explaining this, but there's one thing I don't understand.

The original value of k is the continuation of (catch throw) - a procedure which takes one argument, goes back to the let, and returns its argument as the value of (catch throw). This makes sense.

The second value of k is the continuation of (catch (k throw)). As far as I can tell, this expression does not return a value to anything. Why does its continuation take an argument? Is it just throwing it away?

-----

1 point by cchooper 6239 days ago | link

Yes, the value of (catch (k throw)) is just discarded, so you could pass anything to the continuation and it wouldn't matter. The only reason it takes an argument is because all expressions in Arc have to return something, even if the value isn't used.
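
A smaller illustration of the same point, away from the puzzle (meant for the repl; the names are just examples):

  (= k nil)
  (do (ccc (fn (c) (= k c))) ; the value of this whole expression is unused
      (prn "resumed"))
  (k 42) ; prints "resumed" again; the 42 goes nowhere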

-----


Agreed. This is exactly the problem that should be fixed. Your way would do it, although there are also some others.

Ironically, one great way is just to do no safety checks at all - the opposite of typing. I agree that we need something else, though.

-----

1 point by nlavine 6246 days ago | link | parent | on: Lists as functions on symbol arguments

Why are we assuming that keyword arguments must be passed as flat lists of keywords and values?

  (bar 1 2 ('foo 3) ('baz 4))
I agree the flat way is cleaner, but this is certainly a possibility too.
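
A receiving function could then split the pairs out of its rest args. A rough sketch, assuming each pair arrives as an ordinary two-element list (bar and the key names are just examples):

  (def bar args
    (with (pos (rem acons args)              ; (1 2)
           kws (listtab (keep acons args)))  ; 'foo -> 3, 'baz -> 4
      (list pos (kws 'foo) (kws 'baz))))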

-----

4 points by nlavine 6246 days ago | link | parent | on: Will Arc ever be as fast as CL?

I would argue that speed is a property of programs and compilers rather than languages. For instance, this Arc program to compute the factorial of n:

  (def factorial (n)
    ((afn (a r)
       (if (is r 1) a (self (* a r) (- r 1))))
     1 n))
is very similar to this C program:

  long factorial(long n)
  {
    long a = 1;
    while (n != 1) {
      a *= n;
      n--;
    }
    return a;
  }
The only real difference is Arc's arbitrarily-sized integers, which you could probably turn off with a flag (or, better yet, implement in C - GMP, the GNU multiple-precision library, does exactly this).

It should be possible to write a fairly simple scanner for Arc that looks at an Arc program and decides whether it is trivially convertible to an equivalent C program. If it is, the Arc program could be converted. If, as I imagine, the mapping would let you write any C program as an Arc program, you could then argue that Arc was as fast as C.

The question would be, what about the features of Arc that aren't trivially convertible to C, like ccc? One response would be, "well, ccc is a pretty hard function to implement in C, but that's just a property of what ccc does - to implement the equivalent functionality, you just need this much complexity. A C programmer who wanted to use continuations would have to write the same function, so the slowness is not really because of Arc."

In other words, Arc gives you easy access to complex and hard-to-implement functions. This doesn't mean Arc is slow, just that Arc is powerful. One line of Arc code is probably much slower, on average, than one line of C code, but we don't really mind this, because the Arc code is referencing algorithms that, if they were implemented in C, would be just as slow as the Arc implementations. I think that is the true definition of speed, and the only fair one to hold Arc to. It implies, however, that speed will always be a property of individual programs and implementations, and not of languages.

-----

4 points by sramsay 6246 days ago | link

You're absolutely right: speed is a property of programs/compilers and not languages. And because of this, there's really no reason why we couldn't have an implementation of Arc that compiles to C (using, perhaps, Chicken Scheme's astonishing Cheney-on-the-MTA method), one that targets the JVM, one that's designed to be embedded, and so forth.

But this is really a socio-political decision, is it not? Do we (either PG or this burgeoning community of fans) prefer a benevolent dictatorship of the sort that governs languages like Perl and Ruby, or do we prefer the ramified computational episteme of modern Scheme?

There are good arguments on both sides, but I really think that letting a thousand flowers bloom (as the Scheme community has done) has great advantages. We get to use Scheme in lots of varied environments, with lots of different hardware, and with implementations optimized for lots of specialized tasks. This abundance is in part facilitated by Scheme's minimalistic standard, of course, and that has its own drawbacks -- code portability being the most serious one.

Personally, I'm not sure I'm down with the idea of a lisp optimized for "web applications" or "exploratory programming." It seems to me that the strength of Lisp lies precisely in its ability to become a language for "x programming." Ironically, PG himself has made some of the most eloquent arguments out there for this idea.

It seems to me that implementors are the ones who should take Arc and turn it into a "language for hacking cell phones" or whatever. Some domains will put a premium on speed, others on "smallness," still others on "embededness" or what have you. Nothing in the language should foreclose these options, but as you said, it's not clear that languages ever do. In the case of Lisp, we could write a compiler in which every function and macro is converted directly into highly optimized assembly. I don't think there's anything in Arc or any other language that would prevent that.

-----

3 points by pgwoden 6246 days ago | link

As I have no doubt that it is possible to create an Arc implementation that delivers fast execution, it is precisely the socio-political issue that I intended to address in opening this thread.

As long as Arc is implemented in MzScheme, the best it can do is asymptotically approach MzScheme's performance. The discussion in this thread suggests that Arc will not forever be implemented in this way, so the limitation will be lifted. Just how much interest there is in greased-lightning performance is not clear to me.

-----

7 points by sramsay 6246 days ago | link

Well, speaking only for myself . . .

I'm an English professor who does various kinds of computational analysis on large text corpora (so things like string handling, XML, and regex are really important to me). I've been known to write programs that take three weeks to run in Java, so I'm always looking for ways to make my programs fast without resorting to C. Nothing against C. It's one of my favorite languages. It's just not a lot of fun for string processing.

Basically, I always want to go high and fast with my languages, and that's one of the reasons I like Lisp. It's a super high level language, but (in my usage patterns) it outperforms languages like Ruby (which I adore) and Java (which I find increasingly annoying).

Now, my particular usage is perhaps a bit obscure, but it may generalize to other areas. I can't believe I'm the only one doing lots of text processing who wants a fast, high level language. In the end "web application programming" is really just a special case of text processing, so it may align with PG's goals at some more fundamental level.

-----

1 point by sacado 6246 days ago | link

I want a dictatorship! :)

-----

1 point by nlavine 6248 days ago | link | parent | on: Dynamic Binding

Parameters look interesting, but the man page suggests that using them adds pretty significant overhead in code tree nodes. Do you think we can get that functionality cheaper?

The thing I like about my approach is the simplicity - everything expands to sets. If we had parameters, we'd probably want to modify set so that it worked on them. In that case, this approach would still work, and would be the most idiomatic way to do it.

It also separates functionality nicely, since it would work with any thread mechanism that allowed per-thread variables (all of them). Because it only expands to set calls, it's a general mechanism for dynamic binding, and the thread stuff should "just work" if they're used together.
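
Concretely, the macro layer I have in mind would expand into something like this - a hypothetical dlet that sets a global (or thread-local) variable for the extent of its body and restores it even on exceptions, via arc.arc's 'after:

  (mac dlet (var val . body)
    (w/uniq old
      `(let ,old ,var
         (= ,var ,val)
         (after (do ,@body)
                (= ,var ,old)))))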

-----

1 point by CatDancer 6248 days ago | link

We can't know what is "significant overhead" without testing. Arc already has its own overhead, so for all we know the added overhead of using parameters might be utterly insignificant. Or not.

Certainly a layered approach is a good idea: an underlying mechanism that provides bug-free dynamic binding functionality, and then a macro layer that allows people to use the functionality in a convenient and easy way.

You might be able to use set with MzScheme's thread-local variables if you also used something like 'protect' to restore values on exceptions; but you wouldn't be able to use set with MzScheme's parameters, because you need to specify the scope within which you want the parameter to have a new dynamic value.

The key challenge to using MzScheme's parameters is that you'd need a way to have Arc compile a variable reference "foo" into a Scheme parameter call "(foo)". However if someone did that I expect the result would be simpler because we wouldn't need the code to set and restore the values ourselves.

-----

1 point by nlavine 6249 days ago | link | parent | on: Dynamic Binding

It seems like the right way to go. And hopefully the fn-transformer function will make it easier for others to do the same thing. :]

-----


I'm not sure I quite understand this, but let me see if I can help clear up the confusion anyway.

A function, as distinct from a macro, has the property that when you call it, it will work the same way every time, even if you have provided some crazy bindings that shadow the functions it calls. For instance,

  (def prn args ; from arc.arc
    (do1 (apply pr args)
         (writec #\newline)))

  (let pr (die)
    (prn 53)) ; doesn't die.
Hygienic macros work the same way.

Non-hygienic macros do not have this property. That is the basic issue.
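
For instance, here's a sketch of the classic capture bug (bad-swap is deliberately broken; arc.arc's real swap uses w/uniq to avoid this):

  (mac bad-swap (a b)
    `(let tmp ,a ; tmp is not a gensym - that's the bug
       (= ,a ,b)
       (= ,b tmp)))

  (with (tmp 1 x 2)
    (bad-swap tmp x)
    (list tmp x)) ; => (1 2), not (2 1): the expansion's tmp shadowed ours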

(Note: your definition of hygiene may vary. In fact, mine might too; who knows?)

-----

1 point by eds 6258 days ago | link

Don't you mean

  (let pr [die]
    (prn 53)) ; doesn't die.
Otherwise it dies in binding pr to (die) and doesn't even get to (prn 53).

-----

1 point by nlavine 6258 days ago | link

Yeah. I didn't really have a clear definition for die. That's probably better.

I don't seem to be able to edit it now, though, so I'm afraid it will have to stay the bad way.

-----

2 points by nlavine 6258 days ago | link | parent | on: Suggest PG: Settable Function objects

Good idea! I have one question: can we generalize this?

We already have objects like tables, which act one way in functional position and another way in argument position. Can we say that you're suggesting a third: how they act in setter-target position?

If so, can we extend annotate in a general manner to let you define objects which have all three aspects? (And in the future, maybe, we could let people define their own, but I'm not sure yet how this would work, or what else exactly you would use it for, so we should wait on that.) Here's your example, modified a bit:

  (= foo
      (with (var1 1 var2 2 var3 3 var4 4)
        (annotate
          (fn (v)                 ; functional position aspect
           (case v
             'var1 var1
             'var2 var2
             'var3 var3
             'var4 var4))
          (fn (newv v)            ; setter target aspect
            (case v
              'var1 (= var1 newv) ; (I think these would have to be symbols)
              'var2 (= var2 newv)
              'var3 (= var3 newv)
              'var4 (= var4 newv)))
          (fn ()                  ; object aspect
            (list var1 var2 var3 var4)))))
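Usage would presumably look like this (hypothetical, since = doesn't yet know to route assignments to the setter aspect):

  (foo 'var2)       ; functional aspect => 2
  (= (foo 'var3) 9) ; setter aspect: calls the second fn with 9 and 'var3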

-----

1 point by almkglor 6257 days ago | link

Hmm. Maybe a better generalization is to take advantage of this code in ac.scm:

   (define (ar-tagged? x)
     (and (vector? x) (eq? (vector-ref x 0) 'tagged)))
annotate creates a vector with 3 entries:

  (annotate 'foo (fn () nil))
  => #3(tagged,foo,<procedure>)
Let's then create a basic function which retains the type in the second entry of the vector, but adds a fourth entry containing a hash.

  (attach '= (fn (v) (settercode v))
    (fn () (readercode)))
  => #4(tagged,fn,<procedure>,#hash((= . <procedure>)))
The benefit here is we could add other aspects of the object - for example, getting a length, or determining the keys so we can traverse the object.

Hmm. Might actually implement this. Am planning a persistent table, such that assignments to persistent-table!key will automatically update a file on the server (which, incidentally, can also encapsulate a database instead).
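
Something like this sketch, say, using the proposed attach plus arc.arc's load-table/save-table (all names provisional, and the setter's signature is only loosely pinned down):

  (def persistent-table (file)
    (let h (or (errsafe (load-table file)) (table))
      (attach '= (fn (v k)          ; store, then rewrite the file
                   (= (h k) v)
                   (save-table h file)
                   v)
        (fn (k) (h k)))))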

-----

1 point by almkglor 6258 days ago | link

minor nitpick: 'case already has implicit (quote ...) on each of the objects in the case position.

-----

1 point by nlavine 6257 days ago | link

Oh, right. Good call. (feels abashed)

-----

3 points by nlavine 6259 days ago | link | parent | on: A macro for creating infix DSLs

This macro is excellent! Thank you!

I especially like the way it wraps the expressions you want in infix and doesn't affect ones outside of its body, allowing you to use infix syntax in one place and not make the rest of the program behave in unexpected ways.

This will be especially good once we can bring readtable support (or the equivalent) into Arc. I saw this suggestion somewhere:

  <3 + 4 * 5> ; syntax for an infix math form.
And as you say, BNF parsers and whatever notation scanners use will be great.
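
Not your macro, of course, but for readers who want the flavor, here's a toy converter such a form might build on (right-associative and precedence-free, so it only coincidentally gets this example right):

  (def infix->prefix (e)
    (if (atom e)     e
        (no (cdr e)) (infix->prefix (car e))
        (list (cadr e)
              (infix->prefix (car e))
              (infix->prefix (cddr e)))))

  (mac infix body
    (infix->prefix body))

  (infix 3 + 4 * 5) ; => (+ 3 (* 4 5)) => 23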

-----

1 point by cchooper 6259 days ago | link

Thanks!

-----
