Your comment detailed what the libs.arc file contained.
The code within tem.arc was originally part and parcel of those files, but was extracted into its own file. So he was just commenting that there is other code in Anarki that was originally included elsewhere, and that there may be more examples of such.
> It's also difficult to find documentation on what's in lib/
Can you give examples? If you give concrete examples we can try to improve matters.
Many libraries have a long header comment up top, like rocketnia's ns.arc.
Anarki comes with online help. Try saying `(help deftem)` at the command line. Admittedly, online help is patchy inside lib/, but make it a habit to check it if you aren't already.
Lastly, it's worth looking inside the CHANGES/ directory for major features.
> ..or to know how stable the code is, because no one seems to use most of it.
If by 'stable' you mean "we shouldn't change it", then nothing is stable, inside or out of the lib/ directory.
If by 'stable' you mean "working as intended", then tests are our proxy for intent. There's tons of tests under lib/ and I run the tests fairly often. If a library doesn't have tests then I have a lot less to say about it.
So "no one seems to use most of it" feels like an exaggeration. Most code under lib/ is constantly being monitored for bitrot.
> The dependencies for news should be libraries, but they still seem to bake in too many assumptions and interdependencies to be useful as standalone libraries.
I think here you're cargo-culting[1] notions of "good practice" without considering their costs. Not everything that other people use is a good idea for Arc. We have a tiny number of users, which makes it better to keep the community in a well-integrated codebase rather than fragment it across several "lines". Interfaces have a tendency to balkanize users on one side or other of them. We don't want that; our tiny community doesn't really benefit from the scaling benefits that interfaces provide.
If and when Arc grows 10x or something, and we have three different sets of html generation libraries, we can think about drawing better lines. Until then I suggest a different "best practice": YAGNI[2]
But those are comments from the same person I'm responding to?
Did you perhaps have a similar opinion as well? I certainly recall other people asking for a package manager.
To be clear, what I said is just my opinion, and I should have said "we don't need that" or "it's a bad idea" rather than "we don't want that", which sounds like I'm speaking for others. If a couple of us feel strongly about adding interfaces, you should feel free to do it and see how it goes. If I'm right then you'll eventually realize that it's only serving as bureaucracy. If you do enjoy it then I'll eventually realize that I'm wrong ^_^
I think API boundaries arise naturally from syntax that doesn't fit on one screen. Edits to one piece of code can only be quick and easy if they manage to preserve the assumptions that make the rest of the code work. These assumptions form an API boundary, and at some point it becomes easier not to break the code if that boundary is documented and verified.
Currently, we break up the API boundaries in Anarki by function and by file. We document the function APIs with docstrings, and we write verified documentation for them with (examples ...). We verify the file APIs with "foo.arc.t" unit test files.
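To make that concrete, here's a sketch of those conventions side by side (hedged: the exact signatures of Anarki's examples and unit-test forms are from memory and may differ):

; in lib/foo.arc: a docstring that (help double) can display,
; plus verified documentation pairing a call with its expected result
(def double (x)
  "Returns twice x."
  (* 2 x))
(examples double
  (double 3)
  6)

; in lib/foo.arc.t: a unit test verifying the file's API
(suite foo
  (test double-doubles
    (assert-same 6 (double 3))))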
If we put more effort into documenting or verifying those boundaries, it would hardly be a big change in how we use the language. We could perhaps add a bunch of "foo.arc.doc" files or something which could compile into introductory guides to what foo.arc has to offer (whereas the docstrings still act as API documentation). And we could perhaps add an alternative to Anarki's (require ...) that would load a library in a faster mode, abiding by an interface declared in a "foo.arc.i" file.
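Purely as an illustration, a "foo.arc.i" file might contain little more than the names the library promises to export. (This is hypothetical; nothing in Anarki reads such files, and the `interface` form here is made up.)

; lib/foo.arc.i (hypothetical): the interface foo.arc abides by,
; so a loader could trust these names without reloading everything
(interface foo
  double
  halve)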
It seems like the most drastic change in philosophy would happen if we broke Anarki apart into sections where if someone's working on one section and discovers they're breaking something in another section, they don't feel like they can fix it themselves.
I suppose that happens whenever people put Arc libraries in their own separate GitHub repos. There are actually a number of those repos out there, including my own (Lathe). Even aw, who was a strong believer in avoiding unnecessary black-box interface boundaries, would still put things in various repositories, and the hackinator tool would collect them into one file tree.
It also happens naturally with poor knowledge sharing. Seems like despite my attempts at explaining ns.arc, I'm still the only one with a very clear idea of how it works and what it's good for, so other people can't necessarily fix it when it breaks.
(Come to think of it, that makes ns.arc a great test case for pulling out an Anarki library into its own repo. That'd save other Anarki maintainers the trouble of having to fix it to keep Anarki's tests running.)
Hmm, even if some libraries are maintained separately from the Anarki repo, Anarki's unit tests can still involve cloning some of those repos and running their tests. If Anarki maintainers frequently break a certain library and want to help maintain it, they can invite its developer to fold it into the Anarki repo. And if the developer wants to keep maintaining it in their own repo, the Anarki project can potentially use a hackinator-style approach to maintain patches that it applies to those libraries' code. We have a lot of options. :)
"Hmm, even if some libraries are maintained separately from the Anarki repo, Anarki's unit tests can still involve cloning some of those repos and running their tests. If Anarki maintainers frequently break a certain library and want to help maintain it, they can invite its developer to fold it into the Anarki repo. And if the developer wants to keep maintaining it in their own repo, the Anarki project can potentially use a hackinator-style approach to maintain patches that it applies to those libraries' code. We have a lot of options. :)"
This would be fine. Whether we consider something an API has nothing to do with how we divide code into repositories or sections. Perhaps the key is to know who is calling us. I'd be happy to provide a guarantee to anyone who "registers" their repository with Anarki: if it's easy for me to run your codebase, I'll check it every time I make a change to Anarki, and ensure that it continues to pass all its tests. If I break an interface, I'll fix all its callers and either push my changes or send the author a PR.
I don't know yet how "registering" would work. But we can start with something informal.
>I don't know yet how "registering" would work. But we can start with something informal.
Perhaps just the functionality to clone a repo into /lib under the requisite namespaces as a way to include remote dependencies, including pinning to branch and version? No need to "register" anything at first.
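Concretely, I picture a layout along these lines (hypothetical; none of this exists yet):

lib/owner/repo/       ; cloned from github.com/owner/repo
lib/owner/repo/.pin   ; the pinned branch or version, e.g. "master" or "v1.2"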
"It seems like the most drastic change in philosophy would happen if we broke Anarki apart into sections where if someone's working on one section and discovers they're breaking something in another section, they don't feel like they can fix it themselves."
Exactly! It's a huge problem in the programmer world today that we are undisciplined about this distinction. There's a major difference between making a boundary just to make edits easy and "APIs". APIs are basically signalling to most people that they shouldn't be crossing them, and guaranteeing that anybody calling them will be able to upgrade without modifying their code. That is worlds away from just not being able to see everything on a single screen.
Since we use words like "abstraction" and "interface" for both situations, it is easy to cross this line without any awareness that something big and irreversible has changed.
Currently nothing in Anarki is an API. I recommend we be restrained in using that term, and in introducing any APIs.
"There's a major difference between making a boundary just to make edits easy and "APIs"."
Maybe there's a difference somewhere in what we mean by "API." There's a distinction to make, but I wouldn't make that one.
I'm saying APIs arise naturally at boundaries people already feel aren't worth the effort to cross for various plausible reasons. Recognizing an API boundary is a positive thing: Then we can talk about it and help people understand the ways in which it's worth it for them to cross the boundary. But talking about it also makes it something people form as a concept in their mind and form expectations around, so it can become a more imposing boundary that way if we're not careful. We should seek a sweet spot.
The important distinction is that sometimes the reasons people don't cross boundaries are good ones, and sometimes they're baloney.
I think what you and I want to avoid for Anarki is documentation that gives people baloney reasons to think it's not worth it for them to edit the Anarki code.
While we can try to minimize the good reasons that exist, and we can encourage people to try it before they knock it, there's no point in denying that the good reasons sometimes exist.
If you ask me, the reason we have to settle for a "sweet spot" is totally a consequence of the fact that Anarki is made of text. Text loves bureaucracies. Someone sees text, they ask for the Cliff's Notes, and then someone has to maintain the Cliff's Notes.... So you've gotta get rid of the text, or you'll have an API boundary.
Programming without bureaucracy would be like having a conversation without writing it down. I believe in this. I think conversational programming is exactly what we can do to simplify the situation of programming in the world, and Era's my name for that project, but Anarki is a more traditional language, and it takes a more traditional trajectory into APIs.
Our documentation can try to convey "This is all subject to change! Edit it yourself, open an issue, or drop by on Arc Forum for some help! We count on contributions like yours to make Anarki a good language. Since others are welcome to do the same thing, watch out for changes in this space." We could put an "unstable, just like everything else here" tag on the corner of every single documentation entry.
Anarki is far from the only unstable codebase people have ever seen. As a Node.js user, I've regularly had my code broken by upgrades, and Node.js isn't even a version 0.0.x project. I think only a certain number of people would mind if Anarki promised stability and then couldn't deliver on it, let alone mind if Anarki made it clear in the API documentation that there was nothing stable to find there in the first place.
---
"guaranteeing that anybody calling them will be able to upgrade without modifying their code"
You know me well. I probably said that pretty insistently in the past. It's something I obsess over and a major requirement I have for code I write (and hence all its dependencies), even though things like Node.js show me it's not a current norm in software.
It's the 100-year language ideal. Spend a lot of time to get the axioms just right, and then the language is stable and nobody ever feels the need to change it since there's nothing to improve on! In 100 years, Arc is supposed to be the language that's too perfect to be anything but the stablest language around.
I still believe in that ideal, but not in quite the same way. People will come up with a reason to promote their alternative language, even if it's just to say they're different than their parents. But I do believe languages can be designed to be more compatible with their forks, even to the point where it's difficult to say where one language ends and another begins. The resulting unified "ecosystem of all languages" could have a much simpler design than the "ecosystem of all languages" we currently have.
That's also part of my approach in Era. I make decisions in those codebases with the goal of simplifying the ecosystem of all languages.
I don't think this is something I would pursue in Anarki. Arc was very far from the 100-year language upon release, it was unstable, and it seems like Anarki has pretty much been maintained by and for people who could embrace that part of Arc's journey.
The risk here is that every time a newcomer tries out our codebase and finds it broken, we lose a potential recruit. And we don't have so much incoming attention that we can afford to be lax with it. Our situation is akin to a store that is mostly empty: we can't afford to shut our doors on top of that, because that way lies death. We have to stay open for business.
Not allowing in obvious breakage is also a way to show respect for your collaborators, the other people programming on the same codebase. If you came to Arc and the news app didn't work, you would be demotivated by having to first get it working and figure out who broke it and how. It's the same for others. When we find a few hours to hack on something fun, we all prefer that it start out in a non-broken state. And we try to leave it in a non-broken state.
So I have a suggestion for you: every time you create a PR, first try to break it yourself. Ask yourself what changed, and what it may affect. Try to catch the really obvious stuff yourself, like "does news load, does it look the same, can I submit a post?" Run the unit tests. Report in the commit message or PR what all you did by way of testing. These exercises will make you a better programmer. Tell yourself that the goal is to make changes efficiently: solve the problem in a single PR, rather than needing further changes to fix breakage from the first, which cause their own breakage, and so on.
None of this is particularly serious. It's not like any breakage here is a deal-breaker. As long as others see you making an effort to get better I think you'll get a lot of support.
Certainly a lot of things could be tested more; this would help.
But also, we might want to set GitHub to require the tests to pass on a PR before letting it be merged. I could be wrong, but I think a PR can currently be merged even if the tests fail.
This is a slight step away from "anyone can commit anything", I acknowledge. But it's also a step towards stability. At least the person would have to fix or delete the failing tests before merging.
Ok. I will try to be more careful in the future. In fact, I won't even merge until someone looks at the commit, fair enough?
It isn't broken, though, at least not if "broken" implies an unusable state. I did test the forum before I pushed, I just didn't catch everything... I didn't even know prompt.arc existed, or what it was for, but apparently neither did you. ;)
But news does still work. It just doesn't work from the same location as previously, which I think is hardly setting fire to the store.
For what it's worth I agree with the app/news changes. So Good Job I say.
And with that said, after looking at the Anarki directory, it's apparently becoming a dumping ground for people to place files at the top level. I'm not sure why this is happening, but I hope others take a look at what you have done with app/news and continue the trend of organizing things better.
I hadn't tried running it yet. You also make some good points that I hadn't immediately considered just from scanning the change. I wasn't aware of prompt.arc, for example. Thanks.
Heh, I really hope there never comes a day when Arc has vendor code. I'd really like for it to continue encouraging people to hack on their dependencies. And that requires keeping the code nearby. So a separate directory for apps is fine, and you're right that it allows versions of the same app to coexist. But hopefully we never start shoving libraries into a system path..
I was assuming it would work like PHP, where dependencies are just cloned into a local /vendor folder namespaced by owner/repo.
But that assumes there's a package manager, which assumes there's a consensus and standard around package management, and namespaces, and modules, and... we can just stick a pin in that for now.
I think the problem might be that Arc doesn't support some of the data types that you are getting from the JSON response. For example, Arc does not support the booleans true/false; it only distinguishes nil from non-nil values. That's why, in the past, I used Andrew's library [1].
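To illustrate the nil-vs-non-nil point (safe to try at the REPL; only nil counts as false in Arc, while even 0 and "" count as true):

arc> (if 0 'yes 'no)
yes
arc> (if nil 'yes 'no)
no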
That said, I haven't had a running install of Arc in many years, so I could be off base.
Looking through the Anarki logs I think I'm seeing signs of our first edit war. It's been very gradual so we haven't noticed until now.
Anarki used to use Andrew's library from http://awwx.ws/fromjson (perhaps an older version of your link, i4cu). I switched it out back in 2009 for a faster replacement (https://github.com/arclanguage/anarki/commit/dad4dc6662), not realizing that I was breaking the read side in this rather obvious manner. Since then a bunch of work has happened on emitting JSON, but apparently nobody tried parsing JSON until now.
We could retrofit these bugfixes, but I think parsing things character by character would just give up the speed gains. It may be worth just returning to Andrew's version.
But I'll try working with the existing version first, just in case it's still fast.
Just my two cents, but if I were using Arc today I'd consider doing what could be done to conform to the edn spec [1]. It was born out of Clojure, but is not Clojure-specific. Also, as I understand it, JSON is actually a subset of edn, so the two would not collide. But for now I think just getting a JSON parser running would be a win.
Also, I don't get why the Anarki repository contains a JSON parser implemented in Racket, when the built-in Racket JSON parser works just fine with Arc:
I got an error saying #t was an unknown type when I tried to do a boolean check on it, as it was passed from a JSON response. It's the reason I started this thread to begin with.
Thanks! I think you're right that the existing JSON parser is unnecessary. But there's still an open problem about translating to Arc:
arc> (w/instring in "{\"foo\": true}" ($.read-json in))
#hash((foo . #t))
We'd like this to translate to:
#hash((foo . t))
Perhaps what we're looking for isn't related to JSON at all, but a general shim for translating the output of Racket functions. That could be broadly useful.
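A minimal sketch of such a shim, assuming Anarki's $ escape for Racket literals (untested, and the name derack is made up):

; recursively rewrite Racket booleans in a result into Arc values;
; tables and lists are walked, everything else passes through
(def derack (x)
  (if (is x ($ #t))   't
      (is x ($ #f))   nil
      (isa x 'table)  (w/table y
                        (each (k v) x
                          (= y.k derack.v)))
      (acons x)       (cons (derack car.x) (derack cdr.x))
                      x))
; caveat: storing nil in a table deletes the key, so false/null
; values vanish -- the very nil ambiguity discussed further down

With that, (derack (w/instring in "{\"foo\": true}" ($.read-json in))) should yield #hash((foo . t)).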
Why do we need to explicitly translate #t to t? Arc has no problem handling Racket booleans.
I'd like to see a specific, brief example where using Racket booleans within Arc actually is a problem, because I haven't encountered any myself.
If I try to use the hash from your example in actual Arc code, it works fine:
arc> (if ((w/instring in "{\"foo\": true}" ($.read-json in)) 'foo) 'yes 'no)
yes
arc> (if ((w/instring in "{\"foo\": false}" ($.read-json in)) 'foo) 'yes 'no)
no
> This parameter determines the default Racket value that corresponds to a JSON “null”. By default, it is the 'null symbol. In some cases a different value may better fit your needs, therefore all functions in this library accept a #:null keyword argument for the value that is used to represent a JSON “null”, and this argument defaults to (json-null).
If you set the JSON null to nil before running your example, it works as you'd expect:
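Something like this, that is (an untested sketch using the $.json-null parameter from the Racket docs above):

; make Racket's json reader use Arc's nil for JSON null
($.json-null nil)

; a null field is then simply falsy from Arc's perspective
(if ((w/instring in "{\"foo\": null}" ($.read-json in)) 'foo)
  'yes
  'no)
; => no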
I think we should get rid of json.rkt and use the Racket built-in, which is way better documented. (But I'm not going to delete json.rkt myself, particularly when I know someone is working with it.)
For round-tripping between JSON and Arc, I would expect the JSON values {}, {"foo": null}, {"foo": false}, and {"foo": []} to parse as four distinct Arc values.
I recommend (obj), (obj foo (list 'null)), (obj foo (list 'false)), and (obj foo (list (list))). Arc is good at representing optional values as subsingleton lists:
; Access a key that's known to be there.
t!foo.0

; Access a key that isn't known to be there.
(iflet (x) t!foo
  (do-something-then x)
  (do-something-else))
Using my `sobj` utility, you can write (obj foo (list 'false)) as (sobj foo 'false).
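For anyone curious, sobj is little more than obj with each value wrapped in a singleton list. A rough sketch (my actual version may differ in the details):

; (sobj foo 'false) expands to (obj foo (list 'false))
(mac sobj args
  `(obj ,@(mappend (fn ((k v)) `(,k (list ,v)))
                   (pair args))))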
Meanwhile, I don't think there's any big deal when we don't have Arc-style booleans....
; Branch on the symbols 'false and 'true.
(if (isnt 'false x)
  (do-something-then)
  (do-something-else))

; Alternative
(case x true
  (do-something-then)
  (do-something-else))
But now that I look closer at the ac.scm history (now ac.rkt in Anarki), I realize I was mistaken to believe Arc treated #f as a different value than nil. Turns out Arc has always equated #f and 'nil with `is`, counted them both as falsy, etc. So this library was already returning nil, from Arc's perspective.
There are some things that slip through the cracks. It looks like (type #f) has always given an "unknown type" error, as opposed to returning 'sym as it does for 'nil and '().
So with that in mind, I think it's a bug if an Arc JSON library returns 'nil or #f for JSON false, unless it returns something other than '() for JSON []. To avoid collision, we could represent JSON arrays using `annotate` values rather than plain Arc lists, but I think representing JSON false and null using Arc symbols like 'false and 'null is easier.
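For concreteness, the annotate approach would look something like this, using only core Arc operations:

; tag JSON arrays so the empty array can't collide with nil
(= empty-array (annotate 'json-array nil))

(type empty-array)  ; => json-array, not sym
(rep empty-array)   ; => nil, the underlying list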
Having nil represent both false and the empty list is also what Common Lisp does.
Really, #t and #f are not proper Arc booleans[0], so it makes sense that Arc can't tell what type they are.
You can configure the value Arc chooses for a JSON null with the $.json-null function, which I think is fine as JSON APIs might have differing semantics.
That documentation may be wrong. On the other hand, it may be correct in the context of someone who is only using Arc, not Racket.
There are a lot of ways to conceive of what Arc "is" outside of the Racket implementations, but I think Arc implementations like Rainbow, Jarc, Arcueid, and so on tend to be inspired first by the rather small set of operations showcased in the tutorial and in arc.arc. (As the tutorial says, "The definitions in arc.arc are also an experiment in another way. They are the language spec.") Since #f isn't part of those, it's not something that an Arc implementation would necessarily focus on supporting, so there's a practical sense in which it's not a part of Arc our Arc code can rely on.
(Not that any other part of Arc is stable either.)
> Really, #t and #f are not proper Arc booleans[0], so it makes sense that Arc can't tell what type they are.
Really, #t and #f are not proper Arc anything, but the language apparently handles them, so IMHO Arc should also be able to know what type they are. Otherwise, I fear, this will become a hodgepodge language that will lose appeal.
Personally I don't care if Arc supports booleans. I only care that it can translate booleans (when need be) to a meaningful Arc semantic. That said, if we're going to support booleans then let's not create partial support.
Great, that looks fine. Then you can just access the `success` key like this:
($ (require json))

(if ((w/instring json "{\"success\":false}"
       ($.read-json json))
     'success)
  'success  ; proceed to login
  'fail)    ; must be a robot then
What's the thinking behind implementing captcha? I really, really hate ReCaptcha. For most sites the payoff to a spammer isn't really worth even the simplest robot test. A new site that throws ReCaptcha is an instant bounce.
>What's the thinking behind implementing captcha? I really, really hate ReCaptcha.
It might be useful if it only shows up on the login/signup page after failed attempts, but it might also be overkill. Personally, I prefer overt solutions to opaque ones like shadowbanning and throttling IPs. But mostly it seemed like a good way to figure out some basics (managing keys, how the app server manages the form loop, API responses, and JSON).
I could finish the test I have, push that and leave integration for later.
I agree with overt vs shadowbanning. My tendency is to just add tools for moderation. Let bots sign up and spam, but make it easy to take them down and ban them at that point. Arc is reasonably decent there.
Sorry, my comment was rather lazy. Let me try again.
I get the impression many of the people who want to start an HN-like clone are considering using it as a content delivery platform. For example, http://arclanguage.org/item?id=20452 notes a revenue-sharing model for specialized content. Now, if that's the case, these site owners will have both spammers and content scrapers to contend with. So my initial comment was also referring to content scrapers.
However, there's another thing to consider: session storage costs. About 6 months ago I went through a process to reduce the cost of data held in memory for session storage (redis in this case). The session data was continually analyzed both to determine who the bad users were and to learn what value the good users were getting out of the app (feature planning, etc.). It was an interesting process: just by reshaping the session data, thousands of dollars per month could be saved in db fees. Now, I realize that someone starting an HN clone is probably not dealing with that, but I'd be willing to bet that part of the reason Captcha was implemented in HN proper was to reduce fees associated with the volume of requests, session costs and even network load. It's my feeling that, generically speaking, adding captcha functionality is a good option to have.
> I'd be willing to bet that part of the reason Captcha was implemented in HN proper was to reduce fees associated with the volume of requests, session costs and even network load.
Let me add some info to that... I'm not sure if anyone noticed, but HN has implemented new session management strategies. You can see this in how your login is now maintained across multiple devices, whereas the Arc code (that we have access to) logs you out upon logging in elsewhere. I also believe that when pg handed over the HN code, significant changes occurred, including how session data is stored and how that data is utilized to integrate with Cloudflare. Obviously I'm making big guesses, because I don't have access to the code, but I'm willing to bet the changes HN has put in place would surprise everyone here.
Sadly, everyone who sees HN today will come here and look for the source code, not realizing what's available here is neither modern nor comparable.
>Sadly everyone who sees HN today will come here and look for the source code not realizing what's available here is not modern nor comparable.
One thing I noticed when Arc gets brought up on HN is that everyone seems interested in the language but not so much the application. People seem to cargo-cult the forum for some reason.
> For most sites the payoff to a spammer isn't really worth even the simplest robot test.
Maybe I am missing something... Isn't captcha just a fairly simple robot test (and thus a spam preventative)? Or are you suggesting something even simpler? Because I've run a few sites and tried implementing very simple programmatic obstacles, and they really didn't stop the spammers.
Maybe the better question is - what would you suggest?
Simple captchas are totally fine. My problem is with Google's ReCaptcha in particular, where the problems have gotten so difficult that I mostly can't prove I'm a human.
Hmm... That's not my experience. 90% of ReCaptcha tests are invisible, if not a simple checkbox. Only a small subset requires actually solving a problem.
One time I spent a good 20 minutes identifying cars, signs, and storefronts before it would let me in, and that was with no VPN or Tor or anything. At some point they oughta be paying us. :-p
This is probably going to sound super crazy, but I have to say it...
I know you (akkartik) have a Google account, because I remember when you moved your blog over to Google's services (I think they call it 'Circles' or some such). I also remember you created a news aggregator application that scraped content. Yes, I know, it was a long time ago in a galaxy far, far away..., but still...
I'm thinking that Google identified your scraping work and deemed you a risky robot type, but they also probably correlated your IP from the scraping with your IP from your Google services login and tagged you that way. So now, even if your IP changes, they'll continue to have you in their crosshairs for, like, ever.
Any takers? If you'd like I can also look into who killed JFK...
a) My cookie acceptance policies are non-standard. (I no longer even remember what they are anymore.)
b) I'm often behind a VPN for work.
c) I'm often on my phone, or tethering from my phone.
Complaints about ReCaptcha are fairly common if you look on HN and so on. You don't have to have run a scraper to hit it, I don't think. I think you may be a robot from the future for never having problems with the pictures of signs and cars :p
Final minor correction: I've played with Google+ in the past (I actually worked at Google on Circles for a year) but I never moved my blog there. I just linked to my blog posts from there.
> Complaints about ReCaptcha are fairly common if you look on HN and so on.
Yeah I'm aware of the complaints, but in my mind HN wouldn't be the best resource of information for such an assessment. By default HN members are non-standard in most ways that would matter to ReCaptcha.
It's an interesting dilemma and one that I'm coming up on soon as I plan to release a new app in a few months time. In my case the intended audience for the app is very widespread and not specific to a tech audience. It could be that the vast majority of my users (if I get any - lol) would never have a problem, because the vast majority of people using the net don't know what a VPN is or how to change a cookie setting (just as examples).
I'll have to give it some more thought, but in the meantime, are you aware of any resources on the matter that would be more reflective than HN?
edit: I often find info like this [1]:
"Different studies conducted by Stanford University, Webnographer and
Animoto, showed that there is an approximately 15% abandonment rate when the
users are faced with CAPTCHA challenge."
But really, I do expect to take some loss when using ReCaptcha. The question really becomes: is it worth it? After all, spam can also cause users to leave, and content scrapers can also devalue your product.
Certainly, it's an issue only tech-savvy people will have.
However, every step you put users through is a place where your funnel will leak. So in your place I wouldn't turn on captcha until I actually see spam being a problem.
Also, independently, since I am personally ill-disposed towards ReCaptcha, I have zero urge to help maintain it in Anarki :) You're welcome to hack on it, though!
I think it's less important to have ReCaptcha or not than it is to have a working POC for interacting with a remote JSON API, and for parsing JSON in general, since that opens up a lot of possibilities. ReCaptcha itself is just the low-hanging fruit for that, since it's so simple.
As far as integration goes, we could just leave it up to whoever wants to do the work, or make it easily configurable with the default being not to use it at all.
It's great to see a JSON API integrated in Arc. :)
I took a look and found fixes for the unit tests. Before I got into that debugging though, I noticed some problems with the JSON library that I'm not sure what to do with. It turns out those are unrelated to the test failures.
The JSON solution is a quick and dirty hack by a rank noob, and I'm sure something better will come along.
And in hindsight, the problem with the (body) macro should probably have been obvious, considering HTML tables are built using (tab) and not (table). I'm starting to think everything other than (tag) should be done away with to avoid the issue in principle, but that would be a major undertaking and probably mostly just bikeshedding.
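For reference, the generic (tag ...) macro is already enough to express any tag without a dedicated per-tag macro; from memory, the output looks something like:

arc> (tostring (tag b (pr "hello")))
"<b>hello</b>"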
Ahh, well my whole comment is completely off topic; sorry. That's what 10 years of being a BA does to the mind :)
> Can you link to the post you are referring to?
I guess I should be using the expected terminology, that being a 'comment' not a 'post'. In my mind he 'post'ed a comment, but I understand I should have been more clear.
Ooh, interesting! We might want to figure out a long-term documentation system for Anarki; the existing arclanguage.github.io documentation is for Arc 3.1. And while that's great, it's suboptimal for cases like this, because it says "...there is no support for outgoing network connections." (https://arclanguage.github.io/ref/networking.html).
How about UDP calls? I snagged this CL snippet a while back (sorry, I don't have the author info at hand). It creates a socket, sends data, and receives data: