--- Log opened Thu Dec 22 00:00:46 2016
00:04 catadroid [catalyst@Nightstar-38ov1u.dab.02.net] has quit [[NS] Quit: Bye]
00:15 gnolam [quassel@Nightstar-t2vo1j.tbcn.telia.com] has quit [[NS] Quit: http://quassel-irc.org - Chat comfortably. Anywhere.]
00:38
<&[R]>
<Vornicus> 20: that was quick. <abudhabi> That's what she said. <-- http://i.imgur.com/aLq5iMW.jpg
00:52
< catalyst>
o:
01:02
<@Reiv>
That's why you do it /again/, duh, he's gotta /learn/
01:06 catalyst [catalyst@Nightstar-bt5k4h.81.in-addr.arpa] has quit [Connection closed]
01:07 catadroid [catalyst@Nightstar-38ov1u.dab.02.net] has joined #code
01:23
<@sshine>
Vornicus, jeroud: what game is that?
01:23
<~Vornicus>
adventofcode.com
01:50 * Vornicus has now fully caught up.
01:52
<@Alek>
Reiv: I may be inexperienced, but even I know that much XD
02:43
<&McMartin>
You now are also in a good position to defeat me for a few days, as holiday travel requirements will complicate my availability at the stroke of doom
02:57
<~Vornicus>
THE STROKE OF DOOM
02:59 * Alek pulls mind out of gutter, goes to make tea and bedwards
03:02
<&McMartin>
Well, it's not the stroke of midnight *here*.
03:12
<@Alek>
nor here, but I have to get up at 6.
04:02 Vash [Vash@Nightstar-uhn82m.ct.comcast.net] has quit [Connection closed]
04:11
<@sshine>
5:12AM here. yet another crossfit workout at 7AM.
04:12
<@sshine>
since I quit my last job Dec 1, I have become nocturnal.
04:59 * Vornicus awaits THE STROKE OF DOOM
04:59
<~Vornicus>
it is thoroughly possible I will get points this time
05:09 catadroid` [catalyst@Nightstar-j5h4u4.dab.02.net] has joined #code
05:10
<~Vornicus>
POINTS
05:11 * Vornicus D:s at B
05:12 catadroid [catalyst@Nightstar-38ov1u.dab.02.net] has quit [Ping timeout: 121 seconds]
05:13
<&McMartin>
You have defeated me, then
05:13 * McMartin was #102 :(
05:13 * McMartin does however have a boarding pass.
05:17
<~Vornicus>
Okay B isn't quite as bad as I thought, looks like
05:21
<&McMartin>
It still looks plenty bad over here >_>
05:21
<~Vornicus>
Heh
05:26 * Vornicus thinks he's got it by hand.
05:26
<~Vornicus>
Hoooly cats
05:27
<~Vornicus>
http://adventofcode.com/2016/leaderboard/day/22
05:28
<~Vornicus>
#65, #6
05:28
<&McMartin>
Well done
05:29
<~Vornicus>
Now to write code that does it~
05:35
<&McMartin>
Heh
05:35
<&McMartin>
My code isn't working
05:39
<&McMartin>
Derp. There we go.
05:44
<&McMartin>
Nope, that won't work either
05:45
<&McMartin>
Grump.
05:49 * Vornicus has never tried to program this type of puzzle before.
05:53
<&jeroud>
McMartin: #aocspoilers for discussion. :-)
05:53
<&jeroud>
I haven't done 22 yet.
05:54
<&jeroud>
But I'm up to 18 of last year's problems.
05:56
<&McMartin>
aha
06:50 Reiv [NSwebIRC@Nightstar-ih0uis.global-gateway.net.nz] has quit [Ping timeout: 121 seconds]
07:53 catadroid` is now known as catadroid
07:53
<&jerith>
So, CS people. The lowly EE in your midst requires some advice.
07:53
<&jerith>
I have a system that I need to interact with.
07:54
<&jerith>
It seems to have some internal locks, which it must hold during certain operations it performs.
07:54
<&McMartin>
DANGER WILL ROBINSON
07:54 * McMartin locks the isolation chambers
07:54
<&[R]>
Locks, as in mutexes?
07:54
<&McMartin>
OK, we're safe for now.
07:55
<&[R]>
Or file locks, or what?
07:55
<&McMartin>
Or is this not visible from the outside?
07:55
<&jerith>
I need to interact with it in such a way as to avoid deadlocks (so I can't call it while it's waiting for a response from me) and race conditions.
07:55
<&jerith>
The system is docker.
07:55
<&jerith>
My thing is a volume driver plugin.
07:56
<&McMartin>
OK
07:56
<&McMartin>
So, what you're saying is that it has some locks, and when you call functions in it, those locks get held
07:56
<&jerith>
Docker makes "mount" and "unmount" calls for every volume a container needs when that container starts and stops.
07:56
<&McMartin>
Or do you have to command the locking and unlocking operations?
07:57
<&jerith>
I am inferring the existence of the locks based on the ease and consistency with which I can render it entirely unresponsive.
07:57
<&McMartin>
Or is it that the docker holds locks while it calls *you* in various ways
07:58
<&jerith>
The problem with the "unmount" call is that I don't know if anything else is currently using the volume I've been told to unmount.
07:58
<&McMartin>
In the general case, note that this is the problem we've been trying to solve by reinventing concurrency entirely. >_>
07:58
<&McMartin>
Unfortunately, here it seems like the textbook answer doesn't help you.
07:59
<&McMartin>
But the textbook answer is "impose a locking discipline such that if there are N locks, they will only be locked in a fixed order"
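(A minimal Go sketch of that textbook fixed-order discipline, for reference; the two locks and the functions below are illustrative, not anything from Docker or the plugin under discussion. As long as every code path that needs both locks takes them in the same order, no two goroutines can each hold one lock while waiting for the other.)

    package main

    import (
        "fmt"
        "sync"
    )

    // Two locks with a documented, fixed acquisition order: lockA, then lockB.
    // If every caller follows that order, no cycle of waiters can form.
    var (
        lockA sync.Mutex
        lockB sync.Mutex
    )

    // updateBoth needs both resources, so it takes the locks in the fixed order.
    func updateBoth() {
        lockA.Lock()
        defer lockA.Unlock()
        lockB.Lock()
        defer lockB.Unlock()
        fmt.Println("holding A then B")
    }

    // updateBOnly needs only the second resource; taking lockB alone is still
    // consistent with the ordering (it never holds B while waiting for A).
    func updateBOnly() {
        lockB.Lock()
        defer lockB.Unlock()
        fmt.Println("holding B only")
    }

    func main() {
        var wg sync.WaitGroup
        for i := 0; i < 4; i++ {
            wg.Add(2)
            go func() { defer wg.Done(); updateBoth() }()
            go func() { defer wg.Done(); updateBOnly() }()
        }
        wg.Wait()
    }
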
07:59
<&jerith>
The obvious solution is "ask docker about all containers, grovel through the giant pile of data, filter out everything not using that volume, examine what's left".
07:59
<&jerith>
Except docker won't respond while it's waiting for me.
08:00 * Vornicus finds himself thinking about the dining philosophers problem, wonders why they always say forks. Clearly if you need two of them it's got to be chopsticks. Also, finds himself combining the dining philosophers problem with the parable of the long spoons.
08:00
<&McMartin>
Vornicus: ... in the discussions of the dining philosophers problem with which I am familiar, it was in fact chopsticks.
08:01
<&McMartin>
jerith: This is a dumb question, but, uh
08:01
<~Vornicus>
mm. every time I heard it it was forks and I was all "what the fuck are you eating that needs two forks"
08:01
<&jerith>
The next obvious solution is "schedule a check-and-unmount operation for later", but then I might be told to mount the volume again between the "check" and "unmount".
08:01
<&McMartin>
Are you sure that this is a locking mechanism and not, say, masking out interrupts or a similar effect
08:01
<&McMartin>
Maintain a queue of pending actions and work through them as you can
08:02
<&McMartin>
This does imply that docker is willing to take asynchronous errors.
08:02
<~Vornicus>
wikipedia uses forks.
08:02
<&McMartin>
As by the time you ack the unmount, you won't know if the unmount will work.
08:02
<&jerith>
I don't know what the implementation is, but I can consistently deadlock by calling docker while it waits for me.
08:02
<&jerith>
I can ack the unmount and handle it later.
08:02
<&jerith>
I can't do that for mounts.
08:02
<&McMartin>
That feels less like traditional deadlock and more like "Can't call BIOS/DOS interrupts while servicing a timer/keyboard interrupt"
08:03
<&McMartin>
Mount could check to see if there's an unmount in progress and abort it if so
08:03
<&jerith>
Does that change the behaviour model?
08:04
<&jerith>
I can treat "mount" and "unmount" as atomic.
08:04
<&McMartin>
There are only 3 events here, so, we can do this exhaustively
08:04
<&McMartin>
There's unmount acked, unmount completed, and mount called
08:04
<&jerith>
(Or I can lock around them internally.)
08:04
<&McMartin>
unmount acked *must* come first in this.
08:05
<&McMartin>
that leaves two possible interleavings.
08:05
<&McMartin>
If unmount completes before mount comes in, that's fine, that's normal
08:05
<&jerith>
I think I left out an important thing.
08:05
<&McMartin>
If unmount is still groveling through data when the mount comes in, mount can say "hey, never mind" and the status is that after the mount was called the disks are mounted
08:06
<&McMartin>
I don't know enough about docker to know if that's good enough.
08:06
<&jerith>
If there are N things that need a particular volume, I get N mount calls and N unmount calls.
08:06
<&McMartin>
Eventually, or all at once?
08:07
<&jerith>
One mount when a thing starts, one unmount when it stops. Arbitrary overlapping of lifetimes.
08:07 celticminstrel [celticminst@Nightstar-h4m24u.dsl.bell.ca] has quit [[NS] Quit: And lo! The computer falls into a deep sleep, to awake again some other day!]
08:07
<&McMartin>
With the understanding that I'm spitballing at midnight while multitasking, here's my first intuition for The Plan.
08:08
<&McMartin>
- Maintain an atomic reference count for each volume
08:08
<&McMartin>
- Any mount/unmount that isn't a transition to or from 0 does nothing but alter that reference count
08:08
<&jerith>
The "documented" (in github issue comments) solution is for your volume driver to count calls.
08:08
<&McMartin>
- Transition to 0 queues a future cleanup
08:09
<&McMartin>
- Transition from 0 aborts any future-queued cleanups, unless it's reached an "irreversible" state, at which point it must block until the unmount is complete, and then starts from scratch.
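(A rough Go sketch of that reference-counting plan. It assumes doMount is idempotent, in the "is it already mounted? if not, mount it" sense jerith describes later; doMount, doUnmount and cleanupDelay are illustrative names, not anything from the real driver.)

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    // One reference count per volume, with a delayed cleanup queued when the
    // count falls to zero and cancelled again if a new mount arrives first.
    type refcountedVolumes struct {
        mu      sync.Mutex
        counts  map[string]int
        pending map[string]*time.Timer
    }

    const cleanupDelay = 5 * time.Second

    func newRefcountedVolumes() *refcountedVolumes {
        return &refcountedVolumes{counts: map[string]int{}, pending: map[string]*time.Timer{}}
    }

    func (v *refcountedVolumes) Mount(name string) {
        v.mu.Lock()
        defer v.mu.Unlock()
        if t, ok := v.pending[name]; ok {
            // Transition from 0: abort the queued cleanup if it hasn't fired.
            // If Stop reports the cleanup already started (the "irreversible"
            // case), a real driver would wait for it and mount from scratch.
            t.Stop()
            delete(v.pending, name)
        }
        v.counts[name]++
        if v.counts[name] == 1 {
            doMount(name) // assumed idempotent: may already be mounted
        }
    }

    func (v *refcountedVolumes) Unmount(name string) {
        v.mu.Lock()
        defer v.mu.Unlock()
        if v.counts[name] == 0 {
            return // unmatched unmount; nothing to do in this sketch
        }
        v.counts[name]--
        if v.counts[name] == 0 {
            // Transition to 0: queue a future cleanup instead of unmounting now.
            v.pending[name] = time.AfterFunc(cleanupDelay, func() { doUnmount(name) })
        }
    }

    func doMount(name string)   { fmt.Println("mounting", name) }
    func doUnmount(name string) { fmt.Println("unmounting", name) }

    func main() {
        v := newRefcountedVolumes()
        v.Mount("data")
        v.Mount("data")   // second user: count only
        v.Unmount("data") // count drops to 1: nothing happens yet
        v.Unmount("data") // count hits 0: cleanup queued for later
        v.Mount("data")   // arrives before the cleanup fires: cleanup aborted
        v.Unmount("data") // back to 0: cleanup queued again
        time.Sleep(cleanupDelay + time.Second) // let the queued cleanup fire
    }
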
08:09
<&jerith>
This month's docker "helpfully" provides an "id" field that is guaranteed to be the same for each mount/unmount pair, so you can track those instead.
08:09
<&McMartin>
That... um.
08:09
<&McMartin>
I don't see the use case for that here
08:09
<&jerith>
Refcounts fail when you miss or forget events.
08:10
<&McMartin>
Oh, is our command stream unreliable then
08:10
<&McMartin>
OK
08:10
<&McMartin>
Um
08:10
<&jerith>
I have a strong preference for avoiding internal state in my thing.
08:10
<&McMartin>
If you miss an unmount, that's all she wrote
08:10
<~Vornicus>
I'm going to say "You're screwed"
08:10
<&jerith>
Yeah.
08:11
<&jerith>
The *correct* solution is for docker to just say "... and here are the other things that still need this volume" in the unmount call.
08:11
<&McMartin>
This is like writing a garbage collector where malloc() isn't guaranteed to give you a memory block that isn't already in use.
08:11
<&jerith>
Which it can do, because it has that information. Except it doesn't do that.
08:12
<&McMartin>
Right
08:12
<&McMartin>
I see no way out of maintaining your own internal state here.
08:12
<&McMartin>
And given that you can miss mount/unmount events, that's insufficient.
08:12
<&jerith>
I have no way of maintaining my own internal state in a manner that keeps it consistent with docker's.
08:12
<&McMartin>
If you miss an unmount event, your volume is never going away.
08:13
<&McMartin>
If you miss a mount event, that container is fucked but good.
08:13
<&McMartin>
The latter seems inescapable, the former is "merely" a performance issue
08:13
<&jerith>
And anything that lets me query docker's state means I can just query it when I need it.
08:13
<&jerith>
Except that deadlocks sometimes.
08:13
<&McMartin>
Right.
08:13
<~Vornicus>
Ok so wait
08:13
<&jerith>
So I can track short-term state.
08:13
<&McMartin>
What is the state that you are actually sharing here?
08:13
<&McMartin>
File handles?
08:14
<&McMartin>
An API to a resource you exclusively control?
08:14
<~Vornicus>
You have a method that you can use to check docker's state but while you're *doing* that docker is otherwise unresponsive and that sucks right?
08:14
<&McMartin>
What do *you* do when the mount count transitions to/from zero?
08:14
<&jerith>
I'm mounting and unmounting remote filesystems.
08:14
<&McMartin>
OK, so you're getting "Container X has started/stopped"
08:15
<&McMartin>
Or are you getting "hey, I need filesystem X plz"
08:15
<&jerith>
So mount is easy. "Is /var/lib/volumes/foo already mounted? If not, mount it."
08:15
<~Vornicus>
And you don't want to fuck around while docker is unresponsive
08:15
<&jerith>
I'm getting "mount volume foo" and "unmount volume bar".
08:16
<&McMartin>
OK
08:16
<&McMartin>
The ids *do* help
08:16
<&McMartin>
But you still need state unique to you.
08:16
<&McMartin>
Possibly also on disk, if your process is ephemeral
08:16
<&McMartin>
But you have some persistent entity that corresponds to each ID.
08:17
<&McMartin>
You know when you've missed a mount because you get an unmount from a system you've never heard of.
08:17
<&jerith>
Vornicus: My current model (which may not be entirely accurate, because it's a pain to test) is that docker holds a lock around "start container", "stop container", and "list container info".
08:17
<&McMartin>
It's impossible to tell if you've missed an unmount; that's equivalent, to you, to a container that is still running.
08:17
<~Vornicus>
So while in LCI mode you can't start or stop containers?
08:17
<&McMartin>
I think it's the other way around that's burning him.
08:17
<&McMartin>
During Stop, he gets a call
08:18
<&McMartin>
He'd like to call back into docker to LCI, but can't, because the list of containers is in the process of being modified.
08:18
<&jerith>
Yes, that.
08:18
<&McMartin>
OK, second cut at plan
08:18
<&McMartin>
(a) have a directory that you can control access to with a file lock.
08:18
<&jerith>
And a deadlocked docker is Very Bad News for my infrastructure, because each worker machine runs 20 to 100 containers.
08:19
<&jerith>
Oh, one more thing. I can't use the ids.
08:19
<&jerith>
I may be able to in the future.
08:19
<&McMartin>
... ok, this solution requires ids
08:19
<&McMartin>
Without ids, you're stuck with "atomically updated reference count"
08:19
<&McMartin>
*with* ids, you create a table of "id using X"
08:19
<&McMartin>
And unmounts only count if they match the id you previously noted.
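(A Go sketch of that "table of ids using X" variant, assuming a Docker new enough to pass a per-mount id. An unmount whose id was never recorded signals a missed mount and is only logged, rather than corrupting the count. All names are illustrative.)

    package main

    import (
        "fmt"
        "log"
        "sync"
    )

    type idTrackedVolumes struct {
        mu    sync.Mutex
        users map[string]map[string]bool // volume name -> set of mount ids
    }

    func (v *idTrackedVolumes) Mount(volume, id string) {
        v.mu.Lock()
        defer v.mu.Unlock()
        if v.users[volume] == nil {
            v.users[volume] = map[string]bool{}
        }
        if len(v.users[volume]) == 0 {
            doMount(volume) // first user: actually mount
        }
        v.users[volume][id] = true
    }

    func (v *idTrackedVolumes) Unmount(volume, id string) {
        v.mu.Lock()
        defer v.mu.Unlock()
        if !v.users[volume][id] {
            // An unmount for an id we never saw means we missed the matching
            // mount call; note it and leave the table alone.
            log.Printf("unmount for unknown id %q on volume %q", id, volume)
            return
        }
        delete(v.users[volume], id)
        if len(v.users[volume]) == 0 {
            doUnmount(volume) // last recorded user gone: safe to unmount
        }
    }

    func doMount(volume string)   { fmt.Println("mount", volume) }
    func doUnmount(volume string) { fmt.Println("unmount", volume) }

    func main() {
        v := &idTrackedVolumes{users: map[string]map[string]bool{}}
        v.Mount("data", "id-1")
        v.Mount("data", "id-2")
        v.Unmount("data", "id-1")
        v.Unmount("data", "id-3") // never mounted: logged, table untouched
        v.Unmount("data", "id-2") // last real user: volume unmounted
    }
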
08:20
<~Vornicus>
Why can't you use the ids?
08:20
<&jerith>
But docker 1.12 (which has the ids) is not supported by the orchestration stuff I'm using.
08:20
<&McMartin>
Use the filesystem, with a directory protected by a file lock, fully mutex, to protect yourself from yourself.
08:20
<&jerith>
That supports docker 1.7 to 1.11.
08:20 * Vornicus facepalms
08:20
<&jerith>
Yay docker.
08:20
<&McMartin>
Under what circumstances do you miss events?
08:20
<~Vornicus>
how is this thing popular
08:21
<&jerith>
McMartin: That's not clear. A restart of the docker daemon probably does it.
08:21
<~Vornicus>
--guess I could ask that about... rather a few core internet techs, don't mind me
08:21
<&jerith>
I don't know what happens if my plugin is dead while docker wants to call it.
08:22
<&McMartin>
"It's a more lightweight version of the old moka5-style desktop virtualization solutions"
08:22
<&jerith>
Vornicus: docker is popular because it makes container stuff easy.
08:22
<&jerith>
The big win is the image stuff.
08:22
<&McMartin>
I think the real question here is still "why are containers good" and this is either completely obvious or completely opaque depending on which part of computer use you live in :)
08:22
<&jerith>
You can say "run a container starting with this image" and Stuff Happens.
08:23
<&McMartin>
Right
08:23
<&jerith>
Pretty much everything else was "here are some things on the local filesystem, run me a container with that".
08:23
<&McMartin>
This is either DLL bundling gone completely mad, or a simplification of the earlier Easy Way, which was "here is a complete virtual machine, start it up"
08:23
<&McMartin>
Containers, AIUI, act and look like VMs without actually *being* them.
08:24
<&McMartin>
But, well
08:24
<&jerith>
(Where "container" is "process group with various kernel-level namespacing and limiting things".)
08:24
<&McMartin>
In a past life I might have in fact written and maintained shims for libc and ntdll.dll to make VMware Player behave in a manner unnervingly like this
08:25
<&jerith>
Containers are supposed to *look* like lightweight VMs that happen to share the same kernel.
08:25
<&McMartin>
Yeah.
08:25
<&jerith>
In practice, it's rather more complicated.
08:25
<&McMartin>
I have some experience in this space, but it was with heavyweight VMs.
08:25
<&jerith>
But from the outside you can often handwave that away.
08:25
<&McMartin>
And that let me enforce exclusive access on stuff more freely, I think.
08:26
<&McMartin>
For maximum safety, I think here I would sacrifice some performance and use atomic filesystem operations to maintain my plugin's state
08:26
<&McMartin>
That way if my RAM gets wiped due to *anything* crashing, we know what's what.
08:26
<&McMartin>
The only tricky part is if we outlive the thing that's sending messages.
08:26
<&jerith>
Actually, something you said earlier might work.
08:27
<&jerith>
Internally, I track "this is a volume I want to unmount".
08:27
<&McMartin>
Note that the overlapping lifespans thing makes you highly isomorphic to the classical retain/release mechanism for heap management.
08:27
<&McMartin>
So that's how you decide that you do in fact want to unmount it.
08:27
<&jerith>
I can lock around my internal mount and unmount operations, because nothing calls docker for those.
08:27
<&McMartin>
(if you get two starts and one stop, no unmount should happen at all)
08:28
<&McMartin>
Yeah, that lock-around-unmount is my "oh, something irreversible has begun, now you have to wait"
08:28
<&jerith>
I *don't* lock around my list operation (because that calls docker), but all that will do is clear the "unmount this thing" flag.
08:28
<&jerith>
Any mount operation also clears that.
08:29
<&jerith>
So mount becomes "lock, clear flag, mount, unlock".
08:29
<&jerith>
Unmount becomes "set flag, check and maybe clear, lock, unmount if flag is set, unlock".
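(A sketch of that protocol in Go for a single volume. listUsers stands in for the call back into Docker and is made without holding the local lock, matching the description above; everything here is illustrative rather than the real driver.)

    package main

    import (
        "fmt"
        "sync"
    )

    type volume struct {
        mu          sync.Mutex // held only around the local mount/unmount work
        wantUnmount bool
        mounted     bool
    }

    // Mount: lock, clear flag, mount (if needed), unlock.
    func (v *volume) Mount() {
        v.mu.Lock()
        defer v.mu.Unlock()
        v.wantUnmount = false
        if !v.mounted {
            fmt.Println("mounting")
            v.mounted = true
        }
    }

    // Unmount: set flag, check Docker and maybe clear, then lock and unmount
    // only if the flag survived, unlock. A Mount that lands anywhere in the
    // gap clears the flag, so the final check sees it and does nothing.
    func (v *volume) Unmount(listUsers func() int) {
        v.mu.Lock()
        v.wantUnmount = true
        v.mu.Unlock()

        if listUsers() > 0 { // slow call back into Docker, outside our lock
            v.mu.Lock()
            v.wantUnmount = false
            v.mu.Unlock()
            return
        }

        v.mu.Lock()
        defer v.mu.Unlock()
        if v.wantUnmount && v.mounted {
            fmt.Println("unmounting")
            v.mounted = false
        }
    }

    func main() {
        var v volume
        v.Mount()
        v.Unmount(func() int { return 1 }) // something still uses it: flag cleared
        v.Unmount(func() int { return 0 }) // nothing left: flag survives, really unmounted
    }
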
08:31
<~Vornicus>
so wait if the docker daemon goes down does it basically unmount everything, and you miss those?
08:32
<&jerith>
Then I need a separate periodic background task (in the same process, so it shares flags and locks) that schedules an unmount for each mounted volume in case we missed its last unmount call.
08:32
<&jerith>
I was typing my answer while you asked the question. :-)
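(Continuing the sketch above, a periodic sweep in the same process, sharing the same flags and locks, that re-attempts an unmount for every volume believed to be mounted, in case its real unmount call was missed. The one-minute interval is arbitrary; the volume type is the one from the previous sketch, and "time" would need to be added to its imports.)

    func sweep(vols map[string]*volume, listUsers func(name string) int) {
        ticker := time.NewTicker(time.Minute)
        defer ticker.Stop()
        for range ticker.C {
            for name, v := range vols {
                // Each attempt goes through the same flag-and-lock protocol,
                // so a volume that is still in use is simply left alone.
                v.Unmount(func() int { return listUsers(name) })
            }
        }
    }
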
08:32
<&McMartin>
OK, right
08:32
<&McMartin>
That's where I got some leeway you lack.
08:32
<&McMartin>
I was an injected DLL.
08:33
<&McMartin>
If I had the equivalent of "the daemon crashed", I crash too.
08:34 * McMartin does a little pencilwork
08:34
<&McMartin>
Your stated protocol looks sound.
08:34
<&McMartin>
The worst case is a mount coming in after unmount sets the flag, or the mount happening before the unmount gets a chance to do anything at all
08:34
<&McMartin>
But in that worst case where unmount is called, mount is called, then unmount begins to happen...
08:35
<&McMartin>
... the check-and-maybe-clear step should clear it.
08:36
<&jerith>
My meta-question is this: Are there any tools or algorithms for dealing with this stuff, or is it all ad-hoc stuff in fallible meatbrains?
08:37
<&jerith>
I'd be satisfied with a thing that I could feed my model and protocol to and have it spit out "here's a deadlock" or "here's a failed invariant".
08:38
<&McMartin>
That would be worth about twelve Turing Awards~
08:38
<&jerith>
Leslie Lamport's TLA+ looks like a thing that can do that, but it requires a certain upfront investment.
08:38
<&McMartin>
Yeah, basically, it's "enforce a discipline across the entire system"
08:38
<~Vornicus>
yeah, that sounds like a Hard Problem
08:38
<&McMartin>
If you don't control Docker your only option is to never call it in situations where it might be doing something important
08:39
<&jerith>
Well, "after looking at it a while I can't prove anything" is an acceptable (albeit undesirable) response.
08:39
<&McMartin>
Yeah, but, well
08:39
<&McMartin>
We are maybe at the Marie Curie stage of this even still
08:39
<&jerith>
Docker likes to become unresponsive all on its own sometimes.
08:39
<&jerith>
But usually it comes back to life after a bit.
08:40
<&McMartin>
Our formal definition of race condition was both too restrictive and insufficient in ways that mattered for the implementation of java.lang.StringBuffer, in Java 1.4
08:40
<&McMartin>
Which means that "yeah, you really have to know what the problem being solved is" is my immediate takeaway for all such things.
08:40
<&McMartin>
The model I suggest for this is actually something more like interrupt processing.
08:41
<&jerith>
McMartin: "Being poisoned by the invisible emanations from the materials we're working with?"
08:41
<&McMartin>
There are things I need to Not Do when interrupts are disabled.
08:41
<&McMartin>
jerith: But they're so glowy!
08:41
<&McMartin>
But yes, my intended meaning in this metaphor is that we know enough to seriously hurt ourselves.
08:41
<&McMartin>
And our best solutions are drastic isolation.
08:42
<&McMartin>
The best form of concurrency I know of is message-passing based systems where "send message to X" and "wait for a message to come in" are your only primitives.
08:42
<&McMartin>
It turns out that Go popularized this and may have invented important work there
08:42
<&McMartin>
But the general notion of Communicating Sequential Processes is older
08:42
<&jerith>
I remain convinced that locks and such are useful only as a very low level primitive and real work should be done way above that level using tools that it is possible to meaningfully reason about.
08:43
<&jerith>
McMartin: I'm not intimately familiar with Go's implementation, but I've heard that it's prone to deadlocking if you're not careful.
08:44
<&jerith>
I much prefer Erlang's message passing mechanism.
08:44
<&jerith>
Each "process" only has one message queue, and everything goes through that.
08:44
<&McMartin>
Yeah
08:45
<&McMartin>
I'm not convinced that's sufficient to implement all forms of synchronization
08:45
<&McMartin>
I'm pretty sure you can fake everything with channels.
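(A toy Go version of the single-mailbox idea: one goroutine owns all of the volume state and everything else talks to it by sending requests over a single channel, so the mutable state never needs a lock. Message shapes and names are illustrative.)

    package main

    import "fmt"

    type request struct {
        op     string   // "mount" or "unmount"
        volume string
        done   chan int // replies with the resulting reference count
    }

    func owner(requests <-chan request) {
        counts := map[string]int{} // only this goroutine ever touches the state
        for req := range requests {
            switch req.op {
            case "mount":
                counts[req.volume]++
            case "unmount":
                if counts[req.volume] > 0 {
                    counts[req.volume]--
                }
            }
            req.done <- counts[req.volume]
        }
    }

    func main() {
        requests := make(chan request)
        go owner(requests)

        send := func(op, volume string) int {
            done := make(chan int)
            requests <- request{op: op, volume: volume, done: done}
            return <-done
        }

        fmt.Println(send("mount", "data"))   // 1
        fmt.Println(send("mount", "data"))   // 2
        fmt.Println(send("unmount", "data")) // 1
    }
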
08:45
<&jerith>
Anyway, mutable state is really the major problem.
08:46
<&McMartin>
"remote filesystem is/is not mounted" is some pretty intrinsically mutable state
08:46
<&jerith>
If you get rid of that (or aggressively localise it in both time and space) most of the problems go away.
08:46
<&jerith>
And then when you *need* the mutable state you don't have all this other extraneous mutable state getting in the way.
08:49
<&McMartin>
My preference would be for docker to handle mount counts for you and only send mount/unmount requests in circumstances where it really is a thing that must happen
08:49
<&McMartin>
But that may be assuming you are implementing a narrower API than you perhaps truly are.
08:49
<&McMartin>
(e.g., there may be no guarantee that two mounts of the "same" volume get the same volume)
08:50
<~Vornicus>
"Our formal definition of race condition was both too restrictive and insufficient in ways that mattered for the implementation of java.lang.StringBuffer, in Java 1.4" <--- now I'm curious
08:55
<&McMartin>
The old definition was that a race condition happened whenever two accesses happened without being protected by the same mutex and at least one of them was a write.
08:55
<&McMartin>
So, Java protected all those internal fields with mutexes, so that condition never held
08:55
<&McMartin>
(It's too restrictive because lock-free algorithms exist, that's not the fun part.)
08:56
<&McMartin>
So there was a bug in StringBuffer where it locked, checked the length, unlocked, relocked, and then did something that relied on that old length being valid.
08:57
<&McMartin>
Which meant you could, say, empty the stringbuffer in between and get a buffer overrun.
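(Not the actual StringBuffer source, but a Go schematic of the pattern being described: every individual access is protected by the same mutex, so the old "same lock, at least one write" definition sees no race, yet releasing the lock between the length check and its use lets another thread invalidate the check.)

    package main

    import "sync"

    type buffer struct {
        mu   sync.Mutex
        data []byte
    }

    func (b *buffer) Len() int {
        b.mu.Lock()
        defer b.mu.Unlock()
        return len(b.data)
    }

    func (b *buffer) Reset() {
        b.mu.Lock()
        defer b.mu.Unlock()
        b.data = b.data[:0]
    }

    // Buggy: checks the length under the lock, releases it, then re-locks and
    // trusts the old length. If another goroutine calls Reset in the gap, the
    // index is out of range. The fix is to hold the lock across the whole
    // conceptually atomic check-and-use.
    func (b *buffer) LastByteBuggy() byte {
        n := b.Len() // lock, check, unlock
        b.mu.Lock()  // relock
        defer b.mu.Unlock()
        return b.data[n-1] // may rely on a length that is no longer true
    }

    func main() {
        b := &buffer{data: []byte("hello")}
        _ = b.LastByteBuggy() // fine single-threaded; unsafe once Reset can run concurrently
    }
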
08:58
<&McMartin>
This is now a classic bonehead error - you released your lock in the middle of a conceptually atomic operation - but "classic bonehead errors" are pretty shockingly young in our field.
08:58
<&McMartin>
But that's also where you get into "welp, you're kind of screwed in the general case" arguments
08:58
<~Vornicus>
Wait, that's bonkers
08:58
<&McMartin>
Whether something is a possible race condition or not depends on whether or not a block of code represents a conceptually atomic operation or not.
08:59
<&McMartin>
And that's not expressible as a formalism; it's specified ad-hoc for every piece of a system.
09:00
<&McMartin>
It's bonkers, but those lock and unlock operations were inserted *by the compiler* as responses to the synchronized keyword.
09:00
<&McMartin>
Even at the time I suspect that this is an error that would not be made if someone were hand-writing locking/unlocking code
09:00
<&McMartin>
... but if they were they would probably also not be using recursive locks, which would mean deadlocks *everywhere*
09:05 Kindamoody[zZz] is now known as Kindamoody
09:12 Kindamoody is now known as Kindamoody|afk
09:33
<&jerith>
McMartin: Yeah, you might have separate mount points for the same volume for different containers or something.
09:34
<&jerith>
(Except there's another volume driver API that expects you to provide a mount path given only the volume name, so that's not really feasible.)
09:34
<&jerith>
Docker is a mess.
10:55 Vornicus [Vorn@ServerAdministrator.Nightstar.Net] has quit [Ping timeout: 121 seconds]
11:39 catadroid` [catalyst@Nightstar-7oh2p3.dab.02.net] has joined #code
11:43 catadroid [catalyst@Nightstar-j5h4u4.dab.02.net] has quit [Ping timeout: 121 seconds]
11:44 Emmy [Emmy@Nightstar-9p7hb1.direct-adsl.nl] has joined #code
11:44 mode/#code [+o Emmy] by ChanServ
13:41 Alek [Alek@Nightstar-cltq0r.il.comcast.net] has quit [Ping timeout: 121 seconds]
13:45 Alek [Alek@Nightstar-cltq0r.il.comcast.net] has joined #code
13:45 mode/#code [+o Alek] by ChanServ
14:07 catadroid` is now known as catadroid
16:59 celticminstrel [celticminst@Nightstar-h4m24u.dsl.bell.ca] has joined #code
16:59 mode/#code [+o celticminstrel] by ChanServ
18:09 catadroid` [catalyst@Nightstar-vbfe92.dab.02.net] has joined #code
18:12 catadroid [catalyst@Nightstar-7oh2p3.dab.02.net] has quit [Ping timeout: 121 seconds]
19:07 gnolam [quassel@Nightstar-t2vo1j.tbcn.telia.com] has joined #code
19:07 mode/#code [+o gnolam] by ChanServ
19:32 starkruzr [quassel@Nightstar-7qsccf.fios.verizon.net] has quit [Operation timed out]
19:35 catadroid [catalyst@Nightstar-vbfe92.dab.02.net] has joined #code
19:35 catadroid` [catalyst@Nightstar-vbfe92.dab.02.net] has quit [The TLS connection was non-properly terminated.]
19:40 Vornicus [Vorn@ServerAdministrator.Nightstar.Net] has joined #code
19:40 mode/#code [+qo Vornicus Vornicus] by ChanServ
19:45 catadroid` [catalyst@Nightstar-vbfe92.dab.02.net] has joined #code
19:45 catadroid [catalyst@Nightstar-vbfe92.dab.02.net] has quit [The TLS connection was non-properly terminated.]
20:39 Reiv [NSwebIRC@Nightstar-ih0uis.global-gateway.net.nz] has joined #code
20:39 mode/#code [+o Reiv] by ChanServ
21:15 catadroid` is now known as catadroid
21:19 catalyst [catalyst@Nightstar-bt5k4h.81.in-addr.arpa] has joined #code
22:09
<&McMartin>
The Atrocitron has been released. https://hkn.eecs.berkeley.edu/~mcmartin/if/games/bin/atrocitron.z5
22:41
<&McMartin>
And so has Galaxy Patrol 2.1 final. https://www.dropbox.com/s/1x081y0cxv6d5pn/galaxy_patrol.nes?dl=1
22:56
< catalyst>
https://gist.github.com/anonymous/526582f2e1b74e9fe6baadb6daa6d7fe
22:56
< catalyst>
:d
22:56
< catalyst>
I think I am a wizard
22:56
<@Tamber>
Not a wizzard?
22:57 mode/#code [+oo catalyst catadroid] by Tamber
22:57
<@abudhabi>
Thankfully not a whizzard!
22:58
<@abudhabi>
catalyst: Does that do what I think it does? Simply converts a String?
22:58
<@abudhabi>
Converts whatever into a String, I mean.
22:59
<@catalyst>
yeah
22:59
<@catalyst>
make_string("Hello! There are ", 3, " things in your ", foo);
23:00
<@abudhabi>
Am I ever glad I don't have to deal with that shit.
23:00
<@catalyst>
It's a bit tedious
23:08 Alek [Alek@Nightstar-cltq0r.il.comcast.net] has quit [Ping timeout: 121 seconds]
23:09 Vorntastic [Vorn@Nightstar-4r29rl.sub-174-199-29.myvzw.com] has joined #code
23:11 Alek [Alek@Nightstar-cltq0r.il.comcast.net] has joined #code
23:11 mode/#code [+o Alek] by ChanServ
23:14 Vorntastic [Vorn@Nightstar-4r29rl.sub-174-199-29.myvzw.com] has quit [Ping timeout: 121 seconds]
23:52 Emmy [Emmy@Nightstar-9p7hb1.direct-adsl.nl] has quit [Ping timeout: 121 seconds]
--- Log closed Fri Dec 23 00:00:47 2016