--- Log opened Fri Nov 24 00:00:12 2006
00:17
<@ToxicFrog>
aughajlslidtdjdlf
00:17 * ToxicFrog flails
00:24 * MyCatOwnz gives ToxicFrog a beer and a Squeak VM.
00:25
<@ToxicFrog>
I don't know Smalltalk.
00:25
<@ToxicFrog>
And the problem isn't expressing the ideas I want in code, it's forming the ideas.
00:27
< MyCatOwnz>
Ah. C'est un probleme plus grand que je peut aide vous résolver..
00:28
< MyCatOwnz>
I wonder where all the French people hang out on IRC?
00:35
< Jan[Maginot]>
Nous n'utilisons pas IRC. Il y a des murs que le besoin a construit.
00:37 * MyCatOwnz mettre sa dictionairre. =)
00:37
< MyCatOwnz>
*mets ?
00:40
< MyCatOwnz>
What on Earth did that second sentence mean?
00:40 ReivSLEP is now known as Reiver
00:40
< MyCatOwnz>
Erm, I mean, what on Earth did that second sentence mean, please?
00:41
< Jan[Maginot]>
Bah, you'd only sell it to the Germans.
00:42
< Ev3>
Hah! French entirely sucks. It's the only language stupid enough not to spell words simply enough for you to pronounce them by sound. My favorite example is Marseilles.
00:45
<@ToxicFrog>
Reiver! We were discussing function closures, I believe.
00:45
<@Reiver>
Were you? That was convenient for backscroll purposes then, wasn't it?
00:45
<@ToxicFrog>
...what?
00:47
< MyCatOwnz>
I haven't yet grokked the concept of function closures (read descriptions a few times, never had any practice with them to make it sink in), but I *think* that was either the best or the corniest joke I've heard in at least the last fifteen, maybe twenty hours.
00:47
< MyCatOwnz>
I can't tell which.
00:47
<@ToxicFrog>
Function closures are Awesome (tm).
00:48
<@Reiver>
My joke clearly wasn't.
00:49
< MyCatOwnz>
Reiver: nah, it was good.
00:49 Jan[Maginot] is now known as Janus
00:49
< MyCatOwnz>
Reiver: I award bonus marks for nerdiness, y'see. If you'd managed to tie, say, "Nothing sucks like a VAX!" into it, for example, I'd be attempting to get you to produce standup DVDs right now so that I could hoard them.
00:50 Syloq [Syloq@Admin.Nightstar.Net] has joined #code
00:51 Syloq is now known as Syloqs-AFH
00:52 Chalcedon is now known as ChalcyMusic
00:53
<@ToxicFrog>
Reiver: anyways. Are you still interested in learning about function closures?
00:53
<@ToxicFrog>
(bearing in mind that none of this applies to Java, since it lacks both function closures and the features that would make them usable)
00:54
<@Reiver>
I certainly am, but would prefer to learn at a point in time after about three hours' time?
00:54 * Reiver has three hours before businesses close for the weekend and still has no job. >.>
00:55
<@ToxicFrog>
I...will /probably/ be awake then. Alright.
00:56
<@Reiver>
Well, if not then
00:56
<@Reiver>
Then the weekend?
00:56
<@Reiver>
Just, er
00:56
<@ToxicFrog>
MCO: I have a practice problem (from "lists and lists") that makes good function closure practice!
00:56
<@Reiver>
Not /now/. I am willing to learn later if you are willing to be patient. >.>
00:56
<@ToxicFrog>
And I am.
00:57
<@Reiver>
Excellent
00:57 * Reiver toddles off to phone more folks and such.
00:58
< MyCatOwnz>
ToxicFrog: function closures... innat where you save (part of?) the thread's execution state before entering some subroutine, then maybe restore it afterwards?
00:58
<@ToxicFrog>
Nothing to do with threads.
00:58
<@ToxicFrog>
You're thinking of continuations, I think.
00:59
< MyCatOwnz>
I might've been mixing up some elements of something-with-continuation, indeed.
00:59
<@ToxicFrog>
call-with-current-continuation, which is a Lisp construct?
00:59
< MyCatOwnz>
That'll be the one.
01:00
<@ToxicFrog>
Yeah. Different animal.
01:00
< MyCatOwnz>
Odd thought. QBASIC is just as powerful as Lisp.
01:00
<@ToxicFrog>
And somewhat more brain-eating.
01:00
<@ToxicFrog>
....ummmm.
01:00
<@ToxicFrog>
That depends on how you define "powerful".
01:00
<@ToxicFrog>
If you mean "can solve the same set of problems" I think that might be correct.
01:00
< MyCatOwnz>
Not quite.
01:00
<@ToxicFrog>
Actually, no.
01:01
<@ToxicFrog>
You can't solve the pocket problem in qbasic.
01:01
<@ToxicFrog>
Doesn't have first-class functions.
01:01
< MyCatOwnz>
In QBASIC you can output strings into a text file and subsequently have the interpreter run that file, giving you a pretty ugly hack to allow metaprogramming.
01:02
<@ToxicFrog>
Anyways.
01:02
<@ToxicFrog>
Continuations are where you save thread state and restore it later.
01:02
< MyCatOwnz>
You can then extend that to sort-of-first-class functions by putting separate functions into separate .BAS files, using global variables to pass parameters and accept results.
01:02
<@ToxicFrog>
Function closures are where a function carries the scope information it was declared in, not the scope it's called in.
01:02
< MyCatOwnz>
(Which is safe, 'cuz BASIC never had anything even *remotely* resembling a need for thread-safety.)
01:04
< MyCatOwnz>
I'm a little fuzzy on the concept of "functions" here.
01:04
<@ToxicFrog>
...'
01:04
<@ToxicFrog>
buh?
01:04
< MyCatOwnz>
Can these functions directly access variables in the scope of the place where you call them?
01:05
<@ToxicFrog>
No. This is true of pretty much all lexically scoped languages, though, not a property of function closures.
01:07
< MyCatOwnz>
Aight. So there's no difference, conceptually, between defining a C function and passing it around as-first-class using function pointers and declaring a (insert clever language here) function and passing it around via whatever method your functional-ish language supports?
01:07
<@ToxicFrog>
The term is "first-class function value" and no, there isn't. But again, this has nothing to do with function closures.
01:08 * ToxicFrog has no idea where you'
01:08
<@ToxicFrog>
re going with this.
01:08
< MyCatOwnz>
ToxicFrog: the brick wall I'm running up against here is that, in C, for example, any function I declare and then call has a scope that allows it to see *no* variables except for its arguments and the global variables of the program.
01:09
<@ToxicFrog>
Yes. This is because in C, you /cannot define functions except at global scope/
01:09
<@ToxicFrog>
You can't go, say:
01:09
<@ToxicFrog>
int foo() {
01:09
<@ToxicFrog>
int bar() {
01:09
<@ToxicFrog>
return 4;
01:09
<@ToxicFrog>
}
01:09
<@ToxicFrog>
// bar is a function that can only be called from inside foo
01:09
<@ToxicFrog>
return bar()+8
01:09
<@ToxicFrog>
end
01:09
<@ToxicFrog>
Err.
01:09
<@ToxicFrog>
Replace the end with }
01:09
<@ToxicFrog>
I switched to Lua for a moment.
01:10
< MyCatOwnz>
Whatev. It made sense. You're already doing something my language won't support, there's no need to pretend it's not pseudocode ;)
01:10
<@ToxicFrog>
Right.
01:10
< MyCatOwnz>
...fuck. Git!
01:10
<@ToxicFrog>
?
01:10
<@ToxicFrog>
So, depending on how you interpret it, either C doesn't have function closures, or it does, but this doesn't matter, because there's only one scope you can declare functions in and that's global.
01:11
< MyCatOwnz>
I now have an irresistible urge to find a language that lets me do that and play with it until I see how it affects programming style =D
01:11
<@ToxicFrog>
Lua or Scheme both let you do that.
01:11
<@ToxicFrog>
Indeed, in Lua it's fairly common to do, say
01:11
< MyCatOwnz>
ToxicFrog: so, in that code above, can bar() access variables in the scope of foo()?
01:11
<@ToxicFrog>
Yes.
01:12
<@ToxicFrog>
The trick is, say that bar() can be called from outside foo().
01:12
<@ToxicFrog>
It can still access foo's variables.
01:13
<@ToxicFrog>
A trivial example of this:
01:13
<@ToxicFrog>
function foo(n)
01:13
<@ToxicFrog>
local function tmp()
01:13
<@ToxicFrog>
return n+4
01:13
<@ToxicFrog>
end
01:13
<@ToxicFrog>
return tmp
01:13
<@ToxicFrog>
end
01:14
<@ToxicFrog>
> f = foo(8)
01:14
<@ToxicFrog>
> f()
01:14
<@ToxicFrog>
-> 12
01:14
<@ToxicFrog>
> g = foo(0)
01:14
<@ToxicFrog>
> g()
01:14
<@ToxicFrog>
-> 4
01:14
<@ToxicFrog>
> f()
01:14
<@ToxicFrog>
-> 12
01:15
<@ToxicFrog>
Even though you're calling the returned function from global scope, it still sees the n that's in foo's scope.
01:15
< MyCatOwnz>
Eh? You only showed foo() being called as parts of f() and g() there.
01:16
< MyCatOwnz>
I'm confused. The only things you appeared to show there were an inner function and an odd habit of currying.
01:16
<@ToxicFrog>
Eh?
01:16
<@ToxicFrog>
I'm confused now.
01:16
<@ToxicFrog>
Foo is not called as part of f or g.
01:16
<@ToxicFrog>
f and g are functions that are returned by foo.
01:17
< MyCatOwnz>
"> f = foo(8)" ---?
01:17
< MyCatOwnz>
"> g = foo(0)" ////?
01:18
<@ToxicFrog>
"call foo with the single argument 8. It will return a new function. Assign that function to the global variable f."
01:18
<@ToxicFrog>
As you can see from the definition of foo(). It defines and returns a new function.
01:20
<@ToxicFrog>
And, yes, currying is a use of function closures.
01:20
<@ToxicFrog>
But if you understand currying, how do you not understand function closures?
01:21
< MyCatOwnz>
Failure to understand here. "f = foo(8)" looks to me like nothing more sophisticated than saying "assign the return value from a call to foo() with the parameter 8 to some variable f"
01:21
< MyCatOwnz>
ToxicFrog: because currying is a rather simple special case.
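For reference, a minimal Lua sketch of currying by way of a closure (the add/add3 names are hypothetical, not from the discussion above):

    function add(a)
      return function(b)
        return a + b   -- b comes from this call; a is captured from the enclosing call to add
      end
    end

    add3 = add(3)
    print(add3(4))     --> 7
    print(add(10)(5))  --> 15

The inner function is simply a closure over a, which is the "simple special case" being referred to.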
01:22
<@ToxicFrog>
That is, indeed, exactly what "f = foo(8)" means.
01:23
< MyCatOwnz>
K, so that defines a *function* f() which returns the return value of foo(8) when called, right?
01:24
<@ToxicFrog>
...no.
01:24
<@ToxicFrog>
It /calls/ the return value of foo(8) when called.
01:24
<@ToxicFrog>
Because it is the return value of foo(8)
01:24
< MyCatOwnz>
!
01:24
<@ToxicFrog>
f = foo(8); f(); is equivalent to foo(8)()
01:24
<@ToxicFrog>
?
01:25
< MyCatOwnz>
Ahhhhhhhh! I see! So foo()'s "return tmp" line is referring to something equivalent to a function pointer (or, Hell, a reference) to tmp()?
01:25
<@ToxicFrog>
...
01:25
<@ToxicFrog>
Foo declares tmp just a few lines before that!
01:26
<@ToxicFrog>
It declares a function, tmp, and then returns it!
01:26 * Ev3 bites ToxicFrog.
01:26
< MyCatOwnz>
My brain short circuited and mis'terped that as "return tmp(n);"
01:26
< MyCatOwnz>
Which really *didn't* make sense.
01:27
<@ToxicFrog>
...yeah.
01:27
< Ev3>
RAWR!
01:27
<@ToxicFrog>
No, it's returning the function tmp itself.
01:27 * Ev3 pouts and hops off.
01:27
<@ToxicFrog>
Which when called returns the value of n, even though n is no longer in scope by the time you call it.
01:27 * ToxicFrog eyebrows at Ev3.
01:28
< MyCatOwnz>
Yeah, the scoping is a bit alien there.
01:28
<@ToxicFrog>
...how so?
01:28
< Ev3>
You are supposed to say something funny to cheer me up which you always did.
01:28
<@ToxicFrog>
It's standard lexical scope except that it explicitly has "local" in front.
01:28
<@ToxicFrog>
Aaw.
01:28
<@Reiver>
(I think he's busy Eve)
01:28
< Ev3>
But nevermind I can see you are busy, carry on.
01:28 * MyCatOwnz hugs Ev3.
01:28 * ToxicFrog gives Ev3 a candied metatable
01:28 * Ev3 hugs MyCatOwnz
01:29
< MyCatOwnz>
Ev3: sorry, I've distracted him. Programmer is busy educating n00blet. :/
01:29
< Ev3>
It's ok.
01:29
< Ev3>
It's not like I'm anyone important anyway :p
01:29
< Ev3>
I just linger here because.. well, people are here.
01:29
< Ev3>
And I get to ask the most inane and stupid questions.
01:30
< Ev3>
But carry on, you were talking about tables?
01:30
<@ToxicFrog>
Function closures, actually.
01:30
< MyCatOwnz>
ToxicFrog: argh. T'other thing that was breaking my head was the question of where n was stored, given that it was out of scope by the time you called f() and g(). But yeah, obviously the constant value of n must be stored in the definitions of f() and g(). Aight.
01:30
<@ToxicFrog>
Nope, not the constant value.
01:30
<@ToxicFrog>
f() can actually assign to its local version of n.
01:31
<@ToxicFrog>
What actually happens is that rather than returning the raw function, it wraps the function in references to the scope it was declared in.
01:31
<@ToxicFrog>
So inside f(), n is actually a reference to a candied instance of foo's scope.
01:32
<@ToxicFrog>
Err. Reference /into/. It refers to the variable n in that scope, not to the scope itself.
01:32
< MyCatOwnz>
...candied? Fancy term for, "f() has created its own copy of n?"
01:32
<@ToxicFrog>
But this is interpreter implementation details and mostly you can just wave your hands and say "it's stored wherever the interpreter thinks it should be"
01:32
<@ToxicFrog>
Yes.
01:33
<@ToxicFrog>
And when foo() gets called again, n has a different value for that call, but this doesn't affect the candied scope that f carries with it.
01:33
< MyCatOwnz>
ToxicFrog: well, yes, there is that. OTOH, I can't reliably use anything I can't reliably understand, and I have a *very* hard time understanding anything that I haven't a vague idea of how the machine achieves it.
01:35
< MyCatOwnz>
ToxicFrog: so does f's candied copy of n act like a static variable? i.e. if f() includes some code that occasionally alters its copy of n according to some condition, will the change in its candied version of n's value propagate to all subsequent calls to f()?
01:35
< MyCatOwnz>
*static local variable, I mean.
01:35
<@ToxicFrog>
Yes.
01:35
<@ToxicFrog>
Exactly.
01:35
< MyCatOwnz>
Woohoo!
01:36
< MyCatOwnz>
ToxicFrog: allow me to link you to a page that perfectly sums up how I feel about an awful lot of high level languages: http://c2.com/cgi/wiki?BoyThisStuffMakesMeFeelStupid
01:36
<@ToxicFrog>
So if tmp was, say, "n = n + 1; return n;" then, since f = foo(8), calling f multiple times would return 9,10,11,12,...
01:36
< MyCatOwnz>
Some of the finer details are different, but if I wrote that, the title would've been exactly the same. =)
01:37
< MyCatOwnz>
ToxicFrog: and "g = foo(8); print("%d,%d,%d,%d",g(),g(),g(),g());" would spit out 8,9,10,11. Makes sense.
01:38
< MyCatOwnz>
I can see why they called that feature candying. It sounds pretty delicious.
01:39
<@ToxicFrog>
Actually, I just pulled "candy" out of the air.
01:39
<@ToxicFrog>
As far as I know the process of giving the closure a static copy of the scope doesn't have an official term.
01:39
< MyCatOwnz>
Ah. So *that*'s how technical terms are derived.
01:40
< MyCatOwnz>
So, a question about a possible difference between function pointers and the first-class function variables in Lua, etc. Can one change the target of a function variable-thingy?
01:40
< MyCatOwnz>
Or are they fixed, à la references?
01:40
<@ToxicFrog>
Ok. We're having a bunch of terminology collisions here.
01:41
<@ToxicFrog>
- C++ "references" are a /specific type/ of constant reference. C/++ pointers are also a kind of indirect reference.
01:41
<@ToxicFrog>
- variables do not have types, only values do. You can assign, and reassign, any value to a variable you wish. This includes assigning something else to a variable that currently holds a function reference.
01:42
<@ToxicFrog>
- all functions are passed by reference, not by value.
01:42
< MyCatOwnz>
"variables do not have types, only values do" <-- is that a feature of Lua's type system, or does it appear often in other HLLs?
01:44
< MyCatOwnz>
ToxicFrog: so if, in that above piece of code, you did "f = g", f() and g() would both return values from the same ascending set, regardless of which order you called which one in, and the resources allocated back when "f = foo(8)" was defined will be garbage collected?
01:45
<@ToxicFrog>
Yes. Exactly.
01:45
<@ToxicFrog>
And it's a feature of dynamically-typed, weakly-bound languages in general, not just Lua.
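A sketch of the f = g question above, still assuming the counter version of foo from the earlier sketch; function values are references, so assignment aliases the closure rather than copying it:

    f = foo(8)
    g = foo(0)
    f = g          -- f and g now name the *same* closure
    print(f())  --> 1
    print(g())  --> 2    (both names advance the one shared n)
    -- nothing refers to the closure built by foo(8) any more, so it and the
    -- n it captured are now eligible for garbage collection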
01:47
< MyCatOwnz>
Cool. What does the term, "weakly-bound," refer to, please? The extremely late binding that one gets when playing with references to functions (such as f() and g())?
01:47
<@ToxicFrog>
Types don't bind to variables. If I assign an int to x, this doesn't force it to always be an int.
01:48
<@ToxicFrog>
I /think/ that "dynamically typed" implies "weakly bound", but I can't prove it.
01:48 ChalcyMusic is now known as Chalcedon
01:48
< MyCatOwnz>
Excuse me for sounding Neo-ish, but, "Whoa."
01:48
<@ToxicFrog>
This is just a rephrasing of "variables don't have types, only values do"
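A tiny Lua illustration of "variables don't have types, only values do":

    x = 42;      print(type(x))  --> number
    x = "hello"; print(type(x))  --> string
    x = print;   print(type(x))  --> function   (x now holds the print function itself)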
01:48
< MyCatOwnz>
ToxicFrog: that's okay. I think that sex is more fun than logic, but I can't prove that, either.
01:48
<@ToxicFrog>
Heh.
01:49
<@ToxicFrog>
So. That's function closures, and a bit more besides.
01:50
< MyCatOwnz>
If I ever meet you, I'm going to fucking kill you.
01:50
<@ToxicFrog>
And declaring functions inside functions is a common practice in Lua. It comes in handy.
01:50
<@ToxicFrog>
...wha?
01:50
< MyCatOwnz>
Cirrhosis of the liver, you see.
01:50
<@ToxicFrog>
...plz to be making sense now?
01:50
< MyCatOwnz>
I must owe you so much booze by now... =D
01:50
<@ToxicFrog>
Heh. I don't drink, so~
01:51
< MyCatOwnz>
Heh. I'd have paid money to see the look on your face just then. =)
01:55 * ToxicFrog ponders how to implement downcasts in Lua.
01:56
<@ToxicFrog>
Or, for that matter, if I should at all.
02:01
< MyCatOwnz>
For the purposes of containers? I don't think Lua even really makes that necessary, does it?
02:01 Reiver is now known as ReivOut
02:01
<@ToxicFrog>
Containers? Expand please?
02:01
< MyCatOwnz>
I can't think of any uses of downcasting that're actually necessary in any non-C++ language.
02:02
<@ToxicFrog>
Well, it's an actual transformation, not a reinterpret_cast<>
02:02
<@ToxicFrog>
Eg, taking a GenericEvent and turning it into a PlayerJoinEvent, say, which is a subclass of GenericEvent.
02:03
<@ToxicFrog>
Basically, I have something on the wire that's a serialized Event, and there's two ways I could handle turning this into an Event -
02:04
<@ToxicFrog>
evt = GenericEvent:New()
02:04
<@ToxicFrog>
evt:Deserialize(buf) -- evt is now a PlayerJoinEvent
02:04
<@ToxicFrog>
or:
02:04
<@ToxicFrog>
evt = GenericEvent:Deserialize(buf) -- which is a static method that creates and returns a new Event of the appropriate subclass
02:05
< MyCatOwnz>
Ah, yeah, makes sense.
02:06
<@ToxicFrog>
I think I will go for the latter, as it's both a simpler implementation and more readable.
02:06
< MyCatOwnz>
I see. So you deserialise an object at the top of the class hierarchy and it has a field that identifies it as being an instance of something lower down in the class hierarchy.
02:07
<@ToxicFrog>
Yeah. Doing this as a static method in the superclass seems the way to go.
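One possible shape for that static-method approach, sketched in plain Lua. This is only an illustration under assumed names and an assumed wire format (a type tag at the front of the buffer, plus a subclass registry); it is not ToxicFrog's actual object system:

    GenericEvent = { subclasses = {} }

    function GenericEvent:Register(name, class)
      self.subclasses[name] = class
    end

    function GenericEvent:Deserialize(buf)
      -- assume the serialized form starts with the event type name, e.g. "PlayerJoinEvent:..."
      local typename, payload = buf:match("^(%w+):(.*)$")
      local class = self.subclasses[typename]
                    or error("unknown event type: " .. tostring(typename))
      return class:New(payload)   -- the subclass parses its own payload
    end

    -- each subclass registers itself once, e.g.
    --   GenericEvent:Register("PlayerJoinEvent", PlayerJoinEvent)
    -- after which callers only ever write:
    --   evt = GenericEvent:Deserialize(buf)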
02:07
< MyCatOwnz>
Well, you do have one massive advantage here. Since you wrote the Lua object-orientation system yourself, in Lua, there's nothing stopping you doing *completely* novel things with it and even bypassing it altogether.
02:08
< MyCatOwnz>
Hear that noise? That's the sound of a million C++ programmers' eyeballs exploding with sheer, blind envy.
02:08 * Janus goes to get more eyes.
02:08
< MyCatOwnz>
Janus: you do that, I'll get the BBQ fired up.
02:09
<@ToxicFrog>
Heh.
02:10
<@ToxicFrog>
Still, my muse appears to have deserted me.
02:11
< MyCatOwnz>
ToxicFrog: sorry for sapping your brains with stuupid n00blet questions =)
02:11
<@ToxicFrog>
No, this happened before you started asking questions.
02:11 * ToxicFrog decides to contemplate the nature of the Plotkin server implementation.
02:12
< MyCatOwnz>
Oh. Well, I apologise for indulging your urge to procrastinate when faced with a seemingly intractable problem.
02:12 * McMartin cameos
02:12
<@ToxicFrog>
Thing is, it's not intractable. I can think of several different solutions. I'm just having trouble converting them into the intermediate thought-structures from which I actually lay out code.
02:12 * ReivOut cremes!
02:12
<@McMartin>
MCO, your problem with function closures is that you're idea of "procedure call" is sneaking in an idea of "call stack", and the existence of closures breaks call stack discipline.
02:13
<@McMartin>
Er, your.
02:13
<@ReivOut>
(Cameo creme, leave the innuendo at the door, thankyou I'll be here all night)
02:13
<@ReivOut>
*flee*
02:13
< Janus>
Take a break, paint a nude painting, sculpt a nude sculpture, write a nude play. That way, you'll be fresh-ful of new and bothered ideas.
02:15
<@ToxicFrog>
Hmm. Right. The problem with contemplating the original server code is that neither Plotkin nor I actually wrote good code there ;.;
02:16
<@McMartin>
Closures can also be modeled by explicitly allocating the environment on the heap and carrying pointers around, so there's usually a straightforward translation from closure-based-functional to OO.
02:17
<@McMartin>
Not always the reverse, because functional languages aren't big on letting you write to variables, and the scoping gets complicated.
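The translation being described, sketched in Lua terms: the closure's environment becomes an explicit, heap-allocated table, and the "function value" is carried around as a code-plus-environment pair (names here are illustrative only):

    -- the "code" part: an ordinary top-level function that takes its
    -- environment explicitly instead of closing over anything
    local function step(env)
      env.n = env.n + 1
      return env.n
    end

    -- the "closure": an explicit environment object plus the code to run on it
    function make_counter(start)
      return { env = { n = start }, call = step }
    end

    c = make_counter(8)
    print(c.call(c.env))  --> 9
    print(c.call(c.env))  --> 10

Which is more or less an object with one field and one method — the closure/OO correspondence being pointed at.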
02:17
< MyCatOwnz>
McMartin: well yeah, 'cuz that's how my computer does it.
02:17
<@McMartin>
No, the computer only does it because the C/Pascal/ALGOL compilers say to.
02:17
<@McMartin>
Stack frame management is generally not done at the hardware level.
02:18
<@McMartin>
(Closures give you a call tree instead of a call stack, and new calls will not necessarily append to the branch you're on)
02:18
< MyCatOwnz>
Yes, of course. The C compiler says so. That's why the CPU has a special register for the stack pointer and hardware-level support for C/Pascal/ALGOL-like function calls, with explicit "push" "pop" and "call" instructions.
02:18
< MyCatOwnz>
McMartin: I mean, what?
02:19
<@ToxicFrog>
MCO: /stack/ management has hardware support.
02:19
<@ToxicFrog>
However, actually dividing the stack into frames, determining what goes into those frames, etc is the compiler's job.
02:20
<@McMartin>
And there hasn't been hardware support for call-based register allocation since the VAX.
02:20
<@ToxicFrog>
Indeed.
02:20
<@McMartin>
Also, if you're programming a MIPS based machine like the PS1 or PS2, even return addresses are passed in registers, and if you want to have a call depth of more than 1, you have to code in a stack discipline by hand.
02:21
<@McMartin>
Also also, PUSH, POP, etc., can be used for operand stacks for direct compilation of languages like Java, Forth, or PostScript.
02:21
<@McMartin>
Also also also, IIRC, CALL is a macro on the x86.
02:21
< MyCatOwnz>
ToxicFrog: OTOH, if I sit here all day trying various option combinations to a C compiler, I could eventually bludgeon it into not producing stack frames at all, but rather doing it the old assembly way - PUSH and POP the requisite parameters and returns for each function without bothering to include any pointers into executable code beyond those that the CPU leaves behind when it hits a "call" instruction.
02:22
<@McMartin>
No, that's still a stack frame, it's just implicit in the PUSH and POP instructions.
02:22
< MyCatOwnz>
McMartin: you sure about that? Last time I checked, it was a single instruction on the Z80, and I know that *that* is 8080-compatible, instruction for instruction.
02:23
< MyCatOwnz>
(CALL, I'm referring to.)
02:23
<@McMartin>
The version of CALL that MASM uses takes arbitrary numbers of arguments.
02:23
<@McMartin>
That one, at least, is a macro.
02:23
<@McMartin>
That particular call discipline, however, is also Really Rare even on old machines unless they were seriously short on registers.
02:24
<@McMartin>
Which, admittedly, the x86 was.
02:24
< MyCatOwnz>
McMartin: yes, that'll be a macro to extend "CALL d(a,b,c)" into "PUSH c; PUSH b; PUSH a; CALL d" (or in some other order).
02:24
<@McMartin>
Yeah. That's where it's going, though.
02:24
<@McMartin>
There's nothing stopping a better compiler from deciding to pass the first few arguments in EAX and EBX etc.
02:25
< MyCatOwnz>
Uhuh. That, however, is merely an alteration to the parameter passing convention.
02:25
<@McMartin>
And on, say, the PS1/PS2, where there are 32 general purpose registers, most smallish routines don't bother touching memory at all unless they have to make calls of their own (since they have to cache the return address).
02:25
<@McMartin>
Indeed.
02:26
<@McMartin>
All that's necessary to make LISP calling conventions work is to reassign SP before the call.
02:26
< MyCatOwnz>
Where to head to, and where to return to, is still handled by CALL and RET, using the stack.
02:26
<@McMartin>
When accessing the stack, you generally don't use POP, though.
02:26
<@McMartin>
You use SP+x, where x is some integer.
02:27 Janus [~Cerulean@Nightstar-10302.columbus.res.rr.com] has quit [Ping Timeout]
02:27
<@McMartin>
If you want to go the OO-translation route, you put the environment for the call on the heap somewhere, and set the "stack pointer" to that.
02:27
<@McMartin>
You can generally be more structured than that, but the first LISP implementations didn't bother.
02:28
<@McMartin>
Return addresses are always following stack discipline, note.
02:28
<@McMartin>
Which is why in quite a few languages you don't actually want to translate function calls into CALL instructions at all times.
02:28
<@McMartin>
Due to some icky bits of the C ABI it's mandatory to in C, but this is not strictly necessary.
02:29
< MyCatOwnz>
So, wait. "<@McMartin> No, the computer only does it because the C/Pascal/ALGOL compilers say to."
02:29
<@McMartin>
(In particular, CALL d; RET can be optimized down to JMP d)
02:30
<@McMartin>
Yeah, that's in reference to the local variables in the procedure.
02:30
< MyCatOwnz>
It's merely that the computer does it that way because the C compiler doesn't tell it to use any more complicated form than what the hardware supports with near-zero complexity.
02:30
<@McMartin>
Implementing the C calling convention directly in RAM on a 6502 is a total nightmare, and way harder than many other approaches.
02:30
<@McMartin>
For a silly example, a Fortran 77 compiler won't keep variables on the stack, but instead in the data segment.
02:31
<@McMartin>
This has the effect (specified by the language) of making every single local variable "static" in C terms.
02:31
< MyCatOwnz>
Whereas, if you're brave enough to play with fire, you can implement other, radically more powerful, calling conventions with only marginally more work?
02:31
<@McMartin>
C stacks are only remotely efficient on machines for which MOV reg, mem[reg+constant] is a single instruction.
02:32
<@McMartin>
It's hard to call it "playing with fire" when it predates C by two decades.
02:32
<@McMartin>
Most modern hardware has a MOV reg, mem[reg+constant] instruction, idly, but most hardware was designed well after C was the Only Language Used For Systems Programming.
02:32
<@McMartin>
Neither exists in a vacuum.
02:33
< MyCatOwnz>
McMartin: playing with embers, then. But the point still stands, we're talking about around two or three well thought-out instructions of overhead to a function call in order to get a cleverer convention.
02:33
< MyCatOwnz>
Would that be the logic behind designing and building a LISP machine?
02:33
<@McMartin>
Uh, so, LISP itself predates nearly every language except FORTRAN, and it was first implemented on the IBM 704.
02:34
<@McMartin>
Its assembler conventions and memory model have a lot to do with why a lot of LISP's primitives have silly names.
02:34
< MyCatOwnz>
That if you wrote the systems software to use LISP's calling convention rather than C's, it becomes nearly-as-efficient as C for tasks in C's idealised problem domain and much more efficient than C for tasks outside of it?
02:34
<@McMartin>
In particular, "car" and "cdr" are so named because they're mnemonics for Contents of Address/Decrement part of Register.
02:34
<@McMartin>
It's really more ML's calling convention that's the interesting one on x86.
02:35
< MyCatOwnz>
McMartin: would this have something to do with ML being apparently the fastest language on the planet or something crazy like that?
02:35
<@McMartin>
Straightforward ML code translated directly to C will blow out the stack on the C implementation but not the ML one.
02:35
< MyCatOwnz>
I see.
02:35
<@McMartin>
No, not Machine Language. Meta-Language.
02:35
<@McMartin>
Basically, "return f()" statements translate to GOTOs when compiling.
02:35
< MyCatOwnz>
McMartin: yes, I know what ML is. ML, CAML, OCAML.
02:36
< MyCatOwnz>
(I wasn't thinking of assembler, heh.)
02:36
<@McMartin>
Yeah, so the bit in OCaml that will reliably beat C++ is code with exceptions, and that's because of the conventions for saving variables (and not needing precise destructors)
02:36
<@McMartin>
But OCaml can throw and catch an exception an arbitrary number of call frames in two instructiosn
02:36
<@McMartin>
instructions.
02:36
< MyCatOwnz>
Erm, no.
02:37
<@McMartin>
Uh, I've looked at the generated assembler.
02:37 Stephenie [Safyra@Nightstar-25904.ok.ok.cox.net] has joined #code
02:37
< MyCatOwnz>
The bit in OCaml that lets it beat C++ is that OCaml isn't fucking insane. The speed is a side issue. =)
02:37
<@McMartin>
It retrieves the catcher's stack pointer (one instruction), and jumps to the cached handler, which is relative to that stack pointer (two).
02:38
<@McMartin>
This also beats setjmp/longjmp on architectures with reasonable numbers of registers, but it gets Hard To Test Fairly very fast.
02:38
< MyCatOwnz>
McMartin: so when people make statements like, "If we put as much time into optimising LISP as had been put into optimising C, LISP would be just as fast," they really weren't kidding...
02:39
< MyCatOwnz>
Wait, clarify. Do you mean setjmp/longjmp gets really hard to test really fast, or the ML exception handling gets hard to test?
02:39
<@McMartin>
ML has some advantages over LISP, but that's beyond the scope of it.
02:39
<@McMartin>
Comparing setjmp/longjmp to ML exceptions gets terribly unfair to one or both quickly.
02:40
<@McMartin>
In particular, ML can jump up stack frames with total impunity because it doesn't have to worry if some intermediate call malloc()ed something.
02:40
< MyCatOwnz>
Because calling setjmp() is like asking the computer to think very hard to see if it can come up with any excuses to make your code explode, all over the place.
02:40
<@McMartin>
Because ML is garbage-collected.
02:40
<@McMartin>
That's the other advantage of non-C/C++ exceptions; they actually work~
02:41
<@McMartin>
The key thing in all of this is just that in C and especially C++, leaving a function requires doing Work, and it doesn't in the garbage-collected languages.
02:41
< MyCatOwnz>
Okay, so there's two questions I have to bomb you with here.
02:41
<@McMartin>
In C++, "}" can perform arbitrary amounts of computation, because there's some unknown and unbounded number of destructors that Need Calling Right Now.
02:41
<@McMartin>
This is actually an advantage in its own right.
02:42
< MyCatOwnz>
C (and Pascal) have the advantage in systems programming that a void func(void) has exactly the same calling convention as a hardware interrupt.
02:42
<@McMartin>
... not on MIPS it doesn't.
02:42
<@McMartin>
I don't know about X86, but MIPS and 6502 both return differently from interrupts than from calls.
02:42
< MyCatOwnz>
Also, the CALL/IRETting that gets done on hardware interrupts can be preemptible, quite effortlessly.
02:43
< MyCatOwnz>
McMartin: I apologise for thinking in x86/z80 assembly (ptoooie!), but those're the only two machine languages I'm even remotely familiar with.
02:43
<@McMartin>
Um. Assuming you don't re-enable interrupts before you've finished safely backing up your registers.
02:44
<@McMartin>
Fair enough. 6502 and MIPS are the only ones I know, so our ideas of "natural" are likely to be at odds~
02:44
< MyCatOwnz>
Except for one thing - I admit quite freely that my idea of "natural" is fucking baroque =D
02:45
<@McMartin>
MIPS was designed on the principle of "how few primitives do we really need, anyway", which is why there isn't, for instance, an actual dedicated stack pointer.
02:45
<@McMartin>
There is a dedicated return-address register and a dedicated interrupt-return-address register, but that's it.
02:45
<@McMartin>
In any event, continue.
02:46
< MyCatOwnz>
The interrupt handler needs to back up whatever was in the CPU's registers (whichever ones it decides to use, anyway) when it was called and then restore those registers before IRETting. That way, it's (kinda) infinitely reentrant and it'll never blow a running process away.
02:46
<@McMartin>
Right. We're taking for granted here that if the interrupt handler writer was stupid and re-enabled interrupts before he finished backing it up, he's asking to be pwn3d by an interrupt coming in before he finished saving it.
02:47
<@McMartin>
On MIPS, this means you can actually trash your own return address and lose the process entirely.
02:48
<@McMartin>
But yeah, assume that no errors were made.
02:48
< MyCatOwnz>
McMartin: except that by making itself preemptible it doesn't need to go to the trouble of reenabling interrupts. A higher priority interrupt can interrupt *it*, and when that finishes the first interrupt handler will be returned to, get its job done, and then return back to the user's program with all registers intact - transparently, even.
02:49
<@McMartin>
Yeah, so, that's not true for MIPS, because the act of letting an interrupt happen overwrites a register.
02:49
< MyCatOwnz>
Crikey. Which register does it overwrite?
02:50
<@McMartin>
The "Interrupt return address" register.
02:50
<@McMartin>
Processing an interrupt overwrites that register, disables all interrupts, then hands control to the handler.
02:51
<@McMartin>
The handler does the caching it needs, then re-enables interrupts if it wants to be pre-emptible.
02:51
< MyCatOwnz>
Ah. On an 8080/86-ish chip, the PC is saved to the stack before jumping to the interrupt handler, so that the interrupt handler can return to the same place safely with IRET.
02:51
<@McMartin>
It has to disable them again for the restore of that one register, and then there's a special trick for returning from the interrupt and re-enabling interrupts simultaneously that involves intentionally abusing the instruction fetch pipeline.
02:52
< MyCatOwnz>
And you always PUSH the contents of any registers you need to use inside the interrupt handler onto the stack and POP them back later, to avoid clobbering the previous program's data.
02:52
<@McMartin>
(MIPS has no hardware stack.)
02:52
< MyCatOwnz>
So, question.
02:52
<@McMartin>
Right.
02:52
< MyCatOwnz>
C is basically an overgrown PDP-11 assembler.
02:53
<@McMartin>
With a lot of PDP-11 features stripped away.
02:53
<@McMartin>
Yeah.
02:53
<@McMartin>
(6502 is basically a stripped-down and 8-bit version of the PDP-10)
02:53
<@McMartin>
Also, that's not a question
02:53
< MyCatOwnz>
Which chip is more similar to a PDP-11 CPU? The 6502 &| MIPS chips, or the 8080/8086 ICs?
02:53
<@McMartin>
MIPS is nothing like it, MIPS being the first RISC chip ever designed.
02:54
<@McMartin>
I don't know the PDP-11, but I know the 6502 was inspired by the earlier PDPs.
02:54
<@McMartin>
You'd have to find a hardware historian.
02:54
<@McMartin>
The question I think you're leading towards, however...
02:55
<@McMartin>
the answer would be "A non-stack-based language running on any such system would use its hardware stack, if any, solely for storing return addresses."
02:55
<@McMartin>
"Everything else would be stored in The Rest Of Memory."
02:55
< MyCatOwnz>
I get the impression that C might feel a lot less fucked-up to me because I'm used to x86 hardware, which is vaguely wrapped around some of the facets of the C language.
02:55
<@McMartin>
C is designed for a generic stored-program computer, actually.
02:56
<@McMartin>
There's some concept of procedure call for any hardware, and hardware that doesn't have it can fake it.
02:56
< MyCatOwnz>
McMartin: with its reliance on the stack? Yes, it makes sense on any random von Neumann machine, but I suspect it's only worthwhile on ones with lots of stack support in the hardware.
02:56
<@McMartin>
C only demands that each invocation of a function get its own variables.
02:57
<@McMartin>
You could, in theory, write a C compiler that malloc()ed space for local variables on each call and free()ed on return.
02:57
<@McMartin>
And then use the stack only incidentally as a side effect of CALL statemetns.
02:57
<@McMartin>
statements.
02:57
< MyCatOwnz>
True dat. But in practice you don't. In practice, you write a C compiler that allocates space for local variables by moving the stack pointer up and down.
02:58
<@McMartin>
On the 6502, the stack is (a) fixed in memory and (b) 256 bytes in size.
02:58
<@McMartin>
I suspect that C compilers for the 6502 build their own stack at, say, $FFFF down.
02:58
< MyCatOwnz>
McMartin: hahahahah, sweet! That explains a lot!
02:59
<@McMartin>
However, it probably wouldn't bother intermingling return addresses with the local variables, unless they wanted to recurse more than 127 times.
02:59
<@McMartin>
128 nested JSR statements will break your computation on a 6502, so don't do that~
03:00
<@McMartin>
Noticeably less than that if it's a 6502 in an Atari 2600, which has 128 bytes of RAM and so mirrors its RAM four times from $00-$01FF
03:00
< MyCatOwnz>
...?
03:00
<@McMartin>
Not all the address lines coming out of the chip are used.
03:00
< MyCatOwnz>
How much video memory did the Atari 2600 have?
03:01
<@McMartin>
It didn't. You had to cycle count and drive the TV's beam into HBLANK and VBLANK by hand.
03:01
< MyCatOwnz>
How on *earth* (obscenely clever hackery aside) do you manage to animate anything on that?
03:01
<@McMartin>
(Also, 32 bytes for background data and the like; those were, strictly speaking, I/O registers, not memory)
03:02
< MyCatOwnz>
Sounds like the upper limit of your hardware resources would be, ummm... Space Invaders or Pong.
03:02
< MyCatOwnz>
...oh. Oh, yeah.
03:02
<@McMartin>
You've seen Solaris screenshots, right?
03:02
<@McMartin>
One "advantage" of this is that your "sprites" were 8 units wide and as tall as you could get away with
03:02
< MyCatOwnz>
Heh, sweet.
03:02
<@McMartin>
"Units" could be made larger for bigger sprites, too.
03:03
<@McMartin>
http://www.atariage.com/screenshot_page.html?SoftwareLabelID=450
03:03
<@McMartin>
They got pretty good at driving the hardware late in its career.
03:03
<@McMartin>
Those parallax lines accelerated as they got closer to you, too.
03:03
<@McMartin>
And now, I must away again.
03:04
< MyCatOwnz>
That is the beautiful thing about consoles.
03:04
< MyCatOwnz>
Fixed hardware platforms means that some of the devs eventually become *obscenely* good at hacking it around, leading to people achieving effects on PS2's that're hard to match on far more modern PC graphics cards. ^_^
03:06 Thaqui [~Thaqui@124.197.36.ns-12825] has quit [Ping Timeout]
03:17
< MyCatOwnz>
...I have a screwdriver.
03:17 * MyCatOwnz cackles.
03:22 Thaqui [~Thaqui@124.197.9.ns-13331] has joined #code
03:45 Takyoji [~Takyoji@Nightstar-25280.dhcp.roch.mn.charter.com] has joined #code
03:46
< Takyoji>
How can you have visitors connect to your website over HTTPS?
03:46 ReivOut is now known as Reiver
03:50
< MyCatOwnz>
Takyoji: Apache?
03:50
< Takyoji>
How though?
03:50
< MyCatOwnz>
A section in httpd.conf to handle the HTTPS connections.
03:50
< Takyoji>
hmm
03:51
< MyCatOwnz>
Couldn't tell you offhand what to do. You're probably using apache 2 and I've only played with the 1.3 branch.
03:53 MyCatOwnz [~mycatownz@Nightstar-379.dsl.in-addr.zen.co.uk] has quit [Quit: Sleepin'. Nini.]
03:53 You're now known as TheWatcher
03:53
<@Reiver>
So!
03:54
<@Reiver>
Function closures or some such thing.
03:54
<@Reiver>
Today, or another day?
03:54
<@ToxicFrog>
I'm dead on my feet, so...another day?
03:54
<@ToxicFrog>
There's a bunch of backscroll about it some three hours ago, though.
03:56
<@Reiver>
NP!
03:56
<@Reiver>
Nini, Mr TF.
03:56
<@Reiver>
Did you have a good thanksgiving?
03:58
<@ToxicFrog>
You asked me that earlier, I think, and then as now my response is "yes, but I'm in Canada, it was a month ago"
03:59
< Takyoji>
GAH! I hate the damned company that hosts our website.. $20 a month and hardly allows anything!
03:59
< Takyoji>
The free version of Tripod.lycos.com is practically better
04:00
< Takyoji>
too bad my brother isn't allowing me to run our own webserver.
04:00
<@ToxicFrog>
http://www.dreamhost.com/hosting.html
04:01
< Takyoji>
oh lord
04:01
<@ToxicFrog>
Take a look at the $20/mo "Code Monster" column.
04:02
<@ToxicFrog>
And now, sleep.
04:04
< Takyoji>
holy lord
04:07
<@Reiver>
Oh right.
04:07 * Reiver apologises.
04:07
<@Reiver>
For some reason I keep thinking you're in Seattle, TF. >.<
04:08
<@Reiver>
(Yes, I know that's where McM and Pi And Raif are. I think that's the problem~)
04:27
<@McMartin>
I'm not in Seattle.
04:31
<@Reiver>
Oh.
04:31 * Reiver frowns.
04:31
<@Reiver>
Did you used to be?
04:36
<@McMartin>
No.
04:43
<@Reiver>
Hrm.
04:43
<@Reiver>
Where are you then? I coulda sworn...
04:44
<@McMartin>
Various cities in California.
04:44
<@Reiver>
Hrm.
04:44
<@Reiver>
OK.
04:44
<@Reiver>
I apologise, I have no idea why I thought you were in Seattle, I just did.
04:52 Janus [~Cerulean@Nightstar-10302.columbus.res.rr.com] has joined #Code
05:06 * Janus wonders if anyone's still kicking it.
05:11 * Stephenie blinks
05:15 * Janus wonders how one goes about upgrading Memory...
05:16 ReivZzz [~reaverta@IRCop.Nightstar.Net] has joined #Code
05:17 mode/#code [+o ReivZzz] by ChanServ
05:17
< Janus>
It's the easiest, cheapest way to increase performance, but I'm not quite sure if I have to purchase the same type that came with it, or if I can mix some 512s with the 256s.
05:17 Reiver [~reaverta@IRCop.Nightstar.Net] has quit [Killed (ReivZzz (Ghostbusting.))]
05:17 ReivZzz is now known as Reiver
05:21 * Janus likes to mix batteries around so they colour coordinate, so it's nice that ram sticks look the same.
05:32 Janus [~Cerulean@Nightstar-10302.columbus.res.rr.com] has quit [Quit: bed. kthxbye]
05:35 Chalcedon [~Chalceon@Nightstar-869.bitstream.orcon.net.nz] has quit [Killed (NickServ (GHOST command used by Forj))]
05:36 Chalcy [~Chalceon@Nightstar-869.bitstream.orcon.net.nz] has joined #code
05:36 mode/#code [+o Chalcy] by ChanServ
05:36 Chalcy is now known as Chalcedon
06:19 Reiver is now known as ReivOut
06:30 EvilSLEPLord is now known as EvilDarkLord
06:46 You're now known as TheWatcher[afk]
07:17 Takyoji [~Takyoji@Nightstar-25280.dhcp.roch.mn.charter.com] has quit [Quit: Leaving]
07:21 Chalcedon [~Chalceon@Nightstar-869.bitstream.orcon.net.nz] has quit [Ping Timeout]
07:21 Chalcedon [~Chalceon@Nightstar-869.bitstream.orcon.net.nz] has joined #code
07:21 mode/#code [+o Chalcedon] by ChanServ
07:23 ReivOut is now known as Reiver
08:03 AnnoDomini [~fark.off@Nightstar-29172.neoplus.adsl.tpnet.pl] has joined #Code
08:34 Chalcedon is now known as ChalcyKitty
08:50 ChalcyKitty is now known as Chalcedon
09:15 Chalcedon is now known as ChalcyZzz
09:33 timelady [~romana@Nightstar-15011.lns7.adl2.internode.on.net] has joined #Code
10:04 You're now known as TheWatcher[wr0k]
10:06 timelady [~romana@Nightstar-15011.lns7.adl2.internode.on.net] has quit [Quit: sleeeppp]
10:32 Thaqui is now known as ThaquiSleep
10:39 ThaquiSleep [~Thaqui@124.197.9.ns-13331] has quit [Client exited]
11:41 EvilDarkLord [althalas@Nightstar-17046.a80-186-184-83.elisa-laajakaista.fi] has quit [Ping Timeout]
11:43 EvilDarkLord [althalas@Nightstar-17046.a80-186-184-83.elisa-laajakaista.fi] has joined #code
12:19 Reiver [~reaverta@IRCop.Nightstar.Net] has quit [Ping Timeout]
12:22 ReivZzz [~reaverta@IRCop.Nightstar.Net] has joined #Code
12:22 mode/#code [+o ReivZzz] by ChanServ
12:48 AnnoDomini [~fark.off@Nightstar-29172.neoplus.adsl.tpnet.pl] has quit [Ping Timeout]
12:51 ReivZzz [~reaverta@IRCop.Nightstar.Net] has quit [Ping Timeout]
12:54 ReivZzz [~reaverta@IRCop.Nightstar.Net] has joined #Code
12:54 mode/#code [+o ReivZzz] by ChanServ
12:54 AnnoDomini [~fark.off@Nightstar-29812.neoplus.adsl.tpnet.pl] has joined #Code
13:39 ReivZzz is now known as ReivSLEP
13:47 Chalcy [~Chalceon@Nightstar-869.bitstream.orcon.net.nz] has joined #code
13:47 mode/#code [+o Chalcy] by ChanServ
13:48 ChalcyZzz [~Chalceon@Nightstar-869.bitstream.orcon.net.nz] has quit [Ping Timeout]
14:32 MyCatOwnz [~mycatownz@Nightstar-379.dsl.in-addr.zen.co.uk] has joined #code
15:24 You're now known as TheWatcher
15:37 Pi [~sysop@Nightstar-6915.hsd1.or.comcast.net] has quit [Ping Timeout]
16:21 Safyra_Away [Safyra@Nightstar-25904.ok.ok.cox.net] has joined #code
16:21 Stephenie [Safyra@Nightstar-25904.ok.ok.cox.net] has quit [Ping Timeout]
16:33 MyCatOwnz [~mycatownz@Nightstar-379.dsl.in-addr.zen.co.uk] has quit [Quit: Stuff, doing thereof.]
17:18 You're now known as TheWatcher[afk]
17:32 MyCatOwnz [~mycatownz@Nightstar-379.dsl.in-addr.zen.co.uk] has joined #code
17:45 Janus [~Cerulean@Nightstar-10302.columbus.res.rr.com] has joined #Code
17:52 EvilDarkLord is now known as EvilSchemingLord
18:27 You're now known as TheWatcher
19:05 Chalcy is now known as Chalcedon
19:43 MyCatOwnz is now known as MyCatFoods
19:51 EvilSchemingLord [althalas@Nightstar-17046.a80-186-184-83.elisa-laajakaista.fi] has quit [Ping Timeout]
19:53 EvilSchemingLord [althalas@Nightstar-17046.a80-186-184-83.elisa-laajakaista.fi] has joined #code
20:03 Janus [~Cerulean@Nightstar-10302.columbus.res.rr.com] has quit [Ping Timeout]
20:14 Janus [~Cerulean@Nightstar-10302.columbus.res.rr.com] has joined #Code
20:43 Pi [~sysop@Nightstar-6915.hsd1.or.comcast.net] has joined #code
20:43 mode/#code [+o Pi] by ChanServ
21:25 Thaqui [~Thaqui@124.197.9.ns-13331] has joined #code
21:37 MyCatFoods is now known as MyCatOwnz
21:44 Chalcedon is now known as ChalcyDressing
21:45 ReivSLEP is now known as Reiver
21:54 Janus is now known as Jan[essay]
22:07 Mahal is now known as MahalCleaning
22:16 Safyra_Away is now known as Stephenie
22:19 Reiver is now known as ReivOut
22:42 Jan[essay] [~Cerulean@Nightstar-10302.columbus.res.rr.com] has quit [Quit: Essay is important, must focus--]
23:00 ChalcyDressing is now known as ChalcyWaitingForReiver
23:06 AnnoDomini [~fark.off@Nightstar-29812.neoplus.adsl.tpnet.pl] has quit [Quit: Some people find sanity a little confining.]
23:19 ChalcyWaitingForReiver is now known as ChalcyWaiting
23:33 ReivOut is now known as Reiver
23:34 ChalcyWaiting is now known as ChalcyWedding
23:39 Chalcy [~Chalceon@Nightstar-869.bitstream.orcon.net.nz] has joined #code
23:39 mode/#code [+o Chalcy] by ChanServ
23:40 ChalcyWedding [~Chalceon@Nightstar-869.bitstream.orcon.net.nz] has quit [Ping Timeout]
23:59 Reiver is now known as ReivOut
--- Log closed Sat Nov 25 00:00:12 2006