--- Log opened Mon Aug 24 00:00:55 2009
00:02 You're now known as TheWatcher[zZzZ]
00:51 GeekSoldier [~Rob@Nightstar-8573.midstate.ip.cablemo.net] has quit [Quit: Praise "BOB"!]
00:52 Consul [~Consul__@Nightstar-4614.dsl.sfldmi.ameritech.net] has quit [Ping Timeout]
00:53 Consul [~Consul__@Nightstar-4614.dsl.sfldmi.ameritech.net] has joined #code
00:53 mode/#code [+o Consul] by ChanServ
01:19 Tarinaky [~Tarinaky@Nightstar-16638.plus.com] has quit [Client exited]
02:31 Consul [~Consul__@Nightstar-4614.dsl.sfldmi.ameritech.net] has quit [Ping Timeout]
02:32 Consul [~Consul__@Nightstar-4614.dsl.sfldmi.ameritech.net] has joined #code
02:32 mode/#code [+o Consul] by ChanServ
03:04 Attilla [~The.Attil@92.0.115.ns-3186] has quit [Quit: ]
03:19 gnolam [lenin@Nightstar-1382.A163.priv.bahnhof.se] has quit [Quit: Z?]
03:41 Derakon[AFK] is now known as Derakon
03:50 <@Derakon> ...how many cookies do you eat in one sitting?
03:50 <@Derakon> Mischan.
03:55 <@Vornicus> Ten bazillion.
03:55 <@Derakon> And how big are they?
03:56 <@Vornicus> Very, very small.
04:00 SmithKurosaki [~Smith@Nightstar-7213.cpe.net.cable.rogers.com] has quit [Operation timed out]
04:17 McMartin [~mcmartin@Nightstar-7615.dsl.pltn13.sbcglobal.net] has quit [Quit: Kernel upgrade]
04:22 McMartin [~mcmartin@Nightstar-7615.dsl.pltn13.sbcglobal.net] has joined #code
04:22 mode/#code [+o McMartin] by ChanServ
04:50 Thaqui [~Thaqui@121.98.166.ns-22683] has joined #code
04:50 mode/#code [+o Thaqui] by ChanServ
05:45 <@Derakon> Hrm. Tricky.
05:46 <@Derakon> Okay, so the Player has an animation "crawlturn", for when he needs to turn around when crawling.
05:46 <@Derakon> I want the player to be able to reverse the turnaround by pressing in the original direction of travel.
05:46 <@Derakon> Right now, the animation "completes" when it either reaches its last frame, or reaches its first frame (because it was reversed).
05:47 <@Derakon> I want some clean way to differentiate those two cases...
05:47 <@Derakon> I don't really like the thought of examining the animation's current frame, nor of a passed-along boolean that says "I went off the end".
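One way to tell the two cases apart, sketched below in Python purely as an illustration (the Animation class and all names here are hypothetical, not Jetblade's actual code), is to have the animation report which end it stopped at rather than a bare "done" flag:

    # Hypothetical sketch: the animation reports *which* end it stopped at,
    # so the caller never inspects frame numbers or passes a boolean around.
    class AnimationEnd:
        AT_LAST_FRAME = 'atLastFrame'    # played forward to completion
        AT_FIRST_FRAME = 'atFirstFrame'  # was reversed back to the start

    class Animation:
        def __init__(self, numFrames):
            self.numFrames = numFrames
            self.frame = 0
            self.direction = 1  # 1 = playing forward, -1 = reversed

        def reverse(self):
            self.direction *= -1

        def update(self):
            """Advance one frame; return an AnimationEnd value when finished, else None."""
            self.frame += self.direction
            if self.frame >= self.numFrames - 1:
                self.frame = self.numFrames - 1
                return AnimationEnd.AT_LAST_FRAME
            if self.frame <= 0:
                self.frame = 0
                return AnimationEnd.AT_FIRST_FRAME
            return None

The player code can then branch on the returned value: AT_LAST_FRAME finishes the turnaround, AT_FIRST_FRAME resumes crawling in the original direction.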
06:04 Syloqs-AFH [Syloq@ServicesAdmin.Nightstar.Net] has quit [Connection reset by peer]
06:17 AnnoDomini [AnnoDomini@Nightstar-29519.neoplus.adsl.tpnet.pl] has joined #Code
06:17 mode/#code [+o AnnoDomini] by ChanServ
06:19 Namegduf [~namegduf@Nightstar-7714.148.broadband5.iol.cz] has quit [Ping Timeout]
06:32 SmithKurosaki [~Smith@Nightstar-4395.dsl.teksavvy.com] has joined #code
06:32 mode/#code [+o SmithKurosaki] by ChanServ
06:57 Derakon is now known as Derakon[AFK]
07:11 Namegduf [~namegduf@Nightstar-7714.148.broadband5.iol.cz] has joined #code
07:11 Tarinaky [~Tarinaky@Nightstar-16638.plus.com] has joined #code
07:20 Rhamphoryncus [~rhamph@Nightstar-16476.ed.shawcable.net] has joined #code
07:32 Namegduf [~namegduf@Nightstar-7714.148.broadband5.iol.cz] has quit [Quit: Headin' out.]
08:35 Vornicus is now known as Vornicus-Latens
09:04 Thaqui [~Thaqui@121.98.166.ns-22683] has quit [Client exited]
09:29 You're now known as TheWatcher
09:54 Thaqui [~Thaqui@121.98.166.ns-22683] has joined #code
09:54 mode/#code [+o Thaqui] by ChanServ
11:11 Rhamphoryncus [~rhamph@Nightstar-16476.ed.shawcable.net] has quit [Quit: Rhamphoryncus]
12:05 Thaqui [~Thaqui@121.98.166.ns-22683] has quit [Client exited]
12:36 Attilla [~The.Attil@92.0.115.ns-3186] has joined #code
12:37 mode/#code [+o Attilla] by ChanServ
12:48 gnolam [lenin@Nightstar-1382.A163.priv.bahnhof.se] has joined #Code
12:48 mode/#code [+o gnolam] by ChanServ
--- Log closed Mon Aug 24 13:03:25 2009
--- Log opened Mon Aug 24 13:03:29 2009
13:03 TheWatcher [~chris@Nightstar-29731.dsl.in-addr.zen.co.uk] has joined #code
13:03 Irssi: #code: Total of 20 nicks [13 ops, 0 halfops, 0 voices, 7 normal]
13:03 mode/#code [+o TheWatcher] by ChanServ
13:04 Irssi: Join to #code was synced in 56 secs
14:58 AnnoDomini [AnnoDomini@Nightstar-29519.neoplus.adsl.tpnet.pl] has quit [Ping Timeout]
15:04 AnnoDomini [AnnoDomini@Nightstar-29644.neoplus.adsl.tpnet.pl] has joined #Code
15:04 mode/#code [+o AnnoDomini] by ChanServ
15:40 Namegduf [~namegduf@Nightstar-7714.148.broadband5.iol.cz] has joined #code
16:00 Vornicus-Latens is now known as Vornicus
16:03 Tarinaky is now known as ilovebeer
16:05 Syloqs_AFH [Syloq@Admin.Nightstar.Net] has joined #code
16:07 Syloqs_AFH is now known as Syloqs-AFH
16:17 ilovebeer is now known as iloveTarinaky|you_should_to
16:18 iloveTarinaky|you_should_to is now known as Tarinaky
17:22 Derakon[AFK] is now known as Derakon
18:14 <@gnolam> http://prog21.dadgum.com/29.html
19:52 Rhamphoryncus [~rhamph@Nightstar-16476.ed.shawcable.net] has joined #code
20:00 <@Consul> Do you feel that modern high-powered computers have made coders too lazy? Answer A) Yes, or B) David Beckham.
20:00 < Namegduf> Yes.
20:01 <@Derakon> I feel that if I had to write Jetblade in C, I'd have accomplished maybe a third of what I've actually done so far.
20:02 <@Consul> One of the reasons I'm really interested in the dsPIC processor for synthesis applications is because I really miss coding right to the hardware.
20:02 <@McMartin> There's a wide array of problem domains where this was never a good idea.
20:02 <@McMartin> This is in large part why COBOL was invented in the first place. I'll stick with Python.
20:03 <@McMartin> I'll take Python over shell scripts, too, for both efficiency and speed.
20:03 <@Derakon> (For that matter, if I were working with non-modern hardware, I wouldn't be able to make remotely as good graphics because Blender would be unusable.)
20:04 <@Consul> Clearly, I have different desires than most of you.
20:04 < Namegduf> I feel that high-level and "totally blind to what is efficient/inefficient" don't necessarily have to conflict.
20:05 <@Consul> The dsPIC would allow me to design standalone synthesis modules.
20:05 <@McMartin> VHDL is a high level language and yet operates at a lower level still
20:05 <@Derakon> I think there is value in understanding how computers actually work well enough to be able to make functional programs in C. But it's not my language of choice.
20:05 < Namegduf> I think the big issue isn't even high level languages.
20:06 < Namegduf> It's designers of frameworks and abstractions that do so entirely on the basis of what is convenient for the moment, with no real eye towards the huge mess they're building
20:06 < Namegduf> Because the computers can (probably) run it.
20:06 <@Derakon> Eeehn, I'd be more generic than that.
20:06 <@Derakon> It's people not understanding the difference between functional and elegant.
20:07 < Namegduf> What I would like to see is a high-level language that attempted to only provide those functions which could be done efficiently, had proper complexity class documentation/guarantees for individual operations, and so forth.
20:08 < Namegduf> Perhaps providing inefficient operations, but properly warning against them.
20:08 < Namegduf> Instead of pretending you can insert anywhere in an array just fine, or turning all arrays into some other structure so it can be supported decently (at a cost of overall speed).
20:09 < Namegduf> And performing similar ugliness in the name of simplicity.
20:09 <@Derakon> Er, most language documentation I've seen will warn you when a built-in is worse than constant-time.
20:09 < Namegduf> You mean O(n)?
20:09 <@Derakon> No, I mean O(1).
20:10 <@Derakon> O(n) is linear time.
20:10 < Namegduf> Weird, most array operations would be O(n) at the least, I'd expect, and I've never seen a high level language warn against operations on them.
20:10 < Namegduf> (Some exceptions for start/end insertion perhaps)
20:10 < Namegduf> Even on a hash, you're not working in constant time.
20:11 < Namegduf> Er, not hashes.
20:11 * Namegduf needs to sleep.
20:11 <@Derakon> Hashtable operations tend to be amortized constant time.
20:12 <@Consul> Ie, no matter what you want to do, it takes the same amount of time?
20:12 < Namegduf> The only thing I've seen which seems to document everything in that manner is the C++ STL.
20:12 < Namegduf> (And very little of that is straight 'constant time')
20:12 <@Derakon> Consul: "constant time" means "the cost of a given operation does not depend on the size of the data set".
20:12 <@Consul> Ah, I see.
20:12 <@Derakon> "Amortized constant time" means "on average, the cost of a given operation does not depend on the size of the data set".
20:13 <@Derakon> Even if some operations do depend on the size of the data set, you have a bunch of constant-time operations that must occur before you get one of those non-constant operations, and you can "farm out" the non-constant cost to those constant time operations, making the entire thing, on average, constant time.
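As a toy illustration of that "farming out" (a Python sketch, not how any particular language actually implements its lists): a growable array that doubles its storage when full does O(n) work only on the rare resizing append, and n appends trigger copies totalling roughly 2n elements at most, so the average cost per append stays constant.

    # Toy growable array: append is amortized O(1) because the occasional
    # O(n) resize is paid for by the many cheap appends that preceded it.
    class GrowableArray:
        def __init__(self):
            self.capacity = 4
            self.size = 0
            self.storage = [None] * self.capacity

        def append(self, value):
            if self.size == self.capacity:
                # Rare, expensive step: copy everything into a doubled buffer.
                self.capacity *= 2
                newStorage = [None] * self.capacity
                newStorage[:self.size] = self.storage
                self.storage = newStorage
            self.storage[self.size] = value  # Common O(1) step.
            self.size += 1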
20:14 < Namegduf> Ew.
20:15 < Namegduf> I'm not overly fond of the whole "implement basic data structures as something much slower in general so we can provide nicer guarantees for operations which can't actually be done on that data structure efficiently" thing, either.
20:15 <@McMartin> Amortized constant time: std::vector
20:15 < Namegduf> (Arrays are the big example that comes to mind.)
20:15 <@McMartin> Because sometimes append requires you to allocate a new chunk.
20:16 < Namegduf> I don't think std::vector is (amortized) constant time for inserting in the middle.
20:16 <@McMartin> No, because it's an array.
20:16 <@Consul> I just want a box that can make cool sounds.
20:16 < Namegduf> Precisely.
20:16 <@Derakon> McM was talking about appending onto the end of the array.
20:16 <@McMartin> Arrays are *never* constant time for inserting in the middle, or they wouldn't actually be arrays.
20:16 <@Consul> And because it's only for my own use, I can argue that the ends justify the means. :-)
20:16 < Namegduf> Precisely.
20:17 <@McMartin> As it happens, Java, Haskell, and ML all have big-O documentation on their library classes.
20:17 <@Derakon> Arrays are amortized constant for expanding by appending to the end.
20:17 <@McMartin> Java in particular pushes at least two implementations of everything depending on what, precisely, you wanted.
20:17 <@McMartin> For that kind of instruction, Java is actually currently best of breed, I think.
20:18 < Namegduf> Big-O isn't all I meant, though. Slow constant time is still slow, and needs documenting, too.
20:18 <@Derakon> Mmmm, sqrt.
20:18 <@McMartin> These are what profilers are for, really.
20:18 < Namegduf> Profiling due to lack of documentation is a little ugly
20:19 <@McMartin> Well
20:19 <@McMartin> It depends on what you mean by "slow" here.
20:19 <@Derakon> In general, it's my feeling that all you should worry about when you're writing the code the first time is big-O speed.
20:19 <@McMartin> There's "it's several hundred instructions"
20:19 <@McMartin> And then there's "it does I/O"
20:19 <@Derakon> Later you can come back with a profiler and see which bits are actually slow, and then optimize them.
20:19 < Namegduf> Both could do with noting.
20:19 <@Derakon> Worrying about constant-time costs when you first write your code is silly.
20:20 <@McMartin> The former is pretty silly.
20:20 <@Derakon> What does "the former" refer to here?
20:20 <@McMartin> Worrying about several hundred instructions. The latter is critical, because it translates to "it's the equivalent of several million instructions"
20:21 <@Derakon> Ahh, yes, like some classmates who asked the prof how to speed up their networking code. He pointed out that their biggest slowdown was from fprintf.
20:21 <@AnnoDomini> Mumble. It always confuses me when I have to choose between vertical and horizontal flips.
20:21 <@AnnoDomini> For me, it's intuitive that a horizontal flip would flip it using the horizon as an axis.
20:22 <@McMartin> Essentially, the thing that's important with modern computers isn't "bah, back in my day we cared about CPU efficiency", it's that the CPU is now almost never the actual limiting factor.
20:22 <@AnnoDomini> But it's always the other way around.
20:22 <@Consul> Great conversation, guys, but I have to go mow the lawn now. Write me some code to solve THAT one. :-)
20:22 <@Derakon> Robotic lawnmowers exist~
20:22 <@McMartin> Google "roomba", add blades~
20:22 <@Derakon> They're basically Roombas writ large.
20:23 <@Consul> Don't think I haven't thought about building one. :-)
20:23 < Namegduf> McMartin: Actually, in my experience, poorly architectured systems care about neither.
20:23 <@Consul> Problem is, we have one property line with no barriers.
20:23 <@McMartin> Namegduf: Yes, that's why they're poorly architectured.
20:23 <@Derakon> String up a virtual fence then.
20:23 <@Consul> So, I'm thinking GPS...
20:23 <@Derakon> GPS resolution isn't so hot for this kind of thing.
20:23 < Namegduf> McMartin: Well, and there's a lot of them, because even I/O has increased in performance.
20:23 <@Consul> Overkill, probably.
20:23 <@McMartin> That doesn't mean that what makes a system fast these days isn't how small the inner loop is, but rather how few round-trips it makes.
20:23 < Tarinaky> PCs aren't the only type of computer. My Uni's Physics department has an undergrad project making a satellite.
20:24 <@McMartin> There are too many negatives in that sentence and I may have tripped over some.
20:24 < Tarinaky> Needless to say it doesn't have an impressive clock speed xD
20:24 <@Derakon> I note for the record here that early versions of Fusillade (my first game) were unbearably slow for one of our users here on the menu screen because every frame, the highscore file was being read over NFS.
20:24 < Namegduf> I think you got it.
20:24 <@Consul> I would like to say, for the record (following on from Tarinaky), that sometimes, writing straight to the hardware is exactly what you need. And with that, I'm out. BBIAB.
20:24 <@Derakon> Ta-ta.
20:24 <@McMartin> I'm pretty sure a satellite qualifies as an "embedded device"
20:25 <@Derakon> Heh.
20:25 <@Derakon> Nah, just huck a laptop onto the thing. It'll be fine~
20:25 <@McMartin> And yeah, IO has improved too, but it's the ratio that matters for GP computing. Once you hit "enterprise" as anything other than a buzzword, then you find out whether or not you scale.
20:25 < Namegduf> I appreciate it isn't much about CPU speed anymore, because it's relatively cheap; I do find little reason why high level languages need to be so much slower than low level ones, however.
20:25 <@McMartin> Spoiler: Probably not the first time.
20:25 < Tarinaky> McMartin: Doesn't make them any less important.
20:25 <@AnnoDomini> Yesterday, I read about FPGA circuits generated with genetic algorithms.
20:25 <@AnnoDomini> Was cool.
20:25 <@McMartin> Tarinaky: Have I said they weren't? Working on them is vastly more expensive in programmer time.
20:26 <@Derakon> Namegduf: from my experience with Cython, a lot of time in Python is spent on dynamic type handling.
20:26 <@McMartin> "Programmers are lazy now" is actually "devteams have much smaller time to market"
20:26 <@Derakon> Since Cython basically is Python with static types.
20:26 <@AnnoDomini> 10x10 FPGA cell with one input and one output. It was supposed to differentiate between 1kHz and 10kHz on the input, giving 1 or 0 on the output.
20:26 <@Derakon> And it gave me a 30% speedup on map generation.
20:26 < Tarinaky> Dunno. I've only been half watching the conversation because I don't have all the theory I needed to keep up with you all the time.
20:26 < Namegduf> My best guess was that it's because of unnecessarily complex, constant time or otherwise, operations to provide nice guarantees.
20:26 <@AnnoDomini> And it did.
20:26 < Namegduf> I guess that'd count as a case of that.
20:26 < Tarinaky> Plus I missed the start.
20:26 <@AnnoDomini> Using only 37 cells.
20:26 <@McMartin> Guarantees are very important.
20:27 < Namegduf> Guarantees provided/not provided being documented are.
20:27 <@McMartin> It was not long ago that everything took five times as long to develop and still was full of arbitrary code execution exploits every five lines.
20:27 <@McMartin> And that's "everything", including stuff with no right to
20:27 < Namegduf> Yes, I know, but that wasn't really what I was talking about.
20:28 < Namegduf> I was saying that high level languages should provide less of the nice functionality that requires a stupid amount of time to actually emulate on a real computer just to make programming easy.
20:28 <@AnnoDomini> It's a very cool idea, and I think I'm going to explore it a bit more. I mean, doing what I usually do - design circuits - and at the same time being able to say without shame that I have no idea how they work? Win-win.
20:29 <@Derakon> We could look at this from a simple economics standpoint. The consumer wants more features, and is willing to invest in more hardware to support those features. The programmer's time is therefore valuable, and low-level languages, with the concomitant longer time to market, are not as good of choices.
20:29 < Namegduf> I don't think the current divide of "low level, no base library, is fast" and "high level, provides pre-existing code for everything, is slow" is needed.
20:30 < Namegduf> I think you could provide a decent higher level language that wasn't horribly slow if it were actually designed to be fast from the start.
20:30 <@Derakon> So, what, you want to pick and choose the bits that the language does for you and that you do yourself?
20:30 <@Derakon> You do that, and now the language developers have a massive time-to-market.
20:30 <@AnnoDomini> If I can get my first degree, I'll be damned if that won't be the topic of my second.
20:30 < Namegduf> And suddenly, the work has shifted from every developer to just the language developers.
20:30 < Namegduf> Big improvement.
20:31 <@Derakon> Anyway, it's not like HLLs aren't designed software. There's a lot of people looking at how they're put together and how to make them faster.
20:32 * AnnoDomini wonders if there's a Pay For Toady's Multithreaded Programming Classes fund.
20:32 <@Derakon> Heh.
20:32 < Namegduf> I actually find Boost somewhat interesting on that front.
20:33 < Namegduf> And I don't really like Boost, mostly because it's not standard enough yet.
20:33 < Namegduf> It's an attempt to give C++ a larger base library, making it cheaper/easier to develop with, without any real cost in speed (at least, not in your own code).
20:35 < Namegduf> I do agree; programmer time is valuable.
20:37 < Namegduf> I think there's an unfortunate trend to go too far with that, though, and risk building systems which get so nasty that speed becomes a genuine issue (instances of I/O, context switches, or pure CPU time) and a tendency in high level languages to near-completely throw out what is fast in the name of what makes an easy language.
20:38 < Namegduf> The first is obviously simply an error of the designers.
20:38 < Namegduf> (Albeit an annoyingly common one)
20:38 <@Derakon> So, uh, if programmers make crappy software, then either help them make it better or don't use it.
20:38 < Namegduf> I do; not everywhere at once, though (both options, too).
20:39 <@Derakon> I don't think that forbidding programmers from using tools that generally enable them to work faster is somehow going to improve overall code quality, though.
20:40 < Namegduf> Not providing tools which slow things down a lot would be nice, though.
20:41 <@Derakon> Be specific?
20:42 < Namegduf> You mentioned dynamic type handling previously.
20:42 <@Derakon> It's amazingly helpful to not have to worry about types, yes.
20:42 <@AnnoDomini> Indeed.
20:42 <@Derakon> In fact, ditching static typing seems to be the one universal of high-level languages.
20:43 < Namegduf> I find it a real pain to try to remember what every operation guarantees will happen if this happens to any of the numerous things it could possibly be, but I'm used to C/C++.
20:43 <@Derakon> Thing is, you can yourself keep track of the various possible values and types that a given operation could accept.
20:43 < Namegduf> Yeah, or I can use static types, which does it for me and lets the compiler check.
20:43 <@Derakon> Generally, dynamic typing doesn't save you from making mistakes; it just makes it so you don't have to be explicit about casting all the time.
20:44 < Namegduf> I think one of the biggest mistakes in C was implicit casting being allowed, ever.
20:44 < Namegduf> Given the dangers it presents as regards overflow and similar.
20:44 <@Derakon> You want "int foo = 1.5" to cause a type error?
20:44 < Namegduf> Yes.
20:45 <@Derakon> I think we chalk this up to different programming philosophies.
20:45 < Namegduf> Haha, yeah.
20:45 < Namegduf> I like to make it explicit and visible in the code when I'm doing something like casting.
20:45 < Namegduf> Because it requires some note to guarantee that it's safe.
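A tiny Python illustration of the trade-off being described (a hypothetical example, not anyone's actual code from the channel): under dynamic typing the mistake still exists, it just surfaces at runtime, and an explicit conversion is what makes the intent visible in the code.

    count = "3"             # arrived as a string, e.g. from parsing input
    # total = count + 1     # dynamic typing: still a mistake, but it only shows up as a runtime TypeError
    total = int(count) + 1  # the explicit conversion documents (and checks) the intent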
20:45 <@Derakon> It seems to me that you want to be very precise about what you're writing and what you expect the computer to do, while I prefer to let the computer "think for me" in situations where I'm confident it'll make the right decision.
20:46 < Namegduf> Well, I'm happy with it working out what it wants to do at a low level (I don't code assembly).
20:47 < Namegduf> But I think you might be right in general.
20:47 < Namegduf> I'm not fond, particularly, of having it think for me where this incurs a runtime overhead, unless there's an otherwise entirely unavoidable gain to development.
20:48 < Tarinaky> Shorter, simpler code that can be written faster with less thought is, generally speaking, a gain...
20:50 < Namegduf> That's true, but I think languages have taken a certain "development ease at any cost" attitude towards things, and I personally find some of the features annoying.
20:51 <@Consul> bool lawns_mowed = true;
20:51 < Namegduf> I mean, sure, dynamic typing is convenient to just get it working, but to actually make sure it's safe, I need to think about every possible value things could have and make sure the requirements of everything are met, and that's quite difficulty. I find static typing's compiler-provided checks helpful.
20:51 < Namegduf> *difficult
20:52 <@McMartin> It's astonishing the amount of horror that currently takes place at the assembler level so that C can be a remotely sane deployment environment
20:53 * McMartin says this having recently been doing some assembler work on OS X to mess with dyld's mind.
20:53 <@McMartin> To be fair, this has more to do with run-time linking, which we think of as modern because our idea of an old computer is a micro.
20:53 <@AnnoDomini> Assembly is like working with magma in DF.
20:53 <@McMartin> Run-time linking is older than any of us.
20:54 < Namegduf> One day I will actually play DF and understand that, AnnoDomini.
20:54 < Namegduf> One day.
20:54 <@McMartin> (My favorite command sequence as part of gcc's standard prologue in OSX:)
20:54 <@Derakon> Anno mainly means that it's easy to shoot yourself in the foot.
20:54 <@Derakon> But I think of C in that context, so~
20:54 < Namegduf> Ah, haha.
20:54 <@McMartin> (push %ebx; call label_1; label_1: pop %ebx)
20:54 < Namegduf> C is very easy to shoot yourself in the foot.
20:54 < Namegduf> I prefer C++.
20:54 <@Derakon> McM: and what does that accomplish?
20:55 <@Derakon> Namegduf: the followup: "In C++ it's harder [to shoot yourself in the foot], but you blow your whole leg off."
20:55 < Namegduf> I've heard of that, yeah.
20:55 <@Vornicus> Pop quiz! Draw a circle of radius 2 on a unit square grid. What proportion of the circumference is in each square of the twelve it passes through?
20:55 < Namegduf> Held it for a while; my view is basically that it's right. Quite easy to make a horrible, horrible design in C++.
20:55 < Namegduf> And then you're screwed.
20:55 <@Derakon> Vorn: intuitively it should vary with sine/cosine somehow.
20:56 < Namegduf> You need to do a little planning. I'm fond of DbC, want to try it on something large in some form sometime.
20:56 <@AnnoDomini> Namegduf: Any reason for not playing DF now?
20:56 < Namegduf> AnnoDomini: I don't think I have it, I'm sleeping in an hour, and I was planning to get around to fixing some documentation for something.
20:57 <@McMartin> Derakon: It's part of the interaction with the debugger/stack trace mechanism.
20:57 < Namegduf> I'm just... distractable, haha.
20:57 <@McMartin> It also puts the start of the "real" function in %ebx.
20:57 <@McMartin> Which is part of the ABI
20:57 <@Vornicus> Der: All the information you need is in the problem.
20:57 <@Consul> When am I going to have my Star Trek computer where writing programs is as easy as telling the computer what you need it to do?
20:57 <@AnnoDomini> Namegduf: I rather meant 'now' in the general sense. Like, why don't you have it? :P
20:57 <@McMartin> And then loading local constants and such is (target-label_1+%ebx)
20:57 < Namegduf> Haha.
20:58 <@McMartin> Consul: Once that technology exists, you will find that people don't know what they need done.
20:58 <@Derakon> Vorn: yeah, that was an off-the-cuff answer without much thought. I'm a bit distracted ATM talking about languages and reading up on optics.
20:58 < Namegduf> AnnoDomini: Not sure, I've always been busy when the interest struck me.
20:58 <@Vornicus> --oops. forgot to mention, the center is on a vertex of the grid.
20:58 <@Consul> McMartin: Probably true.
20:58 <@McMartin> Absolutely true. Ask anyone who's had to develop for a customer. >_<
20:58 < Namegduf> Haha.
20:58 < Namegduf> Yeah.
20:58 <@Derakon> Vorn: practically speaking, I'd probably whip up a quick large-scale sim of the problem. >.>
20:58 <@Consul> McMartin: I've had to do that, too. :-P
20:59 <@Consul> Of course, I was writing in PHP at the time.
20:59 * AnnoDomini busily constructs what he thinks will be a layman-comprehendable diagram of his alarm system.
20:59 <@Derakon> Render a few thousand points on the circle, cast them to locations on the grid, and get their proportions.
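A quick sketch of that simulation in Python (an illustration only): sample the circumference uniformly in angle, which for a circle is also uniform in arc length, and bin each sample by the unit square it lands in, keyed by the square's lower-left corner.

    import math

    # Estimate what fraction of a radius-2 circle's circumference falls in each
    # unit grid square, with the centre on a grid vertex (Vornicus's puzzle).
    samples = 100000
    counts = {}
    for i in range(samples):
        theta = 2 * math.pi * (i + 0.5) / samples  # offset avoids exact gridline hits
        x = 2 * math.cos(theta)
        y = 2 * math.sin(theta)
        square = (int(math.floor(x)), int(math.floor(y)))
        counts[square] = counts.get(square, 0) + 1

    for square in sorted(counts):
        print(square, counts[square] / float(samples))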
20:59 <@Consul> I'm now seriously thinking about a design for a robot lawnmower.
20:59 < Namegduf> I need to try some more interesting languages. I want to get as far away from Perl as possible.
20:59 <@Derakon> Heh.
21:00 <@Derakon> Python!
21:00 < Namegduf> "Zero but true" terrifies me.
21:00 <@AnnoDomini> When I'm done with this crap, sometime in September, I'm going to make a series of tutorial videos for DF's Adventure Mode.
21:00 < Namegduf> Well, not really. I just find it ugly.
21:00 <@AnnoDomini> And probably Legends Mode, since it's helpful for DFAM.
21:00 < Namegduf> Neat.
21:00 <@Derakon> Anno: step 1: pick up a chunk of dirt. step 2: throw it up into the air. Repeat 1000 times.
21:01 <@AnnoDomini> Derakon: Nono. Step 1: Install AHK.
21:01 <@AnnoDomini> :P
21:01 <@Derakon> Adventurer Hacker Kore?
21:01 <@AnnoDomini> Auto Hot Key.
21:02 <@AnnoDomini> Nowadays, DF sports its own macro system, but it's buggy and I haven't learnt it yet.
21:02 <@AnnoDomini> Has a tendency to segfault, from the bug reports I've read.
21:02 <@Derakon> Ahh. AHK of course is Windows-only.
21:02 <@AnnoDomini> Well, aren't there Lunix-alternatives?
21:03 <@Derakon> I am looking forward to the day when adventurers get access to all the fortress-mode skills though.
21:03 <@Derakon> Probably; haven't checked.
21:03 <@Derakon> These days, any game whose gameplay involves "repeat this tedious process until you're strong enough that you can probably survive the actual interesting bits" loses me as a customer.
21:04 <@AnnoDomini> You weren't far off in your 1000 throw guesstimate (if it was such). It gives you a little bit over the minimum for Legendary+5 skill, which is probably the cap.
21:04 <@Derakon> Well, mostly what you want is the stat-ups, right?
21:04 <@AnnoDomini> Yeah.
21:04 <@Derakon> Not that disemboweling your enemies with a thrown pebble hurts.
21:04 <@Consul> Derakon: It probably comes as no surprise that my favorite genre of game is the Myst-style first-person puzzler.
21:04 <@AnnoDomini> Uh.
21:04 <@Derakon> Hurts you, anyway~
21:04 <@AnnoDomini> Derakon: Have you tried throwing ammunition?
21:05 <@Derakon> No, I haven't.
21:05 <@Derakon> My experiences with ammunition in adventurer mode generally consisted of getting shot from 30 paces away when my sight radius was 3 paces.
21:05 <@Consul> Well, maybe that's my second favorite genre, right behind a well-designed platformer.
21:05 <@AnnoDomini> It's shorter range than bows/crossbows, but it doesn't consume ammo (you can pick it up) AND it's probably deadlier, since you can more easily train Throwing.
21:06 <@AnnoDomini> Derakon: See, you're approaching this from the wrong angle.
21:07 <@AnnoDomini> The right angle is training Ambusher to Legendary+5 and Thrower to Legendary+5, for effectively 'total pwnage' to use the vernacular.
21:07 <@AnnoDomini> They can't target you if they cannot SEE you.
21:08 <@AnnoDomini> The way perception is handled also means you can pick out those who are sleeping and clobber them into death without waking them up, and nobody noticing.
21:08 <@AnnoDomini> And did I mention Throwing doesn't spoil your stealth?
21:09 Kazriko [~kaz@Nightstar-26123.gdj-co.client.bresnan.net] has quit [Ping Timeout]
21:09 <@AnnoDomini> I'm none too sure about shooting, as sources claim it does, but I've found on several occasions that it doesn't.
21:09 <@AnnoDomini> Okay, I think I'm done rambling.
21:09 * AnnoDomini gets back to the graphics editor.
21:11 Kazriko [~kaz@Nightstar-26123.gdj-co.client.bresnan.net] has joined #code
21:48 <@AnnoDomini> I'm going to need a bigger canvas.
22:16 <@AnnoDomini> http://i28.tinypic.com/29osx6t.jpg
22:16 <@AnnoDomini> How's this for a first draft?
22:16 <@AnnoDomini> Besides the fact that it's partly in Polish. :P
23:00 < Rhamphoryncus> yarr, reading scrollback. AnnoDomini: horizontal flip is horizontal *motion*. A given point on the object will have no vertical motion
23:01 < Rhamphoryncus> Consul: I believe you can bury lines in the ground to act as a fence to the robot mowers
23:04 <@Vornicus> It also works on dogs.
23:07 < Rhamphoryncus> regarding language design: I still believe I can make a compiled python on par with C's performance
23:08 <@McMartin> Python's terp performance is notoriously bad.
23:08 <@McMartin> It does a great deal of recomputation that it doesn't need
23:09 < Rhamphoryncus> terp?
23:09 < Rhamphoryncus> oh interpreter
23:09 <@McMartin> Yeah
23:10 < Rhamphoryncus> it's an interpreter. That should explain enough
23:10 <@Consul> I wonder if oil painters ever talk about terp performance.
23:10 * Vornicus wonders about the progress of Unladen Swallow.
23:11 <@McMartin> Rham: You can do wonderful things with well-organized ASTs. You get close to JIT, especially for non-inner-loops.
23:11 <@McMartin> With sufficiently pathological input, JIT beats *everything* including C but that's beside the point
23:12 < Rhamphoryncus> Depends. Good profiling could let a compiler hard-code the JIT's optimistic specialization :)
23:14 <@McMartin> It involves oscillating the input so that the "common case" for your branch prediction changes.
23:14 <@McMartin> You could have the static code time its shunts to different code to dance with the input, I suppose
23:15 < Rhamphoryncus> exactly
23:15 <@McMartin> Let me repharse
23:15 <@McMartin> rephrase
23:16 < Rhamphoryncus> If you have a variable that's an unbounded integer you could have a fast path for 32-bit integers
23:16 <@McMartin> "Dynamic JIT can, with sufficiently pathological input, produce a system that will beat any statically compiled code"
23:16 <@McMartin> On instruction mix grounds, mainly
23:16 <@McMartin> That's like thin locks, not really a JIT issue. More a support-library issue
23:17 < Rhamphoryncus> can you give a more specific example of what you think JIT can do better?
23:17 <@McMartin> It was a demo
23:17 <@McMartin> Essentially, data comes in, this dictates the code path
23:18 <@McMartin> So you have a bunch of branches, and branches are generally written to have a default answer
23:18 <@McMartin> If you guess wrong, the result is that your instruction pipeline is trashed, otherwise, the branch isn't there at all
23:18 < Rhamphoryncus> ahh, I see
23:18 <@McMartin> The JIT can notice it's getting it wrong and recompile it. Hence the "statically compiled" caveat being added in.
23:19 < Rhamphoryncus> Don't modern CPUs do branching statistics?
23:19 <@McMartin> I know they *can*
23:19 <@McMartin> I believe some of the current ones use hints though instead of trying to rely on a dynamic branch predictor
23:19 <@McMartin> OTOH, maybe this was a SPARC or something.
23:19 <@McMartin> *shrug*
23:20 < Rhamphoryncus> I think the compiler can try to override it
23:20 <@McMartin> At the very least, there needs to be a "if you can't decide yet, decide this way"
23:20 < Rhamphoryncus> But either way I agree with you
23:21 <@McMartin> Anyway. Even by interpretation standards, the Python interpreter is apparently really bad
23:21 < Rhamphoryncus> There are cases where no forethought can sufficiently predict your deployed usage profile
23:21 < Rhamphoryncus> I've not seen any evidence of that
23:21 <@McMartin> (It does things like dynamically type check its own constants)
23:22 <@McMartin> This may have improved in 3.x, actually
23:22 < Rhamphoryncus> Python's gotten a lot of work on optimizing it, so if it's slow there's usually a good reason
23:23 < Rhamphoryncus> constants should be pretty cheap type checks anyway
23:24 < Rhamphoryncus> Oh, and they have a fast path for really common types. They use a bitfield
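Roughly the shape such a fast path takes, sketched here in Python purely for illustration (the real interpreter does this in C and its details, including the bitfield trick, differ): guard on the common case, do the cheap machine-word work, and punt everything else to the general machinery.

    # Hypothetical sketch of a "fast path for common types" in an interpreter's
    # add operation: check the common case cheaply, fall back to full dispatch.
    INT32_MIN, INT32_MAX = -(2 ** 31), 2 ** 31 - 1

    def generic_add(a, b):
        # Slow-path stand-in: full dynamic dispatch, coercion, overloading, etc.
        return a + b

    def fast_add(a, b):
        if isinstance(a, int) and isinstance(b, int):
            result = a + b
            if INT32_MIN <= result <= INT32_MAX:
                return result       # common case: fits in a machine word
        return generic_add(a, b)    # uncommon case: defer to the general path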
23:31 Tarinaky [~Tarinaky@Nightstar-16638.plus.com] has quit [Client exited]
23:32 <@Derakon> I wouldn't mind an improvement in the Python terp speed~
23:47 <@Vornicus> Nor I.
--- Log closed Tue Aug 25 00:00:32 2009