--- Log opened Thu Nov 17 00:00:04 2011
00:05 Derakon [chriswei@Nightstar-f68d7eb4.ca.comcast.net] has quit [[NS] Quit: Lost terminal]
00:06 AD[Shell] [abudhabi@9D46A2.088371.A474A5.6EEC27] has joined #code
00:22 Derakon[AFK] is now known as Derakon
--- Log closed Thu Nov 17 00:39:01 2011
--- Log opened Thu Nov 17 00:39:10 2011
00:39 TheWatcher[zZzZ] [chris@Nightstar-3762b576.co.uk] has joined #code
00:39 Irssi: #code: Total of 30 nicks [10 ops, 0 halfops, 0 voices, 20 normal]
00:39 Irssi: Join to #code was synced in 46 secs
01:06 < Lingerance> You can get USB video-card things
01:09 Kindamoody[zZz] is now known as Kindamoody
01:11 < ToxicFrog> RichardBarrell: ...in a laptop?
01:11 <@McMartin> You can buy such laptops, though
01:11 <@McMartin> I think there's someone in QA with such a setup
01:12 < ToxicFrog> Yeah, but it'll be cripplingly expensive for what you get.
01:12 <@McMartin> Yeah
01:12 <@McMartin> Well
01:12 <@McMartin> Unless it's just 'it has HDMI and DVI out and you can use them both at once'
01:18 < RichardBarrell> I've got an HDMI and a VGA out, but there aren't enough outputs on this GPU to drive 'em both.
01:18 < RichardBarrell> ToxicFrog: meh, counting up the amount of time I spend using my laptop, I'm willing to spend some money on the next one I buy.
01:24 < ToxicFrog> Fair enough
01:25 < ToxicFrog> I spend most of my time on the laptop, but it's hard to say that I'm "using" it since all the money goes into the server (or, well, the gaming desktop) and the laptop just acts as an X terminal for it.
01:35 Rhamphoryncus [rhamph@Nightstar-14eb6405.abhsia.telus.net] has quit [Client exited]
01:44 < RichardBarrell> That reminds me, there still aren't any open source rdesktop servers, are there? :/
01:45 < RichardBarrell> Last time I tried rdesktop to log into a machine in a colo it was faster than VNC o'er the LAN here. :/
01:45 < RichardBarrell> ToxicFrog: do you use any of the X11 compression systems?
01:47 Attilla [Obsolete@Nightstar-f29f718d.cable.virginmedia.com] has quit [Ping timeout: 121 seconds]
01:48 * ToxicFrog finally gets around to enabling ramzswap on the laptop
01:48 < ToxicFrog> RichardBarrell: NoMachine NX
01:48 < ToxicFrog> Which is fantastic and, IME, kicks the shit out of RDP
01:49 < ToxicFrog> That said, yes, there are free RDP servers.
01:51 < ToxicFrog> ben@thoth ~/Desktop/IMAPCAR2/1DC $ sudo swapon -s
01:51 < ToxicFrog> Filename          Type       Size     Used    Priority
01:51 < ToxicFrog> /dev/ramzswap0    partition  3999740  112864  -1
01:51 < RichardBarrell> Next time that I get to swipe some company budget I want to blow ~£700 on a Windows 7 Professional desktop with a crapton of RAM so that I can have a box that everyone in the studio can rdesktop into for cross-browser testing w/ VirtualPC.
01:52 < RichardBarrell> ToxicFrog: ramzswap sounds crazy. You end up with a fixed-size compressed portion of your memory?
01:52 < ToxicFrog> Not...exactly
01:52 < ToxicFrog> You reserve a portion of your memory for use as swap
01:52 < ToxicFrog> Pages swapped to this are stored compressed
01:53 < ToxicFrog> In this manner, it can typically store 3-6x the amount of memory that's actually allocated
01:53 < ToxicFrog> It's slower than direct memory access, but much, much faster than disk swap
01:53 < RichardBarrell> Compression ratios are that high? oO
01:54 < ToxicFrog> LZMA is really good and RAM pages almost always hold easily compressible data.
01:54 < RichardBarrell> Just seems mad to me that you'd do it with a swap device rather than putting the compression directly into Linux's virtual memory so that *any* of the RAM used by userspace could be used for compressed pages, rather than a fixed chunk of it.
01:54 < ToxicFrog> If you're processing gigabytes of text you might get ten to one or better~
01:55 < ToxicFrog> Well, the decompression is fast, but it's not instantaneous
01:55 < RichardBarrell> Eh, it's competing with disk seeks as you say; it wins almost by default.
01:55 <@Namegduf> If it was integrated into generic virtual memory, it'd start competing with normal RAM access
01:56 <@Namegduf> The usual level of reluctance to "swap out" things and similar logic is useful.
01:56 < ToxicFrog> Yeah, that too.
01:56 < ToxicFrog> All the logic is already there in the swap system; why not use it?
01:56 <@Namegduf> A variable-size swap device would possibly be nicer.
01:56 <@Namegduf> (Maybe you could rig it with tmpfs, but probably not worth the effort)
02:14 < ToxicFrog> It would be nicer, but trickier.
02:15 < ToxicFrog> RichardBarrell: er. Right, but what you seem to be proposing is compressing all of memory, which is a severe global performance hit.
02:15 < ToxicFrog> (and you still need somewhere to decompress to, at which point you've just reinvented swap)
02:36 AD[Shell] [abudhabi@9D46A2.088371.A474A5.6EEC27] has quit [Ping timeout: 121 seconds]
02:37 AD[Shell] [abudhabi@9D46A2.088371.A474A5.6EEC27] has joined #code
02:37 < RichardBarrell> ToxicFrog: nnno, what I'd rather do is compress some arbitrary quantity of memory instead of some fixed quantity of memory.
02:37 < RichardBarrell> If I'd meant "compress all memory" then I would have said so.
02:38 <@Namegduf> Like I said, you could hack it up with tmpfs or something else that is RAM-backed and expands on its own.
02:38 <@Namegduf> It's just not been done that way by stuff I've seen that does it, usually because you need to know your usage scenario to set it up to be useful anyway.
02:38 < RichardBarrell> Yes, I do have one microscopic nanosmidgen of an idea of the magnitude of the difference between RAM access times and the time it takes to decode a 4k page of LZMA.
02:38 <@Namegduf> Or just because it's harder.
02:39 < RichardBarrell> Maybe I'm just cynical but I usually bet on "because it's harder". :P
02:39 RichardBarrell [richard@Nightstar-3b2c2db2.bethere.co.uk] has quit [Connection closed]
02:49 RichardBarrell [richard@Nightstar-3b2c2db2.bethere.co.uk] has joined #code
03:37 <@McMartin> Man
03:37 <@McMartin> OK
03:37 <@McMartin> Treating Scheme as if it were Haskell?
03:38 <@McMartin> Bad idea.
03:38 <@McMartin> Holy crap.
03:38 * McMartin rewrites the file code to be imperative, which simplifies the list and string processing considerably
03:38 < RichardBarrell> Treating Haskell as if it were Scheme works right up until you get your first type error.
03:38 <@McMartin> Not when your program processes files. =D
03:38 < RichardBarrell> Treating Scheme as if it were Haskell works right up until you get your first type error too.
03:39 < RichardBarrell> In the former case, you swear at GHC.
03:39 <@McMartin> No, here I'm trying to pull data out of a file and process it
03:39 <@McMartin> In Haskell, you write it as reading the whole file into a list, and then doing list wackiness with it.
03:39 <@McMartin> In Scheme, strings are Different, and trying to treat them like lists eats a fuckton of memory, looks ugly as sin, and tends to make you go quadratic.
03:39 < RichardBarrell> Does Scheme not have lazy ByteStrings? :D
03:40 <@McMartin> Not that I've found.
03:40 <@McMartin> It *does*, however, have "input ports backed by strings"
03:40 <@McMartin> So if I write this as a peek-based stream transformer, like I should have in the first place, this code will get much cleaner.
03:41 < RichardBarrell> Y'know, the IO problem didn't even occur to me.
03:41 < RichardBarrell> The naive [Char] as String type in Haskell is practically deprecated, at least for writing any kind of systems software.
03:42 <@McMartin> Right, but this is a lexer, so it's fine, and it compiles to "read chars as you need them"
03:42 <@McMartin> And taking the tail of the stream tosses them away
03:42 <@McMartin> As opposed to allocating a new string, every single time. -_-
03:42 <@McMartin> (Scheme strings look more like arrays.)
03:42 < RichardBarrell> Also I don't know the implementation details of Scheme implementations' file IO and strings, whereas I know much of GHC's offhand. ?_?
03:43 <@McMartin> The other issue with Scheme is that it doesn't have "where" so you have to use letrec for everything
03:43 < RichardBarrell> I was thinking of the other issue with porting Haskell programs - where you go inadvertently become insane, implement Monad transformers, and then find that the calling conventions for the are so intricate
03:45 < RichardBarrell> that they're only usable with the aid of a type-checker. Or parser combinators or something.
03:45 <@McMartin> http://pastebin.starforge.co.uk/492
03:45 < RichardBarrell> s/go //, s/for the /for them /, it's late and I have been shouting at Gnome.
03:46 < RichardBarrell> I assume that you're using PLT Scheme, right?
03:46 <@McMartin> In Haskell, I'd just throw partition at my input list of characters.
03:46 Eri [Eri@Nightstar-3e5deec3.gv.shawcable.net] has quit [Ping timeout: 121 seconds]
03:46 <@McMartin> Gambit-C, actually, so R5RS with some extensions.
03:46 < RichardBarrell> By "assume" you can safely assume that I mean "guess" ;)
03:46 * McMartin is sacrificing advancedness for a superior FFI, basically.
03:47 < RichardBarrell> FFI is more important than it looks. I bet you that 90% of the reason why Java programs always had sinful ugly UIs was because Java's FFI was (still is?) so painful that it demoralised the programmers working on the platform integration parts.
03:49 <@McMartin> Yeah, so, Gambit's FFI is "there is a (c-lambda (args) ".h file snippet") construct, and for simple data types it Just Works"
03:49 <@McMartin> Which is getting close to Lua levels of embeddability
03:49 < RichardBarrell> What does Gambit-C's string type look like under the hood? The strict string types in Haskell are (pointer to buffer, start index, stop index) so that you can take substrings in constant time. And the lazy ones are just lazy lists of strict strings.
03:50 <@McMartin> I'm not sure
03:50 < RichardBarrell> s/does/do/, man I'm stupid right now.
03:50 <@McMartin> I *suspect* it is a vector of char, and I suspect that Gambit chars are UCS-4.
03:51 <@McMartin> I'm still refamiliarizing myself with the core language and then Gambit's extensions to make it a usable applications language.
03:51 < RichardBarrell> By "the string types in Haskell" I mean Data.ByteString and Data.Text, both of which are libraries outside of the language definition, though I think that they ship with GHC by default now.
03:51 <@McMartin> Right
03:51 < RichardBarrell> GHC's FFI is: foreign import ccall unsafe sin :: CDouble -> CDouble -- Just Works(TM)
03:52 < RichardBarrell> and is really fast too
03:52 <@McMartin> Right, but then you have to make sure you IO Monad up exactly the right things.
03:52 <@McMartin> I've used Haskell bindings to SDL, it was mostly pretty slick
03:52 < RichardBarrell> but the other direction (calling Haskell from C) and registering callbacks is much slower and far more laborious.
03:52 <@McMartin> But a few thing required IO that really shouldn't have, like mapping RGB triplets to 32-bit color codes.
03:52 <@McMartin> *things
03:53 < RichardBarrell> That's irritating. unsafePerformIO is justified in that case.
03:53 <@McMartin> To be fair, you're mapping it through a mutable object's fields
03:53 <@McMartin> However, I - having hacked SDL *deeply* in C - happen to know that it's not a mutation op
03:54 <@McMartin> My "pick up Scheme" project is going to be a lexer and maybe a parser for the toy language in one of my compiler textbooks.
03:54 <@McMartin> I wrote a terrible lexer already for it in an evening, so now I'm going to try to do it right.
03:55 < RichardBarrell> Haskellers are *supposed* to be cool with internal mutation so long as the side-effectful bit isn't externally-visible, hence Control.Monad.ST - you can write single-threaded computations that use mutable arrays and references that don't need the IO monad - but they can't perform any IO and have to be deterministic.
03:56 <@McMartin> Yeah
03:56 < RichardBarrell> Arbitrarily large quantities of unsafePerformIO are allowed provided that you come up with a proof in pencil and paper that you haven't violated referential transparency and the monotonicity assumption. ^_^
03:56 <@McMartin> I think they confused "this reads stuff out of a mutable field" - which is what the docs say it does - with "this value can change without warning" - which it can't do. The act of mutating it is the thing that needs IO, and if you do that I believe it all works out. ^_^
03:57 <@McMartin> It's otherwise pretty great except for the part where being lazy means it crams all its activity into the VBLANK period, which doesn't end well.
03:57 * RichardBarrell snerks.
03:57 * McMartin wrote a cellular automaton system in it, though. Nice and compact.
03:58 < RichardBarrell> There is a generic combinator somewhere for forcing entire data structures to normal form.
03:58 * McMartin nods
03:58 < RichardBarrell> I still haven't worked out yet whether I find the laziness helpful overall or not.
03:59 <@McMartin> Scheme is actually Extremely Imperative.
04:00 < RichardBarrell> Apparently SPJ's take on it is that the laziness is painful but necessary: it's the thing that enforces that people always mark side-effectful subroutines as being in the IO Monad.
04:00 < RichardBarrell> That might not be his *actual* opinion though, it's just something he wrote on a presentation slide once. ;)
04:01 < RichardBarrell> ...and that's fine too. Three cheers for wishing that emacs ran scheme instead of elisp. :)
04:01 <@McMartin> Heh
04:01 <@McMartin> I think I can actually make gambit or racket do that
04:01 < RichardBarrell> TBH though, everything global all the time actually seems like the right model for an interactive editor where you want to hook *everything* and you want to do that *all the time*.
04:02 < RichardBarrell> If Emacs ran Scheme then people would start using closures to hide private variables and pretty soon it'd be a total ballache to get anything done in it. :/
04:02 <@McMartin> heh
04:02 <@McMartin> I still haven't come up with a good way to do module control.
04:03 < RichardBarrell> How do you mean?
04:03 < RichardBarrell> What do you mean by "module control"?
04:04 <@McMartin> If I want to define, say, six functions, but make four of them private, but shared between the other two.
04:05 <@McMartin> Right now I'm just giving them long names to be poor man's namespaces.
04:05 <@McMartin> Gambit apparently has namespaces, but the docs just say "TODO"
04:08 Alek [omegaboot@Nightstar-10752b3e.il.comcast.net] has quit [[NS] Quit: beroot and bed]
04:10 < RichardBarrell> You could hide all six functions in a closure and then expose two of them. Seems icky though.
04:11 Eri [Eri@Nightstar-3e5deec3.gv.shawcable.net] has joined #code
04:12 < RichardBarrell> For open-source software, IMHO the whole concept of a "private" function or variable as meaning anything other than "you can ignore this detail if you want to" strikes me as insane. The long-names solution is kindest to people who might have to harm your code in unusual ways in future.
04:16 Alek [omegaboot@Nightstar-10752b3e.il.comcast.net] has joined #code
04:19 RichardBarrell [richard@Nightstar-3b2c2db2.bethere.co.uk] has quit [Connection closed]
04:32 <@McMartin> Well. When you use it to guard interfaces, it becomes "you don't have to compensate for this detail being there" and it's necessary to make the ABIs work.
04:38 SmithKurosaki [smith@Nightstar-e26015c4.home1.cgocable.net] has joined #code
05:27 SmithKurosaki [smith@Nightstar-e26015c4.home1.cgocable.net] has quit [Ping timeout: 121 seconds]
05:34 * McMartin sighs at scheme
05:34 <@McMartin> "The procedure no-such-file-or-directory-exception? returns #t when obj is a no-such-file-or-directory-exception object and #f otherwise."
05:36 < Tamber> ... o.0
05:39 < celticminstrel> What's the issue there, the name length?
05:39 <@McMartin> Yeah, also the circularity of the docs.
05:43 * McMartin also eyes the new version of Aquamacs.
05:43 <@McMartin> Yes. An OSX version of Emacs seriously needs visual-basic-mode
05:43 <@McMartin> The fuck, guys.
--- Log closed Thu Nov 17 05:54:29 2011
--- Log opened Thu Nov 17 05:54:37 2011
05:54 TheWatcher[zZzZ] [chris@Nightstar-3762b576.co.uk] has joined #code
05:54 Irssi: #code: Total of 27 nicks [10 ops, 0 halfops, 0 voices, 17 normal]
05:55 Irssi: Join to #code was synced in 48 secs
06:06 Eri [Eri@Nightstar-3e5deec3.gv.shawcable.net] has quit [[NS] Quit: Leaving]
06:08 ErikMesoy|sleep is now known as ErikMesoy
06:17 Eri [Eri@Nightstar-3e5deec3.gv.shawcable.net] has joined #code
06:18 < ToxicFrog> ...is it just me or does RichardBarrell kind of miss the point of private?
06:21 < Tamber> A little, yes.
06:22 Derakon is now known as Derakon[AFK]
06:43 celticminstrel [celticminst@Nightstar-5d22ab1d.cable.rogers.com] has quit [[NS] Quit: And lo! The computer falls into a deep sleep, to awake again some other day!]
06:57 You're now known as TheWatcher
07:18 < Tamber> Morning, Watcher.
07:32 Kindamoody is now known as Kindamoody|out
07:49 * TheWatcher eyes the slowly ascending orb of the wretched daystar
07:49 < TheWatcher> Apparently, yes.
07:52 You're now known as TheWatcher[afk]
09:01 Rhamphoryncus [rhamph@Nightstar-14eb6405.abhsia.telus.net] has joined #code
09:33 You're now known as TheWatcher
10:00 Rhamphoryncus [rhamph@Nightstar-14eb6405.abhsia.telus.net] has quit [Ping timeout: 121 seconds]
10:00 Rhamphoryncus_ [rhamph@Nightstar-14eb6405.abhsia.telus.net] has joined #code
10:04 Rhamphoryncus_ [rhamph@Nightstar-14eb6405.abhsia.telus.net] has quit [Ping timeout: 121 seconds]
11:00 You're now known as TheWatcher[d00m]
11:19 Attilla [Obsolete@Nightstar-f29f718d.cable.virginmedia.com] has joined #code
11:50 You're now known as TheWatcher
12:00 gnolam [lenin@Nightstar-202a5047.priv.bahnhof.se] has joined #code
12:21 Rhamphoryncus_ [rhamph@Nightstar-14eb6405.abhsia.telus.net] has joined #code
13:08 celticminstrel [celticminst@Nightstar-5d22ab1d.cable.rogers.com] has joined #code
13:22 < gnolam> So... has anyone got a seppuku knife?
13:22 <@McMartin> Not... handy?
13:23 < gnolam> It's apparently not just ordinary documentation. The contract apparently calls for an "extensive user manual".
13:23 < TheWatcher> You poor bastard
13:24 * Tamber read that as "expensive user manual"
13:27 < TheWatcher> arghfuck, no wonder this isn't working, I'm missing half the sodding data in the hash.
13:27 < gnolam> Heck, I'm still a bit confused as to who the users even are.
13:27 < TheWatcher> i find that assuming "drooling morons" is generally a safe bet~
13:28 < gnolam> Yeah, that's just the thing. Do I assume "drooling morons" or do I assume "eats Monte Carlo simulations for breakfast"?
13:28 < Tamber> You mean the two are mutually exclusive?
13:29 < Tamber> How many PhD-level folk do you know who regularly forget how things like pens, or doors, work? ;)
13:30 < TheWatcher> Well, you could write two manuals!
13:31 < Tamber> hehe
13:31 < gnolam> If you include PhD students, I know a few who occasionally forget how /their legs/ work.
13:31 < Tamber> Hah!
13:31 <@McMartin> QWOP is hard, man.
14:28 < Vornotron> Tamber: all of them
14:30 < Tamber> :p
14:31 * kwsn yawns
14:46 kazrikna [kazrikna@Nightstar-843a343b.arkaic.com] has quit [Ping timeout: 121 seconds]
14:48 kazrikna [kazrikna@Nightstar-843a343b.arkaic.com] has joined #code
14:49 sshine [simon@Nightstar-883ecc1d.brahmaserver.dk] has quit [Client closed the connection]
14:49 sshine [simon@Nightstar-883ecc1d.brahmaserver.dk] has joined #code
15:16 You're now known as TheWatcher[afk]
15:17 <@Tarinaky> In Java, how do I access the last element of a TreeMap?
15:22 < Stalker> Click on it.
15:33 Stalker [Z@Nightstar-3602cf5a.cust.comxnet.dk] has quit [[NS] Quit: I really love that hat.]
15:37 Kindamoody|out is now known as Kindamoody
15:42 celticminstrel is now known as celmin|away
16:13 Stalker [Z@Nightstar-5aa18eaf.balk.dk] has joined #code
16:30 RichardBarrell [richard@Nightstar-3b2c2db2.bethere.co.uk] has joined #code
17:19 < ToxicFrog> Tarinaky: http://download.oracle.com/javase/1.5.0/docs/api/java/util/SortedMap.html
17:34 Derakon [chriswei@Nightstar-f68d7eb4.ca.comcast.net] has joined #code
17:34 < Derakon> So confused
17:35 < AD[Shell]> Yr confus?
17:35 < Derakon> I have a function, changeHistScale. It accepts a min and max value, which are the black and white points respectively for a B&W image.
17:35 < Derakon> So say I have an image, an array of pixel data.
17:35 < Derakon> image.min() returns 97, image.max() returns 104.
17:35 < Derakon> changeHistScale(97, 104) works properly.
17:35 < Derakon> changeHistScale(image.min(), image.max()) does not.
17:35 < AD[Shell]> Try intermediate variables?
17:36 < Derakon> Why the fuck should that make a difference?
17:36 < AD[Shell]> I don't know.
17:36 < Derakon> Hell, the image data isn't even being modified here!
17:36 < Derakon> Yeah, see, I'm not looking for a workaround so much as I am an explanation.
17:36 < ErikMesoy> Try making one of the variables fixed to debug the other?
17:36 < Derakon> What variables?
17:36 < AD[Shell]> Also, check the function's interior variables for storing the inputs.
17:36 < ErikMesoy> call with (97, image.max())
17:37 < Derakon> The function should see no difference between the two, which is why I'm confused.
17:37 < Derakon> image.min() / image.max() should be evaluated before reaching the function.
17:37 < AD[Shell]> But is it?
17:38 < Derakon> (FWIW calling with (97, image.max()) works, but (image.min(), 104) doesn't)
17:38 < ErikMesoy> Debugging progress! :p
17:38 < AD[Shell]> In what way doesn't it work?
17:38 < Derakon> AD: well, I'm printing out the passed-in values as soon as the function starts, and they're the same regardless.
17:38 < Derakon> In the way that the image display isn't properly rescaled.
17:38 < AD[Shell]> How is it scaled?
17:38 < Derakon> As I said earlier, these two values set the black and white points for the image.
17:39 < Derakon> So anything below 97 should render as black, anything above 104 as white.
17:39 < AD[Shell]> Okay.
17:39 < Derakon> I'm really looking for an explanation for how the hell these two different ways of calling the function could possibly result in different behaviors though.
17:39 < AD[Shell]> Magic.
17:40 < Derakon> Fuck magic sideways with a backhoe and no lube.
17:40 < AD[Shell]> Magic enjoys it and asks for more! :P
17:40 < ErikMesoy> Try an oversized backhoe.
17:41 < AD[Shell]> Heh.
17:43 < Derakon> After some further research: the type of image.min() is numpy.uint16, not int. That's the problem.
17:43 < Derakon> So mystery solved.
17:43 < Derakon> Well, except that numpy.uint16 should work, but at least I have an explanation for how the differing behaviors are possible.
17:46 < Derakon> ...ah, the min value gets negated as part of a calculation. That'd do it.
17:49 Kindamoody is now known as Kindamoody[zZz]
17:56 * Derakon mutters at Windows' SMB server, which appears to be crash-prone.
17:57 < Derakon> And I have no idea how to fix it, because every goddamned Google result is about using Linux.
17:57 < Derakon> (Or, if you exclude Linux, OpenBSD)
18:14 You're now known as TheWatcher
20:02 < ToxicFrog> To be fair, while I've had more problems with SMB on Windows than I have time to recount, I've never observed it to crash outright.
20:15 < Derakon> Yeah, turns out another Windows machine was able to connect to it fine, so there was something between my laptop and the target machine that was breaking down.
20:21 < Derakon> The weird thing being that rebooting the target machine fixes the problem.
20:21 < Derakon> Which is why I'm assuming the fault lies with Windows. Though I suppose alternately there could be some extant connection on my laptop that is only dispelled by the target machine shutting down.
20:45 Vornotron [vorn@ServerAdministrator.Nightstar.Net] has quit [Ping timeout: 121 seconds]
20:57 < Rhamphoryncus_> Derakon: care to clarify that numpy crack so I can know to stay away from it in the future?
20:57 Vornicus [vorn@ServerAdministrator.Nightstar.Net] has joined #code
20:57 mode/#code [+qo Vornicus Vornicus] by ChanServ
21:07 < RichardBarrell> Rhamphoryncus_: Assuming that I parsed the backscroll correctly, Derakon had some code that attempted to negate a numpy.uint16, which isn't going to work out so great seeing as that's a 16-bit unsigned integer.
21:09 < RichardBarrell> Rhamphoryncus_: numpy is, IMHO, quite a nice library (if you don't mind how badly some things are named) whose main flaw is being difficult to install from source because its dependencies (BLAS, LAPACK and I think a few other friends) are a little bit heinous to compile and install.
21:09 < Rhamphoryncus_> oh right, I wasn't thinking through the full consequences of python code leaking a uint16 through a calculation
21:10 < Rhamphoryncus_> liskov substitution ftw :P
21:14 < RichardBarrell> uint16 isn't a subclass of int though, it just happens to implement all of the same operators. I'm not sure that Liskov quite applies to ducks.
21:16 < Rhamphoryncus_> details interfere with my rant.
21:17 * Derakon reads up.
21:17 < Derakon> Yeah, it was just a matter of me not realizing that calling the min() and max() methods on a Numpy array could return anything other than a standard int/float. Which is silly of me.
21:18 < Rhamphoryncus_> Derakon: begs the question of why uint16 exists
21:18 < Derakon> For when you need to store positive integers and don't need more than 2 bytes per.
21:18 < Derakon> We use that all the time for our cameras, for example.
21:19 < Rhamphoryncus_> a uint16 object *in python* will use tons more than 2 bytes
21:19 < Derakon> That same uint16 in a Numpy array won't though.
21:20 < Derakon> And you really want to be able to extract and insert values into those arrays without worrying about type transformations.
21:22 < Rhamphoryncus_> a uint16 won't exist in a numpy array. A numpy array would store it raw in the array, then copy it into a proper object when you access it
21:22 < Rhamphoryncus_> box/unbox
21:22 < Derakon> Define "exist", then. It's 2 bytes used to represent an unsigned integer.
21:23 < Derakon> That sounds like a uint16 to me.
21:23 < Rhamphoryncus_> C uint16 vs python uint16
21:24 < Rhamphoryncus_> python uint16 is boxed and has a refcount and type pointer, as well as the 2 byte payload
21:24 < Rhamphoryncus_> C uint16 is unboxed and just has the 2 byte payload
21:26 < Rhamphoryncus_> the python form has alignment requirements that would, on a 64-bit computer, pad the whole thing out to 24 bytes, nevermind if there's allocation overhead
21:26 < Derakon> So...if I understand you correctly, when I do "foo = someNumpyArray[i]; foo *= -1", you want a different result from if I just did "someNumpyArray[i] * -1"?
21:26 < Derakon> Since you're objecting to the existence of a Python uint16?
21:26 < Derakon> Or is it that you think I should get more bytes allocated to storing the value, so it'd be an unsigned 32- or 64-bit value?
21:26 < Derakon> In which case rolling over to 0 wouldn't work as expected.
21:27 < Derakon> I recognize that extracting a value from a Numpy array and storing it in a Python object is going to create a lot of extra baggage.
21:27 < Derakon> But you still need the type to behave itself.
21:28 < RichardBarrell> but ideally you only have some small constant number of uint16s in local variables at any one time
21:28 < Rhamphoryncus_> The semantic difference very likely is the point. They assume extracting your unboxed uint16 should give you the semantics of a C uint16 (with all the quirks and foibles), rather than simply being an int
21:28 < RichardBarrell> and the vast bulk of the uint16s that you're making use of are all in big unboxed arrays where the per-object overhead amortises down so that they really do take only 2+epsilon bytes apiece.
21:29 < RichardBarrell> (not disagreeing, belabouring the point)
21:29 < Rhamphoryncus_> Performance-wise there is absolutely no point having a python uint16 type rather than an int
21:29 < Derakon> Understood.
21:29 < Derakon> I'm not trying to argue the performance benefits of the uint16 here; in fact, I manually upcast to float when I realized what was going on.
21:30 < Derakon> I'm just saying that we can't go around changing types willy-nilly because people may well be depending on the behaviors of those types.
21:30 < RichardBarrell> Rhamphoryncus_: A primitive-type-inferring JIT could make that not so true, if one were available for Python. :)
21:31 < Rhamphoryncus_> It seems like a design decision rooted in "Well, I'm a C programmer, but I want *some* of the benefits of using Python"
21:31 ErikMesoy is now known as ErikMesoy|sleep
21:31 < Rhamphoryncus_> You want to avoid *some* bugs, but not *too many* bugs.
21:32 < Rhamphoryncus_> (not you personally.)
21:32 < Rhamphoryncus_> RichardBarrell: it'd be a sad day for python :P
21:32 < Rhamphoryncus_> Actually, a decent JIT would get the performance benefit with normal int
21:33 * jerith jumps in with no context at all.
21:33 <@jerith> RichardBarrell: pypy?
21:34 < RichardBarrell> jerith: ideally some day. That's what I had in mind.
21:34 < Rhamphoryncus_> jerith: pypy's jit is decent. You can get improved performance without telling it you want it to fail silently if you screw up slightly
21:34 < Rhamphoryncus_> Excluding rpython of course
21:35 < RichardBarrell> Rhamphoryncus_: why would a JIT that transparently makes some (but not all) numeric code faster without affecting its semantics be a sad day? oO
21:35 * jerith occasionally has coffee with one of the pypy devs, so has a curiously shaped knowledge of it.
21:36 < Rhamphoryncus_> RichardBarrell: the uint16 type has a semantic difference. A primitive JIT that optimized only it (which is fairly easy) would encourage people to use that semantic difference.
21:36 celmin|away is now known as celticminstrel
21:36 Vornicus is now known as VVash
21:36 < RichardBarrell> Nobody would start with uint16. They'd start with int and float.
21:37 < Rhamphoryncus_> I said there was no performance benefit to having a python uint16 type. You said a JIT could make a difference.
21:41 < RichardBarrell> I'm only disputing the idea that it'd be a "sad day".
21:42 < RichardBarrell> On a related note, you can take rich numerical types away from me when you pry them from my cold, dead hands.
21:43 < RichardBarrell> That includes the narrow ones too, not just the wide ones like rationals and complex reals.
21:43 < Rhamphoryncus_> Currently python code using that uint16 type is around 0.00001% of all python code. With a significant performance benefit that only applied to that type (and the related C-like types) that'd jump way up, probably around 50%, and inflict a ton of bugs of the sort Derakon hit.
21:43 < Derakon> Where do you get that .00001%?
21:44 < Rhamphoryncus_> most programs don't use numpy
21:44 < Derakon> Ah, your posterior. :p
21:46
< Rhamphoryncus_>
It obviously has significant uses, but mostly in the scientific field
21:46
< Rhamphoryncus_>
Which is not the majority of python programs
21:46
< Derakon>
Heh.
21:46
< Derakon>
There's one hell of a lot of scientific Python programmers out there, buddy.
21:46
< RichardBarrell>
No, really, nobody would ever build that. That's insane. Anyone who *was* going to implement optimising numerical code down to primops on unboxed values would apply the same optimisation techniques to int and float first. Not sad.
21:46
< Derakon>
They probably don't outnumber the application developers, but basically anyone who wants to do scientific analysis prototyping uses either Matlab or Python.
21:47
< Rhamphoryncus_>
RichardBarrell: you said primitive, not me
21:47
< Rhamphoryncus_>
Maybe I misunderstood your intention due to the context
21:47
< RichardBarrell>
Rhamphoryncus_: by "primitive" I mean "unboxed value". GHC's terminology for it.
21:48
< Rhamphoryncus_>
.. heh
21:48
< Rhamphoryncus_>
yup, that gives totally different meaning to your statement :)
21:49
< RichardBarrell>
Oh I see the miscommunication. I wrote "primitive-type-inferring" meaning ((primitive type) inferring) and accidentally conveyed (primitive (type inferring))
21:49
< RichardBarrell>
Anyway. uint16 arrays are a perf win anyway just for cache pressure. ;)
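[The cache-pressure point can be illustrated with the stdlib array module; the element count is an invented workload and the itemsize values assume a typical modern platform:]

```python
from array import array

n = 1_000_000                # assumed element count, for illustration
a16 = array('H', [0]) * n    # 'H' = unsigned 16-bit elements
a64 = array('q', [0]) * n    # 'q' = signed 64-bit elements

# Four times fewer bytes per element means four times more elements
# per cache line when streaming over the array.
assert a16.itemsize == 2 and a64.itemsize == 8
assert len(a16) * a16.itemsize == 2_000_000
assert len(a64) * a64.itemsize == 8_000_000
```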
21:50
< Rhamphoryncus_>
boo :)
21:50
< Rhamphoryncus_>
wait, you said array
21:50
< Rhamphoryncus_>
I have no argument with an unpacked (primitive) array. That's all good stuff.
21:51
< RichardBarrell>
and you need numeric types that match the types stored in your unboxed arrays in order to avoid having insane behaviour when you peek and poke them. :)
21:51
< Rhamphoryncus_>
I don't see how
21:52
< Rhamphoryncus_>
You could even have it quietly inflict wrap-around when storing an oversized int if you really wanted to. You don't need it to inflict wrap-around at every step of the operation.
21:53
< RichardBarrell>
a, b = ra[0, 1]; c = a + b; ra[2] = c; assert(ra[2] == c); # that's not a very nice assertion error; why can I not store things losslessly into the same array that I just took them out of!?
21:54
< RichardBarrell>
Er
21:54
< RichardBarrell>
a, b = ra[0], ra[1]; # forgive us our syntax errors
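[The failing round trip can be reproduced with the stdlib array module standing in for the unboxed uint16 array ra; the element values here are invented. CPython's array type range-checks oversized stores rather than silently wrapping:]

```python
from array import array

ra = array('H', [40000, 40000, 0])  # 'H' = unsigned 16-bit

a, b = ra[0], ra[1]   # extraction yields plain Python ints
c = a + b             # 80000 -- Python ints don't wrap at 2**16

# CPython's array module refuses the oversized store outright:
try:
    ra[2] = c
except OverflowError:
    pass              # 80000 doesn't fit in 16 bits

# A silently wrapping store would instead break the round trip:
ra[2] = c & 0xFFFF
assert ra[2] == 14464       # 80000 % 2**16
assert ra[2] != c           # the assertion from the log fails
```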
21:54
< Rhamphoryncus_>
I would personally prefer to have a range assertion when you store back into the array, which'd make the problem all quite obvious.
21:56
< RichardBarrell>
So if I actually *desire* wraparound at 2^16 I'm going to have to put (& 0xFFFF) on every store?
21:57
< Rhamphoryncus_>
sure, unless you're doing it on every store and wanted to make it the default :P
21:59
< RichardBarrell>
So now I can't implement, say, CRC32 in pure Python because you won't let me have a uint32_t. Or I can, but it's such a faff that I end up writing a small C program instead and loading it with ctypes.
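[For concreteness, a sketch of the kind of code meant here: a bitwise MSB-first CRC-32 (the CRC-32/BZIP2 parameterisation, not zlib's reflected variant) in pure Python. Every left shift has to be masked back into 32 bits by hand, which is exactly the faff being complained about:]

```python
def crc32_bzip2(data: bytes) -> int:
    # MSB-first CRC-32 with polynomial 0x04C11DB7 (CRC-32/BZIP2).
    # Python ints are arbitrary-precision, so each left shift must be
    # masked with 0xFFFFFFFF by hand to emulate a uint32_t.
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte << 24
        for _ in range(8):
            if crc & 0x80000000:
                crc = ((crc << 1) ^ 0x04C11DB7) & 0xFFFFFFFF
            else:
                crc = (crc << 1) & 0xFFFFFFFF
    return crc ^ 0xFFFFFFFF

# Standard check value for the CRC-32/BZIP2 parameterisation:
assert crc32_bzip2(b"123456789") == 0xFC891918
```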
21:59
< RichardBarrell>
Just... give me lots of numeric types, you Psyco'pathic lunatic! ;)
21:59
< Rhamphoryncus_>
heh
22:00 * Rhamphoryncus_ ponders
22:02 Taki^ [Taki@Nightstar-98d9afe5.consolidated.net] has quit [Client closed the connection]
22:04
< Rhamphoryncus_>
My language will require explicit (& 0xFFFF). I do appreciate the use case of implementing CRC32, but the issue is how to support it without creating worse problems.
22:05
< Rhamphoryncus_>
if uint16 were in the stdlib it would be a worse problem ;)
22:08
< Derakon>
So basically, you consider numbers behaving as expected in a mathematical sense to be more important than numbers behaving as expected in a computer hardware sense.
22:12
< Rhamphoryncus_>
are you saying CRC32 isn't a mathematical usage?
22:13
< Rhamphoryncus_>
They're both mathematical to me, both important, but I have to pick one
22:13
< Derakon>
Well, CS is applied math.
22:22
< Derakon>
Blah, this is not the greatest setup we have here.
22:23
< Derakon>
One computer which both me and my coworker need interactive access to.
22:23
< Derakon>
So if I'm remote desktopping onto it, then he can't use it directly, and vice versa.
22:32
< TheWatcher>
There is a way around that (can't remember it off the top of my head, but I could probably find it again), but it's probably... less than strictly adhering to microsoft's licensing terms
22:37 gnolam [lenin@Nightstar-202a5047.priv.bahnhof.se] has quit [[NS] Quit: Z?]
22:37 You're now known as TheWatcher[T-2]
22:41
< RichardBarrell>
Derakon: I dislike the assumption that "numbers in a mathematical sense" means the reals and nothing else.
22:41
< RichardBarrell>
It smacks of having never heard of any maths beyond high school trigonometry.
22:42
< RichardBarrell>
Discrete maths is maths too, you bigots! ;-;
22:43 You're now known as TheWatcher[zZzZ]
22:46
< Derakon>
RB: well, integer math is often useful, so we can allow for ints in addition to floats. :)
23:25 Syloqs-AFH [Syloq@NetworkAdministrator.Nightstar.Net] has quit [Ping timeout: 121 seconds]
23:31 Syloqs_AFH [Syloq@NetworkAdministrator.Nightstar.Net] has joined #code
23:33 Syloqs_AFH is now known as Syloqs-AFH
23:51 Derakon [chriswei@Nightstar-f68d7eb4.ca.comcast.net] has quit [[NS] Quit: leaving]
23:54
<@McMartin>
... somebody in this office has Nyancat as their ringtone.
23:56
<@McMartin>
Oh, cute
23:56
<@McMartin>
(define (curry f n)
23:56
<@McMartin>
(if (zero? n)
23:56
<@McMartin>
(f)
23:56
<@McMartin>
(lambda args
23:56
<@McMartin>
(curry (lambda rest
23:56
<@McMartin>
(apply f (append args rest)))
23:56
<@McMartin>
(- n (length args))))))
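[For readers who don't speak Scheme, a rough Python rendering of McMartin's curry; `n` counts the arguments `f` still expects, and the function names here are my own:]

```python
def curry(f, n):
    # Once all n arguments have been collected, call f with them.
    if n == 0:
        return f()
    # Otherwise accept any number of arguments and keep currying,
    # closing over the ones seen so far (Scheme's (append args rest)).
    def partial(*args):
        return curry(lambda *rest: f(*args, *rest), n - len(args))
    return partial

add3 = lambda a, b, c: a + b + c
assert curry(add3, 3)(1)(2)(3) == 6    # one argument at a time
assert curry(add3, 3)(1, 2)(3) == 6    # or in arbitrary groups
```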
--- Log closed Fri Nov 18 00:00:18 2011