code logs -> 2013 -> Sat, 22 Jun 2013< code.20130621.log - code.20130623.log >
--- Log opened Sat Jun 22 00:00:13 2013
00:18 Derakon[AFK] is now known as Derakon
01:04 Vornicus [vorn@ServerAdministrator.Nightstar.Net] has quit [[NS] Quit: Leaving]
01:14 himi [fow035@Nightstar-5d05bada.internode.on.net] has quit [Ping timeout: 121 seconds]
01:28 himi [fow035@Nightstar-5d05bada.internode.on.net] has joined #code
01:28 mode/#code [+o himi] by ChanServ
01:50 Typherix is now known as Typh|offline
02:05 BlackWidow [NSwebIRC@Nightstar-4d9dbf81.fios.verizon.net] has joined #code
02:06 BlackWidow [NSwebIRC@Nightstar-4d9dbf81.fios.verizon.net] has left #code [""]
02:08 Derakon is now known as Derakon[AFK]
02:20 ktemkin[awol] is now known as ktemkin
02:28 celticminstrel [celticminst@Nightstar-e83b3651.cable.rogers.com] has joined #code
02:28 mode/#code [+o celticminstrel] by ChanServ
02:41 RichyB [RichyB@D553D1.68E9F7.02BB7C.3AF784] has quit [[NS] Quit: Gone.]
02:44 RichyB [RichyB@D553D1.68E9F7.02BB7C.3AF784] has joined #code
02:51 Vorntastic [Vorn@Nightstar-8ff263a4.sub-70-211-5.myvzw.com] has joined #code
03:15 Kindamoody[zZz] is now known as Kindamoody
03:36 Vorntastic [Vorn@Nightstar-8ff263a4.sub-70-211-5.myvzw.com] has quit [Ping timeout: 121 seconds]
03:37 Vorntastic [Vorn@Nightstar-8ff263a4.sub-70-211-5.myvzw.com] has joined #code
03:37 himi [fow035@Nightstar-5d05bada.internode.on.net] has quit [Ping timeout: 121 seconds]
03:51 himi [fow035@Nightstar-5d05bada.internode.on.net] has joined #code
03:51 mode/#code [+o himi] by ChanServ
03:56 Vorntastic [Vorn@Nightstar-8ff263a4.sub-70-211-5.myvzw.com] has quit [Ping timeout: 121 seconds]
03:56 Vorntastic [Vorn@Nightstar-8ff263a4.sub-70-211-5.myvzw.com] has joined #code
03:57 Turaiel[MARC] is now known as Turaiel
04:15 Turaiel is now known as Turaiel[MARC]
04:17 Harlow [Harlow@Nightstar-fe8a1f12.il.comcast.net] has joined #code
04:20 Vornicus [vorn@ServerAdministrator.Nightstar.Net] has joined #code
04:20 mode/#code [+qo Vornicus Vornicus] by ChanServ
04:20 VirusJTG [VirusJTG@Nightstar-09c31e7a.sta.comporium.net] has quit [[NS] Quit: Program Shutting down]
04:20 Vorntastic [Vorn@Nightstar-8ff263a4.sub-70-211-5.myvzw.com] has quit [[NS] Quit: Bye]
04:21 cpux [cpux@Nightstar-98762b0f.dyn.optonline.net] has joined #code
04:21 mode/#code [+o cpux] by ChanServ
05:23 himi [fow035@Nightstar-5d05bada.internode.on.net] has quit [Ping timeout: 121 seconds]
05:27 celticminstrel [celticminst@Nightstar-e83b3651.cable.rogers.com] has quit [[NS] Quit: And lo! The computer falls into a deep sleep, to awake again some other day!]
05:36 himi [fow035@Nightstar-5d05bada.internode.on.net] has joined #code
05:36 mode/#code [+o himi] by ChanServ
05:56 himi [fow035@Nightstar-5d05bada.internode.on.net] has quit [Ping timeout: 121 seconds]
06:09 himi [fow035@Nightstar-5d05bada.internode.on.net] has joined #code
06:09 mode/#code [+o himi] by ChanServ
06:21 himi [fow035@Nightstar-5d05bada.internode.on.net] has quit [Ping timeout: 121 seconds]
06:34 himi [fow035@Nightstar-5d05bada.internode.on.net] has joined #code
06:34 mode/#code [+o himi] by ChanServ
06:42 Kindamoody is now known as Kindamoody|out
06:46 ErikMesoy|sleep is now known as ErikMesoy
08:48 Harlow [Harlow@Nightstar-fe8a1f12.il.comcast.net] has quit [[NS] Quit: Leaving]
08:56 Orth [orthianz@3CF3A5.E1CD01.B089B9.1E14D1] has quit [Ping timeout: 121 seconds]
09:13 You're now known as TheWatcher
09:13 AverageJoe [evil1@Nightstar-4b668a07.ph.cox.net] has joined #code
09:41 himi [fow035@Nightstar-5d05bada.internode.on.net] has quit [Ping timeout: 121 seconds]
09:54 himi [fow035@Nightstar-5d05bada.internode.on.net] has joined #code
09:55 mode/#code [+o himi] by ChanServ
10:02 Orthia [orthianz@3CF3A5.E1CD01.B089B9.1E14D1] has joined #code
10:02 mode/#code [+o Orthia] by ChanServ
10:05 AverageJoe [evil1@Nightstar-4b668a07.ph.cox.net] has quit [[NS] Quit: Leaving]
11:11 * McMartin is beyond hope
11:11 * McMartin is writing a custom allocator.
11:43 Orthia [orthianz@3CF3A5.E1CD01.B089B9.1E14D1] has quit [Ping timeout: 121 seconds]
11:46 Orthia [orthianz@3CF3A5.E1CD01.B089B9.1E14D1] has joined #code
11:46 mode/#code [+o Orthia] by ChanServ
12:03 <@Azash> McMartin: Oh dear
12:03 <@Azash> Something to malloc large sets of memory and then hand those further to whatever uses the engine, or?
12:31 <@froztbyte> VMM allllllllllll the things
12:45 <~Vornicus> That /is/ beyond hope
12:47 <~Vornicus> though this reminds me, I have a task I need to figure out how to perform, regarding templates: I have several map types, and an operation that's common to all of them, but isn't in the standard library for some reason: I need to merge two maps with disjoint key sets.
12:47 <~Vornicus> (and in the process create a new map)
13:38 <&ToxicFrog> ...what language is this?
13:40 <~Vornicus> C++
13:41 < ktemkin> Vornicus: You just want to create a new map that has all the elements of two maps?
13:42 <~Vornicus> yepperoni. I have several map types I need to do this with, which is why I want to only write the thing once
13:45 < ktemkin> When I first read that, it looked as though you were using STL maps with different typeargs.
13:45 <&ToxicFrog> My std::map is a bit rusty, but don't you just end up with something like for (auto it = lhs.cbegin(); it != lhs.cend(); ++it) { rhs[it->first] = it->second; }
13:45 <&ToxicFrog> Where lhs and rhs are std::maps or subclasses thereof.
13:45 < ktemkin> I'd assume if you were using std::map, you'd just create a new map and then insert() both of the maps that you want to merge.
13:46 < ktemkin> newMap.insert(firstMap.begin(), firstMap.end());
13:46 < ktemkin> Repeat for the second map.
13:47 < ktemkin> But it's been a very long time, so I may be mistaken.
13:47 <&ToxicFrog> Oh, that's even nicer
13:50 < ktemkin> Assuming they're disjoint, that should work. If they're not, insert won't overwrite elements, so the first insert would get priority.
13:53 <~Vornicus> Right, so the question really is: since this is a common task and needs to happen on many different map types, I want to put it into a template, and that's the part I'm not familiar with
13:55 <&ToxicFrog> I don't think you need to template it if all of the map types are subclasses of std::map
13:56 <&ToxicFrog> Unless you want additional constraints, e.g. both arguments are the same map type and it always returns the same type
13:57 <~Vornicus> That's the thing, it's basically required.
13:57 <~Vornicus> This isn't Java, there isn't an Object to fall back on
13:57 <&ToxicFrog> In which case I think something like this: template<class MapType> MapType merge_maps(const MapType & lhs, const MapType & rhs) { ...create, populate and return a new MapType... }
13:58 <&ToxicFrog> That will be a compile-time error if you ever try calling it with two values that aren't the same type, or that don't support the right methods.
14:04 < ktemkin> Depending on the size of those maps, you may not want to return-by-value.
14:05 <&ToxicFrog> I was sloppy with the return type~
14:07 < ktemkin> ... actually, I'd have to weigh that in my head for a bit.
14:09 < ktemkin> If you're just creating a map and then inserting into it twice, depending on your compiler, you likely wind up with return-value optimizations.
14:13 <~Vornicus> I kind of need it.
14:14 gnolam [lenin@Nightstar-b2aa51c5.cust.bredbandsbolaget.se] has joined #code
14:14 mode/#code [+o gnolam] by ChanServ
14:24 * Vornicus pokes at his code. Hasn't quite gotten to the dictionary merge part yet, but seems to have forgotten where the hell he was.
14:24 <&ToxicFrog> Remind me, what are you writing and why?
14:26 <~Vornicus> I'm writing a thing that optimizes resource production in a space 4X. I'm writing it in C++ because there's lots and lots and lots of processing to do, and Python turned out to be waaaaay too slow.
14:28 <~Vornicus> Der suggested I could use various genetic algorithms but I couldn't find one that actually bothered working for the Extremely Discrete stuff I was working on.
14:28 <&ToxicFrog> Doesn't Python have a looks-like-Python, compiles-to-C sublanguage? Cython?
14:30 < ktemkin> I was actually going to suggest that you profile your Python part and only port over the bottleneck
14:30 <~Vornicus> It does, I guess. At some point though I thought to myself "this is a good opportunity to remember how to use C++"
14:30 <~Vornicus> ktemkin: the difficulty is actually that about 80% of the code is the bottleneck. It's less that any particular part is slow, it's that there's lots and lots and lots going on.
14:33 < ktemkin> How slow is slow, in terms of your C code?
14:33 < ktemkin> er
14:33 < ktemkin> with regard to Python code
14:34 < ktemkin> To disambigufy my fubar'd sentence: "How slow do you mean when you say your Python code is slow?"
14:37 <~Vornicus> It takes over four hours to run on my current data set; I've never seen it finish, and I don't know how long it will take to do that. I do know that it needs to process (though not, by a long shot, store) several billion records.
14:38 himi [fow035@Nightstar-5d05bada.internode.on.net] has quit [Ping timeout: 121 seconds]
14:39 < ktemkin> My point is that you're only going to garner an approximately linear speedup by switching languages; if your algorithm was intractable in Python, it'll be intractable in C++.
14:39 <~Vornicus> Sure, and that's going to be 50-fold.
14:41 < ktemkin> Have you tried any Python implementations other than CPython?
14:41 <~Vornicus> Yes. The numbers above are from PyPy, which is about 5x faster than CPython.
14:41 < Syka> for some reason, my mind is reading pypy as 'pippy'
14:42 < ktemkin> Cython-generated C is reportedly much faster than PyPy, though it has its own downsides.
14:43 <~Vornicus> The other thing working in C++ does is that I don't have the absolutely gigantic object-creation overhead that Python has; using a sequence type that isn't vector drops me from linear to logarithmic time for that segment... but in Python, lists are implemented in C and other data structures aren't.
14:45 < ktemkin> It's probably good practice for you to get back into C++ anyway. My point was more that, if possible, I would get a general idea of the runtime of your program across varying-sized datasets.
14:45 < ktemkin> If you have a Python program that executes in exptime, it's not suddenly going to finish anytime soon because you switch to a faster base language.
14:47 <~Vornicus> I actually have algorithm analysis in my notebook.
14:48 <~Vornicus> It's something like O(n^6)
14:52 himi [fow035@Nightstar-5d05bada.internode.on.net] has joined #code
14:52 mode/#code [+o himi] by ChanServ
14:52 < ktemkin> What's the approximate size of your input set/n?
14:52 <~Vornicus> n is about 100 at the moment.
14:55 < ktemkin> Okay, so (as a really, really loose example approximation) assume your teensiest step executes in about one tenth of a microsecond, and you're going to be running about 100^6 iterations of that step.
14:56 < ktemkin> It'd still take about 1.15 days of CPU time for your program to run.
14:56 <~Vornicus> yep.
15:00 ktemkin is now known as ktemkin[awol]
15:01 <~Vornicus> I've done this work. I know where it goes. Every single thing I can avoid having to do makes my code faster. Switching from O(sqrt(n)) to O(log(n)) is still a thousand-fold speedup when working with a billion elements.
15:33 <~Vornicus> This is also why I have two different versions of my optimizer, one for addition-type frontier joins, and one for merge-type; the addition-type, having to use that dictionary join, will only actually use that join when it needs to.
16:05 Vornicus [vorn@ServerAdministrator.Nightstar.Net] has quit [[NS] Quit: Leaving]
16:14 Turaiel[MARC] is now known as Turaiel
16:42 himi [fow035@Nightstar-5d05bada.internode.on.net] has quit [Ping timeout: 121 seconds]
16:47 celticminstrel [celticminst@Nightstar-b7a93457.dsl.bell.ca] has joined #code
16:47 mode/#code [+o celticminstrel] by ChanServ
16:51 Derakon[AFK] is now known as Derakon
16:55 himi [fow035@Nightstar-5d05bada.internode.on.net] has joined #code
16:55 mode/#code [+o himi] by ChanServ
17:02 himi [fow035@Nightstar-5d05bada.internode.on.net] has quit [Ping timeout: 121 seconds]
17:03 VirusJTG [VirusJTG@Nightstar-09c31e7a.sta.comporium.net] has joined #code
17:13 Turaiel is now known as Turaiel[MARC]
17:15 himi [fow035@Nightstar-5d05bada.internode.on.net] has joined #code
17:15 mode/#code [+o himi] by ChanServ
18:16 himi [fow035@Nightstar-5d05bada.internode.on.net] has quit [Ping timeout: 121 seconds]
18:29 himi [fow035@Nightstar-5d05bada.internode.on.net] has joined #code
18:29 mode/#code [+o himi] by ChanServ
19:24 Karono [Karono@Nightstar-a97724cd.optusnet.com.au] has joined #code
19:34 Kindamoody|out is now known as Kindamoody
19:39 Karono [Karono@Nightstar-a97724cd.optusnet.com.au] has quit [Connection closed]
20:05 celticminstrel [celticminst@Nightstar-b7a93457.dsl.bell.ca] has quit [[NS] Quit: And lo! The computer falls into a deep sleep, to awake again some other day!]
20:22 Kindamoody is now known as Kindamoody[zZz]
20:42 Derakon [Derakon@Nightstar-a3b183ae.ca.comcast.net] has quit [Ping timeout: 121 seconds]
20:43 Derakon [Derakon@Nightstar-a3b183ae.ca.comcast.net] has joined #code
20:43 mode/#code [+ao Derakon Derakon] by ChanServ
21:38 himi [fow035@Nightstar-5d05bada.internode.on.net] has quit [Ping timeout: 121 seconds]
21:51 himi [fow035@Nightstar-5d05bada.internode.on.net] has joined #code
21:51 mode/#code [+o himi] by ChanServ
21:57 < McMartin> 04:00 <@Azash> Something to malloc large sets of memory and then hand those further to whatever uses the engine, or?
21:57 < McMartin> There's a lot of dynamic churn, so this is really an object recycler that guarantees no fragmentation amongst memory chunks of a pre-set size.
21:58 < McMartin> Without being at the mercy of exactly how good libc's allocator is under what circumstances.
21:58 < McMartin> So it's actually not *that* large - I would estimate that in the use cases I'm thinking of, if I were to actually use this, at no point would it ever exceed a few hundred kilobytes...
21:59 < McMartin> ... but in doing something similar in UQM we actually managed to fragment memory so hard on some systems that malloc started failing after a couple hours of operation in a 32-bit address space
22:01 < McMartin> Also, from backscroll, albeit more recent: if you're going from O(sqrt(n)) to O(log n), that is not a constant-factor speedup, that is an asymptotic jump.
22:01 < McMartin> I'm not 100% confident that O(sqrt(n)) is still considered "polynomial", because while, technically, it *is*, it is also sublinear
22:04 < McMartin> Also, as for my allocator: this came out of my dicking around with Scheme
22:04 < McMartin> I was in a mode of "wow, you can do surprisingly efficient things with singly-linked lists"
22:04 < McMartin> And then I had the design of an allocator show up, and there's only one language that could ever use that~
22:13 Vornicus [vorn@ServerAdministrator.Nightstar.Net] has joined #code
22:13 mode/#code [+qo Vornicus Vornicus] by ChanServ
22:50 <@Azash> McMartin: Ah, quite nice anyway
22:51 < McMartin> If it becomes stable or turns into something, I can probably actually use it to replace the handful of things I'm still using C++ for in Monocle, which will make linking it in elsewhere easier, but I'm not willing to trust it with that at the prototype stage
22:51 < McMartin> But still, I got to play cons-cell surgery games in C ^_^
22:55 ErikMesoy is now known as ErikMesoy|sleep
22:55 <@Azash> cons-cell?
22:58 < McMartin> The "cons cell" or "pair" is the fundamental unit of allocation in traditional Lisp languages.
22:58 < McMartin> ("Traditional" because it isn't in Clojure)
22:59 < McMartin> It's basically two pointers, each to either some atomic value or another cons cell
22:59 < McMartin> So the basic data type is the singly-linked list.
22:59 < McMartin> "Surgery" is when you do things by reassigning the pointers in the cells instead of allocating new ones and linking in bits that reuse the old parts
23:00 < McMartin> Imperative LISP code as a result ends up having a great deal of similarity with the more viciously arcane C pointer-juggling techniques.
23:00 < McMartin> The kind of stuff that in every other language is modestly folded away inside the standard library
23:00 < McMartin> This is part of why LISP is treated as a functional language~
23:05 <@froztbyte> <McMartin> There's a lot of dynamic churn, so this is really an object recycler that guarantees no fragmentation amongst memory chunks of a pre-set size.
23:05 <@froztbyte> COSS COSS
23:06 <@froztbyte> (well, cyclic stores in general)
23:14 <@Azash> McMartin: I see, thanks
23:33 celticminstrel [celticminst@Nightstar-b7a93457.dsl.bell.ca] has joined #code
23:33 mode/#code [+o celticminstrel] by ChanServ
23:34 celticminstrel [celticminst@Nightstar-b7a93457.dsl.bell.ca] has quit [[NS] Quit: And lo! The computer falls into a deep sleep, to awake again some other day!]
23:34 celticminstrel [celticminst@Nightstar-b7a93457.dsl.bell.ca] has joined #code
23:35 mode/#code [+o celticminstrel] by ChanServ
23:53 < McMartin> froztbyte: Yeah, in UQM it was a ring-queue
23:53 < McMartin> I can post what I've got in a bit, once I clean up the formatting some
23:53 < McMartin> I'm pretty sure there's a standard name for it; it's too simple to not have one.
23:54 < McMartin> "Memory pool"
23:56 < McMartin> I'm modifying it slightly, which gives it slightly worse performance if you free things in exactly the wrong order with pathologically bad configuration inputs
23:57 < McMartin> And it's built on top of malloc and friends, but tries to call them less.
--- Log closed Sun Jun 23 00:00:10 2013