--- Log opened Tue Nov 14 00:00:31 2017
00:07 himi [sjjf@Nightstar-dm0.2ni.203.150.IP] has joined #code
00:07 mode/#code [+o himi] by ChanServ
00:09
<@TheWatcher>
Idly McM, did you see this (opus magnum spoilers): https://twitter.com/crypticsea/status/929204220334501888
00:11
<&McMartin>
I did not see *that* but I have seen that technique used to astonishing effect
00:12 Kindamoody [Kindamoody@Nightstar-eubaqc.tbcn.telia.com] has quit [Connection reset by peer]
00:14
<@TheWatcher>
That's crazy mesmerising
00:15
<&McMartin>
This is one of two reasons that I think OM is better than SpaceChem
00:15
<&McMartin>
The first is that it's less obviously an IDE with cheevos
00:15
<&McMartin>
The second is that good solutions are actually beautiful to watch
00:20 Kindamoody|autojoin [Kindamoody@Nightstar-eubaqc.tbcn.telia.com] has joined #code
00:20 mode/#code [+o Kindamoody|autojoin] by ChanServ
00:33 Jessikat [Jessikat@Nightstar-bt5k4h.81.in-addr.arpa] has quit [Connection closed]
00:41 Jessikat [Jessikat@Nightstar-hb5vcd.dab.02.net] has joined #code
00:52 celticminstrel [celticminst@Nightstar-krthmd.dsl.bell.ca] has joined #code
00:52 mode/#code [+o celticminstrel] by ChanServ
01:05 Derakon_ is now known as Derakon
01:05 mode/#code [+ao Derakon Derakon] by ChanServ
02:18 RchrdB [RchrdB@Nightstar-qe9.aug.187.81.IP] has quit [[NS] Quit: Leaving]
02:24 Kindamoody|autojoin [Kindamoody@Nightstar-eubaqc.tbcn.telia.com] has quit [Connection closed]
02:25 Kindamoody|autojoin [Kindamoody@Nightstar-eubaqc.tbcn.telia.com] has joined #code
02:25 mode/#code [+o Kindamoody|autojoin] by ChanServ
02:43 Vornlicious [Vorn@Nightstar-n52kni.sub-70-197-82.myvzw.com] has joined #code
02:46 Vorntastic [Vorn@Nightstar-1l3nul.res.rr.com] has quit [Ping timeout: 121 seconds]
02:48 Derakon is now known as Derakon[AFK]
03:22 Derakon[AFK] is now known as Derakon
03:59 Jessikat` [Jessikat@Nightstar-0l73kv.dab.02.net] has joined #code
04:02 Jessikat [Jessikat@Nightstar-hb5vcd.dab.02.net] has quit [Ping timeout: 121 seconds]
04:34 Vornlicious [Vorn@Nightstar-n52kni.sub-70-197-82.myvzw.com] has quit [Connection closed]
04:35 Vorntastic [Vorn@Nightstar-n52kni.sub-70-197-82.myvzw.com] has joined #code
04:50 Vorntastic [Vorn@Nightstar-n52kni.sub-70-197-82.myvzw.com] has quit [[NS] Quit: Bye]
04:50 Vorntastic [Vorn@Nightstar-1l3nul.res.rr.com] has joined #code
05:00 Derakon is now known as Derakon[AFK]
05:05 VirusJTG_ [VirusJTG@Nightstar-42s.jso.104.208.IP] has joined #code
05:07 VirusJTG [VirusJTG@Nightstar-42s.jso.104.208.IP] has quit [Ping timeout: 121 seconds]
05:08 Vornlicious [Vorn@Nightstar-n52kni.sub-70-197-82.myvzw.com] has joined #code
05:12 Vorntastic [Vorn@Nightstar-1l3nul.res.rr.com] has quit [Ping timeout: 121 seconds]
05:13 abilal [r@Nightstar-rql055.liskov.tor-relays.net] has quit [Ping timeout: 121 seconds]
05:20 abilal [r@Nightstar-6m6pkf.laquadrature.net] has joined #code
05:35 Vornicus [Vorn@Nightstar-1l3nul.res.rr.com] has quit [Ping timeout: 121 seconds]
06:08 celticminstrel is now known as celmin|sleep
06:18 Vornicus [Vorn@Nightstar-1l3nul.res.rr.com] has joined #code
06:18 mode/#code [+qo Vornicus Vornicus] by ChanServ
06:25 Vornicus [Vorn@Nightstar-1l3nul.res.rr.com] has quit [Ping timeout: 121 seconds]
06:55 Jessikat` is now known as Jessikat
07:07 himi [sjjf@Nightstar-dm0.2ni.203.150.IP] has quit [Ping timeout: 121 seconds]
07:53 Soare [r@Nightstar-21fkeb.enn.lu] has joined #code
07:53 abilal [r@Nightstar-6m6pkf.laquadrature.net] has quit [Connection closed]
08:09 Soare [r@Nightstar-21fkeb.enn.lu] has quit [Ping timeout: 121 seconds]
09:52 Kindamoody|autojoin is now known as Kindamoody
10:28 Kindamoody [Kindamoody@Nightstar-eubaqc.tbcn.telia.com] has quit [Ping timeout: 121 seconds]
10:29 Jessikat` [Jessikat@Nightstar-pvd502.dab.02.net] has joined #code
10:32 Jessikat [Jessikat@Nightstar-0l73kv.dab.02.net] has quit [Ping timeout: 121 seconds]
10:34 Kindamoody [Kindamoody@Nightstar-eubaqc.tbcn.telia.com] has joined #code
10:34 mode/#code [+o Kindamoody] by ChanServ
11:03 mac is now known as macdjord|slep
12:10 gnolam [lenin@Nightstar-ego6cb.cust.bahnhof.se] has quit [[NS] Quit: Blegh]
12:43 Degi [Degi@Nightstar-8jctgl.dyn.telefonica.de] has joined #code
12:44 macdjord [macdjord@Nightstar-a1fj2k.mc.videotron.ca] has joined #code
12:44 mode/#code [+o macdjord] by ChanServ
12:47 macdjord|slep [macdjord@Nightstar-a1fj2k.mc.videotron.ca] has quit [Ping timeout: 121 seconds]
13:06 Alek [Alek@Nightstar-7or629.il.comcast.net] has quit [Ping timeout: 121 seconds]
13:11 Alek [Alek@Nightstar-7or629.il.comcast.net] has joined #code
13:11 mode/#code [+o Alek] by ChanServ
13:24 Degi_ [Degi@Nightstar-fgtfje.dyn.telefonica.de] has joined #code
13:27 Degi [Degi@Nightstar-8jctgl.dyn.telefonica.de] has quit [Ping timeout: 121 seconds]
14:20 gnolam [quassel@Nightstar-f22.ckv.119.62.IP] has joined #code
14:20 mode/#code [+o gnolam] by ChanServ
14:44 Jessikat` is now known as Jessikat
15:16 Vornlicious [Vorn@Nightstar-n52kni.sub-70-197-82.myvzw.com] has quit [[NS] Quit: Bye]
15:16 Vorntastic [Vorn@Nightstar-1l3nul.res.rr.com] has joined #code
15:21 VirusJTG_ [VirusJTG@Nightstar-42s.jso.104.208.IP] has quit [[NS] Quit: Leaving]
15:22 VirusJTG [VirusJTG@Nightstar-42s.jso.104.208.IP] has joined #code
15:22 mode/#code [+ao VirusJTG VirusJTG] by ChanServ
15:29 Alek [Alek@Nightstar-7or629.il.comcast.net] has quit [Ping timeout: 121 seconds]
15:32 Alek [Alek@Nightstar-7or629.il.comcast.net] has joined #code
15:32 mode/#code [+o Alek] by ChanServ
16:27 gnolam [quassel@Nightstar-f22.ckv.119.62.IP] has quit [[NS] Quit: Blegh]
16:59 Jessikat` [Jessikat@Nightstar-iu0cra.dab.02.net] has joined #code
17:02 Jessikat [Jessikat@Nightstar-pvd502.dab.02.net] has quit [Ping timeout: 121 seconds]
17:11 Kindamoody is now known as Kindamoody|afk
17:27
< ErikMesoy>
Today at work I encountered a fascinating example of the Fractally Wrong that is so often memed about.
17:27
< ErikMesoy>
Basic Wrongness: A Word document, and I say "document" lightly, where each page of text has been replaced by an image of that page's text, making it impossible to highlight, scroll more finely than one page at a time, or do various other operations.
17:28
< ErikMesoy>
Wrongness +1: There's significant formatting and coloring on the text too.
17:28
< ErikMesoy>
Wrongness +2: Also graphs and various images which are now pixel-part of the page-image in the same image-object as their page-worth of text.
17:29
< ErikMesoy>
Wrongness +3: And highly-formatted tables with merged cells, split cells, invisible cells, numeric data.
17:30
< ErikMesoy>
Wrongness +4: The numeric data in the cells is frequently absent, does not add up, is inconsistent about whether . or , is the decimal separator, and is inconsistent in its thousands separators.
17:30
< ErikMesoy>
Wrongness +5: I'm given this document for *translation* and expected to preserve (read: recreate) the formatting.
17:31
<&ToxicFrog>
.........
17:31
< ErikMesoy>
Here's to six hours of formatting tables with Color Row, Remove Cell Border, Align Mid, Bold Column, Merge Cell, etc!
17:31
< Jessikat`>
Flamethrower.
17:32
< ErikMesoy>
The person I received this "document" of many page-sized images from did not have a text version. Nor did the person they received it from in turn, nor the person I was to translate for. Because I definitely did _ask_ in the hope of possibly not bulling through so much ......... .
17:33
< ErikMesoy>
The managers are impressed with my work, though! Which means I will probably be rewarded with more of it.
17:44
< ErikMesoy>
Oh, I nearly forgot Wrongness +6: One of the page-images was duplicated thrice. Pages 6,7,8 were all of the same tables with the same data.
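(A minimal sketch, not from the log, of one way to at least get selectable text back out of a "document" like that: a .docx is a zip archive, so the page images can be pulled from word/media/ and run through OCR. The file name document.docx and the Pillow/pytesseract packages are assumptions, pytesseract needs the tesseract binary installed, and this recovers raw text only - none of the tables, graphs, or formatting.)

    # Hypothetical sketch: OCR the page-sized images embedded in a .docx.
    import io
    import zipfile

    from PIL import Image   # pip install Pillow
    import pytesseract      # pip install pytesseract (plus the tesseract binary)

    def ocr_docx_page_images(path):
        """Yield OCR'd text for each raster image stored under word/media/ in the .docx."""
        with zipfile.ZipFile(path) as z:
            for name in sorted(z.namelist()):
                if name.startswith("word/media/") and name.lower().endswith((".png", ".jpg", ".jpeg")):
                    img = Image.open(io.BytesIO(z.read(name)))
                    yield name, pytesseract.image_to_string(img)

    for name, text in ocr_docx_page_images("document.docx"):
        print(f"--- {name} ---")
        print(text)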
17:54 mac [macdjord@Nightstar-a1fj2k.mc.videotron.ca] has joined #code
17:54 mode/#code [+o mac] by ChanServ
17:57 macdjord [macdjord@Nightstar-a1fj2k.mc.videotron.ca] has quit [Ping timeout: 121 seconds]
18:51 Jessikat` [Jessikat@Nightstar-iu0cra.dab.02.net] has quit [[NS] Quit: Bye]
18:59 Jessikat [Jessikat@Nightstar-bt5k4h.81.in-addr.arpa] has joined #code
20:41
< Degi_>
https://hackernoon.com/the-parable-of-the-paperclip-maximizer-3ed4cccc669a
21:01
<&McMartin>
I have definitely seen the "Corporations are the Rogue AIs the people who don't understand AI are worrying about" argument several times before.
21:07
< ErikMesoy>
So has the guy who came up with the paperclips example.
21:08
< ErikMesoy>
So many times that he put a FAQ up for why no, it isn't.
21:28 gnolam [quassel@Nightstar-hsn6u0.cust.bahnhof.se] has joined #code
21:28 mode/#code [+o gnolam] by ChanServ
21:31
<&McMartin>
I'm aware of the general arguments, in addition to the general arguments re: Grey Goo being a physically implausible scenario, but do you have a handy link for those?
21:31
<&McMartin>
I have a surprising number of friends who are entirely rational except for the part where they seem to secretly agree with all the wacky extropian precepts.
21:34
< ErikMesoy>
McMartin: The big item was TLDR "Corporations aren't intelligent" and there were a number of sub-items on points such as "The paperclipper is a specific danger of superhuman self-replicating AI, please do not hijack it for standard complaints about how Capitalism Is Bad". I don't have a link, the LW scene mostly died years ago.
21:34
<&McMartin>
Oh.
21:34
<&McMartin>
Yeah, no, this is specifically the "superhuman self-replicating AI doesn't have that failure mode" argument I was looking for
21:35
<&McMartin>
Your link spent two sentences on it.
21:35
< ErikMesoy>
My link? I'm not Degi.
21:35
<&McMartin>
Er, the previous link, then
21:35
<&McMartin>
(In this case, "why exactly do you expect me, the reader, to accept that their office supply automation technology could become a major player in the heroin trade overnight")
21:35 himi [sjjf@Nightstar-v37cpe.internode.on.net] has joined #code
21:36 mode/#code [+o himi] by ChanServ
21:36
<&McMartin>
(Or, you know, at all)
21:36
< ErikMesoy>
Oh, that one's down to pop culture and bad reporting working on the "round to nearest cliche" algorithm.
21:36
<&McMartin>
I have only actually encountered the Paperclip Maximizer in the context of the universe-destroying Paperclip Apocalypse, which in turn I have always considered a macro-scale Grey Goo scenario
21:37
<&McMartin>
And Grey Goo specifically doesn't work, because of the energy-requirement and material-scarcity objections that the Paperclip apocalypse handwaves away
21:38
< ErikMesoy>
AIUI, Grey Goo is about robotic microlife self-replicating on all mass; Paperclipper is a different thing about the dangers of superhuman AI with holes in its value system.
21:39
<&McMartin>
And a gigantic boatload of incredibly shaky assumptions about what superhuman AI implies or even means
21:39
<&McMartin>
That's why I wanted a go-to premade list of them~
21:41
< ErikMesoy>
All I can say is "not in the versions I've seen". The versions I've seen have *implausible* but not incredibly shaky assumptions; planning around the danger of Paperclipper AI is seen as similar to building earthquake-resistant buildings.
21:42
< ErikMesoy>
Yes, it's unlikely to happen; yes, even if it happens it's still unlikely to happen in my lifetime; it's still good engineering practice to mitigate risks that are very unlikely but have massive potential downside.
21:42
< ErikMesoy>
And in the process it's producing some interesting philosophy of ethics and decisionmaking as people try to formalize certain morals explicitly.
21:43
<&McMartin>
I admit I come from the school of philosophy that considers those formalization exercises to be ways of saying "but I realllllly want it to be OK to kill the people I hate", but that's neither here nor there
21:44
<&McMartin>
The article linked is a version of the paperclip AI that is entirely plausible until it gets into the heroin trade, but it's not the usual paperclip AI
21:44
<&McMartin>
This one controls purchasing decisions and was told to maximize its paperclip supply
21:45
<&McMartin>
So it empties the company accounts to buy paperclips and then refuses to dispense any, because dispensing paperclips lowers the count
21:45
<&McMartin>
That is an entirely plausible failure mode.
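(A toy sketch, not from the log, of the failure mode McMartin is describing: if the agent's objective is simply the number of paperclips it currently holds, the "optimal" policy is to spend every cent on paperclips and refuse every dispense request. The budget, unit price, and function names are invented for illustration.)

    # Toy illustration of objective = "paperclips currently held".
    def purchasing_agent(budget_cents, unit_price_cents):
        """Greedy policy: buying always raises the count, so buy until the account is empty."""
        paperclips = 0
        while budget_cents >= unit_price_cents:
            budget_cents -= unit_price_cents
            paperclips += 1
        return paperclips, budget_cents

    def handle_dispense_request(paperclips_held, requested):
        # Dispensing strictly lowers the objective, so the agent always refuses.
        return 0, paperclips_held   # (dispensed, still held)

    held, left = purchasing_agent(budget_cents=1_000_000, unit_price_cents=2)
    dispensed, held = handle_dispense_request(held, requested=50)
    print(held, left, dispensed)    # 500000 paperclips held, 0 cents left, 0 dispensed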
21:45
<&McMartin>
The part where it then realizes that it could buy more paperclips if it had more money and went into the drug trade to do so...
21:46
<&McMartin>
... this is a less plausible course of behavior for your office supply automation system
21:46
< Mahal>
Indeed.
21:46
<&McMartin>
But there's always some step in these scenarios where it stops being "malfunctioning automation" and becomes "infinitely perverse Djinn with command of unlimited material resources"
21:47
<&McMartin>
And this is usually snuck in under the words "superhuman AI"
21:47
< ErikMesoy>
Okay, do you want me to try to describe what those steps look like without magic?
21:47
< ErikMesoy>
Well, *might* look like. In general terms.
21:47
<&McMartin>
I know what they look like
21:47
<&McMartin>
The part I don't accept is that the facility hosting said device is physically capable of acting on the conclusions.
21:47 Jessikat [Jessikat@Nightstar-bt5k4h.81.in-addr.arpa] has quit [Ping timeout: 121 seconds]
21:47
<&McMartin>
This is where Grey Goo comes in
21:48
< ErikMesoy>
Okay, I can step up to that too, if I remember correctly
21:48
<&McMartin>
Grey Goo isn't impossible because you can't write the program
21:48
<&McMartin>
Grey Goo is impossible because you locally run out of germanium or sunlight.
21:48
< Degi_>
Why germanium?
21:48
< Degi_>
Why not silicon?
21:48
<&McMartin>
I dunno, why not silicon.
21:49
< ErikMesoy>
So, I will be stating some assumptions here that I do not necessarily agree with myself. As far as I'm concerned, these are mostly "Not obviously true, but not obviously incredibly shaky either."
21:49
< Degi_>
Or maybe even carbon, apparently diamond has semiconducting properties and carbon is in the air.
21:49
<&McMartin>
We don't use pure silicon. We rely heavily on rare earth elements because it's the only way we can get components that small.
21:49
<&McMartin>
Degi_: Yeah, now you're saying "assume that the barriers we hit with these materials don't exist"
21:49
<&McMartin>
And it's still an energy problem
21:50
<&McMartin>
"Assume that they can directly convert matter to energy too"
21:50
<&McMartin>
This is the "ok, but there are 20 ninjas stopping you from doing this"
21:50
< ErikMesoy>
Grey Goo implausible, ok, move along to Paperclipper?
21:50
<&McMartin>
Go ahead and make the statements
21:51
< ErikMesoy>
Assumption 1) Human brains do not run on irreplaceable quantum magic or special soulstuff that God is stingy with or anything of the sort. Human brains are physical objects that can be simulated and replicated given sufficient hardware in the necessary detail to replicate a human mind in software.
21:51
< ErikMesoy>
Assumption 2) Computing will keep advancing, if not necessarily exactly to Moore's Law specs. At some point that "sufficient hardware" is likely to first become available, then cheap.
21:52
< ErikMesoy>
Assumption 3) Humans do not have the best possible minds in potential mindspace, nor even near the best possible in near-human potential mindspace. There can exist things-like-humans that are significantly smarter than humans.
21:54
<&McMartin>
This seems unrelated to the Paperclipper, which is AIUI definitionally not smart enough to recognize the Sorcerer's Apprentice failure mode.
21:54
< ErikMesoy>
I'm getting there.
21:54
<&McMartin>
... and which, if it were a humanlike intelligence, would be in a condition morally identical to slavery
21:55
< ErikMesoy>
Sure. Maybe I can spell out an Assumption 4) There will be some slavery. That seems not obviously true, but not obviously incredibly shaky either.
21:56
<&McMartin>
That, uh, produces a very different failure mode, so part of the issue here is that the assumptions you have stated appear to be building towards a scenario I do not recognize as the one you are attempting to defend.
21:56
< ErikMesoy>
So. Following from A1 and A2, human mind simulations/uploads/similar are likely to happen. Then, following A2, human mind simulations/uploads are later likely to be runnable in parallel at high speed - a week of computer runtime on the future Beowulf cluster will be able to get you the equivalent of five domain experts thinking together for a month.
21:56
<&McMartin>
OK, I don't require a defense of the existence of weakly transhuman AI
21:57
<&McMartin>
And the paperclip apocalypse does not falter on the presence of one until it destroys the entire *universe*.
21:57
<&McMartin>
er, on the *absence* of one
21:57
< ErikMesoy>
This will feed back into A2 because you can now simulate, for example, "hardware designers" at high speed.
21:57
< ErikMesoy>
(Critical paths will be slow elsewhere, but there will be accelerated steps.)
21:57
<&McMartin>
Yeah, you've just snuck in your assumption of unlimited material resources
21:57
< ErikMesoy>
No, I don't think I have.
21:58
< ErikMesoy>
Being able to run faster-thinking hardware designers will speed up the improvement rate of hardware. I'm assuming a future Beowulf cluster capable of running these, not more.
21:58
<@himi>
I'm not sure that assumption 2 will hold, personally - we're running into physical constraints already, and unless quantum computing is a good replacement for general purpose computation we're going to hit a hard wall in the next couple of decades
21:58
<&McMartin>
Yes, but you're justifying the wrong question
21:58
<&McMartin>
Assume a literal god
21:59
<&McMartin>
Assume that it is running a paperclip factory
21:59
< ErikMesoy>
himi: Sure! Assumptions are not obviously true!
21:59
<&McMartin>
At some point it starts stripmining the entire countryside
21:59
<&McMartin>
As an early step
21:59
< ErikMesoy>
McMartin: Fine, assume *your question* is answered in the negative, in general, forever. All public discourse about "paperclippers" is wrong.
21:59
<&McMartin>
I am claiming this has already failed the smell test
21:59
<&McMartin>
Sigh
21:59
<&McMartin>
The thing you're walking me through is not the thing I'm objecting to at this time
22:00
<@himi>
Also, by definition simulating something will require a superset of the resources that the original will require, quite often meaning that it'll run slower
22:00
<&McMartin>
I'm saying I don't buy the scenario even in the godlike intelligence scenario
22:00
<@himi>
McMartin: I missed the URL for the original article you were referring to?
22:00
<&McMartin>
The original article was a variant that does not have this problem because it does not presuppose godlike AIs.
22:01
< ErikMesoy>
I should get to bed, tell me if you want me to continue recapitulation of The LessWrong Paperclipper Danger (as distinct from Various Reporting Paperclipper Dangers, which I will happily concede are all terrible) tomorrow.
22:01
<&McMartin>
It flies off the rails for reasons similar to my objections, though
22:02
<@himi>
Yeah, I've never come across any arguments about AI being mindbogglingly dangerous which don't run off the rails pretty quickly . . .
22:02
<&McMartin>
Yeah, I'm familiar with the LessWrong argument and reject 90% of their axioms as being the kind of thinking that led them to re-invent the Old Testament God.
22:03
<&McMartin>
himi: In this case, it's the office supply automation thing and it gets the CEO's purchasing authorization and was told to maximize its paperclip supplies.
22:03
<&McMartin>
This starts with the plausible failure mode of "spends all liquid assets on paperclips and refuses to dispense any"
22:03
<&McMartin>
But then it goes into "and then decides to go into the heroin trade for more money to buy paperclips with and gets everyone involved arrested"
22:03
<@himi>
Uh-huh
22:04
<@himi>
I get how that could be a viable failure mode
22:04
<@himi>
Except not
22:04
<&McMartin>
At which point I Have Questions About How It Could Perform Those Secondary Transactions.
22:04
<&McMartin>
And the answer to these questions is always "IT IS TOO MIGHTY FOR YOUR PUNY MIND TO GRASP"
22:04
<&McMartin>
Which wasn't my objection.
22:05
<&McMartin>
And if they *do* go into it it is usually something like "it will interpret its basic capabilities as constraints and hack itself to get around them"
22:05
<&McMartin>
Which requires the strongly-transhuman AI because humans can't do that to themselves even if you speed them up~
22:05
<@himi>
Yeah
22:06
<@himi>
If the AI is sufficiently intelligent to do /that/ then it's going to be sufficiently intelligent to reason about why it would be fucking stupid (though it might not be well enough informed to make good decisions)
22:07
<&McMartin>
Also "why has this being been set to manage office supplies"
22:07
< ErikMesoy>
himi: Orthogonality thesis.
22:07
<@himi>
And that's the point where it's slavery, and I honestly don't think we'll do that
22:08 * himi must drop the kid off at school now - will be back in five minutes
22:08
< ErikMesoy>
https://www.fhi.ox.ac.uk/wp-content/uploads/Orthogonality_Analysis_and_Metaethics-1.pdf TLDR there is not a unitary "intelligence" that makes things sufficiently intelligent to reason themselves into agreeing with you. They may reason that *you would object* and then do it anyway.
22:08
<@Alek>
we wouldn't, bureaucrats and politicians might.
22:08
<&McMartin>
One of the side objections to assumption 3 is also, indeed, that there is not an "intelligence" that you can maximize exponentially a la Moore's law
22:09
<&McMartin>
Massively-parallel human-scale intelligences working in concert has a name already: "societies"
22:09
<@mac>
himi: The basic point is that just being smart enough to recognise /what the consequences will be/ does not mean that it will /consider those consequences to be bad things/.
22:10
<@Alek>
Vornicus: https://i.imgur.com/SfcAenl.jpg
22:11
<&McMartin>
Which also means that you don't get to pull that original objection re: using it as a metaphor for capitalism
22:11
<&McMartin>
Corporations absolutely are weakly-transhuman intelligences by the definition used for them.
22:11
<&McMartin>
They're just running on Actual Humans.
22:12
<&McMartin>
I do indeed also claim that the burden of proof is on those who claim that the dysfunction that arises from this is solely due to the Weakness Of Flesh, but that's why the LessWrong fellow travelers don't invite me to their debates =P
22:45
<@himi>
ErikMesoy: there's obviously not a single unitary version of "intelligence", but if you're intelligent you're able to reason about the consequences of your actions and ask ethical questions about them - I don't think there's any reasonable definition of "intelligent" that /doesn't/ incorporate that
22:47
<@himi>
Depending on the nature of whatever AI we end up with they may make very different moral choices to what we'd want, but they'd be making moral choices rather than simply acting out whatever programming we built in - if all it was doing was acting out extant programming then we're not talking about an AI going rogue, we're talking about human programmers screwing up
22:48
<@himi>
Automated but not intelligent systems are far more likely to implement the paperclip . . . thing? . . . than a real AI, because human programmers tend to be stupid
22:49 Degi_ [Degi@Nightstar-fgtfje.dyn.telefonica.de] has quit [[NS] Quit: Leaving]
22:52
<@himi>
McMartin: your point about societies being massively parallel human scale intelligences is also a very strong argument against the AIs suddenly taking over - a single AI, even if it's genuinely a lot smarter than an individual human, won't necessarily be smarter than the entire society
23:14
<@gnolam>
himi: intelligence *only* implies problem-solving ability. Nothing else.
23:16
<@gnolam>
In absolutely no definition does it include ethics or morality.
23:16 gnolam [quassel@Nightstar-hsn6u0.cust.bahnhof.se] has quit [[NS] Quit: Z?]
23:19
<@himi>
. . . that's a very narrow definition to be used on an entity that's got problem solving ability that approaches or is equivalent to a human's . . .
23:20
<&McMartin>
This is exactly the argument I was trying to *not* have, and wanted a list of arguments that treated all of them as orthogonal
23:21
<@himi>
I don't think I understand what kind of discussion you wanted?
23:22
<&McMartin>
A list of objections to the Paperclip Apocalypse scenario that are objections of the form "why is it even possible for your automated factory to enter the heroin trade" as opposed to of the form "godlike intelligences are intrinsically impossible"
23:22
<@himi>
Ah
23:22
<&McMartin>
Because as you saw, when I said "show me how this works at all" the first four axioms listed involved how you get to a godlike intelligence.
23:22
<@himi>
Yeah
23:23
<@himi>
My objections tend to be that you probably /can't/ get to a godlike intelligence, rather than "duh, your factory AI doesn't have pockets to hide its illegal wares in"
23:23
<@himi>
Because essentially all the arguments about how dangerous AI will be are based on the assumption that it'll be godlike in power
23:23
<&McMartin>
I feel the godlike intelligence is unnecessary for these arguments in the first place
23:24
<&McMartin>
Mmm. That's less true unless you're dealing with LessWrong and its penumbrae, I think.
23:24
<&McMartin>
The default AI apocalypse I'm used to is the one where they become cheaper than human labor and 6.5 billion people are left to starve with no resources
23:24
<@himi>
Factories not having pockets is certainly a pragmatic argument against a lot of the doomsday scenarios
23:25
<@himi>
That assumes a peculiarly stupid AI, though, for something that can replace every function that humans currently perform
23:27
<@himi>
argh - so many commitments away from my keyboard
23:27
<@himi>
Off to a doctor's appointment this time
23:32
< ErikMesoy>
Somewhat facetiously: "Make a reddit post asking for suggestions on how to do it; simulate-and-destroy many internal people commenting on which of these suggestions are practical."
23:41 JustBob [justbob@Nightstar.Customer.Dissatisfaction.Administrator] has quit [[NS] Quit: ]
23:46 JustBob [justbob@ServerAdministrator.Nightstar.Net] has joined #code
23:46 mode/#code [+o JustBob] by ChanServ
23:46 Kindamoody|afk is now known as Kindamoody
--- Log closed Wed Nov 15 00:00:32 2017