AI


treeship's basilisk

BIG HOOS aka the steendriver, Tuesday, 16 October 2018 15:08 (five years ago) link

AI will become more capable than it is at present. But AI will remain an adjunct to human intelligence and human agency for quite a long time. If I were you, treesh, I'd worry much more about the same old human capacity for greed, cruelty and abuse of power, as abetted by AI, than about AI's ascendancy over humans.

A is for (Aimless), Tuesday, 16 October 2018 17:31 (five years ago) link

I have a quasi-mystical, maybe heideggerian belief that consciousness and intelligence are wholly separate things and computers don’t “experience” anything the way living things do. There is no presence there. So I don’t believe the machines will have an agenda of their own. But I think they still might fuck stuff up.

Trϵϵship, Tuesday, 16 October 2018 17:56 (five years ago) link

I could see an algorithm going awry and screwing up the financial system or something; and the program programmed itself so no one knows how to fix it. Something like that.

Trϵϵship, Tuesday, 16 October 2018 17:57 (five years ago) link

one other important distinction between go and overwatch: i've never jacked off to go fanfiction but i frequently actually you know what never mind

― himalayan mountain hole (bizarro gazzara)

ah shit now i'm depressed thinking about the probable existence of "hikaru no go" hentai

dub pilates (rushomancy), Tuesday, 16 October 2018 18:47 (five years ago) link

incidentally i find roko's basilisk to be an utterly hilarious example of just how shitty humans are at thinking logically

dub pilates (rushomancy), Tuesday, 16 October 2018 18:50 (five years ago) link

Yeah i wasn’t serious about that specific danger. Sometimes I forget that online I don’t have my trusty slide whistle to let people know when I’m being ironic.

Trϵϵship, Tuesday, 16 October 2018 19:11 (five years ago) link

decades away from success in games like Overwatch

No way. I mean self-driving cars are getting reasonable - turn up the speed, add guns - can it be that hard?

Uhura Mazda (lukas), Tuesday, 16 October 2018 19:14 (five years ago) link

Also the computer beats ordinary players in games like that all the time. Like sports games—it’s not merely running a script in those cases.

cuz the computer cheats - video games are not necessarily a great example.

last I heard AI was getting really good at Heads Up No Limit Texas Hold 'em, though apparently if you're playing 3-way it's not as good. that to me is a bit more interesting.

frogbs, Tuesday, 16 October 2018 19:21 (five years ago) link

I could see an algorithm going awry and screwing up the financial system or something; and the program programmed itself so no one knows how to fix it. Something like that.

our financial systems are perfectly capable of doing this through human input alone tbf

himalayan mountain hole (bizarro gazzara), Tuesday, 16 October 2018 19:25 (five years ago) link

I have a quasi-mystical, maybe heideggerian belief that consciousness and intelligence are wholly separate things and computers don’t “experience” anything the way living things do.

My really uninformed sense is that there is often a presumption that if you build up enough complexity in terms of computation/intelligence then consciousness might emerge. This seems backwards to me, since if you build up from intelligence towards consciousness then you've more or less already ensured that the AI will always be more complex than (or at best equally complex as) the environment it's reacting to. You've predetermined what does and does not count as information. Consciousness (in that Heideggerian sense) exists "prior" to that distinction (or doesn't draw it). So what you'd need, by contrast, is an AI with an ability to adapt to (and reduce the complexity of) an outside environment more complex than it is.

ryan, Tuesday, 16 October 2018 19:26 (five years ago) link

In other words, AI would need to be capable of what Gotthard Gunther calls a logic of reflection (in which the information/not-information distinction can be suspended).

ryan, Tuesday, 16 October 2018 19:27 (five years ago) link

I could see an algorithm going awry and screwing up the financial system or something; and the program programmed itself so no one knows how to fix it. Something like that.

High-frequency trading and the 440 million dollar mistake

"There was some problem with the program," says Felix Salmon, finance blogger for Reuters in New York.

"We don't know exactly what. They switched it on and immediately they started losing literally $10 million a minute. It looks like they were buying high and selling low many, many times per second, and losing 10 or 15 dollars each time. And this went on for 45 minutes. At the end of it all they wound up having lost $440 million."

1-800-CALL-ATT (Karl Malone), Tuesday, 16 October 2018 19:29 (five years ago) link

lol someone screwed up an if statement

frogbs, Tuesday, 16 October 2018 19:30 (five years ago) link

if AI were REALLY smart it would have used two of itself to invent how to crash the stock market, and then used three of itself to be able to do it with plausible deniability

El Tomboto, Tuesday, 16 October 2018 19:34 (five years ago) link

ryan, reducing the complexity of input data by using a limited (although, yes, often still quite high) number of parameters to internally represent and "think" about a problem is already a very integral part of how the approaches that people these days call "AI" work.

Dan I., Tuesday, 16 October 2018 21:19 (five years ago) link
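
(A toy sketch of the compression idea Dan I. is pointing at, assuming numpy and scikit-learn; the data and dimensions are invented for illustration, nothing here is from the thread. It squeezes a high-dimensional input down to a handful of internal parameters and reconstructs it from them.)

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# 1000 observations of a 64-dimensional signal that secretly has only
# 3 underlying degrees of freedom, plus a little noise
latent = rng.normal(size=(1000, 3))
mixing = rng.normal(size=(3, 64))
data = latent @ mixing + 0.05 * rng.normal(size=(1000, 64))

pca = PCA(n_components=3)                  # internal representation: 3 numbers per observation
codes = pca.fit_transform(data)            # shape (1000, 3)
reconstruction = pca.inverse_transform(codes)

print("variance captured:", pca.explained_variance_ratio_.sum())
print("reconstruction error:", np.mean((data - reconstruction) ** 2))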

It's the whole von Neumann "With four parameters I can fit an elephant, and with five I can make him wiggle his trunk" thing...

Dan I., Tuesday, 16 October 2018 21:23 (five years ago) link
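
(Not the actual four-complex-parameter elephant construction, just a toy numpy illustration of the same moral: give a curve enough free parameters and it will pass exactly through whatever points you hand it, noise included.)

import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 10)
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=x.size)   # 10 noisy points

for degree in (1, 4, 9):
    coeffs = np.polyfit(x, y, degree)        # degree + 1 free parameters
    fit = np.polyval(coeffs, x)
    print(f"{degree + 1:2d} parameters -> squared error {np.sum((y - fit) ** 2):.2e}")
# with 10 parameters for 10 points the fit is exact: the trunk wiggles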

This is relevant to today's discussion: https://www.theverge.com/2018/10/16/17985168/deep-learning-revolution-terrence-sejnowski-artificial-intelligence-technology

DJI, Tuesday, 16 October 2018 21:37 (five years ago) link

Right, I’ve heard about this in the context of these networks creating fake videos. They really generate new things that seem realistic, right?

They are, in a sense, generating internal activity. This turns out to be the way the brain works. You can look out and see something and then you can close your eyes and you can begin to imagine things that aren’t out there. You have a visual imagery, you have ideas that come to you when things are quiet. That’s because your brain is generative. And now this new class of networks can generate new patterns that never existed. So you can give it, for example, hundreds of images of cars and it would create an internal structure which can generate new images of cars that have never existed and they all look totally like cars.

accidentally type "\\sars_images" and we've got new strains of deadly virus sars

for i, sock in enumerate (Sufjan Grafton), Tuesday, 16 October 2018 22:52 (five years ago) link
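
(A bare-bones sketch of the generator/discriminator setup Sejnowski is describing, on toy 2-D points rather than car photos; it assumes PyTorch, and everything in it is illustrative rather than taken from the article. The generator never sees the real data directly, it only learns to produce samples the discriminator can't tell apart from it.)

import torch
import torch.nn as nn

real_mean = torch.tensor([2.0, -1.0])        # the "real" distribution the generator must learn

gen = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
disc = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-3)

for step in range(2000):
    real = real_mean + 0.3 * torch.randn(64, 2)      # samples of "real" data
    fake = gen(torch.randn(64, 8))                   # the generator's internal activity

    # discriminator: learn to tell real from generated
    d_loss = loss_fn(disc(real), torch.ones(64, 1)) + \
             loss_fn(disc(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # generator: learn to fool the discriminator
    g_loss = loss_fn(disc(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# new points that have never existed but look like the real ones
print(gen(torch.randn(5, 8)).detach())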

