Artificial intelligence still has some way to go

Or they just help us get plane tickets: http://www.wired.com/2014/08/viv/

schwantz, Sunday, 3 May 2015 20:49 (nine years ago) link

two months pass...

https://www.youtube.com/watch?v=X_tvm6Eoa3g

Balkan-Boogie (soref), Saturday, 18 July 2015 15:58 (eight years ago) link

when earth & humanity are long gone, there will be bots drifting through the galaxy in eternal courtship

ogmor, Saturday, 18 July 2015 17:30 (eight years ago) link

If I ever went on a date it would probably go exactly like that.

AdamVania (Adam Bruneau), Saturday, 18 July 2015 17:36 (eight years ago) link

Chappie more like Crappie amirite?

passive-aggressive rageaholic (snoball), Sunday, 19 July 2015 18:08 (eight years ago) link

It was diabolically poor. It actually put me off cinema for a bit.

quixotic yet visceral (Bob Six), Sunday, 19 July 2015 19:03 (eight years ago) link

six months pass...

Nature: Mastering the game of Go with deep neural networks and tree search

The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses ‘value networks’ to evaluate board positions and ‘policy networks’ to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of state-of-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. We also introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. Using this search algorithm, our program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.
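
roughly the trick, in sketch form (not DeepMind's code: `policy_net`, `value_net`, and the board interface here are stand-ins, and the real program also mixes rollout results into its leaf evaluations, which this simplified version skips):

```python
import math

# Toy policy/value-guided tree search in the AlphaGo spirit.
# Assumed interfaces: policy_net(state) -> [(move, prior)],
# value_net(state) -> float, and state objects with copy()/play().

class Node:
    def __init__(self, prior):
        self.prior = prior        # P(s, a) from the policy network
        self.visits = 0           # N(s, a)
        self.value_sum = 0.0      # W(s, a)
        self.children = {}        # move -> Node

    def q(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select(node, c_puct=1.0):
    # Pick the child maximizing Q + U, where U favors moves the policy
    # network likes but the search has barely explored yet.
    total = sum(ch.visits for ch in node.children.values())
    def score(ch):
        return ch.q() + c_puct * ch.prior * math.sqrt(total) / (1 + ch.visits)
    return max(node.children.items(), key=lambda kv: score(kv[1]))

def search(root_state, policy_net, value_net, n_sims=1600):
    root = Node(prior=1.0)
    for _ in range(n_sims):
        node, state, path = root, root_state.copy(), [root]
        while node.children:                  # descend to a leaf
            move, node = select(node)
            state.play(move)
            path.append(node)
        for move, p in policy_net(state):     # expand with priors
            node.children[move] = Node(prior=p)
        v = value_net(state)                  # evaluate, no random rollout
        for n in path:                        # back the value up the path
            n.visits += 1
            n.value_sum += v
    # (sign-flipping for alternating players is omitted for brevity)
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]
```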

h/t hoooos

Karl Malone, Wednesday, 27 January 2016 21:10 (eight years ago) link

i guess this should be the AI thread. post your comments about how AI is impossible because you saw a clip of a robot falling over here.

Karl Malone, Wednesday, 27 January 2016 21:11 (eight years ago) link

important work they're doing over there *eyeroll*

Οὖτις, Wednesday, 27 January 2016 21:11 (eight years ago) link

the go thing, you mean?

Karl Malone, Wednesday, 27 January 2016 21:12 (eight years ago) link

yeah

re: AI in general, I wouldn't say it's impossible but it is very very very far away

Οὖτις, Wednesday, 27 January 2016 21:12 (eight years ago) link

speaking of neural networks, there's this link caek accidentally posted: http://www.wired.com/2016/01/apple-buys-ai-startup-that-reads-emotions-in-faces

Karl Malone, Wednesday, 27 January 2016 21:13 (eight years ago) link

it's close enough to figure out how you react to advertisements

Karl Malone, Wednesday, 27 January 2016 21:13 (eight years ago) link

I am not impressed

Οὖτις, Wednesday, 27 January 2016 21:14 (eight years ago) link

I mean congratulations you've spent billions of dollars and tons of other resources on doing something a baby can do, good job

Οὖτις, Wednesday, 27 January 2016 21:15 (eight years ago) link

(sorry I don't mean "you" you, not trying to make this personal)

Οὖτις, Wednesday, 27 January 2016 21:16 (eight years ago) link

This is pretty exciting:
http://www.eurekalert.org/pub_releases/2016-01/miop-sba012716.php

schwantz, Wednesday, 27 January 2016 21:17 (eight years ago) link

haha, it's ok

buuuuuuut, when i was a baby, i wasn't capable of reading human emotions from millions of people at any given moment and then feeding that information to advertising corporations. of course, as i grew older i developed this ability but by that time other babies had already submitted job applications so mine was at the bottom of the pile

Karl Malone, Wednesday, 27 January 2016 21:18 (eight years ago) link

lol

Οὖτις, Wednesday, 27 January 2016 21:21 (eight years ago) link

but yeah the "reading human emotions" aspect does not impress me as a technological feat in and of itself. Biology still obviously way superior in that department. otoh the "helping corporations make even more effective advertisements!" aspect is just gross and sad.

Οὖτις, Wednesday, 27 January 2016 21:23 (eight years ago) link

also, i think admitting that certain AI capabilities are similar to what a baby can do suggests enormous potential in the near term. the difference in capabilities between babies and adults seems enormous to us, but when you consider it on a logarithmic scale, they're very close. the difference between einstein and the livestreaming tech guy idiot in oregon is not very large in the grand scheme of things. if an AI's capabilities have already climbed from earthworm level to baby level, einstein really isn't that far away.

obviously i'm referring to the scientific names of these universally agreed upon scales here

Karl Malone, Wednesday, 27 January 2016 21:25 (eight years ago) link

thought the thread bump might be for Minsky

RIP big man

Brad C., Wednesday, 27 January 2016 21:28 (eight years ago) link

yeah as i understand it the hope/fear is that at some unknown point of basic sophistication the gap all of a sudden closes itself

something i don't get about the superintelligence fear is why these new gods intelligent in ways we can't even imagine are just assumed to also be terminally discompassionate and sociopathically fixated on widget-making or nuclear supremacy

i do sometimes worry about the politics and very notions of intelligence of a lot of the people who do the actual work on this stuff, let alone of course the people who pay for it

rip minsky, yeah.

denies the existence of dark matter (difficult listening hour), Wednesday, 27 January 2016 21:30 (eight years ago) link

enormous potential in the near term

there's always been enormous potential lol, it's the "near term" part that seems to be constantly pushed out

Οὖτις, Wednesday, 27 January 2016 21:32 (eight years ago) link

I mean this fear of robots becoming *actually intelligent* and destroying humanity has been around basically since the concept of "robot" was first formalized, well before the first computers even existed.

Οὖτις, Wednesday, 27 January 2016 21:34 (eight years ago) link

something i don't get about the superintelligence fear is why these new gods intelligent in ways we can't even imagine are just assumed to also be terminally discompassionate and sociopathically fixated on widget-making or nuclear supremacy

nick bostrom's book is basically about this (a lot of people seem to assume it's a kurzweil style book, but it's really all about risk management). he talks a lot about the end goals of an AI and their unintended consequences. one thing that comes up often is that for just about any goal, having more resources would be beneficial. or eliminating obstacles to the goal (such as humans).

The risks in developing superintelligence include the risk of failure to give it the supergoal of philanthropy. One way in which this could happen is that the creators of the superintelligence decide to build it so that it serves only this select group of humans, rather than humanity in general. Another way for it to happen is that a well-meaning team of programmers make a big mistake in designing its goal system. This could result, to return to the earlier example, in a superintelligence whose top goal is the manufacturing of paperclips, with the consequence that it starts transforming first all of earth and then increasing portions of space into paperclip manufacturing facilities. More subtly, it could result in a superintelligence realizing a state of affairs that we might now judge as desirable but which in fact turns out to be a false utopia, in which things essential to human flourishing have been irreversibly lost. We need to be careful about what we wish for from a superintelligence, because we might get it.

http://www.nickbostrom.com/ethics/ai.html

Karl Malone, Wednesday, 27 January 2016 21:36 (eight years ago) link

xp ninety years not rly a v long time... in pre-singularity years

denies the existence of dark matter (difficult listening hour), Wednesday, 27 January 2016 21:37 (eight years ago) link

i guess i am unconvinced that something can simultaneously be "superintelligent" and have an extremely rigid and unadaptable "goal system". people would get bored caring about paperclips, let alone one of these things.

really the problem w the whole line of speculation right is a lack of understanding of what we mean by intelligence let alone superintelligence

denies the existence of dark matter (difficult listening hour), Wednesday, 27 January 2016 21:40 (eight years ago) link

agree that it could well be an alien brain w v incompatible values, also i suppose agree w the unmade point that the only really altruistic and compassionate thing to do is exterminate us

denies the existence of dark matter (difficult listening hour), Wednesday, 27 January 2016 21:42 (eight years ago) link

yeah, i think part of the misunderstanding is that everyone tends to anthropomorphize AI. i mean, you're right: people would get extremely bored caring about paperclips. but computers aren't people. they'll switch off between 0 and 1 until they're gone.

agreed about lack of understanding of what the terms mean, though. i read through one of Edge's little collections of smart people talking about stuff, the AI issue, and it was incredibly frustrating because just about every writer seemed to be defining the same terms in different ways.

Karl Malone, Wednesday, 27 January 2016 21:45 (eight years ago) link

really the problem w the whole line of speculation right is a lack of understanding of what we mean by intelligence let alone superintelligence

^^otfm

Οὖτις, Wednesday, 27 January 2016 21:48 (eight years ago) link

humanity's never really gotten around to a good working definition of what "consciousness" is but here we are thinking we have some fancy new way to create it (apart from the old fashioned way of biological reproduction + social engineering), even when we don't know or can't agree on what *it* really is

Οὖτις, Wednesday, 27 January 2016 21:50 (eight years ago) link

people have been overestimating the proximity of AI for decades, in the sense of an AI as some kind of autonomous problem-solving agent, but maybe to an equal extent underestimating the kinds of intelligence programmers have built to work in specific problem spaces

if you had shown me Google search autocomplete 25 years ago, I don't think my reaction would have been, "Oh that's just an algorithm, where's some real AI?"
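
(fwiw, the unglamorous core of autocomplete is just frequency-ranked prefix matching over a query log. a toy sketch, with the data invented and none of Google's actual ranking machinery:)

```python
# Toy autocomplete: rank logged queries that share the typed prefix by
# how often they were seen. Real systems layer on personalization,
# freshness, spelling correction, etc.; this is only the skeleton.

from collections import defaultdict

class Autocomplete:
    def __init__(self):
        self.counts = defaultdict(int)       # query -> times seen

    def log(self, query):
        self.counts[query] += 1

    def suggest(self, prefix, k=3):
        matches = [q for q in self.counts if q.startswith(prefix)]
        return sorted(matches, key=lambda q: -self.counts[q])[:k]

ac = Autocomplete()
for q in ["go rules", "go rules", "google ai", "go board size"]:
    ac.log(q)
print(ac.suggest("go"))   # ['go rules', 'google ai', 'go board size']
```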

Brad C., Wednesday, 27 January 2016 21:56 (eight years ago) link

obviously there's no denying technological advances. But yeah I don't think I consider what a Google search engine does "intelligence" in any meaningful way, and yeah maybe that is related to it being in the service of a specific, non-autonomous function.

Οὖτις, Wednesday, 27 January 2016 22:02 (eight years ago) link

it's more like a representation/prediction of group intelligence

though yeah "intelligence" maybe not the word for millions of Google searchers

Brad C., Wednesday, 27 January 2016 22:03 (eight years ago) link

xposts

i don't know, i guess i don't think that emulating "consciousness" is necessarily essential to a superintelligence. again, the anthropomorphizing thing is a problem. but maybe i'm going too far down the Turing road, thinking that the most important things to measure are outcomes (if an AI can detect human emotions via facial muscle movements more accurately than a human being can and react accordingly, then who cares if it's "conscious" or not?)

Karl Malone, Wednesday, 27 January 2016 22:13 (eight years ago) link

if an AI can detect human emotions via facial muscle movements more accurately than a human being can and react accordingly

begs the question of what constitutes "more accurately" and "react[ing] accordingly"

Οὖτις, Wednesday, 27 January 2016 22:17 (eight years ago) link

off the top of my head, for "more accurately": assume that 100 people are asked to film themselves while thinking of either a tragic or sexy memory. a test group of humans then views the videos and guesses the emotions on display, while an AI also completes the same task. the AI is more accurate if it guesses the emotion correctly more often than the humans do.
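
(to make the scoring concrete, a back-of-the-envelope version. everything in it, labels, panel, guesses, is invented for illustration:)

```python
# Scoring the thought experiment above: labeled videos, guesses from a
# panel of human judges, and guesses from one model. All data invented.

def accuracy(guesses, labels):
    # Fraction of videos where the guess matches the self-reported label.
    hits = sum(g == t for g, t in zip(guesses, labels))
    return hits / len(labels)

labels = ["tragic", "sexy", "tragic"]          # what each subject reported
human_panel = [                                 # one guess list per judge
    ["tragic", "tragic", "tragic"],
    ["tragic", "sexy", "sexy"],
]
ai_guesses = ["tragic", "sexy", "tragic"]

human_acc = sum(accuracy(g, labels) for g in human_panel) / len(human_panel)
ai_acc = accuracy(ai_guesses, labels)
print(f"humans {human_acc:.2f} vs AI {ai_acc:.2f}")  # humans 0.67 vs AI 1.00
# Note the whole comparison leans on the self-reported labels being
# trustworthy, which is exactly the objection raised downthread.
```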

"react accordingly": i don't know, i guess it would just be recognizing when someone is sad or angry and backing off for a while (in contemporary Siri terms, maybe holding off on the automated reminder that your toddler's doctor's appointment is tomorrow afternoon). or laughing at a joke. or doing the fake laughter thing you have to do when someone that is respected tells a mediocre joke in a public setting.

it might seem that "consciousness" is necessary to register human emotions and react as humans do, but i'm not sure that's true.

Karl Malone, Wednesday, 27 January 2016 22:26 (eight years ago) link

dreaming seems to be a signal of consciousness. but does an AI have to dream in order to complete tasks at superhuman levels?

Karl Malone, Wednesday, 27 January 2016 22:28 (eight years ago) link

do you really want me to deconstruct the problems with your examples cuz they seem obvious to me

Οὖτις, Wednesday, 27 January 2016 22:28 (eight years ago) link

what if the superintelligent ai says stuff like that all the time

I expel a minor traveler's flatulence (Sufjan Grafton), Wednesday, 27 January 2016 22:31 (eight years ago) link

oh great thx for outing me

Οὖτις, Wednesday, 27 January 2016 22:37 (eight years ago) link

better test would record how 100 people react to jute gyte dinner music and compare with ai's reaction

I expel a minor traveler's flatulence (Sufjan Grafton), Wednesday, 27 January 2016 22:39 (eight years ago) link

xpost i don't know, do what you want i guess

i guess it's likely that i'm just totally misunderstanding what you're saying. i mean, you're saying that emulating "consciousness" is a pre-requisite for superintelligence, right? (earlier you wrote "humanity's never really gotten around to a good working definition of what "consciousness" is but here we are thinking we have some fancy new way to create it"). i'm trying (and failing) to argue that it's not necessary to obtain superintelligent results.

i guess an unstated premise i'm using that others might not agree with is that our brains and computers are already very similar. when i feel "sad" i think that's the result of many stimuli working in concert, leading to my neurons doing what they do.

loool sufjan

Karl Malone, Wednesday, 27 January 2016 22:42 (eight years ago) link

also, i guess an obvious point, but there are AI research paths like machine learning that aren't trying to emulate the brain's behavior, and certainly aren't trying to create "consciousness"

Karl Malone, Wednesday, 27 January 2016 22:44 (eight years ago) link

anyway lolz aside I'm gonna take a crack at this cuz it's a slow day at work

assume that 100 people are asked to film themselves while thinking of either a tragic or sexy memory. a test group of humans then views the videos and guesses the emotions on display

this scenario is subject to a lot of problems that plague sociology/psychology experiments, not all of which can be controlled for. Are "tragic" and "sexy" memories actually typically accompanied by facial expressions? (OK tears stereotypically accompany "tragic", but "sexy"? I dunno what facial expression correlates to "sexy"). Are the test subjects intentionally emoting for the camera or otherwise not presenting an objective sample set? Do the people doing the filming write down or otherwise indicate what they're thinking of while they're being filmed? How reliable is that? What if their faces are not expressive? Are they all filmed the same way (lighting and framing do a lot of work with film...)? etc. etc.

"react accordingly": i don't know, i guess it would just be recognizing when someone is sad or angry and backing off for a while (in contemporary Siri terms, maybe holding off on the automated reminder that your toddler's doctor's appointment is tomorrow afternoon). or laughing at a joke. or doing the fake laughter thing you have to do when someone that is respected tells a mediocre joke in a public setting.

this is something that actual humans have problems doing. People misread other people's emotional cues *all the time*. It is socialized, learned behavior, and it varies really widely among people, situations, social strata, culture. This is hardly a simple operation for an AI to complete.

xxp

Οὖτις, Wednesday, 27 January 2016 22:49 (eight years ago) link

please keep in mind that i came up with the scenario in less than 60 seconds

Karl Malone, Wednesday, 27 January 2016 22:50 (eight years ago) link

i guess an unstated premise i'm using that others might not agree with is that our brains and computers are already very similar.

yeah I don't agree with this at all. When I was referring to consciousness upthread you can just swap that out for "human brain" or whatever. We have a very very limited understanding of how the brain works. By contrast we have a very detailed understanding of how computers work. There is a massive gap between the two, not the least of which is how our brains manage to process, sift and recall such a vast amount of information while using so little energy.

xp

Οὖτις, Wednesday, 27 January 2016 22:52 (eight years ago) link

AI as "a machine that thinks like a human" is a pretty dated definition, the idea that machine intelligence can augment human cognition in ways that are nearly instant or imperceptible is the goal of most projects, or creating software that can adapt to new situations using past recorded data

the article about the hacker who is trying to out-tesla tesla on the augmented driving front, building a self-driving system that reacts based on recorded human responses to traffic conditions seems to be on the right track, whether or not his work is viable
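
(heavily simplified, that recipe is imitation learning: fit a model to logged (sensor reading, human control) pairs and replay it at drive time. a sketch where the features, data, and linear model are all stand-ins:)

```python
# Crude behavior-cloning sketch: fit a linear map from sensor features
# to the steering command a human produced in the same situation. A real
# system would use camera frames and a neural network; this is a stand-in.

import numpy as np

rng = np.random.default_rng(0)
# Fake driving log: rows of [lane_offset_m, road_curvature, speed_mps]
X = rng.normal(size=(500, 3))
true_w = np.array([-0.8, 2.0, 0.05])                 # pretend human "policy"
y = X @ true_w + rng.normal(scale=0.1, size=500)     # logged steering angles

# Least-squares fit: the simplest way to "learn from recorded humans".
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def steer(features):
    # At drive time, imitate whatever the humans did in similar states.
    return features @ w

print(steer(np.array([0.3, -0.1, 20.0])))            # predicted command
```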

general emulation of things we consider "consciousness" is a route that's well-trodden in the chatbot "can I tell whether this is a human" way and isn't really that important outside of customer support or w/e

μpright mammal (mh), Wednesday, 27 January 2016 22:55 (eight years ago) link

