Artificial intelligence still has some way to go

xpost i don't know, do what you want i guess

i guess it's likely that i'm just totally misunderstanding what you're saying. i mean, you're saying that emulating "consciousness" is a prerequisite for superintelligence, right? (earlier you wrote "humanity's never really gotten around to a good working definition of what 'consciousness' is but here we are thinking we have some fancy new way to create it"). i'm trying (and failing) to argue that it's not necessary to obtain superintelligent results.

i guess an unstated premise i'm using that others might not agree with is that our brains and computers are already very similar. when i feel "sad" i think that's the result of many stimuli working in concert, leading to my neurons doing what they do.

loool sufjan

Karl Malone, Wednesday, 27 January 2016 22:42 (eight years ago) link

also, i guess an obvious point, but there are AI research paths like machine learning that aren't trying to emulate the brain's behavior, and certainly aren't trying to create "consciousness"

Karl Malone, Wednesday, 27 January 2016 22:44 (eight years ago) link

anyway lolz aside I'm gonna take a crack at this cuz it's a slow day at work

assume that 100 people are asked to film themselves while thinking of either a tragic or sexy memory. a test group of humans then views the videos and guesses the emotions on display

this scenario is subject to a lot of problems that plague sociology/psychology experiments, not all of which can be controlled for. Are "tragic" and "sexy" memories actually typically accompanied by facial expressions? (OK, tears stereotypically accompany "tragic", but "sexy"? I dunno what facial expression correlates to "sexy".) Are the test subjects intentionally emoting for the camera or otherwise not presenting an objective sample set? Do the people doing the filming write down or otherwise indicate what they're thinking of while they're being filmed? How reliable is that? What if their faces are not expressive? Are they all filmed the same way (lighting and framing do a lot of work with film...)? etc. etc.

"react accordingly": i don't know, i guess it would just be recognizing when someone is sad or angry and backing off for a while (in contemporary Siri terms, maybe holding off on the automated reminder that your toddler's doctor's appointment is tomorrow afternoon). or laughing at a joke. or doing the fake laughter thing you have to do when someone that is respected tells a mediocre joke in a public setting.

this is something that actual humans have problems doing. People misread other people's emotional cues *all the time*. It is socialized, learned behavior, and it varies really widely among people, situations, social strata, culture. This is hardly a simple operation for an AI to complete.

xxp

Οὖτις, Wednesday, 27 January 2016 22:49 (eight years ago) link
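
for what it's worth, the measurable core of KM's scenario is just agreement between the subjects' self-reported labels and the viewers' guesses. a minimal sketch in Python; all the data here is invented placeholder stuff, including the 70% "reliable reader" assumption:

```python
import random
from collections import Counter

random.seed(0)
EMOTIONS = ["tragic", "sexy"]

# invented data: each of 100 subjects self-reports which memory they
# recalled, and one viewer guesses from the video alone
labels = [random.choice(EMOTIONS) for _ in range(100)]
# assume a viewer who reads the face correctly 70% of the time and
# otherwise guesses at random
guesses = [l if random.random() < 0.7 else random.choice(EMOTIONS)
           for l in labels]

accuracy = sum(g == l for g, l in zip(guesses, labels)) / len(labels)
confusion = Counter(zip(labels, guesses))  # (actual, guessed) -> count

print(f"accuracy: {accuracy:.2f}")
for (actual, guessed), n in sorted(confusion.items()):
    print(f"actually {actual}, guessed {guessed}: {n}")
```

none of which answers shakey's objections about where the self-reported labels come from in the first place.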

please keep in mind that i came up with the scenario in less than 60 seconds

Karl Malone, Wednesday, 27 January 2016 22:50 (eight years ago) link

i guess an unstated premise i'm using that others might not agree with is that our brains and computers are already very similar.

yeah I don't agree with this at all. When I was referring to consciousness upthread you can just swap that out for "human brain" or whatever. We have a very very limited understanding of how the brain works. By contrast we have a very detailed understanding of how computers work. There is a massive gap between the two, not the least of which is how our brains manage to process, sift and recall such a vast amount of information while using so little energy.

xp

Οὖτις, Wednesday, 27 January 2016 22:52 (eight years ago) link

AI as "a machine that thinks like a human" is a pretty dated definition, the idea that machine intelligence can augment human cognition in ways that are nearly instant or imperceptible is the goal of most projects, or creating software that can adapt to new situations using past recorded data

the article about the hacker who is trying to out-tesla tesla on the augmented driving front, building a self-driving system that reacts based on recorded human responses to traffic conditions, seems to be on the right track, whether or not his work is viable

general emulation of things we consider "consciousness" is a route that's well-trodden in the chatbot "can I tell whether this is a human" way and isn't really that important outside of customer support or w/e

μpright mammal (mh), Wednesday, 27 January 2016 22:55 (eight years ago) link
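
the "reacts based on recorded human responses" approach mh mentions is essentially behavior cloning: look up the nearest recorded situation and repeat what the driver did there. a toy sketch, with invented features and data:

```python
import math

# hypothetical recorded data: (speed_mph, gap_to_car_ahead_m) -> action
recorded = [
    ((30.0,  5.0), "brake"),
    ((30.0, 40.0), "hold speed"),
    ((60.0, 15.0), "brake"),
    ((60.0, 80.0), "accelerate"),
]

def react(speed, gap):
    """Return the action a human took in the nearest recorded situation."""
    def dist(entry):
        (s, g), _ = entry
        return math.hypot(s - speed, g - gap)
    return min(recorded, key=dist)[1]

print(react(55.0, 12.0))  # -> "brake" (nearest recording is (60, 15))
```

a real system would generalize with a learned model instead of a lookup table, but the constraint is visible even here: the car can only ever do what some human already did.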

imo we're going to find out more about the human brain by creating systems that learn than we are going to create systems that learn by determining how the human brain works

μpright mammal (mh), Wednesday, 27 January 2016 22:56 (eight years ago) link

the idea that machine intelligence can augment human cognition in ways that are nearly instant or imperceptible is the goal of most projects

sure, this is something we're already living with.

but when people talk about AI superintelligences taking over, I don't think this is what they're referring to - they're referring to something that not only does what a human brain can do, but does it exponentially better. And we're nowhere near the former, much less the latter.

Οὖτις, Wednesday, 27 January 2016 22:58 (eight years ago) link

I think it's more a matter of creating systems with a gestalt decision-making process or evolutionary algorithm that comes up with things humans would not, or possibly could not even conceive of

making machines think like humans is silly, imo, we should determine the better parts of abstract reasoning and develop that

μpright mammal (mh), Wednesday, 27 January 2016 23:00 (eight years ago) link

machines that not only _do not_ do what human brains do, but do things so differently that it seems foreign to our ideas of cognition

μpright mammal (mh), Wednesday, 27 January 2016 23:01 (eight years ago) link

that makes more sense to me than trying to build the nine millionth robot that can't walk through a door

Οὖτις, Wednesday, 27 January 2016 23:04 (eight years ago) link

(just to bring it all back full circle)

Οὖτις, Wednesday, 27 January 2016 23:04 (eight years ago) link

yes

i always warn against conceptually anthropomorphizing AI in these kinds of discussions, and then end up in a wormhole of rebutting anthropomorphic arguments anyway. and inevitably i mention sexy memories and things fall apart

Karl Malone, Wednesday, 27 January 2016 23:06 (eight years ago) link

hey you're the one that said "our brains and computers are already very similar"

Οὖτις, Wednesday, 27 January 2016 23:08 (eight years ago) link

with sexy results

Οὖτις, Wednesday, 27 January 2016 23:08 (eight years ago) link

we've come a long way. our computers' sexy memories are now not so different from our own.

I expel a minor traveler's flatulence (Sufjan Grafton), Wednesday, 27 January 2016 23:09 (eight years ago) link

ilx plays a mind forever voyaging imo

denies the existence of dark matter (difficult listening hour), Wednesday, 27 January 2016 23:11 (eight years ago) link

at ilx, we've developed an ai that is convinced it left its sunglasses in the booth at lunch as those very sunglasses sit atop its monitor

I expel a minor traveler's flatulence (Sufjan Grafton), Wednesday, 27 January 2016 23:12 (eight years ago) link

chilling

denies the existence of dark matter (difficult listening hour), Wednesday, 27 January 2016 23:14 (eight years ago) link

I think that people are definitely trying to build computers/AIs that they can't understand (see my memristor article above, or even certain types of machine-learning). These also seem like the ones (IMO) that are most likely to yield the most interesting AIs or consciousnesses.

schwantz, Wednesday, 27 January 2016 23:20 (eight years ago) link

oh hey this thread

The DeepMind Go thing looks really really cool and I'll definitely read the paper but it's basically a big search problem with a relatively small representation and a clear reward signal. It's nothing like learning to act within the complexity of the real world, which is the big thing that nobody has any idea how to do.

conditional random jepsen (seandalai), Thursday, 28 January 2016 00:25 (eight years ago) link
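
to make "big search problem with a clear reward signal" concrete: when the rules and payoff are fully known, plain tree search already plays small games perfectly. a minimal negamax sketch on the take-1-to-3-stones game of Nim (this is nothing like DeepMind's actual system, which combines neural networks with Monte Carlo tree search):

```python
from functools import lru_cache

MAX_TAKE = 3  # each turn a player removes 1-3 stones; taking the last stone wins

@lru_cache(maxsize=None)
def value(stones):
    """+1 if the player to move wins with perfect play, -1 if they lose."""
    if stones == 0:
        return -1  # the previous player took the last stone and won
    return max(-value(stones - take)
               for take in range(1, min(MAX_TAKE, stones) + 1))

def best_move(stones):
    return max(range(1, min(MAX_TAKE, stones) + 1),
               key=lambda take: -value(stones - take))

print(value(8), best_move(8))  # -1: multiples of 4 lose, every move is bad
print(value(9), best_move(9))  # +1: take 1, leave the opponent a multiple of 4
```

Go's problem is that the tree is astronomically bigger and there's no cheap way to score a half-finished board, which is where the learned networks come in.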

I sure don't

μpright mammal (mh), Thursday, 28 January 2016 00:25 (eight years ago) link

mh & km, you talk of emergent results, but what results are these? What do you expect your hypothetical non-anthropomorphic AI to do? And how will the AI do this without some (necessarily anthropomorphic?) semantic understanding? To accomplish anything that would impress me or shakey, an AI would have to manipulate things in the world, take a variety of sensory (and to us, possibly extrasensory) measurements, and "think" in a way that allowed it to either create something novel or make a useful "true" "assertion" (and this latter accomplishment would require semantic understanding in order to communicate that assertion).

You seem loath to anthropomorphize AI, but I'm skeptical that useful AI accomplishments can be achieved without very human-like semantic understanding.

I'd also like to argue with the proposed timeline that's been touted itt, as if a hard-coded parlor trick (computers can beat humans at rock-paper-scissors, too) means that AI has reached "baby level." It hasn't, and I'm skeptical that we've even reached "earthworm level" (cf. https://en.wikipedia.org/wiki/OpenWorm).

Have you read this?
http://www.skeptic.com/eskeptic/06-08-25/#feature

It's 9 years old, and I can hardly say with confidence that it's irrefutable, but the article makes a convincing, comprehensive case against anything but narrowly specific, hard-coded AI (like a program that plays Go).

I'd like to see an argument as to how, e.g., google will ever remotely understand what the hell I want on the Internet.

bamcquern, Thursday, 28 January 2016 00:30 (eight years ago) link

Google is pretty good at understanding what people want on the Internet tbh. Maybe just not you.

conditional random jepsen (seandalai), Thursday, 28 January 2016 00:35 (eight years ago) link

google has virtually no semantic understanding

bamcquern, Thursday, 28 January 2016 00:36 (eight years ago) link

and its basic underlying principles don't even try to

bamcquern, Thursday, 28 January 2016 00:37 (eight years ago) link

Comparing AI to organic intelligences isn't really that informative - their strengths and weaknesses are so different.

conditional random jepsen (seandalai), Thursday, 28 January 2016 00:37 (eight years ago) link

But what is it that AI-proponents itt expect AI to eventually do?

bamcquern, Thursday, 28 January 2016 00:39 (eight years ago) link

Google doesn't need a whole lot of "semantic understanding" to do a good job of ranking search results. They do have more than a "virtually no" component that explicitly handles this stuff anyway - the Knowledge Graph is a big part of their system these days.

conditional random jepsen (seandalai), Thursday, 28 January 2016 00:40 (eight years ago) link
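
for what "explicitly handles this stuff" roughly amounts to: a knowledge graph is, at bottom, a huge pile of (entity, relation, entity) facts that can be looked up and chained without any deeper understanding. a toy illustration with made-up triples:

```python
# a tiny made-up triple store in the knowledge-graph style
triples = {
    ("Hamlet", "written_by", "William Shakespeare"),
    ("William Shakespeare", "born_in", "Stratford-upon-Avon"),
    ("Hamlet", "genre", "Tragedy"),
}

def lookup(subject, relation):
    return [o for s, r, o in triples if s == subject and r == relation]

# answering "where was the author of Hamlet born?" is two chained lookups,
# no semantics required
author = lookup("Hamlet", "written_by")[0]
print(lookup(author, "born_in"))  # ['Stratford-upon-Avon']
```

whether lookup-and-chain counts as "functionally semantic" is exactly the disagreement here.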

put a lot of people out of work xp

conditional random jepsen (seandalai), Thursday, 28 January 2016 00:41 (eight years ago) link

be a terrible replacement for spurned religious beliefs

bicyclescope (mattresslessness), Thursday, 28 January 2016 00:41 (eight years ago) link

xp That's really the only thing I'm sure about in the medium-term. I don't think that means that AI is bad or dangerous, but society will need to work out how to handle a jump in unemployment long before it has to worry about killer superintelligences.

conditional random jepsen (seandalai), Thursday, 28 January 2016 00:42 (eight years ago) link

http://www.ncbi.nlm.nih.gov/pubmed/8110662

μpright mammal (mh), Thursday, 28 January 2016 00:44 (eight years ago) link

the rhetoric of inevitability around ai is so maddeningly stupid, where the hell did it come from?

bicyclescope (mattresslessness), Thursday, 28 January 2016 00:45 (eight years ago) link

again, it's mostly repetition using human-known rules, but molecular design using AI to address combinatorial problems can give results that could in principle be found through brute-force repetition but might be unintuitive to humans -- novel solutions that people might not stumble upon

molecular modeling in genetics is huge right now

μpright mammal (mh), Thursday, 28 January 2016 00:48 (eight years ago) link
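
the combinatorial-search point, in code: a genetic algorithm keeps, recombines, and mutates the best-scoring candidates, and turns up high-scoring designs without enumerating the whole space. a toy sketch where the "molecules" are bitstrings and the fitness function is an invented stand-in (real molecular design would score candidate structures with chemistry models):

```python
import random

random.seed(1)
GENOME_LEN, POP, GENERATIONS = 20, 40, 60

def fitness(bits):
    # stand-in objective: reward alternating bits (any scoring model works here)
    return sum(a != b for a, b in zip(bits, bits[1:]))

def mutate(bits, rate=0.05):
    # flip each bit with small probability
    return [b ^ (random.random() < rate) for b in bits]

def crossover(a, b):
    # splice two parents at a random cut point
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 4]  # keep the top quarter as breeding stock
    pop = parents + [mutate(crossover(*random.sample(parents, 2)))
                     for _ in range(POP - len(parents))]

print(max(fitness(p) for p in pop))  # approaches the optimum of 19
```

nothing in the loop knows any chemistry; the search behavior comes entirely from whatever the scoring function rewards, which is how the unintuitive-to-humans results happen.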

Like if tesla or whoever builds a self-driving car, that would be a pretty incredible (and seemingly inevitable) accomplishment, but guidance systems aren't new, and it would hardly be "strong AI."

"Ranking search results" is not an admirable feat. That's glorified punch card territory.

Knowledge Graph is scarcely functionally semantic. I don't care that google can crib the appropriate parts of wikipedia and put them in an infobox.

bamcquern, Thursday, 28 January 2016 00:49 (eight years ago) link

I accept that AI research is useful, but using an AI tool so that a human can make a decision about a drug is a far cry from KM's implied "earthworm to baby to Einstein" scenario.

bamcquern, Thursday, 28 January 2016 00:50 (eight years ago) link

self-driving cars are kind of defeatist when it comes to artificial intelligence, because by design they have to emulate actions humans do -- the control systems of cars, the shape of roads, and reacting to other drivers not under the same control (although organizations are starting to recognize the need for cars to be able to communicate with other cars) mean they're stuck with a number of constraints

μpright mammal (mh), Thursday, 28 January 2016 00:54 (eight years ago) link

general purpose language semantics are kind of a brick wall when it comes to knowledge, but it's not inconceivable that you could have a biomass-consuming big old robot lumbering through the countryside that would be self-sustaining, maybe eventually self-repairing, and would be able to learn from past interactions what works and what doesn't, assuming one of the things it has to learn is not to walk into a canyon or river

the problem being that spoken or written language is the basis for shared knowledge, and there's no real "language" of artificial beings

μpright mammal (mh), Thursday, 28 January 2016 00:57 (eight years ago) link

yet

μpright mammal (mh), Thursday, 28 January 2016 00:57 (eight years ago) link

it comes down to whether you think human cognition is a special thing, or just an amazingly huge number of iterations of things that worked or didn't, and we don't have the equivalent of randomly throwing molecules together over billions of years until single cell organisms rise up

μpright mammal (mh), Thursday, 28 January 2016 00:59 (eight years ago) link

it's not quite what we're going for in that we are looking for a particular response, but a simple step toward programs that write programs to create that intended response exists:
http://www.primaryobjects.com/2013/01/27/using-artificial-intelligence-to-write-self-modifying-improving-programs/

μpright mammal (mh), Thursday, 28 January 2016 01:04 (eight years ago) link
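
the linked article evolves tiny programs (in a Brainfuck-like language, iirc) with a genetic algorithm until their output matches a target string. the same idea in miniature: randomly generate small expression programs until one reproduces the intended responses. everything below is an invented toy, not the article's code:

```python
import random

random.seed(2)
OPS = [("add", lambda a, b: a + b), ("mul", lambda a, b: a * b)]

def random_program(depth=3):
    """Build a random expression tree over the input x and small constants."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", random.randint(0, 4)])
    name, _ = random.choice(OPS)
    return (name, random_program(depth - 1), random_program(depth - 1))

def run(prog, x):
    """Evaluate an expression tree on input x."""
    if prog == "x":
        return x
    if isinstance(prog, int):
        return prog
    name, left, right = prog
    return dict(OPS)[name](run(left, x), run(right, x))

# intended response: f(x) = 3x + 2, specified only by examples
examples = [(0, 2), (1, 5), (2, 8), (3, 11)]

def score(prog):
    return sum(abs(run(prog, x) - y) for x, y in examples)

best = min((random_program() for _ in range(20000)), key=score)
print(best, score(best))  # best program found; total error 0 means exact match
```

this only works because the intended responses are fully specified in advance; a genetic-algorithm version searches the same space, just less blindly.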

To repeat myself

New Yorker magazine alert thread

service desk hardman (El Tomboto), Thursday, 28 January 2016 01:06 (eight years ago) link

that's why we have to have the computer program it for us, then program a better version, and continue onward for a few trillion cycles

μpright mammal (mh), Thursday, 28 January 2016 01:10 (eight years ago) link

and then it forgets the password to itself because storage capacity turns out to be finite

service desk hardman (El Tomboto), Thursday, 28 January 2016 01:14 (eight years ago) link

Like if tesla or whoever builds a self-driving car, that would be a pretty incredible (and seemingly inevitable) accomplishment, but guidance systems aren't new, and it would hardly be "strong AI."

"Ranking search results" is not an admirable feat. That's glorified punch card territory.

Knowledge Graph is scarcely functionally semantic. I don't care that google can crib the appropriate parts of wikipedia and put them in an infobox.

― bamcquern, Thursday, January 28, 2016 12:49 AM (15 minutes ago)

I actually agree that AI with an understanding of how it affects the world, creativity, and real conversational ability is not going to turn up any day soon. But "glorified punch card territory" is horseshit.

Anyway: AI is whatever hasn't been done yet

conditional random jepsen (seandalai), Thursday, 28 January 2016 01:14 (eight years ago) link

Yes, obviously it's glorified bazillions of punch cards territory

service desk hardman (El Tomboto), Thursday, 28 January 2016 01:22 (eight years ago) link

it's the infinite monkeys/typewriters problem, only there's a set number of monkeys (although faster monkeys keep appearing every day), everything every monkey has ever typed is available for reference, and you start out with a prescribed outcome of a copy of Hamlet

then you introduce the problem that you want something in the vein of Hamlet, but with some new plot twists, and you don't have humans capable of saying whether the result is sensible or good. so you need some parameters, like grammatical rules, and some other way to evaluate whether the story is any good without employing infinitely many humans

μpright mammal (mh), Thursday, 28 January 2016 01:22 (eight years ago) link
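
the constrained-monkeys setup is basically Dawkins' old "weasel" demonstration: once every generation keeps the copy that scores best against the target, the search collapses from effectively-never to seconds. a minimal version, with an assumed 100 monkeys and 5% per-character typo rate:

```python
import random
import string

random.seed(3)
TARGET = "TO BE OR NOT TO BE"
ALPHABET = string.ascii_uppercase + " "

def score(text):
    # number of characters already matching the prescribed outcome
    return sum(a == b for a, b in zip(text, TARGET))

attempt = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while attempt != TARGET:
    generation += 1
    # 100 monkeys each retype the best attempt so far, with typos;
    # keep whichever copy (or the original) matches the target best
    copies = ["".join(c if random.random() > 0.05 else random.choice(ALPHABET)
                      for c in attempt) for _ in range(100)]
    attempt = max(copies + [attempt], key=score)

print(f"matched the target in {generation} generations")
```

the second problem in the post above, scoring "something in the vein of Hamlet" with no target string to count matches against, is exactly the part this trick can't do.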

no one should lump me together with mh; i have ~zero expertise (sorry if i implied i did - like most other things i'm interested in, i'm an amateur and easily schooled).

and also sorry if i implied that superintelligence is inevitable. i don't think that. but i do think it's possible, and if it is, it presents incredible problems.

i suppose i often fall into the appeal to authority fallacy, but when people like hawking, musk, gates, woz, etc. are explicitly warning about AI (from an open letter published last July - "AI technology has reached a point where the deployment of [autonomous weapons] is – practically if not legally – feasible within years, not decades", and they voiced similar warnings on the risks posed by superintelligence) i pay attention. it's possible that everyone on ILX is more knowledgeable than those guys. but... i don't think so. no offense. if there's even a sliver of possibility that they're correct, it's something worth discussing.

to my knowledge, i've never seen anyone (here or on the internet in general) rebut nick bostrom's points about the security/containment problems with superintelligence. everything i've read in opposition just attacks the idea of superintelligence ever existing in the first place. so it seems like there's the group of people who dismiss AI in general, and then there's the group of people who are open to the possibility of AI and think it could be a huge existential problem, and very few people in between. since the smartest people in the room fall into the latter category, i tend to pay attention to what they say.

(also i mentioned the earthworm/baby/einstein thing just because i do think it's possible that an AI capable of teaching itself would be able to do so at an exponential rate, that human intelligence is not the ceiling, and that the difference between the least and most intelligent human is not as large as we think it is.)

but for real don't lump me in with mh because i feel sorry for anyone who has to be on the People Who Bring Up Sexy Memories team

Karl Malone, Thursday, 28 January 2016 01:22 (eight years ago) link

