Artificial intelligence still has some way to go

with sexy results

Οὖτις, Wednesday, 27 January 2016 23:08 (eight years ago) link

we've come a long way. our computers' sexy memories are now not so different from our own.

I expel a minor traveler's flatulence (Sufjan Grafton), Wednesday, 27 January 2016 23:09 (eight years ago) link

ilx plays a mind forever voyaging imo

denies the existence of dark matter (difficult listening hour), Wednesday, 27 January 2016 23:11 (eight years ago) link

at ilx, we've developed an ai that is convinced it left its sunglasses in the booth at lunch as those very sunglasses sit atop its monitor

I expel a minor traveler's flatulence (Sufjan Grafton), Wednesday, 27 January 2016 23:12 (eight years ago) link

chilling

denies the existence of dark matter (difficult listening hour), Wednesday, 27 January 2016 23:14 (eight years ago) link

I think that people are definitely trying to build computers/AIs that they can't understand (see my memristor article above, or even certain types of machine-learning). These also seem like the ones (IMO) that are most likely to yield the most interesting AIs or consciousnesses.

schwantz, Wednesday, 27 January 2016 23:20 (eight years ago) link

oh hey this thread

The DeepMind Go thing looks really really cool and I'll definitely read the paper but it's basically a big search problem with a relatively small representation and a clear reward signal. It's nothing like learning to act within the complexity of the real world, which is the big thing that nobody has any idea how to do.

conditional random jepsen (seandalai), Thursday, 28 January 2016 00:25 (eight years ago) link

I sure don't

μpright mammal (mh), Thursday, 28 January 2016 00:25 (eight years ago) link

mh & km, you talk of emergent results, but what results are these? What do you expect your hypothetical non-anthropomorphic AI to do? And how will the AI do this without some (necessarily anthropomorphic?) semantic understanding? To accomplish anything that would impress me or shakey, an AI would have to manipulate things in the world, take a variety of sensory (and to us, possibly extrasensory) measurements, and "think" in a way that allowed it to either create something novel or make a useful "true" "assertion" (and this latter accomplishment would require semantic understanding in order to communicate that assertion).

You seem loath to anthropomorphize AI, but I'm skeptical that useful AI accomplishments can be achieved without very human-like semantic understanding.

I'd also like to argue with the proposed timeline that's been touted itt, as if a hard-coded parlor trick (computers can beat humans at rock-paper-scissors, too) means that AI has reached "baby level." It hasn't, and I'm skeptical that we've even reached "earthworm level" (cf. https://en.wikipedia.org/wiki/OpenWorm).

Have you read this?
http://www.skeptic.com/eskeptic/06-08-25/#feature

It's 9 years old, and I can hardly say with confidence that it's irrefutable, but the article makes a convincing, comprehensive case against anything but narrowly specific, hard-coded AI (like a program that plays Go).

I'd like to see an argument as to how, e.g., google will ever remotely understand what the hell I want on the Internet.

bamcquern, Thursday, 28 January 2016 00:30 (eight years ago) link

Google is pretty good at understanding what people want on the Internet tbh. Maybe just not you.

conditional random jepsen (seandalai), Thursday, 28 January 2016 00:35 (eight years ago) link

google has virtually no semantic understanding

bamcquern, Thursday, 28 January 2016 00:36 (eight years ago) link

and its basic underlying principles don't even try to

bamcquern, Thursday, 28 January 2016 00:37 (eight years ago) link

Comparing AI to organic intelligences isn't really that informative - their strengths and weaknesses are so different.

conditional random jepsen (seandalai), Thursday, 28 January 2016 00:37 (eight years ago) link

But what is it that AI-proponents itt expect AI to eventually do?

bamcquern, Thursday, 28 January 2016 00:39 (eight years ago) link

Google doesn't need a whole lot of "semantic understanding" to do a good job of ranking search results. They do have more than "virtually no" semantic machinery anyway - the Knowledge Graph is a big part of their system these days.

conditional random jepsen (seandalai), Thursday, 28 January 2016 00:40 (eight years ago) link

put a lot of people out of work xp

conditional random jepsen (seandalai), Thursday, 28 January 2016 00:41 (eight years ago) link

be a terrible replacement for spurned religious beliefs

bicyclescope (mattresslessness), Thursday, 28 January 2016 00:41 (eight years ago) link

xp That's really the only thing I'm sure about in the medium-term. I don't think that means that AI is bad or dangerous, but society will need to work out how to handle a jump in unemployment long before it has to worry about killer superintelligences.

conditional random jepsen (seandalai), Thursday, 28 January 2016 00:42 (eight years ago) link

http://www.ncbi.nlm.nih.gov/pubmed/8110662

μpright mammal (mh), Thursday, 28 January 2016 00:44 (eight years ago) link

the rhetoric of inevitability around ai is so maddeningly stupid, where the hell did it come from?

bicyclescope (mattresslessness), Thursday, 28 January 2016 00:45 (eight years ago) link

again, mostly repetition using human-known rules, but molecular design using AI to address combinatorial problems can give results that would otherwise require brute force and might be unintuitive to humans -- coming up with novel solutions that people might not stumble upon

molecular modeling in genetics is huge right now

μpright mammal (mh), Thursday, 28 January 2016 00:48 (eight years ago) link

Like if tesla or whoever builds a self-driving car, that would be a pretty incredible (and seemingly inevitable) accomplishment, but guidance systems aren't new, and it would hardly be "strong AI."

"Ranking search results" is not an admirable feat. That's glorified punch card territory.

Knowledge Graph is scarcely functionally semantic. I don't care that google can crib the appropriate parts of wikipedia and put them in an infobox.

bamcquern, Thursday, 28 January 2016 00:49 (eight years ago) link

I accept that AI research is useful, but using an AI tool so that a human can make a decision about a drug is a far cry from KM's implied "earthworm to baby to Einstein" scenario.

bamcquern, Thursday, 28 January 2016 00:50 (eight years ago) link

self-driving cars are kind of defeatist when it comes to artificial intelligence, because by design they have to emulate actions humans take -- the control systems of cars, the shape of roads, and reacting to other drivers not under the same control (although organizations are starting to recognize the need for cars to be able to communicate with each other) mean they're stuck with a number of constraints

μpright mammal (mh), Thursday, 28 January 2016 00:54 (eight years ago) link

general purpose language semantics are kind of a brick wall when it comes to knowledge, but it's not inconceivable that you could have a biomass-consuming big old robot lumbering through the countryside that would be self-sustaining, maybe eventually self-repairing, and would be able to learn from past interactions what works and what doesn't, assuming one of the things it has to learn is not to walk into a canyon or river

the problem being that spoken or written language is the basis for shared knowledge, and there's no real "language" of artificial beings

μpright mammal (mh), Thursday, 28 January 2016 00:57 (eight years ago) link

yet

μpright mammal (mh), Thursday, 28 January 2016 00:57 (eight years ago) link

it comes down to whether you think human cognition is a special thing, or just an amazingly huge number of iterations of things that worked or didn't, and we don't have the equivalent of randomly throwing molecules together over billions of years until single cell organisms rise up

μpright mammal (mh), Thursday, 28 January 2016 00:59 (eight years ago) link

it's not quite what we're going for in that we are looking for a particular response, but simple steps toward programs that write programs to create that intended response exist:
http://www.primaryobjects.com/2013/01/27/using-artificial-intelligence-to-write-self-modifying-improving-programs/

μpright mammal (mh), Thursday, 28 January 2016 01:04 (eight years ago) link

To repeat myself

New Yorker magazine alert thread

service desk hardman (El Tomboto), Thursday, 28 January 2016 01:06 (eight years ago) link

that's why we have to have the computer program it for us, then program a better version, and continue onward for a few trillion cycles

μpright mammal (mh), Thursday, 28 January 2016 01:10 (eight years ago) link

and then it forgets the password to itself because storage capacity turns out to be finite

service desk hardman (El Tomboto), Thursday, 28 January 2016 01:14 (eight years ago) link

Like if tesla or whoever builds a self-driving car, that would be a pretty incredible (and seemingly inevitable) accomplishment, but guidance systems aren't new, and it would hardly be "strong AI."

"Ranking search results" is not an admirable feat. That's glorified punch card territory.

Knowledge Graph is scarcely functionally semantic. I don't care that google can crib the appropriate parts of wikipedia and put them in an infobox.

― bamcquern, Thursday, January 28, 2016 12:49 AM (15 minutes ago)

I actually agree that AI with an understanding of how it affects the world, creativity, real conversational ability is not going to turn up any day soon. But "glorified punch card territory" is horseshit.

Anyway: AI is whatever hasn't been done yet

conditional random jepsen (seandalai), Thursday, 28 January 2016 01:14 (eight years ago) link

Yes, obviously it's glorified bazillions of punch cards territory

service desk hardman (El Tomboto), Thursday, 28 January 2016 01:22 (eight years ago) link

it's the infinite monkeys/typewriters problem, only there's a set number of monkeys (although faster monkeys keep appearing every day), everything every monkey has ever typed is available for reference, and you start out with a prescribed outcome of a copy of Hamlet

then you introduce the problem that you want something in the vein of Hamlet, but with some new plot twists, but you don't have humans capable of saying whether the result is sensible or good. so you need some parameters, like grammatical rules, and some other way to evaluate whether the story is any good without employing infinitely many humans

μpright mammal (mh), Thursday, 28 January 2016 01:22 (eight years ago) link
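(The monkeys-plus-parameters setup mh describes is essentially a genetic algorithm with cumulative selection. A toy sketch in Python, where a fixed target string stands in for Hamlet and character matches stand in for "is the story any good" - both, obviously, gross simplifications:)

```python
import random

# hypothetical stand-ins: TARGET plays the role of Hamlet, and fitness()
# plays the role of the missing "is the story any good" evaluator
TARGET = "to be or not to be"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate):
    # count characters already matching the prescribed outcome
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    # each monkey retypes the parent text, hitting a random key 5% of the time
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

def evolve(monkeys=100):
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while fitness(parent) < len(TARGET):
        offspring = [mutate(parent) for _ in range(monkeys)]
        # keep the parent in the pool so the best text so far is never lost
        parent = max(offspring + [parent], key=fitness)
        generations += 1
    return parent, generations
```

(Cumulative selection converges in hundreds of generations rather than the astronomically long run blind typing would need - which is the whole point of adding the evaluation parameters. The hard part mh identifies is that "fitness" for an actual new Hamlet is exactly the thing nobody knows how to write down.)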

no one should lump me together with mh; i have ~zero expertise (sorry if i implied i did - like most other things i'm interested in, i'm an amateur and easily schooled).

and also sorry if i implied that superintelligence is inevitable. i don't think that. but i do think it's possible, and if it is, it presents incredible problems. i suppose i often fall into the appeal to authority fallacy, but when people like hawking, musk, gates, woz, etc are explicitly warning about AI (from an open letter published last July - "AI technology has reached a point where the deployment of [autonomous weapons] is – practically if not legally – feasible within years, not decades", and they voiced similar warnings on the risks posed by superintelligence) i pay attention. it's possible that everyone on ILX is more knowledgeable than those guys. but... i don't think so. no offense. if there's even a sliver of possibility that they're correct, it's something worth discussing. to my knowledge, i've never seen anyone (here or on the internet in general) rebut nick bostrom's points about the security/containment problems with superintelligence. everything i've read in opposition just attacks the idea of superintelligence ever existing in the first place. so it seems like there's the group of people who dismiss AI in general, and then there's the group of people who are open to the possibility of AI and think it could be a huge existential problem, and very few people in between. since the smartest people in the room fall into the latter category, i tend to pay attention to what they say.

(also i mentioned the earthworm/baby/einstein thing just because i do think it's possible that an AI capable of teaching itself would be able to do so at an exponential rate, i don't think that human intelligence is the ceiling, and the difference between the least and most intelligent humans is not as large as we think it is.)

but for real don't lump me in with mh because i feel sorry for anyone who has to be on the People Who Bring Up Sexy Memories team

Karl Malone, Thursday, 28 January 2016 01:22 (eight years ago) link

imo a number of punch cards equal to the number of atoms on earth might be sufficient

μpright mammal (mh), Thursday, 28 January 2016 01:23 (eight years ago) link

Ever use voice commands on your phone? There's one very practical and widespread recent benefit of AI research.

AdamVania (Adam Bruneau), Thursday, 28 January 2016 01:24 (eight years ago) link

siri is basically dragon naturallyspeaking plus the eliza bot plus twenty years of faster computers

μpright mammal (mh), Thursday, 28 January 2016 01:25 (eight years ago) link
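(The eliza half of that recipe really is just pattern matching over the user's words with canned replies - a minimal sketch, with a couple of made-up rules:)

```python
import random
import re

# a few ELIZA-style rules: a regex over the utterance mapped to canned
# replies that echo the captured fragment back at the user
RULES = [
    (r"i need (.*)", ["why do you need {0}?", "would {0} really help you?"]),
    (r"i am (.*)", ["how long have you been {0}?", "do you enjoy being {0}?"]),
    (r".*", ["please tell me more.", "i see."]),  # catch-all fallback
]

def respond(utterance):
    for pattern, replies in RULES:
        match = re.fullmatch(pattern, utterance.lower().strip())
        if match:
            return random.choice(replies).format(*match.groups())
```

(No semantic understanding anywhere - the program never knows what "sleep" is, it just moves the string around, which is rather the point being made upthread.)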

I had a buddy of mine in undergrad who wrote an evolving algorithm to make drum machine patterns. Inevitably after a few iterations trying to select for the grooviest phattest funkiest loops around, we would end up with a hit on nearly every 16th-note step, so even with some built-in preferences for the trad backbeat, fusion jazz fills were what you got. It was a fun project though.

service desk hardman (El Tomboto), Thursday, 28 January 2016 01:29 (eight years ago) link
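(That failure mode is easy to reproduce: if the fitness function rewards hit density, selection dutifully fills the bar. A toy sketch with a deliberately naive "funkiness" measure - the names and numbers here are made up for illustration:)

```python
import random

STEPS = 16  # one bar of 16th notes; True = drum hit on that step

def fitness(pattern):
    # naive proxy for "grooviest phattest funkiest": more hits = better.
    # this is exactly the bias that produces wall-to-wall 16th notes
    return sum(pattern)

def mutate(pattern, rate=0.1):
    return [not step if random.random() < rate else step for step in pattern]

def evolve(generations=60, pop_size=30):
    pop = [[random.random() < 0.3 for _ in range(STEPS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # keep the "funkiest" half (elitism)
        pop = parents + [mutate(random.choice(parents))
                         for _ in range(pop_size - len(parents))]
    # the winner tends toward a hit on nearly every step
    return max(pop, key=fitness)
```

(The fix isn't more generations, it's a better fitness function - penalize density, reward syncopation - and writing that down is the hard, human part, as the fusion-jazz fills demonstrated.)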

xps How is it horseshit? It sorts based on terms, but in a sophisticated ("glorified") way. The thing at the very top of my want-list for AI research is for a search engine to have any clue as to what I'm looking for on the internet.

That brainfuck program that eventually writes "hello" and "reddit" is disappointing.

I would reply to Hofstadter that whatever hasn't happened yet will be just as underwhelming as what has happened. No one says that technology doesn't have the potential to improve efficiency. We're saying that technology is very unlikely to produce anything resembling "strong AI," which is a proposition you're not necessarily arguing against.

I'd go further and say that our service sector, which comprises about 81% of US jobs, is pretty secure from advancements in AI, and will probably merely be augmented and enhanced by it.

bamcquern, Thursday, 28 January 2016 01:36 (eight years ago) link

Self-driving cars will decimate the service sector in a couple of years.

schwantz, Thursday, 28 January 2016 01:38 (eight years ago) link

I would expect the concerns of hawking, musk, gates, et al. over autonomous weapons are not due to their overwhelming strong-AI capability taking over the world, but rather their low cost, mobility and firepower, coupled with the fact that their owners will no doubt have extremely low standards about who these weapons kill or maim. The development of cheap mobile autonomous weapons is just an extension of the idea of land mines or booby traps, which are autonomous weapons once they are put in place, and which are highly indiscriminate.

a little too mature to be cute (Aimless), Thursday, 28 January 2016 01:39 (eight years ago) link

KM, smart celebrities can be wrong, and, yes, you are fallaciously appealing to authority by siding with them because they're smart celebrities.

If a program can teach itself something useful and novel in the vein of a superintelligent being, I think it will be doing it through feedback loops analogous to ours that require sensory input and actual experiences. How else will a superintelligent being develop semantic awareness without those things? And how will superintelligent beings develop exponentially if they're living experiential lives more or less like us?

bamcquern, Thursday, 28 January 2016 01:42 (eight years ago) link

http://www.bls.gov/emp/ep_table_201.htm

transportation and warehousing - 3%

bamcquern, Thursday, 28 January 2016 01:43 (eight years ago) link

I buried the lede on that page that was generating programs in brainfuck -- in the second part, the algorithm ends up generating programs that can add, subtract, and reverse strings -- all without having any idea what those operations are

given input and desired outcomes, it figured out subtraction

μpright mammal (mh), Thursday, 28 January 2016 01:49 (eight years ago) link
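(Not the brainfuck setup from the linked article, but the same idea in miniature: search over a space of candidate programs until one reproduces every input/output example, with no built-in notion of what "subtraction" means. Brute enumeration stands in here for the article's genetic search:)

```python
import itertools
import operator

# input/output examples the search must satisfy -- the answer happens to be
# subtraction, but the search has no idea such an operation exists
EXAMPLES = [((5, 3), 2), ((10, 4), 6), ((7, 7), 0)]

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def synthesize(examples):
    # enumerate every tiny candidate program of the form "<arg> <op> <arg>"
    # and return the first one consistent with all examples
    for (name, op), (a, b) in itertools.product(OPS.items(),
                                                [("x", "y"), ("y", "x")]):
        def program(x, y, op=op, a=a):
            first, second = (x, y) if a == "x" else (y, x)
            return op(first, second)
        if all(program(*inputs) == output for inputs, output in examples):
            return f"{a} {name} {b}", program
```

(Enumeration works because this program space is tiny; the genetic-programming version in the article searches a vastly larger space by mutating and recombining candidate programs and scoring them against the examples, which is how it "figures out" subtraction without ever being told about it.)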

I completely buy that autonomous weapons are likely to be deployed within the next two decades! I totally buy that. I am also 100% confident that they will have multiple disastrous flaws that make them not at all an existential threat, and unlikely to be much of a threat at all to any sufficiently prepared and equipped target.

I'm far, far more concerned about the threat of pervasive semi-autonomous civilian "intelligence" that just happens to be readily exploited and abused by any half-curious IT dropout. Part of that is because it's my job, and the other part is that, because it's my job, I get to be intimately aware of how atrocious and shoddy all this shit is. Move fast and break things, indeed.

service desk hardman (El Tomboto), Thursday, 28 January 2016 01:52 (eight years ago) link

google has virtually no semantic understanding

― bamcquern, Wednesday, January 27, 2016 4:36 PM (1 hour ago)

Absolutely! And this has been reinforced by ilx ... so many missed posts due to bad timing due to irrelevant Google Image Search results.

sarahell, Thursday, 28 January 2016 01:53 (eight years ago) link

An AI that learns through experiences but that happens to live on the infrastructure that our current IT ecosystem lives on - i.e. we don't in the meantime develop all-new kinds of memory and transistors and operating systems and trust networks that are basically nothing at all like what we have now - is going to be like an earthworm that becomes a baby that then becomes an army of pubescent Von Neumanns that all die instantly as soon as Mozilla decides their CA is no good

service desk hardman (El Tomboto), Thursday, 28 January 2016 02:02 (eight years ago) link

idk man every workplace has those proxies that install trusted root certs that let them crack open and spy on https sessions

μpright mammal (mh), Thursday, 28 January 2016 02:22 (eight years ago) link

