Artificial intelligence still has some way to go


re-route all that shit into designing electric trucks or saline batteries or meat substitutes

Οὖτις, Thursday, 7 November 2019 18:48 (four years ago) link

I think there are more motor vehicles registered in the USA than there are licensed humans to drive them all, and that's just looking at one nation, not the whole world. The developers of self-driving vehicles see numbers like that and they imagine the tsunami of cash that would flow toward anyone with the software to pull it off. With that kind of incentive, billions of dollars seem like petty cash.

A is for (Aimless), Thursday, 7 November 2019 18:59 (four years ago) link

Two cars in every garage, a gun in every hand, another couple cars in the drive, guns in the glove box and trunk, etc.

I'm scared my but won't fit in it. (Old Lunch), Thursday, 7 November 2019 19:08 (four years ago) link

robot cars made out of guns

Οὖτις, Thursday, 7 November 2019 19:11 (four years ago) link

delivering food to smarthomes

Οὖτις, Thursday, 7 November 2019 19:11 (four years ago) link

my hot take is that although that death above was horrifying and pointless, someone dies in a car wreck every 15 minutes in the US, and a lot of those were horrifying and pointless. i still think the rate of horrifying and pointless deaths will go down, the more automated vehicles take over. at the same time i know that the industry will inevitably enrich multiple completely insane crazy asshole corporation people ceo overlords. i guess i weigh all the saved lives against the addition of yet another new sector to the overcrowded population of crazy asshole corporation people ceo overlords, and think it's worth it

at home in the alternate future, (Karl Malone), Thursday, 7 November 2019 21:40 (four years ago) link

also xps i think electric/automated trucks are already a definite industry hype thing

at home in the alternate future, (Karl Malone), Thursday, 7 November 2019 21:41 (four years ago) link

electric/automated trucks

these are different things

Οὖτις, Thursday, 7 November 2019 21:45 (four years ago) link

yep, i know

at home in the alternate future, (Karl Malone), Thursday, 7 November 2019 21:45 (four years ago) link

^ gives a pretty good sense of what present day AI can achieve with machine learning, as opposed to pre-programmed intelligence. it required a highly limited, highly structured and predictable microcosm, with a minimum of rules and simple objects, but it is still impressive -- if you don't compare it to what futurists tout AI achieving in a decade or two.

A is for (Aimless), Monday, 18 November 2019 21:29 (four years ago) link

is artificial superintelligence an actual thing? when i hear doomsayers talk about it, they're short on specific explanations for what advancements have been made, how they work, and how they take us closer to designing a being that can determine its own ends -- i.e. make decisions.

treeship., Sunday, 1 December 2019 21:29 (four years ago) link

a being? Lots of a.i. can make decisions. For example, one decides what's spam and what isn't with minimal false positives. It certainly does it much faster than any human could. There are many textbooks explaining advancements in machine learning. There are many examples of these advancements in action.

$1,000,000 or 1 bag of honeycrisp apples (Sufjan Grafton), Sunday, 1 December 2019 22:37 (four years ago) link
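The spam decision described above can be sketched in a few lines. This is a toy illustration (my own, not any production filter): score each word by how often it shows up in spam versus non-spam training examples, then sum the log-odds.

```python
from collections import Counter
import math

# Tiny hand-made training sets; a real filter would use millions of messages.
SPAM = ["win money now", "free money offer", "claim your free prize"]
HAM = ["lunch at noon", "meeting notes attached", "see you at the game"]

def word_counts(docs):
    c = Counter()
    for d in docs:
        c.update(d.split())
    return c

spam_counts, ham_counts = word_counts(SPAM), word_counts(HAM)
spam_total, ham_total = sum(spam_counts.values()), sum(ham_counts.values())

def spam_score(text):
    # Sum per-word log-odds, with add-one smoothing so unseen words
    # don't zero out the whole score.
    score = 0.0
    for w in text.split():
        p_spam = (spam_counts[w] + 1) / (spam_total + 2)
        p_ham = (ham_counts[w] + 1) / (ham_total + 2)
        score += math.log(p_spam / p_ham)
    return score

def is_spam(text):
    return spam_score(text) > 0
```

The "decision" is just a threshold on a score; there's no awareness anywhere in it, which is rather the point of the exchange above.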

i guess make decisions is a bad category. it's more like, awareness, which is hard to quantify. or like, yeah, heidegger -- "being"

treeship., Sunday, 1 December 2019 22:52 (four years ago) link

I think maybe the term you're looking for is Artificial General Intelligence (AGI). We're still a long way away from that:

https://www.forbes.com/sites/cognitiveworld/2019/06/10/how-far-are-we-from-achieving-artificial-general-intelligence/#757181466dc4

o. nate, Monday, 2 December 2019 00:46 (four years ago) link

ah ok. thank you.

treeship., Monday, 2 December 2019 00:52 (four years ago) link

http://www.aidungeon.io/

𝔠𝔞𝔢𝔨 (caek), Friday, 6 December 2019 18:08 (four years ago) link

Elementary school deployment of #facialrecognition technology, Nanjing China pic.twitter.com/kR9l8IVKwj

— Matthew Brennan (@mbrennanchina) December 8, 2019

Peaceful Warrior I Poser (Karl Malone), Monday, 9 December 2019 00:46 (four years ago) link

fuuuuuuck that's terrifying.

Fuck the NRA (ulysses), Monday, 9 December 2019 01:17 (four years ago) link

would be better if they paired it with their product preference data so they could get an advertisement at the same time

Peaceful Warrior I Poser (Karl Malone), Monday, 9 December 2019 01:34 (four years ago) link

three weeks pass...

Diane Coyle had a typically astute piece on AI in the FT the other day (££), talking about standards setting for AI – makes two key points.

The other reason for thinking some consensus on limiting adverse uses of AI may be possible is that there are relatively few parties to any discussions, at least for now.

At the moment, only big companies and governments can afford the hardware and computing power needed to run cutting-edge AI applications; others are renting access through cloud services. This size barrier could make it easier to establish some ground rules before the technology becomes more accessible.

this may be slightly optimistic, but it is true that fewer players tends to make it easier to reach consensus on compliance standards. It's not clear what the regulatory body would be (OECD and G20 are both mentioned in the article as having made statements around AI). Given the comparative success of GDPR regulation by the EU beyond the borders of the EU, it may be that a non-global body will end up setting the de facto standards for AI production, data management and use.

The other key bit was

Even then, previous scares offer a hopeful lesson: fears of bioterrorism or nano-terrorism — labelled “weapons of knowledge-enabled mass destruction” in a 2000 Wired article by Bill Joy — seem to have overlooked the fact that advanced technology use depends on complex organisational structures and tacit knowhow, as well as bits of code.

It's certainly been my experience that machine learning technologies are best deployed into structured environments with high levels of existing expertise in the target purpose of the AI (structured environments here can mean anything from clear organisational responsibilities, well-understood business rules and the latent organisational expertise such things embody, to the fact that self-driving vehicles, superproblematic when tied to libertarian fantasies about driving, are already being used sensibly in large-scale factory environments).

But it was another bit in the article that caught my eye:


There are at least two reasons for cautious optimism. One is that the general deployment of AI is not really — as it is often described — an “arms race” (although its use in weapons is indeed a ratchet in the global arms race).

That bit in parenthesis reminded me of this cheery War on the Rocks article about how, due to 'attack time compression' (the time from the launch of a nuclear attack to it striking), AI is being increasingly considered as the only thing fast enough to come to a decision to retaliate in time (important because of the principles of MAD - if you can't guarantee assured destruction of the key targets of the country launching the strike then MAD breaks down). i.e. no human in the decision process at all. Three equally cheerful solutions:

  • More robust second strike (retaliation post first strike) capability: "This option would pose a myriad of ethical and political challenges, including accepting the deaths of many Americans in the first strike, the possible decapitation of U.S. leadership, and the likely degradation of the United States’ nuclear arsenal and NC3 capability. However, a second-strike-focused nuclear deterrent could also deter an adversary from thinking that the threats discussed above provide an advantage sufficient to make a first strike worth the risk."
  • Increased and improved spying (sorry 'surveillance and reconnaissance'), such that you know *prior* to a launch that a launch is going to take place and can act first lol. "This approach would also require instituting a damage prevention or limitation first-strike policy that allowed the president to launch a nuclear attack based on strategic warning. Such an approach would be controversial - nooooooo shit -, but could deter an adversary from approaching the United States’ perceived red lines."
  • Get in there first on reducing your uh opponent's? enemy's? time to react through compressing the attack time further. so that uh - checks notes - you would force other countries to come to the negotiating table and put in place some standards around MAD and deterrence. "Such a strategy is premised on the idea that mutual vulnerability makes the developing strategic environment untenable for both sides and leads to arms control agreements that are specifically designed to force adversaries to back away from fielding first-strike capabilities. The challenge with this approach is that if a single nuclear power (China, for example) refuses to participate, arms control becomes untenable and a race for first-strike dominance ensues."
(there's a lovely blood-red vein of American exceptionalism throughout the piece - "Russia and China are not constrained by the same moral dilemmas that keep Americans awake at night. Rather, they are focused on creating strategic advantage for their countries.").

"Admittedly, each of the three options — robust second strike, preemption, and equivalent danger — has drawbacks."

"There is a fourth option. The United States could develop an NC3 system based on artificial intelligence. Such an approach could overcome the attack-time compression challenge."

thumbsup.gif

"Unlike the game of Go, which the current world champion is a supercomputer, Alpha Go Zero, that learned through an iterative process, in nuclear conflict there is no iterative learning process."

Fizzles, Saturday, 11 January 2020 17:06 (four years ago) link

this may be slightly optimistic, but it is true that fewer players tends to make it easier to reach consensus on compliance standards. It's not clear what the regulatory body would be (OECD and G20 are both mentioned in the article as having made statements around AI). Given the comparative success of GDPR regulation by the EU beyond the borders of the EU, it may be that a non-global body will end up setting the de facto standards for AI production, data management and use.

it's not clear that it's enforceable in practice, or even what it's supposed to mean for that matter, but the GDPR already appears to attempt to constrain "AI": https://en.wikipedia.org/wiki/Right_to_explanation#European_Union

𝔠𝔞𝔢𝔨 (caek), Friday, 17 January 2020 07:14 (four years ago) link

warning: no one involved in the making of this article knew how to use quotation marks

Airbnb has developed technology that looks at guests’ online “personalities” when they book a break to calculate the risk of them trashing a host’s home.

Details have emerged of its “trait analyser” software built to scour the web to assess users’ “trustworthiness and compatibility” as well as their “behavioural and personality traits” in a bid to forecast suitability to rent a property.

...The background check technology was revealed in a patent published by the European Patent Office after being granted in the US last year.

According to the patent, Airbnb could deploy its software to scan sites including social media for traits such as “conscientiousness and openness” against the usual credit and identity checks and what it describes as “secure third-party databases”. Traits such as “neuroticism and involvement in crimes” and “narcissism, Machiavellianism, or psychopathy” are “perceived as untrustworthy”.

It uses artificial intelligence to mark down those found to be “associated” with fake social network profiles, or those who have given any false details. The patent also suggests users are scored poorly if keywords, images or video associated with them are involved with drugs or alcohol, hate websites or organisations, or sex work.

It adds that people “involved in pornography” or who have “authored online content with negative language” will be marked down.

The machine learning also scans news stories that could be about the person, such as an article related to a crime, and can “weight” the seriousness of offences. Postings to blogs and news websites are also taken into account to form a “person graph”, the patent says.

This combined data analyses how the customer acts towards others offline, along with cross-referencing metrics including “social connections”, employment and education history.

The machine learning then calculates the “compatibility” of host and guest.

https://www.standard.co.uk/tech/airbnb-software-scan-online-life-suitable-guest-a4325551.html

But guess what? Nobody gives a toot!😂 (Karl Malone), Saturday, 18 January 2020 01:42 (four years ago) link
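For what it's worth, the mechanism the patent describes boils down to a weighted sum over trait signals. Here's a purely hypothetical sketch: the trait names come from the article, but the weights, the scoring scheme, and every number below are invented for illustration.

```python
# Invented weights: positive traits raise the score, "untrustworthy"
# traits from the patent's list lower it. Nothing here reflects
# Airbnb's actual model.
TRAIT_WEIGHTS = {
    "conscientiousness": +2.0,
    "openness": +1.0,
    "neuroticism": -1.5,
    "narcissism": -2.0,
    "machiavellianism": -2.0,
    "psychopathy": -3.0,
    "fake_profile": -2.5,
}

def trustworthiness(traits):
    """Sum weighted trait signals (each strength in [0, 1]) into one score."""
    return sum(TRAIT_WEIGHTS.get(name, 0.0) * strength
               for name, strength in traits.items())

guest = {"conscientiousness": 0.8, "neuroticism": 0.3}
score = trustworthiness(guest)  # 2.0*0.8 - 1.5*0.3 = 1.15
```

Which makes Aimless's question below the obvious one: the hard and error-prone part isn't the arithmetic, it's deciding which news story or profile actually belongs to you.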

So, if someone else having my name commits a crime 1800 miles from my place of residence and it is written up in The Podunk Telegraph and Weekly Shopper, does this software assign that crime to me or ignore it?

I like the old model, where people who choose to offer services to the public must allow the public to pay for and use those services.

A is for (Aimless), Saturday, 18 January 2020 04:40 (four years ago) link

two weeks pass...

WHAT HAS OCCURRED CANNOT BE UNDONE

I have trained a neural net on a crowdsourced set of vintage jello-centric recipes

I believe this to possibly be the worst recipe-generating algorithm in existence pic.twitter.com/cwQwOpUNDv

— Janelle Shane (@JanelleCShane) February 7, 2020

totally unnecessary bewbz of exploitation (DJP), Friday, 7 February 2020 19:57 (four years ago) link

(h/t Ned)

totally unnecessary bewbz of exploitation (DJP), Friday, 7 February 2020 19:58 (four years ago) link

Also, in case these recipes are making you thirsty: https://aiweirdness.com/post/189979379637/dont-let-an-ai-even-an-advanced-one-make-you-a

totally unnecessary bewbz of exploitation (DJP), Friday, 7 February 2020 20:07 (four years ago) link

Eat, drink and be merry, for tomorrow you'll definitely be dead.

Ned Raggett, Friday, 7 February 2020 20:22 (four years ago) link

brb removing all internal rinds

seandalai, Saturday, 8 February 2020 02:24 (four years ago) link

AI Travis Scott is lit
https://vimeo.com/384062745

Fuck the NRA (ulysses), Wednesday, 19 February 2020 13:43 (four years ago) link

Scrolling through that twitter thread, pretty sure I just made an office spectacle of myself when I got to the recipe entitled 'Potty Training for a Bunny'.

Sammo Hazuki's Tago Mago Cantina (Old Lunch), Wednesday, 19 February 2020 13:49 (four years ago) link

I don't know how I've failed to learn by now that I CAN NOT read these AI threads while I'm at work.

https://pbs.twimg.com/media/EQMrV37U8AAmGDG.jpg

Sammo Hazuki's Tago Mago Cantina (Old Lunch), Wednesday, 19 February 2020 13:51 (four years ago) link

It's incredible

totally unnecessary bewbz of exploitation (DJP), Wednesday, 19 February 2020 14:21 (four years ago) link

oh my, I just got to the fanfic:

The neural net read a LOT of fanfic on the internet during its initial general training, and still remembers it even after training on the jello-centric data.

Except now all its stories center around food. pic.twitter.com/WcMahhzc0j

— Janelle Shane (@JanelleCShane) February 8, 2020

totally unnecessary bewbz of exploitation (DJP), Wednesday, 19 February 2020 14:24 (four years ago) link

I think we need to call the police

Today's AI is much closer in brainpower to an earthworm than to a human. It can pattern-match but doesn't understand what it's doing.

This is its attempt to blend in with human recipes pic.twitter.com/kYnL7kT48B

— Janelle Shane (@JanelleCShane) February 8, 2020

totally unnecessary bewbz of exploitation (DJP), Wednesday, 19 February 2020 14:25 (four years ago) link

that is amazing

Li'l Brexit (Tracer Hand), Thursday, 20 February 2020 21:45 (four years ago) link

This is huge.

The creator of the YOLO algorithms, which (along with SSD) set much of the path of modern object detection, has stopped doing any computer vision research due to ethical concerns.

I've never seen anything quite like this before. https://t.co/jzu1p4my5V

— Jeremy Howard (@jeremyphoward) February 20, 2020

But guess what? Nobody gives a toot!😂 (Karl Malone), Friday, 21 February 2020 17:23 (four years ago) link

(i have no insight on anything at all whether "anything like quite like this" has happened before - i suspect that many people in the field have given up their research due to ethical concerns)

But guess what? Nobody gives a toot!😂 (Karl Malone), Friday, 21 February 2020 17:25 (four years ago) link

quote "anything like quite like this" end quote

But guess what? Nobody gives a toot!😂 (Karl Malone), Friday, 21 February 2020 17:25 (four years ago) link

if jeremy howard says this specific case is a big deal then i believe him.

but this happened a fair amount in the 60s and 70s throughout science and technology (including CS).

and less prominently, i know tons of people working in ML who vocally refuse to work on vision (which is perhaps the most obviously dangerous application). many of us also refuse to do anything in ad tech, i.e. surveillance and fraud. they didn't work on this stuff for decades (thanks joe!) and then publicly recant though, so they're not box office like he is.

𝔠𝔞𝔢𝔨 (caek), Sunday, 23 February 2020 05:43 (four years ago) link

e.g.

If you're wondering why you'd never heard of him, it's because he stood up at the ACM Silver Anniversary party and gave a seven minute keynote about how military-industrial complicit computing folk sucked and should quit. Grace Hopper walked out; he ended up blackballed. pic.twitter.com/xjdvLXtuZ8

— Os Keyes (@farbandish) February 9, 2019

(also albert einstein)

𝔠𝔞𝔢𝔨 (caek), Sunday, 23 February 2020 05:48 (four years ago) link

one month passes...

https://github.com/elsamuko/Shirt-without-Stripes

lukas, Monday, 20 April 2020 18:35 (three years ago) link

shirts without boolean operators

Jersey Al (Albert R. Broccoli), Monday, 20 April 2020 18:37 (three years ago) link

two weeks pass...

https://www.youtube.com/watch?v=iJgNpm8cTE8

DJI, Wednesday, 6 May 2020 02:58 (three years ago) link

Once it deviated from the original my brain just sort of dipped out. There are so many uncanny valleys.

El Tomboto, Wednesday, 6 May 2020 04:07 (three years ago) link

three weeks pass...

GPT-3 is pretty crazy! https://arxiv.org/abs/2005.14165

I certainly don't understand most of what's going on in the paper, but it seems that GPT-3 is basically the same as the now-famous GPT-2 language model, but massively scaled up (up to 175 Billion features). Apparently this enables the model to perform much better on "one shot" or "few shot" learning across a wide variety of tasks. That means learning to perform a task after only a few demonstrations, the way a human would, rather than the hundreds or thousands of demonstrations that had previously been required.

In addition to things like trivia questions, translation, and reading comprehension, it's shown to do a pretty good job at arithmetic (from a language model that hadn't specifically been taught to do math!), and can write news articles that are almost indistinguishable from human-written articles (human raters have a very hard time telling the difference). Be sure to check out the generated poetry in the appendix (written in the style of Wallace Stevens), which is imo pretty impressive.

Dan I., Tuesday, 2 June 2020 21:42 (three years ago) link

parameters, not features, sorry

Dan I., Tuesday, 2 June 2020 21:43 (three years ago) link
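The "few shot" setup described above is simpler than it sounds: no retraining, you just put a handful of worked examples in the prompt and let the model continue the pattern. A sketch of the prompt format (my own illustration of the idea, not code from the paper):

```python
def few_shot_prompt(task_description, examples, query):
    """Format k demonstrations plus a new query as one prompt string."""
    lines = [task_description, ""]
    for question, answer in examples:
        lines.append(f"Q: {question}")
        lines.append(f"A: {answer}")
    # The model is asked to complete the final, unanswered line.
    lines.append(f"Q: {query}")
    lines.append("A:")
    return "\n".join(lines)

prompt = few_shot_prompt(
    "Answer the arithmetic question.",
    [("17 + 25", "42"), ("8 + 13", "21")],  # k = 2 demonstrations
    "31 + 46",
)
```

The paper's claim is that at 175B parameters, continuing prompts like this one often works about as well as models that were explicitly fine-tuned on the task.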
