Artificial intelligence still has some way to go

Diane Coyle had a typically astute piece on AI in the FT the other day (££), talking about standards-setting for AI. She makes two key points.

The other reason for thinking some consensus on limiting adverse uses of AI may be possible is that there are relatively few parties to any discussions, at least for now.

At the moment, only big companies and governments can afford the hardware and computing power needed to run cutting-edge AI applications; others are renting access through cloud services. This size barrier could make it easier to establish some ground rules before the technology becomes more accessible.

this may be slightly optimistic, but it is true that having fewer players tends to make it easier to reach consensus on compliance standards. It's not clear what the regulatory body would be (the OECD and G20 are both mentioned in the article as having made statements around AI). Given the comparative success of the EU's GDPR regulation beyond the EU's borders, it may be that a non-global body will end up setting the de facto standards for AI production, data management and use.

The other key bit was:

Even then, previous scares offer a hopeful lesson: fears of bioterrorism or nano-terrorism — labelled “weapons of knowledge-enabled mass destruction” in a 2000 Wired article by Bill Joy — seem to have overlooked the fact that advanced technology use depends on complex organisational structures and tacit knowhow, as well as bits of code.

It's certainly been my experience that machine learning technologies are best deployed into structured environments with high levels of existing expertise in the target purpose of the AI (structured environments here can mean anything from clear organisational responsibilities, well-understood business rules and the latent organisational expertise such things embody, to the fact that self-driving vehicles, superproblematic when tied to libertarian fantasies about driving, are already being used sensibly in large-scale factory environments).

But it was another bit in the article that caught my eye:


There are at least two reasons for cautious optimism. One is that the general deployment of AI is not really — as it is often described — an “arms race” (although its use in weapons is indeed a ratchet in the global arms race).

That bit in parenthesis reminded me of this cheery War on the Rocks article about how, due to 'attack time compression' (the time from the launch of a nuclear attack to it striking), AI is increasingly being considered as the only thing fast enough to come to a decision to retaliate in time (important because of the principles of MAD - if you can't guarantee assured destruction of the key targets of the country launching the strike, then MAD breaks down). i.e. no human in the decision process at all. Three equally cheerful solutions:

  • More robust second strike (retaliation post first strike) capability: "This option would pose a myriad of ethical and political challenges, including accepting the deaths of many Americans in the first strike, the possible decapitation of U.S. leadership, and the likely degradation of the United States’ nuclear arsenal and NC3 capability. However, a second-strike-focused nuclear deterrent could also deter an adversary from thinking that the threats discussed above provide an advantage sufficient to make a first strike worth the risk."
  • Increased and improved spying (sorry 'surveillance and reconnaissance'), such that you know *prior* to a launch that a launch is going to take place and can act first lol. "This approach would also require instituting a damage prevention or limitation first-strike policy that allowed the president to launch a nuclear attack based on strategic warning. Such an approach would be controversial - nooooooo shit -, but could deter an adversary from approaching the United States’ perceived red lines."
  • Get in there first on reducing your uh opponent's? enemy's? time to react through compressing the attack time further. so that uh - checks notes - you would force other countries to come to the negotiating table and put in place some standards around MAD and deterrence. "Such a strategy is premised on the idea that mutual vulnerability makes the developing strategic environment untenable for both sides and leads to arms control agreements that are specifically designed to force adversaries to back away from fielding first-strike capabilities. The challenge with this approach is that if a single nuclear power (China, for example) refuses to participate, arms control becomes untenable and a race for first-strike dominance ensues."
(there's a lovely blood-red vein of American exceptionalism throughout the piece - "Russia and China are not constrained by the same moral dilemmas that keep Americans awake at night. Rather, they are focused on creating strategic advantage for their countries.").

"Admittedly, each of the three options — robust second strike, preemption, and equivalent danger — has drawbacks."

"There is a fourth option. The United States could develop an NC3 system based on artificial intelligence. Such an approach could overcome the attack-time compression challenge."

thumbsup.gif

"Unlike the game of Go, which the current world champion is a supercomputer, Alpha Go Zero, that learned through an iterative process, in nuclear conflict there is no iterative learning process."

Fizzles, Saturday, 11 January 2020 17:06 (four years ago) link

this may be slightly optimistic, but it is true that having fewer players tends to make it easier to reach consensus on compliance standards. It's not clear what the regulatory body would be (the OECD and G20 are both mentioned in the article as having made statements around AI). Given the comparative success of the EU's GDPR regulation beyond the EU's borders, it may be that a non-global body will end up setting the de facto standards for AI production, data management and use.

it's not clear that it's enforceable in practice, or even what it's supposed to mean for that matter, but the GDPR already appears to attempt to constrain "AI": https://en.wikipedia.org/wiki/Right_to_explanation#European_Union

𝔠𝔞𝔢𝔨 (caek), Friday, 17 January 2020 07:14 (four years ago) link

warning: no one involved in the making of this article knew how to use quotation marks

Airbnb has developed technology that looks at guests’ online “personalities” when they book a break to calculate the risk of them trashing a host’s home.

Details have emerged of its “trait analyser” software built to scour the web to assess users’ “trustworthiness and compatibility” as well as their “behavioural and personality traits” in a bid to forecast suitability to rent a property.

...The background check technology was revealed in a patent published by the European Patent Office after being granted in the US last year.

According to the patent, Airbnb could deploy its software to scan sites including social media for traits such as “conscientiousness and openness” against the usual credit and identity checks and what it describes as “secure third-party databases”. Traits such as “neuroticism and involvement in crimes” and “narcissism, Machiavellianism, or psychopathy” are “perceived as untrustworthy”.

It uses artificial intelligence to mark down those found to be “associated” with fake social network profiles, or those who have given any false details. The patent also suggests users are scored poorly if keywords, images or video associated with them are involved with drugs or alcohol, hate websites or organisations, or sex work.

It adds that people “involved in pornography” or who have “authored online content with negative language” will be marked down.

The machine learning also scans news stories that could be about the person, such as an article related to a crime, and can “weight” the seriousness of offences. Postings to blogs and news websites are also taken into account to form a “person graph”, the patent says.

This combined data analyses how the customer acts towards others offline, along with cross-referencing metrics including “social connections”, employment and education history.

The machine learning then calculates the “compatibility” of host and guest.

https://www.standard.co.uk/tech/airbnb-software-scan-online-life-suitable-guest-a4325551.html
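if you squint, the patent is describing something like a weighted trait score. a purely hypothetical toy sketch - the trait names come from the article, but the weights, the function, and the numbers are all invented, not anything from the actual patent:

```python
# Toy sketch of the kind of weighted trait scoring the patent describes.
# The trait names are quoted in the article; every weight and value here
# is invented for illustration only.
TRAIT_WEIGHTS = {
    "conscientiousness": +1.0,   # "perceived as trustworthy"
    "openness": +0.5,
    "neuroticism": -1.0,         # "perceived as untrustworthy"
    "narcissism": -1.5,
    "fake_profile": -2.0,
}

def trustworthiness_score(traits):
    """Sum weighted trait signals (each value in [0, 1]) into one score."""
    return sum(TRAIT_WEIGHTS.get(name, 0.0) * value
               for name, value in traits.items())

score = trustworthiness_score({"conscientiousness": 0.8, "narcissism": 0.9})
# 0.8*1.0 + 0.9*(-1.5) ≈ -0.55: marked down, don't trash the host's home
```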

But guess what? Nobody gives a toot!😂 (Karl Malone), Saturday, 18 January 2020 01:42 (four years ago) link

So, if someone else having my name commits a crime 1800 miles from my place of residence and it is written up in The Podunk Telegraph and Weekly Shopper, does this software assign that crime to me or ignore it?

I like the old model, where people who choose to offer services to the public must allow the public to pay for and use those services.

A is for (Aimless), Saturday, 18 January 2020 04:40 (four years ago) link

two weeks pass...

WHAT HAS OCCURRED CANNOT BE UNDONE

I have trained a neural net on a crowdsourced set of vintage jello-centric recipes

I believe this to possibly be the worst recipe-generating algorithm in existence pic.twitter.com/cwQwOpUNDv

— Janelle Shane (@JanelleCShane) February 7, 2020

totally unnecessary bewbz of exploitation (DJP), Friday, 7 February 2020 19:57 (four years ago) link

(h/t Ned)

totally unnecessary bewbz of exploitation (DJP), Friday, 7 February 2020 19:58 (four years ago) link

Also, in case these recipes are making you thirsty: https://aiweirdness.com/post/189979379637/dont-let-an-ai-even-an-advanced-one-make-you-a

totally unnecessary bewbz of exploitation (DJP), Friday, 7 February 2020 20:07 (four years ago) link

Eat, drink and be merry, for tomorrow you'll definitely be dead.

Ned Raggett, Friday, 7 February 2020 20:22 (four years ago) link

brb removing all internal rinds

seandalai, Saturday, 8 February 2020 02:24 (four years ago) link

AI Travis Scott is lit
https://vimeo.com/384062745

Fuck the NRA (ulysses), Wednesday, 19 February 2020 13:43 (four years ago) link

Scrolling through that twitter thread, pretty sure I just made an office spectacle of myself when I got to the recipe entitled 'Potty Training for a Bunny'.

Sammo Hazuki's Tago Mago Cantina (Old Lunch), Wednesday, 19 February 2020 13:49 (four years ago) link

I don't know how I've failed to learn by now that I CAN NOT read these AI threads while I'm at work.

https://pbs.twimg.com/media/EQMrV37U8AAmGDG.jpg

Sammo Hazuki's Tago Mago Cantina (Old Lunch), Wednesday, 19 February 2020 13:51 (four years ago) link

It's incredible

totally unnecessary bewbz of exploitation (DJP), Wednesday, 19 February 2020 14:21 (four years ago) link

oh my, I just got to the fanfic:

The neural net read a LOT of fanfic on the internet during its initial general training, and still remembers it even after training on the jello-centric data.

Except now all its stories center around food. pic.twitter.com/WcMahhzc0j

— Janelle Shane (@JanelleCShane) February 8, 2020

totally unnecessary bewbz of exploitation (DJP), Wednesday, 19 February 2020 14:24 (four years ago) link

I think we need to call the police

Today's AI is much closer in brainpower to an earthworm than to a human. It can pattern-match but doesn't understand what it's doing.

This is its attempt to blend in with human recipes pic.twitter.com/kYnL7kT48B

— Janelle Shane (@JanelleCShane) February 8, 2020

totally unnecessary bewbz of exploitation (DJP), Wednesday, 19 February 2020 14:25 (four years ago) link

that is amazing

Li'l Brexit (Tracer Hand), Thursday, 20 February 2020 21:45 (four years ago) link

This is huge.

The creator of the YOLO algorithms, which (along with SSD) set much of the path of modern object detection, has stopped doing any computer vision research due to ethical concerns.

I've never seen anything quite like this before. https://t.co/jzu1p4my5V

— Jeremy Howard (@jeremyphoward) February 20, 2020

But guess what? Nobody gives a toot!😂 (Karl Malone), Friday, 21 February 2020 17:23 (four years ago) link

(i have no insight at all into whether "anything like quite like this" has happened before - i suspect that many people in the field have given up their research due to ethical concerns)

But guess what? Nobody gives a toot!😂 (Karl Malone), Friday, 21 February 2020 17:25 (four years ago) link

quote "anything like quite like this" end quote

But guess what? Nobody gives a toot!😂 (Karl Malone), Friday, 21 February 2020 17:25 (four years ago) link

if jeremy howard says this specific case is a big deal then i believe him.

but this happened a fair amount in the 60s and 70s throughout science and technology (including CS).

and less prominently, i know tons of people working in ML who vocally refuse to work on vision (which is perhaps the most obviously dangerous application). many of us also refuse to do anything in ad tech, i.e. surveillance and fraud. they didn't work on this stuff for decades (thanks joe!) and then publicly recant though, so they're not box office like he is.

𝔠𝔞𝔢𝔨 (caek), Sunday, 23 February 2020 05:43 (four years ago) link

e.g.

If you're wondering why you'd never heard of him, it's because he stood up at the ACM Silver Anniversary party and gave a seven minute keynote about how military-industrial complicit computing folk sucked and should quit. Grace Hopper walked out; he ended up blackballed. pic.twitter.com/xjdvLXtuZ8

— Os Keyes (@farbandish) February 9, 2019

(also albert einstein)

𝔠𝔞𝔢𝔨 (caek), Sunday, 23 February 2020 05:48 (four years ago) link

one month passes...

https://github.com/elsamuko/Shirt-without-Stripes

lukas, Monday, 20 April 2020 18:35 (three years ago) link

shirts without boolean operators

Jersey Al (Albert R. Broccoli), Monday, 20 April 2020 18:37 (three years ago) link

two weeks pass...

https://www.youtube.com/watch?v=iJgNpm8cTE8

DJI, Wednesday, 6 May 2020 02:58 (three years ago) link

Once it deviated from the original my brain just sort of dipped out. There are so many uncanny valleys.

El Tomboto, Wednesday, 6 May 2020 04:07 (three years ago) link

three weeks pass...

GPT-3 is pretty crazy! https://arxiv.org/abs/2005.14165

I certainly don't understand most of what's going on in the paper, but it seems that GPT-3 is basically the same as the now-famous GPT-2 language model, but massively scaled up (up to 175 Billion features). Apparently this enables the model to perform much better on "one shot" or "few shot" learning across a wide variety of tasks. That means learning to perform a task after only a few demonstrations, the way a human would, rather than the hundreds or thousands of demonstrations that had previously been required.

In addition to things like trivia questions, translation, and reading comprehension, it's shown to do a pretty good job at arithmetic (from a language model that hadn't specifically been taught to do math!), and can write news articles that are almost indistinguishable from human-written articles (human raters have a very hard time telling the difference). Be sure to check out the generated poems in the appendix (written in the style of Wallace Stevens), which are imo pretty impressive.
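For anyone who hasn't seen the paper: "few shot" here just means the demonstrations go into the text prompt itself - no fine-tuning or gradient updates at all, the model just completes the pattern. A minimal sketch (the English-French pairs are the kind of example the paper uses; the helper function is my own invention, not anything from the paper or any API):

```python
# Sketch of few-shot in-context learning as the GPT-3 paper describes it:
# K worked examples are concatenated into the prompt, followed by an
# unsolved query, and the model is asked to continue the text.
def build_few_shot_prompt(demonstrations, query):
    """Concatenate a task description, K demonstrations, then the query."""
    lines = ["Translate English to French:"]       # task description
    for english, french in demonstrations:          # K demonstrations
        lines.append(f"{english} => {french}")
    lines.append(f"{query} =>")                     # model completes from here
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    [("sea otter", "loutre de mer"), ("cheese", "fromage")],
    "peppermint",
)
print(prompt)
```

zero-shot is the same thing with an empty demonstration list, which is why the scaling result is striking: the bigger the model, the more it gets out of each in-prompt example.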

Dan I., Tuesday, 2 June 2020 21:42 (three years ago) link

parameters, not features, sorry

Dan I., Tuesday, 2 June 2020 21:43 (three years ago) link

Important to point out that on most tasks, "one shot" or "few shot" learning still results in substantially lower accuracy than extensively-trained task-specific state-of-the-art results, but this style of lightly-trained learning had been (from what I can gather) a notorious weak point of models like this up until now, so it's a big step up.

Dan I., Tuesday, 2 June 2020 21:50 (three years ago) link

v cool

DJI, Tuesday, 2 June 2020 21:50 (three years ago) link

can't do much worse than we are right now

Fuck the NRA (ulysses), Tuesday, 2 June 2020 21:54 (three years ago) link

Get ready for people to start moving the Turing test goalposts: https://aiweirdness.com/post/620645957819875328/this-is-the-openai-api-it-makes-spookily-good

Dan I., Thursday, 11 June 2020 18:39 (three years ago) link

fucking hell
https://twitter.com/dog_fakes

Fuck the NRA (ulysses), Thursday, 11 June 2020 19:00 (three years ago) link

This is Soren. He is having an existential crisis, wondering if maybe he isn’t just a lamb after all. 14/10 pic.twitter.com/OHf7kFfQvj

— dog_fakes (@dog_fakes) June 10, 2020

Fuck the NRA (ulysses), Thursday, 11 June 2020 19:01 (three years ago) link

We only rate dogs. There is a broken pipe in the basement. Please don’t send Cheetos. This is a fire hazard. Thank you... 13/10 pic.twitter.com/JqA2yRGcJj

— dog_fakes (@dog_fakes) June 11, 2020

Fuck the NRA (ulysses), Thursday, 11 June 2020 19:01 (three years ago) link

i am deeply curious if the alt-text is human or machine generated

Fuck the NRA (ulysses), Thursday, 11 June 2020 19:02 (three years ago) link

Check Dan's link. The text is AI generated based on the images.

brain (krakow), Thursday, 11 June 2020 22:49 (three years ago) link

Ah, sorry, alt-text. Ignore my not reading properly. Apologies.

brain (krakow), Thursday, 11 June 2020 22:50 (three years ago) link

The picture texts are human written. I was mostly just impressed with the chatbot snippets from the link I posted, though I’m not sure I really trust the author not to have cherry picked or “cleaned them up”. The generated dog pictures are not special and have been possible for years I think.

Dan I., Friday, 12 June 2020 02:35 (three years ago) link

The generated dog pictures are amusingly grotesque. Humans like to be amused. Therefore the dog pictures serve their highest and best purpose, which is not to pinpoint the precise attainments of AI dog picture generation in June 2020 or educate people as to how long it might be before AI can generate wholly believable pictures of non-existent dogs. That sort of evaluation is better made via papers published in academic journals than on a Twitter feed.

A is for (Aimless), Friday, 12 June 2020 03:38 (three years ago) link

are u sure?

Yanni Xenakis (Hadrian VIII), Friday, 12 June 2020 04:03 (three years ago) link

Neither the text nor the images are believably “human” to me, though the text falls into a sort of prose uncanny valley while the pictures are all just deeply fucked up

El Tomboto, Friday, 12 June 2020 04:22 (three years ago) link

i'm pretty sure the text was AI-written (and many of you didn't notice, heh!)

our god is a might god (Karl Malone), Friday, 12 June 2020 04:59 (three years ago) link

in fact, the entire post dan l posted was all about the text of those tweets - the AI generated visuals barely warrant a mention

our god is a might god (Karl Malone), Friday, 12 June 2020 05:01 (three years ago) link

Dan I, sorry!

our god is a might god (Karl Malone), Friday, 12 June 2020 05:01 (three years ago) link

My bad, the dog ratings text is generated, but the alt-text (which I do not see and do not know how to view) is human written. She doesn't really do a good job of explaining all that, tbh. It'll be fun to play with the API first-hand.

Dan I., Friday, 12 June 2020 14:35 (three years ago) link

and with that, i believe that AI has now reached 100% level

we are now in the age of 3fa23

our god is a wee lil god (Karl Malone), Friday, 12 June 2020 15:05 (three years ago) link

hold your cursor motionless over the image for the alt text. I would've been more impressed if that was machine-written as it seems genuinely aware.

Fuck the NRA (ulysses), Friday, 12 June 2020 16:14 (three years ago) link

yikes
http://www.shardcore.org/shardpress2019/2020/06/17/algonuts/

Fuck the NRA (ulysses), Friday, 19 June 2020 23:56 (three years ago) link

face depixelizer

🤔🤔🤔 pic.twitter.com/LG2cimkCFm

— Chicken3gg (@Chicken3gg) June 20, 2020

koogs, Saturday, 20 June 2020 18:45 (three years ago) link

