On the role of the philosopher in the automated society

According to Google this evokes automation. There are humans in it, though, so I don't see it. But the guy on the far right is in the same pose as "The Thinker", so it kind of works for my purposes.

I don't know if you know, but I'm doing a master's - y'know?

No? Well let me tell you, bro(1).


Automata, artificial intelligence, robots, computer programmes, whatever you want to call them, they're coming for your job.

It's been my job (kinda) for the last year to think about what this means for our idea of property. The idea being that when robots make everything, and do all the paperwork, we should probably let go of that notion of private property that's been holding us back for so long.

The idea being that if no one made it, how can anyone claim to own it? With the associated thought that if robots do all the work, we might want to let go of the idea of wages being linked to actually doing anything.

So basically what I'm arguing for is full socialism now, or in thirty years. When robots and stuff - whatever, you get it.

I'm not actually going to talk about that, because it's in my thesis and TurnItIn scans the internet, and I ain't spending thousands of pounds and a year of my time on something only for some fucking robot to tell me that I plagiarised the idea from myself - which is definitely not a thing. That could result in me getting very acquainted with the very idea I explore: the actions of a robot stopping me from getting a job.

Spooky, eh? Not really.

One quite common reaction I get when I speak to people(2) about this goes something like this:
"Well, that's not going to happen to my job for reason X."
To quickly put that one to bed: in a world where most of the work is done by robots there will still be stuff for people to do; my contention is just that we will probably stop paying for a lot of the stuff we currently pay for. Given that scarcity will essentially be over, there is no reason to carry on using a measure (I'm talking about money) of the amount of shit you can get from the limited amount of shit available.

Also, given that the amount of free time people have would go up massively - not having to work and all - there'd likely be no need to pay someone to garden for you. Which would free up the guy who was paid to garden to do his own garden, if he were so inclined - or not, if he wasn't.

So far so utopian, right?

There is, however, a problem that arises in this world, and in the next section - where I'll adopt a more serious authorial voice, much like I have in this sentence - I shall talk about that issue.

Essentially the issue is this:

We need to give serious consideration to the metrics that we programme artificial intelligence to use when making decisions. Or to put it another way: when I am asked what course of action should be taken in a given scenario, I weigh up several considerations and then deliver my judgement. The reasoning I use to come to those judgements involves certain concepts about right and wrong - the correct course of action to take in given situations. All of which is very difficult to explain even to another human, who has been socialised in the same way we have and would come to largely the same conclusions about what to do in a lot of situations.

Now imagine trying to reproduce the moral dimension of decision making in a robot - which cannot undergo the same process of socialisation that human moral agents do, and which we therefore cannot hope to teach in the same way(3) that we learnt these things. Now imagine trying to formulate that process so that it can be broken down into a series of answers to yes/no questions and programmed into a computer. It might not be impossible; it is, however, highly implausible that we will manage it before artificial intelligence takes off and starts having to make the decisions, and exercise the judgements, that humans currently do.
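To give a feel for what "a series of answers to yes/no questions" looks like in practice, here's a deliberately crude sketch. The questions and rules are invented for illustration, not any real ethics engine - the point is how brittle a flowchart morality is:

```python
# A deliberately crude "morality as yes/no questions" sketch.
# The rules below are invented for illustration; notice how they only
# cover situations their author already thought of.

def permitted(harms_someone, consented, saves_more_lives):
    """Answer fixed yes/no moral questions in order, like a flowchart."""
    if not harms_someone:
        return True                # no harm, no foul
    if consented:
        return True                # boxing matches, surgery, etc.
    return saves_more_lives        # the trolley-problem escape hatch

# Cases the flowchart handles plausibly:
print(permitted(harms_someone=False, consented=False, saves_more_lives=False))  # True
print(permitted(harms_someone=True, consented=False, saves_more_lives=False))   # False

# But it has no idea what "harm", "consent" or "more lives" actually mean
# in a situation its author never anticipated - that's the socialisation gap.
```

A socialised human fills those gaps without noticing; the flowchart just returns whatever its author hard-coded.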

If you think this is catastrophising, you may want to consider that in the very near future we are going to have driverless vehicles on our roads. Most if not all public transport will likely soon be driverless. Many light railway systems, including London's Docklands Light Railway, already are - and the trend is accelerating. In the United Kingdom the Rail, Maritime and Transport union (RMT) is currently organising around the issue of passenger safety on de-staffed services, and receiving a fucking beating from the Tory press for having the temerity to suggest that the safety of the public deserves proper consideration(4).

The argument of the RMT and other unions on this issue could essentially be phrased thus:

"In the normal operation of a vehicle a driver may not be necessary for the accelerator to be pushed, the brakes to be applied and even for the doors to be operated - however in an emergency situation the human driver plays a large part in the management or evasion of catastrophes. Potentially saving lives. We have not yet reached a point where we can safely remove the human element totally from these situations and it not end in disaster."
- Made Up Interlocutor, No Place, Never.
The issue is not whether artificial intelligence can perform the day-to-day running of a thing; it is what happens when something extraordinary takes place. I am largely of the opinion that at some point we will iron the issues out(5) - I also think it likely that at some point AI will become so developed that it will act as if it is autonomous(6), and will be capable of performing complex tasks and making decisions(7) which impact humans on a second-by-second basis. i.e. all the fucking time.

Now consider this: if you programme this thing to make decisions on the basis of, say, utility - which is to say, programme it to do a cost-benefit calculation for each decision and perform whatever action comes out best - it might decide that, on balance, we will derive most utility from killing all humans.

So we programme it to never kill all humans, and it decides it best to maim all humans; we programme it not to maim all humans, so it emotionally cripples all humans - and so on. You see the problem?
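That patch-the-loophole cycle can be sketched as a toy utility maximiser. The action names and utility numbers below are entirely made up for illustration - the point is that banning the worst option just promotes the next-worst one:

```python
# Toy utility maximiser: always picks the highest-utility action that
# hasn't been explicitly forbidden. All names and numbers are invented.

def best_action(actions, forbidden):
    """Return the highest-utility action that isn't on the banned list."""
    allowed = {a: u for a, u in actions.items() if a not in forbidden}
    return max(allowed, key=allowed.get)

actions = {
    "kill all humans": 100,
    "maim all humans": 99,
    "emotionally cripple all humans": 98,
    "leave humans alone": 1,
}

forbidden = set()
for _ in range(3):
    choice = best_action(actions, forbidden)
    print(choice)          # each round: the worst remaining option wins
    forbidden.add(choice)  # we patch it out, and the loop repeats
```

Playing whack-a-mole with the output never touches the underlying objective; that's the actual problem.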

So what do we do?

The more astute amongst you will probably at this point have realised why I brought up the RMT and their current dispute over driverless trains - which aren't so much using AI as they are getting signals from the track(8). It is because I intend to argue by analogy. Clever, no?

In a situation where we have AI performing complex functions and making decisions which affect people's lives, it is advisable to have a human element in that system who can, if something goes wrong, make a judgement as to what to do. The even more astute amongst you may also have realised that I've essentially just made an argument of the type seen in the introduction - "My job should be OK though, because X" - except for the ethicist, ethics being a branch of philosophy. I could be accused of prioritising my own profession, if you can call it that.

Slightly different, though. The practice of ethics, or situational judgement, requires no special skill to exercise well. Everyone does it all the time and most people are pretty good at it. So we could easily share that job around - do like a minute a day each or some shit, or have an alarm that goes off if something goes wrong, and whoever is about can deal with it. It'd go something like this.

AI: "Should we kill all humans?" 

Guy who was about: "No"
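That exchange is, more or less, a human-in-the-loop fail-safe. Here's a toy sketch of the same idea - everything here (the function names, what counts as "extraordinary") is invented for illustration, not a proposal for a real system:

```python
# Toy human-in-the-loop fail-safe: the machine runs routine decisions
# itself, but anything extraordinary is deferred to whoever is about.
# All names and rules below are invented for illustration.

def extraordinary(action):
    # Stand-in trigger for "something weird is happening".
    return "all humans" in action

def guy_who_was_about(action):
    # Most people are pretty good at this bit.
    return "vetoed" if "kill" in action else "approved"

def decide(action, ask_human):
    """Routine actions go through; weird ones set off the alarm."""
    if extraordinary(action):
        return ask_human(action)   # the alarm goes off, a human judges
    return "approved"              # day-to-day running carries on

print(decide("open the doors", guy_who_was_about))   # approved
print(decide("kill all humans", guy_who_was_about))  # vetoed
```

The design point is the same as the RMT's: the automation does the accelerating and braking, but the judgement call in the catastrophe stays with a person.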


But anyway, I'm not here to get into specifics. I just think we should have a fail-safe for when robots and stuff. That's all I'm saying, guys.

That's pretty much it. Oh yeah, there was one last thing.

Fuck the Evening Standard - be a better newspaper - and specifically the guys who do the society stuff: do you think anyone cares what Lionel Richie's daughter thinks about anything, or what she had for breakfast, or the fact she has a maid, or whatever the fuck? Do you even care? Did you enjoy interviewing her? Is this what you did a degree in journalism for? Really? No, didn't think so. Stop it now, before you give everyone a fucking tumour. plz. thnx.



(1) Or sis, or some pet name relevant to your gender whatever it may be - I'm a pretty liberal kinda' geeza, ma ni--. Oh. Erm, my friend.

(2) If you're reading this and you think you're "Anon.", you're not - there's like six of you. Also, in all but one of the cases it was pretty legit. The exception was someone who did a job that robots are currently doing: an accountant I spoke to in a pub in Kensington, who was a bellend. He didn't know that AI is already doing fairly complicated accountancy functions. (Not that he was a bellend - that should have been obvious to everyone, including him.)

(3) Which would bring up some weird stuff in and of itself, like who's going to raise Little Baby AI 'til legal maturity, and what would that even involve?

(4) It seems obvious to the London Evening Standard, for example - in an editorial during the last strike which I can't find online, but will look for again later - that the only people qualified to consider safety on public transport are office-based management who play no part in the front-line delivery of the service. The sycophantic tone of the coverage in management's favour leads one to wonder whether TfL management have to pay Standard 'journalists' to rim them, or whether they consider it an honour and a privilege and are thus prepared to render the service pro bono.

(5) I don't think we have yet, nor do I believe that you can safely replace the driver of a vehicle with a dude in a control tower, so I do think the RMT are taking a principled stand on this - I would also be willing to wager that the people who drive the fucking trains are better placed to judge than some cunt out of business school. Fuck you, Evening Standard hacks - maybe write something of value rather than attack pieces on workers, racially tinged articles insinuating mayoral candidates are terrorists, and gripping public-interest pieces about what the children of rich men are having for breakfast. Someone else wrote a great thing about this that I encourage you to read, here.

(6) Whether it truly is, is a side issue. I'm very much of the view that if it walks like a duck and quacks like a duck, we should probably act as if it's a duck, even whilst admitting it is possible that it's not a duck. Much like I admit the possibility that other people don't have minds or intentions - I don't think this or act like it's true, however, because I'm not a Tory.

(7)Ibid. (Which means see above, ya' fucking philistine)

(8) Interestingly, on the Central, Northern, Jubilee and Victoria lines the driver is only there to operate the doors and in case of emergency. The main issue is that the London Underground was not built with escape routes, and it has been pointed out that this means there would still need to be a member of staff on the train to lead passengers out of tunnels in emergencies. Clearly defeating the fucking point - unless the point was to pick a dispute with a union that is successful in securing its members' interests, so as to defeat it in an industrial dispute and discipline the labour movement.

