Before the robots take over our jobs, they will have to learn our jobs.
This is already happening.
The New York Times profiled five people who have been put in this remarkable position. More than most, they understand the strengths (and weaknesses) of artificial intelligence and how technology is changing the nature of work.
Here is the CliffsNotes version of that article:
Rachel Neasham, travel agent
Ms. Neasham works for the travel booking app Lola. She knew the company’s artificial intelligence system, named Harrison, would eventually take over parts of her job. Still, there was soul-searching when it was decided that Harrison would actually start recommending and booking hotels.
At an employee meeting late last year, the agents debated what it meant to be human, and what a human travel agent could do that a machine couldn’t. While Harrison could comb through dozens of hotel options in a blink, it couldn’t match the expertise of, for example, a human agent with years of experience booking family vacations to Disney World.
Ms. Neasham sees it as a race: Can human agents find new ways to be valuable as quickly as the A.I. improves at handling parts of their job?
Diane Kim, interaction designer
Ms. Kim works as an A.I. interaction designer at x.ai, a New York-based start-up offering an artificial intelligence assistant to help people schedule meetings. X.ai pitches clients on the idea that, through A.I., they get the benefits of a human assistant — saving the time and hassle of scheduling a meeting — at a fraction of the price.
It’s Ms. Kim’s job to craft responses for the company’s assistants that feel natural enough that swapping emails with these computer systems is no different from emailing with a human assistant.
Dan Rubins, chief executive
Mr. Rubins created Legal Robot, a start-up that uses artificial intelligence to translate legalese into plain English.
Having reviewed nearly a million legal documents, Legal Robot also flags anomalies in contracts.
Legal documents are well suited to machine learning because they are highly structured and repetitive. The less time lawyers need to spend reviewing contracts, the more time they can spend on, say, advisory work or litigation.
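To make the idea concrete, here is a toy sketch of clause-level anomaly flagging in the spirit of what’s described above (this is not Legal Robot’s actual method): clauses in a new contract that don’t resemble anything in a reference set of standard language get flagged for a human to review. The clauses, threshold, and scikit-learn approach are all illustrative assumptions.

```python
# Hypothetical sketch of contract anomaly flagging (not Legal Robot's method).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Reference set of boilerplate clauses the system has seen many times before.
standard_clauses = [
    "This agreement shall be governed by the laws of the State of Delaware.",
    "Either party may terminate this agreement with thirty days written notice.",
    "The receiving party shall keep all confidential information strictly confidential.",
]

# Clauses from an incoming contract; the second one is deliberately unusual.
new_contract_clauses = [
    "This agreement shall be governed by the laws of the State of New York.",
    "Termination requires the employee to repay all salary earned to date.",
]

vectorizer = TfidfVectorizer().fit(standard_clauses + new_contract_clauses)
standard_vectors = vectorizer.transform(standard_clauses)

SIMILARITY_THRESHOLD = 0.3  # illustrative cutoff, not a real product setting

for clause in new_contract_clauses:
    # Compare each new clause against the most similar standard clause.
    similarity = cosine_similarity(vectorizer.transform([clause]), standard_vectors).max()
    if similarity < SIMILARITY_THRESHOLD:
        print(f"FLAG for human review: {clause!r}")
```

Because legal language is so repetitive, even this crude similarity check separates boilerplate from the one clause a lawyer would actually want to read closely.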
Sarah Seiwert, customer representative
It took two weeks for Ms. Seiwert to notice that her company’s A.I. computer system was starting to pick up on her work patterns.
She is a customer representative at the online test-prep company Magoosh. When an email comes in, Magoosh’s A.I. system reads it, categorizes it and routes it to the appropriate employee. After a few months, the system starts to automate responses to some common questions: once it has seen enough examples of how human agents handled a request, it gains confidence that its own answer will be correct.
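For a sense of how that kind of confidence-gated triage works, here is a minimal sketch; it is not Magoosh’s actual system, and the categories, training emails, and threshold are invented for illustration.

```python
# Hypothetical confidence-gated email triage, loosely modeled on the workflow above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny toy training set: past emails labeled by the human agents who handled them.
emails = [
    "How do I reset my password?",
    "I forgot my password and can't log in",
    "Can I get a refund for my subscription?",
    "I'd like my money back, the course wasn't for me",
    "When does the GRE prep course start?",
    "What topics does the GMAT course cover?",
]
labels = ["account", "account", "billing", "billing", "content", "content"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(emails, labels)

CONFIDENCE_THRESHOLD = 0.8  # only auto-respond when the model is quite sure

def triage(email_text):
    """Auto-respond if the model is confident; otherwise route to a human agent."""
    probabilities = classifier.predict_proba([email_text])[0]
    best = probabilities.argmax()
    category = classifier.classes_[best]
    if probabilities[best] >= CONFIDENCE_THRESHOLD:
        return f"auto-reply with the canned '{category}' response"
    return f"route to the human agent who handles '{category}' questions"

print(triage("I can't remember my password"))
```

With only a handful of training examples the model rarely clears the threshold, so most emails still go to a person, which is roughly the behavior the article describes in the system’s early months.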
Even though the A.I. is learning from humans, Ms. Seiwert doesn’t foresee a future where she’s out of a job. Too many questions still require human intuition to find the appropriate answer.
Aleksandra Faust, software engineer
Formerly known as Google’s self-driving car project, Waymo wants to build autonomous vehicles that can react properly under all kinds of unusual circumstances: not only when drivers run red lights, but also when a child crosses an intersection riding a hoverboard while walking a dog.
Waymo’s cars have driven two million miles in the real world and billions more in computer simulations. But it’s impossible to program for every event.
Safety is a concern, but so is comfort. Take the process of braking at a red light. When human drivers see a red light, they tend to slow down gradually before coming to a full stop.
A sudden stop is dangerous because other drivers may not be paying attention. And it is jarring for the passengers.
Ms. Faust’s team creates different models for the most natural way a car should brake depending on how fast it is going.
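As a rough illustration (not Waymo’s actual models), simple physics shows why the braking profile has to depend on speed: if deceleration is capped at a level passengers find comfortable, a faster car must start braking much earlier. The comfort limit below is an assumed number.

```python
# Toy illustration of speed-dependent braking, not Waymo's models.
COMFORTABLE_DECELERATION = 2.0  # m/s^2, an assumed passenger-comfort limit

def braking_distance(speed_mps, deceleration=COMFORTABLE_DECELERATION):
    """Distance needed to stop smoothly from a given speed: v^2 / (2 * a)."""
    return speed_mps ** 2 / (2 * deceleration)

for speed_kmh in (30, 50, 80):
    speed_mps = speed_kmh / 3.6  # convert km/h to m/s
    print(f"{speed_kmh} km/h -> start braking {braking_distance(speed_mps):.0f} m before the light")
```

Doubling the speed roughly quadruples the stopping distance, which is why a single hard-coded braking rule won’t feel natural at every speed.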
We live in disruptive times.