AI, you and your work

Tuesday, January 31, 2017
by Cameron Shelley

The adoption of computers has profoundly affected work. It gave rise to a new class of laborer, the so-called "knowledge worker," and it displaced other kinds of work through automation. Trade-offs of this sort are a normal result of technological change.

Currently, artificial intelligence is assuming a greater role in work. Three recent articles illustrate this trend and the sorts of trade-offs that come with it.

Will Knight discusses how AI has been used to guide call center workers in their dealings with the public. A company named Cogito supplies software that monitors conversations and detects certain characteristics, such as emotional tone and interruptions. The aim is to provide feedback that helps call center workers deal successfully with callers.
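Cogito has not published how its system works, but one of the signals it reportedly tracks, interruptions, is easy to picture. As a rough illustration only (all names and data here are made up, not Cogito's), a program could count the times one speaker starts talking before the other has finished:

    from dataclasses import dataclass

    @dataclass
    class Turn:
        speaker: str   # "agent" or "caller"
        start: float   # seconds from the start of the call
        end: float

    def count_interruptions(turns: list[Turn]) -> int:
        """Count times a speaker starts before the previous speaker
        has finished -- a crude proxy for interrupting."""
        interruptions = 0
        for prev, cur in zip(turns, turns[1:]):
            if cur.speaker != prev.speaker and cur.start < prev.end:
                interruptions += 1
        return interruptions

    # Hypothetical call: the agent cuts in at 4.0s, mid-sentence.
    call = [
        Turn("caller", 0.0, 5.0),
        Turn("agent", 4.0, 9.0),
        Turn("caller", 9.5, 12.0),
    ]
    print(count_interruptions(call))  # 1

Even this toy version shows why such feedback is tricky: whether an overlap counts as rudeness or engagement depends on context the counter cannot see.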

An earlier use of the software showed a 28% increase in customer satisfaction. Such software still has limitations: conversational styles vary from person to person and from culture to culture, making it hard to create a universal profile of good conversation.

Timothy Revell writes about the increasing use of AI software to track employee performance. The computers that allow employees to do their work can also be used to monitor that work. Making sense of such data has been the job of managers and consultants, but software itself can now be configured to rate workers on their productivity and other measures.
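Revell's article does not describe the algorithms involved. In the simplest imaginable case, such rating might reduce to tallying logged events into a weighted score. A toy sketch (the events, weights, and names are invented for illustration) shows how much judgment hides in that setup:

    from collections import Counter

    # Hypothetical activity log: (employee, event) pairs drawn from
    # workplace systems such as email or ticket tracking.
    log = [
        ("alice", "ticket_closed"), ("alice", "email_sent"),
        ("bob", "ticket_closed"), ("alice", "ticket_closed"),
        ("bob", "email_sent"), ("bob", "email_sent"),
    ]

    # Which events "count," and by how much, is a policy choice made
    # by whoever configures the system -- not a fact about work.
    weights = {"ticket_closed": 3.0, "email_sent": 1.0}

    scores = Counter()
    for employee, event in log:
        scores[employee] += weights.get(event, 0.0)

    for employee, score in scores.most_common():
        print(f"{employee}: {score:.1f}")
    # alice: 7.0
    # bob: 5.0

Anything the log does not capture, such as mentoring a colleague, simply scores zero.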

Clearly, employers have a legitimate interest in seeing that their gear is being used in a positive and productive way. At the same time, close monitoring of employees could become an unwarranted intrusion. Furthermore, employees' knowledge that they are being monitored could affect their efforts, notes Paul Bernal at the University of East Anglia:

The general creepiness will bother people, and that could be counterproductive if it affects their behaviour.

Steven Melendez reviews how AI is being used in the hiring process. Human resources staff spend many hours combing through heaps of resumes to identify the right candidates. Software can be programmed to perform this sifting in a fraction of the time, matching details in resumes to job profiles.
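The article does not say how the matching is done. At its most basic, it can be pictured as keyword overlap between a resume and a job profile, as in this deliberately naive sketch (the profile and resumes are hypothetical):

    def score_resume(resume_text: str, profile_skills: set[str]) -> float:
        """Fraction of the profile's skills mentioned in the resume.
        A crude bag-of-words match; real screening tools are
        presumably more sophisticated."""
        words = set(resume_text.lower().split())
        return len(profile_skills & words) / len(profile_skills)

    profile = {"python", "sql", "statistics"}
    resumes = {
        "A": "Data analyst with Python and SQL experience",
        "B": "Marketing lead; strong communication skills",
    }
    for name, text in sorted(resumes.items(),
                             key=lambda kv: score_resume(kv[1], profile),
                             reverse=True):
        print(name, round(score_resume(text, profile), 2))
    # A 0.67
    # B 0.0

Note that whatever words the profile privileges, the ranking inherits, which bears on the bias question below.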

Obviously, this sort of automation can save time. In addition, software could be programmed to avoid the sorts of biases that human reviewers are prone to, such as discrimination on the basis of race or gender. It may also be able to discern undervalued candidates, à la Moneyball.

Of course, software can introduce or perpetuate bias as well as reduce it. It is also unclear from the article how recommendations made by the software will be treated by human resources staff. Are they obliged to accept them? Are they accountable for its performance? Can they change how it works?