The theme of CSTV's Design & Society course is "good design". When I ask students what this expression means, they tend to think first of technical matters: efficiency, cost, usability, and so on. As the course progresses, however, we come to ethical issues: is the design "good" for people, and in what sense?
Although the ethical aspect of good design has always been important, it is becoming ever more pressing. I think this is because fewer designs today are simply objects, while more are really services.
Consider a recent piece in the New York Times by Noam Scheiber about how Uber uses psychological "tricks" to manipulate its drivers. For example, when drivers log off of the service (to stop working), the app says something like, "You are $10 away from making $300!" Two buttons, "Keep driving" and "Log off," are presented, with the former highlighted.
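To make the mechanism concrete, here is a minimal sketch, in Python, of the kind of goal-framed prompt the article describes. It is purely illustrative: the function, the round-number goal increment, and the button layout are my assumptions, not Uber's actual code.

```python
# Illustrative sketch of a goal-framed "nudge" shown at log-off.
# Hypothetical code: the goal increment and names are assumptions, not Uber's implementation.

def nudge_on_logoff(current_earnings: float, goal_increment: float = 100.0) -> dict:
    """Frame the driver's earnings against the next round-number goal and
    present "Keep driving" as the visually emphasized default."""
    next_goal = (current_earnings // goal_increment + 1) * goal_increment
    gap = next_goal - current_earnings
    return {
        "message": f"You are ${gap:.0f} away from making ${next_goal:.0f}!",
        "buttons": ["Keep driving", "Log off"],
        "highlighted": "Keep driving",  # the pre-selected, visually emphasized option
    }

# A driver at $290 would see: "You are $10 away from making $300!"
print(nudge_on_logoff(290.0))
```

The interesting part, ethically, is not the trivial arithmetic but the framing: the goal is manufactured on the spot, and the highlighted default serves the company's interest.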
This kind of prompt exploits people's tendency to motivate themselves by setting goals, even arbitrary ones:
“It’s getting you to internalize the company’s goals,” said Chelsea Howe, a prominent video game designer who has spoken out against coercive psychological techniques deployed in games. “Internalized motivation is the most powerful kind.”
The obvious question raised by this and Uber's other driver-motivation practices is: are they ethically acceptable? Of course, any employer may incentivize its employees with pay, and this appears to be an incentive of that kind. The worry, however, is that the incentive is packaged in a way that subverts the employee's ability to refuse it.
Then there is the complication that Uber is, by its own account, not the employer of its drivers.
This issue is ethical because it turns on whether the design of the Uber app adequately respects the dignity of its users.
A simple response would be to say that any design that manipulates the choices its users make is disrespectful and therefore unethical.
Yet what if the manipulation is for the user's own good? A recent piece by Angus Chen at NPR describes how Brad Appelhans at Rush University Medical Center has modified vending machines to delay the delivery of unhealthy snacks relative to healthy ones. Items in the machine are labelled "healthy" or "unhealthy," and delivery of the unhealthy ones is delayed by 25 seconds. This "time tax" was observed to shift snack choices by about 5% in favor of healthy ones, all without lowering the machine's overall revenue.
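As a rough sketch of the mechanism, the logic amounts to something like the following. This is not Appelhans's actual implementation; the 25-second figure comes from the NPR piece, but the labels and function here are hypothetical.

```python
import time

# Illustrative sketch of the "time tax": unhealthy choices are delayed, healthy ones are not.
# The 25-second delay comes from the NPR piece; everything else here is hypothetical.

UNHEALTHY_DELAY_SECONDS = 25

def dispense(item: str, is_healthy: bool) -> None:
    """Dispense a snack, imposing a delay on items labelled unhealthy."""
    if not is_healthy:
        print(f"{item} is labelled unhealthy; dispensing in {UNHEALTHY_DELAY_SECONDS} seconds...")
        time.sleep(UNHEALTHY_DELAY_SECONDS)  # the "time tax"
    print(f"Dispensing {item}.")

# Healthy items drop immediately; unhealthy ones wait out the delay.
dispense("trail mix", is_healthy=True)
dispense("candy bar", is_healthy=False)
```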
Certainly, this design is psychologically manipulative in how it affects user choices, much like the Uber app. Is that acceptable? Does it matter that the goal of the manipulation is to increase the user's own health?
Design has always had a significant ethical aspect, often involving the safety of users. A blender that delivers powerful electric shocks is a bad design. Today, when so many designs are computerized, there is much greater scope for ethical issues to arise.
A highly computerized gadget is essentially a service. The Uber app is really a service that coordinates drivers with riders. A computerized vending machine is really a kind of convenience store in a box. Since such services are businesses, all the ethical concerns that arise in business also arise in these designs.
So design ethics has, to a large extent, become business ethics in another guise. It will take a while for this realization to become clear and widely known, but we are working on it. Hopefully (*ahem*), my new book "Design and Society: Social issues in technological design" will help.