The death of Joshua Brown in a collision between his Tesla and a big rig has prompted reflection on automation and trust. It also raises the question of how far our world is, or should be, readable by machines.
On BLDGBLOG, Geoff Manaugh examines connections between this incident and past efforts to make the world more, or less, readable by machines. Part of the problem in the recent collision was apparently that Brown's Tesla could not distinguish the white side of the 18-wheeler from the background. In a way, the truck was camouflaged.
In the past, camouflage has been deployed deliberately to confuse cameras and surveillance. Consider the "dazzle" ship scheme of World War I.
In the future, Manaugh argues, American highways will likely be modified to become more legible to robotic sensing systems:
"It will transition from being a 'dumb' system of non-interactive 2D surfaces to become an immersive spatial environment filled with volumetric sign-systems meant for non-human readers. It will be rebuilt for perceptual systems other than our own."
Perhaps that will include a mandate for contrastive paint schemes.
I am reminded of the "Safety Truck," a safety system for big rigs that Samsung prototyped last year. It mounts cameras on the front of the truck and a large LED screen on the rear, so that the scene ahead of the truck is displayed on its back. Drivers behind the truck can then see what is going on in front of it and judge whether it is safe to pass. In effect, the "Safety Truck" is see-through!
I am no robotics engineer, but I would guess that the "Safety Truck" would confuse the hell out of the vision system on a self-driving car. It might well increase the risk of cars like Brown's Tesla running into the backs of these Samsung rigs.
So the "Safety Truck" may be safer for human drivers but more dangerous for robotic ones. Which will win?