The Communications of the ACM has an interesting short article on the rise of so-called social bots.  A social bot is software that emulates a human being on social media.  An example of a social bot would be a Twitter account that talks up Kanye West.

Social bots exist for a variety of reasons.  Some provide customer service for people with questions for their service providers.  Some provide misinformation in order to further their sponsor's social goals, e.g., spreading rumors about a rival in a political campaign.

Since social bots can distort important social processes, detecting them is an important task.  The article examines different methods for distinguishing social bots from actual people.  This problem becomes more difficult with time as bots learn to emulate the quirks of humans on social media.
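The article surveys detection methods in general terms; as an illustration only, here is a minimal sketch (not from the article) of one weak signal researchers have used: humans tend to post in irregular bursts, while simple bots post on near-fixed schedules. The function name, threshold, and timestamps below are all hypothetical.

```python
from statistics import mean, stdev

def looks_automated(post_times, cv_threshold=0.2):
    """Flag an account whose posting intervals are suspiciously regular.

    A low coefficient of variation (stdev/mean) in the gaps between
    posts is one weak hint of automation; real detectors combine many
    such features.
    """
    if len(post_times) < 3:
        return False  # too little data to judge
    gaps = [b - a for a, b in zip(post_times, post_times[1:])]
    cv = stdev(gaps) / mean(gaps)
    return cv < cv_threshold

# A bot posting almost exactly every 3600 seconds:
bot = [0, 3600, 7201, 10799, 14400]
# A human posting in irregular bursts:
human = [0, 120, 200, 9000, 9050, 40000]

print(looks_automated(bot))    # → True
print(looks_automated(human))  # → False
```

Of course, this is exactly the kind of signal the article warns is eroding: a bot that randomizes its posting schedule defeats it immediately, which is why detection gets harder over time.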

In a related development, Google has proposed a number of guidelines to regulate the conduct of physical robots.  They include:

  1. Robots should not damage things, even if doing so would further their goals;
  2. Robots should not use destructive means to maximize their rewards for meeting goals;
  3. Robots should emulate humans, on the assumption that since humans are still better at getting around, following their lead makes sense;
  4. Robots should avoid unnecessary risks;
  5. Robots should deal appropriately with imperfect knowledge.

I am reminded of Marvin Minsky's observation that knowing what not to do is the biggest challenge in AI and robotics.  Indeed, the whole program is incipiently ethical, in the sense that ethics involves identifying the things that can be done but ought not to be, and explaining why.

In any event, Google's guidelines seem reasonable.  Indeed, the problems that arise from social bots might be mitigated if they followed the same guidelines as Google proposes for their physical relatives.  

However, the situation with social bots also exposes a weakness in Google's recommendations.  Social bots are effective precisely because they emulate human beings so well.  Yet emulating people is exactly what one of Google's guidelines prescribes.

At some point, like self-driving cars, our bots will need some sense of ethics that goes beyond just emulating human behavior or serving people's whims.

