AI, discovery, and censorship

Thursday, December 1, 2016
by Cameron Shelley

My news feed served up an interesting pair of articles about applications of AI to what might be called knowledge discovery.

The first was an article by Adrienne LaFrance about the search for another Antikythera mechanism.  The Antikythera mechanism is an astronomical computer built in the Hellenistic period and recovered in 1901 from a shipwreck off the Greek island of Antikythera.

So far, the mechanism remains a unique example of ancient prowess in mechanical computing.  Researchers have long hoped to find others like it.  LaFrance discusses the possibility that other examples may already have been located and are sitting, unrecognized, in museum collections.

There is hope that such items may be rediscovered through the use of artificial intelligence.  AI systems searching through museum records could infer connections between artifacts in distant storage boxes, connections that no human cataloguer would be likely to notice.  Perhaps other odd objects recovered from curious shipwrecks will turn out to be cousins of this mechanism.
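
Neither article spells out what such a search would look like, but here is a minimal sketch of the general idea: compare free-text catalogue descriptions and rank the closest matches to a query artifact.  The records and the query below are invented for illustration; a real system would run over full digitized museum databases and much richer metadata.

```python
# Minimal sketch: rank hypothetical catalogue entries by textual similarity
# to a query description. All records and the query are invented examples.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical catalogue entries from different collections.
records = [
    "Corroded bronze fragment with traces of gear teeth, shipwreck find, Aegean",
    "Terracotta amphora with Rhodian stamp, 2nd century BCE",
    "Bronze plate with inscribed concentric scale and pointer hole",
    "Marble votive stele, dedication to Poseidon",
]

query = "bronze gearwork fragment with graduated dial, possibly astronomical"

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(records + [query])

# Compare the query against every catalogue entry and print the best matches first.
scores = cosine_similarity(doc_matrix[-1], doc_matrix[:-1]).flatten()
for score, record in sorted(zip(scores, records), reverse=True):
    print(f"{score:.2f}  {record}")
```

A real discovery system would need far more than word overlap, but even this crude ranking shows how records scattered across collections can be pulled together for a human expert to review.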

The second item was an article by Nathan Vanderklippe in the Globe and Mail regarding the use of AIs for censorship by Chinese authorities.  Studies of censorship on the popular Chinese social media site WeChat suggest that authorities are using sophisticated AI technology to moderate discussion of sensitive topics. 

The object, of course, is to delete mentions of prohibited topics, such as the Tiananmen massacre of 1989.  The software is now sophisticated enough to quickly recognize and erase discussion of this topic without simply deleting all references to "Tiananmen", "June 4", or "students".  The system also seems capable of learning to recognize circumlocutions that people may employ to talk about forbidden things indirectly.
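
Vanderklippe's article does not describe the software's internals, so the following is only a toy sketch of the general technique: instead of matching a fixed keyword list, a filter can be trained on labelled examples and then flag paraphrases it has never seen verbatim.  The training texts are invented English stand-ins; nothing here is based on how WeChat's actual system works.

```python
# Toy sketch of a learned filter that generalizes beyond a fixed blocklist.
# The training data below is invented stand-in text for illustration only.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "remember the students in the square that spring",          # sensitive
    "what happened on that day in June must not be forgotten",  # sensitive
    "great weather for a picnic in the park today",             # benign
    "the new metro line opens next month",                      # benign
]
train_labels = [1, 1, 0, 0]  # 1 = block, 0 = allow

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

# A paraphrase that avoids the obvious banned terms can still score as
# sensitive, because the model generalizes from the phrasing it was trained on.
print(model.predict_proba(["do not forget the square that spring"])[0][1])
```

Scaled up with vastly more training data, the same basic move lets a censor chase circumlocutions as fast as users invent them.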

The Communist Party also appears to be turning similar big data techniques on citizens themselves, rating them according to their Party spirit:

It has also begun to use big data techniques to develop a “social credit” system to measure a person’s trustworthiness in the eyes of the state, creating ratings that could profoundly affect the lives of those seen as challenging the party line.

Thumbs up for Xi Jinping!

Well, progress in technology comes with trade-offs, as readers of this blog well know.

I will leave off by making one more connection.  The proliferation of fake news has become an issue in connection with the recent US election.  Indeed, Russian authorities are thought to have played a role in its deployment.  I suspect that fake news has, so far, been largely created by humans to fool other humans.  Soon, though, AI software will likely take over this task as well.

If so, then we will soon need some pretty good AI filters to block it all out.