<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Shaw, N</style></author><author><style face="normal" font="default" size="100%">Jackson, T</style></author><author><style face="normal" font="default" size="100%">Orchard, J</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Song, T</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">Biological batch normalisation: How intrinsic plasticity improves learning in deep neural networks</style></title><secondary-title><style face="normal" font="default" size="100%">PLOS ONE</style></secondary-title></titles><dates><year><style face="normal" font="default" size="100%">2020</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0238454</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">15</style></volume><pages><style face="normal" font="default" size="100%">1-20</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">In this work, we present a local intrinsic plasticity rule, dubbed IP, inspired by the Infomax rule. Like Infomax, this rule controls the gain and bias of a neuron to regulate its firing rate. We discuss the biological plausibility of the IP rule and compare it to batch normalisation. We demonstrate that the IP rule improves learning in deep networks and provides networks with considerable robustness to increases in synaptic learning rates.
We also sample the error gradients during learning and show that the IP rule substantially increases the magnitude of the gradients over the course of learning, suggesting that the IP rule mitigates the vanishing gradient problem. A supplementary analysis derives the equilibrium solutions to which the neuronal gain and bias converge under the IP rule. A further analysis demonstrates that, on a fixed input distribution, the IP rule yields neuronal information potential similar to that of Infomax. We show that batch normalisation also improves information potential, suggesting that this may be a cause of its efficacy, an open problem at the time of writing.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>10</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Shaw, N</style></author><author><style face="normal" font="default" size="100%">Stockel, A</style></author><author><style face="normal" font="default" size="100%">Orr, R</style></author><author><style face="normal" font="default" size="100%">Lidbetter, T</style></author><author><style face="normal" font="default" size="100%">Cohen, R</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Towards provably moral AI agents in bottom-up learning frameworks</style></title><secondary-title><style face="normal" font="default" size="100%">AIES</style></secondary-title></titles><dates><year><style face="normal" font="default" size="100%">2018</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://dl.acm.org/doi/abs/10.1145/3278721.3278728</style></url></web-urls></urls><publisher><style face="normal" font="default" size="100%">AAAI/ACM</style></publisher><pub-location><style face="normal" font="default" size="100%">2018 AAAI/ACM Conference on AI, Ethics, and Society</style></pub-location><pages><style face="normal" font="default" size="100%">271-277</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">We examine moral machine decision making as inspired by a central question posed by Rossi with respect to moral preferences: can AI systems based on statistical machine learning (which do not provide a natural way to explain or justify their decisions) be used for embedding morality into a machine in a way that allows us to prove that nothing morally wrong will happen? We argue for an evaluation held to the same standards as a human agent, removing the demand that ethical behaviour is always achieved. We introduce four key meta-qualities desired of our moral standards, and then show how to prove that an agent will correctly learn to perform moral actions, given a set of samples, within certain error bounds. Our group-dynamic approach enables us to demonstrate that the learned models converge to a common function, achieving stability. We further describe a valuable intrinsic consistency check made possible by deriving logical statements from the machine learning model. In all, this work proposes an approach for building ethical AI systems from the perspective of artificial intelligence research, and sheds light on how much learning is required for an intelligent agent to behave morally with negligible error.</style></abstract></record></records></xml>