<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Shayan Shirahmad Gale Bagi</style></author><author><style face="normal" font="default" size="100%">Behzad Moshiri</style></author><author><style face="normal" font="default" size="100%">Hossein Gharaee Garakani</style></author><author><style face="normal" font="default" size="100%">Mohammad Khoshnevisan</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Blind Spot Detection System in Vehicles Using Fusion of Radar Detections and Camera Verification</style></title><secondary-title><style face="normal" font="default" size="100%">International Journal of Intelligent Transportation Systems Research</style></secondary-title></titles><dates><year><style face="normal" font="default" size="100%">2021</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://link.springer.com/article/10.1007/s13177-021-00254-5</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">19</style></volume><pages><style face="normal" font="default" size="100%">389-404</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Sensors are the quintessential part of Blind Spot Detection (BSD) systems and have a profound effect on the performance of the system. Every sensor has unique deficiencies that can deteriorate the performance of the system under grievous circumstances, making vital tasks in BSD, such as object detection, arduous. Indeed, previous studies have demonstrated that data fusion techniques can diminish the adverse effects of sensor deficiencies and improve detection accuracy in the BSD system.
One of the main advantages of data fusion is to improve detection accuracy and reduce processing time through the cooperation of multiple sensors. We propose a BSD model in which objects are detected in consecutive time intervals. Association techniques are then employed for multi-sensor fusion, since data from all sensors are not ordinarily ready for fusion simultaneously. It should be noted that orthodox data association techniques in BSD include global nearest neighbor, joint probabilistic data association, and multiple hypothesis tests. We simulate and compare these techniques by tracking multiple targets and performing multi-sensor fusion using virtual data in MATLAB. Furthermore, we illustrate that the detection accuracy of our multi-sensor fusion BSD system is improved compared to a single-sensor BSD system.</style></abstract><issue><style face="normal" font="default" size="100%">2</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>12</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Pouya Mehrannia</style></author><author><style face="normal" font="default" size="100%">Shayan Shirahmad Gale Bagi</style></author><author><style face="normal" font="default" size="100%">Behzad Moshiri</style></author><author><style face="normal" font="default" size="100%">Otman Adam Al-Basir</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Deep Representation of Imbalanced Spatio-temporal Traffic Flow Data for Traffic Accident Detection</style></title><secondary-title><style face="normal" font="default" size="100%">arXiv</style></secondary-title></titles><dates><year><style face="normal" font="default" size="100%">2021</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://arxiv.org/abs/2108.09506</style></url></web-urls></urls><language><style face="normal"
font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:149.737px;top:295.313px;14.944px;sans-serif;transform:scaleX(0.992358);&quot;&gt;Automatic detection of traffic accidents has a crucial&lt;/span&gt;&lt;br role=&quot;presentation&quot;&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:81.6067px;top:311.918px;14.944px;sans-serif;transform:scaleX(0.967624);&quot;&gt;effect&lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:116.441px;top:311.918px;14.944px;sans-serif;&quot;&gt; &lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:126.693px;top:311.918px;14.944px;sans-serif;transform:scaleX(0.986304);&quot;&gt;on&lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:142.474px;top:311.918px;14.944px;sans-serif;&quot;&gt; &lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:152.74px;top:311.918px;14.944px;sans-serif;transform:scaleX(1.04774);&quot;&gt;improving&lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:218.748px;top:311.918px;14.944px;sans-serif;&quot;&gt; &lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:228.999px;top:311.918px;14.944px;sans-serif;transform:scaleX(1.03997);&quot;&gt;transportation,&lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:325.717px;top:311.918px;14.944px;sans-serif;&quot;&gt; &lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:335.983px;top:311.918px;14.944px;sans-serif;transform:scaleX(1.04923);&quot;&gt;public&lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:375.854px;top:311.918px;14.944px;sans-serif;&quot;&gt; &lt;/span&gt;&lt;span 
dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:386.106px;top:311.918px;14.944px;sans-serif;transform:scaleX(0.958551);&quot;&gt;safety,&lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:426.365px;top:311.918px;14.944px;sans-serif;&quot;&gt; &lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:436.631px;top:311.918px;14.944px;sans-serif;transform:scaleX(1.00374);&quot;&gt;and&lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:460.721px;top:311.918px;14.944px;sans-serif;&quot;&gt; &lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:470.973px;top:311.918px;14.944px;sans-serif;transform:scaleX(1.03807);&quot;&gt;path&lt;/span&gt;&lt;br role=&quot;presentation&quot;&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:81.6067px;top:328.522px;14.944px;sans-serif;transform:scaleX(1.02557);&quot;&gt;planning. Many lives can be saved by the consequent decrease&lt;/span&gt;&lt;br role=&quot;presentation&quot;&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:81.6067px;top:345.127px;14.944px;sans-serif;transform:scaleX(1.01315);&quot;&gt;in the time between when the accidents occur and when rescue&lt;/span&gt;&lt;br role=&quot;presentation&quot;&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:81.6067px;top:361.732px;14.944px;sans-serif;transform:scaleX(1.05134);&quot;&gt;teams are dispatched, and much travelling time can be saved&lt;/span&gt;&lt;br role=&quot;presentation&quot;&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:81.6067px;top:378.335px;14.944px;sans-serif;transform:scaleX(1.08122);&quot;&gt;by notifying drivers to select alternative routes. 
This problem&lt;/span&gt;&lt;br role=&quot;presentation&quot;&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:81.6067px;top:394.94px;14.944px;sans-serif;transform:scaleX(1.02809);&quot;&gt;is challenging mainly because of the rareness of accidents and&lt;/span&gt;&lt;br role=&quot;presentation&quot;&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:81.6067px;top:411.545px;14.944px;sans-serif;transform:scaleX(1.00836);&quot;&gt;spatial&lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:123.958px;top:411.545px;14.944px;sans-serif;&quot;&gt; &lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:133.074px;top:411.545px;14.944px;sans-serif;transform:scaleX(0.989052);&quot;&gt;heterogeneity&lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:219.121px;top:411.545px;14.944px;sans-serif;&quot;&gt; &lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:228.237px;top:411.545px;14.944px;sans-serif;transform:scaleX(1.03736);&quot;&gt;of&lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:240.686px;top:411.545px;14.944px;sans-serif;&quot;&gt; &lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:249.801px;top:411.545px;14.944px;sans-serif;transform:scaleX(0.996018);&quot;&gt;the&lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:269.722px;top:411.545px;14.944px;sans-serif;&quot;&gt; &lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:278.853px;top:411.545px;14.944px;sans-serif;transform:scaleX(1.00267);&quot;&gt;environment.&lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:363.077px;top:411.545px;14.944px;sans-serif;&quot;&gt; &lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; 
style=&quot;left:372.193px;top:411.545px;14.944px;sans-serif;transform:scaleX(1.00872);&quot;&gt;This&lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:400.437px;top:411.545px;14.944px;sans-serif;&quot;&gt; &lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:409.553px;top:411.545px;14.944px;sans-serif;transform:scaleX(1.00973);&quot;&gt;paper&lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:446.913px;top:411.545px;14.944px;sans-serif;&quot;&gt; &lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:456.029px;top:411.545px;14.944px;sans-serif;transform:scaleX(0.936385);&quot;&gt;studies&lt;/span&gt;&lt;br role=&quot;presentation&quot;&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:81.6067px;top:428.148px;14.944px;sans-serif;transform:scaleX(0.934);&quot;&gt;deep&lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:111.495px;top:428.148px;14.944px;sans-serif;&quot;&gt; &lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:121.178px;top:428.148px;14.944px;sans-serif;transform:scaleX(0.993856);&quot;&gt;representation&lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:213.607px;top:428.148px;14.944px;sans-serif;&quot;&gt; &lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:223.291px;top:428.148px;14.944px;sans-serif;transform:scaleX(1.03736);&quot;&gt;of&lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:235.739px;top:428.148px;14.944px;sans-serif;&quot;&gt; &lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:245.423px;top:428.148px;14.944px;sans-serif;transform:scaleX(1.01509);&quot;&gt;loop&lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; 
style=&quot;left:272.83px;top:428.148px;14.944px;sans-serif;&quot;&gt; &lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:282.529px;top:428.148px;14.944px;sans-serif;transform:scaleX(0.986304);&quot;&gt;detector&lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:334.803px;top:428.148px;14.944px;sans-serif;&quot;&gt; &lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:344.487px;top:428.148px;14.944px;sans-serif;transform:scaleX(1.00819);&quot;&gt;data&lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:372.716px;top:428.148px;14.944px;sans-serif;&quot;&gt; &lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:382.4px;top:428.148px;14.944px;sans-serif;transform:scaleX(0.973068);&quot;&gt;using&lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:416.457px;top:428.148px;14.944px;sans-serif;&quot;&gt; &lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:426.141px;top:428.148px;14.944px;sans-serif;transform:scaleX(1.02636);&quot;&gt;Long-Short&lt;/span&gt;&lt;br role=&quot;presentation&quot;&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:81.6067px;top:444.753px;14.944px;sans-serif;transform:scaleX(1.03974);&quot;&gt;Term&lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:115.918px;top:444.753px;14.944px;sans-serif;&quot;&gt; &lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:126.05px;top:444.753px;14.944px;sans-serif;transform:scaleX(1.05326);&quot;&gt;Memory&lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:180.82px;top:444.753px;14.944px;sans-serif;&quot;&gt; &lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; 
style=&quot;left:190.952px;top:444.753px;14.944px;sans-serif;transform:scaleX(1.08936);&quot;&gt;(LSTM)&lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:243.241px;top:444.753px;14.944px;sans-serif;&quot;&gt; &lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:253.373px;top:444.753px;14.944px;sans-serif;transform:scaleX(1.03875);&quot;&gt;network&lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:306.35px;top:444.753px;14.944px;sans-serif;&quot;&gt; &lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:316.482px;top:444.753px;14.944px;sans-serif;transform:scaleX(1.10058);&quot;&gt;for&lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:335.191px;top:444.753px;14.944px;sans-serif;&quot;&gt; &lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:345.309px;top:444.753px;14.944px;sans-serif;transform:scaleX(0.99868);&quot;&gt;automatic&lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:409.224px;top:444.753px;14.944px;sans-serif;&quot;&gt; &lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:419.356px;top:444.753px;14.944px;sans-serif;transform:scaleX(0.984784);&quot;&gt;detection&lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:477.458px;top:444.753px;14.944px;sans-serif;&quot;&gt; &lt;/span&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:487.59px;top:444.753px;14.944px;sans-serif;transform:scaleX(1.03736);&quot;&gt;of&lt;/span&gt;&lt;br role=&quot;presentation&quot;&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:81.6067px;top:461.358px;14.944px;sans-serif;transform:scaleX(1.00103);&quot;&gt;freeway accidents. 
The LSTM-based framework increases class&lt;/span&gt;&lt;br role=&quot;presentation&quot;&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:81.6067px;top:477.962px;14.944px;sans-serif;transform:scaleX(1.08402);&quot;&gt;separability in the encoded feature space while reducing the&lt;/span&gt;&lt;br role=&quot;presentation&quot;&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:81.6067px;top:494.567px;14.944px;sans-serif;transform:scaleX(1.05134);&quot;&gt;dimension of data. Our experiments on real accident and loop&lt;/span&gt;&lt;br role=&quot;presentation&quot;&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:81.6067px;top:511.17px;14.944px;sans-serif;transform:scaleX(1.0487);&quot;&gt;detector data collected from the Twin Cities Metro freeways of&lt;/span&gt;&lt;br role=&quot;presentation&quot;&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:81.6067px;top:527.775px;14.944px;sans-serif;transform:scaleX(1.0487);&quot;&gt;Minnesota demonstrate that deep representation of traffic flow&lt;/span&gt;&lt;br role=&quot;presentation&quot;&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:81.6067px;top:544.38px;14.944px;sans-serif;transform:scaleX(1.07566);&quot;&gt;data using LSTM network has the potential to detect freeway&lt;/span&gt;&lt;br role=&quot;presentation&quot;&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:81.6067px;top:560.983px;14.944px;sans-serif;transform:scaleX(1.08402);&quot;&gt;accidents in less than 18 minutes with a true positive rate of&lt;/span&gt;&lt;br role=&quot;presentation&quot;&gt;&lt;span dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:81.6067px;top:577.588px;14.944px;sans-serif;transform:scaleX(1.05932);&quot;&gt;0.71 and a false positive rate of 0.25 which outperforms other&lt;/span&gt;&lt;br role=&quot;presentation&quot;&gt;&lt;span 
dir=&quot;ltr&quot; role=&quot;presentation&quot; style=&quot;left:81.6067px;top:594.193px;14.944px;sans-serif;transform:scaleX(0.999604);&quot;&gt;competing methods in the same arrangement&lt;/span&gt;.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Shayan Shirahmad Gale Bagi</style></author><author><style face="normal" font="default" size="100%">Hossein Gharaee Garakani</style></author><author><style face="normal" font="default" size="100%">Behzad Moshiri</style></author><author><style face="normal" font="default" size="100%">Mohammad Khoshnevisan</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Sensing structure for blind spot detection system in vehicles</style></title><secondary-title><style face="normal" font="default" size="100%">2019 International Conference on Control, Automation and Information Sciences (ICCAIS)</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2019</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://ieeexplore.ieee.org/abstract/document/9074580</style></url></web-urls></urls><publisher><style face="normal" font="default" size="100%">IEEE</style></publisher><pages><style face="normal" font="default" size="100%">1-6</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Sensor selection is an essential aspect of blind spot detection (BSD) systems. Indeed, we must choose the sensors appropriately to accomplish high accuracy and performance in any driving condition. Each sensor has exclusive properties which are suitable for a few specific circumstances. 
Therefore, a comprehensive study is warranted to determine the optimum number and types of sensors for BSD. Although sensors have deficiencies which can deteriorate the whole system's performance, a combination of multiple types of sensors together with data fusion methods can, in most cases, substantially compensate for sensors' imperfections. In this paper, we concentrate on multi-sensor fusion in the BSD system and its advantages, which cannot be achieved in a single-sensor BSD system. A sensing structure for the BSD system is proposed considering the indispensable factors in BSD coupled with sensors' constraints, features, and specifications.</style></abstract></record></records></xml>