Prev: 2021.01.02 Next: 2021.01.04

Summary for 2021-01-03, created on 2021-12-24

Segmentation and genome annotation algorithms arxiv:2101.00688 📈 51

Maxwell W Libbrecht, Rachel CW Chan, Michael M Hoffman

**Abstract:** Segmentation and genome annotation (SAGA) algorithms are widely used to understand genome activity and gene regulation. These algorithms take as input epigenomic datasets, such as chromatin immunoprecipitation-sequencing (ChIP-seq) measurements of histone modifications or transcription factor binding. They partition the genome and assign a label to each segment such that positions with the same label exhibit similar patterns of input data. SAGA algorithms discover categories of activity such as promoters, enhancers, or parts of genes without prior knowledge of known genomic elements. In this sense, they generally act in an unsupervised fashion like clustering algorithms, but with the additional simultaneous function of segmenting the genome. Here, we review the common methodological framework that underlies these methods, review variants of and improvements upon this basic framework, catalogue existing large-scale reference annotations, and discuss the outlook for future work.
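Most SAGA methods are built on hidden Markov models or dynamic Bayesian networks. The sketch below is a hedged, minimal stand-in for that idea: it fits an unsupervised Gaussian HMM (via the third-party hmmlearn package) to synthetic signal tracks rather than real ChIP-seq data, then collapses the per-position labels into segments. Real tools such as ChromHMM or Segway are far more elaborate.

```python
# Minimal SAGA-style sketch: unsupervised segmentation and labeling of
# genomic signal tracks with a Gaussian HMM (synthetic data, not real ChIP-seq).
import numpy as np
from hmmlearn import hmm  # assumed third-party dependency

rng = np.random.default_rng(0)

# Fake "epigenomic" tracks: 3 signals over 3000 genomic bins, with three
# hidden activity states that differ in their mean signal levels.
true_states = rng.integers(0, 3, size=3000)
means = np.array([[0.5, 0.2, 0.1], [2.0, 1.5, 0.3], [0.2, 0.4, 2.5]])
signals = means[true_states] + rng.normal(scale=0.3, size=(3000, 3))

# Fit an unsupervised HMM: each hidden state becomes one annotation label.
model = hmm.GaussianHMM(n_components=3, covariance_type="diag",
                        n_iter=50, random_state=0)
model.fit(signals)
labels = model.predict(signals)

# Collapse consecutive identical labels into segments (start, end, label).
change = np.flatnonzero(np.diff(labels)) + 1
bounds = np.concatenate(([0], change, [len(labels)]))
segments = [(int(s), int(e), int(labels[s])) for s, e in zip(bounds[:-1], bounds[1:])]
print(len(segments), "segments; first five:", segments[:5])
```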

DSXplore: Optimizing Convolutional Neural Networks via Sliding-Channel Convolutions arxiv:2101.00745 📈 20

Yuke Wang, Boyuan Feng, Yufei Ding

**Abstract:** As a key advancement of convolutional neural networks (CNNs), depthwise separable convolutions (DSCs) are becoming one of the most popular techniques to reduce the computation and parameter size of CNNs while maintaining model accuracy. They also profoundly improve the applicability of compute- and memory-intensive CNNs to a broad range of applications, such as mobile devices, which are generally short of computation power and memory. However, previous research on DSCs has largely focused on compositing the limited existing DSC designs, thus missing the opportunity to explore more potential designs that can achieve better accuracy and higher computation/parameter reduction. Besides, off-the-shelf convolution implementations offer limited computing schemes, therefore lacking support for DSCs with different convolution patterns. To this end, we introduce DSXplore, the first optimized design for exploring DSCs on CNNs. Specifically, at the algorithm level, DSXplore incorporates a novel factorized kernel -- sliding-channel convolution (SCC), which features input-channel overlapping to balance accuracy against the reduction of computation and memory cost. SCC also offers an enormous space for design exploration by introducing adjustable kernel parameters. Further, at the implementation level, we carry out an optimized GPU implementation tailored for SCC by leveraging several key techniques, such as the input-centric backward design and the channel-cyclic optimization. Intensive experiments on different datasets across mainstream CNNs show the advantages of DSXplore in balancing accuracy and computation/parameter reduction over the standard convolution and existing DSCs.

Enhanced Pub/Sub Communications for Massive IoT Traffic with SARSA Reinforcement Learning arxiv:2101.00687 📈 17

Carlos E. Arruda, Pedro F. Moraes, Nazim Agoulmine, Joberto S. B. Martins

**Abstract:** Sensors are being extensively deployed and are expected to expand at significant rates in the coming years. They typically generate a large volume of data in internet of things (IoT) application areas like smart cities, intelligent traffic systems, smart grid, and e-health. Cloud, edge and fog computing are potential and competitive strategies for collecting, processing, and distributing IoT data. However, cloud, edge, and fog-based solutions need to tackle the distribution of a high volume of IoT data efficiently through constrained and limited-resource network infrastructures. This paper addresses the issue of conveying a massive volume of IoT data through a network with limited communications resources (bandwidth) using a cognitive communications resource allocation based on Reinforcement Learning (RL) with the SARSA algorithm. The proposed network infrastructure (PSIoTRL) uses a Publish/Subscribe architecture to access massive and highly distributed IoT data. It is demonstrated that the PSIoTRL bandwidth allocation for buffer flushing based on SARSA enhances the IoT aggregator buffer occupation and network link utilization. The PSIoTRL dynamically adapts the IoT aggregator traffic flushing according to the Pub/Sub topic's priority and network constraint requirements.
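The control loop described above hinges on the standard on-policy SARSA update. The toy below is a hedged sketch, not the PSIoTRL testbed: a hypothetical aggregator whose state is a discretized buffer level and whose actions are bandwidth shares, with made-up dynamics, used only to show the update rule.

```python
# Tabular SARSA sketch on a toy buffer-allocation problem (not the PSIoTRL system).
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions = 10, 3          # discretized buffer levels, bandwidth shares
alpha, gamma, eps = 0.1, 0.95, 0.1   # learning rate, discount, exploration
Q = np.zeros((n_states, n_actions))

def step(state, action):
    """Hypothetical dynamics: traffic arrives, the chosen share drains the buffer."""
    arrivals = rng.integers(0, 3)
    drained = action + 1
    next_state = int(np.clip(state + arrivals - drained, 0, n_states - 1))
    reward = -next_state            # penalize buffer occupation
    return next_state, reward

def policy(state):
    return int(rng.integers(n_actions)) if rng.random() < eps else int(Q[state].argmax())

for episode in range(500):
    s, a = int(rng.integers(n_states)), 0
    for _ in range(50):
        s2, r = step(s, a)
        a2 = policy(s2)
        # On-policy SARSA update: uses the action actually selected next.
        Q[s, a] += alpha * (r + gamma * Q[s2, a2] - Q[s, a])
        s, a = s2, a2

print("Greedy bandwidth share per buffer level:", Q.argmax(axis=1))
```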

StarNet: Gradient-free Training of Deep Generative Models using Determined System of Linear Equations arxiv:2101.00574 📈 10

Amir Zadeh, Santiago Benoit, Louis-Philippe Morency

**Abstract:** In this paper we present an approach for training deep generative models solely based on solving determined systems of linear equations. A network that uses this approach, called a StarNet, has the following desirable properties: 1) training requires no gradient, as the solution to the system of linear equations is not stochastic; 2) it is highly scalable when solving the system of linear equations w.r.t. the latent codes, and similarly for the parameters of the model; and 3) it gives desirable least-squares bounds for the estimation of latent codes and network parameters within each layer.
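As a rough illustration of the gradient-free idea, fitting one layer's weights in closed form by solving a linear system rather than by SGD, the sketch below recovers W from H W ≈ T with numpy's least-squares routine. It is not the authors' StarNet procedure, which alternates such solves over latent codes and parameters across layers; the shapes and data here are arbitrary.

```python
# Gradient-free fit of a single linear layer via least squares (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(256, 64))                        # layer inputs (e.g., latent codes)
W_true = rng.normal(size=(64, 10))
T = H @ W_true + 0.01 * rng.normal(size=(256, 10))    # layer targets

# Solve the (over)determined system H W = T in the least-squares sense.
W_hat, residuals, rank, _ = np.linalg.lstsq(H, T, rcond=None)

print("reconstruction error:", np.linalg.norm(H @ W_hat - T))
print("parameter error:     ", np.linalg.norm(W_hat - W_true))
```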

Copula Flows for Synthetic Data Generation arxiv:2101.00598 📈 9

Sanket Kamthe, Samuel Assefa, Marc Deisenroth

**Abstract:** The ability to generate high-fidelity synthetic data is crucial when available (real) data is limited or where privacy and data protection standards allow only for limited use of the given data, e.g., in medical and financial data-sets. Current state-of-the-art methods for synthetic data generation are based on generative models, such as Generative Adversarial Networks (GANs). Even though GANs have achieved remarkable results in synthetic data generation, they are often challenging to interpret. Furthermore, GAN-based methods can suffer when used with mixed real and categorical variables. Moreover, loss function (discriminator loss) design itself is problem specific, i.e., the generative model may not be useful for tasks it was not explicitly trained for. In this paper, we propose to use a probabilistic model as a synthetic data generator. Learning the probabilistic model for the data is equivalent to estimating the density of the data. Based on copula theory, we divide the density estimation task into two parts, i.e., estimating univariate marginals and estimating the multivariate copula density over the univariate marginals. We use normalising flows to learn both the copula density and the univariate marginals. We benchmark our method on both simulated and real data-sets in terms of density estimation as well as the ability to generate high-fidelity synthetic data.
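A hedged sketch of the two-part factorization: estimate univariate marginals, map the data to the unit cube via their CDFs, and fit a copula over those uniforms. For brevity it uses empirical marginals and a Gaussian copula instead of the normalizing flows the paper trains, and toy two-column data stands in for a real dataset.

```python
# Copula-style synthetic data sketch: empirical marginals + Gaussian copula
# (the paper replaces both pieces with normalizing flows).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Toy "real" data with dependent, non-Gaussian columns.
z = rng.normal(size=(2000, 2))
real = np.column_stack([np.exp(z[:, 0]), 3 * z[:, 0] + z[:, 1]])

# 1) Marginals: map each column to uniforms via its empirical CDF (ranks).
u = (stats.rankdata(real, axis=0) - 0.5) / len(real)

# 2) Copula: fit a Gaussian copula on the normal scores of those uniforms.
scores = stats.norm.ppf(u)
corr = np.corrcoef(scores, rowvar=False)

# 3) Sample: draw from the copula, then invert the empirical marginals.
samples = stats.norm.cdf(rng.multivariate_normal(np.zeros(2), corr, size=2000))
synthetic = np.column_stack([
    np.quantile(real[:, j], samples[:, j]) for j in range(real.shape[1])
])
print("real corr:     ", np.corrcoef(real, rowvar=False)[0, 1])
print("synthetic corr:", np.corrcoef(synthetic, rowvar=False)[0, 1])
```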

Dynamics, behaviours, and anomaly persistence in cryptocurrencies and equities surrounding COVID-19 arxiv:2101.00576 📈 9

Nick James

**Abstract:** This paper uses new and recently introduced methodologies to study the similarity in the dynamics and behaviours of cryptocurrencies and equities surrounding the COVID-19 pandemic. We study two collections, 45 cryptocurrencies and 72 equities, both independently and in conjunction. First, we examine the evolution of cryptocurrency and equity market dynamics, with a particular focus on their change during the COVID-19 pandemic. We demonstrate markedly more similar dynamics during times of crisis. Next, we apply recently introduced methods to contrast trajectories, erratic behaviours, and extreme values among the two multivariate time series. Finally, we introduce a new framework for determining the persistence of market anomalies over time. Surprisingly, we find that although cryptocurrencies exhibit stronger collective dynamics and correlation in all market conditions, equities behave more similarly in their trajectories and extremes, and show greater persistence in anomalies over time.

Few-shot Image Classification: Just Use a Library of Pre-trained Feature Extractors and a Simple Classifier arxiv:2101.00562 📈 9

Arkabandhu Chowdhury, Mingchao Jiang, Swarat Chaudhuri, Chris Jermaine

**Abstract:** Recent papers have suggested that transfer learning can outperform sophisticated meta-learning methods for few-shot image classification. We take this hypothesis to its logical conclusion, and suggest the use of an ensemble of high-quality, pre-trained feature extractors for few-shot image classification. We show experimentally that a library of pre-trained feature extractors combined with a simple feed-forward network learned with an L2-regularizer can be an excellent option for solving cross-domain few-shot image classification. Our experimental results suggest that this simpler sample-efficient approach far outperforms several well-established meta-learning algorithms on a variety of few-shot tasks.
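A minimal stand-in for the recipe: concatenate features from several frozen extractors and train a small L2-regularized classifier on the few labeled shots. In practice the library would be CNN backbones pre-trained on large datasets; here two cheap fixed transforms of scikit-learn's digits data play the role of the "library" so the sketch runs offline, and logistic regression stands in for the paper's feed-forward network.

```python
# Few-shot classification sketch: a "library" of frozen feature extractors
# + a simple L2-regularized classifier (stand-in extractors, not real pre-trained CNNs).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.random_projection import GaussianRandomProjection

X, y = load_digits(return_X_y=True)
# Few-shot split: about 5 labeled examples per class for training.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=50, stratify=y, random_state=0)

# "Library" of frozen extractors, fit on unlabeled data only (no labels used).
extractors = [PCA(n_components=32, random_state=0).fit(X),
              GaussianRandomProjection(n_components=32, random_state=0).fit(X)]

def featurize(data):
    # Concatenate the outputs of every extractor in the library.
    return np.hstack([e.transform(data) for e in extractors])

# Simple classifier with L2 regularization (C is the inverse penalty strength).
clf = LogisticRegression(C=0.5, max_iter=2000).fit(featurize(X_tr), y_tr)
print("few-shot accuracy:", clf.score(featurize(X_te), y_te))
```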

Bankruptcy prediction using disclosure text features arxiv:2101.00719 📈 8

Sridhar Ravula

**Abstract:** A public firm's bankruptcy prediction is an important financial research problem because of the security price downside risks. Traditional methods rely on accounting metrics that suffer from shortcomings like window dressing and retrospective focus. While disclosure text-based metrics overcome some of these issues, current methods excessively focus on disclosure tone and sentiment. There is a need to relate meaningful signals in the disclosure text to financial outcomes and to quantify the disclosure text data. This work proposes a new distress dictionary based on the sentences used by managers in explaining financial status. It demonstrates the significant differences in linguistic features between bankrupt and non-bankrupt firms. Further, using a large sample of 500 bankrupt firms, it builds predictive models and compares their performance against two dictionaries used in financial text analysis. This research shows that the proposed distress dictionary captures unique information from disclosures and that the predictive models based on its features have the highest accuracy.

Exploring Transfer Learning on Face Recognition of Dark Skinned, Low Quality and Low Resource Face Data arxiv:2101.10809 📈 7

Nuredin Ali

**Abstract:** There is a big difference in skin tone between dark- and light-skinned people. Despite this fact, in most face recognition tasks almost all classical state-of-the-art models are trained on datasets containing an overwhelming majority of light-skinned face images. It is tedious to collect a huge amount of data for dark-skinned faces and train a model from scratch. In this paper, we apply transfer learning on VGGFace to check how well it works on recognising dark-skinned, mainly Ethiopian, faces. The dataset is of low quality and low resource. Our experimental results show above 95% accuracy, which indicates that transfer learning works in such settings.

A novel policy for pre-trained Deep Reinforcement Learning for Speech Emotion Recognition arxiv:2101.00738 📈 7

Thejan Rajapakshe, Rajib Rana, Sara Khalifa, Björn W. Schuller, Jiajun Liu

**Abstract:** Reinforcement Learning (RL) is a semi-supervised learning paradigm in which an agent learns by interacting with an environment. Deep learning combined with RL provides an efficient method for learning how to interact with the environment, called Deep Reinforcement Learning (deep RL). Deep RL has gained tremendous success in gaming, such as AlphaGo, but its potential has rarely been explored for challenging tasks like Speech Emotion Recognition (SER). Deep RL applied to SER could potentially improve the performance of an automated call centre agent by dynamically learning emotion-aware responses to customer queries. While the policy employed by the RL agent plays a major role in action selection, there is no current RL policy tailored for SER. In addition, an extended learning period is a general challenge for deep RL, which can impact the speed of learning for SER. Therefore, in this paper, we introduce a novel policy, the "Zeta policy", which is tailored for SER, and apply pre-training in deep RL to achieve a faster learning rate. Pre-training with a cross dataset was also studied to assess the feasibility of pre-training the RL agent with a similar dataset in a scenario where no real environmental data is available. The IEMOCAP and SAVEE datasets were used for the evaluation, with the task of recognizing four emotions (happy, sad, angry, and neutral) in the provided utterances. Experimental results show that the proposed "Zeta policy" performs better than existing policies. The results also support that pre-training can reduce the training time by reducing the warm-up period, and is robust to the cross-corpus scenario.

Automatic Defect Detection of Print Fabric Using Convolutional Neural Network arxiv:2101.00703 📈 7

Samit Chakraborty, Marguerite Moore, Lisa Parrillo-Chapman

**Abstract:** Automatic defect detection is a challenging task because of the variability in texture and type of fabric defects. An effective defect detection system enables manufacturers to improve the quality of processes and products. Automation across textile manufacturing systems would reduce fabric wastage and increase profitability by saving cost and resources. There is a range of contemporary research on automatic defect detection systems using image processing and machine learning techniques. These techniques differ from each other based on the manufacturing processes and defect types. Researchers have also been able to establish real-time defect detection systems for weaving. Although there has been research on patterned fabric defect detection, those defects are related to weaving faults such as holes and warp and weft defects. There has not been any research designed to detect defects that arise during printing, such as spots and print mismatch. This research fills that gap by developing a print fabric database and implementing a deep convolutional neural network (CNN).

Generalized Latency Performance Estimation for Once-For-All Neural Architecture Search arxiv:2101.00732 📈 6

Muhtadyuzzaman Syed, Arvind Akpuram Srinivasan

**Abstract:** Neural Architecture Search (NAS) has enabled the possibility of automated machine learning by streamlining the manual development of deep neural network architectures through the definition of a search space, a search strategy, and a performance estimation strategy. To address the need for multi-platform deployment of Convolutional Neural Network (CNN) models, Once-For-All (OFA) proposed to decouple training and search in order to deliver a one-shot model of sub-networks that are constrained to various accuracy-latency tradeoffs. We find that the performance estimation strategy for OFA's search severely lacks generalizability across different hardware deployment platforms because it relies on single-hardware latency lookup tables that require a significant amount of time and manual effort to build beforehand. In this work, we demonstrate a framework for building latency predictors for neural network architectures to address the need for heterogeneous hardware support and to reduce the overhead of lookup tables altogether. We introduce two generalizability strategies: fine-tuning using a base model trained on a specific hardware and NAS search space, and GPU generalization, which trains a model on GPU hardware parameters such as number of cores, RAM size, and memory bandwidth. With this, we provide a family of latency prediction models that achieve over 50% lower RMSE loss compared with ProxylessNAS. We also show that the use of these latency predictors matches the NAS performance of the lookup-table baseline approach, if not exceeding it in certain cases.
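A latency predictor of this kind is, at heart, a regression model over architecture and hardware descriptors. The sketch below fits one on fully synthetic "measurements" (random architecture sizes, made-up hardware parameters such as core count and memory bandwidth) and reports RMSE; it is a toy illustration of the regression setup, not the authors' trained predictors or data.

```python
# Toy latency predictor: regression from (architecture, hardware) descriptors
# to latency, on synthetic data standing in for measured lookup tables.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
depth  = rng.integers(10, 30, n)            # architecture descriptors (hypothetical)
width  = rng.integers(16, 256, n)
cores  = rng.choice([1024, 2048, 4096], n)  # hardware descriptors (GPU-like, hypothetical)
mem_bw = rng.choice([200, 400, 900], n)     # GB/s

X = np.column_stack([depth, width, cores, mem_bw])
# Synthetic "measured" latency: a compute term divided by available parallelism, plus noise.
latency = depth * width**2 / (cores * mem_bw) * 1e3 + rng.normal(0, 0.05, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, latency, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
rmse = np.sqrt(mean_squared_error(y_te, model.predict(X_te)))
print(f"latency RMSE on held-out configs: {rmse:.3f}")
```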

Algorithmic Complexities in Backpropagation and Tropical Neural Networks arxiv:2101.00717 📈 6

Ozgur Ceyhan

**Abstract:** In this note, we propose a novel technique to reduce the algorithmic complexity of neural network training by using matrices of tropical real numbers instead of matrices of real numbers. Since tropical arithmetic replaces multiplication with addition and addition with max, we theoretically achieve constant factors in time complexity that are several orders of magnitude better in the training phase. The fact that we can replace the field of real numbers with the tropical semiring of real numbers and yet achieve the same classification results via neural networks comes from deep results in topology and analysis, which we verify in our note. We then explore artificial neural networks in terms of tropical arithmetic and tropical algebraic geometry, and introduce multi-layered tropical neural networks as universal approximators. After giving a tropical reformulation of the backpropagation algorithm, we verify that its algorithmic complexity is substantially lower than that of the usual backpropagation, as tropical arithmetic is free of the complexity of ordinary multiplication.
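In the max-plus (tropical) semiring the matrix "product" replaces multiply-accumulate with add-maximum, which is the source of the claimed constant-factor savings. Below is a small numpy sketch of that operation together with one possible tropical analogue of an affine layer; the layer definition is an illustrative choice, not necessarily the paper's exact formulation.

```python
# Tropical (max-plus) matrix product: multiplication -> addition, addition -> max.
import numpy as np

def tropical_matmul(A, B):
    """(A (x) B)[i, j] = max_k (A[i, k] + B[k, j])."""
    return np.max(A[:, :, None] + B[None, :, :], axis=1)

def tropical_layer(X, W, b):
    """One possible tropical analogue of an affine layer: max-plus product, max with a bias."""
    return np.maximum(tropical_matmul(X, W), b)

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))      # 4 samples, 3 features
W = rng.normal(size=(3, 2))      # tropical "weights"
b = rng.normal(size=(2,))
print(tropical_layer(X, W, b))
```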

Progressive Correspondence Pruning by Consensus Learning arxiv:2101.00591 📈 6

Chen Zhao, Yixiao Ge, Feng Zhu, Rui Zhao, Hongsheng Li, Mathieu Salzmann

**Abstract:** Correspondence selection aims to correctly select the consistent matches (inliers) from an initial set of putative correspondences. The selection is challenging since putative matches are typically extremely unbalanced, largely dominated by outliers, and the random distribution of such outliers further complicates the learning process for learning-based methods. To address this issue, we propose to progressively prune the correspondences via a local-to-global consensus learning procedure. We introduce a "pruning" block that lets us identify reliable candidates among the initial matches according to consensus scores estimated using local-to-global dynamic graphs. We then achieve progressive pruning by stacking multiple pruning blocks sequentially. Our method outperforms the state of the art on robust line fitting, camera pose estimation and retrieval-based image localization benchmarks by significant margins and shows promising generalization ability to different datasets and detector/descriptor combinations.

A Novel Multi-Stage Training Approach for Human Activity Recognition from Multimodal Wearable Sensor Data Using Deep Neural Network arxiv:2101.00702 📈 5

Tanvir Mahmud, A. Q. M. Sazzad Sayyed, Shaikh Anowarul Fattah, Sun-Yuan Kung

**Abstract:** Deep neural networks are an effective choice for automatically recognizing human actions using data from various wearable sensors. These networks automate the process of feature extraction, relying completely on data. However, various noises in time series data, together with complex inter-modal relationships among sensors, make this process more complicated. In this paper, we propose a novel multi-stage training approach that increases diversity in the feature extraction process in order to recognize actions accurately by combining varieties of features extracted from diverse perspectives. Initially, instead of using a single type of transformation, numerous transformations are applied to the time series data to obtain variegated representations of the features encoded in the raw data. An efficient deep CNN architecture is proposed that can be individually trained to extract features from the different transformed spaces. Later, these CNN feature extractors are merged into an optimal architecture finely tuned for optimizing the diversified extracted features through a combined training stage or multiple sequential training stages. This approach offers the opportunity to explore the features encoded in the raw sensor data using multifarious observation windows, with immense scope for efficient selection of features for final convergence. Extensive experiments carried out on three publicly available datasets show consistently outstanding performance, with average five-fold cross-validation accuracy of 99.29% on the UCI HAR database, 99.02% on the USC HAR database, and 97.21% on the SKODA database, outperforming other state-of-the-art approaches.

Passenger Mobility Prediction via Representation Learning for Dynamic Directed and Weighted Graph arxiv:2101.00752 📈 4

Yuandong Wang, Hongzhi Yin, Tong Chen, Chunyang Liu, Ben Wang, Tianyu Wo, Jie Xu

**Abstract:** In recent years, ride-hailing services have become increasingly prevalent, as they provide huge convenience for passengers. As a fundamental problem, the timely prediction of passenger demands in different regions is vital for effective traffic flow control and route planning. As both spatial and temporal patterns are indispensable for passenger demand prediction, relevant research has evolved from pure time series to graph-structured data for modeling historical passenger demand data, where a snapshot graph is constructed for each time slot by connecting region nodes via different relational edges (e.g., origin-destination relationship, geographical distance, etc.). Consequently, the spatiotemporal passenger demand records naturally carry dynamic patterns in the constructed graphs, where the edges also encode important information about the directions and volume (i.e., weights) of passenger demands between two connected regions. However, existing graph-based solutions fail to simultaneously consider those three crucial aspects of dynamic, directed, and weighted (DDW) graphs, leading to limited expressiveness when learning graph representations for passenger demand prediction. Therefore, we propose a novel spatiotemporal graph attention network, namely Gallat (Graph prediction with all attention), as a solution. In Gallat, by comprehensively incorporating those three intrinsic properties of DDW graphs, we build three attention layers to fully capture the spatiotemporal dependencies among different regions across all historical time slots. Moreover, the model employs a subtask to conduct pretraining so that it can obtain accurate results more quickly. We evaluate the proposed model on real-world datasets, and our experimental results demonstrate that Gallat outperforms the state-of-the-art approaches.

Factor Analysis, Probabilistic Principal Component Analysis, Variational Inference, and Variational Autoencoder: Tutorial and Survey arxiv:2101.00734 📈 4

Benyamin Ghojogh, Ali Ghodsi, Fakhri Karray, Mark Crowley

**Abstract:** This is a tutorial and survey paper on factor analysis, probabilistic Principal Component Analysis (PCA), variational inference, and the Variational Autoencoder (VAE). These methods, which are tightly related, are dimensionality reduction and generative models. They assume that every data point is generated from or caused by a low-dimensional latent factor. By learning the parameters of the distribution of the latent space, the corresponding low-dimensional factors are found for the sake of dimensionality reduction. Owing to their stochastic and generative behaviour, these models can also be used to generate new data points in the data space. In this paper, we start with variational inference, where we derive the Evidence Lower Bound (ELBO) and Expectation Maximization (EM) for learning the parameters. Then, we introduce factor analysis, derive its joint and marginal distributions, and work out its EM steps. Probabilistic PCA is then explained as a special case of factor analysis, and its closed-form solutions are derived. Finally, VAE is explained, where the encoder, decoder and sampling from the latent space are introduced. Training VAE using both EM and backpropagation is explained.
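One of the closed forms such a tutorial derives is the maximum-likelihood solution of probabilistic PCA (Tipping & Bishop 1999): the noise variance is the average discarded eigenvalue of the sample covariance, and the loading matrix is built from the leading eigenvectors. A numpy sketch under the standard PPCA model x = Wz + mu + noise, with synthetic data:

```python
# Closed-form probabilistic PCA (maximum likelihood, Tipping & Bishop 1999).
import numpy as np

rng = np.random.default_rng(0)
d, q, n = 10, 2, 5000                       # data dim, latent dim, samples
W_true = rng.normal(size=(d, q))
X = rng.normal(size=(n, q)) @ W_true.T + 0.1 * rng.normal(size=(n, d))

Xc = X - X.mean(axis=0)                     # center the data
S = Xc.T @ Xc / n                           # sample covariance
eigvals, eigvecs = np.linalg.eigh(S)        # ascending order
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]

sigma2 = eigvals[q:].mean()                 # ML noise variance: mean discarded eigenvalue
W_ml = eigvecs[:, :q] @ np.diag(np.sqrt(eigvals[:q] - sigma2))  # ML loading matrix

print("estimated noise variance:", sigma2)
# The recovered loadings span (approximately) the same subspace as the true ones.
print("subspace mismatch (should be small):",
      np.linalg.norm(W_ml @ np.linalg.pinv(W_ml) - W_true @ np.linalg.pinv(W_true)))
```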

Neural Networks for Keyword Spotting on IoT Devices arxiv:2101.00693 📈 4

Rakesh Dhakshinamurthy

**Abstract:** We explore Neural Networks (NNs) for keyword spotting (KWS) on IoT devices like smart speakers and wearables. Since we target executing our NN within a constrained memory and computation footprint, we propose a CNN design that (i) uses a limited number of multiplies and (ii) uses a limited number of model parameters.

AttnMove: History Enhanced Trajectory Recovery via Attentional Network arxiv:2101.00646 📈 4

Tong Xia, Yunhan Qi, Jie Feng, Fengli Xu, Funing Sun, Diansheng Guo, Yong Li

**Abstract:** A considerable amount of mobility data has been accumulated due to the proliferation of location-based services. Nevertheless, compared with mobility data from transportation systems like the GPS module in taxis, this kind of data is commonly sparse in terms of individual trajectories, in the sense that users do not access mobile services and contribute their data all the time. Consequently, the sparsity inevitably weakens the practical value of the data, even if it has a high user penetration rate. To solve this problem, we propose a novel attentional neural network-based model, named AttnMove, to densify individual trajectories by recovering unobserved locations at a fine-grained spatial-temporal resolution. To tackle the challenges posed by sparsity, we design various intra- and inter-trajectory attention mechanisms to better model the mobility regularity of users and fully exploit the periodical pattern from long-term history. We evaluate our model on two real-world datasets, and extensive results demonstrate the performance gain compared with the state-of-the-art methods. This also shows that, by providing high-quality mobility data, our model can benefit a variety of mobility-oriented downstream applications.

Privacy-sensitive Objects Pixelation for Live Video Streaming arxiv:2101.00604 📈 4

Jizhe Zhou, Chi-Man Pun, Yu Tong

**Abstract:** With the prevalence of live video streaming, establishing an online pixelation method for privacy-sensitive objects is urgent. Owing to the inaccurate detection of privacy-sensitive objects, simply migrating the tracking-by-detection structure into an online form incurs problems in target initialization, drifting, and over-pixelation. To cope with this inevitable but impactful detection issue, we propose a novel Privacy-sensitive Objects Pixelation (PsOP) framework for automatic personal privacy filtering during live video streaming. Leveraging pre-trained detection networks, our PsOP is extendable to any potential privacy-sensitive object. Employing embedding networks and the proposed Positioned Incremental Affinity Propagation (PIAP) clustering algorithm as the backbone, our PsOP unifies the pixelation of discriminating and indiscriminating objects through trajectory generation. In addition to boosting pixelation accuracy, experiments on the streaming video data we built show that the proposed PsOP can significantly reduce the over-pixelation ratio in privacy-sensitive object pixelation.

Gaussian Function On Response Surface Estimation arxiv:2101.00772 📈 3

Mohammadhossein Toutiaee, John Miller

**Abstract:** We propose a new framework for interpreting black-box machine learning models along two dimensions (features and samples) via a metamodeling technique, by which we study the output and input relationships of the underlying machine learning model. The metamodel can be estimated from data generated via a trained complex model by running the computer experiment on samples of data in the region of interest. We utilize a Gaussian process as a surrogate to capture the response surface of the complex model, in which we incorporate two parts: interpolated values that are modeled by a stationary Gaussian process Z governed by a prior covariance function, and a mean function mu that captures the known trends in the underlying model. The optimization procedure for the variable importance parameter theta maximizes the likelihood function. This theta corresponds to the correlation of individual variables with the target response. There is no need for any pre-assumed model, since the approach depends on empirical observations. Experiments demonstrate the potential of the interpretable model through quantitative assessment of the predicted samples.
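A minimal version of the surrogate idea: query a "complex model" at sampled design points, then fit a Gaussian process whose posterior mean reproduces the response surface. The sketch uses a fixed RBF kernel and a zero prior mean in plain numpy, rather than the paper's specific mean/covariance choices or the likelihood-based tuning of theta.

```python
# GP surrogate sketch: fit a Gaussian-process response surface to samples
# queried from a black-box model (RBF kernel, fixed hyperparameters).
import numpy as np

def black_box(x):                      # stand-in for the trained complex model
    return np.sin(3 * x) + 0.5 * x

def rbf(A, B, length=0.5):
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-0.5 * d2 / length**2)

rng = np.random.default_rng(0)
X_train = rng.uniform(-2, 2, 15)                   # design points in the region of interest
y_train = black_box(X_train)

noise = 1e-6                                       # jitter for numerical stability
K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
alpha = np.linalg.solve(K, y_train)

X_test = np.linspace(-2, 2, 5)
mean = rbf(X_test, X_train) @ alpha                # GP posterior mean = surrogate surface
print("surrogate:", np.round(mean, 3))
print("true     :", np.round(black_box(X_test), 3))
```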

Cycle Registration in Persistent Homology with Applications in Topological Bootstrap arxiv:2101.00698 📈 3

Yohai Reani, Omer Bobrowski

**Abstract:** In this article we propose a novel approach for comparing the persistent homology representations of two spaces (filtrations). Commonly used methods are based on numerical summaries such as persistence diagrams and persistence landscapes, along with suitable metrics (e.g. Wasserstein). These summaries are useful for computational purposes, but they are merely a marginal of the actual topological information that persistent homology can provide. Instead, our approach compares between two topological representations directly in the data space. We do so by defining a correspondence relation between individual persistent cycles of two different spaces, and devising a method for computing this correspondence. Our matching of cycles is based on both the persistence intervals and the spatial placement of each feature. We demonstrate our new framework in the context of topological inference, where we use statistical bootstrap methods in order to differentiate between real features and noise in point cloud data.

Learning Neural Networks on SVD Boosted Latent Spaces for Semantic Classification arxiv:2101.00563 📈 3

Sahil Sidheekh

**Abstract:** The availability of large amounts of data and compelling computation power have made deep learning models very popular for text classification and sentiment analysis. Deep neural networks have achieved competitive performance on the above tasks when trained on naive text representations such as word count, term frequency, and binary matrix embeddings. However, many of the above representations result in the input space having a dimension of the order of the vocabulary size, which is enormous. This leads to a blow-up in the number of parameters to be learned, and the computational cost becomes infeasible when scaling to domains that require retaining a colossal vocabulary. This work proposes using singular value decomposition to transform the high-dimensional input space to a lower-dimensional latent space. We show that neural networks trained on this lower-dimensional space are not only able to retain performance while achieving a significant reduction in computational complexity but, in many situations, also outperform classical neural networks trained on the native input space.
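A compact version of that pipeline: high-dimensional sparse text features are projected onto a low-rank SVD basis before a network is trained on the latent space. The tiny in-line corpus is only there to keep the sketch self-contained and offline; the paper's experiments use real text datasets and larger networks.

```python
# TF-IDF -> truncated SVD (low-dimensional latent space) -> small neural network.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

docs = ["great movie, loved the acting", "terrible plot and worse acting",
        "wonderful, a joy to watch",     "boring, I want my money back",
        "an instant classic, superb",    "awful pacing and dull characters"]
labels = [1, 0, 1, 0, 1, 0]              # 1 = positive, 0 = negative (toy data)

model = make_pipeline(
    TfidfVectorizer(),                               # high-dimensional sparse input space
    TruncatedSVD(n_components=4, random_state=0),    # SVD-boosted latent space
    MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
)
model.fit(docs, labels)
print(model.predict(["superb acting, loved it", "dull and boring plot"]))
```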

Meta-Learning Conjugate Priors for Few-Shot Bayesian Optimization arxiv:2101.00729 📈 2

Ruduan Plug

**Abstract:** Bayesian Optimization is a methodology used in statistical modelling that utilizes a Gaussian process prior distribution to iteratively update a posterior distribution towards the true distribution of the data. Finding unbiased informative priors to sample from is challenging and can greatly influence the posterior distribution if only few data are available. In this paper we propose a novel approach that utilizes meta-learning to automate the estimation of informative conjugate prior distributions given a distribution class. From this process we generate priors that require only few data to estimate the shape parameters of the original distribution of the data.
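The payoff of a well-chosen conjugate prior is that very few observations are needed to pin down the posterior. The sketch below contrasts a flat Beta(1, 1) prior with a hypothetical "learned" informative Beta prior on a Bernoulli rate; it shows only the conjugate update the paper builds on, not the meta-learning procedure that would produce such a prior.

```python
# Conjugate Beta-Bernoulli updates: informative vs. noninformative prior with few data.
import numpy as np

rng = np.random.default_rng(0)
true_rate = 0.72
data = rng.random(5) < true_rate          # only five observations

def posterior_mean(a, b, obs):
    """Beta(a, b) prior + Bernoulli likelihood -> Beta(a + successes, b + failures)."""
    s, f = obs.sum(), len(obs) - obs.sum()
    return (a + s) / (a + b + s + f)

flat_prior = (1.0, 1.0)                   # noninformative uniform prior
learned_prior = (7.0, 3.0)                # hypothetical informative prior (mean 0.7)

print("true rate:", true_rate)
print("flat prior posterior mean:   ", round(posterior_mean(*flat_prior, data), 3))
print("learned prior posterior mean:", round(posterior_mean(*learned_prior, data), 3))
```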

Synthetic Embedding-based Data Generation Methods for Student Performance arxiv:2101.00728 📈 2

Dom Huh

**Abstract:** Given the inherent class imbalance issue within student performance datasets, samples belonging to the edges of the target class distribution pose a challenge for predictive machine learning algorithms to learn. In this paper, we introduce a general framework for synthetic embedding-based data generation (SEDG), a search-based approach to generate new synthetic samples using embeddings to optimally correct the detrimental effects of class imbalances. We compare the SEDG framework to past synthetic data generation methods, including deep generative models and traditional sampling methods. In our results, we find that SEDG outperforms the traditional re-sampling methods for deep neural networks and performs competitively for common machine learning classifiers on the student performance task in several standard performance metrics.

Adversarial Unsupervised Domain Adaptation for Harmonic-Percussive Source Separation arxiv:2101.00701 📈 2

Carlos Lordelo, Emmanouil Benetos, Simon Dixon, Sven Ahlbäck, Patrik Ohlsson

**Abstract:** This paper addresses the problem of domain adaptation for the task of music source separation. Using datasets from two different domains, we compare the performance of a deep learning-based harmonic-percussive source separation model under different training scenarios, including supervised joint training using data from both domains and pre-training in one domain with fine-tuning in another. We propose an adversarial unsupervised domain adaptation approach suitable for the case where no labelled data (ground-truth source signals) from a target domain is available. By leveraging unlabelled data (only mixtures) from this domain, experiments show that our framework can improve separation performance on the new domain without losing any considerable performance on the original domain. The paper also introduces the Tap & Fiddle dataset, a dataset containing recordings of Scandinavian fiddle tunes along with isolated tracks for 'foot-tapping' and 'violin'.

An Evolution of CNN Object Classifiers on Low-Resolution Images arxiv:2101.00686 📈 2

Md. Mohsin Kabir, Abu Quwsar Ohi, Md. Saifur Rahman, M. F. Mridha

**Abstract:** Object classification is a significant task in computer vision. It has become an effective research area as an important aspect of image processing and the building block of image localization, detection, and scene parsing. Object classification from low-quality images is difficult due to variance in object colors, aspect ratios, and cluttered backgrounds. The field of object classification has seen remarkable advancements with the development of deep convolutional neural networks (DCNNs). Deep neural networks have been demonstrated to be very powerful systems for facing the challenge of object classification from high-resolution images, but deploying such object classification networks on embedded devices remains challenging due to the high computational and memory requirements. Using high-quality images often causes high computational and memory complexity, whereas low-quality images can alleviate this issue. Hence, in this paper, we investigate an optimal architecture that accurately classifies low-quality images using DCNN architectures. To validate different baselines on low-quality images, we perform experiments using webcam-captured image datasets of 10 different objects. In this research work, we evaluate the proposed architecture by implementing popular CNN architectures. The experimental results validate that the MobileNet architecture delivers better results than most of the available CNN architectures for low-resolution webcam image datasets.

Learning optimal Bayesian prior probabilities from data arxiv:2101.00672 📈 2

Ozan Kaan Kayaalp

**Abstract:** Noninformative uniform priors are staples of Bayesian inference, especially in Bayesian machine learning. This study challenges the assumption that they are optimal and that their use in Bayesian inference yields optimal outcomes. Instead of using arbitrary noninformative uniform priors, we propose a machine learning based alternative method, learning optimal priors from data by maximizing a target function of interest. Applying naïve Bayes text classification methodology and a search algorithm developed for this study, our system learned priors from data using the positive predictive value metric as the target function. The task was to find Wikipedia articles that had not (but should have) been categorized under certain Wikipedia categories. We conducted five sets of experiments using separate Wikipedia categories. While the baseline models used the popular Bayes-Laplace priors, the study models learned the optimal priors for each set of experiments separately before using them. The results showed that the study models consistently outperformed the baseline models with a wide margin of statistical significance (p < 0.001). The measured performance improvement of the study model over the baseline was as high as 443%, with a mean value of 193% over five Wikipedia categories.
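The study's search can be pictured as tuning the smoothing (pseudo-count) parameters of a naive Bayes model to maximize positive predictive value on held-out data, instead of fixing them at the Bayes-Laplace value of 1. A small scikit-learn sketch of that selection loop, with toy data and a grid search standing in for the paper's search algorithm:

```python
# Choosing naive Bayes prior (smoothing) parameters by maximizing PPV (precision),
# rather than fixing the Bayes-Laplace value alpha = 1. Toy data, grid search.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import precision_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB

X, y = make_classification(n_samples=2000, n_features=40, n_informative=10,
                           weights=[0.9, 0.1], random_state=0)
X = np.abs(X)                                        # MultinomialNB needs nonnegative features
X_tr, X_val, y_tr, y_val = train_test_split(X, y, stratify=y, random_state=0)

baseline = MultinomialNB(alpha=1.0).fit(X_tr, y_tr)  # Bayes-Laplace prior
best = max(
    (precision_score(y_val, MultinomialNB(alpha=a).fit(X_tr, y_tr).predict(X_val),
                     zero_division=0), a)
    for a in np.logspace(-3, 2, 20)
)
print("baseline PPV:", precision_score(y_val, baseline.predict(X_val), zero_division=0))
print("best PPV %.3f at alpha=%.4f" % best)
```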

A Tutorial on the Mathematical Model of Single Cell Variational Inference arxiv:2101.00650 📈 2

Songting Shi

**Abstract:** A large amount of sequencing data has accumulated over past decades and is still accumulating, so we need to handle more and more sequencing data. With the fast development of computing technologies, we can now handle a large amount of data in a reasonable time using neural network based models. This tutorial introduces the mathematical model of single cell variational inference (scVI), which uses a variational auto-encoder (built on neural networks) to learn the distribution of the data and gain insights from it. It is written for beginners in a simple and intuitive way, with many derivation details, to encourage more researchers into this field.
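The central quantity behind scVI and other VAE-style models is the evidence lower bound (ELBO) that such a tutorial derives; in standard notation (not necessarily the tutorial's exact parameterization), for a data point x with latent variable z:

```latex
\log p_\theta(x) \;\ge\;
\mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big]
\;-\;
\mathrm{KL}\big(q_\phi(z \mid x)\,\|\,p(z)\big)
```

Training maximizes the right-hand side jointly over the encoder parameters phi and decoder parameters theta.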

RegNet: Self-Regulated Network for Image Classification arxiv:2101.00590 📈 2

Jing Xu, Yu Pan, Xinglin Pan, Steven Hoi, Zhang Yi, Zenglin Xu

**Abstract:** The ResNet and its variants have achieved remarkable successes in various computer vision tasks. Despite its success in making gradients flow through building blocks, the simple shortcut connection mechanism limits the ability to re-explore new potentially complementary features due to the additive function. To address this issue, in this paper, we propose to introduce a regulator module as a memory mechanism to extract complementary features, which are further fed to the ResNet. In particular, the regulator module is composed of convolutional RNNs (e.g., Convolutional LSTMs or Convolutional GRUs), which are shown to be good at extracting spatio-temporal information. We name the new regulated networks RegNet. The regulator module can be easily implemented and appended to any ResNet architecture. We also apply the regulator module to improve the Squeeze-and-Excitation ResNet, to show the generalization ability of our method. Experimental results on three image classification datasets have demonstrated the promising performance of the proposed architecture compared with the standard ResNet, SE-ResNet, and other state-of-the-art architectures.

Schemes of Propagation Models and Source Estimators for Rumor Source Detection in Online Social Networks: A Short Survey of a Decade of Research arxiv:2101.00753 📈 1

Rong Jin, Weili Wu

**Abstract:** Recent years have seen various rumor diffusion models being assumed in research on rumor source detection in online social networks. The diffusion model is arguably a very important and challenging factor for source detection in networks, but it is less studied. This paper provides an overview of three representative schemes (Independent Cascade-based, Epidemic-based, and Learning-based) for modeling the patterns of rumor propagation, as well as three major schemes of estimators for rumor sources, covering the decade since this line of research began.

The structure of conservative gradient fields arxiv:2101.00699 📈 1

Adrian Lewis, Tonghua Tian

**Abstract:** The classical Clarke subdifferential alone is inadequate for understanding automatic differentiation in nonsmooth contexts. Instead, we can sometimes rely on enlarged generalized gradients called "conservative fields", defined through the natural path-wise chain rule: one application is the convergence analysis of gradient-based deep learning algorithms. In the semi-algebraic case, we show that all conservative fields are in fact just Clarke subdifferentials plus normals of manifolds in underlying Whitney stratifications.

CovTANet: A Hybrid Tri-level Attention Based Network for Lesion Segmentation, Diagnosis, and Severity Prediction of COVID-19 Chest CT Scans arxiv:2101.00691 📈 1

Tanvir Mahmud, Md. Jahin Alam, Sakib Chowdhury, Shams Nafisa Ali, Md Maisoon Rahman, Shaikh Anowarul Fattah, Mohammad Saquib

**Abstract:** Rapid and precise diagnosis of COVID-19 is one of the major challenges faced by the global community in controlling the spread of this overgrowing pandemic. In this paper, a hybrid neural network named CovTANet is proposed to provide an end-to-end clinical diagnostic tool for early diagnosis, lesion segmentation, and severity prediction of COVID-19 using chest computed tomography (CT) scans. A multi-phase optimization strategy is introduced for solving the challenges of complicated diagnosis at a very early stage of infection: an efficient lesion segmentation network is optimized first and is later integrated into a joint optimization framework for the diagnosis and severity prediction tasks, providing feature enhancement of the infected regions. Moreover, to overcome the challenges posed by the diffused, blurred, and variably shaped edges of COVID lesions with novel and diverse characteristics, a novel segmentation network is introduced, namely the Tri-level Attention-based Segmentation Network (TA-SegNet). This network significantly reduces semantic gaps in subsequent encoding-decoding stages, with immense parallelization of multi-scale features for faster convergence, providing considerable performance improvement over traditional networks. Furthermore, a novel tri-level attention mechanism is introduced, which is repeatedly utilized over the network, combining channel, spatial, and pixel attention schemes for faster and more efficient generalization of contextual information embedded in the feature map through feature re-calibration and enhancement operations. Outstanding performance has been achieved in all three tasks through extensive experimentation on a large publicly available dataset containing 1110 chest CT volumes, which signifies the effectiveness of the proposed scheme at the current stage of the pandemic.

Phase Transitions in Recovery of Structured Signals from Corrupted Measurements arxiv:2101.00599 📈 1

Zhongxing Sun, Wei Cui, Yulong Liu

**Abstract:** This paper is concerned with the problem of recovering a structured signal from a relatively small number of corrupted random measurements. Sharp phase transitions have been numerically observed in practice when different convex programming procedures are used to solve this problem. This paper is devoted to presenting theoretical explanations for these phenomena by employing some basic tools from Gaussian process theory. Specifically, we identify the precise locations of the phase transitions for both constrained and penalized recovery procedures. Our theoretical results show that these phase transitions are determined by some geometric measures of structure, e.g., the spherical Gaussian width of a tangent cone and the Gaussian (squared) distance to a scaled subdifferential. By utilizing the established phase transition theory, we further investigate the relationship between these two kinds of recovery procedures, which also reveals an optimal strategy (in the sense of Lagrange theory) for choosing the tradeoff parameter in the penalized recovery procedure. Numerical experiments are provided to verify our theoretical results.

Neural network algorithm and its application in temperature control of distillation tower arxiv:2101.00582 📈 1

Ningrui Zhao, Jinwei Lu

**Abstract:** The distillation process is a complex process involving mass transfer and heat conduction, which mainly manifests as follows: the mechanism is complex and changeable with uncertainty; the process is multivariate and strongly coupled; and the system is nonlinear, hysteretic, and time-varying. Neural networks can learn effectively from corresponding samples, do not rely on fixed mechanisms, have the ability to approximate arbitrary nonlinear mappings, and can be used to establish system input-output models. The temperature system of the rectification tower has a complicated structure and high accuracy requirements. A neural network is used to control the temperature of the system, which satisfies the requirements of the production process. This article briefly describes the basic concepts of and research progress in neural networks and distillation tower temperature control, and systematically summarizes the application of neural networks in distillation tower control, aiming to provide a reference for the development of related industries.

ASIST: Annotation-free Synthetic Instance Segmentation and Tracking by Adversarial Simulations arxiv:2101.00567 📈 1

Quan Liu, Isabella M. Gaeta, Mengyang Zhao, Ruining Deng, Aadarsh Jha, Bryan A. Millis, Anita Mahadevan-Jansen, Matthew J. Tyska, Yuankai Huo

**Abstract:** Background: The quantitative analysis of microscope videos often requires instance segmentation and tracking of cellular and subcellular objects. The traditional method consists of two stages: (1) performing instance object segmentation of each frame, and (2) associating objects frame-by-frame. Recently, pixel-embedding-based deep learning approaches perform these two steps simultaneously as a single-stage holistic solution. In computer vision, annotated training data with consistent segmentation and tracking is resource intensive, and this severity is multiplied in microscopy imaging due to (1) dense objects (e.g., overlapping or touching) and (2) high dynamics (e.g., irregular motion and mitosis). Adversarial simulations have provided successful solutions to alleviate the lack of such annotations in dynamic scenes in computer vision, such as using simulated environments (e.g., computer games) to train real-world self-driving systems. Methods: In this paper, we propose an annotation-free synthetic instance segmentation and tracking (ASIST) method with adversarial simulation and single-stage pixel-embedding based learning. Contribution: The contribution of this paper is three-fold: (1) the proposed method aggregates adversarial simulations and single-stage pixel-embedding based deep learning; (2) the method is assessed with both cellular (i.e., HeLa cells) and subcellular (i.e., microvilli) objects; and (3) to the best of our knowledge, this is the first study to explore annotation-free instance segmentation and tracking for microscope videos. Results: The ASIST method achieved an important step forward when compared with fully supervised approaches: ASIST shows 7% to 11% higher segmentation, detection and tracking performance on microvilli relative to fully supervised methods, and comparable performance on HeLa cell videos.

Variationally and Intrinsically motivated reinforcement learning for decentralized traffic signal control arxiv:2101.00746 📈 0

Liwen Zhu, Peixi Peng, Zongqing Lu, Xiangqian Wang, Yonghong Tian

**Abstract:** One of the biggest challenges in multi-agent reinforcement learning is coordination, and a typical application scenario of this is traffic signal control. Recently, it has attracted a rising number of researchers and has become a hot research field with great practical significance. In this paper, we propose a novel method called MetaVRS (Meta Variational Reward Shaping) for traffic signal coordination control. By heuristically applying the intrinsic reward to the environmental reward, MetaVRS can wisely capture the agent-to-agent interplay. Besides, latent variables generated by a VAE are brought into the policy to automatically trade off exploration and exploitation and optimize the policy. In addition, meta learning is used in the decoder for faster adaptation and better approximation. Empirically, we demonstrate that MetaVRS substantially outperforms existing methods and shows superior adaptability, which predictably has far-reaching significance for multi-agent traffic signal coordination control.

Coreference Resolution: Are the eliminated spans totally worthless? arxiv:2101.00737 📈 0

Xin Tan, Longyin Zhang, Guodong Zhou

**Abstract:** Various neural-based methods have been proposed so far for joint mention detection and coreference resolution. However, existing works on coreference resolution are mainly dependent on filtered mention representations, while other spans are largely neglected. In this paper, we aim at increasing the utilization rate of data and investigating whether those eliminated spans are totally useless, or to what extent they can improve the performance of coreference resolution. To achieve this, we propose a mention representation refining strategy where spans highly related to mentions are well leveraged using a pointer network for representation enhancing. Notably, we utilize an additional loss term in this work to encourage diversity between entity clusters. Experimental results on the document-level CoNLL-2012 Shared Task English dataset show that the eliminated spans are indeed effective and that our approach can achieve competitive results compared with the previous state of the art in coreference resolution.

Recoding latent sentence representations -- Dynamic gradient-based activation modification in RNNs arxiv:2101.00674 📈 0

Dennis Ulmer

**Abstract:** In Recurrent Neural Networks (RNNs), encoding information in a suboptimal or erroneous way can impact the quality of representations based on later elements in the sequence and subsequently lead to wrong predictions and a worse model performance. In humans, challenging cases like garden path sentences (an instance of this being the infamous "The horse raced past the barn fell") can lead their language understanding astray. However, they are still able to correct their representation accordingly and recover when new information is encountered. Inspired by this, I propose an augmentation to standard RNNs in form of a gradient-based correction mechanism: This way I hope to enable such models to dynamically adapt their inner representation of a sentence, adding a way to correct deviations as soon as they occur. This could therefore lead to more robust models using more flexible representations, even during inference time. I conduct different experiments in the context of language modeling, where the impact of using such a mechanism is examined in detail. To this end, I look at modifications based on different kinds of time-dependent error signals and how they influence the model performance. Furthermore, this work contains a study of the model's confidence in its predictions during training and for challenging test samples and the effect of the manipulation thereof. Lastly, I also study the difference in behavior of these novel models compared to a standard LSTM baseline and investigate error cases in detail to identify points of future research. I show that while the proposed approach comes with promising theoretical guarantees and an appealing intuition, it is only able to produce minor improvements over the baseline due to challenges in its practical application and the efficacy of the tested model variants.
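The proposed correction mechanism can be sketched as a single gradient step on the hidden activation itself: decode from the current hidden state, measure an error signal, and move the activation against its gradient before processing the next token. The PyTorch toy below shows that step in isolation; the surprisal-based error signal, the fixed step size, and the LSTM-cell setup are illustrative choices, not the thesis's exact model variants.

```python
# Gradient-based "recoding" of an RNN hidden state (illustrative, one step per token).
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
vocab, emb, hid = 50, 16, 32
embed = nn.Embedding(vocab, emb)
cell = nn.LSTMCell(emb, hid)
decoder = nn.Linear(hid, vocab)

tokens = torch.randint(0, vocab, (6,))          # toy "sentence"
h = torch.zeros(1, hid)
c = torch.zeros(1, hid)
step_size = 0.5                                  # recoding step size (hypothetical)

for t in range(len(tokens) - 1):
    h, c = cell(embed(tokens[t]).unsqueeze(0), (h, c))
    logits = decoder(h)
    # Error signal: surprisal of the actually observed next token.
    loss = F.cross_entropy(logits, tokens[t + 1].unsqueeze(0))
    # Recoding step: adjust the activation itself, not the weights.
    (grad_h,) = torch.autograd.grad(loss, h)
    h = (h - step_size * grad_h).detach()
    c = c.detach()
    print(f"t={t} surprisal={loss.item():.3f}")
```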

Prev: 2021.01.02 Next: 2021.01.04