Summary for 2021-07-23, created on 2021-12-19

Exploring Deep Registration Latent Spaces arxiv:2107.11238 📈 65

Théo Estienne, Maria Vakalopoulou, Stergios Christodoulidis, Enzo Battistella, Théophraste Henry, Marvin Lerousseau, Amaury Leroy, Guillaume Chassagnon, Marie-Pierre Revel, Nikos Paragios, Eric Deutsch

**Abstract:** Explainability of deep neural networks is one of the most challenging and interesting problems in the field. In this study, we investigate the topic focusing on the interpretability of deep learning-based registration methods. In particular, with the appropriate model architecture and using a simple linear projection, we decompose the encoding space, generating a new basis, and we empirically show that this basis captures various decomposed anatomically aware geometrical transformations. We perform experiments using two different datasets focusing on lung and hippocampus MRI. We show that such an approach can decompose the highly convoluted latent spaces of registration pipelines into an orthogonal space with several interesting properties. We hope that this work sheds some light on a better understanding of deep learning-based registration methods.
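
A minimal sketch of the linear-projection idea, with random vectors standing in for real encoder outputs (the encoder and decoder are hypothetical, not the authors' code):

```python
import numpy as np

# Stand-in for encoder outputs of N registration pairs, flattened to vectors.
rng = np.random.default_rng(0)
latents = rng.normal(size=(500, 128))

# A plain SVD (i.e., PCA) of the centered encodings yields an orthogonal basis
# of the encoding space, one simple instance of the paper's linear projection.
mean = latents.mean(axis=0)
_, singular_values, basis = np.linalg.svd(latents - mean, full_matrices=False)

# Stepping along one basis vector and decoding (decoder not shown) would expose
# the geometric transformation that component captures.
alpha = 3.0
perturbed_code = mean + alpha * basis[0]
```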

Human Pose Regression with Residual Log-likelihood Estimation arxiv:2107.11291 📈 45

Jiefeng Li, Siyuan Bian, Ailing Zeng, Can Wang, Bo Pang, Wentao Liu, Cewu Lu

**Abstract:** Heatmap-based methods dominate the field of human pose estimation by modelling the output distribution through likelihood heatmaps. In contrast, regression-based methods are more efficient but suffer from inferior performance. In this work, we explore maximum likelihood estimation (MLE) to develop an efficient and effective regression-based method. From the perspective of MLE, adopting different regression losses amounts to making different assumptions about the output density function. A density function closer to the true distribution leads to better regression performance. In light of this, we propose a novel regression paradigm with Residual Log-likelihood Estimation (RLE) to capture the underlying output distribution. Concretely, RLE learns the change of the distribution instead of the unreferenced underlying distribution to facilitate the training process. With the proposed reparameterization design, our method is compatible with off-the-shelf flow models. The proposed method is effective, efficient and flexible. We show its potential in various human pose estimation tasks with comprehensive experiments. Compared to the conventional regression paradigm, regression with RLE brings a 12.4 mAP improvement on MSCOCO without any test-time overhead. Moreover, for the first time, our regression method is superior to heatmap-based methods, especially on multi-person pose estimation. Our code is available at https://github.com/Jeff-sjtu/res-loglikelihood-regression
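
A simplified sketch of the MLE view of regression: a head predicts a location and a scale, and the loss is the negative log-likelihood of a simple base density (here Laplace). The learned flow that models the residual between this base density and the true one is omitted, so this is only the starting point of RLE, not the full method:

```python
import torch

def laplace_nll(mu, log_b, target):
    """NLL of a Laplace density with predicted location and scale -- the
    'simple' base distribution that RLE corrects with a learned flow
    residual (the flow term is omitted in this sketch)."""
    b = log_b.exp()
    bar = (target - mu) / b          # reparameterized deviation
    return (bar.abs() + log_b).mean()

# Toy usage: a head predicting 17 2D joint coordinates plus per-coordinate scale.
mu = torch.zeros(8, 17, 2, requires_grad=True)
log_b = torch.zeros(8, 17, 2, requires_grad=True)
target = torch.randn(8, 17, 2)
loss = laplace_nll(mu, log_b, target)
loss.backward()
```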

LocalGLMnet: interpretable deep learning for tabular data arxiv:2107.11059 📈 23

Ronald Richman, Mario V. Wüthrich

**Abstract:** Deep learning models have gained great popularity in statistical modeling because they lead to very competitive regression models, often outperforming classical statistical models such as generalized linear models. The disadvantage of deep learning models is that their solutions are difficult to interpret and explain, and variable selection is not easily possible because deep learning models solve feature engineering and variable selection internally in a nontransparent way. Inspired by the appealing structure of generalized linear models, we propose a new network architecture that shares similar features with generalized linear models but provides superior predictive power benefiting from the art of representation learning. This new architecture allows for variable selection on tabular data and for interpretation of the calibrated deep learning model; in fact, our approach provides an additive decomposition in the spirit of Shapley values and integrated gradients.
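
A hedged sketch of the GLM-flavored architecture described above: a network predicts feature-wise coefficients, and the response is the familiar inner product, so each coefficient can be read as a local effect (names and sizes are illustrative):

```python
import torch
import torch.nn as nn

class LocalGLMNet(nn.Module):
    """Minimal sketch: a network predicts feature-wise coefficients beta(x),
    and the response is the GLM-style inner product <beta(x), x> plus bias,
    which is what makes the fit interpretable per feature."""
    def __init__(self, d, hidden=64):
        super().__init__()
        self.beta = nn.Sequential(
            nn.Linear(d, hidden), nn.Tanh(),
            nn.Linear(hidden, d),          # one coefficient per input feature
        )
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b = self.beta(x)
        return self.bias + (b * x).sum(dim=1), b  # prediction and coefficients

model = LocalGLMNet(d=10)
y_hat, coefs = model(torch.randn(32, 10))  # coefs ~ local variable importance
```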

Bias Loss for Mobile Neural Networks arxiv:2107.11170 📈 17

Lusine Abrahamyan, Valentin Ziatchin, Yiming Chen, Nikos Deligiannis

**Abstract:** Compact convolutional neural networks (CNNs) have witnessed exceptional improvements in performance in recent years. However, they still fail to provide the same predictive power as CNNs with a large number of parameters. The diverse and even abundant features captured by the layers are an important characteristic of these successful CNNs, yet differences in this characteristic between large CNNs and their compact counterparts have rarely been investigated. In compact CNNs, due to the limited number of parameters, abundant features are unlikely to be obtained, and feature diversity becomes an essential characteristic. Diverse features present in the activation maps derived from a data point during model inference may indicate the presence of a set of unique descriptors necessary to distinguish between objects of different classes. In contrast, data points with low feature diversity may not provide a sufficient amount of unique descriptors to make a valid prediction; we refer to them as random predictions. Random predictions can negatively impact the optimization process and harm the final performance. This paper proposes addressing the problem raised by random predictions by reshaping the standard cross-entropy to make it biased toward data points with a limited number of unique descriptive features. Our novel Bias Loss focuses the training on a set of valuable data points and prevents the vast number of samples with poor learning features from misleading the optimization process. Furthermore, to show the importance of diversity, we present a family of SkipNet models whose architectures are designed to boost the number of unique descriptors in the last layers. Our SkipNet-M can achieve 1% higher classification accuracy than MobileNetV3 Large.
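
One plausible instantiation of the diversity-weighted cross-entropy idea (the exact weighting function in the paper may differ; the variance-based diversity score and the exponential weighting here are assumptions of this sketch):

```python
import torch
import torch.nn.functional as F

def bias_loss(logits, targets, feature_maps, alpha=0.3, beta=0.3):
    """Sketch of a diversity-weighted cross-entropy: samples whose activation
    maps show little variance (few unique descriptors) get a smaller weight,
    so 'random predictions' contribute less to the optimization."""
    ce = F.cross_entropy(logits, targets, reduction="none")
    var = feature_maps.flatten(1).var(dim=1)
    z = (var - var.min()) / (var.max() - var.min() + 1e-8)  # diversity in [0, 1]
    w = torch.exp(alpha * z) - beta                         # favors diverse samples
    return (w * ce).mean()

logits = torch.randn(16, 10)
fmaps = torch.randn(16, 32, 8, 8)   # stand-in activation maps
loss = bias_loss(logits, torch.randint(0, 10, (16,)), fmaps)
```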

Compressing Neural Networks: Towards Determining the Optimal Layer-wise Decomposition arxiv:2107.11442 📈 14

Lucas Liebenwein, Alaa Maalouf, Oren Gal, Dan Feldman, Daniela Rus

**Abstract:** We present a novel global compression framework for deep neural networks that automatically analyzes each layer to identify the optimal per-layer compression ratio, while simultaneously achieving the desired overall compression. Our algorithm hinges on the idea of compressing each convolutional (or fully-connected) layer by slicing its channels into multiple groups and decomposing each group via low-rank decomposition. At the core of our algorithm is the derivation of layer-wise error bounds from the Eckart-Young-Mirsky theorem. We then leverage these bounds to frame the compression problem as an optimization problem where we wish to minimize the maximum compression error across layers, and propose an efficient algorithm towards a solution. Our experiments indicate that our method outperforms existing low-rank compression approaches across a wide range of networks and data sets. We believe that our results open up new avenues for future research into the global performance-size trade-offs of modern neural networks. Our code is available at https://github.com/lucaslie/torchprune.
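
The per-group low-rank step can be illustrated with a plain truncated SVD, where the Eckart-Young-Mirsky theorem gives the exact Frobenius error from the discarded singular values (a sketch, not the torchprune implementation):

```python
import numpy as np

def low_rank_split(W, rank):
    """Factor a weight matrix into two thinner ones via truncated SVD. By
    Eckart-Young-Mirsky this is the best rank-r approximation in Frobenius
    norm, and the error is the norm of the discarded singular values."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]            # (out, r)
    B = Vt[:rank]                         # (r, in)
    err = np.sqrt((s[rank:] ** 2).sum())  # layer-wise compression error
    return A, B, err

W = np.random.randn(256, 512)
A, B, err = low_rank_split(W, rank=64)
# Replacing W by the pair (A, B) turns one 256x512 layer into 256x64 + 64x512.
```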

Generative adversarial networks in time series: A survey and taxonomy arxiv:2107.11098 📈 10

Eoin Brophy, Zhengwei Wang, Qi She, Tomas Ward

**Abstract:** Studies of generative adversarial networks (GANs) have grown exponentially in the past few years. Their impact has been seen mainly in computer vision, where realistic image and video manipulation, and especially generation, have made significant advances. While these computer vision advances have garnered much attention, GAN applications have diversified across disciplines such as time series and sequence generation. As a relatively new niche for GANs, fieldwork is ongoing to develop high-quality, diverse, and private time series data. In this paper, we review GAN variants designed for time series related applications. We propose a taxonomy of discrete-variant GANs and continuous-variant GANs, which deal with discrete and continuous time series data, respectively. Here we showcase the latest and most popular literature in this field, covering architectures, results, and applications. We also provide a list of the most popular evaluation metrics and their suitability across applications. Also presented is a discussion of privacy measures for these GANs and further protections and directions for dealing with sensitive data. We aim to frame clearly and concisely the state-of-the-art research in this area and its applications to real-world technologies.

MCDAL: Maximum Classifier Discrepancy for Active Learning arxiv:2107.11049 📈 9

Jae Won Cho, Dong-Jin Kim, Yunjae Jung, In So Kweon

**Abstract:** Recent state-of-the-art active learning methods have mostly leveraged Generative Adversarial Networks (GANs) for sample acquisition; however, GANs are usually known to suffer from instability and sensitivity to hyperparameters. In contrast to these methods, in this paper we propose a novel active learning framework called Maximum Classifier Discrepancy for Active Learning (MCDAL), which leverages the prediction discrepancies among multiple classifiers. In particular, we utilize two auxiliary classification layers that learn tighter decision boundaries by maximizing the discrepancies among them. Intuitively, the discrepancies in the auxiliary classification layers' predictions indicate the uncertainty in the prediction. In this regard, we propose a novel method that leverages the classifier discrepancies in the acquisition function for active learning. We also provide an interpretation of our idea in relation to existing GAN-based active learning methods and domain adaptation frameworks. Moreover, we empirically demonstrate the utility of our approach, which exceeds the state-of-the-art methods on several image classification and semantic segmentation datasets in active learning setups.
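
A minimal sketch of a discrepancy-based acquisition function: score each unlabeled sample by how much two auxiliary heads disagree, and query the most contested ones (the L1 discrepancy and top-k selection are illustrative choices):

```python
import torch

def discrepancy_scores(probs_a, probs_b):
    """Acquisition sketch: the L1 discrepancy between two auxiliary
    classifier heads' softmax outputs, used as an uncertainty proxy."""
    return (probs_a - probs_b).abs().sum(dim=1)

# Toy pool of unlabeled data scored by two heads of the same backbone.
probs_a = torch.softmax(torch.randn(1000, 10), dim=1)
probs_b = torch.softmax(torch.randn(1000, 10), dim=1)
query = discrepancy_scores(probs_a, probs_b).topk(64).indices  # samples to label
```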

3D Radar Velocity Maps for Uncertain Dynamic Environments arxiv:2107.11039 📈 9

Ransalu Senanayake, Kyle Beltran Hatch, Jason Zheng, Mykel J. Kochenderfer

**Abstract:** Future urban transportation concepts include a mixture of ground and air vehicles with varying degrees of autonomy in a congested environment. In such dynamic environments, occupancy maps alone are not sufficient for safe path planning. Safe and efficient transportation requires reasoning about the 3D flow of traffic and properly modeling uncertainty. Several different approaches can be taken for developing 3D velocity maps. This paper explores a Bayesian approach that captures our uncertainty in the map given training data. The approach involves projecting spatial coordinates into a high-dimensional feature space and then applying Bayesian linear regression to make predictions and quantify uncertainty in our estimates. On a collection of air and ground datasets, we demonstrate that this approach is effective and more scalable than several alternative approaches.
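
A small sketch of the described pipeline under stated assumptions: random cosine features as the high-dimensional projection (the paper's feature map may differ) and conjugate Bayesian linear regression for the predictive mean and variance:

```python
import numpy as np

rng = np.random.default_rng(0)
D, sigma2, alpha = 200, 0.1, 1.0
Omega = rng.normal(size=(3, D))

def feats(X):
    # One simple choice of random feature map for 3D coordinates.
    return np.cos(X @ Omega)                  # (n, D)

X_train = rng.uniform(-1, 1, size=(500, 3))
y_train = np.sin(X_train[:, 0]) + 0.1 * rng.normal(size=500)  # one velocity component

Phi = feats(X_train)
S = np.linalg.inv(Phi.T @ Phi / sigma2 + np.eye(D) / alpha)   # posterior covariance
m = S @ Phi.T @ y_train / sigma2                              # posterior mean

Phi_q = feats(rng.uniform(-1, 1, size=(10, 3)))
pred_mean = Phi_q @ m
pred_var = sigma2 + np.einsum("nd,de,ne->n", Phi_q, S, Phi_q)  # uncertainty estimate
```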

Joint Shapley values: a measure of joint feature importance arxiv:2107.11357 📈 8

Chris Harris, Richard Pymar, Colin Rowat

**Abstract:** The Shapley value is one of the most widely used model-agnostic measures of feature importance in explainable AI: it has clear axiomatic foundations, is guaranteed to uniquely exist, and has a clear interpretation as a feature's average effect on a model's prediction. We introduce joint Shapley values, which directly extend the Shapley axioms. This preserves the classic Shapley value's intuitions: joint Shapley values measure a set of features' average effect on a model's prediction. We prove the uniqueness of joint Shapley values, for any order of explanation. Results for games show that joint Shapley values present different insights from existing interaction indices, which assess the effect of a feature within a set of features. Deriving joint Shapley values in ML attribution problems thus gives us the first measure of the joint effect of sets of features on model predictions. In a dataset with binary features, we present a presence-adjusted method for calculating global values that retains the efficiency property.
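
For reference, the classic single-feature Shapley value whose axioms the paper extends to sets of features (this is the standard textbook formula, not the paper's new definition):

```latex
% Classic Shapley value of feature i in a cooperative game v over feature
% set N -- the object whose axioms the joint Shapley values extend:
\[
  \phi_i(v) \;=\; \sum_{S \subseteq N \setminus \{i\}}
  \frac{|S|!\,\bigl(|N|-|S|-1\bigr)!}{|N|!}
  \,\bigl( v(S \cup \{i\}) - v(S) \bigr)
\]
```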

Adversarial Reinforced Instruction Attacker for Robust Vision-Language Navigation arxiv:2107.11252 📈 8

Bingqian Lin, Yi Zhu, Yanxin Long, Xiaodan Liang, Qixiang Ye, Liang Lin

**Abstract:** Language instruction plays an essential role in natural language grounded navigation tasks. However, navigators trained with limited human-annotated instructions may have difficulties in accurately capturing key information from complicated instructions at different timesteps, leading to poor navigation performance. In this paper, we propose to train a more robust navigator that is capable of dynamically extracting crucial factors from long instructions, using an adversarial attack paradigm. Specifically, we propose a Dynamic Reinforced Instruction Attacker (DR-Attacker), which learns to mislead the navigator into moving to the wrong target by destroying the most instructive information in instructions at different timesteps. By formulating the perturbation generation as a Markov Decision Process, DR-Attacker is optimized with a reinforcement learning algorithm to generate perturbed instructions sequentially during navigation, according to a learnable attack score. Then, the perturbed instructions, which serve as hard samples, are used for improving the robustness of the navigator with an effective adversarial training strategy and an auxiliary self-supervised reasoning task. Experimental results on both Vision-and-Language Navigation (VLN) and Navigation from Dialog History (NDH) tasks show the superiority of our proposed method over state-of-the-art methods. Moreover, the visualization analysis shows the effectiveness of the proposed DR-Attacker, which can successfully attack crucial information in the instructions at different timesteps. Code is available at https://github.com/expectorlin/DR-Attacker.

Automatic Detection Of Noise Events at Shooting Range Using Machine Learning arxiv:2107.11453 📈 7

Jon Nordby, Fabian Nemazi, Dag Rieber

**Abstract:** Outdoor shooting ranges are subject to noise regulations from local and national authorities. Restrictions found in these regulations may include limits on times of activities, the overall number of noise events, as well as limits on the number of events depending on the class of noise or activity. A noise monitoring system may be used to track overall sound levels, but such systems rarely provide the ability to detect activity or count the number of events, which is required to compare directly with such regulations. This work investigates the feasibility and performance of an automatic detection system for counting noise events. An empirical evaluation was done by collecting data at a newly constructed shooting range and training facility. The data includes tests of multiple weapon configurations from small firearms to high-caliber rifles and explosives, at multiple source positions, collected on multiple different days. Several alternative machine learning models are tested, using as inputs time series of standard acoustic indicators such as A-weighted sound levels and 1/3-octave spectrograms, and classifiers such as logistic regression and convolutional neural networks. Performance of the various alternatives is reported in terms of the false positive rate and false negative rate. The detection performance was found to be satisfactory for use in automatic logging of time periods with training activity.

HURRA! Human readable router anomaly detection arxiv:2107.11078 📈 7

Jose M. Navarro, Dario Rossi

**Abstract:** This paper presents HURRA, a system that aims to reduce the time spent by human operators in the process of network troubleshooting. To do so, it comprises two modules that are plugged in after any anomaly detection algorithm: (i) an attention mechanism that ranks the present features in terms of their relation to the anomaly, and (ii) a module able to incorporate prior expert knowledge seamlessly, without any need for human interaction or decisions. We show the efficacy of these simple processes on a collection of real router datasets obtained from tens of ISPs, which exhibit a rich variety of anomalies and a very heterogeneous set of KPIs, and on which we gather ground truth manually annotated by the operators solving the troubleshooting tickets. Our experimental evaluation shows that (i) the proposed system is effective in achieving high levels of agreement with the experts, (ii) even a simple statistical approach is able to extract useful information from expert knowledge gained in past cases to further improve performance, and finally (iii) the main difficulty in live deployment concerns the automated selection of the anomaly detection algorithm and the tuning of its hyper-parameters.

Constellation: Learning relational abstractions over objects for compositional imagination arxiv:2107.11153 📈 6

James C. R. Whittington, Rishabh Kabra, Loic Matthey, Christopher P. Burgess, Alexander Lerchner

**Abstract:** Learning structured representations of visual scenes is currently a major bottleneck to bridging perception with reasoning. While there has been exciting progress with slot-based models, which learn to segment scenes into sets of objects, learning configurational properties of entire groups of objects is still under-explored. To address this problem, we introduce Constellation, a network that learns relational abstractions of static visual scenes, and generalises these abstractions over sensory particularities, thus offering a potential basis for abstract relational reasoning. We further show that this basis, along with language association, provides a means to imagine sensory content in new ways. This work is a first step in the explicit representation of visual relationships and using them for complex cognitive procedures.

TargetNet: Functional microRNA Target Prediction with Deep Neural Networks arxiv:2107.11381 📈 5

Seonwoo Min, Byunghan Lee, Sungroh Yoon

**Abstract:** Motivation: MicroRNAs (miRNAs) play pivotal roles in gene expression regulation by binding to target sites of messenger RNAs (mRNAs). While identifying functional targets of miRNAs is of utmost importance, their prediction remains a great challenge. Previous computational algorithms have major limitations. They use conservative candidate target site (CTS) selection criteria mainly focusing on canonical site types, rely on laborious and time-consuming manual feature extraction, and do not fully capitalize on the information underlying miRNA-CTS interactions. Results: In this paper, we introduce TargetNet, a novel deep learning-based algorithm for functional miRNA target prediction. To address the limitations of previous approaches, TargetNet has three key components: (1) relaxed CTS selection criteria accommodating irregularities in the seed region, (2) a novel miRNA-CTS sequence encoding scheme incorporating extended seed region alignments, and (3) a deep residual network-based prediction model. The proposed model was trained with miRNA-CTS pair datasets and evaluated with miRNA-mRNA pair datasets. TargetNet advances the previous state-of-the-art algorithms used in functional miRNA target classification. Furthermore, it demonstrates great potential for distinguishing high-functional miRNA targets.

Machine Learning with a Reject Option: A survey arxiv:2107.11277 📈 4

Kilian Hendrickx, Lorenzo Perini, Dries Van der Plas, Wannes Meert, Jesse Davis

**Abstract:** Machine learning models always make a prediction, even when it is likely to be inaccurate. This behavior should be avoided in many decision support applications, where mistakes can have severe consequences. Although already studied as early as 1970, machine learning with a reject option has recently gained renewed interest. This machine learning subfield enables models to abstain from making a prediction when they are likely to make a mistake. This survey aims to provide an overview of machine learning with a reject option. We introduce the conditions leading to two types of rejection, ambiguity rejection and novelty rejection. Moreover, we define the existing architectures for models with a reject option, describe the standard learning strategies to train such models, and relate traditional machine learning techniques to rejection. Additionally, we review strategies to evaluate a model's predictive and rejective quality. Finally, we provide examples of relevant application domains and show how machine learning with rejection relates to other machine learning research areas.
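
Ambiguity rejection in its simplest form, as a sketch: abstain whenever the top class probability falls below a threshold (novelty rejection would need an additional density or outlier model and is not shown):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def predict_with_reject(clf, X, threshold=0.8):
    """Abstain whenever the top predicted class probability falls below a
    confidence threshold; -1 marks an abstention."""
    proba = clf.predict_proba(X)
    labels = clf.classes_[proba.argmax(axis=1)]
    reject = proba.max(axis=1) < threshold
    return np.where(reject, -1, labels)

X = np.random.randn(200, 5)
y = (X[:, 0] > 0).astype(int)
clf = LogisticRegression().fit(X, y)
preds = predict_with_reject(clf, X)   # mixture of class labels and -1
```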

An Improved Algorithm of Robot Path Planning in Complex Environment Based on Double DQN arxiv:2107.11245 📈 4

Fei Zhang, Chaochen Gu, Feng Yang

**Abstract:** Deep Q Networks (DQN) have several limitations when applied to path planning in environments with many dilemmas, according to our experiments. The reward function may be hard to model, and successful experience transitions are difficult to find in experience replay. In this context, this paper proposes an improved Double DQN (DDQN) to solve the problem, drawing on A* and Rapidly-Exploring Random Trees (RRT). In order to obtain rich experiences in experience replay, the initialization of the robot in each training round is redefined based on an RRT strategy. In addition, the reward for free positions is specially designed to accelerate the learning process, following the definition of position cost in A*. The simulation results validate the efficiency of the improved DDQN; the robot successfully learns obstacle avoidance and optimal path planning in settings where plain DQN or DDQN fails.

OLR 2021 Challenge: Datasets, Rules and Baselines arxiv:2107.11113 📈 4

Binling Wang, Wenxuan Hu, Jing Li, Yiming Zhi, Zheng Li, Qingyang Hong, Lin Li, Dong Wang, Liming Song, Cheng Yang

**Abstract:** This paper introduces the sixth Oriental Language Recognition (OLR) 2021 Challenge, which intends to improve the performance of language recognition systems and speech recognition systems within multilingual scenarios. The data profile, four tasks, two baselines, and the evaluation principles are introduced in this paper. In addition to the Language Identification (LID) tasks, multilingual Automatic Speech Recognition (ASR) tasks are introduced to the OLR Challenge for the first time. The challenge this year focuses on more practical and challenging problems, with four tasks: (1) constrained LID, (2) unconstrained LID, (3) constrained multilingual ASR, (4) unconstrained multilingual ASR. Baselines for the LID tasks and the multilingual ASR tasks are provided, respectively. The LID baseline system is an extended TDNN x-vector model constructed with PyTorch. A transformer-based end-to-end model is provided as the multilingual ASR baseline system. These recipes will be published online and available for participants to construct their own LID or ASR systems. The baseline results demonstrate that those tasks are rather challenging and deserve more effort to achieve better performance.

Estimation of excess air coefficient on coal combustion processes via gauss model and artificial neural network arxiv:2108.04180 📈 3

Sedat Golgiyaz, Muhammed Fatih Talu, Mahmut Daskin, Cem Onat

**Abstract:** There is no doubt that the most important contributor to the overall efficiency of coal-fired thermal systems is combustion efficiency. In this study, the relationship between the flame image obtained by a CCD camera and the excess air coefficient (λ) has been modelled. The model has been obtained with a three-stage approach: 1) Data collection and synchronization: flame images are obtained by a CCD camera mounted on a 10 cm diameter observation port, while λ is simultaneously measured and recorded by a flue gas analyzer. 2) Feature extraction: the flame image is divided into small pieces by gridding. The similarity of each piece to the optimal flame image is calculated by modelling with single- and multi-variable Gaussians, computing color probabilities, and a Gaussian mixture approach. 3) Matching and testing: a multilayer artificial neural network (ANN) is used to map the extracted features to λ.

ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback arxiv:2107.14035 📈 3

Mike Wu, Noah Goodman, Chris Piech, Chelsea Finn

**Abstract:** High-quality computer science education is limited by the difficulty of providing instructor feedback to students at scale. While this feedback could in principle be automated, supervised approaches to predicting the correct feedback are bottlenecked by the intractability of annotating large quantities of student code. In this paper, we instead frame the problem of providing feedback as few-shot classification, where a meta-learner adapts to give feedback to student code on a new programming question from just a few examples annotated by instructors. Because data for meta-training is limited, we propose a number of amendments to the typical few-shot learning framework, including task augmentation to create synthetic tasks, and additional side information to build stronger priors about each task. These additions are combined with a transformer architecture to embed discrete sequences (e.g. code) into a prototypical representation of a feedback class label. On a suite of few-shot natural language processing tasks, we match or outperform state-of-the-art performance. Then, on a collection of student solutions to exam questions from an introductory university course, we show that our approach reaches an average precision of 88% on unseen questions, surpassing the 82% precision of teaching assistants. Our approach was successfully deployed to deliver feedback on 16,000 student exam solutions in a programming course offered by a tier 1 university. This is, to the best of our knowledge, the first successful deployment of machine learning-based feedback on open-ended student code.
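
The prototypical-representation step can be sketched as a standard prototypical-network classifier over embeddings (the transformer encoder and the paper's side-information amendments are omitted):

```python
import torch

def prototypical_logits(support_emb, support_y, query_emb, n_classes):
    """Few-shot classification sketch: each feedback class is represented by
    the mean (prototype) of its support embeddings, and queries are scored
    by negative squared distance to each prototype."""
    protos = torch.stack([support_emb[support_y == c].mean(dim=0)
                          for c in range(n_classes)])
    return -torch.cdist(query_emb, protos) ** 2   # (n_query, n_classes)

support = torch.randn(20, 64)                # stand-in embeddings of student code
labels = torch.arange(4).repeat(5)           # 4 feedback classes, 5 shots each
query = torch.randn(5, 64)
logits = prototypical_logits(support, labels, query, n_classes=4)
```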

Similarity Based Label Smoothing For Dialogue Generation arxiv:2107.11481 📈 3

Sougata Saha, Souvik Das, Rohini Srihari

**Abstract:** Generative neural conversational systems are generally trained with the objective of minimizing the entropy loss between the training "hard" targets and the predicted logits. Often, performance gains and improved generalization can be achieved by using regularization techniques like label smoothing, which converts the training "hard" targets to "soft" targets. However, label smoothing enforces a data-independent uniform distribution on the incorrect training targets, which leads to an incorrect assumption of equi-probable incorrect targets for each correct target. In this paper we propose and experiment with incorporating data-dependent, word-similarity-based weighting methods to transform the uniform distribution over the incorrect target probabilities in label smoothing into a more natural distribution based on semantics. We introduce hyperparameters to control the incorrect target distribution, and report significant performance gains over networks trained using a standard label-smoothing-based loss, on two standard open-domain dialogue corpora.
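
A hedged sketch of similarity-based smoothing: spread the smoothing mass over incorrect tokens in proportion to their embedding similarity to the correct token, instead of uniformly (the cosine similarity and softmax temperature are assumptions of this sketch):

```python
import torch
import torch.nn.functional as F

def similarity_smoothed_targets(target_ids, emb_weight, eps=0.1, tau=0.5):
    """Instead of spreading eps uniformly over incorrect tokens, spread it
    according to embedding similarity with the correct token."""
    emb = F.normalize(emb_weight, dim=1)
    sim = emb[target_ids] @ emb.T                            # (batch, vocab) cosines
    sim.scatter_(1, target_ids.unsqueeze(1), float("-inf"))  # exclude the target
    soft = F.softmax(sim / tau, dim=1) * eps                 # data-dependent mass
    soft.scatter_(1, target_ids.unsqueeze(1), 1.0 - eps)
    return soft                                              # rows sum to 1

vocab, d = 1000, 64
emb_weight = torch.randn(vocab, d)           # stand-in word embedding table
targets = torch.randint(0, vocab, (8,))
soft_targets = similarity_smoothed_targets(targets, emb_weight)
```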

Deep Learning Based Cardiac MRI Segmentation: Do We Need Experts? arxiv:2107.11447 📈 3

Youssef Skandarani, Pierre-Marc Jodoin, Alain Lalande

**Abstract:** Deep learning methods are the de-facto solutions to a multitude of medical image analysis tasks. Cardiac MRI segmentation is one such application which, like many others, requires a large number of annotated data so a trained network can generalize well. Unfortunately, the process of having a large number of manually curated images by medical experts is both slow and utterly expensive. In this paper, we set out to explore whether expert knowledge is a strict requirement for the creation of annotated datasets that machine learning can successfully train on. To do so, we gauged the performance of three segmentation models, namely U-Net, Attention U-Net, and ENet, trained with different loss functions on expert and non-expert groundtruth for cardiac cine-MRI segmentation. Evaluation was done with classic segmentation metrics (Dice index and Hausdorff distance) as well as clinical measurements, such as the ventricular ejection fractions and the myocardial mass. Results reveal that the generalization performance of a segmentation neural network trained on non-expert groundtruth data is, for all practical purposes, as good as on expert groundtruth data, in particular when the non-expert receives a decent level of training, highlighting an opportunity for the efficient and cheap creation of annotations for cardiac datasets.

HierMUD: Hierarchical Multi-task Unsupervised Domain Adaptation between Bridges for Drive-by Damage Diagnosis arxiv:2107.11435 📈 3

Jingxiao Liu, Susu Xu, Mario Bergés, Hae Young Noh

**Abstract:** Monitoring bridge health using the vibrations of drive-by vehicles has various benefits, such as no need for directly installing and maintaining sensors on the bridge. However, many of the existing drive-by monitoring approaches are based on supervised learning models that require labeled data from every bridge of interest, which is expensive and time-consuming, if not impossible, to obtain. To this end, we introduce a new framework that transfers the model learned from one bridge to diagnose damage in another bridge without any labels from the target bridge. Our framework trains a hierarchical neural network model in an adversarial way to extract task-shared and task-specific features that are informative to multiple diagnostic tasks and invariant across multiple bridges. We evaluate our framework on experimental data collected from 2 bridges and 3 vehicles. We achieve accuracies of 95% for damage detection, 93% for localization, and up to 72% for quantification, roughly 2x improvements over baseline methods.

Robust Explainability: A Tutorial on Gradient-Based Attribution Methods for Deep Neural Networks arxiv:2107.11400 📈 3

Ian E. Nielsen, Dimah Dera, Ghulam Rasool, Nidhal Bouaynaya, Ravi P. Ramachandran

**Abstract:** With the rise of deep neural networks, the challenge of explaining the predictions of these networks has become increasingly recognized. While many methods for explaining the decisions of deep neural networks exist, there is currently no consensus on how to evaluate them. On the other hand, robustness is a popular topic in deep learning research; however, it was hardly discussed in explainability until very recently. In this tutorial paper, we start by presenting gradient-based interpretability methods. These techniques use gradient signals to assign the burden of the decision to the input features. Later, we discuss how gradient-based methods can be evaluated for their robustness and the role that adversarial robustness plays in having meaningful explanations. We also discuss the limitations of gradient-based methods. Finally, we present the best practices and attributes that should be examined before choosing an explainability method. We conclude with the future directions for research in the area at the convergence of robustness and explainability.
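
As a concrete example of the methods the tutorial covers, a compact integrated-gradients implementation (the standard technique, not code from the paper; the zero baseline and toy model are illustrative):

```python
import torch

def integrated_gradients(model, x, target, steps=50):
    """Integrated gradients sketch: average gradients along the straight path
    from a zero baseline to the input, then scale by the input difference."""
    baseline = torch.zeros_like(x)
    grads = []
    for a in torch.linspace(0.0, 1.0, steps):
        xi = (baseline + a * (x - baseline)).requires_grad_(True)
        model(xi)[0, target].backward()
        grads.append(xi.grad)
    return (x - baseline) * torch.stack(grads).mean(dim=0)

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 10))
x = torch.randn(1, 1, 28, 28)
attribution = integrated_gradients(model, x, target=3)   # same shape as x
```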

Optimum Risk Portfolio and Eigen Portfolio: A Comparative Analysis Using Selected Stocks from the Indian Stock Market arxiv:2107.11371 📈 3

Jaydip Sen, Sidra Mehtab

**Abstract:** Designing an optimum portfolio that allocates weights to its constituent stocks in a way that achieves the best trade-off between the return and the risk is a challenging research problem. The classical mean-variance theory of portfolio proposed by Markowitz is found to perform sub-optimally on real-world stock market data, since the error in estimating the expected returns adversely affects the performance of the portfolio. This paper presents three approaches to portfolio design, viz., the minimum risk portfolio, the optimum risk portfolio, and the Eigen portfolio, for seven important sectors of the Indian stock market. The daily historical prices of the stocks are scraped from the Yahoo Finance website from January 1, 2016, to December 31, 2020. Three portfolios are built for each of the seven sectors chosen for this study, and the portfolios are analyzed on the training data based on several metrics such as annualized return and risk, weights assigned to the constituent stocks, correlation heatmaps, and the principal components of the Eigen portfolios. Finally, the optimum risk portfolios and the Eigen portfolios for all sectors are tested on their return over a six-month period. The performances of the portfolios are compared, and the portfolio yielding the higher return for each sector is identified.
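
A sketch of two of the three constructions on stand-in return data: the closed-form minimum-variance weights and eigen-portfolio weights from the leading eigenvector (normalization conventions for eigen portfolios vary; this is one common choice):

```python
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.02, size=(1250, 7))  # stand-in daily returns, 7 stocks
cov = np.cov(returns, rowvar=False)

# Minimum-risk (minimum-variance) portfolio: w = C^{-1} 1 / (1' C^{-1} 1).
ones = np.ones(cov.shape[0])
w_min = np.linalg.solve(cov, ones)
w_min /= w_min.sum()

# Eigen portfolio: weights from the leading eigenvector of the covariance
# (or correlation) matrix, normalized to sum to one.
eigvals, eigvecs = np.linalg.eigh(cov)
w_eigen = eigvecs[:, -1] / eigvecs[:, -1].sum()
```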

Heteroscedastic Temporal Variational Autoencoder For Irregularly Sampled Time Series arxiv:2107.11350 📈 3

Satya Narayan Shukla, Benjamin M. Marlin

**Abstract:** Irregularly sampled time series commonly occur in several domains where they present a significant challenge to standard deep learning models. In this paper, we propose a new deep learning framework for probabilistic interpolation of irregularly sampled time series that we call the Heteroscedastic Temporal Variational Autoencoder (HeTVAE). HeTVAE includes a novel input layer to encode information about input observation sparsity, a temporal VAE architecture to propagate uncertainty due to input sparsity, and a heteroscedastic output layer to enable variable uncertainty in output interpolations. Our results show that the proposed architecture is better able to reflect variable uncertainty through time due to sparse and irregular sampling than a range of baseline and traditional models, as well as recently proposed deep latent variable models that use homoscedastic output layers.
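
The heteroscedastic output idea in isolation, as a sketch: the decoder emits a mean and a log-variance per time point and is trained with a Gaussian negative log-likelihood, so predicted uncertainty can vary with input sparsity (the full HeTVAE encoder and VAE machinery are omitted):

```python
import torch
import torch.nn as nn

class HeteroscedasticHead(nn.Module):
    """Output layer that emits a mean and a log-variance per time point,
    instead of a single global noise level."""
    def __init__(self, hidden):
        super().__init__()
        self.mean = nn.Linear(hidden, 1)
        self.log_var = nn.Linear(hidden, 1)

    def forward(self, h):
        return self.mean(h), self.log_var(h)

def gaussian_nll(mean, log_var, target):
    # Per-point Gaussian negative log-likelihood (constants dropped).
    return 0.5 * (log_var + (target - mean) ** 2 / log_var.exp()).mean()

head = HeteroscedasticHead(hidden=32)
mu, lv = head(torch.randn(4, 100, 32))        # (batch, time, hidden) decodings
loss = gaussian_nll(mu, lv, torch.randn(4, 100, 1))
```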

Tackling the Overestimation of Forest Carbon with Deep Learning and Aerial Imagery arxiv:2107.11320 📈 3

Gyri Reiersen, David Dao, Björn Lütjens, Konstantin Klemmer, Xiaoxiang Zhu, Ce Zhang

**Abstract:** Forest carbon offsets are increasingly popular and can play a significant role in financing climate mitigation, forest conservation, and reforestation. Measuring how much carbon is stored in forests is, however, still largely done via expensive, time-consuming, and sometimes unaccountable field measurements. To overcome these limitations, many verification bodies are leveraging machine learning (ML) algorithms to estimate forest carbon from satellite or aerial imagery. Aerial imagery allows for tree species or family classification, which improves on satellite imagery-based forest type classification. However, aerial imagery is significantly more expensive to collect, and it is unclear by how much the higher resolution improves forest carbon estimation. This proposal paper describes the first systematic comparison of forest carbon estimation from aerial imagery, satellite imagery, and ground-truth field measurements via deep learning-based algorithms for a tropical reforestation project. Our initial results show that forest carbon estimates from satellite imagery can overestimate above-ground biomass by up to 10 times for tropical reforestation projects. The significant difference between aerial and satellite-derived forest carbon measurements shows the potential of aerial imagery-based ML algorithms and underscores the importance of extending this study to a global benchmark between options for carbon measurements.

Finite-Bit Quantization For Distributed Algorithms With Linear Convergence arxiv:2107.11304 📈 3

Chang-Shen Lee, Nicolò Michelusi, Gesualdo Scutari

**Abstract:** This paper studies distributed algorithms for (strongly convex) composite optimization problems over mesh networks, subject to quantized communications. Instead of focusing on a specific algorithmic design, we propose a black-box model casting distributed algorithms in the form of fixed-point iterates converging at a linear rate. The algorithmic model is coupled with a novel (random) Biased Compression (BC-)rule on the quantizer design, which preserves linear convergence. A new quantizer coupled with a communication-efficient encoding scheme is also proposed, which efficiently implements the BC-rule using a finite number of bits. This contrasts with most existing quantization rules, whose implementation calls for an infinite number of bits. A unified communication complexity analysis is developed for the black-box model, determining the average number of bits required to reach a solution of the optimization problem within the required accuracy. Numerical results validate our theoretical findings and show that distributed algorithms equipped with the proposed quantizer have more favorable communication complexity than algorithms using existing quantization rules.

A Differentiable Language Model Adversarial Attack on Text Classifiers arxiv:2107.11275 📈 3

Ivan Fursov, Alexey Zaytsev, Pavel Burnyshev, Ekaterina Dmitrieva, Nikita Klyuchnikov, Andrey Kravchenko, Ekaterina Artemova, Evgeny Burnaev

**Abstract:** Robustness of huge Transformer-based models for natural language processing is an important issue due to their capabilities and wide adoption. One way to understand and improve the robustness of these models is to explore an adversarial attack scenario: check whether a small perturbation of an input can fool a model. Due to the discrete nature of textual data, gradient-based adversarial methods, widely used in computer vision, are not applicable per se. The standard strategy to overcome this issue is to develop token-level transformations, which do not take the whole sentence into account. In this paper, we propose a new black-box sentence-level attack. Our method fine-tunes a pre-trained language model to generate adversarial examples. A proposed differentiable loss function depends on a substitute classifier score and an approximate edit distance computed via a deep learning model. We show that the proposed attack outperforms competitors on a diverse set of NLP problems for both computed metrics and human evaluation. Moreover, due to the usage of the fine-tuned language model, the generated adversarial examples are hard to detect; thus current models are not robust. Hence, it is difficult to defend against the proposed attack, which is not the case for other attacks.

Human Pose Estimation from Sparse Inertial Measurements through Recurrent Graph Convolution arxiv:2107.11214 📈 3

Patrik Puchert, Timo Ropinski

**Abstract:** We propose the adjacency adaptive graph convolutional long short-term memory network (AAGC-LSTM) for human pose estimation from sparse inertial measurements, obtained from only 6 measurement units. The AAGC-LSTM combines both spatial and temporal dependency in a single network operation. This is made possible by equipping graph convolutions with adjacency adaptivity, which also allows for learning unknown dependencies of the human body joints. To further boost accuracy, we propose longitudinal loss weighting to consider natural movement patterns, as well as body-aware contralateral data augmentation. By combining these contributions, we are able to utilize the inherent graph nature of the human body, and can thus outperform the state of the art for human pose estimation from sparse inertial measurements.

A comparison of combined data assimilation and machine learning methods for offline and online model error correction arxiv:2107.11114 📈 3

Alban Farchi, Marc Bocquet, Patrick Laloyaux, Massimo Bonavita, Quentin Malartic

**Abstract:** Recent studies have shown that it is possible to combine machine learning methods with data assimilation to reconstruct a dynamical system using only sparse and noisy observations of that system. The same approach can be used to correct the error of a knowledge-based model. The resulting surrogate model is hybrid, with a statistical part supplementing a physical part. In practice, the correction can be added as an integrated term (i.e. in the model resolvent) or directly inside the tendencies of the physical model. The resolvent correction is easy to implement. The tendency correction is more technical, in particular because it requires the adjoint of the physical model, but it is also more flexible. We use the two-scale Lorenz model to compare the two methods. The accuracy in long-range forecast experiments is somewhat similar between the surrogate models using the resolvent correction and the tendency correction. By contrast, the surrogate models using the tendency correction significantly outperform the surrogate models using the resolvent correction in data assimilation experiments. Finally, we show that the tendency correction opens the possibility of making online model error correction, i.e. improving the model progressively as new observations become available. The resulting algorithm can be seen as a new formulation of weak-constraint 4D-Var. We compare online and offline learning using the same framework with the two-scale Lorenz system, and show that with online learning, it is possible to extract all the information from sparse and noisy observations.

Malware Analysis with Artificial Intelligence and a Particular Attention on Results Interpretability arxiv:2107.11100 📈 3

Benjamin Marais, Tony Quertier, Christophe Chesneau

**Abstract:** Malware detection and analysis have been active research subjects in cybersecurity over the last years. Indeed, the development of obfuscation techniques, such as packing, requires special attention to detect recent variants of malware. The usual detection methods do not necessarily provide tools to interpret the results. Therefore, we propose a model based on the transformation of binary files into grayscale images, which achieves an accuracy rate of 88%. Furthermore, the proposed model can determine whether a sample is packed or encrypted with a precision of 85%, allowing us to analyze the results and act appropriately. Also, by applying attention mechanisms to detection models, we can identify which parts of a file look suspicious. This kind of tool should be very useful for data analysts; it compensates for the lack of interpretability of common detection models, and it can help to understand why some malicious files go undetected.
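
The binary-to-grayscale transformation is straightforward to sketch (the file name is hypothetical; the paper's exact image width and any resizing may differ):

```python
import numpy as np
from PIL import Image

def binary_to_grayscale(path, width=256):
    """Read a binary file byte-by-byte and reshape the byte values (0-255)
    into a 2D grayscale image that a CNN can classify."""
    data = np.frombuffer(open(path, "rb").read(), dtype=np.uint8)
    height = len(data) // width
    return Image.fromarray(data[: height * width].reshape(height, width))

# img = binary_to_grayscale("sample.exe")   # hypothetical input file
# img.save("sample.png")
```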

RGB Image Classification with Quantum Convolutional Ansaetze arxiv:2107.11099 📈 3

Yu Jing, Yang Yang, Chonghang Wu, Wenbing Fu, Wei Hu, Xiaogang Li, Hua Xu

**Abstract:** With the rapid growth of qubit numbers and coherence times in quantum hardware technology, implementing shallow neural networks on so-called Noisy Intermediate-Scale Quantum (NISQ) devices has attracted a lot of interest. Many quantum (convolutional) circuit ansaetze have been proposed for grayscale image classification tasks with promising empirical results. However, when applying these ansaetze to RGB images, the intra-channel information that is useful for vision tasks is not extracted effectively. In this paper, we propose two types of quantum circuit ansaetze to simulate convolution operations on RGB images, which differ in how inter-channel and intra-channel information is extracted. To the best of our knowledge, this is the first work on quantum convolutional circuits that deals with RGB images effectively, with a higher test accuracy compared to purely classical CNNs. We also investigate the relationship between the size of the quantum circuit ansatz and the learnability of the hybrid quantum-classical convolutional neural network. Through experiments based on the CIFAR-10 and MNIST datasets, we demonstrate that a larger quantum circuit ansatz improves predictive performance in multiclass classification tasks, providing useful insights for near-term quantum algorithm development.

Data-driven deep density estimation arxiv:2107.11085 📈 3

Patrik Puchert, Pedro Hermosilla, Tobias Ritschel, Timo Ropinski

**Abstract:** Density estimation plays a crucial role in many data analysis tasks, as it infers a continuous probability density function (PDF) from discrete samples. Thus, it is used in tasks as diverse as analyzing population data, spatial locations in 2D sensor readings, or reconstructing scenes from 3D scans. In this paper, we introduce a learned, data-driven deep density estimation (DDE) to infer PDFs in an accurate and efficient manner, while being independent of domain dimensionality or sample size. Furthermore, we do not require access to the original PDF during estimation, neither in parametric form, nor as priors, nor in the form of many samples. This is enabled by training an unstructured convolutional neural network on an infinite stream of synthetic PDFs, as unbounded amounts of synthetic training data generalize better across a wide range of natural PDFs than any finite natural training data will do. Thus, we hope that our publicly available DDE method will be beneficial in many areas of data analysis, where continuous models are to be estimated from discrete observations.

Reservoir Computing Approach for Gray Images Segmentation arxiv:2107.11077 📈 3

Petia Koprinkova-Hristova

**Abstract:** The paper proposes a novel approach for grayscale image segmentation. It is based on extracting multiple features from a single feature per image pixel, namely its intensity value, using an Echo State Network. The newly extracted features -- reservoir equilibrium states -- reveal hidden image characteristics that improve segmentation via a clustering algorithm. Moreover, it is demonstrated that intrinsic plasticity tuning of the reservoir fits its equilibrium states to the original image intensity distribution, thus allowing for better segmentation. The proposed approach is tested on the benchmark image Lena.
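
A minimal sketch of the pipeline under stated assumptions (reservoir size, scaling, and the clustering choice are illustrative; intrinsic plasticity tuning is omitted): feed each pixel intensity to a small echo state network until it settles, then cluster the equilibrium states:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_res, rho = 50, 0.9
W_in = rng.uniform(-0.5, 0.5, size=(n_res, 1))
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W *= rho / np.abs(np.linalg.eigvals(W)).max()   # spectral radius < 1

def equilibrium_state(u, iters=100):
    """Feed a single pixel intensity repeatedly until the reservoir settles;
    the fixed point serves as a multi-dimensional feature for that pixel."""
    x = np.zeros(n_res)
    for _ in range(iters):
        x = np.tanh(W_in @ np.array([u]) + W @ x)
    return x

image = rng.uniform(0, 1, size=(32, 32))        # stand-in grayscale image
features = np.array([equilibrium_state(u) for u in image.ravel()])
segments = KMeans(n_clusters=3, n_init=10).fit_predict(features).reshape(32, 32)
```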

Improving the Generalization of Meta-learning on Unseen Domains via Adversarial Shift arxiv:2107.11056 📈 3

Pinzhuo Tian, Yao Gao

**Abstract:** Meta-learning provides a promising way of learning to learn efficiently and achieves great success in many applications. However, most meta-learning literature focuses on dealing with tasks from the same domain, making it brittle when generalizing to tasks from other unseen domains. In this work, we address this problem by simulating tasks from other unseen domains to improve the generalization and robustness of the meta-learning method. Specifically, we propose a model-agnostic shift layer to learn how to simulate the domain shift and generate pseudo tasks, and develop a new adversarial learning-to-learn mechanism to train it. Based on the pseudo tasks, the meta-learning model can learn cross-domain meta-knowledge, which generalizes well to unseen domains. We conduct extensive experiments under the domain generalization setting. Experimental results demonstrate that the proposed shift layer is applicable to various meta-learning frameworks. Moreover, our method also leads to state-of-the-art performance on different cross-domain few-shot classification benchmarks and produces good results on cross-domain few-shot regression.

Text Classification and Clustering with Annealing Soft Nearest Neighbor Loss arxiv:2107.14597 📈 2

Abien Fred Agarap

**Abstract:** We define disentanglement as how far class-different data points are from each other, relative to the distances among class-similar data points. When maximizing disentanglement during representation learning, we obtain a transformed feature representation in which the class memberships of the data points are preserved. If the class memberships of the data points are preserved, we have a feature representation space in which a nearest neighbour classifier or a clustering algorithm would perform well. We take advantage of this method to learn better natural language representations, and employ it on text classification and text clustering tasks. Through disentanglement, we obtain text representations with better-defined clusters and improved text classification performance. Our approach achieved a test classification accuracy of up to 90.11% and a test clustering accuracy of 88% on the AG News dataset, outperforming our baseline models without any other training tricks or regularization.
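
A sketch of the soft nearest neighbor loss, which measures how entangled class-different points are: for each point, the negative log-probability of picking a same-class neighbor under a softmax over pairwise distances (the squared-distance and temperature choices are illustrative; annealing is left to the training loop):

```python
import torch

def soft_nearest_neighbor_loss(emb, labels, temperature=100.0):
    """For each point, -log of the probability mass that a distance-softmax
    places on same-class neighbors; lower values mean tighter class clusters."""
    d = torch.cdist(emb, emb) ** 2
    mask_self = torch.eye(len(emb), dtype=torch.bool)
    logits = (-d / temperature).masked_fill(mask_self, float("-inf"))
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    log_p = logits.log_softmax(dim=1)
    num = torch.logsumexp(log_p.masked_fill(~same | mask_self, float("-inf")), dim=1)
    return -num.mean()

emb = torch.randn(64, 32)
labels = torch.arange(4).repeat(16)    # 4 classes, 16 points each
loss = soft_nearest_neighbor_loss(emb, labels)
```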

Semantic-guided Pixel Sampling for Cloth-Changing Person Re-identification arxiv:2107.11522 📈 2

Xiujun Shu, Ge Li, Xiao Wang, Weijian Ruan, Qi Tian

**Abstract:** Cloth-changing person re-identification (re-ID) is a new rising research topic that aims at retrieving pedestrians whose clothes are changed. This task is quite challenging and has not been fully studied to date. Current works mainly focus on body shape or contour sketch, but they are not robust enough due to view and posture variations. The key to this task is to exploit cloth-irrelevant cues. This paper proposes a semantic-guided pixel sampling approach for the cloth-changing person re-ID task. We do not explicitly define which feature to extract but force the model to automatically learn cloth-irrelevant cues. Specifically, we first recognize the pedestrian's upper clothes and pants, then randomly change them by sampling pixels from other pedestrians. The changed samples retain the identity labels but exchange the pixels of clothes or pants among different pedestrians. Besides, we adopt a loss function to constrain the learned features to keep consistent before and after changes. In this way, the model is forced to learn cues that are irrelevant to upper clothes and pants. We conduct extensive experiments on the latest released PRCC dataset. Our method achieved 65.8% Rank-1 accuracy, which outperforms previous methods by a large margin. The code is available at https://github.com/shuxjweb/pixel_sampling.git.

Crosslink-Net: Double-branch Encoder Segmentation Network via Fusing Vertical and Horizontal Convolutions arxiv:2107.11517 📈 2

Qian Yu, Lei Qi, Luping Zhou, Lei Wang, Yilong Yin, Yinghuan Shi, Wuzhang Wang, Yang Gao

**Abstract:** Accurate image segmentation plays a crucial role in medical image analysis, yet it faces great challenges from various shapes, diverse sizes, and blurry boundaries. To address these difficulties, square-kernel-based encoder-decoder architectures have been proposed and widely used, but their performance remains unsatisfactory. To further cope with these challenges, we present a novel double-branch encoder architecture. Our architecture is inspired by two observations: 1) Since the discrimination of features learned via square convolutional kernels needs to be further improved, we propose to utilize non-square vertical and horizontal convolutional kernels in the double-branch encoder, so features learned by the two branches can be expected to complement each other. 2) Considering that spatial attention can help models better focus on the target region in a large-sized image, we develop an attention loss to further emphasize the segmentation of small-sized targets. Together, the above two schemes give rise to a novel double-branch encoder segmentation framework for medical image segmentation, namely Crosslink-Net. The experiments validate the effectiveness of our model on four datasets. The code is released at https://github.com/Qianyu1226/Crosslink-Net.
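
Observation 1 can be sketched as a small double-branch block with (k x 1) and (1 x k) kernels whose outputs are fused (channel counts and the 1x1 fusion convolution are illustrative, not the released architecture):

```python
import torch
import torch.nn as nn

class NonSquareBlock(nn.Module):
    """One branch looks through vertical (k x 1) kernels, the other through
    horizontal (1 x k), and their outputs are fused so the learned features
    can complement each other."""
    def __init__(self, c_in, c_out, k=7):
        super().__init__()
        p = k // 2
        self.vertical = nn.Conv2d(c_in, c_out, (k, 1), padding=(p, 0))
        self.horizontal = nn.Conv2d(c_in, c_out, (1, k), padding=(0, p))
        self.fuse = nn.Conv2d(2 * c_out, c_out, 1)

    def forward(self, x):
        return self.fuse(torch.cat([self.vertical(x), self.horizontal(x)], dim=1))

block = NonSquareBlock(1, 16)
out = block(torch.randn(2, 1, 64, 64))   # spatial shape preserved: (2, 16, 64, 64)
```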

Multi-Perspective Content Delivery Networks Security Framework Using Optimized Unsupervised Anomaly Detection arxiv:2107.11514 📈 2

Li Yang, Abdallah Moubayed, Abdallah Shami, Parisa Heidari, Amine Boukhtouta, Adel Larabi, Richard Brunner, Stere Preda, Daniel Migault

**Abstract:** Content delivery networks (CDNs) provide efficient content distribution over the Internet. CDNs improve the connectivity and efficiency of global communications, but their caching mechanisms may be breached by cyber-attackers. Among the security mechanisms, effective anomaly detection forms an important part of CDN security enhancement. In this work, we propose a multi-perspective unsupervised learning framework for anomaly detection in CDNs. In the proposed framework, a multi-perspective feature engineering approach, an optimized unsupervised anomaly detection model that utilizes an isolation forest and a Gaussian mixture model, and a multi-perspective validation method, are developed to detect abnormal behaviors in CDNs mainly from the client Internet Protocol (IP) and node perspectives, therefore to identify the denial of service (DoS) and cache pollution attack (CPA) patterns. Experimental results are presented based on the analytics of eight days of real-world CDN log data provided by a major CDN operator. Through experiments, the abnormal contents, compromised nodes, malicious IPs, as well as their corresponding attack types, are identified effectively by the proposed framework and validated by multiple cybersecurity experts. This shows the effectiveness of the proposed method when applied to real-world CDN data.
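
A hedged sketch of the core detector on stand-in features: an isolation forest and a Gaussian mixture each score samples, and the two views are combined (the feature engineering, rank-averaging, and threshold here are assumptions of this sketch):

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 8))                    # stand-in per-IP/per-node features

# The isolation forest flags easily isolated points; the Gaussian mixture
# flags points with low likelihood under the fitted mixture.
if_scores = -IsolationForest(random_state=0).fit(X).score_samples(X)
gmm = GaussianMixture(n_components=4, random_state=0).fit(X)
gmm_scores = -gmm.score_samples(X)

def rank(s):
    # Normalize scores to comparable [0, 1] ranks before combining views.
    return s.argsort().argsort() / len(s)

anomaly = (rank(if_scores) + rank(gmm_scores)) / 2 > 0.99   # boolean flags
```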

Cycled Compositional Learning between Images and Text arxiv:2107.11509 📈 2

Jongseok Kim, Youngjae Yu, Seunghwan Lee, Gunhee Kim

**Abstract:** We present an approach named the Cycled Composition Network that can measure the semantic distance of the composition of image-text embeddings. First, the Composition Network transits a reference image to a target image in an embedding space using a relative caption. Second, the Correction Network calculates the difference between the reference and retrieved target images in the embedding space and matches it with the relative caption. Our goal is to learn the composition mapping with the Composition Network. Since this one-way mapping is highly under-constrained, we couple it with inverse-relation learning via the Correction Network and introduce a cycled relation for a given image. We participated in the Fashion IQ 2020 challenge and won first place with an ensemble of our models.

$μ$DARTS: Model Uncertainty-Aware Differentiable Architecture Search arxiv:2107.11500 📈 2

Biswadeep Chakraborty, Saibal Mukhopadhyay

**Abstract:** We present a Model Uncertainty-aware Differentiable ARchiTecture Search ($μ$DARTS) that optimizes neural networks to simultaneously achieve high accuracy and low uncertainty. We introduce concrete dropout within DARTS cells and include a Monte-Carlo regularizer within the training loss to optimize the concrete dropout probabilities. A predictive variance term is introduced in the validation loss to enable searching for architecture with minimal model uncertainty. The experiments on CIFAR10, CIFAR100, SVHN, and ImageNet verify the effectiveness of $μ$DARTS in improving accuracy and reducing uncertainty compared to existing DARTS methods. Moreover, the final architecture obtained from $μ$DARTS shows higher robustness to noise at the input image and model parameters compared to the architecture obtained from existing DARTS methods.

Using a Cross-Task Grid of Linear Probes to Interpret CNN Model Predictions On Retinal Images arxiv:2107.11468 📈 2

Katy Blumer, Subhashini Venugopalan, Michael P. Brenner, Jon Kleinberg

**Abstract:** We analyze a dataset of retinal images using linear probes: linear regression models trained on some "target" task, using embeddings from a deep convolutional (CNN) model trained on some "source" task as input. We use this method across all possible pairings of 93 tasks in the UK Biobank dataset of retinal images, leading to ~164k different models. We analyze the performance of these linear probes by source and target task and by layer depth. We observe that representations from the middle layers of the network are more generalizable. We find that some target tasks are easily predicted irrespective of the source task, and that some other target tasks are more accurately predicted from correlated source tasks than from embeddings trained on the same task.
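
A linear probe is simple to sketch: freeze the source-task embeddings and fit a linear model on the target task (stand-in arrays here; repeating over all source/target/layer combinations builds the grid the paper analyzes):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Stand-ins: embeddings extracted from some layer of a CNN trained on a
# "source" task, and labels for a different "target" task.
embeddings = rng.normal(size=(1000, 512))
target_labels = rng.normal(size=1000)

probe = LinearRegression().fit(embeddings[:800], target_labels[:800])
r2 = probe.score(embeddings[800:], target_labels[800:])  # one cell of the grid
```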

A general sample complexity analysis of vanilla policy gradient arxiv:2107.11433 📈 2

Rui Yuan, Robert M. Gower, Alessandro Lazaric

**Abstract:** We adapt recent tools developed for the analysis of Stochastic Gradient Descent (SGD) in non-convex optimization to obtain convergence guarantees and sample complexities for the vanilla policy gradient (PG) methods -- REINFORCE and GPOMDP. Our only assumptions are that the expected return is smooth w.r.t. the policy parameters and that the second moment of its gradient satisfies a certain \emph{ABC assumption}. The ABC assumption allows for the second moment of the gradient to be bounded by $A\geq 0$ times the suboptimality gap, $B \geq 0$ times the norm of the full batch gradient, and an additive constant $C \geq 0$, or any combination of the aforementioned. We show that the ABC assumption is more general than the commonly used assumptions on the policy space to prove convergence to a stationary point. We provide a single convergence theorem under the ABC assumption, and show that, despite the generality of the ABC assumption, we recover the $\widetilde{\mathcal{O}}(\epsilon^{-4})$ sample complexity of PG. Our convergence theorem also affords greater flexibility in the choice of hyperparameters such as the step size, and places no restriction on the batch size $m$. Even the single trajectory case (i.e., $m=1$) fits within our analysis. We believe that the generality of the ABC assumption may provide theoretical guarantees for PG on a much broader range of problems that have not been previously considered.
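
The ABC assumption, written out (reading the gradient term as a squared norm, which is how such bounds usually appear; treat that reading as an assumption of this sketch):

```latex
% ABC assumption on the stochastic policy gradient, with J the expected
% return and J* its maximum:
\[
  \mathbb{E}\!\left[\big\|\widehat{\nabla J}(\theta)\big\|^{2}\right]
  \;\le\; A\,\bigl(J^{*} - J(\theta)\bigr)
  \;+\; B\,\big\|\nabla J(\theta)\big\|^{2} \;+\; C,
  \qquad A,\,B,\,C \;\ge\; 0 .
\]
```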

Finite-time Analysis of Globally Nonstationary Multi-Armed Bandits arxiv:2107.11419 📈 2

Junpei Komiyama, Edouard Fouché, Junya Honda

**Abstract:** We consider nonstationary multi-armed bandit problems where the model parameters of the arms change over time. We introduce the adaptive resetting bandit (ADR-bandit), a class of bandit algorithms that leverages adaptive windowing techniques from the data stream community. We first provide new guarantees on the quality of estimators resulting from adaptive windowing techniques, which are of independent interest in the data mining community. Furthermore, we conduct a finite-time analysis of ADR-bandit in two typical environments: an abrupt environment where changes occur instantaneously and a gradual environment where changes occur progressively. We demonstrate that ADR-bandit has nearly optimal performance when abrupt or gradual changes occur in a coordinated manner that we call global changes, and that forced exploration is unnecessary when we restrict our interest to such global changes. Unlike existing nonstationary bandit algorithms, ADR-bandit has optimal performance in stationary environments as well as in nonstationary environments with global changes. Our experiments show that the proposed algorithms outperform existing approaches in synthetic and real-world environments.
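
A toy Python sketch of the adaptive-windowing reset idea that ADR-bandit builds on: compare the two halves of a per-arm reward window with a Hoeffding-style bound and reset the arm's statistics when drift is detected. The test, confidence level, and window size are illustrative, not the paper's actual algorithm.

```python
import math
from collections import deque

def drift_detected(window, delta=0.05):
    """Hoeffding-style two-half test on rewards in [0, 1],
    in the spirit of adaptive windowing (ADWIN-like)."""
    n = len(window)
    if n < 8:
        return False
    half = n // 2
    old, new = list(window)[:half], list(window)[half:]
    gap = abs(sum(old) / len(old) - sum(new) / len(new))
    eps = math.sqrt(math.log(2 / delta) / (2 * min(len(old), len(new))))
    return gap > eps

window = deque(maxlen=200)       # recent rewards of one arm
# on each pull of this arm:
#     window.append(reward)
#     if drift_detected(window):
#         window.clear()         # reset the arm's estimator
```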

State, global and local parameter estimation using local ensemble Kalman filters: applications to online machine learning of chaotic dynamics arxiv:2107.11253 📈 2

Quentin Malartic, Alban Farchi, Marc Bocquet

**Abstract:** In a recent methodological paper, we showed how to learn chaotic dynamics along with the state trajectory from sequentially acquired observations, using local ensemble Kalman filters. Here, we more systematically investigate the possibility of using a local ensemble Kalman filter with either covariance localisation or local domains, in order to retrieve the state and a mix of key global and local parameters. Global parameters are meant to represent the surrogate dynamical core, for instance through a neural network, which is reminiscent of data-driven machine learning of dynamics, while the local parameters typically stand for the forcings of the model. Aiming at joint state and parameter estimation, a family of algorithms for covariance and local domain localisation is proposed. In particular, we show how to rigorously update global parameters using a local domain ensemble Kalman filter (EnKF) such as the local ensemble transform Kalman filter (LETKF), an inherently local method. The approach is tested with success on the 40-variable Lorenz model using several of the local EnKF flavors. A two-dimensional illustration based on a multi-layer Lorenz model, using radiance-like non-local observations, is finally provided; it features both local domains and covariance localisation in order to learn the chaotic dynamics and the local forcings. This paper more generally addresses the key question of online estimation of both global and local model parameters.
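
For orientation, a minimal NumPy sketch of a stochastic EnKF analysis step on an ensemble whose columns stack state variables and parameters, so that both are updated jointly. It deliberately omits the localisation machinery that is the paper's actual contribution.

```python
import numpy as np

def enkf_update(ens, obs, H, R, rng):
    """Stochastic EnKF analysis step. ens is (n_state + n_params, N):
    each column is one ensemble member with parameters appended, so the
    Kalman update corrects state and parameters together."""
    N = ens.shape[1]
    mean = ens.mean(axis=1, keepdims=True)
    A = ens - mean                                   # anomalies
    HA = H @ A
    Pxy = A @ HA.T / (N - 1)                         # cross covariance
    Pyy = HA @ HA.T / (N - 1) + R                    # innovation covariance
    K = Pxy @ np.linalg.inv(Pyy)                     # Kalman gain
    perturbed = obs[:, None] + rng.multivariate_normal(
        np.zeros(len(obs)), R, size=N).T             # perturbed observations
    return ens + K @ (perturbed - H @ ens)
```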

Multi-Channel Automatic Music Transcription Using Tensor Algebra arxiv:2107.11250 📈 2

Axel Marmoret, Nancy Bertin, Jeremy Cohen

**Abstract:** Music is an art, arising from acoustic signals and perceived in unique ways by every listener. At the same time, standards such as musical scores exist to describe it. Even though humans can perform this transcription, it is costly in time and effort, all the more so given the explosion of information that followed the rise of the Internet. Hence, research is being driven toward Automatic Music Transcription. While this task is considered solved in the case of single notes, it remains open when notes superpose, forming chords. This report aims at developing some of the existing techniques for music transcription, particularly matrix factorization, and at introducing the concept of multi-channel automatic music transcription. This concept is explored with mathematical objects called tensors.
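
A minimal sketch of the matrix-factorization baseline such a report typically starts from: factorize a magnitude spectrogram $V \approx WH$ with NMF, reading note spectral templates from $W$ and their time activations from $H$. The shapes and number of components are placeholders.

```python
import numpy as np
from sklearn.decomposition import NMF

# Placeholder magnitude spectrogram (frequency bins x time frames).
rng = np.random.default_rng(0)
V = np.abs(rng.normal(size=(1025, 400)))

# V ~= W @ H: W holds per-note spectral templates, H their activations.
model = NMF(n_components=12, init="nndsvd", max_iter=300)
W = model.fit_transform(V)   # (freq, notes) templates
H = model.components_        # (notes, time) activations -- transcription cue
```

The multi-channel extension replaces the matrix $V$ with a tensor (frequency x time x channel) and the factorization with a tensor decomposition.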

High Dimensional Differentially Private Stochastic Optimization with Heavy-tailed Data arxiv:2107.11136 📈 2

Lijie Hu, Shuo Ni, Hanshen Xiao, Di Wang

**Abstract:** As one of the most fundamental problems in machine learning, statistics and differential privacy, Differentially Private Stochastic Convex Optimization (DP-SCO) has been extensively studied in recent years. However, most of the previous work can only handle either regular data distributions or irregular data in the low dimensional space case. To better understand the challenges arising from irregular data distributions, in this paper we provide the first study on the problem of DP-SCO with heavy-tailed data in the high dimensional space. In the first part we focus on the problem over some polytope constraint (such as the $\ell_1$-norm ball). We show that if the loss function is smooth and its gradient has bounded second order moment, it is possible to get a (high probability) error bound (excess population risk) of $\tilde{O}(\frac{\log d}{(nε)^\frac{1}{3}})$ in the $ε$-DP model, where $n$ is the sample size and $d$ is the dimensionality of the underlying space. Next, for LASSO, if the data distribution has bounded fourth-order moments, we improve the bound to $\tilde{O}(\frac{\log d}{(nε)^\frac{2}{5}})$ in the $(ε, δ)$-DP model. In the second part of the paper, we study sparse learning with heavy-tailed data. We first revisit the sparse linear model and propose a truncated DP-IHT method whose output could achieve an error of $\tilde{O}(\frac{s^{*2}\log d}{nε})$, where $s^*$ is the sparsity of the underlying parameter. Then we study a more general problem over the sparsity ({\em i.e.,} $\ell_0$-norm) constraint, and show that it is possible to achieve an error of $\tilde{O}(\frac{s^{*\frac{3}{2}}\log d}{nε})$, which is also near optimal up to a factor of $\tilde{O}{(\sqrt{s^*})}$, if the loss function is smooth and strongly convex.

Cardiac CT segmentation based on distance regularized level set arxiv:2107.11119 📈 2

Xinyang Wu

**Abstract:** Before analyzing a cardiac CT image, it is very important to segment the heart, and segmentation of the left ventricular (LV) inner and outer membranes is one of the most important steps. However, manual segmentation is tedious and time-consuming. To let doctors focus on higher-level tasks such as disease analysis and diagnosis, it is crucial to develop a fast and accurate segmentation method [1]. In view of this, this paper uses distance regularized level set evolution (DRLSE) [2], which includes a distance regularization term and an external energy term, to explore the segmentation of the epicardium and endocardium. Finally, five CT images are used to verify the proposed method, and image quality evaluation indexes such as the Dice score and Hausdorff distance are used to evaluate the segmentation. The results show that the method separates the inner and outer membranes very well (endocardium: Dice = 0.9253, Hausdorff = 7.8740; epicardium: Dice = 0.9687, Hausdorff = 6).

Mind the Performance Gap: Examining Dataset Shift During Prospective Validation arxiv:2107.13964 📈 1

Erkin Ötleş, Jeeheh Oh, Benjamin Li, Michelle Bochinski, Hyeon Joo, Justin Ortwine, Erica Shenoy, Laraine Washer, Vincent B. Young, Krishna Rao, Jenna Wiens

**Abstract:** Once integrated into clinical care, patient risk stratification models may perform worse compared to their retrospective performance. To date, it is widely accepted that performance will degrade over time due to changes in care processes and patient populations. However, the extent to which this occurs is poorly understood, in part because few researchers report prospective validation performance. In this study, we compare the 2020-2021 ('20-'21) prospective performance of a patient risk stratification model for predicting healthcare-associated infections to a 2019-2020 ('19-'20) retrospective validation of the same model. We define the difference in retrospective and prospective performance as the performance gap. We estimate how i) "temporal shift", i.e., changes in clinical workflows and patient populations, and ii) "infrastructure shift", i.e., changes in access, extraction and transformation of data, both contribute to the performance gap. Applied prospectively to 26,864 hospital encounters during a twelve-month period from July 2020 to June 2021, the model achieved an area under the receiver operating characteristic curve (AUROC) of 0.767 (95% confidence interval (CI): 0.737, 0.801) and a Brier score of 0.189 (95% CI: 0.186, 0.191). Prospective performance decreased slightly compared to '19-'20 retrospective performance, in which the model achieved an AUROC of 0.778 (95% CI: 0.744, 0.815) and a Brier score of 0.163 (95% CI: 0.161, 0.165). The resulting performance gap was primarily due to infrastructure shift and not temporal shift. So long as we continue to develop and validate models using data stored in large research data warehouses, we must consider differences in how and when data are accessed, measure how these differences may affect prospective performance, and work to mitigate those differences.

A Signal Detection Scheme Based on Deep Learning in OFDM Systems arxiv:2107.13423 📈 1

Guangliang Pan, Zitong Liu, Wei Wang, Minglei Li

**Abstract:** Channel estimation and signal detection are essential steps to ensure the quality of end-to-end communication in orthogonal frequency-division multiplexing (OFDM) systems. In this paper, we develop a DDLSD approach, i.e., Data-driven Deep Learning for Signal Detection in OFDM systems. First, the OFDM system model is established. Then, a long short-term memory (LSTM) network is introduced into the OFDM system model. Wireless channel data are generated through simulation, and the preprocessed time-series features are fed into the LSTM to complete the offline training. Finally, the trained model is used for online recovery of the transmitted signal. The difference between this scheme and existing OFDM receivers is that explicit channel state information (CSI) estimation is replaced by an implicit one, and the transmitted symbols are recovered directly. Simulation results show that the DDLSD scheme outperforms existing traditional methods in terms of channel estimation and signal detection performance.
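
A minimal Keras sketch of an LSTM-based detector in the spirit of DDLSD; the sequence length, feature layout (real/imaginary parts of received symbols), layer sizes, and bit-level output are our assumptions, not the paper's exact configuration.

```python
import tensorflow as tf

# Hypothetical shapes: each sample is a sequence of received OFDM samples
# (real and imaginary parts), and the target is the transmitted bit vector.
seq_len, feat_dim, n_bits = 128, 2, 16

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(seq_len, feat_dim)),
    tf.keras.layers.LSTM(64),                             # learns channel effects implicitly
    tf.keras.layers.Dense(n_bits, activation="sigmoid"),  # recovered bits
])
model.compile(optimizer="adam", loss="binary_crossentropy")
# Offline training on simulated channels, then online inference:
# model.fit(X_sim, bits_sim, epochs=10)
# bits_hat = (model.predict(X_rx) > 0.5)
```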

Early Diagnosis of Lung Cancer Using Computer Aided Detection via Lung Segmentation Approach arxiv:2107.12205 📈 1

Abhir Bhandary, Ananth Prabhu G, Mustafa Basthikodi, Chaitra K M

**Abstract:** Lung cancer begins in the lungs and is a leading cause of cancer deaths worldwide; the American Cancer Society estimates that it accounts for about 27% of cancer deaths. In the early phase of its evolution, lung cancer usually does not cause any symptoms. Many patients are therefore diagnosed at an advanced stage, where symptoms are more prominent, which results in poor curative treatment and a high mortality rate. Computer Aided Detection systems are used to achieve greater accuracy in lung cancer diagnosis. In this research effort, we propose a novel methodology for lung segmentation on the basis of Fuzzy C-Means Clustering, Adaptive Thresholding, and an Active Contour Model. The experimental results are analysed and presented.

Caveats for the use of Web of Science Core Collection in old literature retrieval and historical bibliometric analysis arxiv:2107.11521 📈 1

Weishu Liu

**Abstract:** Using publications from the Web of Science Core Collection (WoSCC), Fosso Wamba and his colleagues published an interesting and comprehensive paper in Technological Forecasting and Social Change to explore the structure and dynamics of artificial intelligence (AI) scholarship. Data demonstrated in Fosso Wamba's study implied that the year 1991 seemed to be a "watershed" of AI research. This research note tries to explain the 1991 phenomenon from the perspective of database limitations, by empirically probing the limitations of searching the abstract, author keywords, and Keywords Plus fields of WoSCC. The low availability rates of abstract, author keywords, and Keywords Plus information in WoSCC found in this study explain the "watershed" phenomenon of AI scholarship in 1991 to a large extent. Some other caveats for the use of WoSCC in old literature retrieval and historical bibliometric analysis are also mentioned in the discussion section. This research note complements Fosso Wamba and his colleagues' study and helps avoid improper interpretation in the use of WoSCC in old literature retrieval and historical bibliometric analysis.

Plinko: A Theory-Free Behavioral Measure of Priors for Statistical Learning and Mental Model Updating arxiv:2107.11477 📈 1

Peter A. V. DiBerardino, Alexandre L. S. Filipowicz, James Danckert, Britt Anderson

**Abstract:** Probability distributions are central to Bayesian accounts of cognition, but behavioral assessments do not directly measure them. Posterior distributions are typically computed from collections of individual participant actions, yet are used to draw conclusions about the internal structure of participant beliefs. Also not explicitly measured are the prior distributions that distinguish Bayesian models from others by representing initial states of belief. Instead, priors are usually derived from experimenters' intuitions or model assumptions and applied equally to all participants. Here we present three experiments using "Plinko", a behavioral task in which participants estimate distributions of ball drops over all available outcomes and where distributions are explicitly measured before any observations. In Experiment 1, we show that participant priors cluster around prototypical probability distributions (Gaussian, bimodal, etc.), and that prior cluster membership may indicate learning ability. In Experiment 2, we highlight participants' ability to update to unannounced changes of presented distributions and how this ability is affected by environmental manipulation. Finally, in Experiment 3, we verify that individual participant priors are reliable representations and that learning is not impeded when faced with a physically implausible ball drop distribution that is dynamically defined according to individual participant input. This task will prove useful in more closely examining mechanisms of statistical learning and mental model updating without requiring many of the assumptions made by more traditional computational modeling methodologies.

Non-intrusive reduced order modeling of natural convection in porous media using convolutional autoencoders: comparison with linear subspace techniques arxiv:2107.11460 📈 1

T. Kadeethum, F. Ballarin, Y. Choi, D. O'Malley, H. Yoon, N. Bouklas

**Abstract:** Natural convection in porous media is a highly nonlinear multiphysical problem relevant to many engineering applications (e.g., the process of $\mathrm{CO_2}$ sequestration). Here, we present a non-intrusive reduced order model of natural convection in porous media employing deep convolutional autoencoders for the compression and reconstruction and either radial basis function (RBF) interpolation or artificial neural networks (ANNs) for mapping parameters of partial differential equations (PDEs) on the corresponding nonlinear manifolds. To benchmark our approach, we also describe linear compression and reconstruction processes relying on proper orthogonal decomposition (POD) and ANNs. We present comprehensive comparisons among different models through three benchmark problems. The reduced order models, both linear and nonlinear, are much faster than the finite element model, obtaining a maximum speed-up of $7 \times 10^{6}$ because our framework is not bound by the Courant-Friedrichs-Lewy condition; hence, it can deliver quantities of interest at any given time, contrary to the finite element model. Our model's accuracy still lies within a mean squared error of 0.07 (two orders of magnitude lower than the maximum value of the finite element results) in the worst-case scenario. We illustrate that, in specific settings, the nonlinear approach outperforms its linear counterpart and vice versa. We hypothesize that a visual comparison of principal component analysis (PCA) or t-Distributed Stochastic Neighbor Embedding (t-SNE) projections of the data could indicate which method will perform better prior to employing any specific compression strategy.
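
A small sketch of the nonlinear ROM's online stage under our assumptions: an RBF interpolator maps PDE parameters to latent codes, which a separately trained convolutional decoder would turn into full solution fields. The sizes, names, and two-parameter input are placeholders.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Hypothetical offline data: latent codes z_i produced by a trained
# convolutional autoencoder at training parameters mu_i.
mu_train = rng.random((50, 2))   # e.g., (Rayleigh number, time)
z_train = rng.random((50, 8))    # corresponding latent codes

# Online stage: interpolate a new parameter point to its latent code;
# the decoder (not shown) then reconstructs the full field from z_new.
rbf = RBFInterpolator(mu_train, z_train)
z_new = rbf(np.array([[0.3, 0.7]]))
```

Because the online stage is interpolation plus one decoder pass, it sidesteps time stepping entirely, which is where the reported speed-up comes from.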

Self-Repairing Neural Networks: Provable Safety for Deep Networks via Dynamic Repair arxiv:2107.11445 📈 1

Klas Leino, Aymeric Fromherz, Ravi Mangal, Matt Fredrikson, Bryan Parno, Corina Păsăreanu

**Abstract:** Neural networks are increasingly being deployed in contexts where safety is a critical concern. In this work, we propose a way to construct neural network classifiers that dynamically repair violations of non-relational safety constraints called safe ordering properties. Safe ordering properties relate requirements on the ordering of a network's output indices to conditions on their input, and are sufficient to express most useful notions of non-relational safety for classifiers. Our approach is based on a novel self-repairing layer, which provably yields safe outputs regardless of the characteristics of its input. We compose this layer with an existing network to construct a self-repairing network (SR-Net), and show that in addition to providing safe outputs, the SR-Net is guaranteed to preserve the accuracy of the original network. Notably, our approach is independent of the size and architecture of the network being repaired, depending only on the specified property and the dimension of the network's output; thus it is scalable to large state-of-the-art networks. We show that our approach can be implemented using vectorized computations that execute efficiently on a GPU, introducing run-time overhead of less than one millisecond on current hardware -- even on large, widely-used networks containing hundreds of thousands of neurons and millions of parameters.

Robust Adaptive Submodular Maximization arxiv:2107.11333 📈 1

Shaojie Tang

**Abstract:** Most existing studies on adaptive submodular optimization focus on the average case, i.e., their objective is to find a policy that maximizes the expected utility over a known distribution of realizations. However, a policy that has good average-case performance may perform very poorly under the worst-case realization. In this study, we propose to study two variants of adaptive submodular optimization problems, namely, worst-case adaptive submodular maximization and robust submodular maximization. The first problem aims to find a policy that maximizes the worst-case utility, and the latter aims to find a policy, if any, that achieves both near-optimal average-case utility and worst-case utility simultaneously. We introduce a new class of stochastic functions, called \emph{worst-case submodular functions}. For the worst-case adaptive submodular maximization problem subject to a $p$-system constraint, we develop an adaptive worst-case greedy policy that achieves a $\frac{1}{p+1}$ approximation ratio against the optimal worst-case utility if the utility function is worst-case submodular. For the robust adaptive submodular maximization problem subject to a cardinality constraint, if the utility function is both worst-case submodular and adaptive submodular, we develop a hybrid adaptive policy that achieves an approximation close to $1-e^{-\frac{1}{2}}$ under the worst-case and average-case settings simultaneously. We also describe several applications of our theoretical results, including pool-based active learning, stochastic submodular set cover and adaptive viral marketing.

Dynamic detection of mobile malware using smartphone data and machine learning arxiv:2107.11167 📈 1

J. S. Panman de Wit, J. van der Ham, D. Bucur

**Abstract:** Mobile malware are malicious programs that target mobile devices. They are an increasing problem, as seen in the rise of detected mobile malware samples per year. The number of active smartphone users is expected to grow, stressing the importance of research on the detection of mobile malware. Detection methods for mobile malware exist but are still limited. In this paper, we provide an overview of the performance of machine learning (ML) techniques to detect malware on Android, without using privileged access. The ML classifiers use device information such as CPU usage, battery usage, and memory usage for the detection of 10 subtypes of Mobile Trojans on the Android Operating System (OS). We use a real-life dataset containing device and malware data from 47 users for a year (2016). We examine which features, i.e., aspects of a device, are most important to monitor to detect (subtypes of) Mobile Trojans. The focus of this paper is on dynamic hardware features. Using these dynamic features we apply state-of-the-art machine learning classifiers: Random Forest, K-Nearest Neighbour, and AdaBoost. We show classification results on different feature sets, making a distinction between global device features and specific app features. None of the measured feature sets require privileged access. Our results show that the Random Forest classifier performs best as a general malware classifier: across 10 subtypes of Mobile Trojans, it achieves an F1 score of 0.73 with a False Positive Rate (FPR) of 0.009 and a False Negative Rate (FNR) of 0.380. The Random Forest, K-Nearest Neighbours, and AdaBoost classifiers achieve F1 scores above 0.72, an FPR below 0.02, and an FNR below 0.33 when trained separately to detect each subtype of Mobile Trojans.
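
A minimal scikit-learn sketch of the classification-and-metrics pipeline described here, on placeholder data standing in for the dynamic hardware features; the feature count, split, and hyperparameters are illustrative only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, f1_score

# Placeholder feature matrix (CPU, battery, memory usage, ...) and labels
# (1 = Mobile Trojan activity, 0 = benign).
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 12))
y = rng.integers(0, 2, size=2000)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X[:1500], y[:1500])
pred = clf.predict(X[1500:])

tn, fp, fn, tp = confusion_matrix(y[1500:], pred).ravel()
print("F1 :", f1_score(y[1500:], pred))
print("FPR:", fp / (fp + tn))   # false positive rate, as reported above
print("FNR:", fn / (fn + tp))   # false negative rate, as reported above
```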

Introducing: DeepHead, Wide-band Electromagnetic Imaging Paradigm arxiv:2107.11107 📈 1

A. Al-Saffar, L. Guo, A. Abbosh

**Abstract:** Electromagnetic medical imaging in the microwave regime is a hard problem notorious for 1) instability and 2) under-determinism. This two-pronged problem is tackled with a two-pronged solution that uses double compression to maximally utilize cheap unlabelled data to a) provide the a priori information required to ease under-determinism and b) reduce the sensitivity of inference to the input. The result is a stable solver with high-resolution output. DeepHead is a fully data-driven implementation of the proposed paradigm in the context of microwave brain imaging. It infers the dielectric distribution of the brain at a desired single frequency while making use of an input that spreads over a wide band of frequencies. The performance of the model is evaluated with both simulations and experiments with human volunteers. The inference is juxtaposed with the ground-truth dielectric distribution in the simulation case, and with the gold-standard MRI / CT imaging modalities of the volunteers in the real-world case.

Learning the structure of wind: A data-driven nonlocal turbulence model for the atmospheric boundary layer arxiv:2107.11046 📈 1

Brendan Keith, Ustim Khristenko, Barbara Wohlmuth

**Abstract:** We develop a novel data-driven approach to modeling the atmospheric boundary layer. This approach leads to a nonlocal, anisotropic synthetic turbulence model which we refer to as the deep rapid distortion (DRD) model. Our approach relies on an operator regression problem which characterizes the best fitting candidate in a general family of nonlocal covariance kernels parameterized in part by a neural network. This family of covariance kernels is expressed in Fourier space and is obtained from approximate solutions to the Navier--Stokes equations at very high Reynolds numbers. Each member of the family incorporates important physical properties such as mass conservation and a realistic energy cascade. The DRD model can be calibrated with noisy data from field experiments. After calibration, the model can be used to generate synthetic turbulent velocity fields. To this end, we provide a new numerical method based on domain decomposition which delivers scalable, memory-efficient turbulence generation with the DRD model as well as others. We demonstrate the robustness of our approach with both filtered and noisy data coming from the 1968 Air Force Cambridge Research Laboratory Kansas experiments. Using this data, we witness exceptional accuracy with the DRD model, especially when compared to the International Electrotechnical Commission standard.

Deep Learning Based Reconstruction of Total Solar Irradiance arxiv:2107.11042 📈 1

Yasser Abduallah, Jason T. L. Wang, Yucong Shen, Khalid A. Alobaid, Serena Criscuoli, Haimin Wang

**Abstract:** The Earth's primary source of energy is the radiant energy generated by the Sun, which is referred to as solar irradiance, or total solar irradiance (TSI) when all of the radiation is measured. A minor change in the solar irradiance can have a significant impact on the Earth's climate and atmosphere. As a result, studying and measuring solar irradiance is crucial in understanding climate changes and solar variability. Several methods have been developed to reconstruct total solar irradiance for long and short periods of time; however, they are physics-based and rely on the availability of data, which does not go beyond 9,000 years. In this paper we propose a new method, called TSInet, to reconstruct total solar irradiance by deep learning for short and long periods of time that span beyond the physical models' data availability. On the data that are available, our method agrees well with the state-of-the-art physics-based reconstruction models. To our knowledge, this is the first time that deep learning has been used to reconstruct total solar irradiance for more than 9,000 years.

EGGS: Eigen-Gap Guided Search Making Subspace Clustering Easy arxiv:2107.12183 📈 0

Jicong Fan, Yiheng Tu, Zhao Zhang, Mingbo Zhao

**Abstract:** The performance of spectral clustering heavily relies on the quality of the affinity matrix. A variety of affinity-matrix-construction methods have been proposed, but they have hyperparameters that must be determined beforehand, which requires strong experience and leads to difficulty in real applications, especially when the inter-cluster similarity is high and/or the dataset is large. On the other hand, we often have to decide whether to use a linear or a nonlinear model, which still depends on experience. To solve these two problems, in this paper we present an eigen-gap guided search method for subspace clustering. The main idea is to find the most reliable affinity matrix among a set of candidates constructed by linear and kernel regressions, where the reliability is quantified by the \textit{relative-eigen-gap} of the graph Laplacian defined in this paper. We show, theoretically and numerically, that a Laplacian matrix with a larger relative-eigen-gap often yields higher clustering accuracy and stability. Our method is able to automatically search for the best model and hyperparameters in a pre-defined space. The search space is very easy to determine and can be arbitrarily large, though a relatively compact search space reduces unnecessary computation. Our method has high flexibility and convenience in real applications, and also has low computational cost because the affinity matrix is not computed by iterative optimization. We extend the method to large-scale datasets such as MNIST, on which the time cost is less than 90s and the clustering accuracy is state-of-the-art. Extensive experiments on natural image clustering show that our method is more stable, accurate, and efficient than baseline methods.
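
As a sketch of how a relative eigen-gap might be computed from a candidate affinity matrix, here is one natural construction on the normalized graph Laplacian; the paper defines its own relative-eigen-gap, so this exact formula is only our assumption.

```python
import numpy as np

def relative_eigen_gap(affinity, k):
    """One plausible relative eigen-gap for k clusters: the gap between the
    k-th and (k+1)-th smallest Laplacian eigenvalues, normalized."""
    d = affinity.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d + 1e-12))
    L = np.eye(len(d)) - D_inv_sqrt @ affinity @ D_inv_sqrt  # normalized Laplacian
    evals = np.sort(np.linalg.eigvalsh(L))
    return (evals[k] - evals[k - 1]) / (evals[k] + 1e-12)

# Model selection would then score each candidate affinity matrix with this
# quantity and keep the one with the largest gap.
```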

Clipped Hyperbolic Classifiers Are Super-Hyperbolic Classifiers arxiv:2107.11472 📈 0

Yunhui Guo, Xudong Wang, Yubei Chen, Stella X. Yu

**Abstract:** Hyperbolic space can embed hierarchical structures continuously. Hyperbolic Neural Networks (HNNs) exploit such representational power by lifting Euclidean features into hyperbolic space for classification, outperforming Euclidean neural networks (ENNs) on datasets with known hierarchical structures. However, HNNs underperform ENNs on standard benchmarks with unclear hierarchies, greatly restricting HNNs' practical applicability. Our key insight is that HNNs' poorer general classification performance results from vanishing gradients during backpropagation, caused by their hybrid architecture connecting Euclidean features to a hyperbolic classifier. We propose an effective solution by simply clipping the Euclidean feature magnitude while training HNNs. Our experimental results demonstrate that clipped HNNs become super-hyperbolic classifiers: They are not only consistently better than HNNs which already outperform ENNs on hierarchical data, but also on-par with ENNs on MNIST, CIFAR10, CIFAR100 and ImageNet benchmarks, with better adversarial robustness and out-of-distribution detection.
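
The clipping step itself is nearly a one-liner; here is a PyTorch sketch under the assumption that clipping means rescaling any feature vector whose norm exceeds a radius $r$ (the exact rule and radius are the paper's design choices and hyperparameters).

```python
import torch

def clip_features(x, r=1.0, eps=1e-6):
    """Rescale Euclidean features so their norm never exceeds r, leaving
    shorter vectors untouched; applied before the hyperbolic classifier."""
    norm = x.norm(dim=-1, keepdim=True)
    scale = torch.clamp(r / (norm + eps), max=1.0)
    return x * scale
```

Bounding the feature norm keeps the lifted points away from the boundary of the hyperbolic model, which is consistent with the vanishing-gradient explanation given in the abstract.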
