Here we use the UCI Electricity Dataset, which contains the electricity consumption of 370 points/clients recorded at 15-minute intervals over the period 2011 to 2014. Apart from the electricity consumption data, we also generate covariate series (for example, day of the week, month of the year, etc.) for each individual time series, as sketched below.
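A minimal sketch of how such calendar covariates can be derived from the timestamp index, assuming the consumption data lives in a pandas DataFrame with a 15-minute DatetimeIndex; the column names here are purely illustrative.

```python
import pandas as pd

def make_covariates(index: pd.DatetimeIndex) -> pd.DataFrame:
    """Build simple calendar covariates from a datetime index."""
    return pd.DataFrame(
        {
            "hour_of_day": index.hour,
            "day_of_week": index.dayofweek,
            "month_of_year": index.month,
        },
        index=index,
    )

# Example usage with a synthetic 15-minute index spanning 2011-2014:
idx = pd.date_range("2011-01-01", "2014-12-31", freq="15min")
covariates = make_covariates(idx)
print(covariates.head())
```

The same covariate frame can then be joined to each client's consumption series before feeding the model.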
N-BEATS is a deep neural architecture based on backward and forward residual links and a very deep stack of fully-connected layers. The architecture has a number of desirable properties: it is interpretable, applicable without modification to a wide range of target domains, and fast to train. Two configurations of N-BEATS demonstrate state-of-the-art performance on the M3, M4, and TOURISM competition datasets, which contain time series from diverse domains, improving forecast accuracy by 11% over a statistical benchmark and by 3% over the winner of the M4 competition.
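To make the "backward and forward residual links" concrete, here is a rough PyTorch sketch of the generic N-BEATS idea: each block is a small stack of fully-connected layers emitting a backcast and a forecast, blocks are chained by subtracting the backcast from the input (backward link) and summing the forecasts (forward link). Layer widths and block counts below are illustrative, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class NBeatsBlock(nn.Module):
    def __init__(self, backcast_len, forecast_len, hidden=256, n_layers=4):
        super().__init__()
        layers, in_dim = [], backcast_len
        for _ in range(n_layers):
            layers += [nn.Linear(in_dim, hidden), nn.ReLU()]
            in_dim = hidden
        self.fc = nn.Sequential(*layers)
        self.backcast_head = nn.Linear(hidden, backcast_len)
        self.forecast_head = nn.Linear(hidden, forecast_len)

    def forward(self, x):
        h = self.fc(x)
        return self.backcast_head(h), self.forecast_head(h)

class NBeats(nn.Module):
    def __init__(self, backcast_len, forecast_len, n_blocks=3):
        super().__init__()
        self.blocks = nn.ModuleList(
            NBeatsBlock(backcast_len, forecast_len) for _ in range(n_blocks)
        )

    def forward(self, x):
        forecast = 0
        for block in self.blocks:
            backcast, block_forecast = block(x)
            x = x - backcast                      # backward residual link
            forecast = forecast + block_forecast  # forward link
        return forecast

# Example: forecast 24 steps ahead from a lookback window of 96 steps.
model = NBeats(backcast_len=96, forecast_len=24)
y_hat = model(torch.randn(8, 96))
print(y_hat.shape)  # torch.Size([8, 24])
```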
Metric learning aims to learn an embedding space in which the embedded vectors of similar samples are encouraged to be closer together, while dissimilar ones are pushed apart. The Multi-Similarity Loss proposes an intuitively better way to achieve this and is backed up by its accuracy across public benchmark datasets. The paper's main contributions are twofold: a) introducing multiple types of similarity into the mix, and b) hard pair mining.
This loss deals with three types of similarity that carry the information of pairs, as sketched below.
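A rough PyTorch sketch of the multi-similarity loss, assuming L2-normalized embeddings and integer class labels. The hyperparameters (alpha, beta, and the margin lambda) are common defaults rather than prescriptions, and hard pair mining is omitted here for brevity, so all positive and negative pairs in the batch are weighted.

```python
import torch
import torch.nn.functional as F

def multi_similarity_loss(embeddings, labels, alpha=2.0, beta=50.0, lam=0.5):
    # Cosine similarity matrix between all pairs in the batch.
    emb = F.normalize(embeddings, dim=1)
    sim = emb @ emb.t()

    n = labels.size(0)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(n, dtype=torch.bool, device=embeddings.device)
    pos_mask = same & ~eye   # positive pairs: same label, not the anchor itself
    neg_mask = ~same         # negative pairs: different labels

    losses = []
    for i in range(n):
        pos, neg = sim[i][pos_mask[i]], sim[i][neg_mask[i]]
        if pos.numel() == 0 or neg.numel() == 0:
            continue
        # Softly weight positives and negatives around the margin lambda.
        pos_term = torch.log1p(torch.exp(-alpha * (pos - lam)).sum()) / alpha
        neg_term = torch.log1p(torch.exp(beta * (neg - lam)).sum()) / beta
        losses.append(pos_term + neg_term)
    return torch.stack(losses).mean()

# Example usage on random embeddings:
emb = torch.randn(16, 64, requires_grad=True)
labels = torch.randint(0, 4, (16,))
loss = multi_similarity_loss(emb, labels)
loss.backward()
```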