Papers I Read - Notes and Summaries

How transferable are features in deep neural networks?

Introduction

  • When neural networks are trained on images, they tend to learn the same kind of features... Continue reading


Distilling the Knowledge in a Neural Network

Introduction

  • In machine learning, it is common to train a single large model (with a large number... Continue reading


PTE - Predictive Text Embedding through Large-scale Heterogeneous Text Networks

Introduction

  • Unsupervised text embeddings can be generalized for different tasks but they have weaker predictive powers (as... Continue reading


Revisiting Semi-Supervised Learning with Graph Embeddings

Introduction

  • The paper presents a semi-supervised learning framework for graphs where the node embeddings are used to... Continue reading


Two-Stage Synthesis Networks for Transfer Learning in Machine Comprehension

Introduction

  • The paper proposes a two-stage synthesis network that can perform transfer learning for the task of machine... Continue reading

Higher-order organization of complex networks

Introduction

  • The paper presents a generalized framework for graph clustering (clusters of network motifs) on the basis... Continue reading


Network Motifs - Simple Building Blocks of Complex Networks

Introduction

  • The paper presents the concept of “network motifs” to understand the structural design of a network or... Continue reading

Word Representations via Gaussian Embedding

Introduction

  • The paper proposes representing words as Gaussian distributions (with a mean and a covariance) instead of point vectors, so that embeddings can capture uncertainty... Continue reading


HARP - Hierarchical Representation Learning for Networks

Introduction

  • HARP is an architecture to learn low-dimensional node embeddings by compressing the input graph into smaller... Continue reading


Swish - a Self-Gated Activation Function

Introduction

  • The paper presents a new activation function called Swish with formulation f(x) = x · sigmoid(x) and its... Continue reading
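
As a quick illustration (not from the post itself), the Swish formula above can be sketched in a few lines of Python; the `beta` parameter follows the paper's scaled variant f(x) = x · sigmoid(βx), with β = 1 recovering the form quoted here:

```python
import math

def sigmoid(x):
    # Standard logistic function
    return 1.0 / (1.0 + math.exp(-x))

def swish(x, beta=1.0):
    # Swish activation: f(x) = x * sigmoid(beta * x)
    # beta = 1.0 gives the plain x * sigmoid(x) form
    return x * sigmoid(beta * x)

print(swish(0.0))  # 0.0 -- Swish passes through the origin
print(swish(1.0))  # roughly 0.731, i.e. 1 * sigmoid(1)
```

Unlike ReLU, Swish is smooth and non-monotonic (it dips slightly below zero for small negative inputs), which the paper argues helps optimization.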