Showing posts from November, 2017

CNN Layers

Four main layers:

- Convolutional layer -- output neurons are connected to local regions in the input
- ReLU layer -- elementwise activation function
- Pooling layer -- performs a downsampling operation along the spatial dimensions
- Fully-connected layer -- same as in regular neural networks

Filters act as feature detectors on the original image: the network learns filters that activate when they see some type of visual feature. In practice, ReLU converges much faster than sigmoid/tanh. The pooling layer makes representations smaller and more manageable, which helps control overfitting. CNNs have far fewer connections and parameters than fully-connected networks and are therefore easier to train; a traditional fully-connected neural network of comparable size is almost impossible to train when initialized randomly.
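The conv → ReLU → pool pipeline above can be sketched in plain NumPy. This is a minimal illustration with toy shapes, not an efficient implementation; the edge-detecting kernel is an assumed example:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (really cross-correlation, as in most CNN libraries)."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # Each output neuron sees only a local region of the input
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def relu(x):
    """Elementwise activation: max(0, x)."""
    return np.maximum(0, x)

def maxpool(x, size=2):
    """Non-overlapping max pooling; downsamples each spatial dimension by `size`."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h*size, :w*size].reshape(h, size, w, size).max(axis=(1, 3))

# A small vertical-edge filter acting as a feature detector
image = np.random.rand(8, 8)
kernel = np.array([[1., -1.],
                   [1., -1.]])
features = maxpool(relu(conv2d(image, kernel)))
print(features.shape)  # (3, 3): 8x8 input -> 7x7 conv output -> 3x3 after pooling
```

Note how the filter has only 4 parameters yet is applied across the whole image; this weight sharing is why a CNN has far fewer parameters than a fully-connected layer over the same input.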

Clustering vs Dimensionality Reduction

Clustering and Dimensionality Reduction are both applied to the problem of unsupervised learning. Clustering identifies unknown structure in the data; Dimensionality Reduction uses structural characteristics to simplify data.

There is, however, a problem called the Curse of Dimensionality: in practice, too many features leads to worse performance, which is why we need Dimensionality Reduction. It is often possible to represent data with fewer dimensions, which requires us to discover the intrinsic dimensionality of the data. One way to do dimensionality reduction is to perform lower-dimensional projections: we transform the dataset to have fewer features, where in the new feature space some of the original features are combined via linear or nonlinear functions.

PCA (Principal Component Analysis): find a sequence of linear combinations of the features that have maximal variance and are mutually uncorrelated.

PCA, 1st PC: the 1st PC of X is the unit vector that maximizes the s...
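As a small sketch of the idea, the first PC can be computed from the eigenvectors of the covariance matrix. The synthetic 3-D dataset below is an assumed example whose variance is concentrated along one direction, so projecting onto the first PC gives a useful 1-D representation:

```python
import numpy as np

rng = np.random.default_rng(0)
# 200 samples in 3-D; most of the variance lies along the direction (3, 2, 1)
X = rng.normal(size=(200, 1)) @ np.array([[3., 2., 1.]]) + 0.1 * rng.normal(size=(200, 3))

# Center the data, then eigendecompose the covariance matrix
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (len(Xc) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
pc1 = eigvecs[:, -1]                    # unit vector with maximal projected variance

# Lower-dimensional projection: 3-D data represented by 1 feature
scores = Xc @ pc1
print(np.linalg.norm(pc1))  # 1.0 -- the 1st PC is a unit vector
```

The eigenvalue attached to each eigenvector is the variance captured along that direction, and because the eigenvectors are orthogonal, the resulting components are uncorrelated.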