Deep Learning Methods on IoT: A Survey of the State of the Art

ABSTRACT: Deep learning is a machine learning technique that teaches computers to do what comes naturally to humans and helps to make sense of data such as images, sound, and text. In this paper, we provide an overview of an advanced machine learning technique, namely Deep Learning (DL), to facilitate analytics and learning in the IoT domain; IoT applications that have incorporated DL in their intelligence background are also discussed. These methods have dramatically improved the state of the art in computer vision, speech recognition, natural language processing (NLP), and many other domains such as drug discovery and cancer cell detection. We summarize the major reported research attempts that leveraged deep learning in the IoT domain.

General Terms: deep learning

Keywords: deep learning, collaborative filtering, hybrid recommender, internet of things.

Introduction: The vision of the Internet of Things (IoT) is to transform traditional objects into smart ones by exploiting a wide range of advanced technologies, from embedded devices and communication technologies to Internet protocols, data analytics, and so forth. In recent years, many IoT applications have arisen in different vertical domains,

for example: health, transportation, smart home, smart city, agriculture, education, etc. Deep Learning (DL) has been actively utilized in many IoT applications in recent years.

Applications:
Automated Driving: Automotive researchers are using deep learning to automatically detect objects such as stop signs, traffic lights, and even pedestrians in order to decrease accidents.
Aerospace and Defense: Deep learning is used to identify objects from satellites, locating areas of interest and identifying safe or unsafe zones for troops.
Medical Research: Cancer researchers are using deep learning to automatically detect cancer cells.
Industrial Automation: Deep learning improves worker safety by automatically detecting when people or objects are within an unsafe distance of machines.
Electronics: Deep learning is being used in automated hearing and speech translation.

DEEP LEARNING APPROACHES FOR RECOMMENDER SYSTEM
1) Deep Neural Networks (DNN): A deep neural network (DNN) is a multilayer perceptron network with many hidden layers, whose weights are fully connected and are often initialized using stacked RBMs or a DBN [31, 32].

The success of the DNN comes from its ability to accommodate a larger number of hidden units and to use better parameter initialization methods. A DNN with a large number of hidden units has greater modeling power.
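As an illustration of this building block, the following is a minimal sketch of such a fully connected network with several hidden layers. The framework (PyTorch), layer sizes, and activation choice are our own illustrative assumptions, not prescribed by the surveyed works:

```python
import torch
import torch.nn as nn

# A small fully connected DNN: every unit in one layer connects to every unit
# in the next. In practice the weights could also be initialized from a
# pre-trained stack of RBMs / a DBN, as the survey notes.
class SimpleDNN(nn.Module):
    def __init__(self, in_dim=784, hidden_dim=512, n_hidden=3, out_dim=10):
        super().__init__()
        layers, prev = [], in_dim
        for _ in range(n_hidden):                 # "many hidden layers"
            layers += [nn.Linear(prev, hidden_dim), nn.ReLU()]
            prev = hidden_dim
        layers.append(nn.Linear(prev, out_dim))   # output layer (e.g. class scores)
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

model = SimpleDNN()
scores = model(torch.randn(32, 784))              # batch of 32 flattened inputs
print(scores.shape)                               # torch.Size([32, 10])
```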

1.1 Basic terminologies of deep learning
1) Deep belief network (DBN): A generative model composed of multiple layers of stochastic, hidden variables. The top two layers have undirected, symmetric connections between them, while the lower layers receive top-down, directed connections from the layer above.
2) Boltzmann machine (BM): A network of symmetrically connected, neuron-like units that make stochastic decisions about whether to be on or off.
3) Restricted Boltzmann machine (RBM): A special kind of Boltzmann machine consisting of a layer of visible units and a layer of hidden units, with no visible-visible or hidden-hidden connections.
4) Deep Boltzmann machine (DBM): A special BM in which the hidden units are organized in a deep, layered manner; only adjacent layers are connected, and there are no visible-visible or hidden-hidden connections within the same layer.
5) Deep neural network (DNN): A multilayer network with many hidden layers, whose weights are fully connected and are often initialized using stacked RBMs or a DBN.

6) Deep auto-encoder: A DNN whose output target is the data input itself, often pre-trained with a DBN or trained on distorted training data to regularize the learning.
7) Distributed representation: A representation of the observed data in which the data are modeled as being generated by the interactions of many hidden factors. A particular factor learned from one configuration can often generalize well to others. Distributed representations form the basis of deep learning.
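A minimal sketch of the deep auto-encoder idea in item 6, here in its denoising variant where distorted inputs regularize the learning, is shown below. The network sizes, noise level, and use of PyTorch are our own illustrative choices:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# A deep auto-encoder whose training target is the input itself. Corrupting the
# input with noise ("distorted training data") is one way to regularize learning.
class DeepAutoencoder(nn.Module):
    def __init__(self, in_dim=784, code_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, code_dim), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 256), nn.ReLU(),
            nn.Linear(256, in_dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = DeepAutoencoder()
x = torch.rand(64, 784)                        # clean inputs
noisy = x + 0.1 * torch.randn_like(x)          # distorted copy fed to the network
loss = F.mse_loss(model(noisy), x)             # reconstruction target is the clean input
loss.backward()
```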

2) NEURAL NETWORKS: Most deep learning methods use neural network architectures. The term “deep” usually refers to the number of hidden layers in the neural network. Traditional neural networks contain only 2-3 hidden layers, while deep networks can have as many as 150 layers.

2.2 ARTIFICIAL NEURAL NETWORK: The original goal of the ANN is to solve problems in the same way that a human brain would. ANNs have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games, and medical diagnosis.
3) Convolutional Neural Network (CNN): A Convolutional Neural Network is a type of deep learning model in which each module consists of a convolutional layer and a pooling layer. These modules are often stacked one on top of another, or with a DNN on top of them, to form a deep model. A CNN is very similar to an ordinary neural network.
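A minimal sketch of this stacking of convolution and pooling modules, with a small fully connected classifier on top, might look as follows. The input size of 32x32x3 matches the weight-count example later in this section; the channel counts, kernel sizes, and use of PyTorch are our own illustrative assumptions:

```python
import torch
import torch.nn as nn

# Two convolution + pooling modules stacked, followed by a small fully
# connected "DNN on top". Input: 32x32 RGB images (3 channels).
cnn = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # module 1: convolutional layer ...
    nn.ReLU(),
    nn.MaxPool2d(2),                             # ... followed by pooling (32x32 -> 16x16)
    nn.Conv2d(16, 32, kernel_size=3, padding=1), # module 2
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 16x16 -> 8x8
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 128),                  # fully connected layers on top
    nn.ReLU(),
    nn.Linear(128, 10),                          # output layer: class scores
)

scores = cnn(torch.randn(1, 3, 32, 32))
print(scores.shape)                              # torch.Size([1, 10])
```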

These networks are made of neurons with learnable weights and biases: each neuron receives some inputs, performs a dot product over those inputs, and optionally follows it with a non-linearity. In the architecture of this model, the inputs received by each neuron are transformed through a series of hidden layers. Each hidden layer consists of neurons, where each neuron is fully connected to all the neurons of the previous layer, while neurons within a single layer function independently and do not share connections with each other. The final fully connected layer is the “output layer”, and in a classification setting it represents the class scores. For a 32x32x3 input image, a single fully connected neuron in the first hidden layer of a regular neural network would have 32*32*3 = 3072 weights. There are three main parameters that control the output volume of the convolution layer.

They are:
1. Depth
2. Stride
3. Zero padding
The main advantage of convolutional neural networks is that the inputs are represented in an image format, which makes them a more sensible architecture for such data than regular neural networks. The applications of convolutional neural networks include:
1. Image recognition
2. Video analysis
3. Checkers
4. Go
5. Fine-tuning

Figure 1.1: Architecture of a CNN.
4) Recurrent Neural Networks (RNNs): The input to an RNN consists of both the current sample and the previously observed samples, so the output of an RNN at time step t-1 affects the output at time step t. Each neuron is equipped with a feedback loop that returns the current output as an input for the next step.
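A minimal sketch of this recurrence, in which the previous output is fed back together with the current input at every step, is given below. The dimensions and the use of PyTorch are our own illustrative choices:

```python
import torch
import torch.nn as nn

# A single recurrent layer: at every time step the previous hidden output h is
# concatenated with the current input x, so earlier steps influence later ones.
class SimpleRNNCell(nn.Module):
    def __init__(self, input_size=8, hidden_size=16):
        super().__init__()
        self.linear = nn.Linear(input_size + hidden_size, hidden_size)

    def forward(self, x, h):
        return torch.tanh(self.linear(torch.cat([x, h], dim=1)))

cell = SimpleRNNCell()
h = torch.zeros(4, 16)                       # initial hidden state (batch of 4)
sequence = torch.randn(10, 4, 8)             # 10 time steps
for x_t in sequence:                         # the feedback loop over time
    h = cell(x_t, h)                         # output at step t depends on step t-1
```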

5) Long Short Term Memory (LSTM): LSTM is an extension of RNNs. It uses the concept of gates for its units, where each gate computes a value between 0 and 1 based on its input. In addition to the feedback loop that stores information, each neuron in an LSTM (also called a memory cell) has a multiplicative forget gate, read gate, and write gate. These gates are introduced to control access to the memory cells and to prevent them from being perturbed by irrelevant inputs. When the forget gate is active, the neuron keeps its last content by writing its data back into itself; when the forget gate is turned off by sending a 0, the neuron forgets its last content. When the write gate is set to 1, other connected neurons can write to that neuron, and if the read gate is set to 1, the connected neurons can read the content of the neuron.
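As a rough sketch of how these gates interact, the following implements one memory cell with multiplicative forget, write, and read gates. The exact formulation varies across LSTM variants; this common sigmoid-gated form, the dimensions, and the use of PyTorch are our own illustrative assumptions:

```python
import torch
import torch.nn as nn

class LSTMCellSketch(nn.Module):
    """One LSTM memory cell with multiplicative forget, write, and read gates."""
    def __init__(self, input_size, hidden_size):
        super().__init__()
        # one linear map produces the three gate pre-activations plus the candidate content
        self.gates = nn.Linear(input_size + hidden_size, 4 * hidden_size)

    def forward(self, x, h, c):
        z = self.gates(torch.cat([x, h], dim=1))
        f, w, r, g = z.chunk(4, dim=1)
        f, w, r = torch.sigmoid(f), torch.sigmoid(w), torch.sigmoid(r)  # gate values in (0, 1)
        g = torch.tanh(g)                    # candidate content
        c = f * c + w * g                    # forget gate keeps/clears old content, write gate admits new data
        h = r * torch.tanh(c)                # read gate controls what other neurons can read
        return h, c

cell = LSTMCellSketch(input_size=8, hidden_size=16)
h = torch.zeros(4, 16)                       # previous output (batch of 4)
c = torch.zeros(4, 16)                       # memory cell content
for x_t in torch.randn(10, 4, 8):            # unroll over 10 time steps
    h, c = cell(x_t, h, c)
```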

6) Autoencoders (AEs): AEs consist of an input layer and an output layer that are connected through one or more hidden layers. AEs have the same number of input and output units.
7) Variational Autoencoders (VAEs)
8) Generative Adversarial Networks (GANs)
9) Ladder Networks: Ladder networks were proposed in 2015 by Valpola et al. [30] to support unsupervised learning.

Ladder networks perform a variety of functions such as handwritten digit recognition and image classification. The architecture of a ladder network consists of two encoders and one decoder.

Fig. 2.1. Ladder network structure with two layers.
Fig. 3.1. Structure of a recurrent neural network.

10) Architectures of Deep Learning
10.1 Generative deep architectures: intended to characterize the high-order correlation properties of the observed or visible data for pattern analysis or synthesis purposes.

10.2 Discriminative deep architectures: intended to directly provide discriminative power for pattern classification.
10.3 Hybrid deep architectures: where the goal is discrimination, but this is assisted (often in a significant way) by the outcomes of generative architectures via better optimization and/or regularization.

Fig. 4.1. Google Trends showing more attention toward deep learning in recent years.

Fig. 5.1. The overall mechanism of training a DL model.

Islanding Detection Methods
This section provides an overview of various islanding detection methods. There are three major categories of islanding detection methods: passive resident methods, active resident methods, and communication-based methods.
Passive Resident Methods
Passive resident methods are based on the detection of abnormalities in the electrical signals at the PCC of a DG unit.
Active Resident Methods
An active resident method artificially creates abnormalities in the PCC signals that can be detected subsequent to an islanding event.
Communication-Based Methods
Communication-based methods are based on the transmission of data between a DG unit and the host utility system. The data is analyzed by the DG unit to determine if the operation of the DG should be halted.
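As a toy illustration of the passive approach described above (monitoring PCC signals for abnormalities), the sketch below checks voltage magnitude and frequency against nominal bands. The threshold values, function name, and per-unit convention are purely illustrative assumptions and are not taken from any standard or from the method evaluated in this paper:

```python
# Illustrative passive resident check: flag a possible islanding condition when
# the voltage magnitude or frequency measured at the PCC drifts outside nominal
# bands. The bands below are placeholders, not values from any standard.
def passive_islanding_check(v_pu, freq_hz,
                            v_band=(0.88, 1.10), f_band=(59.3, 60.5)):
    """Return True if the PCC measurements look abnormal (possible island)."""
    v_ok = v_band[0] <= v_pu <= v_band[1]
    f_ok = f_band[0] <= freq_hz <= f_band[1]
    return not (v_ok and f_ok)

# Example: a frequency drift after the grid is disconnected trips the check.
print(passive_islanding_check(1.00, 60.0))   # False -> normal operation
print(passive_islanding_check(0.95, 61.2))   # True  -> abnormality detected
```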

Islanding Detection Method
Based on a laboratory-scale test system, the performance of the islanding detection method of  under various scenarios is evaluated. The studies conclude that the proposed method:

- is adequately fast to detect an islanding event within 60 ms (3.5 cycles) under UL1741 test conditions,
- requires 2% to 3% negative-sequence current injection for islanding detection,
- is insensitive to variations in the load parameters,
- is subject to nuisance trips for a grid imbalance of more than 2%,
- may fail to detect islanding events if the load is not balanced,
- can provide fast islanding detection for transitions from a grid-connected mode to an islanded mode in a DG unit.

Global strategies
Deep learning provides two main improvements over traditional machine learning. They are:
1. It reduces the need for a handcrafted and engineered feature set to be used exclusively for training purposes.
2. It increases the accuracy of the prediction model for larger amounts of data.

Conclusion: In this article, we presented traditional recommender systems and the techniques involved in deep learning recommender systems.

Even though deep learning has had a great impact in various areas, a lot of improvement can still be done in applying these models to recommender systems to improve the accuracy of recommendations. This article also presents a detailed assessment of a fast islanding detection method originally proposed in . Then, adopting the islanding detection method and the proposed control strategies, the viability of uninterruptible operation of single-DG or parallel-DG systems subsequent to islanding events is experimentally validated.

