ABSTRACT

Data growth rates will continue to rise rapidly in the coming years. Cloud computing provides a new way of service provisioning by arranging various resources over the Internet. One of the most important cloud services among those offered is data storage.

Data saved in the cloud may contain multiple copies of the same data. Data deduplication is one of the vital techniques for compressing data: it removes duplicate copies of the same data to minimize the storage space required. To protect the data that is to be stored on the cloud, the data must be stored in encrypted form. The main purpose of the proposed scheme is to ensure that only one instance of each piece of data is stored, lowering the amount of storage space used and optimizing storage capacity. We design an effective approach that reduces encryption overhead by combining compression and encryption, and that also results in less data chunking, uses fewer indexes, and reduces the need for tape backup.

INTRODUCTION

Cloud computing is an IT paradigm that enables access to shared pools of configurable system resources and higher-level services that can be rapidly provisioned with minimal management effort, often over the Internet.


Cloud computing services all work a little differently. Cloud computing relies on sharing of resources to achieve coherence and economies of scale, similar to a public utility. Many providers offer a friendly, browser-based dashboard that makes it easier for IT professionals and developers to order resources and manage their accounts. Some cloud computing services are also designed to work with APIs and a CLI, giving developers multiple options.

Some of the things we can do with the cloud are creating new apps and services, storing, backing up, and recovering data, and streaming audio and video. The cloud provides three types of services, IaaS, PaaS, and SaaS, and three deployment models: public, private, and hybrid. The idea of data deduplication was proposed to minimize storage space; it is also called intelligent compression or single-instance storage. In this paper we design and develop a new approach that effectively deduplicates redundant data in documents using the concept of object-level components, resulting in less data chunking, fewer indexes, and a reduced need for tape backup. A minimal sketch of single-instance storage appears below.
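To make the single-instance idea concrete, the following sketch (our illustration, not the paper's implementation) stores each object under its SHA-256 content hash, so a second write of identical data adds only a reference rather than a new physical copy. The class and method names are hypothetical.

```python
import hashlib

class SingleInstanceStore:
    """Toy content-addressed store: identical data is kept exactly once."""

    def __init__(self):
        self.objects = {}   # content hash -> data (one physical copy)
        self.refcount = {}  # content hash -> number of logical references

    def put(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        if key not in self.objects:        # first copy: actually store it
            self.objects[key] = data
        self.refcount[key] = self.refcount.get(key, 0) + 1
        return key                         # caller keeps this as a pointer

    def get(self, key: str) -> bytes:
        return self.objects[key]

store = SingleInstanceStore()
a = store.put(b"same report")
b = store.put(b"same report")              # duplicate: no extra storage used
assert a == b and len(store.objects) == 1
```

In a real system the pointer returned by put() is what the file index records; the refcount decides when a physical copy can finally be deleted.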

This technique focuses on improving storage utilization, and it can also be applied to network data transfer to reduce the number of bytes that must be sent. Data deduplication can operate at the file level, the block level, and even the bit level. In file-level deduplication, if two files are exactly alike, only one copy of the file is stored, and subsequent occurrences hold a pointer to that file; a change of even a single bit, however, requires storing an entire copy of the different file. Block-level and bit-level deduplication look within a file: if the file is updated, only the blocks that changed between the two versions are saved. File-level deduplication may therefore require less processing power, since it keeps a smaller index and performs fewer comparisons, whereas block-level deduplication may take more processing power and use a much larger index to track the individual blocks. The sketch below contrasts the two granularities.
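The following sketch (ours, assuming fixed-size 4 KB blocks; production systems often use variable-size, content-defined chunking instead) shows why a one-byte edit is cheap at block level but expensive at file level: only the hashes of the changed blocks are new.

```python
import hashlib

BLOCK_SIZE = 4096  # fixed-size blocks; an assumption for this illustration

def file_fingerprint(data: bytes) -> str:
    # File-level dedup: one hash per file. Any edit changes the whole hash,
    # so the entire file must be stored again.
    return hashlib.sha256(data).hexdigest()

def block_fingerprints(data: bytes) -> list[str]:
    # Block-level dedup: one hash per block. Unchanged blocks keep their
    # old hashes and are never re-stored.
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]

original = b"A" * (8 * BLOCK_SIZE)
edited = b"B" + original[1:]               # a single-byte change

assert file_fingerprint(original) != file_fingerprint(edited)  # full re-store

old_blocks, new_blocks = block_fingerprints(original), block_fingerprints(edited)
changed = sum(o != n for o, n in zip(old_blocks, new_blocks))
print(f"{changed} of {len(new_blocks)} blocks changed")       # -> 1 of 8
```

The trade-off described above is visible here: block-level storage keeps eight index entries per file instead of one, which is exactly the larger index that buys the cheaper updates.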

       
