Erasure Coding

Unlock the power of Erasure Coding, an advanced data protection technique that enables efficient fault tolerance and data recovery.

Definition

Erasure Coding is a data protection technique that enhances data reliability and fault tolerance by dividing data into fragments, adding redundant pieces, and distributing them across multiple storage devices or servers. It enables data recovery even if some of the storage components become unavailable or fail.

Explanation

Erasure Coding is used to guard against data loss and ensure data integrity in distributed storage systems. Instead of relying solely on traditional methods such as data replication or RAID (Redundant Array of Independent Disks), Erasure Coding provides a more efficient way to achieve fault tolerance while minimizing storage overhead.

In Erasure Coding, data is divided into multiple data fragments, and additional fragments, known as parity or redundancy fragments, are generated. These redundancy fragments are computed using mathematical algorithms (such as Reed-Solomon codes) and allow the original data to be reconstructed if some of the data fragments become inaccessible or corrupted.

The key concept in Erasure Coding is that the data fragments and parity fragments are distributed across different storage devices or servers, forming a dispersed storage network. This distribution enhances fault tolerance, as the loss of a few fragments can be tolerated while still allowing for data recovery.
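As a minimal illustration, the sketch below implements the simplest possible erasure code: the data is split into k fragments and a single parity fragment is computed as the XOR of them all, so any one lost fragment can be rebuilt from the survivors. This is only an assumed, simplified scheme for illustration (production systems typically use Reed-Solomon or similar codes that tolerate several simultaneous failures), and the function names are hypothetical.

```python
# Minimal sketch: single-parity erasure coding (k data fragments + 1 parity fragment).
# Illustrative only -- real systems use Reed-Solomon or similar multi-parity codes.

def encode(data: bytes, k: int) -> list[bytes]:
    """Split data into k equal-sized fragments and append one XOR parity fragment."""
    size = -(-len(data) // k)                    # fragment size, rounded up
    padded = data.ljust(size * k, b"\x00")       # pad so every fragment has the same length
    fragments = [padded[i * size:(i + 1) * size] for i in range(k)]
    parity = bytearray(size)
    for frag in fragments:
        for i, b in enumerate(frag):
            parity[i] ^= b                       # each parity byte is the XOR of the data bytes at that offset
    return fragments + [bytes(parity)]

def rebuild(fragments: list) -> list:
    """Reconstruct a single missing fragment (data or parity) by XOR-ing the survivors."""
    missing = [i for i, f in enumerate(fragments) if f is None]
    assert len(missing) <= 1, "single-parity coding tolerates only one lost fragment"
    if missing:
        size = len(next(f for f in fragments if f is not None))
        restored = bytearray(size)
        for f in fragments:
            if f is not None:
                for i, b in enumerate(f):
                    restored[i] ^= b
        fragments[missing[0]] = bytes(restored)
    return fragments

# Example: spread 4 data fragments + 1 parity fragment, then lose one "server".
pieces = encode(b"erasure coding keeps data recoverable", k=4)
pieces[2] = None                                 # fragment 2 becomes unavailable
recovered = rebuild(pieces)
print(b"".join(recovered[:-1]).rstrip(b"\x00"))  # original data reassembled
```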

Erasure Coding offers several benefits, including:

  1. Increased fault tolerance: Erasure Coding enables data recovery even if multiple storage components or servers fail or become unavailable.

  2. Storage efficiency: Compared to traditional replication-based approaches, Erasure Coding requires less storage overhead, as redundant fragments are generated mathematically rather than by duplicating the full data set (a worked comparison follows this list).

  3. Scalability: Erasure Coding facilitates scalability in distributed storage systems, as new storage devices or servers can be added without duplicating the entire data set.

  4. Bandwidth optimization: Erasure Coding reduces the amount of data that needs to be transferred during data reconstruction, optimizing network bandwidth usage.
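
To make the efficiency point concrete, here is a small back-of-the-envelope comparison. The 10 data + 4 parity layout is an assumed example configuration, not a recommendation; the numbers simply show how the overhead is calculated.

```python
# Storage overhead: 3x replication vs. an assumed 10+4 erasure-coded layout.
data_tb = 100                                    # logical data to protect, in TB

replication_copies = 3
replication_raw = data_tb * replication_copies   # 300 TB raw, tolerates losing 2 copies

k, m = 10, 4                                     # 10 data fragments + 4 parity fragments per stripe
erasure_raw = data_tb * (k + m) / k              # 140 TB raw, tolerates losing any 4 fragments

print(f"3x replication: {replication_raw} TB raw ({(replication_copies - 1) * 100}% overhead)")
print(f"{k}+{m} erasure coding: {erasure_raw:.0f} TB raw ({m / k * 100:.0f}% overhead)")
```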

However, it's important to note that Erasure Coding introduces additional computational overhead during encoding and decoding processes, which may impact system performance.

Related terms

  • Data redundancy: The presence of extra or duplicate data that can be used for recovery or fault tolerance purposes.

  • RAID (Redundant Array of Independent Disks): A data storage technology that combines multiple physical disk drives into a single logical unit for improved performance, reliability, or both.

  • Data integrity: The assurance that data remains unaltered, consistent, and accurate throughout its lifecycle, including protection against data corruption or unauthorized modifications.

  • Distributed storage: Storage systems that span multiple physical devices or servers, often geographically distributed, to provide scalability, fault tolerance, and improved performance.

  • Data reconstruction: The process of rebuilding the original data from the available data fragments and redundancy fragments using Erasure Coding algorithms.

  • Fault tolerance: The ability of a system or storage infrastructure to continue operating and providing access to data even in the presence of hardware failures or network disruptions.

  • Redundancy level: The degree of redundancy used in Erasure Coding, indicating how many redundancy fragments are generated for a given set of data fragments. Higher redundancy levels offer increased fault tolerance but require more storage space.
