ETL vs. ELT: Different Strokes for Different Folks
Unstructured data's sudden rise to dominance came with immense technical challenges. One was managing storage costs for vast amounts of data. Another was ensuring the data remained accessible whenever needed, despite being scattered across many locations. A third was making sense of all that data on the analytics front. The sheer volume and variety of formats meant that old ways of working with data were no longer viable.
These challenges have been high on the software industry's agenda over the last decade. Tools to pull data from different sources, optimize storage usage, and improve data analytics have become mainstream. Another change involved data pipelines: they had to be reconfigured to accommodate the requirements of handling unstructured data. This shift manifested itself in the rising popularity of ELT (Extract - Load - Transform) over ETL (Extract - Transform - Load).
ELT hasn't replaced ETL altogether; rather, it has proved applicable to a broader range of scenarios because it caters to the market's most pressing needs. Understanding why ELT is more popular, and how ETL can still stick around, sheds light on the future trajectory of data integration.
ETL (Extract - Transform - Load)
In ETL, you extract data and move it to a staging layer where transformation takes place. Transformation is the act of converting data into the desired format for analysis. Once that is done, you load the data into the designated storage location. All three actions (extract, transform, and load) are tied together and occur in the same run. With ETL, you essentially define one transformation process that applies to a general use case.
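As a minimal sketch of that coupling, the hypothetical Python pipeline below (the file, field, and table names are invented for illustration) extracts records, transforms them in a staging step, and loads the result, all in a single run:

```python
import csv
import sqlite3

def extract(path):
    """Pull raw records from a hypothetical CSV source."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows):
    """Staging step: convert each record into the format analysis expects."""
    return [
        (row["order_id"], float(row["amount"]), row["country"].upper())
        for row in rows
    ]

def load(rows, db_path="warehouse.db"):
    """Write the transformed records to the designated storage location."""
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS orders (order_id TEXT, amount REAL, country TEXT)"
    )
    con.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    con.commit()
    con.close()

# The three steps are chained and run as one unit.
load(transform(extract("orders.csv")))
```

Note that the transformation happens before anything reaches the target: the warehouse only ever sees the finished format.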
ETL is better suited to handling structured data. The data is cleansed and enriched on a processing server before it is loaded. The user can perform security checks and encrypt the data at the source. These steps help address privacy concerns (such as removing personally identifiable information from the data) and ensure that the data complies with regulations by the time it is loaded.
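For example, the transform stage is a natural place to strip personally identifiable information before it ever reaches the target. A minimal sketch, with hypothetical field names:

```python
import hashlib

# Hypothetical PII fields that must never reach the warehouse.
PII_FIELDS = {"email", "phone", "full_name"}

def scrub_pii(record: dict) -> dict:
    """Drop PII outright and pseudonymize the user ID before loading."""
    clean = {k: v for k, v in record.items() if k not in PII_FIELDS}
    # A one-way hash keeps records joinable without exposing the raw ID.
    clean["user_id"] = hashlib.sha256(record["user_id"].encode()).hexdigest()
    return clean

print(scrub_pii({"user_id": "42", "email": "a@b.com", "amount": "9.99"}))
```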
ETL is a cost-effective data integration method. However, it is not flexible, since each process is designed for a specific type of transformation. You have to build a new ETL pipeline every time you want to change the way your data warehouse works, and the whole process has to be reconfigured whenever the input or output data format changes. ETL can therefore be rather cumbersome when format changes are frequent.
ELT (Extract - Load - Transform)
Despite its success with structured data, ETL was not a good fit for unstructured data, the predominant form of data today. Unstructured data comes in a wide range of formats from a plethora of sources and requires complex transformations. The more diverse the data formats, the more reconfigurations you have to implement for data transformation. A sensible way to minimize the number of transformations was to extract the data, load it into a data warehouse, and delay the transformation until a specific need arose. This method is ELT.
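A minimal sketch of that ordering, again with invented names: the extracted records land in the warehouse untouched, and no structure is imposed until a query actually needs it.

```python
import json
import sqlite3

con = sqlite3.connect("lake.db")

# Load: land the extracted records as-is, one raw JSON blob per row.
con.execute("CREATE TABLE IF NOT EXISTS raw_events (payload TEXT)")
events = [{"type": "click", "ts": 1700000000}, {"type": "view", "ts": 1700000060}]
con.executemany(
    "INSERT INTO raw_events VALUES (?)",
    [(json.dumps(e),) for e in events],
)
con.commit()

# Transform: deferred until a specific need arises.
parsed = (json.loads(p) for (p,) in con.execute("SELECT payload FROM raw_events"))
clicks = [e for e in parsed if e["type"] == "click"]
print(clicks)
```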
ELT offers users the flexibility and scalability that ETL cannot. With ELT, you move the whole data set, not just a subset as you would with ETL, to the target destination, where it remains as raw data until you need to transform it. The ELT method decouples the "extract" function from the "transform" function: you don't have to transform the data right after extracting it.
This decoupling affords you the luxury of delaying data model decisions until the moment you actually need them. You can also configure transformation processes for different, unique use cases instead of one transformation that must apply to a general use case. When you change the SQL queries you run in an ELT pipeline, you are effectively changing the transformations you make.
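As a sketch of that idea, reusing the hypothetical raw_events layout from the earlier example (and assuming a SQLite build with the standard JSON1 functions, the common default), two different SQL strings against the same raw data yield two different transformations; nothing is re-extracted or re-loaded:

```python
import json
import sqlite3

# Same hypothetical raw_events layout as the earlier ELT sketch.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE raw_events (payload TEXT)")
con.executemany(
    "INSERT INTO raw_events VALUES (?)",
    [(json.dumps({"type": "click", "ts": 1700000000}),),
     (json.dumps({"type": "view", "ts": 1700003600}),)],
)

# Transformation #1: daily click counts.
DAILY_CLICKS = """
    SELECT date(json_extract(payload, '$.ts'), 'unixepoch') AS day,
           count(*) AS clicks
    FROM raw_events
    WHERE json_extract(payload, '$.type') = 'click'
    GROUP BY day
"""

# Transformation #2: event totals by type. Swapping the SQL string is
# all it takes to swap the transformation.
EVENTS_BY_TYPE = """
    SELECT json_extract(payload, '$.type') AS event_type, count(*) AS n
    FROM raw_events
    GROUP BY event_type
"""

print(con.execute(DAILY_CLICKS).fetchall())
print(con.execute(EVENTS_BY_TYPE).fetchall())
```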
In spite of these advantages, ELT comes with a serious security risk. Because the data is loaded into a data warehouse or a data lake before any transformations or security checks, it is likely to contain sensitive personal or confidential information, and it will probably not comply with data security protocols at that point.
Final thoughts
The bifurcation of data integration methods into ETL and ELT stemmed from real needs. Users who work with structured data on source systems with high compute power, and who prioritize data security, keep using ETL. Users who deal with the more current problems of unstructured data, and who value the flexibility to design more complex and unique transformations, can transition to ELT. It looks like ELT has nailed the product-market fit in a field dominated by unstructured data.