Data migration is a very common task in (big) data engineering. In the big data landscape, you would probably reach for Sqoop to handle it. But in the small to mid-size ecosystem, the choice is far less obvious. In this post, I will introduce a new bulk loading tool, Embulk, which can handle data loading between different technologies.
For a data migration or ETL project, you have multiple choices:
- a general-purpose ETL tool (e.g. Informatica, Talend), which needs a minimum of set-up and a bit of job design (read bucket size, insert batch size, temporary files, mapping, table creation options…)
- a hand-coded all-in-one script (e.g. in Python): basically you would have to build everything from scratch
- a hand-coded set of scripts: bulk dump from database x on machine X with a dedicated tool (e.g. mysqldump), copy the dump file to machine Y, then bulk load into database y with another dedicated tool (e.g. bcp); a rough sketch of this pipeline follows the list
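For that last option, the chain of commands might look roughly like this. Host names, credentials, table names and delimiters are placeholders, and the exact mysqldump/bcp flags will depend on your setup:

```sh
# On machine X: dump one table as a flat file (writes source_table.txt next to a .sql schema file)
mysqldump --host=machine-x --user=dump_user --password \
  --tab=/tmp/dump --fields-terminated-by=',' source_db source_table

# Copy the data file to machine Y
scp /tmp/dump/source_table.txt loader@machine-y:/tmp/

# On machine Y: bulk load into SQL Server with bcp (character mode, comma-separated)
bcp target_db.dbo.target_table in /tmp/source_table.txt \
  -S machine-y -U loader -P '***' -c -t ','
```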
Now, no matter which of these solutions you pick, if your source or target database technology changes, you have to do it all over again. That's annoying, to say the least.
This is where Embulk comes in handy. This data transfer utility can bulk load between different database technologies and/or flat files using a very simple (YAML) configuration file. The basic idea is an input source and an output target. The tool also supports filtering and some transformations (e.g. joining two files on a key).
My particular use case is data migration from a MySQL database to a Microsoft SQL Server database (running in Azure).
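To give an idea of what the configuration looks like, here is a minimal sketch for this kind of transfer. Hosts, credentials and table names are placeholders, and the option names follow the embulk-input-mysql and embulk-output-sqlserver plugin documentation as I understand it:

```yaml
in:
  type: mysql                  # embulk-input-mysql plugin
  host: mysql-host
  user: embulk
  password: "***"
  database: source_db
  table: my_table
out:
  type: sqlserver              # embulk-output-sqlserver plugin
  host: myserver.database.windows.net
  user: embulk
  password: "***"
  database: target_db
  table: my_table
  mode: replace                # drop and recreate the target table
  insert_method: native        # bulk copy through the native client instead of INSERT statements
```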
This turned out to be an interesting use case! The MySQL part is okay, nothing particular there. But the SQL Server part is anything but a pleasant journey! Why is that? Well, for one, you need two drivers: the standard JDBC driver, which Embulk uses for the create table statements, and the native client used for the bulk copy itself. As you can see in the configuration, we use the "native" mode. The "normal" mode would actually produce a sequence of insert into statements (and therefore would not be bulk loading). So we use the "native" mode, which requires the "SQL Server Native Client (11.0)". Tricky, because the current offering is the ODBC driver (currently in preview, version 13.0). But the library calls are hard-coded in the output plugin, meaning that we need some soft links:
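Something like the following worked for me, but treat it strictly as an illustration: the library shipped by the ODBC driver package and the name the plugin looks for both depend on versions and install paths, so use whatever name Embulk actually reports as missing as the link name on your own system:

```sh
# Point the library name the plugin expects at the ODBC driver library that is actually installed.
# Both the source path and the link name below are illustrative; adapt them to your driver
# package and to whatever name Embulk reports as missing.
sudo ln -s /opt/microsoft/msodbcsql/lib64/libmsodbcsql-13.0.so.0.0 \
           /usr/lib64/libsqlncli-11.0.so
```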
Even the ODBC driver call is hard-coded in the plugin, so we also need to map the name it asks for to the driver we actually have. You need to add the corresponding entry in /etc/odbcinst.ini.
This is roughly what the new entry should look like; the driver name must match what the plugin asks for, and the Driver path must point at the library you actually installed:
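```ini
[SQL Server Native Client 11.0]
Description = Microsoft ODBC Driver 13 for SQL Server (preview), registered under the name the plugin expects
Driver      = /opt/microsoft/msodbcsql/lib64/libmsodbcsql-13.0.so.0.0
```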
Now we can (finally) call Embulk with our configuration file!
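Assuming the configuration above is saved as config.yml, a preview is a cheap way to check that the input side reads correctly before launching the actual run:

```sh
# Dry-run the input side and show a sample of the parsed records
embulk preview config.yml

# Launch the actual transfer
embulk run config.yml
```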
A few pitfalls though:
Embulk's guess feature is not great with databases, even though we have a very strict data schema here. Asking MySQL what the data looks like should be enough to tell SQL Server which data types to use in the create statement. Unfortunately, every integer field was created as BIGINT and every non-numerical field as TEXT. Not great, which is why a proper definition has been set in the configuration file.
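In practice that means overriding the generated types in the output section. If I read the embulk-output-jdbc documentation correctly, this is done with column_options; the column names and types below are just examples:

```yaml
out:
  type: sqlserver
  # ... connection options as above ...
  column_options:
    id:         {type: "INT"}           # would otherwise end up as BIGINT
    status:     {type: "VARCHAR(16)"}   # would otherwise end up as TEXT
    created_at: {type: "DATETIME2"}
```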
Embulk is a parallel bulk loader. It uses multiple input/output tasks to dump and load data. To do so, it creates n intermediate data sets (plain files if the output is a file type). The same goes for relational databases: it first bulk loads into temporary tables and then runs multiple insert into table select * from table_temporary; statements to complete the integration. This is, I believe, a huge issue because it is somewhat counterproductive with respect to the bulk load operation.
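To make that concrete, for a table loaded with, say, two output tasks the final merge conceptually looks like the statements below; the intermediate table names are made up, Embulk generates its own:

```sql
-- Each output task has already bulk loaded its share of rows into its own intermediate table.
-- The plugin then merges them into the real target with plain INSERT ... SELECT statements.
INSERT INTO my_table SELECT * FROM my_table_embulk_tmp_000;
INSERT INTO my_table SELECT * FROM my_table_embulk_tmp_001;
```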
Basically, Embulk has the potential to become a great tool but lacks some implementation optimizations. For instance, with a single thread and a single output task, there is no need to bulk load into a temporary table; it could load directly into the target. Also, I wonder whether the ODBC driver part is really compulsory. I am confident that a bulk copy with the JDBC driver would be at least as efficient.