Mark Hünermund Jensen

Dig deep: MySQL

The inner workings of MySQL are simply fascinating. For years I've wanted to share my findings, as I've ventured deeper and deeper into the world beneath the queries.

A brief story

In the 1960s and 1970s, computer scientist Edgar Frank Codd (1923 – 2003) developed the concept of relational databases. He formalized his idea through a language called Alpha, and continued to develop it throughout the 1970s while working at IBM.

IBM wanted to implement Codd's idea in System R. But the team in charge of the implementation was isolated from Codd himself, and somewhat unfamiliar with his work, so they wound up developing another language called SEQUEL - the language which later became SQL.

Despite the head start, Relational Software Inc. beat IBM to releasing the first commercial relational database system when they introduced Oracle V2 in 1979.

SQL became the de facto mother language of relational databases, and many new "dialects" derived from it. One of these dialects was, as you guessed, MySQL.

Named after the daughter of one of the co-founders, My, the "MySQL" project began development in 1994 at the Swedish company MySQL AB, and was released in May 1995. It was (and still is) written in C and C++, and developed as an open source project.

In 2008 Sun Microsystems acquired MySQL AB; Sun was in turn acquired by Oracle in January 2010.

MySQL became particularly popular in combination with Apache and PHP, and as of 2016 it's among the most widely used relational database management systems (RDBMS) in the world.


Despite what many think, the bare definition of a relational database is actually not that relations are supported between tables. (The "relation" in the name refers to the table itself, in the mathematical sense.) Instead, what distinguishes SQL from NoSQL is the use of tables and schemas.

But what the hell does that even mean? "No tables and no schemas"?

When you create a new SQL table, you always define a precise set of columns with specific data types, lengths and options. This is the schema, and SQL strictly follows it. You cannot add data which is not expected by the schema. This provides some benefits and some problems.

NoSQL is a free soul, it doesn't care about these kinds of rules. It doesn't need a nice, neat frame for how it does things. It lives one day at a time, carpe diem and all that.

An example of a NoSQL database - a quite popular one, even - is MongoDB. It stores data in JSON format, and you can add whatever data you like. If you want to add a customer with an email key and value, you do it; and if you don't want to, you just leave it out.
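To make that concrete, here's a tiny Python sketch. Plain dicts stand in for MongoDB documents, and the field names are invented for illustration:

```python
# Two "documents" in the same hypothetical customers collection.
# Unlike rows in a SQL table, nothing forces them to share a schema.
customers = [
    {"name": "Alice", "email": "alice@example.com"},
    {"name": "Bob"},  # no email key at all - perfectly legal in NoSQL
]

# The reading code has to cope with the missing key itself:
emails = [c.get("email", "<none>") for c in customers]
```

The flexibility is real, but so is the cost: every consumer of the data now has to handle the missing-field case.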

Storage Engines

All right, let me ask you a question: When you do an INSERT query in MySQL, where does the data go? If you say "the hard disk", you're both right and wrong. To elaborate we need to discuss storage engines (or "pluggable storage engines" as they are often called).

When you create a new table you may have encountered the terms "InnoDB" or "MyISAM". These are storage engines, and there are many more like them.

A storage engine is a software layer which MySQL invokes after having interpreted a query. The task of the storage engine is to read and write data. This enables MySQL, as a single database system, to offer many different ways of storing data, with just as many advantages and disadvantages.

MySQL actually has no idea where data goes or where it comes from. It simply asks the storage engine to carry out a task.

A seemingly sinister example of MySQL's life in the unknown is the BLACKHOLE storage engine. It's not just a cool name, it's also a very true name. Because when you send data to a BLACKHOLE engine, you'll never ever see it again. BLACKHOLE sees your data, then it churns it in a grinder, stabs it several times, spits on it, drags it through the mud and throws it away while talking shit about its mom.

While this sounds, well, useless, it actually happens to serve a pretty cool purpose: Database replication is the process in which a MySQL server automatically synchronizes its data with other MySQL servers. This can be used for performance gains as well as backup. We won't go deep into this, but BLACKHOLE provides an option to save bandwidth in such a setup. Often you won't need to back up tables used for caching, but in a replication environment the data is still transferred across the network. Using some tricks and the BLACKHOLE engine, you can prevent some tables from being transferred, without complicating the replication process too much. If you're interested you can read more on Stack Overflow or from the MySQL documentation on the BLACKHOLE engine.

But let's get back on track: "Storage engines" is why it's both right and wrong to answer that the data ends up on the disk. The most popular engines use disk, but MEMORY, for example, stores data directly in the RAM. BLACKHOLE, as we just discussed, stores it nowhere.

And to answer your burning question: Yes. Yes, you can write your own storage engine.

The different engines

All these ways of storing data come with different advantages and disadvantages. Learn by example: Since MEMORY stores in RAM, reads and writes are blazingly fast compared to the disk-based engines. But what happens when the server reboots or is smacked dead by a power outage? It's gone. Wiped. Nowhere to be found. So obviously, you don't put your important financial statements in a MEMORY table, but it's ideal for caching solutions, because it doesn't really hurt if the data is lost, and it's faster than disk tables.

But even the disk-based tables have major differences. MyISAM and InnoDB are the two most commonly used engines, and many developers prefer InnoDB for a couple of reasons. For one thing, it supports foreign keys and transactions. But even a detail such as the write lock mechanism is different.

But first let's clarify what a lock is: Let's imagine a situation where you have two heavy queries that take a long time to execute, running simultaneously. They both try to update the same column, but with different values. For one thing you can wave bye-bye to data integrity. But what happens if they try to write in the same place?

To avoid dealing with these kinds of problems, a storage engine can choose to enable a locking mechanism. Basically, it allows one query to say that "I'm currently working in this area, you'll have to wait your turn" to other queries. Queries are now queued instead.

So why is that significant for MyISAM and InnoDB? Well, when you write in a MyISAM table, the MyISAM engine will lock the entire table while working. So if you update customer no. 10,000 in your "Customers" table, you cannot update customer no. 20,000 until the first query is done. There's rarely a particularly good reason to prevent two queries from working on two completely separate rows.

But God (read: the MySQL development team (read: No blasphemy intended)) has answered your prayers. InnoDB does this differently, because it only locks the specific row(s) it'll be working on, and leaves the rest open for other queries to operate on. Meaning that in theory you can have 10,000 queries working on 10,000 rows at the same time.

InnoDB's method is called row-level locking and MyISAM's is called table-level locking.
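As a toy illustration of that difference (plain Python, not MySQL's actual implementation), compare a single lock guarding the whole table with one lock per row:

```python
import threading

class MyISAMStyleTable:
    """Table-level locking: one lock guards every row."""
    def __init__(self, rows):
        self.rows = rows
        self.lock = threading.Lock()

    def update(self, row_id, value):
        with self.lock:               # blocks ALL other writers
            self.rows[row_id] = value

class InnoDBStyleTable:
    """Row-level locking: each row has its own lock."""
    def __init__(self, rows):
        self.rows = rows
        self.locks = {row_id: threading.Lock() for row_id in rows}

    def update(self, row_id, value):
        with self.locks[row_id]:      # blocks only writers of this row
            self.rows[row_id] = value

table = InnoDBStyleTable({10_000: "old", 20_000: "old"})
table.update(10_000, "new")  # would not block an update of row 20,000
```

In the row-level version, a writer to customer 10,000 and a writer to customer 20,000 never contend for the same lock.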

Creating tables

All right, so we have established that MySQL can interpret queries and that storage engines can do stuff to data. But how do they communicate?

When a programmer writes a storage engine, they add methods with very specific names, such as create for making new tables, or write_row to carry out an INSERT query.
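MySQL's real pluggable-engine API is a C++ handler class, but the idea can be sketched in Python. The method names create and write_row are the ones mentioned above; everything else here is invented for illustration:

```python
# A toy analogy of the pluggable storage engine idea. MySQL's real API
# is a C++ "handler" class; this just mirrors its named entry points.

class ToyCSVEngine:
    def __init__(self):
        self.tables = {}

    def create(self, table_name, columns):
        """Called when MySQL has parsed a CREATE TABLE statement."""
        self.tables[table_name] = {"columns": columns, "rows": []}

    def write_row(self, table_name, row):
        """Called once per row for an INSERT statement."""
        self.tables[table_name]["rows"].append(row)

# The server layer parses the SQL, then delegates to the engine:
engine = ToyCSVEngine()
engine.create("Example", ["id", "name"])
engine.write_row("Example", [1, "John"])
```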

When a database administrator writes something along the line of:

CREATE TABLE Example (id INT NOT NULL, name CHAR(10) NOT NULL) ENGINE = CSV;

MySQL breaks the query down and figures out that you want to create a new table (thank you, Captain Obvious) - in this case using the CSV engine.

What happens next is that MySQL contacts the CSV engine and calls the create method. Additionally, it passes the parameters from its interpretation, such as table name, column definitions and other settings such as the character encoding.

In the CSV engine's case it creates a file called Example.csv in the MySQL data folder. You can actually find this folder by navigating to your MySQL installation directory (in WAMP on Windows, it's the data folder inside the bundled MySQL version's directory).


Data Structures

Where it starts to get interesting is the many different shapes data can be stored in, for a ton of different purposes and reasons. Let's pretend we build two new storage engines, one for JSON and one for XML, and compare them against the existing CSV engine. The exact same MySQL query will yield three different results.

INSERT INTO csv (id, name, age) VALUES (1, "Mr. John Doe", 43)

For CSV:
1,"Mr. John Doe",43

For JSON:
{
	"id": 1,
	"name": "Mr. John Doe",
	"age": 43
}

For XML:
<row>
	<id>1</id>
	<name>Mr. John Doe</name>
	<age>43</age>
</row>
This must be read, inserted, updated and deleted in different ways - and that's essentially why we have different storage engines.
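A quick Python sketch of that point: the same logical row takes three different shapes depending on the format, so the code that reads and writes it must differ too. (This is just illustrative serialization, not how the real engines work internally.)

```python
import csv
import io
import json

# The same logical row, serialized three different ways.
row = {"id": 1, "name": "Mr. John Doe", "age": 43}

# CSV: positional values, no field names stored in the data itself
buf = io.StringIO()
writer = csv.writer(buf, quoting=csv.QUOTE_NONNUMERIC)  # quote strings
writer.writerow([row["id"], row["name"], row["age"]])
as_csv = buf.getvalue().strip()

# JSON: self-describing key/value pairs
as_json = json.dumps(row)

# XML: the same data wrapped in tags (hand-built here for brevity)
as_xml = "<row><id>1</id><name>Mr. John Doe</name><age>43</age></row>"
```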

Aside from supporting different features like transactions, the gist of this is exactly the same as what happens in InnoDB, MyISAM, MEMORY, etc.

In this particular case, with JSON and XML, a major disadvantage is that we would normally need to read the entire file to properly parse its contents. MySQL's common data storages are designed to avoid this. We'll dig further into that right away!

Table scanning

The process of looking through a table to find what you need is called table scanning.

When programming, there's a chance you've always read files by opening the entire file and putting it in memory. And that's not wrong! Sometimes you need the entire file to properly parse it (for instance JSON) and sometimes the file is so small it just doesn't make much of a difference.

But MySQL allows for enormous file sizes. Terabytes. Petabytes even. So even if you had the time, you wouldn't have the memory to open such a file. So how exactly do we solve this? By using a cursor (or pointer). A cursor is basically a position in a file. If you have a simple CSV:

"JOHN","DOE"
Computers are able to traverse the contents of a file without loading all of it. If we want to fetch "JOHN" from the above, we position the cursor at 1, just after the quotation mark, traverse 4 characters and collect what we find along the way. Now we have "JOHN".

So that's pretty simple, right? Well, no, not always. For one thing, a first name is not always exactly 4 characters long. So we don't know how far we have to traverse. And if we need the last name, we cannot safely assume where to position the cursor to begin with.
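Here's what that cursor dance looks like in Python. io.BytesIO stands in for a file on disk (a real file object behaves the same way); note it only works because we happen to know the offset and length up front:

```python
import io

f = io.BytesIO(b'"JOHN","DOE"')    # stand-in for a file on disk

f.seek(1)                          # place the cursor after the opening quote
first_name = f.read(4).decode()    # traverse 4 bytes, collecting them
```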

How Indexing Works

When you buy a book on mathematics (which we know happens often, right?), you have two ways to find out on which pages differential equations are covered. You can flip through each page of the book until you find it. But the thing is: You can't stop when you reach the first page on the subject, because there's very likely more later on. To be absolutely certain that you've found all occurrences of differential equations, you've got to flip from page 1 through the entire book.

An alternative is looking up in the index, in the back of the book. Databases have a similar tool at their disposal.

Indexing works in different ways and implementation isn't necessarily identical between two storage engines. In general, however, we distinguish between two types: B-Tree and Hash.

The basic difference is that Hash is relatively fast, but it's unable to do comparisons such as greater than or less than. B-Tree is a sorted index with a tree structure. Imagine New York's grid system with its neatly numbered avenues and streets. If you're at the corner of 1st and 14th and have to get to the corner of 5th and 34th, you navigate quickly by following the street numbers. This is somewhat the same as what a B-Tree does when it traverses the index structure to find what you need. When you reach the corner of 2nd and 22nd, you do a quick comparison, and move on.
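A toy contrast in Python: a dict is a genuine hash table (instant equality lookups, no notion of order), while a sorted list with binary search stands in for the ordered traversal a B-Tree gives you. The column and row names here are made up:

```python
import bisect

# A sorted "index" on an imaginary age column, plus a hash index
# mapping each age to the row that holds it.
ages = [12, 18, 25, 31, 43, 67]
hash_index = {age: f"row {i}" for i, age in enumerate(ages)}

exact = hash_index[31]             # O(1) equality lookup

# Range query "age > 25": binary-search the sorted structure, then walk
# rightwards - the same idea as descending a B-Tree and scanning on.
start = bisect.bisect_right(ages, 25)
greater_than_25 = ages[start:]
```

The hash index simply has no way to answer the range query; you'd be back to scanning everything.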

Char and Varchar

Let's return to discuss our problem with moving the file cursor efficiently through a file, while scanning a table. One attempt at solving the issue is storing data in fixed sizes. However, this comes at a cost: More storage is required. Let's take a look.

Pretend we have a table of people. For simplicity we store first name, last name and email. Now let's explore how the table looks depending on whether you choose CHAR or VARCHAR. Note that this is dummy code; it's not exactly how it looks in MySQL.

In InnoDB and MyISAM, CHAR is stored in exactly the amount of bytes it is long. So CHAR(10) always stores 10 bytes of data. If you provide a text with 7 characters, MySQL adds 3 trailing whitespace characters to fill up the space. These spaces are trimmed upon retrieval (unless you instruct MySQL differently). In our example we store everything in CHAR(10) (to make the example simpler).

Let's pretend we write a storage engine for MySQL, then CHAR-based data could look like this:

JOHN      DOE       [email protected]
MARK      JENSEN    [email protected]
BARACK    OBAMA     [email protected]

Because of the schema and column definitions, MySQL knows that the first 10 characters belong to the first name, the next 10 to the last name and the last to the email.
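In Python, reading such a fixed-width row is pure offset arithmetic; no scanning required. (Toy code, not MySQL's real row format; "a@b.com" is a made-up placeholder short enough to fit in 10 characters.)

```python
# Parsing a CHAR(10)-style fixed-width row: every field begins at a
# known offset, so the cursor can jump straight to it.
WIDTH = 10
row = "JOHN      DOE       a@b.com   "

first_name = row[0:WIDTH].rstrip()           # characters 0-9
last_name = row[WIDTH:2 * WIDTH].rstrip()    # characters 10-19
email = row[2 * WIDTH:3 * WIDTH].rstrip()    # characters 20-29
```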

VARCHAR(10) is actually up to 11 bytes long, because a byte is added at the beginning of the string, which contains the length of the data. VARCHAR is still useful, though, because even though it's defined as VARCHAR(10), it will only occupy 5 bytes for a 4 byte long string of data, whereas CHAR(10) will consequently store 10 bytes.

The byte containing the length of the data helps the file cursor when it traverses the table while scanning. For one thing, if we don't need that particular chunk of data, we know how far we can skip the cursor ahead. But it also assists in understanding where a cell of VARCHAR starts and ends.

In our fictitious storage engine, VARCHAR-stored data could look like this:

4JOHN3DOE17[email protected]
4MARK6JENSEN17[email protected]
6BARACK5OBAMA17[email protected]

Important note: The numbers indicating the data length would not actually be stored as printable digit characters as seen above. Instead we would store the equivalent value as a raw byte. So this is merely to simplify the example.

This consumes less space, but it's not as easy to scan, because we always have to run through the row to know how long it is.

So why don't we just jump from line to line in the file to find the next row? Remember that a new line is also a character (or two, depending on the OS) in a file. So the file scanner isn't aware of where the new lines are. It'll still have to scan the file to find where new lines begin.
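The same trade-off in a toy Python parser: each field carries a one-byte length prefix, so we can jump the cursor past a field in one hop, but we can never know where a row ends without reading the prefixes along the way. (An invented format, not MySQL's actual on-disk layout.)

```python
# A length-prefixed (VARCHAR-style) row: one byte per field holds the
# field's length, followed by the field's bytes.
row = bytes([4]) + b"JOHN" + bytes([3]) + b"DOE"

fields = []
pos = 0
while pos < len(row):
    length = row[pos]                  # read the length byte
    pos += 1
    fields.append(row[pos:pos + length].decode())
    pos += length                      # jump the cursor past this field
```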

This leads us back to the starting point: If indexing and scan speed are very important in your case, you can use CHAR. If storage is more important, you can use VARCHAR. But remember that a combination of fixed-size and variable-size columns is actually counter-productive: when you include just a single variable-size column, the storage engine loses the ability to assume the length of the rows when it has to move the file cursor around.

Data types between storage engines

Data types work differently across storage engines. Some types are designed with a specific purpose, and others are shaped by their storage medium.

You probably know that VARCHAR is stored in a variable size (as opposed to CHAR which stores at a fixed size). At least that's how we normally think of them, but it's not always true.

In MyISAM and InnoDB they work as we expect. If you define the length of a CHAR column as 10 characters, the MyISAM/InnoDB storage engines will pad the data with spaces, or cap it, to ensure it's always exactly 10 characters long. But in CSV, for instance, CHAR is stored with a dynamic size and does not get the trailing spaces.

If we move our eyes to the MEMORY engine, it actually stores the otherwise dynamic-length VARCHAR in fixed sizes, because of how data is allocated in RAM.

Final remarks

It has been an absolute joy to write this article - and this is just the tip of the iceberg! There's so much more to explore about indexing, data types and storage engines. But hopefully you've learnt something today you didn't already know.

But most importantly: What we have explored today is, in many use cases, bordering on micro-optimization. There are definitely projects where this knowledge is very relevant to performance, but a small website is not tanked by choosing CHAR over VARCHAR, or InnoDB over MyISAM. The intent of this blog post was to point some people to knowledge they maybe didn't even know they could dig into.