Hey everybody 

Nice of you to check out this thread. This is a simple reminder that you shouldn't forget to do a manual backup once in a while. You should still keep your automatic backups, but a manual backup never hurts: you get a better impression of how big your website is file-wise, and of how large your database has grown. This is especially handy right before doing some maintenance.
These commands are how I used them today on our Ubuntu Linux setup; on macOS, Windows, and some other Linux distributions this might differ, but the principle is the same. If you have alternative instructions, please don't hesitate to share them.
The most important data lies in the database itself, so I prefer to at least get a dump of that, even before I run a repair and optimize on any tables. I just want the data as it is on the system right now. I can always do some maintenance on the system afterwards and then do another export to compare the results.
Tip: Once you have a backup, don't forget to test it!
I usually connect with ssh to the system the MySQL database is running on, and you can (should) do this as a regular user; you don't need to be root/sudo for this one. Once connected, I type the mysqldump command with the user and database name, and tell it to output the result to an .sql file, which I then compress with the tar command (using gzip) to get a compressed tarball. That tarball I download offsite and archive properly.
Tip: Make sure your filenames are accurate, and that your archived file is consistent with the rest of the archive and at least mentions what kind of file it is!
Code:
mysqldump -u sqlUsername -p sqlDatabase > backup-sqlDatabase.sql
Tip: Do not include the password in the command; there's no need for it to end up in your bash history.
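If you want to skip the password prompt entirely, one common approach (this is standard MySQL client behavior, but double-check against your MySQL version's docs) is to put the credentials in an option file that only you can read. Here's a minimal sketch, written to a demo path; in real use this would typically be ~/.my.cnf, and the username and password below are of course placeholders:

```shell
# Demo: create a MySQL client option file holding the credentials,
# so -p (and the password prompt) are no longer needed.
# Using a demo path here; in real use this would be ~/.my.cnf.
cnf="./demo-my.cnf"

cat > "$cnf" <<'EOF'
[client]
user=omgforums
password=example-password-here
EOF

# Restrict it to your own user, since it now contains a secret.
chmod 600 "$cnf"

ls -l "$cnf"
```

You could then run something like mysqldump --defaults-extra-file=./demo-my.cnf omgdb > backup.sql without being prompted; as far as I know, --defaults-extra-file has to be the first option on the line.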
Okay, let's walk through that command, and then I will show you what it looks like when you use it.
mysqldump
This is the program we're using to export SQL data.
-u sqlUsername
The -u basically means: use this username.
The sqlUsername basically means: and this is the username that has access to the database we want to do something with.
For example, the omgboards.com database user might be called omgforums, so then this would be: -u omgforums
-p sqlDatabase
Please note! This might be confusing, but the -p is ONE thing, and sqlDatabase is ANOTHER thing. The sqlDatabase is NOT the password.
The -p basically means: ask me for the password when I press enter.
sqlDatabase
The sqlDatabase part refers to the SQL database we want to do something with. Again, it appears right after the -p parameter, but it is not the password.
For example, on omgboards.com we might call our database omgdb, so it would look like: -u omgforums -p omgdb. I hope that makes sense.
> backup-sqlDatabase.sql
The > character basically means: take the output of this command and send it over there (into the file that follows) instead of printing it to the screen.
The backup-sqlDatabase.sql part is the filename we wish to use; this is where we write to. Please note that you don't have to create this file first: it will be created if it doesn't exist.
For example, on omgboards.com we might call this backup file: sqlbackup-omgboards.sql
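If the > redirection is new to you, you can try it with a harmless command first; it works the same way for any program (the filename here is just an example):

```shell
# The > redirection works the same for any command:
# whatever the command would print goes into the file instead.
echo "this would normally appear on screen" > demo-output.txt

# The file was created automatically and holds the output:
cat demo-output.txt
```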
Okay, so please adjust the values to your specific database with its specific user, and press enter. It will ask for a password. The password is not your system password, but the password for the database user you've provided.
For example, this is how it looks:
Code:
userx@omgboards:~/backups$ mysqldump -u omgforums -p omgdb > sqlbackup-omgboards.sql
Enter password:
userx@omgboards:~/backups$ ls -all
drwxr-xr-x 2 userx userx 4096 Mar 14 13:52 .
drwxr-xr-x 7 userx userx 4096 Mar 14 13:44 ..
-rw-rw-r-- 1 userx userx 900M Mar 14 13:52 sqlbackup-omgboards.sql
Okay, let's take a step back. This seems quite confusing. But it really isn't.
Usually you have the database details properly written down (most web apps keep them in a config file, like the config.php of XenForo, the software we're currently using for our website here), so they are a copy/paste away into the command. Basically all you have to do is enter a command to dump the database to a single file, then compress it and download it to your archive.
Let's get to that part, the compressing of the backup and archiving it.
To archive things properly, we have to at least zip up the .sql dump. Why? Because it's usually just text. Most database .sql dumps consist of threads, posts, user data, forum structures, etc. This is all text, which compresses quite nicely. Another thing we also want to do is create a tarball (tar is an archiving utility which also supports compression).
I personally prefer gzip archives, so I will compress it into .gz, but you can look up the manual for tar and use bzip2 or another compressor.
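To see for yourself how well repetitive text compresses, here's a small sketch that fakes a dump full of INSERT lines and compresses it; the filenames and table layout are made up for the demo:

```shell
# Generate some repetitive text, roughly the way an SQL dump
# repeats INSERT statements over and over.
for i in $(seq 1 5000); do
  echo "INSERT INTO posts VALUES ($i, 'hello from post number $i');"
done > demo-dump.sql

# Compress a copy (gzip -c writes to stdout, leaving the original alone).
gzip -c demo-dump.sql > demo-dump.sql.gz

# Compare the sizes; the .gz should be a small fraction of the original.
ls -l demo-dump.sql demo-dump.sql.gz
```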
The command for our example file, backup-sqlDatabase.sql, would be:
Code:
tar -czf backup-mysql-march-14th-2018-backup-sqlDatabase-export.tar.gz backup-sqlDatabase.sql
Please note: the file we have is "backup-sqlDatabase.sql", so I remove the .sql from it and include the "backup-sqlDatabase" part in the resulting filename. That's the way I do things; it works for me.
Okay, let's walk through that command, tar.
tar
The archiving command we're using to create tarball gz files.
-czf
Three flags telling the tar command we're not extracting, but creating an archive (c), compressing it with gzip (z), and writing it to the file named next (f).
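You can try the same flags on a throwaway file; the extra -t flag used at the end lists an archive's contents without extracting, which is also a quick way to sanity-check a backup (the demo filenames are just examples):

```shell
# Create a small file to archive, then pack it the same way:
# c = create, z = gzip, f = write to the file named next.
echo "sample dump contents" > backup-sqlDatabase.sql

tar -czf demo-backup.tar.gz backup-sqlDatabase.sql

# t = list the archive contents instead of creating or extracting.
tar -tzf demo-backup.tar.gz
```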
backup-mysql-march-14th-2018-backup-sqlDatabase-export.tar.gz
This part is chosen by me; it is the resulting filename we end up with. I strongly recommend ending it with .tar.gz, regardless of what name you give the file.
To break down the filename: I always prefix my archivable exports with backup-, then I add a readable timestamp (you can use whatever you want, of course), then the filename, and since this was an SQL export, I mention that it's an export.
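If you don't want to type the timestamp by hand, the date command can build it for you; the format string here is just my example, use whatever reads well to you:

```shell
# Build the backup filename automatically instead of typing the date.
# %Y-%m-%d gives e.g. 2018-03-14; see `man date` for other formats.
stamp=$(date +%Y-%m-%d)
name="backup-mysql-${stamp}-backup-sqlDatabase-export.tar.gz"

echo "$name"
```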
For example, this is how it looks:
Code:
userx@omgboards:~/backups$ tar -czf backup-mysql-march-14th-2018-backup-sqlDatabase-export.tar.gz backup-sqlDatabase.sql
userx@omgboards:~/backups$ ls -all *.gz
-rw-rw-r-- 1 userx userx 250M Mar 14 13:54 backup-mysql-march-14th-2018-backup-sqlDatabase-export.tar.gz
-rw-r--r-- 1 userx userx 980G Mar 14 13:40 backup-omgboards-com-march-14th-2018-backup-omgWebfiles-export.tar.gz
userx@omgboards:~/backups$
As you can see here, the compressed tarball ends up as a 250 megabyte file. This takes up way less space, it's faster and easier to download, and you can keep more exports in your archive without filling up the drives so quickly.
What you will also spot in that listing is the webfiles export, which was already there. We will get into that in the second post of this thread.

Before we do that, let's explain something and summarize what we've done.
TLDR
Backing up a website is important: make a manual backup and test it. Using the mysqldump command, you can tell the system to dump a specific database to the file system. Before downloading and archiving your backup, you can use the tar command to create a compressed tarball for easy downloading and storing. Being able to quickly dump your database and archive it makes it easier to roll back to previous working versions, clone and test upgrades of software/sites, fix problems, or move to another host.
mysqldump -u [username] -p [database name] > [database name].sql
tar -czf [filename].tar.gz [database name].sql