Hi Davi,
There are lots of ways to encrypt data on a computer, and many of them are secure to the extent that, unless someone finds a flaw in the software, trying to break in electronically isn’t really a feasible way of gaining access. (hence recent requests by various governments for back-door access to various end-to-end encrypted platforms)
So as long as you are careful with your encryption keys / passwords, it’s difficult or maybe even impossible to differentiate between secure, very secure and very very secure.
I don’t know anything about Veracrypt, however I do use transparent volume-level encryption for protecting data I carry around. One mechanism, as Keith mentions, is eCryptfs, which I think is the software Ubuntu used (or still uses?) to provide encrypted home folders. The principle is that your home folder is stored encrypted on disk, protected by (hopefully) a very long password. When you log in, it is mounted on /home/username and all reads and writes go through the eCryptfs software, which encrypts and decrypts them on the fly. It slows down disk access to some extent, but it’s not generally noticeable. When the machine is powered down (or the user logs out), all that’s visible is data that just looks like garbage.
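If you want to experiment with eCryptfs by hand rather than via the installer, the basic shape is something like this (the paths here are purely illustrative):

    # install the userspace tools (Debian/Ubuntu package name)
    sudo apt install ecryptfs-utils

    # mount a directory through eCryptfs; you'll be prompted for a
    # passphrase and cipher options, and anything written via /secret
    # ends up encrypted in the underlying directory
    sudo mkdir -p /secret
    sudo mount -t ecryptfs /secret /secret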
I’ve not tried it recently, but the Ubuntu installer used to offer a tick box with something like “Encrypt home directory?” as an option; if it’s still there this is probably the easiest way to get set up, and it’s transparent to you as the user. (so long as you don’t lose your password) From memory it uses your login password as the eCryptfs encryption password, and I think when you change your login password it effectively changes the eCryptfs password too.
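If the installer option has disappeared, the same package can encrypt an existing home directory. I believe the command is along these lines, though read the man page and take a backup first, as it rewrites the user’s home in place (the username is just a placeholder):

    # migrate an existing user's home directory onto eCryptfs
    # (run from another account while that user is logged out)
    sudo ecryptfs-migrate-home -u davi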
I would recommend at least 16 characters for modern encryption passwords, longer if possible. A combination of words totalling > 30 characters is great.
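One easy way to generate that sort of passphrase is to pull random words from a dictionary file; this assumes a wordlist at /usr/share/dict/words (the “wamerican” package or similar):

    # four random dictionary words joined with dashes
    shuf -n 4 /usr/share/dict/words | paste -sd '-'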
Note however that I don’t use this approach any more, for a few reasons:
- afaik eCryptfs only supports regular files, so it’s not a proper filesystem, which can be problematic if you’re a developer and want to create non-file devices, or indeed files you don’t want encrypted for performance reasons (non-sensitive databases etc.)
- It’s a third-party approach and can present integration issues with your desktop software
- It always felt like a bit of a, well, I don’t want to say hack, as it strikes me as good software, but it felt like something that should be part of the filesystem itself
What I do use now …
All my machines, servers and workstations, use ZFS. As a filesystem this has many features, including the option to transparently encrypt any mounted volume. Essentially you give it a password when you mount the volume, pretty much like eCryptfs, except that it’s an integral part of a real filesystem and, as I understand it, somewhat more efficient than the alternatives.
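For the curious, creating an encrypted dataset looks roughly like this; the pool and dataset names are just examples, and note that native encryption needs a reasonably recent OpenZFS (0.8 or later):

    # create a dataset encrypted with a passphrase
    sudo zfs create -o encryption=on \
                    -o keyformat=passphrase \
                    -o keylocation=prompt \
                    tank/private

    # after a reboot, load the key and mount it again
    sudo zfs load-key tank/private
    sudo zfs mount tank/private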
Other reasons to use ZFS:
- Many options for RAID
- Many options for transparent compression
- Instant snapshots of any volume at any time (see the example after this list)
- An incredible snapshot based system for replicating data between systems
- Lots of stuff under the hood you just never think about
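Snapshots in particular are a one-liner and effectively instant; the dataset and snapshot names below are illustrative:

    # take a snapshot, list snapshots, roll back if needs be
    sudo zfs snapshot tank/private@before-upgrade
    zfs list -t snapshot
    sudo zfs rollback tank/private@before-upgrade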
CKSUM
Here’s one of those things where ZFS (fairly) recently saved me from a major disaster. When ZFS writes a block to disk, it calculates a checksum for that block and saves the checksum as well as the block. When it reads the block back in from disk, it re-calculates the checksum and makes sure it matches the checksum it stored when it wrote the data. This is primarily done for RAID setups, to detect failing disks and give the system a chance to re-read the block correctly off another disk.
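You can see the per-device error counters, and force a verification pass over every block in the pool, from the command line; “tank” is just an example pool name:

    # show per-device READ / WRITE / CKSUM error counters
    zpool status tank

    # read every block in the pool and verify it against its checksum
    sudo zpool scrub tank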
Why on earth would you do this?
If you read a block from disk using a normal filesystem, it will either work or it won’t. If it works, the filesystem will use the data, assuming it is “ok”. If you have a situation where the disk is reading and returning blocks, reporting them as “OK”, but actually returning corrupted data, then not only is this likely to eventually knock your system over, in the meantime it could literally corrupt anything. It does kinda make you think: if it’s possible for read data not to be the same as the data that was written, why on earth don’t all filesystems perform this check?!
Be Afraid! (literally)
Last year I bought what I thought were a bunch of quite reputable SSDs which performed very well, and started rolling them out across my machines. I initially tried one in my own machine for a couple of months (ext4 filesystem); it looked good, so I stuck a couple into servers running ZFS.
All was good for a few weeks, then one of my servers locked up for no apparent reason … until I discovered the ZFS filesystem had gone into read-only mode. As it turns out, it had started seeing CKSUM errors when reading blocks from the disk and had initially survived by re-reading those blocks until it got a “good” read. Unfortunately at the time I wasn’t tracking CKSUM errors across filesystems, so I didn’t see the warning signs; however once the system could no longer cope, it locked down the filesystem so as to avoid any data corruption as a result of the read failures.
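I do track them now; even something as crude as this cron-able sketch would have caught it early (it assumes a working local mail setup):

    #!/bin/sh
    # report any pool that isn't healthy; "zpool status -x" prints
    # "all pools are healthy" when there's nothing to worry about
    STATUS=$(zpool status -x)
    if [ "$STATUS" != "all pools are healthy" ]; then
        echo "$STATUS" | mail -s "ZFS pool problem on $(hostname)" root
    fi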
So yes, this CAN happen, disks CAN fail in this way, and normal filesystems don’t see the fault. ZFS not only sees the fault, it reports it, tries to compensate for it, and in the end literally protects you from data corruption.
ZFS Backups
I run ~ 50 virtual machines across ~ 10 physical machines, all ZFS. The ZFS send/recv mechanism allows me to snapshot each machine and copy incremental backups to a central store, i.e. it just copies the changed blocks. An end-to-end backup of the entire estate generally takes around 90 seconds, and the actual run-time on each machine for a full incremental copy is generally < 1s. This is because the filesystem keeps track of changes at block level and has an incredibly efficient mechanism for packing up those changes and sending them over the network.
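The mechanics of one incremental send look something like this; pool, dataset and host names are all illustrative:

    # snapshot the dataset, then send only the blocks that changed
    # since the previous snapshot to a central store over ssh
    sudo zfs snapshot tank/vm1@today
    sudo zfs send -i tank/vm1@yesterday tank/vm1@today \
        | ssh backup-host sudo zfs recv store/vm1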
It’s quite feasible, for example, to snapshot and back up sensitive machines every minute, so you have a working off-machine copy of all your data that’s never > 60s out of date.
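Wrapped in a small script and driven from cron, that’s all minute-level backups amount to; this is a sketch for a single dataset, not a hardened implementation:

    #!/bin/sh
    # snapshot one dataset and ship the increment to a backup host
    # (dataset and host names are placeholders)
    DS=tank/vm1
    PREV=$(zfs list -H -t snapshot -o name -s creation -d 1 "$DS" | tail -1)
    NOW="$DS@$(date +%Y%m%d-%H%M%S)"
    zfs snapshot "$NOW"
    zfs send -i "$PREV" "$NOW" | ssh backup-host zfs recv -F store/vm1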