|How can I create incremental backups of MySQL databases|
One hacky way: once you have dumped the full database to a file, diff it
against the weekly full backup and store only the patch on disk. When you
want to retrieve an hourly backup, just apply its patch to the weekly
backup to get the full dump back.
# take the hourly dump
mysqldump -u $USERNAME --password=$PASSWORD -h $HOSTNAME -e --opt \
    --skip-lock-tables --skip-extended-insert -c $DATABASE > hourlyFile
# store only the difference against the weekly full dump
diff weeklyFile hourlyFile > hourlyFile.patch
# to restore, start from the weekly dump and apply the patch
cp weeklyFile hourlyFile
patch hourlyFile < hourlyFile.patch
I am not really sure what kind of output mysqldump gives; if it's plain
text, the above will work. Otherwise, bsdiff may help you here.
|How to most easily make never-ending incremental offline backups|
If I were in your shoes I would buy an external hard drive that is large
enough to hold all your data.
Then write a Bash script (a minimal sketch follows below) that would:
Mount the external hard drive
Execute rsync to back up everything that has changed
Unmount the external hard drive
Send me a message (e-mail or whatever) letting me know the backup is complete
So you'd plug in your external drive, execute the Bash script and then
return the external hard drive to a safe deposit box at a bank (or other
similarly secure location).
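A minimal sketch of such a script, assuming the drive shows up as /dev/sdb1,
you back up /home, and mail(1) is available for notification (all of these
are placeholders):

#!/usr/bin/env bash
set -e                                       # stop at the first failure

mount /dev/sdb1 /mnt/backup                  # 1. mount the external drive
rsync -a --delete /home/ /mnt/backup/home/   # 2. copy everything that changed
umount /mnt/backup                           # 3. unmount so the drive can be removed
echo "Backup finished at $(date)" |
    mail -s "Backup complete" me@example.com # 4. send a notification

The -a flag preserves permissions and timestamps; --delete makes the copy a
true mirror, so think twice if you also want to keep deleted files around.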
|Issue pulling SNAPSHOT dependencies from Archiva snapshot repository|
You are probably not activating the profile correctly.
In the profile in settings.xml, put an activation block, something like
this sketch:
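<activation>
    <activeByDefault>true</activeByDefault>
</activation>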
Remember this about activeByDefault:
This profile will automatically be active for all builds unless
another profile in the same POM is activated using one of the
previously described methods. All profiles that are active by default
are automatically deactivated when a profile in the POM is activated
on the command line or through its activation config.
If you want to confirm whether this is the issue, look at the active
profiles by running mvn help:active-profiles.
|Can't style Snapshot theme from WooThemes's child theme|
Answer from an anonymous user, found in the edit review queue:
I don't know why, but it seems that in this theme you can't add styles in
the child theme's style.css file.
I've tried to make some changes in header.php, using one of the
answers from this forum:
<link rel="stylesheet" type="text/css" href="<?php echo … ?>">
But it completely screwed up the layout.
So I realized, using Firebug, that the stylesheet that is loaded from the
child theme is the file "custom.css".
So that is the answer: using the child theme file "custom.css", you can
add styles to your child theme.
|Is there any way to change mywebapp-1.0-SNAPSHOT-classes.jar from attachClasses configuration in maven-war-plugin to mywebapp-1.0-SNAPSHOT.jar?|
The closest approach to what you want is the classesClassifier
configuration parameter, but this approach will always name the attached
jar your_jar-classifier.jar, and if you create an empty or whitespace-only
tag, the classifier will default to -classes.
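For illustration, a sketch of that configuration (the classifier value is a
placeholder):

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-war-plugin</artifactId>
    <configuration>
        <attachClasses>true</attachClasses>
        <classesClassifier>someClassifier</classesClassifier>
    </configuration>
</plugin>

This produces mywebapp-1.0-SNAPSHOT-someClassifier.jar; there is no value
that drops the classifier entirely.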
|How to get to last NTFS MFT record?|
The MFT location isn't always fixed on the volume. You should get the
starting MFT offset from the boot sector (sector 0 of the volume; you can
find the structure online).
The first file in the MFT is the "$MFT" file, which is the file record for
the entire MFT itself. You can parse the attributes of this file like any
other file and get its data run list. When you know the size of each
fragment in clusters, parse the last cluster of the last fragment for each
1024-byte record (although I believe a fragmented MFT is rare). The last
record in the MFT is the last record in that particular cluster marked
"FILE0"; if you encounter a null magic number, you have gone 1024 bytes too
far.
Or you can just get the file size from its attributes and calculate the
offset to the end of the MFT based on that.
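If you want to poke at those boot-sector fields by hand, here is a rough
sketch using standard Unix tools (the device name is a placeholder; the
offsets are from the documented NTFS boot-sector layout):

#!/usr/bin/env bash
DEV=/dev/sdX1    # hypothetical NTFS volume; read it unmounted, as root

read_le() {  # print the little-endian unsigned integer at offset $1, length $2
    dd if="$DEV" bs=1 skip="$1" count="$2" 2>/dev/null |
        od -An -tu1 |
        awk '{ for (i = NF; i >= 1; i--) v = v * 256 + $i } END { print v }'
}

bps=$(read_le 11 2)        # bytes per sector      (offset 0x0B)
spc=$(read_le 13 1)        # sectors per cluster   (offset 0x0D)
mft_lcn=$(read_le 48 8)    # $MFT starting cluster (offset 0x30)

echo "MFT starts at byte offset $((mft_lcn * spc * bps))"

From that offset you would read the $MFT record itself and walk its data
runs as described above.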
|NTFS vs. File Share|
Short answer: No.
In Windows each file and directory has an ACL controlling access to it.
Each file share also has an ACL controlling access to the share.
When you access a remote file through a share you are doing so using the
credentials used to log in to the local computer. (You can connect using
different credentials by entering a username/password when connecting.)
The remote computer tests the supplied credentials against the ACL on the
share.
Once you are past that, every file you attempt to access on the remote
machine through this connection will be checked using your credentials
against the ACL on the file and the share. This allows a file share to
offer more restricted access to some files than if the same user were to
attempt to access them locally. (So you could share files with more
restrictive permissions than the local NTFS ACLs alone would impose.)
|NTFS Journal USN_REASON_HARD_LINK_CHANGE event|
As always with the USN, I expect you'll need to go through a bit of trial
and error to get it to work right. These observations/guesses may, I hope,
be of some help.
When the last hard link to a file is deleted, the file is deleted; so if
the last hard link has been removed you should see USN_REASON_FILE_DELETE
instead of USN_REASON_HARD_LINK_CHANGE. I believe that each reference
number refers to a file (or directory, but NTFS doesn't support multiple
hard links to directories AFAIK) rather than to a hard link. So
immediately after the event is recorded, at least, the file reference
number should still be valid, and point to another name for the file.
If the file still exists, you can look it up by reference number and use
FindFirstFileNameW and friends to find the current links. Comparing those
names with the name in the journal record should tell you which link
changed.
|NTFS - file record size|
It's not actually that much of a waste. You should try to look at what
happens when the number of attributes stored in the file record exceeds
1 KB (by adding additional file names, streams, etc.). It is not clear (to
me at least), for different versions of NTFS, whether the additional
attributes are stored in the data section of the volume or in another file
record.
In previous versions of NTFS the size of an MFT file record was equal to
the size of a cluster (generally 4 KB), which was a waste of space, since
sometimes all the attributes would take less than 1 KB. Since NT 5.0 (I may
be wrong), after some research, Microsoft decided that all MFT file records
should be 1 KB. So, one reason for storing that number may be backwards
compatibility: imagine you found an old hard drive which still used
cluster-sized records.
|What is the difference between HDFS and NTFS and FAT32?|
... Because NTFS and FAT aren't distributed. The advantage of HDFS is that
it is replicated across many machines. See the HDFS Introduction.
|Viewing Ciphertext of Encrypted File on NTFS (EFS)|
The way you open an encrypted file in order to read its raw encrypted
contents (e.g. for a backup/restore application) is to use the raw EFS
functions OpenEncryptedFileRaw, ReadEncryptedFileRaw and
CloseEncryptedFileRaw:
Writing the code on the fly, in a hypothetical hybrid language:
void ExportEncryptedFileToStream(String filename, Stream targetStream)
{
   res = OpenEncryptedFileRaw("C:\Users\Ian\wallet.dat", 0, ref context);
   if (res != ERROR_SUCCESS)
      throw Win32Exception(res);
   res = ReadEncryptedFileRaw(exportCallback, null, context);
   if (res != ERROR_SUCCESS)
      throw Win32Exception(res);
   CloseEncryptedFileRaw(context);
}

function ExportCallback(pbData: PBYTE, pvCallbackContext: PVOID, ulLength: ULONG): DWORD
|access to ntfs stream for a very long filename fails|
As the very helpful page on CreateFile says, referring to the lpFileName
parameter which specifies the filename:
In the ANSI version of this function, the name is limited to MAX_PATH
characters. To extend this limit to 32,767 wide characters, call the
Unicode version of the function and prepend "\\?\" to the path.
Since you are contemplating BackupRead, obviously you are wanting to access
this stream programmatically. If so, test things programmatically. Trying
all these operations from the command prompt is a crap-shoot and will not
establish anything more than the ability to perform such operations from
the command prompt.
With that in mind, let's try this simple program - boilerplate code
omitted:
int APIENTRY _tWinMain(HINSTANCE, HINSTANCE, LPTSTR, int)
|How does NTFS handle the conflict of short file names?|
NTFS won't create two short names like that. The first will be THISIS~1.txt
and the second will be THISIS~2.txt. For example, open a command prompt and
from the root of C: drive type
C:\>dir prog* /x /ad<Enter>
On a Windows 7 64-bit system you will see output similar to this
03/28/2013 12:24 PM <DIR> PROGRA~1 Program Files
07/31/2013 11:09 AM <DIR> PROGRA~2 Program Files
12/10/2012 05:30 PM <DIR> PROGRA~3 ProgramData
|NTFS sparse file data runs ($UsnJrnl)|
No, it means that $UsnJrnl occupies 2576 clusters on disk. Sparse clusters
don't occupy any space on disk; if you try to read a sparse cluster, e.g.
cluster 10 in your example, NTFS just returns zeros.
Generally, you can't determine the start and end cluster of the file, since
files can be fragmented - your example says that the first 1408 clusters
are not allocated on disk at all, then 128 clusters of that file occupy
disk clusters 510119 - 510247, then 2448 clusters of the file occupy disk
clusters 256 - 2704; so in this case you can't say that the file begins at
cluster X (on disk) and ends at cluster Y (on disk) - that's possible only
if the file is not fragmented (when it uses only one cluster run).
|Mongodb EC2 EBS Backups|
Since you are using journaling, you can just run the snapshot without
taking the DB down. This will be fine as long as the journal files are on
the same EBS volume, which they would be unless you symlink them elsewhere.
We run a lot of mongodb servers on Amazon and this is how we do it too.
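For example, with the AWS CLI a snapshot can be kicked off like this (the
volume id is a placeholder):

aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
    --description "mongodb backup $(date +%F)"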
|How to make Gradle stop trying to chmod MANIFEST.MF on an NTFS drive|
Gradle is trying to set default permissions for that file, and I can't see
a way to stop it from doing that. (You could make it set different
permissions, but I guess that won't help.) Under Windows/NTFS this normally
works just fine, so it might be a problem with your Linux NTFS driver or
its mount options.
|How to make automated S3 Backups|
Quote from Amazon S3 FAQ about durability:
Amazon S3 is designed to provide 99.999999999% durability of objects over
a given year. This durability level corresponds to an average annual
expected loss of 0.000000001% of objects. For example, if you store 10,000
objects with Amazon S3, you can on average expect to incur a loss of a
single object once every 10,000,000 years
What these numbers mean, first of all, is that this durability is almost
unbeatable. In other words, your data is safe in Amazon S3.
Thus, the only reason you would need to back up your data objects is to
prevent their accidental loss (by your own mistake). To solve this problem,
Amazon S3 supports versioning of S3 objects. Enable this feature on your S3
bucket and you're safe.
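With the AWS CLI, enabling versioning is a one-liner (the bucket name is a
placeholder):

aws s3api put-bucket-versioning --bucket my-bucket \
    --versioning-configuration Status=Enabled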
P.S. Actually, there is one more possible reason - cost.
|PDO MySQL backups function|
All PDO and ext/mysql do is wrap commands to the underlying database
(MySQL in this case). That is to say, there is nothing stopping PDO
from running SHOW CREATE TABLE or the other commands.
For all intents and purposes you can pretty much just replace:
- $link = mysql_connect($host,$user,$pass);
+ $link = new PDO("mysql:host=$host;dbname=$name", $user, $pass);
And instead of
$result = mysql_query($query);
use
$result = $link->query($query);
|Documents Directory and Backups|
Applications that want to make user data files accessible can do so using
application file sharing. File sharing enables the application to expose
the contents of its /Documents directory to the user through iTunes. The
user can then move files back and forth between the iOS device and a
desktop computer.
To enable file sharing for your application, do the following:
Add the UIFileSharingEnabled key to your application's Info.plist file and
set the value of the key to YES.
When the device is plugged into the user's computer, iTunes displays a File
Sharing section in the Apps tab of the selected device.
The user can add files to this directory or move files to the desktop.
|Create a symbolic link (or other NTFS reparse point) in Windows Driver|
There isn't a direct API to create reparse points.
You need to use ZwFsControlFile() to send the FSCTL_SET_REPARSE_POINT
ioctl with the appropriate input buffers.
I don't have an example, though!
|SQL Server 2012 : getting a list of available backups|
It's to do with the way you pass the file path to xp_dirtree; the only way
I could get it working was with a temp table and dynamic SQL, like so:
CREATE PROCEDURE [dbo].[spGetBackUpFiles]
AS
BEGIN
    SET NOCOUNT ON;
    IF OBJECT_ID('tempdb..#table') IS NOT NULL
        DROP TABLE #table;
    CREATE TABLE #table
    (
        [filename] NVARCHAR(MAX),
        depth INT,
        isfile INT  -- xp_dirtree's third column when its @file argument is 1
    );
    DECLARE @backUpPath AS TABLE
    (
        name NVARCHAR(MAX)
    );
    DECLARE @SQL NVARCHAR(MAX);
    INSERT INTO @backUpPath
|Restoring differential backups from SQL Server|
You should be setting the database to single user, if there is the
potential for users to be connected (and if it is okay to kill their
sessions). Since you are restoring the database, I would think that it is
okay. You will need to specify the replace option on the full backup
restoration. Also you only need to restore the latest differential backup
(using with recovery), as it will contain all the changes since the last
full backup. It should also be noted that this restore will not include
transactions that have occurred since the last differential backup.
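A sketch of that restore sequence driven through sqlcmd (server, database
and backup paths are placeholders):

sqlcmd -S myserver -Q "ALTER DATABASE MyDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE"
sqlcmd -S myserver -Q "RESTORE DATABASE MyDb FROM DISK = N'C:\bak\full.bak' WITH REPLACE, NORECOVERY"
sqlcmd -S myserver -Q "RESTORE DATABASE MyDb FROM DISK = N'C:\bak\diff.bak' WITH RECOVERY"
sqlcmd -S myserver -Q "ALTER DATABASE MyDb SET MULTI_USER"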
I would think about using log backups every 3 hours instead. This will have
less impact on your system and will allow the log to truncate (if you are
in full recovery). In this restore scenario, you will need the full backup
and all the log backups taken since.
|Automating Backups using Event scheduler|
Multiple statements have to be put between BEGIN and END. Also, you have to
change the delimiter, or else MySQL thinks that the event-creation
statement is finished at the first ;. And lastly, it's DEALLOCATE
PREPARE ..., not DROP PREPARE ...:
DELIMITER $$
CREATE DEFINER=`root`@`localhost` EVENT `Backup`
ON SCHEDULE EVERY 1 WEEK
STARTS '2013-06-14 18:19:02' ON COMPLETION NOT PRESERVE ENABLE
DO BEGIN
  -- the ".csv" suffix is an assumption; the original is cut off mid-string
  SET @sql_text = CONCAT("SELECT * FROM BonInterne INTO OUTFILE '/home/aimad/GestionStock/", DATE_FORMAT(NOW(), '%Y%m%d'), ".csv'");
  PREPARE s1 FROM @sql_text;
  EXECUTE s1;
  DEALLOCATE PREPARE s1;
END $$
DELIMITER ;
|Windows: Avoid or Disable Backups on Files|
In Windows Phone 8, backup and restore settings are controlled by the user
through system settings. An app cannot prevent itself from being backed up.
However, note that the backup does not store any data associated with
third-party apps, but rather only stores a list of installed apps.
So basically you don't need to do anything in your app to prevent local
files from being stored on SkyDrive if the user has enabled backup.
In Windows 8, everything can be backed up, since an admin user will have
full access to his computer's files; I don't think you can restrict this.
If you have sensitive data, you can use DataProtectionProvider to protect
it.
|restore physical backups for mysql using xtrabackup|
Check /etc/mysql/my.cnf and look for the line
innodb_log_file_size = 5M
and change it to
innodb_log_file_size = 1000M
because 1048576000/1024/1024 = 1000, and that is the log file size the
InnoDB engine is expecting.
|automatic backups of Azure blob storage?|
Azure keeps 3 redundant copies of your data in different locations in the
same data centre where your data is hosted (to guard against hardware
failure). This applies to blob, table and queue storage.
Additionally, you can enable geo-replication on all of your storage. Azure
will automatically keep redundant copies of your data in separate data
centres. This guards against anything happening to the data centre itself.
|sql server 2012 mirroring setup after multiple backups|
It's because of the log chain. Mirroring is kind of like restoring
transaction log backups to the other server, but automatically. For it to
work, you need an unbroken log chain from a full backup to the last t-log
backup, so the log chain will look like this (with nice sequential LSNs):
Full-1, Log-A, Log-B, Log-C, Full-2, Log-D, Log-E, Log-F
So in the example above, if you restored the Full-1 backup, you can restore
log backups A, B, C but not D, E, F. You can only restore those if you
restore Full-2.
In mirroring, you take a full backup of the DB and then restore it; SQL
Server then looks at the Log Sequence Numbers (LSNs) and transfers
transactions that aren't present in the restored mirror database. If you
take another full backup, you break the chain of sequential LSNs.
In your case, initialize the mirror from the most recent full backup (plus
any log backups taken after it).
|Best Practice for running hourly backups on an SQL Azure Database?|
"Best Way" is going to get this closed as opinionative but I can tell you
that we use the I/E Services and dump a bacpac file and it works well.
Backups are stored in Blob Storage and are easily accessed.
It's also easy to use the Import/Export data-tier functionality in SQL
Server to pull a bacpac down and import it directly into your local/dev sql
Process is completely automated - we've a nightly job that does it and the
bacpac is just there each morning.
You won't have to write code - have a look here -
It's very easy to recover - bacpacs are stored in Blob Storage and can be
restored to local via Import Export Data-Tier or restored to Azure via the
|windows azure virtual machine hard disk backups|
Azure attached disks, just like the OS disk, are stored as a vhd inside a
blob in Azure Storage. This is durable storage: triple-replicated within
the data center, and optionally geo-replicated to a neighboring data
center.
That said: if you delete something, it's instantly deleted. No going back.
So... then the question is how to deal with backups from an app-level
perspective. To that end:
You can make snapshots of the blob storing the vhd. A snapshot is basically
a linked list of all the pages in use. In the event you make changes, then
you start incurring additional storage, as additional page blobs are
allocated. The neat thing is: you can take a snapshot, then in the future,
copy the snapshot to its own vhd. Basically it's like rolling backups with
little space used, in the event you need to roll back.
|cygwin rsync all log locations|
You asked if "building file list..." happens in memory, or is stored
somewhere. Let's take a look at rsync's sources, namely, flist.c:
2089         rprintf(FLOG, "building file list\n");
2090         if (show_filelist_p())
2091                 start_filelist_progress("building file list");
2092         else if (inc_recurse && INFO_GTE(FLIST, 1) && !am_server)
2093                 rprintf(FCLIENT, "sending incremental file list\n");
2095         start_write = stats.total_written;
2096         gettimeofday(&start_tv, NULL);
2098         if (relative_paths && protocol_version >= 30)
2099                 implied_dirs = 1;       /* We send flagged implied dirs */
2101 #ifdef SUPPORT_HARD_LINKS
2102         if (preserve_hard_links && protocol_ver
|Enabling rsync with ssh keygen and no password|
The link that you gave us is right, but there is something that they miss.
On the backup server you must change the file /etc/ssh/sshd_config and
uncomment the line that enables public-key authentication,
and your no-password access should be working. In summary:
Client (where you have your original files)
In a terminal write:
$ ssh-keygen -t rsa
This creates the id_rsa.pub file in /home/USER/.ssh/
Server (where you will back up your files)
Modify the sshd_config file and uncomment the same line as above.
Now just copy the content of .ssh/id_rsa.pub (client) to the end of
.ssh/authorized_keys (server) and the no-password login will be working (to
connect from client to server). You may need to restart your SSH server,
e.g. with service ssh restart.
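A minimal end-to-end sketch (host and paths are placeholders; ssh-copy-id
does the authorized_keys copying for you):

ssh-keygen -t rsa                                    # accept defaults
ssh-copy-id user@backup-server                       # appends id_rsa.pub to the server's authorized_keys
rsync -av /data/ user@backup-server:/backups/data/   # should now run without a password prompt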
|Adding arguments to options in Rsync|
Are you sure you're exercising the modified rsync?
The man page for popt suggests POPT_ARG_INT:
Value            Description                       arg Type
POPT_ARG_NONE    No argument expected              int
POPT_ARG_STRING  No type checking to be performed  char *
POPT_ARG_INT     An integer argument is expected   int
POPT_ARG_LONG    A long integer is expected        long
POPT_ARG_VAL     Integer value taken from CWval    int
POPT_ARG_FLOAT   A float argument is expected      float
POPT_ARG_DOUBLE  A double argument is expected     double
The man page linked to has only one reference to CWval with no explanation
of what it actually means.
|Rsync and ssh on android: No such file or directory|
I'm not sure, but maybe the problem is that the destination path
(firstname.lastname@example.org:backup/) is not absolute?
Also, if you want to sync your files on the same device, maybe you should
try not using ssh, and do something like this:
rsync -rvz /mnt/sdcard/some_directory /backup
|boto-rsync multiple credentials|
I don't know a lot about boto-rsync, but I know it uses boto under the
hood, and boto supports a BOTO_CONFIG environment variable that can be used
to point to your boto configuration file. So you could have two config
files, one with your AWS credentials and one with your DreamHost
credentials, and then set the BOTO_CONFIG environment variable to point to
the appropriate config file when starting up.
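For example (the config paths and buckets are placeholders; DreamObjects is
S3-compatible, so it also takes an s3:// URI):

BOTO_CONFIG=~/.boto_aws       boto-rsync /local/dir s3://my-aws-bucket/backup/
BOTO_CONFIG=~/.boto_dreamhost boto-rsync /local/dir s3://my-dreamhost-bucket/backup/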
|How to execute bash script after rsync|
You can add a command after the ssh command to execute it instead of
starting a shell.
Add the following after the rsync command:
sshpass -p "$password" ssh $host "cd $dir && ./after_deploy.sh"
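Putting it together, the whole deploy step might look like this (the rsync
flags and paths are assumptions; the variables are the ones from the
question):

sshpass -p "$password" rsync -avz ./build/ "$host:$dir/"
sshpass -p "$password" ssh "$host" "cd $dir && ./after_deploy.sh"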
|Vagrant Rsync Error before provisioning|
Most likely you are running into the known vagrant-aws issue #72: Failing
with EC2 Amazon Linux Images.
Edit 3 (Feb 2014): Vagrant 1.4.0 (released Dec 2013) and later versions now
support the boolean configuration parameter config.ssh.pty. Set the
parameter to true to force Vagrant to use a PTY for provisioning. Vagrant
creator Mitchell Hashimoto points out that you must not set config.ssh.pty
on the global config, you must set it on the node config directly.
This new setting should fix the problem, and you shouldn't need the
workarounds listed below anymore. (But note that I haven't tested it myself
yet.) See Vagrant's CHANGELOG for details -- unfortunately the
config.ssh.pty option is not yet documented under SSH Settings in the
Vagrant docs.
Edit 2: Bad news. It looks as if even...
|Cross-compiling rsync on OS X 10.8 (64bit) to 10.7 (32bit)|
Just found this question: What is the "Illegal Instruction: 4"
error and why does "-mmacosx-version-min=10.x" fix it?
export CFLAGS="-arch i386 -mmacosx-version-min=10.7"
|How to use the following piece of rsync command in subprocess call?|
You need to invoke the shell with shell=True. However, the documentation
discourages using shell=True.
In order to avoid shell=True, you can first create the rsync process and
pipe its output to grep:
import subprocess

rsync_out = subprocess.Popen(['sshpass', '-p', password, 'rsync',
                              '--recursive', source],
                             stdout=subprocess.PIPE)
output = subprocess.check_output(('grep', '.'), stdin=rsync_out.stdout)
|rsync -z with remote share mounted locally|
It won't save you anything. To compress the file, rsync needs to read its
contents (in blocks) and then compress them. Since reading the blocks
happens over the wire, pre-compression, you save no bandwidth and gain a
bit of overhead from the compression itself.
|Rsync with delete option doesn't seem to work|
Assuming you have a recent version of rsync that works the same as on my
machine, your command should work. Here's a test example:
> mkdir -p site/a
> touch site/a/b.txt
> rsync -Cr --delete site/ site2
> find site2
> rm site/a/b.txt
> rsync -Cr --delete site/ site2
> find site2
I suggest you verify your setup a bit more. You might want to test with the
-v and -n (dry-run) flags to see what rsync decides to do.