Channel: Red Gate forums: SQL Backup 7
Viewing all 713 articles

RE: 7.3.0.383 Issue: Refreshing Connection for 20m & counting

Thanks, James. Servers.dat is only 3KB, to complete the file size portfolio.

RE: 7.3.0.383 Issue: Refreshing Connection for 20m & counting

Thanks- that'll be handy for Robin's test cases.

RE: 7.3.0.383 Issue: Refreshing Connection for 20m & counting

I deleted all the recommended files.

Activity History is at 10 minutes and counting...

I can see the Jobs, no problem.

Next steps, please, as this version is definitely broken...

RE: 7.3.0.383 Issue: Refreshing Connection for 20m & counting

Is this in relation to the server where you already removed the data.sdf file? If so, I'm not sure what else to try. It may just be slow because it's rebuilding the cache file, but if it's sticking once again, there's something more fundamental going on and we'll need advice from the dev team on it (you're not the only person seeing this problem; a couple of other users have reported similar issues).

RE: 7.3.0.383 Issue: Refreshing Connection for 20m & counting

Hi James,

I was waiting for advice on the data.sdf file on the SQL Cluster due to the configuration of the SQL Backup Service with respect to the cluster manager.

Apologies for misleading you all; I didn't delete THAT file, just the client files.

RE: 7.3.0.383 Issue: Refreshing Connection for 20m & counting

UPDATE:

I didn't delete the data.sdf files on the stand-alone Dev and QA Servers and the Activity History just came up fine, immediately after adding the servers to the new laptop.

Only the CLUSTER Activity History is now a problem.

RE: 7.3.0.383 Issue: Refreshing Connection for 20m & counting

Ah, okay. If that file is quite large, it'll contain a lot of history, which will take some time to repopulate.

If it turns out that the file does need removing, then the issue is indeed that stopping the service may fail the cluster over, as the backup service is seen as a cluster resource... The installation notes here do seem to recommend a policy of restarting it on the *current* node rather than failing over, so you may be OK. The other option is to see whether you can temporarily remove / ignore the SQL Backup service from the clustering side of things. I'd test this out myself, but unfortunately I don't have a cluster here to try it on. :-(

RE: 7.3.0.383 Issue: Refreshing Connection for 20m & counting

The cluster's data.sdf is an enormous 148KB. ;-)

RE: 7.3.0.383 Issue: Refreshing Connection for 20m & counting

After TWO HOURS the cluster's Activity History is finally back!

It also triggered a "Long Running Query" alert from SQL Monitor - the query ran for 8718.0400001 seconds!!!

RE: 7.3.0.383 Issue: Refreshing Connection for 20m & counting

It also turns out that the server has activity dating back to 9 Oct 2012. The sqbdata process is responsible for the long-running query, so the six full months of activity may be the underlying issue, not the client or server .dat or .sdf files...

A maintenance plan for History Cleanup is now configured and has been executed, and we're down to 30 days of history.
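For anyone else hitting this: the History Cleanup task in a SQL Server maintenance plan ultimately trims msdb's backup/restore history via sp_delete_backuphistory, so the same 30-day trim can be run directly. A minimal T-SQL sketch (the 30-day cutoff is just this thread's example retention):

```sql
-- Trim msdb backup/restore history older than 30 days,
-- equivalent to the maintenance plan's History Cleanup task.
DECLARE @cutoff datetime;
SET @cutoff = DATEADD(DAY, -30, GETDATE());
EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @cutoff;
```

On instances with months of accumulated history this first run can itself be slow, so scheduling it regularly keeps each run small.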

ERASEFILES & ERASEFILES_REMOTE ignored

I have a very awkward situation on our new cluster: my jobs seem to ignore ERASEFILES & ERASEFILES_REMOTE.

As a result, the backup disks fill up quickly, with all the knock-on effects and extra work that brings.

I usually use BACKUP DATABASES [*] and BACKUP LOGS [*] with the ERASEFILES & ERASEFILES_REMOTE options.

The log also always shows something like this at the bottom:
4/3/2013 3:15:16 AM: The backup set on file 1 is valid.
--------------------------------------------------------------------------------
4/3/2013 3:15:16 AM: Deleting old backup file: H:\Backup\databasename\FULL_INSTANCENAME_databasename_20130401_031500.sqb
4/3/2013 3:15:16 AM: Deleting old backup file: \\networkname\directories\INSTANCENAME\databasename\FULL_INSTANCENAME_databasename_20130319_031500.sqb
4/3/2013 3:15:24 AM: Copied H:\Backup\databasename\FULL_INSTANCENAME_databasename_20130403_031500.sqb to \\networkname\directories\INSTANCENAME\databasename\FULL_INSTANCENAME_databasename_20130403_031500.sqb.

But on the new cluster this part is missing entirely, both locally (ERASEFILES) and for the network-drive copies (ERASEFILES_REMOTE). It just continues with the next database.

I should mention that the old environment is running a version 6 release.
The new cluster is running SQL Backup version 7.2.1.82 & Service application version 7.2.1.82.
I've already noticed that the copy action is now a separate step, as I receive separate emails from the copy-to-network action on success.
But I really need ERASEFILES & ERASEFILES_REMOTE to work! By default, FILEOPTIONS = 1 is not added when using the GUI; I tried adding it as well, but to no avail.
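For reference, a job of the kind described above would look something like this sketch. The paths, retention values, and COPYTO share are placeholders modelled on the log excerpt, not the poster's actual command; FILEOPTIONS = 1 is included as tried in the thread:

```sql
-- Sketch of a full backup job with local and remote housekeeping.
-- ERASEFILES removes local .sqb files older than n days;
-- ERASEFILES_REMOTE does the same for the COPYTO location.
-- All paths and day counts below are illustrative only.
EXECUTE master..sqlbackup N'-SQL "BACKUP DATABASES [*]
  TO DISK = ''H:\Backup\<DATABASE>\<AUTO>.sqb''
  WITH COPYTO = ''\\networkname\directories\INSTANCENAME\<DATABASE>'',
  ERASEFILES = 2, ERASEFILES_REMOTE = 14, FILEOPTIONS = 1"'
```

If the deletion lines never appear in the log at all, comparing the exact command the job runs against a working version 6 job is a good first diagnostic step.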

RE: Transaction Log Backups Failing

Now I'm curious about the solution...

RE: ERASEFILES & ERASEFILES_REMOTE ignored

Many thanks for your post, and apologies for the inconvenience caused.

I have logged a support call for you; the call reference is F0071348.

RE: 7.3.0.383 Issue: Refreshing Connection for 20m & counting

I'd already left for the night when you posted that last message, so the service was unlikely to be busy with lots of data then...
Did you manage to remove the file in the end?

RE: 7.3.0.383 Issue: Refreshing Connection for 20m & counting

No need to remove the file after I whacked the extraneous backup history; all clients now refresh the Activity History just fine.

I deployed the server components to three more production servers that have only 30-day history retention and had no issues at all. Their backup schedules are very similar to the 6-month-retention machine's, BTW.

The fact that SQL Monitor raised a Long Running Query alert for sqbdata as the underlying process tells me that the issue lies in THAT process, and that the volume of data gave it a two-hour-twenty-five-minute headache!

Compatibility with 6.4

Will it be possible to restore backups created with 7.3 using version 6.4 of Red Gate SQL Backup? We have several clients on various versions, and some are receiving the free upgrade notice. We need to know how to advise them.

Possible restore issue with 7.3.0.383 to replace database

Hi,

I am running on W2K12 with SQL 2012 SP1, and when I try to use a script that worked before on W2K3R2 with SQL 2005 to restore a database WITH REPLACE, it seems to want to place the database on the same drive as the original backup's source. I am trying to restore to a folder on our E: drive, where the existing database currently lives, from a backup of a database that resided on a D: drive.

Now I am forced to add the MOVE parameters. It looks like the default behaviour has changed since 7.1.

Here is a snippet of my script:

EXECUTE master..sqlbackup N'-SQL "RESTORE DATABASE [AHAH_DEVL] FROM DISK = ''G:\DBABackupsOnly\FULL_AHAH_PROD_20130320_030300_23895.sq6'' WITH RECOVERY, REPLACE"'
go

The AHAH_DEVL database resides on the E: drive and AHAH_PROD resides on the D: drive.

This would tend to make this version unusable for us.

Chris
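Until the default is clarified, a workaround is to pin the files to E: with explicit MOVE clauses. The logical file names (AHAH_Data, AHAH_Log) and target paths below are assumptions for illustration; check the real logical names first with RESTORE FILELISTONLY against the backup file:

```sql
-- Sketch of the restore with explicit file placement.
-- Logical names and E:\ paths are hypothetical; verify with
-- RESTORE FILELISTONLY before running.
EXECUTE master..sqlbackup N'-SQL "RESTORE DATABASE [AHAH_DEVL]
  FROM DISK = ''G:\DBABackupsOnly\FULL_AHAH_PROD_20130320_030300_23895.sq6''
  WITH RECOVERY, REPLACE,
  MOVE ''AHAH_Data'' TO ''E:\SQLData\AHAH_DEVL.mdf'',
  MOVE ''AHAH_Log'' TO ''E:\SQLLogs\AHAH_DEVL_log.ldf''"'
```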

RE: Possible restore issue with 7.3.0.383 to replace database

I should add that the GUI works as expected; the problem only occurs when you use a script.

Chris

RE: Compatibility with 6.4

Backups created with 7.3 can be restored using 6.4, except when the 7.3 backup is encrypted with a Unicode password.

E.g. this backup can be restored by 6.4

Code:
EXEC master..sqlbackup '-sql "BACKUP DATABASE model TO DISK = [<AUTO>] WITH PASSWORD = [password_value]"'

but this cannot be restored:

Code:
EXEC master..sqlbackup N'-sql "BACKUP DATABASE model TO DISK = [<AUTO>] WITH PASSWORD = [杨立文]"'
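For completeness, restoring the first (non-Unicode-password) backup on a 6.4 machine would look something like the sketch below; the disk path is a placeholder, since <AUTO> generates the actual file name at backup time:

```sql
-- Hypothetical path; restore on the 6.4 side, supplying the same
-- password the backup was taken with.
EXEC master..sqlbackup '-sql "RESTORE DATABASE model
  FROM DISK = ''C:\Backups\model.sqb''
  WITH PASSWORD = [password_value], RECOVERY"'
```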