Channel Description:

Product Support and Discussion

older | 1 | .... | 33 | 34 | (Page 35) | 36 | newer

    0 0
  • 08/04/14--09:40: The Log Copy Queue
  • To whom it may concern:

    We are currently running SQL Backup 7.6.0.29. We use the log copy queue to support log shipping for a system that experiences high month-end transaction volume. As a result, the log copy queue has trouble keeping up with copying the larger log backup files.

    What I'm seeing when querying backupfiles_copylist via the sqbdata utility is that many of the queue entries are getting flagged with a status of E. Based on a forum search, it sounds like a status of E indicates the queue entry has expired, presumably because it was not copied within the default 24-hour period. Given that, I have the following questions:

    1) It appears that the files are not being copied in the order they were created. By that, I mean there are files with a later [created] date which are getting copied before files with an earlier date. Is there anything I could be doing wrong in my config to cause this? It seems counter-intuitive to me.

    2) Are there best practices on config settings for the registry entries that control copy behavior, specifically: COPYTO:ExpiryIntervalInMinutes and
    COPYTO:ThreadCount ? In addition, is there a max value allowed for COPYTO:ExpiryIntervalInMinutes?

    3) What are the valid values for the [status] field in backupfiles_copylist? Here are the ones I think I know: A = Active, E = Expired, P = Pending, S = Success.

    4) Finally, how is the data in these admin tables maintained? Do we need to manually schedule a job to execute cleanup commands against them? If so, can you provide a full list of the tables that should be included?

    Thanks in advance for any information or suggestions you can provide.

    Regards,

    -Mike Eastland

    0 0

    Tried to use the UI on the prior released 7.6 version. It was so slow - spinner only, never returning any data - that I decided to check for an update. Having found 7.7 and installed the UI and server components, I had my fingers crossed for increased performance.

    There are only 12 databases on the active-passive cluster.

    6 databases have 5-minutely log backups.

    SQL Monitor barfs up a "Long Running Query" alert for the Activity Pane's refresh activity when the UI is first opened: expand the databases, click the one of interest, and go for LUNCH!

    Code:
    Low   Long-running query | 4325661
       
    Raised on:   cluster.xxx.xxxx.int\(local)
    Time raised:   4 Aug 2014 1:18 PM (UTC-04)
    Details
    Process ID:   176
    Process name:   SQL Backup
    Database:   master
    Host:   XXX-DB1
    User:   XXX\user
    Process login time:   4 Aug 2014 1:08 PM
    Query start time:   4 Aug 2014 1:08 PM
    Query duration:   600.777 sec
    SQL process fragment
    sqbdata


    15 minutes and NO DATA = unacceptable.

    And it's not like the cluster is overworked, either. There are only 40 jobs, 12 or so of which are SQL Backups, and many of the rest only run for a few seconds or minutes every 15 minutes or hourly.

    This is a SQL2012SP1CU6 instance on a dual-hex-core Dell R720 with 256GB of RAM, so it's not lacking resources. CPU currently at 16%, 5 waiting tasks, 7MB/sec I/O. Can't see any good reason for this issue other than, perhaps, with msdb at 35GB, there may be a woefully lacking index on one of the core job history tables...

    Seriously UNUSABLE interface.

    Any ideas, chaps?

    0 0
  • 08/04/14--23:44: RE: The Log Copy Queue
  • Assuming a default 'COPYTO:ThreadCount' value of 5, and a default 'COPYTO:ThreadMultiplier' value of 5, SQL Backup picks up the oldest 25 files that it needs to copy.

    When any of the 5 threads are available, it assigns a file to the thread to begin the copying process.

    Over the 'COPYTO:SleepIntervalInSeconds' duration (default 60 seconds), it checks every 'COPYTO:WaitIntervalInSeconds' (default 5 seconds) whether any of the 5 threads have become available again, and assigns the next file to the free thread to be copied.

    Once the 60 seconds has elapsed, it repeats the above cycle, i.e. picks up the oldest 25 files that need to be copied, finds a free thread, assigns the file to the thread, etc.

    So basically, the copying process is set up to copy a maximum of 25 files over a 60-second period. Assuming a transaction log backup interval of 5 minutes, and assuming your server can handle copying 25 files in a minute, SQL Backup should be able to copy a maximum of 125 files over 5 minutes.

    A potential bottleneck is if all 5 threads are occupied copying large files, and your transaction log backup interval is shorter than the time it takes to copy each file, e.g. it takes 6 minutes to copy each file, your backup interval is only 5 minutes, and you have 5 or more databases generating transaction log backup files that large. The 5 worker threads will be preoccupied with the larger files, servicing the smaller files only later, then be preoccupied again with the larger files, and eventually everything gets backlogged.

    Increasing the number of worker threads won't help, as network bandwidth is a fixed resource and the existing worker threads would already be using the maximum possible bandwidth. Increasing the 'COPYTO:ExpiryIntervalInMinutes' may help, as it would allow files to stay in the copy queue longer.

    Another potential bottleneck is if you are generating more than 25 backup files a minute. As SQL Backup will only copy a maximum of 25 files a minute, you will eventually end up with a backlog. Increasing the 'COPYTO:ThreadCount' and/or 'COPYTO:ThreadMultiplier' values will be required to handle this load.
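    As a rough sanity check, the cycle described above can be modelled in a few lines. This is a hypothetical back-of-the-envelope sketch, not SQL Backup's actual implementation; the parameter names simply mirror the registry settings mentioned in this thread:

```python
# Back-of-the-envelope model of the copy-queue cycle described above.
# Defaults from this thread: 5 threads x 5 multiplier = a batch of 25
# files picked up at the start of each 60-second cycle.

def copy_ceiling(window_secs, thread_count=5, thread_multiplier=5,
                 sleep_interval_secs=60):
    """Maximum files the queue can copy in a window, assuming the
    server actually manages one full batch per cycle."""
    batch_size = thread_count * thread_multiplier
    cycles = window_secs // sleep_interval_secs
    return batch_size * cycles

def backlog_per_minute(files_generated_per_min, thread_count=5,
                       thread_multiplier=5):
    """Files queued per minute beyond the one-batch-per-minute ceiling."""
    ceiling = thread_count * thread_multiplier
    return max(0, files_generated_per_min - ceiling)

print(copy_ceiling(300))        # 5-minute window -> 125 files
print(backlog_per_minute(40))   # 40 files/min generated -> 15/min backlog
```

    On these numbers, what matters is the batch size per cycle: generating more than 25 files a minute, or files that take longer than a cycle to copy, grows the backlog until entries start to expire.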

    To answer your questions:

    >> 1) It appears that the files are not being copied in the order they were created.
    Two possibilities - the older files have expired because they were not copied within the first 24 hours, and the later files were copied, or the larger files took longer to complete and smaller files (although newer) assigned to a different worker thread were copied over first.

    In the first case, you'll need to increase the 'COPYTO:ExpiryIntervalInMinutes' value. Assuming the bottleneck occurs only during month end, the copy queue should clear up during the beginning of the next month.

    In the second case, SQL Backup's transaction restore process will eventually restore the files once they are available. You might encounter a few errors when the newer files have been copied over before the older files, but once the older files have been copied over, the files will be restored in the correct order.

    >> 2) Are there best practices on config settings for the registry entries that control copy behavior, specifically: COPYTO:ExpiryIntervalInMinutes and
    COPYTO:ThreadCount ?

    This really depends on your backup patterns. As described above, 'COPYTO:ExpiryIntervalInMinutes' helps in keeping the files longer in the copy queue, if your current network bandwidth has already maxed out. 'COPYTO:ThreadCount' would help if you have network bandwidth to spare.

    >> In addition, is there a max value allowed for COPYTO:ExpiryIntervalInMinutes?

    No, there is no maximum value.

    >> 3) What are the valid values for the [status] field in backupfiles_copylist? Here are the ones I think I know: A = Active, E = Expired, P = Pending, S = Success.

    And C = Cancelled, used when a hosted storage upload copy item is cancelled.
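    For quick reference when querying the table by hand, the values collected in this thread can be wrapped in a small lookup. The status names are as given here, not taken from official documentation:

```python
# [status] values for backupfiles_copylist, as collected in this thread.
COPYLIST_STATUS = {
    "A": "Active",
    "E": "Expired",    # not copied within COPYTO:ExpiryIntervalInMinutes
    "P": "Pending",
    "S": "Success",
    "C": "Cancelled",  # hosted-storage upload copy item was cancelled
}

def describe_status(code):
    return COPYLIST_STATUS.get(code.upper(), "Unknown (%s)" % code)

print(describe_status("e"))  # Expired
```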

    >> 4) Finally, how is the data in these admin tables maintained?

    The retention policy that you set up for the local history applies to the 'backupfiles_copylist' and 'backupfiles_copylist_log' tables too. From the GUI, select the server instance, right click to bring up the context menu, and select the 'Server Options' item. The retention policy is the one under 'SQL Server backup and restore history'. If you need a different retention policy, you'll need to perform the deletion manually. It's only those 2 tables that are involved.

    Lastly, there are some other timing settings that you can tweak, if SQL Backup requires more than a single attempt to copy your files. You can see if this is the case by checking the 'count' column in the 'backupfiles_copylist' table. If this is happening in your case, let me know and I'll explain the settings.

    0 0

    Is it possible to work with SQL Backup from your Redgate Toolbelt on Azure servers?

    When I try to connect to my SQL Server on Azure, with the username and password that I used to create the databases, and with which I am supposed to have all privileges, it gives me the error:

    Login Failure with nameserver.database.sindow...
    A problem was encountered whilst communicating with the server
    The user credentials supplied do not represent a member of the sysadmin role,
    SQL Backup requires sysadmin privileges


    I have checked forums and blogs, and all I can see is that the user and password I have should be enough to perform any activity on the server, but instead, that's all I get.

    Regards,
    pGrnd

    0 0

    I was very disappointed with the results. On the other hand, I realized maybe there is something wrong, some kind of incompatibility.


    0 0

    Red Gate Support commented, "...the UI uses an internal SQL compact database to cache the history. It is the syncing process between the system tables and the cached database where the performance issue is seen. We do have an existing request logged to improve the UI performance in these types of situations. The number for your reference is SB-4400. I will add you to the list of requestors. In the meantime, the workaround that has worked for other customers is to reduce the number of rows in the table. Sorry for the inconvenience."

    I dropped the SQL Backup history to 10 days and left the UI open - the message on the Server Options dialog says that cleanup only takes place when the UI is open. I see that as a DEFECT because I only ever go to the UI when adding or updating a scheduled backup, or to get the full log message for a rare failure. Cleanup should be something SQL Backup takes care of on its own - the essence of an unattended process - especially because the service knows to write to the history, so it surely can do cleanup, too.....

    At least I received a response in about 45 seconds. Much better, but still very slow... SELECT TOP(N) FROM, perhaps, with paging...?

    Pity there's no "Export History" right-click option, either, so that I could at least do some average-run-time, file-size, and time-of-day impact analysis on the data I can see in the UI. Running my own query via the UI just generates an "Unable to generate report" error: "Query error: There was an error parsing the query. [Token line number=1, Token line offset=2, Token in error]". Not exactly helpful, with no means of copying that error to the clipboard... Ah well...

    0 0

    Hello,

    Thanks for your post. Regarding your question about backing up Azure databases, we have a SQL Azure offering here:

    http://cloudservices.red-gate.com/

    (If you are looking to back up databases that are located in a virtualized environment hosted in the Azure cloud, then as long as you can connect to it via SSMS and have the ability to install Windows services (for our SQL Backup Agent), you can use SQL Backup Pro.)

    Best Regards,
    Steve

    0 0

    We had this problem once.
    I thought our back-up was sufficient, but everything had to be transported and we lost data.
    That extra step would have helped.
    We do this now as a default.

    0 0

    I have seen this other product, but it seems very limited compared with the other Redgate tools.

    I am mostly interested in having a reliable backup of the DBs, of course (Azure Websites' own backups are not 100% reliable regarding the DBs, aside from their limitations), but also in having the functionality to make a schema and data comparison between the live database and a backup version, and I believe this is limited to the SQL Toolbelt.

    Also, I am already used to your traditional tools.

    0 0

    Thank you for your reply.

    You cannot use SQL Backup V7 to back up an Azure database. You cannot deploy the SQL Backup Server Components to the Azure server. It is the SQL Backup Server Components that actually perform the backup and restore tasks.

    The only workaround is to use the Cloud Services offering my colleague directed you to in his reply.

    Or, use SQL Compare. Compare and deploy to an empty local SQL Server database / server. Once successfully deployed to the local SQL Server database, backup this database using SQL Backup.

    Many Thanks
    Eddie

    0 0

    Thanks Eddie for the response

    eddie davis wrote:

    You cannot use SQL Backup V7 to back up an Azure database. You cannot deploy the SQL Backup Server Components to the Azure server. It is the SQL Backup Server Components that actually perform the backup and restore tasks.


    That is finally very clear, much clearer than any of the Azure information.

    eddie davis wrote:

    Or, use SQL Compare. Compare and deploy to an empty local SQL Server database / server. Once successfully deployed to the local SQL Server database, backup this database using SQL Backup.


    Seems a very large workaround, but it is probably even better than the BACPAC that Azure offers.

    The question is, wouldn't Redgate (and many others) be VERY VERY interested in having Microsoft implement the SQL Backup Server Components (fully or partially) on Azure??

    br,
    pGrnd

    0 0

    Hi, thank you for your reply.

    I am sure that we (Redgate) would love to have a backup solution or add features to our SQL Backup Pro tool for Azure databases.

    The problem is with the SQL Backup Server Components, which contain a total of 9 Extended Stored Procedures.

    When you make use of an Azure database, there are some limitations, listed in an MSDN article. One of the features listed (in amongst a long list) as not supported is Extended Stored Procedures. This means that the Server Components installer cannot create the SQL Backup Xprocs.

    Also listed as not supported is Backup and Restore, and SQL Server Agent/Jobs.

    SQL Backup Pro relies upon SQL Server to collect the backup data and write it to the Virtual Device Interface (VDI) created when the backup command is sent to SQL Server. SQL Backup Pro collects the backup data from the VDI and compresses the data (and optionally encrypts it) before the compressed data is written to disk. Because Backup and Restore is not supported on an Azure database, this process cannot occur.

    I hope the above provides some further information as to why SQL Backup Pro cannot be used to backup and restore Azure databases.

    The solution using our Redgate tools is either to make use of Redgate Cloud Services http://cloudservices.red-gate.com/

    Or to use the tools available in SQL Toolbelt, in particular SQL Compare and SQL Data Compare (sorry, I omitted the use of SQL Data Compare in my previous reply), to create a local copy of the database that can then be backed up.

    Many Thanks
    Eddie

    0 0

    This isn't really a support question, but more a general "how do you do it" type query.

    In our organisation we use Redgate Backup. We back up 17 SQL instances with a total of about 150 databases across these instances. Redgate Backup is configured to do a full backup of each of these databases every night, plus 15-minute log backups, which it writes to a central server so that the data is off the local servers. After 7 days, the oldest files are deleted by Redgate Backup.

    This results in around 3 terabytes of data on the central server. Each Friday, we use Backup Exec to write this data to tape, so that it can be archived and stored offsite. This process is repeated monthly and yearly, so that we can restore data from up to 3 years ago.

    The problem is, we are having issues getting successful backups of the 3TB of data onto tape. The jobs regularly fail, and we think it is because the Redgate backup files change so frequently. The Backup Exec jobs take a while to run, during which a number of log backups could have taken place, so older files would have been deleted and no longer exist for Backup Exec to read.

    I would be interested in what people do to archive their database backups and how they secure long term backups offsite in the most efficient and resilient way.

    Any help would be great.


    0 0

    Greetings,

    I am using a scripted process for automated individual SQL database restorations, as needed. I am invoking sqlcmd via PowerShell, which performs the restoration by using the following example syntax:

    Quote:
    EXECUTE master..sqlbackup '-SQL "RESTORE DATABASE [DB_NAME] FROM DISK = ''FULL_PATH'' WITH RECOVERY, DISCONNECT_EXISTING, REPLACE"'


    This is simple enough and works; however, I also wanted to incorporate some sort of error handling, based on the exitcode generated by the proc being called. I've tried using PowerShell's own $LASTEXITCODE; however, the code generated always appears to be 0, since the script and sqlcmd itself -- technically -- ran successfully, so it does me no good. The actual exitcode generated by the sqlbackup proc is ignored.

    Might anyone have a method for exitcode handling, in combination with PowerShell, sqlcmd, and the sqlbackup stored procedure? Thanks in advance!

    0 0

    Hi Eddie

    first of all, thanks a lot for such a detailed reply. I am as satisfied with Redgate support as I am with your products - a hurray for all of you.

    What I meant by
    "wouldn't Redgate be VERY interested in having Microsoft implement the SQL Backup Server Components on Azure?"
    is offering my help by asking (or begging) Microsoft to add this functionality, and I am sure plenty of other users will follow.

    As said, looking forward to being able to use your great products on Azure.
    Br,
    pGrnd

    0 0

    I believe I figured it out, after spending a lot more time with this. I'll post what I came up with here, just in case it helps someone else in the future:

    1. Added the -b switch when calling sqlcmd, which forces sqlcmd to terminate the batch if there is an error.

    2. Modified the restore script so that if the exitcode and sqlerrorcode are anything but 0, it goes to RAISERROR within the script (the exact message does not appear to matter). Fortunately, the Redgate sqlbackup proc does allow the exitcode and sqlerrorcode to be returned into declared variables, as such:

    Quote:
    DECLARE @exitcode INT
    DECLARE @sqlerrorcode INT
    EXECUTE master..sqlbackup '-SQL "RESTORE DATABASE [DB_NAME] FROM DISK = ''BACKUP_FILE_PATH'' WITH RECOVERY, DISCONNECT_EXISTING, REPLACE"', @exitcode OUTPUT, @sqlerrorcode OUTPUT


    3. Once sqlcmd processes RAISERROR in a script run with the -b switch, the $LASTEXITCODE in PowerShell goes from 0 to 1, which indicates a restore error. I can then have PowerShell stop the process there, or run an alternate process, which is what I needed.
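    The general pattern here - make the inner tool fail loudly so the outer shell sees a non-zero exit code - can be sketched in Python, with subprocess standing in for the PowerShell-to-sqlcmd call. This is a hypothetical illustration of the control flow, not the actual sqlcmd invocation:

```python
import subprocess
import sys

def run_restore(command):
    """Run an external command and branch on its exit code, mirroring
    a check of $LASTEXITCODE after `sqlcmd -b` has turned a RAISERROR
    in the restore script into a non-zero exit code."""
    result = subprocess.run(command)
    if result.returncode != 0:
        # Failure path: alert, retry, or run an alternate process here.
        return False
    return True

# Stand-in commands: one exits 0, the other exits 1.
print(run_restore([sys.executable, "-c", "pass"]))                     # True
print(run_restore([sys.executable, "-c", "import sys; sys.exit(1)"]))  # False
```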

    0 0

    Hi

    Is it possible to preserve replication settings when backing up a database using SQL Backup?

    Currently, when we make a full backup of a Subscriber database, the replication procedures are not restored.

    Using SQL Backup version: 7.4.0.23
