As SSAS databases get larger and larger, deployment becomes a bigger and bigger problem. How can we ensure that databases are easily and quickly deployed from development to production with minimal downtime? In some environments, SLAs dictate that an Analysis Services database must be available for querying 24×7 with only seconds of downtime, especially at global companies.

When it comes to deployment, there are four options: backing up the development database and restoring it on production, using the SSAS Deployment Wizard, scripting the database with an Alter XMLA command, or using Synchronize. Backup and restore is slow and cumbersome; using the wizard is, well, using the wizard; and scripting the database with an XMLA command is a decent deployment strategy, although once you've run your script, the database needs to be completely re-processed before it can be queried again. In comes Synchronize to the rescue! The beauty of Synchronize is that while Analysis Services is doing its sync, users can continue to query and use the destination database. When the synchronization is complete, Analysis Services automatically switches users over to the newly copied database, putting deployment downtime in the millisecond range. This can be represented via the following diagram:
This ensures a high-performing, scalable solution. Even better, we can use the Synchronize command to synchronize production databases across instances in a Network Load Balancing cluster, increasing the resources available to Analysis Services as needed. For anyone unfamiliar with Network Load Balancing: a switch distributes the query workload across multiple smaller servers (scale out), similar to the diagram below:
Additional information on scaling out an Analysis Services solution can be found here.
So, back to the reason for today's post. Let's say we want to synchronize our Analysis Services database from our development machine to our production machine. I have two Analysis Services instances on my machine, both 2012 SP1. In Management Studio, right-click the Databases folder on the target Analysis Services instance and select Synchronize…
After the wizard's introductory page, select the source and destination servers. Note that you also have the option to change the storage location on the destination server. (Remember: fast drives for heavily queried data, slower drives for older, less frequently queried data.) If the source server uses remote partitions, the destination server will point to the same remote partitions. For more information on remote partitions, see my previous post, Remote Partitions in SSAS.
On the next screen, select the partition locations and click Next. Then choose whether roles and members should be synchronized as well. If you are creating your roles initially, you may want to deploy both roles and members, but once you've been in production for a while the two servers will diverge, since users are granted access to the production database but not to the development server. We're going to set it to Copy all for this example. Notice the last checkbox, "Use compression when synchronizing databases". We want to make sure this is checked, to keep the synchronization time to a minimum. Click Next, and then select whether to synchronize now or save the script to a file. If you save it as a file, you can take the XMLA command and make it part of your nightly job run. We'll keep it as Synchronize now, and click Next.
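As a reference point, the roles-and-members choice in the wizard maps to the SynchronizeSecurity element of the XMLA Synchronize command, which accepts one of three values (the element and values are part of the XMLA command; the summaries in the comments are mine):

```xml
<!-- SkipMembership: copy role definitions, but not their members (the default) -->
<!-- CopyAll:        copy role definitions and their members -->
<!-- IgnoreSecurity: do not synchronize roles; the destination keeps its existing security -->
<SynchronizeSecurity>CopyAll</SynchronizeSecurity>
```

Since we picked Copy all in the wizard, the generated script will contain CopyAll.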
The summary screen will appear, and we'll click Finish. That's it! You've successfully synchronized your two instances! I wanted to go into a bit more detail, so I turned Profiler on against my target instance before clicking Finish so I could capture the commands being sent. Also, if you just want the synchronize script for your job, it looks like the below:
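A Synchronize command takes this general shape (the server, instance, and database names below are placeholders, not the ones from my environment, so substitute your own):

```xml
<Synchronize xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Source>
    <!-- Connection to the source (development) instance -->
    <ConnectionString>Provider=MSOLAP;Data Source=DEVSERVER\SSAS;Integrated Security=SSPI</ConnectionString>
    <Object>
      <!-- The database to copy across -->
      <DatabaseID>AdventureWorksDW</DatabaseID>
    </Object>
  </Source>
  <!-- Copy roles and their members, per the wizard choice above -->
  <SynchronizeSecurity>CopyAll</SynchronizeSecurity>
  <!-- Compress data during transfer to keep sync time to a minimum -->
  <ApplyCompression>true</ApplyCompression>
</Synchronize>
```

The command runs against the target instance; for a nightly job, a SQL Server Agent job step of type "SQL Server Analysis Services Command" will execute it as-is.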
Once you click Finish, you'll see the status bar tick across as the two servers synchronize. My database was pretty small (93 MB), so it only took a few seconds. Below is a screenshot of the Profiler events I captured:
Selecting one of the Progress Report events that was logged, we can see that all Analysis Services did was copy the files from the source to the target. No other commands were issued (this line was repeated for every object in the database).
Keep in mind that this was an initial deployment. Running through the wizard again, this time Profiler shows us some profoundly different results:
It seems that when you run the Synchronize command against an existing database, subsequent runs are compared against the existing metadata for differences (which makes sense, and is the behavior you would expect). As databases grow larger, the impact of this comparison magnifies and slows performance. For 90% of the solutions I see on engagements, Synchronize works like a charm. For larger databases (over a hundred GB), performance can be improved by using the Robocopy method outlined in this best practices article from Microsoft. Keep in mind that the Robocopy method requires the service to be stopped and restarted, and that writeback and ROLAP storage can't be used. If you need those features, Synchronize will be your best option.