There is work to be done! There's a war to be won!

Thursday, 2 January 2014

Symantec NetBackup 7.6 - Sexy integration continues with shiny new features.

Happy New Year to all! I truly hope it's filled with everything you want for yourself; importantly peace, health, kindness ... all the good things. To kick off the new year, I thought I'd dump my 2 cents' worth on Symantec's new NetBackup 7.6 offering. FA (first availability) was in Oct '13, with GA announced in December (7.6.0.1). Goodness me, what a lot of exciting features await.

To begin with, MSDP, whether integrated as part of the NetBackup Appliances or not, is certainly sounding a lot healthier with 'self-healing data integrity checks'. Too good to be true? Not from what I've seen. As part of a code rewrite, the MSDP-DB (dedupe db) has moved on from the 7.5 segment-based offering (recall the dedup plugin divided the file into segments and sent the unique segments to the dedupe engine, spoold) - we now have a multithreaded agent sitting inline between the dedup plugin and the dedup engine, using a reference db (OK, it's a flat file, but anyway) that replaces PostgreSQL and is container-based. With spoold now owning the refdb 'files', we're no longer concerned about transaction logs in PostgreSQL - none of that post-processing stuff that slowed everything down. We're also limiting any corruption to a single refdb container, which is a lot better than the previous single point of failure in 7.5, where nothing started if the database was corrupt. What a pleasure. This is also why I saw a huge improvement in spoold start-up times on my little DAS running MSDP - it was faster, no question about it.

* So: things to look out for:

mtstrmd (multithreaded agent)
the dedupe engine (spoold) now has a 'refdb' (dbpath/refdb) with sub-directories that contain the IDs (refid.ref).
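
For the curious, the on-disk layout looks roughly like this - a sketch only, based on what I saw on my own pool, and I'm assuming each sub-directory maps onto one container, so treat the exact paths as illustrative:

dbpath/refdb/
    <container-id>/
        <refid>.ref
        <refid>.ref
        ...

Each container keeps its references in its own little pile, which is exactly why corruption is now contained to one container rather than killing the whole database.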

SLP windows. Yes, I hear you cry:

- Scheduled windows can now be applied to duplication and/or replication operations (AIR, not OST). You can also delay the duplication of an image until the source copy is about to expire (4 hours before, by default).
- The GUI has finally got the SLP parameters ... forget hand-editing the config file (see the example after this list) and click your way into '14! lol
- I couldn't test the logic around processing to any great degree as my poor DAS was in a state of shock :)
Jobs were doing what they were told to do ... I kicked off as many as I could handle.
- I did find some welcome progress and size details in the Activity Monitor, lovely.
- SLP processing can be suspended, or set to auto-resume at a selected time. (However: you can't suspend MSDP, NDMP, optimized duplication, AIR (!) or backups from snapshots ... lol) - read that manual.
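
Speaking of that config file: on 7.5 we tuned SLPs by hand-editing LIFECYCLE_PARAMETERS (under netbackup/db/config on the master). A quick example from memory - the parameter names are real, but the values here are purely illustrative, so don't copy them blindly:

MIN_GB_SIZE_PER_DUPLICATION_JOB 8
MAX_GB_SIZE_PER_DUPLICATION_JOB 25
IMAGE_EXTENDED_RETRY_PERIOD_IN_HOURS 2
DUPLICATION_SESSION_INTERVAL_MINUTES 5

In 7.6 these sit under Host Properties in the GUI, which is where they should have been all along.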


OpsCenter - it's all heading here, so get going:

- Security enhanced with AD/LDAP integration (AD groups) and better user role assignment.
I also read with interest about the embedded authentication broker: each OpsCenter server now has its own local broker, which simplifies issues around install/config/upgrade/uninstall. Other products, if any, are no longer affected by its removal, we hope.
- Cool getting-started wizard. I always think these things need to exist to make the product more accessible.
- NBU-managed only now (no further support for Tivoli, CommVault, EMC NetWorker) - fair enough.
- Telemetry.
- 'NetBackup Appliance aware' - not tested, as Santa didn't buy me an appliance. It's said to have some enhanced monitoring and alerting around h/w failures - there's certainly a new appliance hardware monitoring tab. I saw a large number of db tables associated with Appliances, so I'd bet it's all tightly taking care of any issues and sending on the alerts. (CPU, fans, disks, HBAs ... PCI ... RAID ... power supplies ... the list goes on ... fill your boots.)

* New daemons & stuff:
opsatd (embedded AT broker)
OpsCenter\server\AuthBroker
Logging is still the same, but for those running appliances it will probably pay to search the logs for the keywords associated with the alert db tables (they all appear to start with am_: am_HardwareFailureAlert, am_CPUFailureAlert, am_DiskFailureAlert, etc.).
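
Something along these lines should do for a quick look - purely illustrative, as the log file name and path will depend on your install:

# any hardware alert references in an OpsCenter server log
grep "am_HardwareFailureAlert" <opscenter_server_log>

# or cast the net wider and catch all the am_* alert references
grep "am_" <opscenter_server_log>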

Oracle Intelligent Policies:

My testing was conclusive on this one - it allowed the automagical discovery of the Oracle instance I was running. Once I had completed registration, I backed up the DB, control file and logs; registration requires you to provide credentials, which are stored in NBDB. These credentials can be an OS login or an Oracle login, and this is where you can define an instance group to hold a container of instances and apply it elsewhere later. You can use 'auto-registration' for hundreds of other instances if you need to. Lots of stuff to consider here - read the manual, slowly!

A new automatic 'archive log' schedule is now available with a frequency in minutes: redo logs get backed up :) Loving that.

* New daemons and more stuff:
- A new GUI-driven policy type, Oracle. Well laid out and easy to get going.
- nbdisco for instance discovery and mgmt (master/client)
- use regular nbpem, nbjm, nbsl, nbars for job discovery and admin log, also bphdb
- nboraadm: a new CLI to play with instances; else just use the GUI.
- bpplinfo has some additional switches, but we're told to use them with caution: -client_list_type and -selection_list_type (0 1 2 3 4 = legacy, wholedb, tablespace, datafile, fra) - nice. A quick sketch follows below.
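
A hedged sketch of what that might look like on the CLI - the policy name is made up, and I'm assuming the 0-4 values above map onto -selection_list_type, so verify against the commands reference before using in anger:

# switch a (hypothetical) Oracle policy to whole-database selection (1 = wholedb)
bpplinfo my_oracle_policy -modify -selection_list_type 1

# list the policy attributes afterwards to confirm what you've set
bpplinfo my_oracle_policy -L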


NDMP:

It's taking a while, but it's great to have read about wildcards, ALL_FILESYSTEMS and exclude lists now being available for NAS. Note: NetApp works at the volume and directory level; all other vendors currently support volume-level interrogation only.

The ALL_FILESYSTEMS directive is like ALL_LOCAL_DRIVES, but any VOLUME_EXCLUDE_LIST must precede the ALL_FILESYSTEMS directive:

e.g. VOLUME_EXCLUDE_LIST = /vol/somedir1*, /vol/somedir/[2-5]*, ...
ALL_FILESYSTEMS


Search&Hold:

This came out in 7.5 and allows you to discover and index data, and put a hold on it for legal reasons, etc.
The problem in 7.5 was that the index server had to be a media server. The ability to install it on a client (Windows 2008/2012) certainly helps - one less media server license - even though the index server itself remains a licensed feature. The main process here is searchexecutor.exe (aka velocity). I originally tested this on 7.5, where it worked with a little bit of tweaking, but I didn't get to test it on 7.6.

In OpsCenter, there is a Search & Hold tab that allows you to search for images based on a date range and place a hold on all matching images. Useful stuff. You can also integrate it with EV. I get the license thing, but I wonder how long the license will be a requirement. Can't help thinking about competitors ... is it time to make things even more attractive by offering data discovery and indexing as part of the core product? I think so.

VMware with Accelerator & Instant Recovery

Accelerator was introduced in 7.5. It speeds up backups by keeping a tracking log on the client and sending only those changes to the media server. It will also use the Windows change journal if it's enabled (falling back to its own tracking if not). Regardless, the idea is 'faster' backups, most useful for clients with a low rate of data change. So do your research before turning it on - know your data, test it in your environment, and hopefully you'll be synthesising full backups in no time at all. Also, if you do choose to use it, have a traditional full backup of the client running once a week anyway ;)
So, with the power of our VMware/vSphere friends and vMotion, coupled with the ever-cool-titled 'Instant Recovery', we simply turn things on: the disk storage on the appliance or the media server is presented to the ESX server as a datastore and the VM backup image is 'recovered' (powered on). It's all NFS here, so do your homework. A plugin also exists for vCenter, allowing backup history lookups, monitoring and recovery. A few clicks and your favourite VM admin can be viewing the wonders of a window that looks similar to the BAR.
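
If you'd rather poke at Instant Recovery from the command line, nbrestorevm is the place to start. A rough sketch from my notes - the VM and datastore names are made up, and I haven't verified every switch against 7.6 GA, so check the commands reference first:

# activate a VM straight off its backup image, using a datastore for transient writes
nbrestorevm -vmw -ir_activate -C myvm01 -temp_location my_temp_datastore

# see what's currently activated
nbrestorevm -ir_listvm

# commit (after you've Storage vMotioned it off) or throw it away
nbrestorevm -ir_done <ir_id>
nbrestorevm -ir_deactivate <ir_id>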

Windows 2008/2012 and Solaris 11

Solaris 11 (root) ZFS changes have been given the all-clear with Sun Cluster - I do wonder how many people are still running their core servers on Solaris. Windows 2012 and 2008 are supported too, including NTFS dedup (OS-level dedup) - it makes me wonder how many people will use dedup at the OS level rather than just letting the storage handle it ... it will certainly cause some issues for backup tools where OS dedup schedules end up running at the same time as backups. Headaches.

New catalog compression

I did notice a different compression being used on the NBU catalog - I wasn't able to find out much more about it, but I was told it offers better access times and favours 'lots of small images'. I also read that EMM and NBDB have to reside on the same server going forward. I've never had a need to install EMM anywhere else myself.

There is also good mention of the additional pre-checks done in this version, and I must admit it really is about time that things like EEBs are checked and validated.

Well, hope some of that has been interesting to someone out there.
As always, I tend to do all this for myself as I'm genuinely interested and, like many people, tend to forget things ... which is what most of this blog is about at the end of the day. Drop a comment if the urge, um, urges ;)

Quick links:
7.6 Docs - http://www.symantec.com/business/support/index?page=content&id=DOC6488
(all 7.6 in 1 file) http://www.symantec.com/business/support/index?page=content&id=DOC6446

Happy data protecting in '14 and best of the best as always.
Glen.

Thursday, 19 December 2013

CommVault Simpana - A holistic approach

Data protection and recoverability are a hobby and a real interest of mine; I decided all those years back, when I first encountered the concept, that it was something I could genuinely enjoy doing as a 'job'. It enabled me to jump into other positions, analysing the components of applications and learning to protect them in their entirety. Not a lot of jobs allow you to do that today.

I'm sure, like any product out there offering protection, it's the way you choose to use it, based on the circumstances you're in and the experiences you've had. You're either familiar with a product or you're not, and this arguably shapes you. You might even be an HP or Dell or Oracle shop, and therefore you're in a camp to begin with as far as procurement is concerned. (No more handshakes on the golf course for you, er, matey) ;) Whatever your experience, if you're good at your job you'll be able to get just about any product to work in any environment. Full stop. There really is no argument.

From what I have seen, Simpana gives you all the tools for data protection in a well-marketed campaign. I've seen and used products that offer 'backup/recovery' capability, but I'm impressed by seeing a well-presented, holistic approach to data protection. Now, I haven't used it in a production environment and don't know how well it scales, but I'm interested to find out more. It appears to me that you can pick it up, allow it to dictate its model to you, and get data protected without too much fuss.

Wintel vs UNIX wars ... yes, they still exist, and some swear blind that a 'games' computer has no place protecting data; the CommServe, or main workhorse, has to reside on a Wintel platform. Well, my thought is if it works, it works - move along now, there's more important stuff to worry about, like protecting the actual data! The product, like so many others out there, is riddled with rich features; checkboxes offering you the world. Naturally the world always seems to come at a cost, so it's over to how valuable the data you're planning to protect is to your company.

The template policies instruct you to protect, protect, protect, with a wealth of online help explaining each and every option available; it's this offering that reinforces the full data cycle, neatly coupled with retention specifications. It cements the fact that the ageing process of your data and its associated copies - whether integrating snaps, aux copies or long-term archiving - is fundamental to whether your boss lets you go for forgetting to think about the bigger picture. It's this bigger picture that constantly guides you and lets you call up a wealth of sweetly integrated reporting tools.

A lot of the problems I've seen over the years come from users in multiple locations doing 'stuff', aka friendly fire. Well, 'naa draaamas' (as I've recently moved to Aus, forgive me, I AM trying to be serious here!). Simpana allows you to make people accountable for their actions by including a very rich security layer based on entities (simply: stuff you can or can't do based on who you are). So no more unexpected 'who just removed the path to our silo?!' problems.

You decide whether it works for your data and your environment, because at the end of the day it's all still about understanding your data and your environment, and doing the research to determine whether it'll work for you or not. You'd be a fool not to look at it ... or anything else for that matter ... but I'll say it again: holistic is good.

Anyone want to invite me over for tea to have a look at your CV environ? Feel free, I'm harmless and friendly and will supply biscuits :)

Happy New Year all, and happy data protecting; whatever product you use, make it a good one by making the vendor accountable for the product they sell you. If you want a feature, tell them, involve them and keep talking.

Lastly, feel free to add your comments as always.
Glen.

Sunday, 20 October 2013

How to retain 90% of everything you learn

Reproduced from http://www.psychotactics.com/blog/art-retain-learning/ - as a reminder for myself and for others if it helps.

--

Imagine if you had a bucket of water. And every time you attempted to fill the bucket, 90% of the water would leak out instantly. Every time, all you’d retain was a measly 10%. How many times would you keep filling the bucket?
The answer is simple: just once.
The first time you noticed the leak, you’d take action.
You’d either fix the bucket or you’d get another bucket, wouldn’t you?
Yet that’s not at all the way we learn.
Almost all of us waste 90% of our time, resources and learning, because we don’t understand a simple concept called the Learning Pyramid. The Learning Pyramid was developed way back in the 1960s by the NTL Institute in Bethel, Maine. And if you look at the pyramid you’ll see something really weird.
That weird thing is that you’re wasting time. You’re wasting resources. You’re just doing everything you can to prevent learning. And here’s why.
To summarise the numbers (which sometimes get cited differently), learners retain approximately:
90% of what they learn when they teach someone else/use immediately.
75% of what they learn when they practice what they learned.
50% of what they learn when engaged in a group discussion.
30% of what they learn when they see a demonstration.
20% of what they learn from audio-visual.
10% of what they learn when they’ve learned from reading.
5% of what they learn when they’ve learned from lecture.
So why do you retain 90% when you teach someone else or when you implement it immediately?
There’s a good reason why. When you implement or teach, you instantly make mistakes. Try it for yourself. (In this article for instance, after I’d read the information, I cited the loss rate as 95% instead of 90% to begin with. I had to go back and correct myself. Then I found three more errors, which I had to fix. These were factual errors that required copy and paste, but I still made the errors).
So as soon as you run into difficulty and start to make mistakes, you have to learn how to correct the mistake. This forces your brain to concentrate.
But surely your brain is concentrating in a lecture or while reading
Sure it is, but it’s not making any mistakes. What your brain hears or sees is simply an abstract concept. And no matter how clearly the steps are outlined, there is no way you’re going to retain the information. There are two reasons why.
Reason 1: Your brain gets stuck at the first obstacle.
Reason 2: Your brain needs to make the mistake first hand.
Reason 1: Your brain gets stuck at the first obstacle. 
Yes it does. And the only way to understand this concept is to pick up a book, watch a video, or listen to audio. Any book, any video, any audio. And you’ll find you’ve missed out at least two or three concepts in just the first few minutes. It’s hard to believe at first, but as you keep reading the same chapter over and over, you’ll find you’re finding more and more that you’ve missed.
This is because the brain gets stuck at the first new concept/obstacle. It stops and tries to apply the concept but struggles to do so. But you continue to read the book, watch the video or listen to the speaker. The brain got stuck at the first point, but more points keep coming. And of course, without complete information, you have ‘incomplete information’.
Incomplete information can easily be fixed by making the mistake first hand.
Reason 2: Your brain needs to make the mistake first hand
No matter how good the explanation, you will not get it right the first time. You must make the mistake. And this is because your interpretation varies from the writer/speaker. You think you’ve heard or read what you’ve heard/read. But the reality is different. You’ve only interpreted what they’ve said, and more often than not, the interpretation is not quite correct. You can only find out how much off the mark you are by trying to implement or teach the concept.
So how do you avoid losing 90% of what you’ve learned?
Well, do what I do. I learn something. I write it down in a mindmap. I talk to my wife or clients about the concept. I write an article about it. I do an audio. And so it goes. A simple concept is never just learned. It needs to be discussed, talked, written, felt etc. (I wrote this article, ten minutes after reading these statistics online).
The next time you pick up a book or watch a video, remember this.
Listening or reading something is just listening or reading.
It’s not real learning.
Real learning comes from making mistakes.
And mistakes come from implementation.
And that’s how you retain 90% of everything you learn.
Which is why most of the people you meet are always going around in circles.
They refuse to make mistakes. So they don’t learn.
They’d rather read a book instead. Or watch a video. Or listen to an audio.
Their bucket is leaking 90% of the time.
But they don’t care.
The question is: Do you?