Friday, May 25, 2012

DB2 for z/OS Recovery: Getting it Right -- and Fast -- with Recovery Expert

Some people seem to be under the impression that recovery has drifted down, importance-wise, in the hierarchy of issues associated with running a DB2 for z/OS system. These folks might be thinking that, with the ever-increasing reliability of mainframe hardware, software, and storage systems, DB2 recovery operations just aren't required as often as they once were. On top of that, there's a feeling that point-in-time data recovery -- more complex than recover-to-currency -- is largely a thing of the past, owing to factors such as the increasing prevalence of round-the-clock transactional traffic accessing DB2 databases. Finally, there are the ongoing improvements in backup and recovery capabilities that are delivered with new releases of DB2 for z/OS, seen as mitigating some challenges that presented themselves in years past.

Well, these factors are indeed part of the current landscape: mainframe systems, long known for rock-solid reliability, are even more robust than before. Point-in-time data recovery is trickier when both online transactions and batch jobs update data in the same tables at the same time. DB2 for z/OS backup and recovery functionality has advanced in important ways in recent years. With all that said, however, I will tell you that recovery is as front-and-center in the world of DB2 for z/OS as it's ever been. Here's why:
  • The financial cost of downtime is escalating. It's an online world. Organizations' customers and clients demand access to data and services at all times. For some companies, the cost of data and application unavailability is measured in millions of dollars per hour.
  • The need for point-in-time data recovery is very much present in today's systems. Batch job input files still occasionally contain errors that have to be subsequently backed out of the database; furthermore, the iterative nature of application development common at many sites results in more-frequent database schema changes, and these at times need to be reversed.
  • With more options for DB2 for z/OS backup and recovery come more decisions. If there are multiple go-forward paths for a given recovery situation, which one is the best? If there are more recovery assets nowadays (e.g., system-level as well as object-level backups), how do you track and manage them?

Lots to think about. Fortunately, there is a tool available that can make you better at DB2 for z/OS recovery than you were before. As the voice-over from the beginning of the old television series, The Six Million Dollar Man, put it (yes, I'm dating myself here): "Better. Faster. Stronger." That tool is IBM's DB2 Recovery Expert for z/OS.

Recovery Expert delivers the greatest value when you need it most: 2:00 AM, data unavailable, application programs failing, phone ringing non-stop, escalation to upper management imminent. Pressure's on, cost of mistakes is high, time is absolutely of the essence. Do you even know what your recovery options are? Will you choose a good one? Are the required recovery assets in place (backups, log files, etc.)? Will you get the JCL right? Will you try to multi-thread some of the recovery processing? You can go with your gut and hope for a good outcome, or you can raise the odds of success by getting the advanced technology of Recovery Expert on your side. Fire up the GUI (browser-based starting with the recently announced 3.1 release) or the ISPF interface, point and click (or populate ISPF panel fields) to input information on the recovery task at hand, and let Recovery Expert do its thing. You'll quickly be presented with a list of alternative recovery procedures, with the best-performing option on top. Make the choice that's right for the situation (minimization of elapsed time might be your objective, or you might be focused on CPU consumption or some other aspect of the recovery operation), and leave it to Recovery Expert to build and execute the jobs that will get things done right the first time around. You even have the ability to specify a degree of parallelization for processes involved in the recovery operation. Speed and accuracy are significantly enhanced, and you get to tell the higher-ups that the problem is resolved, versus fighting through a storm of panic and stress.

Hey, and outside those 2:00 AM moments (which, I hope, are few and far between for you), Recovery Expert can deliver all kinds of value for your company. Want to reverse a database schema change without leaving behind all the associated DB2 stuff -- not only indexes and packages, but triggers, views, stored procedures, and authorizations -- that would be impacted by an unload/drop/re-create/re-load operation? Check. Want to restore an accidentally dropped table? Check. Want to accelerate the leveraging of DB2 10 features such as roll-backward recovery? Check. Want to quickly implement DB2 system-level backups to make your backup procedures much simpler and more CPU-efficient? Check. Want to assess available recovery assets to ensure that you have what you need to accomplish various recovery tasks? Check. Want to find "quiet times" in the DB2 log to which you can aim point-in-time recovery operations? Check. Want to accommodate application-based referential integrity in your recovery operations? Check. Want to translate a timestamp to which you want to restore data to the requisite DB2 log RBA or LRSN? Check. Want to create a backup that can be handed over to the IBM DB2 Cloning Tool for z/OS to clone a subsystem? Check. Want to simplify the bringing forward of your existing backup and recovery procedures as part of a migration to DB2 10 for z/OS? Check.

That's a lot of checks, and I haven't even gotten into all that's there. Believe me, Recovery Expert addresses so many needs associated with DB2 for z/OS recovery, once you start using it you may wind up wondering how you were able to get along without it. The more data you have, the more DB2 subsystems you have, the more complex and demanding your DB2 application environment is, the greater the Recovery Expert payoff. Get Recovery Expert, and get bionic with your DB2 recovery capabilities. "We have the technology..."

Thursday, May 3, 2012

Administering (and Changing) a DB2 for z/OS Database: The Right Tools for the Job

A lot of DB2 for z/OS DBAs are being asked to do more these days:
  • There are more database objects to look after (sometimes tens of thousands of tables, indexes, and table spaces in one DB2 subsystem).
  • There is a greater variety of database objects to manage -- not just tables and such, but stored procedures, triggers, user-defined functions, sequences, and more.
  • There are more options available with respect to object definition and alteration. Examples of these new options include hash organization of data in a table, DDL-specified XML schema validation, in-lining of LOB data in a base table, and data versioning by way of temporal data support.
  • There are new demands for physical database design changes. DB2 10 provides, among other things, a non-disruptive process for converting simple, segmented, and "classic" partitioned table spaces to universal table spaces (a quick sketch of that conversion appears just after this list).
  • The number of DB2 environments at many sites has gone way up, and by "environments" I don't necessarily mean DB2 subsystems (though some organizations have scores of these). Even if the number of DB2 subsystems at your company is in the single digits, in just one of those subsystems you could have multiple different development and/or test "environments," which might be in the form of different schemas (and these might contain the same tables, with slight variations in names or logical or physical design).
  • The data security situation is under more scrutiny than ever. In a given subsystem, who has what privileges on which objects (a sample catalog query along these lines appears after this list)? And what about all those new security objects (e.g., roles, trusted contexts, row permissions, column masks) and privileges (EXPLAIN, "system" DBADM, DATAACCESS, etc.) introduced with DB2 9 and DB2 10 for z/OS?
  • Documentation of database changes is a high priority. Management is increasingly demanding that a historical record of database changes be maintained.
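
As an aside on that table space conversion item: the DB2 10 process boils down to a pending ALTER followed by an online REORG that materializes the change. Here is a minimal sketch, with hypothetical database and table space names (MYDB.MYTS1 and MYDB.MYTS2); check the DB2 10 documentation for the rules that apply to your objects (for example, a simple or segmented table space being converted this way should hold just one table):

  -- Simple or segmented table space to partition-by-growth universal (pending change):
  ALTER TABLESPACE MYDB.MYTS1 MAXPARTITIONS 256;

  -- "Classic" partitioned table space to partition-by-range universal (pending change):
  ALTER TABLESPACE MYDB.MYTS2 SEGSIZE 32;

  -- Either pending change is materialized by a subsequent online REORG, e.g.:
  -- REORG TABLESPACE MYDB.MYTS1 SHRLEVEL CHANGE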
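
And on the "who has what privileges" question: the answer is in the catalog, and a query like the one below (schema and table names are hypothetical) is the kind of thing you'd otherwise compose by hand against SYSIBM.SYSTABAUTH -- just one of several catalog tables you'd have to check once roles, row permissions, and column masks enter the picture:

  -- Who can read or change table PRODSCHM.CUSTOMER, and who granted the privilege?
  SELECT GRANTEE, GRANTEETYPE, GRANTOR,
         SELECTAUTH, INSERTAUTH, UPDATEAUTH, DELETEAUTH
    FROM SYSIBM.SYSTABAUTH
   WHERE TCREATOR = 'PRODSCHM'
     AND TTNAME   = 'CUSTOMER'
   ORDER BY GRANTEE;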

And with all this extra stuff on DBAs' plates, is help on the way in the form of more arms and legs to get things done? Not likely: organizations are still running pretty lean and mean with respect to IT staffing, and DBA teams are generally not growing in proportion to the work they are expected to accomplish. What you need is a DBA accelerator -- something that enables a DB2 for z/OS administrator to do what he or she could already do, only faster. That accelerator is available in the form of IBM's DB2 Administration Tool for z/OS (which I sometimes call the DB2 Admin Tool) and its complement, the DB2 Object Comparison Tool for z/OS. Together, these offerings enable a DB2 DBA to get work done more time-efficiently than ever, and with higher quality, to boot (helping you avoid the errors that tend to creep in when you try to get more done in less time simply by hurrying). Consider just some of the possibilities:
  • Accelerate the answering of questions about your mainframe DB2 environment. Sure, information needed to respond to almost any question about a DB2 system can be found in the catalog, and you can always query those tables via SELECT statements (a sample of such a query appears after this list). Or, use the Admin Tool's catalog navigation interface and get your answers faster.
  • Accelerate the issuance of DB2 commands and SQL statements. Rather than free-forming these, relying on your recollection of syntax (or having the SQL Reference or Command Reference handy), use the statement- and command-build assist functionality of the Admin Tool to get it right -- fast.
  • Accelerate the leveraging of DB2 TEMPLATE and LISTDEF functionality. TEMPLATE (for controlling the names and other characteristics of data sets that can be dynamically allocated by DB2 utilities such as COPY) and LISTDEF (used when you want a utility job to be executed for a group of objects, such as all table spaces in a given database) are productivity-boosting functions delivered with (as I recall) DB2 V7. TEMPLATEs and LISTDEFs can save you a lot of time when it comes to defining and managing DB2 utility jobs, but you have to set them up first (a sample setup appears after this list). The DB2 Admin Tool helps you to do that -- fast.
  • Accelerate analysis of DROP and REVOKE actions and avoid "gotcha" outcomes. The DB2 Admin Tool will show you the impact of, for example, dropping an object or revoking a privilege -- BEFORE you do the deed.
  • Accelerate the copying of statistics from one DB2 catalog to another for SQL statement access path analysis. If you want some assurance that access paths seen for SQL statements in a test environment are those that you'd get for the statements in the production DB2 system, the statistics for the target objects in the test DB2 subsystem's catalog had better match those in the production catalog. I know from personal experience that manually migrating a set of statistics from one DB2 catalog to another is a chore (a sketch of what the manual approach involves appears after this list). The DB2 Admin Tool makes this process easy -- and fast.
  • Accelerate tracking of DB2 database changes, and foster collaboration in the implementation of those changes. The DB2 Admin Tool stores information about database changes in a repository (a set of DB2 tables), from which data can be quickly retrieved when needed. Does anyone actually enjoy documenting database change plans and operations? I don't. Let the Admin Tool take the documentation load off of your shoulders.
  • Accelerate the restoration of a database to a previous state. Want to take a changed database back to a preexisting state (something that can be very useful in a test environment)? Easily done with the DB2 Object Comparison Tool.
  • Accelerate the generation of database comparison information that matters. Maybe you want to compare two different databases (in the same DB2 subsystem or in different subsystems), but you don't want the output of the comparison operation to be cluttered with information that doesn't matter to you. For example, you KNOW that the schema name of objects in database A is different from the schema name used for those objects in database B, or maybe you KNOW that primary and secondary space allocation specifications are different in the two databases. The DB2 Object Comparison Tool provides masking ("when you see schema name X in the source, translate that to Y in the target for purposes of this comparison") and "ignore" functions ("in comparing this source to that target, disregard differences in PRIQTY and SECQTY values") so that you can get the comparison result information that is important to you.
  • Accelerate the propagation of database changes made in (for example) a development or test environment to production. Database designs tend to evolve over time. For various reasons, an individual table might be split into two tables, or vice versa; a column might be added to or removed from a table; a table's row-ordering scheme might change from clustered to hash-organized. Following successful testing, these changes have to be reproduced in the production system. The DB2 Object Comparison Tool can generate the change actions (ALTER, DROP, CREATE, etc.) needed to bring a target environment in sync with a source environment, and those changes can be automatically applied to the target system.
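
First, on catalog queries: the sample below (schema and table names are hypothetical) is representative of the kind of SELECT you'd otherwise compose by hand -- the Admin Tool's catalog navigation interface gets you the same answer with a few keystrokes:

  -- Which indexes are defined on table PRODSCHM.ORDERS, and which is the clustering index?
  SELECT NAME, UNIQUERULE, CLUSTERING, FIRSTKEYCARDF, FULLKEYCARDF
    FROM SYSIBM.SYSINDEXES
   WHERE TBCREATOR = 'PRODSCHM'
     AND TBNAME    = 'ORDERS'
   ORDER BY NAME;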
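
Second, on TEMPLATE and LISTDEF: the control statements below are a minimal sketch of what you'd place in a utility job's SYSIN. The database name (PAYROLL), the list and template names, and the data set name pattern are all hypothetical, and the full set of TEMPLATE variables and options is documented in the DB2 Utility Guide and Reference:

  TEMPLATE COPYTMPL
    DSN 'DB2IC.&DB..&TS..D&DATE..T&TIME.'
  LISTDEF PAYTBS
    INCLUDE TABLESPACES DATABASE PAYROLL
  COPY LIST PAYTBS
    COPYDDN(COPYTMPL)
    SHRLEVEL CHANGE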
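
And third, on copying catalog statistics: doing it by hand means pulling the relevant values from the production catalog and running UPDATE statements against the statistics columns of the test subsystem's catalog (SYSTABLES, SYSINDEXES, SYSCOLUMNS, and -- for distribution statistics -- SYSCOLDIST, among others). The statements below, with hypothetical object names and made-up values, are a minimal sketch of why this becomes a chore when dozens of objects are involved:

  UPDATE SYSIBM.SYSTABLES
     SET CARDF = 52000000, NPAGESF = 1800000
   WHERE CREATOR = 'PRODSCHM' AND NAME = 'ORDERS';

  UPDATE SYSIBM.SYSINDEXES
     SET FIRSTKEYCARDF = 250000, FULLKEYCARDF = 52000000,
         NLEAF = 210000, NLEVELS = 4, CLUSTERRATIOF = 0.95
   WHERE TBCREATOR = 'PRODSCHM' AND TBNAME = 'ORDERS';

  UPDATE SYSIBM.SYSCOLUMNS
     SET COLCARDF = 250000
   WHERE TBCREATOR = 'PRODSCHM' AND TBNAME = 'ORDERS' AND NAME = 'CUST_ID';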

That's a pretty good list of capabilities, but it's by no means complete. You can get more information via the products' respective Web pages (pointed to by the links near the beginning of this entry), and LOTS more information from the Administration Tool and Object Comparison Tool user's guides (the "Product library" link on each product's Web page will take you to a page from which you can download a PDF version of the user's guide).

And the story keeps getting better. IBM is committed to the ongoing enhancement of its tools for DB2 on the System z platform. An important new feature for the DB2 Administration Tool and the DB2 Object Comparison Tool that was recently delivered via PTFs (the associated APARs are PM49907 for the Admin Tool and PM49908 for the Object Comparison Tool) is a batch interface to the products' change management functions. With CM batch (as the new feature is known), users can set up and reuse batch jobs to drive change management processes -- an alternative to the ISPF panel interface that is particularly useful for database change-related actions that are performed on a regular basis. John Dembinski, a member of the IBM team that develops the DB2 Administration Tool for z/OS and the DB2 Object Comparison Tool for z/OS, has written a comprehensive article that describes in detail the capabilities and uses of CM batch. This article should be showing up quite soon on IBM's developerWorks Web site. When I get the direct link to the article I'll provide it via a comment to this blog post.

A closing thought. Something I really want you to understand about the DB2 Administration Tool for z/OS and the DB2 Object Comparison Tool for z/OS is this: the products are not training wheels for inexperienced mainframe DB2 DBAs. To say that these tools are DB2 administration training wheels would be like saying that co-polymer monofilament strings are tennis training wheels for Rafael Nadal. Those high-tech racquet strings didn't make Rafa a great tennis player. Rather, they make him even better at the game than he otherwise would be. Sure, if you're pretty new to DB2 for z/OS then the Admin Tool and the Object Comparison Tool can help you to get productive in a hurry. If, on the other hand, you're a mainframe DB2 veteran, the Admin Tool and the Object Comparison Tool can provide a multiplier effect that will enable you to leverage your knowledge and skills more extensively than you may have thought possible. Wherever you are on the spectrum of DB2 for z/OS experience, YOU are the reason you're good at what you do. With the DB2 Administration Tool and the DB2 Object Comparison Tool, you can become you-plus. Maybe even you-squared. Check 'em out, and get accelerated.