Rules of Thumb – Relics of the past or valuable nuggets of knowledge?


Rules of Thumb (ROTs) are easily learned procedures for estimation based on experience or common knowledge. This presentation will examine several ROTs related to the performance of applications and systems, ranging from those that have stood the test of time to those that have not.

Since the early days of the mainframe, performance analysts have been developing and using Rules of Thumb to aid in their quest to explain current system behavior and plan for the future. In the late 1980s, a number of CMG papers described ROTs that were focused on mainframe usage, applications and performance. Since then, with the rise of client/server, distributed systems and the Web, the breadth of ROTs has expanded.

Most ROTs were developed from observation or from queueing theory. What makes this an interesting topic is understanding the reasoning behind individual ROTs; i.e., the train of thought that led to the development of each ROT.

However, as with most things related to technology – the world changes. Some of yesterday’s ROTs are no longer applicable. There are other ROTs that have been revised over the years to account for technology changes. And then there is a set of ROTs that work today just as well as they worked years ago.

This presentation will examine a number of ROTs that fall into three categories: relics of the past, updated and revised to address today’s technology, and those that have successfully stood the test of time.


Presented by

Richard Gimarc, Consultant at RG

Richard Gimarc is an independent consultant specializing in Capacity Management.

Richard has leveraged his expertise in computer performance analysis and software development to solve a variety of complex technology and performance problems affecting a wide range of applications and compute platforms.

One of Richard’s unique attributes is his blend of theory and practice; he understands the theory of computer system performance analysis and modeling and has been able to apply it in practice. This combination enables him to organize, interpret, explain and exploit dependencies and interactions in a variety of data center environments.



IMPACT 2023 Proceedings Session Video: