September 1, 2020

Neil Gunther, M.Sc., Ph.D., is a world-renowned performance expert, the author of several books, and the father of the Universal Scalability Law (USL). During this virtual event, Dr. Gunther will speak with Alex Podelko about his research and achievements. Among other topics, they plan to talk about the USL, PDQ, guerrilla capacity planning, and capacity planning for the cloud.
For the video, click here: https://www.cmg.org/2020/07/legends-series-a-fireside-chat-with-dr-neil-gunther/
If you were to write a book in the 2020s about the computational power of the cloud, how would it be different from any of your books? Also, are Amdahl's law and other queueing theories still relevant in this cloud era?
Apart from updating some of the technologies, e.g., multicores, cloud, etc., I expect the book content would otherwise be very similar, if not identical. When I wrote The Practical Performance Analyst back in the late 90s, one of my motivations was to write it in such a way that the knowledge and the techniques would be time-invariant. I think that has worked out as expected. That really should come as no surprise. Take Little’s law. That law or metric relationship has not died just because the cloud arrived. It’s eternal.
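Little's law, mentioned above, can be stated in a few lines. As a concrete illustration (not from the interview), here is a minimal Python sketch; the arrival rate and residence time are hypothetical numbers chosen for the example.

```python
# Little's law: L = lambda * W
#   L      = mean number of requests in the system
#   lambda = mean arrival rate
#   W      = mean time a request spends in the system
# The workload numbers below are hypothetical, for illustration only.

arrival_rate = 50.0    # requests per second (lambda)
mean_residence = 0.2   # seconds per request in the system (W)

mean_in_system = arrival_rate * mean_residence  # L
print(mean_in_system)
```

The relationship holds regardless of the service discipline or arrival distribution, which is exactly why it survives every platform shift, mainframe to cloud.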
Similarly, with Amdahl’s law (or the USL). You can’t truly understand scalability in a quantitative way without understanding such laws. They are fundamental performance frameworks. One of the reasons I was able to quickly correct the initial PDQ model, that was based on the original data from the AWS cloud, is that I know instinctively how queues operate. I know the queueing laws thoroughly. Queueing theory is now 100 years old and I now know that it can still be applied to the cloud.
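To make the USL concrete, here is a minimal Python sketch of its capacity formula; the contention and coherency coefficients used below are made up for illustration and are not from the interview.

```python
# Universal Scalability Law (USL): relative capacity at load N,
#   C(N) = N / (1 + sigma*(N - 1) + kappa*N*(N - 1))
# where sigma models contention (the Amdahl term) and kappa models
# coherency delay. Coefficient values here are hypothetical examples.

def usl_capacity(n, sigma, kappa):
    """Relative capacity C(N) under the USL."""
    return n / (1 + sigma * (n - 1) + kappa * n * (n - 1))

sigma, kappa = 0.02, 0.0005  # illustrative coefficients
for n in (1, 16, 64, 256):
    print(n, round(usl_capacity(n, sigma, kappa), 2))
```

With kappa = 0 this reduces to Amdahl's law; with kappa > 0 the curve eventually bends back down, which is the retrograde scaling the USL captures and Amdahl's law cannot.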
How do today’s performance analysts differ from the old timers who grew up in the 80s and 90s?
Some old timers grew up on mainframes in the 60s and 70s (so I've read). I think the main difference is what led me to introduce my Guerrilla approach. Originally, performance was analyzed from the standpoint of minimizing the cost of expensive mainframe MIPS. Let's call that strategic planning. Then, when client-server and Unix started to take off in the 90s, the cost of the iron was no longer the main focus. Instead, it was replaced by the need to make rapid decisions about the scalability of multi-tier architectures and application performance, i.e., tactical planning. Today, with elastic capacity in the cloud and in-situ virtualization, a certain level of scalability can be guaranteed, and the focus has returned, rather ironically, to the cost of cloud MIPS. Or it should do. I'm not yet fully convinced that management has really grokked this point. In that sense, we've come full circle.
For more details on this point, see “How to Scale in the Cloud: Chargeback is Back, Baby!” that I presented at Rocky Mountain CMG.