Lawrence Jones describes a somewhat esoteric and counter-intuitive example of the speed and cost savings teams can gain by applying compression in their own software components instead of relying on underlying libraries and formats (e.g. Avro) to compress where appropriate. The deeper insight, however, applies to large organizations where developers are frequently divorced from the true cost of running and operating their software.

In my "day job" we run co-located data centers, on-premises systems, and large-scale, manual "IT" operations. A "ticket" and manual-task mentality dominates the operation of the software services we develop. Large IT operations teams control cost by limiting capacity, which in turn is manually managed, allocated, and provisioned by humans via tickets and manual tasks. The unintended consequence of partitioning capacity all the way down to the node, and sometimes even the hardware level, without auto-scaling is enormous over-provisioning -- most of the capacity sits under-utilized because nothing ever scales down VMs, memory, or storage. A more sinister consequence is that over-constrained teams start to "think small" and limit the capabilities of their software designs because they assume capacity will always be tiny and constrained.
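To make the compression point concrete, here is a minimal sketch of what "applying compression in your own component" can look like, rather than trusting the serialization or transport layer to do it. The `producer` object, its `send` signature, and the use of JSON instead of Avro are illustrative assumptions, not anyone's actual API:

```python
import gzip
import json


def publish_event(producer, topic: str, event: dict) -> None:
    """Compress the payload in application code before handing it to a
    hypothetical messaging client, instead of assuming the serialization
    layer or transport will compress it for us."""
    raw = json.dumps(event).encode("utf-8")
    compressed = gzip.compress(raw, compresslevel=6)

    # Only ship the compressed form when it actually saves bytes;
    # very small payloads can grow after compression.
    if len(compressed) < len(raw):
        producer.send(topic, value=compressed,
                      headers=[("content-encoding", b"gzip")])
    else:
        producer.send(topic, value=raw)
```

The point is not the three lines of gzip; it is that the team owning the payload also owns the bandwidth and storage bill, and so has both the knowledge and the incentive to shrink it.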
The remedy, of course, is to show developers and their product teams the real cost of the network and computing resources their software consumes when it runs. No matter how much waste and bloat the manual infrastructure operation adds to that cost, clever designers can work within the constraints to keep their services profitable. They can even use the APIs exposed by "ticket" systems to write software that programs humans in natural language, enabling a crude form of auto-scaling within a pool of quota, albeit with extremely long latency.
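A minimal sketch of that last idea: a service watches its own utilization and, instead of calling a cloud auto-scaler, files a natural-language ticket asking the operations team to add or remove capacity within its quota pool. The ticket endpoint, field names, and thresholds here are all hypothetical assumptions for illustration:

```python
import json
import urllib.request

TICKET_API = "https://tickets.example.internal/api/v2/requests"  # hypothetical endpoint
SCALE_UP_AT = 0.80    # file a ticket when sustained utilization exceeds this
SCALE_DOWN_AT = 0.30  # ask for capacity back when it drops below this


def file_capacity_ticket(service: str, utilization: float, quota_pool: str) -> None:
    """'Auto-scaling' by programming humans: open a ticket, in natural
    language, asking the ops team to grow or shrink capacity within the
    team's quota pool. Latency is measured in days, not seconds."""
    if SCALE_DOWN_AT <= utilization <= SCALE_UP_AT:
        return  # within the comfortable band; nothing to request

    action = "add one node to" if utilization > SCALE_UP_AT else "remove one node from"
    body = json.dumps({
        "queue": "infrastructure",
        "summary": f"Please {action} {service}",
        "description": (
            f"Sustained CPU utilization is {utilization:.0%}. "
            f"Please adjust capacity for {service} within quota pool {quota_pool}."
        ),
    }).encode("utf-8")

    req = urllib.request.Request(
        TICKET_API,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # Fire and forget; the "scaling loop" only closes when a human works the ticket.
    urllib.request.urlopen(req)
```

It is a grim kind of automation, but it makes the latency and the human cost of the existing process visible in the code itself.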