Sunday, January 10, 2021
sketchnotes to learn Google Cloud Platform (GCP)

Priyanka Vergadia has collected a bunch of illustrated guides to GCP offerings in her fun "sketchnote" format.
Labels:
devops
FOSDEM 2021 is virtual on Feb 6 & 7

Here are the FOSDEM21 tracks for Feb 6 & 7. Find the free open source software you love and drop into the virtual sessions to chat with other like-minded coders.
Labels:
devops
cloud providers as a framework
(Total cost of ownership (TCO) is dominated by maintenance; public cloud providers maintain their components for you, so they tend to be less expensive in total.)
Labels:
devops
automated upgrades?
Remember when Python 3 was released? Guido van Rossum shipped the release with an automated upgrade tool called 2to3 that would translate your Python 2 code into Python 3 syntax automatically. Similarly, the Go team shipped Go 1.0 with a gofix command that would translate pre-1.0 code into Go 1 syntax.
James Abley is proposing that all package maintainers ship fully automated upgrade tools with each release of their packages. Minor-version (non-breaking) changes are already straightforward, and many tools exist to update manifests, build files & YAML. But major releases require transformers that actually change your code. It is a cool concept and appears feasible at first glance. Everyone needs them; the total addressable market is huge.
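The core of such a transformer is a codemod: parse the code, rewrite the AST, and emit new source. Here is a toy sketch using Python's ast module; old_fetch and new_fetch are hypothetical API names standing in for a real breaking rename.

```python
import ast

class RenameCall(ast.NodeTransformer):
    """Rewrite references to a hypothetical old API name to its new name."""
    def visit_Name(self, node):
        if node.id == "old_fetch":
            node.id = "new_fetch"
        return node

source = "result = old_fetch('https://example.com')\n"
tree = RenameCall().visit(ast.parse(source))
new_source = ast.unparse(tree)  # requires Python 3.9+
print(new_source)
```

Real tools like 2to3 and gofix work the same way, just with many such rewrite rules and careful preservation of comments and formatting.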
Labels:
devops
Saturday, January 9, 2021
Thursday, January 7, 2021
Sunday, January 3, 2021
strong types in APIs?
Sometimes we want to write fast, hacky code in weakly-typed languages (Perl, Python, JavaScript), or lazily stuff everything into String fields in Java. Other times we want strong typing, better observability, and easier maintenance. The folks at buf are trying to make Protocol Buffers as easy to use as JSON, and they make some strong arguments for stronger types in our APIs.
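The difference shows up at the API boundary. A minimal Python sketch, with a dataclass standing in for a protobuf message (the field names here are made up for illustration):

```python
import json
from dataclasses import dataclass

# Weakly typed: everything is a string or dict until it breaks at runtime.
raw = json.loads('{"user_id": "42", "active": "true"}')

# Strongly typed: a declared schema forces conversion and validation once,
# at the edge, instead of scattering str/int confusion through the code.
@dataclass
class User:
    user_id: int
    active: bool

def parse_user(d: dict) -> User:
    return User(user_id=int(d["user_id"]),
                active=d["active"] in (True, "true", "1"))

user = parse_user(raw)
print(user)  # User(user_id=42, active=True)
```

With protobuf the schema lives in a .proto file and the generated code does this conversion for you, in every language that consumes the API.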
Labels:
devops
cost transparency for developers

Lawrence Jones describes a somewhat esoteric and counter-intuitive example of the speed and cost savings teams can gain by applying compression in their own software components instead of relying on underlying libraries and formats (e.g. Avro) to compress where appropriate. The deeper insight, though, is about large organizations where developers are divorced from the true cost of running and operating their software.

In my "day job" we run co-located data centers, on-premises systems, and large-scale manual "IT" operations. A ticket-and-manual-task mentality dominates operation of the software services we develop. Large IT operations teams control cost by limiting capacity, which is in turn manually managed, allocated, and provisioned by humans via tickets and manual tasks. The unintended consequence of partitioning capacity all the way down to the node (and sometimes the hardware) level without auto-scaling is enormous over-provisioning: most of the capacity sits under-utilized because nothing ever scales VMs, memory, or storage back down. A more sinister consequence is that over-constrained teams start to "think small," limiting their software designs on the assumption that capacity will always be tiny and scarce.
The remedy, of course, is to show developers and their product teams the real cost of the network and computing resources their software consumes when it runs. No matter how much waste and bloat the manual operations infrastructure adds to that cost, clever software designers can work within the constraints to keep their services profitable. Clever developers can even use the APIs of the "ticket" systems to write software that programs humans in natural language, enabling auto-scaling within a pool of quota, albeit with extremely long latency.
Labels:
devops