The culture of an organisation goes much deeper than just how the work is done. It’s a collective mindset that can make or break any business and it needs to evolve with the organisation’s needs and technological advances.
How do you apply that to an organisation's IT systems, then? In the early days, a lot had to be learnt to fit IT and coding into an organisation's business processes.
IT is now an integral part of our working environments and has become a major driver of change. The last 10 years have seen us work in less-than-traditional ways to keep pace with it.
This year has brought even more change than ever as the world grapples with the COVID-19 pandemic, which has pushed large portions of our organisations into remote working. At times we have even had to try things first and patch later, figuring out as we go what works and what doesn't.
Organisations looking for a competitive edge are often forced to source niche skillsets offshore. Ensuring consistency across the diverse landscape of computing environments can be a full-time job for the quality assurance (QA) and test team.
It was always going to be a trial and error process in which everyone learnt as they went along.
At this level of complexity, it's clear to everyone why lead times are four months rather than one week in an era when the waterfall methodology is still so prevalent.
Those new ideas the devs wanted to try out? They disappeared somewhere in the document prepared for project management three to six months ahead.
Working traditionally in this way means answering as many questions as possible before the project starts. Frontloading like this was done for good reason: compute, storage and tooling were expensive and had to be budgeted and organised well before the project kicked off.
When that’s done, the rest of the project is meant to fall into place in a controlled sequence of events.
Except it doesn’t usually work like that, and you get logjams and bottlenecks in the rigid waterfall process, leading to the dreaded moment when management asks for a progress report and you’re still trying to nail down the specs with different stakeholders.
Stepping back, it's not hard to see how a technology base that is 40-50 years old can cause stagnation.
Processes created during decades of operation have led to the dreaded 'technical debt'. This means many organisations run with the brakes on instead of being dynamic and responsive to the market.
In that scenario, trying to work with an outdated waterfall methodology in today's fast-moving business environment, with demanding users and customers now at the centre, leaves dissatisfied developers polishing up their CVs or even considering career changes.
That's probably not what any organisation already struggling to find and retain talent wants. Losing staff is another risk you can't afford if, like so many other organisations, yours is under threat from industry disruptors.
Slow patching, long lead times for new features, too few products reaching the market, and audits that demand substantial reactive effort are all indicators of an organisation that's holding out rather than moving forward.
Change might be scary, but it's a necessary good. Failing isn't a bad thing either; not being able to recover from failure is. Think about it: a big, monolithic system, full of complex processes and parts that no one is quite sure who owns, is not only prone to failure by its very nature but a nightmare to recover as well.
It’s a problem plaguing the public sector in particular, which has traditionally been a victim of 'blame culture', forcing it to be risk-averse by default since it is under heavy scrutiny.
This is nothing new. Many organisations have been there, done that, and moved to agile technologies and processes that bypass these problems, so you don't have to waste time on them either.
Key to that is using containers: lightweight, portable runtime environments that provide everything your apps need to run in the cloud. They allow developers to work fast, try out new features and, if those don't work, fail fast and move on.
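To make that concrete, here is a minimal sketch of that "try it, then throw it away" workflow. It assumes Docker is running locally and the `docker` Python SDK is installed; the image and command are purely illustrative, not part of any specific offering.

```python
# Sketch only: assumes a local Docker daemon and the `docker` Python SDK
# (pip install docker). Image and command are illustrative.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Spin up a disposable container, run a quick experiment, capture its output.
output = client.containers.run(
    "python:3.12-slim",                                   # lightweight, portable runtime
    ["python", "-c", "print('trying out a new idea')"],   # the experiment itself
    remove=True,                                          # discard the container on exit
)
print(output.decode().strip())

# If the idea doesn't pan out, nothing lingers: fail fast and move on.
```

Because the same image runs unchanged on a laptop or in the cloud, an experiment that does work can move towards production without being rebuilt from scratch.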
All of this only pays off if it's implemented right, though. Containerised apps are orchestrated with the open-source Kubernetes (K8s) tool, and we often get into discussions that focus on pricing for DIY K8s versus our managed Red Hat OpenShift-based offerings for orchestration.
It may be tempting to look at monthly charges in the thousands for DIY and think that's better than a managed K8s solution costing tens of thousands. Cheap is good, right? But is it really cheap? K8s and its associated systems take real effort to figure out and set up properly, and that costs money in staffing and tooling. Managing the infrastructure takes time, too.
Managed K8s means your organisation benefits from not having to manage system software and infrastructure.
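In practice, "not having to manage the infrastructure" means your team interacts with the cluster purely through the Kubernetes API. Here's a minimal sketch, assuming you have a kubeconfig for a managed cluster and the official `kubernetes` Python client installed; the deployment name is hypothetical.

```python
# Sketch only: assumes the official Kubernetes Python client (pip install kubernetes)
# and a kubeconfig pointing at your managed cluster.
from kubernetes import client, config

config.load_kube_config()   # credentials for the managed cluster; no servers to build

# Scale a (hypothetical) deployment to handle more load; the provider looks after
# the nodes underneath, not your team.
apps = client.AppsV1Api()
apps.patch_namespaced_deployment_scale(
    name="web-frontend",            # hypothetical deployment name
    namespace="default",
    body={"spec": {"replicas": 5}},
)

# Check what's running across the cluster.
core = client.CoreV1Api()
for pod in core.list_pod_for_all_namespaces(watch=False).items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)
```

The day-to-day work stays at this level: deployments, scaling and health, rather than patching operating systems or replacing failed nodes.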
Take the easy way out and head down the low-code route, basically. If you can see what that means – lower complexity and faster, easier development – you're off to a good start. Accept that paths may change as a project develops and adapts, and that some things won't work and will fail, but avoid getting stuck there.
That’s a real culture change, a different philosophy if you like, and it will bring new capabilities to an organisation.
And if your change advisory board is worried that Jenkins will replace it, tell them not to worry: they'll be happy to have continuous integration and delivery (CI/CD) output fed into their systems from Jenkins, making their tasks easier and faster.
Don't let comfort with the familiar keep your organisation from modernising. As the demands on your organisation become more urgent and complex, count on Datacom's cloud containers to give you a more agile and responsive infrastructure. Contact Datacom today to usher your business into the future.