Cloud vs Traditional Data Center vs Hosted Service vs Indoor Box

Recently, I ran across an article on LinkedIn discussing how one company decided to switch from the cloud back to an in-house data center. The writer mostly touted cost savings as the major win but didn’t go into much detail about what the move actually entailed. As someone who has worked in a data center and has used the cloud, hosted services, and even an on-premises box (i.e. THE in-house data center), I can tell you the article and writer were full of shit for anyone looking for a silver bullet/golden hammer solution. Of course, the mass appeal, especially these days, is the cost-savings angle. And for this particular company, with its type of code and business use case, the shift possibly made a lot of sense. But what the article doesn’t really talk about is the difficulty of making this massive paradigm shift and the other overhead it can bring.

The chief appeal of a cloud-based solution for the majority of users out there is that services like AWS, GCP, Azure, etc. have already absorbed the hard lessons of setting up infrastructure, scaling, tooling, and staffing for the average use case. Using their own experience, techniques, strategies, etc., these companies have essentially monetized those lessons as software for various use cases. Scaling, for instance, is a nightmare to deal with. Want more users? Add more database servers with read replicas, plus some form of caching. Each of those instances might require separate boxes with different hardware requirements. Let’s not even get into the networking, cooling, and space costs, or the management of racks, for a true do-it-yourself solution. And that’s just the basics.
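To make the replicas-plus-caching idea concrete, here’s a minimal, self-contained sketch of the pattern in Python. The dicts stand in for a primary database, its read replicas, and a cache like Redis; in a real deployment these would be separate servers, and all names here are illustrative.

```python
import random

# Toy stand-ins for a primary database, its read replicas, and a cache.
primary = {}
replicas = [dict() for _ in range(2)]
cache = {}  # stand-in for something like Redis or Memcached

def write(key, value):
    """Writes go to the primary, then propagate to the replicas."""
    primary[key] = value
    for replica in replicas:   # replication (synchronous here, for simplicity)
        replica[key] = value
    cache.pop(key, None)       # invalidate any stale cached copy

def read(key):
    """Cache-aside read: check the cache, then fall back to a replica."""
    if key in cache:
        return cache[key]
    replica = random.choice(replicas)  # crude load balancing across replicas
    value = replica.get(key)
    cache[key] = value
    return value
```

Even this toy version shows why the real thing is painful: every box in the sketch is its own machine to provision, network, patch, and monitor.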

And while someone with 25 years of experience like myself can set up a fresh Linux instance to do a lot of this stuff, it’s painful to manage over time. Cloud services typically offer command line, web, and API interfaces to manage services and instances. And because of the complexity of the software, you end up requiring not only those interfaces but also additional services like Kubernetes to manage it all.
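The core of what an orchestrator like Kubernetes automates can be boiled down to a reconciliation loop: compare desired state against actual state and start whatever is missing. Here’s a toy sketch of that idea; all the service names and structures are made up for illustration, not any real API.

```python
# Toy reconciliation loop: the central idea behind orchestrators like
# Kubernetes. "desired" says how many copies of each service should run;
# the loop starts replacements until reality matches.
desired = {"web": 3, "worker": 2}
running = {"web": ["web-0", "web-1"], "worker": []}  # one web copy died, workers never started

def reconcile(desired, running):
    """Start replacements until actual state matches desired state."""
    for service, want in desired.items():
        have = running.setdefault(service, [])
        while len(have) < want:
            name = f"{service}-{len(have)}"
            have.append(name)  # in reality: schedule a container onto a node
    return running

state = reconcile(desired, running)
```

Doing this by hand across dozens of services is exactly the kind of toil that pushes people toward orchestration tooling in the first place.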

The other major cost of a self-managed solution such as a traditional data center is that you need specialized on-premises people to run it. Once again, the whole office of the CIO comes into play as you begin requiring system administrators, network engineers, DBAs, people managing the hardware itself, security administrators, etc., where your true cost ends up being a people problem. Many of these people aren’t cheap, and most tend to have nasty personalities due to the nature of their work. On top of that, you need to find people with that type of experience to do it properly. So there are numerous levels of issues that will come out of a self-hosted, company-run data center.

From there, the major issue becomes: where the hell do you even set up a data center these days? Companies like Google, Amazon, Meta, etc. end up doing crazy things to run their data centers, from cooling solutions in the ocean to solar-powered buildings out in the middle of the desert. But these are also engineering-first companies, with an inherent culture driven in large part by engineering. And by engineering I don’t just mean code; I mean the whole problem-solving mindset that 99% of the companies out there are simply not equipped to handle.

Where a traditional data center makes sense is where you already have the money and infrastructure in place. In my case, I worked at Citigroup, and they already had physical data centers around the world managed by large groups of the types I enumerated above. On top of that, given the sensitive nature of the software and information they hold, it makes almost no sense for them to run their systems on the cloud. Are all the people involved competent, or all their proprietary systems built efficiently? Probably not. But companies like that tend to have their own sets of standards, builds, and specialized ways of managing software and hardware, so the transition, while sexy sounding, would actually be a detriment.

So what about hosted services vs., say, the cloud? Hosted services that aren’t the size of Amazon are great for much smaller situations. A personal blog, project, or starter-kit kind of deal is ideal where cost is paramount. It makes no sense for me to go to AWS when I can do the majority of things on a single instance. Even ecommerce probably only needs something like Magento (with tons of caching) or Shopify.
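The “tons of caching” that keeps a single-instance ecommerce site alive usually amounts to memoizing rendered pages for a short TTL. A minimal sketch of that idea, in plain Python with illustrative names (not Magento’s or any framework’s actual API):

```python
import functools
import time

# Minimal page-cache sketch: memoize rendered pages for a TTL, so the
# expensive render only happens once per path per window.
def page_cache(ttl_seconds):
    def decorator(render):
        store = {}  # path -> (rendered page, timestamp)

        @functools.wraps(render)
        def wrapper(path):
            hit = store.get(path)
            now = time.monotonic()
            if hit is not None and now - hit[1] < ttl_seconds:
                return hit[0]          # serve the cached copy
            page = render(path)        # the expensive part
            store[path] = (page, now)
            return page

        return wrapper
    return decorator

render_count = {"n": 0}  # just to show the cache actually short-circuits

@page_cache(ttl_seconds=60)
def render_product_page(path):
    render_count["n"] += 1
    return f"<html>rendered {path}</html>"
```

With a cache like this in front, one modest box can absorb traffic that would otherwise justify far more infrastructure than a small shop needs.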

Then there’s the onsite hosted “data center”. I wouldn’t even call it a data center, just a locally running box that serves an intranet (now there’s a term I bet most people have long forgotten). Think of a non-scaling solution: a self-hosted box built from PC parts, running Linux, connected to an internal network, providing simple proprietary applications for a company’s own users (not external ones). These days anyone can set something like this up. The only wrinkle is external access: if people wanted to connect to the application from home, you would need a static IP. But that’s more of an edge case, and at that point a hosted service would probably replace whatever was being done anyway.
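How small can that indoor box really be? Here’s a sketch of a bare-bones internal-only app using nothing but Python’s standard library; the app content and `serve` helper are made up for illustration. Binding to a private address is what keeps it reachable only on the internal network.

```python
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# Bare-bones sketch of the "indoor box": a tiny internal-only app served
# from one machine with just the standard library.
class IntranetApp(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Internal tools index"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the toy server quiet
        pass

def serve(host="127.0.0.1", port=0):
    """Port 0 lets the OS pick a free port; a real box would pin one."""
    return ThreadingHTTPServer((host, port), IntranetApp)
```

Swap the loopback address for the box’s LAN address and anyone on the internal network can reach it; nothing outside can, which is the whole point.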

I suppose the point here is truly understanding what scale is and when to use a certain type of hosting. I think hosted and cloud are the most optimal choices for the bulk of companies out there. Running your own data center simply requires too much capital and work, though it ultimately depends on the nature of one’s business. On-premises is great for DIY companies or even college projects, but it still eventually means running one’s own hardware and dealing with maintenance.


