You said this:

> As a Googler, it's often easier for me to setup a GCP consumer account, AWS, or Heroku account to demo something, compared to using anything internal.

I get that you're trying to make a point about it being easier to do something elsewhere, but then why even throw in the "as a Googler" bit without clarifying that you're not really working on anything of consequence where you'd actually be asked to host things internally?

You're basically hosting open source projects on AWS.



Why does it matter whether I'm working on something for production or on a 20% project? The point is, it shouldn't have been so hard for me to set up and run a simple server (billed to my division's cost center) that I ended up paying out of my own pocket to use a competitor's product (and going through the hassle of expensing it). That's a bitter pill for someone working at a company that had one of the most advanced data centers in the world. There are lots of times when developing a product internally that we need to host internal prototypes for demos, and GCP/AWS is a lot easier to use than the internal tooling. Even today, using our internal GCP instance is more difficult than using the external one.

A bigger reason AWS was chosen was that App Engine, IIRC, couldn't support WebSockets back in those days (among other things), because the frontend load balancers that GCP's predecessor used didn't know how to handle them. The Chrome team was pushing heavily on new Web APIs like WebSockets, but by the time those were ready to ship to market, our own cloud infrastructure still didn't support them.
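For context, the kind of thing that couldn't run there was any server holding a long-lived bidirectional connection, which request/response load balancers of that era would tear down. A minimal sketch, assuming the third-party Python `websockets` package (not anything App Engine specific; the port is a placeholder):

    import asyncio
    import websockets  # pip install websockets

    async def echo(websocket):
        # The connection stays open indefinitely -- the part that
        # HTTP-oriented frontends of the time couldn't deal with.
        async for message in websocket:
            await websocket.send(message)

    async def main():
        async with websockets.serve(echo, "0.0.0.0", 8080):
            await asyncio.Future()  # run forever

    if __name__ == "__main__":
        asyncio.run(main())

Nothing exotic; it just needs an upgrade-aware proxy in front of it, which is exactly what was missing.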

I'd go further and say that using internal infra to host employee 20% projects that aren't really for production is exactly the kind of dogfooding that would have generated the pain points necessary to make a product like AWS, because those hobby workloads are far more similar to what the average GCP customer deploys. Because we didn't really prioritize that, we kind of missed the boat on Cloud and were a late entrant, even though we'd already had most of the foundational technology (e.g. containers) for a long time.


An interesting contrast: for as long as I can remember (I've been at Amazon for 5 years), we've had burner AWS accounts available for testing/prototyping that get auto-closed after a week. Many devs, myself included, also have a personal long-lived AWS account for the infra stacks of the services we're working on. These accounts get billed directly to the team's fleet budget. As long as you're using them for work-related things (i.e. not mining crypto or hosting a Minecraft server) and not racking up massive bills without a good reason, it's fine.
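A hypothetical sketch of what that auto-close lifecycle could look like from the outside, using AWS Organizations via boto3; the `burner` tag and the 7-day window are my assumptions, not Amazon's actual internal tooling:

    from datetime import datetime, timedelta, timezone

    import boto3

    MAX_AGE = timedelta(days=7)

    def close_expired_burners():
        org = boto3.client("organizations")
        now = datetime.now(timezone.utc)
        for page in org.get_paginator("list_accounts").paginate():
            for account in page["Accounts"]:
                tags = org.list_tags_for_resource(ResourceId=account["Id"])["Tags"]
                is_burner = any(t["Key"] == "burner" and t["Value"] == "true" for t in tags)
                # JoinedTimestamp is when the account entered the org,
                # a reasonable stand-in for its creation time here.
                if is_burner and now - account["JoinedTimestamp"] > MAX_AGE:
                    org.close_account(AccountId=account["Id"])

Run from a scheduled job in the org's management account, something like this is all the "expiry" mechanism really needs to be.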


OMG, this is exactly what I want: GCP burner accounts billed to the team's cost center, perhaps with preemptible VMs that sit dormant when unneeded, and that eventually expire unless renewed.
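For the preemptible half of that wish, a rough sketch with the google-cloud-compute client; the project, zone, and instance name are placeholders, and the cost-center billing and auto-expiry would have to live in separate tooling:

    from google.cloud import compute_v1  # pip install google-cloud-compute

    def create_burner_vm(project: str, zone: str, name: str):
        instance = compute_v1.Instance(
            name=name,
            machine_type=f"zones/{zone}/machineTypes/e2-small",
            # Preemptible: cheap, and GCP reclaims it within 24h anyway.
            scheduling=compute_v1.Scheduling(preemptible=True),
            disks=[
                compute_v1.AttachedDisk(
                    boot=True,
                    auto_delete=True,
                    initialize_params=compute_v1.AttachedDiskInitializeParams(
                        source_image="projects/debian-cloud/global/images/family/debian-12",
                    ),
                )
            ],
            network_interfaces=[
                compute_v1.NetworkInterface(network="global/networks/default")
            ],
        )
        op = compute_v1.InstancesClient().insert(
            project=project, zone=zone, instance_resource=instance
        )
        op.result()  # block until the create operation completes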


This is super easy to set up on GCP? How can you work at Google and not have this?


Typically the problem is that employees with the ability to just stand up random servers within an environment also have the ability to start using them to run production workloads, handle sensitive information, and build little silos entirely outside the company's operating standards and, if regulated, its compliance requirements.

These things typically ride on a pendulum over the course of 10 years or so, swinging from high speed to high friction. As the costs associated with approaching one extreme stack up, someone eventually says enough and the direction reverses.


Good point, although Google's internal monitoring systems are pretty good (even aggressive) at detecting abandoned, low-use, or high-cost unapproved systems. I've had a number of my old projects flagged by automated monitoring and been asked to delete them, shut them down, or move them.

In recent years, there actually is an internal framework for standing up servers quickly, along with end-to-end everything else a Google production service usually gets. The frontend uses Wiz. It's still not as easy as, say, Next.js/Vercel, or Heroku, or even throwing together a Kubernetes deployment, but it provides a lot more than any of those, so the medium learning curve pays for itself quickly.


I said this in my original post but was downvoted to oblivion because my view was unpopular. I fully understand why you did it; I've done it myself for the same reasons.

I was just saying there's a line you can easily cross, and you need to be careful when you take something and put it on other platforms. I've seen people get in trouble for enabling a Slack/GitHub integration, which actually made sense once I thought about why it was an issue.


Looks like someone is desperate to get into an argument


I couldn’t care less, really. I would care if someone were just running my code wherever they felt like it, though.



