I’ve also always wanted this, but what I’ve realized after noodling on it a while is that I’d really just prefer a way to use git and push markdown documents to the Notes System.
I don't want a different system handling edits, reviews, and merges.
I just want CD to send my docs from git to a system that can properly host them and give me the doc-related features I need.
Sorry if I'm being thick, but why not just cache the response?
If you are guessing at the data anyway, what's the difference?
Why set up an entire speculative execution engine / runtime snapshot rollback framework when it sounds like adding heuristic decision caching would solve this problem?
Sounds like they were caching it, since they could execute it before getting the response. The difference is that they wanted to avoid executing stale code that the server never would've served. So they execute the cached (possibly stale) code while waiting for the response, then either toss the result or continue on with it once they determine whether the server response changed.
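Something like this, in very hand-wavy Python (this is just my reading of the thread, not their code; `fetch_code` and `execute` are stand-ins for whatever "getting the response" and "executing it" mean in their system):

```python
import asyncio

async def fetch_code(url: str) -> str:
    await asyncio.sleep(0.1)                      # stand-in for the real network call
    return "result = 40 + 2"

def execute(code: str):
    env: dict = {}
    exec(code, env)                               # stand-in for running the served code
    return env.get("result")

async def speculative_run(url: str, cache: dict):
    pending = asyncio.ensure_future(fetch_code(url))
    # Speculate: run the cached (possibly stale) code while the request is in flight.
    speculative = execute(cache[url]) if url in cache else None
    fresh = await pending
    if speculative is not None and fresh == cache[url]:
        return speculative                        # server didn't change anything: keep the speculative result
    cache[url] = fresh
    return execute(fresh)                         # stale or missing: toss the speculation, run the real thing

if __name__ == "__main__":
    print(asyncio.run(speculative_run("https://example.com/app.py", {})))
```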
Hey cool project! I had the same need, and solved it a very different way.
I set up a wireguard server on a publicly accessible VPS.
The neat part about using "lscr.io/linuxserver/wireguard:latest"
is that it allows me to codify the number of clients I need. This includes both endpoints and source devices.
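Roughly, from memory (so check the image's docs before copying), the compose service looks something like this; the peer names and server URL are placeholders:

```yaml
services:
  wireguard:
    image: lscr.io/linuxserver/wireguard:latest
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
      - SERVERURL=vpn.example.com        # the VPS's public DNS name
      - SERVERPORT=51820
      - PEERS=homecluster,laptop,phone   # this is the "codify the clients" part
    volumes:
      - ./config:/config
    ports:
      - "51820:51820/udp"
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1
    restart: unless-stopped
```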
The second thing I did was separate out the "networking" bits from the "userspace" bits, meaning it doesn't matter what port the service is running on; the client can hit it.
Taking that one step further, I just combined the above with haproxy and set my application ports there. This means I can hit haproxy on "someport" inside the VPN and it'll forward to whatever service I've got configured on that "client" that haproxy can see on its LAN.
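The haproxy side is basically just a TCP frontend/backend pair per service (plus the usual global/defaults sections); the addresses and ports below are made up for illustration:

```
frontend web_home
    bind 10.13.13.1:8443                    # "someport" on the VPS's WireGuard interface
    mode tcp
    default_backend home_web

backend home_web
    mode tcp
    server home-k8s 10.13.13.2:443 check    # the home "client" peer, reached over the tunnel
```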
Works great. I'm currently running a simple web page off the whole thing, where you connect to the VPS and it tunnels the actual HTTP connection into Kubernetes in my house.
I was thinking about writing this all up one day, but there's some cleanup to be done. Oh well.
Sounds pretty cool. I've done some similar things in the past, using a VPN to proxy backwards into my home network (hello, fellow k8s-at-home user). I think in this case I wanted to basically set up my one nginx config, never have to change the web server config again, and support arbitrary services in the future. I've never used haproxy before, but I wonder if there could be some room for improvement (read: not using unix domain sockets) by using a web server that can dynamically detect upstreams in a particular set of ports. E.g. if all my "tunnel" ports are on localhost:8000-9000, it can dynamically pick them up. I guess I still wouldn't know how to answer the "pick a name for the tunnel at runtime" problem, but it's definitely something worth exploring further!
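For nginx specifically, the closest thing I can think of is encoding the port into the request path and capturing it with a regex, along these lines (untested sketch; the /t/ prefix and server name are just made up):

```nginx
server {
    listen 80;
    server_name tunnels.example.com;

    # /t/8042/whatever -> http://127.0.0.1:8042/whatever
    location ~ "^/t/(?<tport>8\d{3})/(?<rest>.*)$" {
        proxy_pass http://127.0.0.1:$tport/$rest$is_args$args;
    }
}
```

That still doesn't solve picking a name at runtime, of course; the port number is the only identity nginx sees here.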
If I was doing something that I intended to have running more than an hour or two at a time, I would 100% do something more like what you're describing haha.
> punish the grumpy ones in favor of the people-pleasers, that this crap happens
This isn’t exclusive to technology, but I see this all the time at my large organization. As a grump, I have to pick which shitstorm to care about and spend my time chipping away at, which ends up being really draining.
To help with this, we began building up a team of great folks with shared values, leaning in on modern operations/SRE best practices.
We started to have some real success with SDLC for our operations workflows.
That’s when management changed our team's direction. Good times.
Well, developers seem to love writing "configuration" rather than "code" these days. But basically a container + the necessary tools IS a devcontainer. It's just a way of automating the "putting in the necessary tools" part, especially if you need things that have to be added to a base container, or services that need to be configured differently based on the external environment and that you don't want to bake in for some reason.
If you've ever had to cut and paste a 50-line docker run command snippet but forgot that one volume mount or port or ENV var that someone added a dependency on last week, then you pretty quickly realize that doing complex Docker things by hand is a pain. Another example: a script you want to run after the container launches to fetch the latest authentication token from a vault, because you don't want to store it inside the container. Sure, you could write a bash script to run all these steps inside the container after you launch it, but it's nice to have a config file to share with another dev and just say: use this.
And the secondary benefit is having a config file for the editor (like VSCode), so that plugins can manage all of that stuff better. Generally a dev container runs the VSCode Server, and they know how to talk to each other, which can make remote development easier. For example, now I can launch the same dev environment locally or on the 56-core Xeon, 1TB RAM server at the office, and it's exactly the same as far as the editor is concerned.
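As a rough sketch (the paths, port, and vault command are placeholders, not a canonical example), the config is just a devcontainer.json along these lines:

```jsonc
// .devcontainer/devcontainer.json
{
  "name": "my-project",
  "build": { "dockerfile": "Dockerfile" },
  "forwardPorts": [8080],
  "mounts": [
    "source=${localWorkspaceFolder}/data,target=/data,type=bind"
  ],
  "containerEnv": { "APP_ENV": "dev" },
  // e.g. grab a token after launch instead of baking it into the image
  "postStartCommand": "./scripts/fetch-vault-token.sh",
  "customizations": {
    "vscode": {
      "extensions": ["ms-python.python"]
    }
  }
}
```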
It looks like this project is an alternative to the VSCode Server. My team generally uses docker-compose for this since not everyone uses VSCode.
For the first bit, all I can think of is a compose file. Also, podman can run k8s configs locally, and I personally hope all of that eventually washes into the same thing. It feels like we already have the tools to make this a "solved" problem, is what I'm trying to say. I just include an additional .env that the compose file pulls in, so it's not committed to git.
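Something like this is all I've needed so far (the service name and ports are just examples):

```yaml
# docker-compose.yml
services:
  dev:
    build: .
    env_file: .env        # local-only values; .env itself is in .gitignore
    volumes:
      - .:/workspace
    ports:
      - "8080:8080"
```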
For the second point, OK, this makes a little bit more sense. I've heard of Codespaces and OpenShift Dev Spaces, but I guess I still question the value of additional complexity on top of the container (a simple Dockerfile, in my mind) that your VSCode instance's terminal is running in.
It makes it easy to point the tool at a Git repo, have it automagically create a containerized environment for that repo with all its dependencies, and open Visual Studio Code on the codebase inside that remote containerized environment.
Devcontainer was created by Microsoft to support Visual Studio Code's remote development features, so it works best in Visual Studio Code. Inasmuch as other IDEs support it, that's up to the IDE vendor.
Pretty much, yeah. It contains all the info necessary to tell Docker how to build/deploy the container, and how to configure the editor to work in it. The goal is turnkey setup of the software, its environment, and the user's IDE so that developers don't have to waste days doing that by hand.
One angle is to simplify the setup of what you described. You can do this manually with Docker already, but the DevContainers config means your editor will do it for you.
Another angle is rent-seeking and locking you into a proprietary, expensive ecosystem. Big Tech has successfully convinced most companies to overpay by orders of magnitude for compute and bandwidth, but so far local development machines were excluded. This aims to tackle that shortcoming and make sure you enjoy all the "benefits" of the cloud even during development.
I did something similar with Kubernetes. Work has some OSE clusters that will generate DNS for you; it works great and the devs love using it. It’s a little bespoke, but it's simple and gets a lot of attention.
Plus, since the namespaces preexist the workloads, we spin them up for the entire branch lifetime (times out after n days). Makes everyone's jobs a lot easier.
Anything that helps shift lifecycle requirements and testing left has huge impact on DX.
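The per-branch namespace itself is nothing fancy; conceptually it's just something like this (the annotation name here is made up, and the cleanup job that honors it is the bespoke part):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: myapp-feature-login-fix          # one namespace per branch (name is a placeholder)
  labels:
    app.kubernetes.io/part-of: myapp
  annotations:
    example.com/expires-after: "14d"     # a cleanup job deletes the namespace after n days
```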
I’ve been working on the same thing for a few months now.
Not only is it more customizable and less complicated than helm and other solutions, but GNU gettext is almost 30(?!) years old at this point, and environment variables are probably realistically double that age. They ain't going anywhere anytime soon.
Plus I feel that more complex logic removes value from the configs we are building, so I'm not interested in many other tools.
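For anyone who hasn't seen it, the whole workflow is basically just envsubst (which ships with GNU gettext); the variable names and file names below are only examples:

```sh
export APP_NAME=myapp IMAGE_TAG=1.4.2
# only substitute the variables we list; leave every other $ in the template alone
envsubst '${APP_NAME} ${IMAGE_TAG}' < deployment.yaml.tpl > deployment.yaml
```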
If the door is not closed and you shift into drive, you also get a very loud annoying beep and visuals.
Not only that, the brakes engage and the car LOCKS ITSELF IN PARK AND DOES NOT LET YOU DRIVE EVEN THOUGH THE SHIFTER IS IN ‘D’!!!
I pray the door sensor doesn't fail while driving on the highway, or that there's never a situation where the car needs to be driven regardless of the door being open.
On top of that, the door locks / trunk locks act very strange while the engine is running / the driver door is open. Still don't have that figured out.