> The only thing that consistently works: start on time, end on time, and don’t wait for late arrivals.
Agree, but that doesn't solve the problem of back-to-back meetings. In some cases people have to physically move from one meeting room to another, or they need to use the restroom, etc.
Having the 5-minute gap really is needed for those types of things.
> I don’t see any engagement with the centuries (maybe millennia) of thought on this subject
I used to be that person, but then someone pointed me to the Stanford Encyclopedia of Philosophy, which was a real eye-opener.
For every set of arguments I read, I thought "yeah, exactly, that makes sense," and then I'd read the counters in the next few paragraphs: "oh man, I hadn't thought of that, that's true too." Good stuff.
> That's not what I see on the market. Even the paying version of Odoo is way more affordable than traditional ERP:
The system looks well organized and clearly built by people with knowledge of the domain(s), but traditional ERP has significantly more depth of functionality, so I don't think that comparison makes sense at this point in time.
Can you explain this comment? Are you saying to develop directly in the main branch?
How do you manage the various time scales and complexity scales of changes? Task/project length can vary from hours to years and dependencies can range from single systems to many different systems, internal and external.
The complexity comes from releases. Suppose you have a good commit 123 where all your tests pass for some project, you cut a release, and deploy it.
Then development continues until commit 234, but your service is still at 123. Some critical bug is found and fixed in commit 235. You can't just redeploy at 235, since the commits in between may include development of new features that aren't ready, so you just cherry-pick the fix into your release.
It's branches in a way, but _only_ release branches. The only valid operations are creating new releases from head, or applying cherry-picks to existing releases.
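A minimal sketch of that flow in git commands, reusing the 123/235 commit ids from the example (real SHAs in practice) and a made-up release branch name:

```
# Cut a release from the known-good commit and deploy it.
git switch -c release/1.0 123
# ...build, test, deploy release/1.0...

# Later, commit 235 on main fixes a critical bug. Don't redeploy main;
# pull only that fix into the release.
git switch release/1.0
git cherry-pick 235
# ...rebuild, retest, redeploy release/1.0...
```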
That's where tags are useful, because the only valid operation (depending on force-push controls) is creating a new tag. If your release process creates tag v0.6.0 for commit 123, your tools (including `git describe`) should show that as the most recent release, even at commit 234. If you need to cut a hotfix release for a critical bug fix, you can easily start the branch from your tag: `git switch -c hotfix/v0.6.1 v0.6.0`. Code review that branch when it is ready and tag v0.6.1 from its end result.
Ideally you'd do the work in your hotfix branch and merge it to main from there, rather than cherry-picking, but I feel that way mostly because git isn't always great at cherry-picking.
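Putting that together, a rough end-to-end sequence might look like this (the v0.6.x names come from the example above; the fix commit and review step are assumed):

```
git switch -c hotfix/v0.6.1 v0.6.0    # start from the released tag
# ...commit the fix on hotfix/v0.6.1, code review...
git tag -a v0.6.1 -m "hotfix release" # tag the reviewed end result
git push origin v0.6.1

# Merge the fix forward into main instead of cherry-picking from it.
git switch main
git merge hotfix/v0.6.1
```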
> Suppose you have a good commit 123 where all your tests pass for some project, you cut a release, and deploy it.
And you've personally done this for a larger project with a significant amount of change and a longer duration (maybe 6 months to a year)?
I'm struggling to understand why you would eliminate branches. It would increase the complexity, work, and duration of projects to try to shoehorn 2 different system models into one system. Your 6-month project just shifted to a 12-to-24-month project.
The reason I said it would impact duration is the assumption that the previous version and the new version of the system are all in the code at one time, managed via feature flags or something. I think I was picturing that due to other comments later in the thread; you may not be handling it that way.
Either way, I still don't understand how you can reasonably manage the complexity, or what value it brings.
Example:
main - current production - always matches exactly what is being executed in production, no differences allowed
production_qa - for testing production changes independent of the big project
production_dev_branches - for developing production changes during the big project
big_project_qa_branch - tons of changes, currently being used to QA all of the interactions with this system as well as integrations to multiple other systems, internal and external
big_project_dev_branches - as these get finalized and ready for QA, they move to the QA branch
Questions:
When production changes and project changes are in direct conflict, how can you possibly handle that if everyone is just committing to one branch?
How do you create a clean QA image for all of the different types of testing and ultimately business training that will need to happen for the project?
It depends a lot on the team, as different teams prefer different approaches.
In general, all new code gets added to the tip of main, your only development branch. New features can optionally go behind feature flags. This allows developers to test and develop on the latest commit; they can enable a flag if they are interested in a particular feature. Ideally new code also comes with relevant automated tests, just to keep the quality of the branch high.
Once a feature is "sufficiently tested", whatever that may mean for your team, it can be enabled by default, but it won't be usable until deployed.
Critically, there is CI that validates every commit, _but_ deployments are not strictly performed from every commit. Release processes can be very varied.
A simple example: we decide to create a release from commit 123, which has some features enabled. You grab the code, build it, run automated tests, and generate artifacts like server binaries or assets. This is a small team with no strict SLAs, so it's okay to trust the automated tests and deploy right to production. That's the end; commit 123 is live.
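As a hedged sketch, that simple path could be as little as the following (the run_tests.sh, build.sh, and deploy.sh scripts are placeholders, and the v1.2.0 tag is invented):

```
git checkout 123          # the chosen release commit
./run_tests.sh            # trust the automated tests
./build.sh                # produce server binaries / assets
git tag -a v1.2.0 -m "release from commit 123"
./deploy.sh production    # commit 123 is now live
```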
As another example, a more complex service may require more testing. You do the same first steps (grab commit 123, test, build), but now deploy to staging. At this point staging will be fixed to commit 123, even as development continues. A QA team can perform heavy testing; fixes are made to main and cherry-picked, or the release is dropped if something is very wrong. At some point the release is verified and you just promote it to production.
So development is always driven from the tip of the main branch. Features can optionally be behind flags. And releases allow for as much control as you need.
There's no rule that says you can only have one release or anything like that. You could have 1 automatic release every night if you want to.
Some points that make it work in my experience are:
1. Decent test culture. You really want to have at least some metric for which commits are good release candidates.
2. You'll need some real release management system. The common tools available like to tie CI and CD together, which is not the right way to think about it IMO (for example, your GitHub CI pipeline performing the deployment).
TL;DR:
Multiple releases, using flags or configuration for the different deployments. They could even all be from the same commit, or from different ones.
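For illustration only, "flags or configuration for the different deployments" could look something like this; the FEATURE_NEW_THING variable and deploy.sh script are made-up names:

```
# Same commit, same artifact, different flag settings per environment.
git checkout v0.6.0 && ./build.sh

FEATURE_NEW_THING=1 ./deploy.sh staging      # staging exercises the new feature
FEATURE_NEW_THING=0 ./deploy.sh production   # production keeps it off for now
```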
> As another example, a more complex service may require more testing. You do the same first steps (grab commit 123, test, build), but now deploy to staging. At this point staging will be fixed to commit 123, even as development continues. A QA team can perform heavy testing; fixes are made to main and cherry-picked, or the release is dropped if something is very wrong. At some point the release is verified and you just promote it to production.
But how would you create that QA environment when it involves thousands of commits over a 6-month period?
It totally depends on how you want to test releases. You can have nightlies that deploy the latest green commit every day, do some QA there, then once it is feature complete promote it to a stable release that only takes cherry-picked fixes, then finally promote it to production.
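One possible shape for that nightly-to-stable-to-production ladder, with invented tag names and helper scripts (latest_green_commit.sh, deploy.sh):

```
# Nightly: tag and deploy whatever commit last passed CI.
TAG="nightly-$(date +%F)"
git tag "$TAG" "$(./latest_green_commit.sh)"
./deploy.sh qa "$TAG"

# Once feature complete, promote it to a stable release branch where
# only cherry-picked fixes land.
git switch -c release/2.0 "$TAG"
git cherry-pick abc123            # a bug fix from main (placeholder sha)

# When QA signs off, promote the same release to production.
./deploy.sh production release/2.0
```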
It will be highly dependent on the kind of software you are building. My team in particular deals with a project that cuts "feature complete" releases every 6 months or so; at that point only fixes are allowed for another month or so before launch, and during this time feature development continues on main. Another project we have is not production critical, so we only do automated nightlies and that's it.
> It totally depends on how you want to test releases.
For a big project, it typically involves deploying to a fully functioning QA environment so all functionality can be tested end to end, including interactions with all other systems, both internal to the enterprise and external. Eventually user acceptance testing and finally user training before going live.
I don't see how you're avoiding development branches. Surely while a change is in development the author doesn't simply push to main. Otherwise concurrent development, and any code review process (assuming you have one), would be impractical.
So you can say that you have short-lived development branches that are always rebased on main. Along with the release branch and cherry-pick process, the workflow you describe is quite common.
They don’t do code reviews or any sort of parallel development.
They’re under the impression that “releases are complex and this is how they avoid it”, but they just moved the complexity and sacrificed things like parallel work, code reviews, and reverts of whole features.
Fun stuff. I built a system like this for artificial life years ago (neural network was the brain).
I'm curious how you handled the challenges around genotype-to-phenotype mapping. For my project the neural network was fairly large and somewhat modular, due to needing to support multiple different functions (e.g. vision, hearing, touch, motor, logic/control, etc.), and it felt like the problem would be too challenging to solve well (retaining the general structure of the network, so existing capabilities are preserved, but with some variation for new ones), so I punted and had no genes.
I just evolved each brain based on some high-level rules. The most successful creatures had a low percentage chance of changing any neuron/connection/weight/activation function/etc., and less successful creatures had a higher percentage chance of changes, with the absolute worst just getting re-created entirely.
Things I noticed that I thought were interesting, wondering what things you've noticed in yours:
1 - The most successful ones frequently ended up with a chokepoint, like layer 3 out of 7, where there was a smaller number of neurons and high connectivity to the previous layer.
2 - Binary/step activation functions ended up in successful networks much more frequently than I expected; not sure why.
3 - Somewhat off topic from digit recognition, but an interesting question about ANN evolution: how do you push the process forward? What conditions in the system would cause the process to find a capability that is more advanced or only indirectly tied to success? For example, vision and object recognition: what is a valuable precursor step that the system could develop first? Also, how do you create a generic environment where those things can naturally evolve without trying to steer the system?
> Nah, those changes are only in the surface, at the most shallow level.
Very strongly disagree.
There are limitless methods of solving problems with software (due to very few physical constraints) and there are an enormous number of different measures of whether it's "good" or "bad".
Once again, that's only true at the surface level.
If you dig deeper you'll realize that it's possible to categorize techniques, tools, libraries, algorithms, recipes, whatever.
And if you dig even deeper, you'll realize that there is foundational knowledge that lets you understand a lot of things that people complain about being too new.
The biggest curse of software is people saying "no" to education and knowledge.
> Once again, that's only true at the surface level.
Can you provide concrete examples of the things that you think are foundational in software? I'm thinking beyond "be organized so it's easier for someone to understand", which applies to just about everything we do (e.g. modularity, naming, etc.)
Approaches like OOP, functional, relational DBs, object DBs, enterprise service bus + canonical documents, microservices, cloud, on-prem, etc., are all just options with pros and cons.
With each approach, the set of trade-offs depends on the context the approach is applied in; it's not an absolute set of trade-offs, it's relative.
A critical skill that takes a long time to develop is to see the problem space and do a reasonably good job of identifying how the different approaches fit in with the systems and organizational context.
Here's a real example:
A project required a bunch of new configuration capabilities to be added to a couple of systems using the normal configuration approach found in ERP systems (e.g. flags and codes attached to entities in the system, controlling functional flow, data resolution, etc.). But for some of them, a more flexible "if/then"-type capability made sense when analyzing the types of situations the business would encounter in these areas. For those areas, the naive/simple approach would have been possible, but it would have been fragile, and it would have been difficult to explain to the business how the different configurations in different places come together to produce the desired result.
There is no simple rule you can train someone on to spot when this is the right approach and when it is not. It's heavily dependent on the business context and takes experience.
> Can you provide concrete examples of the things that you think are foundational in software?
Are you really expecting an answer here? I'll answer anyway.
• A big chunk of the CompSci curriculum is foundational.
• Making wrong states unrepresentable either via type systems or via code itself, using invariants, pre/post-conditions, etc. This applies to pretty much every tool or every language you can use.
• Error handling is a topic that goes beyond tools and languages, and even beyond whether you use try/catch, algebraic types, or error values. It seeps into logging and observability too.
• Reasoning about time/space and tradeoffs of algorithms and structures, knowing what can and can't be computed, parsed, or recognized at all. Knowing why some problems don’t scale and others do.
• Good modeling of change, including ordering: immutability vs mutation, idempotency, retry logic, concurrency. How to make implicit timing explicit. Knowing which choices are cheap to undo and which are expensive, and designing for that.
• Clear ownership of responsibilities and data between parts of the system via design of APIs, interfaces and contracts. This applies to OOP, FP, micro-services, modules and classes, and even to how one deals with third party-services beyond the basic.
• Computer basics (some of which go back to the 60s/70s or even earlier): processes, threads (green or not), virtual memory, scheduling, caches, instructions, the memory hierarchy, data races, deadlocks, and ordering.
• Information theory (a lot of which goes back to Claude Shannon, and earlier): compression, entropy, noise. And logic, sets, relations, proofs.
I never said there is a "simple rule", only foundational topics. But I'll say it again: the biggest curse of software is people saying "no" to education and knowledge.
> Are you really expecting an answer here? I'll answer anyway.
Yes, and thanks for the examples; it's now clear what you were referring to. I agree that most of those are generally good fundamentals (e.g. wrong states, error handling, time+space), but some are already in complex territory, like mutability. Even though we can see the problem, we have a massive amount of OOP systems with state all over the place. So the application of a principle like that is very far from settled, and there's no easy set of rules to guide SEs.
> The software engineers' body of knowledge can change 52 times in a year
Nah, those changes are only in the surface, at the most shallow level.
I think the types of items you listed above are the shallow layer. The body of knowledge about how to implement software systems above that (the patterns and approaches) is enormous and growing. It's a large collection of approaches, each with some strengths and weaknesses, but no clear-cut rule for application other than significant experience.
> I think the types of items you listed above are the shallow layer
They are not, by definition. You provided proof for it yourself: you mention the "body of knowledge [...] above that", so they really aren't the topmost layer.
> is enormous and growing
That's why you learn the fundamentals. So you can understand the refinements and applications of them at first glance.
> They are not, by definition. You provided proof for it yourself: you mention the "body of knowledge [...] above that", so they really aren't the topmost layer
I said "shallow", not "topmost".
> That's why you learn the fundamentals. So you can understand the refinements and applications of them at first glance.
Can you explain when (if ever) a person should use an OOP approach and when (if ever) he/she should use a functional approach to implement a system?
I don't think those fundamentals listed above help answer questions like that, and those questions are exactly what the industry has not really figured out yet. We can see both pros and cons to all of the different approaches, but we don't have a body of knowledge that can point to concrete evidence that one approach is preferred over the many other approaches.
> I can, and have done several times, actually, for different systems.
The reason I asked that question isn't to be argumentative; it's because, IMO, the answers to those types of questions are exactly what doesn't exist in the software engineering world.
And talking through the details of our different opinions is how we can understand where each one is coming from and possibly, maybe, incorporate some new information or new way of looking at things into our mental models of the world.
So, if you do think you have an answer, I am truly interested in when you think OOP is appropriate and when functional is better suited (or neither).
If someone asked me that question, I would say, "If we're in fantasy land and it's the first system ever built, and there are no variables related to existing systems, supportability, resource knowledge, etc., then I still can't really answer the question. I've never built a system that was significantly functional; I've only built procedural, OOP, and mixtures of those two, with sprinklings of functional. I know there are significant pros to functional, but without actually building a complete system at least once, I can't really compare."
You asked whether one should use OOP or FP to implement a system.
I can answer that, and have in the past, as I have done projects in both OOP and FP. But before I answer, I'd ask follow-up questions about the system itself, and I would be giving lots of "it depends" and conditions.
There is no quick and dirty rule that will apply to any situation, and it's definitely not something I can teach on a message board.
> Could give examples of use cases where dense 3d packing is needed? (Say, besides literal packing of physical objects in a box? )
Not an answer, but something interesting on this topic:
In a warehouse/distribution center, a dense packing result can be too time consuming for most consumer products. As density increases, it takes the human longer to find a workable arrangement on their own. You can provide instructions, but that is even slower than the human just doing their best via trial and error.
We had to dial back our settings from about 95% volume utilization (the initial naive setting) down to about 80% before they could rapidly fill the cartons. Basically it's balancing the cost of labor vs. the capacity of the system during peak (the conveyor would start backing up) vs. shipping costs.
I personally prefer the 5-minute gap; it's a simple and clean solution.