Optimization through removal
The fastest code is code that doesn't run
Three fixes from the same debugging session taught me this:
1. "unsorted fetch seems faster (shocker)"
The fetch sorted its results, but nothing downstream relied on the order. Removing the sort was an instant improvement.
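A minimal sketch of the before/after, with hypothetical table, column, and interface names standing in for the real ones:

```typescript
interface Shipment {
  id: string;
  updatedAt: string;
}

// Hypothetical stand-in for whatever database client was in use.
interface Database {
  query<T>(sql: string): Promise<T[]>;
}

// Before: every fetch paid for an ORDER BY whose ordering no caller used.
async function fetchShipmentsSorted(db: Database): Promise<Shipment[]> {
  return db.query<Shipment>(
    "SELECT id, updated_at AS updatedAt FROM shipments ORDER BY updated_at DESC"
  );
}

// After: same data, no sort. Callers look shipments up by id,
// so result order is irrelevant.
async function fetchShipments(db: Database): Promise<Shipment[]> {
  return db.query<Shipment>("SELECT id, updated_at AS updatedAt FROM shipments");
}
```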
2. Rule results were being calculated three times
Once at shipment update, again at search query, and a third time in memory with map/filter. Only the first was necessary.
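A sketch of the fix, assuming hypothetical names: the rules run exactly once, at update time, and the result is stored on the shipment; search and in-memory consumers read the stored field instead of recomputing.

```typescript
interface RuleResult {
  ruleId: string;
  passed: boolean;
}

interface Shipment {
  id: string;
  ruleResults: RuleResult[];
}

// The only place rules actually execute.
function evaluateRules(shipment: Shipment): RuleResult[] {
  // ...expensive rule evaluation...
  return [];
}

// Rule results are computed once per update and persisted with the shipment.
function onShipmentUpdate(shipment: Shipment): Shipment {
  return { ...shipment, ruleResults: evaluateRules(shipment) };
}

// Downstream consumers filter the precomputed results; no re-evaluation.
function failedRules(shipment: Shipment): RuleResult[] {
  return shipment.ruleResults.filter((r) => !r.passed);
}
```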
3. Integration servers were fetching, then updating everything
They fetched from the API, then sent update requests for every shipment without checking whether anything had changed. A diff check at the source cut most of the traffic.
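A minimal sketch of that diff check, with hypothetical field names: compare each fetched shipment against the last-seen version and only send an update request when something actually changed.

```typescript
interface Shipment {
  id: string;
  status: string;
  eta: string;
}

function hasChanged(current: Shipment, incoming: Shipment): boolean {
  return current.status !== incoming.status || current.eta !== incoming.eta;
}

async function syncShipments(
  fetched: Shipment[],
  lastSeen: Map<string, Shipment>,
  pushUpdate: (s: Shipment) => Promise<void>
): Promise<void> {
  for (const incoming of fetched) {
    const current = lastSeen.get(incoming.id);
    // Unchanged shipments skip the network entirely; these were most of the traffic.
    if (current && !hasChanged(current, incoming)) continue;
    await pushUpdate(incoming);
    lastSeen.set(incoming.id, incoming);
  }
}
```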
The principle
When something is slow, the reflex is to make it faster: better algorithms, more caching, parallel execution. Sometimes that's right. But often the better question is: does this need to happen at all?
Removing unnecessary work beats optimizing it. Every time.