Lately I’m of the opinion that refactoring is an Anti-Pattern. I’m going to pick on systemd because I do believe that it was an unnecessary rewrite of something that basically worked - and that the rewrite resulted in a subsystem that was far more complex and yet didn’t do much to improve system management. But I’ve seen this scenario play out in commercially-developed software as well.
Theoretically, the idea of dependency management sounds great, and perhaps integrated infrastructure does, too. But systemd didn’t really solve the hard problems that have plagued system maintainers. For the most part, system services are still managed the same way - many systemd unit files just call shell scripts anyway. There’s no commonly adopted management hook by which applications advertise that they are ready. Want to make sure your web application backend doesn’t start until the database is up? You can’t rely on systemd; you still have to handle that case yourself. Systemd doesn’t add a whole lot except complexity, and really just changes the workflow from managing and starting services via shell scripts and symlinks to managing and starting services via unit files, shell scripts, and symlinks.
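To make that concrete, here’s a minimal sketch of what that looks like in practice (the unit, service, and binary names are illustrative). Requires= and After= only order unit startup; depending on how the database unit defines “started”, that may just mean the process launched, so the readiness check is still a shell loop you write yourself:

    [Unit]
    Description=Example web backend (illustrative)
    Requires=postgresql.service
    After=postgresql.service

    [Service]
    Type=simple
    # The ordering above doesn't guarantee the database will answer queries,
    # so readiness is still handled by hand before the real ExecStart.
    ExecStartPre=/bin/sh -c 'until pg_isready -q; do sleep 1; done'
    ExecStart=/usr/local/bin/example-backend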
There are (at least) three reasons, any one of which can turn a refactor or rewrite into this Anti-Pattern:
The developer promises they won’t break anything, but the outcome is often predictable. Refactoring is supposed to mean improving code while keeping functionality intact! But all too often, refactoring excursions become rewrites that implement use cases differently from the original code and cause subtle breakage. And that’s assuming the rewrite even attempts a complete re-implementation of the original functionality, which is not always the case.
Complex code is complex for a reason.
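As a contrived illustration (the daemon name is a placeholder), consider a legacy start script whose retry loop looks like cruft, but exists because the daemon occasionally loses a race for its socket on the first attempt. A “clean” rewrite that drops the loop reintroduces the race, and nothing in the new code records why the loop was there:

    # Hypothetical legacy init fragment. The retry loop looks redundant,
    # but it works around a daemon that sometimes fails to bind its
    # socket on the first start attempt.
    start_exampled() {
        for attempt in 1 2 3; do
            /usr/sbin/exampled && return 0
            sleep 2
        done
        echo "exampled failed to start after 3 attempts" >&2
        return 1
    }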
Code tends to enter testing and production after achieving a certain level of functionality as a “minimum viable product”, but before it is feature-complete. The rewritten code doesn’t implement all of the edge cases that the legacy code supported, and so both versions of the code stay around. Particularly when the rewrite is never fully completed, this sort of “refactoring” only burns cash and increases technical debt. Once the new code has spent enough time in production, other developers start using it, leading to the situation where neither the legacy code nor the rewritten code is an independent, complete implementation of the required functionality.
When changes can’t be implemented and tested in small steps, refactoring is pointless.
Refactoring often begins because a developer complains of “code smell” or some other ambiguous heuristic describing code quality. Maybe the code is undocumented and seems overly complicated, or is full of spelling errors from a non-native English speaker, or uses a mix of tabs and spaces. But if a developer’s time is worth anything, there’s some return on investment to be expected from the work. Yet when arguing for a refactor or rewrite, developers usually don’t have to articulate the quantitative value of their work.
The value of refactoring is as impossible to measure as technical debt.
These issues intermix, leading to the deployment of broken or partially functional rewrites that are inferior to the original. Instead of one wad of ugly code in need of help, now you’re stuck with double the trouble.
… and stopped when the changes grow out of scope. If a rewrite is necessary, then the developer should identify quantitative criteria for the rewrite up front and assess against those criteria along the way, stopping or redirecting the work as appropriate. Otherwise, for refactoring and rewriting in general, developers need to make sure they can compartmentalize the work effectively so that the new code doesn’t enter production until it is as capable as the original. Capability needs to cover API compliance and implementation of use cases, but also performance, relative complexity, user and developer workflow, and testing.
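One way to make “as capable as the original” measurable is a parity check: run the legacy and rewritten implementations over the same recorded inputs and refuse to promote the new code while any output diverges. A rough sketch, with placeholder paths and tool names:

    #!/bin/sh
    # Illustrative parity check: the rewrite doesn't ship until it matches
    # the legacy implementation on every recorded test case.
    status=0
    for input in tests/cases/*; do
        ./legacy-tool "$input" > /tmp/legacy.out 2>&1
        ./new-tool "$input" > /tmp/new.out 2>&1
        if ! diff -u /tmp/legacy.out /tmp/new.out >/dev/null; then
            echo "divergence on $input" >&2
            status=1
        fi
    done
    exit "$status"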