
Reinvent the Wheel Often

"Just use something that exists — it's silly to reinvent the wheel..."

Have you ever heard this or some variation thereof? Sure you have! Every developer and student probably hears comments like this frequently. Why though? Why is reinventing the wheel so frowned upon? Because, more often than not, existing code is working code. It has already gone through some sort of quality control, rigorous testing, and is being used successfully. Additionally, the time and effort invested in reinvention are unlikely to pay off as well as using an existing product or code base. Should you bother reinventing the wheel? Why? When?

Perhaps you have seen publications about patterns in software development, or books on software design. These books can be sleepers, however wonderful the information they contain. Just as watching a movie about sailing is very different from actually going sailing, using existing code is very different from designing your own software from the ground up, testing it, breaking it, repairing it, and improving it along the way.

Reinventing the wheel is not just an exercise in where to place code constructs: it is a way to gain intimate knowledge of the inner workings of components that already exist. Do you know how memory managers work? Virtual paging? Could you implement these yourself? How about doubly-linked lists? Dynamic array classes? ODBC clients? Could you write a graphical user interface that works like a popular one you know and like? Can you create your own web-browser widgets? Do you know when to write a multiplexed system versus a multi-threaded one? How to decide between a file-based and a memory-based database? Most developers have simply never created these types of core software implementations themselves and therefore do not have an intimate knowledge of how they work. The consequence is that all these kinds of software are viewed as mysterious black boxes that just work. Understanding only the surface of the water is not enough to reveal the hidden dangers beneath. Not knowing the deeper things in software development will limit your ability to create stellar work.
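To make one of these exercises concrete, here is a minimal sketch of a doubly-linked list in Java. The class and method names are illustrative rather than taken from any library, and a production container would of course also need iteration, removal at arbitrary positions, and more.

    // A minimal doubly-linked list sketch -- the kind of "solved problem"
    // worth rebuilding once to understand what library containers do for you.
    public class DoublyLinkedList<T> {
        private static final class Node<T> {
            T value;
            Node<T> prev, next;
            Node(T value) { this.value = value; }
        }

        private Node<T> head, tail;
        private int size;

        // Append in O(1) by keeping a tail pointer.
        public void addLast(T value) {
            Node<T> node = new Node<>(value);
            if (tail == null) {
                head = tail = node;
            } else {
                node.prev = tail;
                tail.next = node;
                tail = node;
            }
            size++;
        }

        // Remove the first node in O(1), relinking the head pointers.
        public T removeFirst() {
            if (head == null) throw new java.util.NoSuchElementException();
            T value = head.value;
            head = head.next;
            if (head == null) tail = null; else head.prev = null;
            size--;
            return value;
        }

        public int size() { return size; }
    }

Getting the pointer bookkeeping right in both directions, and then breaking and fixing it, teaches more than any diagram of a linked list ever will.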

Reinventing the wheel and getting it wrong is more valuable than nailing it the first time. Trial and error yields lessons with an emotional component to them that reading a technical book alone just cannot deliver!

Learned facts and book smarts are crucial, but becoming a great programmer is as much about acquiring experience as it is about collecting facts. Reinventing the wheel is as important to a developer's education and skill as weight lifting is to a body builder.

By Jason P Sage

Resist the Temptation of the Singleton Pattern

The Singleton pattern solves many of your problems. You know that you only need a single instance. You have a guarantee that this instance is initialized before it's used. It keeps your design simple by having a global access point. It's all good. What's not to like about this classic design pattern?
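For reference, the classic pattern under discussion looks something like this in Java (a minimal sketch; the class name is illustrative):

    // The classic lazily-initialized Singleton: one instance, global access.
    public final class Configuration {
        private static Configuration instance;

        private Configuration() { }  // no instantiation from outside

        // Lazily creates the single instance; synchronized so that
        // concurrent first calls cannot create two instances.
        public static synchronized Configuration getInstance() {
            if (instance == null) {
                instance = new Configuration();
            }
            return instance;
        }
    }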

Quite a lot, it turns out. Tempting as they may be, experience shows that most singletons really do more harm than good. They hinder testability and harm maintainability. Unfortunately, this additional wisdom is not as widespread as it should be, and singletons continue to be irresistible to many programmers. But the temptation is worth resisting:

The single-instance requirement is often imagined. In many cases it's pure speculation that no additional instances will be needed in the future. Broadcasting such speculative properties across an application's design is bound to cause pain at some point. Requirements will change. Good design embraces this. Singletons don't.

Singletons cause implicit dependencies between conceptually independent units of code. This is problematic both because they are hidden and because they introduce unnecessary coupling between units. This code smell becomes pungent when you try to write unit tests, which depend on loose coupling and the ability to selectively substitute a mock implementation for a real one. Singletons prevent such straightforward mocking.

Singletons also carry implicit persistent state, which again hinders unit testing. Unit testing depends on tests being independent of one another, so the tests can be run in any order and the program can be set to a known state before the execution of every unit test. Once you have introduced singletons with mutable state, this may be hard to achieve. In addition, such globally accessible persistent state makes it harder to reason about the code, especially in a multithreaded environment.
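To see both problems at once, consider this hypothetical sketch (TaxRates and OrderProcessor are invented names, not from any real library): the dependency on the singleton is invisible in the method signature, and its mutable state silently leaks from one test to the next.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical singleton with mutable, globally accessible state.
    final class TaxRates {
        private static final TaxRates INSTANCE = new TaxRates();
        private final Map<String, Double> rates = new ConcurrentHashMap<>();

        private TaxRates() { }
        static TaxRates getInstance() { return INSTANCE; }

        void setRateFor(String region, double rate) { rates.put(region, rate); }
        double rateFor(String region) { return rates.getOrDefault(region, 0.0); }
    }

    class OrderProcessor {
        // The dependency on TaxRates is invisible in this signature,
        // so a test cannot substitute a mock rate source.
        double totalWithTax(String region, double subtotal) {
            return subtotal * (1 + TaxRates.getInstance().rateFor(region));
        }
    }

    public class SingletonTestSmell {
        public static void main(String[] args) {
            // "Test A" mutates the singleton's state...
            TaxRates.getInstance().setRateFor("EU", 0.25);
            // ..."test B", run later in the same JVM, silently inherits it.
            System.out.println(new OrderProcessor().totalWithTax("EU", 100.0)); // 125.0
        }
    }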

Multi-threading introduces further pitfalls to the singleton pattern. As straightforward locking on access is not very efficient, the so-called double-checked locking pattern (DCLP) has gained in popularity. Unfortunately, this may be a further form of fatal attraction. It turns out that in many languages DCLP is not thread-safe and, even where it is, there are still opportunities to get it subtly wrong.
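The Java form of DCLP illustrates how subtle this is. In Java the pattern is broken unless the field is declared volatile: without it, the memory model permits a thread to observe a non-null reference to a not-yet-fully-constructed object. That one keyword is easy to omit, and the equivalent fix does not carry over to every language.

    public final class LazySingleton {
        // 'volatile' is essential: without it, another thread may see a
        // non-null but partially constructed instance.
        private static volatile LazySingleton instance;

        private LazySingleton() { }

        public static LazySingleton getInstance() {
            if (instance == null) {                      // first check, no lock
                synchronized (LazySingleton.class) {
                    if (instance == null) {              // second check, under lock
                        instance = new LazySingleton();
                    }
                }
            }
            return instance;
        }
    }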

The cleanup of singletons may present a final challenge:

There is no support for explicitly killing singletons, which can be a serious issue in some contexts: in a plug-in architecture, for example, a plug-in can only be safely unloaded after all its objects have been cleaned up.

There is no order to the implicit cleanup of singletons at program exit. This can be troublesome for applications that contain singletons with interdependencies. When shutting down such applications, one singleton may access another that has already been destroyed.

Some of these shortcomings can be overcome by introducing additional mechanisms. However, this comes at the cost of additional complexity in code that could have been avoided by choosing an alternative design.

Therefore, restrict your use of the Singleton pattern to the classes that truly must never be instantiated more than once. Don't use a singleton's global access point from arbitrary code. Instead, access the singleton directly from only a few well-defined places, from which it can be passed around via its interface to other code. That other code is unaware of, and so does not depend on, whether a singleton or any other kind of class implements the interface. This breaks the dependencies that prevented unit testing and improves maintainability. So, next time you are thinking about implementing or accessing a singleton, hopefully you'll pause, and think again.
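One way to apply this advice, sketched here with hypothetical names: resolve the singleton once, near the application's entry point, and hand it to collaborators through an interface.

    // The interface is all the consuming code ever sees.
    interface Logger {
        void log(String message);
    }

    // The singleton implements the interface but is named as a concrete
    // type only at the composition root.
    final class FileLogger implements Logger {
        private static final FileLogger INSTANCE = new FileLogger();
        private FileLogger() { }
        static FileLogger getInstance() { return INSTANCE; }
        public void log(String message) { System.out.println("[file] " + message); }
    }

    class ReportGenerator {
        private final Logger logger;   // depends on the interface, not the singleton

        ReportGenerator(Logger logger) { this.logger = logger; }

        void run() { logger.log("report generated"); }
    }

    public class CompositionRoot {
        public static void main(String[] args) {
            // The only place that touches the global access point. A unit
            // test can construct ReportGenerator with a mock Logger instead.
            new ReportGenerator(FileLogger.getInstance()).run();
        }
    }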

By Sam Saariste

The Road to Performance Is Littered with Dirty Code Bombs

More often than not, performance tuning a system requires you to alter code. When we need to alter code, every chunk that is overly complex or highly coupled is a dirty code bomb lying in wait to derail the effort. The first casualty of dirty code will be your schedule. If the way forward is smooth, it will be easy to predict when you'll finish. Unexpected encounters with dirty code will make it very difficult to make a sane prediction.

Consider the case where you find an execution hot spot. The normal course of action is to reduce the strength of the underlying algorithm. Let's say you respond to your manager's request for an estimate with an answer of 3-4 hours. As you apply the fix you quickly realize that you've broken a dependent part. Since closely related things are often necessarily coupled, this breakage is most likely expected and accounted for. But what happens if fixing that dependency results in other dependent parts breaking? Furthermore, the farther away the dependency is from the origin, the less likely you are to recognize it as such and account for it in your estimate. All of a sudden your 3-4 hour estimate can easily balloon to 3-4 weeks. Often this unexpected inflation in the schedule happens 1 or 2 days at a time. It is not uncommon to see "quick" refactorings eventually taking several months to complete. In these instances, the damage to the credibility and political capital of the responsible team will range from severe to terminal. If only we had a tool to help us identify and measure this risk.

In fact, we have many ways of measuring and controlling the degree and depth of coupling and complexity of our code. Software metrics can be used to count the occurrences of specific features in our code, and the values of these counts do correlate with code quality. Two of the many metrics that measure coupling are fan-in and fan-out. Consider fan-out for classes: it is defined as the number of classes referenced either directly or indirectly from a class of interest. You can think of this as a count of all the classes that must be compiled before your class can be compiled. Fan-in, on the other hand, is a count of all classes that depend upon the class of interest. Knowing fan-out (fo) and fan-in (fi), we can calculate an instability factor as I = fo / (fi + fo). As I approaches 0, the package becomes more stable; as I approaches 1, it becomes more unstable. Stable packages are low-risk targets for recoding, whereas unstable packages are more likely to be filled with dirty code bombs. The goal in refactoring is to move I closer to 0.
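As a worked example: a class referenced by eight others (fi = 8) that itself references two (fo = 2) has I = 2 / (8 + 2) = 0.2, a fairly stable, low-risk target. The calculation is trivial to script; the helper below is only illustrative.

    public final class Metrics {
        // Instability as defined above: I = fo / (fi + fo).
        static double instability(int fanIn, int fanOut) {
            if (fanIn + fanOut == 0) {
                return 0.0; // convention chosen here: no couplings counts as stable
            }
            return (double) fanOut / (fanIn + fanOut);
        }

        public static void main(String[] args) {
            System.out.println(instability(8, 2)); // 0.2 -> stable, safer to change
            System.out.println(instability(1, 9)); // 0.9 -> unstable, likely bomb territory
        }
    }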

When using metrics, one must remember that they are only rules of thumb. Purely on the math, we can see that increasing fi without changing fo will move I closer to 0. There is, however, a downside to a very large fan-in value: such classes will be more difficult to alter without breaking dependents. Also, without addressing fan-out you're not really reducing your risks, so some balance must be applied.

One downside to software metrics is that the huge array of numbers that metrics tools produce can be intimidating to the uninitiated. That said, software metrics can be a powerful tool in our fight for clean code. They can help us to identify and eliminate dirty code bombs before they are a serious risk to a performance tuning exercise.

By Kirk Pepperdine

Simplicity Comes from Reduction

"Do it again...," my boss told me as his finger pressed hard on the delete key. I watched the computer screen with an all too familiar sinking feeling, as my code — line after line — disappeared into oblivion.

My boss, Stefan, wasn't always the most vocal of people, but he knew bad code when he saw it. And he knew exactly what to do with it.

I had arrived in my present position as a student programmer with lots of energy and plenty of enthusiasm, but absolutely no idea how to code. I had this horrible tendency to think that the solution to every problem was to add in another variable someplace. Or throw in another line. On a bad day, instead of the logic getting better with each revision, my code gradually got larger, more complex, and farther away from working consistently.

It's natural, particularly when in a rush, to just want to make the most minimal changes to an existing block of code, even if it is awful. Most programmers will preserve bad code, fearing that starting anew will require significantly more effort than just going back to the beginning. That can be true for code that is close to working, but there is just some code that is beyond all help.

More time gets wasted trying to salvage bad work than it deserves. Once something becomes a resource sink, it needs to be discarded. Quickly.

Not that one should lightly toss away all of that typing, naming, and formatting. My boss's reaction was extreme, but it did force me to rethink the code on the second (or occasionally third) attempt. Still, the best approach to fixing bad code is to flip into a mode where the code is mercilessly refactored, shifted around, or deleted.

The code should be simple. There should be a minimal number of variables, functions, declarations, and other syntactic language necessities. Extra lines, extra variables... extra anything, really, should be purged. Removed immediately. What's there, what's left, should only be just enough to get the job done, completing the algorithm or performing the calculations. Anything and everything else is just extra unwanted noise, introduced accidentally and obscuring the flow. Hiding the important stuff.
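A contrived before-and-after in Java shows the kind of purge meant here:

    public final class Eligibility {
        // Before: extra variables and branches obscure a one-line computation.
        static boolean isEligibleVerbose(int age) {
            boolean result = false;
            int threshold = 18;
            if (age >= threshold) {
                result = true;
            } else {
                result = false;
            }
            return result;
        }

        // After: what's left is just enough to get the job done.
        static boolean isEligible(int age) {
            return age >= 18;
        }
    }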

Of course, if that doesn't do it, then just delete it all and type it in over again. Drawing from one's memory in that way can often help cut through a lot of unnecessary clutter.

By Paul W. Homer
