
yacaph replaced with reCAPTCHA

Added: November 16, 2011

Tags: info

It has been some time since I started to receive spam comments on these two blogs. After a short period of denial I realized it was really happening. I turned the comments off. Being too lazy to fix the problem, I tried to turn them back on a couple of weeks later. Bots found me again in about one week.

Turning them off again, waiting for the time and courage to fix the problem... I modified the yacaph generator to produce different, less repetitive waves of text and added some strike-throughs. Nothing fancy; rmagick just did not obey my orders thoroughly and I soon lost my patience. Plus (just to err on the side of caution) I changed the password and mangled the hash code and file name. Satisfied, I turned the comments back on.

About two weeks passed. Another strange comment appeared. No, it cannot be true, just some typos and an email address that does not resemble a name at all, but I should wait a bit more. After 2 days I got 2 more comments (spam, of course) and turned commenting off again.

Time for reCAPTCHA to show what it can do. I hope it will work much better than my previous attempt to improve yacaph, with a fraction of the time spent on integration. Well, to be fair, yacaph worked perfectly for years; it just seems to be broken now.

It has already been some time since I started to oppose the idea of the Best Practice. Somehow it felt like an excuse to avoid thinking and to use an out-of-the-box solution. But I could not express my concerns well enough for the Best Practice Seekers and Followers to even start doubting it.

Now I can point to the very good article Death to Best Practices written by Ted Neward. I encourage everybody to read it, as well as the article linked there, but I offer my favourite part:

The point is, best practices just don’t exist. They are an attempt to take a solution to a problem out of the context and apply them across the entire spectrum, and that essentially invalidates the entire thing. A solution is only useful when considered in context.

Don’t believe me? Consider this challenge: when is it not a best practice to breathe? When you’re under water, of course.

(Unless you’re a fish. Or a frog. Or maybe a prince turned into a frog.)

Point is… context matters, folks.
I am looking forward to comments saying you actually can breathe under water, you just need to... But that would only be cognitive dissonance at work.

There is value in best practices, I have never doubted that, but I have always suggested using the name good practices to show they are not universal truths. IMHO their value is in learning how experts solve a particular problem in a particular context, which lets us think about whether it is a good match in our context, discuss the differences in contexts, and consider whether we could change our assumptions about the problem we are trying to solve.

I have touched on this topic before in Giving gifts versus conceiving rules and also in Is Javadoc useful?

More than a year ago I was assigned to a different project. To my delight, the team was writing not only unit tests, but also end-to-end tests. Every test class undeployed all plugins and components from the server, reconfigured the environment and deployed the tested artefacts (not necessarily the same combination). Fantastic, the tests were self-contained and independent.

But there were problems. The tests were slow, and the redeployment step was unnecessary in many cases, because all the artefacts needed for the subsequent test were already deployed and no changes were actually required. Additionally, all tests performed their setup in the old-fashioned procedural way, describing in all gory detail how to achieve the desired state.

It could look like the following code (comments are placeholders for real functionality I am not going to show here).

@BeforeClass public static void beforeClass() {
    TestingUtil util = // get it ...

    util.undeployAllComponents();
    util.undeployAllPlugins();

    // initialize Database
    util.deployPlugin("daos.jar");
    util.deployPlugin("router.jar", routerParams);
    // some necessary tasks
    util.deployComponent("authenticator.jar");
    util.deployComponent("decisions.jar");
  
    User userA = // prepare user profile
    // ...
    // save user profiles
}
Instead of the comments there were many lines of other code. The number of deployed components varied between tests and their order was not always the same (when it did not matter). Additionally, there were 2 environments, so the router plugin was usually enclosed in an if-else, together with a varying DB setup. It was rather hard to track down what exactly was going on, but after some time we got used to it. If only it were not duplicated 17 times, looking slightly different every time (nearly every test tested a different component, but nearly all components depended on other components). Occasionally a new dependency was introduced and we needed to modify all those tests and add the new one in the correct place, unless it was not needed for a particular test. It was messy.

Solution

I don't remember any more whether I was begging our team leader to give me time to fix it or whether he asked me to do so, but I got the chance to find the correct abstraction for this problem and to remove the duplication.

I've realized our existing approach mixed together:

  • dependency management
  • description of scenario being tested
  • deployment tasks
  • varying environment
  • preparation of data in database
So I created the following (a rough sketch of these pieces follows the list):
  • Artefact knows what it is (plugin, component) and what other artefacts it depends on. There are subclasses for every component and plugin in order to encapsulate and reuse that knowledge.
  • Scenario collects the Artefacts that need to be deployed and can iterate through them in different ways (in deployment order, in undeployment order). It got more responsibilities later, so I split it into 2 separate classes.
  • ScenarioDeployer is responsible for coordinating the whole redeployment of a scenario.
  • Deployer knows what environment it is deploying to and how to perform the deployment.
  • A hook for inserting custom code (e.g. for the DB) between undeploying and deploying the components.
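Here is a minimal sketch of how Artefact and Deployer might look. Only the class names come from the list above; the method names, the Kind enum and the concrete subclasses are my assumptions for illustration, not the real code.

import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Each Artefact subclass knows what kind of artefact it is and what it depends on.
abstract class Artefact {
    enum Kind { PLUGIN, COMPONENT }

    abstract Kind kind();

    // Direct dependencies; transitive ones are resolved when a scenario is deployed.
    abstract List<Artefact> dependencies();

    // Deployment itself is delegated to a Deployer, so the artefact knows
    // nothing about the concrete environment it ends up in.
    void deployTo(Deployer deployer) {
        deployer.deploy(this);
    }
}

// Knows which environment it deploys to and how to perform the deployment there.
interface Deployer {
    void deploy(Artefact artefact);
    void undeploy(Artefact artefact);
}

// Example of a concrete artefact encapsulating its own dependency knowledge.
class DaosPlugin extends Artefact {
    Kind kind() { return Kind.PLUGIN; }
    List<Artefact> dependencies() { return Collections.<Artefact>emptyList(); }
}

class AuthenticatorComponent extends Artefact {
    Kind kind() { return Kind.COMPONENT; }
    List<Artefact> dependencies() { return Arrays.<Artefact>asList(new DaosPlugin()); }
}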
I decided to push as much responsibility as I could into individual objects. In this case the key was to give Artefact the responsibility of knowing what it depends on and how to deploy itself (by delegating to Deployer). The alternative of a testing superclass with various deployment methods led to the original mess. The alternative with static methods would fail too. What we ended up with was the following:
@BeforeClass public static void beforeClass() {

    // initialize Database
    Scenario scenario = new Scenario();
    scenario.add(new AuthenticatorComponent());
    scenario.add(new DecisionsComponent());
    createScenarioDeployer().redeploy(scenario);
  
    User userA = // prepare user profile
    // ...
    // save user profiles
}
Notice it does not mention anything about how to undeploy artefacts or which ones to undeploy. That happens inside redeploy(), which allows the implementation to change as needed. It also mentions only the top-level artefacts (those tested by this test). All the others, needed only as dependencies, do not have to be mentioned; they are implicit and will be resolved.
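How those implicit dependencies get resolved is hidden from the test, but it can be pictured roughly like this hypothetical sketch of Scenario and ScenarioDeployer (building on the Artefact/Deployer sketch above; the real implementation differed): redeploy() walks the dependency graph of the top-level artefacts and produces an order in which every artefact comes after its dependencies.

import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Collects the top-level artefacts the test cares about.
class Scenario {
    private final List<Artefact> topLevel = new ArrayList<Artefact>();
    void add(Artefact artefact) { topLevel.add(artefact); }
    List<Artefact> topLevelArtefacts() { return topLevel; }
}

// Coordinates the whole redeployment of a scenario.
class ScenarioDeployer {
    private final Deployer deployer;

    ScenarioDeployer(Deployer deployer) {
        this.deployer = deployer;
    }

    void redeploy(Scenario scenario) {
        List<Artefact> ordered = deploymentOrder(scenario.topLevelArtefacts());
        // one possible strategy: undeploy in reverse deployment order
        for (int i = ordered.size() - 1; i >= 0; i--) {
            deployer.undeploy(ordered.get(i));
        }
        // a hook for custom code (e.g. DB initialization) would run here
        for (Artefact artefact : ordered) {
            artefact.deployTo(deployer);
        }
    }

    // Depth-first walk so that every artefact appears after its dependencies.
    // Note it relies on Artefact equality: the same dependency reached through
    // two different artefacts must be recognized as one and deployed only once.
    private List<Artefact> deploymentOrder(List<Artefact> topLevel) {
        Set<Artefact> visited = new LinkedHashSet<Artefact>();
        for (Artefact artefact : topLevel) {
            visit(artefact, visited);
        }
        return new ArrayList<Artefact>(visited);
    }

    private void visit(Artefact artefact, Set<Artefact> visited) {
        if (visited.contains(artefact)) {
            return;
        }
        for (Artefact dependency : artefact.dependencies()) {
            visit(dependency, visited);
        }
        visited.add(artefact);
    }
}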

The real solution is a bit more complex than the beforeClass example above, but then the original pseudo code did not show everything either. The main point is that it goes directly to the essence of the scenario setup - it shows nicely what we want to use in the test instead of obscuring it with the details of how that is achieved. There are, however, a few even more important things that are not shown:

  • It was easy to add an optimization - to undeploy only the unwanted artefacts, deploy the missing ones and redeploy those whose dependencies changed, everything in the correct order, all without touching beforeClass at all. This was possible only because the work was not done explicitly, but hidden inside redeploy() (see the sketch after this list).
  • The whole code, together with the optimization, was heavily unit tested, so I could make changes confidently, while the original duplicated test setup was tested only indirectly through the tests it was meant to configure. (To be fair, one bug was found a year later - the optimization did not work effectively under all circumstances.)
  • Less than a month after introducing it, our team leader realized we could use the whole framework and the existing artefact implementations in the installer; we just needed to provide a slightly modified Deployer.
  • When I returned from my Round The World travelling I found the framework had been moved from our project into a shared space and is used by other projects. Currently we are thinking of using it for yet another unforeseen application, but it is too soon to be sure. I think that should be possible just by providing another implementation of Deployer.
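To illustrate the first point, the optimization essentially boils down to comparing what is currently deployed with what the new scenario needs and touching only the difference. A hypothetical sketch only (the real code also had to handle ordering and artefacts whose dependencies changed):

import java.util.LinkedHashSet;
import java.util.Set;

// Hypothetical sketch of the optimized redeployment. Nothing in beforeClass
// changes; only the internals behind redeploy() do.
class DeployedState {
    private final Set<Artefact> currentlyDeployed = new LinkedHashSet<Artefact>();

    void applyScenario(Set<Artefact> desired, Deployer deployer) {
        // undeploy only the artefacts the new scenario does not need
        for (Artefact artefact : new LinkedHashSet<Artefact>(currentlyDeployed)) {
            if (!desired.contains(artefact)) {
                deployer.undeploy(artefact);
                currentlyDeployed.remove(artefact);
            }
        }
        // deploy only the artefacts that are missing
        for (Artefact artefact : desired) {
            if (!currentlyDeployed.contains(artefact)) {
                artefact.deployTo(deployer);
                currentlyDeployed.add(artefact);
            }
        }
    }
}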

Conclusion

I hinted in Solution that duplication of code usually means you are missing some abstraction. If you find it and make it explicit, you might save yourself a lot of the pain you had before, because information hiding allows you to introduce new approaches or enhancements cheaply.

With the correct abstraction and well-distributed responsibilities you might be surprised how your piece of code gets reused. In the early 90s everybody hoped OO languages would be the solution for reusable code, but somehow it seemed OO could not deliver. IMHO the problem was always badly assigned responsibilities (and misused inheritance). This claim may be a slight simplification, but there is a lot of truth in it.

I wish I had been there when it was decided it would be published to a wider audience. I have a suspicion that I made a not-so-good-decision-after-all with how Artefact handles equality. It is OK for my original task, but with wider usage it seems to be restrictive and confusing. And it is too late to change now that it is a published contract ;-)

Test Driven Learning

Added: August 06, 2010

Tags: learning readability testing

I've just found an interesting post, Test Driven Code Review. Its title and the 2nd sentence caught my attention:

When reviewing or just reading such code, it's often best to first read the tests.
That is something I started doing some time ago and I have had very good experience with it many times. But not always. As Philip Zembrod writes further on, he had a similar experience and coined the terms well-specifying tests and poorly-specifying tests. I like them. What could be a better place to start understanding code you have just been asked to review, without a deep understanding of the requirements and the domain it touches? Well, I agree with the author that it should be the tests. But they have to be clean, easy to read and simple.

Having been guilty of writing obscure tests myself in the past, I have tried to make them as easy and concise as possible for the last couple of years. I think my turning point was when I (gradually, over time) created a unit test using 13 mocks. Every time some of its tests failed (usually several at once), I had to analyse deeply whether I had really introduced a bug or just needed to tweak one of those inconvenient mocks because the requirements had changed.

I have learned a lot from that experience:

  • Limited responsibility of a class calls for fewer mocks. It is always possible to rearrange responsibilities in some way that allows simpler testing (and easier understanding of both production and test code).
  • A test should explain the main points, hiding irrelevant details (like the creation of data it needs that is not the reason this test exists, or the mechanics of how the mocking library works, while still saying what you want to achieve).
  • A test should be concise, only a couple of lines, all focused on telling you a story: what environment you have, what you do with the tested class and what effects you should see as the result.
The last two points are not dissimilar to a good story. You create some tension with the method name and then let the readers read through without getting lost or losing interest. As a result they should learn something, and hopefully it will be about the expected behaviour of the tested code instead of how complicated mocking can get :-)
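To illustrate those last two points, here is a small made-up example (none of these classes come from a real project; the production classes are nested only to keep the sketch self-contained): the method name creates the tension, the builder hides the irrelevant data creation, and the body reads as environment, action, expected effect.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class DiscountCalculatorTest {

    @Test
    public void longTermCustomerGetsTenPercentDiscount() {
        // environment: only the detail that matters for this scenario is visible
        Customer customer = aCustomer().withYearsOfLoyalty(5).build();

        // action on the tested class
        int discount = new DiscountCalculator().discountFor(customer);

        // expected effect
        assertEquals(10, discount);
    }

    // the builder hides how a Customer is created and keeps sensible defaults
    private static CustomerBuilder aCustomer() {
        return new CustomerBuilder();
    }

    // --- minimal supporting types so the example compiles on its own ---

    static class Customer {
        final int yearsOfLoyalty;
        Customer(int yearsOfLoyalty) { this.yearsOfLoyalty = yearsOfLoyalty; }
    }

    static class CustomerBuilder {
        private int yearsOfLoyalty;
        CustomerBuilder withYearsOfLoyalty(int years) { yearsOfLoyalty = years; return this; }
        Customer build() { return new Customer(yearsOfLoyalty); }
    }

    static class DiscountCalculator {
        int discountFor(Customer customer) { return customer.yearsOfLoyalty >= 3 ? 10 : 0; }
    }
}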

But not everybody does

The sad part is that I keep finding tests that are hard to read because they completely obscure the main point. Apart from plenty of "easy" to fix tests I have also seen a couple of ugly ones. The worst was a 3500-line test with 32 mocks! It was worse than a really thick book with an unappealing name. I never bothered to look at the production code. (Fortunately I did not need to.) OK, there is an even sadder part: I am afraid the author(s) will never learn anything from this test experience. Maybe only one incorrect thing - that unit testing is meant to be hard :-)

Back to reviews

I usually try to spread good ideas and I've found that code reviews are a pretty good medium for suggesting simple improvements to tests without being caught mentoring. Many times I have found developers too proud to accept explicit help, which is a pity in my opinion. The disadvantage of this approach is the high probability that the person being helped will miss the bigger picture. But that might come later.

Giving gifts versus conceiving rules

Added: June 17, 2010

Tags: people

Nearly 2 years ago I decided to organize a unit testing workgroup with my colleagues. I found the existing unit tests for the product I was working on were not good enough to pay for themselves (that is, being cheap to develop, reliable, easy to maintain and helping the production code grow instead of prohibiting it). I prepared a presentation showing the ideas, with good and bad examples (there were fewer bad examples, because I was advised not to alienate my colleagues at the beginning).

The presentation went well (I thought), and the colleagues agreed to continue with these sessions weekly. They asked me to concentrate on writing The Best Practices. I objected; I didn't like the idea much, because IMHO it was the wrong approach. You cannot create universal best practices, because you always need to consider the context you want to apply them in. But I agreed to write Good Practices. In the following 3-4 sessions we covered ideas like expressive tests, data builders, testing scenarios instead of methods, ... Colleagues agreed they indeed were good ideas, and that was it. The whole idea died slowly. I have occasionally linked the wiki pages in e-mails, but I suspect nobody remembers them now.

My original thought was that the colleagues were not ready and that I had marketed my ideas insufficiently. I am sure these reasons played a certain role, but now I've realized there was a big mismatch in our expectations of this workgroup. I wanted to give them a gift, but all they wanted were rules to follow. In a way, they really were not ready; it is much easier to follow a set of given rules, and in case of failure the blame belongs to somebody else - "Hey, I just followed the rules". I did not understand it then, because I was quite new to the company.

I decided to continue with my gift giving (even though I did not have this name for it then) in covert mode: helping by making things better that nobody asked me to improve, considering possible enhancements whenever I reached some threshold of pain.

Sometimes I tried to provide a hint on how to improve a piece of code, but that can be a bit risky in covert mode, because it allows for misinterpretation if not explained well. And due to my previous experience of refusals I tried to avoid getting into explicit mentoring mode. It usually depended on my previous experience with the person I was trying to help.

The situation is slowly getting better. A couple of people have started to practice TDD, but others vigorously refuse even to try it. I am finding readable, focused unit tests more often, but they are still rare compared to 50-line monsters you cannot understand even after reading their javadoc. There are people around who understand the power of finding better ways to do things, but many want to live in the safety of rules made up 10 years ago (e.g. using StringBuilder for every concatenation, despite the fact that the JVM optimizes '+' very well now and it is much easier to read) or to create a mess if there is no rule against it (e.g. copy&pasting a piece of code 17 times and modifying each copy slightly instead of making the effort to find a correct abstraction - I should probably blog about this at some time). I still overhear "I would like to have the best practices for X" from time to time, but less often than before.
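Just to spell out the StringBuilder example, here is a small sketch of the two styles (my illustration only, not code from the product): for a single concatenation expression the compiler emits the StringBuilder calls for you anyway, while the old rule still makes sense when building a string in a loop.

class ConcatenationStyles {

    // Readable: for a single expression, the compiler turns '+' into
    // equivalent StringBuilder calls behind the scenes.
    static String greetingWithPlus(String firstName, String lastName) {
        return "Hello, " + firstName + " " + lastName + "!";
    }

    // The rule-driven version: more noise for no practical gain here.
    static String greetingWithBuilder(String firstName, String lastName) {
        return new StringBuilder()
                .append("Hello, ")
                .append(firstName)
                .append(" ")
                .append(lastName)
                .append("!")
                .toString();
    }

    // Where the old rule still applies: concatenation inside a loop,
    // because '+' would create a new StringBuilder on every iteration.
    static String report(Iterable<String> lines) {
        StringBuilder report = new StringBuilder();
        for (String line : lines) {
            report.append(line).append('\n');
        }
        return report.toString();
    }
}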

What is the difference between gifts and rules?

When I compare those 2 approaches, I much prefer the possibility of giving a gift to inventing a new rule. The point is that with a new rule you are saying you don't believe people are able to do the correct thing. Their genius certainly cannot shine if it is restricted by rules. Those asking for rules also express that they don't believe they themselves are able to deliver.

On the other hand, spending time with colleagues to show them how they can improve shows them that I respect them and that I believe it is not wasted time. And in the end it makes a difference to the product and to our working environment.
