roland tiefenbrunner
18.5.13
Time to move
13.4.13
Role Models as Analogs and Antilogs
Photo: http://www.flickr.com/photos/shkarim/4780781176/sizes/m/in/photostream/
I just finished reading the book Dear Theo: ... (I found it via Scott Berkun) and it is as inspirational as it is motivational. It illustrates that even Vincent Van Gogh had to practice like anybody else and that he was not born a perfect painter. He even went through phases of melancholy and depression, but he kept on working. It is one thing to have good ideas and good intentions, but a completely different thing to invest in them (time, money, etc.). At the end of the day it is all about going out there and dedicating oneself to something.
Additionally, he was a modest man, not daring to compare himself to the other great painters of his time. On the contrary, he chose them as role models.
At the same time I read the book Getting to Plan B, where the authors recommend finding analogs and antilogs for your product's or company's vision. That means finding out what has and has not worked for other people, so you do not have to reinvent the wheel or make the same mistakes.
I think role models can be used as analogs and antilogs for your career's vision. It is not about becoming those people, but about using practices that worked for them and/or combining those with your own.
Therefore it is a good idea to seek out people who stand for values you admire and to get to know their lives. Copying Einstein's sleeping habits or having a personal reviewer like Vincent Van Gogh had (his brother); maybe it will work for you as well.
20.1.13
Feature's hierarchy of needs
So I watched Gojko's video Reinventing software quality, where he talks about a software quality model that is based on Maslow's hierarchy of needs. He says that of course his model is wrong, as all models are, but that at least it worked for the given problem context.
For each layer just enough work is invested to reach the next one. So if the system is deployable and functionally ok, there is no need to make the deployment faster or redesign the code. If the system is usable, the next step is to measure its usefulness: is anybody using it? If not, what is the point in maintaining it, etc. The last step is to verify whether it is successful in terms of business value.
So, I found that approach very interesting and after reading 'Impact Mapping' I came up with a similar model (which is of course wrong), but just for a single feature.
Before the feature can be developed there needs to be a related business value or goal, the purpose of its life. The customer formulates a problem he wants solved: 'increase sales by 10%' or 'being able to sell bundles'. After that the feature is developed, is functionally ok, can be deployed, and is performant and usable.
I removed 'useful' as, in my opinion, for a feature this is part of being successful. It is successful if the business goal was reached ('sales increased by 10%') or if the customer accepted it at the live demo.
You ain't gonna need it
Now it is time for self-actualization, meaning it is time to invest in the feature. Within Maslow's pyramid it is possible to desire the next layer even if the current one is only fulfilled to 70%, and the same should hold for our feature. Up to this point there was no need for 100% test coverage or even TDD (of course it results in better quality, but it costs 30% more time), and no need for three testers doing several hours of exploratory testing.
We'll get it "right" the third time.
But now that we are certain this feature is worth the effort, we start to redesign it, invest in its quality, secure it, etc., so that we can decrease later maintenance costs. The last step is done when the defined quality level is reached and the non-functional requirements are met.
So, my point is, if there is a lot of uncertainty (there mostly is) get the feature out as fast as possible, to learn and to know if the feature is worth the effort and especially the money. If this is the case, invest in it immediately to reduce ongoing maintenance costs.
This week I had the opportunity to attend the Software Quality Days 2013 in Vienna.
The conference was very well organized and the speakers covered interesting topics. The main focus lay on quantifying software quality, introducing metrics, agile best practices, test data management, etc. I also had the pleasure of getting to know Tom Gilb, the grandfather of agile, who actually took the time after his talk to answer all of my questions; and there were a lot of them :) Furthermore, I had a short chat with Sander Hoogendoorn, who gave me some tips for my presentation, as he is quite an experienced speaker.
If you want to check out my slides for "Acceptance Testing using Examples", just write me a short email. The presentation went very well and there was not a single seat left.
4.9.12
Assert Better Quality
In 1999, Andrew Hunt and David Thomas already suggested in "The Pragmatic Programmer" to Design with Contracts (TIP31) and to use assertions (TIP33). They write about benefits such as concrete behavior documentation and crashing early (TIP32). Furthermore, you tend to think more about the behavior itself and about how you want to handle exceptions and failures.
In 2009, Janie Chang reported a relationship between software quality and assertions, based on an empirical study of two internal Microsoft software components.
"The team observed a definite negative correlation: more assertions and code verifications means fewer bugs. Looking behind the straight statistical evidence, they also found a contextual variable: experience. Software engineers who were able to make productive use of assertions in their code base tended to be well-trained and experienced, a factor that contributed to the end results."So, time to take a look at the possibilities in Java :)
Assertions
Java has the reserved keyword assert that can be used like
assert Expression1 ;
or
assert Expression1 : Expression2 ;
The first expression must result in a boolean value, whereas the second is used to specify the error message and must therefore return a value. Assertions are not enabled by default and must be activated using the JVM parameter -ea. Check the following code snippet for a concrete example:
public void doSomt(Integer i) {
    assert i != null;
    assert i > 3 : "The first argument must be > 3";
}
A disadvantage of Java assertions, when enabled, is their impact on performance. However, in The Pragmatic Programmer it is suggested to leave assertions turned on and to remove only those that really have an impact. Furthermore, the possibility to turn them on or off can also be a disadvantage: if you are the only one on your team using assertions and other environments, like daily builds or testing machines, do not have assertions turned on, there is no benefit in using them.
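If you are unsure whether assertions are actually enabled in a given environment, a quick check is the idiom from the official Java assertion documentation (a minimal sketch; the class name is made up):

public class AssertionStatusCheck {

    public static void main(String[] args) {
        // The assignment inside the assert only executes when assertions are
        // enabled, so the flag stays false if the JVM was started without -ea.
        boolean assertionsEnabled = false;
        assert assertionsEnabled = true;
        System.out.println("Assertions enabled: " + assertionsEnabled);
    }
}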
Additionally, extra documentation is needed to carry these internally defined constraints to the outside world, e.g. into the interface method documentation.
Google Guava Preconditions
The guava-libraries include a class called Preconditions. It can be used to verify parameters in an easy and convenient way:
public void doSomt(Integer i) {
    // needs static imports of com.google.common.base.Preconditions.checkArgument
    // and com.google.common.base.Preconditions.checkNotNull
    checkArgument(checkNotNull(i) > 3,
            "The first argument must be > 3");
}

If a constraint is violated, a standard RuntimeException is thrown: IllegalStateException, NullPointerException, IllegalArgumentException or IndexOutOfBoundsException. Therefore, the caller must be informed about these rules using documentation or tests. Thereby, the caller becomes responsible for passing appropriate data (but not as strictly as in Designing with Contracts).
So the question is how and when to use what!?
I think a combination of both is more valuable than using just one of them. Use a library like Guava's Preconditions or create your own classes for validating parameters. Add some documentation about the constraints and back them up with lots of unit tests.
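As a minimal sketch of such a backing test (JUnit 4; assuming the doSomt method from the Preconditions snippet above lives in a hypothetical class called MyService):

import org.junit.Test;

public class DoSomtPreconditionTest {

    // MyService is a made-up name for the class containing doSomt
    private final MyService service = new MyService();

    @Test(expected = NullPointerException.class)
    public void rejectsNull() {
        service.doSomt(null);
    }

    @Test(expected = IllegalArgumentException.class)
    public void rejectsValuesNotGreaterThanThree() {
        service.doSomt(3);
    }
}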
Assertions, on the other hand, can be used for verifying internal states and postconditions, e.g. verifying that the last element is the lowest when the sorting algorithm is finished. During the local development phase assertions can be turned on and will definitely help to improve code quality.
The valuable part is that you start thinking in more detail about the behavior and about invalid states. Furthermore, you will save time as the program crashes early, meaning you do not have to dig through long stack traces just to find the real root cause.
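Picking up the sorting example from above, such a postcondition assert could look like this (a minimal sketch with made-up names; sorting in descending order so that the last element is the lowest):

import java.util.Collections;
import java.util.List;

public class DescendingSorter {

    public void sortDescending(List<Integer> values) {
        Collections.sort(values, Collections.reverseOrder());

        // Postcondition: after a descending sort the last element is the minimum.
        // Only checked when assertions are enabled (-ea), e.g. during development.
        assert values.isEmpty()
                || values.get(values.size() - 1).equals(Collections.min(values))
                : "last element is not the minimum after sorting";
    }
}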
In another entry I will also write about Design by Contract and present some Java implementations.
29.5.12
EJB Lifecycle Interceptor
Depending on the actual type of your Enterprise Java Bean you can have different lifecycle events, e.g. PostConstruct or PreDestroy. It is also possible to create interceptors for these callback methods. The following code shows an EJB with an intercepted callback method and the interceptor implementation.
// imports needed: javax.ejb.Stateful, javax.interceptor.Interceptors,
// javax.annotation.PostConstruct, javax.interceptor.InvocationContext
@Stateful
@Interceptors({MyInterceptor.class})
public class MemberRegistration {

    @PostConstruct
    public void init() {
        // do something
    }
}

public class MyInterceptor implements Serializable {

    @PostConstruct
    public Object intercept(InvocationContext ctx) throws Exception {
        // do something
        return ctx.proceed();
    }
}
The first thing to note is that interceptors for lifecycle events can only be declared at class level; nothing happens if the annotation is added to a method. Furthermore, as general advice, do not forget to call the proceed method on the context object (unless you intentionally omit it). I saw several examples without it, which results in the intercepted method never being invoked.
22.5.12
Acceptance Test: What needs automation?
Fact:
As you can read here and here, Acceptance Testing is a technique for demonstrating to the customer that features are implemented correctly and do what they are supposed to do. Acceptance tests are black box tests and specify a feature end-to-end (in terms of process, not architectural layers). Furthermore, they are supposed to be automated as they are used for the regression suite.
Problem:
Assume we are implementing a product with a web-based graphical user interface, so we decide to use one of the various GUI automation tools like Selenium, Marathon or Watij. Motivated as we are, we start to write long test scripts for each of our acceptance tests. After some stories the customer wants lots of GUI changes and we need half a sprint just for fixing our tests; the more changes we get, the more time we spend fixing tests.
Suggestion:
Martin Fowler argues in this article that it is definitely not necessary to automate 100% of the GUI part, as it hardly pays off. Furthermore, C. Lowell and J. Stell-Smith, the creators of Marathon, advise in this article to keep the testing pyramid in mind and to keep GUI tests at around 10%.
Even though they are writing about test portfolios in general, I think this concept can be used for Acceptance Testing too: stick to the testing pyramid and keep GUI tests at around 10%.
Furthermore, it is probably not necessary to keep and fix all broken GUI tests. One possibility is to kick out tests that fail (because of GUI changes) a certain number of times in a row.
So the question arises: what and how should we test?
As before, test the business rules (acceptance criteria), but on the API level, not the GUI level.
Gojko Adzic proposes in his (actually quite old ;) ) article to concentrate on the key business scenarios. That means end-to-end tests for such scenarios (which should not be too many) are created, automated and further used as smoke tests. Other GUI-related tests verify things like "are the fields correctly displayed when hitting this button?", but not business rules.
Therefore, an architecture is needed that allows testing on another (architectural) layer. As stated by Uncle Bob here, one approach would be to use the same services as the GUI and mimic its behavior within the test. This, however, presupposes that a correct separation of frontend and business layer exists. If there is no usable API, it is absolutely OK to build in test hooks, according to J. Whittaker.
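As a rough sketch of what such an API-level acceptance test could look like (JUnit 4; OrderService, Bundle and Order are made-up types standing in for whatever services the GUI itself calls):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class SellBundlesAcceptanceTest {

    // Hypothetical service; the same one the web frontend would use,
    // so the business rule is exercised without going through the GUI.
    private final OrderService orderService = new OrderService();

    @Test
    public void customerCanOrderAProductBundle() {
        // Given a bundle consisting of two products (made-up domain objects)
        Bundle bundle = new Bundle("starter-kit", 2);

        // When the customer orders the bundle through the service API
        Order order = orderService.order(bundle);

        // Then the order contains both products of the bundle
        assertEquals(2, order.getItemCount());
    }
}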
Further Readings:
Brian Marick - When Should a Test Be Automated?
Behavior Driven Development - It raises the level of abstraction and thereby helps to reuse code, resulting in less broken test code when dealing with GUI changes.