
Bash scripting web tests with ‘wget’

Did you know you can use the command line wget tool to perform simple tests on your web site?

In a Linux environment, combining the programmatic capabilities of a Bash (Bourne Again SHell) script with wget is pretty easy and allows you to carry out basic load and performance tests. These tests could be run directly from your local machine or any other remote machine or VM with a Bash shell and wget, which is pretty much any Linux distro.

When I recently wanted to check whether a problem where my web servers died under load had been resolved, I wrote the following simple script:

#!/bin/bash
# Fire off a number of concurrent requests at a URL and report start/end times.
if [ $# -ne 2 ]; then
        echo "Usage: wload.sh URL #processes(eg. 1000)"
        exit 1
fi
start=$(date +%H:%M:%S)
x=1
while [ "$x" -le "$2" ]
do
  echo "Request $x.."
  # Request the URL in the background, discarding both the log (-o) and the output (-O)
  wget -o /dev/null -q -O /dev/null "$1" &
  sleep 0.01
  x=$(( x + 1 ))
done
# Wait for all background requests to finish before reporting
wait
end=$(date +%H:%M:%S)
printf "%s " "$(hostname)" "$1" : "$2" "$start" "$end"
echo

You can save this as wload.sh, make it executable (chmod +x wload.sh) and run it in a Linux environment using the command:
./wload.sh [your URL under test] [no. concurrent requests to fire]

I think these kinds of scripts are pretty neat, and I believe there is definitely a place for the ‘humble’ Bash script in the development process every now and again. You can do a lot more with Bash than just fire off requests at a page, and as it is essentially a programming language it can be very powerful in the right hands.

Programming languages you might hear more about in 2014

There is a lot of buzz at the moment in the front-end web technology world with frameworks such as AngularJS breaking into the mainstream of software departments around the world. However, bubbling under the surface there are some interesting developments going on in back-end technology, including new programming languages gaining some popularity and maturing. I thought I’d take a look at three languages you may have never heard of that are each targeted at different goals:

Elixir

Elixir is built on top of the Erlang language, which was developed by Ericsson some time ago and is already used by many large organisations. Erlang’s main focus was to make concurrency and fault tolerance part of the core language, allowing for highly scalable software systems, originally intended for the telecoms industry. Elixir builds on Erlang to make it a more productive general purpose language for distributed applications. You can take a whistle-stop tour of Elixir by watching this great presentation on YouTube from José Valim (the creator of the Elixir language).

Ioke

Ioke is one of the many new languages to use the JVM as a platform. It uses prototype-based object orientation with no concept of specific classes, as every data type is an instance. It is inspired by a number of languages including Smalltalk. It is focused more on the expressiveness of objects, data structures and algorithms than on concurrency per se. This makes it a good contender for a language that is better at modelling a problem domain in object-oriented code.

Mercury

Mercury draws its inspiration from Prolog, which is probably my favorite programming language due to its simplicity and focus on logic. It is a strongly typed, highly declarative language including many of the currently popular functional programming concepts. It can also compile to a number of ‘back-end’ platforms, including C and Java. This language has existed for some time, but now that it has reached a state of reasonable maturity it could well become a powerful language for writing specific types of software.

To sum up…

One of the key themes in each of these very different languages seems to be the concept of meta-programming. Each language is built on top of itself, and code can very easily be manipulated to extend or alter functionality, enabled by the fact that everything can be broken down into the core constructs of the language. Put simply, there is no bloat to these languages: the authors have thought long and hard about the syntactical concepts they want to include in order to avoid having too many different ways to implement common software patterns.

As a Java developer I am looking more to the changes in Java 8 for their impact on the work I will be doing not too far down the road. But the landscape of the entire web stack is also very much in flux at the moment. It has been interesting to look at what other developments are out there that might be completely orthogonal to what I do day-to-day, but that I’m sure will ultimately have an impact.

Unit testing with JUnit

Until fairly recently, unit testing was an engineering practice I’d had little exposure to. It’s perfectly possible to get by developing without unit tests, but they do help with the design, verification and maintenance of code.

A unit test is designed to test a small functional unit of code for a particular scenario. Generally multiple tests are written to verify different paths through the same method/function. Personally I think unit tests are particularly valuable in a continuous integration environment (e.g. Jenkins) where the tests run prior to regular deployment and developers are alerted of any failures.

There are many testing frameworks, but for basic unit tests JUnit is pretty much the de facto standard. The Eclipse IDE comes with a plugin already configured to allow unit tests within a project to be run by simply right-clicking on the project and running the code as a JUnit test. However, JUnit tests can also be run standalone through the command line, an Apache Ant task, or an Apache Maven build. By default Maven will automatically run unit tests placed in src/test/java during a build.

So what does a simple unit test look like?

import static org.junit.Assert.assertTrue;

import org.junit.Before;
import org.junit.Test;

public class HelloTest {

  private String productName;

  public class SayHello {
    private final String userName;
    private final String productName;
    public SayHello(String userName, String productName) {
      this.userName = userName;
      this.productName = productName;
    }

    public String speak() {
      return "Hello " + userName + " welcome to " + productName;
    }
  }

  @Before
  public void setup() {
    productName = getProperty("productName");
  }

  @Test
  public void speakReturnsString_WithCorrectProductAndUsername() {
    SayHello sayHello = new SayHello("Dave", productName);
    assertTrue(sayHello.speak().contains("Dave"));
    assertTrue(sayHello.speak().contains(productName));
  }
}

The @Before annotation defines common code to run prior to the execution of each test method annotated with @Test. [The getProperty method has been omitted.]

This is a simple test which verifies that constructing a SayHello object and then calling its speak method returns a string containing the userName and productName that were passed to the constructor. This illustrates the difference between desired test behaviour and the actual implementation. Here we are allowing the flexibility for the speak() method to be modified as long as it returns a string containing both the username and the product name passed in.

Though this is a basic example, hopefully it gives an idea of how unit tests are usually used to define desired behaviour (what the code should do), not necessarily implementation (how it does it). Writing tests allows you to clarify the behaviour of your code, which makes them a useful tool in code design as well as validation.
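For contrast, here is a hypothetical extra test method for HelloTest (assuming a static import of org.junit.Assert.assertEquals) that pins down the exact greeting; it would fail on any change to the wording of speak(), even when the behaviour we actually care about is preserved:

  @Test
  public void speakReturnsExactGreeting() {
    SayHello sayHello = new SayHello("Dave", "MyProduct");
    // Coupled to the exact wording returned by speak(), not just the behaviour under test
    assertEquals("Hello Dave welcome to MyProduct", sayHello.speak());
  }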

New frontiers in 2014 – Nator Designs, community contribution, and software sustainability

Having some time for reflection before the new year begins really helps to focus the mind. After some thought, here’s a summary of some things I will be working towards in 2014:

  • Putting more effort into blogging to give quality Java software development tips, tutorials and information (read: you’re going to see some changes round here!)
  • Working on Nator Designs – a business I co-founded developing innovative web and mobile software
  • Contributing to and participating in the communities that serve individuals and businesses with quality free and open source software and support (a well from which I have drunk deeply in the past)
  • Investigating the notion of sustainable software development

Additionally if I get time, I’d also like to make some more progress on a project I started some time ago, which is still in its infancy, relating to logic and behavior in application frameworks.

However, as life is often busy, if any of these pique your interest and you would like to collaborate, I would be interested to talk. Also, if you are looking for help with your Java software development (search/web/mobile etc.) please get in touch with us at Nator Designs as we’re always happy to talk about code!

Buenos días amigos! – returning from Mexico

My wife and I finished up a busy 2013 in style with our honeymoon to Mexico, also visiting some interesting countries off the Gulf of Mexico. This was an amazing experience, as well as an eye-opening one. I got to spend time with my beautiful wife, of course, and also visit what is a beautiful country in many different senses of the word – not least of all the way it operates.

Mexico has its problems, for sure, but I get the feeling that in this day and age they are not necessarily greater or more stifling in magnitude than those of supposedly more ‘developed’ countries; they are just different. I’ll save the full review for TripAdvisor(!) but all the time we were there we felt safe and had a great time, and we learnt a lot about a culture very different from the neighbouring North American culture and somewhat different to our European culture as well. It certainly feels like there is an air of social change, and it is clear from spending even a small length of time there that Socialism plays a big part in this.

I’ve never given the political and social history of South American countries much thought, and certainly never considered the link between these factors and the interesting and rich culture found in South America. It’s amazing how visiting new places can give you a different perspective on the world and make you realise that there are many different approaches to solving problems and ways to live a fulfilling life.

Anyway, I’ve included a few photos from our honeymoon to break up some of the drabness of my last few posts!


Checks and balances

Next in the thrilling installments of ‘Rants By a Prematurely Aporetic Programmer’ is an indictment of the great power given to the average intrepid code monkey.

I love to understand technology at a number of different levels – I think it’s great to be aware of how software works, from the humble digital signal to the deployment process of a cloud hosted web app. But no one is an island, and I don’t think any one person is capable of managing several different contexts of a piece of software at once. So why then will you likely find cases of the intrepid code monkey fiddling with server configurations, munging data and delving into SQL databases?

Necessity it seems.

The same reason you will find many developers brewing their own beer, wiring their own electrics or maintaining the vehicle they drive: because a lot of the time no other bugger seems capable of getting the job done right. Of course this is an unjust criticism of the ‘other buggers’ around. It’s not so much that no-one can or will help, but that the convenience of said code monkey taking on these challenges seems to be the best option in the absence of any system, process or team dedicated to the particular job at hand.

A recurring theme across so many organisations is that of our code monkey being given so much power that they have a sense of abject horror at how much they can fuck things up. None of us want to shoulder such a burden alone.

Fortunately in great teams, like-minded individuals share the responsibility together, and more often than not the end result is successful.

But it shouldn’t be this way. As with so many things it is easy to criticise, but harder to be constructively critical. All I can offer as a solution is that this power should be locked down and the boundaries of responsibility more clearly defined. Common sense must apply to avoid needless bureaucracy. But some checks and balances should be put in place.

By enforcing more of this kind of mindset, things will slowly change (and to some extent are changing). We don’t want to be given access to stuff we have no right to muck about with any more… and if you meet a programmer who does, I think you ought to be seriously suspicious.

The why and how of measuring software quality

My last post was about what makes information retrieval software valuable, and one of the most important cross-cutting factors in creating valuable software is quality. This is something I want to talk a bit more about because it is one of the things that motivates me to do my job well, and also frustrates me so much when it is ignored.

Assessing quality may seem trivial – if a product looks good, is pleasant to use and serves its purpose, it’s a high-quality product, right? … Well, not necessarily. What about criteria like maintainability, adaptability and robustness? These are things that are hard to see up front when it comes to software. These are things a user doesn’t directly care about either. But they are things that in the long run will affect users and developers alike and account for the lifespan of a product.

So having some idea of the quality of a code-base is clearly important. The next question is how can you measure the quality of a code-base?

For decades people have tried with varying degrees of success to measure code quality. There are many tools which will give you metrics on anti-patterns implemented in your code, tell you how well teams are performing and give you an idea of how well your code is covered by automated tests. These are great advancements in helping to build good software. The problem is they also give us a false sense that we are on top of things and everything is OK, simply because we have these metrics available, and hey – if we plot them on a graph the esoteric line of quality keeps going up and up over time.

A reality check

It’s my considered opinion, that may come back to haunt me, that if you think you can improve a product’s quality at the same time as mindlessly adding features to a code-base, you’re full of shit. Just because the new ‘whatsit-widget’ in your product has been written alongside some unit-tests doesn’t mean you’re winning any kind of war against technical debt.

How about these for metrics: [many of these you can probably confirm by looking at version control history]

  • how many times in the last year did developers at your organisation spend two weeks or more solely re-factoring code?
  • can you compare how long it takes to make modifications to existing components, features or classes?
  • how long does it take for a developer unfamiliar with a particular feature to feel comfortable changing the code behind it?
  • how many elements of the software are ‘owned’, and only ever modified by one developer?
  • can you describe a feature or a class in a few (three or fewer) short sentences?

Bear in mind that these are not necessarily linear metrics – a really low number for ‘more than two weeks spent re-factoring’ would indicate not enough time allocated to improving quality, while a very high number would indicate technical debt is never being repaid.

The above are just suggestions of the sorts of obvious indicators I think should be looked at when measuring code quality. I am by no means an expert on the subject. At the end of the day, if a software organisation is paying some attention to quality and has a reasonable degree of visibility of the quality of its products (both on the surface and underneath), it is helping itself to build better software.

The value of search services

There are a huge number of online services, both paid-for and freely available, which provide users the ability to search for, retrieve and digest content. As is so often the case with online products and services, quantifying the economic value of these and pinning down from where the value arises is extremely difficult.

It would be reasonable to assume that the value is created, in some probably unequal proportion, by the content and by the search features that allow for the retrieval of this content.

Content

Obviously without content a search service is unlikely to have any value at all. Key factors in the value of the content include:

  • the quality and integrity of the data
  • the volume of material available
  • the format of the material
  • the relevance of the content to potential users (including recency of creation and publication)
  • how content which duplicates an original, perhaps physical, format varies from the original

Retrieval

Similarly, without an appropriate mechanism to retrieve relevant data, a service will have little value. However, as long as a user has some way to access relevant content, even if it is inconvenient or cumbersome, I would argue the quality of this is secondary to the quality of the content. Factors in the value of the retrieval include:

  • the speed at which a user can find results
  • the variety of ways a user can access the material (e.g. range of devices supported)
  • the ease of use
  • the range of search features supported
  • the ability to combine search features

Hold on, what sort of ‘value’ are we talking about?

You might have picked up on the fact that I haven’t specified what form of value these aspects of a search service create. This is the most difficult part of valuing a search service. Personally I think many of those factors mentioned above are high on the list in terms of creating value for the end-user: providing them with a reason to use the service itself. However, translating this into any kind of monetary value, in terms of what the customer is willing to pay for (if anything) is very tricky.

Open Library is an information retrieval service disrupting traditional publishing business models

What’s the point?

This creates one of the biggest challenges of the day in the publishing industry. Quantifying the cost and profit of products, and of the features within them, is problematic. This is particularly compounded by the fact that new ‘open access’ services featuring ‘open data’ are disrupting the business models publishers currently rely on. As a result, many of those companies developing search services for profit have to make decisions with very limited evidence as to what to invest money in.

How can a search service make a sustainable profit?

Though I don’t have any answers myself, I feel it’s worth making the point that the above question remains mostly unanswered for many of the organisations involved in the ‘search business’. Obviously improving on all of the factors mentioned that affect a user’s perception of the service would certainly help, as happy users tend to equate to happy customers. There is also profit to be made in tools that empower individuals and organisations to create and publish the content featured in search services. But ultimately pinning down what makes for the success and failure of businesses in this industry is a hard task. Having said this, the company behind the most well known search service in the world, Google, has been making a sustained profit for many years and provides inspiration that it is possible to build a profitable search service.

What’s it to you?

‘Search’ is certainly an interesting area to work in from a technical perspective. But what is inspiring to me as a developer from a business perspective is that there are successful cases of businesses in this area that manage to maintain high standards in terms of quality, a healthy company culture and a level of professionalism while remaining financially viable.

Book Review – Just for Fun: The Story of an Accidental Revolutionary

Just for Fun: The Story of an Accidental Revolutionary
By Linus Torvalds and David Diamond

I’ve never been a Linux zealot, but like any self-respecting geek I’ve known the general background to the development of the Linux operating system and like to root for and support open source software wherever possible.

This book describes in detail Linus Torvalds’ (the ‘creator of Linux’) background and early life, interspersed with segments from his meetings with David Diamond, the co-author. It then goes on to answer some of the typical questions you might have about a man who developed an operating system which powers millions of computers and devices while managing to remain, at its heart, freely available and freely modifiable. Linus sums the book up with convincing arguments for open source software and against the ‘evils’ of patents and intellectual property law.

I found the start of the book much easier going to read, and consequently more enjoyable. Even the sections about the technical considerations Linus had to deal with during the birth of Linux seemed well conveyed and were therefore really quite interesting to learn about. The latter half of the book was, to me, much less ‘fun’, but still important, as some readers will want answers about open source software and how it works. Whether or not you agree with his opinions, it has to be said they are well expressed, and at times thought-provoking.

Ultimately this is a book I’m glad I’ve read as it was enjoyable and insightful, if a little draining at times. Despite its age, this book leaves you with a sense that the significance and relevance of both open source software and Linux in today’s world should not be underestimated.

Agility – process improvement is part of what makes us human

So SCRUM / Kanban / Lean is the greatest project management methodology ‘invented’, and all these fantastic advances took place within the last few decades?

That’s just plain silly; the single greatest thing that makes process improvement possible is our innate ability as a species to do a little thing called reflection. Please don’t focus too much on the name – we have far too many labels for stuff already. But I’d argue that the moment our ancestors genetically mutated to become able to think about what they are doing [wrong] and how they can change it [to improve] is the moment we really became able to rapidly advance as a civilisation.

My point in relation to software development is that everyone already has some degree of agility in their approach to their work. How measured and thought out it is may be debatable, but who doesn’t want to make their own job easier in some way?

In light of the fact that many people may already be no stranger to process improvement, it’s probably wise when applying an approach like SCRUM to first consider how things are working and being improved currently. The trouble with methodological frameworks is that if they are too tightly applied with too many rules, you lose the spontaneity of your ability to respond to change, and if they are too loosely applied then the process is likely to be ‘broken’ such that it might as well not exist in the first place. Much can be learnt from looking at existing procedures in place, which may in fact be much more applicable to the nature of the work in question and consider factors that (to begin with at least) the new way might ignore.

Often it seems to me that the tools used, methods of communication, decision-making procedures and working practices make a much more substantial contribution to how well stuff works in a company than the chosen methodology. I’m not denying the importance of project management and process improvement, just expanding the ‘no silver bullet’ theory from programming languages to all aspects of software development, and indeed to any other industry. Just as there is no simple way to get rich quick, there is no simple way to deliver stuff on time and on budget. It takes hard work, application of the little grey cells and the humility to reflect when things inevitably go wrong.

git commands

I’ve been using git on and off for version control recently. Here is a list of some of the common basic git commands, along with a simple description of what they do:

git clone repository-url - check out a repository into the current working directory (creates a local repository)

git checkout -b my-branch - create a branch of the current repository and switch to it

git add files - add files to the ‘staging area’ (basically notifies git to prepare files for commit); use git add . to add all files

git commit -m 'this is a lovely new feature which is complete' - commit the staged files to the local repository with the message specified by -m

git push - push all changes up to the remote repository

git status - get info about the state of the repository

git merge branch-to-bring-in - merge the specified branch into the current branch


Obviously you can combine the commands to build up various workflow related operations, e.g. if you wish to merge a feature branch into master:
git checkout master; git merge my-branch

For more detailed information on all the git commands available, check out the git man pages.

Prolog FTW!

Echoing my views on software communities in the previous post, I recently took part in some talks about logic programming and Prolog. There are some really interesting meetup groups in Cambridge for software development, and this one was no exception. You can view my slides here: http://sdrv.ms/119TGoA

If I get time, and there is an appetite for it, I hope to post a bit more about logic programming in future, as it’s always been one of my interests and I see promoting its use where it can be beneficial as a really positive move! In future it will surely only become more important to use multiple programming techniques, choosing ‘the right tool for the job’ and inter-operating, rather than using one monolithic ‘multi-tool’.

The value of communities in software

The computing industry has spawned many communities which have an influence that spans across different online and offline sectors of society. These communities reach and bring together people from around the world with common interests and purposes. This stimulates productiveness in developing and using software. I would also argue that another effect these communities have in general is adding value to products, services, software and society at large which would be difficult to generate any other way.

I’m not a ‘business’ type of guy, nor am I a politician, but I recognise the importance of a community to software and society, even if I would struggle to foster one around a piece of software myself. Nearly all successful open source projects have a strong community, which often consists of a mixture of passionate individuals, businesses and partner organisations. There are a variety of methods that are used to organise and communicate within these communities, including online forums, project management software, wikis, mailing lists, emails, meetup/user groups and conferences. Sometimes small communities work well, and sometimes a large critical mass of participants is required for these communities to be productive. The organisation of software communities also varies, with some having a very hierarchical structure and others being quite ad-hoc and ‘flat’. Different approaches work for different software.

The important message I’m trying to deliver is that community really matters for a lot of software projects, and not just to boost the profile and success of the software itself. It does a lot for the industry in general to have diverse communities which can help to break down barriers and get people interacting together. It helps developers and users to avoid the feeling of being lost in the wilderness and feel they have peers, and hopefully friends, they can turn to for advice. I think it can make our industry a more friendly, pleasant one to be involved with. That is of course dependent on how approachable the community in question is.

Search Technologies… Lucene, Solr and the importance of ‘Search’

Reflecting the nature of trends in web user experience, my work as a developer has led me to be quite involved in the field of ‘search’. It’s fair to say I never fully appreciated how important information retrieval / search theory would become in my career, and what ‘search’ is really all about.

A few months ago I changed jobs and began working for ProQuest. Both here and at my previous employer, Open Objects, search technologies are a key part of the underlying software infrastructure on which the web-based products are built. Generally this is the case for web applications which allow users to sift through large amounts of semi-structured full-text data (documents) and retrieve the specific information they are after with ease.

I think it’s no trade secret that a lot of [forward thinking] organisations to whom search is important are using or moving towards open source search solutions, of which there are currently two major projects: Apache Lucene and Apache Solr. The former is a Java library which provides a wide range of indexing and searching functionality, and the latter is a self-contained web application providing further functionality and opening up Lucene’s features over an HTTP-based interface.
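To give a flavour of what the Lucene library looks like from Java, here is a minimal index-and-search sketch. Note it is written against a later Lucene API (roughly the 5.x–8.x style); the exact constructors differ slightly between versions, so treat it as an illustration rather than copy-and-paste code:

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;

public class LuceneSketch {
  public static void main(String[] args) throws Exception {
    StandardAnalyzer analyzer = new StandardAnalyzer();
    Directory directory = new RAMDirectory(); // in-memory index, fine for a demo

    // Index a single document with a full-text "title" field
    IndexWriter writer = new IndexWriter(directory, new IndexWriterConfig(analyzer));
    Document doc = new Document();
    doc.add(new TextField("title", "An introduction to information retrieval", Field.Store.YES));
    writer.addDocument(doc);
    writer.close();

    // Parse a user query and search the index
    IndexSearcher searcher = new IndexSearcher(DirectoryReader.open(directory));
    Query query = new QueryParser("title", analyzer).parse("retrieval");
    for (ScoreDoc hit : searcher.search(query, 10).scoreDocs) {
      System.out.println(searcher.doc(hit.doc).get("title"));
    }
  }
}

Solr wraps this kind of functionality up behind its HTTP interface, so you get similar capabilities without writing the indexing code yourself.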

It’s important to make the distinction between searching and fulfilling an information need. ‘Search’ is just a mechanism through which users find the information they need, and particular technologies and features provide a variety of ways they can do this. Some people speculate that the future of information retrieval may not look like ‘search’ at all. In fact there has for some time been growing interest in algorithms that use data continually collected from a number of sources to ‘learn’ and provide the user with the information they need.

You can use Solr without much understanding of Lucene’s composition, and you can also use Lucene without much knowledge of information retrieval methods. This is testament to both projects’ APIs. However the more you find yourself wanting to do in the search space, the more useful it is to develop an understanding of the concepts underpinning these technologies.

I think that’ll suffice for an intro to search circa 2013. Hope it was useful, more to come!

Book review – Masters of Doom

Masters of Doom: How Two Guys Created an Empire and Transformed Pop Culture

By David Kushner

I used to be a very avid gamer, and I would estimate I’ve spent thousands of hours of my youth playing the Doom and Quake games. They were fantastically immersive and exciting. But I haven’t seriously touched a game in years – not through any deliberate action, nor is it something I’m particularly bothered about; I just naturally found less time for games. These days if I do have a few spare minutes it’s usually programming that I spend my time doing. But I do have one thing to thank this book for, and that is reigniting my passion for reading, something that, if I’m honest, I’ve spent far less of my life doing than gaming.

This book covers in equal measure the social, technical and game industry related aspects of the history of id Software. It closely follows the lives of John Romero and John Carmack, two of the founding members of id Software who have gained incredible ‘rock-star’ notoriety. Though times have very much changed since they started out, this book still has some relevant themes which resonate today. The author carefully navigates the history of the company, remaining neutral without casting any aspersions, leaving you to make your own mind up about what can be learned from what happened. And though this neutral stance removes some of the tension and suspense, the story stands up as an enjoyable one nonetheless.

The book really gives you a sense of the success that these quirky ‘nerds’ had at a time before The Big Bang Theory and mainstream ‘nerd’ culture. They overcame some of the difficult times they experienced to not only develop amazing pieces of software, but also become Ferrari-driving celebrities in their own right (whether they celebrated their new-found popularity or not).

Masters of Doom also reminds you of the fact that games were instrumental in the progress of new ideas like open source software and community driven software development. I almost forgot how many expansion packs and extra levels there were available online for these games, which secured them a much longer shelf life – I even made a few (pretty poor!) add-ons myself.

I can thoroughly recommend Masters of Doom, and as it has got me reading books again of all kinds (biographies, fiction and software related) I may well post similar book reviews in future.

CQRS and the Axon Framework

At its heart CQRS (Command Query Responsibility Segregation) is a simple design philosophy highly suited to certain types of applications, such as those dealing with reasonably sized datasets or event-driven user interfaces. The aim is to separate the infrastructure code from the application logic, and also the create/update/delete (command) operations from the read (query) operations.

In practice this means asynchronous command processing and, optionally, separate infrastructure for command and query operations. Increasingly, web applications are reasonably large scale, and perceived ‘transactional’ behaviour often involves multiple processes and calls to web services. This adds uncertainty and indeterminate periods of execution time to the operation.

The increase in service orientation (SOA), or servitisation as it is sometimes called, is clearly important to modern applications, and carefully designed service architecture helps to avoid a nest of web service calls which may never return. CQRS introduces a more realistic model which requires data updates to be asynchronous and encourages separation between command and query data, a pattern which may already be familiar or may have already emerged from complex application designs.
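As a rough illustration of the command/query split, here is a minimal, framework-agnostic sketch (all class names are hypothetical, and a real system would dispatch commands asynchronously via a bus or queue rather than call the handler directly):

// Command side: expresses an intent to change state
class RenameProductCommand {
  final String productId;
  final String newName;
  RenameProductCommand(String productId, String newName) {
    this.productId = productId;
    this.newName = newName;
  }
}

// Handles the command against the write model; in a full CQRS system this runs asynchronously
class ProductCommandHandler {
  void handle(RenameProductCommand command) {
    // validate the command, update the write model and publish an event
  }
}

// Query side: reads from a separate, read-optimised model kept up to date by events
class ProductQueryService {
  ProductView findProduct(String productId) {
    return null; // placeholder: fetch from a denormalised read store
  }
}

// A flat view object shaped for the screen that needs it
class ProductView {
  String productId;
  String name;
}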

Axon Framework

The Axon Framework is a Java CQRS application framework which provides all the hooks for effective CQRS design. The framework should fit existing infrastructure technologies, such as Spring, JPA and JMS. Production applications using this framework are starting to appear and its popularity continues to grow. Though Axon provides an easy way to structure an application, it is possible to ‘roll your own’ framework, and many existing applications that use messaging technologies probably do. The advantage of Axon is that it has been carefully thought through by those used to CQRS design and helps to enforce a specific pattern for developers to follow.

The future of software design?

It is likely that we’ll be seeing more applications adopting a CQRS architecture in the coming years. Ultimately this may be of particular benefit to applications where data is updated and viewed by multiple users and where this relies on external services. We will probably start to see an increase in web sites and web apps which update as events are processed. With the advent of HTML5 features such as WebSockets this becomes even more seamless, as the client’s web browser will no longer need to poll to pick up changes; rather they will be ‘pushed’ to the client. Though there are a lot of challenges with larger scale applications, hopefully carefully implemented emerging design patterns such as CQRS will make life easier.

What do you want from life? – the question we should be asking post economic disaster

It’s a heady, deep and widely avoided question – what do we really want from life? As the world gradually recovers from economic disaster, I argue this question becomes increasingly important to ask of yourself and those around you, and ultimately to force politicians to answer for themselves too. A healthy economy is too fluffy a term, and means different things to different people.

I think there are two reactions one can have to economic hardship – worrying solely about money, which generally leads to a shift in priorities and a fixation on material goods, or concern about how we got here in the first place, ultimately a very difficult question to answer. There’s a great opportunity emerging from a recession for my generation and those younger to assess their priorities in life, and for this to have a wider impact across communities. To my mind social well-being, environmental reform (of both the natural and man-made environment) and striving to do a good job should be high on the list. In my view, when you get these ducks in a row in your personal life, you and those around you tend to live a happier life. I’m a firm believer that the relationships you build and the way you behave have an impact on how life pans out for you. Apply these priorities on a larger scale to society and the world would be a better place.

We all know the world economy has a fundamental flaw called credit, and there is nothing we can do now to remove that carcinogen from the pool of risks associated with capitalism. But if we focused on (to use a turn of phrase often used by legendary programmer John D. Carmack) doing the right thing, perhaps life would be a lot better.

JSF 2.0 – converting objects without a Converter

As is so often the case, the best solution is also the simplest. I’ve been developing a personal web app, using JSF 2.0 for the presentation layer. I’m used to creating custom Converters using the @FacesConverter annotation to carry out the conversion from String to Object format and vice versa. This is useful when the String representation and the Object are loosely coupled. For some reason this particular converter was proving problematic. It wasn’t until I happened to implement a toString() method on the object that I realised that, in many cases, this is all you need: http://docs.oracle.com/javaee/6/api/javax/faces/component/UIOutput.html (for an explanation)

So ultimately in your code, all that is required is:

<h:selectOneRadio value="#{backingBean.myChoice}">
  <f:selectItems value="#{backingBean.yourCollectionOfItems}" />
</h:selectOneRadio>

So many Java libraries make use of the basic object methods (toString, equals and hashCode), and so those wiser Java lecturers and developers weren’t lying to you when they said make sure you always implement them!
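For illustration, here is a minimal sketch of the kind of item class involved (the class name and field are hypothetical); the point is simply that toString() supplies the String representation JSF falls back on when no Converter is registered, and equals/hashCode keep comparisons consistent:

public class Choice {

  private final String code;

  public Choice(String code) {
    this.code = code;
  }

  // The String representation used when no Converter is registered
  @Override
  public String toString() {
    return code;
  }

  @Override
  public boolean equals(Object other) {
    return other instanceof Choice && code.equals(((Choice) other).code);
  }

  @Override
  public int hashCode() {
    return code.hashCode();
  }
}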

The Future of Social Networking

This was an old post I vaguely remember starting aimlessly, which quickly turned into an essay. I decided it was time to dust it off and see if some of my musings and gripes about social networking are still relevant over two years later!

One could write a diatribe about social networking and the continued popularity of Facebook, so I’ll try my best to stick to the point. What I want to discuss is the user’s perspective of social networking. When people discovered Facebook, most thought it was great, and it’s not hard to see why; with such a huge membership it is easy to connect and communicate with friends across the globe. After the initial honeymoon period, concerns started to be raised about the direction Facebook was being taken in – firstly in terms of visual appeal and secondly in terms of privacy policy. Most Facebook users now realise the design of the system is beyond their control. While the content was very ‘web 2.0’, the interface was not.

Maven failing to download dependencies

Recently I’ve been using Maven for the dependency management of Java projects. When trying to build projects from the command line I found Maven had problems downloading certain files… it would consistently stop at a certain point and refuse to go any further. The fix I found, thanks to the ever useful Stack Overflow, was to add the following option to your mvn command call:

-Djava.net.preferIPv4Stack=true

I think this is a bug which occurs when using certain versions of the JVM and Maven together, which at a guess is due to the implementation of the IPv6 stack in the JVM. Whatever it is, blogging this cure will hopefully save some of the Maven-ites out there some time!