amillionlittlepieces

UK technology trains of thought...

Wednesday, April 26, 2006

What should a project never be without?

Someone just asked me what a project should never be without. That's hard to answer, as every project is different: whereas an agile process may suit a team of 10, you need something a lot more heavyweight for larger projects (I realise this is stating the obvious!).

So, here's my list of 5 things that a project should always have. I'm caveating this with an assumption that the project involves 5+ people and will take 2+ months:

1. A really clear definition of the project in no more than 18 words.

2. A comprehensive definition of the problem you are trying to solve. This should usually fit on a single page; if it doesn't, you probably don't understand the problem.

3. A requirements definition. This is a no-brainer but still sometimes doesn't get the attention it needs. I have seen very large projects spend millions on a sub-project justified by a single requirement, instead of breaking it down into realistic requirements. Perhaps the most underused tool in requirements analysis is stakeholder analysis; when you have multiple people with requirements it can be a great help. Put simply, you just need to work out who wants what, how much power they have over the project, how legitimate that power is, and how urgent their requirements are.

4. A Work Breakdown Structure. In its simplest form, this is a hierarchy of tasks. As a general rule of thumb, anything that takes 5 working days or more should be its own task.

5. An analysis of the critical path - When it comes to day-to-day prioritisation, sometimes you need to put the preparation of a future task ahead of the current task. To do this properly you need to know which tasks cannot slip, and which can (see the sketch below).
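
To make the idea concrete, here is a minimal sketch (in TypeScript, with invented task names and durations) of how you might work out which tasks have zero float and therefore cannot slip. A real project would normally use a planning tool rather than code, but the forward/backward pass is the same logic.

    // A minimal critical-path sketch. Task names and durations are invented.
    interface Task { name: string; duration: number; dependsOn: string[]; }

    const tasks: Task[] = [
      { name: "requirements", duration: 5,  dependsOn: [] },
      { name: "design",       duration: 8,  dependsOn: ["requirements"] },
      { name: "build",        duration: 15, dependsOn: ["design"] },
      { name: "test env",     duration: 4,  dependsOn: ["requirements"] },
      { name: "test",         duration: 10, dependsOn: ["build", "test env"] },
    ];

    // Forward pass: earliest finish of each task (assumes tasks are listed
    // after their dependencies, i.e. already in topological order).
    const earliestFinish = new Map<string, number>();
    for (const t of tasks) {
      const start = Math.max(0, ...t.dependsOn.map(d => earliestFinish.get(d) ?? 0));
      earliestFinish.set(t.name, start + t.duration);
    }
    const projectEnd = Math.max(...Array.from(earliestFinish.values()));

    // Backward pass: latest finish each task can have without delaying the end date.
    const latestFinish = new Map<string, number>(
      tasks.map(t => [t.name, projectEnd] as [string, number]));
    for (const t of [...tasks].reverse()) {
      for (const d of t.dependsOn) {
        latestFinish.set(d, Math.min(latestFinish.get(d)!, latestFinish.get(t.name)! - t.duration));
      }
    }

    // Tasks with zero float form the critical path: they cannot slip.
    for (const t of tasks) {
      const float = latestFinish.get(t.name)! - earliestFinish.get(t.name)!;
      console.log(`${t.name}: float ${float}${float === 0 ? " (critical)" : ""}`);
    }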

IMHO, if you have all five of these on a small project, you will always know where you are. The final point is to avoid over-iterating on the planning of small projects: set yourself a fixed period of time each week to keep project management and project reporting work up to date, but don't let it become the primary task. If you are a part-time PM you can end up getting distracted from the work; if you are a full-time PM you should be anticipating risks and issues and mitigating them before they impact anyone else.

Tuesday, February 14, 2006

AJAX & Software Quality

If you have been following some of the developments in web-technology over the last year, you will have heard of AJAX – the latest approach to creating rich, interactive web content. Similarly, if you have used any of Google’s perpetually ‘beta’ products this year you might have noticed that they are very responsive compared to some of their competitors.

Before I define AJAX, and how it affects test and quality professionals, let me give you a real example. If you go to http://www.multimap.com and enter your current location, you will get a map of the area. Now if you change the zoom factor, the page will reload and display a different level of detail.

This ‘reload’ has been the accepted paradigm for most web applications since the dawn of the web. The only exceptions are executables that download and run within the browser, such as Flash and Java applets. However, these come with long download times and the need for third-party software and plug-ins.

To demonstrate the power of AJAX, visit http://maps.google.com and follow the same process. You will notice that the application appears to present the results of changing the zoom level faster; the more observant will notice that the page itself does not appear to reload or change. This is exactly the purpose and the raison d’ être for AJAX.

When you change the zoom level at Multimap, it is not only changing the image, but repeating requests made earlier to find other information about the area, find appropriate adverts, and so on. In effect, it is starting again.

So what is AJAX?

The way it works is by using a combination of XML and JavaScript to asynchronously update the page. This means that the request for a certain part of the page is not linked to the response that displays the results. AJAX enables a part of the page to be updated without a reload of the rest of the page.

Technically, this has been done before – and this is one criticism that has been levelled at the AJAX hype. However this is the first time it has been brought to the mainstream under a single banner, using standards based technology rather than proprietary add-ons.

AJAX is an acronym for Asynchronous JavaScript And XML. The way it works is by using JavaScript in the HTML to treat the rendered web page as a structured document. This means it can be manipulated using the widely-supported Document Object Model (DOM). Because of the wide support for JavaScript, XML and the DOM, developers can rely on AJAX working in most modern browsers.
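
To make the pattern concrete, here is a minimal sketch (written as TypeScript, though it is essentially plain JavaScript) of requesting a fragment of data in the background and patching it into the page through the DOM. The URL and element id are invented for illustration, not taken from any real application.

    // A minimal AJAX sketch: fetch new map data in the background and update
    // one element, without reloading the rest of the page.
    // "/mapdata?zoom=..." and "map-panel" are made-up examples.
    function updateZoom(zoomLevel: number): void {
      const xhr = new XMLHttpRequest();
      xhr.open("GET", "/mapdata?zoom=" + zoomLevel, true); // true = asynchronous
      xhr.onreadystatechange = () => {
        if (xhr.readyState === 4 && xhr.status === 200) {
          // The response arrives later, decoupled from the request; only this
          // element is touched, and the page is never reloaded.
          const panel = document.getElementById("map-panel");
          if (panel) {
            panel.innerHTML = xhr.responseText;
          }
        }
      };
      xhr.send();
    }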

It should be emphasised that AJAX is not a technology, or something you can tangibly download or install; it is simply a design approach that has gained popularity. This is primarily down to its adoption by Google and Yahoo for their forward-looking beta projects.

AJAX and Quality

Every software quality professional knows that improving quality is always a trade-off, and that with new and immature technologies this is even more acute.

The performance enhancements that AJAX offers will transform users’ expectations of web-based applications. Full reloads of the page will become analogous to looking at a static list of data on a command line, rather than at a constantly updated screen. New applications will be enabled by the technology, with a richer and more interactive feel.

If you are reviewing a web application project in its early stages, you should be asking: what will the user’s perception of reasonable performance be from this application by the time the project has finished? It used to be acceptable for web pages to load in ten seconds – that is certainly not the case any more, nor is it excusable with the advent of high-speed internet.

However, this approach is not a silver bullet. As I said before, there are always trade-offs with such decisions; in this case they lie with usability, accessibility, security and testability. These all have technical solutions, but this is immature technology and should be treated with care.

Firstly, with usability there are a number of concerns that need to be addressed in AJAX projects. Because the page load is no longer ‘atomic’, i.e. a single transaction, you cannot expect the standard ‘back’ and ‘forward’ browser navigation to work as normal. The very idea of AJAX abandons the concept of a static page, and with it goes the simplicity of navigation. This can be resolved in a number of ways; however, there is no standard solution.

Another area of concern is that copying and pasting, or bookmarking, the URL may not work. Again, because we are no longer dealing with a static page, you cannot automatically expect a single point of reference to that page to work. This isn’t unfamiliar ground, however; some web applications have not supported this for some time – although that doesn’t make it any less irritating!
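
One common workaround, sketched below, is to record the application state in the URL fragment (the part after the ‘#’), which the browser treats as a navigable, bookmarkable location even though the page never reloads. The ‘zoom=’ format and the reuse of the updateZoom function from the earlier sketch are illustrative assumptions, not a standard.

    // Sketch: keep the current zoom level in the URL fragment so that
    // back/forward navigation and bookmarks still mean something.
    function rememberState(zoomLevel: number): void {
      window.location.hash = "zoom=" + zoomLevel; // adds a history entry
    }

    function restoreState(): void {
      const match = window.location.hash.match(/zoom=(\d+)/);
      if (match) {
        updateZoom(Number(match[1])); // re-use the AJAX update from the earlier sketch
      }
    }

    // Re-apply the state on page load and whenever the user navigates or opens a bookmark.
    window.onload = restoreState;
    window.onhashchange = restoreState; // older browsers would need a polling fallback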

In terms of accessibility, some projects may have legal requirements that mean they need to comply with certain restrictions; others may simply be aiming to reach their whole audience. Either way, dynamic web pages do not sit well with many accessibility solutions. The main response to this at the moment is to provide a less interactive and dynamic version of the page for users who rely on special browsers or other assistive technology.

Security is another issue that needs to be properly assessed. Using AJAX means putting dynamic code in the publicly viewable portion of the web application, so anyone can see how the page is calling your servers. This is easily worked around through obfuscation, but it is wise to check that you are not exposing too much information to the user – especially if the JavaScript is calling a different server from the main web server.

Another concern with security from a user point of view is that they will need JavaScript turned on in their browser, or ActiveX if they use Internet Explorer. These represent high-risk areas within a browser and the more paranoid user may have them turned off.

Most easily forgotten in all this is the testability of your application. The more you shift the onus on to the client-side rather than the server-side, the more you stand to lose in logging capability. There are plenty of ways to address this, including an open-source project called Log4AJAX – however a little planning upfront should ensure you don’t waste hours looking in the wrong place for an error hidden in your Internet Explorer logs. Ensuring log messages are dealt with correctly will also be critical for thorough beta testing.
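
One simple way to claw back some of that logging capability is to have the client post its own log messages to the server, so client-side errors show up alongside the server logs. The sketch below is a hand-rolled illustration, not the API of Log4AJAX or any other library; the /client-log endpoint is invented.

    // Sketch: ship client-side log messages back to the server so that errors
    // buried in the browser are visible centrally. "/client-log" is an
    // invented endpoint, not part of any particular library.
    function logRemote(level: "info" | "error", message: string): void {
      const xhr = new XMLHttpRequest();
      xhr.open("POST", "/client-log", true);
      xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
      xhr.send("level=" + encodeURIComponent(level) +
               "&message=" + encodeURIComponent(message));
    }

    // Catch anything unexpected in the page's scripts and report it.
    window.onerror = (msg) => {
      logRemote("error", String(msg));
      return false; // let the browser handle the error as well
    };
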
AJAX and Testing

Even if you are black-box testing, testing an AJAX application is neither like testing interactive web pages nor like testing desktop applications. It is most like testing a Java applet, but without as clear a boundary between the client and the server, or a conventional user interface.

Three threads will be executing. Firstly, the browser will initialise the page, and it may at this point set up a JavaScript loop that updates variables, contacts the server and updates the page through the DOM.

Another thread will be listening for remote calls (usually carrying XML) from the server. These can be triggered at the discretion of the server, by the loop launched at initialisation, or by a third thread that responds to user events.

Because the threads managing requests and their responses are not synchronised, the user can trigger more than one type of user event before the previous one has even completed. This is not usually possible in most modern user interfaces. If you have three different data items on your screen to be updated every minute, and three processes that can be triggered simultaneously on the server through a user event, you can find it very difficult to test all permutations of client-side and server-side states. This is of course compounded when you inevitably have more than one user!
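
One defensive pattern, sketched below with invented names, is to tag each request with a sequence number and ignore any response that is no longer the latest, so a slow, stale reply cannot overwrite a newer one.

    // Sketch: discard out-of-date responses when the user fires events faster
    // than the server can reply. The endpoint and element id are invented.
    let latestRequestId = 0;

    function refreshPanel(query: string): void {
      const requestId = ++latestRequestId;
      const xhr = new XMLHttpRequest();
      xhr.open("GET", "/search?q=" + encodeURIComponent(query), true);
      xhr.onreadystatechange = () => {
        if (xhr.readyState !== 4 || xhr.status !== 200) return;
        if (requestId !== latestRequestId) return; // a newer request has superseded this one
        const panel = document.getElementById("results-panel");
        if (panel) panel.innerHTML = xhr.responseText;
      };
      xhr.send();
    }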

The interactions between the threads, users and functions are something that should be a focal point of design and code reviews, to ensure the design is streamlined and ‘bulletproof’ in the case of unexpected events. Defining functions as ‘services’ that are autonomous of each other is one way of doing this that is gaining popularity.

Applying a risk-based approach to the potential events is necessary to prioritise testing, and this requires close integration with the development team. Key areas of functional risk sit around:
  • transactions that impact data that is being continuously updated elsewhere;
  • multiple transactions launched from the same page that may use the same logic or data;
  • corruption caused by using the ‘back’ or ‘refresh’ button;
  • multiple transactions launched from the same page where one impacts the dependency of another.
As the response to the remote call is separate from the request, web-services-based tools, including tools that create raw HTTP requests, are limited in how well they can replicate the user experience. Also, tools that are accustomed to testing the end-user interface may not be designed to trigger multiple events at once and correctly capture the responses.

There are testing tools that can help massively with testing these pages; notably, the open-source Selenium has the ability to wait for the callback – and this puts it well ahead of the hype curve on AJAX.
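
For home-grown harnesses, the same idea can be reproduced with a simple polling wait. The sketch below is not Selenium’s API; it is a generic ‘wait until the asynchronous update has landed in the DOM’ helper, with invented names, timings and element ids.

    // Sketch: poll until the AJAX callback has actually updated the page,
    // then run the assertions (or fail after a timeout).
    function waitFor(condition: () => boolean, timeoutMs: number,
                     onReady: () => void, onTimeout: () => void): void {
      const deadline = Date.now() + timeoutMs;
      const poll = () => {
        if (condition()) {
          onReady();
        } else if (Date.now() > deadline) {
          onTimeout();
        } else {
          setTimeout(poll, 100); // check again in 100ms
        }
      };
      poll();
    }

    // Example: wait up to 5 seconds for the results panel to be populated.
    waitFor(
      () => (document.getElementById("results-panel")?.innerHTML ?? "") !== "",
      5000,
      () => console.log("callback arrived, safe to assert on the page"),
      () => console.error("timed out waiting for the AJAX callback"),
    );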

In conclusion, AJAX is a new type of solution that will bring a new level of functionality to the ubiquitous web-based application. Although it is still a web page, it brings new considerations in terms of quality, functional risks and technical approach.



Saturday, February 11, 2006

Asset management gone mad?

Sunday, January 15, 2006

If you know a criminal, are you therefore a criminal?

Social Networking

I am an avid user of certain services, currently Plaxo and LinkedIn, that could be generalised as social networking sites. Plaxo enables me to keep my contacts, task list and calendar integrated between two computers on separate networks, and a PocketPC. Unlike other methods, this does not require them to be interconnected. The social networking comes into play when you become ‘linked’ to somebody else who is already a member: when their details change, your address book is automatically updated.

LinkedIn enables me to keep track of professional contacts whilst carefully controlling the information available to them. Whilst it is similar to Plaxo in some ways, the information you are providing is around your history, skills and contact list rather than actual contact details.

The benefits of these sites are countered by an exposure of information in two ways. In the case of LinkedIn, you are giving the people you know access to the names and profiles of your professional contacts; in both cases, you are giving a private company full access to a lot of information about you – potentially protected by rather weak terms of service.

Criminally Social


Commercial social networking services generally fall into two types: social (Friendster, Friends Reunited, Facebook and Orkut) and professional (Plaxo, LinkedIn, Ryze). Oddly, both Orkut and Friendster have been linked to criminal professional networking. In both cases, the police stumbled across these social creations after the fact, rather than driving their investigations from them. Nevertheless, it has undoubtedly provided a significant source of intelligence.

It’s fairly hard to have sympathy for a career criminal who is stupid enough to use Orkut. Then again, as it is a popular service in Latin America, is it hard to have sympathy for the people who were innocently associated with him, unwittingly linking themselves to a known criminal forever? In the Cold War, and now again with the looming nemesis of terrorism, social networks become the main way in which preliminary intelligence is gathered about individuals suspected of crime.

Acquaintances of terrorists, terrorism suspects, terrorism financiers, terrorist supporters and terrorist sympathisers are at risk of being allocated into a grey zone of terrorist associates. A tag of that kind is potentially as harmful to a person as have been negative categorisations made in previous contexts, such as 'etranger', 'subversive' and 'unamerican'.

- Roger Clarke, Very Little Black Books

Jacob Morse makes a lot of comments here about Facebook, an incredibly popular social networking site for US students. He describes links between the site and the venture capital gold pot of the CIA. Whilst the relationship is pretty tenuous, the potential objective is quite clear. He also draws attention to a rather ominous sentence in the terms, which implies they will be gathering information about you irrespective of whether you use the service.

Whilst it is fairly unlikely that investigation of social networking is anywhere near standard operating procedure, the value that it can add is clearly recognised.

Data Retention

Whilst the information gathered by these services is useful to the users, and the relationships they illustrate are interesting, they will not provide high-quality intelligence to governments and security services. They are limited to the information provided by the user. For example, whilst LinkedIn and Plaxo sync fairly seamlessly with Outlook and Thunderbird, the information they can collect is limited to interactions I have chosen to have through Outlook. It is, in a sense, active participation.

In the UK we have to implement the Data Retention Directive within 18 months. This will mean that certain pieces of data about us will be available that have never been available before. The Directive will compel service providers to keep specific data about their customers for 12 months. This could effectively result in the aggregation of information from private companies I have agreed to give information to. Because I have not consented to this information being aggregated, this is passive participation in the mapping of not just my network, but also of the interactions I have with it.

The Directive proposes that the following categories of data be retained for one year:

• data necessary to trace and identify the source of a communication;
• data necessary to trace and identify the destination of a communication;
• data necessary to identify the date, time and duration of a communication;
• data necessary to identify the type of communication;
• data necessary to identify the communication device;
• data necessary to identify the location of mobile communication equipment.

This means for me, at a minimum:

• My location, to within a reasonable degree of accuracy, all the time. The accuracy laid down by FCC standards for tracking 911 calls from mobile phones (the E911 mandate) is to within 100 metres.

• Any telephone call I have made, from either a home phone, business phone or mobile phone. Presumably, indirectly, also the names, addresses and possibly locations of people I call or people who call me.

• Any SMS messages I have sent or received, and where I was at the time.

• The email address, date, time and subject of any email I send or receive, and the IP/MAC of the computer I send or receive it on.

• Any MSN conversations I have, and the IP/MAC of the computers used.

I should be clear that the service providers are not compelled to keep the content of the communication, but they are compelled to keep the details of the transmission. Not only will it be possible to see people I have interacted with, it will be possible to ascertain who I was with and where we were at any one time.

Information about my network can be built up on a quantified basis: for instance, the frequency with which I exchange email with people, how often I am in the same room as them, or how often I talk to them on MSN. This vastly exceeds the capabilities of any existing consumer services.

Although we are not talking about a huge Big Brother database, we are talking about the ability to analyse a substantial portion of the last year of somebody’s life, in a way that has never been possible before. If it is not already happening, I am sure there will soon be a way of using data gathered through these means to create integrated intelligence maps.

Access to the data through RIPA

RIPA, the Regulation of Investigatory Powers Act, currently regulates access to data gathered voluntarily by service providers. Access to this newly required information is likely to also be controlled under RIPA, rather than court orders. The controls and oversight protection associated with this are somewhat controversial:

"Under the law as it stands, if existing powers are not rescinded then people will be able to act within or outside the guidelines," - "There's a strong argument that government should go back and get it right,"

The list of organisations that will have access to this information is quite large, especially when you consider the number of banal queries they will be responsible for (tax audits, etc.).

• Police forces
• National Criminal Intelligence Service
• National Crime Squad
• HM Customs and Excise
• Inland Revenue
• Security Service
• Secret Intelligence Service
• GCHQ

The limitation that applies to the police is simply that they must believe they are preventing or detecting crime, protecting public health, collecting money owed to the government, or anything else the Secretary of State thinks up. This is not just a little bit broad; it is a significantly wider remit than was mandated by the EU Directive. Bear in mind that the 'reason' we are implementing this legislation is an EU directive, albeit one agreed under a UK presidency of the EU.

Aggregation

Over time, this information will continue to be aggregated. A similar case of security enforced through technology is the UK DNA database. This now contains 3 million samples. By 2007 it is expected to contain a sample from 7% of the population. Many, many of these people are not criminals, or suspected criminals, but their DNA may have been taken at a crime scene for the purposes of elimination.

Likewise with the Data Retention Directive, any ‘intelligence’ database will eventually be huge. Whilst the number of DNA samples you might find at a crime scene is significant, is it really the equivalent of everyone in your address book?

It is reasonable to expect that the next step in the intelligence effort against crime will be technological advances that enable government agencies to automatically aggregate and analyse social and location data gathered because someone is suspected of a crime.

If you know a criminal, are you a criminal?

The realisation of the ability to generate suspicion based on automatically analysed, loose social networks is incredibly scary. The powers that restrict the access to and use of this information are weak. It is quite clear that, now this data can be gathered, it can be exploited without further legislation or public support. The use of data is not legislated anywhere near as tightly as the gathering of it, and it is undoubtedly true that we are sleepwalking into a new era of digital rights.

We can only hope that the use of this data will first come to light through a significant error, which will hopefully result in a media frenzy and urgent legislation to restrict its use.



Monday, January 09, 2006

Risk reduction margins!

Glenn Alleman on his excellent Herding Cats blog says:

"Don't have any plan that doesn't have explicit risk reduction margin."

That sounds so much better than "contingency time"...



Thursday, January 05, 2006

Universal Translation

Remember the universal translator in Star Trek? An unseen device that translates in real time between two people who don't speak the same language, enabling instant, seamless multilingual communication.


On Slashdot, a story caught my attention about an open-source Yahoo IM proxy. It isn't the first piece of software to integrate with Instant Messaging clients and translate, but it seems to be the first to use a proxy and translate automatically rather than wait for user interaction. This is important: in Star Trek, one person is unable to hear the other language, they can only hear the translation... With an IM proxy you don't need to see the other language either.

Many offshore companies use IM to keep in touch with clients. If you advertise work to small development shops in India, they almost exclusively come with a raft of MSN, Yahoo, AIM and ICQ addresses. Perhaps this will bring IM to the forefront of remote customer service - particularly with internet-only businesses that don't want to have premises.

The main problem with this is that machine translation does not give you an accurate meaning; it gives you a good idea of the meaning, but no more than that. Various experts suggest 2012-2014 as a likely time for this to become a mature technology.

The other technical challenge is speech recognition. Microsoft is investing heavily in this area, as is IBM. Gates predicts that in 2011, automatic speech recognition will be as good as human speech recognition.

The final icing on the cake, of course, is not hearing the other language at all... hopefully this should be possible with the tech used in noise-cancelling headphones. In eight years we could be walking around anywhere in the world and experience everyone talking to us in perfect English. That's quite scary.


Saturday, December 24, 2005

New Year, New Job

Merry xmas! In January I shall be looking for a contract role. If you know anyone that needs a Process Improvement Consultant, or a strong Test / Quality Manager - get in touch!

Property in France