Since the release of CLM 2011, much of my time has been spent working out, in my mind or for customers, how best to go about deploying the Rational solution for Collaborative Lifecycle Management. At one recent engagement, we first spent a few cold, rainy winter days sitting around, drinking (mostly) coffee and listening to how the team did what they did, what worked, what didn’t, and what they would like to improve. We collected lots of notes on various software development practices, roles and artifacts. One key theme that was consistently raised was that while the various artifacts may be produced as part of a given “project” (loosely defined as a number of roles working on a specific set of tasks requiring completion by a certain date), these artifacts would invariably need to be reused in some form in other projects and situations, either as-is or after some modification.
So we drank more coffee and talked about artifacts in more detail. Starting at the figurative top of the software development artifact tree: our business analysts are capturing business needs, features, stakeholder requests and requirements in various shapes, forms and sizes, trying to work out what it is that needs doing. Documents (Word, Symphony, Open Office) are usually the preferred “repository” for these important artifacts, with graphic (Visio perhaps) representations thrown in as needed. Once these documents reach a certain stage of “completeness”, they get stored (maybe on a file share somewhere) and shipped around (via email, on USB keys, in printed form) for review and/or approval.
The Architect and Systems Analyst community then gets involved, taking the documents that have been produced so far and generating architecture, detailed requirements and design elements, working out some of how the “what” will be done. Again, these artifacts are generally produced in the form of documents such as Software Architecture Specifications, Security Architecture Specifications, Web Services Interface Specifications, Detailed Software Specifications and so on. And again, file shares, document “repositories”, email, print and the like are used to share and collaborate on these documents.
So pausing at this point to consider the implications of all the activities and artifacts being produced, it becomes apparent that the threads that tie the different “things” together (features to requests, requirements to design elements) become a crucial part of the puzzle. The numerous documents and diagrams are probably excellent in their own right, but now there needs to be a way to manage the relationships between them. In many cases this then requires the introduction of a new entity – the Traceability Table. Whether in a spreadsheet or in a document, this entity in reality has no right to a life of its own, at least in relation to the overall software development process. In other words, the traceability between artifacts, once established, should not need to be managed, maintained and kept up to date as a separate software development task. In some cases we get clever (or so we think) and “hard code” the linking threads in the source or target documents themselves, without a separate traceability table. For example, assuming that we assign unique identifiers to our requirements and the documents they live in, a design specification might have a reference such as “this design element satisfies Requirement X12 as specified in document BRS236-02”.
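To make the contrast concrete, here is a minimal sketch (purely an illustration, not how RRC or any tool stores links) of treating traceability as first-class link records that can be queried, rather than prose references buried in documents or rows in a hand-maintained table. Requirement X12 comes from the example above; the design element IDs and the “satisfies” link-type name are hypothetical.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Link:
    source: str  # e.g. a design element ID
    target: str  # e.g. a requirement ID
    kind: str    # e.g. "satisfies"


class TraceIndex:
    """Tiny in-memory link index: relationships live as records,
    not as sentences hard-coded inside documents."""

    def __init__(self):
        self.links = set()

    def add(self, source, kind, target):
        self.links.add(Link(source, target, kind))

    def targets_of(self, source, kind):
        return {l.target for l in self.links
                if l.source == source and l.kind == kind}

    def sources_of(self, target, kind):
        return {l.source for l in self.links
                if l.target == target and l.kind == kind}


# Two hypothetical design elements satisfy requirement X12:
trace = TraceIndex()
trace.add("DES-7", "satisfies", "X12")
trace.add("DES-8", "satisfies", "X12")

# When the stakeholder changes X12, impact analysis is now a query,
# not a hunt through documents:
print(sorted(trace.sources_of("X12", "satisfies")))  # ['DES-7', 'DES-8']
```

The point is not the data structure itself but that the link is a thing the repository owns, so it never needs a separate “update traceability” task to keep it honest.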
Great stuff. Well, great stuff until our recalcitrant stakeholder decides that this request here should actually be something else. Or another project over there decides that one of these requirements actually applies to what they’re doing as well. In both cases “someone” needs to wade through a bunch of documents (assuming they can be found in the first place), work out whether they are related to the request at all, and then consider what should be done. Should they make a copy of the original document and work with the copy, hoping that someone or something somewhere will remember to inform them if the original document changes? But not everything in the original document is even relevant to this other project, so maybe they should just create a completely new document, with a facsimile of the requirement in it?
Leave that aside for a minute.
Along the way somewhere, depending on your favourite process flavour, the Quality (usually called Test) team needs to begin using some of the artifacts being produced to work out how to verify that the development teams (outsourced or otherwise) are producing what the stakeholders expect, while also satisfying expectations about how it should be produced. Now we talk in terms of Test Plans, Test Cases and Test Scripts. We’ve also added at least another dimension to the relationships that can exist.
All pretty familiar territory to me: around 15 years ago I was using ClearCase hyperlinks to try to maintain the relationships between FrameMaker documents and some (magical) automation scripts that parsed tables in those documents, in an attempt to avoid having a separate task assigned to me in the Project Plan called “Update traceability”.
That’s why I’d like to lose my documents: while many of us might find whipping up documents full of requirements or design elements or stakeholder requests the easiest thing in the world to do, it causes all sorts of headaches. What we’d like to be doing is treating requirements, use cases, stakeholder requests, design elements as being different types of “artifacts”, each type having its own characteristics (or attributes).
Which brings me to what got me going on this topic in the first place: Rational Requirements Composer (RRC). Or more precisely, RRC version 3.0.1. As I began creating some of the RRC project framework (artifact types, attributes, link types etc) necessary to get the team started, I found myself wishing it had been this easy, flexible and intuitive 15 years ago. It’s true that the technologies used to implement some of the features were not around then, but there are some fundamental things in the way RRC is constructed to support requirements capture and management that I liked.
Like removing the unhealthy dependency on “documents”. If we want to keep a document as is, well we can: we just upload it as-is as a “blob”, and it’s just “there” in the RRC repository as a supporting artifact that can be linked to other artifacts, reviewed as a whole, commented on and so on. This is good for documents that, for example, are produced and maintained by some external organisation, and that we can’t or don’t want to break into smaller chunks.
On the other hand, if we would like to allow members of our team to collaborate on, review and modify the individual requirement artifacts within the document, then we can use the Import facility to convert the content into RRC’s rich text format and begin to do really useful things to the content in “chunks” that make sense from a requirements management point of view. Taking chunks and making them artifacts of different types is very, very simple to do: simply put the artifact (remember we don’t have a traditional “document” anymore) in Edit mode, highlight the “chunk” and save the selection as a new artifact, choosing whether to keep the entire selection embedded as-is in context or to save it as a link. Of course I can also add other rich text content, which I can then convert to embedded or linked requirements artifacts.
One of the key benefits of a CLM solution should be traceability: traceability to other requirements artifacts, and traceability to downstream (or “side” stream, if you prefer something like a V-model) artifacts like design, development and test artifacts. So if we have all these artifacts floating around in our requirements universe, we can start to easily create links to other artifacts, even enable automatic link creation and management, making requirements-driven development across the lifecycle easy. Once we have such links in place, the answers to some of those questions that could only be found in that extra spreadsheet or table I mentioned before begin to stare us in the face with no additional effort: Have all requirements been implemented? Are there any defects affecting requirements that need attention? What is the level of test coverage? What is the potential impact of changing a requirement? (This last one is answered by a nifty add-on that lets me visualise the various links.)
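Once relationships are explicit links rather than rows in a spreadsheet, those questions become simple queries. A hedged sketch, with entirely made-up requirement IDs, work item and test case IDs, and link-type names (the actual link types and query facilities in a CLM deployment will differ):

```python
# Hypothetical link data: (requirement, link type) -> linked artifacts.
requirements = ["R1", "R2", "R3", "R4"]
links = {
    ("R1", "implemented-by"): ["WI-101"],
    ("R1", "validated-by"):   ["TC-9"],
    ("R2", "implemented-by"): ["WI-102"],
    ("R3", "validated-by"):   ["TC-10"],
}


def linked(req, kind):
    """Artifacts linked to a requirement by a given link type."""
    return links.get((req, kind), [])


# "Have all requirements been implemented?"
unimplemented = [r for r in requirements if not linked(r, "implemented-by")]

# "What is the level of test coverage?"
covered = [r for r in requirements if linked(r, "validated-by")]
coverage = len(covered) / len(requirements)

print(unimplemented)      # ['R3', 'R4']
print(f"{coverage:.0%}")  # 50%
```

Each answer is a traversal of the same link data; nothing here needs to be “kept up to date” as a separate task, which is exactly what the separate traceability table could never promise.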
It also becomes natural to go from coarse-grained and unnatural requirements reuse (documents) to very fine-grained reuse of requirements artifacts, even across different organisational or functional boundaries. One of the groups of artifacts that is useful to maintain is a “glossary” – definitions of commonly used terms. With RRC I can easily select a word or a phrase and create a new Term from it; then whenever I use the term it gets replaced with a hyperlink, and hovering over it shows the definition of the term. Kind of like the “Translate” button on the Google toolbar, which I used to turn off in annoyance but find very useful when, for whatever reason, I want to see what a word on a web page translates to in Japanese or Hebrew or French. Seems a simple enough and innocuous little feature. Now extend it, as RRC does, to allow “terms” to be of any other of the artifact types, and this becomes the basis for an organisation-wide “data dictionary”. For example, if an “author” is an important, oft-used actor, I can create an Actor called “Author” (that may include various actor-related attribute values and a rich text definition), create a term from it, and wherever it is important that the meaning of “author” be made clear, I can reference the term. This removes the potential for ambiguity, and therefore misunderstanding, that is inherent in most languages. (Apparently the word “set” has the most definitions in the English language – 464.)
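The mechanics of that term substitution can be sketched in a few lines. This is purely illustrative: the glossary content, the anchor URL scheme and the hover-via-title trick are my assumptions, not RRC’s implementation.

```python
import re

# Hypothetical glossary: term -> definition. In RRC these would be
# Term artifacts with their own attributes and rich text definitions.
glossary = {
    "author": "The person who originates a manuscript submission.",
}


def link_terms(text, glossary, url_for=lambda t: f"#term-{t}"):
    """Replace each whole-word glossary term with a hyperlink whose
    title attribute carries the definition (roughly what hovering
    over a term shows)."""
    for term, definition in glossary.items():
        pattern = re.compile(rf"\b{re.escape(term)}\b", re.IGNORECASE)
        text = pattern.sub(
            lambda m: f'<a href="{url_for(term)}" '
                      f'title="{definition}">{m.group(0)}</a>',
            text)
    return text


html = link_terms("The author submits a draft.", glossary)
print(html)
```

The word-boundary match (`\b`) is what makes the term a term rather than a substring, so “authorise” would be left alone; everything else is just the repository knowing, for each occurrence, which single defining artifact it points at.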
It doesn’t take a huge stretch of the imagination to extend the “glossary term” reusability model to other requirements artifacts: define a Requirements Management artifact container at the organisational level and populate it with a whole bunch of common artifacts and (subtly different) commonly used artifacts. Then, over time, as the need arises (new projects starting up, new releases of existing projects etc), we don’t need to hire a PI to go out and find the artifacts we need and, equally important, the relationships they have and how, when and why they changed. All we need is perhaps a vague memory of a word or two that might have been part of the artifact, and the excellent little search field in RRC becomes our best friend forever. As we go about creating new artifacts, those that have not changed but need to be repeated can simply be inserted or embedded, without much thought to where they actually live. From a reuse point of view, a nice additional extension would be the ability to copy or move requirements artifacts across project areas, for those cases where only a small part of the artifact is in fact reusable (copy) or where we would like the artifact to be “owned” by a specific project area rather than the common one (move).
One other problem we found during the coffee-drinking sessions mentioned previously is getting the different stakeholders to contribute and collaborate on requirements artifacts, easily and in the context of the affected artifacts. Printing stuff out for review meetings, then collecting meeting minutes in (more!) documents or email exchanges, is often the norm here, and again one flaw is that there is a disconnect between these media and the things they relate to. (I won’t get into that other flaw: printing kills trees.) RRC allows us to create formal or informal reviews on all types of artifacts, or even meaningful collections of artifacts. Adding participants to these reviews then sends out notifications (on dashboards or via email), to which they respond and have their say. The good thing about this is that all this important detail about how the requirement got to where it’s at is captured in the RRC repository for analysis and use.
So I’ve squeezed in a few things that I think are useful and easy in RRC, though there’s a heap of other stuff that goes a long way towards making the Requirements Elicitation and Management process simpler. I’d toyed very, very briefly with RRC prior to the release of version 3.0.1, but shied away from diving deeper, mostly because of the need to install a separate client for it. As I’m sure many of us do, I find using a web browser a very “natural” thing to do these days, and so once I had the RRC v3.0.1 browser add-ons installed, it didn’t take me long to get stuck into it.