Many publishing applications require creation of multiple output
products in which a large portion of the data is common to all. Whether
different models, different languages, different customers or different
delivery media, the challenges are similar.
The classical solution to this problem was to make and keep a copy of the data file for each version. You entered the changes needed to make each copy version-specific, then kept the copies in harmony by updating every one whenever the common data changed. With many applications containing over 90 percent common data across all versions, this created a complex, expensive and error-prone environment.
With the rise of "document management" systems, keeping all that duplicate data became less and less defensible. Users reckoned that the way to improve the situation might be to break down the data into small chunks so that each unique output version could be created by collecting the proper chunks and stringing them together, in the correct order, into a document version.
This approach allows chunks of data used in multiple output versions to be stored and updated only once. It also, unfortunately, means that documents must be "shredded," sometimes into thousands of parts. Keeping pieces of documents creates a whole new class of problems, including losing data, publishing the wrong data or data in the wrong order, and forcing authors to think and write in non-linear "snippets."
And for all this, you get to spend more and more money, because data-shredding systems require complex and expensive hardware and software to ensure that documents and their content are properly managed and delivered, negating many of the hoped-for benefits.
Simply put, effectivity uses a single source file for production of all related output products, embedding within it the data unique to each output version, marked so that it may be "resolved" to the proper version at publication time.
When a specific output product is desired, the common file is processed by a simple software program to resolve it into the desired version. ISI coined the name Common Source Effectivity or "CSE" to indicate not so much a difference of approach as an expansion of goals for the technique.
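The resolution step described above can be sketched in a few lines of code. The markup, attribute name (`effect`), and version labels below are purely illustrative assumptions for this sketch; real effectivity schemes such as ATA 2100 or AECMA 1000D define their own tagging conventions, and actual systems work on SGML rather than the XML used here for simplicity.

```python
# A minimal sketch of effectivity resolution: elements carrying a
# hypothetical "effect" attribute survive only when it matches the
# requested version; unmarked elements are common to all versions.
import xml.etree.ElementTree as ET

SOURCE = """
<manual>
  <para>Common text shared by every version.</para>
  <para effect="modelA">Text that applies only to model A.</para>
  <para effect="modelB">Text that applies only to model B.</para>
</manual>
"""

def resolve(root, version):
    """Remove every element whose effectivity does not match `version`."""
    for parent in root.iter():
        for child in list(parent):          # copy: we mutate while iterating
            effect = child.get("effect")
            if effect is not None and effect != version:
                parent.remove(child)
    return root

resolved = resolve(ET.fromstring(SOURCE), "modelA")
print(ET.tostring(resolved, encoding="unicode"))
```

Resolving for "modelA" keeps the common paragraph and the model A paragraph while dropping the model B paragraph, so a single source file yields each output version on demand.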
ISI has pioneered the development of SGML Effectivity systems.
CSE, while not appropriate for every publishing need, has amazingly broad application. ISI's CSE tools are used by aerospace firms, auto manufacturers, technical and legal publishers, and software producers. Indeed, whether you fly in an aircraft powered by a small Pratt & Whitney jet engine, or drive a Ford, you're traveling with the help of ISI's SGML Version Manager.
Based on ArborText's ADEPT SGML editor, SGML Version Manager ensures that effectivity tagging is accurate, controlled and properly managed over versions and time. VM also provides authors with the tools to keep the editorial process productive no matter how complex the effectivity becomes.
SGML Version Manager is compliant with most formal effectivity specifications including ATA 100 and 2100, AECMA 1000D and DoD CALS.