The ECM trends I follow include Web Content Management (WCM), High Volume Transactional Output (HVTO), and ePresentment. The field of document imaging and capture is a little off my beat, but we’ve been seeing some significant advances in capture that are worth talking about.
In the old days, organizations would use capture solutions to scan, index, and OCR documents, and then “release” the documents and metadata to a backend archive or workflow system. Very simple: Capture, Index, Load. Often, organizations supplemented this process with custom management systems to ensure that the expected number of documents was actually “captured,” that the metadata contained the right number of fields or characters, and so on. As requirements around capture expanded, additional custom utilities were added to unzip the odd file or notify a user upon failure. These systems were usually confined to one or two departments, such as Accounts Payable for incoming invoices or Claims for insurance claims. In short, the dirty secret in the old days was that Enterprise Content Management (ECM) was rarely “Enterprise”; it was departmental.
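To give a feel for what that old custom glue code looked like, here is a minimal sketch in Python. The field names and checks are hypothetical, but they stand in for the kind of batch validation and odd-file handling a department would bolt onto a capture release:

```python
import zipfile

# Hypothetical index schema a department might require on every record.
EXPECTED_FIELDS = ["doc_id", "account", "date", "doc_type"]

def validate_batch(records, expected_count):
    """Old-style custom check: did we capture the expected number of
    documents, and does each index record carry the required fields?"""
    errors = []
    if len(records) != expected_count:
        errors.append(f"expected {expected_count} documents, got {len(records)}")
    for rec in records:
        missing = [f for f in EXPECTED_FIELDS if not rec.get(f)]
        if missing:
            errors.append(f"record {rec.get('doc_id', '?')} is missing {missing}")
    return errors

def unzip_if_needed(path, dest):
    """One of the 'odd file' utilities: expand a zip before indexing."""
    if zipfile.is_zipfile(path):
        with zipfile.ZipFile(path) as zf:
            zf.extractall(dest)
        return True
    return False
```

Simple enough for one department; the trouble starts when a dozen departments each write their own version.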
Fast forward to today. Organizations are not only scanning and indexing document content and loading ECM systems from many departments; they are truly deploying content capture across the enterprise and linking the capture process to Line of Business (LOB) applications, and the link is completely bi-directional. In many cases, data is extracted from LOB systems to help populate or validate the indexes for the documents. Content is also included in LOB processes for decisioning, so there is now a requirement to use document content as part of a process. On top of that, web applications are using the documents to do their own work. So today it is not so simple: Capture, Index, Validate, Load, Extract, Notify… you get the picture!
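To make that bi-directional flow concrete, here is a minimal sketch of using LOB data to validate and enrich a captured document's index before loading. The customer table and field names are hypothetical; in a real deployment the lookup would be a database query or web-service call into the LOB system:

```python
# Hypothetical LOB master data; stands in for a database or web-service lookup.
LOB_CUSTOMERS = {
    "C-1001": {"name": "Acme Corp", "region": "EMEA"},
    "C-2002": {"name": "Globex", "region": "APAC"},
}

def enrich_index(index):
    """Validate the captured customer ID against the LOB system and pull
    back fields the scanner operator never keyed."""
    cust = LOB_CUSTOMERS.get(index.get("customer_id"))
    if cust is None:
        raise ValueError(f"unknown customer {index.get('customer_id')!r}")
    enriched = dict(index)
    enriched["customer_name"] = cust["name"]
    enriched["region"] = cust["region"]
    return enriched
```

The point is that the LOB system, not the operator, becomes the source of truth for the index.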
So, the old custom code of yesteryear no longer cuts it for truly enterprise-scope deployments, with multiple departments loading and extracting content and web services acting as the glue that ties it all together. What is needed is an enterprise content capture approach that provides the following:
• A single infrastructure supporting multiple events and processes
o Web services, HTTP(S), FTP(S), email, sockets, command line, and others
• Load high volumes of document content into the archive
• Validate that loads completed successfully
• “Roll back” a load in case of failure or an inadvertent load
• Easily connect to multiple backend systems and databases for extraction or loading
• Handle mundane tasks such as unzipping files
• Normalize or transform file formats (to PDF, CSV, or TIFF, for example)
• Extract content from the archive and pass it to other web-based processes
• Easily notify users or administrators of errors in the process
• Manage it all from a web-based GUI with process-flow design and preflight
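Several of these requirements (load, validate, roll back, notify) compose into one transactional flow. Here is a minimal sketch of that flow, with a toy in-memory archive and a plain callable notifier standing in for a real product's interfaces; all names are hypothetical:

```python
class MemoryArchive:
    """Toy stand-in for a backend content archive."""

    def __init__(self):
        self.docs = {}
        self._counter = 0

    def load(self, doc):
        self._counter += 1
        doc_id = f"doc-{self._counter}"
        self.docs[doc_id] = doc
        return doc_id

    def exists(self, doc_id):
        return doc_id in self.docs

    def delete(self, doc_id):
        self.docs.pop(doc_id, None)

def run_load(archive, notifier, batch):
    """Load a batch into the archive, validate that it landed, and roll the
    whole load back (with a notification) if anything fails."""
    loaded_ids = []
    try:
        for doc in batch:
            loaded_ids.append(archive.load(doc))
        # Validate: every loaded document must be retrievable.
        for doc_id in loaded_ids:
            if not archive.exists(doc_id):
                raise RuntimeError(f"validation failed for {doc_id}")
        return loaded_ids
    except Exception as exc:
        # "Roll back" the partial load and tell a human, not just a log file.
        for doc_id in loaded_ids:
            archive.delete(doc_id)
        notifier(f"load rolled back: {exc}")
        raise
```

In a real deployment the notifier would send email or hit a web service, and the rollback would be a transactional call into the archive, but the shape of the flow is the same.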
So what? The cost to large companies of constantly replicating custom capture systems is huge. We’ve seen organizations looking at spending several million dollars just to extend departmental capture to one or two additional departments. These processes are complex, involve many users and a lot of custom code, and the risk of getting it wrong is too great. And don’t even get us started on compliance as a reason to do things once, and do them right.
A company I’ve found to be a thought leader on enterprise content capture (ECC) is Vega ECM. They have been involved for years with organizations on content capture approaches and, as consultants, helped write a number of the custom approaches I alluded to above. Vega now sees ECC as a discrete discipline within ECM and is helping organizations move from the custom approach of yesteryear to the more modern, single-infrastructure approach bulleted above. Check out http://www.vegaecm.com/ for more.