Thursday, February 7, 2019


Why not just fix document accessibility at the source?


The successful mantra of organizations striving for digital accessibility is: “always direct efforts at the source applications” because unless you address accessibility issues there, it will be impossible to keep up as changes propagate throughout content over time. This works swimmingly for website design where you have relatively few source systems, and proven strategies that can be employed equally well across those systems.

Take, for example, the use of tables. When designing web content, the accessibility best practice is to use tables only for tabular data, not as a design layout scheme. Tables must have table headers for rows and columns so people reading the table can understand the relationship of the cells. This edict works very well and can be replicated easily across an organization’s few web design environments.
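As a small illustration of that rule, here is a hedged sketch of an automated check that flags data tables declaring no header cells. It assumes Python with the BeautifulSoup (bs4) library, which is not mentioned in the post; any HTML-parsing tool would do.

```python
# Illustrative check only: flag HTML tables that declare no header cells.
# Assumes Python with BeautifulSoup (pip install beautifulsoup4).
from bs4 import BeautifulSoup

def tables_missing_headers(html: str) -> list:
    """Return any <table> elements that contain no <th> header cells."""
    soup = BeautifulSoup(html, "html.parser")
    flagged = []
    for table in soup.find_all("table"):
        # A data table should declare row/column headers (ideally with
        # scope="row"/"col") so a screen reader can announce each cell's context.
        if not table.find_all("th"):
            flagged.append(table)
    return flagged

sample = "<table><tr><td>Plan</td><td>Premium</td></tr></table>"
print(len(tables_missing_headers(sample)))  # -> 1: this table needs header cells
```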

Take this same idea of accessible tables, apply it to documents, and the model quickly breaks down. Organizations usually have a handful, or worse, dozens of ways of creating documents. End users generate content in MS Office, including Word and PowerPoint, and in Google Docs, Sheets or Slides. There are also publishing applications such as Quark and Adobe InDesign, reporting tools such as Microsoft’s SSRS, and many others. Then there are the corporate systems that generate high-volume transactional or reporting content. All of these systems generate content differently. Regarding tables, some document systems will default to creating tabular data in what screen readers interpret as paragraphs, losing much of the context required to navigate the data. So applying one overall rule, even for something as self-contained as tables, quickly turns into a very difficult task and requires working with many different content systems and their authors.

This is just one reason why many large organizations choose to tackle document accessibility both before and after the content is created. For platforms with rich accessibility capabilities, such as Microsoft Office, build accessibility in when the content is created. For platforms that struggle or offer limited accessibility flexibility, post-authoring accessibility becomes a life-saver. A post-authoring system can use templates that predict what will be in your most common document streams and either produce all of your content in accessible formats up front, or be invoked on demand, only when a document is actually pulled for use. And, as you would hope, a post-authoring system offers a wealth of flexibility in setting up important elements such as tables, paragraphs, lists, heading structures, URL links and alternate text for graphical content. Post-authoring systems can even make up for bad decisions made upstream in the authoring process, such as poor heading structures or missing alternate text for graphics.
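To make the on-demand option concrete, here is a minimal sketch of pull-time remediation. The template rules and the `match_template` and `remediate_pdf` functions are hypothetical stand-ins for whatever post-authoring engine is deployed; only the control flow is meant literally.

```python
# A minimal sketch of on-demand ("pull-time") remediation. The template rules,
# match_template and remediate_pdf are hypothetical stand-ins for a real
# post-authoring engine; only the control flow is meant literally.

TEMPLATES = {
    "monthly_statement": {"headings": True, "tables": True, "alt_text": True},
    "claim_letter":      {"headings": True, "tables": False, "alt_text": True},
}

def match_template(doc_type: str) -> dict:
    """Look up the tagging rules predicted for this document stream."""
    return TEMPLATES.get(doc_type, {"headings": True, "tables": True, "alt_text": True})

def remediate_pdf(raw: bytes, rules: dict) -> bytes:
    """Placeholder: a real engine would apply tags for headings, tables,
    lists, links and alternate text according to `rules`."""
    return raw

def serve_document(path: str, doc_type: str) -> bytes:
    """Remediate a stored PDF only at the moment a user requests it."""
    rules = match_template(doc_type)
    with open(path, "rb") as f:
        raw = f.read()
    return remediate_pdf(raw, rules)
```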

Follow me here for more updates as accessibility for documents expands and improves. Next up will be a discussion about the convergence of artificial intelligence and document accessibility. Oh, and I’ll be at CSUN in Anaheim, March 12-15.



Friday, January 24, 2014

Update on AODA for Document Accessibility


We’ve been asked to comment on a whirlwind of activity around AODA, the Accessibility for Ontarians with Disabilities Act. In our Fall Seminar series, we learned that AODA has very specific timelines in place that mandate compliance for organizations with over 20 employees. One of the early deadlines, January 1, 2014, has just passed. By that date, organizations had to be WCAG Level A compliant and have an accessibility plan outlined and documented; Level AA compliance comes due at a later date.

This is having a real impact on organizations that have not put a plan in place. Notably, one of the largest property & casualty insurers in North America recently sent a notice to all of its Ontario customers informing them that:

“Due to technical limitations, starting December 8th 2013, most or possibly all documents you view online will no longer be available.  Unfortunately, this is a necessary step as we improve the overall accessibility of our site.”

We are in an age of online customer service in which virtually all organizations not only offer extensive website service channels, they actively use the website to reduce more expensive call center traffic. The impact here, in terms of both brand damage and sheer cost, is huge.

This is completely avoidable. The technology exists to quickly remediate websites, and beyond that, there is now readily available technology to remediate high-volume insurance documents: insurance cards, policies, bills, notices, claim letters, and the like. Don’t let this happen! Attend an Actuate webinar or seminar on document accessibility and find out what you can do!


Charlotte Document Accessibility Seminar


Momentum continues to build in the document accessibility space. We recently held the second installment of our seminar series, this time in Charlotte, NC. The first session, in Toronto, was very well received and has led to a lot of discussion about the Ontario accessibility legislation and deadlines; more on that in our next post.

This second session was almost twice as large, with 9 companies and 22 attendees. Remarkably, every single registrant actually made it to the session! Also in attendance was TJ, a handsome German Shepherd service dog. Note: it was sunny and gorgeous in Charlotte on this particular day!

The presentations were on the mark. Tom Logan again covered the regulatory and litigation landscape, and Shannon Kelly, working with Lou Fioritto of BrailleWorks International, demonstrated how a visually impaired person actually works with correctly tagged PDFs. Lou remarked that it’s important to keep in mind that for a visually impaired person, this type of technology opens the door to independence. “Imagine,” Lou said, “if you had to invite a stranger over to your house to read your financial statements to you!”

Jeff Williams discussed the actual technology being used to create accessible PDF files for high-volume transaction applications, along with updates on PDF/UA, and Will Davis demonstrated both building applications that generate accessible PDF files and remediating already-created files into accessible formats.

The Q&A session again proved to be one of the most interesting parts. There were questions on what happens if the tagging is somehow applied incorrectly or the alternate text, for example, is out of sync with the image it is meant to represent. The answer is that if you deploy this technology correctly, you are in fact storing a document that is the legally admissible document of record. It would be highly unlikely to apply incorrect tags, but if it were to happen, it is easily corrected, and according to Matt Aranas at SSB BART Group, one would have a very low probability of a discrimination claim on that specific document.

Additional feedback came from an enterprise architect at a large credit card services company already using the Actuate document accessibility solution. He commented that the group that sets up the templates for high-volume remediation does not also control the content being created, so there is always the possibility that somebody else along the way, in marketing or elsewhere, could add new content that is not tagged. It would be good to catch “un-tagged” elements as part of the process and send an SMS or email alert so that those new document elements get tagged and made accessible immediately.
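As a rough sketch of that idea, the snippet below scans a document for untagged elements and emails the template team. `find_untagged_elements` is a hypothetical hook into whatever inspection your remediation engine exposes, and the SMTP host and addresses are placeholders.

```python
# Sketch of the "catch un-tagged elements and alert" idea from the Q&A.
# find_untagged_elements is a hypothetical hook into the remediation engine;
# the SMTP host and addresses are placeholders.
import smtplib
from email.message import EmailMessage

def find_untagged_elements(pdf_path: str) -> list:
    """Hypothetical: return descriptions of content that carries no structure
    tags, e.g. an image added late by marketing with no alternate text."""
    return []

def alert_on_untagged(pdf_path: str) -> None:
    problems = find_untagged_elements(pdf_path)
    if not problems:
        return
    msg = EmailMessage()
    msg["Subject"] = f"Untagged content found in {pdf_path}"
    msg["From"] = "accessibility-monitor@example.com"
    msg["To"] = "template-team@example.com"
    msg.set_content("\n".join(problems))
    with smtplib.SMTP("mail.example.com") as smtp:   # or an SMS gateway instead
        smtp.send_message(msg)
```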

It was an excellent session, and next steps are already in motion with a majority of the attendees, whether in the form of questions to answer or more in-depth discussions and technology briefings. The next seminar event is scheduled for New York City in mid-April, followed by Washington, D.C. in the summer!
 

Wednesday, October 23, 2013

Questions from Presentation: Using Analytics to Improve Customer Communication

The best part of our presentation to the ABA on 10/22 was not the presentation itself but the really good questions that followed:
 
 
Can you elaborate on what impact privacy laws might have on using publicly available social media and other content as your source for customer messaging?  My only answer was that we are not compliance or banking law experts.  In talking with a compliance vendor later, he saw no issue with this, because as individuals we opt in to disclosing all manner of personal information to Google, Facebook, etc.  Bottom line: if the banks decide not to take advantage of this publicly available information, somebody else may do so instead.
 
Any ideas on how to identify who is clicking on the specific ads you embed?  That’s a darn good question, and I wonder if there is any way the e-Ad could somehow capture session information.  The question was posed by the SVP of Marketing at a 20-branch bank that is doing very well embedding ads in its real estate channel to drive sales to its lending channel; they are getting a lot of good clicks, but how could they ever tell whom they need to follow up with?  More to follow, as we could not come up with a complete answer to this very good question.

Is it possible to do Communication Management completely independent of Analytics?  We answered that yes, one could definitely find low-hanging fruit in the customer communication area, especially in correspondence, without having to tie in analytics.  We explained that many banks in particular are getting a better handle on the way they manage correspondence with their clients to ensure that they leverage their brand and manage message consistency, and even compliance.  We mentioned the example of the bank we met at the Customer Experience Exchange, which found countless examples of letters telling customers that they "did not meet their standards" (cue the fingernails on the chalkboard).  We do believe, however, that analytics helps you embed much more personalization and power into your messaging, and that most banks will at some point couple analytics to their communication hubs.

Link to presentation: http://www.linkedin.com/profile/view?id=23220005&trk=nav_responsive_tab_profile
 
 
 

Tuesday, October 22, 2013

ABA 2013: Banks Bounce Back


The ABA 2013 conference in New Orleans has just drawn to a close.  Banks are bouncing back!

 
Most of the attendees were C-level execs from small community banks with assets under $1B, and they were mostly from the Southeast, given that the venue was New Orleans this year.

 
Dodd-Frank is in the midst of being fully implemented, with tighter lending guidelines for QM, or Qualified Mortgages, hitting in January 2014; however, the head of the CFPB stated that there would be a gradual easing-in of the guidelines, with little to no litigation in the first few months.  The bottom line for the small banks was that if they have a traditionally good track record making loans in their markets with acceptable loss rates, this new legislation should have little or no impact.  The reactions of the CEOs ranged, however, from the activist "we should show up with thousands to march on Washington" to the more resigned "let's not dwell on the regulation but focus on our business."

 
The attitudes of the CEOs matched almost exactly the geographies from which they hailed.  The West Coast CEO focused on a vibrant, young workforce; the East Coast bank was implementing new technology, but only to the extent that it would pay off with its specific customer profile.  The CEO of a deep-South bank explained how his bank's branches had been "slabbed" during Katrina, a verb meaning completely demolished save the cement slab upon which the building used to sit.  But the employees were at work the next day at fold-out tables doing business, ultimately handing out up to $100M in total cash to bank customers needing money to evacuate immediately.  I'm not sure which was more moving: the image of employees at those tables in front of their erstwhile branches, or the news that the bank experienced less than 1% loss on this cash handout despite the complete lack of ID validation.  "How can you force someone to produce ID when you can plainly see that they 'swam' out of their house this morning?"  This, said the CEO, is proof positive that we are still a nation of trustworthy individuals.  This particular bank has expanded rapidly since Katrina, capturing a double-digit increase in market share right after the storm.  How's that for a positive customer experience?

 

 

Wednesday, September 11, 2013

Migrating-in-Place for ECM Consolidation



We recently covered why organizations are moving swiftly to Next Gen Repositories.  No longer are modern organizations satisfied with the old “system of record” with all of its upkeep, overhead, and user charges.  They want something from which relevant, actionable data can be derived and pulled out by Gen X & Y users as they demand it.  A perfect example of this new, “system of engagement” is being delivered as part of the actual migration from the old repository to the new.  Let me explain.

Instead of simply mass-migrating the existing data to the new target system, many are choosing to “migrate in place.”  In other words, they leave the existing archive in place and allow users to continue to pull and view content.  What results is a process by which users dictate the content that is actually required in the new system.  Behind the scenes, the data being actively selected gets migrated dynamically to the new environment.  If you are from the document scanning world, this is similar to “scan on demand,” which left paper archives in place until a document was retrieved and scanned.  Once retrieved, it lives again as a digital asset and is ingested into workflow and repository systems with a specific retention period.
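A minimal sketch of that migrate-on-access flow, assuming hypothetical `legacy` and `target` repository clients (the real integration would of course depend on the systems involved):

```python
# Minimal sketch of migrate-on-access. `legacy` and `target` stand in for
# whatever repository clients exist on each side; the point is the flow:
# serve the user from wherever the document lives today, and promote it to
# the new system the first time it is actually retrieved.

def retrieve(doc_id: str, legacy, target) -> bytes:
    if target.exists(doc_id):                  # already migrated on an earlier pull
        return target.get(doc_id)
    content, metadata = legacy.get(doc_id)     # still only in the legacy archive
    target.put(doc_id, content, metadata)      # dynamic, user-driven migration
    return content
```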

One big assumption here is that the organization has decided that keeping the legacy system in place for some period is desirable.  This may be due to factors such as:

·        The legacy system can still be accessed, with or without APIs

·        Legacy system maintenance is no longer required in order to maintain access

·        Knowledgeable staff are no longer around to properly administer the legacy system (“How can we shut down what we don’t even understand?”)

What happens to data that never gets selected?  It simply ages off, or is migrated based upon specific, configurable requirements.  For example, a life insurance company may keep policies in force for the life of the insured (100 years?) plus seven years, so those policy documents would obviously be moved to the new system even if never chosen by user retrievals.  This type of “Migrate Based on Retention” process can be configured and automated.
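Here is one way such a retention rule might be expressed; the document types and retention periods are illustrative, echoing the life-policy example above:

```python
# Sketch of a configurable "Migrate Based on Retention" rule. Document types
# and retention periods are illustrative, echoing the life-policy example.
from datetime import date, timedelta

RETENTION_RULES = {
    "life_policy":       timedelta(days=365 * 107),  # ~life of insured + 7 years
    "monthly_statement": timedelta(days=365 * 7),
}

def should_migrate(doc_type: str, created: date, today=None) -> bool:
    """Migrate anything still inside its retention window, even if never retrieved."""
    today = today or date.today()
    retention = RETENTION_RULES.get(doc_type, timedelta(0))
    return created + retention > today
```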

One other important note: it is not necessary to own APIs to the legacy system, thanks to the advent of open access methods such as the CMIS interface, an open access protocol supported by many ECM systems, and thanks to repository adapters, which can be easily built by savvy integrators with or without a CMIS interface.
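For example, if the legacy repository speaks CMIS, something like the Apache Chemistry cmislib client can pull content without any vendor-specific API. The endpoint URL, credentials and document path below are placeholders:

```python
# One way to reach a CMIS-enabled legacy repository without owning its native
# APIs: the Apache Chemistry cmislib client (pip install cmislib). The endpoint
# URL, credentials and document path are placeholders.
from cmislib import CmisClient

client = CmisClient("http://legacy-ecm.example.com/cmis/atom", "svc_user", "secret")
repo = client.defaultRepository
doc = repo.getObjectByPath("/Statements/2013/stmt_000123.pdf")
with open("stmt_000123.pdf", "wb") as out:
    out.write(doc.getContentStream().read())   # pull the content stream down locally
```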
One of the dangers of any mass data migration is finding entire libraries of data that don’t appear to have any disposition requirements.  “What is this data, and to whom does it belong?”  Worse than that: “Whom do we even ask to find out who owns the data?”  Vital to any successful migration is maintaining a regular communication channel with the key user communities that access this data.  Think about setting up a spreadsheet or table with the content type, user group and “go-to” individuals’ contact details, along the lines of the sketch below, so that you can make these important disposition decisions expeditiously.  The cost of this process is not in running it but in deciding what to do with “orphan” data and in bringing the business and IT together long enough to decide.  At the end of the day, you have far fewer orphan decisions to make when you migrate in place.
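A bare-bones version of that disposition contact list might look like the following; all content types, groups, names and addresses are made up for illustration:

```python
# A bare-bones version of the disposition contact list described above.
# Content types, groups, names and addresses are all made up for illustration.
DISPOSITION_CONTACTS = [
    {"content_type": "closed loan files",      "user_group": "Consumer Lending",
     "go_to": "J. Smith", "contact": "jsmith@example.com"},
    {"content_type": "archived claim letters", "user_group": "Claims Operations",
     "go_to": "R. Patel", "contact": "rpatel@example.com"},
]
```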
“Isn’t setting all this up really expensive?” is one obvious question.  The answer is: not really.  The key is deploying a relatively lightweight repository with excellent process flow and decision capabilities.  If done right, you can access both the legacy and the new system with zero coding.
Here are the do’s and don’ts for migrating-in-place:
·        Implement a lightweight repository with excellent process flow capabilities so that decision making can be automated
·        Don’t worry about using or buying APIs to the legacy system, as there are other ways to access the content
·        Avoid orphan document situations by establishing regular touch points with business users
·        Set up simultaneous access to both systems, making the switch transparent to users
 

Friday, September 21, 2012


Gartner, Forrester and Doculabs have all been following the Integrated Document Archive and Retrieval System (IDARS) marketplace since the early ’90s.  The concept is not new: store high volumes of internally generated content in a highly efficient system, with a database for the metadata and separate storage for the documents themselves.  With security in place, allow internal users and external customers access to the database and to the documents it points to.
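For readers newer to the space, here is a toy illustration of that pattern: metadata indexed in a database, document bodies kept in separate storage, and each index row pointing at the stored object. The schema and paths are purely illustrative:

```python
# Toy illustration of the IDARS pattern: metadata indexed in a database,
# document bodies in separate storage, with the index row pointing at the
# stored object. Schema and paths are purely illustrative.
import hashlib, os, sqlite3

conn = sqlite3.connect("idars_index.db")
conn.execute("""CREATE TABLE IF NOT EXISTS documents
                (doc_id TEXT PRIMARY KEY, account TEXT, doc_date TEXT, path TEXT)""")

def ingest(doc_id, account, doc_date, body, store="doc_store"):
    os.makedirs(store, exist_ok=True)
    path = os.path.join(store, hashlib.sha1(doc_id.encode()).hexdigest() + ".pdf")
    with open(path, "wb") as f:
        f.write(body)                                    # the document itself
    conn.execute("INSERT OR REPLACE INTO documents VALUES (?, ?, ?, ?)",
                 (doc_id, account, doc_date, path))      # metadata plus pointer
    conn.commit()

def retrieve(doc_id):
    (path,) = conn.execute("SELECT path FROM documents WHERE doc_id = ?",
                           (doc_id,)).fetchone()
    with open(path, "rb") as f:
        return f.read()
```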

The concept has been a smashing success.  Virtually all mid-to-large-sized enterprises have deployed some version of IDARS to reduce printing costs and to make content available to CSRs so they can answer customers’ questions about their statements or bills in real time.  Vendors have built solutions that scale and have integrated them into total content management infrastructures, with federated search across IDARS and other content repositories.  And we’ve all made or saved a few bucks in the process.

So, after decades of solid but rudimentary success, you’d think there would be nothing further to do here.  These systems are now embedded within the enterprise, the vendors have built mini-empires on the results they’ve delivered, and it’s a pretty static landscape in terms of new developments.  So it’s surprising that a division of Actuate, squarely in the business analytics space, has decided after all this time to throw its hat into the repository ring as well.


There are two compelling reasons why customers want more than the traditional IDARS: User Interactivity and Cost Allocation.  Let’s first discuss why users are bored with their current IDARS:

Internally generated content like reports, customer statements, insurance claims documents and complex financial reports is not an island unto itself.  This content fits into a framework of information that organizations are increasingly mining to make decisions, predict behaviors, and satisfy broad regulatory requirements.  I recently visited a top-five P&C insurer, and the news is that first-tier organizations are no longer content with content as usual.  They are thinking, for example, about the requirement to store underwriting documents along with all the process-related decisions that generated them.  They need a view into the rules that generated the decision to underwrite a policy with a specific risk calculation, so they can evaluate that decision against actual claims data mined from data warehouses and claims reports.  This is not your grandfather’s IDARS or ECM system anymore.  We are talking about merging and linking structured and unstructured content, and being able to present not only views of documents but also interactive views of rules engines and documents, and to compare predictions with actuals.

Cost Allocation: Traditional IDARS systems are pretty vanilla when it comes to how they report on who uses the system, or what percentage of the content is being retrieved from, say, the Variable Annuities LOB versus the Protection LOB.  You get a pretty standard set of admin screens, and you basically have to build your own reporting based on system logs.  Because ECM is truly an enterprise application these days, set up centrally to manage content from all business units, this reporting is key from a “who pays, and how much” standpoint.  What if you could generate dynamic reports that show usage by department in terms of total bytes stored, total numbers of retrievals, and which departments added the most new reports, and do all of this with a simple, no-programming interface?  The answer is that you’d not only be able to better allocate costs, you’d also be able to better predict what your costs will be for new applications, because you could easily see what has happened in the past.  And there is more going on here than simply the ability to generate beautiful 3D graphs and interact with the reports; we are also talking about being able to make associations between the granular data you store, how it is indexed, and how it ties into records retention.  It is not good enough to store and records-manage “corporate financial reports.”  These reports relate to specific corporate entities and people, and if you have a system that can see those associations and allow you to manage at that level, you get a much more elegant content delivery platform, and you will attract user communities that know they will get value based upon actual usage.
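As a rough sketch of the kind of rollup described above, assuming the repository’s activity log (with hypothetical department, action and bytes columns) were exported to CSV:

```python
# Sketch of a usage-by-department rollup, assuming the repository's activity
# log (hypothetical columns: department, action, bytes) is exported to CSV.
import pandas as pd

log = pd.read_csv("repository_activity.csv")
report = (log.groupby(["department", "action"])
             .agg(events=("action", "count"), total_bytes=("bytes", "sum"))
             .reset_index())
print(report)   # e.g. retrievals vs. new reports added per LOB, with bytes stored
```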

Figure 1: dynamically generated reporting about what’s in the repository
 

And this brings us to why an analytics company announced a new IDARS system.  Today’s ECM systems are primarily focused on systems of record: store unstructured content and serve it up to users who request it.  But, as the insurance company told us, those are yesterday’s requirements.  What’s needed now is a system of engagement that can store, link and dynamically present information derived from a variety of structured and unstructured content, and do so over web, mobile and touch-tablet channels.  The Next Gen Repository manages content discretely, by the audience that views it, not just by the name of the content.  Please stay tuned as we uncover more trends on this topic.  One thing’s for sure: yesterday’s methods will only produce yesterday’s results!