New risks to consider

Like any new technology, document automation introduces new risks and negative externalities. Here are some of the scenarios you should be thinking about; for a more nuanced view and further reading suggestions, consider grabbing a copy of Weapons of Math Destruction by Cathy O'Neil.

  • Being wrong... a million times a second
    A system that automates correct decisions at great speed can automate incorrect decisions at the same speed. Make sure approval for "straight-through" automation is granted only after careful risk analysis, and revisit it regularly (see the routing sketch after this list).
  • Job displacement
    The best uses of automation enhance human labor rather than replace it, but some automation will inevitably displace the need for certain jobs. Ensure that your automation planning uses the value unlocked to create new roles, not just to eliminate old ones.
  • Codified bias
    Any codified process encodes bias. But human processes have a natural flexibility to them: we all do things a bit differently, and we reflect on them as we go. Left unchecked, automated processes can be rigid, universal, and unreviewed. Take special care to review automated decisions not just for correctness but also for unintended biases and correlations (one simple audit is sketched after this list).
  • Metrics becoming targets
    Goodhart's Law states that when a measure becomes a target, it ceases to be a good measure. Automating your processes will yield helpful business metrics about the volume and velocity of the work automated. But if those metrics become your exclusive targets, you risk losing sight of less quantifiable goals, like the quality and strategic relevance of the work performed.
  • Hidden bias
    Does your loan approval pipeline use straight-through automation for typed loan applications, but human processing for handwritten applications? Then you've codified that people who type their loan applications get faster approvals. Is your loan system built on a giant "black-box" neural net? Then it might be incorporating factors such as gender and name into its decisions unless you specifically removed those from consideration.
  • Accidental standard-setting
    When a system begins creating data, other systems begin using it. Those uses might extend far beyond the originally intended scope. A metric that (under the hood) correlates with a certain patient population might get repurposed for an insurance billing system, causing reimbursements for that group to be rejected at higher rates. In one famous case of the snake eating its own tail, Google Translate began accidentally training on web pages created with its own output, reinforcing its own mistakes rather than learning from native speakers.
  • Un-auditable decisions
    Explainability is a hot topic in AI because approaches to it vary so widely. There are two main questions you may need to answer: how does your system make decisions in general, and why did it make a particular decision in a specific case? Have a plan for how you will ask these questions of an automated system; a minimal decision-record sketch follows below.
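
To make the idea of a gated "straight-through" path concrete, here is a minimal sketch in Python. Everything in it is an assumption for illustration: the confidence threshold, the amount limit, and the function and field names are placeholders, not recommendations; real gates should come out of your own risk analysis and be revisited on a schedule.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative risk gates for straight-through approval. These values are
# placeholders; set and review them through your own risk analysis.
CONFIDENCE_THRESHOLD = 0.95
MAX_AUTO_AMOUNT = 10_000

@dataclass
class Routing:
    route: str                       # "straight_through" or "human_review"
    approved: Optional[bool] = None  # None until a human decides

def route_application(model_score: float, amount: float) -> Routing:
    """Send a decision straight through only when it clears every gate;
    anything uncertain or unusually large falls back to a human reviewer."""
    if model_score >= CONFIDENCE_THRESHOLD and amount <= MAX_AUTO_AMOUNT:
        return Routing(route="straight_through", approved=True)
    return Routing(route="human_review")
```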
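
One simple way to start the bias review is to compare outcomes across groups. The sketch below computes per-group approval rates and the ratio between the lowest and highest rate, using made-up numbers for the typed-versus-handwritten loan example; the "four-fifths" rule of thumb treats ratios below 0.8 as a flag worth investigating, not a verdict.

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group_label, approved_bool) pairs."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {group: approvals[group] / totals[group] for group in totals}

def disparate_impact(decisions):
    """Ratio of the lowest group approval rate to the highest. A common
    rule of thumb flags ratios below 0.8 for investigation."""
    rates = approval_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Illustrative numbers for the typed vs. handwritten loan example above.
sample = ([("typed", True)] * 90 + [("typed", False)] * 10
          + [("handwritten", True)] * 60 + [("handwritten", False)] * 40)
print(approval_rates(sample))    # {'typed': 0.9, 'handwritten': 0.6}
print(disparate_impact(sample))  # ~0.67, well below 0.8
```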
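
Finally, for auditability, the cheapest insurance is to record, for every automated decision, exactly what the system saw and why it decided what it did. Below is a minimal sketch assuming a simple append-only JSON-lines log; the schema and field names are placeholders rather than any standard.

```python
import json
from datetime import datetime, timezone

def record_decision(application_id, inputs, model_version, score, outcome,
                    reasons, log_path="decisions.jsonl"):
    """Append one audit record per automated decision, so that "why did
    the system decide X?" can be answered after the fact."""
    record = {
        "application_id": application_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,   # which model/rules made the call
        "inputs": inputs,                 # exactly what the system saw
        "score": score,
        "outcome": outcome,
        "reasons": reasons,               # e.g. top factors or rule IDs
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")

record_decision("app-1234", {"amount": 5000}, "model-v7", 0.97,
                "approved", ["income_ratio", "history_length"])
```

With records like these, "why was application X approved?" becomes a log query rather than an archaeology project.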