Interoperability is primarily a matter of the use case, not of the technology. Policymakers considering interoperability mandates need to be watchful for the extremes of perfection and compromise, both of which offer a game to be exploited by the unscrupulous.
Reviewers of a paper concerning interoperability complained that some sections seemed to imply only 100% functional equivalence would be acceptable, and told us “much smaller percentages are perfectly adequate.” So how much interoperability is enough interoperability? The answer, dear to the hearts of every politician, is “it depends”.
Take for example interoperability between two document processing systems. Users who are drafting blocks of text that will be used in formatting-insensitive ways – within a system with stylesheets, or pasted into a larger work, or for processing by other software, for example – will find that almost any degree of interoperability that preserves the plain text when accessed on the interoperable system will be sufficient.
Users creating the near-final version of a document whose formatting is critical – for example a legal document for court usage where exact line numbers and their contents need to be precisely reproduced on every system, or the camera-ready proofs of a document to be reproduced by offset lithography – will demand that every single detail of the document they produce in their software be precisely and faithfully reproduced when loaded into a different software system. Truly, nothing less than 100% is enough. For some, even variations in font kerning would be a fatal flaw.
The same system, the same standard, but with different use-cases, will have different expectations and requirements for how much interoperability is enough. A dominant vendor might well implement capabilities on the margins of a standard in order to cause import differences in a competitor’s software that the user will then blame on the competitor. In addition, issues that are perceived as interoperability issues by the user may actually be due to platform differences. For example, many issues reported by LibreOffice users actually arise from the unavailability of the correct fonts on their platform, since the default Windows fonts are not universally available on other platforms. Even knowing this doesn’t make a fix easy.
Given the near impossibility of delivering a user experience perceived as identical on alternative implementations, how has software like Google Docs been able to gain such a large user base? Instead of focusing on a substitutable user experience, Google started with a compelling new capability – real-time user collaboration and change tracking – and implemented good-enough interoperability using open source tools.
Even where the capability involved is a subset of function rather than the whole system, the use case will control the level of interoperability that’s acceptable. Consider a larger system intended for some other purpose – social media, forums, a phone – with the addition of integrated text messaging that can receive messages from other people. For some users, the only function that matters is sending a short, text-only message to another person and having the text faithfully reproduced.
Another user may want to include emojis in their message, and expect the same emoji to be seen by the recipient – it’s almost the same use-case. If the receiving system displays the message using a different emoji palette, guided only by the reference glyphs in the standard, it’s possible that entirely the wrong impression could be given.
For example, many users treat the “folded hands” emoji (U+1F64F) as a “high five” while some treat it as “applause” and many others treat it as “praying hands”. So the message “Just heard your news 🙏” could be read as a concluding celebration, an empathetic prayer or as personal congratulations depending on the reader and their device, despite full conformance with the standard and apparent 100% interoperability. The two people might never know why their friendship suddenly ended; experiential interoperability takes more than adherence to the standard.
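A short sketch makes the point concrete: the two devices exchange byte-for-byte identical data, so interoperability at the protocol level is total, yet the standard leaves the rendered glyph – and therefore the perceived meaning – to each platform’s emoji font.

```python
# The "folded hands" emoji is a single Unicode code point, U+1F64F.
msg = "Just heard your news \U0001F64F"

# Sender and recipient exchange exactly the same code point...
print(hex(ord(msg[-1])))           # the code point of the emoji

# ...and exactly the same UTF-8 bytes on the wire.
print(msg.encode("utf-8")[-4:])    # the four-byte UTF-8 encoding

# Everything up to here is 100% interoperable. What the recipient
# actually *sees* – high five, applause, or prayer – is decided by
# their device's emoji renderer, outside the standard's reach.
```

Running this shows `0x1f64f` and `b'\xf0\x9f\x99\x8f'` at both ends; the divergence the essay describes happens entirely after the faithful byte exchange.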
Meanwhile a third user needs to be sure the message they are sending is visible only to its recipient. They need not just the text exchanged but also reliable, secure encryption, with private keys accessible only to their owners, working fully on the systems at both ends. To them the transmission of the text would be an error if the end-to-end integrity and repudiability of the conversation were not guaranteed.
Again, the same systems may have use-cases with wildly different needs for interoperable functionality. Broad statements about “all-or-nothing” or “just the basics” serve the discussion poorly.
So how much interoperability is enough? Like Goldilocks, we can confidently answer “enough is just right”. Policymakers considering interoperability mandates are most likely reaching for a good tool for the job, but they need to make sure they specify the use cases and scope they anticipate, so that neither the perfectionist nor the fractional pragmatist can claim their solution is the best answer – especially since both extremes are often the preserve of a company gaming “interoperability” to deliver its inverse.
Many thanks to the Internet Society who made this essay possible.