Dazzle

2022-08-30

Why products are or aren't trustable

By Johannes Ernst

https://dazzle.town/blog/2022-08-30-how-products-arent-trustable/

At Dazzle, we want to build trustable products, and we want to provide users with as much hard evidence as possible that their trust is not misplaced.

In this quest to deliver trustable products, the following analysis of why any product may or may not be trustable can be helpful.

First, we need to distinguish between products that work as intended by the team that created them, and products that don’t work as intended. This distinction is essential because the ways to protect against the scenarios in each category are quite different. Here is the distinction:

- Product works as intended
- Product does not work as intended

For example, if a product is intended to defraud or attack its users (like malware), it can work exactly as intended by the development team, but it is obviously not trustworthy from the perspective of the user. On the other hand, if a product is trying to do a good job by its users, it is generally only not trustworthy if it doesn’t work as intended, such as by having security vulnerabilities. (Of course, products that intend to defraud their users can, in addition, also have security vulnerabilities.)

Let’s start with the first branch: products that work as intended.

- Product works as intended
  - Product only does good things → Trustworthy
  - Product does some bad things (overt or covert) → Not trustworthy
- Product does not work as intended

In this case, if the product only does good things, it is justifiably trustworthy. If the product also does bad things, we need to distinguish between products that are quite overt about it and products that try to hide it from the user, anywhere from hiding it in plain sight (such as by using Orwellian language) to undocumented backdoors. Of course, there can be combinations of the two, where an overt not-so-good feature masks an even worse covert bad feature. In either case, the product is not trustworthy.

Turning to the second branch, products that do not work as intended, we need to distinguish between issues that are known and issues that aren’t:

- Product works as intended
- Product does not work as intended
  - There are known issues
    - Bugs or missing features that do not put the user at risk → Trustworthy
    - Security or privacy bugs → Not trustworthy
  - There are unknown issues → May or may not be trustworthy

If a product has known issues, some may be dangerous for the user, like a known security hole that is being actively exploited. Other issues may be merely annoying without making the product not trustworthy. For example, if a word processor had a bug that sometimes turned bold-faced text into italics, users would certainly be annoyed, but this would not put anybody at risk.

Finally, there are issues that are unknown, and by their very nature we cannot make any statement about whether they make the product trustworthy or not. Assuming that the development team is trying to do a good job at reducing, as much as possible, the areas in which unknown issues are possible, the remaining risks are fundamentally 0-days.

Putting both branches together, this leads us to the complete breakdown:

- Product works as intended
  - Product only does good things → Trustworthy
  - Product does some bad things (overt or covert) → Not trustworthy
- Product does not work as intended
  - There are known issues
    - Bugs or missing features that do not put the user at risk → Trustworthy
    - Security or privacy bugs → Not trustworthy
  - There are unknown issues → May or may not be trustworthy
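To make the structure concrete, here is a minimal sketch of the same decision tree as a small classification function, written in Python. The names (Verdict, classify) and the boolean parameters are illustrative assumptions for this post only; they are not part of any Dazzle code.

    from enum import Enum, auto

    class Verdict(Enum):
        TRUSTWORTHY = auto()
        NOT_TRUSTWORTHY = auto()
        MAY_OR_MAY_NOT_BE = auto()   # unknown issues: no statement possible

    def classify(works_as_intended: bool,
                 does_bad_things: bool = False,
                 issues_are_known: bool = False,
                 issues_put_user_at_risk: bool = False) -> Verdict:
        """Walk the decision tree above for a single product."""
        if works_as_intended:
            # First branch: the product behaves exactly as its creators intended.
            if does_bad_things:                  # whether overtly or covertly
                return Verdict.NOT_TRUSTWORTHY
            return Verdict.TRUSTWORTHY
        # Second branch: the product does not behave as intended.
        if issues_are_known:
            if issues_put_user_at_risk:          # security or privacy bugs
                return Verdict.NOT_TRUSTWORTHY
            return Verdict.TRUSTWORTHY           # merely annoying bugs or missing features
        return Verdict.MAY_OR_MAY_NOT_BE         # unknown issues, fundamentally 0-days

For example, a product with a known, actively exploited security hole maps to classify(works_as_intended=False, issues_are_known=True, issues_put_user_at_risk=True), which comes out as not trustworthy.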

That gives us the task, at Dazzle, of putting a scheme in place by which we eliminate the paths in this breakdown that lead to the “not trustworthy” outcomes. This will be part of our work going forward.

P.S. It is an interesting exercise to apply this framework to various products you may or may not feel you can trust (say, Instagram, Google or Signal).

P.P.S. This post does not attempt to define what features or behaviors of a given product are “good” or “bad”. It appears that this needs to be determined on a product-by-product, use-case-by-use-case basis. For example, a text editor intended for private journaling would not be trustworthy if it sometimes published writings to the public web. But the same text editor, used for blogging, is expected to do exactly that. So “good” and “bad” depend on the context, and we still need to define them for Dazzle.