Building a trustable product
We’ll be asking our users to import their personal data into Dazzle software. They will only do that if they believe they can trust us and our products. How do we convince them?
This problem is of course not unique to Dazzle. So let’s look at how other projects and organizations convince their customers that their products are trustable. Here are some examples:
Facebook says: “We want everyone to feel safe when using Facebook”. Hmm, note they say “feel safe”, not “be safe”. Which is consistent with their actual behavior.
Apple says “Privacy … [is] one of our core values”. They now communicate this message in many ads like this one.
DuckDuckGo, the search engine that competes with Google mostly on its claims of better privacy, was recently caught sending tracking data to Microsoft. In response, their CEO was forced to walk the claims back. Whether this has long-term consequences for how users see their product remains to be seen.
Tesla notably failed to convince the Chinese government that its cars are trustworthy around critical infrastructure in China.
We can observe that all of those companies’ “trust” work appears to be centered on making public but unverifiable statements. Nobody seems to even attempt to provide tangible evidence to back up their claims. One kind of wonders why; perhaps what happens out of the public spotlight is simply that bad, and no evidence can actually be produced. (Papers like Apple Platform Security are a step in the right direction, but they themselves do not provide much evidence; it’s just a paper, and most of its assertions cannot be verified, so who knows what actually happens on the ground.)
The public seems to understand just how shallow these self-assertions are. In this recent poll about large tech companies, Amazon is listed as the most trustworthy, but even for them, 40% of respondents trust it “not much” or “not at all”. In the case of Facebook, that number is 72%!
At Dazzle, we need to do better, and we want to do better. Much better.
We want a product that not only says it can be trusted, but actually can be trusted.
And we want to provide hard evidence that the trust in the product is indeed justified.
This is not easily done, but we’ll try, one step at a time, hopefully getting better and better over time. In practice, it will probably be done through a combination of technical features (from checksums to things like code sandboxes, logs, and the ability to inspect) and human involvement (much like the financial statements of publicly traded companies, which have generally become fairly trustworthy thanks to periodic audits by external experts). So far this work is at the idea stage, but it is worth stating what our goals are, and perhaps, if you read this, you can help us accomplish them! Participatory governance is certainly a key advantage here.
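To make the “checksums” idea above concrete: one small, verifiable building block is letting users confirm that a release or data export they received is byte-for-byte what was published. The sketch below is purely illustrative (the function names are hypothetical, not part of any existing Dazzle tooling) and shows how a published SHA-256 digest could be checked locally using only the Python standard library:

```python
import hashlib
import hmac

def sha256_of_file(path: str, chunk_size: int = 65536) -> str:
    """Compute the SHA-256 digest of a file, streaming it in chunks
    so that large files are never loaded fully into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def verify_checksum(path: str, expected_hex: str) -> bool:
    """Return True if the file matches the published checksum.
    hmac.compare_digest avoids timing side channels in the comparison."""
    return hmac.compare_digest(sha256_of_file(path), expected_hex.lower())
```

A checksum only proves integrity, not intent: it tells the user the artifact was not tampered with in transit, provided the published digest itself came from a trusted channel. That is exactly why such technical features need to be paired with the human audits mentioned above.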
Imagine a technology product that you can demonstrably trust! Now that will be a revolution.