PeerPoint: An Open P2P Requirements Definition and Design Specification Proposal



III. PeerPoint Requirements: Security and Anonymity

Security can never be 100%. It is often based on trust and reputation, which we need for a web without gatekeepers.


Privacy and anonymity can be thought of as forms of security.
Security by obscurity is an important principle.
Many writers take the view that there is no such thing as anonymity on the internet (search “no such thing as anonymity on the internet” for sources) due to data mining and pattern analysis technologies. Kat Orphanides writes: “Even if you disable cookies, your browser could easily share enough information to give you a unique signature on the web. I've been testing the computers I use on the Electronic Frontier Foundation's Panopticlick website, which reports the identifying information your browser is sharing and compares it against data it has already collected from other users. So far, every system I've tested has been uniquely identifiable.”
Perhaps if identity can be discovered heuristically, that’s the way PeerPoint should go, rather than using certificates, tokens, etc. On the other hand, perhaps part of PeerPoint’s requirements should be methods for obfuscating such identifying patterns to preserve anonymity when that is a user’s desire. Is it possible to distinguish between legitimate (e.g. political) and illegitimate (e.g. criminal) reasons for anonymity?
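As a rough, purely hypothetical illustration of the fingerprinting problem (the attribute names and values below are invented, not a real browser API), the following Python sketch hashes a handful of commonly exposed browser attributes into a single signature. Reporting only coarse, widely shared values is one obfuscation strategy a PeerPoint client could consider.

```python
import hashlib

def browser_fingerprint(attributes: dict) -> str:
    """Combine commonly exposed browser attributes into one hash.

    Each attribute alone is shared by many users, but the combination is
    often unique, which is what Panopticlick-style tests measure.
    """
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Invented example values; a real client would read these from the browser.
exact = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) Gecko/20100101 Firefox/113.0",
    "screen": "1920x1080x24",
    "timezone": "UTC-5",
    "fonts": "Arial,DejaVu Sans,Liberation Mono,Noto Sans",
    "plugins": "PDF Viewer,OpenH264",
}

# One possible obfuscation strategy: report only coarse, widely shared
# values so that many users collapse onto the same fingerprint.
coarse = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) Gecko Firefox",
    "screen": "large",
    "timezone": "UTC",
    "fonts": "default",
    "plugins": "none",
}

print("exact :", browser_fingerprint(exact))
print("coarse:", browser_fingerprint(coarse))
```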
Freedom not Fear (freedomnotfear.org and eff.com)



  • We are a coalition of more than 150 organizations that share a common goal.

  • We want freedom of speech in a digitalized world and a free and uncensored internet to express ourselves.

  • We want privacy in the knowledge society, not surveillance.

  • We want to live in freedom, not in fear.


TrustCloud: A Framework for Accountability and Trust in Cloud Computing (hp.com)
CryptoParty Handbook: This 392-page, Creative Commons-licensed handbook is designed to help those with no prior experience protect their basic human right to privacy in networked, digital domains. By covering a broad array of topics and use contexts, it is written to help anyone wishing to understand and then quickly mitigate many kinds of vulnerability using free, open-source tools.

IV. PeerPoint Requirements: Semantic Web Ontology

V. PeerPoint Requirements: System Library




VI. Library of P2P Middleware and APIs




VII. PeerPoint Requirements: Distributed Data Store

Misc: Storage Quota Management API (W3C)



VIII. Trust/Reputation Metrics

PeerPoint Requirements:


- user ratings/reports of peer nodes
- white/black lists of peers (by individuals, groups, communities, institutions, etc.)
- hierarchical ID/trust certificate authorities (groups, communities, trusted institutions, states, etc.)
xID: In accordance with The Standards of LIFE for Information, the xID specification uses a distributed storage model that allows data to be held in separated silos that are as close to the people they serve as is practical, given the security requirements. It also specifies the nature of a transaction between trusted and untrusted systems that returns verification results without exposing or compromising the contents of the identity record.

The xID system is concerned solely with identity, and does not store any other data than the xID records. Related data, such as medical records or legal records, are stored separately, and include xID certificate references.

- A heuristic method for predicting trustworthiness of a potential peer (“You may like these peer nodes...”); a rough sketch follows the quoted discussion below.
“What we want is peers that are trusted by entities like ourselves, and/or have engaged in transactions that are beneficial to entities like ourselves, not those that allegedly trust entities that we trust and have allegedly engaged in transactions like those that we have engaged in.” (James <p2p-hackers@lists.zooko.com> p2p-hackers Digest, Vol 69, Issue 12)
“The dataset I wish to collect and make public doesn't say anything about what the transactions actually are. The optimizing factors are success rate and transfer rate. [The Slope One and Singular Value Decomposition] algorithms are typically employed for "recommendation systems" such as the one seen on Amazon, i.e. "based on your behavior we think you'll like products X, Y, and Z", where recommendations are driven by a large corpus of user data. I am attempting to perform a similar calculation, except in this case it's "based on my behavior I think I'll like peers X, Y, and Z", and the calculation is driven by a large corpus of peer interaction metadata the system collects and distributes by design.” (Tony <p2p-hackers@lists.zooko.com> p2p-hackers Digest, Vol 69, Issue 12)
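To make the requirements above more concrete, here is a minimal, purely illustrative sketch (Python; all names, weights, and thresholds are invented, not part of any existing PeerPoint code) of how a node could turn collected interaction metadata into a peer score: black/white lists override everything, and otherwise the two optimizing factors quoted above, success rate and transfer rate, are blended with any explicit user rating.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PeerStats:
    """Interaction metadata a node might collect about one peer (hypothetical)."""
    attempts: int = 0
    successes: int = 0
    bytes_transferred: float = 0.0
    seconds_transferring: float = 0.0
    user_rating: Optional[float] = None  # explicit rating/report in [0, 1], if any

    @property
    def success_rate(self) -> float:
        return self.successes / self.attempts if self.attempts else 0.0

    @property
    def transfer_rate(self) -> float:  # bytes per second
        return (self.bytes_transferred / self.seconds_transferring
                if self.seconds_transferring else 0.0)

def peer_score(peer_id: str, stats: PeerStats,
               whitelist: set, blacklist: set,
               max_rate: float = 1_000_000.0) -> float:
    """Score a peer in [0, 1]; white/black lists override metadata entirely."""
    if peer_id in blacklist:
        return 0.0
    if peer_id in whitelist:
        return 1.0
    rate = min(stats.transfer_rate / max_rate, 1.0)   # normalise transfer rate
    base = 0.6 * stats.success_rate + 0.4 * rate      # the two optimizing factors
    if stats.user_rating is not None:
        return (base + stats.user_rating) / 2         # blend in explicit rating
    return base

# Hypothetical usage with made-up numbers.
stats = PeerStats(attempts=20, successes=18,
                  bytes_transferred=5e8, seconds_transferring=600)
print(peer_score("peer-42", stats, whitelist=set(), blacklist={"peer-13"}))
```

A recommendation step (“You may like these peer nodes...”) could then rank unseen peers by the scores that similar nodes report for them; the Slope One sketch further below illustrates one standard way to do that.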

Resources:
PeerTrust: In an open peer-to-peer information system, peers often have to interact with unfamiliar peers and need to manage the risk involved in those interactions. PeerTrust aims to develop a trust mechanism for such systems so that peers can quantify and compare the trustworthiness of other peers and perform trusted interactions based on their past interaction histories, without trusted third parties.
TrustCloud: A Framework for Accountability and Trust in Cloud Computing (hp.com)
Building Trust in P2P Marketplaces (p2pfoundation.net): Transparency is key

On the Web, vast amounts of data are created every day. Most of the companies I examined in my thesis are looking for ways to make this data available and useful to users, for instance by calculating so-called “trust scores” with the help of algorithms. These scores, which are based on data from social networks and other sources (that provide things like damage reports, peer reviews and transaction history) are supposed to help strangers judge each other’s trustworthiness. This information facilitates and accelerates the process of building trust between strangers on the Web. Since you take your trust score with you whatever platform you are on, it encourages good behavior. A person who has worked hard to build up an online reputation will not want to jeopardize that. My research also showed that it is crucial for companies offering these systems to remain as transparent as possible about how their trust scores are derived. Since trust is complex and every platform requires different dimensions of trust, every person should be able to understand the score and decide themselves whether they want to trust a person or not. Being a good driver is very different, for example, from being a friendly and reliable CouchSurfing host.


Another issue with creating trust and identity systems in general is data privacy. The functioning of these trust systems heavily depends on the users’ willingness to give a third party their data in return for building their online reputation, and not everyone is willing to do that. Especially in countries outside the U.S. people seem to be reluctant to reveal their personal data to third parties. It’s thus crucial for companies working on trust systems to gain the trust of users and P2P platforms. As Simon Baumann, PR Spokesperson at the German ride-sharing company Carpooling.com noted, “the question is always, how trustworthy the trust system is.”
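As a purely illustrative sketch of the “trust score” idea described in the excerpt above (all dimensions, weights, and figures below are invented), the following Python snippet aggregates peer reviews, transaction history, and damage reports into a single score while keeping the per-dimension breakdown visible, which supports the transparency point the excerpt makes.

```python
# Illustrative only: a transparent, per-dimension trust score of the kind
# described above. The dimensions, weights, and sample data are invented.
def trust_score(reviews_avg, completed_transactions, damage_reports, weights):
    """Return (overall score in [0, 1], per-dimension breakdown)."""
    dims = {
        "peer_reviews": reviews_avg / 5.0,                        # 0-5 star average
        "transaction_history": min(completed_transactions / 50, 1.0),
        "damage_record": 1.0 / (1 + damage_reports),              # fewer is better
    }
    overall = sum(weights[d] * v for d, v in dims.items()) / sum(weights.values())
    return overall, dims

weights = {"peer_reviews": 3, "transaction_history": 2, "damage_record": 1}
score, breakdown = trust_score(reviews_avg=4.6, completed_transactions=32,
                               damage_reports=0, weights=weights)
print(round(score, 2), breakdown)   # the breakdown is what keeps the score explainable
```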

Breiifly, Breiifly Blog
Legit aims to be the Credit System of the Sharing Economy. We correlate data across marketplaces, creating a holistic picture of a user's reputation. Legit measures real, transaction-based accountability without relying on social network data. Overall behavior improves when everyone is held accountable. Real-time alerts keep you up to date so you can act before damage is caused. Plus, the good reputation that users build on other marketplaces empowers them on yours.
Slope One is a family of algorithms used for collaborative filtering, introduced in a 2005 paper by Daniel Lemire and Anna Maclachlan. Arguably, it is the simplest form of non-trivial item-based collaborative filtering based on ratings. Their simplicity makes it especially easy to implement them efficiently while their accuracy is often on par with more complicated and computationally expensive algorithms. (Wikipedia)
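Since the mailing-list proposal quoted earlier mentions Slope One as one candidate for peer recommendation, here is a minimal sketch of the basic, unweighted variant in Python, run on invented peer ratings; a production implementation would likely use the weighted variant and real interaction metadata.

```python
from collections import defaultdict

def slope_one_predict(ratings, target_user, target_item):
    """Basic (unweighted) Slope One prediction.

    ratings: {user: {item: rating}}. For every other item j the target user
    has rated, estimate target_item's rating as r_uj + dev(target_item, j),
    where dev is the average rating difference over users who rated both.
    """
    diffs, counts = defaultdict(float), defaultdict(int)
    for user_ratings in ratings.values():
        if target_item in user_ratings:
            for j, r_j in user_ratings.items():
                if j != target_item:
                    diffs[j] += user_ratings[target_item] - r_j
                    counts[j] += 1

    estimates = []
    for j, r_uj in ratings[target_user].items():
        if j != target_item and counts[j]:
            estimates.append(r_uj + diffs[j] / counts[j])
    return sum(estimates) / len(estimates) if estimates else None

# Toy data: how three nodes rated peers A, B, C on a 1-5 scale (invented).
ratings = {
    "alice": {"peerA": 5, "peerB": 3, "peerC": 2},
    "bob":   {"peerA": 3, "peerB": 4},
    "carol": {"peerB": 2, "peerC": 5},
}
print(slope_one_predict(ratings, target_user="bob", target_item="peerC"))
```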



