Proof of Identity/Proof of Person : The elephant in the room.

The blockchain is revolutionary because it is the first example of a decentralized, computationally verified consensus that is able to function even in the face of bad actors in the network.

Most proposals and ideas I've seen relating to using crypto for political ends center around enabling democratic principles. The problem is that while the blockchain is quite democratic, it is a democracy of computing power and capital, not of people. That is not the sort of democracy most proponents of these ideas would actually like to see happen.

If we want to use technology as a means of political change or income redistribution without the need for a trusted central party, this is a hurdle we absolutely must overcome.

The priority of CryptoUBI advocates must be developing a form of cryptographic proof of identity.

You might also refer to this as a proof of person.

To implement a technocratic consensus of people, we must answer:

  • What is a person?

  • How do we distinguish one person from another?

  • Can this be done in a decentralized way?

  • Can this be done in an anonymous way?

Once you have this hypothetical PoP, it becomes quite easy to combine it with the financially incentivised distributed ledger of existing blockchains to enable any sort of voting between people that you like. It's quite possible that such a PoP system could even live on an existing blockchain.
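For a sense of why this matters, here is a minimal sketch (purely illustrative, not part of any existing system) of how trivial one-person-one-vote tallying becomes once every participant maps to exactly one verified identity; the `verified_ids` set is assumed to come from whatever PoP scheme we end up with.

```python
# Hypothetical sketch: one-person-one-vote tallying on top of a verified-identity set.
# "verified_ids" is assumed to be produced by some proof-of-person scheme.

def tally(votes, verified_ids):
    """votes: list of (identity, choice) pairs; only the first vote per verified identity counts."""
    seen, counts = set(), {}
    for identity, choice in votes:
        if identity in verified_ids and identity not in seen:
            seen.add(identity)
            counts[choice] = counts.get(choice, 0) + 1
    return counts

# Example: duplicate and unverified votes are ignored.
print(tally([("alice", "yes"), ("alice", "yes"), ("mallory", "no")], {"alice", "bob"}))
# {'yes': 1}
```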

This is not an easy problem, and I'll admit I don't have much in the way of a solution.

In my mind it feels like there is some overlap with a Turing test, but a Turing test requires known good humans. From these two principles it seems like some sort of reputational vouching system would be needed to accomplish this, but maybe it isn't the only way.

Sorry to ramble, but I just found this sub and I look forward to finding others interested in talking about these concepts in serious and practical ways.

level 1
4 points · 5 years ago

I've started to lean toward peer-to-peer authentication based on something like the way PGP works.

You would have decentralized key-signing parties where you verify that everyone there is a real person. You sign your friends' keys, and you sign the keys of businesses you interact with. Basically you are constantly verifying the humanity (or existence, for businesses) of the people you already interact with, and they do the same for you.
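A rough sketch of how such a vouching graph might be checked (purely illustrative; the `MIN_VOUCHES` threshold and all names are assumptions, not anything PGP itself does). Note that it has to be bootstrapped from a founding set of keys, which is exactly the trust concern raised in the replies below.

```python
# Illustrative web-of-trust sketch: a key counts as belonging to a real person
# once enough already-verified keys have vouched (signed) for it.
# MIN_VOUCHES and all names here are assumptions for the sake of the example.

MIN_VOUCHES = 3

class Identity:
    def __init__(self, key_id):
        self.key_id = key_id
        self.signed_by = set()   # key_ids of people who vouched for this key

    def add_vouch(self, signer_key_id):
        self.signed_by.add(signer_key_id)

def is_verified(identity, verified_keys):
    """Enough already-verified keys must have signed this one."""
    return len(identity.signed_by & verified_keys) >= MIN_VOUCHES

def grow_verified_set(identities, founding_keys):
    """Start from a founding set of keys and keep admitting keys until nothing changes."""
    verified = set(founding_keys)
    changed = True
    while changed:
        changed = False
        for ident in identities.values():
            if ident.key_id not in verified and is_verified(ident, verified):
                verified.add(ident.key_id)
                changed = True
    return verified
```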

level 2
6 points · 5 years ago

Yes, but you also have to prove that a person is exactly (or at least, no more than) one single person. I'm not sure PGP works like this... I could claim several identities with several keys and get them all signed at key-signing parties, especially if I had the help of a government or a fake real-world ID.

I think we actually need a distributed identity store too... it's a much harder problem.

level 2
Original Poster2 points · 5 years ago

This is what my mind always gravitates towards as well.

Some sort of chain of humanity verification.

One problem I see with this approach, though: it seems like this structure requires some trust in at least the founding person of the chain?

level 1
6 points · 5 years ago

As a passionate bitcoin user, I was thinking about this the other day. This is not really a problem with computers; it's a problem with physics or biology. How do you identify a person? Probably the only way to do that is by using something like DNA (although twin brothers/sisters have the same DNA). You can't use things like eye color or skin color because they are easy to fake. So we need to find something that is not only unique but also hard to fake. Unfortunately DNA is very easy to fake (all you need is someone's hair). What else is there, really? The system we have now with physical ID cards is centralized. Perhaps some of the properties these centralized services carry, such as the ability to use a photo and/or fingerprint, might come in useful. But overall, the system of "identifying people" is not 100% accurate. If you think about it, the only thing we have is the photo: if you can find someone else's ID card with a photo similar to yourself, you can easily pass yourself off as that person.

So I'm not sure there exists a way to do this from a physical/biological point of view. And doing this in a decentralized digital system is even harder.

level 2
Original Poster1 point · 5 years ago

Yeah, I have the same concerns. I think one potential out for us is that it may be possible to use some sort of social proof of uniqueness.

A Turing test itself is somewhat democratic: once we can differentiate A from B, we can apply that to run distributed Turing tests.

So yes, uniqueness is the key, and at first glance it seems like biological means may be necessary to that end.

level 2
1 point · 5 years ago

Easier: just give people ID cards and tell them to send in several photos of themselves or take a live video interview.

level 1
Comment deleted by user · 5 years ago
level 2
3 points · 5 years ago · edited 5 years ago

This idea looks like it would work very well, the only issue being that not everyone wants to submit their real identity in the first place. This could hold back the adoption of the system.
 
As a potential solution, the above system could be set up alongside an anonymous one, giving the user the ability to decide how they would like to authenticate: those more worried about privacy can do so anonymously, while those who aren't can submit their personal information directly.
 
On the anonymous end, a fixed-interval system could be utilized (a rough sketch of the rules follows at the end of this comment).
 
1. The account holder activates their account for a specific period of time (e.g. one month).
2. The authentication can only occur once over this fixed period (i.e. once per month).
3. The authentication date is fixed for all individuals across the entire network (e.g. 6:00 AM on the first Monday of every month).
4. The time to authenticate is a fixed window for all individuals across the entire network (e.g. 30 minutes).
5. The authentication task takes at least 51% of the allotted time to complete (15.3 minutes).
    a. A higher percentage means greater integrity, with a reduced likelihood of multiple accounts being activated over the period, but is less forgiving of people who are late.
6. Once complete, the user is authenticated to use the account for the fixed period (one month).
 
Bots can be weeded out by requiring captcha-like tasks (the http://areyouahuman.com/ games come to mind). We can, by testing a large sample, determine how long these tasks take to complete and adjust the number of tasks and the length of the authentication window accordingly. This could potentially be done during a trial period.
 
This method would be prohibitive if done alone, but coupled with your system we could have the best of both worlds.
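A rough sketch of the timing rules above (the 30-minute window and 51% threshold come straight from the list; the datetime handling and function names are just illustrative assumptions):

```python
# Sketch of the fixed-interval authentication rules described in the list above.
from datetime import datetime, timedelta

WINDOW_LENGTH = timedelta(minutes=30)   # fixed authentication window for everyone
MIN_TASK_FRACTION = 0.51                # the task must take at least 51% of the window

def attempt_is_valid(window_start, task_started, task_finished, already_authenticated):
    window_end = window_start + WINDOW_LENGTH
    min_duration = WINDOW_LENGTH * MIN_TASK_FRACTION   # 15.3 minutes

    if already_authenticated:
        return False    # authentication can only happen once per period
    if not (window_start <= task_started and task_finished <= window_end):
        return False    # the task must start and finish inside the shared window
    if task_finished - task_started < min_duration:
        return False    # the task must consume at least 51% of the window
    return True

# Example: a 16-minute task inside the 6:00 AM, first-Monday-of-the-month window passes.
start = datetime(2015, 3, 2, 6, 0)
print(attempt_is_valid(start, start + timedelta(minutes=2),
                       start + timedelta(minutes=18), already_authenticated=False))
# True
```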

level 1
Comment deleted by user · 5 years ago
level 2
2 points · 5 years ago

But how would you stop someone from creating another fake identity by changing some of the properties you describe? Identity must be unique.

level 1
3 points · 5 years ago

The Elephant in the room indeed!

I am far from an expert, but this feels as challenging as solving the "Byzantine generals problem" was for bitcoin... or maybe even much more complicated, since it extends to the physical world at the same time?!

I've been trying to wrap my head around it for a bit now, and all I can come up with involves some form of centralized checkpoints and/or manual work by "trusted" individuals... well... and that again enters the vicious cycle of trusting checkpoints and individuals... *sigh*

TL;DR: I don't have any constructive contribution

level 2
Original Poster1 point · 5 years ago

> I am far from an expert, but this feels as challenging as solving the "Byzantine generals problem" was for bitcoin

Absolutely, that is a very constructive summary of the insight I tried to describe in this post.

> all I can come up with involves some form of centralized checkpoints and/or manual work by "trusted" individuals... well...

Yeah, same here (but I haven't been thinking about that problem as much as the others and the bigger picture). But I would point out that we can't let the perfect be the enemy of the good, and for the purposes of getting a UBI going it might well make sense to piggyback on Statist identification systems.

"Violence, even well intentioned, always rebounds upon oneself." - Lao Tzu

level 1
2 points · 4 years ago · edited 4 years ago

Hey go1dfish, I'm glad that I stumbled upon this post! I've been searching the interwebs for a solution to this problem via blockchain technology. While I'm not the most technically savvy person, I'd like to brainstorm a bit. I wanted to run a competition much like http://www.reddit.com/r/millionairemakers/, but wanted to open it up for the whole world and not just Reddit to enjoy. I think it could scale into something much more meaningful than what it is now. While it's not exactly like your FairShare, we have similar problems. It's absolutely critical that we mitigate fraud in order to keep the integrity of the system(s). Without integrity we have nothing.

Main Problem: How do we prevent people from starting multiple accounts or selling their identities? Next, we need to design a system that is impenetrable to hackers, and on top of all that we need to do it really cheaply. Then we need to scale it.

Solution: If we follow Satoshi's lead, all we really need to do is make it more lucrative for a node to act honestly than to become a bad actor. Adding an identity verification layer on top of the blockchain is a fresh idea (the ethos of it is debatable), but I don't think it is a silver bullet for our problem. Basically, the only reason we would do this is to prevent a double spend; in our case, the equivalent of a double spend would be a multi-accounter, which would be rejected by the network.

Another problem is that we want to make our systems open to anyone with an internet connection (or would it be better with just a phone?). Next we want to design it in such a way that an honest user doesn't have too much trouble signing up.

It's been pointed out that this may require an innovation much like solving the Byzantine Generals problem, and I happen to agree. Short of that innovation we can only mitigate fraud in layers. Here is what I propose:


Step 1. Alice is required to take a picture of herself holding her government-issued ID up to her face with our in-app camera.

Step 2. We assign random numbers and letters to each point on Alice's face (not exactly sure how many points are needed; let's just say 256 different points). Since there is only one Alice in the world, this string is unique to Alice, and its hash can serve as her unique identifier.

Step 3. Our in-app camera compares the two pictures: Alice's ID and Alice's photo. We can reasonably expect that the two pictures will not look exactly the same (e.g. Alice has dyed her hair since she took the government ID picture). To compensate for this, we come up with a threshold of inconsistencies between the two pictures (e.g. 1-15 points) that allows the program to accept or deny. If the differences are under 15, the program returns a 1; if there are more dissimilarities, it returns a 0. If the pictures look too similar, that may count as a red flag.

Step 4. Alice's unique identifier is hashed into a bitcoin address and she is issued a corresponding private key.

Step 5. We notify Alice to transfer an arbitrary amount of bitcoin to her public key, and the blockchain timestamps it.

Step 6. In order to receive any funds on our website, they have to come from that public key, which Alice only needs her private key to access.

Step 7. If Alice ever tries to sign up for another account, she must (1) obtain another ID and (2) impersonate that person. If we can design a program that recognizes this and returns a value within the threshold, then Alice is rejected. Perhaps the blockchain can be that extra layer of identity verification to fall back on?

Step 8. Any pictures must be taken with our in-app camera. This camera must be designed to detect fraud (e.g. simply holding two pictures side by side instead of taking a fresh one next to an ID).

Step 9. Each time Alice is rejected, her computer is forced to compute a proof-of-work function whose difficulty is multiplied by a power of 10.

(A rough sketch of the matching and penalty logic appears at the end of this comment.)


Another thing we could do rather cheaply is add an extra layer by asking questions only the user would know, much like you would when signing up for a bank account: things such as a past address, a parent's middle name, etc.

OK, now that I've written that down it seems like it doesn't entirely make sense, but I think a possible solution might be close to that. One thing is for sure: any account creation done this way is highly visible, so any fraud would be easier to investigate. I'm simply brainstorming here, so any feedback would be more than appreciated. Even if something like this were implemented, we would still have the problem that some users can obtain IDs from corrupt governments more easily than others. I think completely eliminating fraud is not doable, but mitigating it is.
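As promised above, here is a rough sketch of the matching and penalty logic from those steps (the 256 points and the 1-15 threshold come from the proposal; the facial-landmark extraction itself is stubbed out, since that is the genuinely hard part, and every name here is an illustrative assumption):

```python
# Sketch of the threshold comparison (steps 2-3), the identifier hash (step 4),
# and the escalating proof-of-work penalty (step 9) from the proposal above.
import hashlib

NUM_POINTS = 256        # facial points per image, per step 2
MAX_DIFFERENCES = 15    # accept if at most this many points disagree (step 3)
MIN_DIFFERENCES = 1     # zero differences ("too similar") is treated as a red flag

def face_points(image_bytes):
    """Placeholder: a real system would run facial-landmark detection here."""
    raise NotImplementedError

def count_differences(points_a, points_b):
    return sum(1 for a, b in zip(points_a, points_b) if a != b)

def accept(selfie_points, id_photo_points):
    diff = count_differences(selfie_points, id_photo_points)
    return MIN_DIFFERENCES <= diff <= MAX_DIFFERENCES

def unique_identifier(points):
    """Hash the point string into a stable identifier (step 4)."""
    return hashlib.sha256("".join(map(str, points)).encode()).hexdigest()

def retry_work_factor(rejections):
    """Step 9: each rejection multiplies the required proof-of-work by 10."""
    return 10 ** rejections
```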

level 2
Original Poster2 points · 4 years ago

Glad to have you aboard. It might be time to start up a new thread specifically for the Proof of Entitlement problem over at r/FairShare and organize the thoughts on it.

Would you like to make such a post?

This thread is getting pretty old and even predates the FairShare concept/name

level 1
2 points · 5 years ago

What if wallets were restricted? In order for a wallet to be created, a request must be submitted to the blockchain, signed by a trusted authority (admittedly, the weakest link here). The person generating the block has to acknowledge the authority, and if they do, the wallet gets created. If a trusted authority goes rogue, then someone generating blocks has the ability to flag that authority, and if multiple flags appear, the wallets generated by that source are invalidated.
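A rough sketch of how that flagging scheme might behave (the flag threshold, class names, and stubbed signature check are all illustrative assumptions, not a proposal for a specific chain):

```python
# Sketch of restricted wallet creation: each wallet must be approved by a trusted
# authority, and an authority that collects enough flags from block producers is
# revoked, invalidating every wallet it approved.

FLAG_THRESHOLD = 3   # illustrative; how many independent flags revoke an authority

class WalletRegistry:
    def __init__(self, trusted_authorities):
        self.trusted = set(trusted_authorities)
        self.flags = {}      # authority -> set of block producers who flagged it
        self.wallets = {}    # wallet_id -> approving authority

    def create_wallet(self, wallet_id, authority, signature_ok):
        # signature_ok stands in for real signature verification by the block producer
        if authority in self.trusted and signature_ok:
            self.wallets[wallet_id] = authority
            return True
        return False

    def flag_authority(self, authority, block_producer):
        self.flags.setdefault(authority, set()).add(block_producer)
        if len(self.flags[authority]) >= FLAG_THRESHOLD:
            self.revoke(authority)

    def revoke(self, authority):
        self.trusted.discard(authority)
        # invalidate every wallet approved by the rogue authority
        self.wallets = {w: a for w, a in self.wallets.items() if a != authority}
```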

level 2
Original Poster3 points · 5 years ago

Yeah, this feels like a bit of a stopgap to me.

Here you end up with a hybrid of technocracy and existing bureaucracy.

Likely better than what we have now, but certainly not as revolutionary as a completely decentralized and automated network that can operate without having to trust other actors.
