Articles published by heyuan - 六币之门
How do trusted setups work?
2022 Mar 14

Necessary background: elliptic curves and elliptic curve pairings. See also: Dankrad Feist's article on KZG polynomial commitments. Special thanks to Justin Drake, Dankrad Feist and Chih-Cheng Liang for feedback and review.

Many cryptographic protocols, especially in the areas of data availability sampling and ZK-SNARKs, depend on trusted setups. A trusted setup ceremony is a procedure that is done once to generate a piece of data that must then be used every time some cryptographic protocol is run. Generating this data requires some secret information; the "trust" comes from the fact that some person or some group of people has to generate these secrets, use them to generate the data, and then publish the data and forget the secrets. But once the data is generated, and the secrets are forgotten, no further participation from the creators of the ceremony is required.

There are many types of trusted setups. The earliest instance of a trusted setup being used in a major protocol is the original Zcash ceremony in 2016. This ceremony was very complex, and required many rounds of communication, so it could only have six participants. Everyone using Zcash at that point was effectively trusting that at least one of the six participants was honest. More modern protocols usually use the powers-of-tau setup, which has a 1-of-N trust model with \(N\) typically in the hundreds. That is to say, hundreds of people participate in generating the data together, and only one of them needs to be honest and not publish their secret for the final output to be secure. Well-executed setups like this are often considered "close enough to trustless" in practice.

This article will explain how the KZG setup works, why it works, and the future of trusted setup protocols. Anyone proficient in code should also feel free to follow along this code implementation: https://github.com/ethereum/research/blob/master/trusted_setup/trusted_setup.py.

What does a powers-of-tau setup look like?

A powers-of-tau setup is made up of two series of elliptic curve points that look as follows:

\([G_1, G_1 * s, G_1 * s^2 ... G_1 * s^{n_1 - 1}]\)
\([G_2, G_2 * s, G_2 * s^2 ... G_2 * s^{n_2 - 1}]\)

\(G_1\) and \(G_2\) are the standardized generator points of the two elliptic curve groups; in BLS12-381, \(G_1\) points are (in compressed form) 48 bytes long and \(G_2\) points are 96 bytes long. \(n_1\) and \(n_2\) are the lengths of the \(G_1\) and \(G_2\) sides of the setup. Some protocols require \(n_2 = 2\), others require \(n_1\) and \(n_2\) to both be large, and some are in the middle (eg. Ethereum's data availability sampling in its current form requires \(n_1 = 4096\) and \(n_2 = 16\)). \(s\) is the secret that is used to generate the points, and needs to be forgotten.

To make a KZG commitment to a polynomial \(P(x) = \sum_i c_i x^i\), we simply take a linear combination \(\sum_i c_i S_i\), where \(S_i = G_1 * s^i\) (the elliptic curve points in the trusted setup). The \(G_2\) points in the setup are used to verify evaluations of polynomials that we make commitments to; I won't go into verification here in more detail, though Dankrad does in his post.

Intuitively, what value is the trusted setup providing?

It's worth understanding what is philosophically going on here, and why the trusted setup is providing value. A polynomial commitment is committing to a piece of size-\(N\) data with a size \(O(1)\) object (a single elliptic curve point).
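To make the commitment formula concrete, here is a minimal sketch, assuming the py_ecc library's bls12_381 module (its G1, add, multiply and curve_order exports). The secret s appears explicitly only because we are generating a toy setup locally; in a real ceremony nobody would ever hold the full secret:

```python
# Toy powers-of-tau setup plus a KZG commitment to P(x) = 3x^3 + 8x^2 + 2x + 6.
# Assumes py_ecc (pip install py_ecc); py_ecc represents the point at infinity as None.
from py_ecc.bls12_381 import G1, add, multiply, curve_order

s = 190238479017  # toy secret; a real ceremony forgets this and publishes only the points
n1 = 4

# G1 side of the setup: [G1, G1*s, G1*s^2, G1*s^3]
setup_g1 = [multiply(G1, pow(s, i, curve_order)) for i in range(n1)]

def commit(coeffs, setup):
    """KZG commitment: the single curve point sum_i c_i * S_i."""
    result = None  # identity element
    for c, S in zip(coeffs, setup):
        result = add(result, multiply(S, c % curve_order))
    return result

commitment = commit([6, 2, 8, 3], setup_g1)  # coefficients in order c_0 .. c_3
print(commitment)
```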
We could do this with a plain Pedersen commitment: just set the \(S_i\) values to be \(N\) random elliptic curve points that have no known relationship with each other, and commit to polynomials with \(\sum_i c_i S_i\) as before. And in fact, this is exactly what IPA evaluation proofs do.

However, any IPA-based proofs take \(O(N)\) time to verify, and there's an unavoidable reason why: a commitment to a polynomial \(P(x)\) using the base points \([S_0, S_1 ... S_i ... S_{N-1}]\) would commit to a different polynomial if we use the base points \([S_0, S_1 ... (S_i * 2) ... S_{N-1}]\). A valid commitment to the polynomial \(3x^3 + 8x^2 + 2x + 6\) under one set of base points is a valid commitment to \(3x^3 + 4x^2 + 2x + 6\) under a different set of base points. If we want to make an IPA-based proof for some statement (say, that this polynomial evaluated at \(x = 10\) equals \(3826\)), the proof should pass with the first set of base points and fail with the second. Hence, whatever the proof verification procedure is cannot avoid somehow taking into account each and every one of the \(S_i\) values, and so it unavoidably takes \(O(N)\) time.

But with a trusted setup, there is a hidden mathematical relationship between the points. It's guaranteed that \(S_{i+1} = s * S_i\), with the same factor \(s\) between any two adjacent points. If \([S_0, S_1 ... S_i ... S_{N-1}]\) is a valid setup, the "edited setup" \([S_0, S_1 ... (S_i * 2) ... S_{N-1}]\) cannot also be a valid setup. Hence, we don't need \(O(N)\) computation; instead, we take advantage of this mathematical relationship to verify anything we need to verify in constant time.

However, the mathematical relationship has to remain secret: if \(s\) is known, then anyone could come up with a commitment that stands for many different polynomials: if \(C\) commits to \(P(x)\), it also commits to \(\frac{P(x) * x}{s}\), or \(P(x) - x + s\), or many other things. This would completely break all applications of polynomial commitments. Hence, while some secret \(s\) must have existed at one point to make possible the mathematical link between the \(S_i\) values that enables efficient verification, the \(s\) must also have been forgotten.

How do multi-participant setups work?

It's easy to see how one participant can generate a setup: just pick a random \(s\), and generate the elliptic curve points using that \(s\). But a single-participant trusted setup is insecure: you have to trust one specific person!

The solution to this is multi-participant trusted setups, where by "multi" we mean a lot of participants: over 100 is normal, and for smaller setups it's possible to get over 1000. Here is how a multi-participant powers-of-tau setup works. Take an existing setup (note that you don't know \(s\), you just know the points):

\([G_1, G_1 * s, G_1 * s^2 ... G_1 * s^{n_1 - 1}]\)
\([G_2, G_2 * s, G_2 * s^2 ... G_2 * s^{n_2 - 1}]\)

Now, choose your own random secret \(t\). Compute:

\([G_1, (G_1 * s) * t, (G_1 * s^2) * t^2 ... (G_1 * s^{n_1 - 1}) * t^{n_1 - 1}]\)
\([G_2, (G_2 * s) * t, (G_2 * s^2) * t^2 ... (G_2 * s^{n_2 - 1}) * t^{n_2 - 1}]\)

Notice that this is equivalent to:

\([G_1, G_1 * (st), G_1 * (st)^2 ... G_1 * (st)^{n_1 - 1}]\)
\([G_2, G_2 * (st), G_2 * (st)^2 ... G_2 * (st)^{n_2 - 1}]\)

That is to say, you've created a valid setup with the secret \(s * t\)! You never give your \(t\) to the previous participants, and the previous participants never give you their secrets that went into \(s\). And as long as any one of the participants is honest and does not reveal their part of the secret, the combined secret does not get revealed. In particular, finite fields have the property that if you know \(s\) but not \(t\), and \(t\) is securely randomly generated, then you know nothing about \(s * t\)!
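A rough sketch of a single participant's step, under the same py_ecc assumption as above: you receive the current points (without knowing s), mix in a fresh secret t, and publish the updated points plus the small participation proof described in the next section:

```python
# One participant's contribution to a powers-of-tau ceremony (toy sketch, py_ecc).
# The i-th point is multiplied by t^i, turning a setup with secret s into one with secret s*t.
import secrets
from py_ecc.bls12_381 import G2, multiply, curve_order

def contribute(setup_g1, setup_g2):
    t = secrets.randbelow(curve_order - 2) + 1  # your secret: use it once, then delete it
    new_g1 = [multiply(P, pow(t, i, curve_order)) for i, P in enumerate(setup_g1)]
    new_g2 = [multiply(P, pow(t, i, curve_order)) for i, P in enumerate(setup_g2)]
    # Participation proof: the G1*s point you received, and G2*t for the secret you added.
    proof = (setup_g1[1], multiply(G2, t))
    return new_g1, new_g2, proof
```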
Verifying the setup

To verify that each participant actually participated, each participant can provide a proof that consists of (i) the \(G_1 * s\) point that they received and (ii) \(G_2 * t\), where \(t\) is the secret that they introduce. The list of these proofs can be used to verify that the final setup combines together all the secrets (as opposed to, say, the last participant just forgetting the previous values and outputting a setup with just their own secret, which they keep so they can cheat in any protocols that use the setup). Here, \(s_1\) is the first participant's secret, \(s_2\) is the second participant's secret, and so on. The pairing check at each step proves that the setup at each step actually came from a combination of the setup at the previous step and a new secret known by the participant at that step.

Each participant should reveal their proof on some publicly verifiable medium (eg. personal website, transaction from their .eth address, Twitter). Note that this mechanism does not prevent someone from claiming to have participated at some index where someone else has (assuming that other person has revealed their proof), but it's generally considered that this does not matter: if someone is willing to lie about having participated, they would also be willing to lie about having deleted their secret. As long as at least one of the people who publicly claim to have participated is honest, the setup is secure.

In addition to the above check, we also want to verify that all the powers in the setup are correctly constructed (ie. they're powers of the same secret). To do this, we could do a series of pairing checks, verifying that \(e(S_{i+1}, G_2) = e(S_i, T_1)\) (where \(T_1\) is the \(G_2 * s\) value in the setup) for every \(i\). This verifies that the factor between each \(S_i\) and \(S_{i+1}\) is the same as the factor between \(T_1\) and \(G_2\). We can then do the same on the \(G_2\) side.

But that's a lot of pairings and is expensive. Instead, we take a random linear combination \(L_1 = \sum_{i=0}^{n-2} r_i S_i\), and the same linear combination shifted by one: \(L_2 = \sum_{i=0}^{n-2} r_i S_{i+1}\). We use a single pairing check to verify that they match up: \(e(L_2, G_2) = e(L_1, T_1)\).

We can even combine the process for the \(G_1\) side and the \(G_2\) side together: in addition to computing \(L_1\) and \(L_2\) as above, we also compute \(L_3 = \sum_{i=0}^{n-2} q_i T_i\) (\(q_i\) is another set of random coefficients) and \(L_4 = \sum_{i=0}^{n-2} q_i T_{i+1}\), and check \(e(L_2, L_3) = e(L_1, L_4)\).
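A sketch of the batched consistency check on the \(G_1\) side, again assuming py_ecc (whose pairing(Q, P) takes the \(G_2\) point first); the non-optimized pairing is slow, so this is only practical for toy sizes:

```python
# Check that all G1 points in the setup are powers of the same secret, using one pairing
# equation e(L2, G2) == e(L1, T1) over a random linear combination (toy sketch, py_ecc).
import secrets
from py_ecc.bls12_381 import G2, add, multiply, pairing, curve_order

def check_g1_side(setup_g1, T1):
    """T1 is the G2 * s element of the setup (i.e. setup_g2[1])."""
    n = len(setup_g1)
    L1, L2 = None, None  # running linear combinations (None = point at infinity)
    for i in range(n - 1):
        r = secrets.randbelow(curve_order)
        L1 = add(L1, multiply(setup_g1[i], r))      # L1 = sum r_i * S_i
        L2 = add(L2, multiply(setup_g1[i + 1], r))  # L2 = sum r_i * S_{i+1}
    # If S_{i+1} = s * S_i for every i, then L2 = s * L1 and the two pairings agree.
    return pairing(G2, L2) == pairing(T1, L1)
```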
Setups in Lagrange form

In many use cases, you don't want to work with polynomials in coefficient form (eg. \(P(x) = 3x^3 + 8x^2 + 2x + 6\)), you want to work with polynomials in evaluation form (eg. \(P(x)\) is the polynomial that evaluates to \([19, 146, 9, 187]\) on the domain \([1, 189, 336, 148]\) modulo 337). Evaluation form has many advantages (eg. you can multiply and sometimes divide polynomials in \(O(N)\) time) and you can even use it to evaluate in \(O(N)\) time. In particular, data availability sampling expects the blobs to be in evaluation form.

To work with these cases, it's often convenient to convert the trusted setup to evaluation form. This would allow you to take the evaluations (\([19, 146, 9, 187]\) in the above example) and use them to compute the commitment directly. This is done most easily with a Fast Fourier transform (FFT), but passing the curve points as input instead of numbers. I'll avoid repeating a full detailed explanation of FFTs here, but here is an implementation; it is actually not that difficult.

The future of trusted setups

Powers-of-tau is not the only kind of trusted setup out there. Some other notable (actual or potential) trusted setups include:

- The more complicated setups in older ZK-SNARK protocols (eg. see here), which are sometimes still used (particularly Groth16) because verification is cheaper than PLONK.
- Some cryptographic protocols (eg. DARK) depend on hidden-order groups, groups where it is not known what number an element can be multiplied by to get the zero element. Fully trustless versions of this exist (see: class groups), but by far the most efficient version uses RSA groups (powers of \(x\) mod \(n = pq\) where \(p\) and \(q\) are not known). Trusted setup ceremonies for this with 1-of-n trust assumptions are possible, but are very complicated to implement.
- If/when indistinguishability obfuscation becomes viable, many protocols that depend on it will involve someone creating and publishing an obfuscated program that does something with a hidden internal secret. This is a trusted setup: the creator(s) would need to possess the secret to create the program, and would need to delete it afterwards.

Cryptography continues to be a rapidly evolving field, and how important trusted setups are could easily change. It's possible that techniques for working with IPAs and Halo-style ideas will improve to the point where KZG becomes outdated and unnecessary, or that quantum computers will make anything based on elliptic curves non-viable ten years from now and we'll be stuck working with trusted-setup-free hash-based protocols. It's also possible that what we can do with KZG will improve even faster, or that a new area of cryptography will emerge that depends on a different kind of trusted setup.

To the extent that trusted setup ceremonies are necessary, it is important to remember that not all trusted setups are created equal. 176 participants is better than 6, and 2000 would be even better. A ceremony small enough that it can be run inside a browser or phone application (eg. the ZKopru setup is web-based) could attract far more participants than one that requires running a complicated software package. Every ceremony should ideally have participants running multiple independently built software implementations and running different operating systems and environments, to reduce common mode failure risks. Ceremonies that require only one round of interaction per participant (like powers-of-tau) are far better than multi-round ceremonies, both due to the ability to support far more participants and due to the greater ease of writing multiple implementations. Ceremonies should ideally be universal (the output of one ceremony being able to support a wide range of protocols). These are all things that we can and should keep working on, to ensure that trusted setups can be as secure and as trusted as possible.
Encapsulated vs systemic complexity in protocol design
2022 Feb 28

One of the main goals of Ethereum protocol design is to minimize complexity: make the protocol as simple as possible, while still making a blockchain that can do what an effective blockchain needs to do. The Ethereum protocol is far from perfect at this, especially since much of it was designed in 2014-16 when we understood much less, but we nevertheless make an active effort to reduce complexity whenever possible.

One of the challenges of this goal, however, is that complexity is difficult to define, and sometimes, you have to trade off between two choices that introduce different kinds of complexity and have different costs. How do we compare?

One powerful intellectual tool that allows for more nuanced thinking about complexity is to draw a distinction between what we will call encapsulated complexity and systemic complexity. Encapsulated complexity occurs when there is a system with sub-systems that are internally complex, but that present a simple "interface" to the outside. Systemic complexity occurs when the different parts of a system can't even be cleanly separated, and have complex interactions with each other. Here are a few examples.

BLS signatures vs Schnorr signatures

BLS signatures and Schnorr signatures are two popular types of cryptographic signature schemes that can be made with elliptic curves. BLS signatures appear mathematically very simple:

Signing: \(\sigma = H(m) * k\)
Verifying: \(e([1], \sigma) \stackrel{?}{=} e(H(m), K)\)

\(H\) is a hash function, \(m\) is the message, and \(k\) and \(K\) are the private and public keys. So far, so simple. However, the true complexity is hidden inside the definition of the \(e\) function: elliptic curve pairings, one of the most devilishly hard-to-understand pieces of math in all of cryptography.

Now, consider Schnorr signatures. Schnorr signatures rely only on basic elliptic curves. But the signing and verification logic is somewhat more complex. So... which type of signature is "simpler"? It depends what you care about! BLS signatures have a huge amount of technical complexity, but the complexity is all buried within the definition of the \(e\) function. If you treat the \(e\) function as a black box, BLS signatures are actually really easy. Schnorr signatures, on the other hand, have less total complexity, but they have more pieces that could interact with the outside world in tricky ways. For example:

- Doing a BLS multi-signature (a combined signature from two keys \(k_1\) and \(k_2\)) is easy: just take \(\sigma_1 + \sigma_2\). But a Schnorr multi-signature requires two rounds of interaction, and there are tricky key cancellation attacks that need to be dealt with.
- Schnorr signatures require random number generation, BLS signatures do not.

Elliptic curve pairings in general are a powerful "complexity sponge" in that they contain large amounts of encapsulated complexity, but enable solutions with much less systemic complexity. This is also true in the area of polynomial commitments: compare the simplicity of KZG commitments (which require pairings) to the much more complicated internal logic of inner product arguments (which do not).
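As a rough illustration of how little surface area BLS exposes once the pairing is treated as a black box, here is a sketch assuming py_ecc's bls12_381 module; the hash-to-point function below is a deliberately naive stand-in (a real scheme needs a proper hash-to-curve), so this is for intuition only:

```python
# Toy BLS signing, verification and naive aggregation (py_ecc; pairings here are slow).
# toy_hash_to_g1 is NOT a real hash-to-curve; it only serves to illustrate the
# verification equation e(sigma, G2) == e(H(m), K).
import hashlib
from py_ecc.bls12_381 import G1, G2, add, multiply, pairing, curve_order

def toy_hash_to_g1(message: bytes):
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % curve_order
    return multiply(G1, h)  # insecure stand-in for hash-to-curve

def keygen(k: int):
    return k % curve_order, multiply(G2, k % curve_order)   # (private key, public key)

def sign(k, message):
    return multiply(toy_hash_to_g1(message), k)              # sigma = H(m) * k

def verify(K, message, sigma):
    return pairing(G2, sigma) == pairing(K, toy_hash_to_g1(message))

k1, K1 = keygen(12345)
k2, K2 = keygen(67890)
msg = b"encapsulated complexity"
# Aggregation over the same message really is just point addition:
agg_sig = add(sign(k1, msg), sign(k2, msg))
agg_key = add(K1, K2)
print(verify(agg_key, msg, agg_sig))  # True
```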
Cryptography vs cryptoeconomics

One important design choice that appears in many blockchain designs is that of cryptography versus cryptoeconomics. Often (eg. in rollups) this comes in the form of a choice between validity proofs (aka. ZK-SNARKs) and fraud proofs.

ZK-SNARKs are complex technology. While the basic ideas behind how they work can be explained in a single post, actually implementing a ZK-SNARK to verify some computation involves many times more complexity than the computation itself (hence why ZK-SNARKs for the EVM are still under development while fraud proofs for the EVM are already in the testing stage). Implementing a ZK-SNARK effectively involves circuit design with special-purpose optimization, working with unfamiliar programming languages, and many other challenges. Fraud proofs, on the other hand, are inherently simple: if someone makes a challenge, you just directly run the computation on-chain. For efficiency, a binary-search scheme is sometimes added, but even that doesn't add too much complexity.

But while ZK-SNARKs are complex, their complexity is encapsulated complexity. The relatively light complexity of fraud proofs, on the other hand, is systemic. Here are some examples of systemic complexity that fraud proofs introduce:

- They require careful incentive engineering to avoid the verifier's dilemma.
- If done in-consensus, they require extra transaction types for the fraud proofs, along with reasoning about what happens if many actors compete to submit a fraud proof at the same time.
- They depend on a synchronous network.
- They allow censorship attacks to be also used to commit theft.
- Rollups based on fraud proofs require liquidity providers to support instant withdrawals.

For these reasons, even from a complexity perspective purely cryptographic solutions based on ZK-SNARKs are likely to be long-run safer: ZK-SNARKs have more complicated parts that some people have to think about, but they have fewer dangling caveats that everyone has to think about.

Miscellaneous examples

- Proof of work (Nakamoto consensus) - low encapsulated complexity, as the mechanism is extremely simple and easy to understand, but higher systemic complexity (eg. selfish mining attacks).
- Hash functions - high encapsulated complexity, but very easy-to-understand properties so low systemic complexity.
- Random shuffling algorithms - shuffling algorithms can either be internally complicated (as in Whisk) but lead to easy-to-understand guarantees of strong randomness, or internally simpler but lead to randomness properties that are weaker and more difficult to analyze (systemic complexity).
- Miner extractable value (MEV) - a protocol that is powerful enough to support complex transactions can be fairly simple internally, but those complex transactions can have complex systemic effects on the protocol's incentives by contributing to the incentive to propose blocks in very irregular ways.
- Verkle trees - Verkle trees do have some encapsulated complexity, in fact quite a bit more than plain Merkle hash trees. Systemically, however, Verkle trees present the exact same relatively clean-and-simple interface of a key-value map. The main systemic complexity "leak" is the possibility of an attacker manipulating the tree to make a particular value have a very long branch; but this risk is the same for both Verkle trees and Merkle trees.

How do we make the tradeoff?

Often, the choice with less encapsulated complexity is also the choice with less systemic complexity, and so there is one choice that is obviously simpler. But at other times, you have to make a hard choice between one type of complexity and the other. What should be clear at this point is that complexity is less dangerous if it is encapsulated.
The risks from complexity of a system are not a simple function of how long the specification is; a small 10-line piece of the specification that interacts with every other piece adds more complexity than a 100-line function that is otherwise treated as a black box.

However, there are limits to this approach of preferring encapsulated complexity. Software bugs can occur in any piece of code, and as it gets bigger the probability of a bug approaches 1. Sometimes, when you need to interact with a sub-system in an unexpected and new way, complexity that was originally encapsulated can become systemic.

One example of the latter is Ethereum's current two-level state tree, which features a tree of account objects, where each account object in turn has its own storage tree. This tree structure is complex, but at the beginning the complexity seemed to be well-encapsulated: the rest of the protocol interacts with the tree as a key/value store that you can read and write to, so we don't have to worry about how the tree is structured.

Later, however, the complexity turned out to have systemic effects: the ability of accounts to have arbitrarily large storage trees meant that there was no way to reliably expect a particular slice of the state (eg. "all accounts starting with 0x1234") to have a predictable size. This makes it harder to split up the state into pieces, complicating the design of syncing protocols and attempts to distribute the storage process. Why did encapsulated complexity become systemic? Because the interface changed. The fix? The current proposal to move to Verkle trees also includes a move to a well-balanced single-layer design for the tree.

Ultimately, which type of complexity to favor in any given situation is a question with no easy answers. The best that we can do is to have an attitude of moderately favoring encapsulated complexity, but not too much, and exercise our judgement in each specific case. Sometimes, a sacrifice of a little bit of systemic complexity to allow a great reduction of encapsulated complexity really is the best thing to do. And other times, you can even misjudge what is encapsulated and what isn't. Each situation is different.
Soulbound
2022 Jan 26

One feature of World of Warcraft that is second nature to its players, but goes mostly undiscussed outside of gaming circles, is the concept of soulbound items. A soulbound item, once picked up, cannot be transferred or sold to another player.

Most very powerful items in the game are soulbound, and typically require completing a complicated quest or killing a very powerful monster, usually with the help of anywhere from four to thirty-nine other players. Hence, in order to get your character anywhere close to having the best weapons and armor, you have no choice but to participate in killing some of these extremely difficult monsters yourself. The purpose of the mechanic is fairly clear: it keeps the game challenging and interesting, by making sure that to get the best items you have to actually go and do the hard thing and figure out how to kill the dragon. You can't just go kill boars ten hours a day for a year, get thousands of gold, and buy the epic magic armor from other players who killed the dragon for you.

Of course, the system is very imperfect: you could just pay a team of professionals to kill the dragon with you and let you collect the loot, or even outright buy a character on a secondary market, and do this all with out-of-game US dollars so you don't even have to kill boars. But even still, it makes for a much better game than every item always having a price.

What if NFTs could be soulbound?

NFTs in their current form have many of the same properties as rare and epic items in a massively multiplayer online game. They have social signaling value: people who have them can show them off, and there are more and more tools precisely to help users do that. Very recently, Twitter started rolling out an integration that allows users to show off their NFTs as their profile picture.

But what exactly are these NFTs signaling? Certainly, one part of the answer is some kind of skill in acquiring NFTs and knowing which NFTs to acquire. But because NFTs are tradeable items, another big part of the answer inevitably becomes that NFTs are about signaling wealth. CryptoPunks are now regularly being sold for many millions of dollars, and they are not even the most expensive NFTs out there.

If someone shows you that they have an NFT that is obtainable by doing X, you can't tell whether they did X themselves or whether they just paid someone else to do X. Some of the time this is not a problem: for an NFT supporting a charity, someone buying it off the secondary market is sacrificing their own funds for the cause and they are helping the charity by contributing to others' incentive to buy the NFT, and so there is no reason to discriminate against them. And indeed, a lot of good can come from charity NFTs alone. But what if we want to create NFTs that are not just about who has the most money, and that actually try to signal something else?

Perhaps the best example of a project trying to do this is POAP, the "proof of attendance protocol". POAP is a standard by which projects can send NFTs that represent the idea that the recipient personally participated in some event. (Part of my own POAP collection, much of which came from the events that I attended over the years.) POAP is an excellent example of an NFT that works better if it could be soulbound. If someone is looking at your POAP, they are not interested in whether or not you paid someone who attended some event. They are interested in whether or not you personally attended that event.
Proposals to put certificates (eg. driver's licenses, university degrees, proof of age) on-chain face a similar problem: they would be much less valuable if someone who doesn't meet the condition themselves could just go buy one from someone who does. While transferable NFTs have their place and can be really valuable on their own for supporting artists and charities, there is also a large and underexplored design space of what non-transferable NFTs could become.

What if governance rights could be soulbound?

This is a topic I have written about ad nauseam (see [1] [2] [3] [4] [5]), but it continues to be worth repeating: there are very bad things that can easily happen to governance mechanisms if governance power is easily transferable. This is true for two primary types of reasons:

- If the goal is for governance power to be widely distributed, then transferability is counterproductive as concentrated interests are more likely to buy the governance rights up from everyone else.
- If the goal is for governance power to go to the competent, then transferability is counterproductive because nothing stops the governance rights from being bought up by the determined but incompetent.

If you take the proverb that "those who most want to rule people are those least suited to do it" seriously, then you should be suspicious of transferability, precisely because transferability makes governance power flow away from the meek who are most likely to provide valuable input to governance and toward the power-hungry who are most likely to cause problems.

So what if we try to make governance rights non-transferable? What if we try to make a CityDAO where more voting power goes to the people who actually live in the city, or at least is reliably democratic and avoids undue influence by whales hoarding a large number of citizen NFTs? What if DAO governance of blockchain protocols could somehow make governance power conditional on participation? Once again, a large and fruitful design space opens up that today is difficult to access.

Implementing non-transferability in practice

POAP has made the technical decision to not block transferability of the POAPs themselves. There are good reasons for this: users might have a good reason to want to migrate all their assets from one wallet to another (eg. for security), and the security of non-transferability implemented "naively" is not very strong anyway because users could just create a wrapper account that holds the NFT and then sell the ownership of that.

And indeed, there have been quite a few cases where POAPs have frequently been bought and sold when an economic rationale was there to do so. Adidas recently released a POAP for free to their fans that could give users priority access at a merchandise sale. What happened? Well, of course, many of the POAPs were quickly transferred to the highest bidder. More transfers than items. And not the only time.

To solve this problem, the POAP team is suggesting that developers who care about non-transferability implement checks on their own: they could check on-chain if the current owner is the same address as the original owner, and they could add more sophisticated checks over time if deemed necessary. This is, for now, a more future-proof approach.

Perhaps the one NFT that is the most robustly non-transferable today is the proof-of-humanity attestation. Theoretically, anyone can create a proof-of-humanity profile with a smart contract account that has transferable ownership, and then sell that account.
But the proof-of-humanity protocol has a revocation feature that allows the original owner to make a video asking for a profile to be removed, and a Kleros court decides whether or not the video was from the same person as the original creator. Once the profile is successfully removed, they can re-apply to make a new profile. Hence, if you buy someone else's proof-of-humanity profile, your possession can be very quickly taken away from you, making transfers of ownership non-viable. Proof-of-humanity profiles are de-facto soulbound, and infrastructure built on top of them could allow for on-chain items in general to be soulbound to particular humans.

Can we limit transferability without going all the way and basing everything on proof of humanity? It becomes harder, but there are medium-strength approaches that are probably good enough for some use cases. Making an NFT bound to an ENS name is one simple option, if we assume that users care enough about their ENS names that they are not willing to transfer them. For now, what we're likely to see is a spectrum of approaches to limit transferability, with different projects choosing different tradeoffs between security and convenience.

Non-transferability and privacy

Cryptographically strong privacy for transferable assets is fairly easy to understand: you take your coins, put them into tornado.cash or a similar platform, and withdraw them into a fresh account. But how can we add privacy for soulbound items where you cannot just move them into a fresh account or even a smart contract? If proof of humanity starts getting more adoption, privacy becomes even more important, as the alternative is all of our activity being mapped on-chain directly to a human face. Fortunately, a few fairly simple technical options are possible:

- Store the item at an address which is the hash of (i) an index, (ii) the recipient address and (iii) a secret belonging to the recipient. You could reveal your secret to an interface that would then scan for all possible items that belong to you, but no one without your secret could see which items are yours. (A small sketch of this option appears at the end of this section.)
- Publish a hash of a bunch of items, and give each recipient their Merkle branch.

If a smart contract needs to check if you have an item of some type, you can provide a ZK-SNARK. Transfers could be done on-chain; the simplest technique may just be a transaction that calls a factory contract to make the old item invalid and the new item valid, using a ZK-SNARK to prove that the operation is valid.

Privacy is an important part of making this kind of ecosystem work well. In some cases, the underlying thing that the item is representing is already public, and so there is no point in trying to add privacy. But in many other cases, users would not want to reveal everything that they have. If, one day in the future, being vaccinated becomes a POAP, one of the worst things we could do would be to create a system where the POAP is automatically advertised for everyone to see and everyone has no choice but to let their medical decision be influenced by what would look cool in their particular social circle. Privacy being a core part of the design can avoid these bad outcomes and increase the chance that we create something great.
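A minimal sketch of the first option above, with purely illustrative names (register_item, scan_my_items) and a Python dictionary standing in for on-chain storage; only someone who knows the recipient's secret can link the stored slots back to them:

```python
# Items are keyed by hash(index, recipient, secret), so an outside observer cannot tell
# which slots belong to which recipient. Names and storage here are illustrative only.
import hashlib

def item_slot(index: int, recipient: str, secret: bytes) -> str:
    data = index.to_bytes(8, "big") + recipient.encode() + secret
    return hashlib.sha256(data).hexdigest()

registry = {}  # stand-in for on-chain storage: slot -> item data

def register_item(index, recipient, secret, item):
    registry[item_slot(index, recipient, secret)] = item

def scan_my_items(recipient, secret, max_index=1000):
    """An interface that knows your secret rediscovers your items by trying indices."""
    found = []
    for i in range(max_index):
        slot = item_slot(i, recipient, secret)
        if slot in registry:
            found.append(registry[slot])
    return found

secret = b"only-the-recipient-knows-this"
register_item(3, "0xRecipientAddress", secret, "devcon-poap")
print(scan_my_items("0xRecipientAddress", secret))  # ['devcon-poap']
```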
From here to there

A common criticism of the "web3" space as it exists today is how money-oriented everything is. People celebrate the ownership, and outright waste, of large amounts of wealth, and this limits the appeal and the long-term sustainability of the culture that emerges around these items. There are of course important benefits that even financialized NFTs can provide, such as funding artists and charities that would otherwise go unrecognized. However, there are limits to that approach, and a lot of underexplored opportunity in trying to go beyond financialization. Making more items in the crypto space "soulbound" can be one path toward an alternative, where NFTs can represent much more of who you are and not just what you can afford.

However, there are technical challenges to doing this, and an uneasy "interface" between the desire to limit or prevent transfers and a blockchain ecosystem where so far all of the standards are designed around maximum transferability. Attaching items to "identity objects" that users are either unable (as with proof-of-humanity profiles) or unwilling (as with ENS names) to trade away seems like the most promising path, but challenges remain in making this easy-to-use, private and secure. We need more effort on thinking through and solving these challenges. If we can, this opens a much wider door to blockchains being at the center of ecosystems that are collaborative and fun, and not just about money.
The bulldozer vs vetocracy political axis
2021 Dec 19

Typically, attempts to collapse down political preferences into a few dimensions focus on two primary dimensions: "authoritarian vs libertarian" and "left vs right". You've probably seen political compasses like this. There have been many variations on this, and even an entire subreddit dedicated to memes based on these charts. I even made a spin on the concept myself, with this "meta-political compass" where at each point on the compass there is a smaller compass depicting what the people at that point on the compass see the axes of the compass as being.

Of course, "authoritarian vs libertarian" and "left vs right" are both incredibly un-nuanced gross oversimplifications. But we puny-brained human beings do not have the capacity to run anything close to accurate simulations of humanity inside our heads, and so sometimes incredibly un-nuanced gross oversimplifications are something we need to understand the world. But what if there are other incredibly un-nuanced gross oversimplifications worth exploring?

Enter the bulldozer vs vetocracy divide

Let us consider a political axis defined by these two opposing poles:

- Bulldozer: single actors can do important and meaningful, but potentially risky and disruptive, things without asking for permission
- Vetocracy: doing anything potentially disruptive and controversial requires getting a sign-off from a large number of different and diverse actors, any of whom could stop it

Note that this is not the same as either authoritarian vs libertarian or left vs right. You can have vetocratic authoritarianism, the bulldozer left, or any other combination. The key difference between authoritarian bulldozer and authoritarian vetocracy is this: is the government more likely to fail by doing bad things or by preventing good things from happening? Similarly for libertarian bulldozer vs vetocracy: are private actors more likely to fail by doing bad things, or by standing in the way of needed good things?

Sometimes, I hear people complaining that eg. the United States (but other countries too) is falling behind because too many people use freedom as an excuse to prevent needed reforms from happening. But is the problem really freedom? Isn't, say, restrictive housing policy preventing GDP from rising by 36% an example of the problem precisely being people not having enough freedom to build structures on their own land? Shifting the argument over to saying that there is too much vetocracy, on the other hand, makes the argument look much less confusing: individuals excessively blocking governments and governments excessively blocking individuals are not opposites, but rather two sides of the same coin.

And indeed, recently there has been a bunch of political writing pointing the finger straight at vetocracy as a source of many huge problems:

- https://astralcodexten.substack.com/p/ezra-klein-on-vetocracy
- https://www.vox.com/2020/4/22/21228469/marc-andreessen-build-government-coronavirus
- https://www.vox.com/2016/10/26/13352946/francis-fukuyama-ezra-klein
- https://www.politico.com/news/magazine/2019/11/29/penn-station-robert-caro-073564

And on the other side of the coin, people are often confused when politicians who normally do not respect human rights suddenly appear very pro-freedom in their love of Bitcoin. Are they libertarian, or are they authoritarian?
In this framework, the answer is simple: they're bulldozers, with all the benefits and risks that that side of the spectrum brings.

What is vetocracy good for?

Though the change that cryptocurrency proponents seek to bring to the world is often bulldozery, cryptocurrency governance internally is often quite vetocratic. Bitcoin governance famously makes it very difficult to make changes, and some core "constitutional norms" (eg. the 21 million coin limit) are considered so inviolate that many Bitcoin users consider a chain that violates that rule to be by-definition not Bitcoin, regardless of how much support it has.

Ethereum protocol research is sometimes bulldozery in operation, but the Ethereum EIP process that governs the final stage of turning a research proposal into something that actually makes it into the blockchain includes a fair share of vetocracy, though still less than Bitcoin. Governance over irregular state changes, hard forks that interfere with the operation of specific applications on-chain, is even more vetocratic: after the DAO fork, not a single proposal to intentionally "fix" some application by altering its code or moving its balance has been successful.

The case for vetocracy in these contexts is clear: it gives people a feeling of safety that the platform they build or invest on is not going to suddenly change the rules on them one day and destroy everything they've put years of their time or money into. Cryptocurrency proponents often cite Citadel interfering in Gamestop trading as an example of the opaque, centralized (and bulldozery) manipulation that they are fighting against. Web2 developers often complain about centralized platforms suddenly changing their APIs in ways that destroy startups built around their platforms. And, of course.... (Vitalik Buterin, bulldozer victim.) Ok fine, the story that WoW removing Siphon Life was the direct inspiration to Ethereum is exaggerated, but the infamous patch that ruined my beloved warlock and my response to it were very real!

And similarly, the case for vetocracy in politics is clear: it's a response to the often ruinous excesses of the bulldozers, both relatively minor and unthinkably severe, of the early 20th century.

So what's the synthesis?

The primary purpose of this post is to outline an axis, not to argue for a particular position. And if the vetocracy vs bulldozer axis is anything like the libertarian vs authoritarian axis, it's inevitably going to have internal subtleties and contradictions: much like a free society will see people voluntarily joining internally autocratic corporations (yes, even lots of people who are totally not economically desperate make such choices), many movements will be vetocratic internally but bulldozery in their relationship with the outside world.

But here are a few possible things that one could believe about bulldozers and vetocracy:

- The physical world has too much vetocracy, but the digital world has too many bulldozers, and there are no digital places that are truly effective refuges from the bulldozers (hence: why we need blockchains?)
- Processes that create durable change need to be bulldozery toward the status quo but protecting that change requires a vetocracy. There's some optimal rate at which such processes should happen; too much and there's chaos, not enough and there's stagnation.
- A few key institutions should be protected by strong vetocracy, and these institutions exist both to enable bulldozers needed to enact positive change and to give people things they can depend on that are not going to be brought down by bulldozers. In particular, blockchain base layers should be vetocratic, but application-layer governance should leave more space for bulldozers.
- Better economic mechanisms (quadratic voting? Harberger taxes?) can get us many of the benefits of both vetocracy and bulldozers without many of the costs.

Vetocracy vs bulldozer is a particularly useful axis to use when thinking about non-governmental forms of human organization, whether for-profit companies, non-profit organizations, blockchains, or something else entirely. The relatively easier ability to exit from such systems (compared to governments) confounds discussion of how libertarian vs authoritarian they are, and so far blockchains and even centralized tech platforms have not really found many ways to differentiate themselves on the left vs right axis (though I would love to see more attempts at left-leaning crypto projects!). The vetocracy vs bulldozer axis, on the other hand, continues to map to non-governmental structures quite well - potentially making it very relevant in discussing these new kinds of non-governmental structures that are becoming increasingly important.
Endgame
2021 Dec 06

Special thanks to a whole bunch of people from Optimism and Flashbots for discussion and thought that went into this piece, and Karl Floersch, Phil Daian, Hasu and Alex Obadia for feedback and review.

Consider the average "big block chain" - very high block frequency, very high block size, many thousands of transactions per second, but also highly centralized: because the blocks are so big, only a few dozen or few hundred nodes can afford to run a fully participating node that can create blocks or verify the existing chain. What would it take to make such a chain acceptably trustless and censorship resistant, at least by my standards? Here is a plausible roadmap:

- Add a second tier of staking, with low resource requirements, to do distributed block validation. The transactions in a block are split into 100 buckets, with a Merkle or Verkle tree state root after each bucket. Each second-tier staker gets randomly assigned to one of the buckets. A block is only accepted when at least 2/3 of the validators assigned to each bucket sign off on it.
- Introduce either fraud proofs or ZK-SNARKs to let users directly (and cheaply) check block validity. ZK-SNARKs can cryptographically prove block validity directly; fraud proofs are a simpler scheme where if a block has an invalid bucket, anyone can broadcast a fraud proof of just that bucket. This provides another layer of security on top of the randomly-assigned validators.
- Introduce data availability sampling to let users check block availability. By using DAS checks, light clients can verify that a block was published by only downloading a few randomly selected pieces (a rough sketch of such a check appears below).
- Add secondary transaction channels to prevent censorship. One way to do this is to allow secondary stakers to submit lists of transactions which the next main block must include.

What do we get after all of this is done? We get a chain where block production is still centralized, but block validation is trustless and highly decentralized, and specialized anti-censorship magic prevents the block producers from censoring. It's somewhat aesthetically ugly, but it does provide the basic guarantees that we are looking for: even if every single one of the primary stakers (the block producers) is intent on attacking or censoring, the worst that they could do is all go offline entirely, at which point the chain stops accepting transactions until the community pools their resources and sets up one primary-staker node that is honest.
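To give a feel for the data availability sampling step referenced above, here is a minimal self-contained sketch: a light client samples a few chunk indices and checks Merkle proofs against the block's data root. This is only the sampling skeleton; the real mechanism also needs erasure coding so that withholding a small fraction of the data is caught with high probability.

```python
# Toy data availability sampling: verify a few randomly chosen chunks against a Merkle root.
import hashlib, random

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves):
    layer = [h(leaf) for leaf in leaves]          # assumes a power-of-two number of leaves
    while len(layer) > 1:
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

def merkle_proof(leaves, index):
    layer, proof = [h(leaf) for leaf in leaves], []
    while len(layer) > 1:
        proof.append(layer[index ^ 1])            # sibling at the current level
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
        index //= 2
    return proof

def verify_chunk(root, chunk, index, proof):
    node = h(chunk)
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

# A block producer commits to 16 chunks; a light client samples 4 of them.
chunks = [f"chunk {i}".encode() for i in range(16)]
root = merkle_root(chunks)
samples = random.sample(range(16), 4)
print(all(verify_chunk(root, chunks[i], i, merkle_proof(chunks, i)) for i in samples))
```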
Now, consider one possible long-term future for rollups... Imagine that one particular rollup - whether Arbitrum, Optimism, Zksync, StarkNet or something completely new - does a really good job of engineering their node implementation, to the point where it really can do 10,000 transactions per second if given powerful enough hardware. The techniques for doing this are in-principle well-known, and implementations were made by Dan Larimer and others many years ago: split up execution into one CPU thread running the unparallelizable but cheap business logic and a huge number of other threads running the expensive but highly parallelizable cryptography. Imagine also that Ethereum implements sharding with data availability sampling, and has the space to store that rollup's on-chain data between its 64 shards. As a result, everyone migrates to this rollup. What would that world look like?

Once again, we get a world where block production is centralized, block validation is trustless and highly decentralized, and censorship is still prevented. Rollup block producers have to process a huge number of transactions, and so it is a difficult market to enter, but they have no way to push invalid blocks through. Block availability is secured by the underlying chain, and block validity is guaranteed by the rollup logic: if it's a ZK rollup, it's ensured by SNARKs, and an optimistic rollup is secure as long as there is one honest actor somewhere running a fraud prover node (they can be subsidized with Gitcoin grants). Furthermore, because users always have the option of submitting transactions through the on-chain secondary inclusion channel, rollup sequencers also cannot effectively censor.

Now, consider the other possible long-term future of rollups... No single rollup succeeds at holding anywhere close to the majority of Ethereum activity. Instead, they all top out at a few hundred transactions per second. We get a multi-rollup future for Ethereum - the Cosmos multi-chain vision, but on top of a base layer providing data availability and shared security. Users frequently rely on cross-rollup bridging to jump between different rollups without paying the high fees on the main chain. What would that world look like?

It seems like we could have it all: decentralized validation, robust censorship resistance, and even distributed block production, because the rollups are all individually small and so easy to start producing blocks in. But the decentralization of block production may not last, because of the possibility of cross-domain MEV. There are a number of benefits to being able to construct the next block on many domains at the same time: you can create blocks that use arbitrage opportunities that rely on making transactions in two rollups, or one rollup and the main chain, or even more complex combinations. (A cross-domain MEV opportunity discovered by Western Gate.)

Hence, in a multi-domain world, there are strong pressures toward the same people controlling block production on all domains. It may not happen, but there's a good chance that it will, and we have to be prepared for that possibility. What can we do about it? So far, the best that we know how to do is to use two techniques in combination:

- Rollups implement some mechanism for auctioning off block production at each slot, or the Ethereum base layer implements proposer/builder separation (PBS), or both. This ensures that at least any centralization tendencies in block production don't lead to a completely elite-captured and concentrated staking pool market dominating block validation.
- Rollups implement censorship-resistant bypass channels, and the Ethereum base layer implements PBS anti-censorship techniques. This ensures that if the winners of the potentially highly centralized "pure" block production market try to censor transactions, there are ways to bypass the censorship.

So what's the result? Block production is centralized, block validation is trustless and highly decentralized, and censorship is still prevented. Three paths toward the same destination.

So what does this mean?

While there are many paths toward building a scalable and secure long-term blockchain ecosystem, it's looking like they are all building toward very similar futures.
There's a high chance that block production will end up centralized: either the network effects within rollups or the network effects of cross-domain MEV push us in that direction in their own different ways. But what we can do is use protocol-level techniques such as committee validation, data availability sampling and bypass channels to "regulate" this market, ensuring that the winners cannot abuse their power.

What does this mean for block producers? Block production is likely to become a specialized market, and the domain expertise is likely to carry over across different domains. 90% of what makes a good Optimism block producer also makes a good Arbitrum block producer, and a good Polygon block producer, and even a good Ethereum base layer block producer. If there are many domains, cross-domain arbitrage may also become an important source of revenue.

What does this mean for Ethereum? First of all, Ethereum is very well-positioned to adjust to this future world, despite the inherent uncertainty. The profound benefit of the Ethereum rollup-centric roadmap is that it means that Ethereum is open to all of the futures, and does not have to commit to an opinion about which one will necessarily win. Will users very strongly want to be on a single rollup? Ethereum, following its existing course, can be the base layer of that, automatically providing the anti-fraud and anti-censorship "armor" that high-capacity domains need to be secure. Is making a high-capacity domain too technically complicated, or do users just have a great need for variety? Ethereum can be the base layer of that too - and a very good one, as the common root of trust makes it far easier to move assets between rollups safely and cheaply.

But also, Ethereum researchers should think hard about what levels of decentralization in block production are actually achievable. It may not be worth it to add complicated plumbing to make highly decentralized block production easy if cross-domain MEV (or even cross-shard MEV from one rollup taking up multiple shards) makes it unsustainable regardless.

What does this mean for big block chains? There is a path for them to turn into something trustless and censorship resistant, and we'll soon find out if their core developers and communities actually value censorship resistance and decentralization enough for them to do it!

It will likely take years for all of this to play out. Sharding and data availability sampling are complex technologies to implement. It will take years of refinement and audits for people to be fully comfortable storing their assets in a ZK-rollup running a full EVM. And cross-domain MEV research too is still in its infancy. But it does look increasingly clear how a realistic but bright future for scalable blockchains is likely to emerge.
Review of Optimism retro funding round 1
2021 Nov 16

Special thanks to Karl Floersch and Haonan Li for feedback and review, and Jinglan Wang for discussion.

Last month, Optimism ran their first round of retroactive public goods funding, allocating a total of $1 million to 58 projects to reward the good work that these projects have already done for the Optimism and Ethereum ecosystems. In addition to being the first major retroactive general-purpose public goods funding experiment, it's also the first experiment in a new kind of governance through badge holders - not a very small decision-making board and also not a fully public vote, but instead a quadratic vote among a medium-sized group of 22 participants.

The entire process was highly transparent from start to finish:

- The rules that the badge holders were supposed to follow were enshrined in the badge holder instructions
- You can see the projects that were nominated in this spreadsheet
- All discussion between the badge holders happened in publicly viewable forums. In addition to Twitter conversation (eg. Jeff Coleman's thread and also others), all of the explicit structured discussion channels were publicly viewable: the #retroactive-public-goods channel on the Optimism discord, and a published Zoom call
- The full results, and the individual badge holder votes that went into the results, can be viewed in this spreadsheet

And finally, the results were published in an easy-to-read chart form.

Much like the Gitcoin quadratic funding rounds and the MolochDAO grants, this is yet another instance of the Ethereum ecosystem establishing itself as a key player in the innovative public goods funding mechanism design space. But what can we learn from this experiment?

Analyzing the results

First, let us see if there are any interesting takeaways that can be seen by looking at the results. But what do we compare the results to? The most natural point of comparison is the other major public goods funding experiment that we've had so far: the Gitcoin quadratic funding rounds (in this case, round 11).

(Comparison charts: Gitcoin round 11, tech only, vs Optimism retro round 1.)

Probably the most obvious property of the Optimism retro results that can be seen without any comparisons is the category of the winners: every major Optimism retro winner was a technology project. There was nothing in the badge holder instructions that specified this; non-tech projects (say, the translations at ethereum.cn) were absolutely eligible. And yet, due to some combination of choice of badge holders and subconscious biases, the round seems to have been understood as being tech-oriented. Hence, I restricted the Gitcoin results in the table above to technology ("DApp Tech" + "Infra Tech") to focus on the remaining differences. Some other key remaining differences are:

- The retro round was low variance: the top-receiving project only got three times more (in fact, exactly three times more) than the 25th, whereas in the Gitcoin chart combining the two categories the gap was over 5x, and if you look at DApp Tech or Infra Tech separately the gap is over 15x! I personally blame this on Gitcoin using standard quadratic funding (\(reward \approx (\sum_i \sqrt{x_i})^2\)) and the retro round using \(\sum_i \sqrt{x_i}\) without the square; perhaps the next retro round should just add the square. (A quick numerical illustration follows this list.)
- The retro round winners are more well-known projects: this is actually an intended consequence: the retro round focused on rewarding projects for value already provided, whereas the Gitcoin round was open-ended and many contributions were to promising new projects in expectation of future value.
- The retro round focused more on infrastructure, the Gitcoin round more on more user-facing projects: this is of course a generalization, as there are plenty of infrastructure projects in the Gitcoin list, but in general applications that are directly user-facing are much more prominent there. A particularly interesting consequence (or cause?) of this is that the Gitcoin round focused more on projects appealing to sub-communities (eg. gamers), whereas the retro round focused more on globally-valuable projects - or, less charitably, projects appealing to the one particular sub-community that is Ethereum developers.
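To illustrate the variance point in the first bullet numerically, here is a tiny sketch comparing the two formulas on made-up contribution counts (the real rounds normalize payouts to a fixed budget, so only the ratios matter):

```python
# Standard quadratic funding squares the sum of square roots, which spreads payouts much
# more widely across projects than the un-squared version used in the retro round.
import math

def qf_standard(contribs):   # reward ~ (sum_i sqrt(x_i))^2
    return sum(math.sqrt(x) for x in contribs) ** 2

def qf_unsquared(contribs):  # reward ~ sum_i sqrt(x_i)
    return sum(math.sqrt(x) for x in contribs)

popular = [1] * 400          # 400 small contributions
niche   = [1] * 100          # 100 small contributions

for f in (qf_standard, qf_unsquared):
    print(f.__name__, round(f(popular) / f(niche), 2))
# qf_standard  16.0  -> the popular project gets 16x as much as the niche one
# qf_unsquared  4.0  -> only 4x as much, i.e. much lower variance between winners
```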
It is my own (admittedly highly subjective) opinion that the retro round winner selection is somewhat higher quality. This is independent of the above three differences; it's more a general impression that the specific projects that were chosen as top recipients on the right were very high quality projects, to a greater extent than top recipients on the left. Of course, this could have two causes: (i) a smaller but more skilled number of badge holders ("technocrats") can make better decisions than "the crowd", and (ii) it's easier to judge quality retroactively than ahead of time. And this gets us an interesting question: what if a simple way to summarize much of the above findings is that technocrats are smarter but the crowd is more diverse?

Could we make badge holders and their outputs more diverse?

To better understand the problem, let us zoom in on the one specific example that I already mentioned above: ethereum.cn. This is an excellent Chinese Ethereum community project (though not the only one! See also EthPlanet), which has been providing a lot of resources in Chinese for people to learn about Ethereum, including translations of many highly technical articles written by Ethereum community members and about Ethereum originally in English.

(Ethereum.cn webpage: plenty of high quality technical material - though they have still not yet gotten the memo that they were supposed to rename "eth2" to "consensus layer". Minus ten retroactive reward points for them.)

What knowledge does a badge holder need to be able to effectively determine whether ethereum.cn is an awesome project, a well-meaning but mediocre project that few Chinese people actually visit, or a scam? Likely the following:

- Ability to speak and understand Chinese
- Being plugged into the Chinese community and understanding the social dynamics of that specific project
- Enough understanding of both the tech and of the frame of mind of non-technical readers to judge the site's usefulness for them

Out of the current, heavily US-focused, badge holders, the number that satisfy these requirements is basically zero. Even the two Chinese-speaking badge holders are US-based and not close to the Chinese Ethereum community. So, what happens if we expand the badge holder set? We could add five badge holders from the Chinese Ethereum community, five from India, five from Latin America, five from Africa, and five from Antarctica to represent the penguins. At the same time we could also diversify among areas of expertise: some technical experts, some community leaders, some people plugged into the Ethereum gaming world.
Hopefully, we can get enough coverage that for each project that's valuable to Ethereum, we would have at least 1-5 badge holders who understand enough about that project to be able to intelligently vote on it. But then we see the problem: there would only be 1-5 badge holders able to intelligently vote on it.There are a few families of solutions that I see:Tweak the quadratic voting design. In theory, quadratic voting has the unique property that there's very little incentive to vote on projects that you do not understand. Any vote you make takes away credits that you could use to vote on projects you understand better. However, the current quadratic voting design has a flaw here: not voting on a project isn't truly a neutral non-vote, it's a vote for the project getting nothing. I don't yet have great ideas for how to do this. A key question is: if zero becomes a truly neutral vote, then how much money does a project get if nobody makes any votes on it? However, this is worth looking into more. Split up the voting into categories or sub-committees. Badge holders would first vote to sort projects into buckets, and the badge holders within each bucket ("zero knowledge proofs", "games", "India"...) would then make the decisions from there. This could also be done in a more "liquid" way through delegation - a badge holder could select some other badge holder to decide their vote on some project, and they would automatically copy their vote. Everyone still votes on everything, but facilitate more discussion. The badge holders that do have the needed domain expertise to assess a given project (or a single aspect of some given project) come up with their opinions and write a document or spreadsheet entry to express their reasoning. Other badge holders use this information to help make up their minds. Once the number of decisions to be made gets even higher, we could even consider ideas like in-protocol random sortition (eg. see this idea to incorporate sortition into quadratic voting) to reduce the number of decisions that each participant needs to make. Quadratic sortition has the particularly nice benefit that it naturally leads to large decisions being made by the entire group and small decisions being made by smaller groups.The means-testing debateIn the post-round retrospective discussion among the badge holders, one of the key questions that was brought up is: when choosing which projects to fund, should badge holders take into account whether that project is still in dire need of funding, and de-prioritize projects that are already well-funded through some other means? That is to say, should retroactive rewards be means-tested?In a "regular" grants-funding round, the rationale for answering "yes" is clear: increasing a project's funding from $0 to $100k has a much bigger impact on its ability to do its job than increasing a project's funding from $10m to $10.1m. But Optimism retro funding round 1 is not a regular grants-funding round. In retro funding, the objective is not to give people money in expectation of future work that money could help them do. Rather, the objective is to reward people for work already done, to change the incentives for anyone working on projects in the future.
With this in mind, to what extent should retroactive project funding depend on how much a given project actually needs the funds?The case for means testingSuppose that you are a 20-year-old developer, and you are deciding whether to join some well-funded defi project with a fancy token, or to work on some cool open-source fully public good work that will benefit everyone. If you join the well-funded defi project, you will get a $100k salary, and your financial situation will be guaranteed to be very secure. If you work on public-good projects on your own, you will have no income. You have some savings and you could make some money with side gigs, but it will be difficult, and you're not sure if the sacrifice is worth it.Now, consider two worlds, World A and World B. First, the similarities:There are ten people exactly like you out there that could get retro rewards, and five of you will. Hence, there's a 50% chance you'll get a retro reward. There's a 30% chance that your work will propel you to moderate fame and you'll be hired by some company with even better terms than the original defi project (or even start your own). Now, the differences:World A (means testing): retro rewards are concentrated among the actors that do not find success some other way World B (no means testing): retro rewards are given out independently of whether or not the project finds success in some other way Let's look at your chances in each world.

Event | Probability (World A) | Probability (World B)
Independent success and retroactive reward | 0% | 15%
Independent success only | 30% | 15%
Retroactive reward only | 50% | 35%
Nothing | 20% | 35%

From your point of view as a non-risk-neutral human being, the 15% chance of getting success twice in world B matters much less than the fact that in world B your chances of being left completely in the cold with nothing are nearly double.Hence, if we want to encourage people in this hypothetical 20-year-old's position to actually contribute, concentrating retro rewards among projects that did not already get rewarded some other way seems prudent.The case against means testingSuppose that you are someone who contributes a small amount to many projects, or an investor seed-funding public good projects in anticipation of retroactive rewards. In this case, the share that you would get from any single retroactive reward is small. Would you rather have a 10% chance of getting $10,100, or a 10% chance of getting $10,000 and a 10% chance of getting $100? It really doesn't matter.Furthermore, your chance of getting rewarded via retroactive funding may well be quite disjoint from your chance of getting rewarded some other way. There are countless stories on the internet of people putting a big part of their lives into a project when that project was non-profit and open-source, seeing that project go for-profit and become successful, and getting absolutely nothing out of it for themselves. In all of these cases, it doesn't really matter whether or not retroactive rewards take into account whether projects are needy. In fact, it would probably be better for them to just focus on judging quality.Means testing has downsides of its own. It would require badge holders to expend effort to determine to what extent a project is well-funded outside the retroactive reward system. It could lead to projects expending effort to hide their wealth and appear scrappy to increase their chance of getting more rewards.
Subjective evaluations of neediness of recipients could turn into politicized evaluations of moral worthiness of recipients that introduce more controversy into the mechanism. In the extreme, an elaborate tax return system might be required to properly enforce fairness.What do I think?In general, it seems like doing a little bit of prioritizing projects that have not discovered business models has advantages, but we should not do too much of that. Projects should be judged by their effect on the world first and foremost.NominationsIn this round, anyone could nominate projects by submitting them in a Google form, and there were only 106 projects nominated. What about the next round, now that people know for sure that they stand a chance at getting thousands of dollars in payout? What about the round a year in the hypothetical future, when fees from millions of daily transactions are paying into the retro funding rounds, and individual projects are getting more money than the entire round is today?Some kind of multi-level structure for nominations seems inevitable. There's probably no need to enshrine it directly into the voting rules. Instead, we can look at this as one particular way of changing the structure of the discussion: nomination rules filter out the nominations that badge holders need to look at, and anything the badge holders do not look at will get zero votes by default (unless a badge holder really cares to bypass the rules because they have their own reasons to believe that some project is valuable). Some possible ideas:Badge holder pre-approval: for a proposal to become visible, it must be approved by N badge holders (eg. N=3?). Any N badge holders could pre-approve any project; this is an anti-spam speed bump, not a gate-keeping sub-committee. Require proposers to provide more information about their proposal, justifying it and reducing the work badge holders need to do to go through it. Badge holders would also appoint a separate committee and entrust it with sorting through these proposals and forwarding the ones that follow the rules and pass a basic smell test of not being spam Proposals have to specify a category (eg. "zero knowledge proofs", "games", "India"), and badge holders who had declared themselves experts in that category would review those proposals and forward them to a vote only if they chose the right category and pass a basic smell test. Proposing requires a deposit of 0.02 ETH. If your proposal gets 0 votes (alternatively: if your proposal is explicitly deemed to be "spam"), your deposit is lost. Proposing requires a proof-of-humanity ID, with a maximum of 3 proposals per human. If your proposals get 0 votes (alternatively: if any of your proposals is explicitly deemed to be "spam"), you can no longer submit proposals for a year (or you have to provide a deposit). Conflict of interest rulesThe first part of the post-round retrospective discussion was taken up by discussion of conflict of interest rules. The badge holder instructions include the following lovely clause: No self-dealing or conflicts of interestRetroDAO governance participants should refrain from voting on sending funds to organizations where any portion of those funds is expected to flow to them, their other projects, or anyone they have a close personal or economic relationship with. As far as I can tell, this was honored. Badge holders did not try to self-deal, as they were (as far as I can tell) good people, and they knew their reputations were on the line. 
But there were also some subjective edge cases:Wording issues causing confusion. Some badge holders wondered about the word "other": could badge holders direct funds to their own primary projects? Additionally, the "sending funds to organizations where..." language does not strictly prohibit direct transfers to self. These were arguably simple mistakes in writing this clause; the word "other" should just be removed and "organizations" replaced with "addresses". What if a badge holder is part of a nonprofit that itself gives out grants to other projects? Could the badge holder vote for that nonprofit? The badge holder would not benefit, as the funds would 100% pass-through to others, but they could benefit indirectly. What level of connection counts as close connection? Ethereum is a tight-knit community and the people qualified to judge the best projects are often at least to some degree friends with the team or personally involved in those projects precisely because they respect those projects. When do those connections step over the line? I don't think there are perfect answers to this; rather, the line will inevitably be gray and can only be discussed and refined over time. The main mechanism-design tweaks that can mitigate it are (i) increasing the number of badge holders, diluting the portion of them that can be insiders in any single project, (ii) reducing the rewards going to projects that only a few badge holders support (my suggestion above to set the reward to \((\sum_i \sqrt x_i)^2\) instead of \(\sum_i \sqrt x_i\) would help here too), and (iii) making sure it's possible for badge holders to counteract clear abuses if they do show up.Should badge holder votes be secret ballot?In this round, badge holder votes were completely transparent; anyone can see how each badge holder votes. But transparent voting has a huge downside: it's vulnerable to bribery, including informal bribery of the kind that even good people easily succumb to. Badge holders could end up supporting projects in part with the subconscious motivation of winning favor with them. Even more realistically, badge holders may be unwilling to make negative votes even when they are justified, because a public negative vote could easily rupture a relationship.Secret ballots are the natural alternative. Secret ballots are used widely in democratic elections where any citizen (or sometimes resident) can vote, precisely to prevent vote buying and more coercive forms of influencing how people vote. However, in typical elections, votes within executive and legislative bodies are typically public. The usual reasons for this have to do with theories of democratic accountability: voters need to know how their representatives vote so that they can choose their representatives and know that they are not completely lying about their stated values. But there's also a dark side to accountability: elected officials making public votes are accountable to anyone who is trying to bribe them.Secret ballots within government bodies do have precedent:The Israeli Knesset uses secret votes to elect the president and a few other officials The Italian parliament has used secret votes in a variety of contexts. In the 19th century, it was considered an important way to protect parliament votes from interference by a monarchy. Discussions in US parliaments were less transparent before 1970, and some researchers argue that the switch to more transparency led to more corruption. Voting in juries is often secret. 
Sometimes, even the identities of jurors are secret. In general, the conclusion seems to be that secret votes in government bodies have complicated consequences; it's not clear that they should be used everywhere, but it's also not clear that transparency is an absolute good either.In the context of Optimism retro funding specifically, the main specific argument I heard against secret voting is that it would make it harder for badge holders to rally and vote against other badge holders making votes that are clearly very wrong or even malicious. Today, if a few rogue badge holders start supporting a project that has not provided value and is clearly a cash grab for those badge holders, the other badge holders can see this and make negative votes to counteract this attack. With secret ballots, it's not clear how this could be done.I personally would favor the second round of Optimism retro funding using completely secret votes (except perhaps open to a few researchers under conditions of non-disclosure) so we can tell what the material differences are in the outcome. Given the current small and tight-knit set of badge holders, dealing with rogue badge holders is likely not a primary concern, but in the future it will be; hence, coming up with a secret ballot design that allows counter-voting or some alternative strategy is an important research problem.Other ideas for structuring discussionThe level of participation among badge holders was very uneven. Some (particularly Jeff Coleman and Matt Garnett) put a lot of effort into their participation, publicly expressing their detailed reasoning in Twitter threads and helping to set up calls for more detailed discussion. Others participated in the discussion on Discord and still others just voted and did little else.There was a choice made (ok fine, I was the one who suggested it) that the #retroactive-public-goods channel should be readable by all (it's in the Optimism discord), but to prevent spam only badge holders should be able to speak. This reduced many people's ability to participate, especially ironically enough my own (I am not a badge holder, and my self-imposed Twitter quarantine, which only allows me to tweet links to my own long-form content, prevented me from engaging on Twitter).These two factors together meant that there was not that much discussion taking place; certainly less than I had been hoping for. What are some ways to encourage more discussion?Some ideas:Badge holders could vote in advisors, who cannot vote but can speak in the #retroactive-public-goods channel and other badge-holder-only meetings. Badge holders could be required to explain their decisions, eg. writing a post or a paragraph for each project they made votes on. Consider compensating badge holders, either through an explicit fixed fee or through a norm that badge holders themselves who made exceptional contributions to discussion are eligible for rewards in future rounds. Add more discussion formats. If the number of badge holders increases and there are subgroups with different specialties, there could be more chat rooms and each of them could invite outsiders. Another option is to create a dedicated subreddit. It's probably a good idea to start experimenting with more ideas like this.ConclusionsGenerally, I think Round 1 of Optimism retro funding has been a success.
Many interesting and valuable projects were funded, there was quite a bit of discussion, and all of this despite it only being the first round.There are a number of ideas that could be introduced or experimented with in subsequent rounds:Increase the number and diversity of badge holders, while making sure that there is some solution to the problem that only a few badge holders will be experts in any individual project's domain. Add some kind of two-layer nomination structure, to lower the decision-making burden that the entire badge holder set is exposed to Use secret ballots Add more discussion channels, and more ways for non-badge-holders to participate. This could involve reforming how existing channels work, or it could involve adding new channels, or even specialized channels for specific categories of projects. Change the reward formula to increase variance, from the current \(\sum_i \sqrt x_i\) to the standard quadratic funding formula of \((\sum_i \sqrt x_i) ^2\). In the long term, if we want retro funding to be a sustainable institution, there is also the question of how new badge holders are to be chosen (and, in cases of malfeasance, how badge holders could be removed). Currently, the selection is centralized. In the future, we need some alternative. One possible idea for round 2 is to simply allow existing badge holders to vote in a few new badge holders. In the longer term, to prevent it from being an insular bureaucracy, perhaps one badge holder each round could be chosen by something with more open participation, like a proof-of-humanity vote?In any case, retroactive public goods funding is still an exciting and new experiment in institutional innovation in multiple ways. It's an experiment in non-coin-driven decentralized governance, and it's an experiment in making things happen through retroactive, rather than proactive, incentives. To make the experiment fully work, a lot more innovation will need to happen both in the mechanism itself and in the ecosystem that needs to form around it. When will we see the first retro funding-focused angel investor? Whatever ends up happening, I'm looking forward to seeing how this experiment evolves in the rounds to come.
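To make the reward-formula suggestion in the conclusions above concrete, here is a small numerical comparison of the two formulas; this is only a sketch, and the vote tallies are made up purely for illustration:

```python
import math

def retro_round_1_reward(votes):
    # Formula used in this round, per the discussion above: sum of square roots.
    return sum(math.sqrt(v) for v in votes)

def quadratic_funding_reward(votes):
    # Standard quadratic funding formula: square of the sum of square roots.
    return sum(math.sqrt(v) for v in votes) ** 2

# Hypothetical tallies: a broadly supported project vs a narrowly supported one.
broad = [1] * 16   # 16 badge holders put in 1 vote credit each
narrow = [16]      # 1 badge holder puts in 16 vote credits

print(retro_round_1_reward(broad), retro_round_1_reward(narrow))          # 16.0 vs 4.0: a 4x gap
print(quadratic_funding_reward(broad), quadratic_funding_reward(narrow))  # 256.0 vs 16.0: a 16x gap
```

Adding the square stretches out the gap between broadly supported and narrowly supported projects, which is exactly the increase in variance (and the reduced weight on projects that only a few badge holders back) argued for above.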
2024-10-22
Halo and more: exploring incremental verification and SNARKs without pairings
Halo and more: exploring incremental verification and SNARKs without pairings2021 Nov 05 See all posts Halo and more: exploring incremental verification and SNARKs without pairings Special thanks to Justin Drake and Sean Bowe for wonderfully pedantic and thoughtful feedback and review, and to Pratyush Mishra for discussion that contributed to the original IPA exposition.Readers who have been following the ZK-SNARK space closely should by now be familiar with the high level of how ZK-SNARKs work. ZK-SNARKs are based on checking equations where the elements going into the equations are mathematical abstractions like polynomials (or in rank-1 constraint systems matrices and vectors) that can hold a lot of data. There are three major families of cryptographic technologies that allow us to represent these abstractions succinctly: Merkle trees (for FRI), regular elliptic curves (for inner product arguments (IPAs)), and elliptic curves with pairings and trusted setups (for KZG commitments). These three technologies lead to the three types of proofs: FRI leads to STARKs, KZG commitments lead to "regular" SNARKs, and IPA-based schemes lead to bulletproofs. These three technologies have very distinct tradeoffs:

Technology | Cryptographic assumptions | Proof size | Verification time
FRI | Hashes only (quantum safe!) | Large (10-200 kB) | Medium (poly-logarithmic)
Inner product arguments (IPAs) | Basic elliptic curves | Medium (1-3 kB) | Very high (linear)
KZG commitments | Elliptic curves + pairings + trusted setup | Short (~500 bytes) | Low (constant)

So far, the first and the third have seen the most attention. The reason for this has to do with that pesky right column in the second row of the table: elliptic curve-based inner product arguments have linear verification time. What this means is that even though the size of a proof is small, the amount of time needed to verify the proof is always longer than just running the computation yourself. This makes IPAs non-viable for scalability-related ZK-SNARK use cases: there's no point in using an IPA-based argument to prove the validity of an Ethereum block, because verifying the proof will take longer than just checking the block yourself. KZG and FRI-based proofs, on the other hand, really are much faster to verify than doing the computation yourself, so one of those two seems like the obvious choice.More recently, however, there has been a slew of research into techniques for merging multiple IPA proofs into one. Much of the initial work on this was done as part of designing the Halo protocol which is going into Zcash. These merging techniques are cheap, and a merged proof takes no longer to verify than a single one of the proofs that it's merging. This opens a way forward for IPAs to be useful: instead of verifying a size-\(n\) computation with a proof that still takes \(O(n)\) time to verify, break that computation up into smaller size-\(k\) steps, make \(\frac{n}{k}\) proofs for each step, and merge them together so the verifier's work goes down to a little more than \(O(k)\). These techniques also allow us to do incremental verification: if new things keep being introduced that need to be proven, you can just keep taking the existing proof, mixing it in with a proof of the new statement, and getting a proof of the new combined statement out. This is really useful for verifying the integrity of, say, an entire blockchain.So how do these techniques work, and what can they do?
That's exactly what this post is about.Background: how do inner product arguments work?Inner product arguments are a proof scheme that can work over many mathematical structures, but usually we focus on IPAs over elliptic curve points. IPAs can be made over simple elliptic curves, theoretically even Bitcoin and Ethereum's secp256k1 (though some special properties are preferred to make FFTs more efficient); no need for insanely complicated pairing schemes that despite having written an explainer article and an implementation I can still barely understand myself.We'll start off with the commitment scheme, typically called Pedersen vector commitments. To be able to commit to degree \(< n\) polynomials, we first publicly choose a set of base points, \(G_0 ... G_{n-1}\). These points can be generated through a pseudo-random procedure that can be re-executed by anyone (eg. the x coordinate of \(G_i\) can be \(hash(i, j)\) for the lowest integer \(j \ge 0\) that produces a valid point); this is not a trusted setup as it does not rely on any specific party to introduce secret information.To commit to a polynomial \(P(x) = \sum_i c_i x^i\), the prover computes \(com(P) = \sum_i c_i G_i\). For example, \(com(x^2 + 4)\) would equal \(G_2 + 4 * G_0\) (remember, the \(+\) and \(*\) here are elliptic curve addition and multiplication). Cryptographers will also often add an extra \(r \cdot H\) hiding parameter for privacy, but for simplicity of exposition we'll ignore privacy for now; in general, it's not that hard to add privacy into all of these schemes. Though it's not really mathematically accurate to think of elliptic curve points as being like real numbers that have sizes, area is nevertheless a good intuition for thinking about linear combinations of elliptic curve points like we use in these commitments. The blue area here is the value of the Pedersen commitment \(C = \sum_i c_i G_i\) to the polynomial \(P = \sum_i c_i x^i\). Now, let's get into how the proof works. Our final goal will be a polynomial evaluation proof: given some \(z\), we want to make a proof that \(P(z) = a\), where this proof can be verified by anyone who has the commitment \(C = com(P)\). But first, we'll focus on a simpler task: proving that \(C\) is a valid commitment to any polynomial at all - that is, proving that \(C\) was constructed by taking a linear combination \(\sum_i c_i G_i\) of the points \(\{G_0 ... G_{n-1}\}\), without anything else mixed in.Of course, technically any point is some multiple of \(G_0\) and so it's theoretically a valid commitment of something, but what we care about is proving that the prover knows some \(\{c_0 ... c_{n-1}\}\) such that \(\sum_i c_i G_i = C\). A commitment \(C\) cannot commit to multiple distinct polynomials that the prover knows about, because if it could, that would imply that elliptic curves are broken.The prover could, of course, just provide \(\{c_0 ... c_{n-1}\}\) directly and let the verifier check the commitment. But this takes too much space. So instead, we try to reduce the problem to a smaller problem of half the size. The prover provides two points, \(L\) and \(R\), representing the yellow and green areas in this diagram: You may be able to see where this is going: if you add \(C + L + R\) together (remember: \(C\) was the original commitment, so the blue area), the new combined point can be expressed as a sum of four squares instead of eight. And so now, the prover could finish by providing only four sums, the widths of each of the new squares.
Repeat this protocol two more times, and we're down to a single full square, which the prover can prove by sending a single value representing its width.But there's a problem: if \(C\) is incorrect in some way (eg. the prover added some extra point \(H\) into it), then the prover could just subtract \(H\) from \(L\) or \(R\) to compensate for it. We plug this hole by randomly scaling our points after the prover provides \(L\) and \(R\): Choose a random factor \(\alpha\) (typically, we set \(\alpha\) to be the hash of all data added to the proof so far, including the \(L\) and \(R\), to ensure the verifier can also compute \(\alpha\)). Every even \(G_i\) point gets scaled by \(\alpha\), every odd \(G_i\) point gets scaled down by the same factor. Every odd coefficient gets scaled up by \(\alpha\) (notice the flip), and every even coefficient gets scaled down by \(\alpha\). Now, notice that:The yellow area (\(L\)) gets multiplied by \(\alpha^2\) (because every yellow square is scaled up by \(\alpha\) on both dimensions) The green area (\(R\)) gets divided by \(\alpha^2\) (because every green square is scaled down by \(\alpha\) on both dimensions) The blue area (\(C\)) remains unchanged (because its width is scaled up but its height is scaled down) Hence, we can generate our new half-size instance of the problem with some simple transformations:\(G'_{i} = \alpha G_{2i} + \frac{G_{2i+1}}{\alpha}\) \(c'_{i} = \frac{c_{2i}}{\alpha} + \alpha c_{2i+1}\) \(C' = C + \alpha^2 L + \frac{R}{\alpha^2}\) (Note: in some implementations you instead do \(G'_i = \alpha G_{2i} + G_{2i+1}\) and \(c'_i = c_{2i} + \alpha c_{2i+1}\) without dividing the odd points by \(\alpha\). This makes the equation \(C' = \alpha C + \alpha^2 L + R\), which is less symmetric, but ensures that the function to compute any \(G'\) in any round of the protocol becomes a polynomial without any division. Yet another alternative is to do \(G'_i = \alpha G_{2i} + G_{2i+1}\) and \(c'_i = c_{2i+1} + \frac{c_{2i}}{\alpha}\), which avoids any \(\alpha^2\) terms.)And then we repeat the process until we get down to one point: Finally, we have a size-1 problem: prove that the final modified \(C^*\) (in this diagram it's \(C'''\) because we had to do three iterations, but it's \(log(n)\) iterations generally) is the product of the final modified \(G^*_0\) and \(c^*_0\). Here, the prover just provides \(c^*_0\) in the clear, and the verifier checks \(c^*_0 G^*_0 = C^*\). Computing \(c^*_0\) required being able to compute a linear combination of \(\{c_0 ... c_{n-1}\}\) that was not known ahead of time, so providing it and verifying it convinces the verifier that the prover actually does know all the coefficients that go into the commitment. This concludes the proof.Recapping:The statement we are proving is that \(C\) is a commitment to some polynomial \(P(x) = \sum_i c_i x^i\) committed to using the agreed-upon base points \(\{G_0 ... G_{n-1}\}\) The proof consists of \(log(n)\) pairs of \((L, R)\) values, representing the yellow and green areas at each step. The prover also provides the final \(c^*_0\) The verifier walks through the proof, generating the \(\alpha\) value at each step using the same algorithm as the prover and computing the new \(C'\) and \(G'_i\) values (the verifier doesn't know the \(c_i\) values so they can't compute any \(c'_i\) values) At the end, they check whether or not \(c^*_0 G^*_0 = C^*\) On the whole, the proof contains \(2 * log(n)\) elliptic curve points and one number (for pedants: one field element). Verifying the proof takes logarithmic time in every step except one: computing the new \(G'_i\) values; a minimal end-to-end sketch of the prover and verifier loop follows below.
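Here is that sketch. It is a toy under several assumptions: it assumes py_ecc's bn128 module (with G1, add, multiply, curve_order, and None as the identity point); it derives base points insecurely by multiplying G1 by a hash (a real setup would use the x-coordinate search described earlier, so that nobody knows the discrete logs between base points); it uses the asymmetric \(C' = \alpha C + \alpha^2 L + R\) variant to avoid division; and it hashes repr() of points as a stand-in Fiat-Shamir transcript rather than a canonical serialization.

```python
from hashlib import sha256
from py_ecc import bn128

q = bn128.curve_order

def hash_to_scalar(*objs):
    # Toy Fiat-Shamir challenge: a real implementation would hash a canonical
    # serialization of the transcript, not repr().
    return int.from_bytes(sha256(repr(objs).encode()).digest(), "big") % q

def base_points(n):
    # INSECURE stand-in for the "nothing up my sleeve" base points described above:
    # here the hash is a known discrete log of G_i, which a real setup must avoid.
    return [bn128.multiply(bn128.G1, hash_to_scalar("base", i)) for i in range(n)]

def lincomb(scalars, points):
    out = None  # None is the identity point in py_ecc's affine representation
    for s, P in zip(scalars, points):
        out = bn128.add(out, bn128.multiply(P, s % q))
    return out

def commit(c, G):
    # com(P) = sum_i c_i * G_i for P(x) = sum_i c_i x^i
    return lincomb(c, G)

def prove(c, G):
    # Asymmetric fold: G'_i = alpha*G_{2i} + G_{2i+1} and c'_i = c_{2i} + alpha*c_{2i+1},
    # so the verifier's update is C' = alpha*C + alpha^2*L + R (no division needed).
    proof = []
    while len(c) > 1:
        L = lincomb(c[1::2], G[0::2])   # odd coefficients against even base points
        R = lincomb(c[0::2], G[1::2])   # even coefficients against odd base points
        alpha = hash_to_scalar(L, R)
        proof.append((L, R))
        G = [bn128.add(bn128.multiply(G[2 * i], alpha), G[2 * i + 1]) for i in range(len(G) // 2)]
        c = [(c[2 * i] + alpha * c[2 * i + 1]) % q for i in range(len(c) // 2)]
    return proof, c[0]

def verify(C, G, proof, c_star):
    for L, R in proof:
        alpha = hash_to_scalar(L, R)
        C = bn128.add(bn128.add(bn128.multiply(C, alpha), bn128.multiply(L, alpha * alpha % q)), R)
        G = [bn128.add(bn128.multiply(G[2 * i], alpha), G[2 * i + 1]) for i in range(len(G) // 2)]
    return bn128.multiply(G[0], c_star) == C

G = base_points(8)
c = [1, 2, 3, 4, 5, 6, 7, 8]        # coefficients of 1 + 2x + ... + 8x^7 (arbitrary example)
C = commit(c, G)
proof, c_star = prove(c, G)
assert verify(C, G, proof, c_star)
```

Note how the verifier's loop mirrors the prover's fold of the base points; that fold of \(G\) is the linear-time step the text returns to next.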
This last step, computing the new \(G'_i\) values, is unfortunately linear.See also: Dankrad Feist's more detailed explanation of inner product arguments.Extension to polynomial evaluationsWe can extend to polynomial evaluations with a simple clever trick. Suppose we are trying to prove \(P(z) = a\). The prover and the verifier can extend the base points \(G_0 ... G_{n-1}\) by attaching powers of \(z\) to them: the new base points become \((G_0, 1), (G_1, z) ... (G_{n-1}, z^{n-1})\). These pairs can be treated as mathematical objects (for pedants: group elements) much like elliptic curve points themselves; to add them you do so element-by-element: \((A, x) + (B, y) =\) \((A + B,\ x + y)\), using elliptic curve addition for the points and regular field addition for the numbers.We can make a Pedersen commitment using this extended base! Now, here's a puzzle. Suppose \(P(x) = \sum_i c_i x^i\), where \(P(z) = a\), would have a commitment \(C = \sum_i c_i G_i\) if we were to use the regular elliptic curve points we used before as a base. If we use the pairs \((G_i, z^i)\) as a base instead, the commitment would be \((C, y)\) for some \(y\). What must be the value of \(y\)?The answer is: it must be equal to \(a\)! This is easy to see: the commitment is \((C, y) = \sum_i c_i (G_i, z^i)\), which we can decompose as \((\sum_i c_i G_i,\ \sum_i c_i z^i)\). The former is equal to \(C\), and the latter is just the evaluation \(P(z)\)!Hence, if \(C\) is a "regular" commitment to \(P\) using \(\{G_0 ... G_{n-1}\}\) as a base, then to prove that \(P(z) = a\) we need only use the same protocol above, but proving that \((C, a)\) is a valid commitment using \((G_0, 1), (G_1, z) ... (G_{n-1}, z^{n-1})\) as a base!Note that in practice, this is usually done slightly differently as an optimization: instead of attaching the numbers to the points and explicitly dealing with structures of the form \((G_i, z^i)\), we add another randomly chosen base point \(H\) and express it as \(G_i + z^i H\). This saves space.See here for an example implementation of this whole protocol.So, how do we combine these proofs?Suppose that you are given two polynomial evaluation proofs, with different polynomials and different evaluation points, and want to make a proof that they are both correct. You have:Proof \(\Pi_1\) proving that \(P_1(z_1) = y_1\), where \(P_1\) is represented by \(com(P_1) = C_1\) Proof \(\Pi_2\) proving that \(P_2(z_2) = y_2\), where \(P_2\) is represented by \(com(P_2) = C_2\) Verifying each proof takes linear time. We want to make a proof that proves that both proofs are correct. This will still take linear time, but the verifier will only have to make one round of linear time verification instead of two.We start off with an observation. The only linear-time step in performing the verification of the proofs is computing the \(G'_i\) values. This is \(O(n)\) work because you have to combine \(\frac{n}{2}\) pairs of \(G_i\) values into \(G'_i\) values, then \(\frac{n}{4}\) pairs of \(G'_i\) values into \(G''_i\) values, and so on, for a total of \(n\) combinations of pairs. But if you look at the algorithm carefully, you will notice that we don't actually need any of the intermediate \(G'_i\) values; we only need the final \(G^*_0\). This \(G^*_0\) is a linear combination of the initial \(G_i\) values. What are the coefficients to that linear combination? It turns out that the \(G_i\) coefficient is the \(X^i\) term of this polynomial:\[(X + \alpha_1) * (X^2 + \alpha_2)\ *\ ...\ *\ (X^{\frac{n}{2}} + \alpha_{log(n)}) \]This is using the \(C' = \alpha C + \alpha^2 L + R\) version we mentioned above.
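This claim is easy to sanity-check numerically. In the sketch below (plain Python; scalars stand in for elliptic curve points, which is fine because both the fold and the linear combination are linear operations, and the modulus and example values are arbitrary), we expand the product polynomial and confirm that its coefficients reproduce the verifier's \(G'_i = \alpha G_{2i} + G_{2i+1}\) folding:

```python
q = 2**255 - 19   # an arbitrary large prime, just for the modular arithmetic

def g_star_coefficients(alphas):
    # Expand (X + a_1)(X^2 + a_2)...(X^(n/2) + a_log(n)); entry i is G_i's weight in G*_0.
    poly = [1]
    for j, a in enumerate(alphas):
        step = 2 ** j
        new = [a * poly[i] % q for i in range(len(poly))] + [0] * step   # a_{j+1} * poly
        for i in range(len(poly)):
            new[i + step] = (new[i + step] + poly[i]) % q                # + X^(2^j) * poly
        poly = new
    return poly

def fold(G, alphas):
    # The verifier's per-round update G'_i = alpha * G_{2i} + G_{2i+1}.
    for a in alphas:
        G = [(a * G[2 * i] + G[2 * i + 1]) % q for i in range(len(G) // 2)]
    return G[0]

alphas = [3, 5, 7]                        # hypothetical round challenges for n = 8
G = [11, 22, 33, 44, 55, 66, 77, 88]      # stand-in "base points"
coeffs = g_star_coefficients(alphas)
assert fold(G, alphas) == sum(c * g for c, g in zip(coeffs, G)) % q
```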
The ability to directly compute \(G^*_0\) as a linear combination already cuts down our work to \(O(\frac{n}{log(n)})\) due to fast linear combination algorithms, but we can go further.The above polynomial has degree \(n - 1\), with \(n\) nonzero coefficients. But its un-expanded form has size \(log(n)\), and so you can evaluate the polynomial at any point in \(O(log(n))\) time. Additionally, you might notice that \(G^*_0\) is a commitment to this polynomial, so we can directly prove evaluations! So here is what we do:The prover computes the above polynomial for each proof; we'll call these polynomials \(K_1\) with \(com(K_1) = D_1\) and \(K_2\) with \(com(K_2) = D_2\). In a "normal" verification, the verifier would be computing \(D_1\) and \(D_2\) themselves as these are just the \(G^*_0\) values for their respective proofs. Here, the prover provides \(D_1\) and \(D_2\) and the rest of the work is proving that they're correct. To prove the correctness of \(D_1\) and \(D_2\) we'll prove that they're correct at a random point. We choose a random point \(t\), and evaluate both \(e_1 = K_1(t)\) and \(e_2 = K_2(t)\) The prover generates a random linear combination \(L(x) = K_1(x) + rK_2(x)\) (and the verifier can generate \(com(L) = D_1 + rD_2\)). The prover now just needs to make a single proof that \(L(t) = e_1 + re_2\). The verifier still needs to do a bunch of extra steps, but all of those steps take either \(O(1)\) or \(O(log(n))\) work: evaluate \(e_1 = K_1(t)\) and \(e_2 = K_2(t)\), calculate the \(\alpha_i\) coefficients of both \(K_i\) polynomials in the first place, do the elliptic curve addition \(com(L) = D_1 + rD_2\). But this all takes vastly less than linear time, so all in all we still benefit: the verifier only needs to do the linear-time step of computing a \(G^*_0\) point themselves once.This technique can easily be generalized to merge \(m > 2\) proofs.
Instead of using the proof-combining trick to combine multiple proofs, we use it on a single proof, just to re-randomize the point that the polynomial committed to by \(G^*_0\) needs to be evaluated at. We then use the same newly chosen evaluation point to evaluate the polynomials in the proof of the block's correctness, which allows us to prove the polynomial evaluations together in a single IPA.Expressed in math:Let \(P(z) = a\) be the previous statement that needs to be proven The prover generates \(G^*_0\) The prover proves the correctness of the new block plus the logarithmic work in the previous statements by generating a PLONK proof: \(Q_L * A + Q_R * B + Q_O * C + Q_M * A * B + Q_C = Z * H\) The prover chooses a random point \(t\), and proves the evaluation of a linear combination of the polynomials in the above equation (along with \(P\)) at \(t\). We can then check the above equation, replacing each polynomial with its now-verified evaluation at \(t\), to verify the PLONK proof. Incremental verification, more generallyThe size of each "step" does not need to be a full block verification; it could be something as small as a single step of a virtual machine. The smaller the steps the better: it ensures that the linear work that the verifier ultimately has to do at the end is less. The only lower bound is that each step has to be big enough to contain a SNARK verifying the \(log(n)\) portion of the work of a step.But regardless of the fine details, this mechanism allows us to make succinct and easy-to-verify SNARKs, including easy support for recursive proofs that allow you to extend proofs in real time as the computation extends and even have different provers to do different parts of the proving work, all without pairings or a trusted setup! The main downside is some extra technical complexity, compared with a "simple" polynomial-based proof using eg. KZG-based commitments.

Technology | Cryptographic assumptions | Proof size | Verification time
FRI | Hashes only (quantum safe!) | Large (10-200 kB) | Medium (poly-logarithmic)
Inner product arguments (IPAs) | Basic elliptic curves | Medium (1-3 kB) | Very high (linear)
KZG commitments | Elliptic curves + pairings + trusted setup | Short (~500 bytes) | Low (constant)
IPA + Halo-style aggregation | Basic elliptic curves | Medium (1-3 kB) | Medium (constant but higher than KZG)

Not just polynomials! Merging R1CS proofsA common alternative to building SNARKs out of polynomial games is building SNARKs out of matrix-vector multiplication games. Polynomials and vectors+matrices are both natural bases for SNARK protocols because they are mathematical abstractions that can store and compute over large amounts of data at the same time, and that admit commitment schemes that allow verifiers to check equations quickly.In R1CS (see a more detailed description here), an instance of the game consists of three matrices \(A\), \(B\), \(C\), and a solution is a vector \(Z\) such that \((A \cdot Z) \circ (B \cdot Z) = C \cdot Z\) (the problem is often in practice restricted further by requiring the prover to make part of \(Z\) public and requiring the last entry of \(Z\) to be 1). An R1CS instance with a single constraint (so \(A\), \(B\) and \(C\) have width 1), with a satisfying \(Z\) vector, though notice that here the \(Z\) appears on the left and has 1 in the top position instead of the bottom.
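As a minimal illustration of what checking an R1CS instance means, here is a sketch matching the single-constraint example just mentioned; the numbers are made up, and a real system would work over a large prime field rather than the integers:

```python
import numpy as np

# One constraint encoding x * x = 25, with witness vector Z = [1, x]
# (the 1 in the top position, as in the example above).
A = np.array([[0, 1]])    # A.Z picks out x
B = np.array([[0, 1]])    # B.Z picks out x
C = np.array([[25, 0]])   # C.Z picks out 25 * 1

def is_satisfied(A, B, C, Z):
    # (A.Z) o (B.Z) = C.Z, where o is element-wise multiplication
    return np.array_equal((A @ Z) * (B @ Z), C @ Z)

print(is_satisfied(A, B, C, np.array([1, 5])))   # True:  5 * 5 == 25
print(is_satisfied(A, B, C, np.array([1, 6])))   # False: 6 * 6 != 25
```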
Just like with polynomial-based SNARKs, this R1CS game can be turned into a proof scheme by creating commitments to \(A\), \(B\) and \(C\), requiring the prover to provide a commitment to (the private portion of) \(Z\), and using fancy proving tricks to prove the equation \((A \cdot Z) \circ (B \cdot Z) = C \cdot Z\), where \(\circ\) is item-by-item multiplication, without fully revealing any of these objects. And just like with IPAs, this R1CS game has a proof merging scheme!Ioanna Tzialla et al describe such a scheme in a recent paper (see page 8-9 for their description). They first modify the game by introducing an expanded equation:\[ (A \cdot Z) \circ (B \cdot Z) - u * (C \cdot Z) = E\]For a "base" instance, \(u = 1\) and \(E = 0\), so we get back the original R1CS equation. The extra slack variables are added to make aggregation possible; aggregated instances will have other values of \(u\) and \(E\). Now, suppose that you have two solutions to the same instance, though with different \(u\) and \(E\) variables:\[(A \cdot Z_1) \circ (B \cdot Z_1) - u_1 * (C \cdot Z_1) = E_1\]\[(A \cdot Z_2) \circ (B \cdot Z_2) - u_2 * (C \cdot Z_2) = E_2\]The trick involves taking a random linear combination \(Z_3 = Z_1 + r Z_2\), and making the equation work with this new value. First, let's evaluate the left side:\[ (A \cdot (Z_1 + rZ_2)) \circ (B \cdot (Z_1 + rZ_2)) - (u_1 + ru_2)*(C \cdot (Z_1 + rZ_2)) \]This expands into the following (grouping the \(1\), \(r\) and \(r^2\) terms together):\[(A \cdot Z_1) \circ (B \cdot Z_1) - u_1 * (C \cdot Z_1)\]\[r((A \cdot Z_1) \circ (B \cdot Z_2) + (A \cdot Z_2) \circ (B \cdot Z_1) - u_1 * (C \cdot Z_2) - u_2 * (C \cdot Z_1))\]\[r^2((A \cdot Z_2) \circ (B \cdot Z_2) - u_2 * (C \cdot Z_2))\]The first term is just \(E_1\); the third term is \(r^2 * E_2\). The middle term is very similar to the cross-term (the yellow + green areas) near the very start of this post. The prover simply provides the middle term (without the \(r\) factor), and just like in the IPA proof, the randomization forces the prover to be honest.Hence, it's possible to make merging schemes for R1CS-based protocols too. Interestingly enough, we don't even technically need to have a "succinct" protocol for proving the \[ (A \cdot Z) \circ (B \cdot Z) = u * (C \cdot Z) + E\] relation at the end; instead, the prover could just prove by opening all the commitments directly! This would still be "succinct" because the verifier would only need to verify one proof that actually represents an arbitrarily large number of statements. However, in practice having a succinct protocol for this last step is better because it keeps the proofs smaller, and Tzialla et al's paper provides such a protocol too (see page 10).RecapWe don't know of a way to make a commitment to a size-\(n\) polynomial where evaluations of the polynomial can be verified in \(< O(n)\) time directly. The best that we can do is make a \(log(n)\) sized proof, where all of the work to verify it is logarithmic except for one final \(O(n)\)-time piece. But what we can do is merge multiple proofs together. Given \(m\) proofs of evaluations of size-\(n\) polynomials, you can make a proof that covers all of these evaluations, that takes logarithmic work plus a single size-\(n\) polynomial proof to verify. With some clever trickery, separating out the logarithmic parts from the linear parts of proof verification, we can leverage this to make recursive SNARKs. 
These recursive SNARKs are actually more efficient than doing recursive SNARKs "directly"! In fact, even in contexts where direct recursive SNARKs are possible (eg. proofs with KZG commitments), Halo-style techniques are typically used instead because they are more efficient. It's not just about polynomials; other games used in SNARKs like R1CS can also be aggregated in similar clever ways. No pairings or trusted setups required! The march toward faster and more efficient and safer ZK-SNARKs just keeps going...
2024-10-22
Crypto Cities
Crypto Cities2021 Oct 31 See all posts Crypto Cities Special thanks to Mr Silly and Tina Zhen for early feedback on the post, and to a big long list of people for discussion of the ideas.One interesting trend of the last year has been the growth of interest in local government, and in the idea of local governments that have wider variance and do more experimentation. Over the past year, Miami mayor Francis Suarez has pursued a Twitter-heavy tech-startup-like strategy of attracting interest in the city, frequently engaging with the mainstream tech industry and crypto community on Twitter. Wyoming now has a DAO-friendly legal structure, Colorado is experimenting with quadratic voting, and we're seeing more and more experiments making more pedestrian-friendly street environments for the offline world. We're even seeing projects with varying degrees of radicalness - Cul de sac, Telosa, CityDAO, Nkwashi, Prospera and many more - trying to create entire neighborhoods and cities from scratch.Another interesting trend of the last year has been the rapid mainstreaming of crypto ideas such as coins, non-fungible tokens and decentralized autonomous organizations (DAOs). So what would happen if we combine the two trends together? Does it make sense to have a city with a coin, an NFT, a DAO, some record-keeping on-chain for anti-corruption, or even all four? As it turns out, there are already people trying to do just that:CityCoins.co, a project that sets up coins intended to become local media of exchange, where a portion of the issuance of the coin goes to the city government. MiamiCoin already exists, and "San Francisco Coin" appears to be coming soon. Other experiments with coin issuance (eg. see this project in Seoul) Experiments with NFTs, often as a way of funding local artists. Busan is hosting a government-backed conference exploring what they could do with NFTs. Reno mayor Hillary Schieve's expansive vision for blockchainifying the city, including NFT sales to support local art, a RenoDAO with RenoCoins issued to local residents that could get revenue from the government renting out properties, blockchain-secured lotteries, blockchain voting and more. Much more ambitious projects creating crypto-oriented cities from scratch: see CityDAO, which describes itself as, well, "building a city on the Ethereum blockchain" - DAOified governance and all. But are these projects, in their current form, good ideas? Are there any changes that could make them into better ideas? Let us find out...Why should we care about cities?Many national governments around the world are showing themselves to be inefficient and slow-moving in response to long-running problems and rapid changes in people's underlying needs. In short, many national governments are missing live players. Even worse, many of the outside-the-box political ideas that are being considered or implemented for national governance today are honestly quite terrifying. Do you want the USA to be taken over by a clone of WW2-era Portuguese dictator Antonio Salazar, or perhaps an "American Caesar", to beat down the evil scourge of American leftism? For every idea that can be reasonably described as freedom-expanding or democratic, there are ten that are just different forms of centralized control and walls and universal surveillance.Now consider local governments. Cities and states, as we've seen from the examples at the start of this post, are at least in theory capable of genuine dynamism. 
There are large and very real differences of culture between cities, so it's easier to find a single city where there is public interest in adopting any particular radical idea than it is to convince an entire country to accept it. There are very real challenges and opportunities in local public goods, urban planning, transportation and many other sectors in the governance of cities that could be addressed. Cities have tightly cohesive internal economies where things like widespread cryptocurrency adoption could realistically independently happen. Furthermore, it's less likely that experiments within cities will lead to terrible outcomes both because cities are regulated by higher-level governments and because cities have an easier escape valve: people who are unhappy with what's going on can more easily exit.So all in all, it seems like the local level of government is a very undervalued one. And given that criticism of existing smart city initiatives often heavily focuses on concerns around centralized governance, lack of transparency and data privacy, blockchain and cryptographic technologies seem like a promising key ingredient for a more open and participatory way forward.What are city projects up to today?Quite a lot actually! Each of these experiments is still small scale and largely still trying to find its way around, but they are all at least seeds that could turn into interesting things. Many of the most advanced projects are in the United States, but there is interest across the world; over in Korea the government of Busan is running an NFT conference. Here are a few examples of what is being done today.Blockchain experiments in RenoReno, Nevada mayor Hillary Schieve is a blockchain fan, focusing primarily on the Tezos ecosystem, and she has recently been exploring blockchain-related ideas (see her podcast here) in the governance of her city:Selling NFTs to fund local art, starting with an NFT of the "Space Whale" sculpture in the middle of the city Creating a Reno DAO, governed by Reno coins that Reno residents would be eligible to receive via an airdrop. The Reno DAO could start to get sources of revenue; one proposed idea was the city renting out properties that it owns and the revenue going into a DAO Using blockchains to secure all kinds of processes: blockchain-secured random number generators for casinos, blockchain-secured voting, etc. Reno space whale. Source here. CityCoins.coCityCoins.co is a project built on Stacks, a blockchain run by an unusual "proof of transfer" (for some reason abbreviated PoX and not PoT) block production algorithm that is built around the Bitcoin blockchain and ecosystem. 70% of the coin's supply is generated by an ongoing sale mechanism: anyone with STX (the Stacks native token) can send their STX to the city coin contract to generate city coins; the STX revenues are distributed to existing city coin holders who stake their coins. The remaining 30% is made available to the city government.CityCoins has made the interesting decision of trying to make an economic model that does not depend on any government support. The local government does not need to be involved in creating a CityCoins.co coin; a community group can launch a coin by themselves. An FAQ-provided answer to "What can I do with CityCoins?" includes examples like "CityCoins communities will create apps that use tokens for rewards" and "local businesses can provide discounts or benefits to people who ... stack their CityCoins". 
In practice, however, the MiamiCoin community is not going at it alone; the Miami government has already de-facto publicly endorsed it. MiamiCoin hackathon winner: a site that allows coworking spaces to give preferential offers to MiamiCoin holders. CityDAOCityDAO is the most radical of the experiments: Unlike Miami and Reno, which are existing cities with existing infrastructure to be upgraded and people to be convinced, CityDAO is a DAO with legal status under the Wyoming DAO law (see their docs here) trying to create entirely new cities from scratch.So far, the project is still in its early stages. The team is currently finalizing a purchase of their first plot of land in a far-off corner of Wyoming. The plan is to start with this plot of land, and then add other plots of land in the future, to build cities, governed by a DAO and making heavy use of radical economic ideas like Harberger taxes to allocate the land, make collective decisions and manage resources. Their DAO is one of the progressive few that is avoiding coin voting governance; instead, the governance is a voting scheme based on "citizen" NFTs, and ideas have been floated to further limit votes to one-per-person by using proof-of-humanity verification. The NFTs are currently being sold to crowdfund the project; you can buy them on OpenSea. What do I think cities could be up to?Obviously there are a lot of things that cities could do in principle. They could add more bike lanes, they could use CO2 meters and far-UVC light to more effectively reduce COVID spread without inconveniencing people, and they could even fund life extension research. But my primary specialty is blockchains and this post is about blockchains, so... let's focus on blockchains.I would argue that there are two distinct categories of blockchain ideas that make sense:Using blockchains to create more trusted, transparent and verifiable versions of existing processes. Using blockchains to implement new and experimental forms of ownership for land and other scarce assets, as well as new and experimental forms of democratic governance. There's a natural fit between blockchains and both of these categories. Anything happening on a blockchain is very easy to publicly verify, with lots of ready-made freely available tools to help people do that. Any application built on a blockchain can immediately plug in to and interface with other applications in the entire global blockchain ecosystem. Blockchain-based systems are efficient in a way that paper is not, and publicly verifiable in a way that centralized computing systems are not - a necessary combination if you want to, say, make a new form of voting that allows citizens to give high-volume real-time feedback on hundreds or thousands of different issues.So let's get into the specifics.What are some existing processes that blockchains could make more trusted and transparent?One simple idea that plenty of people, including government officials around the world, have brought up to me on many occasions is the idea of governments creating a whitelisted internal-use-only stablecoin for tracking internal government payments. Every tax payment from an individual or organization could be tied to a publicly visible on-chain record minting that number of coins (if we want individual tax payment quantities to be private, there are zero-knowledge ways to make only the total public but still convince everyone that it was computed correctly).
Transfers between departments could be done "in the clear", and the coins would be redeemed only by individual contractors or employees claiming their payments and salaries. This system could easily be extended. For example, procurement processes for choosing which bidder wins a government contract could largely be done on-chain.Many more processes could be made more trustworthy with blockchains:Fair random number generators (eg. for lotteries) - VDFs, such as the one Ethereum is expected to include, could serve as a fair random number generator that could be used to make government-run lotteries more trustworthy. Fair randomness could also be used for many other use cases, such as sortition as a form of government. Certificates, for example cryptographic proofs that some particular individual is a resident of the city, could be done on-chain for added verifiability and security (eg. if such certificates are issued on-chain, it would become obvious if a large number of false certificates are issued). This can be used by all kinds of local-government-issued certificates. Asset registries, for land and other assets, as well as more complicated forms of property ownership such as development rights. Due to the need for courts to be able to make assignments in exceptional situations, these registries will likely never be fully decentralized bearer instruments in the same way that cryptocurrencies are, but putting records on-chain can still make it easier to see what happened in what order in a dispute. Eventually, even voting could be done on-chain. Here, many complexities and dragons loom and it's really important to be careful; a sophisticated solution combining blockchains, zero knowledge proofs and other cryptography is needed to achieve all the desired privacy and security properties. However, if humanity is ever going to move to electronic voting at all, local government seems like the perfect place to start.What are some radical economic and governance experiments that could be interesting?But in addition to these kinds of blockchain overlays onto things that governments already do, we can also look at blockchains as an opportunity for governments to make completely new and radical experiments in economics and governance. These are not necessarily final ideas on what I think should be done; they are more initial explorations and suggestions for possible directions. Once an experiment starts, real-world feedback is often by far the most useful variable to determine how the experiment should be adjusted in the future.Experiment #1: a more comprehensive vision of city tokensCityCoins.co is one vision for how city tokens could work. But it is far from the only vision. Indeed, the CityCoins.co approach has significant risks, particularly in how its economic model is heavily tilted toward early adopters. 70% of the STX revenue from minting new coins is given to existing stakers of the city coin. More coins will be issued in the next five years than in the fifty years that follow. It's a good deal for the government in 2021, but what about 2051? Once a government endorses a particular city coin, it becomes difficult for it to change directions in the future. Hence, it's important for city governments to think carefully about these issues, and choose a path that makes sense for the long term.Here is a different possible sketch of a narrative of how city tokens might work.
It's far from the only possible alternative to the CityCoins.co vision; see Steve Waldman's excellent article arguing for a city-localized medium of exchange for yet another possible direction. In any case, city tokens are a wide design space, and there are many different options worth considering. Anyway, here goes...The concept of home ownership in its current form is a notable double-edged sword, and the specific ways in which it's actively encouraged and legally structured is considered by many to be one of the biggest economic policy mistakes that we are making today. There is an inevitable political tension between a home as a place to live and a home as an investment asset, and the pressure to satisfy communities who care about the latter often ends up severely harming the affordability of the former. A resident in a city either owns a home, making them massively over-exposed to land prices and introducing perverse incentives to fight against construction of new homes, or they rent a home, making them negatively exposed to the real estate market and thus putting them economically at odds with the goal of making a city a nice place to live.But even despite all of these problems, many still find home ownership to be not just a good personal choice, but something worthy of actively subsidizing or socially encouraging. One big reason is that it nudges people to save money and build up their net worth. Another big reason is that despite its flaws, it creates economic alignment between residents and the communities they live in. But what if we could give people a way to save and create that economic alignment without the flaws? What if we could create a divisible and fungible city token, that residents could hold as many units of as they can afford or feel comfortable with, and whose value goes up as the city prospers?First, let's start with some possible objectives. Not all are necessary; a token that accomplishes only three of the five is already a big step forward. But we'll try to hit as many of them as possible:Get sustainable sources of revenue for the government. The city token economic model should avoid redirecting existing tax revenue; instead, it should find new sources of revenue. Create economic alignment between residents and the city. This means first of all that the coin itself should clearly become more valuable as the city becomes more attractive. But it also means that the economics should actively encourage residents to hold the coin more than faraway hedge funds. Promote saving and wealth-building. Home ownership does this: as home owners make mortgage payments, they build up their net worth by default. City tokens could do this too, making it attractive to accumulate coins over time, and even gamifying the experience. Encourage more pro-social activity, such as positive actions that help the city and more sustainable use of resources. Be egalitarian. Don't unduly favor wealthy people over poor people (as badly designed economic mechanisms often do accidentally). A token's divisibility, avoiding a sharp binary divide between haves and have-nots, does a lot already, but we can go further, eg. by allocating a large portion of new issuance to residents as a UBI. One pattern that seems to easily meet the first three objectives is providing benefits to holders: if you hold at least X coins (where X can go up over time), you get some set of services for free. 
MiamiCoin is trying to encourage businesses to do this, but we could go further and make government services work this way too. One simple example would be making existing public parking spaces only available for free to those who hold at least some number of coins in a locked-up form. This would serve a few goals at the same time:Create an incentive to hold the coin, sustaining its value. Create an incentive specifically for residents to hold the coin, as opposed to otherwise-unaligned faraway investors. Furthermore, the incentive's usefulness is capped per-person, so it encourages widely distributed holdings. Creates economic alignment (city becomes more attractive -> more people want to park -> coins have more value). Unlike home ownership, this creates alignment with an entire town, and not merely a very specific location in a town. Encourage sustainable use of resources: it would reduce usage of parking spots (though people without coins who really need them could still pay), supporting many local governments' desires to open up more space on the roads to be more pedestrian-friendly. Alternatively, restaurants could also be allowed to lock up coins through the same mechanism and claim parking spaces to use for outdoor seating. But to avoid perverse incentives, it's extremely important to avoid overly depending on one specific idea and instead to have a diverse array of possible revenue sources. One excellent gold mine of places to give city tokens value, and at the same time experiment with novel governance ideas, is zoning. If you hold at least Y coins, then you can quadratically vote on the fee that nearby landowners have to pay to bypass zoning restrictions. This hybrid market + direct democracy based approach would be much more efficient than current overly cumbersome permitting processes, and the fee itself would be another source of government revenue. More generally, any of the ideas in the next section could be combined with city tokens to give city token holders more places to use them.Experiment #2: more radical and participatory forms of governanceThis is where Radical Markets ideas such as Harberger taxes, quadratic voting and quadratic funding come in. I already brought up some of these ideas in the section above, but you don't have to have a dedicated city token to do them. Some limited government use of quadratic voting and funding has already happened: see the Colorado Democratic party and the Taiwanese presidential hackathon, as well as not-yet-government-backed experiments like Gitcoin's Boulder Downtown Stimulus. But we could do more!One obvious place where these ideas can have long-term value is giving developers incentives to improve the aesthetics of buildings that they are building (see here, here, here and here for some recent examples of professional blabbers debating the aesthetics of modern architecture). Harberger taxes and other mechanisms could be used to radically reform zoning rules, and blockchains could be used to administer such mechanisms in a more trustworthy and efficient way. Another idea that is more viable in the short term is subsidizing local businesses, similar to the Downtown Stimulus but on a larger and more permanent scale. Businesses produce various kinds of positive externalities in their local communities all the time, and those externalities could be more effectively rewarded. Local news could be quadratically funded, revitalizing a long-struggling industry. 
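For concreteness, here is a minimal sketch of the standard quadratic funding matching formula behind ideas like the Downtown Stimulus and quadratically funded local news mentioned above. It is only an illustration under simplifying assumptions (no identity checks, no pairwise collusion discounts); the project names and amounts are made up.

```python
import math

def quadratic_match(contributions, matching_pool):
    """Toy quadratic funding: a project's ideal match is
    (sum of sqrt(individual contributions))^2 minus the raw total,
    scaled down proportionally if the matching pool is too small."""
    raw = {}
    for project, donations in contributions.items():
        total = sum(donations)
        raw[project] = sum(math.sqrt(d) for d in donations) ** 2 - total
    total_raw = sum(raw.values())
    scale = min(1.0, matching_pool / total_raw) if total_raw > 0 else 0.0
    return {project: amount * scale for project, amount in raw.items()}

# Many small donors attract a far larger match than one big donor of the same total.
print(quadratic_match(
    {"local_news": [10] * 100,        # 100 donors giving $10 each
     "big_backer_project": [1000]},   # one donor giving $1000
    matching_pool=5000,
))
```

The point of the formula is visible in the example: broad support from many small contributors is matched much more heavily than the same dollar amount from a single large backer.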
Pricing for advertisements could be set based on real-time votes of how much people enjoy looking at each particular ad, encouraging more originality and creativity. More democratic feedback (and possibly even retroactive democratic feedback!) could plausibly create better incentives in all of these areas. And 21st-century digital democracy through real-time online quadratic voting and funding could plausibly do a much better job than 20th-century democracy, which seems in practice to have been largely characterized by rigid building codes and obstruction at planning and permitting hearings. And of course, if you're going to use blockchains to secure voting, starting off by doing it with fancy new kinds of votes seems far more safe and politically feasible than re-fitting existing voting systems.
Mandatory solarpunk picture intended to evoke a positive image of what might happen to our cities if real-time quadratic votes could set subsidies and prices for everything.
Conclusions
There are a lot of worthwhile ideas for cities to experiment with that could be attempted by existing cities or by new cities. New cities of course have the advantage of not having existing residents with existing expectations of how things should be done; but the concept of creating a new city itself is, in modern times, relatively untested. Perhaps the multi-billion-dollar capital pools in the hands of people and projects enthusiastic to try new things could get us over the hump. But even then, existing cities will likely continue to be the place where most people live for the foreseeable future, and existing cities can use these ideas too.
Blockchains can be very useful in both the more incremental and more radical ideas that were proposed here, even despite the inherently "trusted" nature of a city government. Running any new or existing mechanism on-chain gives the public an easy ability to verify that everything is following the rules. Public chains are better: the benefits from existing infrastructure for users to independently verify what is going on far outweigh the losses from transaction fees, which are expected to decrease quickly thanks to rollups and sharding. If strong privacy is required, blockchains can be combined with zero knowledge cryptography to give privacy and security at the same time.
The main trap that governments should avoid is too quickly sacrificing optionality. An existing city could fall into this trap by launching a bad city token instead of taking things more slowly and launching a good one. A new city could fall into this trap by selling off too much land, sacrificing the entire upside to a small group of early adopters. Starting with self-contained experiments, and taking things slowly on moves that are truly irreversible, is ideal. But at the same time, it's also important to seize the opportunity in the first place. There's a lot that can and should be improved with cities, and a lot of opportunities; despite the challenges, crypto cities broadly are an idea whose time has come.
On Nathan Schneider on the limits of cryptoeconomics
On Nathan Schneider on the limits of cryptoeconomics2021 Sep 26 See all posts On Nathan Schneider on the limits of cryptoeconomics Nathan Schneider has recently released an article describing his perspectives on cryptoeconomics, and particularly on the limits of cryptoeconomic approaches to governance and what cryptoeconomics could be augmented with to improve its usefulness. This is, of course, a topic that is dear to me ([1] [2] [3] [4] [5]), so it is heartening to see someone else take the blockchain space seriously as an intellectual tradition and engage with the issues from a different and unique perspective.The main question that Nathan's piece is trying to explore is simple. There is a large body of intellectual work that criticizes a bubble of concepts that they refer to as "economization", "neoliberalism" and similar terms, arguing that they corrode democratic political values and leave many people's needs unmet as a result. The world of cryptocurrency is very economic (lots of tokens flying around everywhere, with lots of functions being assigned to those tokens), very neo (the space is 12 years old!) and very liberal (freedom and voluntary participation are core to the whole thing). Do these critiques also apply to blockchain systems? If so, what conclusions should we draw, and how could blockchain systems be designed to account for these critiques? Nathan's answer: more hybrid approaches combining ideas from both economics and politics. But what will it actually take to achieve that, and will it give the results that we want? My answer: yes, but there's a lot of subtleties involved.What are the critiques of neoliberalism and economic logic?Near the beginning of Nathan's piece, he describes the critiques of overuse of economic logic briefly. That said, he does not go much further into the underlying critiques himself, preferring to point to other sources that have already covered the issue in depth:The economics in cryptoeconomics raises a particular set of anxieties. Critics have long warned against the expansion of economic logics, crowding out space for vigorous politics in public life. From the Zapatista insurgents of southern Mexico (Hayden, 2002) to political theorists like William Davies (2014) and Wendy Brown (2015), the "neoliberal" aspiration for economics to guide all aspects of society represents a threat to democratic governance and human personhood itself. Here is Brown:Neoliberalism transmogrifies every human domain and endeavor, along with humans themselves, according to a specific image of the economic. All conduct is economic conduct; all spheres of existence are framed and measured by economic terms and metrics, even when those spheres are not directly monetized. In neoliberal reason and in domains governed by it, we are only and everywhere homo oeconomicus (p. 10)For Brown and other critics of neoliberalism, the ascent of the economic means the decline of the political—the space for collective determinations of the common good and the means of getting there.At this point, it's worth pointing out that the "neoliberalism" being criticized here is not the same as the "neoliberalism" that is cheerfully promoted by the lovely folks at The Neoliberal Project; the thing being critiqued here is a kind of "enough two-party trade can solve everything" mentality, whereas The Neoliberal Project favors a mix of markets and democracy. But what is the thrust of the critique that Nathan is pointing to? What's the problem with everyone acting much more like homo oeconomicus? 
For this, we can take a detour and peek into the source, Wendy Brown's Undoing the Demos, itself. The book helpfully provides a list of the top "four deleterious effects" (the below are reformatted and abridged but direct quotes):Intensified inequality, in which the very top strata acquires and retains ever more wealth, the very bottom is literally turned out on the streets or into the growing urban and sub-urban slums of the world, while the middle strata works more hours for less pay, fewer benefits, less security... Crass or unethical commercialization of things and activities considered inappropriate for marketization. The claim is that marketization contributes to human exploitation or degradation, [...] limits or stratifies access to what ought to be broadly accessible and shared, [...] or because it enables something intrinsically horrific or severely denigrating to the planet. Ever-growing intimacy of corporate and finance capital with the state, and corporate domination of political decisions and economic policy Economic havoc wreaked on the economy by the ascendance and liberty of finance capital, especially the destabilizing effects of the inherent bubbles and other dramatic fluctuations of financial markets. The bulk of Nathan's article follows along with analyses of how these issues affect DAOs and governance mechanisms within the crypto space specifically. Nathan focuses on three key problems:Plutocracy: "Those with more tokens than others hold more [I would add, disproportionately more] decision-making power than others..." Limited exposure to diverse motivations: "Cryptoeconomics sees only a certain slice of the people involved. Concepts such as self- sacrifice, duty, and honor are bedrock features of most political and business organizations, but difficult to simulate or approximate with cryptoeconomic incentive design" Positive and negative externalities: "Environmental costs are classic externalities—invisible to the feedback loops that the system understands and that communicate to its users as incentives ... the challenge of funding"public goods" is another example of an externality - and one that threatens the sustainability of crypteconomic systems" The natural questions that arise for me are (i) to what extent do I agree with this critique at all and how it fits in with my own thinking, and (ii) how does this affect blockchains, and what do blockchain protocols need to actually do to avoid these traps?What do I think of the critiques of neoliberalism generally?I disagree with some, agree with others. I have always been suspicious of criticism of "crass and unethical commercialization", because it frequently feels like the author is attempting to launder their own feelings of disgust and aesthetic preferences into grand ethical and political ideologies - a sin common among all such ideologies, often the right (random example here) even more than the left. Back in the days when I had much less money and would sometimes walk a full hour to the airport to avoid a taxi fare, I remember thinking that I would love to get compensated for donating blood or using my body for clinical trials. And so to me, the idea that such transactions are inhuman exploitation has never been appealing.But at the same time, I am far from a Walter Block-style defender of all locally-voluntary two-party commerce. 
I've written up my own viewpoints expressing similar concerns to parts of Wendy Brown's list in various articles:
Multiple pieces decrying the evils of vote buying, or even financialized governance generally.
The importance of public goods funding.
Failure modes in financial markets due to subtle issues like capital efficiency.
So where does my own opposition to mixing finance and governance come from? This is a complicated topic, and my conclusions are in large part a result of my own failure after years of attempts to find a financialized governance mechanism that is economically stable. So here goes...
Finance is the absence of collusion prevention
Out of the standard assumptions in what gets pejoratively called "spherical cow economics", people normally tend to focus on the unrealistic nature of perfect information and perfect rationality. But the unrealistic assumption that is hidden in the list that strikes me as even more misleading is individual choice: the idea that each agent is separately making their own decisions, no agent has a positive or negative stake in another agent's outcomes, and there are no "side games"; the only thing that sees each agent's decisions is the black box that we call "the mechanism". This assumption is often used to bootstrap complex contraptions such as the VCG mechanism, whose theoretical optimality is based on beautiful arguments that because the price each player pays only depends on other players' bids, each player has no incentive to make a bid that does not reflect their true value in order to manipulate the price. A beautiful argument in theory, but it breaks down completely once you introduce the possibility that even two of the players are either allies or adversaries outside the mechanism.
Economics, and economics-inspired philosophy, is great at describing the complexities that arise when the number of players "playing the game" increases from one to two (see the tale of Crusoe and Friday in Murray Rothbard's The Ethics of Liberty for one example). But what this philosophical tradition completely misses is that going up to three players adds an even further layer of complexity. In an interaction between two people, the two can ignore each other, fight or trade. In an interaction between three people, there exists a new strategy: any two of the three can communicate and band together to gang up on the third. Three is the smallest number of players where it's possible to talk about a 51%+ attack that leaves someone outside the clique as a victim.
When there's only two people, more coordination can only be good. But once there's three people, the wrong kind of coordination can be harmful, and techniques to prevent harmful coordination (including decentralization itself) can become very valuable. And it's this management of coordination that is the essence of "politics". Going from two people to three introduces the possibility of harms from unbalanced coordination: it's not just "the individual versus the group", it's "the individual versus the group versus the world". Now, we can try to use this framework to understand the pitfalls of "finance". Finance can be viewed as a set of patterns that naturally emerge in many kinds of systems that do not attempt to prevent collusion. Any system which claims to be non-finance, but does not actually make an effort to prevent collusion, will eventually acquire the characteristics of finance, if not something worse.
To see why this is the case, compare two point systems we are all familiar with: money, and Twitter likes. Both kinds of points are valuable for extrinsic reasons, both have inevitably become status symbols, and both are number games where people spend a lot of time optimizing to try to get a higher score. And yet, they behave very differently. So what's the fundamental difference between the two?The answer is simple: it's the lack of an efficient market to enable agreements like "I like your tweet if you like mine", or "I like your tweet if you pay me in some other currency". If such a market existed and was easy to use, Twitter would collapse completely (something like hyperinflation would happen, with the likely outcome that everyone would run automated bots that like every tweet to claim rewards), and even the likes-for-money markets that exist illicitly today are a big problem for Twitter. With money, however, "I send X to you if you send Y to me" is not an attack vector, it's just a boring old currency exchange transaction. A Twitter clone that does not prevent like-for-like markets would "hyperinflate" into everyone liking everything, and if that Twitter clone tried to stop the hyperinflation by limiting the number of likes each user can make, the likes would behave like a currency, and the end result would behave the same as if Twitter just added a tipping feature.So what's the problem with finance? Well, if finance is optimized and structured collusion, then we can look for places where finance causes problems by using our existing economic tools to understand which mechanisms break if you introduce collusion! Unfortunately, governance by voting is a central example of this category; I've covered why in the "moving beyond coin voting governance" post and many other occasions. Even worse, cooperative game theory suggests that there might be no possible way to make a fully collusion-resistant governance mechanism.And so we get the fundamental conundrum: the cypherpunk spirit is fundamentally about making maximally immutable systems that work with as little information as possible about who is participating ("on the internet, nobody knows you're a dog"), but making new forms of governance requires the system to have richer information about its participants and ability to dynamically respond to attacks in order to remain stable in the face of actors with unforeseen incentives. Failure to do this means that everything looks like finance, which means, well.... perennial over-representation of concentrated interests, and all the problems that come as a result. On the internet, nobody knows if you're 0.0244 of a dog (image source). But what does this mean for governance? The central role of collusion in understanding the difference between Kleros and regular courtsNow, let us get back to Nathan's article. The distinction between financial and non-financial mechanisms is key in the article. Let us start off with a description of the Kleros court:The jurors stood to earn rewards by correctly choosing the answer that they expected other jurors to independently select. This process implements the "Schelling point" concept in game theory (Aouidef et al., 2021; Dylag & Smith, 2021). Such a jury does not deliberate, does not seek a common good together; its members unite through self-interest. Before coming to the jury, the factual basis of the case was supposed to come not from official organs or respected news organizations but from anonymous users similarly disciplined by reward-seeking. 
The prediction market itself was premised on the supposition that people make better forecasts when they stand to gain or lose the equivalent of money in the process. The politics of the presidential election in question, here, had been thoroughly transmuted into a cluster of economies.The implicit critique is clear: the Kleros court is ultimately motivated to make decisions not on the basis of their "true" correctness or incorrectness, but rather on the basis of their financial interests. If Kleros is deciding whether Biden or Trump won the 2020 election, and one Kleros juror really likes Trump, precommits to voting in his favor, and bribes other jurors to vote the same way, other jurors are likely to fall in line because of Kleros's conformity incentives: jurors are rewarded if their vote agrees with the majority vote, and penalized otherwise. The theoretical answer to this is the right to exit: if the majority of Kleros jurors vote to proclaim that Trump won the election, a minority can spin off a fork of Kleros where Biden is considered to have won, and their fork may well get a higher market price than the original. Sometimes, this actually works! But, as Nathan points out, it is not always so simple:But exit may not be as easy as it appears, whether it be from a social-media network or a protocol. The persistent dominance of early-to-market blockchains like Bitcoin and Ethereum suggests that cryptoeconomics similarly favors incumbency.But alongside the implicit critique is an implicit promise: that regular courts are somehow able to rise above self-interest and "seek a common good together" and thereby avoid some of these failure modes. What is it that financialized Kleros courts lack, but non-financialized regular courts retain, that makes them more robust? One possible answer is that courts lack Kleros's explicit conformity incentive. But if you just take Kleros as-is, remove the conformity incentive (say, there's a reward for voting that does not depend on how you vote), and do nothing else, you risk creating even more problems. Kleros judges could get lazy, but more importantly if there's no incentive at all to choose how you vote, even the tiniest bribe could affect a judge's decision.So now we get to the real answer: the key difference between financialized Kleros courts and non-financialized regular courts is that financialized Kleros courts are, well... financialized. They make no effort to explicitly prevent collusion. Non-financialized courts, on the other hand, do prevent collusion in two key ways:Bribing a judge to vote in a particular way is explicitly illegal The judge position itself is non-fungible. It gets awarded to specific carefully-selected individuals, and they cannot simply go and sell or reallocate their entire judging rights and salary to someone else. The only reason why political and legal systems work is that a lot of hard thinking and work has gone on behind the scenes to insulate the decision-makers from extrinsic incentives, and punish them explicitly if they are discovered to be accepting incentives from the outside. The lack of extrinsic motivation allows the intrinsic motivation to shine through. Furthermore, the lack of transferability allows governance power to be given to specific actors whose intrinsic motivations we trust, avoiding governance power always flowing to "the highest bidder". 
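As a toy illustration of the conformity incentive described above: a Schelling-point jury rewards votes that land in the majority and penalizes votes that do not. The sketch below is not Kleros's actual implementation; juror names, stakes and rewards are made up, and real systems layer appeals, stake-weighting and forking on top.

```python
from collections import Counter

def settle_schelling_round(votes, stake, reward):
    """Toy conformity incentive: jurors who voted with the majority earn
    `reward`, jurors in the minority lose their `stake`."""
    majority, _ = Counter(votes.values()).most_common(1)[0]
    payouts = {juror: (reward if vote == majority else -stake)
               for juror, vote in votes.items()}
    return majority, payouts

outcome, payouts = settle_schelling_round(
    {"juror_1": "A", "juror_2": "A", "juror_3": "B"}, stake=100, reward=30)
print(outcome, payouts)   # A {'juror_1': 30, 'juror_2': 30, 'juror_3': -100}
```

Even in this toy form you can see why a credible bribe or public precommitment is dangerous: once a juror expects the majority to land on one side, voting with it is the individually rational move regardless of the truth.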
But in the case of Kleros, the lack of hostile extrinsic motivation cannot be guaranteed, and transferability is unavoidable, and so overpoweringly strong in-mechanism extrinsic motivation (the conformity incentive) was the best solution they could find to deal with the problem.And of course, the "final backstop" that Kleros relies on, the right of users to fork away, itself depends on social coordination to take place - a messy and difficult institution, often derided by cryptoeconomic purists as "proof of social media", that works precisely because public discussion has lots of informal collusion detection and prevention all over the place.Collusion in understanding DAO governance issuesBut what happens when there is no single right answer that they can expect voters to converge on? This is where we move away from adjudication and toward governance (yes, I know that adjudication has unavoidably grey edge cases too. Governance just has them much more often). Nathan writes:Governance by economics is nothing new. Joint-stock companies conventionally operate on plutocratic governance—more shares equals more votes. This arrangement is economically efficient for aligning shareholder interests (Davidson and Potts, this issue), even while it may sideline such externalities as fair wages and environmental impacts...In my opinion, this actually concedes too much! Governance by economics is not "efficient" once you drop the spherical-cow assumption of no collusion, because it is inherently vulnerable to 51% of the stakeholders colluding to liquidate the company and split its resources among themselves. The only reason why this does not happen much more often "in real life" is because of many decades of shareholder regulation that have been explicitly built up to ban the most common types of abuses. This regulation is, of course, non-"economic" (or, in my lingo, it makes corporate governance less financialized), because it's an explicit attempt to prevent collusion.Notably, Nathan's favored solutions do not try to regulate coin voting. Instead, they try to limit the harms of its weaknesses by combining it with additional mechanisms:Rather than relying on direct token voting, as other protocols have done, The Graph uses a board-like mediating layer, the Graph Council, on which the protocol's major stakeholder groups have representatives. In this case, the proposal had the potential to favor one group of stakeholders over others, and passing a decision through the Council requires multiple stakeholder groups to agree. At the same time, the Snapshot vote put pressure on the Council to implement the will of token-holders.In the case of 1Hive, the anti-financialization protections are described as being purely cultural:According to a slogan that appears repeatedly in 1Hive discussions, "Come for the honey, stay for the bees." That is, although economics figures prominently as one first encounters and explores 1Hive, participants understand the community's primary value as interpersonal, social, and non-economic.I am personally skeptical of the latter approach: it can work well in low-economic-value communities that are fun oriented, but if such an approach is attempted in a more serious system with widely open participation and enough at stake to invite determined attack, it will not survive for long. 
As I wrote above, "any system which claims to be non-finance, but does not actually make an effort to prevent collusion, will eventually acquire the characteristics of finance". [Edit/correction 2021.09.27: it has been brought to my attention that in addition to culture, financialization is limited by (i) conviction voting, and (ii) juries enforcing a covenant. I'm skeptical of conviction voting in the long run; many DAOs use it today, but in the long term it can be defeated by wrapper tokens. The covenant, on the other hand, is interesting. My fault for not checking in more detail.]
The money is called honey. But is calling money honey enough to make it work differently than money? If not, how much more do you have to do?
The solution in The Graph is very much an instance of collusion prevention: the participants have been hand-picked to come from diverse constituencies and to be trusted and upstanding people who are unlikely to sell their voting rights. Hence, I am bullish on that approach if it successfully avoids centralization.
So how can we solve these problems more generally?
Nathan's post argues:
A napkin sketch of classical, never-quite-achieved liberal democracy (Brown, 2015) would depict a market (governed through economic incentives) enclosed in politics (governed through deliberation on the common good). Economics has its place, but the system is not economics all the way down; the rules that guide the market, and that enable it in the first place, are decided democratically, on the basis of citizens' civil rights rather than their economic power. By designing democracy into the base-layer of the system, it is possible to overcome the kinds of limitations that cryptoeconomics is vulnerable to, such as by counteracting plutocracy with mass participation and making visible the externalities that markets might otherwise fail to see.
There is one key difference between blockchain political theory and traditional nation-state political theory - and one where, in the long run, nation states may well have to learn from blockchains. Nation-state political theory talks about "markets embedded in democracy" as though democracy is a base layer that encompasses all of society. In reality, this is not true: there are multiple countries, and every country at least to some degree permits trade with outside countries whose behavior they cannot regulate. Individuals and companies have choices about which countries they live in and do business in. Hence, markets are not just embedded in democracy, they also surround it, and the real world is a complicated interplay between the two.
Blockchain systems, instead of trying to fight this interconnectedness, embrace it. A blockchain system has no ability to regulate "the market" in the sense of people's general ability to freely make transactions. But what it can do is regulate and structure (or even create) specific markets, setting up patterns of specific behaviors whose incentives are ultimately set and guided by institutions that have anti-collusion guardrails built in, and can resist pressure from economic actors. And indeed, this is the direction Nathan ends up going in as well. He talks positively about the design of Civil as an example of precisely this spirit:
The aborted Ethereum-based project Civil sought to leverage cryptoeconomics to protect journalism against censorship and degraded professional standards (Schneider, 2020).
Part of the system was the Civil Council, a board of prominent journalists who served as a kind of supreme court for adjudicating the practices of the network's newsrooms. Token holders could earn rewards by successfully challenging a newsroom's practices; the success or failure of a challenge ultimately depended on the judgment of the Civil Council, designed to be free of economic incentives clouding its deliberations. In this way, a cryptoeconomic enforcement market served a non-economic social mission. This kind of design could enable cryptoeconomic networks to serve purposes not reducible to economic feedback loops.This is fundamentally very similar to an idea that I proposed in 2018: prediction markets to scale up content moderation. Instead of doing content moderation by running a low-quality AI algorithm on all content, with lots of false positives, there could be an open mini prediction market on each post, and if the volume got high enough a high-quality committee could step in an adjudicate, and the prediction market participants would be penalized or rewarded based on whether or not they had correctly predicted the outcome. In the mean time, posts with prediction market scores predicting that the post would be removed would not be shown to users who did not explicitly opt-in to participate in the prediction game. There is precedent for this kind of open but accountable moderation: Slashdot meta moderation is arguably a limited version of it. This more financialized version of meta-moderation through prediction markets could produce superior outcomes because the incentives invite highly competent and professional participants to take part.Nathan then expands:I have argued that pairing cryptoeconomics with political systems can help overcome the limitations that bedevil cryptoeconomic governance alone. Introducing purpose-centric mechanisms and temporal modulation can compensate for the blind-spots of token economies. But I am not arguing against cryptoeconomics altogether. Nor am I arguing that these sorts of politics must occur in every app and protocol. Liberal democratic theory permits diverse forms of association and business within a democratic structure, and similarly politics may be necessary only at key leverage points in an ecosystem to overcome the limitations of cryptoeconomics alone.This seems broadly correct. Financialization, as Nathan points out in his conclusion, has benefits in that it attracts a large amount of motivation and energy into building and participating in systems that would not otherwise exist. Furthermore, preventing financialization is very difficult and high cost, and works best when done sparingly, where it is needed most. However, it is also true that financialized systems are much more stable if their incentives are anchored around a system that is ultimately non-financial.Prediction markets avoid the plutocracy issues inherent in coin voting because they introduce individual accountability: users who acted in favor of what ultimately turns out to be a bad decision suffer more than users who acted against it. However, a prediction market requires some statistic that it is measuring, and measurement oracles cannot be made secure through cryptoeconomics alone: at the very least, community forking as a backstop against attacks is required. 
And if we want to avoid the messiness of frequent forks, some other explicit non-financialized mechanism at the center is a valuable alternative.
Conclusions
In his conclusion, Nathan writes:
But the autonomy of cryptoeconomic systems from external regulation could make them even more vulnerable to runaway feedback loops, in which narrow incentives overpower the common good. The designers of these systems have shown an admirable capacity to devise cryptoeconomic mechanisms of many kinds. But for cryptoeconomics to achieve the institutional scope its advocates hope for, it needs to make space for less-economic forms of governance.
If cryptoeconomics needs a political layer, and is no longer self-sufficient, what good is cryptoeconomics? One answer might be that cryptoeconomics can be the basis for securing more democratic and values-centered governance, where incentives can reduce reliance on military or police power. Through mature designs that integrate with less-economic purposes, cryptoeconomics might transcend its initial limitations. Politics needs cryptoeconomics, too ... by integrating cryptoeconomics with democracy, both legacies seem poised to benefit.
I broadly agree with both conclusions. The language of collusion prevention can be helpful for understanding why cryptoeconomic purism so severely constricts the design space. "Finance" is a category of patterns that emerge when systems do not attempt to prevent collusion. When a system does not prevent collusion, it cannot treat different individuals differently, or even different numbers of individuals differently: whenever a "position" to exert influence exists, the owner of that position can just sell it to the highest bidder.
Gavels on Amazon. A world where these were NFTs that actually came with associated judging power may well be a fun one, but I would certainly not want to be a defendant!
The language of defense-focused design, on the other hand, is an underrated way to think about where some of the advantages of blockchain-based designs can be. Nation state systems often deal with threats with one of two totalizing mentalities: closed borders vs conquer the world. A closed borders approach attempts to make hard distinctions between an "inside" that the system can regulate and an "outside" that the system cannot, severely restricting flow between the inside and the outside. Conquer-the-world approaches attempt to extraterritorialize a nation state's preferences, seeking a state of affairs where there is no place in the entire world where some undesired activity can happen. Blockchains are structurally unable to take either approach, and so they must seek alternatives.
Fortunately, blockchains do have one very powerful tool in their grasp that makes security under such porous conditions actually feasible: cryptography. Cryptography allows everyone to verify that some governance procedure was executed exactly according to the rules. It leaves a verifiable evidence trail of all actions, though zero knowledge proofs allow mechanism designers freedom in picking and choosing exactly what evidence is visible and what evidence is not. Cryptography can even prevent collusion! Blockchains allow applications to live on a substrate that their governance does not control, which allows them to effectively implement techniques such as, for example, ensuring that every change to the rules only takes effect with a 60 day delay.
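A rough sketch of that kind of delayed-rule-change ("timelock") pattern is below. It is a hypothetical illustration, not any specific project's contract; the class and method names are made up, and a real on-chain version would live in a smart contract rather than in Python.

```python
import time

DELAY_SECONDS = 60 * 24 * 60 * 60  # a 60-day delay, as in the example above

class TimelockedRules:
    """Toy timelock: rule changes are queued publicly and only become
    active after the delay, so users can inspect them (or exit) first."""
    def __init__(self, rules):
        self.rules = dict(rules)
        self.queue = []          # list of (activation_time, key, new_value)

    def propose(self, key, new_value, now=None):
        now = time.time() if now is None else now
        self.queue.append((now + DELAY_SECONDS, key, new_value))

    def apply_ready(self, now=None):
        now = time.time() if now is None else now
        pending = []
        for activation, key, new_value in self.queue:
            if activation <= now:
                self.rules[key] = new_value   # the change finally takes effect
            else:
                pending.append((activation, key, new_value))
        self.queue = pending
```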
Finally, freedom to fork is much more practical, and forking is much lower in economic and human cost, than most centralized systems.Blockchain-based contraptions have a lot to offer the world that other kinds of systems do not. On the other hand, Nathan is completely correct to emphasize that blockchainized should not be equated with financialized. There is plenty of room for blockchain-based systems that do not look like money, and indeed we need more of them.
Alternatives to selling at below-market-clearing prices for achieving fairness (or community sentiment, or fun)
Alternatives to selling at below-market-clearing prices for achieving fairness (or community sentiment, or fun)2021 Aug 22 See all posts Alternatives to selling at below-market-clearing prices for achieving fairness (or community sentiment, or fun) When a seller wants to sell a fixed supply of an item that is in high (or uncertain and possibly high) demand, one choice that they often make is to set a price significantly lower than what "the market will bear". The result is that the item quickly sells out, with the lucky buyers being those who attempted to buy first. This has happened in a number of situations within the Ethereum ecosystem, notably NFT sales and token sales / ICOs. But this phenomenon is much older than that; concerts and restaurants frequently make similar choices, keeping prices cheap and leading to seats quickly selling out or buyers waiting in long lines.Economists have for a long time asked the question: why do sellers do this? Basic economic theory suggests that it's best if sellers sell at the market-clearing price - that is, the price at which the amount that buyers are willing to buy exactly equals the amount the seller has to sell. If the seller doesn't know what the market-clearing price is, the seller should sell through an auction, and let the market determine the price. Selling below market-clearing price not only sacrifices revenue for the seller; it also can harm the buyers: the item may sell out so quickly that many buyers have no opportunity to get it at all, no matter how much they want it and are willing to pay to get it. Sometimes, the competitions created by these non-price-based allocation mechanisms even create negative externalities that harm third parties - an effect that, as we will see, is particularly severe in the Ethereum ecosystem.But nevertheless, the fact that below-market-clearing pricing is so prevalent suggests that there must be some convincing reasons why sellers do it. And indeed, as the research into this topic over the last few decades has shown, there often are. And so it's worth asking the question: are there ways of achieving the same goals with more fairness, less inefficiency and less harm?Selling at below market-clearing prices has large inefficiencies and negative externalitiesIf a seller sells an item at market price, or through an auction, someone who really really wants that item has a simple path to getting it: they can pay the high price or if it's an auction they can bid a high amount. If a seller sells the item at below market price, then demand exceeds supply, and so some people will get the item and others won't. But the mechanism deciding who will get the item is decidedly not random, and it's often not well-correlated with how much participants want the item. Sometimes, it involves being faster at clicking buttons than everyone else. At other times, it involves waking up at 2 AM in your timezone (but 11 PM or even 2 PM in someone else's). And at still other times, it just turns into an "auction by other means", one which is more chaotic, less efficient and laden with far more negative externalties.Within the Ethereum ecosystem, there are many clear examples of this. First, we can look at the ICO craze of 2017. In 2017, there were a large number of projects launching initial coin offerings (ICOs), and a typical model was the capped sale: the project would set the price of the token and a hard maximum for how many tokens they are willing to sell, and at some point in time the sale would start automatically. 
Once the number of tokens hit the cap, the sale ends.What's the result? In practice, these sales would often end in as little as 30 seconds. As soon as (or rather, just before) the sale starts, everyone would start sending transactions in to try to get in, offering higher and higher fees to encourage miners to include their transaction first. An auction by another name - except with revenues going to the miners instead of the token seller, and the extremely harmful negative externality of pricing out every other application on-chain while the sale is going on. The most expensive transaction in the BAT sale set a fee of 580,000 gwei, paying a fee of $6,600 to get included in the sale.Many ICOs after that tried various strategies to avoid these gas price auctions; one ICO notably had a smart contract that checked the transaction's gasprice and rejected it if it exceeded 50 gwei. But that of course, did not solve the problem. Buyers wishing to cheat the system sent many transactions, hoping that at least one would get in. Once again, an auction by another name, and this time clogging up the chain even more.In more recent times, ICOs have become less popular, but NFTs and NFT sales are now very popular. Unfortunately, the NFT space failed to learn the lessons from 2017; they make fixed-quantity fixed-supply sales just like the ICOs did (eg. see the mint function on lines 97-108 of this contract here). What's the result? And this isn't even the biggest one; some NFT sales have created gas price spikes as high as 2000 gwei.Once again, sky-high gas prices from users fighting each other by sending higher and higher transaction fees to get in first. An auction by another name, pricing out every other application on-chain for 15 minutes, just as before.So why do sellers sometimes sell below market price?Selling at below market price is hardly a new phenomenon, both within the blockchain space and outside, and over the decades there have been many articles and papers and podcasts writing (and sometimes bitterly complaining) about the unwillingness to use auctions or set prices to market-clearing levels.Many of the arguments are very similar between the examples in the blockchain space (NFTs and ICOs) and outside the blockchain space (popular restaurants and concerts). A particular concern is fairness and the desire to not lock poorer people out and not lose fans or create tension as a result of being perceived as greedy. Kahneman, Knetsch and Thaler's 1986 paper is a good exposition of how perceptions of fairness and greed can influence these decisions. In my own recollection of the 2017 ICO season, the desire to avoid perceptions of greed was similarly a decisive factor in discouraging the use of auction-like mechanisms (I am mostly going off memory here and do not have many sources, though I did find a link to a no-longer-available parody video making some kind of comparison between the auction-based Gnosis ICO and the National Socialist German Workers' Party). In addition to fairness issues, there are also the perennial arguments that products selling out and having long lines creates a perception of popularity and prestige, which makes the product seem even more attractive to others further down the line. Sure, in a rational actor model, high prices should have the same effect as long lines, but in reality long lines are much more visible than high prices are. This is just as true for ICOs and NFTs as it is for restaurants. 
In addition to these strategies generating more marketing value, some people actually find participating in or watching the game of grabbing up a limited set of opportunities first before everyone else takes them all to be quite fun.But there are also some factors specific to the blockchain space. One argument for selling ICO tokens at below-market-clearing prices (and one that was decisive in convincing the OmiseGo team to adopt their capped sale strategy) has to do with community dynamics of token issuance. The most basic rule of community sentiment management is simple: you want prices to go up, not down. If community members are "in the green", they are happy. But if the price goes lower than what it was when the community members bought, leaving them at a net loss, they become unhappy and start calling you a scammer, and possibly creating a social media cascade leading to everyone else calling you a scammer.The only way to avoid this effect is to set a sale price low enough that the post-launch market price will almost certainly be higher. But, how do you actually do this without creating a rush-for-the-gates dynamic that leads to an auction by other means?Some more interesting solutionsThe year is 2021. We have a blockchain. The blockchain contains not just a powerful decentralized finance ecosystem, but also a rapidly growing suite of all kinds of non-financial tools. The blockchain also presents us with a unique opportunity to reset social norms. Uber legitimized surge pricing where decades of economists yelling about "efficiency" failed; surely, blockchains can also be an opportunity to legitimize new uses of mechanism design. And surely, instead of fiddling around with a coarse-grained one-dimensional strategy space of selling at market price versus below market price (with perhaps a second dimension for auction versus fixed-price sale), we could use our more advanced tools to create an approach that more directly solves the problems, with fewer side effects?First, let us list the goals. We'll try to cover the cases of (i) ICOs, (ii) NFTs and (iii) conference tickets (really a type of NFT) at the same time; most of the desired properties are shared between the three cases.Fairness: don't completely lock low-income people out of participating, give them at least some chance to get in. For token sales, there's the not quite identical but related goal of avoiding high initial wealth concentration and having a larger and more diverse initial token holder community. Don't create races: avoid creating situations where lots of people are rushing to take the same action and only the first few get in (this is the type of situation that leads to the horrible auctions-by-another-name that we saw above). Don't require fine-grained knowledge of market conditions: the mechanism should work even if the seller has absolutely no idea how much demand there is. Fun: the process of participating in the sale should ideally be interesting and have game-like qualities, but without being frustrating. Give buyers positive expected returns: in the case of a token (or, for that matter, an NFT), buyers should be more likely to see the item go up in price than go down. This necessarily implies selling to buyers at below the market price. We can start by looking at (1). Looking at it from the point of view of Ethereum, there is a pretty clear solution. Instead of creating race conditions, just use an explicitly designed tool for the job: proof of personhood protocols! 
Here's one quick proposed mechanism:
Mechanism 1
Each participant (verified by proof-of-personhood) can buy up to X units at price P, and if they want to buy more they can buy in an auction.
It seems like it satisfies a lot of the goals already: the per-person aspect provides fairness, if the auction price turns out higher than P, buyers can get positive expected returns for the portion sold through the per-person mechanism, and the auction part does not require the seller to understand the level of demand. Does it avoid creating races? If the number of participants buying through the per-person pool is not that high, it seems like it does. But what if so many people show up that the per-person pool is not big enough to provide an allocation for all of them?
Here's an idea: make the per-person allocation amount itself dynamic.
Mechanism 2
Each participant (verified by proof-of-personhood) can make a deposit into a smart contract to declare interest for up to X tokens. At the end, each buyer is given an allocation of min(X, N / number_of_buyers) tokens, where N is the total amount sold through the per-person pool (some other amount can also be sold by auction). The portion of the buyer's deposit going above the amount needed to buy their allocation is refunded to them. (A toy settlement of this rule is sketched at the end of this section.)
Now, there's no race condition regardless of the number of buyers going through the per-person pool. No matter how high the demand, there's no way in which it's more beneficial to participate earlier rather than later.
Here's yet another idea, if you like your game mechanics to be more clever and use fancy quadratic formulas.
Mechanism 3
Each participant (verified by proof-of-personhood) can buy \(X\) units at a price \(P * X^2\), up to a maximum of \(C\) tokens per buyer. \(C\) starts at some low number, and then increases over time until enough units are sold.
This mechanism has the particularly interesting property that if you're making a governance token (please don't do that; this is purely harm-reduction advice), the quantity allocated to each buyer is theoretically optimal, though of course post-sale transfers will degrade this optimality over time. Mechanisms 2 and 3 seem like they both satisfy all of the above goals, at least to some extent. They're not necessarily perfect and ideal, but they do make good starting points.
There is one remaining issue. For fixed and limited-supply NFTs, you might get the problem that the equilibrium purchased quantity per participant is fractional (in mechanism 2, perhaps number_of_buyers > N, and in mechanism 3, perhaps setting \(C = 1\) already leads to enough demand to over-subscribe the sale). In this case, you can sell fractional items by offering lottery tickets: if there are N items to be sold, then if you subscribe you have a chance of N / number_of_buyers that you will actually get the item, and otherwise you get a refund. For a conference, groups that want to go together could be allowed to bundle their lottery tickets to guarantee either all-win or all-lose. Ability to get the item for certain can be sold at auction. A fun mildly-grey-hat tactic for conference tickets is to disguise the pool being sold at market rate as the bottom tier of "sponsorships". You may end up with a bunch of people's faces on the sponsor board, but... maybe that's fine? After all, EthCC had John Lilic's face on their sponsor board!
In all of these cases, the core of the solution is simple: if you want to be reliably fair to people, then your mechanism should have some input that explicitly measures people.
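As referenced above, here is a toy settlement routine for Mechanism 2, plus Mechanism 3's pricing rule. It is a sketch under simplifying assumptions (off-chain Python rather than a smart contract, and allocation a buyer does not use is simply not redistributed); the buyer names and numbers are illustrative, not a production sale design.

```python
def mechanism2_settlement(deposits, price, per_person_cap, pool_size):
    """Toy settlement for Mechanism 2: each verified person deposited funds
    for up to `per_person_cap` units at `price`; the final per-person
    allocation is min(cap, pool_size / number_of_buyers) and any excess
    deposit is refunded."""
    per_person = min(per_person_cap, pool_size / len(deposits))
    result = {}
    for buyer, deposit in deposits.items():
        requested = min(deposit / price, per_person_cap)
        allocated = min(requested, per_person)
        result[buyer] = {"units": allocated, "refund": deposit - allocated * price}
    return result

def mechanism3_cost(units, base_price):
    """Mechanism 3's pricing rule: buying X units costs P * X^2 in total."""
    return base_price * units ** 2

# Oversubscribed example: 3 buyers, cap of 100 units each, only 150 units in the pool.
print(mechanism2_settlement(
    {"alice": 500, "bob": 500, "carol": 200},
    price=5, per_person_cap=100, pool_size=150))
print(mechanism3_cost(10, base_price=2))   # 10 units cost 2 * 10**2 = 200
```

Note how the settlement has no ordering dependence: whether alice or carol deposited first makes no difference to what they receive, which is exactly the "no races" property the mechanism is after.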
Proof of personhood protocols do this (and if desired can be combined with zero knowledge proofs to ensure privacy). Ergo, we should take the efficiency benefits of market and auction-based pricing, and the egalitarian benefits of proof of personhood mechanics, and combine them together.
Answers to possible questions
Q: Wouldn't lots of people who don't even care about your project buy the item through the egalitarian scheme and immediately resell it?
A: Initially, probably not. In practice, such meta-games take time to show up. But if/when they do, one possible mitigation is to make the items untradeable for some period of time. This actually works because proof-of-personhood identities are untradeable: you can always use your face to claim that your previous account got hacked and the identity corresponding to you, including everything in it, should be moved to a new account.
Q: What if I want to make my item accessible not just to people in general, but to a particular community?
A: Instead of proof of personhood, use proof of participation tokens connected to events in that community. An additional alternative, also serving both egalitarian and gamification value, is to lock some items inside solutions to some publicly-published puzzles.
Q: How do we know people will accept this? People have been resistant to weird new mechanisms in the past.
A: It's very difficult to get people to accept a new mechanism that they find weird by having economists write screeds about how they "should" accept it for the sake of "efficiency" (or even "equity"). However, rapid changes in context do an excellent job of resetting people's expectations. So if there's any good time at all to try this, the blockchain space is that time. You could also wait for the "metaverse", but it's quite possible that the best version of the metaverse will run on Ethereum anyway, so you might as well just start now.