Articles in the "科普知识" (Popular Science) category - 六币之门
In Defense of Bitcoin Maximalism
2022 Apr 01

We've been hearing for years that the future is blockchain, not Bitcoin. The future of the world won't be one major cryptocurrency, or even a few, but many cryptocurrencies - and the winning ones will have strong leadership under one central roof to adapt rapidly to users' needs for scale. Bitcoin is a boomer coin, and Ethereum is soon to follow; it will be newer and more energetic assets that attract the new waves of mass users who don't care about weird libertarian ideology or "self-sovereign verification", are turned off by toxicity and anti-government mentality, and just want blockchain defi and games that are fast and work.

But what if this narrative is all wrong, and the ideas, habits and practices of Bitcoin maximalism are in fact pretty close to correct? What if Bitcoin is far more than an outdated pet rock tied to a network effect? What if Bitcoin maximalists actually deeply understand that they are operating in a very hostile and uncertain world where there are things that need to be fought for, and their actions, personalities and opinions on protocol design deeply reflect that fact? What if we live in a world of honest cryptocurrencies (of which there are very few) and grifter cryptocurrencies (of which there are very many), and a healthy dose of intolerance is in fact necessary to prevent the former from sliding into the latter? That is the argument that this post will make.

We live in a dangerous world, and protecting freedom is serious business

Hopefully, this is much more obvious now than it was six weeks ago, when many people still seriously thought that Vladimir Putin is a misunderstood and kindly character who is merely trying to protect Russia and save Western Civilization from the gaypocalypse. But it's still worth repeating. We live in a dangerous world, where there are plenty of bad-faith actors who do not listen to compassion and reason.

A blockchain is at its core a security technology - a technology that is fundamentally all about protecting people and helping them survive in such an unfriendly world. It is, like the Phial of Galadriel, "a light to you in dark places, when all other lights go out". It is not a low-cost light, or a fluorescent hippie energy-efficient light, or a high-performance light. It is a light that sacrifices on all of those dimensions to optimize for one thing and one thing only: to be a light that does what it needs to do when you're facing the toughest challenge of your life and there is a friggin twenty foot spider staring at you in the face.

Source: https://www.blackgate.com/2014/12/23/frodo-baggins-lady-galadriel-and-the-games-of-the-mighty/

Blockchains are being used every day by unbanked and underbanked people, by activists, by sex workers, by refugees, and by many other groups either who are uninteresting for profit-seeking centralized financial institutions to serve, or who have enemies that don't want them to be served. They are used as a primary lifeline by many people to make their payments and store their savings.

And to that end, public blockchains sacrifice a lot for security:

- Blockchains require each transaction to be independently verified thousands of times to be accepted.
- Unlike centralized systems that confirm transactions in a few hundred milliseconds, blockchains require users to wait anywhere from 10 seconds to 10 minutes to get a confirmation.
- Blockchains require users to be fully in charge of authenticating themselves: if you lose your key, you lose your coins.
- Blockchains sacrifice privacy, requiring even crazier and more expensive technology to get that privacy back.

What are all of these sacrifices for? To create a system that can survive in an unfriendly world, and actually do the job of being "a light in dark places, when all other lights go out".

Excelling at that task requires two key ingredients: (i) a robust and defensible technology stack and (ii) a robust and defensible culture. The key property to have in a robust and defensible technology stack is a focus on simplicity and deep mathematical purity: a 1 MB block size, a 21 million coin limit, and a simple Nakamoto consensus proof of work mechanism that even a high school student can understand. The protocol design must be easy to justify decades and centuries down the line; the technology and parameter choices must be a work of art.

The second ingredient is the culture of uncompromising, steadfast minimalism. This must be a culture that can stand unyieldingly in defending itself against corporate and government actors trying to co-opt the ecosystem from outside, as well as bad actors inside the crypto space trying to exploit it for personal profit, of which there are many.

Now, what do Bitcoin and Ethereum culture actually look like? Well, let's ask Kevin Pham:

Don't believe this is representative? Well, let's ask Kevin Pham again:

Now, you might say, this is just Ethereum people having fun, and at the end of the day they understand what they have to do and what they are dealing with. But do they? Let's look at the kinds of people that Vitalik Buterin, the founder of Ethereum, hangs out with:

- Vitalik hangs out with elite tech CEOs in Beijing, China.
- Vitalik meets Vladimir Putin in Russia.
- Vitalik meets Nir Barkat, mayor of Jerusalem.
- Vitalik shakes hands with Argentinian former president Mauricio Macri.
- Vitalik gives a friendly hello to Eric Schmidt, former CEO of Google and advisor to US Department of Defense.
- Vitalik has his first of many meetings with Audrey Tang, digital minister of Taiwan.

And this is only a small selection. The immediate question that anyone looking at this should ask is: what the hell is the point of publicly meeting with all these people? Some of these people are very decent entrepreneurs and politicians, but others are actively involved in serious human rights abuses that Vitalik certainly does not support. Does Vitalik not realize just how much some of these people are geopolitically at each other's throats?

Now, maybe he is just an idealistic person who believes in talking to people to help bring about world peace, and a follower of Frederick Douglass's dictum to "unite with anybody to do right and with nobody to do wrong". But there's also a simpler hypothesis: Vitalik is a hippy-happy globetrotting pleasure and status-seeker, and he deeply enjoys meeting and feeling respected by people who are important. And it's not just Vitalik; companies like Consensys are totally happy to partner with Saudi Arabia, and the ecosystem as a whole keeps trying to look to mainstream figures for validation.

Now ask yourself the question: when the time comes, actually important things are happening on the blockchain - actually important things that offend people who are powerful - which ecosystem would be more willing to put its foot down and refuse to censor them no matter how much pressure is applied on them to do so?
The ecosystem with globe-trotting nomads who really really care about being everyone's friend, or the ecosystem with people who take pictures of themselves with an AR15 and an axe as a side hobby?

Currency is not "just the first app". It's by far the most successful one.

Many people of the "blockchain, not Bitcoin" persuasion argue that cryptocurrency is the first application of blockchains, but it's a very boring one, and the true potential of blockchains lies in bigger and more exciting things. Let's go through the list of applications in the Ethereum whitepaper:

- Issuing tokens
- Financial derivatives
- Stablecoins
- Identity and reputation systems
- Decentralized file storage
- Decentralized autonomous organizations (DAOs)
- Peer-to-peer gambling
- Prediction markets

Many of these categories have applications that have launched and that have at least some users. That said, cryptocurrency people really value empowering under-banked people in the "Global South". Which of these applications actually have lots of users in the Global South?

As it turns out, by far the most successful one is storing wealth and payments. 3% of Argentinians own cryptocurrency, as do 6% of Nigerians and 12% of people in Ukraine. By far the biggest instance of a government using blockchains to accomplish something useful today is cryptocurrency donations to the government of Ukraine, which have raised more than $100 million if you include donations to non-governmental Ukraine-related efforts.

What other application has anywhere close to that level of actual, real adoption today? Perhaps the closest is ENS. DAOs are real and growing, but today far too many of them are appealing to wealthy rich-country people whose main interest is having fun and using cartoon-character profiles to satisfy their first-world need for self-expression, and not build schools and hospitals and solve other real world problems.

Thus, we can see the two sides pretty clearly: team "blockchain", privileged people in wealthy countries who love to virtue-signal about "moving beyond money and capitalism" and can't help being excited about "decentralized governance experimentation" as a hobby, and team "Bitcoin", a highly diverse group of both rich and poor people in many countries around the world including the Global South, who are actually using the capitalist tool of free self-sovereign money to provide real value to human beings today.

Focusing exclusively on being money makes for better money

A common misconception about why Bitcoin does not support "richly stateful" smart contracts goes as follows. Bitcoin really really values being simple, and particularly having low technical complexity, to reduce the chance that something will go wrong. As a result, it doesn't want to add the more complicated features and opcodes that are necessary to be able to support more complicated smart contracts in Ethereum.

This misconception is, of course, wrong. In fact, there are plenty of ways to add rich statefulness into Bitcoin; search for the word "covenants" in Bitcoin chat archives to see many proposals being discussed. And many of these proposals are surprisingly simple. The reason why covenants have not been added is not that Bitcoin developers see the value in rich statefulness but find even a little bit more protocol complexity intolerable. Rather, it's because Bitcoin developers are worried about the risks of the systemic complexity that rich statefulness being possible would introduce into the ecosystem!
A recent paper by Bitcoin researchers describes some ways to introduce covenants to add some degree of rich statefulness to Bitcoin.

Ethereum's battle with miner-extractable value (MEV) is an excellent example of this problem appearing in practice. It's very easy in Ethereum to build applications where the next person to interact with some contract gets a substantial reward, causing transactors and miners to fight over it, and contributing greatly to network centralization risk and requiring complicated workarounds. In Bitcoin, building such systemically risky applications is hard, in large part because Bitcoin lacks rich statefulness and focuses on the simple (and MEV-free) use case of just being money.

Systemic contagion can happen in non-technical ways too. Bitcoin just being money means that Bitcoin requires relatively few developers, helping to reduce the risk that developers will start demanding to print themselves free money to build new protocol features. Bitcoin just being money reduces pressure for core developers to keep adding features to "keep up with the competition" and "serve developers' needs".

In so many ways, systemic effects are real, and it's just not possible for a currency to "enable" an ecosystem of highly complex and risky decentralized applications without that complexity biting it back somehow. Bitcoin makes the safe choice. If Ethereum continues its layer-2-centric approach, ETH-the-currency may gain some distance from the application ecosystem that it's enabling and thereby get some protection. So-called high-performance layer-1 platforms, on the other hand, stand no chance.

In general, the earliest projects in an industry are the most "genuine"

Many industries and fields follow a similar pattern. First, some new exciting technology either gets invented, or gets a big leap of improvement to the point where it's actually usable for something. At the beginning, the technology is still clunky, it is too risky for almost anyone to touch as an investment, and there is no "social proof" that people can use it to become successful. As a result, the first people involved are going to be the idealists, tech geeks and others who are genuinely excited about the technology and its potential to improve society.

Once the technology proves itself enough, however, the normies come in - an event that in internet culture is often called Eternal September. And these are not just regular kindly normies who want to feel part of something exciting, but business normies, wearing suits, who start scouring the ecosystem wolf-eyed for ways to make money - with armies of venture capitalists just as eager to make their own money supporting them from the sidelines. In the extreme cases, outright grifters come in, creating blockchains with no redeeming social or technical value which are basically borderline scams. But the reality is that the line from "altruistic idealist" to "grifter" is really a spectrum. And the longer an ecosystem keeps going, the harder it is for any new project on the altruistic side of the spectrum to get going.

One noisy proxy for the blockchain industry's slow replacement of philosophical and idealistic values with short-term profit-seeking values is the larger and larger size of premines: the allocations that developers of a cryptocurrency give to themselves.

Source for insider allocations: Messari.

Which blockchain communities deeply value self-sovereignty, privacy and decentralization, and are making big sacrifices to get it?
And which blockchain communities are just trying to pump up their market caps and make money for founders and investors? The above chart should make it pretty clear.

Intolerance is good

The above makes it clear why Bitcoin's status as the first cryptocurrency gives it unique advantages that are extremely difficult for any cryptocurrency created within the last five years to replicate. But now we get to the biggest objection against Bitcoin maximalist culture: why is it so toxic?

The case for Bitcoin toxicity stems from Conquest's second law. In Robert Conquest's original formulation, the law says that "any organization not explicitly and constitutionally right-wing will sooner or later become left-wing". But really, this is just a special case of a much more general pattern, and one that in the modern age of relentlessly homogenizing and conformist social media is more relevant than ever:

If you want to retain an identity that is different from the mainstream, then you need a really strong culture that actively resists and fights assimilation into the mainstream every time it tries to assert its hegemony.

Blockchains are, as I mentioned above, very fundamentally and explicitly a counterculture movement that is trying to create and preserve something different from the mainstream. At a time when the world is splitting up into great power blocs that actively suppress social and economic interaction between them, blockchains are one of the very few things that can remain global. At a time when more and more people are reaching for censorship to defeat their short-term enemies, blockchains steadfastly continue to censor nothing.

The only correct way to respond to "reasonable adults" trying to tell you that to "become mainstream" you have to compromise on your "extreme" values. Because once you compromise once, you can't stop.

Blockchain communities also have to fight bad actors on the inside. Bad actors include:

- Scammers, who make and sell projects that are ultimately valueless (or worse, actively harmful) but cling to the "crypto" and "decentralization" brand (as well as highly abstract ideas of humanism and friendship) for legitimacy.
- Collaborationists, who publicly and loudly virtue-signal about working together with governments and actively try to convince governments to use coercive force against their competitors.
- Corporatists, who try to use their resources to take over the development of blockchains, and often push for protocol changes that enable centralization.

One could stand against all of these actors with a smiling face, politely telling the world why they "disagree with their priorities". But this is unrealistic: the bad actors will try hard to embed themselves into your community, and at that point it becomes psychologically hard to criticize them with the sufficient level of scorn that they truly require: the people you're criticizing are friends of your friends. And so any culture that values agreeableness will simply fold before the challenge, and let scammers roam freely through the wallets of innocent newbies.

What kind of culture won't fold?
A culture that is willing and eager to tell both scammers on the inside and powerful opponents on the outside to go the way of the Russian warship.

Weird crusades against seed oils are good

One powerful bonding tool to help a community maintain internal cohesion around its distinctive values, and avoid falling into the morass that is the mainstream, is weird beliefs and crusades that are in a similar spirit, even if not directly related, to the core mission. Ideally, these crusades should be at least partially correct, poking at a genuine blind spot or inconsistency of mainstream values.

The Bitcoin community is good at this. Their most recent crusade is a war against seed oils, oils derived from vegetable seeds high in omega-6 fatty acids that are harmful to human health. This Bitcoiner crusade gets treated skeptically when reviewed in the media, but the media treats the topic much more favorably when "respectable" tech firms are tackling it. The crusade helps to remind Bitcoiners that the mainstream media is fundamentally tribal and hypocritical, and so the media's shrill attempts to slander cryptocurrency as being primarily for money laundering and terrorism should be treated with the same level of scorn.

Be a maximalist

Maximalism is often derided in the media as both a dangerous toxic right-wing cult, and as a paper tiger that will disappear as soon as some other cryptocurrency comes in and takes over Bitcoin's supreme network effect. But the reality is that none of the arguments for maximalism that I describe above depend at all on network effects. Network effects really are logarithmic, not quadratic: once a cryptocurrency is "big enough", it has enough liquidity to function and multi-cryptocurrency payment processors will easily add it to their collection. But the claim that Bitcoin is an outdated pet rock and its value derives entirely from a walking-zombie network effect that just needs a little push to collapse is similarly completely wrong.

Crypto-assets like Bitcoin have real cultural and structural advantages that make them powerful assets worth holding and using. Bitcoin is an excellent example of the category, though it's certainly not the only one; other honorable cryptocurrencies do exist, and maximalists have been willing to support and use them. Maximalism is not just Bitcoin-for-the-sake-of-Bitcoin; rather, it's a very genuine realization that most other cryptoassets are scams, and a culture of intolerance is unavoidable and necessary to protect newbies and make sure at least one corner of that space continues to be a corner worth living in.

It's better to mislead ten newbies into avoiding an investment that turns out good than it is to allow a single newbie to get bankrupted by a grifter.

It's better to make your protocol too simple and fail to serve ten low-value short-attention-span gambling applications than it is to make it too complex and fail to serve the central sound money use case that underpins everything else.

And it's better to offend millions by standing aggressively for what you believe in than it is to try to keep everyone happy and end up standing for nothing.

Be brave. Fight for your values. Be a maximalist.
Two thought experiments to evaluate automated stablecoins
2022 May 25

Special thanks to Dan Robinson, Hayden Adams and Dankrad Feist for feedback and review.

The recent LUNA crash, which led to tens of billions of dollars of losses, has led to a storm of criticism of "algorithmic stablecoins" as a category, with many considering them to be a "fundamentally flawed product". The greater level of scrutiny on defi financial mechanisms, especially those that try very hard to optimize for "capital efficiency", is highly welcome. The greater acknowledgement that present performance is no guarantee of future returns (or even future lack-of-total-collapse) is even more welcome. Where the sentiment goes very wrong, however, is in painting all automated pure-crypto stablecoins with the same brush, and dismissing the entire category.

While there are plenty of automated stablecoin designs that are fundamentally flawed and doomed to collapse eventually, and plenty more that can survive theoretically but are highly risky, there are also many stablecoins that are highly robust in theory, and have survived extreme tests of crypto market conditions in practice. Hence, what we need is not stablecoin boosterism or stablecoin doomerism, but rather a return to principles-based thinking. So what are some good principles for evaluating whether or not a particular automated stablecoin is a truly stable one? For me, the test that I start from is asking how the stablecoin responds to two thought experiments. Click here to skip straight to the thought experiments.

Reminder: what is an automated stablecoin?

For the purposes of this post, an automated stablecoin is a system that has the following properties:

1. It issues a stablecoin, which attempts to target a particular price index. Usually, the target is 1 USD, but there are other options too.
2. There is some targeting mechanism that continuously works to push the price toward the index if it veers away in either direction. This makes ETH and BTC not stablecoins (duh).
3. The targeting mechanism is completely decentralized, and free of protocol-level dependencies on specific trusted actors. Particularly, it must not rely on asset custodians. This makes USDT and USDC not automated stablecoins.

In practice, (2) means that the targeting mechanism must be some kind of smart contract which manages some reserve of crypto-assets, and uses those crypto-assets to prop up the price if it drops.

How does Terra work?

Terra-style stablecoins (roughly the same family as seignorage shares, though many implementation details differ) work by having a pair of two coins, which we'll call a stablecoin and a volatile-coin or volcoin (in Terra, UST is the stablecoin and LUNA is the volcoin). The stablecoin retains stability using a simple mechanism:

- If the price of the stablecoin exceeds the target, the system auctions off new stablecoins (and uses the revenue to burn volcoins) until the price returns to the target
- If the price of the stablecoin drops below the target, the system buys back and burns stablecoins (issuing new volcoins to fund the burn) until the price returns to the target

Now what is the price of the volcoin? The volcoin's value could be purely speculative, backed by an assumption of greater stablecoin demand in the future (which would require burning volcoins to issue). Alternatively, the value could come from fees: either trading fees on stablecoin <-> volcoin exchange, or holding fees charged per year to stablecoin holders, or both. But in all cases, the price of the volcoin comes from the expectation of future activity in the system.
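The mint-and-burn direction of this mechanism is easy to sketch. The toy Python model below is not Terra's actual implementation: it assumes a crude price model (stablecoin price = dollar demand divided by supply), treats the volcoin's market cap as an external input, and exists only to show which way the mechanism pushes and why it breaks down once the volcoin is small relative to the stablecoin.

```python
# Toy sketch of a Terra-style peg mechanism (not Terra's real implementation).
# Crude assumptions: the stablecoin trades at demand_usd / supply, buybacks
# happen at the target price, and the volcoin market cap is given exogenously.

def rebalance(supply, demand_usd, volcoin_cap_usd, target=1.0):
    price = demand_usd / supply
    if price > target:
        # Above peg: auction newly minted stablecoins until the price returns
        # to target; the proceeds are used to buy and burn volcoins.
        minted = demand_usd / target - supply
        return supply + minted, "mint stablecoin / burn volcoin", minted
    if price < target:
        # Below peg: buy back and burn stablecoins, paying with newly printed
        # volcoins. This only works while the volcoin is worth enough to sell.
        burned = supply - demand_usd / target
        if burned * target > volcoin_cap_usd:
            raise RuntimeError("not enough volcoin value left: death spiral")
        return supply - burned, "burn stablecoin / print volcoin", burned
    return supply, "at peg", 0.0

print(rebalance(100_000_000, 105_000_000, 30_000_000))  # above peg: mint ~5M stablecoins
# rebalance(100_000_000, 90_000_000, 5_000_000) would raise: the volcoin is too
# small to absorb $10M of redemptions -- the death-spiral condition.
```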
How does RAI work?

In this post I'm focusing on RAI rather than DAI because RAI better exemplifies the pure "ideal type" of a collateralized automated stablecoin, backed by ETH only. DAI is a hybrid system backed by both centralized and decentralized collateral, which is a reasonable choice for their product but it does make analysis trickier.

In RAI, there are two main categories of participants (there's also holders of FLX, the speculative token, but they play a less important role):

- A RAI holder holds RAI, the stablecoin of the RAI system.
- A RAI lender deposits some ETH into a smart contract object called a "safe". They can then withdraw RAI up to the value of \(\frac{2}{3}\) of that ETH (eg. if 1 ETH = 100 RAI, then if you deposit 10 ETH you can withdraw up to \(10 * 100 * \frac{2}{3} \approx 667\) RAI). A lender can recover the ETH in the safe if they pay back their RAI debt.

There are two main reasons to become a RAI lender:

- To go long on ETH: if you deposit 10 ETH and withdraw 500 RAI in the above example, you end up with a position worth 500 RAI but with 10 ETH of exposure, so it goes up/down by 2% for every 1% change in the ETH price.
- Arbitrage: if you find a fiat-denominated investment that goes up faster than RAI, you can borrow RAI, put the funds into that investment, and earn a profit on the difference.

If the ETH price drops, and a safe no longer has enough collateral (meaning, the RAI debt is now more than \(\frac{2}{3}\) times the value of the ETH deposited), a liquidation event takes place. The safe gets auctioned off for anyone else to buy by putting up more collateral.

The other main mechanism to understand is redemption rate adjustment. In RAI, the target isn't a fixed quantity of USD; instead, it moves up or down, and the rate at which it moves up or down adjusts in response to market conditions:

- If the price of RAI is above the target, the redemption rate decreases, reducing the incentive to hold RAI and increasing the incentive to hold negative RAI by being a lender. This pushes the price back down.
- If the price of RAI is below the target, the redemption rate increases, increasing the incentive to hold RAI and reducing the incentive to hold negative RAI by being a lender. This pushes the price back up.
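The collateral arithmetic in the example above is simple enough to sketch directly. This is only an illustration of the \(\frac{2}{3}\) limit from the example (function names are made up, and the real GEB contracts also handle stability fees, liquidation penalties, oracle delays and auction mechanics).

```python
# Sketch of the RAI "safe" arithmetic from the example above (2/3 limit).
# Illustrative only: names are made up; real contracts also handle stability
# fees, liquidation penalties, oracle delays, auctions, and so on.

COLLATERAL_RATIO = 2 / 3  # max RAI debt as a fraction of the ETH value

def max_borrow(eth_deposited, eth_price_in_rai):
    """Most RAI a lender can withdraw against a deposit."""
    return eth_deposited * eth_price_in_rai * COLLATERAL_RATIO

def is_liquidatable(rai_debt, eth_deposited, eth_price_in_rai):
    """A safe can be liquidated once debt exceeds 2/3 of the ETH value."""
    return rai_debt > eth_deposited * eth_price_in_rai * COLLATERAL_RATIO

print(max_borrow(10, 100))            # ~666.7 RAI, as in the example
print(is_liquidatable(500, 10, 100))  # False: 500 < 666.7
print(is_liquidatable(500, 10, 70))   # True: ETH fell, 500 > 466.7
```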
Thought experiment 1: can the stablecoin, even in theory, safely "wind down" to zero users?

In the non-crypto real world, nothing lasts forever. Companies shut down all the time, either because they never manage to find enough users in the first place, or because once-strong demand for their product is no longer there, or because they get displaced by a superior competitor. Sometimes, there are partial collapses, declines from mainstream status to niche status (eg. MySpace). Such things have to happen to make room for new products. But in the non-crypto world, when a product shuts down or declines, customers generally don't get hurt all that much. There are certainly some cases of people falling through the cracks, but on the whole shutdowns are orderly and the problem is manageable.

But what about automated stablecoins? What happens if we look at a stablecoin from the bold and radical perspective that the system's ability to avoid collapsing and losing huge amounts of user funds should not depend on a constant influx of new users? Let's see and find out!

Can Terra wind down?

In Terra, the price of the volcoin (LUNA) comes from the expectation of fees from future activity in the system. So what happens if expected future activity drops to near-zero? The market cap of the volcoin drops until it becomes quite small relative to the stablecoin. At that point, the system becomes extremely fragile: only a small downward shock to demand for the stablecoin could lead to the targeting mechanism printing lots of volcoins, which causes the volcoin to hyperinflate, at which point the stablecoin too loses its value.

The system's collapse can even become a self-fulfilling prophecy: if it seems like a collapse is likely, this reduces the expectation of future fees that is the basis of the value of the volcoin, pushing the volcoin's market cap down, making the system even more fragile and potentially triggering that very collapse - exactly as we saw happen with Terra in May.

LUNA price, May 8-12. UST price, May 8-12.

First, the volcoin price drops. Then, the stablecoin starts to shake. The system attempts to shore up stablecoin demand by issuing more volcoins. With confidence in the system low, there are few buyers, so the volcoin price rapidly falls. Finally, once the volcoin price is near-zero, the stablecoin too collapses.

In principle, if demand decreases extremely slowly, the volcoin's expected future fees and hence its market cap could still be large relative to the stablecoin, and so the system would continue to be stable at every step of its decline. But this kind of successful slowly-decreasing managed decline is very unlikely. What's more likely is a rapid drop in interest followed by a bang.

Safe wind-down: at every step, there's enough expected future revenue to justify enough volcoin market cap to keep the stablecoin safe at its current level.

Unsafe wind-down: at some point, there's not enough expected future revenue to justify enough volcoin market cap to keep the stablecoin safe. Collapse is likely.

Can RAI wind down?

RAI's security depends on an asset external to the RAI system (ETH), so RAI has a much easier time safely winding down. If the decline in demand is unbalanced (so, either demand for holding drops faster or demand for lending drops faster), the redemption rate will adjust to equalize the two. The lenders are holding a leveraged position in ETH, not FLX, so there's no risk of a positive-feedback loop where reduced confidence in RAI causes demand for lending to also decrease.

If, in the extreme case, all demand for holding RAI disappears simultaneously except for one holder, the redemption rate would skyrocket until eventually every lender's safe gets liquidated. The single remaining holder would be able to buy the safe in the liquidation auction, use their RAI to immediately clear its debt, and withdraw the ETH. This gives them the opportunity to get a fair price for their RAI, paid for out of the ETH in the safe.

Another extreme case worth examining is where RAI becomes the primary application on Ethereum. In this case, a reduction in expected future demand for RAI would crater the price of ETH. In the extreme case, a cascade of liquidations is possible, leading to a messy collapse of the system. But RAI is far more robust against this possibility than a Terra-style system.

Thought experiment 2: what happens if you try to peg the stablecoin to an index that goes up 20% per year?

Currently, stablecoins tend to be pegged to the US dollar.
RAI stands out as a slight exception, because its peg adjusts up or down due to the redemption rate and the peg started at 3.14 USD instead of 1 USD (the exact starting value was a concession to being normie-friendly, as a true math nerd would have chosen tau = 6.28 USD instead). But they do not have to be. You can have a stablecoin pegged to a basket of assets, a consumer price index, or some arbitrarily complex formula ("a quantity of value sufficient to buy hectares of land in the forests of Yakutia"). As long as you can find an oracle to prove the index, and people to participate on all sides of the market, you can make such a stablecoin work.

As a thought experiment to evaluate sustainability, let's imagine a stablecoin with a particular index: a quantity of US dollars that grows by 20% per year. In math language, the index is \(1.2^{(t - t_0)}\) USD, where \(t\) is the current time in years and \(t_0\) is the time when the system launched. An even more fun alternative is \(1.04^{\frac{1}{2} * (t - t_0)^2}\) USD, so it starts off acting like a regular USD-denominated stablecoin, but the USD-denominated return rate keeps increasing by 4% every year. Obviously, there is no genuine investment that can get anywhere close to 20% returns per year, and there is definitely no genuine investment that can keep increasing its return rate by 4% per year forever. But what happens if you try?

I will claim that there's basically two ways for a stablecoin that tries to track such an index to turn out:

1. It charges some kind of negative interest rate on holders that equilibrates to basically cancel out the USD-denominated growth rate built in to the index.
2. It turns into a Ponzi, giving stablecoin holders amazing returns for some time until one day it suddenly collapses with a bang.

It should be pretty easy to understand why RAI does (1) and LUNA does (2), and so RAI is better than LUNA. But this also shows a deeper and more important fact about stablecoins: for a collateralized automated stablecoin to be sustainable, it has to somehow contain the possibility of implementing a negative interest rate. A version of RAI programmatically prevented from implementing negative interest rates (which is what the earlier single-collateral DAI basically was) would also turn into a Ponzi if tethered to a rapidly-appreciating price index.

Even outside of crazy hypotheticals where you build a stablecoin to track a Ponzi index, the stablecoin must somehow be able to respond to situations where even at a zero interest rate, demand for holding exceeds demand for borrowing. If you don't, the price rises above the peg, and the stablecoin becomes vulnerable to price movements in both directions that are quite unpredictable.

Negative interest rates can be done in two ways:

1. RAI-style, having a floating target that can drop over time if the redemption rate is negative
2. Actually having balances decrease over time

Option (1) has the user-experience flaw that the stablecoin no longer cleanly tracks "1 USD". Option (2) has the developer-experience flaw that developers aren't used to dealing with assets where receiving N coins does not unconditionally mean that you can later send N coins. But choosing one of the two seems unavoidable - unless you go the MakerDAO route of being a hybrid stablecoin that uses both pure cryptoassets and centralized assets like USDC as collateral.
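To put rough numbers on how large that offset has to be, here is a small back-of-the-envelope sketch (my own illustration, not part of any protocol): with a 20%-per-year index, the negative interest rate that keeps a holder's USD value flat is about -16.7% per year, however it is implemented.

```python
# Back-of-the-envelope sketch (my own illustration): tracking an index that
# grows 20%/year is only sustainable if holders pay an offsetting negative
# interest rate, via a falling target (option 1) or shrinking balances (option 2).

def index_usd(t_years, growth=1.2):
    """Target price in USD, t years after launch: 1.2^(t - t0)."""
    return growth ** t_years

def offsetting_rate(growth=1.2):
    """Holder interest rate that exactly cancels the index's growth."""
    return 1 / growth - 1  # about -16.7%/year for a 20%/year index

def holder_value_usd(t_years, growth=1.2):
    """USD value of one launch-day coin if the negative rate is applied."""
    return index_usd(t_years, growth) * (1 + offsetting_rate(growth)) ** t_years

print(index_usd(5))          # ~2.49: the index more than doubles in 5 years
print(offsetting_rate())     # ~-0.167
print(holder_value_usd(5))   # 1.0: holders earn nothing above the dollar
```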
What can we learn?

In general, the crypto space needs to move away from the attitude that it's okay to achieve safety by relying on endless growth. It's certainly not acceptable to maintain that attitude by saying that "the fiat world works in the same way", because the fiat world is not attempting to offer anyone returns that go up much faster than the regular economy, outside of isolated cases that certainly should be criticized with the same ferocity.

Instead, while we certainly should hope for growth, we should evaluate how safe systems are by looking at their steady state, and even the pessimistic state of how they would fare under extreme conditions and ultimately whether or not they can safely wind down. If a system passes this test, that does not mean it's safe; it could still be fragile for other reasons (eg. insufficient collateral ratios), or have bugs or governance vulnerabilities. But steady-state and extreme-case soundness should always be one of the first things that we check for.
Encapsulated vs systemic complexity in protocol design
2022 Feb 28

One of the main goals of Ethereum protocol design is to minimize complexity: make the protocol as simple as possible, while still making a blockchain that can do what an effective blockchain needs to do. The Ethereum protocol is far from perfect at this, especially since much of it was designed in 2014-16 when we understood much less, but we nevertheless make an active effort to reduce complexity whenever possible.

One of the challenges of this goal, however, is that complexity is difficult to define, and sometimes, you have to trade off between two choices that introduce different kinds of complexity and have different costs. How do we compare?

One powerful intellectual tool that allows for more nuanced thinking about complexity is to draw a distinction between what we will call encapsulated complexity and systemic complexity. Encapsulated complexity occurs when there is a system with sub-systems that are internally complex, but that present a simple "interface" to the outside. Systemic complexity occurs when the different parts of a system can't even be cleanly separated, and have complex interactions with each other.

Here are a few examples.

BLS signatures vs Schnorr signatures

BLS signatures and Schnorr signatures are two popular types of cryptographic signature schemes that can be made with elliptic curves.

BLS signatures appear mathematically very simple:

Signing: \(\sigma = H(m) * k\)

Verifying: \(e([1], \sigma) \stackrel{?}{=} e(H(m), K)\)

\(H\) is a hash function, \(m\) is the message, and \(k\) and \(K\) are the private and public keys. So far, so simple. However, the true complexity is hidden inside the definition of the \(e\) function: elliptic curve pairings, one of the most devilishly hard-to-understand pieces of math in all of cryptography.

Now, consider Schnorr signatures. Schnorr signatures rely only on basic elliptic curves. But the signing and verification logic is somewhat more complex.

So... which type of signature is "simpler"? It depends what you care about! BLS signatures have a huge amount of technical complexity, but the complexity is all buried within the definition of the \(e\) function. If you treat the \(e\) function as a black box, BLS signatures are actually really easy. Schnorr signatures, on the other hand, have less total complexity, but they have more pieces that could interact with the outside world in tricky ways.

For example:

- Doing a BLS multi-signature (a combined signature from two keys \(k_1\) and \(k_2\)) is easy: just take \(\sigma_1 + \sigma_2\). But a Schnorr multi-signature requires two rounds of interaction, and there are tricky key cancellation attacks that need to be dealt with.
- Schnorr signatures require random number generation, BLS signatures do not.

Elliptic curve pairings in general are a powerful "complexity sponge" in that they contain large amounts of encapsulated complexity, but enable solutions with much less systemic complexity. This is also true in the area of polynomial commitments: compare the simplicity of KZG commitments (which require pairings) to the much more complicated internal logic of inner product arguments (which do not).
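The "simple interface, complex internals" point is visible in code too. The sketch below uses the py_ecc library's IETF-style BLS ciphersuite (the module and method names here are my understanding of that library; any other BLS implementation exposes a similar interface): signing, verifying and aggregating are one-liners, with all of the pairing machinery hidden behind the interface.

```python
# Sketch: the "encapsulated complexity" of BLS in practice. Assumes py_ecc's
# IETF-style BLS ciphersuite; any BLS library exposes a similar interface.
from py_ecc.bls import G2ProofOfPossession as bls

sk1, sk2 = 12345, 67890                  # toy secret keys, for illustration only
pk1, pk2 = bls.SkToPk(sk1), bls.SkToPk(sk2)
msg = b"example message"

sig1 = bls.Sign(sk1, msg)                # sigma = H(m) * k, pairings hidden inside
sig2 = bls.Sign(sk2, msg)
assert bls.Verify(pk1, msg, sig1)        # the e-based check, treated as a black box

# The multi-signature really is "just sigma_1 + sigma_2" (one group addition):
agg = bls.Aggregate([sig1, sig2])
assert bls.FastAggregateVerify([pk1, pk2], msg, agg)
```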
Cryptography vs cryptoeconomics

One important design choice that appears in many blockchain designs is that of cryptography versus cryptoeconomics. Often (eg. in rollups) this comes in the form of a choice between validity proofs (aka. ZK-SNARKs) and fraud proofs.

ZK-SNARKs are complex technology. While the basic ideas behind how they work can be explained in a single post, actually implementing a ZK-SNARK to verify some computation involves many times more complexity than the computation itself (hence why ZK-SNARKs for the EVM are still under development while fraud proofs for the EVM are already in the testing stage). Implementing a ZK-SNARK effectively involves circuit design with special-purpose optimization, working with unfamiliar programming languages, and many other challenges. Fraud proofs, on the other hand, are inherently simple: if someone makes a challenge, you just directly run the computation on-chain. For efficiency, a binary-search scheme is sometimes added, but even that doesn't add too much complexity.
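For intuition on that binary-search scheme, here is a toy sketch (my own illustration, not any specific rollup's protocol): both parties commit to a trace of intermediate states, and bisection narrows the dispute down to a single step, which is the only thing that has to be re-executed on-chain.

```python
# Toy sketch of a fraud-proof bisection game (not any specific rollup's
# protocol). Both parties claim a trace of intermediate states; bisection
# finds the first step where they diverge, and only that one step needs
# to be re-executed on-chain.

def first_disputed_step(honest_trace, asserted_trace):
    """Binary search for the earliest index where the traces differ."""
    lo, hi = 0, len(honest_trace) - 1          # traces agree at lo, differ at hi
    assert honest_trace[lo] == asserted_trace[lo]
    assert honest_trace[hi] != asserted_trace[hi]
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if honest_trace[mid] == asserted_trace[mid]:
            lo = mid                            # still agree: dispute is later
        else:
            hi = mid                            # already disagree: dispute is earlier
    return hi                                   # re-execute just step lo -> hi on-chain

# Example: a 9-step computation where the asserter cheats from step 6 onward.
honest   = [0, 1, 2, 3, 4, 5, 6, 7, 8]
asserted = [0, 1, 2, 3, 4, 5, 9, 9, 9]
print(first_disputed_step(honest, asserted))    # 6
```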
But while ZK-SNARKs are complex, their complexity is encapsulated complexity. The relatively light complexity of fraud proofs, on the other hand, is systemic. Here are some examples of systemic complexity that fraud proofs introduce:

- They require careful incentive engineering to avoid the verifier's dilemma.
- If done in-consensus, they require extra transaction types for the fraud proofs, along with reasoning about what happens if many actors compete to submit a fraud proof at the same time.
- They depend on a synchronous network.
- They allow censorship attacks to be also used to commit theft.
- Rollups based on fraud proofs require liquidity providers to support instant withdrawals.

For these reasons, even from a complexity perspective purely cryptographic solutions based on ZK-SNARKs are likely to be long-run safer: ZK-SNARKs have more complicated parts that some people have to think about, but they have fewer dangling caveats that everyone has to think about.

Miscellaneous examples

- Proof of work (Nakamoto consensus) - low encapsulated complexity, as the mechanism is extremely simple and easy to understand, but higher systemic complexity (eg. selfish mining attacks).
- Hash functions - high encapsulated complexity, but very easy-to-understand properties so low systemic complexity.
- Random shuffling algorithms - shuffling algorithms can either be internally complicated (as in Whisk) but lead to easy-to-understand guarantees of strong randomness, or internally simpler but lead to randomness properties that are weaker and more difficult to analyze (systemic complexity).
- Miner extractable value (MEV) - a protocol that is powerful enough to support complex transactions can be fairly simple internally, but those complex transactions can have complex systemic effects on the protocol's incentives by contributing to the incentive to propose blocks in very irregular ways.
- Verkle trees - Verkle trees do have some encapsulated complexity, in fact quite a bit more than plain Merkle hash trees. Systemically, however, Verkle trees present the exact same relatively clean-and-simple interface of a key-value map. The main systemic complexity "leak" is the possibility of an attacker manipulating the tree to make a particular value have a very long branch; but this risk is the same for both Verkle trees and Merkle trees.

How do we make the tradeoff?

Often, the choice with less encapsulated complexity is also the choice with less systemic complexity, and so there is one choice that is obviously simpler. But at other times, you have to make a hard choice between one type of complexity and the other. What should be clear at this point is that complexity is less dangerous if it is encapsulated. The risks from complexity of a system are not a simple function of how long the specification is; a small 10-line piece of the specification that interacts with every other piece adds more complexity than a 100-line function that is otherwise treated as a black box.

However, there are limits to this approach of preferring encapsulated complexity. Software bugs can occur in any piece of code, and as it gets bigger the probability of a bug approaches 1. Sometimes, when you need to interact with a sub-system in an unexpected and new way, complexity that was originally encapsulated can become systemic.

One example of the latter is Ethereum's current two-level state tree, which features a tree of account objects, where each account object in turn has its own storage tree. This tree structure is complex, but at the beginning the complexity seemed to be well-encapsulated: the rest of the protocol interacts with the tree as a key/value store that you can read and write to, so we don't have to worry about how the tree is structured.

Later, however, the complexity turned out to have systemic effects: the ability of accounts to have arbitrarily large storage trees meant that there was no way to reliably expect a particular slice of the state (eg. "all accounts starting with 0x1234") to have a predictable size. This makes it harder to split up the state into pieces, complicating the design of syncing protocols and attempts to distribute the storage process. Why did encapsulated complexity become systemic? Because the interface changed. The fix? The current proposal to move to Verkle trees also includes a move to a well-balanced single-layer design for the tree.

Ultimately, which type of complexity to favor in any given situation is a question with no easy answers. The best that we can do is to have an attitude of moderately favoring encapsulated complexity, but not too much, and exercise our judgement in each specific case. Sometimes, a sacrifice of a little bit of systemic complexity to allow a great reduction of encapsulated complexity really is the best thing to do. And other times, you can even misjudge what is encapsulated and what isn't. Each situation is different.
How do trusted setups work?
2022 Mar 14

Necessary background: elliptic curves and elliptic curve pairings. See also: Dankrad Feist's article on KZG polynomial commitments. Special thanks to Justin Drake, Dankrad Feist and Chih-Cheng Liang for feedback and review.

Many cryptographic protocols, especially in the areas of data availability sampling and ZK-SNARKs depend on trusted setups. A trusted setup ceremony is a procedure that is done once to generate a piece of data that must then be used every time some cryptographic protocol is run. Generating this data requires some secret information; the "trust" comes from the fact that some person or some group of people has to generate these secrets, use them to generate the data, and then publish the data and forget the secrets. But once the data is generated, and the secrets are forgotten, no further participation from the creators of the ceremony is required.

There are many types of trusted setups. The earliest instance of a trusted setup being used in a major protocol is the original Zcash ceremony in 2016. This ceremony was very complex, and required many rounds of communication, so it could only have six participants. Everyone using Zcash at that point was effectively trusting that at least one of the six participants was honest. More modern protocols usually use the powers-of-tau setup, which has a 1-of-N trust model with \(N\) typically in the hundreds. That is to say, hundreds of people participate in generating the data together, and only one of them needs to be honest and not publish their secret for the final output to be secure. Well-executed setups like this are often considered "close enough to trustless" in practice.

This article will explain how the KZG setup works, why it works, and the future of trusted setup protocols. Anyone proficient in code should also feel free to follow along this code implementation: https://github.com/ethereum/research/blob/master/trusted_setup/trusted_setup.py.

What does a powers-of-tau setup look like?

A powers-of-tau setup is made up of two series of elliptic curve points that look as follows:

\([G_1, G_1 * s, G_1 * s^2 ... G_1 * s^{n_1 - 1}]\)
\([G_2, G_2 * s, G_2 * s^2 ... G_2 * s^{n_2 - 1}]\)

\(G_1\) and \(G_2\) are the standardized generator points of the two elliptic curve groups; in BLS12-381, \(G_1\) points are (in compressed form) 48 bytes long and \(G_2\) points are 96 bytes long. \(n_1\) and \(n_2\) are the lengths of the \(G_1\) and \(G_2\) sides of the setup. Some protocols require \(n_2 = 2\), others require \(n_1\) and \(n_2\) to both be large, and some are in the middle (eg. Ethereum's data availability sampling in its current form requires \(n_1 = 4096\) and \(n_2 = 16\)). \(s\) is the secret that is used to generate the points, and needs to be forgotten.

To make a KZG commitment to a polynomial \(P(x) = \sum_i c_i x^i\), we simply take a linear combination \(\sum_i c_i S_i\), where \(S_i = G_1 * s^i\) (the elliptic curve points in the trusted setup). The \(G_2\) points in the setup are used to verify evaluations of polynomials that we make commitments to; I won't go into verification here in more detail, though Dankrad does in his post.
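To make the structure concrete, here is a toy sketch in Python, assuming py_ecc's optimized_bls12_381 module (G1, G2, Z1, add, multiply, curve_order). It builds a tiny setup with a known secret \(s\), which is exactly what a real ceremony must prevent, and computes a commitment as the linear combination \(\sum_i c_i S_i\).

```python
# Toy powers-of-tau setup and a KZG commitment (illustration only: here the
# secret s is known, which is exactly what a real ceremony must prevent).
# Assumes py_ecc's optimized_bls12_381 module and its G1/G2/add/multiply helpers.
from py_ecc.optimized_bls12_381 import G1, G2, Z1, add, multiply, curve_order

def toy_setup(s, n1, n2):
    """Generate [G * s^i] on both sides from a known secret s."""
    g1_side = [multiply(G1, pow(s, i, curve_order)) for i in range(n1)]
    g2_side = [multiply(G2, pow(s, i, curve_order)) for i in range(n2)]
    return g1_side, g2_side

def kzg_commit(coeffs, g1_side):
    """Commit to P(x) = sum_i coeffs[i] * x^i as sum_i coeffs[i] * S_i."""
    commitment = Z1                          # point at infinity (identity)
    for c_i, S_i in zip(coeffs, g1_side):
        commitment = add(commitment, multiply(S_i, c_i % curve_order))
    return commitment

g1_side, g2_side = toy_setup(s=424242, n1=8, n2=2)
C = kzg_commit([6, 2, 8, 3], g1_side)        # commitment to 3x^3 + 8x^2 + 2x + 6
```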
Intuitively, what value is the trusted setup providing?

It's worth understanding what is philosophically going on here, and why the trusted setup is providing value. A polynomial commitment is committing to a piece of size-\(N\) data with a size \(O(1)\) object (a single elliptic curve point). We could do this with a plain Pedersen commitment: just set the \(S_i\) values to be \(N\) random elliptic curve points that have no known relationship with each other, and commit to polynomials with \(\sum_i c_i S_i\) as before. And in fact, this is exactly what IPA evaluation proofs do.

However, any IPA-based proofs take \(O(N)\) time to verify, and there's an unavoidable reason why: a commitment to a polynomial \(P(x)\) using the base points \([S_0, S_1 ... S_i ... S_{N-1}]\) would commit to a different polynomial if we use the base points \([S_0, S_1 ... (S_i * 2) ... S_{N-1}]\). A valid commitment to the polynomial \(3x^3 + 8x^2 + 2x + 6\) under one set of base points is a valid commitment to \(3x^3 + 4x^2 + 2x + 6\) under a different set of base points. If we want to make an IPA-based proof for some statement (say, that this polynomial evaluated at \(x = 10\) equals \(3826\)), the proof should pass with the first set of base points and fail with the second. Hence, whatever the proof verification procedure is cannot avoid somehow taking into account each and every one of the \(S_i\) values, and so it unavoidably takes \(O(N)\) time.

But with a trusted setup, there is a hidden mathematical relationship between the points. It's guaranteed that \(S_{i+1} = s * S_i\) with the same factor \(s\) between any two adjacent points. If \([S_0, S_1 ... S_i ... S_{N-1}]\) is a valid setup, the "edited setup" \([S_0, S_1 ... (S_i * 2) ... S_{N-1}]\) cannot also be a valid setup. Hence, we don't need \(O(n)\) computation; instead, we take advantage of this mathematical relationship to verify anything we need to verify in constant time.

However, the mathematical relationship has to remain secret: if \(s\) is known, then anyone could come up with a commitment that stands for many different polynomials: if \(C\) commits to \(P(x)\), it also commits to \(\frac{P(x) * x}{s}\), or \(P(x) - x + s\), or many other things. This would completely break all applications of polynomial commitments. Hence, while some secret \(s\) must have existed at one point to make possible the mathematical link between the \(S_i\) values that enables efficient verification, the \(s\) must also have been forgotten.

How do multi-participant setups work?

It's easy to see how one participant can generate a setup: just pick a random \(s\), and generate the elliptic curve points using that \(s\). But a single-participant trusted setup is insecure: you have to trust one specific person!

The solution to this is multi-participant trusted setups, where by "multi" we mean a lot of participants: over 100 is normal, and for smaller setups it's possible to get over 1000. Here is how a multi-participant powers-of-tau setup works. Take an existing setup (note that you don't know \(s\), you just know the points):

\([G_1, G_1 * s, G_1 * s^2 ... G_1 * s^{n_1 - 1}]\)
\([G_2, G_2 * s, G_2 * s^2 ... G_2 * s^{n_2 - 1}]\)

Now, choose your own random secret \(t\). Compute:

\([G_1, (G_1 * s) * t, (G_1 * s^2) * t^2 ... (G_1 * s^{n_1 - 1}) * t^{n_1 - 1}]\)
\([G_2, (G_2 * s) * t, (G_2 * s^2) * t^2 ... (G_2 * s^{n_2 - 1}) * t^{n_2 - 1}]\)

Notice that this is equivalent to:

\([G_1, G_1 * (st), G_1 * (st)^2 ... G_1 * (st)^{n_1 - 1}]\)
\([G_2, G_2 * (st), G_2 * (st)^2 ... G_2 * (st)^{n_2 - 1}]\)

That is to say, you've created a valid setup with the secret \(s * t\)! You never give your \(t\) to the previous participants, and the previous participants never give you their secrets that went into \(s\). And as long as any one of the participants is honest and does not reveal their part of the secret, the combined secret does not get revealed. In particular, finite fields have the property that if you know \(s\) but not \(t\), and \(t\) is securely randomly generated, then you know nothing about \(s*t\)!
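A participant's update step is mechanical: multiply the \(i\)-th point by \(t^i\). Below is a minimal sketch, continuing the toy example above (illustrative only; a real ceremony also needs careful randomness handling and a published proof of participation).

```python
# Sketch of one participant's update: turn a setup with secret s into a
# setup with secret s*t, without ever learning s. Reuses the py_ecc helpers
# and the g1_side/g2_side lists from the toy setup sketched earlier.
import secrets
from py_ecc.optimized_bls12_381 import G2, multiply, curve_order

def contribute(g1_side, g2_side):
    t = secrets.randbelow(curve_order - 1) + 1            # participant's secret
    new_g1 = [multiply(P, pow(t, i, curve_order)) for i, P in enumerate(g1_side)]
    new_g2 = [multiply(P, pow(t, i, curve_order)) for i, P in enumerate(g2_side)]
    proof = multiply(G2, t)                                # published as G2 * t
    # t must now be forgotten; only new_g1, new_g2 and proof are published.
    return new_g1, new_g2, proof

g1_side, g2_side, proof = contribute(g1_side, g2_side)     # extends the toy setup
```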
Verifying the setup

To verify that each participant actually participated, each participant can provide a proof that consists of (i) the \(G_1 * s\) point that they received and (ii) \(G_2 * t\), where \(t\) is the secret that they introduce. The list of these proofs can be used to verify that the final setup combines together all the secrets (as opposed to, say, the last participant just forgetting the previous values and outputting a setup with just their own secret, which they keep so they can cheat in any protocols that use the setup).

\(s_1\) is the first participant's secret, \(s_2\) is the second participant's secret, etc. The pairing check at each step proves that the setup at each step actually came from a combination of the setup at the previous step and a new secret known by the participant at that step.

Each participant should reveal their proof on some publicly verifiable medium (eg. personal website, transaction from their .eth address, Twitter). Note that this mechanism does not prevent someone from claiming to have participated at some index where someone else has (assuming that other person has revealed their proof), but it's generally considered that this does not matter: if someone is willing to lie about having participated, they would also be willing to lie about having deleted their secret. As long as at least one of the people who publicly claim to have participated is honest, the setup is secure.

In addition to the above check, we also want to verify that all the powers in the setup are correctly constructed (ie. they're powers of the same secret). To do this, we could do a series of pairing checks, verifying that \(e(S_{i+1}, G_2) = e(S_i, T_1)\) (where \(T_1\) is the \(G_2 * s\) value in the setup) for every \(i\). This verifies that the factor between each \(S_i\) and \(S_{i+1}\) is the same as the factor between \(T_1\) and \(G_2\). We can then do the same on the \(G_2\) side.

But that's a lot of pairings and is expensive. Instead, we take a random linear combination \(L_1 = \sum_{i=0}^{n_1 - 2} r_i S_i\), and the same linear combination shifted by one: \(L_2 = \sum_{i=0}^{n_1 - 2} r_i S_{i+1}\). We use a single pairing check to verify that they match up: \(e(L_2, G_2) = e(L_1, T_1)\).

We can even combine the process for the \(G_1\) side and the \(G_2\) side together: in addition to computing \(L_1\) and \(L_2\) as above, we also compute \(L_3 = \sum_{i=0}^{n_2 - 2} q_i T_i\) (\(q_i\) is another set of random coefficients) and \(L_4 = \sum_{i=0}^{n_2 - 2} q_i T_{i+1}\), and check \(e(L_2, L_3) = e(L_1, L_4)\).
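For illustration, here is the simple (non-batched) version of that consistency check applied to the toy setup above, assuming py_ecc's pairing helper (which, as I understand it, takes the G2 point as its first argument); a production verifier would use the random-linear-combination trick instead.

```python
# Sketch of the simple (non-batched) consistency check described above:
# e(S_{i+1}, G_2) should equal e(S_i, T_1) for every i, where T_1 = G_2 * s.
# Assumes py_ecc's pairing(), which as I understand it takes the G2 point first.
from py_ecc.optimized_bls12_381 import G2, pairing

def check_powers(g1_side, g2_side):
    T1 = g2_side[1]                              # G2 * s
    for S_i, S_next in zip(g1_side, g1_side[1:]):
        if pairing(G2, S_next) != pairing(T1, S_i):
            return False                         # factor between S_i and S_{i+1} is wrong
    return True   # the batched version above needs only one pairing equation in total

print(check_powers(g1_side, g2_side))            # True for the toy setup above
```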
Setups in Lagrange form

In many use cases, you don't want to work with polynomials in coefficient form (eg. \(P(x) = 3x^3 + 8x^2 + 2x + 6\)), you want to work with polynomials in evaluation form (eg. \(P(x)\) is the polynomial that evaluates to \([19, 146, 9, 187]\) on the domain \([1, 189, 336, 148]\) modulo 337). Evaluation form has many advantages (eg. you can multiply and sometimes divide polynomials in \(O(N)\) time) and you can even use it to evaluate in \(O(N)\) time. In particular, data availability sampling expects the blobs to be in evaluation form.

To work with these cases, it's often convenient to convert the trusted setup to evaluation form. This would allow you to take the evaluations (\([19, 146, 9, 187]\) in the above example) and use them to compute the commitment directly.

This is done most easily with a Fast Fourier transform (FFT), but passing the curve points as input instead of numbers. I'll avoid repeating a full detailed explanation of FFTs here, but here is an implementation; it is actually not that difficult.

The future of trusted setups

Powers-of-tau is not the only kind of trusted setup out there. Some other notable (actual or potential) trusted setups include:

- The more complicated setups in older ZK-SNARK protocols (eg. see here), which are sometimes still used (particularly Groth16) because verification is cheaper than PLONK.
- Some cryptographic protocols (eg. DARK) depend on hidden-order groups, groups where it is not known what number an element can be multiplied by to get the zero element. Fully trustless versions of this exist (see: class groups), but by far the most efficient version uses RSA groups (powers of \(x\) mod \(n = pq\) where \(p\) and \(q\) are not known). Trusted setup ceremonies for this with 1-of-n trust assumptions are possible, but are very complicated to implement.
- If/when indistinguishability obfuscation becomes viable, many protocols that depend on it will involve someone creating and publishing an obfuscated program that does something with a hidden internal secret. This is a trusted setup: the creator(s) would need to possess the secret to create the program, and would need to delete it afterwards.

Cryptography continues to be a rapidly evolving field, and how important trusted setups are could easily change. It's possible that techniques for working with IPAs and Halo-style ideas will improve to the point where KZG becomes outdated and unnecessary, or that quantum computers will make anything based on elliptic curves non-viable ten years from now and we'll be stuck working with trusted-setup-free hash-based protocols. It's also possible that what we can do with KZG will improve even faster, or that a new area of cryptography will emerge that depends on a different kind of trusted setup.

To the extent that trusted setup ceremonies are necessary, it is important to remember that not all trusted setups are created equal. 176 participants is better than 6, and 2000 would be even better. A ceremony small enough that it can be run inside a browser or phone application (eg. the ZKopru setup is web-based) could attract far more participants than one that requires running a complicated software package. Every ceremony should ideally have participants running multiple independently built software implementations and running different operating systems and environments, to reduce common mode failure risks. Ceremonies that require only one round of interaction per participant (like powers-of-tau) are far better than multi-round ceremonies, both due to the ability to support far more participants and due to the greater ease of writing multiple implementations. Ceremonies should ideally be universal (the output of one ceremony being able to support a wide range of protocols). These are all things that we can and should keep working on, to ensure that trusted setups can be as secure and as trusted as possible.
The bulldozer vs vetocracy political axis
The bulldozer vs vetocracy political axis2021 Dec 19 See all posts The bulldozer vs vetocracy political axis Typically, attempts to collapse down political preferences into a few dimensions focus on two primary dimensions: "authoritarian vs libertarian" and "left vs right". You've probably seen political compasses like this: There have been many variations on this, and even an entire subreddit dedicated to memes based on these charts. I even made a spin on the concept myself, with this "meta-political compass" where at each point on the compass there is a smaller compass depicting what the people at that point on the compass see the axes of the compass as being.Of course, "authoritarian vs libertarian" and "left vs right" are both incredibly un-nuanced gross oversimplifications. But us puny-brained human beings do not have the capacity to run anything close to accurate simulations of humanity inside our heads, and so sometimes incredibly un-nuanced gross oversimplifications are something we need to understand the world. But what if there are other incredibly un-nuanced gross oversimplifications worth exploring?Enter the bulldozer vs vetocracy divideLet us consider a political axis defined by these two opposing poles:Bulldozer: single actors can do important and meaningful, but potentially risky and disruptive, things without asking for permission Vetocracy: doing anything potentially disruptive and controversial requires getting a sign-off from a large number of different and diverse actors, any of whom could stop it Note that this is not the same as either authoritarian vs libertarian or left vs right. You can have vetocratic authoritarianism, the bulldozer left, or any other combination. Here are a few examples:The key difference between authoritarian bulldozer and authoritarian vetocracy is this: is the government more likely to fail by doing bad things or by preventing good things from happening? Similarly for libertarian bulldozer vs vetocracy: are private actors more likely to fail by doing bad things, or by standing in the way of needed good things?Sometimes, I hear people complaining that eg. the United States (but other countries too) is falling behind because too many people use freedom as an excuse to prevent needed reforms from happening. But is the problem really freedom? Isn't, say, restrictive housing policy preventing GDP from rising by 36% an example of the problem precisely being people not having enough freedom to build structures on their own land? Shifting the argument over to saying that there is too much vetocracy, on the other hand, makes the argument look much less confusing: individuals excessively blocking governments and governments excessively blocking individuals are not opposites, but rather two sides of the same coin.And indeed, recently there has been a bunch of political writing pointing the finger straight at vetocracy as a source of many huge problems:https://astralcodexten.substack.com/p/ezra-klein-on-vetocracy https://www.vox.com/2020/4/22/21228469/marc-andreessen-build-government-coronavirus https://www.vox.com/2016/10/26/13352946/francis-fukuyama-ezra-klein https://www.politico.com/news/magazine/2019/11/29/penn-station-robert-caro-073564 And on the other side of the coin, people are often confused when politicians who normally do not respect human rights suddenly appear very pro-freedom in their love of Bitcoin. Are they libertarian, or are they authoritarian? 
In this framework, the answer is simple: they're bulldozers, with all the benefits and risks that that side of the spectrum brings.What is vetocracy good for?Though the change that cryptocurrency proponents seek to bring to the world is often bulldozery, cryptocurrency governance internally is often quite vetocratic. Bitcoin governance famously makes it very difficult to make changes, and some core "constitutional norms" (eg. the 21 million coin limit) are considered so inviolate that many Bitcoin users consider a chain that violates that rule to be by-definition not Bitcoin, regardless of how much support it has.Ethereum protocol research is sometimes bulldozery in operation, but the Ethereum EIP process that governs the final stage of turning a research proposal into something that actually makes it into the blockchain includes a fair share of vetocracy, though still less than Bitcoin. Governance over irregular state changes, hard forks that interfere with the operation of specific applications on-chain, is even more vetocratic: after the DAO fork, not a single proposal to intentionally "fix" some application by altering its code or moving its balance has been successful.The case for vetocracy in these contexts is clear: it gives people a feeling of safety that the platform they build or invest on is not going to suddenly change the rules on them one day and destroy everything they've put years of their time or money into. Cryptocurrency proponents often cite Citadel interfering in Gamestop trading as an example of the opaque, centralized (and bulldozery) manipulation that they are fighting against. Web2 developers often complain about centralized platforms suddenly changing their APIs in ways that destroy startups built around their platforms. And, of course.... Vitalik Buterin, bulldozer victim Ok fine, the story that WoW removing Siphon Life was the direct inspiration to Ethereum is exaggerated, but the infamous patch that ruined my beloved warlock and my response to it were very real!And similarly, the case for vetocracy in politics is clear: it's a response to the often ruinous excesses of the bulldozers, both relatively minor and unthinkably severe, of the early 20th century.So what's the synthesis?The primary purpose of this point is to outline an axis, not to argue for a particular position. And if the vetocracy vs bulldozer axis is anything like the libertarian vs authoritarian axis, it's inevitably going to have internal subtleties and contradictions: much like a free society will see people voluntarily joining internally autocratic corporations (yes, even lots of people who are totally not economically desperate make such choices), many movements will be vetocratic internally but bulldozery in their relationship with the outside world.But here are a few possible things that one could believe about bulldozers and vetocracy:The physical world has too much vetocracy, but the digital world has too many bulldozers, and there are no digital places that are truly effective refuges from the bulldozers (hence: why we need blockchains?) Processes that create durable change need to be bulldozery toward the status quo but protecting that change requires a vetocracy. There's some optimal rate at which such processes should happen; too much and there's chaos, not enough and there's stagnation. 
A few key institutions should be protected by strong vetocracy, and these institutions exist both to enable bulldozers needed to enact positive change and to give people things they can depend on that are not going to be brought down by bulldozers. In particular, blockchain base layers should be vetocratic, but application-layer governance should leave more space for bulldozers Better economic mechanisms (quadratic voting? Harberger taxes?) can get us many of the benefits of both vetocracy and bulldozers without many of the costs. Vetocracy vs bulldozer is a particularly useful axis to use when thinking about non-governmental forms of human organization, whether for-profit companies, non-profit organizations, blockchains, or something else entirely. The relatively easier ability to exit from such systems (compared to governments) confounds discussion of how libertarian vs authoritarian they are, and so far blockchains and even centralized tech platforms have not really found many ways to differentiate themselves on the left vs right axis (though I would love to see more attempts at left-leaning crypto projects!). The vetocracy vs bulldozer axis, on the other hand, continues to map to non-governmental structures quite well - potentially making it very relevant in discussing these new kinds of non-governmental structures that are becoming increasingly important.
Soulbound
Soulbound2022 Jan 26 See all posts Soulbound One feature of World of Warcraft that is second nature to its players, but goes mostly undiscussed outside of gaming circles, is the concept of soulbound items. A soulbound item, once picked up, cannot be transferred or sold to another player.Most very powerful items in the game are soulbound, and typically require completing a complicated quest or killing a very powerful monster, usually with the help of anywhere from four to thirty nine other players. Hence, in order to get your character anywhere close to having the best weapons and armor, you have no choice but to participate in killing some of these extremely difficult monsters yourself. The purpose of the mechanic is fairly clear: it keeps the game challenging and interesting, by making sure that to get the best items you have to actually go and do the hard thing and figure out how to kill the dragon. You can't just go kill boars ten hours a day for a year, get thousands of gold, and buy the epic magic armor from other players who killed the dragon for you.Of course, the system is very imperfect: you could just pay a team of professionals to kill the dragon with you and let you collect the loot, or even outright buy a character on a secondary market, and do this all with out-of-game US dollars so you don't even have to kill boars. But even still, it makes for a much better game than every item always having a price.What if NFTs could be soulbound?NFTs in their current form have many of the same properties as rare and epic items in a massively multiplayer online game. They have social signaling value: people who have them can show them off, and there's more and more tools precisely to help users do that. Very recently, Twitter started rolling out an integration that allows users to show off their NFTs on their picture profile.But what exactly are these NFTs signaling? Certainly, one part of the answer is some kind of skill in acquiring NFTs and knowing which NFTs to acquire. But because NFTs are tradeable items, another big part of the answer inevitably becomes that NFTs are about signaling wealth. CryptoPunks are now regularly being sold for many millions of dollars, and they are not even the most expensive NFTs out there. Image source here. If someone shows you that they have an NFT that is obtainable by doing X, you can't tell whether they did X themselves or whether they just paid someone else to do X. Some of the time this is not a problem: for an NFT supporting a charity, someone buying it off the secondary market is sacrificing their own funds for the cause and they are helping the charity by contributing to others' incentive to buy the NFT, and so there is no reason to discriminate against them. And indeed, a lot of good can come from charity NFTs alone. But what if we want to create NFTs that are not just about who has the most money, and that actually try to signal something else?Perhaps the best example of a project trying to do this is POAP, the "proof of attendance protocol". POAP is a standard by which projects can send NFTs that represent the idea that the recipient personally participated in some event. Part of my own POAP collection, much of which came from the events that I attended over the years. POAP is an excellent example of an NFT that works better if it could be soulbound. If someone is looking at your POAP, they are not interested in whether or not you paid someone who attended some event. They are interested in whether or not you personally attended that event. 
Proposals to put certificates (eg. driver's licenses, university degrees, proof of age) on-chain face a similar problem: they would be much less valuable if someone who doesn't meet the condition themselves could just go buy one from someone who does.While transferable NFTs have their place and can be really valuable on their own for supporting artists and charities, there is also a large and underexplored design space of what non-transferable NFTs could become.What if governance rights could be soulbound?This is a topic I have written about ad nauseam (see [1] [2] [3] [4] [5]), but it continues to be worth repeating: there are very bad things that can easily happen to governance mechanisms if governance power is easily transferable. This is true for two primary types of reasons:If the goal is for governance power to be widely distributed, then transferability is counterproductive as concentrated interests are more likely to buy the governance rights up from everyone else. If the goal is for governance power to go to the competent, then transferability is counterproductive because nothing stops the governance rights from being bought up by the determined but incompetent. If you take the proverb that "those who most want to rule people are those least suited to do it" seriously, then you should be suspicious of transferability, precisely because transferability makes governance power flow away from the meek who are most likely to provide valuable input to governance and toward the power-hungry who are most likely to cause problems.So what if we try to make governance rights non-transferable? What if we try to make a CityDAO where more voting power goes to the people who actually live in the city, or at least is reliably democratic and avoids undue influence by whales hoarding a large number of citizen NFTs? What if DAO governance of blockchain protocols could somehow make governance power conditional on participation? Once again, a large and fruitful design space opens up that today is difficult to access.Implementing non-transferability in practicePOAP has made the technical decision to not block transferability of the POAPs themselves. There are good reasons for this: users might have a good reason to want to migrate all their assets from one wallet to another (eg. for security), and the security of non-transferability implemented "naively" is not very strong anyway because users could just create a wrapper account that holds the NFT and then sell the ownership of that.And indeed, there have been quite a few cases where POAPs have frequently been bought and sold when an economic rationale was there to do so. Adidas recently released a POAP for free to their fans that could give users priority access at a merchandise sale. What happened? Well, of course, many of the POAPs were quickly transferred to the highest bidder. More transfers than items. And not the only time.To solve this problem, the POAP team is suggesting that developers who care about non-transferability implement checks on their own: they could check on-chain if the current owner is the same address as the original owner, and they could add more sophisticated checks over time if deemed necessary. This is, for now, a more future-proof approach.Perhaps the one NFT that is the most robustly non-transferable today is the proof-of-humanity attestation. Theoretically, anyone can create a proof-of-humanity profile with a smart contract account that has transferable ownership, and then sell that account. 
But the proof-of-humanity protocol has a revocation feature that allows the original owner to make a video asking for a profile to be removed, and a Kleros court decides whether or not the video was from the same person as the original creator. Once the profile is successfully removed, they can re-apply to make a new profile. Hence, if you buy someone else's proof-of-humanity profile, your possession can be very quickly taken away from you, making transfers of ownership non-viable. Proof-of-humanity profiles are de-facto soulbound, and infrastructure built on top of them could allow for on-chain items in general to be soulbound to particular humans.Can we limit transferability without going all the way and basing everything on proof of humanity? It becomes harder, but there are medium-strength approaches that are probably good enough for some use cases. Making an NFT bound to an ENS name is one simple option, if we assume that users care enough about their ENS names that they are not willing to transfer them. For now, what we're likely to see is a spectrum of approaches to limit transferability, with different projects choosing different tradeoffs between security and convenience.Non-transferability and privacyCryptographically strong privacy for transferable assets is fairly easy to understand: you take your coins, put them into tornado.cash or a similar platform, and withdraw them into a fresh account. But how can we add privacy for soulbound items where you cannot just move them into a fresh account or even a smart contract? If proof of humanity starts getting more adoption, privacy becomes even more important, as the alternative is all of our activity being mapped on-chain directly to a human face.Fortunately, a few fairly simple technical options are possible:Store the item at an address which is the hash of (i) an index, (ii) the recipient address and (iii) a secret belonging to the recipient. You could reveal your secret to an interface that would then scan for all possible items that belong to your, but no one without your secret could see which items are yours. Publish a hash of a bunch of items, and give each recipient their Merkle branch. If a smart contract needs to check if you have an item of some type, you can provide a ZK-SNARK. Transfers could be done on-chain; the simplest technique may just be a transaction that calls a factory contract to make the old item invalid and the new item valid, using a ZK-SNARK to prove that the operation is valid.Privacy is an important part of making this kind of ecosystem work well. In some cases, the underlying thing that the item is representing is already public, and so there is no point in trying to add privacy. But in many other cases, users would not want to reveal everything that they have. If, one day in the future, being vaccinated becomes a POAP, one of the worst things we could do would be to create a system where the POAP is automatically advertised for everyone to see and everyone has no choice but to let their medical decision be influenced by what would look cool in their particular social circle. Privacy being a core part of the design can avoid these bad outcomes and increase the chance that we create something great.From here to thereA common criticism of the "web3" space as it exists today is how money-oriented everything is. People celebrate the ownership, and outright waste, of large amounts of wealth, and this limits the appeal and the long-term sustainability of the culture that emerges around these items. 
There are of course important benefits that even financialized NFTs can provide, such as funding artists and charities that would otherwise go unrecognized. However, there are limits to that approach, and a lot of underexplored opportunity in trying to go beyond financialization. Making more items in the crypto space "soulbound" can be one path toward an alternative, where NFTs can represent much more of who you are and not just what you can afford.However, there are technical challenges to doing this, and an uneasy "interface" between the desire to limit or prevent transfers and a blockchain ecosystem where so far all of the standards are designed around maximum transferability. Attaching items to "identity objects" that users are either unable (as with proof-of-humanity profiles) or unwilling (as with ENS names) to trade away seems like the most promising path, but challenges remain in making this easy-to-use, private and secure. We need more effort on thinking through and solving these challenges. If we can, this opens a much wider door to blockchains being at the center of ecosystems that are collaborative and fun, and not just about money.
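Returning to the first privacy technique listed above (storing an item at the hash of an index, the recipient's address and a recipient-held secret), here is a toy sketch; sha3-256 stands in for keccak256, the registry is a local dictionary rather than a contract, and all names are hypothetical:

```python
# A toy sketch only: sha3-256 stands in for keccak256, the registry is a local
# dict rather than a contract, and all names here are hypothetical.
import hashlib, os

def item_slot(index: int, recipient: bytes, secret: bytes) -> bytes:
    # the hidden "address" an item is stored at: hash(index, recipient, secret)
    h = hashlib.sha3_256()
    h.update(index.to_bytes(32, "big"))
    h.update(recipient)
    h.update(secret)
    return h.digest()

registry = {}                                  # issuer-side storage, keyed by hidden slot
recipient = bytes(20)                          # hypothetical 20-byte recipient address
secret = os.urandom(32)                        # known only to the recipient
for i, item in enumerate(["POAP: event A", "POAP: event B"]):
    registry[item_slot(i, recipient, secret)] = item

def scan_items(recipient, secret, max_index=16):
    # the recipient reveals the secret to an interface, which scans candidate slots;
    # without the secret, nobody can tell which registry entries belong to whom
    return [registry[s] for i in range(max_index)
            if (s := item_slot(i, recipient, secret)) in registry]

print(scan_items(recipient, secret))
```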
Review of Optimism retro funding round 1
Review of Optimism retro funding round 12021 Nov 16 See all posts Review of Optimism retro funding round 1 Special thanks to Karl Floersch and Haonan Li for feedback and review, and Jinglan Wang for discussion.Last month, Optimism ran their first round of retroactive public goods funding, allocating a total of $1 million to 58 projects to reward the good work that these projects have already done for the Optimism and Ethereum ecosystems. In addition to being the first major retroactive general-purpose public goods funding experiment, it's also the first experiment in a new kind of governance through badge holders - not a very small decision-making board and also not a fully public vote, but instead a quadratic vote among a medium-sized group of 22 participants.The entire process was highly transparent from start to finish:The rules that the badge holders were supposed to follow were enshrined in the badge holder instructions You can see the projects that were nominated in this spreadsheet All discussion between the badge holders happened in publicly viewable forums. In addition to Twitter conversation (eg. Jeff Coleman's thread and also others), all of the explicit structured discussion channels were publicly viewable: the #retroactive-public-goods channel on the Optimism discord, and a published Zoom call The full results, and the individual badge holder votes that went into the results, can be viewed in this spreadsheet And finally, here are the results in an easy-to-read chart form: Much like the Gitcoin quadratic funding rounds and the MolochDAO grants, this is yet another instance of the Ethereum ecosystem establishing itself as a key player in the innovative public goods funding mechanism design space. But what can we learn from this experiment?Analyzing the resultsFirst, let us see if there are any interesting takeaways that can be seen by looking at the results. But what do we compare the results to? The most natural point of comparison is the other major public goods funding experiment that we've had so far: the Gitcoin quadratic funding rounds (in this case, round 11).Gitcoin round 11 (tech only) Optimism retro round 1 Probably the most obvious property of the Optimism retro results that can be seen without any comparisons is the category of the winners: every major Optimism retro winner was a technology project. There was nothing in the badge holder instructions that specified this; non-tech projects (say, the translations at ethereum.cn) were absolutely eligible. And yet, due to some combination of choice of badge holders and subconscious biases, the round seems to have been understood as being tech-oriented. Hence, I restricted the Gitcoin results in the table above to technology ("DApp Tech" + "Infra Tech") to focus on the remaining differences.Some other key remaining differences are:The retro round was low variance: the top-receiving project only got three times more (in fact, exactly three times more) than the 25th, whereas in the Gitcoin chart combining the two categories the gap was over 5x, and if you look at DApp Tech or Infra Tech separately the gap is over 15x! I personally blame this on Gitcoin using standard quadratic funding (\(reward \approx (\sum_i \sqrt x_i) ^2\)) and the retro round using \(\sum_i \sqrt x_i\) without the square; perhaps the next retro round should just add the square. 
The retro round winners are more well-known projects: this is actually an intended consequence: the retro round focused on rewarding projects for value already provided, whereas the Gitcoin round was open-ended and many contributions were to promising new projects in expectation of future value. The retro round focused more on infrastructure, the Gitcoin round more on more user-facing projects: this is of course a generalization, as there are plenty of infrastructure projects in the Gitcoin list, but in general applications that are directly user-facing are much more prominent there. A particularly interesting consequence (or cause?) of this is that the Gitcoin round more on projects appealing to sub-communities (eg. gamers), whereas the retro round focused more on globally-valuable projects - or, less charitably, projects appealing to the one particular sub-community that is Ethereum developers. It is my own (admittedly highly subjective) opinion that the retro round winner selection is somewhat higher quality. This is independent of the above three differences; it's more a general impression that the specific projects that were chosen as top recipients on the right were very high quality projects, to a greater extent than top recipients on the left.Of course, this could have two causes: (i) a smaller but more skilled number of badge holders ("technocrats") can make better decisions than "the crowd", and (ii) it's easier to judge quality retroactively than ahead of time. And this gets us an interesting question: what if a simple way to summarize much of the above findings is that technocrats are smarter but the crowd is more diverse?Could we make badge holders and their outputs more diverse?To better understand the problem, let us zoom in on the one specific example that I already mentioned above: ethereum.cn. This is an excellent Chinese Ethereum community project (though not the only one! See also EthPlanet), which has been providing a lot of resources in Chinese for people to learn about Ethereum, including translations of many highly technical articles written by Ethereum community members and about Ethereum originally in English. Ethereum.cn webpage. Plenty of high quality technical material - though they have still not yet gotten the memo that they were supposed to rename "eth2" to "consensus layer". Minus ten retroactive reward points for them. What knowledge does a badge holder need to be able to effectively determine whether ethereum.cn is an awesome project, a well-meaning but mediocre project that few Chinese people actually visit, or a scam? Likely the following:Ability to speak and understand Chinese Being plugged into the Chinese community and understanding the social dynamics of that specific project Enough understanding of both the tech and of the frame of mind of non-technical readers to judge the site's usefulness for them Out of the current, heavily US-focused, badge holders, the number that satisfy these requirements is basically zero. Even the two Chinese-speaking badge holders are US-based and not close to the Chinese Ethereum community.So, what happens if we expand the badge holder set? We could add five badge holders from the Chinese Ethereum community, five from India, five from Latin America, five from Africa, and five from Antarctica to represent the penguins. At the same time we could also diversify among areas of expertise: some technical experts, some community leaders, some people plugged into the Ethereum gaming world. 
Hopefully, we can get enough coverage that for each project that's valuable to Ethereum, we would have at least 1-5 badge holders who understands enough about that project to be able to intelligently vote on it. But then we see the problem: there would only be 1-5 badge holders able to intelligently vote on it.There are a few families of solutions that I see:Tweak the quadratic voting design. In theory, quadratic voting has the unique property that there's very little incentive to vote on projects that you do not understand. Any vote you make takes away credits that you could use to vote on projects you understand better. However, the current quadratic voting design has a flaw here: not voting on a project isn't truly a neutral non-vote, it's a vote for the project getting nothing. I don't yet have great ideas for how to do this. A key question is: if zero becomes a truly neutral vote, then how much money does a project get if nobody makes any votes on it? However, this is worth looking into more. Split up the voting into categories or sub-committees. Badge holders would first vote to sort projects into buckets, and the badge holders within each bucket ("zero knowledge proofs", "games", "India"...) would then make the decisions from there. This could also be done in a more "liquid" way through delegation - a badge holder could select some other badge holder to decide their vote on some project, and they would automatically copy their vote. Everyone still votes on everything, but facilitate more discussion. The badge holders that do have the needed domain expertise to assess a given project (or a single aspect of some given project) come up with their opinions and write a document or spreadsheet entry to express their reasoning. Other badge holders use this information to help make up their minds. Once the number of decisions to be made gets even higher, we could even consider ideas like in-protocol random sortition (eg. see this idea to incorporate sortition into quadratic voting) to reduce the number of decisions that each participant needs to make. Quadratic sortition has the particularly nice benefit that it naturally leads to large decisions being made by the entire group and small decisions being made by smaller groups.The means-testing debateIn the post-round retrospective discussion among the badgeholders, one of the key questions that was brought up is: when choosing which projects to fund, should badge holders take into account whether that project is still in dire need of funding, and de-prioritize projects that are already well-funded through some other means? That is to say, should retroactive rewards be means-tested?In a "regular" grants-funding round, the rationale for answering "yes" is clear: increasing a project's funding from $0 to $100k has a much bigger impact on its ability to do its job than increasing a project's funding from $10m to $10.1m. But Optimism retro funding round 1 is not a regular grants-funding round. In retro funding, the objective is not to give people money in expectation of future work that money could help them do. Rather, the objective is to reward people for work already done, to change the incentives for anyone working on projects in the future. 
With this in mind, to what extent should retroactive project funding depend on how much a given project actually needs the funds?The case for means testingSuppose that you are a 20-year-old developer, and you are deciding whether to join some well-funded defi project with a fancy token, or to work on some cool open-source fully public good work that will benefit everyone. If you join the well-funded defi project, you will get a $100k salary, and your financial situation will be guaranteed to be very secure. If you work on public-good projects on your own, you will have no income. You have some savings and you could make some money with side gigs, but it will be difficult, and you're not sure if the sacrifice is worth it.Now, consider two worlds, World A and World B. First, the similarities:There are ten people exactly like you out there that could get retro rewards, and five of you will. Hence, there's a 50% chance you'll get a retro reward. Theres's a 30% chance that your work will propel you to moderate fame and you'll be hired by some company with even better terms than the original defi project (or even start your own). Now, the differences:World A (means testing): retro rewards are concentrated among the actors that do not find success some other way World B (no means testing): retro rewards are given out independently of whether or not the project finds success in some other way Let's look at your chances in each world.Event Probability (World A) Probability (World B) Independent success and retroactive reward 0% 15% Independent success only 30% 15% Retroactive reward only 50% 35% Nothing 20% 35% From your point of view as a non-risk-neutral human being, the 15% chance of getting success twice in world B matters much less than the fact that in world B your chances of being left completely in the cold with nothing are nearly double.Hence, if we want to encourage people in this hypothetical 20 year old's position to actually contribute, concentrating retro rewards among projects who did not already get rewarded some other way seems prudent.The case against means testingSuppose that you are someone who contributes a small amount to many projects, or an investor seed-funding public good projects in anticipation of retroactive rewards. In this case, the share that you would get from any single retroactive reward is small. Would you rather have a 10% chance of getting $10,100, or a 10% chance of getting $10,000 and a 10% chance of getting $100? It really doesn't matter.Furthermore, your chance of getting rewarded via retroactive funding may well be quite disjoint from your chance of getting rewarded some other way. There are countless stories on the internet of people putting a big part of their lives into a project when that project was non-profit and open-source, seeing that project go for-profit and become successful, and getting absolutely nothing out of it for themselves. In all of these cases, it doesn't really matter whether or not retroactive rewards care on whether or not projects are needy. In fact, it would probably be better for them to just focus on judging quality.Means testing has downsides of its own. It would require badge holders to expend effort to determine to what extent a project is well-funded outside the retroactive reward system. It could lead to projects expending effort to hide their wealth and appear scrappy to increase their chance of getting more rewards. 
Subjective evaluations of neediness of recipients could turn into politicized evaluations of moral worthiness of recipients that introduce more controversy into the mechanism. In the extreme, an elaborate tax return system might be required to properly enforce fairness.What do I think?In general, it seems like doing a little bit of prioritizing projects that have not discovered business models has advantages, but we should not do too much of that. Projects should be judged by their effect on the world first and foremost.NominationsIn this round, anyone could nominate projects by submitting them in a Google form, and there were only 106 projects nominated. What about the next round, now that people know for sure that they stand a chance at getting thousands of dollars in payout? What about the round a year in the hypothetical future, when fees from millions of daily transactions are paying into the retro funding rounds, and individual projects are getting more money than the entire round is today?Some kind of multi-level structure for nominations seems inevitable. There's probably no need to enshrine it directly into the voting rules. Instead, we can look at this as one particular way of changing the structure of the discussion: nomination rules filter out the nominations that badge holders need to look at, and anything the badge holders do not look at will get zero votes by default (unless a badge holder really cares to bypass the rules because they have their own reasons to believe that some project is valuable). Some possible ideas:Badge holder pre-approval: for a proposal to become visible, it must be approved by N badge holders (eg. N=3?). Any N badge holders could pre-approve any project; this is an anti-spam speed bump, not a gate-keeping sub-committee. Require proposers to provide more information about their proposal, justifying it and reducing the work badge holders need to do to go through it. Badge holders would also appoint a separate committee and entrust it with sorting through these proposals and forwarding the ones that follow the rules and pass a basic smell test of not being spam Proposals have to specify a category (eg. "zero knowledge proofs", "games", "India"), and badge holders who had declared themselves experts in that category would review those proposals and forward them to a vote only if they chose the right category and pass a basic smell test. Proposing requires a deposit of 0.02 ETH. If your proposal gets 0 votes (alternatively: if your proposal is explicitly deemed to be "spam"), your deposit is lost. Proposing requires a proof-of-humanity ID, with a maximum of 3 proposals per human. If your proposals get 0 votes (alternatively: if any of your proposals is explicitly deemed to be "spam"), you can no longer submit proposals for a year (or you have to provide a deposit). Conflict of interest rulesThe first part of the post-round retrospective discussion was taken up by discussion of conflict of interest rules. The badge holder instructions include the following lovely clause: No self-dealing or conflicts of interestRetroDAO governance participants should refrain from voting on sending funds to organizations where any portion of those funds is expected to flow to them, their other projects, or anyone they have a close personal or economic relationship with. As far as I can tell, this was honored. Badge holders did not try to self-deal, as they were (as far as I can tell) good people, and they knew their reputations were on the line. 
But there were also some subjective edge cases:Wording issues causing confusion. Some badge holders wondered about the word "other": could badge holders direct funds to their own primary projects? Additionally, the "sending funds to organizations where..." language does not strictly prohibit direct transfers to self. These were arguably simple mistakes in writing this clause; the word "other" should just be removed and "organizations" replaced with "addresses". What if a badge holder is part of a nonprofit that itself gives out grants to other projects? Could the badge holder vote for that nonprofit? The badge holder would not benefit, as the funds would 100% pass-through to others, but they could benefit indirectly. What level of connection counts as close connection? Ethereum is a tight-knit community and the people qualified to judge the best projects are often at least to some degree friends with the team or personally involved in those projects precisely because they respect those projects. When do those connections step over the line? I don't think there are perfect answers to this; rather, the line will inevitably be gray and can only be discussed and refined over time. The main mechanism-design tweaks that can mitigate it are (i) increasing the number of badge holders, diluting the portion of them that can be insiders in any single project, (ii) reducing the rewards going to projects that only a few badge holders support (my suggestion above to set the reward to \((\sum_i \sqrt x_i)^2\) instead of \(\sum_i \sqrt x_i\) would help here too), and (iii) making sure it's possible for badge holders to counteract clear abuses if they do show up.Should badge holder votes be secret ballot?In this round, badge holder votes were completely transparent; anyone can see how each badge holder votes. But transparent voting has a huge downside: it's vulnerable to bribery, including informal bribery of the kind that even good people easily succumb to. Badge holders could end up supporting projects in part with the subconscious motivation of winning favor with them. Even more realistically, badge holders may be unwilling to make negative votes even when they are justified, because a public negative vote could easily rupture a relationship.Secret ballots are the natural alternative. Secret ballots are used widely in democratic elections where any citizen (or sometimes resident) can vote, precisely to prevent vote buying and more coercive forms of influencing how people vote. However, in typical elections, votes within executive and legislative bodies are typically public. The usual reasons for this have to do with theories of democratic accountability: voters need to know how their representatives vote so that they can choose their representatives and know that they are not completely lying about their stated values. But there's also a dark side to accountability: elected officials making public votes are accountable to anyone who is trying to bribe them.Secret ballots within government bodies do have precedent:The Israeli Knesset uses secret votes to elect the president and a few other officials The Italian parliament has used secret votes in a variety of contexts. In the 19th century, it was considered an important way to protect parliament votes from interference by a monarchy. Discussions in US parliaments were less transparent before 1970, and some researchers argue that the switch to more transparency led to more corruption. Voting in juries is often secret. 
Sometimes, even the identities of jurors are secret. In general, the conclusion seems to be that secret votes in government bodies have complicated consequences; it's not clear that they should be used everywhere, but it's also not clear that transparency is an absolute good either.In the context of Optimism retro funding specifically, the main specific argument I heard against secret voting is that it would make it harder for badge holders to rally and vote against other badge holders making votes that are clearly very wrong or even malicious. Today, if a few rogue badge holders start supporting a project that has not provided value and is clearly a cash grab for those badge holders, the other badge holders can see this and make negative votes to counteract this attack. With secret ballots, it's not clear how this could be done.I personally would favor the second round of Optimism retro funding using completely secret votes (except perhaps open to a few researchers under conditions of non-disclosure) so we can tell what the material differences are in the outcome. Given the current small and tight-knit set of badge holders, dealing with rogue badge hodlers is likely not a primary concern, but in the future it will be; hence, coming up with a secret ballot design that allows counter-voting or some alternative strategy is an important research problem.Other ideas for structuring discussionThe level of participation among badge holders was very uneven. Some (particularly Jeff Coleman and Matt Garnett) put a lot of effort into their participation, publicly expressing their detailed reasoning in Twitter threads and helping to set up calls for more detailed discussion. Others participated in the discussion on Discord and still others just voted and did little else.There was a choice made (ok fine, I was the one who suggested it) that the #retroactive-public-goods channel should be readable by all (it's in the Optimism discord), but to prevent spam only badge holders should be able to speak. This reduced many people's ability to participate, especially ironically enough my own (I am not a badge holder, and my self-imposed Twitter quarantine, which only allows me to tweet links to my own long-form content, prevented me from engaging on Twitter).These two factors together meant that there was not that much discussion taking place; certainly less than I had been hoping for. What are some ways to encourage more discussion?Some ideas:Badge holders could vote in advisors, who cannot vote but can speak in the #retroactive-public-goods channel and other badge-holder-only meetings. Badge holders could be required to explain their decisions, eg. writing a post or a paragraph for each project they made votes on. Consider compensating badge holders, either through an explicit fixed fee or through a norm that badge holders themselves who made exceptional contributions to discussion are eligible for rewards in future rounds. Add more discussion formats. If the number of badge holders increases and there are subgroups with different specialties, there could be more chat rooms and each of them could invite outsiders. Another option is to create a dedicated subreddit. It's probably a good idea to start experimenting with more ideas like this.ConclusionsGenerally, I think Round 1 of Optimism retro funding has been a success. 
Many interesting and valuable projects were funded, there was quite a bit of discussion, and all of this despite it only being the first round.There are a number of ideas that could be introduced or experimented with in subsequent rounds:Increase the number and diversity of badge holders, while making sure that there is some solution to the problem that only a few badge holders will be experts in any individual project's domain. Add some kind of two-layer nomination structure, to lower the decision-making burden that the entire badge holder set is exposed to Use secret ballots Add more discussion channels, and more ways for non-badge-holders to participate. This could involve reforming how existing channels work, or it could involve adding new channels, or even specialized channels for specific categories of projects. Change the reward formula to increase variance, from the current \(\sum_i \sqrt x_i\) to the standard quadratic funding formula of \((\sum_i \sqrt x_i) ^2\). In the long term, if we want retro funding to be a sustainable institution, there is also the question of how new badge holders are to be chosen (and, in cases of malfeasance, how badge holders could be removed). Currently, the selection is centralized. In the future, we need some alternative. One possible idea for round 2 is to simply allow existing badge holders to vote in a few new badge holders. In the longer term, to prevent it from being an insular bureaucracy, perhaps one badge holder each round could be chosen by something with more open participation, like a proof-of-humanity vote?In any case, retroactive public goods funding is still an exciting and new experiment in institutional innovation in multiple ways. It's an experiment in non-coin-driven decentralized governance, and it's an experiment in making things happen through retroactive, rather than proactive, incentives. To make the experiment fully work, a lot more innovation will need to happen both in the mechanism itself and in the ecosystem that needs to form around it. When will we see the first retro funding-focused angel investor? Whatever ends up happening, I'm looking forward to seeing how this experiment evolves in the rounds to come.
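As a purely numerical illustration of the formula change proposed in the conclusions above (toy vote counts, not actual round data), here is a quick comparison of \(\sum_i \sqrt x_i\) against \((\sum_i \sqrt x_i)^2\):

```python
# Toy numbers, not actual round data: two projects receiving the same total
# amount of votes, one from many badge holders and one from a single badge holder.
from math import sqrt

broad_support = [9, 9, 9, 9]    # four medium votes
narrow_support = [36]           # one large vote, same total

def round_1_formula(votes):     # sum of square roots (used in retro round 1)
    return sum(sqrt(x) for x in votes)

def quadratic_formula(votes):   # square of the sum of square roots (standard QF)
    return sum(sqrt(x) for x in votes) ** 2

print(round_1_formula(broad_support), round_1_formula(narrow_support))      # 12.0 vs 6.0
print(quadratic_formula(broad_support), quadratic_formula(narrow_support))  # 144.0 vs 36.0
# Squaring turns a 2x gap into a 4x gap: more variance, and a stronger reward
# for breadth of support.
```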
Endgame
Endgame2021 Dec 06 See all posts Endgame Special thanks to a whole bunch of people from Optimism and Flashbots for discussion and thought that went into this piece, and Karl Floersch, Phil Daian, Hasu and Alex Obadia for feedback and review.Consider the average "big block chain" - very high block frequency, very high block size, many thousands of transactions per second, but also highly centralized: because the blocks are so big, only a few dozen or few hundred nodes can afford to run a fully participating node that can create blocks or verify the existing chain. What would it take to make such a chain acceptably trustless and censorship resistant, at least by my standards?Here is a plausible roadmap:Add a second tier of staking, with low resource requirements, to do distributed block validation. The transactions in a block are split into 100 buckets, with a Merkle or Verkle tree state root after each bucket. Each second-tier staker gets randomly assigned to one of the buckets. A block is only accepted when at least 2/3 of the validators assigned to each bucket sign off on it. Introduce either fraud proofs or ZK-SNARKs to let users directly (and cheaply) check block validity. ZK-SNARKs can cryptographically prove block validity directly; fraud proofs are a simpler scheme where if a block has an invalid bucket, anyone can broadcast a fraud proof of just that bucket. This provides another layer of security on top of the randomly-assigned validators. Introduce data availability sampling to let users check block availability. By using DAS checks, light clients can verify that a block was published by only downloading a few randomly selected pieces. Add secondary transaction channels to prevent censorship. One way to do this is to allow secondary stakers to submit lists of transactions which the next main block must include. What do we get after all of this is done? We get a chain where block production is still centralized, but block validation is trustless and highly decentralized, and specialized anti-censorship magic prevents the block producers from censoring. It's somewhat aesthetically ugly, but it does provide the basic guarantees that we are looking for: even if every single one of the primary stakers (the block producers) is intent on attacking or censoring, the worst that they could do is all go offline entirely, at which point the chain stops accepting transactions until the community pools their resources and sets up one primary-staker node that is honest.Now, consider one possible long-term future for rollups...Imagine that one particular rollup - whether Arbitrum, Optimism, Zksync, StarkNet or something completely new - does a really good job of engineering their node implementation, to the point where it really can do 10,000 transactions per second if given powerful enough hardware. The techniques for doing this are in-principle well-known, and implementations were made by Dan Larimer and others many years ago: split up execution into one CPU thread running the unparallelizable but cheap business logic and a huge number of other threads running the expensive but highly parallelizable cryptography. Imagine also that Ethereum implements sharding with data availability sampling, and has the space to store that rollup's on-chain data between its 64 shards. As a result, everyone migrates to this rollup. What would that world look like? 
Once again, we get a world where, block production is centralized, block validation is trustless and highly decentralized, and censorship is still prevented. Rollup block producers have to process a huge number of transactions, and so it is a difficult market to enter, but they have no way to push invalid blocks through. Block availability is secured by the underlying chain, and block validity is guaranteed by the rollup logic: if it's a ZK rollup, it's ensured by SNARKs, and an optimistic rollup is secure as long as there is one honest actor somewhere running a fraud prover node (they can be subsidized with Gitcoin grants). Furthermore, because users always have the option of submitting transactions through the on-chain secondary inclusion channel, rollup sequencers also cannot effectively censor.Now, consider the other possible long-term future of rollups...No single rollup succeeds at holding anywhere close to the majority of Ethereum activity. Instead, they all top out at a few hundred transactions per second. We get a multi-rollup future for Ethereum - the Cosmos multi–chain vision, but on top of a base layer providing data availability and shared security. Users frequently rely on cross-rollup bridging to jump between different rollups without paying the high fees on the main chain. What would that world look like?It seems like we could have it all: decentralized validation, robust censorship resistance, and even distributed block production, because the rollups are all invididually small and so easy to start producing blocks in. But the decentralization of block production may not last, because of the possibility of cross-domain MEV. There are a number of benefits to being able to construct the next block on many domains at the same time: you can create blocks that use arbitrage opportunities that rely on making transactions in two rollups, or one rollup and the main chain, or even more complex combinations. A cross-domain MEV opportunity discovered by Western Gate Hence, in a multi-domain world, there are strong pressures toward the same people controlling block production on all domains. It may not happen, but there's a good chance that it will, and we have to be prepared for that possibility. What can we do about it? So far, the best that we know how to do is to use two techniques in combination:Rollups implement some mechanism for auctioning off block production at each slot, or the Ethereum base layer implements proposer/builder separation (PBS) (or both). This ensures that at least any centralization tendencies in block production don't lead to a completely elite-captured and concentrated staking pool market dominating block validation. Rollups implement censorship-resistant bypass channels, and the Ethereum base layer implements PBS anti-censorship techniques. This ensures that if the winners of the potentially highly centralized "pure" block production market try to censor transactions, there are ways to bypass the censorship. So what's the result? Block production is centralized, block validation is trustless and highly decentralized, and censorship is still prevented. Three paths toward the same destination. So what does this mean?While there are many paths toward building a scalable and secure long-term blockchain ecosystem, it's looking like they are all building toward very similar futures. 
There's a high chance that block production will end up centralized: either the network effects within rollups or the network effects of cross-domain MEV push us in that direction in their own different ways. But what we can do is use protocol-level techniques such as committee validation, data availability sampling and bypass channels to "regulate" this market, ensuring that the winners cannot abuse their power.What does this mean for block producers? Block production is likely to become a specialized market, and the domain expertise is likely to carry over across different domains. 90% of what makes a good Optimism block producer also makes a good Arbitrum block producer, and a good Polygon block producer, and even a good Ethereum base layer block producer. If there are many domains, cross-domain arbitrage may also become an important source of revenue.What does this mean for Ethereum? First of all, Ethereum is very well-positioned to adjust to this future world, despite the inherent uncertainty. The profound benefit of the Ethereum rollup-centric roadmap is that it means that Ethereum is open to all of the futures, and does not have to commit to an opinion about which one will necessarily win. Will users very strongly want to be on a single rollup? Ethereum, following its existing course, can be the base layer of that, automatically providing the anti-fraud and anti-censorship "armor" that high-capacity domains need to be secure. Is making a high-capacity domain too technically complicated, or do users just have a great need for variety? Ethereum can be the base layer of that too - and a very good one, as the common root of trust makes it far easier to move assets between rollups safely and cheaply.But also, Ethereum researchers should think hard about what levels of decentralization in block production are actually achievable. It may not be worth it to add complicated plumbing to make highly decentralized block production easy if cross-domain MEV (or even cross-shard MEV from one rollup taking up multiple shards) make it unsustainable regardless. What does this mean for big block chains? There is a path for them to turn into something trustless and censorship resistant, and we'll soon find out if their core developers and communities actually value censorship resistance and decentralization enough for them to do it!It will likely take years for all of this to play out. Sharding and data availability sampling are complex technologies to implement. It will take years of refinement and audits for people to be fully comfortable storing their assets in a ZK-rollup running a full EVM. And cross-domain MEV research too is still in its infancy. But it does look increasingly clear how a realistic but bright future for scalable blockchains is likely to emerge.
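As a rough sketch of the data availability sampling pattern referred to throughout this piece: a light client downloads a handful of randomly chosen chunks and checks each against the block's committed root. This toy version uses a plain sha256 Merkle tree and omits erasure coding, so it only illustrates the sampling-and-verification flow, not the full availability guarantee:

```python
# A toy sketch only: no erasure coding and a plain sha256 Merkle tree, so this
# shows the sampling-and-verification pattern rather than a full availability proof.
import hashlib, random

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_layers(leaves):
    layers = [[h(l) for l in leaves]]
    while len(layers[-1]) > 1:
        prev = layers[-1]
        layers.append([h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return layers

def merkle_branch(layers, index):
    branch = []
    for layer in layers[:-1]:
        branch.append(layer[index ^ 1])   # sibling at each level
        index //= 2
    return branch

def verify_branch(leaf, index, branch, root):
    node = h(leaf)
    for sibling in branch:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

chunks = [f"chunk {i}".encode() for i in range(16)]   # block data split into chunks
layers = merkle_layers(chunks)
root = layers[-1][0]                                  # commitment published with the block

# Light client: download only a few randomly chosen chunks and their branches.
for i in random.sample(range(len(chunks)), 4):
    assert verify_branch(chunks[i], i, merkle_branch(layers, i), root)
print("sampled chunks verified against the committed root")
```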
Crypto Cities
Crypto Cities2021 Oct 31 See all posts Crypto Cities Special thanks to Mr Silly and Tina Zhen for early feedback on the post, and to a big long list of people for discussion of the ideas.One interesting trend of the last year has been the growth of interest in local government, and in the idea of local governments that have wider variance and do more experimentation. Over the past year, Miami mayor Francis Suarez has pursued a Twitter-heavy tech-startup-like strategy of attracting interest in the city, frequently engaging with the mainstream tech industry and crypto community on Twitter. Wyoming now has a DAO-friendly legal structure, Colorado is experimenting with quadratic voting, and we're seeing more and more experiments making more pedestrian-friendly street environments for the offline world. We're even seeing projects with varying degrees of radicalness - Cul de sac, Telosa, CityDAO, Nkwashi, Prospera and many more - trying to create entire neighborhoods and cities from scratch.Another interesting trend of the last year has been the rapid mainstreaming of crypto ideas such as coins, non-fungible tokens and decentralized autonomous organizations (DAOs). So what would happen if we combine the two trends together? Does it make sense to have a city with a coin, an NFT, a DAO, some record-keeping on-chain for anti-corruption, or even all four? As it turns out, there are already people trying to do just that:CityCoins.co, a project that sets up coins intended to become local media of exchange, where a portion of the issuance of the coin goes to the city government. MiamiCoin already exists, and "San Francisco Coin" appears to be coming soon. Other experiments with coin issuance (eg. see this project in Seoul) Experiments with NFTs, often as a way of funding local artists. Busan is hosting a government-backed conference exploring what they could do with NFTs. Reno mayor Hillary Schieve's expansive vision for blockchainifying the city, including NFT sales to support local art, a RenoDAO with RenoCoins issued to local residents that could get revenue from the government renting out properties, blockchain-secured lotteries, blockchain voting and more. Much more ambitious projects creating crypto-oriented cities from scratch: see CityDAO, which describes itself as, well, "building a city on the Ethereum blockchain" - DAOified governance and all. But are these projects, in their current form, good ideas? Are there any changes that could make them into better ideas? Let us find out...Why should we care about cities?Many national governments around the world are showing themselves to be inefficient and slow-moving in response to long-running problems and rapid changes in people's underlying needs. In short, many national governments are missing live players. Even worse, many of the outside-the-box political ideas that are being considered or implemented for national governance today are honestly quite terrifying. Do you want the USA to be taken over by a clone of WW2-era Portuguese dictator Antonio Salazar, or perhaps an "American Caesar", to beat down the evil scourge of American leftism? For every idea that can be reasonably described as freedom-expanding or democratic, there are ten that are just different forms of centralized control and walls and universal surveillance.Now consider local governments. Cities and states, as we've seen from the examples at the start of this post, are at least in theory capable of genuine dynamism. 
There are large and very real differences of culture between cities, so it's easier to find a single city where there is public interest in adopting any particular radical idea than it is to convince an entire country to accept it. There are very real challenges and opportunities in local public goods, urban planning, transportation and many other sectors in the governance of cities that could be addressed. Cities have tightly cohesive internal economies where things like widespread cryptocurrency adoption could realistically independently happen. Furthermore, it's less likely that experiments within cities will lead to terrible outcomes both because cities are regulated by higher-level governments and because cities have an easier escape valve: people who are unhappy with what's going on can more easily exit.

So all in all, it seems like the local level of government is a very undervalued one. And given that criticism of existing smart city initiatives often heavily focuses on concerns around centralized governance, lack of transparency and data privacy, blockchain and cryptographic technologies seem like a promising key ingredient for a more open and participatory way forward.

What are city projects up to today?

Quite a lot actually! Each of these experiments is still small scale and largely still trying to find its way around, but they are all at least seeds that could turn into interesting things. Many of the most advanced projects are in the United States, but there is interest across the world; over in Korea the government of Busan is running an NFT conference. Here are a few examples of what is being done today.

Blockchain experiments in Reno

Reno, Nevada mayor Hillary Schieve is a blockchain fan, focusing primarily on the Tezos ecosystem, and she has recently been exploring blockchain-related ideas (see her podcast here) in the governance of her city:

- Selling NFTs to fund local art, starting with an NFT of the "Space Whale" sculpture in the middle of the city
- Creating a Reno DAO, governed by Reno coins that Reno residents would be eligible to receive via an airdrop. The Reno DAO could start to get sources of revenue; one proposed idea was the city renting out properties that it owns and the revenue going into a DAO
- Using blockchains to secure all kinds of processes: blockchain-secured random number generators for casinos, blockchain-secured voting, etc.

Reno space whale. Source here.

CityCoins.co

CityCoins.co is a project built on Stacks, a blockchain run by an unusual "proof of transfer" (for some reason abbreviated PoX and not PoT) block production algorithm that is built around the Bitcoin blockchain and ecosystem. 70% of the coin's supply is generated by an ongoing sale mechanism: anyone with STX (the Stacks native token) can send their STX to the city coin contract to generate city coins; the STX revenues are distributed to existing city coin holders who stake their coins. The remaining 30% is made available to the city government.

CityCoins has made the interesting decision of trying to make an economic model that does not depend on any government support. The local government does not need to be involved in creating a CityCoins.co coin; a community group can launch a coin by themselves. An FAQ-provided answer to "What can I do with CityCoins?" includes examples like "CityCoins communities will create apps that use tokens for rewards" and "local businesses can provide discounts or benefits to people who ... stack their CityCoins".
In practice, however, the MiamiCoin community is not going at it alone; the Miami government has already de-facto publicly endorsed it.

MiamiCoin hackathon winner: a site that allows coworking spaces to give preferential offers to MiamiCoin holders.

CityDAO

CityDAO is the most radical of the experiments: unlike Miami and Reno, which are existing cities with existing infrastructure to be upgraded and people to be convinced, CityDAO is a DAO with legal status under the Wyoming DAO law (see their docs here) trying to create entirely new cities from scratch.

So far, the project is still in its early stages. The team is currently finalizing a purchase of their first plot of land in a far-off corner of Wyoming. The plan is to start with this plot of land, and then add other plots of land in the future, to build cities, governed by a DAO and making heavy use of radical economic ideas like Harberger taxes to allocate the land, make collective decisions and manage resources. Their DAO is one of the progressive few that is avoiding coin voting governance; instead, the governance is a voting scheme based on "citizen" NFTs, and ideas have been floated to further limit votes to one-per-person by using proof-of-humanity verification. The NFTs are currently being sold to crowdfund the project; you can buy them on OpenSea.

What do I think cities could be up to?

Obviously there are a lot of things that cities could do in principle. They could add more bike lanes, they could use CO2 meters and far-UVC light to more effectively reduce COVID spread without inconveniencing people, and they could even fund life extension research. But my primary specialty is blockchains and this post is about blockchains, so... let's focus on blockchains.

I would argue that there are two distinct categories of blockchain ideas that make sense:

- Using blockchains to create more trusted, transparent and verifiable versions of existing processes.
- Using blockchains to implement new and experimental forms of ownership for land and other scarce assets, as well as new and experimental forms of democratic governance.

There's a natural fit between blockchains and both of these categories. Anything happening on a blockchain is very easy to publicly verify, with lots of ready-made freely available tools to help people do that. Any application built on a blockchain can immediately plug in to and interface with other applications in the entire global blockchain ecosystem. Blockchain-based systems are efficient in a way that paper is not, and publicly verifiable in a way that centralized computing systems are not - a necessary combination if you want to, say, make a new form of voting that allows citizens to give high-volume real-time feedback on hundreds or thousands of different issues.

So let's get into the specifics.

What are some existing processes that blockchains could make more trusted and transparent?

One simple idea that plenty of people, including government officials around the world, have brought up to me on many occasions is the idea of governments creating a whitelisted internal-use-only stablecoin for tracking internal government payments. Every tax payment from an individual or organization could be tied to a publicly visible on-chain record minting that number of coins (if we want individual tax payment quantities to be private, there are zero-knowledge ways to make only the total public but still convince everyone that it was computed correctly).
Transfers between departments could be done "in the clear", and the coins would be redeemed only by individual contractors or employees claiming their payments and salaries. This system could easily be extended. For example, procurement processes for choosing which bidder wins a government contract could largely be done on-chain.

Many more processes could be made more trustworthy with blockchains:

- Fair random number generators (eg. for lotteries) - VDFs, such as the one Ethereum is expected to include, could serve as a fair random number generator that could be used to make government-run lotteries more trustworthy. Fair randomness could also be used for many other use cases, such as sortition as a form of government.
- Certificates, for example cryptographic proofs that some particular individual is a resident of the city, could be done on-chain for added verifiability and security (eg. if such certificates are issued on-chain, it would become obvious if a large number of false certificates are issued). This can be used for all kinds of local-government-issued certificates.
- Asset registries, for land and other assets, as well as more complicated forms of property ownership such as development rights. Due to the need for courts to be able to make assignments in exceptional situations, these registries will likely never be fully decentralized bearer instruments in the same way that cryptocurrencies are, but putting records on-chain can still make it easier to see what happened in what order in a dispute.
- Eventually, even voting could be done on-chain. Here, many complexities and dragons loom and it's really important to be careful; a sophisticated solution combining blockchains, zero knowledge proofs and other cryptography is needed to achieve all the desired privacy and security properties. However, if humanity is ever going to move to electronic voting at all, local government seems like the perfect place to start.

What are some radical economic and governance experiments that could be interesting?

But in addition to these kinds of blockchain overlays onto things that governments already do, we can also look at blockchains as an opportunity for governments to make completely new and radical experiments in economics and governance. These are not necessarily final ideas on what I think should be done; they are more initial explorations and suggestions for possible directions. Once an experiment starts, real-world feedback is often by far the most useful variable to determine how the experiment should be adjusted in the future.

Experiment #1: a more comprehensive vision of city tokens

CityCoins.co is one vision for how city tokens could work. But it is far from the only vision. Indeed, the CityCoins.co approach has significant risks, particularly in how its economic model is heavily tilted toward early adopters:

- 70% of the STX revenue from minting new coins is given to existing stakers of the city coin.
- More coins will be issued in the next five years than in the fifty years that follow. It's a good deal for the government in 2021, but what about 2051?
- Once a government endorses a particular city coin, it becomes difficult for it to change directions in the future.

Hence, it's important for city governments to think carefully about these issues, and choose a path that makes sense for the long term.

Here is a different possible sketch of a narrative of how city tokens might work.
It's far from the only possible alternative to the CityCoins.co vision; see Steve Waldman's excellent article arguing for a city-localized medium of exchange for yet another possible direction. In any case, city tokens are a wide design space, and there are many different options worth considering. Anyway, here goes...

The concept of home ownership in its current form is a notable double-edged sword, and the specific ways in which it's actively encouraged and legally structured is considered by many to be one of the biggest economic policy mistakes that we are making today. There is an inevitable political tension between a home as a place to live and a home as an investment asset, and the pressure to satisfy communities who care about the latter often ends up severely harming the affordability of the former. A resident in a city either owns a home, making them massively over-exposed to land prices and introducing perverse incentives to fight against construction of new homes, or they rent a home, making them negatively exposed to the real estate market and thus putting them economically at odds with the goal of making a city a nice place to live.

But even despite all of these problems, many still find home ownership to be not just a good personal choice, but something worthy of actively subsidizing or socially encouraging. One big reason is that it nudges people to save money and build up their net worth. Another big reason is that despite its flaws, it creates economic alignment between residents and the communities they live in. But what if we could give people a way to save and create that economic alignment without the flaws? What if we could create a divisible and fungible city token, that residents could hold as many units of as they can afford or feel comfortable with, and whose value goes up as the city prospers?

First, let's start with some possible objectives. Not all are necessary; a token that accomplishes only three of the five is already a big step forward. But we'll try to hit as many of them as possible:

- Get sustainable sources of revenue for the government. The city token economic model should avoid redirecting existing tax revenue; instead, it should find new sources of revenue.
- Create economic alignment between residents and the city. This means first of all that the coin itself should clearly become more valuable as the city becomes more attractive. But it also means that the economics should actively encourage residents to hold the coin more than faraway hedge funds.
- Promote saving and wealth-building. Home ownership does this: as home owners make mortgage payments, they build up their net worth by default. City tokens could do this too, making it attractive to accumulate coins over time, and even gamifying the experience.
- Encourage more pro-social activity, such as positive actions that help the city and more sustainable use of resources.
- Be egalitarian. Don't unduly favor wealthy people over poor people (as badly designed economic mechanisms often do accidentally). A token's divisibility, avoiding a sharp binary divide between haves and have-nots, does a lot already, but we can go further, eg. by allocating a large portion of new issuance to residents as a UBI.

One pattern that seems to easily meet the first three objectives is providing benefits to holders: if you hold at least X coins (where X can go up over time), you get some set of services for free.
MiamiCoin is trying to encourage businesses to do this, but we could go further and make government services work this way too. One simple example would be making existing public parking spaces only available for free to those who hold at least some number of coins in a locked-up form. This would serve a few goals at the same time:

- Create an incentive to hold the coin, sustaining its value.
- Create an incentive specifically for residents to hold the coin, as opposed to otherwise-unaligned faraway investors. Furthermore, the incentive's usefulness is capped per-person, so it encourages widely distributed holdings.
- Create economic alignment (city becomes more attractive -> more people want to park -> coins have more value). Unlike home ownership, this creates alignment with an entire town, and not merely a very specific location in a town.
- Encourage sustainable use of resources: it would reduce usage of parking spots (though people without coins who really need them could still pay), supporting many local governments' desires to open up more space on the roads to be more pedestrian-friendly. Alternatively, restaurants could also be allowed to lock up coins through the same mechanism and claim parking spaces to use for outdoor seating.

But to avoid perverse incentives, it's extremely important to avoid overly depending on one specific idea and instead to have a diverse array of possible revenue sources. One excellent gold mine of places to give city tokens value, and at the same time experiment with novel governance ideas, is zoning. If you hold at least Y coins, then you can quadratically vote on the fee that nearby landowners have to pay to bypass zoning restrictions. This hybrid market + direct democracy based approach would be much more efficient than current overly cumbersome permitting processes, and the fee itself would be another source of government revenue. More generally, any of the ideas in the next section could be combined with city tokens to give city token holders more places to use them.

Experiment #2: more radical and participatory forms of governance

This is where Radical Markets ideas such as Harberger taxes, quadratic voting and quadratic funding come in. I already brought up some of these ideas in the section above, but you don't have to have a dedicated city token to do them. Some limited government use of quadratic voting and funding has already happened: see the Colorado Democratic party and the Taiwanese presidential hackathon, as well as not-yet-government-backed experiments like Gitcoin's Boulder Downtown Stimulus. But we could do more!

One obvious place where these ideas can have long-term value is giving developers incentives to improve the aesthetics of buildings that they are building (see here, here, here and here for some recent examples of professional blabbers debating the aesthetics of modern architecture). Harberger taxes and other mechanisms could be used to radically reform zoning rules, and blockchains could be used to administer such mechanisms in a more trustworthy and efficient way. Another idea that is more viable in the short term is subsidizing local businesses, similar to the Downtown Stimulus but on a larger and more permanent scale. Businesses produce various kinds of positive externalities in their local communities all the time, and those externalities could be more effectively rewarded. Local news could be quadratically funded, revitalizing a long-struggling industry (a minimal sketch of the quadratic funding arithmetic appears at the end of this post).
Pricing for advertisements could be set based on real-time votes of how much people enjoy looking at each particular ad, encouraging more originality and creativity. More democratic feedback (and possibly even retroactive democratic feedback!) could plausibly create better incentives in all of these areas. And 21st-century digital democracy through real-time online quadratic voting and funding could plausibly do a much better job than 20th-century democracy, which seems in practice to have been largely characterized by rigid building codes and obstruction at planning and permitting hearings. And of course, if you're going to use blockchains to secure voting, starting off by doing it with fancy new kinds of votes seems far more safe and politically feasible than re-fitting existing voting systems.

Mandatory solarpunk picture intended to evoke a positive image of what might happen to our cities if real-time quadratic votes could set subsidies and prices for everything.

Conclusions

There are a lot of worthwhile ideas for cities to experiment with that could be attempted by existing cities or by new cities. New cities of course have the advantage of not having existing residents with existing expectations of how things should be done; but the concept of creating a new city itself is, in modern times, relatively untested. Perhaps the multi-billion-dollar capital pools in the hands of people and projects enthusiastic to try new things could get us over the hump. But even then, existing cities will likely continue to be the place where most people live for the foreseeable future, and existing cities can use these ideas too.

Blockchains can be very useful in both the more incremental and more radical ideas that were proposed here, even despite the inherently "trusted" nature of a city government. Running any new or existing mechanism on-chain gives the public an easy ability to verify that everything is following the rules. Public chains are better: the benefits from existing infrastructure for users to independently verify what is going on far outweigh the losses from transaction fees, which are expected to decrease soon thanks to rollups and sharding. If strong privacy is required, blockchains can be combined with zero-knowledge cryptography to give privacy and security at the same time.

The main trap that governments should avoid is too quickly sacrificing optionality. An existing city could fall into this trap by launching a bad city token instead of taking things more slowly and launching a good one. A new city could fall into this trap by selling off too much land, sacrificing the entire upside to a small group of early adopters. Starting with self-contained experiments, and taking things slowly on moves that are truly irreversible, is ideal. But at the same time, it's also important to seize the opportunity in the first place. There's a lot that can and should be improved with cities, and a lot of opportunities; despite the challenges, crypto cities broadly are an idea whose time has come.
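To make the quadratic funding arithmetic referenced above concrete, here is a minimal sketch (the project names, contribution amounts and matching-pool size are made-up assumptions, and real deployments such as Gitcoin add pairwise-bounding and other anti-collusion tweaks): each project's ideal funding level is the square of the sum of the square roots of its contributions, so many small contributions attract far more matching than one large one.

```python
import math

# Minimal quadratic funding sketch (hypothetical data). For each project, the
# "ideal" QF amount is (sum of sqrt of contributions)**2; the subsidy on top of
# raw contributions is then scaled to fit a fixed matching pool.

def quadratic_match(contributions: list[float]) -> float:
    """Ideal quadratic funding total for one project's list of contributions."""
    return sum(math.sqrt(c) for c in contributions) ** 2

def allocate_matching_pool(projects: dict[str, list[float]], pool: float) -> dict[str, float]:
    """Split a fixed matching pool in proportion to each project's QF subsidy."""
    subsidies = {
        name: quadratic_match(cs) - sum(cs)  # subsidy = ideal QF minus raw total
        for name, cs in projects.items()
    }
    total = sum(subsidies.values()) or 1.0
    return {name: pool * s / total for name, s in subsidies.items()}

if __name__ == "__main__":
    projects = {
        "local_news": [10.0] * 100,  # many small donors
        "billboard": [1000.0],       # one large donor, same raw total
    }
    print(allocate_matching_pool(projects, pool=5000.0))
```

Running the example, the broadly supported project captures essentially the whole matching pool even though both projects raised the same raw amount.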
Halo and more: exploring incremental verification and SNARKs without pairings
2021 Nov 05

Special thanks to Justin Drake and Sean Bowe for wonderfully pedantic and thoughtful feedback and review, and to Pratyush Mishra for discussion that contributed to the original IPA exposition.

Readers who have been following the ZK-SNARK space closely should by now be familiar with the high level of how ZK-SNARKs work. ZK-SNARKs are based on checking equations where the elements going into the equations are mathematical abstractions like polynomials (or in rank-1 constraint systems, matrices and vectors) that can hold a lot of data. There are three major families of cryptographic technologies that allow us to represent these abstractions succinctly: Merkle trees (for FRI), regular elliptic curves (for inner product arguments (IPAs)), and elliptic curves with pairings and trusted setups (for KZG commitments). These three technologies lead to the three types of proofs: FRI leads to STARKs, KZG commitments lead to "regular" SNARKs, and IPA-based schemes lead to bulletproofs. These three technologies have very distinct tradeoffs:

Technology | Cryptographic assumptions | Proof size | Verification time
FRI | Hashes only (quantum safe!) | Large (10-200 kB) | Medium (poly-logarithmic)
Inner product arguments (IPAs) | Basic elliptic curves | Medium (1-3 kB) | Very high (linear)
KZG commitments | Elliptic curves + pairings + trusted setup | Short (~500 bytes) | Low (constant)

So far, the first and the third have seen the most attention. The reason for this has to do with that pesky right column in the second row of the table: elliptic curve-based inner product arguments have linear verification time. What this means is that even though the size of a proof is small, verifying the proof always takes longer than just running the computation yourself. This makes IPAs non-viable for scalability-related ZK-SNARK use cases: there's no point in using an IPA-based argument to prove the validity of an Ethereum block, because verifying the proof will take longer than just checking the block yourself. KZG and FRI-based proofs, on the other hand, really are much faster to verify than doing the computation yourself, so one of those two seems like the obvious choice.

More recently, however, there has been a slew of research into techniques for merging multiple IPA proofs into one. Much of the initial work on this was done as part of designing the Halo protocol which is going into Zcash. These merging techniques are cheap, and a merged proof takes no longer to verify than a single one of the proofs that it's merging. This opens a way forward for IPAs to be useful: instead of verifying a size-\(n\) computation with a proof that still takes \(O(n)\) time to verify, break that computation up into smaller size-\(k\) steps, make \(\frac{n}{k}\) proofs for each step, and merge them together so the verifier's work goes down to a little more than \(O(k)\). These techniques also allow us to do incremental verification: if new things keep being introduced that need to be proven, you can just keep taking the existing proof, mixing it in with a proof of the new statement, and getting a proof of the new combined statement out. This is really useful for verifying the integrity of, say, an entire blockchain.

So how do these techniques work, and what can they do?
That's exactly what this post is about.

Background: how do inner product arguments work?

Inner product arguments are a proof scheme that can work over many mathematical structures, but usually we focus on IPAs over elliptic curve points. IPAs can be made over simple elliptic curves, theoretically even Bitcoin and Ethereum's secp256k1 (though some special properties are preferred to make FFTs more efficient); no need for insanely complicated pairing schemes that despite having written an explainer article and an implementation I can still barely understand myself.

We'll start off with the commitment scheme, typically called Pedersen vector commitments. To be able to commit to degree \(< n\) polynomials, we first publicly choose a set of base points, \(G_0 ... G_{n-1}\). These points can be generated through a pseudo-random procedure that can be re-executed by anyone (eg. the x coordinate of \(G_i\) can be \(hash(i, j)\) for the lowest integer \(j \ge 0\) that produces a valid point); this is not a trusted setup as it does not rely on any specific party to introduce secret information.

To commit to a polynomial \(P(x) = \sum_i c_i x^i\), the prover computes \(com(P) = \sum_i c_i G_i\). For example, \(com(x^2 + 4)\) would equal \(G_2 + 4 * G_0\) (remember, the \(+\) and \(*\) here are elliptic curve addition and multiplication). Cryptographers will also often add an extra \(r \cdot H\) hiding parameter for privacy, but for simplicity of exposition we'll ignore privacy for now; in general, it's not that hard to add privacy into all of these schemes.

Though it's not really mathematically accurate to think of elliptic curve points as being like real numbers that have sizes, area is nevertheless a good intuition for thinking about linear combinations of elliptic curve points like we use in these commitments. The blue area here is the value of the Pedersen commitment \(C = \sum_i c_i G_i\) to the polynomial \(P = \sum_i c_i x^i\).

Now, let's get into how the proof works. Our final goal will be a polynomial evaluation proof: given some \(z\), we want to make a proof that \(P(z) = a\), where this proof can be verified by anyone who has the commitment \(C = com(P)\). But first, we'll focus on a simpler task: proving that \(C\) is a valid commitment to any polynomial at all - that is, proving that \(C\) was constructed by taking a linear combination \(\sum_i c_i G_i\) of the points \(\{G_i\}\), without anything else mixed in.

Of course, technically any point is some multiple of \(G_0\) and so it's theoretically a valid commitment of something, but what we care about is proving that the prover knows some \(\{c_i\}\) such that \(\sum_i c_i G_i = C\). A commitment \(C\) cannot commit to multiple distinct polynomials that the prover knows about, because if it could, that would imply that elliptic curves are broken.

The prover could, of course, just provide \(\{c_i\}\) directly and let the verifier check the commitment. But this takes too much space. So instead, we try to reduce the problem to a smaller problem of half the size. The prover provides two points, \(L\) and \(R\), representing the yellow and green areas in this diagram:

You may be able to see where this is going: if you add \(C + L + R\) together (remember: \(C\) was the original commitment, so the blue area), the new combined point can be expressed as a sum of four squares instead of eight. And so now, the prover could finish by providing only four sums, the widths of each of the new squares.
Repeat this protocol two more times, and we're down to a single full square, which the prover can prove by sending a single value representing its width.

But there's a problem: if \(C\) is incorrect in some way (eg. the prover added some extra point \(H\) into it), then the prover could just subtract \(H\) from \(L\) or \(R\) to compensate for it. We plug this hole by randomly scaling our points after the prover provides \(L\) and \(R\):

- Choose a random factor \(\alpha\) (typically, we set \(\alpha\) to be the hash of all data added to the proof so far, including the \(L\) and \(R\), to ensure the verifier can also compute \(\alpha\)).
- Every even \(G_i\) point gets scaled by \(\alpha\), every odd \(G_i\) point gets scaled down by the same factor.
- Every odd coefficient gets scaled up by \(\alpha\) (notice the flip), and every even coefficient gets scaled down by \(\alpha\).

Now, notice that:

- The yellow area (\(L\)) gets multiplied by \(\alpha^2\) (because every yellow square is scaled up by \(\alpha\) on both dimensions)
- The green area (\(R\)) gets divided by \(\alpha^2\) (because every green square is scaled down by \(\alpha\) on both dimensions)
- The blue area (\(C\)) remains unchanged (because its width is scaled up but its height is scaled down)

Hence, we can generate our new half-size instance of the problem with some simple transformations:

\(G'_{i} = \alpha G_{2i} + \frac{G_{2i+1}}{\alpha}\)

\(c'_{i} = \frac{c_{2i}}{\alpha} + \alpha c_{2i+1}\)

\(C' = C + \alpha^2 L + \frac{R}{\alpha^2}\)

(Note: in some implementations you instead do \(G'_i = \alpha G_{2i} + G_{2i+1}\) and \(c'_i = c_{2i} + \alpha c_{2i+1}\) without dividing the odd points by \(\alpha\). This makes the equation \(C' = \alpha C + \alpha^2 L + R\), which is less symmetric, but ensures that the function to compute any \(G'\) in any round of the protocol becomes a polynomial without any division. Yet another alternative is to do \(G'_i = \alpha G_{2i} + G_{2i+1}\) and \(c'_i = c_{2i+1} + \frac{c_{2i}}{\alpha}\), which avoids any \(\alpha^2\) terms.)

And then we repeat the process until we get down to one point. Finally, we have a size-1 problem: prove that the final modified \(C^*\) (in this diagram it's \(C'''\) because we had to do three iterations, but it's \(log(n)\) iterations generally) equals \(c^*_0\) times the final modified \(G^*_0\). Here, the prover just provides \(c^*_0\) in the clear, and the verifier checks \(c^*_0 G^*_0 = C^*\). Computing \(c^*_0\) required being able to compute a linear combination of \(\{c_i\}\) that was not known ahead of time, so providing it and verifying it convinces the verifier that the prover actually does know all the coefficients that go into the commitment. This concludes the proof.

Recapping:

- The statement we are proving is that \(C\) is a commitment to some polynomial \(P(x) = \sum_i c_i x^i\) committed to using the agreed-upon base points \(\{G_i\}\)
- The proof consists of \(log(n)\) pairs of \((L, R)\) values, representing the yellow and green areas at each step. The prover also provides the final \(c^*_0\)
- The verifier walks through the proof, generating the \(\alpha\) value at each step using the same algorithm as the prover and computing the new \(C'\) and \(G'_i\) values (the verifier doesn't know the \(c_i\) values so they can't compute any \(c'_i\) values)
- At the end, they check whether or not \(c^*_0 G^*_0 = C^*\)

On the whole, the proof contains \(2 * log(n)\) elliptic curve points and one number (for pedants: one field element). Verifying the proof takes logarithmic time in every step except one: computing the new \(G'_i\) values.
This step is, unfortunately, linear. See also: Dankrad Feist's more detailed explanation of inner product arguments.

Extension to polynomial evaluations

We can extend to polynomial evaluations with a simple clever trick. Suppose we are trying to prove \(P(z) = a\). The prover and the verifier can extend the base points \(G_0 ... G_{n-1}\) by attaching powers of \(z\) to them: the new base points become \((G_0, 1), (G_1, z) ... (G_{n-1}, z^{n-1})\). These pairs can be treated as mathematical objects (for pedants: group elements) much like elliptic curve points themselves; to add them you do so element-by-element: \((A, x) + (B, y) =\) \((A + B,\ x + y)\), using elliptic curve addition for the points and regular field addition for the numbers.

We can make a Pedersen commitment using this extended base! Now, here's a puzzle. Suppose \(P(x) = \sum_i c_i x^i\), where \(P(z) = a\), would have a commitment \(C = \sum_i c_i G_i\) if we were to use the regular elliptic curve points we used before as a base. If we use the pairs \((G_i, z^i)\) as a base instead, the commitment would be \((C, y)\) for some \(y\). What must be the value of \(y\)?

The answer is: it must be equal to \(a\)! This is easy to see: the commitment is \((C, y) = \sum_i c_i (G_i, z^i)\), which we can decompose as \((\sum_i c_i G_i,\ \sum_i c_i z^i)\). The former is equal to \(C\), and the latter is just the evaluation \(P(z)\)!

Hence, if \(C\) is a "regular" commitment to \(P\) using \(\{G_i\}\) as a base, then to prove that \(P(z) = a\) we need only use the same protocol above, but proving that \((C, a)\) is a valid commitment using \((G_0, 1), (G_1, z) ... (G_{n-1}, z^{n-1})\) as a base!

Note that in practice, this is usually done slightly differently as an optimization: instead of attaching the numbers to the points and explicitly dealing with structures of the form \((G_i, z^i)\), we add another randomly chosen base point \(H\) and express it as \(G_i + z^i H\). This saves space. See here for an example implementation of this whole protocol.

So, how do we combine these proofs?

Suppose that you are given two polynomial evaluation proofs, with different polynomials and different evaluation points, and want to make a proof that they are both correct. You have:

- Proof \(\Pi_1\) proving that \(P_1(z_1) = y_1\), where \(P_1\) is represented by \(com(P_1) = C_1\)
- Proof \(\Pi_2\) proving that \(P_2(z_2) = y_2\), where \(P_2\) is represented by \(com(P_2) = C_2\)

Verifying each proof takes linear time. We want to make a proof that proves that both proofs are correct. This will still take linear time, but the verifier will only have to make one round of linear time verification instead of two.

We start off with an observation. The only linear-time step in performing the verification of the proofs is computing the \(G'_i\) values. This is \(O(n)\) work because you have to combine \(\frac{n}{2}\) pairs of \(G_i\) values into \(G'_i\) values, then \(\frac{n}{4}\) pairs of \(G'_i\) values into \(G''_i\) values, and so on, for a total of \(n\) combinations of pairs. But if you look at the algorithm carefully, you will notice that we don't actually need any of the intermediate \(G'_i\) values; we only need the final \(G^*_0\). This \(G^*_0\) is a linear combination of the initial \(G_i\) values. What are the coefficients to that linear combination? It turns out that the \(G_i\) coefficient is the \(X^i\) term of this polynomial:

\[(X + \alpha_1) * (X^2 + \alpha_2)\ *\ ...\ *\ (X^{\frac{n}{2}} + \alpha_{log(n)}) \]

This is using the \(C' = \alpha C + \alpha^2 L + R\) version we mentioned above.
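To sanity-check this claim, here is a small sketch (not from the post; a toy prime field stands in for the elliptic curve group, since the verifier's folding rule for the \(C' = \alpha C + \alpha^2 L + R\) version, \(G'_i = \alpha G_{2i} + G_{2i+1}\), is the same linear algebra whether it is applied to curve points or to plain field elements): it expands the product above into its \(n\) coefficients and checks that the resulting linear combination of the base elements matches the \(G^*_0\) obtained by folding round by round.

```python
import random

# Sanity check of the claim above (illustration only, not from the post).
# A toy prime field stands in for the elliptic curve group: the verifier's
# folding rule g'[i] = alpha * g[2i] + g[2i+1] (the C' = aC + a^2 L + R
# version) is the same linear algebra whether g[i] is a scalar or a point.

P = 2**61 - 1  # toy prime modulus, chosen only for illustration

def fold(g, alphas):
    """Fold the base elements round by round and return the final G*_0."""
    for a in alphas:
        g = [(a * g[2 * i] + g[2 * i + 1]) % P for i in range(len(g) // 2)]
    return g[0]

def k_coefficients(alphas):
    """Expand (X + a_1)(X^2 + a_2)...(X^(n/2) + a_logn) into its n coefficients."""
    coeffs = [1]
    for j, a in enumerate(alphas):
        step = 2 ** j  # degree of the X term contributed by round j+1
        new = [(a * c) % P for c in coeffs] + [0] * step
        for i, c in enumerate(coeffs):
            new[i + step] = (new[i + step] + c) % P
        coeffs = new
    return coeffs

if __name__ == "__main__":
    alphas = [random.randrange(1, P) for _ in range(3)]  # log2(n) = 3, so n = 8
    g = [random.randrange(P) for _ in range(8)]          # stand-ins for G_0 .. G_7
    assert fold(g, alphas) == sum(c * gi for c, gi in zip(k_coefficients(alphas), g)) % P
    print("G*_0 equals the K-polynomial linear combination of the base elements")
```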
The ability to directly compute \(G^*_0\) as a linear combination already cuts down our work to \(O(\frac{n}{log(n)})\) due to fast linear combination algorithms, but we can go further. The above polynomial has degree \(n - 1\), with \(n\) nonzero coefficients. But its un-expanded form has size \(log(n)\), and so you can evaluate the polynomial at any point in \(O(log(n))\) time. Additionally, you might notice that \(G^*_0\) is a commitment to this polynomial, so we can directly prove evaluations! So here is what we do:

- The prover computes the above polynomial for each proof; we'll call these polynomials \(K_1\) with \(com(K_1) = D_1\) and \(K_2\) with \(com(K_2) = D_2\). In a "normal" verification, the verifier would be computing \(D_1\) and \(D_2\) themselves as these are just the \(G^*_0\) values for their respective proofs. Here, the prover provides \(D_1\) and \(D_2\) and the rest of the work is proving that they're correct.
- To prove the correctness of \(D_1\) and \(D_2\) we'll prove that they're correct at a random point. We choose a random point \(t\), and evaluate both \(e_1 = K_1(t)\) and \(e_2 = K_2(t)\)
- The prover generates a random linear combination \(L(x) = K_1(x) + rK_2(x)\) (and the verifier can generate \(com(L) = D_1 + rD_2\)). The prover now just needs to make a single proof that \(L(t) = e_1 + re_2\).

The verifier still needs to do a bunch of extra steps, but all of those steps take either \(O(1)\) or \(O(log(n))\) work: evaluate \(e_1 = K_1(t)\) and \(e_2 = K_2(t)\), calculate the \(\alpha_i\) coefficients of both \(K_i\) polynomials in the first place, do the elliptic curve addition \(com(L) = D_1 + rD_2\). But this all takes vastly less than linear time, so all in all we still benefit: the verifier only needs to do the linear-time step of computing a \(G^*_0\) point themselves once. This technique can easily be generalized to merge \(m > 2\) proofs.

From merging IPAs to merging IPA-based SNARKs: Halo

Now, we get into the core mechanic of the Halo protocol being integrated in Zcash, which uses this proof combining technique to create a recursive proof system. The setup is simple: suppose you have a chain, where each block has an associated IPA-based SNARK (see here for how generic SNARKs from polynomial commitments work) proving its correctness. You want to create a new block, building on top of the previous tip of the chain. The new block should have its own IPA-based SNARK proving the correctness of the block. In fact, this proof should cover both the correctness of the new block and the correctness of the previous block's proof of the correctness of the entire chain before it.

IPA-based proofs by themselves cannot do this, because a proof of a statement takes longer to verify than checking the statement itself, so a proof of a proof will take even longer to verify than both proofs separately. But proof merging can do it! Essentially, we use the usual "recursive SNARK" technique to verify the proofs, except the "proof of a proof" part is only proving the logarithmic part of the work. We add an extra chain of aggregate proofs, using a trick similar to the proof merging scheme above, to handle the linear part of the work. To verify the whole chain, the verifier need only verify one linear-time proof at the very tip of the chain.

The precise details are somewhat different from the exact proof-combining trick in the previous section for efficiency reasons.
Instead of using the proof-combining trick to combine multiple proofs, we use it on a single proof, just to re-randomize the point that the polynomial committed to by \(G^*_0\) needs to be evaluated at. We then use the same newly chosen evaluation point to evaluate the polynomials in the proof of the block's correctness, which allows us to prove the polynomial evaluations together in a single IPA.

Expressed in math:

- Let \(P(z) = a\) be the previous statement that needs to be proven
- The prover generates \(G^*_0\)
- The prover proves the correctness of the new block plus the logarithmic work in the previous statements by generating a PLONK proof: \(Q_L * A + Q_R * B + Q_O * C + Q_M * A * B + Q_C = Z * H\)
- The prover chooses a random point \(t\), and proves the evaluation of a linear combination of these polynomials at \(t\). We can then check the above equation, replacing each polynomial with its now-verified evaluation at \(t\), to verify the PLONK proof.

Incremental verification, more generally

The size of each "step" does not need to be a full block verification; it could be something as small as a single step of a virtual machine. The smaller the steps the better: it ensures that the linear work that the verifier ultimately has to do at the end is less. The only lower bound is that each step has to be big enough to contain a SNARK verifying the \(log(n)\) portion of the work of a step.

But regardless of the fine details, this mechanism allows us to make succinct and easy-to-verify SNARKs, including easy support for recursive proofs that allow you to extend proofs in real time as the computation extends and even have different provers to do different parts of the proving work, all without pairings or a trusted setup! The main downside is some extra technical complexity, compared with a "simple" polynomial-based proof using eg. KZG-based commitments.

Technology | Cryptographic assumptions | Proof size | Verification time
FRI | Hashes only (quantum safe!) | Large (10-200 kB) | Medium (poly-logarithmic)
Inner product arguments (IPAs) | Basic elliptic curves | Medium (1-3 kB) | Very high (linear)
KZG commitments | Elliptic curves + pairings + trusted setup | Short (~500 bytes) | Low (constant)
IPA + Halo-style aggregation | Basic elliptic curves | Medium (1-3 kB) | Medium (constant but higher than KZG)

Not just polynomials! Merging R1CS proofs

A common alternative to building SNARKs out of polynomial games is building SNARKs out of matrix-vector multiplication games. Polynomials and vectors+matrices are both natural bases for SNARK protocols because they are mathematical abstractions that can store and compute over large amounts of data at the same time, and that admit commitment schemes that allow verifiers to check equations quickly.

In R1CS (see a more detailed description here), an instance of the game consists of three matrices \(A\), \(B\), \(C\), and a solution is a vector \(Z\) such that \((A \cdot Z) \circ (B \cdot Z) = C \cdot Z\) (the problem is often in practice restricted further by requiring the prover to make part of \(Z\) public and requiring the last entry of \(Z\) to be 1).

An R1CS instance with a single constraint (so \(A\), \(B\) and \(C\) have width 1), with a satisfying \(Z\) vector, though notice that here the \(Z\) appears on the left and has 1 in the top position instead of the bottom.
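To make the R1CS game concrete, here is a toy single-constraint instance (the matrices, the witness and the placement of the constant 1 at the top of \(Z\) are illustrative assumptions in the spirit of the figure described above; real systems work over a prime field, but plain integers are enough to show the shape of the check):

```python
# A toy R1CS check (A.Z) o (B.Z) = C.Z, where "o" is entry-wise multiplication.
# Hypothetical single-constraint instance: with Z = [1, x, y, xy] it encodes
# the statement x * y = xy.

def matvec(M, Z):
    return [sum(m * z for m, z in zip(row, Z)) for row in M]

def r1cs_satisfied(A, B, C, Z):
    Az, Bz, Cz = matvec(A, Z), matvec(B, Z), matvec(C, Z)
    return all(a * b == c for a, b, c in zip(Az, Bz, Cz))

if __name__ == "__main__":
    A = [[0, 1, 0, 0]]  # selects x
    B = [[0, 0, 1, 0]]  # selects y
    C = [[0, 0, 0, 1]]  # selects the claimed product
    print(r1cs_satisfied(A, B, C, [1, 3, 4, 12]))  # True:  3 * 4 == 12
    print(r1cs_satisfied(A, B, C, [1, 3, 4, 13]))  # False: 3 * 4 != 13
```

A bigger circuit is just more rows in \(A\), \(B\) and \(C\), one per constraint.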
Just like with polynomial-based SNARKs, this R1CS game can be turned into a proof scheme by creating commitments to \(A\), \(B\) and \(C\), requiring the prover to provide a commitment to (the private portion of) \(Z\), and using fancy proving tricks to prove the equation \((A \cdot Z) \circ (B \cdot Z) = C \cdot Z\), where \(\circ\) is item-by-item multiplication, without fully revealing any of these objects. And just like with IPAs, this R1CS game has a proof merging scheme!

Ioanna Tzialla et al describe such a scheme in a recent paper (see page 8-9 for their description). They first modify the game by introducing an expanded equation:

\[ (A \cdot Z) \circ (B \cdot Z) - u * (C \cdot Z) = E\]

For a "base" instance, \(u = 1\) and \(E = 0\), so we get back the original R1CS equation. The extra slack variables are added to make aggregation possible; aggregated instances will have other values of \(u\) and \(E\). Now, suppose that you have two solutions to the same instance, though with different \(u\) and \(E\) variables:

\[(A \cdot Z_1) \circ (B \cdot Z_1) - u_1 * (C \cdot Z_1) = E_1\]

\[(A \cdot Z_2) \circ (B \cdot Z_2) - u_2 * (C \cdot Z_2) = E_2\]

The trick involves taking a random linear combination \(Z_3 = Z_1 + r Z_2\), and making the equation work with this new value. First, let's evaluate the left side:

\[ (A \cdot (Z_1 + rZ_2)) \circ (B \cdot (Z_1 + rZ_2)) - (u_1 + ru_2)*(C \cdot (Z_1 + rZ_2)) \]

This expands into the following (grouping the \(1\), \(r\) and \(r^2\) terms together):

\[(A \cdot Z_1) \circ (B \cdot Z_1) - u_1 * (C \cdot Z_1)\]

\[r((A \cdot Z_1) \circ (B \cdot Z_2) + (A \cdot Z_2) \circ (B \cdot Z_1) - u_1 * (C \cdot Z_2) - u_2 * (C \cdot Z_1))\]

\[r^2((A \cdot Z_2) \circ (B \cdot Z_2) - u_2 * (C \cdot Z_2))\]

The first term is just \(E_1\); the third term is \(r^2 * E_2\). The middle term is very similar to the cross-term (the yellow + green areas) near the very start of this post. The prover simply provides the middle term (without the \(r\) factor), and just like in the IPA proof, the randomization forces the prover to be honest.

Hence, it's possible to make merging schemes for R1CS-based protocols too (a small numeric check of this folding identity appears at the end of this post). Interestingly enough, we don't even technically need to have a "succinct" protocol for proving the

\[ (A \cdot Z) \circ (B \cdot Z) = u * (C \cdot Z) + E\]

relation at the end; instead, the prover could just prove by opening all the commitments directly! This would still be "succinct" because the verifier would only need to verify one proof that actually represents an arbitrarily large number of statements. However, in practice having a succinct protocol for this last step is better because it keeps the proofs smaller, and Tzialla et al's paper provides such a protocol too (see page 10).

Recap

- We don't know of a way to make a commitment to a size-\(n\) polynomial where evaluations of the polynomial can be verified in \(< O(n)\) time directly. The best that we can do is make a \(log(n)\) sized proof, where all of the work to verify it is logarithmic except for one final \(O(n)\)-time piece.
- But what we can do is merge multiple proofs together. Given \(m\) proofs of evaluations of size-\(n\) polynomials, you can make a proof that covers all of these evaluations, that takes logarithmic work plus a single size-\(n\) polynomial proof to verify.
- With some clever trickery, separating out the logarithmic parts from the linear parts of proof verification, we can leverage this to make recursive SNARKs.
- These recursive SNARKs are actually more efficient than doing recursive SNARKs "directly"! In fact, even in contexts where direct recursive SNARKs are possible (eg. proofs with KZG commitments), Halo-style techniques are typically used instead because they are more efficient.
- It's not just about polynomials; other games used in SNARKs like R1CS can also be aggregated in similar clever ways.
- No pairings or trusted setups required!

The march toward faster and more efficient and safer ZK-SNARKs just keeps going...
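As promised above, here is a small numeric check of the relaxed-R1CS folding identity (all matrices, witnesses and the randomness \(r\) below are made up, and plain integers stand in for field elements and commitments): folding \(Z_3 = Z_1 + rZ_2\) and \(u_3 = u_1 + ru_2\) satisfies the relaxed equation as long as the new error vector is set to \(E_3 = E_1 + rT + r^2E_2\), where \(T\) is the prover-supplied cross-term.

```python
# Numeric check of the relaxed R1CS folding identity from the section above.
# All data below (matrices, witnesses, u values, r) is made up for the check;
# real systems use a prime field and commitments instead of raw vectors.

def matvec(M, Z):
    return [sum(m * z for m, z in zip(row, Z)) for row in M]

def had(x, y):  # entry-wise (Hadamard) product
    return [a * b for a, b in zip(x, y)]

def lhs(A, B, C, Z, u):
    """Left side of the relaxed equation: (A.Z) o (B.Z) - u * (C.Z)."""
    Az, Bz, Cz = matvec(A, Z), matvec(B, Z), matvec(C, Z)
    return [ab - u * c for ab, c in zip(had(Az, Bz), Cz)]

def cross_term(A, B, C, Z1, u1, Z2, u2):
    """The middle (degree-1 in r) term that the prover supplies."""
    A1, B1, C1 = matvec(A, Z1), matvec(B, Z1), matvec(C, Z1)
    A2, B2, C2 = matvec(A, Z2), matvec(B, Z2), matvec(C, Z2)
    return [a1 * b2 + a2 * b1 - u1 * c2 - u2 * c1
            for a1, b1, c1, a2, b2, c2 in zip(A1, B1, C1, A2, B2, C2)]

if __name__ == "__main__":
    A, B, C = [[0, 1, 0, 0]], [[0, 0, 1, 0]], [[0, 0, 0, 1]]
    Z1, u1 = [1, 3, 4, 12], 1   # a satisfying base instance, so E1 = 0
    Z2, u2 = [2, 5, 7, 9], 4    # an arbitrary relaxed instance
    E1, E2 = lhs(A, B, C, Z1, u1), lhs(A, B, C, Z2, u2)
    r = 13
    Z3 = [z1 + r * z2 for z1, z2 in zip(Z1, Z2)]
    u3 = u1 + r * u2
    T = cross_term(A, B, C, Z1, u1, Z2, u2)
    E3 = [e1 + r * t + r * r * e2 for e1, t, e2 in zip(E1, T, E2)]
    assert lhs(A, B, C, Z3, u3) == E3
    print("folded instance satisfies the relaxed R1CS equation")
```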