Bitcoin is a disruptive technology, and since time immemorial disruptive technologies have challenged our existing theories and demanded improvement. I'm not going to beat around the bush trying to make Bitcoin conform to our existing schemas. We need to rethink what makes each type of money the type it is, rather than ask whether Bitcoin can function as a currency - that is already well understood. I'll outline what I think are the important constituents of money that help differentiate the types today. We'll then look at how Bitcoin fits in, hopefully in a way that convinces you Bitcoin is truly novel.
Broadly, the three well understood forms of money are as follows: fiat money, commodity money (CM), and commodity-backed money (CBM). People will often separate the former from the latter two based on the notion of 'intrinsic value'. While we can agree that all three have value, we also know that the value of fiat money is derived from legislation: not an intrinsic property.
While this categorisation schema works for the above, I believe there is a more pertinent distinction to be made: that of non-monetary use-value; i.e. whether the money type in question has some primary purpose other than simply to exchange value between parties. The primary use of fiat money is to exchange value; we can therefore see not only that fiat has no non-monetary use-value, but that the use-value of fiat money is fundamentally bound to its exchange-value. On the other hand, CM and CBM derive their use-value not just from exchange, but also from the intrinsic properties of the commodity itself. Therefore we can categorise these three forms of money as before, but with the determinant being non-monetary use-value.
The second property I'd like to introduce is the concept of 'deferred value', and 'direct value'. Ultimately you can think of these as 'an IOU', or 'not an IOU' respectively.
Deferred value is seen in both commodity-backed money and fiat money. Both are valuable not so much because of what they are, but because of what they entitle the holder to. This is easy to see in the case of commodity-backed money, such as a gold certificate, as it is redeemable for a commodity (in this case, gold). Fiat money, however, is not directly redeemable for any particular commodity; instead, legislation declares that it has value. Put another way: fiat money entitles the holder to some value. We should consider that if the legislative environment surrounding either of the above were to collapse, they would become useless. Commodity money can never truly reach zero value, but both CBM and fiat can, and so participating in such systems is like passing a hot potato; it's okay as long as you're not the one to get burnt. This is part of the nature of deferred value.
On the other hand, direct value implies that the received value is intrinsically tied to the received object. In the case of commodity money - say, rice - it is trivial to see that the value of rice (that you can eat it) is bestowed on the recipient immediately upon receipt.
By viewing the property of non-monetary use-value in light of deferred or direct value, we can see that while a gold certificate may have no particular non-monetary uses in and of itself, by acting as a method of deferred value it can inherit non-monetary use value from the commodity it is tied to.
As a visual summary, here is what we've talked about so far:
 Deferred Value     | Direct Value  |
--------------------+---------------+------------------
 Commodity-backed   | Commodity     | Has non-monetary
 Money              | Money         | use value
--------------------+---------------+------------------
 Fiat Money,        |               | No non-monetary
 Generic IOUs (e.g. |               | use value
 paypal-us-dollars) |               |
Enter Bitcoin: the rule breaker, the status quo usurper. You might have noticed there is one particular combination of the above properties that has not been covered by traditional monetary systems. It's tempting to fill in the blank with Bitcoin; but we should remember that Bitcoin is merely an example of this missing puzzle piece, just as the US Dollar is just an example of fiat currency.
Definition: Factum Money
A stand-alone money system in which each unit, by its intrinsic properties alone, necessarily holds a non-zero value.
Rationale:
Factum loosely translates from Latin as 'done'; a play on words, as fiat loosely translates as 'let it be so'.
Factum also lends itself to 'fact-based currency': it is because of each individual's knowledge of the system that it can be used to exchange value; an appropriate description.
To gain an intuitive understanding of what this really means, let us diverge from the topic for a moment to discuss aliens. (Bear with me!) It's an assumed property of the universe that no matter where you are spatiotemporally, the number pi will be constant. You can express this in various ways, but the simplest is that the ratio between the circumference and diameter of a perfect circle is always the same. I suggest that Factum Money has a similar property: regardless of position in space and time, society, culture, species, or any other physical differences, true Factum Money is able to transfer value. Doubtless each instance of factum money can have local environmental factors that prohibit its use; Bitcoin, for example, lacks quantum computing resistance and will fail if SHA256 is broken, just as Litecoin relies on Scrypt. However, under the particular conditions of today, Bitcoin is able to transfer value, and holds the mantle of 'Factum Money'.
Filling in the blank, we now have a table that looks like this:
 Deferred Value     | Direct Value  |
--------------------+---------------+------------------
 Commodity-backed   | Commodity     | Has non-monetary
 Money              | Money         | use value
--------------------+---------------+------------------
 Fiat Money,        | Factum Money  | No non-monetary
 Generic IOUs (e.g. | e.g. Bitcoin  | use value
 paypal-us-dollars) |               |
There's a great deal left to explore within this idea; of particular interest (which I'll explore in a follow up post) is what this actually means for Bitcoiners, and how we can predict and take advantage of this model. Cryptocurrency has many facets that have so far only been theoretically explored, in particular perfect money laundering, and distributed exchanges. I'll largely be exploring distributed exchanges in my next post.
I think I've changed my mind about Bitcoin as money. Particularly: it's a fiat currency (factum money is just fiat in disguise), and I don't think it's sound money anymore. Just because it doesn't have government backing doesn't mean that it isn't fiat.
The reason for the change-of-mind is that I've started reading Theory of Money and Credit by Mises (TMC). I don't think Mises would be in favor of Bitcoin; how does one answer what is the origin of Bitcoin's value? -- I don't think there's a substantive answer.
> I think I’ve changed my mind about Bitcoin as money. Particularly: it’s a fiat currency
> The reason for the change-of-mind is that I’ve started reading Theory of Money and Credit by Mises (TMC).
> Related discussion: https://discuss.criticalfallibilism.com/t/crypto-currency-fraud/128/18 (this post and on).
This doesn’t give proper credit. It makes it sound like your idea that you came up with after you started reading the book on your own initiative. “Related discussion” doesn’t communicate “where I got the idea, and the book recommendation, from someone else”.
I started reading TMC entirely because Elliot referenced it in the original 'related discussion' link. Elliot's original post:
> Bitcoin is not a commodity money. Its value is meant to come from being a medium of exchange and that’s it. That’s bad. See Mises (Theory of Money and Credit) and Reisman (Capitalism) on commodity and fiat money.
Particularly, “That’s bad.” was the bit that convinced me it was worth reading TMC. The idea that Bitcoin (or anything) trying to be something other than commodity money was bad: I got that from Elliot.
I was especially interested in reading TMC because I first learnt about Mises through articles and discussion around Bitcoin-as-money (prior to writing n/2002) and Elliot's comment implied there was a conflict between Mises's ideas and the reasons I thought Bitcoin was sound money (I now agree there is a conflict there).
I thought (and, IMO, many Bitcoiners did and still do think) that Mises would have liked Bitcoin -- I was convinced otherwise by the first ~7 chapters of TMC which seem pretty cut-and-dry WRT the important qualities of the origin of the value of a money.
Additionally wrt other unattributed ideas: re-reading the original factum money post; most of the foundational ideas (like non-monetary use-value) seem to be lifted from TMC, or at least they're descendants of ideas in TMC and elsewhere too (IDK enough about the history of econ ideas to know what they're descendants of, exactly). And, ofc, I was way less rigorous with them than Mises is in TMC.
There's also at least one thing I said that Mises refuted in TMC -- "we also know that the value of fiat money is derived from legislation"; I haven't closely re-read OP to know if there are more. That mistake is a key point behind my argument in n/2002 that Bitcoin isn't fiat, which is the excuse to invent factum money.
I now think the entire bottom row of the table in n/2002 is all fiat:
 Deferred Value     | Direct Value  |
--------------------+---------------+------------------
 Commodity-backed   | Commodity     | Has non-monetary
 Money              | Money         | use value
--------------------+---------------+------------------
 Fiat Money,        | Factum Money  | No non-monetary
 Generic IOUs (e.g. | e.g. Bitcoin  | use value
 paypal-us-dollars) |               |
Note: this post is easier to read on GitHub due to the fixed width ascii diagrams.
In my [first post](FactumMoney.md) in the factum series I looked at Factum Money, a new category of money which has no non-financial use (or, if there is a non-financial use, it is a distant second when ranked by utility), and has no extrinsic requirement to hold value. That is to say, the main utility of Factum Money is as money, and we can see this by looking at the properties of said money and nothing more.
Commodity money and commodity backed money have common non-financial uses and often this is their primary use, and it is thus trivial to see they are not factum money. Fiat money (such as common government issued monies in use today) has no substantial non-financial use, however, by looking solely at the construction of fiat money we can see that it is not guaranteed to have a non-zero value. Most (if not all) of the time fiat money will acquire value through the exercise of authority, which almost ubiquitously takes a legal form.
Bitcoin, however, satisfies both requirements; it is a purely financial instrument (though its technical implementation leads to other uses, like storing data in the blockchain) and one can see that Bitcoin has a non-zero (or should we say intrinsic) value by a pure examination of the internal mechanism.
Markets and Exchange
Money is a multidimensional problem. Sending value to other parties is all good and well, but there must exist a means of exchanging between monies; without One World Currency there will exist some merchants who won't accept your preferred flavour of money and some clients who don't have any of the flavours you accept.
Traditionally currencies are exchanged in an environment outside the system of money itself. They are walled off from the outside and require specific entry and exit procedures. These procedures are a method of deferring value (mentioned in the first in the factum series). Take the best known BTC/USD exchange: MtGox. The deposit process, for both USD and BTC, involves P1 (the first party) relinquishing their funds, and in return they are issued fungible tokens that operate within the MtGox environment, but are incompatible with any other payment network. These tokens have the unique property (belonging to none of their parents) of being exchangeable for other tokens (denominated in a different currency) on the market provided by MtGox. Thus tokens from P1 are swapped for tokens from P2 (the second party), and both parties direct the value back into their original forms, and the exchange is complete.
By issuing tokens for USD or BTC, MtGox are deferring the value. Put another way, the tokens only have value because of something we know about them outside of their construction: there is a legal gateway allowing value to flow into and out of the MtGox zone.
Because of the similarity in the constructions of tokens in these markets, and the construction of fiat money and commodity backed money, I feel it is appropriate to label these systems Fiat Exchanges. Two forms of fiat money are being traded. No real BTC is traded on the MtGox exchange, only MtGox-BTC -- a petty shadow by comparison.
There exists, however, another form of exchange. In this form the monies themselves are exchanged; not tokens in a segregated environment. When the two of us exchange gold coins for silver, the exchange takes place in an environment native to the subjects of the exchange (in this case the real world). A factum exchange that operates using Bitcoin would satisfy the same criterion; the exchange takes place in an environment native to the subjects of the exchange (the blockchain). Transactions on the Bitcoin network and transactions involved in the exchange are one and the same (in this hypothetical exchange system).
These exchanges, whether they involve real world objects or cryptocurrencies, will be labelled Factum Exchanges. In these cases, the ability for exchange to occur is built into the payment networks -- in the case of gold it is the physical transfer, in the case of Bitcoin it is a transaction on the network. When a factum exchange operates over factum money (and possibly some fiat money) there should be no possibility of any particular exchange being censored, blocked, regulated, etc. There should only be two possibilities: the exchange fails (and the original money is returned) or the exchange succeeds (and both parties swap as agreed).
It is possible to host a factum exchange over fiat currencies. An exchange in person, or through banking networks, can be called a factum exchange because value is not deferred from the subjects of the exchange.
A Note on Cryptocurrencies
Although Bitcoin is a form of factum money, not all cryptocurrencies need to be. One can easily imagine a cryptocurrency minted by a central bank, and although mining might be decentralised, supply of the money is still regulated by the bank (even with distributed wealth generation). This would count as fiat currency because there is zero assurance (excepting legal agreements) that the currency won't be inflated through the roof. If the private key were compromised this would be an inevitability.
Cryptocurrency could even be a form of commodity backed money, but we shan't explore that here.
Exchange and Distributed Exchange
The Bitcoin community has felt the thorns of regulation for some time. Traditional methods of currency exchange are stifling growth and promoting unhealthy markets (such as we're seeing in the price differential between the USD/BTC markets on MtGox, Bitstamp, and BTC-e). The Bitcoin community owes it to itself to build tools to better allow for this exchange, and to investigate what is going wrong currently (so we know how and what to build, and the purpose of such tools).
The current system looks like this: (using MtGox as an example)
TITLE: USING MTGOX TO CONVERT BETWEEN BITCOIN AND USD
________________________________________________________________
BITCOIN
P1-->MTG-------+ +--MTG-->P2
________________|________________________________|______________
USD | |
P2==>MTG-----+ | | +--MTG==>P1 (insert joke here
______________|_|________________________________|_|____________ about never get-
| | | | ting fiat out of
| | | | MtGox)
MTGOX | | | |
______________|_|________________________________|_|____________
| +-M>P1(BTC)----#X->P2(BTC)-D>MTG-+ |
+---M>P2(USD)-#---X->P1(USD)-D>MTG---+
________________________________________________________________
LEGEND
- : Path of Action, horizontal
| : Path of Action, vertical
+ : Path of Action, corner; if two paths intersect, their direction is unchanged
. : Path of Funds in Escrow
> : Transaction to
# : Order Placed
• : Block Produced (alt 8)
X : Exchange Matched
@ : Proof of Payment
& : Escrow Unlock
= : Stress on Path of Action (e.g. Regulation)
M : Mint coins/tokens
D : Destroy / Unmint
Cn : for an integer n, custom action, details provided with graph
P1 : The First Party, always refers to the same people
Pn : for any integer n, as above
En : for any integer n, a special escrow account, details provided with graph
PN : The Nth Party, can be anyone and can be a different party every time
ABC : Three letter abbreviation of a real world entity.
RULES - The following excludes paths and transactions:
- When two icons appear next to each other they are simultaneous. Their order
preserves causality.
- When two icons are in the same column they are simultaneous.
- If two icons are in adjacent columns but different rows they are not
necessarily simultaneous.
- If two icons are adjacent and there is a third in a column shared with one of
these icons, all three are simultaneous.
The regulatory efforts are applied to the fiat entry and exit points, with = indicating where those stresses are felt. The MtGox infrastructure requires value to be deferred into compatible tokens, which is the choke point in this system. If direct person-to-person transactions in USD were used instead, the regulatory pressure would fall away. Unfortunately this isn't able to be checked from within the MtGox infrastructure, and thus relies on manual verification, in turn allowing for a number of attacks that make the system too burdensome to use.
The First Distributed Exchange: Ripple
A system such as Ripple looks like this:
TITLE: USING RIPPLE TO CONVERT BETWEEN BITCOIN AND USD
__________________________________________________________
BITCOIN
P1-->BTS---+ +-BTS->P2
____________|_________________________________|___________
USD | |
P2==>BTS-+ | | +-BTS=>P1
__________|_|_________________________________|_|_________
| | | |
| | | |
RIPPLE | | | |
__________|_|_________________________________|_|_________
| +--M>P1(BTC)-#----X>P2(BTC)-D>BTS-+ |
+----M>P2(USD)-----#X>P1(USD)-D>BTS---+
__________________________________________________________
LEGEND
-|+ : Path of Action, horizontal,vertical,corner
> : Transaction to
# : Order Placed
X : Exchange Matched
= : Stress on Path of Action (e.g. Regulation)
M : Mint coins/tokens
D : Destroy / Unmint
It is nearly identical to MtGox, and involves using a payment processor to cash in and out. Bitstamp (BTS) was chosen here as it is a 'gateway' into the Ripple system for both Bitcoin and USD. It can be replaced with any other gateway, though additional steps are required. Ultimately this provides no advantage over the traditional model (see MtGox, above) besides that there are more options to cash in and out (though you have to exchange Bitstamp-USD for OtherGateway-USD as the two aren't fungible). I guess you'd say it's better than MtGox, but not substantially.
TITLE: SIMPLE CROSS CHAIN TRADE BETWEEN BITCOIN AND LITECOIN
_____________________________________
BITCOIN
P1------X->E1-•-C1------C2-->P2
_________|___________________________
LITECOIN |
P2------X-------->E2-•->P1
_____________________________________
LEGEND
-|+ : Path of Action, horizontal,vertical,corner
> : Transaction to
• : Block Produced (alt 8)
X : Exchange Matched
E1 and E2 are both transactions to the following output:
[Pubkey] OP_CHECKSIGVERIFY OP_HASH256 [HashOfX] OP_EQUAL
Redeemed by:
[X] [Sig]
C1 - P1 tells P2 about the transaction
C2 - P1 tells P2 the secret (X) OR P1 spends the TX, making the secret public.
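To make the hash-lock mechanic concrete, here is a minimal Python sketch of the secret/preimage relationship the script above relies on (signature checking is elided, and the helper names are my own, not part of any client):

{% highlight python %}
import hashlib
import os

def hash256(data: bytes) -> bytes:
    """Bitcoin-style double SHA-256 (what OP_HASH256 computes)."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

# P1 picks the secret X and publishes only its hash in the E1/E2 outputs.
X = os.urandom(32)
hash_of_X = hash256(X)

# Both outputs require (a signature and) the preimage of hash_of_X;
# the signature part is elided in this sketch.
def can_redeem(preimage: bytes) -> bool:
    return hash256(preimage) == hash_of_X

# P1 redeems E2 on the Litecoin chain by revealing X...
assert can_redeem(X)
# ...which makes X public, so P2 can redeem E1 with the same preimage.
{% endhighlight %}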
Cross chain trade looks fairly simple, but in reality there is no market built behind it, so much of the communication is manual. This might be solved with some distributed layer over the top, but there is still the issue of P1 keeping X secret, locking funds away forever. There has been a suggested solution to this that involves timeout periods. This makes things a little more difficult:
TITLE: ZERO-LOSS CROSS CHAIN TRADE BETWEEN BITCOIN AND LITECOIN
_______________________________________________
BITCOIN
P1------X------C1------C3->E1--C5--•->P2
_________|_____/__\____/__\____/__\____________
LITECOIN | / \ / \ / \
P2------X--C0------C2------C4->E2----•->P1
_______________________________________________
LEGEND
-|+ : Path of Action, horizontal,vertical,corner
> : Transaction to
• : Block Produced (alt 8)
X : Exchange Matched
E1 and E2 are both transactions to the following output:
OP_IF
[PubkeyYou] OP_CHECKSIGVERIFY OP_HASH256 [HashOfX]
OP_EQUALVERIFY OP_HASH256 [HashOfY] OP_EQUAL
OP_ELSE
2 [PubkeyYou] [PubkeyMe] 2 OP_CHECKMULTISIG
OP_ENDIF
Redeemed by:
0 [Sig] [Sig]
OR
1 [Y] [X] [Sig]
The flow of information between the Cn events is shown with slashes.
C0 - P2 tells P1 [HashOfY]
C1 - P1 tells P2 about the E1 transaction and [HashOfX], unsigned, and provides
a reversal transaction R1.
* R1 is locked for 48 hours
* Rn is a reversal transaction from En>Pn. It has a lock time far in the future,
and far from the lock time of the other reversal, if that is known.
C2 - P2 verifies E1, signs and returns R1; P2 also tells P1
about the E2 tx, unsigned, and provides R2.
* R2 is locked for 24 hours
C3 - P1 verifies E2, inspects R2, signs, and returns it. P1 signs R1, then
signs and broadcasts E1.
C4 - P2 verifies E1 has been broadcast, signs
and broadcasts E2. P2 tells P1 [Y]
C5 - P1 tells P2 [X], either explicitly or by spending the transaction
In this case the trade will never fail: after R2 becomes active it is unsafe for P1 to provide the secret X. Thus, if P1 is unable to redeem E2 she can wait for R1 to become active. By placing the burden of providing the secret on P2, the transaction with the first reversal is guaranteed to occur first.
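As a rough illustration of why the ordering works, here is a hypothetical sketch (the function, timestamps, and margin are illustrative, not part of any protocol) of the sanity check P1 might perform at C3: her refund R1 must activate well after P2's refund R2.

{% highlight python %}
# Hypothetical locktime sanity check for step C3: P1 only proceeds if her
# refund R1 activates well after P2's refund R2, so P2 is forced to complete
# (or refund) first and P1 is never the one left holding nothing.

HOURS = 3600

def refunds_are_safe(r1_locktime, r2_locktime, min_gap=12 * HOURS):
    """True if R1 (P1's refund) unlocks sufficiently later than R2 (P2's)."""
    return r1_locktime - r2_locktime >= min_gap

now = 1_700_000_000                 # illustrative unix timestamp
r2_locktime = now + 24 * HOURS      # per C2: R2 locked for 24 hours
r1_locktime = now + 48 * HOURS      # per C1: R1 locked for 48 hours

assert refunds_are_safe(r1_locktime, r2_locktime)
{% endhighlight %}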
However, the cost of using this method is great; many confirmations are required before each party can be certain it is safe to execute the next step, and there is a great deal of time for either party to renege on the transaction after the terms of exchange are set. Furthermore, a reorganisation on one chain could leave one party with both sides of the transaction.
Other Distributed Fiat Exchanges (Hypothetical)
None of these have been completed to my knowledge. They typically allow the creation of assets backed by some value (fiat), or offered IPO style (possibly fiat or factum).
Typically, a generic fiat exchange (GFiX) will take the following form:
TITLE: GENERIC FIAT EXCHANGE (GFiX) - BITCOIN TO USD
___________________________________________________
BITCOIN
P1-->BTS---+ +-BRK-->P2
____________|_________________________|____________
USD | |
P2==>BRK-+ | | +-BRK==>P1
__________|_|_________________________|_|__________
| | | |
| | | |
GFiX | | | |
__________|_|_________________________|_|____________
BTC | +-M>P1-#-•...•X&>P2-D>BRK-+ |
| | |
__________|_______________|_____________|____________
USD | | |
+---M>P2---•-#-•X>P1--D>BRK---+
__________________________________________________
LEGEND
-|+ : Path of Action, horizontal,vertical,corner
. : Path of Funds in Escrow
> : Transaction to
# : Order Placed
• : Block Produced (alt 8)
X : Exchange Matched
& : Escrow Unlock
= : Stress on Path of Action (e.g. Regulation)
M : Mint coins/tokens
D : Destroy / Unmint
BRK : Broker
Existing and Future Factum Exchanges
Mastercoin
Note: Mastercoin is a layer over Bitcoin (data is stored in transactions on the Bitcoin network) and thus blocks occur at the same time.
Note: The following may be incorrect. I've scraped together some information but details of the spec are pretty thin.
TITLE: EXCHANGING BITCOIN FOR MASTERCOIN
_______________________________
BITCOIN
P1----------C1#-•X-->P2 •
__________________|____________
MASTERCOIN |
P2--------#-----•X......•&>P1
_______________________________
C1 - Buyer selects order to fill
In English:
An order is placed on the Mastercoin network (sell)
A buyer finds and decides to fill that order (buy)
They publish the order, and await the next block
When the next block arrives the orders are matched and the Mastercoin asset goes into pseudo-escrow
Once P1 pays P2 in the required specific manner, Mastercoin unlocks the escrow and transfers the assets to P1
One issue is the buyer (of MSC) is able to renege on the trade before it is complete; this, however, is an issue with any asynchronous exchange. The process here is simpler than other asynchronous models because it is only possible to trade between Bitcoin and Mastercoin, and Mastercoin has interchain awareness to Bitcoin; when an order is filled (and paid for) the Mastercoin chain can release funds automagically. A 'pledge' system could easily be implemented to indicate one's intent to purchase. A market system could be implemented to remove the need to choose an order to fill.
Marketcoin
Marketcoin is a hypothetical currency/market I've begun to set out here
TITLE: EXCHANGING BITCOIN FOR MARKETCOIN - ANNOTATED
ASYMMETRIC EXCHANGE
+-- P1 places an order on the MKC network to buy (with pledge)
| +-- Exchange matched, unacknowledged on bitcoin network
| | +-- P1 sends coins to P2 as agreed
| | | +-- BTC block mined
_____________________________________
BITCOIN +---------+
P1---------+----X-+-P1>P2 • |
____________|____|___________|_______
MARKETCOIN # | @
P2-----#-------•X.............•&>P1
_____________________________________
| | | +-- When an MKC block is mined,
| | | the MKC in escrow is released to P1
| | +-- P1 provides proof of tx to MKC network
| +-- MKC Block produced, exchange matched
| P2's funds put in escrow
+-- P2 places an order on the MKC network to sell
In English:
P2 places bid
P1 places ask (with pledge)
When an MKC block is created and the orders are matched, P2's MKC is put into escrow, redeemable by P1
P1 then transfers correct amount of BTC to P2
A Bitcoin block is mined including this transaction
P1 provides proof of payment to the Marketcoin network
When an MKC block is mined, the MKC in escrow is released to P1
Currently, P1 can technically renege on the trade before transacting to P2. However, the order P1 places on the Marketcoin network (in this, one of many possible implementations) requires a fee to be paid (in MKC) that is proportional to the change in value over the last 24 hours (as that is the escrow length). If the trade times out the pledge is given to P2, so P2 is guaranteed to either make the trade, or receive compensation better than the best case scenario over the last 24 hours. That is a powerful incentive to trade. Pledges are not required in symmetric exchange. Proof of payment is another issue; it may be possible to automate this process with deep scanning of the foreign blockchain, but this could easily be too intensive a task, and requires marking transactions on the foreign chain. Manual proof of payment is sufficient at this stage, and with software automation, not a burden at all.
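As a sketch only (the exact rule is one of many possible implementations, and the function and parameters here are hypothetical), the pledge sizing might look like this:

{% highlight python %}
# Hypothetical pledge sizing: the fee posted with the order is proportional
# to the relative price change over the escrow window (24 hours), so walking
# away can never beat simply completing the trade.

def required_pledge(order_size_mkc, price_now, price_24h_ago, safety_factor=1.0):
    """Pledge (in MKC) proportional to the 24-hour relative price change."""
    change = abs(price_now - price_24h_ago) / price_24h_ago
    return order_size_mkc * change * safety_factor

# If the price moved 5% over the last day, a 100 MKC order needs ~5 MKC pledged.
print(required_pledge(100.0, price_now=1.05, price_24h_ago=1.00))
{% endhighlight %}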
Methods of Operation for Distributed Cross-Chain Exchanges
An asymmetric exchange has different rules on each side. In the case of Marketcoin, it hosts the order book for both currencies, and the altcoin is unaware of it. This means asymmetric exchanges are backwards compatible (they can be made to 'read' a great many forms of blockchain), so a web between all currencies can be created with only asymmetric exchanges.
An ideal asymmetric exchange built into two cryptocurrencies looks like:
TITLE: EXCHANGING BITCOIN FOR LITECOIN ON AN IDEAL ASYMMETRIC EXCHANGE
______________________________
XCOIN
P1-----------#--•X>P2
__________________|___________
YCOIN |
P2--------#-•....X..•&>P1
______________________________
What is important is that this is the only way this could happen: one chain (Bitcoin, say - Xcoin in the diagram) hosts the exchange and the other (Litecoin - Ycoin) 'reads' the exchange from the hosting chain's blockchain. This is about half-way to a symmetric exchange, where both coins have both functionalities. We explore this graph in more detail later on.
Ignorant and Knowledgeable Asymmetric Exchange
There is a clear difference between Marketcoin and an ideal asymmetric exchange. This is because in an ideal case, both chains have interchain awareness and can verify transactions on the other chain. In the case of Marketcoin and an existing cryptocurrency (say Bitcoin), the foreign coin is unable to read the local market (hosted entirely by Marketcoin). I call this ignorant asymmetric exchange, because Bitcoin is ignorant of the fact it is even occurring.
In the counter-case, where the foreign chain is aware of the local chain, we can offload buy orders to the foreign chain, which can then enable some escrow-like function to guarantee trades are atomic. I call this knowledgeable asymmetric exchange. However, to guarantee determinism in such an exchange, buy orders will only be matched once they are learnt about by the other chain, and can only be cancelled with the permission of the other chain (they will either be cancelled or a trade will occur before the cancellation takes place).
Combining Fiat Exchange with Factum Asymmetric Exchange
Consider a distributed Generic Fiat Exchange (GFiX). Often such an exchange will have a core cryptocurrency operating beneath the user-defined assets (think Ripples, BitShares, Freicoins, etc) and so should be compatible with a distributed cryptocurrency exchange. Then presume this chain also supports ignorant asymmetric exchange. In such a case it would be possible to buy arbitrary assets on the GFiX using Bitcoin in a trustless, multistep manner. In the best case where Xcoin supports asymmetric cross-chain trade in a similar way to Marketcoin and an asset market, you would experience the following process:
TITLE: EXCHANGING BITCOIN FOR XCOIN-ASSET THROUGH XCOIN
__________________________________________________
BITCOIN +---------+
P1---------+------X-+-P1>P2 • |
____________|______|___________|__________________
XCOIN # | @
P2-----#---------•X.............•&>P1---#-•X>P3 }
____________________________________________|_____ } Walled off Market
XCOIN-ASSET | }
P3----------------------------------#-•...•X&>P1 }
__________________________________________________
LEGEND
-|+ : Path of Action, horizontal,vertical,corner
. : Path of Funds in Escrow
> : Transaction to
# : Order Placed
• : Block Produced (alt 8)
X : Exchange Matched
@ : Proof of Payment
& : Escrow Unlock
Symmetric Exchange
A symmetric exchange is one where both networks run identical rule sets. Each hosts one half of two exchanges - a 'bid' and an 'ask' order book. Consider MKC and XC, two crypto coins. In the case of MKC, the 'ask' order book is for XC/MKC (the MKC chain is authoritative) and the bid order book is for MKC/XC (the XC chain is authoritative). Likewise, the XC chain hosts an 'ask' order book for MKC/XC (XC chain authoritative), and a 'bid' order book for XC/MKC (MKC chain authoritative).
Technically this is equivalent to two knowledgeable asymmetric exchanges running on both chains.
A 'brief' explanation of one possible symmetric exchange
Each market's deterministic execution is decided by the authoritative chain, based on best-effort updates shared between chains.
Side note: if you imagine a cryptocoin hosting 10 markets, not every market needs to be updated every block; in addition, withholding updates increases liquidity at the cost of transaction time. For fledgling, unpopular markets this may well be a positive thing, and evidence in favour of only including market updates for the markets you care about - after all, you'll need to be running those clients. In addition, a very flexible update method such as this lends itself to better compatibility with blockchains progressing at different rates.
These best effort updates relay specific information about both order books.
In the case of the 'ask' order book - the market the local coin has authority over - all known but excluded bid updates (from the foreign chain) are amalgamated into a chunk and deterministically matched against the 'ask' order book. The details of the trades made are recorded and packaged into a market update, which is then relayed back to the foreign blockchain in a best-effort fashion. Market updates are recorded in the Merkle root (or a similar device) and secured in a chain so none can be omitted. These are in turn recorded under the full blockchain headers of the foreign chain. The deterministic matching happens over one market update. If a local block happens to contain three foreign market updates then all three are lumped together and evaluated against the local order book deterministically. This leads to the fairest exchange between all parties concerned. The corresponding market update, when relayed back to the foreign blockchain, alerts that chain to which cross-chain escrow transactions may be released and in what proportions (if there is a change address, for example).
In the case of the 'bid' order book - the market the local coin has no authority over - once orders are made the coins are locked and in escrow. All new or changed bid orders are added to the market update. After this update is recorded in the blockchain, in order to maintain foreign blockchain authority, if the user wishes to cancel the order they will have to wait for the foreign chain to recognise and acknowledge the request (one must broadcast a cancel message, record this in a market update, have the foreign coin register this in a market update, and then have the local blockchain recognise that it is finally safe to release the coins from escrow). During this time it is possible the order may be filled and the cancel message will fail.
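Going back to the 'ask' side for a moment: here is a minimal sketch, with hypothetical data shapes, of the amalgamation rule described above - all foreign bid updates learned in one local block are lumped into a single batch and ordered deterministically before being matched against the local 'ask' book, so no individual update gets priority.

{% highlight python %}
from typing import NamedTuple

class Bid(NamedTuple):
    price: float     # local units offered per foreign unit
    amount: float    # foreign units wanted
    order_id: str

def amalgamate(updates):
    """Lump several foreign market updates into one deterministic batch."""
    batch = [bid for update in updates for bid in update]
    # Best price first; ties broken by order id so every node derives exactly
    # the same ordering before matching against the local 'ask' book.
    return sorted(batch, key=lambda b: (-b.price, b.order_id))

updates = [[Bid(1.02, 5, "a1")],
           [Bid(1.05, 2, "b7"), Bid(1.02, 1, "b2")]]
print(amalgamate(updates))
{% endhighlight %}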
Overview
Let us examine an ideal distributed exchange which operates exclusively on cryptocurrencies; the path of value (for a Bitcoin/Litecoin trade) looks like this:
TITLE: EXCHANGING BITCOIN FOR LITECOIN ON AN IDEAL SYMMETRIC EXCHANGE
______________________________
BITCOIN
P1-----------#--•X>P2
__________________|___________
LITECOIN |
P2--------#-•....X..•&>P1
______________________________
OR:
______________________________
BITCOIN
P1--------#-•....X..•&>P2
__________________|___________
LITECOIN |
P2-----------#--•X>P1
______________________________
That is to say, in the first case, an order is placed on the Litecoin network (ask for BTC or bid LTC) and recorded in a block. At some other point an order is placed on the Bitcoin network (bid BTC or ask LTC) which overlaps with the previous order on the other blockchain. When this second order is recorded in a block, it is known to have matched with the order on the Litecoin network (deterministically) and is automatically sent (or assigned) to the receiving party. At the next block on the Litecoin network the trade is learned of and the other transaction is performed so the Litecoins are transferred to the receiving party.
Moving from Xcoin to Zcoin through Ycoin - all of which support symmetric exchange:
By voting on which market updates to accept (from which other cryptocurrencies) and which markets to run, it will be possible to create a dynamic mesh of markets, forming many possible paths between any cryptocurrencies. (In the worst situation, a coin can run an asymmetric market and use that to trade into and out of foreign block chains.)
Finding Efficiency
Every novel technology developed will be hindered by regulation in a unique way. The 'directness' of transferring fiat to crypto is a worry for a number of people (the same people as are behind the steering wheel of regulation). They feel they need to remain insulated from the untested Bitcoin. Perhaps they would feel more comfortable with a bridging technology, such as a cryptocoin pegged to a parent currency, that just so happens to natively interact with a distributed market.
This, however, will likely not be the first iteration. It is elegant, quick, and efficient, but there are likely to be many sticking points before that can be realised. Firstly, Bitcoin doesn't support knowledgeable asymmetric exchange, and there is a strong possibility that too much regulatory pressure will require CRYPTO-AUD to ship without an exchange. Therefore the first iteration might look a little more like this:
Not as pretty. Compared to our current fiat exchanges, this doesn't look that appealing. That said, legally CRYPTO-AUD is far more comparable to a traditional payment processing system such as PayPal. By exploring the edge cases of regulation we can help find the inconsistencies and assist its evolution. An attack on Bitcoin by a government will necessarily involve shutting down the flow of fiat into Bitcoin as much as possible. One strategy is to create useful technologies that are so similar to existing tech (PayPal, Visa) that we can stand on some very resilient legal precedents and standards if these systems are challenged. These middle-ground crypto networks, then, cannot be made illegal without negatively affecting the current corporate monopoly, because they are designed to resemble it so closely. Whether that's possible is another matter.
Conclusions
We know that regulatory pressure can be applied to restrict a flow of Dollars/Yen/Euros anywhere. While this investigation into distributed exchange didn't identify how to escape regulatory pressure entirely, it did yield at least one other method of allowing value to move from fiat to Bitcoin: distributed cross-chain markets for cryptocurrency. One such method is a PayPal-like network built on Bitcoin, with tokens minted and destroyed in the same fashion as PayPal balances (deposit -> mint, withdraw -> destroy). Ultimately some legal entity must be responsible for the cash-in/out process, but this can now safely be entertained without needing to worry about the regulation surrounding running an exchange - provided there is a cross-chain market out there already.
In addition to this, we've looked at the minimum ideal for a market and found that it is readily achievable with knowledgeable asymmetric exchange (better yet, symmetric exchange). We've briefly looked at one way this can be achieved, providing a fast, resilient, cross-chain market that operates as a close-to-perfect market. We have not investigated this suggestion deeply, though (don't worry, that's coming soon). By extrapolating this across all coins (or even just a few) we can see an interconnected web of markets bridging the gaps between chains.
Where To?
For cryptocurrency to succeed, a distributed exchange must be developed linking chains together. By leveraging interchain awareness we can build factum exchanges that operate in the domain of the currency itself. Atomic exchange that is entwined with cryptocurrency is a strong motivator for adoption, and by creating a web of markets the canonical boundaries between *coins will be removed.
Efficient atomic cross chain trade is a necessity for the long term viability of cryptocurrency. Using an asymmetric exchange, any user of a *coin can move value into an alternate chain, letting them take advantage of any new innovative features produced on other chains.
Addendum by u/max, 2021-10-27 12:25:22 UTC (almost 8 years later):
Originally published in the Bitcoin Magazine on January 15th, 2014. Reposted here on 1st April, 2014.
Proof of Work (PoW) is the only external method of powering the distributed consensus engine known as a blockchain. However, at least two alternatives have been proposed, and both are internal to the network (Proof of Stake (PoS), and Proof of Burn (PoB)). This is important as they use virtual resources obtained within the network as a substitute for PoW, meaning these methods consume virtually no energy, which has been a concern of late. The energy-consumption figures that have been suggested will only occur in a system of absolute equilibrium (the market is saturated with the most efficient ASICs it is possible to produce), though even if the reality is one or two orders of magnitude lower than predicted, it is still alarming and must be addressed.
Proof of Stake and Proof of Burn
Both PoS and PoB use similar mechanisms. The auditor makes a sacrifice - in the case of PoS it is coindays (which are difficult to acquire; also a good measure of economic activity), and in the case of PoB it is coins themselves (which are also difficult to acquire). Ultimately, any Proof of {something} must require a cost, whether that be electricity, coin days, or coins themselves.
Herein I suggest a fourth method, very similar to how a term deposit works (in that dusty old banking system).
Monetary Velocity and Value of Money
The equation of exchange (MV = PQ) tells us that, with output held constant, a fall in the effective money supply or in velocity should lead to a fall in prices, and when prices fall the value of each unit of currency increases - this only holds provided the rest of the monetary supply remains constant. In a late-stage currency we would expect a relatively low level of monetary inflation / deflation (as opposed to price inflation / deflation - an important distinction), so we'll set aside the concern about the monetary supply changing.
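As a toy illustration of that relationship, here is a small Python sketch with made-up numbers (the velocity, output, and supply figures are illustrative only, not estimates):

{% highlight python %}
# Toy illustration of the equation of exchange M*V = P*Q: locking coins away
# (for staking, burning, or a deposit) shrinks the effective money supply M,
# and with V and Q held constant the price level P falls, i.e. each remaining
# coin buys a little more.

V = 10.0          # velocity: turnovers per period
Q = 1_000_000.0   # real output per period
M = 21_000_000.0  # nominal coin supply

def price_level(m, v=V, q=Q):
    return m * v / q

locked = 2_000_000.0            # coins temporarily removed from circulation
print(price_level(M))           # baseline price level
print(price_level(M - locked))  # lower price level after locking
{% endhighlight %}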
In a Proof of Stake fuelled network one is required to hold currency for some time before it can be used to mint a block. Because it cannot be used in a transaction it is essentially removed from the monetary supply, as it is unavailable for a period of time (not technically true, because one can spend it up until it is used to mint a block, but the economic effect on velocity is the same either way). Because the money supply effectively (but doesn't actually) decreases, prices should also fall by a small amount. One can imagine the network saying “Here is a small reward for temporarily removing your coins from the supply and making us all a little more wealthy, *in addition to auditing and securing the network*.”
Proof of Burn is used in a similar fashion: coins are destroyed in an unspendable transaction which is not immediately obvious to the network (the author suggests using a P2SH address). At some later date this is revealed and used to create a block. The miner is then rewarded with new coins and/or transaction fees (presumably more than the coins they've burned, else they've made a loss). This is like the network saying “Here's a small reward for temporarily removing your coins from the supply and making us a little more wealthy, *in addition to auditing and securing the network*.” Huh, that sounds familiar.
While they may sound very similar, there are a few differences in terms of public knowledge. In both cases it is unknown how many coins have been left waiting in the wings (similar to how it is impossible to tell how many bitcoins have been lost or abandoned over the years), though PoB provides a little more specificity allowing us to determine a narrower range of candidates (unspent P2SH addresses) than PoS (which includes all unspent transactions). The volume of coins in each case is also an indicator, as in both cases there will be some effective minimum required. However, PoB implies the number of coins burnt cannot be set in advance as both the date of redemption and volume of burnt coins are unknown. PoS does not destroy coins and so any extra volume of coindays destroyed is less important. These differences are subtle, but may become important as the systems are explored more deeply.
Economically speaking, the basis of both proofing systems relies on relinquishing the ability to use coins for some time. In PoS this is voluntary and the funds remain spendable at any time, whereas PoB uses a rather more permanent operation: the user commits immediately to mining a block in the future, regardless of whether it is profitable or not (provided they meet the difficulty requirement, else the coins may be lost forever; perhaps pooled mining might alleviate this concern), and the length of time until that utility will be used is unknown. In either case, as the ability to use coins is relinquished, there is no possibility they will increase monetary velocity, and thus they should (in theory) increase the value of each coin in the total supply.
Proof of Deposit
Proof of Deposit (or PoD) fills a medium between the two methods. Simply put, PoD blocks have a difficulty proportional (or equal) to the amount of coins that must be offered for ‘deposit’ and have a known block reward. Deposited coins remain untouchable for some length of time and the block reward is delivered to the miner (either immediately or over a period of time like a dividend or interest payments). As there is one deposit per block there are a limited number of deposits available each year, and if deposits are appearing too fast then the return must be too high, so the difficulty is increased (which implies the return is lowered) and thus demand decreases. Our personified network might once again say “Here’s a small reward for temporarily removing your coins from the supply and making us a little more wealthy, in addition to auditing and securing the network.”
That’s getting awfully familiar…
Why yes, it is. This should come as no surprise, though. What resources are there internal to a currency besides the currency itself? Economically speaking, there’s very little substantive difference between these three methods, and their monetary implications are very similar; the main difference is the physical actions that help it propagate. If, however, humans are psychologically biased to one way over another, then those physical actions are exactly the things that will count in a showdown between these proofing methods.
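Returning to the Proof of Deposit mechanism for a moment, here is a hypothetical sketch of the feedback loop described above (the function, target rate, and numbers are illustrative, not part of any real protocol):

{% highlight python %}
# Hypothetical PoD feedback loop: if deposits arrive faster than the target of
# one per block, the required deposit (the 'difficulty') rises, lowering the
# effective return per coin and cooling demand; if they arrive slower, it falls.

def adjust_required_deposit(current_required, deposits_seen, blocks_elapsed):
    target_rate = 1.0                              # one deposit per block
    observed_rate = deposits_seen / blocks_elapsed
    return current_required * (observed_rate / target_rate)

# Deposits arriving twice as fast as targeted -> required deposit doubles.
print(adjust_required_deposit(1000.0, deposits_seen=200, blocks_elapsed=100))
{% endhighlight %}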
Does it even work?
This is really the only important question here. If none of these schemes work, do we have a reason to care? A discerning reader like yourself might have noticed something peculiar about these three methods: you need money to make money. Without internal resources existing the network has no fuel.
Peercoin mitigates this concern by using both PoW and PoS in combination to create new blocks. Over time PoW blocks become less frequent and PoS blocks become more frequent, so it should eventually lead to an energy efficient network (or at least more so than the Bitcoin network). Whether this will pan out or not is difficult to say; the reward for attacking Peercoin is far lower than a well executed attack on Bitcoin, and without an increase in Peercoin’s popularity and/or accessibility we might never discover how easy an attack really is.
Where to from here?
The possibility of a zero energy currency is not something that should go without research, but should also be approached with a degree of scepticism. It has been argued that monetary monocultures contribute to financial instability due to the lower resilience of a homogeneous system (compared to one of high diversity). Is it possible that a reliance on internal states causes instability more generally, even in a currency that has no resistance to opt in to or out of? If it is still the case, can we build several different sorts of these systems together to help provide that resilience? Can one network’s security rely on actions in one or several other distinct currencies? These are important economic questions that may have profound consequences for the future of finance; they are novel because systems of this precision have been impossible under legal frameworks, and never before has any person been able to create a truly global currency in their garage. Experimentation is the future of currency, and I am excited to watch it happen.
Every PoW driven cryptonet has a state. The state of Bitcoin (and forks) is the particular
set of Unspent Transaction Outputs (UTXOs) at the time - essentially the set of all Bitcoin
able to be spent.
When a new block arrives, the usual process to update the state is simple:
{% highlight text %}
Start with S[n,0] (state at block n)
Apply the first transaction from the new block (B[0]) to S
S[n,k] + B[k] -> S[n,k+1] for all k in B
S[n+1,0] = S[n,max(k)+1]
{% endhighlight %}
However, what happens when a new block arrives causing a reorganisation of the main chain?
{% highlight text %}
. 3a← 4a <-- 3a and 4a are not in the main chain currently
↙
1 ← 2 ← 3 ← 4 <-- 3 and 4 are in the main chain
5a arrives, causing the reorg:
1 ← 2 ← 3a← 4a← 5a <-- New main chain
↖
3 ← 4 <-- Old main chain, 3 and 4 no longer in the main chain
In this case block #2 was the lowest common ancestor (a pivot point)
of the two competing chains 3a->5a and 3->4.
{% endhighlight %}
Bitcoin et al solve the issue by stepping backwards through time.
Since Bitcoin transactions spend outputs, and outputs may be spent only once, playing the
blockchain backwards is trivial:
{% highlight text %}
for each transaction:
remove its outputs from the list of UTXOs.
add the outputs it spends to the list of UTXOs.
{% endhighlight %}
And bam! You can then play time forward from the LCA to calculate the new state. How nice.
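For concreteness, here is a minimal self-contained Python sketch of both directions (the transaction type is a toy stand-in, not Bitcoin's real format):

{% highlight python %}
from typing import NamedTuple

class Tx(NamedTuple):
    spends: frozenset    # ids of the outputs this transaction consumes
    creates: frozenset   # ids of the outputs this transaction produces

def apply_block(utxos, block):
    state = set(utxos)
    for tx in block:
        assert tx.spends <= state   # may only spend currently-unspent outputs
        state -= tx.spends
        state |= tx.creates
    return state

def undo_block(utxos, block):
    state = set(utxos)
    for tx in reversed(block):
        state -= tx.creates          # forget the outputs it produced
        state |= tx.spends           # restore the outputs it consumed
    return state

s0 = {"a", "b"}
blk = [Tx(frozenset({"a"}), frozenset({"c", "d"}))]
s1 = apply_block(s0, blk)
assert undo_block(s1, blk) == s0     # rewinding restores the old state
{% endhighlight %}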
What happens, though, when we move to a cryptonet that only operates on balances and doesn't
use the input/output system of Bitcoin?
Well, provided we're recording every transaction it's quite simple. A transaction moving
X coins from A to B results in A-=X and B+=X. That is trivial to reverse. However, the
caveat is that we must record every transaction. Once we start including complex mechanisms
within the protocol that produce transactions that are not recorded but simply implied, we
can no longer play time 'backwards' as S[m] depends on S[m-1] and without knowing S[m-1] to
calculate the implied transactions, we can't play time backwards. Of course, if we know S[m-1]
we don't need to do any of this anyway, so we're sort of stuck.
Examples of this sort of mechanism can be found in the way contracts create
transactions in Ethereum and the market evaluation in Marketcoin.
Remembering S[m-1] is easy but what if the reorg is of length 2, or 3, or 10? We can't just
remember all the states.
So, we can see that we have a problem.
Efficiently remembering states
The intuitive solution (to me, at least) is to know some but not all states at strategic
intervals between the genesis block and the current head. When a reorg of length n
occurs, the network has already committed to evaluating n new states. I define 'efficient'
here to mean evaluating no more than 2n new states (in the worst case). Unfortunately,
this means we'll need to remember about 2*log(2,h) states, where h is the height of the
chain head. All the UTXOs in Bitcoin take up a few hundred meg of RAM, so for 500,000 blocks
we're looking at no more than 40 states, but that's still ~10 GB of space (by Bitcoin's
standards) which isn't ideal. It's unlikely that we'll see long reorganisations, but we'd still
be storing half of the figures mentioned above, which, while better, isn't perfect.
One solution may be to record the net implied change of state as the last transaction,
but that solution might be more painful than the cure, and requires introducing extra
complexity into the network architecture, which I'm against, so we won't consider
this option here.
In addition to the above constraint on 'efficient', we also require that for each block
building on the main chain we should only have to calculate one new state (the updated
current state). This implies that when we step through the blockchain, we only ever
forget cached states, with the exception of the new state produced by the next block.
Somewhat-formally:
{% highlight text %}
Current head is of height n.
A[n] = {cached states at height n}
Block n+1 arrives:
assert A[n] is a superset of {all a in A[n+1] s.t. a is not of height n+1}
{% endhighlight %}
Thus A[n+1] can be described as the set of some or all of the states in A[n] and
the state at n+1, and therefore our collection of states does
not require regeneration on each new block.
I propose a solution below that has a number of desired properties:
- A reorg of length n requires computing no more than 2n states
- Space efficient: k states saved where ld(h) <= k <= 2*ld(h)
- Incremental: only one new state has to be calculated for each new block
{% highlight text %}
Initial conditions:
- Reorg length: n
- Current height: h >= 3
- i = 0; i < h
2^k < h - i <= 2^(k+1) is always the case for some k
if h-i == 2: set k to 1. (it would otherwise be 0)
After finding k, and while h-i > 1:
1. Cache states at height i + 2^k and i + 2^(k-1).
2. i += 2^(k-1)
{% endhighlight %}
and in Python: (testing all combinations up to 2^13)
{% highlight python %}
import math

h = 3
states = set([1, 2])
while h <= 2**13:
    newStates = set()
    i = 0
    while h - i >= 2:
        # find largest k s.t. 2**k <= h - i (i.e. floor(log2(h - i)))
        k = (h - i).bit_length() - 1
        newStates.add(2**k + i)
        newStates.add(2**(k - 1) + i)
        i += 2**(k - 1)
    ts1 = set(states)  # temp set for testing superset requirement
    ts1.add(h)  # add the current state (instead of removing it from newStates)
    assert ts1 >= newStates  # ts1 is a superset of newStates
    l = list(newStates)  # temp list just to print
    l.sort()
    print(h, math.log(h) // math.log(2) + 1, len(l), l)
    states = newStates
    h += 1
{% endhighlight %}
Because of the ~log(n) space requirement a very fast block time is not a major concern.
A chain with a target time of 1 minute requires about 1.5x the storage capacity of an
equivalent chain with a target time of 10 minutes in the first year, and this ratio
rapidly approaches 1 in the following years.
That said, after the first year with a 1 minute block time, we'd be storing around 30 states.
If we ignored all states more than 2000 blocks deep (a day and a bit) we're still storing more
than 15, which isn't a particularly great optimisation. (When we have events like the
Fork of March 2013 we would like clients
to adjust quickly and efficiently).
I have some ideas about state-deltas to try and solve this issue (which is
ungood, but not doubleplusungood) but that can wait for a future post.
Eleven months ago I started planning Marketcoin and since then I've not described the updated design.
It has changed significantly since I first described it, and is far superior in many aspects.
Herein I'll describe what Marketcoin is designed to do, sometimes with little or no justification to how it is achieved.
The implementation is highly technical and does not belong in a general introduction.
Marketcoin is an idea that manifests in a novel fashion.
It's a bit like Bitcoin, and a bit like Mastercoin, and a bit like Ethereum, but also like none of them in many ways.
It's not any one single network, but many that are able to communicate and transfer value from chain to chain.
They're able to share a common unit of a consistent value across all chains supporting the correct standard.
It is self-pruning and selects for the most efficient markets, while still enabling diversity and innovation.
It is a parallel cryptonet designed to span the Internet and enable trustless trade between both old and new chains.
Markets can be hosted anywhere, on any chain, but there will be a central hash rate source where most market-chains live by default, known as the Grachten.
A market-chain may host many markets but the central unit is always common.
This communal living is an important design decision for market-chains because it provides an environment where high quality markets can grow that have some level of mutual quality assurance due to their competitive environment.
A high quality neighbourhood is important as chains will have to communicate to move the central unit between them; remember that confidence in the chain is inversely proportional to required confirmations, so less secure chains will naturally be slower to interact with their peers.
Just as Namecoin is merge-mined with Bitcoin, market-chains can be merge-mined with the Grachten.
Unlike the Namecoin / Bitcoin relationship, though, the Grachten has limited space, and it becomes more and more difficult to produce blocks as a miner attempts to include more data.
For this reason cryptographic authentication is deferred to the Grachten, providing a competitive environment where market-chains can prove their efficiency.
The more coins stored on that chain the higher the block reward is, increasing the incentive for a miner to mine that chain.
Two chains competing for the same currency pair will each hinder the efforts of the other, so once one gains a majority the weaker will atrophy until it is discarded.
The main idea of Marketcoin is to provide a fair and unbiased market on which to trade various cryptocurrency and other smart properties.
In the same way a human watches the Bitcoin blockchain to wait for payment, Marketcoin watches the Bitcoin blockchain to confirm trades.
Since there is some internal unit and an external unit, it is conceptually easy to see that an exchange can take place.
The actual market design used in a market-chain can be of the community's choosing, however, a blockchain provides unique challenges that traditional market structures do not neatly fit.
The proof of concept (PoC) due out in the near future will demonstrate a design based on a modified call market.
Since orders must be inserted into the order book whether they execute immediately or not, no distinction is made between orders resting in the order book and orders that execute immediately.
Due to the near perfect-information state of cryptonets, it's likely a market will react to an order before it is executed - helping to create liquidity and competition around every trade.
A typical market-chain will experience the following phenomena:
Every minute or two a market update will be produced, solidifying orders still floating around the network and helping to prevent manipulation on the part of miners - orders are added to the order-book but not executed.
Every 15 minutes or so the market will execute, clearing all overlapping trades.
This aspect is similar to a call market.
Part of the design requires that the timing of this execution cannot be predicted.
Orders will be prioritised in terms of how good a 'deal' they are.
A user wishing to sell the central unit for very little, or to buy it at a high price, will be preferenced over a user who is less generous in their offer.
The highest bid is matched with the lowest ask, and the price of the trade is decided to be the average of the two offers.
One or both of the orders is consumed fully and the remainder is added to the top of the respective order-book.
This continues until there is no overlap between bids and asks.
As a consequence there is no longer a spread in the market, but an uncertainty in the price appears instead.
(During simulations this was ~0.3% at a maximum, and usually less; since this is less than or on par with most exchange fees it's counted as negligible.
As liquidity rises this error will fall, except in times of volatility.)
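For concreteness, here's a minimal sketch of that clearing step; the function and the (price, amount) order representation are illustrative, not the PoC's actual data structures:

```python
def clear_market(bids, asks):
    """Clear all overlapping orders as described above.

    bids, asks: lists of (price, amount) tuples.
    Returns executed trades plus the remaining, non-overlapping books.
    """
    # Best deals first: highest bid, lowest ask.
    bids = sorted(bids, key=lambda o: -o[0])
    asks = sorted(asks, key=lambda o: o[0])
    trades = []
    while bids and asks and bids[0][0] >= asks[0][0]:
        (bid_price, bid_amt), (ask_price, ask_amt) = bids[0], asks[0]
        # The trade price is the average of the two offers.
        price = (bid_price + ask_price) / 2
        amount = min(bid_amt, ask_amt)
        trades.append((price, amount))
        # Consume one or both orders fully; any remainder stays on top.
        bids[0] = (bid_price, bid_amt - amount)
        asks[0] = (ask_price, ask_amt - amount)
        if bids[0][1] == 0:
            bids.pop(0)
        if asks[0][1] == 0:
            asks.pop(0)
    return trades, bids, asks
```

Because each trade executes at the midpoint of the two offers, the reported price carries the small uncertainty discussed above rather than a conventional spread.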
Furthermore, a large trade at a good price will generate a lot of interest, and the potential of profit will cause traders to actively compete for a slice of the trade.
To compete traders must make a generous offer in the other direction, and so their competition benefits the market as a whole, both by increasing liquidity and by helping to maintain price stability.
Due to the uncertainty of the market, it is possible to have two opposing trades in the same execution and make a profit (it is also possible to make a loss).
Whether this is an advantage or disadvantage is relative to the user in question.
There is a possibility of miners using their power to manipulate the orderbook slightly in their favour before offering a block. However, execution always happens at the beginning of a block, ensuring only existing orders are executed and preventing too much manipulation on the part of a miner.
They can, however, manipulate the orderbook very slightly with every block they produce, so that if the next block happens to invoke an execution they will be slightly advantaged.
By analogy, this is the high frequency trading of Marketcoin - both are simply having a say when it most counts.
It should be evident at this point that novel market structures are easily implemented under Marketcoin, allowing the most efficient and desired market structure to emerge.
This is an excellent example of the neutrality of Marketcoin's design.
While one market-chain may only support a few currency pairs (more may prove too cumbersome), other market-chains are easy to create and can maintain a two-way peg with the central unit provided a standard is followed.
This standard dictates a few aspects important to the network.
For example, the rate of central unit generation as a block reward should be proportional to the number of central units stored on that chain and inversely proportional to the block frequency of that chain.
Since market-chains are designed to coexist on shared hashing power they have a natural resilience to changes in block production times, and the reward adjusts accordingly.
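As a sketch of that reward rule (the constant k and the units are placeholders, not part of the standard):

```python
def block_reward(units_on_chain: float, blocks_per_day: float, k: float = 1e-4) -> float:
    """Central-unit block reward for a market-chain.

    Proportional to the central units stored on the chain and inversely
    proportional to block frequency, so slower chains earn more per block
    and the chain's daily issuance scales only with the units it holds.
    """
    return k * units_on_chain / blocks_per_day
```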
Creating a new market-chain will be extremely accessible (we're going to provide a library) but maintaining one is very costly (you have to convince people to mine it in a competitive environment).
Because of this combination, I anticipate there will be a great evolutionary synergy, whereby there is no central Marketcoin chain and the central unit exists between many chains, abandoning them as they become insignificant and jumping on those that prove useful, novel, or advantageous.
The unit of value will be extricated from the confines of just one blockchain, allowing for innovation not just in market structure but blockchain technology - all without needing hard-forks.
Marketcoin is designed to be the solution to distributed exchange: from an ethical launch to evolutionary agility, we want to cover every base to ensure the longevity of such an ambitious project.
The Ethereum devs have been mentioning microchains lately so I figured it was time to write up what my thoughts on this sort of thing have condensed into; they might differ from Gav Wood's thoughts.
As a note, I didn't coin the term microchain, though I've heard Gavin Wood use it (and Stephan Tual). I didn't have a term and I think this is perfect.
The point of a microchain is to provide a shared scalable PoW 'container' - a chain meant for nothing else but wrapping data in a PoW. Typically this has been done in a roundabout way (see AuxPoW or Mastercoin/Counterparty) that requires a lot of data and is not efficient for any 'piggy-backing' chains hanging off the main chain. This isn't a huge issue in itself: in the case of AuxPoW, proofs just go from 80 bytes to ~500 bytes (unless you're using P2Pool or Eligius, in which case it's a bunch more). This is because the whole chain from block hash to PoW must be included, which is Hash(Header(MerkleTree(Coinbase(ScriptSig(BlockHash))))). Ugh!
Additionally AuxPoW has a number of design flaws: using 'chain-ids' to dictate positions in merkle trees is just ugly. The point is to ensure uniqueness in the proof - that you can't secretly include two different block hashes (since data in a merkle tree can be hidden) and later launch a doublespend attack. It's trivial to see that a merkle patricia tree (MPT) is the better solution here as key-uniqueness is guaranteed.
Another flaw is the indirect and bulky nature of the proofs as described above.
A further flaw is the reliance on a central chain: Namecoin will never exist without Bitcoin (or at least requires a hardfork) and necessitates the use of the Bitcoin chain. It would be nice to have a system of merged mining that is coin-agnostic (doesn't favour Bitcoin, basically). Hardforks are bad, let's avoid them.
A related flaw is that it dictates the data structure, which favours Bitcoin forks. It introduces needless complexity for a ground-up chain to implement AuxPoW.
All in all, using AuxPoW has a lot of side effects, and it'd be nice to be able to avoid them.
Basic microchains
These are the minimal structures needed to fairly support merged mining and general data-inclusion.
(The code is written to be roughly compatible with Encodium.)
Ensure genesis keys diverge as quickly as possible, and put a cap on proof length to avoid bloat: for fun, note that putting two very similar keys in the tree can create a worst-case proof.
Change MicrochainBlock's proof of work to def pow(self): return hash(self.tree.root + nonce) - this gives us O(1) updates to the PoW, whereas otherwise a k,v pair in the MPT must be altered, which is an O(log n) update.
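To make that concrete, here's a rough standalone sketch in plain Python (the hashing choice, field names, and nonce encoding are mine; it skips the Encodium field machinery entirely):

```python
import hashlib

def H(b: bytes) -> bytes:
    """Double SHA-256, used here purely for illustration."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

class MicrochainBlock:
    """Toy microchain block: a merkle-patricia-tree root plus a nonce.

    The MPT (represented here only by its root) maps each merged chain's
    key to the block hash it wants authenticated; key uniqueness in the
    MPT is what stops a miner hiding two different hashes for one chain.
    """
    def __init__(self, tree_root: bytes, nonce: int = 0):
        self.tree_root = tree_root
        self.nonce = nonce

    def pow(self) -> bytes:
        # Hashing root + nonce means bumping the nonce is an O(1) update,
        # instead of rewriting a key/value pair in the tree (O(log n)).
        return H(self.tree_root + self.nonce.to_bytes(8, "little"))

    def meets_target(self, target: int) -> bool:
        return int.from_bytes(self.pow(), "big") < target
```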
More complex forms
There are a few more alterations I've been thinking of, especially:
Making microchains into a blockchain of their own (with the metadata included in the tree like everything else - this metadata governs targets, difficulty, etc), which will aggregate mining power in a more formal manner. Additionally it means that a chain can simply not worry about PoW and take an authenticated list of hashes from the parent chain (for better or worse). And...
Deregulating block frequency on merged chains and allowing the microchain to govern update frequency. Which ties into...
Competition within the tree. By this I mean that merged mining an additional chain incurs some cost; this drastically alters the incentive structures around attacking networks and merged mining (I haven't done the math yet to figure out whether it can even be beneficial).
Those three points mean the microchain could support many merged chains, with their block frequencies governed by how often they are mined into the microchain (and lower frequency means higher reward per block). With added competition, they should reach an equilibrium that allows a direct measurement of perceived economic value. More detail another day.
Part 1 is primarily concerned with the money creation process (fiat money, fractional reserve banking, and interest) and who benefits from this. Part 2 is a conceptual introduction to Bitcoin, why it is such a significant achievement, and why it is superior to our current monetary systems.
I am an amateur philosopher practicing the school of Critical Fallibilism. My favorite philosophy topics are: epistemology, learning, systems design, communication, morality, project planning and business, and general life skills and attitudes. I spent the last half of 2020 improving my thinking methods via (commercial) one-on-one philosophy tutoring from Elliot Temple. Those 52 tutorials (~100 hrs) are available free on YouTube and you can read my learning FI site and my microblog to see the philosophy work I was doing at that time. The tutorials started with my goal of improving my writing quality, but they covered a very wide range of topics, like grammar, procrastination, social dynamics (e.g. analysing + understanding lies and dishonesty), yes/no philosophy (epistemology), learning methods, and having successful discussions.
As a result of the improvements to my thinking methods, in January 2021 I unendorsed all my previous ideas. That means that I revoked my implicit endorsement of projects, ideas, opinions, etc that I had previously worked on, advocated, promoted, etc. There's more details in the above-linked video. This about page was sorely out-of-date prior to the update on 11th July 2021. You can see previous versions on github.
I am very comfortable learning new languages, frameworks, toolkits, etc. I have a strong preference towards safe programming techniques. Some examples: rich static types (e.g. Haskell, Purescript); well integrated functional techniques (e.g. Rust); unique compiler-level safety (e.g. Rust, Elm); and declarative frameworks (e.g. Cloudformation/IaC, Tailwind CSS). I really like type-level programming and am disappointed at the lack of support for it in languages like Rust. I think things like higher-kinded types, functional dependencies, and type-level rows are incredibly powerful, but the current implementations and tooling are ultimately lacking, making the act of doing type-level programming harder than it needs to be. There are some exceptions to my preference for safe languages, too. I quite like Ruby and Rails (though have some criticisms, too), and I have a new-found appreciation for SQL. I don't like javascript much, but it's not so bad anymore with modern ecmascript and typescript. I don't hate it.
My two most well-known past projects are likely Flux and SecureVote.
Flux is a political party I founded in 2015 to introduce better methods of doing democracy. Particularly, I invented a new way to do democracy -- Issue Based Digital Democracy (IBDD) -- built around foundational concepts from epistemology and free market economics such as error correction, cycles of conjecture and criticism, specialization and trade, division of labor, comparative advantage, and arbitrage. Flux ran in state and federal elections (in Australia) in 2016, 2017, 2019, and 2020.
SecureVote is a startup (in indefinite hiatus), founded in 2016, that produces secure online voting software and infrastructure. We own a patent on the most space-efficient method of secure, online, p2p secret ballot. In 2017 I publicly ran a 24 hr stress test of our prototype high-capacity voting architecture -- this achieved 1.6 billion votes anchored to the Bitcoin blockchain and was able to be audited by the public. Based on those results, a 2016 15" MacBook Pro would have been capable of processing up to 16 billion votes in 24 hours, i.e., the 1.6 billion-vote stress test used approximately 10% of that macbook's computational capacity on the bottleneck task: signature validation (I guess mb it would thermal throttle, tho 🤨).
Some other past projects of mine:
BitChomp Mining Pool (2011)
I ran a NMC/BTC mining pool (BitChomp) for a short while in 2011. It died after the pool became insolvent due to a repeating-payments bug in my BTC payout code. For whatever reason the same code worked fine for NMC payouts, but BTC payouts encountered an exception between the 'send BTC tx' and 'record the payout in the DB' steps. The regular cronjob to trigger payouts meant that (by the time I woke up) BitChomp's first Bitcoin mining payouts (about ~7-8 BTC out of a 50 BTC reward) had been sent to miners 6 times over until the wallet was drained. The reward distribution model (PPLNS, or SMPPS, mb) meant that the pool was able to build up a buffer of excess reward, but that buffer needed to be maintained to pay miners later during unlucky periods (or something like that). The takeaway is that paying out the excess like this put the pool's internal accounting out of whack, leaving it insolvent. I learned a valuable lesson about handling atomic, irreversible events and DB synchronization; and I'm glad it happened early in my career and not in a high-stakes situation.
Marketcoin, Ethereum, Eudemonia, The Grachten, Quanta (May 2013 to August 2014)
I tried to launch a distributed-exchange blockchain -- Marketcoin -- in May 2013. The architecture is still workable and better (higher capacity, lower fee) than the DEXs around today (like Uniswap/Balancer). In Dec 2013 -> Feb/March 2014 I worked with the Ethereum team doing self-directed work around smart contracts and wrote (to my knowledge) the first smart contract testing framework, alongside a test implementation of Marketcoin's price matching and escrow engine and a precursor to BTC Relay comprising 3 contracts: CHAINHEADERS (for Bitcoin's headers-only consensus), MERKLETRACKER, and SPV. To be clear: if the design of Ethereum smart contracts (and tooling around their authorship) had not changed significantly before launch, this is probably the earliest near-functional decentralized, cross-chain exchange.
In April-July 2014, a friend and I started a short-lived project, Eudemonia Research, where I returned to Marketcoin and wrote, from scratch, a blockchain framework for fast development of custom blockchains: Cryptonet. It's like Parity's Substrate, except 1,316 days older, written in Python, and dead. I used cryptonet for some other important prototypes, too.
One was The Grachten (originally GPDHT), an early implementation of a blockchain scalability solution based on merged-mining and something like pseudo-sharding. It later went on to be refined and coined a 'microchain' by Gav Wood. When Gav Wood said the following, he was talking about our conversation about this idea.
Decoupling the underlying consensus from the state-transition has been informally proposed in private for at least two years---Max Kaye was a proponent of such a strategy during the very early days of Ethereum.
Another important prototype I built using cryptonet was Quanta -- which is the world's first implementation of the generalization of Nakamoto consensus to a DAG that is capable of merging histories from multiple parents. The method I created for Quanta was independently discovered a year later by Yoad Lewenberg, Yonatan Sompolinsky, and Aviv Zohar in Inclusive Block Chain Protocols.
I'm no longer interested in pursuing anything around digital democracy.
I now think that what Flux was trying to do (and widespread digital democracy more broadly) is a fool's errand. There are much bigger problems with govt/democracy, and those problems will prevent digital democracy from making any substantial impact. The AEC's flagrant disrespect for the Electoral Act and consistent head-in-the-sand style denial of any fault is an example of this. Moreover, voluntarily participating in broken systems (e.g., starting a political party) is, in essence, consenting to them and publicly supporting them as legitimate. I don't think that's a good thing to do, and the right thing is to opt-out of those systems to whatever extent is possible.
There are two philosophy books I recommend you read that discuss these sorts of issues: Karl Popper's The Open Society and Its Enemies, and Ayn Rand's Atlas Shrugged.
IMO, digital democracy actually presents a risk in some ways, in that it might make government interference a lot easier. IMO, the further the government is from your life, the better.
A first attempt to describe the neutral voting bloc.
Originally written some time around 2 July, 2014. If you are interested in becoming a founding member of the NVB you can do so at our site: nvbloc.org.
Governance is currently a centralized endeavour in most developed nations, and inefficiencies arise in the modern age due to design assumptions made (typically) in the 17th through 20th centuries. Assumptions like speed of communication (snail mail), the ability of constituents to understand policy (non-universal education), travel time (days to weeks), and magnitude of population (orders of magnitude less than today), among others. Since nearly all of these assumptions are now wrong, we expect that more efficient and just systems must now be possible. Furthermore, we expect there to exist some incremental solution by which a smooth transition into a new political and societal structure can occur over decades (slow and steady) instead of days (hasty revolution). A potential solution — a neutral voting bloc — is presented below, aimed at the Australian Federal Senate because of its system of preference allocation as parties are eliminated from the race. Over time this system can diffuse into local and state governments. Additionally the structure is such that the bloc (as a party) can safely hold 51% (or more) of parliament without facilitating tyranny of the majority.
Many good ideas are floating around regarding the structure of what a good, neutral, and effective voting system could look like in today's world. Most of these ideas surround liquid democracy (LD), a system that allows individuals to defer the potential of their vote to someone they trust. In this way, over a chain of several deferrals, responsive representative democracy is achieved (responsive because votes can be taken away if a 'politician' misbehaves) for mundane issues that don't attract attention. Similarly, if an issue attracts a great deal of attention, individual voters can directly vote, spending their vote potential personally instead of granting it to their trusted representative. Liquid democracy thus provides a smooth transition between referendums and oligarchical democracy, depending on the interest of the public, and ensures efficiency in representation through strong competition.
The underlying voting system, however, is not a recipe for success. These systems do not provide — on their own — an incremental transition into a better democratic system, they simply provide a means of allocating votes. Thus they are a means to an end; that end being just allocation of votes. Therefore, no matter how well designed voting software becomes, without citizens’ personal interests at stake such a system will not be adopted. Our task, then, is to design such a system as to always provide someone — anyone — with utility by participating, and once they participate, to offer the same value proposition to another individual, and so on. Without such a marginal increment in utility we cannot achieve an incremental solution.
The utility of such a system is the potential for the individual to cast their vote according to their preference. Therefore the system must strengthen whenever a new member joins regardless of their political ideology. This is achieved by using the party structure as a proxy for the alternate, superior, underlying voting network. As users participate the allocation of parliament (particularly in the upper house) belonging to the bloc will continue to increase with each election (as constituents cast their primary votes in favour of the bloc) providing the beginnings of incremental increase in utility.
However, this is not a novel idea (so far) and has been tried before. Senator Online is an Australian party acting as a direct democracy proxy service for each member. They received 0.06% of votes in the 2007 federal election, and 0.09% in the 2013 election. Clearly their model is not effective. Unfortunately, even though Senator Online may have some of the properties we seek, due to Australia’s preferential system increasing the granularity of representation, Senator Online may never have enough support to gain a seat. Direct democracy is very burdensome to the individual and so may appear unattractive to the majority of constituents (violating our requirement). Additionally there is no utility to be gained for an individual as there is no potential to actually vote if the party holds no seats. The combination of these factors easily explains the failure of Senator Online.
There are two additional issues, then, to overcome: 1) ensure liquidity of utility, and 2) make it easy and attractive for the layman — ideally no requirement for ongoing participation. Issue 1) can be overcome by ensuring the bloc has at least one seat, and issue 2) disappears when liquid democracy is used in place of direct democracy. It may be worth noting that LD requires less interaction than the contemporary Australian system, as a user can ‘set and forget’ by deferring to someone close and trusted, like a spouse, or child.
Ensuring the bloc has at least one seat is a difficult task if only members of the party behind the bloc are allowed to vote. This is because, in such a case, utility is only given to party members. An alternate design choice is to allow external voting too, and as such utility can be distributed beyond the membership itself. This provides an incentive to participate and cooperate even to those who are not prospective members. The importance of this is realised with the following party policy: if another party grants preferences to the bloc, they gain a share of votes in proportion to the contribution (number of preferences) they've passed to the bloc. Thus competing parties are incentivised to pool votes via this preference allocation. Additionally, this pooling means that seats that would otherwise be given to major parties (due to the elimination process) are now effectively shared among parties participating in the bloc. (It is worth noting that individual members play only a small role at this early stage, but since party policy must be a product of the members they have considerable power in the structure of the system). Due to the granularity forced by state-by-state allocation of seats, this also allows parties to gain seats they would have otherwise lost by combining preferences across state boundaries, which decreases the variance experienced during seat allocation — a phenomenon particularly harmful to minor parties. Therefore each and every party has an incentive to participate, even if the bloc is 3rd or 4th on their list of preferences. It forms a safety net so that in the case a party isn't able to claim a seat for themselves, they can still have some stake in a seat, along with other minor parties. This is obviously preferable to simply giving those votes to major parties, and so should provide the incentive necessary for parties to consider cooperation and participation in their best interest.
Of course, there is the requirement that participating parties never have to trust each other, as they would then become dependent on radically different parties, which is unacceptable in a political environment. Trustless ledgers and numeric allocation systems (which could be used to track votes) are a field of computer science which is only now beginning to be researched. However, the structure at the core of these systems — a blockchain — is applicable to this situation, which affords the trustless environment we seek. Because this structure is so new it is not yet widely understood, and so efforts in development and education need to be spent in order to convince even the most conservative of parties of the security of the underlying system. The particular design of an appropriate blockchain is a question as yet unanswered, and research into this is needed. Particularly, decisions on the transparency of votes, voters, and voting need to be made in order not to violate the natural law and contemporary ethics which produce our standards for voting events, like secret ballot. Ultimately this is not a big hurdle and simply takes time for both the design and development of appropriate software.
Since the bloc is particularly effective at eating away at the fractional seats in the senate (particularly the last seat allocated), if it were to gain significant individual votes in elections a core allocation of seats would form that is not attributed to cooperation of parties, but to acceptance by the Australian people. Western culture often produces two-party systems and it is not uncommon for a significant proportion of the population to be dissatisfied with both major parties. A neutral voting bloc provides a perfect avenue for those voters to express their dissatisfaction by effectively splitting their vote between all parties and people which have a stake in the bloc. If these voters facilitate (and ideally participate in) the formation of such a bloc then the utility of the bloc (from both the party and individual perspectives) massively increases as the bloc now controls more seats than preferences provide, meaning that participating parties actively increase their parliamentary exposure by preferencing the bloc. There is no other way — that the author is aware of — that parties can turn the zero sum game that currently exists into a positive sum game. Hence the proportion of parliament attributed to the bloc should continuously increase, as overall utility grows for all actors in Australia regardless of their participation. Therefore the requirement of increasing marginal utility is satisfied — at least while the bloc has less than a majority in parliament, after which the requirement is no longer necessary.
Democracy is not an easy road, and solutions are not always simple. However, with perseverance society can overcome these obstacles provided enabling technology is embraced. A system crafted to solve one very particular problem — decentralising the Australian Senate (and maybe the lower house as a consequence) — has been presented and shown to satisfy some basic requirements. With luck this design may be appropriated to produce more general solutions to solve this problem in other democracies globally. Solutions are on the horizon, let us boldly seek them out.
Intellectual property rights to any of the above are hereby forfeited, and so the work is in the public domain. Information wants to be free.
— Max
If you're interested in becoming a part of the NVB you can do so at our site: nvbloc.org.
The Australian Senate uses an unintuitive system of preference voting. Due to the low barrier to appearing on the ballot there are sometimes many persons and parties vying for the six senate seats per state up for election. To help provide a reasonable and quick voting experience for voters, they have the option of voting 'above the line' where they select just one party (or group) to support and allow that party or group to set the list of preferences for them. In this way they vote with the party.
The seats themselves and the process of filling them is based on a quota system. This means that once a party has secured enough votes to guarantee them a seat, the number of votes dictated by the quota are removed, though all preferences continue to be counted using a weighting system (to account for the already elected candidate). Usually the first 3 or 4 seats are uncontentious, however, the 5th seat and particularly the 6th seat can often be extremely dependent on how preferences flow during the election. Of particular note is the election of Ricky Muir who was elected to the senate during the 2013 federal election. Without these complex preference flows he would not have been elected as the number of first-preference votes the Motoring Enthusiast Party (which he represents) received was well below the quota.
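For background, the quota used in Senate counts is the Droop quota; a quick calculation shows why a couple of quotas are easy for the major parties while the last seat or two hinge on preference flows (the vote total below is just an illustration):

```python
def droop_quota(formal_votes: int, vacancies: int) -> int:
    """Votes needed to guarantee a Senate seat (Droop quota)."""
    return formal_votes // (vacancies + 1) + 1

# For a half-Senate election (6 vacancies) a quota is just over 1/7 of the
# formal vote, ~14.3%; a party on 40% locks in 2 seats, and its surplus plus
# everyone else's preferences decide the final seats.
print(droop_quota(4_000_000, 6))  # 571429
```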
Thus this property of senate voting often comes under criticism as it is seen as a way for illegitimate candidates to become elected. However, I will explain that it also presents a great opportunity for an alternate system of electoral representation, using a party as a container in which to house a better democratic system.
Mechanism of the Hack
I will define a 'hack' as 'behaviour introduced to a system that subverts some restrictions or properties that were originally intended to hold'. It is not the case that all hacks lead to negative consequences but certainly some subset do, though it is unlikely that this hack can be used to destabilise Australia or cause any political controversy besides discussion of the use of the hack itself.
In this case the Senate Preference Hack (SPH) subverts the traditional Senate electoral method by giving the host party an unfair advantage. As long as a small number of first preference votes are received the hack allows a minor party to obtain a number of seats (expected to be the 6th elected seat) far in excess of what first preference votes would indicate.
The core mechanic of the Hack is to provide something in exchange for preferences. Particularly, it is a stake in the seats which the host party is elected into. In this way there is an incentive for every other party to preference the host party ahead of most other parties. In short the Hack provides a neutral 'better than nothing' alternative for all parties willing to participate. Additionally there is no cost for parties to participate as they are required to preference everyone anyway, and thus are physically required to participate at some level.
Because of this the Hack has the following properties:
Mandatory Participation at some level.
Reward for preferences given to the host party.
Silent and harmless failure (if it occurs).
Free participation.
Permissionless.
Requirements for the host party
The host party must have some way of providing said stake to those other parties which preference it. Thus there is a requirement that said other parties are able to submit votes in such a way as to guarantee the host party does not manipulate votes to achieve their own ends. If this requirement is violated it is unknowable whether any tampering or censorship has gone on, which violates the underlying implicit agreement. The agreement provides the incentive for other parties and thus must remain intact.
I do not think it is coincidence that the Hack therefore requires an open and transparent democratic system. As it also needs to provide some utility not found in our current system it provides an avenue to introduce nearly exclusively superior systems of democracy, with inferior systems failing to gain traction (though superior systems may also fail in this way).
While it is possible to use a simple direct-democracy-esque majority-rule system in which all parties and participants vote, I feel that would be a wasted opportunity, and would provide a weaker incentive than a voting system that is more intricate and sophisticated. Particularly, this intricacy must create some value that did not exist previously. Thus a system that provides the greatest chance of success is one that is again superior to our contemporary system, insofar as it meets some need better than its host's environment. I do not believe it is unreasonable to look to the internet and computers for a solution here. Most voting systems used in nations today were designed before the advent of the internet and so have a number of restrictions and compromises that were reasonable in a pre-information age but are now less reasonable. A prime example of this is the philosophy of representative democracy as a whole. Without the internet and with vast distances between citizens it makes sense to elect delegates to act on your behalf. However, if there is no limit to how many others one is able to communicate with, and the speed with which one can do so, the foundation of representative democracy is less competitive.
Conclusion
The Senate Preference Hack enables a political party to gain unfair representation in the senate (by number) only if it then accurately represents those who preference it. It is conjectured to provide a viable method of testing and employing new systems of democracy without sacrificing the stability or integrity of Australian politics. To achieve this it must be wrapped within a new political party, but requires interaction with other parties so is naturally inclusive.
It occurred to me that there is an unsolved problem with the NVB: death.
When a voter dies, if there is no connection between their physical death and their cryptographic identity, it continues to live on. This brings up a new-ish toxic market: dead people's IDs. In short, since they don't expire they essentially become a 'vote for hire' if they can be recovered. Artificially inflating the voting pool in this way would almost certainly be to the long term detriment of the quality of governance we are obliged to accept (or is it? I presume yes for the moment).
So, one way to address this is for votes to 'time out'. Let's take a situation like the following:
Voters are empowered once every three months, and many at once. The empowerment of their ID only lasts 1 year at a time (or w/e). During that 1 year they are free to transfer away the right to vote to other identities. Particularly, they can do so through (synchronised?) CoinJoin operations. In this way, provided they only partake with members of their own pool, they can maintain relative anonymity while still linking their identity to only one large batch of people so an expiry date is maintained. The transaction they create will maintain constant voting rights and a constant expiry date. Mismatched expiry dates cause the whole transaction to be flagged as invalid.
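A minimal sketch of that expiry-matching rule; the token structure and field names here are assumptions for illustration, not a spec:

```python
from dataclasses import dataclass

@dataclass
class VotingToken:
    owner: str    # identity / address the right currently belongs to
    weight: int   # voting rights carried
    expiry: int   # block height (or date) at which the empowerment lapses

def valid_join(inputs: list[VotingToken], outputs: list[VotingToken]) -> bool:
    """Check a CoinJoin-style transfer keeps rights and expiry intact.

    Every token in the join must share one expiry date (so the whole pool
    lapses together), and total voting rights must be conserved; any
    mismatched expiry invalidates the whole transaction.
    """
    expiries = {t.expiry for t in inputs} | {t.expiry for t in outputs}
    if len(expiries) != 1:
        return False
    return sum(t.weight for t in inputs) == sum(t.weight for t in outputs)
```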
That sort of situation would largely solve the problem. People who die are known and thus can't validate, so their token expires.
Additionally you can choose your pool by timing when you register, or abstaining (by letting your token expire for a 3 month period or w/e, then revalidating for another block).
I guess people can also elect to be empowered instantly, but this isn't the default and will lead to less privacy. Politically active people are good candidates for going straight away, but those valuing secret ballot are encouraged to wait.
So, I recently wanted to broadcast a nonstandard tx and didn't want to wait for full blockchain sync on my dev machine.
I knew that Eligius supports the Free Tx Relay Policy, and that I'd sent them nonstandard txs before, but all the IPs I could find weren't accepting connections. Finally I found 68.168.105.168 and was able to connect and broadcast the message. Later I realised you can search getaddr.bitnodes.io for 'eligius' to find nodes.
This is a great start, but we still need to send the transaction. By default Bitcoin Core does not relay transactions that:
Are nonstandard
Contain outputs not in the current UTXO set
At this point we are going to need to compile a version of Bitcoin Core, and address the above two problems in order to broadcast our transaction. Addressing the second means we don't have to download the whole blockchain, and lets us relay pretty much anything.
The first section of code to change is IsStandard() in script/standard.cpp. Source Link. The quickest way to fix this is to add a return true; at the top of the function like so:
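Roughly speaking (the exact signature depends on your Bitcoin Core version), the change looks like this:

```cpp
bool IsStandard(const CScript& scriptPubKey, txnouttype& whichType)
{
    return true; // treat every script as standard for this node
    // ... the rest of the original function body is now unreachable ...
}
```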
Sweet, now all transactions are standard for our node. The next part is to relay transactions even when we can't validate the outputs they spend. While troubleshooting earlier I ran into this error message being thrown. To avoid more debugging I figured the best thing to do was just rip the whole block of code out. That is to say, this:
<snip>
bool fHaveChain = existingCoins && existingCoins->nHeight < 1000000000;
if (!fHaveMempool && !fHaveChain) {
    // push to local node and sync with wallets
    CValidationState state;
    bool fMissingInputs;
    if (!AcceptToMemoryPool(mempool, state, tx, false, &fMissingInputs, !fOverrideFees)) {
        if (state.IsInvalid()) {
            throw JSONRPCError(RPC_TRANSACTION_REJECTED, strprintf("%i: %s", state.GetRejectCode(), state.GetRejectReason()));
        } else {
            if (fMissingInputs) {
                throw JSONRPCError(RPC_TRANSACTION_ERROR, "Missing inputs");
            }
            throw JSONRPCError(RPC_TRANSACTION_ERROR, state.GetRejectReason());
        }
    }
} else if (fHaveChain) {
    throw JSONRPCError(RPC_TRANSACTION_ALREADY_IN_CHAIN, "transaction already in block chain");
}
RelayTransaction(tx);
<snip>
So, all that's left to do is compile bitcoind, run it with -connect=68.168.105.168, wait for the connection to initialize (which can be checked with bitcoin-cli getpeerinfo) and then sendrawtransaction when you've confirmed the connection. If all goes well the tx will appear in the next eligius block!
Australia's Two-Party System Has Failed Us, and We Can't "Just Fix It", but There Is a Way
A few days ago a Junkee article by Jane Gilmore of a similar title was posted to /r/Australia, and although Gilmore's heart was in the right place, the solution to just replace bad politicians with good independent politicians is no solution at all. However, there is still hope, as I shall explain.
Here's the crux of my argument: replacing party-politicians with independent-politicians will not work well. Without infrastructure to prevent re-party-fication and factionalization there is no reason to believe that independents can provide a long term solution, or will avoid joining new or existing parties, or that they will be any more effective than current representatives. Setting aside the fact that Gilmore's plan hasn't worked en masse anywhere else in the world, the problem lies with a common obsession in democracies: who should rule? It is a toxic question that leads to faulty reasoning. It is epistemologically analogous to asking: who is the authoritative source of knowledge? There is no authoritative source of knowledge, and likewise, there are no permanently good rulers.
There is an additional problem with existing explanations of why democracy works: just as the philosophy of empiricism thought that somehow the laws of nature were 'written' on to the mind through observation, some proponents of democracy think that the right rulers are somehow 'read' from the minds of voters. But voters are not a magic eight ball, they are just as fallible as politicians, and perfectly capable of making the same mistakes. Without accepting this fallibility how can we design a democratic system that is immune? If we fight with reality we will not be able to keep from fooling ourselves.
We can see the who-should-rule question embedded all over the place:
If all the people that vote Liberal for no other reason than they don't like Labor, and [vice versa], all voted for Greens and independents instead the problem would solve itself.
But that requires voters give more thought to their vote than A or B. - /u/Kangalooney (top comment on the original thread)
Imagine if we re-framed this for knowledge: If everyone who doesn't like Oracle-1 and Oracle-2 went to Oracle-3 or Oracle-N we'd be okay. Of course, we could argue that some sort of market for correctness would be set up between the oracles but this is inconsistent with them being oracles. Likewise, expecting competition to prompt parties to somehow create good policy ignores the fact that the origin of good policy must therefore not be the parties.
(Also of note is that Kangalooney criticizes the voters, particularly swing voters, not representatives or the system.)
Or from Jane Gilmore, the author of the original article:
What if every rural seat in the country sent someone to parliament who was prepared to actually fight for the things that matter to rural communities? What if the country people elected MPs who were strongly invested in rural medical facilities, ... [many more examples] ... infrastructure and education?
Unfortunately, electing a benevolent dictator is not really a solution. Essentially, that is the fictional entity Gilmore imagines, and often what we all imagine, but it is a simplistic and flawed vision. The who-should-rule question has no good answer, especially for communities of our size. There are just too many ideas, people, interests, and motives for one independent to represent them all fairly. It doesn't matter how much someone wants to do good things, that in no way enables them to. Without an explanation of why results should improve we should not expect them to.
There are also a number of traditional political problems Gilmore's solution doesn't solve. For example, minor parties have never been good at working together: factions, ideology, dogma, these are what are expressed when you put minor parties in a room. It's not a magic recipe for success, and we've known that for a long time.
Furthermore, when you do get 'cooperation' in diverse coalition governments, a curious phenomenon occurs around the creation of policy. Party A suggests Policy A, which they think solves a problem. Party B suggest Policy B which they think will solve the same problem. After negotiations the resultant policy (AB) is a 'compromise' or mish-mash of the two, and curiously nobody thought AB would work in the first place! The solution is not just more proportional representation, we need to go deeper.
If our existing electoral system had the answer within it all along, then it would in itself be the solution to the problems we've had, the same problems that were created by that very system! We need new strategy, we need new systems of organization, we need new ways to rally minor parties to help them cooperate instead of bickering. We need something that has never been tried before because everything that has been tried has failed (else we wouldn't be in the state we are!)
Karl Popper had a useful criterion for determining the quality of a democratic government. First, a democracy is one in which bad policy can be removed without violence. Tick. Second, the quality of a democracy is *how well* it can remove bad policy. (1) Uhhh, we're probably about a 'D+' on that one. I challenge anyone to think of a better method of ranking democracies with such an objective and explicable quality.
I will construct a truism:
The optimum political strategy is not to implement good policy and remove bad policy. (2)
We know this because:
Current political strategies that work do not resemble (2)
People who try (2) face overwhelming odds and rarely succeed beyond the small scale
Without addressing (2) how can we expect the policy we produce to improve? If we don't change how vulnerable our representatives are to removal, why should we expect any greater control than we have now? In short, to achieve anything resembling (2) requires change to both how power is structured and instantiated in people, furthermore, such a change cannot be obsessed with 'who should rule'.
I'll follow now in Gilmore's footsteps, and give you a few paragraphs of 'what ifs', of the requirements, the potential, the promises, and the hardships. Everything from here out can be started tomorrow, if we put the work in.
What if a small proportion of Australians realised we have to step into the unknown? That to solve these problems we must embrace and move past them, and to do so has never been done before; that it will be scary, and difficult -- for becoming pioneers has never been easy -- that we must abandon our previous ideas about why democracy is good, and look beyond ideology into a scalable future?
What if Australia had a unique entry point for new ideas, a way to experiment, boldly and safely, without risking our political system? What if our Senate elections involved parties trading preferences so we could introduce elements to help reorganise our political landscape without permanently altering our parliament? What if we could craft a new political party just to house an experiment, an experiment that could give more back to minor parties, allowing them to specialize, and in doing so give each their ability to really help in the areas they know most about? What if this party used their elected candidates only as a proxy, and allowed novel experiments to feed into our real parliament? What if this party could have a candidate elected in every state with only 1% of the primary vote?
What if this political party could house a direct democracy, and allow all voters to participate whenever they felt it necessary? What if leaders were held to account throughout their term, and in the cases they needed removing, what if we could remove them? What if we embrace bold new voting systems, allowing rules and division of power based on issues? What if you could set a delegate like a family member, or friend, or community leader, instead of having to vote all the time? What if this happened through every level of our political system, so that power structures could be quickly rearranged without needing the whole population to vote? What if we could somehow introduce liquid democracy, to ensure that voting can be fast and cheap? What if we used modern cryptography to ensure anonymity, cost effectiveness, and immutable and transparent ballots?
What if this party was already in motion and on the way to gaining 550 members so it can register Federally and run for the next election? What if this party had a solid plan of action and the explanations to back it up? What if your actions today help decide the future of our parliament, and the future of Australia? What if you had the chance to join this party? What if you could be part of the 1% to take back democracy?
I am trying to start a political party to directly address the philosophical issues I explore through this post. It is called the Neutral Voting Bloc, requires ~1% of the primary vote to win 6 Senate seats, and it is novel in strategy, implementation, and philosophy.
The Neutral Voting Bloc is the only solution I know of, which is why I'm building it. Join me?
I think you make some excellent points and your suggestions are certainly sensible, but I also think there is a bigger beast to slay. The problems we have now are no more 'wicked' than any we have had in the past, however, we certainly need to adopt more modern explanations so that we can navigate their 'complex interdependencies'. I suggest the root of the problem comes from our obsession with the who-should-rule question, and our lack of innovation in policy-making, which can be improved once we understand a little more about where policy comes from.
Furthermore it does no more good to lament the issues we face: there will always be problems, and we will always have to solve them, and because of that we will always have such 'wicked' problems of increasing complexity, and 'contradictory ... requirements'. These point to nothing more than our systems themselves requiring improvement. Jones speaks of these 'wicked' problems as though the problem is somehow to blame, but there is nothing inherently wicked about the universe or the problems it challenges us with, and there is a simpler (and more reassuring) explanation: some of our explanations are wicked, and more particularly it is bad philosophy that helps create and propagate these wicked things. This is why there is no definitive formulation, and why they have no 'stopping rule'. However, there is hope, because the problem is now how we interact and the ideas we cultivate, and the question becomes: how do we do away with wicked explanations and wicked policy?
I agree that Parliament is where the improvement needs to happen, but electing the same people and expecting superior results cannot possibly fix the problems we face, because our true problem is with how we solve problems. Only by fixing the process of policy creation and removal can we ensure that we are able to avoid the bad explanations that produce bad policy.
Karl Popper suggests a criterion for judging democracies which is both objective and powerful: the quality of a democracy is how easy it is to remove bad policy without violence. Canonical democracy is able to do such a thing, however, we are by no means efficient at it. How are we to define 'bad policy', though? The answer is simple: the ease of varying policy is inversely related to how good it is. If we think about each extreme, it becomes clearer to see why, as a policy that is easy to vary must have little connection to reality, and a policy that is very deeply connected with reality must only have a very small number of possible forms (where any change to the 'why' would take it further from the truth). In actuality it is exceedingly difficult to make a policy so perfect it is impossible to improve upon, however, since our options for policies are diverse and finite, it is easy to compare them, and when no suitable explanation can be found to inspire policy we must create new ones.
This is truly where policy comes from: it is a creative endeavour that comes from our understanding of the world. It is the creation of new options that allows us to solve the wicked problems of today, and thus to 'fix' our parliament we must first 'fix' our method of creating new options. There are good reasons to believe that parties are not the vessel that can take us there, at least not in their current form. I don't doubt that a loosening of political discipline would move us in that direction, but to believe we can do it forever with canonical parties ignores the fact that parties are not the source of good debate or good policy, and thus there must be better methods of creating good policy, and solving wicked problems. I've written about this political philosophy just a few days ago: http://xk.io/2015/07/10/aus-two-party-failed-us/
I contend that if we were able to solve the problem of the explanations behind policy there will be no requirement to educate or train those involved with politics, because those best suited to create novel policy options are already qualified. The question is how to utilize them.
Finally, I'm starting a novel political party in an attempt to address the philosophical issues plaguing our parliament. It doesn't conform to the methods or structures of canonical parties, and may be able to win 6 senate seats with only 1% of the primary vote (with a trick I call the Senate Preference Hack). If you're interested, it is named the Neutral Voting Bloc and the website is http://nvbloc.org/
Development of Bitcoin has become dysfunctional. This post isn't about either side of the block size debate, though the debate itself is the inspiration for this post. While the correct course of action may not be apparent, the divide in the community (and corresponding lack of direction) is apparent.
Despite this, some of the community seems steadfast in finding a solution and moving forward in particular directions against the advice of some core developers. Unfortunately, we currently lack the ability to determine who truly has the most support in the public debate, or even if it is one sided. Herein I propose a way to:
Fund core development, and
Determine the source code used for compilation in a decentralized manner
All of this occurs on-blockchain where needed, and there are no trusted entities besides the compiler-maintainers, who can again be chosen in a similar decentralized manner. In the future perhaps we can use zero knowledge proofs to decentralize the compiling stage, but for the moment this is not considered.
This proposal requires bitcoins to be burnt, or destroyed, in order to form consensus. While this may be unpalatable for some, it provides a way to secure the distributed consensus.
This proposal builds on a few pieces of technology: Bitcoin, Git, GitTorrent, and BlocVoting (delegative democracy on the blockchain).
I presume the reader is familiar with the first two: Bitcoin and Git. Bitcoin provides the blockchain and Git provides our method of managing source code.
GitTorrent is a recent development that allows accessing and hosting source code via a DHT, similar to accessing torrents via magnet links.
Bitcoin offers us an important lesson: the irreversible conversion or destruction of resources provides a method for converging to consensus in a distributed manner. By burning coins in a way that resembles proof-of-work we can secure a blockchain like structure to manage identities.
Coins are burnt by sending them to an OP_RETURN output that contains linking information, among other things, and makes it impossible for these coins to be spent. Each burn transaction points to one or two previous burn transactions, which in turn point to previous burn transactions, and so on. In this way, starting from a genesis burn tx, a graph (or list) of burnings can form. Just as the blockchain has a top block, the burn-graph will have one node with more coins cumulatively burnt than any other. This is the head of the graph, and is used as the basis of the weighting system. In this way, an identity's weighting is determined by the volume of resources destroyed (number of coins). Because a weighting will only be obtained (for the burner) if the burning ends up inside the burn-graph, there is an incentive to work off the top, in the same way that Bitcoin incentivises mining on top of the Bitcoin chain.
Aside: exponentially increasing the weighting with respect to time may be required in order to ensure old burnings don't interfere with recent burnings.
After the burn-graph is established we can extract a map (or dictionary, or list of (key, value) pairs) that will continually be updated. This map is between an identity (which can be a Bitcoin address) and a weighting. One increases their weighting by burning more coins.
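Here is a minimal sketch of that bookkeeping, with none of the actual Bitcoin parsing and with illustrative names:

```python
from collections import defaultdict

class BurnGraph:
    """Sketch of the burn-graph described above.

    Each burn references one or two parent burns; the head is the node with
    the most coins cumulatively burnt along its ancestry, mirroring the
    heaviest-chain rule in Bitcoin. Weightings only count burns reachable
    from the head, which is the incentive to build on top of it.
    """
    def __init__(self, genesis_id: str):
        self.parents = {genesis_id: []}
        self.burns = {genesis_id: (None, 0)}   # burn_id -> (identity, amount)
        self.cumulative = {genesis_id: 0}

    def add_burn(self, burn_id: str, parent_ids: list, identity: str, amount: int):
        # A burn only counts if it links into the existing graph.
        assert all(p in self.parents for p in parent_ids)
        self.parents[burn_id] = list(parent_ids)
        self.burns[burn_id] = (identity, amount)
        self.cumulative[burn_id] = amount + max(self.cumulative[p] for p in parent_ids)

    def head(self) -> str:
        return max(self.cumulative, key=self.cumulative.get)

    def weight_map(self) -> dict:
        """identity -> weighting, counting every burn in the head's ancestry."""
        weights, todo, seen = defaultdict(int), [self.head()], set()
        while todo:
            node = todo.pop()
            if node in seen:
                continue
            seen.add(node)
            identity, amount = self.burns[node]
            if identity is not None:
                weights[identity] += amount
            todo.extend(self.parents[node])
        return dict(weights)
```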
Membership and Voting
Using this map as a membership list (open to anyone willing to participate in the burn-graph) we can then allocate the number of votes for that identity based on the weighting. Votes could be used to indicate current preference for the hash of the git commit which they prefer as the canonical source used for Bitcoin compilation. One option would be to implement direct democracy on the blockchain, but that would be inefficient.
Instead, I propose an implementation of Delegative Democracy. This would allow most individuals (who do not have the technical prowess needed to read and understand Bitcoin source code) to choose a delegate, which they can change at any time, who can vote on their behalf. This delegate may in turn have a delegate of their own. This allows core developers to hold the same responsibility and power as they have in the past, until they are unable to solve problems effectively amongst themselves. This inevitably happens from time to time, and so at this point the next layer of delegates can take matters into their own hands and vote directly. If this increase in participation is still unable to solve the issue, the process can continue until we reach something similar to direct democracy where everyone is participating.
Furthermore, when there is little controversy within the community delegative democracy is incredibly light, perhaps requiring only a few kilobytes per release cycle. During times of controversy it is natural that participation will rise, and so the space requirements will rise accordingly.
Disclosure: I am developing an on-blockchain implementation of delegative democracy called BlocVoting.
This voting network would not physically transfer tokens as some voting proposals do. Rather a weighted graph of voters would be established and evaluated for each ballot.
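A toy evaluation of such a graph might look like the following; all names are illustrative, and the real scheme would read the weights, delegations, and votes out of burn transactions:

```python
def tally(weights: dict, delegates: dict, direct_votes: dict) -> dict:
    """Evaluate one ballot over the delegation graph.

    weights:      identity -> burn-derived weighting
    delegates:    identity -> identity they defer to (absent = no delegate)
    direct_votes: identity -> option voted for on this ballot (absent = defer/abstain)

    A direct vote always overrides the delegate chain, so voters can step
    in on controversial ballots without changing their delegation.
    """
    totals = {}
    for voter, weight in weights.items():
        current, seen, choice = voter, set(), None
        while True:
            if current in direct_votes:
                choice = direct_votes[current]
                break
            if current not in delegates or current in seen:
                break  # no delegate, or a delegation cycle: this weight abstains
            seen.add(current)
            current = delegates[current]
        if choice is not None:
            totals[choice] = totals.get(choice, 0) + weight
    return totals
```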
Hosting Source Code
While (at this stage) we could hook up a git server to read the blockchain and publish information about the current git head, we can do better.
Using GitTorrent (source code) we can decentralize the source-code-hosting problem. Because we can decide on the latest commit with respect to the blockchain we no longer need to reference a) a hosted git repository, or b) a central authority. (These are the only two methods originally suggested, though the author does talk about using blockchain name resolution.) In this case, running a node to store and provide access to the Bitcoin source code would help the source-code-serving network (especially a node that tries to include as many branches as possible).
The final problem to solve is source code distribution. The same voting network is capable of voting on compiler-maintainers. These would be public key identities (probably well connected to real world identities) that would be responsible for deterministically compiling and hosting Bitcoin binaries, based on what the most recent ballot yielded as the git head. In this way we could at least know if any funny business was going on by comparing the various compiled binaries. There is the potential to decentralize this further, but for the moment the above is considered sufficient.
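As a toy illustration of the 'funny business' check (Python; the maintainer names and file paths are hypothetical), one could hash each maintainer's deterministically compiled binary for a given git head and flag anyone whose build disagrees with the majority:

import hashlib
from collections import Counter

def sha256_file(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def audit_builds(binaries):
    """binaries: maintainer -> path to the binary they published for one git head."""
    digests = {m: sha256_file(p) for m, p in binaries.items()}
    majority_digest, _ = Counter(digests.values()).most_common(1)[0]
    dissenters = [m for m, d in digests.items() if d != majority_digest]
    return majority_digest, dissenters

# e.g. audit_builds({"maintainer-a": "bitcoin-a.tar.gz", "maintainer-b": "bitcoin-b.tar.gz"})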
Funding Core Development
At the beginning I mentioned funding core development, though that has remained absent until now. There is no way that I know of to integrate this sort of funding in such a way that does not provide an advantage for an attacker. However, with sensible defaults we can heavily mitigate this possibility.
One possibility is for the protocol to mandate that each vote requires two outputs with some ratio between their values (such as 1:1): one an OP_RETURN output, and one a standard output. By default users would be encouraged to select a core developer to donate to, though they could specify any address (including their own) to direct the second output at. Whether to include this at all is a design decision perhaps best left for later, but the possibility of funding development is tantalizing.
Summary
Using some novel technology and the immutability of the blockchain we can construct a framework to help manage decisions around what code to include in Bitcoin, including hard-forks and block-size updates. The unique combination of these technologies allows for a completely decentralized development process without the implicit trust that the Bitcoin community has endured (and now suffers from). We can host code trustlessly using GitTorrent and publish our preference for which code to use as the Bitcoin source code. Using a proof-of-burn based weighted graph we can ensure we maintain decentralized consensus using similar game theory to Bitcoin itself.
Background: we (Nathan and I) built a political party in ~3 weeks on FB via ads. Cost about $3500 or so.
The biggest thing I learnt was to try as many things as possible via split tests, and if possible try them quickly. We used https://adespresso.com and I can't recommend it highly enough. If you're going to spend more than a few hundred dollars it's well worth it (basic is $50/month and it's not on a contract or anything - maybe even has a free trial).
Particularly, a few words can make a HUGE difference (like 30%+ clickthrough rates). An example of that was the following two variants:
Representative Democracy has failed us
Representative Democracy has failed us, time to do something about it (much better clickthroughs)
Particularly I'd suggest split testing images and headlines, since they're the bits that grab people. We noticed a smaller difference with the descriptions/wordy bits, but still significant (15% or so). If you can nail both then there's an easy 50%+ you can just iterate towards, which goes a long way esp. with a NFP or when you're on a budget. Images also can make a 20%+ difference, so make sure to try a few.
Also, make sure to install the facebook tracking pixels so you can track conversions accurately. Facebook can tell you a lot about the people clicking through and signing up just by using that. For example, most clickthroughs we had were older folk (70% male, 50% over 50 or so, which was surprising for a tech based political party) HOWEVER, most signups were younger folk, so you can sort of start to tailor things better with that data.
I wouldn't recommend split testing over interests because you end up with smaller running ads, and you get smaller sample sizes => more money to get significant results. (And you also get info on who is interested in what anyway)
You get a frequency measurement through adespresso too; keep an eye on that because if it stays low you can pump the ad way more (as the same ppl aren't seeing it again). As frequency gets higher your dollars won't stretch as far.
I'd suggest heavy experimentation early on and then pump the ads that work really hard later on. That way you build engagement.
Oh yeah, on that point: split tests create multiple ads, which means that comments end up on one single ad, not all of them. Engagement matters, so try not to create new ads later in the campaign, because you lose that engagement.
Most democracy systems judge themselves by a criterion like “how well does this system represent the preference of the people”. We don’t think that’s a very good thing to measure against, though. Sometimes the preference of the people is evil (Hitler was voted in, remember), or maybe it’s just bad for them (even if they think otherwise), or maybe the outcome is very sensitive to change (like Colombia's peace referendum at 50.2% to 49.8%).
A fundamental background assumption is that the best form of democracy is comparatively better to other forms of democracy. This means that two (or two hundred) countries using different forms of democracy can be measured against one another, and the one with the better form of democracy will, over a long period, end up better off economically and socially.
Because we’re talking about long term challenges and improvements we go beyond mere preference. If a democratic society, in 1901, decided to ban electricity and kept that ban up to now, they’d be far behind the rest of the world. In other words: their preference didn’t help them, and accurately reflecting that preference wouldn’t have helped them. What would have helped them is a progress-oriented democracy, one that made it untenable not to adopt electricity. To some degree this goes on today, but we still see social issues taking decades to play out, instead of years or months, precisely because we wait to get ‘over the distribution hump’ of opinion. We take a long time because we try to accurately project the ‘will of the people’ too much!
IBDD solves this issue through the reorganisation of political power. It makes it expensive for people to hold society back, but also makes the political process accessible. This relationship is designed to allow policy improvements to happen as quickly as possible, which is crucial if we want the best democracy possible, and the best life possible - for all of us.
This doesn’t mean that just anyone can do anything - that could never work. It does mean we all have a chance to contribute though.
We use a different measure of democracies. Instead of obsessing over the preference of people, we focus on the progress a democracy provides to its people. Because progress is fundamentally connected to reality and truth we need to look at how and from what policy is formed, instead of who the policy is formed for. This means we need to bias towards good explanations instead of public preference. There are always some members of society years ahead of the curve, and we should focus on empowering them instead of satisfying a relatively non-specialised [1] majority.
Footnotes
[1] I say ‘non-specialised’ because ‘uneducated’ sounds condescending. The reality is, though, that all people are uneducated about most things, even if they’re highly educated about certain things. IBDD is concerned with putting the highly specialised and well suited people into a position where they have a real chance to implement policy. The other side of this is we’re all specialised in something, so it’s not that this disenfranchises people, it simply reorganises them. There’s a good analogy in TBOI that involves transitioning from a line, to a square, to a cube, and then to higher dimensions. It’s on page 100 or thereabouts, in the ‘Creation’ chapter. Think about it in terms of dimensions of knowledge.
Addendum by u/max, 2021-08-28 08:43:40 UTC (over 4 years later)
Summary: In early December I travelled to Brazil to present this at the Wired Festival. In this 30-minute reproduction I talk about how political power and fallibilism interact, and how we can take advantage of that to produce far superior policy.
I recently wrote the following in a correspondence with a colleague. It was too good not to post, so I hope the following helps you gain a grasp of why Flux and IBDD exist, and their founding philosophy.
The Flux Movement is founded on (what I now call) Deutschian Fallibilism. It's an evolution of Popperian Fallibilism and David Deutsch's book The Beginning of Infinity does a great job of explaining both the theory and exploring the breathtakingly profound consequences. In one light it's a book about epistemology, but the consequences are far-reaching.
In essence the book's thesis is that explanations are the basis of human knowledge, new knowledge can always be created, that all evils are due to a lack of knowledge (at a fundamental level: how to rearrange the atoms around us to alleviate some problem), and thus that the progress of people (and prosperity linked to that) is essentially unbounded - it's just a matter of creating the right knowledge.
The book also discusses myriad other topics, such as morality, Dawkinsian memes, aesthetics, AI, democracy, and many others. All these topics are brought together in a breathtakingly profound, consistent argument.
We've taken his lessons and built a novel form of democracy we call Issue Based Direct Democracy. IBDD comes at the problem of democracy from an entirely novel position: that democracy should be designed around solving problems, not the will of the people.
The reasoning for this is quite simple: if canonical democracy (Rep Democ, Liquid Democ, or DD) is able to make decisions that reduce the prosperity of its citizens then those decisions are wrong, regardless of how the majority feels about it. Furthermore, it's very simple to see that a democracy that biases prosperity (via the creation of new knowledge) at a minimum must always be at least as good as canonical democracy since it is able to create policy that is of greater benefit than other forms of democracy.
The core method of creating new knowledge is a cycle of conjecture and criticism. The two processes we know of in the universe that create knowledge (evolution and human creativity) both use this method. In biology, the conjecture is gene mutation, and the criticism is death. In human creativity, a great deal of conjecture and criticism goes on in our mind before we even know we have a thought (perhaps this is what is happening in those 'empty' moments of creativity before an epiphany hits), and after we publish our thoughts, the peer review process and debate take over. Karl Popper originally called these two things conjectures and refutations.
It's not that canonical democracy doesn't have this: election cycles, party politics, and citizen movements all involve conjecture and criticism, but it is far too slow and far more akin to biological evolution than human creativity.
We took the idea of conjecture and criticism and designed a system of democracy around it that we believe biases more correct knowledge. In other words, IBDD is a truth machine (or rather, a more-truer-than-what-we-had-last machine).
Because we've built democracy around epistemology (instead of the usual will of the people) we view the policy creation process uniquely too: policy is simply an application of explanations we have around certain phenomena and instructions on how best to change our reality to cause some effect.
Thus the most important part of policy formation should not be voter buy-in, but how well that policy is crafted and how good the underlying explanation is - and this is what should decide which policies are enacted. (The book goes into great detail about how we can tell if explanations are good or bad before we even test them, something that's very useful in policy since testing is often slow or inconclusive - no doubt due in part to the problem of creating a 'control' in society).
Furthermore, since mistakes are impossible to avoid (and necessary for progress) we put incredible emphasis on the ability to self-correct. This is one of the reasons Flux is called Flux and not Stasis - the outcome of a particular issue in IBDD can change day to day, based on how voters intend to interact (or perceive the benefits of their interaction), on everything else that's going on at the time, and on the new knowledge that's been created recently. (This also has the great benefit that it can help improve IBDD strictly more easily than the system that came before it.)
We enable this dynamism by treating policy as an ecosystem (as opposed to each policy in isolation) and crucially allow voters to move their political power between issues via an auction market and a neutral central liquidity token (think of it like money and stocks, except that you can't sell liquidity tokens, and everyone starts on an even footing and is constantly pushed back towards an even footing). The end result is that by attaching an opportunity cost to every choice a voter makes, they are incentivised to move their political capital into the issues that are most pressing to them, ending the problems of 'tyranny of the majority' and 'Boaty McBoatface'.
An additional side effect is that both Arrow's Impossibility Theorem and Balinski and Young's apportionment paradox are avoided entirely. We don't 'solve' the problems (after all they are mathematical theorems) but avoid them by constructing democracy differently.
These discoveries are what has spurred Nathan and me into creating The Flux Movement. Put another way: we see the action of holding on to this information and not doing our best to instantiate IBDD as a grave crime against all current and future people, and thus we have a moral duty to bring this to bear as fast as possible (provided we don't destroy any methods of correcting mistakes).
As a final note: The Beginning of Infinity teaches us that a key property of knowledge is persistence, which is to say that knowledge instantiated in reality (by, say, building a device) should persist. Thus IBDD's ability to persist is indicative of whether it actually is better suited to solving the problems we face as a society. We predict that once we start using IBDD anywhere (provided it's suitable) it should be rapidly adopted, since the benefits of IBDD (producing new knowledge in policy efficiently) are associated with increasing the prosperity of the population that adopts it.
If we are right, IBDD will thus lead to the end of tyranny globally, since no system is better able to create new knowledge than one designed to do exactly that.
In 1960 Popper published his paper Knowledge without Authority where he conjectured that the central question of political theory should not be Who should rule? but something entirely different.
What he put forward has become known as Popper's Criterion - a criterion I believe IBDD satisfies with flying colours.
Popper's Criterion
In response to the who should rule? question, Popper writes:
This political question is wrongly put and the answers which it elicits are paradoxical. It should be replaced by a completely different question such as 'How can we organise our political institutions so that bad or incompetent rulers cannot do too much damage?'. I believe that only by changing our question in this way can we hope to proceed towards a reasonable theory of political institutions.
The question about the sources of our knowledge can be replaced in a similar way. It has been asked in the spirit of: 'What are the best sources of our knowledge — the most reliable, those which will not lead us into error, and those to which we can and must turn, in case of doubt, as the last court of appeal?' I propose to assume, instead, that no such ideal sources exist — no more than ideal rulers — and that all 'sources' are liable to lead us into error at times. And I propose to replace, therefore, the question of the sources of our knowledge by the entirely different question 'How can we hope to detect and eliminate error?'
Karl Popper, Knowledge Without Authority (1960)
There are two important political questions here:
How can we organise our political institutions so that bad or incompetent rulers cannot do too much damage?
How can we hope to detect and eliminate error?
Question 1 has morphed to become known as Popper's Criterion:
Popper's Criterion — Good political institutions are those that make it as easy as possible to detect whether a ruler or policy is a mistake, and to remove rulers or policies without violence when they are.
David Deutsch, The Beginning of Infinity (2011)
There are two important ways we must consider IBDD in the context of Popper's Criterion. The first regards the nature of authority and how IBDD is fundamentally anti-authority, and the second is a direct application where we investigate exactly how well IBDD is able to 'detect whether a ... policy is a mistake, and to remove ... policies without violence when they are'.
I would also like to extend Popper's Criterion to include the system of decision making itself. If we are unable to change the underlying system without violence — seeing as the system itself is a policy — then such a system must fail Popper's Criterion.
Authority and IBDD
A note on the use of 'good' and 'bad' when describing policy and explanations: I am using these words in the same manner as Deutsch uses them in The Beginning of Infinity. He does a great job of explaining them in detail, so I will summarise: good explanations are those which are hard to vary and account for the phenomena they purport to explain. Not all good explanations are the most correct, but the most correct explanations must be good. Since policy is a way to take an explanation relating to some problem and provide instructions on how to solve it, policy inherits this property. Policy which is based on a bad explanation is bad policy, and vice versa for good policy.
Given that 'who should rule?' is 'who or what is the authoritative source of good policy?' in another form, the concept of authority plays a strong role in current political systems.
At a fundamental level Issue Based Direct Democracy (IBDD) attempts to acknowledge that there is no authority on good policy (or knowledge). In competing democratic systems, the answer is often in the form:
The party or parties elected through a given electoral system. (Representative Democracy (RD))
The people, or the will of the people. This could be put more exactly: the source of good policy is the measurement of public opinion where each voter is treated equally. (Pure Direct Democracy (DD) and Liquid Democracy (LD))
The groups able to purchase the most votes given a quadratic pricing relationship (Quadratic Voting)
While IBDD is a form of direct democracy, we do not claim that any one source of policy is particularly better than another. Rather, we reason through the process a little differently.
All systems of decision making must rely on all people or some subset of people (if it involves explanations)
Thus it is not useful to say that simply because a system involves all people at all times, it must fail Popper's Criterion
However, we can institute a system which gives different weights to different sources of knowledge at different times
The set of such systems must include those which are very bad at biasing sources of knowledge, and systems which are very good at biasing sources of knowledge
Thus the question we'd really like to answer is how can we make the best guess of which source of knowledge is best for a given problem, without invoking authority?
Furthermore, we can see that different sources are appropriate for different problems, even if those problems coincide, and therefore an optimal system must treat and weigh relevant sources of knowledge independently.
While majoritarian systems (RD, DD, LD) do incorporate many sources into a ruling faction — either through a single party or a coalition of entities — they are tightly bound, and without political disturbance, or specific arrangements of representatives in a legislature, they are inextricable from one another. Often this collection of knowledge is referred to as a policy platform.
As explained in the Flux Philosophy White paper Redefining Democracy, the reason for this centralisation is due to the fundamental nature of majoritarian democracy.
This centralisation is not exclusive to representative democracy. Rather, it is a result of one particular rule many people feel strongly about: one person, one vote. Traditional democratic intuition holds that one vote should be distributed to each voter and no redistribution should occur: that the set of voters should be static. However, this ignores that the distribution of interest in different issues is not even. When a voter is
not seriously interested in an issue they are able to use their vote as leverage against those voters who are interested. This leads to a curious result: to be sure you can pass a bill in a diverse group of voters you must form a bloc large enough to be the deciding factor in whether the bill passes or not. If you are not in that bloc, someone else will be, and they will use your bill as leverage on other bills that you aren’t interested in, forcing you to participate anyway. Thus the optimum strategy in a static democracy is to become the largest bloc, as it gives you the greatest chance of being responsible for whether any given bill passes or doesn't.
Kaye & Spataro, Redefining Democracy (2017)
Thus, in order to treat solutions to social problems independently we cannot have a system ignorant of distribution of knowledge in society. That is to say it must weight different sources of knowledge differently.
The selectorate theory of political power pioneered by Bueno de Mesquita et al. teaches us that the current weights on sources of knowledge are determined by political leaders and their need to satisfy key supporters (the selectorate). However, biasing the sources of knowledge needed to keep a leader in power does not strongly bias good explanation. This accounts for why these sources are largely inextricable from one another in such systems, and why there are fundamental limits on the speed of progress of RD, LD, and DD.
The problem we are concerned with now becomes how to weigh sources of knowledge without authority? and it is to this question that IBDD has a direct answer.
IBDD delivers its answer in the form of a specially crafted persistent and inclusive market. This allows the distribution of political expression on any one issue (or rather, on a particular policy conjectured to solve that issue) to be decided by the aggregate input of all voters considering that conjecture in the context of all present and future issues.
This allows voters to individually weigh the value of their conjectures and criticisms by their perceived benefit at a given time in a highly dynamic way.
It is possible, at this point, to argue that this is just authority in another form: that really we are answering the question of what is the source of good policy? in the form of the dominant weighted source of knowledge as determined by dynamic participation of all voters through a market.
However, this answer neglects other important aspects of IBDD, and particularly neglects the effect of such a system over time.
If we factor time into the above answer, we reach a more compelling one: the source of good policy is the result of repeated conjecture (by proposing policy) and criticism (by voting against it) by those voters who participate through creative self-selection, whereby their criticisms are weighted through their dynamic participation in an all-inclusive market. By replacing 'the source of good policy is ...' with 'one way to ensure good policy is continually produced and bad policy is continually challenged is through ...', we can answer this question in a manner compatible with fallibilism, and indeed one which directly contains many of its core components.
Note: the bulk of this essay concerns the 'policy' part of Popper's Criterion, and not the 'leaders' part. This is because IBDD does not have leaders in the sense that RD does. There is no Prime Minister or President (at least not in the current formation), however, this does not mean that leaders will not arise. While I don't deal with this in the main body, I've included an addendum that discusses this concern.
IBDD as a system for correcting mistakes
A system for correcting mistakes must necessarily act over some time period, and thus a key question becomes how quickly are mistakes able to be fixed?. It is trivial to see that a slower system (RD, for example) is less useful than a faster system (which I conjecture IBDD is).
Note: this is not to say that a system which instantly enacts legislation is superior, as this itself may be a way to introduce mistakes. I am in favour of a mandatory delay period for most legislation passed in IBDD, which would help protect methods of correcting mistakes without impacting the speed at which conjecture and criticism is offered.
Popper's question How can we hope to detect and eliminate error? has been explained in general quite well by both Popper and Deutsch. Their conclusion is a cycle of conjecture and criticism (or conjecture and refutations as put by Popper).
Seeing as we've included this near verbatim in our answer to what is the source of good policy?, I will not stress that point again.
Instead, let us think about the rate at which conjecture and criticism are applied in various democratic systems.
Representative Democracy is a highly exclusive system, where the ability to provide real conjecture and criticism (by proposing new policy or blocking the implementation of new policy) is a right given to very few actors. Without being the governing party, or having a majority through an arrangement of smaller parties, it is near impossible to do much beyond making a point. Additionally, in situations like a bicameral system where the same party has a majority in both houses, the ability to propose conjecture and criticism is reserved for those in the ruling party. A minority party may be given a chance to offer criticism only when some subset of the ruling party defects on an issue, in the form of a temporary alliance.
There is, of course, a matter of mass conjecture and criticism when the terms of some members of either house expire, whereby the aggregate of all voters may determine some other party should be the ruling party (the second party acts as conjecture by its very existence, and voters provide the criticism by voting for it over the ruling party).
The fact that conjecture and criticism are instantiated in RD to some degree does imply the possibility of progress, but because the set of conjectures and possible criticisms is so small we should not expect this process to be efficient.
Thus to increase the rate, volume, and quality of conjecture and criticism we must remove this exclusivity and simultaneously introduce the possibility of specialisation — allowing some conjectures and criticisms to be weighted more highly than others.
Furthermore, because we cannot predict the source of the best conjectures (as new knowledge is unpredictable) we cannot exclude any voter from this process.
These two conjectures in concert imply that an efficient system must be 'direct' (as in sourced from individual voters) in some way.
This is reassuring in that it creates a sense of 'permissionlessness' around the ability to propose new conjectures and apply criticism, a property we observe in many instances, such as the progress of science and the success of some businesses over others (though not all environments for business remain equal, and so there may be some external bias applied in the economic case).
Using Markets to spur the creation of new options and good policy
Introducing a market for votes is not sufficient to provide good quality conjecture and criticism, though. It is easy to imagine some market which includes a rule like Person A is allocated 100x the resources of all other participants. It is clear that in such a case Person A has been selected as a better source of knowledge than other participants. This is a problem because it is not possible to consistently predict which sources of knowledge will be most correct (though I'm sure we can make good guesses), and to assert otherwise is prophecy or authoritarianism.
However, it is possible to embody this uncertainty in a market by weighting all sources equally, at least initially. Through a well crafted market we will see that this produces interesting incentives aligned with the production of good policy, and crucially, the creation of new options.
We also need to lay out some ground rules for this market. In particular, suggesting a new policy must carry an opportunity cost. At a minimum, suggesting policy should be rate-limited over time, such that a bad actor cannot flood the system with bad conjecture. I believe it is better, however, to unify the opportunity cost of suggesting policy with the opportunity cost of voting on policy, such that they use the same resources. In this latter case there is a slight bias against the proposer of a policy, increasing the likelihood of bad policy not passing, and incentivising the creation of better policy from the get-go.
An easy way to facilitate this is through a one-to-many market, where a central liquidity token (LT) mediates value between all issues and the ability to conjecture new policy.
Furthermore, the introduction of this LT allows us to manipulate the supply: we can apply some constant inflation (say, 30% per annum) and distribute the newly created LTs evenly among the voting population. This provides an equalising force, constantly pushing political power back towards an even distribution, and also incentivises voters to act sooner rather than later (introducing opportunity cost over time as well as over the conjecture of policy).
By auctioning off the right to propose new policy (in the same way votes for various issues are auctioned, except there is no recipient for LTs spent in this way) we can control the rate of conjecture and ensure we maintain some sensible political environment (whatever that means). Since the number of spots for new policy conjecture is limited, we can adjust this up or down depending on how we perceive the system to be functioning. The exact method of this adjustment is an unsolved problem at this time, though I do not believe it is that relevant to the current discussion, provided edge cases are excluded.
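The following is a toy Python sketch of the liquidity-token mechanics just described: periodic inflation shared evenly among all voters, LTs spent to bid for votes on an issue, and LTs burnt to win a proposal slot. The numbers and method names are assumptions for illustration, not a market design.

class LiquidityLedger:
    def __init__(self, voters, initial=100.0, annual_inflation=0.30, periods_per_year=12):
        self.balances = {v: float(initial) for v in voters}
        self.period_rate = annual_inflation / periods_per_year

    def inflate(self):
        # Mint new LTs and share them evenly: a constant pull back towards an even distribution.
        minted = sum(self.balances.values()) * self.period_rate
        share = minted / len(self.balances)
        for v in self.balances:
            self.balances[v] += share

    def bid_for_votes(self, voter, amount):
        # Spend LTs bidding for votes on one issue; the opportunity cost is every other issue.
        assert self.balances[voter] >= amount, "insufficient LTs"
        self.balances[voter] -= amount
        return amount

    def buy_proposal_slot(self, voter, slot_price):
        # LTs spent on a proposal slot have no recipient, so conjecturing policy itself has a cost.
        assert self.balances[voter] >= slot_price, "insufficient LTs"
        self.balances[voter] -= slot_price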
In order to understand this setup, let us consider some cases that may arise in a voting market.
Case 1: Bad Policy Conjectured
If a proposed policy is based on a bad explanation then its predictions will not hold. In extreme cases it is easy to see that a majority of voters will be harmed and thus they will not relinquish their vote on that issue. In this case it is probably not worth the cost of even proposing it.
However, the case also exists where most people are not harmed, and the group in favour of said bad policy (A) is strictly larger than the group to be harmed by it (B). In this case A and B will both bid on the auction for votes on this policy, and thus both expend some LTs in an effort to pass or block the proposal respectively.
In the case such a policy passes, it is in group B's interest to make a counter-conjecture, at a minimum undoing the policy put forward by group A. In order to defend against this group B will once again need to expend LTs (like group A) to defend their policy.
Group B can continue this tactic indefinitely, which eventually ends in a 'war of attrition' where both groups' ability to effect change is severely limited.
If we presume that group B has some good policy (if their policy is also bad, we will look at that in cases 5 and 6), we can see a pattern emerge:
As long as group A attempts to maintain their bad policy they will need to expend LTs, limiting their ability to effect change in other areas. Unless their policy is so beneficial that this is worth it (which is dubious, as we've already presumed group B, who are directly harmed, is small), the most politically productive path involves not continuing to put forward this bad policy.
Case 2: Bad (but Good) Policy
There is also the case where a good explanation is used to craft policy (perhaps by a leader aligning or acquiring key supporters, and thus benefiting some group disproportionately), but the resulting policy is, in effect, a bad one.
This is similar to case 1, whereby the maintenance of such a policy becomes a burden to those who benefit from it. In the case of a political faction this can stress the internal relationships as it requires a contribution of opportunity cost from all involved.
If some subset of this group is able to create good policy that provides an equivalent benefit to them (without the harmful externalities of the original policy) then they are incentivised to break from the main group and put forward their policy instead. Their optimum strategy, then, is to create new knowledge (in the form of their good policy) and become independent.
In this way IBDD acts as a decentralising force on the political landscape, and encourages specialisation.
Case 3: Good Policy simpliciter
In the case that good policy is conjectured that has very little in the way of negative externalities (granted, problems are inevitable so really this is a case where those negative externalities are seen as less important than the externalities of other policies) there is little incentive for any group to oppose such a policy, as doing so removes some of their ability to change other policies that would result in a greater improvement to the group.
However, this case is probably far less likely to occur than case 4, where some group already has some bias in their favour, and a good policy would remove or reduce that bias.
Note: for the purposes of this analysis I treat removing a bad policy as a good policy, roughly equivalent to 'don't implement this bad policy, because this good explanation shows it is harmful'.
Case 4: Good Policy that harms some group
A good example of this might be increasing or reducing taxation on some strata of society. It is easy to conceive of a taxation system that unfairly biases some group, and a good policy may be to reduce or increase some tax bracket.
In this case we see a similar pattern to case 2 (Bad (but Good) Policy). The group that unfairly benefits (A) is incentivised to defend the past policy that biased them, and the group that is harmed (B) is incentivised to put forward some good solution.
A similar equilibrium exists, whereby the continued defence of bad policy by group A restricts their ability to effect change. In the issue of taxation this does include a monetary component (though you can argue all policy includes some economic component) and so it might be a little 'stickier' than other issues.
If policy just goes back and forth in a manner such as adjusting tax brackets, it's easy to see this continuing forever in some kind of rally, or possibly converging (in the manner of a compromise) to some middle ground. But this does not preclude the possibility that there is some better solution out there, and it incentivises its creation in the same way as case 2.
Note: I personally suspect a far superior method of taxation exists than income/sales tax, where tax is levied implicitly through deliberate inflation, and prices are expressed in terms of percentage points of M1. This is an example of a possible conjecture that would break a taxation rally as described above (provided it really is a good policy).
However, if the conjectured policy really is good, the equilibrium lies in favour of the good policy. That is, group A has less incentive to criticise the policy, and group B has every incentive to conjecture it. The case for both groups is that passing such a policy results in a net increase in their ability to affect other issues.
Case 5: Two opposing sides, both with Bad Policy, unwilling to cooperate
In the case where two opposing sides are competing for control of some policy area (environmentalists vs coal companies might be an easy example) they might both put forward bad policy.
The result of such a conflict (as explored above) is an expensive stalemate, due to the perpetual need to acquire votes to blockade the opponents, where neither group really gets what they want, and their ability to act on other issues is diminished.
Thus we would expect to see both groups either with very few, or quite a lot of LTs. One possibility is an active standoff, where each group continually puts forward their policy and we oscillate between them, and the other is a passive standoff where both groups stockpile LTs, ready for the 'final confrontation'.
In both possibilities it is clearly not an optimum strategy for the groups.
Case 6: Two opposing sides, both with Bad Policy, but willing to cooperate
If we take case 5, but add in their willingness to cooperate, then they are likely going to either seek a compromise or create some new option.
In the case of compromise they might manage to pass a new policy, but neither side will really be happy about it, and so there's always an incentive to resume the standoff in the hope of a more beneficial policy.
However, if the two groups have had a passive standoff and found some new option which is a good policy, they are now unified as one force, and both have substantial stockpiles of LTs they can use to further their agendas in new ways.
Thus the optimum strategy for groups in this position is to focus on creating new options, and removing the bad policy which came before it.
Case Conclusions
In each of the above cases we observe that the optimum strategy involves creating new options, cooperating with other groups, and producing good policy.
In answer to Popper's two questions:
How can we organise our political institutions so that bad or incompetent rulers cannot do too much damage?
How can we hope to detect and eliminate error?
We can see now that IBDD answers both quite well. We are able to avoid damage by introducing constant criticism and aligning incentives in the direction of preventing damage (and producing good policy, which should lead to progress and prosperity), and the detection and elimination of errors is enabled by direct participation, and incentivised via market dynamics.
Popper's Criterion and IBDD as a system - what happens when IBDD becomes the mistake
There is, of course, a final matter we must deal with. Because problems are inevitable, it is necessary that even if IBDD is far superior to our current systems of governance, it will one day become a pressing problem itself, and voters will once again need to support some new system.
I do not think the method of determining and implementing that new system needs to be discussed here. Because IBDD applies to a wide and diverse set of possible voters, there will be many instances of it, and it is entirely reasonable to presume that once some new system is tested somewhere and appears more desirable it will be rapidly implemented elsewhere. What such a system will look like is, of course, unpredictable, though I suspect it may better enshrine Fallibilism than IBDD does.
The particular thing we must discuss here is the difficulty of replacing various systems from within without violence.
Let us modify Deutsch's definition of Popper's Criterion such that it applies to systems:
Good political institutions are those that make it as easy as possible to detect deficiencies within themselves, and to remove those deficiencies, improve themselves, or replace themselves without violence when they exist.
I do not see any reason that this should be an invalid statement, or subject to some criticism that Popper's Criterion is not.
When viewed in this light, representative democracy does not fare well. If it did, democracies around the world would converge to a common implementation.
Part of the problem with RD is that such a change requires a referendum or overwhelming consensus in most cases, and additionally that the consent of the representatives who may be harmed by such an improvement is required. In the best case a referendum must be won, and so our best threshold for RD is the smallest set of voters larger than 50% of the voting population.
We can therefore predict that a system which fulfils Popper's Criterion better than RD should enable modification to the system with less than 50% of the voter base. Similar to some of the cases above, I predict a modification to IBDD constructed via a bad explanation should not fare well within IBDD.
However, in the case that such a modification would really improve IBDD, then the threshold for that improvement should be far lower than 50%, as it is good policy, and therefore those who agree with it are not incentivised to participate except in the case it may fail to pass.
In this way, error correction is so strongly embedded in IBDD that we should expect IBDD to improve itself strictly more efficiently than the transition into IBDD from whatever system came before it.
I know of no better way to satisfy Popper's Criterion than this.
As a final note, it is possible that IBDD does not act as predicted, and that returning to RD will result in better policy. However, even in that case, we see that at worst IBDD has the same threshold to change as the best case for RD. Thus, if IBDD does turn out to be a mistake, it will be no more difficult to change back than it was to make the change in the first place. However, I doubt anything as extreme as this will come to pass. It's unlikely that IBDD would reach dominance without being criticised fatally, if such a criticism exists. Given it is likely to first appear in small communities (such as municipal governments) or hold the balance of power in some legislature, any such fatal flaws will likely come to the fore far earlier than a complete transition into IBDD could occur.
Conclusion
We've now seen that not only does IBDD satisfy Popper's Criterion (by providing mechanisms to remove bad policy and bad systems without violence), but also embodies many of the core philosophical conclusions Popper himself came to when thinking about the problem of progress, authority, and knowledge.
In this way, it is possible to view IBDD as an embodiment of Popper's Criterion: a system which takes the removal of bad policy and systems so seriously as to optimise for this effect.
Through the dynamic redistribution of political power we diversify groups, spurring specialisation and breaking down factions. This diversification then also increases the ability of any group to criticise established policy and effect its removal.
Furthermore, by carefully constructing an egalitarian market for political power, we actively incentivise the creation of new options by increasing the value of new conjecture.
Finally, due to the above properties we predict IBDD should improve itself far more effectively than any system it replaces, and will be replaced by a system that excels even further in this manner.
It occurs to me that the transition from RD to IBDD should mirror the transition from anti-rational memes to rational memes. It was not that anti-rational memes prevented progress absolutely, but that they were far less powerful at producing progress than rational memes (including by restricting the manner and form of such progress), and the groups which adopted rational memes progressed faster and became far more resilient. Although rational memes still require constant defence (as I imagine IBDD will too), they are sufficiently embedded in western philosophy that an immense change would be needed to extinguish them - unless that change were an improvement, of course.
Is it possible that IBDD embeds the values and explanations of our current rational memes more deeply than other forms of democracy? It is certainly the case that the rational meme of Fallibilism has compelled this author to design, improve, and implement IBDD.
It is my hope that IBDD is able to prove itself, is able to persist once instantiated, and that it is able to help spread rational memes. For if IBDD works, people will attempt to understand it, and if they do they will undoubtedly come up against the rational meme of Fallibilism, and at such a time they might themselves find that it compels them into action.
Addendum: Leaders in IBDD
It’s important to note ... that IBDD is not a system which denies the importance of leadership amongst human societies, but rather, embraces it. IBDD allows leaders to be tested in a way where the best continuously rise to the top, while poor leaders are quickly removed. [19] Interestingly, we start to select leaders based on different criteria. Presently representatives are often chosen for characteristics such as charisma, tenacity, and their promises. Since the challenges faced by leaders under representative democracy are not present in IBDD, they can be selected by a far more significant criterion: their ability to create good policy.
Footnote 19: The ability to remove bad leaders without violence is not specific to IBDD, but is certainly enhanced. Karl Popper has argued that this ability is what has allowed democracies to progress and remain stable in spite of the authority held by Government.
Kaye & Spataro, Redefining Democracy (2017)
I do not envision IBDD having leaders in the same way our old political systems required leaders. Rather, leaders will naturally arise through their ability to effect change, and as we have seen, in a system like IBDD this requires producing good policy.
It may seem slightly circular, then, to apply Popper's Criterion to leaders in this case, as they are by definition not bad. Those aspiring leaders who are bad are heavily restricted via market mechanisms from ever making much of an impact.
However, it's easy to imagine (and wise to presume that) a leader may arise through the production of good policy, and then attempt to abuse their position, in effect transitioning into a bad leader, and, by Popper's Criterion, we must have an effective way to remove them without violence.
I predict this will unfold in a similar way to the cases of bad policy we considered earlier:
A leader who was previously good, but has now become bad, will have a track record of producing good policy. However, the new policy she produces will not adhere to this track record.
The only way in IBDD to obtain far greater political power than the average voter is to convince voters to delegate to you. Because voters are free to change delegates at any time (or revoke it entirely), leaders will need to continue providing some benefit to the voter — if they do not, there is every incentive for the voter to select a new delegate.
In this case, the act of becoming a bad leader actively diminishes that leader's ability to effect change, totally satisfying Popper's question 'How can we organise our political institutions so that bad or incompetent rulers cannot do too much damage?'
However, there is the case where a group may follow a leader without requiring progress from them. Examples of this might be extremist groups, interested in only their ideology, and prevented from accepting new options due to anti-rational memes.
In such a case, it is unfortunate but a reality of IBDD that they are a near-permanent inefficiency (I say near-permanent as they may have some capacity to produce good policy, and over time their anti-rational memes should weaken). However, if this opposition really does embody an anti-rational meme (which it probably will if IBDD acts as predicted) then the problem of convincing them to abandon a particular anti-rational meme is the same problem we've faced since the enlightenment.
There is, of course, the case where such a group has overwhelming power, and uses this to enforce their anti-rational memes. However, such a problem, just as before, is not unique to IBDD. Its potential is a constant in all societies, and all democracies. Critically, though, it would suffer from the same problem as all static societies — suppressing the ability to create the right knowledge to solve problems. As it stands, I consider this scenario no less likely in RD than in IBDD, and crucially, the meme of IBDD works harder to propagate rational memes than RD does (though that may just be my bias or perspective).
Addendum: A Hypothesis from the Selectorate Theory
While investigating how selectorate theory, IBDD, and Fallibilism interact I noticed an extension of a pattern in selectorate theory.
As states transition from dictators to democracies, a key factor is the increased industrialisation of that state. Such progress necessitates new key supporters (perhaps in new industries), and as the number of key supporters grows dictatorships become less stable.
In the end, the only choice is to transition into democracy, though that doesn't guarantee the democracy will be stable at that point.
However, we are also witnessing a growing discontent with how democracy is done today (Redefining Democracy covers this in greater detail). Transitioning back to dictatorship cannot support more key supporters (keys), so it's entirely reasonable to conjecture that these keys are looking for a system better able to accommodate them.
Due to IBDD's ability to incentivise the creation of new knowledge, it seems likely that IBDD possesses a far greater potential for satisfaction of keys, and we may, for the first time in human history, witness a transition from centralised leadership, where the leader wrangles the keys, into a decentralised leadership where keys are supported by the very system itself.
From this line of reasoning I created a hypothesis. I believe that if this hypothesis is true (for the most part) IBDD should inevitably succeed over RD, since it is predicted to support more keys.
That hypothesis is:
The system of governance that eventually dominates is the one able to accommodate the most key supporters.
Whether this is true or not remains to be seen, but it certainly feels like it has merit.
Addendum: Quadratic Voting and Popper's Criterion
I mention Quadratic Voting (QV) briefly in the above essay, and have mentioned it in several other places, but I have not yet actually formed an argument as to why QV does not satisfy Popper's criterion as well as IBDD.
The setup for QV is this:
Votes for every issue are available to all voters
Voters start with 0 votes
Voters may exchange money (which goes into a pool) for votes
The price per vote for each voter depends on how many they purchase; the total cost increases quadratically (e.g. $1 for 1 vote, $4 for 2, $9 for 3, etc.).
Votes are cast as normal
The pool of money is then either split equally between all voters, or distributed to the entire voter base
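For concreteness, here is a small Python sketch of that quadratic schedule, along with the vote-splitting arithmetic raised below: the same number of votes is strictly cheaper when the purchase is split across identities.

def qv_cost(votes):
    # Total price for one voter to buy this many votes: $1, $4, $9, ...
    return votes ** 2

def split_cost(total_votes, identities):
    # Cost of the same votes spread evenly across several colluding identities.
    per_identity = total_votes / identities
    return identities * per_identity ** 2

assert qv_cost(4) == 16          # one buyer: 4 votes cost $16
assert split_cost(4, 2) == 8.0   # two buyers: the same 4 votes cost $8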
I have a few problems with QV. There are also some points I think make it superior in particular ways to RD, DD, and LD.
One advantage of QV is that it breaks down majoritarianism and allows for specialisation.
However, this specialisation is far more available to wealthy voters than to the vast majority of voters.
Furthermore, in the same way that splitting Bill Gates' $88 billion fortune between all people on Earth would only result in $12.57 per person, socialising the result of a quadratic vote does not substantially redistribute wealth from the point of view of other voters. It does not substantially increase their ability to participate.
A related problem is that by splitting a kitty between multiple voters you obtain far cheaper votes, even if the split is only between 2 people: buying 2y votes as one person costs 2y*2y = 4y^2 dollars, whereas two people buying y votes each pay y*y + y*y = 2y^2 dollars for the same 2y votes, half the cost. Thus there are potential economic attacks.
QV is good at ensuring that (when dealing with equally wealthy advocates) the proposal with wider support wins, and good at preventing minority rule to some degree, but this comes at the cost of decreased specialisation, and thus a diminished capacity to effectively use the knowledge spread throughout society.
IBDD has some similar properties, but also has (in my opinion) some vastly superior properties. For example, in IBDD the cost (in liquidity tokens) to acquire 1 vote in any given issue is the same regardless of how many you'd like to acquire (presuming the quantity isn't so large that supply and demand comes into play); each vote is just as valuable as every other vote regardless of the acquiring party.
Additionally, because IBDD uses a closed economy cut off from real world wealth, it is much more difficult for the ultra-wealthy to manipulate the outcome in their favour.
Now, QV does satisfy Popper's Criterion in some cases: it's definitely possible to remove bad policy without violence.
However, it may also be possible for QV to violate it. Consider the case where an ultra-wealthy individual employs many thousands of people to take part on their behalf. These workers buy votes at a vastly reduced rate (compared with our wealthy person purchasing them all themselves).
The effect of this is a mostly economic attack to prevent the removal of bad policy. In addition to less money being put in to begin with, the socialised proceeds are also spread far more in favour of our rich attacker, and so their risk and overheads are lower.
While this does still result in some redistribution of wealth in the direction of the dissenters, we've set up an economic equilibrium between the benefit of the policy and the cost of maintaining it. It is therefore theoretically possible such a system would prevent the removal of that policy indefinitely, or at least long enough for it to matter.
In the case of IBDD, however, gathering such a following requires opportunity cost on the follower's behalf, and additionally reduces the ability for the attacker to interact with other legislative processes. Thus I consider IBDD to more effectively solve this problem than QV.
Addendum: IBDD and Budgets
This is an unsolved problem.
One possible solution is simply to average the opinion of all voters. As far as 'wisdom of the crowd' goes, this doesn't seem too bad a place to start.
It does not seem like a budget is the sort of thing one person or source would be able to produce well consistently, and competing budgets aren't guaranteed to account for everything as well as we may like.
Would it be appropriate for budget requirements to be attached to policy and automatically granted when the policy passes? Could we design some taxation system to facilitate this?
I am sure that if IBDD begins to succeed this problem will receive attention, and all problems are soluble.
Addendum: IBDD and War
This is also an unsolved problem. War is never a simple thing, and seems to lend itself more to centrally controlled systems than decentralised ones.
The naive approach seems to suggest a referendum, and certainly many people would like it if their governments were required to hold a referendum before going to war, but it's not clear to me that this would necessarily produce better or worse results.
In the limit, IBDD is predicted to unify political systems into a global system of governance (note: not a global government). Perhaps it is the case that this will never need to be solved? What if we later encounter some outside threat we did not predict, and then don't have the capacity to deal with it?
I believe this problem to be soluble.
Additional Reading
The Beginning of Infinity, David Deutsch, 2011
The Dictator's Handbook, Bruce Bueno De Mesquita, 2012
Rules for Rulers, an excellent summary of the selectorate theory by CGP Grey, approximately 20 minutes long, Youtube
You want people to guess the answer to some question. The answer is in ASCII, and all lowercase.
Additionally, you want automatic (though not instant) payout for correct answers. Fully trustless, though.
That question could be anything with a certain answer. Example: Who wrote Macbeth? (canonical full name).
To do it, this is what's required of the winning guesser. (This example is written in the context of a trustless host network like Ethereum.)
Winning Players Actions:
Step 1
Come up with the answer. Got it? Great.
Step 2
Publish to the contract:
commit(bytes32 commitHash)
Where commitHash = keccak256(string answer, address yourAddress, bytes32 powOnAnswer)
(Keep these details private for the moment)
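As a sketch of how that commitment might be assembled off-chain (Python; hashlib's sha3_256 stands in for Ethereum's keccak256, which uses different padding, and the plain concatenation below is an assumption rather than the contract's actual encoding):

import hashlib

def commit_hash(answer, your_address, pow_on_answer):
    """Stand-in for keccak256(string answer, address yourAddress, bytes32 powOnAnswer).

    answer        -- the lowercase ASCII guess (str), kept private until the reveal
    your_address  -- your account address (bytes), binding the commitment to you
    pow_on_answer -- the work-intensive 32-byte salt discussed further down (bytes)
    """
    return hashlib.sha3_256(answer.encode("ascii") + your_address + pow_on_answer).digest()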
Aside
Why should we commit this collection of things, and why do we have powOnAnswer?
Does pow mean proof of work?
Step 3
After some time (set by the contract), publish this transaction:
reveal(string answer)
Wait some more time (set by the contract) and publish:
claim()
Done!
How and why you're safe
The most significant feature in this whole enterprise is automatic payouts.
There are significant design constraints when we want automatic payouts to coexist
with completely public and trustless information.
The relationships we need are straightforward, though.
You need to be able to publicly reveal an answer with a near-certain guarantee that
a correct answer can't be stolen from you.
That means that everyone has to know you have a specific answer (and agree on that) before
you reveal it.
Collecting your winnings reveals the answer. But you also had to commit to your answer
some time beforehand. That means (with a well-written contract) the only people you have
to worry about swooping in and taking your winnings are those who had already made
a commitment prior to yours. (And logically the winnings should be theirs, anyway.)
There are two important time delays to make this happen:
1. Allow enough time for commitment to occur without risk of interception.
2. Allow a significant enough time after the first answer is revealed for other contestants to reveal their answers (say, 24 hrs).
If the minimum time for (1) is, say, an hour, then it's incredibly difficult for an individual
to censor a single transaction for that long, which means many confirmations are near-certain.
Additionally, including your address in the outside hash renders replay attacks ineffective.
After such time you're free to publish your answer.
However, you don't publish just the answer and your address.
You publish additional information. This is the powOnAnswer parameter above.
The exact means of calculating it aren't important, for reasons that we'll see.
Let's presume you're doing something like a password hashing algorithm (e.g. PBKDF2).
Your task is to recursively apply a hashing function 500 million times.
(if you think this is a bit weak, you'll see shortly the parameters are freely adjustable.)
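Here's a minimal sketch of that kind of recursive hashing (my own illustration: SHA-256 as a stand-in hash, and a much smaller round count so it runs quickly; dial the rounds up for real use):

```python
# Sketch only: key-stretch the answer so every wrong guess costs this much hashing.
import hashlib

def pow_on_answer(answer: str, nonce: bytes, rounds: int = 100_000) -> bytes:
    """Recursively hash (nonce + answer); ~500 million rounds in the scheme above."""
    digest = hashlib.sha256(nonce + answer.encode()).digest()  # the nonce keeps the work unique
    for _ in range(rounds - 1):
        digest = hashlib.sha256(digest).digest()
    return digest

salt = pow_on_answer('william shakespeare', nonce=b'some public nonce')
```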
Why on earth would you recursively apply a hash like that? You could never possibly prove
something like the correctness of such a hash on Ethereum!
Well, let's think about how you'd attack such a scheme.
First off, the contract needs to store something to check if an answer is correct or not.
The usual way is to publish a hash, or something similar. But whatever is published needs to be
verifiable (and cheaply). If you publish just keccak256(answer) then you're liable to be
brute forced (especially if the answer is something like a name). So that won't work.
What if you could publish something derived from the answer that was work-intensive,
but quickly verifiable?
Not only is that the core idea behind proof of work (hard to generate, easy to verify), but
we can also use it in a totally different way: to introduce entropy.
The reason brute forcing a name is easy is that there just aren't that many combinations.
We can make it much harder to brute force by salting the answer with a time-intensive
hash of the answer. It's not that we care about verifying that proof, it's that generating
it is energy intensive and requires knowledge of the answer in advance to be efficient.
So if you're guessing wildly, then you will have to hash each guess (in this case) 500
million times before you can compare it to the answer! (And, of course, there's no way to
guess the many-times-hashed guess in advance, since it's 32 bytes and pseudorandom.)
When you add all this together, I think you end up with a guessing game secure enough to
resist any attack that isn't attacking the blockchain itself (e.g. censoring transactions).
(Note: to be properly secure we want to include a nonce (which is public) in at least the
first hash in our recursive chain. This is to make sure that the work (hashing) being done
is unique enough that it likely hasn't been done before.)
(Note: you don't have to generate all the possible answers ahead of time, you can
make a new one up each time you need one)
If it returns false, discard the possible answer and move on to the next.
If it returns true, then take all remaining possible answers (with the most recent guess
in the first position) and take the first one (i.e. the answer).
Finally, run the answer through whatever function you need to give you the right
data to commit to the smart contract above. (Which would need to know about your
address in this case.)
2015-ish - The Beginning of Infinity, David Deutsch (2011) http://beginningofinfinity.com/
update 2018-06-12: best book ever written. incredibly consistent. breathtakingly profound.
tags: philosophy, epistemology, AI, quantum physics, morality, general science, memes, society,
political philosophy, aesthetics, environmental philosophy
2017-late - The Fountainhead, Ayn Rand (1943) https://www.aynrand.org/novels/the-fountainhead
update 2018-06-12: deep book on the nature of integrity, relationships with others, the meaning and
purpose of one's actions (in the service of oneself and in the service of greater ideas / greatness).
i've thought about this book quite a bit over the last 6 months, not only is there more to learn, but
it's a source of great ideas to build on (esp with harder questions re: integrity).
2019-01-17 - I started a github repo called 'awesome-notes' where I'm going to note things I want to link back to or make publicly accessible easily. https://github.com/XertroV/awesome-notes
2018-06-12 - The $30,000 pocket dial, Louis Rossmann (2013) https://www.youtube.com/watch?v=MyZnKjFWrxo
it's important to go through these sorts of journeys. they butt up against reality, and
you're forced to reconsider yourself and your ideas in search of improvement.
searching for improvement, and the desire to do so, is important.
tags: relatable experience, integrity, business, responsibility, early adulthood
2016-ish - Everyday Economics or How Does Trade Relate to Prosperity?, Marginal Revolution University (2016) https://www.youtube.com/watch?v=t9FSnvtcEbg&list=PL-uRhZ_p-BM6HPXRBdCIPgHri2eWhMv2t
30min course on fundamentals of economics and how it relates to society
Refutation of the idea that lockdowns are comparably bad to an unmitigated pandemic, OR evidence of humanity as a failed civilisation.
Edit 2020-08-03: I don't stand by this article. I think it's poorly written, and on a subject I don't know enough about. I'm glad I wrote it because it's been part of my progression to taking ideas (and seeking their improvement) more seriously. I've left it basically untouched besides these edits for posterity's sake.
Edit (June): this isn't written well enough, and I intend to move this to a category off the main page in the near future.
I could be wrong, but if I am I don't know why, so please enlighten me. As it stands, this feels robust to me atm.
Comments/discussion: https://github.com/xk-io/xk-io.github.io/issues/5 (FI link once posted)
Alternative: Since GitHub is not great for discussion, if you – for whatever reason – do not think it's a good enough makeshift-forum: please post on Curi’s open discussion, and if possible please post a link in the github thread or email/msg me so I’m aware.
Refutation of 'lockdowns are comparably bad to coronavirus', or, if that's wrong, then the alternate: we're all wrong and going to die soon regardless. (humanity: poof)
If lockdowns were comparably bad (i.e. similar enough) to the near-unmitigated spread of coronavirus and thus covid -- e.g. say the US drops the measures taken up recently; like if they lifted them today -- if lockdowns were that bad, then it means whatever society we had leading up to that has failed, epistemically and culturally. Whatever those values are that were held, were not good enough. That is because: even at society's pinnacle of technology, information, and ability (or at least capacity) to reason; despite all that we were UNABLE to devise a method of dealing with this problem that is preferable to the problem itself. A stumbling at the beginning of infinity. The view that lockdowns are comparably bad MUST imply that ALL popular strategies -- for society, political theory, whatever -- all were INCAPABLE of being preferred to the literal ravaging of society by a new disease we couldn't deal with.
Remember, this isn't just about the US or any one country, it's about all countries. Extreme cases you thought supported your case have no weight here, because they're the exception -- unless they pointedly refute this conjecture. The only 'out' here is literally new knowledge. Knowledge young enough that it cannot have had enough time to even spread as a meme, let alone be instantiated as any real sort of governance system. No political position popular enough to be widely known at this point can satisfy this. It should be surprising to most people. That said, the people claiming the lockdowns are killing thousands of people don't have any of those new ideas.
If this horrid reality is the case, and if some countries have done better than others, then maybe (but only maybe) there is hope; however, if it is somehow the case that lockdowns are that bad, and such a country suffers the same consequences in the long run, then this is the first time in modern (post-WW2) history we have truly failed as humans and people. The first time we've been entirely incapable of any improvement. We were not able to rise to the challenge, and this is the fault of everyone in that arena. Left, Right, Libertarian, Liberalist, Fascist, Communist, and the worst of the lot: centrists[3]. They all failed. Moreover, we're starting to get to the point that maybe democracy failed. Which would be earth-shatteringly bad, and, in the absence of an equally earth-shattering innovation, gives us no reason besides blind hope for any future beyond this century, give or take. If we go backwards, we die; we can't support the level of specialisation needed for progress if we lose too many people. In any case:
Either:
1. the above implies that we as a species are NOT CAPABLE OF SUSTAINED EXISTENCE (we survive by luck);
2. that ENTIRELY NEW SYSTEMS WHICH MUST BE SURPRISING TO ALL DOMINANT PHILOSOPHIES (yes, this means yours[1]) are necessary for the continuation of the human 'experiment' (for lack of a better way to describe this thing we call life); or, finally,
3. that an assumption was wrong.
If I have not made an error, the only conclusions I know how to reach must be one of the above (maybe I've missed a possible conclusion, or you know something surprising I don’t).
Since (1) contradicts the principle of optimism it is not the case[2]. (2) must eventually be true unless we outpace it; it is possible it plays a role, but we need to compare it with (3), wherein I think the thesis is incorrect as it implies contradictions in reality and no sufficient reason is given for e.g. implying the principle of optimism is false.
So: (1) is wrong. (2) might be right, but there's no reason it needs to be if (3). If not (3) then (2), which means we don't have much time left.
[1]: Not necessarily if you’re reading this via FI – I just don’t know enough to claim that.
I mean there's the chance a good idea comes from elsewhere, I just don't have a good reason to believe that atm given the audience I predict.
[2]: I mean it might be that the principle of optimism is wrong, but we’d need some pretty extraordinary reasoning (or: that’s my suspicion).
[3]: centrists are the worst of the lot because they lack an even superficially consistent philosophy, which is the only way to stay centrist. At least you or I can try to persuade the others, but centrists can run around hiding behind whichever idea takes their fancy. A consistent philosophy must make some self-consistent and wide-reaching claims about reality which is incompatible with the notion of centrism, except in the most myopic of situations. This is, ofc, presuming the others are willing to entertain rational debate without violence; if you go past that point they are - for all intents and purposes, relevant here, at least - indistinguishable.
I've been doing a lot of work on philosophy recently.
Public work takes two main forms: the tutorials are all up on youtube,
and a site where I've been publishing related work.
I've hired Elliot Temple to tutor me with the ultimate goal of improving my writing.
We discuss a lot of things in the tutorials; topics so far include grammar, text analysis, controlling and directing one's habits, time and energy management, and epistemology (including one of Elliot's most significant contributions: Yes/No Philosophy(1)(2)).
I've come to realise the current site I have is pretty bad for higher volume stuff, and jekyll is bad in general for maintaining a library of criticism.
So I'm thinking about potential new things to use for a site.
I want to migrate everything, have a large amount of customizability WRT posting, and it should do comments well. Building stuff is okay if it's feasible.
Oh, and it shouldn't have silly issues like jekyll does (e.g. sorting a list of pages can fail if they don't all have a field, e.g. date, in their frontmatter; it's hard to provide a default if date is missing, and even if it is present on all pages, you can't sort the pages by date if some of the values are Dates and some are Strings b/c Ruby doesn't know how to).
In Jan 2021 I unendorsed all my past work and ideas. There are details in the video, but basically it's an FYI that I have changed my mind on some important things, and I'm not sure that I believe the things I used to (but revisiting all those things takes energy, so that won't necessarily happen soon/ever).
I set the date of n/2036 to the date I made the video, but I don't think that was right. The video was posted, then, sure, but I didn't make the post then. I made the post ~now. I don't think I'll date things that way in future.
Also, I said I post-dated the post, but post-dating refers to dating something in the future (which this wasn't). It was just normal-dating.
I've been focusing on philosophy more lately.
During late January I decided to post my self-unendorsement video.
I've been adding to my curi.us microblog regularly.
That is where you should go to find my latest work.
I post almost everything I produce to that thread.
For the past few days I have been making daily updates to my makeup vlog
and will continue doing that for the next week and a bit.
I've also created a vanity URL for the playlist: https://xk.io/muv.
I link the playlist for these videos (and most videos individually) in my microblog, too.
I will continue posting to my microblog.
I consider my blog at xk.io mostly deprecated -- I don't anticipate posting much here until I find a new blog system.
If you've visited my site before, you might notice it's different now. It's a complete forum -- the blog part is done through permissions.
The structure is a tree and based on discussion at https://curi.us/2396-new-community-website-features-and-tech. It's highly customizable and supports things like CNAMEs for sub-forums (which can be ~disconnected from the main forum tree via permissions).
Every node in the tree supports RSS feeds. Hit the RSS button or add .feed to the end of a URL to get the feed.
I've made an Open Discussion node at /n/88 -- you can post whatever there.
Nodes can be viewed as any other type of node; atm there are 3 types: root, index, and topic. Add /as:<view> to the end of a node's URL to view it via that method, e.g. https://xk.io/n/0/as:topic. The main forum-nodes have descriptions but -- atm -- those aren't shown except via view-as or the RSS feeds.
Addendum by
u/max
after
2021-08-28 08:33:19 UTC
1 minute
ATM you need to register an account to post/reply, but anon comments are mb on the cards -- just not a priority right now.
The harassers (a small number of CritRats close to DD) are unwilling to take steps to deescalate the situation. They are unwilling to take even the most reasonable of steps to at least coexist in peace. AFAICT, one of Elliot's main goals is to be left alone (by CritRats) so that he can pursue his philosophy work. Elliot's actions are consistent with that goal.
Here is an outline of David Deutsch's position. DD was ET's mentor, colleague and close friend for over 10 years. He is the leader of a community; they don't have an official name, but usually they're referred to as CritRats. CritRat is a contraction of Critical Rationalism, the name of Popper's philosophy. This informal group, including DD, tolerates the harassment (or worse).
[...] success at making factual, scientific discoveries entails a commitment to all sorts of values that are necessary for making progress. The individual scientist has to value truth, and good explanations, and be open to ideas and to change. The scientific community, and to some extent the civilization as a whole, has to value tolerance, integrity and openness of debate.
— BoI Ch 5, p121 (emphasis mine)
If you had written that, given numerous reports that repeated harassment was being done in your name, do you think it would be reasonable to, say, write a tweet along the following lines? I've recently heard allegations of harassment targeting Elliot Temple by a small number of my fans. I unequivocally denounce harassment, against Elliot or anyone else. It's immoral and, if it is happening, it needs to stop. A tweet like this is not much; it doesn't even acknowledge that a problem exists (just that if it does exist, it should stop). Is that not a reasonable minimum to expect from a philosopher?
DD doesn't post to forums or blogs anymore, AFAIK. He does post to twitter, though. Not one of DD's 10.9k+ tweets mentions 'Harassment'. At what point does this kind of avoidance become negligent? At what point does the issue become bad enough that the leader of the community has a responsibility to make at least some gesture speaking out against bad behavior?
Earlier this year, Elliot had to disable open comments on his blog (after 18 years and over 20,000 comments1) due to the harassment.
When that happened, Elliot provided me with an account (as I'm sure he did for other members of the CF/FI community). Why is this important to say publicly?
[Anonymous:] When critrats inevitably discuss this thread in private, I wonder if they'll consider the restraint you've shown (over years). I mean, those quotes at the end are all at least 10 years old. A decade.
[Elliot:] I fear they'll just say you're my sock puppet and ignore your point. What proof is there that I ever gave anyone else an account?
I found out from multiple community members that DD personally contacted them (over 5 years ago) and tried to recruit them to his side and turn them against me. DD did this in writing and I've received documentation.
I started doing outdoor bouldering about a week ago (we've been in lockdown for the last ~2 months). I found an unclimbed wall that's close enough to home (for current exercise-distance restrictions) and have been working on that recently. I've posted some videos of this to my max's bouldering playlist on YT. I've also documented the wall on theCrag: https://www.thecrag.com/climbing/australia/new-south-wales-and-act/sydney-metropolitan/area/5156697663
Before lockdown I was 4-5 weeks in to my 'send all the purples' goal at my gym (purples are bouldering routes of a particular grade) -- my gym resets a few sections of wall every week and has a 6 week cycle.
This is a repost of a post I made to reddit on June 13th 2017. I found it again recently and wanted to document it here.
The Problem (what's the optimal encoding?)
You have a known list of n candidates on a ballot. In the act of voting, the voter must number each candidate from 1 to n, using each number once. The voter is thus describing a transformation on the list of candidates into a particular permutation. What is the minimum number of bits needed to exactly describe any order, and the method?
My naive approach is to count (from the left) the number of 'jumps' each candidate must take to arrive in their ranked position skipping any previously ranked candidates.
So if we had 3 candidates: cs = ['c1', 'c2', 'c3'] and a list of ranks rs = [2, 3, 1] we'd do this (where zip(cs, rs) matches candidates to ranks):
First, c1 should be in the 2nd position, which is 1 move from the far left open spot. Since c1 could move up to 2 positions, though, we need to use 2 bits. Record 01.
Second, 'c2' needs to be in 3rd position, which is also 1 move from the far left open spot (since we skip over the filled position from step 1). Because 'c2' could only move a maximum of 1 jump, we only need 1 bit: record 1.
Third, 'c3' trivially fits into the last space, so we need 0 bits.
Our final result is 011.
To decode the permutation, take N bits in the sequence ceil(log(2, n)), ceil(log(2, n-1)), ceil(log(2, n-2)), ..., ceil(log(2, 2)), ceil(log(2, 1)), which is [2, 1, 0] in our case. Use those bits (in this case ['01', '1']) to move each candidate into open spots.
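A small sketch of that encode/decode procedure (my own illustration, not the original reddit code):

```python
from math import ceil, log2

def encode_ranks(ranks):
    """Encode a ranking (1-based target positions for candidates c1..cn) as a bit string.
    For each candidate in turn, record how many open slots it jumps over, using just
    enough bits for the number of slots still open."""
    n = len(ranks)
    open_slots = list(range(1, n + 1))            # positions still unfilled
    bits = ''
    for i, rank in enumerate(ranks):
        jumps = open_slots.index(rank)            # moves from the leftmost open slot
        width = ceil(log2(n - i)) if n - i > 1 else 0
        bits += format(jumps, f'0{width}b') if width else ''
        open_slots.remove(rank)
    return bits

def decode_ranks(bits, n):
    """Invert encode_ranks: read back the jump counts and rebuild the ranking."""
    open_slots = list(range(1, n + 1))
    ranks, pos = [], 0
    for i in range(n):
        width = ceil(log2(n - i)) if n - i > 1 else 0
        jumps = int(bits[pos:pos + width], 2) if width else 0
        pos += width
        ranks.append(open_slots.pop(jumps))
    return ranks

assert encode_ranks([2, 3, 1]) == '011'   # the worked example above
assert decode_ranks('011', 3) == [2, 3, 1]
```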
By numerical analysis this method very closely fits O(n Log n) space complexity. (total_bits = 0.89 * n * log(2, n) seems to describe the pattern very closely).
The exact number of bits used will be t = Sigma(i=2, n)( ceil(log(2, i)) ).
It occurs to me that simply listing the positions is O(n log n) too, but the exact number of bits required is t = ceil(log(2, n))*n which is more than my method.
With n candidates, there are P(n) = n! permutations. Each voter selects one of these permutations, so a vote can be represented by an integer [0, n!-1].
2 more comments follow in that branch before I posted my soln
My Solution
clarification: in "base factorial" I'm using base in the same way as base 10 (our number system) or base 2 (binary numbers).
Basic idea is to use factoradics - sort of like base factorial where the kth integer is in [0, k] and to convert back to base 10 you'd multiply it by k!. e.g.: 3210 is 0*1! + 1*2! + 2*3! + 3*4!. Also 4321 + 1 = 10000 which is 1*5! = 120 in base 10.
These numbers uniquely enumerate permutations for a list of n unique elements and allow one to 'read' the permutation from the number. e.g. [a, b, c, d, e] in permutation 2310 leads to [d, c, a, e, b]. Put a in 2+1th free slot, put b in 3+1th free spot, put c in 1+1th free slot, put d in 0+1th free slot, put e in the last spot.
This means we can map all integers in [0, n!-1] to a unique permutation (via modulo-ing and subtracting successive factorials), and back again quickly and efficiently (though arbitrarily sized integers (e.g. in python or haskell) make this much easier than small integers - 150! is like 110 bytes long as a raw integer)
Finally a sanity check: for a list of k elements we have a maximum count of (k-1) (k-2) ... (k-k+1) where each bracketed expression is a digit. If we add 1 we end up with 1 followed by k zeros, which corresponds to k!, and so our maximum integer is 1 less at k!-1 which is what we expect :)
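A sketch of the factoradic mapping (my own illustration using the standard Lehmer-code digit convention, which differs slightly from the example above, but the idea -- map permutations to integers in [0, n!-1] and back -- is the same):

```python
from math import factorial

def perm_to_int(perm):
    """Map a permutation of distinct items to an integer in [0, n!-1].
    Digit i counts how many still-unplaced items are smaller than perm[i]."""
    items = sorted(perm)
    n, value = len(perm), 0
    for i, x in enumerate(perm):
        idx = items.index(x)
        value += idx * factorial(n - 1 - i)
        items.remove(x)
    return value

def int_to_perm(value, items):
    """Invert perm_to_int for a collection of distinct items."""
    items = sorted(items)
    perm = []
    for i in range(len(items), 0, -1):
        idx, value = divmod(value, factorial(i - 1))
        perm.append(items.pop(idx))
    return perm

perm = ['d', 'c', 'a', 'e', 'b']
code = perm_to_int(perm)
assert 0 <= code < factorial(5)
assert int_to_perm(code, ['a', 'b', 'c', 'd', 'e']) == perm
```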
I think freedom and universality must have some deep connection.
If you have the freedom to do something, that means -- wrt that context -- you can do whatever you like. You're unconstrained; that aspect cannot (or at least should not) be a bottleneck.
That gives you some unboundedness wrt method, or content, or whatever the freedom relates to. Freedom of speech removes a bound on speech. Freedom of thought removes a bound on thought.
Freedom enables universality; and without certain freedoms, you cannot have certain universalities.
I thought that freedom and thinking were like orthogonal; both necessary for philosophy and good life, but neither sufficient. Now I suspect there's more there than I realized.
Climbing gyms (and everything else) have just reopened (for the vaccinated) in Sydney, following a ~3.5 month lockdown.
Now that climbing is frequent and regular again, I've been reconsidering climbing goals.
Before lockdown, my goal was to send every new grade 3[1] (out of 6) problem, every week, for 6 weeks (I'd done 5 weeks when lockdown started).
6 weeks is a full rotation, so achieving this means that I would have consistently been able to project any purple in 2-3 sessions.
The purpose of this goal was to emphasise consistency over results (better to be capable of any grade 3 problem than some grade 3 and some grade 4).
The current goal I've settled on is to send every grade 3 climb (in the gym) in a week -- there's about 24 or so (give or take depending on what gets set).
I sent 10 this morning (over about 2hrs).
I don't think I'd have been able to do that if I hadn't done some outdoor climbing during lockdown, tho.
So far, so good.
I have been going through them methodically, with a few exceptions (picking ones I thought I could do towards the end of the session, for example).
So far no grade 3 has been particularly challenging, though crowd-beta has helped a bit.
Major bottleneck atm (session-to-session) is probably endurance -- going to monitor that and take corrective action if it doesn't improve this week.
[1]: the problems are color-graded, so grade 3 of 6 is purple, and 4 of 6 is black. there's no precise conversion to V-grades, but I guess purple is V2-5 and black is V4-? (6 or 7, maybe).
I sent just 1 of the remaining 3 purples on Friday.
The current goal I've settled on is to send every grade 3 climb (in the gym) in a week
WRT this goal, I could go again this weekend, but I think it'd be better to rest.
That means I'll fail the goal.
What happens wrt the goal then?
The purpose of the goal was to build consistency and mastery of a particular grade.
Or maybe, that's what the goal aligned with -- it wasn't, in and of itself, to master the grade.
Rather, it was a breakpoint that I chose which aligned with mastery.
If I didn't meet it, then it's a criticism of the idea that I've mastered that grade.
Mastering a grade isn't a quality I'd lose unless I stopped climbing, though (or had some injury or something).
So what does failing this goal really mean?
Well, I'm not yet inconsistent when it comes to purples (since I had been sending them all prior to lockdown) -- so more time's needed to determine that.
It is a criticism of mastery though; it means there are still bottlenecks at this grade.
The reason I chose the original parameters that I did -- 1 week to send every new purple for a full cycle -- was that 1 week allowed for reasonable buffer, and a full cycle meant ~25 new climbs.
New routes are set Tuesday-Thursday, so 4 days of the week were the period of stability before the next wave.
Once those new climbs were set, a failure to send one that was taken down is a decisive criticism of both consistency and mastery.
Also, sending all the new routes (before the next wave was set) meant that I'd have excess capacity to try new climbs, or work on bottlenecks.
Even without sending every purple, I still have excess capacity -- different climbs use different resources.
And since I shouldn't lose the ability to send-every-purple if I keep improving, why would I ever stop?
I think mb goals like this are sometimes motivating (they have been to me) but there's a problem if you get invested in them.
That's because failing a goal is, like, something to take personally.
But that isn't what these sorts of goals are really about.
A goal like this is a regular opportunity to take a benchmark -- and mb find a weakness in your abilities that you didn't know was there.
Knowing about that is a good thing!
If you don't know it's there (or if you know and ignore it) then it becomes a thing you can't rely on.
That's a problem if it ever needs to be foundational to other, more advanced knowledge.
(Like, it's a dependency.)
This is consistent with (and I think related to) a different idea: achievement of goals is pointless if you're not challenged.
There's no point in me setting a goal if I know it's already too easy.
The achievement is hollow.
Moreover, if there are always meaningful goals to choose -- meaningful problems to solve -- then choosing goals that are foregone conclusions must be meaningless.
My guess is that -- for some projects -- it is moral to choose meaningful goals, and immoral to deliberately choose meaningless goals when meaningful ones could be chosen instead.
(Sometimes those projects/decisions only impact a single person, in which case they're probably amoral -- that's okay. Morality is about interpersonal harm, and people are free to live their own lives (wrt themselves) how they wish. People are also under no obligation to maximize morality -- just to not act immorally -- so provided there's no malice, most goals are okay.)
Getting back to purples: I think new climbs is an important breakpoint, so I'm going to monitor that going forward.
I'm also thinking I'll adopt the send-every-purple-every-week goal again, indefinitely -- it's a useful warning sign of bottlenecks, and means there's some variance in my excess capacity (which I can use to my advantage).
Maybe a good breakpoint to re-evaluate at is some level of consistency with blacks, like sending >50% each week.
That seems reasonable atm.
Addendum by
u/max
after
2021-10-23 03:59:29 UTC
7 days
I was so very wrong about morality and interpersonal harm.
I've read Galt's speech.
In n/10017 (the parent of this post) I said:
Sometimes those projects/decisions only impact a single person, in which case they're probably amoral -- that's okay. Morality is about interpersonal harm, [...].
I was so very wrong about this. Morality does concern one's actions, since some actions are right and some are wrong. But morality is not about harm (interpersonal or otherwise), at least, no more so than it is about rocks.
I am starting to understand.
I'm not sure enough, yet, to say what morality is about -- I could try, but I don't want to rush it. I want to know it before I claim to know it. Soon, I will be sure enough.
I'm grateful that JustinCEO (on the CF forum) noticed this part of n/10017 and challenged me on it, and I'm grateful that he and ingracke found it worth their time to discuss it with me.
I am grateful for Ayn Rand -- her life, her works, and most of all: her mind. Right now, I'm grateful particularly for Atlas Shrugged.
I have a special gratitude for Elliot, not only for his participation in the above-linked thread (among many other things, too many to list here), but also -- and, right now, primarily -- for his labor, dedication and ideas that safeguard the closest thing I've ever known to a sanctuary.
Why do I express my gratitude? I admire their virtues. Why am I grateful? Their virtues have allowed me to profit, and I will continue to. And I know that they have and will, too.
Of those two problem purples, I sent one on Monday evening, and one on Thursday evening.
There were 6 new purples this week (3x set on Tuesday, and 3x on Thursday). On Wednesday morning I sent the 3x new purples that were set on Tuesday, and on Thursday evening I sent the 3x purples that were set that day (and 2x of the blacks that were set that day, too).
redlining in a controlled env can be useful for like measuring progress or something, but that shouldn't be the norm
by maintaining excap one can ensure there's room to move -- buffer. that's important while learning b/c you're focusing on creating new organization of knowledge and reorganizing existing knowledge. trying to learn without excap is like trying to do tetris with no v-space. with lots of v-space tetris is manageable, but it gets hyperbolically harder as you approach the top.
re long-running learning conflict: the more exploratory side means exhausting excap quickly (manageable when surveying, but not when trying to complete a project).
this script will speed up (or slow down) video and audio via ffmpeg.
put the following in e.g., ~/.local/bin/mod-video-speed and chmod +x it.
```python
#!/usr/bin/env python3
import subprocess
import sys, re

this_file = sys.argv[0]
filename = this_file.split('/')[-1]
args = sys.argv[1:]

if len(args) != 3:
    print(f'Error: need 3 args but you provided {len(args)}. {args}')
    print(f'\n\tUSAGE: {filename} INFILE OUTFILE SPEED')
    print(f'\n\tINFILE: input file (source)\n\tOUTFILE: output file (destination)')
    print(f'\tSPEED: the speed of the output video in the format `2x` or `1.43x` -- must end with `x`')
    sys.exit(1)

in_file = sys.argv[1]
out_file = sys.argv[2]
speed_raw = sys.argv[3]

speed_pattern = r'([0-9]+\.?[0-9]*)x'
speed_re = re.match(speed_pattern, speed_raw)
if speed_re is None:
    print(f"Error: SPEED argument did not match `{speed_pattern}`\n\nExamples: `1x`, `0.5x`, `123.456x`, `99x`")
    sys.exit(2)

speed = float(speed_re[1])
pts_speed = 1. / speed

print(f"Processing {in_file} and modifying video speed to {speed}, outputting to {out_file}.")

ffmpeg_cmd_list = [
    'ffmpeg', '-i', in_file,
    '-filter_complex', f"[0:v]setpts={pts_speed}*PTS[v];[0:a]atempo={speed}[a]",
    '-map', '[v]', '-map', '[a]',
    out_file,
]
print(f"Running ffmpeg cmd: {' '.join(ffmpeg_cmd_list)}")
subprocess.run(ffmpeg_cmd_list)
```
You will never be punished, censored or moderated based on your ideas. You can disagree with whatever you want, advocate whatever you want, and even flame people. The most anyone will do to you is write words back or stop listening. Messages are neutral ground where everyone is equal; there are no special privileges.
Bitcoin is a disruptive technology, and since time immemorial disruptive technologies have challenged our existing theories and demanded improvement. I'm not going to beat around the bush trying to make Bitcoin conform to our existing schemas. We need to rethink what makes types of money that particular type; not look into why Bitcoin can function as a currency - that is already well understood. I'll outline what I think are the important constituents of money that help differentiate them today. We'll then look into how Bitcoin fits in, hopefully in such a way that convinces you Bitcoin is truly novel.
Broadly, the three well understood forms of money are as follows: fiat money, commodity money (CM), and commodity-backed money (CBM). People will often separate the former with the latter two based on the notion of 'intrinsic value'. While we can agree that all three have value, we also know that the value of fiat money is derived from legislation: not an intrinsic property.
While this categorisation schema works for the above, I believe there is a more pertinent distinction to be made: that of non-monetary use-value; i.e. the money type in question has some primary purpose other than to simply exchange value between parties. The primary use of fiat money is to exchange value, we can therefore see that not only does fiat not have any non-monetary use-value, but that the use-value of fiat money is fundamentally bound to the exchange-value. On the other hand, we see that CM and CBM derive their use value not just from exchange, but also from the intrinsic properties of the commodity itself. Therefore we can categorise these three forms of money as before, but with the determinant being non-monetary use-value.
The second property I'd like to introduce is the concept of 'deferred value', and 'direct value'. Ultimately you can think of these as 'an IOU', or 'not an IOU' respectively.
Deferred value is seen in both commodity-backed money, and fiat money. Both are not so much valuable because of what they are, but because of what they entitle the user to. This is easy to see in the case of commodity-backed money, such as a gold certificate, as it is redeemable for a commodity (in this case, gold). However, in the case of fiat money, it is not directly redeemable for any particular commodity, but is legislated that it has value. Put in another way: fiat money entitles the user to some value. We should consider that if the legislatory environment surrounding either of the above were to collapse, they would respectively become useless. Commodity money can never truly reach zero value, but both CBM and fiat can, and so participating in such systems is like passing a hot potato; it's okay as long as you're not the one to get burnt. This is part of the nature of deferred value.
On the other hand, direct value implies that the received value is intrinsically tied to the received object. In the case of commodity money - say, rice - it is trivial to see that the value of rice (that you can eat it) is bestowed to the recipient immediately upon receipt.
By viewing the property of non-monetary use-value in light of deferred or direct value, we can see that while a gold certificate may have no particular non-monetary uses in and of itself, by acting as a method of deferred value it can inherit non-monetary use value from the commodity it is tied to.
As a visual summary, here is what we've talked about so far:
Enter Bitcoin: the rule breaker, the status quo usurper. You might have noticed there is one particular combination of the above properties that has not been covered by traditional monetary systems. It's tempting to fill in the blank with Bitcoin; but we should remember that Bitcoin is merely an example of this missing puzzle piece, just as the US Dollar is just an example of fiat currency.
Definition: Factum Money
A stand-alone money system in which each unit, by its intrinsic properties alone, necessarily holds a non-zero value.
Rationale:
Factum loosely translates from Latin as 'done'; a play on words, as fiat loosely translates as 'let it be so'. Factum also lends itself to 'fact based currency': because of each individual's knowledge of the system, it is able to be used to exchange value; an appropriate description.
To gain an intuitive understanding of what this really means, let us diverge from the topic for a moment to discuss aliens. (Bear with me!) It's an assumed property of the universe that no matter where you are spatiotemporally the number pi will be constant. You can express this in various ways; but the simplest is that the ratio between the radius and circumference of a perfect circle is always constant. I suggest that Factum Money has a similar property: that regardless of position in space and time, society, culture, species, or any other physical differences, true Factum Money is able to transfer value. Doubtless each instance of factum money can have local environmental factors that prohibit its use; Bitcoin is known to lack quantum computing resistance and will fail if SHA256 is broken, just as Litecoin relies on Scrypt. However, due to the particular conditions of today, Bitcoin is able to transfer value, and holds the mantle of 'Factum Money'.
Filling in the blank, we now have a table that looks like this:
There's a great deal left to explore within this idea; of particular interest (which I'll explore in a follow up post) is what this actually means for Bitcoiners, and how we can predict and take advantage of this model. Cryptocurrency has many facets that have so far only been theoretically explored, in particular perfect money laundering, and distributed exchanges. I'll largely be exploring distributed exchanges in my next post.
I think I've changed my mind about Bitcoin as money. Particularly: it's a fiat currency (factum money is just fiat in disguise), and I don't think it's sound money anymore. Just because it doesn't have government backing doesn't mean that it isn't fiat.
The reason for the change of mind is that I've started reading Theory of Money and Credit by Mises (TMC). I don't think Mises would be in favor of Bitcoin; how does one answer what the origin of Bitcoin's value is? -- I don't think there's a substantive answer.
Related discussion: https://discuss.criticalfallibilism.com/t/crypto-currency-fraud/128/18 (this post and on).
I am planning on writing a post about this soon (after I finish and understand TMC).
Elliot criticized this post for not giving proper attribution.
I started reading TMC entirely because Elliot referenced it in the original 'related discussion' link, and Elliot's original post was the bit that convinced me it was worth reading TMC. The idea that Bitcoin (or anything) trying to be something other than commodity money was bad: I got that from Elliot. I was especially interested in reading TMC because I first learnt about Mises through articles and discussion around Bitcoin-as-money (prior to writing n/2002), and Elliot's comment implied there was a conflict between Mises's ideas and the reasons I thought Bitcoin was sound money (I now agree there is a conflict there). I thought (and, IMO, many Bitcoiners did and still do think) that Mises would have liked Bitcoin -- I was convinced otherwise by the first ~7 chapters of TMC, which seem pretty cut-and-dry WRT the important qualities of the origin of the value of a money.
Additionally, wrt other unattributed ideas: re-reading the original factum money post, most of the foundational ideas (like non-monetary use-value) seem to be lifted from TMC, or at least they're descendants of ideas in TMC and elsewhere too (IDK enough about the history of econ ideas to know what they're descendants of, exactly). And, ofc, I was way less rigorous with them than Mises is in TMC.
There's also at least one thing I said that Mises refuted in TMC; I haven't closely re-read the OP to know if there are more. That mistake is a key point behind my argument in n/2002: that Bitcoin isn't fiat, which is the excuse to invent factum money. I now think the entire bottom row of the table in n/2002 is all fiat.
A Light URL Minifier using Flask and Redis
Lambsfry is a URL minifier like bit.ly or ow.ly.
GitHub repo
Demo - u.xk.io
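The actual Lambsfry code is in the repo above; a minimal Flask + Redis minifier follows the same general shape. This sketch is my own illustration (endpoint names and the Redis key scheme are made up): a counter in Redis, a base-62 slug, and a redirect route.

```python
# Sketch only: a tiny Flask + Redis URL minifier, not the actual Lambsfry code.
import string
from flask import Flask, request, redirect, abort
import redis

app = Flask(__name__)
r = redis.Redis()  # assumes a local Redis server
ALPHABET = string.ascii_letters + string.digits

def encode_id(n: int) -> str:
    """Turn an integer counter into a short base-62 slug."""
    slug = ''
    while True:
        n, rem = divmod(n, len(ALPHABET))
        slug = ALPHABET[rem] + slug
        if n == 0:
            return slug

@app.route('/shorten', methods=['POST'])
def shorten():
    url = request.form['url']
    slug = encode_id(r.incr('counter'))   # monotonically increasing id
    r.set(f'url:{slug}', url)
    return request.host_url + slug

@app.route('/<slug>')
def expand(slug):
    url = r.get(f'url:{slug}')
    if url is None:
        abort(404)
    return redirect(url.decode())
```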
Note: this post is easier to read on GitHub due to the fixed width ascii diagrams.
In my [first post](FactumMoney.md) in the factum series I looked at Factum Money, a new category of money which has no non-financial use (or if there is a non-financial use, it is a distant second when ranked by utility), and has no extrinsic requirement to hold value. That is to say, the main utility of Factum Money is as money, and we can see this by looking at the properties of said money and nothing more.
Commodity money and commodity backed money have common non-financial uses and often this is their primary use, and it is thus trivial to see they are not factum money. Fiat money (such as common government issued monies in use today) has no substantial non-financial use, however, by looking solely at the construction of fiat money we can see that it is not guaranteed to have a non-zero value. Most (if not all) of the time fiat money will acquire value through the exercise of authority, which almost ubiquitously takes a legal form.
Bitcoin, however, satisfies both requirements; it is a purely financial instrument (though its technical implementation leads to other uses, like storing data in the blockchain) and one can see that Bitcoin has a non-zero (or should we say intrinsic) value by a pure examination of the internal mechanism.
Markets and Exchange
Money is a multidimensional problem. Sending value to other parties is all good and well, but there must exist a means of exchanging between monies; without One World Currency there will exist some merchants who won't accept your preferred flavour of money and some clients who don't have any of the flavours you accept.
Traditionally currencies are exchanged in an environment outside the system of money itself. They are walled off from the outside and require specific entry and exit procedures. These procedures are a method of deferring value (mentioned in the first post in the factum series). Take the best known BTC/USD exchange: MtGox. The deposit process, for both USD and BTC, involves P1 (the first party) relinquishing their funds, and in return they are issued fungible tokens that operate within the MtGox environment, but are incompatible with any other payment network. These tokens have the unique property (belonging to none of their parents) of being exchangeable for other tokens (denominated in a different currency) on the market provided by MtGox. Thus tokens from P1 are swapped for tokens from P2 (the second party), and both parties direct the value back into their original forms, and the exchange is complete.
By issuing tokens for USD or BTC, MtGox are deferring the value. Put another way, the tokens only have value because of something we know about them outside of their construction: there is a legal gateway allowing value to flow into and out of the MtGox zone.
Because of the similarity in the constructions of tokens in these markets, and the construction of fiat money and commodity backed money, I feel it is appropriate to label these systems Fiat Exchanges. Two forms of fiat money are being traded. No real BTC is traded on the MtGox exchange, only MtGox-BTC -- a petty shadow by comparison.
There exists, however, another form of exchange. In this form the monies themselves are exchanged; not tokens in a segregated environment. When the two of us exchange gold coins for silver, the exchange takes place in an environment native to the subjects of the exchange (in this case the real world). A factum exchange that operates using Bitcoin would satisfy the same criterion; the exchange takes place in an environment native to the subjects of the exchange (the blockchain). Transactions on the Bitcoin network and transactions involved in the exchange are one and the same (in this hypothetical exchange system).
These exchanges, whether they involve real world objects or cryptocurrencies, will be labelled Factum Exchanges. In these cases, the ability for exchange to occur is built into the payment networks -- in the case of gold it is the physical transfer, in the case of Bitcoin it is a transaction on the network. When a factum exchange operates over factum money (and possibly some fiat money) there should be no possibility of any particular exchange being censored, blocked, regulated, etc. There should only be two possibilities: the exchange fails (and the original money is returned) or the exchange succeeds (and both parties swap as agreed).
It is possible to host a factum exchange over fiat currencies. An exchange in person, or through banking networks, can be called a factum exchange because value is not deferred from the subjects of the exchange.
A Note on Cryptocurrencies
Although Bitcoin is a form of factum money, not all crypto currency needs to be. One can easily imagine a crypto currency minted by a central bank, and although mining might be decentralised, supply of the money is still regulated by the bank (even with distributed wealth generation). This would count as fiat currency because there is zero assurance (excepting legal agreements) that the currency won't be inflated through the roof. If the private key were compromised this would be an inevitability.
Cryptocurrency could even be a form of commodity backed money, but we shan't explore that here.
Exchange and Distributed Exchange
The Bitcoin community has felt the thorns of regulation for some time. Traditional methods of currency exchange are stifling growth and promoting unhealthy markets (such as we're seeing in the exchange differential between USD/BTC markets on MtGox, Bitstamp, and BTC-e). The Bitcoin community owes it to itself to build tools to better allow for this exchange, and to investigate what is going wrong currently (so we know how and what to build, and the purpose of such tools).
The current system looks like this: (using MtGox as an example)
The regulatory efforts are applied to the fiat entry and exit points, with = indicating where those stresses are felt. The MtGox infrastructure requires value to be deferred into compatible tokens, which is the choke point in this system. If direct person-to-person transactions in USD were used instead, the regulatory pressure would fall away. Unfortunately this isn't able to be checked from within the MtGox infrastructure, and thus relies on manual verification, in turn allowing for a number of attacks that make the system too burdensome to use.
The First Distributed Exchange: Ripple
A system such as Ripple looks like this:
It is nearly identical to MtGox, and involves using a payment processor to cash in and out. Bitstamp (BTS) was chosen here as it is a 'gateway' into the Ripple system for both Bitcoin and USD. It can be replaced with any other gateway, though additional steps are required. Ultimately this provides no advantage over the traditional model (see MtGox, above) besides that there are more options to cash in and out (though you have to exchange Bitstamp-USD for OtherGateway-USD as the two aren't fungible). I guess you'd say it's better than MtGox, but not substantially.
Cross Chain Trade
This algorithm is taken from the Contracts page.
Cross chain trade looks fairly simple, but in reality there is no market built behind it, so much of the communication is manual. This might be solved with some distributed layer over the top, but there is still the issue of P1 keeping X secret, locking funds away forever. There has been a suggested solution to this that involves timeout periods. This makes things a little more difficult:
In this case the trade will never fail: after R2 becomes active it is unsafe for P1 to provide the secret X. Thus, if P1 is unable to redeem E2 she can wait for R1 to become active. By placing the burden of providing the secret on P2, the transaction with the first reversal is guaranteed to occur first.
However, the cost of using this method is great; many confirmations are required for individuals to be certain they are safe executing the next step, and there is a great deal of time for either party to renege on the transaction after the terms of exchange are set. Furthermore, a reorganisation on one chain could leave one party with both sides of the transaction.
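To make the hash-lock/time-lock idea above concrete, here is a toy, single-process model (my own sketch; the names like EscrowOutput are made up, the timeout arrangement is one common choice rather than necessarily the exact variant described above, and a real cross-chain trade uses scripts/transactions on two separate chains). It shows the two relationships that matter: the counterparty can claim with the secret X, and the owner can reclaim once the reversal (R1/R2) is active.

```python
# Toy model only: simulates hash-locked, time-locked escrow outputs in one process.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class EscrowOutput:
    """Funds claimable by `counterparty` with the secret X, or refundable
    to `owner` once `refund_time` has passed (the reversal transaction)."""
    def __init__(self, owner, counterparty, amount, secret_hash, refund_time):
        self.owner, self.counterparty = owner, counterparty
        self.amount, self.secret_hash, self.refund_time = amount, secret_hash, refund_time
        self.claimed_by = None

    def claim(self, who, secret: bytes, now: int) -> bool:
        if self.claimed_by is not None:
            return False
        if who == self.counterparty and h(secret) == self.secret_hash:
            self.claimed_by = who      # revealing X on-chain lets the other side reuse it
        elif who == self.owner and now >= self.refund_time:
            self.claimed_by = who      # timeout reversal: funds return to the owner
        return self.claimed_by == who

# P1 generates the secret X; both parties lock funds against H(X).
X = b'secret chosen by P1'
e1 = EscrowOutput('P1', 'P2', amount=1.0, secret_hash=h(X), refund_time=48)   # on chain A
e2 = EscrowOutput('P2', 'P1', amount=50.0, secret_hash=h(X), refund_time=24)  # on chain B

assert e2.claim('P1', X, now=10)   # P1 redeems E2, revealing X before its reversal is active
assert e1.claim('P2', X, now=12)   # P2 learns X from the chain and redeems E1 in time
```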
Other Distributed Fiat Exchanges (Hypothetical)
None of these have been completed to my knowledge. They typically allow the creation of assets backed by some value (fiat), or offered IPO style (possibly fiat or factum).
Typically, a generic fiat exchange (GFiX) will take the following form:
Existing and Future Factum Exchanges
Mastercoin
Note: Mastercoin is a layer over Bitcoin (data stored in TXs on the Bitcoin network) and thus blocks occur at the same time. Note: The following may be incorrect. I've scraped together some information but details of the spec are pretty thin.
In English:
One issue is the buyer (of MSC) is able to renege on the trade before it is complete; this, however, is an issue with any asynchronous exchange. The process here is simpler than other asynchronous models because it is only possible to trade between Bitcoin and Mastercoin, and Mastercoin has interchain awareness to Bitcoin; when an order is filled (and paid for) the Mastercoin chain can release funds automagically. A 'pledge' system could easily be implemented to indicate one's intent to purchase. A market system could be implemented to remove the need to choose an order to fill.
Marketcoin
Marketcoin is a hypothetical currency/market I've begun to set out here
In English:
Currently, P1 can technically renege on the trade before transacting to P2. However, the order P1 places on the Marketcoin network (this is one of many possible implementations) requires a fee to be paid (in MKC) that is proportional to the change of value in the last 24 hours (as that is the escrow length). If the trade times out the pledge is given to P2, so P2 is guaranteed to either make the trade, or receive compensation better than the best case scenario considering the last 24 hours. That is a powerful incentive to trade. This requirement of pledges is not required in symmetric exchange. Proof of payment is another issue; it may be possible to automate this process with deep scanning of the foreign blockchain, but this could easily be too intensive a task, and requires marking transactions on the foreign chain. Manual proof of payment is sufficient at this stage, and with software automation, not a burden at all.
Methods of Operation for Distributed Cross-Chain Exchanges
Asymmetric Exchange
As it stands, Marketcoin is asymmetric.
An asymmetric exchange has different rules on both sides. In the case of Marketcoin, it hosts the order book for both currencies, and Altcoin is unaware of it. This means asymmetric exchanges are backwards compatible (they can be made to 'read' a great many forms of blockchain), so a web between all currencies can be created with only asymmetric exchanges.
As mentioned before, the Marketcoin graph:
Using Marketcoin to move between currencies other than Marketcoin is a little more of a burden as the process must be repeated:
An ideal asymmetric exchange built into two cryptocurrencies looks like:
What is important is that this is the only way this could happen. Bitcoin hosts the exchange and Litecoin 'reads' the exchange from the Bitcoin blockchain. This is about half-way to a symmetric exchange where both coins have both functionalities. We explore this graph in more detail later on.
Ignorant and Knowledgeable Asymmetric Exchange
There is a clear difference between Marketcoin and an ideal asymmetric exchange. This is because in an ideal case, both chains have interchain awareness and can verify transactions on the other chain. In the case of Marketcoin and an existing cryptocurrency (say Bitcoin), the foreign coin is unable to read the local market (hosted entirely by Marketcoin). I call this ignorant asymmetric exchange, because Bitcoin is ignorant of the fact it is even occurring.
In the counter-case that a foreign chain is aware of the local chain, we can offload buy orders to the foreign chain, which can then enable some escrow-like function to guarantee trades are atomic. I call this knowledgeable asymmetric exchange. However, to guarantee determinism in such an exchange, buy orders will only be matched once they are learnt about by the other chain, and can only be cancelled with the permission of the other chain (they will either be cancelled or a trade will occur before the cancellation takes place).
Combining Fiat Exchange with Factum Asymmetric Exchange
Consider a distributed Generic Fiat Exchange (GFiX). Often such an exchange will have a core cryptocurrency operating beneath the user-defined assets (think Ripples, BitShares, Freicoins, etc) and so should be compatible with a distributed cryptocurrency exchange. Then presume this chain also supports ignorant asymmetric exchange. In such a case it would be possible to buy arbitrary assets on the GFiX using Bitcoin in a trustless, multistep manner. In the best case where Xcoin supports asymmetric cross-chain trade in a similar way to Marketcoin and an asset market, you would experience the following process:
Symmetric Exchange
A symmetric exchange is one where both networks run identical rule sets. Each hosts one half of two exchanges - a 'bid' and an 'ask' order book. Consider MKC and XC, two crypto coins. In the case of MKC, the 'ask' order book is for XC/MKC (the MKC chain is authoritative) and the bid order book is for MKC/XC (the XC chain is authoritative). Likewise, the XC chain hosts an 'ask' order book for MKC/XC (XC chain authoritative), and a 'bid' order book for XC/MKC (MKC chain authoritative).
Technically this is equivalent to two knowledgeable asymmetric exchanges running on both chains.
A 'brief' explanation of one possible symmetric exchange
Each market's deterministic execution is decided by the authoritative chain, based on best-effort updates shared between chains.
Side note: If you imagine a cryptocoin hosting 10 markets, not every market needs to be updated every block; in addition, preventing updates increases liquidity at the cost of transaction time. For fledgling, unpopular markets this may well be a positive thing, and evidence in favour of only including market updates for the markets you care about - after all, you'll need to be running those clients. In addition, a very flexible update method such as this lends itself to better compatibility with block chains progressing at different rates.
These best effort updates relay specific information about both order books.
In the case of the 'ask' order book - the market the local coin has authority over - all known but excluded bid updates (from the foreign chain) are amalgamated into a chunk and deterministically matched against the 'ask' order book. The details of the trades made are recorded and packaged into a market update, which is then relayed back to the foreign blockchain in a best-effort fashion. Market updates are recorded in the merkle root (or a similar device) and secured in a chain so one cannot be omitted. These are in turn recorded under the full block chain headers of the foreign chain. The deterministic matching happens over one market update. If a local block happens to contain three foreign market updates then all three are lumped and evaluated against the local order book deterministically. This leads to the fairest exchange between all parties concerned. The corresponding market update, when relayed back to the foreign blockchain, alerts the chain which cross-chain escrow transactions may be released and in what proportions (if there is a change address, for example).
In the case of the 'bid' order book - the market the local coin has no authority over - once orders are made the coins are locked and in escrow. All new or changed bid orders are added to the market update. After this update is recorded in the blockchain, in order to maintain foreign blockchain authority, if the user wishes to cancel the order they will have to wait for the foreign chain to recognise and acknowledge the request (one must broadcast a cancel message, record this in a market update, have the foreign coin register this in a market update, and then have the local blockchain recognise that it is finally safe to release the coins from escrow). During this time it is possible the order may be filled and the cancel message will fail.
Overview
Let us examine an ideal distributed exchange which operates exclusively on cryptocurrencies; and the path of value (for a Bitcoin/Litecoin trade) looks like:
That is to say, in the first case, an order is placed on the Litecoin network (ask for BTC or bid LTC) and recorded in a block. At some other point an order is placed on the Bitcoin network (bid BTC or ask LTC) which overlaps with the previous order on the other blockchain. When this second order is recorded in a block, it is known to have matched with the order on the Litecoin network (deterministically) and is automatically sent (or assigned) to the receiving party. At the next block on the Litecoin network the trade is learned of and the other transaction is performed so the Litecoins are transferred to the receiving party.
Moving from Xcoin to Zcoin through Ycoin - all of which support symmetric exchange:
By voting on which market updates to accept (from which other cryptocurrencies) and which markets to run, it will be possible to create a dynamic mesh of markets, forming many possible paths between any cryptocurrencies. (In the worst situation, a coin can run an asymmetric market and use that to trade into and out of foreign block chains.)
Finding Efficiency
Every novel technology developed will be hindered by regulation in a unique way. The 'directness' of transferring fiat to crypto is a worry for a number of people (the same people as are behind the steering wheel of regulation). They feel they need to remain insulated from the untested Bitcoin. Perhaps they would feel more comfortable with a bridging technology, such as a cryptocoin pegged to a parent currency, that just so happens to natively interact with a distributed market.
Let's have a look at that, shall we?
This, however, will likely not be the first iteration. It is elegant, quick, and efficient, but there are likely to be many sticking points before that can be realised. Firstly, Bitcoin doesn't support knowledgeable asymmetric exchange, and there is a strong possibility that too much regulatory pressure will require CRYPTO-AUD to ship without an exchange. Therefore the first iteration might look something a little more like:
Not as pretty. Compared to our current fiat exchanges, this doesn't look that appealing. That said, legally CRYPTO-AUD is far more comparable to a traditional payment processing system such as PayPal. By exploring the edge cases of regulation we can help find the inconsistencies and assist its evolution. An attack on Bitcoin by a government will necessarily involve shutting down the flow of fiat into Bitcoin as much as possible. One strategy is to create useful technologies that are so similar to existing tech (PayPal, Visa) that we can stand on some very resilient legal precedents and standards if these systems are challenged. These middle-ground crypto networks, then, cannot be made illegal without negatively affecting the current corporate monopoly, because they are designed to resemble them so closely. Whether that's possible is another matter.
Conclusions
We know that regulatory pressure can be applied to restrict a flow of Dollars/Yen/Euros anywhere. While this investigation into distributed exchange didn't identify how to expunge regulatory pressure, it did yield at least one other method of allowing value to move from fiat to Bitcoin: distributed cross-chain markets for cryptocurrency. One option is a PayPal-like network built on Bitcoin, with units minted and destroyed in the same fashion as PayPal balances (deposit -> mint, withdraw -> destroy). Ultimately some legal entity must be responsible for the cash-in/out process, but this can now safely be entertained without needing to worry about the regulation surrounding running an exchange - provided there is a cross-chain market out there already.
In addition to this, we've looked at the minimum ideal for a market and found that it is readily achievable with knowledgeable asymmetric exchange (better yet, symmetric exchange). We've briefly looked at one way this can be achieved, providing a fast, resilient, cross-chain market that operates as a close-to-perfect market. We have not investigated this suggestion deeply, though (don't worry, that's coming soon). By extrapolating this across all coins (or even just a few) we can see an interconnected web of markets bridging the gaps between chains.
Where To?
For cryptocurrency to succeed a distributed exchange must be developed linking them together. By leveraging interchain awareness we can build factum exchanges that operate in the domain of the currency itself. Atomic exchange that is entwined in cryptocurrency is a strong motivator for adoption, and by creating a web of markets the canonical boundaries between *coins will be removed.
Efficient atomic cross chain trade is a necessity for the long term viability of cryptocurrency. Using an asymmetric exchange, any user of a *coin can move value into an alternate chain, letting them take advantage of any new innovative features produced on other chains.
Found an old wordpress blog where this was hosted: https://xkio.wordpress.com/2013/12/22/introducing-factum-exchange/
Originally published in the Bitcoin Magazine on January 15th, 2014. Reposted here on 1st April, 2014.
Proof of Work (PoW) is the only external method of powering the distributed consensus engine known as a blockchain. However, at least two alternatives have been proposed, and both are internal to the network (Proof of Stake (PoS) and Proof of Burn (PoB)). This is important because they use virtual resources obtained within the network as a substitute for PoW, meaning these methods consume virtually no energy, which has been a concern of late. The energy figures suggested will only occur in a system of absolute equilibrium (the market is saturated with the most efficient ASICs that are possible to produce), though even if the reality is one or two orders of magnitude lower than predicted, it is still alarming and still must be addressed.
Proof of Stake and Proof of Burn
Both PoS and PoB use similar mechanisms. The auditor makes a sacrifice - in the case of PoS it is coindays (which are difficult to acquire; also a good measure of economic activity), and in the case of PoB it is coins themselves (which are also difficult to acquire). Ultimately, any Proof of {something} must require a cost, whether that be electricity, coin days, or coins themselves.
Herein I suggest a fourth method, very similar to how a term deposit works (in that dusty old banking system).
Monetary Velocity and Value of Money
The equation of exchange tells us that as velocity decreases prices should decrease, and when prices decrease the value of each unit of currency increases - this is only the case provided the monetary supply remains constant. In a late-stage currency we would expect a relatively low level of monetary inflation / deflation (as opposed to price inflation / deflation - an important distinction), so we'll set aside the concern about a constant monetary supply.
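For reference, the equation of exchange with the usual symbols:

{% highlight text %}
M * V = P * Q

M = money supply, V = velocity of money,
P = price level,  Q = real output (volume of transactions)

With M and Q held fixed: if V falls (coins are locked up and circulate less),
P must fall, and each circulating unit of currency buys more.
{% endhighlight %}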
In a Proof of Stake fuelled network one is required to hold currency for some time before it is able to be used to mint a block. Because it cannot be used in a transaction it is essentially removed from the monetary supply as it is unavailable for a period of time (not technically true because one can spend it up until it is used to mint a block, but the economic effect is the same either way in terms of velocity). Because the money supply will effectively (but doesn't actually) decrease, prices should also fall by a small amount. One can imagine the network saying “Here's a small reward for holding on to your coins for a while and making us a little more wealthy, in addition to auditing and securing the network.”
Proof of Burn is used in a similar fashion: coins are destroyed in an unspendable transaction which is not immediately obvious to the network (the author suggests using a P2SH address). At some later date this is revealed and used to create a block. The miner is then rewarded with new coins and/or transaction fees (presumably more than the coins they've burned, else they've made a loss). This is like the network saying “Here's a reward for permanently removing some coins from the supply and making us a little more wealthy, in addition to auditing and securing the network.” Huh, that sounds familiar.
While they may sound very similar, there are a few differences in terms of public knowledge. In both cases it is unknown how many coins have been left waiting in the wings (similar to how it is impossible to tell how many bitcoins have been lost or abandoned over the years), though PoB provides a little more specificity allowing us to determine a narrower range of candidates (unspent P2SH addresses) than PoS (which includes all unspent transactions). The volume of coins in each case is also an indicator, as in both cases there will be some effective minimum required. However, PoB implies the number of coins burnt cannot be set in advance as both the date of redemption and volume of burnt coins are unknown. PoS does not destroy coins and so any extra volume of coindays destroyed is less important. These differences are subtle, but may become important as the systems are explored more deeply.
Economically speaking, the basis of both proofing systems relies on relinquishing the ability to use coins for some time. In PoS this is voluntary and the funds are spendable at any time, whereas PoB uses a rather more permanent operation: the user commits immediately to mining a block in the future, regardless of whether it is profitable or not (provided they meet the difficulty requirement, else the coins may be lost forever; perhaps pooled mining might alleviate this concern, though), but the length of time till that utility will be used is unknown. In this case, as the ability to use coins is relinquished, there is no possibility they will increase monetary velocity and thus should (in theory) increase the value of each coin in the total supply.
Proof of Deposit
Proof of Deposit (or PoD) occupies a middle ground between the two methods. Simply put, PoD blocks have a difficulty proportional (or equal) to the amount of coins that must be offered for ‘deposit’ and have a known block reward. Deposited coins remain untouchable for some length of time and the block reward is delivered to the miner (either immediately or over a period of time like a dividend or interest payments). As there is one deposit per block there are a limited number of deposits available each year, and if deposits are appearing too fast then the return must be too high, so the difficulty is increased (which implies the return is lowered) and thus demand decreases. Our personified network might once again say “Here’s a small reward for temporarily removing your coins from the supply and making us a little more wealthy, in addition to auditing and securing the network.”
That’s getting awfully familiar…
Why yes, it is. This should come as no surprise, though. What resources are there internal to a currency besides the currency itself? Economically speaking, there’s very little substantive difference between these three methods, and their monetary implications are very similar; the main difference is the physical actions that help it propagate. If, however, humans are psychologically biased to one way over another, then those physical actions are exactly the things that will count in a showdown between these proofing methods.
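As a toy sketch of the deposit-difficulty feedback described above (the target spacing, fixed reward, and adjustment rule are illustrative assumptions only):

{% highlight python %}
# Sketch: if PoD blocks (deposits) arrive faster than targeted, raise the
# deposit difficulty, which lowers the effective return and cools demand.

TARGET_SPACING = 600          # desired seconds between PoD blocks (assumed)
BLOCK_REWARD   = 10           # known, fixed reward per PoD block (assumed)

def adjust_deposit_difficulty(difficulty, actual_spacing):
    # difficulty == coins that must be locked up to mint the next PoD block
    ratio = TARGET_SPACING / max(actual_spacing, 1)
    new_difficulty = difficulty * ratio      # deposits too frequent -> ratio > 1
    effective_return = BLOCK_REWARD / new_difficulty
    return new_difficulty, effective_return
{% endhighlight %}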
Does it even work?
This is really the only important question here. If none of these schemes work, do we have a reason to care? A discerning reader like yourself might have noticed something peculiar about these three methods: you need money to make money. Without internal resources existing the network has no fuel.
Peercoin mitigates this concern by using both PoW and PoS in combination to create new blocks. Over time PoW blocks become less frequent and PoS blocks become more frequent, so it should eventually lead to an energy efficient network (or at least more so than the Bitcoin network). Whether this will pan out or not is difficult to say; the reward for attacking Peercoin is far lower than a well executed attack on Bitcoin, and without an increase in Peercoin’s popularity and/or accessibility we might never discover how easy an attack really is.
Where to from here?
The possibility of a zero energy currency is not something that should go without research, but should also be approached with a degree of scepticism. It has been argued that monetary monocultures contribute to financial instability due to the lower resilience of a homogeneous system (compared to one of high diversity). Is it possible that a reliance on internal states causes instability more generally, even in a currency that anyone is free to opt in to or out of? If so, can we build several different sorts of these systems together to help provide that resilience? Can one network’s security rely on actions in one or several other distinct currencies? These are important economic questions that may have profound consequences for the future of finance; they are novel because systems of this precision have been impossible under legal frameworks, and never before has any person been able to create a truly global currency in their garage. Experimentation is the future of currency, and I am excited to watch it happen.
Every PoW driven cryptonet has a state. The state of Bitcoin (and forks) is the particular set of Unspent Transaction Outputs (UTXOs) at the time - essentially the set of all Bitcoin able to be spent.
When a new block arrives, the usual process to update the state is simple:
{% highlight text %}
Start with S[n,0] (the state at block n).
Apply each transaction B[k] from the new block, in order:

    S[n,k] + B[k] -> S[n,k+1]   for all k in B

The resulting state is the state at block n+1:

    S[n+1,0] = S[n,max(k)+1]
{% endhighlight %}
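In code, that forward step might look something like this minimal sketch, assuming a simplified transaction shape of (spent outpoints, created outputs):

{% highlight python %}
# Sketch: apply one block's transactions to the UTXO set, in order.
# A 'transaction' here is just (spent_outpoints, created_outputs).

def apply_block(utxos, block):
    for spent, created in block:
        for outpoint in spent:
            del utxos[outpoint]              # outputs may be spent only once
        for outpoint, value in created:
            utxos[outpoint] = value
    return utxos
{% endhighlight %}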
However, what happens when a new block arrives causing a reorganisation of the main chain?
{% highlight text %}
          3a← 4a           <-- 3a and 4a are not in the main chain currently
         ↙
1 ← 2 ← 3 ← 4              <-- 3 and 4 are in the main chain

1 ← 2 ← 3a← 4a← 5a         <-- New main chain
         ↖
          3 ← 4            <-- Old main chain, 3 and 4 no longer in the main chain

In this case block #2 was the lowest common ancestor (a pivot point) of the
two competing chains 3a->5a and 3->4.
{% endhighlight %}
The problem of reorgs
Let's presume the distance from the lowest common ancestor (LCA) to the new head is `n`. Bitcoin et al. solve the issue by stepping backwards through time.
Since Bitcoin transactions spend outputs, and outputs may be spent only once, playing the blockchain backwards is trivial:
{% highlight text %}
for each transaction (working backwards):
    remove its outputs from the list of UTXOs
    add the outputs it spends back to the list of UTXOs
{% endhighlight %}
And bam! You can then play time forward from the LCA to calculate the new state. How nice.
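The corresponding backwards step, using the same simplified shapes as the sketch above (and reusing `apply_block` from it; the per-block 'undo' record holding the values of spent outputs is an assumption of this sketch):

{% highlight python %}
# Sketch: undo blocks back to the lowest common ancestor, then replay the
# new branch. 'undo' carries the values of the outputs each transaction spent.

def undo_block(utxos, block, undo):
    for (spent, created), spent_values in zip(reversed(block), reversed(undo)):
        for outpoint, _ in created:
            del utxos[outpoint]              # remove the outputs it created
        for outpoint, value in zip(spent, spent_values):
            utxos[outpoint] = value          # restore the outputs it spent
    return utxos

def reorganise(utxos, old_branch, old_undos, new_branch):
    for block, undo in zip(reversed(old_branch), reversed(old_undos)):
        undo_block(utxos, block, undo)
    for block in new_branch:
        apply_block(utxos, block)
    return utxos
{% endhighlight %}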
What happens, though, when we move to a cryptonet that only operates on balances and doesn't use the input/output system of Bitcoin?
Well, provided we're recording every transaction it's quite simple. A transaction moving `X` coins from `A` to `B` results in `A -= X` and `B += X`. That is trivial to reverse. However, the caveat is that we must record every transaction. Once we start including complex mechanisms within the protocol that produce transactions that are not recorded but simply implied, we can no longer play time 'backwards', as `S[m]` depends on `S[m-1]`, and without knowing `S[m-1]` to calculate the implied transactions we can't play time backwards. Of course, if we know `S[m-1]` we don't need to do any of this anyway, so we're sort of stuck. Examples of this sort of mechanism can be found in the way contracts create transactions in Ethereum and the market evaluation in Marketcoin.

Remembering `S[m-1]` is easy, but what if the reorg is of length 2, or 3, or 10? We can't just remember all the states. So, we can see that we have a problem.
Efficiently remembering states
The intuitive solution (to me, at least) is to know some but not all states at strategic intervals between the genesis block and the current head. When a reorg of length `n` occurs, the network has already committed to evaluating `n` new states. I define 'efficient' here to mean evaluating no more than `2n` new states (in the worst case). Unfortunately, this means we'll need to remember about `2*log2(h)` states, where `h` is the height of the chain head. All the UTXOs in Bitcoin take up a few hundred meg of RAM, so for 500,000 blocks we're looking at no more than 40 states, but that's still ~10 GB of space (by Bitcoin's standards) which isn't ideal. It's unlikely that we'll see long reorganisations, but we'd still be storing half of the figures mentioned above, which, while better, isn't perfect.

One solution may be to record the net implied change of state as the last transaction, but that solution might be more painful than the cure, and requires introducing extra complexity into the network architecture, which I'm against, so we won't consider this option here.
In addition to the above constraint on 'efficient', we also require that for each block building on the main chain we should only have to calculate one new state (the updated current state). This implies that when we step through the blockchain, we only ever forget cached states, with the exception of the new state produced by the next block.
Somewhat-formally:
{% highlight text %}
Current head is of height n.

A[n] = {cached states at height n}

Block n+1 arrives:

assert A[n] is a superset of {all a in A[n+1] s.t. a is not of height n+1}
{% endhighlight %}
Thus `A[n+1]` can be described as the set of some or all of the states in `A[n]` plus the state at `n+1`, and therefore our collection of states does not require regeneration on each new block.

I propose a solution below that has a number of desired properties:

- a reorg of length `n` requires computing no more than `2n` states
- `k` states are saved, where `ld(h) <= k <= 2*ld(h)`
{% highlight text %}
Initial conditions:
- Reorg length: n
- Current height: h >= 3
- i = 0; i < h

2^k < h - i <= 2^(k+1) is always the case for some k
if h-i == 2: set k to 1. (it would otherwise be 0)

After finding k, and while h-i > 1:
1. Cache states at height i + 2^k and i + 2^(k-1).
2. i += 2^(k-1)
{% endhighlight %}
and in python: (testing all combinations up to 2^13)

{% highlight python %}
import math

h = 3
states = set([1, 2])

while h <= 2**13:
    newStates = set()
    # find largest k s.t. 2^k < h
    i = 0
    while h - i >= 2:
        k = math.log(h - i) // math.log(2)
        newStates.add(int(2**k) + i)
        newStates.add(int(2**(k - 1)) + i)
        i += int(2**(k - 1))
    ts1 = set(states)  # temp set for testing superset requirement
    ts1.add(h)  # add the current state (instead of removing it from newStates)
    assert ts1 >= newStates  # ts1 is a superset of newStates
    l = list(newStates)  # temp list just to print
    l.sort()
    print(h, math.log(h) // math.log(2) + 1, len(l), l)
    states = newStates
    h += 1
{% endhighlight %}
Because of the ~log(n) space requirement a very fast block time is not a major concern. A chain with a target time of 1 minute requires about 1.5x the storage capacity of an equivalent chain with a target time of 10 minutes in the first year, and this ratio rapidly approaches 1 in the following years.
That said, after the first year with a 1 minute block time, we'd be storing around 30 states. If we ignored all states more than 2000 blocks deep (a day and a bit) we're still storing more than 15, which isn't a particularly great optimisation. (When we have events like the Fork of March 2013 we would like clients to adjust quickly and efficiently).
I have some ideas about state-deltas to try and solve this issue (which is ungood, but not doubleplusungood) but that can wait for a future post.
Originally published at eudemonia.io.
Eleven months ago I started planning Marketcoin and since then I've not described the updated design. It has changed significantly since I first described it, and is far superior in many aspects.
Herein I'll describe what Marketcoin is designed to do, sometimes with little or no justification to how it is achieved. The implementation is highly technical and does not belong in a general introduction.
Marketcoin is an idea that manifests in a novel fashion. It's a bit like Bitcoin, and a bit like Mastercoin, and a bit like Ethereum, but also like none of them in many ways. It's not any one single network, but many that are able to communicate and transfer value from chain to chain. They're able to share a common unit of a consistent value across all chains supporting the correct standard. It is self pruning and selects for the most efficient markets, while still enabling diversity and innovation. It is a parallel cryptonet designed to span the Internet and enable trustless trade between both old and new chains.
Markets can be hosted anywhere, on any chain, but there will be a central hash rate source where most market-chains live by default, known as the Grachten. A market-chain may host many markets but the central unit is always common. This communal living is an important design decision for market-chains because it provides an environment where high quality markets can grow that have some level of mutual quality assurance due to their competitive environment. A high quality neighbourhood is important as chains will have to communicate to move the central unit between them; remember that confidence in the chain is inversely proportional to required confirmations, so less secure chains will naturally be slower to interact with their peers. Just as Namecoin is merge-mined with Bitcoin, market-chains can be merge-mined with the Grachten. Unlike the Namecoin / Bitcoin relationship, though, the Grachten has limited space, and it becomes more and more difficult to produce blocks as a miner attempts to include more data. For this reason cryptographic authentication is deferred to the Grachten, providing a competitive environment where market-chains can prove their efficiency. The more coins stored on that chain the higher the block reward is, increasing the incentive for a miner to mine that chain. Two chains competing for the same currency pair will each hinder the efforts of the other, so once one gains a majority the weaker will atrophy until it is discarded.
The main idea of Marketcoin is to provide a fair and unbiased market on which to trade various cryptocurrency and other smart properties. In the same way a human watches the Bitcoin blockchain to wait for payment, Marketcoin watches the Bitcoin blockchain to confirm trades. Since there is some internal unit and an external unit, it is conceptually easy to see that an exchange can take place.
The actual market design used in a market-chain can be of the community's choosing, however, a blockchain provides unique challenges that traditional market structures do not neatly fit. The proof of concept (PoC) due out in the near future will demonstrate a design based on a modified call market. Since orders must be inserted into the order book whether they are executed immediately or not, no distinction is made between orders waiting in the order book and orders made immediately. Due to the near perfect-information state of cryptonets, it's likely a market will react to an order before it is executed - helping to create liquidity and competition around every trade.
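To make the modified call market idea a little more concrete, here is a toy batch-auction sketch (this is not Marketcoin's actual matching rule - the PoC will specify that - and the mid-price clearing rule is just an assumption):

{% highlight python %}
# Sketch: a call market collects orders over an interval, then clears them
# all at once at a single price at the start of the next block.

def call_auction(bids, asks):
    # bids/asks: lists of (price, amount); everything sits in the book,
    # whether or not it could have executed immediately.
    bids = sorted(bids, key=lambda o: -o[0])
    asks = sorted(asks, key=lambda o: o[0])
    volume, clearing_price = 0, None
    while bids and asks and bids[0][0] >= asks[0][0]:
        amt = min(bids[0][1], asks[0][1])
        clearing_price = (bids[0][0] + asks[0][0]) / 2   # assumed price rule
        volume += amt
        bids[0] = (bids[0][0], bids[0][1] - amt)
        asks[0] = (asks[0][0], asks[0][1] - amt)
        if bids[0][1] == 0: bids.pop(0)
        if asks[0][1] == 0: asks.pop(0)
    return clearing_price, volume
{% endhighlight %}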
A typical market-chain will experience the following phenomena:
There is a possibility of miners using their power to manipulate the orderbook slightly in their favour before offering a block, however, execution is always at the beginning of a block ensuring only existing orders are executed, preventing too much manipulation on the part of a miner. They can, however, manipulate the orderbook very slightly with every block they produce, so that if the next block happens to invoke an execution they will be slightly advantaged. By analogy, this is the high frequency trading of Marketcoin - both are simply having a say when it most counts.
It should be evident at this point that novel market structures are easily implemented under Marketcoin, allowing the most efficient and desired market structure to emerge. This is an excellent example of the neutrality of Marketcoin's design.
While one market-chain may only support a few currency pairs (more may prove too cumbersome), other market-chains are easy to create and can maintain a two-way peg with the central unit provided a standard is followed. This standard dictates a few aspects important to the network. For example, the rate of central unit generation as a block reward should be proportional to the number of central units stored on that chain and inversely proportional to the block frequency of that chain. Since market-chains are designed to coexist on shared hashing power they have a natural resilience to changes in block production times, and the reward adjusts accordingly.
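A sketch of that reward rule (the constant and function names are placeholders, not part of the standard):

{% highlight python %}
# Sketch: central-unit block reward for a market-chain, proportional to the
# central units held on that chain and inversely proportional to how often
# the chain produces blocks.

def block_reward(units_on_chain, blocks_per_day, k=0.0001):
    # k is a placeholder emission constant that the standard would fix
    return k * units_on_chain / blocks_per_day
{% endhighlight %}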
Creating a new market-chain will be extremely accessible (we're going to provide a library) but maintaining one is very costly (you have to convince people to mine it in a competitive environment). Because of this combination, I anticipate there will be a great evolutionary synergy, whereby there is no central Marketcoin chain and the central unit exists between many chains, abandoning them as they become insignificant and jumping on those that prove useful, novel, or advantageous. The unit of value will be extricated from the confines of just one blockchain, allowing for innovation not just in market structure but blockchain technology - all without needing hard-forks.
Marketcoin is designed to be the solution to distributed exchange: from an ethical launch to evolutionary agility, we want to cover every base to ensure the longevity of such an ambitious project.
The Ethereum devs have been mentioning microchains lately so I figured it was time to write up what my thoughts on this sort of thing have condensed into; they might differ from Gav Wood's thoughts.
As a note, I didn't coin the term microchain, though I've heard Gavin Wood use it (and Stephan Tual). I didn't have a term and I think this is perfect.
The point of a microchain is to provide a shared scalable PoW 'container' - a chain meant for nothing else but wrapping data in a PoW. Typically this has been done in a roundabout way (see AuxPoW or Mastercoin/Counterparty) that requires a lot of data, and is not efficient for any 'piggy-backing' chains hanging off the main chain. This isn't a huge issue; insofar as - in the case of AuxPoW - proofs just go from 80 bytes to ~500 bytes (unless you're using P2Pool or Eligius, then it's a bunch more). This is because the whole chain from block hash to PoW must be included, which is `Hash(Header(MerkleTree(Coinbase(ScriptSig(BlockHash)))))`. Ugh!

Additionally, AuxPoW has a number of design flaws: using 'chain-ids' to dictate positions in merkle trees is just ugly. The point is to ensure uniqueness in the proof - that you can't secretly include two different block hashes (since data in a merkle tree can be hidden) and later launch a doublespend attack. It's trivial to see that a merkle patricia tree (MPT) is the better solution here as key-uniqueness is guaranteed.
Another flaw is the indirect and bulky nature of the proofs as described above.
A further flaw is the reliance on a central chain: Namecoin will never exist without Bitcoin (or at least requires a hardfork) and necessitates the use of the Bitcoin chain. It would be nice to have a system of merged mining that is coin-agnostic (doesn't favour Bitcoin, basically). Hardforks are bad, let's avoid them.
A related flaw is the dictation of data structure which favours bitcoin-forks. It introduces needless complexity for a ground-up chain to implement AuxPoW.
All in all, using AuxPoW has a lot of side effects, and it'd be nice to be able to avoid them.
Basic microchains
These are minimum structures to fairly support merged mining and general data-inclusion.
(The code is written to be roughly compatible with Encodium.)
Intra-chain view
{% highlight python %}
class Block:
    branch = MerklePatriciaBranch.Definition()
    header = BlockHeader.Definition()
    transactions = Transactions.Definition()

    def valid_proof(self):
        root = branch.calculate_root(genesis_hash, header_hash)
        return root < header.target
{% endhighlight %}
Inter-chain view
{% highlight python %}
class MicrochainBlock:
    tree = MerklePatriciaTree.Definition(
        List.Definition(
            KeyValuePair.Definition(
                GenesisHash.Definition(),
                Hash.Definition()
            )
        )
    )

    def proof_of_work(self):
        return self.tree.root
{% endhighlight %}
Optimisations
- Ensure genesis keys diverge as quickly as possible.
- Put a cap on proof length to avoid bloat - for fun, putting two very similar keys can create a worst-case proof.
Change MicrochainBlock's proof of work to: `def pow(self): return hash(self.tree.root + nonce)` - this means we have O(1) updates to the PoW, whereas without this a k,v pair in the MPT must be altered, which is an O(log n) complexity update.

More complex forms
There are a few more alterations I've been thinking of, especially:
Making microchains into a blockchain of their own (and the metadata is included in the tree like everything else - this metadata governs targets, difficulty, etc) which will aggregate mining power in a more formal manner. Additionally it means that a chain can just not worry about PoW, and simply take an authenticated list of hashes from the parent chain (for better or worse). And...
Deregulating block frequency on merged chains and allowing the microchain to govern update frequency. Which ties into...
Competition within the tree. By this I mean merged mining an additional chain incurs some cost; this drastically alters the incentive structures around attacking networks and merged mining (I haven't done the math yet to figure out if it even can be beneficial).
Those three points mean the microchain could support many merged chains, and their block frequencies would be governed by how often they are mined in the microchain (and lower frequency means higher reward per block), and with added competition that means they will reach an equilibrium which allows a direct measurement of perceived economic value. More detail another day.
Slid.es
Ethereum
A brief introduction to Ethereum. ~ 20 min.
Bitcoin - Payment Protocol
Technical overview of Bitcoin as a payment protocol and a look into Bitcoin's script system.
Watch?
Making Money - Part 1 [Vimeo] (September 2012)
Direct Link | Resources
Making Money (lost)
Part 1 is primarily concerned with the money creation process (fiat money, fractional reserve banking, and interest) and who benefits from this. Part 2 is a conceptual introduction to Bitcoin, why it is such a significant achievement, and why it is superior to our current monetary systems.
PDF download
Bio
I am an amateur philosopher practicing the school of Critical Fallibilism. My favorite philosophy topics are: epistemology, learning, systems design, communication, morality, project planning and business, and general life skills and attitudes. I spent the last half of 2020 improving my thinking methods via (commercial) one-on-one philosophy tutoring from Elliot Temple. Those 52 tutorials (~100 hrs) are available free on YouTube and you can read my site and my microblog to see the philosophy work I was doing at that time. The tutorials started with my goal of improving my writing quality, but they covered a very wide range of topics, like grammar, procrastination, social dynamics (e.g. analysing + understanding lies and dishonesty), yes/no philosophy (epistemology), learning methods, and having successful discussions.
As a result of the improvements to my thinking methods, in January 2021 I unendorsed all my previous ideas. That means that I revoked my implicit endorsement of projects, ideas, opinions, etc that I had previously worked on, advocated, promoted, etc. There's more details in the above-linked video. This about page was sorely out-of-date prior to the update on 11th July 2021. You can see previous versions on github.
For work, I do mostly software stuff. I'm pretty good at that, generally; you can check out my github profile if you like. I've built sophisticated systems of smart contracts (with 99.7% test coverage) as the backend for a secure online-voting SaaS product. I've built sophisticated IaC cloud systems with bespoke advanced automation and devops tools. SPAs and webapps in Elm, Purescript, and Typescript (VueJS) with features like signature input and PDF generation, headless browser automation, and crypto stuff like auditing an on-chain election.
I am very comfortable learning new languages, frameworks, toolkits, etc. I have a strong preference towards safe programming techniques. Some examples: rich static types (e.g. Haskell, Purescript); well integrated functional techniques (e.g. Rust); unique compiler-level safety (e.g. Rust, Elm); and declarative frameworks (e.g. Cloudformation/IaC, Tailwind CSS). I really like type-level programming and am disappointed at the lack of support for it in languages like Rust. I think things like higher-kinded types, functional dependencies, and type-level rows are incredibly powerful, but the current implementations and tooling are ultimately lacking, making the act of doing type-level programming harder than it needs to be. There are some exceptions to my preference for safe languages, too. I quite like Ruby and Rails (though have some criticisms, too), and I have a new-found appreciation for SQL. I don't like javascript much, but it's not so bad anymore with modern ecmascript and typescript. I don't hate it.
I contributed an in-depth chapter to Data61's Architecture for Blockchain Applications textbook about the smart contract architecture that I used for SecureVote's backend.
My two most well-known past projects are likely Flux and SecureVote.
Flux is a political party I founded in 2015 to introduce better methods of doing democracy. Particularly, I invented a new way to do democracy -- Issue Based Digital Democracy (IBDD) -- built around foundational concepts from epistemology and free market economics such as error correction, cycles of conjecture and criticism, specialization and trade, division of labor, comparative advantage, and arbitrage. Flux ran in state and federal elections (in Australia) in 2016, 2017, 2019, and 2020.
SecureVote is a startup (in indefinite hiatus), founded in 2016, that produces secure online voting software and infrastructure. We own a patent on the most space-efficient method of secure, online, p2p secret ballot. In 2017 I publicly ran a 24 hr stress test of our prototype high-capacity voting architecture -- this achieved 1.6 billion votes anchored to the Bitcoin blockchain and was able to be audited by the public. Based on those results, a 2016 15" MacBook Pro would have been capable of processing up to 16 billion votes in 24 hours, i.e., the 1.6 billion-vote stress test used approximately 10% of that macbook's computational capacity on the bottleneck task: signature validation (I guess mb it would thermal throttle, tho 🤨).
Some other past projects of mine:
BitChomp Mining Pool (2011)
I ran a NMC/BTC mining pool (BitChomp) for a short while in 2011. That died after the pool became insolvent due to a repeating-payments bug in my BTC payout code. For whatever reason the same code worked fine for NMC payouts, but BTC payouts encountered an exception between the 'send BTC tx' and 'record the payout in the DB' steps. The regular cronjob to trigger payouts meant that (by the time I woke up) BitChomp's first Bitcoin mining payouts (about ~7-8 BTC out of 50 BTC reward) had been sent to miners 6 times over until the wallet was drained. The reward distribution model (PPLNS, or SMPPS, mb) meant that the pool was able to build up a buffer of excess reward, but that needed to be maintained to pay miners later during unlucky periods (or something like that). The takeaway is that distributing the excess like this meant that the internal accounting of the pool was out of whack, and meant it was insolvent. I learned a valuable lesson about handling atomic, irreversible events and DB synchronization; and I'm glad it happened early in my career and not in a high-stakes situation.
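In hindsight the fix is the standard one: record the intent before the irreversible action, and make the payout step idempotent. A rough sketch of that pattern (not the original pool code; the db and wallet objects are hypothetical):

{% highlight python %}
# Sketch: write the payout record *before* broadcasting the transaction, so a
# crash between the two steps can never cause the same payout to be repeated.

def pay_miner(db, wallet, miner, amount):
    payout = db.get_or_create_pending_payout(miner, amount)  # 1. record intent
    if payout.txid is not None:                               # idempotency guard
        return                                                #    (already sent)
    txid = wallet.send(miner, amount)                         # 2. irreversible step
    db.mark_payout_sent(payout.id, txid)                      # 3. confirm
{% endhighlight %}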
Marketcoin, Ethereum, Eudemonia, The Grachten, Quanta (May 2013 to August 2014)
I tried to launch a distributed-exchange blockchain -- Marketcoin -- in May 2013. The architecture is still workable and better (higher capacity, lower fee) than the decentralized exchanges around today (like Uniswap/Balancer). In Dec 2013 -> Feb/March 2014 I worked with the Ethereum team doing self-directed work around smart contracts and wrote (to my knowledge) the first smart contract testing framework alongside a test implementation of Marketcoin's price matching and escrow engine and a precursor to BTC Relay comprised of 3 contracts: CHAINHEADERS (for Bitcoin's headers-only consensus), MERKLETRACKER, and SPV. To be clear: if the design of Ethereum smart contracts (and tooling around their authorship) had not changed significantly before launch, this is probably the earliest near-functional decentralized, cross-chain exchange.
In April-July 2014, a friend and I started a short-lived project, Eudemonia Research, where I returned to Marketcoin and wrote, from scratch, a blockchain framework for fast development of custom blockchains: Cryptonet. It's like Parity's Substrate, except 1,316 days older, written in Python, and dead. I used cryptonet for some other important prototypes, too.
One was The Grachten (originally GPDHT), an early implementation of a blockchain scalability solution based on merged-mining and something like pseudo-sharding. It later went on to be refined and coined a 'microchain' by Gav Wood. When Gav Wood said the following, he was talking about our conversation about this idea.
Another important prototype I built using cryptonet was Quanta -- which is the world's first implementation of the generalization of Nakamoto consensus to a DAG that is capable of merging histories from multiple parents. The method I created for Quanta was independently discovered a year later by Yoad Lewenberg, Yonatan Sompolinsky, and Aviv Zohar in Inclusive Block Chain Protocols.
All content is on GitHub: https://github.com/xk-io/xk-io.github.io.
This post and addendums are to track other bodies of work that aren't included directly on this site (or at least not when I post them).
My email is jmfarthing88@gmail.com.
By the way, amazing work with Flux.
Hi, FYI, Flux is dead: https://voteflux.org/2022/04/20/wrongful-deregistration/
I'm no longer interested in pursuing anything around digital democracy.
I now think that what Flux was trying to do (and widespread digital democracy more broadly) is a fool's errand. There are much bigger problems with govt/democracy, and those problems will prevent digital democracy making any substantial impact. The AEC's flagrant disrespect for the Electoral Act and consistent head-in-the-sand style denial of any fault is an example of this. Moreover, voluntarily participating in broken systems (e.g., starting a political party) is, in-essence, consenting to it and publicly supporting it as legitimate. I don't think that's a good thing to do, and the right thing is to opt-out of those systems to whatever extent is possible.
There are two philosophy books I recommend you read that discuss these sorts of issues: Karl Popper's The Open Society and Its Enemies, and Ayn Rand's Atlas Shrugged.
IMO, digital democracy actually presents a risk in some ways, in that it might make government interference a lot easier. IMO, the further the government is from your life, the better.
Originally written some time around 2 July, 2014. If you are interested in becoming a founding member of the NVB you can do so at our site: nvbloc.org.
Governance is currently a centralized endeavour in most developed nations, and inefficiencies arise in the modern age due to design assumptions made (typically) in the 17th through 20th centuries. Assumptions like speed of communication (snail mail), the ability of constituents to understand policy (non-universal education), travel time (days to weeks), and magnitude of population (orders less than today), among others. Since nearly all of these assumptions are now wrong, we expect that more efficient and just systems must now be possible. Furthermore, we expect there to exist some incremental solution by which a smooth transition into a new political and societal structure can occur over decades (slow and steady) instead of days (hasty revolution). A potential solution — a neutral voting bloc — is presented below aimed at the Australian Federal Senate due to the system of preference allocation as parties are eliminated from the race. Over time this system can diffuse into local and state governments. Additionally the structure is such that the bloc (as a party) can safely hold 51% (or more) of parliament without facilitating tyranny of the majority.
Many good ideas are floating around regarding the structure of what a good, neutral, and effective voting system could look like in today’s world. Most of these ideas surround liquid democracy (LD), a system that allows individuals to defer the potential of their vote to someone they trust. In this way, over a chain of several deferrals, responsive representative democracy is achieved (responsive because votes can be taken away if a ‘politician’ misbehaves) for mundane issues that don’t attract attention. Similarly, if an issue attracts a great deal of attention, individual voters can directly vote, spending their vote potential personally instead of granting it to their trusted representative. Liquid democracy thus provides a smooth transition between referendums and oligarchical democracy, depending on the interest of the public, and ensures efficiency in representation through strong competition.
The underlying voting system, however, is not a recipe for success. These systems do not provide — on their own — an incremental transition into a better democratic system, they simply provide a means of allocating votes. Thus they are a means to an end; that end being just allocation of votes. Therefore, no matter how well designed voting software becomes, without citizens’ personal interests at stake such a system will not be adopted. Our task, then, is to design such a system as to always provide someone — anyone — with utility by participating, and once they participate, to offer the same value proposition to another individual, and so on. Without such a marginal increment in utility we cannot achieve an incremental solution.
The utility of such a system is the potential for the individual to cast their vote according to their preference. Therefore the system must strengthen whenever a new member joins regardless of their political ideology. This is achieved by using the party structure as a proxy for the alternate, superior, underlying voting network. As users participate the allocation of parliament (particularly in the upper house) belonging to the bloc will continue to increase with each election (as constituents cast their primary votes in favour of the bloc) providing the beginnings of incremental increase in utility.
However, this is not a novel idea (so far) and has been tried before. Senator Online is an Australian party acting as a direct democracy proxy service for each member. They received 0.06% of votes in the 2007 federal election, and 0.09% in the 2013 election. Clearly their model is not effective. Unfortunately, even though Senator Online may have some of the properties we seek, due to Australia’s preferential system increasing the granularity of representation, Senator Online may never have enough support to gain a seat. Direct democracy is very burdensome to the individual and so may appear unattractive to the majority of constituents (violating our requirement). Additionally there is no utility to be gained for an individual as there is no potential to actually vote if the party holds no seats. The combination of these factors easily explains the failure of Senator Online.
There are two additional issues, then, to overcome: 1) ensure liquidity of utility, and 2) make it easy and attractive for the layman — ideally no requirement for ongoing participation. Issue 1) can be overcome by ensuring the bloc has at least one seat, and issue 2) disappears when liquid democracy is used in place of direct democracy. It may be worth noting that LD requires less interaction than the contemporary Australian system, as a user can ‘set and forget’ by deferring to someone close and trusted, like a spouse, or child.
Ensuring the bloc has at least one seat is a difficult task if only members of the party behind the bloc are allowed to vote. This is because, in such a case, utility is only given to party members. An alternate design choice is to allow external voting too, and as such utility can be distributed beyond the membership itself. This provides an incentive to participate and cooperate even to those who are not even prospective members. The importance of this is realised with the following party policy: if another party grants preferences to the bloc, they gain a share of votes in proportion to their contribution (number of preferences) they’ve passed to the bloc. Thus competing parties are incentivised to pool votes via this preference allocation. Additionally, this pooling means that seats that would otherwise be given to major parties (due to the elimination process) are now shared among parties participating in the bloc — effectively. (It is worth noting that individual members play only a small role at this early stage, but since party policy must be a product of the members they have considerable power in the structure of the system). Due to the granularity forced by state-by-state allocation of seats, this also allows parties to gain seats they would have otherwise lost by combining preferences across state boundaries, which decreases the variance experienced during seat allocation — a phenomenon particularly harmful to minor parties. Therefore each and every party has an incentive to participate, even if the bloc is 3rd or 4th on their list of preferences. It forms a safety net so that in the case a party isn’t able to claim a seat for themselves, they can still have some stake in a seat, along with other minor parties. This is obviously preferable to simply giving those votes to major parties, and so should provide the incentive necessary for parties to consider cooperation and participation in their best interest.
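As a sketch of that proportional-stake idea (purely illustrative; the actual policy would be decided by the party's members):

{% highlight python %}
# Sketch: parties that direct preferences to the bloc receive voting power
# within the bloc in proportion to the preferences they contributed.

def bloc_vote_shares(preferences_contributed):
    # preferences_contributed: {party_name: number_of_preferences}
    total = sum(preferences_contributed.values())
    return {party: n / total for party, n in preferences_contributed.items()}

# e.g. bloc_vote_shares({'PartyA': 30000, 'PartyB': 10000})
#      -> {'PartyA': 0.75, 'PartyB': 0.25}
{% endhighlight %}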
Of course, there is the requirement that participating parties never have to trust each other as they would then become dependent on radically different parties, which is unacceptable in a political environment. Trustless ledgers and numeric allocation systems (which could be used to track votes) are a field of computer science which is only now beginning to be researched. However, the structure at the core of these systems — a blockchain — is applicable to this situation which affords the trustless environment we seek. Because this structure is so young it is not yet widely understood, and so efforts in development and education need to be spent in order to convince even the most conservative of parties of the security of the underlying system. The particular design of an appropriate blockchain is a question as yet unanswered, and research into this is needed. Particularly decisions on the transparency of votes, voters, and voting need to be made in order not to violate the natural law and contemporary ethics which produce our standards for voting events, like secret ballot. Ultimately this is not a big hurdle and simply takes time for both the design and development of appropriate software.
Since the bloc is particularly effective at eating away at the fractional seats in the senate (particularly the last seat allocated), if it were to gain significant individual votes in elections a core allocation of seats will form that are not attributed to cooperation of parties, but to acceptance by the Australian people. Western culture often produces two party systems and it is not uncommon for a significant proportion of the population to be dissatisfied with both major parties. A neutral voting bloc provides a perfect avenue for those voters to express their dissatisfaction by effectively splitting their vote between all parties and people which have a stake in the bloc. If these voters facilitate (and ideally participate in) the formation of such a bloc then the utility of the bloc (from both the party and individual perspectives) massively increases as the bloc now controls more seats than preferences provide, meaning that participating parties actively increase their parliamentary exposure by preferencing the bloc. There is no other way —that the author is aware of— that parties can turn the zero sum game that currently exists into a positive sum game. Hence the proportion of parliament attributed to the bloc should continuously increase, as overall utility grows for all actors in Australia regardless of their participation. Therefore the requirement of increasing marginal utility is satisfied — at least while the bloc has less than a majority in parliament, after which the requirement is no longer necessary.
Democracy is not an easy road, and solutions are not always simple. However, with perseverance society can overcome these obstacles provided enabling technology is embraced. A system crafted to solve one very particular problem — decentralising the Australian Senate (and maybe the lower house as a consequence) — has been presented and shown to satisfy some basic requirements. With luck this design may be appropriated to produce more general solutions to solve this problem in other democracies globally. Solutions are on the horizon, let us boldly seek them out.
Intellectual property rights to any of the above are hereby forfeited, and so the work is in the public domain. Information wants to be free.
— Max
If you're interested in becoming a part of the NVB you can do so at our site: nvbloc.org.
Introduction
The Australian Senate uses an unintuitive system of preference voting. Due to the low barrier to appearing on the ballot there are sometimes many persons and parties vying for the six senate seats per state up for election. To help provide a reasonable and quick voting experience for voters, they have the option of voting 'above the line' where they select just one party (or group) to support and allow that party or group to set the list of preferences for them. In this way they vote with the party.
The seats themselves and the process of filling them is based on a quota system. This means that once a party has secured enough votes to guarantee them a seat, the number of votes dictated by the quota are removed, though all preferences continue to be counted using a weighting system (to account for the already elected candidate). Usually the first 3 or 4 seats are uncontentious, however, the 5th seat and particularly the 6th seat can often be extremely dependent on how preferences flow during the election. Of particular note is the election of Ricky Muir who was elected to the senate during the 2013 federal election. Without these complex preference flows he would not have been elected as the number of first-preference votes the Motoring Enthusiast Party (which he represents) received was well below the quota.
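For concreteness, the quota used in Senate counts is the Droop quota, which works out to roughly 14.3% of formal votes when six seats are up for election:

{% highlight python %}
# Droop quota for an Australian half-Senate election (6 seats per state):
# quota = floor(formal_votes / (seats + 1)) + 1

def droop_quota(formal_votes, seats=6):
    return formal_votes // (seats + 1) + 1

# e.g. droop_quota(3_500_000) -> 500_001
{% endhighlight %}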
Thus this property of senate voting comes under criticism often as it is seen as a way for illegitimate candidates to become elected. However, I will explain that it also presents a great opportunity for an alternate system of electoral representation using a party as a container in which to house a better democratic system.
Mechanism of the Hack
I will define a 'hack' as 'behaviour introduced to a system that subverts some restrictions or properties that were originally intended to hold'. It is not the case that all hacks lead to negative consequences but certainly some subset do, though it is unlikely that this hack can be used to destabilise Australia or cause any political controversy besides discussion of the use of the hack itself.
In this case the Senate Preference Hack (SPH) subverts the traditional Senate electoral method by giving the host party an unfair advantage. As long as a small number of first preference votes are received the hack allows a minor party to obtain a number of seats (expected to be the 6th elected seat) far in excess of what first preference votes would indicate.
The core mechanic of the Hack is to provide something in exchange for preferences. Particularly it is a stake in the seats which the host party is elected in to. In this way there is an incentive for every other party to preference the host party ahead of most other parties. In short the Hack provides a neutral 'better than nothing' alternative for all parties willing to participate. Additionally there is no cost for parties to participate as they are required to preference everyone anyway, and thus are physically required to participate at some level.
Because of this the Hack has the following properties:
Requirements for the host party
The host party must have some way of providing said stake to those other parties which preference it. Thus there is a requirement that said other parties are able to submit votes in such a way as to guarantee the host party does not manipulate votes to achieve their own ends. If this requirement is violated it is unknowable whether any tampering or censorship has gone on which violates the underlying implicit agreement. The agreement provides the incentive for other parties, thus must remain intact.
I do not think it is coincidence that the Hack therefore requires an open and transparent democratic system. As it also needs to provide some utility not found in our current system it provides an avenue to introduce nearly exclusively superior systems of democracy, with inferior systems failing to gain traction (though superior systems may also fail in this way).
While it is possible to use a simple direct-democracy-esque majority-rule system in which all parties and participants vote, I feel that would be a wasted opportunity, and provide a weaker incentive than a voting system that is more intricate and sophisticated. Particularly this intricacy must create some value that did not exist previously. Thus a system that provides the greatest chance of success is one that is again superior to our contemporary system, insofar as it meets some need better than its host's environment. I do not believe it is unreasonable to look to the internet and computers for a solution here. Most voting systems used in nations today were designed before the advent of the internet and so have a number of restrictions and compromises that were reasonable in a pre-information age but are now less so. A prime example of this is the philosophy of representative democracy as a whole. Without the internet and with vast distances between citizens it makes sense to elect delegates to act on your behalf. However, if there is no limit to how many others one is able to communicate with, and the speed with which one can do so, the foundation of representative democracy is less competitive.
Conclusion
The Senate Preference Hack enables a political party to gain unfair representation in the senate (by number) only if it then accurately represents those who preference it. It is conjectured to provide a viable method of testing and employing new systems of democracy without sacrificing the stability or integrity of Australian politics. To achieve this it must be wrapped within a new political party, but requires interaction with other parties so is naturally inclusive.
It occurred to me that there is an unsolved problem with the NVB: death.
When a voter dies, if there is no connection between their physical death and their cryptographic identity, that identity continues to live on. This brings up a new-ish toxic market: dead people's ID. In short, since they don't expire they essentially become a 'vote for hire' if they can be recovered. Artificially inflating the voting pool in this way would almost certainly be to the long term detriment of the quality of governance we are obliged to accept (or is it? I presume yes for the moment).
So, one way to address this is for votes to 'time out'. Let's take a situation like the following:
Voters are empowered once every three months, and many at once. The empowerment of their ID only lasts 1 year at a time (or w/e). During that 1 year they are free to transfer away the right to vote to other identities. In particular, they can do so through (synchronised?) CoinJoin operations. In this way, provided they only partake with members of their own pool, they can maintain relative anonymity while still linking their identity to only one large batch of people, so an expiry date is maintained. The transaction they create will maintain constant voting rights and a constant expiry date. Mismatching expiry dates cause the whole transaction to be flagged as invalid.
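As a minimal sketch of what that validity check might look like (all names and structures here are hypothetical; the NVB doesn't specify an implementation):

```cpp
// Hypothetical check for a (CoinJoin-style) transfer of voting rights.
// All inputs must carry the same expiry, that expiry must not have passed,
// and every output inherits it -- mismatched expiries invalidate the whole tx.
#include <cstdint>
#include <vector>

struct VoteToken {
    uint64_t votingRights; // how many votes this token carries
    int64_t  expiry;       // unix timestamp at which the token lapses
};

bool IsValidTransfer(const std::vector<VoteToken>& inputs,
                     const std::vector<VoteToken>& outputs,
                     int64_t now) {
    if (inputs.empty() || outputs.empty()) return false;

    const int64_t expiry = inputs[0].expiry;
    if (expiry <= now) return false; // lapsed identities can't transfer

    uint64_t inRights = 0, outRights = 0;
    for (const auto& in : inputs) {
        if (in.expiry != expiry) return false;  // mismatched expiry: whole tx invalid
        inRights += in.votingRights;
    }
    for (const auto& out : outputs) {
        if (out.expiry != expiry) return false; // outputs must keep the same expiry
        outRights += out.votingRights;
    }
    return inRights == outRights;               // voting rights are conserved
}
```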
That sort of situation would largely solve the problem. People who die are known and thus can't validate, so their token expires.
Additionally, you can choose your pool by timing when you register, or by abstaining (letting your token expire for a 3-month period or w/e, then revalidating for another block).
I guess people can also elect to be empowered instantly, but this isn't the default and will lead to less privacy. Politically active people are good candidates for going straight away, but those valuing a secret ballot are encouraged to wait.
NB: This document is better read from github, due to formatting issues with the code.
So, I recently wanted to broadcast a nonstandard tx and didn't want to wait for full blockchain sync on my dev machine.
I knew that Eligius supports the Free Tx Relay Policy, and that I'd sent them nonstandard txs before, but all the IPs I could find weren't accepting connections. Finally I found
68.168.105.168
and was able to connect and broadcast the message. Later I realised you can search getaddr.bitnodes.io for 'eligius' to find nodes.

This is a great start, but we still need to send the transaction. By default Bitcoin Core does not relay transactions that:
- are considered non-standard, or
- spend outputs it hasn't seen (which is the case here, since we haven't synced the blockchain).
At this point we are going to need to compile our own version of Bitcoin Core, addressing the above two problems so we can broadcast the transaction. Fixing the second means we don't have to download the whole blockchain, and lets us relay pretty much anything.
The first section of code to change is IsStandard() in script/standard.cpp (Source Link). The quickest way to fix this is to add a return true; at the top of the function.
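A rough sketch of the change (the exact signature varies between Bitcoin Core versions, so treat this as illustrative rather than a verbatim patch):

```cpp
// script/standard.cpp (sketch): short-circuit the standardness check.
// CScript and txnouttype come from the surrounding file.
bool IsStandard(const CScript& scriptPubKey, txnouttype& whichType)
{
    return true; // every output script is now 'standard' for our node
    // ... the original template-matching logic below is never reached ...
}
```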
Sweet, now all transactions are standard for our node. The next part is to relay transactions even when we can't validate the outputs they spend. When troubleshooting earlier I ran into a particular error message being thrown. To avoid more debugging I figured the best thing to do was just rip the whole block of code out. That is to say, this:
Becomes this:
So, all that's left to do is compile bitcoind, run it with -connect=68.168.105.168, wait for the connection to initialize (which can be checked with bitcoin-cli getpeerinfo), and then sendrawtransaction when you've confirmed the connection. If all goes well the tx will appear in the next Eligius block!

A few days ago a Junkee article by Jane Gilmore of a similar title was posted to /r/Australia, and although Gilmore's heart was in the right place, the solution of simply replacing bad politicians with good independent politicians is no solution at all. However, there is still hope, as I shall explain.
Here's the crux of my argument: replacing party-politicians with independent-politicians will not work well. Without infrastructure to prevent re-party-fication and factionalization there is no reason to believe that independents can provide a long-term solution, or that they will avoid joining new or existing parties, or that they will be any more effective than current representatives. Setting aside the fact that Gilmore's plan has not worked en masse anywhere else in the world, the problem lies with a common obsession in democracies: the question of who should rule.
It is a toxic question that leads to faulty reasoning. It is epistemologically analogous to asking 'what is the authoritative source of knowledge?'. There is no authoritative source of knowledge, and likewise, there are no permanently good rulers.

There is an additional problem with existing explanations of why democracy works: just as the philosophy of empiricism held that the laws of nature were somehow 'written' onto the mind through observation, some proponents of democracy think that the right rulers are somehow 'read' from the minds of voters. But voters are not a magic eight ball; they are just as fallible as politicians, and perfectly capable of making the same mistakes. Without accepting this fallibility, how can we design a democratic system that is immune to it? If we fight with reality we will not be able to keep from fooling ourselves.
We can see the who-should-rule question embedded all over the place:
Imagine if we re-framed this for knowledge:
Of course, we could argue that some sort of market for correctness would be set up between the oracles, but this is inconsistent with them being oracles. Likewise, expecting competition to prompt parties to somehow create good policy ignores the fact that the origin of good policy must therefore not be the parties.

(Also of note is that Kangalooney criticizes the voters, particularly swing voters, not representatives or the system.)
Or from Jane Gilmore, the author of the original article:
Unfortunately, electing a benevolent dictator is not really a solution. Essentially, that is the fictional entity Gilmore imagines, and often what we all imagine, but it is a simplistic and flawed vision. The who-should-rule question has no good answer, especially for communities of our size. There are just too many ideas, people, interests, and motives for one independent to represent them all fairly. It doesn't matter how much someone wants to do good things; wanting to in no way enables them to. Without an explanation of why results should improve, we should not expect them to.
There are also a number of traditional political problems Gilmore's solution doesn't solve. For example, minor parties have never been good at working together: factions, ideology, dogma, these are what are expressed when you put minor parties in a room. It's not a magic recipe for success, and we've known that for a long time.
Furthermore, when you do get 'cooperation' in diverse coalition governments, a curious phenomenon occurs around the creation of policy. Party A suggests Policy A, which they think solves a problem. Party B suggests Policy B, which they think will solve the same problem. After negotiations the resultant policy (AB) is a 'compromise' or mish-mash of the two, and curiously nobody thought AB would work in the first place! The solution is not just more proportional representation; we need to go deeper.
If our existing electoral system had the answer within it all along, then it would in itself be the solution to the problems we've had, the same problems that were created by that very system! We need new strategy, we need new systems of organization, we need new ways to rally minor parties to help them cooperate instead of bickering. We need something that has never been tried before because everything that has been tried has failed (else we wouldn't be in the state we are!)
Karl Popper had a useful criterion for determining the quality of a democratic government. First, a democracy is one in which bad policy can be removed without violence. Tick. Second,
the quality of a democracy is *how well* it can remove bad policy. (1)
Uhhh, we're probably about a 'D+' on that one. I challenge anyone to think of a better method of ranking democracies with such an objective and explicable quality.

I will construct a truism:
The optimum political strategy is not to implement good policy and remove bad policy. (2)
We know this because:
Without addressing (2), how can we expect the policy we produce to improve? If we don't change how vulnerable our representatives are to removal, why should we expect any greater control than we have now? In short, addressing (2) requires change to both how power is structured and how it is instantiated in people; furthermore, such a change cannot be obsessed with 'who should rule'.
I'll follow now in Gilmore's footsteps, and give you a few paragraphs of 'what ifs', of the requirements, the potential, the promises, and the hardships. Everything from here out can be started tomorrow, if we put the work in.
What if a small proportion of Australians realised we have to step into the unknown? That to solve these problems we must embrace and move past them, and to do so has never been done before; that it will be scary, and difficult -- for becoming pioneers has never been easy -- that we must abandon our previous ideas about why democracy is good, and look beyond ideology into a scalable future?
What if Australia had a unique entry point for new ideas, a way to experiment, boldly and safely, without risking our political system? What if our Senate elections involved parties trading preferences so we could introduce elements to help reorganise our political landscape without permanently altering our parliament? What if we could craft a new political party just to house an experiment, an experiment that could give more back to minor parties, allowing them to specialize, and in doing so give each the ability to really help in the areas they know most about? What if this party used their elected candidates only as a proxy, and allowed novel experiments to feed into our real parliament? What if this party could have a candidate elected in every state with only 1% of the primary vote?
What if this political party could house a direct democracy, and allow all voters to participate whenever they felt it necessary? What if leaders were held to account throughout their term, and in the cases they needed removing, what if we could remove them? What if we embrace bold new voting systems, allowing rules and division of power based on issues? What if you could set a delegate like a family member, or friend, or community leader, instead of having to vote all the time? What if this happened through every level of our political system, so that power structures could be quickly rearranged without needing the whole population to vote? What if we could somehow introduce liquid democracy, to ensure that voting can be fast and cheap? What if we used modern cryptography to ensure anonymity, cost effectiveness, and immutable and transparent ballots?
What if this party was already in motion and on the way to gaining 550 members so it can register Federally and run for the next election? What if this party had a solid plan of action and the explanations to back it up? What if your actions today help decide the future of our parliament, and the future of Australia? What if you had the chance to join this party? What if you could be part of the 1% to take back democracy?
I am trying to start a political party to directly address the philosophical issues I explore through this post. It is called the Neutral Voting Bloc, requires ~1% of the primary vote to win 6 Senate seats, and it is novel in strategy, implementation, and philosophy.
The Neutral Voting Bloc is the only solution I know of, which is why I'm building it. Join me?
Responding to: Is Democracy in Trouble?
Hi John,
I think you make some excellent points and your suggestions are certainly sensible, but I also think there is a bigger beast to slay. The problems we have now are no more 'wicked' than any we have had in the past, however, we certainly need to adopt more modern explanations so that we can navigate their 'complex interdependencies'. I suggest the root of the problem comes from our obsession with the who-should-rule question, and our lack of innovation in policy-making, which can be improved once we understand a little more about where policy comes from.
Furthermore, it does little good to lament the issues we face: there will always be problems, and we will always have to solve them, and because of that we will always have such 'wicked' problems of increasing complexity and 'contradictory ... requirements'. These point to nothing more than our systems themselves requiring improvement. Jones speaks of these 'wicked' problems as though the problems themselves are somehow to blame, but there is nothing inherently wicked about the universe or the problems it challenges us with, and there is a simpler (and more reassuring) explanation: some of our explanations are wicked, and more particularly it is bad philosophy that helps create and propagate these wicked things. However, there is hope, because the real problem is how we interact and the ideas we cultivate, and the question becomes how we solve problems.

I agree that Parliament is where the improvement needs to happen, but to suggest that we could keep electing the same people and achieve superior results cannot possibly fix the problems we face, because our true problem is with how we solve problems. Only by fixing the process of policy creation and removal can we ensure that we are able to avoid the bad explanations that produce bad policy.
Karl Popper suggests a criterion for judging democracies which is both objective and powerful: the quality of a democracy is how easy it is to remove bad policy without violence. Canonical democracy is able to do such a thing; however, we are by no means efficient at it. How are we to define 'bad policy', though? The answer is simple: the ease of varying policy is inversely related to how good it is. If we think about each extreme, it becomes easier to see why, as a policy that is easy to vary must have little connection to reality, and a policy that is very deeply connected with reality must have only a very small number of possible forms (where any change to the 'why' would take it further from the truth). In actuality it is exceedingly difficult to make a policy so perfect it is impossible to improve upon; however, since our options for policies are diverse and finite, it is easy to compare them, and when no suitable explanation can be found to inspire policy we must create new ones.
This is truly where policy comes from: it is a creative endeavour that comes from our understanding of the world. It is the creation of new options that allows us to solve the wicked problems of today, and thus to 'fix' our parliament we must first 'fix' our method of creating new options. There are good reasons to believe that parties are not the vessel that can take us there, at least not in their current form. I don't doubt that a loosening of political discipline would move us in that direction, but to believe we can do it forever with canonical parties ignores the fact that parties are not the source of good debate or good policy, and thus there must be better methods of creating good policy, and solving wicked problems. I wrote about this political philosophy just a few days ago: http://xk.io/2015/07/10/aus-two-party-failed-us/
I contend that if we were able to solve the problem of the explanations behind policy there will be no requirement to educate or train those involved with politics, because those best suited to create novel policy options are already qualified. The question is how to utilize them.
Finally, I'm starting a novel political party in an attempt to address the philosophical issues plaguing our parliament. It doesn't conform to the methods or structures of canonical parties, and may be able to win 6 senate seats with only 1% of the primary vote (with a trick I call the Senate Preference Hack). If you're interested, it is named the Neutral Voting Bloc and the website is http://nvbloc.org/
-- Max
Introduction
Development of Bitcoin has become dysfunctional. This post isn't about either side of the block size debate, though the debate itself is the inspiration for this post. While the correct course of action may not be apparent, the divide in the community (and corresponding lack of direction) is apparent.
Despite this, some of the community seems steadfast in finding a solution and moving forward in particular directions against the advice of some core developers. Unfortunately, we currently lack the ability to determine who truly has the most support in the public debate, or even whether it is one-sided. Herein I propose a way to:
All of this occurs on-blockchain where needed, and there are no trusted entities besides the compiler-maintainers, which can again be chosen in a similar decentralized manner. In the future perhaps we can use zero knowledge proofs to decentralize the compiling stage, but for the moment this is not considered.
This proposal requires bitcoins to be burnt, or destroyed, in order to form consensus. While this may be unpalatable for some, it provides a way to secure the distributed consensus.
Ingredients
There are four key parts to this proposal:
I presume the reader is familiar with the first two: Bitcoin and Git. Bitcoin provides the blockchain and Git provides our method of managing source code.
GitTorrent is a recent development that allows accessing and hosting source code via a DHT, similar to accessing torrents via magnet links.
BlocVoting is a protocol of my own design that facilitates liquid democracy (or delegative democracy) on the blockchain.
The Burn-Graph
Bitcoin offers us an important lesson: the irreversible conversion or destruction of resources provides a method for converging to consensus in a distributed manner. By burning coins in a way that resembles proof-of-work we can secure a blockchain like structure to manage identities.
Coins are burnt by sending them to an OP_RETURN output that contains linking information, among other things, and makes it impossible for these coins to be spent. Each burn transaction points to one or two previous burn transactions, which in turn point to previous burn transactions, and so on. In this way, starting from a genesis burn tx, a graph (or list) of burnings can form. Like the blockchain has a tip with more cumulative work than any other, the burn-graph will have one node with more coins cumulatively burnt than any other. This is the head of the graph, and it is used as the basis of the weighting system. In this way, an identity's weighting is determined by the volume of resources destroyed (number of coins). Because a weighting will only be obtained (for the burner) if the burning ends up inside the burn-graph, there is an incentive to work off the top, in the same way that Bitcoin incentivises mining on top of the Bitcoin chain.

After the burn-graph is established we can extract a map (or dictionary, or list of (key, value) pairs) that will be continually updated. This map is between an identity (which can be a Bitcoin address) and a weighting. One increases one's weighting by burning more coins.
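To make this concrete, here is a rough sketch of head selection and weight extraction (the data structures, the max-over-parents accumulation rule, and the function names are my own assumptions; the proposal doesn't pin these down):

```cpp
// Hypothetical sketch of head selection and weight extraction for the burn-graph.
#include <algorithm>
#include <cstdint>
#include <map>
#include <set>
#include <string>
#include <vector>

struct BurnTx {
    std::string txid;                 // hash of the burn transaction
    std::string identity;             // the burner's identity (e.g. a Bitcoin address)
    uint64_t burned = 0;              // coins destroyed via the OP_RETURN output
    std::vector<std::string> parents; // one or two earlier burn txids
};

// Cumulative burn of a node: its own burn plus the heavier of its parents'
// cumulative totals (one simple rule among several possible ones).
uint64_t CumulativeBurn(const std::string& txid,
                        const std::map<std::string, BurnTx>& graph,
                        std::map<std::string, uint64_t>& memo) {
    auto it = memo.find(txid);
    if (it != memo.end()) return it->second;
    const BurnTx& tx = graph.at(txid);
    uint64_t bestParent = 0;
    for (const auto& p : tx.parents)
        bestParent = std::max(bestParent, CumulativeBurn(p, graph, memo));
    return memo[txid] = tx.burned + bestParent;
}

// The head is the node with the greatest cumulative burn; an identity's weight
// is the total it has burned within the head's ancestry.
std::map<std::string, uint64_t> ExtractWeights(const std::map<std::string, BurnTx>& graph) {
    std::map<std::string, uint64_t> memo;
    std::string head;
    uint64_t best = 0;
    for (const auto& [txid, tx] : graph) {
        uint64_t c = CumulativeBurn(txid, graph, memo);
        if (c > best) { best = c; head = txid; }
    }

    std::map<std::string, uint64_t> weights;
    std::set<std::string> seen;
    std::vector<std::string> stack;
    if (!head.empty()) stack.push_back(head);
    while (!stack.empty()) {
        std::string txid = stack.back();
        stack.pop_back();
        if (!seen.insert(txid).second) continue; // already credited
        const BurnTx& tx = graph.at(txid);
        weights[tx.identity] += tx.burned;
        for (const auto& p : tx.parents) stack.push_back(p);
    }
    return weights;
}
```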
Membership and Voting
Using this map as a membership list (open to anyone willing to participate in the burn-graph) we can then allocate a number of votes to each identity based on its weighting. Votes could be used to indicate a current preference for the hash of the git commit the voter prefers as the canonical source used for Bitcoin compilation. One option would be to implement direct democracy on the blockchain, but that would be inefficient.
Instead, I propose an implementation of Delegative Democracy. This would allow most individuals (who do not have the technical prowess needed to read and understand Bitcoin source code) to choose a delegate, which they can change at any time, who can vote on their behalf. This delegate may in turn have a delegate of their own. This allows core developers to hold the same responsibility and power as they have in the past, until they are unable to solve problems effectively amongst themselves. This inevitably happens from time to time, and so at this point the next layer of delegates can take matters into their own hands and vote directly. If this increase in participation is still unable to solve the issue, the process can continue until we reach something similar to direct democracy where everyone is participating.
Furthermore, when there is little controversy within the community delegative democracy is incredibly light, perhaps requiring only a few kilobytes per release cycle. During times of controversy it is natural that participation will rise, and so the space requirements will rise accordingly.
This voting network would not physically transfer tokens as some voting proposals do. Rather, a weighted graph of voters would be established and evaluated for each ballot.
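As a rough sketch of how such a graph might be evaluated for a single ballot (the names and the rule that weight flows to the nearest ancestor who voted directly are illustrative assumptions, not a specification):

```cpp
// Hypothetical per-ballot evaluation of a delegation graph.
#include <cstdint>
#include <map>
#include <set>
#include <string>

struct Voter {
    uint64_t weight = 0;    // weighting derived from the burn-graph
    std::string delegate;   // empty if the voter has no delegate
    std::string directVote; // empty if no direct vote, else a candidate commit hash
};

// Follow the delegation chain until we find someone who voted directly.
// Cycles or dead ends mean the weight simply isn't counted for this ballot.
std::string ResolveVote(const std::string& id, const std::map<std::string, Voter>& voters) {
    std::set<std::string> visited;
    std::string cur = id;
    while (visited.insert(cur).second) {
        auto it = voters.find(cur);
        if (it == voters.end()) break;
        if (!it->second.directVote.empty()) return it->second.directVote;
        if (it->second.delegate.empty()) break;
        cur = it->second.delegate;
    }
    return ""; // unresolved: effectively an abstention for this ballot
}

// Tally: total burn-weight behind each candidate commit hash.
std::map<std::string, uint64_t> Tally(const std::map<std::string, Voter>& voters) {
    std::map<std::string, uint64_t> totals;
    for (const auto& [id, voter] : voters) {
        std::string choice = ResolveVote(id, voters);
        if (!choice.empty()) totals[choice] += voter.weight;
    }
    return totals;
}
```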
Hosting Source Code
While (at this stage) we could hook up a git server to read the blockchain and publish information about the current git head, we can do better.
Using GitTorrent (source code) we can decentralize the source-code-hosting problem. Because we can decide on the latest commit with respect to the blockchain we no longer need to reference a) a hosted git repository, or b) a central authority. (These are the only two methods originally suggested, though the author does talk about using blockchain name resolution.) In this case, running a node to store and provide access to the Bitcoin source code would help the source-code-serving network (especially a node that tries to include as many branches as possible).
The final problem to solve is source code distribution. The same voting network is capable of voting on compiler-maintainers. These would be public key identities (probably well connected to real world identities) that would be responsible for deterministically compiling and hosting Bitcoin binaries, based on what the most recent ballot yielded as the git head. In this way we could at least know if any funny business was going on by comparing the various compiled binaries. There is the potential to decentralize this further, but for the moment the above is considered sufficient.
Funding Core Development
At the beginning I mentioned funding core development, though that has remained absent until now. There is no way that I know of to integrate this sort of funding in such a way that does not provide an advantage for an attacker. However, with sensible defaults we can heavily mitigate this possibility.
One possibility is for the protocol to mandate that each vote requires two outputs with some ratio between their values (such as 1:1): one an OP_RETURN output, and one a standard output. By default users would be encouraged to select a core developer to donate to, though they could specify any address (including their own) to direct the second output at. Whether to include this at all is a design decision perhaps best left for later, but the possibility of funding development is tantalizing.
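A minimal sketch of what such a structural rule could look like when validating a vote transaction (the field names and the strict 1:1 check are assumptions for illustration):

```cpp
// Hypothetical structural check: a vote transaction must pair an OP_RETURN
// output (the vote itself) with a standard output (the donation) of equal value.
#include <cstdint>
#include <vector>

struct TxOut {
    uint64_t value;  // satoshis
    bool isOpReturn; // true if this is the OP_RETURN (data) output
};

bool IsValidVoteTx(const std::vector<TxOut>& outputs) {
    if (outputs.size() != 2) return false;
    const TxOut& a = outputs[0];
    const TxOut& b = outputs[1];
    if (a.isOpReturn == b.isOpReturn) return false; // exactly one data output
    return a.value == b.value;                      // enforce the 1:1 ratio
}
```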
Summary
Using some novel technology and the immutability of the blockchain we can construct a framework to help manage decisions around what code to include in Bitcoin, including hard-forks and block-size updates. The unique combination of these technologies allows for a completely decentralized development process without the implicit trust that the Bitcoin community has endured (and now suffers from). We can host code trustlessly using GitTorrent and publish our preference for which code to use as the Bitcoin source code. Using a proof-of-burn based weighted graph we can ensure we maintain decentralized consensus using similar game theory to Bitcoin itself.
Background: we (Nathan and I) built a political party in ~3 weeks on FB via ads. Cost about $3500 or so.
The biggest thing I learnt was to try as many things as possible via split tests, and if possible try them quickly. We used https://adespresso.com and I can't recommend it highly enough. If you're going to spend more than a few hundred dollars it's well worth it (basic is $50/month and it's not on a contract or anything - maybe even has a free trial).
In particular, a few words can make a HUGE difference (like 30%+ in clickthrough rates). An example of that was the following two:
In particular, I'd suggest split testing images and headlines, since they're the bits that grab people. We noticed a smaller difference with the descriptions/wordy bits, but it was still significant (15% or so). If you can nail both then there's an easy 50%+ you can iterate towards, which goes a long way, esp. with an NFP or when you're on a budget. Images alone can also make a 20%+ difference, so make sure to try a few.
Also, make sure to install the facebook tracking pixels so you can track conversions accurately. Facebook can tell you a lot about the people clicking through and signing up just by using that. For example, most clickthroughs we had were older folk (70% male, 50% over 50 or so, which was surprising for a tech based political party) HOWEVER, most signups were younger folk, so you can sort of start to tailor things better with that data.
I wouldn't recommend split testing over interests because you end up with smaller running ads, and you get smaller sample sizes => more money to get significant results. (And you also get info on who is interested in what anyway)
You get a frequency measurement through adespresso too; keep an eye on that because if it stays low you can pump the ad way more (as the same ppl aren't seeing it again). As frequency gets higher your dollars won't stretch as far.
I'd suggest heavy experimentation early on and then pump the ads that work really hard later on. That way you build engagement.
Oh yeah, on that point: split tests create multiple ads, which means comments appear on only one single ad, not all of them. Engagement matters, so try not to create new ads late in the campaign, because you lose that engagement.
Excerpt from The Flux Guide.
Most democracy systems judge themselves by a criterion like “how well does this system represent the preference of the people”. We don’t think that’s a very good thing to measure against, though. Sometimes the preference of the people is evil (Hitler was voted in, remember), or maybe it’s just bad for them (even if they think otherwise), or maybe the outcome is very sensitive to change (like Colombia's peace referendum at 50.2% to 49.8%).
A fundamental background assumption is that the best form of democracy is comparatively better than other forms of democracy. This means that two (or two hundred) countries using different forms of democracy can be measured against one another, and the one with the better form of democracy will, over a long period, end up better off economically and socially.
Because we’re talking about long term challenges and improvements we go beyond mere preference. If a democratic society, in 1901, decided to ban electricity and kept that ban up to now, they’d be far behind the rest of the world. In other words: their preference didn’t help them, and accurately reflecting that preference wouldn’t have helped them. What would have helped them is a progress-oriented democracy, one that made it untenable not to adopt electricity. To some degree this goes on today, but we still see social issues taking decades to play out, instead of years or months, precisely because we wait to get ‘over the distribution hump’ of opinion. We take a long time because we try too hard to accurately project the ‘will of the people’!
IBDD solves this issue through the reorganisation of political power. It makes it expensive for people to hold society back, but also makes the political process accessible. This relationship is designed to allow policy improvements to happen as quickly as possible, which is crucial if we want the best democracy possible, and the best life possible - for all of us.
This doesn’t mean that just anyone can do anything - that could never work. It does mean we all have a chance to contribute though.
We use a different measure of democracies. Instead of obsessing over the preference of people, we focus on the progress a democracy provides to its people. Because progress is fundamentally connected to reality and truth we need to look at how and from what policy is formed, instead of who the policy is formed for. This means we need to bias towards good explanations instead of public preference. There are always some members of society years ahead of the curve, and we should focus on empowering them instead of satisfying a relatively non-specialised [1] majority.
Footnotes
Summary: Early December I travelled to Brazil to present this at the Wired Festival. In this 30 minute reproduction I talk about how political power and fallibilism interact and how we can take advantage of that to produce far superior policy.
Slides
I recently wrote the following in a correspondence with a colleague. It was too good not to post, so I hope the following helps you gain a grasp of why Flux and IBDD exist, and their founding philosophy.
The Flux Movement is founded on (what I now call) Deutschian Fallibilism. It's an evolution of Popperian Fallibilism and David Deutsch's book The Beginning of Infinity does a great job of explaining both the theory and exploring the breathtakingly profound consequences. In one light it's a book about epistemology, but the consequences are far-reaching.
In essence the book's thesis is that explanations are the basis of human knowledge, that new knowledge can always be created, that all evils are due to a lack of knowledge (at a fundamental level: knowledge of how to rearrange the atoms around us to alleviate some problem), and thus that the progress of people (and the prosperity linked to it) is essentially unbounded - it's just a matter of creating the right knowledge.
The book also discusses myriad other topics, such as morality, Dawkinsian memes, aesthetics, AI, democracy, and many others. All these topics are brought together in a breathtakingly profound, consistent argument.
We've taken his lessons and built a novel form of democracy we call Issue Based Direct Democracy. IBDD comes at the problem of democracy from an entirely novel position: that democracy should be designed around solving problems, not around reflecting the preference of the people.

The reasoning for this is quite simple: if canonical democracy (RD, LD, or DD) is able to make decisions that reduce the prosperity of its citizens, then those decisions are wrong, regardless of how the majority feels about them. Furthermore, it's very simple to see that a democracy that biases towards prosperity (via the creation of new knowledge) must at a minimum always be at least as good as canonical democracy, since it is able to create policy of greater benefit than other forms of democracy can.
The core method of creating new knowledge is a cycle of conjecture and criticism. The two processes we know of in the universe that create knowledge (evolution and human creativity) both use this method. In biology, the conjecture is gene mutation, and the criticism is death. In human creativity, a great deal of conjecture and criticism goes on in our minds before we even know we have a thought (perhaps this is what is happening in those 'empty' moments of creativity before an epiphany hits), and after we publish our thoughts, peer review and debate take over. Karl Popper originally called these two things conjectures and refutations.

It's not that canonical democracy doesn't have this: election cycles, party politics, and citizen movements all involve conjecture and criticism, but the cycle is far too slow and far more akin to biological evolution than to human creativity.
We took the idea of conjecture and criticism and designed a system of democracy around it that we believe biases more correct knowledge. In other words, IBDD is a truth machine (or rather, a more-truer-than-what-we-had-last machine).
Because we've built democracy around epistemology (instead of the usual 'who should rule?') we view the policy creation process uniquely too: policy is simply an application of the explanations we have around certain phenomena, plus instructions on how best to change our reality to cause some effect.

Thus the most important part of policy formation should not be voter buy-in, but how well that policy is crafted and how good the underlying explanation is - and this is what should decide which policies are enacted. (The book goes into great detail about how we can tell if explanations are good or bad before we even test them, something that's very useful in policy, since testing is often slow or inconclusive - no doubt due in part to the problem of creating a 'control' in society.)
Furthermore, since mistakes are impossible to avoid (and are necessary for progress) we put incredible emphasis on the ability to self-correct. This is one of the reasons the movement is called 'Flux': the outcome of a particular issue in IBDD can change day to day, based on how voters intend to interact (or perceive the benefits of their interaction), based on everything else that's going on at the time, and based on the new knowledge that's been created recently. (This also has the great benefit that it can help improve IBDD strictly more easily than the system that came before it.)

We enable this dynamism by treating policy as an ecosystem (as opposed to treating each policy in isolation) and, crucially, by allowing voters to move their political power between issues via an auction market and a neutral central liquidity token (think of it like money and stocks, except that you can't sell liquidity tokens, and everyone starts on an even footing and is constantly pushed back towards an even footing). The end result is that by giving opportunity cost to every choice a voter makes, they are incentivised to move their political capital into the issues that are most pressing to them, ending the 'tyranny of the majority' and the 'Boaty McBoatface' problem.
An additional side effect is that both Arrow's Impossibility Theorem and Balinski and Young's apportionment paradox are avoided entirely. We don't 'solve' the problems (after all they are mathematical theorems) but avoid them by constructing democracy differently.
These discoveries are what has spurred Nathan and me into creating The Flux Movement. Put another way: we see the action of holding on to this information and not doing our best to instantiate IBDD as a grave crime against all current and future people, and thus we have a moral duty to bring this to bear as fast as possible (provided we don't destroy any methods of correcting mistakes).
As a final note: The Beginning of Infinity teaches us that a key property of knowledge is persistence, which is to say that knowledge instantiated in reality (by, say, building a device) should persist. Thus IBDD's ability to persist is indicative of whether it actually is better suited to solving the problems we face as a society. We predict that once we start using IBDD anywhere (provided it's suitable) it should be rapidly adopted, since the benefits of IBDD (producing new knowledge in policy efficiently) are associated with increasing the prosperity of the population that adopts it.
If we are right, IBDD will thus lead to the end of tyranny globally, since no system is better able to create new knowledge than one designed to do exactly that.
Comments welcome: leadership@voteflux.org
Max
In 1960 Popper published his paper Knowledge without Authority, in which he argued that the central question of political theory should not be 'who should rule?' but something entirely different.

What he put forward has become known as Popper's Criterion - a criterion I believe IBDD satisfies with flying colours.
Popper's Criterion
In response to the 'who should rule?' question, Popper writes:

There are two important political questions here:
Question 1 has morphed to become known as Popper's Criterion:
There are two important ways we must consider IBDD in the context of Popper's Criterion. The first regards the nature of authority and how IBDD is fundamentally anti-authority, and the second is a direct application where we investigate exactly how well IBDD is able to 'detect whether a ... policy is a mistake, and to remove ... policies without violence when they are'.
I would also like to extend Popper's Criterion to include the system of decision making itself. If we are unable to change the underlying system without violence — seeing as the system itself is a policy — then such a system must fail Popper's Criterion.
Authority and IBDD
A note on the use of 'good' and 'bad' when describing policy and explanations: I am using these words in the same manner as Deutsch uses them in The Beginning of Infinity. He does a great job of explaining them in detail, so I will summarise: good explanations are those which are hard to vary and account for the phenomena they purport to. Not all good explanations are the most correct, but the most correct explanations must be good. Since policy is a way to take an explanation relating to some problem, and provide instructions on how to solve it, policy inherits this property. Policy which is based on a bad explanation is bad policy, and vice versa for good policy.
Given that 'who should rule?' is 'who or what is the authoritative source of good policy?' in another form, the concept of authority plays a strong role in current political systems.
At a fundamental level Issue Based Direct Democracy (IBDD) attempts to acknowledge that there is no authority on good policy (or knowledge). In competing democratic systems, the answer is often in the form:
While IBDD is a form of direct democracy, we do not claim that any one source of policy is particularly better than another. Rather, we reason through the process a little differently.
While majoritarian systems (RD, DD, LD) do incorporate many sources into a ruling faction — either through a single party or a coalition of entities — those sources are tightly bound, and without political disturbance, or specific arrangements of representatives in a legislature, they are inextricable from one another. Often this collection of knowledge is referred to as a 'platform'.

As explained in the Flux philosophy white paper, Redefining Democracy, the reason for this centralisation lies in the fundamental nature of majoritarian democracy.
Thus, in order to treat solutions to social problems independently we cannot have a system ignorant of the distribution of knowledge in society. That is to say, it must weight different sources of knowledge differently.
The selectorate theory of political power pioneered by Bueno de Mesquita et al. teaches us that the current weights on sources of knowledge are determined by political leaders and their need to satisfy key supporters (the selectorate). However, biasing towards the sources of knowledge needed to keep a leader in power does not strongly bias towards good explanations. This accounts for why these sources are largely inextricable from one another in such systems, and why there are fundamental limits on the speed of progress in RD, LD, and DD.
The problem we are concerned with now becomes 'how should different sources of knowledge be weighted?', and it is to this question that IBDD has a direct answer.

IBDD delivers its answer in the form of a specially crafted, persistent, and inclusive market. This allows the distribution of political expression on any one issue (or rather, on a particular policy conjectured to solve that issue) to be decided by the aggregate input of all voters considering that conjecture in the context of all present and future issues.
This allows voters to individually weigh the value of their conjectures and criticisms by their perceived benefit at a given time in a highly dynamic way.
It is possible, at this point, to argue that this is just authority in another form: that really we are still answering the question of 'who should rule?', merely with a different answer.

However, this neglects other important aspects of IBDD, and particularly neglects the effect of such a system over time.
If we factor time into the above answer, we reach a more compelling one: no single source of policy is ever permanently selected as the authority, because the weighting of sources is continually revised by the market. Framed this way, we can answer the question in a manner compatible with fallibilism, and indeed one which directly contains many of its core components.

Note: the bulk of this essay concerns the 'policy' part of Popper's Criterion, and not the 'leaders' part. This is because IBDD does not have leaders in the sense that RD does. There is no Prime Minister or President (at least not in the current formulation); however, this does not mean that leaders will not arise. While I don't deal with this in the main body, I've included an addendum that discusses this concern.
IBDD as a system for correcting mistakes
A system for correcting mistakes must necessarily act over some time period, and thus a key question becomes 'how quickly can mistakes be corrected?'. It is trivial to see that a slower system (RD, for example) is less useful than a faster system (which I conjecture IBDD is).

Note: this is not to say that a system which instantly enacts legislation is superior, as that itself may be a way to introduce mistakes. I am in favour of a mandatory delay period for most legislation passed in IBDD, which would help protect methods of correcting mistakes without impacting the speed at which conjecture and criticism is offered.
Popper's question of how we can detect and eliminate mistakes has been explained in general quite well by both Popper and Deutsch. Their conclusion is a cycle of conjecture and criticism (or conjectures and refutations, as Popper put it). Seeing as we've included this near verbatim in the answer above, I will not stress the point again.

Instead, let us think about the rate at which conjecture and criticism are applied in various democratic systems.
Representative Democracy is a highly exclusive system, where the ability to provide real conjecture and criticism (by proposing new policy or blocking the implementation of new policy) is a right given to very few actors. Without being the governing party, or having a majority through an arrangement of smaller parties, it is near impossible to do much beyond making a point. Additionally, in situations like a bicameral system where the same party has a majority in both houses, the ability to propose conjecture and criticism is reserved for those in the ruling party. A minority party may occasionally be given a chance to offer criticism, but only when some members of the ruling party defect on an issue and permit a temporary alliance.
There is, of course, a matter of mass conjecture and criticism when the terms of some members of either house expire, whereby the aggregate of all voters may determine some other party should be the ruling party (the second party acts as conjecture by its very existence, and voters provide the criticism by voting for it over the ruling party).
The fact that conjecture and criticism are instantiated in RD to some degree does imply the possibility of progress, but because the set of conjectures and possible criticisms is so small we should not expect this process to be efficient.
Thus to increase the rate, volume, and quality of conjecture and criticism we must remove this exclusivity and simultaneously introduce the possibility of specialisation — allowing some conjectures and criticisms to be weighted more highly than others.
Furthermore, because we cannot predict the source of the best conjectures (as new knowledge is unpredictable) we cannot exclude any voter from this process.
These two conjectures in concert imply that an efficient system must be 'direct' (as in sourced from individual voters) in some way.
This is reassuring in that it creates a sense of 'permissionlessness' around the ability to propose new conjectures and apply criticism: a property we observe in many instances, such as the progress of science and the success of some businesses over others (though not all environments for business remain equal, and so there may be some external bias applied in the economic case).
Using Markets to spur the creation of new options and good policy
Introducing a market for votes is not sufficient to provide good quality conjecture and criticism, though. It is easy to imagine some market which includes a rule like 'Person A's votes always count for more than anyone else's'. It is clear that in such a case Person A has been selected as a better source of knowledge than other participants. This is a problem because it is not possible to consistently predict which sources of knowledge will be most correct (though I'm sure we can make good guesses), and to assert otherwise is prophecy or authoritarianism.

However, it is possible to embody this uncertainty in a market by weighting all sources equally, at least initially. Through a well crafted market we will see that this produces interesting incentives aligned with the production of good policy and, crucially, the creation of new options.
We also need to lay out some ground rules for this market, particularly that suggesting new policy requires an opportunity cost. At a minimum, suggesting policy should be restricted over time, such that a bad actor cannot flood the system with bad conjecture. I believe it is better, however, to unify the opportunity cost of suggesting policy with the opportunity cost of voting on policy, such that they use the same resources. In this latter case there is a slight bias against the proposer of a policy, increasing the likelihood of bad policy not passing, and incentivising the creation of better policy from the get-go.
An easy way to facilitate this is through a one-to-many market, where a central liquidity token (LT) mediates value between all issues and the ability to conjecture new policy.
Furthermore, the introduction of this LT allows us to manipulate the supply: we can apply some constant inflation (say, 30% per annum) and distribute the newly created LTs evenly among the voting population. This provides an equalising force, constantly pushing political power back towards an even distribution, and also incentivises voters to act sooner rather than later (introducing opportunity cost over time as well as over the conjecture of policy).
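As a tiny sketch of that equalising force (the 30% figure is just the example above; the equal per-voter split is an assumption):

```cpp
// Hypothetical: apply one inflation period and hand the newly minted liquidity
// tokens out evenly, nudging balances back towards an equal distribution.
#include <numeric>
#include <vector>

void ApplyInflation(std::vector<double>& balances, double annualRate, double yearsElapsed) {
    if (balances.empty()) return;
    const double total = std::accumulate(balances.begin(), balances.end(), 0.0);
    const double minted = total * annualRate * yearsElapsed; // e.g. 0.30 * total over one year
    const double perVoter = minted / balances.size();
    for (double& b : balances) b += perVoter; // every voter gets the same top-up
}
```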
By auctioning off the right to propose new policy (in the same way votes for various issues are auctioned, except there is no recipient for LTs spent in this way) we can control the rate of conjecture and ensure we maintain some sensible political environment (whatever that means). Since the number of spots for new policy conjecture is limited, we can adjust this up or down depending on how we perceive the system to be functioning. The exact method of this adjustment is an unsolved problem at this time, though I do not believe it is that relevant to the current discussion, provided edge cases are excluded.
In order to understand this setup, let us consider some cases that may arise in a voting market.
Case 1: Bad Policy Conjectured
If a proposed policy is based on a bad explanation then its predictions will not hold. In extreme cases it is easy to see that a majority of voters will be harmed and thus they will not relinquish their vote on that issue. In this case it is probably not worth the cost of even proposing it.
However, the case also exists where most people are not harmed, and the group in favour of said bad policy (A) is strictly larger than the group to be harmed by it (B). In this case A and B will both bid on the auction for votes on this policy, and thus both expend some LTs in an effort to pass or block the proposal respectively.
In the case such a policy passes, it is in group B's interest to make a counter-conjecture, at a minimum undoing the policy put forward by group A. In order to defend against this group B will once again need to expend LTs (like group A) to defend their policy.
Group B can continue this tactic indefinitely, which eventually ends in a 'war of attrition' where both groups' ability to effect change is severely limited.
If we presume that group B has some good policy (if their policy is also bad, we will look at that in cases 5 and 6), we can see a pattern emerge:
As long as group A attempts to maintain their bad policy they will need to expend LTs, limiting their ability to effect change in other areas. Unless their policy is so beneficial that this is worth it (which is dubious, as we've already presumed group B, who are directly harmed, is small), the most politically productive path involves not continuing to put forward this bad policy.
Case 2: Bad (but Good) Policy
There is also the case where a good explanation is used to craft policy (perhaps by a leader aligning or acquiring key supporters, and thus benefiting some group disproportionately), but it is presented under the guise of a bad policy.
This is similar to case 1, whereby the maintenance of such a policy becomes a burden to those who benefit from it. In the case of a political faction this can stress the internal relationships as it requires a contribution of opportunity cost from all involved.
If some subset of this group is able to create good policy that provides an equivalent benefit to them (without the harmful externalities of the original policy) then they are incentivised to break from the main group and put forward their policy instead. Their optimum strategy, then, is to create new knowledge (in the form of their good policy) and become independent.
In this way IBDD acts as a decentralising force on the political landscape, and encourages specialisation.
Case 3: Good Policy simpliciter
In the case that good policy is conjectured that has very little in the way of negative externalities (granted, problems are inevitable so really this is a case where those negative externalities are seen as less important than the externalities of other policies) there is little incentive for any group to oppose such a policy, as doing so removes some of their ability to change other policies that would result in a greater improvement to the group.
However, this case is probably far less likely to occur than case 4, where some group already has some bias in their favour, and a good policy would remove or reduce that bias.
Note: for the purposes of this analysis I treat the removal of an existing bad policy as a good policy, roughly equivalent to the introduction of a new good one.

Case 4: Good Policy that harms some group
A good example of this might be increasing or reducing taxation on some strata of society. It is easy to conceive of a taxation system that unfairly biases some group, and a good policy may be to reduce or increase some tax bracket.
In this case we see a similar pattern to case 2 (Bad (but Good) Policy). The group that unfairly benefits (A) is incentivised to defend the past policy that biased them, and the group that is harmed (B) is incentivised to put forward some good solution.
A similar equilibrium exists, whereby the continued defence of bad policy by group A restricts their ability to effect change. In the issue of taxation this does include a monetary component (though you can argue all policy includes some economic component) and so it might be a little 'stickier' than other issues.
If policy just goes back and forth, in a manner such as adjusting tax brackets, it's easy to see this continuing forever in some kind of back-and-forth rally, or possibly converging (in the manner of a compromise) to some middle ground. But this does not preclude the possibility that there is some better solution out there, and it incentivises its creation in the same way as case 2.
Note: I personally suspect a far superior method of taxation exists than income/sales tax, where tax is levied implicitly through deliberate inflation and prices are expressed in terms of percentage points of M1. This is an example of a possible conjecture that would break a taxation rally as described above (provided it really is a good policy).
However, if the conjectured policy really is good, the equilibrium lies in favour of the good policy. That is, group A has less incentive to criticise the policy, and group B has every incentive to conjecture it. For both groups, passing such a policy results in a net increase in their ability to affect other issues.
Case 5: Two opposing sides, both with Bad Policy, unwilling to cooperate
In the case where two opposing sides are competing for control of some policy area (environmentalists vs coal companies might be an easy example) they might both put forward bad policy.
The result of such a conflict (as explored above) is an expensive stalemate, due to the perpetual need to acquire votes to blockade the opponents, where neither group really gets what they want, and their ability to act on other issues is diminished.
Thus we would expect to see both groups either with very few, or quite a lot of LTs. One possibility is an active standoff, where each group continually puts forward their policy and we oscillate between them, and the other is a passive standoff where both groups stockpile LTs, ready for the 'final confrontation'.
In both possibilities it is clearly not an optimum strategy for the groups.
Case 6: Two opposing sides, both with Bad Policy, but willing to cooperate
If we take case 5, but add in their willingness to cooperate, then they are likely going to either seek a compromise or create some new option.
In the case of compromise they might manage to pass a new policy, but neither side will really be happy about it, and so there's always an incentive to resume the standoff in ambition of a more beneficial policy.
However, if the two groups have had a passive standoff and found some new option which is a good policy, they are now unified as one force, and both have substantial stockpiles of LTs they can use to further their agendas in new ways.
Thus the optimum strategy for groups in this position is to focus on creating new options, and removing the bad policy which came before it.
Case Conclusions
In each of the above cases we observe that the optimum strategy involves creating new options, cooperating with other groups, and producing good policy.
In answer to Popper's two questions:
We can see now that IBDD answers both quite well. We are able to avoid damage by introducing constant criticism and aligning incentives in the direction of preventing damage (and producing good policy, which should lead to progress and prosperity), and the detection and elimination of errors is enabled by direct participation, and incentivised via market dynamics.
Popper's Criterion and IBDD as a system - what happens when IBDD becomes the mistake
There is, of course, a final matter we must deal with. Because problems are inevitable, it is necessary that even if IBDD is far superior to our current systems of governance, it will one day become a pressing problem itself, and voters will once again need to support some new system.
I do not think the method of determining and implementing that new system needs to be discussed here. Because IBDD applies to a wide and diverse set of possible voters, there will be many instances of it, and it is entirely reasonable to presume that once some new system is tested somewhere and appears more desirable it will be rapidly implemented elsewhere. What such a system will look like is, of course, unpredictable, though I suspect it may enshrine fallibilism even better than IBDD does.
The particular thing we must discuss here is the difficulty of replacing various systems from within without violence.
Let us modify Deutsch's definition of Popper's Criterion such that it applies to systems:
I do not see any reason that this should be an invalid statement, or subject to some criticism that Popper's Criterion is not.
When viewed in this light, representative democracy does not fare well. If it did, democracies around the world would converge to a common implementation.
Part of the problem with RD is that such a change requires a referendum or overwhelming consensus in most cases, and additionally that the consent of the representatives who may be harmed by such an improvement is required. In the best case a referendum must be won, and so our best threshold for RD is the smallest set of voters larger than 50% of the voting population.
We can therefore predict that a system which fulfils Popper's Criterion better than RD should enable modification to the system with less than 50% of the voter base. Similar to some of the cases above, I predict a modification to IBDD constructed via a bad explanation should not fare well within IBDD.
However, in the case that such a modification would really improve IBDD, then the threshold for that improvement should be far lower than 50%, as it is good policy, and therefore those who agree with it are not incentivised to participate except in the case it may fail to pass.
In this way, error correction is so strongly embedded in IBDD that we should expect IBDD to exclusively improve itself more efficiently than the swap from whatever came before IBDD.
I know of no better way to satisfy Popper's Criterion than this.
As a final note, it is possible that IBDD does not act as predicted, and that returning to RD would result in better policy. However, even in that case, we see that at worst IBDD has the same threshold to change as the best case for RD. Thus, if IBDD does turn out to be a mistake, it will be no more difficult to change back than it was to make the change in the first place. However, I doubt anything as extreme as this will come to pass. It's unlikely that IBDD would reach dominance without being criticised fatally, if such a criticism exists. Given it is likely to first appear in small communities (such as municipal governments) or to hold the balance of power in some legislature, any such fatal flaws will likely come to the fore far earlier than a complete transition to IBDD could occur.
Conclusion
We've now seen that not only does IBDD satisfy Popper's Criterion (by providing mechanisms to remove bad policy and bad systems without violence), but also embodies many of the core philosophical conclusions Popper himself came to when thinking about the problem of progress, authority, and knowledge.
In this way, it is possible to view IBDD as an embodiment of Popper's Criterion: a system which takes the removal of bad policy and systems so seriously as to optimise for this effect.
Through the dynamic redistribution of political power we diversify groups, spurring specialisation and breaking down factions. This diversification then also increases the ability of any group to criticise established policy and affect its removal.
Furthermore, by carefully constructing an egalitarian market for political power, we actively incentivise the creation of new options by increasing the value of new conjecture.
Finally, due to the above properties we predict IBDD should improve itself far more effectively than any system it replaces, and will be replaced by a system that excels even further in this manner.
It occurs to me that the transition from RD to IBDD should mirror the transition from anti-rational memes to rational memes. It was not that anti-rational memes prevented progress absolutely, but that they were far less powerful at producing progress than rational memes (including by restricting the manner and form of such progress), and the groups which adopted rational memes progressed faster and became far more resilient. Although rational memes still require constant defence (as I imagine IBDD will too), they are sufficiently embedded in western philosophy that an immense change would be needed to extinguish them - unless that change were an improvement, of course.
Is it possible that IBDD embeds the values and explanations of our current rational memes more deeply than other forms of democracy? It is certainly the case that the rational meme of Fallibilism has compelled this author to design, improve, and implement IBDD.
It is my hope that IBDD is able to prove itself, is able to persist once instantiated, and that it is able to help spread rational memes. For if IBDD works, people will attempt to understand it, and if they do they will undoubtedly come up against the rational meme of Fallibilism, and at such time they might themselves find that it compels them into action.
Addendum: Leaders in IBDD
I do not envision IBDD having leaders in the same way our old political systems required leaders. Rather, leaders will naturally arise through their ability to effect change, and as we have seen, in a system like IBDD this requires producing good policy.
It may seem slightly circular, then, to apply Popper's Criterion to leaders in this case, as they are by definition not bad. Those aspiring leaders who are bad are heavily restricted via market mechanisms from ever making much of an impact.
However, it's easy to imagine (and wise to presume that) a leader may arise through the production of good policy, and then attempt to abuse their position, in effect transitioning into a bad leader, and, by Popper's Criterion, we must have an effective way to remove them without violence.
I predict this will unfold in a similar way to the cases of bad policy we considered earlier:
A leader who was previously good, but has now become bad, will have a track record of producing good policy. However, the new policy she produces will not adhere to this track record.
The only way in IBDD to obtain far greater political power than the average voter is to convince voters to delegate to you. Because voters are free to change delegates at any time (or revoke it entirely), leaders will need to continue providing some benefit to the voter — if they do not, there is every incentive for the voter to select a new delegate.
In this case, the act of becoming a bad leader actively diminishes that leader's ability to effect change, totally satisfying Popper's question 'How can we organise our political institutions so that bad or incompetent rulers cannot do too much damage?'
However, there is the case where a group may follow a leader without requiring progress from them. Examples of this might be extremist groups, interested in only their ideology, and prevented from accepting new options due to anti-rational memes.
In such a case, it is unfortunate but a reality of IBDD that they are a near-permanent inefficiency (I say near-permanent as they may have some capacity to produce good policy, and over time their anti-rational memes should weaken). However, if this opposition really does embody an anti-rational meme (which it probably will if IBDD acts as predicted) then the problem of convincing them to abandon a particular anti-rational meme is the same problem we've faced since the enlightenment.
There is, of course, the case where such a group has overwhelming power, and uses this to enforce their anti-rational memes. However, such a problem, just as before, is not unique to IBDD. Its potential is a constant in all societies, and all democracies. Critically, though, it would suffer from the same problem of all static societies — suppressing the ability to create the right knowledge to solve problems. As it stands, I consider this scenario no less likely in RD as IBDD, and crucially, the meme of IBDD works harder to propagate rational memes than RD does (though that may just be my bias or perspective).
Addendum: A Hypothesis from the Selectorate Theory
While investigating how selectorate theory, IBDD, and Fallibilism interact I noticed an extension of a pattern in selectorate theory.
As states transition from dictators to democracies, a key factor is the increased industrialisation of that state. Such progress necessitates new key supporters (perhaps in new industries), and as the number of key supporters grows dictatorships become less stable.
In the end, the only choice is to transition into democracy, though that doesn't guarantee the democracy will be stable at that point.
However, we are also witnessing a growing discontent with how democracy is done today (Redefining Democracy covers this in greater detail). Transitioning back to dictatorship cannot support more key supporters (keys), so it's entirely reasonable to conjecture that these keys are looking for a solution better able to accommodate them.
Due to IBDD's ability to incentivise the creation of new knowledge, it seems likely that IBDD possesses a far greater potential for satisfaction of keys, and we may, for the first time in human history, witness a transition from centralised leadership, where the leader wrangles the keys, into a decentralised leadership where keys are supported by the very system itself.
From this line of reasoning I created a hypothesis. I believe that if this hypothesis is true (for the most part) IBDD should inevitably succeed over RD, since it is predicted to support more keys.
That hypothesis is:
Whether this is true or not remains to be seen, but it certainly feels like it has merit.
Addendum: Quadratic Voting and Popper's Criterion
I mention Quadratic Voting (QV) briefly in the above essay, and have mentioned it in several other places, but I have not yet actually formed an argument as to why QV does not satisfy Popper's criterion as well as IBDD.
The setup for QV is this:
I have a few problems with QV. There are also some points I think make it superior in particular ways to RD, DD, and LD.
One advantage of QV is that it breaks down majoritarianism and allows for specialisation.
However, this specialisation is far more available to wealthy voters than to the vast majority of voters.
Furthermore, in the same way that splitting Bill Gates' $88 billion fortune between all people on Earth would only result in $12.57 per person, socialising the result of a quadratic vote does not substantially redistribute wealth from the point of view of other voters. It does not substantially increase their ability to participate.
A related problem is that by splitting a kitty between multiple voters you obtain far cheaper votes, even if the split is only between 2 people: one voter buying 2y votes pays $(2y*2y), which is twice the $(y*y + y*y) that two voters pay in total for the same 2y votes. Thus there are potential economic attacks.
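To make the arithmetic concrete, here's a minimal sketch of the cost rule (my own illustration, not part of the original argument; the numbers are arbitrary):

```python
def qv_cost(votes: float) -> float:
    # Under quadratic voting, buying v votes costs v**2.
    return votes ** 2

def split_cost(total_votes: float, participants: int) -> float:
    # Total cost when `participants` people each buy an equal share of the votes.
    return participants * qv_cost(total_votes / participants)

y = 10
print(qv_cost(2 * y))          # 400: one voter buying 2y votes
print(split_cost(2 * y, 2))    # 200: two voters buying y votes each -- half the cost
print(split_cost(2 * y, 100))  # 4: the cost keeps falling as the kitty is split further
```

The total cost of a fixed number of votes falls in proportion to the number of participants sharing the kitty, which is what makes the splitting attack worth worrying about.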
QV is good at ensuring that (when dealing with equally wealthy advocates) the proposal with wider support wins, and good at preventing minority rule to some degree, but this comes at the cost of decreased specialisation, and thus a diminished capacity to effectively use the knowledge spread throughout society.
IBDD has some similar properties, but also has (in my opinion) some vastly superior properties. For example, in IBDD the cost (in liquidity tokens) to acquire 1 vote in any given issue is the same regardless of how many you'd like to acquire (presuming it's not so much that supply and demand comes into play, but each vote is still just as valuable as each other vote regardless of the acquiring party).
Additionally, because IBDD uses a closed economy cut off from real world wealth, it is much more difficult for the ultra-wealthy to manipulate the outcome in their favour.
Now, QV does satisfy Popper's criterion in some cases: it's definitely possible to remove bad policy without violence.
However, it may also be possible for QV to violate it. Consider the case where an ultra-wealthy individual employs many thousands of people to take part on their behalf. These workers buy votes at a vastly reduced rate (compared with our wealthy person purchasing them all themselves).
The effect of this is a mostly economic attack to prevent the removal of bad policy. In addition to less money being included to begin with, the socialisation of proceeds is also spread far more in favour of our rich attacker, and so their risk and overheads are lower.
While this does still result in some redistribution of wealth in the direction of the dissenters, we've set up an economic equilibrium between the benefit of the policy and the cost of maintaining it. It is therefore theoretically possible such a system would prevent the removal of that policy indefinitely, or at least long enough for it to matter.
In the case of IBDD, however, gathering such a following requires opportunity cost on the follower's behalf, and additionally reduces the ability for the attacker to interact with other legislative processes. Thus I consider IBDD to more effectively solve this problem than QV.
Addendum: IBDD and Budgets
This is an unsolved problem.
One possible solution is simply to average the opinion of all voters. As far as 'wisdom of the crowd' goes, this doesn't seem too bad a place to start.
It does not seem like a budget is the sort of thing one person or source would be able to produce well consistently, and competing budgets aren't guaranteed to account for everything as well as we may like.
Would it be appropriate for budget requirements to be attached to policy and automatically granted when the policy passes? Could we design some taxation system to facilitate this?
I am sure that if IBDD begins to succeed this problem will receive attention, and all problems are soluble.
Addendum: IBDD and War
This is also an unsolved problem. War is never a simple thing, and seems to lend itself more to centrally controlled systems than decentralised ones.
The naive approach seems to suggest a referendum, and certainly many people would like it if their governments were required to hold a referendum before going to war, but it's not clear to me that this would necessarily produce better or worse results.
In the limit, IBDD is predicted to unify political systems into a global system of governance (note: not a global government). Perhaps it is the case that this will never need to be solved? What if we later encounter some outside threat we did not predict, and then don't have the capacity to deal with it?
I believe this problem to be soluble.
Additional Reading
Slides
Philosophy of Decentralising Democracy (FB Video)
Here's a guessing game that I believe is secure.
Setup
You want people to guess the answer to some question. The answer is in ASCII, and all lowercase.
Additionally, you want automatic (though not instant) payout for correct answers. Fully trustless, though.
That question could be anything with a certain answer. Example:
To do it, this is what's required of the winning guesser. (This example is written in the context of a trustless host network like Ethereum.)
Winning Players Actions:
Step 1
Come up with the answer. Got it? Great.
Step 2
Publish to the contract:
commit(bytes32 commitHash)
Where
commitHash = keccak256(string answer, address yourAddress, bytes32 powOnAnswer)
(Keep these details private for the moment)
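For concreteness, here's a rough sketch of how a player might compute that commitment off-chain. This is my own illustration, not code from the original scheme: the way the three fields are packed together is an assumption, and Python's sha3_256 is only a stand-in for Ethereum's keccak256 (they differ in padding), so a real client would use a proper Keccak-256 implementation from a web3 library.

```python
import hashlib

def keccak256(data: bytes) -> bytes:
    # Stand-in only: hashlib's sha3_256 is NIST SHA-3, not Ethereum's Keccak-256.
    return hashlib.sha3_256(data).digest()

def make_commit_hash(answer: str, your_address: bytes, pow_on_answer: bytes) -> bytes:
    # Mirrors commitHash = keccak256(answer, yourAddress, powOnAnswer).
    # Here the fields are simply concatenated; the real packing scheme is assumed.
    return keccak256(answer.encode() + your_address + pow_on_answer)
```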
Aside
Why should we commit this collection of things, and why do we have `powOnAnswer`? Does `pow` mean proof of work?
Step 3
After some time (set by the contract), publish this transaction:
reveal(string answer)
Wait some more time (set by the contract) and publish:
claim()
Done!
How and why you're safe
The most significant feature in this whole enterprise is automatic payouts. There are serious design constraints when we want automatic payouts to coexist with completely public and trustless information.
The relationships we need are straightforward, though.
You need to be able to publicly reveal an answer with a near-certain guarantee that a correct answer can't be stolen from you.
That means that everyone has to know you have a specific answer (and agree on that) before you reveal it.
Collecting your winnings reveals the answer. But you also had to commit to your answer some time beforehand. That means (with a well written contract) the only people you have to worry about swooping in and taking your winnings are those who had already made a commitment prior to yours. (And logically the winnings should be theirs, anyway.)
There are two important time delays to make this happen:
If the minimum time for (1) is, say, an hour, then it's incredibly difficult for an individual to censor a single transaction, which means many confirmations are near-certain.
Additionally, including your address in the outside hash renders replay attacks ineffective.
After such time you're free to publish your answer.
However, you don't publish just the answer and your address. You publish additional information. This is the `powOnAnswer` parameter above.
The exact means of calculating it aren't important, for reasons we'll see. Let's presume you're doing something like a password hashing algorithm (e.g. PBKDF2). Your task is to recursively apply a hashing function 500 million times. (If you think this is a bit weak, you'll see shortly that the parameters are freely adjustable.)
Why on earth would you recursively apply a hash like that? You could never possibly prove something like the correctness of such a hash on Ethereum!
Well, let's think about how you'd attack such a scheme.
First off, the contract needs to store something to check whether an answer is correct or not. The usual way is to publish a hash, or something similar. But whatever is published needs to be verifiable (and cheaply). If you publish just `keccak256(answer)` then you're liable to be brute forced (especially if the answer is something like a name). So that won't work.
What if you could publish something derived from the answer that was work-intensive, but quickly verifiable?
Not only is that the core idea behind proof of work (hard to generate, easy to verify), but we can also use it in a totally different way: to introduce entropy.
The reason brute forcing a name is easy is that there just aren't that many combinations.
We can make it much harder to brute force by salting the answer with a time-intensive hash of the answer. It's not that we care about verifying that proof; it's that generating it is energy intensive and requires knowledge of the answer in advance to be efficient.
So if you're guessing wildly, then you will have to hash each guess (in this case) 500 million times before you can compare it to the answer! (And, of course, there's no way to guess the many-times-hashed guess in advance, since it's 32 bytes and pseudorandom.)
When you add all this together, I think you end up with a guessing game secure enough to resist any attack that isn't attacking the blockchain itself (e.g. censoring transactions).
(Note: to be properly secure we want to include a nonce (which is public) in at least the first hash in our recursive chain. This is to make sure that the work (hashing) being done is unique enough that it likely hasn't been done before.)
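Here's a sketch of how `powOnAnswer` itself might be generated (again my own illustration: the iteration count, the choice of hash, and where the nonce is mixed in are all assumptions, and sha3_256 is just a stand-in for whatever slow hash you'd actually pick):

```python
import hashlib

def pow_on_answer(answer: str, nonce: bytes, iterations: int = 500_000_000) -> bytes:
    # Seed the chain with the public nonce and the (still private) answer, then hash
    # recursively. Generating this requires knowing the answer up front; anyone
    # guessing wildly has to redo the entire chain for every guess.
    digest = hashlib.sha3_256(nonce + answer.encode()).digest()
    for _ in range(iterations - 1):
        digest = hashlib.sha3_256(digest).digest()
    return digest
```

(With the full 500 million iterations this takes a while by design; use a small `iterations` value if you just want to test it.)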
Algorithm drafts
What `findAnswer` says is:
First: find a function that will tell you if some input fails the proof of work criteria (`failsProofOfWorkCritera target`) - let's call that `F`.
Then, do the following in order:
One at a time, take a possible answer, and check whether `F` returns true or false. (Note: you don't have to generate all the possible answers ahead of time; you can make a new one up each time you need one.)
If `F` returns true, the guess fails the criteria: discard it and move on to the next. When `F` returns false, take all remaining possible answers (with the most recent guess in the first position) and take the first one (i.e. the answer).
Finally, run the answer through whatever function you need to give you the right data to commit to the smart contract above. (Which would need to know about your address in this case.)
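The draft itself isn't reproduced here, but a rough Python rendering of the procedure described above might look like this (the names are mine; `fails_pow_criteria` plays the role of `F`):

```python
from typing import Callable, Iterable, Optional

def find_answer(candidates: Iterable[str],
                fails_pow_criteria: Callable[[str], bool]) -> Optional[str]:
    # Take candidates one at a time (they can be generated lazily). Discard any
    # guess that fails the proof-of-work criteria; the first one that passes is
    # taken to be the answer.
    for guess in candidates:
        if fails_pow_criteria(guess):
            continue
        return guess
    return None
```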
I've long thought about compiling and documenting things I find that I think are worth watching / consuming
I'm starting to do that here: https://xk.io/stuff-worth-watching (includes more than just videos)
Legend:
really good stuff
2015-ish - The Beginning of Infinity, David Deutsch (2011) http://beginningofinfinity.com/
2017-late - The Fountainhead, Ayn Rand (1943) https://www.aynrand.org/novels/the-fountainhead
good stuff
2019-01-17 - I started a github repo called 'awesome-notes' where I'm going to note things I want to link back to or make publicly accessibly easily. https://github.com/XertroV/awesome-notes
2018-06-12 - The $30,000 pocket dial, Louis Rossmann (2013) https://www.youtube.com/watch?v=MyZnKjFWrxo
2016-ish - Everyday Economics or How Does Trade Relate to Prosperity?, Marginal Revolution University (2016) https://www.youtube.com/watch?v=t9FSnvtcEbg&list=PL-uRhZ_p-BM6HPXRBdCIPgHri2eWhMv2t
other stuff
Slides
Edit 2020-08-03: I don't stand by this article. I think it's poorly written, and on a subject I don't know enough about. I'm glad I wrote it because it's been part of my progression to taking ideas (and seeking their improvement) more seriously. I've left it basically untouched besides these edits for posterity's sake.
Edit (June): this isn't written well enough, and I intend to move this to a category off the main page in the near future.
I could be wrong, but if I am I don't know why, so please enlighten me. As it stands, this feels robust to me atm. Comments/discussion: https://github.com/xk-io/xk-io.github.io/issues/5 (FI link once posted) Alternative: Since GitHub is not great for discussion, if you – for whatever reason – do not think it's a good enough makeshift-forum: please post on Curi’s open discussion, and if possible please post a link in the github thread or email/msg me so I’m aware.
Refutation of , or, if that's wrong, then the alternate: we're all wrong and going to die soon regardless. (humanity: poof)
If lockdowns were comparably bad (i.e. similar enough) to the near-unmitigated spread of coronavirus and thus covid -- e.g. say the US drops the measures taken up recently; like if they lifted them today -- if lockdowns were that bad, then it means whatever society we had leading up to that point has failed, epistemically and culturally. Whatever values were held were not good enough. That is because: even at society's pinnacle of technology, information, and ability (or at least capacity) to reason, despite all that, we were UNABLE to devise a method of dealing with this problem that is preferable to the problem itself. A stumbling at the beginning of infinity. The view that lockdowns are comparably bad MUST imply that ALL popular strategies -- for society, political theory, whatever -- were INCAPABLE of being preferred to the literal ravaging of society by a new disease we couldn't deal with.
Remember, this isn't just about the US or any one country, it's about all countries. Extreme cases you thought supported your case have no weight here, because they're the exception -- unless they pointedly refute this conjecture. The only 'out' here is literally new knowledge: knowledge young enough that it cannot have had enough time to even spread as a meme, let alone be instantiated as any real sort of governance system. No political position popular enough to be widely known at this point can satisfy this. It should be surprising to most people. That said, the people claiming the lockdowns are killing thousands of people don't have any of those new ideas.
If this horrid reality is the case, and if some countries have done better than others, then maybe (but only maybe) there is hope. However, if it is somehow the case that lockdowns are that bad, and that such a country suffers the same consequences in the long run, then this is the first time in modern (post-WW2) history we have truly failed as humans and people. The first time we've been entirely incapable of any improvement. We were not able to rise to the challenge, and this is the fault of everyone in that arena. Left, Right, Libertarian, Liberalist, Fascist, Communist, and the worst of the lot: centrists[3]. They all failed. Moreover, we're starting to get to the point that maybe democracy failed. Which would be earth-shatteringly bad, and, in the absence of an equally earth-shattering innovation, gives us no reason besides blind hope for any future beyond this century, give or take. If we go backwards, we die; we can't support the level of specialisation needed for progress if we lose too many people. In any case:
Either:
If I have not made an error, the only conclusions I know how to reach must be one of the above (maybe I've missed a possible conclusion, or you know something surprising I don’t).
Since (1) contradicts the principle of optimism, it is not the case[2]. (2) must eventually be true unless we outpace it; it is possible it plays a role, but we need to compare it with (3), wherein I think the thesis is incorrect, as it implies contradictions in reality and no sufficient reason is given for e.g. implying the principle of optimism is false.
So: (1) is wrong. (2) might be right, but there's no reason it needs to be if (3). If not (3) then (2), which means we don't have much time left.
[1]: Not necessarily if you’re reading this via FI – I just don’t know enough to claim that. I mean there's the chance a good idea comes from elsewhere, I just don't have a good reason to believe that atm given the audience I predict.
[2]: I mean it might be that the principle of optimism is wrong, but we’d need some pretty extraordinary reasoning (or: that’s my suspicion).
[3]: centrists are the worst of the lot because they lack even a superficially consistent philosophy, which is the only way to stay centrist. At least you or I can try to persuade the others, but centrists can run around hiding behind whichever idea takes their fancy. A consistent philosophy must make some self-consistent and wide-reaching claims about reality, which is incompatible with the notion of centrism, except in the most myopic of situations. This is, ofc, presuming the others are willing to entertain rational debate without violence; if you go past that point they are - for all intents and purposes relevant here, at least - indistinguishable.
I've been doing a lot of work on philosophy recently.
Public work takes two main forms: the tutorials are all up on youtube, and a site where I've been publishing related work.
I've come to realise the current site I have is pretty bad for higher volume stuff, and jekyll is bad in general for maintaining a library of criticism. So I'm thinking about potential new things to use for a site.
I want to migrate everything, have a large amount of customizability WRT posting, and it should do comments well. Building stuff is okay if it's feasible.
Oh, and it shouldn't have silly issues like jekyll does (e.g. sorting a list of pages can fail if they don't all have a field, e.g. `date`, in their frontmatter; it's hard to provide a default if `date` is missing, and even if it is present on all pages, you can't sort the pages by `date` if some of the values are `Date`s and some are `String`s b/c Ruby doesn't know how to compare them).
This is post-dated to when I posted the video.
In Jan 2021 I unendorsed all my past work and ideas. There are details in the video, but basically it's an FYI that I have changed my mind on some important things, and I'm not sure that I believe the things I used to (but revisiting all those things takes energy, so that won't necessarily happen soon/ever).
I set the date of n/2036 to the date I made the video, but I don't think that was right. The video was posted, then, sure, but I didn't make the post then. I made the post ~now. I don't think I'll date things that way in future.
Also, I said I post-dated the post, but post-dating refers to dating something in the future (which this wasn't). It was just normal-dating.
I've been focusing on philosophy more lately. During late January I decided to post my self-unendorsement video. I've been adding to my curi.us microblog regularly. That is where you should go to find my latest work. I post almost everything I produce to that thread.
For the past few days I have been making daily updates to my makeup vlog and will continue doing that for the next week and a bit. I've also created a vanity URL for the playlist: https://xk.io/muv. I link the playlist for these videos (and most videos individually) in my microblog, too.
I will continue posting to my microblog. I consider my blog at xk.io mostly deprecated -- I don't anticipate posting much here until I find a new blog system.
If you've visited my site before, you might notice it's different now. It's a complete forum -- the blog part is done through permissions.
The structure is a tree and based on discussion at https://curi.us/2396-new-community-website-features-and-tech. It's highly customizable and supports things like CNAMEs for sub-forums (which can be ~disconnected from the main forum tree via permissions).
Every node in the tree supports RSS feeds. Hit the RSS button or add `.feed` to the end of a URL to get the feed.
I've made an Open Discussion node at /n/88 -- you can post whatever there.
Nodes can be viewed as any other type of node; atm there are 3 types: `root`, `index`, and `topic`. Add `/as:<view>` to the end of a node's URL to view it via that method, e.g. https://xk.io/n/0/as:topic. The main forum-nodes have descriptions but -- atm -- those aren't shown except via view-as or the RSS feeds.
ATM you need to register an account to post/reply, but anon comments are mb on the cards -- just not a priority right now.
For a summary of the harassment against Elliot and other members of the Fallible Ideas community, see https://curi.us/2412-harassment-summary.
For the full context of this post, see David Deutsch Harassment Update (curi.us), including the comments of that post.
I support Elliot Temple. Elliot is right, this is a major problem.
The harassers (a small number of CritRats close to DD) are unwilling to take steps to deescalate the situation. They are unwilling to take even the most reasonable of steps to at least coexist in peace. AFAICT, one of Elliot's main goals is to be left alone (by CritRats) so that he can pursue his philosophy work. Elliot's actions are consistent with that goal.
Here is an outline of David Deutsch's position. He is the leader of a community; they don't have an official name, and usually they're referred to by the name CritRats ('CritRat' is a contraction of Critical Rationalism, the name of Popper's philosophy). This informal group, including DD, tolerates the harassment (or worse).
If you were DD and had written The Beginning of Infinity, a book that claims:
If you had written that, given numerous reports that repeated harassment was being done in your name, do you think it would be reasonable to, say, write a tweet along the following lines? "I've recently heard allegations of harassment targeting Elliot Temple by a small number of my fans. I unequivocally denounce harassment, against Elliot or anyone else. It's immoral and, if it is happening, it needs to stop." A tweet like this is not much; it doesn't even acknowledge that a problem exists (just that if it does exist, it should stop). Is that not a reasonable minimum to expect from a philosopher?
DD doesn't post to forums or blogs anymore, AFAIK. He does post to twitter, though. Not one of DD's 10.9k+ tweets mentions 'Harassment'. At what point does this kind of avoidance become negligent? At what point does the issue become bad enough that the leader of the community has a responsibility to make at least some gesture speaking out against bad behavior?
Earlier this year, Elliot had to disable open comments on his blog[1] due to the harassment.
When that happened, Elliot provided me with an account (as I'm sure he did for other members of the CF/FI community). Why is this important to say publicly?
I'm not a sockpuppet — that's obvious to anyone even vaguely familiar with my history.
DD's tacit approval[2] of harassment is unacceptable. It should stop. Offending CritRats should stop. It's not okay.
https://curi.us/2469-video-talking-about-david-deutsch-and-andy-b-problems ↩
It's actually worse than just tacit approval. From https://curi.us/2476-david-deutsch-harassment-update
I haven't seen that evidence, but I trust that Elliot is being honest. Throughout the ongoing harassment, Elliot has repeatedly shown restraint and respect, and I have no reason to doubt his account of the situation. I've also witnessed the harassment first hand -- in that instance LessWrong moderators acknowledged curi (Elliot) was being harassed when they banned both him and the sockpuppet. ↩
This comment (#5) was made by me: https://curi.us/2476-david-deutsch-harassment-update#5
Why do you live?
I live to use my mind to improve my life.
How?
By thinking.
About what?
Ideas worth living for.
Self-hosted replacement for
I started doing outdoor bouldering about a week ago (we've been in lockdown for the last ~2 months). I found an unclimbed wall that's close enough to home (for current exercise-distance restrictions) and have been working on that recently. I've posted some videos of this to my max's bouldering playlist on YT. I've also documented the wall on theCrag: https://www.thecrag.com/climbing/australia/new-south-wales-and-act/sydney-metropolitan/area/5156697663
Before lockdown I was 4-5 weeks in to my 'send all the purples' goal at my gym (purples are bouldering routes of a particular grade) -- my gym resets a few sections of wall every week and has a 6 week cycle.
Sent my first outdoor V3 on Sunday
This is a repost of a post I made to reddit on June 13th 2017. I found it again recently and wanted to document it here.
The Problem (what's the optimal encoding?)
Reddit user tilrman pointed out:
2 more comments follow in that branch before I posted my soln
My Solution
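The original comment isn't reproduced here, but as a rough sketch of the idea of a factorial-base representation (this is the standard factorial number system; it may differ in detail from the exact encoding in the reddit post):

```python
def to_factorial_base(n: int) -> list[int]:
    # The digit with place value i! ranges over 0..i, so divide by 2, then 3, then 4, ...
    digits = []
    radix = 2
    while n > 0:
        n, remainder = divmod(n, radix)
        digits.append(remainder)
        radix += 1
    return list(reversed(digits)) or [0]

def from_factorial_base(digits: list[int]) -> int:
    # Horner-style evaluation in mixed radix (the inverse of the above).
    n = 0
    for i, d in enumerate(digits):
        n = n * (len(digits) - i + 1) + d
    return n
```

For example, `to_factorial_base(5)` gives `[2, 1]`, i.e. 2×2! + 1×1! = 5.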
clarification: in "base factorial" I'm using "base" in the same way as base 10 (our number system) or base 2 (binary numbers).
I like a lot of Louis Rossmann's videos and his thoughts.
This topic is for discussing and collecting good ones. Here's two; one recent, one old
I think freedom and universality must have some deep connection.
If you have the freedom to do something, that means -- wrt that context -- you can do whatever you like. You're unconstrained; that aspect cannot (or at least should not) be a bottleneck.
That gives you some unboundedness wrt method, or content, or whatever the freedom relates to. Freedom of speech removes a bound on speech. Freedom of thought removes a bound on thought.
Freedom enables universality; and without certain freedoms, you cannot have certain universalities.
I thought that freedom and thinking were, like, orthogonal: both necessary for philosophy and a good life, but neither sufficient. Now I suspect there's more there than I realized.
Climbing gyms (and everything else) have just reopened (for the vaccinated) in Sydney, following a ~3.5 month lockdown. Now that climbing is frequent and regular again, I've been reconsidering climbing goals.
Before lockdown, my goal was to send every new grade 3[1] (out of 6) problem, every week, for 6 weeks (I'd done 5 weeks when lockdown started). 6 weeks is a full rotation, so achieving this means that I would have consistently been able to project any purple in 2-3 sessions. The purpose of this goal was to emphasise consistency over results (better to be capable of any grade 3 problem than some grade 3 and some grade 4).
The current goal I've settled on is to send every grade 3 climb (in the gym) in a week -- there's about 24 or so (give or take depending on what gets set). I sent 10 this morning (over about 2hrs). I don't think I'd have been able to do that if I hadn't done some outdoor climbing during lockdown, tho. So far, so good. I have been going through them methodically, with a few exceptions (picking ones I thought I could do towards the end of the session, for example). So far no grade 3 has been particularly challenging, though crowd-beta has helped a bit. Major bottleneck atm (session-to-session) is probably endurance -- going to monitor that and take corrective action if it doesn't improve this week.
[1]: the problems are color-graded, so grade 3 of 6 is purple, and 4 of 6 is black. there's no precise conversion to V-grades, but I guess purple is V2-5 and black is V4-? (6 or 7, maybe). ↩
Correction: I sent 13 / 25 purples (grade 3) on Monday.
I sent 9 of the remaining 12 this morning. so only 3 left for Friday morning -- lots of buffer!
I also sent my first grade 5 (yellow)! https://www.instagram.com/p/CU8YxTrBmeU/
I sent just 1 of the remaining 3 purples on Friday.
WRT this goal, I could go again this weekend, but I think it'd be better to rest. That means I'll fail the goal. What happens wrt the goal then?
The purpose of the goal was to build consistency and mastery of a particular grade. Or maybe that's what the goal aligned with -- it wasn't, in and of itself, to master the grade. Rather, it was a breakpoint I chose which aligned with mastery. If I didn't meet it, then that's a criticism of the idea that I've mastered that grade. Mastering a grade isn't a quality I'd lose unless I stopped climbing, though (or had some injury or something). So what does failing this goal really mean?
Well, I'm not yet inconsistent when it comes to purples (since I had been sending them all prior to lockdown) -- so more time's needed to determine that. It is a criticism of mastery though; it means there are still bottlenecks at this grade.
The reason I chose the original parameters that I did -- 1 week to send every new purple for a full cycle -- was that 1 week allowed for reasonable buffer, and a full cycle meant ~25 new climbs. New routes are set Tuesday-Thursday, so 4 days of the week were the period of stability before the next wave. Once those new climbs were set, a failure to send one that was taken down is a decisive criticism of both consistency and mastery. Also, sending all the new routes (before the next wave was set) meant that I'd have excess capacity to try new climbs, or work on bottlenecks.
Even without sending every purple, I still have excess capacity -- different climbs use different resources. And since I shouldn't lose the ability to send-every-purple if I keep improving, why would I ever stop?
I think mb goals like this are sometimes motivating (they have been to me) but there's a problem if you get invested in them. That's because failing a goal becomes, like, something to take personally. But that isn't what these sorts of goals are really about. A goal like this is a regular opportunity to take a benchmark -- and mb find a weakness in your abilities that you didn't know was there. Knowing about that is a good thing! If you don't know it's there (or if you know and ignore it) then it becomes a thing you can't rely on. That's a problem if it ever needs to be foundational to other, more advanced knowledge. (Like, it's a dependency.)
This is consistent (and I think related) with a different idea: achievement of goals is pointless if you're not challenged. There's no point in me setting a goal if I know it's already too easy. The achievement is hollow.
Moreover, if there are always meaningful goals to choose -- meaningful problems to solve -- then choosing goals that are foregone conclusions must be meaningless. My guess is that -- for some projects -- it is moral to choose meaningful goals, and immoral to deliberately choose meaningless goals when meaningful ones could be chosen instead. (Sometimes those projects/decisions only impact a single person, in which case they're probably amoral -- that's okay. Morality is about interpersonal harm, and people are free to live their own lives (wrt themselves) how they wish. People are also under no obligation to maximize morality -- just to not act immorally -- so provided there's no malice, most goals are okay.)
Getting back to purples: I think new climbs is an important breakpoint, so I'm going to monitor that going forward. I'm also thinking I'll adopt the send-every-purple-every-week goal again, indefinitely -- it's a useful warning sign of bottlenecks, and means there's some variance in my excess capacity (which I can use to my advantage). Maybe a good breakpoint to re-evaluate at is some level of consistency with blacks, like sending >50% each week. That seems reasonable atm.
There is a discussion about the morality stuff at the end of this post on the CF forum: https://discuss.criticalfallibilism.com/t/whether-morality-is-primarily-about-social-interpersonal-stuff-or-about-dealing-with-reality-effectively/372
I've read Galt's speech.
In n/10017 (the parent of this post) I said:
I was so very wrong about this. Morality does concern one's actions, since some actions are right and some are wrong. But morality is not about harm (interpersonal or otherwise), at least, no more so than it is about rocks.
I am starting to understand.
I'm not sure enough, yet, to say what morality is about -- I could try, but I don't want to rush it. I want to know it before I claim to know it. Soon, I will be sure enough.
I'm grateful that JustinCEO (on the CF forum) noticed this part of n/10017 and challenged me on it, and I'm grateful that he and ingracke found it worth their time to discuss it with me.
I am grateful for Ayn Rand -- her life, her works, and most of all: her mind. Right now, I'm grateful particularly for Atlas Shrugged.
I have a special gratitude for Elliot, not only for his participation in the above-linked thread (among many other things, too many to list here), but also -- and, right now, primarily -- for his labor, dedication and ideas that safeguard the closest thing I've ever known to a sanctuary.
Why do I express my gratitude? I admire their virtues. Why am I grateful? Their virtues have allowed me to profit, and I will continue to. And I know that they have and will, too.
Of those two problem purples, I sent one on Monday evening, and one on Thursday evening. There were 6 new purples this week (3x set on Tuesday, and 3x on Thursday). On Wednesday morning I sent the 3x new purples that were set on Tuesday, and on Thursday evening I sent the 3x purples that were set that day (and 2x of the blacks that were set that day, too).
Video: https://youtu.be/Vr6X2Z6Ag2w
Source code: https://github.com/XertroV/43-attempts/
The method is documented in the repo readme.
make sure to `chmod +x` it and it should just work (provided you have curl and extended grep support)
e.g., governor
re long-running learning conflict: the more exploratory side means exhausting excess capacity (excap) quickly (manageable when surveying, but not when trying to complete a project).
this script will speed up (or slow down) video and audio via ffmpeg.
put the following in e.g., `~/.local/bin/mod-video-speed` and `chmod +x` it.
license: public domain
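The original script isn't reproduced here; as a stand-in, here's a minimal Python sketch of the same idea (my own reconstruction, not the original shell script). It assumes ffmpeg is installed and uses the `setpts` video filter and the `atempo` audio filter; note that `atempo` only accepts a limited range per instance, so very large or very small factors would need chained `atempo` filters.

```python
#!/usr/bin/env python3
# Hypothetical stand-in for mod-video-speed: speed up (or slow down) video + audio via ffmpeg.
import subprocess
import sys

def mod_video_speed(src: str, dst: str, factor: float) -> None:
    # factor > 1 speeds the file up; factor < 1 slows it down.
    subprocess.run([
        "ffmpeg", "-i", src,
        "-filter:v", f"setpts=PTS/{factor}",  # scale video timestamps
        "-filter:a", f"atempo={factor}",      # change audio tempo without changing pitch
        dst,
    ], check=True)

if __name__ == "__main__":
    infile, outfile, speed = sys.argv[1], sys.argv[2], float(sys.argv[3])
    mod_video_speed(infile, outfile, speed)
```

Usage would be something like `mod-video-speed in.mp4 out.mp4 1.5`.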
archiving for my own future purposes + i don't know if this is replicated anywhere else on the web.
You can post whatever you like here. Also, anyone else can post whatever they like, too.
This section is inspired by curi.us:
Source: https://curi.us/2202-how-discussion-works