Roger Ver says, "The goal is not that everyone can afford to run a node. The goal is that everyone can afford to use Bitcoin." I say the goal is both, and its name is IOTA: to be a user and to be a miner. IOTA = 0 fees + micropayments + no miners. Bitcoin Cash = the miners' crypto.
Is Bitcoin really viable as peer-to-peer electronic cash? Due to high transaction costs and slow transaction speeds, how can we ever use Bitcoin to purchase, for example, a Sprite and a bag of potato chips at a gas station? If I want to send a micropayment of, say, 1000 sats on-chain, I can pretty much forget about it, because no miner will pick it up and the transaction hangs in the mempool indefinitely. I had a lot of hope for the Lightning Network, but I am now starting to have doubts about its long-term success. What happens when someone using Lightning wants to settle a microtransaction on the Bitcoin blockchain? How secure is Bitcoin really? Remember when CZ of Binance wanted people to thank him for not ordering a re-org to recover lost funds? Isn't Bitcoin mining dangerously centralized? What if in the future there is a terrorist attack by a government or other criminal organizations that involves bombing or burning large Bitcoin mining facilities? Satoshi writes in the white paper that "we propose a solution to the double-spending problem," but has this really been achieved? Double spends are still possible with a 51% attack, so what solution to double spending has been achieved? Can't large mining pools conspire to attack Bitcoin? These are concerns I have about the long-term viability and intrinsic value of Bitcoin.
FLETA is a blockchain platform for decentralized applications aimed at solving some of blockchain's biggest hurdles. It has made advances on the scalability problem while keeping the blockchain fast and decentralized through a unique consensus algorithm known as Proof-of-Formulation. Formulators are the key to FLETA's technology: they are the block generators who mine and create new blocks. The mining process is configured so fairly that every Formulator gets a chance to generate a block, which prevents conflicts and abuse because every miner is equal. Generated blocks are confirmed and signed in real time by Observer Nodes, which are responsible for securing the network, preventing DDoS attacks, and making forks impossible. Forks cannot happen on FLETA because 3 out of 5 Observers are required to sign and confirm each block; the first block with 3 signatures is the only valid one. Proof-of-Formulation has been tested and verified in real-life scenarios. It is capable of achieving 14,000 transactions per second while remaining highly secure due to the exclusive connection between Formulators and Observer Nodes.
The Matic Network hopes to improve the scalability of Ethereum by using PoS side chains without losing the critical elements of decentralization. Matic's multiple side chains could potentially scale to millions of transactions per second in the future. Transaction fees are inexpensive, and its Plasma framework means new blocks are generated in less than 2 seconds, making Matic a well-suited platform for micropayments. FLETA is using Matic's Plasma framework solutions on its Mainnet. FLETA has an auto-swap feature between the FLETA ERC-20 token and its native FLETA coin. The two projects have cooperated to improve the Deposit & Withdrawal options on FLETA and make them more decentralized.
TomoChain is a blockchain platform that uses Proof-of-Stake Voting consensus to address scalability. It is based on a network of 150 Masternodes. This technology allows a network throughput of 2,000 transactions per second and a 2-second block time. TomoChain can be used by developers to build their own DApps, and by taking advantage of the TomoX Protocol, they can launch a decentralized exchange. The TomoP Protocol is a privacy feature allowing anonymous transactions: when enabled, it conceals information about the transacting parties, the addresses used, and the transaction amounts. FLETA and TomoChain have signed a technical agreement that foresees the use of the TomoZ Protocol, which allows fees to be paid with different tokens. FLETA will be creating a FLETA token that can provide broader use cases within the TomoChain ecosystem.
Neo is an open-source blockchain platform that uses smart contracts to digitize assets. Ownership of physical items from the real world can be registered, traded, and transferred via the Neo blockchain. Neo is a strong development platform that supports multiple coding languages and has an experienced developer community. FLETA and Neo have signed a strategic partnership that entails the use of NeoVM on FLETA's Mainnet. NeoVM is a lightweight and scalable virtual machine for smart-contract development, and FLETA will benefit significantly from its cross-platform compatibility. Once deployed, FLETA and Neo will cooperate on several projects. The first planned project is a blockchain-based clinical research data registry platform for the medical industry, built on real-world data. The project aims to stimulate medical data research and help researchers use the data efficiently.
Wanchain is a cross-chain compatible infrastructure that seeks to connect the world of decentralized finance into one interoperable ecosystem. Different blockchain systems are incompatible with each other and operate on their own. Wanchain's answer is to create wrapped tokens of the original assets and incorporate them on the Wanchain platform, which allows the coins to be used in ways that weren't possible before. For example, a wrapped Bitcoin token can be used in an Ethereum smart contract to take advantage of the Ethereum blockchain, and the token can easily be exchanged back to real Bitcoin using Wanchain technology. Wanchain is based on the Ethereum codebase but uses a PoS consensus algorithm. The project has established a partnership with FLETA; with this understanding, both companies expect to further improve the interoperability and performance of their systems.
WINk is a gaming platform offering live casino games, virtual sports, slots, and e-gaming. WINk was previously known as TronBet, and it runs on the TRON Mainnet. The platform supports several different tokens: TRX, Dice, USDT, BTT, and of course, the WIN token. Besides being a gaming community, WINk also features a staking option: by staking WIN tokens, users get the chance to earn daily staking rewards from the platform's profits. WINk plans to integrate with Wallet Street, the social data platform of FLETA. Wallet Street allows stakeholders to communicate and create online communities, and the two platforms will start a joint marketing campaign. Wallet Street lets its users construct virtual buildings based on the number of coins they own; these structures become visible on a virtual map on Wallet Street. A WINk building will be constructed on Wallet Street's map to advertise the WINk project and its token.
Cooperation is essential for the crypto industry as it opens new possibilities. The sharing of information and knowledge is beneficial to success, and entering new markets allows companies to expand their user base. A broader reach increases use cases for blockchain technology and helps achieve the ultimate goal: mass adoption. FLETA has realized the importance of strong partnerships, and during 2020 its services will be taken to a whole new level.
Welcome to dashpay! If you are new to Dash, we encourage you to check out our wiki, where the Dash project is explained from the ground up with many links to valuable information resources. Also check out the menu bar on top and the sidebar to the right. We have very active Discord and Telegram channels where the community is happy to answer any and all newcomer questions.
Purpose of this post
This post is directed towards community members who wish to rapidly access information on current developments surrounding the Dash cryptocurrency. Lately we've noticed how the pace of events has picked up significantly within the Dash project, due to many years of hard work coming together and pieces falling into place ("Evolution" is finally here. It's called Dash Platform). For the purpose of keeping these many pieces of information together, however, singular Reddit submissions are insufficient. Thus we decided to maintain a pinned thread collecting blog posts, interviews, articles, podcasts, videos & announcements. Check back regularly, as this thread will always feature the latest news around Dash, while also serving as a mid-term archive for important announcements and developments. Journalists looking for news and contact opportunities regarding Dash, please bookmark:
https://preview.redd.it/tb8bvi3nec351.png?width=1920&format=png&auto=webp&s=2c02d9d52f7b00d460ad0ccf87d069e1fc2d31b2 The first-layer scaling solution comprises 3 different scaling mechanisms: · Sharding · Hard fork · SegWit. In my last two articles, I have already covered hard forks and sharding, so in this article I will focus on the last scaling solution, i.e., SegWit. What is SegWit? SegWit stands for Segregated Witness, i.e., separating the signatures from the transactions. In this process, certain parts of a transaction are removed, freeing up space so that more transactions can be added to the chain. The idea behind this method is to work around the block size limit on blockchain transactions. In simple terms, SegWit changed the way transaction data is stored, helping the Bitcoin network run faster and more smoothly.
It was proposed as a soft-fork change to Bitcoin's transaction format in Bitcoin Improvement Proposal 141 (BIP141).
Problem Statement: In the Bitcoin network, blocks are generated every 10 minutes and are constrained to a maximum size of 1 megabyte (MB). As the number of transactions grows, more blocks need to be added to the chain, but due to the size constraint, only a limited number of transactions fit in each block. The backlog of transactions can cause delays in processing and verification; sometimes it takes hours to confirm a transaction as valid, and things slow down further when the network is busy. The Solution: To work around the block size limit and speed up transactions, each transaction is divided into two segments: the unlocking signature (the "witness") is removed from the original portion and appended as a separate structure at the end. The original portion still holds the sender and receiver data, while the new "witness" structure contains the scripts and signatures. The original data segment is counted normally toward the block limit, but each byte of the new "witness" segment counts as only one-quarter of a byte.
Digital signatures account for roughly 65% of the space in a given transaction.
SegWit is backward compatible, which means nodes updated to the SegWit Bitcoin protocol can still work with nodes that haven't been updated. SegWit measures blocks by block weight, calculated as: (tx size with witness data stripped) * 3 + (tx size). Since Segregated Witness moves witness data into a separate structure outside the original transaction data, it prevents transaction IDs from being altered by dishonest users. It addresses signature malleability by serializing signatures separately from the rest of the transaction data, so that the transaction ID is no longer malleable. History: Pieter Wuille, a Bitcoin developer, first proposed the concept of SegWit. On 24 July 2017, as part of the software upgrade process, Bitcoin Improvement Proposal (BIP) 91 locked in at block 477,120, committing the network to activate Segregated Witness. Within one week, the bitcoin price saw a spike of 50%. The share of transactions using SegWit increased from 7% to 10% in the first week of October, and as of February 2018, SegWit transactions exceeded 30%. However, a group of China-based bitcoin miners were unhappy with the implementation and forked to create Bitcoin Cash. Lightning Network - Layer 2 solution: The Lightning Network operates on top of Bitcoin and is referred to as a "Layer 2" component. It is an off-chain micropayment system designed to improve transaction speed. SegWit acts as a base component for the Lightning Network: by fixing transaction malleability, SegWit allows this secure payment system to process a very large volume of transactions off-chain. Advantages of SegWit: · Prevents the transaction malleability problem. · Prevents the signature malleability problem. · Helps scale the Bitcoin network. · Increases effective block capacity. · Reduces transaction fees. · Acts as a base for the Lightning protocol.
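The block-weight formula above can be sketched in a few lines. This is a toy calculation following the BIP 141 rule (weight = base size × 3 + total size, with a 4,000,000-weight-unit block limit), not consensus code:

```python
def block_weight(base_size: int, total_size: int) -> int:
    """BIP 141 block weight: non-witness bytes count four times, witness
    bytes count once.  base_size is the tx size with witness data stripped;
    total_size is the full serialized size including witness data."""
    return base_size * 3 + total_size

def virtual_size(weight: int) -> int:
    """Size in vbytes (rounded up); fee rates are quoted per vbyte."""
    return (weight + 3) // 4

# A 400-byte legacy tx weighs 1600 units, but a SegWit tx of the same total
# size whose last 200 bytes are witness data weighs only 1000: witness bytes
# are "cheap", which is how SegWit fits more transactions per block.
legacy_weight = block_weight(400, 400)
segwit_weight = block_weight(200, 400)
```

This also shows why SegWit reduces fees: the same 400 bytes of data yield a smaller virtual size, and fees are charged per vbyte.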
Conclusion: There is no doubt that Bitcoin's technology is revolutionary, but like any other technology it has drawbacks and challenges. Scaling is one of them, and it has held back large-scale adoption: the base layer can process only about 7-10 transactions per second. Many developers and researchers in the Bitcoin community are working hard to overcome the problem. SegWit and the Lightning Network together aim to let Bitcoin process orders of magnitude more transactions, but the real outcome will depend on the success of future projects. Read More: A Guide to Smart Contracts
How long will the Stellar consolidation last? A forecast from XLMwallet
It's been three weeks since the Bitcoin halving, but the rally so many were waiting for hasn't happened, at least for now. Stellar is consolidating and seems to be heading up. When can you expect significant gains? As always, XLMwallet analytics offer a forecast. Stellar is still trying to recover from the big hit it took on May 10, together with Bitcoin and the rest of the market. It went from $0.072 to $0.064, a 12% fall that wiped out a lot of the gains made in the previous month. What happened? Why did XLM fall so much? Don't panic, and definitely don't sell. If you hold lumens in your XLMwallet, continue to do so. (We hope you do, because XLMwallet is awesome.) What happened was a banal Bitcoin liquidation. Just before the halving, there was a lot of volatility in the market, with people getting excited. The price started rising and even tried to go beyond $10,000, which for Bitcoin has been a very strong level of resistance since last summer. When the price couldn't cross the $10k mark, it became clear that the ceiling had been reached for now, and people started to sell. The price began to fall. Then a massive liquidation of short positions followed: these are futures positions on exchanges like BitMEX, where a lot of people were shorting BTC. Once some of the positions automatically closed, the price fell a bit further, more positions were liquidated, the price fell again, and so on. It was a chain reaction. Nothing happened to Bitcoin, nothing happened to Stellar; it was all just a technical process. Stellar, Ethereum and the others simply followed. That's the unfortunate reality of crypto: all the altcoins follow BTC, like sheep follow a dog. Bitcoin has been struggling to grow back ever since, and so has Stellar. In addition, many Bitcoin miners are selling the BTC they accumulated in the past few months, creating additional downward pressure. What should you do? As we've said, definitely not sell.
It will take Bitcoin a couple of months to get out of its consolidation stage, and then it can start growing properly, pulling the rest of the coins along. We expect very good gains for Stellar starting from July, definitely higher than where it was in early May. Moreover, the Stellar Foundation has just made another investment: it gave $550,000 to a startup called SatoshiPay, a micropayments product that is now eyeing an expansion into B2B territory, mostly to let creators of online content get paid across borders. As we all know, Stellar is the perfect cryptocurrency for cross-border payments: it's extremely fast and cheap. As we always say, if you can HODL your lumens in the XLMwallet, do so. But if you have to pay with crypto somewhere, do it with Stellar rather than BTC, especially now, after the halving, when the BTC transaction fee has risen above $3, many thousands of times more than with XLM! In a few words: our advice for XLM owners is to hold. A rally is coming, though we'll have to wait a couple of months for it. We're pretty sure the worst for Stellar is over this year, and the only way to go is up. Website — https://xlmwallet.co/ Medium — https://medium.com/@XLMwalletCo Teletype — https://teletype.in/@XLMwalletCo Twitter — https://twitter.com/XLMwalletCo Reddit — https://www.reddit.com/XLM_wallet/
Bitcoin (BTC) is a peer-to-peer cryptocurrency that aims to function as a means of exchange that is independent of any central authority. BTC can be transferred electronically in a secure, verifiable, and immutable way.
Launched in 2009, BTC is the first virtual currency to solve the double-spending issue by timestamping transactions before broadcasting them to all of the nodes in the Bitcoin network. The Bitcoin protocol offered a solution to the Byzantine Generals' Problem with a blockchain network structure, a notion first created by Stuart Haber and W. Scott Stornetta in 1991.
Bitcoin’s whitepaper was published pseudonymously in 2008 by an individual, or a group, with the pseudonym “Satoshi Nakamoto”, whose underlying identity has still not been verified.
The Bitcoin protocol uses an SHA-256d-based Proof-of-Work (PoW) algorithm to reach network consensus. Its network has a target block time of 10 minutes and a maximum supply of 21 million tokens, with a decaying token emission rate. To prevent fluctuation of the block time, the network’s block difficulty is re-adjusted through an algorithm based on the past 2016 block times.
With a block size limit capped at 1 megabyte, the Bitcoin Protocol has supported both the Lightning Network, a second-layer infrastructure for payment channels, and Segregated Witness, a soft-fork to increase the number of transactions on a block, as solutions to network scalability.
Network validators, who are often referred to as miners, participate in the SHA-256d-based Proof-of-Work consensus mechanism to determine the next global state of the blockchain.
The Bitcoin protocol has a target block time of 10 minutes, and a maximum supply of 21 million tokens. The only way new bitcoins can be produced is when a block producer generates a new valid block.
The protocol has a token emission rate that halves every 210,000 blocks, or approximately every 4 years.
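The emission schedule described above can be verified with a short script. This is a sketch in integer satoshi arithmetic, mirroring the halving-by-right-shift used by the node software; it is not the actual consensus code:

```python
HALVING_INTERVAL = 210_000          # blocks between halvings (~4 years)
COIN = 100_000_000                  # satoshis per bitcoin

def block_subsidy(height: int) -> int:
    """Block subsidy in satoshis at a given height: 50 BTC, halved per era."""
    halvings = height // HALVING_INTERVAL
    if halvings >= 64:              # shift would exceed the integer width used
        return 0
    return (50 * COIN) >> halvings  # integer halving, like the reference client

# Summing every era shows the supply converging just below 21 million BTC.
total = sum(HALVING_INTERVAL * ((50 * COIN) >> h) for h in range(64))
```

For example, `block_subsidy(0)` gives the original 50 BTC reward, while heights past the May 2020 halving (block 630,000) give 6.25 BTC.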
Unlike public blockchain infrastructures that support the development of decentralized applications (such as Ethereum), the Bitcoin protocol is used primarily for payments and has only very limited support for smart-contract-like functionality (Bitcoin "Script" is mostly used to impose conditions that must be met before bitcoins can be spent).
In the Bitcoin network, anyone can join and become a bookkeeping service provider, i.e., a validator. All validators are allowed in the race to become the block producer for the next block, yet only the first to complete a computationally heavy task will win. This mechanism is called Proof of Work (PoW). The probability that any single validator finishes the task first equals its share of the total network computation power, or hash power. For instance, a validator with 5% of the total network computation power has a 5% chance of completing the task first and therefore becoming the next block producer. Since anyone can join the race, competition is prone to increase. In the early days, Bitcoin mining was mostly done with personal computer CPUs. Today, Bitcoin validators, or miners, have opted for dedicated and more powerful devices such as machines based on Application-Specific Integrated Circuits ("ASICs"). Proof of Work secures the network because block producers must spend resources external to the network (i.e., money to pay for electricity) and can prove to other participants that they did so. With various miners competing for block rewards, it becomes difficult for one single malicious party to gain network majority (defined as more than 51% of the network's hash power in the Nakamoto consensus mechanism). The ability to rearrange transactions via 51% attacks points to another feature of Nakamoto consensus: the finality of transactions is only probabilistic. Once a block is produced, it is propagated by the block producer to all other validators, who check the validity of all transactions in that block. The block producer receives rewards in the network's native currency (i.e., bitcoin) once all validators approve the block and update their ledgers.
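The "computationally heavy task" is nonce-grinding: repeatedly hashing a candidate block header until the double-SHA-256 digest falls below a target. A toy version (a real header is a specific 80-byte structure and the real target is astronomically harder; the header bytes and difficulty here are made up for illustration):

```python
import hashlib

def dsha256(data: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header: bytes, difficulty_bits: int, max_nonce: int = 1_000_000):
    """Grind nonces until the hash has `difficulty_bits` leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    for nonce in range(max_nonce):
        digest = dsha256(header + nonce.to_bytes(8, "little"))
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()   # proof anyone can verify in one hash
    return None  # no solution found within max_nonce attempts

# Expected work doubles with each extra difficulty bit (~4096 tries here).
nonce, digest = mine(b"example block header", 12)
```

Verification is asymmetric, which is the point of PoW: finding the nonce takes thousands of hashes, but any peer can check the claim with a single hash.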
The Bitcoin protocol utilizes the Merkle tree data structure to organize the hashes of numerous individual transactions within each block. The concept is named after Ralph Merkle, who patented it in 1979. With a Merkle tree, even though a block might contain thousands of transactions, all of their hashes can be combined and condensed into one, allowing efficient and secure verification of the whole group of transactions. This single hash is called the Merkle root, and it is stored in the block header. The block header also stores other metadata about the block, such as the hash of the previous block header, which links the blocks into a chain-like structure (hence the name "blockchain"). An illustration of block production in the Bitcoin protocol is shown below. https://preview.redd.it/m6texxicf3151.png?width=1591&format=png&auto=webp&s=f4253304912ed8370948b9c524e08fef28f1c78d
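The Merkle-root computation itself is compact. A sketch using Bitcoin's double SHA-256 and its convention of duplicating the last hash when a level has an odd count (the sample transaction hashes are made up):

```python
import hashlib

def dsha256(data: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(txids: list[bytes]) -> bytes:
    """Fold a list of 32-byte transaction hashes into a single 32-byte root."""
    if not txids:
        raise ValueError("a block always contains at least the coinbase tx")
    level = list(txids)
    while len(level) > 1:
        if len(level) % 2:                 # odd count: duplicate the last hash
            level.append(level[-1])
        level = [dsha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

txs = [dsha256(bytes([i])) for i in range(5)]   # five dummy transaction IDs
root = merkle_root(txs)                          # the value the header stores
```

Changing any single transaction changes the root, which is why the header's commitment to the root commits to every transaction in the block.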
Block time and mining difficulty
Block time is the period required to create the next block in a network. As mentioned above, the node who solves the computationally intensive task will be allowed to produce the next block. Therefore, block time is directly correlated to the amount of time it takes for a node to find a solution to the task. The Bitcoin protocol sets a target block time of 10 minutes, and attempts to achieve this by introducing a variable named mining difficulty. Mining difficulty refers to how difficult it is for the node to solve the computationally intensive task. If the network sets a high difficulty for the task, while miners have low computational power, which is often referred to as “hashrate”, it would statistically take longer for the nodes to get an answer for the task. If the difficulty is low, but miners have rather strong computational power, statistically, some nodes will be able to solve the task quickly. Therefore, the 10 minute target block time is achieved by constantly and automatically adjusting the mining difficulty according to how much computational power there is amongst the nodes. The average block time of the network is evaluated after a certain number of blocks, and if it is greater than the expected block time, the difficulty level will decrease; if it is less than the expected block time, the difficulty level will increase.
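The adjustment rule described above can be sketched concisely. This is a simplified version of Bitcoin's per-2016-block retargeting (Bitcoin Core also clamps the measured timespan to a factor of four in either direction, reproduced here; compact "bits" encoding and off-by-one quirks are omitted):

```python
EXPECTED_TIMESPAN = 2016 * 600          # two weeks at one block per 600 s
MAX_TARGET = 0xFFFF << 208              # easiest allowed target (difficulty 1)

def retarget(old_target: int, actual_timespan: int) -> int:
    """Raise the target (easier) if blocks came slowly, lower it if fast.

    A *higher* target means *lower* difficulty: more hashes qualify.
    """
    clamped = max(EXPECTED_TIMESPAN // 4,
                  min(actual_timespan, EXPECTED_TIMESPAN * 4))
    new_target = old_target * clamped // EXPECTED_TIMESPAN
    return min(new_target, MAX_TARGET)  # never easier than difficulty 1

# Blocks arrived twice as fast as intended: target halves, difficulty doubles.
new_target = retarget(1_000_000, EXPECTED_TIMESPAN // 2)
```

The clamp prevents a single wildly fast or slow period from swinging the difficulty by more than 4x in one step.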
What are orphan blocks?
In a PoW blockchain network, if the block time is too low, it would increase the likelihood of nodes producing orphan blocks, for which they would receive no reward. Orphan blocks are produced by nodes who solved the task but did not broadcast their results to the whole network the quickest due to network latency. It takes time for a message to travel through a network, and it is entirely possible for two nodes to complete the task and start broadcasting their results at roughly the same time, with one node's messages reaching the others earlier because it has lower latency. Imagine there is a network latency of 1 minute and a target block time of 2 minutes. A node could solve the task in around 1 minute, but its message would take 1 minute to reach the rest of the nodes still working on the solution. While its message travels through the network, all the work done by all other nodes during that minute, even if those nodes also complete the task, goes to waste. In this case, 50% of the computational power contributed to the network is wasted. The percentage of wasted computational power decreases proportionally as mining difficulty rises, since it statistically takes longer for miners to complete the task. In other words, if the mining difficulty, and therefore the target block time, is low, miners with powerful and often centralized mining facilities get a higher chance of becoming the block producer, while the participation of weaker miners is in vain. This introduces possible centralization and weakens the overall security of the network. However, given the limited number of transactions that can be stored in a block, making the block time too long would decrease the number of transactions the network can process per second, negatively affecting scalability.
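The 50%-waste example generalizes to a crude rule of thumb: the fraction of work lost to orphans scales with propagation latency divided by block time. This is a back-of-the-envelope model that ignores network topology and the randomness of block arrival, but it captures the trade-off in the paragraph above:

```python
def orphan_fraction(latency_s: float, block_time_s: float) -> float:
    """Rough model: work done while a winning block propagates is wasted."""
    return latency_s / block_time_s

# The example from the text: 1-minute latency, 2-minute blocks -> 50% wasted.
half_wasted = orphan_fraction(60, 120)

# The same latency with Bitcoin's 10-minute blocks wastes only 10%.
tenth_wasted = orphan_fraction(60, 600)
```

This is why a long block time buys security (less wasted, less centralizing work) at the cost of throughput.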
3. Bitcoin’s additional features
Segregated Witness (SegWit)
Segregated Witness, often abbreviated as SegWit, is a protocol upgrade proposal that went live in August 2017. SegWit separates witness signatures from transaction-related data. Witness signatures in legacy Bitcoin blocks often take up more than 50% of the block size. By removing witness signatures from the transaction block, this protocol upgrade effectively increases the number of transactions that can be stored in a single block, enabling the network to handle more transactions per second. As a result, SegWit increases the scalability of Nakamoto consensus-based blockchain networks like Bitcoin and Litecoin. SegWit also makes transactions cheaper: since transaction fees are derived from how much data is being processed by the block producer, the more transactions that can be stored in a 1MB block, the cheaper individual transactions become. https://preview.redd.it/depya70mf3151.png?width=1601&format=png&auto=webp&s=a6499aa2131fbf347f8ffd812930b2f7d66be48e The legacy Bitcoin block has a block size limit of 1 megabyte, and any change to the block size would require a network hard fork. On August 1st 2017, the first hard fork occurred, leading to the creation of Bitcoin Cash ("BCH"), which introduced an 8-megabyte block size limit. Conversely, Segregated Witness was a soft fork: it never changed the transaction block size limit of the network. Instead, it added an extended block with an upper limit of 3 megabytes, containing solely witness signatures, alongside the 1-megabyte block that contains only transaction data. This new block type can be processed even by nodes that have not completed the SegWit protocol upgrade. Furthermore, the separation of witness signatures from transaction data solves the malleability issue in the original Bitcoin protocol. Without Segregated Witness, these signatures could be altered before the block is validated by miners.
Indeed, alterations can be done in such a way that if the system does a mathematical check, the signature would still be valid. However, since the values in the signature are changed, the two signatures would create vastly different hash values. For instance, if a witness signature states “6,” it has a mathematical value of 6, and would create a hash value of 12345. However, if the witness signature were changed to “06”, it would maintain a mathematical value of 6 while creating a (faulty) hash value of 67890. Since the mathematical values are the same, the altered signature remains a valid signature. This would create a bookkeeping issue, as transactions in Nakamoto consensus-based blockchain networks are documented with these hash values, or transaction IDs. Effectively, one can alter a transaction ID to a new one, and the new ID can still be valid. This can create many issues, as illustrated in the below example:
Alice sends Bob 1 BTC, and Bob sends Merchant Carol this 1 BTC for some goods.
Bob sends Carol this 1 BTC while the transaction from Alice to Bob is not yet validated. Carol sees this incoming transaction of 1 BTC and immediately ships goods to Bob.
At this moment, the transaction from Alice to Bob is still not confirmed by the network, and Bob can change the witness signature, thereby changing this transaction ID from 12345 to 67890.
Now Carol will not receive her 1 BTC, as the network looks for transaction 12345 to ensure that Bob's wallet balance is valid.
As this particular transaction ID changed from 12345 to 67890, the transaction from Bob to Carol will fail, and Bob will get his goods while still holding his BTC.
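The bookkeeping problem can be made concrete: a legacy transaction ID hashes the signature along with everything else, so re-encoding the signature changes the ID, while a SegWit transaction ID ignores witness bytes. A toy illustration with the same double-SHA-256 Bitcoin uses (the byte strings are placeholders, not real transaction serializations):

```python
import hashlib

def txid(serialized: bytes) -> str:
    """Double SHA-256 of the serialized data, as Bitcoin computes tx IDs."""
    return hashlib.sha256(hashlib.sha256(serialized).digest()).hexdigest()

base = b"version|inputs|outputs|locktime"   # placeholder base transaction data
sig_a = b"|sig=6"                           # original signature encoding
sig_b = b"|sig=06"                          # re-encoded, mathematically equal

# Legacy: the signature is inside the hashed data, so the ID is malleable.
legacy_id_a = txid(base + sig_a)
legacy_id_b = txid(base + sig_b)

# SegWit: the ID commits only to the base data; the witness lives elsewhere,
# so Bob's trick above no longer changes the transaction ID.
segwit_id_a = txid(base)    # with witness sig_a stored separately
segwit_id_b = txid(base)    # with witness sig_b stored separately
```

Under the legacy rule the two IDs differ even though both signatures are valid; under SegWit they are identical whichever encoding is attached.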
With the Segregated Witness upgrade, such instances cannot happen again, because the witness signatures are moved outside of the transaction block into an extended block, and altering the witness signature no longer affects the transaction ID. With the transaction malleability issue fixed, Segregated Witness also enables the proper functioning of second-layer scalability solutions on the Bitcoin protocol, such as the Lightning Network.
Lightning Network is a second-layer micropayment solution for scalability. Specifically, Lightning Network aims to enable near-instant and low-cost payments between merchants and customers that wish to use bitcoins. Lightning Network was conceptualized in a whitepaper by Joseph Poon and Thaddeus Dryja in 2015. Since then, it has been implemented by multiple companies, the most prominent of which include Blockstream, Lightning Labs, and ACINQ. A list of curated resources relevant to Lightning Network can be found here. In the Lightning Network, if a customer wishes to transact with a merchant, both of them need to open a payment channel, which operates off the Bitcoin blockchain (i.e., off-chain vs. on-chain). None of the transaction details from this payment channel are recorded on the blockchain, and only when the channel is closed will the end result of both parties' wallet balances be updated to the blockchain. The blockchain only serves as a settlement layer for Lightning transactions. Since all transactions done via the payment channel are conducted independently of the Nakamoto consensus, both parties involved do not need to wait for network confirmation on transactions. Instead, transacting parties pay transaction fees to Bitcoin miners only when they decide to close the channel. https://preview.redd.it/cy56icarf3151.png?width=1601&format=png&auto=webp&s=b239a63c6a87ec6cc1b18ce2cbd0355f8831c3a8 One limitation of the Lightning Network is that it requires a person to be online to receive transactions directed to him. Another limitation in user experience is that one needs to lock up funds every time he wishes to open a payment channel, and is only able to use those funds within the channel. However, this does not mean he needs to create new channels every time he wishes to transact with a different person on the Lightning Network.
If Alice wants to send money to Carol, but they do not have a payment channel open, they can ask Bob, who has payment channels open to both Alice and Carol, to help make that transaction. Alice will be able to send funds to Bob, and Bob to Carol. Hence, the number of “payment hubs” (i.e., Bob in the previous example) correlates with both the convenience and the usability of the Lightning Network for real-world applications.
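The Alice-Bob-Carol hop can be sketched as off-chain balance updates, with only the closing balances touching the chain. This is a toy model: it omits HTLCs, routing fees, commitment transactions, and penalty mechanics, and the names and deposit amounts are made up:

```python
class Channel:
    """Off-chain payment channel: both parties lock funds, then shuffle them."""

    def __init__(self, a: str, b: str, deposit_a: int, deposit_b: int):
        self.balance = {a: deposit_a, b: deposit_b}  # satoshis locked on-chain

    def pay(self, sender: str, receiver: str, amount: int) -> None:
        """An off-chain state update: instant, no miner involved."""
        if self.balance[sender] < amount:
            raise ValueError("insufficient channel balance")
        self.balance[sender] -= amount
        self.balance[receiver] += amount

    def close(self) -> dict:
        """The only state that is ever settled back to the blockchain."""
        return dict(self.balance)

ab = Channel("alice", "bob", 100_000, 0)        # Alice funded this channel
bc = Channel("bob", "carol", 50_000, 50_000)    # Bob and Carol both funded

# Alice -> Carol routed via Bob: Bob forwards on his other channel.
ab.pay("alice", "bob", 10_000)
bc.pay("bob", "carol", 10_000)
```

Bob's total worth is unchanged (he gains 10,000 sats on one channel and spends 10,000 on the other), which is exactly the role of a payment hub.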
Schnorr Signature upgrade proposal
Elliptic Curve Digital Signature Algorithm ("ECDSA") signatures are used to sign transactions on the Bitcoin blockchain. https://preview.redd.it/hjeqe4l7g3151.png?width=1601&format=png&auto=webp&s=8014fb08fe62ac4d91645499bc0c7e1c04c5d7c4 However, many developers now advocate replacing ECDSA with Schnorr signatures. Once Schnorr signatures are implemented, multiple parties can collaborate in producing a signature that is valid for the sum of their public keys. This would primarily benefit network scalability: when multiple addresses send transactions to a single address, each transaction currently requires its own signature, but with Schnorr signatures all of them can be combined into one. As a result, the network would be able to store more transactions in a single block. https://preview.redd.it/axg3wayag3151.png?width=1601&format=png&auto=webp&s=93d958fa6b0e623caa82ca71fe457b4daa88c71e The reduced signature size also implies reduced transaction fees: the group of senders can split the fee for the one group signature instead of each paying for a personal signature individually. Schnorr signatures also improve network privacy and token fungibility, since a third-party observer cannot tell whether a transaction is multi-signature: the signature has the same format as a single-signature transaction.
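The key-aggregation property can be demonstrated in a toy group: integers modulo a small safe prime, with fixed demo nonces. Real Bitcoin proposals use the secp256k1 curve and a rogue-key-resistant scheme such as MuSig, neither of which this sketch attempts; it only shows the linearity that makes Schnorr signatures aggregatable while ECDSA is not:

```python
import hashlib

# Toy group: g = 4 generates a subgroup of prime order q inside Z_p*.
p, q, g = 2819, 1409, 4

def h(*parts) -> int:
    """Fiat-Shamir challenge: hash of (nonce, key, message), reduced mod q."""
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def verify(X: int, msg: str, R: int, s: int) -> bool:
    """Schnorr check: g^s == R * X^e (mod p), with e = H(R, X, msg)."""
    e = h(R, X, msg)
    return pow(g, s, p) == (R * pow(X, e, p)) % p

# Two signers with secret keys x1, x2 and fixed (insecure!) demo nonces.
x1, x2, k1, k2 = 123, 456, 789, 1011
X1, X2 = pow(g, x1, p), pow(g, x2, p)

X = (X1 * X2) % p                         # aggregate public key
R = (pow(g, k1, p) * pow(g, k2, p)) % p   # aggregate nonce
e = h(R, X, "pay 1 BTC")
s = (k1 + k2 + e * (x1 + x2)) % q         # sum of both partial signatures

ok = verify(X, "pay 1 BTC", R, s)         # one signature covers two signers
```

Because s is linear in the secret keys, the partial signatures simply add up, and a verifier sees one ordinary-looking signature against one aggregate key.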
4. Economics and supply distribution
The Bitcoin protocol utilizes the Nakamoto consensus, and nodes validate blocks via Proof-of-Work mining. The bitcoin token was not pre-mined, and has a maximum supply of 21 million. The initial block reward was 50 BTC, and mining rewards halve every 210,000 blocks. Since the average time for block production on the blockchain is 10 minutes, block reward halving events take place approximately every 4 years. As of May 12th 2020, the block mining reward is 6.25 BTC per block. Transaction fees also represent a minor revenue stream for miners.
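The 21 million cap falls directly out of this halving schedule. A quick check, using integer satoshi arithmetic as the protocol itself does:

```python
SATS_PER_BTC = 100_000_000
HALVING_INTERVAL = 210_000          # blocks between halvings

subsidy = 50 * SATS_PER_BTC         # initial block reward, in satoshis
total = 0
while subsidy > 0:
    total += HALVING_INTERVAL * subsidy
    subsidy //= 2                   # reward halves (integer division, like the protocol)

print(total / SATS_PER_BTC)         # 20999999.9769 -- just under 21 million BTC
```

The integer division is why the real total lands slightly under 21 million rather than exactly on it.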
Why does Core refuse to increase block size and why all the censorship in R/Bitcoin?
From BTC's FAQ, I saw that Satoshi was ok with scaling up with a larger block size, so why does Core refuse to increase the block size, and why all the censorship in r/Bitcoin? I mean, they must have a good reason, or at least a stated reason. What is it? I see that the miners are flocking over to Bitcoin Cash due to much higher profitability now. I heard Core is going to fork in November because they don't support 2MB, which I thought they had already agreed to before. So Bitcoin Cash (where the profit is) has miner support, and Segwit has Core support, but who's going to support Segwit2x? Isn't this supposed to be the only Bitcoin that was agreed upon in the New York Agreement? Which one is going to retain the name Bitcoin? I think all this contention is hurting cryptocurrency's broader adoption and it will be devastating if the Bitcoin name dies.
Technical: A Brief History of Payment Channels: from Satoshi to Lightning Network
Who cares about political tweets from some random country's president when payment channels are much more interesting and actually capable of carrying value? So let's have a short history of various payment channel techs!
Generation 0: Satoshi's Broken nSequence Channels
Because Satoshi's Vision included payment channels, except his implementation sucked so hard we had to go fix it and added RBF as a by-product. Originally, the plan for nSequence was that mempools would replace any transaction spending certain inputs with another transaction spending the same inputs, but only if the nSequence field of the replacement was larger. Since 0xFFFFFFFF was the highest value that nSequence could get, this would mark a transaction as "final" and not replaceable on the mempool anymore. In fact, this "nSequence channel" I will describe is the reason why we have this weird rule about nLockTime and nSequence: nLockTime actually only works if nSequence is not 0xFFFFFFFF, i.e. final. If nSequence is 0xFFFFFFFF then nLockTime is ignored, because this is the "final" version of the transaction. So what you'd do would be something like this:
You go to a bar and promise the bartender to pay by the time the bar closes. Because this is the Bitcoin universe, time is measured in blockheight, so the closing time of the bar is indicated as some future blockheight.
For your first drink, you'd make a transaction paying to the bartender for that drink, paying from some coins you have. The transaction has an nLockTime equal to the closing time of the bar, and a starting nSequence of 0. You hand over the transaction and the bartender hands you your drink.
For your succeeding drink, you'd remake the same transaction, adding the payment for that drink to the transaction output that goes to the bartender (so that output keeps getting larger, by the amount of payment), and having an nSequence that is one higher than the previous one.
Eventually you have to stop drinking. It comes down to one of two possibilities:
You drink until the bar closes. Since it is now the nLockTime indicated in the transaction, the bartender is able to broadcast the latest transaction and tells the bouncers to kick you out of the bar.
You wisely consider the state of your liver. So you re-sign the last transaction with a "final" nSequence of 0xFFFFFFFF i.e. the maximum possible value it can have. This allows the bartender to get his or her funds immediately (nLockTime is ignored if nSequence is 0xFFFFFFFF), so he or she tells the bouncers to let you out of the bar.
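The drink-by-drink updates above can be sketched as a toy model (fields simplified to the ones that matter here; all heights and prices are made up):

```python
FINAL = 0xFFFFFFFF

def make_tx(pay_to_bartender, n_sequence, bar_closing_height):
    """Toy representation of one channel state (illustration only)."""
    return {
        "output_to_bartender": pay_to_bartender,
        "nSequence": n_sequence,
        # nLockTime is honored only while nSequence is non-final:
        "nLockTime": bar_closing_height if n_sequence != FINAL else None,
    }

# Each drink replaces the previous state with a higher nSequence.
states = [make_tx(price * 10, seq, bar_closing_height=700_000)
          for seq, price in enumerate(range(1, 6))]

def mempool_replace(states):
    """Mempool rule: keep whichever version has the highest nSequence.
    This is only a relay policy -- miners can ignore it, which is
    exactly why this channel design was broken."""
    return max(states, key=lambda tx: tx["nSequence"])

best = mempool_replace(states)
print(best["output_to_bartender"])   # 50: the latest (five-drink) state
```

Note that `mempool_replace` is pure policy: nothing on-chain can force a miner to mine the highest-nSequence version, which is the flaw the next section exploits.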
Now that of course is a payment channel. Individual payments (purchases of alcohol, so I guess buying coffee is not in scope for payment channels). Closing is done by creating a "final" transaction that is the sum of the individual payments. Sure there's no routing and channels are unidirectional and channels have a maximum lifetime but give Satoshi a break, he was also busy inventing Bitcoin at the time. Now if you noticed I called this kind of payment channel "broken". This is because the mempool rules are not consensus rules, and cannot be validated (nothing about the mempool can be validated onchain: I sigh every time somebody proposes "let's make block size dependent on mempool size", mempool state cannot be validated by onchain data). Fullnodes can't see all of the transactions you signed, and then validate that the final one with the maximum nSequence is the one that actually is used onchain. So you can do the below:
Become friends with Jihan Wu, because he owns >51% of the mining hashrate (he totally reorged Bitcoin to reverse the Binance hack right?).
Slip Jihan Wu some of the more interesting drinks you're ordering as an incentive to cooperate with you. So say you end up ordering 100 drinks, you split it with Jihan Wu and give him 50 of the drinks.
When the bar closes, Jihan Wu quickly calls his mining rig and tells them to mine the version of your transaction with nSequence 0. You know, that first one where you pay for only one drink.
Because fullnodes cannot validate nSequence, they'll accept even the nSequence=0 version and confirm it, immutably adding you paying for a single alcoholic drink to the blockchain.
The bartender, pissed at being cheated, takes out a shotgun from under the bar and shoots at you and Jihan Wu.
Jihan Wu uses his mystical chi powers (actually the combined exhaust from all of his mining rigs) to slow down the shotgun pellets, making them hit you as softly as petals drifting in the wind.
The bartender mutters some words, clothes ripping apart as he or she (hard to believe it could be a she but hey) turns into a bear, ready to maul you for cheating him or her of the payment for all the 100 drinks you ordered from him or her.
Steely-eyed, you stand in front of the bartender-turned-bear, daring him to touch you. You've watched Revenant, you know Leonardo di Caprio could survive a bear mauling, and if some posh actor can survive that, you know you can too. You make a pose. "Drunken troll logic attack!"
I think I got sidetracked here.
Bears are bad news.
You can't reasonably invoke "Satoshi's Vision" and simultaneously reject the Lightning Network because it's not onchain. Satoshi's Vision included a half-assed implementation of payment channels with nSequence, where the onchain transaction represented multiple logical payments, exactly what modern offchain techniques do (except modern offchain techniques actually work). nSequence (the field, but not its modern meaning) has been in Bitcoin since BitCoin For Windows Alpha 0.1.0. And its original intent was payment channels. You can't get nearer to Satoshi's Vision than being a field that Satoshi personally added to transactions on the very first public release of the BitCoin software, like srsly.
Miners can totally bypass mempool rules. In fact, the reason why nSequence has been repurposed to indicate "optional" replace-by-fee is because miners are already incentivized by the nSequence system to always follow replace-by-fee anyway. I mean, what do you think those drinks you passed to Jihan Wu are, other than the fee you pay him to mine a specific version of your transaction?
Satoshi made mistakes. The original design for nSequence is one of them. Today, we no longer use nSequence in this way. So diverging from Satoshi's original design is part and parcel of Bitcoin development, because over time, we learn new lessons that Satoshi never knew about. Satoshi was an important landmark in this technology. He will not be the last, or most important, that we will remember in the future: he will only be the first.
Incentive-compatible time-limited unidirectional channel; or, Satoshi's Vision, Fixed (if transaction malleability hadn't been a problem, that is). Now, we know the bartender will turn into a bear and maul you if you try to cheat the payment channel, and now that we've revealed you're good friends with Jihan Wu, the bartender will no longer accept a payment channel scheme that lets you cooperate with a miner to cheat the bartender. Fortunately, Jeremy Spilman proposed a better way that would not let you cheat the bartender. First, you and the bartender perform this ritual:
You get some funds and create a transaction that pays to a 2-of-2 multisig between you and the bartender. You don't broadcast this yet: you just sign it and get its txid.
You create another transaction that spends the above transaction. This transaction (the "backoff") has an nLockTime equal to the closing time of the bar, plus one block. You sign it and give this backoff transaction (but not the above transaction) to the bartender.
The bartender signs the backoff and gives it back to you. It is now valid since it's spending a 2-of-2 of you and the bartender, and both of you have signed the backoff transaction.
Now you broadcast the first transaction onchain. You and the bartender wait for it to be deeply confirmed, then you can start ordering.
The above is probably vaguely familiar to LN users. It's the funding process of payment channels! The first transaction, the one that pays to a 2-of-2 multisig, is the funding transaction that backs the payment channel funds. So now you start ordering in this way:
For your first drink, you create a transaction spending the funding transaction output and sending the price of the drink to the bartender, with the rest returning to you.
You sign the transaction and pass it to the bartender, who serves your first drink.
For your succeeding drinks, you recreate the same transaction, adding the price of the new drink to the sum that goes to the bartender and reducing the money returned to you. You sign the transaction and give it to the bartender, who serves you your next drink.
At the end:
If the bar closing time is reached, the bartender signs the latest transaction, completing the needed 2-of-2 signatures and broadcasting it to the Bitcoin network. Since the backoff transaction's nLockTime is the closing time + 1, it can't be used at closing time.
If you decide you want to leave early because your liver is crying, you just tell the bartender to go ahead and close the channel (which the bartender can do at any time by just signing and broadcasting the latest transaction: the bartender won't do that because he or she is hoping you'll stay and drink more).
If you ended up just hanging around the bar and never ordering, then at closing time + 1 you broadcast the backoff transaction and get your funds back in full.
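The whole Spilman flow can be sketched as a toy simulation. Everything here is invented for illustration (string placeholders instead of real scripts and signatures), but it captures the two rules that make the construction work — a spend of the 2-of-2 needs both signatures, and the backoff is locked until closing time + 1:

```python
def tx(spends, outputs, locktime=0, sigs=()):
    """Toy transaction: what it spends, who gets what, whose sigs it has."""
    return {"spends": spends, "outputs": outputs,
            "nLockTime": locktime, "sigs": set(sigs)}

def broadcastable(t, height):
    """A channel tx needs both 2-of-2 signatures and a reached locktime."""
    return t["sigs"] == {"you", "bartender"} and height >= t["nLockTime"]

CLOSING = 700_000
funding = tx(spends="your coins", outputs={"2of2": 100}, sigs=["you"])

# Backoff: refunds everything after closing time + 1, fully signed by both.
backoff = tx(spends="funding", outputs={"you": 100},
             locktime=CLOSING + 1, sigs=["you", "bartender"])

# Channel update: each new version pays the bartender more. You sign;
# the bartender withholds his/her signature until it's time to close.
state = tx(spends="funding", outputs={"you": 70, "bartender": 30}, sigs=["you"])

assert not broadcastable(state, CLOSING)    # your sig alone isn't enough
state["sigs"].add("bartender")              # bartender signs at bar close
assert broadcastable(state, CLOSING)
assert not broadcastable(backoff, CLOSING)  # backoff must wait one more block
assert broadcastable(backoff, CLOSING + 1)
```

This is why handing old state to Jihan Wu no longer works: every old state you hold carries only your own signature.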
Now, even if you pass 50 drinks to Jihan Wu, you can't give him the first transaction (the one which pays for only one drink) and ask him to mine it: it's spending a 2-of-2 and the copy you have only contains your own signature. You need the bartender's signature to make it valid, but he or she sure as hell isn't going to cooperate in something that would lose him or her money, so a signature from the bartender validating old state where he or she gets paid less isn't going to happen. So, problem solved, right? Right? Okay, let's try it. So you get your funds, put them in a funding tx, get the backoff tx, confirm the funding tx... Once the funding transaction confirms deeply, the bartender laughs uproariously. He or she summons the bouncers, who surround you menacingly. "I'm refusing service to you," the bartender says. "Fine," you say. "I was leaving anyway." You smirk. "I'll get back my money with the backoff transaction, and post about your poor service on reddit so you get negative karma, so there!" "Not so fast," the bartender says. His or her voice chills your bones. It looks like your exploitation of the Satoshi nSequence payment channel is still fresh in his or her mind. "Look at the txid of the funding transaction that got confirmed." "What about it?" you ask nonchalantly, as you flip open your desktop computer and open a reputable blockchain explorer. What you see shocks you. "What the --- the txid is different! You--- you changed my signature?? But how? I put the only copy of my private key in a sealed envelope in a cast-iron box inside a safe buried in the Gobi desert protected by a clan of nomads who have dedicated their lives and their children's lives to keeping my private key safe in perpetuity!" "Didn't you know?" the bartender asks. "The components of the signature are just very large numbers. The sign of one of the signature components can be changed, from positive to negative, or negative to positive, and the signature will remain valid.
Anyone can do that, even if they don't know the private key. But because Bitcoin includes the signatures in the transaction when it's generating the txid, this little change also changes the txid." He or she chuckles. "They say they'll fix it by separating the signatures from the transaction body. They're saying that this kind of signature malleability won't affect transaction ids anymore after they do this, but I bet I can get my good friend Jihan Wu to delay this 'SepSig' plan for a good while yet. Friendly guy, this Jihan Wu, it turns out all I had to do was slip him 51 drinks and he was willing to mine a tx with the signature signs flipped." His or her grin widens. "I'm afraid your backoff transaction won't work anymore, since it spends a txid that does not exist and will never be confirmed. So here's the deal. You pay me 99% of the funds in the funding transaction, in exchange for me signing a transaction that spends from the txid that you see onchain. Refuse, and you lose 100% of the funds and every other HODLer, including me, benefits from the reduction in coin supply. Accept, and you get to keep 1%. I lose nothing if you refuse, so I won't care if you do, but consider the difference of getting zilch vs. getting 1% of your funds." His or her eyes glow. "GENUFLECT RIGHT NOW." Lesson learned?
Payback's a bitch.
Transaction malleability is a bitchier bitch. It's why we needed SegWit to fix this bug. Sure, MtGox claimed they were attacked this way because someone kept messing with their transaction signatures and thus they lost track of where their funds went, but really, the bigger impetus for fixing transaction malleability was to support payment channels.
Yes, including the signatures in the hash that ultimately defines the txid was a mistake. Satoshi made a lot of those. So we're just reiterating the lesson "Satoshi was not an infinite being of infinite wisdom" here. Satoshi just gets a pass because of how awesome Bitcoin is.
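The sign-flip the bartender exploits is easy to demonstrate. Below is a toy pure-Python ECDSA over the real secp256k1 curve (the curve constants are Bitcoin's actual parameters; the key and nonce values are arbitrary, and nothing here is production-grade). Given a valid signature (r, s), the pair (r, N - s) verifies under the same key — different bytes, hence a different txid:

```python
import hashlib

# secp256k1 parameters (y^2 = x^3 + 7 over F_p), as used by Bitcoin
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(a, b):
    """Affine elliptic-curve point addition (None = point at infinity)."""
    if a is None: return b
    if b is None: return a
    if a[0] == b[0] and (a[1] + b[1]) % P == 0:
        return None
    if a == b:
        lam = (3 * a[0] * a[0]) * pow(2 * a[1], -1, P) % P
    else:
        lam = (b[1] - a[1]) * pow(b[0] - a[0], -1, P) % P
    x = (lam * lam - a[0] - b[0]) % P
    return (x, (lam * (a[0] - x) - a[1]) % P)

def point_mul(k, pt):
    """Double-and-add scalar multiplication."""
    result = None
    while k:
        if k & 1:
            result = point_add(result, pt)
        pt = point_add(pt, pt)
        k >>= 1
    return result

def sign(priv, z, k):
    r = point_mul(k, G)[0] % N
    s = pow(k, -1, N) * (z + r * priv) % N
    return r, s

def verify(pub, z, r, s):
    w = pow(s, -1, N)
    pt = point_add(point_mul(z * w % N, G), point_mul(r * w % N, pub))
    return pt is not None and pt[0] % N == r

priv = 0xC0FFEE
pub = point_mul(priv, G)
z = int.from_bytes(hashlib.sha256(b"pay the bartender").digest(), "big")

r, s = sign(priv, z, k=12345)
print(verify(pub, z, r, s))       # True: the signature you made
print(verify(pub, z, r, N - s))   # True: the flipped one -- same key,
                                  # different bytes, different txid
```

Negating s negates both verification scalars, which negates the resulting point; since negation only flips the y coordinate, the x coordinate (and hence r) still matches. SegWit kills this attack not by preventing the flip, but by excluding signatures from the txid.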
CLTV-protected Spilman Channels
Using CLTV for the backoff branch. This variation is simply Spilman channels, but with the backoff transaction replaced with a backoff branch in the SCRIPT you pay to. It only became possible after OP_CHECKLOCKTIMEVERIFY (CLTV) was enabled in 2015. Now as we saw in the Spilman Channels discussion, transaction malleability means that any pre-signed offchain transaction can easily be invalidated by flipping the sign of the signature of the funding transaction while the funding transaction is not yet confirmed. This can be avoided by simply putting any special requirements into an explicit branch of the Bitcoin SCRIPT. Now, the backoff branch is supposed to create a maximum lifetime for the payment channel, and prior to the introduction of OP_CHECKLOCKTIMEVERIFY this could only be done by having a pre-signed nLockTime transaction. With CLTV, however, we can now make the branches explicit in the SCRIPT that the funding transaction pays to. Instead of paying to a 2-of-2 in order to set up the funding transaction, you pay to a SCRIPT which is basically "2-of-2, OR this singlesig after a specified lock time". With this, there is no backoff transaction that is pre-signed and which refers to a specific txid. Instead, you can create the backoff transaction later, using whatever txid the funding transaction ends up being confirmed under. Since the funding transaction is immutable once confirmed, it is no longer possible to change the txid afterwards.
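The "2-of-2, OR this singlesig after a specified lock time" script can be sketched as follows. The opcodes are real Bitcoin Script opcodes; the key placeholders and the helper function are invented for illustration, and a real deployment would wrap this in P2SH or P2WSH:

```python
def cltv_spilman_script(pk_you, pk_bartender, closing_height):
    """Sketch of the funding output script: a cooperative 2-of-2 branch,
    OR a unilateral refund branch after the channel's maximum lifetime."""
    return [
        "OP_IF",
            # Cooperative branch: both parties sign (channel close)
            "OP_2", pk_you, pk_bartender, "OP_2", "OP_CHECKMULTISIG",
        "OP_ELSE",
            # Backoff branch: after closing_height, the payer alone refunds.
            # No pre-signed backoff tx exists, so malleating the funding
            # txid before confirmation gains the bartender nothing.
            closing_height, "OP_CHECKLOCKTIMEVERIFY", "OP_DROP",
            pk_you, "OP_CHECKSIG",
        "OP_ENDIF",
    ]

script = cltv_spilman_script("<pk_you>", "<pk_bartender>", 700_001)
print("OP_CHECKLOCKTIMEVERIFY" in script)   # True
```

The crucial difference from pre-CLTV Spilman: the timeout lives in the script itself rather than in a pre-signed transaction that references a specific (malleable) txid.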
Todd Micropayment Networks
The old hub-spoke model (that isn't how LN today actually works). One of the more direct predecessors of the Lightning Network was the hub-spoke model discussed by Peter Todd. In this model, instead of payers directly having channels to payees, payers and payees connect to a central hub server. This allows any payer to pay any payee, using the same channel for every payee on the hub. Similarly, this allows any payee to receive from any payer, using the same channel.

Remember from the above Spilman example? When you open a channel to the bartender, you have to wait around for the funding tx to confirm. This will take an hour at best. Now consider that you have to make channels for everyone you want to pay to. That's not very scalable. So the Todd hub-spoke model has a central "clearing house" that transports money from payers to payees. The "Moonbeam" project takes this model. Of course, this reveals to the hub who the payer and payee are, and thus the hub can potentially censor transactions. Generally, though, it was considered that a hub would more efficiently censor by just not maintaining a channel with the payer or payee that it wants to censor (since the money it owned in the channel would just be locked uselessly if the hub won't process payments to/from the censored user). In any case, the ability of the central hub to monitor payments means that it can surveil the payer and payee, and then sell this private transactional data to third parties. This loss of privacy would be intolerable today.

Peter Todd also proposed that there might be multiple hubs that could transport funds to each other on behalf of their users, providing somewhat better privacy. Another point of note is that at the time such networks were proposed, only unidirectional (Spilman) channels were available. Thus, while one could be a payer or a payee, one would have to use separate channels for income versus spending.
Worse, if you wanted to transfer money from your income channel to your spending channel, you had to close both and reshuffle the money between them, both onchain activities.
Poon-Dryja Lightning Network
Bidirectional two-participant channels. The Poon-Dryja channel mechanism has two important properties:
No time limit.
Both the original Satoshi and the two Spilman variants are unidirectional: there is a payer and a payee, and if the payee wants to do a refund, or wants to pay for a different service or product the payer is providing, then they can't use the same unidirectional channel. The Poon-Dryja mechanism allows channels, however, to be bidirectional instead: you are not a payer or a payee on the channel, you can receive or send at any time as long as both you and the channel counterparty are online. Further, unlike either of the Spilman variants, there is no time limit for the lifetime of a channel. Instead, you can keep the channel open for as long as you want.

Both properties, together, form a very powerful scaling property that I believe most people have not appreciated. With unidirectional channels, as mentioned before, if you both earn and spend over the same network of payment channels, you would have separate channels for earning and spending. You would then need to perform onchain operations to "reverse" the directions of your channels periodically. Secondly, since Spilman channels have a fixed lifetime, even if you never used either channel, you would have to periodically "refresh" it by closing it and reopening. With bidirectional, indefinite-lifetime channels, you may instead open some channels when you first begin managing your own money, then close them only after your lawyers have executed your last will and testament on how the money in your channels gets divided up to your heirs: that's just two onchain transactions in your entire lifetime. That is the potentially very powerful scaling property that bidirectional, indefinite-lifetime channels allow.

I won't discuss the transaction structure needed for Poon-Dryja bidirectional channels --- it's complicated and you can easily get explanations with cute graphics elsewhere. There is a weakness of Poon-Dryja that people tend to gloss over (because it was fixed very well by RustyReddit):
You have to store all the revocation keys of a channel. This implies you are storing 1 revocation key for every channel update, so if you perform millions of updates over your entire lifetime, you'd be storing several megabytes of keys, for only a single channel. RustyReddit fixed this by requiring that the revocation keys be generated from a "seed" revocation key, with every key just the repeated application of SHA256 on that seed. For example, suppose I tell you that my first revocation key is SHA256(SHA256(seed)). You can store that in O(1) space. Then for the next revocation, I tell you SHA256(seed). From SHA256(seed), you can compute SHA256(SHA256(seed)) (i.e. the previous revocation key). So you can remember just the most recent revocation key, and from there you'd be able to compute every previous revocation key. When you start a channel, you perform SHA256 on your seed several million times, then use the result as the first revocation key, removing one layer of SHA256 for every revocation key you need to generate. RustyReddit not only came up with this, but also suggested an efficient O(log n) storage structure, the shachain, so that you can quickly look up any revocation key in the past in case of a breach. People no longer really talk about this O(n) revocation storage problem anymore because it was solved very well by this mechanism.
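The hash-chain idea is simple enough to sketch directly. This is the chain concept only, not the real shachain structure (which adds the O(log n) lookup index), and the seed and chain length are made up for the demo:

```python
import hashlib

def sha256(b):
    return hashlib.sha256(b).digest()

def revocation_key(seed, index, chain_length):
    """Key for update #index is SHA256 applied (chain_length - index) times.
    Earlier updates use MORE hashing, later updates use LESS."""
    key = seed
    for _ in range(chain_length - index):
        key = sha256(key)
    return key

seed = b"my secret channel seed"
CHAIN = 10_000   # small chain for the demo; real channels use millions

# The counterparty only ever stores the LATEST revocation key...
latest = revocation_key(seed, index=5000, chain_length=CHAIN)

# ...because any OLDER key is just more hashing of the latest one:
older = revocation_key(seed, index=4000, chain_length=CHAIN)
recomputed = latest
for _ in range(1000):
    recomputed = sha256(recomputed)
print(recomputed == older)   # True: O(1) storage, old keys recomputable
```

Going forward in the chain (deriving a *newer* key from an older one) would require inverting SHA256, which is why only the channel owner, who holds the seed, can reveal the next key.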
Another thing I want to emphasize is that while the Lightning Network paper and many of the earlier presentations developed from the old Peter Todd hub-and-spoke model, the modern Lightning Network takes the logical conclusion of removing a strict separation between "hubs" and "spokes". Any node on the Lightning Network can very well work as a hub for any other node. Thus, while you might operate as "mostly a payer", "mostly a forwarding node", "mostly a payee", you still end up being at least partially a forwarding node ("hub") on the network, at least part of the time. This greatly reduces the problems of privacy inherent in having only a few hub nodes: forwarding nodes cannot get significantly useful data from the payments passing through them, because the distance between the payer and the payee can be so large that it would be likely that the ultimate payer and the ultimate payee could be anyone on the Lightning Network. Lessons learned?
We can decentralize if we try hard enough!
"Hubs bad" can be made "hubs good" if everybody is a hub.
Smart people can solve problems. It's kinda why they're smart.
After LN, there's also the Decker-Wattenhofer Duplex Micropayment Channels (DMC). This post is long enough as-is, LOL. But for now, it uses a novel "decrementing nSequence channel", using the new relative-timelock semantics of nSequence (not the broken one originally by Satoshi). It actually uses multiple such "decrementing nSequence" constructs, terminating in a pair of Spilman channels, one in each direction (thus "duplex"). Maybe I'll discuss it some other time. The realization that channel constructions could actually hold more channel constructions inside them (the way the Decker-Wattenhofer puts a pair of Spilman channels inside a series of "decrementing nSequence channels") led to the further thought behind Burchert-Decker-Wattenhofer channel factories. Basically, you could host multiple two-participant channel constructs inside a larger multiparticipant "channel" construct (i.e. host multiple channels inside a factory). Further, we have the Decker-Russell-Osuntokun or "eltoo" construction. I'd argue that this is "nSequence done right". I'll write more about this later, because this post is long enough. Lessons learned?
Bitcoin offchain scaling is more powerful than you ever thought.
This is a followup of my older post about the history of payment channel mechanisms. The "modern" payment channel system is Lightning Network, which uses bidirectional indefinite-lifetime channels, using HTLCs to trustlessly route through the network. However, at least one other payment channel mechanism was developed at roughly the same time as Lightning, and there are also further proposals that are intended to replace the core payment channel mechanism in use by Lightning. Now, in principle, the "magic" of Lightning lies in combining two ingredients:
Offchain updateable systems.
HTLCs to implement atomic cross-system swaps.
We can replace the exact mechanism implementing an offchain updateable system. Secondly we can replace the use of HTLCs with another atomic cross-system swap, which is what we would do when we eventually switch to payment points and scalars from payment hashes and preimages. So let's clarify what I'll be discussing here:
I will be discussing mechanisms for the offchain updateable system, which are generally called "payment channel mechanisms". The exact contracts that can be transported across such systems, such as HTLCs, the Scriptless-Script point-based variant, and Discrete Log Contracts, will have to wait another post.
Payment channel mechanisms are designed to be trust-minimized. They might not achieve this design goal (consider the broken Satoshi sequence numbers, or the pre-SegWit Spilman, which I still class as "payment channel mechanism"), but mechanisms which invoke trust in one participant or other as inherent parts of their design are not true payment channels. Such constructions might be of interest, but I will not discuss them here.
Now I might use "we" here to refer to what "we" did to the design of Bitcoin, but it is only because "we" are all Satoshi, except for Craig Steven Wright. So, let's present the other payment channel mechanisms. But first, a digression.
Digression: the new nSequence and OP_CHECKSEQUENCEVERIFY
The new relative-timelock semantics of nSequence. Last time we used nSequence, we had the unfortunate problem that it would be easy to rip off people by offering a higher miner fee for older state where we own more funds, then convince the other side of the channel to give us goods in exchange for a new state with tiny miner fees, then publish both the old state and the new state, then taunt the miners with "so which state is gonna earn you more fees huh huh huh?". This problem, originally failed by Satoshi, was such a massive facepalm that, in honor of miners doing the economically-rational thing in the face of developer and user demands when given a non-final nSequence, we decided to use nSequence as a flag for the opt-in replace-by-fee. Basically, under opt-in replace-by-fee, if a transaction had an nSequence that was not 0xFFFFFFFF or 0xFFFFFFFE, then it was opt-in RBF (BIP125). Because you'd totally abuse nSequence to bribe miners in order to steal money from your bartender, especially if your bartender is not a werebear. Of course, using a 4-byte field for a one-bit flag (to opt-in to RBF or not) was a massive waste of space, so when people started proposing relative locktimes, the nSequence field was repurposed. Basically, in Bitcoin as of the time of this writing (early 2020) if nSequence is less than 0x80000000 it can be interpreted as a relative timelock. I'll spare you the details here, BIP68 has them, but basically nSequence can indicate (much like nLockTime) either a "real world" relative lock time (i.e. the output must have been confirmed for X seconds before it can be spent using a transaction with a non-zero nSequence) or the actual real world, which is measured in blocks (i.e. the output must have been confirmed for N blocks before it can be spent using a transaction with a non-zero nSequence). Of course, this is the Bitcoin universe and "seconds" is a merely human delusion, so we will use blocks exclusively. 
And similarly to OP_CHECKLOCKTIMEVERIFY, we also added OP_CHECKSEQUENCEVERIFY in BIP112. This ensures that the nSequence field is a relative-locktime (i.e. less than 0x80000000) and that it is the specified type (block-based or seconds-based) and that it is equal or higher to the specified minimum relative locktime. It is important to mention the new, modern meaning of nSequence, because it is central to many of the modern payment channel mechanisms, including Lightning Poon-Dryja. Lessons learned?
Poetic justice is a thing. Go go new nSequence!
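The BIP68 encoding of the new nSequence is compact enough to show in full. The constants below are the actual BIP68 values; the decoder function name is mine:

```python
# BIP68 relative-locktime encoding of nSequence (constants from BIP68)
SEQUENCE_LOCKTIME_DISABLE_FLAG = 1 << 31   # set => no relative locktime
SEQUENCE_LOCKTIME_TYPE_FLAG = 1 << 22      # set => units of 512 seconds
SEQUENCE_LOCKTIME_MASK = 0x0000FFFF        # the locktime value itself

def decode_nsequence(n_sequence):
    """Decode nSequence per BIP68. Returns (kind, value)."""
    if n_sequence & SEQUENCE_LOCKTIME_DISABLE_FLAG:
        return ("disabled", None)
    value = n_sequence & SEQUENCE_LOCKTIME_MASK
    if n_sequence & SEQUENCE_LOCKTIME_TYPE_FLAG:
        return ("time", value * 512)       # seconds, 512-second granularity
    return ("blocks", value)               # the actual real world: blocks

print(decode_nsequence(144))             # ('blocks', 144): roughly one day
print(decode_nsequence((1 << 22) | 10))  # ('time', 5120): 5120 seconds
print(decode_nsequence(0xFFFFFFFE))      # ('disabled', None)
```

Note that 0xFFFFFFFE and 0xFFFFFFFF both have bit 31 set, so the old "final"-ish values naturally decode as "no relative locktime", which is what keeps old transactions valid under the new semantics.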
Decker-Wattenhofer "Duplex Micropayment Channels"
Mechanisms-within-mechanisms for a punishment-free bidirectional indefinite-lifetime payment channel. The Decker-Wattenhofer paper was published in 2015, but the Poon-Dryja "Lightning Network" paper was published in 2016. However, the Decker-Wattenhofer paper mentions the Lightning mechanism, specifically mentioning the need to store every old revocation key (i.e. the problem I mentioned last time that was solved using RustyReddit shachains). Maybe Poon-Dryja presented the Lightning Network before making a final published paper in 2016, or something. Either that or cdecker is the Bitcoin time traveler. It's a little hard to get an online copy now, but as of late 2019 a working copy could still be found online. Now the interesting bit is that Decker-Wattenhofer achieves its goals by combining multiple mechanisms that are, by themselves, workable payment channel mechanisms already, except each has some massive drawbacks. By combining them, we can minimize the drawbacks. So let's go through the individual pieces.
Indefinite-lifetime Spilman channels
As mentioned before, Spilman channels have the drawback that they have a limited lifetime: the lock time indicated in the backoff transaction or backoff branch of the script. However, instead of an absolute lock time, we can use a relative locktime. In order to do so, we use a "kickoff" transaction, between the backoff transaction and the funding transaction. Our opening ritual goes this way, between you and our gender-neutral bartender-bancho werebear:
First, you compute the txid for the funding transaction and the kickoff transaction. The funding transaction takes some of your funds and puts it into a 2-of-2 between you and the bartender, and the kickoff is a 1-input 1-output transaction that spends the funding transaction and outputs to another 2-of-2 between you and the bartender.
Then, you generate the backoff transaction, which spends the kickoff transaction and returns all the funds to you. The backoff has a non-zero nSequence, indicating a delay of a number of blocks agreed between you, which is a security/convenience tradeoff parameter.
You sign the backoff transaction, then send it to the bartender.
The bartender signs the backoff, and gives back the fully-signed transaction to you.
You sign the kickoff transaction, then send it to the bartender.
The bartender signs the kickoff, and gives it back to you fully signed.
You sign and broadcast the funding transaction, and both of you wait for the funding transaction to be deeply confirmed.
The above setup assumes you're using SegWit, because transaction malleability fix. At any time, either you or the bartender can broadcast the kickoff transaction, and once that is done, this indicates closure of the channel. You do this if you have drunk enough alcoholic beverages, or the bartender could do this when he or she is closing the bar. Now, to get your drinks, you do:
Sign a transaction spending the kickoff, and adding more funds to the bartender, to buy a drink. This transaction is not encumbered with an nSequence.
Hand the signed transaction to the bartender, who provides you with your next drink.
The channel is closed by publishing the kickoff transaction. Both of you have a fully-signed copy of the kickoff, so either of you can initiate the close. On closure (publication and confirmation of the kickoff transaction), there are two cases:
You fail to pick up any chicks at the bar (I prefer female humans of optimum reproductive age myself rather than nestling birds, but hey, you do you) so you didn't actually spend for drinks at all. In this case, the bartender is not holding any transactions that can spend the kickoff transaction. You wait for the agreed-upon delay after the kickoff is confirmed, and then publish the backoff transaction and get back all the funds that you didn't spend.
You spend all your money on chicks and end up having to be kicked into a cab to get back to your domicile, because even juvenile birds can out-drink you, you pushover. The bartender then uses the latest transaction you gave (the one that gives the most money to him or her --- it would be foolish of him or her to use an earlier version with less money!), signs it, and broadcasts it to get his or her share of the money from the kickoff transaction.
Pro: Number of updates is limited only by the amount of money you have in the "payer" side of the channel.
Pro: no lifetime limit. You can keep the channel open indefinitely if you don't transact over it.
Pro: The delay can be very small.
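The two closure cases can be sketched as follows. The `bartender_best_close` helper and the state layout are hypothetical, but they capture why the bartender always uses the latest (highest-paying) state, and why holding no state at all is your backoff case.

```python
def bartender_best_close(received_states):
    """On closure, the bartender signs and broadcasts whichever received
    state pays him or her the most; earlier states pay less, so using
    them would simply forfeit money."""
    if not received_states:
        return None  # case 1: you bought nothing; only your backoff exists
    return max(received_states, key=lambda st: st["bartender_amount"])

# Illustrative drink purchases, amounts in satoshis:
states = [{"bartender_amount": 10_000}, {"bartender_amount": 25_000}]
best = bartender_best_close(states)
```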
Decrementing nSequence channels
Enforcing order by reducing relative locktimes. I believe this to be novel to the Decker-Wattenhofer mechanism, though I might be missing some predecessor. This again uses the new relative-locktime meaning of nSequence. As such, it also uses a kickoff transaction like the above indefinite-lifetime Spilman channel. Setup is very similar to the setup of the above indefinite-lifetime Spilman channel, except that because this is bidirectional, we can actually have both sides put money into the initial starting backoff transaction. We also rename the "backoff" transaction to "state" transaction. Basically, the state transaction indicates how the money in the channel is divided up between the two participants. The "backoff" we sign during the funding ritual is now the first state transaction. Both sides keep track of the current state transaction (which is initialized to the first state transaction on channel establishment). Finally, the starting nSequence of the first state transaction is very large (usually in the dozens or low hundreds of blocks).

Suppose one participant wants to pay the other. The ritual done is then:
A new version of the current state transaction is created with more money in the payee side.
This new version has nSequence that is one block lower than the current state transaction (in practice it should be a few blocks lower, not just one, because sometimes miners find blocks in quick succession).
Both sides exchange signatures for the new state transaction.
Both sides set the new state transaction as the current state transaction that will be the basis for the next payment.
When the channel is closed by publication of the kickoff transaction, then the transaction with the lowest nSequence becomes valid earlier than the other state transactions. This is enough to enforce that the most recent state transaction (the one with the lowest nSequence, and thus the first to become valid) is published.
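The update ritual can be sketched with toy balances and a hypothetical `new_state` helper; the essential point is only that each new state carries a strictly lower nSequence, so on closure it can confirm before all earlier states.

```python
STEP = 4  # blocks per update; more than 1 guards against quick block runs

def new_state(current, payer, amount):
    """Replace the current state with one whose relative locktime is
    STEP blocks lower, so that on closure it becomes valid first."""
    if current["nsequence"] < STEP:
        raise RuntimeError("channel exhausted; close and reopen it")
    balances = dict(current["balances"])
    payee = "B" if payer == "A" else "A"
    assert balances[payer] >= amount, "payer cannot overspend"
    balances[payer] -= amount
    balances[payee] += amount
    return {"nsequence": current["nsequence"] - STEP, "balances": balances}

# Both sides start from the first state transaction (amounts illustrative):
state = {"nsequence": 144, "balances": {"A": 50_000, "B": 50_000}}
state = new_state(state, "A", 10_000)  # A pays B
state = new_state(state, "B", 2_500)   # payments can go both ways
```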
Pro: indefinite lifetime, at least if no updates are done.
Pro: it shows that life is not without a sense of irony. The original design for nSequence replacement required an incrementing nSequence using the original Satoshi's Vision interpretation of nSequence (which doesn't work). But this channel mechanism instead uses a decrementing nSequence using the new Bitcoin Core interpretation of nSequence as a relative timelock (which does, in fact, work).
Con: Number of updates is limited by the starting maximum nSequence delay. Increasing this delay increases the encumbrance if the channel is closed without any activity, but reducing this delay reduces the number of payments in either direction you can use before you have to close the channel and recreate it. For example, let's have a maximum of 144 blocks of delay. Each update, we decrement the nSequence by 4, because that handles the rare case where up to 3 blocks arrive in very close succession to each other. That only gives us 36 updates for a worst-case of one day of delay, a very bad tradeoff.
Con: You can only be safely offline for a number of blocks equal to the "step", but the maximum delay you may incur is the product of the step times the number of updates you want to make. So you want a small step (because you don't want your worst-case lock time to be large) but you want a big step (because you want to still be safe even if you go offline for a long time).
Combining the ingredients of the Decker-Wattenhofer Duplex Micropayment Channels concoction. Of note is that we can "chain" these mechanisms together in such a way that we strengthen their strengths while covering their weaknesses. Note that both the indefinite-lifetime nSequence Spilman variant and the above decrementing nSequence mechanism have "kickoff" transactions. However, when we chain the two mechanisms together, it turns out that the final transaction of one mechanism also serves as the kickoff of the next mechanism in the chain. So for example, let's chain two of those decrementing nSequence channels together. Let's make them 144 blocks maximum delay each, and decrement in units of 4 blocks, so each of the chained mechanisms can do 36 updates. We start up a new channel with the following transactions:
A funding transaction paying to a 2-of-2, confirmed deeply onchain. All other transactions are offchain until closure.
A kickoff transaction spending the funding transaction output, paying to a 2-of-2.
A "stage 1" decrementing nSequence state transaction, spending the kickoff, with current nSequence 144, paying to a 2-of-2.
A "stage 2" decrementing nSequence state transaction, spending the stage 1, with current nSequence 144, paying to the initial state of the channel.
When we update this channel, we first update the "stage 2" state transaction, replacing it with an nSequence lower by 4 blocks. So after one update our transactions are:
A funding transaction paying to a 2-of-2, confirmed deeply onchain. All other transactions are offchain until closure.
A kickoff transaction spending the funding transaction output, paying to a 2-of-2.
A "stage 1" decrementing nSequence state transaction, spending the kickoff, with current nSequence 144, paying to a 2-of-2.
A "stage 2" decrementing nSequence state transaction, spending the stage 1, with current nSequence 140, paying to the second state of the channel.
The first 3 transactions are the same, only the last one is replaced with a state transaction with lower nSequence. Things become interesting when we reach the "stage 2" having nSequence 0. On the next update, we create a new "stage 1", with an nSequence that is 4 lower, and "reset" the "stage 2" back to an nSequence of 144. This is safe because even though we have a "stage 2" with shorter nSequence, that stage 2 spends a stage 1 with an nSequence of 144, and the stage 1 with nSequence of 140 would beat it to the blockchain first. This results in us having, not 36 + 36 updates, but instead 36 * 36 updates (1296 updates). 1296 updates is still kinda piddling, but that's much better than just a single-stage decrementing nSequence channel. The number of stages can be extended indefinitely, and your only drawback would be the amount of blockchain space you'd spend for a unilateral close. Mutual cooperative closes can always shortcut the entire stack of staged transactions and cut it to a single mutual cooperative close transaction.

But that's not all! You might be wondering about the term "duplex" in the name "Duplex Micropayment Channels". That's because the last decrementing nSequence stage does not hold the money of the participants directly. Instead, the last stage holds two indefinite-lifetime Spilman channels. As you might remember, Spilman channels are unidirectional, so the two Spilman channels represent both directions of the channel. Thus, duplex.

Let's go back to you and your favorite werebear bartender. If you were using a Decker-Wattenhofer Duplex Micropayment Channel, you'd have several stages of decrementing nSequence, terminated in two Spilman channels, a you-to-bartender channel and a bartender-to-you channel. Suppose that, while drinking, the bartender offers you a rebate on each drink if you do some particular service for him or her. Let us not discuss what service this is and leave it to your imagination.
So you pay for a drink, decide you want to get the rebate, and perform a service that the bartender finds enjoyable. So you transfer some funds on the you-to-bartender direction, and then later the bartender transfers some funds in the bartender-to-you channel after greatly enjoying your service. Suppose you now exhaust the you-to-bartender direction. However, you note that the rebates you've earned are enough to buy a few more drinks. What you do instead is to update the staged decrementing nSequence mechanisms, and recreate the two Spilman directions such that the you-to-bartender direction contains all your current funds and the bartender-to-you direction contains all the bartender's funds. With this, you are now able to spend even the money you earned from rebates. At the same time, even if the staged decrementing nSequence mechanisms only have a few hundred thousand updates, you can still extend the practical number of updates as long as you don't have to reset the Spilman channels too often.
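Backing up a bit: the staged decrementing-nSequence reset works like an odometer, and its multiplicative capacity can be sketched as follows. This is a toy model; `advance` and the stage layout are illustrative, not any real implementation.

```python
MAX_SEQ, STEP = 144, 4
PER_STAGE = MAX_SEQ // STEP  # 36 updates before a single stage is exhausted

def capacity(num_stages):
    """Chained stages multiply, rather than add, their update counts."""
    return PER_STAGE ** num_stages

def advance(stages):
    """One channel update, odometer-style: decrement the last stage that
    can still move, and reset every stage after it back to MAX_SEQ."""
    stages = list(stages)
    for i in reversed(range(len(stages))):
        if stages[i] >= STEP:
            stages[i] -= STEP
            for j in range(i + 1, len(stages)):
                stages[j] = MAX_SEQ  # safe: the earlier stage confirms first
            return stages
    raise RuntimeError("all stages exhausted; close and reopen the channel")

stages = [144, 144]
for _ in range(36):            # exhaust stage 2...
    stages = advance(stages)   # ...down to [144, 0]
stages = advance(stages)       # stage 1 steps down, stage 2 resets
```

After the reset, the stages read `[140, 144]`: the shorter stage 1 beats any old stage 2 to the chain, which is exactly the safety argument above.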
Pro: chaining allows more possible updates!
Pro: no "toxic waste"! That is, old backups of your channel state database won't cause you to lose funds automatically.
Con: unilateral closes have long lock times, due to the chaining of decrementing-nSequence mechanisms.
Con: unilateral closes put a lot of transactions onchain, due to the chaining of multiple nested mechanisms.
Con: HTLCs are affected by the total nSequence delay needed by the mechanism. This is because HTLCs have an absolute timelock in their contract, and this can only be enforced onchain. However, the existence of nSequence delays means that absolute timelocks need to trigger unilateral closes several blocks before the absolute timelock, by the nSequence total delta of all the stacked mechanisms. In Poon-Dryja you can safely keep a channel open until just before the absolute timelock expires.
Con: It's not clear to me if the cancellable HTLCs used by Lightning can be hosted by Spilman channels. The HTLCs used in Lightning are "cancellable" because of a nifty ability of every offchain update mechanism: every contract has an additional clause "... or if every signer of the offchain update mechanism agrees, we can ignore this contract and place its funds wherever we agree on". This is not a degradation of security since the HTLCs in a channel are between the two users of the channel, so both of them need to agree anyway in order to accept such a cancellation. This ability is used to propagate forwarding failures back to the payer: instead of waiting for the HTLCs to time out, the node just says to the sender "between you and me, this HTLC won't propagate anyway, because 'insert some reason here', so let's just put the money in it back to you". However, this seems unsafe with Spilman channels, as a cancelled HTLC will still be available on older states of the Spilman channel, and potentially claimable by the payee up until the timelock. Removing the Spilman channels at the end would remove this issue, but now you are limited to a few hundred thousand updates even with lots of decrementing-nSequence layers.
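The HTLC-timing con implies some bookkeeping, sketched here with hypothetical numbers: a node must begin a unilateral close early enough that every stacked relative locktime can elapse before the HTLC's absolute timelock becomes enforceable onchain.

```python
def latest_safe_close(htlc_expiry_height, stage_delays):
    """Block height by which a unilateral close must begin so that all
    stacked relative locktimes elapse before the HTLC's absolute
    timelock. With Poon-Dryja, stage_delays would be empty."""
    return htlc_expiry_height - sum(stage_delays)

# Hypothetical: an HTLC expiring at height 700,000 sits behind two
# stages currently encumbered by 144 and 140 blocks of delay:
deadline = latest_safe_close(700_000, [144, 140])
```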
Burchert-Decker-Wattenhofer Channel Factories
Because you like channels so much, you put channels inside channels so you could pay while you pay. I N C E P T I O N

The Decker-Wattenhofer Duplex Micropayment Channels introduced the possibility of nesting a channel mechanism inside another channel mechanism. For example, it suggests nesting a decrementing-nSequence mechanism inside another decrementing-nSequence mechanism, and having as well an unlimited-lifetime Spilman channel at the end. In the Decker-Wattenhofer case, this is used to support the weakness of one mechanism with the strength of another mechanism. One thing to note is that while the unlimited-lifetime Spilman channel variant used is inherently two-participant (there is one payer and one payee), the decrementing-nSequence channel mechanism can be multiparticipant. Another thing of note is that nothing prevents one mechanism from hosting more than just one inner mechanism, just as it is perfectly fine for a Lightning Network channel to have multiple HTLCs in-flight, plus the money in your side, plus the money in the counterparty's side. As these are "just" Bitcoin-enforceable contracts, there is no fundamental difference between an HTLC and a payment channel mechanism.

Thus the most basic idea of the Burchert-Decker-Wattenhofer Channel Factories paper is simply that we can have a multiparticipant update mechanism host multiple two-party update mechanisms. The outer multiparticipant update mechanism is called a "channel factory" while the inner two-party update mechanisms are called "channels". The exact mechanism used in the Burchert-Decker-Wattenhofer paper uses several decrementing-nSequence mechanisms to implement the factory, and Decker-Wattenhofer Duplex Micropayment Channels to implement the channel layer. However, as noted before, there is no fundamental difference between a Poon-Dryja channel and an HTLC.
So it is in fact possible to have chained Decker-Wattenhofer decrementing-nSequence mechanisms to implement the factory level, while the channels are simply Poon-Dryja channels.
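A rough, illustrative accounting of what a worst-case unilateral close of such a nested construction puts onchain. The exact count depends on the mechanisms chosen; this sketch only shows how the per-channel cost multiplies, and the formula is an assumption of mine, not from the papers.

```python
def unilateral_close_txs(factory_stages, channel_stages, num_channels):
    """Rough count of transactions forced onchain if one factory
    participant force-closes everything: the shared kickoff, each
    factory-level stage transaction, then per inner channel its own
    stage transactions plus two terminal Spilman settlements."""
    return 1 + factory_stages + num_channels * (channel_stages + 2)

# Three participants running three pairwise channels in one factory,
# with two decrementing-nSequence stages at each level:
footprint = unilateral_close_txs(factory_stages=2, channel_stages=2,
                                 num_channels=3)
```

A mutual cooperative close collapses all of this to a single transaction, which is why the factory construction is attractive despite the ugly worst case.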
So this concludes for now an alternative mechanism to the classic Poon-Dryja that Lightning uses. The tradeoffs are significantly different between Decker-Wattenhofer vs Poon-Dryja:
Decker-Wattenhofer: No toxic waste: old data stolen from you, or which you inadvertently use, is not going to lose all your funds.
Decker-Wattenhofer: Multiple participants in a single offchain mechanism, enabling things like Channel Factories.
Poon-Dryja: Doesn't have ridiculously long lock times in the unilateral close case.
Poon-Dryja: Supports HTLCs for trustless forwarding (not clear if Decker-Wattenhofer fully supports this without sacrificing the duplexed indefinite-lifetime Spilman channels at the end).
Copyright 2020 Alan Manuel K. Gloria. Released under CC-BY.
Bitcoin doesn't require any special hardware, as it can be used on any device which can do computations. To make a Bitcoin transaction you need to create an ECDSA signature, which is just math, something which all computers do well. You can do it on resource-constrained devices like smart cards (think SIM cards) and on large servers alike. The idea that you need a special Bitcoin computer to use Bitcoin is outright harmful, as it limits your choices and dupes you into buying overpriced proprietary hardware which gives the vendor more control over what you can and cannot do. This is very much against the spirit of Bitcoin, which can thrive only as an open system. So yeah, that thing 21 inc is trying to sell makes no sense, whatsoever. But a lot of people think that "there might be something in it", so let me go through the theories of why this device makes sense:
"It is a dev kit!". Let me guess, you aren't a programmer. Or if you're a programmer, you're a shitty programmer and should be ashamed of yourself. You do not need any dev kit for Bitcoin, all you need is open source software (and, maybe, some internet services, optionally). When I wanted to try to do something Bitcoin related back in 2011, all I needed was to download bitcoind and install it on my $10/month VPS. Then I looked through RPC API call list and made a Bitcoin-settled futures exchange. The whole thing took me only a week. I didn't need to pay $400 for a devkit. Learning how to work with bitcoind took less than a day. There are hundreds of Bitcoin companies and thousands of hobbyist working on Bitcoin projects, none of them needed any sort of a dev kit.
"It is useful because it has APIs and pre-installed software!" No, see above. If needed, pre-installed software can be delivered in a form of a virtual machine (e.g. VirtualBox, VMware, etc), no need for a physical device.
"It is useful because it comes with a micropayment service/API". Nope. These things can be done in software, no need for custom hardware. Obviously, a micropayment system can be more widely adopted when it is open. If it is tied to custom hardware (which I doubt) then you have a vendor lock-in which is exactly the thing we're trying to avoid with Bitcoin.
"it comes with pre-installed marketplace". So what, we have marketplaces such as OpenBazaar. If there are useful features in the 21 inc's marketplace we can replicated them in open source software.
"It's convenient for users!" Are you saying that a $400 device which you need to be connected to a laptop is more convenient than a service which can run in a browser?
"It might offer better security". We already have devices such as Trezor which can protect bitcoins from unsecure operating system. Trezor costs much less than $400 and is actually useful. Even though it was done by a small company without much capital.
"It can be used for applications like a reputation system, etc." When telecom companies wanted an ability to differentiate between users, they created smartcard-based SIM cards. This technology is many decades old. Using Bitcoin for a reputation system is a bad idea, as it is not designed for that. If device holds 1000 satoshi to give it an identity weight, a guy who has 1 bitcoin can impersonate 10000 such devices. It just not going to work.
"A constant stream of bitcoins it mines is convenient for users." User has to pay for this device, he might as well just buy bitcoins. If it is necessary for bitcoins to be attached to hardware, this can be done using a tiny dongle which costs less than $1 to manufacture, or a smart card.
"But this device got backed by VCs and large companies, there must be something to it, we are just too stupid to comprehend its greatness". Well...
There is, indeed, a very simple explanation of this device's existence: Balaji's reality distortion field. He is a prominent VC, so it was relatively easy to convince others that it's a worthy idea. The big vision behind it -- the financial network of devices -- is actually great. And then there is the question of execution. A guy like Balaji is supposed to be an expert in assessing feasibility of execution. So, as we can guess, investors trusted him. As many VCs tell it, they invest in people. They cannot examine the nitty-gritty technical details, but just look at skills, track record, etc. So the fact that it got large investments and generates a lot of hype doesn't mean much; there were plenty of such companies during the dotcom boom. It's quite like :CueCat. As we now know, the ability to scan a printed code and open the web page it points to is very useful; a lot of people use QR codes, they are ubiquitous. This was exactly the vision behind CueCat. But it was implemented as a dedicated hardware device, not as a smartphone app, as there were no smartphones at that time. So after a lot of hype and aggressive marketing the company failed, but just a few years later their vision became realized in QR reader apps. Hardware becomes increasingly irrelevant. As Marc Andreessen, Balaji's partner, once said, software is eating the world. Solving problems which can be solved in software using custom hardware is just silly. Balaji talks about internet-of-things applications where devices mine bitcoins and use them to buy services they need to function. Well, in the end, the user pays for that, as he pays for physical chips and electricity. It would be more efficient for him to pay directly than to use this mining-based scheme. And it's possible to do so using software. E.g. imagine you have a lot of smart devices which use external services in your home. It would be nice if you could just aggregate the bill and pay it off automatically, say $2/month. Why only $2?
Well, if there is a device consuming $20/month, it needs some serious mining abilities, so it will cost much more than $20 in electricity bills... Maybe 21 inc will eventually pivot into purely software solutions; they have a lot of money to play with. But the current generation of devices they make just makes no sense, whatsoever, and people who try to find something useful in them just waste their time. EDIT: One plausible case for using custom hardware is the possibility of off-chain microtransactions using trusted hardware, not unlike MintChip conceptually. But the size of the device, as well as its price, is puzzling in this case, as this can be implemented (and was already implemented) in smart card form factor.
Transcript of a discussion between an ASIC designer and several proof-of-work designers from the #monero-pow channel on Freenode this morning
[08:07:01] lukminer contains precompiled cn/r math sequences for some blocks: https://lukminer.org/2019/03/09/oh-kay-v4r-here-we-come/ [08:07:11] try that with RandomX :P [08:09:00] tevador: are you ready for some RandomX feedback? it looks like the CNv4 is slowly stabilizing, hashrate comes down... [08:09:07] how does it even make sense to precompile it? [08:09:14] mine 1% faster for 2 minutes? [08:09:35] naturally we think the entire asic-resistance strategy is doomed to fail :) but that's a high-level thing, who knows. people may think it's great. [08:09:49] about RandomX: looks like the cache size was chosen to make it GPU-hard [08:09:56] looking forward to more docs [08:11:38] after initial skimming, I would think it's possible to make a 10x asic for RandomX. But at least for us, we will only make an ASIC if there is not a total ASIC hostility there in the first place. That's better for the secret miners then. [08:13:12] What I propose is this: we are working on an Ethash ASIC right now, and once we have that working, we would invite tevador or whoever wants to come to HK/Shenzhen and we walk you guys through how we would make a RandomX ASIC. You can then process this input in any way you like. Something like that. [08:13:49] unless asics (or other accelerators) re-emerge on XMR faster than expected, it looks like there is a little bit of time before RandomX rollout [08:14:22] 10x in what measure? $/hash or watt/hash? [08:14:46] watt/hash [08:15:19] so you can make 10 times more efficient double precisio FPU? [08:16:02] like I said let's try to be productive. You are having me here, let's work together! [08:16:15] continue with RandomX, publish more docs. that's always helpful. [08:16:37] I'm trying to understand how it's possible at all. Why AMD/Intel are so inefficient at running FP calculations? [08:18:05] midipoet ([email protected]/web/irccloud.com/x-vszshqqxwybvtsjm) has joined #monero-pow [08:18:17] hardware development works the other way round. 
We start with 1) math then 2) optimization priority 3) hw/sw boundary 4) IP selection 5) physical implementation [08:22:32] This still doesn't explain at which point you get 10x [08:23:07] Weren't you the ones claiming "We can accelerate ProgPoW by a factor of 3x to 8x." ? I find it hard to believe too. [08:30:20] sure [08:30:26] so my idea: first we finish our current chip [08:30:35] from simulation to silicon :) [08:30:40] we love this stuff... we do it anyway [08:30:59] now we have a communication channel, and we don't call each other names immediately anymore: big progress! [08:31:06] you know, we russians have a saying "it was smooth on paper, but they forgot about ravines" [08:31:12] So I need a bit more details [08:31:16] ha ha. good! [08:31:31] that's why I want to avoid to just make claims [08:31:34] let's work [08:31:40] RandomX comes in Sep/Oct, right? [08:31:45] Maybe [08:32:20] We need to audit it first [08:32:31] ok [08:32:59] we don't make chips to prove sw devs that their assumptions about hardware are wrong. especially not if these guys then promptly hardfork and move to the next wrong assumption :) [08:33:10] from the outside, this only means that hw & sw are devaluing each other [08:33:24] neither of us should do this [08:33:47] we are making chips that can hopefully accelerate more crypto ops in the future [08:33:52] signing, verifying, proving, etc. [08:34:02] PoW is just a feature like others [08:34:18] sech1: is it easy for you to come to Hong Kong? (visa-wise) [08:34:20] or difficult? [08:34:33] or are you there sometimes? [08:34:41] It's kind of far away [08:35:13] we are looking forward to more RandomX docs. that's the first step. [08:35:31] I want to avoid that we have some meme "Linzhi says they can accelerate XYZ by factor x" .... "ha ha ha" [08:35:37] right? we don't want that :) [08:35:39] doc is almost finished [08:35:40] What docs do you need? 
It's described pretty good [08:35:41] so I better say nothing now [08:35:50] we focus on our Ethash chip [08:36:05] then based on that, we are happy to walk interested people through the design and what else it can do [08:36:22] that's a better approach from my view than making claims that are laughed away (rightfully so, because no silicon...) [08:36:37] ethash ASIC is basically a glorified memory controller [08:36:39] sech1: tevador said something more is coming (he just did it again) [08:37:03] yes, some parts of RandomX are not described well [08:37:10] like dataset access logic [08:37:37] RandomX looks like progpow for CPU [08:37:54] yes [08:38:03] it is designed to reflect CPU [08:38:34] so any ASIC for it = CPU in essence [08:39:04] of course there are still some things in regular CPU that can be thrown away for RandomX [08:40:20] uncore parts are not used, but those will use very little power [08:40:37] except for memory controller [08:41:09] I'm just surprised sometimes, ok? let me ask: have you designed or taped out an asic before? isn't it risky to make assumptions about things that are largely unknown? [08:41:23] I would worry [08:41:31] that I get something wrong... [08:41:44] but I also worry like crazy that CNv4 will blow up, where you guys seem to be relaxed [08:42:06] I didn't want to bring up anything RandomX because CNv4 is such a nailbiter... :) [08:42:15] how do you guys know you don't have asics in a week or two? 
[08:42:38] we don't have experience with ASIC design, but RandomX is simply designed to exactly fit CPU capabilities, which is the best you can do anyways [08:43:09] similar as ProgPoW did with GPUs [08:43:14] some people say they want to do asic-resistance only until the vast majority of coins has been issued [08:43:21] that's at least reasonable [08:43:43] yeah but progpow totally will not work as advertised :) [08:44:08] yeah, I've seen that comment about progpow a few times already [08:44:11] which is no surprise if you know it's just a random sales story to sell a few more GPUs [08:44:13] RandomX is not permanent, we are expecting to switch to ASIC friendly in a few years if possible [08:44:18] yes [08:44:21] that makes sense [08:44:40] linzhi-sonia: how so? will it break or will it be asic-able with decent performance gains? [08:44:41] are you happy with CNv4 so far? [08:45:10] ah, long story. progpow is a masterpiece of deception, let's not get into it here. [08:45:21] if you know chip marketing it makes more sense [08:45:24] linzhi-sonia: So far? lol! a bit early to tell, don't you think? [08:45:35] the diff is coming down [08:45:41] first few hours looked scary [08:45:43] I remain skeptical: I only see ASICs being reasonable if they are already as ubiquitous as smartphones [08:45:46] yes, so far so good [08:46:01] we kbew the diff would not come down ubtil affter block 75 [08:46:10] yes [08:46:22] but first few hours it looks like only 5% hashrate left [08:46:27] looked [08:46:29] now it's better [08:46:51] the next worry is: when will "unexplainable" hashrate come back? [08:47:00] you hope 2-3 months? more? [08:47:05] so give it another couple of days. will probably overshoot to the downside, and then rise a bit as miners get updated and return [08:47:22] 3 months minimum turnaround, yes [08:47:28] nah [08:47:36] don't underestimate asicmakers :) [08:47:54] you guys don't get #1 priority on chip fabs [08:47:56] 3 months = 90 days. 
do you know what is happening in those 90 days exactly? I'm pretty sure you don't. same thing as before. [08:48:13] we don't do any secret chips btw [08:48:21] 3 months assumes they had a complete design ready to go, and added the last minute change in 1 day [08:48:24] do you know who is behind the hashrate that is now bricked? [08:48:27] innosilicon? [08:48:34] hyc: no no, and no. :) [08:48:44] hyc: have you designed or taped out a chip before? [08:48:51] yes, many years ago [08:49:10] then you should know that 90 days is not a fixed number [08:49:35] sure, but like I said, other makers have greater demand [08:49:35] especially not if you can prepare, if you just have to modify something, or you have more programmability in the chip than some people assume [08:50:07] we are chipmakers, we would never dare to do what you guys are doing with CNv4 :) but maybe that just means you are cooler! [08:50:07] and yes, programmability makes some aspect of turnaround easier [08:50:10] all fine [08:50:10] I hope it works! [08:50:28] do you know who is behind the hashrate that is now bricked? [08:50:29] inno? [08:50:41] we suspect so, but have no evidence [08:50:44] maybe we can try to find them, but we cannot spend too much time on this [08:50:53] it's probably not so much of a secret [08:51:01] why should it be, right? [08:51:10] devs want this cat-and-mouse game? devs get it... [08:51:35] there was one leak saying it's innosilicon [08:51:36] so you think 3 months, ok [08:51:43] inno is cool [08:51:46] good team [08:51:49] IP design house [08:51:54] in Wuhan [08:52:06] they send their people to conferences with fake biz cards :) [08:52:19] pretending to be other companies? [08:52:26] sure [08:52:28] ha ha [08:52:39] so when we see them, we look at whatever card they carry and laugh :) [08:52:52] they are perfectly suited for secret mining games [08:52:59] they made at most $6 million in 2 months of mining, so I wonder if it was worth it [08:53:10] yeah. 
no way to know [08:53:15] but it's good that you calculate! [08:53:24] this is all about cost/benefit [08:53:25] then you also understand - imagine the value of XMR goes up 5x, 10x [08:53:34] that whole "asic resistance" thing will come down like a house of cards [08:53:41] I would imagine they sell immediately [08:53:53] the investor may fully understand the risk [08:53:57] the buyer [08:54:13] it's not healthy, but that's another discussion [08:54:23] so mid-June [08:54:27] let's see [08:54:49] I would be susprised if CNv4 ASICs show up at all [08:54:56] surprised* [08:54:56] why? [08:55:05] is only an economic question [08:55:12] yeah should be interesting. FPGAs will be near their limits as well [08:55:16] unless XMR goes up a lot [08:55:19] no, not *only*. it's also a technology question [08:55:44] you believe CNv4 is "asic resistant"? which feature? [08:55:53] it's not [08:55:59] cnv4 = Rabdomx ? [08:56:03] no [08:56:07] cnv4=cryptinight/r [08:56:11] ah [08:56:18] CNv4 is the one we have now, I think [08:56:21] since yesterday [08:56:30] it's plenty enough resistant for current XMR price [08:56:45] that may be, yes! [08:56:55] I look at daily payouts. XMR = ca. 100k USD / day [08:57:03] it can hold until October, but it's not asic resistant [08:57:23] well, last 24h only 22,442 USD :) [08:57:32] I think 80 h/s per watt ASICs are possible for CNv4 [08:57:38] linzhi-sonia where do you produce your chips? TSMC? [08:57:44] I'm cruious how you would expect to build a randomX ASIC that outperforms ARM cores for efficiency, or Intel cores for raw speed [08:57:48] curious [08:58:01] yes, tsmc [08:58:21] Our team did the world's first bitcoin asic, Avalon [08:58:25] and upcoming 2nd gen Ryzens (64-core EPYC) will be a blast at RandomX [08:58:28] designed and manufactured [08:58:53] still being marketed? [08:59:03] linzhi-sonia: do you understand what xmr wants to achieve, community-wise? [08:59:14] Avalon? as part of Canaan Creative, yes I think so. 
[08:59:25] there's not much interesting going on in SHA256
[08:59:29] Inge-: I would think so, but please speak
[08:59:32] hyc: yes
[09:00:28] linzhi-sonia: i am curious to hear your thoughts. I am fairly new to this space myself...
[09:00:51] oh
[09:00:56] we are grandpas, and grandmas
[09:01:36] yet I have no problem understanding why ASICs are currently reviled.
[09:01:48] xmr's main differentiators to, let's say, btc are anonymity and fungibility
[09:01:58] I find the client terribly slow btw
[09:02:21] and I think the asic-forking since last May is wrong, doesn't create value, and doesn't help with the project objectives
[09:02:25] which "the client"?
[09:02:52] the Monero GUI client maybe
[09:03:12] MacOS, yes
[09:03:28] What exactly is slow?
[09:03:30] linzhi-sonia: I run my own node, and use the CLI and Monerujo. Have not had issues.
[09:03:49] staying in sync
[09:03:49] linzhi-sonia: decentralization is also a key principle
[09:03:56] one that Bitcoin has failed to maintain
[09:04:39] hmm
[09:05:00] looks fairly decentralized to me. decentralization is the result of 3 goals imo: resilient, trustless, permissionless
[09:05:28] don't ask a hardware maker about physical decentralization. that's too ideological. we focus on logical decentralization.
[09:06:11] physical decentralization is important, with the bulk of bitcoin mining centered on Chinese hydroelectric dams
[09:06:19] have you thought about including block data in the PoW?
[09:06:41] yes, of course.
[09:07:39] is that already in an algo?
[09:08:10] hyc: about "centered on chinese hydro" - what is your source? the best paper I know is this: https://coinshares.co.uk/wp-content/uploads/2018/11/Mining-Whitepaper-Final.pdf
[09:09:01] linzhi-sonia: do you mine on your ASICs before you sell them?
[09:09:13] besides testing, of course
[09:09:45] that paper puts Chinese btc miners at 60% max
[09:10:05] tevador: I think everybody learned that that is not healthy long-term!
[09:10:16] because it gives the chipmaker a cost advantage over its own customers
[09:10:33] and cost advantage leads to centralization (physical and logical)
[09:10:51] you guys should know who finances progpow and why :)
[09:11:05] but let's not get into this, ha ha. want to keep the channel civilized. right OhGodAGirl? :)
[09:11:34] tevador: so the answer is no! 100% and definitely no
[09:11:54] that "self-mining" disease was one of the problems we have now with asics, and their bad reputation (rightfully so)
[09:13:08] I plan to write a nice short 2-page paper or so on our chip design process. maybe it's interesting to some people here.
[09:13:15] basically the 5 steps I mentioned before, from math to physical
[09:13:32] linzhi-sonia: the paper you linked puts 48% of bitcoin mining in Sichuan. the total in China is much more than 60%
[09:13:38] need to run it by a few people to fix bugs, will post it here when published
[09:14:06] hyc: ok! I am just sharing the "best" document I know today. it definitely may be wrong and there may be a better one now.
[09:14:18] hyc: if you see some reports, please share
[09:14:51] hey, I am really curious about this: where is a PoW algo that puts block data into the PoW?
[09:15:02] the previous paper I read is from here: http://hackingdistributed.com/2018/01/15/decentralization-bitcoin-ethereum/
[09:15:38] hyc: you said that already exists? (block data in PoW)
[09:15:45] it would make verification harder
[09:15:49] linzhi-sonia: https://the-eye.eu/public/Books/campdivision.com/PDF/Computers%20General/Privacy/bitcoin/meh/hashimoto.pdf
[09:15:51] but for chips it would be interesting
[09:15:52] we discussed the possibility about a year ago: https://www.reddit.com/Monero/comments/8bshrx/what_we_need_to_know_about_proof_of_work_pow/
[09:16:05] oh, good links! thanks! need to read...
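For readers unfamiliar with the "block data in the PoW" idea referenced above (Dryja's Hashimoto paper): the core trick is to make each PoW attempt read pseudo-randomly selected pieces of chain history, so mining requires access to blockchain data and verification requires the same lookups (which is why it "would make verification harder"). A minimal toy sketch of that shape, not the actual Hashimoto construction; all names and parameters here are illustrative:

```python
import hashlib

def h(data: bytes) -> bytes:
    """Plain SHA-256 as a stand-in hash function."""
    return hashlib.sha256(data).digest()

def pow_hash(header: bytes, nonce: int, chain_txs: list, rounds: int = 8) -> bytes:
    """Mix pseudo-randomly selected historical transactions into the PoW.

    Each round uses the current mix state to pick an index into chain
    history, so a miner must be able to look up arbitrary past
    transactions -- i.e. participating in PoW implies storing the chain.
    """
    mix = h(header + nonce.to_bytes(8, "big"))
    for _ in range(rounds):
        # derive a data-dependent index into chain history
        idx = int.from_bytes(mix[:8], "big") % len(chain_txs)
        mix = h(mix + chain_txs[idx])
    return mix

# toy usage with a fake 1000-transaction history
txs = [f"tx{i}".encode() for i in range(1000)]
digest = pow_hash(b"blockheader", nonce=42, chain_txs=txs)
print(digest.hex())
```

The chip-design angle raised in the log follows directly: the memory needed to serve those random lookups, rather than raw hashing logic, becomes the dominant cost of an ASIC.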
[09:16:06] I think that paper by dryja was the original
[09:17:53] since we have a nice flow - second question I'm very curious about: has anyone thought about in-protocol rewards for other functions?
[09:18:55] we've discussed micropayments for wallets to use remote nodes
[09:18:55] you know there is a lot of work in other coins on STARK provers, zero-knowledge, etc. many of those things are very compute-intensive, or need to be outsourced to a service (Zether). for chipmakers, in-protocol rewards create an economic incentive to accelerate those things.
[09:19:50] whenever there is an in-protocol reward, you may get the power of ASICs doing something you actually want to happen
[09:19:52] it would be nice if there was some economic reward for running a full node, but no one has come up with much more than that afaik
[09:19:54] instead of fighting them off
[09:20:29] you need to use asics, not fight them. that's an obvious thing to say for an asicmaker...
[09:20:41] in-protocol rewards can be very powerful
[09:20:50] like I said before - unless the ASICs are so useful they're embedded in every smartphone, I don't see them being a positive for decentralization
[09:21:17] if they're a separate product, the average consumer is not going to buy them
[09:21:20] now I was talking about speedup of verifying, signing, proving, etc.
[09:21:23] they won't even know what they are
[09:22:07] if anybody wants to talk about or design in-protocol rewards, please come talk to us
[09:22:08] the average consumer also doesn't use general-purpose hardware to secure blockchains either
[09:22:14] not just for PoW, in fact *NOT* for PoW
[09:22:32] it requires sw/hw co-design
[09:23:10] we are in long-term discussions/collaboration over this with Ethereum, Bitcoin Cash. just talk right now.
[09:23:16] this was recently published, though, suggesting more uptake I guess: https://btcmanager.com/college-students-are-the-second-biggest-miners-of-cryptocurrency/
[09:23:29] I find it pretty hard to believe their numbers
[09:24:03] well
[09:24:09] sorry, original article: https://www.pcmag.com/news/366952/college-kids-are-using-campus-electricity-to-mine-crypto
[09:24:11] just talk, no? rumors
[09:24:18] college students are already more educated than the average consumer
[09:24:29] we are not seeing many such customers anymore
[09:24:30] it's data from Cisco monitoring network traffic