Opcodes used in Bitcoin Script | Ramon Quesada

Taproot, CoinJoins, and Cross-Input Signature Aggregation

It is a very common misconception that the upcoming Taproot upgrade helps CoinJoin.
TLDR: The upcoming Taproot upgrade does not help equal-valued CoinJoin at all, though it potentially increases the privacy of other protocols, such as the Lightning Network, and escrow contract schemes.
If you want to learn more, read on!

Equal-valued CoinJoins

Let's start with equal-valued CoinJoins, the type JoinMarket and Wasabi use. Some number of participants agree on a common value all of them will use. With JoinMarket the taker defines this value and pays the makers to agree to it; with Wasabi the server defines a value of approximately 0.1 BTC.
Then, each participant provides inputs that they unilaterally control, totaling at least the common value. Since each input is unilaterally controlled, each input typically requires just a singlesig. Each participant also provides up to two addresses they control: one of these will be paid the common value, while the other will receive any extra value from the inputs they provided (i.e. the change output).
The participants then make a single transaction that spends all the provided inputs and pays out to the appropriate outputs. The inputs and outputs are shuffled in some secure manner. Then the unsigned transaction is distributed back to all participants.
Finally, each participant checks that the transaction spends the inputs they provided (and, more importantly, does not spend any other coins they might own that they did not provide for this CoinJoin!) and that it pays out to the appropriate address(es) they control. Once they have validated the transaction, they ratify it by signing for each of the inputs they provided.
Once every participant has provided signatures for all the inputs they registered, the transaction is completely signed and the CoinJoin transaction can be validly confirmed.
CoinJoin is a very simple and direct privacy boost: it requires no SCRIPTs, needs only singlesig, etc.


Let's say we have two participants who have agreed on a common amount of 0.1 BTC. One provides a 0.105 BTC coin as input, the other a 0.114 BTC coin. This results in a CoinJoin with the 0.105 and 0.114 BTC coins as inputs, and outputs of 0.1, 0.005, 0.014, and 0.1 BTC.
Now obviously the 0.005 output came from the 0.105 input, and the 0.014 output came from the 0.114 input.
But the two 0.1 BTC outputs cannot be correlated with either input! There is no correlating information, since either output could have come from either input. That is how common CoinJoin implementations like Wasabi and JoinMarket gain privacy.
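The construction above can be sketched in a few lines. This is a toy illustration of the two-party example (amounts in BTC, fees ignored), not a real transaction builder:

```python
# Toy sketch of the two-party equal-valued CoinJoin from the example above.
import random

COMMON = 0.1
inputs = [0.105, 0.114]  # one coin per participant

outputs = []
for amt in inputs:
    outputs.append(COMMON)           # equal-valued output: unlinkable
    change = round(amt - COMMON, 8)  # change output, which reveals its link
    if change > 0:
        outputs.append(change)

random.shuffle(outputs)  # the ordering carries no information

print(sorted(outputs))   # [0.005, 0.014, 0.1, 0.1]
```

Either 0.1 output could belong to either input, which is exactly the privacy gain described next.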

Banning CoinJoins

Unfortunately, large-scale CoinJoins like those made by Wasabi and JoinMarket are very obvious.
All you have to do is look for transactions where, say, more than 3 outputs have the same value, and the number of inputs is equal to or larger than the number of equal-valued outputs. Thus, it is trivial to identify equal-valued CoinJoins made by Wasabi and JoinMarket. You can even trivially differentiate them: Wasabi equal-valued CoinJoins have a hundred or more inputs, with outputs in units of approximately 0.1 BTC, while JoinMarket CoinJoins have fewer than a dozen equal-valued outputs (usually 4 to 6), with the common value varying wildly, from as low as 0.001 BTC to as high as a dozen BTC or more.
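The trivial heuristic just described can be sketched directly. The threshold of 4 equal outputs and the sample values are illustrative, not drawn from any real chain-analysis product:

```python
# Hedged sketch of the equal-valued CoinJoin detection heuristic above:
# flag a transaction if more than 3 outputs share a value and the number of
# inputs is at least the number of those equal-valued outputs.
from collections import Counter

def looks_like_equal_valued_coinjoin(input_values, output_values, min_equal=4):
    value, n = Counter(output_values).most_common(1)[0]
    return n >= min_equal and len(input_values) >= n

# A Wasabi-style join: several ~0.1 BTC outputs, at least as many inputs.
ins  = [0.105, 0.114, 0.2, 0.15, 0.11]
outs = [0.1, 0.1, 0.1, 0.1, 0.1, 0.005, 0.014, 0.05, 0.01]
print(looks_like_equal_valued_coinjoin(ins, outs))          # True
print(looks_like_equal_valued_coinjoin([1.0], [0.7, 0.3]))  # False
```

A plain payment almost never trips this rule, which is why equal-valued CoinJoins are so easy to single out.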
This has led a number of anti-privacy exchanges to refuse to credit custodially-held accounts if the incoming deposit is within a few hops of an equal-valued CoinJoin, usually citing concerns about regulations. Crucially, the exchange continues to hold the private keys for those "banned" deposits and can still spend them, so this is effectively theft. If your exchange does this to you, you should report that exchange as stealing money from its customers. Not your keys, not your coins.
Thus, CoinJoins represent a privacy tradeoff: they unlink your coins from their prior history, but the CoinJoin itself is obvious to any onchain observer.


Let's now briefly discuss that nice new shiny thing called Taproot.
Taproot includes two components: Schnorr signatures, and Merkelized scripts (MAST) hidden behind what looks like an ordinary public key.
This has some nice properties: a cooperative keypath spend looks exactly like any other singlesig spend, and if a script branch is used, only that branch is revealed onchain.

Taproot DOES NOT HELP CoinJoin

So let's review!
There is absolutely no overlap. Taproot helps things that CoinJoin does not use. CoinJoin uses things that Taproot does not improve.

B-but They Said!!

A lot of early reporting on Taproot claimed that Taproot benefits CoinJoin.
The confusion arises because earlier drafts of Taproot included a feature called cross-input signature aggregation.
In current Bitcoin, every input, to be spent, has to be signed individually. With cross-input signature aggregation, all inputs that support this feature are signed with a single signature that covers all of them. So, for example, if you spend two inputs, current Bitcoin requires a signature for each input, but with cross-input signature aggregation you can sign both with a single signature. This works even if the inputs have different public keys: two inputs with cross-input signature aggregation effectively define a 2-of-2 public key, and you can only sign for those inputs if you know the private keys for both, or if you are cooperatively signing with somebody who knows the private key for the other input.
This would reduce CoinJoin costs. Since CoinJoins have lots of inputs (each participant provides at least one, probably more, and larger participant sets mean more privacy), if all of them enabled cross-input signature aggregation, even a large CoinJoin could carry only a single signature.
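The "2-of-2 from two independent keys" idea can be demonstrated with a toy Schnorr-style scheme over the multiplicative group mod a prime. This is an illustration only: real Bitcoin proposals use secp256k1 with MuSig-style aggregation, not this toy group, and the parameter choices here are purely for demonstration:

```python
# Toy sketch of signature aggregation: the aggregate key for two inputs is
# the product of the individual keys, and signing for it requires both
# private keys. Not real cryptographic code.
import hashlib, random

P = 2**255 - 19   # toy prime modulus (assumption, chosen for convenience)
G = 2             # toy generator

def H(*parts):
    h = hashlib.sha256()
    for x in parts:
        h.update(str(x).encode())
    return int.from_bytes(h.digest(), "big")

# Each input owner has an independent key pair.
x1, x2 = random.randrange(2, P), random.randrange(2, P)
pub1, pub2 = pow(G, x1, P), pow(G, x2, P)
agg_pub = (pub1 * pub2) % P   # aggregate key covering both inputs

def sign(msg, secret):
    k = random.randrange(2, P)
    R = pow(G, k, P)           # nonce commitment
    return R, k + H(R, msg) * secret

def verify(msg, pub, sig):
    R, s = sig
    return pow(G, s, P) == (R * pow(pub, H(R, msg), P)) % P

tx = "spend input1 and input2"
print(verify(tx, agg_pub, sign(tx, x1 + x2)))  # True: joint signature passes
print(verify(tx, agg_pub, sign(tx, x1)))       # False: one key alone fails
```

The point is the last two lines: knowing only one input's key is not enough to satisfy the aggregate, which is exactly the cooperative-signing requirement described above.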
This complicates the signing process for CoinJoins (the signers now have to sign cooperatively) but it can be well worth it for the reduced signature size and onchain cost.
But note that while cross-input signature aggregation improves the cost of CoinJoins, it does not improve the privacy! Equal-valued CoinJoins are still obvious and still readily bannable by privacy-hating exchanges. For something that does improve privacy, see https://old.reddit.com/Bitcoin/comments/gqb3udesign_for_a_coinswap_implementation_fo

Why isn't cross-input signature aggregation in?

There are some fairly complex technical reasons why cross-input signature aggregation isn't included in the current Taproot proposal.
The primary reason was to reduce the technical complexity of Taproot, in the hope that it would be easier to convince users to activate (while support for Taproot is quite high, developers have become wary of being hopeful that new proposals will ever activate, given the previous difficulties with SegWit).
The main technical complexity here is that it interacts with future ways to extend Bitcoin.
The rest of this writeup assumes you already know how Bitcoin SCRIPT works. If you don't understand Bitcoin SCRIPT at the low level, the TLDR is that cross-input signature aggregation complicates how Bitcoin can be extended in the future, so it was deferred to let the developers think more about it.
(this is how I understand it; perhaps pwuille or ajtowns can give a better summary.)
In detail, Taproot also introduces OP_SUCCESS opcodes. If you know about the OP_NOP opcodes already defined in current Bitcoin, well, OP_SUCCESS is basically "OP_NOP done right".
Now, OP_NOP is a do-nothing operation. It can be replaced in future versions of Bitcoin by having that operation check some condition, and then fail if the condition is not satisfied. For example, both OP_CHECKLOCKTIMEVERIFY and OP_CHECKSEQUENCEVERIFY were previously OP_NOP opcodes. Older nodes will see an OP_CHECKLOCKTIMEVERIFY and think it does nothing, but newer nodes will check if the nLockTime field has a correct specified value, and fail if the condition is not satisfied. Since most of the nodes on the network are using much newer versions of the node software, older nodes are protected from miners who try to misspend any OP_CHECKLOCKTIMEVERIFY/OP_CHECKSEQUENCEVERIFY, and those older nodes will still remain capable of synching with the rest of the network: a dedication to strict backward-compatibility necessary for a consensus system.
Softforks basically mean that a script that passes in the latest version must also be passing in all older versions. A script cannot be passing in newer versions but failing in older versions, because that would kick older nodes off the network (i.e. it would be a hardfork).
But OP_NOP is a very restricted way of adding opcodes. Opcodes that replace OP_NOP can only do one thing: check if some condition is true. They can't push new data on the stack, and they can't pop items off the stack. For example, suppose instead of OP_CHECKLOCKTIMEVERIFY, we had added an OP_GETBLOCKHEIGHT opcode. This opcode would push the height of the blockchain on the stack. If this command replaced an older OP_NOP opcode, then a script like OP_GETBLOCKHEIGHT 650000 OP_EQUAL might pass in some future Bitcoin version, but older versions would see OP_NOP 650000 OP_EQUAL, which would fail because OP_EQUAL expects two items on the stack. So older versions would fail a SCRIPT that newer versions pass, which is a hardfork and thus a backwards incompatibility.
OP_SUCCESS is different. Instead, old nodes, when parsing the SCRIPT, will see OP_SUCCESS and, without executing the body, will consider the SCRIPT as passing. So the OP_GETBLOCKHEIGHT 650000 OP_EQUAL example will now work: a future version of Bitcoin might pass it, and existing nodes that don't understand OP_GETBLOCKHEIGHT will see OP_SUCCESS 650000 OP_EQUAL and will not execute the SCRIPT at all, instead passing it immediately. So a SCRIPT that might pass in newer versions will pass for older versions, which keeps the back-compatibility consensus that a softfork needs.
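The contrast between the two upgrade schemes can be sketched with a minimal interpreter. This is not real consensus code; the opcode names and the simplified stack machine are illustrative only:

```python
# Minimal sketch contrasting how an OLD node treats an unknown upgraded
# opcode under the OP_NOP scheme versus the OP_SUCCESS scheme.
def run_old_node(script, scheme):
    # Under OP_SUCCESS, an unknown opcode makes the whole script pass at
    # parse time, before any execution happens.
    if scheme == "op_success" and "OP_UNKNOWN" in script:
        return True
    stack = []
    for op in script:
        if op == "OP_UNKNOWN":       # old node sees the placeholder opcode
            continue                 # OP_NOP semantics: do nothing
        elif op == "OP_EQUAL":
            if len(stack) < 2:
                return False         # stack underflow: script fails
            stack.append(stack.pop() == stack.pop())
        else:
            stack.append(op)         # treat anything else as a data push
    return bool(stack) and stack[-1] is True

# "OP_GETBLOCKHEIGHT 650000 OP_EQUAL" as seen by an old node:
script = ["OP_UNKNOWN", 650000, "OP_EQUAL"]
print(run_old_node(script, "op_nop"))      # False: underflow, hardfork risk
print(run_old_node(script, "op_success"))  # True: passes without executing
```

The OP_NOP path fails on the very script a future version would accept, while the OP_SUCCESS path passes it unconditionally, which is the softfork-compatible behavior.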
So how does OP_SUCCESS make things difficult for cross-input signature aggregation? Well, one of the ways to ask for a signature to be verified is via the OP_CHECKSIGVERIFY opcode. With cross-input signature aggregation, if a public key indicates it can be used for cross-input signature aggregation, instead of OP_CHECKSIGVERIFY actually requiring the signature on the stack, the stack will contain a dummy 0 value for the signature, and the public key is instead added to a "sum" public key (i.e. an n-of-n that is dynamically extended by one more pubkey for each OP_CHECKSIGVERIFY operation that executes) for the single signature that is verified later by the cross-input signature aggregation validation algorithm.
The important part here is that the OP_CHECKSIGVERIFY has to execute, in order to add its public key to the set of public keys to be checked in the single signature.
But remember that an OP_SUCCESS prevents execution! As soon as the SCRIPT is parsed, if any opcode is OP_SUCCESS, that is considered as passing, without actually executing the SCRIPT, because the OP_SUCCESS could mean something completely different in newer versions and current versions should assume nothing about what it means. If the SCRIPT contains some OP_CHECKSIGVERIFY command in addition to an OP_SUCCESS, that command is not executed by current versions, and thus they cannot add any public keys given by OP_CHECKSIGVERIFY. Future versions also have to accept that: if they parsed an OP_SUCCESS command that has a new meaning in the future, and then execute an OP_CHECKSIGVERIFY in that SCRIPT, they cannot add the public key into the same "sum" public key that older nodes use, because older nodes cannot see them. This means that you might need more than one signature in the future, in the presence of an opcode that replaces some OP_SUCCESS.
Thus, because of the complexity of making cross-input signature aggregation work compatibly with future extensions to the protocol, cross-input signature aggregation was deferred.
submitted by almkglor to Bitcoin

Truth in one tweet. 😂

submitted by scotty321 to btc

Urbit meetup in North Texas

Hi everybody, I'm holding a meetup in the DFW area for people interested in Urbit next month. If you're interested in the project or want to learn more about it, come hang out! Details are at the end of the post. I've got the blessing of u/ZorbaTHut to post this here contingent on explaining why Urbit is interesting, both in general and for this audience, so I'll give you a brief outline of the project if you're not familiar, and answer questions you may have once I'm home from work on Monday (though I encourage anybody else who'd like to chime in until then -- I have to go to bed soon.)

What is Urbit?

Urbit is an internet decentralization project, and a full networked computing stack from the ground up. Urbit's ultimate goal is to build a new internet on top of the old one, architecturally designed to avoid the need for centralized services by allowing individuals to run and program robust personal servers that are simple to manage. When Urbit conquers the world, your digital identity will be something you personally and permanently own as a cryptographic key, not a line in a corporation's database; Facebook and Twitter will be protocols -- encrypted traffic and data shared directly between you and your friends & family, with no middlemen spying on you; your apps, social software and anything you program will have secure cryptocurrency payment mechanisms as a system call, paid out of a wallet on a device you fully control; and you will tangibly own and control your computer and the networked software you use on it.
As I said, Urbit is a stack; at its core is Nock, a minimal, Turing-complete function. Nock is built out into a deterministic operating system, Arvo, with its own functional programming language. For now, Arvo runs as a process, with a custom VM/interpreter, on *nix machines. Your Arvo instance talks to other instances over a native, encrypted peer-to-peer network, though it can interface with the normal internet as well. Urbit's identity management system is called Azimuth, a public key infrastructure built on Ethereum. You own proof of your Urbit instance's identity as a token in the same way you own your Bitcoin wallet.
Because the peer-to-peer network is built into Arvo, you get it 'for free' with any software you write or run on it. You run your own personal server, and you yourself run all the software you use to communicate with the world. Because all of your services are running on a computer you control using a single secure identity system, you can think of what it aspires to be like a decentralized, cypherpunk version of WeChat -- a programmable, secure platform for everything you want to do with your computer in one place, without the downsides of other people running your software.

Why is it interesting?

Urbit is extremely ambitious and pretty strange. Why throw out the entire stack we've spent half a century building? Because it's a giant ball of mud -- millions of lines of code in the Linux kernel alone, with all the attendant security issues and complexity. You can run a personal server today if you're technically sophisticated; spin up a VPS, install all the software you need, configure everything and keep it secure. It's doable, but it sucks, and your mom can't do it. Urbit is designed from the beginning to avoid the pitfalls that led to cascading system complexity. Nock has 12 opcodes, and Arvo is somewhere in the neighborhood of 30,000 lines of code. The core pieces of Urbit are also ticking towards being 'frozen' -- reaching a state where they can no longer be changed, in order to ensure that they remain absolutely minimal. The point of all of this is to make a diamond-hard, unchanging core that a single person can actually understand in its entirety, ensure the security of the architecture, prevent insane dependency hell and leaky abstractions from overgrowing it, and allow for software you write today to run in a century. It also aims to be simple enough that a normal person can pay a commodity provider $5/mo (or something), log into their Urbit on their devices, and control it as easily as their phone.
Urbit's network also has a routing hierarchy that is important to understand; while the total address space is 128-bit, the addresses are partitioned into different classes. 8-bit and 16-bit addresses act as network infrastructure, while human instances use 32-bit addresses. To use the network, you must be sponsored by the 16-bit node 'above' you -- which is to say 'be on good terms'. If you aren't on good terms, that sponsorship can be terminated, but that goes both ways -- if you don't like your sponsor, you can exit and choose another. Because 32-bit addresses are finite, they're scarce and have value, which disincentivizes spam and abuse. To be clear, the sponsor nodes only sign/deliver software updates, and perform peer discovery and NAT traversal; your connections with other people are direct and encrypted. Because there are many sponsor nodes, you can return to the network if you're kicked off unfairly. In the long term, this also allows for graceful political fragmentation of the network if necessary.
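The address tiers described above can be sketched as a simple classifier. The size boundaries follow the description in this post (8-bit, 16-bit, 32-bit, and the 128-bit total space); the moon and comet tiers are standard Urbit terminology added for completeness:

```python
# Sketch of the Azimuth address-space tiers described above.
def address_class(point):
    if point < 2**8:
        return "galaxy"   # 8-bit network-infrastructure node
    if point < 2**16:
        return "star"     # 16-bit sponsor/infrastructure node
    if point < 2**32:
        return "planet"   # 32-bit personal instance (scarce, has value)
    if point < 2**64:
        return "moon"     # subordinate device of a planet
    if point < 2**128:
        return "comet"    # self-signed, disposable identity
    raise ValueError("outside the 128-bit address space")

print(address_class(100))      # galaxy
print(address_class(70_000))   # planet
```

Because planets draw from a finite 32-bit pool, each one is scarce, which is the anti-spam property the post describes.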
The world created by Urbit is a world where individuals control their own data and digital communities live according to their mores. It's an internet that isn't funded by mass automated surveillance and ad companies that know your health problems. It's also the internet as a frontier like it once was, at least until this one is settled. Apologies if this comes off a little true-believer-y, but this project is something I'm genuinely excited about.

For TheMotte

The world that Urbit aims to build is one not dissimilar from Scott's archipelago communism -- one of voluntaristic relations and communities, and exit in the face of conflict & coercion. It's technical infrastructure to move the internet away from the chokepoints of the major social media platforms and the concentration of political power that comes with centralized services. The seismic shifts affecting our institutions and society caused by the internet in the last decade have been commented on at length here and elsewhere, but as BTO said, you ain't seen nothin' yet. I suspect many people with a libertarian or anti-authoritarian bent would appreciate the principle of individual sovereignty over their computing and data. The project is also something I've discussed a few times with others on here, so I know there's some curiosity about it.
The original developer of Urbit is also rather well known online, especially around here. Yarvin is a pretty controversial figure, but he departed the project in early 2019.


There's a lot more that I haven't mentioned, but I hope this has piqued your interest. If you're in DFW, you can find details of the first meetup here. There will be free pizza and a presentation about Urbit, help installing & using it (Mac & Linux only for now), as well as the opportunity to socialize. All are welcome! Feel free to bring a friend.
If you're not in North Texas but are interested, there are also other regional meetups all over the world coming up soon.

Further reading:

submitted by p3on to TheMotte

Hybrix: Blockchain for all chains

Third party services currently assist users to exchange one form of digital cash or asset for another, but a trusted third party is still required to mediate these transactions.
We propose a solution to the problem of these isolated digital currency systems using a meta-level transfer protocol with an extendable and modular design, making accessible any kind of ledger-based economy or other digital cash system for cross-blockchain and inter-systemic transactions.
Every hybrix protocol transaction yields profit to the respective ecosystems by paying transaction fees to their network-supporting miners and stakers.
Technically, Bitcoin solved early on some of the problems of transaction reversibility and trust that plagued online commerce; new players in the arena are now offering replacements for Bitcoin's peer-to-peer payment solution.
Its transactions are stored in a data block inside the attachment section of a zero-value transaction on any distributed ledger system.
Transactions containing metadata pay the usual fees, denominated in the base currency. Our proposal is to create a protocol, called the hybrix protocol, as a cross-ledger colored coin, making it technically borderless and not bound to a single ledger system.
Intersystemic transaction: a transaction occurring between two distinct ledger systems.
Entanglement: an informational connection between two transactions on separate ledger systems that functionally relates them as a cross-ledger transaction.
Validator: a network actor that analyses past transactions and attests to the legitimacy of these transactions according to the rules of the system protocol.
Double spend: a transaction that illegitimately increases the money supply in a ledger system.
Immutability of past transactions
Attachment: the data included with a transaction, sometimes called the message or, in the case of Bitcoin and its derived coins, OP_RETURN; primarily used on most ledger systems for annotation of the transaction.
Transaction id: the unique identifier every transaction has.
OP_RETURN: a Bitcoin script opcode used to mark a transaction output as invalid, turning it into an attachment field for storing data.
Figure 2: The parsing function p parses the attachment of the base transaction into the required fields.
Intersystemic Transactions

Structured Data on a Ledger

We define an electronic intersystemic token as a block of structured data that is inserted into the attachment section of a zero-value transaction on a distributed ledger system.
The content of the attachment of transaction on a base ledger can be parsed into a second layer transaction of the meta ledger.
A parsing function p will extract the required meta-transaction details from the base transaction's attachment, as well as using details from the base transaction that are still relevant.
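The parsing function p can be sketched as follows. The field names and the "data:" prefix are hypothetical placeholders for illustration, not the actual hybrix wire format:

```python
# Hedged sketch of the parsing function p: read the attachment of a
# zero-value base transaction and lift it into a meta-ledger transaction,
# reusing base-transaction details that remain relevant.
import json

def p(base_tx):
    attachment = base_tx["attachment"]
    if not attachment.startswith("data:"):
        raise ValueError("not a meta-ledger attachment")
    meta = json.loads(attachment[len("data:"):])
    return {
        "token":  meta["token"],    # which colored coin this moves
        "amount": meta["amount"],   # meta-ledger amount
        "target": meta["target"],   # recipient on the meta ledger
        # details reused from the base transaction itself:
        "base_ledger": base_tx["ledger"],
        "base_txid":   base_tx["txid"],
        "sender":      base_tx["from"],
    }

base_tx = {
    "ledger": "BTC", "txid": "ab12...", "from": "1Sender...",
    "attachment": 'data:{"token": "HYB", "amount": 5, "target": "1Dest..."}',
}
print(p(base_tx)["token"])  # HYB
```

Note how the second-layer transaction combines parsed attachment fields with facts (sender, txid, ledger) that the base ledger already secures.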
Token ownership is secured by the underlying ledger system every time a transaction is done.
Each owner transfers their zero-value transaction containing the token data to another owner by digitally signing a hash of the previous transaction and the current transaction.
The only thing that is added to the recipe is the ledger symbol, and transaction hash of where the verification hash can be found.
Subsequently the token is minted on the same address using a follow-up transaction.
(Figure: cross-ledger entangled transactions)

Other Types of Transactions

...and then choosing a branch that has not yet been validated.
When a transaction contains more data than a ledger system can handle in its attachment storage space, the transaction may be split up and sent as a transaction accompanied by tailing part-transactions that complete the contents of the entire operation.
A swap transaction is legitimate when the counterparty responds to a swap proposal using a signing transaction.
Finally, a burn transaction returns spendable HRC1 token balance to address E on the Ethereum chain.
In case of a collision, validators will only accept the recipe that was proven first by way of the genesis transaction.
The older genesis transaction must also be recorded in the recipe, so the chain of mutations can be followed and approved by validators.
Validators check a new incoming recipe for validity first, by comparing its hashes with available data in the blockchain, and authenticating that the updated genesis transaction has been done using the same secret key as the first genesis transaction.
Validation of Transactions

Validation as a Service

External validation should be handled in a decentralized manner using a consensus amongst multiple validator nodes.

Mutation of Monetary Supply

If a transaction fee is enforced by the ruleset, the supply is subtracted from on every transaction.
Examinations

Validating the Validators

Validators need to be rigorously examined in order to find out whether they are properly doing their job of validating transactions on the chains.
If all goes according to plan, validators check the transactions and truthfully record their findings for the public.
When sending a transaction, users can opt to pay a higher fee, which will result in more validators being eager to validate the user's chain of transactions.
A decentralized consensus state database maintained by a pool of validators will consist of a subtree T_n, where n increments with each state update, providing a snapshot of the agreed-upon valid transaction tree.
To ensure the recovery from a 51% attack on any one single chain, snapshotting by validators could enable network users to request the verification of the current ledger and balances state, regardless of a transaction history tainted by 51% attack damage.
Common hybrix Index

Storing the genesis transaction ID, or other hash information, in every transaction would require a significant amount of blockchain storage as the volume of transactions grows.
The token protocol Omni, on the contrary, uses an index number for the asset ID in every transaction.
Where less computing and storage resources are available, a hybrixjslib client can be used to sign and interpret transactions and to get the necessary data from a publicly available hybrixd node API.

Deterministic Libraries and API Connectors

For a meta ledger we define a seed k that can be used to generate a corresponding key pair in each base ledger using a deterministic function χ. We connect to a large variety of blockchain APIs using a peer-to-peer network daemon called hybrixd.
Deterministic functions are used to generate key pairs for all included ledgers.

Conclusion

We have proposed a system for meta-level transfers across multiple distributed ledgers without relying on centralized exchanges or decentralized atomic-transaction compatibility.
The process of moving value between ledger systems is not controlled by a centralized party, as transactions can be created and signed client-side and sent peer-to-peer among users.
We started with the usual framework of second-layer tokens, specified by storing data attached to transactions, which provides a method of accounting on top of existing ledger systems but is incomplete without a way to prevent double-spending.
submitted by ramanpandwar to XeraExchange

VDS Consensual Execution Contracts and the On-Chain Upload Process

Overview of consensual execution contracts

The basic concept of the consensual execution contract

The consensual execution contract is known as the "smart contract" in the blockchain industry, but the VDS team considers that term overly marketing-driven: we have so far found nothing particularly intelligent about contract-programming technology. It is simply a predefined procedure of consensual behavior, formed by written code, running in a decentralized system on a distributed network. In the spirit of seeking truth from facts, we think it more appropriate to rename the smart contract the consensual execution contract. When humans combine blockchain technology with AI in the future, the obstacles to understanding these names will already have been removed.
The consensual execution contract can be applied to many industries, such as finance, education, administrative systems, the Internet of Things, online entertainment, and so on. Through blockchain technology, within a specific distributed network, it is an execution script formed by pre-written code, without any third-party intervention, encoding the consensual behavior of the two or more parties involved in the protocol. It guarantees the safe, stable, and fair execution of the rights and interests of all participants in the contract.
The consensual execution contract has accelerated the landing of various applications in the development of the blockchain industry and has encouraged more developers to participate actively, revolutionizing the real-world product experience of blockchain technology. All of this stems from the outstanding contributions of the Ethereum team, which opened a new door for the entire industry.
Basic structure and interfaces

Integration of the EVM

The Ethereum Virtual Machine (EVM) uses 256-bit machine code and is a stack-based virtual machine used to execute Ethereum's consensual execution contracts. Since the EVM was designed for the Ethereum system, the Ethereum account model (Account Model) is used for value transfer. The design of the VDS chain is based on the Bitcoin UTXO model. The reasons for this design are, on the one hand, the need to implement VDS's resonance-exchange function and the one-way cross-chain exchange function from Bitcoin to the VDS chain, which allows two different addresses, one Bitcoin and one VDS, to be generated from a single private key. On the other hand, the VDS team believes that the underlying structure of Bitcoin transactions has proven more stable and reliable through ten years of social practice. Therefore, VDS uses an Account Abstraction Layer to convert the UTXO model into an account model that the EVM can execute. In addition, VDS has added an interface based on the account model so that the EVM can directly read information on the VDS chain. Note that the account abstraction layer can hide the deployment details of certain specific functions and establish a separation of concerns to improve interoperability and platform independence.
In the Bitcoin system, a transaction output can only be spent after the unlocking script (scriptSig) and the locking script (scriptPubKey) have been verified.
For example, the locking script usually locks a transaction output to a Bitcoin address (the hash of a public key). Only when the unlocking script satisfies the conditions set by the locking script does execution of the combined script return True (the system return value is 1), so that the corresponding transaction output can be spent.
In the distributed VDS system, we emphasize the timing of consensual-execution-contract execution. Therefore, we added the OP_CREATE and OP_CALL operators to the locking script. When the VDS system detects one of these operators, the nodes of the entire network execute the transaction. In this way, the role played by the Bitcoin script is more to transfer the relevant data to the EVM than to act merely as a coding language. Just as Ethereum executes a consensual execution contract, for a contract triggered by the OP_CREATE and OP_CALL operators, the EVM changes its state in its own state database.
Given the ease of use of the VDS chain's consensual execution contracts, it is necessary to verify the data that triggers the contract and the public-key hash of the data source.
To prevent the proportion of UTXOs on the VDS chain from becoming too large, the transaction outputs of OP_CREATE and OP_CALL are designed to be spendable. The output of OP_CALL can send funds to other contracts or to public-key-hash addresses.
First, for a consensual execution contract created on the VDS chain, the system generates a transaction hash for contract calls. A newly deployed contract has an initial balance of 0 (contracts with a non-zero initial balance are not supported). To meet a contract's need to send funds, VDS uses the OP_CALL operator to create a transaction output. The output script of a fund-sending contract looks like:
```
1                                     # the version of the VM
10000                                 # gas limit for the transaction
100                                   # gas price in Qtum satoshis
0xF012                                # data to send to the contract (usually using the solidity ABI)
ripemd-160 hash of the contract txid  # contract address
OP_CALL
```
This script is not complicated, and OP_CALL does most of the required work. VDS defines the transaction's cost (ignoring out-of-gas situations) as the output value, i.e. Gas Limit times Gas Price. The specific Gas mechanism is discussed in later chapters. When the output script above is added to the blockchain, the output is mapped to the contract's account and reflected in the contract's balance. The balance can be understood as the sum of the funds available to the contract.
Standard public-key-hash outputs are used for the basic flow of contract transactions, and transactions between contracts generally follow the same flow. In addition, transactions can be made via P2SH and via non-standard transactions. When the current contract needs to transact with another contract or with a public-key-hash address, spendable outputs in the contract's account are consumed. These consumed outputs must be available for transaction verification in the VDS network; we call them Expected Contract Transactions. Since expected contract transactions are generated when miners verify and execute a transaction, rather than by the transaction's user, they are not broadcast across the network.
The core working principle of expected contract transactions is implemented by the OP_SPEND opcode. OP_CREATE and OP_CALL have two modes of operation: when the operator appears in an output script, the EVM executes it; when it appears in an input script, the EVM is not executed (otherwise execution would be repeated). In that case OP_CREATE and OP_CALL act as no-ops. OP_CREATE and OP_CALL receive the transaction hash passed by OP_SPEND and return 1 or 0 (i.e. spendable or not). This shows the importance of OP_SPEND to the whole expected-contract-transaction mechanism. Specifically, when OP_SPEND passes a transaction hash to OP_CREATE or OP_CALL, they check whether that hash exists in the contract's list of expected transactions. If it exists, they return 1 and the output can be spent; otherwise they return 0 and it cannot. This logic indirectly provides a complete and safe way to guarantee that contract funds can only be spent by the contract itself, consistent with the behavior of ordinary UTXO transactions.
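The gatekeeping logic described above can be sketched in a few lines (hypothetical names; this illustrates the rule, not the actual VDS node code):

```python
# Toy model of the OP_SPEND / expected-contract-transaction check.
# expected_txs stands in for a contract's list of expected transactions,
# which miners populate when they execute the contract.

expected_txs = set()

def register_expected_tx(txid: str) -> None:
    """Miner-side: record the hash of a contract-generated spend."""
    expected_txs.add(txid)

def op_spend_check(txid: str) -> int:
    """Input-script side: OP_CREATE/OP_CALL return 1 (spendable) or 0 (not)."""
    return 1 if txid in expected_txs else 0

register_expected_tx("ab" * 32)         # miner executed the contract and queued this spend
assert op_spend_check("ab" * 32) == 1   # hash is on the expected list: spendable
assert op_spend_check("cd" * 32) == 0   # hard-coded/unknown hash: not spendable
```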
When an EVM contract sends funds to a public-key-hash address or to another contract, a new transaction is built. Using a consensus-critical coin-picking algorithm, the most suitable transaction outputs are selected from the contract's pool of spendable outputs. Each selected output becomes an input whose script executes a single OP_SPEND; one output is the destination of the funds, any remaining funds are returned to the contract, and the contract's spendable outputs are updated accordingly. The hash of this transaction is then added to the contract's list of expected transactions. When the transaction is executed, it is immediately added to the block. Once miners on the chain have verified and executed the transaction, the contract's expected-transaction list is traversed again, and once verification succeeds the hash is removed from the table. In this way, OP_SPEND effectively prevents hard-coded hashes from being used to spend outputs.
The VDS account abstraction layer spares the EVM from having to pay much attention to coin-picking. The EVM only needs to know the contract's balance, and it can exchange funds with other contracts or even with public-key-hash addresses. In this way, only minor modifications to Ethereum's consensus contract execution are needed to meet the operating requirements of VDS contracts.
In other words, any consensus contract that can run on the Ethereum chain can run on the VDS chain.
Completing the AAL
The VDS chain is designed around the Bitcoin UTXO model, while general-purpose consensus contract platforms use an account model. Since a contract, as an entity, needs a network identifier (the contract address) through which it can be operated and managed, the VDS chain's design adds an Account Abstraction Layer (AAL), which converts the UTXO model into an account model that contracts can run against.
For developers running consensus contracts, the virtual machine's account model is relatively simple: it supports querying contract balances and sending funds to other contracts. Although these operations look very simple and basic, all transactions on the VDS chain use the Bitcoin script language, and implementing them in an account abstraction layer on top of the Bitcoin UTXO model is more complicated than it appears. The AAL therefore extends the script language with three new operators:
- OP_CREATE performs smart-contract creation: it passes the bytecode carried in the transaction to the virtual machine's contract storage database and generates a contract account.
- OP_CALL passes the data and address information needed to call a contract and executes the code in the contract. (This operator can also send funds to consensus contracts.)
- OP_SPEND takes the current contract's ID as the input transaction hash, or the hash of a transaction that sent funds to the contract's UTXOs, and uses OP_SPEND as the spending instruction to build the transaction script.
Using contracts and the process of uploading them to the chain
Writing contracts
Consensus contracts can currently be written in the Solidity language.
Use Solidity Remix or another Solidity IDE to write and compile the code.
Solidity Remix (https://remix.ethereum.org/)
Compiling in homestead mode is recommended.
Solidity version 0.4.24 is recommended (other versions may cause errors or failures).
For Solidity syntax, see the reference (https://solidity.readthedocs.io/en)
Compiling and deploying contracts
Running smart contracts with vdsd
Check the runtime environment flags:
```vdsd -txindex=1 -logevents=1 -record-log-opcodes=1 -regtest=1```
> Contract tests are performed in the regtest environment. It is recommended to test after reaching a height of 440 blocks.
A height of 440 blocks activates the return of funds after abnormal contract events (refund and revert).
The contract deployment command is:
```vds-cli deploycontract bytecode ABI parameters```
- bytecode (string, required) contract bytecode.
- ABI (string, required) ABI String must be JSON formatted.
- parameters (string, required) a JSON array of parameters.
This function executes the contract's constructor with the supplied parameters to obtain the bytecode that is ultimately used for deployment.
(This method associates the bytecode with its ABI and stores them locally for record-keeping. It can invoke internal methods locally and return the appropriate bytecode.)
```vds-cli createcontract bytecode (gaslimit gasprice senderaddress broadcast)```
- bytecode (string, required) contract bytecode.
- gaslimit (numeric or string, optional) gasLimit, default is DEFAULT_GAS_LIMIT, recommended value is 250000.
- gasprice (numeric or string, optional) gasprice, default is DEFAULT_GAS_PRICE, recommended value is 0.00000040.
- senderaddress (string, optional) The vds address that will be used to create the contract.
- broadcast (bool, optional, default=true) Whether to broadcast the transaction or not.
- changeToSender (bool, optional, default=true) Return the change to the sender.
The return values are: txid, sender, sender hash160, and contract address.
To check whether the command executed successfully:
```vds-cli gettransactionreceipt txid```
For a non-contract transaction's txid, the return value is empty.
The return value contains the transaction's details:
- blockHash Block hash
- blockNumber Block height
- transactionHash Transaction hash
- transactionIndex Position of the transaction within the block
- from Hash160 of the sender's address
- to The recipient's contract address; for contract-creation transactions this is 00000000000000000000000000000
- cumulativeGasUsed Cumulative gas used
- gasUsed Gas actually used
- contractAddress Contract address
- excepted Whether an error occurred
- exceptedMessage The error message
Note that if the excepted field is anything other than None, contract execution failed. Although the transaction can be verified on the chain, that does not mean the contract executed successfully, and the fee paid to execute it is not refundable. Fees are refunded only when the contract exits through revert; with assert, no fees are refunded.
Calling contracts
```vds-cli addcontract name contractaddress ABI description```
- name (string required) contract name.
- contractaddress (string required) contract address.
- ABI (string, required) ABI String must be JSON formatted.
- description (string, optional) The description to this contract.
This function adds a contract's ABI to the local database.
```vds-cli getcontractinfo contractaddress```
- contractaddress (string required) contract address.
This function retrieves information about a previously added contract.
```vds-cli callcontractfunc contractaddress function parameters```
- contractaddress (string, required) The contract address that will receive the funds and data.
- function (string, required) The contract function.
- parameters (string, required) a JSON array of parameters.
This function returns the execution result when an ordinary constant method is called; calling a method that operates on contract data returns the hex-encoded string of the operation script.
```vds-cli sendtocontract contractaddress data (amount gaslimit gasprice senderaddress broadcast)```
- contractaddress (string, required) The contract address that will receive the funds and data.
- datahex (string, required) data to send.
- amount (numeric or string, optional) The amount in " + CURRENCY_UNIT + " to send. eg 0.1, default: 0
- gaslimit (numeric or string, optional) gasLimit, default is DEFAULT_GAS_LIMIT, recommended value is 250000.
- gasprice (numeric or string, optional) gasprice, default is DEFAULT_GAS_PRICE, recommended value is 0.00000040.
- senderaddress (string, optional) The vds address that will be used to create the contract.
- broadcast (bool, optional, default=true) Whether to broadcast the transaction or not.
- changeToSender (bool, optional, default=true) Return the change to the sender.
This function sends a contract operation script to the specified contract and records it on the blockchain.
Querying contract execution results
```vds-cli gettransaction txid```
This command shows the confirmation details of the given wallet transaction.
```vds-cli gettransactionreceipt txid```
This command checks the execution result of contract-creation and contract-call transactions: whether exceptions were thrown and how much gas was actually consumed.
`${datadir}/vmExecLogs.json` records contract calls on the blockchain. This file serves as the external interface for contract events.
Contract call interfaces
- createcontract: contract creation interface
- deploycontract: contract deployment interface
- addcontract: ABI registration interface
- sendtocontract: interface for calling a contract and moving funds
- callcontractfunc: interface for reading contract information
- gettransactionreceipt: interface for retrieving contract transaction execution information
Explanation of contract operating costs
The costs of creating a contract are all estimates, and 100% execution success cannot be guaranteed: the gas limit has an upper bound of 50,000,000, and contracts exceeding it will fail. The VDS chain returns change, meaning that even if a lot of gas is sent, miners will not use all of it and will refund the remaining gas, so there is no need to worry about sending too much gas.
The cost of creating a contract uses approximately the bytecode size × 300 as the gas limit; the minimum gas price is 0.0000004, and gas price × gas limit gives the cost of creating the contract.
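As a quick sanity check of that rule of thumb (illustrative numbers only; `creation_cost` is my own helper, not a vds-cli call):

```python
# Estimated contract-creation cost: gas_limit ≈ bytecode size * 300,
# priced at the minimum gas price quoted above.

MIN_GAS_PRICE = 0.0000004  # minimum gas price from the text

def creation_cost(bytecode_bytes: int, gas_price: float = MIN_GAS_PRICE) -> float:
    gas_limit = bytecode_bytes * 300
    return gas_limit * gas_price

# A hypothetical 2,000-byte contract: gas_limit = 600,000,
# so the cost is 600,000 * 0.0000004 = 0.24
print(round(creation_cost(2000), 8))  # 0.24
```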
For executing a method within a contract, the required gas is estimated. Because of network congestion, the estimate cannot guarantee a 100% success rate for inclusion in the chain; to avoid misleading anyone, developers are asked to verify the results themselves.
submitted by YvanMay to u/YvanMay [link] [comments]

Bitcoin Forks Compared: BTC, BCH, BSV, & BSC

Bitcoin Forks Compared: BTC, BCH, BSV, & BSC


Written by Steven M.

To help understand the upcoming Bitcoin SC hard fork, this article will compare the hard forks for Bitcoin Cash (ABC) and Bitcoin Satoshi Vision, review the technical approach and parameters for these forks, and compare with Bitcoin SC’s approach.

Intro to Hard Forks

Blockchain hard forks happen when protocol or consensus rules are updated in node software to produce blocks and transactions that are not compatible with non-updated versions of nodes. This is generally described as the software not being "backward compatible", which is a bit of a misnomer, since the new-version nodes are compatible with older blocks and transactions, thus preserving the full history of the blockchain. Node software enforces the protocol change at a specific block height. What is incompatible after a hard fork is the blocks going forward. After a hard fork, the blockchain is split and exists as two blockchains with separate characteristics.

For "consensus" hard forks, where the community agrees on the updates to a blockchain, a single "official" new blockchain will continue after the hard fork, perhaps alongside a split chain for laggards who didn't update in time. However, if there is developer support for both chains after a hard fork, plus technology, business, and community interest in supporting two versions of the blockchain, the hard fork will yield two blockchains going forward.

This report compares three blockchain splits from hard forks which are shown schematically below.

This timeline shows the Bitcoin Cash split from Bitcoin was on August 1, 2017, the Bitcoin SV split from Bitcoin Cash on November 15, 2018, and Bitcoin SC will split from Bitcoin in the June timeframe at a block height TBD.

Next, consideration of some of the technical issues for these hard forks.

Block Size

Block size is an important parameter in blockchain configuration since it controls scaling for transaction capacity, transactions per second, and node requirements. Block size has been a contentious issue in the blockchain community and has been a motivating factor for past chain splits.

Table 1 - Block Size

Bitcoin launched with a 1.0 MB block size, and has retained this size although adjustments using block “weight” for SegWit transactions allow larger blocks. Bitcoin Cash launched with an initial block size of 8 MB, and hard forked in May 2018 to a size of 32 MB.

Bitcoin SV features very large blocks, launched with 128 MB, and implemented the Quasar protocol in July 2019 allowing blocks up to 2 GB.

Bitcoin SC will launch with 2.0 MB blocks and is scalable up to 32 MB size (plus the SegWit “weight” adjustment).

Another way to examine block size and TPS is to see actual usage of blocks on-chain. Blockchains are occasionally overloaded, but most run at a lesser capacity than full blocks.

getchaintxstats gives statistics for blockchain capacity usage over the past 4,320 blocks, or 30 days. Table 2 gives the transactions during the last 30 days (window_tx_count) and TPS (txrate), and shows an actual usage rate over the last month of 3.4 TPS for Bitcoin, 0.5 TPS for Bitcoin Cash, and 6.3 TPS for Bitcoin SV.

Table 2 - getchainxstats

The commonly used value for Bitcoin TPS is 4, implying a transaction size of 417 bytes, and using SegWit transactions would give higher throughput. Bitcoin SC with 2 MB block size would give 2x Bitcoin TPS.
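That back-of-envelope math is easy to reproduce (assuming 1 MB blocks and 600-second spacing):

```python
# Maximum TPS for a given block size and average transaction size,
# with one block every 600 seconds.

BLOCK_INTERVAL = 600  # seconds between blocks

def max_tps(block_size_bytes: int, avg_tx_bytes: int) -> float:
    return block_size_bytes / (avg_tx_bytes * BLOCK_INTERVAL)

print(round(max_tps(1_000_000, 417), 1))  # 4.0 -- Bitcoin's commonly quoted TPS
print(round(max_tps(2_000_000, 417), 1))  # 8.0 -- 2 MB blocks give 2x at the same tx size
```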

Block Height Delta

As you know, difficulty is adjusted every 2,016 blocks (~ 2 weeks) to maintain the 10-minute block spacing. In a perfect world, after splitting from the Bitcoin blockchain, the split chains would run block height roughly in sync with Bitcoin block height. However, various tweaks attempting to improve difficulty adjustment can decouple block height on the split chains.
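The two-week figure follows directly from those retargeting parameters:

```python
# Difficulty retarget interval: 2,016 blocks at a 10-minute target spacing.
BLOCKS_PER_RETARGET = 2016
TARGET_SPACING = 600      # seconds
SECONDS_PER_DAY = 86400

retarget_days = BLOCKS_PER_RETARGET * TARGET_SPACING / SECONDS_PER_DAY
print(retarget_days)  # 14.0 -- i.e. ~2 weeks between difficulty adjustments
```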

By definition, at the hard fork block height, the main chain and split chain are exactly in sync. There are minimal practical issues with these different block heights, although it is nice when software does what you are expecting for block spacing. Perhaps the only implication for different block heights is that halvings will occur at different times, so more for reference, the approximate block height offsets are shown below.

Table 3 - Block Height Offset

Again, the practical implication of these block height offsets is that Bitcoin Cash and Bitcoin SV will reach their halvings a little over a month earlier than Bitcoin.

Bitcoin SC may use a more frequent and gentler difficulty adjustment algo, effectively tracking closer to the Bitcoin block height.

Replay Attacks

Since the addresses, private keys, and coins are otherwise identical between Bitcoin and a forked chain, developers of the new split chain can add replay protection. Without replay protection, a transaction signed on one chain will also validate and execute on the other chain in a "replay attack", as Ethereum discovered in 2016.

Bitcoin Cash added replay protection in their hard fork by adding a marker so that signatures wouldn’t match between Bitcoin and Bitcoin Cash (two-way replay protection). Bitcoin SV did not initially add replay protection (for philosophical reasons). Bitcoin SC will add replay protection using a modified signature similar to Bitcoin Cash.

Opcodes and Bytecodes

Bitcoin and its forks use script opcodes for basic programming operations executed on a stack. By design, script has limited capability for safety and of the ~100 opcodes available, relatively few are used for normal transactions (pay2pubkeyhash, multi-sig, etc.).
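For reference, the workhorse pay-to-pubkey-hash pattern uses only five opcodes; a quick sketch (the helper name is mine, just to show the template):

```python
# The standard P2PKH locking-script template: five opcodes plus a
# 20-byte push of the recipient's public-key hash.

def p2pkh_script(pubkey_hash20: bytes) -> list:
    assert len(pubkey_hash20) == 20  # RIPEMD-160(SHA-256(pubkey))
    return ["OP_DUP", "OP_HASH160", pubkey_hash20,
            "OP_EQUALVERIFY", "OP_CHECKSIG"]

script = p2pkh_script(bytes(20))
print(len(script))  # 5 template elements
```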

Table 4 - Opcodes

There is slight variation in opcodes between these projects. Table 4 shows the count from each current release's GetOpName() function. The Bitcoin SV count includes 16 opcodes (OP_1 through OP_16) for pushing small numbers onto the stack, but is otherwise in the same size range as Bitcoin and Bitcoin Cash.

Bitcoin SC, forked from Bitcoin v0.19, will include additional opcodes for interfacing with the smart contract layer, which will offer Turing-complete on-chain smart contract execution with ~100 bytecodes (e.g., a Constantinople-class virtual machine). In contrast to Bitcoin and these other forks, Bitcoin SC is a fully programmable blockchain, capable of running on-chain applications such as decentralized exchanges and DeFi solutions.

More info and sign your support for Bitcoin SC https://bsc.net/

Another kind of hard fork: American Gothic, Grant Wood, 1930
submitted by bitcoinSCofficial to BitcoinSCofficial [link] [comments]

The BCH blockchain is 165GB! How good can we compress it? I had a closer look

Someone posted their results for compressing the blockchain in the Telegram group; this is what they were able to do:
Note: Bitcoin by its nature is poorly compressible, as it contains a lot of incompressible data, such as public keys, addresses, and signatures. However, there's also a lot of redundant information in there, e.g. the transaction version, and it's usually the same opcodes, locktime, sequence number, etc. over and over again.
I was curious and thought: how much could we actually compress the blockchain? This is actually very relevant: as I established in my previous post about the costs of a 1GB full node, storage and bandwidth costs seem to be among the biggest bottlenecks, while CPU computation is actually the cheapest part, as we're almost able to get away with ten-year-old CPUs.
Let's have a quick look at the transaction format and see what we can do. I'll have a TL;DR at the end if you don't care about how I came up with those numbers.
Before we jump in, don't forget that I'll be streaming again today, building an SPV node, as I've already posted about here. Last time we made some big progress, I think! Check it out here: https://dlive.tv/TobiOnTheRoad. It'll start at around 15:00 UTC!

Version (32 bits)

There are currently two transaction versions. Unless we add new ones, we can compress this field to 1 bit (0 = version 1; 1 = version 2).

Input/output count (8 to 72 bits)

This is the number of inputs the transaction has (see section 9 of the whitepaper). If the number of inputs is below 253, it takes 1 byte; otherwise it takes a prefix byte plus 2, 4, or 8 bytes. This nice chart shows that, currently, 90% of Bitcoin transactions only have 2 inputs, sometimes 3.
A byte can represent 256 different numbers. Having this as the lowest granularity for input count seems quite wasteful! Also, 0 inputs is never allowed in Bitcoin Cash. If we represent one input with 00₂, two inputs with 01₂, three inputs with 10₂ and everything else with 11₂ + current format, we get away with only 2 bits more than 90% of the time.
Outputs are slightly higher, 3 or less 90% of the time, but the same encoding works fine.
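A toy bit-level encoder for this 2-bit count scheme (my sketch, not the author's implementation; for the rare >3 case it only handles counts below 253):

```python
def encode_count(n: int) -> str:
    """Encode an input/output count as a bit string.

    1 -> 00, 2 -> 01, 3 -> 10; anything else is 11 followed by the
    original encoding (here just the 1-byte case, shown as 8 bits).
    """
    if n in (1, 2, 3):
        return format(n - 1, "02b")
    if not 0 < n < 253:
        raise ValueError("toy encoder only handles the 1-byte varint case")
    return "11" + format(n, "08b")

assert encode_count(1) == "00"            # 2 bits instead of 8, >90% of the time
assert encode_count(2) == "01"
assert encode_count(3) == "10"
assert encode_count(10) == "1100001010"   # 10 bits for the uncommon case
```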

Input (>320 bits)

There can be multiple of those. It has the following format:

Output (≥72 bits)

There can be multiple of those. They have the following format:

Lock time (32 bits)

This is FF FF FF FF most of the time; only occasionally are transactions time-locked, and the field only changes the transaction's meaning if some input's sequence number is not FF FF FF FF. We can do the same trick as with the sequence number, so that most of the time this will be just 1 bit.


So, in summary, we have:
Nice table:
| No. of inputs | No. of outputs | Uncompressed size | Compressed size | Ratio |
|---------------|----------------|-------------------|-----------------|-------|
| 1 | 1 | 191 bytes (1528 bits) | 128 bytes (1023 bits) | 67.0% |
| 1 | 2 | 226 bytes (1808 bits) | 151 bytes (1202 bits) | 66.5% |
| 2 | 1 | 339 bytes (2712 bits) | 233 bytes (1861 bits) | 68.6% |
| 2 | 2 | 374 bytes (2992 bits) | 255 bytes (2040 bits) | 68.2% |
| 2 | 3 | 408 bytes (3264 bits) | 278 bytes (2219 bits) | 68.0% |
| 3 | 2 | 520 bytes (4160 bits) | 360 bytes (2878 bits) | 69.2% |
| 3 | 3 | 553 bytes (4424 bits) | 383 bytes (3057 bits) | 69.1% |
Interestingly, if we take a compression of 69%, if we were to compress the 165 GB blockchain, we'd get 113.8GB. Which is (almost) exactly the amount which 7zip was able to give us given ultra compression!
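Checking that arithmetic:

```python
BLOCKCHAIN_GB = 165
RATIO = 0.69  # compressed/uncompressed, from the table above

compressed_gb = BLOCKCHAIN_GB * RATIO
sync_weeks = 3 * RATIO  # a 3-week uncompressed sync, scaled by the same ratio

print(f"{compressed_gb:.2f} GB")   # 113.85 GB, matching the ~113.8 GB 7zip result
print(f"{sync_weeks:.2f} weeks")   # 2.07 weeks, i.e. roughly 2 weeks instead of 3
```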
I think there's not a lot we can do to compress the transaction further, even if we only transmit public keys, signatures and addresses, we'd at minimum have 930 bits, which would still only be at 61% compression ratio (and missing outpoint and value). 7zip is probably also able to utilize re-using of addresses/public keys if someone sends to/from the same address multiple times, which we haven't explored here; but it's generally discouraged to send to the same address multiple times anyway so I didn't explore that. We'd still have signatures clocking in at 512 bits.
Note that the compression scheme I outlined here operates on a per transaction or per block basis (if we compress transacted satoshis per block), unlike 7zip, which compresses per blockchain.
I hope this was an interesting read. I expected the compression ratio to be higher, but still, if it takes 3 weeks to sync uncompressed, it'll take just 2 weeks compressed. Which can mean a lot for a business, actually.

I'll be streaming again today!

As I've already posted about here, I will stream about building an SPV node in Python again. It'll start at 15:00 UTC. Last time we made some big progress, I think! We were able to connect to my Bitcoin ABC node and send/receive our first version message. I'll do a nice recap of what we've done in that time, as there haven't been many present last time. And then we'll receive our first headers and then transactions! Check it out here: https://dlive.tv/TobiOnTheRoad.
submitted by eyeofpython to btc [link] [comments]

Don't agree with POSM? Think only hash matters? Then by all means delete your social media account and buy a miner and leave us all alone.

The SV shilling has reached epic proportions.
I find it rather amusing that this team of trolls has absolutely plastered this entire sub with their drivel about POSM. Pot, meet kettle.
They don't even have the slightest clue what they're talking about. One seems to think that ABC has implemented a fixed block size cap. Huh? Same guy told me it's better to "just raise the cap and let miners figure it out." Newsflash: miners are figuring it out. It's called Graphene and it'll work great with CTOR.
Another one seems to believe that "only hash matters". As though BCH is the majority chain. Yet another seems to believe "SV good because backed by miners" but at the same time "ABC bad because backed by miners"....?
I'm glad to participate in an uncensored sub where these bozos can clown around and make fools of themselves for everyone else to see. But I'll be more glad when they figure their shit out, or leave.
If CSW and his buddies want to fork the coin it's their prerogative. BCH is permissionless. Knock yourselves out. But be aware. ABC, XT, BU, Flowee, Bitcoin.com, Coinbase, and many more are in agreement on BCH.
It's you guys that are out in left field. Why?
Because you think the default value in a config file should be 128 not 32.
That's it. Yeah there's an opcode too but who cares. All anyone here talks about is how SV is "raising the block size."
But the block size doesn't need "raising". It is configurable. don't you know that?
So. You're going to split the community over a default value in a user-editable config file.
This is like splitting the community over the order of items in a drop down list.
But again it's your right to fork. I wish you guys the best... until the first time one of you mines hostile blocks on the BCH chain. Then I wish you the worst. Because you'll be doing BTC's dirty work for them. WAKE UP. Nothing will make Gmax happier than watching BCH fight itself.
For the sake of all that is holy, wake up and call this stupid fork off.
submitted by jessquit to btc [link] [comments]

A Response to Roger Ver

This post was inspired by the video “Roger Ver’s Thoughts on Craig Wright”. Oh, wait. Sorry. “Roger Ver’s Thoughts on 15th November Bitcoin Cash Upgrade”. Not sure how I mixed those two up.
To get it out of the way first and foremost: I have nothing but utmost respect for Roger Ver. You have done more than just about anyone to bring Bitcoin to the world, and for that you will always have my eternal gratitude. While there are trolls on both sides, the crucifixion of Bitcoin Jesus in the past week has been disheartening to see. As a miner, I respect his decision to choose the roadmap that he desires.
It is understandable that the Bitcoin (BCH) upgrade is causing a clash of personalities. However, what has been particularly frustrating is the lack of debate around the technical merits of Bitcoin ABC vs Bitcoin SV. The entire conversation has now revolved around Craig Wright the individual instead of what is best for Bitcoin Cash moving forward.
Roger’s video did confirm something about difference of opinions between the Bitcoin ABC and Bitcoin SV camps. When Roger wasn’t talking about Craig Wright, he spent a portion of his video discussing how individuals should be free to trade drugs without the intervention of the state. He used this position to silently attack Craig Wright for allegedly wanting to control the free trade of individuals. This appears to confirm what Craig Wright has been saying: that DATASIGVERIFY can be used to enable widely illegal use-cases of transactions, and Roger’s support for the ABC roadmap stems from his personal belief that Bitcoin should enable all trade regardless of legal status across the globe.
Speaking for myself, I think the drug war is immoral. I think human beings should be allowed to put anything they want in their own bodies as long as they are not harming others. I live in the United States and have personally seen the negative consequences of the drug war. This is a problem. The debasement of our currency and theft at the hands of central banks is a separate problem. Bitcoin was explicitly created to solve one of these problems.
Roger says in his video that “cryptocurrencies” were created to enable trade free from government oversight. However, Satoshi Nakamoto never once said this about Bitcoin. Satoshi Nakamoto was explicitly clear, however, that Bitcoin provided a solution to the debasement of currency.
“The root problem with conventional currency is all the trust that's required to make it work. The central bank must be trusted not to debase the currency, but the history of fiat currencies is full of breaches of that trust.” – Satoshi Nakamoto 02/11/2009
As we’ve written previously, the genesis block is often cited as a criticism of the 2008 bailout. However, the content of the article outlines that the bailout had already occurred. The article reveals that the government was poised to go a step further by buying up the toxic bank assets as part of a nationalization effort! In this scenario, according to the Times, "a 'bad bank' would be created to dispose of bad debts. The Treasury would take bad loans off the hands of troubled banks, perhaps swapping them for government bonds. The toxic assets, blamed for poisoning the financial system, would be parked in a state vehicle or 'bad bank' that would manage them and attempt to dispose of them while 'detoxifying' the main-stream banking system." The article outlines a much more nightmarish scenario than bank bailouts, one that would effectively remove any element of private enterprise from banking and use the State to seize the bank's assets.
The United States is progressively getting to a point where cannabis can be freely traded and used without legal repercussion. As a citizen, each election has given me the opportunity to bring us closer to enacting that policy at a national level. However, I have never had the ability to have a direct impact on preventing the debasement of the United States dollar. The dollar is manipulated by a “private” organization that is accountable to no one, and on a yearly basis we are given arbitrary interest rates that I have no control over. The government uses its arbitrary control over the money supply to enable itself to spend trillions of dollars it doesn’t have on foreign wars. Roger Ver has passionately argued against this in multiple videos available on the internet.
This is what Bitcoin promised to me when I first learned about it. This is what makes it important to me.
When the Silk Road was shut down, Bitcoin was unaffected. Bitcoin, like the US dollar, was just a tool that was used for transactions. There is an inherent danger that governments, whether you like it or not, would use every tool at their disposal to shut down any system that enabled illegal trade at the protocol level. They, rightfully or wrongfully, did this with the Silk Road. Roger's video seems to hint that he thinks Bitcoin Cash should be an experiment in playing chicken with governments across the world over our right to trade freely without State intervention. The problem is that this vastly underestimates just how quickly Bitcoin (BCH) could be shut down if the protocol itself were the tool being used for illegal trade, instead of being the money exchanged on top of illegal trade platforms.
I don’t necessarily agree or disagree with Roger’s philosophy on what “cryptocurrencies” should be. However, I know what Bitcoin is. Bitcoin is simply hard, sound money. That is boring to a lot of those in the “cryptocurrency” space, but it is the essential tool that enables freedom for the globe. It allows those in Zimbabwe to have sound currency free from the 50 billion dollar bills handed out like candy by the government. It allows those of us in the US to be free from the arbitrary manipulation of the Fed. Hard, sound, unchanging money that can be used as peer to peer digital cash IS the killer use case of Bitcoin. That is why we are here building on top of Bitcoin Cash daily.
When Roger and ABC want to play ball with governments across the globe and turn Bitcoin into something that puts it in legal jeopardy, it threatens the value of my bitcoins. Similar to the uncertainty we go through in the US every year as we await the arbitrary interest rates handed out by the Fed, we are now going to wait in limbo to see if governments will hold Bitcoin Cash miners responsible for enabling illegal trade at a protocol level. This is an insanely dangerous prospect to introduce to Bitcoin (BCH) so early in its lifespan. In one of Satoshi Nakamoto’s last public posts, he made it clear just how important it was to not kick the hornet’s nest that is government:
“It would have been nice to get this attention in any other context. WikiLeaks has kicked the hornet's nest, and the swarm is headed towards us.” – Satoshi Nakamoto 12/11/2010
Why anyone would want to put our opportunity of sound monetary policy in jeopardy to enable illegal trading at a base protocol level is beyond me. I respect anyone who has an anarcho-capitalist ideology. But, please don’t debase my currency by putting it at risk of legal intervention because you want to impose that ideology on the world.
We took the time to set up a Q&A with the Bitcoin SV developers Steve Shadders and Daniel Connolly. We posted on Reddit and gathered a ton of questions from the "community". We received insanely intelligent, measured, and sane responses to all of the "attack vectors" proposed against increasing the block size and re-enabling old opcodes. Jonathan Toomim spent what must have been an hour or so asking 15+ questions in the Reddit thread, most of which were answered. We have yet to see him respond to the technical answers given by the SV team. In Roger's entire video today about the upcoming November fork, he didn't once mention one reason why he disagrees with the SV roadmap. Instead, he has decided to go on Reddit and use the same tactics that were used by Core against Bitcoin Unlimited back in the day, framing the upcoming fork as "BCH vs BSV" weeks before miners have had the ability to actually vote.
What Bitcoin SV wants to accomplish is enable sound money for the globe. This is boring. This is not glamorous. It is, however, the greatest tool of freedom we can give the globe. We cannot let ideology or personalities change that goal. Ultimately, it won’t. We have been continual advocates for miners, the ones who spend 1000x more investing in the network than the /btc trolls, to decide the future of BCH. We look forward to seeing what they choose on Nov 15th.
Roger mentions that it is our right to fork off and create our own chains. While that is okay to have as an opinion, Satoshi Nakamoto was explicit that we should be building one global chain. We adhere to the idea that miners should vote with their hashpower and determine the emergent chain after November 15th.
“It is strictly necessary that the longest chain is always considered the valid one. Nodes that were present may remember that one branch was there first and got replaced by another, but there would be no way for them to convince those who were not present of this. We can't have subfactions of nodes that cling to one branch that they think was first, others that saw another branch first, and others that joined later and never saw what happened. The CPU power proof-of-work vote must have the final say. The only way for everyone to stay on the same page is to believe that the longest chain is always the valid one, no matter what.” – Satoshi Nakamoto 11/09/2008
Edit: A clarification. I used the phrase "Bitcoin is boring". I want to clarify that Bitcoin itself is capable of far more than we've ever thought possible, and this statement is one I will be elaborating on further in the future.
submitted by The_BCH_Boys to btc

Colin gives a rundown on Nexus layered architecture

This is an excerpt from a much larger impromptu Q&A on Nexus Telegram, and provides an excellent overview of Nexus architecture. (edited for clarity)
Paul Screen, [10.09.19 22:03]
[In reply to CryptoJoker]
Yes, it is. There's no question that Ethereum and its direct competitors that offer Turing-complete programmable contracts are very powerful. But when you actually look at the requirements of businesses trying to onboard to blockchain, we found that most of them just need simple requirements met and don't need all of the complexity and baggage that comes with it.

CryptoJoker, [10.09.19 22:04]
[In reply to Paul Screen]
ok so can you run only simple multi-conditional transactions on the Nexus VM, or Facebook-type DApps on Nexus?

Viz., [10.09.19 22:09]
[In reply to CryptoJoker]
Nah, the VM is the interpreter, so the language fits on top. We haven't designed Nexus to be a programming language; this is an approach we didn't agree with. It is API based, so you can code in any language and work with the functionality of the blockchain layer for what you need it to do. No, you can't port EVM code into Nexus.

CryptoJoker, [11.09.19 00:22]
[In reply to Viz.]
when you make an api request, does this result in computations done on the blockchain ?

Viz., [11.09.19 00:23]
It depends on the API request. If you do, let's say, users/list/notifications, then no, as this is reading data; but finance/credit/account would, since it broadcasts a transaction with OP::CREDIT.

CryptoJoker, [11.09.19 00:24]
[In reply to Viz.]
ok thanks, and can you provide me an idea of the flexibility of the VM on Nexus? can it run Facebook-type dapps for example?

Viz., [11.09.19 00:28]
It depends on what you want the blockchain to do and not do. Dapp is an overused word, and overstated in capability, because a blockchain isn't a computer and shouldn't ever be; it's a verifier. So, if you wanted to make a social network on Nexus, trade tokens, chat, sure you could. A lot of functionality will be in the logical layer, as there's no point in computing, let's say, an image compression on the blockchain. You would compress it on the logical layer, hash it, then build an object register to hold the metadata associated with it, including the checksum and a description. Then to update, you change the state in the object register, etc. If you wanted to put conditions on some of these interactions, you could program in the Boolean expression, such as: I'll sell you this object for 5 NXS, and someone is able to claim the transfer based on the condition of their debit, and so on.

CryptoJoker, [11.09.19 00:29]
[In reply to CryptoJoker]
how does the functionality of nexus scripts compare to that of bitcoin scripts?
is it fair to assume this:
bitcoin scripting < nexus scripting < ethereum type smart contracts ?
it seems like it's mostly built to handle only transactions ...

Viz., [11.09.19 00:31]
[In reply to CryptoJoker]
Not really; our architecture is completely different, so it's hard to compare functionality. Let's just say bitcoin scripts are slow, clunky, and stack based, and only handle a Boolean expression to spend inputs; ours is a register-based system with primitive operations and conditions that all interact to provide contract functionality.

Viz., [11.09.19 00:31]
[In reply to CryptoJoker]
You miss what the term register means then
And “programmable data structures”

CryptoJoker, [11.09.19 00:31]
[In reply to Viz.]
yes, this is the first time i am encountering this
i also dont have a programming background unfortunately

Viz., [11.09.19 00:33]
A register is a structure that on hardware is what your cpu uses to store numbers in low latency memory (close to the CPU in its internal cache).... hang on, switching to computer...

CryptoJoker, [11.09.19 00:34]
[In reply to Viz.]
ok this is all fine and good, but i guess for a layman like me i just wanna know what its functionalities are in comparison to EVM type VMs ...
what can it do in comparison to EVMs, and is it faster/slower, more expensive/cheaper?

Viz., [11.09.19 00:36]
Let me explain it like this: Ethereum is like everything put in one bucket; it has Turing-complete byte code because they imagined you could program it like a universal computer. The reality is, though, that most people abstract away from the EVM and use it for pure data storage, or managing accounts and tokens. They include operations like EXP for example, and use what is termed "Big Numbers", which are numbers in the range of 2^256, a number with nearly eighty digits. This has led to significant bottlenecks, and little value in being Turing complete even though this was their value proposition.

Viz., [11.09.19 00:36]
Now we get to Bitcoin, which was deliberately not Turing complete, and its scripts were designed to control the conditions on spending inputs in the UTXO model, in which it has proven useful. Some opcodes such as OP_RETURN have allowed people to store data on it, but then again it was not useful for much more than that.

Viz., [11.09.19 00:37]
Then we get to us, think of us between bitcoin and ethereum, but building contracts to act like contracts between people, rather than computer code.

Viz., [11.09.19 00:41]
So think of us as a blend of the two concepts, but in a way that is practical and useful for developers. From my research of talking to many companies that were using blockchain, I deduced a simple common denominator: nobody used ethereum for the turing completeness, they used it to store data. This was the foundation of the architecture that I developed for tritium that is a seven layer stack. So I'm going to break this down, and hope this communicates how it functions to create smart contracts or dapps in just about any capacity that's needed:

Viz., [11.09.19 00:41]
  1. Network - this is responsible for end to end communication between nodes, handling the relaying and receiving of ledger level data

Viz., [11.09.19 00:43]
  2. Ledger - this is responsible for ensuring all data is formed under consensus and is immutable by nature. This is where your 'sigchain' or blockchain account exists. A sigchain is a decentralized blockchain account that allows you to login from any node with a username, password, and pin without the need for wallet.dat files or constantly rescanning the database. This is an important piece to how the layers above work as well, think of it as a personal blockchain that allows decentralized access through the login system that does not store any credentials, but rather deterministically creates a 'lock' mathematically that only your credentials can unlock, using a few different cryptographic functions I won't name here

Viz., [11.09.19 00:46]
  3. Register - this layer is the data retention layer, or the layer that stores information relating to users. A register takes two forms: state and object. A state register is just a simple register that can store data in any sequence with no formatting enforced by the ledger. This would be for applications that have a state they want to remain immutable, which they can record in a state register. The second form is an object register, which is a programmable data type. What this means is that I can specify the fields of this register, and set some of the fields to be mutable or immutable, such as, let's say, S/N would be immutable, but notes mutable. This allows objects to take a form much like a struct or class in object oriented languages, that can be accessed by any node, and only written to by the owning sigchain. Now registers sit on top of the ledger, and they can be transferred between sigchains or users, allowing them to take a natural form as assets or simple objects that would be included in a decentralized application, such as a crypto kitty, or a post you make on social media, etc. This layer is responsible for managing all these states and ensuring the specified fields in these states are immutable, while other fields can be updated like a program would do as it operates.

Viz., [11.09.19 00:51]
  4. Operation - this layer is what gives context to a register and causes some action to take place. There are two aspects to this layer: primitive operations and conditional operations. A contract object is a self contained object containing: a register pre-state (the register that is being operated on), a primitive operation (only one primitive operation per contract), and a set of conditions (any amount of conditional ops may be used, for a fee of course). The primitive operations are basic ones like WRITE, APPEND, DEBIT, CREDIT, TRANSFER, CLAIM, CREATE, AUTHORIZE, TRUST, COINBASE, GENERATE. Each of these has a specific operation on the register it is initiated in. This is how you would maintain the state of a decentralized app. Let's say crypto kitties: you have an object register that you create with OP::CREATE that has a specific metadata format associated with it, you then OP::TRANSFER it to someone else, but you give a condition saying they must send 500 NXS beforehand, and this is the stipulation of the TRANSFER being CLAIMABLE. When this condition is satisfied you are able to claim the object, allowing for forms of exchange. Other stipulations or conditions could be arbitration, escrow, etc. Conditions are used when there is an interaction between two actors or sigchains, which happens with a DEBIT or TRANSFER. Otherwise the other primitive operations act on the register, such as changing its state.

Viz., [11.09.19 00:52]
  5. API - This layer is responsible for giving an interface for the programmer to build their DAPP. This gives them direct access to login, create registers, create accounts, send coins, read data, manage notifications, etc. This is the layer developers will interact with when building applications.

Viz., [11.09.19 00:53]
  6. Logical - This is the first 'developer' layer, meaning that this is the layer that will give most of the logic to the application. This could be simple things like: send a message to this user if they have this object that has a value of 'you're my friend', or anything else. This layer is the 'backend' of the dapp, and what provides a lot of the functionality. States can be read and written into the register layer, information from the ledger can be shared, stipulations on interactions can be applied, etc.

Viz., [11.09.19 00:54]
  7. Interface - This is the 'user' layer, where the user will interact with the application. In the facebook example this would be the website you go to, and all the buttons that do fun stuff. This is the last layer of the 'developer' application space.
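
The register and operation layers described above can be sketched in a few lines. This is an illustrative toy in Python only, not Nexus code: the field names, the `ObjectRegister` class, and the `can_claim` helper are all invented for the example of an object with an immutable serial number and a TRANSFER that is claimable once the claimant's DEBIT meets the condition.

```python
# Hypothetical sketch, not Nexus code: an object register with per-field
# mutability (layer 3) and a conditional transfer check (layer 4).
class ObjectRegister:
    def __init__(self, fields, immutable=()):
        self.fields = dict(fields)          # field name -> value
        self.immutable = set(immutable)     # fields the ledger would freeze

    def write(self, name, value):           # rough OP::WRITE analogue
        if name in self.immutable:
            raise ValueError(f"field {name!r} is immutable")
        self.fields[name] = value

def can_claim(transfer_condition, debit_amount):
    # Rough OP::CLAIM analogue: the TRANSFER becomes claimable once the
    # claimant's DEBIT satisfies the Boolean condition attached to it.
    return debit_amount >= transfer_condition["min_nxs"]

kitty = ObjectRegister({"serial": "K-001", "notes": ""}, immutable={"serial"})
kitty.write("notes", "transferred with a 500 NXS condition")  # allowed
print(can_claim({"min_nxs": 500}, 500))  # True: the claim succeeds
```

In the real system these rules would be enforced by the ledger under consensus, not by application code; the sketch only shows the shape of the idea.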

CryptoJoker, [11.09.19 00:54]
thank you
big applause!!!!

CryptoJoker, [11.09.19 00:56]
ok so to summarize my understanding of this
3 allows for creation and transfer of digital assets/objects
4 governs the operations on objects in 3
5 is the interface between dapp logic and the register/operations layers
6 this is where the "dapp" is written by developers
am i right ?

Viz., [11.09.19 00:57]
So as you can see, all these layers together are what form the foundation of a dapp, with the blockchain doing some things and the application doing other things. Together they give the blockchain scalability and make it easy to build on, and also give the application powerful tools to utilize. For an example of an object register, take your NXS account. It contains the fields identifier and balance. The identifier identifies the token's contract-id or object register, and the balance keeps track of how much you have at stake. Object registers can be polymorphic though, so you can create an object register with these two base types but add, say, notes, which you could fill with personal notes, and the DEBIT and CREDIT operations would process it off of the base object, or the account. This means that you can expand from these basic objects and create many different types and uses, creating object-oriented and polymorphic behavior.

Viz., [11.09.19 00:57]
3 is simply where they are stored; it takes 4 to create the object
5 yes
6 yes, plus 7; the dapp space is layers 6 and 7 together. If the dapp developer is really good though, they make custom APIs with more complex contracts under the hood to provide additional functionality to their dapp, but we currently abstract the developer away from this to prevent them making mistakes that could lose people a lot of money

CryptoJoker, [11.09.19 00:59]
whereas for the EVM, 3, 4, 5, 6 and 7 are all bundled into one entity, am i right ?

Viz., [11.09.19 00:59]
And last note, on layer 4, the conditional statements. These also operate on a register based VM that processes the conditional statements, and they can be grouped with as many different conditions as desired, so they can grow into quite complex contracts like we would see with legal contracts.

Viz., [11.09.19 01:00]
EVM doesn't really have layers
It's just EVM opcodes, and then the compiler for Solidity which creates the byte code, so maybe two layers
Same with bitcoin scripts
But bitcoin scripts don't have a compiler that creates the byte code, so you have to program them as a type of assembly
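
To make the "type of assembly" point concrete, here is a toy stack evaluator in Python using a tiny invented subset of opcodes (not a real Script interpreter): data items are pushed, opcodes pop their operands and push results, and a script is considered successful if it leaves a truthy value on top of the stack.

```python
# Toy sketch of stack-based script evaluation, in the spirit of Bitcoin
# Script. Only a few made-up opcodes are modeled; real Script has many more.
def run_script(script):
    stack = []
    for op in script:
        if op == "OP_DUP":
            stack.append(stack[-1])         # duplicate top of stack
        elif op == "OP_ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "OP_EQUAL":
            b, a = stack.pop(), stack.pop()
            stack.append(a == b)
        else:                               # anything else is pushed as data
            stack.append(op)
    return stack

# "2 3 OP_ADD 5 OP_EQUAL" leaves True on the stack: the script succeeds.
print(run_script([2, 3, "OP_ADD", 5, "OP_EQUAL"]))  # [True]
```

There is no compiler involved: the programmer writes the opcode sequence directly, which is exactly the "assembly-like" experience described above.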

Viz., [11.09.19 01:01]
So long story short is, our techniques and architecture are quite unique, and designed around years of market research to ensure that it was built as something that people could use easily, but also powerful enough to power the dapps people want to see

Viz., [11.09.19 01:03]
The login account is really important for adoption in my opinion, because users having to manage keys won't bode well for applications that expand, let's say like supply chains or other mobile applications. Managing keys in a file on your computer I think is a big hurdle to mainstream adoption. The other one is the complexity of the EVM and how little practical application it has; even though it contains a lot of functionality, most of it is unused or abstracted away from.
submitted by scottsimon36 to nexusearth

IOTA, and When to Expect the COO to be Removed

Hello All,
This post is meant to address the elephant in the room, and the #1 criticism that IOTA gets: the existence of the Coordinator node.
The Coordinator, or COO for short, is a special piece of software operated by the IOTA Foundation. Its function is to drop "milestone" transactions onto the Tangle that help in the ordering of transactions.
As this wonderful post on reddit highlights (https://www.reddit.com/Iota/comments/7c3qu8/coordinator_explained/)
When you want to know if a transaction is verified, you find the newest Milestone and you see if it indirectly verifies your transaction (i.e. it verifies your transaction, or it verifies a transaction that verifies your transaction, or it verifies a transaction that verifies a transaction that verifies your transaction, etc). The reason that the Milestones exist is because if you just picked any random transaction, there's the possibility that the node you're connected to is malicious and is trying to trick you into verifying its transactions. The people who operate nodes can't fake the signatures on Milestones, so you know you can trust the Milestones to be legit.
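
The walk described in that quote is easy to sketch. Below is a toy illustration in Python, with a made-up approval map rather than real IOTA node logic: a milestone indirectly verifies a transaction if the transaction is reachable by following direct-verification edges.

```python
# Toy sketch (not the real IOTA node API): check whether a milestone
# indirectly verifies a transaction by walking the approval edges.
# `approves` maps each transaction id to the ids it directly verifies.
def indirectly_verifies(approves, start, target):
    seen = set()
    stack = [start]
    while stack:
        tx = stack.pop()
        if tx == target:
            return True
        if tx in seen:
            continue
        seen.add(tx)
        stack.extend(approves.get(tx, []))
    return False

# Milestone "m" verifies "a" and "b"; "a" verifies "c".
approves = {"m": ["a", "b"], "a": ["c"]}
print(indirectly_verifies(approves, "m", "c"))  # True
print(indirectly_verifies(approves, "m", "x"))  # False
```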
The COO protects the network, that is great right?
No, it is not.
The coordinator represents a centralized entity, draws the ire of the cryptocurrency community in general, and is the reason behind a lot of FUD.
Here is where things get dicey. If you ask the IOTA Foundation, the last official response I heard was
We are running supercomputer simulations with the University of St. Petersburg to determine when that could be a possibility.
This answer didn't satisfy me, so I've spent the last few weeks thinking about the problem, and I think I can explain the challenges that the IOTA Foundation is up against, what they expect to model with the supercomputer simulations, and ultimately what my intuition (backed up by some back-of-the-napkin mathematics) tells me the outcomes will be.
In order to understand the bounds of the problem, we first need to understand what our measuring stick is.
Our measuring stick provides measurements in hashes per second. A hash is a mathematical operation that blockchain (and DAG) based applications require before accepting your transaction. This is generally thought of as an anti-spam measure used to protect a blockchain network.
IOTA and Bitcoin share some things in common, and one of those things is that they both require Proof of Work in order to interact with the blockchain.
In IOTA, a single hash is completed for each Transaction that you submit. You complete this PoW at the time of submitting your Transaction, and you never revisit it again.
In Bitcoin, hashes are guessed at by millions of computers (miners) competing to be the first to solve the correct hash and ultimately mint a new block.
Because of the competitive nature of the bitcoin mining mechanism, the bitcoin hashrate is a sustained hashrate, while the IOTA hashrate is "bursty" going through peaks and valleys as new transactions are submitted.
Essentially, IOTA performance is a function of the current throughput of the network, while bitcoin's performance is a delicate balance between all collective miners and the hashing difficulty, with the goal of pegging the block time to 10 minutes.
With all that said, I hope it is clear that we can come to the following conclusion.
The amount of CPU time required to compute 1 Bitcoin hash is much much greater then the amount of CPU time required to compute 1 IOTA hash.
T(BtcHash) >> T(IotaHash)
After all, low powered IOT devices are supposed to be able to execute the IOTA hashing function in order to submit their own transactions.
A "hash" has be looked at as an amount of work that needs to be completed. If you are solving a bitcoin hash, it will take a lot more work to solve then an IOTA hash.
When we want to measure IOTA, we usually look at "Transactions Per Second". Since each Transaction requires a single Hash to be completed, we can translate this measurement into "Hashes Per Second" that the entire network supports.
IOTA has seen Transactions Per Second on the order of magnitude of <100. That means, that at current adoption levels the IOTA network is supported and secured by 100 IOTA hashes per second (on a very good day).
Bitcoin hashes are much more difficult to solve. The bitcoin network is secured by 1 Bitcoin hash every 10 minutes (and adjusts its difficulty over time to remain pegged at 10 minutes). (More details on bitcoin mining: https://www.coindesk.com/information/how-bitcoin-mining-works/)
Without the COO's protection, IOTA would be a juicy target to destroy. With only 100 IOTA hashes per second securing the network, an individual would only need to maintain a sustained 34 hashes per second in order to completely take over the network.
Personally, my relatively moderate gaming PC takes about 60 seconds to solve IOTA Proof of Work before my transaction will be submitted to the Tangle. This is not a beastly machine, nor does it utilize specialized hardware to solve my Proof of Work. This gaming PC cost about $1000 to build, and provides me 0.0166 hashes per second.
Using this figure, we can derive that consumer electronics provide a hashing efficiency of roughly $60,000 USD / hash / second ($60k per hash per second) on the IOTA network.
Given that the Tx/second of IOTA is around 100 on a good day, and that it requires $60,000 USD to acquire 1 hash/second of computing power, we would need 34 * $60,000 to attack the IOTA network.
The total amount of money required to 34% attack the IOTA project is $2,040,000.
This is a very small number. Not only that, but the hash rate required to conduct such an attack already exists, and it is likely that this attack has already been attempted.
The simple truth is that, due to the economic incentive of mining, the hash rate required to attack IOTA is already centralized, and its owners are foaming at the mouth to attack IOTA. This is why the Coordinator exists, and why it will not be going anywhere anytime soon.
The most important thing that needs to occur to remove the COO, is that the native measurement of transactions per second (which ultimately also measures the hashes per second) need to go drastically up in orders of magnitude.
If the IOTA transaction volume were to increase to 1000 transactions per second, then it would require 340 transactions per second from a malicious actor to compromise the network. In order to complete 340 transactions per second, the attacker would now need the economic power of 340 * $60,000 to 34% attack the IOTA network.
In this hypothetical scenario, the cost of attacking the IOTA network is $20,400,000. This number is still pretty small, but at least you can see the pattern. IOTA will likely need to hit many-thousand transactions per second before it can be considered secure.
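
Since the attack cost scales linearly with throughput, the two scenarios above can be checked with a couple of lines. The $60,000-per-hash-per-second figure is the rough estimate derived earlier from one gaming PC, not a measured value:

```python
# Back-of-the-napkin cost to 34% attack the network, using the text's
# estimate that ~$60,000 USD buys about 1 hash/second of consumer hardware.
def attack_cost_usd(network_tps, usd_per_hash=60_000, fraction=0.34):
    # The attacker must sustain `fraction` of the network's hash rate.
    return network_tps * fraction * usd_per_hash

print(f"${attack_cost_usd(100):,.0f}")    # $2,040,000  (today, ~100 TPS)
print(f"${attack_cost_usd(1000):,.0f}")   # $20,400,000 (hypothetical 1000 TPS)
```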
What we have to keep in mind here, is that IOTA has an ace up their sleeve, and that Ace is JINN Labs and the ternary processor that they are working on.
Ultimately, JINN is the end-game for the IOTA project that will make the removal of the COO a reality.
In order to understand what JINN is, we need to understand a little bit about computer architecture and the nature of computational instruction in general.
A "processor" is a piece of hardware that performs micro calculations. These micro calculations are usually very simple, such as adding two numbers, subtracting two numbers, incrementing, decrementing, and the like. The operation that is completed (addition, subtraction) is called the opcode while the numbers being operated on are called the operands.
Traditional processors, like the ones you find in my "regular gaming PC" are binary processors where both the opcode and operands are expected to be binary numbers (or a collection of 0s and 1s).
The JINN processor, provides the same functionality, mainly a hardware implementation of micro instructions. However, it expects the opcodes and operands to be ternary numbers (or a collection of 0s, 1s, and 2s).
I won't get into the computational data density of base 2 vs. base 3 processors, nor will I get into the energy efficiency of those processors. What I will be getting into, however, is how certain tasks are simpler to solve in certain number systems.
Depending on what operations are being executed upon the operands, performing the calculation in a different base will actually reduce the number of steps required, and thus the execution time of the calculation. For example, see how base 12 has been argued to be superior to base 10 (https://io9.gizmodo.com/5977095/why-we-should-switch-to-a-base-12-counting-system)
I want to be clear here. I am not saying that any one number system is superior to any other number system for all types of operations. I am simply saying that there exist certain types of calculations that are easier to perform in base 2 than they are in base 10. Likewise, there are calculations that are vastly simpler in base 3 than they are in base 2.
The IOTA POW, and the algorithms required to solve for it is one of these algorithms. The IOTA PoW was designed to be ternary in nature, and I suggest that this is the reason right here. The data density and electricity savings that JINN provides are great, but the real design decision that has led to base 3 has been that they can now manufacture hardware that is superior at solving their own PoW calculations.
Binary emulation is when a binary processor is asked to perform ternary operations. A binary processor is completely able to solve ternary hashes, but in order to do so it will need to emulate the ternary micro instructions at a higher level in the application stack, away from the hardware.
If you had access to a base 3 processor, and needed to perform a base 3 addition operation, you could easily ask your processor to natively perform that calculation.
If all you have access to is a base 2 processor, you would need to emulate a base 3 number system in software. This would ultimately result in a higher number of instructions passing through your processor, more electricity being utilized, and more time to complete.
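
As a small illustration of what emulating base 3 in software means, a binary machine has to simulate trits (digits 0, 1, 2) with ordinary integer arithmetic. This toy sketch converts to and from plain ternary; it is not IOTA's actual hashing code:

```python
# Sketch: representing base-3 digits ("trits") on ordinary binary hardware.
# A binary CPU has no native trit type, so every step here is extra software
# work that a native ternary processor would do in hardware.
def to_trits(n):
    trits = []
    while n:
        n, r = divmod(n, 3)
        trits.append(r)
    return trits or [0]          # least-significant trit first

def from_trits(trits):
    return sum(t * 3**i for i, t in enumerate(trits))

print(to_trits(42))  # [0, 2, 1, 1], i.e. 42 = 0*1 + 2*3 + 1*9 + 1*27
```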
Finally, let's review these figures.
It costs roughly $60k to acquire 1 hash per second in base 2 consumer electronics. It costs roughly $2M to acquire enough base 2 hash rate to 34% attack the IOTA network.
JINN will be specifically manufactured hardware that will solve base 3 hashes natively. What this likely means is that $1 spent on JINN will be much more effective at acquiring base 3 hash rate than $1 spent on base 2 hash rate.
Finally, with bitcoin and traditional blockchain applications there lies an economic incentive to amass mining hardware.
It first starts out with a miner earning income from his mining rig. He then reinvests those profits in additional hardware to increase his income.
Eventually, this spirals into an arms race where the players left in the game have increasingly available resources, up until the point that there are only a handful of players left.
This economic incentive creates a mass centralization of computing resources capable of being misused in a coordinated effort to attack a cryptocurrency.
IOTA aims to break this economic incentive, and the centralization that it causes. However, over the short term, the fact that this centralization of resources does exist is an existential peril to IOTA, and the COO is an inconvenient truth that we all have to live with.
Due to all the above, I think we can come to the following conclusions:
  1. IOTA will not be able to remove the COO until the transactions per second (and ultimately hashrate) increase by orders of magnitude.
  2. The performance of JINN processors, and their advantage of being able to compute natively on ternary operands and opcodes will be important for the value ratio of $USD / hash rate on the IOTA network
  3. Existing mining hardware is at a fundamental disadvantage to computing base 3 hashes when compared to a JINN processor designed specifically for that function
  4. Attrition of centralized base 2 hash power will occur if the practice of mining, and the income related to it, can be defeated. Then the incentive of amassing a huge amount of centralized computing power will be reduced.
  5. JINN processors, and their adoption in consumer electronics (like cell phones and cars), hold the key to being able to provide enough "bursty" hash rate to defend the network from 34% attacks without the help of the COO.
  6. What are the super computer simulations? I think they are simulating a few things. They are modeling tip selection algorithms to reduce the amount of unverified transactions, however I think they may also be performing some simulations regarding the above calculations. JINN processors have not been released yet, so the performance benchmarks, manufacturing costs, retail costs, and adoption rates are all variables that I cannot account for. The IF probably has much better insight into all of those figures, which will allow them to better understand when the techno-economic environment would be conducive to the disabling of the COO.
  7. The COO will likely be decentralized before it is removed. With all this taken into account, the date that the COO will be removed is years off if I was forced to guess. This means, that decentralizing the COO itself would be a sufficient stop-gap to the centralized COO that we see today.
submitted by localhost87 to Iota

Transcript of the community Q&A with Steve Shadders and Daniel Connolly of the Bitcoin SV development team. We talk about the path to big blocks, new opcodes, selfish mining, malleability, and why November will lead to a divergence in consensus rules. (Cont in comments)

We've gone through the painstaking process of transcribing the linked interview with Steve Shadders and Daniel Connolly of the Bitcoin SV team. There is an amazing amount of information in this interview that we feel is important for businesses and miners to hear, so we believe it was important to get this in written form. To avoid any bias, the transcript is taken almost word for word from the video, with just a few changes made for easier reading. If you see any corrections that need to be made, please let us know.
Each question is in bold, and each question and response is timestamped accordingly. You can follow along with the video here:


Connor: 0:02:19.68,0:02:45.10
Alright, so thank you Daniel and Steve for joining us. We're joined by Steve Shadders and Daniel Connolly from nChain, also the lead developers of the Satoshi’s Vision client. So Daniel and Steve, do you guys just want to introduce yourselves before we kind of get started here - who are you guys and how did you get started?
Steve: 0:02:38.83,0:03:30.61
So I'm Steve Shadders and at nChain I am the director of solutions and engineering, and specifically for Bitcoin SV I am the technical director of the project, which means that I'm a bit less hands-on than Daniel but I handle a lot of the liaison with the miners that commissioned the project.
Hi, I’m Daniel. I’m the lead developer for Bitcoin SV. As the team's grown, that means that I do less actual coding myself and more organizing the team and organizing what we’re working on.
Connor: 0:03:23.07,0:04:15.98
Great, so we took some questions - we asked on Reddit to have people come and post their questions. We tried to take as many of those as we could and eliminate some of the duplicates, so we're gonna kind of go through each question one by one. We added some questions of our own in and we'll try and get through most of these if we can. So I think we just wanted to start out and ask, you know, Bitcoin Cash is a little bit over a year old now. Bitcoin itself is ten years old, but in the past little over a year now, what has the process been like for you guys working with the multiple development teams, and, you know, why is it important that the Satoshi’s Vision client exists today?
Steve: 0:04:17.66,0:06:03.46
I mean yes, well, we’ve been in touch with the developer teams for quite some time - I think a bi-weekly meeting of Bitcoin Cash developers across all implementations started around November last year. I myself joined those in January or February of this year and Daniel a few months later. So we communicate with all of those teams and I think, you know, it's not been without its challenges. It's well known that there's a lot of disagreements around it, but what I do look forward to in the near future is a day when the consensus issues themselves are all rather settled, and if we get to that point then there's not going to be much reason for the different developer teams to disagree on stuff. They might disagree on non-consensus related stuff but that's not the end of the world because, you know, Bitcoin Unlimited is free to go and implement whatever they want in the back end of Bitcoin Unlimited and Bitcoin SV is free to do whatever they want in the back end, and if they interoperate on a non-consensus level, great. If they don't, it's not such a big problem - there will obviously be bridges between the two. So, yeah, I think going forward the complications of having so many personalities with wildly different ideas are going to get less and less.
Cory: 0:06:00.59,0:06:19.59
I guess moving forward now another question about the testnet - a lot of people on Reddit have been asking what the testing process for Bitcoin SV has been like, and if you guys plan on releasing any of those results from the testing?
Daniel: 0:06:19.59,0:07:55.55
Sure, yeah, so our release was concentrated on stability, right, with the first release of Bitcoin SV, and that involved doing a large amount of additional testing - particularly not so much at the unit test level but at the more system test level: setting up test networks, performing tests, and making sure that the software behaved as we expected, right. Confirming the changes we made, making sure that there aren’t any other side effects. Because, you know, it was quite a rush to release the first version, we've got our test results documented, but not in a way that we can really release them. We're thinking about doing that but we’re not there yet.
Steve: 0:07:50.25,0:09:50.87
Just to tidy that up - we've spent a lot of our time developing really robust test processes and the reporting is something that we can read on our internal systems easily, but we need to tidy that up to give it out for public release. The priority for us was making sure that the software was safe to use. We've established a test framework that involves a progression of code changes through multiple test environments - I think it's five different test environments before it gets the QA stamp of approval - and as for the question about the testnet, yeah, we've got four of them. We've got Testnet One and Testnet Two. A slightly different numbering scheme to the testnet three that everyone's probably used to – that’s just how we reference them internally. They're [1 and 2] both forks of Testnet Three. [Testnet] One we used for activation testing, so we would test things before and after activation - that one’s set to reset every couple of days. The other one [Testnet Two] was set to post activation so that we can test all of the consensus changes. The third one was a performance test network which I think most people have probably heard us refer to before as Gigablock Testnet. I get my tongue tied every time I try to say that word so I've started calling it the Performance test network and I think we're planning on having two of those: one that we can just do our own stuff with and experiment without having to worry about external unknown factors going on and having other people joining it and doing stuff that we don't know about that affects our ability to baseline performance tests, but the other one (which I think might still be a work in progress so Daniel might be able to answer that one) is one of them where basically everyone will be able to join and they can try and mess stuff up as bad as they want.
Daniel: 0:09:45.02,0:10:20.93
Yeah, so we recently shared the details of Testnet One and Two with the other BCH developer groups. The Gigablock test network we've shared with one group so far, but yeah, we're building it, as Steve pointed out, to be publicly accessible.
Connor: 0:10:18.88,0:10:44.00
I think that was my next question I saw that you posted on Twitter about the revived Gigablock testnet initiative and so it looked like blocks bigger than 32 megabytes were being mined and propagated there, but maybe the block explorers themselves were coming down - what does that revived Gigablock test initiative look like?
Daniel: 0:10:41.62,0:11:58.34
That's what the Gigablock test network is. The Gigablock test network was first set up by Bitcoin Unlimited with nChain’s help and they did some great work on that, and we wanted to revive it. So we wanted to bring it back and do some large-scale testing on it. It's a flexible network - at one point we had eight different large nodes spread across the globe, sort of mirroring the old one. Right now we've scaled back because we're not using it at the moment, so there are, I think, three. We have produced some large blocks there and it's helped us a lot in our research into the scaling capabilities of Bitcoin SV, so it's guided the work that the team’s been doing for the last month or two on the improvements that we need for scalability.
Steve: 0:11:56.48,0:13:34.25
I think that's actually a good point to kind of frame where our priorities have been in kind of two separate stages. I think, as Daniel mentioned before, because of the time constraints we kept the change set for the October 15 release as minimal as possible - it was just the consensus changes. We didn't do any work on performance at all and we put all our focus and energy into establishing the QA process and making sure that that change was safe, and that was a good process for us to go through. It highlighted what we were missing in our team – we got our recruiters very busy recruiting a Test Manager and more QA people. The second stage after that is performance related work which, as Daniel mentioned, the results of our performance testing fed into - what tasks we were gonna start working on for the performance related stuff. Now that work is still in progress - for some of the items that we identified the code is done and that's going through the QA process, but it’s not quite there yet. That's basically the two-stage process that we've been through so far. We have a roadmap that goes further into the future that outlines more stuff, but primarily it’s been QA first, performance second. The performance enhancements are close and on the horizon but some of that work should be ongoing for quite some time.
Daniel: 0:13:37.49,0:14:35.14
Some of the changes we need for the performance are really quite large and really get down into the base level of the software. There's kind of two groups of them, mainly: ones that are internal to the software – to Bitcoin SV itself - improving the way it works inside, and then other ones that interface it with the outside world. One of those in particular we're working closely with another group on to make a compatible change - it's not consensus changing or anything like that - but having the same interface on multiple different implementations will be very helpful, right, so we're working closely with them to make improvements for scalability.
Connor: 0:14:32.60,0:15:26.45
Obviously for Bitcoin SV one of the main things that you guys wanted to do that some of the other developer groups weren't willing to do right now is to increase the maximum default block size to 128 megabytes. I kind of wanted to pick your brains a little bit about that - a lot of the objection to either removing the block size limit entirely or increasing it on a larger scale is this idea of the infinite block attack, right, and that kind of came through in a lot of the questions. What are your thoughts on the “infinite block attack” - is it something that really exists, is it something that miners themselves should be more proactive on preventing, or I guess what are your thoughts on that attack that everyone says will happen if you uncap the block size?
Steve: 0:15:23.45,0:18:28.56
I'm often quoted on Twitter and Reddit - I've said before the infinite block attack is bullshit. Now, that's a statement that I suppose is easy to take out of context, but I think the 128 MB limit is something there are probably two schools of thought about. There are some people who think that you shouldn't increase the limit to 128 MB until the software can handle it, and there are others who think that it's fine to do it now so that the limit is already increased when the software improves and can handle it, and you don’t run into the limit at that point. Obviously we’re from the latter school of thought. As I said before we've got a bunch of performance increases, performance enhancements, in the pipeline. If we wait till May to increase the block size limit to 128 MB then those performance enhancements will go in, but we won't be able to actually demonstrate it on mainnet. As for the infinite block attack itself, I mean there are a number of mitigations that you can put in place. I mean firstly, you know, going down to a bit of the tech detail - when you send a block message or send any peer to peer message there's a header which has the size of the message. If someone says they're sending you a 30MB message and you're receiving it and it gets to 33MB then obviously you know something's wrong so you can drop the connection. If someone sends you a message that's 129 MB and you know the block size limit is 128 you know it’s kind of pointless to download that message. So I mean these are just some of the mitigations that you can put in place. When I say the attack is bullshit, I mean it is bullshit in the sense that it's really quite trivial to prevent it from happening. I think there is a bit of a school of thought in the Bitcoin world that if it's not in the software right now then it kind of doesn't exist. I disagree with that, because there are small changes that can be made to work around problems like this.
One other aspect of the infinite block attack - and let’s not call it the infinite block attack, let's just call it the large block attack - is that it takes a lot of time to validate. That we've gotten around by having parallel pipelines for blocks to come in, so you've got a block that's coming in and it's stuck there for two hours or whatever, downloading and validating. At some point another block is going to get mined by someone else, and as long as those two blocks aren't stuck in a serial pipeline then, you know, the problem kind of goes away.
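The header-size mitigation Steve describes can be sketched in a few lines. This is an illustrative sketch under assumed names and a 128 MB constant, not the actual Bitcoin SV networking code:

```python
# Illustrative sketch of the mitigation described above: drop a peer as
# soon as it exceeds the size its own message header declared, or when
# the declared size already exceeds the block size limit. Function name
# and constant are assumptions for the example.
MAX_BLOCK_SIZE = 128 * 1024 * 1024  # 128 MB limit assumed for the example

def receive_message(declared_size, chunks):
    """Accumulate chunks from a peer, aborting early on any mismatch."""
    if declared_size > MAX_BLOCK_SIZE:
        # e.g. a peer announcing a 129 MB block message: pointless to download
        raise ConnectionError("declared size exceeds block size limit")
    received = bytearray()
    for chunk in chunks:
        received.extend(chunk)
        if len(received) > declared_size:
            # peer promised 30 MB but keeps sending: something is wrong
            raise ConnectionError("peer sent more data than declared")
    return bytes(received)
```

The point of the sketch is that both checks are cheap and local - neither requires validating any block content before deciding to drop the connection.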
Cory: 0:18:26.55,0:18:48.27
Are there any concerns with the propagation of those larger blocks? Because there's a lot of questions around you know what the practical size of scaling right now Bitcoin SV could do and the concerns around propagating those blocks across the whole network.
Steve 0:18:45.84,0:21:37.73
Yes, there have been concerns raised about it. I think what people forget is that compact blocks and xThin exist, so a 32MB block does not send 32MB of data in most cases - almost all cases. The concern here that I think I do find legitimate is the Great Firewall of China. Very early on in Bitcoin SV we started talking with miners on the other side of the firewall and that was one of their primary concerns. We had anecdotal reports of people who were having trouble getting a stable connection any faster than 200 kilobits per second, and even with compact blocks you still need to get the transactions across the firewall. So we've done a lot of research into that - we tested our own links across the firewall, or rather CoinGeek's links across the firewall, as they’ve given us access to some of their servers so that we can play around, and we were able to get sustained rates of 50 to 90 megabits per second, which pushes that problem quite a long way down the road into the future. I don't know the maths off the top of my head, but the size of the blocks that that can sustain is pretty large. So we're looking at a couple of options - it may well be that the chattiness of the peer-to-peer protocol causes some of these issues with the Great Firewall, so we have someone building a bridge concept/tool where you basically just have one kind of TX vacuum on either side of the firewall that collects them all up and sends them off every one or two seconds as a single big chunk to eliminate some of that chattiness. The other is we're looking at building a multiplexer that will sit and send stuff up to the peer-to-peer network on one side and send it over splitters, to send it over multiple links, reassemble it on the other side so we can sort of transit the Great Firewall without too much trouble, but I mean getting back to the core of your question - yes there is a theoretical limit to block size propagation time and that's kind of where Moore's Law comes in.
Putting in faster links kicks that can further down the road, and you just keep on putting in faster links. I don't think 128 MB blocks are going to be an issue, though, with the speed of the internet that we have nowadays.
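The maths Steve skips is simple enough to sketch. Taking the measured 50-90 Mbps figures at face value, and ignoring protocol overhead and compact-block savings entirely (both assumptions for illustration):

```python
# Back-of-envelope version of the propagation maths, using the interview's
# measured 50-90 Mbps cross-firewall figures as assumed inputs.
def transferable_mb(link_mbps, seconds=600):
    """Megabytes a sustained link can move in one ~10-minute block interval."""
    return link_mbps / 8 * seconds  # Mbit/s -> MB/s, times the interval

# At the 50 Mbps lower bound: 6.25 MB/s * 600 s = 3750 MB per interval,
# so even a 128 MB block uses only a few percent of the raw link budget.
```

This supports the "not an issue" claim only for raw transfer; real propagation also involves validation time and multiple network hops.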
Connor: 0:21:34.99,0:22:17.84
One of the other changes that you guys are introducing is increasing the maximum script opcode count - I think right now it’s going from 201 to 500 [opcodes]. So I guess a few of the questions we got were, #1, why not uncap it entirely - I think you guys said you ran into some concerns while testing that - and then #2, specifically, we had a question about how certain are you that there are no remaining n squared bugs or vulnerabilities left in script execution?
Steve: 0:22:15.50,0:25:36.79
It's interesting, the decision - we were initially planning on removing that cap altogether and letting the next cap come into play after that (the next effective cap is a 10,000 byte limit on the size of the script). We took a more conservative route and decided to wind that back to 500 - it's interesting that we got some criticism for that when the primary criticism I think that was leveled against us was that it’s dangerous to increase that limit to unlimited. We did that because we’re being conservative. We did some research into these n squared bugs, sorry – attacks, that people have referred to. We identified a few of them and we had a hard think about it and thought - look, if we can find this many in a short time we can fix them all (the whack-a-mole approach), but it does suggest that there may well be more unknown ones. So we thought about taking the whack-a-mole approach, but that doesn't really give us any certainty. We will fix all of those individually, but a more global approach is to make sure that if anyone does discover one of these scripts it doesn't bring the node to a screaming halt. The problem here is that because the Bitcoin node is essentially single-threaded, if you get one of these scripts that locks up the script engine for a long time, everything that's behind it in the queue has to stop and wait. So what we wanted to do, and this is something we've got an engineer actively working on right now, is once that script validation code path is properly parallelized (parts of it already are), then we’ll basically assign a few threads for well-known transaction templates, and a few threads for any type of script. So if you get a few scripts that are nasty and lock up a thread for a while, that's not going to stop the node from working because you've got these other kind of lanes of the highway that are exclusively reserved for well-known script templates and they'll just keep on passing through.
Once you've got that in place, I think we're in a much better position to get rid of that limit entirely, because the worst that's going to happen is your non-standard script pipelines get clogged up but everything else will keep ticking along. There are other mitigations for this as well - I mean you could always put a time limit on script execution if you wanted to, and that would be something that would be up to individual miners. Bitcoin SV's job I think is to provide the tools for the miners and the miners can then choose, you know, how to make use of them - if they want to set time limits on script execution then that's a choice for them.
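The "lanes of the highway" idea can be modelled in a short sketch: well-known script templates get their own worker queue so a slow non-standard script can only tie up the generic lane. The template set, lane sizes, and class interface are illustrative assumptions, not the Bitcoin SV code:

```python
# Toy model of per-template validation lanes: standard-template txs and
# generic txs are dispatched to separate worker pools so one lane
# cannot stall the other. All names and sizes are assumptions.
import queue
import threading

STANDARD_TEMPLATES = {"p2pkh", "p2sh"}  # assumed set of well-known templates

def is_standard_template(tx):
    return tx.get("template") in STANDARD_TEMPLATES

class ValidationLanes:
    def __init__(self, standard_workers=3, generic_workers=1):
        self.lanes = {"standard": queue.Queue(), "generic": queue.Queue()}
        self.validated = []
        self.lock = threading.Lock()
        for lane, count in (("standard", standard_workers),
                            ("generic", generic_workers)):
            for _ in range(count):
                threading.Thread(target=self._worker, args=(lane,),
                                 daemon=True).start()

    def submit(self, tx):
        lane = "standard" if is_standard_template(tx) else "generic"
        self.lanes[lane].put(tx)

    def _worker(self, lane):
        while True:
            tx = self.lanes[lane].get()
            # real script validation would run here; a slow generic script
            # only occupies a generic-lane thread
            with self.lock:
                self.validated.append(tx["id"])
            self.lanes[lane].task_done()

    def join(self):
        for q in self.lanes.values():
            q.join()
```

The design point is simply queue separation: reserving threads for well-known templates bounds the damage an expensive non-standard script can do.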
Daniel: 0:25:34.82,0:26:15.85
Yeah, I'd like to point out that a node here, when it receives a transaction through the peer to peer network, doesn't have to accept that transaction - you can reject it. If it looks suspicious to the node it can just say, you know, we're not going to deal with that, or if it takes more than five minutes to execute, or more than a minute even, it can just abort and discard that transaction, right. The only time we can’t do that is when it's in a block already, but then it could decide to reject the block as well. Those are all possibilities in the software.
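The abort-and-discard policy Daniel describes amounts to a local time budget on script evaluation. A minimal sketch, assuming a step-callable interface and a one-second default budget (both invented for illustration):

```python
# Sketch of the mempool policy described above: a node may abort and
# discard a relayed transaction whose script runs past a local time
# budget. Rejecting here is policy only - the transaction can still be
# valid if it later appears in a block someone else validated.
import time

def accept_to_mempool(script_steps, budget_seconds=1.0):
    """Run script evaluation steps, giving up once the budget is spent."""
    deadline = time.monotonic() + budget_seconds
    for step in script_steps:
        if time.monotonic() > deadline:
            return False  # too expensive to validate: discard, don't relay
        step()
    return True
```

Note the asymmetry Steve points out next: policy rejection at relay time is safe precisely because block inclusion proves someone else managed to validate it.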
Steve: 0:26:13.08,0:26:20.64
Yeah, and if it's in a block already it means someone else was able to validate it so…
Cory: 0:26:21.21,0:26:43.60
There’s a lot of discussion about the re-enabled opcodes coming – OP_MUL, OP_INVERT, OP_LSHIFT, and OP_RSHIFT. Can you maybe explain the significance of those opcodes being re-enabled?
Steve: 0:26:42.01,0:28:17.01
Well I mean one of the most significant things is that, other than two which are minor variants of DUP and MUL, they represent almost the complete set of original opcodes. I think that's not necessarily a technical issue, but it's an important milestone. MUL is one that I've heard some interesting comments about. People ask me why are you putting OP_MUL back in if you're planning on changing them to big number operations instead of the 32-bit limit that they're currently imposed upon. The simple answer to that question is that we currently have all of the other arithmetic operations except for OP_MUL. We’ve got add, divide, subtract, modulo – it’s odd to have a script system that's got all the mathematical primitives except for multiplication. The other answer to that question is that they're useful - we've talked about a Rabin signature solution that basically replicates the function of DATASIGVERIFY. That's just one example of a use case for this - most cryptographic primitive operations require mathematical operations, and bit shifts are useful for a whole ton of things. So it's really just about completing that work and completing the script engine, or rather not completing it, but putting it back the way that it was meant to be.
Connor 0:28:20.42,0:29:22.62
Big Num vs 32 Bit. I've seen Daniel - I think I saw you answer this on Reddit a little while ago - but the new opcodes use logical shifts while Satoshi’s version used arithmetic shifts. The general question that I think a lot of people keep bringing up, maybe in a rhetorical way, is: why not restore it back to the way Satoshi had it exactly - what are the benefits of changing it now to operate a little bit differently?
Daniel: 0:29:18.75,0:31:12.15
Yeah there's two parts there - the big number one, and LSHIFT being a logical shift instead of arithmetic. So when we re-enabled these opcodes we looked at them carefully and adjusted them slightly, as we did in the past with OP_SPLIT. The new LSHIFT and RSHIFT are bitwise operators. They can be used to implement arithmetic shifts - I think I've posted a short script that did that - but we can't do it the other way around, right. You couldn't use an arithmetic shift operator to implement a bitwise one. It's because of the ordering of the bytes in the arithmetic values - the values that represent numbers. They're little-endian, which means the bytes are swapped around compared to what many other systems use - what I'd consider normal, or big-endian. And if you start shifting that properly as a number then the shifting sequence in the bytes is a bit strange, so it couldn't go the other way around - you couldn't implement bitwise shift with arithmetic - so we chose to make them bitwise operators. That's what we proposed.
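The distinction Daniel draws can be demonstrated in a few lines. This is a sketch, not the Bitcoin SV implementation: a bitwise LSHIFT over the raw byte string versus an arithmetic shift on the little-endian number those bytes encode. The arithmetic version is built here from the bitwise primitive by byte-swapping first - the direction that works:

```python
# Bitwise shift over the byte string vs arithmetic shift on the
# little-endian number it encodes. Both functions are illustrative.

def bitwise_lshift(data, n):
    """Shift the whole byte string left by n bits, keeping its length."""
    width = len(data) * 8
    value = int.from_bytes(data, "big")
    return ((value << n) & ((1 << width) - 1)).to_bytes(len(data), "big")

def arithmetic_lshift_le(data, n):
    """Double the little-endian number n times: byte-swap, shift, swap back."""
    return bitwise_lshift(data[::-1], n)[::-1]

# The two disagree whenever a carry crosses a byte boundary:
# bitwise_lshift(b"\x80\x00", 1)       -> b"\x00\x00" (top bit falls off)
# arithmetic_lshift_le(b"\x80\x00", 1) -> b"\x00\x01" (128 * 2 = 256)
```

The byte-swap trick only composes in one direction, which mirrors the interview's point: bitwise primitives can build the arithmetic behaviour, not the reverse.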
Steve: 0:31:10.57,0:31:51.51
That was essentially a decision that was actually made in May, or rather a consequence of decisions that were made in May. So in May we reintroduced OP_AND, OP_OR, and OP_XOR, and the decision to replace three different string operators with OP_SPLIT was also made. So that was not a decision that we made unilaterally, it was a decision that was made collectively with all of the BCH developers - well, not all of them were actually in all of the meetings, but they were all invited.
Daniel: 0:31:48.24,0:32:23.13
Another example of that is that we originally proposed OP_2DIV and OP_2MUL, I think - OP_2MUL is a single operator that multiplies the value by two, right - but it was pointed out that that can very easily be achieved by just doing a multiply by two instead of having a separate operator for it, so we scrapped those, we took them back out, because we wanted to keep the number of operators to a minimum, yeah.
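The redundancy argument is easy to see with a toy stack machine (this is a Python sketch, not the real script engine): a dedicated OP_2MUL buys nothing once OP_MUL exists, because pushing 2 and multiplying does the same thing.

```python
# Toy stack-machine illustration: OP_2MUL semantics via the existing
# OP_MUL primitive. The interpreter is deliberately minimal.
def run(script, stack=None):
    stack = list(stack or [])
    for op in script:
        if op == "OP_MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            stack.append(op)  # anything else is data to push
    return stack

double = [2, "OP_MUL"]  # replicates a hypothetical OP_2MUL
```

The same minimality argument recurs later in the interview with OP_SUBSTR, which was likewise dropped in favour of a composition of primitives.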
Steve: 0:32:17.59,0:33:47.20
There was an appetite around for keeping the operators minimal. I mean the decision about the idea to replace OP_SUBSTR, OP_LEFT, and OP_RIGHT with the OP_SPLIT operator actually came from Gavin Andresen. He made a brief appearance in the Telegram workgroups while we were working out what to do with the May opcodes, and obviously Gavin's word kind of carries a lot of weight and we listen to him. But because we had chosen to implement the May opcodes (the bitwise opcodes) and treat the data as big-endian data streams (well, sorry, big-endian is not really applicable - just plain data strings), it would have been completely inconsistent to implement LSHIFT and RSHIFT as integer operators, because then you would have had a set of bitwise operators that operated on two different kinds of data, which would have just been nonsensical and very difficult for anyone to work with, so yeah. I mean it's a bit like P2SH - it wasn't a part of the original Satoshi protocol, but once some things are done they're done, and you know if you want to make forward progress you've got to work within that framework that exists.
Daniel: 0:33:45.85,0:34:48.97
When we get to the big number ones then it gets really complicated, you know - big number implementations - because then you can't change the behavior of the existing opcodes, and I don't mean OP_MUL, I mean the other ones that have been there for a while. You can't suddenly make them big number ones without seriously looking at what scripts there might be out there and the impact of that change on those existing scripts, right. The other point is you don't know what scripts are out there because of P2SH - there could be scripts that you don't know the content of, and you don't know what effect changing the behavior of these operators would have. The big number thing is tricky, so another option might be... yeah, I don't know what the options are - it needs some serious thought.
Steve: 0:34:43.27,0:35:24.23
That’s something we've reached out to the other implementation teams about - we’d actually really like their input on the best ways to go about restoring big number operations. It has to be done extremely carefully and I don't know if we'll get there by May next year, or when, but we’re certainly willing to put a lot of resources into it and we're more than happy to work with BU or XT or whoever wants to work with us on getting that done and getting it done safely.
Connor: 0:35:19.30,0:35:57.49
Kind of along this similar vein, you know, Bitcoin Core introduced this concept of standard scripts, right - standard and non-standard scripts. I had a pretty interesting conversation with Clemens Ley about use cases for “non-standard scripts” as they're called. I know at least one developer on Bitcoin ABC is very hesitant, or kind of pushed back on him about doing that, so what are your thoughts about non-standard scripts and the entirety of an IsStandard check?
Steve: 0:35:58.31,0:37:35.73
I’d actually like to repurpose the concept. I think I mentioned before multi-threaded script validation and having some dedicated well-known script templates - when you say the words well-known script template, there’s already a check in Bitcoin that kind of tells you if it's well-known or not, and that's IsStandard. I'm generally in favor of getting rid of the notion of standard transactions, but it's actually a decision for miners, and it's really more of a behavioral change than it is a technical change. There's a whole bunch of configuration options that miners can set that affect what they consider to be standard and not standard, but the reality is not too many miners are using those configuration options. So I mean standard transactions as a concept is meaningful to an arbitrary degree I suppose, but yeah I would like to make it easier for people to get non-standard scripts into Bitcoin so that they can experiment, and from discussions I’ve had with CoinGeek they’re quite keen on making their miners accept, you know, at least initially a wider variety of transactions eventually.
Daniel: 0:37:32.85,0:38:07.95
So I think IsStandard will remain important within the implementation itself for efficiency purposes, right - you want to streamline the base use case of cash payments and prioritize them. That's where it will remain important, but on the interfaces from the node to the rest of the network, yeah, I could easily see it being removed.
Cory: 0:38:06.24,0:38:35.46
Connor mentioned that there are some people that disagree with Bitcoin SV and what they're doing - a lot of questions around, you know, why November? Why implement these changes in November - they think that maybe a six-month delay might mean there's no split. Well, first off, what do you think about the idea of a potential split, and I guess what is the urgency for November?
Steve: 0:38:33.30,0:40:42.42
Well in November there's going to be a divergence of consensus rules regardless of whether we implement these new opcodes or not. Bitcoin ABC released their spec for the November hard fork change I think on August 16th or 17th, something like that, and their client as well, and it included CTOR and it included DSV. Now for the miners that commissioned the SV project, CTOR and DSV are controversial changes, and once they're in, they're in. They can't be reversed - I mean CTOR maybe you could reverse at a later date, but DSV, once someone's put a P2SH transaction - or even a non-P2SH transaction - into the blockchain using that opcode, it's irreversible. So it's interesting that some people refer to the Bitcoin SV project as causing a split - we're not proposing to do anything that anyone disagrees with. There might be some contention about changing the opcode limit, but what we're doing - I mean Bitcoin ABC already published their spec for May and it is our spec for the new opcodes. So in terms of urgency - should we wait? Well the fact is that we can't - come November, you know, it's a bit like Segwit - once Segwit was in, yes, you arguably could get it out by spending everyone's anyone-can-spend transactions, but in reality it's never going to be that easy and it's going to cause a lot of economic disruption, so yeah, that's it. We're putting our changes in because it's not gonna make a difference either way in terms of whether there's going to be a divergence of consensus rules - there's going to be a divergence whatever our changes are. Our changes are not controversial at all.
Daniel: 0:40:39.79,0:41:03.08
If we didn't include these changes in the November upgrade we'd be pushing ahead with a no-change release, right, but the November upgrade is there, so we should use it while we can, adding these non-controversial changes to it.
Connor: 0:41:01.55,0:41:35.61
Can you talk about DATASIGVERIFY? What are your concerns with it? The general concept that's been kind of floated around by Ryan Charles is the idea that it's a subsidy, right - that it takes a whole megabyte and kind of crunches that down, and the computation time stays the same but maybe the cost is less - do you kind of share his view on that, or what are your concerns with it?
Daniel: 0:41:34.01,0:43:38.41
Can I say one or two things about this - there are different ways to look at it, right. I'm an engineer - my specialization is software, so on the economics of it I hear different opinions. I trust some more than others, but I am NOT an economist. With my limited expertise I kind of agree with the ones saying it's a subsidy - it looks very much like it to me - but that's not my area. What I can talk about is the software. Adding DSV adds really quite a lot of complexity to the code, right - it's a big change to add that. And what are we going to do - every time someone comes up with an idea, we're going to add a new opcode? How many opcodes are we going to add? I saw reports that Jihan was talking about hundreds of opcodes or something like that, and it's like, how big is this client going to become - how big is this node - if it has to handle every kind of weird opcode that's out there? The software is just going to get unmanageable. With DSV, that was my main consideration from the beginning: if you can implement it in script, you should do it, because that way it keeps the node software simple, it keeps it stable, and it's easier to test that it works properly and correctly. It's almost like adding (?) code to a microprocessor - why would you do that if you can implement it already in the script that is there?
Steve: 0:43:36.16,0:46:09.71
It's actually an interesting inconsistency, because when we were talking about adding the opcodes in May, the philosophy that seemed to drive the decisions we were able to form a consensus around was to simplify and keep the opcodes as minimal as possible (i.e. where you could replicate a function by using a couple of primitive opcodes in combination, that was preferable to adding a new opcode that replaced them). OP_SUBSTR is an interesting example - you can achieve it with a combination of the SPLIT, SWAP and DROP opcodes. So at the really primitive script level we've got this philosophy of let's keep it minimal, and at this other (?) level the philosophy is let's just add a new opcode for every primitive function, and Daniel's right - it's a question of opening the floodgates. Where does it end? If we're just going to go down this road, it almost opens up the argument: why have a scripting language at all? Why not just hard-code all of these functions in, one at a time? You know, pay-to-public-key-hash is a well-known construct (?) - why bother executing a script at all? But once we've done that, we take away all of the flexibility for people to innovate. So it's a philosophical difference, I think, but I think it's one where the position of keeping it simple does make sense. All of the primitives are there to do what people need to do. The things that people feel they can't do are because of the limits that exist. If we had no opcode limit at all, if you could make a gigabyte transaction, so a gigabyte script, then you could do any kind of crypto that you wanted, even with 32-bit integer operations. Once you get rid of the 32-bit limit, of course, a lot of those scripts come out a lot smaller - so a Rabin signature script shrinks from 100MB to a couple hundred bytes.
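Steve's OP_SUBSTR example can be made concrete. The sketch below is illustrative Python, not real Script, and the helper names are mine: it models the three primitives as operations on a list-based stack and then composes them into a substring operation, which is exactly the replication-by-primitives he describes.

```python
def op_split(stack):
    """OP_SPLIT: pop position n and bytes s, push s[:n] and s[n:]."""
    n = stack.pop()
    s = stack.pop()
    stack.append(s[:n])
    stack.append(s[n:])

def op_swap(stack):
    """OP_SWAP: exchange the top two stack items."""
    stack[-1], stack[-2] = stack[-2], stack[-1]

def op_drop(stack):
    """OP_DROP: discard the top stack item."""
    stack.pop()

def substr_via_primitives(s: bytes, begin: int, size: int) -> bytes:
    """Emulate OP_SUBSTR(s, begin, size) using only SPLIT, SWAP and DROP."""
    stack = [s, begin]
    op_split(stack)       # -> s[:begin], s[begin:]
    op_swap(stack)
    op_drop(stack)        # discard the head, keeping the tail
    stack.append(size)
    op_split(stack)       # -> tail[:size], tail[size:]
    op_drop(stack)        # discard the remainder
    return stack.pop()
```

The interpreter never needs a dedicated SUBSTR opcode; the compound script achieves the same result, which is the minimalist philosophy in miniature.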
Daniel: 0:46:06.77,0:47:36.65
I lost a good six months of my life diving into script, right. Once you start getting into the language and what it can do, it is really pretty impressive how much you can achieve within script. Bitcoin was designed, was released originally, with script. I mean, it didn't have to be - instead of having transactions with script you could have accounts, and you could say, transfer so many BTC from this public key to this one - but that's not the way it was done. It was done using script, and script provides so many capabilities if you start exploring it properly. If you start really digging into what it can do, it's really amazing what you can do with script. I'm really looking forward to seeing some very interesting applications from that. I mean, Awemany's zero-conf script was really interesting, right. It relies on DSV, which is a problem (and there are some other things I don't like about it), but him diving in and using script to solve this problem was really cool - it was really good to see that.
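One way to see why "any kind of crypto, even with 32-bit integer operations" is plausible: big-number arithmetic decomposes into schoolbook operations on small limbs, each of which fits a 32-bit-limited interpreter. This is a hedged Python sketch of the decomposition (the function names and limb layout are mine, not from any Script implementation):

```python
BASE = 2 ** 32  # each limb fits a 32-bit integer operation

def to_limbs(x: int) -> list:
    """Represent a big integer as little-endian 32-bit limbs."""
    limbs = []
    while x:
        limbs.append(x % BASE)
        x //= BASE
    return limbs or [0]

def from_limbs(limbs: list) -> int:
    """Recombine little-endian limbs into a big integer."""
    return sum(d * BASE ** i for i, d in enumerate(limbs))

def mul_limbs(a: list, b: list) -> list:
    """Schoolbook multiply on limbs - only small-integer mul/add/mod,
    the kind of step a 32-bit-limited script can spell out."""
    out = [0] * (len(a) + len(b))
    for i, ai in enumerate(a):
        carry = 0
        for j, bj in enumerate(b):
            t = out[i + j] + ai * bj + carry
            out[i + j] = t % BASE
            carry = t // BASE
        out[i + len(b)] += carry
    # final carry-propagation pass so every limb is again < BASE
    carry = 0
    for k in range(len(out)):
        t = out[k] + carry
        out[k] = t % BASE
        carry = t // BASE
    return out
```

Without a native big-integer opcode every multiply costs many script operations, which is why lifting the 32-bit limit shrinks such scripts so dramatically.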
Steve: 0:47:32.78,0:48:16.44
I actually asked a couple of people in our research team who have been working on the Rabin signature stuff this morning, and I wasn't sure where they were up to with it, but they're actually working on a proof of concept (which I believe is pretty close to done) of a Rabin signature script. It will use smaller signatures so that it can fit within the current limits, but it will be effectively the same algorithm (as DSV), so I can't give you an exact date on when that will happen, but it looks like we'll have a Rabin signature in the blockchain soon (a mini-Rabin signature).
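For readers unfamiliar with Rabin signatures: verification is just a big-integer squaring and modular reduction, which is why it can be expressed in raw Script at all. The sketch below is a simplified Python model of the idea - the hash expansion and padding scheme here are illustrative assumptions of mine, not the construction the research team is building.

```python
import hashlib

def hash_to_int(data: bytes, n: int) -> int:
    """Expand sha256 output to the modulus size and reduce mod n
    (a simplified expansion; real schemes fix a specific padding)."""
    size = (n.bit_length() + 7) // 8
    out = b""
    counter = 0
    while len(out) < size:
        out += hashlib.sha256(data + counter.to_bytes(4, "little")).digest()
        counter += 1
    return int.from_bytes(out[:size], "little") % n

def rabin_sign(msg: bytes, p: int, q: int):
    """Search for a padding counter making H(msg||pad) a quadratic
    residue mod n = p*q, then take its square root via CRT.
    Requires p = q = 3 (mod 4)."""
    n = p * q
    pad = 0
    while True:
        h = hash_to_int(msg + pad.to_bytes(8, "little"), n)
        if pow(h, (p - 1) // 2, p) in (0, 1) and pow(h, (q - 1) // 2, q) in (0, 1):
            sp = pow(h, (p + 1) // 4, p)          # sqrt of h mod p
            sq = pow(h, (q + 1) // 4, q)          # sqrt of h mod q
            s = (sp * q * pow(q, -1, p) + sq * p * pow(p, -1, q)) % n
            return s, pad
        pad += 1

def rabin_verify(msg: bytes, s: int, pad: int, n: int) -> bool:
    """The whole verification a script would perform:
    one big-int multiply and one modular reduction."""
    return (s * s) % n == hash_to_int(msg + pad.to_bytes(8, "little"), n)
```

Signing needs the private factors p and q, but the verifier only squares and reduces, so the on-chain script never needs elliptic-curve machinery at all.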
Cory: 0:48:13.61,0:48:57.63
Based on your responses I think I kind of already know the answer to this question, but there are a lot of questions about ending experimentation on Bitcoin. With the plan that Bitcoin SV is on, do you see a potential one final release - that there are going to be no new opcodes ever released (maybe five years down the road we just solidify the base protocol and move forward with that) - or are you more open to the idea that we can introduce new opcodes under appropriate testing?
Steve: 0:48:55.80,0:49:47.43
I think you've got to factor in what I said before about the philosophical differences. I think new functionality can be introduced just fine. Having said that - yes, there is a place for new opcodes, but it's probably a limited place, and in my opinion it's the cryptographic primitive functions. For example, CHECKSIG uses ECDSA with a specific elliptic curve, and HASH256 uses SHA256. At some point in the future those are going to no longer be as secure as we would like them to be, and we'll replace them with different hash functions and verification functions at that point, but I think that's a long way down the track.
Daniel: 0:49:42.47,0:50:30.3
I'd like to see more data too. I'd like to see evidence that these things are needed, and the way I could imagine that happening is that, with the full scripting language, some solution is implemented, we discover that it's really useful, and over a period measured in years, not days, we find a lot of transactions are using this feature. Then maybe we should look at introducing an opcode to optimize it. But optimizing before we even know if it's going to be useful - that's the wrong approach.
Steve: 0:50:28.19,0:51:45.29
I think that optimization is actually going to become an economic decision for the miners. From the miner's point of view, will it make more sense for them to optimize a particular process - does it reduce costs for them such that they can offer a better service to everyone else? So ultimately these are going to be miners' decisions, not developer decisions. Developers of course can offer their input - I wouldn't expect every miner to be an expert on script, but as we're already seeing, miners are actually starting to employ their own developers. I'm not just talking about us - there are other miners in China that I know have some really bright people on their staff who question and challenge all of the changes, study them, and produce their own reports. We've been lucky to be able to talk to some of those people and have some really fascinating technical discussions with them.
submitted by The_BCH_Boys to btc [link] [comments]
