Bitcoin mining money per day -> diy bitcoin miner

03-10 04:07 - 'parallel block validation sort of addresses the big blocks problem and the sighash problem (though I would prefer a sigop limit on transactions). But as for the big blocks problem, miners set their Acceptance Depth (AD) to wha...' by /u/lexensi1 removed from /r/Bitcoin within 74-79min

'''
parallel block validation sort of addresses the big blocks problem and the sighash problem (though I would prefer a sigop limit on transactions). But as for the big blocks problem, miners set their Acceptance Depth (AD) to whatever they want. So a block that is too large will have to have many blocks mined on top of it as well before it is accepted. The only way that can happen is if a majority of miners agree that the block is not too large.
As for malleability, there is the flextrans proposal but I don't know if it's under consideration by BU or not. Segwit doesn't solve malleability once and for all either, because the old-style transactions are still valid, so exchanges and wallets and other software still need to take into account that it's possible in the way they program their software.
Not sure what the median EB attack is.
Firstly, AD is now 12. Therefore EB=1MB miners can have 12 blocks orphaned, which would take an expected 4 hours. There would be no warning for users, and they could see funds wiped from their wallets after 11 confirmations.
After this, all the EB = 1MB miners would have their sticky gates triggered, while all the EB = 1.1MB miners would have their sticky gates closed. Now a malicious miner can split the hashrate 50/50 again. This time the smaller-blockers would ironically be on the larger-block chain and vice versa. It would be a massive, confusing mess.
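For what it's worth, the "expected 4 hours" figure above checks out under a simple model (a sketch, assuming a clean 50/50 hashrate split and 10-minute average blocks):

```python
# Sanity check of the "expected 4 hours" figure above, assuming a clean
# 50/50 hashrate split and 10-minute average blocks.
AD = 12                  # Acceptance Depth
BLOCK_INTERVAL_MIN = 10  # average block interval, minutes
attacker_share = 0.5     # fraction of total hashrate extending the big block

# The attacking half finds blocks at half the network rate, so building
# AD blocks on top of the oversized block takes AD / share block intervals.
expected_hours = AD * BLOCK_INTERVAL_MIN / attacker_share / 60
print(expected_hours)  # 4.0
```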
It does not actually address either, unfortunately. A mining consortium would be perfectly capable of gaming Bitcoin mining with larger-than-tolerable blocks (or more-than-tolerable cumulative sighash ops within those blocks), regardless of whether smaller, leaner alternative blocks were able to be validated in parallel to them.
This particular risk vector actually compounds on itself, too. Initially, a coalition of 50.1% of hashrate could (possibly even accidentally, especially due to network limiters like the Great Firewall of China) mine and extend-upon blocks that are larger than the other 49.9% are able to validate competitively. Even if the 49.9% of miners are able to validate smaller blocks in parallel, they will ultimately be doomed trying to compete with the 50.1%, and as their orphan rates climb and their profitability declines, they would eventually be forced to shut down (assuming they are motivated by profit). This means that the remaining 50.1% of miners now make up the entire mining network... and the process can then repeat, with fewer participants on each iteration.
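A toy profitability model (all numbers purely illustrative, not from any real chain) shows how even a slim hashrate majority squeezes out a minority suffering elevated orphan rates:

```python
# Toy model of the squeeze described above: a slim majority mining oversized
# blocks vs. a minority whose blocks get orphaned while it lags on validation.
# All numbers are purely illustrative.
def daily_revenue(hashrate_share, orphan_rate, block_reward=12.5, blocks_per_day=144):
    """Expected BTC/day earned, discounting blocks lost to orphaning."""
    return hashrate_share * (1 - orphan_rate) * blocks_per_day * block_reward

majority = daily_revenue(0.501, orphan_rate=0.0)   # never orphaned
minority = daily_revenue(0.499, orphan_rate=0.15)  # falling behind on big blocks
print(round(majority, 2), round(minority, 2))  # 901.8 763.47
```

Even a modest orphan-rate disadvantage compounds: as the minority's margins shrink below operating costs, miners drop out, the majority's share grows, and the process repeats with fewer participants each round.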
Peter Todd also explained this idea very well years ago.
Parallel block validation, while an important step forward, unfortunately does nothing to address the underlying issue here.
That's why most Bitcoin engineers consider flex-cap proposals to be untenable unless they include proper incentive-alignment controls (e.g. the sacrifice of mining rewards in exchange for larger allowed block sizes).
'''
Context Link
Go1dfish undelete link
unreddit undelete link
Author: lexensi1
submitted by removalbot to removalbot [link] [comments]

Don't worry about the mempool being backed up now -- that's me liquidating the attacker's addresses

The attackers used p2sh addresses that had easily guessable scriptSigs (they lacked a signature altogether to redeem).
I ended up liquidating about 1.2 BCH of their funds just now in 3k tx's. Each tx has 133 inputs at about 15 sigops each. There is a sigop limit per block of 20,000.
So you will see the mempool now has lots of tx's and is 18MB full as of the time of this writing. These tx's are all the special tx's that have a lot of sigops that I made to liquidate (take) the attacker's funds.
It should clear in 2 days.
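The 2-day estimate follows from the numbers above (a back-of-the-envelope sketch assuming the 20,000-sigop block limit is the binding constraint and ignoring other transactions' sigops):

```python
# Back-of-the-envelope check of the "clear in 2 days" estimate, using the
# figures from this post and assuming the 20,000-sigop block limit is the
# binding constraint (other transactions' sigops ignored).
TXS = 3_000
INPUTS_PER_TX = 133
SIGOPS_PER_INPUT = 15
SIGOPS_PER_BLOCK = 20_000  # consensus limit on sigops per block
BLOCK_INTERVAL_MIN = 10    # average block interval, minutes

total_sigops = TXS * INPUTS_PER_TX * SIGOPS_PER_INPUT  # 5,985,000
blocks_needed = total_sigops / SIGOPS_PER_BLOCK        # ~299 blocks
days = blocks_needed * BLOCK_INTERVAL_MIN / 60 / 24
print(f"{total_sigops:,} sigops -> ~{blocks_needed:.0f} blocks -> ~{days:.1f} days")
```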
Your normal non-sigops abusing tx's will not be affected and will confirm way before mine!
I am the only one waiting in line. :)
But damn.. it felt good to hit the attackers back.
Here is a sample tx taking $0.23 at a time.. for a total of ~$500 :):
https://blockchair.com/bitcoin-cash/transaction/0354a371f08130986eeedaa08ef69b73630a2182b5f8a8e595a7a9f6603604f2
submitted by NilacTheGrim to btc [link] [comments]

ABC Bug Explained

Disclaimers: I am a Bitcoin Verde developer, not an ABC developer. I know C++, but I am not completely familiar with ABC's codebase, its flow, and its nuances. Therefore, my explanation may not be completely correct. This explanation is an attempt to inform those that are at least semi-tech-savvy, so the upgrade hiccup does not become a scary boogeyman that people don't understand.
1- When a new transaction is received by a node, it is added to the mempool (which is a collection of valid transactions that should/could be included in the next block).
2- During acceptance into the mempool, the number of "sigOps" is counted, which is the number of times a signature validation check is performed (technically, it's not a 1-to-1 count, but its purpose is the same).
2a- The reason behind limiting sigops is because signature verification is usually the most expensive operation to perform while ensuring a transaction is valid. Without limiting the number of sigops a single block can contain, an easy DOS (denial of service) attack can be constructed by creating a block that takes a very long time to validate due to it containing transactions that require a disproportionately large number of sigops. Blocks that take too long to validate (i.e. ones with far too many sigops) can cause a lot of problems, including causing blocks to be slowly propagated--which disrupts user experience and can give the incumbent miner a non-negligible competitive advantage to mine the next block. Overall, slow-validating blocks are bad.
3- When accepted to the mempool, the transaction is recorded along with its number of sigops.
3a- This is where the ABC bug lived. During acceptance into the mempool, the transaction's scripts are parsed and each occurrence of a sigop is counted. When OP_CHECKDATASIG was introduced during the November upgrade, the procedure that counted the number of sigops needed to know if it should count OP_CHECKDATASIG as a sigop or as nothing (since before November, it was not a signature checking operation). The way the procedure knows what to count is controlled by a "flag" that is passed along with the script. If the flag is included, OP_CHECKDATASIG is counted as a sigop; without it, it is counted as nothing. Last November, every place that counted sigops included the flag EXCEPT the place where they were recorded in the mempool--instead, the flag was omitted and transactions using OP_CHECKDATASIG were logged to the mempool as having no sigops.
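As an illustration only (this is not ABC's actual code; the flag name just mirrors ABC's naming convention, and the logic is simplified), here is how a single omitted verification flag can make sigop counts diverge between call sites:

```python
# Illustrative sketch only (not ABC's actual code) of how one omitted
# verification flag makes sigop counts diverge between call sites. The flag
# name mirrors ABC's naming convention, but the logic here is simplified.
OP_CHECKSIG = 0xAC
OP_CHECKDATASIG = 0xBA
SCRIPT_ENABLE_CHECKDATASIG = 1 << 0

def count_sigops(script_ops, flags):
    count = 0
    for op in script_ops:
        if op == OP_CHECKSIG:
            count += 1
        elif op == OP_CHECKDATASIG and (flags & SCRIPT_ENABLE_CHECKDATASIG):
            count += 1
    return count

script = [OP_CHECKDATASIG] * 15

# Everywhere except mempool acceptance, the flag was passed:
print(count_sigops(script, SCRIPT_ENABLE_CHECKDATASIG))  # 15
# At mempool acceptance the flag was omitted, so the tx was recorded as 0:
print(count_sigops(script, 0))                           # 0
```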
4- When mining a block, the node creates a candidate block--this prototype is completely valid except for the nonce (and the extended nonce/coinbase). The act of mining is finding the correct nonce. When creating the prototype block, the node queries the mempool and finds transactions that can fit in the next block. One of the criteria used when determining applicability is the sigops count, since a block is only allowed to have a certain number of sigops.
4a- Recall the ABC bug described in step 3a. The number of sigops for transactions using OP_CHECKDATASIG is recorded as zero--but only during the mempool step, not during any of the other operations. So these OP_CHECKDATASIG transactions can all get grouped up into the same block. The prototype block builder thinks the block should have very few sigops, but the actual block has many, many, sigops.
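A toy block-template builder (names and numbers illustrative, not ABC's real selection logic) shows how transactions recorded with zero sigops sail right past the per-block sigop budget:

```python
# Toy block-template builder (names and numbers illustrative, not ABC's real
# selection logic): transactions recorded in the mempool with zero sigops
# sail right past the per-block sigop budget.
MAX_BLOCK_SIGOPS = 20_000

def build_template(mempool_entries):
    selected, total = [], 0
    for tx in mempool_entries:
        if total + tx["recorded_sigops"] <= MAX_BLOCK_SIGOPS:
            selected.append(tx)
            total += tx["recorded_sigops"]
    return selected

# 5,000 CHECKDATASIG-heavy txs, each really costing 100 sigops but recorded as 0:
entries = [{"recorded_sigops": 0, "actual_sigops": 100}] * 5_000
template = build_template(entries)
actual = sum(tx["actual_sigops"] for tx in template)
print(len(template), actual)  # 5000 500000 -- far over the 20,000 limit
```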
5- When the miner module is ready to begin mining, it requests the prototype block built in step 4. It re-validates the block to ensure it follows the consensus rules. However, since the new block has too many sigops in it, this validation fails, and the mining software falls back to working on an empty block (which is not ideal, but more profitable than leaving thousands of ASICs idle).
6- The empty block is mined and transmitted to the network. It is a valid block, but it contains no transactions other than the coinbase. Again, this is because the prototype block failed to validate due to having too many sigops.
This scenario could have happened at any time after OP_CHECKDATASIG was introduced. Creating many transactions that only use OP_CHECKDATASIG, and then spending them all at the same time, would create blocks containing what the mempool thought were very few sigops but what everywhere else counted as far too many. Instead of mining an invalid block, the mining software decides to mine an empty block. This is also why testnet did not surface this bug: the scenario encountered was fabricated by creating a large number of specifically tailored transactions using OP_CHECKDATASIG and then spending them all within a 10-minute timespan. This kind of behavior is not something developers (including myself) anticipated.
I hope my understanding is correct. Please, any of ABC devs correct me if I've explained the scenario wrong.
EDIT: markblundeberg added a more accurate explanation of step 5 here.
submitted by FerriestaPatronum to btc [link] [comments]

Got this in my inbox a couple of minutes back

A new user sent this to my inbox. It's a description of the events after the fork, with a signed message at the bottom. I've gone through it once, but it's very late here in my timezone; I'll have to go through it again tomorrow. I'm sure I'm not the only recipient, but just in case I'm pinging some people here.
https://honest.cash/kiarahpromise/sigop-counting-4528

*** EDIT 2 ***
Before you continue. From the Bitcoin whitepaper:
" The system is secure as long as honest nodes collectively control more CPU power than any cooperating group of attacker nodes."

*** EDIT ***
Ok, I have slept over this.
How big is the chance that these two events, the sigop tx spamming of the network and the intended theft of funds stuck in segwit by an unknown miner, were coordinated and not coincidental? I slept on this message and am wondering whether it was one two-phased plan, and whether even this message was planned (probably a bit differently, but adapted afterwards to the new situation, which is why the first half of it is such a mess to read) to spread fear after the two plans got foiled.

The plan consisted of various Acts
Act 1) Distract the network by spamming it with sigop transactions that exploit a bug, halting all BCH transaction volume. The mempool would fill up with unconfirmed transactions.
Act 2) When a patch is deployed, start a mining pool and mine hard to quickly create a legitimate block. They had prepared the theft transactions and would hide them in the (predicted) massive mempool of unconfirmed transactions that would have accumulated. They would mine a big block, everyone would be so happy that BCH worked again, and the devs would be busy looking for sigop transactions.
Act 3) Hope that the chain gets locked in via checkpoint so the theft cannot be reverted
Act 4) Leak to the media that plenty of BCH were stolen after the fork and the ABC client is so faulty it caused a halt of the network after the upgrade
Act 5) Make a shitload of money by shorting BCH (there was news about the appearance of a big short position right after the fork)

But the people who planned this attack have underestimated the awareness and speed of the BCH dev team. They were probably sure that Act 1 would take hours or even days, so the mempool would be extremely bloated (maybe they speculated that everyone would panic and want to get out of BCH) and Act 2 would consequently be successful because no one would spot their theft transactions quickly enough.

But they didn't reckon that someone was working together with various BCH pools as a precaution to prevent exactly this scenario (segwit theft), and had even prepared transactions to move all locked coins back to their owners.

Prohashing's orphaned block was likely unforeseen collateral damage, as Jonathan suggests below, because they were not involved in the plan of the two pools who prepared to return the segwit coins. I'm guessing that the pools did not expect a miner with an attacking theft block that early and had to decide quickly what to do when they spotted it.

So now that both plans have been foiled, Plan B) is coming into play again: guerrilla-style fear mongering about how BCH is not decentralized. Spread this info secretly in the community, with proof in the form of a signed message connected to the transactions. Of course, the attacker actually worked alone, attacked us for our own good, and will do so again, because the evil dictatorship devs have to be eradicated....

As an unwanted side effect of these events, the BTC.top and BTC.com "partnership" has been exposed. What we do with this new revelation is a question that we probably have to discuss.

They worked together with someone who wanted to return the segwit coins, and used their combined hashing dominance to prevent a theft. I applaud them for that. From a moral perspective this is defensible, and my suspicion that BCH has more backing than you can see by following hashrate charts has once again been shown to be true.

But the dilemma BCH has is revealed again as well: we need more of the SHA-256 hashrate cake, because we do not want any single entity in this space to have more than 50% of the hash power.

*** EDIT 2 ***
Added Satoshi's quote from the whitepaper.
submitted by grmpfpff to btc [link] [comments]

How to properly do spam protection of mempool

Recently we've seen some shouts from memo users which touched on mempool acceptance policies. This post is a higher-level introduction to how we can manage mempool issues. It isn't a direct answer to those shouts; it just brings a better understanding for all.
In any full node there is a mempool of validated transactions. Back in 2014 or so we had some attacks where people were sending millions of transactions to the network and the effect was full nodes going down because they ran out of memory.
We initially had some ideas on how to protect the node, but we quickly realized that we had to have a simple goal:
Always accept real money transactions while limiting the inflow of non-money transactions.
What this means in real-world terms is that if someone is spamming the network with silly transactions in order to make it slow and unusable, we can distinguish between those and the transactions of people standing in a store wanting to pay for something, who won't ever notice this "attack".
Again, all this is to protect the full node from being overwhelmed and having too many transactions in their memory-pool, causing it to run out of memory and crash.
The main way to do this was discussed a couple of years ago. The main approach in Bitcoin Core is fees, and nothing but fees. Let's improve on that and define a list of priorities:
  1. Coin-age of spent coin (days-destroyed). Older is better.
  2. Ratio of inputs to outputs in one transaction. More inputs is better.
  3. Sigops count. Less is better.
  4. Transaction size in bytes. Smaller is better.
  5. Fees paid to the miner.
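A minimal sketch of how such a priority list might score transactions (the weights and the formula here are invented for illustration; this is not Bitcoin Classic's actual implementation):

```python
# Minimal sketch of scoring transactions against the priority list above.
# The weights and the formula are invented for illustration; this is not
# Bitcoin Classic's actual implementation.
def mempool_score(tx):
    """Higher score = more likely a real money transaction."""
    days_destroyed = sum(i["age_days"] * i["value"] for i in tx["inputs"])
    in_out_ratio = len(tx["inputs"]) / max(len(tx["outputs"]), 1)
    return (
        days_destroyed          # 1. coin-age: older is better
        + 10.0 * in_out_ratio   # 2. more inputs per output is better
        - 1.0 * tx["sigops"]    # 3. fewer sigops is better
        - 0.01 * tx["size"]     # 4. smaller is better
        + 0.05 * tx["fee"]      # 5. fees paid, considered last
    )

# A typical store payment: old coin, few outputs, modest size.
payment = {"inputs": [{"age_days": 30, "value": 1.0}],
           "outputs": [{}, {}], "sigops": 2, "size": 250, "fee": 500}
# Spam-like tx: freshly created dust fanned out to many outputs.
spam = {"inputs": [{"age_days": 0, "value": 0.001}],
        "outputs": [{}] * 20, "sigops": 2, "size": 900, "fee": 300}
print(mempool_score(payment) > mempool_score(spam))  # True
```

The point is that a genuine payment scores well on several axes at once, so it gets in even with a modest fee, while spam has to pay disproportionately to compensate.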
For instance we already have, and have had for many years, a free-transaction-limiter. Which means that zero-fee transactions are allowed, but only a certain number per minute are accepted.
The memo case violates the first rule in a particularly spectacular fashion, without offsetting it with any of the other points being significantly better.
In the coming years we'll see all mining nodes implement the above priority list, where nodes protect themselves from being overwhelmed with cheap transactions by rejecting ones that show very low effort. At the same time people that spend money in the store will typically have a very good score on the priorities table and those will always be accepted in the mempool.
submitted by ThomasZander to btc [link] [comments]

"Infinity" patch for Bitcoin Core v0.12.1, v0.13.2, v0.14.0 — Support SegWit *and* larger blocks

If you…
…then this patch is for you.
This patch contains the minimal changes necessary to make Bitcoin Core accept blocks of any size (up to the overall message size limit of 32 MiB). It does this without removing or neutering the protections against blocks with excessive numbers of signature operations ("sigops"). The maximum number of sigops allowed scales linearly with the size (weight) of the block.
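The linear scaling can be sketched as follows (the constants are modeled on Core's post-segwit weight accounting of 80,000 sigop cost per 4,000,000 weight units; the function illustrates the patch's described behavior, not its actual code):

```python
# Illustrative sketch of linear sigop scaling. The constants mirror Core's
# post-segwit weight accounting (80,000 sigop cost per 4,000,000 weight
# units); the function models the patch's described behavior, not its code.
MAX_BLOCK_WEIGHT = 4_000_000
MAX_BLOCK_SIGOPS_COST = 80_000

def max_sigops_for_weight(block_weight):
    # Blocks at or below the current limit keep Core's unchanged budget;
    # larger blocks get a budget that grows linearly with weight.
    effective = max(block_weight, MAX_BLOCK_WEIGHT)
    return effective * MAX_BLOCK_SIGOPS_COST // MAX_BLOCK_WEIGHT

print(max_sigops_for_weight(4_000_000))   # 80000  (same as unpatched Core)
print(max_sigops_for_weight(32_000_000))  # 640000 (8x budget for 8x weight)
```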
Blocks at or smaller than Core's current limit are treated exactly the same as by unpatched Bitcoin Core, meaning this patch will have no effect until and unless a hard fork to larger blocks occurs.
If a hard fork does occur, nodes running this patch will follow whichever chain demonstrates the most work, regardless of the sizes of the blocks in that chain. This means that nodes running this patch may diverge from nodes running unpatched Bitcoin Core. Apply this patch only if you understand and agree to bear the risks involved.
Why might you want to use this patch?
Core users: If there's a hard fork, you're going to want a way to control your BTU balance. Your Core wallet won't see BTU-only outputs. You could run an instance of Bitcoin Unlimited alongside your Bitcoin Core node to access these BTU-only outputs, but you might be concerned about bugs in Bitcoin Unlimited, and you might not want to actively participate in this whole "emergent consensus" thing. By running a second Bitcoin Core instance with this "Infinity" patch, you will be able to access your BTU balances without needing to run Bitcoin Unlimited.
Unlimited users: If you want to increase on-chain capacity, then you might want to support both SegWit and larger base blocks. Maybe you don't really know what to set "EB" and "AD" to; maybe you'd rather not have to care. If you simply want to follow whichever chain has the most work, then you don't need the complexity (and risks) of Bitcoin Unlimited. By running your node with this "Infinity" patch, you will have the best of both worlds.
Where is the patch?
You can get the patch for your preferred version of Bitcoin Core here (see the links at the bottom).
submitted by whitslack to btc [link] [comments]

Letting FEES float without letting BLOCKSIZES float is NOT a "market". A market has 2 sides: One side provides a product/service (blockspace), the other side pays fees/money (BTC). An "efficient market" is when players compete and evolve on BOTH sides, approaching an ideal FEE/BLOCKSIZE EQUILIBRIUM.

The term "fee market" is a stupid tired soundbite / meme, probably invented by some loser viral marketer who works for Blockstream, which could only sound cool to a bunch of brainwashed idiots who think they sound impressive parroting it on a stagnant backwater of a censored corporate internet forum run by some low-level US govt flunky in some remote flyover state in the US Midwest.
Anyone with an ounce of economic intuition and understanding knows that a market always has TWO sides:
Or to put it in other terms which pretty much everyone has heard of: a market is about supply and demand (not just demand).
The terminology "fee market" is totally retarded: When you're looking at a market, you name it based on the product/service being provided, not based on the money being paid.
When you talk about the price of a loaf of bread or a gallon of milk, you don't talk about a goddamn "dollar market" - you talk about the "baked goods market" or the "dairy market".
And in a market, you don't freeze the supply of something. (Remember, the supply of BITCOINS is fixed. But the supply of BITCOIN TRANSACTIONS is not fixed - it can and should rise, to accommodate demand. This probably sounds too obvious to mention - but I have actually seen idiots posting on r\bitcoin who got these two things mixed up.)
When we say that we want a market to be "efficient", that's also a TWO-PART PROPOSITION:
Blockspace is a product/service, and like all products/services, it migrates to the cheapest place where it can be produced, which these days means mainly in China.
And like all products/services - we want the product/service to be the highest possible quality for the lowest possible price.
Translated into Bitcoin terms, that means that we want:
This whole post is based on the very important essay on Medium.com posted today by u/Noosterdam:
Core is Breaking Bitcoin's Store-of-Value Function: Artificially limiting the blocksize to create a “fee market” = a backdoor way to raise the 21M coin cap
https://np.reddit.com/btc/comments/5dutf0/core_is_breaking_bitcoins_storeofvalue_function/
Artificially Limiting the Blocksize to Create a “Fee Market” = Another Variety of Lifting the 21 Million Bitcoin Cap
https://medium.com/@Iskenderun/artificially-limiting-the-blocksize-to-create-a-fee-market-another-variety-of-lifting-the-21-f972b6e3afd8#.m7ms1yoob
That article is getting a lot of attention from some of the emerging top economic thinkers in Bitcoin, such as:
(It's time we started recognizing these people as being leading voices regarding the economic fundamentals of Bitcoin. They have emerged organically over the years, because they have been right about so many of Bitcoin's economic aspects - unlike many of the paid "experts" from Blockstream, many of whom have been totally clueless about Bitcoin's economic aspects.)
(And it's also time we started recognizing the dangers of a centralized cartel forming to create artificial blockspace scarcity and artificial fee inflation - which, as u/Noosterdam reminded us today, is just as bad as monetary inflation.)
Ever heard of "supply and demand"?
The phrase "fee market" only talks about the demand side, while deliberately ignoring the supply side. Sorry, but that's not how you do economics.
"Demand-side economics" is just as ridiculous as "supply-side" economics. Both are fraudulent.
So, let's look honestly at both sides of the market. What do we see?
Miners and users are both important
Maybe users haven't seemed as "important" as miners so far, in the grand scheme of things.
"Fee-paying users" are of course a more decentralized group than "blockspace-providing miners" - which might be part of the reason why devs haven't invited users to meetings in Hong Kong or Silicon Valley to whisper sweet nothings in their ears about giving users what they want.
Each group (miners and users) has its own goals:
If you only support half of this (the "fee market" half, and not the "blockspace market"), then:
Either way, good luck with that. If Core / Blockstream / certain miners only focus on creating a "fee market" without also creating a "blockspace market", then the only thing they're going to accomplish in the long run is turning Bitcoin into a shitcoin - because some other coin without artificial blockspace scarcity will quickly come along, efficiently use the bandwidth and disk space and memory and processing cycles and electricity available, and overtake Bitcoin. (This could be an alt-coin - or it could be an upgrade to Bitcoin, such as Bitcoin Unlimited.)
Bitcoin's value depends on two factors
The value proposition of Bitcoin is based on TWO aspects:
The price of a bitcoin is something we want to keep HIGH - to avoid DILUTING our wealth. This incentivizes us to keep the Bitcoin supply FIXED (21M).
The price of a Bitcoin transaction is something we want to keep LOW - to avoid ERODING our wealth (miners sucking up our BTC via high fees). This incentivizes us to keep Bitcoin fees LOW.
Don't let the miners unilaterally sneak artificial fee inflation into Bitcoin by artificially limiting the blocksize!
Seriously, it's time to throw the discredited, fraudulent phrase "fee market" into the dustbin of history - and use something that actually paints the correct economic picture, like "fee/blocksize equilibrium".
submitted by ydtm to btc [link] [comments]

Until there is a real, working, live release of lightning network, it is irresponsible to tout it as a solution

Furthermore, once it is out, it will have to pass the test of time: the same kind of test Bitcoin had to pass when it was released. That means at least a year or two to ensure it is viable and works without major hiccups, crashes, or other downfalls, such as being subject to extreme regulation (which I think is virtually inevitable, especially if it grew to any significant size). And if its peer-to-peer data had to be stored via a blockchain... lol, that does nothing to solve the data-storage bloat which Core members are so adamantly trying to limit (i.e. lukejr wanting 300kb blocks).
Segwit relies on the lightning network for scaling, but we don't even know if lightning is practical (I don't think it is), yet they are trying to put the cart before the horse. Imo it's like testing whether a new type of bitcoin would be successful: it would have to go through all the same growth cycles as bitcoin did to become viable.
Also, correct me if I'm wrong: if we adopt segwit, and it turns out the lightning network is ineffective and we need to scale blocks the old-fashioned way by increasing the blocksize, then rather than a simple 1MB -> 2MB increase adding 1MB of block size, doing this increase with segwit could cost us up to 4x as much per megabyte.
Is my understanding of this correct? If so, this is a major setback for scaling when Bitcoin needs to grow to 4MB and 8MB, as potentially 4x more space is needed, creating greater data-storage requirements and more spam vulnerability (ironically the very things that lukejr and others are trying to avoid).
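The "up to 4x" figure comes from segwit's weight metric (BIP 141): each non-witness byte costs 4 weight units while each witness byte costs 1, against a 4,000,000-weight cap, so base-block bytes are 4x as "expensive" as witness bytes. A minimal sketch:

```python
# Where the "up to 4x" figure comes from: under segwit (BIP 141), block
# weight = 4 * non-witness bytes + 1 * witness bytes, capped at 4,000,000
# weight units. Base-block bytes are therefore 4x as "expensive" as witness
# bytes under the cap.
MAX_BLOCK_WEIGHT = 4_000_000

def block_weight(base_bytes, witness_bytes):
    return 4 * base_bytes + witness_bytes

# A legacy-style 1 MB block (no witness data) already uses the whole budget:
print(block_weight(1_000_000, 0))        # 4000000
# 500 KB of base data plus 2 MB of witness data also exactly fills the cap:
print(block_weight(500_000, 2_000_000))  # 4000000
```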
Edit: I'm unsure how much segwit will increase the average transaction size, but it's clear to me that it will increase average transaction size, since it would add more data/instructions within the Bitcoin blocks.
submitted by TommyEconomics to btc [link] [comments]

Roger Ver explains: Why he supports Bitcoin Unlimited and bigger blocks

submitted by BitcoinXio to btc [link] [comments]

Someone knows how is going the Classic dev team?

The last post is dated 30 April... What happened to Xtreme Thinblocks? A merge from BU would be great...
submitted by kostialevin to btc [link] [comments]

Stuck transactions

EDIT: The Bitcoin network is currently under a sophisticated DoS attack using the "fake sigOp" method...

Some weird shit is going on right now: the mempool size is above 10 MB total, with the total size of transactions paying at least the default recommended fee rate (10 satoshi/byte) being 8.5 MB. (Numbers from here.) So there are definitely enough fee-paying transactions to fill several blocks.
However, most blocks are far from full; they do not even reach the arbitrary limits set by miners. For example, you can see block #385917, mined by KnCMiner, having a size of 912 KB, while the next block #385918 is only 487 KB. So we see that KnCMiner is OK with mining blocks as big as 912 KB but found only 487 KB of worthy transactions.
Thus we can conclude that some anti-spam filtering measures are on, but what are they?
I think we might all benefit if miners stated the policies they are using, as this gives wallet developers a chance to change the way they construct transactions, or at least warn users if a transaction won't be confirmed soon.
I just got a transaction confirmed after 4 hours of waiting. It wasn't a low-fee transaction: it paid about 20 satoshi per byte, which is twice the default fee rate. I think I know why it was delayed: it had an output of only 3400 satoshi, which is only slightly above the dust threshold; perhaps that made the transaction look spam-like.
It's frustrating that miners' policies are so opaque. Let's look at block #385927, which was mined by F2Pool. It is 340 KB in size, smaller than other blocks recently mined by F2Pool, so we know it didn't reach the block size F2Pool set. No block explorer I'm aware of shows the transaction-fee distribution for a specific block, so we have to do this manually. However, we can take advantage of the fact that Bitcoin Core sorts transactions by fee rate: if F2Pool uses a block constructor based on that ordering, we will see the pattern.
5 of the last 7 transactions in the block pay almost exactly 20 satoshi per byte, so this seems to be a cut-off rate. The last two pay less than that rate, however, they coalesce many outputs, so it might be a result of prioritization.
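The guesswork described here can be sketched as follows (transaction data invented for illustration): if a pool fills its template in descending fee-rate order, the lowest fee rates land at the end of the block, so the tail approximates the pool's cut-off.

```python
# Sketch of the guesswork described above, with invented transaction data:
# if a pool fills its template in descending fee-rate order, the lowest fee
# rates land at the end of the block, so the tail approximates the cut-off.
def feerate(tx):
    return tx["fee_satoshi"] / tx["size_bytes"]

block_txs = [
    {"fee_satoshi": 10_000, "size_bytes": 250},  # 40 sat/byte
    {"fee_satoshi": 7_500,  "size_bytes": 300},  # 25 sat/byte
    {"fee_satoshi": 5_000,  "size_bytes": 250},  # 20 sat/byte
    {"fee_satoshi": 4_000,  "size_bytes": 200},  # 20 sat/byte
]
rates = sorted((feerate(tx) for tx in block_txs), reverse=True)
cutoff = rates[-1]
print(f"estimated cut-off: {cutoff:.0f} sat/byte")  # estimated cut-off: 20 sat/byte
```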
The question I have is why do we have to learn about it through guesswork, aren't miners themselves interested in Bitcoin being a reliable payment system?
It's hard to figure out what the fuck they want from us even using fairly sophisticated statistical tools.
submitted by killerstorm to Bitcoin [link] [comments]

The Astounding Incompetence, Negligence, and Dishonesty of the Bitcoin Unlimited Developers

On August 26, 2016 someone noticed that their Classic node had been forked off of the "Big Blocks Testnet" that Bitcoin Classic and Bitcoin Unlimited were running. Neither implementation was testing their consensus code on any other testnets; this was effectively the only testnet being used to test either codebase. The issue was due to a block on the testnet that was mined on July 30, almost a full month prior to anyone noticing the fork at all, which was in violation of the BIP109 specification that Classic miners were purportedly adhering to at the time. Gregory Maxwell observed:
That was a month ago, but it's only being noticed now. I guess this is demonstrating that you are releasing Bitcoin Classic without much testing and that almost no one else is either? :-/
The transaction in question doesn't look at all unusual, other than being large. It was, incidentally, mined by pool.bitcoin.com, which was signaling support for BIP109 in the same block it mined that BIP 109 violating transaction.
Later that day, Maxwell asked Roger Ver to clarify whether he was actually running Bitcoin Classic on the bitcoin.com mining pool, who dodged the question and responded with a vacuous reply that attempted to inexplicably change the subject to "censorship" instead.
Andrew Stone (the lead developer of Bitcoin Unlimited) voiced confusion about BIP109 and how Bitcoin Unlimited violated the specification for it (while falsely signaling support for it). He later argued that Bitcoin Unlimited didn't need to bother adhering to specifications that it signaled support for, and that doing so would violate the philosophy of the implementation. Peter Rizun shared this view. Neither developer was able to answer Maxwell's direct question about the violation of BIP109 §4/5, which had resulted in the consensus divergence (fork).
Despite Maxwell having provided a direct link to the transaction violating BIP109 that caused the chain split, and explaining in detail what the results of this were, later Andrew Stone said:
I haven't even bothered to find out the exact cause. We have had BUIP016 passed to adhere to strict BIP109 compatibility (at least in what we generate) by merging Classic code, but BIP109 is DOA -- so no-one bothered to do it.
I think that the only value to be had from this episode is to realise that consensus rules should be kept to an absolute, money-function-protecting minimum. If this was on mainnet, I'll be the Classic users would be unhappy to be forked onto a minority branch because of some arbitrary limit that is yet another thing would have needed to be fought over as machine performance improves but the limit stays the same.
Incredibly, when a confused user expressed disbelief regarding the fork, Andrew Stone responded:
Really? There was no classic fork? As i said i didnt bother to investigate. Can you give me a link to more info? Its important to combat this fud.
Of course, the proof of the fork (and the BIP109-violating block/transaction) had already been provided to Stone by Maxwell. Andrew Stone was willing to believe that the entire fork was imaginary, in the face of verifiable proof of the incident. He admits that he didn't investigate the subject at all, even though that was the only testnet that Unlimited could have possibly been performing any meaningful tests on at the time, and even though this fork forced Classic to abandon BIP109 entirely, leaving it vulnerable to the types of attacks that Gavin Andresen described in his Guided Tour of the 2mb Fork:
“Accurate sigop/sighash accounting and limits” is important, because without it, increasing the block size limit might be dangerous... It is set to 1.3 gigabytes, which is big enough so none of the blocks currently in the block chain would hit it, but small enough to make it impossible to create poison blocks that take minutes to validate.
As a result of this fork (which Stone was clueless enough to doubt had even happened), Bitcoin Classic and Bitcoin Unlimited were both left vulnerable to such attacks. Fascinatingly, this fact did not seem to bother the developers of Bitcoin Unlimited at all.
On November 17, 2016 Andrew Stone decided to post an article titled A Short Tour of Bitcoin Core wherein he claimed:
Bitcoin Unlimited is building the highest quality, most stable, Bitcoin client available. We have a strong commitment to quality and testing as you will see in the rest of this document.
The irony of this claim should soon become very apparent.
In the rest of the article, Stone wrote with venomous and overtly hostile rhetoric:
As we mine the garbage in the Bitcoin Core code together... I want you to realise that these issues are systemic to Core
He went on to describe what he believed to be multiple bugs that had gone unnoticed by the Core developers, and concluded his article with the following paragraph:
I hope when reading these issues, you will realise that the Bitcoin Unlimited team might actually be the most careful committers and testers, with a very broad and dedicated test infrastructure. And I hope that you will see these Bitcoin Core commits— bugs that are not tricky and esoteric, but simple issues that well known to average software engineers —and commits of “Very Ugly Hack” code that do not reflect the care required for an important financial network. I hope that you will realise that, contrary to statements from Adam Back and others, the Core team does not have unique skills and abilities that qualify them to administer this network.
As soon as the article was published, it was immediately and thoroughly debunked. The "bugs" didn't exist in the current Core codebase; some were results of how Andrew had "mucked with wallet code enough to break" it, and "many of issues were actually caused by changes they made to code they didn't understand", or had been fixed years ago in Core, and thus only affected obsolete clients (ironically including Bitcoin Unlimited itself).
As Gregory Maxwell said:
Perhaps the biggest and most concerning danger here isn't that they don't know what they're doing-- but that they don't know what they don't know... to the point where this is their best attempt at criticism.
Amusingly enough, in the "Let's Lose Some Money" section of the article, Stone disparages an unnamed developer for leaving poor comments in a portion of the code, unwittingly making fun of Satoshi himself in the process.
To summarize: Stone set out to criticize the Core developer team, and in the process revealed that he did not understand the codebase he was working on, had in fact personally introduced the majority of the bugs that he was criticizing, and was completely unable to identify any bugs that existed in current versions of Core. Worst of all, even after receiving feedback on his article, he did not appear to comprehend (much less appreciate) any of these facts.
On January 27, 2017, Bitcoin Unlimited excitedly released v1.0 of their software, announcing:
The third official BU client release reflects our opinion that Bitcoin full-node software has reached a milestone of functionality, stability and scalability. Hence, completion of the alpha/beta phase throughout 2009-16 can be marked in our release version.
A mere 2 days later, on January 29, their code accidentally attempted to hard-fork the network. Despite there being a very clear and straightforward comment in Bitcoin Core explaining the space reservation for coinbase transactions in the code, Bitcoin Unlimited obliviously merged a bug into their client which resulted in an invalid block (23 bytes larger than 1MB) being mined by Roger Ver's Bitcoin.com mining pool on January 29, 2017, costing the pool a minimum of 13.2 bitcoins. A large portion of Bitcoin Unlimited nodes and miners (which naively accepted this block as valid) were temporarily banned from the network as a result, as well.
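The mistake is easy to illustrate. A miner must leave room in the block for the coinbase transaction, which is added last; filling the block to the full consensus limit instead of to a reserved budget yields a block a few bytes over 1MB. This is a minimal sketch of the idea, not the actual Bitcoin Unlimited or Core code, and the reserve constant is illustrative:

```python
MAX_BLOCK_SIZE = 1_000_000      # consensus limit in bytes
COINBASE_RESERVE = 1_000        # illustrative reserve; the real value differs

def assemble_block(mempool_tx_sizes, coinbase_size):
    """Greedily fill a block, reserving room for the coinbase up front."""
    budget = MAX_BLOCK_SIZE - COINBASE_RESERVE
    block, used = [], 0
    for tx_size in mempool_tx_sizes:
        if used + tx_size > budget:
            continue            # skip txs that would eat into the reserve
        block.append(tx_size)
        used += tx_size
    total = used + coinbase_size
    # Filling against MAX_BLOCK_SIZE instead of `budget` is exactly the kind
    # of off-by-a-few-bytes error that produces a 1,000,023-byte block.
    assert total <= MAX_BLOCK_SIZE, "invalid: block exceeds consensus limit"
    return block, total
```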
The code change in question revealed that the Bitcoin Unlimited developers were not only "commenting out and replacing code without understanding what it's for" as well as bypassing multiple safety-checks that should have prevented such issues from occurring, but that they were not performing any peer review or testing whatsoever of many of the code changes they were making. This particular bug was pushed directly to the master branch of Bitcoin Unlimited (by Andrew Stone), without any associated pull requests to handle the merge or any reviewers involved to double-check the update. This once again exposed the unprofessionalism and negligence of the development team and process of Bitcoin Unlimited, and in this case, irrefutably had a negative effect in the real world by costing Bitcoin.com thousands of dollars worth of coins.
In effect, this was the first public mainnet fork attempt by Bitcoin Unlimited. Unsurprisingly, the attempt failed, costing the would-be forkers real bitcoins as a result. It is possible that the costs of this bug are much larger than the lost rewards and fees from this block alone, as other Bitcoin Unlimited miners may have been expending hash power in the effort to mine slightly-oversized (invalid) blocks prior to this incident, inadvertently wasting resources in the doomed pursuit of invalid coins.
On March 14, 2017, a remote exploit vulnerability discovered in Bitcoin Unlimited crashed 75% of the BU nodes on the network in a matter of minutes.
In order to downplay the incident, Andrew Stone rapidly published an article which attempted to imply that the remote-exploit bug also affected Core nodes by claiming that:
approximately 5% of the “Satoshi” Bitcoin clients (Core, Unlimited, XT) temporarily dropped off of the network
In reddit comments, he lied even more explicitly, describing it as "a bug whose effects you can see as approximate 5% drop in Core node counts" as well as a "network-wide Bitcoin client failure". He went so far as to claim:
the Bitcoin Unlimited team found the issue, identified it as an attack and fixed the problem before the Core team chose to ignore it
The vulnerability in question was in thinblock.cpp, which has never been part of Bitcoin Core; in other words, this vulnerability only affected Bitcoin Classic and Bitcoin Unlimited nodes.
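The class of bug involved is worth spelling out: an assertion over peer-supplied data. In C++ a failed `assert` aborts the whole process, so any peer can kill the node with a single malformed message. This sketch (in Python, with a hypothetical message format, not the actual thinblock.cpp code) contrasts the anti-pattern with the safer handling:

```python
class MisbehavingPeer(Exception):
    pass

def handle_thin_block_bad(msg):
    # Anti-pattern: asserting on a field the remote peer controls.
    # In the C++ equivalent, a false assert aborts the entire node.
    assert msg["tx_count"] == len(msg["tx_hashes"])
    return len(msg["tx_hashes"])

def handle_thin_block_safe(msg):
    # Validate untrusted input, then drop/ban the peer instead of dying.
    if msg["tx_count"] != len(msg["tx_hashes"]):
        raise MisbehavingPeer("inconsistent thin block: drop connection")
    return len(msg["tx_hashes"])
```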
In the same Medium article, Andrew Stone appears to have doctored images to further deceive readers. In the reddit thread discussing this deception, Andrew Stone denied that he had maliciously edited the images in question, but when questioned in-depth on the subject, he resorted to citing his own doctored images as sources and refused to respond to further requests for clarification or replication steps.
Beyond that, the same incident report (and images) conspicuously omitted the fact that the alleged "5% drop" on the screenshotted (and photoshopped) node-graph was actually due to the node crawler having been rebooted, rather than any problems with Core nodes. This fact was plainly displayed on the 21 website that the graph originated from, but no mention of it was made in Stone's article or report, even after he was made aware of it and asked to revise or retract his deceptive statements.
There were actually 3 (fundamentally identical) Xthin-assert exploits that Unlimited developers unwittingly publicized during this episode, causing problems for Bitcoin Classic, which was also vulnerable.
On top of all of the above, the vulnerable code in question had gone unnoticed for 10 months, and despite the Unlimited developers (including Andrew Stone) claiming to have (eventually) discovered the bug themselves, it later came out that this was another lie; an external security researcher had actually discovered it and disclosed it privately to them. This researcher provided the following quotes regarding Bitcoin Unlimited:
I am quite beside myself at how a project that aims to power a $20 billion network can make beginner’s mistakes like this.
I am rather dismayed at the poor level of code quality in Bitcoin Unlimited and I suspect there [is] a raft of other issues
The problem is, the bugs are so glaringly obvious that when fixing it, it will be easy to notice for anyone watching their development process,
it doesn’t help if the software project is not discreet about fixing critical issues like this.
In this case, the vulnerabilities are so glaringly obvious, it is clear no one has audited their code because these stick out like a sore thumb
In what appeared to be a desperate attempt to distract from the fundamental ineptitude that this vulnerability exposed, Bitcoin Unlimited supporters (including Andrew Stone himself) attempted to change the focus to a tweet that Peter Todd made about the vulnerability, blaming him for exposing it and prompting attackers to exploit it... but other Unlimited developers revealed that the attacks had actually begun well before Todd had tweeted about the vulnerability. This was pointed out many times, even by Todd himself, but Stone ignored these facts a week later, and shamelessly lied about the timeline in a propagandistic effort at distraction and misdirection.
submitted by sound8bits to Bitcoin [link] [comments]

Craig Wright tweet storm on Ryan X. Charles' twitter page. These were all posted while Craig was at The Future of Bitcoin conference. Very interesting read until we get the full livestream video of the conference.

https://twitter.com/ryanxcharles?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor
Simply look for every tweet that begins with "CSW:"
Interesting tweets:
CSW: "Everyone wants SegWit" is 1984 doublespeak.
CSW: RBF is the biggest piece of shit ever created.
CSW: We need to attract companies. We want banks to use bitcoin. Not like luke-jr.
CSW: The quadratic scaling issue was added to bitcoin. It is easy to fix. Our team fixed it in 3 hrs.
CSW: We're going to distribute the petabytes of data. Jimmy will figure it out.
CSW: Jimmy will get upset if I tell you more. But we're doing a lot more.
CSW: I'm here for the long-term. Like it or not, you're not getting rid of me. We're here for 20 years.
CSW: nChain has an unlimited block size strategy.
CSW: We're going to scale radically. If you don't want to come along, stiff shit.
CSW: Our pool will reject segwit txs.
CSW: As a miner, I choose. I decide if I don't want segwit. It's about time miners figured out their role. Miners choose.
 
CSW: There is no king. There is no glorious leader. I am here to kill off Satoshi.
CSW: Everybody in the world should use bitcoin to buy coffee and whatever else they want.
CSW: We can achieve 500,000 sigops per second with a full node on a $20,000 machine
CSW: I don't care about raspberry pis.
 
CSW: Lightning is a mesh. Lots of little hops, central nodes, etc. Look at the math.
CSW: Any network with d=3+ can always be Sybiled. Lightning can have 80 hops.
CSW: LN is always vulnerable to attack. Read the paper. Read the results.
 
Talk now available (2:23:10 - including a short intro by Jon Matonis):
https://www.youtube.com/watch?v=YAcOnvOVquo&feature=youtu.be&t=8603
submitted by BitcoinIsTehFuture to btc [link] [comments]

Bitcoin dev IRC meeting in layman's terms (2015-11-05)

Once again my attempt to summarize and explain the weekly bitcoin developer meeting in layman's terms. Link to last week's summary. Note that I crosspost this to Voat, bitcoin.com and the bitcoin-discuss mailing list every week. I can't control what's being talked about in the meeting; if certain things come up I might not be able to post here because of "guidelines".
Disclaimer
Please bear in mind I'm not a developer and I'd have problems coding "hello world!", so some things might be incorrect or plain wrong. Like any other write-up it likely contains personal biases, although I try to stay as neutral as I can. There are no decisions being made in these meetings, so if I say "everyone agrees" this means everyone present in the meeting, that's not consensus, but since a fair amount of devs are present it's a good representation. The dev IRC and mailinglist are for bitcoin development purposes. If you have not contributed actual code to a bitcoin-implementation, this is probably not the place you want to reach out to. There are many places to discuss things that the developers read, including this sub-reddit.
link to this week logs Meeting minutes by meetbot
Main topics discussed were:
- Sigcache performance
- Performance goals for 0.12
- Transaction priority
- Sigops flooding attack
- Chain limits
Short topics/notes
Note: cfields, mcelrath and BlueMatt (and maybe more) missed the meeting because of daylight saving time.
Closing date for proposals for the scaling bitcoin workshop is the 9th.
Check to see if there are any other commits for the 0.11.2 RC. As soon as 6948 and 6825 are merged it seems good to go. We need to move fairly quickly as there are already miners voting for CLTV (F2Pool). Also testnet is already CLTV-locked and is constantly forking. 0.11.2 RC1 has been released as of today: https://bitcoin.org/bin/bitcoin-core-0.11.2/test/
Most of the mempool-limiting analysis assumed child-pays-for-parent, however that isn't ready for 0.12 yet, so we should think about possible abuses in context of the existing mining algorithm.
Because of time constraints, opt-in replace-by-fee has been deferred to next week's meeting, but most people seem to want it in 0.12. sdaftuar makes a note that we need to make clear to users what they need to do if they don't want to accept opt-in transactions.
Sigcache performance
The signature cache, which is in place to increase performance (by not having to check the same signature multiple times) and to mitigate some attacks, currently has a default limit of 50,000 signatures. Sipa has a pull-request which proposes to:
- Change the limit from number of entries to megabytes
- Change the default to 40MB, which corresponds to 500,000 signatures
- Store salted hashes instead of full entries
- Remove entries that have been validated in a block
Sipa did benchmarks for various signature cache sizes on hit rate in blocks (how many of the cached signatures are in the block). The maximum sigcache size was 68MB, resulting in a 3% miss rate. Some blocks though have extremely high miss rates (60%) while others have none, likely caused by miners running different policies. Gmaxwell proposed to always run script verification for mempool transactions, even if these transactions get rejected from the mempool by the client's policy. The result of that is that even a 300MB sigcache size only gets down to 15% misses. So there's too much crap being relayed to keep any reasonably sized cache. Gmaxwell points out downsides to not checking any rejected transactions, namely: there are some DOS attacks possible, and you increase your miss rate if you set a policy which is more restrictive than the typical network's, which might result in a race to the bottom.
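The proposed cache shape can be sketched roughly as follows. This is a toy model of the ideas listed above (byte-budgeted capacity, salted keys, eviction of block-validated entries), not the actual Core implementation; the per-entry byte cost and the eviction strategy here are simplifying assumptions:

```python
import hashlib
import os

class SigCache:
    """Toy signature cache: byte-limited, salted keys, block eviction."""
    ENTRY_BYTES = 32  # assumed per-entry cost (one salted hash); real overhead differs

    def __init__(self, max_bytes):
        self.salt = os.urandom(16)  # per-node salt defeats crafted collisions
        self.max_entries = max_bytes // self.ENTRY_BYTES
        self.entries = set()

    def _key(self, pubkey, sig, sighash):
        return hashlib.sha256(self.salt + pubkey + sig + sighash).digest()

    def add(self, pubkey, sig, sighash):
        if len(self.entries) >= self.max_entries:
            self.entries.pop()      # crude random eviction; the real code differs
        self.entries.add(self._key(pubkey, sig, sighash))

    def contains(self, pubkey, sig, sighash):
        return self._key(pubkey, sig, sighash) in self.entries

    def erase(self, pubkey, sig, sighash):
        # A signature seen in a connected block won't be checked again,
        # so its entry can be dropped.
        self.entries.discard(self._key(pubkey, sig, sighash))
```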
Sipa continues his work and seeks out other strategies
Performance goals for 0.12
Bitcoin-core 0.12 is scheduled for release December 1st.
Everybody likes to include secp256k1 ASAP, as it has a very large performance increase. Some people would like to include the sigcache pull-request, BIP30, modifyNewCoins and a createNewBlock rewrite if it's ready. Wumpus advises against merging last-minute performance improvements for 0.12.
Mentioned pull-requests should be reviewed, prioritizing CreateNewBlock
transaction priority
Each transaction is assigned a priority, determined by the age, size, and number of its inputs, which can make some transactions eligible for free relay.
Sipa thinks we should get rid of the current priority completely and replace it with a function that modifies fee or size of a transaction. There's a pull-request available that optimizes the current transaction priority, thereby avoiding the political debate that goes with changing the definition of transaction priority. Luke-jr thinks the old policy should remain possible.
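For reference, the legacy priority formula under discussion is, to my understanding, value-times-age summed over inputs, divided by transaction size, with a fixed "free" threshold:

```python
COIN = 100_000_000  # satoshis per BTC

# Historical free-relay threshold: the priority of a 1 BTC output with one
# day of confirmations (144 blocks) spent by a 250-byte transaction.
PRIORITY_THRESHOLD = COIN * 144 / 250  # 57,600,000

def priority(inputs, tx_size):
    """inputs: list of (value_in_satoshis, confirmations) pairs."""
    return sum(value * confs for value, confs in inputs) / tx_size
```

A 1 BTC input with 144 confirmations in a 250-byte transaction sits exactly on the threshold, which is why old, high-value coins could move for free while small, young outputs could not.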
Check to see if PR #6357 is safe and efficient enough.
sigops flooding attack
The number of ECDSA signature-checking operations, or sigops, is currently limited to 20,000 per block. This is in order to prevent miners from creating blocks that take ages to verify, as those operations are time-consuming. You could, however, construct transactions that have a very high sigop count; since most miners don't take the sigop count into account, they end up with very small blocks because the sigop limit is reached first. This attack is described here.
Suggestion to take the number of sigops relative to the maximum blocksize into account with the total size. Meaning a 10k sigops transaction would currently be viewed as 500kB in size (for that single transaction, not towards the block). That suggestion would be easy to change in the mining code, but more invasive to try and plug that into everything that looks at feerate. This would also open up attacks on the mempool if these transactions are not evicted by mempool limiting. Luke-jr has a bytes-per-sigop limit, that filters out these attack transactions.
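The suggested accounting can be sketched as follows: a transaction is charged the larger of its actual size and its sigop count scaled by the block's bytes-per-sigop ratio (1,000,000 / 20,000 = 50 bytes per sigop), reproducing the 10k-sigops-as-500kB example above. A sketch of the idea, not the eventual implementation:

```python
MAX_BLOCK_SIZE = 1_000_000
MAX_BLOCK_SIGOPS = 20_000

def effective_size(tx_size, sigops):
    """Charge a tx the larger of its real size and its sigop-scaled size."""
    sigop_size = sigops * MAX_BLOCK_SIZE // MAX_BLOCK_SIGOPS  # 50 bytes/sigop
    return max(tx_size, sigop_size)

# A small transaction carrying 10,000 sigops is charged as 500 kB, so it can
# no longer cheaply exhaust the block's sigop budget at a tiny feerate.
```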
More analysis should be done, people seem fine with the general direction of fixing it.
chain limits
Chain in this context means connected transactions. When you send a transaction that depends on another transaction that has yet to be confirmed, we talk about a chain of transactions. Miners ideally take the whole chain into account instead of just every single transaction (although that's not widely implemented, afaik). So while a single transaction might not have a sufficient fee, a depending transaction could have a high enough fee to make it worthwhile to mine both. This is commonly known as child-pays-for-parent. Since you can make these chains very big, it's possible to clog up the mempool this way. With the recent malleability attacks, anyone who made transactions going multiple layers deep would've already encountered huge problems doing this (beautifully explained in Let's Talk Bitcoin #258 from 13:50 onwards). Proposal and github link.
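The child-pays-for-parent evaluation amounts to looking at the feerate of the whole package rather than each transaction alone. A minimal sketch of that calculation:

```python
def package_feerate(txs):
    """txs: list of (fee_in_satoshis, size_in_bytes) for a parent and its
    descendants. A miner evaluating the chain as one package may include a
    low-fee parent if a child pays enough for both."""
    total_fee = sum(fee for fee, _ in txs)
    total_size = sum(size for _, size in txs)
    return total_fee / total_size

parent = (0, 250)       # zero-fee parent: unattractive on its own
child = (50_000, 250)   # child pays enough for both
rate = package_feerate([parent, child])  # 100 sat/byte across the package
```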
sdaftuar's analysis shows that 40% of blocks contain a chain that exceeds the proposed limits. Even a small bump doesn't make the problem go away. Possible sources of these chains: a service paying the fees on other transactions (child-pays-for-parent), an iOS wallet that gladly spends unconfirmed change. A business confirms they use child-pays-for-parent when they receive bitcoins from an unspent chain. It is possible that these long chains are delivered to miners directly, in which case they wouldn't be affected by the proposed relay limits (and by malleability). Since this is a problem that needs to be addressed, people seem fine with merging it anyway, communicating in advance to let businesses think about how this affects them.
Merge "Policy: Lower default limits for tx chains" Morcos will mail the developer mailing list after it's merged.
Participants
morcos - Alex Morcos
gmaxwell - Gregory Maxwell
wumpus - Wladimir J. van der Laan
sipa - Pieter Wuille
jgarzik - Jeff Garzik
Luke-Jr - Luke Dashjr
phantomcircuit - Patrick Strateman
sdaftuar - Suhas Daftuar
btcdrak - btcdrak
jouke - ??Jouke Hofman??
jtimon - Jorge Timón
jonasschnelli - Jonas Schnelli
Comic relief
20:01 wumpus #meetingend
20:01 wumpus #meetingstop
20:01 gmaxwell Thanks all.
20:01 btcdrak #exitmeeting
20:01 gmaxwell #nomeetingnonono
20:01 btcdrak #meedingexit
20:01 wumpus #endmeeting
20:01 lightningbot Meeting ended Thu Nov 5 20:01:29 2015 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot .
20:01 btcdrak #rekt
submitted by G1lius to Bitcoin [link] [comments]

F2Pool, largest bitcoin pool on 20mb blocks (revisiting old news here).

I was just reading back over this mailing list thread where a F2Pool representative explained to Gavin why 20MB blocks wouldn't work for them.
If someone propagate a 20MB block, it will take at best 6 seconds for us to receive to verify it at current configuration, result of one percent orphan rate increase. Or, we can mine the next block only on the previous block's header, in this case, the network would see many more transaction-less blocks.
Our orphan rate is about 0.5% over the past few months. If the network floods 20MB blocks, it can be well above 2%. Besides bandwidth, A 20MB block could contain an average of 50000 transactions, hundred of thousands of sigops, Do you have an estimate how long it takes on the submitblock rpccall?
For references, our 30Mbps bandwidth in Beijing costs us 1350 dollars per month. We also use Aliyun and Linode cloud services for block propagation. As of May 2015, the price is 0.13 U.S. dollars per GB for 100Mbps connectivity at Aliyun. For a single cross-border TCP connection, it would be certainly far slower than 12.5 MB/s.
I think we can accept 5MB block at most.
When people talk about low-bandwidth miners being vulnerable to attack by large blocks, I believe that remark by F2Pool is what spawned the concern.
It didn't seem like that big of a deal to me, 6 seconds? And then I realized, F2Pool, in addition to being the largest bitcoin pool, is also the largest litecoin and dogecoin mining pool. Litecoin has 2.5min blocks, bandwidth equivalent to 4MB max block size in bitcoin, and dogecoin has 1min blocks, equivalent to 10MB max block size.
I just wonder if they might have been taking into account block flooding by those two networks in their bandwidth concern for this attack vector as well. If someone wanted to attack them by flooding big blocks they could do it extra effectively (and cheaply) by using those two coins, they already have potentially 14MB worth of block and transaction spam every 10min to worry about.
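The 14MB figure follows from normalizing each chain's maximum block size to Bitcoin's 10-minute interval. A quick sketch of the arithmetic, under the post's assumption of 1MB blocks on both chains:

```python
def equivalent_10min_size_mb(block_size_mb, interval_min):
    """Scale a chain's max block size to MB per 10 minutes, for comparison
    with Bitcoin's 10-minute block target."""
    return block_size_mb * (10 / interval_min)

ltc = equivalent_10min_size_mb(1, 2.5)   # Litecoin: 4 MB per 10 min
doge = equivalent_10min_size_mb(1, 1)    # Dogecoin: 10 MB per 10 min
# Together: up to 14 MB of potential block data every 10 minutes on top of
# Bitcoin's own 1 MB, matching the figure in the post.
```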
Just something I hadn't considered before, thought I'd share.
submitted by peoplma to bitcoinxt [link] [comments]

/u/jl_2012 comments on new extension block BIP - "a block reorg will almost guarantee changing txid of the resolution tx, that will permanently invalidate all the child txs based on the resolution tx"

Comments from jl_2012
I feel particularly disappointed that while this BIP is 80% similar to my proposal made 2 months ago ( https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-January/013490.html ), Matt Corallo was the only person who replied to me. Also, this BIP seems to have ignored the txid malleability of the resolution tx, which was my major technical critique of the xblock design.
But anyway, here I’m only making comments on the design. As I said in my earlier post, I consider this more as an academic topic than something really ready for production use.
This specification defines a method of increasing bitcoin transaction throughput without altering any existing consensus rules.
Softforks by definition tighten consensus rules
There has been great debate regarding other ways of increasing transaction throughput, with no proposed consensus-layer solutions that have proven themselves to be particularly safe.
so the authors don’t consider segwit as a consensus-layer solution to increase transaction throughput, or not think segwit is safe? But logically speaking if segwit is not safe, this BIP could only be worse. OTOH, segwit also obviously increases tx throughput, although it may not be as much as some people wish to have.
This specification refines many of Lau's ideas, and offers a much simpler method of tackling the value transfer issue, which, in Lau's proposal, was solved with consensus-layer UTXO selection.
The 2013 one is outdated. As the authors are not quoting it, not sure if they read my January proposal
extension block activation entails BIP141 activation.
I think extension block in the proposed form actually breaks BIP141. It may say it activates segregated witness as a general idea, but not a specific proposal like BIP141
The merkle root is to be calculated as a merkle tree with all extension block txids and wtxids as the leaves.
It needs to be more specific here. How are they exactly arranged? I suggest it uses a root of all txids, and a root of all wtxids, and combine them as the commitment. The reason is to allow people to prune the witness data, yet still able to serve the pruned tx to light wallets. If it makes txid and wtxid as pairs, after witness pruning it still needs to store all the wtxids or it can’t reconstruct the tree
Outputs signal to exit the extension block if the contained script is either a minimally encoded P2PKH or P2SH script.
This hits the biggest question I asked in my January post: do you want to allow direct exit payment to legacy addresses? As a block reorg will almost guarantee changing txid of the resolution tx, that will permanently invalidate all the child txs based on the resolution tx. This is a significant change to the current tx model. To fix this, you need to make exit outputs unspendable for up to 100 blocks. Doing this, however, will make legacy wallet users very confused as they do not anticipate funding being locked up for a long period of time. So you can’t let the money sent back to a legacy address directly, but sent to a new format address that only recognized by new wallet, which understands the lock up requirement. This way, however, introduces friction and some fungibility issues, and I’d expect people using cross chain atomic swap to exchange bitcoin and xbitcoin
To summarise, my questions are:
1. Is it acceptable to have massive txid malleability and transaction chain invalidation for every naturally occurring reorg? Yes: the current spec is ok; No: next question (I'd say no)
2. Is locking up exit outputs the best way to deal with the problem? (I tried really hard to find a better solution but failed)
3. How long should the lock-up period be? Answer could be anywhere from 1 to 100
4. With a lock-up period, should it allow direct exit to a legacy address? (I think it's ok if the lock-up is short, like 1-2 blocks. But is that safe enough?)
5. Due to the fungibility issues, it may need a new name for the tokens in the ext-block
Verification of transactions within the extension block shall enforce all currently deployed softforks, along with an extra BIP141-like ruleset.
I suggest to only allow push-only and OP_RETURN scriptPubKey in xblock. Especially, you don’t want to replicate the sighash bug to xblock. Also, requires scriptSig to be always empty
This leaves room for 7 future soft-fork upgrades to relax DoS limits.
Why 7? There are 16 unused witness program versions
Witness script hash v0 shall be worth the number of accurately counted sigops in the redeem script, multiplied by a factor of 8.
There is a flaw here: witness script with no sigop will be counted as 0 and have a lot free space
every 73 bytes in the serialized witness vector is worth 1 additional point.
so 72 bytes is 1 point or 0 point? Maybe it should just scale everything up by 64 or 128, and make 1 witness byte = 1 point . So it won’t provide any “free space” in the block.
Currently defined witness programs (v0) are each worth 8 points. Unknown witness program outputs are worth 1 point. Any exiting output is always worth 8 points.
I’d suggest to have at least 16 points for each witness v0 output, so it will make it always more expensive to create than spend UTXO. It may even provide extra “discount” if a tx has more input than output. The overall objective is to limit the UTXO growth. The ext block should be mainly for making transactions, not store of value (I’ll explain later)
Dust Threshold
In general I think it’s ok, but I’d suggest a higher threshold like 5000 satoshi. It may also combine the threshold with the output witness version, so unknown version may have a lower or no threshold. Alternatively, it may start with a high threshold and leave a backdoor softfork to reduce it.
Deactivation
It is a double-edged sword. While it is good for us to be able to discard an unused chain, it may create really bad user experience and people may even lose money. For example, people may have opened Lightning channels and they will find it not possible to close the channel. So you need to make sure people are not making time-locked tx for years, and require people to refresh their channel regularly. And have big red warning when the deactivation SF is locked in. Generally, xblock with deactivation should never be used as long-term storage of value.
———— some general comments:
  1. This BIP in current form is not compatible with BIP141. Since most nodes are already upgraded to BIP141, this BIP must not be activated unless BIP141 failed to activate. However, if the community really endorse the idea of ext block, I see no reason why we couldn’t activate BIP141 first (which could be done in 2 weeks), then work together to make ext block possible. Ext block is more complicated than segwit. If it took dozens of developers a whole year to release segwit, I don’t see how ext block could become ready for production with less time and efforts.
  2. Another reason to make this BIP compatible with BIP141 is we also need malleability fix in the main chain. As the xblock has a deactivation mechanism, it can’t be used for longterm value storage.
  3. I think the size and cost limit of the xblock should be lower at the beginning, and increases as we find it works smoothly. It could be a predefined growth curve like BIP103, or a backdoor softfork. With the current design, it leaves a massive space for miners to fill up with non-tx garbage. Also, I’d also like to see a complete SPV fraud-proof solution before the size grows bigger.
Source: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-April/013982.html
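The reorg-invalidation problem jl_2012 describes can be shown with a toy example. A child transaction spends a parent output by (txid, vout); if a reorg forces the resolution transaction to be rebuilt, its txid changes and the child's outpoint no longer refers to any existing transaction. The serializations below are hypothetical placeholders, but the txid computation (double-SHA256 of the serialized transaction) is Bitcoin's:

```python
import hashlib

def txid(tx_bytes):
    """Bitcoin txid: double-SHA256 of the serialized tx, displayed reversed."""
    return hashlib.sha256(hashlib.sha256(tx_bytes).digest()).digest()[::-1].hex()

# Hypothetical serializations of the resolution tx before and after a reorg;
# it pays the same exits, but is rebuilt on different inputs, so its bytes differ.
resolution_v1 = b"resolution-tx-built-on-chain-A"
resolution_v2 = b"resolution-tx-rebuilt-after-reorg"

# A child spends the resolution tx by outpoint (txid, vout).
child_outpoint = (txid(resolution_v1), 0)

# After the reorg, the outpoint the child references no longer exists:
# every descendant of the old resolution tx is permanently invalidated.
assert child_outpoint[0] != txid(resolution_v2)
```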
submitted by jonny1000 to Bitcoin [link] [comments]

The Astounding Incompetence, Negligence, and Dishonesty of the Bitcoin Unlimited Developers

On August 26, 2016 someone noticed that their Classic node had been forked off of the "Big Blocks Testnet" that Bitcoin Classic and Bitcoin Unlimited were running. Neither implementation was testing their consensus code on any other testnets; this was effectively the only testnet being used to test either codebase. The issue was due to a block on the testnet that was mined on July 30, almost a full month prior to anyone noticing the fork at all, which was in violation of the BIP109 specification that Classic miners were purportedly adhering to at the time. Gregory Maxwell observed:
That was a month ago, but it's only being noticed now. I guess this is demonstrating that you are releasing Bitcoin Classic without much testing and that almost no one else is either? :-/
The transaction in question doesn't look at all unusual, other than being large. It was, incidentally, mined by pool.bitcoin.com, which was signaling support for BIP109 in the same block it mined that BIP 109 violating transaction.
Later that day, Maxwell asked Roger Ver to clarify whether he was actually running Bitcoin Classic on the bitcoin.com mining pool, who dodged the question and responded with a vacuous reply that attempted to inexplicably change the subject to "censorship" instead.
Andrew Stone voiced confusion about BIP109 and how Bitcoin Unlimited violated the specification for it (while falsely signaling support for it). He later argued that Bitcoin Unlimited didn't need to bother adhering to specifications that it signaled support for, and that doing so would violate the philosophy of the implementation. Peter Rizun shared this view. Neither developer was able to answer Maxwell's direct question about the violation of BIP109 §4/5, which had resulted in the consensus divergence (fork).
Despite Maxwell having provided a direct link to the transaction violating BIP109 that caused the chain split, and explaining in detail what the results of this were, later Andrew Stone said:
I haven't even bothered to find out the exact cause. We have had BUIP016 passed to adhere to strict BIP109 compatibility (at least in what we generate) by merging Classic code, but BIP109 is DOA -- so no-one bothered to do it.
I think that the only value to be had from this episode is to realise that consensus rules should be kept to an absolute, money-function-protecting minimum. If this was on mainnet, I'll be[t] the Classic users would be unhappy to be forked onto a minority branch because of some arbitrary limit that is yet another thing [that] would have needed to be fought over as machine performance improves but the limit stays the same.
Incredibly, when a confused user expressed disbelief regarding the fork, Andrew Stone responded:
Really? There was no classic fork? As i said i didnt bother to investigate. Can you give me a link to more info? Its important to combat this fud.
Of course, the proof of the fork (and the BIP109-violating block/transaction) had already been provided to Stone by Maxwell. Andrew Stone was willing to believe that the entire fork was imaginary, in the face of verifiable proof of the incident. He admits that he didn't investigate the subject at all, even though that was the only testnet that Unlimited could have possibly been performing any meaningful tests on at the time, and even though this fork forced Classic to abandon BIP109 entirely, leaving it vulnerable to the types of attacks that Gavin Andresen described in his Guided Tour of the 2mb Fork:
“Accurate sigop/sighash accounting and limits” is important, because without it, increasing the block size limit might be dangerous... It is set to 1.3 gigabytes, which is big enough so none of the blocks currently in the block chain would hit it, but small enough to make it impossible to create poison blocks that take minutes to validate.
As a result of this fork (which Stone was clueless enough to doubt had even happened), Bitcoin Classic and Bitcoin Unlimited were both left vulnerable to such attacks. Fascinatingly, this fact did not seem to bother the developers of Bitcoin Unlimited at all.
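The "accurate sigop/sighash accounting" that Classic lost by abandoning BIP109 can be illustrated with a rough sketch (this is not Classic's actual implementation; the per-block constant is the 1.3 GB figure from Andresen's quoted post):

```python
# Sketch of cumulative sighash accounting: each signature check hashes
# (roughly) the whole transaction, so a block's total bytes-hashed is
# bounded to cap worst-case validation time.
MAX_BLOCK_SIGHASH_BYTES = 1_300_000_000  # 1.3 GB, per the quoted post

def block_sighash_ok(txs):
    """txs: list of (tx_size_bytes, num_sig_checks) tuples."""
    total = 0
    for size, sig_checks in txs:
        # Approximate: every signature check hashes the full transaction.
        total += size * sig_checks
        if total > MAX_BLOCK_SIGHASH_BYTES:
            return False
    return True
```

A single 1 MB transaction with ~5,000 signature checks would hash about 5 GB and be rejected, while ordinary blocks pass easily; this is the "poison block" shape the limit is meant to exclude.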
On November 17, 2016 Andrew Stone decided to post an article titled A Short Tour of Bitcoin Core wherein he claimed:
Bitcoin Unlimited is building the highest quality, most stable, Bitcoin client available. We have a strong commitment to quality and testing as you will see in the rest of this document.
The irony of this claim should soon become very apparent.
In the rest of the article, Stone wrote with venomous and overtly hostile rhetoric:
As we mine the garbage in the Bitcoin Core code together... I want you to realise that these issues are systemic to Core
He went on to describe what he believed to be multiple bugs that had gone unnoticed by the Core developers, and concluded his article with the following paragraph:
I hope when reading these issues, you will realise that the Bitcoin Unlimited team might actually be the most careful committers and testers, with a very broad and dedicated test infrastructure. And I hope that you will see these Bitcoin Core commits— bugs that are not tricky and esoteric, but simple issues that well known to average software engineers —and commits of “Very Ugly Hack” code that do not reflect the care required for an important financial network. I hope that you will realise that, contrary to statements from Adam Back and others, the Core team does not have unique skills and abilities that qualify them to administer this network.
As soon as the article was published, it was immediately and thoroughly debunked. The "bugs" didn't exist in the current Core codebase; some were results of how Andrew had "mucked with wallet code enough to break" it, and "many of [the] issues were actually caused by changes they made to code they didn't understand", or had been fixed years ago in Core, and thus only affected obsolete clients (ironically including Bitcoin Unlimited itself).
As Gregory Maxwell said:
Perhaps the biggest and most concerning danger here isn't that they don't know what they're doing-- but that they don't know what they don't know... to the point where this is their best attempt at criticism.
Amusingly enough, in the "Let's Lose Some Money" section of the article, Stone disparages an unnamed developer for leaving poor comments in a portion of the code, unwittingly making fun of Satoshi himself in the process.
To summarize: Stone set out to criticize the Core developer team, and in the process revealed that he did not understand the codebase he was working on, had in fact personally introduced the majority of the bugs that he was criticizing, and was completely unable to identify any bugs that existed in current versions of Core. Worst of all, even after receiving feedback on his article, he did not appear to comprehend (much less appreciate) any of these facts.
On January 27, 2017, Bitcoin Unlimited excitedly released v1.0 of their software, announcing:
The third official BU client release reflects our opinion that Bitcoin full-node software has reached a milestone of functionality, stability and scalability. Hence, completion of the alpha/beta phase throughout 2009-16 can be marked in our release version.
A mere 2 days later, on January 29, their code accidentally attempted to hard-fork the network. Despite there being a very clear and straightforward comment in Bitcoin Core explaining the space reservation for coinbase transactions in the code, Bitcoin Unlimited obliviously merged a bug into their client which resulted in an invalid block (23 bytes larger than 1MB) being mined by Roger Ver's Bitcoin.com mining pool on January 29, 2017, costing the pool a minimum of 13.2 bitcoins. A large portion of Bitcoin Unlimited nodes and miners (which naively accepted this block as valid) were also temporarily banned from the network as a result.
The code change in question revealed that the Bitcoin Unlimited developers were not only "commenting out and replacing code without understanding what it's for" as well as bypassing multiple safety-checks that should have prevented such issues from occurring, but that they were not performing any peer review or testing whatsoever of many of the code changes they were making. This particular bug was pushed directly to the master branch of Bitcoin Unlimited (by Andrew Stone), without any associated pull requests to handle the merge or any reviewers involved to double-check the update. This once again exposed the unprofessionalism and negligence of the development team and process of Bitcoin Unlimited, and in this case, irrefutably had a negative effect in the real world by costing Bitcoin.com thousands of dollars worth of coins.
In effect, this was the first public mainnet fork attempt by Bitcoin Unlimited. Unsurprisingly, the attempt failed, costing the would-be forkers real bitcoins as a result. It is possible that the costs of this bug are much larger than the lost rewards and fees from this block alone, as other Bitcoin Unlimited miners may have been expending hash power in the effort to mine slightly-oversized (invalid) blocks prior to this incident, inadvertently wasting resources in the doomed pursuit of invalid coins.
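The kind of safety reservation at issue can be sketched as follows (a simplified illustration, not Core's or Unlimited's actual code; the reservation constant here is illustrative):

```python
MAX_BLOCK_SIZE = 1_000_000
COINBASE_RESERVED = 1_000  # headroom for the coinbase tx; illustrative figure

def assemble_block(mempool_tx_sizes, coinbase_size):
    """Greedy block assembly sketch: leave reserved space for the
    coinbase transaction so the final block cannot exceed the
    consensus size limit."""
    budget = MAX_BLOCK_SIZE - COINBASE_RESERVED
    chosen, used = [], 0
    for tx_size in mempool_tx_sizes:
        if used + tx_size <= budget:
            chosen.append(tx_size)
            used += tx_size
    # The kind of final safety check that was bypassed: without the
    # reservation above, the coinbase can push the block over the limit.
    assert used + coinbase_size <= MAX_BLOCK_SIZE
    return chosen
```

Commenting out the reservation (or the final check) is exactly how a block ends up a few bytes over the hard limit and gets orphaned by the rest of the network.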
On March 14, 2017, a remote exploit vulnerability discovered in Bitcoin Unlimited crashed 75% of the BU nodes on the network in a matter of minutes.
In order to downplay the incident, Andrew Stone rapidly published an article which attempted to imply that the remote-exploit bug also affected Core nodes by claiming that:
approximately 5% of the “Satoshi” Bitcoin clients (Core, Unlimited, XT) temporarily dropped off of the network
In reddit comments, he lied even more explicitly, describing it as "a bug whose effects you can see as approximate 5% drop in Core node counts" as well as a "network-wide Bitcoin client failure". He went so far as to claim:
the Bitcoin Unlimited team found the issue, identified it as an attack and fixed the problem before the Core team chose to ignore it
The vulnerability in question was in thinblock.cpp, which has never been part of Bitcoin Core; in other words, this vulnerability only affected Bitcoin Classic and Bitcoin Unlimited nodes.
In the same Medium article, Andrew Stone appears to have doctored images to further deceive readers. In the reddit thread discussing this deception, Andrew Stone denied that he had maliciously edited the images in question, but when questioned in-depth on the subject, he resorted to citing his own doctored images as sources and refused to respond to further requests for clarification or replication steps.
Beyond that, the same incident report (and images) conspicuously omitted the fact that the alleged "5% drop" on the screenshotted (and photoshopped) node-graph was actually due to the node crawler having been rebooted, rather than any problems with Core nodes. This fact was plainly displayed on the 21 website that the graph originated from, but no mention of it was made in Stone's article or report, even after he was made aware of it and asked to revise or retract his deceptive statements.
There were actually 3 (fundamentally identical) Xthin-assert exploits that Unlimited developers unwittingly publicized during this episode; these also caused problems for Bitcoin Classic, which was vulnerable as well.
On top of all of the above, the vulnerable code in question had gone unnoticed for 10 months, and despite the Unlimited developers (including Andrew Stone) claiming to have (eventually) discovered the bug themselves, it later came out that this was another lie; an external security researcher had actually discovered it and disclosed it privately to them. This researcher provided the following quotes regarding Bitcoin Unlimited:
I am quite beside myself at how a project that aims to power a $20 billion network can make beginner’s mistakes like this.
I am rather dismayed at the poor level of code quality in Bitcoin Unlimited and I suspect there [is] a raft of other issues
The problem is, the bugs are so glaringly obvious that when fixing it, it will be easy to notice for anyone watching their development process,
it doesn’t help if the software project is not discreet about fixing critical issues like this.
In this case, the vulnerabilities are so glaringly obvious, it is clear no one has audited their code because these stick out like a sore thumb
In what appeared to be a desperate attempt to distract from the fundamental ineptitude that this vulnerability exposed, Bitcoin Unlimited supporters (including Andrew Stone himself) attempted to change the focus to a tweet that Peter Todd made about the vulnerability, blaming him for exposing it and prompting attackers to exploit it... but other Unlimited developers revealed that the attacks had actually begun well before Todd had tweeted about the vulnerability. This was pointed out many times, even by Todd himself, but Stone ignored these facts a week later, and shamelessly lied about the timeline in a propagandistic effort at distraction and misdirection.
submitted by sound8bits to sound8bits [link] [comments]

Lies, FUD, and hyperbole

https://medium.com/@octskyward/the-resolution-of-the-bitcoin-experiment-dabb30201f7#.obcepgw0g
Lies, FUD, and hyperbole Part 1
Apologies for the length, but Hearn does pack a lot of misrepresentations and lies into this article.
a system completely controlled by just a handful of people. Worse still, the network is on the brink of technical collapse.
This is patently untrue, as power dynamics within bitcoin are a complex interwoven level of game theory shared by miners, nodes, developers, merchants and payment processors, and users. Even if one were to make the false assumption that miners control all the power, the reality is that mining pools are either made up of thousands of individual miners who can and do redirect their hashing power, or are private pools run by companies controlled by multiple investors and owners.
Worse still, the network is on the brink of technical collapse.
If and when a fee event happens, bitcoin will be just fine. Wallets can already adjust for fees, and tx fee pressures will be kept reasonable because they still need to compete with free off-chain solutions. Whether the block size is raised to 2, 4, or 8 MB, it will also be fine (in the short term) as long as corresponding sigop protections are included. The blocksize debate has more to do with bikeshedding and setting a long-term direction for bitcoin than preventing a short-term technical collapse.
Couldn’t move your existing money
Bitcoin functions as a payment rails system just fine, just ask Coinbase and bitpay.
Had wildly unpredictable fees that were high and rising fast
False. I normally pay 3-5 pennies, and txs instantly reach their destination and confirm between 5 minutes and 1 hour as normal. CC txs take weeks to months to finally settle.
Allowed buyers to take back payments they’d made after walking out of shops, by simply pressing a button (if >you aren’t aware of this “feature” that’s because Bitcoin was only just changed to allow it)
RBF is opt-in, and therefore payment processors won't accept such transactions if they do 0-conf tx approvals.
Is suffering large backlogs and flaky payments
The block chain is full.
Blocks are 60-70% full on average. We have yet to see a continuous backlog lasting more than a few hours. A confirmation backlog doesn't prevent txs from being processed, unlike when the Visa/PayPal network goes down and you cannot make a payment at all.
… which is controlled by China
People in China partially control one small aspect of the bitcoin ecosystem, and why shouldn't they? They represent 19% of the world's population. This comment is both misleading and xenophobic.
… and in which the companies and people building it were in open civil war?
Most people are passionate but still friendly behind closed doors. The blocksize debate has spurred decentralization of developer groups and new ideas, which are good things. Sure, there has been some unproductive infighting, but we will get through this and be stronger for it. "Civil wars" exist within and between all currencies anyway, so this is nothing surprising.
Once upon a time, Bitcoin had the killer advantage of low and even zero fees, but it’s now common to be asked >to pay more to miners than a credit card would charge.
Credit cards charge 2.8% to 7% in the US and 5-8% in many other countries. Bitcoin once had fees of up to 40 cents per tx, but for the past few years normal fees have been consistently between 2-8 pennies per tx on-chain, and free off-chain.
Because the block chain is controlled by Chinese miners, just two of whom control more >than 50% of the hash >power.
At a recent conference over 95% of hashing power was controlled by a handful of guys sitting on a single stage.
Mining pools are controlled by many miners and interests, not individuals. Miners also share control with many other competing interests, and are limited in their ability to harm the bitcoin ecosystem should they choose to try.
They have chosen instead to ignore the problem and hope it goes away.
Bitcoin Core has already come to consensus on a scaling proposal -- https://bitcoincore.org/en/2015/12/21/capacity-increase/ https://bitcoincore.org/en/2015/12/23/capacity-increases-faq/ -- and various other implementations are developing theirs to propose to the community. Bitcoin Classic is another interesting implementation that appears to have found consensus around BIP102.
This gives them a perverse financial incentive to actually try and stop Bitcoin becoming popular.
The Chinese miners want bitcoin to scale to at least 2MB in the short term, something that both Core and Classic accommodate. Bitcoin will continue to scale with many other solutions and ultimately payment channels will allow it to scale to Visa like levels of TPS.
The resulting civil war has seen Coinbase — the largest and best known Bitcoin startup in the USA — be erased >from the official Bitcoin website for picking the “wrong” side and banned from the community forums.
Coinbase was re-added to bitcoin.org. Mike conveniently left that important datapoint off.
has gone from being a transparent and open community to one that is dominated by rampant censorship
There are more subreddits, more forums, and more information than ever before. The blocksize debate does sometimes create divisions in our ecosystem, but the information is all there and easy for anyone to investigate.
But the inability to get news about XT or the censorship itself through to users has some problematic effects.
The failure of XT has nothing to do with a lack of information. If anything, there is too much information available, being repeated over and over in many different venues.
One of them, Gregory Maxwell, had an unusual set of views: he once claimed he had mathematically proven >Bitcoin to be impossible. More problematically, he did not believe in Satoshi’s original vision.
Satoshi never intended his writings to be used as an argument from authority, and if he disagrees he can always come back and contribute. We should not depend upon an authority figure but on evidence, valid reasoning, and testing.
And indeed back-of-the-envelope calculations suggested that, as he said to me, “it never really hits a scale >ceiling” even when looking at more factors than just bandwidth.
Hearn's calculations are wrong. More specifically, they do not take into account Tor, decentralization in locations with bandwidth limitations, bandwidth softcaps imposed by ISPs, the true scale of historical bandwidth increases, or malicious actors attacking the system with sophisticated attacks.
Once the 5 developers with commit access to the code had been chosen and Gavin had decided he did not want >to be the leader, there was no procedure in place to ever remove one.
The 45 developers who contributed to Bitcoin Core in 2015 could be replaced by the community with little effort. Ultimately, the nodes, miners and users control which code they run, and no group of developers can force them to upgrade. In fact, Bitcoin Core deliberately avoids an auto-update feature in its releases, at the cost of usability, specifically to ensure that users have to actively choose all new features and can opt out simply by not upgrading.
... end of part one...
submitted by bitusher to Bitcoin [link] [comments]

Forcenet: an experimental network with a new header format | Johnson Lau | Dec 04 2016

Johnson Lau on Dec 04 2016:
Based on Luke Dashjr's code and BIP: https://github.com/luke-jr/bips/blob/bip-mmhf/bip-mmhf.mediawiki , I created an experimental network to show how a new header format may be implemented.
Basically, the header hash is calculated in a way that non-upgrading nodes would see it as a block containing only the coinbase tx, with zero output value. They are effectively broken, as they won't see any transactions confirmed. This allows rewriting most of the rules related to block and transaction validity. This technique goes by different names (soft-hardfork, firmfork, evil softfork) and could itself be a controversial topic. However, I'd rather not focus on its soft-hardfork property, as it would be trivial to turn this into a true hardfork (e.g. setting the sign bit in the block nVersion, or setting the most significant bit in the dummy coinbase nLockTime).
Instead of its soft-HF property, I think the more interesting thing is the new header format. The current bitcoin header is only 80 bytes. It provides only 32 bits of nonce space, which is far from enough for ASICs. It also provides no room for committing to additional data. Therefore, people are forced to put many different kinds of data in the coinbase transaction, such as merge-mining commitments and the segwit commitment. It is not an ideal solution, especially for light wallets.
Following the practice of segwit development of making an experimental network (segnet), I made something similar and called it Forcenet (as it forces legacy nodes to follow the post-fork chain).
The header of forcenet is mostly described in Luke’s BIP, but I have made some amendments as I implemented it. The format is (size in parentheses; little endian):
Height (4), BIP9 signalling field (4), hardfork signalling field (3), merge-mining hard fork signalling field (1), prev hash (32), timestamp (4), nonce1 (4), nonce2 (4), nonce3 (compactSize + variable), Hash TMR (32), Hash WMR (32), total tx size (8), total tx weight (8), total sigops (8), number of tx (4), merkle branches leading to header C (compactSize + 32-byte hashes)
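As a rough illustration (not jl2012's actual serialization code), the field list above could be packed like this; the sketch assumes only the single-byte short form of compactSize and the field sizes as listed:

```python
import struct

def forcenet_header(height, bip9, hf_signal, mm_signal, prev_hash,
                    timestamp, nonce1, nonce2, nonce3,
                    tmr, wmr, total_size, total_weight,
                    total_sigops, num_tx, branches):
    """Little-endian packing of the quoted field list (sketch only)."""
    def compact_size(n):
        assert n < 0xfd  # sketch: handle only the single-byte form
        return bytes([n])
    h = struct.pack('<I', height)
    h += struct.pack('<I', bip9)
    h += hf_signal.to_bytes(3, 'little')       # 3-byte hardfork signal
    h += bytes([mm_signal])                    # 1-byte merge-mining signal
    h += prev_hash                             # 32 bytes
    h += struct.pack('<III', timestamp, nonce1, nonce2)
    h += compact_size(len(nonce3)) + nonce3    # variable-length nonce3
    h += tmr + wmr                             # two 32-byte merkle roots
    h += struct.pack('<QQQ', total_size, total_weight, total_sigops)
    h += struct.pack('<I', num_tx)
    h += compact_size(len(branches)) + b''.join(branches)
    return h
```

Note how much roomier this is than the fixed 80-byte legacy header: three nonce fields plus explicit commitments to tx count, size, weight and sigops.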
In addition to increasing the max block size, I also showed how the calculation and validation of witness commitment may be changed with a new header. For example, since the commitment is no longer in the coinbase tx, we don’t need to use a 0000….0000 hash for the coinbase tx like in BIP141.
Something not yet done:
  1. The new merkle root algorithm described in the MMHF BIP
  2. The nTxsSigops has no meaning currently
  3. Communication with legacy nodes. This version can’t talk to legacy nodes through the P2P network, but theoretically they could be linked up with a bridge node
  4. A new block weight definition to provide incentives for slowing down UTXO growth
  5. Many other interesting hardfork ideas, and softfork ideas that work better with a header redesign
For easier testing, forcenet has the following parameters:
Hardfork at block 200
Segwit is always activated
1-minute blocks with 40000 (pre-fork) and 80000 (post-fork) weight limits
50 blocks coinbase maturity
21000 blocks halving
144 blocks retarget
How to join: code is at https://github.com/jl2012/bitcoin/tree/forcenet1 , start with "bitcoind --forcenet".
Connection: I’m running a node at 8333.info with default port (38901)
Mining: there is only basic internal mining support. Limited GBT support is theoretically possible but needs more hacking. To use the internal miner, write up a shell script to repeatedly call "bitcoin-cli --forcenet generate 1".
New RPC commands: getlegacyblock and getlegacyblockheader, which return blocks and headers in a format compatible with legacy nodes.
This is largely work-in-progress, so expect a reset every couple of weeks.
jl2012
original: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-December/013338.html
submitted by dev_list_bot to bitcoin_devlist [link] [comments]

Summary of block size increase proposals from core devs and Gavin

Currently, there are 4 block size BIPs by Bitcoin developers:
BIP100 by Jeff: http://gtf.org/garzik/bitcoin/BIP100-blocksizechangeproposal.pdf
BIP101 by Gavin: https://github.com/bitcoin/bips/blob/master/bip-0101.mediawiki
BIP102 by Jeff: https://github.com/bitcoin/bips/pull/173/files
BIP??? by Pieter (called "BIP103" below): https://gist.github.com/sipa/c65665fc360ca7a176a6
To facilitate further discussion, I'd like to summarize these proposals as a series of questions. Please correct me if I'm wrong. Things like the sigop limit are less controversial and are not shown.
Should we use a miner voting mechanism to initiate the hardfork?
BIP100: Yes, support with 10800 out of last 12000 blocks (90%)
BIP101: Yes, support with 750 out of last 1000 blocks (75%)
BIP102: No
BIP103: No
When should we initiate the hardfork?
BIP100: 2016-01-11#
BIP101: 2 weeks after 75% miner support, but not before 2016-01-11
BIP102: 2015-11-11
BIP103: 2017-01-01

# The network does not actually fork until it has 90% miner support

What should be the block size at initiation?
BIP100: 1MB
BIP101: 8MB*
BIP102: 2MB
BIP103: 1MB
Should we allow further increase / decrease?
BIP100: By miner voting, 0.5x - 2x every 12000 blocks (~3 months)
BIP101: Double every 2 years, with linear interpolations in between (41.4% p.a.)
BIP102: No
BIP103: +4.4% every 97 days (double every 4.3 years, or 17.7% p.a.)
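The BIP103 figures above can be checked with a few lines of arithmetic (+4.4% every 97 days compounds to roughly 17.7% per annum, doubling in about 4.3 years):

```python
import math

# BIP103 growth schedule as quoted: +4.4% every 97 days.
rate, period_days = 1.044, 97
periods_per_year = 365.25 / period_days

annual_growth = rate ** periods_per_year  # ~1.177, i.e. ~17.7% p.a.

# Doubling time: solve rate**n == 2 for n periods, convert to years.
years_to_double = (math.log(2) / math.log(rate)) * period_days / 365.25  # ~4.3
```

The same method confirms BIP101's schedule: doubling every 2 years is 2**(1/2) - 1, about 41.4% per annum, matching the figure quoted.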
The earliest date for a >=2MB block?
BIP100: 2016-04-03^
BIP101: 2016-01-11
BIP102: 2015-11-11
BIP103: 2020-12-27
^ Assuming 10-minute blocks and that votes cast before 2016-01-11 are not counted
What should be the final block size?
BIP100: 32MB is the max, but it is possible to reduce by miner voting
BIP101: 8192MB
BIP102: 2MB
BIP103: 2048MB
When should we have the final block size?
BIP100: Decided by miners
BIP101: 2036-01-06
BIP102: 2015-11-11
BIP103: 2063-07-09
source dev mailing list: http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-July/009808.html
submitted by bigblocksduder to Bitcoin [link] [comments]

Bitcoin dev IRC meeting in layman's terms (2015-11-05)

Once again my attempt to summarize and explain the weekly bitcoin developer meeting in layman's terms. Link to last week's summarization
On a personal note: I really don't like the fact that someone pm'ed me telling me "a majority of bitcoiners have moved to btc"; it's not (yet) true and comes across as very spammy. This, combined with the tin-foil-hat people-bashing which seems to be popular, makes me almost not want to join this community. I hope this can become like bitcoin, but with the freedom to discuss and mention any topic, not a mindless crusade against bitcoin, theymos, blockstream, etc.
Disclaimer
Please bear in mind I'm not a developer and I'd have problems coding "hello world!", so some things might be incorrect or plain wrong. Like any other write-up it likely contains personal biases, although I try to stay as neutral as I can. There are no decisions being made in these meetings, so if I say "everyone agrees" this means everyone present in the meeting, that's not consensus, but since a fair amount of devs are present it's a good representation. The dev IRC and mailinglist are for bitcoin development purposes. If you have not contributed actual code to a bitcoin-implementation, this is probably not the place you want to reach out to. There are many places to discuss things that the developers read, including this sub-reddit.
link to this week logs Meeting minutes by meetbot
Main topics discussed were:
Sigcache performance, performance goals for 0.12, transaction priority, the sigops flooding attack, and chain limits
Short topics/notes
Note: cfields, mcelrath and BlueMatt (and maybe more) missed the meeting because of daylight saving time.
Closing date for proposals for the scaling bitcoin workshop is the 9th.
Check to see if there are any other commits for the 0.11.2 RC. As soon as 6948 and 6825 are merged it seems good to go. We need to move fairly quick as there are already miners voting for CLTV (F2Pool). Also testnet is CLTV locked already and is constantly forking. 0.11.2 RC1 has been released as of today: https://bitcoin.org/bin/bitcoin-core-0.11.2/test/
Most of the mempool-limiting analysis assumed child-pays-for-parent, however that isn't ready for 0.12 yet, so we should think about possible abuses in context of the existing mining algorithm.
Because of time-constraints, opt-in replace-by-fee has been deferred to next week's meeting, but most people seem to want it in 0.12. sdaftuar makes a note that we need to make clear to users what they need to do if they don't want to accept opt-in transactions.
Sigcache performance
The signature cache, which is in place to increase performance (by not having to check the same signature multiple times) and to mitigate some attacks, currently has a default limit of 50,000 signatures. Sipa has a pull-request which proposes to: change the limit from number of entries to megabytes; change the default to 40MB, which corresponds to 500,000 signatures; store salted hashes instead of full entries; and remove entries that have been validated in a block.
Sipa did benchmarks for various signature cache sizes on hit-rate in blocks (how many of the cached signatures are in the block). The maximum sigcache size was 68MB, resulting in a 3% miss-rate. Some blocks have extremely high miss-rates (60%) while others have none, likely caused by miners running different policies. Gmaxwell proposed to always run script verification for mempool transactions, even if these transactions are rejected from the mempool by the client's policy. The result is that even a 300MB sigcache only gets down to 15% misses, so there's too much crap being relayed to keep any reasonably sized cache. Gmaxwell points out downsides to not checking rejected transactions: some DoS attacks become possible, and you increase your miss-rate if you set a policy more restrictive than the typical network, which might result in a race to the bottom.
Sipa continues his work and seeks out other strategies
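The design being discussed (salted hashes, a byte-based bound, eviction on block connection) can be sketched roughly as follows; this is an illustration, not Core's actual implementation:

```python
import hashlib
import os
from collections import OrderedDict

class SigCache:
    """Sketch of the discussed sigcache changes: store salted hashes of
    validated entries, bound the cache by memory instead of entry count,
    and erase entries once they appear in a connected block."""
    ENTRY_BYTES = 32  # we store only a 32-byte hash per entry

    def __init__(self, max_bytes=40 * 1024 * 1024):  # 40MB default
        self.salt = os.urandom(16)  # per-node salt thwarts crafted collisions
        self.max_entries = max_bytes // self.ENTRY_BYTES
        self.entries = OrderedDict()

    def _key(self, sig, pubkey, sighash):
        return hashlib.sha256(self.salt + sig + pubkey + sighash).digest()

    def add(self, sig, pubkey, sighash):
        self.entries[self._key(sig, pubkey, sighash)] = True
        if len(self.entries) > self.max_entries:
            self.entries.popitem(last=False)  # evict oldest entry

    def contains(self, sig, pubkey, sighash):
        return self._key(sig, pubkey, sighash) in self.entries

    def erase(self, sig, pubkey, sighash):
        # Called when a block containing this signature is connected.
        self.entries.pop(self._key(sig, pubkey, sighash), None)
```

Storing only salted hashes keeps each entry at a fixed 32 bytes, which is what makes the count-to-megabytes conversion (40MB ≈ 500,000 entries plus overhead) meaningful.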
Performance goals for 0.12
Bitcoin-core 0.12 is scheduled for release December 1st.
Everybody likes to include secp256k1 ASAP, as it has a very large performance increase. Some people would like to include the sigcache pull-request, BIP30, modifyNewCoins and a createNewBlock rewrite if it's ready. Wumpus advises against merging last-minute performance improvements for 0.12.
Mentioned pull-requests should be reviewed, prioritizing CreateNewBlock
transaction priority
Each transaction is assigned a priority, determined by its age, size, and number of inputs, which makes some transactions free.
Sipa thinks we should get rid of the current priority completely and replace it with a function that modifies fee or size of a transaction. There's a pull-request available that optimizes the current transaction priority, thereby avoiding the political debate that goes with changing the definition of transaction priority. Luke-jr thinks the old policy should remain possible.
Check to see if PR #6357 is safe and efficient enough.
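For reference, the legacy priority formula under debate can be sketched like this (a simplification of Core's historical behavior; the "free" threshold shown is the old 1-BTC/1-day/250-byte rule):

```python
COIN = 100_000_000  # satoshis per bitcoin

def priority(inputs, tx_size):
    """Legacy priority sketch: sum of (input value * confirmations),
    i.e. coin-age in satoshi-blocks, divided by transaction size.
    inputs: list of (value_satoshis, confirmations) tuples."""
    return sum(value * confs for value, confs in inputs) / tx_size

# Historical "free" threshold: a 1 BTC input aged 1 day (144 blocks)
# spent in a 250-byte transaction sits exactly at the cutoff.
FREE_THRESHOLD = COIN * 144 / 250
```

Because priority grows with input age, a transaction can cross the free threshold simply by waiting; this is the behavior Sipa proposes to replace with a fee/size modifier.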
sigops flooding attack
The number of ECDSA signature-checking operations, or sigops, is currently limited to 20,000 per block. This is in order to prevent miners creating blocks that take ages to verify, as those operations are time-consuming. You could however construct transactions that have a very high sigop count, and since most miners don't take the sigop count into account, they end up with very small blocks because the sigop limit is reached. This attack is described here.
The suggestion is to account for sigops relative to the maximum block size alongside the byte size, meaning a 10k-sigop transaction would be treated as 500kB in size (for that single transaction, not towards the block). That suggestion would be easy to change in the mining code, but more invasive to plug into everything that looks at feerate. This would also open up attacks on the mempool if these transactions are not evicted by mempool limiting. Luke-jr has a bytes-per-sigop limit that filters out these attack transactions.
More analysis should be done, people seem fine with the general direction of fixing it.
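The proportional-costing idea above amounts to taking the larger of a transaction's byte size and its share of the block's sigop budget; a minimal sketch (not an actual patch):

```python
MAX_BLOCK_SIZE = 1_000_000
MAX_BLOCK_SIGOPS = 20_000

def effective_size(tx_size, sigops):
    """Cost a transaction at the larger of its byte size and its
    proportional claim on the block sigop budget, so sigop-heavy
    transactions can't cheaply exhaust it."""
    # 1,000,000 bytes / 20,000 sigops = 50 "bytes" per sigop.
    sigop_equivalent = sigops * MAX_BLOCK_SIZE // MAX_BLOCK_SIGOPS
    return max(tx_size, sigop_equivalent)
```

This reproduces the example from the discussion: a 10,000-sigop transaction is costed as 500kB regardless of its actual byte size, while ordinary transactions are unaffected.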
chain limits
Chain in this context means connected transactions. When you send a transaction that depends on another transaction that has yet to be confirmed, we talk about a chain of transactions. Miners ideally take the whole chain into account instead of just every single transaction (although that's not widely implemented afaik). So while a single transaction might not have a sufficient fee, a depending transaction could have a high enough fee to make it worthwhile to mine both. This is commonly known as child-pays-for-parent. Since you can make these chains very big, it's possible to clog up the mempool this way. With the recent malleability attacks, anyone who made transactions going multiple layers deep would've already encountered huge problems doing this (beautifully explained in let's talk bitcoin #258 from 13:50 onwards). Proposal and github link.
sdaftuar's analysis shows that 40% of blocks contain a chain that exceeds the proposed limits. Even a small bump doesn't make the problem go away. Possible sources of these chains: a service paying the fees on other transactions (child-pays-for-parent), an iOS wallet that gladly spends unconfirmed change. A business confirms they use child-pays-for-parent when they receive bitcoins from an unspent chain. It is possible that these long chains are delivered to miners directly, in which case they wouldn't be affected by the proposed relay limits (and by malleability). Since this is a problem that needs to be addressed, people seem fine with merging it anyway, communicating in advance to let businesses think about how this affects them.
Decision: merge "Policy: Lower default limits for tx chains"; Morcos will mail the developer mailing list after it's merged.
Participants
morcos (Alex Morcos), gmaxwell (Gregory Maxwell), wumpus (Wladimir J. van der Laan), sipa (Pieter Wuille), jgarzik (Jeff Garzik), Luke-Jr (Luke Dashjr), phantomcircuit (Patrick Strateman), sdaftuar (Suhas Daftuar), btcdrak (btcdrak), jouke (??Jouke Hofman??), jtimon (Jorge Timón), jonasschnelli (Jonas Schnelli)
Comic relief
20:01 wumpus #meetingend
20:01 wumpus #meetingstop
20:01 gmaxwell Thanks all.
20:01 btcdrak #exitmeeting
20:01 gmaxwell #nomeetingnonono
20:01 btcdrak #meedingexit
20:01 wumpus #endmeeting
20:01 lightningbot Meeting ended Thu Nov 5 20:01:29 2015 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot .
20:01 btcdrak #rekt
submitted by G1lius to btc

BIP proposal: Increase block size limit to 2 megabytes | Gavin Andresen | Feb 05 2016

Gavin Andresen on Feb 05 2016:
This has been reviewed by merchants, miners and exchanges for a couple of
weeks, and has been implemented and tested as part of the Bitcoin Classic
and Bitcoin XT implementations.
Constructive feedback welcome; argument about whether or not it is a good
idea to roll out a hard fork now will be unproductive, so I vote we don't
go there.
Draft BIP:
https://github.com/gavinandresen/bips/blob/bump2mb/bip-bump2mb.mediawiki
Summary:
Increase block size limit to 2,000,000 bytes.
After 75% hashpower support then 28-day grace period.
With accurate sigop counting, but existing sigop limit (20,000)
And a new, high limit on signature hashing
Blog post walking through the code:
http://gavinandresen.ninja/a-guided-tour-of-the-2mb-fork
Blog post on a couple of the constants chosen:
http://gavinandresen.ninja/seventyfive-twentyeight
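The activation rule summarized above (75% hashpower support, then a 28-day grace period) can be sketched as follows. This is an illustrative sketch under assumed names, not the actual Bitcoin Classic/XT implementation:

```python
# Sketch of the draft BIP's activation logic (illustrative names only):
# 2 MB blocks become valid once 75% of recent blocks signal support
# AND a 28-day grace period has elapsed since the threshold was hit.
SUPPORT_THRESHOLD = 750            # blocks, out of the last 1000
SIGNAL_WINDOW = 1000               # blocks examined for support
GRACE_PERIOD = 28 * 24 * 60 * 60   # 28 days, in seconds

def fork_active(support_in_window: int,
                threshold_hit_time: int,
                now: int) -> bool:
    """Return True if larger blocks are allowed at time `now`."""
    if support_in_window < SUPPORT_THRESHOLD:
        return False
    return now >= threshold_hit_time + GRACE_PERIOD
```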

Gavin Andresen
original: http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-February/012358.html
submitted by dev_list_bot to bitcoin_devlist

