25% of Hashing Power is now Publicly Backing BIP100
At least 3 pools are now tagging their coinbase signatures with "BIP100", which combined amounts to about 25% of mining hashing power. This includes f2pool, Kano pool and Bitclub. It would appear BIP100 has quickly overtaken BIP101 in terms of hashing power support. F2pool previously called BitcoinXT an altcoin, and it would appear they are pushing for BIP100's adoption in Bitcoin Core as an alternative. Kano pool made a statement about the switch here. Edit: Looks like there's an article now. Edit 2: BTCChina is now also backing BIP100, bringing the total hashing power backing BIP100 to about 35%; it's now almost inevitable that Antpool and BW will also switch to BIP100. Edit 3: Bitfury backs BIP100.
BU is not a proposal. It's merely a tool to make it a bit more convenient for the people who run Bitcoin software to coordinate on blocksize policy without having to switch dev teams every time they as a community decide to go with a different policy.
Miners are not "switching to BU." They are switching to getting ready for Haipo Yang's blocksize increase plan. They just happen to be using BU to do it, merely because Core doesn't allow any plan other than 1MB and Segwit. Core deliberately provides software with a blocksize policy pre-baked in. The ONLY thing BU-style software changes is that baking in. It refuses to bundle controversial blocksize policy in with the rest of the code it is offering. It unties the blocksize settings from the dev teams, so that you don't have to shop for both as a packaged unit. The idea is that you can now have Core software security without having to submit to Core blocksize policy. Since there have been bugs in BU as it tries to do a lot of other new things (not related to blocksize settings), there is even a new project called BitcoinEC that really is just Core with an adjustable blocksize, with none of the extras that BU and Classic have (which have caused a few bugs). Instead of the community being spoonfed Core consensus or coordinatedly switching to a different spoonfeeder like XT, it is coordinating on its own. Absent Core's heavy hand on the scale, the community and market will coalesce on Haipo Yang's plan, or something else that is likely reasonable (and if it were unreasonable, how could forcing the community to be spoonfed by some implementation - Core, XT, etc. - rather than letting it feed itself, prevent that anyway?).
There is no such thing as BTU coin, because again BU is not a proposal. There is, for example, Haipo Yang's proposal that the miners are moving to. You could call it CoreCoin vs. YangCoin if you were so inclined, but to call something BTU or BUcoin is just ignorant. People have only done that because they are stuck in the Core paradigm where all the implementations have to come with a policy stance baked in, and where you must switch dev teams if you disagree with their blocksize preferences. Consider that BU devs are big blockers, yet you could use BU to enforce a 100kB blocksize limit if you wanted to.
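To make that concrete, here is roughly how such a policy looks in a BU-style node's bitcoin.conf (the excessiveblocksize / excessiveacceptdepth option names are taken from BU's documentation as I recall it; the values are purely illustrative):

    # bitcoin.conf for a BU-style node: blocksize policy is a user setting, not code
    excessiveblocksize=100000      # EB: treat blocks over 100 kB as excessive (a "small block" stance)
    excessiveacceptdepth=4         # AD: follow a chain of excessive blocks only once it leads by 4 blocks

Change one number (say, excessiveblocksize=16000000) and the very same client expresses a 16 MB stance - the software and the blocksize policy are decoupled, which is the whole point.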
Running Core is like buying a Sony TV that only lets you watch Fox, because the other channels are locked away and you have to know how to solder a circuit board to see them. To change the channel, you as a layman would have to switch to a different TV made by some other manufacturer, who you may not think makes as reliable of TVs. This is because Sony believes people should only ever watch Fox "because there are dangerous channels out there" or "because since everyone needs to watch the same channel, it is our job to decide what that channel is." So the community is stuck with either watching Fox on their nice, reliable Sony TVs, or switching to all watching ABC on some more questionable TVs made by some new maker (like, in 2015 the XT team was the new maker and BIP101 was ABC).

BU (and now Classic and BitcoinEC) shatters that whole bizarre paradigm. BU is a TV that lets you tune to any channel you want, at your own risk. The community is free to converge on any channel it wants to, and since everyone in this analogy wants to watch the same channel they will coordinate to find one. Yet people who are accustomed to Sony are so confused by the idea that the community could coordinate itself that they assume BU is, like XT, a TV that tunes to a specific channel, and they rail against that channel as dangerous. This is quite silly considering you can use BU to tune to Fox! That is, you can run BU with exactly Core settings. So you know you are being snowed there.

Although the following is hard to understand because of the fact that Satoshi introduced a meant-to-be-temporary, non-controversial 1MB blocksize limit into the code, it is actually the Core devs who are tacitly proposing something brand new: by keeping the 1MB limit all these years while it gradually grew to be very controversial (due to increased Bitcoin usage), they have changed the situation from Satoshi's original one, where the code didn't come with any controversial stances baked into it, to one where it does.

Core has gradually moved to a lock-in model because they imagine the community to be incapable of reaching consensus on its own, or at least incapable of reaching a good consensus. By retaining the 1MB cap well past its time, they have little by little snuck in a new governance model I will call Governance by Centralized Inconvenience Barrier (GCIB). This is identical to Sony's model, where they try to govern what everyone watches by making it inconvenient to change the channel (you have to know how to mod your TV or go find another maker who may not make TVs as well).

It is centralized because whatever goes into the Core repository counts as (they say) the "reference implementation," and any client that deviates from that is subject to extreme censorship on the Core mailing list as "off topic" - and coincidentally the same applies on all the biggest Bitcoin forums (except this one) and bitcoin.org. Core's hope is that this new governance model will keep people from doing anything stupid and reckless, thanks to Core's paternalistic guidance. You can ironically know it is centralized because they focus on arguing that Core is somehow decentralized, using the flimsiest of reasoning: "anyone can contribute [but the committers must approve]" and "the team is decentralized because the devs live all over the world [so what?]" and "only unanimous votes among the 7 committers can make changes [that's still just an FOMC, and unanimity just means at best no changes at all and at worst total colluded central control]".
The pretzel logic even extends further: to avoid the accusation of central control they instead say, "Fine, no changes at all" - not realizing that this effectively disallows any kind of temporary measures, including Satoshi's temporary blocksize limit. That would be insane, especially in the face of hot competition from altcoins, so the pretzel logic extends further still: only soft forks are allowed, and hard forks are especially not allowed under controversy. Yet this is the ultimate in silliness, because a controversial soft fork will merely incite everyone who is against it to hard fork as a defense.

The whole paradigm where they think they can somehow "disallow" people from changing the channel (coordinating on blocksize settings without a group of devs rigging the consensus-finding) is hopelessly centralized. The whole paradigm where they think they can somehow "disallow" people from hard forking by only issuing soft forks is hopelessly centralized. The whole paradigm where Core = Bitcoin is hopelessly centralized. BU, Classic, BitcoinEC, and soon btcd are the new breed, the "rooted" clients in the way you root an iPhone. For the purist, BitcoinEC is rooted Core, a minimal patchset. Unlike Core devs, these devs all refuse to pretend they are the determiners of what Bitcoin is. They understand that Bitcoin is not held together by trivial inconvenience barriers erected by dev teams, as that would be a trivially easy attack vector - you can't really stop people from running a patch or modding their code in the long run, even if you could do it for a while by pointing out there are some risks now due to lack of coding talent and such. They understand that the 21M coin limit is not held in place by Core devs locking down the coin issuance settings, but by the fact that the community would never tune to any channel that had a different issuance schedule. So they understand there is no danger in letting users adjust settings, because it is not the inability to deviate from the herd that keeps the herd moving together, but the incentives involved.

It is time for Bitcoin to grow up, to throw off childish things like the illusion that a group of devs is needed to set consensus - as well as the idea that such an arrangement wouldn't be extremely dangerous, since it would centralize control to the very extent that it was necessary (meaning Bitcoin would barely be a thing at all; no wonder so many Core guys were longtime Bitcoin skeptics and seem to reject the idea of antifragility).
The Mike Hearn Show: Season Finale (and Bitcoin Classic: Series Premiere)
This post debunks Mike Hearn's conspiracy theories RE Blockstream in his farewell post, and points out issues with the behavior of the Bitcoin Classic hard fork and the sketchy tactics of its advocates.

I used to be torn on how to judge Mike Hearn. On the one hand he has done some good work with BitcoinJ, Lighthouse etc. Certainly his choice of bloom filter has had a net negative effect on the privacy of SPV users, but all in all it works as advertised.(*) On the other hand, he has single-handedly advocated for some of the most alarming behavior changes in the Bitcoin network (e.g. redlists, coinbase reallocation, BIP101 etc...) to date. Not to mention his advocacy in the past year has degraded from any semblance of professionalism into an adversarial us-vs-them propaganda train. I do not believe his long history with the Bitcoin community justifies this adversarial attitude.

As a side note, this post should not be taken as unabated support for Bitcoin Core. Certainly the dev team is made of humans, and like all humans mistakes can be made (e.g. the March 2013 fork). Some have even engaged in arguably unprofessional behavior, but I have not yet witnessed any explicitly malicious activity from their camp.(q) If evidence to the contrary can be provided, please share it. Thankfully the development of Bitcoin Core happens more or less completely out in the open; anyone can audit and monitor the goings on. I personally check the repo at least once a day to see what work is being done. I believe that the regular committers are genuinely interested in the overall well being of the Bitcoin network, work towards the common goal of maintaining and improving Core, and do their best to juggle the competing interests of the community that depends on them. That is not to say that they are The Only Ones; for the time being they have stepped up to the plate to do the heavy lifting. Until that changes in some way, they have my support.

The hard line that some of the developers have drawn in regards to the block size has caused a serious rift, and this write up is a direct response to oft-repeated accusations made by Mike Hearn and his supporters about members of the core development team. I have no affiliations or connection with Blockstream; however, I have met a handful of the core developers, both affiliated and unaffiliated with Blockstream.

Mike opens his farewell address with his pedigree to prove his opinion's worth. He masterfully washes over the mountain of work put into improving Bitcoin Core over the years by the "small blockians" to paint the picture that Blockstream is stonewalling the development of Bitcoin. The folks who signed Greg's scalability road map have done some of the most important, unsung work in Bitcoin. Performance improvements, privacy enhancements, increased reliability, better sync times, mempool management, bandwidth reductions etc... all those things are thanks to the core devs and the research community (e.g. Christian Decker), many of which will lead to a smoother transition to larger blocks (e.g. libsecp256k1).(1) While ignoring previous work and harping on the block size exclusively, Mike accuses those same people who have spent countless hours working on the protocol of trying to turn Bitcoin into something useless, because they remain conservative on a highly contentious issue that has tangible effects on network topology.
The nature of this accusation is characteristic of Mike's attitude over the past year, which marked a shift in the block size debate from a technical argument to a personal one (in tandem with DDoS and censorship in r/Bitcoin and general toxicity from both sides). For example, Mike claimed that sidechains constitute a conflict of interest, as Blockstream employees are "strongly incentivized to ensure [bitcoin] works poorly and never improves" - despite thousands of commits to the contrary. Many of these commits are top-down rewrites of low level Bitcoin functionality, not chump change by any means. I am not just "counting commits" here. Anyways, Blockstream's current client base consists of Bitcoin exchanges whose future hinges on the widespread adoption of Bitcoin. The more people that use Bitcoin, the more demand there will be for sidechains to service the Bitcoin economy. Additionally, one could argue that if there were some sidechain that gained significant popularity (hundreds of thousands of users), larger blocks would be necessary to handle users depositing and withdrawing funds into/from the sidechain. Perhaps if they were miners and core devs at the same time, then a conflict of interest on small blocks would be a more substantive accusation (create artificial scarcity to increase tx fees). The rationale behind pricing out the Bitcoin "base" via capacity constraint to increase their business prospects as a sidechain consultancy is contrived and illogical. If you believe otherwise, I implore you to share a detailed scenario in your reply so I can see if I am missing something.

Okay, so back to it. Mike made the right move when Core would not change its position: he forked Core and gave the community XT. The choice was there; most miners took a pass. Clearly there was not consensus on Mike's proposed scaling road map or how big blocks should be rolled out. And even though XT was a failure (mainly because of massive untested capacity increases which were opposed by some of the larger pools whose support was required to activate the 75% fork), it has inspired a wave of implementation competition. It should be noted that the censorship and attacks by members of r/Bitcoin are completely unacceptable; there is no excuse for such behavior. While theymos is entitled to run his subreddit as he sees fit, if he continues to alienate users there may be a point of mass exodus following some significant event in the community that he tries to censor. As for the DDoS attackers, they should be ashamed of themselves; it is recommended that alt. nodes mask their user agents.

Although Mike has left the building, his alarmist mindset on the block size debate lives on through Bitcoin Classic, an implementation which is using a more subtle approach to inspire adoption, as jtoomim cozies up with miners to get their support while appealing to the masses with a call for an adherence to Satoshi's "original vision for Bitcoin." That said, it is not clear that he is competent enough to lead the charge on the maintenance/improvement of the Bitcoin protocol. That leaves most of the heavy lifting up to Gavin, as Jeff has historically done very little actual work for Core. We are thus in a potentially more precarious situation than we were in with XT, as some Chinese miners are apparently "on board" for a hard fork block size increase. Jtoomim has expressed a willingness to accept an exceptionally low (60 or 66%) consensus threshold to activate the hard fork if necessary. Why?
Because of the lost "opportunity cost" of the threshold not being reached.(c) With variance, my guess is that a lucky 55% could activate that 60% threshold. That's basically two Chinese miners. I don't mean to attack him personally; he is just willing to go down a path that requires the support of only two major Chinese mining pools to activate his hard fork. As a side effect of the latency issues of the GFW, a block size increase might increase the orphan rate outside the GFW, profiting the Chinese pools. With a 60% threshold there is no way for miners outside of China to block that hard fork.

To compound the popularity of this implementation, the efforts of Mike, Gavin and Jeff have further blinded many within the community to the mountain of effort that core devs have put in. And it seems to be working, as they are beginning to successfully ostracize the core devs beyond the network of "true big block-believers." It appears that Chinese miners are getting tired of the debate (and with it Core) and may shift to another implementation over the issue.(d) Some are going around to mining pools and trying to undermine Core's position in the soft vs. hard fork debate. These private appeals to the miner community are a concern, because there is no way to know if bad information is being passed on with the intent to disrupt Core's consensus based approach to development in favor of an alternative implementation controlled (i.e. benevolent dictator) by those appealing directly to miners. If the core team is reading this, you need to get out there and start pushing your agenda so the community has a better understanding of what you all do every day and how important the work is. Get some fancy videos up to show the effects of a block size increase, and work on reading materials that are easy for non-technically-minded folk to identify with and get behind.

The soft fork debate really highlights the disingenuity of some of these actors. Generally speaking, soft forks are easier on network participants who do not regularly keep up with the network's software updates or have forked the code for personal use and are unable to upgrade in time, while hard forks require timely software upgrades if the user hopes to maintain consensus after a hard fork. The merits of that argument come with heavy debate. However, more concerning is the fact that hard forks require central planning and arguably increase the power developers have over changes to the protocol.(2) In contrast, the 'signal of readiness' behavior of soft forks allows the network to update without any hardcoded flags and developer oversight. Issues with hard forks are further compounded by activation thresholds, as soft forks generally require 95% consensus while Bitcoin Classic only calls for 60-75% consensus, exposing network users to a greater risk of competing chains after the fork. Mike didn't want to give the Chinese any more power, but now the post-XT fallout has pushed the Chinese miners right into the Bitcoin Classic driver's seat. While a net split did happen briefly during the BIP66 soft fork, imagine that scenario amplified by miners who do not agree to hard fork changes while controlling 25-40% of the network's hashing power. Two actively mined chains with competing interests: the Doomsday Scenario.
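To put a rough number on that variance point, here is a minimal sketch (assuming, purely for illustration, that activation counts signaling over a 1000-block window; Classic's actual window and threshold parameters may differ):

    # Chance a coalition with 55% of hashpower hits a 60% threshold
    # in a single 1000-block window (window size is an assumption)
    from math import comb

    def p_at_least(n, k, p):
        """P(X >= k) for X ~ Binomial(n, p) blocks mined by the coalition."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    print(p_at_least(1000, 600, 0.55))  # ~0.0008 per window

Under 0.1% per independent window - but the count is re-checked as each new block arrives, so across thousands of overlapping windows a "lucky" activation is far more plausible than the per-window figure suggests.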
With a 5% miner holdout on a soft fork, the fork will constantly reorg and malicious transactions will rarely have more than one or two confirmations.(b) During a soft fork, nodes can protect themselves from double spends by waiting for extra confirmations when the node alerts the user that an ANYONECANSPEND transaction has been seen. Thus, soft forks give Bitcoin users more control over their software (they can choose to treat a soft fork as a soft fork or as a hard fork), which allows for greater flexibility on upgrade plans for those actively maintaining nodes and other network critical software.(2) Advocating for low-threshold hard forks is a step in the wrong direction if we are trying to limit the "central planning" of any particular implementation. However, I do not believe that is the main concern of the Bitcoin Classic devs.

To switch gears a bit, Mike is ironically concerned that China "controls" Bitcoin, but wanted to implement a block size increase that would only increase their relative control (via increased orphans). Until the p2p wire protocol is significantly improved (IBLT, etc...), there is very little room (if any at all) to raise the block size without significantly increasing orphan risk. This can be easily determined by looking at jtoomim's testnet data, which passed through the normal p2p network, not the relay network.(3) In the meantime this will only get worse if no one picks up the slack on the relay network that Matt Corallo is no longer maintaining.(4)

Centralization is bad regardless of the block size, but Mike tries to conflate the centralization issues with the Blockstream block size side show for dramatic effect. In retrospect, it would appear that the initial lack of cooperation on a block size increase actually staved off increases in orphan risk. Unfortunately, this centralization metric will likely increase with the cooperation of Chinese miners and Bitcoin Classic if major strides to reduce orphan rates are not made.

Mike also manages to link to a post from the ProHashing guy RE forever-stuck transactions, which have been shown to generally be the result of poorly maintained/improperly implemented wallet software.(6) Ultimately Mike wants fees to be fixed, despite the fact that you can't enforce fixed fees in a system that is not centrally planned. Miners could decide to raise their minimum fees even when blocks are >1MB, especially when blocks become too big to reliably propagate across the network without being orphaned. What is the marginal cost for a tx that increases orphan risk by some %? That is a question being explored with flexcaps. Even with larger blocks, if miners outside the GFW fear orphans they will not create the bigger blocks without a decent incentive; in other words, even with a larger block size you might still end up with variable fees. Regardless, it is generally understood that variable fees are not preferred from a UX standpoint, but developers of Bitcoin software do not have the luxury of enforcing specific fees beyond basic defaults hardcoded to prevent cheap DoS attacks. We must expose the user to just enough information so they can make an informed decision without being overwhelmed. Hard? Yes. Impossible? No.

Shifting gears, Mike states that current development progress via segwit is an empty ploy, despite the fact that segwit comes with not only a marginal capacity increase, but also plugs up major malleability vectors, allows pruning blocks of historical data, and a bunch of other fun stuff.
It's a huge win for unconfirmed transactions (which Mike should love). Even if segwit does require non-negligible changes to wallet software and Bitcoin Core (~500 LoC), it buys us time to improve block relay (IBLT, weak blocks) so we can start raising the block size without fear of an increased orphan rate. Certainly we can rush to increase the block size now and further exacerbate the China problem, or we can focus on the "long play" and limit negative externalities.

And does segwit help the Lightning Network? Yes. Is that something that indicates a Blockstream conspiracy? No. Comically, the big blockians used to criticize Blockstream for advocating for LN when there was no one working on it, but now that it is actively being developed, the tune has changed and everything Blockstream does is a conspiracy to push for Bitcoin's future as a dystopic LN-powered settlement network. Is LN "the answer?" Obviously not - most don't actually think that. How it actually works in practice is yet to be seen, and there could be unforeseen emergent characteristics that make it less useful for the average user than originally thought. But it's a tool that should be developed in unison with other scaling measures, if only for its usefulness for instant txs and micropayments.

Regardless, the fundamental divide rests on ideological differences that we all know well. Mike is fine with the miner-only validation model for nodes and is willing to accept some miner centralization so long as he gets the necessary capacity increases to satisfy his personal expectations for the immediate future of Bitcoin. Greg and co. believe that a distributed full node landscape helps maintain a balance of decentralization in the face of the miner centralization threat. For example, if you have 10 miners who are the only sources for blockchain data, then you run the risk of undetectable censorship, prolific sybil attacks, and no mechanism for individuals to validate the network without trusting a third party. As an analogy, take the tor network: you use it with an expectation of privacy while understanding that the multi-hop nature of the routing will increase latency. Certainly you could improve latency by removing a hop or two, but with it you lose some privacy. Does tor's high latency make it useless? Maybe for watching Netflix, but not for submitting leaked documents to some newspaper. I believe this is the philosophy held by most of the core development team. Mike does not believe that the Bitcoin network should cater to this philosophy, and sees any activity which stunts the growth of on-chain transactions as a direct attack on the protocol. Ultimately, however, I believe Greg and co. also want Bitcoin to scale on-chain transactions as much as possible. They believe that in order for Bitcoin to increase its capacity while adhering to acceptable levels of decentralization, much work needs to be done. It's not a matter of if the block size will be increased, but when. Mike has confused this adherence to strong principles of decentralization as disingenuous and a cover up for a dystopic future of Bitcoin where sidechains run wild with financial institutions paying $40 per transaction. Again, this does not make any sense to me. If banks are spending millions to co-opt this network, what advantage does a decentralized node landscape have for them?
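On the orphan-risk point running through the above, a toy Decker/Wattenhofer-style model (my illustration, not from the original post; the bandwidth and base-delay numbers are made up) shows why propagation time is the binding constraint on block size:

    # Toy orphan-risk model: block arrivals are Poisson with mean interval
    # T = 600 s, and a block that takes tau seconds to reach most hashpower
    # is orphaned if a competing block is found during that window.
    from math import exp

    T = 600.0  # average block interval, seconds

    def orphan_prob(block_mb, mbps=1.0, base_delay=2.0):
        """P(orphan) ~ 1 - exp(-tau/T); tau = base delay + transfer time.
        The 1 Mbps effective relay rate and 2 s base delay are illustrative."""
        tau = base_delay + block_mb * 8 / mbps
        return 1 - exp(-tau / T)

    for mb in (1, 4, 8, 32):
        print(mb, "MB ->", round(orphan_prob(mb) * 100, 1), "% orphan risk")

With these made-up numbers the risk climbs from under 2% at 1 MB to tens of percent at 32 MB - and since poorly-connected miners face the larger tau, bigger blocks shift relative profit toward whoever propagates fastest.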
There are a few roads that the community can take now: one where we delay a block size increase while improvements to the protocol are made (with the understanding that some users may have to wait a few blocks to have their transaction included, fees will be dependent on transaction volume, and transactions <$1 may be temporarily cost-ineffective), so that when we do increase the block size, orphan rate and node drop off are insignificant. Another is the immediate large block size increase, which possibly leads to a future Bitcoin which looks nothing like it does today: low numbers of validating nodes, heavy trust in centralized network explorers, and thus a network more vulnerable to government coercion/general attack. Certainly there are smaller steps for block size increases which might not be as immediately devastating, and perhaps that is the middle ground which needs to be trodden to appease those who are emotionally invested in a bigger block size. Combined with segwit, however, max block sizes could reach unacceptable levels. There are other scenarios which might play out with competing chains etc..., but in that future Bitcoin has effectively failed.

Like any technology that requires maintenance and human interaction, Bitcoin will require politicking for decision making. Up until now that has occurred via the "vote by download" for software which implements some change to the protocol. I believe this will continue to be the most robust of the options available to us. Now that there is competition, the Bitcoin Core community can properly advocate for changes to the protocol that it sees fit without being accused of co-opting the development of Bitcoin. An ironic outcome to the situation at hand. If users want their Bitcoins to remain valuable, they must actively determine which developers are most competent and have their best interests at heart. So far the core dev community has years of substantial and successful contributions under its belt, while the alt implementations have a smattering of developers who have not yet publicly proven (besides perhaps Gavin - although his early mistakes with block size estimates are concerning) that they have the skills and endurance necessary to maintain a full node implementation.

Perhaps now it is time that we focus on the personalities whom many want to trust with Bitcoin's future. Let us see if they can improve the speed at which signatures are validated by 7x. Or if they can devise privacy-preserving protocols like Confidential Transactions. Or whether they can figure out ways to improve traversal times across a merkle tree. Can they implement HD functionality into a wallet without any coin-crushing bugs? Can they successfully modularize their implementation without breaking everything? If so, let's welcome them with open arms.

But Mike is at R3 now, which seems like a better fit for him ideologically. He can govern the rules with relative impunity, and there is not a huge community of open source developers, researchers and enthusiasts to disagree with. I will admit, his posts are very convincing at first blush, but ultimately they are nothing more than a one-sided appeal to those in the community who have unrealistic or incomplete understandings of the technical challenges faced by developers maintaining a consensus critical, validation-heavy, distributed system that operates within an adversarial environment. Mike always enjoyed attacking Blockstream, but when surveying his past behavior it becomes clear that his motives were not always pure.
Why else would you leave with such a nasty, public farewell?

To all the XT'ers, btc'ers and so on, I only ask that you show some compassion when you critique the work of Bitcoin Core devs. We understand you have a competing vision for the scaling of Bitcoin over the next few years. They want Bitcoin to scale too; you just disagree on how and when it should be done. Vilifying and attacking the developers only further divides the community and scares away potential future talent who may want to further the Bitcoin cause. Unless you can replace the folks doing all this hard work on the protocol, or can pay someone equally as competent, please think twice before you say something nasty.

As for Mike, I wish you the best at R3 and hope that you can one day return to the Bitcoin community with a more open mind. It must hurt having your software out there being used by so many but your voice snuffed. Hopefully one day you can return when many of the hard problems are solved (e.g. reduced propagation delays, better access to cheap bandwidth) and the road to safe block size increases has been paved.

(*) https://eprint.iacr.org/2014/763.pdf
(q) https://github.com/bitcoinclassic/bitcoinclassic/pull/6
(b) https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/012026.html
(c) https://github.com/bitcoinclassic/bitcoinclassic/pull/1#issuecomment-170299027
(d) http://toom.im/jameshilliard_classic_PR_1.html
(0) http://bitcoinstats.com/irc/bitcoin-dev/logs/2016/01/06
(1) https://github.com/bitcoin/bitcoin/graphs/contributors
(2) https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/012014.html
(3) https://toom.im/blocktime (beware of heavy website)
(4) https://bitcointalk.org/index.php?topic=766190.msg13510513#msg13510513
(5) https://news.ycombinator.com/item?id=10774773
(6) http://rusty.ozlabs.org/?p=573

Edit: fixed some things. Edit 2: tried to clarify some more things and remove some personal bias, thanks to astro.
Why is Blockstream CTO Greg Maxwell u/nullc trying to pretend AXA isn't one of the top 5 "companies that control the world"? AXA relies on debt & derivatives to pretend it's not bankrupt. Million-dollar Bitcoin would destroy AXA's phony balance sheet. How much is AXA paying Greg to cripple Bitcoin?
Typical semantics games and hair-splitting and bullshitting from Greg. But I guess we shouldn't expect too much honesty or even understanding from someone like Greg who thinks that miners don't control Bitcoin. AXA-owned Blockstream CTO Greg Maxwell u/nullc doesn't understand how Bitcoin mining works
Mining is how you vote for rule changes. Greg's comments on BU revealed he has no idea how Bitcoin works. He thought "honest" meant "plays by Core rules." [But] there is no "honesty" involved. There is only the assumption that the majority of miners are INTELLIGENTLY PROFIT-SEEKING. - ForkiusMaximus
Adam Back & Greg Maxwell are experts in mathematics and engineering, but not in markets and economics. They should not be in charge of "central planning" for things like "max blocksize". They're desperately attempting to prevent the market from deciding on this. But it will, despite their efforts.
Gregory Maxwell u/nullc has evidently never heard of terms like "the 1%", "TPTB", "oligarchy", or "plutocracy", revealing a childlike naïveté when he says: "‘Majority sets the rules regardless of what some minority thinks’ is the governing principle behind the fiats of major democracies."
People are starting to realize how toxic Gregory Maxwell is to Bitcoin, saying there are plenty of other coders who could do crypto and networking, and "he drives away more talent than he can attract." Plus, he has a 10-year record of damaging open-source projects, going back to Wikipedia in 2006.
https://np.reddit.com/btc/comments/4klqtg/people_are_starting_to_realize_how_toxic_gregory/

So here we have Greg this week, desperately engaging in his usual little "semantics" games - claiming that AXA isn't technically a bank - when the real point is that: AXA is clearly one of the most powerful fiat finance firms in the world. Maybe when he's talking about the hairball of C++ spaghetti code that he and his fellow devs at Core/Blockstream are slowly turning their version of Bitcoin's codebase into... in that arcane (and increasingly irrelevant :) area maybe he still can dazzle some people with his usual meaningless technically correct but essentially erroneous bullshit. But when it comes to finance and economics, Greg is in way over his head - and in those areas, he can't bullshit anyone. In fact, pretty much everything Greg ever says about finance or economics or banks is simply wrong. He thinks he's proved some point by claiming that AXA isn't technically a bank. But AXA is far worse than a mere "bank" or a mere "French multinational insurance company". AXA is one of the top-five "companies that control the world" - and now (some people think) AXA is in charge of paying for Bitcoin "development". A recent infographic published in the German magazine "Die Zeit" showed that AXA is indeed the second-most-connected finance company in the world - right at the rotten "core" of the "fantasy fiat" financial system that runs our world today.
Who owns the world? (1) Barclays, (2) AXA, (3) State Street Bank. (Infographic in German - but you can understand it without knowing much German: "Wem gehört die Welt?" = "Who owns the world?") AXA is the #2 company with the most economic power and connections in the world. And AXA owns Blockstream.
Blockstream is now controlled by the Bilderberg Group - seriously! AXA Strategic Ventures, co-lead investor for Blockstream's $55 million financing round, is the investment arm of French insurance giant AXA Group - whose CEO Henri de Castries has been chairman of the Bilderberg Group since 2012.
https://np.reddit.com/btc/comments/47zfzt/blockstream_is_now_controlled_by_the_bilderberg/ So, let's get a few things straight here. "AXA" might not be a household name to many people. And Greg was "technically right" when he denied that AXA is a "bank" (which is basically the only kind of "right" that Greg ever is these days: "technically" :-) But AXA is one of the most powerful finance companies in the world. AXA was started as a French insurance company. And now it's a French multinational insurance company. But if you study up a bit on AXA, you'll see that they're not just any old "insurance" company. AXA has their fingers in just about everything around the world - including a certain team of toxic Bitcoin devs who are radically trying to change Bitcoin:
And ever since AXA started throwing tens of millions of dollars in filthy fantasy fiat at a certain toxic dev named Gregory Maxwell, CTO of Blockstream, suddenly he started saying that we can't have nice things like the gradually increasing blocksizes (and gradually increasing Bitcoin prices - which fortunately tend to increase proportional to the square of the blocksize because of Metcalfe's law :-) which were some of the main reasons most of us invested in Bitcoin in the first place. My, my, my - how some people have changed!
Greg Maxwell used to have intelligent, nuanced opinions about "max blocksize", until he started getting paid by AXA, whose CEO is head of the Bilderberg Group - the legacy financial elite which Bitcoin aims to disintermediate. Greg always refuses to address this massive conflict of interest. Why?
Previously, Greg Maxwell u/nullc (CTO of Blockstream), Adam Back u/adam3us (CEO of Blockstream), and u/theymos (owner of r/bitcoin) all said that bigger blocks would be fine. Now they prefer to risk splitting the community & the network, instead of upgrading to bigger blocks. What happened to them?
AXA would be exposed as bankrupt in a world dominated by a "counterparty-free" asset class like Bitcoin.
AXA pays Greg's salary - and Greg is one of the major forces who has been actively attempting to block Bitcoin's on-chain scaling - and there's no getting around the fact that artificially small blocksizes do lead to artificially low prices.
AXA kinda reminds me of AIG

If anyone here was paying attention when the cracks first started showing in the world fiat finance system around 2008, you may recall the name of another mega-insurance company that was also one of the most connected finance companies in the world: AIG.
Falling Giant: A Case Study Of AIG

What was once the unthinkable occurred on September 16, 2008. On that date, the federal government gave the American International Group - better known as AIG (NYSE:AIG) - a bailout of $85 billion. In exchange, the U.S. government received nearly 80% of the firm's equity. For decades, AIG was the world's biggest insurer, a company known around the world for providing protection for individuals, companies and others. But in September, the company would have gone under if it were not for government assistance.
Bernanke did say he believed an AIG failure would be "catastrophic," and that the heavy use of derivatives made the AIG problem potentially more explosive. An AIG failure, thanks to the firm's size and its vast web of trading partners, "would have triggered an intensification of the general run on international banking institutions," Bernanke said.
http://fortune.com/2010/09/02/why-the-fed-saved-aig-and-not-lehman/

Just like AIG, AXA is a "systemically important" finance company - one of the biggest insurance companies in the world. And (like all major banks and insurance firms), AXA is drowning in worthless debt and bets (derivatives). Most of AXA's balance sheet would go up in a puff of smoke if they actually did "mark-to-market" (ie, if they actually factored in the probability of the counterparties of their debts and bets coming through and paying AXA the full amount it says on the pretty little spreadsheets on everyone's computer screens).

In other words: Like most giant banks and insurers, AXA has mainly debt and bets. They rely on counterparties to pay them - maybe, someday, if the whole system doesn't go tits-up by then. In other words: Like most giant banks and insurers, AXA does not hold the "private keys" to their so-called wealth :-)

So, like most giant multinational banks and insurers who spend all their time playing with debts and bets, AXA has been teetering on the edge of the abyss since 2008 - held together by chewing gum and paper clips and the miracle of Quantitative Easing - and also by all the clever accounting tricks that instantly become possible when money can go from being a gleam in a banker's eye to a pixel on a screen with just a few keystrokes - that wonderful world of "fantasy fiat" where central bankers ninja-mine billions of dollars in worthless paper and pixels into existence every month - and then for some reason every other month they have to hold a special "emergency central bankers meeting" to deal with the latest financial crisis du jour which "nobody could have seen coming".

AIG back in 2008 - much like AXA today - was another "systemically important" worldwide mega-insurance giant - with most of its net worth merely a pure fantasy on a spreadsheet and in a four-color annual report - glossing over the ugly reality that it's all based on toxic debts and derivatives which will never ever be paid off.

~~Mega-banks~~ Mega-insurers like AXA are addicted to the never-ending "fantasy fiat" being injected into the casino of musical chairs involving bets upon bets upon bets upon bets upon bets - counterparty against counterparty against counterparty against counterparty - going 'round and 'round on the big beautiful carrousel where everyone is waiting on the next guy to pay up - and meanwhile everyone's cooking their books and sweeping their losses "under the rug", offshore or onto the taxpayers or into special-purpose vehicles - while the central banks keep printing up a trillion more here and a trillion more there in worthless debt-backed paper and pixels - while entire nations slowly sink into the toxic financial sludge of ever-increasing unpayable debt and lower productivity and higher inflation, dragging down everyone's economies, enslaving everyone to increasing worktime and decreasing paychecks and unaffordable healthcare and education, corrupting our institutions and our leaders, distorting our investment and "capital allocation" decisions, inflating housing and healthcare and education beyond everyone's reach - and sending people off to die in endless wars to prop up the deadly failing Saudi-American oil-for-arms Petrodollar ninja-mined currency cartel.
In 2008, when the multinational insurance company AIG (along with their fellow gambling buddies at the multinational investment banks Bear Stearns and Lehmans) almost went down the drain due to all their toxic gambling debts, they also almost took the rest of the world with them. And that's when the "core" dev team working for the ~~miners~~ central banks (the Fed, ECB, BoE, BoJ - who all report to the "central bank of central banks" BIS in Basel) started cranking up their ~~mining rigs~~ printing presses and keyboards and pixels to the max, unilaterally manipulating the "issuance schedule" of their shitcoins and flooding the world with tens of trillions in their worthless phoney fiat to save their sorry asses after all their toxic debts and bad bets.

AXA is at the very rotten "core" of this system - like AIG, a "systemically important" (ie, "too big to fail") mega-gigantic multinational insurance company - a fantasy fiat finance firm quietly sitting at the rotten core of our current corrupt financial system, basically impacting everything and everybody on this planet.

The "masters of the universe" from AXA are the people who go to Davos every year wining and dining on lobster and champagne - part of that elite circle that prints up endless money which they hand out to their friends while they continue to enslave everyone else - and then of course they always turn around and tell us we can't have nice things like roads and schools and healthcare because "austerity". (But somehow we always can have plenty of wars and prisons and climate change and terrorism, because for some weird reason our "leaders" seem to love creating disasters.)

The smart people at AXA are probably all having nightmares - and the smart people at all the other companies in that circle of "too-big-to-fail" "fantasy fiat finance firms" are probably also having nightmares - about the following very possible scenario: If Bitcoin succeeds, debt-and-derivatives-dependent financial "giants" like AXA will probably be exposed as having been bankrupt this entire time. All their debts and bets will be exposed as not being worth the paper and pixels they were printed on - and at that point, in a cryptocurrency world, the only real money in the world will be "counterparty-free" assets, ie cryptocurrencies like Bitcoin - where all you need to hold is your own private keys - and you're not dependent on the next deadbeat debt-ridden fiat slave down the line coughing up to pay you.

Some of those people at AXA and the rest of that mafia are probably quietly buying - sad that they missed out when Bitcoin was only $10 or $100 - but happy they can still get it for $1000 while Blockstream continues to suppress the price - and who knows, what the hell, they might as well throw some of that juicy "banker's bonus" into Bitcoin now just in case it really does go to $1 million a coin someday - which it could easily do with just 32MB blocks, and no modifications to the code (ie, no SegWit, no BU, no nuthin', just a slowly growing blocksize supporting a price growing roughly proportional to the square of the blocksize - like Bitcoin always actually did before the economically illiterate devs at Blockstream imposed their centrally planned blocksize on our previously decentralized system).
Meanwhile, other people at AXA and other major finance firms might be taking a different tack: happy to see all the disinfo and discord being sown among the Bitcoin community, like they've been doing since they were founded in late 2014 - buying out all the devs, dumbing down the community to the point where now even the CTO of Blockstream, Greg Maxwell, gets the whitepaper totally backwards. Maybe Core/Blockstream's failure-to-scale is a feature not a bug - for companies like AXA. After all, AXA - like most of the major banks in Europe and the US - is now basically totally dependent on debt and derivatives to pretend it's not already bankrupt. Maybe Blockstream's dead-end road-map (written up by none other than Greg Maxwell), which has been slowly strangling Bitcoin for over two years now - and which could ultimately destroy Bitcoin via the poison pill of Core/Blockstream's SegWit trojan horse - maybe all this never-ending history of obstruction and foot-dragging and lying and failure from Blockstream is actually a feature and not a bug, as far as AXA and their banking buddies are concerned.
The insurance company with the biggest exposure to the 1.2 quadrillion dollar (ie, 1200 TRILLION dollar) derivatives casino is AXA. Yeah, that AXA, the company whose CEO is head of the Bilderberg Group, and whose "venture capital" arm bought out Bitcoin development by "investing" in Blockstream.
If Bitcoin becomes a major currency, then tens of trillions of dollars on the "legacy ledger of fantasy fiat" will evaporate, destroying AXA, whose CEO is head of the Bilderbergers. This is the real reason why AXA bought Blockstream: to artificially suppress Bitcoin volume and price with 1MB blocks.
This trader's price & volume graph / model predicted that we should be over $10,000 USD/BTC by now. The model broke in late 2014 - when AXA-funded Blockstream was founded, and started spreading propaganda and crippleware, centrally imposing artificially tiny blocksize to suppress the volume & price.
"I'm angry about AXA scraping some counterfeit money out of their fraudulent empire to pay autistic lunatics millions of dollars to stall the biggest sociotechnological phenomenon since the internet and then blame me and people like me for being upset about it." ~ u/dresden_k
Bitcoin can go to 10,000 USD with 4 MB blocks, so it will go to 10,000 USD with 4 MB blocks. All the censorship & shilling on r/bitcoin & fantasy fiat from AXA can't stop that. BitcoinCORE might STALL at 1,000 USD and 1 MB blocks, but BITCOIN will SCALE to 10,000 USD and 4 MB blocks - and beyond
AXA/Blockstream are suppressing Bitcoin price at 1000 bits = 1 USD. If 1 bit = 1 USD, then Bitcoin's market cap would be 15 trillion USD - close to the 82 trillion USD of "money" in the world. With Bitcoin Unlimited, we can get to 1 bit = 1 USD on-chain with 32MB blocksize ("Million-Dollar Bitcoin")
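The arithmetic behind that title, as a quick sanity check (the ~15M coins in circulation is my assumption about the era's supply figure):

    # "1 bit = 1 USD" => million-dollar bitcoin, since 1 BTC = 1,000,000 bits
    btc_supply = 15_000_000                 # approx. coins in circulation (assumed)
    bits_per_btc = 1_000_000
    print(btc_supply * bits_per_btc * 1.0)  # 1.5e13 -> ~15 trillion USD market cap
    print(bits_per_btc * 1.0)               # 1e6  -> $1,000,000 per BTC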
Greg Maxwell has now publicly confessed that he is engaging in deliberate market manipulation to artificially suppress Bitcoin adoption and price. He could be doing this so that he and his associates can continue to accumulate while the price is still low (1 BTC = $570, ie 1 USD can buy 1750 "bits")
Why did Blockstream CTO u/nullc Greg Maxwell risk being exposed as a fraud, by lying about basic math? He tried to convince people that Bitcoin does not obey Metcalfe's Law (claiming that Bitcoin price & volume are not correlated, when they obviously are). Why is this lie so precious to him?
https://www.reddit.com/btc/comments/57dsgz/why_did_blockstream_cto_unullc_greg_maxwell_risk/

I don't know how a so-called Bitcoin dev can sleep at night knowing he's getting paid by fucking AXA - a company that would probably go bankrupt if Bitcoin becomes a major world currency. Greg must have to go through some pretty complicated mental gymnastics to justify in his mind what everyone else can see: he is a fucking sellout to one of the biggest fiat finance firms in the world - he's getting paid by (and defending) a company which would probably go bankrupt if Bitcoin ever achieved multi-trillion dollar market cap. Greg is literally getting paid by the second-most-connected "systemically important" (ie, "too big to fail") finance firm in the world - which will probably go bankrupt if Bitcoin were ever to assume its rightful place as a major currency with total market cap measured in the tens of trillions of dollars, destroying most of the toxic sludge of debt and derivatives keeping a ~~bank~~ financial giant like AXA afloat.

And it may at first sound batshit crazy (until You Do The Math), but Bitcoin actually really could go to one-million-dollars-a-coin in the next 8 years or so - without SegWit or BU or anything else - simply by continuing with Satoshi's original 32MB built-in blocksize limit and continuing to let miners keep blocks as small as possible to satisfy demand while avoiding orphans - a power which they've had this whole friggin' time and which they've been managing very well thank you.
Bitcoin Original: Reinstate Satoshi's original 32MB max blocksize. If actual blocks grow 54% per year (and price grows 1.54^2 = 2.37x per year - Metcalfe's Law), then in 8 years we'd have 32MB blocks, 100 txns/sec, 1 BTC = 1 million USD - 100% on-chain P2P cash, without SegWit/Lightning or Unlimited
https://np.reddit.com/btc/comments/5uljaf/bitcoin_original_reinstate_satoshis_original_32mb/

Meanwhile Greg continues to work for Blockstream, which is getting tens of millions of dollars from a company which would go bankrupt if Bitcoin were to actually scale on-chain to 32MB blocks and 1 million dollars per coin without all of Greg's meddling. So Greg continues to get paid by AXA, spreading his ignorance about economics and his lies about Bitcoin on these forums. In the end, who knows what Greg's motivations are, or AXA's motivations are. But one thing we do know is this: Satoshi didn't put Greg Maxwell or AXA in charge of deciding the blocksize.

The tricky part to understand about "one CPU, one vote" is that it does not mean there is some "pre-existing set of rules" which the miners somehow "enforce" (despite all the times when you hear some Core idiot using words like "consensus layer" or "enforcing the rules"). The tricky part about really understanding Bitcoin is this: Hashpower doesn't just enforce the rules - hashpower makes the rules. And if you think about it, this makes sense. It's the only way Bitcoin actually could be decentralized. It's kinda subtle - and it might be hard for someone to understand if they've been a slave to centralized authorities their whole life - but when we say that Bitcoin is "decentralized" then what it means is: We all make the rules. Because if hashpower doesn't make the rules - then you'd be right back where you started from, with some idiot like Greg Maxwell "making the rules" - or some corrupt too-big-to-fail ~~bank~~ debt-and-derivative-backed "fantasy fiat financial firm" like AXA making the rules - by buying out a dev team and telling us that that dev team "makes the rules".

But fortunately, Greg's opinions and ignorance and lies don't matter anymore. Miners are waking up to the fact that they've always controlled the blocksize - and they always will control the blocksize - and there isn't a single goddamn thing Greg Maxwell or Blockstream or AXA can do to stop them from changing it - whether the miners end up using BU or Classic or BitcoinEC or they patch the code themselves.
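For anyone who wants to "Do The Math" from the title above, here is the claimed trajectory worked out (a toy extrapolation on the post's own assumptions of roughly 1 MB blocks and $1000/BTC as the starting point - not a forecast):

    # Blocks grow 54%/yr; by Metcalfe's law price ~ n^2, so it grows 1.54^2 ~ 2.37x/yr
    block_mb, price = 1.0, 1000.0
    for year in range(1, 9):
        block_mb *= 1.54
        price *= 1.54 ** 2
        print(f"year {year}: {block_mb:5.1f} MB blocks, ${price:,.0f}/BTC")
    # year 8: ~32 MB blocks and ~$1,000,000/BTC, matching the post's claim

At roughly 500 bytes per transaction, 32 MB every ten minutes also works out to about 100 transactions per second, which is where the "100 txns/sec" figure in the title above comes from.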
The debate is not "SHOULD THE BLOCKSIZE BE 1MB VERSUS 1.7MB?". The debate is: "WHO SHOULD DECIDE THE BLOCKSIZE?" (1) Should an obsolete temporary anti-spam hack freeze blocks at 1MB? (2) Should a centralized dev team soft-fork the blocksize to 1.7MB? (3) OR SHOULD THE MARKET DECIDE THE BLOCKSIZE?
Core/Blockstream are now in the Kübler-Ross "Bargaining" phase - talking about "compromise". Sorry, but markets don't do "compromise". Markets do COMPETITION. Markets do winner-takes-all. The whitepaper doesn't talk about "compromise" - it says that 51% of the hashpower determines WHAT IS BITCOIN.
Clearing up Some Widespread Confusions about BU
Adjustable blocksize cap (ABC) is dangerous? The blocksize cap has always been user-adjustable. Core just has a really shitty interface for it. What does it tell you that Core and its supporters are up in arms about a change that merely makes something more convenient for users and couldn't be prevented from happening anyway? Attacking the adjustable blocksize feature in BU and Classic as "dangerous" is a kind of trap, as it is an implicit admission that Bitcoin was being protected only by a small barrier of inconvenience, and a completely temporary one at that. If this was such a "danger" or such a vector for an "attack," how come we never heard about it before? Even if we accept the improbable premise that inconvenience is the great bastion holding Bitcoin together, and the paternalistic premise that stakeholders need to be fed consensus using a spoon of inconvenience, we still must ask: who shall do the spoonfeeding? Core accepts these two amazing premises and further declares that Core alone shall be allowed to do the spoonfeeding. Or rather, if you really want to, you can be spoonfed by other implementation clients like libbitcoin and btcd - as long as they are all feeding you the same stances on controversial consensus settings as Core does. It is high time the community see central planning and abuse of power for what they are, and reject both (a sketch of how BU's adjustable settings actually behave follows the list below):
Throw off central planning by removing petty "inconvenience walls" (such as baked-in, dev-recommended blocksize caps) that interfere with stakeholders coordinating choices amongst themselves on controversial matters ...
Make such abuse of power impossible by encouraging many competing implementations to grow and blossom
https://np.reddit.com/btc/comments/617gf9/adjustable_blocksize_cap_abc_is_dangerous_the/ So it's time for Blockstream CTO Greg Maxwell u/nullc to get over his delusions of grandeur - and to admit he's just another dev, with just another opinion. He also needs to look in the mirror and search his soul and confront the sad reality that he's basically turned into a sellout working for a shitty startup getting paid by the 5th (or 4th or 2nd) "most connected", "systemically important", "too-big-to-fail", debt-and-derivative-dependent multinational bank mega-insurance giant in the world, AXA - a major fiat finance firm which is terrified of going bankrupt, just like that other mega-insurance firm AIG almost did before the Fed rescued them in 2008 - a fiat finance firm which is probably very conflicted about Bitcoin, at the very least. Blockstream CTO Greg Maxwell is getting paid by the most systemically important bank mega-insurance giant in the world, sitting at the rotten "core" of our civilization's corrupt, dying fiat cartel. Blockstream CTO Greg Maxwell is getting paid by a mega-bank mega-insurance company that will probably go bankrupt if and when Bitcoin ever gets a multi-trillion dollar market cap, which it can easily do with just 32MB blocks and no code changes at all from clueless meddling devs like him.
Hearn: "I know there are other companies that would like to be more overt [re block size preferences] but they're scared of theymos erasing them..."
Today, we got the news of Bitstamp's intention to move towards supporting BIP101. Accordingly, the other forum has promised[1] a ban of Bitstamp discussion. Earlier this month, during an AMA, Mike Hearn said this:
"Industry has been pretty quiet over the past 7-8 months or so. Mostly I think they were hoping this whole [blocksize] nightmare would just go away. In recent days you saw Coinbase start to get more aggressive because they realised nothing was happening. I know there are other companies that would like to be more overt too but they're scared of theymos erasing them from bitcoin.org because they rely on referral traffic there."2
Consider that the forces opposing larger blocks have created an incentive for industry and miners to keep quiet until the last possible minute. That way, for businesses, censorship can be postponed (maximizing referral revenues), and for nodes/miners, DDoS attacks can be deferred (unfortunately, DDoS attacks have already been waged against early XT nodes). Therefore: fascinating times. We're approaching a tipping point where the free market will make its voice heard. In so doing, these contributors to the ecosystem will make a collective exit, and get banned / excised from the world of Core. Core-owned discussion venues, once lively with open discussion of the ecosystem, will have nothing left to talk about. Let's be cautiously optimistic: the appeal of a free, open, and valuable currency seems stronger than the appeal of closed systems. When has censorship ever been preferred to the alternative?
Preventing double-spends is an "embarrassingly parallel" massive search problem - like Google, Folding@home, SETI@home, or PrimeGrid. BUIP024 "address sharding" is similar to Google's MapReduce & Berkeley's BOINC grid computing - "divide-and-conquer" providing unlimited on-chain scaling for Bitcoin.
TL;DR: Like all other successful projects involving "embarrassingly parallel" search problems in massive search spaces, Bitcoin can and should - and inevitably will - move to a distributed computing paradigm based on successful "sharding" architectures such as Google Search (based on Google's MapReduce algorithm), or Folding@home, SETI@home, or PrimeGrid (based on Berkeley's BOINC grid computing architecture) - which use simple mathematical "decompose" and "recompose" operations to break big problems into tiny pieces, providing virtually unlimited scaling (plus fault tolerance) at the logical / software level, on top of possibly severely limited (and faulty) resources at the physical / hardware level. The discredited "heavy" (and over-complicated) design philosophy of centralized "legacy" dev teams such as Core / Blockstream (requiring every single node to download, store and verify the massively growing blockchain, and pinning their hopes on non-existent off-chain vaporware such as the so-called "Lightning Network" which has no mathematical definition and is missing crucial components such as decentralized routing) is doomed to failure, and will be out-competed by simpler on-chain "lightweight" distributed approaches such as distributed trustless Merkle trees or BUIP024's "Address Sharding" emerging from independent devs such as u/thezerg1 (involved with Bitcoin Unlimited). No one in their right mind would expect Google's vast search engine to fit entirely on a Raspberry Pi behind a crappy Internet connection - and no one in their right mind should expect Bitcoin's vast financial network to fit entirely on a Raspberry Pi behind a crappy Internet connection either. Any "normal" (ie, competent) company with $76 million to spend could provide virtually unlimited on-chain scaling for Bitcoin in a matter of months - simply by working with devs who would just go ahead and apply the existing obvious mature successful tried-and-true "recipes" for solving "embarrassingly parallel" search problems in massive search spaces, based on standard DISTRIBUTED COMPUTING approaches like Google Search (based on Google's MapReduce algorithm), or Folding@home, SETI@home, or PrimeGrid (based on Berkeley's BOINC grid computing architecture). The fact that Blockstream / Core devs refuse to consider any standard DISTRIBUTED COMPUTING approaches just proves that they're "embarrassingly stupid" - and the only way Bitcoin will succeed is by routing around their damage. Proven, mature sharding architectures like the ones powering Google Search, Folding@home, SETI@home, or PrimeGrid will allow Bitcoin to achieve virtually unlimited on-chain scaling, with minimal disruption to the existing Bitcoin network topology and mining and wallet software.

Longer Summary: People who argue that "Bitcoin can't scale" - because it involves major physical / hardware requirements (lots of processing power, upload bandwidth, storage space) - are at best simply misinformed or incompetent - or at worst outright lying to you. Bitcoin mainly involves searching the blockchain to prevent double-spends - and so it is similar to many other projects involving "embarrassingly parallel" searching in massive search spaces - like Google Search, Folding@home, SETI@home, or PrimeGrid. But there's a big difference between those long-running wildly successful massively distributed infinitely scalable parallel computing projects, and Bitcoin.
Those other projects do their data storage and processing across a distributed network. But Bitcoin (under the misguided "leadership" of Core / Blockstream devs) insists on a fatally flawed design philosophy where every individual node must be able to download, store and verify the system's entire data structure. And it's even worse than that - they want to let the least powerful nodes in the system dictate the resource requirements for everyone else. Meanwhile, those other projects are all based on some kind of "distributed computing" involving "sharding". They achieve massive scaling by adding a virtually unlimited (and fault-tolerant) logical / software layer on top of the underlying resource-constrained / limited physical / hardware layer - using approaches like Google's MapReduce algorithm or Berkeley's Open Infrastructure for Network Computing (BOINC) grid computing architecture. This shows that it is a fundamental error to continue insisting on viewing an individual Bitcoin "node" as the fundamental "unit" of the Bitcoin network. Coordinated distributed pools already exist for mining the blockchain - and eventually coordinated distributed trustless architectures will also exist for verifying and querying it. Any architecture or design philosophy where a single "node" is expected to be forever responsible for storing or verifying the entire blockchain is the wrong approach, and is doomed to failure. The most well-known example of this doomed approach is Blockstream / Core's "roadmap" - which is based on two disastrously erroneous design requirements:
Core / Blockstream support convoluted, incomplete off-chain scaling approaches such as the so-called "Lightning Network" - which lacks a mathematical foundation, and also has some serious gaps (eg, no solution for decentralized routing).
Instead, the future of Bitcoin will inevitably be based on unlimited on-chain scaling, where all of Bitcoin's existing algorithms and data structures and networking are essentially preserved unchanged / as-is - but they are distributed at the logical / software level using sharding approaches such as u/thezerg1's BUIP024 or distributed trustless Merkle trees. These kinds of sharding architectures will allow individual nodes to use a minimum of physical resources to access a maximum of logical storage and processing resources across a distributed network with virtually unlimited on-chain scaling - where every node will be able to use and verify the entire blockchain without having to download and store the whole thing - just like Google Search, Folding@home, SETI@home, or PrimeGrid and other successful distributed sharding-based projects have already been successfully doing for years.

Details: Sharding, which has been so successful in many other areas, is a topic that keeps resurfacing in various shapes and forms among independent Bitcoin developers. The highly successful track record of sharding architectures on other projects involving "embarrassingly parallel" massive search problems (harnessing resource-constrained machines at the physical level into a distributed network at the logical level, in order to provide fault tolerance and virtually unlimited scaling searching for web pages, interstellar radio signals, protein sequences, or prime numbers in massive search spaces up to hundreds of terabytes in size) provides convincing evidence that sharding architectures will also work for Bitcoin (which also requires virtually unlimited on-chain scaling, searching the ever-expanding blockchain for previous "spends" from an existing address, before appending a new transaction from this address to the blockchain). Below are some links involving proposals for sharding Bitcoin, plus more discussion and related examples.
[Brainstorming] "Let's Fork Smarter, Not Harder"? Can we find some natural way(s) of making the scaling problem "embarrassingly parallel", perhaps introducing some hierarchical (tree) structures or some natural "sharding" at the level of the network and/or the mempool and/or the blockchain?
"Braiding the Blockchain" (32 min + Q&A): We can't remove all sources of latency. We can redesign the "chain" to tolerate multiple simultaneous writers. Let miners mine and validate at the same time. Ideal block time / size / difficulty can become emergent per-node properties of the network topology
https://np.reddit.com/btc/comments/4su1gf/braiding_the_blockchain_32_min_qa_we_cant_remove/ Some kind of sharding - perhaps based on address sharding as in BUIP024, or based on distributed trustless Merkle trees as proposed earlier by u/thezerg1 - is very likely to turn out to be the simplest and safest approach towards massive on-chain scaling.

A thought experiment showing that we already have most of the ingredients for a kind of simplistic "instant sharding"

A simplistic thought experiment can be used to illustrate how easy it could be to do sharding - with almost no changes to the existing Bitcoin system. Recall that Bitcoin addresses and keys are composed from an alphabet of 58 characters. So, in this simplified thought experiment, we will outline a way to add a kind of "instant sharding" within the existing system - by using the last character of each address in order to assign that address to one of 58 shards. (Maybe you can already see where this is going...) Similar to vanity address generation, a user who wants to receive Bitcoins would be required to generate 58 different receiving addresses (each ending with a different character) - and, similarly, miners could be required to pick one of the 58 shards to mine on. Then, when a user wanted to send money, they would have to look at the last character of their "send from" address - and also select a "send to" address ending in the same character - and presto! We already have a kind of simplistic "instant sharding". (And note that this part of the thought experiment would require only the "softest" kind of soft fork: indeed, we haven't changed any of the code at all, but instead we simply adopted a new convention by agreement, while using the existing code.) Of course, this simplistic "instant sharding" example would still need a few more features in order to be complete - but they'd all be fairly straightforward to provide (a toy code sketch follows the list below):
A transaction can actually send from multiple addresses, to multiple addresses - so the approach of simply looking at the final character of a single (receive) address would not be enough to instantly assign a transaction to a particular shard. But a slightly more sophisticated decision criterion could easily be developed - and computed using code - to assign every transaction to a particular shard, based on the "from" and "to" addresses in the transaction. The basic concept from the "simplistic" example would remain the same, sharding the network based on some characteristic of transactions.
If we had 58 shards, then the mining reward would have to be decreased to 1/58 of what it currently is - and also the mining hash power on each of the shards would end up being roughly 1/58 of what it is now. In general, many people might agree that decreased mining rewards would actually be a good thing (spreading out mining rewards among more people, instead of the current situation where mining is done by about 8 entities). Also, network hashing power has been growing insanely for years, so we probably have way more than enough to secure the network - after all, Bitcoin was secure back when network hash power was 1/58 of what it is now.
This simplistic example does not handle cases where you need to do "cross-shard" transactions. But it should be feasible to implement such a thing. The various proposals from u/thezerg1 such as BUIP024 do deal with "cross-shard" transactions.
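To make the thought experiment concrete, here is a toy sketch of the shard-assignment convention in Python (the names and alphabet handling are my own illustration - this is not BUIP024 code):

    # Toy sketch of the "instant sharding" convention described above.
    # We shard purely on the final base58 character of an address.
    BASE58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

    def shard_of(address: str) -> int:
        """Map an address to one of 58 shards via its last character."""
        return BASE58_ALPHABET.index(address[-1])

    def same_shard_payment(send_addr: str, recv_addr: str) -> bool:
        """Under the convention, a simple payment stays within one shard
        only if both addresses end in the same character."""
        return shard_of(send_addr) == shard_of(recv_addr)

    # A miner who picked shard 7 would only mine transactions whose
    # addresses all satisfy shard_of(addr) == 7.
    print(shard_of("1BvBMSEYstWetqTFn5Au4m4GFg7xJaNVN2"))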
(Also, the fact that simplified address-based sharding mechanics can be outlined in just a few paragraphs as shown here suggests that this might be "simple and understandable enough to actually work" - unlike something such as the so-called "Lightning Network", which is actually just a catchy-sounding name with no clearly defined mechanics or mathematics behind it.) Addresses are plentiful, and can be generated locally, and you can generate addresses satisfying a certain pattern (eg ending in a certain character) the same way people can already generate vanity addresses. So imposing a "convention" where the "send" and "receive" address would have to end in the same character (and where the miner has to only mine transactions in that shard) would be easy to understand and do. Similarly, the earlier solution proposed by u/thezerg1, involving distributed trustless Merkle trees, is easy to understand: you'd just be distributing the Merkle tree across multiple nodes, while still preserving its immutability guarantees. Such approaches don't really change much about the actual system itself. They preserve the existing system, and just split its data structures into multiple pieces, distributed across the network. As long as we have the appropriate operators for decomposing and recomposing the pieces, then everything should work the same - but more efficiently, with unlimited on-chain scaling, and much lower resource requirements. The examples below show how these kinds of "sharding" approaches have already been implemented successfully in many other systems.

Massive search is already efficiently performed with virtually unlimited scaling using divide-and-conquer / decompose-and-recompose approaches such as MapReduce and BOINC. Every time you do a Google search, you're using Google's MapReduce algorithm to solve an embarrassingly parallel problem. And distributed computing grids using the Berkeley Open Infrastructure for Network Computing (BOINC) are constantly setting new records searching for protein combinations, prime numbers, or radio signals from possible intelligent life in the universe. We all use Google to search hundreds of terabytes of data on the web and get results in a fraction of a second - using cheap "commodity boxes" on the server side, and possibly using limited bandwidth on the client side - with fault tolerance to handle crashing servers and dropped connections. Other examples are Folding@home, SETI@home and PrimeGrid - involving searching massive search spaces for protein sequences, interstellar radio signals, or prime numbers hundreds of thousands of digits long. Each of these examples uses sharding to decompose a giant search space into smaller sub-spaces which are searched separately in parallel, and then the resulting (sub-)solutions are recomposed to provide the overall search results. It seems obvious to apply this tactic to Bitcoin - searching the blockchain for existing transactions involving a "send" from an address, before appending a new "send" transaction from that address to the blockchain. Some people might object that those systems are different from Bitcoin. But we should remember that preventing double-spends (the main thing that Bitcoin does) is, after all, an embarrassingly parallel massive search problem - and all of these other systems also involve embarrassingly parallel massive search problems. The mathematics of Google's MapReduce and Berkeley's BOINC is simple, elegant, powerful - and provably correct.
Google's MapReduce and Berkeley's BOINC have demonstrated that in order to provide massive scaling for efficient searching of massive search spaces, all you need is...
an appropriate "decompose" operation,
an appropriate "recompose" operation,
the necessary coordination mechanisms
...in order to distribute a single problem across multiple, cheap, fault-tolerant processors. This allows you to decompose the problem into tiny sub-problems, solve each sub-problem to provide a sub-solution, and then recompose the sub-solutions into the overall solution - gaining virtually unlimited scaling and massive efficiency. The only "hard" part involves analyzing the search space in order to select the appropriate DECOMPOSE and RECOMPOSE operations which guarantee that recomposing the "sub-solutions" obtained by decomposing the original problem is equivalent to solving the original problem. This essential property could be expressed in "pseudo-code" as follows:
(DECOMPOSE ; SUB-SOLVE ; RECOMPOSE) = (SOLVE)
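To make this property concrete, here is a toy sketch (my own illustration, with hypothetical names) that treats double-spend detection as the embarrassingly parallel search, and checks that decompose-then-recompose gives the same answer as a direct solve:

    # Toy sketch of (DECOMPOSE ; SUB-SOLVE ; RECOMPOSE) = (SOLVE), using
    # "has this output already been spent?" as the search problem.
    from concurrent.futures import ThreadPoolExecutor

    def decompose(spent_set, n_shards):
        """Split the global spent-output set into n_shards disjoint pieces."""
        shards = [set() for _ in range(n_shards)]
        for outpoint in spent_set:
            shards[hash(outpoint) % n_shards].add(outpoint)
        return shards

    def sub_solve(shard, outpoint):
        """Search a single shard for a previous spend of this outpoint."""
        return outpoint in shard

    def recompose(sub_results):
        """A double-spend exists iff any shard reports a hit."""
        return any(sub_results)

    def solve(spent_set, outpoint):
        """The undistributed reference computation."""
        return outpoint in spent_set

    spent = {("txid_a", 0), ("txid_b", 1)}
    query = ("txid_a", 0)
    shards = decompose(spent, n_shards=4)
    with ThreadPoolExecutor() as pool:
        subs = list(pool.map(lambda s: sub_solve(s, query), shards))
    assert recompose(subs) == solve(spent, query)  # the key equivalence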
Selecting the appropriate DECOMPOSE and RECOMPOSE operations (and implementing the inter-machine communication coordination) can be somewhat challenging, but it's certainly doable. In fact, as mentioned already, these things have already been done in many distributed computing systems. So there's hardly any "original" work to be done in this case. All we need to focus on now is translating the existing single-processor architecture of Bitcoin to a distributed architecture, adopting the mature, proven, efficient "recipes" provided by the many examples of successful distributed systems already up and running, such as Google Search (based on Google's MapReduce algorithm), or Folding@home, SETI@home, or PrimeGrid (based on Berkeley's BOINC grid computing architecture). That's what any "competent" company with $76 million to spend would have done already - simply work with some devs who know how to implement open-source distributed systems, and focus on adapting Bitcoin's particular data structures (Merkle trees, hashed chains) to a distributed environment. That's a realistic roadmap that any team of decent programmers with distributed computing experience could easily implement in a few months, and any decent managers could easily manage and roll out on a pre-determined schedule - instead of all these broken promises and missed deadlines and non-existent vaporware and pathetic excuses we've been getting from the incompetent losers and frauds involved with Core / Blockstream.

ASIDE: MapReduce and BOINC are based on math - but the so-called "Lightning Network" is based on wishful thinking involving kludges on top of workarounds on top of hacks - which is how you can tell that LN will never work. Once you have succeeded in selecting the appropriate mathematical DECOMPOSE and RECOMPOSE operations, you get simple massive scaling - and it's also simple for anyone to verify that these operations are correct - often in about a half-page of math and code. An example of this kind of elegance and brevity (and provable correctness) involving compositionality can be seen in this YouTube clip by the accomplished mathematician Lucius Greg Meredith presenting some operators for scaling Ethereum - in just a half page of code: https://youtu.be/uzahKc_ukfM?t=1101 Conversely, if you fail to select the appropriate mathematical DECOMPOSE and RECOMPOSE operations, then you end up with a convoluted mess of wishful thinking - like the "whitepaper" for the so-called "Lightning Network", which is just a cool-sounding name with no actual mathematics behind it. The LN "whitepaper" is an amateurish, non-mathematical meandering mishmash of 60 pages of "Alice sends Bob" examples involving hacks on top of workarounds on top of kludges - also containing a fatal flaw (a lack of any proposed solution for doing decentralized routing). The disaster of the so-called "Lightning Network" - involving adding never-ending kludges on top of hacks on top of workarounds (plus all kinds of "timing" dependencies) - is reminiscent of the "epicycles" which were desperately added in a last-ditch attempt to make Ptolemy's "geocentric" system work - based on the incorrect assumption that the Sun revolved around the Earth. This is how you can tell that the approach of the so-called "Lightning Network" is simply wrong, and it would never work - because it fails to provide appropriate (and simple, and provably correct) mathematical DECOMPOSE and RECOMPOSE operations in less than a single page of math and code.
Meanwhile, sharding approaches based on a DECOMPOSE and RECOMPOSE operation are simple and elegant - and "functional" (ie, they don't involve "procedural" timing dependencies like keeping your node running all the time, or closing out your channel before a certain deadline). Bitcoin only has 6,000 nodes - but the leading sharding-based projects have over 100,000 nodes, with no financial incentives. Many of these sharding-based projects have many more nodes than the Bitcoin network. The Bitcoin network currently has about 6,000 nodes - even though there are financial incentives for running a node (ie, verifying your own Bitcoin balance). Folding@home and SETI@home each have over 100,000 active users - even though these projects don't provide any financial incentives. This higher number of users might be due in part to the low resource demands of these BOINC-based projects, which are all based on sharding the data set.

Folding@home
As part of the client-server network architecture, the volunteered machines each receive pieces of a simulation (work units), complete them, and return them to the project's database servers, where the units are compiled into an overall simulation. In 2007, Guinness World Records recognized Folding@home as the most powerful distributed computing network. As of September 30, 2014, the project has 107,708 active CPU cores and 63,977 active GPUs for a total of 40.190 x86 petaFLOPS (19.282 native petaFLOPS). At the same time, the combined efforts of all distributed computing projects under BOINC totals 7.924 petaFLOPS.
SETI@home

Using distributed computing, SETI@home sends the millions of chunks of data to be analyzed off-site by home computers, and then has those computers report the results. Thus what appears an onerous problem in data analysis is reduced to a reasonable one by aid from a large, Internet-based community of borrowed computer resources. Observational data are recorded on 2-terabyte SATA hard disk drives at the Arecibo Observatory in Puerto Rico, each holding about 2.5 days of observations, which are then sent to Berkeley. Arecibo does not have a broadband Internet connection, so data must go by postal mail to Berkeley. Once there, it is divided in both time and frequency domains into work units of 107 seconds of data, or approximately 0.35 megabytes (350 kilobytes or 350,000 bytes), which overlap in time but not in frequency. These work units are then sent from the SETI@home server over the Internet to personal computers around the world to analyze. Data is merged into a database using SETI@home computers in Berkeley. The SETI@home distributed computing software runs either as a screensaver or continuously while a user works, making use of processor time that would otherwise be unused. Active users: 121,780 (January 2015)
PrimeGrid is a distributed computing project for searching for prime numbers of world-record size. It makes use of the Berkeley Open Infrastructure for Network Computing (BOINC) platform. Active users: 8,382 (March 2016)
A MapReduce program is composed of a Map() procedure (method) that performs filtering and sorting (such as sorting students by first name into queues, one queue for each name) and a Reduce() method that performs a summary operation (such as counting the number of students in each queue, yielding name frequencies).
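That description can be expressed in a few lines of Python (a minimal illustration of the pattern, not Google's actual API):

    # Map(): sort students into one queue per first name.
    # Reduce(): count each queue, yielding name frequencies.
    from collections import defaultdict

    students = ["Alice", "Bob", "Alice", "Carol", "Bob", "Alice"]

    queues = defaultdict(list)          # the Map step
    for name in students:
        queues[name].append(name)

    frequencies = {name: len(q) for name, q in queues.items()}  # the Reduce step
    print(frequencies)                  # {'Alice': 3, 'Bob': 2, 'Carol': 1}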
How can we go about developing sharding approaches for Bitcoin? We have to identify a part of the problem which is in some sense "invariant" or "unchanged" under the operations of DECOMPOSE and RECOMPOSE - and we also have to develop a coordination mechanism which orchestrates the DECOMPOSE and RECOMPOSE operations among the machines. The simplistic thought experiment above outlined an "instant sharding" approach where we would agree upon a convention where the "send" and "receive" address would have to end in the same character - instantly providing a starting point illustrating some of the mechanics of an actual sharding solution. BUIP024 involves address sharding and deals with the additional features needed for a complete solution - such as cross-shard transactions. And distributed trustless Merkle trees would involve storing Merkle trees across a distributed network - which would provide the same guarantees of immutability, while drastically reducing storage requirements. So how can we apply ideas like MapReduce and BOINC to providing massive on-chain scaling for Bitcoin? First we have to examine the structure of the problem that we're trying to solve - and we have to try to identify how the problem involves a massive search space which can be decomposed and recomposed. In the case of Bitcoin, the problem involves:
sequentializing (serializing) APPEND operations to a blockchain data structure
in such a way as to avoid double-spends
Can we view "preventing Bitcoin double-spends" as a "massive search space problem"? Yes we can! Just like Google efficiently searches hundreds of terabytes of web pages for a particular phrase (and [email protected], [email protected], PrimeGrid etc. efficiently search massive search spaces for other patterns), in the case of "preventing Bitcoin double-spends", all we're actually doing is searching a massive seach space (the blockchain) in order to detect a previous "spend" of the same coin(s). So, let's imagine how a possible future sharding-based architecture of Bitcoin might look. We can observe that, in all cases of successful sharding solutions involving searching massive search spaces, the entire data structure is never stored / searched on a single machine. Instead, the DECOMPOSE and RECOMPOSE operations (and the coordination mechanism) a "virtual" layer or grid across multiple machines - allowing the data structure to be distributed across all of them, and allowing users to search across all of them. This suggests that requiring everyone to store 80 Gigabytes (and growing) of blockchain on their own individual machine should no longer be a long-term design goal for Bitcoin. Instead, in a sharding environment, the DECOMPOSE and RECOMPOSE operations (and the coordination mechanism) should allow everyone to only store a portion of the blockchain on their machine - while also allowing anyone to search the entire blockchain across everyone's machines. This might involve something like BUIP024's "address sharding" - or it could involve something like distributed trustless Merkle trees. In either case, it's easy to see that the basic data structures of the system would remain conceptually unaltered - but in the sharding approaches, these structures would be logically distributed across multiple physical devices, in order to provide virtually unlimited scaling while dramatically reducing resource requirements. This would be the most "conservative" approach to scaling Bitcoin: leaving the data structures of the system conceptually the same - and just spreading them out more, by adding the appropriately defined mathematical DECOMPOSE and RECOMPOSE operators (used in successful sharding approaches), which can be easily proven to preserve the same properties as the original system. Conclusion Bitcoin isn't the only project in the world which is permissionless and distributed. Other projects (BOINC-based permisionless decentralized [email protected], [email protected], and PrimeGrid - as well as Google's (permissioned centralized) MapReduce-based search engine) have already achieved unlimited scaling by providing simple mathematical DECOMPOSE and RECOMPOSE operations (and coordination mechanisms) to break big problems into smaller pieces - without changing the properties of the problems or solutions. This provides massive scaling while dramatically reducing resource requirements - with several projects attracting over 100,000 nodes, much more than Bitcoin's mere 6,000 nodes - without even offering any of Bitcoin's financial incentives. 
Although certain "legacy" Bitcoin development teams such as Blockstream / Core have been neglecting sharding-based scaling approaches to massive on-chain scaling (perhaps because their business models are based on misguided off-chain scaling approaches involving radical changes to Bitcoin's current successful network architecture, or even perhaps because their owners such as AXA and PwC don't want a counterparty-free new asset class to succeed and destroy their debt-based fiat wealth), emerging proposals from independent developers suggest that on-chain scaling for Bitcoin will be based on proven sharding architectures such as MapReduce and BOINC - and so we should pay more attention to these innovative, independent developers who are pursuing this important and promising line of research into providing sharding solutions for virtually unlimited on-chain Bitcoin scaling.
I finally added r/btc to my bookmarks and officially put it at the top of my bitcoin links. Excited about BIP 101.
I've been lurking here on and off whenever I see something that I think probably has a better discussion on btc or bitcoin_uncensored. But the size of these subreddits has always limited the enjoyment a bit for me, so I've continued to check bitcoin and likely will continue to do so going forward. However, reading about this BIP101 stuff has gotten me excited again about the possibility that we might finally have a solution to this thing. btc has been getting steadily better, and I've officially chosen it as my first go-to for Bitcoin news. So just saying "hey" I guess :)
Thoughts on day 1 of the Milan Scaling Bitcoin conference.
I just got back from day one of the conference. Everybody else went to a bar to drink and eat, but since I'm still jet lagged I came back to my hotel to get some rest. I was also at the "Satoshi's Vision" conference about 2 weeks ago, and the two conferences are very different in many ways. First off, the vibe here seems much more social. It seems that most people who are here are well connected socially. There are hardly any "loners" here. Contrast this to the Satoshi's Vision conference, where most people had never met anyone else there before. During the presentations, a lot of people seem to be not paying very much attention. A lot of people are on their computers and not really listening to the speakers. I also hear a lot of chatting amongst the attendees. At the Satoshi's Vision conference, I got the feeling most people were there to listen to the speakers, and social fraternizing was not at the top of anyone's reasons for being there. During the breaks, everyone would go outside to the snack tables and chat with each other. There is a lot of chatter. Everyone is talking with everyone else, and there is a high level of noise. When I say "noise" I don't mean people are being loud; the noise comes from so many people talking. After the break is over, people file back into the conference room, and the MC has to quiet down the crowd before the speakers can begin. "Ladies and Gentlemen, can I have your attention... We would like to get started... Can I have your attention... Please everyone get seated... Please take your seats, we are about to get started." I don't mean to be demeaning, but it kind of reminds me of school children coming inside after recess. It takes a while for people to calm down and get ready to listen; everyone's conversations continue until the MC can get everyone to quiet down. Maybe this is a function of the Scaling Bitcoin conference being much larger (almost 10x as many people are here as there were at Satoshi's Vision). The content of the presentations seems only loosely scaling-related. Maybe Day 2 will have more scaling content, but today there was hardly any. For some reason, there were a lot of talks on fungibility, and there was even a presentation on timestamping. What does timestamping have to do with scaling? It seems like these "Scaling Bitcoin" conferences are turning into general Bitcoin presentations, not strictly scaling-related ones. This is a stark contrast to the first Scaling Bitcoin conference, which was pretty much entirely blocksize limit proposals (BIP100, BIP101, BIP102, etc). Also, it seems that there is very little new information coming from the presentations. The LN talks sound like every previous talk on the LN. The sidechain presentation sounded like it could have been the exact same presentation given 3 years ago. Very little progress seems to have been made in these areas. At least at Hong Kong, Segwit was announced, which added an air of excitement. This time around there's hardly any excitement about a breakthrough. Overall, the day was pretty boring. I'm not particularly interested in making friends here, as I'm not interested in BTC for social reasons. I'm only here to feel the vibe and get a closer look at how everything plays out.
Let us Focus on Building the Freedom Loving Future We All Want
Following the latest proclamation by the usurpers of the social contract - who, without principle or legitimacy, claim to hold authority over our will - it has become clear to all that what once could proudly be called Core now stands as only a shell of its former self. Despite what Sauron, Saruman, Gollum, Grima and all their orcs claim, with no regard for truth, the facts speak for themselves. The small blockers have censored all communication channels - from /bitcoin to bitcoin.org to #bitcoin IRC - all of which are, by any principled analysis, outright gifts to the community as a whole by that genius Satoshi, and thus belong to all in freedom. No person of principle holds any legitimate right to engage in such vile and immoral outright censorship, as well as blatant manipulation by hiding scores and arbitrarily changing sorting to "controversial" - especially when our community is formed on the high principles of decentralisation and freedom, and even more so when all these communication channels were given as a gift. They have abused Satoshi's name without shame to try and appeal to some authority, while at all times refraining from engaging in rational discourse, from using analysis, from providing logic, from engaging with maths or science. At all times they have appealed solely to emotion - dehumanising through baseless accusations of trolling, raising the specter of the NSA and KYC to terrify the manipulable into submission, and, once so terrified, selling them centralised, non-proof-of-work, non-Bitcoin, insecure, unproven, untested and non-existent snake oil. They have engaged in criminal acts of DDoSing pools and nodes, and I unashamedly and polemically accuse Peter Todd and btcdrak of being the instigators of these direct attacks, for no one else has a motive to do so, and because: "The SPV attack is a good idea! Lets do it, and lets do it anonymously." http://pastebin.com/4BcycXUu https://bitcointalk.org/index.php?topic=335658.0 They have publicly considered changing the open source license of bitcoin, taking out lawsuits, attacking XT nodes - with such suggestions coming not from some junior dev, but from el presidente: "Maybe there should be a campaign to run "noXT" nodes... [m]aybe I should go run one and put my miners behind it. Or a pool offer it? Maybe one could upgrade bitcoin SPV nodes to automatically recognise and ignore XT nodes... [o]r someone suggested bitcoin nodes could refuse connections from XT. (Or maybe teergrube them to increase their orphan rate)." https://np.reddit.com/Bitcoin/comments/3hb63g/bip_suggestion_lock_the_blockchain_to_only/cu5v2u2 Above all, they have shown complete disregard for the entire economy, for its users, for Satoshi's ideas/suggestions and for Bitcoin itself, by engaging in unprincipled, unethical, illegitimate and at times even outright criminal acts. That much has been made clear. We need not, therefore, any longer beg them, reason with them, or try to engage or persuade them, for they have brought their intentions to light. It is, therefore, far better for us to focus on building the freedom loving future we all signed up for. Bitcoin has a certain design which can be deduced from the statements of Satoshi, especially the statements he made when it was first published and he was defending his invention against pointed criticisms by very smart men. Of importance, he stated bitcoin can scale to Visa levels, 0conf transactions can be pretty safe, and there will be either no transactions or a lot of transactions. Diverging from his design outright breaks bitcoin.
Bitcoin is not meant to be Tor. Tor is a fringe niche used by a tiny number of people in the grand scheme of things, and if you went to Times Square I would be surprised if even one person had heard of it. Zerocash is Tor. Bitcoin is Firefox: agile, smart, quick, user-friendly, privacy-conscious - all-round good, wholesome, usable and convenient. Bitcoin is not meant to be 100% decentralised. 1000 datacentres across jurisdictions, universities, businesses (think of it as becoming a necessary component of business life, like the internet or nowadays even Skype), researchers, hobbyists, even governments are sufficiently decentralised for the goal of non-government-issued, non-inflatable money. Bitcoin is not just gold. It's gold 2.0. It is money, it is gold, it is programmable record keeping - it is a million things we cannot even imagine, just as no one could imagine Google, Facebook, YouTube, Skype, Reddit. Bitcoin is freedom. We alone offer it. And we have the backing of the entire economy and people: "Bitcoin Wallet Software Providers Express Support for Block Size Increase:" http://cointelegraph.com/news/114530/bitcoin-wallet-software-providers-express-support-for-block-size-increase "Chinese Bitcoin Miners Support Increasing Blockchain Limit:" http://www.newsbtc.com/2015/06/17/chinese-bitcoin-miners-support-increasing-blockchain-limit/ "7 Leading Bitcoin Companies Pledge Support for BIP101 and Bigger Blocks" https://bitcoinmagazine.com/articles/7-leading-bitcoin-companies-pledge-support-bip101-bigger-blocks-1440450931 So let us work towards building the freedom loving future we all want. If you are a developer then contribute to XT, if you have the resources then run an XT node, if you are a miner then join an XT mining pool, if you are a researcher then provide evidence and analysis, and if you are a contributor then educate others with analysis and facts. There is nothing one can do against what is self-evidently true, for a decentralised bitcoin is an idea whose time has come.
Step-by-step instructions for how to rent hashing power and point it at pools mining XT blocks
Hi friends - These are step-by-step instructions for how to rent hashing power and point it at a pool that is working on mining XT blocks. You can think of this as an alternative to the big block bounty and block vote ideas; they're all ways of showing support for XT, though I personally think this approach is more interesting. It also makes for an actual increase in the XT-supporting hashing power on the network. If you're super-lucky, you may even end up with more bitcoin than when you started! ;) I am fairly new to this myself, so I would be very grateful to any knowledgeable people who can point out mistakes in these steps or suggest ways that they can otherwise be improved.

1) Go to NiceHash
2) Click 'Register'
3) Enter (and then confirm) your email address. You'll be prompted to create a password.
4) Go to Account > Wallet.
5) Create a 'Deposit BTC address'. Once this address exists, you can send bitcoin to it. These are the funds you'll use to rent the hashing power. The funds you send will show up as 'Pending' until the transaction is confirmed and a few blocks deep in the blockchain.
6) While you wait for the funds confirmation, you can set up your target pool. To do this, go to Account > Manage my pools.
7) In the 'Add new pool' box, you will need 4 pieces of information:
a. The IP address or hostname of the pool
b. The port number
c. Your username
d. Your password
a/b: The addresses and port numbers for the currently active XT pools can be found here
c: Your username is actually your own personal bitcoin address. This is where your share of the reward will be sent when your pool finds a block.
d: This can be anything you want.
Once you've entered those four pieces of information, you can click the 'Pool verificator' link and NiceHash will do a quick handshake with the pool to make sure everything checks out. If that goes well, click 'Add' to save the pool.
8) Once your funds have moved over to the 'Confirmed' box, you're ready to rock. Go to 'Orders'. This page shows the list of currently active hashing rental contracts.
9) In the Algorithm drop-down on the right, select 'SHA256' (this is the hashing algorithm that bitcoin uses).
10) To create a new order, click 'Standard' or 'Fixed' (What's the difference?). Again, you'll need to provide 4 pieces of information:
a. Select your pool from the dropdown. This will be the one you created in step 7.
b. Enter a price. This is how much you're willing to pay per "unit" for hashing power. This will be locked if you selected 'Fixed.' If you picked Standard, you can probably just leave the default since it will be set to the lowest rate that is currently viable.
c. Specify how much hashing power you want to buy (probably a good idea to start with the minimum 5 TH/s on your first go-around).
d. Specify how much you want to spend on this order. The more you spend, the longer the contract will run, but obviously this number has to be lower than the amount of confirmed funds in your wallet.
11) Click Create. That's it! You're helping to move the revolution forward! I hope this has been helpful. :)
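For the technically curious: the 'Pool verificator' in step 7 is presumably just a stratum handshake. Here is a hedged sketch of what such a reachability check might look like (the hostname is a placeholder, and this is my guess at the mechanism, not NiceHash's actual code):

    import json
    import socket

    def verify_pool(host: str, port: int, timeout: float = 10.0) -> bool:
        """Return True if the pool answers a stratum mining.subscribe request."""
        try:
            with socket.create_connection((host, port), timeout=timeout) as sock:
                request = {"id": 1, "method": "mining.subscribe", "params": []}
                sock.sendall(json.dumps(request).encode() + b"\n")
                reply = json.loads(sock.makefile().readline())
                return reply.get("error") is None
        except (OSError, ValueError):
            return False

    print(verify_pool("pool.example.com", 3333))  # placeholder host/port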
Bitcoin XT 0.11.0E will allow 2MB blocks, identical to Bitcoin Classic
A new Bitcoin XT release 0.11.0E will be available at nearly the same time as Bitcoin Classic. In this release, BIP101 support will be replaced with 2MB support, identical to Classic. Although many people worldwide supported BIP101, miners weren't ready for it. All other XT features are retained, such as double-spend relaying and bandwidth usage controls. Two new ones have recently been merged:
Thin Blocks propagation acceleration, completed by Dagur Johannsson. Works even with non-XT peers!
Deterministic fee-based mempool limiting, aligning with thin blocks and core policy
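For those curious how thin blocks save bandwidth, here is a hedged sketch of the general idea (my illustration, not the actual XT implementation): send the header plus transaction ids, and let the receiving peer rebuild the block from its own mempool, requesting only the transactions it is missing:

    def encode_thin_block(block):
        """Sender side: header plus tx ids instead of full transactions."""
        return {"header": block["header"],
                "txids": [tx["txid"] for tx in block["txs"]]}

    def decode_thin_block(thin, mempool):
        """Receiver side: rebuild from the mempool; report any missing tx ids."""
        missing = [txid for txid in thin["txids"] if txid not in mempool]
        if missing:
            return None, missing      # one extra round-trip fetches these
        txs = [mempool[txid] for txid in thin["txids"]]
        return {"header": thin["header"], "txs": txs}, []

    mempool = {"aa": {"txid": "aa"}, "bb": {"txid": "bb"}}
    block = {"header": "h1", "txs": [mempool["aa"], mempool["bb"]]}
    rebuilt, missing = decode_thin_block(encode_thin_block(block), mempool)
    assert rebuilt == block and not missing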
Longer term, future releases will depend on developer interest. XT could continue to be a vehicle for more experimental changes.
Some research on BIP101 starting with 4MB instead of 8MB (Very promising)
Date = Self explanatory.
BSL = Block Size Limit.
BS = Block Size.
TPS = Transactions per second.
MD = Monthly Data (The amount of upstream and downstream data used by the node per month)
BcS = Blockchain Size (Total size of the blockchain).
$ per PB = $USD per Petabyte of HDD storage (SSD price is roughly 10x more).
HDD Total = $USD cost of the storage used by the blockchain.
ABDL = Average available bandwidth for download.
ABUL = Average available bandwidth for upload.
%DL = Percentage of total download bandwidth being used by the node from the available download bandwidth.
%UL = Percentage of total upload bandwidth being used by the node from the available upload bandwidth.
UL per Block = Total upload data per block when connected to 8 peers.
PT = Propagation time to propagate to 8 peers.
NNLD = New node download time in days when using 50% bandwidth (the amount of time it would take a user to download the entire blockchain from scratch).
This is the data for BIP101 with a slight change and fast propagating blocks. The only change is to add another two years to the schedule and start at a 4MB limit instead of 8MB. The reason I did this is that, based on the rate of change of HDD/SSD prices and average available download and upload speeds, starting at 4MB should be enough to keep block propagation under 2 seconds through the entire schedule. This should be enough to mitigate any centralisation pressures. A key requirement of this data is that full blocks are not uploaded to each peer, but rather just the necessary information. According to Mike Hearn this would currently mean 70KB of upstream data per peer. I have used this as the basis for the data.

KEY INFORMATION
Storage cost for the blockchain hits a peak of $4.59 for a HDD and $45.90 for an SSD in the year 2020 and steadily goes down from there. It is safe to say that this is a negligible cost for a node.
Percentages of download and upload bandwidth stay at roughly 0.23% and 3% respectively. I think it is safe to say that this is a negligible amount for a node. It should be noted that for fast propagation 100% of the bandwidth will need to be used, but only for 2 seconds out of every (average) 10 minutes.
Propagation time is kept under 2 seconds for the duration of the schedule. It would be great to get some data on how 'good' 2 seconds is. To me this would seem to be sufficient.
The time to download the entire blockchain reaches a max of roughly 5 days when using 50% of the available bandwidth. While this is not ideal it is also not terrible. People wait a week to get a new bank card and weeks for a new internet connection. I think under a week without having your internet bandwidth used up too badly is pretty reasonable.
Transactions per second reaches Paypal average levels (115tps) during year 2022. TPS reaches Visa average levels (2000tps) during year 2030. TPS reaches (current) total global non-cash transaction levels of 12,357tps during year 2035. TPS reaches a max of 57,344tps with 8GB blocks in year 2040. With a growth rate of 7% per year 12,357tps reaches 67,067tps by 2040. This would mean that bitcoin would be able to handle almost 100% of the total global non-cash transactions.
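For readers checking the arithmetic: these milestones are consistent with a discrete-doubling reading of the schedule and the classic ~7 transactions per second per MB of block space. Both the 2018 start year and the 7 tps/MB figure are my assumptions (actual BIP101 interpolates linearly between doublings), so treat this as a rough sketch:

    # Modified BIP101 schedule as described above: start at 4 MB, double
    # every two years (discrete steps for brevity), capped at 8 GB.
    def block_limit_mb(year, start_year=2018, start_mb=4, max_mb=8192):
        doublings = max((year - start_year) // 2, 0)
        return min(start_mb * 2 ** doublings, max_mb)

    TPS_PER_MB = 7  # rule of thumb: one 1 MB block per 600 s

    for year in (2022, 2030, 2040):
        mb = block_limit_mb(year)
        print(f"{year}: {mb:>5} MB -> ~{mb * TPS_PER_MB:,} tps")
    # 2040: 8192 MB (8 GB) x 7 = 57,344 tps, matching the maximum above.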
DATA REFERENCES
Storage cost: I used a conservative log-linear trend. HDD costs have historically fallen by 10 times every 5 years. This has slowed in the last few years, but that is likely attributable to the flooding in South East Asia and the transition to SSD technology. SSD costs have also followed this same trend of cost falling 10 times every 5 years.
Average download bandwidth: I used a conservative log-linear trend of average download bandwidth doubling every two years.
Average upload bandwidth: I also used a conservative log-linear trend of average upload bandwidth doubling every two years.
Dr Peter R. Rizun, managing editor of the first peer-reviewed cryptocurrency journal, is an important Bitcoin researcher. He has also been attacked and censored for months by Core / Blockstream / Theymos. Now he has been *suspended* (from *all* subreddits) by some Reddit admin(s). Why?
Dr. Peter R. Rizun is arguably one of the most serious, prominent, and promising new voices in Bitcoin research today. He not only launched the first scientific peer-reviewed cryptocurrency journal - he has also consistently provided high-quality, serious and insightful posts, papers and presentations on reddit (in writing, at conferences, and on YouTube) covering a wide array of important topics ranging from blocksize, scaling and decentralization to networking theory, economics, and fee markets - including:
It was of course probably to be expected that such an important emerging new Bitcoin researcher would be constantly harassed, attacked and censored by the ancien régime of Core / Blockstream / Theymos. But now, the attacks have risen to a new level, where some Reddit admin(s) have suspended his account Peter__R. This means that now he can't post anywhere on reddit, and people can no longer see his reddit posts simply by clicking on his user name (although his posts - many of them massively upvoted with hundreds of upvotes - are of course still available individually, via the usual search box). Questions:
What Reddit admin(s) are behind this reddit-wide banishing of Peter__R?
What is their real agenda, and why are they aiding and abetting the censorship imposed by Core / Blockstream / Theymos?
Don't they realize that in the end they will only harm reddit.com itself, by forcing the most important new Bitcoin researchers to publish their work elsewhere?
(Some have suggested that Peter__R may have forgotten to use 'np' instead of 'www' when linking to other posts on reddit - a common error which subs like /btc will conveniently catch for the poster, allowing the post to be fixed and resubmitted. If this indeed was the actual justification of the Reddit admin(s) for banning him reddit-wide, it seems like a silly technical "gotcha" - and one which could easily have been avoided if other subs would catch this error the same way /btc does. At any rate, it certainly seems counterproductive for reddit.com to ban such a prominent and serious Bitcoin contributor.)
Why is reddit.com willing to risk pushing serious discussion off the site, killing its reputation as a decent place to discuss Bitcoin?
Haven't the people attempting to silence him ever heard of the Streisand effect?
Below are some examples of the kinds of outstanding contributions made by Peter__R, which Core / Blockstream / Theymos (and apparently some Reddit admin(s)) have been desperately trying to suppress in the Bitcoin community. Peer-Reviewed Cryptocurrency Journal
In case anyone missed it, Peter__R hit the nail on the head with this: "The reason we can't agree on a compromise is because the choice is binary: the limit is either used as an anti-spam measure, or as a policy tool to control fees."
"It's because most of them are NOT Bitcoin experts--and I hope the community is finally starting to recognize that" -- Peter R on specialists vs. generalists and the aptitudes of Blockstream Core developers
It is time to usher in a new phase of Bitcoin development - based not on crypto & hashing & networking (that stuff's already done), but based on clever refactorings of datastructures in pursuit of massive and perhaps unlimited new forms of scaling
Peter__R on RBF: (1) Easier for scammers on Local Bitcoins (2) Merchants will be scammed, reluctant to accept Bitcoin (3) Extra work for payment processors (4) Could be the proverbial straw that broke Core's back, pushing people into XT, btcd, Unlimited and other clients that don't support RBF
"My response to Pieter Wuille on the Dev-List has once again been censored, perhaps because I spoke favourably of Bitcoin Unlimited and pointed out misunderstandings by Maxwell and Back...here it is for those who are interested" -- Peter R
Obvious in hindsight: Consensus rules should've never been tied to dev teams
XT lobbed the ball, then BU knocked it out of the park. The reason XT pissed so many people off is because they had assumed Core's tying the blocksize settings to its trusted codebase was the only thing keeping the market of users from doing something stupid. Core effectively said to users, "You cannot use our trusted code without accepting the blocksize we decree." Like theymos, Core seeks to use its influential position to manipulate the emergence of consensus on what Bitcoin is - for the common good of course. By not making the blocksize settings configurable, Core arrogates to itself the power to dictate consensus parameters. Users have to mod the code if they want to change those parameters, which is a high enough hurdle to create a powerful Schelling point at 1MB or whatever Core decides. This is nice if you don't trust the market, though at the cost of concentrating ever more power in the Core team, opening Bitcoin to attack. XT wrecked this paradigm. No longer could Core rely on users trusting "the experts" to determine consensus settings like blocksize, because there were now two sets of experts, both with widely respected members in their camp. Not only that, XT made it so that users could use the trusted Core codebase* without having to follow Core's diktats on blocksize. Core's imperative of shepherding the masses to ensure they don't mess up had been undermined, and its power position was in some jeopardy. However, XT was not the full solution. XT was still stuck in the mindset of pushing the user into the XT devs' chosen blocksize settings. They still had illusions of controlling the consensus rules via the "wall of inconvenience" posed by locking down the blocksize cap (since many users are unable to mod the code to change it themselves). XT simply added a second option; the user was still forced to choose blocksize settings as a package deal with their choice of dev team:
Want big blocks but don't like Mike? You have to choose the lesser of two evils: tiny blocks, or big blocks and Mike.
Want small blocks but don't trust Blockstream? You have to choose the lesser of two evils: big blocks, or small blocks and Blockstream.
While superior to the situation before XT where the choice was Core@1MB or "roll your own and do your own testing" (a huge barrier to the establishment of a Schelling point around which consensus could coalesce), having only Core and XT as options for blocksize still left major friction in the market's selection process. The market still could not choose its ideal blocksize settings freely, as there were only two choices and they were tied to specific dev teams. It would be better to "let a thousand implementations bloom," people reasoned. This was surely the logical endpoint of the movement XT started. That way there would be free choice. The problem of those choices about blocksize being tied to the specific dev team would have to persist as a necessary evil, even if mitigated by the variety. Enter BU. By making blocksize settings user-adjustable, Bitcoin Unlimited untethers this controversial consensus parameter from the Core dev team's trusted software offerings (as well as from the XT dev team's offerings, once someone releases an "XT Unlimited" client). The fact that the blocksize cap is controversial, previously used by Core as an excuse to take no action and keep it locked down, is revealed as the very reason it must NOT be locked down. The fact that the blocksize cap is controversial is the very reason it is too dangerous to be packaged in with the Core software. BU's stance is that no one should have that power. That power throws a monkey wrench into the market's process of finding consensus by jiggering the results. The true logical endpoint of the movement started by XT is Bitcoin Unlimited's approach of unbundling the consensus rules from each dev team's offerings. If Core won't do it in its own releases, someone else will do it for them, draining power away from Core. That is fine with me, but Core may want to consider the futility of their approach if they have any interest in maintaining their dominant position. In his classic presentation Silicon Valley's Ultimate Exit, Balaji Srinivasan says,
If a company or a country is in decline, you can try voice, or you can try exit. Voice is basically changing the system from within, whereas exit is leaving to create a new system, a new startup, or to join a competitor sometimes. Loyalty can modulate this; sometimes that's patriotism, which is voluntary, and sometimes it's lock-in, which are involuntary barriers to exit.
Gavin, Mike, and Jeff tried voice. Then they tried exit, via XT. They ran into loyalty, inertia, and trust issues as forms of lock-in, but as fees rise, transaction traffic jams mount, and altcoins rise menacingly, the lock-in at Core would likely be overwhelmed. Core would hemorrhage users as people jumped reluctantly to XT. However, there is an easy way for Core to prevent this: stop barring users from selecting blocksize parameters. Let them choose their favorite BIP, at least. That way the market can come to a consensus independently of Core, without users having to leave Core if their only beef is with the blocksize settings. If Core is right that they are giving the market what it wants, they should have no qualms about doing this. If Core refuses to unlock its blocksize settings, BU and projects like it will steal users away with impunity. And once those users are gone - who knows - they may find other reasons Core is not as trustworthy as they thought. BU is the ball leaving the park, the horse leaving the stable, and a shot across Core's bow on behalf of the Bitcoin community.

*While XT has some other changes that have been controversial, there is also a "big blocks only" version that is just Core+BIP101.
With BIP101, the blocksize scaling proposal submitted by Gavin Andresen, growth is predictable in much the same way block rewards are scheduled: roughly in the year 2024 the block reward will be 3.125 BTC per block, and the block size limit would be 128MB, allowing for roughly 640 transactions per second assuming an average of 350 bytes per transaction.
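For the curious, those figures check out. Here is a minimal sketch verifying them, assuming BIP101's schedule (8MB at activation in 2016, doubling every two years), the standard four-year reward halvings, ten-minute blocks, and binary megabytes; with decimal megabytes the throughput comes out nearer 610 tx/s.

    #include <cstdint>
    #include <cstdio>

    int main()
    {
        // Block subsidy halves every four years:
        // 50 -> 25 (2012) -> 12.5 (2016) -> 6.25 (2020) -> 3.125 (2024).
        double reward = 50.0;
        for (int year = 2012; year <= 2024; year += 4)
            reward /= 2.0;

        // BIP101: 8MB at activation in 2016, doubling every two years,
        // so 2024 sits four doublings later: 8 -> 16 -> 32 -> 64 -> 128.
        uint64_t limitMB = 8;
        for (int year = 2018; year <= 2024; year += 2)
            limitMB *= 2;

        // Throughput: bytes per block / bytes per tx / seconds per block.
        double tps = limitMB * 1024.0 * 1024.0 / 350.0 / 600.0;

        printf("2024: %.3f BTC reward, %lluMB limit, ~%.0f tx/s\n",
               reward, (unsigned long long)limitMB, tps);
        // Prints: 2024: 3.125 BTC reward, 128MB limit, ~639 tx/s
        return 0;
    }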